The Curse of the Secret Question

I was at Bruce Schneier's site reading about the problems with SHA-1 and came across this perfectly articulated gem:

It's happened to all of us: We sign up for some online account, choose a difficult-to-remember and hard-to-guess password, and are then presented with a “secret question” to answer. Twenty years ago, there was just one secret question: “What's your mother's maiden name?” Today, there are more: “What street did you grow up on?” “What's the name of your first pet?” “What's your favorite color?” And so on.

The point of all these questions is the same: a backup password. If you forget your password, the secret question can verify your identity so you can choose another password or have the site e-mail your current password to you. It's a great idea from a customer service perspective — a user is less likely to forget his first pet's name than some random password — but terrible for security. The answer to the secret question is much easier to guess than a good password, and the information is much more public. (I'll bet the name of my family's first pet is in some database somewhere.) And even worse, everybody seems to use the same series of secret questions.

The result is the normal security protocol (passwords) falls back to a much less secure protocol (secret questions). And the security of the entire system suffers.

What can one do? My usual technique is to type a completely random answer — I madly slap at my keyboard for a few seconds — and then forget about it. This ensures that some attacker can't bypass my password and try to guess the answer to my secret question, but is pretty unpleasant if I forget my password. The one time this happened to me, I had to call the company to get my password and question reset. (Honestly, I don't remember how I authenticated myself to the customer service rep at the other end of the phone line.)

Which is maybe what should have happened in the first place. I like to think that if I forget my password, it should be really hard to gain access to my account. I want it to be so hard that an attacker can't possibly do it. I know this is a customer service issue, but it's a security issue too. And if the password is controlling access to something important — like my bank account — then the bypass mechanism should be harder, not easier.

Passwords have reached the end of their useful life. Today, they only work for low-security applications. The secret question is just one manifestation of that fact.
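Schneier's keyboard-mashing technique can be automated so the "answer" is genuinely unguessable. A minimal sketch in Python (the function name and the 24-byte length are my own choices for illustration, not anything from the essay):

```python
import secrets

def random_secret_answer(length_bytes: int = 24) -> str:
    """Generate an unguessable answer for a 'secret question' field.

    The result is thrown away after use, so the question can never be
    exploited as a weak backup password.
    """
    return secrets.token_urlsafe(length_bytes)

answer = random_secret_answer()
print(answer)  # a different random string every call
```

Of course, this deliberately closes the back door: forget the real password and you are back on the phone with customer service.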

Why Identity is Part of the Picture

Responding to my comments on Patrick Keefe's Village Voice “Darknet” piece, Todd Dailey writes:

“If your point is that better identity management would prevent phishing and other end-user identity theft attacks, I agree. However most of the techniques described in the article point to the need for better security, such as firewalls, virus protection, and software updates, not the need for better identity management. The only way identity management would solve this problem is if you had to identify yourself in some secure way before you were able to use the internet, perhaps a global 802.1x network. I think that's still a little way off. :)”

I had said that Keefe's contention – that the machines of unsuspecting consumers are being hijacked by sinister forces –

“… speaks directly to the urgency of the need for an identity system for the Internet: an identity system that people fully understand and are willing to buy into because it is designed in accordance with the laws of identity.”

Now I agree that fixing these problems requires better “firewalls, virus protection and software updates”. But what software is safe, and what isn't? Isn't identity required here – identity mechanisms that are understandable (i.e. in keeping with the sixth law, where the three-foot channel between the computer and the individual's brain is a reliable one)? And exactly who should be allowed in through firewalls? So, solving this problem goes beyond ascertaining the identity of the computer user. It involves knowing the identity of organizations, and of the products they produce. It also includes various important intersections.

Multiple Intersecting Identities

As a user, for example, I should be able to access my contact list. Since I use Outlook for mail, Outlook should be able to access my contact list when I am using it. But some attachment I download through Outlook shouldn't be able to access it.

There are many identities that need to work together in a harmonious system if we want to nail this scenario – my identity as a user of a computer, Microsoft's identity as a supplier of the software I use, Outlook's identity as a specific Microsoft product, the identity of my Contact List, and that of some policy which hooks them all together. And we need the right ways to “reify” these identities so they are easily understood.
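One way to picture the policy that hooks these identities together is a rule keyed on the whole chain – user, software vendor, application, resource. A toy sketch (the names and the tuple-based policy format are purely illustrative, not any real Microsoft mechanism):

```python
# A policy maps (user, vendor, application) -> the set of resources it may touch.
POLICY = {
    ("kim", "Microsoft", "Outlook"): {"contact_list", "mailbox"},
}

def may_access(user: str, vendor: str, application: str, resource: str) -> bool:
    """Grant access only when the whole identity chain matches the policy."""
    return resource in POLICY.get((user, vendor, application), set())

# Outlook, acting for Kim, can read the contact list...
assert may_access("kim", "Microsoft", "Outlook", "contact_list")
# ...but an unknown attachment running as Kim cannot.
assert not may_access("kim", "unknown", "attachment.exe", "contact_list")
```

The point of the sketch is that no single identity in the chain is sufficient on its own; access follows from their intersection.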

Specific is Good

The idea of having some “secure identity” before gaining access to the network won't in itself keep sinister forces at bay (identities can be stolen or purchased). The best way to protect a resource is by making it necessary to have not only “some identity”, but a very specific identity. Then the only way for a sinister actor to obtain access to the resource is to procure one of the very specific identities which are able to access the resource. Doing this requires knowing what the specific set of identities is. The combined effect is a very high barrier.

Extrapolating a bit further, we need to get to the point where the only way you can get to resources on internet machines is to have the very specific identities which open those very specific resources. This approach, combined with the security measures you talk about, is the only road to progress on these problems.

What stands in our way?

Outside of the enterprise, current identity systems are too hard to deploy. They are too hard to understand. And too hard to use. The different systems exist in silos, making everything harder still, and the number of silos is likely to increase. Many people feel the only way to get anything done quickly is to turn protection off – maybe with the intent of turning it on later… But if you forget, there is no way to know what you've left undone or who can access what.

All of this needs to be fixed. At the center of everything is the construction of a unifying and easily used identity system.

Two Big Issues

Whispers of Probing Mind points out that while the Brittan School District may be the first in California to use RFID tags for children, it is not the first in the US or around the world:

November 18, 2004: Suburban Houston school district is tagging 28,000 students with RFID-equipped ID badges that are read when children get on and off school buses. The children's locations are automatically sent wirelessly to police and school administrators. School officials say the $180,000 system was enthusiastically supported by parents as a school safety measure. We're guessing the kids haven't yet hired ACLU or EFF lawyers.

In Japan, schoolkids were tagged with RFID chips on a larger scale.

July 12 2004: The rights and wrongs of RFID-chipping human beings have been debated since the tracking tags reached the technological mainstream. Now, school authorities in the Japanese city of Osaka have decided the benefits outweigh the disadvantages and will now be chipping children in one primary school.

The tags will be read by readers installed in school gates and other key locations to track the kids’ movements. The chips will be put onto kids’ schoolbags, name tags or clothing in one Wakayama prefecture school. Denmark's Legoland introduced a similar scheme last month to stop young children going astray.

Again, from my point of view there are two issues here – consent (law 1) and omnidirectionality (law 4).

James Kobelius argues that by sending your child to a school you consent to the way that school is run, and that informing parents about use of RFID is basically a formality. This argument touches on the relationship between societies, individuals and their children's schools – issues which are far beyond the scope of this blog. My point here is simply that one way or another, consent is required, or there will be a ruckus which undermines the success of the system. For the system to succeed, consent should be as clear as possible. In the California incident, a number of parents did not feel they had given their consent, so the consent was not clear. I am very curious to see whether the system will recover from this.

Given my interests, I generalize from this whole experience: When trying to build a successful system of identity for the Internet, let's all agree to make this kind of dynamic a thing of the past by ensuring that above all, the users of the system are in firm control of it.

In terms of omnidirectionality, I very much suspect that the children in all these cases wear their tags home. And that the tags are omnidirectional, capable of being energized by any compatible reader employed by any stranger. I believe we need to nip this in the bud. If children are to be tagged, the devices employed should refuse to respond except to readers run by parties known and approved by their parents. The identity of the party monitoring the children is at least as important as the identity of the children. We are capable of building such systems for use in protecting our children, and don't have to fall back to technologies suitable for boxes of cereal.
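The “respond only to approved readers” behaviour is essentially reader authentication: the tag issues a challenge, and only a reader holding a parent-approved key can answer it. A minimal challenge-response sketch (the key handling and message shapes here are invented for illustration; a real tag protocol would need far more care):

```python
import hashlib
import hmac
import os
from typing import Optional

TAG_KEY = os.urandom(16)  # shared only with parent-approved readers

def tag_challenge() -> bytes:
    """Tag emits a fresh random nonce instead of its identifier."""
    return os.urandom(8)

def reader_response(key: bytes, nonce: bytes) -> bytes:
    """An approved reader proves knowledge of the key."""
    return hmac.new(key, nonce, hashlib.sha256).digest()

def tag_reply(nonce: bytes, response: bytes, child_id: str) -> Optional[str]:
    """Tag reveals the child's ID only to a reader that answered correctly."""
    expected = hmac.new(TAG_KEY, nonce, hashlib.sha256).digest()
    if hmac.compare_digest(expected, response):
        return child_id
    return None  # a stranger's reader gets nothing

nonce = tag_challenge()
assert tag_reply(nonce, reader_response(TAG_KEY, nonce), "student-42") == "student-42"
assert tag_reply(nonce, reader_response(os.urandom(16), nonce), "student-42") is None
```

In other words, the direction of identification is reversed: the monitoring party must identify itself to the tag before the tag identifies the child.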

Vodafone's Future Vision Site

A reader of yesterday's piece on bodynets suggested checking out this Vodafone site, which is a must-see for the identity aficionado. It's superbly put together, although at one point I got trapped in Vincenzo's incredibly messy bedroom as he played, if you can believe this, a Mediterranean version of “This is my dog” to a Mitch Miller-like bouncing ball reborn on a foldable organic screen. But many of the scenarios are very concrete and believable.

This world is lush with communicators sensing your digital ID and adjusting all aspects of your environment in cahoots with your visual bracelet, a kind of wrappable cellphone that filters incoming events on your behalf. It is Eric Norlin's polycomm scenario gone Hollywood, with privacy issues galore. All in all, a great accomplishment.

Of course we have a lot of work to do in figuring out the implications of the laws of identity for these scenarios. I wonder if Vodafone has a paper on these issues?

Bodynets

From Scott C. Lemon, this intriguing post:

Funny what you find on the net! While reading through some links related to wearable computer research I came across this great page with some thoughts by Ana Viseu about “bodynets” and Identity. Besides the fact that I really like the look of the web site, I like this train of thought:

Identity, loosely defined as the way we see and present ourselves, is not static. On the contrary, identity is primarily established in social interaction. This interaction consists, in its most basic form, of an exchange of information. In this information exchange individuals define the images of themselves and of others. This interaction can be mediated – through a technology, for example – and it can involve entities of all sorts, e.g., an institution or a technology. I am investigating this interaction through the study of bodynets.

Bodynets can be thought of as new bridges or interfaces between the individual and the environment. My working definition of a bodynet is: A body networked for (potentially) continuous communication with the environment (humans or computers) through at least one wearable device – a computer worn on the body that is always on, ready and accessible. This working definition excludes implants, genetic alterations, dedicated devices and all other devices that are portable but not wearable, such as cell phones, smart cards or PDAs.

Besides the matters related to identity, bodynets also raise serious issues concerning privacy, which in turn feed back into identity. Bodynets are composed of digital technologies, which inherently possess tracking capabilities; this has major privacy implications.

If you like this, continue reading … there is a lot of additional material. Whenever I see the University of Toronto, I have to guess that Steve Mann is involved. These are all important directions to look at.

I couldn't agree more.

The Internet as a Two-Edged Sword

There is a thought-provoking piece by Patrick Radden Keefe in the Village Voice about the “darknet”. If that's a new term for anyone, Keefe says that:

“In 2002 four Microsoft engineers published a paper in which they coined the term the “darknet.” This was essentially an extensive and opaque Internet black market, ‘not a separate physical network but an application and protocol layer riding on existing networks…'”

Keefe goes on to look at the relation between the darknet and terrorism:

“The dark regions of the Internet have allowed Al Qaeda to reconstitute itself as a virtual terrorist group, one that is beginning, through its masterful distribution of propaganda, to resemble not so much an organization as a movement, and one that has used America's accelerated rate of technological growth to its own advantage…

“Bin Laden associates employ cutting-edge steganography, which involves implanting a text message into a single image or letter on a website. “
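Keefe's description of steganography – hiding a text message inside an image – can be made concrete with the classic least-significant-bit technique. A toy sketch operating on a bare list of pixel values (real steganographic tools are far more sophisticated and harder to detect; this only illustrates the principle):

```python
def embed(pixels, message: bytes):
    """Hide the message's bits in the least significant bit of each pixel."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(pixels), "image too small for message"
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract(pixels, length: int) -> bytes:
    """Recover `length` bytes from the pixels' least significant bits."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

cover = list(range(200))            # stand-in for grayscale pixel data
stego = embed(cover, b"attack at dawn")
assert extract(stego, 14) == b"attack at dawn"
```

Each pixel value changes by at most one, so the carrier image looks untouched to the eye – which is exactly what makes the technique attractive to those who want to hide in plain sight.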

He argues government agencies are ill-prepared to deal with these threats, that Internet users are unwitting accomplices, and that Internet technology, which promised so much “good”, is a two-edged sword:

“If American forces are unaccustomed to pursuing adversaries through the caves of Afghanistan or the streets of Baghdad, they will have even more trouble tracking Al Qaeda online, because Internet technology favors the fugitive criminal and the migrant threat, and because terrorists know how to turn the new digital divide to their advantage. In this evasive game they have at their disposal a most unusual accomplice: unwitting Americans with personal computers and Internet connections…

“What's… unsettling is that American computer users may assist in this growth phase for Al Qaeda.”

The article keeps coming back to the idea that to escape detection, terrorists hijack legitimate resources left vulnerable because people don't understand how to protect them. And this, of course, speaks directly to the urgency of the need for an identity system for the Internet: an identity system that people fully understand and are willing to buy into because it is designed in accordance with the laws of identity.

Just for the record, while the concept is key, I'm not a fan of the word “darknet”. I think we can do better than that. The dark-light dichotomy is too last-century.

Active versus Passive RFID Tags

Mark Wahl has written to clarify the difference between the ScanPak RFIDs mentioned in my earlier piece and typical passive tags. He says the ScanPak press release mentions they have a read range of up to 200 meters and that the RF tag is “powered by two parallel lithium coin cells.”

These are active tags: they contain their own power source. In contrast, a passive tag contains no battery; it obtains its power from the reader, and generally could not be reliably read over such a distance. A tag intended to be read only from a few well-defined locations, such as when passing through a doorway, or a tag intended to be attached to a low-value consumer item, would likely be a passive tag.
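The range limit on passive tags follows from simple physics: the tag must harvest enough power from the reader's field to wake up at all. A back-of-the-envelope free-space Friis calculation (all the numbers – 4 W EIRP, 915 MHz, a −18 dBm tag wake-up threshold – are typical published figures I've assumed, not ScanPak's or anyone else's specs):

```python
import math

def passive_range_m(eirp_w: float, freq_hz: float, tag_sens_dbm: float,
                    tag_gain: float = 1.0) -> float:
    """Max distance at which a passive tag can power up (free-space Friis)."""
    wavelength = 3e8 / freq_hz
    p_threshold_w = 10 ** (tag_sens_dbm / 10) / 1000  # dBm -> watts
    # Friis: P_tag = EIRP * G_tag * (wavelength / (4*pi*r))^2, solved for r.
    return (wavelength / (4 * math.pi)) * math.sqrt(eirp_w * tag_gain / p_threshold_w)

r = passive_range_m(eirp_w=4.0, freq_hz=915e6, tag_sens_dbm=-18)
print(f"forward-link limit: {r:.1f} m")  # on the order of ten metres
```

Note that this is only the forward link; hearing the tag's faint backscatter at the reader is a separate problem, which is precisely where the more sensitive receivers Blossom describes below come in.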

Thanks for the heads up, Mark. It looks like hackers will have to get up out of the Food Court and head right on in to the stores.

Still, experts are routinely quoted as saying the range of passive RFID devices is being significantly extended by new reader and antenna technology. For example:

But what about a more powerful RFID reader, created by criminals or police who don't mind violating FCC regulations? Eric Blossom, a veteran radio engineer, said it would not be difficult to build a beefier transmitter and a more sensitive receiver that would make the range far greater. “I don't see any problem building a sensitive receiver,” Blossom said. “It's well-known technology, particularly if it's a specialty item where you're willing to spend five times as much.”

You can build quite a transceiver for the price of a Comme des Garçons outfit.

And strangely, the RFID components used by the Brittan School District were the size of a “roll of dimes” – meaning they could easily be active tags.

I Wish All Taxonomies Were This Amusing

Chris Ceppi has picked up and extended an interesting piece by Stefan Brands where he uses a transportation analogy to classify personal digital identity systems such as FOAF and LID as bicycles whereas SAML and Liberty are jet planes. Chris goes on to say:

Under this taxonomy, I see LID as a unicycle – novel, but impractical and limited to people with a very specialized set of skills. As has been dissected in numerous other places (most expertly at Burningbird), LID's dependence on URLs as an identifier misses the mark in a number of ways – like a unicycle, LID is just not a useful way to get around.

SXIP would then be a Cessna – complex enough (with its hosted identities and 3rd party assertions) that you need a pilot's license to use it, but not rigorous enough for a broad set of air travel requirements (e.g. SXIP is not based on standards).

SAML and Liberty as they have currently been implemented might be considered the space shuttles of identity.

I'm not sure I buy this taxonomy – I think several of the systems have a lot to offer – but it is really amusing. And I do buy Chris’ conclusions – we have work to do in getting to a unifying metasystem.

Back to you, William

William Heath of Ideal Government has been thinking and talking with colleagues in the United Kingdom about what we have called the Law of Control:

Technical identity systems MUST only reveal information identifying a user with the user's consent.

He writes:

Kim's laws (as well as Liberty Alliance and the state-of-the-art identity debate) take shape in a crucible of US-based entrepreneurial creativity. This is principally and primarily business and consumer focussed. Just like every other aspect of IT it needs a bit of a stretch and a rethink when we come to apply it to public services.

Imagine we get arrested (for a crime of conscience, e.g. deliberate trespass on a foreign military base). We don't control the process as our identity details are taken by the police and passed to court to prison to probation services. Yet we may accept collectively that institutions within a democratically elected government have the right to do this to one of us. In this sense “collective consent” (or just “consent”) might be a closer expression of what we mean than “control”. So I'm not entirely comfortable with it being called the law of control.

I'm aware of the inevitable limitations of our perspective, although I confess to having many friends and collaborators in the public service. My limitations make me deeply interested in the perspectives of people like William, so I look forward to reaching a mutual understanding on these issues.

William is discussing the relations between the individual and the institutions of democracy, which operate just as he describes, and owe their endurance to deep collective consent.

I'm not sure what this has to do with the Law of Control, which discusses the relation between the computer user and her technical identity system.

Let's leave the name aside for a moment, and concentrate on the content of the law itself.

Would those in the public services rather have it read, “Technical identity systems MUST only reveal information identifying a user with the user's consent – or that of the state”? And if not this formulation, what would they like to see expressed?

I think one way to look at it is to say that the individual controls her identity system – even if under certain circumstances the state may control the individual.
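That reading of the law can itself be put in code: the identity system holds the user's claims, but releases nothing without a consent record for the specific relying party. A minimal sketch (the class and its data structures are mine, purely illustrative):

```python
class IdentitySystem:
    """Releases identifying claims only with the user's recorded consent."""

    def __init__(self):
        self._claims = {}    # user -> {claim_name: value}
        self._consent = {}   # user -> set of approved relying parties

    def store(self, user, claims):
        self._claims[user] = dict(claims)

    def grant_consent(self, user, relying_party):
        self._consent.setdefault(user, set()).add(relying_party)

    def release(self, user, relying_party):
        if relying_party not in self._consent.get(user, set()):
            raise PermissionError("no consent recorded for " + relying_party)
        return self._claims.get(user, {})

ids = IdentitySystem()
ids.store("alice", {"email": "alice@example.com"})
ids.grant_consent("alice", "shop.example")
assert ids.release("alice", "shop.example") == {"email": "alice@example.com"}
```

Whether the state can compel the user to call `grant_consent` is a question about the state and the individual – not about the technical identity system, which simply enforces whatever consent exists.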

But I am open to the idea that there is more to it than this, and am waiting to hear what William has in mind.

"Far Out"… of Compliance


Jamie Lewis has caught a good one here:

According to this story on SFGate.com, the Brittan School District — a small district in California — in January began requiring all students to wear RFID-enabled badges that monitor their whereabouts on campus. The district has 587 kindergarten through eighth-graders who now have the privilege of being “the first public school kids in the country to be tracked on campus by such a system.” The story says the system “is designed to ease attendance taking and increase campus security.” The school district did this without involving the parents, many of whom are now raising a ruckus. How many ways does this system violate Kim's laws of identity?

It's strange – I was just catching up on RFID progress myself… But this is a really nutso development. Do you think one day products will need to carry a tag that says ‘Compliant with the laws of identity’? That would sure cut down on embarrassing public pronouncements.

Of course, we know that the reaction of the outraged parents was totally predictable through the first law of identity (which states that people will tend to reject identity systems which do not obtain consent about the release of identity information). There has so far been no explicit reaction to the improper use of omnidirectional identifiers (an equal or worse offense in Identity Court), but that seems to be because criminals have just begun to take advantage of the technology. Those of us who think about this know it is only a matter of time before we witness some very bad outcomes.

“It's baffling why so many people are bothered by the district being able to tell them where their kids are at,” said Tim Crabtree, a high school teacher who said he hoped the technology would come to his classroom.

I like the word ‘baffling’ as used here.

Seven classrooms were equipped with the readers, as were two bathrooms. The bathroom readers were never turned on, according to school and company officials, and were removed Wednesday by InCom because of objections by parents.

Yes, bathrooms are very important. Of course administrators often fit them with sensors and never turn them on.

InCom has also disabled its system and deleted data it has collected to date. Readers have been turned off until the board reaches a decision next week.

I can hardly wait to see what the outcome will be. The RF readers have been turned off – but not the tracking badges themselves, which I assume continue to emit omnidirectional “public” identifiers when queried.

Developers of the system say parents concerned over privacy violations don't understand the short range of radio frequency identification devices.

“The tags physically can't be read from a long distance,” said Doug Ahlers, an InCom partner.

I wonder what distance the developers are quoting. It wouldn't be 15 feet by any chance, would it? Seems like not many people follow radio technology and advanced antenna design these days.

I would like to brainstorm with the InCom partners about what could be done to bring their system into compliance with the laws of identity. If anyone knows them, why not introduce us?