Feeling fragmented?

Here is one of the best comments on digital identity I have ever seen.  Francis Shanahan makes it crystal clear how overly fragmented our online identities really are.  But he also shows perfectly that it would make no sense to try to compress all aspects of ourselves into a single context.  This is exactly the contradiction the Identity Metasystem is intended to resolve.

I love the diagram because it is so concrete – click on it to make it readable.  At the same time, it avoids over-simplification.  Francis has captured a lot about the yin and yang of identity, and I hope this diagram becomes an emblem for us.

“A few weeks ago I joined Facebook (after much resistance). Facebook sucks you in, making it so easy to give up bits of information about yourself, many times without even realizing it. It occurred to me that I'm leaving pieces of my identity everywhere.

“Last night I took a stab at listing out the various entities that know me, regardless of how they know me. The list is overwhelming. It quickly became apparent that to develop a comprehensive list was not feasible. What I ended up with was a good all around representation. I then generalized it to include things not solely pertaining to me as an individual (e.g. I'm an immigrant, I can never have govt clearance).

“With all the talk of identity and claims federation, this was a good way to step back and at least understand the problem space a little better. I'm sure there are other such diagrams out there but the benefit for me was to go through the process of drawing it rather than take one off the shelf.

“Here's the diagram, it turns out there are bits of us EVERYWHERE!!!  Click for a larger view.”


Display Tokens in Information Cards

I recommend an interesting exchange between Citi's Francis Shanahan, Microsoft's Vittorio Bertocci, and University of Wisconsin's Eric Norman.  Here's the background.  Francis has spent a lot of time recently looking in depth at the way CardSpace uses WS-Trust, and even built a test harness that I will describe in another piece.  While doing this he found something he thought was surprising:  CardSpace doesn't show the user the actual binary token sent to a relying party – it shows a description crafted by the identity provider to best communicate with the user.

(The behavior of the system on this – and other – points is documented in a paper Mike Jones and I put together quite a while ago called “Design Rationale Behind the Identity Metasystem Architecture“.  There is a link to it in the WhitePapers section on my blog.)
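To make the distinction concrete, here is a minimal sketch in Python.  It is purely illustrative: the field names, the HMAC signing and the issue function are stand-ins of my own, not the CardSpace wire format or API.  The point is simply that the identity provider hands back two things: an opaque signed token destined for the relying party, and a human-readable display token destined for the user.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-signing-key"  # stand-in for the identity provider's real key

def issue(claims):
    """Illustrative token response: an opaque signed token for the relying
    party, plus a separate human-readable display token for the user."""
    token = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, token, hashlib.sha256).hexdigest()
    return {
        # What the relying party receives: opaque bytes it can verify,
        # never meant to be read by the user.
        "token": token,
        "signature": signature,
        # What the identity selector shows the user: descriptions of the
        # same claims, crafted by the identity provider to communicate clearly.
        "display_token": [
            {"claim": "givenname", "tag": "Given name", "value": claims["givenname"]},
            {"claim": "ageover18", "tag": "Age over 18", "value": claims["ageover18"]},
        ],
    }

response = issue({"givenname": "Alice", "ageover18": "Yes"})
for dc in response["display_token"]:
    print(dc["tag"], ":", dc["value"])
```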

Francis has his priorities straight, given the way he sees things:

I'm not sure how to solve this. I'm not sure if it's a fault inherent in the Identity Meta-system or if it's just a fact of life we have to live with.

I would never want to put the elegance of a meta-system design and accommodation of potential future token types ahead of supporting Law #1.

There is no question that elegance of design cannot be pitted against the Laws of Identity without causing the whole design to fail and any purported elegance to evaporate.

So, clearly, I think that the current design delivers the user control and consent mandated by the First Law of Identity. 

Let's start by looking at the constraints from a practical point of view.  With an auditing identity provider, one of the defining characteristics of the system is that the provider knows the identity of the relying party.  Think of the consequences.  The identity provider is capable of opening a back channel to any relying party and telling it whatever it wants to.  In fact, from a purely technical point of view, the identity provider can just broadcast all the information it knows about you, me and all our activities to the entire world! 

We put trust in the identity provider when we provide it with information that we don't want universally known.  And more trust is involved when we accept “auditing mode”, in which the identity provider is able to help protect us by seeing the identity of the party we are connecting with (e.g. during a banking transaction).

Should we conclude that the existence of scenarios requiring auditing means the laws aren't “laws”?

“Going back to my original question which was “Does the DisplayToken violate the First Law of Identity?” I am not convinced it does. What I think I am discovering is that the First Law of Identity is not necessarily enforced.

“For me, being Irish Catholic (and riddled with guilt as a result) I take a very hard-line approach when you start talking about “Laws”. For example, I expect the Law of Gravity to be obeyed. I don't view it as a “Recommendation for the Correct Implementation of Gravity”…

The point here is that when the user employs an auditing identity provider, she should understand that's what she is doing. 

While we can't then prevent evil, we can detect and punish it.  The claims in the token are cryptographically bound to the claims in the display token.  The binding is auditable.  Policy enforcers can therefore audit whether the human-readable claims, and the associated information policy, convey the nature of the underlying information transfer.  

This auditability means it is possible to determine if identity providers are abiding by the information policies they claim they are employing.  This provides a handle for enforcing and regulating the behavior of system participants.
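How might an auditor use that binding?  Here is a hedged sketch, again in Python, with a made-up HMAC binding standing in for whatever cryptographic mechanism a real deployment would use: the auditor checks that the token really came from the issuer, and that every human-readable display claim matches a claim actually present in the underlying token.

```python
import hashlib
import hmac
import json

def audit(token, signature, display_claims, issuer_key):
    """An auditor's check: (1) the signature binds the issuer to the token,
    and (2) every human-readable display claim matches a claim that is
    actually present in that token."""
    expected = hmac.new(issuer_key, token, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # token was not issued, or was altered, by this issuer
    claims = json.loads(token)
    return all(dc["value"] == claims.get(dc["claim"]) for dc in display_claims)

# Tiny worked example; all values are made up for illustration.
issuer_key = b"demo-issuer-signing-key"
token = json.dumps({"givenname": "Alice", "ageover18": "Yes"}, sort_keys=True).encode()
signature = hmac.new(issuer_key, token, hashlib.sha256).hexdigest()
display = [
    {"claim": "givenname", "tag": "Given name", "value": "Alice"},
    {"claim": "ageover18", "tag": "Age over 18", "value": "Yes"},
]
print(audit(token, signature, display, issuer_key))   # True: display matches the token
print(audit(token, signature,
            [{"claim": "ageover18", "tag": "Age over 18", "value": "No"}],
            issuer_key))                               # False: display misrepresents a claim
```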

We've spoken so far about “auditing” identity providers.  The system also supports “non-auditing” providers, which do not know the identity of the relying party.  In this case, a back channel is not possible.  Auditing the accuracy of the display token is still possible, however.

There is also an option for going even further, through the use of “minimal disclosure tokens”.  In such a system, the user can have an identity provider that she operates, and which submits her claims to a service for validation.  In this architecture, the user can really be guaranteed that there are no back channels or identity-provider injected goop.

Again, we are brought to understand that the identity metasystem spans a whole series of requirements, use cases and behaviors.  The most important thing is that it support all of them. 

I do not want a “non-auditing” bank account.  In that context, display tokens bound to information tokens and associated with an information policy all seem fine to me.

On the other hand, when browsing the web and doing many other types of transactions I want to prevent any identity provider from profiling me or obtaining any more information than necessary.  Minimal disclosure tokens are the best answer under those circumstances.

The uberpoint:  unless we have a system that embraces all these use cases, we break more than the first law.  We break the laws of minimal disclosure, necessary parties, identity beacons, polymorphism, human integration and consistent experience.   We need a balanced, pragmatic approach that builds transparency, privacy, user control and understanding into the system – integrated with a legal framework for the digital age.

Ready or not: Barbie is an identity provider…

From Wired's THREAT LEVEL, news of an identity provider for girls.

Just today at the CSI Conference in Washington, DC, Robert Richardson was saying he saw signs everywhere that we were “on the cusp of digital identity truly going mainstream”.  Could anything be more emblematic of this than the emergence of Barbie as an identity provider?  It's really a sign of the times. 

From the comments on the Wired site (which are must-reads), it seems Mattel would be a lot better off giving parents control over whitelist settings (Law 1:  user control and consent).  It would be interesting to review other aspects of the implementation.  I guess we should be talking to Mattel about support for “Barbie Cards” and minimal disclosure…  I certainly tip my hat to those involved at Mattel for understanding the role identity can play for their customers.

At last, a USB security token for girls! 

Pre-teens in Mattel's free Barbie Girls virtual world can chat with their friends online using a feature called Secret B Chat. But as an ingenious (and presumably profitable) bulwark against internet scum, Mattel only lets girls chat with “Best Friends,” defined as people they know in real life.

That relationship first has to be authenticated by way of the Barbie Girl, a $59.95 MP3 player that looks like a cross between a Bratz doll and a Cue Cat, and was recently rated one of the hottest new toys of the 2008 holiday season.

The idea is, Sally brings her Barbie Girl over to her friend Tiffany's house, and sets it in Tiffany's docking station — which is plugged into a USB port on Tiffany's PC.  Mattel's (Windows only) software apparently reads some sort of globally unique identifier embedded in Sally's Barbie Girl, and authenticates Sally as one of Tiffany's Best Friends.

Now when Sally gets home, the two can talk in Secret B Chat. (If Sally's parents can't afford the gadget, then she has no business calling herself Tiffany's best friend.)

It's sort of like an RSA token, but with cute fashion accessories and snap-on hair styles. THREAT LEVEL foresees a wave of Barbie Girl parties in the future, where tweens all meet and authenticate to each other — like a PGP key signing party, but with cupcakes.

Without the device, girls can only chat over Barbie Girls’ standard chat system, which limits them to a menu of greetings, questions and phrases pre-selected by Mattel for their wholesome quality. 

In contrast, Secret B Chat  lets girls chat with their keyboards — just like a real chat room. But it limits the girl-talk to a white list of approved words. “If you happen to use a word that's not on our list (even if it's not a bad one), it will get blocked,” the service cautioned girls at launch. “But don't worry —  we're always adding cool new words!”

By the way, Kevin Poulsen has to get the “High Tech Line of the Year Award” for “a PGP key signing party, but with cupcakes.”  Fantastic!
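Piecing the description together, here is a purely hypothetical sketch in Python of the scheme as I understand it.  None of this is Mattel's actual software, and the device identifiers and word list are invented.  A child's account keeps a whitelist of “Best Friend” device IDs, added only when the physical device is docked, and Secret B Chat passes only words on an approved list.

```python
# Hypothetical sketch only; not Mattel's code.  Device IDs and the word
# list are invented for illustration.
best_friends = {"tiffany": set()}                    # account name -> friend device IDs
APPROVED_WORDS = {"hi", "ponies", "cool", "thanks"}  # made-up approved-word whitelist

def dock_device(account, device_id):
    """Called when a friend's device is physically plugged into this PC."""
    best_friends[account].add(device_id)

def can_use_secret_b_chat(account, sender_device_id):
    """Secret B Chat is only available between authenticated Best Friends."""
    return sender_device_id in best_friends[account]

def filter_message(text):
    """Block any word that is not on the approved list."""
    return " ".join(w if w.lower() in APPROVED_WORDS else "***" for w in text.split())

dock_device("tiffany", "BARBIE-GIRL-0042")                   # Sally docks her device at Tiffany's
print(can_use_secret_b_chat("tiffany", "BARBIE-GIRL-0042"))  # True: Sally is now a Best Friend
print(filter_message("hi ponies are cool"))                  # "hi ponies *** cool"
```

Even in this toy form, the Law 1 issue raised above is visible: a parent-facing view of that best-friends whitelist is exactly the kind of control Mattel could be offering.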

[Thanks to Sid Sidner at ACI for telling me about this one…]