I recommend an interesting exchange between Citi's Francis Shanahan, Microsoft's Vittorio Bertocci, and University of Wisconsin's Eric Norman. Here's the background. Francis has spent a lot of time recently looking in depth at the way CardSpace uses WS-Trust, and even built a test harness that I will describe in another piece. While doing this he found something he thought was surprising: CardSpace doesn't show the user the actual binary token sent to a relying party – it shows a description crafted by the identity provider to best communicate with the user.
(The behavior of the system on this – and other – points is documented in a paper Mike Jones and I put together quite a while ago called “Design Rationale Behind the Identity Metasystem Architecture”. There is a link to it in the WhitePapers section on my blog.)
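To make the mechanism concrete: in WS-Trust terms, the identity provider's response carries the opaque security token destined for the relying party in one element, and a separate, human-readable display token that the identity selector renders to the user. The fragment below is a simplified, illustrative sketch (a real response carries a signed, usually encrypted token, and the exact namespaces come from the Identity Selector Interoperability Profile), showing why the user sees crafted display claims rather than the binary token itself:

```python
import xml.etree.ElementTree as ET

# Illustrative, heavily simplified RequestSecurityTokenResponse.
# The real token inside RequestedSecurityToken is opaque to the user;
# only the RequestedDisplayToken is shown in the CardSpace UI.
RSTR = """
<RequestSecurityTokenResponse
    xmlns:trust="http://schemas.xmlsoap.org/ws/2005/02/trust"
    xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity">
  <trust:RequestedSecurityToken>
    <!-- opaque, signed binary token destined for the relying party -->
  </trust:RequestedSecurityToken>
  <ic:RequestedDisplayToken>
    <ic:DisplayToken xml:lang="en-us">
      <ic:DisplayClaim
          Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname">
        <ic:DisplayTag>First Name</ic:DisplayTag>
        <ic:DisplayValue>Alice</ic:DisplayValue>
      </ic:DisplayClaim>
    </ic:DisplayToken>
  </ic:RequestedDisplayToken>
</RequestSecurityTokenResponse>
"""

IC = "{http://schemas.xmlsoap.org/ws/2005/05/identity}"
root = ET.fromstring(RSTR)

# The identity selector renders only the display claims, never the raw token.
for claim in root.iter(IC + "DisplayClaim"):
    tag = claim.find(IC + "DisplayTag").text
    value = claim.find(IC + "DisplayValue").text
    print(f"{tag}: {value}")
```

The split is deliberate: the relying party consumes a token format it understands, while the user is shown a description the identity provider crafted for human consumption – which is exactly what Francis found surprising.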
Francis has his priorities straight, given the way he sees things:
I'm not sure how to solve this. I'm not sure if it's a fault inherent in the Identity Meta-system or if it's just a fact of life we have to live with.
I would never want to put the elegance of a meta-system design and accommodation of potential future token types ahead of supporting Law #1.
There is no question that elegance of design cannot be pitted against the Laws of Identity without causing the whole design to fail and any purported elegance to evaporate.
That said, I do think the current design delivers the user control and consent mandated by the First Law of Identity.
Let's start by seeing the constraints from a practical point of view. In an auditing identity provider, one of the main characteristics of the system is that the provider knows the identity of the relying party. Think of the consequences. The identity provider is capable of opening a back channel to any relying party and telling it whatever it wants to. In fact, from a purely technical point of view, the identity provider can just broadcast all the information it knows about you, me and all our activities to the entire world!
We put trust in the identity provider when we provide it with information that we don't want universally known. And more trust is involved when we accept “auditing mode”, in which the identity provider is able to help protect us by seeing the identity of the party we are connecting with (e.g. during a banking transaction).
Should we conclude that the existence of scenarios requiring auditing means the laws aren't “laws”?
“Going back to my original question which was “Does the DisplayToken violate the First Law of Identity?” I am not convinced it does. What I think I am discovering is that the First Law of Identity is not necessarily enforced.
“For me, being Irish Catholic (and riddled with guilt as a result) I take a very hard-line approach when you start talking about “Laws”. For example, I expect the Law of Gravity to be obeyed. I don't view it as a “Recommendation for the Correct Implementation of Gravity”…
The point here is that when the user employs an auditing identity provider, she should understand that's what she is doing.
While we can't then prevent evil, we can detect and punish it. The claims in the token are cryptographically bound to the claims in the display token, and the binding is auditable. Policy enforcers can therefore audit that the human-readable claims, and the associated information policy, convey the nature of the underlying information transfer.
This auditability means it is possible to determine if identity providers are abiding by the information policies they claim they are employing. This provides a handle for enforcing and regulating the behavior of system participants.
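As a rough illustration of what such an auditable binding could look like – this is my sketch, not the actual CardSpace token format, and the HMAC stands in for the issuer's XML signature – imagine the issuer signing a digest of the display claims alongside the real claims, so an auditor can later check that what the user saw matches what was sent:

```python
import hashlib
import hmac
import json

def issue(claims: dict, display_claims: dict, issuer_key: bytes) -> dict:
    """Illustrative issuer: bind the display token to the real token by
    signing a digest of the display claims together with the claims."""
    display_digest = hashlib.sha256(
        json.dumps(display_claims, sort_keys=True).encode()).hexdigest()
    body = json.dumps({"claims": claims, "display_digest": display_digest},
                      sort_keys=True)
    sig = hmac.new(issuer_key, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def audit(token: dict, display_claims: dict, issuer_key: bytes) -> bool:
    """An auditor checks (a) the issuer's signature and (b) that the display
    claims the user saw match the digest bound into the signed token."""
    expected = hmac.new(issuer_key, token["body"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False
    body = json.loads(token["body"])
    digest = hashlib.sha256(
        json.dumps(display_claims, sort_keys=True).encode()).hexdigest()
    return digest == body["display_digest"]

key = b"issuer-secret"  # hypothetical key shared with the auditor
tok = issue({"age_over_21": True}, {"Age": "Over 21"}, key)
assert audit(tok, {"Age": "Over 21"}, key)      # display matches the token
assert not audit(tok, {"Age": "Over 18"}, key)  # any mismatch is detectable
```

The point of the sketch is the second assertion: because the digest of the display claims is inside the signed material, an identity provider that shows the user one thing and sends the relying party another leaves cryptographic evidence of having done so.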
We've spoken so far about “auditing” identity providers. The system also supports “non-auditing” providers, which do not know the identity of the relying party. In this case, a back channel is not possible. Auditing the accuracy of the display token is still possible, however.
There is also an option for going even further, through the use of “minimal disclosure tokens”. In such a system, the user can have an identity provider that she operates, and which submits her claims to a service for validation. In this architecture, the user can really be guaranteed that there are no back channels or identity-provider injected goop.
Again, we are brought to understand that the identity metasystem spans a whole series of requirements, use cases and behaviors. The most important thing is that it support all of them.
I do not want a “non-auditing” bank account. In that context, display tokens bound to information tokens and associated with an information policy all seem fine to me.
On the other hand, when browsing the web and doing many other types of transactions I want to prevent any identity provider from profiling me or obtaining any more information than necessary. Minimal disclosure tokens are the best answer under those circumstances.
The uberpoint: unless we have a system that embraces all these use cases, we break more than the first law. We break the laws of minimal disclosure, necessary parties, identity beacons, polymorphism, human integration and consistent experience. We need a balanced, pragmatic approach that builds transparency, privacy, user control and understanding into the system – integrated with a legal framework for the digital age.