I've been working on InfoCards for what seems like ages now. Projects this complex turn into a bit of a blur, but the last Microsoft PDC (Professional Developers Conference) in 2003 stands out as a milestone. That's when we first talked publicly about the concept of a non-proprietary identity metasystem that used a “visual card metaphor” to represent personal, professional, employment-related and government identities.
The presentation centered on the idea that we needed a multi-centered system in which multiple identity providers would be represented within a unifying interface offering separation between contexts. But the idea of incorporating multiple different underlying security technologies seemed too hard to achieve at the time. We knew that if we were successful, the InfoCard system would be one of the most attacked systems in the history of computing. How could we build an InfoCard client with multiple “protocol heads” and still keep it secure?
So, in my thinking I fell back to the “simplest thing” – a hybrid that would employ existing PKI and “self-issued” certificates to sign long-lived SAML-encoded attribute statements represented visually as cards. It was admittedly arbitrary. But it could work. The X.509 and SAML standards were well accepted – if not well deployed! On the other hand, there were aspects of the proposal with which I wasn't happy. The inflexibility of long-lived tokens meant that, inadvertently, simply by being unchanging, they became tracking mechanisms – and always gave away maximum information. This seemed progressively more significant as I became more sensitized to the privacy issues involved in individual identity.
It was around this point that I met John Shewchuk, who was CTO of Web Services at Microsoft. The meeting was bizarre because he took one look at the InfoCard presentation and said, “Yeah. The InfoCard concept is amazing. I want you to look at this thing I'm working on.”
At Microsoft, almost everyone likes to “push back”, testing the limits of what they're examining. So John's “instant understanding” was disconcerting.
I said, “John – uh – I don't think you really get how important this is.”
He replied, “No, I do. I get it. It's the missing piece. Believe me. I get it.” And it turned out he did.
A few days later, we got together and he filled me in on WS-Trust. I could see that in a world transformed by WS-Trust, the problems of certificate management and key distribution in “third party trust” scenarios would just “go away”. So would the inflexibility of long-lived tokens. It even gave us a way to support multiple security systems without increasing our vulnerability. So as quickly as John had understood InfoCards, I was on board with WS-Trust. After all, the problems with certificate management and key distribution are huge, huge, huge.
The more I thought about WS-Trust, the more I understood its power and simplicity. I say simplicity because it is really just a mechanism for exchanging one token for another. You present a first token, tell the system what kind of token you want in return, and, assuming all goes well, get the new token back.
What can you do with it? Authentication. Authorization. Secure exchange of claims about… anything at all. What kind of payload can you handle? Anything you want. And that is power.
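The exchange described above maps onto WS-Trust's two core messages: a RequestSecurityToken (RST) sent to a Security Token Service, and a RequestSecurityTokenResponse (RSTR) coming back. Here is a minimal sketch of what such an exchange might look like on the wire – the SAML token type is just an illustrative choice, and a real message would also carry WS-Security headers, policy, and proof-key material that are omitted here:

```xml
<!-- Illustrative RST: "here are my credentials; please issue me a SAML token" -->
<wst:RequestSecurityToken
    xmlns:wst="http://schemas.xmlsoap.org/ws/2005/02/trust">
  <!-- The kind of token the requester wants back -->
  <wst:TokenType>urn:oasis:names:tc:SAML:1.0:assertion</wst:TokenType>
  <!-- "Issue" = exchange my current token for a newly issued one -->
  <wst:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</wst:RequestType>
</wst:RequestSecurityToken>

<!-- Illustrative RSTR: the token service returns the new token -->
<wst:RequestSecurityTokenResponse
    xmlns:wst="http://schemas.xmlsoap.org/ws/2005/02/trust">
  <wst:RequestedSecurityToken>
    <!-- the issued token (e.g. a signed SAML assertion) goes here -->
  </wst:RequestedSecurityToken>
</wst:RequestSecurityTokenResponse>
```

Because the TokenType is just a URI, the same exchange works unchanged whether the requested token is SAML, X.509, Kerberos, or something not yet invented – which is exactly the polymorphism discussed next.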
This meant one could make the metasystem not only multi-centered, but polymorphic in the sense of supporting different underlying technologies. My thinking about the metasystem – later captured in “The Laws of Identity” – had led me to worry that InfoCards might end up being another technology silo. The polymorphic capabilities of WS-Trust could be employed so InfoCards not only escaped becoming a silo, but cut across existing silos in a very synergistic way.
Through John I met the witty and razor-sharp Tony Nadalin from IBM. Later I found out that they, along with Mary Ann Hondo from IBM and Chris Kaler from Microsoft, were the original four musketeers behind WS-Trust and the related microspecs. I thought it would be a good idea to talk with them about their work so others could get their perspective on things – and get to know them.
In this conversation, John and Tony tell us why they invented the WS-Trust protocol, what it does, how it differs from earlier technologies, and the kinds of reactions various parties initially had to their concepts. They go on to discuss “claims”, and the advantages of “claims neutrality” over factoring claims into separate authentication, authorization, and attribute-release “buckets”. Tony, who also works with Liberty, shares his thinking about the way tokens will, through the use of policy, be scoped to applications, and why the competitive advantages of dynamic systems mean rigidly taxonomized formats are unlikely to survive in the long term.
John explores the differences between what he calls the old “fixed offset protocols” and the approach, possible today with modern parsers, that can handle new composable and flexible payloads in what he calls “linear time” – meaning that CPUs can decompose the information flow faster than networks can deliver it. Then we look at where we are in the process of getting the protocol into wide use, and what remains to be done.
The conversation concludes by exploring the issues of standardization and complexity. Both John and Tony say we are already moving beyond the theoretical phase and shipping products. This leads to a discussion of profiles and interoperability.