Drilling further into delegation

Still further to my recent piece on delegation, Eric Norman writes to give another example of a user-absent scenario.  Again, to me, it is an example of a user delegating rights to a service which should run under its own identity, presenting the delegation token given to it by the user.

For an example of user-absent scenarios, look at grid computing. In this scenario, a researcher is going to launch a long-running batch job into the computing grid. Such a job may run for days and the researcher needs to go home and feed the dog and may be absent if a particular stage in the job requires authentication. The grid folks have invented a “proxy certificate” for this case. While it’s still the case that a user is present when their “main” identity is used, the purpose of the proxy cert is to delegate authentication to an agent in their absence such that if that agent is compromised, all the researcher loses is that temporary credential.

Perhaps this doesn’t count as a “user absent scenario”. Nevertheless, I think it’s certainly relevant to discussions about delegation.

I agree this is relevant.  The proxy cert is a kind of practical hybrid that gets to some of what we are trying to do without attempting to fix the underlying infrastructure.  It's way better than what we've had before, and a step on the right road.  But I think those behind proxy certs will likely agree with me about the theoretical issues under discussion here.

As an aside, it's interesting that their scheme is based on public key, and that's what makes delegation across multiple parties “tractable” even in a less than perfect form.  I say public key without at all limiting my point to X.509.

With respect to the problem of having identities on different devices, Eric adds:

Um, I think one of the scenarios Eve might have had in mind is the use of smart cards. A lot of people think that the “proper” way smart cards should operate is that secrets (e.g. private keys) are generated on the card and will reside on that card for their entire life and cannot be copied anywhere else. I’m not commenting on whether that’s really proper or not, but there sure are a lot of folks who think it is, and there are manufacturers creating smart cards that do indeed exhibit that behavior.

If users are doing million dollar bank transfers, I think it makes sense to keep their keys in a self-destroying dongle.  In many other cases, it makes sense to let users move them around.  After all, right now they spew their passwords and usernames into any dialog box that opens in front of them, so controlled movement of keys from one device to another would be a huge step forward.

In terms of the deeper discussion about devices, I think we also have to be careful to separate between credentials and digital identities.  For example, I could have one digital identity, in the sense of a set of claims my employer or my bank makes about me, and I could prove my presence to that party using several different credentials-in-the-strict-sense:  a key on a smart card when I was at work; a key on a phone while on the road; even, if the sky was falling and there was an emergency, a password and backup questions.

If we don't clearly make this distinction, we'll end up in a “fist full of dongles” nightmare that would make even Clint Eastwood run for the hills.  When I hear people talk about CardSpace as a “credential selector” it makes my hair stand on end:  it is an identity selector, and various credentials can be used at different times to prove to the claims issuer that I am some given subject.

Speaking of smart card credentials, one of the big problems in last-generation use of smartcards was that if a trojan was running on your machine, it could use your smartcard and perform signatures without your knowledge.  Worst of all, smartcards lend themselves to cross-site scripting attacks (not possible with CardSpace).  To me this is yet another call to have the user involved in the process of activating the trusted device.

Separating the identity of users from services

Geoff Arnold added a comment to an earlier piece that helps tease out the issues around delegation:

In other words, there is no user-absent scenario. There is a “user is present and delegates authority” scenario. After all, how can a user delegate authority if she isn’t present???

That's fine as long as one of the rights that can be delegated is the ability to delegate further. And I'm guessing that that's what Eve is really talking about. Not delegating 100% of her rights to some agent, but delegating sufficient rights that the agent can act as a more-or-less first class entity, negotiating and delegating on her behalf.

In fact the only (obvious) right that I should not be able to delegate is the right of revocation….

OK.  So delegation is recursive.  If we accept the notion that services operate within their own identity when they do something a user has asked them to – then if they want to delegate further, they need to create a Delegation Token that:

  • asserts they have the right to delegate in some particular regard; and 
  • defines exactly what they want to delegate further, and to whom.

They need to present this along with the user's authorization.  One ends up with the whole delegation chain – all of which is auditable.
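To make the recursion concrete, here is a minimal Python sketch of such a chain.  Everything in it is hypothetical (the token structure, the field names, and an HMAC “signature” standing in for real XML tokens and XML-DSig), but it shows the two properties described above: each link can only pass on rights its own parent granted, and the whole chain can be walked for audit.

```python
import hmac, hashlib, json

def sign(key: bytes, payload: dict) -> str:
    """Sign a canonical JSON form of the payload (a stand-in for XML-DSig)."""
    blob = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, blob, hashlib.sha256).hexdigest()

def make_delegation(key: bytes, delegator: str, delegate: str, rights: list, parent=None):
    """Create one link in a delegation chain; 'parent' is the token that
    authorized this delegator in the first place (None for the user herself)."""
    payload = {"delegator": delegator, "delegate": delegate,
               "rights": rights, "parent": parent}
    return {"payload": payload, "sig": sign(key, payload)}

def verify_chain(keys: dict, token) -> bool:
    """Walk back up the chain: every link must be validly signed, and may
    only delegate rights that its own parent granted it."""
    while token is not None:
        payload = token["payload"]
        if token["sig"] != sign(keys[payload["delegator"]], payload):
            return False
        parent = payload["parent"]
        if parent is not None:
            granted = set(parent["payload"]["rights"])
            if not set(payload["rights"]) <= granted:
                return False  # tried to delegate more than it was given
        token = parent
    return True

# The user delegates to service A, which re-delegates a subset to service B.
keys = {"alice": b"alice-key", "serviceA": b"a-key"}
t1 = make_delegation(keys["alice"], "alice", "serviceA", ["read", "write"])
t2 = make_delegation(keys["serviceA"], "serviceA", "serviceB", ["read"], parent=t1)
assert verify_chain(keys, t2)      # the whole chain is present and auditable
t3 = make_delegation(keys["serviceA"], "serviceA", "serviceB", ["admin"], parent=t1)
assert not verify_chain(keys, t3)  # exceeds what alice originally granted
```

The point of the sketch is the shape, not the crypto: each token carries its parent, so the verifier always sees the complete chain back to the user.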

In this scenario, the user's identity remains under her control.  That's one of the things we mean when we say, “user centric”.  By issuing an authorization for some service to do something, she actually asserts her control over it.  I think this would be true even if, given a suitable incentive, she delegated the right of revocation (there are limits here…)

Multiple tokens to capture these semantics 

CardSpace is built on top of WS-Trust, though it also supports simple HTTP posts. 

One of the main advantages of WS-Trust is that it allows multiple security tokens to be stapled together and exchanged for other tokens.  

This multi-token design perfectly supports strong identification of a service combined with presentation of a separate delegation token from the user.    It is a lot cleaner for this scenario than single-token designs such as the SAML profiles proposed by Liberty, and avoids the consequent “disappearing” of the user.
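As a rough illustration of the stapling idea (Python pseudocode rather than the actual SOAP/XML request-security-token messages, and with made-up field names), a delegated call might combine the two tokens like this:

```python
# Conceptual sketch only: real WS-Trust is SOAP/XML (RST/RSTR messages);
# the field names below are illustrative, not the actual schema.
def build_token_request(service_identity_token, user_delegation_token,
                        requested_token_type):
    """Staple the service's own identity token to the user's delegation
    token, asking the token service to exchange the pair for a new token."""
    return {
        "RequestType": "Issue",
        "TokenType": requested_token_type,
        # the service authenticates as itself...
        "ServiceIdentity": service_identity_token,
        # ...and separately presents what the user authorized it to do
        "SupportingTokens": [user_delegation_token],
    }

request = build_token_request(
    service_identity_token={"subject": "calendar-service", "proof": "..."},
    user_delegation_token={"delegator": "alice", "rights": ["read-calendar"]},
    requested_token_type="urn:example:token",
)
assert request["ServiceIdentity"]["subject"] == "calendar-service"
assert request["SupportingTokens"][0]["delegator"] == "alice"
```

Because the service's identity and the user's authorization travel as separate tokens, neither has to masquerade as the other, and both can be audited independently.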

I guess I find Eve's contention that, “By contrast, Liberty’s ID-WSF was developed to support both the ‘human present’ and ‘human absent’ modes” a bit triumphalist and simplistic.   

Going forward I'll write more about why I think WS-Trust is a step forward for these delegation scenarios.  And about why I think getting them right is so very important.

Interesting summary by Kobielus

Despite the puns in his first paragraph, this piece by James Kobielus is very interesting, and sums up a lot of the conversations he has been having with people involved in the identity milieu:

“First off, I'd like to suggest that what we should be focusing on is not ‘user-centric identity’, per se, but ‘internet-scalable identity metasystems’ (a thought that Andre ping'd me on and Dick got me to take to hardt). What are the principles for making our identity metasystems truly internet-scalable? Could it be that user-centricity (however defined) is a necessary (but perhaps not sufficient) condition for internet-scalability?

“Now, let's look back to that previous post where I enumerated the main internet-scalability questions that Mr. Hardt laid out for our consideration:

  1. How do we scale up user-centric identity schemes, in which claims/attributes flow through and are forwarded by the user, so that they work on an open internet scale, not just within self-contained federations or circles of trust?
  2. How do we enable the free movement of claims from anywhere to anywhere?
  3. How do we extend lightweight identity management to the “long tail” of websites that don't and won't implement a heavyweight trust/federation model such as SAML or Liberty requires just to do chained/proxied authentication?
  4. How do we leverage the same core universal lightweight internet design patterns–i.e., REST using URIs and HTTP/HTTPS–to do internet-scale ubiquitous identity?

“Now I'm going to slightly shift the context for a moment to Kim Cameron's “laws of identity,” and then attempt to map that, plus Hardt's concerns, back to the notion of what it takes to make an identity metasystem truly internet-scalable. First, what I'll do is just republish Kim's actual written principles, but in a different order:

  • Consistent Experience Across Contexts: The unifying identity metasystem must guarantee its users a simple, consistent experience while enabling separation of contexts through multiple operators and technologies.
  • Pluralism of Operators and Technologies: A universal identity system must channel and enable the inter-working of multiple identity technologies run by multiple identity providers.
  • Human Integration: The universal identity metasystem must define the human user to be a component of the distributed system integrated through unambiguous human-machine communication mechanisms offering protection against identity attacks.
  • User Control and Consent: Technical identity systems must only reveal information identifying a user with the user’s consent.
  • Minimal Disclosure for a Constrained Use: The solution which discloses the least amount of identifying information and best limits its use is the most stable long term solution.
  • Justifiable Parties: Digital identity systems must be designed so the disclosure of identifying information is limited to parties having a necessary and justifiable place in a given identity relationship.
  • Directed Identity: A universal identity system must support both “omni-directional” identifiers for use by public entities and “unidirectional” identifiers for use by private entities, thus facilitating discovery while preventing unnecessary release of correlation handles.

“Now, I'll reclassify/regroup/rewrite these principles into three higher-order principles:

  • Abstraction: An internet-scalable identity metasystem must provide all end- and intermediary entities (i.e., users, identity agents, IdPs, RP/SPs, identity brokers, etc.) with a consistent, abstract, standardized, lightweight, reliable, speedy, and secure experience/interface across all use cases, interactions, credentials, protocols, platforms, etc., while enabling separation of identity contexts across myriad domains, operators, and technologies.
  • Heterogeneity: An internet-scalable identity metasystem must enable seamless, standards-based interoperability across diverse identity use cases, interactions, design patterns, credentials, protocols, IdPs, RP/SPs, platforms, etc.
  • Mutuality: An internet-scalable identity metasystem must ensure that all end- and intermediary-entities (i.e., human users, identity agents, IdPs, RP/SPs, identity brokers, etc.) can engage in mutually acceptable interactions, with mutual risk balancing, and ensure that their various policies are continually enforced in all interactions, including, from the human user’s point of view, such key personal policies/peeves as the need for unambiguous human-machine communication mechanisms, privacy protection, user control and consent, minimal disclosure for a constrained use, limitation of disclosures to necessary and justifiable parties, and so on and so forth.

“Now, how would conformance to these three wordy uber-principles contribute to internet-scalability? Well, abstraction is the face of the universal interoperability backplane of any ubiquitous infrastructure (be it REST, SOA, ESB, or what have you). And heterogeneity is the fabric of any hyper-decentralized, federated, multidomain interoperability environment. And mutuality (i.e., a balancing of rights, responsibilities, risks, restrictions, rewards, etc.) is essential for any endpoint (e.g., the end user, an RP/SP, etc.) to participate in this heterogeneous, abstract environment with any degree of confidence that they can fend for themselves and actually benefit from plugging in.

“User-centric identity got going as an industry concern when it became clear that federated identity environments are not always mutual, from the end user's point of view. In other words, under “traditional” federation, some “attribute authority” (not necessarily under your or my direct control) may be coughing up major pieces (attributes) of our identity to unseen RP/SPs (also not under our control) without consulting us on the matter. In other words, those RP/SPs can selectively deny us access to the resources (i.e., apps, data, etc.) we seek, but we often can't selectively deny them access to the resources (i.e., our identity attributes) that they seek. Doesn't seem like a balanced equation, does it?

“Now, tying all this back to Dick's key design criteria for the identity metasystem (in summary): open, free, lightweight, ubiquitous interaction patterns. Seems to scream for abstraction plus heterogeneity plus mutuality, which are necessary and, taken together, sufficient conditions for internet scalability.

“In other words, necessary for the identity metasystem to be universally feasible, flexible, interoperable, implementable, extensible, and acceptable.

I think James makes good points. 

Certainly one of the main things that will get us to the identity big bang is correcting the way earlier systems  “disappeared” the user.  You can see this in the enterprise domain-based systems, where the domain was all-powerful, and it was just assumed that a user was an artifact of one single administrative domain. We now realize we need more flexible constructs.

And you can see it in consumer systems as well.

Why did we all do this?  It depends on the context.  In the consumer space, I think, for example, it was assumed that customers would be loyal to the convenience of a “circle of trust” set up by portal operators and their suppliers.  There was nothing innately wrong about this, but it is just one scenario seen from one point of view. 

From the individual customer's point of view, the “circles of trust” should really have been called “circles of profit” between which they were supposed to choose.  As Doc Searls says, this isn't the only customer relationship which is possible!  Basically, we're talking very last-century stuff that didn't understand the restructuring impact of the web – and these ideas now have to grow into a much wider context.  This is a world of really deep relationships with customers, not of forced confinement.

So a big “correction” was in the cards, and the popularity of the “user-centric” view derives partly from this.  But there are other forces at play, too.  People can talk about “user agents operating on our behalf” as much as they want.  But who decides what “our behalf” really is?  We need as individuals to control those agents – delegate to them as James says – and keep them from getting “too big for their britches…” 

So my basic thesis is not that there shouldn't be agents and services operating on our behalf – or that I would support an architecture that made this impossible.  It is that all these services and agents must begin and end by being under the user's control, and we need a consistent technology to achieve that.

I totally buy the notion that a web site gets to decide who accesses it, and what the rules of engagement are for that to happen (“trust is local”).  So the user's control with respect to her interests does not diminish a service's control with respect to its interests.  Should we call this mutuality?  I think what we really have is a mutual veto – both the user and the site being visited can set whatever bar they want before they back out of the transaction.  To me, in a world of competition, this remains control.

 

Bandit and Higgins hit interop milestone

I was so snowed under trying to work against time for the OpenID announcement at RSA that I missed blogging another important milestone that has been reached by the identity community.  This report on progress in the Higgins and Bandit side of the house is great news for everyone:

The Bandit and Eclipse Higgins Projects today announced the achievement of a key milestone in the development of open source identity services. Based on working code from the two projects and the larger community of open source developers, the teams have created a reference application that showcases open source identity services that are interoperable with Microsoft’s Windows* CardSpace* identity management system and enable Liberty Alliance-based identity federation via Novell® Access Manager. This reference application is a first-of-its-kind open source identity system that features interoperability with leading platforms and protocols. This ground-breaking work will be demonstrated at the upcoming RSA Conference in San Francisco.

“There are two basic requirements for translating the potential of recent identity infrastructure developments into real-world benefits for users: interoperability and a consistent means of developing identity-aware applications,” said Jamie Lewis, CEO and research chair of Burton Group. “First, vendors must deliver on their promise to enable interoperability between different identity systems serving different needs. Second, developers need a consistent means of creating applications that leverage identity while masking many of the underlying differences in those systems from the programmer. The Bandit and Eclipse Higgins interoperability demonstration shows progress on the path toward these goals. And the fact that they are open source software projects increases the potential that the identity infrastructure will emerge as a common, open system for the Internet.”

The Bandit and Higgins projects are developing open source identity services to help individuals and organizations by providing a consistent approach to managing digital identity information regardless of the underlying technology. This reference application leverages the information card metaphor that allows an individual to use different digital identity ‘I-Cards’ to gain access to online sites and services. This is the metaphor used in the Windows CardSpace identity management system that ships with the Vista* operating system.

“Windows CardSpace is an implementation of Microsoft’s vision of an identity metasystem, which we have promoted as a model for identity interoperability,” said Kim Cameron, architect for identity and access at Microsoft. “It’s rewarding to see the Bandit and Higgins projects, as well as the larger open source community, embracing this concept and delivering on the promise of identity interoperability.”

The open source technology developed by Bandit and Higgins enables initial integration between a non-Liberty Alliance identity system and a Liberty Alliance-based federated identity system provided by Novell Access Manager. Specifically, these technologies enable Novell Access Manager to authenticate a user via a Microsoft infocard (CardSpace) and consume identity information from an external identity system. It will further show that identity information from Novell Access Manager can be used within an infocard system. This is a significant step forward in the integration of separate identity systems to deliver a seamless experience for the user as demonstrated by the reference application.

“The Liberty Alliance project fully supports the development of open source identity services that advance the deployment of Liberty-enabled federation and Web Services as part of the broader Internet identity layer,” said Brett McDowell, executive director of the Liberty Alliance. “The open source community’s embrace of Liberty Alliance protocols is validation of the benefits this technology provides, and we salute the Bandit and Higgins teams for their role in making the technology more broadly accessible.”

Higgins is an open source software project that is developing an extensible, platform-independent, identity protocol-independent software framework to support existing and new applications that give users more convenience, privacy and control over their identity information. The reference application leverages several parts of Higgins including an identity abstraction layer called the Identity Attribute Service (IdAS). To support a dynamic environment where sources of identity information may change, it is necessary to provide a common means to access identity and attribute information from across multiple identity repositories. The IdAS virtualizes identity sources and provides a unified view of identity information. Different identity stores or identity management systems can connect to the IdAS via “context providers” and thus provide interoperability among multiple systems.

“Many groups have been working towards the goals of Internet identity interoperability,” said Paul Trevithick, technology lead for the Higgins project. “This milestone represents a major step in having multiple open source projects work together to support multi-protocol interoperability.”

The Bandit project, sponsored by Novell, is focused on delivering a consistent approach to enterprise identity management challenges, including secure access and compliance reporting. The Bandit team’s contributions to the reference application include the development of multiple “context providers” that plug into the Higgins Identity Attribute Service (IdAS) abstraction layer to provide access to identity information across disparate identity stores. It also showcases the role engine and audit reporting capabilities in development by the Bandit community.

“The development of this reference application would not have been possible without the collaboration and contribution of the wider Internet identity community,” said Dale Olds, Bandit project lead and distinguished engineer for Novell. “This is the first of many milestones we are working towards as both the Bandit and Higgins communities strive to enable interoperable, open source identity services.”

So congratulations to Bandit, Higgins and everyone else who made this happen – this is great stuff, and the identity big bang is one step closer for it.

Ignite Deux Seattle

Jackson Shaw knows as much about identity management as anyone.  I very much value his thinking.  If that weren't enough, there is that irresistible love of life that sweeps everyone into his energy field.  I think it comes through in his new blog:

No, it's not another French post from Jackson. Tonight I did something a bit different. I headed over to the Capitol Hill Arts Center in downtown Seattle to watch Ignite Seattle. There were 21 speakers scheduled, including Scott Kveton. As you probably know, my buddy Kim Cameron is the man behind the curtain for Microsoft's CardSpace initiative (I guess I should stop calling it an initiative – it is actually part of Vista now) and at the RSA conference Microsoft announced that CardSpace would be interoperable with OpenID.

I thought since Scott was going to present I might as well go over and see what all the hubbub was about. The format of the evening was interesting in itself. Presenters had 5 minutes – only – to present their 20 slides! That's 15 seconds a slide. Scott was the third presenter in the first volley of speakers. The first talk was from Matthew Maclaurin of Microsoft Research on Programming for Fun/Children/Hobbyists/Hackers. The second was from Elisabeth Freeman (author in the Head First series, works at Disney Internet Group) on The Science Behind the Head First Books: or how to write a technical book that doesn’t put your readers to sleep. Then Scott was to speak.

First, I was shocked to walk into this “art space” that was packed to the rafters with people. Was I in the wrong place? Apparently not. On the website they stated the space would hold 400 people and it was jam packed. I had this vision of a few people sitting around some tables chatting. Not so! It was pretty cool; folksy; kinda out there but very engaging. Second, what was I going to get out of a 5-minute talk? Well, the speakers kind of had the pressure on them to make their points. The ones that I saw all got to the point quickly and they all engaged with the audience, did their thing and got off.

Check out my photos on Picasa if you want to see the shots I took which included many from Scott's talk. So, what did I learn from Scott's talk?

  • OpenID is single sign-on for the web
  • Simple, light-weight, easy-to-use, open development process
  • Decentralized
  • Lots of companies are already using it or have pledged support
  • 12-15M users have OpenIDs; 1000+ OpenID enabled sites
  • 10-15 new OpenID sites added each day
  • 7% growth every week in sites

Scott predicts that in 2007 there will be 100M users, 7,500 sites, big players adopt OpenID and that OpenID services emerge. Bold predictions, but something that is viral, like OpenID, has a shot at it.

I have to say I was impressed. Scott finished up with a call to action that included learning more about OpenID at openidenabled.com. I'm definitely heading over there to learn more.

I'll report back.


Jackson just “gets” the potential for contagion into the enterprise – assuming we can use OpenID in the proper roles and with the right protections.  Corroborates for me the possible “charging locomotive effect”.   People shouldn't be caught looking the wrong way.

As for the numbers Scott threw out, I think they are very achievable.

What is meant by “token independence”?

I don't want to get sidetracked into a discussion of the nuances of the SAML protocol and token independence, but I imagine readers will want me to share a comment by Scott Cantor – one of the principal creators of Shibboleth.  He knows something about SAML too, since he was the editor of the Version 2.0 spec.  He is responding to my recent post about why communications protocol, trust system and token payload must become three orthogonal axes:

SAML doesn’t have the problem Kim is referring to either. Both trust model and token format are out of scope of SAML protocol exchanges. The former is generally understood, but the token issue is the source of a lot of FUD, or in Kim’s case just misunderstanding SAML. This is largely SAML’s own fault, as the specs do not explain the issue well.

It is true that SAML protocols generally return assertions. What isn’t true is that a SAML assertion in and of itself is a security token. What turns a SAML assertion into such a token is the SubjectConfirmation construct inside it. That construct is extensible/open to any token type, proof mechanism, trust model, etc.

So the difference between SAML and WS-Trust is that SAML returns other tokens by bridging them from a SAML assertion so as to create a common baseline to work from, while WS-Trust returns the other tokens by themselves. This isn’t more or less functional, it’s simply a different design. I suppose you could say that it validates both of them, since they end up with the same answer in the end.

An obvious strategy for bridging SAML and OpenID is using an OpenID confirmation method. That would be one possible “simple” profile, although others are possible, and some have been discussed.
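Scott's point about SubjectConfirmation can be sketched in a few lines (hypothetical field names in Python, standing in for the real SAML XML):

```python
# Conceptual sketch of the bridging idea: the assertion by itself is not the
# security token; the SubjectConfirmation slot inside it is the extensible
# place where a confirmation method (and whatever token or key material it
# needs) turns the assertion into one.
def make_assertion(subject, claims, subject_confirmation):
    """Build a toy 'assertion' with an open SubjectConfirmation slot."""
    return {
        "Subject": subject,
        "AttributeStatement": claims,
        "SubjectConfirmation": subject_confirmation,  # open to any method
    }

# Bridging a non-SAML token (here a made-up OpenID method URI) through
# the assertion, along the lines Scott describes:
bridged = make_assertion(
    subject="alice",
    claims={"member": True},
    subject_confirmation={"Method": "urn:example:openid",
                          "token": "https://alice.example/"},
)
assert bridged["SubjectConfirmation"]["Method"] == "urn:example:openid"
```

The same envelope carries different proof mechanisms by varying only the contents of that one slot, which is what makes the assertion a common baseline rather than a single fixed token type.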

I'm not sure I really misunderstand SAML.  I actually do understand that the SubjectConfirmation within SAML offers quite a bit of elasticity.  But SAML does have a bunch of built-in assumptions within the Assertion that make it, well, SAML (Security Assertion Markup Language).  These aren't always the assumptions you want to make.  I'll share one of my own experiences with you.

CardSpace supports a mode of operation we call “non-auditing”.  In this mode, the identity of the relying party is never conveyed to the identity provider.

The identity provider can still create assertions for the relying party, sign them, and send them back to CardSpace, which can in turn forward them to the relying party.  If done properly, using a reasonable caching scheme, this provides a high degree of privacy, since the identity provider is blind to the usage of its tokens.  For example, one could imagine a school system issuing my daughter a token that says she's under sixteen that she could use to get into protected chat rooms.  In non-auditing mode  the school system would not track which chat rooms she was visiting, and the chat room would only know she is of the correct age.  This is certainly an increasingly important use case.

My first instinct was to use SAML Assertions as the means for creating this kind of non-audited assertion.  But as Arun Nanda and I started our design we discovered that SAML – even SAML 2.0 – just wouldn't work for us. 

In the specification (latest draft), section 2.3.3 says a SAML Assertion MUST have a unique ID.  It must also have an IssueInstant.  When that is the case, the identity provider can always collaborate with the relying party to do a search on the unique ID or IssueInstant, so the desired privacy characteristics dissipate.
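To see concretely why a mandatory unique ID defeats the blinding, consider this toy sketch (hypothetical log formats; nothing SAML-specific beyond the identifier itself):

```python
# If the identity provider logs which user each assertion ID was issued to,
# and the relying party logs which assertion IDs it received, joining the two
# logs on that unique ID re-links the user to the site she visited -- even
# though the IdP never saw the relying party's name at issuance time.
idp_log = [  # (assertion_id, user) -- recorded by the identity provider
    ("_a1f3", "alice"),
    ("_b7c2", "bob"),
]
rp_log = [   # (assertion_id, relying_party) -- recorded by the chat room
    ("_a1f3", "chatroom.example"),
]

issued_to = dict(idp_log)
correlated = [(issued_to[aid], rp) for aid, rp in rp_log if aid in issued_to]
assert correlated == [("alice", "chatroom.example")]  # the blinding is gone
```

The same join works on IssueInstant timestamps, just with a little more noise, which is why both fields are a problem for the non-auditing use case.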

Being a person of some deviousness who just wants to get things done, I said to Arun, “I know you won't like this, but I wonder if we couldn't just create an ID that would be the canonical ‘untraceable identifier’?”  I hesitate to admit this and do so only to show I really was trying to get reuse.

But within a few seconds, Arun pointed out the following stipulation from section 1.3.4: 

Any party that assigns an identifier MUST ensure that there is negligible probability that that party or any other party will accidentally assign the same identifier to a different data object.

I could have argued we weren't reassigning it “accidentally”, I suppose.  But there you are.  I needed a new “Assertion” type – by which I'm referring to the payload hard-wired into SAML. 

It isn't that there is anything wrong with a SAML Assertion.  The “ID” requirement and “IssueInstant” make total sense when your use case is centered primarily around avoiding replay attacks.  But I had a different use case, and needed a different payload, one incompatible with the SAML protocol.  And going forward, I won't be the last to operate outside of the assumptions of any given payload, no matter how clever.

I have looked deeply at SAML, but am convinced that protocol, payload (call it assertion type or token type, I don't care) and trust fabric all need to be orthogonal.  SAML was a great step forward after PKI because it disentangled trust framework from the Assertion/Protocol pairing (in PKI they had all been mixed up in a huge ball of string).  But I like WS-Trust because it completes the process, and gets us to what I think is a cleaner architecture.

In spite of all this, I totally buy Scott's uberpoint that for a number of common use cases SAML and WS-Fed (meaning WS-Trust in http redirection mode) are not more or less functional, but simply a different design. 

HelloWorld Information Cards

One of the most important things about the Information Card paradigm is that the cards are just ways for the user to represent and employ digital identities (meaning sets of claims about a subject). 

The paradigm doesn't say anything about what those claims look like or how they are encoded.  Nor does it say anything about the cryptographic (or other) mechanisms used to validate the claims. 

You can really look at the InfoCard technology as just being

  1. a way that a relying party can ask for claims of “some kind”;
  2. a safe environment through which the user can understand what's happening; and
  3. the tubing through which a related payload is transferred from the user-approved identity provider to the relying party.  The goal is to satisfy the necessary claim requirements. 

If you have looked at other technologies for exchanging claims (they're not called that, but are at heart the same thing), you will see this system disentangles the communication protocol, the trust framework and the payload formats, whereas previous systems conflated them.  Because there are now three independent axes, the trust frameworks and payloads can evolve without destabilizing anything.
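As a toy illustration of the payload axis (hypothetical structures in Python, not the real Information Card wire format), a relying party can name the claims it needs and the token formats it accepts, without saying anything about trust fabric or transport:

```python
# Illustrative only: a relying-party policy in the spirit of an Information
# Card claims request. The RP asks for claims of "some kind"; the token
# format is a parameter, and trust and transport are separate axes entirely.
rp_policy = {
    "required_claims": ["age_over_16"],
    "accepted_token_types": ["saml1.1", "saml2.0", "helloworld"],  # payload axis
}

def identity_provider(requested_claims, token_type):
    """Stand-in IdP: packages the requested claims in whichever payload
    format the relying party asked for. Signing/trust would be handled
    on its own orthogonal axis."""
    claims = {c: True for c in requested_claims}
    if token_type == "helloworld":           # a made-up format, as in the post
        return {"format": "helloworld", "claims": claims}
    return {"format": token_type, "assertion": {"claims": claims}}

token = identity_provider(rp_policy["required_claims"],
                          rp_policy["accepted_token_types"][-1])
assert token["format"] == "helloworld"
assert token["claims"]["age_over_16"] is True
```

Swapping in a new token format touches only the payload-packaging branch; the request, the tubing and the trust arrangements are untouched, which is the architectural point.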

CardSpace “comes with” a “simple self-asserted identity provider” that uses the SAML 1.1 token format.  But we just did that to “bootstrap” the system.  You could just as well send SAML 2.0 tokens through the tubing.  In fact, people who have followed the Laws of Identity and Identity Metasystem discussions know that the fifth law of identity refers to a pluralism of operators and technologies.  When speaking I've talked about why different underlying identity technologies make sense, and compared this pluralism to the plurality of transport mechanisms underlying TCP/IP.  I've spoken about the need to be “token agnostic” – and to be ready for new token formats that can use the same “tubing”.
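Token agnosticism can be illustrated with a toy dispatcher.  The type URIs and handler names below are hypothetical (the real metasystem does identify token types by URI in the relying party's policy, but the rest is just a sketch): the channel routes any token to whatever handler understands its format, so a new format slots in without touching the tubing itself.

```python
# Hypothetical sketch of a token-agnostic channel: handlers register by
# token-type URI, and the channel never parses the payload itself.

TOKEN_HANDLERS = {}

def register_token_type(type_uri):
    def wrap(fn):
        TOKEN_HANDLERS[type_uri] = fn
        return fn
    return wrap

@register_token_type("urn:oasis:names:tc:SAML:1.0:assertion")
def handle_saml11(raw):
    return {"format": "SAML 1.1", "raw": raw}

@register_token_type("http://example.org/helloworld")  # made-up URI
def handle_helloworld(raw):
    return {"format": "HelloWorld", "raw": raw}

def deliver(type_uri, raw):
    # The channel only routes; the handler interprets the payload.
    return TOKEN_HANDLERS[type_uri](raw)

print(deliver("http://example.org/helloworld", "<hello/>")["format"])
```

Adding SAML 2.0, or OpenID payloads, or anything newer is one more `register_token_type` entry; nothing in `deliver` changes.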

There have been some who have rejected the open “meta” model in favor of just settling on tokens in the “concept du jour”.  They urge us to forget about all these subtleties and just adopt SAML, or PKI, or whatever else meets someone's use cases.  But the sudden rise of OpenID shows exactly why we need a token-agnostic system.  OpenID has great use cases that we should all recognize as important.  And because of the new metasystem architecture, OpenID payloads can be selected and conveyed safely through the Information Card mechanisms just as well as anything else.  To me it is amazing that the identity metasystem idea isn't more than a couple of years old and yet we already have an impressive new identity technology arising.  It provides an important example of why an elastic system like CardSpace is architecturally right. 

It's sometimes hard to explain how all this works under the hood.  So I've decided to give a tutorial about “HelloWorld” cards.  They don't follow any format previously known to man – or even woman.  They're just something made up to show elasticity.  But I'm hoping that when you understand how the HelloWorld cards work, it will help you see the tremendous possibilities in the metasystem model.

The best way to follow this tutorial is to actually try things out.  If you want to participate, install CardSpace on XP or use Vista, download a HelloWorld Card and kick the tires.  (I'm checking now to see if other selector implementations will support this.  If not, I know that compatibility is certainly the intention on everyone's part). 

The HelloWorld card is just metadata for getting to a “helloworld” identity server.  In upcoming posts I'll explain how all this works in a way that I hope will make the technology very clear.  I'll also make the source code available.  An interesting note here:  the identity server is just a few hundred lines of code. 
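To give a feel for how small such an issuer can be, here's an illustrative back-of-envelope sketch, not the actual server: a made-up envelope format, with an HMAC over a shared secret standing in for the real XML signature machinery.

```python
# Toy "HelloWorld" issuer sketch (invented format; the real server differs).

import base64
import hashlib
import hmac
import json

ISSUER_SECRET = b"demo-only-secret"  # a real provider would use an asymmetric key

def issue_helloworld_token(login_name: str) -> str:
    """Issue a signed token carrying a single made-up 'name' claim."""
    payload = json.dumps({"format": "HelloWorld", "name": login_name})
    sig = hmac.new(ISSUER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    envelope = json.dumps({"payload": payload, "sig": sig})
    return base64.b64encode(envelope.encode()).decode()

def verify_helloworld_token(token: str) -> dict:
    """Relying party checks the signature before trusting the claims."""
    envelope = json.loads(base64.b64decode(token))
    expected = hmac.new(ISSUER_SECRET, envelope["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, envelope["sig"]):
        raise ValueError("bad signature")
    return json.loads(envelope["payload"])

claims = verify_helloworld_token(issue_helloworld_token("alice"))
print(claims["name"])  # → alice
```

The whole round trip – issue, sign, verify, extract claims – fits in a few dozen lines, which is why a complete demonstration server can stay in the hundreds.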

To try it out, enter a login name and download a card (if you don't enter a name, you won't get an error message right now but the demonstration won't work later).  Once you have your card, click on the InfoCard icon here.  You'll see how the HelloWorld token is transferred to the relying party web site. 

This card uses passwords for authentication to the HelloWorld identity provider, and any password will do. 

Continue here…

Drummond Reed on CardSpace and OpenID

Amongst other things, Drummond is CTO of Cordance and co-chair of the OASIS XRI and XDI Technical Committees.  He's playing an important role in getting new identity ideas into the Internet Service Provider world.  Here he responds to my first convergence post:

Earlier this month Kim Cameron started blogging about some of the phishing concerns he’s had about OpenID that he and Mike Jones have shared with myself and other members of the OpenID community privately since Digital ID World last September. Given that anti-phishing protection is one of the greatest strengths of CardSpace, one of Kim’s and Mike’s suggestions has been for OpenID providers to start accepting CardSpace cards for customer authentication.

Today Kim blogged his proposed solution for integrating OpenID and InfoCard in detail. He does a wonderful job of it, making it very clear how using CardSpace and OpenID together can be a win/win for both. With Windows Vista shipping to consumers at the end of the month, and the CardSpace upgrade now available to XP users, this is a very practical solution to increasing OpenID security that I expect all XDI.org-accredited i-brokers (who all provide OpenID authentication service for i-name holders) to implement as soon as they can.

Kim closes his post by saying, “That said, I have another proposal [for integrating OpenID and CardSpace] as well.” That’s good, and I await it eagerly, because I too believe the integration can go much deeper, just as it can for OpenID and SAML. The heart of it is individuals and organizations being able to assert their own resolvable, privacy-protected digital identifiers. That’s the foundation of the OpenID framework, and the job for which we’ve been designing XRI i-names and i-numbers for the past five years. Microsoft’s current default CardSpace schema does not yet natively support XRIs as digital identifiers, but adding them could increase their power and utility and be another big step towards convergence on a unified Internet identity layer.

I'm going to clone myself so I can find more time to write up my second proposal.  Meanwhile, just a small clarification.  Drummond talks about the “default CardSpace schema”.  He's really talking about the “default Self-Issued Card schema.” 

CardSpace itself handles tokens of any type, containing claims of any type.  There are no limitations on your schema if you create a managed card.  I'll make that clearer in my next post. 

Further, we tried to keep the “bootstrap” Self-Issued Card provider down to a minimal set of initial schema choices – precisely to leave room for a managed card ecology.  But one of those initial claims is a URL…  I thought an i-name or i-number would be able to go there.  Is more needed?


Superpat and the third way

Pat Patterson leaps through the firmament to punctuate my recent discussion of minimal disclosure with this gotcha: 

But, but, but… how does the relying party know not to ask for givenname, surname and emailaddress the second (and subsequent) time round? It doesn't know that it's already collected those claims for that user, since it doesn't know who the user is yet…

In the case described by Pat, the site really does use a “registration” model like the one from BestBuy shown here. 

When registering you hand over your identity information, and subsequently you only “authenticate”. 

This is really the current model for how identity is handled by most web sites.  In other words the “Registration process” is completely separated from the “Returning user” process.

So the obvious answer to Pat's question is that when you press “create an account” above, you invoke an object tag that asks for the four attributes discussed earlier.  And if you press “Sign in”, you invoke an object tag that asks only for the PPID and then associates it with your stored information.  

In other words, there is no new problem and no new framework is required.
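The two pages can be sketched as follows.  The claim names are shortened and Python dictionaries stand in for the actual HTML object tags, so treat this purely as an illustration: registration asks for the full set of claims and stores them keyed by PPID; sign-in asks only for the PPID and looks everything else up.

```python
# Hypothetical sketch of the two request policies described above.

REGISTER_POLICY = {
    "requiredClaims": ["givenname", "surname", "emailaddress",
                       "privatepersonalidentifier"],
}
SIGNIN_POLICY = {
    "requiredClaims": ["privatepersonalidentifier"],
}

ACCOUNTS = {}  # PPID -> attributes stored at registration time

def handle_token(claims: dict, registering: bool) -> dict:
    """Relying-party side: store on registration, look up on sign-in."""
    ppid = claims["privatepersonalidentifier"]
    if registering:
        ACCOUNTS[ppid] = {k: v for k, v in claims.items()
                          if k != "privatepersonalidentifier"}
    # Returning users are associated with what was stored at registration.
    return ACCOUNTS[ppid]

# Registration hands over the full claim set once...
handle_token({"givenname": "Alice", "surname": "Smith",
              "emailaddress": "a@example.org",
              "privatepersonalidentifier": "ppid-123"}, registering=True)
# ...and every later sign-in presents only the PPID.
profile = handle_token({"privatepersonalidentifier": "ppid-123"},
                       registering=False)
print(profile["emailaddress"])
```

The relying party never needs to know in advance whether it has seen this user: the two buttons simply invoke two different policies.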

This doesn't prevent Pat from serving up a little irony:

If only there were some specification (perhaps part of some sort of framework) that, given a token from an authentication, allowed you to get the data you needed, subject, of course, to the user's permission. 

I guess it bothered Pat that I didn't include use of backend protocols as one of the options for reducing disclosure. 

I want to set this right.  I've said since the beginning that, as I saw it, the PPID (or other authenticated identifier) delivered by an InfoCard could also be used to animate a back-end protocol such as the one he's referring to.  That's one of the reasons I thought everyone should be able to rally behind these proposals.

The third option

So let me add a third alternative to the two I gave yesterday (storing locally or asking the user to resubmit through infocard).  The relying party could authenticate the user using InfoCard and then contact the identity provider with the user's PPID and ask it for the information the user has already agreed should be released to it.  This could be done using the protocols referred to by Pat. 
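As a sketch of this third option (all names invented; a real deployment would use a standardized attribute protocol of the kind Pat has in mind, and PPIDs are in any case scoped per relying party): the identity provider releases only the attributes the user has already agreed to share with that particular relying party.

```python
# Hypothetical back-channel attribute query, keyed by (PPID, relying party).

CONSENTED_ATTRIBUTES = {
    # (ppid, relying_party) -> attributes the user agreed to release to that RP
    ("ppid-123", "https://rp.example.org"): {"emailaddress": "a@example.org"},
}

def backchannel_attribute_query(ppid: str, relying_party: str) -> dict:
    """Identity provider releases only what this user approved for this RP."""
    return CONSENTED_ATTRIBUTES.get((ppid, relying_party), {})

# The RP that holds the user's consent gets the attributes...
attrs = backchannel_attribute_query("ppid-123", "https://rp.example.org")
print(attrs)
# ...while a different RP presenting the same identifier gets nothing.
print(backchannel_attribute_query("ppid-123", "https://other.example.org"))
```

The front channel (InfoCard authentication) establishes who is asking; the back channel does the fetching, gated by stored consent.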

My uberpoint is simple.  InfoCards are intended to be as neutral as possible in their technical assumptions (i.e. to be an identity platform) and can be used in many ways that make sense in different environments and use cases.

I don't personally agree that the back-end protocol route for obtaining attributes is either simpler or more secure than delivering the claims directly on an as-needed basis in the authentication token, but it is certainly possible and I'm sure it has its use cases.  I wonder if Pat's implementation of Information Cards, should there be one, will take this approach?  Interesting.


BBAuth and OpenID

From commented.org, here's a thoughtful piece by Verisign's Hans Granqvist on Yahoo's BBAuth:

Yahoo! released its Browser-based authentication (BBAuth) mechanism yesterday. It can be used to authenticate 3rd party webapp users to Yahoo!’s services, for example, photo sharing, email sharing.

Big deal, huh?

The kicker is this though. You can use BBAuth for simple single sign-on (SSO). Most 3rd party web app developers would love to have someone deal with the username and password issues. Not storing users’ passwords means much less liability, much less programming, much less trouble.

Now Yahoo! gives you a REST-based API to do just that.

It will be interesting to see how this plays out against OpenID. They are both very similar. Granted there is some skew: OpenID is completely open, both for consumers and providers of identity.

However, from my own experience, OpenID consumers (a.k.a. relying parties) seem to want only one thing, perhaps two or three:

  • have someone deal with your users’ passwords,
  • retrieve name and email address for a user

And now Yahoo! does the first, and the second is available. At the same time they’re making your app reachable to 257 million+ users. Here’s an example.

Seems a pretty big reason to implement it for the web app developer, especially since it is such an easy API you can integrate it in an hour or two.

And yet someone has added a sobering comment to Hans’ blog:

It will be interesting to see how long it takes for adoption to reach the point that no one thinks twice when a yahoo login pops up on another site. They'll be nice and ripe for password harvesting via fake yahoo login forms then. 🙂

Sadly, if I had written this comment I would not have included the happy face. Until the security concerns are addressed, despite Yahoo's very laudable openness, this is not a happy-face moment.

But through Yahoo-issued InfoCards, BBAuth would avoid the loss of context that will otherwise lead to password harvesting.  It's a good concrete example of how the various things we're all working on are synergistic if we combine them.