How to get a branded InfoCard?

A lot of people have asked me, “But how does a user GET a managed Information Card?”  They are referring, if you are new to this discussion, to obtaining the digital equivalent of the cards in your wallet.

I sometimes hum and haw because there are so many ways you can potentially design this user experience.  I explain that, technically, the user could just click a button on a web page, for example, to download and install a card.  She might need to enter a one-time password to make sure the card ends up in the hands of the right person.  And so on.

But now that production web sites are being built that accept InfoCards, we should be able to stop arm-waving and study actual screen shots to see what works best.  We'll be able to compare and contrast real user experiences.

Vittorio Bertocci, a colleague who blogs at vibro.net, is one of the people who has thought about this problem a lot.  He has worked with the giant European e-tailer Otto on a really slick and powerful smart client that solves the problem.  Otto and Vittorio are really on top of their technology.  It's just a day or two after Vista's launch and they have already deployed their first CardSpace application – using branded Otto cards. 

Here's what they've come up with.  I know there are a whole bunch of screen shots, but bear in mind that this is a “registration” process that only occurs once.  I'll hand it over to Vittorio…

The Otto Store is a smart client that gives users a rich way to browse and experience Otto's product catalog.  Those functions are available to everyone who downloads and installs the application (it is in German only, but it is very intuitive and a true pleasure for the eye). The client also allows purchase of products directly from the application; this function, however, is available only to Otto customers. To be able to buy, you need to be registered with http://www.otto.de/.

That said: while buying on http://www.otto.de/ means using a username and password, the new smart client uses Otto managed cards. The client offers a seamless provisioning experience through which Otto customers can obtain an Otto managed card, backed by a self-issued card of their choice, and associate it with their account.
Let's take a closer look at the sequence of screens.

Above you see the main screen of the application. You can play with it; I'll leave the sequence for adding items to the shopping cart to your playful explorations (hint: once you've chosen what to buy, you can reach the shopping cart area via the icon on the right end).

Above you see the shopping cart screen. You can check out by pressing the button circled in red.

Now we start talking business! On the left you are offered the chance of transferring the shopping cart to the otto.de web store, where you can finalize the purchase in the “traditional” way. If you click the circled button on the right, though, you will be able to go on with the purchase without leaving the smart client, securing the whole thing via CardSpace. We click this button.

This screen asks you if you already own an Otto card or if you need to get one. I know what you are thinking: can't the app figure that out on its own? The answer is a resounding NO. The card collection is off limits to everybody but the interactive user: the application HAS to ask. Anyway, we don't have an Otto card yet, so the question is immaterial. Let's go ahead and get one. We click on the circled button on the left.

This screen explains to the (German-speaking) user what CardSpace is. It also explains what is going to happen in the provisioning process: the user will choose one of his/her personal cards, and the client will give back a newly created Otto managed card. We go ahead by clicking the circled button.

Well, that's the end of the line for non-Otto customers. This screen allows the user to 1) enter credentials (customer number and birth date) to prove that they are an Otto customer and 2) protect the message containing the above information with a self-issued card. This is naturally done via my beloved WS-Security and seamless WCF-CardSpace integration. Entering the credentials and pushing the button in the red circle will start the CardSpace experience:

The first thing you get is the “trust dialog” (I think it has a proper name, but I can't recall it right now). Here you can see that Otto has a VeriSign EV certificate, which is great. I definitely believe it deserves my trust, so I'm going ahead and choosing to send my card.

 

Here is my usual card collection. I'm going to send my “Main Card”: let's see which claims the service requires by clicking “Preview”.

Above you can see details about my self-issued card. Specifically, we see that the only claim requested is the PPID. That makes sense: the self-issued card will be used for generating the Otto managed card, so the PPID comes in handy. Furthermore, it makes sense that no demographic info is requested: that has been acquired already, through a different channel. What we are asked for is almost a bare token, the second authentication factor that will support our use of our future Otto managed card. Let's click “Send” and perform our WS-Security-protected call.

The call was successful: the credentials were recognized and the token associated with the self-issued card was parsed; as a result, the server generated my very own managed card and sent it back. Here there is something to think about: since this is a rich client application, we can give the user a seamless provisioning experience. I clicked “Send” in the former step, and I find myself in the Import screen without any intermediate step. How? The bits of the managed card (the famous .CRD file) are sent as the result of the WS call; the rich client can then “launch” it, opening the CardSpace import experience (remember, there are NO APIs for accessing the card store: everything must be explicitly decided by the interactive user). What really happens is that the CardSpace experience closes (when you hit Send) and reopens (when the client receives the new card) without any action required in the middle. If the same process happened in a browser application, this would not be achievable. If the card is generated in a web page, the card bits will probably be supplied as a link to a .CRD file: when the user follows the link, the browser will prompt him/her to choose whether to save the file or open it. It's a simple choice, but it's still a choice.
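In rough pseudocode (Python here just to sketch the idea – the real client is a WCF smart client, and the file name is made up), the trick amounts to:

```python
import os
import tempfile

def save_managed_card(crd_bytes):
    """Write the card bytes returned by the WS call to a .crd file.

    The rich client then "launches" the file (os.startfile on Windows),
    which opens the CardSpace import screen -- the only way in, since
    there is no API for writing to the card store directly.
    """
    path = os.path.join(tempfile.mkdtemp(), "card.crd")
    with open(path, "wb") as f:
        f.write(crd_bytes)
    return path
```

The point is that the client never touches the card collection itself; it only hands the bytes to the operating system and lets the interactive user decide.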
Back to the matter at hand: I'm proud of my brand new Otto card, and I certainly click “Install” without further ado. The screen changes to a view of my card collection with its new citizen, which I will spare you here. You can close it and get back to the application.

The application senses that we successfully imported the managed card (another thing that would be harder with web applications). The provisioning phase is over; we won't have to go through this again when we come back to the store in the future.
At this point we can click on the circled button and give our new Otto card a test drive: that will finally secure the call that starts the checkout process.

Here we have my new card collection: as expected, the Otto card is the only card I can use here. Let's preview it:

Here is the set of claims in the Otto card: I have no trouble admitting that, apart from “Strasse” and “Kundennummer”, I have no idea what those names mean :).
I could click “Retrieve” and get the values, but at the time of writing there is no display token support yet. That's nothing to worry about in terms of functionality: you still get the token that will be used for securing the call. Click the circled button to make the token available to WCF for securing the call.
Please note: if you are behind a firewall you may experience some difficulties at this point. If you have problems retrieving the token, try the application from home and that should fix it.

That's it! We used our Otto managed card, and the backend recognized us and opened the session; from this moment on we can proceed as usual. Please note that we have only authenticated; we haven't made any payment yet: the necessary information about that will be gathered in the next steps.
Also note that it was enough to choose the Otto managed card to perform the call; we weren't prompted for further authentication. That is because we backed the Otto card with a personal card that is not password protected, so the identity selector was able to use the associated personal card directly. Had it been password protected, the selector would have prompted the user to enter the password before using the managed card.

It definitely takes longer to explain than to do. In fact, the experience is incredibly smooth (including the card provisioning!).

Kim, here's some energy for fueling the Identity Big Bang. How about that? 🙂

That's great, Vittorio.  I hear it. 

By the way, Vittorio is a really open person so if you have questions or ideas, contact him.

 

Resending of personal data with InfoCards

Eric Schultz writes with this question: 

I've been investigating CardSpace and the practicality of its use for login on a new social networking site.

I have a question regarding the method through which data is transferred. I see that you can require certain claims from an InfoCard such as email, first and last name, zip code etc. When I look at the login code I see that the same claims are required again.

Does this mean that each time an InfoCard is sent all the personal data is resent? Isn't this dangerous for security/privacy? The potential for a server failure (malicious or not) caused by a buffer overflow, a coding mistake that outputs the details of session variables etc. seems rather risky in this scenario.

Perhaps I am being alarmist?

This is an area in which being “an alarmist” – perhaps I will rephrase it as being thoroughly pessimistic about what can go wrong – is the best starting point.  Your questions are ones everyone should think about.

InfoCard and Minimal Disclosure

The simple answer is that there is nothing built into InfoCard concepts that requires a “relying party” to ask for attributes every time a user comes to its site.  Let's first look at the mechanics. 

The relying party controls what attributes it asks for by putting an OBJECT tag in the HTML page where the user opts to use an infocard.
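An OBJECT tag of this kind looks roughly as follows (a sketch, not production markup: the claim URIs below are the standard self-issued claim types, but which claims to request – and the form around the tag – are the relying party's choice):

```html
<form method="post" action="https://example.com/login">
  <object type="application/x-informationcard" name="xmlToken">
    <param name="tokenType"
           value="urn:oasis:names:tc:SAML:1.0:assertion" />
    <param name="requiredClaims"
           value="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname
                  http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname
                  http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress
                  http://schemas.xmlsoap.org/ws/2005/05/identity/claims/privatepersonalidentifier" />
  </object>
  <input type="submit" value="Log in with a card" />
</form>
```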

The example shown here will bring up the infocard dialog and illuminate any cards that offer all four claims so the user can select one. 

If, next time, the relying party doesn't want to receive these claims, it just doesn't ask for them.  If it has stored them, it should be able to retrieve them when necessary by using  “privatepersonalidentifier” as a handle.  This identifier is just a random pairwise number meaningless to any other site, and so there is no identity risk in using it.
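As a sketch of that pattern (hypothetical names, not any particular site's code): ask for the full claim set once at registration, then key the stored record by the pairwise PPID on later visits:

```python
user_store = {}  # PPID -> claims stored at registration

def register(ppid, claims):
    """First visit: the site asked for all the claims it needs, once."""
    user_store[ppid] = claims

def login(ppid):
    """Later visits: the token need only carry the pairwise PPID;
    everything else is looked up rather than re-sent."""
    return user_store.get(ppid)
```

Because the PPID is a random pairwise identifier, the stored record is meaningless to any other site even if the key leaks.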

No theoretical bias 

In other words, the InfoCard system has no theoretical bias about what information should be asked for when.  Through the Laws of Identity we have tried to help people understand that they should only ask for what they need to complete a transaction and should only keep it for the length of time they absolutely must. 

In particular, there should be no hoarding of rainy-day information – information that “might come in handy” some day – but which is more likely to turn into a liability than into a benefit.

Do your risk analysis 

You'll need to do the conventional risk analysis and think about whether it is more dangerous to store the information or just ask for it on an “as-needed” basis and then forget it.  My personal sense is that it is more dangerous to store it than to use an on-demand approach. 

A central machine with the stored information that animates a successful internet business is a honeypot.  It could well be subject to insider attacks, and certainly, since it lives on the internet, will be subject to many attacks on the information it stores.  Why not avoid these problems completely?

Certainly, the on-demand approach has benefits in convincing customers and legal practitioners that, having held no identity information, you cannot be seen as being responsible for an identity meltdown.  To me this is very attractive, and something that has not been possible until now.

Conclusion

The examples Eric gives of things that can go wrong seem to me to apply even more strongly if you have stored information locally than if you ask for it on demand.

But as I said earlier, this just expresses my thinking – there is lots more to be written by Eric and hundreds of others as they develop applications. 

Meanwhile, InfoCard has no built-in assumptions around this and can be used in whatever way is appropriate to a given situation.

 

As simple as possible – but no simpler

Gunnar Peterson recently showed me a posting by Andy Jaquith (securitymetrics.org and JSPWiki) that combines coding, wiki, identity, CardSpace, cybervandals, and OpenID all in one post – a pretty interesting set of memes.  Like me, he finally shut off anonymous commentary because he couldn't stand the spam workload, and is looking for answers:

Last week's shutoff of this website's self-registration system was something I did with deep misgivings. I've always been a fan of keeping the Web as open as possible. I cannot stand soul-sucking, personally invasive registration processes like the New York Times website. However, my experience with a particularly persistent Italian vandal was instructive, and it got me thinking about the relationship between accountability and identity.

Some background. When you self-register on securitymetrics.org you supply a desired “wiki name”, a full name, a desired login id, and optionally an e-mail address for password resets. We require the identifying information to associate specific activities (page edits, attachment uploads, login/logout events) with particular members. We do not verify the information, and we trust that the user is telling the truth. Our Italian vandal decided to abuse our trust in him by attaching pr0n links to the front page. Cute.

The software we use here on securitymetrics.org has decent audit logs. It was a simple matter of identifying the offending user account. I know which user put the porn spam on the website, and I know when he did it. I also know when he logged in, what IP address he came from, and what he claimed his “real” name was. But although I've got a decent amount of forensic information available to me, what I don't have is any idea whether the person who did it supplied real information when he registered.

And therein lies the paradox. I don't want to keep personal information about members — but at the same time, I want to have some degree of assurance that people who choose to become members are real people who have serious intent. But there's no way to get any level of assurance about intent. After all, as the New Yorker cartoon aptly put it, on the Internet no one knows if you're a dog. Or just a jackass.

During the holiday break, I did a bit of thinking and exploration about how to address (note I do not say “solve”) issues of identity and accountability, in the context of website registration. Because I am a co-author of the software we use to run this website (JSPWiki), I have a fair amount of freedom in coming up with possible enhancements.

One obvious way to address the self-registration identity issue is to introduce a vetting system into the registration process. That is, when someone registers, it triggers a little workflow that requires me to do some investigation on the person. I already do this for the mailing list, so it would be a logical extension to do it for the wiki, too. This would solve the identity issue — successfully vetting someone would enable the administrator to have much higher confidence in their identity claims, albeit with some sacrifice.

There's just one problem with this — I hate vetting people. It takes time to do, and I am always far, far behind.

A second approach is to not do anything special for registration, but moderate page changes. This, too, requires workflow. On the JSPWiki developer mailing lists, we've been discussing this option quite a bit, in combination with blacklists and anti-spam heuristics. This would help solve the accountability problem.

A third approach would be to accept third-party identities that you have a reasonable level of assurance in. Classic PKI (digital certificates) is a good example of a third-party identity that you can inspect and choose to trust or not. But client-side digital certificates have deployment shortcomings. Very few people use them.

A promising alternative to client-side certificates is the new breed of digital identity architectures, many of which do not require a huge, monolithic corporate infrastructure to issue. I'm thinking mostly of OpenID and Microsoft's CardSpace specs. I really like what Kim Cameron has done with CardSpace; it takes a lot of the things that I like about Apple's Keychain (self-management, portability, simple user metaphors, ease-of-use) and applies it specifically to the issue of identity. CardSpace's InfoCards (I have always felt they should be called IdentityCards) are kind of like credit cards in your wallet. When you want to express a claim about your identity, you pick a card (any card!) and present it to the person who's asking.

What's nice about InfoCards is that, in theory, these are things you can create for yourself at a registrar (identity provider) of your choice. InfoCards also have good privacy controls — if you don't want a relying party (e.g., securitymetrics.org) to see your e-mail identity attribute, you don't have to release that information.

So, InfoCards have promise. But they use the WS-* XML standards for communication (think: big, hairy, complicated), and they require a client-side supplicant that allows users to navigate their InfoCards and present them when asked. It's nice to see that there's a Firefox InfoCard client, but there isn't one for Safari, and older versions of Windows are still left out in the cold. CardSpace will make its mark in time, but it is still early, methinks.

OpenID holds more promise for me. There are loads more implementations available (and several choices for Java libraries), and the mechanism that identity providers use to communicate with relying parties is simple and comprehensible by humans. It doesn't require special software because it relies on HTTP redirects to work. And best of all, the thing the identity is based on is something “my kind of people” all have: a website URL. Identity, essentially, boils down to an assertion of ownership over a URL. I like this because it's something I can verify easily. And by visiting your website, I can usually tell whether the person who owns that URL is my kind of people.

OpenID is cool. I got far enough into the evaluation process to do some reasonably serious interoperability testing with the SXIP and JanRain libraries. I mocked up a web server and got it to successfully accept identities from the Technorati and VeriSign OpenID services. But I hit a few snags.

Recall that the point of all of this fooling around is to figure out a way to balance privacy and authenticity. By “privacy”, I mean that I do not want to ask users to disgorge too much personal information to me when they register. And correspondingly, I do not want the custodial obligation of having to store and safeguard any information they give me. The ideal implementation, therefore, would accept an OpenID identity when presented, dynamically collect the attributes we want (really, just the full name and website URL) and pull them into our in-memory session, and flush them at the end of the session. In other words, the integrity of the attributes presented, combined with transience, yields privacy. It's kind of like the front-desk guard I used to see when I consulted to the Massachusetts Department of Mental Health. He was a rehabilitated patient, but his years of illness and heavy treatment left him with no memory for faces at all. Despite the fact I'd visited DMH on dozens of occasions, every time I signed in he would ask “Have you been here before? Do you know where you are going?” Put another way, integrity of identity + dynamic attribute exchange protocols + enforced amnesia = privacy.

By “authenticity” I mean having reasonable assurance that the person on my website is not just who they say they are, but that I can also get some idea about their intentions (or what they might have been). OpenID meets both of these criteria… if I want to know something more about the person registering or posting on my website, I can just go and check ’em out by visiting their URL.

But, in my experiments I found that the attribute-exchange process needs work… I could not get VeriSign's or Technorati's identity provider to release to my relying website the attributes I wanted, namely my identity's full name and e-mail addresses. I determined that this was because neither of these identity providers support what the OpenID people call the “Simple Registration” profile aka SREG.

More on this later. Needless to say, I am encouraged by my progress so far. And regardless of the outcome of my investigations into InfoCard and OpenID, my JSPWiki workflow library development continues at a torrid pace.

Bottom line: once we have a decent workflow system in place, I'll open registrations back up. And longer term, we will have some more open identity system choices.

Hmmm.  Interesting thoughts that I want to explore more over the next while.

Before I get to the nitty-gritty, please note that CardSpace and InfoCards do NOT require a wiki or web site to use WS-* protocols.

The system supports WS-*, which gives it the ability to handle upper-end scenarios, but doesn't require it and can operate in a RESTful mode! 

So the actual effort required to implement the client side is on the same order of magnitude as for OpenID.  But I agree there are not very many open-source options out there for doing this yet – requiring more creativity on the part of the implementor.  I'm trying to help with this.
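For concreteness, here is a minimal sketch of what the relying-party side of the browser flow amounts to (Python, with a hypothetical validation helper): the selected card arrives as an ordinary form POST whose field name matches the OBJECT tag's name attribute (conventionally xmlToken), and the site decrypts and validates it:

```python
def handle_card_login(form, validate_token):
    """Minimal relying-party handler for a CardSpace form POST.

    validate_token stands in for the site's own routine: decrypt the
    token with the site's SSL private key, verify the signature, and
    return the claims it carries.
    """
    token_xml = form.get("xmlToken")  # name set in the OBJECT tag
    if not token_xml:
        raise ValueError("no card token was submitted")
    return validate_token(token_xml)
```

Everything CardSpace-specific lives in the validation step; the plumbing is just a form handler, which is why the client-side effort is comparable to an OpenID consumer.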

It's also true that InfoCards require client software (although there are ways around this if someone is enterprising: you could build an infocard selector that “lives in the cloud”).

But the advantages of InfoCard speak clearly too.  Andy Jaquith would find that release of user information is built right in, and that “what you see is what you get” – user control.  Further, the model doesn't expose the user to the risk that personal information will become public by being posted on the web.  This makes it useful in a number of applications which OpenID can't handle without a lot of complexity.

But what's the core issue? 

InfoCards change the current model in which the user can be controlled by an evil site.  OpenID doesn't.

If a user ends up at an evil site today, it can pose as a good site known to the user by scooping the good site's skin, so the user is fooled into entering her username and password.

But think of what we unleash with OpenID…

It's way easier for the evil site to scoop the skin of a user's OpenID service because – are you ready? – the user helps out by entering her honeypot's URL!

By playing back her OpenID skin the evil site can trick the user into revealing her creds.  But these are magic creds,  the keys to her whole kingdom!  The result is a world more perilous than the one we live in now.

If that isn't enough, evil doers armed with identifiers and ill-gotten creds can then crawl the web to see where the URL they have absconded with is in play, and break into those locations too.

The attacks on OpenID all lend themselves to automation…

One can say all this doesn't matter because these are low-value identities, but I think it is a question of setting off on the wrong foot unless we build the evolution of OpenID into it.

It will really be a shame if all the interest in new identity technology leads to security breaches worse than those that currently exist, and brings about a further demoralization of the user.

I'd like to see OpenID and InfoCard technologies come together more.  I'll be presenting a plan for that over the next little while.

 

Firefox support for Windows CardSpace

Via Mary Jo Foley's Unblinking Eye on Microsoft, here's a piece on a new Firefox plugin supporting CardSpace on Windows platforms:

A new plug-in providing Firefox support for Microsoft's CardSpace digital-identity framework is now available for public download.

Solution architect Kevin Miller, who played an instrumental role in developing the technology, announced the availability of his Firefox add-on for Windows via his blog.

Why is Firefox support important? Until there is more third-party support for Microsoft's CardSpace, it will be slow to gain traction. But until CardSpace gains more traction, developers will be reluctant to build applications that make use of it. CardSpace, the technology formerly known as InfoCard, is a key piece of Microsoft's proposed Internet-wide identity metasystem.

CardSpace is designed to store centrally an individual's multiple digital identities used to log into different secure sites (not quite – see below). Microsoft built support directly into Internet Explorer 7.0. And CardSpace is one of the elements of the .Net Framework 3.0, which Microsoft introduced alongside Windows Vista and has back-ported to Windows XP and Windows Server 2003.

Miller blogged: “You can download the (Mozilla CardSpace) extension here for now. I'll jump through the hoops over at addons.mozilla.org this week, and hopefully it will be available there soon. I'll post an update when it is there. I've also set up a project over at Codeplex (as mentioned briefly in my first post), and will get the code posted there in the next day or so.”

Scott Hanselman, chief architect with Corillian Corp., a Hillsboro, Ore.-based financial-services integrator, is bullish about CardSpace's prospects. “CardSpace is going to change it all. It’s likely the biggest thing to happen to security since HTTPS. CardSpace changes the game for consumers. It’ll take a few years, but after IE7 and FireFox and MacOSX all have CardSpace implementations – and it won’t take long – we’ll see Identity 2.0 happen,” Hanselman said.

“If you contrast (CardSpace) with the way certificate management has traditionally worked on nearly any OS, it’s significant because it makes secure certificates accessible to my mom. CardSpace and its related specifications make a secure identity experience accessible, both to the user and to the programmer.”

There are a number of other third-party companies and coalitions beyond Mozilla developing CardSpace-compatible providers and support.  Even Microsoft's own Windows Live team is looking to throw its backing behind the effort. The Windows Live ID team is developing a security token service (STS) that supports CardSpace, according to Richard Turner, senior product

(Live ID is the successor to Microsoft's Passport authentication system.)

“We need a proliferation of ID providers,” Turner said. “LiveID is just one of those providers.”

Mary Jo does an admirable job of describing what's at stake here, though I need to remind readers that CardSpace doesn't actually store digital identities centrally – it just stores metadata (pointers) to identity providers.  This may sound like nit-picking, but it's hugely important in mitigating risk. 

The main point is that Kevin has done some great work here.  I've used his plugin with Firefox and it works beautifully.  If you use Firefox on Windows, give it a try.

I'll try to start compiling a list of resources so people can easily see what to download given various configurations and predilections.

 

Seems we agree

I have to answer Kveton's response to my last posting just because he answered my answer as fast as I answered his! 

Kim: you’re officially the fastest person in the world at responding to blog posts … 🙂

Yes, could this be a problem you create when it gets too easy to log in?

But wait, Kveton continues:

I’ve always said I’m for interoperability … heck, I’ve made a living at it. Choice for the user is always a good thing.

My answer? You build interfaces and test them. You look at the numbers. You test phishing approaches on a wide assortment of people. You find out what works and doesn’t, and keep evolving the interface. If we take this as a starting point, we’ll all end up agreeing.   

The problem with redirection within the conventional browser is there is no way to know for sure where you’ve ended up – especially if you aren’t a network engineer.

I actually think we're in agreement here; we both want to find the best experience for end-users, and it's going to require their involvement to make that happen. Just as InfoCard may not be the end-all-be-all, so too could be the same for OpenID. Either way, both move the ball forward and conversations are happening to make sure interoperability occurs.

There is wisdom in this. But if Kveton is against giving the InfoCard visual metaphor a try, then I don't get it. It does nothing to undermine OpenID.

I’m all for trying InfoCard visual metaphor. I’m just trying to figure out how you drive adoption of such a different paradigm, hence my comments on iterative development and the OpenID process.

Those are all legitimate concerns.  I'm trying to do a lot in one go.  I realize it is “somewhat ambitious”.  But what have personal computers been about since the get-go?  Haven't they always seemed ambitious?

Meanwhile Pamela Dingle posted another comment to which I subscribe as well:

Heaven forbid we ever end up with only one solution anyways — how dead boring would that be?

I’m glad there is choice & competition in this area – it means that nothing is being shoved down anyone’s throat, and that the field is still open for further improvement. It also means that nobody is taking the direction for granted, which I think is a healthy thing. Not to mention, it makes identity conferences ever so much more exciting :)

Agreed.  And there's lots of room to keep innovating for a long time.

BBAuth and OpenID

From commented.org, here's a thoughtful piece by Verisign's Hans Granqvist on Yahoo's BBAuth:

Yahoo! released its Browser-based authentication (BBAuth) mechanism yesterday. It can be used to authenticate 3rd party webapp users to Yahoo!’s services, for example, photo sharing, email sharing.

Big deal, huh?

The kicker is this, though: you can use BBAuth for simple single sign-on (SSO). Most 3rd-party web app developers would love to have someone else deal with the username and password issues. Not storing users' passwords means much less liability, much less programming, much less problem.

Now Yahoo! gives you a REST-based API to do just that.

It will be interesting to see how this plays out against OpenID. They are both very similar. Granted, there is some skew: OpenID is completely open, both for consumers and providers of identity.

However, from my own experience, OpenID consumers (a.k.a. relying parties) seem to want only one thing, perhaps two or three:

  • have someone deal with your users’ passwords,
  • retrieve name and email address for a user

And now Yahoo! does the first, and the second is available. At the same time they’re making your app reachable to 257 million+ users. Here’s an example.

Seems a pretty big reason for the web app developer to implement it, especially since the API is so easy you can integrate it in an hour or two.

And yet someone has added a sobering comment to Hans’ blog:

It will be interesting to see how long it takes for adoption to reach the point that no one thinks twice when a yahoo login pops up on another site. They'll be nice and ripe for password harvesting via fake yahoo login forms then. 🙂

Sadly, if I had written this comment I would not have included the happy face.  Until the security concerns are addressed, despite Yahoo's very laudable openness, this is not a happy-face moment.

But through Yahoo-issued InfoCards, BBAuth could avoid the loss of context that would otherwise lead to password harvesting.  It's a good concrete example of how the various things we're all working on are synergistic if we combine them.

 

Could the world be upside down?

In my last post I shared Jon Udell's conversation about “translucent databases” as a way to protect us from identity catastrophes.  He mentions a lender (e.g. Prosper) who needs information from a credit bureau (e.g. Equifax) about a borrower's reputation.

I'll start by saying that I see the credit bureau as an identity provider that issues claims about a subject's financial reputation.  The lender is a relying party that depends on these claims.

The paradigm currently used is one where the borrower reveals his SSN (and other identifying information) to the lender, who then sends it on to the credit bureau, where it is used as a key to obtain further reputation and personal information.  In other words, the subject deals with the lender, and the lender deals with the credit bureau, which returns information about the subject.

There are big potential problems with this approach.  The lender initially knows nothing about the subject, so it is quite possible for the borrower to pose as someone else.  Further, the borrower releases an SSN to the lender – as each of us has given ours away in thousands of similar contexts – so if the SSN might once have been considered secret, it becomes progressively better known with every passing day.

What's next?  The lender uses this non-secret to obtain further private information from the identity provider – and since the user is not involved, there is no way he or she can verify that the lender has any legitimate reason to ask for that information.  Thus a financial institution can ask for credit information prior to spamming me with a credit card I have not applied for and do not want.  Worse still, as happened in the case of Choicepoint, an important opportunity to determine that criminals are phishing for information is lost when the subject is not involved.

Jon proposed ways of changing the paradigm a bit.  He would obfuscate the SSN such that a service operated by the user could later fill it in on its way from the lender to the credit bureau.  But he actually ends up with a more complex message flow.  To me the proposal has a lot of moving parts, and leaves us wondering how the service operating on behalf of the user would know which lenders were authorized.  Finally, it doesn't answer Prosper's claim that it needs the SSN anyway to submit tax information.

Another simpler paradigm

I hate to be a one-trick pony, but “click, clack, neigh, neigh”.  What if we tried a user-centric model?  Here's a starting point for discussion:

The borrower asks the lender for a loan, and the lender tells him which credit bureaus it will accept a reputation from. 

The borrower then authenticates to one of those credit bureaus.  Since the bureaus know a lot more about him than the lender does, they can do a much better job of identifying and authenticating him than the lender can.  In fact, this is one reason the lender is interested in the credit bureau in the first place.

The credit bureau could even facilitate future interactions by giving the subject an InfoCard usable for subsequent credit checks and so on.  (Judging by the email I constantly get from Equifax, it looks like they really want to be in the business of having a relationship with me, so I don't think this is too far-fetched as a starting point).

After charging the borrower a fee, the credit bureau would give out a reputation coupon encrypted to the lender's key.

The coupon would include the borrower's SSN encrypted for the Tax Department (but not visible to the lender).  The coupon might or might not be accompanied by a token visible to the borrower; the borrower could be charged extra to see this information (let's give the credit bureaus some incentive for changing their paradigm!).

When the lender gets the coupon, it decrypts it and gains access to the borrower's reputation.  It stores the encrypted version of the borrower's SSN in its database (thus Jon's goal of translucency is achieved).  At the end of the year it sends this encrypted SSN to the tax department, which decrypts it and uses it as before.  The lender never needs to see it.
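To make the translucency concrete, here is a toy sketch of the coupon.  The XOR-based seal() is a deliberately simplistic stand-in for real public-key encryption (encrypting to the lender's and Tax Department's public keys), and every name, key, and value in it is illustrative:

```python
import json
from base64 import b64encode, b64decode

# Toy "seal for a recipient" stand-in.  A real system would encrypt to the
# recipient's public key (e.g. RSA-OAEP); a per-party XOR pad just keeps
# this sketch self-contained and runnable.
KEYS = {"lender": b"lender-key", "taxdept": b"tax-key"}

def seal(recipient, plaintext):
    key = KEYS[recipient]
    data = bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext.encode()))
    return b64encode(data).decode()

def unseal(recipient, sealed):
    key = KEYS[recipient]
    data = b64decode(sealed)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data)).decode()

def issue_coupon(reputation, ssn):
    """Credit bureau: reputation readable by the lender, SSN only by tax dept."""
    body = dict(reputation, ssn_for_taxdept=seal("taxdept", ssn))
    return seal("lender", json.dumps(body))

# The lender decrypts the coupon, sees the reputation, and stores the
# still-opaque SSN blob in its database - Jon's translucency.
coupon = issue_coupon({"score": 720}, "078-05-1120")
visible = json.loads(unseal("lender", coupon))
assert visible["score"] == 720
# At year end the opaque blob goes to the tax department, which can read it.
assert unseal("taxdept", visible["ssn_for_taxdept"]) == "078-05-1120"
```

The lender never handles a cleartext SSN at any point, yet its tax reporting still works unchanged.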

All of this can be done very simply with Information Card technology.  The borrower's experience would be that Prosper's web site would ask for an Equifax InfoCard.  If he didn't have one, he could get one from Equifax or choose to use the old-world, privacy-unfriendly mechanisms of today.

Once he had an InfoCard, he would use it to authenticate to Equifax and obtain the token encrypted for Prosper.  One of the claims generated when using the Equifax card would be the SSN encrypted for the Tax Department. 

When you use an Information Card, the identity selector contacts the identity provider to ask for the token.  This is how the credit bureau can return the up-to-date status of the borrower.  This is also how it knows how to charge the borrower, and possibly, the lender.

InfoCard protocol flow
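The selector-to-provider round trip just described can be modeled in miniature.  In the real system this is a WS-Trust token request; the class, field, and party names below are illustrative stand-ins, not any actual API:

```python
# Toy model of the identity selector contacting the credit bureau's
# identity provider each time a card is used.  Because the token is
# minted fresh per request, the reputation is always current and the
# bureau has a natural point at which to meter or charge for the service.
class CreditBureauIdP:
    def __init__(self):
        self.ledger = []  # records who was charged, and for which relying party

    def current_score(self, subject):
        return 720        # stand-in for a live reputation lookup

    def issue_token(self, subject, relying_party):
        self.ledger.append((subject, relying_party))
        return {
            "audience": relying_party,                     # token scoped to the lender
            "claims": {"score": self.current_score(subject)},
        }

idp = CreditBureauIdP()
token = idp.issue_token("alice", "prosper.example")
```

The point is structural: the subject sits in the middle of every disclosure, so the bureau can no longer be queried about someone behind their back.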

In my view, the problem Jon has raised for discussion is one of a great many that have surfaced because institutions “elided” users from business interactions.  One of the main reasons for this is that institutions had computers long before it could be assumed that individuals did. 

It will take a while for our society to rebalance – and even invert some paradigms – given the fact that we as individuals are now computerized too.

A take on Microsoft, OSP and Open Source

Here is how Martin LaMonica from CNET interprets the Open Specification Promise:

The software giant on Tuesday published the Microsoft Open Specification Promise, a document that says that Microsoft will not sue anyone who creates software based on Web services technology, a set of standardized communication protocols designed by Microsoft and other vendors.


Reaction to the surprise news was favorable, even from some of Microsoft's rivals.

“The best thing about this is the fundamental mind shift at Microsoft. A couple of years ago, this would have been unthinkable. Now it is real. This is really a major change in the way Microsoft deals with the open-source community,” said Gerald Beuchelt, a Web services architect working in the Business Alliances Group in Sun Microsystems’ chief technologist's office.

Microsoft has never sued anyone for patent infringement related to Web services. But its pledge not to assert the patents alleviates lingering concerns among developers who feared potential legal action if they incorporate Web services into their code, said analysts and software company executives.

Open-source developers, for example, should have fewer worries about writing open-source Web services products. Also, other software companies could create non-Windows products that interoperate with Microsoft code via Web services.

The move reflects how Microsoft has had to come to terms with open-source products and development models.

When Linux began to take hold in the late 1990s, company executives seemed shaken by the shared code foundations of the open-source model. CEO Steve Ballmer famously called Linux a “cancer,” while founder Bill Gates derided the “Pacman-like” nature of open-source licensing models.

Other Microsoft executives, such as Windows development leader Jim Allchin, have in years past painted open source as “an intellectual property destroyer.”

But in the past two years, Microsoft has stepped up its Shared Source program, in which it gives free access to source code under terms similar to those in popular open-source licenses. It has also said it will make Windows-based products work better with those from other vendors, including Linux and other open-source software.

Standards in play
To be sure, Microsoft, which spends more than $6 billion a year on research and development, remains committed to generating proprietary intellectual property. In some cases, that means commercial licensing, rather than opening up access to others.

“In the future, I am sure we will take positions on IP (intellectual property) that will not be so agreeable to various constituencies,” wrote Jason Matusow, Microsoft's director of standards affairs, in his blog.

In the case of Web services, having a pledge not to assert patents around these protocols–which are the communications foundation of Vista, the next version of Windows due early next year–helps drive adoption of those standards in the marketplace, said analysts and software company executives.

Open-source projects, in particular, have become powerful forces within the industry for establishing standards, both de facto and those sanctioned by standards bodies.

“I expect that more and more vendors will realize that a software standard cannot be successful if the relevant patents are incompatible with open-source licenses and principles,” said Cliff Schmidt, vice president of legal affairs at the Apache Software Foundation, which hosts several open-source projects.

Patent pledges of various forms have become more common, he noted. Sun recently said that it would not assert patents relating to the SAML (Security Assertion Markup Language) standard and the OpenDocument Format. IBM gave open-source communities access to 500 patents last year.

More to come?
Microsoft's Matusow said that the Open Specification Promise is part of the company's efforts to “think creatively about intellectual property.”

For the Open Specification Promise, the company sought input from open-source legal experts, including Red Hat's deputy general counsel Mark Webbink and Lawrence Rosen, an open-source software lawyer at Rosenlaw & Einschlag in Northern California.

Matusow said Microsoft is still a big believer in intellectual property but added that the company has chosen a “spectrum approach” to it, which ranges from traditional IP licensing to more permissive usage terms that mimic open-source practices.

“That is the point of a spectrum approach. Any–and I do mean any–commercial organization today needs to have a sophisticated understanding of intellectual property and the strategies you may employ with it to achieve your business goals,” he said.

The current Open Specification Promise does not specifically cover CardSpace, formerly called InfoCard. But the promise not to assert patents could be extended from current Web services standards, said Michael Jones, Microsoft's director of distributed systems customer strategy and evangelism.

“Licensing additional specifications under these same terms should be much easier to do at this point, but I obviously can't make public commitments yet beyond those we already have buy-off on,” Jones said on a discussion group at OSIS, the open-source identity selector project.

Old concerns
Web services standards are authored by several vendors, often including Microsoft and IBM, and are built into products from many vendors.

IBM lauded the move in a statement on Wednesday. “We've provided open-source friendly licenses for Web services specifications and have made non-assert commitments for a broad set of open-source projects including Linux,” said Karla Norsworthy, vice president for software standards at IBM.

Web services specifications are standardized in the World Wide Web Consortium and in the Organization for the Advancement of Structured Information Standards. Both bodies allow people to license standards either royalty-free or on so-called RAND terms (reasonable and non-discriminatory terms).

But Microsoft's Open Specification Promise goes a bit further. It means that developers at Apache projects, for example, no longer have to worry about Microsoft asserting Web services patents down the road, said Apache's Schmidt.

Similarly, Rosen said that the “OSP is compatible with free and open-source licenses.”

That clarity is a far cry from the early days of Web services, which took shape around 2000, when Microsoft and IBM teamed with others to improve system interoperability using XML-based protocols.

Lingering concerns remained among outside developers and were points of dispute in some Web services standardization efforts.

In 2000, Anne Thomas Manes was the chief technology officer of a Web services start-up called Systinet. The venture capitalist backers of the company were nervous that implementing these newly published specifications, created by other companies, could lead to lawsuits down the road, she said.

Until now, there was still a “niggling concern” that Microsoft would sue people. Back in 2000, Systinet decided to accept the risk of creating software based on specifications created by others, even though they did not have a license, she said.

“We went ahead and did it anyway despite the risk, because we were of the impression that Microsoft and IBM really wanted people to implement it,” she said.

To me it isn't really very surprising that Microsoft is doing everything it can to co-operate with everyone else in the industry on fundamental infrastructure like identity and web service protocols.  It suddenly seems like this is being made into a bigger deal than it really is.  That said, I'm really glad that lingering doubts about our intentions are dissipating.   

Namespace change in CardSpace release candidate

Via Steve Linehan, a pointer to Vittorio Bertocci's blog, Vibro.NET:

In RC1 (.NET Framework 3.0, IE 7.0 and/or Vista: for once, all nicely aligned) we discontinued the namespace http://schemas.microsoft.com/ws/2005/05/identity, replacing it with http://schemas.xmlsoap.org/ws/2005/05/identity. That holds both for the claims in the self-issued cards (s-i-c) and for the qname of the issuer associated with s-i-c. If you browse a pre-RC RP site from an RC1 machine, you may experience weird effects: for example, the Identity Selector may claim that the website is asking for a managed card from the issuer http://schemas.microsoft.com/ws/2005/05/identity/issuer/self, which is no longer recognized as the s-i-c special issuer. Note that it is often not a good idea to explicitly ask for a specific issuer 🙂

 If you want to see a sample of this, check out the updated version of the sandbox.

Why this change? As you may know, relying parties specify the claims they want the identity provider to supply (for example, “lastname” or “givenname”) using URIs.

Everyone will agree that the benefit of this is that the system is very flexible – anyone can make up their own URIs, get relying parties to ask for them, and then supply them through their own identity provider. 

But a lot of synergy accrues if we can agree on sets of basic URIs – much like we did with LDAP attribute names and definitions.  

Given that a number of players are implementing systems that interoperate with our self-asserted identity provider, it made sense to change the namespace of the claims from microsoft.com to xmlsoap.org.  In fact this is an early outcome of our collaboration with the Open Source Identity Selector (OSIS) members.  Now that there are a bunch of people who want to support the same set of claims, it makes total sense to move them into a “neutral” namespace.

While this is therefore a “good and proper” refinement, it can pose a problem for people trying out the new software:  if you are using an early version of CardSpace with self-issued cards that respond to the “microsoft.com” namespace, they won't match the new-fangled claims requested by a web site using the “xmlsoap.org” namespace.  And vice versa.  Further, the “card illumination” logic from one version won't recognize the claims from the other namespace.  CardSpace will think the relying party is looking for specialized claims supplied by a “managed card” provider (e.g. a third party).  Hence the confusing message.

After getting some complaints, I fixed this problem at identityblog: now I detect the version of CardSpace a client is running and dynamically request claims in either the old dialect or the new one.  People would do well to build this capability into their implementations from day one.  My sample code is here.
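The dual-dialect idea can be sketched in a few lines.  This is not my actual sample code: the version-detection rule (treating pre-1.0 selector builds as old-dialect) and all helper names are illustrative assumptions, since real detection depends on what the client reveals about itself.  The claim URIs append "/claims/" plus the claim name to the namespaces discussed above:

```python
# Illustrative only - not the identityblog sample code.
OLD_NS = "http://schemas.microsoft.com/ws/2005/05/identity/claims/"
NEW_NS = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/"

def claims_namespace(selector_version):
    """Pick the claim-URI prefix matching the client's selector build."""
    major, minor = (int(x) for x in selector_version.split(".")[:2])
    # Assumption: pre-1.0 selector builds still speak the microsoft.com dialect.
    return OLD_NS if (major, minor) < (1, 0) else NEW_NS

def required_claims(selector_version, names):
    """Space-separated claim list, as a relying party would request it."""
    ns = claims_namespace(selector_version)
    return " ".join(ns + name for name in names)
```

A relying party would call required_claims() while emitting its login page, so each visitor sees claim URIs their own selector can illuminate.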