A great idea from Julian Bond

Commenting on my over-excited piece from yesterday, Julian Bond replies:

Rather than launch another rant, let me try and ask a simple question that relates to your last sentence.

Pick a very popular open source web system with very wide deployment. It doesn't matter which, but something like Drupal, phpBB, WordPress, Movable Type, PHP-Nuke.

This is a really good idea, Julian.

Now try and imagine a roadmap where Infocard gets implemented on that application and gets widely deployed.

I don't know about you but I can't imagine that given what's been said about Infocard so far.

OK. I'm going to pick one of these and show exactly how I would do it. It will make everything a lot more concrete. I've got some other pieces in progress – plus my job – so it may take me a couple of days. But I very much appreciate the suggestion.

If that is “a question for the community at large”, then MS is turning its back on that section of the web community. Does that matter? Maybe not. But it does set the scene.

I really don't know how to get this across, Julian, but I will never turn my back on any section of the web community. Quite the opposite. I am deeply committed to this: identity architecture and technology must benefit and embrace our whole community (I speak of everyone-everywhere, not some-people-somewhere). I spend my days and nights trying to reach across the fault lines to make that possible.

Your proposal to get very concrete about a sample roadmap is really a good one. I appreciate it.

The definitive list of metasystem standards…

Scott Cantor, principal editor of SAML 2.0, has sauntered up to the plate with some comments on my recent reply to Hubert Le Van Gong. He makes two main points, and I'll deal with the first of these today:

My concern is less about WS-Trust itself than about both its status as a closed document (Microsoft was very forthright at DIDW in acknowledging that the open OASIS process does not afford them an advantage in influence that their workshop process does, but that's hardly a good thing for the rest of us) and its dependence on so many other equally unfinished specs that it's very hard to evaluate it in isolation, or without an immense amount of time at one's disposal.

This discussion has many legitimate nooks and crannies on all sides, and I think it's important for everyone who cares about these things to understand everyone else's points of view. To write properly about these issues I would need to start up a second blog – which is not in the cards!

But the conversation Scott refers to was part of a dramatic session at DIDW which I thought was one of the frankest industry dialogs I've heard in years. And guess what? Phil Becker of DIDW (God bless him) has agreed to let me podcast his recording of the session… (By the way, when is Phil going to get an award? The more I learn about him, the more he amazes me.)

So here is the mp3 of Federated Standards: The State of Convergence. Don't miss these 56 minutes.

Mike Neuenschwander of the Burton Group moderates skillfully and starts things off by getting people from the WS camp (John Shewchuk of Microsoft and Anthony Nadalin of IBM), SAML and Liberty (George Goodman of Liberty, Rob Philpott of RSA and Bill Smith of Sun) to explain how they approach standardization. This is followed by a discussion which teaches us a lot about the many issues and requirements driving the approaches of the various players. Scott Cantor then makes an extended intervention from the audience in which he clearly expresses the concerns he sketches in the paragraph above. It's a good example of why people should go to DIDW – and learn from what everyone else is saying.

I'm going to allow that session to speak for itself. I respect the views of everyone who participated.

I think we all agree that this is the most important point: WS-Trust and its associated specifications are destined for standards bodies. Everyone involved is working hard to complete the workshop testing process so we can get these specifications submitted for wide discussion as soon as possible, if not sooner. These specifications, with IBM, Microsoft and many others in the software industry already behind them, are going to open up a tremendous opportunity for those, like myself, who have been laboring long and hard to make possible an inclusive, multicentered identity fabric.

In the DIDW podcast Scott says that he is “not trying to make the point that WS-Trust is invalid – because it's not”. He explains that he is trying to get people to face up to some of the attendant complexities, things that he learned in the course of developing SAML. And I for one look forward to his contribution both in vetting and hardening WS-Trust and driving identity work forward – he has been a major innovator in the identity space.

On a lighter note, Scott (who I must admit I consider a friend) sent a devilishly mischievous and libellous takeoff of my identityblog logo – a picture called “trustberg” (the logo is the photo of an iceberg that appears in the upper right-hand corner of my blog). He admits that it:

…was perhaps not developed in such a spirit of constructive dialog, but … I hope you'll take it with a dose of humor. I'm not the author, but am happy to relay any comments. 😉

I was brought up to be able to laugh at myself – and it's a good thing, too, given some of the spots I've gotten myself into. So I admit the play on the iceberg is pretty funny. But I guess I would say that though the piece makes a good parry, it would be a lot stronger if the implication of dependency on so many standards were actually true.

Why would Scott's anonymous author desecrate my beautiful (Canadian, eh?) iceberg with the names of a bunch of unrelated specifications? Actually, I remember that when I first came across the WS specifications I simply couldn't believe how many of them there were. And they seemed to be reproducing every time I turned around! It took me a while to understand the approach of factoring things into “microstandards” that could be assembled like Lego. Then one day while meeting with the authors I saw that what we have here is object orientation applied to the standards themselves. Instead of big spaghetti-like mainline standards, they have factored everything out for reuse, specialization and inheritance.
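
To make the analogy concrete, here is a toy sketch – pure illustration, with class names that merely stand in for the specs, and no real WS-* content – of small, single-purpose pieces being reused, specialized and composed, rather than baked into one monolith:

    # Toy illustration only: each tiny class stands in for a "microstandard".
    class Addressing:                     # plays the role of WS-Addressing
        def address(self, msg, to):
            return {**msg, "to": to}

    class Security:                       # plays the role of WS-Security
        def sign(self, msg, key):
            return {**msg, "signature": f"signed-with-{key}"}

    class Trust(Security):                # WS-Trust-like: builds on Security
        def request_token(self, msg, key):
            return self.sign({**msg, "action": "issue"}, key)

    # The pieces snap together like Lego, and each can evolve independently.
    msg = Trust().request_token({"body": "RST"}, key="client-key")
    msg = Addressing().address(msg, to="https://sts.example.com")
    print(msg)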

Anyway, the good news is that people who have time to fabricate simulacra of my iceberg have time to read the standards, so we should all get past this in time! They're sort of like the laws of identity, I suppose. Little tiny laws.

But let's get down to reality. What do you need for InfoCards beyond WS-Security and WS-Addressing, which are already widely implemented and in standards bodies? You need WS-Trust, WS-MetadataExchange, and a few lines of XML consistent with WS-Policy and WS-SecurityPolicy. This will all be made clear in the InfoCard implementor's guide, which is real close to release.
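
To give a feel for what that amounts to in practice, here is a hedged sketch – Python pseudocode rather than the actual WS-Policy/WS-SecurityPolicy XML, with every field and endpoint name invented for illustration – of the kind of policy a relying party might publish, and how an identity selector might match a user's cards against it:

    # Hypothetical relying-party policy (field names invented; the real
    # thing would be a few lines of WS-Policy/WS-SecurityPolicy XML).
    POLICY = {
        "tokenType": "urn:oasis:names:tc:SAML:1.0:assertion",
        "issuer": "https://sts.example.com/trust",    # a WS-Trust endpoint
        "requiredClaims": ["givenname", "surname", "emailaddress"],
    }

    def matching_cards(policy, cards):
        """Return the cards in the user's collection that could satisfy the policy."""
        return [c for c in cards
                if policy["tokenType"] in c["tokenTypes"]
                and set(policy["requiredClaims"]) <= set(c["claims"])]

    cards = [{"name": "Work card",
              "tokenTypes": ["urn:oasis:names:tc:SAML:1.0:assertion"],
              "claims": ["givenname", "surname", "emailaddress", "role"]}]
    print(matching_cards(POLICY, cards))

The point is how little the relying party has to say: a token type, an issuer, and the claims it needs.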

Doc Searls puts his finger on it

Doc Searls, editor of the Linux Journal, has posted a piece that brings great encouragement. He puts his finger on the essence of what is at stake here.

I believe the Identity Metasystem is a barn-raising project, in the open public marketspace we call the Internet. Once raised, it becomes part of the Net's infrastructure, kinda like this diagram shows. Also this one.

I agree.

We build the metasystem with Microsoft's leadership (Kim Cameron's especially) and participation – even using Microsoft's architectural drawings – but in a public space, for public use, in the open marketplace, without any ownership encumbrances. The result will be NEA: Nobody will own it, Everybody can use it, and Anybody can improve it. (Yes, there are exceptions to that principle, especially around ownership – even in the LAMP stack. But the virtues are clear, and it's those virtues that make LAMP components adoptable infrastructure.)

I share Doc's thinking about NEA – just like TCP/IP, http, smtp, SOAP, ldap, and all the other great infrastructure pieces. In this sense, I want to make it clear that IBM, SAP, BEA, Verisign and many others have contributed along with Microsoft to the protocols which appear in our architectural drawings.

The main concerns, naturally, are around trusting Microsoft. (We all have our reasons. Also our reasons to look past them.)

Other questions are technical. Or political. Or combinations of those two (licensing is a good example).

I think we need to be able to talk about the technical questions without getting too bogged down in the politics or completely bogged down in distrust.

Agreed.

So I have some technical questions that I'd like to get answered, or at least approached. And I hope we can drop the distrust stuff while we try to answer them.

First, some reading material, in logical (if not always chronological) order.

Now, here's where we set up the question. Johannes says,

In order to accomplish this, InfoCard employs:

  • SOAP
  • WS-Addressing
  • WS-MetadataExchange
  • WS-Policy
  • WS-Security
  • WS-SecurityPolicy
  • WS-Transfer
  • WS-Trust
  • XML Signature
  • XML Encryption

Julian adds,

And So:

– User end requires Longhorn or an XP upgrade

– Depends on SOAP and the WS protocol stack

– Uses HTML OBJECT tag with DLL support

– Multiple commercial licensing but with probably no open, free license.

So that counts out Apple and Linux clients. It may well count out Firefox and other browsers. It almost certainly counts out PHP-Apache websites. Java/Perl server environments probably won't work because interop between MS implementations of the WS stack and Java/Perl implementations is extremely patchy.

In my slightly excited previous post, I explained that the conclusion about Apple and Linux mystifies me.

In “Microsoft Implementation Plans” (from the very first link, above), Kim and Microsoft say,

Microsoft plans to build software filling all roles within the identity metasystem (while encouraging others to also build software filling these roles, including on non-Windows platforms). Microsoft is implementing the following software components for participation in the metasystem…

… and then lists four items, the first two of which have InfoCard in their titles. The paper continues,

The identity metasystem preserves and builds upon customers’ investments in their existing identity solutions, including Active Directory and other identity solutions. Microsoft's implementation will be fully interoperable via WS-* protocols with other identity selector implementations, with other relying party implementations, and with other identity provider implementations.

Non-Microsoft applications will have the same ability to use “InfoCard” to manage their identities as Microsoft applications will. Non-Windows operating systems will be able to be full participants of the identity metasystem we are building in cooperation with the industry. Others can build an entire end-to-end implementation of the metasystem without any Microsoft software, payments to Microsoft, or usage of any Microsoft online identity service.

The boldfaces are mine, and meant to draw attention to both the literal meaning of the passages, and what is clearly Microsoft's intention for the metasystem to serve as an open environment and not a walled garden or a silo.

I think what we have here (looking at Johannes’ and Julian's posts, which are representative of questions I hear quite often elsewhere) is an insufficient distinction between an open environment (Identity Metasystem) and one vendor's implementation inside that environment (InfoCard). Because both come from Microsoft, it's easy to conflate the two.

Exactly. And I've likely contributed to this confusion in that I simply take it for granted that they are clearly separate aspects of things.

From the beginning of these conversations, Kim has made it clear to me that he (and Microsoft) want to see other implementations on other platforms, to demonstrate the open and inclusive nature of the metasystem, and to invite more implementations into the marketplace.

So, here's the first big question: Does the metasystem require adoption of SOAP and the whole WS-* suite of protocols (or whatever those are) – that whole bulleted list above – or something much less than that? I've gathered from Kim that WS-Trust is an essential component. But what about the rest of the list? Seems to me that Kim conceives the Identity Metasystem as a wide-open and inclusive architecture in which all kinds of current (LID, Sxip, XRI-XDI) and future identity systems can participate. Is this possible if the required protocols aren't really open or usable in a practical sense, as Julian contends? And, for that matter, is the WS-* suite a done deal, either? What, if anything, needs to be done there to make it (or parts of it) acceptable to those who are inclined to dismiss it?

The second big question (especially for my constituency) is, What will it take to get open source developers, and the rest of the non-Microsoft world, to adopt and deploy stuff that works within the metasystem? Licensing is clearly an issue. What else?

These are the big questions. I can deal with the first. The second is clearly a question for the community at large.

Julian Bond walks into the void

Julian Bond of voidstar has posted a piece called, rather uncharitably, “Why InfoCard will fall at the first fence.” I'll just make a few comments to make it clear where he understands our plans and where he doesn't.

He says:

  • User end requires Longhorn or an XP upgrade
  • Depends on SOAP and the WS protocol stack
  • Uses HTML OBJECT tag with DLL support
  • Multiple commercial licensing but with probably no open, free license.

Since the InfoCard identity selector runs in a protected subsystem to offer significant security advantages, it does require new code. Isn't it about time we had new code and a new approach? And can't Julian see anything positive in Microsoft making an initiative here, and using its ability to distribute this code so as to contribute to the construction of a rigorous identity fabric? Further, I would think he might commend the fact that these capabilities are being made available down-level, and will be distributed very efficiently. The result is that the proportion of clients which will be InfoCard equipped will climb very quickly. This doesn't mean that they automatically capture the imagination of users. But it certainly represents a great opportunity to help move the whole industry forward. I don't understand the factionalism in this regard.

InfoCard does depend on SOAP and WS. But creating an interoperating stack is not difficult. People on non-Windows clients will have open source implementations available to them. Such implementations are being built today (some exist). Why does Julian want to walk away from such an opportunity?

Again I will say that the IP will be available under a royalty-free license. We are working on using an existing license that is well accepted by the vast majority of people building software today.

Julian goes on to say:

So that counts out Apple and Linux clients. It may well count out Firefox and other browsers. It almost certainly counts out PHP-Apache websites. Java/Perl server environments probably won't work because interop between MS implementations of the WS stack and Java/Perl implementations is extremely patchy.

Untrue. The open source implementations will run on those clients. And we would urge people building clients to work with us to ensure interoperability. These are still the early days of the stack! Of course the interoperation is patchy! In the early days of LDAP we couldn't even agree on what to call an email address! Julian's comments here strike me as a bit mean-spirited. But Julian continues:

So >50% of the market is excluded. And *all* of the long tail of small and medium sized web sites. Which is exactly the same problem as with Passport. It ends up as an IE only, MS Windows only, client tied to a server system that only works with the very biggest players. And each one of them involves a huge sell with the corresponding bad press when they back out.

I guess if people follow Julian's advice in lockstep he may achieve this. But on the other hand, someone who wants to do something positive for identity might use the open source stacks to put together a Security Token Service (STS) that just plugs into apache and handles identity management for that long tail. It all depends on whether we want to take advantage of an opportunity that I see as historic.
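
To make that concrete – and this is only a sketch under assumed names, not a recipe – such an STS could start life as a single WSGI endpoint behind Apache that accepts a WS-Trust RequestSecurityToken and answers with whatever token the site trusts (issue_token_for is a stub standing in for the site's real validation and signing logic):

    def issue_token_for(rst_bytes):
        # Stub: a real STS would parse the RST, check the caller's credentials
        # against whatever the site already uses (htpasswd, MySQL, ...), and
        # return a signed token.
        return b"<wst:RequestSecurityTokenResponse>...</wst:RequestSecurityTokenResponse>"

    def sts_app(environ, start_response):
        length = int(environ.get("CONTENT_LENGTH") or 0)
        rst = environ["wsgi.input"].read(length)      # the incoming request
        start_response("200 OK", [("Content-Type", "application/soap+xml")])
        return [issue_token_for(rst)]

    if __name__ == "__main__":                        # local test harness
        from wsgiref.simple_server import make_server
        make_server("", 8080, sts_app).serve_forever()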

Julian concludes:

What's sad about this is that Microsoft cannot separate the standards process from its commercial business. It's completely unable to take a view that a larger market raises all boats. So I'm not at all surprised at the approach and I also predict loads of noise and very little implementation leading to another failure. I think the rest of us can safely ignore what they're doing. While at the same time borrowing from all the excellent work that people like Kim Cameron are doing on the fundamental analysis of Identity.

I wonder, in this case, if it is Microsoft who can't separate the standards from the opportunity, or Julian himself.

I just don't get Julian's vibrations. We thought long and hard about how to make the client tremendously open to a plurality of identity technologies and operators. We've put it out there. It doesn't require anyone to lay down their existing protocols – use whatever works for interacting with conventional clients. But let's give the end user a better, safer and more comprehensible mechanism for taking control of her identity.

In this, Julian, why not work with us? The laws are not abstract things. This is the time when we need to change the Internet so it comes into accord with them. Not every aspect of these proposals may be exactly as you would wish. But please consider the great complexity of “weaving” a solution here, garnering support across all the constituencies, and consider again why you would walk away from this opportunity.

Want to save 10 billion pounds?

Ideal Government has just posted this story by David Harrison, of the Sunday Telegraph, who apparently has some inside information about the upcoming LSE report on the British Identity Card. It seems like the British debate has gone from off again to on again:

An identity card scheme that costs just £30 per person – compared with £300 per person under the Government's proposals – will be unveiled this week.

The plan, drawn up by the London School of Economics after six months of research, would also limit the Government's access to information on the card to a few basic details – while the Government wants to hold much more personal information on a national database.

The LSE report is highly critical of the Government's ID card plans

The LSE's proposals will reignite the debate over compulsory ID cards just before the Government's Bill introducing the cards has its second reading in the Commons later this month.

A 180-page LSE report says that its proposals would satisfy the need for a national ID card to help to combat identity fraud and illegal working and allay fears that the right to privacy would be seriously undermined by a “Big Brother” state.

The report says that the scheme would reduce the overall cost of ID cards to £2.25 billion, a fraction of the £12-18 billion that the LSE says the Government's scheme will cost.

Under the proposals, the Government would have access to only a few details – the holder's name, date of birth and photograph, plus an encrypted card number and a unique “national identification number”.

The scheme would be more acceptable to the public because it gives individuals the right to decide whether to store any other information on the cards, according to the report.

The Government's ID cards Bill faces strong opposition from many Conservatives, Liberal Democrats and rebellious Labour backbenchers struggling to explain to their constituents why, in the words of one rebel, “we should spend £18 billion on ID cards when our local school has no money for books”.

Opponents are concerned about the cost of the project – which the Treasury says has to be “self-financing” – and about loss of privacy and fears that a future government could misuse the data.

The LSE report is highly critical of the Government's plans, describing them as “a potential danger to the public interest and to the legal rights of individuals”.

The technology envisioned for the Government's scheme is “largely untested and unreliable” and would need expensive security measures, particularly as private and public sector organisations would have access to it, the report says.

The LSE claims that its scheme is cheaper and more secure. Prof Patrick Dunleavy, a member of the LSE's ID card advisory group, said: “This is as small, robust and cost-effective an ID card as anybody could get away with in the world we live in today.

“The card will work better than the Government's scheme because people will want to use it. It is also more secure because the cards will carry much less information so there will be fewer problems if they are lost.”

Prof Ian Angell, the head of the LSE's Department of Information Systems, and also a member of the advisory group, said that any identity card system should be built “on the basis of public trust rather than compulsion and coercion. An ID system will only work if it is supported by all citizens”.

The Government's proposals had “fatal weaknesses”, he said. “The system outlined in the Bill will be insecure and costly. Our new blueprint addresses these problems by creating a system based on proven technology and citizen control. We want this to be the subject of a public debate.”

Under the government scheme all citizens would have to register for an ID card at one of about 70 regional centres. Details they would have to disclose could include bank accounts, proof of residency and address, birth certificate, passport number, NHS number, National Insurance number and a credit reference number.

In the LSE's model, individuals will have to provide only a few details, but their application forms would have to be endorsed by three referees – a doctor, lawyer, teacher or police officer for example – who have known the applicant for a long time.

Crucially, the referees will have to include a professional identity detail – such as a doctor's or JP's registration number or police number – to deter fraudulent applications and hold them accountable.

To obtain the card under the LSE scheme an applicant would go to a job centre, post office or other authorised centre. There he or she would enter an electronic kiosk that takes a digital photograph and embeds it into the coded application form.

Once endorsed by the referees, the form is handed in at a post office where the applicant chooses a biometric test – fingerprint or iris scan – for extra security. When the card is ready, the test and photograph are used to confirm that the card is handed over to the right person.

At this point the card is still inactive. The holder then takes it to a “trusted third party” – a bank or post office for example – where the applicant is well known. There a copy of the data is taken and stored securely.

The card is then connected to the Government's temporary file. If the codes match the card is validated and all data is deleted from the government file apart from the name, code and card number.

The holder then has a secure card with a secret code, back-up held by the third party, and a minimum of data is held by the Government.

The LSE says that this “localised” scheme is much more secure than the Government's because the data, apart from a few details, is spread out among thousands of “trusted third parties” and not contained in one central database. “There is no master key,” a spokesman said.

Professor Ian Angell's comments are quite in line with the thinking in the Laws of Identity. I wonder what kind of fall-out all of this will have on how British citizens look at cyber identity.

Location as an identity claim

Here's an interesting piece from Dave Kearns:

I was on a teleconference with O'Reilly Group's Tim O'Reilly and Nat Torkington discussing the upcoming Where 2.0 Conference which will focus on mapping and location technologies when a thought occurred to me – could location be a factor in a multi-factor authentication scheme?

The “where” of IdM has often referred to the platform or device that someone was using to access a resource, but suppose a GPS was used in order to indicate the physical location of the user?

For a cell-phone user, the GPS might not be needed if the location of the cell tower was “close enough” (i.e., area of a city rather than street address).

I could see this being used in a graded authentication scheme to reduce or deny access based on a possibly adverse location (e.g., someone trying to access a Pentagon database from Uzbekistan).

I don't know if there are any products that do this, if any are planned or if it's even feasible – but it's worth a thought.

This is one of the very scenarios I see us as enabling by moving to “claims-based identities”. So yes, I see it as planned at an architectural level.

Once you get your head around expressing identities as sets of claims, you can easily imagine expressing a user's location as one of those claims. In the identity metasystem, the relying party could indicate in its policy that it requires several sets of identity claims – one indicating who the user is, and another indicating where the user is. The claims might come from different authorities (e.g. an enterprise and a trusted location provider). These would be implemented as two Security Token Services (claims transformers). Both sets of claims, taken together, would identify the user from the point of view of the relying party.
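
Here's a small sketch of that composition (the authorities, claim names and jurisdiction rule are all invented for illustration):

    # The relying party's policy: two token sets from two different authorities.
    REQUIRED = [
        {"authority": "https://hr.example.com/sts",  "claims": {"employee-id"}},
        {"authority": "https://geo.example.net/sts", "claims": {"country"}},
    ]

    def authorize(tokens):
        """tokens: a list of {"authority": ..., "claims": {name: value}}."""
        for req in REQUIRED:
            match = next((t for t in tokens
                          if t["authority"] == req["authority"]), None)
            if match is None or not req["claims"] <= set(match["claims"]):
                return False              # a required token set is missing
        geo = next(t for t in tokens if "country" in t["claims"])
        return geo["claims"]["country"] == "CA"   # e.g. data must stay in-country

    tokens = [
        {"authority": "https://hr.example.com/sts",  "claims": {"employee-id": "42"}},
        {"authority": "https://geo.example.net/sts", "claims": {"country": "CA"}},
    ]
    print(authorize(tokens))              # True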

I've spoken recently with a number of Europeans for whom location is fast becoming a central issue. Various national and international agreements mean that exposing information across international borders increasingly opens enterprises up to audits by additional (foreign) governments. This problem is particularly acute in banking – and has many ramifications. So the need to ensure that some data is only accessed within national boundaries is fast becoming a real driving issue.

As a side note, this example captures one of the most interesting things about the identity metasystem we are proposing. Independent third parties can innovate and create claims transformers (STS's) of the kind described here and just plug them in to the fabric. People can then consume their outputs just by putting in a URL and deciding to trust them (payment might be a good idea too).

To my mind this is a very significant aspect of the ecosystem. In other words, people can add pieces that really take us towards new capabilities without having in any way to change the way the broader system works.

Phil Becker on the failure of oversimplification

Phil Becker of Digital ID World (DIDW) had this to say about Dave Kearns’ recent post on InfoCards (my comments here).

It may be mostly true “that there's nothing new here except” the decentralized and heterogeneous aspects of this approach. But then, that's exactly what it has taken a decade to fully understand is actually the only vitally important part of the task.

Until this problem was recognized to unavoidably be (and thus looked at and designed for as) a decentralized distributed problem spread across multiple administrative domains and extremely heterogeneous technology and platforms, it was doomed to repeat Einstein's “failure of oversimplification.” (“We must make this as simple as possible, but no simpler.”)

Elegant simplicity is always the most difficult thing to find, and the most difficult to appreciate as significantly different from what has gone before when it is found. Are we there yet? Probably not. But we are at last on the road to getting there instead of just standing around in the bus station complaining about how everyone fails to understand.

This is a very dense and immeasurably deep comment. I hope Phil will be able to explain this widely, as he has so many other things.

Bridging the chasm

The recent piece in which Hubert Le Van Gong began looking at InfoCards attracted a number of comments from the Liberty and SAML side of the house. I'm glad to see them joining with those of us already in this discussion to look at what we might accomplish together.

First up was Robin Wilton of Sun, who saw InfoCards as being, “…an interesting dimension of the whole ‘thin client/fat client’ tension”. For those who don't follow this kind of thing, a thin client is, at its most extreme, a display and keyboard with little local computing power and no state (no storage) – essentially a window onto a mainframe – er, I mean onto a powerful network-based server.

More often than not, local processing power comes in handy for some applications, so the term ‘thin client’ is used in practice to describe an approach to delivering specific applications. Generally, it refers to using a browser as a stateless window onto a server-based application. What would it mean to introduce InfoCards into such a world?

Let's remember that “InfoCards” are visual representations of identity providers (and sets of claims) that can be controlled through an “Identity Selector” crafted for choosing between “Digital Identities”.

These Digital Identities can emanate from anywhere (and from any platform) so are architecturally distinct from the client (and don't “fatten” it).

The Identity Selector itself does need to be part of the platform – in order to increase the safety and usability of the identity system by offering a consistent experience and making it very much harder for attackers to subvert than is the case with current applications and the browser-based technology.

However, there is nothing in the architecture to imply that the Identity Selector needs to store state on the thin client.

I can imagine “ultra thin” implementations of an Identity Selector in which the card collection would be hosted by a distant web service, for example. Or better still, I can see many advantages to hosting the card collection and authentication information in some kind of dongle, or iPod, or mobile phone or wearable computer. The essential thing is that these decisions be under the control of those using the system.

So yes, we need to add identity selection as a capability required by any end point. But it would be an error to see this as requiring storage of state on the client, or as a matter of fat versus thin.
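
A little sketch may help (the interfaces here are invented for illustration): the selector logic is identical whether the card collection lives on local disk, on a remote web service, or on a phone – the backing store is a pluggable deployment choice, not an architectural commitment.

    from abc import ABC, abstractmethod

    class CardStore(ABC):
        """Where the cards live is a deployment decision, not an architectural one."""
        @abstractmethod
        def list_cards(self): ...

    class LocalCardStore(CardStore):          # the conventional desktop case
        def __init__(self, cards): self.cards = cards
        def list_cards(self): return self.cards

    class RemoteCardStore(CardStore):         # "ultra thin": cards held elsewhere
        def __init__(self, fetch): self.fetch = fetch   # e.g. an HTTPS call, a phone...
        def list_cards(self): return self.fetch()

    def select_card(store, wanted):
        # One selector, any store: no state has to live on the thin client.
        return next(c for c in store.list_cards() if c["name"] == wanted)

    print(select_card(LocalCardStore([{"name": "Work card"}]), "Work card"))
    print(select_card(RemoteCardStore(lambda: [{"name": "Roaming card"}]), "Roaming card"))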

Moving on, we come to this comment on Hubert's blog by Chuck Mortimore:

At this point, Microsoft's intentions seem to be to only support the WS-* protocol stack. Requests have been made to make this pluggable, but all indications are that this will enter the market using only ws-trust and its dependent specs.

Scott Cantor, the key editor of the SAML 2.0 spec, then adds:

I can relate from conversations at DIDW that it is 100% a constraint that this InfoCard model for identity provider “discovery”, as the SAML spec refers to the process, is completely tied to WS-Trust and they do not intend that it support any other authentication protocol.

Scott goes on to (accurately) report my view as being:

…that multiple protocols introduce insecurity to the client, and are a deployment problem because of the need to keep each client up to date with each protocol. He drew the analogy of that being like “putting a router in the desktop”.

He has a point, but I'll only say that WS-Trust includes (as it must) extensibility to deal with the kinds of challenge/response behavior required to use certain kinds of tokens…

But in any case, you can certainly ask him what he means by “SAML and Liberty can work in this framework”. But I'm pretty sure he just means that SAML assertions are supported.

Here is my thinking. I do want InfoCards to be “pluggable”. I want users to be able to plug in InfoCards that represent multiple operators, and multiple security technologies, including SAML. This is the translation of the crucial fifth law into a working system. Little is more important to me than delivering on this requirement. But the real question is one of how best to make InfoCards pluggable. Should the complexity be on the client, or should it be quarantined on specialized servers?

And now, we enter the twilight zone. As strange as some may find it, I am on the “thin client” side of the ensuing discussion. I want to achieve manageability and reduce the client security attack surface by supporting a single protocol stack between the client and security token servers. Meanwhile my friends Hubert and Scott have wandered onto the fat client side – they want to add multiple parallel protocol stacks to what will likely be the most attacked client system ever deployed.

I urge them to reconsider. We should collaborate to minimize the vulnerability of what will eventually (if we are successful) be deployed on billions of clients. If we limit the identity protocol between client and server to WS-Trust (making it at least possible to test it), it by no means limits the overall protocol options available to Scott and Hubert. The server can bridge WS-Trust to whatever other protocols it wants. The system is then eminently pluggable at a protocol level, but in managed environments, rather than in a chaos impossible to defend from attack.

Clearly, this call for minimalism on the client means the WS-Trust protocol should allow any security content to be carried by it – effective freedom of choice. It must transport any type of token. It must also allow the conversion of any set of claims into any other. It must be the meta in the metasystem.

WS-Trust is precisely such a protocol. For those who don't follow these things, it was designed as a way to build a “security token service” – a service whose very purpose is to exchange one security token for another.
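
In skeleton form – everything below is invented for illustration, and a real bridge would verify signatures and mint real, signed tokens – the exchange at the heart of an STS is as simple as this:

    def validate_incoming(token):
        # Stub: a real bridge would verify a SAML or Liberty assertion here.
        return token["claims"]

    def mint_token(claims, token_type):
        # Stub: a real STS would sign (and possibly encrypt) the new token.
        return {"type": token_type, "claims": claims}

    def security_token_service(request):
        """The whole idea: accept one token format, issue another."""
        claims = validate_incoming(request["token"])
        return mint_token(claims, request["requestedTokenType"])

    rst = {"token": {"format": "liberty", "claims": {"subject": "alice"}},
           "requestedTokenType": "saml-1.1"}
    print(security_token_service(rst))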

I therefore hope all in this conversation will understand that I am not asking anyone to lay down their protocols! I am asking them to consider adding a token exchanger (aka Security Token Service, or STS) to their implementation, and to use the WS-Trust protocol as the way to implement this one piece of the puzzle.

We can imagine many synergistic outcomes. For example, one could have a client using InfoCard (and WS-Trust) to authenticate to the WS-Trust “head” on a portal server which is part of a Liberty federation, even though that server interacts with its partner sites using Liberty protocols. One can also imagine building a generic STS which works much like an “identity bridge”, converting SAML and Liberty identity and attribute providers into a WS-Trust InfoCard identity provider. I was thinking of these opportunities when I said, “SAML and Liberty implementations can easily interwork with this proposal – it may require some extensions to current capabilities but nothing very significant.”

One thing all of us can agree on is to be wary when an architect says, “nothing very significant”. But on this one occasion I think I can get away with it. Ping has already demonstrated that those who already have the technology necessary to do SAML or Liberty can easily add the functionality implied by WS-Trust. After all, Andre Durand was able to demonstrate such a service at the recent DIDW conference – easily interworking with InfoCards. Other providers of hardware and software are already far along in developing this capability as well.

Dave Kearns on InfoCards

Dave Kearns has been posting up a storm – a good storm – including his comment on Johannes Ernst's What is Microsoft InfoCard? He says:

The explanation is a good, if somewhat convoluted, one. But could be simplified.

InfoCard is simply Novell's old DigitalME decentralized (as Novell's personal directory intended it to be) and hopped up on Web Services SOA.

In many ways, it could also be described as Passport without the Big Brother implications of “Hailstorm“, hopped up on SOA.

The important thing to remember, I think, is that there's nothing new here except the joining together of the personal directory with the panoply of specs and protocols that make up Service Oriented Architectures. That's no small accomplishment, of course, especially for a company as vilified for its security and privacy policies as Microsoft is.

It has certainly been my intention to invent as little as possible, so I thank Dave for pointing out my lack of a contribution in this regard.

Decentralization is certainly a big part of what is being proposed. The ability to support multiple (and evolving) underlying security technologies is also a key property. So is the new approach to integrating the user into the experience and keeping it consistent across contexts and technologies. Finally, the project employs various technologies which raise the bar on resisting phishing and pharming attacks.