Ben Laurie responds to OSP

Ben Laurie, a major contributor to internet security through his work at Apache, and now at Google, is generally positive about OSP but has questions: 

“Kim Cameron announced that Microsoft are making it possible for anyone to implement Infocard-compatible systems (and other systems that depend on the same protocols), via the Open Specification Promise.

“First off, let me say that this is a huge step forward – there’s been a great deal of uncertainty around WS-* and friends because of the various patents various companies own. Microsoft taking this step definitely helps.

“But, there are some details that worry me – firstly I am curious why Microsoft have taken the approach of this promise rather than an explicit licence. I’ve talked to various lawyers about it, and the general feeling I get is that they’d be more comfortable with a licence, but they can’t point to anything obviously wrong with the promise approach.”

So I need to make it absolutely clear that if anyone feels more comfortable with a RANDZ (Reasonable and Non-Discriminatory Zero Royalty) License rather than the Open Specification Promise, Microsoft will be happy to provide them with one.  The goal was simply to provide a simple, clear alternative for those who wanted one.  Ben continues:

“Secondly, there’s this definition:

“’Microsoft Necessary Claims’ are those claims of Microsoft-owned or Microsoft-controlled patents that are necessary to implement only the required portions of the Covered Specification that are described in detail and not merely referenced in such Specification. ‘Covered Specifications’ are listed below.

“(my italics). Now, I’ve implemented a lot of software from protocol specifications, and there are two things that are extremely common:

  • “The specifications include many optional parts. These parts will not be covered by Microsoft’s promise.
  • “The specifications reference other specifications for vital parts of their implementation. These parts will not be covered by Microsoft’s promise.

“Now, exactly what effect these considerations have on Microsoft’s promise and implementations of WS-* et al is something I have not had the time or energy to assess – perhaps others with more intimate knowledge of the specs could help me out there? I’d love to hear that, in fact, this is a non-problem.”

It may help to recall what Standards Guru Andy Updegrove says about the phrase “…that are described in detail and not merely referenced in such Specification….”:

“While not usually phrased in this fashion, this is a common limitation intended to clarify that, for example, other standards that may be referenced, or so-called “enabling technologies,” the use of which would be required to use an implementation (e.g., the computer upon which the software is running) are not included.”

But I do understand Ben's question about the required versus optional parts of a specification and will ask our legal people to clarify. 

Ben's next point:

“Another factor to consider is that (as I understand it) Microsoft are not the only people with IP around these standards. Will everyone else be so generous with their IP? Microsoft don’t care, of course, because they have the usual patent mutually assured destruction – but those of us with smaller patent portfolios are not so fortunate.”

“So, as always, I guess I’m an optimistic cynic.

“Incidentally, another thing Kim has talked about several times is Microsoft allowing exact copies of their user interface. I’m in two minds whether it’s a good idea to copy it, but this promise doesn’t cover the UI, as far as I can see. I wonder when that piece will be forthcoming?”

I really want to make it clear that I have never suggested I would ask Microsoft to allow people to make “exact copies” of our user interface.  And in fact, no one has ever asked to be able to do this.

What we want to be able to do is create a “ceremony” that is recognizable across platforms.  I'm talking about the equivalent of using a steering wheel and brakes in a car.  All cars have them, so even if we like a particular type of car, we can get in another one and drive it.  This doesn't mean the cars are “exact copies” of each other, or even that the steering wheel and brakes look or feel identical. 

As Novell's Dale Olds put it at DIDW, we are talking about sharing a predictable sequence of experiences, not cloned screens.  So in this sense, I think everyone shares Ben's “two-minds” thinking.

Phil Windley at DIDW


Phil Windley at ZDNet has been blogging the DIDW conference, and captures a bit of it here:

This evening, at the reception for Digital ID World, someone asked me what I thought of the conference. I've been to every DIDW since it started (5 years now). I realized that the conversations and talks had changed from “won't it be cool when we…” to “this is what we did to…” That's a big change and shows just how far identity, as a concept separate from security, has come.

At the same time, I look around the show floor and other than the usual big names like Microsoft, Novell, and Oracle there are few repeat companies. Ping and a few others have been here from the start, but most seem to come and go. Part of that's because any company that gets successful gets bought by one of the big guys looking to build out their stack.

One of my favorite sessions today was Dave Nikolesjsin's presentation on citizen-centric identity. Nikolesjsin is the CIO for the Prov. of British Columbia. BC is making real progress building identity systems that have been proofed by in-person visits to government agencies. There are lots of lessons in what BC is doing–not just for other governments, but for any large organization.

The most significant announcement of DIDW was Microsoft's Open Specification Promise. For years, there's been an intellectual property cloud hanging over the OASIS specifications that form a large part of what makes Web services work. Unlike other standards bodies, OASIS doesn't require that technologies built into its specifications be IP-free.

Today's announcement is a huge step by one of the major contributors to the OASIS specifications. Microsoft irrevocably promises not to assert claims against people or companies who distribute products that conform to the specifications. Of course, like any legal agreement, there are terms and conditions. I'm sure some will be waiting to see what isn't there.

Since many of these specifications are at the heart of CardSpace, Microsoft's Internet-scale identity system, the announcement is especially important to other vendors working to interoperate with it. This is also important to Microsoft. If no one builds interoperable identity products for CardSpace, Microsoft will have failed to achieve true Internet-scale identity. Removing the legal threat is an important enabler.

More at Phil's blog here.

Microsoft's Open Specification Promise

Today marks a major milestone for Mike Jones and myself. 

Microsoft announced a new initiative that I hope goes a long way towards making life easier for all of us working together on identity cross-industry.

It's called the Open Specification Promise (OSP).  The goal was to find the simplest, clearest way of assuring that the broadest possible audience of developers could implement specifications without worrying about intellectual property issues – in other words a simplified method of sharing “technical assets”.  It's still a legal document, although a very simple one, so adjust your spectacles:

Microsoft Open Specification Promise

Microsoft irrevocably promises not to assert any Microsoft Necessary Claims against you for making, using, selling, offering for sale, importing or distributing any implementation to the extent it conforms to a Covered Specification (“Covered Implementation”), subject to the following.  This is a personal promise directly from Microsoft to you, and you acknowledge as a condition of benefiting from it that no Microsoft rights are received from suppliers, distributors, or otherwise in connection with this promise.  If you file, maintain or voluntarily participate in a patent infringement lawsuit against a Microsoft implementation of such Covered Specification, then this personal promise does not apply with respect to any Covered Implementation of the same Covered Specification made or used by you.  To clarify, “Microsoft Necessary Claims” are those claims of Microsoft-owned or Microsoft-controlled patents that are necessary to implement only the required portions of the Covered Specification that are described in detail and not merely referenced in such Specification.  “Covered Specifications” are listed below.

This promise is not an assurance either (i) that any of Microsoft’s issued patent claims covers a Covered Implementation or are enforceable or (ii) that a Covered Implementation would not infringe patents or other intellectual property rights of any third party.  No other rights except those expressly stated in this promise shall be deemed granted, waived or received by implication, exhaustion, estoppel, or otherwise.

Covered Specifications (the promise applies individually to each of these specifications)

Web Services  This promise applies to all existing versions of the following specifications.  Many of these specifications are currently undergoing further standardization in certain standards organizations.  To the extent that Microsoft is participating in those efforts, this promise will apply to the specifications that result from those activities (as well as the existing versions).
WSDL 1.1 Binding Extension for SOAP 1.2
WS-Federation Active Requestor Profile
WS-Federation Passive Requestor Profile
WS-Management Catalog    
WS-RM Policy
Remote Shell Web Services Protocol
WS-Security: Kerberos Binding
WS-Security: SOAP Message Security
WS-Security: UsernameToken Profile
WS-Security: X.509 Certificate Token Profile
SOAP 1.1 Binding for MTOM 1.0    
WS-I Basic Profile
Web Single Sign-On Interoperability Profile
Web Single Sign-On Metadata Exchange Protocol

Note that you don't have to “do anything” to benefit from the promise.  You don't need to sign a license or communicate anything to anyone.  Just implement.  Further, you don't need to mention or credit Microsoft.  And you don't need to worry about encumbering people who use or redistribute or elaborate on your code – they are covered by the same promise. 

The promise is the result of a lot of dialog between our lawyers and many others in the industry.  Sometimes we developers wished progress could have been faster, but these are really complicated issues.  How long does it take to write code?  As long as it takes.  And I think the same notion applies to negotiations of this kind – unless one party arrives at the table with some pre-determined and intransigent proposal.  People on all sides of this discussion had legitimate concerns, and eventually we worked out ways to mitigate those concerns.  I thank everyone for their contribution. 

How have people from various communities reacted to the final proposal?

Lawrence Rosen, a lecturer at Stanford and author of “Open Source Licensing: Software Freedom and Intellectual Property Law”, said:

“I see Microsoft’s introduction of the OSP as a good step by Microsoft to further enable collaboration between software vendors and the open source community.  This OSP enables the open source community to implement these standard specifications without having to pay any royalties to Microsoft or sign a license agreement. I'm pleased that this OSP is compatible with free and open source licenses.” 

Mark Webbink, Deputy General Counsel at Red Hat, said:

“Red Hat believes that the text of the OSP gives sufficient flexibility to implement the listed specifications in software licensed under free and open source licenses.  We commend Microsoft’s efforts to reach out to representatives from the open source community and solicit their feedback on this text, and Microsoft's willingness to make modifications in response to our comments.”

And from RL “Bob” Morgan, Chair of the Middleware Architecture Committee for Education, and a major force behind Shibboleth:

The Microsoft Open Specification Promise is a very positive development.  In the university and open source communities, we need to know that we can implement specifications freely.  This promise will make it easier for us to implement Web Services protocols and information cards and for them to be used in our communities.

So there it is folks.  I'm impressed that such a short document embodies so much work and progress.

WordPress InfoCard integration code

Update:  There are now excellent community-based and commercial implementations of Information Card code for WordPress, php, ruby, “C” and other languages.  I've left this zip here for documentary and pedagogical purposes only.

 I've been wanting to share my experiences adding Information Card support to identityblog for quite a while now.  I just haven't had the time.

I started by publishing my work on building the necessary code for handling secure identity tokens.  But then I got interrupted with the necessities of life – like shipping Cardspace.

Anyway, now I'm ready to present my integration code.  Very little of it is unique to WordPress – it is really code that would in general apply just as much to any other piece of software.  Someone could easily factor my code so the interface is a little cleaner than is currently the case. 

Where I had to actually alter WordPress files (only three of them), I show just the changes that are necessary.  To see them in context you'll need to download the original files from WordPress (version 2.0.4) – though that's usually only necessary if you're making the changes in your own installation.

Download my contribution here.  My assumption is that the root of this download is the same as the root of the wordpress directory. 


The files all begin with “infocard” so they're easy to delete if you want to.

I'll be publishing a number of pieces explaining why I took the approaches I did.  I hope this will get some good, concrete conversation going.  The first in this series is uncharacteristically WordPress-specific – don't get discouraged if you're looking for something more general.  It talks about how I approached changing the wp-login page.  I'm pretty sure that even people thinking about infocard-enabling other products will find some ideas here that help them out.

Like my previous work, you can use this code in whatever way you want.  My goal is to help as many people as possible understand, use and deploy information cards.

UPDATE:  Thanks to Samuel Rinnetmäki for pointing out the need to warn readers not to install “as is” in an operational directory – it had never occurred to me they might do this…  I've edited the ZIP to make this impossible (09-02-2008).

Dynamic detection of client dialect requirements

It seems I might not have found quite the magic recipe yet in my attempt to dynamically recognize whether you are coming from a July CTP or release candidate client.  “Close, probably, but no cigar.”

If you have any kind of problem logging in with an Information Card, please email me the output of this diagnostic.

“Funny, it worked on MY machines.” (From Programming Yarns, Volume 1, Chapter 1). 

Sorry for having been a little optimistic about my initial success.  A bunch of people had reported that things worked – and I prematurely took that as meaning that they didn't NOT work. 

I'm still trying to sort out why some people are having problems.  So if you don't mind trying out and mailing in the diagnostic, I'd really appreciate it.


Upcoming DIDW

I hope everyone's going to Digital ID World (DIDW) next week. We'll start on Monday with an Identity Open Space Unconference (don't worry, Virgos, they're unstructured, but not without shape and self-revealing purpose). Once this gives rise to the main event, there are a number of sessions that look fascinating for identity aficionados – like “What Do the Internet's Largest Sites Think About Identity?”, a panel moderated by Dan Farber and featuring representatives of the large sites and a new presentation by Dick Hardt. There will also be an OSIS meeting – and of course, the endless hallway conversation.

I'm pairing up with Patrick Harding (from Ping Identity) on a Wednesday session called “Understanding InfoCards in an Enterprise Setting“. It will include a demo that I think will really help show the concrete benefits of InfoCards inside the enterprise. What can you expect? 

First, you'll see the latest version of Ping's InfoCard server, now featuring both Managed IdP as well as Service Provider capabilities. Ping's goal is to show how to seamlessly chain passive and active federation – allowing for on-the-fly privacy context switching.  They'll use real-world use-cases where passive federation gives way to active and vice-versa.

According to Andre Durand, Ping Identity's CEO:

“The Digital ID World demo will show two scenarios to depict how passive federation (via SAML 2.0 Web SSO Profiles or WS-Federation) and active federation (via CardSpace) can both play a role in enabling a seamless user experience for accessing outsourced applications. The plan is to demonstrate how passive and active federation work together to enable a myriad of different business use cases when chained together in different situations.

“Scenario 1:

“An enterprise employee leverages her internal employee portal to access applications that are hosted externally. In the first case we show how SAML 2.0 Web SSO (passive federation) is used to enable seamless access into the web site. The user accepts this as part of her employment contract – the employer has deemed that the use of is critical to their business and they want no friction for their sales force in entering information for forecasting purposes.

“In the second case we'll show how CardSpace is used to ‘optionally’ enable seamless access into the employee's Employee Benefits web site. As the Employee Benefits web site is made up of a mixture of personal and corporate information (i.e. 401k, health and payroll) the employee is given the choice of whether to enable SSO via the use of CardSpace. The Employee Benefits web site is enabled with CardSpace. After the user clicks on the ‘Benefits’ link in their corporate portal, she is prompted with different Cards (Employer and Benefits) which she can then choose between for accessing the Benefits web site. If she chooses ‘Employer’ then she will be enabled with SSO from the Corporate Portal in future interactions.”

By the way, Andre, please tell me there's some way for her to change her mind later!

“Scenario 2:

“An enterprise employee is traveling and loses her cell phone. She uses her laptop to access her corporate cell phone provider in an effort to have the phone replaced immediately. The employee would normally access this web site via SSO from her corporate portal. The cell phone provider web site is enabled with CardSpace to simplify the IdP discovery and selection process. The employee is prompted to use her Employer card to authenticate to her employer's authentication service. The cell phone provider web site leverages CardSpace to handle IdP Selection rather than having to discover this themselves. Once the user has authenticated to her employer the returned security token contains the relevant information to service the employee's request for a new cell phone.”

It all sounds very interesting – amongst the first examples of what it means to have a full palette of identity options.  Ping is emblematic of an emerging ecology – many of us, across the industry, moving towards the Identity Big Bang.

Doc Searls will be doing the closing Keynote.  I'm really looking forward to that and to seeing you in Santa Clara.

Can namespaces survive name changes?

Arcadian Vision, an interesting place created by a person (I'm not sure who…) with deep knowledge of Ruby, thinks the namespace change problem I explained earlier today could have been avoided if we were using namespace schemes with a “little more indirection”. His thinking seems to spontaneously head in the same direction as Drummond Reed's.

Kim Cameron writes about namespace changes relating to Microsoft’s Cardspace initiative. The explanations offered sound good, but it’s hard to not be somewhat annoyed if you’re the one patching your code as a result of this change. This also reminds me of a few unconnected experiences that revolve, at least somewhat, around the permanence of URIs. URIs used to denote namespaces often (typically?) aren’t actually valid URLs. They specify a transfer protocol, but they’re not actually meant to be used with that protocol (e.g. they don’t link to documentation about that namespace). It seems to me that this is doubling the burden on a mechanism that isn’t necessarily appropriate. I suppose the argument goes that you control your domain, so you can split that resource among its various responsibilities. Sounds shaky to me, but let’s see where it leads us.

He reaches the conclusion:

So when I put it all together, I’m using my domain name to identify namespaces that are potentially distinct from the content served up via HTTP from that domain. I’m also using my domain name to locate information that isn’t intrinsically related to my domain. I think there’s a blog in there, too. Personally, I’m going to closely watch Google Base to see if it catches on. I could host my own data but have a unique Google Base identifier for it that I can edit to reflect changes in where I’m keeping my data. So how about rather than using a URI to identify my namespace, I identify it as this, which is a unique identifier, can be annotated with relevant metadata (like a link to documentation), and won’t screw anyone else up if I change the URL of my website.

I find it interesting that someone would think of using Google Base as a kind of XRI.  That's pretty far out of the box.  I can hear schema-addicts writhing in pain, but no one can argue with the simplicity of Arcadian's scheme.

Regardless, I think the case of whether to put InfoCard claims under “” or “” turns on a different set of issues.  I think the move makes a statement – one that is part of the essence of the InfoCard system – about the cross-industry character of the technology.  In other words, the semantics of the work are becoming richer as a result of the move.

In terms of using Google Base and names like , doesn't that have a fixed root too?  Arcadian ends up still being tied to a domain-based system, and the more he goes down this path, the more he will find himself becoming dependent on the domain.  If his approach were to become popular, everyone would be making themselves progressively more dependent on a single namespace with a commercial purpose and future – a course one shouldn't adopt without careful thought.

Arcadian should look at Drummond Reed's work before adopting conventional search engines for this particular purpose.  It introduces a framework of persistent identifiers that sit behind transient namespaces, and provides a mapping service with, as I understand it, no central commercial owner.  In other words, the indirection is offered through a new commons.  You can get an intro here and here.
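The value of that indirection is easy to picture as a tiny resolver: consumers hold on to a persistent identifier that never changes, while the location it maps to can move freely.  Here's a toy sketch (all identifiers and locations invented for illustration – this is the shape of the idea, not Drummond's actual resolution protocol):

```python
# Toy resolver: stable identifiers on the left, current locations on
# the right.  Consumers store only the persistent identifier; when a
# namespace moves, only the mapping gets updated.
registry = {
    "xri://example/claims": "https://old.example.com/claims",
}

def resolve(persistent_id: str) -> str:
    """Look up the current location behind a persistent identifier."""
    return registry[persistent_id]

# A namespace move touches only the registry, never the consumers:
registry["xri://example/claims"] = "https://new.example.com/claims"
```

Had claim URIs been layered this way from the start, a relying party could keep asking for the same persistent identifier through a namespace change.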


Namespace change in Cardspace release candidate

Via Steve Linehan, a pointer to Vittorio Bertocci's blog, Vibro.NET:

In RC1 (.NET framework 3.0, IE7.0 and/or Vista: for once, we have all nicely aligned) we discontinued the namespace, substituted by That holds both for the claims in the self issued cards (s-i-c) and for the qname of the issuer associated to s-i-c. If you browse a pre-RC RP site from a RC1 machine, you may experience weird effects. For example, like the Identity Selector claiming that the website is asking for a managed card from the issuer no longer recognized as the s-i-c special issuer. Note that it is often not a good idea to explicitly ask for a specific issuer 🙂

 If you want to see a sample of this, check out the updated version of the sandbox.

Why this change? As you may know, relying parties specify the claims they want the identity provider to supply (for example, “lastname” or “givenname”) using URIs.

Everyone will agree that the benefit of this is that the system is very flexible – anyone can make up their own URIs, get relying parties to ask for them, and then supply them through their own identity provider. 

But a lot of synergy accrues if we can agree on sets of basic URIs – much like we did with LDAP attribute names and definitions.  
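To make the URI-based claim model concrete, here's a minimal sketch of how a relying party might assemble the list of claims it wants an identity provider to supply.  The namespace is hypothetical – the whole point is that anyone can mint their own, which is exactly why agreeing on a common set matters:

```python
# Sketch: a relying party names the claims it wants as full URIs.
# The namespace below is made up for illustration -- any party can
# mint its own, which is why shared base sets are so valuable.
CLAIMS_NS = "https://claims.example.com/2006"

def required_claims(*names: str) -> str:
    """Build the space-separated list of claim URIs a relying party
    would advertise when requesting an Information Card."""
    return " ".join(f"{CLAIMS_NS}/{name}" for name in names)

print(required_claims("givenname", "surname", "emailaddress"))
```

Two relying parties using the same base namespace can be served by the same identity provider with no prior coordination – the LDAP-attribute analogy in a nutshell.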

Given that a number of players are implementing systems that interoperate with our self-asserted identity provider, it made sense to change the namespace of the claims from to  In fact this is an early outcome of our collaboration with the Open Source Identity Selector (OSIS) members.  Now that there are a bunch of people who want to support the same set of claims, it makes total sense to move them into a “neutral” namespace.

While this is therefore a “good and proper” refinement, it can pose a problem for people trying out the new software:  if you are using an early version of Cardspace with self-issued cards that respond to the “” namespace, it won't match new-fangled claims requested by a web site using the “” namespace.  And vice versa.  Further, the “card illumination” logic from one version won't recognize the claims from the other namespace.  Cardspace will think the relying party is looking for specialized claims supplied by a “managed card” provider (e.g. a third party).  Thus the confusing message.

After getting some complaints, I fixed this problem at identityblog: now I detect the version of cardspace a client is running and then dynamically request claims in either an old dialect or the new one.  I would say people would do well to build this capability into their implementation from day one.  My sample code is here.
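My actual detection code is in the sample linked above, but the general shape is easy to sketch.  Assuming (hypothetically) the site can tell whether the visiting identity selector is a release-candidate build, it simply picks whichever claim dialect that build understands.  The namespace strings here are placeholders, not the real URIs:

```python
# Sketch of dialect selection.  Both namespace strings are
# placeholders -- substitute the namespaces your deployment
# actually needs, and whatever client-version signal you observe.
OLD_NS = "https://old.example.com/identity/claims"
NEW_NS = "https://new.example.com/identity/claims"

def claim_namespace(client_is_release_candidate: bool) -> str:
    """Pick the claim dialect the visiting identity selector
    understands: the new namespace for RC clients, the old one
    for earlier (July-CTP-era) clients."""
    return NEW_NS if client_is_release_candidate else OLD_NS
```

The login page then emits its requested claims under `claim_namespace(...)`, so old and new clients both see a dialect they can match against their self-issued cards.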

The virtualization of crime

I love this piece by Scott Adams:

Imagine.  The Internet has no way of knowing who you are dealing with.  What environment could be more convenient for the criminally inclined?

Since starting to work on the Identity Metasystem I've learned more and more about the heists being pulled off in the context of virtual reality.  Over time, we have seen the attacks become more professionalized, and ultimately linked to well organized international syndicates.  Part of the basic equation is that the international nature of virtual reality makes it especially hard to deal with the type of organization that is emerging at the boundary of its interface with the brick and mortar world.

But recently, we've seen more highly focused attacks that are essentially artisanal.  It seems to be a case of “think globally, act locally.”  Some of the schemes put in place depend on intimate knowledge of the workings of specific sites, and even specific communities and individuals.  This is no longer generic targeting.  It's highly individualized, the work of community professionals of a special kind, who may draw upon internationally organized resources as necessary.

And of course this all makes sense.  Computerization has progressively worked its way through the various professions and industry sectors and nooks and crannies of our society, and we've reached the point that a growing number of criminals are no more likely to function without computers than are accountants (not to cast judgement on whether some accountants are or are not criminals…)

As the level of familiarity with technology grows and increasingly wider swaths of the population become aware of the opportunities that await us in virtual reality, it is obvious that more and more criminals will find their place there. 

I walked into my local Office Depot a few days ago and amazingly, almost all the stationery goods and high class pens and filing contraptions and things that have always made such places interesting, had basically disappeared into a distant corner, while the whole center of the store consisted of computers, printers, electronic cash registers and cameras.  A further indication of the growing virtualization of which cyber criminalization is just as natural a part.

But to keep any balance at all, we really do need to fix the fundamental architectural problem of the internet: having a way that we can, when we want to, be sure who we are connecting with.

In other words, we need to put in place an Identity Metasystem.

Practice equals theory – demos OK

In terms of certificate behavior, at least, all the metasystem components worked together as designed, across platforms and operators, during the recent change of site key and SSL certificate at identityblog.

I have to give, the operators of my site, credit for going through this on extremely short notice and without charge.  You could look at it as a proof of their ability to handle an emergency revocation, and another example of what a good company they are.

When using Information Cards in the most basic configuration, as I do on my site, the SSL certificate is also used for encryption of the WS-Trust token sent from the identity provider, so everything has to line up at the transport and message level.  The good news is that all this worked as predicted.
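Because the same certificate has to line up at both layers, a quick sanity check after a swap is to compare fingerprints: the one the web server presents should match the one the token-decryption side expects.  A small pure-Python sketch (PEM in, SHA-1 fingerprint out – how you obtain each PEM is up to your deployment):

```python
import base64
import hashlib
import re

def fingerprint(pem: str) -> str:
    """SHA-1 fingerprint of a PEM-encoded certificate, as
    colon-separated hex.  Comparing the fingerprint the web server
    presents with the one the token encryption code expects is a
    quick way to confirm everything lines up after a cert change."""
    body = re.search(
        r"-----BEGIN CERTIFICATE-----(.*?)-----END CERTIFICATE-----",
        pem, re.S).group(1)
    der = base64.b64decode(body)          # PEM body -> raw DER bytes
    digest = hashlib.sha1(der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))
```

If the two fingerprints differ, the identity provider will encrypt the token to a key the relying party can no longer see, which is exactly the failure mode a certificate change risks.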

It's now fine to use the site for demos – with only the usual caveats…