The Laws of Identity

I've been working on how to make the Laws of Identity accessible to busy people without a technical background.  If you have ideas about how this can be improved please let me know:

 

People using computers should be in control of giving out information about themselves, just as they are in the physical world.

 

Only information needed for the purpose at hand should be released, and only to those who need it, just as we don’t indiscriminately broadcast our private information in daily life.   

 

It should NOT be possible to automatically link up everything we do in all aspects of how we use the Internet.  A single identifier that stitches everything up would be a big mistake. 

 

We need choice in terms of who provides our identity information in different contexts.

 

The system must be built so that as users, we can understand how it works, make rational decisions and protect ourselves. 

 

And finally, for all these reasons, we need a single, consistent, comprehensible user experience even though behind the scenes, different technologies, identifiers and identity providers are being used.

 

[UPDATE:  important comments integrated and new version here.]

Clarification

In response to my post earlier today on some OpenID providers who did not follow proper procedures to recover from a bug in Debian Linux, a reader wrote:

 

“You state that users who authenticated to the OpenID provider using an Information Card would not have their credentials stolen.  I assume that cracking the provider cert would allow the bad guys to tease a password out of a user, and that Information Cards require a more secure handshake than just establishing a secure channel with a cert. But it still seems that if the bad guys went to the effort of implementing the handshake, they could fool CardSpace as well. Why does that not expose the user's credentials?”

 

I'll try to be more precise.  I should have stayed away from the word “credential”.  It confused the issue.

 

Why?  There are two different things involved here that people call “credentials”.  One is the “credential” used when a user authenticates to an OpenID provider.  To avoid the “credential” word, I'll call this a “primordial” claim: a password or a key that isn't based on anything else, the “first mover” in the authentication chain.

 

The other thing some call a “credential” is the payload produced by the OpenID provider and sent to the relying party.  At the minimum this payload asserts that a user has a given OpenID URL.  Using various extensions, it might say more – passing along the user's email address for instance.  So I'll call these “substantive” claims – claims that are issued by an identity provider and have content.  This differentiates them from primordial ones.
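To make the distinction concrete, here is a rough sketch (in Python, purely for illustration) of what a substantive claim looks like when an OpenID provider sends its positive assertion to a relying party.  The field names follow the OpenID 2.0 and Simple Registration specifications; the values are invented.

```python
# Illustrative OpenID 2.0 positive assertion -- a "substantive" claim.
# Field names follow the OpenID 2.0 / Simple Registration specs; values are made up.
assertion = {
    "openid.ns": "http://specs.openid.net/auth/2.0",
    "openid.mode": "id_res",
    "openid.op_endpoint": "https://openid.example.com/server",
    "openid.claimed_id": "https://alice.example.com/",   # "this user owns this URL"
    "openid.identity": "https://alice.example.com/",
    "openid.return_to": "https://relyingparty.example.org/finish",
    "openid.response_nonce": "2008-07-15T12:00:00Zabc123",
    "openid.ns.sreg": "http://openid.net/extensions/sreg/1.1",
    "openid.sreg.email": "alice@example.com",             # extra content via an extension
    "openid.signed": "op_endpoint,claimed_id,identity,return_to,response_nonce,sreg.email",
    "openid.sig": "(base64 MAC over the signed fields)",
}
```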

 

With this vocabulary I can express my thoughts more clearly.  By using a self-issued Information Card like the one I employ with my OpenID provider – which is based on strong public key cryptography – we make it impossible to steal the primordial claim using the attack described.  That is because the secret is never released, even to the legitimate provider.  A proof is calculated and sent – nothing more.
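Here is a minimal sketch of what “a proof is calculated and sent” means, using the Python cryptography package purely as an illustration (the real CardSpace exchange is richer than this): the card's private key signs a fresh challenge and never leaves the user's machine.

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The private key of a self-issued card lives only on the user's machine.
card_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
registered_public_key = card_key.public_key()  # the provider stored this when the card was first used

# The provider issues a fresh challenge for this login attempt...
challenge = os.urandom(32)

# ...and the user proves possession of the key by signing it.  No secret is released.
proof = card_key.sign(challenge, padding.PKCS1v15(), hashes.SHA256())

# Whoever receives the proof -- legitimate provider or imposter -- learns nothing
# that lets it answer a different challenge later.
registered_public_key.verify(proof, challenge, padding.PKCS1v15(), hashes.SHA256())
print("possession of the card key demonstrated without revealing it")
```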

 

But let's be clear:  protecting the primordial claim this way doesn't prevent a rogue identity provider who has guessed the key of a legitimate provider – and poisoned DNS – from tricking a relying party that depends on its substantive claim.   Once it has the legitimate provider's key, it can “be” the legitimate provider.  The Debian Linux bug made it really easy to guess the legitimate provider's key.

 

Such a “lucky” rogue provider has “obtained” the legitimate provider's keys.  It can then “manufacture” substantive claims that the legitimate provider would normally only issue for the appropriate individual.  It's like the difference between stealing someone's credit card, and stealing a machine that can manufacture a duplicate of their credit card – and many others as well. 

 

So my point is that using Information Cards would have protected the primordial claim from the vulnerability described.  It would have prevented the user's keys from being stolen and reused.  But it would not have prevented the attack on the substantive claim even in the case of PKI, SAML or WS-Federation.  A weak key is a weak key.

 

The recently publicised wide-scale DNS-poisoning exploits do underline the fact that OpenID isn't currently appropriate for high value resources.  As I explained in more detail here back in February:

 

My view is simple.  OpenID is not a panacea.  Its unique power stems from the way it leverages DNS – but this same framework sets limits on its potential uses.  Above all, it is an important addition to the spectrum of technologies we call the Identity Metasystem, since it facilitates integration of the “long tail” of web sites into an emerging identity framework. 

 

Crypto flaw + bad practices = need for governance

Speaking of issues of governance, the Register just brought us this report on a recent “archeological investigation” by Ben Laurie and Richard Clayton that revealed how a Linux security flaw left a number of OpenID sites vulnerable to attack:

“Slipshod cryptographic housekeeping left some OpenID services far less secure than they ought to be.

“OpenID is a shared identity service that enables users to eliminate the need for punters to create separate IDs and logins for websites that support the service. A growing number of around 9,000 websites support the decentralised service, which offers a URL-based system for single sign-on.

“Security researchers discovered the websites run by three OpenID providers – including Sun Microsystems – used SSL certificates with weak crypto keys. Instead of being generated from billions of possibilities, the keys came from a set of just 32,768 options, due to a flaw in the random number generation routines used by Debian. The bug, which has been dormant on systems for 18 months, was discovered and corrected back in May.

“Keys generated by cryptographically flawed systems still needed to be replaced even after the software was upgraded. But recent research by Ben Laurie of Google reveals that 1.5 per cent of certificates he looked at contained weak keys. Three OpenID providers (openid.sun.com, xopenid.net and openid.net.nz) were among the guilty parties.

“To exploit the vulnerability, malicious hackers would need to trick surfers into visiting a site impersonating a pukka OpenID provider. But faking a digital certificate alone wouldn't do the trick without first misdirecting surfers to these bogus sites. Dan Kaminsky's recent discovery of a DNS cache poisoning flaw made it far more plausible to construct an attack that sent surfers the wrong way around the net's address lookup system, potentially to a bogus OpenID site posing as the real deal.

“The security flaw meant that even cautious users who check SSL certificates were at risk of handing over their OpenID credentials as part of a phishing attack. Such an attack would take a lot of effort to pull off and would only yield OpenID login credentials, which aren't especially useful for hackers and are difficult to monetise.

“Going after online banking credentials via a site that makes no attempt to offer up fake SSL certificates is a far more reliable moneyspinner, a factor that leads noted security researcher Richard Clayton to describe the attack as the “modern equivalent of a small earthquake in Chile”.

“Sun has responded to the issue by generating a new secure key, which reduces the scope for mischief but still leaves potential problems from the old key.

“More thoughts on the cryptographically interesting – though not especially life-threatening – flaw can be found in Clayton's posting on Cambridge University's Light the Blue Touchpaper blog here. A security advisory by Laurie and Clayton explaining the issue in greater depth can be found here.”

I tip my hat to Ben and Richard for doing what I think of as “system archeology” – looking into the systems people actually leave behind them, as opposed to the ones they think they have built.  We need a lot more of this.  In fact, we need to have full time archeologists rigorously exploring what is being deployed.

This said, I have to question the surprisingly opportunistic title of Richard's piece:  An insecurity in OpenID, not many dead.

Let's get real.  None of what went wrong here was in any way specific to OpenID.  The weakness would have struck any application that relied on crypto and was built on Debian Linux and operated in the same way.  This includes SSL, which for some reason doesn't get singled out.  And it applies to SAML, WS-Trust and PKI (i.e. any of the security-based identity protocols).  Is OpenID a convenient straw man? 

In fact there were really two culprits.  First, the crypto flaw itself, a problem in Linux.  Second, the fact that although the flaw had been fixed, new keys and certificates had not been obtained by a number of the operators. 
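To see why the combination was so dangerous, consider a toy model (not the actual OpenSSL code) in which the only entropy feeding key generation is a process ID between 1 and 32,768 – roughly what the Debian flaw amounted to.  An attacker can simply walk the whole space:

```python
import hashlib

def weak_key(pid: int) -> bytes:
    # Toy stand-in for the broken generator: the process ID is the only entropy.
    return hashlib.sha256(pid.to_bytes(2, "big")).digest()

# The victim's "secret" key material, generated on an affected machine.
victim_key = weak_key(4242)

# The attacker regenerates every candidate and checks which one matches the
# victim's published certificate (compared directly here for brevity).
recovered = next(pid for pid in range(1, 32769) if weak_key(pid) == victim_key)
print(f"key recovered after at most 32,768 guesses (seed {recovered})")
```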

So we are brought right back to the issue of governance, and in all fairness, Richard makes that point too.  Given the improper operating practices, and the fact that OpenID implies no contractual agreement, how would anyone have been able to sort out liability if the flaw had resulted in a serious breach?

Clearly, timely patching of one's operating system needs to be one of the host of requirements placed on any identity provider.  A system of governance would make this explicit, and provide a framework for assigning liability should the requirement not be met.  I really think we need to move forward on a broadly inclusive governance conversation.

And finally, just so no one thinks I have gone out of character, let's all note that any user who employed an Information Card to authenticate to the OpenID provider would NOT have had her credentials stolen, in spite of the vulnerability Ben and Richard have documented.   

 

New York Times on OpenID and Information Cards

Randall Stross has a piece in the NYT that hits the jackpot in explaining to non-technical readers what's wrong with passwords and how Information Cards help:    

“I once felt ashamed about failing to follow best practices for password selection — but no more. Computer security experts say that choosing hard-to-guess passwords ultimately brings little security protection. Passwords won’t keep us safe from identity theft, no matter how clever we are in choosing them.

“That would be the case even if we had done a better job of listening to instructions. Surveys show that we’ve remained stubbornly fond of perennial favorites like “password,” “123456” and “LetMeIn.” The underlying problem, however, isn’t their simplicity. It’s the log-on procedure itself, in which we land on a Web page, which may or may not be what it says it is, and type in a string of characters to authenticate our identity (or have our password manager insert the expected string on our behalf).

“This procedure — which now seems perfectly natural because we’ve been trained to repeat it so much — is a bad idea, one that no security expert whom I reached would defend.”

“The solution urged by the experts is to abandon passwords — and to move to a fundamentally different model, one in which humans play little or no part in logging on. Instead, machines have a cryptographically encoded conversation to establish both parties’ authenticity, using digital keys that we, as users, have no need to see.

“In short, we need a log-on system that relies on cryptography, not mnemonics.

“As users, we would replace passwords with so-called information cards, icons on our screen that we select with a click to log on to a Web site. The click starts a handshake between machines that relies on hard-to-crack cryptographic code…”

Randall's piece also drills into OpenID.  Summarizing, he sees it as a password-based system, and therefore a diversion from what's really important:

“OpenID offers, at best, a little convenience, and ignores the security vulnerability inherent in the process of typing a password into someone else’s Web site. Nevertheless, every few months another brand-name company announces that it has become the newest OpenID signatory. Representatives of Google, I.B.M., Microsoft and Yahoo are on OpenID’s guiding board of corporations. Last month, when MySpace announced that it would support the standard, the nonprofit foundation OpenID.net boasted that the number of “OpenID enabled users” had passed 500 million and that “it’s clear the momentum is only just starting to pick up.”

“Support for OpenID is conspicuously limited, however. Each of the big powers supposedly backing OpenID is glad to create an OpenID identity for visitors, which can be used at its site, but it isn’t willing to rely upon the OpenID credentials issued by others. You can’t use Microsoft-issued OpenID at Yahoo, nor Yahoo’s at Microsoft.

“Why not? Because the companies see the many ways that the password-based log-on process, handled elsewhere, could be compromised. They do not want to take on the liability for mischief originating at someone else’s site.”

Randall is right that when people use passwords to authenticate to their OpenID provider, the system is vulnerable to many phishing attacks.  But there's an important point to be made:  these problems are caused by their use of passwords, not by their use of OpenID. 

When people authenticate to OpenID in a reliable way – for example, by using Information Cards –  the phishing attacks are no longer possible, as I explain in this video.  At that point, it becomes a safe and convenient way to use a public persona.

The question of whether and when large sites will accept the OpenIDs issued by other large sites is a more complicated one.  I discussed a number of the issues here.   The problem is that for many applications, there needs to be a layer of governance on top of the basic identity technology.  What happens when something goes wrong?  Are there reliability and quality of service guarantees?  If information is leaked, who is responsible?  How is fiscal liability established?  And by the way, we need to figure this out in order to use any federation technology, whether OpenID, SAML or WS-Trust.

So far, these questions are being answered on an ad hoc basis, since there are no established frameworks.  I think you can divide what's happening into two approaches, both of which make a lot of sense: 

First, there are relying parties that limit the use of OpenID to low-value resources.  A great example is the French telecom company Orange.  It will accept IDs from any OpenID provider – but just for free services.  The approach is simply to limit use of the credentials to so-called low-value resources.  Blogger and others use this approach as well.

Second, there is the tack of using the protocol for higher-value purposes, but limiting the providers accepted to those with whom a governance agreement can be put in place.  Microsoft's HealthVault, for example, currently accepts OpenIDs from two providers, and plans to extend this as it understands the governance issues better.  I look at it as a very early example of a governance-oriented approach.
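A rough sketch of how a relying party might encode these two approaches – the provider URLs and the policy itself are invented for illustration:

```python
# Hypothetical relying-party policy: accept any OP for low-value resources,
# but only contracted, governance-covered OPs for high-value ones.
TRUSTED_PROVIDERS = {
    "https://op.contoso-health.example/",
    "https://op.fabrikam-id.example/",
}

def provider_acceptable(resource_value: str, op_endpoint: str) -> bool:
    if resource_value == "low":                 # the Orange / Blogger approach
        return True
    return op_endpoint in TRUSTED_PROVIDERS     # the governance-oriented approach

print(provider_acceptable("low", "https://anyones-op.example/"))            # True
print(provider_acceptable("high", "https://anyones-op.example/"))           # False
print(provider_acceptable("high", "https://op.contoso-health.example/"))    # True
```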

I strongly believe OpenID moves identity forward.  The issues of password attacks don't go away – in fact the vulnerabilities are potentially worse to the extent that a single password becomes the gate to more resources.  But technologies like Information Cards will solve these problems.  There is a tremendous synergy here, and that is the heart of the matter.  Randall writes:

“We won’t make much progress on information cards in the near future, however, because of wasted energy and attention devoted to a large distraction, the OpenID initiative. “

But I think this energy and attention will take us in the right direction as it shines the spotlight on the benefits and issues of identity, wagging identity's “long tail”. 

 

Getting down with Zermatt

Zermatt is a destination in Switzerland that benefits from what Nietzsche calls “the air at high altitudes, with which everything in animal being grows more spiritual and acquires wings”.

It's therefore a good code name for the new identity application development framework Microsoft has just released in Beta form.  We used to call it IDFX internally  – who knows what it will be called when it is released in final form? 

Zermatt is what you use to develop interoperable identity-aware applications that run on the Windows platform.  We are building the future versions of Active Directory Federation Services (ADFS) with it, and claims-aware Microsoft applications will all use it as a foundation.  All capabilities of the platform are open to third party developers and enterprise customers working in Windows environments.  Every aspect of the framework works over the wire with other products on other platforms.

I can't stress enough how important it is to make it easy for application developers to incorporate the kind of sensible and sophisticated capabilities that this framework makes available.  And everyone should understand that our intent is for this platform to interoperate fully with products and frameworks produced by other vendors and open source projects, and to help the capabilities we are developing to become universal.

I also want to make it clear that this is a beta.  The goal is to involve our developer community in driving this towards final release.  The beta also makes it easy for other vendors and projects to explore every nook and cranny of our implementation and advise us of problems or work to achieve interoperability.

I've been doing my own little project using the beta Zermatt framework and will write about the experience and share my code.  As an architect, I can tell you already how happy I am about the extent to which this framework realizes the metasystem architecture we've worked so hard to define.

The product comes with a good White Paper for Developers by Keith Brown of Pluralsight.  Here's how Zermatt's main ReadMe sets out the goals of the framework.

Building claims-aware applications

Zermatt makes it easier to build identity aware applications. In addition to providing a new claims model, it provides applications with a rich set of API’s to reason about the identity of a caller using claims.

Zermatt also provides developers with a consistent programming experience whether they choose to build their applications in ASP.NET or in WCF environments. 

ASP.NET Controls

ASP.NET controls simplify development of ASP.NET pages for building claims-aware Web applications, as well as Passive STS’s.

Building Security Token Services (STS)

Zermatt makes it substantially easier to build a custom security token service (STS) that supports the WS-Trust protocol. Such an STS is also referred to as an Active STS.

In addition, the framework also provides support for building STS's that support WS-Federation to enable web browser clients. These are also referred to as Passive STS's.

Creating Information Cards

Zermatt includes classes that you can use to create Information Cards – as well as STS's that support them.

There are a whole bunch of samples, and for identity geeks they are incredibly interesting.  I'll discuss what they do in another post.
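Zermatt itself is a .NET framework, so the sketch below is only a language-neutral illustration (written in Python, with invented names) of the claims model the ReadMe describes: the application never handles a password, only a set of claims issued by an STS, and it reasons about those claims to make its decisions.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    claim_type: str   # e.g. "name", "email", "purchasing_limit"
    value: str
    issuer: str       # which STS asserted it

def can_approve_purchase(claims: list[Claim], amount: float) -> bool:
    # The application reasons about the caller entirely in terms of issued claims.
    for c in claims:
        if c.claim_type == "purchasing_limit" and c.issuer == "https://sts.corp.example/":
            return amount <= float(c.value)
    return False

caller = [
    Claim("name", "Alice", "https://sts.corp.example/"),
    Claim("purchasing_limit", "5000", "https://sts.corp.example/"),
]
print(can_approve_purchase(caller, 1200.0))   # True
print(can_approve_purchase(caller, 9000.0))   # False
```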

Follow the installation instructions!

Meanwhile, go ahead and download.  I'll share one word of advice.  If you want things to run right out of the digital box, then for now slavishly follow the installation instructions.  I'm the type of person who never really looks at the ReadMe's – and I was chastened by the experience of not doing what I was told.  I went back and behaved, and the experience was flawless, so don't make the same mistake I did.

For example, there is a master installation script in the /samples/utilities directory called “SamplesPreReqSetup.bat”. This is a miraculous piece of work that sets up your machine certs automatically and takes care of a great number of security configuration details.  I know it's miraculous because initially (having skipped the readme) I thought I had to do this configuration manually.  Congratulations to everyone who got this to work.

You will also find a script in each sample directory that creates the necessary virtual directory for you.  You need this because of the way you are expected to use the visual studio debugger.

Using the debugger

In order to show how the framework really works, the projects all involve at least a couple of aspx pages (for example, one page that acts as a relying party, and another that acts as an STS).  So you need the ability to debug multiple pages at once.

To do this, you run the pages from a virtual directory as though they were “production” aspx pages.  Then you attach your debugger to the w3wp.exe process (under Debug, select “Attach to Process” and make sure you can see the processes from all the sessions; “wake up” the w3wp.exe process by opening a page, and then you'll see it in the list). 

For now it's best to compile the applications in the directory where they get installed.  It's possible that if you move the whole tree, they can be put somewhere else (I haven't tried this with my own hands).  But if you move a single project, it definitely won't work unless you tweak the virtual directory configuration yourself (why bother?).

Clear samples

I found the samples very clear, and uncluttered with a lot of “sample decoration” that makes it hard to understand the main high level points.  Some of the samples have a number of components working together – the delegation sample is totally amazing – and yet it is easy, once you run the sample, to understand how the pieces fit together.  There could be more documentation and this will appear as the beta progresses. 

The Zermatt team is really serious about collecting questions, feedback and suggestions – and responding to them.  I hope that if you are a developer interested in identity you'll take a look and send your feedback – whether you are primarily a Windows developer or not.  After all, our goal remains the Identity Big Bang, and getting identity deployed and cool applications written on all the different platforms. 

Key Piece of The Identity Puzzle

John Fontana, who writes expert pieces about identity for Network World, just posted this piece, called “Microsoft Sets Key Piece of Identity Puzzle”.   

Microsoft Wednesday released a beta of its most important tool to date for helping developers build applications that can plug into the company's Identity Metasystem and provide what amounts to a re-usable identity service for securing network resources.

Code-named Zermatt, the tools are a new extension to the .Net Framework 3.5 that helps developers more easily build applications that incorporate a claims-based identity model for authentication/authorization. Claims are a set of statements that identify a user and provide specific information such as title or purchasing authority…

John goes on to quote Stuart Kwan:

“The model is that when a user arrives at the applications, they bring claims that they fetched from an STS ahead of time,” says Stuart Kwan, director of program management for identity and access for Microsoft. “Zermatt is one part of building apps that can more easily plug into your environment. You use Zermatt so [applications] can use the STS in your environment.”

In fact, a network would have multiple STS nodes. Those nodes will eventually include Active Directory, which will have an STS built into the directory's Federation Services in the next version slated to ship sometime after 2008.

Microsoft will use the new Federation Services capabilities, Zermatt and STS technology to build toward its ultimate goal of an “identity bus.” The nirvana of the idea is that off-the-shelf applications could plug into the bus in order to authenticate users and provide access control.

In my view, as enterprise applications and desktop suites start to integrate with the identity metasystem, it will become obvious that businesses can build “business logic” into STS's and suddenly get a huge payoff by controlling access, identity and personalization in all their off-the-shelf and enterprise-specific applications.  This is going to be huge for developers, who will be able both to simplify and deliver value.

But back to John and Stuart:

Kwan says Zermatt also can be used to build an STS that would run on top of custom built stores of user data.  He says Zermatt could be used to build applications that accept information from CardSpace, the user-centric identity system in Vista and XP.

The final release of Zermatt is expected by year-end.

It is the first time Microsoft has so directly written its sizeable development army into its Identity Metasystem plan, which was first outlined in 2005 and defines a distributed identity architecture for multi-vendor platforms.

Read the full story here.

Problem between keyboard and seat

Jeff Bohren picks up on Axel Nennker's recent post:

Axel Nennker points out that the supposed “Cardspace Hack” is still floating around the old media. He allows that the issue is not really a Cardspace security hole, but a problem between the keyboards and seats at Ruhr University Bochum:

A while ago two students, Xuan Chen and Christoph Löhr, from Ruhr University Bochum claimed to have “broken” CardSpace. There were some blog reactions to this claim. The authoritative one of course is from Kim.

Today I browsed through a magazine lying on the desk of a colleague of mine. This magazine with the promising title “IT-Security” repeats the false claim and reports that the students proved that CardSpace has severe security flaws… Well, when you switch off all security mechanisms then, yes, there are security flaws (the security researcher in front of the computer).

Sort of what developers like me call an ID10T error.

Update: speaking of ID10T errors, I originally mistyped Axel’s name as Alex. My apologies.

What identity providers will sites support?

Paul Madsen digs deeper into the factors that will influence the choices of Internet service providers as they move towards user-centric identity.

“Often times, in trying to be clever and sarcastic, I dive too deep into the ‘satire pool’. The urge to be witty and contrarian surpasses the urge to be clear. Consequently, the ‘point’ I am trying to make can, on occasion, be buried underneath surface frivolity and snideness.
“As happened with my recent post on HealthVault‘s chosen model for OP acceptance.

“With that post, I have confused Kim, and for that I here apologize.

“I was responding to a post of Simon Willison, in which he defended HealthVault's right to choose OPs selectively – and not be compelled to accept any ol’ OP coming in off the street presenting an identity claim.

“My post might have given some the impression that I disagreed with Simon. For instance, I wrote

‘I disagree’

“Admittedly, this set a tone.

“But the rest of the post was meant to point out that, while I do think the user has the right to pressure RPs like HealthVault to accept assertions from particular OPs – the appropriate mechanism for this pressure, as for many other interactions between customers and service providers (e.g. buying an OS), is through market forces. If enough users choose an OP because it is secure and privacy-respecting, or because it offers 2-factor authentication, or because it has a snazzy flash UI, the RPs will find it (if they are interested in serving their customer base).

“When the RPs do find these candidate OPs (or IDPs, the issue is of course not unique to OpenID) they will themselves do their own checking and assessment before they start accepting assertions. And of course, each RP has to ask the question ‘Is this OP appropriate for the resources I protect/manage?’. If the resources are neither privacy sensitive nor valuable, the list of OPs that are appropriate will be longer than for medical or financial information.

“HealthVault (actually probably some other audit & risk management group in Microsoft) performed this assessment and, at least initially, came up with 2 OPs that they felt were right for them. More power to ’em. Partner selection is tough and fraught with risk – they are right to be careful.

“I smile (more a smirk really) when I hear some in the user-centric world place the sole right and responsibility of choosing an OP on the user's shoulders. Users can't even remember their passwords, and you want them to assess the security infrastructure of an OP?

Surgeon: So, are we ready for your operation tomorrow?
Patient: Hi Doc, yes. But I was just reading about this new surgical instrument for the procedure. I really want you to try it out on me.
Surgeon: Hmmm, I don't know much about it …
Patient: Oh, you'll work it out as you go

“So yes Kim, I agree. Resources, and gall bladders, do have rights.”

Now it becomes clear why his original piece was called Pressure. Meanwhile, everyone should know that the last thing I would ever want to do is cast a chill over Paul's satire pool. What a refreshing oasis it is!  (No pun intended.)

Resources have rights too

Paul Madsen has a knack for pithy identity wisdom.  But his recent piece on HealthVault's use of OpenID made me do a double take.

“Simon Willison defends HealthVault‘s choice of OPs [OpenID providers – Kim].

“I disagree. It is I, as a user, that should be able to dictate to HealthVault the OPs from which they are to accept identity assertions through OpenID.

“Just as I, as a user of Vista, should be able to dictate to Microsoft which software partners they work with to bundle into the OS (I particularly like the Slow Down to Crawl install).

“Just as I, as a Zune user … oh wait, there are no Zune users….

“The mechanism by which I (the user) am able to indicate to HealthVault, or Vista, my preferences for their partners is called ‘the market‘.”

Hmmm.  All passion aside, are Vista and HealthVault really the same kind of thing?

When you buy an operating system like Vista, it is the substratum of YOUR personal computer.  You should be able to run whatever YOU want on it.  That strikes me as part of the very definition of the PC.

But what about a cloud service like HealthVault?  And here I want to get away from the specifics of HealthVault, and talk generically about services that live in the cloud.  In terms of the points I want to make, we could just as easily be talking about Facebook, LinkedIn, Blogger or Hotmail.

As a user, do you own such a service? Do you run it in whatever way you see fit?  

I've tried a lot of services, and I don't think I've ever seen one that gives you that kind of carte blanche. 

Normally a service provides options. You can often control content, but you function within parameters.  Your biggest decision is whether you want to use the service in the first place.  That's a large part of what “the market” in services really is like.

But let me push this part of the discussion onto “the stack” for a moment.

PUSH

Last week a friend came by and told me a story.  One of his friends regularly used an Internet advertising service, and paid for it via the Internet too.  At some point, a large transaction “went missing”.  The victim contacted the service through which he was making the transaction, and was told it “wasn't their problem”.  Whose problem was it?

I don't know anything about legal matters and am not talking from that point of view.  It just seems obvious to me that if you are a company that values its relationships with customers, this kind of breach really IS your problem, and you need to face up to that.

And there is the rub.  I never want to be the one saying, “Sorry – this is your problem, not ours.”  But if I'm going to share the problem, shouldn't I have some say in preventing it and limiting my liability?

POP

I think that someone offering a service has the right to define the conditions for use of the service (let's for now ignore the fact that there may be some regulation of such conditions – for example certain conditions might be “illegal” in some jurisdictions).  And that includes security requirements.

In other words, matters of access control proceed from the resource.  The resource decides who can access it.   Identity assertions are a tool which a resource may use to accomplish this.  For years we've gotten this backwards, thinking access proceeded from the identity to the resource – we need to reverse our thinking.

Takeaway:  “user-centric” doesn't mean The Dictatorship of the Users.  In fact there are three parties whose interests must be accommodated (the user, the resource, and the claims provider).  At times this is going to be complex.  Proclamations like, “It is I, as a user, that should be able to dictate…” just don't capture what is at stake here. 

I like the way Simon Willison puts this:

“You have to remember that behind the excitement and marketing OpenID is a protocol, just like SMTP or HTTP. All OpenID actually provides is a mechanism for asserting ownership over a URL and then “proving” that assertion. We can build a pyramid of interesting things on top of this, but that assertion is really all OpenID gives us (well, that and a globally unique identifier). In internet theory terms, it’s a dumb network: the protocol just concentrates on passing assertions around; it’s up to the endpoints to set policies and invent interesting applications.

“Open means that providers and consumers are free to use the protocol in whatever way they wish. If they want to only accept OpenID from a trusted subset of providers, they can go ahead. If they only want to pass OpenID details around behind the corporate firewall (great for gluing together an SSO network from open-source components) they can knock themselves out. Just like SMTP or HTTP, the protocol does not imply any rules about where or how it should be used…”

In a later post – where he seems to have calmed down a bit – Paul mentions a Liberty framework that allows relying parties to “outsource the assessment of… OPs to accredited 3rd parties (or at least provide a common assessment framework…)”.  This sounds more like the Paul I know, and I want to learn more about his thinking in this area.