Resources have rights too

Paul Madsen has a knack for pithy identity wisdom.  But his recent piece on HealthVault's use of OpenID made me do a double take.

“Simon Willison defends HealthVault’s choice of OPs [OpenID providers – Kim].

“I disagree. It is I, as a user, that should be able to dictate to HealthVault the OPs from which they are to accept identity assertions through OpenID.

“Just as I, as a user of Vista, should be able to dictate to Microsoft which software partners they work with to bundle into the OS (I particularly like the Slow Down to Crawl install).

“Just as I, as a Zune user … oh wait, there are no Zune users….

“The mechanism by which I (the user) am able to indicate to HealthVault, or Vista, my preferences for their partners is called ‘the market’.”

Hmmm.  All passion aside, are Vista and HealthVault really the same kind of thing?

When you buy an operating system like Vista, it is the substratum of YOUR personal computer.  You should be able to run whatever YOU want on it.  That strikes me as part of the very definition of the PC.

But what about a cloud service like HealthVault?  And here I want to get away from the specifics of HealthVault, and talk generically about services that live in the cloud.  In terms of the points I want to make, we could just as easily be talking about Facebook, LinkedIn, Blogger or Hotmail.

As a user, do you own such a service? Do you run it in whatever way you see fit?  

I've tried a lot of services, and I don't think I've ever seen one that gives you that kind of carte blanche. 

Normally a service provides options. You can often control content, but you function within parameters.  Your biggest decision is whether you want to use the service in the first place.  That's a large part of what “the market” in services really is like.

But let me push this part of the discussion onto “the stack” for a moment.

PUSH

Last week a friend came by and told me a story.  One of his friends regularly used an Internet advertising service, and paid for it via the Internet too.  At some point, a large transaction “went missing”.  The victim contacted the service through which he was making the transaction, and was told it “wasn't their problem”.  Whose problem was it?

I don't know anything about legal matters and am not talking from that point of view.  It just seems obvious to me that if you are a company that values its relationships with customers, this kind of breach really IS your problem, and you need to face up to that.

And there is the rub.  I never want to be the one saying, “Sorry – this is your problem, not ours.”  But if I'm going to share the problem, shouldn't I have some say in preventing it and limiting my liability?

POP

I think that someone offering a service has the right to define the conditions for use of the service (let's for now ignore the fact that there may be some regulation of such conditions – for example certain conditions might be “illegal” in some jurisdictions).  And that includes security requirements.

In other words, matters of access control proceed from the resource.  The resource decides who can access it.   Identity assertions are a tool which a resource may use to accomplish this.  For years we've gotten this backwards, thinking access proceeded from the identity to the resource – we need to reverse our thinking.
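To make this concrete, here is a minimal sketch of resource-side access control (the issuer URLs, claim names and policy shape are all invented for illustration, not any real product's format):

```python
# Toy sketch of access control proceeding from the resource. All names
# are invented for illustration; this is not any real product's format.

TRUSTED_ISSUERS = {"https://op.example-bank.com", "https://op.example-gov.org"}
REQUIRED_CLAIMS = {"age_over_18", "account_verified"}

def resource_grants_access(assertion: dict) -> bool:
    """The resource decides: is the issuer one it accepts, and are the
    required claims present?  The assertion is evidence, not a key."""
    if assertion.get("issuer") not in TRUSTED_ISSUERS:
        return False
    return REQUIRED_CLAIMS.issubset(assertion.get("claims", {}))

# A user may present any assertion they like; the resource applies
# its own conditions of use.
print(resource_grants_access({
    "issuer": "https://op.example-bank.com",
    "claims": {"age_over_18": True, "account_verified": True},
}))  # True
```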

Takeaway:  “user-centric” doesn't mean The Dictatorship of the Users.  In fact there are three parties whose interests must be accommodated (the user, the resource, and the claims provider).  At times this is going to be complex.  Proclamations like, “It is I, as a user, that should be able to dictate…” just don't capture what is at stake here. 

I like the way Simon Willison puts this:

“You have to remember that behind the excitement and marketing OpenID is a protocol, just like SMTP or HTTP. All OpenID actually provides is a mechanism for asserting ownership over a URL and then “proving” that assertion. We can build a pyramid of interesting things on top of this, but that assertion is really all OpenID gives us (well, that and a globally unique identifier). In internet theory terms, it’s a dumb network: the protocol just concentrates on passing assertions around; it’s up to the endpoints to set policies and invent interesting applications.

“Open means that providers and consumers are free to use the protocol in whatever way they wish. If they want to only accept OpenID from a trusted subset of providers, they can go ahead. If they only want to pass OpenID details around behind the corporate firewall (great for gluing together an SSO network from open-source components) they can knock themselves out. Just like SMTP or HTTP, the protocol does not imply any rules about where or how it should be used…”

In a later post – where he seems to have calmed down a bit – Paul mentions a Liberty framework that allows relying parties to “outsource the assessment of… OPs to accredited 3rd parties (or at least provide a common assessment framework…)”.  This sounds more like the Paul I know, and I want to learn more about his thinking in this area.

Identity bus and administrative domain

Novell's Dale Olds, who will be on Dave Kearns’ panel at the upcoming European Identity Conference, has added the “identity bus” to the metadirectory / virtual directory mashup.  He says in part:

Meta directories synchronize the identity data from multiple sources via push or pull protocols, configuration files, etc. They are useful for synchronizing, reconciling, and cleaning data from multiple applications, particularly systems that have their own identity store or do not use a common access mechanism to get their identity data. Many of those applications will not change, so synchronizing with a metadirectory works well.

Virtual directories are useful to pull identity data through the hub from various sources dynamically when an application requests it. This is needed in highly connected environments with dynamic data, and where the application uses a protocol which can be connected to the virtual directory service. I am also well aware that virtual directory fans will want to point out that the authoritative data source is not the service itself, but my point here is that, if the owners shut down the central service, applications can’t access the data. It’s still a political hub.

Personally, I think all this meta and virtual stuff is a useful addition to THE key identity hub technology — directory services. When it comes to good old-fashioned, solid, scalable, secure directory services, I even have a personal favorite. But I digress.

The key point here as I see it is ‘hub’ vs. ‘bus’ — a central hub service vs. passing identity data between services along the bus.

The meta/virtual/directory administration and configuration is the limiting problem. In directory-speak, the meta/virtual/directory must support the union of all schema of all applications that use it. That means it’s not the mass of data, or speed of synchronization that’s the problem — it’s the political mass of control of the hub that becomes immovable as more and more applications rendezvous on it.

A hub is like the proverbial silo. In the case of meta/virtual/directories the problem goes beyond the inflexibility of large identity silos like Yahoo and Google — those silos support a limited set of very tightly coupled applications. In enterprise deployments, many more applications access the same meta/virtual/directory service. As those applications come and go, new versions are added, some departments are unwilling to move, the central service must support the union of all identity data types needed by all those applications over time. It’s not whether the service can technically achieve this feat, it’s more an issue of whether the application administrators are willing to wait for delays caused by the political bottleneck that the central service inevitably becomes.
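Dale's schema-union point is easy to see concretely. A toy sketch (the application schemas are invented):

```python
# Toy illustration of the schema-union problem (invented schemas).
# Each application that rendezvouses on the hub adds attributes the
# central service must carry for as long as anything depends on them.

app_schemas = {
    "hr_system":    {"employee_id", "surname", "cost_center"},
    "email":        {"employee_id", "smtp_address", "quota_mb"},
    "badge_access": {"employee_id", "badge_id", "clearance"},
}

hub_schema = set().union(*app_schemas.values())
print(sorted(hub_schema))
# The union only grows as applications come and go; retiring any
# attribute needs sign-off from every owner -- the "political mass".
```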

Dale makes other related points that are well worth thinking about.  But let me zoom in on the relation between metadirectory and the identity bus.

As Dale points out in his piece, I think of the “bus” as being a “backplane” loosely connecting distributed services.  The bus extends forever in all directions, since ultimately distributed computing doesn't have a boundary.

In spite of this, the fabric of distributed services isn't an undifferentiated slate.  Services and systems are grouped into continents by the people and organizations running and using them.  Let's call these “administrative domains”.  Such domains may be defined at any scale – and often overlap.

The magic of the backplane or “bus”, as Stuart Kwan called it, is that we can pass identity claims across loosely coupled systems living in multiple discontinuous administrative domains. 

But let's be clear.  The administrative domains still continue to exist, and we need to manage and rationalize them as much tomorrow as we did yesterday.

I see metadirectories (meaning directories of directories) as the glue for stitching up these administrative continents so digital objects can be managed and co-ordinated within them. 

That is the precondition for hoisting the layer of loosely coupled systems that exists above administrative domains.  And I don't think it matters one bit whether a given digital object is accessed by a remote protocol, synchronization, or stapling a set of claims to a message – each has its place.

Complex and interesting issues.  And my main concern here is not terminology, but making sure the things we have learned about metadirectory (or whatever you want to call it) are properly integrated into the evolving distributed computing architecture.  A lot of us are going to be at the European Identity Conference in Munich later this month, so I look forward to the sessions and discussions that will take place there.

How to safely deliver information to auditors

I just came across Ian Brown's proposal for doing random audits while avoiding data breaches like Britain's terrible HMRC Identity Chernobyl: 

It is clear from correspondence between the National Audit Office and Her Majesty's Revenue & Customs over the lost files fiasco that this data should never have been requested, nor supplied.

NAO wanted to choose a random sample of child benefit recipients to audit. Understandably, it did not want HMRC to select that sample “randomly”. However, HMRC could have used an extremely simple bit-commitment protocol to give NAO a way to choose recipients themselves without revealing any of the data related to those not chosen:

  1. For each recipient, HMRC should have calculated a cryptographic hash of all of the recipient's data and then given NAO a set of index numbers and this hash data.
  2. NAO could then select a sample of these records to audit. They would inform HMRC of the index values of the records in that sample.
  3. HMRC would finally supply only those records. NAO could verify the records had not been changed by comparing their hashes to those in the original data received from HMRC.

This is not cryptographic rocket science. Any competent computer science graduate could have designed this scheme and implemented it in about an hour using an open source cryptographic library like OpenSSL.

Ben Laurie notes that the redacted correspondence itself demonstrates a lack of basic security awareness. I hope those carrying out the security review of the ContactPoint database are better informed.
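He's right that it's simple. A minimal sketch of the commitment protocol in Python (hashlib standing in for OpenSSL; the record layout is invented for illustration):

```python
import hashlib
import json

def commit(records):
    """Step 1: HMRC publishes index -> hash, withholding the data itself."""
    return {idx: hashlib.sha256(json.dumps(rec, sort_keys=True).encode()).hexdigest()
            for idx, rec in records.items()}

def verify(sample, commitments):
    """Step 3: NAO confirms the supplied records match the earlier hashes."""
    return all(hashlib.sha256(json.dumps(rec, sort_keys=True).encode()).hexdigest()
               == commitments[idx]
               for idx, rec in sample.items())

records = {1: {"name": "A. Smith", "benefit": 1045},   # invented data
           2: {"name": "B. Jones", "benefit": 980}}
commitments = commit(records)        # sent to the auditor up front
sample = {2: records[2]}             # auditor picks index 2 (step 2)
print(verify(sample, commitments))   # True
```

A production version would also salt each record, so that low-entropy fields couldn't be brute-forced from the published hashes.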

Attention application developers: Obey Dave Kearns!

Dave Kearns, knife freshly sharpened, responded to my recent post on metadirectory with the polemic, “Killing the Metadirectory”:

… My interpretation is that the metadirectory has finally given way to the virtual directory as the synchronization engine for identity data. Kim interprets it differently. He talks about the “Identity Bus” and says that “…you still need identity providers. Isn’t that what directories do? You still need to transform and arbitrate claims, and distribute metadata. Isn’t metadirectory the most advanced technology for that? ” And I have to answer, “no.” The metadirectory is last century's technology and its day is past.

The Virtual Directory, the “Directory as a Service”, is the model for today and tomorrow. Data that is fresh, always available and available anywhere is what we need. The behemoth metadirectory with its huge datastore and intricate synchronization schedule (yet is never quite up to date) is just not the right model for the nimble, agile world of today's service driven computing. But the “bus” Kim mentions could be a good analogy here – the metadirectory is a lumbering, diesel-spewing bus. The virtual directory? It's a zippy little Prius…  [Full article here]

Who would want to get in the way of Dave's metaphors?  He's on a streak.  But he's making a fundamental mistake, taking an extreme position that is uncharacteristically naive.  I hope he'll rethink it.

Applications drive infrastructure

Here's the problem.  Infrastructure people cannot dictate how application developers should build their applications.  Applications – providing human and business value – drive infrastructure, not the other way around.  Infrastructure people who don't get this are doomed. 

Dave's neat little story about web service query needs to be put in the crucible of application development.  We need to get real.

Telling application developers how to live 

Real-time queries across web services solve some identity problems very well.  In those cases, application developers will be happy to use them.  But such queries don't solve all of developers’ identity needs, or even most of them.  When Dave Kearns starts to tell real live application developers they shouldn't put identity information in their databases, they'll tell him to take his zippy Prius and shove off. 

Application developers like to use databases and tables.  They have become expert at doing joins across tables and objects to produce quite magical results.  As people and things become truly first class objects in our applications, developers will want even more to include them in their databases. 

Think for a minute about the kinds of queries you need to do when you start building enterprise social networks.  “Show me all the friends of friends who work in a class of projects similar to the ones I work in…”  You need to do joins, eh?  So it's not just existing enterprise applications that have the need to support distributed storage – it's the emerging ones too.
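To see why developers reach for local tables, here's that query as a sketch using sqlite3 from Python's standard library (the schema is invented for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE people(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE friends(a INTEGER, b INTEGER);
    CREATE TABLE project_members(person INTEGER, project TEXT);
    INSERT INTO people VALUES (1,'me'),(2,'alice'),(3,'bob'),(4,'carol');
    INSERT INTO friends VALUES (1,2),(2,3),(1,4);
    INSERT INTO project_members VALUES (1,'identity'),(3,'identity'),(4,'payments');
""")

# Friends of friends who work on a project I work on: one query, three
# joins.  Doing this over per-person web service calls would mean a
# round trip for every edge in the graph.
rows = db.execute("""
    SELECT DISTINCT p.name
    FROM friends f1
    JOIN friends f2 ON f2.a = f1.b
    JOIN project_members pm ON pm.person = f2.b
    JOIN people p ON p.id = f2.b
    WHERE f1.a = 1
      AND pm.project IN (SELECT project FROM project_members WHERE person = 1)
""").fetchall()
print(rows)  # [('bob',)]
```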

Even looking for a moment just at Microsoft applications – SharePoint provides a good example – the developers ran into the need to maintain local tables so they could get the kind of performance and complex queries they need.  Virtual directory doesn't help them one iota in solving this kind of problem.  Nor do web service queries.

Betting big time against the house 

I admire many aspects of Dave's thinking about identity.  But I pity anyone who follows his really ideological argument that virtual directory solves everything and distributed storage just isn't needed.  We need both.

He's asking readers to bet against databases.  He's asking them to bet against the programming model used by application developers.  He's asking them to forget about performance.  He's asking them to take all the use cases in the world and stuff them into his Prius – which is actually more like a hobby horse than a car.

Once you have identity data distributed across stores you either have chaos or you have metadirectory.  I'll explore this more in upcoming posts.

Meanwhile, if anyone wants to bet against the future of databases and integration of identity information into them, drop me a note and I'll set up a page to take your money.  And at the same time, I recommend that you start training for a second career.

This said, I'm as strong a believer in using web services to query for claims in real time as Dave is.  So on that we very much agree.

Metadirectory and claims

My friend and long-time collaborator Jackson Shaw seems to have intrigued both Dave Kearns and Eric Norlin with an amusing (if wicked) post called “You won't have me to kick around anymore”:

You won't have me to kick around anymore!

No, not me. Hewlett-Packard.

I heard about a month ago that HP was going to bow out of the IDM business. I didn't want to post anything because I felt it would compromise the person that told me. But, now that it has made the news:

Check out Burton Group's blog entry on this very topic

Burton Group has been contacted by HP customers who report that HP is no longer going to seek new customers for its Identity Center product. We have contacted HP and the company confirms that HP Software has decided to focus its investment in identity management products exclusively on existing customers and not on pursuing additional customers or market share. HP is in the process of reaching out to each customer regarding the change.

Seriously – you thought HP was a contender in this space???!!! No, no, Nanette. Thanks for playing. Mission failure…

Let's be honest. The meta-directory is dead. Approaches that look like a meta-directory are dead. We talk about Identity 2.0 in the context of Web services and the evolution of digital identity but our infrastructure, enterprise identity “stuff” is decrepit and falling apart. I have visions of identity leprosy with this bit and that bit simply falling off because it was never built with Web services in mind…

There is going to be a big bang in this area. HP getting sucked into the black hole is just a step towards that…

As graphic as the notion of identity leprosy might be, it was the bit on metadirectory that prompted Dave Kearns to write,

That’s a quote from Quest’s Jackson Shaw. Formerly Microsoft’s Jackson Shaw. Formerly Zoomit’s Jackson Shaw. This is a guy who was deeply involved in metadirectory technology for more than a dozen years. I can only hope that Microsoft is listening.

Back at Jackson's blog we find out that he was largely responding to a session he liked very much, given by Neil MacDonald at a recent Gartner conference.  It was called “Everything You Know About Identity Management Is Wrong.”  Observing that customers are dissatisfied with the cost of hand-tailoring their identity and access management, Jackson says,

Neil also introduced the concept of “Identity as a service” to the audience. At the Directory Experts Conference, John Fontana wrote “Is Microsoft’s directory, identity management a service of the future?”   What I am stating is quite simple: I believe a big-bang around identity is coming and it will primarily be centered around web services. I hope the resultant bright star that evolves from this will simplify identity for both web and enterprise-based identity infrastructure.

Active Directory, other directories and metadirectory “engines” will hopefully become dial tone on the network and won't be something that has to be managed – at least not to the level it has to be today.

Without getting overly philosophical, there is a big difference between being, metaphorically, a “dial tone” – and being “dead”.  I buy Jackson's argument about dial tone, but not about “dead”. 

Web services allow solutions to be hooked together on an identity bus (I called it a backplane in the Laws of Identity).  Claims are the electrons that flow on that bus.  This is as important to information technology as the development of printed circuit boards and ICs were to electronics.  Basically, if we were still hand-wiring our electronic systems, personal computers would be the size of shopping centers and would cost billions of dollars.  An identity bus offers us the possibility to mix and match services in a dynamic way with potential efficiencies and innovations of the same magnitude.

In that sense, claims-based identity drastically changes the identity landscape.

But you still need identity providers.  Isn't that what directories do?  You still need to transform and arbitrate claims, and distribute metadata.  Isn't metadirectory the most advanced technology for that?  In fact, I think directory / metadirectory is integral to the claims based model.  From the beginning, directory allowed claims to be pulled.  Metadirectory allowed them to be pulled, pushed, synchronized, arbitrated and integrated.  The more we move toward claims, the more these capabilities will become important. 
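For readers who haven't lived inside a metadirectory, here's a toy sketch of what “transform and arbitrate” means in practice (the claim vocabularies and precedence rules are my own invention):

```python
# Toy metadirectory-style claim transformation: map claims from one
# administrative domain's vocabulary into another's, and arbitrate
# when sources disagree.  All names are illustrative only.

TRANSFORMS = {
    "hr:surname": "dir:sn",
    "hr:given":   "dir:givenName",
    "badge:id":   "dir:badgeNumber",
}
# Arbitration: which source wins for a given target attribute.
PRECEDENCE = {"dir:sn": ["hr", "badge"], "dir:badgeNumber": ["badge", "hr"]}

def transform(claims_by_source):
    out = {}
    for source, claims in claims_by_source.items():
        for name, value in claims.items():
            target = TRANSFORMS.get(name)
            if not target:
                continue  # unmapped claims stay in their home domain
            winner = PRECEDENCE.get(target, [source])[0]
            if target not in out or source == winner:
                out[target] = value
    return out

print(transform({
    "hr":    {"hr:surname": "Smith", "hr:given": "Ann"},
    "badge": {"badge:id": "B-1009"},
}))
# {'dir:sn': 'Smith', 'dir:givenName': 'Ann', 'dir:badgeNumber': 'B-1009'}
```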

The difference is that as we move towards a common, bus-based architecture, these capabilities can be simplified and automated.   That's one of the most interesting current areas of innovation. 

Part of this process will involve moving directory onto web services protocols.  As that happens, the ability to dispatch and assemble queries in a distributed fashion will become a base functionality of the system – that's what web services are good at.  So by definition, what we now call “virtual directory” will definitely be a base capability of emerging identity systems.
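That dispatch-and-assemble pattern is easy to sketch (the stores here are stand-ins for what would really be LDAP or web service endpoints):

```python
import asyncio

# Toy fan-out: dispatch one lookup to several stores concurrently and
# assemble a single view.  The store functions are stand-ins for what
# would really be LDAP or web service calls.

async def query_hr(user):
    await asyncio.sleep(0.01)            # simulated network latency
    return {"sn": "Smith", "cost_center": "42"}

async def query_badge(user):
    await asyncio.sleep(0.01)
    return {"badge_id": "B-1009"}

async def lookup(user):
    parts = await asyncio.gather(query_hr(user), query_badge(user))
    view = {}
    for part in parts:
        view.update(part)                # assemble one merged view
    return view

print(asyncio.run(lookup("asmith")))
```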

Ralf Bendrath on the Credentica acquisition

Privacy, security and Internet researcher and activist Ralf Bendrath is a person who thinks about privacy deeply. The industry has a lot to learn from him about modelling and countering privacy threats. Here is his view of the recent Credentica acquisition:

Microsoft has acquired Montreal-based privacy technology company Credentica. While that probably means nothing to most of you out there, it is one of the most important and promising developments in the digital identity world.

My main criticism around user-centric identity management has been that the identity provider (the party that you and others rely on, like your credit card issuer or the agency that gave you your driver's license) knows a lot about the users. Microsoft's identity architect Kim Cameron explains it very well:

[W]ith managed cards carrying claims asserted by a third party authority, it has so far been impossible, even for CardSpace, to completely avoid artifacts that allow linkage. (…) Though relying parties are not able to collude with one another, if they collude with the identity provider, a set of claims can be linked to a given user even if they contain no obvious linking information.

This is related to the digital signatures involved in the claims flows. Kim goes on:

But there is good news. Minimal disclosure technology allows the identity provider to sign the token and proof key in such a way that the user can prove the claims come legitimately from the identity provider without revealing the signature applied by the identity provider.

Stefan Brands was among the first to invent technology for minimal disclosure or “zero knowledge” proofs in the early nineties, similar to what David Chaum did with his anonymous digital cash concept. His technology was bought by the privacy firm Zero-Knowledge, which gave it back to Stefan when they ran out of funding. He has since built his own company, Credentica, and, together with his colleagues Christian Paquin and Greg Thompson, developed it into a comprehensive middleware product called “U-Prove” that was released a bit more than a year ago. U-Prove works with SAML, Liberty ID-WSF, and Windows CardSpace.

The importance of the concept of “zero-knowledge proofs” for privacy is comparable to the impact public key infrastructures (PKIs) described by Whitfield Diffie and Martin Hellman had on internet security. The U-Prove technology based on these concepts has been compared to what Ron Rivest, Adi Shamir and Leonard Adleman (RSA) did for security when they were the first to offer an algorithm and a product based on PKIs.

When I was at the CFP conference in Montreal last May, I met Kim and Stefan, and a colleague pointed me to the fact that Kim was being very nice to Stefan. “He has some cool patents Microsoft really wants”, my colleague said. Bruce Schneier recently also praised U-Prove, but questioned the business model for companies like Credentica. He added, “I’d like to be proven wrong.”

Kim Cameron is now bragging about having proven Bruce wrong (which is hard to imagine, given the fact that “Bruce Schneier feeds Schrödinger's cat on his back porch. Without opening the box”), while admitting that he still has no business model:

Our goal is that Minimal Disclosure Tokens will become base features of identity platforms and products, leading to the safest possible internet. I don’t think the point here is ultimately to make a dollar. It’s about building a system of identity that can withstand the ravages that the Internet will unleash. That will be worth billions.

Stefan Brands is also really happy:

For starters, the market needs in identity and access management have evolved to a point where technologies for multi-party security and privacy can address real pains. Secondly, there is no industry player around that I believe in as much as Microsoft with regard to its commitment to build security and privacy into IT systems and applications. Add to that Microsoft’s strong presence in many of the target markets for identity and access management, its brain trust, and the fact that Microsoft can influence both the client and server side of applications like no industry player can, and it is easy to see why this is a perfect match.

A good overview of other reactions is at Kim's latest blog post. The crucial issue has, again, been pointed out by Ben Laurie, who quotes the Microsoft Privacy Team's blog:

When this technology is broadly available in Microsoft products (such as Windows Communication Foundation and Windows Cardspace), enterprises, governments, and consumers all stand to benefit from the enhanced security and privacy that it will enable.

Ben sarcastically reads it like “the Microsoft we all know and love”, implying market domination based on proprietary technology. But the Microsoft we all know in the identity field is not the one we used to know with Passport and other crazy proprietary surveillance stuff. They have released the standards underlying the CardSpace claims exchange under an open specification promise, and Kim assures us that they will have their lawyers sort out the legal issues so anybody can use the technology:

I can guarantee everyone that I have zero intention of hoarding Minimal Disclosure Tokens or turning U-Prove into a proprietary Microsoft technology silo. Like, it’s 2008, right? Give me a break, guys!

Well. Given the fact that U-Prove is not just about claims flows, but involves fancy advanced cryptography, they really should do everybody a favour and release the source code and some libraries that contain the algorithm under a free license, and donate the patent to the public domain.

First of all, because yes – it's 2008, and “free is the new paid”, as even the IHT discovered in January 2007.

Second, because yes – it's 2008, and there has been an alternative product out there under a free license for more than a year. IBM's research lab in Zurich finished its Idemix identity software, which works with zero-knowledge proofs, in January 2007. It is part of the Higgins identity suite and will be available under an open source license. (The Eclipse lawyers seem to have been looking into this for more than a year, though. Does anybody know about the current status?)

Third, because yes – it's 2008, it's not 1883 anymore, to quote Bruce Schneier again:

A basic rule of cryptography is to use published, public, algorithms and protocols. This principle was first stated in 1883 by Auguste Kerckhoffs.

While I don't follow Ralf into every nook and cranny of his argument, I think he has a pretty balanced view.

But Ralf, you should tell your friend I was being very nice to Stefan in Montreal because I find him very amusing, especially with a scotch in him.  I would have tried to get his technology into widescale use whether I liked him or not, and I would have liked him just as much if he didn't have any patents at all.

I don't want to get into a “free is the new paid” discussion.  As the article you cite states, “Mass media given away freely or at low cost is hardly new, of course. In many countries, over-the-air television and radio have long been financed primarily by advertisers, at no direct cost to consumers.”  So what is new here?  When I can apply this paradigm to my next dinner, tell me about it. 

This having been vented, I come to exactly the same general conclusions you do:  we want a safe, privacy-friendly identity infrastructure as the basis for a safe, privacy-friendly Internet, and we should do everything possible to make it easier for everyone to bring that about.  So your suggestions go in the right direction.  If we were ultimately to give the existing code to a foundation, I would like to know what foundation people in the privacy community would suggest.

As for the business model issue, I agree with you and Bruce – and Stefan – that there is no obvious business model for a small company.  But for companies like Microsoft, our long term success depends on the flourishing of the Internet and the digital economy.  The best and most trustworthy possible identity infrastructure is key to that.  So for the Microsofts, the IBMs, the Suns and others, this technology fits very squarely into our business models.

As for the Identity and Access group at Microsoft, our goal is to have the most secure, privacy-friendly, interoperable, complete, easy to use and manageable identity products available.  As the Internet's privacy and identity problems become clearer to people, this strategy will attract many new customers and keep the loyalty of existing ones.  So there you have it.  To us, U-Prove technology is foundational to building a very significant business.
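For readers new to the idea, here is a toy illustration of minimal disclosure using salted hash commitments. To be clear, this is not Brands’ cryptography – U-Prove uses blind signatures and zero-knowledge proofs precisely so that even the issuer's signature can't be used for linkage – but it conveys the “reveal only what is needed” principle:

```python
import hashlib
import os

# Toy selective disclosure via salted hash commitments.  NOT Brands'
# actual scheme -- just an illustration of revealing a subset of claims
# while the rest stay hidden but verifiable.

def issue(claims):
    """Issuer commits to each claim; a real issuer would also sign
    the commitments."""
    salts = {k: os.urandom(16).hex() for k in claims}
    commitments = {k: hashlib.sha256((salts[k] + str(v)).encode()).hexdigest()
                   for k, v in claims.items()}
    return salts, commitments

def present(claims, salts, reveal):
    """User discloses only the requested claims, plus their salts."""
    return {k: (claims[k], salts[k]) for k in reveal}

def verify(presented, commitments):
    return all(hashlib.sha256((salt + str(v)).encode()).hexdigest() == commitments[k]
               for k, (v, salt) in presented.items())

claims = {"age_over_18": True, "name": "A. Smith", "postcode": "SW1"}
salts, commitments = issue(claims)
proof = present(claims, salts, reveal=["age_over_18"])  # name, postcode stay hidden
print(verify(proof, commitments))  # True
```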

Reactions to Credentica acquisition

Network World's John Fontana has done a great job of explaining what it means for Microsoft to integrate U-Prove into its offerings:

Microsoft plans to incorporate U-Prove into both Windows Communication Foundation (WCF) and CardSpace, the user-centric identity software in Vista and XP.

Microsoft said all its servers and partner products that incorporate the WCF framework would provide support for U-Prove.

“The main point is that this will just become part of the base identity infrastructure we offer. Good privacy practices will become one of the norms of e-commerce,” Cameron said.

“The U-Prove technology looks like a good candidate as an authentication mechanism for CardSpace-managed cards (i.e., those cards issued by an identity provider),” Mark Diodati, an analyst with the Burton Group, wrote on his blog.

In general, the technology ensures that users always have say over what information they release and that the data can not be linked together by the recipients. That means that recipients along the chain of disclosure can not aggregate the data they collect and piece together the user’s personal information.

[More here…]

Eric Norlin has this piece in CSO, and Nancy Gohring's ComputerWorld article emphasizes that “U-Prove is the equivalent in the privacy world of RSA in the security space.”  Burton's Mark Diodati covers the acquisition here.

Gunnar Peterson from 1 Raindrop notes in That Was Fast:

…the digital natives may be getting some better tooling faster than I thought. I am sure you already know there is a northern alliance and Redmond is U-Prove enabled. I fondly remember a lengthy conversation I had with Stefan Brands in Croatia several years ago, while he patiently explained to me how misguided the security-privacy collision course way of thinking is, and instead how real security is only achieved with privacy. If you have not already, I recommend you read Stefan’s primer on user identification.

Entrepreneur and angel investor Austin Hill gives us some background and links here:

In the year 2000, Zero-Knowledge acquired the rights to Dr. Stefan Brands’ work and hired Stefan to help us build privacy-enhanced identity & payments systems.  It turns out we were very early into the identity game, failed to commercialize the technology – and during the Dot.Com bust cycle we shut down the business unit and released the patents back to Stefan.  This was groundbreaking stuff that Stefan had invented, and we invested heavily in trying to make it real, but there weren’t enough takers in the market at that time.  We referred to the technologies as the “RSA” algorithms of the identity & privacy industry.  Unfortunately the ‘privacy & identity’ industry didn’t exist.

Stefan went on to found Credentica to continue the work of commercializing his invention. Today he announced that Microsoft has acquired his company and he and his team are joining Microsoft.

Microsoft’s Identity Architect Guru Kim Cameron has more on the deal on his blog (he mentions the RSA for privacy concept as well).

Adam Shostack (former Zero Knowledge Evil Genius, who also created a startup & currently works at Microsoft) has this post up.   George Favvas, CEO of SmartHippo (another Zero-Knowledge/Total.Net alumnus and entrepreneur) also blogged about the deal.

Congratulations to Stefan and the team.  This is a great deal for Microsoft, the identity industry and his team. (I know we tried to get Microsoft to buy or adopt the technology back in 2001 🙂)

(I didn't really know much about Zero-Knowledge back in 2000, but it's interesting to see how early they characterized Stefan's technology as the privacy equivalent of RSA.  It's wonderful to see people who are so forward-thinking.)

Analyst Neil Macehiter writes:

Credentica was founded by acknowledged security expert Stefan Brands, whose team has applied some very advanced cryptography techniques to allow users to authenticate to service providers directly without the involvement of identity providers. They also limit the disclosure of personally-identifiable information to prevent accounts being linked across service providers and provide resistance to phishing attacks. Credentica's own marketing literature highlights the synergies with CardSpace:

“The SDK is ideally suited for creating the electronic equivalent of the cards in one's wallet and for protecting identity-related information in frameworks such as SAML, Liberty ID-WSF, and Windows CardSpace.”

This is a smart move by Microsoft. Not only does it bring some very innovative and well-respected technology (with endorsements from the likes of the Information and Privacy Commissioner of Ontario, Canada) which extends the capabilities of Microsoft's identity and security offerings; it also brings some heavyweight cryptography and privacy expertise and credibility from the Credentica team. The latter can, and undoubtedly will, be exploited by Microsoft in the short term: the former will take more time to realise with Microsoft stating that integrated offerings are at least 12–18 months away.

[More here…]

Besides the many positives, there were concerns expressed about whether Microsoft would make the technology available beyond Windows.  Ben Laurie wrote:

Kim and Stefan blog about Microsoft’s acquisition of Stefan’s selective disclosure patents and technologies, which I’ve blogged about many times before.

This is potentially great news, especially if one interprets Kim’s

Our goal is that Minimal Disclosure Tokens will become base features of identity platforms and products, leading to the safest possible internet. I don’t think the point here is ultimately to make a dollar. It’s about building a system of identity that can withstand the ravages that the Internet will unleash.

in the most positive way. Unfortunately, comments such as this from Stefan

Microsoft plans to integrate the technology into Windows Communication Foundation and Windows Cardspace.

and this from Microsoft’s Privacy folk

When this technology is broadly available in Microsoft products (such as Windows Communication Foundation and Windows Cardspace), enterprises, governments, and consumers all stand to benefit from the enhanced security and privacy that it will enable.

sound more like the Microsoft we know and love.

I hope everyone who reads this blog knows that it is elementary, my dear Laurie, that identity technology must work across boundaries, platforms and vendors (Law 5 – not to mention, “Since the identity system has to work on all platforms, it must be safe on all platforms”). 

That doesn't mean it is trivial to figure out the best legal mechanisms for making the intellectual property and even the code available to the ecosystem.  Lawyers are needed, and it takes a while.  But I can guarantee everyone that I have zero intention of hoarding Minimal Disclosure Tokens or turning U-Prove into a proprietary Microsoft technology silo. 

Like, it's 2008, right?  Give me a break, guys!

Microsoft to adopt Stefan Brands’ Technology

The Internet may sometimes randomly “forget”.  But in general it doesn't. 

Once digital information is released to a few parties, it really is “out there”.  Cory Doctorow wrote recently about what he called the half-life of personal information, pointing out that personal information doesn't just “dissipate” after use.  It hangs around like radioactive waste.  You can't just push a button and get rid of it.

I personally think we are just beginning to understand what it would mean if everything we do is both remembered and automatically related to everything else we do.  No evil “Dr. No” is necessary to bring this about, although evil actors might accelerate and take advantage of the outcome.  Linkage is just a natural tendency of digital reality, similar to entropy in the physical world.  When designing physical systems a big part of our job is countering entropy.  And in the digital sphere, our designs need to counter linkage. 

This has led me to the idea of the “Need-to-Know Internet”.

The Need-to-Know Internet

“Need to Know” thinking comes from the military.  The precept is that if people in dangerous situations don't know things they don't need to know, that information can't leak or be used in ways that increase danger.  Taken as a starting point, it leads to a safer environment.

As Craig Burton pointed out many years ago, one key defining aspect of the Internet is that everything is equidistant from everything else. 

That means we can get easily to the most obscure possible resources, which makes the Internet fantastic.  But it also means unknown “enemies” are as “close” to us as our “friends” – just a packet away.  If something is just a packet away, you can't see it coming, or prepare for it.  This aspect of digital “physics” is one of the main reasons the Internet can be a dangerous place.

That danger can be addressed by adopting a need-to-know approach to the Internet.  As little personal information as possible should be released, and to the smallest possible number of parties.  Architecturally, our infrastructure should lead naturally to this outcome.

xmldap / openinfocard paymentCards

Axel Nennker from ignisvulpis has been enhancing the openinfocard identity selector – I'm hoping to catch up with him soon and learn more about where the project is headed.  Meanwhile this post is very interesting:

At DIDW 2007 I heard Sid Sidner talk about variable claims and how they could be used for online payment. Kim Cameron, who sat next to me during Sid's talk, suggested that I should include this into the openinfocard id selector. Today I uploaded two new applications to xmldap.org. You can use the STS to create a paymentCard and import it into the openinfocard id selector:

Next go to the paymentCard relying party. You can change the price to see that the claim can be changed by the merchant. Type a new price into the input field and press enter. Next click on the paymentCard icon to start the openinfocard id selector:

[screenshot]

Select a paymentCard using the openinfocard id selector:

[screenshot]

The result looks something like this:

[screenshot]
Please note the “trandata?” claim. This is the one that is modifiable by the relying party. It can contain anything. Sid suggested base64-encoding the data needed for 3-D Secure. I just use the variable claim to transport price information from the merchant to the STS. The basic principle: if a claim contains a ‘?’ then the matching of the claim against the claims in an information card stops; that is, the claim “matches” and the whole claim is sent to the STS in the RST. Of course this does not work with the current version of CardSpace. Some newer version of the openinfocard id selector should do it. This functionality has been in the selector since the end of October (I think). I did not find time to blog about this feature earlier. Have fun.
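Axel's matching rule is simple enough to sketch from his description (my own reconstruction, not the openinfocard source; the claim URIs are invented):

```python
# My reconstruction of the matching rule Axel describes (not the
# openinfocard source; claim URIs are invented).  A requested claim
# containing '?' short-circuits matching -- it is treated as a match,
# and the whole string, variable part included, goes to the STS in
# the RST.

def claim_matches(requested, card_claims):
    if "?" in requested:
        return True
    return requested in card_claims

card_claims = {"http://example.org/claims/trandata",
               "http://example.org/claims/pan"}

# The merchant can vary what follows the '?' (here, a price) and the
# card still matches; the STS sees the full claim string.
print(claim_matches("http://example.org/claims/trandata?price=42.00", card_claims))  # True
print(claim_matches("http://example.org/claims/cvv", card_claims))                   # False
```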

I tried importing the card into CardSpace, but wasn't able to do so since the openinfocard STS currently issues the card using an expired certificate.  CardSpace checks for this, and other identity selectors should too.  Is this one of the tests in the emerging information card interoperability test suite? 

I'll pick this up again once the certificate problem is fixed.  Until then, it works very nicely with the openinfocard selector.

CardSpace for the rest of us

I've been a Jon Udell fan for so long that I can't even admit to myself just how long it is!  So I'll avoid that calculation and just say I'm really delighted to see the CardSpace team get kudos for its long-tail (no-ssl) work in Jon's recent CardSpace for the rest of us:

Hat tip to the CardSpace team for enabling “long tail” use of Information Card technology by lots of folks who are (understandably) daunted by the prospect of installing SSL certificates onto web servers. Kim Cameron’s screencast walks through the scenario in PHP, but anyone who can parse a bit of XML in any language will be able to follow along. The demo shows how to create a simple http: (not https:) web page that invokes an identity selector, and then parses out and reports the attributes sent by the client.

As Kim points out this is advisable only in low-value scenarios where an unencrypted exchange may be deemed acceptable. But when you count blogs, and other kinds of lightweight or ad-hoc services, there are a lot of those scenarios.

Kim adds the following key point:

Students and others who want to see the basic ideas of the Metasystem can therefore get into the game more easily, and upgrade to certificates once they’ve mastered the basics.

Exactly. Understanding the logistics of SSL is unrelated to understanding how identity claims can be represented and exchanged. Separating those concerns is a great way to grow the latter understanding.

I've never been able to put it this well, even though it's just what I was trying to do.  Jon really nails it.  I guess that's why he's such a good writer while I have to content myself with being an architect.
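For anyone who wants to experiment, the claim-extraction step in the no-SSL scenario really is just a bit of XML parsing. A sketch in Python (the token below is illustrative and loosely SAML-shaped, not a real CardSpace token):

```python
import xml.etree.ElementTree as ET

# Illustrative only: a real posted token has more structure, but the
# attribute-extraction step is ordinary XML parsing like this.
token = """
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:1.0:assertion">
  <saml:AttributeStatement>
    <saml:Attribute AttributeName="givenname">
      <saml:AttributeValue>Ann</saml:AttributeValue>
    </saml:Attribute>
    <saml:Attribute AttributeName="emailaddress">
      <saml:AttributeValue>ann@example.org</saml:AttributeValue>
    </saml:Attribute>
  </saml:AttributeStatement>
</saml:Assertion>
"""

NS = {"saml": "urn:oasis:names:tc:SAML:1.0:assertion"}
root = ET.fromstring(token)
claims = {a.get("AttributeName"): a.find("saml:AttributeValue", NS).text
          for a in root.iter("{urn:oasis:names:tc:SAML:1.0:assertion}Attribute")}
print(claims)  # {'givenname': 'Ann', 'emailaddress': 'ann@example.org'}
```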