Is New Zealand's government a ‘justifiable party’?

Vikram Kumar works for New Zealand's State Services Commission on the All-of-government Authentication Programme.  As he puts it, “… that means my working and blog lives intersect….”  In this discussion of the Third Law of Identity, he argues that in New Zealand, where the population of the whole country is smaller than that of many international cities, people may consider the government to be a “justifiable party” in private sector transactions:

A recent article in CR80News called “Social networking sites have little to no identity verification” got me thinking about the Laws of Identity, specifically Justifiable Parties: “Digital identity systems must be designed so the disclosure of identifying information is limited to parties having a necessary and justifiable place in a given identity relationship.”

The article itself makes points that have been made before, i.e. on social networking sites “there’s no way to tell whether you’re corresponding with a 15-year-old girl or a 32-year-old man…The vast majority of sites don’t do anything to try to confirm the identities of members. The sites also don’t want to absorb the cost of trying to prove the identity of their members. Also, identifying minors is almost impossible because there isn’t enough information out there to authenticate their identity.”

In the US, this has thrown up business opportunities for some companies to act as third party identity verifiers. Examples are Texas-based Entrust, Dallas-based RelyID, and Atlanta-based IDology. They rely on public and financial records databases and, in some cases, government-issued identification as a fallback.

Clearly, these vendors are Justifiable Parties.

What about the government? It is the source of most of the original information. Is the government a Justifiable Party?

In describing the law, Kim Cameron says “Today some governments are thinking of operating digital identity services. It makes sense (and is clearly justifiable) for people to use government-issued identities when doing business with the government. But it will be a cultural matter as to whether, for example, citizens agree it is “necessary and justifiable” for government identities to be used in controlling access to a family wiki or connecting a consumer to her hobby or vice.” [emphasis added]

So, in the US, where there isn’t a high trust relationship between people and the government, the US government would probably not be a Justifiable Party. In other words, if the US government were to try to provide social networking sites with the identity of their members, the law of Justifiable Parties predicts that it would fail.

This is probably no great discovery; most Americans would have said the conclusion is obvious, law of Justifiable Parties or not.

Which then leads to the question of other cultures…are there cultures where government could be a Justifiable Party for social networking sites?

To address this, I think it is necessary to distinguish between the requirements of social networking sites that need real-world identity attributes (e.g. age) and the examples that Kim gives (family wiki, connecting a consumer to her hobby or vice), where only authentication is required (i.e. it is the same person each time, without a reliance on real-world attributes).

Now, I think government does have a role to play in verifying real-world identity attributes like age. It is, after all, the authoritative source of that information. If a person makes an age claim and government accepts it, government-issued documents reflect the accepted claim as what I call an authoritative assertion that other parties accept.
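As a sketch of what such an authoritative assertion might look like in code — my illustration, not igovt's actual design — a government issuer could sign a claim carrying only the age attribute, which a social networking site then verifies without seeing anything else about the person. A real system would use public-key signatures (as in SAML or Information Card tokens) rather than a shared HMAC key; HMAC simply keeps the sketch short.

```python
import hashlib, hmac, json

GOVT_KEY = b"demo-signing-key"  # stand-in; a real issuer signs with a private key

def issue_age_assertion(over_18: bool) -> dict:
    """Issuer side: sign a claim carrying only the age attribute."""
    claim = json.dumps({"over_18": over_18})
    sig = hmac.new(GOVT_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def relying_party_accepts(assertion: dict) -> bool:
    """Relying-party side: trust the claim only if the signature checks out."""
    expected = hmac.new(GOVT_KEY, assertion["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, assertion["sig"])
```

Note that the claim carries the single attribute needed and nothing more — the data minimization the caveats above call for.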

The question then is whether, in some high trust societies where there is a sufficiently strong trust relationship between society and government, the government can be a Justifiable Party in verifying the identity (or identity attributes such as age alone) of the members of social networking sites.

I believe that the answer is yes. Specifically, in New Zealand where this trust relationship exists, I believe it is right and proper for government to play this role. It is of course subject to many caveats, such as devising a privacy-protective system for the verification of identity or identity attributes and understanding the power of choice.

In NZ, igovt provides this. During public consultation held late last year about igovt, people were asked whether they would like to use the service to verify their identity to the private sector (in addition to government agencies). In other words, is government a Justifiable Party?

The results from the public consultation are due soon and will provide the answer. Based on the media coverage of igovt so far, I think the answer, for NZ, will be yes, government is a Justifiable Party.

It is noteworthy that if citizens give it the go-ahead, the State Services Commission is prepared to take on the responsibility and risk of managing all aspects of the digital identity of New Zealand's citizens.  The combined government and commercial identities the Commission administers will attract attackers.  Effectively, the Commission will be handling “digital explosives” of a greater potency than has so far been the case anywhere in the world.

At the same time, the other Laws of Identity will continue to hold.  The Commission will need to work extra hard to achieve data minimization after having collapsed previously independent contexts together. I think this can be done, but it requires tremendous care and use of the most advanced policies and technologies.

To be safe, such an intertwined system must, more than any other, minimize disclosure and aggregation of information.  And more than any other, it must be resilient against attack. 

If I lived in New Zealand I would be working to see that the Commission's system is based on a minimal disclosure technology like U-Prove or Idemix.  I would also be working to make sure the system avoids “redirection protocols” that give the identity provider complete visibility into how identity is used.  (Redirection protocols unsuitable for this usage include SAML and WS-Federation, as well as OpenID).    Finally, I would make phishing resistance a top priority.  In short, I wouldn't touch this kind of challenge without Information Cards and very distributed, encrypted information storage.

Identity bus and administrative domain

Novell's Dale Olds, who will be on Dave Kearns’ panel at the upcoming European Identity Conference, has added the “identity bus” to the metadirectory / virtual directory mashup.  He says in part:

Meta directories synchronize the identity data from multiple sources via push or pull protocols, configuration files, etc. They are useful for synchronizing, reconciling, and cleaning data from multiple applications, particularly systems that have their own identity store or do not use a common access mechanism to get their identity data. Many of those applications will not change, so synchronizing with a metadirectory works well.

Virtual directories are useful to pull identity data through the hub from various sources dynamically when an application requests it. This is needed in highly connected environments with dynamic data, and where the application uses a protocol which can be connected to the virtual directory service. I am also well aware that virtual directory fans will want to point out that the authoritative data source is not the service itself, but my point here is that, if the owners shut down the central service, applications can’t access the data. It’s still a political hub.
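Dale's contrast between synchronizing and dynamic pull can be caricatured in a few lines of Python. The store names and classes are mine, purely for illustration — real products of course speak LDAP and the like, not dictionaries:

```python
# Two hypothetical identity stores, each with its own schema.
hr = {"jdoe": {"mail": "jdoe@example.com"}}
crm = {"jdoe": {"phone": "555-0100"}}

class MetaDirectory:
    """Synchronizes: copies data from each source into its own store."""
    def __init__(self, sources):
        self.sources = sources
        self.store = {}
        self.sync()

    def sync(self):
        # Push/pull happens on a schedule; here we just merge everything once.
        for src in self.sources:
            for uid, attrs in src.items():
                self.store.setdefault(uid, {}).update(attrs)

    def lookup(self, uid):
        # Answers from its own copy: still works if a source goes away,
        # but goes stale between syncs.
        return self.store.get(uid, {})

class VirtualDirectory:
    """Holds no data: queries each source at request time."""
    def __init__(self, sources):
        self.sources = sources

    def lookup(self, uid):
        # Always current, but if the hub or its sources are down
        # there is nothing to answer from.
        merged = {}
        for src in self.sources:
            merged.update(src.get(uid, {}))
        return merged
```

Both present the same merged view of jdoe; the difference is where the data lives and what fails when the central service is shut down — which is exactly why Dale calls both of them a hub.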

Personally, I think all this meta and virtual stuff is a useful addition to THE key identity hub technology — directory services. When it comes to good old-fashioned, solid, scalable, secure directory services, I even have a personal favorite. But I digress.

The key point here as I see it is ‘hub’ vs. ‘bus’ — a central hub service vs. passing identity data between services along the bus.

The meta/virtual/directory administration and configuration is the limiting problem. In directory-speak, the meta/virtual/directory must support the union of all schema of all applications that use it. That means it’s not the mass of data, or speed of synchronization that’s the problem — it’s the political mass of control of the hub that becomes immovable as more and more applications rendezvous on it.

A hub is like the proverbial silo. In the case of meta/virtual/directories the problem goes beyond the inflexibility of large identity silos like Yahoo and Google — those silos support a limited set of very tightly coupled applications. In enterprise deployments, many more applications access the same meta/virtual/directory service. As those applications come and go, new versions are added, some departments are unwilling to move, the central service must support the union of all identity data types needed by all those applications over time. It’s not whether the service can technically achieve this feat, it’s more an issue of whether the application administrators are willing to wait for delays caused by the political bottleneck that the central service inevitably becomes.

Dale makes other related points that are well worth thinking about.  But let me zoom in on the relation between metadirectory and the identity bus.

As Dale points out in his piece, I think of the “bus” as being a “backplane” loosely connecting distributed services.  The bus extends forever in all directions, since ultimately distributed computing doesn't have a boundary.

In spite of this, the fabric of distributed services isn't an undifferentiated slate.  Services and systems are grouped into continents by the people and organizations running and using them.  Let's call these “administrative domains”.  Such domains may be defined at any scale – and often overlap.

The magic of the backplane or “bus”, as Stuart Kwan called it, is that we can pass identity claims across loosely coupled systems living in multiple discontinuous administrative domains. 

But let's be clear.  The administrative domains still continue to exist, and we need to manage and rationalize them as much tomorrow as we did yesterday.

I see metadirectories (meaning directories of directories) as the glue for stitching up these administrative continents so digital objects can be managed and co-ordinated within them. 

That is the precondition for hoisting the layer of loosely coupled systems that exists above administrative domains.  And I don't think it matters one bit whether a given digital object is accessed by a remote protocol, synchronization, or stapling a set of claims to a message – each has its place.

Complex and interesting issues.  And my main concern here is not terminology, but making sure the things we have learned about metadirectory (or whatever you want to call it) are properly integrated into the evolving distributed computing architecture.  A lot of us are going to be at the European Identity Conference in Munich later this month, so I look forward to the sessions and discussions that will take place there.

Through the looking glass

You have to like the way, in his latest piece on metadirectory, Dave Kearns summons Lewis Carroll to chide me for using the word “metadirectory” to mean whatever I want:

“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.”
“The question is, ” said Alice, “whether you can make words mean so many different things.”
“The question is,” said Humpty Dumpty, “which is to be master—that's all.”

Dave continues:

Kim talks about a “second generation” metadirectory. Metadirectory 2.0 if you will. First time I've heard about it. First time anyone has heard about it, for that matter. There is no such animal. Every metadirectory on the market meets the definition which Kim provides as “first generation”.

It's time to move on away from the huge silo that sucks up data, disk space, RAM and bandwidth and move on to a more lithe, agile, ubiquitous and pervasive identity layer. Organized as an identity hub which sees all of the authoritative sources and delivers, via the developer's chosen protocol, the data the application needs when and where it's needed.

It's funny.  I remember sitting around in Craig Burton's office in 1995 while he, Jamie Lewis and I tried to figure out what we should call the new kind of multi-centered logical directory that each of us had come to understand was needed for distributed computing. 

After a while, Craig threw out the word “metadirectory”.  I was completely amazed.  My colleagues and I had also come up with the word “metadirectory”, but we figured the name would be way too “intellectual” – even though the idea of a “directory of directories” was exactly right.

Craig just laughed the way he always does when you say something naive.  Anyone who knows Craig will be able to hear him saying, “Kim, we can call it whatever we want.  If we call it what it really is, how can that be wrong?”

So guess what?  The thing we were calling a metadirectory was a logical directory, not a physical one.  We figured that the output of one instance was the input to the next – there was no center.  The metadirectory would speak all protocols, support different technologies and schemas, support referral to specific application directories, and preserve the application-related characteristics of the constituent data stores.   I'll come back to these ideas going forward because I think they are still super important.

My message to Dave is that I haven't changed what I mean by metadirectory one iota since the term was first introduced in 1995.  I've always seen what is now called virtual directory as an aspect of metadirectory.  In fact, I shipped a product that included virtual directory in 1996.  It not only synchronized, but it did what we used to call “chaining” and “referral” in order to create composite views across multiple physical directories.  It did this not only at the server, but optionally on the client.

Of course, there were implementations of metadirectory that were “a bit more focussed”.  Customers put specific things at the top of their list of “must-haves”, and that is what everyone in the industry tried to build.

But though certain features predominated in the early days of metadirectory, that doesn't mean that those features ARE metadirectory.   We still live in the age of the logical directory, and ALL the aspects of the metadirectory that address that fact will continue to be important.

[Read the rest of Dave's post here.]

How to safely deliver information to auditors

I just came across Ian Brown's proposal for doing random audits while avoiding data breaches like Britain's terrible HMRC Identity Chernobyl: 

It is clear from correspondence between the National Audit Office and Her Majesty's Revenue & Customs over the lost files fiasco that this data should never have been requested, nor supplied.

NAO wanted to choose a random sample of child benefit recipients to audit. Understandably, it did not want HMRC to select that sample “randomly”. However, HMRC could have used an extremely simple bit-commitment protocol to give NAO a way to choose recipients themselves without revealing any of the data related to those not chosen:

  1. For each recipient, HMRC should have calculated a cryptographic hash of all of the recipient's data and then given NAO a set of index numbers and this hash data.
  2. NAO could then select a sample of these records to audit. They would inform HMRC of the index values of the records in that sample.
  3. HMRC would finally supply only those records. NAO could verify the records had not been changed by comparing their hashes to those in the original data received from HMRC.

This is not cryptographic rocket science. Any competent computer science graduate could have designed this scheme and implemented it in about an hour using an open source cryptographic library like OpenSSL.
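Indeed, the three steps can be sketched in a few lines of Python using the standard library's hashlib (the record contents below are illustrative, not real benefit data):

```python
import hashlib, json

def record_hash(rec: dict) -> str:
    """Canonical hash of one recipient's record."""
    return hashlib.sha256(json.dumps(rec, sort_keys=True).encode()).hexdigest()

# Step 1 -- HMRC: send NAO only index numbers and hashes, no personal data.
records = {1: {"name": "A. Smith", "ni": "QQ123456C"},   # illustrative only
           2: {"name": "B. Jones", "ni": "QQ654321A"}}
hashes = {idx: record_hash(rec) for idx, rec in records.items()}

# Step 2 -- NAO: choose its own random sample and request those index values.
sample = [2]

# Step 3 -- HMRC supplies only the sampled records; NAO verifies each one
# against the hash it already holds.
for idx in sample:
    supplied = records[idx]
    assert record_hash(supplied) == hashes[idx], "record was altered"
```

One refinement a real deployment would want: a random per-record salt folded into each hash and revealed only alongside the chosen records, so that hashes of low-entropy fields cannot be reversed by brute force.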

Ben Laurie notes that the redacted correspondence itself demonstrates a lack of basic security awareness. I hope those carrying out the security review of the ContactPoint database are better informed.

Cross industry interop event at RSA 2008

From Mike Jones at self-issued.info here's the latest on the Information Card and OpenID interop testing coming up at RSA.  The initiatives continue to pick up support from vendors, and visitors will get sneak peeks at what the many upcoming products will look like.

33 Companies…
24 Projects…
57 Participants working together to build an interoperable user-centric identity layer for the Internet!

Come join us!

Tuesday and Wednesday, April 8 and 9 at RSA 2008, Moscone Center, San Francisco, California
Location: Mezzanine Level Room 220
Interactive Working Sessions: Tuesday and Wednesday, 11am – 4pm
Demonstrations: Tuesday and Wednesday, 4pm – 6pm
Reception: Wednesday, 4pm – 6pm

OSIS Participants RSA 2008