Radia Perlman on IBE

The ever-interesting James McGovern posted about Identity Based Encryption a while ago, wondering if Microsoft and Sun might add it to their product suites.

I was so busy travelling that I got swept away by other issues, but Sun's Pat Patterson persisted and recently posted a cogent note by Radia Perlman, one of his colleagues, which I thought hit a lot of the issues:

Identity based encryption.
Sigh.

This is something that some people in the research community have gotten all excited about, and I really think there's nothing there. It might be cute math, and even a cute concept. The hype is that it makes “all the problems of PKI go away”.

The basic idea is that you can use your name as your public key. The private key is derived from the public key based on a domain secret, known to a special node called the PKG (private key generator), which is like a KDC, or an NT domain controller.

Some of the problems I see with it:

a) public key stuff is so nice because nobody needs to know your secret, and the trusted party (the CA) need not be online. The PKG obviously needs to be online, and knows everyone's secrets

b) If Alice is in a different trust domain than Bob, she has to somehow securely find out Bob's PKG's public parameters (which enable her to turn Bob's name into a public key IN THAT DOMAIN).

c) Bob has to somehow authenticate to the PKG to get his own private key

d) There is no good answer to revocation, in case someone steals Bob's private key

e) There is no good answer to revocation, in case someone steals the PKG's domain secret.

I've seen hype slides about identity based encryption asking, “Which identity is easier to remember?”

In PKI: 237490798271278094178034612638947182748901728394078971890468193707
In IBE: radia.perlman@sun.com

This is such ill-conceived hype. In PKI no human needs to see an RSA key. The RSA key is not your identity. Your identity is still something like radia.perlman@sun.com.

So, it looks like IBE gives with one hand (sender can create a public key without the recipient's involvement) but takes much more away with the other (key secrecy, PKG has to be online, revocation issues). I guess there is no such thing as a free lunch…
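To make Radia's points concrete, here is a minimal structural sketch in Python. The cryptography is deliberately fake – real IBE (e.g. Boneh-Franklin) derives keys with bilinear pairings, and a simple HMAC stands in for that math here – but the trust relationships come through: the PKG holds a domain master secret, and anyone's private key is a pure function of their name plus that secret.

```python
import hmac
import hashlib

class PKG:
    """Private Key Generator: the online party that knows everyone's secrets.
    Purely a structural stand-in – real IBE uses bilinear pairings, not an
    HMAC, and supports actual public-key encryption to the derived keys."""

    def __init__(self, master_secret: bytes):
        self._master_secret = master_secret

    def extract_private_key(self, identity: str) -> bytes:
        # Any user's private key is derived from their name plus the domain
        # secret. This is why the PKG must be online (problem a), why Bob
        # must authenticate to it to get his own key (problem c), and why a
        # stolen domain secret compromises the whole domain (problem e).
        return hmac.new(self._master_secret, identity.encode(),
                        hashlib.sha256).digest()

# The sender needs no directory lookup: the recipient's name *is* the public
# key, given the domain's public parameters (omitted here – discovering them
# securely across trust domains is problem b).
pkg = PKG(master_secret=b"domain secret known only to the PKG")
bob_private_key = pkg.extract_private_key("bob@example.com")
```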

Let me put the same points just a bit differently. 

IBE is very interesting if you think everyone in the world can trust a single authority to hold everyone's secrets. 

OK, now let's move on.

When you do have multiple authorities, you need a way to discover them, so you need a secure email-to-authority mapping and lookup.  Yikes.  The only way to do that which is simpler than public key infrastructure itself is to use mail domains as the authority boundary, combined with some kind of secure DNS.

But in that case, your mail server can decrypt anything you receive, so it's no better than a conventional edge-to-edge encryption scheme (e.g. where mail from me to Pat gets encrypted leaving the Microsoft mail system and then decrypted when entering the Sun mail system).

Edge encryption is pretty well what everyone's building anyway, isn't it?  So what's the role for IBE?

My vision is that one day, Information Cards will be used to convey the information needed to do real end-to-end encryption using asymmetric keys – without the current difficulties of key distribution.  This said, signing interests me a lot more than end-to-end encryption in the short term.

More about this some other day.

Snoops highlight importance of second law

I'm back from a really intense visit to Australia. Some would call the trip home a “long flight”. But not me. I had the Sydney Morning Herald to read, which the day before had featured this piece on 100 government employees fired by their agency, Centrelink (hundreds of others were demoted). It seems – you guessed it – they had been snooping on (and even changing) hundreds of personal records.

So I was fascinated when I came across this piece during my flight. It quotes the head of the Australian government's Smartcard Privacy Taskforce, Professor Allan Fels:

Serious concerns have been raised about the federal government's planned Smartcard after more than 100 Centrelink staff lost their jobs for inappropriately accessing client records.

Labor has called for the privacy commissioner to investigate the breaches, in which 600 Centrelink staff browsed the welfare records of friends, family, neighbours and ex-lovers without authorisation.

And the man heading a privacy taskforce looking into the proposed Smartcard says he is deeply concerned by the breaches.

A total of 19 staff were sacked and 92 resigned after 790 cases of inappropriate access were uncovered.

In the most serious cases, staff members changed client details without authorisation as they spied on sensitive information.

Smartcard Privacy Taskforce head Allan Fels said the breaches highlighted why data on the proposed new card should be kept to a minimum.

The Smartcard will link welfare and other personal details of at least 17 million Australians.

“The Centrelink revelations are deeply disturbing,” Prof Fels told ABC radio.

“I take some comfort from the fact that the government has caught them and punished them but there is still a huge weight now on the government to provide full proper legal and technical protection of privacy with the access card.”

Prime Minister John Howard said Centrelink had dealt appropriately with employees who abused their positions of trust.

But opposition human services spokesman Kelvin Thomson said Privacy Commissioner Karen Curtis had to investigate.

Mr Thomson said the news came on top of revelations in June that the Child Support Agency had 405 privacy breaches in nine months – two of which required mothers and their children to be relocated at taxpayers’ expense.

He said the breaches raised serious concerns about the Smartcard.

“The government cannot expect Australians to accept the Smartcard proposal until it satisfies them that it has resolved their legitimate privacy concerns,” he said.

Centrelink spokesman Hank Jongen said five cases had been referred to the Australian Federal Police for investigation, while more than 300 staff faced salary deductions or fines, another 46 were reprimanded, and the remainder were demoted or warned.

The staff were caught by sophisticated “spyware” software monitoring access to client records.

Mr Jongen described the dragnet as a “mopping up exercise”, saying the number of staff involved was small considering Centrelink handled 80 million transactions every week for more than six million customers.

“So you've got to keep these incidents in context,” Mr Jongen told ABC radio.

“The overwhelming majority of our staff have not been involved in these activities.

“Often these activities have simply involved one of our staff, for example, surfing the details of family and friends or taking a peek at their neighbour's records.

“The number of serious offences that have occurred is only a small proportion of the total number.”

Community and Public Sector Union deputy national president Lisa Newman said the job losses were regrettable but the union had long warned Centrelink members about the dangers of inappropriate data access.

Opposition Leader Kim Beazley said the breaches demonstrated the government's administrative incompetence.

Mr. Jongen sounds like a lot of spokesmen, doesn't he? Do spokesmen all train as junior camp counsellors? He doesn't see “taking a peek at a neighbour's records” as being “a serious offense”? Luckily we have Mr. Fels standing by to provide adult supervision.

The interesting thing about this story is that on one hand, you have the prospect of a card. On the other, you have the current problems of centralized data storage.

Note that the reason employees could inappropriately access sensitive information was because it was sitting in databases they could get to – not because it was present on a card in someone's wallet.

Centralized databases worry me way more than any other aspect of this technology.

Law of Minimal Disclosure or Norlin's Maxim?

John at IDology has posted a more detailed description of how knowledge-based authentication works.  I'll pick up part of it here.  Go to his blog to see his response to Adam's comments.  John says:

“…Let's look at this in relation to an e-commerce transaction where we are buying something on the Internet over $250.

“First, because we (the consumers) have voluntarily submitted our information with the intention of entering into a business transaction, we have given our consent for the business to verify the information we’ve presented.

“Once the business receives the information, in the interest of controlling fraud and completing the transaction as quickly as possible (avoiding a manual review of the transaction by the business), it uses an automatic system to verify that the personal information submitted is linked to a real person and that I am indeed that person.

“Enter IDology’s knowledge-based authentication (KBA), which scours (without exposing) billions of public data records to develop on-the-fly intelligent multiple-choice questions for the person to answer. Our clients vary in their delivery of KBA: some reward their customer with expedited shipping for going through the process; others consider it a further extension of the credit card approval process, during which various data elements associated with the credit card – such as address verification – are validated along with the credit approval.

“The key is for a business to use a KBA system that bases its questions on non-credit data and reaches back into your public records history so that the answers are not easily guessed or blatantly obvious. Typically, consumers find credit based questions (what was the amount of your last mortgage payment, bank deposit, etc) intrusive and difficult to answer, and these types of answers can be forged by stealing someone’s credit report or accessed with compromised consumer data. Without giving away too much of our secret sauce, our questions relate to items such as former addresses (from as far back as college), people you know, vehicle information and anything else that can be determined confidentially while not exposing data from existing public data sources. Once the system processes the results (which is all real-time processing), it simply shares how many questions were answered right or wrong so that the business can determine how to handle the transaction further. The answers are not given within the transaction processing (protecting the consumer and the business from employees misusing data) and good KBA systems have lots of different types of questions to ask, so that the same questions are not always presented and one question doesn’t give away the answer to another…

“At the end of the day, the consumer, by completing this ecommerce transaction, is establishing a single pointed trusted identity with that business. The next extension is how the consumer can utilize this verification process to validate his/her identity to complete other economic transactions or have an established verified identity to make posts to a blog or enter into a conversation in a social network where participants have agreed to be verified to establish a trusted network or may be concerned with the age of someone in their verified network. To us, KBA can be an important part of establishing and maintaining a trusted identity.”

Let's begin by supposing this technology becomes widely adopted.
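To keep the discussion concrete, here is a rough sketch of the flow as John describes it. IDology doesn't publish its internals, so every name and record below is hypothetical; the point is just that the questions come from public-records history, the answers are scored server-side, and only the right/wrong tally is released to the business.

```python
import random

# Hypothetical stand-in for IDology's aggregation of "billions of
# public data records"; the first entry in each list is the truth.
PUBLIC_RECORDS = {
    "alice@example.com": {
        "street you once lived on": ["Maple Ave", "Oak St", "Elm Rd", "Pine Ct"],
        "vehicle you once owned":   ["Civic", "Corolla", "Jetta", "Focus"],
    },
}

def generate_questions(subject):
    """Build on-the-fly multiple-choice questions from the subject's
    public-records history, mixing the true answer in with decoys."""
    questions = []
    for topic, options in PUBLIC_RECORDS[subject].items():
        correct, choices = options[0], options[:]
        random.shuffle(choices)
        questions.append({"topic": topic, "choices": choices, "answer": correct})
    return questions

def score(questions, answers):
    """Release ONLY the tally – never the underlying records – so the
    business (and its employees) can't harvest the data itself."""
    return sum(1 for q, a in zip(questions, answers) if a == q["answer"])

questions = generate_questions("alice@example.com")
right = score(questions, ["Maple Ave", "Civic"])
print(f"{right} of {len(questions)} questions answered correctly")
```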

My first concern would regard the security of the system from the merchant and banking point of view.  Why wouldn't an organized crime syndicate be able to set itself up with exactly the same set of publicly available databases used by IDology and thus be able to impersonate all of us perfectly – since it would know all the answers to the same questions?  It seems feasible to me.  I think it is likely that this technology, if initially successful and widely deployed, will crumble under attack because of that very success.

My second concern regards the security of the system from the point of view of the individual; in other words, her privacy.  IDology's approach takes progressively more obscure aspects of a person's history and then, through the question and answer process, shares them with sites that people shouldn't necessarily trust very much. 

The scenario is intended to weed out bad apples talking to good sites, but if adopted widely, it undermines the security of good apples talking to bad sites – or even of good apples talking to sites whose morals are influenced by the profit motive (not that there are many of those around).

Is this really an application of minimal disclosure?  I fear it is more an application of Norlin's Maxim:  The internet inexorably pulls information from the private domain into the public domain.  Like a tree falling in a forest with no one to hear it, historical data that is left alone, even though digital, presents less of a privacy problem than data that is circulated widely.

I would much rather see IDology apply its resources to the initial registration of a user, and provision a service which then releases only the results of its inquest (e.g. some number between 1 and 10) as an identity claim.  This would be data minimization in line with my second law.
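A sketch of the shape I have in mind – all names hypothetical, and an HMAC standing in for a real asymmetric signature. Verification happens once, at registration; afterwards, relying parties see only a signed claim carrying the score, never the questions and never the history behind them.

```python
import hashlib
import hmac
import json

# Stand-in for the verification service's signing key; a real service
# would use an asymmetric key pair and publish the public half.
SIGNING_KEY = b"verification service signing key"

def issue_score_claim(subject, verification_score):
    """After the one-time registration inquest, issue a claim disclosing
    only the score (say 1 to 10) – not the data used to compute it."""
    claim = {"issuer": "verification-service.example",
             "subject": subject,
             "verification_score": verification_score}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def relying_party_accepts(token, minimum_score=7):
    """The relying party checks the signature and the score – it never
    touches the registrant's personal history."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, token["signature"])
            and token["claim"]["verification_score"] >= minimum_score)

token = issue_score_claim("alice@example.com", verification_score=9)
print(relying_party_accepts(token))  # True
```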

I still worry that organized crime could take advantage of its access to public information to subvert even the singular registration phase, but at least the mechanisms used by IDology and like firms could include ones which attackers are unlikely to learn about (this is itself no small feat). 

Clearly, in line with the first law of identity, users would have to know what the strength of their rating is, and how to seek redress should it not be right. 

It's not my place to argue how things should be done – I'm just expressing my concerns about John's system as he has described it and I have understood it.

In short, I would much prefer a claims based approach to that of having the “secret public” information flow through untrusted relying parties.  I especially worry about teaching users to enter even more obscure information into forms appearing on free-floating web pages – which would be like enrolling them in a graduate course at the School of Blabbing Your Secrets.

Issues raised by Knowledge Verification

Adam at Emergent Chaos outlines several issues he thinks arise from IDology's approach to Knowledge Verification:

I don’t like these types of systems for three reasons:

First, they are non-consensual for the consumer. Companies such as IDology make deals with other companies, such as my bank, and then I’m forced to use the system.

Second, the information that such companies can gather is probably already being gathered by ChoicePoint, Acxiom, Google, and others. So the assertions that “it’s cheap for us, and expensive for the attackers” are hard to accept as credible.

Third, if truth and your database don’t agree, then we’re forced to have a reconciliation process, in which I, or the id thief, convince the company to change its answers. How does that process work?

I hope John at IDology can respond at the same time he gives us concrete examples of how the system works in practice.

Knowledge verification

Reading this post by John at IDology, I'm starting to understand how “knowledge verification” can differ from conventional uses of personal identifying information:

So I came across some interesting commentary in the blogosphere regarding verification services sparked by Jessica’s article I blogged about in my last entry (which you can now read a version of in The Charlotte Observer). In the article, Jessica describes the verification chain (which I must point out is only a brief snapshot as well as a combination of several different processes from different providers) that prompted Conor Cahill to post on the problems of verification services in general.

While I think Kim Cameron’s blogpost response helps clarify verification as it relates to Identity 2.0…

“Right now we give all our identifying information to every Tom, Dick and Harry…What if we just gave it to Tom, or a couple of Toms, and the Toms then vouched for who we are? We would ‘register’ with the Toms, and the Toms would make claims about us and the chances of having our identity stolen would drop…”

…there is still light to be shed on what a verification service is and how it in fact works today to protect consumer data from being further compromised in the event of becoming a victim of identity theft.

Conor comments: “I would hope they start to add stronger verification that the person who “knows” this stuff is actually the person whose data is being verified…We really need to move away from knowledge of basic facts as a verification of identity, especially when many of those facts are published in one form or another.”

Yes, in some instances some verification providers are using current information, credit history and other data resources that are easy for thieves to buy, know or guess when impersonating someone. That’s why using knowledge-based information on past personal history is much more effective. This information is hard to dig up. Admittedly it’s not foolproof against our mother or spouse, but if someone that close to me steals my identity then there are other levels of trust issues to be discussed.

Based on Kim’s comment

“I’ve been asked so many times for the name of my first pet that I’ve had to make one up.”

I want to clarify that this form of verification does not fall in the category of what I define as knowledge based authentication. Sure, it’s based on knowledge, but it’s a knowledge we provide which is then stored in a database for when we inevitably forget our password. Considering most consumers probably use the same question/answer and passwords or combination password at several different sites, consumers are in a real pickle when a data breach occurs or a laptop with those records is stolen. The solution for this of course is very eloquently addressed in the Tom, Dick and Harry example Kim Cameron provided, but it’s important to explain that knowledge verification services, as they relate to ecommerce today and in the future for Identity 2.0, are intelligent-based and ask you questions not every Tom, Dick and Harry use or know.

It would help to understand the concepts better if John would give us some examples of how this works in practice. What kinds of questions are asked, and how does IDology know the answers?


WordPress vulnerability at identityblog

Sun's Rohan Pinto has spent a fair amount of time this week using a recipe that has been discussed in the Blogosphere recently to hack into my blog, which runs WordPress 2.0.1, and then apologizing for it (I appreciate that, Rohan).

He was able to use a vulnerability in WordPress to employ his “subscriber” account (which normally only grants comment rights) in order to import a fake post onto my site (I've since removed it but it is shown at the right).

The exploit used was described about three weeks ago (July 27th, 2006) when Dr. Dave published his “Critical Announcement affecting ALL WordPress Users.”  All in all, it was a fairly stern warning.  I would have upgraded to a newer version of WordPress but couldn't because I was travelling:

If you are running WordPress as your blogging platform and if you have been trusting enough to leave User registration enabled for guests, DISABLE IT IMMEDIATELY (in wp-admin >> options: make sure “Anyone can register” is not checked).

Additionally, delete or disable ANY guest account already created by people you are not sure about.

Leaving it open and letting people sign-up for guest accounts on your WordPress blog could lead to incredibly nasty stuff happening if anybody so desired. And trust me I am not exaggerating this. So don’t wait a second to disable this option and please relay the message. WordPress dev team has been notified a while back and I dare hope they will soon start acting on it, if only by relaying a similar announcement through the official channel (as well as, of course, releasing a proper patch).

Sorry for the shrill hysterical tone, but this is a big deal. However, disable that one option and you are fine, no need to panic further :)

[cheers go to Geoff Eby for discovering and bringing this insane security exploit to my attention]

Initially Rohan entitled the post that described the exploit, “Is Cardspace Secure Enough?”.  That bothered me, since the exploit had nothing to do with InfoCard or Cardspace or my PHP demo code.  Rohan was good enough to later make that perfectly clear:

Pursuant to my prior post. Please do take note of this. I would like to make it crystal clear to everybody that me logging into Kim’s blog and publishing as “him” was NOT a infocard exploit, but rather a “wordpress” exploit…

Please, please, please do note, that this IS NOT a infocard hack.

Conor Cahill read about the exploit and commented:

Access Control is always going to be a responsibility of the entity managing the resource (in this case, Kim's blog is managed by a wordpress installation that he setup on his server, so his server must manage the access control).

The selection of the tool to manage the resource will be based upon the reliability of the manager and the value of the resource.

I'm sure Kim wouldn't have put his bank account up on wordpress without a lot more testing and perhaps requiring someone else to stand behind it should there be such a problem…

All of this is true, of course, with the exception that my blog usually has more in it than my bank account.  Further, in the case of WordPress, it is the application that manages the security, not the underlying operating system or environment (in this case a LAMP stack) or hardware.

Of course, I didn't choose WordPress because it was the most secure solution in the world;  I chose it because it was an interesting blogging tool, with a lot of cool features, and would help me learn about the issues confronting people on non-Microsoft platforms so I could have a more inclusive view of identity problems.  And it has been great for those purposes.

You might think I would be abandoning WordPress.  But I won't.  I like it and want to continue to explore what it is like to work with it, and help make it better.  To me the real lesson in all of this is that the approach to remote operations used in WordPress – and almost all web-based applications – is just not adequate.  The more you know about all the exploits that are possible in the http world, the more you want to run headlong into the world of Web Services, where each transaction has its own security environment, in the sense that the security environment travels with each message and operation.  In the same way, SOA moves the control of authorization from the application to the operation definition process, so creative application authors like those who built WordPress don't have to take sole responsibility for all the subtle security problems that will inevitably arise as we move further into the virtual world.
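As a toy illustration of what I mean by security traveling with the message – this is just the underlying idea, not the actual WS-Security machinery – each operation can carry a signature that any recipient verifies on its own, independent of whatever transport delivered it. A sketch using the Python cryptography package:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The author's key pair; in practice the public key would be conveyed
# by certificates or an identity system, not generated inline.
author_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The operation, with its security context attached as a signature over
# the message itself rather than supplied by the transport channel.
message = b'{"operation": "publish_post", "author": "kim", "body": "..."}'
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = author_key.sign(message, pss, hashes.SHA256())

# Any endpoint or intermediary can verify the operation independently;
# verify() raises InvalidSignature if the message was tampered with.
author_key.public_key().verify(signature, message, pss, hashes.SHA256())
print("operation verified independently of the transport")
```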

I take it for granted that given all my pontification about identity and security my site will be used in creative ways.  So I have no ill feeling toward Rohan.  The important thing is the conversation and the learning that come out of this.

So to rephrase Conor, in this case, the selection of the tool to manage the resource will be based upon an analysis of the risks and benefits of using an emerging technology to reach others working on the issues.

Identropy – Stephen Colbert, Identity and User 16006693

Ashraf Motiwala from Ash's Identity blog has contributed this illumination:

Stephen Colbert had a hilarious piece on tonight's Colbert Report regarding protecting identity while searching (he suggests typing with your weaker hand, to disguise your typing patterns), in response to the AOL debacle (if you haven't heard, they released about 3 months of search histories comprising some 20 million searches…but don't worry, they replaced people's usernames with random numbers…so we are safe, right?)
Not exactly. Paul Boutin used splunkd.com to parse the heck out of the data – and arrived at seven patterns of searchers. According to him, according to the data – people fall into one of seven searcher categories: the pornhound, the manhunter (looks up a person's name again and again), the shopper, the obsessive (the person who searches for the same thing incessantly), the omnivore (the person who searches like crazy, and doesn't really have a pattern), the newbie and the basketcase.

The most interesting way that I found to look at the data is to pick out a specific user. It's damn interesting, comical, and scary how much insight you might get. Take a look at User 16006693 going from politics, to retirement, to politics, to religion, to sex, quickly back to religion (repent!), to food and finally to heartburn. Classic.

16006693 nak
16006693 nack
16006693 sharona
16006693 knack
16006693 knack downloads
16006693 oakrige boys
16006693 oakridge boys
16006693 oakridge boys downloads free
16006693 jokes about dick cheney
16006693 jokes about dick cheney but not george bush
16006693 dick cheney creep
16006693 dick cheney dickhead
16006693 rummy dickhead
16006693 where is iraq
16006693 where is lebenon
16006693 his bullets
16006693 his bullies
16006693 shiits
16006693 shee-ites
16006693 bush appruval
16006693 bush approvel
16006693 bush drops below
16006693 dead reporters
16006693 dead reporters fotos
16006693 dead reporters pix
16006693 disembowled reporters pix
16006693 disembowled new york times
16006693 love thine enemas
16006693 love thine enemies
16006693 bible quote of the day
16006693 insperation from bible
16006693 george bush great president
16006693 george w bush great president
16006693 dream on
16006693 oakridge boys lyrics dream on
16006693 how to run country
16006693 how to run country when not really inerested
16006693 people to run country for you
16006693 over work
16006693 overwork
16006693 stress
16006693 best place to retire
16006693 places like crawford but without cindy sheehan
16006693 crawford the town not cindy crawford
16006693 crawford tx
16006693 like crawford tx but not so hot
16006693 best places to retire not hot
16006693 best places to retire global warming
16006693 global warming mith
16006693 global warming myth
16006693 crawford hot
16006693 cindy crawford hot
16006693 rice hot
16006693 rice hot not recipes
16006693 rice naked
16006693 rice nude
16006693 bible quotes resisting temptation
16006693 oakridge boys i'll be true to you
16006693 oakridge boys trying to love two women
16006693 rice and beans
16006693 tex mex
16006693 tex mex not music
16006693 tex mex takeout
16006693 tex mex takeout dc
16006693 heart burn
16006693 heartburn

Dave Kearns takes on anonymity

Dave Kearns of The Virtual Quill (and many other venues) has joined the anonymity scrum (even though he was already in it):

“Anonymity as default,” which I mentioned in the previous post, is taking on a life of its own. Now Tom Maddox has posted in his Opinity weblog, commenting on Ben Laurie's commentary about Kim Cameron's mention of Eric Norlin's post concerning David Weinberger's original thought that “Anonymity should be the default.”

(I'll just sit here and whistle for a moment while you follow that set of links)

The point I wanted to mention was Maddox’ statement:

We need to begin with anonymity/pseudonymity as the default, Laurie's ‘substrate choice’. Otherwise, whatever identity system we employ, we'll always be trying to get the cat back in the bag (or the scrambled egg back in the shell)

The fallacy here is that he seems to believe that there can be an “identity system” in which anonymity is a choice! And not only a choice, but the default choice. But without a unique identifier for each object in the system, there is no identity system. And with a unique identifier there is no anonymity within the system. Rather, the default should be PRIVACY for all objects, with any dispersal or publishing of identity attributes only done with the consent of the entity if it's sentient, and the entity's controller if it isn't.

Maddox is correct that once the data is published you can't unpublish it completely. That argument shouldn't be overlooked. But it's equally as important to realize that the “anonymity bandwagon” is out of control and headed for the cliff. Privacy is the key, and privacy should be the issue.

I have trouble with Dave's use of the phrase, “within the system”.  What is “the system” in a multi-centered world with an interpenetrating mesh of domains?  Put another way, just because an object has a unique identifier, do entities dealing with the object have to know that?

Things may have unique identifiers that are known to some identity authority / domain (even infinitesimally small ones), but these authorities don't have to release them when identifying things to other parties.

Would an example help? 

Suppose some company – let's call it Contoso.com – runs Active Directory as its local identity infrastructure.  Active Directory identifies all of the machines and people in Contoso's “domain” with a Security IDentifier (SID) – basically a unique id/domain pair.  But when I am dealing with someone from Contoso.com, I probably don't give a darn about their SID, no matter how useful it may be to their local AD system.  Dave, do you care about my SID? Knowing you and loving you, I think you've got better things to worry about!

In the world of web services, which will be a vast mesh where identity reaches beyond domain boundaries, the definition of what is “within the system” becomes very ambiguous. 

The SID makes sense “within the system” thought of as a narrow domain.  It normally doesn't make sense “within the system” thought of as a connecting mesh of entities that happen to interact with many domains.

In this bigger world, I may be interested in the fact that someone is an employee of Contoso, but totally uninterested in anything that uniquely identifies them as an employee – even if such unique identification is necessary for some other purpose.

For example, if I call 411, I speak with a representative of the phone company.  I don't know her or his name, or number, or location, or anything else.  I just know the person I'm talking with works on behalf of Verizon – and that is all I really want to know.

Yet knowing they are an official employee is still a matter of identity! 

Is this anonymous?  I would say so.  It “has an unknown or unacknowledged name”, as my pathetic online dictionary puts it (I'm travelling).  So it is anonymous, but it is identity.

This is all part of the notion that an authority can make claims about a subject – and that this is done through a set of assertions.  Given this, we need a name for the “empty set” of assertions. 

So far, we call it anonymity.  We believe this will ring a bell in more people's heads than “empty set of assertions”.

If we now combine this thinking with the second law (minimal disclosure), we come to the notion that when nothing more is needed, the identity set should be the empty set.  This is what I think people are talking about when they say the default should be anonymous.
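A tiny sketch may help tie this together – everything in it is hypothetical and illustrative. The authority knows a unique identifier, but what it releases is a claim set sized to the context, and the default claim set is empty:

```python
# Claims an authority could release about one subject. The unique
# identifier (the SID) exists inside Contoso's domain, but it need
# never leave that domain.
AVAILABLE_CLAIMS = {
    "employer": "Contoso.com",            # like the 411 operator: role only
    "sid": "S-1-5-21-0000000000-1004",    # meaningful only inside the domain
    "name": "Alice Example",
}

def disclose(required_claims, available_claims):
    """Release only the claims the context actually requires (second law)."""
    return {k: v for k, v in available_claims.items() if k in required_claims}

# The default is the empty set of assertions – what we call anonymity.
print(disclose(set(), AVAILABLE_CLAIMS))          # {}
# A caller who only needs to know I work for Contoso gets exactly that.
print(disclose({"employer"}, AVAILABLE_CLAIMS))   # {'employer': 'Contoso.com'}
```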

Anonymity is the substrate

Ben Laurie at Links, contemplating the “anonymity as the default” debate, argues “Anonymity is the substrate”:

Kim Cameron’s blog draws my attention to a couple of articles on anonymity. The first argues for anonymity to be the default. The second misses the point and claims that wanting anonymity to be the default makes it a binary thing, whereas identity is a spectrum.

But the point is this: unless you have anonymity as your default state, you don’t get to choose where on that spectrum you lie.

Eric Norlin says

Further, every “user-centric” system I know of doesn’t seek to make “identity” a default, so much as it seeks to make “choice” (including the choice of anonymity) a default.

as if identity management systems were the only way you are identified and tracked on the ‘net. But that’s the problem: the choices we make for identity management don’t control what information is gathered about us unless we are completely anonymous apart from what we choose to reveal.

Unless anonymity is the substrate, choice in identity management gets us nowhere. This is why I am not happy with any existing identity management proposal – none of them even attempt to give you anonymity as the substrate.

Ben has a valid point in terms of the network substrate.  There are a number of hard issues intertwined here.  But from a practical point of view, here is how I approach it:

  1. You can't solve every problem everywhere simultaneously.  Solving one problem may leave others to be dealt with.  But with one problem gone, the others are easier to tackle.
  2. There are interesting technologies like onion routing and tor that could be combined with the evolving identity framework to offer a more secure overall solution (Ben is better versed in these matters than I am).
  3. If society mandates storage of network addresses under certain circumstances, as it seems to be doing, a much more secure approach to this storage could and should be adopted.  Any legislation that calls for auditing should also require that the audit trail be encrypted under keys available only to vetted authorities – and then only through well-defined legal procedures, with public notification, and in an off-line setting (a sketch of what I mean follows this list).  This would have a huge impact in preventing the ravages of Norlin's Maxim.
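Here is a sketch of the kind of storage point 3 calls for – hypothetical names, using the Python cryptography package. Each audit record is encrypted under a fresh symmetric key, which is itself wrapped with an escrow authority's public key; the machines collecting the trail never hold the means to read it, and decryption happens off-line with the authority's private key.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Generated off-line by the vetted authority; only the PUBLIC half is
# ever installed on the systems that record network addresses.
escrow_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow_public = escrow_private.public_key()

def record_audit_entry(entry: bytes):
    """Encrypt a record so the collector itself cannot read it: a fresh
    symmetric key per record, wrapped under the escrow public key."""
    data_key = Fernet.generate_key()
    return escrow_public.encrypt(data_key, OAEP), Fernet(data_key).encrypt(entry)

wrapped_key, ciphertext = record_audit_entry(
    b"2006-08-21T10:00Z 192.0.2.7 connected to mail.example")

# Later, off-line, under a well-defined legal procedure with notification:
data_key = escrow_private.decrypt(wrapped_key, OAEP)
print(Fernet(data_key).decrypt(ciphertext))
```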

Network issues aside, in keeping with the second law of identity (minimal disclosure), users should by default release NO identifying information at all. 

You can call this anonymity, or you can call this “not needlessly blabbing everything about yourself”. 

Sites should only ask for identifying information when there is some valid and defensible reason to do so.  They should always ask for the minimum possible.  They should keep it for the shortest possible time.  They should encrypt it so it is only available to systems that must access it.  They should ensure as few parties as possible have access to such systems.  And if possible, they should only allow it to be decrypted on systems not connected to the internet.  Finally, they should audit their conformance with these best practices.

Once you accept that release of identifying information should be proportionate to well-defined needs – and that such needs vary according to context – it follows that identity must “be a spectrum”.


Norlin's Maxim

I'm a big fan of Eric Norlin – it was one of his posts that got me started on the Laws of Identity.  But I think this ZDNet piece is especially good – and love Norlin's Maxim:

In the very near wake of a foiled terrorist plot, I find myself waking up, planning to write about the topic of anonymity and identity. The original impetus for my post is a recent article by David Weinberger. In that article, David argues for anonymity as a “default” in the online world by saying: “personal anonymity is the default in the real world — if you live in a large town, not only don't you know everyone you see, but you're not allowed randomly to demand ID from them — and it ought to be the default on line.”

It's not so much that I disagree with David, as I think he's framing the problem incorrectly. Framing the “online anonymity” issue in the context of being a default makes it a binary issue — a simple on/off switch; either anonymity is the default, or something else (from pseudonymity up to strongly authenticated identity) is the default. But online identity is *not* a binary issue. Identity (be it authentication, access, authorization, federation or any other component) operates on a spectrum. Further, every “user-centric” system I know of doesn't seek to make “identity” a default, so much as it seeks to make “choice” (including the choice of anonymity) a default. Whether the system is SXIP, CardSpace, or OpenID, they all begin by having the user choose how they will present themselves.

In the context of choice being the identity default, we're finding that the bulk of online users are choosing to place huge chunks of their identity online. My evidence: MySpace, YouTube, Facebook, etc. The heaviest generational component of the online community (the kids) rushes to identify themselves online. They flock to it so fast and so easily that it's making federal lawmakers (and many parents) uneasy. Do these kids think that anonymity is or should be the online default? Apparently not.

My semi-joking explanation of this lies in “Norlin's Maxim.” I first posited “norlin's maxim” as a joke, but I've since found it to actually be at least partially true — thus its semi-joking nature. Norlin's maxim is simple: The internet inexorably pulls information from the private domain into the public domain. The proof: Google your name today and google it again in 90 days (more will be known about you over time).

So, rather than arguing about whether or not anonymity is the default in the “real world” (it's not), I would simply assert that while location may have been a proxy for identity in the original architecture of the internet, the nature of the network itself *forces* identity information from the private to the public domains. That forcing function leaves users open to losing control over their own personal information, and *that* problem demands a digital identity network infrastructure.

It's so true.  One of the main keys to understanding my work is to understand Norlin's Maxim.  And the maxim also explains why so many comparisons between the brick-and-mortar and digital worlds fail to grasp the central issues.