From CardSpace to Verified Claims

Last week Microsoft announced the availability of Version 2 of the U-Prove Technology Preview.

What’s new about it?

The most important thing is that it offers a new, web-oriented user experience carefully tailored to help people control the release of “verified claims” while protecting their privacy.  By verified claims I mean things that are said about people as flesh-and-blood individuals by entities that can speak, at least in certain contexts, with authority. By protecting privacy I mean keeping the information released to the minimum necessary, and ensuring that the authority making the claims – for example a government – is not able to track and profile the way that information is used.
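To make “minimal disclosure” concrete, here is a small illustrative sketch – not the U-Prove protocol or API, just hypothetical Python showing the difference between handing over a full identity record and releasing the single verified claim a site actually needs:

```python
from datetime import date

# Purely illustrative record a claims provider (e.g. a government) might hold.
full_record = {
    "name": "Alice Example",
    "birthdate": date(1990, 5, 17),
    "address": "123 Example Street",
    "document_number": "X1234567",
}

def minimal_age_claim(record, minimum_age=18, today=None):
    """Release only the claim a relying party needs ("over 18"),
    not the name, birthdate or address behind it."""
    today = today or date.today()
    born = record["birthdate"]
    age = today.year - born.year - ((today.month, today.day) < (born.month, born.day))
    return {"over_18": age >= minimum_age}

# The relying party sees only this single verified claim.
print(minimal_age_claim(full_record))   # e.g. {'over_18': True}
```

The point of the underlying U-Prove technology is to let that kind of minimal claim be proven cryptographically, without the issuer being able to see or link where it is presented – something this toy example does not attempt to show.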

The system takes a number of the good ideas from CardSpace but is also informed by what CardSpace didn’t do well. It doesn’t require the installation of new components on your computer. It works on all the major browsers and phones. It roams between devices. Sites don't have to worry about users “getting a card” before the system will work. And it allows claims providers and relying parties to shape and brand their users’ experiences while still providing a consistent interface for claims approval.

In other words, it represents a big step forward in protecting privacy when high-value credentials are used to release claims.

A focused approach

When it comes to verified claims, the “U-Prove Agent” goes beyond CardSpace.  One way it does this is by being highly focused and integrated into a specific type of identity experience. I’ll be posting a video soon that will help you get a concrete sense of why this works.

That focus represents a change from what we tried to do with CardSpace.  One of the key goals of CardSpace was to provide a “generalized solution” – an alternative to the “patchwork quilt” of what I called “identity kludges” that characterize people’s experience of identity on the Internet.

In fact I still believe as much as ever that a “generalized solution” would be nice to have. I would even go so far as to say that a generalized solution is inevitable – at some point in time.

But the current chaos is so vast – and people’s thinking about it so fractured – that the only prudent practical approach is to carve the problem into smaller pieces. If we can make progress in some of the pieces we can tie that progress together. The U-Prove Agent for exchange of verified claims is a good example of this, making it possible to offer services that would otherwise be impossible because of privacy problems.

What about CardSpace?

Because of its focus, the U-Prove agent isn’t capable of doing everything that CardSpace attempted to do using Information Cards.

It doesn’t address the problem of helping users manage ALL their identities while keeping them separate. It doesn’t address the user problems of password fatigue, phishing and pervasive “secret questions” when logging into consumer web sites.  It doesn’t solve the famous “home realm discovery problem” when using federation. And perhaps most frustrating when it comes to using devices like phones, it doesn’t give the user a simple way to pick their identities from a set of visual representations (icons or cards).

These issues are all more pressing today than they were in 2006 when CardSpace was first proposed. Yet one thing is clear: in five years of intensive work and great cross-industry collaboration with other innovators working on Apple and Linux computers and phones, we weren’t able to get Information Cards onto the radar of the big web properties users depend on.

Those properties had other priorities. My friend Mike Jones put it well at Self-Issued:

“In my extensive experience talking with potential adopters, while many/most thought that CardSpace was a good idea, because they didn’t see it solving a top-5 pain point that they were facing at that moment or providing immediate compelling value, they never actually allocated resources to do the adoption at their site.”

Regardless of why this was the case, it explains why last week Microsoft also announced that it will not be shipping CardSpace 2.0.

In my personal view, we all certainly need to keep working on the problems Information Cards address, and many of the concepts and technologies used in Information Cards should be retained and evolved. I think the U-Prove team has done a good job at that, providing an example of how we can move forward to solve specific problems. Now the question is how to do so with the other aspects of user-centric identity.

Over the next while I’m going to do a series of posts that explore some of these issues further – drawing some lessons from what we’ve learned over the last few years.  Most of all, it is important to remember what great progress we’ve made as an industry around the Identity Metasystem, federation technology, and claims-based computing. The CardSpace identity selector dealt with the hardest and most forward-looking problems of the Metasystem:  the privacy, security and usability problems that will emerge as federated identity becomes a key component of the Internet.  It also challenged industry with an approach that was truly user centric.

It's no surprise that it is hardest to get consensus on forward-looking technologies!  But meanwhile,  the very success of the Identity Metasystem as a whole will cause all the issues we’ve been working on with Information Cards to return larger than life.

 

Stephan Engberg on Touch2ID

Stephan Engberg is a member of the Strategic Advisory Board of the EU ICT Security & Dependability Taskforce and an innovator in reconciling security requirements in both ambient and integrated digital networks. I thought readers would benefit from comments he circulated in response to my posting on Touch2Id.

Kim Cameron's comments on Touch2Id – and especially the way PI is used – make me want to see more discussion about the definition of privacy and the approaches that can be taken in creating such a definition.

To me Touch2Id is a disaster – teaching kids to offer their fingerprints to strangers is not compatible with my understanding of democracy or of what constitutes the basis of a free society. The claim that data is “not collected” is absurd and represents outdated legal thinking.  Biometric data gets collected even though it shouldn't, and such collection is entirely unnecessary given the PET solutions to this problem that exist, e.g. chip-on-card.

In my book, Touch2Id did not do the work to deserve a positive privacy appraisal.

Touch2Id, in using blinded signatures, is a much better solution than, for example, a PKI-based solution would be.  But this does not change the fact that biometrics are getting collected where they shouldn't.

To me Touch2Id therefore remains a strong invasion of privacy – because it teaches kids to accept biometric interactions that are outside their control. Trusting a reader is not an option.

My concern is not so much to discuss the specific solution as to reach some agreement on the use of words and definitions, and on what counts as acceptable use of them.

We all understand that there are different approaches possible given different levels of pragmatism and focus. In reality we have our different approaches because of a number of variables:  the country we live in, our experiences and especially our core competencies and fields of expertise.

Many do good work from different angles – improving regulation, inventing technologies, debating, pointing out major threats etc. etc.

No criticism – only appreciation

Some try to avoid compromises – often at great cost as it is hard to overcome many legacy and interest barriers.  At the same time the stakes are rising rapidly:  reports of spyware are increasingly universal. Further, some try to avoid compromises out of fear or on the principle that governments are “dangerous”.

Some people think I am rather uncompromising and driven by idealistic principles (or whatever words people use to do character assassination of those who speak inconvenient truths).  But those who know me are also surprised – and to some extent find it hard to believe – that this is due largely to considerations of economics and security rather than privacy and principle.

Consider the example of Touch2Id.  The fact that it is NON-INTEROPERABLE is even worse than the fact that biometrics are being collected, because it means you simply cannot create a PET solution using the technology interfaces!  It is not open, but closed to innovations and security upgrades. There is only external verification of biometrics or nothing – and as such no PET model can be applied.  My criticism of Touch2Id is fully in line with the work on security research roadmapping prior to the EU's large FP7 research programme (see pg. 14 on private biometrics and biometric encryption – both chip-on-card).

Some might remember the discussion at the 2003 EU PET Workshop in Brussels where there were strong objections to the “inflation of terms”.  In particular, there was much agreement that the term Privacy Enhancing Technology should only be applied to non-compromising solutions.  Even within the category of “non-compromising” there are differences.  For example, do we require absolute anonymity, or can PETs be created through specific built-in countermeasures such as anti-counterfeiting through self-incrimination in Digital Cash, or some sort of tightly controlled Escrow (Conditional Identification) in cases such as that of non-payment in an otherwise pseudonymous contract (see here)?

I tried to raise the same issue last year in Brussels.

The main point here is that we need a vocabulary that does not allow for inflation – a vocabulary that is not infected by someone's interest in claiming “trust” or overselling an issue. 

And we first and foremost need to stop – or at least address – the tendency of the bad guys to steal the terms for marketing or propaganda purposes.  Around National Id and Identity Cards this theft has been a constant – for example, the term “User-centric Identity” has been turned upside down and today, in many contexts, means “servers focusing on profiling and managing your identity.”

The latest examples of this are the exclusive and centralist European eID model and the IdP-centric identity models recently proposed in the US, which are neither technologically interoperable, nor security-enhancing, nor privacy-enhancing. These models represent the latest in democratic and free-market failure.

My point is not so much to define policy, but rather to respect the fact that different policies at different levels cannot happen unless we have a clear vocabulary that avoids inflation of terms.

Strong PETs must be applied to ensure principles such as net neutrality, demand-side controls and semantic interoperability.  If they aren't, I am personally convinced that within 20 or 30 years we will no longer have anything resembling democracy – and economic crises will worsen due to Command & Control inefficiencies and anti-innovation initiatives.

In my view, democracy as a construct is failing due to the rapid deterioration of fundamental rights and of the requirements of citizen-centric structures.  I see no alternative but to try to get it back on track through strong empowerment of citizens – however ill-informed one might think the “masses” are – which depends on propagating the notion that you CAN be in control, or “Empowered” in the many possible meanings of the term.

When I began to think about Touch2Id it did of course occur to me that it would be possible for operators of the system to secretly retain a copy of the fingerprints and the information gleaned from the proof-of-age identity documents – in other words, to use the system in a deceptive way.  I saw this as being something that could be mitigated by introducing the requirement for auditing of the system by independent parties who act in the privacy interests of citizens.

It also occurred to me that it would be better, other things being equal, to use an on-card fingerprint sensor.  But is this a practical requirement given that it would still be possible to use the system in a deceptive way?  Let me explain.

Each card could, unbeknownst to anyone, be imprinted with an identifier and the identity documents could be surreptitiously captured and recorded.  Further, a card with the capability of doing fingerprint recognition could easily contain a wireless transmitter.  How would anyone be certain a card wasn't capable of surreptitiously transmitting the fingerprint it senses or the identifier imprinted on it through a passive wireless connection? 

Only through audit of every technical component and all the human processes associated with them.

So we need to ask, what are the respective roles of auditability and technology in providing privacy enhancing solutions?

Does it make sense to kill schemes like Touch2ID even though they are, as Stephan says, better than other alternatives?   Or is it better to put the proper auditing processes in place, show that the technology benefits its users, and continue to evolve the technology based on these successes?

None of this is to dismiss the importance of Stephan's arguments – the discussion he calls for is absolutely required and I certainly welcome it. 

I'm sure he and I agree we need systematic threat analysis combined with analysis of the possible mitigations, and we need to evolve a process for evaluating these things which is rigorous and can withstand deep scrutiny. 

I am also struck by Stephan's explanation of the relationship between interoperability and the ability to upgrade and uplevel privacy through PETs, as well as the interesting references he provides. 

All the help we can get

Now that the world is so thoroughly post-modern, how often do you come across information that qualifies as unexpected?  Well, I have to say that the following story, appearing in The Australian, left me wide-eyed:

Yesterday, in the church of the City of London Corporation, Canon Parrott presented an updated version of Plough Monday, an observance that dates from medieval times. On this day, the first Monday after Twelfth Night, farm labourers would bring a plough to the door of the church to be blessed.

“When I arrived a few months ago I looked at this service and thought, ‘Why do we have a Plough Monday?’,” Canon Parrott said. Men and women coming to his church no longer used ploughs; their tools were their laptops, their iPhones and their BlackBerries.

So he wrote a blessing and strode out to deliver it before a congregation of 80, the white heat of technology shining from his every pronouncement. “I invite you to have your mobile phone out … though I would like you to put it on silent,” he said.

This was Church 2.0. Behind him, the altar resembled a counter at PC World. Upon it, laid out like holy relics, were four smart phones, one Apple laptop and one Dell.

Then, after another hymn, came the blessing of the smart phones. The Lord Mayor of London offered his BlackBerry to Canon Parrott, which was received with due reverence and placed upon the altar.

The congregation held their phones in the air, and Canon Parrott addressed the Almighty. “By Your blessing, may these phones and computers, symbols of all the technology and communication in our daily lives, be a reminder to us that You are a God who communicates with us and who speaks by Your Word. Amen.”

It makes me wonder what Innis said to McLuhan when he read about this.

Le Figaro carried a report of an additional prayer: “May our tongues be gentle, our e-mails be simple and our websites be accessible.”

Perhaps it is asking too much, but I would have really liked Father Parrott to add, “websites be accessible and secure.”  After all – it can't hurt.   Perhaps next time?

Bizarre customer journey at myPay…

Internet security is a sitting duck that could easily succumb to a number of bleak possible futures.

One prediction we can make with certainty is that as the overall safety of the net continues to erode, individual web sites will flail around looking for ways to protect themselves. They will come across novel ideas that seem to make sense from the vantage point of a single web site. Yet if they implement these ideas, most of them will backfire. Internet users have to navigate many different sites on an irregular basis. For them, the experience of disparate mechanisms and paradigms on every different site will be even more confusing and troubling than the current degenerating landscape. The Seventh Law of Identity is animated by these very concerns.

I know from earlier exchanges that Michael Ramirez understands these issues – as well as their architectural implications. So I can just imagine how he felt when he first encountered a new system that unfortunately serves as a great example of this dynamic. His first post on the matter started this way:

“Logging into the DFAS myPay site is frustrating. This is the gateway where DoD employees can view and change their financial data and records.

“In an attempt to secure the interface (namely to prevent key loggers), they have implemented a Javascript-based keyboard where the user must enter their PIN using their mouse (or using the keyboard by pressing tab LOTS of times).

“A randomization function is used to change the position of the buttons, presumably to prevent a simple click-tracking virus from simply replaying the click sequence. Numbers always appear on the upper row and the letters will appear in a random position on the same row where they exist on the keyboard (e.g. QWERTY letters will always appear on the top row, just in a random order).

“At first glance, I assumed that there would be some server-side state that identified the position of the buttons (as to not allow the user's browser to arbitrarily choose the positions). Looking at how the button layout is generated, however, makes it clear that the position is indeed generated by the client-side alone. Javascript functions are called to randomize the locations, and the locations of these buttons are included as part of the POST parameters upon authentication.

“A visOrder variable is included with a simple substitution cipher to identify button locations: 0 is represented by position 0, 1 by position 1, etc. Thus:

VisOrder     = 3601827594
Substitution = 0123456789
Example PIN  = 325476
Encoded      = 102857

“Thus any virus/program can easily mount an online guessing attack (since it defines the substitution pattern), and can quickly decipher the PIN if it has access to the POST parameters.

“The web site's security implementation is painfully trivial, so we can conclude that the Javascript keyboard is only to prevent keyloggers. But it has a number of side effects, especially with respect to the security of the password. Given the tedious nature of PIN entry, users choose extremely simplistic passwords. MyPay actually encourages this as it does not enforce complexity requirements and limits the length of the password to between 4 and 8 characters. There is no support for upper/lower case or special characters. 36 possible values over a 4-character search space is not terribly secure.”
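To make the weakness concrete, here is a minimal Python sketch of the substitution Michael describes. The exact direction of the mapping and the variable names are my assumptions based on his example; the point is simply that anything able to read the POST parameters – which carry both the layout and the encoded clicks – can invert the “cipher” instantly, and that the underlying search space is tiny.

```python
# Illustrative sketch only -- an assumed reconstruction of the scheme described above.
VIS_ORDER = "3601827594"   # randomized button layout, sent along with the same POST

def encode_pin(pin, vis_order=VIS_ORDER):
    """What the client-side script effectively does: replace each PIN digit d
    with the character at index d of the layout string."""
    return "".join(vis_order[int(d)] for d in pin)

def decode_pin(encoded, vis_order=VIS_ORDER):
    """What anyone reading the POST parameters can do: invert the substitution."""
    return "".join(str(vis_order.index(c)) for c in encoded)

pin = "325476"
encoded = encode_pin(pin)          # '102857' under this layout and assumed mapping
assert decode_pin(encoded) == pin  # the obfuscation adds no secrecy at all

# The search space Michael mentions: 36 characters (a-z, 0-9) over 4 positions.
print(36 ** 4)                     # 1679616 possibilities -- trivial to brute-force
```

Since the layout and the encoded sequence travel in the same request, the randomization defeats only the simplest keyloggers; anything watching the HTTP traffic gets the PIN for free.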

A few days later, Michael was back with an even stranger report. In fact this particular “user journey” verges on the bizarre. Michael writes:

“MyPay recently overhauled their interface and made it more “secure.” I have my doubts, but they certainly have changed how they interact with the user.

“I was a bit speechless. Pleading with users is new, but maybe it'll work for them. Apparently it'll be the only thing working for them:

Although most users have established their new login credentials with no trouble, some users are calling the Central Customer Support Unit for assistance. As a result, customer support is experiencing high call volume, and many customers are waiting on hold longer than usual.

We apologize for any inconvenience this may cause. We are doing everything possible to remedy this situation.

Michael concludes by making it clear he thinks “more than a few” users may have had trouble. He says, “Maybe, just maybe, it's because of your continued use of the ridiculous virtual keyboard. Yes, you've increased the password complexity requirements (which actually increased security), but slaughtered what little usability you had. I promise you that getting rid of it will ‘remedy this situation.'”

One might just shrug one's shoulders and wait for this to pass. But I can't do that.  I feel compelled to redouble our efforts to produce and adopt a common standards-based approach to authentication that will work securely and in a consistent way across different web sites and environments.  In other words, reusable identities, the claims-based architecture, and truly usable and intuitive visual interfaces.