European Identity Awards

The recent European Identity Conference 2008 featured the presentation of Kuppinger Cole's European Identity Awards. Vendors, integrators, consultants and user companies were asked for nominations. For each category, three outstanding projects and innovations were nominated as finalists. Here is how Kuppinger Cole framed the results:

Best Innovation

“The award went to a group of companies that are driving forward the process to outsource authentication and authorisation, making it easier to control application security ‘from outside’. There are several providers with different approaches in this field, but during the past year they all contributed a lot to promote this concept, which KCP considers indispensable. The winners in this category are Bitkoo, CA, iSM, Microsoft and Oracle.

“Also among the finalists were Aveksa and Sailpoint for their Identity Risk Management solutions and Microsoft for making a significant contribution to identity information protection in distributed environments through their takeover of Credentica and the planned integration of U-Prove technology into user-centric Identity Management.”

Best New/Improved Standard

“The award went to the OpenID Foundation and to Microsoft for their InfoCard initiative. These standards form the basis for Identity 2.0, the so-called user-centric Identity Management.

“Other outstanding solutions nominated as finalists were the eCard API Framework and the simpleSAMLphp project driven forward by Feide RnD. The eCard API Framework has been jointly developed by Secunet and the Bundesamt für Sicherheit in der Informationstechnik (abbreviated BSI – in English: Federal Office for Security in Information Technology) to simplify the interaction of applications with different card technologies. With simpleSAMLphp, federation functions can easily be integrated into existing and new applications.”

Best Internal Identity Management Project

“The award went to BASF for their AccessIT project, which realises Identity Management within a complex corporate structure and excels in consistent approaches to centralised auditing.

“Another finalist in this category was the Royal Bank of Scotland, with its project to control a multitude of applications by an integrated role-based access control.”

Best B2B Identity Management Project

“The award went to Orange/France Telecom.  Their project is revolutionary due to the consistent use of federation and the opening of systems to partners.

“Also among the finalists in this category were Endress+Hauser for their business customer portal, and the education network SurfNET, which is at present one of the most comprehensive federation implementations.”

Best B2C Identity Management Project

“The award went to eBay and PayPal, which support strong authentication mechanisms, thus making a significant contribution to the protection of online transactions and creating more awareness of this issue among the wider public.

“Other finalists were Karlsruhe-based company Fun Communications for their innovative approach to the use of info cards as virtual customer cards, which is groundbreaking in our opinion, and KAS bank for their consistent use of strong authentication and encryption technologies to protect transactions.”

Best eGovernment Identity Management Project 

“The Republic of Austria received the prize in the “Best eGovernment Identity Management project” category for their eGovernment initiatives which we think are leading with regard to the implementation of Identity Management.

“Other finalists were Crossroads Bank, Smals and BAMF – the Bundesamt für Migration und Flüchtlinge (Federal Office for Migration and Refugees).”

Special prizes

[Photo: Dale accepting award and champagne on behalf of Higgins/Bandit]

“Special prizes were given to two initiatives considered as groundbreaking by KCP.

“In KCP's opinion, the VRM project by Doc Searls is an innovative approach that applies user-centric Identity Management concepts to customer management. At the VRM Unconference 2008 at the EIC 2008, this issue was intensely discussed in Europe for the first time.

“The second special prize went to open source projects Higgins and Bandit which we think are the most important open source initiatives in Identity Management.”

[Thanks to Jackson Shaw for Photos]

Drstarcat on Project Pamela

drstarcat.com is doing “A History of Tomorrow's Internet” – a dive into Information Cards, CardSpace, Higgins and now, in Part Five, The Pamela Project. The “future history” is a personal tale that is definitely worth reading.  The most recent post introduces us to Pamela Dingle herself – a woman who has played a key role, both technically and as a leader, in advancing Information Cards.

Drstarcat writes:

“As I’ve explained more than once in this blog, a greater problem than finding reliable Identity Providers is getting the websites we know and love to become Relying Parties. That is exactly the problem that Pamela has decided to attack with her eponymous project. As the project’s mission statement says, “The Pamela Project is a grassroots organization dedicated to providing community support for both technical and non-technical web users and administrators who wish to use or deploy information card technologies.” Given the difficulties I experienced even USING iCards as a non-technical web user, this seems like a pretty ambitious task, and as part of this post, I’m going to try to get my blog up and running. First, a few words about Pamela and the history of the project.

“Pamela first ran into the issues surrounding Identity in her role as a technology consultant in Calgary in 1999. Anyone who’s done any large-scale enterprise software installation has likely had a similar experience–try to do anything and you’ll run into a myriad of (often semi-functional) authentication and directory services before you can even get off the ground. She’d been working on Peoplesoft installations and with Oblix (an enterprise self-service password management tool later acquired by Oracle), when she attended her first Burton Identity conference in 2001. It was here she first began to think of Identity as a (the?) core technology problem, as opposed to something peripheral to what she wanted to get done. It’s a realization that, once had, can become a little consuming (trust me, I spend WAY too much time building software to be blogging about anything–especially, SOFTWARE).

“Her second “ah-ha” moment came when, if my notes serve me correctly, she was “hit on the head with a brick” by Kim Cameron at the 2002 Catalyst conference. There he drew her a brief sketch on a napkin where he showed the three party system (Subject, Relying Party, Identity Provider) that is at the core of most of the emerging identity systems. She was hooked, but it wasn’t until 2005, when Kim added some sample PHP Relying Party code to his blog, that she saw a place where she could contribute. As a sometimes PHP hacker, she took the simple code, and began to port it over to some of her favorite PHP frameworks (WordPress, Joomla, and MediaWiki). Since that time, she and about 10 other contributors have been working to get a 1.0 version of the product out, which, given Pamela’s commitment, I suspect will be about like most other projects’ 2.0 releases.

“Before writing about my experience installing the WordPress v0.9 plugin, a word about the seemingly self-promulgatory name of the project, because I think it says a lot about Pamela as a person and the Identity movement she’s part of. According to Pamela it’s the last name she would have thought of as a woman working as a technologist. As she explains, it’s hard enough as a woman to get recognized as a serious technologist without drawing unnecessary attention to yourself. Having a wife who is one of the best Java engineers in NYC, but who also is regularly asked if she REALLY wrote the stunning code she produces, I can attest this is true. It’s because of this stereotype though that Pamela chose the name. She was tired, as someone who is self-admittedly “vocal”, of this kind of self-inflicted sheepishness. So in “defiance to self-regulation”, and at Craig Burton’s urging, she chose The Pamela Project…

“I’ll let you know how my experience actually USING the Pamela project goes in my next post. In the meantime, as you wait in breathless anticipation, why not go over to the project’s site and ask Pamela how you can be of use. This is a big project and they’re going to need all the help they can get.”

[More here.]

Virtual Corporate Business Cards

Martin Kuppinger is one of the key analysts behind the amazing European Identity Conference just held in Munich.  This was “User Centric Meets Enterprise Identity Management” with a twist: our European colleagues have many things to contribute to the discussion about how they fit together…

For a taste of what I'm talking about, here is a posting that I found dazzling.  There are no weeds encumbering Martin's thinking.  He's got the story: Virtual Corporate Business Cards.

Yes, I know – it is a little redundant talking about “corporate” and “business” in the context of virtual cards. But it is one of the most obvious, interesting and feasible business cases around Identity 2.0.

What do I mean by that term? My idea is about applying the ideas of Identity 2.0, and especially of InfoCard, to the business. Provide every employee (or even just some of them) with an InfoCard and you are better positioned to solve many of today’s open issues.

How to issue these cards

I have had this in mind for a pretty long time. I remember that I asked Don Schmidt from Microsoft about the interface between Active Directory and CardSpace some time before EIC 2007. Active Directory might be one source of these cards. Just provide an interface between AD and an Identity Provider for InfoCards and you are able to issue and manage these cards based on information which already exists in the Active Directory. For sure, any other corporate directory or meta directory might work as well.

Today these technical interfaces are still missing, at least in easy-to-use implementations. But it won’t take that long until we see them. Thus, it is time to start thinking about the use cases.
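Since, as Martin says, these interfaces are still missing, the issuance idea can only be shown hypothetically. In this toy sketch an Identity Provider mints a card's claim set from a directory entry; the directory, user and attribute names are all invented, and no real AD/CardSpace interface is implied.

```python
# Purely hypothetical sketch: an Identity Provider minting a managed
# card's claim set from a directory entry. All names are invented.

DIRECTORY = {
    "jdoe": {
        "displayName": "Jane Doe",
        "mail": "jdoe@example.com",
        "title": "Senior Engineer",
        "active": True,   # cleared when the employee leaves
    },
}

def issue_card(uid, claim_names):
    """Build a card (a set of claims) for an active directory user."""
    entry = DIRECTORY.get(uid)
    if entry is None or not entry["active"]:
        raise LookupError("no active directory entry for %r" % uid)
    return {name: entry[name] for name in claim_names if name in entry}

card = issue_card("jdoe", ["displayName", "mail", "title"])
```

Because the enterprise itself acts as the Identity Provider in this picture, clearing the `active` flag is enough to stop further card issuance, which is the revocation property Martin highlights.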

How to use these cards

There are at least three types of cards I have in mind:

  • Virtual business cards: They are used when someone represents his company. How do you ensure today that every employee provides current and correct information when he registers with other web sites? How do you ensure that he acts on the web as you expect him to? How do you ensure that he enters the correct title or the correct information about the size of your business when registering? InfoCards are the counterpart of today’s paper-based business cards, but they can contain more information. And there might be different ones for different purposes.
  • Virtual corporate cards: They are used for B2B transactions and interactions. Add information like business roles to the cards and you can provide all these claims or assertions which are required for B2B business. These cards can be an important element in Federation, providing current information on the role of an employee or other data required. For sure there can be as well several cards, depending on the details which are required for interaction with different types of business partners.
  • Virtual employee cards: They are used internally, for example to identify users in business processes. Again, there might be a lot of information on them, like current business roles. You might use them as well to improve internal order processes, identifying the users who request new PCs, paper, or whatever else.
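One rough way to picture Martin's three card types is as different claim profiles selected from the same employee record. Every attribute, profile and value below is invented for illustration:

```python
# Toy illustration of the three card types as claim profiles over one
# employee record. All names here are made up for the example.

EMPLOYEE = {
    "displayName": "Jane Doe",
    "mail": "jdoe@example.com",
    "title": "Senior Engineer",
    "businessRole": "purchasing-approver",
    "companySize": "10000+",
}

CARD_PROFILES = {
    "business":  ["displayName", "title", "companySize"],   # outward-facing
    "corporate": ["displayName", "businessRole"],           # B2B claims
    "employee":  ["displayName", "mail", "businessRole"],   # internal use
}

def make_card(kind, record=EMPLOYEE):
    """Select the claims a given card type carries."""
    return {name: record[name] for name in CARD_PROFILES[kind]}
```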

With these three types, I assume I might even have to extend the name for the cards. But I will stick with the term I have in the title of this post. The interesting aspect is the flexibility which (managed) InfoCards provide, and the ability to manage them in the context of a leading directory you have.

Due to the fact that you are the Identity Provider when applying these concepts, you can ensure that no one uses these cards after leaving the company. You can ensure as well that the data is always up to date. That’s far easier than with some of today’s equivalents for this future type of card.

I will blog these days about two other ideas I have in mind in this context: the way the concept of claims that Microsoft’s Kim Cameron is evangelizing will affect end-to-end security in business processes and SOA applications in general, and the idea of using InfoCards for all the personalization and profiling ideas which were discussed many years ago. I’m convinced that Identity 2.0 concepts like InfoCards and claims are a key element to solve these problems and bring these things to life.

There is a lot of business value in these concepts. And they will affect the way businesses cooperate, because they are much easier to implement and use than many other approaches.

I'm with you 100%, Martin.  That's the most concise and comprehensible description of enterprise Information Cards that I've seen.

Can women detect idiot researchers better than men?

According to an article in The Register:

“Women are four times more likely than men to give out “passwords” in exchange for chocolate bars.

“A survey of 576 office workers in central London found that women are far more likely to give away their computer passwords to total strangers than their male counterparts, with 45 per cent of women versus ten per cent of men prepared to give away their login credentials to strangers masquerading as market researchers.

“The survey, conducted outside Liverpool Street Station in the City of London, was actually part of a social engineering exercise to raise awareness about information security in the run-up to next week’s Infosec Europe conference.

“Infosec has conducted similar surveys every year for at least the last five years involving punters apparently handing over login credentials in exchange for free pens or chocolate rewards.

“Little attempt is made to verify the authenticity of the passwords, beyond follow-up questions asking what category it falls under. So we don’t know whether women responding to the survey filled in any old rubbish in return for a choccy treat or handed out their real passwords.

“This year’s survey results were significantly better than in previous years. In 2007, 64 per cent of people were prepared to give away their passwords for a chocolate bar, a figure that dropped to 21 per cent this time around.

“So either people are getting more security-aware or more weight-conscious. And with half the respondents stating that they used the same passwords at home and work, then perhaps the latter is more likely.

“Taken in isolation the password findings might suggest the high-profile HMRC data loss debacle had increased awareness about information security. However, continued willingness to hand over personal information that could be useful to ID fraudsters suggests otherwise.

“The bogus researchers also asked for workers’ names and telephone numbers, ostensibly so they could be entered into a draw to go to Paris. With this incentive 60 per cent of men and 62 per cent of women handed over their contact information. A similar percentage (61 per cent) were happy to hand over their dates of birth.”

This report is fascinating – not because it is good or bad but because it makes us question so much.

The people being studied don’t understand how our systems operate.  [In my view this is our worst problem.]  They’ve been shut out of knowing why things work the way they do.  So if they can be tricked, should we be surprised?  And does it mean they are “stupid”?

I feel a lot of people are simply sick and tired of naive and stupid questions from naive and stupid researchers.  Example:  I was just called to the door of my hotel room and asked what my major problems were…  Guess what?  I said that I was an architect and thus disqualified from discussing any such issues.  Sugar freaks will be happy that this qualified me for several  free chocolates, as well as some more idiosyncratic pastries…

Converging Metadirectory and Virtual Directory

Phil Hunt, now at Oracle, is the visionary responsible for a lot of the innovation in Virtual Directory. From his recent response to my ideas on second generation metadirectory, it looks like we are actually thinking about things in similar ways, where meta and virtual work together.

As you may know, there has been an ongoing discussion on what does the next generation of meta-directory look like. Kim Cameron’s latest post elaborates on what he thinks is needed for the next generation of “metadirectory”.

  • By “next generation application” I mean applications based on web service protocols. Our directories need to integrate completely into the web services fabric, and application developers must be able to interact with them without knowing LDAP.
  • Developers and users need places they can go to query for “core attributes”. They must be able to use those attributes to “locate” object metadata. Having done so, applications need to be able to understand what the known information content of the object is, and how they can reach it.
  • Applications need to be able to register the information fields they can serve up.

These are actually some of the key reasons I have been advocating for a new approach to developing identity services APIs for developers. We are actually very close in our thinking. Here are my thoughts:

  • There should be a new generation of APIs that de-couple developers from dependence on particular vendor implementations, protocols, and potentially even data schemas when it comes to accessing identity information. Applications should be able to define their requirements for data and simply let the infrastructure deal with how to deliver it.
  • Instead of thinking of core attributes as those attributes that are used in common (e.g. surname is likely the same everywhere), I would like to propose we alter the definition slightly in terms of “authoritativeness”. Application developers should think about what data is core to their application. What data is the application authoritative for? If an application isn’t authoritative over an attribute, it probably shouldn’t be storing or managing that attribute. Instead, this “non-core” attribute should be obtained from the “identity network” (or metaverse, as Kim calls it). An application’s “core” data should only be the data for which the application is authoritative. In that sense, I guess I may be saying the opposite of Kim. But the idea is the same: an application should have a sense of what is core and not core.
  • Applications need to register the identity data they consume, use, and update. Additionally, applications need to register the transactions they intend to perform with that data. This enables identity services to be built around an application that can meet the application’s performance requirements.

What I have just described was actually part of the original inspiration behind CARML (Client Attributes Requirements Markup Language), put forward by Oracle, which the Liberty Alliance is working on now. It was our belief that in order to enable applications to connect to diverse identity service infrastructures, something like CARML was needed to make the identity network possible, adaptive, and intelligent.
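In the spirit of that declaration idea, an application's statement of the identity data it consumes, the data it is authoritative for, and the transactions it performs might be sketched as below. This is not real CARML syntax; all field, attribute and application names are invented.

```python
# Sketch in the spirit of CARML (not actual CARML syntax): an app
# declares what it reads, what it is authoritative for ("writes"),
# and the transactions it intends to perform. Names are invented.

app_declaration = {
    "application": "expense-reports",
    "reads": ["displayName", "mail", "costCenter"],
    "writes": ["expenseLimit"],   # the only attribute this app owns
    "interactions": [
        {"name": "submit-report", "uses": ["displayName", "costCenter"]},
    ],
}

def non_core_attributes(decl):
    """Attributes the app consumes but is not authoritative for; by
    Phil's definition these should come from the identity network."""
    return [a for a in decl["reads"] if a not in decl["writes"]]
```

Such a declaration is also what makes the privacy-impact auditing mentioned next possible: an auditor can read off exactly which identity data an application touches.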

But, while CARML was cool in itself, the business benefit of CARML was that knowing how an application consumes and uses identity data would not only help the identity network but would also greatly improve the ability of auditors to perform privacy impact assessments.

We’ve recently begun an open source project at OpenLiberty called the IGF Attribute Services API that does exactly what Kim is talking about (by the way, I’m looking for nominations for a cool project name – let me know your thoughts). The Attribute Services API is still in early development stages – we are only at milestone 0.3. But that said, now is a great time for broader input. I think we are beginning to show that a fully de-coupled API that meets the requirements above is possible and dramatically easier to use and yet at the same time, much more privacy centric in its approach.

The key to all of this is to get as many applications as possible in the future to support CARML as a standard form of declaration. CARML makes it possible for identity infrastructure product vendors and service providers to build the identity network or next generation of metadirectory as described by Kim.

I haven’t seen CARML – perhaps it is still a private proposal? [UPDATE: I’ve been advised that CARML and the IGF Attribute Services API are the same thing.] I think having a richer common representation for people will be the most important ingredient for success. I’m a little bit skeptical about confining developers to a single API – is this likely to fly in a world where people want to innovate? But details aside, it sounds like CARML will be a helpful input to an important industry discussion. Above all, this needs to be a wide-ranging and inclusive discussion, where we take lots of input. To get “as many applications as possible” involved we need to win the participation and support of application developers – this is not just an “infrastructure” problem.

Now for something completely different.

[Photo: French Guards]

It looks like Dave Kearns might be (?) mad at me… His recent post was entitled Your mother was a hamster and your father smelt of elderberries! Of course I would have taken that as a compliment except that I recognized it from The Holy Grail Scene 8, where the “French Guard” precedes it with, “I don’t wanna talk to you no more, you empty headed animal food trough wiper! I fart in your direction.”

The olive branch (or was it a birch rod?) to which Dave refers is this:

Kim has now responded (“Through the looking glass“) to my Humpty Dumpty post, and we’re beginning to sound like a couple of old philosophes arguing about whether or not to include “le weekend” and “hamburguer” and other Franglais in the French dictionary.

We really aren’t that far apart.

In his post, Kim recalls launching the name “metadirectory” back in ‘95 with Craig Burton and I certainly don’t dispute that. In fact, up until 1999, I even agreed somewhat with his definition:

“In my world, a metadirectory is one that holds metadata – not actual objects, but descriptions of objects and their locations in other physical directories.”

But as I continued in that Network World column:

“Unfortunately, vendors such as Zoomit took the term ‘metadirectory’ and redefined it so it could be used to describe what I’d call an überdirectory – a directory that gathers and holds all the data from all your other directories.”

Since no one took up my use of “uberdirectory,” we started using “metadirectory” to describe the situations which required a new identity store and “virtual directory” for those that didn’t.

So perhaps we’re just another couple of blind men trying to describe an elephant.

Gee – have we been having this discussion ever since 1999? Look – I agree that we are both dealing with legitimate aspects of the elephant. Olive branch accepted.

Now that that’s out of the way, maybe I can call upon Dave to lay down his birch rod too. He keeps saying I propose “a directory that gathers and holds ALL the data from ALL your other directories.” Dave, this is just untrue and unhelpful. “ALL” was never the goal – or the practice – of metadirectory, and you know it. The goal was to represent the “object core” – the attributes shared across many applications that therefore need to be kept consistent and synchronized if stored in multiple places. Our other goal was to maintain the knowledge about what objects “were called” in different directories and databases (thus the existence of “connector space”).

Basically, the “ALL” argument is a red herring (and if you want, you can say hareng rouge instead…)
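The “object core” plus connector-space picture can be put into a toy model. System names, identifiers, attributes and values below are all invented; a real metadirectory sync engine is far richer than this.

```python
# Toy model of two metadirectory ideas: synchronize only "object core"
# attributes across stores, and keep connector-space knowledge of what
# the object "is called" in each connected system. Names are invented.

CORE_ATTRIBUTES = {"surname", "givenName"}   # shared, must stay consistent

connector_space = {   # metaverse id -> the object's name in each system
    "mv-001": {"hr": "emp4711", "ldap": "cn=Jane Doe,ou=people"},
}

stores = {
    "hr":   {"emp4711": {"surname": "Doe", "salary": 90000}},
    "ldap": {"cn=Jane Doe,ou=people": {"surname": "Do", "mail": "jd@x"}},
}

def sync_core(mv_id, authoritative="hr"):
    """Copy core attributes from the authoritative store to the rest;
    application-specific attributes (salary, mail) are left alone."""
    names = connector_space[mv_id]
    source = stores[authoritative][names[authoritative]]
    for system, local_name in names.items():
        if system == authoritative:
            continue
        entry = stores[system][local_name]
        for attr in CORE_ATTRIBUTES & source.keys():
            entry[attr] = source[attr]

sync_core("mv-001")   # the ldap entry's misspelled surname is corrected
```

Note that nothing here gathers “ALL” the data anywhere: only the core attributes move, while each system keeps its own application-specific attributes.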

More on the second generation of metadirectory

Oracle’s Clayton Donley has joined the Metadirectory discussion and maybe his participation will help clarify things.

He writes:

I was reading this posting from my friend and colleague, Phil Hunt, in which he talks about the ongoing discussion between Dave Kearns and Kim Cameron about the death of meta-directories.

Not only is he correct in pointing out that Kim’s definition of Meta 2.0 is exactly what virtual directory has been since 1.0, but it’s interesting to see that some virtual directory vendors continue to push something that looks very much like meta-directory 1.0.

Before we go further, I want to peek at how Clayton’s virtual directory works:

… If the request satisfies the in-bound security requirements, the next step is to invoke any global level mappings and plug-ins. Mappings and plug-ins have the ability to modify the operation, such as changing the name or value of attributes. The next step after global plug-ins is to determine which adapter(s) can handle the request. This determination is made based on the information provided in the operation.

The primary information used is the DN of the operation – the search base in a search, or the DN of the entry in all other LDAP operations like a bind or add. OVD will look at the DN and determine which adapters could potentially support an operation for that DN. This is possible because each adapter in its configuration tells what LDAP namespace it’s responsible for.

In the case where multiple adapters can support the incoming DN namespace (for example a search whose base is the root of the directory namespace, such as dc=oracle,dc=com), OVD will perform the operation on each adapter. The order of precedence is configurable based on priority, attributes or supported LDAP search filters.
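That DN-based adapter selection can be sketched roughly as follows. The adapter names and suffixes are invented, and real OVD configuration is considerably more involved than this.

```python
# Rough sketch of DN-based routing: each adapter declares the LDAP
# namespace (suffix) it serves, and the operation's DN selects the
# adapter(s). Adapter names and suffixes are invented examples.

ADAPTERS = [
    ("people",   "ou=people,dc=oracle,dc=com"),
    ("partners", "ou=partners,dc=oracle,dc=com"),
    ("root",     "dc=oracle,dc=com"),
]

def route(dn):
    """Adapters that could serve this DN: either the DN falls under the
    adapter's suffix, or (for a broad search base) the suffix falls
    under the DN, as with a search based at dc=oracle,dc=com."""
    dn = dn.lower()
    return [name for name, suffix in ADAPTERS
            if dn.endswith(suffix) or suffix.endswith(dn)]
```

A search based at the root namespace matches all three adapters, so, as Clayton says, the operation is then performed on each of them in configured order of precedence.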

Pretty cool. But let’s do a historical reality check. The first metadirectory, which shipped twelve years ago, included the ability to do real-time queries that were dispatched to multiple LDAP systems depending on the query (and to several at once). The metadirectory provided the “glue” to know which directory service agents could answer which queries. The system performed the assembly of results across originating directory service agents – in other words, multiple LDAP services produced by multiple vendors.

And guess what? The distributed queries were accessed as part of “the metaverse”. The metaverse was in no way limited to “a local store”.

The metaverse was the joined information field comprising all the objects in the metadirectory. Only the smallest set of “core” attributes was stored in the local database or synchronized throughout the system. This set of attributes composed the “object root” – the things that MUST BE THE SAME in each of the applications and stores in a management continent. There actually aren’t that many of them. For example, in normal circumstances, my surname should be the same in all the systems within my enterprise. So it makes sense to synchronize surname between systems so that it actually stays the same over time.

As metadirectories started to compete in the marketplace, the problem of provisioning and managing core attributes came to predominate over that of connecting to application-specific ones. Basically, I think it was just early. That doesn’t mean one should counterpose metadirectory and virtual directory, or congratulate oneself too much for “owning” distributed query. The problem of distributed information is complex and needs multiple tools – even the dreaded “caching”.

Let me return to what I said would be the focus of “second generation metadirectory”:

Providing the framework by which next-generation applications can become part of the distributed data infrastructure. This includes publishing and subscription. But that isn’t enough. Other applications need ways to find it, name it, and so on.

If Clayton and Phil think virtual directories already do this, I can see that I wasn’t clear enough. So here are a few clarifications:

  • By “next generation application” I mean applications based on web service protocols. Our directories need to integrate completely into the web services fabric, and application developers must be able to interact with them without knowing LDAP.
  • Developers and users need places they can go to query for “core attributes”. They must be able to use those attributes to “locate” object metadata. Having done so, applications need to be able to understand what the known information content of the object is, and how they can reach it.
  • Applications need to be able to register the information fields they can serve up.

Today’s virtual directories just don’t do this any better or any worse than metadirectories do. Virtual directories expose some of the fabric, just as today’s metadirectories do, but they don’t get at the big prize. It’s what I have called the unified field of information. Back in the 90’s more than one analyst friend made fun of me for thinking this was possible. But today it is not only possible, it is necessary.

Flickr, Windows Live ID and Phishing

We talk a lot in the identity milieu about opening up the “walled gardens” that keep our digital experiences partitioned between Internet portals.  Speaking as a person who dabbles in many services, it would be really great if I could reuse information rather than entering it over and over again.  I think as time goes on we will get more and more fed up with the friction that engulfs our information.  Over time enough people will feel this way that no portal will be able to avoid “data portability” and still attract usage.

Even so, many have argued that today’s business models don’t allow more user-centric services to evolve.  That’s why it has been fascinating to read about the new Flickr Friend Finder.  I think it is tremendously significant to see organizations of the stature of Flickr, Yahoo, Google and Microsoft working closely together so people can easily associate their pictures on one site with their friends and colleagues from others.

Once people decide to share information between their services, we run smack dab into the “how” of it all.  In the past, some sites actually asked you to give them your username and password, so they could essentially become you.  Clearly this was terrible from a security and identity point of view.  The fact is, sharing requires new technology approaches.

Windows Live has moved forward in this area by developing a new “Contacts API”.  Angus Logan gave us a great overview on his blog recently, taking us through the whole experience.  I recommend you look at it – the design handles a lot of fascinating issues that we’ll be encountering more and more.  I’ll just pick up on the first couple of steps:

Go to the Friend finder

Select Windows Live Hotmail (you can also select Yahoo! Mail and GMail) – I’d imagine soon there will be Facebook / LinkedIn / insert social network here.

If you aren’t already authenticated, use your Windows Live ID to sign in (IMPORTANT: Notice how you are not sharing your Windows Live ID secret credential pair with Flickr – this is a good thing!)

If you have followed my work on the problems with protocols that redirect users across web contexts, you will see there is a potential problem here.  

If Flickr plays by the rules, it will not learn your username and password, and cannot “become you”.  It really is a step forward.

But if a user gets used to this behavior, a disreputable site can pretend to send her to Windows Live by putting up a fake page.  The fake can look real enough that the user gives away her credentials.
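The rule users are implicitly being asked to apply here amounts to an exact-match check: credentials belong only at the genuine login host, over HTTPS. A minimal sketch (the lookalike domain in the comment is an invented example):

```python
# Sketch of the exact-match rule a user would need to apply before
# typing credentials: exact host match plus HTTPS. The lookalike
# domain used in the test is invented for illustration.

from urllib.parse import urlsplit

GENUINE_LOGIN_HOST = "login.live.com"

def safe_to_enter_credentials(url):
    """True only for HTTPS pages whose host is exactly the genuine
    login host; prefixed lookalike domains fail the exact match."""
    parts = urlsplit(url)
    return parts.scheme == "https" and parts.hostname == GENUINE_LOGIN_HOST
```

Of course, the whole problem is that ordinary users cannot be expected to perform even this simple check by eye, which is exactly the concern raised in the comment that follows.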

A user called davidacoder called this out on Angus’ blog:

I think this whole approach will lead to many, many, many hacked Windows Live ID accounts. If you guys seriously believe that average users will be able to follow the rule “only type in your credentials on login.live.com” you are just naive. AND your own uber-security guy Kim Cameron has been telling that very story to the world for years already. I wouldn’t mind so much if a Live ID was a low-value asset, but you bring people to associate some of their most valuable assets with it (email, calendar, contacts). I find the whole approach irresponsible. I just hope that at some point, if someone loses his credentials this way, he will sue you and present Kim Cameron’s blog as evidence that you were perfectly aware of what danger you put your users in. And to make a long story short, I think the Live ID team should fix the phishing problem first (i.e. implement managed infocards), before they come up with new delegation stuff etc. that will just lead to more attack surface. Very bad planning.

I admire David’s passion, although I’d prefer not to be used in any lawsuits if that is OK with everyone.  Let’s face it.  There are two very important things to be done here. 

One is to open up the portals so people can control their information and use it as they see fit.  I totally endorse Angus’ work in this regard, and the forward-looking attitude of the Windows Live team.  I urge everyone to give them the credit they deserve so they’ll continue to move in this positive direction.

The other is to deal with the phishing problems of the web. 

And let me be clear.  Information sharing is NOT the only factor heightening the need for stronger Internet identity.  It is one of a dozen factors.  Perhaps the most dangerous of these is the impending collision between the security infrastructure of the Internet and that of the enterprise.  But no one can prevent this collision – or turn back the forces of openness.  All we can do is make sure we apply every effort to get stronger identity into place.

On that front, today Neelamadhaba Mahapatro (Neel), who runs Windows Live ID, put up a post where he responds to David’s comment:

Earlier this week a comment was left on Angus Logan’s blog, it got me thinking, and I want to share what we are doing to create phishing resistant systems.

  • We are absolutely aware of the dangers of phishing on the Internet.
  • We understand the probability of attack goes up when the value of the asset that is being protected is higher than the strength of authentication protecting that asset – watch this video by Kim Cameron to see OpenID phished.
  • We have put certain measures in place to counteract phishing attempts which are listed below.

Self Issued InfoCards

In August 2007 we announced beta support for self issued InfoCards with Windows Live ID (instead of username/password). The Windows Live ID team is working closely with the Windows CardSpace team to ensure we deliver the best solution for the 400 million+ people who use Windows Live ID monthly. Angus’s commenter, davidacoder, also asked for the Windows Live ID service to become a Managed InfoCard provider – we have been evaluating this; however we have nothing to announce yet.

Authenticating to Windows Live ID with CardSpace.

Additional Protection through Extended Validation Certificates

To further reduce the risk of phishing, we have implemented Extended Validation certificates to prove that the login.live.com site is trustworthy. I do however think more education for internet users is required to help drive the understanding of what it means when the address bar turns green (and what to do when it doesn’t). When authenticating in a web browser, Microsoft will only ask for your Windows Live ID credential pair on login.live.com – nowhere else! (See this related post).

login.live.com with the Extended Validation certificate. 

Neel continues by showing a number of other initiatives the group has taken – including the Windows Live Sign-in Assistant and “roaming tiles”.  He concludes:

We’re constantly looking for ways to balance end-user security/privacy and user experience. If the barrier to entry is too high or the user experience is poor, the users will revolt. If it is too insecure the system becomes an easy target. A balance needs to be struck. Using Windows CardSpace is definitely a move forward from usernames & passwords but adoption will be the critical factor here.

And he’s right.  Sites like Windows Live can really help drive this, but they can’t tell users what to do.  The important thing is to give people the option of using Information Cards to prevent phishing.  Beyond that, it is a matter of user education. One option would be for systems like Live ID to automatically suggest stronger authentication to people who use features like data sharing and off-portal authentication – features that put password credentials more at risk.
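That last suggestion can be pictured as a simple step-up policy: accounts that use riskier features get nudged toward stronger authentication. The feature names and numeric levels below are invented for illustration; they are not part of any Live ID interface.

```python
# Sketch of a step-up authentication policy. Accounts using features
# that expose password credentials to more risk (data sharing,
# off-portal authentication) are prompted to move to Information Cards.
# All names here are hypothetical.

AUTH_LEVELS = {"password": 1, "infocard": 2}

# Features that put a password credential more at risk.
RISKY_FEATURES = {"contact_sharing", "offsite_auth"}

def required_level(features_used: set) -> int:
    """A password suffices for basic use; risky features call for an InfoCard."""
    if features_used & RISKY_FEATURES:
        return AUTH_LEVELS["infocard"]
    return AUTH_LEVELS["password"]

def should_suggest_upgrade(current_method: str, features_used: set) -> bool:
    """True when the account's current credential is weaker than its usage warrants."""
    return AUTH_LEVELS[current_method] < required_level(features_used)
```

The point of the sketch is only that the policy is cheap to evaluate at sign-in time: the system already knows which features an account uses, so suggesting stronger authentication costs nothing and leaves the choice with the user.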

Microsoft must “U-Prove” what its plans are

Kuppinger Cole’s analyst Felix Gaehtgens calls on Microsoft to move more quickly in announcing how we are going to make Credentica’s Minimal Disclosure technology available to others in the industry.  He says,

“On March 6th, almost a month ago, Microsoft announced its acquisition of Montreal based Credentica, a technology leader in the online digital privacy area. It’s been almost a month, but the dust won’t settle. Most analysts including KCP agree that Microsoft has managed a master coup in snapping up all patents and rights to this technology. But there are fears in the industry that Microsoft could effectively try to use this technology to enrich its own platform whilst impeding interoperability by making the technology unavailable. These fears are likely to turn out to be unfounded, but Microsoft isn’t helping to calm the rumour mill – no statements are being made for the time being to clarify its intentions.”

Wow.  Felix makes a month sound like such a long time.  I’m jealous.  To me it just flew by.  But I get his message and feel the tines of his pitchfork.

Calling U-Prove a “Hot Technology” and explaining why, Felix continues,

“…if Microsoft were to choose to leverage the technology only in its own ecosystem, effectively shutting out the rest of the Internet, then it would be very questionable whether the technology would be widely adopted. The same if Microsoft were to release the specifications, but introduce a “poison pill” by leveraging its patent. This would certainly be against Microsoft’s interest in the medium to long future.”

This is completely correct.  Microsoft would have to be loony to try to partition the Internet across vendor lines.  So, basically, you can be sure we won’t.

“There is a fair amount of mistrust in the industry, sometimes even bordering on paranoia because of Microsoft’s past approach to privacy and interoperability. The current heated discussion about OOXML is an example of this. Over the last years, Microsoft has taken great pains to alleviate those fears, and has shown a willingness to work towards interoperability. But many are not yet convinced of the picture that Kim is painting. It is very much in Microsoft’s interest to make an official statement regarding its broad intentions with U-Prove, and reassure the industry if and how Microsoft intends to follow the “fifth law of identity” with regards to this new technology.”

We are working hard on this.  The problem is that Microsoft can’t make an announcement until we have the legal documents in place to show what we’re talking about.  So there is no conspiracy or poison pill.  Just a lot of details to nail down.

All about Phorm

The Law of User Control is hard at work in a growing controversy about interception of people’s web traffic in the United Kingdom.  At the center of the storm is the “patent-pending” technology of a new company called Phorm.  Its web site advises:

Leading UK ISPs BT, Virgin Media and TalkTalk, along with advertisers, agencies, publishers and ad networks, work with Phorm to make online advertising more relevant, rewarding and valuable. (View press release.)

Phorm’s proprietary ad serving technology uses anonymised ISP data to deliver the right ad to the right person at the right time – the right number of times. Our platform gives consumers advertising that’s tailored to their interests – in real time – with irrelevant ads replaced in the process.

What makes the technology behind OIX and Webwise truly groundbreaking is that it takes consumer privacy protection to a new level. Our technology doesn’t store any personally identifiable information or IP addresses, and we don’t retain information on user browsing behaviour. So we never know – and can’t record – who’s browsing, or where they’ve browsed.

It is counterintuitive to see claims of increased privacy posited as the outcome of a tracking system.  But even if that happened to be true, it seems like the system is being laid on the population as a fait accompli by the big powerful ISPs.  It doesn’t seem that users will be able to avoid having their traffic redirected and inspected.  And early tests of the system were branded “illegal” by Nicholas Bohm of the Foundation for Information Policy Research (FIPR).

Is Phorm completely wrong?  Probably not.  Respected and wise privacy activist Simon Davies has done an Interim Privacy Impact Assessment that argues (in part):

In our view, Phorm has successfully implemented privacy as a key design component in the development of its Phorm Technology system. In contrast to the design of other targeting systems, careful choices have been made to ensure that privacy is preserved to the greatest possible extent. In particular, Phorm has quite consciously avoided the processing of personally identifiable information.

Simon seems to be suggesting we consider Phorm in relation to the current alternatives – which may be worse.

To make a judgment we need to really understand how Phorm’s system works.  Dr. Richard Clayton, a computer security researcher at the University of Cambridge and a participant in Light Blue Touchpaper, has published a succinct ten-page explanation that is a must-read for anyone who is a protocol head.

Richard says his technical analysis of the Phorm online advertising system has reinforced his view that it is “illegal”, breaking laws designed to limit unwarranted interception of data.

The British Information Commissioner’s Office confirmed to the BBC that BT is planning a large-scale trial of the technology “involving around 10,000 broadband users later this month”.  The ICO said: “We have spoken to BT about this trial and they have made clear that unless customers positively opt in to the trial their web browsing will not be monitored in order to deliver adverts.”

Having quickly read Richard’s description of the actual protocol, it isn’t yet clear to me that if you opt out, your web traffic isn’t still being examined and redirected.  But there is worse. I have to admit to a sense of horror when I realized the system rewards ISPs for abusing their trusted role in the Internet by improperly posing as other peoples’ domains in order to create fraudulent cookies and place them on users’ machines.  Is there a worse precedent?  How come ISPs can do this kind of thing and others can’t?  Or perhaps now they can…

To accord with the Laws of Identity, no ISP would examine or redirect packets to a Phorm-related server unless a user explicitly opted-in to such a service.  Opting in should involve explicitly accepting Phorm as a justifiable witness to all web interactions, and agreeing to be categorized by the Phorm systems.

The system is devised to aggregate across contexts, and thus runs counter to the Fourth Law of Identity.  It claims to mitigate this by reducing profiling to categorization information.  However, I don’t buy that.  Categorization, practiced at a grand enough scale and over a sufficient period of time, potentially becomes more privacy-invasive than a regularly truncated audit trail.  Thus there must be mechanisms for introducing amnesia into the categorization itself.

Phorm would therefore require clearly defined mechanisms for deprecating and deleting profile information over time, and these should be made clear during the opt-in process.
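What such amnesia might look like, in a deliberately simplified sketch: a category profile that timestamps every observation and forgets anything older than a fixed retention window. The 90-day window and class shape are my own assumptions, not anything Phorm has described.

```python
import time

# Illustrative retention window -- an arbitrary choice for this sketch.
RETENTION_SECONDS = 90 * 24 * 3600  # 90 days

class CategoryProfile:
    """A category store with built-in amnesia: entries expire after a fixed window."""

    def __init__(self):
        self.entries = []  # list of (category, seen_at) pairs

    def observe(self, category, now=None):
        """Record that the user was categorized under `category` at time `now`."""
        self.entries.append((category, now if now is not None else time.time()))

    def expire(self, now=None):
        """Forget every categorization older than the retention window."""
        now = now if now is not None else time.time()
        self.entries = [(c, t) for (c, t) in self.entries
                        if now - t < RETENTION_SECONDS]

    def categories(self):
        """The set of categories currently remembered."""
        return {c for (c, _) in self.entries}
```

The essential property is that `expire` runs as a matter of policy, not user request: old interests fall out of the profile automatically, so categorization cannot silently accumulate into a lifetime dossier.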

I also have trouble with the notion that in Phorm identities are “anonymized”.  As I understand it, each user is given a persistent random ID.  Whenever the user accesses the ISP, the ISP can see the link between the random ID and the user’s natural identity.  I understand that ISPs will prevent Phorm from knowing the user’s natural identity.  That is certainly better than many other systems.  But I still wouldn’t claim the system is based on anonymity.  It is based on controlling the release of information.
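The distinction between anonymity and controlled release fits in a few lines of code: a persistent pseudonym hides the natural identity from the observer, but because it never changes, it still links every observation to a single profile, and the ISP can always reverse the mapping. This is my own illustration of the general scheme, not Phorm's actual protocol.

```python
import secrets

# ISP side: holds the mapping from natural identity to persistent random ID.
# The observer ("Phorm" in this sketch) sees only the random ID -- yet that
# ID links all of a user's browsing into one profile.

isp_mapping = {}         # natural identity -> pseudonym (reversible by the ISP)
observer_profiles = {}   # pseudonym -> list of sites visited

def pseudonym_for(user: str) -> str:
    """Return the user's persistent random ID, minting one on first sight."""
    if user not in isp_mapping:
        isp_mapping[user] = secrets.token_hex(8)
    return isp_mapping[user]

def observe(user: str, site: str):
    """The observer records a visit against the pseudonym, never the real identity."""
    observer_profiles.setdefault(pseudonym_for(user), []).append(site)

observe("alice@example.net", "news.example")
observe("alice@example.net", "health.example")
# One pseudonym now carries Alice's whole browsing history:
# linkable over time, and de-anonymizable by anyone holding isp_mapping.
```

The observer never sees “alice@example.net”, which is the controlled-release part; but the single persistent key accumulating her entire history is exactly why I wouldn't call this anonymity.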

[Podcasts are available here]

Chaos Computer Club gives us the German phish finger

If you missed this article in The Register, you missed the most instructive story to date about applied biometrics:  

A hacker club has published what it says is the fingerprint of Wolfgang Schauble, Germany’s interior minister and a staunch supporter of the collection of citizens’ unique physical characteristics as a means of preventing terrorism.

In the most recent issue of Die Datenschleuder, the Chaos Computer Club printed the image on a plastic foil that leaves fingerprints when it is pressed against biometric readers…

Last two pages of magazine issue, showing article and including plastic film containing Schauble’s fingerprint

“The whole research has always been inspired by showing how insecure biometrics are, especially a biometric that you leave all over the place,” said Karsten Nohl, a colleague of an amateur researcher going by the moniker Starbug, who engineered the hack. “It’s basically like leaving the password to your computer everywhere you go without you being able to control it anymore.” … 

A water glass 

Schauble’s fingerprint was captured off a water glass he used last summer while participating in a discussion celebrating the opening of a religious studies department at the University of Humboldt in Berlin. The print came from an index finger, most likely the right one, Starbug believes, because Schauble is right-handed.

The print is included in more than 4,000 copies of the latest issue of the magazine, which is published by the CCC. The image is printed two ways: one using traditional ink on paper, and the other on a film of flexible rubber that contains partially dried glue. The latter medium can be covertly affixed to a person’s finger and used to leave an individual’s prints on doors, telephones or biometric readers…

Schauble is a big proponent of using fingerprints and other unique characteristics to identify individuals.

“Each individual’s fingerprints are unique,” he is quoted as saying in this official interior department press release announcing a new electronic passport that stores individuals’ fingerprints on an RFID chip. “This technology will help us keep one step ahead of criminals. With the new passport, it is possible to conduct biometric checks, which will also prevent authentic passports from being misused by unauthorized persons who happen to look like the person in the passport photo.”

The magazine is calling on readers to collect the prints of other German officials, including Chancellor Angela Merkel, Bavarian Prime Minister Guenther Beckstein and BKA President Joerg Ziercke.

“The thing I like a lot is the political activism of the hack,” said Bruce Schneier, who is chief security technology officer for BT and an expert on online authentication. Fingerprint readers were long ago shown to be faulty, largely because designers opt to make the devices err on the side of false positives rather than on the side of false negatives…

[Read the full article here]