My dog ate my homework

Am I the only one, or is this a strange email from Facebook?

I mean, “lost”??  No backups?  

I hear you.  This must be fake – a phishing email, right?   

No https on the page I'm directed to, either… The average user doesn't have a chance of figuring out whether this is legit or not.  So guess what: he or she won't even try.

I'll forgive and forget the “loss”, but following it up by putting all their users through a sequence of steps that teaches them how to be phished really stinks.

Seems to drive home the main premise of Information Cards set forth in the Laws of Identity:

Hundreds of millions of people have been trained to accept anything any site wants to throw at them as being the “normal way” to conduct business online. They have been taught to type their names, secret passwords and personal identifying information into almost any input form that appears on their screen.

There is no consistent and comprehensible framework allowing them to evaluate the authenticity of the sites they visit, and they don’t have a reliable way of knowing when they are disclosing private information to illegitimate parties.

 

The economics of vulnerabilities…

Gunnar Peterson of 1 Raindrop has blogged his Keynote at the recent Quality of Protection conference.  It is a great read – and a defense in depth against the binary “secure / not secure” polarity that characterizes the thinking of those new to security matters. 

His argument riffs on Dan Geer's famous Risk Management is Where the Money Is.  He turns to Warren Buffett as someone who knows something about this kind of thing, writing:

“Of course, saying that you are managing risk and actually managing risk are two different things. Warren Buffett started off his 2007 shareholder letter talking about financial institutions’ ability to deal with the subprime mess in the housing market saying, “You don't know who is swimming naked until the tide goes out.” In our world, we don't know whose systems are running naked, with no controls, until they are attacked. Of course, by then it is too late.

“So the security industry understands enough about risk management that the language of risk has permeated almost every product, presentation, and security project for the last ten years. However, a friend of mine who works at a bank recently attended a workshop on security metrics, and came away with the following observation – “All these people are talking about risk, but they don't have any assets.” You can't do risk management if you don't know your assets.

“Risk management requires that you know your assets, that on some level you understand the vulnerabilities surrounding your assets, the threats against those, and efficacy of the countermeasures you would like to use to separate the threat from the asset. But it starts with assets. Unfortunately, in the digital world these turn out to be devilishly hard to identify and value.

“Recent events have taught us again, that in the financial world, Warren Buffett has few peers as a risk manager. I would like to take the first two parts of this talk looking at his career as a way to understand risk management and what we can infer for our digital assets.

Analysing vulnerabilities and the values of assets, he uncovers two pyramids that turn out to be inverted. 

To deliver a real Margin of Safety to the business, I propose the following based on a defense in depth mindset. Break the IT budget into the following categories:

  • Network: all the resources invested in Cisco, network admins, etc.
  • Host: all the resources invested in Unix, Windows, sys admins, etc.
  • Applications: all the resources invested in developers, CRM, ERP, etc.
  • Data: all the resources invested in databases, DBAs, etc.

Tally up each layer. If you are like most businesses, you will probably find that you spend most on Applications, then Data, then Host, then Network.

Then do the same exercise for the Information Security budget:

  • Network: all the resources invested in network firewalls, firewall admins, etc.
  • Host: all the resources invested in vulnerability management, patching, etc.
  • Applications: all the resources invested in static analysis, black box scanning, etc.
  • Data: all the resources invested in database encryption, database monitoring, etc.

Again, tally up each layer. If you are like most businesses, you will find that you spend most on Network, then Host, then Applications, then Data. Congratulations, Information Security, you are diametrically opposed to the business!
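(If you want to try Gunnar's exercise yourself, here is a toy tally in Python. The figures are invented placeholders rather than anything from his talk; the point is simply that ranking the two budgets by layer exposes the inversion he describes.)

```python
# Toy illustration of Gunnar Peterson's layer-by-layer tally.
# All figures are invented placeholders -- substitute your own budgets.

it_budget = {           # where the business invests
    "Applications": 40_000_000,
    "Data":         25_000_000,
    "Host":         20_000_000,
    "Network":      15_000_000,
}

security_budget = {     # where security typically invests
    "Network":      4_000_000,
    "Host":         2_500_000,
    "Applications": 1_000_000,
    "Data":           500_000,
}

def ranked(budget):
    """Return layer names ordered from largest to smallest spend."""
    return [layer for layer, _ in sorted(budget.items(), key=lambda kv: -kv[1])]

print("Business priorities:", ranked(it_budget))
print("Security priorities:", ranked(security_budget))
# If the second list is roughly the first one reversed, security spending is,
# as Gunnar puts it, diametrically opposed to the business.
```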

He relates his thinking to a fascinating piece by Pat Helland called SOA and Newton's Universe (a must-read to which I will return) and then proposes some elements of a concrete approach to development of meaningful metrics that he argues allow correlation of “value” and “risk” in ways that could sustain meaningful business decisions. 

In an otherwise clear argument, Gunnar itemizes a series of “Apologies”, in the sense of corrections applied post-facto due to the uncertainty of decision-making in a distributed environment:

Example Apologies – Identity Management tools – provisioning, deprovisioning, Reimburse customer for fraud losses, Compensating Transaction – Giant Global Bank is still sorry your account was compromised!

Try as I might, I don't understand the categorization of identity management tools as apology, or their relationship to account compromise – I hope Gunnar will tell us more. 

My Twitterank is 101.54

In case you need mind-stretching with regard to credulity, try out this piece from Sprout Marketing:

Madness erupted on Twitter last night, as the latest cool “app,” Twitterank, was suddenly accused of being a simple password swiping scheme. Over the past 48 hours, thousands of people were Tweeting the same message:

my Twitterank is 101.54!

Each one of those thousands of users freely gave out their username and password to the site. In exchange, the site uses some complicated algorithm (or not, maybe it's entirely random) and out pops a rating.

Then around 3 p.m. or so, Mountain Time, PANIC broke out.

This is how e-riots start...

Within minutes, similar messages were everywhere. This is the online equivalent of an angry, confused mob [FOLLOW the incredible link – Kim] . ZDnet jumped in, along with dozens of other legitimate news sources.

News is breaking out this morning that it really isn't a scam at all. Regardless, I think there are a couple lessons here.

1. Twitter people need to be a lot more careful about their passwords. A lot of them use the same passwords across multiple sites. If the Twitterank person wanted, he could be posting to your blog while ordering expensive popcorn with your credit card.

2. How trustworthy is your brand? Do people have confidence in coming to your site that if they share personal information, it'll be protected? It took eBay and Amazon years to get to this point; they were the pioneers. There are tons of sites that do e-commerce now, thanks to Amazon.

Then you look at the Twitterank site; does it instill confidence? Kind of reminds me of an old Yahoo! Geocities page. Sure, he did it late one night for kicks, and he SAYS he won't take your password…

Apparently this was good enough for tons of people. But I bet they're rethinking that today.

The average person has no way of evaluating the extent to which their passwords are in danger, especially when presented with two related sites that perform redirection or ask for entry of passwords. 

The only safe solution for the broad spectrum of computer users is one in which they cannot give away their secrets.  In other words:  Information Cards (the advantage being they don't necessarily require hardware) or Smart Cards.   Can there be a better teacher than reality?

[Via Vu – Thanks]

Security and ContactPoint: perception is all

Given the recent theft of my identity while it was being “stewarded” by Countrywide, I feel especially motivated to share with you this important piece on ContactPoint by Sir Bonar Neville-Kingdom GCMG KCVO that appeared in Britain's Ideal Government.   Sir Bonar writes:

I’m facing a blizzard of Freedom of Information requests from the self-appointed (and frankly self-righteous) civil liberties brigade about releasing details of the ContactPoint security review. Of course we’re all in favour of Freedom of Information to a point but there is a limit.

Perhaps I might point out:

The decision not to release any information about the ContactPoint security review was taken by an independent panel. I personally chaired this panel to ensure its independence from any outside interests. I was of course not directly involved in the original requests, which were handled by a junior staff member.

The security of ContactPoint relies on nobody knowing how it works. If nobody knows what the security measures are, how can they possibly circumvent them? This is simply common sense. Details of the security measures will be shared only with the 330,000 accredited and vetted public servants who will have direct access to the database of children.

We’re hardly going to ask every Tom, Dick and Harry for how to keep our own data secure when, as you’re probably aware, our friends in Cheltenham pretty much invented the whole information security game. To share the security details with some troublemaking non-governmental organisation is merely to ask for trouble with the news media and to put us all needlessly at risk. The Department will not tolerate such risk and it is clearly not in the public interest to do so.

We did consider whether to redact and release any text.  We concluded that the small amount of text that would result after redacting text that should not be released would be incoherent and without context.  Such a release would serve no public interest.

ContactPoint is both a safe and secure system and I should remind everyone that it is fundamental to its success that it is perceived as such by parents, the professionals that use it and others with an interest in ContactPoint and its contribution to delivering the Every Child Matters agenda. Maintaining this perception of absolute “gold standard” security is why it is so important that nobody should question the security arrangements put in by our contractor Cap Gemini (whom I shall be meeting again in Andorra over the weekend).

We must guard the public mind – and indeed our own minds – against any inappropriate concerns on data security.

All this is set out on the Every Child Matters website, which includes a specific and contextual reference to the ContactPoint Data Security Review.  The content has been recently updated and can be found at: http://www.everychildmatters.gov.uk/deliveringservices/contactpoint/security/

Sending out our policy thinking via the medium of a Web Site is a central plank of the “Perfecting Web 1.0” aspect of our Transformational Government strategy, which is due to be complete in 2015. If interfering busybodies have any other queries about how we propose that children in Britain should be raised and protected I would refer them to that.

I might add we never get this sort of trouble from the trade association Intellect, and this is why we find them a pleasure to deal with. And on the foundation of that relationship is our track record of success in government IT projects built.

So put that in your collective pipe and smoke it, naysayers. Now is not the time to ask difficult questions. We have to get on with the job of restoring order.

 

Protecting the Internet through minimal disclosure

Here's an email I received from John through my I-name account:

I would have left a comment on the appropriate entry in your blog, but you've locked it down and so I can't 🙁

I have a quick question about InfoCards that I've been unable to find a clear answer to (no doubt due to my own lack of comprehension of the mountains of talk on this topic — although I'm not ignorant, I've been a software engineer for 25+ years, with a heavy focus on networking and cryptography), which is all the more pertinent with Equifax's recent announcement of their own “card”.

The problem is one of trust. None of the corporations in the ICF are ones that I consider trustworthy — and Equifax perhaps least of all. So my question is — in a world where it's not possible to trust identity providers, how does the InfoCard scheme mitigate my risk in dealing with them? Specifically, the risk that my data will be misused by the providers?

This is the single, biggest issue I have when it comes to the entire field of identity management, and my fear is that if these technologies actually do become implemented in a widespread way, they will become mandatory — much like they are to be able to comment on your blog — and people like me will end up being excluded from participating in the social cyberspace. I am already excluded from shopping at stores such as Safeway because I do not trust them enough to get an affinity card and am unwilling to pay the outrageous markup they require if you don't.

So, you can see how InfoCard (and similar schemes) terrify me. Even more than phishers. Please explain why I should not fear!

Thank you for your time.

There's a lot compressed into this note, and I'm not sure I can respond to all of it in one go.  Before getting to the substantive points, I want to make it clear that the only reason identityblog.com requires people who leave a comment to use an Information Card is to give them a feeling for one of the technologies I'm writing about.  To quote Don Quixote: “The proof of the pudding is the eating.”  But now on to the main attraction. 

It is obvious, and your reference to the members of the ICF illustrates this, that every individual and organization ultimately decides who or what to trust for any given reason.  Wanting to change this would be a non-starter.

It is also obvious that in our society, if someone offers a service, it is their right to establish the terms under which they do so (even requiring identification of various sorts).

Yet to achieve balance with the rights of others, the legal systems of most countries also recognize the need to limit this right.  One example would be in making it illegal to violate basic human rights (for example, offering a service in a way that is discriminatory with respect to gender, race, etc). 

Information Cards don't change anything in this equation.  They replicate what happens today in the physical world.  The identity selector is no different than a wallet.  The Information Cards are the same as the cards you carry in your wallet.  The act of presenting them is no different than the act of presenting a credit card or photo id.  The decision of a merchant to require some form of identification is unchanged in the proposed model.

But is it necessary to convey identity in the digital world?

Increasing population and density in the digital world has led to the embodiment of greater material value there – a tendency that will only become stronger.  This has attracted more criminal activity and if cyberspace is denied any protective structure, this activity will become disproportionately more pronounced as time goes on.  If everything remains as it is, I don't find it very hard to foresee an Internet vulnerable enough to become almost useless.

Many people have come or are coming to the conclusion that these dynamics make it necessary to be able to determine who we are dealing with in the digital realm.  I'm one of them.

However, many also jump to the conclusion that if reliable identification is necessary for protection in some contexts, it is necessary in all contexts.  I do not follow that reasoning. 

Some != All

If the “some == all” thinking predominates, one is left with a future where people need to identify themselves to log onto the Internet, and their identity is automatically made available everywhere they go:  ubiquitous identity in all contexts.

I think the threats to the Internet and to society are sufficiently strong that in the absence of an alternate vision and understanding of the relevant pitfalls, this notion of a singular “tracking key” is likely to be widely mandated.

This is as dangerous to the fabric and traditions of our society as the threats it attempts to counter.  It is a complete departure from the way things work in the physical world.

For example, we don't need to present identification to walk down the street in the physical world.  We don't walk around with our names or religions stenciled on our backs.  We show ID when we go to a bank or government office and want to get into our resources.  We don't show it when we buy a book.  We show a credit card when we make a purchase.  My goal is to get to the same point in the digital world.

Information Cards were intended to deliver an alternate vision from that of a singular, ubiquitous identity.

New vision

This new vision is of identity scoped to context, in which there is minimal disclosure of the specific attributes necessary to a transaction.  I've discussed all of this here.

In this vision, many contexts require ZERO disclosure.  That means NO release of identity.  In other words, what is released needs to be “proportionate” to specific requirements (I quote the Europeans).  It is worth noting that in many countries these requirements are embodied in law and enforced.
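To make “proportionate” concrete, here is a toy sketch (Python, purely for illustration): the relying party states which claims it requires, and only those claims are released – nothing else. This is not the actual Information Card / WS-Trust exchange, just the principle in miniature.

```python
# Minimal-disclosure sketch: release only the claims a site actually requires.
# Purely illustrative -- the real exchange uses signed tokens, not raw dicts.

my_claims = {
    "given_name": "Alice",
    "email": "alice@example.com",
    "date_of_birth": "1980-05-04",
    "over_18": True,
}

def release(required_claims, available_claims):
    """Return only the claims the relying party asked for and we possess."""
    missing = [c for c in required_claims if c not in available_claims]
    if missing:
        raise ValueError(f"cannot satisfy policy, missing: {missing}")
    return {c: available_claims[c] for c in required_claims}

# A wine merchant only needs to know you are an adult -- zero identifying data.
print(release(["over_18"], my_claims))              # {'over_18': True}

# A shipping site legitimately needs more, but still not everything.
print(release(["given_name", "email"], my_claims))
```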

Conclusions

So I encourage my reader to see Information Cards in the context of the possible alternate futures of identity on the Internet.  I urge him to take seriously the probability that deteriorating conditions on the internet will lead to draconian identity schemes counter to western democratic traditions.

Contrast this dystopia to what is achievable through Information Cards, and the very power of the idea that identity is contextual.  This itself can be the basis of many legal and social protections not otherwise possible. 

It may very well be that legislation will be required to ensure identity providers treat our information with sufficient care, providing individuals with adequate control and respecting the requirements of minimal disclosure.  I hope our blogosphere discussion can advance to the point where we talk more concretely about the kind of policy framework required to accompany the technology we are building. 

But the very basis of all these protections, and of the very possibility of providing protections in the first place, depends on gaining commitment to minimal disclosure and contextual identity as a fundamental alternative to far more nefarious alternatives – be they pirate-dominated chaos or draconian over-identification.  I hope we'll reach a point where no one thinks about these matters absent the specter of such alternatives.

Finally, in terms of the technology itself, we need to move towards the cryptographic systems developed by David Chaum, Stefan Brands and Jan Camenisch (zero knowledge proofs).    Information Cards are an indispensable component required to make this possible.  I'll also be discussing progress in this area more as we go forward.

 

Leaving a comment

Since one of my goals is to introduce people to Information Cards – and because I used to get mountains of spam comments and worse (!) – I require people to either write to me or use an Information Card when leaving comments on my blog. 

(This blog is hosted for me by Joyent, and it runs on open source software (WordPress, PHP, MySQL, Apache, OpenSolaris).  For Information Card support, it uses Pamelaware, an open-source project offering an Information Card plugin for WordPress and other popular programs.)

Information Cards use an “identity selector”.  Vista has the CardSpace V1 selector built right in.   (If you  don't use Vista please continue here.   Also, if you are wondering about our new beta of Windows CardSpace Geneva – V2 if you want – I'll deal with that in a separate post.)

How you register at my site

1. Click the Information Card logo or the “LOG IN” option in the upper right hand corner of the blog.  (Clicking the logo saves you the step where you can learn about Information Cards).

 

2. If you clicked the logo, go to step 3.  If you have clicked “LOG IN”, you will see this page and can explore the ‘Learn More’ and other tabs.  When ready, click on the Information Card logo to proceed.

 

3. CardSpace will start (it may be a bit slow the first time it loads).  It will verify my site's certificate, and present it to you so you can decide whether or not to proceed.  Click “Yes, choose a card to send”.

 

4.  If you are trying CardSpace for the first time, you don't have a “Managed” card yet.  So just create a “Personal Card” that serves a bit like a username / password – except it can't be phished and protects your privacy by automatically using a different key at every site. 

 

5. You'll be asked to create a Personal card.  Name it with something you'll recognize, and I recommend you put a picture on it (the picture will never be sent).  The name and picture prevent many attacks since if someone tries to fool you with a CardSpace “look-alike”, they won't know what your Cards look like and you will immediately notice your cards aren't present! 

Use an email address that you control – you will have to respond to a confirmation email.  Then click SAVE.

 

 

6.  Now you'll see your saved card, and click SEND.

 

7.  The information from your card will be used to log you in to my site, but I'll notice you haven't been here before and send you an email containing a link you must click to complete registration (I want some way to prevent spammers from bothering me).

 

8.  The email I send looks like the one below.  IMPORTANT NOTE:  this email might be “eaten” by your spam protection software (!), so check your spam folder if it doesn't arrive.  (On Hotmail, it doesn't ever get delivered – I haven't sorted that out yet.  It doesn't seem to like my little mail server.)

9.  When you click the embedded link you'll be taken back to my blog as a verification step.  Click on the Information Card logo to log in.

 

10.  CardSpace will come up, and will recognize my site.  Just click send.

 

11.  Et voila…

Press “Go to Blog” and you can leave your comment.

In the future, logging in will just be a two-step process.  Click on the CardSpace logo, click on your personal card, and you will be logged in.  No password to remember.
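(For the curious: the “different key at every site” property mentioned in step 4 comes from deriving a site-specific identifier from a secret held inside the card plus the identity of the site, so two sites can never correlate you by comparing identifiers. The sketch below illustrates the idea; it is not the exact derivation CardSpace performs internally.)

```python
# Sketch of a pairwise, per-site identifier (illustrative only).
import hmac, hashlib

card_master_secret = b"random secret generated when the personal card was created"

def site_identifier(card_secret: bytes, site_identity: str) -> str:
    """Derive an identifier that is stable for one site but useless to any other."""
    return hmac.new(card_secret, site_identity.encode(), hashlib.sha256).hexdigest()

print(site_identifier(card_master_secret, "https://www.identityblog.com"))
print(site_identifier(card_master_secret, "https://example.com"))
# Same card, two unrelated identifiers -- the sites cannot link them, and a
# phishing site presenting a different identity gets yet another useless value.
```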

 

Project Geneva – Part 5

[This is the fifth – and thankfully the final – installment of a presentation I gave to Microsoft developers at the Professional Developers Conference (PDC 2008) in Los Angeles. It starts here.]

I've made a number of announcements today that I think will have broad industry-wide support not only because they are cool, but because they indelibly mark Microsoft's practical and profound commitment to an interoperable identity metasystem that reaches across devices, platforms, vendors, applications, and administrative boundaries. 

I'm very happy, in this context, to announce that from now on, all Live IDs will also work as OpenIDs.

That means the users of 400 million Live ID accounts will be able to log in to a large number of sites across the internet without a further proliferation of passwords – an important step toward reducing password fatigue across the long tail of small sites engaged in social networking, blogging and consumer services.

As the beta progresses, CardSpace will be integrated into the same offering (there is already a separate CardSpace beta for Live ID).

Again, we are stressing choice of protocol and framework.

Beyond this support for a lightweight open standard, we also have a framework specifically tailored for those who want to integrate tightly with a wider range of Live capabilities.

The Live Framework gives you access to an efficient, lightweight protocol that we use to optimize exchanges within the Live cloud.

It too integrates with our Gateway. Developers can download sample code (available in 7 languages), insert it directly into their application, and get access to all the identities that use the gateway including Live IDs and federated business users connecting via Geneva, the Microsoft Services Connector, and third party Apps.

 

Flexible and Granular Trust Policy

 Decisions about access control and personalization need to be made by the people responsible for resources and information – including personal information. That includes deciding who to trust – and for what.

At Microsoft, our Live Services all use and trust the Microsoft Federation Gateway, and this is helpful in terms of establishing common management, quality control, and a security bar that all services must meet.

But the claims-based model also fully supports the flexible and granular trust policies needed in very specialized contexts. We already see some examples of this within our own backbone.

For example, we’ve been careful to make sure you can use Azure to build a cloud application – and yet get claims directly from a third party STS using a different third party’s identity framework, or directly from OpenID providers. Developers who take this approach never come into contact with our backbone.

Our Azure Access Control Service provides another interesting example. It is, in fact, a security component that can be used to provide claims about authorization decisions. Someone who wants to use the service might want their application, or its STS, to consume ACS directly, and not get involved with the rest of our backbone. We understand that. Trust starts with the application and we respect that.

Still another interesting case is HealthVault. HealthVault decided from day one to accept OpenIDs from a set of OpenID providers who operate the kind of robust claims provider needed by a service handling sensitive information. Their requirement has given us concrete experience, and let us learn about what it means in practice to accept claims via OpenID. We think of it as a pilot, really, from which we can decide how to evolve the rest of our backbone.

So in general we see our Identity Backbone and our federation gateway as a great simplifying and synergizing factor for our Cloud services. But we always put the needs of trustworthy computing first and foremost, and are able to be flexible because we have a single identity model that is immune to deployment details.


Identity Software + Services

To transition to the services world, the identity platform must consist of both software components and services components.

We believe Microsoft is well positioned to help developers in this critical area.

Above all, to benefit from the claims-based model, none of these components is mandatory. You select what is appropriate.

We think the needs of the application drive everything. The application specifies the claims required, and the identity metasystem needs to be flexible enough to supply them.

Roadmap

Our roadmap looks like this:

Identity @ PDC

You can learn more about every component I mentioned today by drilling into the 7 other presentations given at PDC (watch the videos…):

Software
(BB42) Identity:  “Geneva” Server and Framework Overview
(BB43) Identity: “Geneva” Deep Dive
(BB44) Identity: Windows CardSpace “Geneva” Under the Hood
Services
(BB22) Identity: Live Identity Services Drilldown
(BB29) Identity: Connecting Active Directory to Microsoft Services
(BB28) .NET Services: Access Control Service Drilldown
(BB55) .NET Services: Access Control In the Cloud Services
 

Conclusion

I once went to a hypnotist to help me give up smoking. Unfortunately, his cure wasn’t very immediate. I was able to stop – but it was a decade after my session.

Regardless, he had one trick I quite liked. I’m going to try it out on you to see if I can help focus your take-aways from this session. Here goes:

I’m going to stop speaking, and you are going to forget about all the permutations and combinations of technology I took you through today. You’ll remember how to use the claims based model. You’ll remember that we’ve announced a bunch of very cool components and services. And above all, you will remember just how easy it now is to write applications that benefit from identity, through a single model that handles every identity use case, is based on standards, and puts users in control.

 

Project Geneva – Part 4

[This is the fourth installment of a presentation I gave to Microsoft developers at the Professional Developers Conference (PDC 2008) in Los Angeles. It starts here.]

We have another announcement that really drives home the flexibility of claims.

Today we are announcing a Community Technology Preview (CTP) of the .Net Access Control Service, an STS that issues claims for access control. I think this is especially cool work since it moves clearly into the next generation of claims, going way beyond authentication. In fact it is a claims transformer, where one kind of claim is turned into another.

An application that uses “Geneva” can use ACS to externalize access control logic, and manage access control rules at the access control service.  You just configure it to employ ACS as a claims provider, and configure ACS to generate authorization claims derived from the claims that are presented to it. 

The application can federate directly to ACS to do this, or it can federate with a “Geneva” Server which is federated with ACS.

ACS federates with the Microsoft Federation Gateway, so it can also be used with any customer who is already federated with the Gateway.

The .Net Access Control Service was built using the “Geneva” Framework.  Besides being useful as a service within Azure, it is a great example of the kind of service any other application developer could create using the Geneva Framework.
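To make the “claims transformer” idea concrete, here is a toy sketch of the sort of rule engine ACS embodies: the claims presented by the caller go in, and derived authorization claims come out. The rule format and claim names are invented for illustration; the real service has its own configuration model.

```python
# Toy claims transformer: input claims in, authorization claims out.
# Rule format and claim names are invented for illustration only.

rules = [
    # (input claim type, input value)         -> (output claim type, output value)
    (("group", "purchasing-agents"),             ("action", "submit-order")),
    (("group", "purchasing-managers"),           ("action", "approve-order")),
    (("federated-org", "fabrikam.example.com"),  ("action", "view-catalog")),
]

def transform(input_claims):
    """Apply every matching rule and return the derived authorization claims."""
    output = []
    for condition, out_claim in rules:
        if condition in input_claims:
            output.append(out_claim)
    return output

caller = [("group", "purchasing-agents"), ("federated-org", "fabrikam.example.com")]
print(transform(caller))
# [('action', 'submit-order'), ('action', 'view-catalog')]
# The application only ever sees the derived 'action' claims, never the raw input.
```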

You might wonder – is there a version of ACS I can run on-premises?   Not today, but these capabilities will be delivered in the future through “Geneva”.

Putting it all together

Let me summarize our discussion so far, and then conjure up Vittorio Bertocci, who will present a demo of many of these components working together.

  • The claims-based model is a unified model for identity that puts users firmly in control of their identities.
  • The model consists of a few basic building blocks that can be put together to handle virtually any identity scenario.
  • Best of all, the whole approach is based on standards and works across platforms and vendors.

Let’s return to why this is useful, and to my friend Joe.  Developers no longer have to spend resources trying to handle all the demands their customers will make of them with respect to identity in the face of evolving technology. They no longer have to worry about where things are running. They will get colossal reach involving both hundreds of millions of consumers and corporate customers, and have complete control over what they want to use and what they don’t.

Click on this link – then skip ahead about 31 minutes – and my friend Vittorio will take you on a whirlwind tour showing all the flexibility you get by giving up complexity and programming to a simple, unified identity model putting control in the hands of its users.  Vittorio will also be blogging in depth about the demo over the next little while.  [If your media player doesn't accept WMV but understands MP4, try this link.]

In the next (and thankfully final!) installment of this series, I'll talk about the need for flexibility and granularity when it comes to trust, and a matter very important to many of us – support for OpenID.

The Identity Domino Effect

My friend Jerry Fishenden, Microsoft's National Technology Officer in the United Kingdom, had a piece in The Scotsman recently where he lays out, with great clarity, many of the concerns that “keep me up at night”.  I hope this kind of thinking will one day be second nature to policy makers and politicians world wide. 

Barely a day passes it seems without a new headline appearing about how our personal information has been lost from yet another database. Last week, the Information Commissioner, Richard Thomas, revealed that the number of reported data breaches in the UK has soared to 277 since HMRC lost 25 million child benefit records nearly a year ago. “Information can be a toxic liability,” he commented.

Such data losses are bad news on many fronts. Not just for us, when it's our personal information that is lost or misplaced, but because it also undermines trust in modern technology. Personal information in digital form is the very lifeblood of the internet age and the relentless rise in data breaches is eroding public trust. Such trust, once lost, is very hard to regain.

Earlier this year, Sir James Crosby conducted an independent review of identity-related issues for Gordon Brown. It included an important underlying point: that it's our personal data, nobody else's. Any organisation, private or public sector, needs to remember that. All too often the loss of our personal information is caused not by technical failures, but by lackadaisical processes and people.

These widely-publicised security and data breaches threaten to undermine online services. Any organisations, including governments, which inadequately manage and protect users’ personal information, face considerable risks – among them damage to reputation, penalties and sanctions, lost citizen confidence and needless expense.

Of course, problems with leaks of our personal information from existing public-sector systems are one thing. But significant additional problems could arise if yet more of our personal information is acquired and stored in new central databases. In light of projects such as the proposed identity cards programme, ContactPoint (storing details of all children in the UK), and the Communications Data Bill (storing details of our phone records, e-mails and websites we have visited), some of Richard Thomas's other comments are particularly prescient: “The more databases set up and the more information exchanged from one place to another, the greater the risk of things going wrong. The more you centralise data collection, the greater the risk of multiple records going missing or wrong decisions about real people being made. The more you lose the trust and confidence of customers and the public, the more your prosperity and standing will suffer. Put simply, holding huge collections of personal data brings significant risks.”

The Information Commissioner's comments highlight problems that arise when many different pieces of information are brought together. Aggregating our personal information in this way can indeed prove “toxic”, producing the exact opposite consequences of those originally intended. We know, for example, that most intentional breaches and leaks of information from computer systems are actually a result of insider abuse, where some of those looking after these highly sensitive systems are corrupted in order to persuade them to access or even change records. Any plans to build yet more centralised databases will raise profound questions about how information stored in such systems can be appropriately secured.

The Prime Minister acknowledges these problems: “It is important to recognise that we cannot promise that every single item of information will always be safe, because mistakes are made by human beings. Mistakes are made in the transportation, if you like – the communication of information”.

This is an honest recognition of reality. No system can ever be 100 per cent secure. To help minimise risks, the technology industry has suggested adopting proposals such as “data minimisation” – acquiring as little data as required for the task at hand and holding it in systems no longer than absolutely necessary. And it's essential that only the minimum amount of our personal information needed for the specific purpose at hand is released, and then only to those who really need it.

Unless we want to risk a domino effect that will compromise our personal information in its entirety, it is also critical that it should not be possible automatically to link up everything we do in all aspects of how we use the internet. A single identifying number, for example, that stitches all of our personal information together would have many unintended, deeply negative consequences.

There is much that governments can do to help protect citizens better. This includes adopting effective standards and policies on data governance, reducing the risk to users’ privacy that comes with unneeded and long-term storage of personal information, and taking appropriate action when breaches do occur. Comprehensive data breach notification legislation is another important step that can help keep citizens informed of serious risks to their online identity and personal information, as well as helping rebuild trust and confidence in online services.

Our politicians are often caught between a rock and a very hard place in these challenging times. But the stream of data breaches and the scope of recent proposals to capture and hold even more of our personal information does suggest that we are failing to ensure an adequate dialogue between policymakers and technologists in the formulation of UK public policy.

This is a major problem that we can, and must, fix. We cannot let our personal information in digital form, as the essential lifeblood of the internet age, be allowed to drain away under this withering onslaught of damaging data breaches. It is time for a rethink, and to take advantage of the best lessons that the technology industry has learned over the past 30 or so years. It is, after all, our data, nobody else's.

My identity has already been stolen through the very mechanisms Jerry describes.  I would find this even more depressing if I didn't see more and more IT architects understanding the identity domino problem – and how it could affect their own systems. 

It's our job as architects to do everything we can so the next generation of information systems are as safe from insider attacks as we can make them.  On the one hand this means protecting the organizations we work for from unnecessary liability;  on the other, it means protecting the privacy of our customers and employees, and the overall identity fabric of society.

In particular, we need to insist on:

  • scrupulously partitioning personally identifying information from operational and profile data;
  • eliminating “rainy day” collection of information – the need for data must always be justifiable;
  • preventing personally identifying information from being stored on multiple systems;
  • use of encryption;
  • minimal disclosure of identity information within a “need-to-know” paradigm.

I particularly emphasize partitioning PII from operational data since most of a typical company's operational systems – and employees – need no access to PII.  Those who do need such access rarely need to know anything beyond a name.  Those who do need greater access to detailed information rarely need access to information about large numbers of people except in anonymized form.
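To illustrate the partitioning point, one common shape for it looks like the sketch below: operational records carry only an opaque pseudonym, and the mapping from pseudonym to the person's identifying details lives in a separate, tightly restricted store that most systems and employees never touch. The names in the sketch are invented; the structure is what matters.

```python
# Sketch of partitioning PII from operational data (names invented).
# Operational systems see only an opaque pseudonym; the pseudonym-to-PII
# mapping lives in a separate, tightly restricted store.

import secrets

pii_vault = {}          # restricted store: pseudonym -> identifying data
operational_db = []     # broadly accessible store: no names, no addresses

def enroll_customer(name, address):
    """Store PII in the vault and return the pseudonym operations will use."""
    pseudonym = secrets.token_hex(16)
    pii_vault[pseudonym] = {"name": name, "address": address}
    return pseudonym

def record_order(pseudonym, item, amount):
    """Operational records reference the customer only by pseudonym."""
    operational_db.append({"customer": pseudonym, "item": item, "amount": amount})

p = enroll_customer("Alice Example", "1 Example Street")
record_order(p, "book", 20)

# Analysts, support staff and most applications work against operational_db;
# only the narrow function that genuinely needs a name resolves the pseudonym.
print(operational_db)
```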

I would love someone to send me a use case that calls for anyone to have access – at the same time – to the personally identifying information about thousands of individuals  (much less millions, as was the case for some of the incidents Jerry describes).  This kind of wholesale access was clearly afforded the person who stole my identity.  I still don't understand why. 

Where is your identity more likely to be stolen?

 From the Economist:

Internet users in Britain are more likely to fall victim to identity theft than their peers elsewhere in Europe and North America. In a recent survey of 6,000 online shoppers in six countries by PayPal and Ipsos Research, 14% of respondents in Britain said that they have had their identities stolen online, compared with only 3% in Germany. More than half of respondents said that they used personal dates and names as passwords, making it relatively easy for scammers to gain access to accounts. The French are particularly cavalier, with two-thirds using easily guessed passwords and over 80% divulging personal data, such as birthdays, on social-networking sites.

Of course, my identity was stolen (and apparently sold) NOT because of inadequate password hygiene, but just because I was dealing with Countrywide – a company whose computer systems were sufficiently misdesigned to be vulnerable to a large-scale insider attack.  So there are a lot of things to fix before we get to a world of trustworthy computing.