The Right To Correlate

Dave Kearns’ comment in Another Violent Agreement convinces me I've got to apply the scalpel to the way I talk about correlation handles.  Dave writes:

‘I took Kim at his word when he talked “about the need to prevent correlation handles and assembly of information across contexts…” That does sound like “banning the tools.”

‘So I'm pleased to say I agree with his clarification of today:

‘“I agree that we must influence behaviors as well as develop tools… [but] there’s a huge gap between the kind of data correlation done at a person’s request as part of a relationship (VRM), and the data correlation I described in my post that is done without a person’s consent or knowledge.” (Emphasis added by Dave)’

Thinking about this some more, it seems we might be able to use a delegation paradigm.

The “Right to Correlate”

Let's postulate that only the parties to a transaction have the right to correlate the data in the transaction (unless it is fully anonymized).

Then it would follow that any two parties with whom an individual interacts would not by default have the right to correlate data they had each collected in their separate transactions.

On the other hand, the individual would have the right to organize and correlate her own data across all the parties with whom she interacts since she was party to all the transactions.

Delegating the Right to Correlate

If we introduce the ability to delegate, then an individual could delegate her right for two parties to correlate relevant data about her.  For example, I could delegate to Alaska Airlines and British Airways the right to share information about me.

Similarly, if I were an optimistic person, I could opt to use a service like that envisaged by Dave Kearns, which “can discern our real desires from our passing whims and organize our quest for knowledge, experience and – yes – material things in ways which we can only dream about now.”  The point here is that we would delegate the right to correlate to this service operating on our behalf.

Revoking the Right to Correlate

A key aspect of delegating a right is the ability to revoke that delegation.  In other words, if the service to which I had given some set of rights became annoying or odious, I would need to be able to terminate its right to correlate.  Importantly, the right applies to correlation itself.  Thus when the right is revoked, the data must no longer be linkable in any way.
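To make the delegation model concrete, here is a minimal sketch in Python.  Everything in it (the `CorrelationRegistry` name, the `grant`/`revoke`/`may_correlate` operations) is a hypothetical illustration, not an existing API; the point is simply that a correlation between two parties exists only while an explicit, revocable grant from the data subject is on record, and that the default answer is no.

```python
from dataclasses import dataclass, field


@dataclass
class CorrelationRegistry:
    """Tracks which pairs of parties a subject has allowed to correlate her data.

    Hypothetical sketch: a real system would need signed, auditable grants,
    and revocation would also have to make previously shared data unlinkable.
    """
    _grants: set = field(default_factory=set)

    @staticmethod
    def _pair(a: str, b: str) -> tuple:
        # Order-independent: a grant covering (A, B) also covers (B, A).
        return tuple(sorted((a, b)))

    def grant(self, party_a: str, party_b: str) -> None:
        """The subject delegates her right to correlate to this pair."""
        self._grants.add(self._pair(party_a, party_b))

    def revoke(self, party_a: str, party_b: str) -> None:
        """Revocation: the pair must stop correlating (and unlink existing data)."""
        self._grants.discard(self._pair(party_a, party_b))

    def may_correlate(self, party_a: str, party_b: str) -> bool:
        # The default is NO: absent a grant, two parties who have each
        # transacted with the subject have no right to correlate.
        return self._pair(party_a, party_b) in self._grants


# The example from the text: delegating to two airlines, then changing one's mind.
registry = CorrelationRegistry()
registry.grant("Alaska Airlines", "British Airways")
assert registry.may_correlate("British Airways", "Alaska Airlines")
registry.revoke("Alaska Airlines", "British Airways")
assert not registry.may_correlate("Alaska Airlines", "British Airways")
```

The interesting design point is the default: absence of a grant means absence of the right, so two parties never gain correlation rights merely by each having transacted with the same person.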

Forensics

There are cases, such as when criminal activity is being investigated or prosecuted, where law enforcement must be able to correlate without the consent of the individual.  This is already the case in western society, and it seems likely that no new mechanisms would be required in a world respecting the Right to Correlate.

Defining contexts

Respecting the Right to Correlate would not by itself solve the Canadian Tire Problem that started this thread.  The thing that made the Canadian Tire human experiments most odious is that they correlated buying habits at the level of individual purchases (our relationship with Canadian Tire as a store) with probable behavior in paying off credit cards (our relationship with Canadian Tire as a credit card issuer).  Paradoxically, someone's loyalty to the store could actually be used to deny her credit.  People who get Canadian Tire credit cards do know that the company is in a position to correlate all this information, but are unlikely to predict this counter-intuitive outcome.

Those of us preferring mainstream credit card companies presumably don't have the same issues at this point in time.  They know where we buy but not what we buy (although there may be data sharing relationships with merchants that I am not aware of… Let me know…).

So we have come to the most important long-term problem:  The Internet changes the rules of the game by making data correlation so very easy.

It potentially turns every credit card company into a data-correlating Canadian Tire.  Are we looking at the Canadian Tirization of the Internet?

But do people care?

Some will say that none of this matters because people just don't care about what is correlated.  I'll discuss that briefly in my next post.

Kim Cameron: secret RIAA agent?

Dave Kearns cuts me to the polemical quick by tarring me with the smelly brush of the RIAA:

‘Kim has an interesting post today, referencing an article (“What Does Your Credit-Card Company Know About You?” by Charles Duhigg in last week’s New York Times.

‘Kim correctly points out the major fallacies in the thinking of J. P. Martin, a “math-loving executive at Canadian Tire”, who, in 2002, decided to analyze the information his company had collected from credit-card transactions the previous year. For example, Martin notes that “2,220 of 100,000 cardholders who used their credit cards in drinking places missed four payments within the next 12 months.” But that's barely 2% of the total, as Kim points out, and hardly conclusive evidence of anything.

‘I'm right with Cameron for most of his essay, up til the end when he notes:

When we talk about the need to prevent correlation handles and assembly of information across contexts (for example, in the Laws of Identity and our discussions of anonymity and minimal disclosure technology), we are talking about ways to begin to throw a monkey wrench into an emerging Martinist machine. Mr. Duhigg’s story describes early prototypes of the machinations we see as inevitable should we fail in our bid to create a privacy enhancing identity infrastructure for the digital epoch.

‘Change “privacy enhancing” to “intellectual property protecting” and it could be a quote from an RIAA press release!

‘We should never confuse tools with the bad behavior that can be helped by those tools. Data correlation tools, for example, are vitally necessary for automated personalization services and can be a big help to future services such as Vendor Relationship Management (VRM). After all, it's not Napster that's bad but people who use it to get around copyright laws who are bad. It isn't a cup of coffee that's evil, just people who try to carry one thru airport security. :)

‘It is easier to forbid the tool rather than to police the behavior but in a democratic society, it's the way we should act.’

I agree that we must influence behaviors as well as develop tools.  And I'm as positive about Vendor Relationship Management as anyone.  But getting concrete, there's a huge gap between the kind of data correlation done at a person's request as part of a relationship (VRM), and the data correlation I described in my post that is done without a person's consent or knowledge.  As VRM's Saint Searls has said, “Sometimes, I don't want a deep relationship, I just want a cup of coffee”.  

I'll come clean with an example.  Not a month ago, I was visiting friends in Canada, and since I had an “extra car”, was nominated to go pick up some new barbells for the kids. 

So, off to Canadian Tire to buy a barbell.  Who knows what category they put me in when 100% of my annual consumption consists of barbells?  It had to be right up there with low-grade oil or even a Mega Thruster Exhaust System.  In this case, Dave, there was no R and certainly no VRM: I didn't ask to be profiled by Mr. Martin's reputation machines.

There is nothing about minimal disclosure that says profiles cannot be constructed when people want that.  It simply means that information should only be collected in light of a specific usage, and that usage should be clear to the parties involved (NOT the case with Canadian Tire!).  When there is no legitimate reason for collecting information, people should be able to avoid it.

It all boils down to the matter of people being “in control” of their digital interactions, and of developing technology that makes this both possible and likely.  How can you compare an automated profiling service you can turn on and off with one such as Mr. Martin thinks should rule the world of credit?  The difference between the two is a bit like the difference between a consensual sexual relationship and one based on force.

Returning to the RIAA, in my view Dave is barking up the wrong metaphor.  RIAA is NOT producing tools that put people in control of their relationships or property – quite the contrary.  And they'll pay for that. 

The brands we buy are “the windows into our souls”

You should read this fascinating piece by Charles Duhigg in last week’s New York Times. A few tidbits to whet the appetite:

‘The exploration into cardholders’ minds hit a breakthrough in 2002, when J. P. Martin, a math-loving executive at Canadian Tire, decided to analyze almost every piece of information his company had collected from credit-card transactions the previous year. Canadian Tire’s stores sold electronics, sporting equipment, kitchen supplies and automotive goods and issued a credit card that could be used almost anywhere. Martin could often see precisely what cardholders were purchasing, and he discovered that the brands we buy are the windows into our souls — or at least into our willingness to make good on our debts…

‘His data indicated, for instance, that people who bought cheap, generic automotive oil were much more likely to miss a credit-card payment than someone who got the expensive, name-brand stuff. People who bought carbon-monoxide monitors for their homes or those little felt pads that stop chair legs from scratching the floor almost never missed payments. Anyone who purchased a chrome-skull car accessory or a “Mega Thruster Exhaust System” was pretty likely to miss paying his bill eventually.

‘Martin’s measurements were so precise that he could tell you the “riskiest” drinking establishment in Canada — Sharx Pool Bar in Montreal, where 47 percent of the patrons who used their Canadian Tire card missed four payments over 12 months. He could also tell you the “safest” products — premium birdseed and a device called a “snow roof rake” that homeowners use to remove high-up snowdrifts so they don’t fall on pedestrians…

‘Why were felt-pad buyers so upstanding? Because they wanted to protect their belongings, be they hardwood floors or credit scores. Why did chrome-skull owners skip out on their debts? “The person who buys a skull for their car, they are like people who go to a bar named Sharx,” Martin told me. “Would you give them a loan?”

So what if there are errors?

Now perhaps I’ve had too much training in science and mathematics, but this type of thinking seems totally neanderthal to me. It belongs in the same category of things we should be protected from as “guilt by association” and “racial profiling”.

For example, the article cites one of Martin’s concrete statistics:

‘A 2002 study of how customers of Canadian Tire were using the company's credit cards found that 2,220 of 100,000 cardholders who used their credit cards in drinking places missed four payments within the next 12 months. By contrast, only 530 of the cardholders who used their credit cards at the dentist missed four payments within the next 12 months.’

We can rephrase the statement to say that 98% of the people who used their credit cards in drinking places did NOT miss the requisite four payments.

Drawing the conclusion that “use of the credit card in a drinking establishment predicts default” is thus an error 98 times out of 100.

Denying people credit on a premise which is wrong 98% of the time seems like one of those things regulators should rush to address, even if the premise reduces risk to the credit card company.
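The arithmetic behind this objection is easy to make explicit.  A quick sketch, using only the numbers quoted from the article above:

```python
# Figures from the 2002 Canadian Tire study quoted above.
bar_missed, bar_total = 2_220, 100_000
dentist_missed, dentist_total = 530, 100_000

bar_rate = bar_missed / bar_total              # 0.0222, about 2.2%
dentist_rate = dentist_missed / dentist_total  # 0.0053

# If "used the card in a drinking place" is treated as a predictor of default,
# the prediction is wrong for everyone in that group who did NOT miss payments:
error_rate = 1 - bar_rate                      # 0.9778, roughly 98%

print(f"bar-goers who defaulted:    {bar_rate:.1%}")
print(f"prediction wrong for:       {error_rate:.1%} of bar-goers")
print(f"relative risk vs. dentists: {bar_rate / dentist_rate:.1f}x")
```

The roughly fourfold relative risk is what "reduces risk to the credit card company"; the 98% error rate is what it costs the individuals swept up in the category.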

But there won’t be enough regulators to go around, since there are thousands of other examples that are similarly idiotic from the point of view of a society fair to its members. For the article continues,

‘Are cardholders suddenly logging in at 1 in the morning? It might signal sleeplessness due to anxiety. Are they using their cards for groceries? It might mean they are trying to conserve their cash. Have they started using their cards for therapy sessions? Do they call the card company in the middle of the day, when they should be at work? What do they say when a customer-service representative asks how they’re feeling? Are their sighs long or short? Do they respond better to a comforting or bullying tone?

Hmmm.

  • Logging in at 1 in the morning. That’s me. I guess I’m one of the 98% for whom this thesis is wrong… I like to stay up late. Do you think staying up late could explain why Mr. Martin’s self-consciously erroneous theses irk me?
  • Using card to buy groceries? True, I don’t like cash. Does this put me on the road to ruin? Another stupid thesis for Mr. Martin.
  • Therapy sessions? If I read enough theses like those proposed by Martin, I may one day need therapy.  But frankly,  I don’t think Mr. Martin should have the slightest visibility into matters like these.  Canadian Tire meets Freud?
  • Calling in the middle of the day when I should be at work? Grow up, Mr. Martin. There is this thing called flex schedules for the 98% or 99% or 99.9% of us for which your theses continually fail.
  • What I would say if a customer-service representative asked how I was feeling? I would point out, with some vigor, that we do not have a personal relationship and that such a question isn't appropriate. And I certainly would not remain on the line.

Apparently Mr. Martin told Charles Duhigg, “If you show us what you buy, we can tell you who you are, maybe even better than you know yourself.” He then lamented that in the past, “everyone was scared that people will resent companies for knowing too much.”

At best, this is no more than a Luciferian version of the Beatles’ “You are what you eat” – but minus the excessive drug use that can explain why everyone thought this was so deep. The truth is, you are not “what you eat”.

Duhigg argues that in the past, companies stuck to “more traditional methods” of managing risk, like raising interest rates when someone was late paying a bill (imagine – a methodology based on actual delinquency rather than hocus pocus), because they worried that customers would revolt if they found out they were being studied so closely. He then says that after “the meltdown”, Mr. Martin’s methods have gained much more currency.

In fact, customers would revolt because the methodology is not reasonable or fair from the point of view of the vast majority of individuals, being wrong tens or hundreds or thousands of times more often than it is right.

If we weren’t working on digital identity, we could just end this discussion by saying Mr. Martin represents one more reason to introduce regulation into the credit card industry. But unfortunately, his thinking is contagious and symptomatic.

Mining of credit card information is just the tip of a vast and dangerous iceberg we are beginning to encounter in cyberspace. The Internet is currently engineered to facilitate the assembly of ever more information of the kind that so thrills Mr. Martin – data accumulated throughout the lives of our young people that will become progressively more important in limiting their opportunities as more “risk reduction” assumptions – of the Martinist kind that apply to almost no one but affect many – take hold.

When we talk about the need to prevent correlation handles and assembly of information across contexts (for example, in the Laws of Identity and our discussions of anonymity and minimal disclosure technology), we are talking about ways to begin to throw a monkey wrench into an emerging Martinist machine.  Mr. Duhigg's story describes early prototypes of the machinations we see as inevitable should we fail in our bid to create a privacy enhancing identity infrastructure for the digital epoch.

[Thanks to JC Cannon for pointing me to this article.]

The economics of vulnerabilities…

Gunnar Peterson of 1 Raindrop has blogged his Keynote at the recent Quality of Protection conference.  It is a great read – and a defense in depth against the binary “secure / not secure” polarity that characterizes the thinking of those new to security matters. 

His argument riffs on Dan Geer's famous Risk Management is Where the Money Is.  He turns to Warren Buffet as someone who knows something about this kind of thing, writing:

“Of course, saying that you are managing risk and actually managing risk are two different things. Warren Buffett started off his 2007 shareholder letter talking about financial institutions’ ability to deal with the subprime mess in the housing market saying, “You don't know who is swimming naked until the tide goes out.” In our world, we don't know whose systems are running naked, with no controls, until they are attacked. Of course, by then it is too late.

“So the security industry understands enough about risk management that the language of risk has permeated almost every product, presentation, and security project for the last ten years. However, a friend of mine who works at a bank recently attended a workshop on security metrics, and came away with the following observation – “All these people are talking about risk, but they don't have any assets.” You can't do risk management if you don't know your assets.

“Risk management requires that you know your assets, that on some level you understand the vulnerabilities surrounding your assets, the threats against those, and efficacy of the countermeasures you would like to use to separate the threat from the asset. But it starts with assets. Unfortunately, in the digital world these turn out to be devilishly hard to identify and value.

“Recent events have taught us again, that in the financial world, Warren Buffett has few peers as a risk manager. I would like to take the first two parts of this talk looking at his career as a way to understand risk management and what we can infer for our digital assets.

Analysing vulnerabilities and the values of assets, he uncovers two pyramids that turn out to be inverted. 

To deliver a real Margin of Safety to the business, I propose the following based on a defense in depth mindset. Break the IT budget into the following categories:

  • Network: all the resources invested in Cisco, network admins, etc.
  • Host: all the resources invested in Unix, Windows, sys admins, etc.
  • Applications: all the resources invested in developers, CRM, ERP, etc.
  • Data: all the resources invested in databases, DBAs, etc.

Tally up each layer. If you are like most businesses you will probably find that you spend most on Applications, then Data, then Host, then Network.

Then do the same exercise for the Information Security budget:

  • Network: all the resources invested in network firewalls, firewall admins, etc.
  • Host: all the resources invested in vulnerability management, patching, etc.
  • Applications: all the resources invested in static analysis, black box scanning, etc.
  • Data: all the resources invested in database encryption, database monitoring, etc.

Again, tally up each layer. If you are like most businesses you will find that you spend most on Network, then Host, then Applications, then Data. Congratulations, Information Security, you are diametrically opposed to the business!
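Gunnar's exercise is easy to automate.  A hypothetical sketch follows; the budget figures are invented placeholders chosen to match the typical pattern he describes, so substitute your own tallies:

```python
# Invented placeholder figures, in arbitrary units -- replace with your own tallies.
it_budget = {"Network": 10, "Host": 20, "Applications": 45, "Data": 25}
security_budget = {"Network": 45, "Host": 30, "Applications": 15, "Data": 10}


def spend_order(budget: dict) -> list:
    """Return the layers ranked from most to least spend."""
    return sorted(budget, key=budget.get, reverse=True)


it_order = spend_order(it_budget)
sec_order = spend_order(security_budget)

print("IT spend, most to least:      ", it_order)
print("Security spend, most to least:", sec_order)

# Gunnar's observation: the pyramids are inverted -- security's biggest
# line item protects the layer where the business invests least.
print("Security's top layer is the business's bottom layer:",
      sec_order[0] == it_order[-1])
```

Running the comparison on your own numbers makes the "two inverted pyramids" visible at a glance, which is the whole point of the exercise.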

He relates his thinking to a fascinating piece by Pat Helland called SOA and Newton's Universe (a must-read to which I will return) and then proposes some elements of a concrete approach to development of meaningful metrics that he argues allow correlation of “value” and “risk” in ways that could sustain meaningful business decisions. 

In an otherwise clear argument, Gunnar itemizes a series of “Apologies”, in the sense of corrections applied post facto due to the uncertainty of decision-making in a distributed environment:

Example Apologies: Identity Management tools (provisioning, deprovisioning); reimbursing customers for fraud losses; compensating transactions – Giant Global Bank is still sorry your account was compromised!

Try as I might, I don't understand the categorization of identity management tools as apology, or their relationship to account compromise – I hope Gunnar will tell us more. 

Protecting the Internet through minimal disclosure

Here's an email I received from John through my I-name account:

I would have left a comment on the appropriate entry in your blog, but you've locked it down and so I can't :(

I have a quick question about InfoCards that I've been unable to find a clear answer to (no doubt due to my own lack of comprehension of the mountains of talk on this topic — although I'm not ignorant, I've been a software engineer for 25+ years, with a heavy focus on networking and cryptography), which is all the more pertinent with EquiFax's recent announcement of their own “card”.

The problem is one of trust. None of the corporations in the ICF are ones that I consider trustworthy — and EquiFax perhaps least of all. So my question is — in a world where it's not possible to trust identity providers, how does the InfoCard scheme mitigate my risk in dealing with them? Specifically, the risk that my data will be misused by the providers?

This is the single, biggest issue I have when it comes to the entire field of identity management, and my fear is that if these technologies actually do become implemented in a widespread way, they will become mandatory — much like they are to be able to comment on your blog — and people like me will end up being excluded from participating in the social cyberspace. I am already excluded from shopping at stores such as Safeway because I do not trust them enough to get an affinity card and am unwilling to pay the outrageous markup they require if you don't.

So, you can see how InfoCard (and similar schemes) terrify me. Even more than phishers. Please explain why I should not fear!

Thank you for your time.

There's a lot compressed into this note, and I'm not sure I can respond to all of it in one go.  Before getting to the substantive points, I want to make it clear that the only reason identityblog.com requires people who leave a comment to use an Information Card is to give them a feeling for one of the technologies I'm writing about.  To quote Don Quixote: “The proof of the pudding is the eating.”  But now on to the main attraction. 

It is obvious, and your reference to the members of the ICF illustrates this, that every individual and organization ultimately decides who or what to trust for any given reason.  Wanting to change this would be a non-starter.

It is also obvious that in our society, if someone offers a service, it is their right to establish the terms under which they do so (even requiring identification of various sorts).

Yet to achieve balance with the rights of others, the legal systems of most countries also recognize the need to limit this right.  One example would be in making it illegal to violate basic human rights (for example, offering a service in a way that is discriminatory with respect to gender, race, etc). 

Information Cards don't change anything in this equation.  They replicate what happens today in the physical world.  The identity selector is no different than a wallet.  The Information Cards are the same as the cards you carry in your wallet.  The act of presenting them is no different than the act of presenting a credit card or photo id.  The decision of a merchant to require some form of identification is unchanged in the proposed model.

But is it necessary to convey identity in the digital world?

Increasing population and density in the digital world has led to the embodiment of greater material value there – a tendency that will only become stronger.  This has attracted more criminal activity and if cyberspace is denied any protective structure, this activity will become disproportionately more pronounced as time goes on.  If everything remains as it is, I don't find it very hard to foresee an Internet vulnerable enough to become almost useless.

Many people have come or are coming to the conclusion that these dynamics make it necessary to be able to determine who we are dealing with in the digital realm.  I'm one of them.

However, many also jump to the conclusion that if reliable identification is necessary for protection in some contexts, it is necessary in all contexts.  I do not follow that reasoning. 

Some != All

If the “some == all” thinking predominates, one is left with a future where people need to identify themselves to log onto the Internet, and their identity is automatically made available everywhere they go:  ubiquitous identity in all contexts.

I think the threats to the Internet and to society are sufficiently strong that in the absence of an alternate vision and understanding of the relevant pitfalls, this notion of a singular “tracking key” is likely to be widely mandated.

This is as dangerous to the fabric and traditions of our society as the threats it attempts to counter.  It is a complete departure from the way things work in the physical world.

For example, we don't need to present identification to walk down the street in the physical world.  We don't walk around with our names or religions stenciled on our backs.  We show ID when we go to a bank or government office and want to get into our resources.  We don't show it when we buy a book.  We show a credit card when we make a purchase.  My goal is to get to the same point in the digital world.

Information Cards were intended to deliver an alternate vision from that of a singular, ubiquitous identity.

New vision

This new vision is of identity scoped to context, in which there is minimal disclosure of the specific attributes necessary to a transaction.  I've discussed all of this here.

In this vision, many contexts require ZERO disclosure.  That means NO release of identity.  In other words, what is released needs to be “proportionate” to specific requirements (I quote the Europeans).  It is worth noting that in many countries these requirements are embodied in law and enforced.
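The principle of proportionate, minimal disclosure can be sketched in a few lines.  Everything here is a hypothetical illustration (the claim names, the policy shape), not a description of the Information Card protocol itself:

```python
# The subject's full set of claims, held client-side by the identity selector.
my_claims = {
    "name": "Alice Example",
    "email": "alice@example.com",
    "age_over_18": True,
    "payment_card": "4111000011110000",  # invented placeholder value
}


def disclose(claims: dict, required: set) -> dict:
    """Release only the claims a context actually requires -- nothing more.

    A real system would go further: 'age_over_18' would be a cryptographic
    proof (a la Chaum/Brands/Camenisch) rather than a stored attribute.
    """
    return {k: v for k, v in claims.items() if k in required}


# A context requiring proof of age gets exactly one claim:
print(disclose(my_claims, {"age_over_18"}))  # {'age_over_18': True}

# Reading a public web page requires nothing: ZERO disclosure.
print(disclose(my_claims, set()))            # {}
```

The second call is the important one: zero-disclosure contexts are first-class, not an afterthought, which is what distinguishes this vision from a ubiquitous tracking key.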

Conclusions

So I encourage my reader to see Information Cards in the context of the possible alternate futures of identity on the Internet.  I urge him to take seriously the probability that deteriorating conditions on the Internet will lead to draconian identity schemes counter to western democratic traditions.

Contrast this dystopia to what is achievable through Information Cards, and the very power of the idea that identity is contextual.  This itself can be the basis of many legal and social protections not otherwise possible. 

It may very well be that legislation will be required to ensure identity providers treat our information with sufficient care, providing individuals with adequate control and respecting the requirements of minimal disclosure.  I hope our blogosphere discussion can advance to the point where we talk more concretely about the kind of policy framework required to accompany the technology we are building. 

But the very basis of all these protections, and of the very possibility of providing protections in the first place, depends on gaining commitment to minimal disclosure and contextual identity as a fundamental alternative to far more nefarious alternatives – be they pirate-dominated chaos or draconian over-identification.  I hope we'll reach a point where no one thinks about these matters absent the specter of such alternatives.

Finally, in terms of the technology itself, we need to move towards the cryptographic systems developed by David Chaum, Stefan Brands and Jan Camenisch (zero-knowledge proofs).  Information Cards are an indispensable component required to make this possible.  I'll also be discussing progress in this area more as we go forward.


The Identity Domino Effect

My friend Jerry Fishenden, Microsoft's National Technology Officer in the United Kingdom, had a piece in The Scotsman recently where he lays out, with great clarity, many of the concerns that “keep me up at night”.  I hope this kind of thinking will one day be second nature to policy makers and politicians world wide. 

Barely a day passes it seems without a new headline appearing about how our personal information has been lost from yet another database. Last week, the Information Commissioner, Richard Thomas, revealed that the number of reported data breaches in the UK has soared to 277 since HMRC lost 25 million child benefit records nearly a year ago. “Information can be a toxic liability,” he commented.

Such data losses are bad news on many fronts. Not just for us, when it's our personal information that is lost or misplaced, but because it also undermines trust in modern technology. Personal information in digital form is the very lifeblood of the internet age and the relentless rise in data breaches is eroding public trust. Such trust, once lost, is very hard to regain.

Earlier this year, Sir James Crosby conducted an independent review of identity-related issues for Gordon Brown. It included an important underlying point: that it's our personal data, nobody else's. Any organisation, private or public sector, needs to remember that. All too often the loss of our personal information is caused not by technical failures, but by lackadaisical processes and people.

These widely-publicised security and data breaches threaten to undermine online services. Any organisations, including governments, which inadequately manage and protect users’ personal information, face considerable risks – among them damage to reputation, penalties and sanctions, lost citizen confidence and needless expense.

Of course, problems with leaks of our personal information from existing public-sector systems are one thing. But significant additional problems could arise if yet more of our personal information is acquired and stored in new central databases. In light of projects such as the proposed identity cards programme, ContactPoint (storing details of all children in the UK), and the Communications Data Bill (storing details of our phone records, e-mails and websites we have visited), some of Richard Thomas's other comments are particularly prescient: “The more databases set up and the more information exchanged from one place to another, the greater the risk of things going wrong. The more you centralise data collection, the greater the risk of multiple records going missing or wrong decisions about real people being made. The more you lose the trust and confidence of customers and the public, the more your prosperity and standing will suffer. Put simply, holding huge collections of personal data brings significant risks.”

The Information Commissioner's comments highlight problems that arise when many different pieces of information are brought together. Aggregating our personal information in this way can indeed prove “toxic”, producing the exact opposite consequences of those originally intended. We know, for example, that most intentional breaches and leaks of information from computer systems are actually a result of insider abuse, where some of those looking after these highly sensitive systems are corrupted in order to persuade them to access or even change records. Any plans to build yet more centralised databases will raise profound questions about how information stored in such systems can be appropriately secured.

The Prime Minister acknowledges these problems: “It is important to recognise that we cannot promise that every single item of information will always be safe, because mistakes are made by human beings. Mistakes are made in the transportation, if you like – the communication of information”.

This is an honest recognition of reality. No system can ever be 100 per cent secure. To help minimise risks, the technology industry has suggested adopting proposals such as “data minimisation” – acquiring as little data as required for the task at hand and holding it in systems no longer than absolutely necessary. And it's essential that only the minimum amount of our personal information needed for the specific purpose at hand is released, and then only to those who really need it.

Unless we want to risk a domino effect that will compromise our personal information in its entirety, it is also critical that it should not be possible automatically to link up everything we do in all aspects of how we use the internet. A single identifying number, for example, that stitches all of our personal information together would have many unintended, deeply negative consequences.
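
One well-understood alternative to a single identifying number is to derive a different, unlinkable identifier for each context from a secret only the user's own system holds. A rough sketch of the idea (the names are illustrative):

```python
import hashlib
import hmac

def context_id(master_secret: bytes, context: str) -> str:
    # Derive a stable identifier unique to one context (relying party).
    # Two parties comparing their identifiers learn nothing, because
    # linking them requires the user-held secret.
    return hmac.new(master_secret, context.encode(), hashlib.sha256).hexdigest()

secret = b"user-held secret, never disclosed"
print(context_id(secret, "tax-office"))
print(context_id(secret, "health-service"))
# Same person, but the two identifiers cannot be correlated without the secret.
```

Each party still gets a stable identifier it can use to recognise a returning user, yet no two parties can stitch their records together by comparing identifiers.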

There is much that governments can do to help protect citizens better. This includes adopting effective standards and policies on data governance, reducing the risk to users’ privacy that comes with unneeded and long-term storage of personal information, and taking appropriate action when breaches do occur. Comprehensive data breach notification legislation is another important step that can help keep citizens informed of serious risks to their online identity and personal information, as well as helping rebuild trust and confidence in online services.

Our politicians are often caught between a rock and a very hard place in these challenging times. But the stream of data breaches and the scope of recent proposals to capture and hold even more of our personal information does suggest that we are failing to ensure an adequate dialogue between policymakers and technologists in the formulation of UK public policy.

This is a major problem that we can, and must, fix. We cannot allow our personal information in digital form – the essential lifeblood of the internet age – to drain away under this withering onslaught of damaging data breaches. It is time for a rethink, and to take advantage of the best lessons that the technology industry has learned over the past 30 or so years. It is, after all, our data, nobody else's.

My identity has already been stolen through the very mechanisms Jerry describes.  I would find this even more depressing if I didn't see more and more IT architects understanding the identity domino problem – and how it could affect their own systems. 

It's our job as architects to do everything we can so the next generation of information systems are as safe from insider attacks as we can make them.  On the one hand this means protecting the organizations we work for from unnecessary liability;  on the other, it means protecting the privacy of our customers and employees, and the overall identity fabric of society.

In particular, we need to insist on:

  • scrupulously partitioning personally identifying information from operational and profile data;
  • eliminating “rainy day” collection of information – the need for data must always be justifiable;
  • preventing personally identifying information from being stored on multiple systems;
  • use of encryption;
  • minimal disclosure of identity information within a “need-to-know” paradigm.

I particularly emphasize partitioning PII from operational data since most of a typical company's operational systems – and employees – need no access to PII.  Those who do need such access rarely need to know anything beyond a name.  Those who need greater access to detailed information rarely need access to information about large numbers of people except in anonymized form.
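
To make the partitioning idea concrete, here is an illustrative two-store sketch (all names hypothetical): operational systems hold only an opaque token, and the identifying details live in a single, tightly guarded vault where access control and auditing can be concentrated:

```python
import secrets

# Hypothetical two-store design: operational records carry only an opaque
# token; the PII vault maps token -> identifying details and is the one
# place where need-to-know access control and auditing must be enforced.
pii_vault = {}          # token -> {"name": ..., "ssn": ...}  (tightly guarded)
operational_store = {}  # token -> loan/account data (widely used, no PII)

def onboard(name: str, ssn: str, loan_data: dict) -> str:
    token = secrets.token_hex(16)   # random, reveals nothing about the person
    pii_vault[token] = {"name": name, "ssn": ssn}
    operational_store[token] = loan_data
    return token

tok = onboard("A. Borrower", "000-00-0000", {"balance": 250_000, "rate": 5.9})
# Most staff and systems work only with operational_store[tok];
# a bulk dump of the operational store exposes no identifying information.
```

Under this kind of design, the wholesale extraction of millions of identities requires breaching the vault specifically – a far smaller, more auditable target than every operational system in the company.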

I would love someone to send me a use case that calls for anyone to have access – at the same time – to the personally identifying information about thousands of individuals  (much less millions, as was the case for some of the incidents Jerry describes).  This kind of wholesale access was clearly afforded the person who stole my identity.  I still don't understand why. 

Are Countrywide's systems designed around need to know?

I'm mad as hell and I'm not taking it any more

It was inevitable, given how sloppy many companies are when handling the identity of their customers, that someone would eventually steal all my personal information.  But no matter how much science you have in your back pocket, it hurts when you get slapped in the face.

The theory is clear:  systems must be built to withstand being breached.  But they often aren't.

One thing for sure: the system used at Countrywide Mortgage was so leaky that when I phoned my bank to ask how I should handle the theft, my advisor said, “I don't know.  I'm trying to figure that out, since my information was stolen too.”  We commiserated.  It's not a good feeling.

And then we talked about the letter.

What a letter.  It is actually demented.  It's as though Countrywide's information systems didn't exist, and weren't a factor in any insider misbehavior. 

I agree there was a bad employee.  But is he the only guilty party?  Was the system set up so employees could only get at my personal information when there was a need to know?

Was the need to know documented?  Was there a separation of duties?  Was there minimization of data?  Can I see the audit trails?  What was going on here?  I want to know.

My checks rolled in to Countrywide with scientific precision.  No one needed to contact me through emergency channels.  Why would anyone get access to my personal information?  Just on a whim?  

How many of us were affected?  We haven't been told.  I want to know.  It bears on need to know and storage technologies.

But I'm ahead of myself.  I'll share the letter, sent by Sheila Zuckerman on behalf of “the President” (“President” who??).

We are writing to inform you that we recently became aware that a Countrywide employee (now former) may have sold unauthorized personal information about you to a third party. Based on a joint investigation conducted by Countrywide and law enforcement authorities, it was determined that the customer information involved in this incident included your name, address, Social Security number, mortgage loan number, and various other loan and application information.

We deeply regret this incident and apologize for any inconvenience or concern it may cause you. We take our responsibility to safeguard your information very seriously and will not tolerate any actions that compromise the privacy or security of our customers’ information. We have terminated the individual's access to customer information and he is no longer employed by Countrywide. Countrywide will continue to work with law enforcement authorities to pursue further actions as appropriate.

I don't want to hear this kind of pap.  I want an audit of your systems and how they protected or did not protect me from insider attack.

If you are a current Countrywide mortgage holder, we will take necessary precautions to monitor your mortgage account and will notify you if we detect any suspicious or unauthorized activity related to this incident. We will also work with you to resolve unauthorized transactions on your Countrywide mortgage account related to this incident if reported to us in a timely manner.

I find this paragraph especially arrogant.  I'm the one who needs to do things in a timely manner, even though they didn't take the precautions necessary to protect me. 

As an additional measure of protection, Countrywide has arranged for complimentary credit monitoring services provided by a Countrywide vendor at no cost to you over the next two years. We have engaged ConsumerInfo.com, Inc., an Experian® Company, to provide to you at your option, a two-year membership in Triple Advantage Credit Monitoring.  You will not be billed for this service. Triple Advantage includes daily monitoring of your credit reports from the three national credit reporting companies (Experian, Equifax and TransUnion®) and email monitoring alerts of key changes to your credit reports.

Why are they doing this?  Out of the goodness of their hearts?  Or because they've allowed my information to be spewed all over the world through incompetent systems?

To learn more about and enroll in Triple Advantage, log on to www.consumerinfo.com/countrywide and complete the secure online form. You will need to enter the activation code provided below on page two of the online form to complete enrollment. If you do not have Internet access, please call the number below for assistance with enrollment.  You will have 90 days from the date of this letter to use the code to activate the credit monitoring product.

Borrower Activation Code: XXXXXXXXX

And now the best part.  I'm going to need to hire a personal assistant to do everything required by Countrywide and still remain employed:

In light of the sensitive nature of the information, we urge you to read the enclosed brochure outlining precautionary measures you may want to take. The brochure will guide you through steps to:

  • Contact the major credit bureaus and place a fraud alert on your credit reports;
  • Review your recent account activity for unauthorized charges or accounts;
  • Be vigilant and carefully review your monthly credit card and other account statements over the next twelve to twenty-four months for any unauthorized charges; and
  • Take action should any unauthorized activity appear on your credit report.

I need more information on why I only need to be vigilant for twelve to twenty-four months, when, thanks to Countrywide, my personal information has spilled out and I have no way to get it back!

We apologize again that this incident has occurred and for any inconvenience or worry it may have caused.  If you have questions, please call our special services hotline at 1-866-451-5895, and a specially trained representative will be ready to assist you.

Sincerely,

Sheila Zuckerman
Countrywide Office of the President
Enclosure

O.K.  This is going to be a long process.  It drives home the need for data minimization.  It underlines the need for stronger authentication.  But EVERY case like this should make us deeply question the way our systems are structured, and ask why there are no professional standards that must be met in protecting private information.

When a bridge collapses, people look into the why of it all.

We need to do that with these identity breaches too.   As far as I'm concerned, Countrywide has a lot of explaining to do.  And as a profession we need clear engineering standards and ways of documenting how our systems are protected through “need to know” and the other relevant technologies.

Finally, we all need to start making insider attacks a top priority, since all research points to insiders as our number one threat.

 

Jerry Fishenden in the Financial Times

It's encouraging to see people like Jerry Fishenden figuring out how to take the discussion about identity issues to a mainstream audience.  Here's a piece he wrote for the Financial Times:

If you think the current problems of computer security appear daunting, what is going to happen as the internet grows beyond web browsing and e-mail to pervade every aspect of our daily lives? As the internet powers the monitoring of our health and the checking of our energy saving devices in our homes for example, will problems of cybercrime and threats to our identity, security and privacy grow at the same rate?

One of the most significant contributory causes of existing internet security problems is the lack of a trustworthy identity layer. I can’t prove it’s me when I’m online and I can’t prove to a reasonable level of satisfaction whether the person (or thing) I’m communicating or transacting with online is who or what they claim to be. If you’re a cybercrook, this is great news and highly lucrative since it makes online attacks such as phishing and spam e-mail possible. And cybercrooks are always among the smartest to exploit such flaws.

If we’re serious about realising technology’s potential, security and privacy issues need to be dealt with – certainly before we can seriously contemplate letting the internet move into far more important areas, such as assisting our healthcare at home. How are we to develop such services if none of the devices can be certain who or what they are communicating with?

In front of us lies an age in which everything and everyone is linked and joined through an all pervading system – a worldwide digital mesh of billions of devices and communications happening every second, a complex grid of systems communicating within and between each other in real time.

But how can it be built – and trusted – without the problem of identity being fixed?

So what is identity anyway? For our purposes, identity is about people – and “things”: the physical fabric of the internet and everything in (or on) it. And ultimately it’s about safeguarding our security and privacy.

If we’re to avoid exponential growth of the security and privacy issues that plague the current relatively simple internet as we enter the pervasive, complex grid age, what principles do we adhere to? How can we have a secure, trusted, privacy-aware internet that will be able to fulfil its potential – and deserve our trust too?

The good news is that these problems are already being addressed. Technology now makes possible an identity infrastructure that simultaneously addresses the security and public service needs of government as well as those of private sector organisations and the privacy needs of individuals.

Privacy-enhancing security technologies now exist that enable the secure sharing of identity-related information while ensuring privacy for all parties involved in the data flow. This technology includes “minimal disclosure tokens”, which enable organisations to share identity-related information securely in digital form via the individuals to whom it pertains.

These minimal disclosure tokens also guard against the unauthorised manipulation of our personal identity information, not only by outsiders such as professional cybercrooks but also by the individuals themselves. The tokens enable us to see what personal information we are about to share, which ensures full transparency about what aspects of our personal information we divulge to people and things on the internet. This approach lets individuals selectively disclose only those aspects of their personal information relevant for them to gain access to a particular service.

Equally important, we can also choose to disclose such selective identity information without leaving behind data trails that enable third parties to link, trace and aggregate all of our actions. This prevents one of the current ways that third parties use to collate our personal information without our knowledge or consent. For example, a minimal disclosure token would allow a citizen to prove to a pub landlord they are over 18 but without revealing anything else, not even their date of birth or specific age.
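
As a rough illustration of the data flow Jerry describes – not the actual cryptography, since real minimal disclosure tokens such as U-Prove rely on zero-knowledge proofs, whereas this toy uses a shared key the verifier would never hold in practice – the over-18 example might look like this:

```python
import hashlib
import hmac
import json
from datetime import date

# Toy sketch of the *interface* of a minimal disclosure token: the issuer
# attests only the predicate "over_18", never the date of birth itself.
# The HMAC stands in for a real signature purely to show the data flow.
ISSUER_KEY = b"issuer signing key (illustrative only)"

def issue_over18_token(date_of_birth: str) -> dict:
    born = date.fromisoformat(date_of_birth)
    today = date.today()
    age = today.year - born.year - ((today.month, today.day) < (born.month, born.day))
    claim = {"over_18": age >= 18}          # the date of birth is discarded
    sig = hmac.new(ISSUER_KEY, json.dumps(claim).encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def landlord_verifies(token: dict) -> bool:
    expected = hmac.new(ISSUER_KEY, json.dumps(token["claim"]).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["claim"]["over_18"]

print(landlord_verifies(issue_over18_token("1970-01-01")))
```

The landlord learns exactly one bit – old enough or not – and because the token carries no stable identifier, two landlords comparing notes cannot link their records to the same customer.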

These new technologies help to avoid the problem of centralised systems that can electronically monitor in real time all activities of an individual (and hence enable those central systems surreptitiously to access the accounts of any individual). Such models are bad practice in any case since such central parties themselves become attractive targets for security breaches and insider misuse. Centralised identity models have been shown to be a major source of identity fraud and theft, and to undermine the trust of those whose identity it is meant to safeguard.

It is of course important to achieve the right balance between the security needs of organisations, both in private and public sectors, and the public’s right to be left alone. Achieving such a balance will help restore citizens’ trust in the internet and broader identity initiatives, while also reducing the data losses and identity thefts that arise from current practices.

Now that the technology industry is implementing all of the components necessary to establish such secure, privacy-aware infrastructures, all it takes is the will to embrace and adopt them. Only by doing so will we all be able to enjoy the true potential of the digital age – in a secure and privacy-aware fashion.

The Laws of Identity

Thanks to Eric Norman, Craig Burton and others for helping work towards a “short version” of the Laws of Identity. So here is a refinement:

People using computers should be in control of giving out information about themselves, just as they are in the physical world.

The minimum information needed for the purpose at hand should be released, and only to those who need it. Details should be retained no longer than necessary.

It should NOT be possible to automatically link up everything we do in all aspects of how we use the Internet. A single identifier that stitches everything together would have many unintended consequences.

We need choice in terms of who provides our identity information in different contexts.

The system must be built so we can understand how it works, make rational decisions and protect ourselves.

Devices through which we employ identity should offer people the same kinds of identity controls – just as car makers offer similar controls so we can all drive safely.

Satisfaction Guaranteed?

Francois Paget, an investigator at McAfee Avert Labs, has posted a detailed report on a site that gives us insight into the emerging international market for identity information.   He writes:

Last Friday morning in France, my investigations led me to visit a site proposing top-quality data for a higher price than usual. But when we look at this data we understand that, as everywhere, you have to pay for quality. The first offer concerned bank logons. As you can see in the following screenshot, pricing depends on available balance, bank organization and country. Additional information such as PIN and Transfer Passphrase is also given when necessary:

[screenshot]

For such prices, the seller offers some guarantees. For example, the purchase is covered by replacement if you are unable – within 24 hours – to log into the account using the provided details.

The selling site also offers US, Austrian and Spanish credit cards with full information…

It is also possible to purchase skimmers (for ATM machines) and “dump tracks” to create fake credit cards. Here too, cost is in line with quality:

[screenshot]

Many other offers are available like shop administrative area accesses (back end of an online store where all the customer details are stored – from Name, SSN, DOB, Address, Phone number to CC) or UK or Swiss Passport information:

[screenshot]

Read the rest of Francois’ story here.  Beyond that, it's well worth keeping up with the Avert Labs blog, where every post reminds us that the future of the Internet depends on fundamentally increasing its security and privacy.   [Note:  I slightly condensed Francois’ graphics…]