If you try sometimes – you can get what you need

I'll lose a few minutes less sleep each night worrying about Electronic Eternity – thanks to the serendipitous appearance of John Markoff's recent piece on Vanish in the New York Times Science section:

A group of computer scientists at the University of Washington has developed a way to make electronic messages “self destruct” after a certain period of time, like messages in sand lost to the surf. The researchers said they think the new software, called Vanish, which requires encrypting messages, will be needed more and more as personal and business information is stored not on personal computers, but on centralized machines, or servers. In the term of the moment this is called cloud computing, and the cloud consists of the data — including e-mail and Web-based documents and calendars — stored on numerous servers.

The idea of developing technology to make digital data disappear after a specified period of time is not new. A number of services that perform this function exist on the World Wide Web, and some electronic devices like FLASH memory chips have added this capability for protecting stored data by automatically erasing it after a specified period of time.

But the researchers said they had struck upon a unique approach that relies on “shattering” an encryption key that is held by neither party in an e-mail exchange but is widely scattered across a peer-to-peer file sharing system…

The pieces of the key, small numbers, tend to “erode” over time as they gradually fall out of use. To make keys erode, or timeout, Vanish takes advantage of the structure of a peer-to-peer file system. Such networks are based on millions of personal computers whose Internet addresses change as they come and go from the network. This would make it exceedingly difficult for an eavesdropper or spy to reassemble the pieces of the key because the key is never held in a single location. The Vanish technology is applicable to more than just e-mail or other electronic messages. Tadayoshi Kohno, a University of Washington assistant professor who is one of Vanish’s designers, said Vanish makes it possible to control the “lifetime” of any type of data stored in the cloud, including information on Facebook, Google documents or blogs. In addition to Mr. Kohno, the authors of the paper, “Vanish: Increasing Data Privacy with Self-Destructing Data,” include Roxana Geambasu, Amit A. Levy and Henry M. Levy.

[More here]
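The “shattering” the article describes is, as I understand it, a form of threshold secret sharing: the encryption key is split into shares scattered across a peer-to-peer index, and once enough shares churn away, the key can never be reassembled. Here is a minimal sketch of the underlying idea – my own illustration in Python, not the Vanish code itself:

```python
import random

# Illustrative (K, N) threshold sharing à la Shamir: any K of the N shares
# rebuild the secret; fewer than K reveal nothing about it.
PRIME = 2**127 - 1  # a Mersenne prime comfortably larger than the secret

def split_secret(secret, n_shares, threshold):
    """Return n_shares points on a random degree-(threshold-1) polynomial."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    shares = []
    for x in range(1, n_shares + 1):
        y = 0
        for c in reversed(coeffs):          # Horner evaluation mod PRIME
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def recover_secret(shares):
    """Lagrange-interpolate the polynomial at x = 0 to get the secret back."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = random.randrange(PRIME)
shares = split_secret(key, n_shares=10, threshold=7)
assert recover_secret(shares[:7]) == key    # any 7 shares suffice
assert recover_secret(shares[3:]) == key
```

In Vanish's design, the shares live in a peer-to-peer network whose nodes constantly join and leave, so after enough churn the threshold can no longer be met – and the key, along with the message it protects, is gone.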

Green Dam goes in all the wrong directions

The Chinese Government's Green Dam sets an important precedent:  government trying to achieve its purposes by taking control over the technology installed on people's personal computers.  Here's how the Chinese Government explained its initiative:

‘In order to create a green, healthy, and harmonious internet environment, to avoid exposing youth to the harmful effects of bad information, The Ministry of Information Industry, The Central Spiritual Civilization Office, and The Commerce Ministry, in accordance with the requirements of “The Government Purchasing Law,” are using central funds to purchase rights to “Green Dam Flower Season Escort”(Henceforth “Green Dam”) … for one year along with associated services, which will be freely provided to the public.

‘The software is for general use and testing. The software can effectively filter improper language and images and is prepared for use by computer factories.

‘In order to improve the government’s ability to deal with Web content of low moral character, and preserve the healthy development of children, the regulation and demands pertaining to the software are as follows: 

  1. Computers produced and sold in China must have the latest version of “Green Dam” pre-installed; imported computers should have the latest version of the software installed prior to sale.
  2. The software should be installed on computer hard drives and available discs for subsequent restoration.
  3. The providers of “Green Dam” have to provide support to computer manufacturers to facilitate installation
  4. Computer manufacturers must complete installation and testing prior to the end of June. As of July 1, all computers should have “Green Dam” pre-installed.
  5. Every month computer manufacturers and the provider of Green Dam should give MII data on monthly sales and the pre-installation of the software. By February 2010, an annual report should be submitted.’

What does the software do?  According to OpenNet Initiative:

Green Dam exerts unprecedented control over users’ computing experience:  The version of the Green Dam software that we tested, when operating under its default settings, is far more intrusive than any other content control software we have reviewed. Not only does it block access to a wide range of web sites based on keywords and image processing, including porn, gaming, gay content, religious sites and political themes, it actively monitors individual computer behavior, such that a wide range of programs including word processing and email can be suddenly terminated if the content algorithm detects inappropriate speech [my emphasis – Kim]. The program installs components deep into the kernel of the computer operating system in order to enable this application layer monitoring. The operation of the software is highly unpredictable and disrupts computer activity far beyond the blocking of websites.

The functionality of Green Dam goes far beyond that which is needed to protect children online and subjects users to security risks:  The deeply intrusive nature of the software opens up several possibilities for use other than filtering material harmful to minors. With minor changes introduced through the auto-update feature, the architecture could be used for monitoring personal communications and Internet browsing behavior. Log files are currently recorded locally on the machine, including events and keywords that trigger filtering. The auto-update feature can be used to change the scope and targeting of filtering without any notification to users.

How is it being received?  Wikipedia says:

Online polls conducted by leading Chinese web portals revealed poor acceptance of the software by netizens. On Sina and Netease, over 80% of poll participants said they would not consider or were not interested in using the software; on Tencent, over 70% of poll participants said it was unnecessary for new computers to be preloaded with filtering software; on Sohu, over 70% of poll participants said filtering software would not effectively prevent minors from browsing inappropriate websites.  A poll conducted by the Southern Metropolis Daily showed similar results.

In addition, the software is a virus transmission system.   Researchers from the University of Michigan concluded:

We have discovered remotely-exploitable vulnerabilities in Green Dam, the censorship software reportedly mandated by the Chinese government. Any web site a Green Dam user visits can take control of the PC [my emphasis – Kim].

We examined the Green Dam software and found that it contains serious security vulnerabilities due to programming errors. Once Green Dam is installed, any web site the user visits can exploit these problems to take control of the computer. This could allow malicious sites to steal private data, send spam, or enlist the computer in a botnet. In addition, we found vulnerabilities in the way Green Dam processes blacklist updates that could allow the software makers or others to install malicious code during the update process.

We found these problems with less than 12 hours of testing, and we believe they may be only the tip of the iceberg. Green Dam makes frequent use of unsafe and outdated programming practices that likely introduce numerous other vulnerabilities. Correcting these problems will require extensive changes to the software and careful retesting. In the meantime, we recommend that users protect themselves by uninstalling Green Dam immediately.

There is no doubt that government has a legitimate interest in the safety of the Internet, and in the safety of our children.  But neither goal can be achieved with any of the unfortunate methods being used here. 

Rather than so-called “blacklisting”, the alternative is to construct virtual networks that are dramatically safer for children than the Internet as a whole.  As such virtual networks emerge, technology can be created allowing parents to limit the access of their young children to those networks.

It's a big job to build such “green zones”.  But government is the strong force that could serve as a catalyst in bringing this about.   The key would be to organize virtual districts and environments that would be fun and safe for children, so children want to play in them.

This kind of virtual world doesn't require the generalized banning of sites or ideas or prurient thoughts – or require government to “improve” the nature of human beings.

Enhanced driver's licences too stupid for their own good

Enhanced driver's licences too smart for their own good appeared in the Toronto Star recently.  It was written by Roch Tassé (coordinator of the International Civil Liberties Monitoring Group) and Stuart Trew (The Council of Canadians’ trade campaigner). 

A common refrain coming out of Homeland Security chief Janet Napolitano's visit to Ottawa and Detroit last week was that the Canada-U.S. border is getting thicker and stickier even as Canadian officials work overtime to implement measures that are meant to get us across that border more efficiently and securely.

One of those measures – “enhanced” driver's licences (EDLs) now available in Ontario, Quebec, B.C. and Manitoba – has been rushed into production to meet today's implementation date of the Western Hemisphere Travel Initiative. This unilateral U.S. law requires all travellers entering the United States to show a valid passport or other form of secure identification when crossing the border.

But as privacy and civil liberties groups have been saying for a while, the EDL card poses its own thick and sticky questions that have not been satisfactorily answered by either the federal government, which has jurisdiction over privacy and citizenship matters, or the provincial ministries issuing the new “enhanced” licences.

For example, why introduce a new citizenship document specific to the Canada-U.S. border when the internationally recognized passport will do the trick?

Or, as even the smart-card industry wonders, why include technology used for monitoring the movement of livestock and other commodities in a citizenship document?

More crucially, why ignore calls from Canada's federal and provincial privacy commissioners, as well as civil liberties groups, to put a freeze on “enhanced” licences until they can be adequately debated and assessed by Parliament? It's not as if there's nothing to talk about.

First, the radio frequency identification devices (RFID) that will be used to transmit the personal ID number in your EDL to border officials contain no security or authentication features, cannot be turned off, and are designed to be read at distances of more than 10 metres using inexpensive and commercially available technology.

This creates a significant threat of “surreptitious location tracking,” according to Canada's privacy commissioners. The protective sleeve proposed by several provincial governments is demonstrably unreliable at blocking the RFID signal and constitutes an unacceptable privacy risk.

Facial recognition screening of all card applicants, as proposed in Ontario and B.C. to reduce fraud, has a shaky success rate at best, creating a significant and unacceptable risk of false positive matches, which could increase wait times as even more people are pulled aside for questioning.

Recently, a journalist for La Presse demonstrated just how insecure Quebec's EDLs are by successfully reading the number of a colleague's card and cloning that card with a different but similar photograph. It might explain why, when announcing Quebec's EDL card this year, Premier Jean Charest could point only to hypothetical benefits.

Furthermore, the range of personal information collected through EDL programs, once shared with U.S. authorities, can be circulated excessively among a whole range of agencies under the authority of the Department of Homeland Security. It's important to note that Canada's privacy laws do not hold once that information crosses the border.

So while the border may appear to be getting thicker for some, it is becoming increasingly permeable to flows of personal information on Canadian citizens to U.S. security and immigration databases, where it can be used to mine for what the DHS considers risky behaviour.

Some provincial governments have taken these concerns seriously. Based on the high costs involved with a new identity document, the lack of clear benefits to travellers, the significant privacy risks, and the lack of prior public consultation, the Saskatchewan government suspended its own proposed EDL project this year. The New Brunswick and Prince Edward Island governments, citing excessive costs, have also abandoned theirs.

The Harper government owes it to Canadians to freeze the EDL program now and hold a parliamentary hearing into the new technology, its alleged benefits and the stated privacy risks.

Napolitano has repeatedly said that from now on Canadians must treat the U.S. border as any other international checkpoint. It might feel like an inconvenience for some who are used to crossing into the U.S. without a passport, but the costs – real and in terms of privacy – of these provincial EDL projects will be much higher.

My main problem with this article is the title, which should have been, “Enhanced driver's licenses too stupid for their own good”. 

That's because we have the technology to design smart driver's licenses and passports so they have NONE of the problems described – but so far, our governments don't do it. 
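To make that concrete: the core defect of the EDL is that it broadcasts a single static number to any reader in range, which makes it both trackable and cloneable. A minimally “smart” card instead proves possession of a key without ever transmitting anything reusable. A hypothetical sketch of the difference – illustrative Python, not any real card's protocol:

```python
import hmac, hashlib, os

class StaticCard:
    """The EDL model: the same identifier is emitted to every reader."""
    def __init__(self, card_id):
        self.card_id = card_id
    def read(self):
        return self.card_id       # identical bytes every time: trackable, cloneable

class ChallengeResponseCard:
    """A card that answers a fresh reader challenge; the key never leaves it."""
    def __init__(self, secret_key):
        self._key = secret_key
    def respond(self, challenge):
        # HMAC over a reader-supplied nonce: eavesdropped traffic can't be
        # replayed, and two responses share no stable correlation handle.
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

def reader_verifies(card, expected_key):
    nonce = os.urandom(16)
    return hmac.compare_digest(
        card.respond(nonce),
        hmac.new(expected_key, nonce, hashlib.sha256).digest())

key = os.urandom(32)
smart = ChallengeResponseCard(key)
assert reader_verifies(smart, key)          # legitimate reader succeeds
dumb = StaticCard(b"EDL-12345")
assert dumb.read() == dumb.read()           # anyone can record and replay this
```

Real smart-card protocols are considerably more involved (and can add minimal disclosure on top), but the contrast above is the essential one: a static identifier is a gift to the surreptitious tracker; a challenge-response exchange is not.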

I expect it is we as technologists who are largely responsible for this.  We haven't found the ways of communicating with governments, and more to the point, with the public and its advocates, about the fact that these problems can be eliminated. 

From what I have been told, the new German identity card represents a real step forward in this regard.  I promise to look into the details and write about them.

Ethical Foundations of Cybersecurity

Britain's Enterprise Privacy Group is starting a new series of workshops that deal squarely with ethics.  While specialists in ethics have achieved a significant role in professions like medicine, this is one of the first workshops I've seen that takes on equivalent issues in our field of work.  Perhaps that's why it is already oversubscribed… 

‘The continuing openness of the Internet is fundamental to our way of life, promoting the free flow of ideas to strengthen democratic ideals and deliver the economic benefits of globalisation.  But a fundamental challenge for any government is to balance measures intended to protect security and the right to life with the impact these may have on the other rights that we cherish and which form the basis of our society.
 
'The security of cyber space poses particular challenges in meeting tests of necessity and proportionality as its distributed, de-centralised form means that powerful tools may need to be deployed to tackle those who wish to do harm.  A clear ethical foundation is essential to ensure that the power of these tools is not abused.
 
'The first workshop in this series will be hosted at the Cabinet Office on 17 June, and will explore the questions that need to be asked and answered to develop this foundation.

‘The event is already fully subscribed, but we hope to host further events in the near future with greater opportunities for all EPG Members to participate.’

Let's hope EPG eventually turns these deliberations into a document they can share more widely.  Meanwhile, this article seems to offer an introduction to the literature.

Proposal for a Common Identity Framework

Today I am posting a new paper called, Proposal for a Common Identity Framework: A User-Centric Identity Metasystem.

Good news: it doesn’t propose a new protocol!

Instead, it attempts to crisply articulate the requirements in creating a privacy-protecting identity layer for the Internet, and sets out a formal model for such a layer, defined through the set of services the layer must provide.

The paper is the outcome of a year-long collaboration between Dr. Kai Rannenberg, Dr. Reinhard Posch and myself. We were introduced by Dr. Jacques Bus, Head of Unit Trust and Security in ICT Research at the European Commission.

Each of us brought our different cultures, concerns, backgrounds and experiences to the project and we occasionally struggled to understand how our different slices of reality fit together. But it was in those very areas that we ended up with some of the most interesting results.

Kai holds the T-Mobile Chair for Mobile Business and Multilateral Security at Goethe University Frankfurt. He coordinates the EU research projects FIDIS  (Future of Identity in the Information Society), a multidisciplinary endeavor of 24 leading institutions from research, government, and industry, and PICOS (Privacy and Identity Management for Community Services).  He also is Convener of the ISO/IEC Identity Management and Privacy Technology working group (JTC 1/SC 27/WG 5)  and Chair of the IFIP Technical Committee 11 “Security and Privacy Protection in Information Processing Systems”.

Reinhard taught Information Technology at Graz University beginning in the mid 1970’s, and was Scientific Director of the Austrian Secure Information Technology Center starting in 1999. He has been federal CIO for the Austrian government since 2001, and was elected chair of the management board of ENISA (The European Network and Information Security Agency) in 2007. 

I invite you to look at our paper.  It aims at combining the ideas set out in the Laws of Identity and related papers, extended discussions and blog posts from the open identity community, the formal principles of Information Protection that have evolved in Europe, research on Privacy Enhancing Technologies (PETs), outputs from key working groups and academic conferences, and deep experience with EU government digital identity initiatives.

Our work is included in The Future of Identity in the Information Society – a report on research carried out in a number of different EU states on topics like the identification of citizens, ID cards, and Virtual Identities, with an accent on privacy, mobility, interoperability, profiling, forensics, and identity related crime.

I’ll be taking up the ideas in our paper in a number of blog posts going forward. My hope is that readers will find the model useful in advancing the way they think about the architecture of their identity systems.  I’ll be extremely interested in feedback, as will Reinhard and Kai, who I hope will feel free to join into the conversation as voices independent from my own.

More precision on the Right to Correlate

Dave Kearns continues to whack me for some of my terminology in discussing data correlation.  He says: 

‘In responding to my “violent agreement” post, Kim Cameron goes a long way towards beginning to define the parameters for correlating data and transactions. I'd urge all of you to jump into the discussion.

‘But – and it's a huge but – we need to be very careful of the terminology we use.

‘Kim starts: “Let’s postulate that only the parties to a transaction have the right to correlate the data in the transaction, and further, that they only have the right to correlate it with other transactions involving the same parties.” ‘

Dave's right that this was overly restrictive.  In fact I changed it within a few minutes of the initial post – but apparently not fast enough to prevent confusion.  My edited version stated:

‘Let’s postulate that only the parties to a transaction have the right to correlate the data in the transaction (unless it is fully anonymized).’

This way of putting things eliminates Dave's concern:

‘Which would mean, as I read it, that I couldn't correlate my transactions booking a plane trip, hotel and rental car since different parties were involved in all three transactions!’

That said, I want to be clear that “parties to a transaction” does NOT include what Dave calls “all corporate partners” (aka a corporate information free-for-all!).  It just means that parties (for example corporations) participating directly in some transaction can correlate it with the other transactions in which they directly participate (but not with the transactions of some other corporation unless they get approval from the transaction participants to do so). 

Dave argues:

‘In the end, it isn't the correlation that's problematic, but the use to which it's put. So let's tie up the usage in a legally binding way, and not worry so much about the tools and technology.

‘In many ways the internet makes anti-social and unethical behavior easier. That doesn't mean (as some would have it) that we need to ban internet access or technological tools. It does mean we need to better educate people about acceptable behavior and step up our policing tools to better enable us to nab the bad guys (while not inconveniencing the good guys).’

To be perfectly clear, I'm not proposing a ban on technology!  I don't do banning!  I do creation. 

So instead, I'm arguing that as we develop our new technologies we should make sure they support the “right to correlation” – and the delegation of that right – in ways that restore balance and give people a fighting chance to prevent unseen software robots from limiting their destinies.

 

The Right To Correlate

Dave Kearns’ comment in Another Violent Agreement convinces me I've got to apply the scalpel to the way I talk about correlation handles.  Dave writes:

‘I took Kim at his word when he talked “about the need to prevent correlation handles and assembly of information across contexts…” That does sound like “banning the tools.”

‘So I'm pleased to say I agree with his clarification of today:

‘”I agree that we must influence behaviors as well as develop tools… [but] there’s a huge gap between the kind of data correlation done at a person’s request as part of a relationship (VRM), and the data correlation I described in my post that is done without a person’s consent or knowledge.” (Emphasis added by Dave)’

Thinking about this some more, it seems we might be able to use a delegation paradigm.

The “right to correlate”

Let's postulate that only the parties to a transaction have the right to correlate the data in the transaction (unless it is fully anonymized).

Then it would follow that any two parties with whom an individual interacts would not by default have the right to correlate data they had each collected in their separate transactions.

On the other hand, the individual would have the right to organize and correlate her own data across all the parties with whom she interacts since she was party to all the transactions.

Delegating the Right to Correlate

If we introduce the ability to delegate, then an individual could delegate her right for two parties to correlate relevant data about her.  For example, I could delegate to Alaska Airlines and British Airways the right to share information about me.

Similarly, if I were an optimistic person, I could opt to use a service like that envisaged by Dave Kearns, which “can discern our real desires from our passing whims and organize our quest for knowledge, experience and – yes – material things in ways which we can only dream about now.”  The point here is that we would delegate the right to correlate to this service operating on our behalf.

Revoking the Right to Correlate

A key aspect of delegating a right is the ability to revoke that delegation.  In other words, if the service to which I had given some set of rights became annoying or odious, I would need to be able to terminate its right to correlate.  Importantly, the right applies to correlation itself.  Thus when the right is revoked, the data must no longer be linkable in any way.
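One way to picture the whole scheme – correlation prohibited by default, enabled by an explicit grant from the individual, and revocable at any time – is as a simple grant registry. This is purely my own toy illustration, not a proposal for an implementation:

```python
# Toy model of the "right to correlate": two parties may correlate a
# subject's data only while the subject's explicit grant is on record.

class CorrelationRegistry:
    def __init__(self):
        self._grants = set()

    def delegate(self, subject, party_a, party_b):
        """Subject grants party_a and party_b the right to correlate her data."""
        self._grants.add((subject, frozenset({party_a, party_b})))

    def revoke(self, subject, party_a, party_b):
        """Revocation removes the grant; the data must stop being linkable."""
        self._grants.discard((subject, frozenset({party_a, party_b})))

    def may_correlate(self, subject, party_a, party_b):
        if party_a == party_b:
            # A party may always correlate the transactions it took part in.
            return True
        return (subject, frozenset({party_a, party_b})) in self._grants

reg = CorrelationRegistry()
assert not reg.may_correlate("kim", "Alaska Airlines", "British Airways")
reg.delegate("kim", "Alaska Airlines", "British Airways")
assert reg.may_correlate("kim", "Alaska Airlines", "British Airways")
reg.revoke("kim", "Alaska Airlines", "British Airways")
assert not reg.may_correlate("kim", "Alaska Airlines", "British Airways")
```

The hard part, of course, is not the bookkeeping but making revocation mean something technically – which is why the requirement that revoked data “no longer be linkable in any way” matters.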

Forensics

There are cases, when criminal activity is being investigated or prosecuted, where it is necessary for law enforcement to be able to correlate without the consent of the individual.  This is already the case in western society, and it seems likely that new mechanisms would not be required in a world respecting the Right to Correlate.

Defining contexts

Respecting the Right to Correlate would not by itself solve the Canadian Tire Problem that started this thread.  What made the Canadian Tire human experiments most odious is that they correlated buying habits at the level of individual purchases (our relationship with Canadian Tire as a store) with probable behavior in paying off credit cards (Canadian Tire as a credit card issuer).  Paradoxically, someone's loyalty to the store could actually be used to deny her credit.  People who get Canadian Tire credit cards do know that the company is in a position to correlate all this information, but are unlikely to predict this counter-intuitive outcome.

Those of us preferring mainstream credit card companies presumably don't have the same issues at this point in time.  They know where we buy but not what we buy (although there may be data sharing relationships with merchants that I am not aware of… Let me know…).

So we have come to the most important long-term problem:  The Internet changes the rules of the game by making data correlation so very easy.

It potentially turns every credit card company into a data-correlating Canadian Tire.  Are we looking at the Canadian Tirization of the Internet?

But do people care?

Some will say that none of this matters because people just don't care about what is correlated.  I'll discuss that briefly in my next post.

The brands we buy are “the windows into our souls”

You should read this fascinating piece by Charles Duhigg in last week’s New York Times. A few tidbits to whet the appetite:

‘The exploration into cardholders’ minds hit a breakthrough in 2002, when J. P. Martin, a math-loving executive at Canadian Tire, decided to analyze almost every piece of information his company had collected from credit-card transactions the previous year. Canadian Tire’s stores sold electronics, sporting equipment, kitchen supplies and automotive goods and issued a credit card that could be used almost anywhere. Martin could often see precisely what cardholders were purchasing, and he discovered that the brands we buy are the windows into our souls — or at least into our willingness to make good on our debts…

‘His data indicated, for instance, that people who bought cheap, generic automotive oil were much more likely to miss a credit-card payment than someone who got the expensive, name-brand stuff. People who bought carbon-monoxide monitors for their homes or those little felt pads that stop chair legs from scratching the floor almost never missed payments. Anyone who purchased a chrome-skull car accessory or a “Mega Thruster Exhaust System” was pretty likely to miss paying his bill eventually.

‘Martin’s measurements were so precise that he could tell you the “riskiest” drinking establishment in Canada — Sharx Pool Bar in Montreal, where 47 percent of the patrons who used their Canadian Tire card missed four payments over 12 months. He could also tell you the “safest” products — premium birdseed and a device called a “snow roof rake” that homeowners use to remove high-up snowdrifts so they don’t fall on pedestrians…

‘Why were felt-pad buyers so upstanding? Because they wanted to protect their belongings, be they hardwood floors or credit scores. Why did chrome-skull owners skip out on their debts? “The person who buys a skull for their car, they are like people who go to a bar named Sharx,” Martin told me. “Would you give them a loan?”

So what if there are errors?

Now perhaps I’ve had too much training in science and mathematics, but this type of thinking seems totally neanderthal to me. It belongs in the same category of things we should be protected from as “guilt by association” and “racial profiling”.

For example, the article cites one of Martin’s concrete statistics:

‘A 2002 study of how customers of Canadian Tire were using the company's credit cards found that 2,220 of 100,000 cardholders who used their credit cards in drinking places missed four payments within the next 12 months. By contrast, only 530 of the cardholders who used their credit cards at the dentist missed four payments within the next 12 months.’

We can rephrase the statement to say that 98% of the people who used their credit cards in drinking places did NOT miss the requisite four payments.

Drawing the conclusion that “use of the credit card in a drinking establishment predicts default” is thus an error 98 times out of 100.

Denying people credit on a premise which is wrong 98% of the time seems like one of those things regulators should rush to address, even if the premise reduces risk to the credit card company.
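The arithmetic behind that claim is simple enough to check (assuming, as the quoted study implies, that both figures are per 100,000 cardholders):

```python
# Re-deriving the numbers in the quoted Canadian Tire study.
bar_users = 100_000       # cardholders who used their cards in drinking places
bar_defaulters = 2_220    # of those, how many missed four payments in 12 months
dentist_defaulters = 530  # same measure for cardholders paying the dentist

default_rate = bar_defaulters / bar_users     # 0.0222
non_default_rate = 1 - default_rate           # 0.9778

print(f"{non_default_rate:.1%} of bar patrons did NOT default")
print(f"relative risk vs. dentist visitors: "
      f"{bar_defaulters / dentist_defaulters:.1f}x")
```

So the bar signal is real in the aggregate – roughly four times the default rate of the dentist group – yet it is still wrong about 98 individuals out of every 100 it flags. That gap between a statistically valid risk factor and a fair basis for judging a particular person is the whole point.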

But there won’t be enough regulators to go around, since there are thousands of other examples given that are similarly idiotic from the point of view of a society fair to its members. For the article continues,

‘Are cardholders suddenly logging in at 1 in the morning? It might signal sleeplessness due to anxiety. Are they using their cards for groceries? It might mean they are trying to conserve their cash. Have they started using their cards for therapy sessions? Do they call the card company in the middle of the day, when they should be at work? What do they say when a customer-service representative asks how they’re feeling? Are their sighs long or short? Do they respond better to a comforting or bullying tone?’

Hmmm.

  • Logging in at 1 in the morning. That’s me. I guess I’m one of the 98% for whom this thesis is wrong… I like to stay up late. Do you think staying up late could explain why Mr. Martin’s self-consciously erroneous theses irk me?
  • Using card to buy groceries? True, I don’t like cash. Does this put me on the road to ruin? Another stupid thesis for Mr. Martin.
  • Therapy sessions? If I read enough theses like those proposed by Martin, I may one day need therapy.  But frankly,  I don’t think Mr. Martin should have the slightest visibility into matters like these.  Canadian Tire meets Freud?
  • Calling in the middle of the day when I should be at work? Grow up, Mr. Martin. There is this thing called flex schedules for the 98% or 99% or 99.9% of us for which your theses continually fail.
  • What I would say if a customer-service representative asked how I was feeling? I would point out, with some vigor, that we do not have a personal relationship and that such a question isn't appropriate. And I certainly would not remain on the line.

Apparently Mr. Martin told Charles Duhigg, “If you show us what you buy, we can tell you who you are, maybe even better than you know yourself.” He then lamented that in the past, “everyone was scared that people will resent companies for knowing too much.”

At best, this is no more than a Luciferian version of the Beatles’ “You are what you eat” – but minus the excessive drug use that can explain why everyone thought this was so deep. The truth is, you are not “what you eat”.

Duhigg argues that in the past, companies stuck to “more traditional methods” of managing risk, like raising interest rates when someone was late paying a bill (imagine – a methodology based on actual delinquency rather than hocus pocus), because they worried that customers would revolt if they found out they were being studied so closely. He then says that after “the meltdown”, Mr. Martin’s methods have gained much more currency.

In fact, customers would revolt because the methodology is not reasonable or fair from the point of view of the vast majority of individuals, being wrong tens or hundreds or thousands of times more often than it is right.

If we weren’t working on digital identity, we could just end this discussion by saying Mr. Martin represents one more reason to introduce regulation into the credit card industry. But unfortunately, his thinking is contagious and symptomatic.

Mining of credit card information is just the tip of a vast and dangerous iceberg we are beginning to encounter in cyberspace. The Internet is currently engineered to facilitate the assembly of ever more information of the kind that so thrills Mr. Martin – data accumulated throughout the lives of our young people that will become progressively more important in limiting their opportunities as more “risk reduction” assumptions – of the Martinist kind that apply to almost no one but affect many – take hold.

When we talk about the need to prevent correlation handles and assembly of information across contexts (for example, in the Laws of Identity and our discussions of anonymity and minimal disclosure technology), we are talking about ways to begin to throw a monkey wrench into an emerging Martinist machine.  Mr. Duhigg's story describes early prototypes of the machinations we see as inevitable should we fail in our bid to create a privacy enhancing identity infrastructure for the digital epoch.

[Thanks to JC Cannon for pointing me to this article..]

The economics of vulnerabilities…

Gunnar Peterson of 1 Raindrop has blogged his Keynote at the recent Quality of Protection conference.  It is a great read – and a defense in depth against the binary “secure / not secure” polarity that characterizes the thinking of those new to security matters. 

His argument riffs on Dan Geer's famous Risk Management is Where the Money Is.  He turns to Warren Buffet as someone who knows something about this kind of thing, writing:

“Of course, saying that you are managing risk and actually managing risk are two different things. Warren Buffett started off his 2007 shareholder letter talking about financial institutions’ ability to deal with the subprime mess in the housing market saying, “You don't know who is swimming naked until the tide goes out.” In our world, we don't know whose systems are running naked, with no controls, until they are attacked. Of course, by then it is too late.

“So the security industry understands enough about risk management that the language of risk has permeated almost every product, presentation, and security project for the last ten years. However, a friend of mine who works at a bank recently attended a workshop on security metrics, and came away with the following observation – “All these people are talking about risk, but they don't have any assets.” You can't do risk management if you don't know your assets.

“Risk management requires that you know your assets, that on some level you understand the vulnerabilities surrounding your assets, the threats against those, and efficacy of the countermeasures you would like to use to separate the threat from the asset. But it starts with assets. Unfortunately, in the digital world these turn out to be devilishly hard to identify and value.

“Recent events have taught us again, that in the financial world, Warren Buffett has few peers as a risk manager. I would like to take the first two parts of this talk looking at his career as a way to understand risk management and what we can infer for our digital assets.”

Analysing vulnerabilities and the values of assets, he uncovers two pyramids that turn out to be inverted. 

To deliver a real Margin of Safety to the business, I propose the following based on a defense in depth mindset. Break the IT budget into the following categories:

  • Network: all the resources invested in Cisco, network admins, etc.
  • Host: all the resources invested in Unix, Windows, sys admins, etc.
  • Applications: all the resources invested in developers, CRM, ERP, etc.
  • Data: all the resources invested in databases, DBAs, etc.

Tally up each layer. If you are like most businesses, you will probably find that you spend most on Applications, then Data, then Host, then Network.

Then do the same exercise for the Information Security budget:

  • Network: all the resources invested in network firewalls, firewall admins, etc.
  • Host: all the resources invested in vulnerability management, patching, etc.
  • Applications: all the resources invested in static analysis, black box scanning, etc.
  • Data: all the resources invested in database encryption, database monitoring, etc.

Again, tally up each layer. If you are like most businesses, you will find that you spend most on Network, then Host, then Applications, then Data. Congratulations, Information Security: you are diametrically opposed to the business!
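Gunnar's exercise is simple enough to automate. Here is a minimal sketch with hypothetical spend figures (the ordering of the layers, not the numbers, is the point):

```python
# Rank the four layers by spend in each budget and compare the orderings.
# The figures below are made up purely for illustration.

it_budget = {"Network": 10, "Host": 20, "Applications": 45, "Data": 25}
security_budget = {"Network": 45, "Host": 30, "Applications": 15, "Data": 10}

def ranked(budget):
    """Layers ordered from most to least spend."""
    return sorted(budget, key=budget.get, reverse=True)

print("Business spends on:", " > ".join(ranked(it_budget)))
print("Security spends on:", " > ".join(ranked(security_budget)))
# Business spends on: Applications > Data > Host > Network
# Security spends on: Network > Host > Applications > Data
```

Run against your own budget lines, the two orderings make the "inverted pyramids" visible at a glance.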

He relates his thinking to a fascinating piece by Pat Helland called SOA and Newton's Universe (a must-read to which I will return) and then proposes some elements of a concrete approach to development of meaningful metrics that he argues allow correlation of “value” and “risk” in ways that could sustain meaningful business decisions. 

In an otherwise clear argument, Gunnar itemizes a series of “Apologies”, in the sense of corrections applied post facto due to the uncertainty of decision-making in a distributed environment:

Example Apologies: Identity Management tools (provisioning, deprovisioning); reimbursing the customer for fraud losses; compensating transactions – Giant Global Bank is still sorry your account was compromised!

Try as I might, I don't understand the categorization of identity management tools as apology, or their relationship to account compromise – I hope Gunnar will tell us more. 
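The compensating-transaction part, at least, is the familiar pattern from Helland's paper: act optimistically on local, possibly stale knowledge, and issue a new transaction later to make amends if the decision turns out to be wrong. A minimal sketch, using hypothetical names of my own invention:

```python
# Sketch of the "apology" / compensating-transaction pattern.
# You cannot undo the past in a distributed system; you can only
# append a new transaction that compensates for the old one.

class Account:
    def __init__(self, balance):
        self.balance = balance
        self.log = []  # append-only history of transactions

    def charge(self, amount):
        # Optimistic action taken without global, up-to-date knowledge.
        self.balance -= amount
        self.log.append(("charge", amount))

    def apologize(self, amount):
        # Compensating transaction, e.g. a fraud refund.
        self.balance += amount
        self.log.append(("refund", amount))

acct = Account(100)
acct.charge(40)        # later discovered to be fraudulent
acct.apologize(40)     # Giant Global Bank is still sorry...
print(acct.balance)    # -> 100, but the history records both events
```

The refund restores the balance, but the charge is never erased from the log – which is precisely why the bank is “still sorry”.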

Security and ContactPoint: perception is all

Given the recent theft of my identity while it was being “stewarded” by Countrywide, I feel especially motivated to share with you this important piece on ContactPoint by Sir Bonar Neville-Kingdom GCMG KCVO that appeared in Britain's Ideal Government. Sir Bonar writes:

I’m facing a blizzard of Freedom of Information requests from the self-appointed (and frankly self-righteous) civil liberties brigade about releasing details of the ContactPoint security review. Of course we’re all in favour of Freedom of Information to a point but there is a limit.

Perhaps I might point out:

The decision not to release any information about the ContactPoint security review was taken by an independent panel. I personally chaired this panel to ensure its independence from any outside interests. I was of course not directly involved in the original requests, which were handled by a junior staff member.

The security of ContactPoint relies on nobody knowing how it works. If nobody knows what the security measures are, how can they possibly circumvent them? This is simply common sense. Details of the security measures will be shared only with the 330,000 accredited and vetted public servants who will have direct access to the database of children.

We’re hardly going to ask every Tom, Dick and Harry for how to keep our own data secure when, as you’re probably aware, our friends in Cheltenham pretty much invented the whole information security game. To share the security details with some troublemaking non-governmental organisation is merely to ask for trouble with the news media and to put us all needlessly at risk. The Department will not tolerate such risk and it is clearly not in the public interest to do so.

We did consider whether to redact and release any text.  We concluded that the small amount of text that would result after redacting text that should not be released would be incoherent and without context.  Such a release would serve no public interest.

ContactPoint is both a safe and secure system and I should remind everyone that it is fundamental to its success that it is perceived as such by parents, the professionals that use it and others with an interest in ContactPoint and its contribution to delivering the Every Child Matters agenda. Maintaining this perception of absolute “gold standard” security is why it is so important that nobody should question the security arrangements put in by our contractor Cap Gemini (whom I shall be meeting again in Andorra over the weekend).

We must guard the public mind – and indeed our own minds – against any inappropriate concerns on data security.

All this is set out on the Every Child Matters website, which includes a specific and contextual reference to the ContactPoint Data Security Review.  The content has been recently updated and can be found at: http://www.everychildmatters.gov.uk/deliveringservices/contactpoint/security/

Sending out our policy thinking via the medium of a Web Site is a central plank of the “Perfecting Web 1.0” aspect of our Transformational Government strategy, which is due to be complete in 2015. If interfering busybodies have any other queries about how we propose that children in Britain should be raised and protected, I would refer them to that.

I might add we never get this sort of trouble from the trade association Intellect, and this is why we find them a pleasure to deal with. And on the foundation of that relationship is our track record of success in government IT projects built.

So put that in your collective pipe and smoke it, naysayers. Now is not the time to ask difficult questions. We have to get on with the job of restoring order.