Kim Cameron's excellent adventure

I need to correct a few of the factual errors in recent posts by James Governor and Jon Udell.  James begins by describing our recent get-together:

We talked about Project Geneva, a new claims based access platform which supersedes Active Directory Federation Services, adding support for SAML 2.0 and even the open source web authentication protocol OpenID.

Geneva is big news for OpenID. As David Recordon, one of the prime movers behind the standard said on Twitter yesterday:

Microsoft’s Live ID is adding support for OpenID. Goodbye proprietary identity technologies for the web! Good work MSFT

TechCrunch took the story forward, calling out de facto standardization:

Login standard OpenID has gotten a huge boost today from Microsoft, as the company has announced that users will soon be able to login to any OpenID site using their Windows Live IDs. With over 400 million Windows Live accounts (many of which see frequent use on Live’s Mail and Messenger services), the announcement is a massive win for OpenID. And Microsoft isn’t just supporting OpenID – the announcement goes as far as to call it the de facto login standard [the announcement actually calls it “an emerging, de facto standard” – Kim] 

But that’s not what this post is supposed to be about. No, I am talking about the fact [that] later yesterday evening Kim hacked his way into a party at the Standard using someone else’s token!  [Now this is where I think some “small tweaks” start to be called for… – Kim]

It happened like this. I was talking to Mary Branscombe, Simon Bisson and Jon Udell when suddenly Mary jumped up with a big smile on her face. Kim, who has a kind of friendly bear look about him, had arrived. She ran over and then I noticed that a bouncer had his arm across Kim’s chest (”if your name’s not down you’re not coming in”). Kim had apparently wandered upstairs without getting his wristband first. Kim disappeared off downstairs, and I figured he might not even come back. A few minutes later, though, there he was. I assumed he had found an organizer downstairs to give him a wristband… When he said that he had actually taken the wristband from someone leaving the party, and hooked it onto his wrist, Jon and I practically pissed our pants laughing. As Jon explains (in Kim Cameron's Excellent Adventure):

If you don’t know who Kim is, what’s cosmically funny here is that he’s the architect for Microsoft’s identity system and one of the planet’s leading authorities on identity tokens and access control.

We stood around for a while, laughing and wondering if Kim would reappear or just call it a night. Then he emerged from the elevator, wearing a wristband which — wait for it — belonged to John Fontana.  Kim hacked his way into the party with a forged credential! You can’t make this stuff up!

While there is certainly some cosmic truth to this description, and while I did in fact back away slightly from the raucous party at the precise moment James says he and Jon “pissed their pants”, John Fontana did NOT actually give me his wristband.  You see, he didn't have a wristband either. 

So let's go through this step by step.  It all began with the invite that brought me to the party in the first place:

As a spokesperson for PDC2008, we’re looking forward to having you join us at the Rooftop Bar of the Standard Hotel for the Media/Analyst party on October 27th at 7:00pm

This invite came directly from the corporate Department of Parties.

I point this out just to ward off any unfair accusations that I just wanted to raid the party's immense Martini bar. Those who know me also know nothing could be further from the truth. You have to force a Martini into my hands.  My attendance represented nothing but Duty.  But I digress.

Protocol Violation

The truth of the matter is that I ran into John Fontana in the cafe of the Standard and we arrived at the party together.  He had been invited because this was, ummm, a Press party and he was, ummm, Press. 

However, it didn’t take more than a few seconds for us to see that the protocol for party access control had not been implemented correctly.   We just assumed this was a bug due to the fact that the party was celebrating a Beta, and that we would have to work our way past it as all beta participants do. 

Let’s just say the token-issuing part of the party infrastructure had crashed, whereas the access control point was operating in an out-of-control fashion.

Looking at it from an architectural point of view, the admission system was based on what is technically called “bearer” tokens (wristbands). Such tokens are NOT actually personalized in any way, or tied to the identity of the person they are given to through some kind of proof. If you “have” the token, you ARE the bearer of the token.
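For the technically inclined, the bearer/bound distinction can be sketched in a few lines of Python (a toy illustration, not any real admission or federation system; the HMAC-based bound token below is just one common way of tying a token to its holder):

```python
import hashlib
import hmac
import secrets

ISSUER_KEY = secrets.token_bytes(32)   # known only to the token issuer

def issue_bearer_token() -> bytes:
    # A bearer token carries no binding to its holder.
    return secrets.token_bytes(16)

def admit_bearer(token: bytes, issued: set) -> bool:
    # Possession alone grants access: whoever "has" the token IS the bearer.
    return token in issued

def issue_bound_token(holder_id: str) -> bytes:
    # A holder-bound token is a MAC over the holder's identity, so it is
    # only valid when the presenter can be checked against that identity.
    return hmac.new(ISSUER_KEY, holder_id.encode(), hashlib.sha256).digest()

def admit_bound(token: bytes, presenter_id: str) -> bool:
    expected = hmac.new(ISSUER_KEY, presenter_id.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(token, expected)

issued = set()
wristband = issue_bearer_token()
issued.add(wristband)
assert admit_bearer(wristband, issued)            # any bearer gets in

badge = issue_bound_token("john.fontana")
assert admit_bound(badge, "john.fontana")         # rightful holder: admitted
assert not admit_bound(badge, "someone.else")     # transferred token: rejected
```

A wristband is `admit_bearer` all the way down, which is exactly why one could change wrists without any loss of "validity".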

So one of those big ideas slowly began to take root in our minds.  Why not become bearers of the requisite tokens, thereby compensating for the inoperative token-issuing system?

Well, at that point, since not a few of the people leaving the party knew us,  John and I explained our “aha”, and pointed out the moribund token-issuing component.  As is typical of people seeing those in need of help, we were showered with offers of assistance.

I happened to be rescued by an unknown bystander with incredibly nimble and strong fingers and deep expertise with wristband technology.  She was able to easily dislodge her wristband and put it on me in such a way that its integrity was totally intact. 

There was no forged token.  There was no stolen token.  It was a real token.  I just became the bearer.

When we got back upstairs, the access control point evaluated my token – and presto – let me in to join a certain set of regaling hedonists basking in the moonlight.  

But sadly – and unfairly –  John’s token was rejected since its donor, lacking the great skill of mine, had damaged it during the token transplant.

Despite the Martini now in my hand, I was overcome by that special sadness you feel when escaping ill fate wrongly allotted to one more deserving of good fortune than you.  John slipped silently out of the queue and slinked off to a completely different party.

So that's it, folks.  Yet the next morning, I had to wake up, and confront again my humdrum life.  But I do so inspired by the kindness of both strangers and friends (have I gone too far?)

 

Hole in Google SSO service

Some days you get up and wish you hadn't.  How about this one:

[Image: Google SSO problem]

The ZDNet article begins by reassuring us:

“Google has fixed an implementation flaw in the single sign-on service that powers Google Apps following a warning from researchers that remote attackers can exploit a hole to access Google accounts.

“The vulnerability, described in this white paper (.pdf), affects the SAML Single Sign-On Service for Google Apps.

“This US-CERT notice describes the issue:

“A malicious service provider might have been able to access a user’s Google Account or other services offered by different identity providers. [US-CERT actually means ‘by different service providers’ – Kim]

“Google has addressed this issue by changing the behavior of their SSO implementation. Administrators and developers were required to update their identity provider to provide a valid recipient field in their assertions.

“To exploit this vulnerability, an attacker would have to convince the user to login to their site.”

Incredibly basic errors

The paper is by Alessandro Armando,  Roberto Carbone, Luca Compagna, Jorge Cuellar, and Llanos Tobarra, who are affiliated with University of Genoa, Siemens and SAP, and is one of an important series of studies demonstrating the value of automated protocol verification systems.

But the surprising fact is that the errors made are incredibly basic – you don't need an automated protocol verification system to know which way the wind blows.  The industry has known about exactly these problems for a long time now.   Yet people keep making the same mistakes.

Do your own thing

The developers decided to forget about the SAML specification as it's written and just “do their own thing.”  As great as this kind of move might be on the dance floor, it's dangerous when it comes to protecting people's resources and privacy.  In fact it is insidious, since the claim that Google SSO implemented a well-vetted protocol tended to give security professionals a sense of confidence that we understood its capabilities and limitations.  In retrospect, it seems we need independent audit before we depend on anything.  Maybe companies like Fugen can help in this regard?

What was the problem?

Normally, when a SAML relying party wants a user to authenticate through SAML (or WS-Fed), the relying party sends her to an identity provider with a request that contains an ID and a scope (e.g. a URL) to which the resulting token should apply.

For example, in authenticating someone to identityblog, my underlying software would make up a random authentication ID number and the scope would be www.identityblog.com.  The user would carry this information with her when she was redirected to the identity provider for authentication.

The identity provider would then ask for a password, or examine a cookie, and sign an authentication assertion containing the ID number, the scope, the client identity, and the identity provider's identity.  

These properties would be bound together cryptographically in a tamperproof form whose authenticity could be verified, and returned to the relying party.  Because of the unique ID, the relying party knows the assertion was freshly minted in response to its request.  Further, since the scope is specified, the relying party can't abuse the assertion it receives at some other scope.

But according to the research done by the paper's authors, the Google engineers “simplified” the protocol, perhaps hoping to make it “more efficient”.  So they dropped the whole ID and scope “thing” out of the assertion.  All that was signed was the client's identity.

The result was that the relying party had no idea if the assertion was minted for it or for some other relying party.  It was one-for-all and all-for-one at Google.
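The difference between the per-spec flow and the “simplified” one can be sketched in a few lines of Python. To keep it short, JSON plus an HMAC stands in for a real SAML assertion, which would be XML signed with the identity provider's asymmetric key (XML-DSig); only the choice of signed fields matters here:

```python
import hashlib
import hmac
import json
import secrets

IDP_KEY = secrets.token_bytes(32)   # stand-in for the IdP's signing key

def sign(assertion: dict) -> bytes:
    payload = json.dumps(assertion, sort_keys=True).encode()
    return hmac.new(IDP_KEY, payload, hashlib.sha256).digest()

def full_assertion(subject: str, request_id: str, scope: str):
    # The per-spec assertion: subject, request ID and scope all signed.
    a = {"subject": subject, "in_response_to": request_id, "scope": scope}
    return a, sign(a)

def google_style_assertion(subject: str):
    # The flawed variant: only the subject is signed; no ID, no scope.
    a = {"subject": subject}
    return a, sign(a)

def relying_party_accepts(assertion, sig, my_request_id, my_scope) -> bool:
    if not hmac.compare_digest(sig, sign(assertion)):
        return False                              # forged or tampered
    if assertion.get("in_response_to") != my_request_id:
        return False                              # not minted for this request
    if assertion.get("scope") != my_scope:
        return False                              # minted for some other site
    return True

a, sig = full_assertion("alice", "req-42", "https://mail.example")
assert relying_party_accepts(a, sig, "req-42", "https://mail.example")
assert not relying_party_accepts(a, sig, "req-42", "https://docs.example")

# With ID and scope gone, signature verification is all that is left, so a
# rogue relying party can replay the token anywhere in the ecosystem:
g, gsig = google_style_assertion("alice")
assert hmac.compare_digest(gsig, sign(g))   # verifies at EVERY relying party
```

The names and URLs are invented for illustration; the point is simply that once the ID and scope are outside the signature, no relying party can tell whose assertion it is holding.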

Wake up to insider attacks

This might seem reasonable, but it sure would cause me sleepless nights.

The problem is that if you have a huge site like Google, which brings together many hundreds (thousands?) of services, then with this approach, if even ONE of them “goes bad”, the penetrated service can use any tokens it gets to impersonate those users at any other Google relying party service.

It is a red carpet for insider attacks.  It is as though someone didn't know that insider attacks represent the overwhelming majority of attacks.  Partitioning is the key weapon we have in fighting these attacks.  And the Google design threw partitioning to the wind.  One hole in the hull and the whole ship goes down.

Indeed the qualifying note in the ZD article that “to exploit this vulnerability, an attacker would have to convince the user to login to their site” misses the whole point about how this vulnerability facilitates insider attacks.  This is itself worrisome since it seems that thinking about the insider isn't something that comes naturally to us.

Then it gets worse.

This is all pretty depressing but it gets worse.  At some point, Google decided to offer SSO to third party sites.  But according to the researchers, at this point, the scope still was not being verified.  Of course the conclusion is that any service provider who subscribed to this SSO service – and any wayward employee who could get at the tokens – could impersonate any user of the third party service and access their accounts anywhere within the Google ecosystem.

My friends at Google aren't going to want to be lectured about security and privacy issues – especially by someone from Microsoft.  And I want to help, not hinder.

But let's face it.  As an industry we shouldn't be making the kinds of mistakes we made 15 or 20 years ago.  There must be better processes in place.  I hope we'll get to the point where we are all using vetted software frameworks so this kind of do-it-yourself brain surgery doesn't happen. 

Let's work together to build our systems to protect them from inside jobs.  If we all had this as a primary goal, the Google SSO fiasco could not have happened.  And as I'll make clear in an upcoming post, I'm feeling very strongly about this particular issue.

Personal information can be a toxic liability…

From Britain's Guardian, another fantastic tale of information leakage:

The home secretary, Jacqui Smith, yesterday denounced the consultancy firm involved in the development of the ID cards scheme for “completely unacceptable” practice after losing a memory stick containing the personal details of all of the 84,000 prisoners in England and Wales.

The memory stick contained unencrypted information from the electronic system for monitoring offenders through the criminal justice system, including information about 10,000 of the most persistent offenders…

Smith said PA Consulting had broken the terms of its contract in downloading the highly sensitive data. She said: “It runs against the rules set down both for the holding of government data and set down by the external contractor and certainly set down in the contract that we had with the external contractor.

An illuminating twist is that the information was provided to the contractor encrypted.  The contractor, one of the “experts” designing the British national identity card, decrypted it, put it on a USB stick and “lost it”.   With experts like this, who needs non-experts? 

When government identity system design and operations are flawed, the politicians responsible suffer  the repercussions.  It therefore always fills me with wonder – it is one of those inexplicable aspects of human nature – that politicians don't protect themselves by demanding the safest possible systems, nixing any plan that isn't based on at least a modicum of the requisite pessimism.  Why do they choose such rotten technical advisors?

Opposition parties urged the government to reconsider its plan for the introduction of an ID card database following the incident. Dominic Grieve, the shadow home secretary, said: “The public will be alarmed that the government is happy to entrust their £20bn ID card project to the firm involved in this fiasco.

“This will destroy any confidence the public still have in this white elephant and reinforce why it could endanger – rather than strengthen – our security.”

The Liberal Democrats were also not prepared to absolve the home secretary of responsibility. Their leader, Nick Clegg, accused Smith of being worse than the Keystone Cops at keeping data safe.

Clegg said: “Frankly the Keystone Cops would do a better job running the Home Office and keeping our data safe than this government, and if this government cannot keep the data of thousands of guilty people safe, why on earth should we give them the data of millions of innocent people in an ID card database?”

David Smith, deputy commissioner for the information commissioner's office, said: “The data loss by a Home Office contractor demonstrates that personal information can be a toxic liability if it is not handled properly, and reinforces the need for data protection to be taken seriously at all levels.”

Home Office resource accounts for last year show that in March of this year two CDs containing the personal information of seasonal agricultural workers went missing in transit to the UK Borders Agency. The names, dates of birth, and passport numbers of 3,000 individuals were lost.

If you are wondering why Britain seems to experience more “data loss” than anyone else, I suspect you are asking the wrong question.  If I were a betting man, I would wager that they just have better reporting – more people paying attention and blowing whistles.

But the big takeaway at the technical level is that sensitive information – and identity information in particular – needs to be protected throughout its lifetime.  If put on portable devices, the device should enforce rights management and only release specific information as needed – never allow wholesale copying.  Maybe we don't have dongles that can do this yet, but we certainly have phone-sized computers (dare I say phones?) with all the necessary computational capabilities.
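The release policy described above can be sketched like this (a hypothetical `RightsManagedStore` class; everything here is illustrative, and real rights management would also encrypt the data at rest and authenticate the requester):

```python
# Toy sketch of a rights-managed release policy for a portable store.
class RightsManagedStore:
    def __init__(self, records: dict):
        self._records = records

    def lookup(self, record_id: str, fields: list) -> dict:
        # Release only the specific fields the caller asked for,
        # one record at a time.
        record = self._records[record_id]
        return {f: record[f] for f in fields if f in record}

    def export_all(self):
        # Wholesale copying is never allowed.
        raise PermissionError("bulk export is forbidden by policy")

store = RightsManagedStore({
    "offender-1": {"name": "J. Doe", "status": "released", "banking": "..."},
})
# A legitimate narrow query succeeds; nothing else leaves the device.
assert store.lookup("offender-1", ["status"]) == {"status": "released"}
```

Had the device enforced something like `export_all` raising an error, losing the stick would have lost a lockbox, not a database.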

 

How to set up your computer so people can attack it

As I said in the previous post, the students from Ruhr Universität who are claiming discovery of security vulnerabilities in CardSpace did NOT “crack” CardSpace.
 
Instead, they created a demonstration that requires the computer's owner to consciously disable the computer's defenses through complex configurations – following a recipe they published on the web.

The students are not able to undermine the system without active co-operation by its owner. 

You might be thinking a user could be tricked into accidentally cooperating with the attack.  To explore that idea, I've captured the steps required to enable the attack in this video.  I suggest you look at this yourself to judge the students’ claim that they have come up with a “practical attack”.

 In essence, the video shows that a sophisticated computer owner is able to cause her system to be compromised if she chooses to do so.  This is not a “breach”.

Fingerprint charade

I got a new Toshiba Portege a few weeks ago, the first machine I've owned that came with a fingerprint sensor.   At first the system seemed to have been designed in a sensible way.  The fingerprint template is encrypted and stays local.  It is never released or stored in a remote database.  I decided to try it out – to experience what it “felt like”.

A couple of days later, I was at a conference and on stage under pretty bright lights.  Glancing down at my shiny new computer, I saw what looked unmistakably like a fingerprint on my laptop's right mouse button.  Then it occurred to me that the fingerprint sensor was only a quarter of an inch from what seemed to be a perfect image of my fingerprint.  How secure is that?

A while later I ran into  Dale Olds from Novell.  Since Dale's an amazing photographer, I asked if he would photograph the laptop to see if the fingerprint was actually usable.  Within a few seconds he took the picture above. 

When Dale actually sent me the photo, he said,

I have attached a slightly edited version of the photo that showed your fingerprint most clearly. In fact, it is so clear I am wondering whether you want to publish it. The original photos were in Olympus raw format. Please let me know if this version works for you.

Eee Gads.  I opened up the photo in Paint and saw something along these lines:

The gold blotch wasn't actually there.  I added it as a kind of fig-leaf before posting it here, since it covers the very clearest part of the fingerprint. 

The net of all of this was to drive home, yet again, just how silly it is to use a “public” secret as a proof of identity.  The fact that I can somehow “demonstrate knowledge” of a given fingerprint means nothing.  Identification is only possible by physically verifying that my finger embodies the fingerprint.  Without physical verification, what kind of a lock does the fingerprint reader provide?  A lock which conveniently offers every thief the key.

At first my mind boggled at the fact that Toshiba would supply mouse buttons that were such excellent fingerprint collection devices.  But then I realized that even if the fingerprint weren't conveniently stored on the mouse button, it would be easy to find it somewhere on the laptop's surface.

It hit me that in the age of digital photography, a properly motivated photographer could probably find fingerprints on all kinds of surfaces, and capture them as expertly as Dale did.  I realized it was no longer necessary to use special powder or inks or tape or whatever.  Fingerprints have become a thing of “sousveillance”.

Can women detect idiot researchers better than men?

According to an article in The Register

“Women are four times more likely than men to give out “passwords” in exchange for chocolate bars.

“A survey of 576 office workers in central London found that women are far more likely to give away their computer passwords to total strangers than their male counterparts, with 45 per cent of women versus ten per cent of men prepared to give away their login credentials to strangers masquerading as market researchers.

“The survey, conducted outside Liverpool Street Station in the City of London, was actually part of a social engineering exercise to raise awareness about information security in the run-up to next week’s Infosec Europe conference.

“Infosec has conducted similar surveys every year for at least the last five years involving punters apparently handing over login credentials in exchange for free pens or chocolate rewards.

“Little attempt is made to verify the authenticity of the passwords, beyond follow-up questions asking what category it falls under. So we don’t know whether women responding to the survey filled in any old rubbish in return for a choccy treat or handed out their real passwords.

“This year’s survey results were significantly better than previous years. In 2007, 64 per cent of people were prepared to give away their passwords for a chocolate bar, a figure that dropped 21 per cent this time around.

“So either people are getting more security-aware or more weight-conscious. And with half the respondents stating that they used the same passwords at home and work, then perhaps the latter is more likely.

“Taken in isolation the password findings might suggest the high-profile HMRC data loss debacle had increased awareness about information security. However, continued willingness to hand over personal information that could be useful to ID fraudsters suggests otherwise.

“The bogus researchers also asked for workers’ names and telephone numbers, ostensibly so they could be entered into a draw to go to Paris. With this incentive 60 per cent of men and 62 per cent of women handed over their contact information. A similar percentage (61 per cent) were happy to hand over their dates of birth. ®

This report is fascinating – not because it is good or bad but because it makes us question so much.

The people being studied don’t understand how our systems operate.  [In my view this is our worst problem.]  They’ve been shut out of knowing why things work the way they do.  So if they can be tricked, should we be surprised?  And does it mean they are “stupid”??? 

I feel a lot of people are simply sick and tired of naive and stupid questions from naive and stupid researchers.  Example:  I was just called to the door of my hotel room and asked what my major problems were…  Guess what?  I said that I was an architect and thus disqualified from discussing any such issues.  Sugar freaks will be happy that this qualified me for several  free chocolates, as well as some more idiosyncratic pastries…

Chaos computer club gives us the German phish finger

If you missed this article in The Register, you missed the most instructive story to date about applied biometrics:  

A hacker club has published what it says is the fingerprint of Wolfgang Schauble, Germany's interior minister and a staunch supporter of the collection of citizens’ unique physical characteristics as a means of preventing terrorism.

In the most recent issue of Die Datenschleuder, the Chaos Computer Club printed the image on a plastic foil that leaves fingerprints when it is pressed against biometric readers…

[Image: the last two pages of the magazine issue, showing the article and the plastic film containing Schauble's fingerprint]

“The whole research has always been inspired by showing how insecure biometrics are, especially a biometric that you leave all over the place,” said Karsten Nohl, a colleague of an amateur researcher going by the moniker Starbug, who engineered the hack. “It's basically like leaving the password to your computer everywhere you go without you being able to control it anymore.” … 

A water glass 

Schauble's fingerprint was captured off a water glass he used last summer while participating in a discussion celebrating the opening of a religious studies department at the University of Humboldt in Berlin. The print came from an index finger, most likely the right one, Starbug believes, because Schauble is right-handed.

The print is included in more than 4,000 copies of the latest issue of the magazine, which is published by the CCC. The image is printed two ways: one using traditional ink on paper, and the other on a film of flexible rubber that contains partially dried glue. The latter medium can be covertly affixed to a person's finger and used to leave an individual's prints on doors, telephones or biometric readers…

Schauble is a big proponent of using fingerprints and other unique characteristics to identify individuals.

“Each individual’s fingerprints are unique,” he is quoted as saying in this official interior department press release announcing a new electronic passport that stores individuals’ fingerprints on an RFID chip. “This technology will help us keep one step ahead of criminals. With the new passport, it is possible to conduct biometric checks, which will also prevent authentic passports from being misused by unauthorized persons who happen to look like the person in the passport photo.”

The magazine is calling on readers to collect the prints of other German officials, including Chancellor Angela Merkel, Bavarian Prime Minister Guenther Beckstein and BKA President Joerg Ziercke.

“The thing I like a lot is the political activism of the hack,” said Bruce Schneier, who is chief security technology officer for BT and an expert on online authentication. Fingerprint readers were long ago shown to be faulty, largely because designers opt to make the devices err on the side of false positives rather than on the side of false negatives…

[Read the full article here]

UK Chip and PIN vulnerable to simple attack

LightBlueTouchpaper, a blog by security researchers at Cambridge University, has posted details of a study documenting easy attacks on the new generation of British bank cards.  Saar Drimer explains, “This attack can capture the card’s PIN because UK banks have opted to issue cheaper cards that do not use asymmetric cryptography”.  Let's all heed the warning: 

Steven J. Murdoch, Ross Anderson and I looked at how well PIN entry devices (PEDs) protect cardholder data. Our paper will be published at the IEEE Symposium on Security and Privacy in May, though an extended version is available as a technical report. A segment about this work will appear on BBC Two’s Newsnight at 22:30 tonight.

We were able to demonstrate that two of the most popular PEDs in the UK — the Ingenico i3300 and Dione Xtreme — are vulnerable to a “tapping attack” using a paper clip, a needle and a small recording device. This allows us to record the data exchanged between the card and the PED’s processor without triggering tamper proofing mechanisms, and in clear violation of their supposed security properties. This attack can capture the card’s PIN because UK banks have opted to issue cheaper cards that do not use asymmetric cryptography to encrypt data between the card and PED.

[Images: the Ingenico and Dione tapping attacks]

In addition to the PIN, as part of the transaction, the PED reads an exact replica of the magnetic strip (for backwards compatibility). Thus, if an attacker can tap the data line between the card and the PED’s processor, he gets all the information needed to create a magnetic strip card and withdraw money out of an ATM that does not read the chip.

We also found that the certification process of these PEDs is flawed. APACS has been effectively approving PEDs for the UK market as Common Criteria (CC) Evaluated, which does not equal Common Criteria Certified (no PEDs are CC Certified). What APACS means by “Evaluated” is that an approved lab has performed the “evaluation”, but unlike CC Certified products, the reports are kept secret, and governmental Certification Bodies do not do quality control.

This process causes a race to the bottom, with PED developers able to choose labs that will approve rather than improve PEDs, at the lowest price. Clearly, the certification process needs to be more open to the cardholders, who suffer from the fraud. It also needs to be fixed such that defective devices are refused certification.

We notified APACS, Visa, and the PED manufacturers of our results in mid-November 2007 and responses arrived only in the last week or so (Visa chose to respond only a few minutes ago!). The responses are the usual claims that our demonstrations can only be done in lab conditions, that criminals are not that sophisticated, the threat to cardholder data is minimal, and that their “layers of security” will detect fraud. There is no evidence to support these claims. APACS state that the PEDs we examined will not be de-certified or removed; the same goes for the labs that certified them, and APACS would not even tell us who they are.

The threat is very real: tampered PEDs have already been used for fraud. See our press release and FAQ for basic points and the technical report where we discuss the work in detail.
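The researchers' point about asymmetric cryptography can be illustrated with a toy challenge-response sketch in Python (textbook RSA with deliberately tiny primes, purely for illustration; nothing here resembles EMV's actual card protocols or key sizes). With a cleartext card-to-PED link, a tap yields the PIN and stripe data directly; with a signed fresh challenge, the tap yields nothing replayable:

```python
import hashlib
import secrets

# Textbook RSA keypair with tiny primes (illustrative only).
p, q = 61, 53
n = p * q       # public modulus (3233)
e = 17          # public exponent, known to the terminal
d = 413         # private exponent, held inside the card, never on the wire

def digest_mod_n(challenge: bytes) -> int:
    return int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n

def card_sign(challenge: bytes) -> int:
    # The card proves it holds d without ever revealing it.
    return pow(digest_mod_n(challenge), d, n)

def terminal_verify(challenge: bytes, sig: int) -> bool:
    return pow(sig, e, n) == digest_mod_n(challenge)

# Honest transaction: the terminal issues a fresh challenge, the card signs it.
nonce = secrets.token_bytes(8)
sig = card_sign(nonce)
assert terminal_verify(nonce, sig)

# A tap on the line records (nonce, sig), but the next transaction uses a
# fresh challenge, so the recording is useless for replay.
fresh = secrets.token_bytes(8)
while digest_mod_n(fresh) == digest_mod_n(nonce):   # skip the rare collision
    fresh = secrets.token_bytes(8)
assert not terminal_verify(fresh, sig)
```

The cheaper symmetric cards the researchers describe have no equivalent of `d`: everything the tap records is everything an attacker needs.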

[Thanks to Richard Turner for the heads up.]

Britain's HMRC Identity Chernobyl

The recent British Identity Chernobyl demands our close examination. 

Consider:

  • the size of the breach – loss of one person's identity information is cause for concern, but HMRC lost the information on 25 million people (7.5 million families)
  • the actual information “lost” – unencrypted records containing not only personal but also banking and national insurance details (a three-for-one…)
  • the narrative – every British family with a child under sixteen years of age made vulnerable to fraud and identity theft

According to Bloomberg News,

Political analysts said the data loss, which prompted the resignation of the head of the tax authority, could badly damage the government.

“I think it’s just a colossal error that I think could really rebound on the government’s popularity”, said Lancaster University politics Professor David Denver.

“What people think about governments these days is not so much about ideology, but about competence, and here we have truly massive incompetence.”

Even British Chancellor Alistair Darling said,

“Of course it shakes confidence, because you have a situation where millions of people give you information and expect it to be protected.

Systemic Failure

Meanwhile, in parliament, Prime Minister Gordon Brown explained that security measures had been breached when the information was downloaded and sent by courier to the National Audit Office, although there had been no “systemic failure”.

This is really the crux of the matter. Because, from a technology point of view, the failure was systemic. 


We are living in an age where systems dealing with our identity must be designed from the bottom up not to leak information in spite of being breached.  Perhaps I should say, “redesigned from the bottom up”, because today's systems rarely meet the bar.  It's not that data protection wasn't considered when devising them.  It is simply that the profound risks were not yet evident, and guaranteeing protection was not seen to be as fundamental as meeting other design goals – like making sure the transactions balanced or abusers were caught.

Isn't it incredible that “a junior official” could simply “download” detailed personal and financial information on 25 million people?  Why would a system be designed this way? 

To me this is the equivalent of assembling a vast pile of dynamite in the middle of a city on the assumption that excellent procedures would therefore be put in place, so no one would ever set it off.  

There is no need to store all of society's dynamite in one place, and no need to run the risk of the colossal explosion that an error in procedure might produce.  

Similarly, the information that is the subject of HMRC's identity catastrophe should have been partitioned – broken up both in terms of the number of records and the information components.

In addition, it should have been encrypted – even rights-protected from beginning to end.  And no official (a.k.a. insider) should ever have been able to get at enough of it that a significant breach could occur.
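What partitioning buys can be shown in a short sketch (a hypothetical `PartitionedStore` in Python; a real deployment would spread partitions across separate machines, keys and administrative credentials, not entries in one process):

```python
import hashlib

class PartitionedStore:
    def __init__(self, n_partitions: int):
        self.n = n_partitions
        self.shards = [dict() for _ in range(n_partitions)]

    def _shard_of(self, record_id: str) -> int:
        h = hashlib.sha256(record_id.encode()).digest()
        return int.from_bytes(h, "big") % self.n

    def put(self, record_id: str, record: dict):
        self.shards[self._shard_of(record_id)][record_id] = record

    def read(self, credential_shard: int, record_id: str) -> dict:
        # Each credential is scoped to a single shard; it cannot touch
        # the others, so one leaked credential exposes only one slice.
        shard = self._shard_of(record_id)
        if shard != credential_shard:
            raise PermissionError("credential not valid for this partition")
        return self.shards[shard][record_id]

store = PartitionedStore(64)
for i in range(1000):
    store.put(f"family-{i}", {"seq": i})

# A breach of any single shard exposes only a small slice of the records:
assert sum(len(s) for s in store.shards) == 1000
assert max(len(s) for s in store.shards) < 100
```

With a design like this, a “junior official” with one credential could never have downloaded all 25 million records, because no single credential can.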

Gordon Brown, like other political leaders, deserves technical advisors savvy enough to explain the advantages of adopting new approaches to these problems.  Information technology is important enough to the lives of citizens that political leaders really ought to understand the implications of different technology strategies.  Governments need CTOs that are responsible for national technical systems in much the same ways that chancellors and the like are responsible for finances.

Rather than being advised to apologize for systems that are fundamentally flawed, leaders should be advised to inform the population that the government has inherited antiquated systems that are not up to the privacy requirements of the digital age, and put in place solutions based on breach-resistance and privacy-enhancing technologies. 

The British information commissioner, Richard Thomas, is conducting a broad inquiry on government data privacy.  He is quoted by the Guardian as saying he was demanding more powers to enter government offices without warning for spot-checks.

He said he wanted new criminal penalties for reckless disregard of procedures. He also disclosed that only last week he had sought assurances from the Home Office on limiting information to be stored on ID cards.

“This could not be more serious and has to be a serious wake-up call to the whole of government. We have been warning about these dangers for more than a year.”

I have never understood why any politician in his (or her) right mind wouldn't want to be on the privacy-enhancing and future-facing side of this problem.