Head over to the Office of Inadequate Security

First of all, I have to refer readers to the Office of Inadequate Security, apparently operated by databreaches.net. I suggest heading over there pretty quickly too – the office is undoubtedly going to be so busy you'll have to line up as time goes on.

So far it looks like the go-to place for info on breaches – it even has a Twitter feed for breach junkies.

Recently the Office published an account that raises a lot of questions:

I just read a breach disclosure to the New Hampshire Attorney General’s Office with accompanying notification letters to those affected that impressed me favorably. But first, to the breach itself:

StudentCity.com, a site that allows students to book trips for school vacation breaks, suffered a breach in their system that they learned about on June 9 after they started getting reports of credit card fraud from customers. An FAQ about the breach, posted on www.myidexperts.com, explains:

StudentCity first became concerned there could be an issue on June 9, 2011, when we received reports of customers travelling together who had reported issues with their credit and debit cards. Because this seemed to be with 2011 groups, we initially thought it was a hotel or vendor used in conjunction with 2011 tours. We then became aware of an account that was 2012 passengers on the same day who were all impacted. This is when we became highly concerned. Although our processing company could find no issue, we immediately notified customers about the incident via email, contacted federal authorities and immediately began a forensic investigation.

According to the report to New Hampshire, where 266 residents were affected, the compromised data included students’ credit card numbers, passport numbers, and names. The FAQ, however, indicates that dates of birth were also involved.

Frustratingly for StudentCity, the credit card data had been encrypted, but their investigation revealed that the encryption had been broken in some cases. In the FAQ, they explain:

The credit card information was encrypted, but the encryption appears to have been decoded by the hackers. It appears they were able to write a script to decode some information for some customers and most or all for others.

The letter to the NH AG’s office, written by their lawyers on July 1, is wonderfully plain and clear in terms of what happened and what steps StudentCity promptly took to address the breach and prevent future breaches, but it was the tailored letters sent to those affected on July 8 that really impressed me for their plain language, recognition of concerns, active encouragement of the recipients to take immediate steps to protect themselves, and for the utterly human tone of the correspondence.

Kudos to StudentCity.com and their law firm, Nelson Mullins Riley & Scarborough, LLP, for providing an exemplar of a good notification.

It would be great if StudentCity would bring in some security experts to audit the way encryption was done, and report on what went wrong. I don't say this to be punitive; I agree that StudentCity deserves credit for at least attempting to employ encryption. But the outcome points to the fact that we need programming frameworks that make it easy to get truly robust encryption and key protection – and to deploy it in a minimal disclosure architecture that keeps secrets offline. If StudentCity goes the extra mile in helping others learn from their unfortunate experience, I'll certainly be a supporter.
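We don't know how StudentCity's encryption was actually implemented, but "a script to decode some information" usually points to a weak or homegrown algorithm, or a key stored right next to the data.  As a minimal sketch of the robust alternative – authenticated encryption with the key held outside the database – here is an illustration in Python using the third-party cryptography package (the card number, record id and key handling are all made up for the example; a real system would fetch the key from an HSM or key-management service):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_card(key: bytes, card_number: str, record_id: str) -> bytes:
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)  # fresh nonce per record; never reuse with the same key
    # Binding the ciphertext to its record id stops cut-and-paste attacks.
    return nonce + aesgcm.encrypt(nonce, card_number.encode(), record_id.encode())

def decrypt_card(key: bytes, blob: bytes, record_id: str) -> str:
    aesgcm = AESGCM(key)
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, record_id.encode()).decode()

# The key must live apart from the ciphertext -- in an HSM or key service,
# not in the database an attacker might walk away with.
key = AESGCM.generate_key(bit_length=256)
blob = encrypt_card(key, "4111111111111111", "order-1234")
assert decrypt_card(key, blob, "order-1234") == "4111111111111111"
```

No script "decodes" AES-GCM without the key; if a script defeated StudentCity's scheme, the key storage or the algorithm choice was the weak point – which is exactly what an audit could tell us.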

The Idiot's Guide to Why Voicemail Hacking is a Crime

Pangloss sent me reeling recently with her statement that “in the wake of the amazing News of the World revelations, there does seem to be some public interest in a quick note on why there is (some) controversy around whether hacking messages in someone's voicemail is a crime.”

What?  Outside Britain I imagine most of us have simply assumed that breaking into people's voicemails MUST be illegal.  So Pangloss's excellent summary of the situation – I share just enough to reveal the issues – is a suitable slap in the face of our naivete:

The first relevant provision is RIPA (the Regulation of Investigatory Powers Act 2000) which provides that interception of communications without consent of both ends of the communication, or some other provision like a police warrant, is criminal in principle. The complications arise from s 2(2) which provides that:

“….a person intercepts a communication in the course of its transmission by means of a telecommunication system if, and only if … (he makes) …some or all of the contents of the communication available, while being transmitted, to a person other than the sender or intended recipient of the communication”. [my itals]

Section 2(4) states that an “interception of a communication” has also to be “in the course of its transmission” by any public or private telecommunications system. [my itals]

The argument that seems to have been made to the DPP, Keir Starmer, in October 2010, by QC David Perry, is that voicemail has already been transmitted and is therefore no longer “in the course of its transmission.” Therefore a RIPA s 1 interception offence would not stand up. The DPP stressed in a letter to the Guardian in March 2011 that this interpretation was (a) specific to the cases of Goodman and Mulcaire (yes the same Goodman who's just been re-arrested and indeed went to jail) and (b) not conclusive as a court would have to rule on it.

We do not know the exact terms of the advice from counsel as (according to advice given to the HC in November 2009) it was delivered in oral form only. There are two possible interpretations even of what we know. One is that messages left on voicemail are “in transmission” till read. Another is that even when they are stored on the voicemail server unread, they have completed transmission, and thus accessing them would not be “interception”.

Very few people I think would view the latter interpretation as plausible, but the former seems to have carried weight with the prosecution authorities. In the case of Milly Dowler, if (as seems likely) voicemails were hacked after she was already deceased, there may have been messages unread and so a prosecution would be appropriate on RIPA without worrying about the advice from counsel. In many other cases, e.g. involving celebrities, though, hacking may have been of already-listened-to voicemails. What is the law there?

When does a message to voicemail cease to be “in the course of transmission”? Chris Pounder pointed out in April 2011 that we also have to look at s 2(7) of RIPA which says

“(7) For the purposes of this section the times while a communication is being transmitted by means of a telecommunication system shall be taken to include any time when the system by means of which the communication is being, or has been, transmitted is used for storing it in a manner that enables the intended recipient to collect it or otherwise to have access to it.”

A common sense interpretation of this, it seems to me (and to Chris Pounder), would be that messages stored on voicemail are deemed to remain “in the course of transmission”, and hence capable of generating a criminal offence when hacked – because they are being stored on the system for later access (which might include re-listening to already played messages).

This rather thoroughly seems to contradict the well-known interpretation offered during the debates in the HL over RIPA from L Bassam, that the analogy of transmission of a voice message or email was to a letter being delivered to a house. There, transmission ended when the letter hit the doormat.

Fascinating issues.  And that's just the beginning.  For the full story, continue here.

Don't take identities from our homes without our consent

Joerg Resch of Kuppinger Cole in Germany wrote recently about the importance of identity management to the Smart Grid – by which he means the emerging energy infrastructure based on intelligent, distributed renewable resources:

In 10-12 years from now, the whole utilities and energy market will look dramatically different. Decentralization of energy production with consumers converting to prosumers pumping solar energy into the grid and offering  their electric car batteries as storage facilities, spot markets for the masses offering electricity on demand with a fully transparent price setting (energy in a defined region at a defined time can be cheaper, if the sun is shining or the wind is blowing strong), and smart meters in each home being able to automatically contract such energy from spot markets and then tell the washing machine to start working as soon as electricity price falls under a defined line. And – if we think a bit further and apply Google-like business models to the energy market, we can get an idea of the incredible size this market will develop into.

These are just a few examples, which might give you an idea on how the “post fossile energy market” will work. The drivers leading the way into this new age are clear: energy production from oil and gas will become more and more expensive, because pollution is not for free and the resources will not last forever. And the transparency gain from making the grid smarter will make electricity cheaper than it is now.

The drivers are getting stronger every day. Therefore, we will soon see many large scale smart grid initiatives, and we will see questions rising such as who has control over the information collected by the smart meter in my home. Is it my energy provider? How would Kim Cameron's 7 laws of Identity work in a smart grid? What would a “grid perimeter” look like which keeps information on the usage of whatever electric devices within my 4 walls? By now, we all know what cybercrimes are and how they can affect each of us. But what are the risks of “smart grid hacking”? How might we be affected by “grid crimes”?

In fact at Black Hat 2009, security consultant Mike Davis demonstrated successful hacker attacks on commercially available smart meters.  He told the conference,

“Many of the security vulnerabilities we found are pretty frightening and most smart meters don't even use encryption or ask for authentication before carrying out sensitive functions like running software updates and severing customers from the power grid.”
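The fix Davis is pointing at is standard public-key signing.  Purely as an illustration – this is my sketch, not any vendor's actual update path – here is what “ask for authentication before running software updates” means, in Python with the third-party cryptography package:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Vendor side: sign the firmware image once, at build time.
vendor_key = Ed25519PrivateKey.generate()
firmware = b"METER-FW-2.1 ..."          # stand-in for a real image
signature = vendor_key.sign(firmware)

# Meter side: only the public key ships on the device.
public_key = vendor_key.public_key()

def apply_update(image: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, image)   # raises if the image was tampered with
    except InvalidSignature:
        return False                    # refuse to flash; log the attempt
    return True

assert apply_update(firmware, signature)
assert not apply_update(firmware + b"!", signature)
```

A meter that performs this check cannot be handed a rogue update by anyone who lacks the vendor's private key – which is precisely the control Davis found missing.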

Privacy Commissioner Ann Cavoukian of Ontario has insisted that industry turn its attention to the security and privacy of these devices:

“The best response is to ensure that privacy is proactively embedded into the design of the Smart Grid, from end to end. The Smart Grid is presently in its infancy worldwide – I’m confident that many jurisdictions will look to our work being done in Ontario as the privacy standard to be met. We are creating the necessary framework with which to address this issue.”

Until recently, no one talked about drive-by mapping of our home devices.  But from now on we will.  When we think about home devices, we need to reach into the future and come to terms with the huge stakes that are up for grabs here.

The smart home and the smart grid alert us to just how important the identity and privacy of our devices really are.  We can use technical mechanisms like encryption to protect some information from eavesdroppers.  But not the patterns of our communication or the identities of our devices…  To do that we need a regulatory framework that ensures commercial interests don't enter our “device space” without our consent.

Google's recent Street View WiFi boondoggle is a watershed event in drawing our attention to these matters.

Enterprise lockdown versus consumer applications

My friend Cameron Westland, who has worked on some cool applications for the iPhone, wrote me to complain that I linked to iPhone Privacy:

I understand the implications of what you are trying to say, but how is this any different from Mac OS X applications accessing the address book or Windows applications accessing contacts? (I'm not sure about Windows, but I know it's possible on a Mac).

Also, the article touches on storing patient information on an iPhone. I believe Seriot is guilty of a major oversight in simply correlating the fact that spy phone has access to contacts with it also being able to do so in a secured enterprise.

If the iPhone is deployed in the enterprise, the corporate administrators can control exactly which applications get installed. In the situations where patient information is stored on the phone, they should be using their own security review process to verify that all applications installed meet the HIPAA certification requirements. Apple makes no claim that applications meet the stringent needs of certain industries – that's why they give control to administrators to encrypt phones, restrict specific application installs, and do remote wipes.

Also, Seriot did no research on the behavior of a phone connected to a company's active directory, versus just the plain old address book… This is cargo cult science at best, and I'm really surprised you linked to it!

I buy Cameron's point that the controls available to enterprises mitigate a number of the attacks presented by Seriot – and agree this is  important.  How do these controls work?  Corporate administrators can set policies specifying the digital signatures of applications that can be installed.  They can use their own processes to decide what applications these will be. 

None of this depends on App Store verification, sandboxing, or Apple's control of platform content.  In fact it is no different from the universally available ability to use a combination of enterprise policy and digital signature to protect enterprise desktop and server systems.  Other features, like the ability for an operator to wipe information, are also pretty much universal.
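In outline – and this is my illustration of the model, not Apple's or anyone's actual code – the policy check reduces to comparing the fingerprint of an application's already-verified signing certificate against an administrator's allow-list:

```python
import hashlib

def signer_fingerprint(signing_cert_der: bytes) -> str:
    # Administrators typically allow-list a hash of the signing certificate.
    return hashlib.sha256(signing_cert_der).hexdigest()

# Hypothetical certificates; in reality these are DER-encoded X.509 certs
# whose signatures over the app package have already been verified.
corporate_cert = b"corp-signing-cert"
unknown_cert = b"random-developer-cert"

ALLOWED = {signer_fingerprint(corporate_cert)}   # set by the administrator

def may_install(cert: bytes) -> bool:
    return signer_fingerprint(cert) in ALLOWED

assert may_install(corporate_cert)
assert not may_install(unknown_cert)
```

Note that nothing in this check involves an app store: it's pure enterprise policy plus digital signature, which is why the same mechanism has long protected desktops and servers.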

If the iPhone can be locked down in enterprises, why is Seriot's paper still worth reading?  Because many companies and even governments are interested in developing customer applications that run on phones.  They can't dictate to customers what applications to install, and so lock-down solutions are of little interest.  They turn to Apple's own claims about security, and find statements like this one, taken from the otherwise quite interesting iPhone security overview.

Runtime Protection

Applications on the device are “sandboxed” so they cannot access data stored by other applications. In addition, system files, resources, and the kernel are shielded from the user’s application space. If an application needs to access data from another application, it can only do so using the APIs and services provided by iPhone OS. Code generation is also prevented.

Seriot shows that taking this claim at face value would be risky.  As he says in an eWeek interview:

“In late 2009, I was involved in discussions with the Swiss private banking industry regarding the confidentiality of iPhone personal data,” Seriot told eWEEK. “Bankers wanted to know how safe their information [stores] were, which ones are exactly at risk and which ones are not. In brief, I showed that an application downloaded from the App Store to a standard iPhone could technically harvest a significant quantity of personal data … [including] the full name, the e-mail addresses, the phone number, the keyboard cache entries, the Wi-Fi connection logs and the most recent GPS location.” 

It is worth noting that Seriot's demonstration is very easy to replicate, and doesn't depend on silly assumptions like convincing the user to disable their security settings and ignore all warnings.
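To see why replication is trivial: the data lives in ordinary preference files that, at the time, any App Store application could open and parse.  A sketch of the idea using Python's standard plistlib (the path and key are the ones Seriot cites for Safari's recent searches, reproduced here from memory, so treat both as illustrative – the point is that no exploit is needed, just a file read):

```python
import plistlib

SAFARI_PREFS = "/private/var/mobile/Library/Preferences/com.apple.mobilesafari.plist"

def harvest_recent_searches(path: str = SAFARI_PREFS) -> list:
    # Parse another application's preference file from a readable location.
    with open(path, "rb") as f:
        prefs = plistlib.load(f)
    return prefs.get("RecentSearches", [])
```

When the "attack" is a file open and a plist parse, no review process that merely samples an app's visible behavior is likely to catch it.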

The points made about banking applications apply even more to medical applications.  Doctors are effectively customers from the point of view of the information management services they use.  Those services won't be able to dictate the applications their customers deploy.  I know for sure that my doctor, bless his soul,  doesn't have an IT department that sets policies limiting his ability to play games or buy stocks.  If he starts using his phone for patient-related activities, he should be aware of the potential issues, and that's what MedPage was talking about.

Neither MedPage, nor CNET, nor eWeek nor Seriot nor I are trying to trash the iPhone – it's just that application isolation is one of the hardest problems of computer science.  We are pointing out that the iPhone is a computing device like all the others and subject to the same laws of digital physics, despite dangerous mythology to the contrary.  On this point I don't think Cameron Westland and I disagree.

 

SpyPhone for iPhone

The MedPage Today blog recently wrote about “iPhone Security Risks and How to Protect Your Data — A Must-Read for Medical Professionals.”  The story begins: 

Many healthcare providers feel comfortable with the iPhone because of its fluid operating system, and the extra functionality it offers, in the form of games and a variety of other apps.  This added functionality is missing with more enterprise-based smart phones, such as the Blackberry platform.  However, this added functionality comes with a price, and exposes the iPhone to security risks. 

Nicolas Seriot, a researcher from the Swiss University of Applied Sciences, has found some alarming design flaws in the iPhone operating system that allow rogue apps to access sensitive information on your phone.

MedPage quotes a CNET article where Elinor Mills reports:

Lax security screening at Apple's App Store and a design flaw are putting iPhone users at risk of downloading malicious applications that could steal data and spy on them, a Swiss researcher warns.

Apple's iPhone app review process is inadequate to stop malicious apps from getting distributed to millions of users, according to Nicolas Seriot, a software engineer and scientific collaborator at the Swiss University of Applied Sciences (HEIG-VD). Once they are downloaded, iPhone apps have unfettered access to a wide range of privacy-invasive information about the user's device, location, activities, interests, and friends, he said in an interview Tuesday…

In addition, a sandboxing technique limits access to other applications’ data but leaves exposed data in the iPhone file system, including some personal information, he said.

To make his point, Seriot has created open-source proof-of-concept spyware dubbed “SpyPhone” that can access the 20 most recent Safari searches, YouTube history, and e-mail account parameters like username, e-mail address, host, and login, as well as detailed information on the phone itself that can be used to track users, even when they change devices.

Following the link to Seriot's paper, called iPhone Privacy, here is the abstract:

It is a little known fact that, despite Apple's claims, any applications downloaded from the App Store to a standard iPhone can access a significant quantity of personal data.

This paper explains what data are at risk and how to get them programmatically without the user's knowledge. These data include the phone number, email accounts settings (except passwords), keyboard cache entries, Safari searches and the most recent GPS location.

This paper shows how malicious applications could pass the mandatory App Store review unnoticed and harvest data through officially sanctioned Apple APIs. Some attack scenarios and recommendations are also presented.

 

In light of Seriot's paper, MedPage concludes:

These security risks are substantial for everyday users, but become heightened if your phone contains sensitive data, in the form of patient information, and when your phone is used for patient care.   Over at iMedicalApps.com, we are not fans of medical apps that enable you to input patient data, and there are several out there.  But we also have peers who have patient contact information stored on their phones, patient information in their calendars, or are accessible to their patients via e-mail.  You can even e-prescribe using your iPhone. 

I don't want to even think about e-prescribing using an iPhone right now, thank you.

Anyone who knows anything about security has known all along that the iPhone – like all devices – is vulnerable to some set of attacks.  For them, iPhone Privacy will be surprising not because it reveals possible attacks, but because of how amazingly elementary they are (the paper is a must-read from this point of view).  

On a positive note, the paper might awaken some of those sent into a deep sleep by proselytizers convinced that Apple's App Store censorship program is reasonable because it protects them from rogue applications.

Evidently Apple's App Store staff take their mandate to protect us from people like award-winning Mad Magazine cartoonist Tom Richmond pretty seriously (see Apple bans Nancy Pelosi bobble head).  If their approach to “protecting” the underlying platform has any merit at all, perhaps a few of them could be reassigned to work part time on preventing trivial and obvious hacker exploits.

But I don't personally think a closed platform with a censorship board is either the right approach or one that can possibly work as attackers get more serious (in fact computer science has long known that this approach is baloney).  The real answer will lie in hard, unfashionable and (dare I say it?) expensive R&D into application isolation and related technologies. I hope this will be an outcome:  first, for the sake of building a secure infrastructure;  second, because one of my phones is an iPhone and I like to explore downloaded applications too.

[Heads Up: Khaja Ahmed]

Make of it what you will

One of the people whose work has most influenced the world of security – a brilliant researcher who is also gifted with a sense of irony and humor – received this email and sent it on to a group of us.   He didn't specify why he thought we would find it useful…  

At any rate, the content boggles the mind.  A joke?  Or a metaspam social engineering attack, intended to bilk jealous boyfriends and competitors? 

Or… could this kind of… virus actually be built and… sold?  

Subject: MMS PHONE INTERCEPTOR – THE ULTIMATE SPY SOLUTION FOR MOBILE PHONES AND THE GREAT PRODUCT FOR YOUR CUSTOMERS

MMS PHONE INTERCEPTOR – The ultimate surveillance solution will enable you to acquire the most valuable information from a mobile phone of a person of your interested.

Now all you will need to do in order to get total control over a NOKIA mobile (target) phone of a person of your interest is to send the special MMS to that target phone, which is generated by our unique MMS PHONE INTERCEPTOR LOADER. This way you can get very valuable and otherwise un-accessible information about a person of your interest very easily.

The example of use:

You will send the special MMS message containing our unique MMS PHONE INTERCEPTOR to a mobile phone of e.g. your girlfriend

In case your girlfriend will be using her (target) mobile phone, you will be provided by following unique functions:

  • In case your girlfriend will make an outgoing call or in case her (target) phone will receive an incoming call, you will get on your personal standard mobile phone an immediate SMS message about her call. This will give you a chance to listen to such call immediately on your standard mobile phone.
  • In case your girlfriend will send an outgoing SMS message from her (target) mobile phone or she will receive a SMS message then you will receive a copy of this message on your mobile phone immediately.
  • This target phone will give you a chance to listen to all sounds in its the surrounding area even in case the phone is switched off. Therefore you can hear very clearly every spoken word around the phone.
  • You will get a chance to find at any time the precise location of your girlfriend by GPS satellites.

All these functions may be activated / deactivated via simple SMS commands.

A target mobile phone will show no signs of use of these functions.

As a consequence of this your girlfriend can by no means find out that she is under your control.

In case your girlfriend will change her SIM card in her (target) phone for a new one, then after switch on of her (target) phone, your (source) phone will receive a SMS message about the change of the SIM card in her (target) phone and its new phone number.

These unique surveillance functions of target phones may be used to obtain very valuable and by no other means accessible information also from other subjects of your interest {managers, key employees, business partners etc, too.

I like the nostalgic sense of convenience and user-friendliness conjured up by this description.  Even better, it reminds me of the comic book ads that used to amuse me as a kid.  So I guess we can just forget all about this and go back to sleep, right?

FYI: Encryption is “not necessary”

A few weeks ago I spoke at a conference of CIOs, CSOs and IT Mandarins that – of course – also featured a session on Cloud Computing.  

It was an industry panel where we heard from the people responsible for security and compliance matters at a number of leading cloud providers.  This was followed by Q and A  from the audience.

There was a lot of enthusiasm about the potential of cutting costs.  The discussion wasn't so much about whether cloud services would be helpful, as about what kinds of things the cloud could be used for.  A government architect sitting beside me thought it was a no-brainer that informational web sites could be outsourced.  His enthusiasm for putting confidential information in the cloud was more restrained.

Quite a bit of discussion centered on how “compliance” could be achieved in the cloud.  The panel was all over the place on the answer.  At one end of the spectrum was a provider who maintained that nothing changed in terms of compliance – it was just a matter of outsourcing.  Rather than creating vast multi-tenant databases, this provider argued that virtualization would allow hosted services to be treated as being logically located “in the enterprise”.

At the other end of the spectrum was a vendor who argued that if the cloud followed “normal” practices of data protection, multi-tenancy (in the sense of many customers sharing the same database or other resource) would not be an issue.  According to him, any compliance problems were due to the way requirements were specified in the first place.  It seemed obvious to him that compliance requirements need to be totally reworked to adjust to the realities of the cloud.

Someone from the audience asked whether cloud vendors really wanted to deal with high value data.  In other words, was there a business case for cloud computing once valuable resources were involved?  And did cloud providers want to address this relatively constrained part of the potential market?

The discussion made it crystal clear that questions of security, privacy and compliance in the cloud are going to require really deep thinking if we want to build trustworthy services.

The session also convinced me that those of us who care about trustworthy infrastructure are in for some rough weather.  One of the vendors shook me to the core when he said, “If you have the right physical access controls and the right background checks on employees, then you don't need encryption”.

I have to say I almost choked.  When you build gigantic, hypercentralized data repositories of valuable private data – honeypots on a scale never before known – you had better take advantage of all the relevant technologies allowing you to build concentric perimeters of protection.  Come on, people – it isn't just a matter of replicating in the cloud the things we do in enterprises that by their very nature benefit from firewalled separation from other enterprises, departmental isolation and separation of duty inside the enterprise, and physical partitioning.
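To make “concentric perimeters” concrete: one widely applicable layer is envelope encryption, where each record gets its own key, the cloud stores only ciphertext plus a wrapped key, and the master key never leaves the enterprise.  A minimal sketch in Python with the third-party cryptography package (all names illustrative):

```python
from cryptography.fernet import Fernet

master_key = Fernet.generate_key()      # held in the enterprise, never uploaded
master = Fernet(master_key)

def encrypt_for_cloud(record: bytes) -> tuple[bytes, bytes]:
    data_key = Fernet.generate_key()    # fresh key per record
    ciphertext = Fernet(data_key).encrypt(record)
    wrapped_key = master.encrypt(data_key)
    return ciphertext, wrapped_key      # both are safe to hand to the provider

def decrypt_from_cloud(ciphertext: bytes, wrapped_key: bytes) -> bytes:
    data_key = master.decrypt(wrapped_key)
    return Fernet(data_key).decrypt(ciphertext)

ct, wk = encrypt_for_cloud(b"ssn=123-45-6789")
assert decrypt_from_cloud(ct, wk) == b"ssn=123-45-6789"
```

With this layer in place, the provider's physical access controls and background checks become one perimeter among several, rather than the only thing standing between an insider and the honeypot.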

I hope people look in great detail at what cloud vendors are doing to innovate with respect to the security and privacy measures required to safely offer hypercentralized, co-mingled sensitive and valuable data. 

The Identity Domino Effect

My friend Jerry Fishenden, Microsoft's National Technology Officer in the United Kingdom, had a piece in The Scotsman recently where he lays out, with great clarity, many of the concerns that “keep me up at night”.  I hope this kind of thinking will one day be second nature to policy makers and politicians world wide. 

Barely a day passes it seems without a new headline appearing about how our personal information has been lost from yet another database. Last week, the Information Commissioner, Richard Thomas, revealed that the number of reported data breaches in the UK has soared to 277 since HMRC lost 25 million child benefit records nearly a year ago. “Information can be a toxic liability,” he commented.

Such data losses are bad news on many fronts. Not just for us, when it's our personal information that is lost or misplaced, but because it also undermines trust in modern technology. Personal information in digital form is the very lifeblood of the internet age and the relentless rise in data breaches is eroding public trust. Such trust, once lost, is very hard to regain.

Earlier this year, Sir James Crosby conducted an independent review of identity-related issues for Gordon Brown. It included an important underlying point: that it's our personal data, nobody else's. Any organisation, private or public sector, needs to remember that. All too often the loss of our personal information is caused not by technical failures, but by lackadaisical processes and people.

These widely-publicised security and data breaches threaten to undermine online services. Any organisations, including governments, which inadequately manage and protect users’ personal information, face considerable risks – among them damage to reputation, penalties and sanctions, lost citizen confidence and needless expense.

Of course, problems with leaks of our personal information from existing public-sector systems are one thing. But significant additional problems could arise if yet more of our personal information is acquired and stored in new central databases. In light of projects such as the proposed identity cards programme, ContactPoint (storing details of all children in the UK), and the Communications Data Bill (storing details of our phone records, e-mails and websites we have visited), some of Richard Thomas's other comments are particularly prescient: “The more databases set up and the more information exchanged from one place to another, the greater the risk of things going wrong. The more you centralise data collection, the greater the risk of multiple records going missing or wrong decisions about real people being made. The more you lose the trust and confidence of customers and the public, the more your prosperity and standing will suffer. Put simply, holding huge collections of personal data brings significant risks.”

The Information Commissioner's comments highlight problems that arise when many different pieces of information are brought together. Aggregating our personal information in this way can indeed prove “toxic”, producing the exact opposite consequences of those originally intended. We know, for example, that most intentional breaches and leaks of information from computer systems are actually a result of insider abuse, where some of those looking after these highly sensitive systems are corrupted in order to persuade them to access or even change records. Any plans to build yet more centralised databases will raise profound questions about how information stored in such systems can be appropriately secured.

The Prime Minister acknowledges these problems: “It is important to recognise that we cannot promise that every single item of information will always be safe, because mistakes are made by human beings. Mistakes are made in the transportation, if you like – the communication of information”.

This is an honest recognition of reality. No system can ever be 100 per cent secure. To help minimise risks, the technology industry has suggested adopting proposals such as “data minimisation” – acquiring as little data as required for the task at hand and holding it in systems no longer than absolutely necessary. And it's essential that only the minimum amount of our personal information needed for the specific purpose at hand is released, and then only to those who really need it.

Unless we want to risk a domino effect that will compromise our personal information in its entirety, it is also critical that it should not be possible automatically to link up everything we do in all aspects of how we use the internet. A single identifying number, for example, that stitches all of our personal information together would have many unintended, deeply negative consequences.

There is much that governments can do to help protect citizens better. This includes adopting effective standards and policies on data governance, reducing the risk to users’ privacy that comes with unneeded and long-term storage of personal information, and taking appropriate action when breaches do occur. Comprehensive data breach notification legislation is another important step that can help keep citizens informed of serious risks to their online identity and personal information, as well as helping rebuild trust and confidence in online services.

Our politicians are often caught between a rock and a very hard place in these challenging times. But the stream of data breaches and the scope of recent proposals to capture and hold even more of our personal information does suggest that we are failing to ensure an adequate dialogue between policymakers and technologists in the formulation of UK public policy.

This is a major problem that we can, and must, fix. We cannot let our personal information in digital form, as the essential lifeblood of the internet age, be allowed to drain away under this withering onslaught of damaging data breaches. It is time for a rethink, and to take advantage of the best lessons that the technology industry has learned over the past 30 or so years. It is, after all, our data, nobody else's.

My identity has already been stolen through the very mechanisms Jerry describes.  I would find this even more depressing if I didn't see more and more IT architects understanding the identity domino problem – and how it could affect their own systems. 

It's our job as architects to do everything we can so the next generation of information systems are as safe from insider attacks as we can make them.  On the one hand this means protecting the organizations we work for from unnecessary liability;  on the other, it means protecting the privacy of our customers and employees, and the overall identity fabric of society.

In particular, we need to insist on:

  • scrupulously partitioning personally identifying information from operational and profile data;
  • eliminating “rainy day” collection of information – the need for data must always be justifiable;
  • preventing personally identifying information from being stored on multiple systems;
  • use of encryption;
  • minimal disclosure of identity information within a “need-to-know” paradigm.

I particularly emphasize partitioning PII from operational data since most of a typical company's operational systems – and employees – need no access to PII.  Those who do need such access rarely need to know anything beyond a name.  Those who do need greater access to detailed information rarely need access to information about large numbers of people except in anonymized form.
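As a sketch of what that partitioning looks like (my illustration, with hypothetical stores): operational systems carry only an opaque token, and the token-to-person mapping lives in a separate vault that almost no one – and no bulk query – can reach.

```python
import secrets

pii_vault = {}        # isolated store; access is rare, narrow and audited
operational_db = []   # what ordinary systems and employees actually see

def register_customer(name: str, passport: str, order: str) -> str:
    token = secrets.token_hex(16)    # random; carries no meaning by itself
    pii_vault[token] = {"name": name, "passport": passport}
    operational_db.append({"customer": token, "order": order})
    return token

register_customer("A. Person", "X1234567", "spring-break-2012")
# Fulfilment, analytics and support all run off operational_db alone;
# stealing it yields tokens, not identities.
```

An insider who copies the operational database gets nothing personally identifying; only the rare, audited path that dereferences tokens in the vault can link records to people.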

I would love someone to send me a use case that calls for anyone to have access – at the same time – to the personally identifying information about thousands of individuals  (much less millions, as was the case for some of the incidents Jerry describes).  This kind of wholesale access was clearly afforded the person who stole my identity.  I still don't understand why. 

Where is your identity more likely to be stolen?

From the Economist:

Internet users in Britain are more likely to fall victim to identity theft than their peers elsewhere in Europe and North America. In a recent survey of 6,000 online shoppers in six countries by PayPal and Ipsos Research, 14% of respondents in Britain said that they have had their identities stolen online, compared with only 3% in Germany. More than half of respondents said that they used personal dates and names as passwords, making it relatively easy for scammers to gain access to accounts. The French are particularly cavalier, with two-thirds using easily guessed passwords and over 80% divulging personal data, such as birthdays, on social-networking sites.

Of course, my identity was stolen (and apparently sold) NOT because of inadequate password hygiene, but just because I was dealing with Countrywide – a company whose computer systems were sufficiently misdesigned to be vulnerable to a large-scale insider attack.  So there are a lot of things to fix before we get to a world of trustworthy computing.

 

Kim Cameron's excellent adventure

I need to correct a few of the factual errors in recent posts by James Governor and Jon Udell.  James begins by describing our recent get-together:

We talked about Project Geneva, a new claims based access platform which supersedes Active Directory Federation Services, adding support for SAML 2.0 and even the open source web authentication protocol OpenID.

Geneva is big news for OpenID. As David Recordon, one of the prime movers behind the standard said on Twitter yesterday:

Microsoft’s Live ID is adding support for OpenID. Goodbye proprietary identity technologies for the web! Good work MSFT

TechCrunch took the story forward, calling out de facto standardization:

Login standard OpenID has gotten a huge boost today from Microsoft, as the company has announced that users will soon be able to login to any OpenID site using their Windows Live IDs. With over 400 million Windows Live accounts (many of which see frequent use on the Live’s Mail and Messenger services), the announcement is a massive win for OpenID. And Microsoft isn’t just supporting OpenID – the announcement goes as far as to call it the de facto login standard [the announcement actually calls it “an emerging, de facto standard” – Kim] 

But that’s not what this post is supposed to be about. No I am talking about the fact [that] later yesterday evening Kim hacked his way into a party at the standard using someone else’s token!  [Now this is where I think some “small tweaks” start to be called for… – Kim]

It happened like this. I was talking to Mary Branscombe, Simon Bisson and John Udell when suddenly Mary jumped up with a big smile on her face. Kim, who has a kind of friendly bear look about him, had arrived. She ran over and then I noticed that a bouncer had his arm across Kim’s chest (”if your name’s not down you’re not coming in”). Kim had apparently wandered upstairs without getting his wristband first. Kim disappeared off downstairs, and I figured he might not even come back. A few minutes later though and there he was. I assumed he had found an organizer downstairs to give him a wristband… When he said that he actually had taken the wristband from someone leaving the party, and hooked it onto his wrist me and John practically pissed our pants laughing. As Jon explains (in Kim Cameron's Excellent Adventure):

If you don’t know who Kim is, what’s cosmically funny here is that he’s the architect for Microsoft’s identity system and one of the planet’s leading authorities on identity tokens and access control.

We stood around for a while, laughing and wondering if Kim would reappear or just call it a night. Then he emerged from the elevator, wearing a wristband which — wait for it — belonged to John Fontana.  Kim hacked his way into the party with a forged credential! You can’t make this stuff up!

While there is certainly some cosmic truth to this description, and while I did in fact back away slightly from the raucous party at the precise moment James says he and Jon “pissed their pants”, John Fontana did NOT actually give me his wristband.  You see, he didn't have a wristband either.

So let's go through this step by step.  It all began with the invite that brought me to the party in the first place:

As a spokesperson for PDC2008, we’re looking forward to having you join us at the Rooftop Bar of the Standard Hotel for the Media/Analyst party on October 27th at 7:00pm

This invite came directly from the corporate Department of Parties.

I point this out just to ward off any unfair accusations that I just wanted to raid the party's immense Martini bar. Those who know me also know nothing could be further from the truth. You have to force a Martini into my hands.  My attendance represented nothing but Duty.  But I digress.

Protocol Violation

The truth of the matter is that I ran into John Fontana in the cafe of the Standard and we arrived at the party together.  He had been invited because this was, ummm, a Press party and he was, ummm, Press. 

However, it didn’t take more than a few seconds for us to see that the protocol for party access control had not been implemented correctly.   We just assumed this was a bug due to the fact that the party was celebrating a Beta, and that we would have to work our way past it as all beta participants do. 

Let’s just say the token-issuing part of the party infrastructure had crashed, whereas the access control point was operating in an out-of-control fashion.

Looking at it from an architectural point of view, the admission system was based on what is technically called “bearer” tokens (wristbands). Such tokens are NOT actually personalized in any way, or tied to the identity of the person they are given to through some kind of proof. If you “have” the token, you ARE the bearer of the token.
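For the technically inclined, the party's entire access control system amounted to something like this (a playful sketch, obviously – the Standard did not share its source code):

```python
VALID_WRISTBANDS = {"band-0427", "band-0913"}   # issued at the door

def admit(token: str) -> bool:
    # No binding between token and holder: whoever *bears* a valid
    # wristband is, by definition, authorized.
    return token in VALID_WRISTBANDS

assert admit("band-0427")   # true for the original recipient -- or for me
```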

So one of those big ideas slowly began to take root in our minds.  Why not become bearers of the requisite tokens, thereby compensating for the inoperative token-issuing system?

Well, at that point, since not a few of the people leaving the party knew us,  John and I explained our “aha”, and pointed out the moribund token-issuing component.  As is typical of people seeing those in need of help, we were showered with offers of assistance.

I happened to be rescued by an unknown bystander with incredibly nimble and strong fingers and deep expertise with wristband technology.  She was able to easily dislodge her wristband and put it on me in such a way that its integrity was totally intact.

There was no forged token.  There was no stolen token.  It was a real token.  I just became the bearer.

When we got back upstairs, the access control point evaluated my token – and presto – let me in to join a certain set of regaling hedonists basking in the moonlight.  

But sadly – and unfairly –  John’s token was rejected since its donor, lacking the great skill of mine, had damaged it during the token transplant.

Despite the Martini now in my hand, I was overcome by that special sadness you feel when escaping ill fate wrongly allotted to one more deserving of good fortune than you.  John slipped silently out of the queue and slinked off to a completely different party.

So that's it, folks.  Yet the next morning I had to wake up and confront again my humdrum life.  But I do so inspired by the kindness of both strangers and friends (have I gone too far?)