Head over to the Office of Inadequate Security

First of all, I have to refer readers to the Office of Inadequate Security, apparently operated by databreaches.net. I suggest heading over there pretty quickly too – the office is undoubtedly going to be so busy you'll have to line up as time goes on.

So far it looks like the go-to place for info on breaches – it even has a Twitter feed for breach junkies.

Recently the Office published an account that raises a lot of questions:

I just read a breach disclosure to the New Hampshire Attorney General’s Office with accompanying notification letters to those affected that impressed me favorably. But first, to the breach itself:

StudentCity.com, a site that allows students to book trips for school vacation breaks, suffered a breach in their system that they learned about on June 9 after they started getting reports of credit card fraud from customers. An FAQ about the breach, posted on www.myidexperts.com, explains:

StudentCity first became concerned there could be an issue on June 9, 2011, when we received reports of customers travelling together who had reported issues with their credit and debit cards. Because this seemed to be with 2011 groups, we initially thought it was a hotel or vendor used in conjunction with 2011 tours. We then became aware of an account that was 2012 passengers on the same day who were all impacted. This is when we became highly concerned. Although our processing company could find no issue, we immediately notified customers about the incident via email, contacted federal authorities and immediately began a forensic investigation.

According to the report to New Hampshire, where 266 residents were affected, the compromised data included students’ credit card numbers, passport numbers, and names. The FAQ, however, indicates that dates of birth were also involved.

Frustratingly for StudentCity, the credit card data had been encrypted but their investigation revealed that the encryption had broken in some cases. In the FAQ, they explain:

The credit card information was encrypted, but the encryption appears to have been decoded by the hackers. It appears they were able to write a script to decode some information for some customers and most or all for others.

The letter to the NH AG’s office, written by their lawyers on July 1, is wonderfully plain and clear in terms of what happened and what steps StudentCity promptly took to address the breach and prevent future breaches, but it was the tailored letters sent to those affected on July 8 that really impressed me for their plain language, recognition of concerns, active encouragement of the recipients to take immediate steps to protect themselves, and for the utterly human tone of the correspondence.

Kudos to StudentCity.com and their law firm, Nelson Mullins Riley & Scarborough, LLP, for providing an exemplar of a good notification.

It would be great if StudentCity would bring in some security experts to audit the way encryption was done, and report on what went wrong. I don't say this to be punitive; I agree that StudentCity deserves credit for at least attempting to employ encryption. But the outcome points to the fact that we need programming frameworks that make it easy to get truly robust encryption and key protection – and to deploy it in a minimal disclosure architecture that keeps secrets off-line. If StudentCity goes the extra mile in helping others learn from their unfortunate experience, I'll certainly be a supporter.
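We don't know what scheme StudentCity actually used, but the FAQ's phrase “write a script to decode” is exactly what happens when encryption is home-grown rather than built on a vetted framework. Here is a toy sketch (keys and card numbers are all invented) of one classic weak choice, repeating-key XOR, falling to a single known plaintext:

```python
# Hypothetical illustration: why "encrypted" data can fall to a simple script.
# This is NOT StudentCity's scheme (which is unknown); it sketches a classic
# weak design -- repeating-key XOR -- broken with one known plaintext.

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """'Encrypt'/decrypt by XOR with a short repeating key (a weak scheme)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"s3cr3t"  # hypothetical application-wide key

# Two customers' card numbers, "protected" with the same key.
ct1 = xor_crypt(b"4111111111111111", key)
ct2 = xor_crypt(b"5500005555555559", key)

# An attacker who learns (or guesses) ONE plaintext recovers the key stream...
recovered_key_stream = bytes(c ^ p for c, p in zip(ct1, b"4111111111111111"))

# ...and can then decode every other record encrypted the same way.
decoded = bytes(c ^ k for c, k in zip(ct2, recovered_key_stream))
print(decoded)  # b'5500005555555559'
```

This is why the frameworks matter: authenticated, randomized encryption with keys kept out of the application's reach doesn't yield to a weekend script.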

Kerry-McCain bill proposes “minimal disclosure” for transactions

Steve Satterfield at Inside Privacy gives us this overview of the central features of the new Commercial Privacy Bill of Rights proposed by US Senators Kerry and McCain (download it here):

  • The draft envisions a significant role for the FTC and includes provisions requiring the FTC to promulgate rules on a number of important issues, including the appropriate consent mechanism for uses of data.  The FTC would also be tasked with issuing rules obligating businesses to provide reasonable security measures for the consumer data they maintain and to provide transparent notices about data practices.
  • The draft also states that businesses should “seek” to collect only as much “covered information” as is reasonably necessary to provide a transaction or service requested by an individual, to prevent fraud, or to improve the transaction or service.
  • “Covered information” is defined broadly and would include not just “personally identifiable information” (such as name, address, telephone number, social security number), but also “unique identifier information,” including a customer number held in a cookie, a user ID, a processor serial number or a device serial number.  Unlike definitions of “covered information” that appear in separate bills authored by Reps. Bobby Rush (D-Ill.) and Jackie Speier (D-Cal.), this definition specifically covers cookies and device IDs.
  • The draft encompasses a data retention principle, providing that businesses should retain covered information only as long as necessary to provide the transaction or service “or for a reasonable period of time if the service is ongoing.” 
  • The draft contemplates enforcement by the FTC and state attorneys general.  Notably — and in contrast to Rep. Rush's bill — the draft does not provide a privacy right of action for individuals who are affected by a violation. 
  • Nor does the bill specifically address the much-debated “Do Not Track” opt-out mechanism that was recommended in the FTC's recent staff report on consumer privacy.  (You can read our analysis of that report here.) 

As noted above, the draft is reportedly still a work in progress.  Inside Privacy will provide additional commentary on the Kerry legislation and other congressional privacy efforts as they develop.   

Press conference will be held tomorrow at 12:30 pm.  [Emphasis above is mine – Kim]

Readers of Identityblog will understand that I see this development, like so many others, as inevitable and predictable consequences of many short-sighted industry players breaking the Laws of Identity.

 

ZIP ruled personally identifying in California

From CNN this surprising story:

California's high court ruled Thursday that retailers don't have the right to ask customers for their ZIP code while completing credit card transactions, saying that doing so violates a cardholder's right to protect his or her personal information.

Many retailers in California and nationwide now ask people to give their ZIP code, punching in that information and recording it. Yet the California Supreme Court's seven justices unanimously determined that this practice goes too far.

The ruling, penned by Justice Carlos Moreno, overrules earlier decisions by trial and appeals courts in California. It points to a 1971 state law that prohibits businesses from asking credit cardholders for “personal identification information” that could be used to track them down.

While a ZIP code isn't a full address, the court's judgment states that asking for it — and piecing that 5-digit number together with other information, like a cardholder's name — “would permit retailers to obtain indirectly what they are clearly prohibited from obtaining directly, (therefore) ‘end-running'” the intent of California state laws.

“The legislature intended to provide robust consumer protections by prohibiting retailers from soliciting and recording information about the cardholder that is unnecessary to the credit card transaction,” the decision states. “We hold that personal identification information … includes the cardholder's ZIP code.”

Bill Dombrowski, president of the California Retailers Association, said it is “ironic” that a practice aimed partly at protecting consumers from fraud is being taken away.

“We think it's a terrible decision because it dramatically expands what personal information is, by including a ZIP code as part of an address,” Dombrowski said. “We are surprised by it.”

The court decision applies only in California, though it reflects a practice that is increasingly common elsewhere. It does not specify how or if all businesses that take credit cards, such as gas stations, would be affected — though it does state that its objection is not over a retailer seeing a person's ZIP code, but rather recording and using it for marketing purposes.

The discussion began with a June 2008 class-action lawsuit filed initially by Jessica Pineda against home retailer Williams-Sonoma.

In her suit, Pineda claimed that a cashier had asked for her ZIP code during a purchase — information that was recorded and later used, along with her name, to figure out her home address. Williams-Sonoma did this by tapping a database that it uses to market products to customers and sell its compiled consumer information to other businesses.

Pineda contended the practice of asking for ZIP codes violates a person's right to privacy, made illegal use of her personal information and gave a retailer, like Williams-Sonoma, an unfair competitive advantage.

Williams-Sonoma claimed that a ZIP code doesn&#39t constitute “personal identification information,” as stated in the 1971 state law.

The state supreme court ruling, only addressing the “identification information” issue, determined that a ZIP code should be protected, since the law specifically mentions protecting a cardholder&#39s address. The court concluded requesting a ZIP code is not much different than asking for a phone number or home address.

It is not illegal in California for a retailer to see a person's ZIP code or address, the ruling notes: For instance, one can request a customer's driver's license to verify his or her identity. What makes it wrong is when a business records that information, according to the ruling, especially when the practice is “unnecessary to the sales transaction.”

In reversing the Court of Appeals judgment, the supreme court remanded the case back to a lower court to order specific changes and policies “consistent with this decision.”

The important thing here is that the Court understood a very nuanced technical point: although the ZIP is not in itself personally identifying, when used with other information such as name, the ZIP becomes personally identifying.  Understanding the privacy implications of such information combinations is key. I think there is much wisdom in the Court recognizing that this is a defining issue.
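The Court's nuance can be made concrete with a toy example (all names, ZIPs and records below are invented): each field alone matches several people, but the combination pins down exactly one.

```python
# Toy illustration of the quasi-identifier point (all data invented):
# neither a name nor a ZIP alone identifies one person, but together they do.
customers = [
    {"name": "J. Smith", "zip": "90210", "card": "x1111"},
    {"name": "J. Smith", "zip": "94107", "card": "x2222"},
    {"name": "A. Jones", "zip": "90210", "card": "x3333"},
]

def matches(records, **attrs):
    """Return the records matching every given attribute."""
    return [r for r in records if all(r[k] == v for k, v in attrs.items())]

print(len(matches(customers, name="J. Smith")))               # 2 candidates
print(len(matches(customers, zip="90210")))                   # 2 candidates
print(len(matches(customers, name="J. Smith", zip="90210")))  # exactly 1
```

Scale the toy up to a marketing database and the retailer has re-identified the cardholder – which is precisely the “end-running” the Court objected to.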

In terms of industry reaction, the notion that recording our ZIP protects us is totally ludicrous and shows to what extent we are in need of stronger privacy-protecting identity solutions like U-Prove. The logic of the California Retailers Association is pathetically convoluted – will someone please give these people a consultant for Christmas?

My thanks to Craig Wittenberg for the heads up on this story. He saw it as a sign that minimal disclosure laws already exist in the US…

That's an interesting idea. One way or the other, it is extremely important to get harmonization on this kind of question across business jurisdictions.  Looking at cases like this one, I have a feeling harmonization might possibly take “quite a while” to achieve…

TERENA Networking Conference and the Fourth Law

I gave a plenary keynote on the Laws of Identity to the TERENA conference in Vilnius this week.  The intense controversy around Google's world mapping of private WiFi identifiers made it pretty clear that the Fourth Law of Identity is not an academic exercise.  TERENA is a place where people “collaborate, innovate and share knowledge in order to foster the development of Internet technology, infrastructure and services to be used by the research and education community.”

People in the identity community will have heard me talk about the Laws of Identity before. However it was refreshing to discuss their implications with people who are world experts on networking issues.  [Humanitarian hint:  don't blow up the video or you'll not only miss the slides but become very conscious that I had several cups of good Vilnius coffee before getting up on stage.]

 

More precision on the Right to Correlate

Dave Kearns continues to whack me for some of my terminology in discussing data correlation.  He says: 

‘In responding to my “violent agreement” post, Kim Cameron goes a long way towards beginning to define the parameters for correlating data and transactions. I'd urge all of you to jump into the discussion.

‘But – and it's a huge but – we need to be very careful of the terminology we use.

‘Kim starts: “Let’s postulate that only the parties to a transaction have the right to correlate the data in the transaction, and further, that they only have the right to correlate it with other transactions involving the same parties.” ‘

Dave's right that this was overly restrictive.  In fact I changed it within a few minutes of the initial post – but apparently not fast enough to prevent confusion.  My edited version stated:

‘Let’s postulate that only the parties to a transaction have the right to correlate the data in the transaction (unless it is fully anonymized).’

This way of putting things eliminates Dave's concern:

‘Which would mean, as I read it, that I couldn't correlate my transactions booking a plane trip, hotel and rental car since different parties were involved in all three transactions!’

That said, I want to be clear that “parties to a transaction” does NOT include what Dave calls “all corporate partners” (aka a corporate information free-for-all!).  It just means that parties (for example, corporations) participating directly in some transaction can correlate it with the other transactions in which they directly participate (but not with the transactions of some other corporation unless they get approval from the transaction participants to do so). 
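The rule as restated can be sketched as a simple policy check (the data model and function name here are my own invention for illustration, not anything Kim or Dave proposed):

```python
# Sketch of the "right to correlate" rule: a party may correlate two
# transactions only if it directly participated in both.
# (Data model and names are invented for illustration.)

def may_correlate(party: str, tx_a: dict, tx_b: dict) -> bool:
    return party in tx_a["parties"] and party in tx_b["parties"]

flight = {"id": "tx1", "parties": {"alice", "AirCo"}}
hotel  = {"id": "tx2", "parties": {"alice", "HotelCo"}}

print(may_correlate("alice", flight, hotel))  # True: Alice was party to both
print(may_correlate("AirCo", flight, hotel))  # False: AirCo saw only the flight
```

So Dave's trip-booking scenario is fine – he is a direct party to all three transactions – while AirCo and HotelCo would need his approval to correlate across each other's records.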

Dave argues:

‘In the end, it isn't the correlation that's problematic, but the use to which it's put. So let's tie up the usage in a legally binding way, and not worry so much about the tools and technology.

‘In many ways the internet makes anti-social and unethical behavior easier. That doesn't mean (as some would have it) that we need to ban internet access or technological tools. It does mean we need to better educate people about acceptable behavior and step up our policing tools to better enable us to nab the bad guys (while not inconveniencing the good guys).’

To be perfectly clear, I'm not proposing a ban on technology!  I don't do banning!  I do creation. 

So instead, I&#39m arguing that as we develop our new technologies we should make sure they support the “right to correlation” – and the delegation of that right – in ways that restore balance and give people a fighting chance to prevent unseen software robots from limiting their destinies.

 

FYI: Encryption is “not necessary”

A few weeks ago I spoke at a conference of CIOs, CSOs and IT Mandarins that – of course – also featured a session on Cloud Computing.  

It was an industry panel where we heard from the people responsible for security and compliance matters at a number of leading cloud providers.  This was followed by Q and A  from the audience.

There was a lot of enthusiasm about the potential of cutting costs.  The discussion wasn't so much about whether cloud services would be helpful, as about what kinds of things the cloud could be used for.  A government architect sitting beside me thought it was a no-brainer that informational web sites could be outsourced.  His enthusiasm for putting confidential information in the cloud was more restrained.

Quite a bit of discussion centered on how “compliance” could be achieved in the cloud.  The panel was all over the place on the answer.  At one end of the spectrum was a provider who maintained that nothing changed in terms of compliance – it was just a matter of outsourcing.  Rather than creating vast multi-tenant databases, this provider argued that virtualization would allow hosted services to be treated as being logically located “in the enterprise”.

At the other end of the spectrum was a vendor who argued that if the cloud followed “normal” practices of data protection, multi-tenancy (in the sense of many customers sharing the same database or other resource) would not be an issue.  According to him, any compliance problems were due to the way requirements were specified in the first place.  It seemed obvious to him that compliance requirements need to be totally reworked to adjust to the realities of the cloud.

Someone from the audience asked whether cloud vendors really wanted to deal with high value data.  In other words, was there a business case for cloud computing once valuable resources were involved?  And did cloud providers want to address this relatively constrained part of the potential market?

The discussion made it crystal clear that questions of security, privacy and compliance in the cloud are going to require really deep thinking if we want to build trustworthy services.

The session also convinced me that those of us who care about trustworthy infrastructure are in for some rough weather.  One of the vendors shook me to the core when he said, “If you have the right physical access controls and the right background checks on employees, then you don't need encryption”.

I have to say I almost choked.  When you build gigantic, hypercentralized, data repositories of valuable private data – honeypots on a scale never before known – you had better take advantage of all the relevant technologies allowing you to build concentric perimeters of protection.  Come on, people – it isn't just a matter of replicating in the cloud the things we do in enterprises that by their very nature benefit from firewalled separation from other enterprises, departmental isolation and separation of duty inside the enterprise, and physical partitioning.  

I hope people look in great detail at what cloud vendors are doing to innovate with respect to the security and privacy measures required to safely offer hypercentralized, co-mingled sensitive and valuable data. 

My dog ate my homework

Am I the only one, or is this a strange email from Facebook?

I mean, “lost”??  No backups?  

I hear you.  This must be fake – a phishing email, right?   

No https on the page I'm directed to, either… The average user doesn't have a chance when figuring out whether this is legit or not.  So guess what.  He or she won't even try.

I'll forget and forgive the “loss”, but following it up by putting all their users through a sequence of steps that teaches them how to be phished really stinks.

Seems to drive home the main premise of Information Cards set forth in the Laws of Identity:

Hundreds of millions of people have been trained to accept anything any site wants to throw at them as being the “normal way” to conduct business online. They have been taught to type their names, secret passwords and personal identifying information into almost any input form that appears on their screen.

There is no consistent and comprehensible framework allowing them to evaluate the authenticity of the sites they visit, and they don’t have a reliable way of knowing when they are disclosing private information to illegitimate parties.

 

The economics of vulnerabilities…

Gunnar Peterson of 1 Raindrop has blogged his keynote at the recent Quality of Protection conference.  It is a great read – and a defense in depth against the binary “secure / not secure” polarity that characterizes the thinking of those new to security matters. 

His argument riffs on Dan Geer's famous “Risk Management is Where the Money Is”.  He turns to Warren Buffett as someone who knows something about this kind of thing, writing:

“Of course, saying that you are managing risk and actually managing risk are two different things. Warren Buffett started off his 2007 shareholder letter talking about financial institutions’ ability to deal with the subprime mess in the housing market saying, “You don't know who is swimming naked until the tide goes out.” In our world, we don't know whose systems are running naked, with no controls, until they are attacked. Of course, by then it is too late.

“So the security industry understands enough about risk management that the language of risk has permeated almost every product, presentation, and security project for the last ten years. However, a friend of mine who works at a bank recently attended a workshop on security metrics, and came away with the following observation – “All these people are talking about risk, but they don't have any assets.” You can't do risk management if you don't know your assets.

“Risk management requires that you know your assets, that on some level you understand the vulnerabilities surrounding your assets, the threats against those, and efficacy of the countermeasures you would like to use to separate the threat from the asset. But it starts with assets. Unfortunately, in the digital world these turn out to be devilishly hard to identify and value.

“Recent events have taught us again, that in the financial world, Warren Buffett has few peers as a risk manager. I would like to take the first two parts of this talk looking at his career as a way to understand risk management and what we can infer for our digital assets.

Analysing vulnerabilities and the values of assets, he uncovers two pyramids that turn out to be inverted. 

To deliver a real Margin of Safety to the business, I propose the following based on a defense in depth mindset. Break the IT budget into the following categories:

  • Network: all the resources invested in Cisco, network admins, etc.
  • Host: all the resources invested in Unix, Windows, sys admins, etc.
  • Applications: all the resources invested in developers, CRM, ERP, etc.
  • Data: all the resources invested in databases, DBAs, etc.

Tally up each layer. If you are like most businesses you will probably find that you spend most on Applications, then Data, then Host, then Network.

Then do the same exercise for the Information Security budget:

  • Network: all the resources invested in network firewalls, firewall admins, etc.
  • Host: all the resources invested in vulnerability management, patching, etc.
  • Applications: all the resources invested in static analysis, black box scanning, etc.
  • Data: all the resources invested in database encryption, database monitoring, etc.

Again, tally up each layer. If you are like most businesses you will find that you spend most on Network, then Host, then Applications, then Data. Congratulations, Information Security, you are diametrically opposed to the business!
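Gunnar's two tallies can be sketched in a few lines (the figures below are invented placeholders, ordered the way he says most businesses come out, not numbers from his talk):

```python
# Sketch of Gunnar's inverted-pyramid comparison; all figures are
# invented placeholders reflecting the orderings he describes.
it_spend = {"Applications": 40, "Data": 30, "Host": 20, "Network": 10}
security_spend = {"Network": 40, "Host": 30, "Applications": 20, "Data": 10}

def rank(spend):
    """Layers ordered from biggest to smallest share of the budget."""
    return sorted(spend, key=spend.get, reverse=True)

print(rank(it_spend))        # ['Applications', 'Data', 'Host', 'Network']
print(rank(security_spend))  # ['Network', 'Host', 'Applications', 'Data']
# The business puts Network last; Information Security puts it first.
```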

He relates his thinking to a fascinating piece by Pat Helland called SOA and Newton's Universe (a must-read to which I will return) and then proposes some elements of a concrete approach to development of meaningful metrics that he argues allow correlation of “value” and “risk” in ways that could sustain meaningful business decisions. 

In an otherwise clear argument, Gunnar itemizes a series of “Apologies”, in the sense of corrections applied post facto due to the uncertainty of decision-making in a distributed environment:

Example Apologies – Identity Management tools – provisioning, deprovisioning, Reimburse customer for fraud losses, Compensating Transaction – Giant Global Bank is still sorry your account was compromised!

Try as I might, I don't understand the categorization of identity management tools as apology, or their relationship to account compromise – I hope Gunnar will tell us more. 

Security and ContactPoint: perception is all

Given the recent theft of my identity while it was being “stewarded” by Countrywide, I feel especially motivated to share with you this important piece on ContactPoint by Sir Bonar Neville-Kingdom GCMG KCVO that appeared in Britain's Ideal Government.   Sir Bonar writes:

I’m facing a blizzard of Freedom of Information requests from the self-appointed (and frankly self-righteous) civil liberties brigade about releasing details of the ContactPoint security review. Of course we’re all in favour of Freedom of Information to a point but there is a limit.

Perhaps I might point out:

The decision not to release any information about the ContactPoint security review was taken by an independent panel. I personally chaired this panel to ensure its independence from any outside interests. I was of course not directly involved in the original requests, which were handled by a junior staff member.

The security of ContactPoint relies on nobody knowing how it works. If nobody knows what the security measures are, how can they possibly circumvent them? This is simply common sense. Details of the security measures will be shared only with the 330,000 accredited and vetted public servants who will have direct access to the database of children.

We’re hardly going to ask every Tom, Dick and Harry for how to keep our own data secure when, as you’re probably aware, our friends in Cheltenham pretty much invented the whole information security game. To share the security details with some troublemaking non-governmental organisation is merely to ask for trouble with the news media and to put us all needlessly at risk. The Department will not tolerate such risk and it is clearly not in the public interest to do so.

We did consider whether to redact and release any text.  We concluded that the small amount of text that would result after redacting text that should not be released would be incoherent and without context.  Such a release would serve no public interest.

ContactPoint is both a safe and secure system and I should remind everyone that it is fundamental to its success that it is perceived as such by parents, the professionals that use it and others with an interest in ContactPoint and its contribution to delivering the Every Child Matters agenda. Maintaining this perception of absolute “gold standard” security is why it is so important that nobody should question the security arrangements put in by our contractor Cap Gemini (whom I shall be meeting again in Andorra over the weekend).

We must guard the public mind – and indeed our own minds – against any inappropriate concerns on data security.

All this is set out on the Every Child Matters website, which includes a specific and contextual reference to the ContactPoint Data Security Review.  The content has been recently updated and can be found at: http://www.everychildmatters.gov.uk/deliveringservices/contactpoint/security/

Sending out our policy thinking via the medium of a Web Site is a central plank of the “Perfecting Web 1.0” aspect of our Transformational Government strategy, which is due to be complete in 2015. If interfering busybodies have any other queries about how we propose that children in Britain should be raised and protected I would refer them to that.

I might add we never get this sort of trouble from the trade association Intellect, and this is why we find them a pleasure to deal with. And on the foundation of that relationship is our track record of success in government IT projects built.

So put that in your collective pipe and smoke it, naysayers. Now is not the time to ask difficult questions. We have to get on with the job of restoring order.

 

The Identity Domino Effect

My friend Jerry Fishenden, Microsoft's National Technology Officer in the United Kingdom, had a piece in The Scotsman recently where he lays out, with great clarity, many of the concerns that “keep me up at night”.  I hope this kind of thinking will one day be second nature to policy makers and politicians worldwide. 

Barely a day passes it seems without a new headline appearing about how our personal information has been lost from yet another database. Last week, the Information Commissioner, Richard Thomas, revealed that the number of reported data breaches in the UK has soared to 277 since HMRC lost 25 million child benefit records nearly a year ago. “Information can be a toxic liability,” he commented.

Such data losses are bad news on many fronts. Not just for us, when it's our personal information that is lost or misplaced, but because it also undermines trust in modern technology. Personal information in digital form is the very lifeblood of the internet age and the relentless rise in data breaches is eroding public trust. Such trust, once lost, is very hard to regain.

Earlier this year, Sir James Crosby conducted an independent review of identity-related issues for Gordon Brown. It included an important underlying point: that it's our personal data, nobody else's. Any organisation, private or public sector, needs to remember that. All too often the loss of our personal information is caused not by technical failures, but by lackadaisical processes and people.

These widely-publicised security and data breaches threaten to undermine online services. Any organisations, including governments, which inadequately manage and protect users’ personal information, face considerable risks – among them damage to reputation, penalties and sanctions, lost citizen confidence and needless expense.

Of course, problems with leaks of our personal information from existing public-sector systems are one thing. But significant additional problems could arise if yet more of our personal information is acquired and stored in new central databases. In light of projects such as the proposed identity cards programme, ContactPoint (storing details of all children in the UK), and the Communications Data Bill (storing details of our phone records, e-mails and websites we have visited), some of Richard Thomas's other comments are particularly prescient: “The more databases set up and the more information exchanged from one place to another, the greater the risk of things going wrong. The more you centralise data collection, the greater the risk of multiple records going missing or wrong decisions about real people being made. The more you lose the trust and confidence of customers and the public, the more your prosperity and standing will suffer. Put simply, holding huge collections of personal data brings significant risks.”

The Information Commissioner's comments highlight problems that arise when many different pieces of information are brought together. Aggregating our personal information in this way can indeed prove “toxic”, producing the exact opposite consequences of those originally intended. We know, for example, that most intentional breaches and leaks of information from computer systems are actually a result of insider abuse, where some of those looking after these highly sensitive systems are corrupted in order to persuade them to access or even change records. Any plans to build yet more centralised databases will raise profound questions about how information stored in such systems can be appropriately secured.

The Prime Minister acknowledges these problems: “It is important to recognise that we cannot promise that every single item of information will always be safe, because mistakes are made by human beings. Mistakes are made in the transportation, if you like – the communication of information”.

This is an honest recognition of reality. No system can ever be 100 per cent secure. To help minimise risks, the technology industry has suggested adopting proposals such as “data minimisation” – acquiring as little data as required for the task at hand and holding it in systems no longer than absolutely necessary. And it's essential that only the minimum amount of our personal information needed for the specific purpose at hand is released, and then only to those who really need it.

Unless we want to risk a domino effect that will compromise our personal information in its entirety, it is also critical that it should not be possible automatically to link up everything we do in all aspects of how we use the internet. A single identifying number, for example, that stitches all of our personal information together would have many unintended, deeply negative consequences.

There is much that governments can do to help protect citizens better. This includes adopting effective standards and policies on data governance, reducing the risk to users’ privacy that comes with unneeded and long-term storage of personal information, and taking appropriate action when breaches do occur. Comprehensive data breach notification legislation is another important step that can help keep citizens informed of serious risks to their online identity and personal information, as well as helping rebuild trust and confidence in online services.

Our politicians are often caught between a rock and a very hard place in these challenging times. But the stream of data breaches and the scope of recent proposals to capture and hold even more of our personal information does suggest that we are failing to ensure an adequate dialogue between policymakers and technologists in the formulation of UK public policy.

This is a major problem that we can, and must, fix. We cannot let our personal information in digital form, as the essential lifeblood of the internet age, be allowed to drain away under this withering onslaught of damaging data breaches. It is time for a rethink, and to take advantage of the best lessons that the technology industry has learned over the past 30 or so years. It is, after all, our data, nobody else&#39s.

My identity has already been stolen through the very mechanisms Jerry describes.  I would find this even more depressing if I didn't see more and more IT architects understanding the identity domino problem – and how it could affect their own systems. 

It's our job as architects to do everything we can so that the next generation of information systems is as safe from insider attacks as we can make it.  On the one hand this means protecting the organizations we work for from unnecessary liability;  on the other, it means protecting the privacy of our customers and employees, and the overall identity fabric of society.

In particular, we need to insist on:

  • scrupulously partitioning personally identifying information from operational and profile data;
  • eliminating “rainy day” collection of information – the need for data must always be justifiable;
  • preventing personally identifying information from being stored on multiple systems;
  • use of encryption;
  • minimal disclosure of identity information within a “need-to-know” paradigm.

I particularly emphasize partitioning PII from operational data, since most of a typical company's operational systems – and employees – need no access to PII.  Those who do need such access rarely need to know anything beyond a name.  Those who do need greater access to detailed information rarely need access to information about large numbers of people except in anonymized form.
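A minimal sketch of the partitioning idea (the schema and names are my own, purely illustrative): operational records carry only an opaque token, and the token-to-person mapping lives in a separate, tightly guarded store.

```python
# Sketch of partitioning PII from operational data (schema and names
# are invented for illustration).  Operational systems see only an
# opaque token; the PII vault maps tokens to people and is the only
# place a name or address ever appears.
import secrets

pii_vault = {}        # token -> PII; tightly guarded and separately audited
operational_db = []   # what ordinary systems and employees work with

def register(name: str, address: str, order_total: float) -> str:
    token = secrets.token_hex(8)
    pii_vault[token] = {"name": name, "address": address}
    operational_db.append({"customer": token, "total": order_total})
    return token

token = register("J. Smith", "123 Main St", 129.50)

# Analytics, support tooling, etc. operate on tokens only:
print(operational_db[0]["customer"] == token)  # True
print("name" in operational_db[0])             # False: no PII in this record
```

An insider with the run of the operational systems then learns nothing personally identifying; compromising identities requires breaching the vault as well, which can be far smaller, more isolated and more heavily audited.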

I would love someone to send me a use case that calls for anyone to have access – at the same time – to the personally identifying information about thousands of individuals (much less millions, as was the case for some of the incidents Jerry describes).  This kind of wholesale access was clearly afforded the person who stole my identity.  I still don't understand why.