Massive breach could involve 94 million credit cards

According to Britain's The Register, the world's largest credit card heist may be twice as large as previously admitted.

A retailer called TJX managed to create a system so badly conceived, designed and implemented that 94 million accounts could be stolen. The potential cost is thought to be $1 billion – or even more. The Register says:

The world's largest credit card heist may be bigger than we thought. Much bigger.

According to court documents filed by a group of banks, more than 94 million accounts fell into the hands of criminals as a result of a massive security breach suffered by TJX, the Massachusetts-based retailer.

That's more than double what TJX fessed up to in March, when it estimated some 45.7 million card numbers were stolen during a 17-month span in which criminals had almost unfettered access to the company's back-end systems. Going by the smaller estimate, TJX still presided over the largest data security SNAFU in history. But credit card issuers are accusing TJX of employing fuzzy math in an attempt to contain the damage.

“Unlike other limited data breaches where ‘pastime hackers’ may have accessed data with no intention to commit fraud, in this case it is beyond doubt that there is an extremely high risk that the compromised data will be used for illegal purposes,” read the document, filed Tuesday in US District Court in Boston. “Faced with overwhelming exposure to losses it created, TJX continues to downplay the seriousness of the situation.”

TJX officials didn't return a call requesting comment for this story.

The new figures may mean TJX will have to pay more than previously estimated to clean up the mess. According to the document, Visa has incurred fraud losses of $68m to $83m resulting from the theft of 65 million accounts. That calculates to a cost of $1.04 to $1.28 per card. Applying the same rate to the 29 million MasterCard numbers lost, the total fraud losses alone could top $120m.

Research firms have estimated the total loss from the breach could reach $1bn once legal settlements and lost sales are tallied. But that figure was at least partly based on the belief that fewer than 46 million accounts were intercepted (more…)
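The per-card arithmetic in the excerpt is easy to check; the snippet below uses only the article's own figures, nothing new:

```python
# Quick sanity check of the Register's per-card fraud figures.
visa_loss_low, visa_loss_high = 68e6, 83e6  # Visa's reported fraud losses ($)
visa_cards = 65e6                           # Visa accounts stolen
mc_cards = 29e6                             # MasterCard accounts stolen

per_card_low = visa_loss_low / visa_cards    # ~ $1.05 per card
per_card_high = visa_loss_high / visa_cards  # ~ $1.28 per card

# Extrapolate Visa's per-card rate to the MasterCard numbers:
mc_loss_high = mc_cards * per_card_high
total_high = visa_loss_high + mc_loss_high   # ~ $120m in fraud losses alone

print(round(per_card_low, 2), round(per_card_high, 2), round(total_high / 1e6))
# → 1.05 1.28 120
```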

Interestingly, the actual court case is not focused on the systems themselves, but on the representations made about the systems to the banks.  According to eWeek, U.S. District Judge William Young told the plaintiffs,

“You're going to have to prove that TJX made negligent misrepresentations. That it was under a duty to speak and didn't speak and knew what its problems were and didn't say to MasterCard and Visa that they weren't encrypting and the like,” Young said. “That's why MasterCard and Visa acted to allow TJX to get into the electronic, plastic monetary exchange upon which the economic health of the nation now rests.”

This was a case where the storage architecture was wrong.  The access architecture was wrong.  The security architecture was missing.  Information was collected and stored in a way that made it too easy to gain access to too much. 

Given the losses involved, if the banks lose against TJX, we can expect them to devise contracts strong enough that they can win against the next “TJX”. So I'm hopeful that one way or the other, this breach, precisely because of its predictability and cost, will help bring the “need to know” principle into many more systems.

I'd like to see us discussing potential architectures that can help here rather than leaving every retailer to fend for itself.

Unifying the experience of online identity

Jon Udell zeroes in on the problem of web sites that introduce “novel” authentication schemes once these schemes start to proliferate. I had the same concerns when I set out the seventh law of identity (consistent experience). Jon says:

Several months ago my bank implemented an anti-phishing scheme called Site ID, and now my mortgage company has gone to a similar scheme called PassMark. Both required an enrollment procedure in which I had to choose private questions and give answers (e.g., mother’s maiden name) and then choose (and label) an image. The question-and-answer protocol mainly beefs up name/password security, and secondarily deters phishing — because I’d notice if a site I believed to be my bank or mortgage company suddenly didn’t use that protocol. The primary anti-phishing feature is the named image. The idea is that now I’ll be suspicious if one of these sites doesn’t show me the image and label that I chose.

When you’re talking about a single site, this idea arguably makes sense. But it starts to break down when applied across sites. In my case, there’s dissonance created by different variants of the protocol: PassMark versus Site ID. Then there’s the fact that these aren’t my images, they’re generic clip art with no personal significance to me. Another variant of this approach, the Yahoo! Sign-In Seal, does allow me to choose a personally meaningful image — but only to verify Yahoo! sites.

These fragmentary approaches can’t provide the grounded and consistent experience that we so desperately need. One subtle aspect of that consistency, highlighted in Richard Turner’s CardSpace screencast, is the visual gestalt that’s created by the set of cards you hold. In the CardSpace identity selector, the images you see always appear together and form a pattern. Presumably the same will be true in the Higgins-based identity selector, though I haven’t seen that yet.

I can’t say for sure, because none of us is yet having this experience with our banks and mortgage companies, but the use of that pattern across interactions with many sites should provide that grounded and consistent experience. Note that the images forming that pattern can be personalized, as Kevin Hammond discusses in this item (via Kim Cameron) about adding a handmade image to a self-issued card. Can you do something similar with a managed card issued by an identity provider? I imagine it’s possible, but I’m not sure, maybe somebody on the CardSpace team can answer that.

In any event, the general problem isn’t just that PassMark or Site ID or Sign-In Seal are different schemes. Even if one of those were suddenly to become the standard used everywhere, the subjective feeling would still be that each site manages a piece of your identity but that nothing brings it all together under your control. We must have, and I’m increasingly hopeful that we will have, diverse and interoperable identity selectors, identity providers, relying parties, and trust protocols. But every participant in the identity metasystem must also have a set of core properties that are invariant. One of the key invariant properties is that it must bring your experience of online identity together and place it under your control.

The “novel authentication” approach used by PassMark and others doesn't scale any better than the “pocket full of dongles” solutions proposed by dongle queens or – for that matter – than conventional usernames and passwords.

So far Information Cards are the only technology that both prevents phishing and avoids the novel authentication and multiple dongle problems.

By the way – if what Jon calls the “dissonance” problem that arises from the use of different images and questions on web sites were to be overcome by reusing the same images and questions everywhere, things would only get worse!

Once sites begin to share the same “novel authentication” model, you no longer have novel authentication. 

In fact you return full circle to the deepest phishing problems.  Why? 

If you went to an evil site and set up your reusable images and questions, you would have taught the evil site how to impersonate you at legitimate sites. Thus, in spite of lots of effort and lots of illusions, you would end up further behind than when you started.

Denial mobs & the Cyberwar on Estonia

Ross Mayfield of Socialtext writes more explicitly about the same possible social-technical phenomenon I hinted at in my recent piece on the cyber-attack against Estonia:

The latest thread of the Cyberwar attack against Estonia – as covered in the NY Times, an interview in CNET with an expert from Arbor Networks, and a post by Kim Cameron – raises an interesting question. It is unlikely that the Russian government can be directly linked to the massive, coordinated and sophisticated denial of service attack on Estonia. It is also possible that such an attack could self-organize under the right conditions. Is a large part of our future dealing with hacktivists as denial mobs?

Given the right conditions that make a central resource a target, a decentralized attack could be decentralized in its coordination as well. Estonia may be the first nation state to be attacked at the scale of war, but it isn't just nations that are under threat. The largest bank in Estonia, in one of the top markets for e-banking, has losses in excess of $1M. A small amount, relatively speaking, but the overall economic cost is far from known.

If a multinational corporation did something to spark widespread outrage, such an attack could emerge against it as a net-dependent institution. Then we would be asking ourselves if the attack was economic warfare from a nation or terrorist organization. But it also could be a lesser, and illegal, form of grassroots activism. None of this is particularly new, at least in concept.

But what is new are the tools, which cut both ways, for easy group-forming and conversation.

Roland Dobbins on DDoS attacks and mitigations

Roland Dobbins has written to point out that the recent Russian cyber-attacks on Estonia are not the first launched by one state against another (he cites incidents during the Balkan conflict, as well as China versus Japan).

Then he gives us an overview of DDoS attacks and mitigations:

DoS attacks are easy to trace as long as Service Providers (SPs) have the proper instrumentation and telemetry enabled on their routers – NetFlow is the most common way of doing this, along with various open-source and commercial tools (nfdump/nfsen, Panoptis, Arbor, Lancope, Narus, Q1).

Most DDoS attacks these days aren't spoofed, because a) there's no need, given the zillions of botted computers out there available for use as attack platforms, and b) many SPs have implemented antispoofing technologies such as uRPF, iACLs, etc.

However, antispoofing (BCP38/BCP84) isn't universally deployed, and so the ability to spoof combined with DNS servers which are misconfigured as open recursors means that attackers can launch very large (up to 25gb/sec that I know of) spoofed DDoS attacks, due to the amplification factor of the open DNS recursors.

There are various mitigation techniques employed, such as destination-based (destroys the village in order to save it) and/or source-based remotely-triggered blackholing (S/RTBH), plain old iACLs, and dedicated DDoS mitigation appliances; there's a lot of information-sharing and coordinated mitigation which takes place in the SP community as well.

But there isn't nearly enough of any of these things, especially in the developing world.
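To make the amplification factor Dobbins mentions concrete: an attacker spoofs the victim's address in small queries to open recursors, which answer with much larger responses aimed at the victim. The byte counts and send rate below are illustrative assumptions, not measurements from any real attack:

```python
# Illustrative DNS reflection/amplification arithmetic (all sizes assumed).
query_bytes = 60       # a small spoofed DNS query (assumption)
response_bytes = 3000  # a large response from an open recursor (assumption)
amplification = response_bytes / query_bytes

attacker_send_mbps = 500  # aggregate botnet send rate (assumption)
victim_flood_mbps = attacker_send_mbps * amplification

print(amplification, victim_flood_mbps)
# → 50.0 25000.0  (a 500 Mb/s botnet yields a ~25 Gb/s flood at the victim)
```

With even modest amplification, traffic in the range Dobbins cites is within reach of mid-sized botnets, which is why closing open recursors and deploying BCP38 both matter.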

Russian cyber-attacks on Estonia

Here is a report from the Guardian on what it calls the first cyber assault on a state. 

Whether it's the first or not, this type of attack is something we have known was going to be inevitable, something that was destined to become a standard characteristic of political conflict.

I came across the report while browsing a must-read new identity site called blindside (more on that later…).  Here are some excerpts from the Guardian's piece:

A three-week wave of massive cyber-attacks on the small Baltic country of Estonia, the first known incidence of such an assault on a state, is causing alarm across the western alliance, with Nato urgently examining the offensive and its implications.

While Russia and Estonia are embroiled in their worst dispute since the collapse of the Soviet Union, a row that erupted at the end of last month over the Estonians’ removal of the Bronze Soldier Soviet war memorial in central Tallinn, the country has been subjected to a barrage of cyber warfare, disabling the websites of government ministries, political parties, newspapers, banks, and companies.

Nato has dispatched some of its top cyber-terrorism experts to Tallinn to investigate and to help the Estonians beef up their electronic defences.

“This is an operational security issue, something we're taking very seriously,” said an official at Nato headquarters in Brussels. “It goes to the heart of the alliance's modus operandi.”

“Frankly it is clear that what happened in Estonia in the cyber-attacks is not acceptable and a very serious disturbance,” said a senior EU official…

“Not a single Nato defence minister would define a cyber-attack as a clear military action at present. However, this matter needs to be resolved in the near future…”

Estonia, a country of 1.4 million people, including a large ethnic Russian minority, is one of the most wired societies in Europe and a pioneer in the development of “e-government”. Being highly dependent on computers, it is also highly vulnerable to cyber-attack.

It is fascinating to think about how this kind of attack could be resisted:

With their reputation for electronic prowess, the Estonians have been quick to marshal their defences, mainly by closing down the sites under attack to foreign internet addresses, in order to try to keep them accessible to domestic users…

Attacks have apparently been launched from all over the world:

The crisis unleashed a wave of so-called DDoS, or Distributed Denial of Service, attacks, where websites are suddenly swamped by tens of thousands of visits, jamming and disabling them by overcrowding the bandwidths for the servers running the sites…

The attacks have been pouring in from all over the world, but Estonian officials and computer security experts say that, particularly in the early phase, some attackers were identified by their internet addresses – many of which were Russian, and some of which were from Russian state institutions…

“We have been lucky to survive this,” said Mikko Maddis, Estonia's defence ministry spokesman. “People started to fight a cyber-war against it right away. Ways were found to eliminate the attacker.”

I don't know enough about denial of service attacks to know how difficult it is to trace them after the fact. But presumably, since there is no need to receive responses in order for a DoS attack to be successful, the attacker can spoof his source address with no problem. This can't make things any easier.

Estonian officials say that one of the masterminds of the cyber-campaign, identified from his online name, is connected to the Russian security service. A 19-year-old was arrested in Tallinn at the weekend for his alleged involvement…

Expert opinion is divided on whether the identity of the cyber-warriors can be ascertained properly…

(A) Nato official familiar with the experts’ work said it was easy for them, with other organisations and internet providers, to track, trace, and identify the attackers.

But Mikko Hyppoenen, a Finnish expert, told the Helsingin Sanomat newspaper that it would be difficult to prove the Russian state's responsibility, and that the Kremlin could inflict much more serious cyber-damage if it chose to.  (More here…)

There was huge loss of life and bitterness between Russia and Estonia during the second world war, and there are still nationalist forces within Russia who would see this statue as symbolic of that historical reality. It is perhaps not impossible that the DoS was mounted by individuals with those leanings rather than being government sponsored. Someone with a clear target in mind and the right technical collaborators, who could muster bottom-up participation by thousands of sympathizers, could likely put this kind of attack in place almost as quickly as a nation state.

The dissolving perimeter

The perimeter of the enterprise is dissolving in an environment requiring greater collaboration, outsourcing and integration with both suppliers and customers. But Consentry's recent report shows that most IT leaders perceive that “external” threats come from inside the enterprise itself…

Increasing network user diversity is raising concerns that there is a need for a more dynamic approach to LAN security. The following report tackles this issue, advocating an identity-based approach to managing users on the network.

The key drivers for focusing on network security from a user perspective come from the level of transitory, or non-permanent, workers who access network environments on a daily basis. The research found a significant majority of respondents seeing the following groups as a threat to the network:

  • Temporary workers (62%)
  • Guest users (54%)
  • Contractors (51%)

With 82 percent of businesses in the survey saying they have moderate to high levels of nonpermanent workers accessing the network, it appears that the changing shape of the workforce is a contentious issue for security professionals.

Further highlights from the research are as follows:

  • 87% of respondents state that they have multiple levels of user access
  • 82% of respondents recognise the need to increase network security
  • 95% believe there is an increased need for the use of identity-based control
  • 41% of businesses do not have up-to-date network access
  • 65% acknowledge that network access is becoming more diverse and difficult to manage

Download the report here.

Secret weapon against high tech

Thanks to Lars Iwer, a story from The Independent on breaching the invincible to get at the Crown Jewels.  By the way, how much does 120,000 carats weigh?  Answer here.  That's one big ring.

A thief has evaded one of the world's most expensive hi-tech security systems, and made off with €21m (£14.5m) worth of diamonds – thanks to a secret weapon rarely used on bank staff: personal charm.

In what may be the biggest robbery committed by one person, the conman burgled safety deposit boxes at an ABN Amro bank in Antwerp's diamond quarter, stealing gems weighing 120,000 carats. Posing as a successful businessman, the thief visited the bank frequently, befriending staff and gradually winning their confidence. He even brought them chocolates, according to one diamond industry official.

Now, embarrassed bank staff in Belgium's second city are wondering how they had been hoodwinked into giving a man with a false Argentine passport access to their vaults.

The prime suspect had been a regular customer at the bank for the past year, giving his name as Carlos Hector Flomenbaum from Argentina. The authorities, who have offered a €2m reward for information leading to an arrest, now know that a passport in that name was stolen in Israel a few years ago. Although not familiar to the local diamond dealers, the conman became one of several trusted traders given an electronic card to access the bank vault. The heist, believed to have been more than a year in the planning, has astounded diamond dealers.


Being psychic, I sense a movie coming.

Token Decryption Service for CardSpace

Via Richard Turner's blog, the announcement of an architecturally superior token decryption component devised by Dominick Baier.

Dominick and Richard have blogged previously about mitigating the dangers involved in allowing web middleware and front-end software to process encrypted payloads. Decrypting a payload involves access to a private key. The broader the range of applications that can get to the key, the greater the attack surface. This led to discussions about:

  1. Re-factoring the token decryption code into an assembly that runs under full trust whilst the site runs under partial trust
  2. Building a Token Decryption Service to which you can pass your encrypted blob and you get back a list of claims, PPID and issuer key.

And that is exactly the problem Dominick has tackled:

Web Applications that want to decrypt CardSpace tokens need read access to the SSL private key. But you would increase your attack surface tremendously if you directly grant this access to the worker process account of your application. I wrote about this in more detail here and Richard Turner followed up here.

Together with my colleagues at Thinktecture (thanks Christian and Buddhike for code reviewing and QA) I wrote an out-of-proc token decryption service that allows decrypting tokens without having direct access to the private key in the application. The idea is as follows:

Your web application runs under its normal least privilege account with no read access to the private key. The token decryption service runs as an NT service on the same machine under an account that has read access. Whenever the application has to decrypt a token, it hands the encrypted token to the token decryption service which (in this version) simply uses the TokenProcessor to return a list of claims, a unique ID and the issuer key.

The token decryption service is implemented as a WCF service that uses named pipes to communicate with the applications. To make sure that only authorized applications can call into the service, the application(s) have to be member of a special Windows group called “TokenDecryptionUsers” (can be changed in configuration to support multiple decryption services on the same machine). I also wrote a shim for the WCF client proxy that allows using this service from partially trusted web applications.

The download contains binaries, installation instructions and the full source code. I hope this helps CardSpace adopters to improve the security of their applications and servers. If you have any comments or questions – feel free to contact me.
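The isolation pattern Dominick describes doesn't depend on WCF or named pipes. A minimal Python sketch shows the shape – the application holds only the encrypted blob and a handle to the service, and the key never crosses the boundary. XOR stands in for real decryption here, and the claim values are hypothetical; in the real design the service runs out-of-proc under a privileged account:

```python
import secrets

class TokenDecryptionService:
    """Stands in for the out-of-proc NT service; only it holds the key."""
    def __init__(self, key: bytes):
        self._key = key  # the application never sees this

    def decrypt(self, blob: bytes) -> dict:
        # XOR is a placeholder for the real RSA/AES token processing.
        plain = bytes(b ^ k for b, k in zip(blob, self._key))
        name, ppid = plain.decode().split("|")
        return {"claims": {"name": name}, "ppid": ppid}

# --- privileged side: holds the key; "encrypts" a demo token ---
key = secrets.token_bytes(64)
token_plain = b"Alice|ppid-1234"          # hypothetical claims
blob = bytes(b ^ k for b, k in zip(token_plain, key))
service = TokenDecryptionService(key)

# --- application side: has only the blob and the service handle ---
result = service.decrypt(blob)
print(result["claims"]["name"], result["ppid"])
```

The point of the design is that compromising the worker process yields the blob but not the key; the attacker must also break into the separately-secured service to get it.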

The approach is a good example of the “alligators and snakes” approach I discussed here recently.

Weaknesses of Strong Authentication?

Here is a piece by Robert Richardson from the CSI Blog. He discusses what one of his colleagues calls “some of the weaknesses or downright drawbacks of strong authentication methods”:

There's this author named Kathy Sierra who's currently at the center of one of those firestorms that break out on the Web now and again. Some threatening material regarding her was posted on the Web, she accuses some fairly prominent bloggers of being involved in one way or another, and the rest seems to be finger pointing and confusion.

One detail of the saga worth considering is that one of the implicated bloggers claims that actions were taken by someone using his identity and access to his passworded accounts (this is quoted from Kim Cameron's Blog):

I am writing this from a new computer, using an email address that will be deleted at the end of this. I am no longer me. My main machine, despite my best efforts, has been hacked, my accounts compromised including my email, and has been disconnected from the internet.

How did this happen? When did this happen?

This is, to be sure, something of a doomsday scenario for an individual user – the complete breach of one's identity across all the systems one uses and cares about (I'm assuming that the person in question, Allen Harrell, is telling the truth about being hacked).

Kim Cameron writes this on his blog:

Maybe next time Allan and colleagues will be using Information Cards, not passwords, not shared secrets. This won’t extinguish either flaming or trolling, but it can sure make breaking in to someone’s site unbelievably harder – assuming we get to the point where our blogging software is safe too.

But I'm not convinced of this for a couple of reasons. First, Information Cards may or may not make breaking into someone's site unbelievably harder. Hackers sidestep the authentication process (strong or otherwise) all the time. Second, the perception of super-duper strong identity management may make it harder to prove that one's identity was in fact hacked.

InfoCard credentials are only more reliable if the system where they are being used is highly secure. If I'm using a given highly trusted credential from my system, but my system has been compromised, then the situation just looks worse for me when people start accusing me of misdeeds that were carried out in my name.

Many discussions about better credentialing begin from an underlying presumption that there will be a more secure operating system providing protection to the credentials and the subsystem that manages them. But at present, no one can point to that operating system. It certainly isn't Vista, however much improved its security may be.

Designing for Breach

I agree with Robert that credentials are only part of the story.  That's why I said, “assuming we get to the point where our blogging software is safe too.” 

Maybe that sounds simplistic.  What did I mean by “safe”? 

I'll start by saying I don't believe the idea of an unbreachable system is a useful operational concept.  If we were to produce such a system, we wouldn't know it.  The mere fact that a system hasn't been breached, or that we don't know how it could be, doesn't mean that a breach is not possible.  The only systems we can build are those that “might” be breached.

The way to design securely is to assume your system WILL be breached and create a design that mitigates potential damage.  There is nothing new in this – it is just risk management applied to security.

As a consequence, each component of the system must be isolated – to the extent possible –  in an attempt to prevent contagion from compromised pieces.

Security Binarism versus Probabilities

I know Robert will agree with me that one of the things we have to avoid at all costs is “security binarism”. In this view, either something is secure or it isn't. If its adherents can find any potential vulnerability in something, they conclude the whole thing is vulnerable, so we might as well give up trying to protect it. Of course this isn't the way reality works – or the way anything real can be secured.

Let's use the analogy of physical security.  I'll conjure up our old friend, the problem of protecting a castle. 

You want a good outer wall – the higher and thicker the better. Then you want a deep moat – full of alligators and poisonous snakes. Why? If someone gets over the wall, you want them to have to cross the moat. If they don't drown in the moat, you want them to be eaten or bitten (those were the days!). And after the moat, you would have another wall, with places to launch boiling oil, shoot arrows, and all the rest. I could go on, but will spare you the obviousness of the exercise.

The point is, someone can breach the moat, but will then hit the next barrier.  It doesn't take a deep grasp of statistics to see that if there is a probability of breach associated with each of these components, the probability of breaking through to the castle keep is the product of all the probabilities.  So if you have five barriers, then even if each has a very high probability of breach (say 10%), the overall probability of breaking through all the barriers is just .001%.  This is what lies behind the extreme power of combining numerous defences – especially if breaking through each defence requires completely unrelated skills and resources.
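The multiplication is easy to make concrete – assuming, of course, that breaching each barrier is an independent event requiring unrelated skills and resources:

```python
# Five independent barriers, each with a 10% chance of being breached.
p_breach = 0.10
layers = 5

# For independent layers, the breach probabilities multiply.
p_total = p_breach ** layers

print(f"{p_total:.3%}")
# → 0.001%
```

The independence assumption is the crux: if one stolen credential or one social-engineering trick defeats several layers at once, the probabilities no longer multiply and the castle is far weaker than the arithmetic suggests.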

But despite the best castle design, we all know that the conquering hero can still dress up as a priest and walk in through the drawbridge without being detected (I saw the movie).  In other words, there is a social engineering attack.

So, CardSpace may be nothing more than a really excellent moat.  There may be other ways into the castle.  But having a really great moat is in itself a significant advance in terms of “defence in depth”. 

Beyond that, Information Cards begin to frame many questions better than they have been framed in the past – questions like, “Why am I retaining data that creates potential liability?”

In terms of Robert's fear that strong authentication will lead to hallucinations of non-repudiation, I agree that this is a huge potential problem.   We need to start thinking about it and planning for it now.  CSI can play an important role in educating professionals, government and citizens about these issues. 

I recently expanded on these ideas here.

Richard Gray on authentication and reputation

Richard Gray posted two comments that I found illuminating, even though I see things in a somewhat different light. The first was a response to my Very Sad Story:

One of the interesting points of this is that it highlights very strongly some of the meat space problems that I’m not sure any identity solution can solve. The problem in particular is that as much as we try to associate a digital identity with a real person, so long as the two can be separated without exposing the split we have no hope of succeeding.

For so long, technical identity commentators have pushed the idea that a person’s digital identity and their real identity can be tightly bound together; then suddenly, when the weakness is finally exposed, everyone once again is forced to say ‘This digital identity is nothing more than a string puppet that I control. I didn’t do this thing, some other puppet master did.’

What’s the solution? I don’t know. Perhaps we need to stop talking about identities in this way. If a burglar stole my keys and broke into my home to use my telephone it would be my responsibility to demonstrate that but I doubt that I could be held responsible for what he said afterwards.  Alternatively we need non-repudiation to be a key feature of any authentication scheme that gets implemented.

In short, so long as we can separate ourselves from our digital identities, we should expect people not to trust them. We should in fact go to great lengths to ensure that people trust them only as much as they have to and no more.

 He continued in this line of thought over at Jon's blog:

As you don’t have CardSpace enabled here, you can’t actually verify that I am the said same Richard from Kim’s blog. However in a satisfyingly circular set of references I imagine that what follows will serve to authenticate me in exactly the manner that Stephen described. 🙂  [Hey Jon – take a look at Pamelaware – Kim]

I’m going to mark a line somewhere between the view that reputation will protect us from harm and the view that the damage that can be done will be reversible. Reputation is a great authenticating factor; indeed, it fits most of the requirements of an identity. It's trusted by the recipient, it requires lots of effort to create, and it is easy to test against. Amongst people who know each other well it's probably the source of information that is relied upon the most. (“That doesn’t sound like them” is a common phrase.)

However, this isn’t the way that our society appears to work. When my wife reads the celebrity magazines she is unlikely to rely on reputation as a measure for their actions. Worse than this, when she does use reputation, it is built from a collection of previous celebrity offerings.

To lay it out simply, no matter who should steal my identity (phone, passwords etc.) they would struggle to damage my relationship with my current employer as they know me and have a reputation to authenticate my actions with. They could do a very good job of destroying any hope I have of getting a job anywhere else though. Regardless of the truth I would be forced to explain myself at every subsequent meeting. The public won’t have done the background checks, they’ll only know what they’ve heard. Why would they take the risk and employ me, I *might* be lying.

Incredibly, the private reputation that Allen has built up (and Stephen and the rest of us rely on) has probably helped to save a large portion of his public reputation. Doing a google for “Allen Herrell” doesn’t find netizens baying for his blood, it finds a large collection of people who have rallied behind him to declare ‘He would not do this’.

Now what I’m about to say is going to seem a little crazy but please think it through to the end before cutting it down completely. So long as our online identities are fragile and easily compromised, people will be wary of trusting them. If we lower the probability of an identity failing, people will, as a result, place more faith in that identity. But if we can’t reduce the probability of failure to zero, then when some poor soul suffers the inevitable failure of their identity, so many more people will have placed faith in it that undoing the damage may be almost impossible. It would seem then that the unreliability of our identity is in fact our last line of defence.

My point then is that while it is useful to spend time improving authentication schemes perhaps we are neglecting the importance of non-repudiation within the system. If it was impossible for anyone other than me to communicate my password string to an authentication system then that password would be fine for authentication and it wouldn’t even be necessary to encrypt the text wherever it was stored!