New York Times on OpenID and Information Cards

Randall Stross has a piece in the NYT that hits the jackpot in explaining to non-technical readers what's wrong with passwords and how Information Cards help:    

“I once felt ashamed about failing to follow best practices for password selection — but no more. Computer security experts say that choosing hard-to-guess passwords ultimately brings little security protection. Passwords won’t keep us safe from identity theft, no matter how clever we are in choosing them.

“That would be the case even if we had done a better job of listening to instructions. Surveys show that we’ve remained stubbornly fond of perennial favorites like “password,” “123456” and “LetMeIn.” The underlying problem, however, isn’t their simplicity. It’s the log-on procedure itself, in which we land on a Web page, which may or may not be what it says it is, and type in a string of characters to authenticate our identity (or have our password manager insert the expected string on our behalf).

“This procedure — which now seems perfectly natural because we’ve been trained to repeat it so much — is a bad idea, one that no security expert whom I reached would defend.”

“The solution urged by the experts is to abandon passwords — and to move to a fundamentally different model, one in which humans play little or no part in logging on. Instead, machines have a cryptographically encoded conversation to establish both parties’ authenticity, using digital keys that we, as users, have no need to see.

“In short, we need a log-on system that relies on cryptography, not mnemonics.

“As users, we would replace passwords with so-called information cards, icons on our screen that we select with a click to log on to a Web site. The click starts a handshake between machines that relies on hard-to-crack cryptographic code…”
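To give a feel for what that machine-to-machine conversation amounts to, here is a minimal sketch of key-based mutual authentication in Python. It is illustrative only – the key type and flow are my choices, and real Information Card exchanges use WS-Trust rather than anything this simple:

```python
# A minimal sketch of key-based mutual authentication, in the spirit of the
# "cryptographically encoded conversation" described above. All names are
# illustrative; real Information Card exchanges use WS-Trust, not this code.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Each party holds a long-lived key pair; the public halves are exchanged
# out of band (for example, inside certificates) and never typed by a user.
user_key = Ed25519PrivateKey.generate()
site_key = Ed25519PrivateKey.generate()

def proves_possession(prover_private, known_public) -> bool:
    """Verifier sends a fresh nonce; prover signs it; verifier checks."""
    nonce = os.urandom(32)                  # fresh each log-on, so replay fails
    signature = prover_private.sign(nonce)  # proof of holding the private key
    try:
        known_public.verify(signature, nonce)
        return True
    except InvalidSignature:
        return False

# Run the check in both directions: each machine authenticates the other.
assert proves_possession(user_key, user_key.public_key())
assert proves_possession(site_key, site_key.public_key())
```

The point to notice is that no secret ever crosses the wire, so there is nothing for a phisher to capture.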

Randall's piece also drills into OpenID.  In summary, he sees it as a password-based system, and therefore a diversion from what's really important:

“OpenID offers, at best, a little convenience, and ignores the security vulnerability inherent in the process of typing a password into someone else’s Web site. Nevertheless, every few months another brand-name company announces that it has become the newest OpenID signatory. Representatives of Google, I.B.M., Microsoft and Yahoo are on OpenID’s guiding board of corporations. Last month, when MySpace announced that it would support the standard, the nonprofit foundation OpenID.net boasted that the number of “OpenID enabled users” had passed 500 million and that “it’s clear the momentum is only just starting to pick up.”

“Support for OpenID is conspicuously limited, however. Each of the big powers supposedly backing OpenID is glad to create an OpenID identity for visitors, which can be used at its site, but it isn’t willing to rely upon the OpenID credentials issued by others. You can’t use Microsoft-issued OpenID at Yahoo, nor Yahoo’s at Microsoft.

“Why not? Because the companies see the many ways that the password-based log-on process, handled elsewhere, could be compromised. They do not want to take on the liability for mischief originating at someone else’s site.

Randall is right that when people use passwords to authenticate to their OpenID provider, the system is vulnerable to many phishing attacks.  But there's an important point to be made:  these problems are caused by their use of passwords, not by their use of OpenID. 

When people authenticate to OpenID in a reliable way – for example, by using Information Cards – the phishing attacks are no longer possible, as I explain in this video.  At that point, it becomes a safe and convenient way to use a public persona.

The question of whether and when large sites will accept the OpenIDs issued by other large sites is a more complicated one.  I discussed a number of the issues here.  The problem is that for many applications, there needs to be a layer of governance on top of the basic identity technology.  What happens when something goes wrong?  Are there reliability and quality of service guarantees?  If information is leaked, who is responsible?  How is fiscal liability established?  And by the way, we need to figure this out in order to use any federation technology, whether OpenID, SAML or WS-Trust.

So far, these questions are being answered on an ad hoc basis, since there are no established frameworks.  I think you can divide what's happening into two approaches, both of which make a lot of sense: 

First, there are relying parties that limit the use of OpenID to so-called low-value resources.  A great example is the French telecom company Orange: it will accept IDs from any OpenID provider – but just for free services.  Blogger and others use this approach as well.

Second, there is the tack of using the protocol for higher-value purposes, but limiting the providers accepted to those with whom a governance agreement can be put in place.  Microsoft's HealthVault, for example, currently accepts OpenIDs from two providers, and plans to extend this as it understands the governance issues better.  I look at it as a very early example of a governance-oriented approach.
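For concreteness, here is a toy sketch of how a relying party might encode these two approaches.  Every name in it is invented for illustration – none of this is Orange's or HealthVault's actual policy code:

```python
# Hypothetical sketch of the two approaches above; every name here is
# invented for illustration, not taken from Orange or HealthVault.
LOW_VALUE = {"/blog/comment", "/photos"}            # approach 1: resource tiers
GOVERNED_PROVIDERS = {"openid.provider-a.example",  # approach 2: providers
                      "openid.provider-b.example"}  # under a governance agreement

def accept_openid(provider: str, resource: str) -> bool:
    """Accept any provider for low-value resources; for everything else,
    require a provider covered by a governance agreement."""
    if resource in LOW_VALUE:
        return True
    return provider in GOVERNED_PROVIDERS

assert accept_openid("openid.random.example", "/blog/comment")  # free service
assert not accept_openid("openid.random.example", "/billing")   # no agreement
assert accept_openid("openid.provider-a.example", "/billing")   # governed
```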

I strongly believe OpenID moves identity forward.  The issues of password attacks don't go away – in fact the vulnerabilities are potentially worse to the extent that a single password becomes the gate to more resources.  But technologies like Information Cards will solve these problems.  There is a tremendous synergy here, and that is the heart of the matter.  Randall writes:

“We won’t make much progress on information cards in the near future, however, because of wasted energy and attention devoted to a large distraction, the OpenID initiative. “

But I think this energy and attention will take us in the right direction as it shines the spotlight on the benefits and issues of identity, wagging identity's “long tail”. 


Problem between keyboard and seat

Jeff Bohren picks up on Axel Nennker's recent post:

Axel Nennker points out that the supposed “Cardspace Hack” is still floating around the old media. He allows the issue is not really a Cardspace security hole, but a problem between the keyboards and seats at Ruhr University Bochum:

A while ago two students, Xuan Chen and Christoph Löhr, from Ruhr University Bochum claimed to have “broken” CardSpace. There were some blog reactions to this claim. The authoritative one of course is from Kim.

Today I browsed through a magazine lying on the desk of a colleague of mine. This magazine with the promising title “IT-Security” repeats the false claim and reports that the students proved that CardSpace has severe security flaws… Well, when you switch off all security mechanisms then, yes, there are security flaws (The security researcher in front of the computer).

Sort of what developers like me call an ID10T error.

Update: speaking of ID10T errors, I originally mistyped Axel’s name as Alex. My apologies.

How to set up your computer so people can attack it

As I said in the previous post, the students from Ruhr University Bochum who are claiming discovery of security vulnerabilities in CardSpace did NOT “crack” CardSpace.
 
Instead, they created a demonstration that requires the computer's owner to consciously disable the computer's defenses through complex configurations – following a recipe they published on the web.

The students are not able to undermine the system without active co-operation by its owner. 

You might be thinking a user could be tricked into accidentally cooperating with the attack.  To explore that idea, I've captured the steps required to enable the attack in this video.  I suggest you look at this yourself to judge the students’ claim that they have come up with a “practical attack”.

 In essence, the video shows that a sophisticated computer owner is able to cause her system to be compromised if she chooses to do so.  This is not a “breach”.

Students enlist readers’ assistance in CardSpace “breach”

Students at Ruhr University Bochum in Germany have published an account this week describing an attack on the use of CardSpace within Internet Explorer.  Their claim is to “confirm the practicability of the attack by presenting a proof of concept implementation”.

I’ve spent a fair amount of time reproducing and analyzing the attack.  The students were not actually able to compromise my safety except by asking me to go through elaborate measures to poison my own computer (I show how complicated this is in a video I will post next).  For the attack to succeed, the user has to bring full administrative power to bear against her own system.  It seems obvious that if people go to the trouble to manually circumvent all their defenses they become vulnerable to the attacks those defenses were intended to resist.  In my view, the students did not compromise CardSpace.

DNS must be undermined through a separate (unspecified) attack

To succeed, the students first require a compromise of a computer’s Domain Name System (DNS).  They ask their readers to reconfigure their computers and point to an evil DNS site they have constructed.  Once we help them out with this, they attempt to exploit the fact that poisoned DNS allows a rogue site and a legitimate site to appear to have the same internet “domain name” (e.g. www.goodsite.com).  Code in browser frames animated by one domain can interact with code from other frames animated by the same domain.  So once DNS is compromised, code supplied by the rogue site can interfere with the code supplied by the legitimate site.  The students want to use this capability to hijack the legitimate site’s CardSpace token.
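To make the mechanics concrete, here is a small Python sketch (with a made-up URL) of why poisoned DNS matters to the browser: the same-origin check compares the scheme, host and port taken from the URL, and never sees the IP addresses those names resolve to:

```python
# Illustrative sketch: the browser's same-origin policy compares
# (scheme, host, port) tuples, not the addresses the names resolve to.
from urllib.parse import urlsplit

def same_origin(url_a: str, url_b: str) -> bool:
    a, b = urlsplit(url_a), urlsplit(url_b)
    return (a.scheme, a.hostname, a.port) == (b.scheme, b.hostname, b.port)

# Under poisoned DNS, www.goodsite.com resolves to the attacker's server,
# but the origin check still sees only the name, so rogue frames served
# under that name share an origin with the legitimate site's frames.
assert same_origin("http://www.goodsite.com/login",
                   "http://www.goodsite.com/evil-frame")
```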

However, the potential problems of DNS are well understood.  Computers protect themselves from attacks of this kind by using cryptographic certificates that guarantee a given site REALLY DOES legitimately own a DNS name.  Use of certificates prevents the kind of attack proposed by the students.
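As a sketch of how that defense works in practice, consider ordinary TLS validation – shown here with Python's standard ssl module and the example hostname from above, not with anything CardSpace-specific:

```python
# Sketch of the defense: TLS certificate validation ties a DNS name to a key
# the attacker does not hold, so poisoning DNS alone is not enough.
import socket
import ssl

ctx = ssl.create_default_context()  # verifies the chain AND the hostname

def connect_checked(host: str, port: int = 443) -> None:
    with socket.create_connection((host, port)) as sock:
        # Raises ssl.SSLCertVerificationError unless the server presents a
        # certificate for `host` that chains to a trusted root.
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print("verified subject:", tls.getpeercert()["subject"])

# A rogue server reached through poisoned DNS for www.goodsite.com fails
# here: it cannot produce a trusted certificate for that name.
# connect_checked("www.goodsite.com")
```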

The certificate store must also “somehow be compromised”

But this is no problem as far as the students are concerned.  They simply ask us to TURN OFF this defense as well.  In other words, we have to assist them by poisoning all of the safeguards that have been put in place to thwart their attack.  

Note that both safeguards need to be compromised at the same time.  Could such a compromise occur in the wild?  It is theoretically possible that through a rootkit or equivalent, an attacker could completely take over the user’s computer.  However, if this is the case, the attacker can control the web browser, see and alter everything on the user’s screen and on the computer as a whole, so there is no need to obtain the CardSpace token.

I think it is amazing that the Ruhr students describe their attack as successful when it does NOT provide a method for compromising EITHER DNS or the certificate store.  They say DNS might be taken over through a drive-by attack on a badly installed wireless home network.  But they provide no indication of how to simultaneously compromise the Root Certificate Store. 

In summary, the students’ attack is theoretical.  They have not demonstrated the simultaneous compromise of the systems necessary for the attack to succeed.

The user experience

Because of the difficulty of compromising the root certificate store, let’s look at what would happen if only DNS were attacked.

Internet Explorer does a good job of informing the user that she is in danger and of advising her not to proceed. 

First the user encounters the following screen, and has to select “Continue to the website (not recommended)”:

[Screenshot: Internet Explorer certificate warning page]
If recalcitrant, the user next sees an ominous red band warning within the address bar and an unnaturally long delay:

[Screenshot: Internet Explorer address bar with red warning]
The combined attacks require a malware delivery mechanism separate from, yet coordinated with, the visit to the phishing site.  In other words, having to accomplish two or more attacks simultaneously greatly reduces the likelihood of success.

The students’ paper proposes adding a false root certificate that will suppress the Internet Explorer warnings.  As is shown in the video, this requires meeting an even higher bar.  The user must be tricked into importing a “root certificate”.  By default this doesn’t work – the system protects the user once again by installing the false certificate in a store that will not deceive the browser.  Altering this behavior requires a complex manual override.

However, should all the planets involved in the attack align, the contents of the token are never visible to the attacker.  They are encrypted for the legitimate party, and no personally identifying information is disclosed by the system.  This is not made clear by the students’ paper.
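To illustrate why a captured token would be opaque, here is a toy sketch of encrypting a token to the relying party's key.  The raw RSA-OAEP used here is purely illustrative – CardSpace actually protects tokens with WS-Security and XML encryption, not code like this:

```python
# Sketch of why a captured token is useless to the attacker: it is encrypted
# to the relying party's public key. Parameters are illustrative only.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

rp_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
rp_public = rp_private.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

token = b"signed claims about the user"
ciphertext = rp_public.encrypt(token, oaep)   # all an interceptor would see

# Only the legitimate relying party, holding rp_private, can recover it.
assert rp_private.decrypt(ciphertext, oaep) == token
```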

What the attempt proves 

The demonstration shows that if you are willing to compromise enough parts of your system using elevated access, you can render your system attackable.   This aspect of the students’ attack is not noteworthy. 

There is, however, one interesting aspect to their attack.  It doesn’t concern CardSpace, but rather the way intermittent web site behavior can be combined with DNS to confuse the browser.  The students’ paper proposes implementing a stronger “Same Origin Policy” to deal with this (and other) possible attacks.  I wish they had concentrated on this positive contribution rather than making claims that require suspension of disbelief. 

The students propose a mechanism for associating Information Card tokens with a given SSL channel.   This idea would likely harden Information Card systems and is worth evaluating.
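As a purely hypothetical sketch of what such an association could look like – the structure and names below are mine, not the students' design – one could bind the token to a digest of the certificate for the SSL channel it was minted on, loosely in the spirit of “server end point” channel binding:

```python
# Hypothetical sketch of binding a token to one SSL channel, loosely in the
# spirit of "tls-server-end-point" channel binding. The structure and field
# names are invented for illustration; this is not the students' design.
import hashlib

def bind_token(claims: bytes, channel_cert_der: bytes) -> dict:
    """Mint a token tied to a digest of the certificate on this channel."""
    return {"claims": claims,
            "channel": hashlib.sha256(channel_cert_der).hexdigest()}

def accept(token: dict, my_cert_der: bytes) -> bool:
    """A relying party rejects tokens minted for someone else's channel."""
    return token["channel"] == hashlib.sha256(my_cert_der).hexdigest()

legit_cert = b"DER certificate of the legitimate site"
rogue_cert = b"DER certificate of the attacker"

hijacked = bind_token(b"signed claims", rogue_cert)  # minted on rogue channel
assert not accept(hijacked, legit_cert)              # replay elsewhere fails
```

A token hijacked onto a different channel would then fail the check, whoever presents it.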

However, the students propose equipping browsers with end user certificates so the browsers would be authenticated, rather than the sites they are visiting.  This represents a significant privacy problem in that a single tracking key would be used at all the sites the user visits.  It also doesn’t solve the problem of knowing whether I am at a “good” site or not.  The problem here is that if duped, I might provide an illegitimate site with information which seriously damages me.

One of the most important observations that must be made is that security isn’t binary – there is no simple dichotomy between vulnerable and not-vulnerable.  Security derives from concentric circles of defense that act cumulatively and in such a way as to reinforce one another.  The title of the students’ report misses this essential point.  We need to design our systems in light of the fact that any system is breachable.  That’s what we’ve attempted to do with CardSpace.  And that’s why there is an entire array of defenses which act together to provide a substantial and practical barrier against the kind of attack the students have attempted to achieve.

Out-manned and out-gunned

Jeff Bohren draws our attention to this article on Cyber Offence research being done by the US Air Force Cyber Command (AFCYBER).  The article says:

…Williamson makes a pretty decent case for the military botnet; his points are especially strong when he describes the inevitable failure of a purely defensive posture. Williamson argues that, like every fortress down through history that has eventually fallen to a determined invader, America’s cyber defenses can never be strong enough to ward off all attacks.

And here, Williamson is on solid infosec ground – it’s a truism in security circles that any electronic “fortress” that you build, whether it’s intended to protect media files from unauthorized viewers or financial data from thieves, can eventually be breached with enough collective effort.

Given that cyber defenses are doomed to failure, Williamson argues that we need a credible cyber offensive capability to act as a deterrent against foreign attackers. I have a hard time disagreeing with this, but I’m still very uncomfortable with it, partly because it involves using civilian infrastructure for military ends…

Jeff then comments:

The idea (as I understand it) is to use military owned computers to launch a botnet attack as a retaliation against an attack by an enemy.

In this field of battle I fear the AFCYBER is both out-manned and out-gunned. The AF are the go-to guys if you absolutely, positively need something blown up tomorrow. But a DDoS attack? Without compromising civilian hardware, the AF likely couldn’t muster enough machines. Additionally the network locations of the machines they could muster could be easily predicted before the start of any cyber war.

There is an interesting alternative if anyone from AFCYBER is reading this. How about a volunteer botnet force? Civilians could volunteer to download an application that would allow their computer to be used in an AFCYBER controlled botnet in time of a cyber war. Obviously securing this so that it couldn’t be hijacked is a formidable technical challenge, but it’s not insurmountable.

If the reason for having a botnet is that we should assume every system can be compromised, don't we HAVE TO assume the botnet can be compromised too?   Once we say “the problem is not insurmountable” we have turned our back on the presuppositions that led to the botnet in the first place.