Hole in Google SSO service

Some days you get up and wish you hadn't.  How about this one:


The ZDNet article begins by reassuring us:

“Google has fixed an implementation flaw in the single sign-on service that powers Google Apps following a warning from researchers that remote attackers can exploit a hole to access Google accounts.

“The vulnerability, described in this white paper (.pdf), affects the SAML Single Sign-On Service for Google Apps.

“This US-CERT notice describes the issue:

“A malicious service provider might have been able to access a user’s Google Account or other services offered by different identity providers. [US-CERT actually means ‘by different service providers’ – Kim]

“Google has addressed this issue by changing the behavior of their SSO implementation. Administrators and developers were required to update their identity provider to provide a valid recipient field in their assertions.

“To exploit this vulnerability, an attacker would have to convince the user to login to their site.”

Incredibly basic errors

The paper is by Alessandro Armando, Roberto Carbone, Luca Compagna, Jorge Cuellar, and Llanos Tobarra, who are affiliated with the University of Genoa, Siemens and SAP, and is one of an important series of studies demonstrating the value of automated protocol verification systems.

But the surprising fact is that the errors made are incredibly basic – you don't need an automated protocol verification system to know which way the wind blows.  The industry has known about exactly these problems for a long time now.   Yet people keep making the same mistakes.

Do your own thing

The developers decided to forget about the SAML specification as it's written and just “do their own thing.”  As great as this kind of move might be on the dance floor, it's dangerous when it comes to protecting people's resources and privacy.  In fact it is insidious, since the claim that Google SSO implemented a well-vetted protocol tended to give security professionals a sense of confidence that we understood its capabilities and limitations.  In retrospect, it seems we need independent audit before we depend on anything.  Maybe companies like Fugen can help in this regard?

What was the problem?

Normally, when a SAML relying party wants a user to authenticate through SAML (or WS-Fed), the relying party sends her to an identity provider with a request that contains an ID and a scope (e.g. a URL) to which the resulting token should apply.

For example, in authenticating someone to identityblog, my underlying software would make up a random authentication ID number and the scope would be www.identityblog.com.  The user would carry this information with her when she was redirected to the identity provider for authentication.

The identity provider would then ask for a password, or examine a cookie, and sign an authentication assertion containing the ID number, the scope, the client identity, and the identity provider's identity.  

These properties would be bound together cryptographically in a tamper-proof form, so that the authenticity of the assertion could be verified, and returned to the relying party.  Because of the unique ID, the relying party knows this assertion was freshly minted in response to its request.  Further, since the scope is specified, the assertion can't be abused at some other scope.
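The flow described above can be sketched in a few lines of Python.  This is a deliberately simplified model, not real SAML: the HMAC stands in for the identity provider's XML signature (a real relying party would verify with the IdP's public key), and every name and key in it is illustrative.  The point it demonstrates is that the relying party checks all three things: the signature, the freshness of the ID, and the scope.

```python
import hashlib
import hmac
import json
import secrets

# Stand-in for the identity provider's signing key (illustrative only).
IDP_KEY = b"idp-secret-signing-key"


def sign_assertion(request_id, scope, subject, issuer="idp.example.com"):
    """IdP side: bind the ID, scope, subject and issuer under one signature."""
    payload = json.dumps(
        {"id": request_id, "scope": scope, "subject": subject, "issuer": issuer},
        sort_keys=True,
    ).encode()
    sig = hmac.new(IDP_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}


def verify_assertion(assertion, expected_id, my_scope):
    """Relying-party side: check signature, freshness (ID) and audience (scope)."""
    payload, sig = assertion["payload"], assertion["sig"]
    expected_sig = hmac.new(IDP_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected_sig):
        return None  # tampered with in transit
    claims = json.loads(payload)
    if claims["id"] != expected_id:
        return None  # not minted for this request: a replay
    if claims["scope"] != my_scope:
        return None  # minted for some other relying party
    return claims["subject"]


# A well-behaved round trip:
request_id = secrets.token_hex(8)
assertion = sign_assertion(request_id, "www.identityblog.com", "alice")
assert verify_assertion(assertion, request_id, "www.identityblog.com") == "alice"

# The very same assertion is useless at any other scope:
assert verify_assertion(assertion, request_id, "evil.example.com") is None
```

Because the ID and scope sit inside the signed payload, a rogue service that captures the assertion can neither replay it at its own scope nor edit the scope without breaking the signature.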

But according to the research done by the paper's authors, the Google engineers “simplified” the protocol, perhaps hoping to make it “more efficient.”  They dropped the whole ID and scope “thing” out of the assertion.  All that was signed was the client's identity.

The result was that the relying party had no idea if the assertion was minted for it or for some other relying party.  It was one-for-all and all-for-one at Google.
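To see concretely why this matters, here is a sketch of the flawed variant, with the same caveats: the HMAC stands in for a real signature and all names are illustrative.  With only the subject signed, a relying party has nothing to check beyond the signature itself, so a token handed to one service verifies just as well when presented by a compromised service somewhere else.

```python
import hashlib
import hmac
import json

# Stand-in for the identity provider's signing key (illustrative only).
IDP_KEY = b"idp-secret-signing-key"


def sign_flawed_assertion(subject, issuer="idp.example.com"):
    """Flawed IdP: only the user's identity is signed; no request ID, no scope."""
    payload = json.dumps({"subject": subject, "issuer": issuer},
                         sort_keys=True).encode()
    sig = hmac.new(IDP_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}


def flawed_verify(assertion):
    """Flawed relying party: it can check the signature, but has no way to tell
    whether the assertion was minted for it or for some other service."""
    payload, sig = assertion["payload"], assertion["sig"]
    expected_sig = hmac.new(IDP_KEY, payload, hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, expected_sig):
        return json.loads(payload)["subject"]
    return None


# Alice logs in to one service and it receives this token...
token = sign_flawed_assertion("alice")

# ...but ANY other relying party that gets hold of the token accepts it too,
# because nothing in the signed payload names the intended audience.
assert flawed_verify(token) == "alice"
```

Nothing in the verified claims distinguishes one relying party from another, which is exactly the one-for-all-and-all-for-one property described above.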

Wake up to insider attacks

This might seem reasonable, but it sure would cause me sleepless nights.

The problem is that if you have a huge site like Google, which brings together many hundreds (thousands?) of services, then with this approach, if even ONE of them “goes bad”, the penetrated service can use any tokens it gets to impersonate those users at any other Google relying party service.

It is a red carpet for insider attacks.  It is as though someone didn't know that insider attacks represent the overwhelming majority of attacks.  Partitioning is the key weapon we have in fighting these attacks.  And the Google design threw partitioning to the wind.  One hole in the hull and the whole ship goes down.

Indeed the qualifying note in the ZD article that “to exploit this vulnerability, an attacker would have to convince the user to login to their site” misses the whole point about how this vulnerability facilitates insider attacks.  This is itself worrisome since it seems that thinking about the insider isn't something that comes naturally to us.

Then it gets worse.

This is all pretty depressing but it gets worse.  At some point, Google decided to offer SSO to third party sites.  But according to the researchers, at this point, the scope still was not being verified.  Of course the conclusion is that any service provider who subscribed to this SSO service – and any wayward employee who could get at the tokens – could impersonate any user of the third party service and access their accounts anywhere within the Google ecosystem.

My friends at Google aren't going to want to be lectured about security and privacy issues – especially by someone from Microsoft.  And I want to help, not hinder.

But let's face it.  As an industry we shouldn't be making the kinds of mistakes we made 15 or 20 years ago.  There must be better processes in place.  I hope we'll get to the point where we are all using vetted software frameworks so this kind of do-it-yourself brain surgery doesn't happen. 

Let's work together to build our systems to protect them from inside jobs.  If we all had this as a primary goal, the Google SSO fiasco could not have happened.  And as I'll make clear in an upcoming post, I'm feeling very strongly about this particular issue.

Jerry Fishenden in the Financial Times

It's encouraging to see people like Jerry Fishenden figuring out how to take the discussion about identity issues to a mainstream audience.  Here's a piece he wrote for the Financial Times:


If you think the current problems of computer security appear daunting, what is going to happen as the internet grows beyond web browsing and e-mail to pervade every aspect of our daily lives? As the internet powers the monitoring of our health and the checking of our energy saving devices in our homes for example, will problems of cybercrime and threats to our identity, security and privacy grow at the same rate?

One of the most significant contributory causes of existing internet security problems is the lack of a trustworthy identity layer. I can’t prove it’s me when I’m online and I can’t prove to a reasonable level of satisfaction whether the person (or thing) I’m communicating or transacting with online is who or what they claim to be. If you’re a cybercrook, this is great news and highly lucrative since it makes online attacks such as phishing and spam e-mail possible. And cybercrooks are always among the smartest to exploit such flaws.

If we’re serious about realising technology’s potential, security and privacy issues need to be dealt with – certainly before we can seriously contemplate letting the internet move into far more important areas, such as assisting our healthcare at home. How are we to develop such services if none of the devices can be certain who or what they are communicating with?

In front of us lies an age in which everything and everyone is linked and joined through an all pervading system – a worldwide digital mesh of billions of devices and communications happening every second, a complex grid of systems communicating within and between each other in real time.

But how can it be built – and trusted – without the problem of identity being fixed?

So what is identity anyway? For our purposes, identity is about people – and “things”: the physical fabric of the internet and everything in (or on) it. And ultimately it’s about safeguarding our security and privacy.

If we’re to avoid exponential growth of the security and privacy issues that plague the current relatively simple internet as we enter the pervasive, complex grid age, what principles do we adhere to? How can we have a secure, trusted, privacy-aware internet that will be able to fulfil its potential – and deserve our trust too?

The good news is that these problems are already being addressed. Technology now makes possible an identity infrastructure that simultaneously addresses the security and public service needs of government as well as those of private sector organisations and the privacy needs of individuals.

Privacy-enhancing security technologies now exist that enable the secure sharing of identity-related information in a way that ensures privacy for all parties involved in the data flow. This technology includes “minimal disclosure tokens” which enable organisations securely to share identity-related information in digital form via the individuals to whom it pertains, thereby ensuring security and privacy for all parties involved in the data flow.

These minimal disclosure tokens also guard against the unauthorised manipulation of our personal identity information, not only by outsiders such as professional cybercrooks but also by the individuals themselves. The tokens enable us to see what personal information we are about to share, which ensures full transparency about what aspects of our personal information we divulge to people and things on the internet. This approach lets individuals selectively disclose only those aspects of their personal information relevant for them to gain access to a particular service.

Equally important, we can also choose to disclose such selective identity information without leaving behind data trails that enable third parties to link, trace and aggregate all of our actions. This prevents one of the current ways that third parties use to collate our personal information without our knowledge or consent. For example, a minimal disclosure token would allow a citizen to prove to a pub landlord they are over 18 but without revealing anything else, not even their date of birth or specific age.

These new technologies help to avoid the problem of centralised systems that can electronically monitor in real time all activities of an individual (and hence enable those central systems surreptitiously to access the accounts of any individual). Such models are bad practice in any case since such central parties themselves become attractive targets for security breaches and insider misuse. Centralised identity models have been shown to be a major source of identity fraud and theft, and to undermine the trust of those whose identities they are meant to safeguard.

It is of course important to achieve the right balance between the security needs of organisations, both in private and public sectors, and the public’s right to be left alone. Achieving such a balance will help restore citizens’ trust in the internet and broader identity initiatives, while also reducing the data losses and identity thefts that arise from current practices.

Now that the technology industry is implementing all of the components necessary to establish such secure, privacy-aware infrastructures, all it takes is the will to embrace and adopt them. Only by doing so will we all be able to enjoy the true potential of the digital age – in a secure and privacy-aware fashion.