Hole in Google SSO service

Some days you get up and wish you hadn't.  How about this one:

Google SSO problem

The ZDNet article begins by reassuring us:

“Google has fixed an implementation flaw in the single sign-on service that powers Google Apps following a warning from researchers that remote attackers can exploit a hole to access Google accounts.

“The vulnerability, described in this white paper (.pdf), affects the SAML Single Sign-On Service for Google Apps.

“This US-CERT notice describes the issue:

“A malicious service provider might have been able to access a user’s Google Account or other services offered by different identity providers. [US-CERT actually means ‘by different service providers’ – Kim]

“Google has addressed this issue by changing the behavior of their SSO implementation. Administrators and developers were required to update their identity provider to provide a valid recipient field in their assertions.

“To exploit this vulnerability, an attacker would have to convince the user to login to their site.”

Incredibly basic errors

The paper is by Alessandro Armando, Roberto Carbone, Luca Compagna, Jorge Cuellar, and Llanos Tobarra, who are affiliated with the University of Genoa, Siemens, and SAP.  It is one of an important series of studies demonstrating the value of automated protocol verification systems.

But the surprising fact is that the errors made are incredibly basic – you don't need an automated protocol verification system to know which way the wind blows.  The industry has known about exactly these problems for a long time now.   Yet people keep making the same mistakes.

Do your own thing

The developers decided to forget about the SAML specification as it's written and just “do their own thing.”  As great as this kind of move might be on the dance floor, it's dangerous when it comes to protecting people's resources and privacy.  In fact it is insidious, since the claim that Google SSO implemented a well-vetted protocol tended to give security professionals a sense of confidence that we understood its capabilities and limitations.  In retrospect, it seems we need independent audit before we depend on anything.  Maybe companies like Fugen can help in this regard?

What was the problem?

Normally, when a SAML relying party wants a user to authenticate through SAML (or WS-Fed), the relying party sends her to an identity provider with a request that contains an ID and a scope (e.g. a URL) to which the resulting token should apply.

For example, in authenticating someone to identityblog, my underlying software would make up a random authentication ID number, and the scope would be www.identityblog.com.  The user would carry this information with her when she was redirected to the identity provider for authentication.

The identity provider would then ask for a password, or examine a cookie, and sign an authentication assertion containing the ID number, the scope, the client identity, and the identity provider's identity.  

These properties would be bound together cryptographically in a tamperproof form whose authenticity the relying party could verify, and returned to it.  Because of the unique ID, the relying party knows the assertion was freshly minted in response to its request.  Further, since the scope is specified, an assertion obtained for one scope can't be abused at some other scope.
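To make this concrete, here is a minimal sketch in Python of the flow just described.  It is purely illustrative – not any product's actual code: the fields in_response_to and scope stand in for SAML's InResponseTo and Recipient/Audience values, and an HMAC over JSON with a shared demo key stands in for the XML signature a real identity provider would apply with its private key.

```python
import hashlib
import hmac
import json
import secrets

IDP_KEY = b"demo-shared-key"  # stand-in: a real IdP signs XML with its private key

def sign(payload):
    """Toy identity-provider signature over a JSON payload."""
    blob = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "sig": hmac.new(IDP_KEY, blob, hashlib.sha256).hexdigest()}

def verify(assertion):
    """Check the signature and return the signed claims."""
    blob = json.dumps(assertion["payload"], sort_keys=True).encode()
    expected = hmac.new(IDP_KEY, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, assertion["sig"]):
        raise ValueError("signature invalid")
    return assertion["payload"]

# Relying party: mint a fresh request ID, remember it, send the user off.
outstanding = {}

def make_auth_request(scope):
    request_id = secrets.token_hex(16)
    outstanding[request_id] = scope
    return {"ID": request_id, "scope": scope}

# Identity provider: authenticate the user, then bind everything together.
def idp_issue(request, subject):
    return sign({
        "in_response_to": request["ID"],  # cf. SAML's InResponseTo
        "scope": request["scope"],        # cf. SAML's Recipient / Audience
        "subject": subject,
        "issuer": "idp.example.com",
    })

# Relying party: accept only assertions minted for THIS request and THIS scope.
def rp_consume(assertion, my_scope):
    claims = verify(assertion)
    if outstanding.pop(claims["in_response_to"], None) != my_scope:
        raise ValueError("no outstanding request with this ID")
    if claims["scope"] != my_scope:
        raise ValueError("assertion scoped to a different relying party")
    return claims["subject"]

request = make_auth_request("www.identityblog.com")
token = idp_issue(request, subject="alice")
print(rp_consume(token, "www.identityblog.com"))  # -> alice
```

The two checks in rp_consume are the whole point: they make an assertion useless anywhere except the place it was minted for.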

But according to the research done by the paper's authors, the Google engineers “simplified” the protocol, perhaps hoping to make it “more efficient.”  They dropped the whole ID and scope “thing” out of the assertion.  All that was signed was the client's identity.

The result was that the relying party had no idea if the assertion was minted for it or for some other relying party.  It was one-for-all and all-for-one at Google.
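Against that baseline the flaw is easy to show.  In the same toy terms (again, a sketch of the shape the researchers describe, not Google's actual code), verification collapses to nothing but a signature check once the ID and scope are gone:

```python
import hashlib
import hmac
import json

IDP_KEY = b"demo-shared-key"  # same illustrative stand-in key as in the sketch above

def sign(payload):
    blob = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "sig": hmac.new(IDP_KEY, blob, hashlib.sha256).hexdigest()}

def verify(assertion):
    blob = json.dumps(assertion["payload"], sort_keys=True).encode()
    expected = hmac.new(IDP_KEY, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, assertion["sig"]):
        raise ValueError("signature invalid")
    return assertion["payload"]

# The flawed shape: only the user's identity is signed -- no request ID, no scope.
def idp_issue_flawed(subject):
    return sign({"subject": subject, "issuer": "idp.example.com"})

def rp_consume_flawed(assertion):
    return verify(assertion)["subject"]  # a signature check and nothing else

token = idp_issue_flawed("alice")
# Every relying party runs the identical check, so one token works everywhere.
# A service that "goes bad" can forward the tokens it receives and impersonate
# the user at any sibling service.
print(rp_consume_flawed(token))  # -> alice, at mail, calendar, anywhere
```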

Wake up to insider attacks

This might seem reasonable, but it sure would cause me sleepless nights.

The problem is that a huge site like Google brings together many hundreds (thousands?) of services.  With this approach, if even ONE of them “goes bad,” the penetrated service can use any token it receives to impersonate the user at any other Google relying party service.

It is a red carpet for insider attacks.  It is as though someone didn't know that insider attacks represent the overwhelming majority of attacks.  Partitioning is the key weapon we have in fighting these attacks.  And the Google design threw partitioning to the wind.  One hole in the hull and the whole ship goes down.

Indeed the qualifying note in the ZDNet article that “to exploit this vulnerability, an attacker would have to convince the user to login to their site” misses the whole point about how this vulnerability facilitates insider attacks.  This is itself worrisome, since it seems that thinking about the insider isn't something that comes naturally to us.

Then it gets worse.

This is all pretty depressing, but it gets worse.  At some point, Google decided to offer SSO to third-party sites.  Yet according to the researchers, the scope was still not being verified at that point.  The conclusion: any service provider who subscribed to this SSO service – and any wayward employee who could get at its tokens – could impersonate any user of that third-party service and access their accounts anywhere within the Google ecosystem.
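Read against these sketches, the remediation US-CERT describes – identity providers must “provide a valid recipient field in their assertions” – amounts to restoring the scope check.  Here is a hedged sketch of what that check looks like, using the same illustrative helpers and stand-in key as above:

```python
import hashlib
import hmac
import json

IDP_KEY = b"demo-shared-key"  # illustrative stand-in key, as above

def verify(assertion):
    blob = json.dumps(assertion["payload"], sort_keys=True).encode()
    expected = hmac.new(IDP_KEY, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, assertion["sig"]):
        raise ValueError("signature invalid")
    return assertion["payload"]

# The recipient goes back into the signed payload, and each service refuses
# assertions addressed to anyone else.
def rp_consume_fixed(assertion, my_scope):
    claims = verify(assertion)
    if claims.get("scope") != my_scope:  # the "valid recipient field"
        raise ValueError("assertion addressed to a different service")
    return claims["subject"]
```

With the recipient inside the signature, a token captured by one service is no longer a skeleton key for the rest.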

My friends at Google aren't going to want to be lectured about security and privacy issues – especially by someone from Microsoft.  And I want to help, not hinder.

But let's face it.  As an industry we shouldn't be making the kinds of mistakes we made 15 or 20 years ago.  There must be better processes in place.  I hope we'll get to the point where we are all using vetted software frameworks so this kind of do-it-yourself brain surgery doesn't happen. 

Let's work together to build our systems to protect them from inside jobs.  If we all had this as a primary goal, the Google SSO fiasco could not have happened.  And as I'll make clear in an upcoming post, I'm feeling very strongly about this particular issue.

Published by

Kim Cameron

Work on identity.

3 thoughts on “Hole in Google SSO service”

  1. This is truly horrible, indeed.

    On the other hand, here is one thing I don't understand about Live ID: When I go to any site that uses Live ID as its SSO solution and log in, I am redirected to the Live ID login page. BY DEFAULT it is not encrypted, i.e. no SSL, no green header saying this is by MS or anything like that. There is a tiny little link at the bottom saying “Use enhanced security” or something like that which will redirect me to an https version of the site, but there is no chance on earth someone like my parents will ever use that (nor 80% of the rest, I'd be willing to bet). I then have to type in username and password, click a button, this gets sent via https (but only now!) back to Live ID, and then I am redirected back to the original site.

    Here is a super simple attack on that, although I am not sure whether it will truly work. Let's assume I sit in an internet cafe. The provider of that cafe can easily assign my computer, via DHCP, a DNS server that he himself runs. In that DNS server he can point login.live.com to his own server and show a fake Live ID site. When I now surf to http://www.hotmail.com, I get redirected to login.live.com, but in fact I will go to the site hosted by the internet cafe provider. At this point there is no way for me to tell whether I am actually on the Live ID hosted site or some malicious site; I have to click on “Use enhanced security version” to actually get to a site where I can make that trust decision. But my parents (and 80% of the rest of the population) don't know anything about that. And at that point the internet cafe guy can super easily steal the username and password for Live ID, right? Isn't that totally crazy? Or am I missing something? Why isn't the Live ID sign-in page SSL-encrypted with a certificate by default?

    I find this particularly disturbing (if I am actually right) because the Live ID by now is one of the most valuable assets around. Just think what one can access with it, in the case of an “ideal MS user”: all the health records (HealthVault), lots of office documents (Office Live Workspace), all business information, including financial stuff (Office Live Small Business), photos, email, contacts, and it doesn't end there. Someone's Live ID must be one of the most precious things one can steal, at this point…

    But, to make a long story short: why doesn't the “secure by default” principle apply here? Why isn't the https-encrypted page the default for Live ID login? Isn't that a much worse security hole?

  2. Oh, and actually, in my profile I selected that my nickname be shown with the comments, not my full name. Can you please remove my full name from the comments and show only my nickname? I'm not happy at all that this was disclosed publicly when I selected a different option… Thanks!

    Kim responds: David, I have fixed the way your name is shown. Sorry if the options didn't behave properly – I'm using WordPress with Pamelaware and have installed some upgrades recently. I will drill into this.

  3. Thanks!

    Oh, and any thoughts on my point about the https? I have the strong feeling that I am missing something fundamental in my description and that in fact such a simple attack could not work… But I can't figure out why.

    [Kim replies] You are right – service providers should ALWAYS use https when requesting passwords.

    Right now a Hotmail user has to “opt in” to using https to submit her password – she must actually click the “use enhanced security” link.

    I don't understand this. The default should be to use the secure option.

    On a more positive note, HealthVault users ARE required to use enhanced security. They can only get around this if they are already logged in to Live ID before going to HealthVault.

    This said, the non-SSL tradeoffs are at a different “level” than fundamental errors in your security implementation – like the ones in Google's SSO.

Comments are closed.