In a piece called Pleading down the Charges, Jeff Bohren of talkBMC refers to the discussion I've had with Conor about invisible redirection as ‘inflammatory’, and adds:
“In subsequent exchanges Kim and Conor plead the charges down from a felony to a misdemeanor. Kim allows that the redirection is OK so long as the IdP is completely trusted, but he is concerned about the case where the IdP is not trustworthy…”
It's probably true that my “hand in wallet” metaphor was a bit stark. But how can I put this? I'm doing a threat analysis. Saying everything is OK because people are trustworthy really doesn't get us very far. Even a trustworthy IdP can be attacked; threats remain real even in the light of mitigations.
When we put on our security hats, and look at the security of a system, we try as hard as we can to explore every possible thing that can go wrong, and develop a complete profile of the attack vectors. No one says, “Hey, don't talk about that attack, because we've done this or that to prevent it.” Instead, we list the attack, we list what we do to mitigate it, and we understand the vulnerability. We need to do the same thing around the privacy attack vectors. It is revealing that this doesn't seem to be our instinct at this point in time, and reminds me of the days, before the widespread vulnerability of computer systems became apparent, when people who brought up potential security vulnerabilities were sent to stand in the corner.
What is missing from this discussion is the point that “automatic redirection” is not mandated by SAML. Redirection, yes, but automatic redirection is not required. The SP could very well have presented a page to the user that says:
“Your browser is about to be redirected to www.youridp.com for the purposes of establishing your identity. If you consent to this redirection, press Continue. If you do not consent, press Cancel….”
Correct. This could be done. But information can also be made to fly around with zero visibility to the user. And that represents a risk.
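To make the alternative Jeff describes concrete, here is a minimal sketch of an SP endpoint that gates the redirect on explicit consent rather than issuing it automatically. All of the names here (`IDP_URL`, `build_response`, the response-dict shape) are illustrative assumptions, not part of SAML or any particular library, and the `SAMLRequest` encoding is simplified.

```python
# Sketch: consent interstitial instead of automatic redirection.
# Instead of an immediate 302 to the IdP, the SP first shows a page
# telling the user where they are about to be sent.
from urllib.parse import urlencode

IDP_URL = "https://www.youridp.com/sso"  # hypothetical IdP endpoint


def build_response(saml_request: str, user_consented: bool) -> dict:
    """Return a minimal HTTP response dict: a consent page, or a redirect."""
    target = f"{IDP_URL}?{urlencode({'SAMLRequest': saml_request})}"
    if not user_consented:
        # No automatic redirection: the user sees, and can refuse, the hop.
        body = (
            f"Your browser is about to be redirected to {IDP_URL} "
            "for the purposes of establishing your identity. "
            "Press Continue to consent, or Cancel to refuse."
        )
        return {"status": 200, "body": body}
    # Consent given: perform the redirect the protocol flow requires.
    return {"status": 302, "headers": {"Location": target}}
```

The point of the sketch is only that the user-visible page and the invisible 302 carry exactly the same protocol information; what changes is whether the user ever sees it.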
Nobody does this kind of warning because the average user doesn’t want to be bothered and isn’t concerned with it. Not as concerned as, for instance, having a stranger reach into their pocket.
Actually, thanks to “invisible system design”, the “average user” has no idea how her personal information is being sent around, or that with redirection protocols, her own browser is the covert channel for sharing her identity information between sites. This might be all right inside an enterprise, where there is an implicit understanding that the enterprise shares all kinds of personal information. It might even be OK in a portal, where I go to a financial institution and expect it to share my information with its various departments and subsidiaries. But in the age of identity theft, I'm not so sure she would be unconcerned with the invisible exchange of identity information between contextually unrelated sites. I think she would probably feel as though a stranger were reaching into her wallet.
To be clear, my initial thinking about the “hand in wallet” came not from SAML, but from X.509, where the certificates described in Beyond maximal disclosure tokens are routinely and automatically released to any site that asks for them, without any user approval. SAML can be better in this regard, since the IdP is able to judge the identity of the RP before releasing anything to it. In this sense, not just any hand can reach into your wallet – just a hand approved by the “card issuer”… This is better for sure.
Do we need to nag users as Jeff suggests might be the alternative? No. Give the user a smart client, as is the case with CardSpace or Higgins, and whole new user experiences are possible that are “post nagging”. The invisibility threat is substantially reduced.
In my next post in this series I'm going to start looking at CardSpace and linkability.