Linkage in “redirect” protocols like SAML

Moving on from certificates in our examination of identity technology and linkability, we'll look next at the redirection protocols – SAML, WS-Federation and OpenID.  These work as shown in the following diagram.  Let's take SAML as our example. 

In step 1, the user goes to a relying party and requests a resource using an http “GET”.  Assuming the relying party wants proof of identity, it returns 2), an http “redirect” that contains a “Location” header.  This header will necessarily include the URL of the identity provider (IP), and a bunch of goop in the URL query string that encodes the SAML request.    

For example, the redirect might look something like this:

HTTP/1.1 302 Object Moved
Date: 21 Jan 2004 07:00:49 GMT
Location:
https://ServiceProvider.com/SAML/SLO/Browser?SAMLRequest=fVFdS8MwFH0f7D%
2BUvGdNsq62oSsIQyhMESc%2B%2BJYlmRbWpObeyvz3puv2IMjyFM7HPedyK1DdsZdb%........
2F%
50sl9lU6RV2Dp0vsLIy7NM7YU82r9B90PrvCf85W%2FwL8zSVQzAEAAA%3D%
3D&RelayState=0043bfc1bc45110dae17004005b13a2b&SigAlg=http%3A%2F%
2Fwww.w3.org%2F2000%2F09%2Fxmldsig%23rsa-sha1&
Signature=NOTAREALSIGNATUREBUTTHEREALONEWOULDGOHERE
Content-Type: text/html; charset=iso-8859-1

The user's browser receives the redirect and then behaves as a good browser should, doing the GET at the URL represented by the Location header, as shown in 3). 
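
To make the mechanics concrete, here is a minimal Python sketch of how a relying party might assemble that Location header under SAML's HTTP-Redirect binding: the request XML is DEFLATE-compressed, base64-encoded and URL-encoded into the query string. The URLs and request body below are placeholders for illustration, and the signature parameters (SigAlg, Signature) are omitted for brevity.

import base64
import zlib
from urllib.parse import urlencode

def build_redirect_url(idp_sso_url: str, saml_request_xml: str, relay_state: str) -> str:
    """Build the Location header value for the SAML HTTP-Redirect binding."""
    # Raw DEFLATE: zlib.compress adds a 2-byte header and a 4-byte checksum,
    # which the binding does not use, so strip them off.
    deflated = zlib.compress(saml_request_xml.encode("utf-8"))[2:-4]
    encoded = base64.b64encode(deflated).decode("ascii")
    query = urlencode({"SAMLRequest": encoded, "RelayState": relay_state})
    return f"{idp_sso_url}?{query}"

# Illustrative values only; not a real identity provider or request.
location = build_redirect_url(
    "https://idp.example.org/SAML/SSO/Browser",
    "<samlp:AuthnRequest ID='_abc123'>...</samlp:AuthnRequest>",
    "0043bfc1bc45110dae17004005b13a2b",
)
print("HTTP/1.1 302 Found")
print("Location: " + location)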

The question of how the relying party knows which identity provider URL to use is open-ended.  In a portal scenario, the address might be hard-wired, pointing to the portal's identity provider.  Or in OpenID, the user manually enters information that can be used to figure out the URL of the identity provider (see the associated dangers).

The next question is, “How does the identity provider return the response to the relying party?”  As you might guess, the same redirection mechanism is used again in 4), but this time the identity provider fills out the Location header with the URL of the relying party, and the goop is the identity information required by the RP.  As shown in 5), the browser responds to this redirection information by obediently posting back to the relying party.
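
The return leg in 4) can be sketched the same way. The thing to notice is that the identity provider cannot build this redirect without being handed the relying party's return URL, and it fabricates the response itself, so it sees every claim inside it. The URL and response body here are again illustrative placeholders.

import base64
import zlib
from urllib.parse import urlencode

def build_response_redirect(rp_return_url: str, saml_response_xml: str, relay_state: str) -> str:
    """Step 4: the IdP points the browser back at the relying party.

    The IdP needs rp_return_url to do this at all, which is why it always
    learns which relying party the user is visiting.
    """
    deflated = zlib.compress(saml_response_xml.encode("utf-8"))[2:-4]
    encoded = base64.b64encode(deflated).decode("ascii")
    query = urlencode({"SAMLResponse": encoded, "RelayState": relay_state})
    return f"{rp_return_url}?{query}"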

Note that all of this can occur without the user being aware that anything has happened or having to take any action.  For example, the user might have a cookie that identifies her to her identity provider.  Then if she is sent through steps 2) to 4), she will likely see nothing but a little flicker in her status bar as different addresses flash by.  (This is why I often compare redirection to a world where, when you enter a store to buy something, the sales clerk reaches into your pocket, pulls out your wallet and debits your credit card without you knowing what is going on — trust us…)

Since the identity provider is tasked with telling the browser where to send the response, it MUST know what relying party you are visiting.  Because it fabricates the returned identity token, it MUST know all the contents of that token.

So, returning to the axes for linkability that we set up in Evolving Technology for Better Privacy, we see that from an identity point of view, the identity provider “sees all” – without the requirement for any collusion.  Knowing each other's identity, the relying party and the identity provider can, in the absence of appropriate policy and suitable auditing, exchange any information they want, either through the redirection channel, or through a “back channel” that dispenses with the user and her browser altogether. 

In fact all versions of SAML include an “artifact” binding intended to facilitate this.  The intention of this mechanism is that only a “handle” need be exchanged through the browser redirection channel, with the assumption that the IP and RP can then hook up and use the handle to “collaborate” about the user without her participation.
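
A purely conceptual sketch of the artifact pattern follows: the browser carries only a short handle, and the relying party swaps it for the real assertion over a direct call to the identity provider. The in-memory dictionary stands in for whatever resolution service a real deployment would expose; none of the names below come from the SAML specification.

import secrets

# Hypothetical store standing in for the identity provider's state.
idp_issued = {}   # artifact -> full assertion about the user

def idp_issue_artifact(assertion: dict) -> str:
    """The IdP keeps the real assertion and hands the browser only a handle."""
    artifact = secrets.token_urlsafe(16)
    idp_issued[artifact] = assertion
    return artifact

def rp_resolve_artifact(artifact: str) -> dict:
    """The RP calls the IdP directly (no browser involved) to swap the handle
    for the assertion: the back channel described above."""
    return idp_issued[artifact]

# The browser only ever carries the artifact; the exchange below is invisible
# to the user.
artifact = idp_issue_artifact({"subject": "transient-id-42", "affiliation": "staff"})
assertion = rp_resolve_artifact(artifact)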

In considering the use cases for which SAML was designed, it is important to remember that redirection was not originally designed to put the “user at the center”, but rather was “intended for cases in which the SAML requester and responder need to communicate using an HTTP user agent… for example, if the communicating parties do not share a direct path of communication.”  In other words, an IP/RP collaboration use case.

As Paul Madsen reminded us in a recent comment, SAML 2.0 introduced a new element called RelayState that provides another means for synchronizing or exchanging information between the identity provider and the relying party; again, this demonstrates the great amount of trust a user must place in a SAML identity provider.

There are other SAML bindings that vary slightly from the redirect binding described above (for example, there is an HTTP POST binding that gets around the payload size limitations involved with the redirected GET, as Pat Patterson has pointed out).  But nothing changes in terms of the big picture.  In general, we can say that the redirection protocols give the IP much greater visibility into the RPs than was the case with X.509.
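
For comparison, here is a rough sketch of the shape of the POST binding: instead of squeezing the message into a URL, the sender returns an auto-submitting HTML form carrying the base64-encoded message, which sidesteps the query-string size limits. The destination URL is a placeholder, and signatures and other details a real deployment needs are omitted.

import base64

def build_post_form(destination_url: str, saml_response_xml: str, relay_state: str) -> str:
    """Return an HTML page whose form auto-posts the base64-encoded SAML
    message to the destination (there is no DEFLATE step in the POST binding)."""
    encoded = base64.b64encode(saml_response_xml.encode("utf-8")).decode("ascii")
    return f"""<html><body onload="document.forms[0].submit()">
  <form method="post" action="{destination_url}">
    <input type="hidden" name="SAMLResponse" value="{encoded}"/>
    <input type="hidden" name="RelayState" value="{relay_state}"/>
  </form>
</body></html>"""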

I certainly do not see this as all bad.  It can be useful in many cases – for example when you would like your financial institution to verify the identity of a commercial site before you release funds to it.  But the important point is this:  the protocol pattern is only appropriate for a certain set of use cases, reminding us why we need to move towards a multi-technology metasystem. 

It is possible to use the same SAML payloads in more privacy-protecting ways by using a different wire protocol and putting more intelligence and control on the client.  This is the case for CardSpace in non-auditing mode, and Conor Cahill points out that SAML's Enhanced Client or Proxy (ECP) Profile has similar goals.  Privacy is one of the important reasons why evolving towards an “active client” has advantages.

You might ask why, given the greater visibility of IP on RP, I didn't put the redirection protocols at the extreme left of my identity technology privacy spectrum.  The reason is that the probability of RP/RP collusion CAN be greatly reduced when compared to X.509 certificates, as I will show next.

What does the identity provider know?

I appreciate the correction by Irving Reid in a posting called What did the identity provider know?

…And when did it know it?

Kim Cameron sums up the reasons why we need to understand the technical possibilities for how digital identity information can affect privacy; in short, we can’t make good policy if we don’t know how this stuff actually works.

But I want to call out one assertion he (and he’s not the only one) makes:

 First, part of what becomes evident is that with browser-based technologies like Liberty, WS-Federation and OpenID,  NO collusion is actually necessary for the identity provider to “see everything”.

The identity provider most certainly does not “see everything”. The IP sees which RPs you initiate sessions with and, depending on configuration, has some indication of how long those sessions last. Granted, that is *a lot* of information, but it’s far from “everything”. The IP must collude with the RPs to get any information about what you did at the RP during the session.

Completely right. I'll try to make this clearer as I go on. Without collusion, the IP doesn't know how the user actually behaved while at the RP.  I was too focussed on the “identity channel”, thinking about the fact that the IP knows times, what RPs were visited, and what claims were released for each particular user to each RP.

Collusion takes effort; how much?

Eric Norman, from the University of Wisconsin, has a new blog called Fun with Metaphors, and an independent spirit that is both attractive and informed.  He weighs in on our recent discussion with Collusion takes effort:

Now don't get me wrong here. I'm all for protection of privacy. In fact, I have been credited by some as raising consciousness about 8 years ago (pre-Shibboleth) in the Internet2 community to the effect that privacy concerns need to be dealt with in the beginning and at a fundamental level instead of being grafted on later as an afterthought.

There have been recent discussions in the blogosphere about various parties colluding to invade someone's privacy. What I would like to see during such discussions is a more ecological and risk-assessing approach. I'll try to elaborate.

The other day, Kim Cameron analyzed sundry combinations of colluding parties and identity systems to find out what collusion is possible and what isn't. That's all well and good and useful. It answers questions about what's possible in a techno- and crypto- sense. However, I think there's more to the story.

The essence of the rest of the story is that collusion takes effort and motivation on the part of the conspirators. Such effort would act as a deterrent to the formation of such conspiracies and might even make them not worthwhile.

Just the fact that privacy violations would take collusion might be enough to inhibit them in some cases. This is a lightweight version of separation of duty — the nuclear launch scenario; make sure the decision to take action can't be unilateral.

In some of the cases, not much is said about how the parties that are involved in such a conspiracy would find each other. In the case of RPs colluding with each other, how would one of the RPs even know that there's another RP to conspire with and who the other RP is? That would involve a search and I don't think they could just consult Google. It would take effort.

Just today, Kaliya reported another example. A court has held that email is subject to protection under the Fourth Amendment and therefore a subpoena is required for collusion. That takes a lot of effort.

Anyway, the message here is that it is indeed useful to focus on just the technical and cryptographic possibilities. However, all that gets you is a yes/no answer about what's possible and what's not. Don't forget to also include the effort it would actually take to make such collusions happen.

First of all, I agree that the technical and crypto possibilities are not the whole story of linkability.  But they are a part of the story we do need to understand a lot more objectively than is currently the case.  Clearly this applies to technical people, but I think the same goes for policy makers.  Let's get to the point where the characteristics of the systems can be discussed without emotion or the bias of any one technology.

Now let's turn to one of Eric's main points: the effort required for conspirators to collude would act as a deterrent to the formation of such conspiracies.

First, part of what becomes evident is that with browser-based technologies like Liberty, WS-Federation and OpenID,  NO collusion is actually necessary for the identity provider to “see everything” – in the sense of all aspects of the identity exchange.  That in itself may limit use cases.   It also underlines the level of trust the user MUST place in such an IP.  At the very minimum, all the users of the system need to be made aware of how this works.  I'm not sure that has been happening…

Secondly, even if you blind the IP as to the identity of the RP, you clearly can't prevent the inverse, since the RP needs to know who has made the claims!  Even so,  I agree that this blinding represents something akin to “separation of duty”, making collusion a lot harder to get away with on a large scale.

So I really am trying to set up this continuum to allow for “risk assessment” and concrete understanding of different use cases and benefits.  In this regard Eric and I are in total agreement.

As a concrete example of such risk assessment, people responsible for privacy in government have pointed out to me that their systems are tightly connected, and are often run by entities who provide services across multiple departments.  They worry that in this case, collusion is very easy.  Put another way, the separation of duties is too fragile.

Assemble the audit logs and you collude.  No more to it than that.  This is why they see it as prudent to put in place a system with properties that make routine creation of super-dossiers more difficult.  And why we need to understand our continuum. 

Revealing patterns when there is no need to do so

Irving Reid of Controlled Flight into Terrain has come up with exactly the kind of use case I wanted to see when I was thinking about Paul Madsen's points:

Kim Cameron responds to Paul Madsen responding to Kim Cameron, and I wonder what it is about Canadians and identity…

But I have to admit that I have not personally been that interested in the use case of presenting “managed assertions” to amnesiac web sites.  In other words, I think the cases where you would want a managed identity provider for completely amnesiac interactions are fairly few and far between.  (If someone wants to turn me around in this regard I’m wide open.)

Shibboleth, in particular, has a very clear requirement for this use case. FERPA requires that educational institutions disclose the least possible information about students, staff and faculty to their partners. The example I heard, back in the early days of SAML, was of an institution that had a contract with an on-line case law research provider such that anyone affiliated with the law school at that institution could look up cases.

In this case, the “managed identity provider” (representing the educational institution) needs to assert that the person visiting right now is affiliated with the law school. However, the provider has no need to know anything more than that, and therefore the institution has a responsibility under FERPA to not give the provider any extra information. “The person looking up Case X right now is the same person who looked up Case Y last week” is one of the pieces of information the institution shouldn’t share with the provider.

Put this way it is obvious that it breaks the law of minimal disclosure to reveal that “the person looking up Case X right now is the same person who looked up Case Y last week” when there is no need to do so.
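
To make the contrast concrete, here is a small illustrative sketch of the two disclosure styles. The attribute names and values are invented for the example; they are not Shibboleth's actual attribute vocabulary.

import secrets

# What the law-school case needs: an affiliation claim plus an identifier
# that changes on every visit, so visits cannot be linked to each other.
minimal_assertion = {
    "subject": secrets.token_urlsafe(12),   # fresh transient ID each time
    "claims": {"affiliation": "law-school-member"},
}

# Over-disclosure: a persistent identifier lets the provider link
# "looked up Case X" to "looked up Case Y last week".
over_disclosing_assertion = {
    "subject": "student-7731",              # stable across visits
    "claims": {"affiliation": "law-school-member", "name": "J. Doe"},
}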

I initially didn't see that a pseudonymous link between Case X and Case Y would leak very much information.  But on reflection, in the competitive world of academic research, these linkages could benefit an observer by revealing patterns the observer would not otherwise be aware of.  He might not know whose research he was observing, but might nonetheless cobble a paper together faster than the original researcher, beating him in terms of publication date.

I'll include this example in discussing some of the collusion issues raised by various identity technologies.

Colluding with yourself

Further to Dave Kearns' article, here is the complete text of Paul Madsen's comment:

Kim Cameron introduces a nice diagram into his series exploring linkability & correlation in different identity systems.

Kim categorizes correlation as either ‘IP sees all’, ‘RP/RP collusion’, or ‘RP/IP collusion’, depending on which two entities can ‘talk’ about the user.

A meaningful distinction for RP/RP collusion that Kim omits (at least in the diagram and in his discussion of X.509) is ‘temporal self-correlation’, i.e. that in which the same RP is able to correlate the same user's visits occurring over time.

Were an IDP to use transient (as opposed to persistent pseudonymous) identifiers within a SAML assertion each time it asserted to a RP, then not only would RP's be unable to collude with each other (based on that identifier), they'd be unable to collude with themselves (the past or future themselves).

I was working on a diagram comparable to Kim's, but got lost in the additional axis for representing time (e.g. ‘what the provider knows and when they learned it’ when considering collusion potential).

Separately, Kim will surely acknowledge at some point (or already has) that these identity systems, with their varying degrees of inhibiting correlation & subsequent collusion, will all be deployed in an environment that, by default, does not support the same degree of obfuscation. Not to say that designing identity systems to inhibit correlation isn't important & valuable for privacy, just that there is little point in deploying such a system without addressing the other vulnerabilities (like a masked bank robber writing his ‘hand over the money’ note on a monogrammed pad).

First, I love Paul's comment that he “got lost in the additional axis”, since there are many potential axes – some of which have taken me to the steps of purgatory.  Perhaps we can collect them into a joint set of diagrams since the various axes are interesting in different ways.

Second, I want everyone to understand that I do not see correlation as being something which is in itself bad.  It depends on the context, on what we are trying to achieve.  When writing my blog, I want everyone to know it is “me again”, for better or for worse.  But as I always say, I would like to be able to use my search engine and read my newspaper without fear that some profile of me, the real-world Kim Cameron, would be assembled and shared.

The one statement Paul makes that I don't agree with is this: 

Were an IDP to use transient (as opposed to persistent pseudonymous) identifiers within a SAML assertion each time it asserted to a RP, then not only would RP's be unable to collude with each other (based on that identifier), they'd be unable to collude with themselves (the past or future themselves).

I've been through this thinking myself.

Suppose we got rid of the user identifier completely, and just kept the assertion ID that identifies a given SAML token (must be unique across time and space – totally transient).  If the relying party received such a token and colluded with the identity provider, the assertionID could be used to tie the profile at the relying party to the person who authenticated and got the token in the first place.  So it doesn't really prevent linking once you try to handle the problem of collusion.
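
A tiny sketch with made-up log records shows why: the moment the two parties compare logs, even a fully transient assertion ID becomes a join key.

# Hypothetical log entries; the identifiers and activities are invented.
idp_log = [
    {"assertion_id": "a9f3", "authenticated_user": "alice@example.org"},
    {"assertion_id": "c471", "authenticated_user": "bob@example.org"},
]
rp_log = [
    {"assertion_id": "a9f3", "activity": "browsed oncology forum"},
    {"assertion_id": "c471", "activity": "searched bankruptcy law"},
]

# Collusion amounts to a join on the "transient" identifier.
user_by_id = {entry["assertion_id"]: entry["authenticated_user"] for entry in idp_log}
linked = [(user_by_id[e["assertion_id"]], e["activity"]) for e in rp_log]
# -> [("alice@example.org", "browsed oncology forum"), ...]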

No masks in the grocery store

Dave Kearns discusses the first part of my examination of the relation between identity technologies and linking, beginning with a reference to Paul Madsen:

Paul Madsen comments on Kim Cameron's first post in a series he's about to do on privacy and collusion in on-line identity-based transactions. He notes:

A meaningful distinction for RP/RP collusion that Kim omits (at least in the diagram and in his discussion of X.509) is ‘temporal self-correlation’, i.e. that in which the same RP is able to correlate the same user's visits occurring over time.

and concludes:

Not to say that designing identity systems to inhibit correlation isn't important & valuable for privacy, just that there is little point in deploying such a system without addressing the other vulnerabilities (like a masked bank robber writing his ‘hand over the money’ note on a monogrammed pad).

Paul makes some good points.  Rereading my post, I tweaked it slightly to make it somewhat clearer that correlating the same user's visits occurring over time is one possible aspect of linking. 

But I have to admit that I have not personally been that interested in the use case of presenting “managed assertions” to amnesiac web sites.  In other words, I think the cases where you would want a managed identity provider for completely amnesiac interactions are fairly few and far between.  (If someone wants to turn me around in this regard I'm wide open.)  To me the interesting use cases have been those of pseudonymous identity – sites that respond to you over time, but are not linked to a natural person.  This isn't to say that whatever architecture we come out with can simply ignore use cases people think are important.

Dave continues:

I'd like to add that Kim's posting seems to fall into what I call on-line fallacy #1 – the on-line experience must be better in some way than the “real world” experience, as defined by some non-consumer “expert”. This first surfaced for me in discussions about electronic voting (see Rock the Net Vote), where I concluded “The bottom line is that computerized voting machines – even those running Microsoft operating systems [Dave, you are too cruel! – Kim] – are more secure and more reliable than any other ‘secret ballot’ vote tabulation method we've used in the past.”

When I re-visit a store, I expect to be recognized. I hope that the clerk will remember me and my preferences (and not have to ask “plastic or paper?” every single blasted time!). Customers like to be recognized when they return to the store. We appreciate it when we go to the saloon where “everybody knows your name” and the bartender presents you with a glass of “the usual” without you having to ask. And there is nothing wrong with that! It's what most people want. Fallacy #2 is that most Jeremiahs (those weeping, wailing, and tooth-gnashing doomsayers who wish to stop technology in its tracks) think that what they want is what everyone should want, and would want if the hoi polloi were only educated enough. (And people think I'm elitist! 🙂)

I do wish that all those “anonymity advocates” would start trying to anonymize themselves in the physical world, too. So here's a test – next time you need to visit your bank, wear a mask. Be anonymous. But tell your lawyer to stand by the phone…

Dave, I think you are really bringing up an important issue here.  But beyond the following brief comment, I would like to refrain from the discussion until I finish the technical exploration.  I ask you to go with me on the idea that there are cases where you want to be treated like you are in your local pub, and there are cases where you don't.  The whole world is not a pub – as much as that might have some advantages, like beer.

In the physical world we do leave impressions of the kind you describe.  But in the digital world they can all be assembled and integrated automatically and communicated intercontinentally to forces unknown to you in a way that is just impossible in the physical world.  There is absolutely no precedent for digital physics.  We need to temper your proposed fallacies with this reality.

I'm trying to do a dispassionate examination of how the different identity technologies relate to linking, without making value judgements about use cases.

That done, let's see if we can agree on some of the digital physics versus physical reality issues.

Evolving technology for better privacy

Let's continue to explore linking, and how it relates to CardSpace, identity protocols, token formats and cryptography.

I've summarized a number of thoughts in the following diagram, which contrasts the linking threats posed by a number of technology combinations. The diagram presents these technologies on an ordinal scale ranging from the most dangerous to the least – along the axis of linkage prevention.

X.509 with OCSP

Let's begin with Public Key Infrastructure (PKI) technology employed with X.509 user certificates.

Here the user has a key only she can “exercise”, and some Certificate Authority (CA) mints a long-lived certificate binding her key to her name, organization, country and the like. When the user visits a relying party who trusts the CA, she presents the certificate and exercises the key – typically by using it to sign a challenge created by the relying party.

In many cases, in addition to binding the key to attributes,  the CA exists with the explicit mission of linking it to a “natural person” (in the sense of an identifiable person in the physical world).  However, for now we'll leave the meaning of assertions aside and look only at how the technology itself impacts privacy.

Since the user presents the same long-lived certificate to every relying party who trusts the CA, the certificate and the information in it link the user across sessions on one web site, and between one web site and another. Any two web sites obtaining this kind of certificate can compare notes and determine that the same user has visited each of them. This allows linkage of their profiles into a super-dossier (possibly including a super-dossier of a natural person).
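
A minimal sketch of why this linkage needs no help from anyone else: two relying parties that have each seen the same DER-encoded certificate can compute identical fingerprints and use them as a shared key into their profiles. The byte string below merely stands in for a real certificate.

import hashlib

def fingerprint(der_encoded_cert: bytes) -> str:
    """The same long-lived certificate yields the same fingerprint everywhere,
    so it works as a global correlation handle across relying parties."""
    return hashlib.sha256(der_encoded_cert).hexdigest()

# Both sites receive identical certificate bytes during client authentication.
cert_bytes = b"...DER bytes of the user's certificate..."
profile_at_site_a = {fingerprint(cert_bytes): ["bought hiking boots"]}
profile_at_site_b = {fingerprint(cert_bytes): ["read a diabetes forum"]}

# Comparing notes is just a dictionary-key match; no cryptography required.
shared_keys = profile_at_site_a.keys() & profile_at_site_b.keys()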

What is good about X.509 is that if a relying party does not collude, the CA has no visibility onto the fact that a given user has visited it (we will see that in some other systems such visibility is unavoidable). But a relying party could at any point decide to collude with the CA (assuming the CA actually accepts such information, which may be a breach of policy).  This might result in the transfer of information in either direction beyond that contained in the certificate itself.

So in the diagram, I express this through two risks of collusion. The first is between any two relying parties who receive the same certificate. The second is between any relying party and the certificate authority. In essence, then, all participating parties can collude with any other party, so this represents one of the worst possible technology alternatives if privacy is your goal.

In light of this it makes sense that X.509 has been successful as a technology for public entities like corporate web sites, where correlation is actually a good thing, but not for individual identification where privacy is part of the equation.

(Continues tomorrow…).

News on the Australian “Access Card”

Here is a report from The Australian about the issues surrounding Australia's Human Services Access Card.  Some of the key points: 

“By this time next year, the federal Government hopes to be interviewing and photographing 35,000 Australians each day to create the nation's first ID databank. Biometric photos, matched with names, addresses, dates of birth, signatures, sex, social security status and children's details, would be loaded into a new centralised database. Welfare bureaucrats, ASIO, the Australian Federal Police and possibly even the Australian Taxation Office would have some form of access to the unprecedented collection of identity data.

“Within three years, all Australians seeking benefits such as Medicare, pensions, childcare subsidies, family payments, unemployment or disability allowances – about 16.5 million people – would have joined the databank. They would be given a photographic access card to prove who they are and show their eligibility for social security.

“This week, however, the billion-dollar project hit a bump when Human Services Minister Chris Ellison revealed that legislation due to go before federal Parliament this month had been delayed…

“How will Australians’ privacy be protected? How will the database and cards be kept secure? Who can see information on the card? What identity documents will Australians need to acquire a card, and what will happen to the estimated 600,000 people without a birth certificate, passport or driver's licence?

“The Government's mantra is that this is not an ID card because it does not have to be carried, but users will have to show it to prove their identity when claiming welfare benefits…

“The Government claims the new system will stem between $1.6 billion and $3 billion in welfare fraud over the next decade…

“A key Government adviser, Allan Fels – a former chairman of the Australian Competition and Consumer Commission and now head of the Government's Access Card Consumer and Privacy Taskforce – is at loggerheads with Medicare, Centrelink and the AFP, who all want the new card to display the user's identification number, photograph and signature…

“The photo would be stored in a central database, as well as in a microchip that could be read by 50,000 terminals in government offices, doctors’ surgeries and pharmacies…

“Despite his official role as the citizens’ watchdog, Fels still has not seen the draft bill…

“‘The law should be specific about what is on the card, in the chip and in the database,’ he says. ‘If anyone in future wants to change that they would have to do nothing less than get an act of parliament through. We don't want a situation where, just by administrative decisions, changes can be made…’

“‘There will be no mega-database created that will record a customer's dealings with different agencies,” the minister [Ellison] told the conference…

“Cardholders may be able to include sensitive personal information – such as their blood type, emergency contacts, allergies or illnesses such as AIDS or epilepsy – in the one-third of the microchip space that will be reserved for personal use. It is not yet clear who would have access to this private zone.

“Hansard transcripts of Senate committee hearings into the access card legislation reveal that police, spies and perhaps even the taxman will be able to glean details from the new database. The Department of Human Services admits the AFP will be able to obtain and use information from the databank and card chip to respond to threats of killing or injury, to identify disaster victims, investigate missing persons, or to ‘enforce criminal law or for the protection of the public revenue’.

“Australia's super-secretive spy agency, the Defence Signals Directorate, will test security for the new access card system…

“The Australian Privacy Foundation's no-ID-card campaign director, Anna Johnston, fears future governments could “misuse and abuse” the biometric databank…

(Full story…)

ID Cards can be deployed in ways that increase, rather than decrease, the privacy of citizens, while still achieving the goals of fraud reduction.  It's a matter of taking advantage of new card and crypto technologies.  My view is that politicians would be well advised to fund such technologies rather than massive centralized databases.

As for the Defence Signals Directorate's access to identity data, what has this got to do with databases offering generalized access to every curious official?  You would think they were without other means.

More on the iTunes approach to privacy

Reading more about Apple's decision to insert users' names and email addresses into the songs they download from iTunes, I stumbled across a Macworld article on iTunes 6.0.2 where Rob Griffiths described the store's approach to capturing user preferences as “spyware”.

I blogged about Rob's piece, but it turns out it was 18 months old, and Apple had quickly published a fix to the “phone-home without user permission” issue.  

Since I don't want to beat a dead horse, and Apple showed the right spirit in fixing things, I took that post down within a couple of hours (leaving it here for anyone who wonders what it said). 

So now, with a better understanding of the context, I can get on with thinking about what it means for Apple to insert our names and email addresses into the music files we download – again without telling us.

First I have to thank David Waite for pointing out that the original profiling issue had been resolved:

Kim, [the Macworld] article is almost 18 months old.  Apple quickly released a newer version of iTunes which ‘fixed’ this issue – the mini store is disabled by default, and today when you select to ‘Show MiniStore’ it displays:

“The iTunes MiniStore helps you discover new music and video right from your iTunes Library. As you select tracks or videos in your Library, information about your selections are sent to Apple and the MiniStore will display related songs, artists, or videos. Apple does not keep any information related to the contents of your iTunes Library.

Would you like to turn on the MiniStore now?”

The interesting thing about the more recent debacle about Apple including your name and email address in the songs you buy from their store is that they have done this since Day 1. It's only after people thought Apple selling music with no DRM was too good to be true that the current stink over it started.

It's interesting to understand the history here.  I would have thought that in light of their previous experience, Apple would have been very up front about the fact that they are embedding your name and email address in the files they give you.  After all, it is PII, and I would think it would require your knowledge and approval. 

I wonder what the Europeans will make of this?

iTunes and Identity-Based Digital Rights Management

 A fascinating posting by Randy Picker at the University of Chicago Law School Faculty Blog:

Over the last week, it has become clear that Apple is embedding some identifying information in songs purchased from iTunes, including the name of the customer and his or her e-mail address. This has raised the ire of consumer advocates, including the Electronic Frontier Foundation, which addressed this again yesterday.

Last year, I published a paper entitled Mistrust-Based Digital Rights Management (online preprint available here). In that paper, I argued that as we switched from content products such as CDs and DVDs to content services such as iTunes, Google Video and YouTube, we would embrace identity-based digital rights management. This is exactly what we are seeing from iTunes. How should we assess identity-based DRM?

Take a step backwards. As long as I keep my songs to myself and don’t share them, the embedded information shouldn’t matter. The information may facilitate interactions between Apple and its customers and might make it easier to verify whether a particular song was purchased from iTunes, but this doesn’t seem to be the central point of embedding identity in the songs.

Instead, identity matters if I share the song with someone else. Identity travels with the content. If I know that and care, I will be less likely to share the content indiscriminately over p2p networks. Why should I care? It depends on what happens with the embedded information. One use would make it possible for Apple to identify who was sharing content on p2p networks. Having traced content to its purchaser, Apple might choose to drop that person as a customer.

But Apple could do this without embedding the information in the clear. As Fred von Lohmann asked in his post on the EFF blog, why embed identity in the clear rather than as encrypted data? After all, if Apple intends to scour p2p networks, it could do so just as easily looking for encrypted identities.

Apple might have a different strategy, one that relies on third-party sanctions, and that strategy would require actual identities. Suppose Apple posted the following notice on iTunes:

“Songs downloaded from iTunes are not to be shared with strangers. We have embedded your name and email address into the songs. Our best guess is that if you share iTunes songs on p2p networks, your name and email will be harvested from those songs and you will receive an extra 10 spam emails per day from third parties.”

Encrypted information works if Apple is doing all of the detection. It would even work, as I suggested in my paper, if Apple relied on third parties to do the detection by turning in p2p uploaders to Apple. We could run that system with encrypted information. All that is required is that the rat knows that he is turning in someone; he doesn’t need to know who that person is exactly.

But a third-party punishment strategy would probably be implemented using actual identity. The spammer who harvests the email address inflicts the penalty for uploading, not Apple itself. For Apple to drop out of the punishment business, it needs to hand off identity. Obviously, extra spam is just one possible cost for disclosing names and emails; other costs would further reduce the incentive to upload.

Disclosing identity is a clumsy tool. It doesn’t scale very well. It will work most powerfully against the casual uploader. It offers no (marginal) deterrence against someone who would upload lots of songs anyway. My mistrust-based scheme (described in the paper) might work better in those circumstances.

So far, Apple doesn’t seem to be saying much about what it is doing. It needs to be careful. As the Sony BMG fiasco—also discussed in the paper—emphasizes, content owners may not get that many opportunities to establish technological protection schemes. Each one they get wrong makes it that much harder to try another scheme later, given the adverse public relations fallout. As I suggest above, Apple may have a legitimate strategy for disclosing identity in the clear. It will be interesting to see what Apple says next.

I haven't read Randy's paper yet but will do so now.