The Law of User Control is hard at work in a growing controversy about interception of people's web traffic in the United Kingdom. At the center of the storm is the “patent-pending” technology of a new company called Phorm. Its web site advises:
Leading UK ISPs BT, Virgin Media and TalkTalk, along with advertisers, agencies, publishers and ad networks, work with Phorm to make online advertising more relevant, rewarding and valuable. (View press release.)
Phorm's proprietary ad serving technology uses anonymised ISP data to deliver the right ad to the right person at the right time – the right number of times. Our platform gives consumers advertising that's tailored to their interests – in real time – with irrelevant ads replaced in the process.
What makes the technology behind OIX and Webwise truly groundbreaking is that it takes consumer privacy protection to a new level. Our technology doesn't store any personally identifiable information or IP addresses, and we don't retain information on user browsing behaviour. So we never know – and can't record – who's browsing, or where they've browsed.
It is counterintuitive to see claims of increased privacy posited as the outcome of a tracking system. But even if that happened to be true, it seems like the system is being laid on the population as a fait accompli by the big powerful ISPs. It doesn't seem that users will be able to avoid having their traffic redirected and inspected. And early tests of the system were branded “illegal” by Nicholas Bohm of the Foundation for Information Policy Research (FIPR).
Is Phorm completely wrong? Probably not. Respected and wise privacy activist Simon Davies has done an Interim Privacy Impact Assessment that argues (in part):
In our view, Phorm has successfully implemented privacy as a key design component in the development of its Phorm Technology system. In contrast to the design of other targeting systems, careful choices have been made to ensure that privacy is preserved to the greatest possible extent. In particular, Phorm has quite consciously avoided the processing of personally identifiable information.
Simon seems to be suggesting we consider Phorm in relation to the current alternatives – which may be worse.
To make a judgment we really need to understand how Phorm's system works. Dr. Richard Clayton, a computer security researcher at the University of Cambridge and a participant in Light Blue Touchpaper, has published a succinct ten-page explanation that is a must-read for anyone who is a protocol head.
Richard says his technical analysis of the Phorm online advertising system has reinforced his view that it is “illegal”, breaking laws designed to limit unwarranted interception of data.
The British Information Commissioner's Office confirmed to the BBC that BT is planning a large-scale trial of the technology “involving around 10,000 broadband users later this month”. The ICO said: “We have spoken to BT about this trial and they have made clear that unless customers positively opt in to the trial their web browsing will not be monitored in order to deliver adverts.”
Having quickly read Richard's description of the actual protocol, it isn't yet clear to me that opting out prevents your web traffic from being examined and redirected. But there is worse. I have to admit to a sense of horror when I realized the system rewards ISPs for abusing their trusted role in the Internet by improperly posing as other people's domains in order to create fraudulent cookies and place them on users' machines. Is there a worse precedent? How come ISPs can do this kind of thing and others can't? Or perhaps now they can…
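To make that concrete, here is a deliberately simplified sketch, based only on my reading of Richard's paper and not on any Phorm code (the cookie name, host names and function names are my own inventions), of how an in-path box could answer a request in another site's name and plant a tracking cookie scoped to that site:

```python
# Hypothetical illustration only: an in-path interceptor answering a request
# addressed to example.com with a response the browser attributes to
# example.com, planting a tracking cookie under that site's domain.
# All names here are my own; this is not Phorm's code.

TRACKING_COOKIE = "uid"   # assumed name for the persistent random-ID cookie

def forge_response(requested_host: str, requested_path: str, user_id: str) -> bytes:
    """Build an HTTP redirect that appears to come from requested_host,
    even though it was generated by the interception box, not the real site."""
    headers = [
        "HTTP/1.1 307 Temporary Redirect",
        # Scoped to the site the user actually asked for, so the browser
        # files the cookie under that site's domain.
        f"Set-Cookie: {TRACKING_COOKIE}={user_id}; Domain={requested_host}; Path=/",
        # Bounce the browser back to the page it wanted; a real interceptor
        # would only intervene when the cookie is absent, letting the retried
        # request (now carrying the cookie) pass through to the real site.
        f"Location: http://{requested_host}{requested_path}",
        "Content-Length: 0",
        "Connection: close",
    ]
    return ("\r\n".join(headers) + "\r\n\r\n").encode("ascii")

if __name__ == "__main__":
    print(forge_response("example.com", "/index.html", "a1b2c3d4").decode())
```

Richard's paper describes a considerably more elaborate chain of redirections, but the essential trick is the same: the response, and the cookie it sets, are attributed to a domain the interceptor does not own.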
To accord with the Laws of Identity, no ISP would examine or redirect packets to a Phorm-related server unless a user explicitly opted in to such a service. Opting in should involve explicitly accepting Phorm as a justifiable witness to all web interactions, and agreeing to be categorized by the Phorm systems.
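Sketching what that would mean at the ISP's edge (purely illustrative; the registry, function names and flow are my assumptions, not a description of BT's deployment):

```python
# Illustrative sketch, not Phorm/BT code: traffic touches the profiling path
# only for subscribers with an explicit, recorded opt-in; everyone else's
# packets are forwarded untouched.

opted_in_subscribers = set()    # hypothetical registry of consenting subscribers

def record_opt_in(subscriber_id: str) -> None:
    """Called only after the subscriber explicitly accepts Phorm as a
    witness to their web interactions."""
    opted_in_subscribers.add(subscriber_id)

def handle_traffic(subscriber_id: str, payload: bytes) -> bytes:
    """Default path is pass-through; only opted-in traffic is ever examined."""
    if subscriber_id in opted_in_subscribers:
        return profile_and_maybe_redirect(payload)   # hypothetical profiling path
    return payload                                    # no opt-in: forward unmodified

def profile_and_maybe_redirect(payload: bytes) -> bytes:
    # Placeholder for the interception behaviour discussed above.
    return payload

if __name__ == "__main__":
    # No opt-in recorded, so the request goes through untouched.
    print(handle_traffic("subscriber-42", b"GET / HTTP/1.1\r\n\r\n"))
```

The point of the sketch is simply that the default must be non-interception; the examination path should be unreachable without a recorded act of consent.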
The system is devised to aggregate across contexts, and thus runs counter to the Fourth Law of Identity. It claims to mitigate this by reducing profiling to categorization information. However, I don't buy that. Categorization, practiced at a grand enough scale and over a sufficient period of time, potentially becomes more privacy invasive than a regularly truncated audit trail. Thus there must be mechanisms for introducing amnesia into the categorization itself.
Phorm would therefore require clearly defined mechanisms for deprecating and deleting profile information over time, and these should be made clear during the opt-in process.
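One way to picture that amnesia, in a deliberately simplified sketch (the 90-day window and all names are my own assumptions, not anything Phorm has committed to):

```python
# Simplified sketch of categorization with built-in amnesia: each category
# assignment carries a timestamp and is purged once it ages past a retention
# window. The 90-day figure and all names are illustrative assumptions.
import time

RETENTION_SECONDS = 90 * 24 * 3600   # assumed retention window for a category

class ForgetfulProfile:
    def __init__(self):
        self._last_seen = {}           # category -> time last observed

    def observe(self, category: str) -> None:
        """Record (or refresh) a category assignment for this profile."""
        self._last_seen[category] = time.time()

    def current_categories(self):
        """Purge categories older than the retention window, then return the rest."""
        cutoff = time.time() - RETENTION_SECONDS
        self._last_seen = {c: t for c, t in self._last_seen.items() if t >= cutoff}
        return sorted(self._last_seen)

if __name__ == "__main__":
    profile = ForgetfulProfile()
    profile.observe("travel")
    print(profile.current_categories())   # ['travel'] until the entry ages out
```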
I also have trouble with the notion that identities in Phorm are “anonymized”. As I understand it, each user is given a persistent random ID. Whenever the user accesses the ISP, the ISP can see the link between the random ID and the user's natural identity. I understand that ISPs will prevent Phorm from knowing the user's natural identity. That is certainly better than many other systems. But I still wouldn't claim the system is based on anonymity. It is based on controlling the release of information.
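As I read it, the separation looks roughly like this sketch (purely illustrative; the class and function names are mine): the ISP holds the table linking the subscriber to the persistent random ID, and only the random ID crosses over to the profiler.

```python
# Illustrative sketch of the claimed separation: the ISP keeps the mapping
# from the subscriber's natural identity to a persistent random ID; the
# profiler only ever sees the random ID. Names are mine, not Phorm's.
import secrets

class ISP:
    def __init__(self):
        self._id_for_subscriber = {}   # natural identity -> persistent random ID

    def random_id_for(self, subscriber_name: str) -> str:
        """The ISP can always re-link the random ID to the subscriber."""
        if subscriber_name not in self._id_for_subscriber:
            self._id_for_subscriber[subscriber_name] = secrets.token_hex(8)
        return self._id_for_subscriber[subscriber_name]

class Profiler:
    """Sees only the random ID, never the subscriber's name or IP address."""
    def __init__(self):
        self.categories_by_id = {}

    def record(self, random_id: str, category: str) -> None:
        self.categories_by_id.setdefault(random_id, set()).add(category)

if __name__ == "__main__":
    isp, profiler = ISP(), Profiler()
    uid = isp.random_id_for("alice@example.net")   # linkage known to the ISP only
    profiler.record(uid, "travel")                  # profiler sees only the pseudonym
    print(uid, profiler.categories_by_id[uid])
```

The linkage table still exists, and whoever holds it can re-identify the profile; that is why I'd describe this as controlled release rather than anonymity.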
[Podcasts are available here]