Lazy headmasters versus the Laws of Identity

Ray Corrigan routinely combines legal and technological insight at B2fxxx – Random thoughts on law, the Internet and society, and his book on Digital Decision Making is essential.  His work often leaves me feeling uncharacteristically optimistic – living proof that a new kind of legal thinker is emerging with the technological depth needed to be a modern day Solomon.

I hadn't noticed the UK's new Protection of Freedoms Bill until I heard Immigration Minister Damian Green talk about it as he pulverized the UK's centralized identity database recently.  Naturally I turned to Ray Corrigan for comment, only to discover that the political housecleaning had also swept away the assumptions behind widespread fingerprinting in Britain's schools, reinstating user control and consent.

According to TES Connect:

The new Protection of Freedoms Bill gives pupils in schools and colleges the right to refuse to give their biometric data and compels schools to make alternative provision for them.  The several thousand schools that already use the technology will also have to ask permission from parents retrospectively, even if their systems have been established for years…

It turns out that Britain's headmasters, apparently now a lazy bunch, have little stomach for trivialities like civil liberties.  Writing about this, Ray adopts the tone of a judge who has had an impetuous and over-the-top barrister try to bend the rules one too many times.  It is satisfying to see him send them home to study the Laws of Identity as scientific laws governing identity systems.  I hope they catch up on their homework…

The Association of School and College Leaders (ASCL) is reportedly opposing the controls on school fingerprinting proposed in the UK coalition government's Protection of Freedoms Bill.

I always understood the reason that unions existed was to protect the rights of individuals. That ASCL should give what they perceive to be their own members’ managerial convenience priority over the civil rights of kids should make them thoroughly ashamed of themselves.  Oh dear – now head teachers are going to have to fill in a few forms before they abuse children's fundamental right to privacy – how terrible.

Although headteachers and governors at schools deploying these systems may be typically ‘happy that this does not contravene the Data Protection Act’, a number of leading barristers have stated that the use of such systems in schools may be illegal on several grounds. As far back as 2006 Stephen Groesz, a partner at Bindmans in London, was advising:

“Absent a specific power allowing schools to fingerprint, I'd say they have no power to do it. The notion you can do it because it's a neat way of keeping track of books doesn't cut it as a justification.”

The recent decisions in the European Court of Human Rights in cases like S. and Marper v UK (2008 – retention of DNA and fingerprints) and Gillan and Quinton v UK (2010 – s44 police stop and search) mean schools have to be increasingly careful about the use of such systems anyway. Not that most schools would know that.

Again the question of whether kids should be fingerprinted to get access to books and school meals is not even a hard one! They completely decimate Kim Cameron's first four laws of identity.

1. User control and consent – many schools don't ask for consent, child or parental, and don't provide simple opt out options

2. Minimum disclosure for constrained use – the information collected, children's unique biometrics, is disproportionate for the stated use

3. Justifiable parties – the information is in control of or at least accessible by parties who have absolutely no right to it

4. Directed identity – a unique, irrevocable, omnidirectional identifier is being used when a simple unidirectional identifier (eg lunch ticket or library card) would more than adequately do the job.

It's irrelevant how much schools have invested in such systems or how convenient school administrators find them, or that the Information Commissioner's Office soft-pedalled their advice on the matter (in 2008) in relation to the Data Protection Act.  They should all be scrapped and if the need for schools to wade through a few more forms before they use these systems causes them to be scrapped then that's a good outcome from my perspective.

In addition, although school fingerprint vendors have conned them into parting with ridiculous sums of money (in school budget terms) to install these systems, with promises that they are not really storing fingerprints and that the prints can't be recreated, there is no doubt that it is possible to recreate the image of a fingerprint from data stored on such systems. A. Ross et al, ‘From Template to Image: Reconstructing Fingerprints from Minutiae Points’, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, No. 4, April 2007, is just one example of how university researchers have reverse-engineered these systems. The warning caveat emptor applies emphatically to digital technology systems that buyers don't understand, especially when it comes to undermining the civil liberties of our younger generation.
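
To make the fourth point concrete, here is a minimal sketch – entirely my own illustration, with made-up names, not any vendor's system – of the kind of unidirectional, service-specific identifier that would more than adequately do the job, and that, unlike a fingerprint, can be revoked and reissued:

    # Illustrative only: a directed, per-service token instead of a biometric.
    import secrets

    def issue_service_token(service: str) -> str:
        # Unidirectional: meaningful only to the one service that issued it,
        # and linked to nothing else about the child.
        return f"{service}-{secrets.token_hex(8)}"

    lunch_id = issue_service_token("lunch")      # printed on a lunch card
    library_id = issue_service_token("library")  # printed on a library card

Lose the token and the school simply issues another one – something no child can ever do with a fingerprint.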

Broken Laws of Identity lead to a system's destruction

Britain's Home Office has posted a remarkable video, showing Immigration Minister Damian Green methodically pulverizing the disk drives that once held the centralized database that was to be connected to the British ID Cards introduced by Tony Blair.  

“What we're doing today is CRUSHING the final remnants of the national identity card scheme – the disks and hard drives that held the information on the national identity register have been wiped and they're crushed and reduced to bits of metal so everyone can be absolutely sure that the identity scheme is absolutely dead and buried.

“This whole experiment of trying to collect huge amounts of private information on everyone in this country – and collecting on the central database – is no more, and it's a first step towards a wider agenda of freedom.  We're publishing the protection of freedoms bill as well, and what this shows is that we want to rebalance the security and freedom of the citizen.  We think that previously we have not had enough emphasis on peoples’ individual freedom and privacy, and we're determined to restore the proper balance on that.”

Readers of Identityblog will recall that the British scheme was exceptional in breaking so many of the Laws of Identity at once.  It flouted the first law – User control and Consent – since citizen participation was mandatory.  It broke the second – Minimal Disclosure for a Constrained Use – since it followed the premise that as much information as possible should be assembled in a central location for whatever uses might arise…  The third law of Justifiable Parties was not addressed given the centralized architecture of the system, in which all departments would have made queries and posted updates to the same database and access could have been extended at the flick of a wrist.  And the fourth law of “Directed Identity” was a clear non-goal, since the whole idea was to use a single identifier to unify all possible information.

Over time opposition to the scheme began to grow and became widespread, even though the Blair and Brown governments claimed their polls showed majority support.  Many well-known technologists and privacy advocates attempted to convince them to consider privacy enhancing technologies and architectures that would be less vulnerable to security and privacy meltdown – but without success.  Beyond the scheme's many technical deficiencies, the social fracturing it created eventually assured its irrelevance as a foundational element for the digital future.

Many say the scheme was an important issue in the last British election.  It certainly appears the change in government has left the ID card scheme in the dust, with politicians of all stripes eager to distance themselves from it.  Damian Green, who worked in television and understands it, does a masterful job of conveying his views.  His video, posted by the Home Office, seems iconic.

All in all, the fate of the British ID Card and centralized database scheme is exactly what was predicted by the Laws of Identity:

Those of us who work on or with identity systems need to obey the Laws of Identity.  Otherwise, we create a wake of reinforcing side-effects that eventually undermine all resulting technology.  The result is similar to what would happen if civil engineers were to flout the law of gravity.  By following the Laws we can build a unifying identity metasystem that is universally accepted and enduring.

[Thanks to Jerry Fishenden (here and here) for twittering Damian Green's video]

Incident closed – good support from janrain…

When I connected with janrain to resolve the issue described here, they were more than helpful. In fact, I have to quote them, because this is what companies should be like:

“We certainly test on ie 6,7,8,9, and would love to get your situation smoothed out.” 

The scary part came a little while later…

“The cause is likely to be configuration based on the browser.  Browser security settings should be set to default for testing. Temporarily disable all toolbars and add-ons. Clear caches and cookies (at least for your site domain and rpxnow.com).”

Oh yeah.  I've heard that one before.  So I was a bit skeptical. 

On the other hand, I happened to be in a crowd and asked some people nearby with Windows 7 to see what happened to them when they tried to log in.  It was one of those moments.  Everything worked perfectly for everyone but me… 

Gathering my courage, I pressed the dreaded configuration reset button as I had been told to do: 

Then I re-enabled all my add-ons as janrain suggested.  And… everything worked as advertised.

So there you go.  Possibly I did something to my IE config at some point – I do a lot of experimenting.  Conclusion: if any of you run into the same problem, please let me know.  Until then, let's consider the incident closed.

 

Six new authentication methods for Identityblog

Back in March 2006, when Information Cards were unknown and untested, it became obvious that the best way for me to understand the issues would be to put Information Cards onto Identityblog. 

I wrote the code in PHP, and a few people started trying out Information Cards.  Since I was being killed by spam at the time, I decided to try an experiment:  make it mandatory to use an Information Card to leave a comment.  It was worth a try.  More people might check out InfoCards.  And presto, my spam problems would go away.

So on March 18th 2006 I posted More hardy pioneers try out InfoCard, showing the first few people to give it all a whirl.

At first I thought my draconian “InfoCard-Only” approach would get a lot of people’s hackles up and only last a few weeks.  But over time more and more people seemed to be subscribing – probably because Identityblog was one of the few sites that actually used InfoCards in production.  And I never had spam again.

How many people joined using InfoCards?  Today I looked at my user list (see the screenshot below with PII fuzzed out).  The answer: 2958 people successfully subscribed and passed email verification.  There were then over 23,000 successful audited logins.  Not very many for a commercial site, but not bad for a technical blog.

Of course, as we all know, the powers at the large commercial sites have preferred the  “NASCAR” approach of presenting a bunch of different buttons that redirect the user to, uh, something-or-other-that-can-be-phished, ahem, in spite of the privacy and security problems.  This part of the conversation will go on for some time, since these problems will become progressively more widespread as NASCAR gains popularity and the criminally inclined tune in to its potential as a gold mine… But that discussion is for another day. 

Meanwhile, I want to get my hands dirty and understand all the implications of the NASCAR-style approach.  So recently I subscribed to a nifty janrain service that offers a whole array of login methods.  I then integrated their stuff into Identityblog.  I promise, Scout's Honor, not to do man-in-the-middle attacks or scrape your credentials, even though I probably could if I were so inclined.

From now on, when you need to authenticate at Identityblog, you will see a NASCAR-style login symbol.  See, for example, the LOG IN option at the top of this page. 

If you are not logged in and you want to leave a comment you will see:
 

Click on the string of icons and you get something like this:

 

Because many people continue to use my site to try out Information Cards, I've supplemented the janrain widget experience with the Pamelaware Information Card Option (it was pretty easy to make them coexist, and it leaves me with at least one unphishable alternative).  This will also benefit people who don't like the idea of linking their identifiers all over the web.  I expect it will help researchers and students too.

One warning:  Janrain's otherwise polished implementation doesn't work properly with Internet Explorer – it leaves a spurious “Cross Domain Receiver Page” lurking on your desktop.  [Update – this was apparently my problem: see here]  Once I figure out how to contact them (not evident), I'll ask janrain if and when they're going to fix this.  Anyway, the system works – just a bit messy because you have to manually close the stranded empty page.  The problem doesn't appear in Firefox. 

It has already been a riot looking into the new technology and working through the implications.  I'll talk about this as we go forward.

 

Social Network Users’ Bill of Rights

The  “Social Network Users’ Bill of Rights” panel at the South by Southwest Interactive (SXSW) conference last Friday had something that most panels lack:  an outcome.  The goal was to get the SXSWi community to cast their votes and help to shape a bill of rights that would reflect the participation of many thousands of people using the social networks.

The idea of getting broad communities to vote on this is pretty interesting.  Panelist Lisa Borodkin wrote:

There is no good way currently of collecting hard, empirical, quantitative data about the preferences of a large number of social network users. There is a need to have user input into the formation of social norms, because courts interpreting values such as “expectations of privacy” often look to social network sites policies and practices.

Where did the Bill of Rights come from?  The document was written collaboratively over four days at last year's Computers, Freedom and Privacy Conference and, since the final version was published, has been collecting votes through pages like this one.  Voting is open until June 15, 2011 – the “anniversary of the date the U.S. government asked Twitter to delay its scheduled server maintenance as a critical communication tool for use in the 2009 Iran elections”.  And guess what?  That date also coincides with this year's Computers, Freedom and Privacy Conference.

The Bill – admirably straightforward and aimed at real people – reads as follows:

We the users expect social network sites to provide us the following rights in their Terms of Service, Privacy Policies, and implementations of their system:

  1. Honesty: Honor your privacy policy and terms of service
  2. Clarity: Make sure that policies, terms of service, and settings are easy to find and understand
  3. Freedom of speech: Do not delete or modify my data without a clear policy and justification
  4. Empowerment: Support assistive technologies and universal accessibility
  5. Self-protection: Support privacy-enhancing technologies
  6. Data minimization: Minimize the information I am required to provide and share with others
  7. Control: Let me control my data, and don’t facilitate sharing it unless I agree first
  8. Predictability: Obtain my prior consent before significantly changing who can see my data.
  9. Data portability: Make it easy for me to obtain a copy of my data
  10. Protection: Treat my data as securely as your own confidential data unless I choose to share it, and notify me if it is compromised
  11. Right to know: Show me how you are using my data and allow me to see who and what has access to it.
  12. Right to self-define: Let me create more than one identity and use pseudonyms. Do not link them without my permission.
  13. Right to appeal: Allow me to appeal punitive actions
  14. Right to withdraw: Allow me to delete my account, and remove my data

It will be interesting to see whether social networking sites engage with this initiative.  Sixestate reported some time ago that Facebook objected to requiring support for pseudonyms. 

While I support all other aspects of the Bill, I too think it is a mistake to mandate that ALL communities MUST support pseudonymity or be in violation of the Bill…  In every other respect the Bill is consistent with the Laws of Identity.  However, the Laws envisaged a continuum of approaches to identification, and argued that all have their place for different purposes.  I think that continuum is much closer to the mark, and Right 12 should be amended.  The fundamental point is that we must have the RIGHT to form and participate in communities that DO choose to support pseudonymity.  This doesn't mean we ONLY have the right to participate in such communities.

Where do the organizers want to go next? Jon Pincus writes:

Here’s a few ideas:

  • get social network sites to adopt the concept of a Bill of Rights for their users and as many of the individual rights as they’re comfortable with.  Some of the specific rights are contentious — for example, Facebook objected to [the right to use pseudonyms] in their response last summer.  But more positively, Facebook’s current “user rights and responsibilities” document already covers many of these rights, and it would be great to have even partial support from them.  And sites like Twitter, tribe.net, and emerging companies that are trying to emphasize different values may be willing to go even farther.
  • work with politicians in the US and elsewhere who are looking at protecting online, and encourage them to adopt the bill of rights framework and our specific language.  There’s a bit of “carrot and stick” combining this and the previous bullet: the threat of legislation is great both for encouraging self-regulation and getting startups to look for a potential future strategic advantage by adopting strong user rights from the beginning.
  • encourage broad participation to highlight where there’s consensus.  Currently, there are a couple of ways to weigh in: the Social Network Users’ Bill of Rights site allows you to vote on the individual rights, and you can also vote for or against the entire bill via Twitter.  It would be great to have additional voting on other social network sites like Facebook, MySpace, Reddit to give the citizens of those “countries” a voice.
  • collaborate with groups like the Global Network Initiative, the Internet Rights and Principles Coalition, the Social Charter, and the Association for Progressive Communications that support similar principles
  • follow Gabrielle Pohl’s lead and translate into multiple languages to build awareness globally.
  • take a more active approach with media outreach to call more attention to the campaign.  #privchat, the weekly Twitter chat sponsored by the Center for Democracy and Technology and Privacy Camp, is a natural hub for the discussion.

Meanwhile, here are some ways you can express your views:

 

Touch2Id Testimonials

Last summer I wrote about the British outfit called touch2id.  They had developed a system that sounded pretty horrible when I first heard about it – a scheme to control underage drinking by using people’s fingerprints rather than getting them to present identity cards.  I assumed it would be another of the hare-brained biometric schemes I had come across in the past – like this one, or this, or these.

But no.  The approach was completely different.  Not only was the system popular with its early adopters, but its developers had really thought through the privacy issues.   There was no database of fingerprints, no record linking a fingerprint to a natural person.  The system was truly one of “minimal disclosure” and privacy by design:

  • To register, people presented their ID documents and, once verified, a template of their fingerprint was stored on a Touch2Id card that was immediately given to them.  The fingerprint was NOT stored in a database
  • When people with the cards wanted to have a drink, they would wave their card over a machine similar to a credit card reader, and press their finger on the machine.  If their finger matched the template on their card, the light came on indicating they were of drinking age and they could be served.

A single claim:  “Able to drink”.  Here we had well-designed technology offering an experience that the people using it liked way better than the current “carding” process – and which was much more protective of their privacy.  “Privacy by design” was delivering tangible benefits.  Merchants didn’t have to worry about making mistakes.  Young people didn’t have to worry about being discriminated against (or being embarrassed) just because they “looked young” or got a haircut.  No identifying information was being released to the merchants.  No name, age or photo was stored on the cards.  The movements of young people were not tracked.  And so on.
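
For the technically inclined, here is how I picture the flow – a rough sketch under my own assumptions, with made-up names, not touch2id's actual code or API:

    # The only persistent artefact is the card in the customer's pocket;
    # there is no central database of fingerprints.
    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Touch2IdCard:
        template: bytes  # fingerprint template written to the card at enrolment

    def enrol(id_document_verified: bool, scanned_template: bytes) -> Optional[Touch2IdCard]:
        """Issue the card on the spot; nothing is retained by the issuer."""
        return Touch2IdCard(scanned_template) if id_document_verified else None

    def able_to_drink(card: Touch2IdCard, live_scan: bytes,
                      matches: Callable[[bytes, bytes], bool]) -> bool:
        """The reader answers a single claim - 'able to drink' - and nothing else.

        `matches` stands in for the vendor's fuzzy template-matching algorithm;
        real biometric matching is never a simple equality test.
        """
        return matches(card.template, live_scan)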

Today touch2id published Testimonials – an impressive summary of their project consisting of reviews by individuals involved.  It is clear that those who liked it loved it.  It would be interesting to find out to what extent these rave reviews are typical of those who tried the system.

At any rate, it's instructive to compare the positive outcome of this pilot with all the biometric proposals that have crashed onto the shoals of privacy invasion.

From CardSpace to Verified Claims

Last week Microsoft announced the availability of Version 2 of the U-Prove Technology Preview.

What’s new about it?

The most important thing is that it offers a new, web-oriented user experience carefully tailored to helping people control the release of “verified claims” while protecting their privacy.  By verified claims I mean things that are said about them as flesh-and-blood people by entities that can speak, at least in certain contexts, with authority. By protecting privacy I mean keeping information released to the minimum necessary, and ensuring that the authority making the claims – for example a government – is not able to track and profile the way your information is used.
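
To make “minimum necessary” concrete, here is a toy sketch – my own illustration, NOT the U-Prove protocol, which uses blinded, asymmetric cryptography precisely so the issuer cannot link or track presentations.  It only shows the shape of the idea: attest to the one predicate the relying party needs, and nothing more.

    import hmac, hashlib, json

    ISSUER_KEY = b"demo-only-shared-key"   # hypothetical; real issuers sign asymmetrically

    full_record = {"name": "Alice Example", "birthdate": "1990-04-01"}  # never leaves the issuer

    claim = {"claim": "age_over_18", "value": True}   # the minimum needed for the transaction
    token = {
        "claim": claim,
        "sig": hmac.new(ISSUER_KEY, json.dumps(claim, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest(),
    }

    def relying_party_accepts(token: dict) -> bool:
        expected = hmac.new(ISSUER_KEY, json.dumps(token["claim"], sort_keys=True).encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, token["sig"])

    print(relying_party_accepts(token))   # True - and the birthdate was never disclosed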

The system takes a number of the good ideas from CardSpace but is also informed by what CardSpace didn’t do well. It doesn’t require the installation of new components on your computer. It works on all the major browsers and phones. It roams between devices. Sites don't have to worry about users “getting a card” before the system will work. And it allows claims providers and relying parties to shape and brand their users’ experiences while still providing a consistent interface for claims approval.

In other words, it represents a big step forward for protecting privacy when high-value credentials are used to release claims.

A focused approach

When it comes to verified claims, the “U-Prove Agent” goes beyond CardSpace.  One way it does this is by being highly focused and integrated into a specific type of identity experience. I’ll be posting a video soon that will help you get a concrete sense of why this works.

That focus represents a change from what we tried to do with CardSpace.   One of the key goals of CardSpace was to provide a “generalized solution” – an alternative to the “patchwork quilt” of what I called “identity kludges” that characterize people’s experience of identity on the Internet.

In fact I still believe as much as ever that a “generalized solution” would be nice to have. I would even go so far as to say that a generalized solution is inevitable – at some point in time.

But the current chaos is so vast – and people’s thinking about it so fractured – that the only prudent practical approach is to carve the problem into smaller pieces. If we can make progress in some of the pieces we can tie that progress together. The U-Prove Agent for exchange of verified claims is a good example of this, making it possible to offer services that would otherwise be impossible because of privacy problems.

What about CardSpace?

Because of its focus, the U-Prove agent isn’t capable of doing everything that CardSpace attempted to do using Information Cards.

It doesn’t address the problem of helping users manage ALL their identities while keeping them separate. It doesn’t address the user problems of password fatigue, phishing and pervasive “secret questions” when logging into consumer web sites.  It doesn’t solve the famous “home realm discovery problem” when using federation. And perhaps most frustrating when it comes to using devices like phones, it doesn’t give the user a simple way to pick their identities from a set of visual representations (icons or cards).

These issues are all more pressing today than they were in 2006 when CardSpace was first proposed. Yet one thing is clear: in five years of intensive work and great cross-industry collaboration with other innovators working on Apple and Linux computers and phones, we weren’t able to get Information Cards onto the radar of the big web properties users depend on.

Those properties had other priorities. My friend Mike Jones put it well at Self-Issued:

“In my extensive experience talking with potential adopters, while many/most thought that CardSpace was a good idea, because they didn’t see it solving a top-5 pain point that they were facing at that moment or providing immediate compelling value, they never actually allocated resources to do the adoption at their site.”

Regardless of why this was the case, it explains why last week Microsoft also announced that it will not be shipping CardSpace 2.0.

In my personal view, we all certainly need to keep working on the problems Information Cards address, and many of the concepts and technologies used in Information Cards should be retained and evolved. I think the U-Prove team has done a good job at that, and provides an example of how we can move forward to solve specific problems. Now the question is how to do so with the other aspects of user-centric identity.

Over the next while I’m going to do a series of posts that explore some of these issues further – drawing some lessons from what we’ve learned over the last few years.  Most of all, it is important to remember what great progress we’ve made as an industry around the Identity Metasystem, federation technology, and claims-based computing. The CardSpace identity selector dealt with the hardest and most forward-looking problems of the Metasystem:  the privacy, security and usability problems that will emerge as federated identity becomes a key component of the Internet.  It also challenged industry with an approach that was truly user centric.

It's no surprise that it is hardest to get consensus on forward-looking technologies!  But meanwhile,  the very success of the Identity Metasystem as a whole will cause all the issues we’ve been working on with Information Cards to return larger than life.

 

Gov2.0 and Facebook ‘Like’ Buttons

I couldn't agree more with the points made by identity architect James Brown in a very disturbing piece he has posted at The Other James Brown.

James explains how the omnipresent Facebook  widget works as a tracking mechanism:  if you are a Facebook subscriber, then whenever you open a page showing the widget, your visit is reported to Facebook.

You don't have to do anything whatsoever – or click the widget – to trigger this report.  It is automatic.  Nor are we talking here about anonymized information or simple IP address collection.  The report contains your Facebook identity information as well as the URL of the page you are looking at.

If you are familiar with the way advertising beacons operate, your first reaction might be to roll your eyes and yawn.  After all, tracking beacons are all over the place and we've known about them for years.

But until recently, government web sites – or private web sites treating sensitive information of any kind – wouldn't be caught dead using tracking beacons. 

What has changed?  Governments want to piggyback on the reach of social networks, and show they embrace technology evolution.  But do they have procedures in place to ensure that the mechanisms they adopt are actually safe?  Probably not, if the growing use of the Facebook ‘Like’ button on these sites is any indication.  I doubt those who inserted the widgets have any idea about how the underlying technology works – or the time or background to evaluate it in depth.  The result is a really serious privacy violation.

Governments need to be cautious about embracing tracking technology that betrays the trust citizens put in them.  James gives us a good explanation of the problem with Facebook widgets.  But other equally disturbing threats exist.  For example, should governments be developing iPhone applications when, to use them, citizens must agree that Apple has the right to reveal their phone's identifier and location to anyone for any purpose?

In my view, data protection authorities are going to have to look hard at emerging technologies and develop guidelines on whether government departments can embrace technologies that endanger the privacy of citizens.

Let's turn now to the details of James’ explanation.  He writes:

I am all for Gov2.0.  I think that it can genuinely make a difference and help bring public sector organisations and people closer together and give them new ways of working.  However, with it comes responsibility: the public sector needs to understand what it is signing its users up for.

In my post Insurers use social networking sites to identify risky clients last week I mentioned that NHS Choices was using a Facebook ‘Like’ button on its pages and that this potentially allows Facebook to track what its users are doing on the site.  I have been reading a couple of posts on ‘Mischa’s ramblings on the interweb’, which unearthed this issue here and here, and digging into this a bit further to see for myself – and to be honest I really did not realise how invasive these social widgets can be.

Many services that government and public sector organisations offer are sensitive and personal. When browsing through public sector web portals I do not expect that other organisations are going to be able to track my visit – especially organisations such as Facebook which I use to interact with friends, family and colleagues.

This issue has now been raised by Tom Watson MP, and the response from the Department of Health on this issue of Facebook is:

“Facebook capturing data from sites like NHS Choices is a result of Facebook’s own system. When users sign up to Facebook they agree Facebook can gather information on their web use. NHS Choices privacy policy, which is on the homepage of the site, makes this clear.”

“We advise that people log out of Facebook properly, not just close the window, to ensure no inadvertent data transfer.”

I think this response is wrong on a number of different levels.  Firstly at a personal level, when I browse the UK National Health Service web portal to read about health conditions I do not expect them to allow other companies to track that visit; I don't really care what anybody's privacy policy states, I don't expect the NHS to allow Facebook to track my browsing habits on the NHS web site.

Secondly, I would suggest that the statement “Facebook capturing data from sites like NHS Choices is a result of Facebook’s own system” is wrong.  Facebook being able to capture data from sites like NHS Choices is a result of NHS Choices adding Facebook's functionality to their site.

Finally, I don't believe that the statement “We advise that people log out of Facebook properly, not just close the window, to ensure no inadvertent data transfer” is technically correct.

(Sorry to non-technical users but it is about to get a bit techy…)

I created a clean Virtual Machine and installed HTTPWatch so I could see the traffic in my browser when I load an NHS Choices page.  This machine has never been to Facebook, and definitely never logged into it.  When I visit the NHS Choices page on bowel cancer the following call is made to Facebook:

http://www.facebook.com/plugins/like.php?href=http%3A%2F%2Fwww.nhs.uk%2fconditions%2fcancer-of-the-colon-rectum-or-bowel%2fpages%2fintroduction.aspx&layout=button_count&show_faces=true&width=450&action=like&colorscheme=light&height=21

 

[Screenshot: AnonFacebook]

So Facebook knows someone has gone to the above page, but does not know who.

 

Now go to Facebook and log in without ticking the ‘Keep logged in’ checkbox and the following cookie is deposited on my machine with the following 2 fields in it: (I've added xxxxxxxx to mask my unique id)

  • datr: s07-TP6GxxxxxxxxkOOWvveg
  • lu: RgfhxpMiJ4xxxxxxxxWqW9lQ

If I now close my browser and go back to Facebook, it does not log me in – but it knows who I am as my email address is pre-filled.

 

Now head back over to http://www.nhs.uk/conditions/cancer-of-the-colon-rectum-or-bowel/pages/introduction.aspx and when the Facebook page is contacted the cookie is sent to them with the data:

  • datr: s07-TP6GxxxxxxxxkOOWvveg
  • lu: RgfhxpMiJ4xxxxxxxxWqW9lQ

[Screenshot: FacebookNotLoggedIn]

 

So even if I am not logged into Facebook, and even if I do not click on the ‘Like’ button, the NHS Choices site is allowing Facebook to track me.

Sorry, I don't think that is acceptable.

[Update:  I originally misread James’ posting as saying the “keep me logged in” checkbox on the Facebook login page was a factor in enabling tracking – in other words that Facebook only used permanent cookies after you ticked that box.  Unfortunately this is not the case.  I've updated my comments in light of this information.

If you have authenticated to Facebook even once, the tracking widget will continue to collect information about you as you surf the web unless you manually delete your Facebook cookies from the browser.  This design is about as invasive of your privacy as you can possibly get…]
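
If you want to see the mechanism for yourself, here is a rough way to reproduce James's test – a sketch that assumes the third-party requests library; the URL (parameters abridged) and cookie names are the ones from his post, and the cookie values are placeholders since his are masked:

    import requests

    LIKE_URL = ("http://www.facebook.com/plugins/like.php"
                "?href=http%3A%2F%2Fwww.nhs.uk%2Fconditions%2F"
                "cancer-of-the-colon-rectum-or-bowel%2Fpages%2Fintroduction.aspx"
                "&layout=button_count&action=like")

    # A browser that has ever logged into Facebook holds cookies like these and
    # sends them automatically whenever the widget loads; we simulate that here.
    cookies = {"datr": "PLACEHOLDER", "lu": "PLACEHOLDER"}

    resp = requests.get(LIKE_URL, cookies=cookies)
    # Facebook now has both the identifying cookies and, via the href parameter,
    # the exact NHS page being read - without the Like button ever being clicked.
    print(resp.status_code)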

 

Vittorio's new book is a must-read

If you are a programmer interested in identity, I doubt you'll find a more instructive or amusing video than this one by Vittorio Bertocci.  It's aimed at people who work in .NET and explores the Windows Identity Foundation.   I expect most programmers interested in identity will find it fascinating no matter what platform they work on, even if it just provides a point of comparison.

And that brings me to Vittorio's new book:  Programming Windows Identity Foundation.  I really only have one thing to say about it:  you are crazy to program in WIF without reading this book.  And if you're an architect rather than a coder – but still comfortable reading code – you'll find that subjects like delegation benefit immensely from the concrete presentation Vittorio has put together.

I have to admit to being sufficiently engrossed that I had to drop everything I was doing in order to deal with some of the miniature brain-waves the book induced.  

But then, I have a soft spot for good books on programming.  I'm talking about books that have real depth but are simple and exciting because the writer has the same clarity as programmers have when they are in “programming trance”.  I used to even take a bunch of books with me when I went on vacation – it drove my mother-in-law nuts.

I'm not going to try to describe Vittorio's book – but it really hangs together, and if you're trying to do anything original or complex it will give you the depth of understanding you need to do it efficiently.  Just as important, you'll enjoy reading it.

Non-Personal Information – like where you live?

Last week I gave a presentation at PII 2010 in Seattle where I tried to summarize what I had learned from my recent work on WiFi location services and identity.  During the question period  an audience member asked me to return to the slide where I recounted how I had first encountered Apple’s new location tracking policy:

My questioner was clearly a bit irritated with me.  Didn't I realize that the “unique device identifier” was just a GUID – a purely random number?  It wasn't a MAC address.  It was not personally identifying.

The question really perplexed me, since I had just shown a slide demonstrating how if you go to this well-known web site (for example) and enter a location you find out who lives there (I used myself as an example, and by the way, “whitepages” releases this information even though I have had an unlisted number…).

I pointed out the obvious:  if Apple releases your location and a GUID to a third party on multiple occasions, one location will soon stand out as being your residence… Then presto, if the third party looks up the address in a “Reverse Address” search engine, the “random” GUID identifies you personally forever more.  The notion that location information tied to random identifiers is not personally identifiable information is total hogwash.
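
To spell out the arithmetic, here is a toy sketch – entirely hypothetical data, my own illustration – of how quickly a “random” identifier becomes personal:

    from collections import Counter

    pings = [
        ("guid-3f9a", (47.6101, -122.3300)),  # evenings and nights
        ("guid-3f9a", (47.6101, -122.3300)),
        ("guid-3f9a", (47.6101, -122.3300)),
        ("guid-3f9a", (47.6690, -122.3010)),  # the occasional daytime location
    ]

    def probable_home(pings):
        # The most frequent location is almost certainly the device owner's home.
        return Counter(loc for _, loc in pings).most_common(1)[0][0]

    home = probable_home(pings)
    # Feed `home` into any reverse-address service and the "anonymous" GUID now
    # identifies a household - and, from then on, a person.
    print(home)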

My questioner then asked, “Is your problem that Apple’s privacy policy is so clear?  Do you prefer companies who don’t publish a privacy policy at all, but rather just take your information without telling you?”  A chorus of groans seemed to answer his question to everyone’s satisfaction.  But I personally found the question thought provoking.  I assume corporations publish privacy policies – even those as duplicitous as Apple’s – because they have to.  I need to learn more about why.

[Meanwhile, if you’re wondering how I could possibly post my own residential address on my blog, it turns out I’ve moved and it is no longer my address.  Beyond that, the initial “A” in the listing above has nothing to do with my real name – it’s just a mechanism I use to track who has given out my personal information.]