The voting so far

The people working on a Social Network Users’ Bill of Rights have done another interesting thing:  rather than requiring people to express support or rejection holus-bolus, they've decided to let us vote on the individual rights proposed.  Further, Jon Pincus has shared the early results on his Liminal States blog.  He writes:

The SXSW panel got a decent amount of attention, including Helen A. S. Popkin’s Vote on your ‘Social Network Users’ Bill of Rights’ on MSNBC’s Technolog, Kim Cameron’s post on the Identity Weblog, and a brief link from Mark Sullivan of PC World. Here’s the voting so far:

  1. 41 yes 0 no Honesty: Honor your privacy policy and terms of service
  2. 41 yes 0 no Clarity: Make sure that policies, terms of service, and settings are easy to find and understand
  3. 41 yes 0 no Freedom of speech: Do not delete or modify my data without a clear policy and justification
  4. 33 yes 4 no Empowerment: Support assistive technologies and universal accessibility
  5. 35 yes 2 no Self-protection: Support privacy-enhancing technologies
  6. 37 yes 3 no Data minimization: Minimize the information I am required to provide and share with others
  7. 39 yes 1 no Control: Let me control my data, and don’t facilitate sharing it unless I agree first
  8. 39 yes 1 no Predictability: Obtain my prior consent before significantly changing who can see my data.
  9. 38 yes 0 no Data portability: Make it easy for me to obtain a copy of my data
  10. 39 yes 0 no Protection: Treat my data as securely as your own confidential data unless I choose to share it, and notify me if it is compromised
  11. 36 yes 2 no Right to know: Show me how you are using my data and allow me to see who and what has access to it.
  12. 24 yes 13 no Right to self-define: Let me create more than one identity and use pseudonyms. Do not link them without my permission.
  13. 35 yes 1 no Right to appeal: Allow me to appeal punitive actions
  14. 37 yes 1 no Right to withdraw: Allow me to delete my account, and remove my data

So it’s in general overwhelmingly positive: five rights are unanimous, and another eight at 89% or higher.  The one exception: the right to self-define, currently at about 65%.  As I said in a comment on the earlier thread, this right is vital for people like whistleblowers, domestic violence victims, political dissidents, closeted LGBTQs.   I wonder whether the large minority of people who don’t think it matters are thinking about it from those perspectives.
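The percentages quoted above can be checked directly against the raw counts in the list (a quick illustrative calculation, nothing more):

```python
# Recomputing the quoted percentages from the raw vote counts above.
def pct_yes(yes, no):
    return 100 * yes / (yes + no)

# Right 4 (Empowerment) is the lowest of the "89% or higher" group:
print(round(pct_yes(33, 4)))   # 89
# Right 12 (self-define) is the clear outlier:
print(round(pct_yes(24, 13)))  # 65
```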

The voting continues at http://SNUBillOfRights.com.  Please voice your opinion!

The voting on individual rights is still light.  Right 12 clearly stands out as one which needs discussion.

I expect most people just take a quick look at the bill as a whole, say “Yeah, that makes sense” and move on.  The “pro” and “against” pages at Facebook ran about 500 to 1 in favor of the Bill when I looked a few days ago.  In this sense the Bill is certainly right on track. 

But the individual rights need to be examined very carefully by at least some of us.  I'll return to Jon's comments on right 12 when I can make some time to set out my ideas.

Social Network Users’ Bill of Rights

The  “Social Network Users’ Bill of Rights” panel at the South by Southwest Interactive (SXSW) conference last Friday had something that most panels lack:  an outcome.  The goal was to get the SXSWi community to cast their votes and help to shape a bill of rights that would reflect the participation of many thousands of people using the social networks.

The idea of getting broad communities to vote on this is pretty interesting.  Panelist Lisa Borodkin wrote:

There is no good way currently of collecting hard, empirical, quantitative data about the preferences of a large number of social network users. There is a need to have user input into the formation of social norms, because courts interpreting values such as “expectations of privacy” often look to social network sites policies and practices.

Where did the Bill of Rights come from?  The document was written collaboratively over four days at last year's Computers, Freedom and Privacy Conference and since the final version was published has been collecting votes through pages like this one.  Voting is open until June 15, 2011 – the “anniversary of the date the U.S. government asked Twitter to delay its scheduled server maintenance as a critical communication tool for use in the 2009 Iran elections”.  And guess what?  That date also coincides with this year's Computers, Freedom and Privacy Conference.

The Bill – admirably straightforward and aimed at real people – reads as follows:

We the users expect social network sites to provide us the following rights in their Terms of Service, Privacy Policies, and implementations of their system:

  1. Honesty: Honor your privacy policy and terms of service
  2. Clarity: Make sure that policies, terms of service, and settings are easy to find and understand
  3. Freedom of speech: Do not delete or modify my data without a clear policy and justification
  4. Empowerment: Support assistive technologies and universal accessibility
  5. Self-protection: Support privacy-enhancing technologies
  6. Data minimization: Minimize the information I am required to provide and share with others
  7. Control: Let me control my data, and don’t facilitate sharing it unless I agree first
  8. Predictability: Obtain my prior consent before significantly changing who can see my data.
  9. Data portability: Make it easy for me to obtain a copy of my data
  10. Protection: Treat my data as securely as your own confidential data unless I choose to share it, and notify me if it is compromised
  11. Right to know: Show me how you are using my data and allow me to see who and what has access to it.
  12. Right to self-define: Let me create more than one identity and use pseudonyms. Do not link them without my permission.
  13. Right to appeal: Allow me to appeal punitive actions
  14. Right to withdraw: Allow me to delete my account, and remove my data

It will be interesting to see whether social networking sites engage with this initiative.  Sixestate reported some time ago that Facebook objected to requiring support for pseudonyms. 

While I support all other aspects of the Bill, I too think it is a mistake to mandate that ALL communities MUST support pseudonymity or be in violation of the Bill…  In all other respects, the Bill is consistent with the Laws of Identity.  However the Laws envisaged a continuum of approaches to identification, and argued that all have their place for different purposes.  I think this is much closer to the mark and Right 12 should be amended.  The fundamental point is that we must have the RIGHT to form and participate in communities that DO choose to support pseudonymity.  This doesn't mean we ONLY have the right to participate in such communities.

Where do the organizers want to go next? Jon Pincus writes:

Here’s a few ideas:

  • get social network sites to adopt the concept of a Bill of Rights for their users and as many of the individual rights as they’re comfortable with.   Some of the specific rights are contentious  — for example, Facebook objected to the pseudonymity right in their response last summer.  But more positively, Facebook’s current “user rights and responsibilities” document already covers many of these rights, and it would be great to have even partial support from them.  And sites like Twitter, tribe.net, and emerging companies that are trying to emphasize different values may be willing to go even farther.
  • work with politicians in the US and elsewhere who are looking at protecting privacy online, and encourage them to adopt the bill of rights framework and our specific language.  There’s a bit of “carrot and stick” combining this and the previous bullet: the threat of legislation is great both for encouraging self-regulation and getting startups to look for a potential future strategic advantage by adopting strong user rights from the beginning.
  • encourage broad participation to highlight where there’s consensus.  Currently, there are a couple of ways to weigh in: the Social Network Users’ Bill of Rights site allows you to vote on the individual rights, and you can also vote for or against the entire bill via Twitter.  It would be great to have additional voting on other social network sites like Facebook, MySpace, Reddit to give the citizens of those “countries” a voice.
  • collaborate with groups like the Global Network Initiative, the Internet Rights and Principles Coalition, the Social Charter, and the Association for Progressive Communications that support similar principles
  • follow Gabrielle Pohl’s lead and translate into multiple languages to build awareness globally.
  • take a more active approach with media outreach to call more attention to the campaign.  #privchat, the weekly Twitter chat sponsored by the Center for Democracy and Technology and Privacy Camp, is a natural hub for the discussion.

Meanwhile, here are some ways you can express your views:


Non-Personal Information – like where you live?

Last week I gave a presentation at PII 2010 in Seattle where I tried to summarize what I had learned from my recent work on WiFi location services and identity.  During the question period  an audience member asked me to return to the slide where I recounted how I had first encountered Apple’s new location tracking policy:

My questioner was clearly a bit irritated with me.  Didn’t I realize that the “unique device identifier” was just a GUID – a purely random number?  It wasn’t a MAC address.  It was not personally identifying.

The question really perplexed me, since I had just shown a slide demonstrating how if you go to this well-known web site (for example) and enter a location you find out who lives there (I used myself as an example, and by the way, “whitepages” releases this information even though I have had an unlisted number…).

I pointed out the obvious:  if Apple releases your location and a GUID to a third party on multiple occasions, one location will soon stand out as being your residence… Then presto, if the third party looks up the address in a “Reverse Address” search engine, the “random” GUID identifies you personally forever more.  The notion that location information tied to random identifiers is not personally identifiable information is total hogwash.
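To make the re-identification step concrete, here is a minimal sketch (the data and function names are hypothetical, not any real service's API) of how a "random" GUID plus repeated location reports betrays a residence:

```python
from collections import Counter

# Hypothetical reports a third party might accumulate: the GUID is
# "random", but the most frequently reported location quickly stands
# out as the device owner's residence.
pings = [
    ("guid-4f2a", (47.6205, -122.3493)),  # daytime: downtown
    ("guid-4f2a", (47.6740, -122.1215)),  # evening: home
    ("guid-4f2a", (47.6740, -122.1215)),  # overnight: home
    ("guid-4f2a", (47.6740, -122.1215)),  # overnight: home
]

def likely_residence(reports):
    """Return the most frequently reported location across all pings."""
    counts = Counter(loc for _, loc in reports)
    return counts.most_common(1)[0][0]

home = likely_residence(pings)
# A reverse-address lookup on `home` then ties the "anonymous" GUID
# to a named resident - the GUID was personally identifying all along.
print(home)  # (47.674, -122.1215)
```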

My questioner then asked, “Is your problem that Apple’s privacy policy is so clear?  Do you prefer companies who don’t publish a privacy policy at all, but rather just take your information without telling you?”  A chorus of groans seemed to answer his question to everyone’s satisfaction.  But I personally found the question thought provoking.  I assume corporations publish privacy policies – even those as duplicitous as Apple’s – because they have to.  I need to learn more about why.

[Meanwhile, if you’re wondering how I could possibly post my own residential address on my blog, it turns out I’ve moved and it is no longer my address.  Beyond that, the initial “A” in the listing above has nothing to do with my real name – it’s just a mechanism I use to track who has given out my personal information.]

Blizzard backtracks on real-names policy

A few days ago I mentioned the outcry when Blizzard, publisher of the World of Warcraft (WoW) multi-player Internet game, decided to make gamers reveal their offline identities and identifiers within their fantasy gaming context. 

I also described Blizzard's move as being the “kookiest” flouting yet of the Fourth Law of Identity (Contextual separation through unidirectional identifiers). 

Today the news is all about Blizzard's first step back from the mistaken plan that appears to have completely misunderstood its own community.

CEO Mike Morhaime  seems to be on the right track with the first part of his message:

“I'd like to take some time to speak with all of you regarding our desire to make the Blizzard forums a better place for players to discuss our games. We've been constantly monitoring the feedback you've given us, as well as internally discussing your concerns about the use of real names on our forums. As a result of those discussions, we've decided at this time that real names will not be required for posting on official Blizzard forums.

“It's important to note that we still remain committed to improving our forums. Our efforts are driven 100% by the desire to find ways to make our community areas more welcoming for players and encourage more constructive conversations about our games. We will still move forward with new forum features such as the ability to rate posts up or down, post highlighting based on rating, improved search functionality, and more. However, when we launch the new StarCraft II forums that include these new features, you will be posting by your StarCraft II Battle.net character name + character code, not your real name. The upgraded World of Warcraft forums with these new features will launch close to the release of Cataclysm, and also will not require your real name.”

Then he goes weird again.  He seems to have a fantasy of his own:  that he is running Facebook…

“I want to make sure it's clear that our plans for the forums are completely separate from our plans for the optional in-game Real ID system now live with World of Warcraft and launching soon with StarCraft II. We believe that the powerful communications functionality enabled by Real ID, such as cross-game and cross-realm chat, make Battle.net a great place for players to stay connected to real-life friends and family while playing Blizzard games. And of course, you'll still be able to keep your relationships at the anonymous, character level if you so choose when you communicate with other players in game. Over time, we will continue to evolve Real ID on Battle.net to add new and exciting functionality within our games for players who decide to use the feature.”

Don't get me wrong.  As convoluted as this thinking is, it's one big step forward (after two giant steps backward) to make linking of offline identity to gaming identity “optional”. 

And who knows?  Maybe Mike Morhaime really does understand his users…  He may be right that lots of gamers are totally excited at the prospect of their parents, lovers and children joining Battle.net to stay connected with them while they are playing WoW!  Facebook doesn't stand a chance!


Trusting Mobile Technology

Jacques Bus recently shared a communication he has circulated about the mobile technology issues I've been exploring.  To European readers he will need no introduction:  as Head of Unit for the European Commission's Information and Communication Technologies (ICT) Research Programme he oversaw and gave consistency to the programs shaping Europe's ICT research investment.  Thoroughly expert and equally committed to results, Jacques’ influence on ICT policy thinking is clearly visible in Europe.   Jacques is now an independent consultant on ICT issues.

On June 20, Kim Cameron [KC] posted a piece on this blog titled: Harvesting phone and laptop fingerprints for its database – Google says the user’s device sends a request to its location server with a list of all MAC addresses currently visible to it. Does that include yours?

It was the start of a series of communications that reads like a thriller. Unfortunately the victim is not imaginary: it is you and me.

He started with an example of someone attending a conference while subscribed to a geo-location service. “I [KC] argued that the subscriber’s cell phone would pick up all the MAC addresses (which serve as digital fingerprints) of nearby phones and laptops and send them in to the centralized database service, which would look them up and potentially use the harvested addresses to further increase its knowledge of people’s behavior – for example, generating a list of those attending the conference.”

He then explained how Google says its location database works, showing that “certainly the MAC addresses of all nearby phones and laptops are sent in to the geo-location server – not simply the MAC addresses of wireless access points that are broadcasting SSIDs.”

His first post was followed by others, including a reference to an excellent piece by Niraj Chokshi in The Atlantic, demonstrating that Google's messages in its application descriptions are, to say the least, not in line with its PR messages to Chokshi.

On 2 July, a discussion of Apple iTunes followed in KC's post Update to iTunes comes with privacy fibs, whose main message was: As the personal phone evolves it will become increasingly obvious that groups within some of our best tech companies have built businesses based on consciously crafted privacy fibs.

The new iTunes policy says: By using this software in connection with an iTunes Store account, you agree to the latest iTunes Store Terms of Service, which you may access and review from the home page of the iTunes Store. So iTunes says: Our privacy policy is that you need to read another privacy policy. This other policy states:

We also collect non-personal information – data in a form that does not permit direct association with any specific individual. We may collect, use, transfer, and disclose non-personal information for any purpose. The following are some examples of non-personal information that we collect and how we may use it:

  • We may collect information such as occupation, language, zip code, area code, unique device identifier, location, and the time zone where an Apple product is used so that we can better understand customer behavior and improve our products, services, and advertising.

I think KC rightly asks the question: What does downloading a song have to do with giving away your location???

Clearly Apple would call its unique device identifier – and its location – ”non-personal data”. However, in Europe personal data means any information relating to an identified or identifiable natural person. Even Google CEO Eric Schmidt would, under this EU definition, presumably disagree with Apple, given his statement in a recent speech quoted by KC: Google is making the Android phone, we have the Kindle, of course, and we have the iPad. Each of these form factors with the tablet represent in many ways your future….: they’re personal. They’re personal in a really fundamental way. They know who you are. So imagine that the next version of a news reader will not only know who you are, but it’ll know what you’ve read…and it’ll be more interactive. And it’ll have more video. And it’ll be more real-time. Because of this principle of “now.”

We could go on with the post of 3 July: The current abuse of personal device identifiers by Google and Apple is at least as significant as the problems I discussed long ago with Passport. He is referring to a story by Todd Bishop at TechFlash – here I refer readers to the original thriller rather than trying to summarize it for them.

What is absolutely clear from the above is how dependent we all are on mobile technology. It is also clear that to enjoy the personal and location services we request, one needs to combine data on the person and his location. However, I am convinced that in the complex society we live in, we will eventually only accept services and infrastructure if we can trust them to work as we expect, including in the handling of our personal data. But trust can only be given if the services and infrastructure are trustworthy. O'Hara and Hall describe trust on the Web very well, based on fundamental principles. They decompose trust into local trust (personal experience through high-bandwidth interactions) and global trust (outsourcing our trust decisions to trusted institutions, whose roles are accepted through training, witnessing, or certification). Reputation is usually a mix of the two.

For trust to be built up, the transparency and accountability of the data collectors and processors are essential. As local trust is particularly difficult in global transactions over the Web, we need stronger global trust through a priori assurances of compliance with legal obligations on privacy protection, transparency, auditing, and effective law enforcement and redress. These are basic principles on which our free and developed societies are built, and which are necessary to guarantee creativity, social stability, economic activity and growth.

One can conclude from KC's posts that few of these essential elements are present in the current mobile world.

I agree that the legal solutions he proposes are small steps in the right direction and should be pursued. However, essential action at the level of legislators is also urgently needed. Data Protection authorities in Europe are well aware of this, as demonstrated in The Future of Privacy. Unfortunately such solutions are slow to implement, whilst commercial developments are very fast.

Technology solutions – such as WiFi protocols that appropriately randomize MAC addresses and also protect other personal data – are likewise urgently needed to enable the development of trustworthy solutions that are competitive, and methods should be sought to standardize such results quickly.
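As a sketch of what such randomization involves at the address level (illustrative only – a real protocol also has to manage probe requests, rotation schedules, and so on), a device can simply substitute a locally administered random address for its burned-in one:

```python
import random

def random_mac():
    """Generate a locally administered, unicast MAC address.

    Setting bit 1 of the first octet marks the address as locally
    administered (i.e. not a stable, manufacturer-assigned identifier);
    clearing bit 0 keeps it unicast. Rotating such addresses denies a
    listener a persistent device fingerprint.
    """
    first = (random.randint(0, 255) | 0x02) & 0xFE
    rest = [random.randint(0, 255) for _ in range(5)]
    return ":".join(f"{b:02x}" for b in [first] + rest)

print(random_mac())  # prints something like "a6:3f:09:1c:77:5e"
```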

However, the gigantic global centralization of data collection, and the possibility of massive correlation, are scary, and may make DP Commissioners – even acting as a group in Europe – look helpless. The data is already out there and usable.

What I wonder is this: is all this data available to law enforcement under warrant, and accepted as legal proof in court? And if not, how can it be that private companies are allowed to collect it? Don't we need some large legal test cases?

And let’s not forget one thing: any government action must be as global as possible given the broad international presence of the most important companies in this field, hence the proposed standards of the joint international DP authorities in their Madrid Declaration.

Smart questions and conclusions.


Doing it right: Touch2Id

And now for something refreshingly different:  an innovative company that is doing identity right. 

I'm talking about a British outfit called Touch2Id.  Their concept is really simple.  They offer young people a smart card that can be used to prove they are old enough to drink alcohol.  The technology is now well beyond the “proof of concept” phase – in fact its use in Wiltshire, England is being expanded based on its initial success.

  • To register, people present their ID documents and, once verified, a template of their fingerprint is stored on a Touch2Id card that is immediately given to them. 
  • When they go to a bar, they wave their card over a machine similar to a credit card reader, and press their finger on the machine.  If their finger matches the template on their card, the lights come on and they can walk on in.

What's great here is:

  • Merchants don't have to worry about making mistakes.  The age vetting process is stringent and fake IDs are weeded out by experts.
  • Young people don't have to worry about being discriminated against (or being embarrassed) just because they “look young”
  • No identifying information is released to the merchant.  No name, age or photo appears on (or is stored on) the card.
  • The movements of the young person are not tracked.
  • There is no central database assembled that contains the fingerprints of innocent people
  • The fingerprint template remains the property of the person with the fingerprint – there is no privacy issue or security honeypot.
  • Kids cannot lend their card to a friend – the friend's finger would not match the fingerprint template.
  • If the card is lost or stolen, it won't work any more
  • The templates on the card are digitally signed and can't be tampered with
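The combination of the last few bullets can be sketched as follows. This is purely illustrative – Touch2Id's actual card format, matcher, and cryptography are not public – but it shows the shape of the check: verify the issuer's signature over the template, then match the live fingerprint against it. (Here an HMAC stands in for a real digital signature, and exact byte equality for a real fuzzy fingerprint matcher.)

```python
import hashlib
import hmac

ISSUER_KEY = b"issuer-secret"  # stand-in for the issuer's signing key

def sign_template(template: bytes) -> bytes:
    """Issuer signs the fingerprint template at enrollment time."""
    return hmac.new(ISSUER_KEY, template, hashlib.sha256).digest()

def verify_card(template: bytes, signature: bytes, live_scan: bytes) -> bool:
    """Reader's check: reject tampered templates, then match the finger."""
    if not hmac.compare_digest(sign_template(template), signature):
        return False  # template was tampered with or forged
    return live_scan == template  # real systems use fuzzy matching

tpl = b"fingerprint-template-bytes"
sig = sign_template(tpl)
print(verify_card(tpl, sig, tpl))       # True: genuine card, right holder
print(verify_card(tpl, sig, b"other"))  # False: borrowed card, wrong finger
```

Note that nothing in this exchange requires a central database or reveals a name – exactly the properties the bullets above describe.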

I met the man behind Touch2Id, Giles Sergant, at the recent EEMA meeting in London.

Versed as I am in the (mis)use of biometrics in identity – especially the fingerprinting of our kids – I was initially more than skeptical. 

But Giles has done his homework (even auditing the course given by privacy experts Gus Hosein and Simon Davies at the London School of Economics).  The better I understood the approach he has taken, the more impressed I was.

Eventually I even agreed to enroll so as to get a feeling for what the experience was like.  The verdict:  amazing.  It's a lovely piece of minimalistic engineering, with no unnecessary moving parts or ugly underbelly.  If I look strangely euphoric in the photo that was taken, it is because I was thoroughly surprised to see something so good.

Since then, Giles has already added an alternate form factor – an NFC sticker people can put on their mobile phone so they don't actually need to carry around an additional artifact.  It will be fascinating to watch how young people respond to this initiative, which Giles is trying to grow from the bottom up.  More info on the Facebook page.

Update to iTunes comes with privacy fibs

A few days ago I reported that from now on, to get into the iPhone App store you must allow Apple to share your phone or tablet device fingerprints and detailed, dynamic location information with anyone it pleases.  No chance to vet the purposes for which your location data is being used.  No way to know who it is going to. 

As incredible as it sounds in 2010, no user control.  Not even  transparency.  Just one thing is for sure.  If privacy isn't dead, Apple is now amongst those trying to bury it alive.

Then today, just when I thought Apple had gone as far as it could go in this particular direction, a new version of iTunes wanted to install itself on my laptop.  What do you know?  It had a new privacy policy too… 

The new iTunes policy was snappier than the iPhone policy – it came to the point – sort of – in the 5th paragraph rather than the 37th page!

5. iTunes Store and other Services.  This software enables access to Apple's iTunes Store which offers downloads of music for sale and other services (collectively and individually, “Services”). Use of the Services requires Internet access and use of certain Services requires you to accept additional terms of service which will be presented to you before you can use such Services.

By using this software in connection with an iTunes Store account, you agree to the latest iTunes Store Terms of Service, which you may access and review from the home page of the iTunes Store.

I shuddered.  Mind bend!  A level of indirection in a privacy policy! 

Imagine:  “Our privacy policy is that you need to read another privacy policy.”  This makes it much more likely that people will figure out what they're getting into, don't you think?  Besides, it is a really novel application of the proposition that all problems of computer science can be solved through a level of indirection!  Bravo!

But then – the coup de grace.  The privacy policy to which Apple redirects you is… are you ready… the same one we came across a few days ago at the App Store!  So once again you need to get to the equivalent of page 37 of 45 to read:

Collection and Use of Non-Personal Information

We also collect non-personal information – data in a form that does not permit direct association with any specific individual. We may collect, use, transfer, and disclose non-personal information for any purpose. The following are some examples of non-personal information that we collect and how we may use it:

  • We may collect information such as occupation, language, zip code, area code, unique device identifier, location, and the time zone where an Apple product is used so that we can better understand customer behavior and improve our products, services, and advertising.

The mind bogggggles.  What does downloading a song have to do with giving away your location???

Some may remember my surprise that the Lords of The iPhone would call its unique device identifier – and its location – “non-personal data”.  Non-personal implies there is no strong relationship to the person who is using it.  I wrote:

The irony here is a bit fantastic.  I was, after all, using an “iPhone”.   I assume Apple’s lawyers are aware there is an “i” in the word “iPhone”.  We’re not talking here about a piece of shared communal property that might be picked up by anyone in the village.  An iPhone is carried around by its owner.  If a link is established between the owner’s natural identity and the device (as Google’s databases have done), its “unique device identifier” becomes a digital fingerprint for the person using it. 

Anybody who thinks about identity understands that a “personal device” is associated with (even an extension of) the person who uses it.  But most people – including technical people – don't give these matters the slightest thought.  

A parade of tech companies have figured out how to use peoples’ ignorance about digital identity to get away with practices letting them track what we do from morning to night in the physical world.  But of course, they never track people, they only track their personal devices!  Those unruly devices really have a mind of their own – you definitely need central databases to keep tabs on where they're going.

I was therefore really happy to read some of  Google CEO Eric Schmidt’s recent speech to the American Society of News Editors.  Talking about mobility he made a number of statements that begin to explain the ABCs of what mobile devices are about:

Google is making the Android phone, we have the Kindle, of course, and we have the iPad. Each of these form factors with the tablet represent in many ways your future….: they’re personal. They’re personal in a really fundamental way. They know who you are. So imagine that the next version of a news reader will not only know who you are, but it’ll know what you’ve read…and it’ll be more interactive. And it’ll have more video. And it’ll be more real-time. Because of this principle of “now.”

It is good to see Eric sharing the actual truth about personal devices with a group of key influencers.  This stands in stark contrast to the silly fibs about phones and laptops being non-personal that are being handed down in the iTunes Store, the iPhone App Store, and in the “Refresher FAQ” Fantasyland Google created in response to its Street View WiFi shenanigans. 

As the personal phone evolves it will become increasingly obvious  that groups within some of our best tech companies have built businesses based on consciously crafted privacy fibs.  I'm amazed at the short-sightedness involved:  folks, we're talking about a “BP moment”.  History teaches us that “There is no vice that doth so cover a man with shame as to be found false and perfidious.” [Francis Bacon]  And statements that your personal device doesn't identify you and that location is not personal information are precisely “false and perfidious.”


National Strategy for Trusted Identities in Cyberspace

Friday saw what I think is a historic post by Howard Schmidt on The Whitehouse Blog:

“Today, I am pleased to announce the latest step in moving our Nation forward in securing our cyberspace with the release of the draft National Strategy for Trusted Identities in Cyberspace (NSTIC).  This first draft of NSTIC was developed in collaboration with key government agencies, business leaders and privacy advocates. What has emerged is a blueprint to reduce cybersecurity vulnerabilities and improve online privacy protections through the use of trusted digital identities. “

I say the current draft is historic because of the grasp of identity issues it achieves.

At the core of the document is a recognition that we need a solution supporting privacy-enhancing technologies and built by harnessing a user-centric Identity Ecosystem offering citizens and private enterprise plenty of choice.  

Finally we have before us a proposal that can move society forward in  protecting individual privacy and simultaneously create a secure and trustworthy infrastructure with enough protections to be resistant to insider attacks.  

Further, the work appears to have support from multiple government agencies – the Department of Homeland Security was a key partner in its creation. 

Here are the guiding principles (beginning page 8):

  • Identity solutions will be secure and resilient
  • Identity solutions will be interoperable
  • Identity solutions will be privacy enhancing and voluntary for the public
  • Identity solutions will be cost-effective and easy to use

Let's start with the final “s” on the word “solutions” – a major achievement.  The authors understand society needs a spectrum of approaches suitable for different use cases but fitting within a common interoperable framework – what I and others have called an identity metasystem. 

The report embraces the need for anonymous access as well as that for strong identification.  It stands firmly in favor of minimal disclosure.  The authors call out the requirement that solutions be privacy enhancing and voluntary for the public, rather than attempting to ram something bureaucratic down peoples’ throats.  And they are fully cognisant of the practicality and usability requirements for the initiative to be successful.  A few years ago I would not have believed this kind of progress would be possible.

Nor is the report just a theoretical treatment devoid of concrete proposals.  The section on “Commitment to Action” includes:

  • Designate a federal agency to lead the public/private sector efforts to advance the vision
  • Develop a shared, comprehensive public/private sector implementation plan
  • Accelerate the expansion of government services, pilots and policies that align with the identity ecosystem
  • Work to implement enhanced privacy protections
  • Coordinate the development and refinement of risk management and interoperability standards
  • Address liability concerns of service providers and individuals
  • Perform outreach and awareness across all stakeholders
  • Continue collaborating in international efforts
  • Identify other means to drive adoption

Readers should dive into the report – it is in a draft stage and “Public ideas and recommendations to further refine this Strategy are encouraged.”  

A number of people and organizations in the identity world have participated in getting this right, working closely with policy thinkers and those leading this initiative in government.  I don't hesitate to say that congratulations are due all round for getting this effort off to such a good start.

We can expect suggestions for strengthening various aspects of the report – mainly in terms of making it more internally consistent.  

For example, the report contains good vignettes about minimal disclosure and the use of claims to gain access to resources.  Yet it also retains the traditional notion that authentication depends on identification.  What is meant by identification?  Many will assume it means “unique identification” in the old-fashioned sense of associating someone with an identifier.  That doesn't jibe with the notion of minimal disclosure present throughout the report: for many purposes, association with an identifier amounts to over-identification, and a simple proof of some set of claims would suffice to control access.  
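To make the distinction concrete, here is a minimal sketch of claims-based access control in Python.  It is my own illustration, not a mechanism from the report: the issuer, the shared HMAC key, and the `over_18` claim are all hypothetical stand-ins for a real token format and signature scheme.  The point is simply that the relying party verifies a signed claim set and never sees, or needs, a unique identifier.

```python
import hmac
import hashlib
import json

# Hypothetical trust anchor shared between issuer and relying party;
# a real deployment would use asymmetric signatures instead.
ISSUER_KEY = b"shared-secret-with-issuer"

def sign_claims(claims: dict) -> dict:
    """Issuer side: sign a claim set.  Note that no identifier is included."""
    payload = json.dumps(claims, sort_keys=True).encode()
    return {"claims": claims,
            "sig": hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()}

def grant_access(token: dict, required: dict) -> bool:
    """Relying party: verify the signature, then check only the claims needed."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False
    return all(token["claims"].get(k) == v for k, v in required.items())

token = sign_claims({"over_18": True})         # no name, no unique identifier
print(grant_access(token, {"over_18": True}))  # True
```

Access is granted on the strength of the claim alone – exactly the minimal-disclosure pattern the report's vignettes describe, and exactly what “authentication requires identification” would rule out.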

But these refinements can be made fairly easily.  The real challenge will be to actually live up to the guiding principles as we move from high level statements to a widely deployed system – making it truly secure, resilient and privacy enhancing.  These are guiding principles we can use to measure our success and help select between alternatives.

 

Apple giving out your iPhone fingerprints and location

I went to the Apple App store a few days ago to download a new iPhone application.  I expected that this would be as straightforward as it had been in the past: choose a title, click on pay, and presto – a new application becomes available.

No such luck.  Apple had changed its privacy policy, and I was taken to the screen at right.  To proceed I had to “read and accept the new Terms and Conditions”.  I pressed OK and up came page 1 of a new 45-page “privacy” policy.

I would assume “normal people” would cry “uncle” and click “approve” around page 3.  But in light of what is happening in the industry around location services, I kept reading the tiny, unsearchable, unzoomable print.

And there – on page 37 – you come to “the news”.  Apple's new “privacy” policy reveals that if you use Apple products Apple can disclose your device fingerprints and location to whomever it chooses and for whatever purpose:

Collection and Use of Non-Personal Information

We also collect non-personal information – data in a form that does not permit direct association with any specific individual. We may collect, use, transfer, and disclose non-personal information for any purpose. The following are some examples of non-personal information that we collect and how we may use it:

  • We may collect information such as occupation, language, zip code, area code, unique device identifier, location, and the time zone where an Apple product is used so that we can better understand customer behavior and improve our products, services, and advertising.

No “direct association with any specific individual…”

Maintaining that a personal device fingerprint has “no direct association with any specific individual” is unbelievably specious in 2010 – and even more ludicrous than it used to be now that Google and others have collected the information to build giant centralized databases linking phone MAC addresses to house addresses.  And – big surprise – my iPhone, at least, came bundled with Google's location service.

The irony here is a bit fantastic.  I was, after all, using an “iPhone”.  I assume Apple's lawyers are aware there is an “I” in the word “iPhone”.  We're not talking here about a piece of shared communal property that might be picked up by anyone in the village.  An iPhone is carried around by its owner.  If a link is established between the owner's natural identity and the device (as Google's databases have done), its “unique device identifier” becomes a digital fingerprint for the person using it. 
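The re-identification risk here is mechanical, not hypothetical in principle – though the records below are invented for illustration.  A stable “unique device identifier” is a join key: the moment any second dataset maps that identifier to a person or a street address (as a wardriving-style MAC-to-address database effectively does), every “non-personal” record keyed by it becomes personal.  A few lines of Python show the join:

```python
def reidentify(ad_logs, wifi_survey):
    """Join two 'non-personal' datasets on the stable device identifier."""
    return [(rec["udid"], wifi_survey[rec["udid"]], rec["location"])
            for rec in ad_logs if rec["udid"] in wifi_survey]

# Hypothetical records: ad/analytics logs keyed by UDID, and a separate
# wardriving-style database mapping device identifiers to street addresses.
ad_logs = [{"udid": "a1b2c3", "zip": "98052", "location": (47.64, -122.13)}]
wifi_survey = {"a1b2c3": "1 Example Street"}

print(reidentify(ad_logs, wifi_survey))
```

No single dataset needs to contain a name: the linkage itself does the identifying, which is why calling identifier-plus-location data “non-personal” is so misleading.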

Apple's statements constitute more disappointing doubletalk that is suspiciously well-aligned with the statements in Google's now-infamous WiFi FAQ.  Checking with the “Wayback machine” (which is of course not guaranteed to be accurate or up to date) the last change recorded in Apple's privacy policy seems to have been made in April 2008.  It contained no reference to device identifiers or location services. 

 

Digital copiers – a privacy and security timebomb

Everyone involved with software and services should watch this remarkable investigative report by CBS News and think about what it teaches us.

Nearly every digital copier built since 2002 contains a hard drive storing an image of every document copied, scanned, or emailed by the machine.  Because of this, the report shows, an office staple has turned into a digital time-bomb packed with highly-personal or sensitive data.  To quote the narrator, “If you're in the identity theft business it seems this would be a pot of gold.”

In the video, the investigators purchase some used machines and then John Juntunen of Digital Copier Security shows them what is still stored on them when they are resold.  As he says, “The type of information we see on these machines with the social security numbers, birth certificates, bank records, income tax forms… would be very valuable.”   He's been trying to warn people about the potential risk, but “Nobody wants to step up and say, ‘we see the problem, and we need to solve it.'”

The results obtained by the investigators in their random sample are stunning, turning up:

  • detailed domestic violence complaints;
  • a list of wanted sex offenders;
  • a list of targets in a major drug raid;
  • design plans for a building near Ground Zero in Manhattan;
  • 95 pages of pay stubs with names, addresses and social security numbers;
  • $40,000 in copied checks; and
  • 300 pages of individual medical records including everything from drug prescriptions, to blood test results, to a cancer diagnosis.

Why are these records sitting around on the hard disk in the first place?  Why aren't they deleted once the copy has been completed or within some minimal time?  If they are kept for audit purposes, why aren't they encrypted for the auditor? 

Is this “rainy-day data collection”?  Gee, we have a hard disk – why don't we keep the scans around?  They might come in useful sometime. 

It becomes clear that addressing privacy and security threats was never a concern in designing these machines – which are actually computer systems.  This was a “privacy Chernobyl” by design.  Of course I'm speaking not only about individual privacy, but about that of the organizations using the machines as well.   The report makes it obvious that digital copiers – or anything else that collects or remembers information – must be designed based on the Law of Minimal Disclosure.

This story also casts an interesting light on what the French are calling “le droit à l'oubli” – the right to have things forgotten.   Most discussions I've seen call for this principle to be applied on the Internet.  But as the digital world collides with the molecular one, we will see the need to build information lifetimes into all digital systems, including smart systems in our environment.  The current and very serious problems with copiers should be seen as profoundly instructive in this regard.

[Thanks to Francis Shanahan for the heads up]