Malcolm Crompton on power imbalance and security

Australia's CRN reports that former Australian Privacy Commissioner Malcolm Crompton has called for the establishment of a formal privacy industry to rethink identity management in an increasingly digital world:

Addressing the Cards & Payments Australasia conference in Sydney this week, Crompton said the online environment needed to become “safe to play” from citizens’ perspective.

While the internet was built as a “trusted environment”, Crompton said governments and businesses had emerged as “digital gods” with imbalanced identification requirements.

“Power allocation is where we got it wrong,” he said, warning that organisations’ unwarranted emphasis on identification had created money-making opportunities for criminals.

Malcolm puts this well.  I too have come to see that the imbalance of power between individual users and Internet businesses is one of the key factors blocking the emergence of a safe Internet.

CRN continues:

Currently, users were forced to provide personal information to various email providers, social networking sites, and online retailers in what Crompton described as “a patchwork of identity one-offs”.

Not only were login systems “incredibly clumsy and easy to compromise”; centralised stores of personal details and metadata created honeypots of information for identity thieves, he said…

Refuting arguments that metadata – such as login records and search strings – was unidentifiable, Crompton warned that organisations hoarding such information would one day face a user revolt.

He also recommended the use of cloud-based identification management systems such as Azigo, Avoco and OpenID, which tended to give users more control of their information and third-party access rights.

User-centricity was central to Microsoft chief identity architect Kim Cameron’s ‘Laws of Identity’ (pdf), as well as Ontario Privacy Commissioner Ann Cavoukian’s seven principles of ‘Privacy by Design’ (pdf).

Full article here.

Lazy headmasters versus the Laws of Identity

Ray Corrigan routinely combines legal and technological insight at B2fxxx – Random thoughts on law, the Internet and society, and his book on Digital Decision Making is essential.  His work often leaves me feeling uncharacteristically optimistic – living proof that a new kind of legal thinker is emerging with the technological depth needed to be a modern-day Solomon.

I hadn't noticed the UK's new Protection of Freedoms Bill until I heard Immigration Minister Damian Green talk about it as he pulverized the UK's centralized identity database recently.  Naturally I turned to Ray Corrigan for comment, only to discover that the political housecleaning had also swept away the assumptions behind widespread fingerprinting in Britain's schools, reinstating user control and consent.

According to TES Connect:

The new Protection of Freedoms Bill gives pupils in schools and colleges the right to refuse to give their biometric data and compels schools to make alternative provision for them.  The several thousand schools that already use the technology will also have to ask permission from parents retrospectively, even if their systems have been established for years…

It turns out that Britain's headmasters, apparently now a lazy bunch, have little stomach for trivialities like civil liberties.  And writing about this, Ray's tone seems that of a judge who has had an impetuous and over-the-top barrister try to bend the rules one too many times.  It is satisfying to see Ray send them home to study the Laws of Identity as scientific laws governing identity systems.   I hope they catch up on their homework…

The Association of School and College Leaders (ASCL) is reportedly opposing the controls on school fingerprinting proposed in the UK coalition government's Protection of Freedoms Bill.

I always understood the reason that unions existed was to protect the rights of individuals. That ASCL should give what they perceive to be their own members’ managerial convenience priority over the civil rights of kids should make them thoroughly ashamed of themselves.  Oh dear – now head teachers are going to have to fill in a few forms before they abuse children's fundamental right to privacy – how terrible.

Although headteachers and governors at schools deploying these systems may be typically ‘happy that this does not contravene the Data Protection Act’, a number of leading barristers have stated that the use of such systems in schools may be illegal on several grounds. As far back as 2006 Stephen Groesz, a partner at Bindmans in London, was advising:

“Absent a specific power allowing schools to fingerprint, I'd say they have no power to do it. The notion you can do it because it's a neat way of keeping track of books doesn't cut it as a justification.”

The recent decisions in the European Court of Human Rights in cases like S. and Marper v UK (2008 – retention of DNA and fingerprints) and Gillan and Quinton v UK (2010 – s44 police stop and search) mean schools have to be increasingly careful about the use of such systems anyway. Not that most schools would know that.

Again the question of whether kids should be fingerprinted to get access to books and school meals is not even a hard one! They completely decimate Kim Cameron's first four laws of identity.

1. User control and consent – many schools don't ask for consent, child or parental, and don't provide simple opt out options

2. Minimum disclosure for constrained use – the information collected, children's unique biometrics, is disproportionate for the stated use

3. Justifiable parties – the information is in control of or at least accessible by parties who have absolutely no right to it

4. Directed identity – a unique, irrevocable, omnidirectional identifier is being used when a simple unidirectional identifier (eg lunch ticket or library card) would more than adequately do the job.

It's irrelevant how much schools have invested in such systems or how convenient school administrators find them, or that the Information Commissioner's Office soft-pedalled their advice on the matter (in 2008) in relation to the Data Protection Act.  They should all be scrapped and if the need for schools to wade through a few more forms before they use these systems causes them to be scrapped then that's a good outcome from my perspective.

In addition just because school fingerprint vendors have conned them into parting with ridiculous sums of money (in school budget terms) to install these systems, with promises that they are not really storing fingerprints and they can't be recreated, there is no doubt it is possible to recreate the image of a fingerprint from data stored on such systems. Ross, A et al ‘From Template to Image: Reconstructing Fingerprints from Minutiae Points’ IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, No. 4, April 2007 is just one example of how university researchers have reverse engineered these systems. The warning caveat emptor applies emphatically to digital technology systems that buyers don't understand especially when it comes to undermining the civil liberties of our younger generation.
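Ray's fourth point is worth making concrete.  The lunch ticket or library card he mentions is really just a random, revocable token scoped to a single service – it proves an entitlement without identifying a person.  Here is a minimal sketch of that idea (purely illustrative, and not drawn from any real school system; the class and service names are invented):

```python
# Illustrative only: a per-service, revocable token in place of a biometric.
# Nothing here describes a real school system; the names are hypothetical.
import secrets

class ServiceTokenIssuer:
    """Issues unidirectional identifiers scoped to a single service."""

    def __init__(self):
        self._active = {}  # token -> the one service it is valid for

    def issue(self, service: str) -> str:
        # A random token carries no personal data and means nothing
        # outside the service it was issued for.
        token = secrets.token_urlsafe(16)
        self._active[token] = service
        return token

    def is_valid(self, token: str, service: str) -> bool:
        return self._active.get(token) == service

    def revoke(self, token: str) -> None:
        # Unlike a fingerprint, a lost or compromised token can simply
        # be cancelled and a new one issued.
        self._active.pop(token, None)

issuer = ServiceTokenIssuer()
lunch_token = issuer.issue("canteen")
print(issuer.is_valid(lunch_token, "canteen"))  # True: entitlement confirmed
print(issuer.is_valid(lunch_token, "library"))  # False: not linkable across services
issuer.revoke(lunch_token)                      # revocable, unlike a biometric
```

That is the whole point of the fourth law: the identifier does the job and nothing more.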

Broken Laws of Identity lead to system's destruction

Britain's Home Office has posted a remarkable video, showing Immigration Minister Damian Green methodically pulverizing the disk drives that once held the centralized database that was to be connected to the British ID Cards introduced by Tony Blair.  

“What we're doing today is CRUSHING the final remnants of the national identity card scheme – the disks and hard drives that held the information on the national identity register have been wiped and they're crushed and reduced to bits of metal so everyone can be absolutely sure that the identity scheme is absolutely dead and buried.

“This whole experiment of trying to collect huge amounts of private information on everyone in this country – and collecting on the central database – is no more, and it's a first step towards a wider agenda of freedom.  We're publishing the protection of freedoms bill as well, and what this shows is that we want to rebalance the security and freedom of the citizen.  We think that previously we have not had enough emphasis on people's individual freedom and privacy, and we're determined to restore the proper balance on that.”

Readers of Identityblog will recall that the British scheme was exceptional in breaking so many of the Laws of Identity at once.  It flouted the first law – User control and Consent – since citizen participation was mandatory.  It broke the second – Minimal Disclosure for a Constrained Use – since it followed the premise that as much information as possible should be assembled in a central location for whatever uses might arise…  The third law of Justifiable Parties was not addressed given the centralized architecture of the system, in which all departments would have made queries and posted updates to the same database and access could have been extended at the flick of a wrist.  And the fourth law of “Directed Identity” was a clear non-goal, since the whole idea was to use a single identifier to unify all possible information.

Over time opposition to the scheme began to grow and became widespread, even though the Blair and Brown governments claimed their polls showed majority support.  Many well-known technologists and privacy advocates attempted to convince them to consider privacy enhancing technologies and architectures that would be less vulnerable to security and privacy meltdown – but without success.  Beyond the scheme's many technical deficiencies, the social fracturing it created eventually assured its irrelevance as a foundational element for the digital future.

Many say the scheme was an important issue in the last British election.  It certainly appears the change in government has left the ID card scheme in the dust, with politicians of all stripes eager to distance themselves from it.  Damian Green, who worked in television and understands it, does a masterful job of showing what his views are.  His video, posted by the Home Office, seems iconic.

All in all, the fate of the British ID Card and centralized database scheme is exactly what was predicted by the Laws of Identity:

Those of us who work on or with identity systems need to obey the Laws of Identity.  Otherwise, we create a wake of reinforcing side-effects that eventually undermine all resulting technology.  The result is similar to what would happen if civil engineers were to flout the law of gravity.  By following the Laws we can build a unifying identity metasystem that is universally accepted and enduring.

[Thanks to Jerry Fishenden (here and here) for twittering Damian Green's video]

People, meet Facebook HAL…

According to  Irina Slutsky of Ad Age Digital, Facebook is testing the idea of deciding what ads to show you by pigeon-holing you based on your real-time conversations. 

In the past, a user's Facebook advertising would eventually be impacted by what's on her wall and in her stream, but this was a gradual shift based on out-of-band analysis and categorization. 

Now, at least for participants in this test, it will become crystal clear that Facebook is looking at and listening to your activities; making assumptions about who you are and what you want; and using those assumptions to change how you are treated.

Irina writes:

This month — and for the first time — Facebook started to mine real-time conversations to target ads. The delivery model is being tested by only 1% of Facebook users worldwide. On Facebook, that's a focus group 6 million people strong.

The closest Facebook has come to real-time advertising has been with its most recent ad offering, known as sponsored stories, which repost users’ brand interactions as an ad on the side bar. But for the 6 million users involved in this test, any utterance will become fodder for real-time targeted ads.

For example: Users who update their status with “Mmm, I could go for some pizza tonight,” could get an ad or a coupon from Domino's, Papa John's or Pizza Hut.

To be clear, Facebook has been delivering targeted ads based on wall posts and status updates for some time, but never on a real-time basis. In general, users’ posts and updates are collected in an aggregate format, adding them to target audiences based on the data collected over time. Keywords are a small part of that equation, but Facebook says sometimes keywords aren't even used. The company said delivering ads based on user conversations is a complex algorithm continuously perfected and changed. The real aim of this test is to figure out if those kinds of ads can be served at split-second speed, as soon as the user makes a statement that is a match for an ad in the system.

With real-time delivery, the mere mention of having a baby, running a marathon, buying a power drill or wearing high-heeled shoes is transformed into an opportunity to serve immediate ads, expanding the target audience exponentially beyond usual targeting methods such as stated preferences through “likes” or user profiles. Facebook didn't have to create new ads for this test and no particular advertiser has been tapped to participate — the inventory remains as is.

A user may not have liked any soccer pages or indicated that soccer is an interest, but by sharing his trip to the pub for the World Cup, that user is now part of the Adidas target audience. The moment between a potential customer expressing a desire and deciding on how to fulfill that desire is an advertiser sweet spot, and the real-time ad model puts advertisers in front of a user at that very delicate, decisive moment.

“The long-held promise of local is to deliver timely, relevant and measurable ads which drive actions such as commerce, so if Facebook is moving in this direction, it's brilliant,” said Reggie Bradford, CEO of Facebook software and marketing company Vitrue. “This is a massive market shift everyone is pivoting toward, led by services such as Groupon. Facebook has the power of the graph of me and my friends placing them in the position to dominate this medium.” [More here]

This test is important and will reveal a lot.  If the system is accurate and truly real-time, the way it works will become obvious to people.  It will be a simple cause-and-effect experience that leads to a clarity people have not had before around profiling.  This will be good.
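To see why the cause and effect will be so visible, consider a deliberately naive sketch of the kind of real-time keyword matching the article describes.  Facebook's actual algorithm is, by its own account, far more complex and is not public; the inventory and keywords below are invented for illustration:

```python
# A deliberately naive sketch of real-time keyword-to-ad matching.
# Facebook's real algorithm is more complex and not public; this
# inventory and these keywords are invented for illustration.
AD_INVENTORY = {
    "pizza": "Coupon: 20% off at a pizza chain near you",
    "marathon": "Ad: lightweight running shoes",
    "world cup": "Ad: team jerseys on sale",
}

def match_ads(status_update):
    """Return ads whose keywords appear in the user's post."""
    text = status_update.lower()
    return [ad for keyword, ad in AD_INVENTORY.items() if keyword in text]

print(match_ads("Mmm, I could go for some pizza tonight"))
# -> ['Coupon: 20% off at a pizza chain near you']
```

The immediacy is exactly what will make the profiling transparent: you mention pizza, you see pizza.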

However, once the analysis algorithms make mistakes in pigeon-holing users – which is inevitable – they are likely to alienate at least part of the test population, raising their consciousness of the serious potential problems with profiling.  What will that do to their perception of Facebook?

A Facebook that looks more and more like HAL will not be accepted as “your universal internet identity” – as some of the more pathologically shortsighted dabblers in identity claim is already becoming the case.  Like other companies, Facebook has many simultaneous goals, and some of them conflict in fundamental ways.  More than anything else, in the long term, it is these conflicts that will limit Facebook's role as an identity provider.

Netflix stung with privacy lawsuits

Via Archie Reed, this story by Greg Sandoval of ZDNet:

Netflix, the web's top video-rental service, has been accused of violating US privacy laws in five separate lawsuits filed during the past two months, records show.

Each of the five plaintiffs allege that Netflix hangs onto customer information, such as credit card numbers and rental histories, long after subscribers cancel their membership. They claim this violates the Video Privacy Protection Act (VPPA).

Netflix declined to comment.

In a four-page suit filed on Friday, Michael Sevy, a former Netflix subscriber who lives in Michigan, accuses Netflix of violating the VPPA by “collecting, storing and maintaining for an indefinite period of time, the video rental histories of every customer that has ever rented a DVD from Netflix”. Netflix also retains information that “identifies the customer as having requested or obtained specific video materials or services”, according to Sevy's suit.

In a complaint filed 22 February, plaintiff Jason Bernal, a resident of Texas, claimed “Netflix has assumed the role of Big Brother and trampled the privacy rights of its former customers”.

Jeff Milans from Virginia filed the first of the five suits on 26 January. One of his attorneys, Bill Gray, told ZDNet Australia‘s sister site CNET yesterday that the way he knows Netflix is preserving information belonging to customers who have left the company is from Netflix emails. According to Gray, in messages to former subscribers, Netflix writes something similar to “We'd love to have you come back. We've retained all of your video choices”.

Gray said that Netflix uses the customer data to market the rental service, but this is done while risking its customers’ privacy. Someone's choice in rental movies could prove embarrassing, according to Gray, and should hackers ever get access to Netflix's database, that information could be made publicly available.

“We want Netflix to operate in compliance of the law and delete all of this information,” Gray said.

All the plaintiffs filed their complaints in US District Court for the Northern District of California. Each has asked the court for class action status. [More here].

In Europe there has been a lot of discussion about “the Right to be Forgotten” (see, for example, Le droit à l’oubli sur Internet).  The notion is that after some time, information should simply fade away (counteracting digital eternity).  The Right to be Forgotten has to be one of the most important digital rights – not only for social networks, but for the Internet as a whole.

The authors of the Social Network Users’ Bill of Rights have called some variant of this the “Right to Withdraw”.  Whatever words we use, the Right is a far-reaching game-changer – a cure as important as the introduction of antibiotics was in the world of medicine.

I say “cure” because it helps heal problems that shouldn't have been created in the first place. 

For example, Netflix does not need to – and should not – associate our rental patterns with our natural identities (e.g. with us as recognizable citizens).  Nor should any other company that operates in the digital world. 

Instead, following the precepts of minimal disclosure, the patterns should simply be associated with entities who have accounts and the right to rent movies.  The details of billing should not be linked to the details of ordering (this is possible using the new privacy-enhancing technologies).  From our point of view as consumers of these services, there is no reason the linking should be visible to anyone but ourselves.
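To make the idea concrete, here is a rough sketch of that separation: the rental side sees only an opaque account identifier and the viewing history, the billing side sees the person and the payment but not the titles, and only the user holds both halves.  This is an illustration of the principle, not a description of how Netflix or any real service is built:

```python
# Illustrative sketch of minimal disclosure: the rental service keys history
# to an opaque identifier; the billing service knows the person but never
# sees what was watched.  Only the user holds both halves of the link.
import secrets

class RentalService:
    def __init__(self):
        self.history = {}                        # opaque id -> list of titles
    def rent(self, account_id, title):
        self.history.setdefault(account_id, []).append(title)
    def forget(self, account_id):
        self.history.pop(account_id, None)       # the "right to withdraw"

class BillingService:
    def __init__(self):
        self.subscribers = {}                    # name -> payment details
    def charge_subscription(self, name, card):
        self.subscribers[name] = card            # flat fee; no titles ever seen here

rental, billing = RentalService(), BillingService()
my_account_id = secrets.token_hex(8)             # known only to me and the rental side
billing.charge_subscription("Alice Example", "4111…")   # hypothetical subscriber
rental.rent(my_account_id, "Some Film")
rental.forget(my_account_id)                     # history gone when I withdraw
```

Nothing about the business requires the two halves ever to be joined.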

All this requires a wee bit of a paradigm shift, you will say.  And you're right.  Until that happens, we don't have a lot of alternatives other than the Right to be Forgotten.  Especially when, as described in the lawsuits above, we have “chosen to withdraw.”

Incident closed – good support from janrain…

When I connected with janrain to resolve the issue described here, they were more than helpful. In fact, I have to quote them, because this is what companies should be like:

“We certainly test on ie 6,7,8,9, and would love to get your situation smoothed out.” 

The scary part came a little while later…

“The cause is likely to be configuration based on the browser.  Browser security settings should be set to default for testing. Temporarily disable all toolbars and add-ons. Clear caches and cookies (at least for your site domain and rpxnow.com).”

Oh yeah.  I've heard that one before.  So I was a bit skeptical. 

On the other hand, I happened to be in a crowd and asked some people nearby with Windows 7 to see what happened to them when they tried to log in.  It was one of those moments.  Everything worked perfectly for everyone but me… 

Gathering my courage, I pressed the dreaded configuration reset button as I had been told to do.

Then I re-enabled all my add-ons as janrain suggested.  And… everything worked as advertised.

So there you go.  Possibly I did something to my IE config at some point – I do a lot of experimenting.  Conclusion: if any of you run into the same problem, please let me know.  Until then, let's consider the incident closed.

Six new authentication methods for Identityblog

Back in March 2006, when Information Cards were unknown and untested, it became obvious that the best way for me to understand the issues would be to put Information Cards onto Identityblog. 

I wrote the code in PHP, and a few people started trying out Information Cards.  Since I was being killed by spam at the time, I decided to try an experiment:  make it mandatory to use an Information Card to leave a comment.  It was worth a try.  More people might check out InfoCards.  And presto, my spam problems would go away.

So on March 18th 2006 I posted More hardy pioneers try out InfoCard, showing the first few people to give it all a whirl.

At first I thought my draconian “InfoCard-Only” approach would get a lot of people's hackles up and only last a few weeks.  But over time more and more people seemed to be subscribing – probably because Identityblog was one of the few sites that actually used InfoCards in production.  And I never had spam again.

How many people joined using InfoCards?  Today I looked at my user list (see the screenshot below with PII fuzzed out).  The answer: 2958 people successfully subscribed and passed email verification.  There were then over 23,000 successful audited logins.  Not very many for a commercial site, but not bad for a technical blog.

Of course, as we all know, the powers at the large commercial sites have preferred the  “NASCAR” approach of presenting a bunch of different buttons that redirect the user to, uh, something-or-other-that-can-be-phished, ahem, in spite of the privacy and security problems.  This part of the conversation will go on for some time, since these problems will become progressively more widespread as NASCAR gains popularity and the criminally inclined tune in to its potential as a gold mine… But that discussion is for another day. 

Meanwhile, I want to get my hands dirty and understand all the implications of the NASCAR-style approach.  So recently I subscribed to a nifty janrain service that offers a whole array of login methods.  I then integrated their stuff into Identityblog.  I promise, Scout's Honor, not to do man-in-the-middle-attacks or scrape your credentials, even though I probably could if I were so inclined.
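For anyone curious about the mechanics: widgets of this kind generally send the user back to your site with a one-time token, which your server then exchanges for the user's profile in a server-to-server call.  The sketch below shows that pattern in outline; the endpoint and field names are my recollection of janrain's API and may differ from the current service, so treat them as assumptions rather than documentation:

```python
# Sketch of the token-callback pattern used by NASCAR-style login widgets.
# The endpoint and field names are recalled from janrain's RPX API and may
# differ from the current service; treat them as assumptions.
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder, not a real key

def auth_info(one_time_token):
    """Exchange the token the widget posts back for the user's profile."""
    data = urllib.parse.urlencode({
        "apiKey": API_KEY,
        "token": one_time_token,
        "format": "json",
    }).encode()
    with urllib.request.urlopen("https://rpxnow.com/api/v2/auth_info", data) as resp:
        result = json.load(resp)
    if result.get("stat") != "ok":
        raise ValueError("authentication failed")
    # The stable identifier in the profile is what a blog like this one
    # would key its local user record on.
    return result["profile"]

# profile = auth_info(token_from_widget_callback)
# user_id = profile["identifier"]
```

Note that everything interesting happens at the identity provider the user is redirected to – which is exactly where the phishing worry comes from.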

From now on, when you need to authenticate at Identityblog, you will see a NASCAR-style login symbol.  See, for example, the LOG IN option at the top of this page. 

If you are not logged in and you want to leave a comment you will see:

[screenshot]

Click on the string of icons and you get something like this:

[screenshot]

Because many people continue to use my site to try out Information Cards, I've supplemented the janrain widget experience with the Pamelaware Information Card Option (it was pretty easy to make them coexist, and it leaves me with at least one unphishable alternative).  This will also benefit people who don't like the idea of linking their identifiers all over the web.  I expect it will help researchers and students too.

One warning:  Janrain's otherwise polished implementation doesn't work properly with Internet Explorer – it leaves a spurious “Cross Domain Receiver Page” lurking on your desktop.  [Update – this was apparently my problem: see here]  Once I figure out how to contact them (not evident), I'll ask janrain if and when they're going to fix this.  Anyway, the system works – just a bit messy because you have to manually close the stranded empty page.  The problem doesn't appear in Firefox. 

It has already been a riot looking into the new technology and working through the implications.  I'll talk about this as we go forward.

The voting so far

The people working on a Social Network Users’ Bill of Rights have done another interesting eThing:  rather than requiring people to express support or rejection holus-bolus they've decided to let us vote on the individual rights proposed.  Further, Jon Pincus has shared the early results on his Liminal States blog.  He writes:

The SXSW panel got a decent amount of attention, including Helen A. S. Popkin’s article Vote on your ‘Social Network Users’ Bill of Rights’ on MSNBC’s Technolog, Kim Cameron’s post on the Identity Weblog, and a brief link from Mark Sullivan of PC World. Here’s the voting so far:

  1. 41 yes 0 no Honesty: Honor your privacy policy and terms of service
  2. 41 yes 0 no Clarity: Make sure that policies, terms of service, and settings are easy to find and understand
  3. 41 yes 0 no Freedom of speech: Do not delete or modify my data without a clear policy and justification
  4. 33 yes 4 no Empowerment : Support assistive technologies and universal accessibility
  5. 35 yes 2 no Self-protection: Support privacy-enhancing technologies
  6. 37 yes 3 no Data minimization: Minimize the information I am required to provide and share with others
  7. 39 yes 1 no Control: Let me control my data, and don’t facilitate sharing it unless I agree first
  8. 39 yes 1 no Predictability: Obtain my prior consent before significantly changing who can see my data.
  9. 38 yes 0 no Data portability: Make it easy for me to obtain a copy of my data
  10. 39 yes 0 no Protection: Treat my data as securely as your own confidential data unless I choose to share it, and notify me if it is compromised
  11. 36 yes 2 no Right to know: Show me how you are using my data and allow me to see who and what has access to it.
  12. 24 yes 13 no Right to self-define: Let me create more than one identity and use pseudonyms. Do not link them without my permission.
  13. 35 yes 1 no Right to appeal: Allow me to appeal punitive actions
  14. 37 yes 1 no Right to withdraw: Allow me to delete my account, and remove my data

So it’s in general overwhelmingly positive: five rights are unanimous, and another eight at 89% or higher.  The one exception: the right to self-define, currently at about 65%.  As I said in a comment on the earlier thread, this right is vital for people like whistleblowers, domestic violence victims, political dissidents, closeted LGBTQs.   I wonder whether the large minority of people who don’t think it matters are thinking about it from those perspectives.

The voting continues at http://SNUBillOfRights.com.  Please voice your opinion!

The voting on individual rights is still light.  Right 12 clearly stands out as one which needs discussion.

I expect most people just take a quick look at the bill as a whole, say “Yeah, that makes sense” and move on.  The “pro” and “against” pages at Facebook ran about 500 to 1 in favor of the Bill when I looked a few days ago.  In this sense the Bill is certainly right on track.

But the individual rights need to be examined very carefully by at least some of us.  I'll return to Jon's comments on right 12 when I can make some time to set out my ideas.

Social Network Users’ Bill of Rights

The  “Social Network Users’ Bill of Rights” panel at the South by Southwest Interactive (SXSW) conference last Friday had something that most panels lack:  an outcome.  The goal was to get the SXSWi community to cast their votes and help to shape a bill of rights that would reflect the participation of many thousands of people using the social networks.

The idea of getting broad communities to vote on this is pretty interesting.  Panelist Lisa Borodkin wrote:

There is no good way currently of collecting hard, empirical, quantitative data about the preferences of a large number of social network users. There is a need to have user input into the formation of social norms, because courts interpreting values such as “expectations of privacy” often look to social network sites policies and practices.

Where did the Bill of Rights come from?  The document was written collaboratively over four days at last year's Computers, Freedom and Privacy Conference and since the final version was published has been collecting votes through pages like this one.  Voting is open until June 15, 2011 – the “anniversary of the date the U.S. government asked Twitter to delay its scheduled server maintenance as a critical communication tool for use in the 2009 Iran elections”.  And guess what?  That date also coincides with this year's Computers, Freedom and Privacy Conference.

The Bill – admirably straightforward and aimed at real people – reads as follows:

We the users expect social network sites to provide us the following rights in their Terms of Service, Privacy Policies, and implementations of their system:

  1. Honesty: Honor your privacy policy and terms of service
  2. Clarity: Make sure that policies, terms of service, and settings are easy to find and understand
  3. Freedom of speech: Do not delete or modify my data without a clear policy and justification
  4. Empowerment : Support assistive technologies and universal accessibility
  5. Self-protection: Support privacy-enhancing technologies
  6. Data minimization: Minimize the information I am required to provide and share with others
  7. Control: Let me control my data, and don’t facilitate sharing it unless I agree first
  8. Predictability: Obtain my prior consent before significantly changing who can see my data.
  9. Data portability: Make it easy for me to obtain a copy of my data
  10. Protection: Treat my data as securely as your own confidential data unless I choose to share it, and notify me if it is compromised
  11. Right to know: Show me how you are using my data and allow me to see who and what has access to it.
  12. Right to self-define: Let me create more than one identity and use pseudonyms. Do not link them without my permission.
  13. Right to appeal: Allow me to appeal punitive actions
  14. Right to withdraw: Allow me to delete my account, and remove my data

It will be interesting to see whether social networking sites engage with this initiative.  Sixestate reported some time ago that Facebook objected to requiring support for pseudonyms. 

While I support all other aspects of the Bill, I too think it is a mistake to mandate that ALL communities MUST support pseudonymity or be in violation of the Bill…  In all other respects, the Bill is consistent with the Laws of Identity.  However the Laws envisaged a continuum of approaches to identification, and argued that all have their place for different purposes.  I think this is much closer to the mark and Right 12 should be amended.  The fundamental point is that we must have the RIGHT to form and participate in communities that DO choose to support pseudonymity.  This doesn't mean we ONLY have the right to participate in such communities.

Where do the organizers want to go next? Jon Pincus writes:

Here’s a few ideas:

  • get social network sites to adopt the concept of a Bill of Rights for their users and as many of the individual rights as they’re comfortable with.  Some of the specific rights are contentious — for example, Facebook objected to the pseudonymity right in their response last summer.  But more positively, Facebook’s current “user rights and responsibilities” document already covers many of these rights, and it would be great to have even partial support from them.  And sites like Twitter, tribe.net, and emerging companies that are trying to emphasize different values may be willing to go even farther.
  • work with politicians in the US and elsewhere who are looking at protecting privacy online, and encourage them to adopt the bill of rights framework and our specific language.  There’s a bit of “carrot and stick” combining this and the previous bullet: the threat of legislation is great both for encouraging self-regulation and getting startups to look for a potential future strategic advantage by adopting strong user rights from the beginning.
  • encourage broad participation to highlight where there’s consensus.  Currently, there are a couple of ways to weigh in: the Social Network Users’ Bill of Rights site allows you to vote on the individual rights, and you can also vote for or against the entire bill via Twitter.  It would be great to have additional voting on other social network sites like Facebook, MySpace, Reddit to give the citizens of those “countries” a voice.
  • collaborate with groups like the Global Network Initiative, the Internet Rights and Principles Coalition, the Social Charter, and the Association for Progressive Communications that support similar principles
  • follow Gabrielle Pohl’s lead and translate into multiple languages to build awareness globally.
  • take a more active approach with media outreach to call more attention to the campaign.  #privchat, the weekly Twitter chat sponsored by the Center for Democracy and Technology and Privacy Camp, is a natural hub for the discussion.

Meanwhile, here are some ways you can express your views:

  • vote on the individual rights at http://SNUBillOfRights.com
  • vote for or against the entire bill via Twitter

Touch2Id Testimonials

Last summer I wrote about the British outfit called touch2id.  They had developed a system that sounded pretty horrible when I first heard about it – a scheme to control underage drinking by using people's fingerprints rather than getting them to present identity cards.  I assumed it would be another of the harebrained biometric schemes I had come across in the past – like this one, or this, or these.

But no.  The approach was completely different.  Not only was the system popular with its early adopters, but its developers had really thought through the privacy issues.   There was no database of fingerprints, no record linking a fingerprint to a natural person.  The system was truly one of “minimal disclosure” and privacy by design:

  • To register, people presented their ID documents and, once verified, a template of their fingerprint was stored on a Touch2Id card that was immediately given to them.  The fingerprint was NOT stored in a database.
  • When people with the cards wanted to have a drink, they would wave their card over a machine similar to a credit card reader, and press their finger on the machine.  If their finger matched the template on their card, the light came on indicating they were of drinking age and they could be served.

A single claim:  “Able to drink”.  Here we had well-designed technology offering an experience that the people using it liked way better than the current “carding” process – and which was much more protective of their privacy.  “Privacy by design” was delivering tangible benefits.  Merchants didn’t have to worry about making mistakes.  Young people didn’t have to worry about being discriminated against (or being embarrassed) just because they “looked young” or got a haircut.  No identifying information was being released to the merchants.  No name, age or photo was stored on the cards.  The movements of young people were not tracked.  And so on.
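The essence of the design is easy to sketch: the match happens against the template stored on the card in the holder's hand, and the only thing the merchant's reader learns is a single yes-or-no claim.  The code below is my own illustration of that flow, not touch2id's implementation; the matching function is a stand-in for a real biometric comparison:

```python
# My own illustration of the touch2id flow, not their implementation.
# match_template() stands in for a real (fuzzy) biometric matcher, and
# Card stands in for the template held only on the physical card.
from dataclasses import dataclass

@dataclass
class Card:
    fingerprint_template: bytes   # lives only on the card, never in a database

def match_template(template, live_scan):
    # Stand-in: real fingerprint matching is a fuzzy comparison, not equality.
    return template == live_scan

def check_age_claim(card, live_scan):
    """The reader learns one claim – 'able to drink' – and nothing else.

    Holding a valid card at all means age was verified at registration."""
    return match_template(card.fingerprint_template, live_scan)

card = Card(fingerprint_template=b"template-enrolled-at-registration")
print(check_age_claim(card, b"template-enrolled-at-registration"))  # True: serve the drink
print(check_age_claim(card, b"someone-else's-finger"))              # False: no identity revealed either way
```

Registration is the only step that ever sees an identity document, and even that leaves nothing behind in a database.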

Today touch2id published Testimonials – an impressive summary of their project consisting of reviews by individuals involved.  It is clear that those who liked it loved it.  It would be interesting to find out to what extent these rave reviews are typical of those who tried the system.

At any rate, it's instructive to compare the positive outcome of this pilot with all the biometric proposals that have crashed onto the shoals of privacy invasion.