Good news and bad news from Delaware Lawmakers

Reading the following SFGate story was a real rollercoaster ride: 

DOVER, Del. (AP) — State lawmakers have given final approval to a bill prohibiting universities and colleges in Delaware from requiring that students or applicants for enrollment provide their social networking login information.

The bill, which unanimously passed the Senate shortly after midnight Saturday, also prohibits schools and universities from requesting that a student or applicant log onto a social networking site so that school officials can access the site profile or account.

The bill includes exemptions for investigations by police agencies or a school's public safety department if criminal activity is suspected.

Lawmakers approved the bill after deleting an amendment that expanded the scope of its privacy protections to elementary and secondary school students.

First there was the realization that if lawmakers had to draft this law, it meant universities and colleges were already strong-arming students into giving up their social networking credentials.  This descent into hell knocked my breath away. 

But I groped my way back from the burning sulfur since the new bill seemed to show a modicum of common sense. 

Until finally we learn that younger children won't be afforded the same protections…   Can teachers and principals actually bully youngsters into logging in to Facebook so school officials can access their accounts?  Can they make kids hand over their passwords?  What are we teaching our young people about their identity?

Why oh why oh why oh? 

 

24-year-old student lights a match: Europe versus Facebook

If you are interested in social networks, don't miss the slick video about Max Schrems’ David and Goliath struggle with Facebook over the way they are treating his personal information.  Click on the red “CC” in the lower right-hand corner to see the English subtitles.

Max is a 24-year-old law student from Vienna with a flair for the interview and plenty of smarts about both technology and legal issues.  In Europe, entities holding data about individuals are required to make it available to them on request.  That's how Max ended up with a personalized CD from Facebook, which he printed out as a stack of paper more than a thousand pages thick (see image below). Analysing it, he came to the conclusion that Facebook is engineered to break many of the requirements of European data protection.  He argues that the record Facebook provided catches the company in flagrante delicto.  

The logical next step was a series of 22 lucid and well-reasoned complaints that he submitted to the Irish Data Protection Commissioner (Facebook states that European users have a relationship with its Irish subsidiary).  This was followed by another perfectly executed move:  setting up a web site called Europe versus Facebook that does everything right in terms of using web technology to mount a campaign against a commercial enterprise that depends on its public relations to succeed.

Europe versus Facebook, which seems eventually to have become an organization, then opened its own YouTube channel.  As part of the documentation, they publicised the procedure Max used to get his personal CD.  Somehow this recipe found its way to reddit, where it ended up on a couple of top-ten lists.  So many people applied for their own CDs that Facebook had to send out an email indicating it was unable to comply with the requirement that it provide the information within a 40-day period.

As if that weren't enough, there's more.  As Max studied what had been revealed to him, he noticed that important information was missing and asked for the rest of it.  The response ratchets the battle up one more notch: 

Dear Mr. Schrems:

We refer to our previous correspondence and in particular your subject access request dated July 11, 2011 (the Request).

To date, we have disclosed all personal data to which you are entitled pursuant to Section 4 of the Irish Data Protection Acts 1988 and 2003 (the Acts).

Please note that certain categories of personal data are exempted from subject access requests.
Pursuant to Section 4(9) of the Acts, personal data which is impossible to furnish or which can only be furnished after disproportionate effort is exempt from the scope of a subject access request. We have not furnished personal data which cannot be extracted from our platform in the absence of disproportionate effort.

Section 4(12) of the Acts carves out an exception to subject access requests where the disclosures in response would adversely affect trade secrets or intellectual property. We have not provided any information to you which is a trade secret or intellectual property of Facebook Ireland Limited or its licensors.

Please be aware that we have complied with your subject access request, and that we are not required to comply with any future similar requests, unless, in our opinion, a reasonable period of time has elapsed.

Thanks for contacting Facebook,
Facebook User Operations Data Access Request Team

What a spotlight

This throws intense light on some amazingly important issues. 

For example, as I wrote here (and Max describes here), Facebook's “Like” button collects information every time an Internet user views a page containing the button, and a Facebook cookie associates that page with all the other pages with “Like” buttons visited by the user in the last 3 months. 

If you use Facebook, records of all these visits are linked, through cookies, to your Facebook profile – even if you never click the “like” button.  These long lists of pages visited, tied in Facebook's systems to your “Real Name identity”, were not included on Max's CD. 
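To make the mechanism concrete, here is a minimal sketch of how any embedded third-party button can build a browsing history of this kind. It is not Facebook's actual code, and all names are invented: because the button is a resource loaded from the tracker's server, every page view sends the host page's URL (via the Referer header) together with the visitor's cookie, whether or not the button is ever clicked.

```python
# Hypothetical sketch of third-party-button tracking. Every request for the
# button resource carries the visitor's cookie and the embedding page's URL;
# the server only has to log the pair and prune to a retention window.
from collections import defaultdict
from datetime import datetime, timedelta

RETENTION = timedelta(days=90)  # the roughly three-month window described above


class ButtonTracker:
    def __init__(self):
        # cookie_id -> list of (timestamp, page_url) pairs
        self.visits = defaultdict(list)

    def record(self, cookie_id, referer_url, now=None):
        """Called on every fetch of the button image/script -- no click needed."""
        now = now or datetime.now()
        self.visits[cookie_id].append((now, referer_url))
        # Drop entries older than the retention window.
        cutoff = now - RETENTION
        self.visits[cookie_id] = [(t, u) for t, u in self.visits[cookie_id]
                                  if t >= cutoff]

    def history(self, cookie_id):
        """The pages this cookie (and hence, if linked to a profile, this
        person) has visited within the retention window."""
        return [url for _, url in self.visits[cookie_id]]


tracker = ButtonTracker()
tracker.record("cookie-abc", "https://example.com/health-forum")
tracker.record("cookie-abc", "https://example.com/jobs")
print(tracker.history("cookie-abc"))  # both pages, tied to one cookie
```

If the cookie is ever associated with a logged-in account, the whole history becomes attached to that account retroactively.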

Is Facebook prepared to argue that it need not reveal this stored information about your personal data because doing so would adversely affect its “intellectual property”? 

It will be absolutely amazing to watch how this issue plays out, and see just what someone with Max's media talent is able to do with the answers once they become public. 

The result may well impact the whole industry for a long time to come.

Meanwhile, students of these matters would do well to look at Max's many complaints:

no | date | topic | status | files
01 | 18-AUG-2011 | Pokes. Pokes are kept even after the user “removes” them. | Filed with the Irish DPC | Complaint (PDF), Attachments (ZIP)
02 | 18-AUG-2011 | Shadow Profiles. Facebook is collecting data about people without their knowledge. This information is used to substitute existing profiles and to create profiles of non-users. | Filed with the Irish DPC | Complaint (PDF), Attachments (ZIP)
03 | 18-AUG-2011 | Tagging. Tags are used without the specific consent of the user. Users have to “untag” themselves (opt-out). Info: Facebook announced changes. | Filed with the Irish DPC | Complaint (PDF), Attachments (ZIP)
04 | 18-AUG-2011 | Synchronizing. Facebook is gathering personal data, e.g. via its iPhone app or the “friend finder”. This data is used by Facebook without the consent of the data subjects. | Filed with the Irish DPC | Complaint (PDF), Attachments (ZIP)
05 | 18-AUG-2011 | Deleted Postings. Postings that have been deleted showed up in the set of data received from Facebook. | Filed with the Irish DPC | Complaint (PDF), Attachments (ZIP)
06 | 18-AUG-2011 | Postings on Other Users’ Pages. Users cannot see the settings under which content they post on others’ pages is distributed. | Filed with the Irish DPC | Complaint (PDF), Attachments (ZIP)
07 | 18-AUG-2011 | Messages. Messages (incl. chat messages) are stored by Facebook even after the user has “deleted” them. This means that all direct communication on Facebook can never be deleted. | Filed with the Irish DPC | Complaint (PDF), Attachments (ZIP)
08 | 18-AUG-2011 | Privacy Policy and Consent. The privacy policy is vague, unclear and contradictory. If European and Irish standards are applied, consent to the privacy policy is not valid. | Filed with the Irish DPC | Complaint (PDF), Attachments (ZIP)
09 | 18-AUG-2011 | Face Recognition. The new face recognition feature is a disproportionate violation of the users’ right to privacy. Proper information and unambiguous consent of the users are missing. | Filed with the Irish DPC | Complaint (PDF), Attachments (ZIP)
10 | 18-AUG-2011 | Access Request. Access requests have not been answered fully. Many categories of information are missing. | Filed with the Irish DPC | Complaint (PDF), Attachments (ZIP)
11 | 18-AUG-2011 | Deleted Tags. Tags that were “removed” by the user are only deactivated, but saved by Facebook. | Filed with the Irish DPC | Complaint (PDF), Attachments (ZIP)
12 | 18-AUG-2011 | Data Security. In its terms, Facebook says that it does not guarantee any level of data security. | Filed with the Irish DPC | Complaint (PDF), Attachments (ZIP)
13 | 18-AUG-2011 | Applications. Applications of “friends” can access data of the user. There is no guarantee that these applications follow European privacy standards. | Filed with the Irish DPC | Complaint (PDF), Attachments (ZIP)
14 | 18-AUG-2011 | Deleted Friends. All removed friends are stored by Facebook. | Filed with the Irish DPC | Complaint (PDF), Attachments (ZIP)
15 | 18-AUG-2011 | Excessive Processing of Data. Facebook hosts enormous amounts of personal data and processes all of it for its own purposes. Facebook seems to be a prime example of illegal “excessive processing”. | Filed with the Irish DPC | Complaint (PDF), Attachments (ZIP)
16 | 18-AUG-2011 | Opt-Out. Facebook runs an opt-out system instead of the opt-in system required by European law. | Filed with the Irish DPC | Complaint (PDF), Attachments (ZIP)
– | 24-AUG-2011 | Letter from the Irish DPC. | – | Letter (PDF)
– | 15-SEPT-2011 | Letter to the Irish DPC concerning the new privacy policy and new settings on Facebook. | – | Letter (PDF)
17 | 19-SEPT-2011 | Like Button. The Like Button creates extended user data that can be used to track users all over the internet. There is no legitimate purpose for the creation of this data, and users have not consented to its use. | Filed with the Irish DPC | Complaint (PDF), Attachments (ZIP)
18 | 19-SEPT-2011 | Obligations as Processor. Facebook has certain obligations as a provider of a “cloud service” (e.g. not using third-party data for its own purposes, or only processing data when instructed to do so by the user). | Filed with the Irish DPC | Complaint (PDF), Attachments (ZIP)
19 | 19-SEPT-2011 | Picture Privacy Settings. The privacy settings only regulate who can see the link to a picture. The picture itself is “public” on the internet. This makes it easy to circumvent the settings. | Filed with the Irish DPC | Complaint (PDF), Attachments (ZIP)
20 | 19-SEPT-2011 | Deleted Pictures. Facebook only deletes the link to pictures. The pictures themselves remain public on the internet for a certain period of time (more than 32 hours). | Filed with the Irish DPC | Complaint (PDF), Attachments (ZIP)
21 | 19-SEPT-2011 | Groups. Users can be added to groups without their consent, and may end up in groups that lead others to false impressions about them. | Filed with the Irish DPC | Complaint (PDF), Attachments (ZIP)
22 | 19-SEPT-2011 | New Policies. The policies are changed very frequently; users are not properly informed and are not asked to consent to the new policies. | Filed with the Irish DPC | Complaint (PDF), Attachments (ZIP)

 

Who is harmed by a “Real Names” policy?

Skud at Geek Feminism Blog has created a wiki documenting work she and her colleagues are doing to “draft a comprehensive list” of those who would be harmed by a policy banning pseudonymity and requiring “real names”.

The result is impressive.  The rigour Skud and colleagues have applied to their quest has produced an information payload that is both illuminating and touching.

Those of us working on identity technology have to internalize the lessons here.  Over-identification is ALWAYS wrong.  But beyond that, there are people who are especially vulnerable to it.  They have to be treated as first-class citizens with clear rights, and we need to figure out how to protect them.  This goes beyond what we conventionally think of as privacy concerns (although perhaps it sheds light on the true nature of what privacy is – I'm still learning).

Often people argue in favor of “Real Names” in order to achieve accountability.  The fact is that technology offers us other ways to achieve accountability.  By leveraging the properties of minimal disclosure technology, we can allow people to remain anonymous and yet bar them from given environments if their behavior gets sufficiently anti-social.
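One way to make the accountability-without-"Real Names" point concrete: a user can derive a stable pairwise pseudonym per site from a secret only they (or their identity provider) hold. The site can ban the pseudonym, so the user stays barred there, but pseudonyms from different sites cannot be linked to each other or to a legal name. This is only an illustrative sketch: real minimal-disclosure systems use stronger cryptography than a bare HMAC, and every name below is invented.

```python
# Hypothetical sketch: per-site pairwise pseudonyms. The same user always
# presents the same identifier to a given site (so bans stick), but identifiers
# at different sites are unlinkable and reveal nothing about a legal name.
import hashlib
import hmac


def pairwise_pseudonym(master_secret: bytes, site_id: str) -> str:
    """Derive a stable, site-specific pseudonym from the user's secret."""
    return hmac.new(master_secret, site_id.encode(), hashlib.sha256).hexdigest()


secret = b"user-master-secret"  # known only to the user / identity provider

alice_at_forum = pairwise_pseudonym(secret, "forum.example")
alice_at_shop = pairwise_pseudonym(secret, "shop.example")

# Same user, same site: stable -- so the forum can enforce a ban.
assert alice_at_forum == pairwise_pseudonym(secret, "forum.example")
# Same user, different sites: the identifiers cannot be correlated.
assert alice_at_forum != alice_at_shop

banned = {alice_at_forum}
print("forum access denied:", alice_at_forum in banned)
```

The design point is that accountability attaches to behavior at one site, not to a global legal identity.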

But enough editorializing.  Here's Skud's intro.  Just remember that in this case the real enlightenment is in the details, not the summary.

This page lists groups of people who are disadvantaged by any policy which bans Pseudonymity and requires so-called “Real names” (more properly, legal names).

This is an attempt to create a comprehensive list of groups of people who are affected by such policies.

The cost to these people can be vast, including:

  • harassment, both online and offline
  • discrimination in employment, provision of services, etc.
  • actual physical danger of bullying, hate crime, etc.
  • arrest, imprisonment, or execution in some jurisdictions
  • economic harm such as job loss, loss of professional reputation, etc.
  • social costs of not being able to interact with friends and colleagues
  • possible (temporary) loss of access to their data if their account is suspended or terminated

The groups of people who use pseudonyms, or want to use pseudonyms, are not a small minority (some of the classes of people who can benefit from pseudonyms constitute up to 50% of the total population, and many of the others are classes of people that almost everyone knows). However, their needs are often ignored by the relatively privileged designers and policy-makers who want people to use their real/legal names.

Wait a minute.  Just got a note from the I Can't Stop Editorializing Department: the very wiki page that brings us Skud's analysis contains a Facebook “Like” button.  It might be worth removing it given that Facebook requires “Real Names”, and then transmits the URL of any page with a “Like” button to Facebook so it can be associated with the user's “Real Name” – whether or not they click on the button or are logged into Facebook.

Head over to the Office of Inadequate Security

First of all, I have to refer readers to the Office of Inadequate Security, apparently operated by databreaches.net. I suggest heading over there pretty quickly too – the office is undoubtedly going to be so busy you'll have to line up as time goes on.

So far it looks like the go-to place for info on breaches – it even has a twitter feed for breach junkies.

Recently the Office published an account that raises a lot of questions:

I just read a breach disclosure to the New Hampshire Attorney General’s Office with accompanying notification letters to those affected that impressed me favorably. But first, to the breach itself:

StudentCity.com, a site that allows students to book trips for school vacation breaks, suffered a breach in their system that they learned about on June 9 after they started getting reports of credit card fraud from customers. An FAQ about the breach, posted on www.myidexperts.com explains:

StudentCity first became concerned there could be an issue on June 9, 2011, when we received reports of customers travelling together who had reported issues with their credit and debit cards. Because this seemed to be with 2011 groups, we initially thought it was a hotel or vendor used in conjunction with 2011 tours. We then became aware of an account that was 2012 passengers on the same day who were all impacted. This is when we became highly concerned. Although our processing company could find no issue, we immediately notified customers about the incident via email, contacted federal authorities and immediately began a forensic investigation.

According to the report to New Hampshire, where 266 residents were affected, the compromised data included students’ credit card numbers, passport numbers, and names. The FAQ, however, indicates that dates of birth were also involved.

Frustratingly for StudentCity, the credit card data had been encrypted but their investigation revealed that the encryption had broken in some cases. In the FAQ, they explain:

The credit card information was encrypted, but the encryption appears to have been decoded by the hackers. It appears they were able to write a script to decode some information for some customers and most or all for others.

The letter to the NH AG’s office, written by their lawyers on July 1, is wonderfully plain and clear in terms of what happened and what steps StudentCity promptly took to address the breach and prevent future breaches, but it was the tailored letters sent to those affected on July 8 that really impressed me for their plain language, recognition of concerns, active encouragement of the recipients to take immediate steps to protect themselves, and for the utterly human tone of the correspondence.

Kudos to StudentCity.com and their law firm, Nelson Mullins Riley & Scarborough, LLP, for providing an exemplar of a good notification.

It would be great if StudentCity would bring in some security experts to audit the way encryption was done and report on what went wrong. I don't say this to be punitive; I agree that StudentCity deserves credit for at least attempting to employ encryption. But the outcome points to the fact that we need programming frameworks that make it easy to get truly robust encryption and key protection – and to deploy it in a minimal disclosure architecture that keeps secrets off-line. If StudentCity goes the extra mile in helping others learn from its unfortunate experience, I'll certainly be a supporter.
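One property such a framework should make unavoidable is authenticating stored ciphertext, so that tampered or partially broken records are rejected rather than silently "decoded". The sketch below illustrates only that integrity-check pattern with Python's standard library; it is not StudentCity's stack, the cipher itself is out of scope, and production systems should use a vetted library's authenticated-encryption mode rather than hand-rolled code.

```python
# Illustrative sketch of the "authenticate your ciphertext" discipline:
# an HMAC tag is bound to each stored record, and verification fails closed.
import hashlib
import hmac
import os

MAC_KEY = os.urandom(32)  # in practice: held off-line in a key-management system
TAG_LEN = 32              # SHA-256 digest size


def seal(ciphertext: bytes) -> bytes:
    """Append an HMAC tag so any storage-layer tampering is detectable."""
    tag = hmac.new(MAC_KEY, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag


def open_sealed(blob: bytes) -> bytes:
    """Return the ciphertext only if its tag verifies; otherwise refuse."""
    ciphertext, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
    expected = hmac.new(MAC_KEY, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed -- refuse to use this record")
    return ciphertext


blob = seal(b"<encrypted card number>")
assert open_sealed(blob) == b"<encrypted card number>"

tampered = b"X" + blob[1:]  # attacker modifies a single byte
try:
    open_sealed(tampered)
except ValueError:
    print("tampering detected")
```

With this discipline, an attacker who partially breaks the encryption still cannot feed modified records back through the system unnoticed.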

The Idiot's Guide to Why Voicemail Hacking is a Crime

Pangloss sent me reeling recently with her statement that “in the wake of the amazing News of the World revelations, there does seem to be some public interest in a quick note on why there is (some) controversy around whether hacking messages in someone's voicemail is a crime.”

What?  Outside Britain I imagine most of us have simply assumed that breaking into peoples’ voicemails MUST be illegal.   So Pangloss's excellent summary of the situation – I share just enough to reveal the issues – is a suitable slap in the face of our naivete:

The first relevant provision is RIPA (the Regulation of Investigatory Powers Act 2000), which provides that interception of communications without the consent of both ends of the communication, or some other authorization such as a police warrant, is criminal in principle. The complications arise from s 2(2), which provides that:

“….a person intercepts a communication in the course of its transmission by
means of a telecommunication system if, and only if … (he makes) …some or all of the contents of the communication available, while being transmitted, to a person other than the sender or intended recipient of the communication”. [my itals]

Section 2(4) states that an “interception of a communication” has also to be “in the course of its transmission” by any public or private telecommunications system. [my itals]

The argument that seems to have been made to the DPP, Keir Starmer, in October 2010 by QC David Perry is that voicemail has already been transmitted and is therefore no longer “in the course of its transmission.” Therefore a RIPA s 1 interception offence would not stand up. The DPP stressed in a letter to the Guardian in March 2011 that this interpretation was (a) specific to the cases of Goodman and Mulcaire (yes, the same Goodman who's just been re-arrested and indeed went to jail) and (b) not conclusive, as a court would have to rule on it.

We do not know the exact terms of the advice from counsel as (according to advice given to the HC in November 2009) it was delivered in oral form only. There are two possible interpretations of even what we know. One is that messages left on voicemail are “in transmission” till read. Another is that even when they are stored on the voicemail server unread, they have completed transmission, and thus accessing them would not be “interception”.

Very few people, I think, would view the latter interpretation as plausible, but the former seems to have carried weight with the prosecution authorities. In the case of Milly Dowler, if (as seems likely) voicemails were hacked after she was already deceased, there may have been messages unread, and so a prosecution would be appropriate under RIPA without worrying about the advice from counsel. In many other cases, e.g. involving celebrities, hacking may have been of already-listened-to voicemails. What is the law there?

When does a message to voicemail cease to be “in the course of transmission”? Chris Pounder pointed out in April 2011 that we also have to look at s 2(7) of RIPA which says

“(7) For the purposes of this section the times while a communication is being transmitted by means of a telecommunication system shall be taken to include any time when the system by means of which the communication is being, or has been, transmitted is used for storing it in a manner that enables the intended recipient to collect it or otherwise to have access to it.”

A common-sense interpretation of this, it seems to me (and to Chris Pounder), would be that messages stored on voicemail are deemed to remain “in the course of transmission” – and hence capable of generating a criminal offence when hacked – because they are being stored on the system for later access (which might include re-listening to already-played messages).

This seems rather thoroughly to contradict the well-known interpretation offered during the debates in the HL over RIPA by Lord Bassam, that the transmission of a voice message or email was analogous to a letter being delivered to a house: there, transmission ended when the letter hit the doormat.

Fascinating issues.  And that's just the beginning.  For the full story, continue here.

Privacy Bill of Rights establishes device identifiers as PII

In my view, the Commercial Privacy Bill of Rights drafted by US Senators McCain and Kerry would significantly strengthen the identity fabric of the Internet through its proposal that “a unique persistent identifier associated with an individual or a networked device used by such an individual” must be treated as personally identifiable information (Section 3 – 4 – vii).   This clear and central statement marks a real step forward.  Amongst other things, it covers the MAC addresses of wireless devices and the serial numbers and random identifiers of mobile phones and laptops.

From this fact alone the bill could play a key role in limiting a number of the most privacy-invasive practices used today by Internet services – including location-based services.  For example, a company like Apple could no longer glibly claim, as it does in its current iTunes privacy policy, that device identifiers and location information are “not personally identifying”.  Nor could it profess, as iTunes also currently does, that this means it can “collect, use, transfer, and disclose”  the information “for any purpose”.  Putting location information under the firm control of users is a key legislative requirement addressed by the bill.

The bill also contributes both to the security of the Internet and to individual privacy by unambiguously embracing “Minimal Disclosure for a Constrained Use” as set out in Law 2 of the Laws of Identity.  Title III explicitly establishes a “Right to Purpose Specification; Data Minimization; Constraints on Distribution; and Data Integrity.”

Despite these real positives, the bill as currently formulated leaves me eager to consult a bevy of lawyers – not a good sign.  This may be because it is still a “working draft”, with numerous provisions that must be clarified. 

For example, how would the population at large ever understand the byzantine interlocking of opt-in and opt-out clauses described in Section 202?  At this point, I don't.

And what does the list of exceptions to Unauthorized Use in Section 3 paragraph 8 imply?  Does it mean such uses can be made without notice and consent?

I'll be looking for comments by legal and policy experts.  Already, EPIC has expressed both support and reservations:

Senators John Kerry (D-MA) and John McCain (R-AZ) have introduced the “Commercial Privacy Bill of Rights Act of 2011,” aimed at protecting consumers’ privacy both online and offline. The Bill endorses several “Fair Information Practices,” gives consumers the ability to opt-out of data disclosures to third-parties, and restricts the sharing of sensitive information.

But the Bill does not allow for a private right of action, preempts better state privacy laws, and includes a “Safe Harbor” arrangement that exempts companies from significant privacy requirements.

EPIC has supported privacy laws that provide meaningful enforcement, limit the ability of companies’ to exploit loopholes for behavioral targeting, and ensure that the Federal Trade Commission can investigate and prosecute unfair and deceptive trade practices, as it did with Google Buzz. For more information, see EPIC: Online Tracking and Behavioral Profiling and EPIC: Federal Trade Commission.

Kerry-McCain bill proposes “minimal disclosure” for transactions

Steve Satterfield at Inside Privacy gives us this overview of the central features of the new Commercial Privacy Bill of Rights proposed by US Senators Kerry and McCain (download it here):

  • The draft envisions a significant role for the FTC and includes provisions requiring the FTC to promulgate rules on a number of important issues, including the appropriate consent mechanism for uses of data.  The FTC would also be tasked with issuing rules obligating businesses to provide reasonable security measures for the consumer data they maintain and to provide transparent notices about data practices.
  • The draft also states that businesses should “seek” to collect only as much “covered information” as is reasonably necessary to provide a transaction or service requested by an individual, to prevent fraud, or to improve the transaction or service.
  • “Covered information” is defined broadly and would include not just “personally identifiable information” (such as name, address, telephone number, social security number), but also “unique identifier information,” including a customer number held in a cookie, a user ID, a processor serial number or a device serial number.  Unlike definitions of “covered information” that appear in separate bills authored by Reps. Bobby Rush (D-Ill.) and Jackie Speier (D-Cal.), this definition specifically covers cookies and device IDs.
  • The draft encompasses a data retention principle, providing that businesses should retain covered information only as long as necessary to provide the transaction or service “or for a reasonable period of time if the service is ongoing.” 
  • The draft contemplates enforcement by the FTC and state attorneys general.  Notably — and in contrast to Rep. Rush's bill — the draft does not provide a private right of action for individuals who are affected by a violation. 
  • Nor does the bill specifically address the much-debated “Do Not Track” opt-out mechanism that was recommended in the FTC's recent staff report on consumer privacy.  (You can read our analysis of that report here.) 

As noted above, the draft is reportedly still a work in progress.  Inside Privacy will provide additional commentary on the Kerry legislation and other congressional privacy efforts as they develop.   

Press conference will be held tomorrow at 12:30 pm.  [Emphasis above is mine – Kim]

Readers of Identityblog will understand that I see this development, like so many others, as inevitable and predictable consequences of many short-sighted industry players breaking the Laws of Identity.

 

WSJ: Federal Prosecutors investigate smartphone apps

If you have kept up with the excellent Wall Street Journal series on smartphone apps that inappropriately collect and release location information, you won't be surprised at their latest chapter:  Federal Prosecutors are now investigating information-sharing practices of mobile applications, and a Grand Jury is already issuing subpoenas.  The Journal says, in part:

Federal prosecutors in New Jersey are investigating whether numerous smartphone applications illegally obtained or transmitted information about their users without proper disclosures, according to a person familiar with the matter…

The criminal investigation is examining whether the app makers fully described to users the types of data they collected and why they needed the information—such as a user's location or a unique identifier for the phone—the person familiar with the matter said. Collecting information about a user without proper notice or authorization could violate a federal computer-fraud law…

Online music service Pandora Media Inc. said Monday it received a subpoena related to a federal grand-jury investigation of information-sharing practices by smartphone applications…

In December 2010, Scott Thurm wrote Your Apps Are Watching You,  which has now been “liked” by over 13,000 people.  It reported that the Journal had tested 101 apps and found that:

… 56 transmitted the phone's unique device identifier to other companies without users’ awareness or consent.  Forty-seven apps transmitted the phone's location in some way. Five sent a user's age, gender and other personal details to outsiders.  At the time they were tested, 45 apps didn't provide privacy policies on their websites or inside the apps.

In Pandora's case, both the Android and iPhone versions of its app transmitted information about a user's age, gender, and location, as well as unique identifiers for the phone, to various advertising networks. Pandora gathers the age and gender information when a user registers for the service.

Legal experts said the probe is significant because it involves potentially criminal charges that could be applicable to numerous companies. Federal criminal probes of companies for online privacy violations are rare…

The probe centers on whether app makers violated the Computer Fraud and Abuse Act, said the person familiar with the matter. That law, crafted to help prosecute hackers, covers information stored on computers. It could be used to argue that app makers “hacked” into users’ cellphones.

[More here]

The elephant in the room is Apple's own approach to location information, which should certainly be subject to investigation as well.  The user is never presented with a dialog in which Apple's use of location information is explained and permission is obtained.  Instead, the user's agreement is gained surreptitiously, hidden away on page 37 of a 45-page policy that Apple users must accept in order to use… iTunes.  Why iTunes requires location information is never explained.  The policy simply states that the user's device identifier and location are non-personal information and that Apple “may collect, use, transfer, and disclose non-personal information for any purpose”.

Any purpose?

Is it reasonable that companies like Apple can  proclaim that device identifiers and location are non-personal and then do whatever they want with them?  Informed opinion seems not to agree with them.  The International Working Group on Data Protection in Telecommunications, for example, asserted precisely the opposite as early as 2004.  Membership of the Group included “representatives from Data Protection Authorities and other bodies of national public administrations, international organisations and scientists from all over the world.”

More empirically, I demonstrated in Non-Personal information, like where you live that the combination of device identifier and location is in very many cases (including my own) personally identifying.  This is especially true in North America where many of us live in single-family dwellings.
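The inference is simple enough to sketch. In this hypothetical example (all device IDs and coordinates are made up), the location a device reports most often overnight is, for most people in single-family dwellings, their home address – which is exactly why a (device identifier, location) stream is personally identifying:

```python
# Hypothetical illustration: infer a device's likely home from its most
# frequent overnight location. The data below is invented.
from collections import Counter

observations = [
    # (device_id, hour_of_day, (rounded_lat, rounded_lon))
    ("dev-42", 2, (47.620, -122.350)),
    ("dev-42", 3, (47.620, -122.350)),
    ("dev-42", 14, (47.610, -122.200)),  # daytime: somewhere else (office?)
    ("dev-42", 23, (47.620, -122.350)),
]


def likely_home(device_id, obs):
    """Most frequent location reported between 10pm and 6am."""
    night = [loc for d, hour, loc in obs
             if d == device_id and (hour >= 22 or hour < 6)]
    return Counter(night).most_common(1)[0][0]


print(likely_home("dev-42", observations))  # -> (47.62, -122.35)
```

Resolve that coordinate against a map of single-family homes and the "non-personal" identifier points at a household, often an individual.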

[BTW, I have not deeply investigated the approach to sharing of location information taken by other smartphone providers – perhaps others can shed light on this.]

Google Indoors featured on German TV

Germans woke up yesterday to a headline story on Das Erste's TV morning show announcing a spiffy new Internet service – Google Indoors.

Das Erste's lead-in and the Google Indoors spokesman

A spokesman said Google was extending its Street View offering so Internet users could finally see inside peoples’ homes.  Indeed, Google Indoors personnel were already knocking on doors, patiently explaining that if people had not already gone through the opt-out process, they had “opted in”…

Google Indoors greeted by happy customer

… so the technicians needed to get on with their work:

Google Indoors camera-head enters apartment

Google's deep concern about peoples’ privacy had led it to introduce features such as automated blurring of faces…

Automated privacy features and product placements with revenue shared with residents
 
… and the business model of the scheme was devilishly simple: the contents of peoples’ houses served as product placements charged to advertisers, with 1/10 of a cent per automatically recognized brand name going to the residents themselves.  As shown below, people can choose to obfuscate products worth more than 5,000 Euros if concerned about attracting thieves – an example of the advanced privacy options and levels the service makes possible.

Google Indoors app experience

Check out the video.  Navigation features within houses are amazing!  From the amount of effort and wit put into it by a major TV show, I'd wager that even if Google's troubles with Germany around Street View are over, its problems with Germans around privacy may not be. 

Frankly, Das Erste (meaning “The First”) has to be congratulated on one of the best-crafted April Fools’ pranks you will have witnessed.  I don't have the command of the German language or politics (!) to understand all the subtleties, but friends say the piece is teeming with irony.  And given Eric Schmidt's policy of getting as close to “creepy” as possible, who wouldn't find the video at least partly believable?

[Thanks to Kai Rannenberg for the heads up.]

Lazy headmasters versus the Laws of Identity

Ray Corrigan routinely combines legal and technological insight at B2fxxx – Random thoughts on law, the Internet and society, and his book on Digital Decision Making is essential.  His work often leaves me feeling uncharacteristically optimistic – living proof that a new kind of legal thinker is emerging with the technological depth needed to be a modern day Solomon.

I hadn't noticed the UK's new Protection of Freedoms Bill until I heard cabinet minister Damian Green talk about it as he pulverized the UK's centralized identity database recently.  Naturally I turned to Ray Corrigan for comment, only to discover that the political housecleaning had also swept away the assumptions behind widespread fingerprinting in Britain's schools, reinstating user control and consent. 

According to TES Connect:

The new Protection of Freedoms Bill gives pupils in schools and colleges the right to refuse to give their biometric data and compels schools to make alternative provision for them.  The several thousand schools that already use the technology will also have to ask permission from parents retrospectively, even if their systems have been established for years…

It turns out that Britain's headmasters, apparently now a lazy bunch, have little stomach for trivialities like civil liberties.  And writing about this, Ray's tone seems that of a judge who has had an impetuous and over-the-top barrister try to bend the rules one too many times.  It is satisfying to see Ray send them home to study the Laws of Identity as scientific laws governing identity systems.   I hope they catch up on their homework…

The Association of School and College Leaders (ASCL) is reportedly opposing the controls on school fingerprinting proposed in the UK coalition government's Protection of Freedoms Bill.

I always understood the reason that unions existed was to protect the rights of individuals. That ASCL should give what they perceive to be their own members’ managerial convenience priority over the civil rights of kids should make them thoroughly ashamed of themselves.  Oh dear – now head teachers are going to have to fill in a few forms before they abuse children's fundamental right to privacy – how terrible.

Although headteachers and governors at schools deploying these systems may be typically ‘happy that this does not contravene the Data Protection Act’, a number of leading barristers have stated that the use of such systems in schools may be illegal on several grounds. As far back as 2006 Stephen Groesz, a partner at Bindmans in London, was advising:

“Absent a specific power allowing schools to fingerprint, I'd say they have no power to do it. The notion you can do it because it's a neat way of keeping track of books doesn't cut it as a justification.”

The recent decisions in the European Court of Human Rights in cases like S. and Marper v UK (2008 – retention of DNA and fingerprints) and Gillan and Quinton v UK (2010 – s44 police stop and search) mean schools have to be increasingly careful about the use of such systems anyway. Not that most schools would know that.

Again the question of whether kids should be fingerprinted to get access to books and school meals is not even a hard one! They completely decimate Kim Cameron's first four laws of identity.

1. User control and consent – many schools don't ask for consent, child or parental, and don't provide simple opt out options

2. Minimum disclosure for constrained use – the information collected, children's unique biometrics, is disproportionate for the stated use

3. Justifiable parties – the information is in control of or at least accessible by parties who have absolutely no right to it

4. Directed identity – a unique, irrevocable, omnidirectional identifier is being used when a simple unidirectional identifier (eg lunch ticket or library card) would more than adequately do the job.

It's irrelevant how much schools have invested in such systems or how convenient school administrators find them, or that the Information Commissioner's Office soft-pedalled its advice on the matter (in 2008) in relation to the Data Protection Act.  They should all be scrapped, and if the need for schools to wade through a few more forms before they use these systems causes them to be scrapped, then that's a good outcome from my perspective.

In addition, although school fingerprint vendors have conned schools into parting with ridiculous sums of money (in school budget terms) to install these systems, with promises that they are not really storing fingerprints and that the fingerprints can't be recreated, there is no doubt that it is possible to recreate the image of a fingerprint from the data stored on such systems. Ross, A et al ‘From Template to Image: Reconstructing Fingerprints from Minutiae Points’ IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, No. 4, April 2007 is just one example of how university researchers have reverse engineered these systems. The warning caveat emptor applies emphatically to digital technology systems that buyers don't understand, especially when it comes to undermining the civil liberties of our younger generation.