Android OEMs will need to use Google Location Service

Over at Daring Fireball, John Gruber tells us about Google's approach to controlling content on Android, quoting a brief by Skyhook Wireless in the “complaint and jury demand” they filed against Google recently.

John discusses a couple of aspects of the filing, which he describes as “not long, and… written in pretty straightforward plain language, regarding Google’s control over which devices have access to the Android Market”.  In particular he calls our attention to the way Google is tying Android to its location service – the one made famous during the StreetView WiFi scandal:

23. On information and belief, Google has notified OEMs that they will need to use Google Location Service, either as a condition of the Android OS-OEM contract or as a condition of the Google Apps contract between Google and each OEM. Though Google claims the Android OS is open source, by requiring OEMs to use Google Location Service, an application that is inextricably bundled with the OS level framework, Google is effectively creating a closed system with respect to location positioning. Google’s manipulation suggests that the true purpose of Android is, or has become, to ensure that “no industry player can restrict or control the innovations of any other”, unless it is Google.

He bookends this with an ironic quote from Vic Gundotra, Google's Vice-President for Engineering:

If you believe in openness, if you believe in choice, if you believe in innovation from everyone, then welcome to Android.

If Google is actually forcing OEMs to hook their users into its world-wide location database it adds one more sinister note to the dark architecture of StreetView location services.

[Thanks to Cameron Westland for the heads up]

Kim Komando on location services

Kim Komando has a great piece at USA Today where she explains geotagging through the experiences of two women who also happened to be using the foursquare location service.  This article is one of the first of what I expect will become a torrent as the media learns the implications of geolocation:

Sylvia was dining out with a friend. The restaurant manager interrupted her dinner to tell her she had a phone call. It was from a complete stranger who tracked her online. He had described her to the manager.

Louise was at a bar with colleagues. A stranger began talking to her. He knew a lot about her personal interests. Then, he pulled out his phone and showed her a photo. It was a picture of Louise that he found online.

Both of these stories are true. And they're very unnerving. There is also a common thread. The women were tracked by something known as “geotagging.”

Geotagging adds GPS coordinates to your online posts or photos. You may be exposing this information without even knowing it. Geotagging is particularly popular with photos; many smartphones automatically geotag photos.

Photos can be plotted on a map for easy organization and viewing. But post photos online, and you could reveal your home address or your child's school. You've given a criminal a treasure map.

Layers of information

A geotagged photo is the most obvious threat to your privacy and safety. But, in Louise's and Sylvia's cases, there was more going on. Both used the location-based social-networking service Foursquare.

Location-based social-networking services are designed to help you meet up with family and friends. When you're out and about, you check in with the site. At the coffee shop? Check in so friends nearby can find you.

Unless you have a stalker, these services aren't particularly dangerous on their own. You need to think about the layers of information you leave online. As you use more services, it's easier for criminals to track you.

Let's say you post a photo of your new house to a photo site. The photo is geotagged. You've linked your photo account to Facebook. And you use Foursquare or Twitter on the go; updates are sent to your Facebook account.

One night you go to the movies. You send a tweet as you wait in line. When you get home, you discover you've been robbed. The burglar used your photo to find your address. He learned more about you on Facebook. Your tweet tipped him off to your location.

Thanks to a movie site, he knew exactly how long the movie ran. He scoped out your house and neighborhood on Google Street View. He devised a plan to get in and out fast and undetected.

Protecting yourself

If you use these services, protect yourself. Use a little common sense. First, don't geotag photos of your house or your children. In fact, it's best to disable geotagging until you specifically need it.

On the iPhone 4, tap Settings, then General, and then Location Services. You can select which applications can access GPS data. These options aren't available in older iPhone software, so tap Settings, then General, then Reset. Tap Reset Location Warnings. You'll be prompted if an application wants to access GPS data. You can then disallow it.

In Android, start the Camera app and open the menu at the left. Go into the settings and turn off geotagging or location storage, depending on which version of Android is on your phone. On a BlackBerry, click the Camera icon. Press the Menu button and select Options. Set the Geotagging option to Disabled. Save your settings.

You can also use an EXIF editor to remove location information from photos. EXIF data is information about a photo embedded in the file. Visit www.komando.com/news for free EXIF editors.

Don't check in on Foursquare or similar sites from home. And make sure your Twitter program is not including GPS coordinates in your tweets.

For many people, Facebook ties everything together. Reconsider linking other accounts to Facebook. Pay close attention to your privacy settings. Only trusted friends should know when you are or aren't at home. Finally, if you have contacts you don't fully trust, it's time to do a purge.

[Kim Komando hosts the nation's largest talk radio show about computers and the Internet. To get the podcast or find the station nearest you, visit www.komando.com. To subscribe to Kim's free e-mail newsletters, sign up at www.komando.com too. Contact her at C1Tech@gannett.com. ]
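Komando's advice about scrubbing EXIF data before sharing photos is easy to make concrete. The sketch below is a minimal illustration only: it models a photo's EXIF metadata as a plain Python dictionary (a real tool would parse the JPEG file with a library such as piexif or Pillow, which this sketch deliberately avoids), and the function name and data layout are invented for the example.

```python
# Minimal sketch of EXIF location scrubbing. The EXIF data is modeled
# here as a plain dict of IFD sections; a real editor would perform the
# same deletions on the parsed file.

GPS_KEYS = {"GPS", "GPSInfo"}  # the section and pointer tag that carry coordinates

def strip_location(exif):
    """Return a copy of the EXIF dict with all GPS data removed."""
    cleaned = {}
    for section, tags in exif.items():
        if section in GPS_KEYS:
            continue  # drop the whole GPS IFD
        # also drop any GPSInfo pointer embedded in another section
        cleaned[section] = {k: v for k, v in tags.items() if k not in GPS_KEYS}
    return cleaned

if __name__ == "__main__":
    photo_exif = {
        "0th": {"Make": "Apple", "Model": "iPhone 4", "GPSInfo": 1234},
        "Exif": {"DateTimeOriginal": "2010:07:14 19:02:11"},
        "GPS": {"GPSLatitude": (47, 36, 35), "GPSLongitude": (122, 19, 59)},
    }
    print(strip_location(photo_exif))
```

In essence, the two deletions shown – the GPS section and any pointer to it – are what the free EXIF editors she mentions perform on the actual file.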

It is well worth reading Foursquare's privacy policy – which is well thought out and makes Foursquare a paragon of virtue when compared to the contract with the devil you sign when you install iTunes, for example.  I'll explore this more going forward.

Non-Personal Information – like where you live?

Last week I gave a presentation at PII 2010 in Seattle where I tried to summarize what I had learned from my recent work on WiFi location services and identity.  During the question period  an audience member asked me to return to the slide where I recounted how I had first encountered Apple’s new location tracking policy:

My questioner was clearly a bit irritated with me.  Didn’t I realize that the “unique device identifier” was just a GUID – a purely random number?  It wasn’t a MAC address.  It was not personally identifying.

The question really perplexed me, since I had just shown a slide demonstrating how, if you go to this well-known web site (for example) and enter a location, you find out who lives there (I used myself as an example – and by the way, “whitepages” releases this information even though I have had an unlisted number…).

I pointed out the obvious:  if Apple releases your location and a GUID to a third party on multiple occasions, one location will soon stand out as being your residence… Then presto, if the third party looks up the address in a “Reverse Address” search engine, the “random” GUID identifies you personally forever more.  The notion that location information tied to random identifiers is not personally identifiable information is total hogwash.
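To see how little anonymity a random GUID buys, here is a hypothetical sketch of the attack just described: round each reported location into coarse grid cells, keep only overnight sightings, and take the most frequent cell. The function name and data layout are invented for illustration; nothing here is anyone's actual code.

```python
# Hypothetical sketch: inferring a residence from "anonymous" sightings
# of one GUID. Each sighting is (latitude, longitude, hour-of-day).
from collections import Counter

def likely_residence(sightings):
    """Guess the home grid cell from repeated sightings of one device.

    Coordinates are rounded to roughly 100 m cells; the cell that
    dominates overnight hours (10pm-6am) is almost always home.
    """
    night = Counter(
        (round(lat, 3), round(lon, 3))      # coarse grid cell
        for lat, lon, hour in sightings
        if hour >= 22 or hour < 6           # overnight only
    )
    if not night:
        return None
    cell, _count = night.most_common(1)[0]
    return cell

if __name__ == "__main__":
    # one device's week: an office by day, one address every night
    sightings = [(47.6205, -122.3493, 14)] * 5 + [(47.6740, -122.1215, 23)] * 6
    print(likely_residence(sightings))  # feed into any reverse-address lookup
```

Once the home cell is in hand, a “Reverse Address” search turns the GUID into a name – permanently.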

My questioner then asked, “Is your problem that Apple’s privacy policy is so clear?  Do you prefer companies who don’t publish a privacy policy at all, but rather just take your information without telling you?”  A chorus of groans seemed to answer his question to everyone’s satisfaction.  But I personally found the question thought provoking.  I assume corporations publish privacy policies – even those as duplicitous as Apple’s – because they have to.  I need to learn more about why.

[Meanwhile, if you’re wondering how I could possibly post my own residential address on my blog, it turns out I’ve moved and it is no longer my address.  Beyond that, the initial “A” in the listing above has nothing to do with my real name – it’s just a mechanism I use to track who has given out my personal information.]

Nice twitter

I had a tiny and unobtrusive little “privacy experience” today with Twitter that gives the lie to the idea that privacy makes things complicated and unruly.

Someone had tried to locate me using my email address. My privacy settings did not allow this (I'm not sure whether it was because Twitter's privacy policy had changed or because of my initial choices). No matter: Twitter sent me a one-sentence email that explained the situation and, when I clicked on the link, allowed me to change my options with a single button press. End of story.

The whole process was low friction and – being tied to someone's attempt to get in touch with me – had a “pay-as-you-go” appeal. This wasn't some indigestible abstract policy – and I wasn't being misled by burying information on page 37 of a legal statement.  The whole UI experience made it clear that policy settings can be tied into their context in a way that is helpful and unobtrusive.

Stephan Engberg on Touch2ID

Stephan Engberg is a member of the Strategic Advisory Board of the EU ICT Security & Dependability Taskforce and an innovator in terms of reconciling the security requirements in both ambient and integrated digital networks. I thought readers would benefit from comments he circulated in response to my posting on Touch2Id.

Kim Cameron's comments on Touch2Id – and especially the way PI is used – make me want to see more discussion about the definition of privacy and the approaches that can be taken in creating such a definition.

To me Touch2Id is a disaster – teaching kids to offer their fingerprints to strangers is not compatible with my understanding of democracy or of what constitutes the basis of a free society. The claim that data is “not collected” is absurd and represents outdated legal thinking.  Biometric data gets collected even though it shouldn't be, and such collection is entirely unnecessary given the PET solutions to this problem that exist, e.g. chip-on-card.

In my book, Touch2Id did not do the work to deserve a positive privacy appraisal.

Touch2Id, in using blinded signatures, is a much better solution than, for example, a PKI-based solution would be.  But this does not change the fact that biometrics are getting collected where they shouldn't be.

To me Touch2Id therefore remains a strong invasion of privacy – because it teaches kids to accept biometric interactions that are outside their control. Trusting a reader is not an option.

My concern is not so much to discuss the specific solution as to reach some agreement on the use of words – on what is acceptable in terms of terminology and definitions.

We all understand that there are different approaches possible given different levels of pragmatism and focus. In reality we have our different approaches because of a number of variables:  the country we live in, our experiences and especially our core competencies and fields of expertise.

Many do good work from different angles – improving regulation, inventing technologies, debating, pointing out major threats etc. etc.

No criticism – only appraisal

Some try to avoid compromises – often at great cost as it is hard to overcome many legacy and interest barriers.  At the same time the stakes are rising rapidly:  reports of spyware are increasingly universal. Further, some try to avoid compromises out of fear or on the principle that governments are “dangerous”.

Some people think I am rather uncompromising and driven by idealist principles (or whatever words people use to do character assassination of those who speak inconvenient truths).  But those who know me are also surprised – and to some extent find it hard to believe – that this is due largely to considerations of economics and security rather than privacy and principle.

Consider the example of Touch2Id.  The fact that it is NON-INTEROPERABLE is even worse than the fact that biometrics are being collected: because of this, you simply cannot create a PET solution using the technology interfaces!  It is not open, but closed to innovations and security upgrades. There is only external verification of biometrics or nothing – and as such no PET model can be applied.  My criticism of Touch2Id is fully in line with the work on security research roadmapping prior to the EU's large FP7 research programme (see pg. 14 on private biometrics and biometric encryption – both chip-on-card).

Some might remember the discussion at the 2003 EU PET Workshop in Brussels where there were strong objections to the “inflation of terms”.  In particular, there was much agreement that the term Privacy Enhancing Technology should only be applied to non-compromising solutions.  Even within the category of “non-compromising” there are differences.  For example, do we require absolute anonymity, or can PETs be created through specific built-in countermeasures such as anti-counterfeiting through self-incrimination in Digital Cash, or some sort of tightly controlled Escrow (Conditional Identification) in cases such as that of non-payment in an otherwise pseudonymous contract (see here)?

I tried to raise the same issue last year in Brussels.

The main point here is that we need a vocabulary that does not allow for inflation – a vocabulary that is not infected by someone's interest in claiming “trust” or overselling an issue. 

And we first and foremost need to stop – or at least address – the tendency of the bad guys to steal the terms for marketing or propaganda purposes.  Around National Id and Identity Cards this theft has been a constant – for example, the term “User-centric Identity” has been turned upside down and today, in many contexts, means “servers focusing on profiling and managing your identity.”

The latest examples of this are the exclusive and centralist European eID model and the IdP-centric identity models recently proposed by the US, which are neither technologically interoperable, security-enhancing, nor privacy-enhancing. These models represent the latest in democratic and free-market failure.

My point is not so much to define policy, but rather to respect the fact that different policies at different levels cannot happen unless we have a clear vocabulary that avoids inflation of terms.

Strong PETs must be applied to ensure principles such as net neutrality, demand-side controls and semantic interoperability.  If they aren't, I am personally convinced that within 20 or 30 years we will no longer have anything resembling democracy – and economic crises will worsen due to Command & Control inefficiencies and anti-innovation initiatives.

In my view, democracy as a construct is failing due to the rapid deterioration of fundamental rights and of the requirements of citizen-centric structures.  I see no alternative but to try to get it back on track through strong empowerment of citizens – however non-informed one might think the “masses” are – which depends on propagating the notion that you CAN be in control, or “Empowered”, in the many possible meanings of the term.

When I began to think about Touch2Id it did of course occur to me that it would be possible for operators of the system to secretly retain a copy of the fingerprints and the information gleaned from the proof-of-age identity documents – in other words, to use the system in a deceptive way.  I saw this as being something that could be mitigated by introducing the requirement for auditing of the system by independent parties who act in the privacy interests of citizens.

It also occurred to me that it would be better, other things being equal, to use an on-card fingerprint sensor.  But is this a practical requirement given that it would still be possible to use the system in a deceptive way?  Let me explain.

Each card could, unbeknownst to anyone, be imprinted with an identifier and the identity documents could be surreptitiously captured and recorded.  Further, a card with the capability of doing fingerprint recognition could easily contain a wireless transmitter.  How would anyone be certain a card wasn't capable of surreptitiously transmitting the fingerprint it senses or the identifier imprinted on it through a passive wireless connection? 

Only through audit of every technical component and all the human processes associated with them.

So we need to ask, what are the respective roles of auditability and technology in providing privacy enhancing solutions?

Does it make sense to kill schemes like Touch2Id even though they are, as Stephan says, better than other alternatives?  Or is it better to put the proper auditing processes in place, show that the technology benefits its users, and continue to evolve the technology based on these successes?

None of this is to dismiss the importance of Stephan's arguments – the discussion he calls for is absolutely required and I certainly welcome it. 

I'm sure he and I agree we need systematic threat analysis combined with analysis of the possible mitigations, and we need to evolve a process for evaluating these things which is rigorous and can withstand deep scrutiny. 

I am also struck by Stephan's explanation of the relationship between interoperability and the ability to upgrade and uplevel privacy through PETs, as well as the interesting references he provides. 

Blizzard backtracks on real-names policy

A few days ago I mentioned the outcry when Blizzard, publisher of the World of Warcraft (WoW) multi-player Internet game, decided to make gamers reveal their offline identities and identifiers within their fantasy gaming context. 

I also described Blizzard's move as the “kookiest” flouting yet of the Fourth Law of Identity (Contextual separation through unidirectional identifiers). 

Today the news is all about Blizzard's first step back from the mistaken plan that appears to have completely misunderstood its own community.

CEO Mike Morhaime  seems to be on the right track with the first part of his message:

“I'd like to take some time to speak with all of you regarding our desire to make the Blizzard forums a better place for players to discuss our games. We've been constantly monitoring the feedback you've given us, as well as internally discussing your concerns about the use of real names on our forums. As a result of those discussions, we've decided at this time that real names will not be required for posting on official Blizzard forums.

“It's important to note that we still remain committed to improving our forums. Our efforts are driven 100% by the desire to find ways to make our community areas more welcoming for players and encourage more constructive conversations about our games. We will still move forward with new forum features such as the ability to rate posts up or down, post highlighting based on rating, improved search functionality, and more. However, when we launch the new StarCraft II forums that include these new features, you will be posting by your StarCraft II Battle.net character name + character code, not your real name. The upgraded World of Warcraft forums with these new features will launch close to the release of Cataclysm, and also will not require your real name.”

Then he goes weird again.  He seems to have a fantasy of his own:  that he is running Facebook…

“I want to make sure it's clear that our plans for the forums are completely separate from our plans for the optional in-game Real ID system now live with World of Warcraft and launching soon with StarCraft II. We believe that the powerful communications functionality enabled by Real ID, such as cross-game and cross-realm chat, make Battle.net a great place for players to stay connected to real-life friends and family while playing Blizzard games. And of course, you'll still be able to keep your relationships at the anonymous, character level if you so choose when you communicate with other players in game. Over time, we will continue to evolve Real ID on Battle.net to add new and exciting functionality within our games for players who decide to use the feature.”

Don't get me wrong.  As convoluted as this thinking is, it's one big step forward (after two giant steps backward) to make linking of offline identity to gaming identity “optional”. 

And who knows?  Maybe Mike Morhaime really does understand his users…  He may be right that lots of gamers are totally excited at the prospect of their parents, lovers and children joining Battle.net to stay connected with them while they are playing WoW!  Facebook doesn't stand a chance!


“Microsoft Accuses Apple, Google of Attempted Privacy Murder”

Ms. Smith at Network World made it to the home page of digg.com yesterday when she reported on my concerns about the collection and release of information related to people's movements and location. 

I want to set the record straight about one thing: the headline.  It's not that I object to the term “attempted privacy murder” – it pretty much sums things up. The issue is just that I speak as Kim Cameron – a person, not a corporation.  I'm not in marketing or public relations – I'm a technologist who has come to understand that we must all work together to ensure people are able to trust their digital environment.  The ideas I present here are the same ones I apply liberally in my day job, but this is a personal blog.

Ms. Smith is as precise as she is concise:

A Microsoft identity guru bit Apple and smacked Google over mobile privacy policies. Once upon a time, before working for Microsoft, this same man took MS to task for breaking the Laws of Identity.

Kim Cameron, Microsoft's Chief Identity Architect in the Identity and Security Division, said of Apple, “If privacy isn’t dead, Apple is now amongst those trying to bury it alive.”

What prompted this was when Cameron visited the Apple App store to download a new iPhone application. When he discovered Apple had updated its privacy policy, he read all 45 pages on his iPhone. Page 37 lets Apple users know:

Collection and Use of Non-Personal Information

We also collect non-personal information – data in a form that does not permit direct association with any specific individual. We may collect, use, transfer, and disclose non-personal information for any purpose. The following are some examples of non-personal information that we collect and how we may use it:

· We may collect information such as occupation, language, zip code, area code, unique device identifier, location, and the time zone where an Apple product is used so that we can better understand customer behavior and improve our products, services, and advertising.

The MS identity guru put the smack down not only on Apple, but also on Google, writing in his blog, “Maintaining that a personal device fingerprint has ‘no direct association with any specific individual’ is unbelievably specious in 2010 – and even more ludicrous than it used to be now that Google and others have collected the information to build giant centralized databases linking phone MAC addresses to house addresses. And – big surprise – my iPhone, at least, came bundled with Google’s location service.”

MAC in this case refers to the Media Access Control addresses associated with specific devices – one of the types of data that Google collected. Google admits to collecting MAC addresses of WiFi routers, but denies snagging MAC addresses of laptops or phones. Google is under mass investigation for its WiFi blunder.

Apple's new policy is also under fire from two Congressmen who gave Apple until July 12th to respond. Reps. Edward J. Markey (D-Mass.) and Joe Barton (R-Texas) sent a letter to Apple CEO Steve Jobs asking for answers about Apple gathering location information on its customers.

As far as Cameron goes, Microsoft's Chief Identity Architect seems to call out anyone who violates privacy. That includes Microsoft. According to Wikipedia's article on Microsoft Passport:

“A prominent critic was Kim Cameron, the author of the Laws of Identity, who questioned Microsoft Passport in its violations of those laws. He has since become Microsoft's Chief Identity Architect and helped address those violations in the design of the Windows Live ID identity meta-system. As a consequence, Windows Live ID is not positioned as the single sign-on service for all web commerce, but as one choice of many among identity systems.”

Cameron seems to believe location-based identifiers and these changes of privacy policies may open the eyes of some people to the “new world-wide databases linking device identifiers and home addresses.”


Doing it right: Touch2Id

And now for something refreshingly different:  an innovative company that is doing identity right. 

I'm talking about a British outfit called Touch2Id.  Their concept is really simple.  They offer young people a smart card that can be used to prove they are old enough to drink alcohol.  The technology is now well beyond the “proof of concept” phase – in fact its use in Wiltshire, England is being expanded based on its initial success.

  • To register, people present their ID documents and, once verified, a template of their fingerprint is stored on a Touch2Id card that is immediately given to them. 
  • When they go to a bar, they wave their card over a machine similar to a credit card reader, and press their finger on the machine.  If their finger matches the template on their card, the lights come on and they can walk on in.

What's great here is:

  • Merchants don't have to worry about making mistakes.  The age vetting process is stringent and fake IDs are weeded out by experts.
  • Young people don't have to worry about being discriminated against (or being embarrassed) just because they “look young”.
  • No identifying information is released to the merchant.  No name, age or photo appears on (or is stored on) the card.
  • The movements of the young person are not tracked.
  • There is no central database assembled that contains the fingerprints of innocent people.
  • The fingerprint template remains the property of the person with the fingerprint – there is no privacy issue or security honeypot.
  • Kids cannot lend their card to a friend – the friend's finger would not match the fingerprint template.
  • If the card is lost or stolen, it won't work any more.
  • The templates on the card are digitally signed and can't be tampered with.
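A rough model may help show why this list holds together. In the sketch below – purely illustrative Python, in which an HMAC stands in for the blinded signature Touch2Id actually uses and byte equality stands in for real fingerprint template matching – the only thing that ever leaves the interaction is a yes/no answer:

```python
# Illustrative model of the Touch2Id flow. An HMAC is a stand-in for
# the issuer's blind signature, and byte equality is a stand-in for
# biometric template matching. The point is the data flow: the
# merchant's reader learns only a single bit.
import hashlib
import hmac
import os

ISSUER_KEY = os.urandom(32)  # held by the enrolment authority

def enrol(fingerprint_template: bytes) -> dict:
    """Issue a card holding the template and its signature; nothing is retained centrally."""
    sig = hmac.new(ISSUER_KEY, fingerprint_template, hashlib.sha256).digest()
    return {"template": fingerprint_template, "sig": sig}

def verify_at_door(card: dict, live_scan: bytes) -> bool:
    """Reader output: yes or no. No name, age or photo is involved."""
    genuine = hmac.compare_digest(
        card["sig"],
        hmac.new(ISSUER_KEY, card["template"], hashlib.sha256).digest(),
    )
    matches = hmac.compare_digest(card["template"], live_scan)  # stand-in match
    return genuine and matches

if __name__ == "__main__":
    card = enrol(b"alice-template")
    print(verify_at_door(card, b"alice-template"))  # cardholder: lights on
    print(verify_at_door(card, b"bob-template"))    # borrowed card: refused
```

In the real system the signature would be verified against the issuer's public key rather than a shared secret, so the reader itself would hold nothing worth stealing.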

I met the man behind Touch2Id, Giles Sergant, at the recent EEMA meeting in London.

Being a skeptic versed in the (mis)use of biometrics in identity – especially the fingerprinting of our kids – I was initially more than skeptical. 

But Giles has done his homework (even auditing the course given by privacy experts Gus Hosein and Simon Davies at the London School of Economics).  The better I understood the approach he has taken, the more impressed I was.

Eventually I even agreed to enroll so as to get a feeling for what the experience was like.  The verdict:  amazing.  It's a lovely piece of minimalistic engineering, with no unnecessary moving parts or ugly underbelly.  If I look strangely euphoric in the photo that was taken, it is because I was thoroughly surprised by seeing something so good.

Since then, Giles has already added an alternate form factor – an NFC sticker people can put on their mobile phone so they don't actually need to carry around an additional artifact.  It will be fascinating to watch how young people respond to this initiative, which Giles is trying to grow from the bottom up.  More info on the Facebook page.

Microsoft identity guru questions Apple, Google on mobile privacy

Todd Bishop at TechFlash published a comprehensive story this week on device fingerprints and location services: 

Kim Cameron is an expert in digital identity and privacy, so when his iPhone recently prompted him to read and accept Apple's revised terms and conditions before downloading a new app, he was perhaps more inclined than the rest of us to read the entire privacy policy — all 45 pages of tiny text on his mobile screen.

It's important to note that apart from writing his own blog on identity issues — where he told this story — Cameron is Microsoft's chief identity architect and one of its distinguished engineers. So he's not a disinterested industry observer in the broader sense. But he does have extensive expertise.

And he is publicly acknowledging his use of an iPhone, after all, which should earn him at least a few points for neutrality…

At this point I'll butt in and editorialize a little.  I'd like to amplify on Todd's point for the benefit of readers who don't know me very well:  I'm not critical of Street View WiFi because I am anti-Google.  I'm not against anyone who does good technology.  My critique stems from my work as a computer scientist specializing in identity, not as a person playing a role in a particular company.  In short, Google's Street View WiFi is bad technology, and if the company persists in it, it will be one of the identity catastrophes of our time.

When I figured out the Laws of Identity and understood that Microsoft had broken them, I was just as hard on Microsoft as I am on Google today.  In fact, someone recently pointed out the following reference in Wikipedia's article on Microsoft's Passport:

“A prominent critic was Kim Cameron, the author of the Laws of Identity, who questioned Microsoft Passport in its violations of those laws. He has since become Microsoft's Chief Identity Architect and helped address those violations in the design of the Windows Live ID identity meta-system. As a consequence, Windows Live ID is not positioned as the single sign-on service for all web commerce, but as one choice of many among identity systems.”

I hope this has earned me some right to comment on the current abuse of personal device identifiers by Google and Apple – which, if their FAQs and privacy policies represent what is actually going on, is at least as significant as the problems I discussed long ago with Passport.  

But back to Todd: 

At any rate, as Cameron explained on his IdentityBlog over the weekend, his epic mobile reading adventure uncovered something troubling on Page 37 of Apple's revised privacy policy, under the heading of “Collection and Use of Non-Personal Information.” Here's an excerpt from Apple's policy, Cameron's emphasis in bold.

We also collect non-personal information — data in a form that does not permit direct association with any specific individual. We may collect, use, transfer, and disclose non-personal information for any purpose. The following are some examples of non-personal information that we collect and how we may use it:

We may collect information such as occupation, language, zip code, area code, unique device identifier, location, and the time zone where an Apple product is used so that we can better understand customer behavior and improve our products, services, and advertising.

Here's what Cameron had to say about that.

Maintaining that a personal device fingerprint has “no direct association with any specific individual” is unbelievably specious in 2010 — and even more ludicrous than it used to be now that Google and others have collected the information to build giant centralized databases linking phone MAC addresses to house addresses. And — big surprise — my iPhone, at least, came bundled with Google’s location service.

The irony here is a bit fantastic. I was, after all, using an “iPhone”. I assume Apple’s lawyers are aware there is an ‘I’ in the word “iPhone”. We’re not talking here about a piece of shared communal property that might be picked up by anyone in the village. An iPhone is carried around by its owner. If a link is established between the owner’s natural identity and the device (as Google’s databases have done), its “unique device identifier” becomes a digital fingerprint for the person using it.

MAC in this context refers to Media Access Control addresses associated with specific devices, one type of data that Google has acknowledged collecting. However, in a response to an Atlantic magazine piece that quoted an earlier Cameron blog post, Google says that it hasn't gone as far as Cameron suggests. The company says it has collected only the MAC addresses of WiFi routers, not of laptops or phones.

The distinction is important because it speaks to how far the companies could go in linking together a specific device with a specific person in a particular location.

Google's FAQ, for the record, says its location-based services (such as Google Maps for Mobile) figure out the location of a device when that device “sends a request to the Google location server with a list of MAC addresses which are currently visible to the device” — not distinguishing between MAC addresses from phones or computers and those from wireless routers.
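To make the FAQ's description concrete, here is a minimal sketch of that request flow: the device reports the MAC addresses it can currently see, and the server estimates a position from the ones it recognizes. Everything here — the database, field names, and averaging approach — is illustrative, not Google's actual API.

```python
# Illustrative sketch of the flow Google's FAQ describes.
# KNOWN_APS stands in for a server-side database mapping
# router MAC addresses to surveyed coordinates.
KNOWN_APS = {
    "00:11:22:33:44:55": (47.6205, -122.3493),
    "66:77:88:99:aa:bb": (47.6210, -122.3480),
}

def locate(visible_macs):
    """Estimate position as the centroid of recognized access points."""
    hits = [KNOWN_APS[m] for m in visible_macs if m in KNOWN_APS]
    if not hits:
        return None  # nothing recognized: no fix
    lat = sum(p[0] for p in hits) / len(hits)
    lon = sum(p[1] for p in hits) / len(hits)
    return (lat, lon)

# The device sends every MAC it can see -- the FAQ's wording does not
# distinguish routers from phones or laptops in this list.
print(locate(["00:11:22:33:44:55", "66:77:88:99:aa:bb", "de:ad:be:ef:00:01"]))
```

The privacy question turns on what goes into that list: if it can include the addresses of nearby end-user devices, the same request that locates you also reports who else is in the room.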

Here's what Cameron said when I asked about that topic via email.

I have suggested that the author ask Google if it will therefore correct its FAQ, since the portion of the FAQ on “how the system works” continues to say it behaves in the way I described. If Google does correct its FAQ, it becomes likely that data protection authorities will ask Google to demonstrate that its shipped software actually behaves in the way described in the correction.

I would of course feel better about things if Google’s FAQ is changed to say something like, “The user’s device sends a request to the Google location server with the list of MAC addresses found in Beacon Frames announcing a Network Access Point SSID and excluding the addresses of end user devices.”

However, I would still worry that the commercially irresistible feature of tracking end user devices could be turned on at any second by Google or others. Is that to be prevented? If so, how?

So a statement from Google that its FAQ was incorrect would be good news – and I would welcome it – but not the end of the problem for the industry as a whole.
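The restriction Cameron proposes in his suggested FAQ wording — forward only addresses learned from Beacon Frames announcing an access point's SSID, never the addresses of end-user devices — can be sketched as a simple filter. The frame fields and sample data below are illustrative, not any real capture format.

```python
# Sketch of Cameron's proposed restriction: keep only MAC addresses
# seen as the source of beacon frames announcing an SSID (i.e. access
# points), and drop everything else, including client devices.
def aps_only(observed_frames):
    """Return sorted source MACs from SSID-announcing beacon frames."""
    return sorted({
        f["src_mac"]
        for f in observed_frames
        if f["type"] == "beacon" and f.get("ssid")
    })

frames = [
    {"type": "beacon", "src_mac": "00:11:22:33:44:55", "ssid": "CoffeeShop"},
    {"type": "data",   "src_mac": "aa:bb:cc:dd:ee:ff"},  # someone's phone
    {"type": "beacon", "src_mac": "66:77:88:99:aa:bb", "ssid": "HomeNet"},
]
print(aps_only(frames))  # only the two access points survive the filter
```

As Cameron notes, the harder problem is not writing this filter but guaranteeing it stays in place: the same scan that feeds it already contains the end-user addresses, so tracking could be switched on at any time.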

The privacy statement for Microsoft's Location Finder service, for the record, is more specific in saying that the service uses MAC addresses from wireless access points, making no reference to those from individual devices.

In any event, the basic question about Apple is whether its new privacy policy is ultimately correct in saying that the company is only collecting “data in a form that does not permit direct association with any specific individual” — if that data includes such information as the phone's unique device identifier and location.

Cameron isn't the only one raising questions.

The Consumerist blog picked up on this issue last week, citing a separate portion of the revised privacy policy that says Apple and its partners and licensees “may collect, use, and share precise location data, including the real-time geographic location of your Apple computer or device.” The policy adds, “This location data is collected anonymously in a form that does not personally identify you and is used by Apple and our partners and licensees to provide and improve location-based products and services.”

The Consumerist called the language “creepy” and said it didn't find Apple's assurances about the lack of personal identification particularly comforting. Cameron, in a follow-up post, agreed with that sentiment.

SF Weekly and the Hypebot music technology blog also noted the new location-tracking language, and the fact that users must agree to the new privacy policy if they want to use the service.

“Though Apple states that the data is anonymous and does not enable the personal identification of users, they are left with little choice but to agree if they want to continue buying from iTunes,” Hypebot wrote.

We've left messages with Apple and Google to comment on any of this, and we'll update this post depending on the response.

And for the record, there is an option to email the Apple privacy policy from the phone to a computer for reading, and it's also available here, so you don't necessarily need to duplicate Cameron's feat by reading it all on your phone.

Update to iTunes comes with privacy fibs

A few days ago I reported that from now on, to get into the iPhone App store you must allow Apple to share your phone or tablet device fingerprints and detailed, dynamic location information with anyone it pleases.  No chance to vet the purposes for which your location data is being used.  No way to know who it is going to. 

As incredible as it sounds in 2010, no user control.  Not even  transparency.  Just one thing is for sure.  If privacy isn't dead, Apple is now amongst those trying to bury it alive.

Then today, just when I thought Apple had gone as far as it could go in this particular direction, a new version of iTunes wanted to install itself on my laptop.  What do you know?  It had a new privacy policy too… 

The new iTunes policy was snappier than the iPhone policy – it came to the point – sort of – in the 5th paragraph rather than the 37th page!

5. iTunes Store and other Services.  This software enables access to Apple's iTunes Store which offers downloads of music for sale and other services (collectively and individually, “Services”). Use of the Services requires Internet access and use of certain Services requires you to accept additional terms of service which will be presented to you before you can use such Services.

By using this software in connection with an iTunes Store account, you agree to the latest iTunes Store Terms of Service, which you may access and review from the home page of the iTunes Store.

I shuddered.  Mind bend!  A level of indirection in a privacy policy! 

Imagine:  “Our privacy policy is that you need to read another privacy policy.”  This makes it much more likely that people will figure out what they're getting into, don't you think?  Besides, it is a really novel application of the proposition that all problems in computer science can be solved by another level of indirection!  Bravo!

But then – the coup de grace.  The privacy policy to which Apple redirects you is… are you ready… the same one we came across a few days ago at the App Store!  So once again you need to get to the equivalent of page 37 of 45 to read:

Collection and Use of Non-Personal Information

We also collect non-personal information – data in a form that does not permit direct association with any specific individual. We may collect, use, transfer, and disclose non-personal information for any purpose. The following are some examples of non-personal information that we collect and how we may use it:

  • We may collect information such as occupation, language, zip code, area code, unique device identifier, location, and the time zone where an Apple product is used so that we can better understand customer behavior and improve our products, services, and advertising.

The mind bogggggles.  What does downloading a song have to do with giving away your location???

Some may remember my surprise that the Lords of The iPhone would call its unique device identifier – and its location – “non-personal data”.  Non-personal implies there is no strong relationship to the person who is using it.  I wrote:

The irony here is a bit fantastic.  I was, after all, using an “iPhone”.   I assume Apple’s lawyers are aware there is an ”I” in the word “iPhone”.  We’re not talking here about a piece of shared communal property that might be picked up by anyone in the village.  An iPhone is carried around by its owner.  If a link is established between the owner’s natural identity and the device (as Google’s databases have done), its “unique device identifier” becomes a digital fingerprint for the person using it. 

Anybody who thinks about identity understands that a “personal device” is associated with (even an extension of) the person who uses it.  But most people – including technical people – don't give these matters the slightest thought.  

A parade of tech companies have figured out how to use peoples’ ignorance about digital identity to get away with practices that let them track what we do from morning to night in the physical world.  But of course, they never track people, they only track their personal devices!  Those unruly devices really have a mind of their own – you definitely need central databases to keep tabs on where they're going.

I was therefore really happy to read some of  Google CEO Eric Schmidt’s recent speech to the American Society of News Editors.  Talking about mobility he made a number of statements that begin to explain the ABCs of what mobile devices are about:

Google is making the Android phone, we have the Kindle, of course, and we have the iPad. Each of these form factors with the tablet represent in many ways your future….: they’re personal. They’re personal in a really fundamental way. They know who you are. So imagine that the next version of a news reader will not only know who you are, but it’ll know what you’ve read…and it’ll be more interactive. And it’ll have more video. And it’ll be more real-time. Because of this principle of “now.”

It is good to see Eric sharing the actual truth about personal devices with a group of key influencers.  This stands in stark contrast to the silly fibs about phones and laptops being non-personal that are being handed down in the iTunes Store, the iPhone App Store, and in the “Refresher FAQ” Fantasyland Google created in response to its Street View WiFi shenanigans. 

As the personal phone evolves it will become increasingly obvious  that groups within some of our best tech companies have built businesses based on consciously crafted privacy fibs.  I'm amazed at the short-sightedness involved:  folks, we're talking about a “BP moment”.  History teaches us that “There is no vice that doth so cover a man with shame as to be found false and perfidious.” [Francis Bacon]  And statements that your personal device doesn't identify you and that location is not personal information are precisely “false and perfidious.”