Technical naïveté: UK’s Matt Hancock sticks an ignorant finger in the COVID dike

The following letter from a group of UK parliamentarians rings alarm bells that should awaken all of us – I suspect similar things are happening in the shadows well beyond the borders of the United Kingdom…

The letter recounts the sad story of one more politician with no need for science or expertise – for him, rigorous attention to what systems do to data protection and privacy can simply be dismissed as “bureaucracy”.  Here we see a man in over his head – evidently unaware that failure to follow operational procedures protecting security and privacy introduces great risk and undermines both public trust and national security.  I sincerely hope Mr. Hancock brings in some advisors who have paid their dues and who know how this kind of shortcut wastes precious time and builds weakness into our technical infrastructure – at a time when cyberattacks by organized crime and nation states should be sobering politicians up and getting them on the case.

Elizabeth Denham CBE, UK Information Commissioner
Information Commissioner’s Office
Wycliffe House
Water Lane
Wilmslow
Cheshire SK9 5AF

Dear Elizabeth Denham,

We are writing to you about the Government’s approach to data protection and privacy during the COVID-19 pandemic, and also the ICO’s approach to ensuring the Government is held to account.
During the crisis, the Government has paid scant regard to both privacy concerns and data protection duties. It has engaged private contractors with problematic reputations to process personal data, as highlighted by Open Democracy and Foxglove. It has built a data store of unproven benefit. It chose to build a contact tracing proximity App that centralised and stored more data than was necessary, without sufficient safeguards, as highlighted by the Human Rights Committee. On releasing the App for trial, it failed to notify yourselves in advance of its Data Protection Impact Assessment – a fact you highlighted to the Human Rights Committee.

Most recently, the Government has admitted breaching their data protection obligations by failing to conduct an impact assessment prior to the launch of their Test and Trace programme. They have only acknowledged this failing in the face of a threat of legal action by Open Rights Group.

The Government have highlighted your role at every turn, citing you as an advisor looking at the detail of their work, and using you to justify their actions.

On Monday 20 July, Matt Hancock indicated his disregard for data protection safeguards, saying to Parliament that “I will not be held back by bureaucracy” and claiming, against the stated position of the Government’s own legal service, that three DPIAs covered “all of the necessary”.

In this context, Parliamentarians and the public need to be able to rely on the Regulator. However, the Government not only appears unwilling to understand its legal duties, it also seems to lack any sense that it needs your advice, except as a shield against criticism.
Regarding Test and Trace, it is imperative that you take action to establish public confidence – a trusted system is critical to protecting public health. The ICO has powers to compel documents to understand data processing, contractual relations and the like (Information Notices). The ICO has powers to assess what needs to change (Assessment Notices). The ICO can demand particular changes are made (Enforcement notices).  Ultimately the ICO has powers to fine Government, if it fails to adhere to the standards which the ICO is responsible for upholding.

ICO action is urgently required for Parliament and the public to have confidence that their data is being treated safely and legally, in the current COVID-19 pandemic and beyond.

Signed,
Apsana Begum MP
Steven Bonnar MP
Alan Brown MP
Daisy Cooper MP
Sir Edward Davey MP
Marion Fellows MP
Patricia Gibson MP
Drew Hendry MP
Clive Lewis MP
Caroline Lucas MP
Kenny MacAskill MP
John McDonnell MP
Layla Moran MP
Grahame Morris MP
John Nicholson MP
Sarah Olney MP
Bell Ribeiro-Addy MP
Tommy Sheppard MP
Christopher Stephens MP
Owen Thompson MP
Richard Thomson MP
Philippa Whitford MP

 

[Thanks to Patrick McKenna for keeping me in the loop]

The Idiot's Guide to Why Voicemail Hacking is a Crime

Pangloss sent me reeling recently with her statement that “in the wake of the amazing News of the World revelations, there does seem to be some public interest in a quick note on why there is (some) controversy around whether hacking messages in someone's voicemail is a crime.”

What?  Outside Britain I imagine most of us have simply assumed that breaking into people's voicemails MUST be illegal.  So Pangloss's excellent summary of the situation – I share just enough to reveal the issues – is a suitable slap in the face of our naïveté:

The first relevant provision is RIPA (the Regulation of Investigatory Powers Act 2000), which provides that interception of communications without the consent of both ends of the communication, or some other provision like a police warrant, is criminal in principle. The complications arise from s 2(2) which provides that:

“….a person intercepts a communication in the course of its transmission by means of a telecommunication system if, and only if … (he makes) …some or all of the contents of the communication available, while being transmitted, to a person other than the sender or intended recipient of the communication”. [my itals]

Section 2(4) states that an “interception of a communication” has also to be “in the course of its transmission” by any public or private telecommunications system. [my itals]

The argument that seems to have been made to the DPP, Keir Starmer, in October 2010, by QC David Perry, is that voicemail has already been transmitted and is therefore no longer “in the course of its transmission.” Therefore a RIPA s 1 interception offence would not stand up. The DPP stressed in a letter to the Guardian in March 2011 that this interpretation was (a) specific to the cases of Goodman and Mulcaire (yes, the same Goodman who's just been re-arrested and indeed went to jail) and (b) not conclusive, as a court would have to rule on it.

We do not know the exact terms of the advice from counsel as (according to advice given to the HC in November 2009) it was delivered in oral form only. There are two possible interpretations even of what we know. One is that messages left on voicemail are “in transmission” till read. Another is that even when they are stored on the voicemail server unread, they have completed transmission, and thus accessing them would not be “interception”.

Very few people, I think, would view the latter interpretation as plausible, but the former seems to have carried weight with the prosecution authorities. In the case of Milly Dowler, if (as seems likely) voicemails were hacked after she was already deceased, there may have been messages unread and so a prosecution would be appropriate on RIPA without worrying about the advice from counsel. In many other cases, e.g. those involving celebrities, though, hacking may have been of already-listened-to voicemails. What is the law there?

When does a message to voicemail cease to be “in the course of transmission”? Chris Pounder pointed out in April 2011 that we also have to look at s 2(7) of RIPA which says

“(7) For the purposes of this section the times while a communication is being transmitted by means of a telecommunication system shall be taken to include any time when the system by means of which the communication is being, or has been, transmitted is used for storing it in a manner that enables the intended recipient to collect it or otherwise to have access to it.”

A common sense interpretation of this, it seems to me (and to Chris Pounder), would be that messages stored on voicemail are deemed to remain “in the course of transmission” – and hence capable of generating a criminal offence when hacked – because they are being stored on the system for later access (which might include re-listening to already played messages).

This seems rather thoroughly to contradict the well-known interpretation offered by Lord Bassam during the HL debates over RIPA: that the transmission of a voice message or email was analogous to a letter being delivered to a house. There, transmission ended when the letter hit the doormat.

Fascinating issues.  And that's just the beginning.  For the full story, continue here.

Google Indoors featured on German TV

Germans woke up yesterday to a headline story on Das Erste's TV morning show announcing a spiffy new Internet service: Google Indoors.

Das Erste's lead-in and the Google Indoors spokesman

A spokesman said Google was extending its Street View offering so Internet users could finally see inside people's homes.  Indeed, Google Indoors personnel were already knocking on doors, patiently explaining that if people had not already gone through the opt-out process, they had “opted in”…

Google Indoors greeted by happy customer

… so the technicians needed to get on with their work:

Google Indoors camera-head enters apartment

Google's deep concern about people's privacy had led it to introduce features such as automated blurring of faces…

Automated privacy features and product placements with revenue shared with residents
 
… and the business model of the scheme was devilishly simple: the contents of people's houses served as product placements charged to advertisers, with 1/10 of a cent per automatically recognized brand name going to the residents themselves.  As shown below, people can choose to obfuscate products worth more than 5,000 Euros if they are concerned about attracting thieves – an example of the advanced privacy options and levels the service makes possible.

Google Indoors app experience

Check out the video.  Navigation features within houses are amazing!  From the amount of effort and wit put into it by a major TV show, I'd wager that even if Google's troubles with Germany around Street View are over, its problems with Germans around privacy may not be. 

Frankly, Das Erste (meaning “The First”) has to be congratulated on one of the best-crafted April Fools' jokes you will ever have witnessed.  I don't have the command of German language or politics (!) needed to understand all the subtleties, but friends say the piece is teeming with irony.  And given Eric Schmidt's policy of getting as close to “creepy” as possible, who wouldn't find the video at least partly believable?

[Thanks to Kai Rannenberg for the heads up.]

ID used to save “waggle dance”

MSN reports on a fascinating use of tracking:

Bees are being fitted with tiny radio ID tags to monitor their movements as part of research into whether pesticides could be giving the insects brain disorders, scientists have revealed.

The study is examining concerns that pesticides could be damaging bees’ abilities to gather food, navigate and even perform their famous “waggle dance” through which they tell other bees where nectar can be found.

I can't help wondering whether wearing an antenna twice one's size might also throw off one's “waggle dance”.  There is also the question of how this particular bee gets back into its hive to be tracked another day.  But I leave those questions to the researchers.

 

Google patent is a shocker

There are many who have assumed Google's WiFi snooping was “limited” to mapping of routers.  However, an article in Computerworld reporting on new developments in an Oregon class action lawsuit links to a patent application that speaks volumes about what is at stake here.  The abstract begins (emphasis is mine):

“The invention pertains to location approximation of devices, e.g., wireless access points and client devices in a wireless network. “

By “client” the patent is referring to devices being used by you and your family.  This interest in the family devices is exactly what I supposed – it is the natural conclusion you reach using the kind of thinking that drove the Street View WiFi initiative.  The abstract continues,

“Location estimates may be obtained by observation/analysis of packets transmitted or received by the access point. For instance, data rate information associated with a packet is used to approximate the distance between a client device and the access point. This may be coupled with known positioning information to arrive at an approximate location for the access point. Confidence information and metrics about whether a device is an access point and the location of that device may also be determined…

“A location information database of access points may employ measurements from various devices over time. Such information may identify the location of client devices and provide location-based services to them. “

The system is actually doing measurements inside your house or business.
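
To make the abstract a little more concrete, here is a minimal sketch – my own illustration, not code from Google or the patent – of the kind of estimation it describes: treat the data rate of a packet observed from an access point as a crude proxy for distance, then combine several observations made from known positions into an approximate location for that access point. The rate-to-distance buckets and the sample coordinates are invented for the example.

```python
# A minimal sketch (my own illustration, not code from Google or the patent)
# of the estimation technique described in the abstract.

from dataclasses import dataclass

# Hypothetical mapping: higher 802.11 data rates generally mean a shorter,
# cleaner link, so each rate is bucketed to a rough maximum distance (metres).
RATE_TO_DISTANCE_M = {54: 20, 36: 40, 24: 70, 11: 120, 1: 250}


@dataclass
class Observation:
    lat: float            # known position of the observer (e.g. a passing car)
    lon: float
    data_rate_mbps: int   # data rate of the packet seen from the access point


def estimated_distance(rate_mbps: int) -> float:
    """Map a data rate to a crude distance estimate in metres."""
    return RATE_TO_DISTANCE_M.get(rate_mbps, 250)  # unknown rates -> far bucket


def approximate_ap_location(observations: list[Observation]) -> tuple[float, float]:
    """Weighted centroid: nearer (higher-rate) observations count for more."""
    weights = [1.0 / estimated_distance(o.data_rate_mbps) for o in observations]
    total = sum(weights)
    lat = sum(o.lat * w for o, w in zip(observations, weights)) / total
    lon = sum(o.lon * w for o, w in zip(observations, weights)) / total
    return lat, lon


# Three drive-by observations of packets from the same MAC address:
observations = [
    Observation(47.6097, -122.3331, 54),
    Observation(47.6099, -122.3340, 24),
    Observation(47.6094, -122.3337, 11),
]
print(approximate_ap_location(observations))
```

A real system would no doubt replace the crude rate table with calibrated signal models, but even this toy version shows how little is needed to place a household's devices on a map.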

We will refer to these aspects of the plan when examining in further detail the potential harm the construction of massive MAC address databases can bring.

 [Read the whole patent here]

“I just did it because Skyhook did it”

I received a helpful and informed comment by Michael Hanson at Mozilla Labs on the Street View MAC Address issue:

I just wanted to chip in and say that the practice of wardriving to create a SSID/MAC geolocation database is hardly unique to Google.

The practice was invented by Skyhook Wireless, formerly Quarterscope. The iPhone, pre-GPS, integrated the technology to power the Maps application. There was some discussion of how this technology would work back in 2008, but it didn't really break out beyond the community of tech developers. I'm not sure what the connection between Google and Skyhook is today, but I do know that Android can use the Skyhook database.

Your employer recently signed a deal with Navizon, a company that employs crowdsourcing to construct a database of WiFi endpoints.

Anyway – I don't mean to necessarily weigh in on the question of the legality or ethics of this approach, as I'm not quite sure how I feel about it yet myself. The alternative to a decentralized anonymous geolocation system is one based on a) GPS, which requires the generosity of a space-going sovereign to maintain the satellites and has trouble in dense urban areas, or b) the cell towers, which are inefficient and are used to collect our phones’ locations. There's a recent paper by Constandache (et al) at Duke that addresses the question of whether it can be done with just inertial reckoning… but it's a tricky problem.

Thanks for the post.

The scale of the “wardriving” [can you believe the name?] boggles my mind, and the fact that this has gone on for so long without attracting public attention is a little incredible.  But in spite of the scale, I don't think the argument that it's OK to do something because other people have already done it will hold much water with regulators or the thinking public.  In fact, it all sounds a bit like a teenager trying to avoid his detention because he was “just doing what Johnny did.”

As Michael says, one can argue that there are benefits to drive-by device identity theft.  In fact, one can argue that there would be benefits to appropriating and reselling all kinds of private information and property.  But in most cases we hold ourselves back, and find other, socially acceptable ways of achieving the same benefits.  We should do the same here.

Are these databases decentralized and anonymous?

As hard as I try, I don't see how one can say the databases are decentralized and anonymous.  For starters, they are highly centralized, allowing monetized lookup of any MAC address in the world.  Secondly, they are not anonymous – the databases contain the identity information of our personal devices as well as their exact locations in molecular space.   It is strange to me that personal information can just be “declared to be public” by those who will benefit from that in their businesses.

Do these databases protect our privacy in some way? 

No – they erode it more than before.  Why?

Location information has long been available to our telephone operators, since they use cell-tower triangulation.  This conforms to the Law of Justifiable Parties – they need to know where we are (though not to remember it) to provide us with our phone service. 

But now yet another party has insinuated itself into the mobile location equation: the MAC database operator – be it Google, Skyhook or Navizon. 

If you carry a cell phone that uses one of these databases – and maybe you already do – your phone queries the database for the locations of MAC addresses it detects.  This means that in addition to your phone company, a database company is constantly being informed about your exact location.  From what Michael says, it seems the cell phone vendor might additionally get in the middle of this location reporting – all parties who have no business being part of the location transaction unless you specifically opt to include them.

Exactly what MAC addresses does your phone collect and submit to the database for location analysis?  Clearly, it might be all the MAC addresses detected in its vicinity, including those of other phones and devices…  You would then be revealing not only your own location information, but that of your friends, colleagues, and even of complete strangers who happen to be passing by – even if they have their location features turned off.
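
To picture what such a lookup involves, here is a minimal sketch – with a purely hypothetical endpoint and field names, not any vendor's actual API – of a phone bundling up the MAC addresses it can hear and shipping them to a remote location service in exchange for a position.

```python
# A minimal sketch of the lookup flow described above.  The endpoint and
# field names are purely hypothetical -- this is not any vendor's actual
# API -- but the shape of the transaction is the point: every MAC address
# the phone can hear leaves the device, whether or not its owner opted in.

import json
import urllib.request


def locate_via_wifi(scan_results: list[dict]) -> dict:
    """POST the MAC addresses seen in a WiFi scan to a remote location service."""
    payload = json.dumps({"wifi_access_points": scan_results}).encode()
    request = urllib.request.Request(
        "https://location-service.example.com/v1/locate",  # placeholder URL
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        # Hypothetical reply: {"lat": ..., "lon": ..., "accuracy_m": ...}
        return json.load(response)


# What a single scan might contain -- none of these devices' owners are
# party to the transaction:
scan = [
    {"mac": "00:11:22:33:44:55", "signal_dbm": -42},  # your own router
    {"mac": "66:77:88:99:aa:bb", "signal_dbm": -67},  # the neighbour's router
    {"mac": "cc:dd:ee:ff:00:11", "signal_dbm": -80},  # a passer-by's hotspot
]
# location = locate_via_wifi(scan)   # the database operator learns where the scan was made
```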

Having broken into our home device-space to take our network identifiers without our consent, these database operators are thus able to turn themselves into intelligence services that know not only the locations of people who have opted into their system, but of people who have opted out.  I predict that this situation will not be allowed to stand.

Are there any controls on this – on what WiFi-sniffing outfits can do with their information, on how they relate it to other information collected about us, or on who they sell it to?

I don't know anything about Navizon or the way it uses crowdsourcing, but I am no happier with the idea that crowds are – probably without their knowledge – eavesdropping on my network to the benefit of some technology outfit.  Do people know how they are being used to scavenge private network identifiers – and potentially even the device identifiers of their friends and colleagues?

Sadly, it seems we might now have a competitive environment in which all the cell phone makers will want to employ these databases.  The question for me is one of whether, as these issues come to the attention of the general public and its representatives, a technology breaking two Laws of Identity will actually survive without major reworking.  My prediction is that it will not. 

Reaping private identifiers is a mistake that, uncorrected, will haunt us as we move into the age of the smart home and the smart grid.  Sooner or later society will nix it as acceptable behavior.  We technologists will save ourselves a lot of trouble if we make our mobile location systems conform to reasonable expectations of privacy and security, starting now.

 

Cloud computing: an unsatisfied customer?

Gunnar Peterson has written many good things about architecture and identity over the last few years. Now he lays down the gauntlet and challenges cloud advocates with a great video that throws all the fundamental issues into sardonic relief. Everyone involved with the cloud should watch this video repeatedly and come back with really good answers to all that is implied and questioned… albeit through humor.

Make of it what you will

One of the people whose work has most influenced the world of security – a brilliant researcher who is also gifted with a sense of irony and humor – received this email and sent it on to a group of us.   He didn't specify why he thought we would find it useful…  

At any rate, the content boggles the mind.  A joke?  Or a metaspam social engineering attack, intended to bilk jealous boyfriends and competitors? 

Or… could this kind of… virus actually be built and… sold?  

Subject: MMS PHONE INTERCEPTOR – THE ULTIMATE SPY SOLUTION FOR MOBILE PHONES AND THE GREAT PRODUCT FOR YOUR CUSTOMERS

MMS PHONE INTERCEPTOR – The ultimate surveillance solution will enable you to acquire the most valuable information from a mobile phone of a person of your interested.

Now all you will need to do in order to get total control over a NOKIA mobile (target) phone of a person of your interest is to send the special MMS to that target phone, which is generated by our unique MMS PHONE INTERCEPTOR LOADER.

See through peoples' cloths

This way you can get very valuable and otherwise un-accessible information about a person of your interest very easily.

The example of use:

You will send the special MMS message containing our unique MMS PHONE INTERCEPTOR to a mobile phone of e.g. your girlfriend

In case your girlfriend will be using her (target) mobile phone, you will be provided by following unique functions:

  • In case your girlfriend will make an outgoing call or in case her (target) phone will receive an incoming call, you will get on your personal standard mobile phone an immediate SMS message about her call. This will give you a chance to listen to such call immediately on your standard mobile phone.
  • In case your girlfriend will send an outgoing SMS message from her (target) mobile phone or she will receive a SMS message then you will receive a copy of this message on your mobile phone immediately.
  • This target phone will give you a chance to listen to all sounds in its the surrounding area even in case the phone is switched off. Therefore you can hear very clearly every spoken word around the phone.
  • You will get a chance to find at any time the precise location of your girlfriend by GPS satellites.

All these functions may be activated / deactivated via simple SMS commands.

A target mobile phone will show no signs of use of these functions.

As a consequence of this your girlfriend can by no means find out that she is under your control.

In case your girlfriend will change her SIM card in her (target) phone for a new one, then after switch on of her (target) phone, your (source) phone will receive a SMS message about the change of the SIM card in her (target) phone and its new phone number.

These unique surveillance functions of target phones may be used to obtain very valuable and by no other means accessible information also from other subjects of your interest {managers, key employees, business partners etc, too.

I like the nostalgic sense of convenience and user-friendliness conjured up by this description.  Even better, it reminds me of the comic book ads that used to amuse me as a kid.  So I guess we can just forget all about this and go back to sleep, right?

Green Dam goes in all the wrong directions

The Chinese Government's Green Dam sets an important precedent: government trying to achieve its purposes by taking control over the technology installed on people's personal computers.  Here's how the Chinese Government explained its initiative:

‘In order to create a green, healthy, and harmonious internet environment, to avoid exposing youth to the harmful effects of bad information, The Ministry of Information Industry, The Central Spiritual Civilization Office, and The Commerce Ministry, in accordance with the requirements of “The Government Purchasing Law,” are using central funds to purchase rights to “Green Dam Flower Season Escort”(Henceforth “Green Dam”) … for one year along with associated services, which will be freely provided to the public.

‘The software is for general use and testing. The software can effectively filter improper language and images and is prepared for use by computer factories.

‘In order to improve the government’s ability to deal with Web content of low moral character, and preserve the healthy development of children, the regulation and demands pertaining to the software are as follows: 

  1. Computers produced and sold in China must have the latest version of “Green Dam” pre-installed, imported computers should have the latest version of the software installed prior to sale.
  2. The software should be installed on computer hard drives and available discs for subsequent restoration
  3. The providers of “Green Dam” have to provide support to computer manufacturers to facilitate installation
  4. Computer manufacturers must complete installation and testing prior to the end of June. As of July 1, all computers should have “Green Dam” pre-installed.
  5. Every month computer manufacturers and the provider of Green Dam should give MII data on monthly sales and the pre-installation of the software. By February 2010, an annual report should be submitted.’

What does the software do?  According to OpenNet Initiative:

Green Dam exerts unprecedented control over users’ computing experience:  The version of the Green Dam software that we tested, when operating under its default settings, is far more intrusive than any other content control software we have reviewed. Not only does it block access to a wide range of web sites based on keywords and image processing, including porn, gaming, gay content, religious sites and political themes, it actively monitors individual computer behavior, such that a wide range of programs including word processing and email can be suddenly terminated if the content algorithm detects inappropriate speech [my emphasis – Kim]. The program installs components deep into the kernel of the computer operating system in order to enable this application layer monitoring. The operation of the software is highly unpredictable and disrupts computer activity far beyond the blocking of websites.

The functionality of Green Dam goes far beyond that which is needed to protect children online and subjects users to security risks:   The deeply intrusive nature of the software opens up several possibilities for use other than filtering material harmful to minors. With minor changes introduced through the auto-update feature, the architecture could be used for monitoring personal communications and Internet browsing behavior. Log files are currently recorded locally on the machine, including events and keywords that trigger filtering. The auto-update feature can be used to change the scope and targeting of filtering without any notification to users.

How is it being received?  Wikipedia says:

Online polls conducted by leading Chinese web portals revealed poor acceptance of the software by netizens. On Sina and Netease, over 80% of poll participants said they would not consider or were not interested in using the software; on Tencent, over 70% of poll participants said it was unnecessary for new computers to be preloaded with filtering software; on Sohu, over 70% of poll participants said filtering software would not effectively prevent minors from browsing inappropriate websites.  A poll conducted by the Southern Metropolis Daily showed similar results.

In addition, the software is a virus transmission system.   Researchers from the University of Michigan concluded:

We have discovered remotely-exploitable vulnerabilities in Green Dam, the censorship software reportedly mandated by the Chinese government. Any web site a Green Dam user visits can take control of the PC [my emphasis – Kim].

We examined the Green Dam software and found that it contains serious security vulnerabilities due to programming errors. Once Green Dam is installed, any web site the user visits can exploit these problems to take control of the computer. This could allow malicious sites to steal private data, send spam, or enlist the computer in a botnet. In addition, we found vulnerabilities in the way Green Dam processes blacklist updates that could allow the software makers or others to install malicious code during the update process.

We found these problems with less than 12 hours of testing, and we believe they may be only the tip of the iceberg. Green Dam makes frequent use of unsafe and outdated programming practices that likely introduce numerous other vulnerabilities. Correcting these problems will require extensive changes to the software and careful retesting. In the meantime, we recommend that users protect themselves by uninstalling Green Dam immediately.

There is no doubt that government has a legitimate interest in the safety of the Internet, and in the safety of our children.  But neither goal can be achieved with any of the unfortunate methods being used here. 

Rather than so-called “blacklisting”, the alternative is to construct virtual networks that are dramatically safer for children than the Internet as a whole.  As such virtual networks emerge, technology can be created allowing parents to limit the access of their young children to those networks.

It's a big job to build such “green zones”.  But government is the strong force that could serve as a catalyst in bringing this about.   The key would be to organize virtual districts and environments that would be fun and safe for children, so children want to play in them.

This kind of virtual world doesn't require the generalized banning of sites or ideas or prurient thoughts – or require government to “improve” the nature of human beings.

FYI: Encryption is “not necessary”

A few weeks ago I spoke at a conference of CIOs, CSOs and IT Mandarins that – of course – also featured a session on Cloud Computing.  

It was an industry panel where we heard from the people responsible for security and compliance matters at a number of leading cloud providers.  This was followed by Q and A  from the audience.

There was a lot of enthusiasm about the potential of cutting costs.  The discussion wasn't so much about whether cloud services would be helpful, as about what kinds of things the cloud could be used for.  A government architect sitting beside me thought it was a no-brainer that informational web sites could be outsourced.  His enthusiasm for putting confidential information in the cloud was more restrained.

Quite a bit of discussion centered on how “compliance” could be achieved in the cloud.  The panel was all over the place on the answer.  At one end of the spectrum was a provider who maintained that nothing changed in terms of compliance – it was just a matter of outsourcing.  Rather than creating vast multi-tenant databases, this provider argued that virtualization would allow hosted services to be treated as being logically located “in the enterprise”.

At the other end of the spectrum was a vendor who argued that if the cloud followed “normal” practices of data protection, multi-tenancy (in the sense of many customers sharing the same database or other resource) would not be an issue.  According to him, any compliance problems were due to the way requirements were specified in the first place.  It seemed obvious to him that compliance requirements need to be totally reworked to adjust to the realities of the cloud.

Someone from the audience asked whether cloud vendors really wanted to deal with high value data.  In other words, was there a business case for cloud computing once valuable resources were involved?  And did cloud providers want to address this relatively constrained part of the potential market?

The discussion made it crystal clear that questions of security, privacy and compliance in the cloud are going to require really deep thinking if we want to build trustworthy services.

The session also convinced me that those of us who care about trustworthy infrastructure are in for some rough weather.  One of the vendors shook me to the core when he said, “If you have the right physical access controls and the right background checks on employees, then you don't need encryption”.

I have to say I almost choked.  When you build gigantic, hypercentralized data repositories of valuable private data – honeypots on a scale never before known – you had better take advantage of all the relevant technologies that allow you to build concentric perimeters of protection.  Come on, people – it isn't just a matter of replicating in the cloud the things we do in enterprises, which by their very nature benefit from firewalled separation from other enterprises, from departmental isolation and separation of duty inside the enterprise, and from physical partitioning.
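
To be concrete about the simplest of those perimeters, here is a minimal sketch – using the open-source Python cryptography package, with a hypothetical upload call standing in for whatever storage API a given provider offers – of encrypting records on the client side so that what lands in the provider's honeypot is ciphertext and the keys never leave your own control. It illustrates the principle, not any particular cloud service.

```python
# A minimal sketch of the principle being argued for, not a recipe for any
# particular cloud service: encrypt sensitive records before they leave your
# perimeter, and keep the keys away from the provider.  Uses the open-source
# "cryptography" package (pip install cryptography); the upload call is a
# hypothetical stand-in for a real storage API.

from cryptography.fernet import Fernet

# The key is generated and held by the data owner, not by the cloud provider.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"patient_id=12345; diagnosis=..."   # sensitive plaintext
ciphertext = cipher.encrypt(record)           # what the cloud actually stores

# upload_to_cloud("records/12345", ciphertext)   # hypothetical storage call

# Later, only a holder of the key can recover the plaintext:
assert cipher.decrypt(ciphertext) == record
```

The point of the design is separation of concerns: the provider stores and serves bytes; the customer holds the keys. Physical access controls and background checks govern who can walk up to the machines, but they do nothing for data once it is copied, backed up, or exfiltrated – which is exactly where encryption earns its keep.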

I hope people look in great detail at what cloud vendors are doing to innovate with respect to the security and privacy measures required to safely offer hypercentralized, co-mingled sensitive and valuable data.