Gov2.0 and Facebook ‘Like’ Buttons

I couldn't agree more with the points made by identity architect James Brown in a very disturbing piece he has posted at The Other James Brown.

James explains how the omnipresent Facebook ‘Like’ widget works as a tracking mechanism: if you are a Facebook subscriber, then whenever you open a page showing the widget, your visit is reported to Facebook.

You don't have to do anything whatsoever – or click the widget – to trigger this report.  It is automatic.  Nor are we talking here about anonymized information or simple IP address collection.  The report contains your Facebook identity information as well as the URL of the page you are looking at.

If you are familiar with the way advertising beacons operate, your first reaction might be to roll your eyes and yawn.  After all, tracking beacons are all over the place and we've known about them for years.

But until recently, government web sites – or private web sites treating sensitive information of any kind – wouldn't be caught dead using tracking beacons.

What has changed?  Governments want to piggyback on the reach of social networks, and show they embrace technology evolution.  But do they have procedures in place that ensure that the mechanisms they adopt are actually safe?  Probably not, if the growing use of the Facebook ‘Like’ button on these sites is any indication.  I doubt those who inserted the widgets have any idea how the underlying technology works – or the time or background to evaluate it in depth.  The result is a really serious privacy violation.

Governments need to be cautious about embracing tracking technology that betrays the trust citizens put in them.  James gives us a good explanation of the problem with Facebook widgets.  But other equally disturbing threats exist.  For example, should governments be developing iPhone applications when, to use them, citizens must agree that Apple has the right to reveal their phone's identifier and location to anyone for any purpose?

In my view, data protection authorities are going to have to look hard at emerging technologies and develop guidelines on whether government departments can embrace technologies that endanger the privacy of citizens.

Let's turn now to the details of James’ explanation.  He writes:

I am all for Gov2.0.  I think that it can genuinely make a difference and help bring public sector organisations and people closer together and give them new ways of working.  However, with it comes responsibility: the public sector needs to understand what it is signing its users up for.

In my post Insurers use social networking sites to identify risky clients last week I mentioned that NHS Choices was using a Facebook ‘Like’ button on its pages, and that this potentially allows Facebook to track what its users are doing on the site.  I have been reading a couple of posts on ‘Mischa’s ramblings on the interweb’ which unearthed this issue here and here, and have been digging into this a bit further to see for myself.  To be honest, I really did not realise how invasive these social widgets can be.

Many services that government and public sector organisations offer are sensitive and personal. When browsing through public sector web portals I do not expect that other organisations are going to be able to track my visit – especially organisations such as Facebook which I use to interact with friends, family and colleagues.

This issue has now been raised by Tom Watson MP, and the response from the Department of Health is:

“Facebook capturing data from sites like NHS Choices is a result of Facebook’s own system. When users sign up to Facebook they agree Facebook can gather information on their web use. NHS Choices privacy policy, which is on the homepage of the site, makes this clear.”

“We advise that people log out of Facebook properly, not just close the window, to ensure no inadvertent data transfer.”

I think this response is wrong on a number of different levels.  Firstly, at a personal level, when I browse the UK National Health Service web portal to read about health conditions I do not expect them to allow other companies to track that visit.  I don't really care what anybody's privacy policy states; I don't expect the NHS to allow Facebook to track my browsing habits on the NHS web site.

Secondly, I would suggest that the statement “Facebook capturing data from sites like NHS Choices is a result of Facebook’s own system” is wrong.  Facebook being able to capture data from sites like NHS Choices is a result of NHS Choices adding Facebook's functionality to their site.

Finally, I don't believe the advice to “log out of Facebook properly, not just close the window, to ensure no inadvertent data transfer” is technically correct.

(Sorry to non-technical users, but it is about to get a bit techy…)

I created a clean Virtual Machine and installed HTTPWatch so I could see the traffic in my browser when I load an NHS Choices page.  This machine has never been to Facebook, and definitely never logged into it.  When I visit the NHS Choices page on bowel cancer the following call is made to Facebook:

http://www.facebook.com/plugins/like.php?href=http%3A%2F%2Fwww.nhs.uk%2fconditions%2fcancer-of-the-colon-rectum-or-bowel%2fpages%2fintroduction.aspx&layout=button_count&show_faces=true&width=450&action=like&colorscheme=light&height=21

 


So Facebook knows someone has gone to the above page, but does not know who.

 

Now go to Facebook and log in without ticking the ‘Keep logged in’ checkbox, and the following cookie is deposited on my machine, with the following two fields in it (I have added xxxxxxxx to mask my unique ID):

  • datr: s07-TP6GxxxxxxxxkOOWvveg
  • lu: RgfhxpMiJ4xxxxxxxxWqW9lQ

If I now close my browser and go back to Facebook, it does not log me in – but it knows who I am as my email address is pre-filled.

 

Now head back to http://www.nhs.uk/conditions/cancer-of-the-colon-rectum-or-bowel/pages/introduction.aspx and when the Facebook page is contacted, the cookie is sent to them with the data:

  • datr: s07-TP6GxxxxxxxxkOOWvveg
  • lu: RgfhxpMiJ4xxxxxxxxWqW9lQ


 

So even if I am not logged into Facebook, and even if I do not click on the ‘Like’ button, the NHS Choices site is allowing Facebook to track me.
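The browser behavior James observed comes down to how cookies are scoped: a browser attaches cookies based on the *destination* domain of a request, not on the page you are visiting. The sketch below models this with Python's standard-library cookie machinery – it is a simplified illustration, not Facebook's actual code; the cookie values are the masked ones from the walkthrough, and the widget URL is abbreviated:

```python
from http.cookiejar import Cookie, CookieJar
from urllib.request import Request

def facebook_cookie(name, value):
    # Build a persistent cookie scoped to .facebook.com, as a browser
    # would store it after a single Facebook login.
    return Cookie(
        version=0, name=name, value=value, port=None, port_specified=False,
        domain=".facebook.com", domain_specified=True, domain_initial_dot=True,
        path="/", path_specified=True, secure=False, expires=None,
        discard=False, comment=None, comment_url=None, rest={})

jar = CookieJar()
jar.set_cookie(facebook_cookie("datr", "s07-TP6GxxxxxxxxkOOWvveg"))
jar.set_cookie(facebook_cookie("lu", "RgfhxpMiJ4xxxxxxxxWqW9lQ"))

# Loading the NHS page triggers this third-party request for the widget.
# Its destination is facebook.com, so the cookies match and get attached --
# regardless of the fact that the user is browsing nhs.uk.
widget = Request("http://www.facebook.com/plugins/like.php"
                 "?href=http%3A%2F%2Fwww.nhs.uk%2Fconditions%2F...")
jar.add_cookie_header(widget)

# The identifying cookies travel to Facebook, alongside the page URL
# in the href parameter -- no click required.
print(widget.get_header("Cookie"))
```

Note that nothing in this model consults login state: once the cookies exist, every page embedding the widget causes them to be replayed, which is why logging out does not stop the reports.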

Sorry, I don't think that is acceptable.

[Update:  I originally misread James’ posting as saying the “keep me logged in” checkbox on the Facebook login page was a factor in enabling tracking – in other words, that Facebook only used permanent cookies after you ticked that box.  Unfortunately this is not the case.  I've updated my comments in light of this information.

If you have authenticated to Facebook even once, the tracking widget will continue to collect information about you as you surf the web unless you manually delete your Facebook cookies from the browser.  This design is about as invasive of your privacy as you can possibly get…]

 

What harm can possibly come from a MAC address?

If you are new to doing privacy threat analysis, I should explain that to do it, you need to be thoroughly pessimistic.  A privacy threat analysis is in this sense no different from any other security threat analysis.  

In our pessimistic frame of mind we then need to brainstorm from two different vantage points.  The first is the vantage point of the party being attacked.  How can people in various situations potentially be endangered by the new technology?  The second is the vantage point of the attacker.  How can people with different motivations misuse the technology?  This is a long and complicated job, and should be done in great detail by those who propose a technology.  The results should be published and vetted.

I haven't seen such publication or vetting by the proponents of world-wide WiFi packet collection and giant central databases of device identifiers.  Perhaps the Street View team or someone else has such a study at hand – it would be great for them to share it.

In the meantime I'm just going to throw out a few simple initial ideas – things that are pretty obvious – by constructing a few scenarios.

SCENARIO 1:  Collecting MAC Addresses is Legal and Morally Acceptable

In this scenario it is legal and socially acceptable to drive up and down the streets recording people's MAC addresses and other network traffic.

It is also fine for anyone to use a geolocation service to build his own database of MAC addresses and street addresses. 

How could a collector possibly get the software to do this?  No problem.  In this scenario, since the activity is legal and there is a demand, the software is freely available.  In fact it is widely advertised on various Internet sites.

The collector builds his collection in the evenings, when people are at home with their WiFi-enabled phones and computers.  It doesn't take very long to assemble a really detailed map of all the devices used by the people who live in an entire neighborhood – perhaps a rich neighborhood.

Note that it would not matter whether people in the neighborhood have their WiFi encryption turned on or off – the drive by collector would be able to map their devices, since WiFi encryption does not hide the MAC address.
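This is easy to verify from the 802.11 frame format: WPA/WPA2 encrypt only the frame body, while the MAC header – including all address fields – is always transmitted in the clear. A minimal sketch, using a fabricated frame for illustration:

```python
import struct

def wifi_macs(frame: bytes):
    """Pull the three address fields out of an 802.11 MAC header.

    The header (frame control, duration, addresses) is never encrypted,
    so a passive listener can read these even on a WPA2 network.
    """
    addr1, addr2, addr3 = struct.unpack_from("6s6s6s", frame, 4)
    as_mac = lambda raw: ":".join(f"{b:02x}" for b in raw)
    return as_mac(addr1), as_mac(addr2), as_mac(addr3)

# A made-up data frame: 4 bytes of frame control + duration, then the
# receiver, transmitter and BSSID addresses.  Only the (omitted) payload
# after the header would be protected by WiFi encryption.
frame = bytes.fromhex(
    "08010000"        # frame control + duration
    "ffffffffffff"    # addr1: receiver (broadcast here)
    "001122334455"    # addr2: transmitter -- the device's MAC
    "66778899aabb")   # addr3: BSSID

print(wifi_macs(frame)[1])  # the device's MAC, readable in cleartext
```

The transmitter address at a fixed offset is all a drive-by collector needs; no key material is involved at any point.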

SCENARIO 2:  Collector is a Sexual Predator

In Scenario 1, anyone can be “a MAC collector”.  In this scenario, the collector is a sexual predator.

When children pass him in the park, they have their phones and WiFi turned on, and their MAC addresses are discernible by his laptop software.  Normally the MAC addresses would be meaningless random numbers, but the collector has a complete database of which MAC addresses are associated with a given house address.  It is therefore simple for the collection software on his laptop to automatically convert the WiFi packets emitted from the children's phones into the street addresses where the children live, showing the locations on a map.

There is thus no need for the collector to go up to the children and ask them where they live.  And it won't matter that their parents have taught them never to reveal that to a stranger.  Their devices will have revealed it for them.

I can easily understand that some people might have problems with this example simply because so many questionable things have been justified through reference to predators.  That's not a bandwagon I'm trying to get on.

I chose the example not only because I think it's real and exposes a threat, but because it reveals two important things germane to a threat analysis:

  • The motivations people have to abuse the technical mechanisms we put in place are pretty much unlimited. 
  • We need to be able to empathize with people who are vulnerable – like children – rather than taking a “people deserve what they get” attitude.   

Finally, I hope it is obvious I am not arguing that Google is doing anything remotely on a par with this example.  I'm talking about something different: whether we want WiFi snooping to be something our society condones, and what kind of software might come into being if we do.

 

SpyPhone for iPhone

The MedPage Today blog recently wrote about “iPhone Security Risks and How to Protect Your Data — A Must-Read for Medical Professionals.”  The story begins: 

Many healthcare providers feel comfortable with the iPhone because of its fluid operating system, and the extra functionality it offers, in the form of games and a variety of other apps.  This added functionality is missing with more enterprise-based smart phones, such as the Blackberry platform.  However, this added functionality comes with a price, and exposes the iPhone to security risks. 

Nicolas Seriot, a researcher from the Swiss University of Applied Sciences, has found some alarming design flaws in the iPhone operating system that allow rogue apps to access sensitive information on your phone.

MedPage quotes a CNET article where Elinor Mills reports:

Lax security screening at Apple's App Store and a design flaw are putting iPhone users at risk of downloading malicious applications that could steal data and spy on them, a Swiss researcher warns.

Apple's iPhone app review process is inadequate to stop malicious apps from getting distributed to millions of users, according to Nicolas Seriot, a software engineer and scientific collaborator at the Swiss University of Applied Sciences (HEIG-VD). Once they are downloaded, iPhone apps have unfettered access to a wide range of privacy-invasive information about the user's device, location, activities, interests, and friends, he said in an interview Tuesday…

In addition, a sandboxing technique limits access to other applications’ data but leaves exposed data in the iPhone file system, including some personal information, he said.

To make his point, Seriot has created open-source proof-of-concept spyware dubbed “SpyPhone” that can access the 20 most recent Safari searches, YouTube history, and e-mail account parameters like username, e-mail address, host, and login, as well as detailed information on the phone itself that can be used to track users, even when they change devices.

Following the link to Seriot's paper, called iPhone Privacy, here is the abstract:

It is a little known fact that, despite Apple's claims, any applications downloaded from the App Store to a standard iPhone can access a significant quantity of personal data.

This paper explains what data are at risk and how to get them programmatically without the user's knowledge. These data include the phone number, email accounts settings (except passwords), keyboard cache entries, Safari searches and the most recent GPS location.

This paper shows how malicious applications could pass the mandatory App Store review unnoticed and harvest data through officially sanctioned Apple APIs. Some attack scenarios and recommendations are also presented.

 

In light of Seriot's paper, MedPage concludes:

These security risks are substantial for everyday users, but become heightened if your phone contains sensitive data, in the form of patient information, and when your phone is used for patient care.   Over at iMedicalApps.com, we are not fans of medical apps that enable you to input patient data, and there are several out there.  But we also have peers who have patient contact information stored on their phones, patient information in their calendars, or are accessible to their patients via e-mail.  You can even e-prescribe using your iPhone. 

I don't want to even think about e-prescribing using an iPhone right now, thank you.

Anyone who knows anything about security has known all along that the iPhone – like all devices – is vulnerable to some set of attacks.  For them, iPhone Privacy will be surprising not because it reveals possible attacks, but because of how amazingly elementary they are (the paper is a must-read from this point of view).  

On a positive note, the paper might awaken some of those sent into a deep sleep by proselytizers convinced that Apple's App Store censorship program is reasonable because it protects them from rogue applications.

Evidently Apple's App Store staff take their mandate to protect us from people like award-winning Mad Magazine cartoonist Tom Richmond pretty seriously (see Apple bans Nancy Pelosi bobble head).  If their approach to “protecting” the underlying platform has any merit at all, perhaps a few of them could be reassigned to work part time on preventing trivial and obvious hacker exploits.

But I don't personally think a closed platform with a censorship board is either the right approach or one that can possibly work as attackers get more serious (in fact computer science has long known that this approach is baloney).  The real answer will lie in hard, unfashionable and (dare I say it?) expensive R&D into application isolation and related technologies.  I hope that will be the outcome: first, for the sake of building a secure infrastructure; second, because one of my phones is an iPhone and I like to explore downloaded applications too.

[Heads Up: Khaja Ahmed]

Enhanced driver's licences too stupid for their own good

Enhanced driver's licences too smart for their own good appeared in the Toronto Star recently.  It was written by Roch Tassé (coordinator of the International Civil Liberties Monitoring Group) and Stuart Trew (The Council of Canadians’ trade campaigner).

A common refrain coming out of Homeland Security chief Janet Napolitano's visit to Ottawa and Detroit last week was that the Canada-U.S. border is getting thicker and stickier even as Canadian officials work overtime to implement measures that are meant to get us across that border more efficiently and securely.

One of those measures – “enhanced” driver's licences (EDLs), now available in Ontario, Quebec, B.C. and Manitoba – has been rushed into production to meet today's implementation date of the Western Hemisphere Travel Initiative. This unilateral U.S. law requires all travellers entering the United States to show a valid passport or other form of secure identification when crossing the border.

But as privacy and civil liberties groups have been saying for a while, the EDL card poses its own thick and sticky questions that have not been satisfactorily answered by either the federal government, which has jurisdiction over privacy and citizenship matters, or the provincial ministries issuing the new “enhanced” licences.

For example, why introduce a new citizenship document specific to the Canada-U.S. border when the internationally recognized passport will do the trick?

Or, as even the smart-card industry wonders, why include technology used for monitoring the movement of livestock and other commodities in a citizenship document?

More crucially, why ignore calls from Canada's federal and provincial privacy commissioners, as well as civil liberties groups, to put a freeze on “enhanced” licences until they can be adequately debated and assessed by Parliament? It's not as if there's nothing to talk about.

First, the radio frequency identification devices (RFID) that will be used to transmit the personal ID number in your EDL to border officials contain no security or authentication features, cannot be turned off, and are designed to be read at distances of more than 10 metres using inexpensive and commercially available technology.

This creates a significant threat of “surreptitious location tracking,” according to Canada's privacy commissioners. The protective sleeve proposed by several provincial governments is demonstrably unreliable at blocking the RFID signal and constitutes an unacceptable privacy risk.

Facial recognition screening of all card applicants, as proposed in Ontario and B.C. to reduce fraud, has a shaky success rate at best, creating a significant and unacceptable risk of false positive matches, which could increase wait times as even more people are pulled aside for questioning.

Recently, a journalist for La Presse demonstrated just how insecure Quebec's EDLs are by successfully reading the number of a colleague's card and cloning that card with a different but similar photograph. It might explain why, when announcing Quebec's EDL card this year, Premier Jean Charest could point only to hypothetical benefits.

Furthermore, the range of personal information collected through EDL programs, once shared with U.S. authorities, can be circulated excessively among a whole range of agencies under the authority of the Department of Homeland Security. It's important to note that Canada's privacy laws do not hold once that information crosses the border.

So while the border may appear to be getting thicker for some, it is becoming increasingly permeable to flows of personal information on Canadian citizens to U.S. security and immigration databases, where it can be used to mine for what the DHS considers risky behaviour.

Some provincial governments have taken these concerns seriously. Based on the high costs involved with a new identity document, the lack of clear benefits to travellers, the significant privacy risks, and the lack of prior public consultation, the Saskatchewan government suspended its own proposed EDL project this year. The New Brunswick and Prince Edward Island governments, citing excessive costs, have also abandoned theirs.

The Harper government owes it to Canadians to freeze the EDL program now and hold a parliamentary hearing into the new technology, its alleged benefits and the stated privacy risks.

Napolitano has repeatedly said that from now on Canadians must treat the U.S. border as any other international checkpoint. It might feel like an inconvenience for some who are used to crossing into the U.S. without a passport, but the costs – real and in terms of privacy – of these provincial EDL projects will be much higher.

My main problem with this article is the title, which should have been “Enhanced driver's licences too stupid for their own good”.

That's because we have the technology to design smart driver's licences and passports so they have NONE of the problems described – but so far, our governments don't do it.

I expect it is we as technologists who are largely responsible for this.  We haven&#39t found the ways of communicating with governments, and more to the point, with the public and its advocates, about the fact that these problems can be eliminated. 

From what I have been told, the new German identity card represents a real step forward in this regard.  I promise to look into the details and write about them.

Kim Cameron: secret RIAA agent?

Dave Kearns cuts me to the polemical quick by tarring me with the smelly brush of the RIAA:

‘Kim has an interesting post today, referencing an article (“What Does Your Credit-Card Company Know About You?” by Charles Duhigg in last week’s New York Times).

‘Kim correctly points out the major fallacies in the thinking of J. P. Martin, a “math-loving executive at Canadian Tire”, who, in 2002, decided to analyze the information his company had collected from credit-card transactions the previous year. For example, Martin notes that “2,220 of 100,000 cardholders who used their credit cards in drinking places missed four payments within the next 12 months.” But that's barely 2% of the total, as Kim points out, and hardly conclusive evidence of anything.

‘I'm right with Cameron for most of his essay, up until the end, when he notes:

When we talk about the need to prevent correlation handles and assembly of information across contexts (for example, in the Laws of Identity and our discussions of anonymity and minimal disclosure technology), we are talking about ways to begin to throw a monkey wrench into an emerging Martinist machine. Mr. Duhigg’s story describes early prototypes of the machinations we see as inevitable should we fail in our bid to create a privacy enhancing identity infrastructure for the digital epoch.

‘Change “privacy enhancing” to “intellectual property protecting” and it could be a quote from an RIAA press release!

‘We should never confuse tools with the bad behavior that can be helped by those tools. Data correlation tools, for example, are vitally necessary for automated personalization services and can be a big help to future services such as Vendor Relationship Management (VRM). After all, it's not Napster that's bad but people who use it to get around copyright laws who are bad. It isn't a cup of coffee that's evil, just people who try to carry one thru airport security. :)

‘It is easier to forbid the tool rather than to police the behavior, but in a democratic society, it's the way we should act.’

I agree that we must influence behaviors as well as develop tools.  And I'm as positive about Vendor Relationship Management as anyone.  But getting concrete, there's a huge gap between the kind of data correlation done at a person's request as part of a relationship (VRM), and the data correlation I described in my post, which is done without a person's consent or knowledge.  As VRM's Saint Searls has said, “Sometimes, I don't want a deep relationship, I just want a cup of coffee”.

I'll come clean with an example.  Not a month ago, I was visiting friends in Canada, and since I had an “extra car”, was nominated to go pick up some new barbells for the kids.

So, off to Canadian Tire to buy a barbell.  Who knows what category they put me in when 100% of my annual consumption consists of barbells?  It had to be right up there with low-grade oil or even a Mega Thruster Exhaust System.  In this case, Dave, there was no R and certainly no VRM: I didn't ask to be profiled by Mr. Martin's reputation machines.

There is nothing about minimal disclosure that says profiles cannot be constructed when people want that.  It simply means that information should only be collected in light of a specific usage, and that usage should be clear to the parties involved (NOT the case with Canadian Tire!).  When there is no legitimate reason for collecting information, people should be able to avoid it.

It all boils down to the matter of people being “in control” of their digital interactions, and of developing technology that makes this both possible and likely.  How can you compare an automated profiling service you can turn on and off with the kind Mr. Martin thinks should rule the world of credit?  The difference between the two is a bit like the difference between a consensual sexual relationship and one based on force.

Returning to the RIAA: in my view, Dave is barking up the wrong metaphor.  The RIAA is NOT producing tools that put people in control of their relationships or property – quite the contrary.  And they'll pay for that.

The brands we buy are “the windows into our souls”

You should read this fascinating piece by Charles Duhigg in last week’s New York Times. A few tidbits to whet the appetite:

‘The exploration into cardholders’ minds hit a breakthrough in 2002, when J. P. Martin, a math-loving executive at Canadian Tire, decided to analyze almost every piece of information his company had collected from credit-card transactions the previous year. Canadian Tire’s stores sold electronics, sporting equipment, kitchen supplies and automotive goods and issued a credit card that could be used almost anywhere. Martin could often see precisely what cardholders were purchasing, and he discovered that the brands we buy are the windows into our souls — or at least into our willingness to make good on our debts…

‘His data indicated, for instance, that people who bought cheap, generic automotive oil were much more likely to miss a credit-card payment than someone who got the expensive, name-brand stuff. People who bought carbon-monoxide monitors for their homes or those little felt pads that stop chair legs from scratching the floor almost never missed payments. Anyone who purchased a chrome-skull car accessory or a “Mega Thruster Exhaust System” was pretty likely to miss paying his bill eventually.

‘Martin’s measurements were so precise that he could tell you the “riskiest” drinking establishment in Canada — Sharx Pool Bar in Montreal, where 47 percent of the patrons who used their Canadian Tire card missed four payments over 12 months. He could also tell you the “safest” products — premium birdseed and a device called a “snow roof rake” that homeowners use to remove high-up snowdrifts so they don’t fall on pedestrians…

‘Why were felt-pad buyers so upstanding? Because they wanted to protect their belongings, be they hardwood floors or credit scores. Why did chrome-skull owners skip out on their debts? “The person who buys a skull for their car, they are like people who go to a bar named Sharx,” Martin told me. “Would you give them a loan?”

So what if there are errors?

Now perhaps I’ve had too much training in science and mathematics, but this type of thinking seems totally neanderthal to me. It belongs in the same category of things we should be protected from as “guilt by association” and “racial profiling”.

For example, the article cites one of Martin’s concrete statistics:

‘A 2002 study of how customers of Canadian Tire were using the company's credit cards found that 2,220 of 100,000 cardholders who used their credit cards in drinking places missed four payments within the next 12 months. By contrast, only 530 of the cardholders who used their credit cards at the dentist missed four payments within the next 12 months.’

We can rephrase the statement to say that 98% of the people who used their credit cards in drinking places did NOT miss the requisite four payments.

Drawing the conclusion that “use of the credit card in a drinking establishment predicts default” is thus an error 98 times out of 100.

Denying people credit on a premise which is wrong 98% of the time seems like one of those things regulators should rush to address, even if the premise reduces risk to the credit card company.
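The arithmetic behind this objection is worth making explicit; the numbers below are the ones quoted from the study above:

```python
flagged = 100_000   # cardholders who used the card in drinking places
missed  = 2_220     # of those, the number who missed four payments

miss_rate = missed / flagged       # 0.0222
dentist_rate = 530 / 100_000       # the "safe" comparison group: 0.0053

print(f"flagged group that defaulted:     {miss_rate:.1%}")        # 2.2%
print(f"flagged group that did NOT:       {1 - miss_rate:.1%}")    # 97.8%
print(f"relative lift over dentist group: {miss_rate / dentist_rate:.1f}x")
```

The relative lift is real – the flagged group defaults at roughly four times the dentist group's rate – which is exactly why such signals tempt risk managers. The objection stands on the absolute numbers: treating the signal as a predictor of default gets it wrong for about 98% of the people it flags.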

But there won’t be enough regulators to go around, since there are thousands of other examples given that are similarly idiotic from the point of view of a society fair to its members. For the article continues,

‘Are cardholders suddenly logging in at 1 in the morning? It might signal sleeplessness due to anxiety. Are they using their cards for groceries? It might mean they are trying to conserve their cash. Have they started using their cards for therapy sessions? Do they call the card company in the middle of the day, when they should be at work? What do they say when a customer-service representative asks how they’re feeling? Are their sighs long or short? Do they respond better to a comforting or bullying tone?

Hmmm.

  • Logging in at 1 in the morning. That’s me. I guess I’m one of the 98% for whom this thesis is wrong… I like to stay up late. Do you think staying up late could explain why Mr. Martin’s self-consciously erroneous theses irk me?
  • Using card to buy groceries? True, I don’t like cash. Does this put me on the road to ruin? Another stupid thesis for Mr. Martin.
  • Therapy sessions? If I read enough theses like those proposed by Martin, I may one day need therapy.  But frankly,  I don’t think Mr. Martin should have the slightest visibility into matters like these.  Canadian Tire meets Freud?
  • Calling in the middle of the day when I should be at work? Grow up, Mr. Martin. There is this thing called flex schedules for the 98% or 99% or 99.9% of us for which your theses continually fail.
  • What I would say if a customer-service representative asked how I was feeling? I would point out, with some vigor, that we do not have a personal relationship and that such a question isn't appropriate. And I certainly would not remain on the line.

Apparently Mr. Martin told Charles Duhigg, “If you show us what you buy, we can tell you who you are, maybe even better than you know yourself.” He then lamented that in the past, “everyone was scared that people will resent companies for knowing too much.”

At best, this is no more than a Luciferian version of the Beatles’ “You are what you eat” – but minus the excessive drug use that can explain why everyone thought this was so deep. The truth is, you are not “what you eat”.

Duhigg argues that in the past, companies stuck to “more traditional methods” of managing risk, like raising interest rates when someone was late paying a bill (imagine – a methodology based on actual delinquency rather than hocus pocus), because they worried that customers would revolt if they found out they were being studied so closely. He then says that after “the meltdown”, Mr. Martin’s methods have gained much more currency.

In fact, customers would revolt because the methodology is not reasonable or fair from the point of view of the vast majority of individuals, being wrong tens or hundreds or thousands of times more often than it is right.

If we weren’t working on digital identity, we could just end this discussion by saying Mr. Martin represents one more reason to introduce regulation into the credit card industry. But unfortunately, his thinking is contagious and symptomatic.

Mining of credit card information is just the tip of a vast and dangerous iceberg we are beginning to encounter in cyberspace. The Internet is currently engineered to facilitate the assembly of ever more information of the kind that so thrills Mr. Martin – data accumulated throughout the lives of our young people that will become progressively more important in limiting their opportunities as more “risk reduction” assumptions – of the Martinist kind that apply to almost no one but affect many – take hold.

When we talk about the need to prevent correlation handles and assembly of information across contexts (for example, in the Laws of Identity and our discussions of anonymity and minimal disclosure technology), we are talking about ways to begin to throw a monkey wrench into an emerging Martinist machine.  Mr. Duhigg's story describes early prototypes of the machinations we see as inevitable should we fail in our bid to create a privacy-enhancing identity infrastructure for the digital epoch.
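One concrete way to deny the machine its correlation handles, in the spirit of minimal disclosure, is to give each relying party a directed, pairwise pseudonym rather than one global identifier. A minimal sketch (the secret and the party names are hypothetical, and a real system would keep the secret in the user's agent):

```python
import hashlib
import hmac

def directed_id(master_secret: bytes, relying_party: str) -> str:
    """Derive a per-site pseudonym: stable for one site,
    uncorrelatable across different sites."""
    return hmac.new(master_secret, relying_party.encode(), hashlib.sha256).hexdigest()

secret = b"user-held master secret"  # hypothetical; never leaves the user's agent

id_at_store = directed_id(secret, "store.example")
id_at_gov = directed_id(secret, "gov.example")

# Same user, but the two sites see unrelated identifiers, so their
# logs cannot be joined on a common handle.
print(id_at_store != id_at_gov)        # the handles differ
print(id_at_store == directed_id(secret, "store.example"))  # yet each is stable
```

The point is not this particular construction but the property it demonstrates: each context gets an identifier that is useless as a join key anywhere else.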

[Thanks to JC Cannon for pointing me to this article.]

FYI: Encryption is “not necessary”

A few weeks ago I spoke at a conference of CIOs, CSOs and IT Mandarins that – of course – also featured a session on Cloud Computing.  

It was an industry panel where we heard from the people responsible for security and compliance matters at a number of leading cloud providers.  This was followed by Q&A from the audience.

There was a lot of enthusiasm about the potential of cutting costs.  The discussion wasn't so much about whether cloud services would be helpful, as about what kinds of things the cloud could be used for.  A government architect sitting beside me thought it was a no-brainer that informational web sites could be outsourced.  His enthusiasm for putting confidential information in the cloud was more restrained.

Quite a bit of discussion centered on how “compliance” could be achieved in the cloud.  The panel was all over the place on the answer.  At one end of the spectrum was a provider who maintained that nothing changed in terms of compliance – it was just a matter of outsourcing.  Rather than creating vast multi-tenant databases, this provider argued that virtualization would allow hosted services to be treated as being logically located “in the enterprise”.

At the other end of the spectrum was a vendor who argued that if the cloud followed “normal” practices of data protection, multi-tenancy (in the sense of many customers sharing the same database or other resource) would not be an issue.  According to him, any compliance problems were due to the way requirements were specified in the first place.  It seemed obvious to him that compliance requirements need to be totally reworked to adjust to the realities of the cloud.

Someone from the audience asked whether cloud vendors really wanted to deal with high value data.  In other words, was there a business case for cloud computing once valuable resources were involved?  And did cloud providers want to address this relatively constrained part of the potential market?

The discussion made it crystal clear that questions of security, privacy and compliance in the cloud are going to require really deep thinking if we want to build trustworthy services.

The session also convinced me that those of us who care about trustworthy infrastructure are in for some rough weather.  One of the vendors shook me to the core when he said, “If you have the right physical access controls and the right background checks on employees, then you don't need encryption”.

I have to say I almost choked.  When you build gigantic, hypercentralized repositories of valuable private data – honeypots on a scale never before known – you had better take advantage of every relevant technology for building concentric perimeters of protection.  Come on, people – it isn't just a matter of replicating in the cloud what we do in enterprises, which by their very nature benefit from firewalled separation from other enterprises, departmental isolation and separation of duty inside the enterprise, and physical partitioning.
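"Concentric perimeters" should include cryptographic separation inside the datastore, not just walls around it. One such layer is per-tenant key derivation, so that a single leaked data key does not open the whole honeypot. A minimal sketch (names are illustrative; a real deployment would hold the master key in an HSM and encrypt records with a proper AEAD cipher):

```python
import hashlib
import hmac

def tenant_key(master_key: bytes, tenant_id: str) -> bytes:
    """Derive a distinct data-encryption key for each tenant
    from a single master key."""
    return hmac.new(master_key, b"tenant:" + tenant_id.encode(), hashlib.sha256).digest()

master = b"illustrative master key - kept in an HSM, never in the database"

k_a = tenant_key(master, "agency-a")
k_b = tenant_key(master, "agency-b")

# Co-mingled storage, cryptographically partitioned tenants:
print(k_a != k_b)                          # keys differ per tenant
print(k_a == tenant_key(master, "agency-a"))  # and re-derive deterministically
```

Even with perfect guards and background checks, this kind of partitioning means an insider or intruder who reaches one tenant's data still faces another perimeter before reaching the rest.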

I hope people look in great detail at what cloud vendors are doing to innovate with respect to the security and privacy measures required to safely offer hypercentralized, co-mingled sensitive and valuable data. 

More news about our identity team

After my last post, it occurred to me that people would probably be interested in knowing about some of the other figures from the identity community who have joined my colleagues and me to work on identity and access – great people with diverse backgrounds who bring new depth to the increasingly important area of identity and access management.

I'm going to break this up across several posts in order to keep things manageable…

Ariel Gordon

Ariel Gordon came to Microsoft recently from Orange / France Telecom.  It's really key for the Identity group at Microsoft to have the best possible relationships with our colleagues in the Telecom sector, and Ariel's 12 years of experience and understanding of Telecom will move our dialog forward tremendously.

Ariel led the creation and deployment of Orange's consumer Identity Management system, focusing his staff on optimizing customer journeys and UX through Identity lifecycles.  The system currently hosts tens of millions of user identities across Europe.

Ariel oversaw marketing work (and the development of business planning) for Identity Management and other Enablers, including User Privacy and API exposition framework.  As a key spokesperson for Orange, he unveiled several of their innovations at Industry Events including their support of OpenID and SAML for Outbound Federation at “DIDW” in Sept 2007, and support of OpenID and LiveID for Inbound Federation at “the European Identity Conference” in April 2008.

Orange played an important role in the Liberty Alliance, and Ariel has a lot to share with us about Liberty's accomplishments.  Listen to Kuppinger Cole's Felix Gaehtgens interview Ariel on YouTube to get a real sense of his passion and accomplishments.

Pete Rowley

Many people around Internet Identity Workshop know Pete Rowley, not only for the work he has done but because he has a coolio rock-star-type web page banner and a real stone fence.

Pete has been working on identity since the mid-’90s. He contributed to the Netscape Directory Server. Later, at Centrify, he worked on connecting heterogeneous systems to Active Directory infrastructure for authentication and policy applications.  Many of us met him at the Identity Gang meetings while he worked for Red Hat. There he founded the FreeIPA (Identity, Policy, Audit) open source project. I remember being impressed by what he was trying to achieve:

“For efficiency, compliance and risk mitigation, organizations need to centrally manage and correlate vital security information including

  • Identity (machine, user, virtual machines, groups, authentication credentials)
  • Policy (configuration settings, access control information)
  • Audit (events, logs, analysis thereof)

“Because of its vital importance and the way it is interrelated, we think identity, policy, and audit information should be open, interoperable, and manageable. Our focus is on making identity, policy, and audit easy to centrally manage for the Linux and Unix world. Of course, we will need to interoperate well with Windows and much more.

Now he's working on evolving the Identity Lifecycle Manager (ILM).

Mark Wahl

Mark Wahl has been well known to identerati ever since the early days of LDAP.  In 1997 he published RFC2251, the famous Lightweight Directory Access Protocol (V3) Specification with Tim Howes and Steve Kille.  Of course it was fundamental to a whole generation of directory technology.

People from the directory world may remember Mark as Senior Directory Architect at Innosoft International, and co-founder and President of Critical Angle.  This was great stuff – his  identity management, directory, PKI, messaging and network middleware systems were deployed at many large enterprises and carriers.

Mark was also a Senior Staff Engineer and Principal Directory Architect at Sun Microsystems, and later developed and taught a one-year course on information assurance and computer security auditing at the University of Texas.

His passion for auditing and risk assessment technologies for the enterprise identity metasystem led him to create a startup called Informed Control.  You get a good feel for his thorough and no-holds-barred commitment by browsing through the site.

Mark is now applying his creativity to evolving the vision, roadmap and architecture for the convergence of identity and security lifecycle management products.

[To be continued…]

The economics of vulnerabilities…

Gunnar Peterson of 1 Raindrop has blogged his Keynote at the recent Quality of Protection conference.  It is a great read – and a defense in depth against the binary “secure / not secure” polarity that characterizes the thinking of those new to security matters. 

His argument riffs on Dan Geer's famous Risk Management is Where the Money Is.  He turns to Warren Buffett as someone who knows something about this kind of thing, writing:

“Of course, saying that you are managing risk and actually managing risk are two different things. Warren Buffett started off his 2007 shareholder letter talking about financial institutions’ ability to deal with the subprime mess in the housing market saying, “You don't know who is swimming naked until the tide goes out.” In our world, we don't know whose systems are running naked, with no controls, until they are attacked. Of course, by then it is too late.

“So the security industry understands enough about risk management that the language of risk has permeated almost every product, presentation, and security project for the last ten years. However, a friend of mine who works at a bank recently attended a workshop on security metrics, and came away with the following observation – “All these people are talking about risk, but they don't have any assets.” You can't do risk management if you don't know your assets.

“Risk management requires that you know your assets, that on some level you understand the vulnerabilities surrounding your assets, the threats against those, and efficacy of the countermeasures you would like to use to separate the threat from the asset. But it starts with assets. Unfortunately, in the digital world these turn out to be devilishly hard to identify and value.

“Recent events have taught us again, that in the financial world, Warren Buffett has few peers as a risk manager. I would like to take the first two parts of this talk looking at his career as a way to understand risk management and what we can infer for our digital assets.

Analyzing vulnerabilities and the value of assets, he uncovers two pyramids that turn out to be inverted.

To deliver a real Margin of Safety to the business, I propose the following based on a defense in depth mindset. Break the IT budget into the following categories:

  • Network: all the resources invested in Cisco, network admins, etc.
  • Host: all the resources invested in Unix, Windows, sys admins, etc.
  • Applications: all the resources invested in developers, CRM, ERP, etc.
  • Data: all the resources invested in databases, DBAs, etc.

Tally up each layer. If you are like most businesses, you will probably find that you spend most on Applications, then Data, then Host, then Network.

Then do the same exercise for the Information Security budget:

  • Network: all the resources invested in network firewalls, firewall admins, etc.
  • Host: all the resources invested in vulnerability management, patching, etc.
  • Applications: all the resources invested in static analysis, black box scanning, etc.
  • Data: all the resources invested in database encryption, database monitoring, etc.

Again, tally up each layer. If you are like most businesses, you will find that you spend most on Network, then Host, then Applications, then Data. Congratulations, Information Security – you are diametrically opposed to the business!
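Gunnar's exercise is easy to run as a back-of-envelope script. The spend figures below are invented placeholders; only the orderings reflect the pattern he describes:

```python
# Gunnar's two tallies, with invented placeholder figures (units: $K/year).
it_spend = {"Network": 500, "Host": 800, "Applications": 3000, "Data": 1200}
sec_spend = {"Network": 400, "Host": 250, "Applications": 100, "Data": 50}

def rank(budget):
    """Categories ordered from most to least funded."""
    return sorted(budget, key=budget.get, reverse=True)

it_rank = rank(it_spend)
sec_rank = rank(sec_spend)
print("IT priorities:      ", it_rank)
print("Security priorities:", sec_rank)
# The business funds Applications first; security funds the business's
# least-funded layer, Network, first - the misalignment Gunnar calls
# being "diametrically opposed to the business".
```

Whatever the real dollar amounts, the exercise is worth doing with your own budget lines: if the two rankings come out opposed, security spend is concentrated where the business has placed the least value.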

He relates his thinking to a fascinating piece by Pat Helland called SOA and Newton's Universe (a must-read to which I will return) and then proposes some elements of a concrete approach to development of meaningful metrics that he argues allow correlation of “value” and “risk” in ways that could sustain meaningful business decisions.

In an otherwise clear argument, Gunnar itemizes a series of “Apologies”, in the sense of corrections applied after the fact due to the uncertainty of decision-making in a distributed environment:

Example Apologies:

  • Identity Management tools – provisioning, deprovisioning
  • Reimburse customer for fraud losses
  • Compensating Transaction – Giant Global Bank is still sorry your account was compromised!

Try as I might, I don't understand the categorization of identity management tools as apology, or their relationship to account compromise – I hope Gunnar will tell us more.