Age and identity verification in Second Life

Via Dennis Hamilton, a pointer to a new experiment at Second Life:

We will shortly begin beta testing an age and identity verification system, which will allow Residents to provide a one-time proof of identity (such as a driver’s license, passport or ID card) and have that identity verified in a matter of moments.

Second Life has always been restricted to those over 18. All Residents personally assert their age on registration. When we receive reports of underage Residents in Second Life, we close their account until they provide us with proof of age. This system works well, but as the community grows and the attractions of Second Life become more widely known, we’ve decided to add an additional layer of protection.

Once the age verification system is in place, only those Residents with verified age will be able to access adult content in Mature areas. Any Resident wishing to access adult content will have to prove they are over 18 in real life. We have created Teen Second Life for minors under the age of 18. Access to TSL by adults is prohibited, and minors are not allowed into the rest of Second Life.

For their part, land owners will be required to flag their land as ‘adult’ if it contains adult content using the estate and land management tools provided to landowners. This flag will protect landowners from displaying inappropriate content to underage users who may have entered Second Life. Landowners are morally and legally responsible for the content displayed and the behavior taking place on their land. The identity verification system gives them new tools to ensure any adult content is only available to adults over 18 because unverified avatars will not have access to land flagged as containing adult content.

We hope you’ll agree that the small inconvenience of doing this once is far outweighed by the benefits of protecting minors from inappropriate content. Further, this system will assist landowners in engaging in lawful businesses.

The verification system will be run by a third party specializing in age and identity authentication. No personally identifying information will be stored by them or by Linden Lab, including date of birth, unless the Resident chooses to do so. Those who wish to be verified, but remain anonymous, are free to do so.

(Continues here…)

The idea of presenting a passport to get into an imaginary adult establishment strikes me as nutso.  I must be missing a gene.  It is certainly a conundrum, this virtual world. 

I think that rather than adopting this one-off inspector approach, outfits like Second Life and all the other big web sites should get together to accept registration claims from whatever identity providers would fully guarantee both accuracy and the anonymity of their users.  Information Cards combined with the anonymous credential technology developed by people like Stefan Brands would provide the ideal solution.

Privacy International global privacy invaders

Privacy International ran the first International Big Brother Awards ceremony this week, focussing attention on what it called the most invasive companies, projects, officials, and governments at the ‘Computers, Freedom and Privacy’ conference in Montreal. A ‘special award’ for the ‘Lifetime Menace’ was also announced.  The detailed announcement is here:

PI's ‘Big Brother Awards’ have been running for nearly ten years, with events run in eighteen countries around the world. Government institutions and companies have been named and shamed as privacy invaders in a variety of countries and contexts.

This year was the first time that Privacy International ran an international event to identify the greatest invaders around the world. The event was hosted by ‘the pope’, as presented by Simon Davies in full regalia. Previous hosts include ‘Dr. Evil’ and ‘The Queen of England’.

Nominees and Winners

After reviewing the variety of nominations received from around the world, Privacy International and leading international privacy experts selected the nominees and winners in the categories below:

Most invasive company

Nominees

  • Google, for their retention practices and their purchase of Doubleclick, an on-line marketing and profiling firm
  • Choicepoint, for their vast databases of personal data, sold to nearly anyone who wishes to pay
  • SWIFT, the international banking co-operative for sharing personal financial transactions with the U.S. government
  • Booz Allen Hamilton, the international consultancy, for taking the knowledge and contacts of their senior executives, mostly from U.S. intelligence agencies, to sell and share their experiences with firms and governments around the world

Winner: Choicepoint

Worst Public Official

Nominees

  • Tony Blair, Prime Minister of Britain, for his relentless work over ten years to expand the UK into the greatest surveillance society amongst democratic nations
  • Vladimir Putin, President of the Russian Federation, for returning the surveillance policies of his nation to the age of the Cold War
  • Stewart Baker, former general counsel for the National Security Agency and now undersecretary for policy at the Department of Homeland Security, for being behind, and at the forefront of, the most disastrous U.S. surveillance policies, most recently the EU-U.S. agreement on Passenger Name Records transfers
  • Alberto Gonzales, current Attorney General for the U.S., for pushing expansive interpretations of the U.S. Constitution in order to create new powers for the Bush Administration without Congressional authorisation and judicial oversight

Winner: Stewart Baker

Most Heinous Government

Nominees

  • China, for implementing even greater surveillance policies, continuing its oppression of various groups, and moving onto the international stage with additional surveillance schemes for the Beijing Olympics
  • The U.S., for leading the world down the path of greater surveillance and its disastrous influence on policy and technology
  • The United Kingdom, for being the greatest surveillance society amongst democratic nations, rivaling only Malaysia, China and Russia as it also leads other countries across the EU down its same path
  • Tunisia, for being stupid enough to have invasive and despotic practices even while hosting a UN summit on the information society, and then oppressing guests and groups from around the world while in the public eye
  • The European Union, for pretending to be founded upon a bedrock of civil liberties and fundamental rights but then spending decades establishing invasive policies without any democratic oversight

Winner: The United Kingdom (for more information please see Taking Liberties documentary (off-site))

Most Appalling Project or Technology

Nominees

  • U.S. Border Policy, and most recently the Western Hemisphere Travel Initiative, for fingerprinting visitors from around the world while foisting fingerprinting and ID card programmes upon citizens around the world, including Americans
  • International Civil Aviation Organization, a UN agency, for implementing a variety of invasive policies behind closed doors, including the ‘biometric passport’ and passenger data transfer-deals
  • India's Ministry for Personnel, Public Grievances and Pensions for requiring government employees to disclose their menstrual cycles on job appraisal forms
  • the CCTV industry, for promoting a technologically ‘effective’ policy around the world despite all the evidence to the contrary

Winner: The International Civil Aviation Organization

Lifetime Menace Award

Nominees

  • The Biometrics Industry, for selling a limited technology to governments and public institutions around the world, promising much while delivering very little except for minimisation of personal privacy
  • The Military Industrial Complex, for being behind almost every invasive surveillance policy around the world. The example given was General Dynamics, contractor to a variety of governments, which owns companies such as Anteon (UK), which in turn owns ‘Vericool’ (UK), a firm responsible for selling surveillance technologies to schools that want to fingerprint their students to verify class registers, library privileges, and cafeteria purchases
  • The Intellectual Property Industry, for promoting and pushing invasive policies around the world in order to keep track of the habits of on-line users to pursue their agenda of ‘protecting’ content
  • Communitarianism and the proponents of the ‘Common Good’, because every bad policy around the world is justified by the philosophy that what is good for society comes first, and that the individual must sacrifice his or her selfish rights in favour of the needs of the many

Winner: The ‘Common Good’

Privacy International said winners were given the classic BBA award (shown above), a golden statue of a boot stamping on a human head – George Orwell's vision of the future in 1984.

I wonder who accepted on behalf of the “Common Good”?

Token Decryption Service for CardSpace

Via Richard Turner's blog, the announcement of an architecturally superior token decryption component devised by Dominick Baier at leastprivilege.com.

Dominick  and Richard have blogged previously about mitigating the dangers involved in allowing web middleware and front-end software to process encrypted payloads.  Decrypting a payload involves access to a private key.  The broader the range of applications that can get to the key, the greater the attack surface.  This led to discussions about:

  1. Re-factoring the token decryption code into an assembly that runs under full trust whilst the site runs under partial trust
  2. Building a Token Decryption Service to which you can pass your encrypted blob and get back a list of claims, PPID and issuer key.

And that is exactly the problem Dominick has tackled:

Web Applications that want to decrypt CardSpace tokens need read access to the SSL private key. But you would increase your attack surface tremendously if you directly grant this access to the worker process account of your application. I wrote about this in more detail here and Richard Turner followed up here.

Together with my colleagues at Thinktecture (thanks Christian and Buddhike for code reviewing and QA) I wrote an out-of-proc token decryption service that allows decrypting tokens without direct access to the private key in the application. The idea is as follows:

Your web application runs under its normal least privilege account with no read access to the private key. The token decryption service runs as an NT service on the same machine under an account that has read access. Whenever the application has to decrypt a token, it hands the encrypted token to the token decryption service which (in this version) simply uses the TokenProcessor to return a list of claims, a unique ID and the issuer key.

The token decryption service is implemented as a WCF service that uses named pipes to communicate with the applications. To make sure that only authorized applications can call into the service, the application(s) have to be members of a special Windows group called “TokenDecryptionUsers” (this can be changed in configuration to support multiple decryption services on the same machine). I also wrote a shim for the WCF client proxy that allows using this service from partially trusted web applications.

The download contains binaries, installation instructions and the full source code. I hope this helps CardSpace adopters to improve the security of their applications and servers. If you have any comments or questions – feel free to contact me.

The approach is a good example of the “alligators and snakes” approach I discussed here recently.
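Dominick's service is WCF over named pipes, but the trust boundary itself is language-neutral. Here is a minimal sketch of the pattern in Python; the class and function names are mine, the XOR “cipher” is only a stand-in for real token decryption, and the process boundary is modelled by a class boundary rather than an actual NT service:

```python
import json

class TokenDecryptionService:
    """Models the out-of-proc decryption service pattern.

    In the real design this runs in a separate process under an
    account with read access to the SSL private key; web apps reach
    it over a named pipe and never touch the key themselves.
    """

    def __init__(self, private_key: bytes):
        self.__private_key = private_key  # never leaves the service

    def decrypt_token(self, encrypted_blob: bytes) -> dict:
        # Toy XOR stand-in for real token decryption: the point is
        # that only this side of the boundary ever sees the key.
        key = self.__private_key
        plain = bytes(b ^ key[i % len(key)] for i, b in enumerate(encrypted_blob))
        return json.loads(plain)  # claims, PPID, issuer key, etc.


def handle_request(service: TokenDecryptionService, blob: bytes) -> dict:
    """The web-application side: it holds a service handle, not the key."""
    # The app hands the opaque blob across the boundary and gets back
    # only decrypted claims, so compromising the worker process does
    # not expose the private key itself.
    return service.decrypt_token(blob)


# Demo: "encrypt" some claims, then decrypt them via the service.
key = b"ssl-private-key-material"
claims = {"ppid": "abc123", "email": "alice@example.com"}
blob = bytes(b ^ key[i % len(key)] for i, b in enumerate(json.dumps(claims).encode()))
print(handle_request(TokenDecryptionService(key), blob))
```

The design point is that the worker-process account never needs read access to the key; only the narrow `decrypt_token` interface crosses the boundary, which is exactly the attack-surface reduction Dominick describes.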

Personal data on 2.9 million people goes missing

Joris Evers at CNet has done a nice wrap-up on the latest identity catastrophe.  (Plumes of smoke were seen coming from the reactor, but so far, there has been no proof of radioactive particles leaking into the environment):

A CD containing personal information on Georgia residents has gone missing, according to the Georgia Department of Community Health. The CD was lost by Affiliated Computer Services, a Dallas company handling claims for the health care programs, the statement said. The disc holds information on 2.9 million Georgia residents, said Lisa Marie Shekell, a Department of Community Health representative.

It is unclear if the data on the disc, which was lost in transit some time after March 22, was protected. However, it doesn't appear the data has been used fraudulently. “At this time, we do not have any indication that the information on the disk has been misused,” Shekell said.

In response to the loss, the Georgia Department of Community Health has asked ACS to notify all affected members in writing and supply them with information on credit watch monitoring as well as tips on how to obtain a free credit report, it said.  [Funny – I get junk mail with this offer every few days – Kim] 

There has been a string of data breaches in recent years, many of which were reported publicly because of new disclosure laws. About 40,000 Chicago Public Schools employees are at risk of identity fraud after two laptops containing their personal information were stolen Friday.

Last week, the University of California at San Francisco said a possible computer security breach may have exposed records of 46,000 campus and medical center faculty, staff and students.

Since early 2005, more than 150 million personal records have been exposed in dozens of incidents, according to information compiled by the Privacy Rights Clearinghouse.

Identity fraud continues to top the complaints reported to the Federal Trade Commission. Such complaints, which include credit card fraud, bank fraud, as well as phone and utilities fraud, accounted for 36 percent of the total 674,354 complaints submitted to the FTC and its external data contributors in 2006.

Digital identity allows us to manage risk – not prove negatives

Jon's piece channeled below, Stephen O'Grady's comments at RedMonk and Tim O'Reilly's Blogger's Code of Conduct all say important things about the horrifying Kathy Sierra situation.   I agree with everyone that reputation is important, just as it is in the physical world.  But I have a fair bit of trouble with some of the technical thinking involved.

I agree we should be responsible for everything that appears on our sites over which we have control.    And I agree that we should take all reasonable steps to ensure we control our systems as effectively as we can.  But I think it is important for everyone to understand that our starting point must be that every system can be breached.  Without such a point of departure, we will see further proliferation of Pollyannish systems that, as likely as not, end in regret.

Once you understand the possibility of breach, you can calculate the associated risks, and build the technology that has the greatest chance of being safe.  You can't do this if you don't understand the risks.  In this sense, all you can do is manage your risk.

When I first set up my blog to accept Information Cards, it prompted a number of people to try their hand at breaking in.  They were unable to compromise the InfoCard system, but guess what?  There was a security flaw in WordPress 2.0.1 that was exploited to post something in my name.

By what logic was I responsible for it?  Because I chose to use WordPress – along with the other 900,000 people who had downloaded it and were thus open to this vulnerability?

I guess, by this logic, I would also be responsible for any issues related to problems in the linux kernel operating underneath my blog; and for  potential bugs in MySQL and PHP.  Not to mention any improper behavior by those working at my hosting company or ISP. 

I'm feeling much better now.

So let's move on to the question of non-repudiation.  There is no such thing as a provably correct system of any significant size.  So there is no such thing as non-repudiation in an end-to-end sense.  The fact that this term emerged from the world of PKI is yet another example of its failure to grasp various aspects of reality.

There is no way to prove that a key has not been compromised – even if a fingerprint or other biometric is part of the equation.  The sensors can be compromised, and the biometrics are publicly available information, not secrets.

I'm mystified by people who think cryptography can work “in reverse”.  It can't.  You can prove that someone has a key.  You cannot prove that someone doesn't have a key.  People who don't accept this belong in the ranks of those who believe in perpetual motion machines.
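The asymmetry can be made concrete with a simple challenge-response sketch (the function names and the use of HMAC here are my own illustration, not any particular deployed protocol): a correct response proves the responder possesses the key, but no exchange can demonstrate that nobody else possesses a copy.

```python
import hashlib
import hmac
import os

def respond(key: bytes, challenge: bytes) -> bytes:
    # Anyone holding the key can compute this response.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    # A match proves the responder HAS the key: a positive proof.
    return hmac.compare_digest(respond(key, challenge), response)

key = os.urandom(32)
challenge = os.urandom(16)
assert verify(key, challenge, respond(key, challenge))

# But a failed match with some other key proves nothing about who
# else may hold a copy of the real key: absence cannot be shown.
assert not verify(os.urandom(32), challenge, respond(key, challenge))
```

Cryptography only ever yields the first kind of statement; the second is the "reverse" direction that doesn't exist.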

To understand security, we have to leave the nice comfortable world of certainties and embrace uncertainty.  We have to think in terms of probability and risk.  We need structured ways to assess risk.  And we then have to ask ourselves how to reduce risk. 

Even though I can't prove no one has stolen my key, I can protect things a lot more effectively by using a key than by using no key! 

Then, I can use a key that is hard to steal, not easy to steal.  I can put the lock in the hands of trustworthy people.   I can choose NOT to store valuable things that I don't need. 

And so, degree by degree, I can reduce my risk, and that of people around me.

Formula for time conversion

The remarkable William Heath, a key figure in the British Government's IT ecosystem and publisher of ideal government, lands a few of his no-nonsense punches in this piece, both sobering and amusing, on institutional learning:

The original Microsoft Hailstorm press release is still there, bless them! Check out all the hype about “personalisation” and “empowerment” with proper protection of privacy (see extracts below). Complete ecstatic fibs! The apogee of Microsoft’s crazed, childish egocentricity. And it all sounds so similar to the rhetoric of UK government ID management.

Then April 2002 – Microsoft shelves Hailstorm eg NY Times abstract

And Microsoft announced Kim Cameron’s laws of identity in 2005, and Infocards in 2006.

How fast does Microsoft adapt to customers and markets compared to governments, do we estimate? Is “one Microsoft year = seven government years” a reasonable rule of thumb? In ID management terms the UK government is still in Microsoft’s 2001. So for the UK government to get to Microsoft’s position today – where the notion of empowering enlightenment is at least battling on equal terms with the forces of darkness and control, and the firm is at the beginning of implementing a sensible widescale solution – will take the UK government and IPS another forty years or so.

Could we get it down to one MS year = 3.5 UK gov years? That means we could have undone the damage of committing to a centralist panoptical approach in just 21 years. Aha.  But Microsoft doesn’t have elections to contend with… (Continued here.)

I know a number of folks who were involved with Hailstorm, and they are great people who really set a high bar for contributing to society.  I admire them both for their charity and their creativity.  It is possible that the higher the standards for your own behavior, the more you will expect other people will trust you – even if they don't know you.  And then the greater your disappointment when people impugn your motives or – best case – question your naivety. 

It requires maturity as technologists to learn that we have to build systems that remain safe in spite of how people behave – not because of how they behave. 

Of course, this is not purely a technical problem, but also a legal and even legislative one.  It took me, for example, quite a while to understand how serious the threat of panoptics is.  Things always look obvious in retrospect. 

I am trying to share our experience as transparently and as widely as I can.  I have hoped to reduce the learning curve for others – since getting this right is key to creating the most vibrant cyberspace we can. 

Without BE, templates ARE your biometrics

The more I learn from Alex Stoianov about the advantages of Biometric Encryption, the more I understand how dangerous the use of conventional biometric templates really is.  I had not understood that the templates were a reliable unique identifier reusable across databases and even across template schemes without a fresh biometric sample.  People have to be stark raving mad to use conventional biometrics to improve the efficiency of a children's lunch line.

Alex begins by driving home how easy template matching across databases really is:

Yes, that’s true: conventional biometric templates can be easily correlated across databases. Most biometric matching algorithms work this way: a fresh biometric sample is acquired and processed; a fresh template is extracted from it; and this template is matched against the previously enrolled template.

If the biometric templates are stored in the databases, you don’t need a fresh biometric sample for the offline match – the templates contain all the information required.

Moreover, this search is extremely fast: speeds of 1,000,000 matches per second are easily attainable. In our example, it would take only 10 sec to search a database of 10,000,000 records (we may disregard for now the issue of false acceptance – the accuracy is constantly improving). The biometric industry is actively developing standards, so that very soon all the databases will have standardized templates, i.e. will become fully interoperable.

BE, on the other hand, operates in a “blind” mode and, therefore, is inherently a one-to-one algorithm. Our estimate of 11.5 days for just one search makes it infeasible at present to do data mining across BE databases. If the computational power grows according to Kim’s estimates, i.e. without saturation, then in 10 – 20 years the data mining may indeed become common.

Kim already suggested a solution – just make the BE matching process slower! In fact, the use of one-way slowdown functions (known in cryptography) for BE has been considered before. The research in this area has not been active because this is not a top priority problem for BE at present. In the future, as computer power grows, every time the user is re-enrolled a slower function will be applied to keep the matching time at the same level, for example, 1 sec.

Other points to consider:

  • BE is primarily intended for use in a distributed environment, i.e. without central databases;
  • data mining between databases is much easier with users’ names – you wouldn’t even need biometrics for that. We are basically talking about anonymous biometric databases – an application that does not yet exist;
  • if a BE database custodian obtains and retains a fresh biometric sample just to do data mining, it would be a violation of his own policy. In contrast, if you give away your templates in conventional biometrics, the custodian is technically free to do any offline search.

These arguments are beyond compelling, and I very much appreciate the time Alex and Ann have taken to explain the issues.

It's understandable that BE researchers would be concentrating on more challenging aspects of the problem, but I strongly support the idea of building in a “slowdown function” from day one.  The BE computations Alex describes lend themselves perfectly to parallel processing, so Moore's law will be operating in two, not one, dimensions.  Maybe this issue could be addressed directly in one of the prototypes.  For 1:1 applications it doesn't seem like reduced efficiency would be an issue. 

Why couldn't the complexity of the calculation be a tunable characteristic of the system – sort of like the number of hash iterations in password based encryption (PBE)?
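Password-based key derivation already exposes exactly this kind of tunable work factor. Below is a minimal sketch using PBKDF2's iteration count; the inputs are illustrative stand-ins, and a real BE slowdown function would be built into the BE matching computation itself rather than bolted on as a hash:

```python
import hashlib

def tunable_match_cost(sample: bytes, salt: bytes, iterations: int) -> bytes:
    """Derive a comparison value whose cost scales with `iterations`.

    The iteration count is the tunable knob: re-enrolling with a
    higher count keeps per-match wall-clock time roughly constant
    as hardware gets faster, just as with PBKDF2 password hashing.
    """
    return hashlib.pbkdf2_hmac("sha256", sample, salt, iterations)

sample, salt = b"feature-vector-stand-in", b"per-user-salt"

# Deterministic per setting: same inputs and count give the same value...
assert tunable_match_cost(sample, salt, 1_000) == tunable_match_cost(sample, salt, 1_000)
# ...but raising the count changes (and slows) the computation.
assert tunable_match_cost(sample, salt, 1_000) != tunable_match_cost(sample, salt, 200_000)
```

For 1:1 verification a per-match cost of even a full second is invisible to the user, while making bulk correlation across millions of records proportionally expensive.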

Clarifications on biometric encryption

Ann Cavoukian and Alex Stoianov have sent me a further explanation of the difference between the “glass slipper effect”, which seems to be a property of all biometric systems, and the much more sinister use of biometric templates as an identifying key.

Kim raises an interesting point, which we would like to address in greater detail:

“This is a step forward in terms of normal usage, but the technology still suffers from the “glass slipper” effect. A given individual's biometric will be capable of revealing a given key forever, while other people's biometrics won't.  So I don't see that it offers any advantage in preventing future mining of databases for biometric matches. Perhaps someone will explain what I'm missing.”

Let us consider a not-so-distant future scenario.  When the use of biometrics grows, an ordinary person will be enrolled in various biometrically controlled databases, such as travel documents, driver licenses, health care, access control, banking, shopping, etc. The current (i.e. conventional, non-BE) biometric systems can use the same biometric template for all of them. The template becomes the ultimate unique identifier of the person. This is where the biometric data mining comes into effect: the different databases, even if some of them are anonymous, may be linked together to create comprehensive personal profiles for all the users. To do this, no fresh biometric sample is even required. The linking of the databases can be done offline using template-to-template matching, in a very efficient one-to-many mode. The privacy implications explode at this point.

Contrast that to BE: it would be much more difficult, if not impossible, to engage in the linkage of biometric databases. BE does not allow a template-to-template matching — the tool commonly used in conventional biometrics. In each BE database, a user has different keys bound to his biometric. Those templates cannot be matched against each other. You need a real biometric sample to do so. Moreover, this matching is relatively slow and, therefore, highly inefficient in one-to-many mode. For example, running a single image against 10,000,000 records in just one BE database could take 0.1 sec x 10,000,000 = 1,000,000 sec = 11.5 days.

Kim is basically correct in stating that if an individual's real biometric image was somehow obtained, then this “glass slipper” could be used to search various databases for all the different PINs or keys that “fit” and, accordingly, construct a personal transaction profile of the individual concerned, using data mining techniques. But you would first have to obtain a “satisfactory” real image of the correct biometric and/or multiple biometrics used to encrypt the PIN or key. All of the PINs or keys in the databases can and should be unique (the privacy in numbers argument) — as such, if an individual's actual biometric could somehow be accessed, only an ad hoc data mining search could be made, accessing only one entry (which would represent an individual privacy breach, not a breach of the entire database).

However, with BE, the actual biometric (or template derived from that biometric) is never stored – a record of it doesn’t exist. Without the actual biometric, data mining techniques would be useless because there would be no common template to use as one's search parameter. As mentioned, all the biometrically encrypted PINs or keys in the databases would be unique. Furthermore, access to the individual's biometric and associated transaction data would be far more difficult if a biometrically encrypted challenge/response method is employed.

In contrast, current biometric methods use a common (the same) biometric template for an individual’s transactions and, accordingly, can be used as the search parameter to construct personal profiles, without access to the real biometric. This presents both a privacy and security issue because not only could profiles be constructed on an ad hoc basis, but each template in a database can be used to construct profiles of multiple individuals without access to their real biometric. We thus believe that this alone makes biometric encryption far superior to standard current biometric methods.

Ann Cavoukian and Alex Stoianov

I had not understood that you can so easily correlate conventional biometric templates across databases.  I had thought the “fuzziness” of the problem would make it harder than it apparently is.  This raises even more red flags about the use of conventional biometrics.

Despite the calculation times given for BE matching, I'm still not totally over my concern about what I have called the glass slipper effect.  It would be a useful area of research to find ways of making the time necessary to calculate the BE match orders of magnitude longer than is currently the case.  If today it takes 11.5 days to search through 10,000,000 records, it will only take 4 hours in ten years.  By then the kids we've been talking about will be 16.  Won't that make it less than a minute by the time they are 26?  Or a quarter of a second when they're in their mid-thirties?
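The arithmetic behind those figures is simple compounding. A quick sketch, assuming (as a stand-in for the growth estimates discussed above) that computing power doubles every 18 months:

```python
SECONDS_PER_DAY = 86_400
DOUBLING_YEARS = 1.5  # assumed Moore's-law doubling period

def search_time_after(years: float,
                      today_seconds: float = 11.5 * SECONDS_PER_DAY) -> float:
    # Each doubling period halves the time a fixed search takes.
    return today_seconds / 2 ** (years / DOUBLING_YEARS)

# Roughly 2.7 hours after ten years, 1.6 minutes after twenty,
# and under a second after thirty -- the same trajectory as above.
print(search_time_after(10) / 3600)  # hours
print(search_time_after(20) / 60)    # minutes
print(search_time_after(30))         # seconds
```

Whether the knee arrives in ten years or twenty depends entirely on the assumed doubling period, which is why building the slowdown function in from day one matters.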

One very sad story

This article by ZDnet's Mitch Ratcliffe on Identity Rape and Mob Mentality sends shivers down the spine.  Partly because a bunch of our friends are involved.  Partly because the dynamics are just scary.

Allen Herrell, one of the accused attackers in the Kathy Sierra controversy, has written a long email to Doc Searls explaining that his entire online identity has been compromised. If true, and I believe it, because I have known Allen for many years, it appears there have been many more victims here than Ms. Sierra.

I am writing this from a new computer, using an email address that will be deleted at the end of this.

I am no longer me. My main machine despite my best efforts has been hacked, my accounts compromised including my email. and has been disconnected from the internet.

How did this happen? When did this happen? shit doc, i don't have a fucking clue. I thought i was pretty sharp. I guess not.

just about every online account that i have has been compromised. Most importantly my digital identity and user/password for typepad and wordpress. I have been doing damage control, for my clients. How the fuck i got to be part of this mess is revolting.

The Kathy Sierra mess is horrific. I am not who ever used my identity and my picture!!

I am sick beyond words over this whole episode. Kathy Sierra may not be on my top 10 list , but nobody deserves this filthy character assaination (sic). 

A lynch mob mentality has come over the Blogosphere. Kathy Sierra has every right to be angry about the messages directed at her, but her allegations appear to have been misdirected and misinformed, because they relied on simplistic analysis of the sites and assumed that appearance and reality were identical. And she's making it worse, writing today:

You're damn right I'm *linking* these folks to these posts. You're wrong about their involvement. The posts and comments were NOT made by–as you said–heinous trolls.

Whoever made the posts was a registered member, and they *know* who made the comments — he was one of their participants. I never said Jeaneane was the one creating the noose picture or comment. I said she was a participant in and “celebrated” and encouraged meankids.org. I believe that when prominent people encourage this kind of behavior, they don't get to wash their hands of it, ethically.

I should be more clear, though, that while *someone* broke the law with the noose photo/comment, I'm definitely NOT suggesting that anyone else did anything legally wrong.

But I think Hugh put it better than I can:

–You might not be the guy raping the cheerleader, but if you're the one standing by saying, “go go go!” you share some responsibility.–

Not legal, but ethical. I don't believe any of these folks should be able to create these forums, *celebrate* them, send people there, and actively participate… and then claim complete innocence. If you hand someone a loaded gun. and encourage them to shoot…

The rape metaphor applies to everyone involved who had words and images they find deplorable attributed to them. But it is far more important to understand that the rape attributed to them probably wasn't their doing in the first place. The gun shoved in Chris Locke, Jeneane Sessums, Frank Paynter and Allen Herrell's hands is as likely to be illusory as not. We need proof, not accusations, just like in the physical world.

Trolls created the impression of a crime and sat back to watch human nature show its worst side. They are still enjoying it.

As Chris Locke explained in his email to me yesterday, he took the offensive postings down “shortly after it appeared.” Nevertheless, Bert Bates, Kathy Sierra's Head First Java co-author has commented on this blog, saying “By definition, these ‘posts’ were made by the author(s) of the site – it IS a small circle of candidates.” When you factor in the possibility that accounts were co-opted, according to this definition, anyone who has ever had their email address spoofed is responsible for the content of the messages sent under their name.

There are so many things to be learned from this story that it boggles my mind. 

It brings back a conversation I had with Allen (The Head Lemur) at Esther Dyson's Release 1.0 conference, years ago, where we first talked about identity.  He was skeptical (as is his wont) but I had good fun talking to him.  And there is no doubt in my mind that we should, as our civilization has learned to do, consider Allen innocent until proven guilty – and no proof of guilt seems to be anywhere in sight.

The worst is that I hear stories like this all the time.  Not just in my work, but from my family. 

My daughter tells of a lady friend whose Gmail account was broken into – resulting in pandemonium that – if it weren't so unbearable – would be the stuff French farces are made of.

My son's instant messaging account was hacked by the ex of a lady friend he wasn't even dating.  Again, he was dragged through weeks of confusion and reconnection.

So one of the things that separates this story from all the others happening all over cyberspace is just that we know the people involved.  The broad strokes are common today, given the haphazard state of web security and identity.

To make matters worse, imagine technical people saying, in a world of passwords and keystroke loggers, “these ‘posts’ were made by the author(s) of the site – it IS a small circle of candidates…”  Help me.

It's a great proof point that even though blogs don't involve high finance, they still need high-quality security.  The loss of privacy and loss of dignity we have witnessed here can't really be undone, even if one day they can be forgotten.  Protecting identity and protecting access is not a joke.

Some days, when I'm really tired, I look at the vast job ahead of us in fixing the internet's identity infrastructure, and wonder if I shouldn't just go and do something easy – like levitation.  But a story like this drives home the fact that we have to succeed. 

Maybe next time Allen and colleagues will be using Information Cards, not passwords, not shared secrets.  This won't extinguish either flaming or trolling, but it can sure make breaking into someone's site unbelievably harder – assuming we get to the point where our blogging software is safe too.

Biometric encryption

This diagram from Cavoukian and Stoianov's recent paper on biometric encryption (introduced here) provides an overview of the possible attacks on conventional biometric systems; the original paper discusses each of the attacks in turn.

Having looked at how template-based biometric systems work, we're ready to consider biometric encryption.  The basic idea is that a function of the biometric is used to encrypt (bind to) an arbitrary key.  The result of that binding is what gets stored in the database, rather than the key, the biometric or a template.  The authors explain,

Because of its variability, the biometric image or template itself cannot serve as a cryptographic key. However, the amount of information contained in a biometric image is quite large: for example, a typical image of 300×400 pixel size, encoded with eight bits per pixel has 300x400x8 = 960,000 bits of information. Of course, this information is highly redundant. One can ask a question: Is it possible to consistently extract a relatively small number of bits, say 128, out of these 960,000 bits? Or, is it possible to bind a 128 bit key to the biometric information, so that the key could be consistently regenerated? While the answer to the first question is problematic, the second question has given rise to the new area of research, called Biometric Encryption

Biometric Encryption is a process that securely binds a PIN or a cryptographic key to a biometric, so that neither the key nor the biometric can be retrieved from the stored template. The key is re-created only if the correct live biometric sample is presented on verification.

The process is represented visually in a diagram in the paper.
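To make the binding idea concrete, here is a minimal sketch of one well-known construction from the BE literature, the fuzzy commitment scheme of Juels and Wattenberg: the key is expanded with an error-correcting code, XORed with the biometric bits, and only that XOR (the “helper data”) is stored. The toy repetition code, bit lengths and function names below are my own illustrative assumptions, not necessarily the paper's construction:

```python
import secrets

def repeat_encode(bits, r=5):
    # Repetition code: each key bit is written r times.
    return [b for bit in bits for b in [bit] * r]

def repeat_decode(codeword, r=5):
    # Majority vote over each group of r bits corrects up to (r-1)//2 flips.
    return [int(sum(codeword[i:i + r]) > r // 2)
            for i in range(0, len(codeword), r)]

def enroll(biometric_bits, key_bits, r=5):
    # Bind the key to the biometric: store only the XOR ("helper data").
    codeword = repeat_encode(key_bits, r)
    assert len(codeword) == len(biometric_bits)
    return [c ^ b for c, b in zip(codeword, biometric_bits)]

def verify(helper, live_bits, r=5):
    # A fresh, noisy sample unmasks a corrupted codeword; the ECC repairs it.
    noisy_codeword = [h ^ b for h, b in zip(helper, live_bits)]
    return repeat_decode(noisy_codeword, r)

# Hypothetical 40-bit "biometric" and 8-bit key (real systems use far more).
biometric = [secrets.randbelow(2) for _ in range(40)]
key = [secrets.randbelow(2) for _ in range(8)]
helper = enroll(biometric, key)

# A live sample differing in a couple of bits still regenerates the same key.
live = biometric[:]
live[3] ^= 1
live[17] ^= 1
assert verify(helper, live) == key
```

A live sample that differs from the enrolled one in a few positions still regenerates the identical key. (A repetition code keeps the sketch short but is a weak choice in practice; real systems use stronger error-correcting codes.)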

Perhaps the most interesting aspect of this technology is that the identifier associated with an individual includes the entropy of an arbitrary key.  This is very different from using a template that will be more or less identical as long as the template algorithm remains constant.  With BE, I can delete an identifier from the database, and generate a new one by feeding a new random key into the biometric “binding” process.  The authors thus say the identifiers are “revocable”.
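Revocability can be sketched in the same spirit: re-enrolling the same biometric with a fresh random key yields a completely different stored identifier. The toy XOR binding below (no error tolerance, hypothetical sizes) is only meant to illustrate that property:

```python
import secrets

def bind(biometric, key):
    # Toy binding: store key XOR biometric. Neither value is
    # recoverable from the stored record alone.
    return bytes(k ^ b for k, b in zip(key, biometric))

# Hypothetical 128-bit biometric feature string (real features are noisy,
# so a real BE scheme would add error correction on top of this).
biometric = secrets.token_bytes(16)

# Enroll, then "revoke" by discarding the old record and binding a fresh key.
old_record = bind(biometric, secrets.token_bytes(16))
new_record = bind(biometric, secrets.token_bytes(16))
assert old_record != new_record  # same biometric, unrelated identifiers
```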

This is a step forward in terms of normal usage, but the technology still suffers from the “glass slipper” effect.  A given individual's biometric will be capable of revealing a given key forever, while other people's biometrics won't.  So I don't see that it offers any advantage in preventing future mining of databases for biometric matches.  Perhaps someone will explain what I'm missing.

The authors describe some of the practical difficulties in building real-world systems (although it appears that Philips already has a commercial system).  It is argued that for technical reasons, fingerprints lend themselves less readily to this technology than iris and facial scans.

Several case studies are included in the paper that demonstrate potential benefits of the system.  Reading them makes the ideas more comprehensible.

The authors conclude:

Biometric Encryption technology is a fruitful area for research and has become sufficiently mature for broader public policy consideration, prototype development, and consideration of applications.

Andy Adler at the University of Ottawa has a paper looking at some of the vulnerabilities of BE.

Certainly, Cavoukian and Stoianov's fine discussion of the problems with conventional biometrics leaves one more skeptical than ever about their use today in schools and pubs.