Richard Gray on authentication and reputation

Richard Gray posted two comments that I found illuminating, even though I see things in a somewhat different light.  The first was a response to my Very Sad Story:

One of the interesting points of this is that it highlights very strongly some of the meatspace problems that I’m not sure any identity solution can solve. The problem in particular is that as much as we try to associate a digital identity with a real person, so long as the two can be separated without exposing the split, we have no hope of succeeding.

For so long, technical commentators on identity have pushed the idea that a person’s digital identity and their real identity can be tightly bound together; then suddenly, when the weakness is finally exposed, everyone is once again forced to say ‘This digital identity is nothing more than a string puppet that I control. I didn’t do this thing; some other puppet master did.’

What’s the solution? I don’t know. Perhaps we need to stop talking about identities in this way. If a burglar stole my keys and broke into my home to use my telephone, it would be my responsibility to demonstrate that, but I doubt I could be held responsible for what he said afterwards.  Alternatively, we need non-repudiation to be a key feature of any authentication scheme that gets implemented.

In short, so long as we can separate ourselves from our digital identities, we should expect people not to trust them. We should in fact go to great lengths to ensure that people trust them only as much as they have to and no more.

 He continued in this line of thought over at Jon's blog:

As you don’t have CardSpace enabled here, you can’t actually verify that I am the said same Richard from Kim’s blog. However in a satisfyingly circular set of references I imagine that what follows will serve to authenticate me in exactly the manner that Stephen described. 🙂  [Hey Jon – take a look at Pamelaware – Kim]

I’m going to mark a line somewhere between the view that reputation will protect us from harm and the view that the damage that can be done will be reversible. Reputation is a great authenticating factor; indeed, it fits most of the requirements of an identity. It’s trusted by the recipient, it requires a lot of effort to create, and it is easy to test against. Amongst people who know each other well it’s probably the source of information that is relied upon most. (“That doesn’t sound like them” is a common phrase.)

However, this isn’t the way that our society appears to work. When my wife reads the celebrity magazines she is unlikely to rely on reputation as a measure of their actions. Worse, when she does use reputation, it is built from a collection of previous celebrity offerings.

To lay it out simply: no matter who steals my identity (phone, passwords, etc.), they would struggle to damage my relationship with my current employer, as they know me and have a reputation against which to authenticate my actions. They could do a very good job of destroying any hope I have of getting a job anywhere else, though. Regardless of the truth, I would be forced to explain myself at every subsequent meeting. The public won’t have done the background checks; they’ll only know what they’ve heard. Why would they take the risk and employ me? I *might* be lying.

Incredibly, the private reputation that Allen has built up (and Stephen and the rest of us rely on) has probably helped to save a large portion of his public reputation. A Google search for “Allen Herrell” doesn’t find netizens baying for his blood; it finds a large collection of people who have rallied behind him to declare ‘He would not do this’.

Now, what I’m about to say is going to seem a little crazy, but please think it through to the end before cutting it down completely. So long as our online identities are fragile and easily compromised, people will be wary of trusting them. If we lower the probability of an identity failing, people will, as a result, place more faith in that identity. But if we can’t reduce the probability of failure to zero, then when some poor soul suffers the inevitable failure of their identity, so many more people will have placed faith in it that undoing the damage may be almost impossible. It would seem, then, that the unreliability of our identity is in fact our last line of defence.

My point, then, is that while it is useful to spend time improving authentication schemes, perhaps we are neglecting the importance of non-repudiation within the system. If it were impossible for anyone other than me to communicate my password string to an authentication system, then that password would be fine for authentication, and it wouldn’t even be necessary to encrypt the text wherever it was stored!
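Richard’s thought experiment – a password that nobody else could ever replay – is, in practice, what asymmetric challenge-response gives you: the secret never leaves the client at all. Here is a minimal sketch using the pyca/cryptography package (the variable names and the 32-byte nonce are my illustrative choices, not anything Richard specified):

    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Enrollment: only the public key is stored on the server side.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    # Login: the server issues a fresh nonce and the client signs it.
    nonce = os.urandom(32)
    signature = private_key.sign(nonce)

    # Verification: raises InvalidSignature on failure. Nothing reusable
    # ever crossed the wire, so there is no "password string" to steal.
    public_key.verify(signature, nonce)

Of course, this shifts Richard’s problem rather than solving it: the signature proves possession of the key, not that the flesh-and-blood Richard was at the keyboard.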

Jon Udell on the Sierra affair

Jon Udell put up this thought-provoking piece on the widely discussed Sierra affair earlier this week, picking up on my post and the related comment by Richard Gray.

Kim Cameron had the same reaction to the Sierra affair as I did: Stronger authentication, while no panacea, would be extremely helpful. Kim writes:

Maybe next time Allan and colleagues will be using Information Cards, not passwords, not shared secrets. This won’t extinguish either flaming or trolling, but it can sure make breaking into someone’s site unbelievably harder.

Commenting on Kim’s entry, Richard Gray (or, more precisely, a source of keystrokes claiming to be one of many Richard Grays) objects on the grounds that all is hopeless so long as digital and real identities are separable:

For so long, technical commentators on identity have pushed the idea that a person’s digital identity and their real identity can be tightly bound together; then suddenly, when the weakness is finally exposed, everyone is once again forced to say ‘This digital identity is nothing more than a string puppet that I control. I didn’t do this thing; some other puppet master did.’

Yep, it’s a problem, and there’s no bulletproof solution, but we can and should make it a lot harder for the impersonating puppet master to seize control of the strings.

Elsewhere, Stephen O’Grady asks whether history (i.e., a person’s observable online track record) or technology (i.e., strong authentication) is the better defense.

My answer to Stephen is: You need both. I’ve never met Stephen in person, so in one sense, to me, he’s just another source of keystrokes claiming to represent a person. But behind those keystrokes there is a mind, and I’ve observed the workings of that mind for some years now, and that track record does, as Stephen says, powerfully authenticate him.

“Call me naive,” Stephen says, “but I’d like to think that my track record here counts for something.”

Reprising the comment I made on his blog: it counts for a lot, and I rely on mine in just the same way, for the same reasons. But: counts for whom? Will the millions who were first introduced to Kathy Sierra and Chris Locke on CNN recently bother to explore their track records and reach their own conclusions?

More to the point, what about Alan Herrell’s track record? I would be inclined to explore it, but I can’t, now, without digging it out of the Google cache.

The best defense is a strong track record and an online identity that’s as securely yours as is feasible.

The identity metasystem that Kim Cameron has been defining, building, and evangelizing is an important step in the right direction. I thought so before I joined Microsoft, and I think so now.

It’s not a panacea. Security is a risk continuum with tradeoffs all along the way. Evaluating the risk and the tradeoffs, in meatspace or in cyberspace, is psychologically hard. Evaluating security technologies, in both realms, is intellectually hard. But in the long run we have no choice: we have to deal with these difficulties.

The other day I lifted this quote from my podcast with Phil Libin:

The basics of asymmetric cryptography are fundamental concepts that any member of society who wants to understand how the world works, or could work, needs to understand.

When Phil said that, my reaction was, “Oh, come on, I’d like to think that could happen, but let’s get real. Even I have to stop and think about how that stuff works, and I’ve been aware of it for many years. How can we ever expect those concepts to penetrate the mass consciousness?”

At 21:10-23:00 in the podcast, Phil answers in a fascinating way. Ask twenty random people on the street why the government can’t just print as much money as it wants, he said, and you’ll probably get “a reasonable explanation of inflation in some percentage of those cases.” That completely abstract principle, unknown before Adam Smith, has sunk in. Over time, Phil suggests, the principles of asymmetric cryptography, as they relate to digital identity, will sink in too. But not until those principles are embedded in common experiences, and described in common language.

Beyond Stephen O'Grady's piece, the reactions of Jon's readers are of interest too.  In fact, I'm going to post Richard's comments so that everyone gets to see them. 

Formula for time conversion

The remarkable William Heath, a key figure in the British government's IT ecosystem and publisher of Ideal Government, lands a few of his no-nonsense punches in this piece, both sobering and amusing, on institutional learning:

The original Microsoft Hailstorm press release is still there, bless them! Check out all the hype about “personalisation” and “empowerment” with proper protection of privacy (see extracts below). Complete ecstatic fibs! The apogee of Microsoft’s crazed, childish egocentricity. And it all sounds so similar to the rhetoric of UK government ID management.

Then April 2002 – Microsoft shelves Hailstorm (see, e.g., the NY Times abstract).

And Microsoft announced Kim Cameron’s Laws of Identity in 2005, and InfoCards in 2006.

How fast does Microsoft adapt to customers and markets compared to governments, do we estimate? Is “one Microsoft year = seven government years” a reasonable rule of thumb? In ID-management terms the UK government is still in Microsoft’s 2001. So for the UK government to get to Microsoft’s position today – where the notion of empowering enlightenment is at least battling on equal terms with the forces of darkness and control, and the firm is at the beginning of implementing a sensible wide-scale solution – will take the UK government and IPS another forty years or so.

Could we get it down to one MS year = 3.5 UK gov years? That would mean we could undo the damage of committing to a centralist panoptical approach in just 21 years. Aha.  But Microsoft doesn’t have elections to contend with… (Continued here.)
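For what it’s worth, Heath’s time-conversion formula really is just arithmetic – the 2001 starting point and both ratios are his; the rest is multiplication:

    # Heath's "Microsoft years to government years" conversion.
    ms_years_behind = 2007 - 2001      # UK gov is in Microsoft's 2001
    print(ms_years_behind * 7)         # 42 -> "another forty years or so"
    print(ms_years_behind * 3.5)       # 21.0 -> "just 21 years"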

I know a number of folks who were involved with Hailstorm, and they are great people who really set a high bar for contributing to society.  I admire them both for their charity and their creativity.  It is possible that the higher the standards for your own behavior, the more you expect other people to trust you – even if they don't know you.  And then the greater your disappointment when people impugn your motives or – best case – question your naivety.

It requires maturity as technologists to learn that we have to build systems that remain safe in spite of how people behave – not because of how they behave. 

Of course, this is not purely a technical problem, but also a legal and even legislative one.  It took me, for example, quite a while to understand how serious the threat of panoptics is.  Things always look obvious in retrospect.

I am trying to share our experience as transparently and as widely as I can.  I have hoped to reduce the learning curve for others – since getting this right is key to creating the most vibrant cyberspace we can. 

Without BE, templates ARE your biometrics

The more I learn from Alex Stoianov about the advantages of Biometric Encryption, the more I understand how dangerous the use of conventional biometric templates really is.  I had not understood that templates are a reliable unique identifier, reusable across databases and even across template schemes, without a fresh biometric sample.  People have to be stark raving mad to use conventional biometrics to improve the efficiency of a children's lunch line.

Alex begins by driving home how easy template matching across databases really is:

Yes, that’s true: conventional biometric templates can be easily correlated across databases. Most biometric matching algorithms work this way: a fresh biometric sample is acquired and processed; a fresh template is extracted from it; and this template is matched against the previously enrolled template.

If the biometric templates are stored in the databases, you don’t need a fresh biometric sample for the offline match – the templates contain all the information required.

Moreover, this search is extremely fast: a rate of 1,000,000 matches per second is easily achievable. In our example, it would take only 10 seconds to search a database of 10,000,000 records (we may disregard for now the issue of false acceptance – the accuracy is constantly improving). The biometric industry is actively developing standards, so that very soon all the databases will have standardized templates, i.e. will become fully interoperable.

BE, on the other hand, operates in a “blind” mode and, therefore, is inherently a one-to-one algorithm. Our estimate of 11.5 days for just one search makes it infeasible at present to do data mining across BE databases. If the computational power grows according to Kim’s estimates, i.e. without saturation, then in 10 – 20 years the data mining may indeed become common.

Kim already suggested a solution – just make the BE matching process slower! In fact, the use of one-way slowdown functions (known in cryptography) for BE has been considered before. Research in this area has not been active because this is not a top-priority problem for BE at present. In the future, as computing power grows, a slower function will be applied every time the user is re-enrolled, keeping the matching time at the same level – for example, 1 second.

Other points to consider:

  • BE is primarily intended for use in a distributed environment, i.e. without central databases;
  • data mining between databases is much easier with users’ names – you wouldn’t even need biometrics for that. We are basically talking about biometric anonymous databases – an application that does not exist at present;
  • if a BE database custodian obtains and retains a fresh biometric sample just to do data mining, it would be a violation of his own policy. In contrast, if you give away your templates in conventional biometrics, the custodian is technically free to do any offline search.
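To make the scale of that difference concrete, here is Alex's back-of-envelope arithmetic as a tiny script – the rates (1,000,000 template matches per second; 0.1 seconds per BE comparison) are his estimates, not measurements:

    RECORDS = 10_000_000
    TEMPLATE_MATCHES_PER_SEC = 1_000_000   # conventional template matching
    BE_SECONDS_PER_COMPARISON = 0.1        # one "blind" BE attempt

    print(RECORDS / TEMPLATE_MATCHES_PER_SEC)            # 10.0 seconds
    print(RECORDS * BE_SECONDS_PER_COMPARISON / 86_400)  # ~11.6 days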

These arguments are beyond compelling, and I very much appreciate the time Alex and Ann have taken to explain the issues.

It's understandable that BE researchers would be concentrating on more challenging aspects of the problem, but I strongly support the idea of building in a “slowdown function” from day one.  The BE computations Alex describes lend themselves perfectly to parallel processing, so Moore's law will be operating in two, not one, dimensions.  Maybe this issue could be addressed directly in one of the prototypes.  For 1:1 applications it doesn't seem like reduced efficiency would be an issue. 

Why couldn't the complexity of the calculation be a tunable characteristic of the system – sort of like the number of hash iterations in password-based encryption (PBE)?
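For illustration, here is what that kind of tunable work factor looks like in the password world. PBKDF2's iteration count is the dial; in a BE scheme the slowdown function would wrap the key-regeneration step instead, but the tuning principle is the same (the inputs below are placeholders):

    import hashlib
    import time

    def slow_derive(secret: bytes, salt: bytes, iterations: int) -> bytes:
        # The iteration count is the tunable cost knob.
        return hashlib.pbkdf2_hmac("sha256", secret, salt, iterations)

    for iterations in (10_000, 100_000, 1_000_000):
        start = time.perf_counter()
        slow_derive(b"sample-features", b"per-user-salt", iterations)
        print(iterations, round(time.perf_counter() - start, 3), "sec")

    # Pick the count that makes one match take about a second on current
    # hardware, and raise it at each re-enrollment as hardware speeds up.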

Clarifications on biometric encryption

Ann Cavoukian and Alex Stoianov have sent me a further explanation of the difference between the “glass slipper effect”, which seems to be a property of all biometric systems, and the much more sinister use of biometric templates as an identifying key.

Kim raises an interesting point, which we would like to address in greater detail:

“This is a step forward in terms of normal usage, but the technology still suffers from the “glass slipper” effect. A given individual's biometric will be capable of revealing a given key forever, while other people's biometrics won't.  So I don't see that it offers any advantage in preventing future mining of databases for biometric matches. Perhaps someone will explain what I'm missing.”

Let us consider a not-so-distant future scenario.  As the use of biometrics grows, an ordinary person will be enrolled in various biometrically controlled databases, such as travel documents, driver's licenses, health care, access control, banking, shopping, etc. Current (i.e. conventional, non-BE) biometric systems can use the same biometric template for all of them. The template becomes the ultimate unique identifier of the person. This is where biometric data mining comes into effect: the different databases, even if some of them are anonymous, may be linked together to create comprehensive personal profiles of all the users. To do this, no fresh biometric sample is even required. The linking of the databases can be done offline using template-to-template matching, in a very efficient one-to-many mode. The privacy implications explode at this point.

Contrast that to BE: it would be much more difficult, if not impossible, to engage in the linkage of biometric databases. BE does not allow a template-to-template matching — the tool commonly used in conventional biometrics. In each BE database, a user has different keys bound to his biometric. Those templates cannot be matched against each other. You need a real biometric sample to do so. Moreover, this matching is relatively slow and, therefore, highly inefficient in one-to-many mode. For example, running a single image against 10,000,000 records in just one BE database could take 0.1 sec x 10,000,000 = 1,000,000 sec = 11.5 days.

Kim is basically correct in stating that if an individual's real biometric image were somehow obtained, then this “glass slipper” could be used to search various databases for all the different PINs or keys that “fit” and, accordingly, construct a personal transaction profile of the individual concerned, using data mining techniques. But you would first have to obtain a “satisfactory” real image of the correct biometric (or multiple biometrics) used to encrypt the PIN or key. All of the PINs or keys in the databases can and should be unique (the privacy-in-numbers argument) – as such, if an individual's actual biometric could somehow be accessed, only an ad hoc data mining search could be made, accessing only one entry (which would represent an individual privacy breach, not a breach of the entire database).

However, with BE, the actual biometric (or template derived from that biometric) is never stored – a record of it doesn’t exist. Without the actual biometric, data mining techniques would be useless because there would be no common template to use as one's search parameter. As mentioned, all the biometrically encrypted PINs or keys in the databases would be unique. Furthermore, access to the individual's biometric and associated transaction data would be far more difficult if a biometrically encrypted challenge/response method is employed.

In contrast, current biometric methods use a common (the same) biometric template for an individual’s transactions and, accordingly, can be used as the search parameter to construct personal profiles, without access to the real biometric. This presents both a privacy and security issue because not only could profiles be constructed on an ad hoc basis, but each template in a database can be used to construct profiles of multiple individuals without access to their real biometric. We thus believe that this alone makes biometric encryption far superior to standard current biometric methods.

Ann Cavoukian and Alex Stoianov

I had not understood that you can so easily correlate conventional biometric templates across databases.  I had thought the “fuzziness” of the problem would make it harder than it apparently is.  This raises even more red flags about the use of conventional biometrics.

Despite the calculation times given for BE matching, I'm still not totally over my concern about what I have called the glass slipper effect.  It would be a useful area of research to find ways of making the time necessary to calculate the BE match orders of magnitude longer than is currently the case.  If today it takes 11.5 days to search through 10,000,000 records, it will take only about 4 hours in ten years.  By then the kids we've been talking about will be 16.  Won't that make it less than a minute by the time they are 26?  Or a quarter of a second when they're in their mid-thirties?
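The extrapolation behind those numbers, spelled out as a sketch – the doubling period is the load-bearing assumption (18 months is used below, and small changes to it move the results considerably):

    # Moore's-law extrapolation of the 11.5-day BE search; the doubling
    # period is an assumption, not a measurement.
    DOUBLING_YEARS = 1.5
    today_seconds = 11.5 * 86_400      # search of 10,000,000 BE records

    for years in (10, 20, 30):
        speedup = 2 ** (years / DOUBLING_YEARS)
        print(years, round(today_seconds / speedup, 2), "seconds")
        # -> roughly 2.7 hours, 1.6 minutes, and about a second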

Biometric encryption

This diagram from Cavoukian and Stoianov's recent paper on biometric encryption (introduced here) provides an overview of the possible attacks on conventional biometric systems; consult the original paper, which discusses each of the attacks.


Having looked at how template-based biometric systems work, we're ready to consider biometric encryption.  The basic idea is that a function of the biometric is used to encrypt (bind to) an arbitrary key.  What is stored in the database is the biometrically encrypted key, rather than either the biometric or a template.  The authors explain,

Because of its variability, the biometric image or template itself cannot serve as a cryptographic key. However, the amount of information contained in a biometric image is quite large: for example, a typical image of 300×400 pixel size, encoded with eight bits per pixel, has 300 × 400 × 8 = 960,000 bits of information. Of course, this information is highly redundant. One can ask a question: is it possible to consistently extract a relatively small number of bits, say 128, out of these 960,000 bits? Or, is it possible to bind a 128-bit key to the biometric information, so that the key could be consistently regenerated? While the answer to the first question is problematic, the second question has given rise to a new area of research, called Biometric Encryption.

Biometric Encryption is a process that securely binds a PIN or a cryptographic key to a biometric, so that neither the key nor the biometric can be retrieved from the stored template. The key is re-created only if the correct live biometric sample is presented on verification.

The process is represented visually in the paper's enrollment/verification diagram.

Perhaps the most interesting aspect of this technology is that the identifier associated with an individual includes the entropy of an arbitrary key.  This is very different from using a template that will be more or less identical as long as the template algorithm remains constant.  With BE, I can delete an identifier from the database and generate a new one by feeding a new random key into the biometric “binding” process.  The authors thus say the identifiers are “revocable”.
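To make the binding idea concrete, here is a minimal sketch in the spirit of a fuzzy commitment scheme (Juels and Wattenberg), one of the constructions in the BE literature.  The toy repetition code, the bit lengths, and the simulated noise are all illustrative assumptions, not the authors' algorithm:

    import hashlib
    import secrets

    R = 5  # toy repetition code: each key bit is stored R times

    def encode(key_bits):
        return [b for b in key_bits for _ in range(R)]

    def decode(codeword_bits):
        # Majority vote over each group of R bits absorbs biometric noise.
        return [int(sum(codeword_bits[i:i + R]) > R // 2)
                for i in range(0, len(codeword_bits), R)]

    def enroll(bio_bits, key_bits):
        helper = [c ^ b for c, b in zip(encode(key_bits), bio_bits)]
        key_hash = hashlib.sha256(bytes(key_bits)).hexdigest()
        return helper, key_hash  # stored; neither reveals key or biometric alone

    def verify(bio_bits, helper, key_hash):
        key_bits = decode([h ^ b for h, b in zip(helper, bio_bits)])
        return hashlib.sha256(bytes(key_bits)).hexdigest() == key_hash

    # Demo: a fresh sample differing in a few bits still releases the key.
    key = [secrets.randbelow(2) for _ in range(128)]
    bio = [secrets.randbelow(2) for _ in range(128 * R)]
    helper, kh = enroll(bio, key)
    noisy = bio[:]
    noisy[3] ^= 1
    noisy[200] ^= 1
    print(verify(noisy, helper, kh))  # True: errors within the code's capacity

Revocation falls out naturally: delete the stored pair and re-enroll with a fresh random key.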

This is a step forward in terms of normal usage, but the technology still suffers from the “glass slipper” effect.  A given individual's biometric will be capable of revealing a given key forever, while other people's biometrics won't.  So I don't see that it offers any advantage in preventing future mining of databases for biometric matches.  Perhaps someone will explain what I'm missing.

The authors describe some of the practical difficulties in building real-world systems (although it appears Philips already has a commercial system).  They argue that, for technical reasons, fingerprints lend themselves less well to this technology than iris and facial scans.

Several case studies included in the paper demonstrate potential benefits of the system.  Reading them makes the ideas more comprehensible.

The authors conclude:

Biometric Encryption technology is a fruitful area for research and has become sufficiently mature for broader public policy consideration, prototype development, and consideration of applications.

Andy Adler at the University of Ottawa has a paper looking at some of the vulnerabilities of BE.

Certainly, Cavoukian and Stoianov's fine discussion of the problems with conventional biometrics leaves one more skeptical than ever about their use today in schools and pubs.

A sweep of their tiny fingers

My research into the state of child fingerprinting has led me to this extreme video – you will want to download it.  Then let's look further at the technical issues behind fingerprinting.

Here is a diagram showing how “templates” are created from biometric information in conventional fingerprint systems.  It shows the level of informed discourse that is emerging on activist sites such as LeaveThemKidsAlone.com – dedicated to explaining and opposing child fingerprinting in Britain.

Except in the most invasive systems, the fingerprint is not stored – rather, a “function” of the fingerprint is used.  The function is normally “one-way”, meaning you can create the template from the fingerprint by using the correct algorithm, but cannot reconstitute the fingerprint from the template.

The template is associated with some real-world individual (a criminal?  a student?).  During matching, the fingerprint reader again applies the one-way function to the fingerprint image and produces a blob of data that matches the template – within some tolerance.  Because of this tolerance, in most systems the template doesn't behave like a “key” that can simply be looked up in a table.   Instead, the matching software is run against a series of templates, and calculations are performed in search of a match.
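A toy sketch of why that tolerance rules out a simple table lookup – every stored template has to be scored against the fresh sample.  The bit-vector representation and the threshold are illustrative assumptions; real systems compare minutiae features:

    def hamming(a, b):
        # Number of positions where two fixed-length bit vectors disagree.
        return sum(x != y for x, y in zip(a, b))

    def match(fresh_bits, templates, tolerance=10):
        # No exact-key lookup is possible: every template must be scored,
        # and anything within the tolerance counts as the same finger.
        return [person_id for person_id, stored in templates.items()
                if hamming(fresh_bits, stored) <= tolerance]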

If the raw image of the fingerprint were stored rather than a template, and someone were to gain access to the database, the raw image could be harnessed to create a “gummy bear” finger that could potentially leave fake prints at the scene of a crime – or be applied to fingerprint sensors.

Further, authorities with access to the data could also apply new algorithms to the image, and thus locate matches against emerging template systems not in use at the time the database was created.  For both these reasons, it is considered safer to store a template than the actual biometric data.

But by applying the algorithm, matching of a print to a person remains possible as long as the data is present and the algorithm is known.  With the negligible cost of storage, this could clearly extend throughout the whole lifetime of a child.  LeaveThemKidsAlone quotes Brian Drury, an IT security consultant who makes a nice point about the potential tyranny of the algorithm:

“If a child has never touched a fingerprint scanner, there is zero probability of being incorrectly investigated for a crime. Once a child has touched a scanner they will be at the mercy of the matching algorithm for the rest of their lives.” (12th March 2007 – read more from Brian Drury)

So it is disturbing to read statements like the following by Mitch Johns, President and Founder of Food Service Solutions – whose company sells the system featured in the full Fox news video referenced above:

When school lunch biometric systems like FSS’s are numerically-based and discard the actual fingerprint image, they cannot be used for any purpose other than recognizing a student within a registered group of students. Since there’s no stored fingerprint image, the data is useless to law enforcement, which requires actual fingerprint images.

Mitch, this just isn't true.  I hope your statement is the product of not having thought through the potential uses that could be made of templates.  I can understand the mistake – as technologists, evil usages often don't occur to us.   But I hope you'll start explaining what the risks really are.  Or, better still, consider replacing this product with another based on more mature technology, exposing children and schools to less long-term danger and liability.

Will biometrics grow up?

Ann Cavoukian has really thought about biometrics – and fingerprinting. As the Privacy Commissioner of Ontario, she hasn't hesitated to join the conversation we have been having as technologists – and has contributed to it in concrete ways. For example, beyond bringing the Laws of Identity to the attention of policy makers, she extended them to make all the privacy implications explicit.

Now she and Alex Stoianov, a biometrics scientist, have published a joint paper called Biometric Encryption: A Positive-Sum Technology that Achieves Strong Authentication, Security AND Privacy. It is too early to know to what extent Biometric Encryption (BE) will achieve its promise and become a mainstream technology. But everyone who reads the paper will understand why it is absolutely premature to begin using “conventional biometrics” in schools – or pubs. The following table, taken from the paper, summarizes the benefits BE could hold out for us:

Traditional Biometrics: Privacy OR Security – A Zero-Sum Game
Biometric Encryption: Privacy AND Security – A Positive-Sum Game

1. Traditional: The biometric template stored is an identifier unique to the individual.
   BE: There is no conventional biometric template, therefore no unique biometric identifier may be tied to the individual. (pp. 16, 17)

2. Traditional: Secondary uses of the template (unique identifier) can be used to log transactions if biometrics become widespread.
   BE: Without a unique identifier, transactions cannot be collected or tied to an individual. (pp. 17, 25)

3. Traditional: A compromised database of individual biometrics or their templates affects the privacy of all individuals.
   BE: No large databases of biometrics are created, only biometrically encrypted keys. Any compromise would have to take place one key at a time. (p. 23)

4. Traditional: Privacy and security are not both possible.
   BE: Privacy and security are easily achieved together. (pp. 17-20, 26-28)

5. Traditional: Biometrics cannot achieve a high level of challenge-response security.
   BE: Challenge-response security is an easily available option. (pp. 26-28)

6. Traditional: Biometrics can only indirectly protect the privacy of personal information in large private or public databases.
   BE: BE can enable the creation of a private and highly secure anonymous database structure for personal information in large private or public databases. (pp. 19, 20, 27)

7. Traditional: 1:many identification systems suffer from serious privacy concerns if the database is compromised.
   BE: 1:many identification systems can be both private and secure. (pp. 17, 20)

8. Traditional: Users’ biometric images or templates cannot easily be replaced in the event of a breach, theft or account compromise.
   BE: Biometrically encrypted account identifiers can be revoked and a new identifier generated in the event of a breach or database compromise. (p. 17)

9. Traditional: The biometric system is vulnerable to potential attacks.
   BE: BE is resilient to many known attacks. (p. 18)

10. Traditional: Data aggregation.
    BE: Data minimization. (p. 17)

I'll be writing about the basic idea involved in BE. But I advise downloading the paper, since beyond BE it provides an excellent and well-structured discussion of the issues with biometrics in general.

Name that scam

I received this email from a reader.  Does anyone have any idea what was going on?

Yesterday afternoon, at approx 2:10pm, I started receiving emails (and phone calls) from a variety of websites, mostly financial (home loans, car loans, debt consolidation) but also other services (BMC music, TheScooterStore, Netflix), claiming to be responding to requests from me (at their websites) for services or information.

I’ve received about a dozen emails over the past 24hrs, and about the same number of phone calls at home and about half a dozen at work. 
So somebody is entering my name and personal information (home & work phone, work email, home address & home value – all relatively public info – so far nothing worse like SSN or other credit info) into a variety of websites and signing me up for various services. 

Some of these websites (I have spoken with several salespeople on the phone) are part of marketing networks that either share or sell such information (leads), and I have tracked several of these down to a common source, although it appears there are at least several root sources involved.

My question is this: what is the scam? 

It's possible it's just personal harassment – that there is someone out there who is trying to give me a bad time or is playing a not-so-funny joke.

It doesn’t feel like identity theft – they don’t seem to have private info, but instead seem to have assembled some relatively public info and are inputting that into a bunch of websites. 

Could this be someone trying to defraud a marketing network? If so, do you know how that works? 

Ever heard of anything like this before? (maybe this is a common thing?) 

Btw, at least some of the companies contacting me are legit (QuickenLoans for example, and they were quite helpful on the phone) so it seems the “fraud” is on the input side?

I asked the person who was the target of this attack how he knew for sure that he had been speaking with people from QuickenLoans, for example.  Apparently they simply seemed credible and helpful, so he never questioned their claims or asked to call them back.

It all reminds me of this.

Hong Kong teaches London about civil liberties

Seven hundred and ninety-two years after the Magna Carta, Britain has fallen behind Hong Kong when it comes to civil liberties.  It looks like the US could take a page from the former colony's book as well.  This piece is from The Register:

The Hong Kong privacy commissioner has ordered a school to stop fingerprinting children before it becomes a runaway trend that is too late to stop.

The school, in the Kowloon District, installed the system last year but, under the order of the Hong Kong Privacy Commission, has ripped it out and destroyed all the fingerprint data it had taken from children.

 Roderick Woo, Justice of the Peace at the Hong Kong Office of the Privacy Commissioner, told El Reg he had decided to examine the issue immediately after the first school installed a fingerprint reader to take registers in his jurisdiction.

And, he decided: “It was a contravention of our law, which is very similar to your law, which is that the function of the school is not to collect data in this manner, that it was excessive and that there was a less privacy-intrusive method to use.”

In other words, he said, what better way is there for a teacher to take a register than to look around the class, note who's missing, and take down their names for the record? Measuring fingerprints seemed a little over the top for the task in hand – which, translated into terms understood by privacy law, means that the use of information technology was not proportionate to the task.

He also looked at the need of schools to get consent from either pupils or parents before they took fingerprints at class registration. This is an avenue being considered by parents in the UK who want to challenge schools that have taken their children's fingerprints without parental consent.

Britain's Information Commissioner has said it might be enough for a school to get the consent of a child before taking its fingerprints.

Woo, however, decided otherwise: “I considered the consent of the staff and pupils rather dubious, because primary school children's consent in law cannot be valid and there's undue influence. If the school says, ‘give up your fingerprint’, there's no way of negotiating.

“Also it's not a good way to teach our children how to give privacy rights the consideration they deserve,” he added.

That is another fear expressed by some parents opposed to their children being fingerprinted, even when the majority of the systems in use are much more primitive than those used in criminal investigations.

The Hong Kong Office of the Privacy Commissioner ordered the school to remove the fingerprint system in the hope that it would discourage other schools from installing similar systems without careful consideration, and prevent a rush of school fingerprinting such as has occurred in Britain.

However, Woo did not rule out other schools fingerprinting their children for other purposes.

“That's not to say I'm opposed to any fingerprint scanning systems. I will look at any complaint on a case-by-case basis. It's not an anti-hi-tech attitude I take,” he said.