CardSpace FAQ

As you can imagine, over the years I've answered plenty of questions about CardSpace and Information Cards.  In fact, the questions have been instrumental in shaping the theory and the implementation.  To help put together a definitive set of questions and answers, I'm going to share them on my blog.  I invite you to submit further questions and comments.  You can post directly by using an InfoCard, post on your own blog with a link back, or write to me using my i-name.

Would banks ever accept self-issued information cards?  How could they trust the information on them to be true?

In fact, a number of banks have expressed interest in accepting self-issued cards (protected by PINs) at their online banking sites.

They see self-issued cards as a simple but improved credential compared to a password.  Because a self-issued card is based on public-key technology, the user never handles a secret that can be phished.  The self-issued card uses a 2048-bit RSA key when authenticating to a site, and there is no key-distribution problem.
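The phishing resistance comes from challenge-response signing rather than secret exchange. Here is a toy sketch of the idea, using tiny textbook RSA for illustration only; a real self-issued card uses a 2048-bit key and the full WS-Trust protocol, and none of the names below come from CardSpace itself:

```python
import hashlib
import secrets

# Toy textbook RSA with tiny primes -- illustration only; a real
# self-issued card holds a 2048-bit key pair.
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (never leaves the user's machine)

def sign(message: bytes) -> int:
    """Sign a hash of the message with the private key."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, sig: int) -> bool:
    """The site checks the signature using only the public key (e, n)."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(sig, e, n) == h

# The site issues a fresh challenge; the card signs it.  No password or
# other reusable secret ever crosses the wire, so there is nothing to phish.
nonce = secrets.token_bytes(16)
sig = sign(nonce)
assert verify(nonce, sig)
```

Because each authentication signs a fresh challenge, a site (or an eavesdropper) that records one exchange learns nothing it can replay elsewhere.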

These banks would not request or depend on the card's informational claims (name, address, etc).  The banks have already vetted the customer through their Know Your Customer (KYC) procedures.  So it is just the crypto that is of interest.

There are also banking sites that are more interested in issuing their own “managed cards” – for branding reasons, and as a way to provide their customers with single sign-on to a constellation of services operated in unrelated data centers.

Finally, some banks are interested in using managed cards as a payment instrument within specific communities (for example, high value transactions), and as a way to get into new identity-related businesses.

How can managed cards ever help identity providers prevent phishing, if all the end user has is a password?

Once a user switches to CardSpace, phishing is not possible even when passwords are used as an IdP credential.

That is because an Information Card reference is included as part of the “Request Security Token” message sent to the IdP. The reference can carry a second secret, the CardId, which is never released except encrypted to a certificate specified in the card's metadata.

Even if the user is tricked into leaking her password, she doesn’t know the CardId and can’t leak it. If the IdP verifies that the correct CardId is present in the Request Security Token message (as well as the username and password), a phisher who captures the password still cannot impersonate the user.
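A hypothetical IdP-side check might look like the following sketch. The account store and message fields are invented for illustration; a real implementation parses the WS-Trust RST XML rather than a dictionary:

```python
import hmac
import secrets

# Hypothetical IdP account store: each user has a password AND a CardId
# bound to her Information Card (names here are illustrative only).
accounts = {
    "alice": {"password": "correct horse", "card_id": secrets.token_hex(16)},
}

def authenticate(rst: dict) -> bool:
    """Accept a (simplified) Request Security Token only if both secrets match."""
    acct = accounts.get(rst.get("username"))
    if acct is None:
        return False
    pw_ok = hmac.compare_digest(acct["password"], rst.get("password", ""))
    # The CardId never travels in the clear; CardSpace releases it only
    # encrypted to the certificate in the card's metadata, so a phisher
    # who captures the password still cannot supply it.
    card_ok = hmac.compare_digest(acct["card_id"], rst.get("card_id", ""))
    return pw_ok and card_ok

good = {"username": "alice", "password": "correct horse",
        "card_id": accounts["alice"]["card_id"]}
phished = {"username": "alice", "password": "correct horse"}  # no CardId
assert authenticate(good) and not authenticate(phished)
```

The point of the sketch is only the final conjunction: a stolen password alone never satisfies the check.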

Why can't you use smart cards, dongles, and one-time password devices with CardSpace?

You can.  Using a password is only one option for accessing IdPs.  CardSpace currently supports four authentication methods:

  1. Kerberos (as supported on *NIX systems and Active Directory):  This is typically useful when accessing an IdP from inside a firewall.
  2. X.509:  This allows conventional dongles, smart cards, and soft certs to be used. Further, since many devices (such as biometric sensors) integrate with Windows by emulating an X.509 device, it supports these other authentication methods as well.
  3. Self-Issued Card:  In other words, the RSA keys present in one of your self-issued cards can be used to create a SAML token.
  4. Username / password:  The password can be generated by an OTP device if the IdP supports it, and this is an extremely safe option.

Biometric encryption

This diagram from Cavoukian and Stoianov's recent paper on biometric encryption (introduced here) provides an overview of the possible attacks on conventional biometric systems (click to enlarge; consult the original paper, which discusses each of the attacks).


Having looked at how template-based biometric systems work, we're ready to consider biometric encryption.  The basic idea is that a function of the biometric is used to encrypt (bind to) an arbitrary key.  The key is stored in the database, rather than either the biometric or a template.  The authors explain:

Because of its variability, the biometric image or template itself cannot serve as a cryptographic key. However, the amount of information contained in a biometric image is quite large: for example, a typical image of 300×400 pixel size, encoded with eight bits per pixel, has 300×400×8 = 960,000 bits of information. Of course, this information is highly redundant. One can ask a question: Is it possible to consistently extract a relatively small number of bits, say 128, out of these 960,000 bits? Or, is it possible to bind a 128-bit key to the biometric information, so that the key could be consistently regenerated? While the answer to the first question is problematic, the second question has given rise to the new area of research, called Biometric Encryption.

Biometric Encryption is a process that securely binds a PIN or a cryptographic key to a biometric, so that neither the key nor the biometric can be retrieved from the stored template. The key is re-created only if the correct live biometric sample is presented on verification.

The process is represented visually in a second diagram in the paper (click through to enlarge).

Perhaps the most interesting aspect of this technology is that the identifier associated with an individual includes the entropy of an arbitrary key.  This is very different from using a template, which will be more or less identical as long as the template algorithm remains constant.  With BE, I can delete an identifier from the database and generate a new one by feeding a new random key into the biometric “binding” process.  The authors thus say the identifiers are “revocable”.
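One way to make the binding and revocability concrete is the fuzzy-commitment construction of Juels and Wattenberg, which the BE literature builds on. In this sketch a 5× repetition code stands in for the stronger error-correcting codes real systems use, and the random bit vectors are a stand-in for actual biometric features:

```python
import secrets

# Fuzzy-commitment sketch of the key binding behind Biometric Encryption.
REP = 5  # repetition factor of the toy error-correcting code

def encode(key_bits):
    # Expand each key bit into REP copies (the "codeword").
    return [b for bit in key_bits for b in [bit] * REP]

def decode(code_bits):
    # Majority vote within each group of REP bits corrects small errors.
    return [int(sum(code_bits[i:i + REP]) > REP // 2)
            for i in range(0, len(code_bits), REP)]

def bind(key_bits, bio_bits):
    # Stored helper data: codeword XOR biometric.  Neither the key nor
    # the biometric is recoverable from the helper alone.
    return [c ^ b for c, b in zip(encode(key_bits), bio_bits)]

def release(helper, bio_bits):
    # A fresh, close-enough sample regenerates the bound key.
    return decode([h ^ b for h, b in zip(helper, bio_bits)])

key = [secrets.randbelow(2) for _ in range(8)]
enrolled = [secrets.randbelow(2) for _ in range(8 * REP)]
helper = bind(key, enrolled)

noisy = enrolled[:]
for i in (3, 17, 30):          # a fresh scan differs in a few bits
    noisy[i] ^= 1
assert release(helper, noisy) == key

# "Revocation": discard the helper and bind a fresh random key to the
# same biometric, yielding an entirely new identifier.
new_key = [secrets.randbelow(2) for _ in range(8)]
helper2 = bind(new_key, enrolled)
assert release(helper2, enrolled) == new_key
```

Note that the helper data alone reveals neither key nor biometric, but the same enrolled biometric keeps releasing whatever key is bound to it, which is exactly the concern raised below.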

This is a step forward in terms of normal usage, but the technology still suffers from the “glass slipper” effect.  A given individual's biometric will be capable of revealing a given key forever, while other people's biometrics won't.  So I don't see that it offers any advantage in preventing future mining of databases for biometric matches.  Perhaps someone will explain what I'm missing.

The authors describe some of the practical difficulties in building real-world systems (although it appears that Philips already has a commercial system).  It is argued that, for technical reasons, fingerprints lend themselves less to this technology than iris and facial scans.

Several case studies are included in the paper that demonstrate potential benefits of the system.  Reading them makes the ideas more comprehensible.

The authors conclude:

Biometric Encryption technology is a fruitful area for research and has become sufficiently mature for broader public policy consideration, prototype development, and consideration of applications.

Andy Adler at the University of Ottawa has a paper looking at some of the vulnerabilities of BE.

Certainly, Cavoukian and Stoianov's fine discussion of the problems with conventional biometrics leaves one more skeptical than ever about their use today in schools and pubs.

A sweep of their tiny fingers

My research into the state of child fingerprinting has led me to this extreme video – you will want to download it.  Then let's look further at the technical issues behind fingerprinting.

Here is a diagram showing how “templates” are created from biometric information in conventional fingerprint systems.  It shows the level of informed discourse emerging on activist sites such as LeaveThemKidsAlone, which is dedicated to explaining and opposing child fingerprinting in Britain.

Except in the most invasive systems, the fingerprint is not stored – rather, a “function” of the fingerprint is used.  The function is normally “one-way”, meaning you can create the template from the fingerprint by using the correct algorithm, but cannot reconstitute the fingerprint from the template.

The template is associated with some real-world individual (Criminal?  Student?).  During matching, the fingerprint reader again applies the one-way function to the fingerprint image, producing a blob of data that matches the template within some tolerance.  Because of this tolerance, in most systems the template doesn't behave like a “key” that can simply be looked up in a table.  Instead, the matching software is run against a series of templates, and calculations are performed in search of a match.
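The tolerance problem can be sketched minimally as follows, with templates as toy feature vectors; real systems compare minutiae sets, and the names and numbers here are invented:

```python
import math

# Toy illustration of why a fingerprint template is not a lookup key.
TOLERANCE = 0.5  # maximum distance at which a probe counts as a match

database = {
    "student-17": [0.12, 0.80, 0.33],
    "student-42": [0.90, 0.10, 0.55],
}

def identify(probe):
    # No exact lookup is possible: a fresh scan never reproduces the
    # enrolled vector bit-for-bit, so we scan every template and accept
    # the first one within the tolerance.
    for person, template in database.items():
        if math.dist(probe, template) <= TOLERANCE:
            return person
    return None

assert identify([0.15, 0.78, 0.30]) == "student-17"   # close enough
assert identify([0.50, 0.50, 0.90]) is None           # nobody within tolerance
```

The linear scan is the structural point: matching cost grows with the database, and every enrolled template must be tested against every probe.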

If the raw image of the fingerprint were stored rather than a template, and someone were to gain access to the database, the raw image could be harnessed to create a “gummy bear” finger that could potentially leave fake prints at the scene of a crime – or be applied to fingerprint sensors.

Further, authorities with access to the data could also apply new algorithms to the image, and thus locate matches against emerging template systems not in use at the time the database was created.  For both these reasons, it is considered safer to store a template than the actual biometric data.

But by applying the algorithm, matching of a print to a person remains possible as long as the data is present and the algorithm is known.  With the negligible cost of storage, this could clearly extend throughout the whole lifetime of a child.  LeaveThemKidsAlone quotes Brian Drury, an IT security consultant who makes a nice point about the potential tyranny of the algorithm:

“If a child has never touched a fingerprint scanner, there is zero probability of being incorrectly investigated for a crime. Once a child has touched a scanner they will be at the mercy of the matching algorithm for the rest of their lives.” (12th March 2007 – read more from Brian Drury)

So it is disturbing to read statements like the following from Mitch Johns, President and Founder of Food Service Solutions, whose company sells the system featured in the full Fox News video referenced above:

When school lunch biometric systems like FSS’s are numerically-based and discard the actual fingerprint image, they cannot be used for any purpose other than recognizing a student within a registered group of students. Since there’s no stored fingerprint image, the data is useless to law enforcement, which requires actual fingerprint images.

Mitch, this just isn't true.  I hope your statement is the product of not having thought through the potential uses that could be made of templates.  I can understand the mistake – as technologists, we often don't think of evil usages.   But I hope you'll start explaining what the risks really are.  Or, better still, consider replacing this product with one based on more mature technology, exposing children and schools to less long-term danger and liability.