“…Let's look at this in relation to an e-commerce transaction where we are buying something on the Internet over $250.
“First, because we (the consumers) have voluntarily submitted our information with the intention of entering into a business transaction, we have given our consent for the business to verify the information we've presented.
“Once the business receives the information, in the interest of controlling fraud and completing the transaction as quickly as possible (avoiding a manual review of the transaction by the business), it uses an automatic system to verify that the personal information submitted is linked to a real person and that we are indeed that person.
“Enter IDology's knowledge-based authentication (KBA), which scours (without exposing) billions of public data records to develop on-the-fly intelligent multiple-choice questions for the person to answer. Our clients vary in how they deliver KBA: some reward their customers with expedited shipping for going through the process; others consider it a further extension of the credit card approval process, during which various data elements associated with the card, such as the billing address, are validated along with the credit approval.
“The key is for a business to use a KBA system that bases its questions on non-credit data and reaches back into your public records history, so that the answers are not easily guessed or blatantly obvious. Typically, consumers find credit-based questions (what was the amount of your last mortgage payment, bank deposit, etc.) intrusive and difficult to answer, and these types of answers can be forged by stealing someone's credit report or accessed with compromised consumer data. Without giving away too much of our secret sauce, our questions relate to items such as former addresses (from as far back as college), people you know, vehicle information, and anything else that can be determined confidentially without exposing data from existing public data sources. Once the system processes the results (all in real time), it simply shares how many questions were answered right or wrong so that the business can determine how to handle the transaction further. The answers are not given within the transaction processing (protecting the consumer and the business from employees misusing data), and good KBA systems have many different types of questions to ask, so that the same questions are not always presented and one question doesn't give away the answer to another…
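The flow described above can be sketched in a few lines. This is a hypothetical illustration only (the class, question data, and names are my own invention, not IDology's system): the verification service holds the questions and correct answers built from public records, the relying party sees only the prompts, and the service returns nothing but the counts of right and wrong answers.

```python
# Hypothetical sketch of the KBA flow described above. The questions,
# names, and data are invented for illustration; the point is that the
# correct answers never leave the verification service -- only a score does.

class KBASession:
    def __init__(self, questions):
        # questions: list of (prompt, choices, correct_choice) tuples
        # derived from public-record data; answers stay server-side.
        self._questions = questions

    def prompts(self):
        # The merchant and consumer see only prompts and choices,
        # never the correct answers.
        return [(prompt, choices) for prompt, choices, _ in self._questions]

    def score(self, responses):
        # Return only how many answers were right or wrong, as the
        # letter describes, so relying-party employees never see the data.
        right = sum(
            1
            for (_, _, correct), answer in zip(self._questions, responses)
            if answer == correct
        )
        return {"right": right, "wrong": len(self._questions) - right}


# Example with made-up questions (illustrative only):
session = KBASession([
    ("Which street did you live on in 1998?",
     ["Oak St", "Elm St", "Main St"], "Elm St"),
    ("Which of these people do you know?",
     ["A. Smith", "B. Jones", "C. Lee"], "B. Jones"),
])
print(session.score(["Elm St", "C. Lee"]))  # {'right': 1, 'wrong': 1}
```

The business then decides, from the score alone, whether to complete the transaction, escalate to manual review, or decline.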
“At the end of the day, the consumer, by completing this e-commerce transaction, is establishing a single trusted identity with that business. The next extension is for the consumer to use this verification process to validate his or her identity in other economic transactions, or to carry an established verified identity when posting to a blog or entering a conversation in a social network where participants have agreed to be verified, whether to establish a trusted network or because they are concerned about the age of someone in it. To us, KBA can be an important part of establishing and maintaining a trusted identity.”

Let's begin by supposing this technology becomes widely adopted.
My first concern would regard the security of the system from the merchant and banking point of view. Why wouldn't an organized crime syndicate be able to set itself up with exactly the same set of publicly available databases used by IDology and thus be able to impersonate all of us perfectly – since it would know all the answers to the same questions? It seems feasible to me. I think it is likely that this technology, if initially successful and widely deployed, will crumble under attack because of that very success.
My second concern regards the security of the system from the point of view of the individual; in other words, her privacy. IDology's approach takes progressively more obscure aspects of a person's history and then, through the question and answer process, shares them with sites that people shouldn't necessarily trust very much.
The scenario is intended to weed out bad apples talking to good sites, but if adopted widely, it infringes on the security of good apples talking to bad sites – or even of good apples talking to sites whose morals are influenced by the profit motive (not that there are many of those around).
Is this really an application of minimal disclosure? I fear it is more an application of Norlin's Maxim: The internet inexorably pulls information from the private domain into the public domain. As in the case of a tree falling in a forest with no one to observe it, historical data which, despite being digital, is left alone, represents less of a privacy problem than that which is circulated widely.
I would much rather see IDology apply its resources to the initial registration of a user, and then provision a service that releases only the result of its inquiry (e.g. some number between 1 and 10) as an identity claim. This would be data minimization in line with my second law.
I still worry that organized crime could take advantage of its access to public information to subvert even the singular registration phase, but at least the mechanisms used by IDology and like firms could include ones which attackers are unlikely to learn about (this is itself no small feat).
Clearly, in line with the first law of identity, users would have to know what the strength of their rating is, and how to seek redress should it not be right.
It's not my place to argue how things should be done – I'm just expressing my concerns about John's system as he has described it and I have understood it.
In short, I would much prefer a claims based approach to that of having the “secret public” information flow through untrusted relying parties. I especially worry about teaching users to enter even more obscure information into forms appearing on free-floating web pages – which would be like enrolling them in a graduate course at the School of Blabbing Your Secrets.