The Microsoft Identity and Access Team is hiring

Sorry to get overly Microsoft specific, but when I have the chance to enlarge our team, I just can't resist. I'm sure you'll all forgive me.

The Microsoft Identity and Access Team (IDA), which is where I work, is hiring a number of people in development, test and program management – starting as soon as possible. We're looking for good people – at all levels of experience. The positions are in Seattle, Washington, USA.

I've put together a page here that lets you bypass some of the relevant bureaucracy. In fact, I've set up an i-name that will get you right through to Dale, the person who handles recruiting for us. Tell him you want to send a CV and he'll send you his email address.

Two ways to achieve privacy

Scott Blackmer, a cyber lawyer you may have heard speaking at this year's Burton Group Catalyst Conference, has contributed the following comment to our Separation of Context discussion:

‘As individuals, we should be interested BOTH in solutions based on “privacy through obscurity” and solutions based on “privacy through accountability” — and technology has a role to play in both approaches.’

I think this is a very important point (although I'll return to the use of the phrase “privacy through obscurity” in a subsequent post).

‘A digital identity system (or metasystem) can facilitate an individual’s technical control over the distribution of sensitive identity attributes (SSN / SIN, national ID number, credit card account, etc.), limiting the number and kind of entities that receive this information – this is privacy through obscurity.

‘Link contracts such as Drummond describes can add a layer of technical and legal accountability for those that are provided the information, by tracking and imposing conditions on how it is used and with whom it is shared — privacy through accountability.

‘One condition that can be imposed by law or contract is not to repurpose the data or share it with third parties without notice and consent, which can further limit the dispersal of information that is particularly useful in correlation attacks.

‘Correlation techniques will still exist, of course, and we’ll never get complete control over all combinations of identifying information that can be collected with little cost or effort. That’s not necessarily bad; correlation technology offers benefits as well as risks. Government agencies use correlation techniques to track down deadbeat dads and potential terrorists; employers and lenders rely on such techniques to avoid hiring fraudsters or extending credit to people who are bad credit risks. Marketers and political parties using correlation techniques are satisfied with probable rather than certain identification, because they just want to pitch their products or candidates at likely prospects, and they don’t pose much of a risk to individuals beyond annoyance.’

I'm not sure I buy the idea that because governments and police – under the guidance of the courts – should be able to do something, anyone else should as well. And I think a potential employer or lender should obtain my consent before sharing the information I supply with others for verification or correlation. (In doing this, the uses made of this information should be controlled and revealed.) Such a regime would be just as effective in preventing the hiring of a fraudster as today's roughshod measures, but would give people a greater degree and sense of control.

‘From an identity management standpoint, we should probably focus on correlation “attacks” – deliberate efforts to piece together personally identifiable information for criminal purposes, such as fraud, money laundering, stalking, or gaining unauthorized access to protected buildings and computer systems. Can a digital identity system make it harder to perpetrate correlation attacks? As a society, for example, perhaps we should make more of an effort to give individuals the option of obscuring data revealing their physical address or current location (because they have an abusive ex-spouse, for example, or they work in an abortion clinic, or somebody has pronounced a fatwa against them). And government agencies and commercial enterprises could make many correlation attacks irrelevant by requiring identifying information that is not so easy to collect as, for instance, an SSN, birthdate, and mother’s maiden name, when issuing an ID or approving a transaction.

‘Both government and business are under pressure today to adopt and rely on “stronger” forms of identification, ID that cannot so easily be obtained or mimicked by fraudulent practices such as correlation attacks, phishing, and social engineering. As Stefan says, stronger ID credentials carry their own security risks, and we should point those out and take them into account in designing digital ID systems. As these stronger forms of official and financial ID are deployed, it will be increasingly important to control how they are used, legally and contractually. Look at all the jurisdictions passing laws on the use of Social Security Numbers today, for example – they will be even more anxious to regulate the use of a super-ID. And individuals will need to know when they are asked for this ID what their technical and contractual options are (if any) for controlling its use and dissemination. Techniques such as link contracts may be very useful in this regard, to provide accountability beyond the areas controlled by regulation.’

I presume Scott is thinking of link contracts as being examples of legal mechanisms constraining the use of information (often called use policy).

Scott has posted his presentation to Catalyst as a fully expounded document called Privacy and Information Management.


So many phish, so little time…

If you don't have your own spam, here are two little phish that turned up in my corporate mail in one day.

From: Mr. Fredrick Andrew. []


My name is Mr. Fredrick Andrew. I trained and work as an external auditor for the Development Bank of Singapore (DBS). I have taken pains to find your contact through personal endeavours because a late investor, who bears the same last name with you, has left monies totaling a little over $10 million United States Dollars with Our Bank for the past twelve years and no next of kin has come forward all these years.

Isn't that a coincidence? One of my really lucky breaks!

[Blah. Blah. Blah… – Kim]

Needless to say, Uttermost CONFIDENTIALITY is of vital importance if we are to successfully reap the immense benefits of this transaction. I have intentionally left out the finer details for now until I hear from you. To affirm your willingness and cooperation to my proposal please do so by email, stating your full names, date of birth, telephone number and fax number. I do expect your prompt response. pls do contact me in my email address:

[ ]

Waiting to hear from you soon.

Thank you.

Mr. Fredrick Andrew

There is the small problem that when I ping the sender's address, my IP-location service tells me it's in Israel. Do you think the discrepancy with Singapore matters?

Anyway, if one day I just stop blogging, you'll know this has come through for me!

In the meantime, here's the other one – a lot more sophisticated:

eBay Safeharbor Department Notice

Fraud Alert ID : 00626654

Dear eBay member,

You have received this email because you or someone else had used your identity to make false purchases on eBay. For security reasons, we are required to open an investigation on this matter. We treat online fraud seriously and all cases which cannot be resolved between eBay and the other involved party are forwarded for further investigations to the proper authorities. To speed up this process, you are required to verify your personal information against the eBay account registration data we have on file by following the link below.

Please save this fraud alert id for your reference.

When submitting sensitive information via the website, your information is protected both online and off-line. When our registration/order form asks users to enter sensitive information (such as credit card number and/or social security number), that information is encrypted and is protected with the best encryption software in the industry – SSL.

Please Note – If your account informations are not updated within the next 72 hours, we will assume this account is fraudulent and it will be suspended. We apologize for this inconvenience, but the purpose of this verification is to ensure that your eBay account has not been fraudulently used and to combat fraud.

We apreciate your support and understanding, as we work together to keep eBay a safe place to trade.

Thank you for your patience in this matter.

Regards, Safeharbor Department (Trust and Safety Department)
eBay Inc.

Please do not reply to this e-mail as this is only a notification mail sent to this address and can not be replied to.

Copyright 2005 eBay Inc. All Rights Reserved.
Designated trademarks and brands are the property of their respective owners.
eBay and the eBay logo are trademarks of eBay Inc. which is located on Hamilton Avenue, San

If you look at the source for this one (I've defused it slightly), you'll see it's hard-coded to, which couldn't find, but placed in Pakistan – the ISP being the Pakistan Software Export Board. I really like the way the copyright notice makes everything look official. Note that despite the sophistication of the attack, the text still contains grammatical errors to alert us.


Probabilistic versus Determinate Linking

For those following the discussion on probabilistic versus determinate linking, it might be worth rewinding for a minute to consider the Fourth Law of Identity.

In presenting the fourth law, I agreed that traditional omnidirectional identifiers, by which I mean identifiers known to all, were appropriate for public contexts.

Here are some examples of what I meant by public contexts:

  • A stable well-known identifier is essential for MSDN, AOL, my bank, or even my Identity Blog. It is beneficial for such public identifiers to stay constant. I want readers to share information about it. The more easily they can tell each other about the pieces on this and related websites, the better. These are all public things.
  • A well-known identifier is similarly appropriate for a “hot spot” in a shopping center. The hot spot is obviously “there” and a fixed wireless beacon is a helpful part of its presence. Otherwise I might end up exposing payment information to the wrong parties.
  • A well-known identifier is appropriate for a vending machine supporting digital payment. Again, the identifier would just be an extension of its physical presence.
  • A well-known identifier (an email address is a typical example) could be appropriate for a public role, like my role as architect of identity at Microsoft.
  • I could also employ a well-known identifier associated with a protective service offering more granular control. (For example, I use the i-name =Kim.Cameron to protect myself from spam – and it works really well.)

But in defining the fourth law I also argued that omnidirectional identifiers were not sufficient. In the parts of our life where we act as private individuals, we should have access to a technology which prevents collaboration about our identities except under our strict control. In achieving this, we can have two approaches to use of identifiers:

  • Avoid identifiers of any kind. This means (network addresses and information content aside – both separate discussions) that interaction contexts are completely disconnected – whether separated by points in time, or by the identity of the partner.
  • Use unidirectional identifiers, meaning those known only to a single partner – so that an interaction context can be maintained with that partner over time, yet remain disconnected with respect to interactions with other partners. I might subsequently choose to share some unidirectional identifiers between two (or more) partners – if they give me the right incentives. But being the only one who initially knows all the identifiers, collusion between my partners is not possible without my knowledge.
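The second approach can be made concrete with a small sketch (my own illustration, not part of any specific identity system): derive each partner-facing identifier from a secret only the user holds, so the identifier is stable for that partner over time but useless as a cross-partner correlation handle.

```python
import hashlib
import hmac

def unidirectional_id(master_secret: bytes, partner: str) -> str:
    """Derive an identifier known only to one partner.

    Stable for a given partner (so the relationship persists over
    time), but partners cannot correlate their identifiers without
    the user's cooperation, because each identifier is derived from
    a secret only the user holds.
    """
    return hmac.new(master_secret, partner.encode(), hashlib.sha256).hexdigest()

secret = b"user-held master secret"  # never shared with any partner
id_for_bank = unidirectional_id(secret, "bank.example")
id_for_shop = unidirectional_id(secret, "shop.example")

# Same partner, same identifier; different partners, unrelated identifiers.
assert id_for_bank == unidirectional_id(secret, "bank.example")
assert id_for_bank != id_for_shop
```

The user can always reveal a mapping between two such identifiers later – given the right incentives – but no one else can discover it.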

Why would you want to separate interaction contexts this way?

To prevent partners with whom you have shared information of one kind from amalgamating it with information collected about you by other partners, in order to create a “super-dossier” across different aspects of your life. (If this seems improbable, click here, then read this.)

Solove and others have explained that there are outfits which even today attempt to discover the correlations between our profiles at different organizations or sites, and who then assemble super-dossiers and sell them, even to government buyers. If such correlations are possible, why does it still make sense to insist on unidirectional identifiers?

I think there are several reasons.

The first is that if we want people to trust the emerging identity metasystem, we need to give them the ability to predict and intuit how it will behave.

Users can easily understand that if they give a telephone number or email address to two different parties, those parties can correlate them. This happens in the so-called “real world” as well.

But if users release no identifying information whatsoever, a system which still sets up invisible correlation handles would really be failing them. If this sounds like an unlikely technical outcome, remember that this is precisely what happens in the typical use of client X.509 certificates. Even PGP is subject to this problem (and worse, reveals the membership of one's entire circle of trust).

But there is another reason – namely, that correlation handles virtually eliminate the cost of discovering correlations, while providing 100% accuracy. We know that if correlation has a significant cost, then there must be a significant and provable cost benefit to justify it. Conversely, if it comes for free, then super-dossiers come for free, and their proliferation – completely outside of the user's control – is more or less inevitable.

I would see this proliferation as catastrophic – partly because people don't want to live in a virtual world where they feel like characters in a Kafka novel; and partly because there is great likelihood it would ultimately bring about rejection of the underlying identity system by many of those most essential to its success – the opinion leaders, those who think deeply about the implications of things, those who innovate and create, those who affect public opinion.

By providing alternatives to the use of correlation handles, not only is the cost of discovering correlation increased, but the probability of the correctness of attempted correlations is reduced. This, in turn, implies further hidden cost as misinformation turns into liability. These costs and liabilities combine to discourage commercial super-dossiers constructed without the permission and participation of the individual. Given a prohibitive cost model for super-dossier activity, other less alienating means of developing real relations with customers are likely to be more cost-effective and beneficial all around.

That's really the background to yesterday's discussion about “Data Pollution and Separation of Context”. And since that posting, a number of comments have been made that are helpful in breaking through to a better understanding of how to think about and explain these complex issues.

Tom Gordon‘s contribution rang very true with me, and sounds a warning about the effect Data Pollution and false correlation will have on customers in general:

I have had one large company in the UK use incorrect information when trying to contact me about services I was purchasing from them. However, since they had previously contacted me successfully (and another department in the same company had telephoned me a few days beforehand), it appears they deliberately chose to use outdated information so they would have a failed contact record.

The interesting thing is they used information that was 3 years old, even though the department in question had sent me a letter (to the correct address) a few weeks before.

Certainly that company hadn't cleaned up its customer identity data! The symptoms described often appear when previously independent entities have been brought together under a common umbrella through reorganization, including mergers and acquisitions. The same customer appears in multiple unrelated computer systems, and it's difficult to unify them. Metadirectory helps in this regard. But getting it right depends on what we call the “identity join”. How do we know two accounts refer to the same customer? And how do we keep from making mistakes when figuring this out?
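To see why the identity join is inherently probabilistic, here is a deliberately crude sketch (the records, field names and scoring rule are all invented for illustration): it scores two account records by average string similarity across shared fields, which yields a guess, never a guarantee.

```python
import difflib

def match_score(a: dict, b: dict) -> float:
    """Crude probabilistic 'identity join' heuristic: average string
    similarity across the fields two account records have in common."""
    fields = set(a) & set(b)
    scores = [difflib.SequenceMatcher(None, str(a[f]).lower(),
                                      str(b[f]).lower()).ratio()
              for f in fields]
    return sum(scores) / len(scores)

crm = {"name": "Jonathan A. Smith", "city": "Leeds"}
billing = {"name": "Jon A. Smith", "city": "Leeds"}

score = match_score(crm, billing)
# A high but sub-1.0 score: the join is a suspicion, not a certainty,
# which is exactly why the customer should be asked to confirm it.
assert 0.5 < score < 1.0
```

Real matching engines are far more sophisticated, but they share this property: the output is a probability, and someone has to decide what to do below 100%.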

On this subject, Felipe Conill writes:

In my current job I am leading an effort with the goal of presenting data to customers from separate databases that identify the customer differently (different customer identifiers).

One of the many challenges we have to address to solve this problem is the challenge of using the identity of the customer when he logs in from a browser (where the identity datasource is reachable from the internet – which in itself presents all kinds of security risks) to query other data sources, giving them information specific to their company.

The risk in doing this is that you don't want to show customer A the bill of customer B. To ensure this does not happen we need to have a mapping table to match who gets to see what.

Instead of doing this messy solution we are putting a common identifier for the customer in all of the datasources where there is customer data like Stefan suggests. Basically bypassing having to do an “identity join” to solve the problem.

I need to stop Felipe for a moment. Perhaps it's just a “vocabulary thing” – I see how a SQL aficionado, for example, might take the word “join” in a much more restrictive sense – but in the vocabulary Craig Burton and I developed, you are not avoiding doing an “identity join” at all. You are performing an identity join, and then using that to push a common identifier into all your systems as a way to represent it permanently.

This makes sense since your customers presumably want a single relationship with your company. I know I have been frustrated for years by the fact that my bank, for example, has still not “gotten it together” to give me a single login to all the services I use there.

Now the question becomes one of how you do the identity join. You probably have enough data to make a very well informed guess about what account should be joined to what. But if the data is important enough, you likely need to ask the user to verify your conclusions. For example, I have seen systems where “modern correlation technology” is used to propose that various accounts might belong to a given user, but which still ask the customer to demonstrate his ability to access them before information is merged.

In the real world you would never get to do this since the same entity does not control all information. You have governments, foreign entities and business competitors that would never agree to use the same identifier for someone.

This is true, but let's suppose they did. Would the user want or accept this? If the user does not, it is my view – and Stefan's mathematical argument makes this point superbly – that it is virtually impossible to accurately know what to join to what.

I agree with Kim in that privacy-through-obscurity can be achieved by technological protections of privacy. Correlation errors are inevitable if the scope is big and in a lot of cases intolerable. By the way Kim, congrats for this blog. It really stimulates thinking reading from people at your level!

Thank you for those kind words – your point of view stimulates my thinking as well.

Simon Chen then says:

I agree with Stefan's analysis regarding how the error probability associated with linking identity accounts could become intolerable. However, I disagree with the alternative vision he's proposing: “Now, on the other hand, imagine a world where each user has only one user identifier that is the basis of all his or her interactions…”

I do not feel that all the users in the world can ever agree to use a single identifier representing him/her. The Internet is simply too diverse and mutable for something like this. Therefore, I believe that we can never avoid having to map identities between organizations.

I agree totally with Simon's point, but have to clear things up for Stefan, who would never propose use of a single identifier across contexts. In fact, he later added this clarifying remark:

Regarding Simon Chen's comment, perhaps it was not sufficiently clear that my paragraph “Now, on the other hand, imagine a world where each user has only one user identifier …” was intended in an ironical manner. In fact, my own work in the past 14 years is all about technically achieving, amongst others, the user-controlled approach that Simon outlines. See, for instance, here and here.

So this is good – we are all on track both for separation of contexts and multiple identifiers, but coming at it from slightly different points of view. Simon continues:

There are fundamental error probabilities associated with identity linkage, but I also believe this error can be manageable with new trust models and infrastructure.

I think Stefan made the assumption that the service providers are responsible for updating identity mappings, which can lead to major data integrity problems.

While this is true in the current world, why not delegate this responsibility in the future to the users who own the identity information in the first place?

Now imagine, in the spirit of the First Law of identity, a user with his identity information distributed across multiple service providers, and the providers are part of a trust network with established business relationships. The user can use a personal identity management interface to update how his/her identity information can be mapped and shared with his service providers, and these user-driven changes can be propagated across the trust network through this interface.

The personal identity management interface can be hosted by any service providers in the trust network, and it simply represents a gateway for the user to tap into the trust network and manage his/her identity. In this model, the user can control the number of different service providers (or contexts) that can store identity information about him and how his identity information can be shared across contexts.

Yes, much of this thinking is along the same lines as intended by InfoCards – not to imply that they represent a “silver bullet”.


Data Pollution and Separation of Context

Stefan Brands has posted one of the best argued and most important comments yet on the issue of identity correlation, the phenomenon giving rise to the Fourth Law of Identity.

By way of background, this was part of a conversation taking place in an ID Gang discussion group hosted by Berkman Law School. Our friend Drummond Reed posted a comment which, although perfectly innocent in its intent, sent me into Tasmanian Devil Mode.

‘Ever since I saw the shocking powers of modern correlation technology – it only takes 2 to 3 pieces of MANY kinds of perfectly innocent data (e.g., zip code and income) to uniquely identify a person with a 99+% statistical accuracy – I realized that privacy-through-obscurity was hopeless. Which means privacy-through-accountability is the only option.’

Accountability is indeed important, but not in any way a substitute for technological protections of privacy. Thus, although Drummond is a big supporter of the Fourth Law and context-specific identifiers, I felt it was necessary to underline the key importance of the distinction between probabilistic and determinate correlation. So I wrote:

‘The “modern correlation technology” argument made by Drummond easily leads to the wrong conclusions. The zip code plus income example is typical, and gets my goat because it leads some to say “you can be identified with a few pieces of information, so it doesn’t really matter if correlation handles exist.”

‘In Drummond’s example, how accurately has the income been expressed, and what is the size of the zipcode? … “Modern correlation technology” is based on approximations and fuzzy calculation and is very expensive relative to using “database keys”. It is appropriate to *keep it that way* and make it *more expensive still*.’
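The granularity point can be made concrete with a toy simulation (all numbers invented, not from any real dataset): the more precisely income is expressed, the smaller the “anonymity set” each person hides in, and the cheaper and more accurate correlation becomes.

```python
import random
from collections import Counter

random.seed(1)

# Invented population: 10,000 people spread over 100 zip codes,
# each with an exact income between $20k and $200k.
people = [(random.randrange(100), random.randint(20_000, 200_000))
          for _ in range(10_000)]

def unique_fraction(income_bucket: int) -> float:
    """Fraction of people uniquely identified by (zip, income bucket)."""
    counts = Counter((z, income // income_bucket) for z, income in people)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(people)

# With exact income, nearly everyone is unique within their zip code;
# with coarse $50k buckets, almost no one is.
print(unique_fraction(1), unique_fraction(50_000))
```

The lesson matches Kim's point: identifiability is a function of how the attributes are quantized, and fuzzier expression forces correlation back into the realm of expensive approximation.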

Stefan then entered the discussion, extending our consideration of the problem of fuzzy calculation (of correlation) to include the inevitability of correlation errors.

Many readers will understand this because they know that to rationalize their identity infrastructure, enterprises have had to go through the well-known pain of doing what Craig Burton and I, over a decade ago, described as the “identity join”. This was the process of determining how the identifiers used in disparate computer and directory systems throughout the enterprise mapped to each other.

Performing this join accurately usually proved laborious – even though that join represented a trivial problem compared to one at the scale of the Internet as a whole. Further, enterprise administrators had many advantages over those trying to employ “modern correlation techniques”. Besides dealing with a relatively small population, they enjoyed unlimited access to data and identifying information, flexible tools, and the ability to ask the data subjects to collaborate! It was still expensive to get everything right.

Stefan goes on to present an abstracted (mathematical) model which could be the basis for an economics of the phenomena described. If there isn't a ground-breaking paper waiting to be written about identity economics, I'll eat my hat.

Here is Stefan's contribution – one which I think is crucial (I've added some emphasis):

In support of Kim's defense of information privacy, here is another observation: there is a world of difference for organizations between

  1. a link between different user identifiers that is 100% guaranteed and
  2. a link that is only suspected (e.g., is Jon A. Smith really the same person as Jonathan A. Smith?).

Consider this. When organizations link up user accounts (also known as records, files, dossiers, etc.) that are indexed by different user identifiers and they have no guarantee of the correctness of the linkage, the aggregated information in the “super-account” may well become completely worthless to them and may even become a liability.

Even a 0.1% error probability may in many cases be intolerable. Imagine the consequences of hooking up the wrong health care or crime-related information on a per-user basis and making a medical or criminal justice error as a result.

Depending on the business of the organization, there may be a significant cost associated with acting on wrong information, not only from a liability perspective, but also from a goodwill, security, or resource cost perspective.

The more user accounts are linked up into one aggregated “super-account”, the higher the error probability. We are dealing with a geometric distribution here. In an abstracted model, if the probability of success in matching two user/account identifiers is p then the probability that n user identifiers that are hooked up contain at least one error (i.e., they do NOT all pertain to the same person/entity — a case of “data pollution“) is 1 – p^{n-1}. To appreciate how fast this error rate goes up when linking more and more user accounts, check out this site (requires java). More sophisticated statistics can be applied directly out of the textbooks of econometricians.
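Stefan's 1 – p^{n-1} formula is easy to evaluate; a minimal sketch of his abstracted model (assuming independent pairwise matches, as he does) shows how quickly data pollution sets in even for a very accurate matcher:

```python
def super_account_error(p: float, n: int) -> float:
    """Probability that a 'super-account' linking n user identifiers
    contains at least one wrong link, when each pairwise match
    succeeds independently with probability p."""
    return 1 - p ** (n - 1)

# Even a 99.9%-accurate matcher pollutes large aggregates quickly.
for n in (2, 10, 100, 1000):
    print(n, super_account_error(0.999, n))
```

At n = 1000 accounts the probability of at least one bad link is already well over 60% – exactly the regime in which a super-dossier becomes a liability rather than an asset.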

Now, on the other hand, imagine a world where each user has only one user identifier that is the basis for all his or her interactions with organizations and other users; no more error probability, no more data pollution when hooking up user accounts, regardless of how many! The strongest possible guarantee that different user identifiers (serving as account indices) really pertain to the same person occurs, of course, when user identifiers are “certified” by a trusted issuer; a national (or world…) ID chipcard with three-factor security would be the ultimate linking/profiling tool for organizations that naively believe that aggregating personal information across domains does not come with major security risks of its own.

In short, there is a major difference, from the perspective of organizational value, between being able to correlate with absolute infallibility and merely being able to guess with high success probability which user identifiers / account indices relate to the same user.

PS For civil liberties arguments in favor of avoiding correlation handles where not strictly needed, see for instance here and here.

I think it would be good to put together a brief paper that explores the problem of obtaining accuracy in doing the identity join, combining the experience gathered from metadirectory deployments with Stefan's mathematical explanation.


One more thing to worry about…

It seems too cruel, but Katrina's victims have one more thing to worry about: identity theft.

An AP story speaks of Social Security cards, driver's licenses, credit cards and other personal documents literally floating around New Orleans, raising the prospect some hurricane survivors could be victimized again — this time by identity thieves.

According to Betsy Broder, the attorney who oversees the Federal Trade Commission's identity theft program:

Survivors giving personal data to insurance adjustors or Federal Emergency Management Agency representatives should be certain they're dealing with legitimate individuals and “not crooks who are trying to trick them out of their information so they can commit identity theft.”

Once victims are able to get access to phones, Internet and mail, they should check their credit card and bank statements to see if there's been any unusual activity.

Meanwhile, scams have arisen to bilk people who are donating money over the internet.

The FBI also warned people wanting to donate money for Katrina survivors to beware of scammers who solicit online donations to lure victims into giving up credit card numbers and other sensitive information.

“There are people out there who are willing to stoop so low as to scam people who are willing to open their hearts and wallets to people in need,” said FBI spokesman Paul Bresson.

Yes, there are professionals “out there” organized internationally so as to scam us with apparent impunity. This is a key point made in the Laws of Identity whitepaper.

He said the bureau has identified about 2,000 Web sites related to the Katrina relief effort. Most are legitimate, Bresson said, but the FBI is investigating about a dozen for possible fraud.

According to other reports, the main scam sites are posing as well-known organizations like the Red Cross. After stealing your personal identifying information, including credit card numbers, they pop you back on to the legitimate site creating a real sense that all is normal.

This is yet another case where we need an Internet Identity Metasystem with a consistent user experience that allows us to be sure – when we want certainty – about who we are talking to on the internet.


Congress Considers Data Security Legislation

Here is a new briefing related to the identity catastrophes we've been following in this blog. Its intent is to guide legislation. The publisher is The Center for Democracy and Technology (CDT).

It will be interesting to see if anyone lines up against these proposals, which seem, though I am not a legal expert, matters of common sense. But for us as technologists, statements like “require entities that electronically store personal information to implement security safeguards” need to be operationalized, and in fact, the effectiveness of the proposals depends on how this is done.

I would like to see our legislators embrace forward technical thinking: questioning what needs to be stored in the first place, and producing a set of technical requirements that offers real protection. I'd like to see them come to understand ideas like Data Rejection – the use of handles, with no retention of identifying information except when encrypted for audit purposes under asymmetric keys and decipherable only on off-line systems.
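To make the Data Rejection idea concrete, here is a minimal sketch in Python. It is purely illustrative: the handle derivation uses an HMAC, the key name and sample SSN are invented, and the audit copy (the identifier encrypted under an asymmetric public key whose private key lives only on an off-line system) is described in a comment rather than implemented.

```python
import hashlib
import hmac
import secrets

# Illustrative only: HANDLE_KEY and the sample SSN are invented for this sketch.
# In a real deployment the raw identifier would additionally be encrypted under
# an asymmetric public key for audit purposes, with the private key held on an
# off-line system -- that step is omitted here.
HANDLE_KEY = secrets.token_bytes(32)  # per-service secret used to derive handles

def make_handle(identifier: str) -> str:
    """Derive an opaque, service-specific handle; the identifier itself is never stored."""
    return hmac.new(HANDLE_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# The online database keeps handles only, so a breach discloses no SSNs.
records = {make_handle("078-05-1120"): {"status": "active"}}

# The service can still recognize a returning identifier...
assert make_handle("078-05-1120") in records
# ...but a stolen handle cannot be reversed to recover the SSN.
```

The point of the design is that the online system holds nothing worth stealing: recognition still works, but disclosure does not.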

Other advanced techniques should be considered as well, including decentralized storage of aspect-specific information and aspect-specific identifiers such as those enabled by InfoCard technology.

But I digress. Here is the briefing.

(1) Congress Considers Data Security Legislation

If nothing else positive has come from the seemingly unending string of data security breaches at corporations, universities and government agencies over the past year, they have, at the very least, illustrated the need for Congress to establish stronger protections for citizens’ sensitive personal information.

Data compromises at ChoicePoint, LexisNexis, the U.S. Air Force and other high-profile companies and organizations have heightened public concerns about loss of privacy and personal information. Federal and state lawmakers have responded to those concerns by proposing new legal protections specifically designed to protect citizens against the adverse effects of data security failures.

As a starting point, it must be recognized that there is still a need for baseline federal legislation to address the panoply of privacy issues posed by the digital revolution. Maintaining strong security is only one of a number of obligations that should apply to those who collect, use and store personally identifiable information. However, it is unlikely that current legislative efforts will address the larger issues of consumer privacy in the digital age, since enacting federal legislation on the full range of privacy concerns will require a longer and more inclusive dialogue than is currently underway.

Nonetheless, CDT believes there are a number of security issues, going beyond simply notifying citizens when their privacy has been compromised, that merit immediate attention. They share a common theme, arising from the rapid growth of the information services industry, the steep escalation in identity theft, and the government's increasing use of commercial data. These issues have been the subject of hearings and are addressed in one form or another in multiple pending bills.

CDT believes that any data privacy and security legislation that emerges from this Congress must represent a meaningful step forward, from a consumer perspective, over what states are already doing. CDT would oppose legislation that addressed the recent spate of data security breaches in an unduly narrow manner or in a way that resulted in consumers having weaker protections than those afforded under current state laws.

Further references:

  • CDT's April 13, 2005 congressional testimony on securing electronic personal data
  • CDT's March 2005 Policy Post on information security breaches
(2) CDT Recommends Key Elements of Legislation

In CDT's view, federal data security legislation should include the following elements:

  • Notice of Breach: Entities, including government entities, holding sensitive personal data should be required to notify individuals in the event of a security breach. The notice of breach provision should afford at least as much protection as the California notice of breach law, while avoiding over-notification.
  • Security Safeguards: Because notice would be given only after a breach had occurred, Congress should require entities that electronically store personal information to implement security safeguards, similar to those required by FTC rules under Gramm-Leach-Bliley (GLB) and California law. Civil fines should be available against companies that fail to comply with their own safeguards programs.
  • Government Uses of Commercial Data: Congress should address issues raised by the federal government's growing use of commercial databases, especially in the law enforcement and national security contexts, by requiring public disclosure of the databases to which the government subscribes, government scrutiny of these databases’ security safeguards as part of the contracting process, and measures to ensure data quality and redress when decisions about individuals are made on the basis of commercial data.
  • Credit Report Freeze: Currently, consumers have limited options to protect themselves from fraud when they are notified of a breach or otherwise have concerns about the use of their data. Congress should allow customers to request a security freeze on their credit reports, as at least 10 states already have done.
  • Social Security Number (SSN) Protection: SSNs have become the de facto national identifier and, especially when used as an authenticator, are key enablers of identity theft. Congress should seek to end the use of the SSN as an authenticator and should impose tighter controls on the disclosure, use, and sale of SSNs, with an appropriate phase-in period.
  • Consumer Access to Data: Enabling individuals to access their personal data files is an important safeguard against inaccuracy and misuse, particularly when personal data is collected and maintained for disclosure to third parties for their use in risk assessment or other decision making. An access regime is well established under the Fair Credit Reporting Act (FCRA). Data security legislation should impose similar access requirements on information services companies that aggregate and sell personal data.
  • Carefully Crafted Preemption: Nationwide notice of breach legislation should preempt individual state breach notification requirements, provided it affords at least as much protection as California's notification law. Federal legislation also should preempt inconsistent state legislation on other specific subjects addressed in the federal law (for example, security standards), following the model of GLB. Federal legislation should not, however, take the unusual step of preempting state common law or general consumer protection law.
(3) The Current Legislative Landscape

There are a number of bills in Congress in various stages of evolution that address some of the key elements listed above. Although several Senate and House committees have competing jurisdiction over these issues, three bills have emerged with bipartisan support from members of key committees. Given the public pressure to improve data security protections, these measures could come up this fall, even though lawmakers will be primarily focused on hurricane response efforts and Supreme Court nominations.

The Senate Commerce Committee has considered and approved a bill (S. 1408), introduced by Senators Smith (R-OR), Stevens (R-AK), Inouye (D-HI), McCain (R-AZ), Nelson (D-FL), and Pryor (D-AR), that provides for notice of breach, security safeguards, social security number protections, and a security freeze. While some of the provisions in the Senate Commerce Committee bill provide good consumer protections, in CDT's view the preemption provision goes too far. It is drafted so broadly that it might preclude common law causes of action (cases alleging simple negligence, for example) under state law.

Prominent members of the Senate Judiciary Committee and House Energy and Commerce Committee are also working on bills, although neither committee has held a markup. The Senate Judiciary Committee bill (S. 1332), introduced by Committee Chairman Specter (R-PA) and Senator Leahy (D-VT), includes provisions on notice of breach, security safeguards, government use of commercial data, social security number protections, and consumer access to data.

Top members of the House Energy and Commerce Committee have circulated a draft bill that covers notice of breach, security safeguards, and consumer access to data. Lawmakers are likely to introduce the bill in September.

Other committees with potential claims of jurisdiction over some of these issues include the Senate Banking, House Financial Services, Senate Finance, and House Ways and Means committees. These committees could take up such issues as credit report freeze requirements or social security number protection.

More information:

  • Senate Commerce Committee bill, S. 1408
  • Specter-Leahy bill, S. 1332
  • Other bills pending in Congress can be found here.

RFID at Tech Ed

Robin Wilton from Sun wrote to comment on “Just a few scanning machines”. He says:

I was invited to attend Microsoft Tech Ed 2005 in Amsterdam this year. One of the first things the warm-up presenter told us was that we'd all been RFID-tagged.

1. as I say, we were all told in the opening session;
2. it was made clear to us that the RFID tag numbers were not cross-referenced to our names.

So, for instance, when a couple of raffle winners were announced at the end of that session, only their RFID tag numbers were displayed on screen – it was up to us to check our own badges.

Robin then refers to another comment on the post, by Felipe Connill:

Pretty crazy that [the organizers of the Computers, Freedom and Privacy Conference – Kim] did this [tracked conference participants using bluetooth – Kim] without notifying everyone. But it really drives the point that people [and] equipment manufacturers need to start applying the laws of identity or if not our privacy is going to be invaded at every point.

Robin concludes:

Felipe's comment is spot on: had we not been told, none of us would have known we'd been tagged. This is absolutely a policy and implementation issue, not a technology one. Policy and implementation have to be based on a clear understanding of the subject's relevant rights to privacy and informed consent.

Robin is such a gentleman. But this kind of demonstration makes me scratch my head. What exactly were we trying to achieve? I suppose the idea must have been to show how powerful this new technology is. The demo sure accomplishes that! Maybe the idea was to give everyone the creeps so they would think about how not to use RFID tags. That's a novel approach for a product launch. Novelty is important. Anyway, I'll find out one day, and I'll let you know.

Meanwhile, it goes to show how much work we have left to do in getting a wider set of people to think about the relationship between identity and technology, especially tracking technologies. We haven't gotten the message out clearly enough.

To be continued…


Turning off Bluetooth

The digital ink on my last Bluetooth piece was barely dry, if digital ink dries, before Roland Dobbins wrote with a comment I'm sure will be subscribed to by many readers:

If you'd either a) disable Bluetooth on your phone, etc. (the safest option) or b) at least set them so that they're not visible/browsable (due to design flaws in Bluetooth, they can be detected by an attacker with the right tools and motivations, but it still raises the bar), you'd be a lot better off, IMHO.

You're right, and I have turned it off. Which bothers me. Because I like some of the convenience I used to enjoy.

So I write about this because I'd rather leave my Bluetooth phone enabled, interacting only with devices run by entities I've told it to cooperate with.

We have a lot of work to do to get things to this point. I see our work on identity as being directed to that end, at least in part.

We need to be able to easily express and select the relationships we want to participate in – and avoid – as cyberspace progressively penetrates the world of physical things.

The problems of Bluetooth all exist in current Wifi too. My portable computer broadcasts another tracking beacon. I'm not picking on Bluetooth versus other technologies. Incredibly, they all need to be fixed. They're all misdesigned.

If anything has shocked me while working on the Laws of Identity, it has been the discovery of how naive we've been in the design of these systems to date – a product of our failure to understand the Fourth Law of Identity. The potential for abuse of these systems is colossal – enterprises like the UK's Filter are just the most benign tip of an ugly iceberg.

For everyone's sake I try to refrain from filling in what the underside of this iceberg might look like.

Just a few scanning machines…

Since I seem to be on the subject of Bluetooth again, I want to tell you about an experience I had recently that put a gnarly visceral edge on my opposition to technologies that serve as tracking beacons for us as private individuals.

I was having lunch in San Diego with Paul Trevithick, Stefan Brands and Mary Rundle. Everyone knows Paul for his work with Social Physics and the Berkman identity wiki; Stefan is a tremendously innovative privacy cryptographer; and Mary is pushing the envelope on cyber law with Berkman and Stanford.

Suddenly Mary recalled the closing plenary at the Computers, Freedom and Privacy “Panopticon” Conference in Seattle.

She referred off-handedly to “the presentation where they flashed a slide tracking your whereabouts throughout the conference using your bluetooth phone.”

I was flabbergasted. I had missed the final plenary, and had no idea this had happened.

MAC Name: Kim Cameron Mobile

Room                 Day  Start  End    Talk
Grand I (G1)         Wed  09:32  09:32  ????
Grand Crescent (gc)  Wed  09:35  09:35  Adware and Privacy: Finding a Common Ground
Grand I (G1)         Wed  09:37  09:37  ????
Grand Crescent (gc)  Wed  09:41  09:42  Adware and Privacy: Finding a Common Ground
Grand I (G1)         Wed  09:46  09:47  ????
Grand III (g3)       Wed  10:18  10:30  Intelligent Video Surveillance
Baker (ol)           Wed  10:33  10:42  Reforming E-mail and Digital Telephonic Privacy
Grand III (g3)       Wed  10:47  10:48  Intelligent Video Surveillance
Grand Crescent (gc)  Wed  11:25  11:26  Adware and Privacy: Finding a Common Ground
Grand III (g3)       Wed  11:46  12:22  Intelligent Video Surveillance
5th Avenue (5a)      Wed  12:33  12:55  ????
Grand III (g3)       Wed  13:08  14:34  Plenary: Government CPOs: Are they worth fighting for?

Of course, to some extent I'm a public figure when it comes to identity matters, and tracking my participation at a privacy conference is, I suspect, fair game. Or at any rate, it's good theatre, and drives home the message of the Fourth Law, which makes the point that private individuals must not be subjected – without their knowledge or against their will – to technologies that create tracking beacons.

Later Mary introduced me to Paul Holman from The Shmoo Group. He was the person who had put this presentation together, and given our mutual friends I don't doubt his motives. In fact, I look forward to meeting him in person.

He told me:

“I take it you missed our quick presentation, but essentially, we just put bluetooth scanning machines in a few of the conference rooms and had them log the devices they saw. This was a pretty unsophisticated exercise, showing only devices in discoverable mode. To get them all would be a lot more work. You could do the same kind of thing just monitoring for cell phones or wifi devices or whatever. We were trying to illustrate a crude version of what will be possible with RFIDs.”

The Bluetooth tracking was tied in to the conference session titles, and by clicking on a link you could see the information represented graphically – including my escape to a conference center window so I could take a phone call.
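The correlation step Paul describes is easy to sketch. Assuming nothing beyond a log of (device, room, time) sightings – the function and device names below are illustrative, not The Shmoo Group's actual code – simply grouping and sorting the log reproduces a per-device trace like the table above:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical sample of what each room's scanner might log:
# (device name, room, time of sighting). Values are illustrative only.
sightings = [
    ("Kim Cameron Mobile", "Grand I (G1)", "09:32"),
    ("Kim Cameron Mobile", "Grand Crescent (gc)", "09:35"),
    ("Kim Cameron Mobile", "Grand I (G1)", "09:37"),
]

def build_timelines(log):
    """Group raw sightings into {device: [(time, room), ...]}, sorted chronologically."""
    timelines = defaultdict(list)
    for device, room, ts in log:
        timelines[device].append((datetime.strptime(ts, "%H:%M").time(), room))
    for entries in timelines.values():
        entries.sort()  # chronological order reconstructs the device's path
    return dict(timelines)

timeline = build_timelines(sightings)["Kim Cameron Mobile"]
```

Run over a full day's log, this yields exactly the kind of room-by-room trace that was flashed on screen – which is the unsettling point: no sophistication is required once the beacon exists.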

Anyway, I think I have had a foretaste of how people will feel when networks of billboards and posters start tracking their locations and behaviors. They won't like it one bit. They'll push back.
