The Laws of Identity smack Google

Alan Eustace, Google's Senior VP of Engineering & Research, blogged recently about Google's collection of Wi-Fi data using its Street View cars:

The engineering team at Google works hard to earn your trust—and we are acutely aware that we failed badly here. We are profoundly sorry for this error and are determined to learn all the lessons we can from our mistake.  

I think the idea of learning all the lessons he can from Google's mistake is a really good one, and I accept that Alan really is sorry.  But what constituted the mistake?

Last month Google was good enough to provide us with a “refresher FAQ” that dealt with the subject in a way as specious as it was condescending:

“What do you mean when you talk about WiFi network information?
“WiFi networks broadcast information that identifies the network and how that network operates. That includes SSID data (i.e. the network name) and MAC address (a unique number given to a device like a WiFi router).

“Networks also send information to other computers that are using the network, called payload data, but Google does not collect or store payload data.*

“But doesn’t this information identify people?
“MAC addresses are a simple hardware ID assigned by the manufacturer. And SSIDs are often just the name of the router manufacturer or ISP with numbers and letters added, though some people do also personalize them.

“However, we do not collect any information about householders, we cannot identify an individual from the location data Google collects via its Street View cars.

“Is it, as the German DPA states, illegal to collect WiFi network information?
“We do not believe it is illegal–this is all publicly broadcast information which is accessible to anyone with a WiFi-enabled device…

Let's start with the last point. Is information that can be collected using a WiFi device actually being “broadcast”?  Or is it being transmitted for a specific purpose and private use?  If everything is deemed to be “broadcast” simply by virtue of being a signal that can be received, then surely payload data – people's surfing behavior, emails and chat – is also being “broadcast”.  Once the notion of “broadcast” is accepted, the FAQ implies there can be no possible objection to collecting it.

But Alan's recent post says, “it’s now clear that we have been mistakenly collecting samples of payload data from open (i.e. non-password-protected) WiFi networks.”  He adds, “We want to delete this data as soon as possible…”  What is the mistake?  Does Alan mean Google has now accepted that WiFi information is not by definition being “broadcast” for its use?  Or does Alan see the mistake as being the fact they created a PR disaster?  I think “learning everything we can” means learning that the initial premises of the Street View WiFi system were wrong (and the behavior perhaps even illegal) because the system collected WiFi information that was intended to be used for private purposes and not intended to include Google.  

The FAQ claims – and this is disturbing – that the information collected about network identifiers “doesn't identify people”.  The fact is that it identifies devices that are closely associated with people – including their personal computers and phones.  MAC addresses are persistent, remaining constant over the lifetime of the device.  They are identifiers that are extremely reliable in establishing identity by virtue of being in people's pockets or briefcases.
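To see just how little effort this “collection” requires, here is a sketch of the kind of passive scan a Street View car could run.  It is only an illustration – it assumes the Python scapy library and a wireless card already in monitor mode, and the interface name is made up – but every beacon frame it sees carries exactly the identifiers at issue:

# A minimal war-driving sketch (assumes scapy and monitor mode).
from scapy.all import sniff, Dot11, Dot11Beacon, Dot11Elt

def log_beacon(pkt):
    # Each beacon carries the router's MAC address (a persistent
    # hardware identifier) and its SSID (the network name).
    if pkt.haslayer(Dot11Beacon):
        mac = pkt[Dot11].addr2
        ssid = pkt[Dot11Elt].info.decode(errors="replace")
        print(f"{mac}  {ssid!r}")

sniff(iface="wlan0mon", prn=log_beacon, store=False)  # "wlan0mon" is illustrative

Pair each line of that output with GPS coordinates and you have precisely the kind of database the Street View cars were building.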

As a result, Google breaks two Laws of Identity in one go with its Street View boondoggle.

Google breaks Law 3, the Law of Justifiable Parties:

Digital identity systems must limit disclosure of identifying information to parties having a necessary and justifiable place in a given identity relationship

Google is not part of the transactions between my network devices and is not justified in intervening or recording the details of their use and relationship. 

Google also breaks Law 4, Directed Identity:

A universal identity metasystem must support both “omnidirectional” identifiers for use by public entities and “unidirectional” identifiers for private entities, thus facilitating discovery while preventing unnecessary release of correlation handles.

My network devices are private entities intended for use in the contexts for which I authorize them.  My home network is a part of my home, and Google (or any other company) has not been invited to employ that network for its own purposes.  The identifiers in use there are contextually specific, not public, and not intended to be shared across all contexts.  They are more private than the IP addresses used in TCP/IP, since they are not shared across end-points in different networks.  The same applies to SSIDs.
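For those wondering what “unidirectional” identifiers look like in practice, here is one common construction, sketched in Python: derive a distinct pairwise identifier for each relying party from a master secret, in the spirit of the PPIDs used by Information Cards.  The names are mine, and the Law mandates the property, not any particular algorithm:

import hashlib
import hmac

def pairwise_id(master_secret: bytes, relying_party: str) -> str:
    # A stable identifier per relying party: usable for discovery
    # within one context, useless as a cross-context correlation handle.
    return hmac.new(master_secret, relying_party.encode(), hashlib.sha256).hexdigest()

secret = b"per-user secret held by the identity agent"
assert pairwise_id(secret, "site-a.example") != pairwise_id(secret, "site-b.example")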

One can stand in the street, point a directional microphone at a window and record the conversations inside.  This doesn't make them public or give anyone the right to use the conversations for commercial purposes.  The same applies to recording the information we exchange using digital media – including our identifiers, SSIDs and MAC addresses.  It is particularly disingenuous to argue that because information is not encrypted it doesn't belong to anyone and there are no rights associated with it.  If lack of encryption meant information were fair game, a lot of Google's own intellectual property would be up for grabs.

Google's justification for collecting MAC addresses was that if a stranger walked down your street, the MAC addresses of your computers and routers could be used to provide his systems (or Google's?) with information on where he was.  The idea that Google would, without our consent, employ our home networks for its own commercial purposes betrays a problem of ethics and a lack of control.  Let's hope this is what Alan means when he says,

“Given the concerns raised, we have decided that it’s best to stop our Street View cars collecting WiFi network data entirely.”

I know there are many people inside Google who will recognize that these problems represent more than a “mistake” – there is clearly the need for a much deeper understanding of identity and privacy within the engineering and business staff.   I hope this will be the outcome.  The Laws of Identity are a harsh teacher, and it's sad to see the Street View technology sullied by privacy catastrophes.

Meanwhile, there is one more lesson for the rest of us.  We tend to be cavalier in pooh-poohing the idea that commercial interests would actually abuse our networks and digital privacy in fundamental ways.  This episode demonstrates how naive that is.  We need to strengthen the networking infrastructure, and protect it from misuse by commercial interests as well as criminals.  We need clear legislation that serves as a disincentive to commercial interests contemplating privacy-invasive use of technology.  And on a technical note, we need to fix the problem of static MAC addresses precisely because they are strong personal identifiers that will ultimately be used to target individuals physically as criminals begin to understand their possible uses.
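On that last point, the fix most often discussed is MAC randomization: have the device present a random, locally administered address rather than its burned-in one.  A minimal sketch (the helper name is mine):

import random

def random_locally_administered_mac() -> str:
    # Per IEEE 802 addressing rules: set the locally administered bit
    # (0x02) and clear the multicast bit (0x01) in the first octet.
    first = (random.getrandbits(8) & 0xFC) | 0x02
    rest = [random.getrandbits(8) for _ in range(5)]
    return ":".join(f"{octet:02x}" for octet in [first] + rest)

print(random_locally_administered_mac())  # e.g. "a6:3f:0d:91:5c:33"

A device that rotates such addresses cannot be tracked from one observation to the next the way a device with a static MAC can.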

 

Interview on Identity and the Cloud

I just came across a Channel 9 interview Matt Deacon did with me at the Architect Insight Conference in London a couple of weeks ago.  It followed a presentation I gave on the importance of identity in cloud computing.   Matt keeps my explanation almost… comprehensible – readers may therefore find it of special interest.  Video is here.

 

In addition, here are my presentation slides and video.

Bizarre customer journey at myPay…

Internet security is a sitting duck that could easily succumb to a number of bleak possible futures.

One prediction we can make with certainty is that as the overall safety of the net continues to erode, individual web sites will flail around looking for ways to protect themselves. They will come across novel ideas that seem to make sense from the vantage point of a single web site. Yet if they implement these ideas, most of them will backfire. Internet users have to navigate many different sites on an irregular basis. For them, the experience of disparate mechanisms and paradigms on every different site will be even more confusing and troubling than the current degenerating landscape. The Seventh Law of Identity is animated by these very concerns.

I know from earlier exchanges that Michael Ramirez understands these issues – as well as their architectural implications. So I can just imagine how he felt when he first encountered a new system that seems to represent an unfortunately great example of this dynamic. His first post on the matter started this way:

“Logging into the DFAS myPay site is frustrating. This is the gateway where DoD employees can view and change their financial data and records.

“In an attempt to secure the interface (namely to prevent key loggers), they have implemented a javascript-based keyboard where the user must enter their PIN using their mouse (or using the keyboard by pressing tab LOTS of times).

“A randomization function is used to change the position of the buttons, presumably to prevent a simple click-tracking virus from simply replaying the click sequence. Numbers always appear on the upper row and the letters will appear in a random position on the same row where they exist on the keyboard (e.g. QWERTY letters will always appear on the top row, just in a random order).

“At first glance, I assumed that there would be some server-side state that identified the position of the buttons (as to not allow the user's browser to arbitrarily choose the positions). Looking at how the button layout is generated, however, makes it clear that the position is indeed generated by the client-side alone. Javascript functions are called to randomize the locations, and the locations of these buttons are included as part of the POST parameters upon authentication.

“A visOrder variable is included with a simple substitution cipher to identify button locations: 0 is represented by position 0, 1 by position 1, etc. Thus:

VisOrder     = 3601827594
Substitution = 0123456789
Example PIN  = 325476
Encoded      = 102867

“Thus any virus/program can easily mount an online guessing attack (since it defines the substitution pattern), and can quickly decipher the PIN if it has access to the POST parameters.

“The web site's security implementation is painfully trivial, so we can conclude that the Javascript keyboard is only to prevent keyloggers. But it has a number of side effects, especially with respect to the security of the password. Given the tedious nature of PIN entry, users choose extremely simplistic passwords. MyPay actually encourages this as it does not enforce complexity requirements and limits the length of the password to between 4 and 8 characters. There is no support for upper/lower case or special characters. 36 possible values over a 4-character search space is not terribly secure.”
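Michael's point is easy to verify.  Here is a sketch of the scheme as he describes it – the exact direction of the substitution is my assumption, and I have restricted it to digits for brevity – showing that anyone holding the POST parameters can invert the “cipher” immediately:

def encode(pin: str, vis_order: str) -> str:
    # Each PIN digit d is replaced by the digit displayed at
    # position d of the randomized keyboard.
    return "".join(vis_order[int(d)] for d in pin)

def decode(encoded: str, vis_order: str) -> str:
    # Inverting the substitution requires nothing beyond the
    # visOrder value that travels in the same POST request.
    return "".join(str(vis_order.index(c)) for c in encoded)

vis_order = "3601827594"   # from Michael's example
pin = "325476"
assert decode(encode(pin, vis_order), vis_order) == pin

print(36 ** 4)  # 1679616: the entire 4-character search space

And 1.7 million candidates is nothing against a scripted guessing attack.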

A few days later, Michael was back with an even stranger report. In fact this particular “user journey” verges on the bizarre. Michael writes:

“MyPay recently overhauled their interface and made it more “secure.” I have my doubts, but they certainly have changed how they interact with the user.

“I was a bit speechless. Pleading with users is new, but maybe it'll work for them. Apparently it'll be the only thing working for them:

Although most users have established their new login credentials with no trouble, some users are calling the Central Customer Support Unit for assistance. As a result, customer support is experiencing high call volume, and many customers are waiting on hold longer than usual.

We apologize for any inconvenience this may cause. We are doing everything possible to remedy this situation.

Michael concludes by making it clear he thinks “more than a few” users may have had trouble. He says, “Maybe, just maybe, it's because of your continued use of the ridiculous virtual keyboard. Yes, you've increased the password complexity requirements (which actually increased security), but slaughtered what little usability you had. I promise you that getting rid of it will ‘remedy this situation.'”

One might just shrug one's shoulders and wait for this to pass. But I can't do that.  I feel compelled to redouble our efforts to produce and adopt a common standards-based approach to authentication that will work securely and in a consistent way across different web sites and environments.  In other words, reusable identities, the claims-based architecture, and truly usable and intuitive visual interfaces.

Green Dam and the First Law of Identity

China Daily posted this opinion piece by Chen Weihua that provides context on how the Green Dam proposal could ever have emerged.  I found it striking because it brings to the fore the relationship of the initiative to the First Law of Identity (User Control).  As in so many cases where the Laws are broken, the result is passionate opposition and muddled technology.

The Ministry of Industry and Information Technology's latest regulation to preinstall filtering software on all new computers by July 1 has triggered public concern, anger and protest.

A survey on Sina.com, the largest news portal in China, showed that an overwhelming 83 percent of the 26,232 people polled said they would not use the software, known as Green Dam. Only 10 percent were in favor.

Despite the official claim that the software was designed to filter pornography and unhealthy content on the Internet, many people, including some computer experts, have disputed its effectiveness and are worried about its possible infringement on privacy, its potential to disrupt the operating system and other software, and the waste of $6.1 million of public fund on the project.

These are all legitimate concerns. But behind the whole story, one pivotal question to be raised is whether we believe people should have the right to make their own choice on such an issue, or the authorities, or someone else, should have the power to make such a decision.

Compared with 30 years ago, the country has achieved a lot in individual freedom by giving people the right to make their own decisions regarding their personal lives.

Under the planned economy three decades ago, the government decided the prices of all goods. Today, the market decides 99 percent of the prices based on supply and demand.

Three decades ago, the government even decided what sort of shirts and trousers were proper for its people. Flared trousers, for example, were banned. Today, our streets look like a colorful stage.

Till six years ago, people still needed an approval letter from their employers to get married or divorced. However bizarre it may sound to the people today, the policy had ruled the nation for decades.

The divorce process then could be absurdly long. Representatives from trade union, women's federation and neighborhood committee would all come and try to convince you that divorce is a bad idea – bad for the couple, bad for their children and bad for society.

It could be years or even decades before the divorce was finally approved. Today, it only takes 15 minutes for a couple to go through the formalities to tie or untie the knot at local civil affair bureaus.

Less than three decades ago, the rigid hukou (permanent residence permit) system didn't allow people to work in another city. Even husbands and wives with hukou in different cities had to work and live in separate places. Today, over 200 million migrant workers are on the move, although hukou is still a constraint.

Less than 20 years ago, doctors were mandated to report women who had abortions to their employers. Today, they respect a woman's choice and privacy.

No doubt we have witnessed a sea of change, with more and more people making their own social and economic decisions.

The government, though still wielding huge decision-making power, has also started to consult people on some decisions by hosting public hearings, such as the recent one on tap water pricing in Shanghai.

But clearly, some government departments and officials are still used to the old practice of deciding for the people without seeking their consent.

In the Green Dam case, buyers, mostly adults, should be given the complete freedom to decide whether they want the filtering software to be installed in their computers or not.

Respect for an individual's right to choice is an important indicator of a free society, depriving them of which is a gross transgression.

Let's not allow the Green Dam software to block our way into the future.

The many indications that the technology behind Green Dam weakens the security fabric of China suggest that Chen Weihua is right in more ways than one.

Just for completeness, I should point out that the initiative also breaks the Third Law (Justifiable Parties) if adults have not consciously enabled the software and chosen to have the government participate in their browsing.

Ethical Foundations of Cybersecurity

Britain's Enterprise Privacy Group is starting a new series of workshops that deal squarely with ethics.  While specialists in ethics have achieved a significant role in professions like medicine, this is one of the first workshops I've seen that takes on equivalent issues in our field of work.  Perhaps that's why it is already oversubscribed…

‘The continuing openness of the Internet is fundamental to our way of life, promoting the free flow of ideas to strengthen democratic ideals and deliver the economic benefits of globalisation.  But a fundamental challenge for any government is to balance measures intended to protect security and the right to life with the impact these may have on the other rights that we cherish and which form the basis of our society.
 
'The security of cyber space poses particular challenges in meeting tests of necessity and proportionality as its distributed, de-centralised form means that powerful tools may need to be deployed to tackle those who wish to do harm.  A clear ethical foundation is essential to ensure that the power of these tools is not abused.
 
'The first workshop in this series will be hosted at the Cabinet Office on 17 June, and will explore what questions need to be asked and answered to develop this foundation.

‘The event is already fully subscribed, but we hope to host further events in the near future with greater opportunities for all EPG Members to participate.’

Let's hope EPG eventually turns these deliberations into a document they can share more widely.  Meanwhile, this article seems to offer an introduction to the literature.

Proposal for a Common Identity Framework

Today I am posting a new paper called Proposal for a Common Identity Framework: A User-Centric Identity Metasystem.

Good news: it doesn’t propose a new protocol!

Instead, it attempts to crisply articulate the requirements in creating a privacy-protecting identity layer for the Internet, and sets out a formal model for such a layer, defined through the set of services the layer must provide.

The paper is the outcome of a year-long collaboration between Dr. Kai Rannenberg, Dr. Reinhard Posch and myself. We were introduced by Dr. Jacques Bus, Head of Unit Trust and Security in ICT Research at the European Commission.

Each of us brought our different cultures, concerns, backgrounds and experiences to the project and we occasionally struggled to understand how our different slices of reality fit together. But it was in those very areas that we ended up with some of the most interesting results.

Kai holds the T-Mobile Chair for Mobile Business and Multilateral Security at Goethe University Frankfurt. He coordinates the EU research projects FIDIS  (Future of Identity in the Information Society), a multidisciplinary endeavor of 24 leading institutions from research, government, and industry, and PICOS (Privacy and Identity Management for Community Services).  He also is Convener of the ISO/IEC Identity Management and Privacy Technology working group (JTC 1/SC 27/WG 5)  and Chair of the IFIP Technical Committee 11 “Security and Privacy Protection in Information Processing Systems”.

Reinhard taught Information Technology at Graz University beginning in the mid 1970’s, and was Scientific Director of the Austrian Secure Information Technology Center starting in 1999. He has been federal CIO for the Austrian government since 2001, and was elected chair of the management board of ENISA (The European Network and Information Security Agency) in 2007. 

I invite you to look at our paper.  It aims at combining the ideas set out in the Laws of Identity and related papers, extended discussions and blog posts from the open identity community, the formal principles of Information Protection that have evolved in Europe, research on Privacy Enhancing Technologies (PETs), outputs from key working groups and academic conferences, and deep experience with EU government digital identity initiatives.

Our work is included in The Future of Identity in the Information Society – a report on research carried out in a number of different EU states on topics like the identification of citizens, ID cards, and Virtual Identities, with an accent on privacy, mobility, interoperability, profiling, forensics, and identity related crime.

I’ll be taking up the ideas in our paper in a number of blog posts going forward. My hope is that readers will find the model useful in advancing the way they think about the architecture of their identity systems.  I’ll be extremely interested in feedback, as will Reinhard and Kai, who I hope will feel free to join the conversation as voices independent from my own.

Do people care about data correlation?

While I was working on the last couple of posts about data correlation, trusty old RSS brought in a corroborating piece by Colin McKay at the Office of the Privacy Commissioner of Canada.  Many in the industry seem to assume people will trade any of their personal information for the smallest trinkets, so more empirical work of the kind reported here seems to be essential.

‘How comfortable, exactly, are online users with their information and online browsing habits being used to track their behaviour and serve ads to them?

‘A survey of Canadian respondents, conducted by TNS Facts and reported by the Canadian Marketing Association, reports that a large number of Canadians and Americans “(69% and 67% respectively) are aware that when they are online their browsing behaviour may be captured by third parties for advertising purposes.”

‘That doesn’t mean they are comfortable with the practice. The same survey notes that “just 33 per cent of Canadians who are members of a site are comfortable with these sites using their browsing information to improve their site experience. There is no difference in support for the use of consumers’ browsing history to serve them targeted ads, be it with the general population, the privacy concerned, or members of a site.”’

If only 33% are comfortable with sites using their browsing information to improve their site experience, I wonder how many would be comfortable with browsing information being used to decide whether to terminate people's credit cards (see the thread on Martinism)?  Can I take a guess?  How about 1%?  (This may seem high, but I have a friend in the direct marketing world who tells me 1% of the population will believe in anything at all!)  Colin continues:

‘But how much information are users willing to consciously hand over to win access to services, prizes or additional content?

‘A survey of 1800 visitors to coolsavings.com, a coupon and rebate site owned by Q Interactive, has claimed that web visitors are willing “to receive free online services and information in exchange for the use of my data to target relevant advertising to me.”

‘Now, my impression is that visitors to sites like coolsavings.com – who are actively seeking out value and benefits online – would be predisposed to believing that online sites would be able to deliver useful content and relevant ads.

‘That said, Mediapost, who had access to details of the full Q Interactive survey, cautions that users “… continue to put the brakes on hard when asked which specific information they are willing to hand over. The survey found 77.8% willing to give zip code, 64.9% their age and 72.3% their gender, but only 22.4% said they wanted to share the Web sites they visited and only 12% and 12.1% were willing to have their online purchases or the search history respectively to be shared …” ‘

I want to underline Colin's point.  These statistics come from people who actively sought out a coupon site in order to trade information for benefits!  Even so, we are talking about a mere 12% who were willing to have their online purchases or search history shared.  This empirically nixes the notion, held by some, that people don't care about data correlation (an issue I promised to address in my last post).

Colin's conclusions seem consistent with the idea I sketched there of defining a new “right to data correlation” and requiring delegation of that right before trusted parties can correlate individuals across contexts.

‘In both the TNS Facts/CMA and Q Interactive surveys, the results seem to indicate that users are willing to make a conscious decision to share information about themselves – especially if it is with sites they trust and with whom they have an established relationship.

‘A common thread seems to be emerging: consumers see a benefit to providing specific data that will help target information relevant to their needs, but they are less certain about allowing their past behaviour to be used to make inferences about their individual preferences.

‘They may feel their past search and browsing habits might just have a greater impact on their personal and professional life than the limited re-distribution of basic personal information by sites they trust. Especially if those previous habits might be seen as indiscreet, even obscene.’

Colin's conclusion points to the need to be able to “revoke the right to data correlation” that may have been extended to third parties.  It also underlines the need for a built-in scheme for aging and deletion of correlation data.

 

The Right To Correlate

Dave Kearns’ comment in Another Violent Agreement convinces me I've got to apply the scalpel to the way I talk about correlation handles.  Dave writes:

‘I took Kim at his word when he talked “about the need to prevent correlation handles and assembly of information across contexts…” That does sound like “banning the tools.”

‘So I'm pleased to say I agree with his clarification of today:

‘“I agree that we must influence behaviors as well as develop tools… [but] there’s a huge gap between the kind of data correlation done at a person’s request as part of a relationship (VRM), and the data correlation I described in my post that is done without a person’s consent or knowledge.” (Emphasis added by Dave)’

Thinking about this some more, it seems we might be able to use a delegation paradigm.

The “right to correlate”

Let's postulate that only the parties to a transaction have the right to correlate the data in the transaction (unless it is fully anonymized).

Then it would follow that any two parties with whom an individual interacts would not by default have the right to correlate data they had each collected in their separate transactions.

On the other hand, the individual would have the right to organize and correlate her own data across all the parties with whom she interacts since she was party to all the transactions.

Delegating the Right to Correlate

If we introduce the ability to delegate, then an individual could delegate her right for two parties to correlate relevant data about her.  For example, I could delegate to Alaska Airlines and British Airways the right to share information about me.

Similarly, if I were an optimistic person, I could opt to use a service like that envisaged by Dave Kearns, which “can discern our real desires from our passing whims and organize our quest for knowledge, experience and – yes – material things in ways which we can only dream about now.”  The point here is that we would delegate the right to correlate to this service operating on our behalf.

Revoking the Right to Correlate

A key aspect of delegating a right is the ability to revoke that delegation.  In other words, if the service to which I had given some set of rights became annoying or odious, I would need to be able to terminate its right to correlate.  Importantly, the right applies to correlation itself.  Thus when the right is revoked, the data must no longer be linkable in any way.
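To make this concrete, here is a sketch of how a registry of correlation rights might record delegation and revocation.  All the names are illustrative, and actually rendering already-correlated data unlinkable upon revocation is an enforcement problem this sketch only gestures at:

class CorrelationRegistry:
    # Hypothetical record of who may correlate data about whom.
    def __init__(self):
        self._grants = set()

    def delegate(self, subject: str, party_a: str, party_b: str) -> None:
        # Only the subject, a party to all the transactions involved,
        # may grant two parties the right to correlate her data.
        self._grants.add((subject, frozenset((party_a, party_b))))

    def revoke(self, subject: str, party_a: str, party_b: str) -> None:
        # Revocation removes the right itself; enforcement must then
        # ensure the data is no longer linkable in any way.
        self._grants.discard((subject, frozenset((party_a, party_b))))

    def may_correlate(self, subject: str, party_a: str, party_b: str) -> bool:
        return (subject, frozenset((party_a, party_b))) in self._grants

registry = CorrelationRegistry()
registry.delegate("kim", "Alaska Airlines", "British Airways")
assert registry.may_correlate("kim", "British Airways", "Alaska Airlines")
registry.revoke("kim", "Alaska Airlines", "British Airways")
assert not registry.may_correlate("kim", "Alaska Airlines", "British Airways")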

Forensics

There are cases, when criminal activity is being investigated or prosecuted, where it is necessary for law enforcement to be able to correlate without the consent of the individual.  This is already the case in western society, and it seems likely that no new mechanisms would be required in a world respecting the Right to Correlate.

Defining contexts

Respecting the Right to Correlate would not by itself solve the Canadian Tire Problem that started this thread.  The thing that made the Canadian Tire human experiments most odious is that they correlated buying habits at the level of individual purchases (our relations to Canadian Tire as a store) with probable behavior in paying off credit cards (Canadian Tire as a credit card issuer).  Paradoxically, someone's loyalty to the store could actually be used to deny her credit.  People who get Canadian Tire credit cards do know that the company is in a position to correlate all this information, but are unlikely to predict this counter-intuitive outcome.

Those of us preferring mainstream credit card companies presumably don't have the same issues at this point in time.  They know where we buy but not what we buy (although there may be data sharing relationships with merchants that I am not aware of… Let me know…).

So we have come to the most important long-term problem:  The Internet changes the rules of the game by making data correlation so very easy.

It potentially turns every credit card company into a data-correlating Canadian Tire.  Are we looking at the Canadian Tirization of the Internet?

But do people care?

Some will say that none of this matters because people just don't care about what is correlated.  I'll discuss that briefly in my next post.

The brands we buy are “the windows into our souls”

You should read this fascinating piece by Charles Duhigg in last week’s New York Times. A few tidbits to whet the appetite:

‘The exploration into cardholders’ minds hit a breakthrough in 2002, when J. P. Martin, a math-loving executive at Canadian Tire, decided to analyze almost every piece of information his company had collected from credit-card transactions the previous year. Canadian Tire’s stores sold electronics, sporting equipment, kitchen supplies and automotive goods and issued a credit card that could be used almost anywhere. Martin could often see precisely what cardholders were purchasing, and he discovered that the brands we buy are the windows into our souls — or at least into our willingness to make good on our debts…

‘His data indicated, for instance, that people who bought cheap, generic automotive oil were much more likely to miss a credit-card payment than someone who got the expensive, name-brand stuff. People who bought carbon-monoxide monitors for their homes or those little felt pads that stop chair legs from scratching the floor almost never missed payments. Anyone who purchased a chrome-skull car accessory or a “Mega Thruster Exhaust System” was pretty likely to miss paying his bill eventually.

‘Martin’s measurements were so precise that he could tell you the “riskiest” drinking establishment in Canada — Sharx Pool Bar in Montreal, where 47 percent of the patrons who used their Canadian Tire card missed four payments over 12 months. He could also tell you the “safest” products — premium birdseed and a device called a “snow roof rake” that homeowners use to remove high-up snowdrifts so they don’t fall on pedestrians…

‘Why were felt-pad buyers so upstanding? Because they wanted to protect their belongings, be they hardwood floors or credit scores. Why did chrome-skull owners skip out on their debts? “The person who buys a skull for their car, they are like people who go to a bar named Sharx,” Martin told me. “Would you give them a loan?”

So what if there are errors?

Now perhaps I’ve had too much training in science and mathematics, but this type of thinking seems totally neanderthal to me. It belongs in the same category of things we should be protected from as “guilt by association” and “racial profiling”.

For example, the article cites one of Martin’s concrete statistics:

‘A 2002 study of how customers of Canadian Tire were using the company's credit cards found that 2,220 of 100,000 cardholders who used their credit cards in drinking places missed four payments within the next 12 months. By contrast, only 530 of the cardholders who used their credit cards at the dentist missed four payments within the next 12 months.’

We can rephrase the statement to say that 98% of the people who used their credit cards in drinking places did NOT miss the requisite four payments.

Drawing the conclusion that “use of the credit card in a drinking establishment predicts default” is thus an error 98 times out of 100.
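The arithmetic is worth spelling out with the article's own figures (the relative-risk comparison is my addition).  The signal Mr. Martin sees is relative; the error rate that matters to cardholders is absolute:

bar_missed, dentist_missed, total = 2_220, 530, 100_000

print(f"{bar_missed / total:.1%}")      # 2.2% of bar patrons missed four payments
print(f"{1 - bar_missed / total:.1%}")  # 97.8% did not: the "98%" above
print(f"{bar_missed / dentist_missed:.1f}x")  # about 4.2x the dentist rate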

Denying people credit on a premise which is wrong 98% of the time seems like one of those things regulators should rush to address, even if the premise reduces risk to the credit card company.

But there won’t be enough regulators to go around, since there are thousands of other examples given that are similarly idiotic from the point of view of a society fair to its members. For the article continues,

‘Are cardholders suddenly logging in at 1 in the morning? It might signal sleeplessness due to anxiety. Are they using their cards for groceries? It might mean they are trying to conserve their cash. Have they started using their cards for therapy sessions? Do they call the card company in the middle of the day, when they should be at work? What do they say when a customer-service representative asks how they’re feeling? Are their sighs long or short? Do they respond better to a comforting or bullying tone?

Hmmm.

  • Logging in at 1 in the morning. That’s me. I guess I’m one of the 98% for whom this thesis is wrong… I like to stay up late. Do you think staying up late could explain why Mr. Martin’s self-consciously erroneous theses irk me?
  • Using card to buy groceries? True, I don’t like cash. Does this put me on the road to ruin? Another stupid thesis for Mr. Martin.
  • Therapy sessions? If I read enough theses like those proposed by Martin, I may one day need therapy.  But frankly,  I don’t think Mr. Martin should have the slightest visibility into matters like these.  Canadian Tire meets Freud?
  • Calling in the middle of the day when I should be at work? Grow up, Mr. Martin. There is this thing called flex schedules for the 98% or 99% or 99.9% of us for which your theses continually fail.
  • What I would say if a customer-service representative asked how I was feeling? I would point out, with some vigor, that we do not have a personal relationship and that such a question isn't appropriate. And I certainly would not remain on the line.

Apparently Mr. Martin told Charles Duhigg, “If you show us what you buy, we can tell you who you are, maybe even better than you know yourself.” He then lamented that in the past, “everyone was scared that people will resent companies for knowing too much.”

At best, this is no more than a Luciferian version of the Beatles’ “You are what you eat” – but minus the excessive drug use that can explain why everyone thought this was so deep. The truth is, you are not “what you eat”.

Duhigg argues that in the past, companies stuck to “more traditional methods” of managing risk, like raising interest rates when someone was late paying a bill (imagine – a methodology based on actual delinquency rather than hocus pocus), because they worried that customers would revolt if they found out they were being studied so closely. He then says that after “the meltdown”, Mr. Martin’s methods have gained much more currency.

In fact, customers would revolt because the methodology is not reasonable or fair from the point of view of the vast majority of individuals, being wrong tens or hundreds or thousands of times more often than it is right.

If we weren’t working on digital identity, we could just end this discussion by saying Mr. Martin represents one more reason to introduce regulation into the credit card industry. But unfortunately, his thinking is contagious and symptomatic.

Mining of credit card information is just the tip of a vast and dangerous iceberg we are beginning to encounter in cyberspace. The Internet is currently engineered to facilitate the assembly of ever more information of the kind that so thrills Mr. Martin – data accumulated throughout the lives of our young people that will become progressively more important in limiting their opportunities as more “risk reduction” assumptions – of the Martinist kind that apply to almost no one but affect many – take hold.

When we talk about the need to prevent correlation handles and assembly of information across contexts (for example, in the Laws of Identity and our discussions of anonymity and minimal disclosure technology), we are talking about ways to begin to throw a monkey wrench into an emerging Martinist machine.  Mr. Duhigg's story describes early prototypes of the machinations we see as inevitable should we fail in our bid to create a privacy enhancing identity infrastructure for the digital epoch.

[Thanks to JC Cannon for pointing me to this article.]