‘Kim has an interesting post today, referencing an article (“What Does Your Credit-Card Company Know About You?” by Charles Duhigg) in last week’s New York Times.
‘Kim correctly points out the major fallacies in the thinking of J. P. Martin, a “math-loving executive at Canadian Tire”, who, in 2002, decided to analyze the information his company had collected from credit-card transactions the previous year. For example, Martin notes that “2,220 of 100,000 cardholders who used their credit cards in drinking places missed four payments within the next 12 months.” But that's barely 2% of the total, as Kim points out, and hardly conclusive evidence of anything.
‘I'm right with Cameron for most of his essay, up until the end, when he notes:
“When we talk about the need to prevent correlation handles and assembly of information across contexts (for example, in the Laws of Identity and our discussions of anonymity and minimal disclosure technology), we are talking about ways to begin to throw a monkey wrench into an emerging Martinist machine. Mr. Duhigg’s story describes early prototypes of the machinations we see as inevitable should we fail in our bid to create a privacy enhancing identity infrastructure for the digital epoch.”
‘Change “privacy enhancing” to “intellectual property protecting” and it could be a quote from an RIAA press release!
‘We should never confuse tools with the bad behavior that can be abetted by those tools. Data correlation tools, for example, are vitally necessary for automated personalization services and can be a big help to future services such as Vendor Relationship Management (VRM). After all, it's not Napster that's bad, but the people who use it to get around copyright law. It isn't a cup of coffee that's evil, just the people who try to carry one through airport security.
‘It is easier to forbid the tool than to police the behavior, but in a democratic society, policing the behavior is the way we should act.’
I agree that we must influence behaviors as well as develop tools. And I'm as positive about Vendor Relationship Management as anyone. But getting concrete, there's a huge gap between the kind of data correlation done at a person's request as part of a relationship (VRM), and the data correlation I described in my post, which is done without a person's consent or knowledge. As VRM's Saint Searls has said, “Sometimes, I don't want a deep relationship, I just want a cup of coffee”.
I'll come clean with an example. Not a month ago, I was visiting friends in Canada, and since I had an “extra car”, was nominated to go pick up some new barbells for the kids.
So, off to Canadian Tire to buy a barbell. Who knows what category they put me in when 100% of my annual consumption consists of barbells? It had to be right up there with low-grade oil or even a Mega Thruster Exhaust System. In this case, Dave, there was no R (no relationship) and certainly no VRM: I didn't ask to be profiled by Mr. Martin's reputation machines.
There is nothing about minimal disclosure that says profiles cannot be constructed when people want that. It simply means that information should only be collected in light of a specific usage, and that usage should be clear to the parties involved (NOT the case with Canadian Tire!). When there is no legitimate reason for collecting information, people should be able to avoid it.
It all boils down to the matter of people being “in control” of their digital interactions, and of developing technology that makes this both possible and likely. How can you compare an automated profiling service you can turn on and off with one of the kind Mr. Martin thinks should rule the world of credit? The difference between the two is a bit like the difference between a consensual sexual relationship and one based on force.
Returning to the RIAA, in my view Dave is barking up the wrong metaphor. The RIAA is NOT producing tools that put people in control of their relationships or property – quite the contrary. And they'll pay for that.