SpyPhone for iPhone

The MedPage Today blog recently wrote about “iPhone Security Risks and How to Protect Your Data — A Must-Read for Medical Professionals.”  The story begins: 

Many healthcare providers feel comfortable with the iPhone because of its fluid operating system, and the extra functionality it offers, in the form of games and a variety of other apps.  This added functionality is missing with more enterprise-based smart phones, such as the Blackberry platform.  However, this added functionality comes with a price, and exposes the iPhone to security risks. 

Nicolas Seriot, a researcher from the Swiss University of Applied Sciences, has found some alarming design flaws in the iPhone operating system that allow rogue apps to access sensitive information on your phone.

MedPage quotes a CNET article where Elinor Mills reports:

Lax security screening at Apple's App Store and a design flaw are putting iPhone users at risk of downloading malicious applications that could steal data and spy on them, a Swiss researcher warns.

Apple's iPhone app review process is inadequate to stop malicious apps from getting distributed to millions of users, according to Nicolas Seriot, a software engineer and scientific collaborator at the Swiss University of Applied Sciences (HEIG-VD). Once they are downloaded, iPhone apps have unfettered access to a wide range of privacy-invasive information about the user's device, location, activities, interests, and friends, he said in an interview Tuesday…

In addition, a sandboxing technique limits access to other applications’ data but leaves exposed data in the iPhone file system, including some personal information, he said.

To make his point, Seriot has created open-source proof-of-concept spyware dubbed “SpyPhone” that can access the 20 most recent Safari searches, YouTube history, and e-mail account parameters like username, e-mail address, host, and login, as well as detailed information on the phone itself that can be used to track users, even when they change devices.

Following the link to Seriot's paper, called iPhone Privacy, here is the abstract:

It is a little known fact that, despite Apple's claims, any applications downloaded from the App Store to a standard iPhone can access a significant quantity of personal data.

This paper explains what data are at risk and how to get them programmatically without the user's knowledge. These data include the phone number, email accounts settings (except passwords), keyboard cache entries, Safari searches and the most recent GPS location.

This paper shows how malicious applications could pass the mandatory App Store review unnoticed and harvest data through officially sanctioned Apple APIs. Some attack scenarios and recommendations are also presented.


In light of Seriot's paper, MedPage concludes:

These security risks are substantial for everyday users, but become heightened if your phone contains sensitive data, in the form of patient information, and when your phone is used for patient care.   Over at iMedicalApps.com, we are not fans of medical apps that enable you to input patient data, and there are several out there.  But we also have peers who have patient contact information stored on their phones, patient information in their calendars, or are accessible to their patients via e-mail.  You can even e-prescribe using your iPhone. 

I don't want to even think about e-prescribing using an iPhone right now, thank you.

Anyone who knows anything about security has known all along that the iPhone – like all devices – is vulnerable to some set of attacks.  For them, iPhone Privacy will be surprising not because it reveals possible attacks, but because of how amazingly elementary they are (the paper is a must-read from this point of view).  

On a positive note, the paper might awaken some of those sent into a deep sleep by proselytizers convinced that Apple's App Store censorship program is reasonable because it protects them from rogue applications.

Evidently Apple's App Store staff take their mandate to protect us from people like award-winning Mad Magazine cartoonist Tom Richmond pretty seriously (see Apple bans Nancy Pelosi bobble head).  If their approach to “protecting” the underlying platform has any merit at all, perhaps a few of them could be reassigned to work part time on preventing trivial and obvious hacker exploits.

But I don't personally think a closed platform with a censorship board is either the right approach or one that can possibly work as attackers get more serious (in fact computer science has long known that this approach is baloney).  The real answer will lie in hard, unfashionable and (dare I say it?) expensive R&D into application isolation and related technologies. I hope this will be an outcome:  first, for the sake of building a secure infrastructure;  second, because one of my phones is an iPhone and I like to explore downloaded applications too.

[Heads Up: Khaja Ahmed]

Sorry Tomek, but I “win”

As I discussed here, the EFF is running an experimental site demonstrating that browsers ooze an unnecessary “browser fingerprint” allowing users to be identified across sites without their knowledge.  One can easily imagine this scenario:

  1. Site “A” offers some service you are interested in and you release your name and address to it.  At the same time, the site captures your browser fingerprint.
  2. Site “B” establishes a relationship with site “A” whereby, when it sends “A” a browser fingerprint, “A” responds with the matching identifying information.
  3. You are therefore unknowingly identified at site “B”.

I can see browser fingerprints being used for a number of purposes.  Some sites might use a fingerprint to keep track of you even after you have cleared your cookies – and rationalize this as providing added security.  Others will inevitably employ it for commercial purposes – targeted identifying customer information is high value.  And the technology can even be used for corporate espionage and cyber investigations.
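To make the scenario above concrete, here is a minimal sketch (browser JavaScript written as TypeScript; the function name and the exact attribute list are my own illustration, not anything EFF has published) of the kind of correlation key a site could compute from information the browser volunteers on every visit:

    // Illustrative only: derive a stable "fingerprint" key from attributes
    // every browser hands over without being asked.
    async function browserFingerprint(): Promise<string> {
      const parts = [
        navigator.userAgent,                                       // browser and OS versions
        navigator.language,                                        // preferred language
        `${screen.width}x${screen.height}x${screen.colorDepth}`,   // display configuration
        String(new Date().getTimezoneOffset()),                    // time zone
        Array.from(navigator.plugins).map(p => p.name).join(','),  // installed plug-ins
      ].join('||');

      // Hash the concatenation so the key is compact but still (probabilistically) unique.
      const digest = await crypto.subtle.digest('SHA-256', new TextEncoder().encode(parts));
      return Array.from(new Uint8Array(digest))
        .map(b => b.toString(16).padStart(2, '0'))
        .join('');
    }

Site “A” would store a key like this next to the name and address you typed in; site “B” only has to compute the same key and ask “A” who it belongs to.  No cookies are involved, so clearing them changes nothing.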

It is important to point out that, like any fingerprint, the identification is only probabilistic.  EFF is studying what these probabilities are.  In my original test, my browser was unique among the 120,000 browsers tested – a number I found very disturbing.

But friends soon wrote back to report that their browser was even “more unique” than mine!  And going through my feeds today I saw a post at Tomek's DS World where he reported a staggering fingerprint uniqueness of 1 in 433,751:


It's not that I really think of myself as super competitive, but these results were so extreme I decided to take the test again.  My new score is off the scale:

Tomek ends his post this way:

“So a browser can be used to identify a user in the Internet or to harvest some information without his consent. Will it really become a problem and will it be addressed in some way in browsers in the future? This question has to be answered by people responsible for browser development.”

I have to disagree.  It is already a problem.  A big problem.  These outcomes weren't at all obvious in the early days of the browser.  But today the writing is on the wall, and the problem needs to be addressed.  It's a matter right at the core of delivering on a trustworthy computing infrastructure.  We need to evolve the world's browsers to employ minimal disclosure, releasing only what is necessary, and never providing a fingerprint without the user's consent.


More unintended consequences of browser leakage

Joerg Resch at Kuppinger Cole points us to new research showing  how social networks can be used in conjunction with browser leakage to provide accurate identification of users who think they are browsing anonymously.

Joerg writes:

Thorsten Holz, Gilbert Wondracek, Engin Kirda and Christopher Kruegel from Isec Laboratory for IT Security found a simple and very effective way to identify a person behind a website visitor without asking for any kind of authentication. Identify in this case means: full name, adress, phone numbers and so on. What they do, is just exploiting the browser history to find out, which social networks the user is a member of and to which groups he or she has subscribed within that social network.

The Practical Attack to De-Anonymize Social Network Users begins with what is known as “history stealing”.  

Browsers don’t allow web sites to access the user’s “history” of visited sites.  But we all know that browsers render links to sites we have visited in a different color than links to sites we have not.  This is available programmatically through JavaScript by examining the a:visited style.  So malicious sites can play a list of URLs and examine the a:visited style to determine whether they have been visited, and can do this without the user being aware of it.
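A hedged sketch of what such a probe might look like (TypeScript; the helper name and the hard-coded “visited purple” are illustrative, not taken from the paper):

    // Illustrative sketch of "history stealing" via the a:visited style.
    function visitedUrls(candidates: string[]): string[] {
      const hits: string[] = [];
      for (const url of candidates) {
        const link = document.createElement('a');
        link.href = url;                       // an empty anchor renders nothing visible
        document.body.appendChild(link);

        // If the browser applied its :visited style, the computed link color
        // differs from the default unvisited color.
        const color = getComputedStyle(link).color;
        if (color === 'rgb(85, 26, 139)') {    // a typical default visited-link purple
          hits.push(url);
        }
        link.remove();
      }
      return hits;
    }

The probe is invisible to the user and fast, so a page can “play” thousands of candidate URLs in a few seconds.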

This attack has been known for some time, but what is novel is its use.  The authors claim the groups in all major social networks are represented through URLs, so history stealing can be translated into “group membership stealing”.  This brings us to the core of this new work.  The authors have developed a model for the identification characteristics of group memberships – a model that will outlast this particular attack, as dramatic as it is.
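The identification model itself is easy to illustrate.  The sketch below is hypothetical (the data structures and names are invented), but it shows the essential step: intersect the membership lists of the groups that history stealing says the visitor belongs to, and watch the candidate set collapse.

    // Hypothetical illustration of group-membership de-anonymization.
    type GroupId = string;
    type UserId = string;

    function candidateUsers(
      visitedGroups: GroupId[],               // groups revealed by history stealing
      membership: Map<GroupId, Set<UserId>>   // membership lists crawled from the network
    ): Set<UserId> {
      let candidates: Set<UserId> | null = null;
      for (const group of visitedGroups) {
        const members = membership.get(group) ?? new Set<UserId>();
        candidates = candidates === null
          ? new Set(members)
          : new Set([...candidates].filter(u => members.has(u)));
      }
      return candidates ?? new Set<UserId>();
    }

With only a few groups, the intersection often contains a single user – which is just what happened to Joerg below.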

The researchers have created a demonstration site that works with the European social network Xing.  Joerg tried it out and, as you can see from the table at left, it identified him uniquely – although he had done nothing to authenticate himself.  He says,

“Here is a screenshot from the self-test I did with the de-anonymizer described in my last post. I'm a member in 5 groups at Xing, but only active in just 2 of them. This is already enough to successfully de-anonymize me, at least if I use the Google Chrome Browser. Using Microsoft Internet Explorer did not lead to a result, as the default security settings (I use them in both browsers) seem to be stronger. That's weird!”

Since I’m not a user of Xing I can’t explore this first hand.

Joerg goes on to ask whether history-stealing is a crime.  If it's not, how mainstream is this kind of analysis going to become?  What is the right legal framework for considering these issues?  One thing is for sure:  this kind of demonstration, as it becomes widely understood, risks profoundly changing the way people look at the Internet.

To return to the idea of minimal disclosure for the browser, why do sites we visit need to be able to read the a:visited attribute?  This should again be thought of as “fingerprinting”, and before a site is able to retrieve the fingerprint, the user must be made aware that it opens the possibility of being uniquely identified without authentication.

New EFF Research on Web Browser Tracking

Slashdot's CmdrTaco points us to a research project announced by EFF's Peter Eckersley that I expect will provoke both discussion and action:

What fingerprints does your browser leave behind as you surf the web?

Traditionally, people assume they can prevent a website from identifying them by disabling cookies on their web browser. Unfortunately, this is not the whole story.

When you visit a website, you are allowing that site to access a lot of information about your computer's configuration. Combined, this information can create a kind of fingerprint – a signature that could be used to identify you and your computer. But how effective would this kind of online tracking be?

EFF is running an experiment to find out. Our new website Panopticlick will anonymously log the configuration and version information from your operating system, your browser, and your plug-ins, and compare it to our database of five million other configurations. Then, it will give you a uniqueness score – letting you see how easily identifiable you might be as you surf the web.

Adding your information to our database will help EFF evaluate the capabilities of Internet tracking and advertising companies, who are already using techniques of this sort to record people's online activities. They develop these methods in secret, and don't always tell the world what they've found. But this experiment will give us more insight into the privacy risk posed by browser fingerprinting, and help web users to protect themselves.

To join the experiment:
http://panopticlick.eff.org/

To learn more about the theory behind it:
http://www.eff.org/deeplinks/2010/01/primer-information-theory-and-priva…

Interesting that my own browser was especially recognizable:


I know my video configuration is pretty bizarre – but I don't understand why I should be broadcasting that when I casually surf the web.  I would also like to understand what is so special about my user agent info.

A pixel resolution like 1435 x 810 x 32 seems unnecessarily specific.  Applying the concept of minimal disclosure, it would be better to reveal simply that my machine is in some useful “class” of resolution that would not overidentify me.
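Minimal disclosure here could be as simple as having the browser report a coarse bucket instead of the exact triple.  A sketch (the bucket boundaries are invented for illustration):

    // Reveal a coarse resolution "class" rather than the exact dimensions.
    function resolutionClass(width: number, height: number): string {
      const pixels = width * height;
      if (pixels <= 1024 * 768) return 'small';
      if (pixels <= 1600 * 900) return 'medium';
      if (pixels <= 1920 * 1080) return 'large';
      return 'very large';
    }

    // Instead of broadcasting screen.width, screen.height and screen.colorDepth:
    const disclosed = resolutionClass(screen.width, screen.height);   // e.g. "medium"

A class like this still lets a site adapt its layout, but it places me in a crowd of millions rather than a handful.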

I would think the provisioning of highly identifying information should be limited to sites with which I have an identity relationship.  If we can agree on a shared mechanism for storing information about our trust for various sites (information cards offer this capability) our browsers could automatically adjust to the relationship they were in, releasing information as necessary.  This is a good example of how a better identity system is needed to protect privacy while providing increased functionality.


Electronic Eternity

From the Useful Spam Department:  I got an advertisement from a robot at “complianceonline.com” that works for a business addressing the problem of data retention on the web from the corporate point of view.

We've all read plenty about the dangers of teenagers publishing their party revels only to find themselves rejected by a university snooping on their Facebook account.  But it's important to remember that the same issues affect business and government as well, as the complianceonline robot points out:

“Avoid Documentation ‘Time Bombs’

“Your own communications and documents can be used against you.

“Lab books, project and design history files, correspondence including e-mails, websites, and marketing literature may all contain information that can compromise a company and its regulatory compliance. Major problems with the U.S. FDA and/or in lawsuits have resulted from careless or inappropriate comments or even inaccurate opinions being “voiced” by employees in controlled or retained documents. Opinionated or accusatory E-mails have been written and sent, where even if deleted, still remain in the public domain where they can effectively “last forever”.

“In this electronic age of My Space, Face Book, Linked In, Twitter, Blogs and similar instant communication, derogatory information about a company and its products can be published worldwide, and “go viral”, whether based on fact or not. Today one's ‘opinion’ carries the same weight as ‘fact’.”

This is all pretty predictable and even banal, but then we get to the gem:  the company offers a webinar on “Electronic Eternity”.  I like the rubric.  I think “Electronic Eternity” is one of the things we should question.  Do we really need to accept that it is inevitable?  Whose interest does it serve?  I can't see any stakeholder who benefits except, perhaps, the archeologist. 

Perhaps everything should have a half-life unless a good argument can be made for preserving it.


Green Dam and the First Law of Identity

China Daily posted this opinion piece by Chen Weihua that provides context on how the Green Dam proposal could ever have emerged.  I found it striking because it brings to the fore the relationship of the initiative to the First Law of Identity (User Control).  As in so many cases where the Laws are broken, the result is passionate opposition and muddled technology.

The Ministry of Industry and Information Technology's latest regulation to preinstall filtering software on all new computers by July 1 has triggered public concern, anger and protest.

A survey on Sina.com, the largest news portal in China, showed that an overwhelming 83 percent of the 26,232 people polled said they would not use the software, known as Green Dam. Only 10 percent were in favor.

Despite the official claim that the software was designed to filter pornography and unhealthy content on the Internet, many people, including some computer experts, have disputed its effectiveness and are worried about its possible infringement on privacy, its potential to disrupt the operating system and other software, and the waste of $6.1 million of public fund on the project.

These are all legitimate concerns. But behind the whole story, one pivotal question to be raised is whether we believe people should have the right to make their own choice on such an issue, or the authorities, or someone else, should have the power to make such a decision.

Compared with 30 years ago, the country has achieved a lot in individual freedom by giving people the right to make their own decisions regarding their personal lives.

Under the planned economy three decades ago, the government decided the prices of all goods. Today, the market decides 99 percent of the prices based on supply and demand.

Three decades ago, the government even decided what sort of shirts and trousers were proper for its people. Flared trousers, for example, were banned. Today, our streets look like a colorful stage.

Till six years ago, people still needed an approval letter from their employers to get married or divorced. However bizarre it may sound to the people today, the policy had ruled the nation for decades.

The divorce process then could be absurdly long. Representatives from trade union, women's federation and neighborhood committee would all come and try to convince you that divorce is a bad idea – bad for the couple, bad for their children and bad for society.

It could be years or even decades before the divorce was finally approved. Today, it only takes 15 minutes for a couple to go through the formalities to tie or untie the knot at local civil affair bureaus.

Less than three decades ago, the rigid hukou (permanent residence permit) system didn't allow people to work in another city. Even husbands and wives with hukou in different cities had to work and live in separate places. Today, over 200 million migrant workers are on the move, although hukou is still a constraint.

Less than 20 years ago, doctors were mandated to report women who had abortions to their employers. Today, they respect a woman's choice and privacy.

No doubt we have witnessed a sea of change, with more and more people making their own social and economic decisions.

The government, though still wielding huge decision-making power, has also started to consult people on some decisions by hosting public hearings, such as the recent one on tap water pricing in Shanghai.

But clearly, some government department and officials are still used to the old practice of deciding for the people without seeking their consent.

In the Green Dam case, buyers, mostly adults, should be given the complete freedom to decide whether they want the filtering software to be installed in their computers or not.

Respect for an individual's right to choice is an important indicator of a free society, depriving them of which is gross transgression.

Let's not allow the Green Dam software to block our way into the future.

The many indications that the technology behind Green Dam weakens the security fabric of China indicate that Chen Weihua is right in more ways than one.

Just for completeness, I should point out that the initiative also breaks the Third Law (Justifiable Parties) if adults have not consciously enabled the software and chosen to have the government participate in their browsing.

Ethical Foundations of Cybersecurity

Britain's Enterprise Privacy Group is starting a new series of workshops that deal squarely with ethics.  While specialists in ethics have achieved a significant role in professions like medicine, this is one of the first workshops I've seen that takes on equivalent issues in our field of work.  Perhaps that's why it is already oversubscribed…

‘The continuing openness of the Internet is fundamental to our way of life, promoting the free flow of ideas to strengthen democratic ideals and deliver the economic benefits of globalisation.  But a fundamental challenge for any government is to balance measures intended to protect security and the right to life with the impact these may have on the other rights that we cherish and which form the basis of our society.
 
'The security of cyber space poses particular challenges in meeting tests of necessity and proportionality as its distributed, de-centralised form means that powerful tools may need to be deployed to tackle those who wish to do harm.  A clear ethical foundation is essential to ensure that the power of these tools is not abused.
 
'The first workshop in this series will be hosted at the Cabinet Office on 17 June, and will explore what questions need to be asked and answered to develop this foundation?

‘The event is already fully subscribed, but we hope to host further events in the near future with greater opportunities for all EPG Members to participate.’

Let's hope EPG eventually turns these deliberations into a document they can share more widely.  Meanwhile, this article seems to offer an introduction to the literature.

Kim Cameron: secret RIAA agent?

Dave Kearns cuts me to the polemical quick by tarring me with the smelly brush of the RIAA:

‘Kim has an interesting post today, referencing an article (“What Does Your Credit-Card Company Know About You?” by Charles Duhigg in last week’s New York Times).

‘Kim correctly points out the major fallacies in the thinking of J. P. Martin, a “math-loving executive at Canadian Tire”, who, in 2002, decided to analyze the information his company had collected from credit-card transactions the previous year. For example, Martin notes that “2,220 of 100,000 cardholders who used their credit cards in drinking places missed four payments within the next 12 months.” But that's barely 2% of the total, as Kim points out, and hardly conclusive evidence of anything.

‘I'm right with Cameron for most of his essay, up til the end when he notes:

When we talk about the need to prevent correlation handles and assembly of information across contexts (for example, in the Laws of Identity and our discussions of anonymity and minimal disclosure technology), we are talking about ways to begin to throw a monkey wrench into an emerging Martinist machine. Mr. Duhigg’s story describes early prototypes of the machinations we see as inevitable should we fail in our bid to create a privacy enhancing identity infrastructure for the digital epoch.

‘Change “privacy enhancing” to “intellectual property protecting” and it could be a quote from an RIAA press release!

‘We should never confuse tools with the bad behavior that can be helped by those tools. Data correlation tools, for example, are vitally necessary for automated personalization services and can be a big help to future services such as Vendor Relationship Management (VRM) . After all, it's not Napster that's bad but people who use it to get around copyright laws who are bad. It isn't a cup of coffee that's evil, just people who try to carry one thru airport security. 🙂

‘It is easier to forbid the tool rather than to police the behavior but in a democratic society, it's the way we should act.’

I agree that we must influence behaviors as well as develop tools.  And I'm as positive about Vendor Relationship Management as anyone.  But getting concrete, there's a huge gap between the kind of data correlation done at a person's request as part of a relationship (VRM), and the data correlation I described in my post that is done without a person's consent or knowledge.  As VRM's Saint Searls has said, “Sometimes, I don't want a deep relationship, I just want a cup of coffee”.  

I'll come clean with an example.  Not a month ago, I was visiting friends in Canada, and since I had an “extra car”, was nominated to go pick up some new barbells for the kids. 

So, off to Canadian Tire to buy a barbell.  Who knows what category they put me in when 100% of my annual consumption consists of barbells?  It had to be right up there with low-grade oil or even a Mega Thruster Exhaust System.  In this case, Dave, there was no R and certainly no VRM: I didn't ask to be profiled by Mr. Martin's reputation machines.

There is nothing about minimal disclosure that says profiles cannot be constructed when people want that.  It simply means that information should only be collected in light of a specific usage, and that usage should be clear to the parties involved (NOT the case with Canadian Tire!).  When there is no legitimate reason for collecting information, people should be able to avoid it.

It all boils down to the matter of people being “in control” of their digital interactions, and of developing technology that makes this both possible and likely.  How can you compare an automated profiling service you can turn on and off with one such as Mr. Martin thinks should rule the world of credit?  The difference between the two is a bit like the difference between a consensual sexual relationship and one based on force.

Returning to the RIAA, in my view Dave is barking up the wrong metaphor.  RIAA is NOT producing tools that put people in control of their relationships or property – quite the contrary.  And they'll pay for that. 

Personal information can be a toxic liability…

From Britain's Guardian, another fantastic tale of information leakage:

The home secretary, Jacqui Smith, yesterday denounced the consultancy firm involved in the development of the ID cards scheme for “completely unacceptable” practice after losing a memory stick containing the personal details of all of the 84,000 prisoners in England and Wales.

The memory stick contained unencrypted information from the electronic system for monitoring offenders through the criminal justice system, including information about 10,000 of the most persistent offenders…

Smith said PA Consulting had broken the terms of its contract in downloading the highly sensitive data. She said: “It runs against the rules set down both for the holding of government data and set down by the external contractor and certainly set down in the contract that we had with the external contractor.

An illuminating twist is that the information was provided to the contractor encrypted.  The contractor, one of the “experts” designing the British national identity card, decrypted it, put it on a USB stick and “lost it”.  With experts like this, who needs non-experts?

When government identity system design and operations are flawed, the politicians responsible suffer  the repercussions.  It therefore always fills me with wonder – it is one of those inexplicable aspects of human nature – that politicians don't protect themselves by demanding the safest possible systems, nixing any plan that isn't based on at least a modicum of the requisite pessimism.  Why do they choose such rotten technical advisors?

Opposition parties urged the government to reconsider its plan for the introduction of an ID card database following the incident. Dominic Grieve, the shadow home secretary, said: “The public will be alarmed that the government is happy to entrust their £20bn ID card project to the firm involved in this fiasco.

“This will destroy any confidence the public still have in this white elephant and reinforce why it could endanger – rather than strengthen – our security.”

The Liberal Democrats were also not prepared to absolve the home secretary of responsibility. Their leader, Nick Clegg, accused Smith of being worse than the Keystone Cops at keeping data safe.

Clegg said: “Frankly the Keystone Cops would do a better job running the Home Office and keeping our data safe than this government, and if this government cannot keep the data of thousands of guilty people safe, why on earth should we give them the data of millions of innocent people in an ID card database?”

David Smith, deputy commissioner for the information commissioner's office, said: “The data loss by a Home Office contractor demonstrates that personal information can be a toxic liability if it is not handled properly, and reinforces the need for data protection to be taken seriously at all levels.”

Home Office resource accounts for last year show that in March of this year two CDs containing the personal information of seasonal agricultural workers went missing in transit to the UK Borders Agency. The names, dates of birth, and passport numbers of 3,000 individuals were lost.

If you are wondering why Britain seems to experience more “data loss” than anyone else, I suspect you are asking the wrong question.  If I were a betting man, I would wager that they just have better reporting – more people paying attention and blowing whistles.

But the big takeaway at the technical level is that sensitive information – and identity information in particular – needs to be protected throughout its lifetime.  If it is put on a portable device, the device should enforce rights management and only release specific information as needed – never allowing wholesale copying.  Maybe we don't have dongles that can do this yet, but we certainly have phone-sized computers (dare I say phones?) with all the necessary computational capabilities.


Trends in what is known about us

We know how the web feeds itself in a chain reaction powered by the assembly and location of information.  We love it.  Bringing information together that was previously compartmentalized has made it far easier to find out what is happening and avoid thinking narrowly.  In some cases it has even changed the fundamentals of how we work and interact.  The blogosphere identity conversation is an example of this.  We are able to learn from each other across the industry and adjust to evolving trends in a fluid way, rather than “projecting” what other peoples’ thinking and motivations might be.  In this sense the content of what we are doing is related to the medium through which we do it.

Information accumulates power by being put into proximity and aggregated.   This even appears to be an inherent property of information itself.  Of course information can't effect its own aggregation, but easily finds hosts who are motivated to do so: businesses, governments, researchers, industries, libraries, data centers – and the indefatigable search engine.

Some forms of aggregation involve breaking down the separation between domains of facts.  Facts are initially discerned within a context.  But as contexts flow together and merge, the facts are visible from new perspectives.  We can think of them as “views”.

Information trends and digital identity 

How does this fundamental tendency of information to reorganize itself relate to digital identity?

This is clearly a complicated question.  But it is perhaps one of the most important questions of our time – one that needs to come to the attention of students, academics, policy makers, legislators, and through them, the general public.   The answer will affect everyone.

It is hard to clearly explain and discuss trends that are so infrastructural.  Those of us working on these issues have concepts that apply, but the concepts don't really have satisfactory names, and just aren't crisp enough.  We aren't ready for a wider conversation about the things we have seen.

Recently I've been trying to organize my own thinking about this through a grid expressing, on one axis, the tendency of context to merge; and, on the other, the spectrum of data visibility:

Tendency of data to join and become visible

The spectrum of visibility extends from a single individual on the left to everyone in the society on the right  [if reading a text feed please check the graphic – Kim]

The spectrum of contextual separation extends from complete separation of information by context at the top, to complete joining of data across contexts at the bottom.

I've represented the tendency of information to aggregate as the arrow leading from separation to full join, and this should be considered a dynamic tendency of the system.

Where do we fit in this picture?

Now let's set up a few markers from which we can calibrate this field.  For example, let's take what I've labelled “Today's public personas”.  I'm talking about what we reveal about ourselves in the public realm.  Because it's public, it's on the “Visible to all” part of the spectrum.  Yet for most of us, it is a relatively narrow set of information that is revealed – our names, property we own, aspects of our professional lives.  Thus our public personas remain relatively contextual.

You can imagine variants on this – for example a show-business personality who might be situated further to the right than the “public persona”, being known by more people.  Further, additional aspects of such a person's life might be known, which would be represented by moving down towards the bottom of the quadrant (or even further).    

I've also included a marker that represents the kind of commercial relationships encountered in today's western society.  Now we're on the “Visible to some” part of the visibility spectrum. In some cases (e.g. our dealings with lawyers), this marker would hopefully be located further to the left, indicating fewer parties to the information.  The current location implies some overlapping of context and sharing across parties – for example, transactions visible to credit card companies, merchants, and third parties in their employ.

Going forward, I'll look at what happens as the dynamic towards data joining asserts itself in this model.