Kerry-McCain bill proposes “minimal disclosure” for transactions

Steve Satterfield at Inside Privacy gives us this overview of the central features of the new Commercial Privacy Bill of Rights proposed by US Senators Kerry and McCain (download it here):

  • The draft envisions a significant role for the FTC and includes provisions requiring the FTC to promulgate rules on a number of important issues, including the appropriate consent mechanism for uses of data.  The FTC would also be tasked with issuing rules obligating businesses to provide reasonable security measures for the consumer data they maintain and to provide transparent notices about data practices.
  • The draft also states that businesses should “seek” to collect only as much “covered information” as is reasonably necessary to provide a transaction or service requested by an individual, to prevent fraud, or to improve the transaction or service.
  • “Covered information” is defined broadly and would include not just “personally identifiable information” (such as name, address, telephone number, social security number), but also “unique identifier information,” including a customer number held in a cookie, a user ID, a processor serial number or a device serial number.  Unlike definitions of “covered information” that appear in separate bills authored by Reps. Bobby Rush (D-Ill.) and Jackie Speier (D-Cal.), this definition specifically covers cookies and device IDs.
  • The draft encompasses a data retention principle, providing that businesses should retain covered information only as long as necessary to provide the transaction or service “or for a reasonable period of time if the service is ongoing.” 
  • The draft contemplates enforcement by the FTC and state attorneys general.  Notably — and in contrast to Rep. Rush's bill — the draft does not provide a private right of action for individuals who are affected by a violation. 
  • Nor does the bill specifically address the much-debated “Do Not Track” opt-out mechanism that was recommended in the FTC's recent staff report on consumer privacy.  (You can read our analysis of that report here.) 

As noted above, the draft is reportedly still a work in progress.  Inside Privacy will provide additional commentary on the Kerry legislation and other congressional privacy efforts as they develop.   

Press conference will be held tomorrow at 12:30 pm.  [Emphasis above is mine – Kim]

Readers of Identityblog will understand that I see this development, like so many others, as an inevitable and predictable consequence of many short-sighted industry players breaking the Laws of Identity.

 

WSJ: Federal Prosecutors investigate smartphone apps

If you have kept up with the excellent Wall Street Journal series on smartphone apps that inappropriately collect and release location information, you won't be surprised at their latest chapter:  Federal Prosecutors are now investigating information-sharing practices of mobile applications, and a Grand Jury is already issuing subpoenas.  The Journal says, in part:

Federal prosecutors in New Jersey are investigating whether numerous smartphone applications illegally obtained or transmitted information about their users without proper disclosures, according to a person familiar with the matter…

The criminal investigation is examining whether the app makers fully described to users the types of data they collected and why they needed the information—such as a user's location or a unique identifier for the phone—the person familiar with the matter said. Collecting information about a user without proper notice or authorization could violate a federal computer-fraud law…

Online music service Pandora Media Inc. said Monday it received a subpoena related to a federal grand-jury investigation of information-sharing practices by smartphone applications…

In December 2010, Scott Thurm wrote Your Apps Are Watching You, which has now been “liked” by over 13,000 people.  It reported that the Journal had tested 101 apps and found that:

… 56 transmitted the phone's unique device identifier to other companies without users’ awareness or consent.  Forty-seven apps transmitted the phone's location in some way. Five sent a user's age, gender and other personal details to outsiders.  At the time they were tested, 45 apps didn't provide privacy policies on their websites or inside the apps.

In Pandora's case, both the Android and iPhone versions of its app transmitted information about a user's age, gender, and location, as well as unique identifiers for the phone, to various advertising networks. Pandora gathers the age and gender information when a user registers for the service.

Legal experts said the probe is significant because it involves potentially criminal charges that could be applicable to numerous companies. Federal criminal probes of companies for online privacy violations are rare…

The probe centers on whether app makers violated the Computer Fraud and Abuse Act, said the person familiar with the matter. That law, crafted to help prosecute hackers, covers information stored on computers. It could be used to argue that app makers “hacked” into users’ cellphones.

[More here]

The elephant in the room is Apple's own approach to location information, which should certainly be subject to investigation as well.  The user is never presented with a dialog in which Apple's use of location information is explained and permission is obtained.  Instead, the user's agreement is gained surreptitiously, hidden away on page 37 of a 45-page policy that Apple users must accept in order to use… iTunes.  Why iTunes requires location information is never explained.  The policy simply states that the user's device identifier and location are non-personal information and that Apple “may collect, use, transfer, and disclose non-personal information for any purpose”.

Any purpose?

Is it reasonable that companies like Apple can proclaim that device identifiers and location are non-personal and then do whatever they want with them?  Informed opinion seems not to agree with them.  The International Working Group on Data Protection in Telecommunications, for example, asserted precisely the opposite as early as 2004.  Membership of the Group included “representatives from Data Protection Authorities and other bodies of national public administrations, international organisations and scientists from all over the world.”

More empirically, I demonstrated in Non-Personal information, like where you live that the combination of device identifier and location is in very many cases (including my own) personally identifying.  This is especially true in North America where many of us live in single-family dwellings.
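To make the point concrete, here is a hypothetical sketch (the pings and coordinates are invented for illustration, not taken from the Journal's tests) of how trivially a “non-personal” stream of location reports tied to a device identifier yields a home address: the location most often reported at night is almost always where the device's owner sleeps, and in a single-family dwelling that address names a person.

```python
from collections import Counter

def infer_home(pings):
    """Guess a device owner's home from (hour_of_day, (lat, lon)) pings:
    the location most frequently reported during night hours (22:00-06:00)."""
    night = [loc for hour, loc in pings if hour >= 22 or hour < 6]
    if not night:
        return None
    return Counter(night).most_common(1)[0][0]

# Invented pings for a single device ID: nights at home, days at an office.
pings = [
    (23, (47.61, -122.33)), (2, (47.61, -122.33)), (5, (47.61, -122.33)),
    (9, (47.65, -122.30)), (13, (47.65, -122.30)), (17, (47.65, -122.30)),
]
print(infer_home(pings))  # the night-time cluster, i.e. the home coordinate
```

Reverse-geocode that coordinate to a street address and the “non-personal” device identifier has become personally identifying.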

[BTW, I have not deeply investigated the approach to sharing of location information taken by other smartphone providers – perhaps others can shed light on this.]

Google Indoors featured on German TV

Germans woke up yesterday to a headline story on Das Erste's TV Morning Show announcing a spiffy new Internet service – Google Indoors.

Das Erste's lead-in and the Google Indoors spokesman

A spokesman said Google was extending its Street View offering so Internet users could finally see inside people’s homes.  Indeed, Google Indoors personnel were already knocking on doors, patiently explaining that if people had not already gone through the opt-out process, they had “opted in”…

Google Indoors greeted by happy customer

… so the technicians needed to get on with their work:

Google Indoors camera-head enters apartment

Google's deep concern about people’s privacy had led it to introduce features such as automated blurring of faces…

Automated privacy features and product placements with revenue shared with residents
 
… and the business model of the scheme was devilishly simple: the contents of people’s houses served as product placements charged to advertisers, with 1/10 of a cent per automatically recognized brand name going to the residents themselves.  As shown below, people could choose to obfuscate products worth more than 5,000 Euros if concerned about attracting thieves – an example of the advanced privacy options and levels the service made possible.

Google Indoors app experience

Check out the video.  Navigation features within houses are amazing!  From the amount of effort and wit put into it by a major TV show, I'd wager that even if Google's troubles with Germany around Street View are over, its problems with Germans around privacy may not be. 

Frankly, Das Erste (meaning “The First”) has to be congratulated on one of the best-crafted April Fools' pranks you will ever witness.  I don't have the command of the German language or politics (!) to understand all the subtleties, but friends say the piece is teeming with irony.  And given Eric Schmidt's policy of getting as close to “creepy” as possible, who wouldn't find the video at least partly believable?

[Thanks to Kai Rannenberg for the heads up.]

Malcolm Crompton on power imbalance and security

Australia's CRN reports that former Australian Privacy Commissioner Malcolm Crompton has called for the establishment of a formal privacy industry to rethink identity management in an increasingly digital world:

Addressing the Cards & Payments Australasia conference in Sydney this week, Crompton said the online environment needed to become “safe to play” from citizens’ perspective.

While the internet was built as a “trusted environment”, Crompton said governments and businesses had emerged as “digital gods” with imbalanced identification requirements.

“Power allocation is where we got it wrong,” he said, warning that organisations’ unwarranted emphasis on identification had created money-making opportunities for criminals.

Malcolm puts this well.  I too have come to see that the imbalance of power between individual users and Internet business is one of the key factors blocking the emergence of a safe Internet. 

CRN continues:

Currently, users were forced to provide personal information to various email providers, social networking sites, and online retailers in what Crompton described as “a patchwork of identity one-offs”.

Not only were login systems “incredibly clumsy and easy to compromise”; centralised stores of personal details and metadata created honeypots of information for identity thieves, he said…

Refuting arguments that metadata – such as login records and search strings – was unidentifiable, Crompton warned that organisations hoarding such information would one day face a user revolt.

He also recommended the use of cloud-based identification management systems such as Azigo, Avoco and OpenID, which tended to give users more control of their information and third-party access rights.

User-centricity was central to Microsoft chief identity architect Kim Cameron’s ‘Laws of Identity’ (pdf), as well as Canadian Privacy Commissioner Ann Cavoukian’s seven principles of ‘Privacy by Design’ (pdf).

Full article here.

Lazy headmasters versus the Laws of Identity

Ray Corrigan routinely combines legal and technological insight at B2fxxx – Random thoughts on law, the Internet and society, and his book on Digital Decision Making is essential.  His work often leaves me feeling uncharacteristically optimistic – living proof that a new kind of legal thinker is emerging with the technological depth needed to be a modern day Solomon.

I hadn't noticed the UK's new Protection of Freedoms Bill until I heard cabinet minister Damian Green talk about it as he pulverized the UK's centralized identity database recently.  Naturally I turned to Ray Corrigan for comment, only to discover that the political housecleaning had also swept away the assumptions behind widespread fingerprinting in Britain's schools, reinstating user control and consent. 

According to TES Connect:

The new Protection of Freedoms Bill gives pupils in schools and colleges the right to refuse to give their biometric data and compels schools to make alternative provision for them.  The several thousand schools that already use the technology will also have to ask permission from parents retrospectively, even if their systems have been established for years…

It turns out that Britain's headmasters, apparently now a lazy bunch, have little stomach for trivialities like civil liberties.  And writing about this, Ray's tone seems that of a judge who has had an impetuous and over-the-top barrister try to bend the rules one too many times.  It is satisfying to see Ray send them home to study the Laws of Identity as scientific laws governing identity systems.   I hope they catch up on their homework…

The Association of School and College Leaders (ASCL) is reportedly opposing the controls on school fingerprinting proposed in the UK coalition government's Protection of Freedoms Bill.

I always understood the reason that unions existed was to protect the rights of individuals. That ASCL should give what they perceive to be their own members’ managerial convenience priority over the civil rights of kids should make them thoroughly ashamed of themselves.  Oh dear – now head teachers are going to have to fill in a few forms before they abuse children's fundamental right to privacy – how terrible.

Although headteachers and governors at schools deploying these systems may be typically ‘happy that this does not contravene the Data Protection Act’, a number of leading barristers have stated that the use of such systems in schools may be illegal on several grounds. As far back as 2006 Stephen Groesz, a partner at Bindmans in London, was advising:

“Absent a specific power allowing schools to fingerprint, I'd say they have no power to do it. The notion you can do it because it's a neat way of keeping track of books doesn't cut it as a justification.”

The recent decisions in the European Court of Human Rights in cases like S. and Marper v UK (2008 – retention of DNA and fingerprints) and Gillan and Quinton v UK (2010 – s44 police stop and search) mean schools have to be increasingly careful about the use of such systems anyway. Not that most schools would know that.

Again, the question of whether kids should be fingerprinted to get access to books and school meals is not even a hard one! These systems completely violate Kim Cameron's first four Laws of Identity.

1. User control and consent – many schools don't ask for consent, child or parental, and don't provide simple opt out options

2. Minimum disclosure for constrained use – the information collected, children's unique biometrics, is disproportionate for the stated use

3. Justifiable parties – the information is in control of or at least accessible by parties who have absolutely no right to it

4. Directed identity – a unique, irrevocable, omnidirectional identifier is being used when a simple unidirectional identifier (e.g. a lunch ticket or library card) would more than adequately do the job.

It's irrelevant how much schools have invested in such systems or how convenient school administrators find them, or that the Information Commissioner's Office soft-pedalled its advice on the matter (in 2008) in relation to the Data Protection Act.  They should all be scrapped, and if the need for schools to wade through a few more forms before they use these systems causes them to be scrapped then that's a good outcome from my perspective.

In addition, although school fingerprint vendors have conned them into parting with ridiculous sums of money (in school budget terms) to install these systems, with promises that they are not really storing fingerprints and that the prints can't be recreated, there is no doubt it is possible to recreate the image of a fingerprint from data stored on such systems. Ross, A. et al., ‘From Template to Image: Reconstructing Fingerprints from Minutiae Points’, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, No. 4, April 2007 is just one example of how university researchers have reverse engineered these systems. The warning caveat emptor applies emphatically to digital technology systems that buyers don't understand, especially when it comes to undermining the civil liberties of our younger generation.

Broken Laws of Identity lead to system's destruction

Britain's Home Office has posted a remarkable video, showing Immigration Minister Damian Green methodically pulverizing the disk drives that once held the centralized database that was to be connected to the British ID Cards introduced by Tony Blair.  

“What we're doing today is CRUSHING the final remnants of the national identity card scheme – the disks and hard drives that held the information on the national identity register have been wiped and they're crushed and reduced to bits of metal so everyone can be absolutely sure that the identity scheme is absolutely dead and buried.

“This whole experiment of trying to collect huge amounts of private information on everyone in this country – and collecting on the central database – is no more, and it's a first step towards a wider agenda of freedom.  We're publishing the protection of freedoms bill as well, and what this shows is that we want to rebalance the security and freedom of the citizen.  We think that previously we have not had enough emphasis on people’s individual freedom and privacy, and we're determined to restore the proper balance on that.”

Readers of Identityblog will recall that the British scheme was exceptional in breaking so many of the Laws of Identity at once.  It flouted the first law – User control and Consent – since citizen participation was mandatory.  It broke the second – Minimal Disclosure for a Constrained Use – since it followed the premise that as much information as possible should be assembled in a central location for whatever uses might arise…  The third law of Justifiable Parties was not addressed given the centralized architecture of the system, in which all departments would have made queries and posted updates to the same database and access could have been extended at the flick of a wrist.  And the fourth law of “Directed Identity” was a clear non-goal, since the whole idea was to use a single identifier to unify all possible information.

Over time opposition to the scheme began to grow and became widespread, even though the Blair and Brown governments claimed their polls showed majority support.  Many well-known technologists and privacy advocates attempted to convince them to consider privacy enhancing technologies and architectures that would be less vulnerable to security and privacy meltdown – but without success.  Beyond the scheme's many technical deficiencies, the social fracturing it created eventually assured its irrelevance as a foundational element for the digital future.

Many say the scheme was an important issue in the last British election.  It certainly appears the change in government has left the ID card scheme in the dust, with politicians of all stripes eager to distance themselves from it.  Damian Green, who worked in television and understands it, does a masterful job of showing what his views are.  His video, posted by the Home Office, seems iconic.

All in all, the fate of the British ID Card and centralized database scheme is exactly what was predicted by the Laws of Identity:

Those of us who work on or with identity systems need to obey the Laws of Identity.  Otherwise, we create a wake of reinforcing side-effects that eventually undermine all resulting technology.  The result is similar to what would happen if civil engineers were to flout the law of gravity.  By following the Laws we can build a unifying identity metasystem that is universally accepted and enduring.

[Thanks to Jerry Fishenden (here and here) for twittering Damian Green's video]

People, meet Facebook HAL…

According to Irina Slutsky of Ad Age Digital, Facebook is testing the idea of deciding what ads to show you by pigeon-holing you based on your real-time conversations.

In the past, a user's Facebook advertising would eventually be impacted by what's on her wall and in her stream, but this was a gradual shift based on out-of-band analysis and categorization. 

Now, at least for participants in this test, it will become crystal clear that Facebook is looking at and listening to your activities; making assumptions about who you are and what you want; and using those assumptions to change how you are treated.

Irina writes:

This month — and for the first time — Facebook started to mine real-time conversations to target ads. The delivery model is being tested by only 1% of Facebook users worldwide. On Facebook, that's a focus group 6 million people strong.

The closest Facebook has come to real-time advertising has been with its most recent ad offering, known as sponsored stories, which repost users’ brand interactions as an ad on the side bar. But for the 6 million users involved in this test, any utterance will become fodder for real-time targeted ads.

For example: Users who update their status with “Mmm, I could go for some pizza tonight,” could get an ad or a coupon from Domino's, Papa John's or Pizza Hut.

To be clear, Facebook has been delivering targeted ads based on wall posts and status updates for some time, but never on a real-time basis. In general, users’ posts and updates are collected in an aggregate format, adding them to target audiences based on the data collected over time. Keywords are a small part of that equation, but Facebook says sometimes keywords aren't even used. The company said delivering ads based on user conversations is a complex algorithm continuously perfected and changed. The real aim of this test is to figure out if those kinds of ads can be served at split-second speed, as soon as the user makes a statement that is a match for an ad in the system.

With real-time delivery, the mere mention of having a baby, running a marathon, buying a power drill or wearing high-heeled shoes is transformed into an opportunity to serve immediate ads, expanding the target audience exponentially beyond usual targeting methods such as stated preferences through “likes” or user profiles. Facebook didn't have to create new ads for this test and no particular advertiser has been tapped to participate — the inventory remains as is.

A user may not have liked any soccer pages or indicated that soccer is an interest, but by sharing his trip to the pub for the World Cup, that user is now part of the Adidas target audience. The moment between a potential customer expressing a desire and deciding on how to fulfill that desire is an advertiser sweet spot, and the real-time ad model puts advertisers in front of a user at that very delicate, decisive moment.

“The long-held promise of local is to deliver timely, relevant and measurable ads which drive actions such as commerce, so if Facebook is moving in this direction, it's brilliant,” said Reggie Bradford, CEO of Facebook software and marketing company Vitrue. “This is a massive market shift everyone is pivoting toward, led by services such as Groupon. Facebook has the power of the graph of me and my friends placing them in the position to dominate this medium.” [More here]
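The split-second matching step the article describes can be sketched in a few lines.  Everything here (the ad inventory, the keywords) is invented for illustration; as the article notes, the real system is a complex, continuously tuned algorithm, not a plain keyword lookup.

```python
# Toy real-time matcher: scan a status update for keywords tied to ads.
AD_INVENTORY = {
    "pizza": "Coupon: 20% off at a pizza chain near you",
    "marathon": "Ad: lightweight running shoes",
    "soccer": "Ad: World Cup jerseys",
}

def match_ad(status_update):
    """Return the first ad whose keyword appears in the status, else None."""
    words = status_update.lower().replace(",", " ").split()
    for keyword, ad in AD_INVENTORY.items():
        if keyword in words:
            return ad
    return None

print(match_ad("Mmm, I could go for some pizza tonight"))
```

Run against the article's own example status, the sketch returns the pizza coupon the instant the statement is posted – which is exactly the cause-and-effect visibility discussed below.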

This test is important and will reveal a lot.  If the system is accurate and truly real-time, the way it works will become obvious to people.  It will be a simple cause-and-effect experience that leads to a clarity people have not had before around profiling.  This will be good.

However, once the analysis algorithms make mistakes in pigeon-holing users – which is inevitable – they are likely to alienate at least some part of the test population, raising their consciousness of the serious potential problems with profiling.  What will that do to their perception of Facebook?

A Facebook that looks more and more like HAL will not be accepted as “your universal internet identity” – as some of the more pathologically shortsighted dabblers in identity claim is already becoming the case.  Like other companies, Facebook has many simultaneous goals, and some of them conflict in fundamental ways.  More than anything else, in the long term, it is these conflicts that will limit Facebook's role as an identity provider.

 

 

Netflix stung with privacy lawsuits

Via Archie Reed, this story by Greg Sandoval of ZDNet:

Netflix, the web's top video-rental service, has been accused of violating US privacy laws in five separate lawsuits filed during the past two months, records show.

Each of the five plaintiffs alleges that Netflix hangs onto customer information, such as credit card numbers and rental histories, long after subscribers cancel their membership. They claim this violates the Video Privacy Protection Act (VPPA).

Netflix declined to comment.

In a four-page suit filed on Friday, Michael Sevy, a former Netflix subscriber who lives in Michigan, accuses Netflix of violating the VPPA by “collecting, storing and maintaining for an indefinite period of time, the video rental histories of every customer that has ever rented a DVD from Netflix”. Netflix also retains information that “identifies the customer as having requested or obtained specific video materials or services”, according to Sevy's suit.

In a complaint filed 22 February, plaintiff Jason Bernal, a resident of Texas, claimed “Netflix has assumed the role of Big Brother and trampled the privacy rights of its former customers”.

Jeff Milans from Virginia filed the first of the five suits on 26 January. One of his attorneys, Bill Gray, told ZDNet Australia’s sister site CNET yesterday that the way he knows Netflix is preserving information belonging to customers who have left the company is from Netflix emails. According to Gray, in messages to former subscribers, Netflix writes something similar to “We'd love to have you come back. We've retained all of your video choices”.

Gray said that Netflix uses the customer data to market the rental service, but this is done while risking its customers’ privacy. Someone's choice in rental movies could prove embarrassing, according to Gray, and should hackers ever get access to Netflix's database, that information could be made publicly available.

“We want Netflix to operate in compliance of the law and delete all of this information,” Gray said.

All the plaintiffs filed their complaints in US District Court for the Northern District of California. Each has asked the court for class action status. [More here].

In Europe there has been a lot of discussion about “the Right to be Forgotten” (see, for example, Le droit à l’oubli sur Internet).  The notion is that after some time, information should simply fade away (counteracting digital eternity).  The Right to be Forgotten has to be one of the most important digital rights – not only for social networks, but for the Internet as a whole.  

The authors of the Social Network Users’ Bill of Rights have called some variant of this the “Right to Withdraw”.  Whatever words we use, the Right is a far-reaching game-changer – a cure as important as the introduction of antibiotics was in the world of medicine.

I say “cure” because it helps heal problems that shouldn't have been created in the first place. 

For example, Netflix does not need to – and should not – associate our rental patterns with our natural identities (e.g. with us as recognizable citizens).  Nor should any other company that operates in the digital world. 

Instead, following the precepts of minimal disclosure, the patterns should simply be associated with entities who have accounts and the right to rent movies.  The details of billing should not be linked to the details of ordering (this is possible using the new privacy-enhancing technologies).  From our point of view as consumers of these services, there is no reason the linking should be visible to anyone but ourselves.
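One way to picture that unlinking is with per-context pseudonyms.  The following is a toy HMAC construction, purely to illustrate the idea – real privacy-enhancing technologies such as U-Prove or Idemix use blind-signature cryptography, not this shortcut – in which a secret held only by the user derives a different opaque identifier for each business context, so the rental service and the billing processor never hold a common key to join their records on:

```python
import hashlib
import hmac

def context_pseudonym(master_secret: bytes, context: str) -> str:
    """Derive a stable, context-specific identifier from a user-held secret."""
    return hmac.new(master_secret, context.encode(), hashlib.sha256).hexdigest()[:16]

secret = b"user-master-secret"  # never leaves the user's device
rental_id = context_pseudonym(secret, "movie-rental")  # all the rental service sees
billing_id = context_pseudonym(secret, "billing")      # all the payment processor sees
print(rental_id != billing_id)  # the two records share no linkable identifier
```

Each identifier is stable within its context (the rental service can still keep an account history), yet only the holder of the secret can correlate the two.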

All this requires a wee bit of a paradigm shift, you will say.  And you're right.  Until that happens, we don't have a lot of alternatives other than the Right to be Forgotten.  Especially, as described in the lawsuits above, when we have “chosen to withdraw.”

Incident closed – good support from janrain…

When I connected with janrain to resolve the issue described here, they were more than helpful. In fact, I have to quote them, because this is what companies should be like:

“We certainly test on ie 6,7,8,9, and would love to get your situation smoothed out.” 

The scary part came a little while later…

“The cause is likely to be configuration based on the browser.  Browser security settings should be set to default for testing. Temporarily disable all toolbars and add-ons. Clear caches and cookies (at least for your site domain and rpxnow.com).”

Oh yeah.  I've heard that one before.  So I was a bit skeptical. 

On the other hand, I happened to be in a crowd and asked some people nearby with Windows 7 to see what happened to them when they tried to log in.  It was one of those moments.  Everything worked perfectly for everyone but me… 

Gathering my courage, I pressed the dreaded configuration reset button as I had been told to do: 

Then I re-enabled all my add-ons as janrain suggested.  And… everything worked as advertised.

So there you go.  Possibly I did something to my IE config at some point – I do a lot of experimenting.  Conclusion: if any of you run into the same problem, please let me know.  Until then, let's consider the incident closed.

 

Six new authentication methods for Identityblog

Back in March 2006, when Information Cards were unknown and untested, it became obvious that the best way for me to understand the issues would be to put Information Cards onto Identityblog. 

I wrote the code in PHP, and a few people started trying out Information Cards.  Since I was being killed by spam at the time, I decided to try an experiment:  make it mandatory to use an Information Card to leave a comment.  It was worth a try.  More people might check out InfoCards.  And presto, my spam problems would go away.

So on March 18th 2006 I posted More hardy pioneers try out InfoCard, showing the first few people to give it all a whirl.

At first I thought my draconian “InfoCard-Only” approach would get a lot of people’s hackles up and only last a few weeks.  But over time more and more people seemed to be subscribing – probably because Identityblog was one of the few sites that actually used InfoCards in production.  And I never had spam again.

How many people joined using InfoCards?  Today I looked at my user list (see the screenshot below with PII fuzzed out).  The answer: 2958 people successfully subscribed and passed email verification.  There were then over 23,000 successful audited logins.  Not very many for a commercial site, but not bad for a technical blog.

Of course, as we all know, the powers that be at the large commercial sites have preferred the “NASCAR” approach of presenting a bunch of different buttons that redirect the user to, uh, something-or-other-that-can-be-phished, ahem, in spite of the privacy and security problems.  This part of the conversation will go on for some time, since these problems will become progressively more widespread as NASCAR gains popularity and the criminally inclined tune in to its potential as a gold mine… But that discussion is for another day. 

Meanwhile, I want to get my hands dirty and understand all the implications of the NASCAR-style approach.  So recently I subscribed to a nifty janrain service that offers a whole array of login methods.  I then integrated their stuff into Identityblog.  I promise, Scout's Honor, not to do man-in-the-middle attacks or scrape your credentials, even though I probably could if I were so inclined.

From now on, when you need to authenticate at Identityblog, you will see a NASCAR-style login symbol.  See, for example, the LOG IN option at the top of this page. 

If you are not logged in and you want to leave a comment you will see:
 

Click on the string of icons and you get something like this:

 

Because many people continue to use my site to try out Information Cards, I've supplemented the janrain widget experience with the Pamelaware Information Card Option (it was pretty easy to make them coexist, and it leaves me with at least one unphishable alternative).  This will also benefit people who don't like the idea of linking their identifiers all over the web.  I expect it will help researchers and students too.

One warning:  Janrain's otherwise polished implementation doesn't work properly with Internet Explorer – it leaves a spurious “Cross Domain Receiver Page” lurking on your desktop.  [Update – this was apparently my problem: see here]  Once I figure out how to contact them (not evident), I'll ask janrain if and when they're going to fix this.  Anyway, the system works – just a bit messy because you have to manually close the stranded empty page.  The problem doesn't appear in Firefox. 

It has already been a riot looking into the new technology and working through the implications.  I'll talk about this as we go forward.