Collusion takes effort; how much?

Eric Norman, from the University of Wisconsin, has a new blog called Fun with Metaphors and an independent spirit that is attractive and informed.  He weighs in on our recent discussion with Collusion takes effort:

Now don't get me wrong here. I'm all for protection of privacy. In fact, I have been credited by some as raising consciousness about 8 years ago (pre-Shibboleth) in the Internet2 community to the effect that privacy concerns need to be dealt with in the beginning and at a fundamental level instead of being grafted on later as an afterthought.

There have been recent discussions in the blogosphere about various parties colluding to invade someone's privacy. What I would like to see during such discussions is a more ecological and risk-assessing approach. I'll try to elaborate.

The other day, Kim Cameron analyzed sundry combinations of colluding parties and identity systems to find out what collusion is possible and what isn't. That's all well and good and useful. It answers questions about what's possible in a techno- and crypto- sense. However, I think there's more to the story.

The essence of the rest of the story is that collusion takes effort and motivation on the part of the conspirators. Such effort would act as a deterrent to the formation of such conspiracies and might even make them not worthwhile.

Just the fact that privacy violations would take collusion might be enough to inhibit them in some cases. This is a lightweight version of separation of duty — the nuclear launch scenario; make sure the decision to take action can't be unilateral.

In some of the cases, not much is said about how the parties that are involved in such a conspiracy would find each other. In the case of RPs colluding with each other, how would one of the RPs even know that there's another RP to conspire with and who the other RP is? That would involve a search and I don't think they could just consult Google. It would take effort.

Just today, Kaliya reported another example. A court has held that email is subject to protection under the Fourth Amendment and therefore a subpoena is required for collusion. That takes a lot of effort.

Anyway, the message here is that it is indeed useful to focus on just the technical and cryptographic possibilities. However, all that gets you is a yes/no answer about what's possible and what's not. Don't forget to also include the effort it would actually take to make such collusions happen.

First of all, I agree that the technical and crypto possibilities are not the whole story of linkability.  But they are a part of the story we do need to understand a lot more objectively than is currently the case.  Clearly this applies to technical people, but I think the same goes for policy makers.  Let's get to the point where the characteristics of the systems can be discussed without emotion or the bias of any one technology.

Now let's turn to one of Eric's main points: the effort required for conspirators to collude would act as a deterrent to the formation of such conspiracies.

First, part of what becomes evident is that with browser-based technologies like Liberty, WS-Federation and OpenID,  NO collusion is actually necessary for the identity provider to “see everything” – in the sense of all aspects of the identity exchange.  That in itself may limit use cases.   It also underlines the level of trust the user MUST place in such an IP.  At the very minimum, all the users of the system need to be made aware of how this works.  I'm not sure that has been happening…

Secondly, even if you blind the IP as to the identity of the RP, you clearly can't prevent the inverse, since the RP needs to know who has made the claims!  Even so,  I agree that this blinding represents something akin to “separation of duty”, making collusion a lot harder to get away with on a large scale.

So I really am trying to set up this continuum to allow for “risk assessment” and concrete understanding of different use cases and benefits.  In this regard Eric and I are in total agreement.

As a concrete example of such risk assessment, people responsible for privacy in government have pointed out to me that their systems are tightly connected, and are often run by entities who provide services across multiple departments.  They worry that in this case, collusion is very easy.  Put another way, the separation of duties is too fragile.

Assemble the audit logs and you collude.  No more to it than that.  This is why they see it as prudent to put in place a system with properties that make routine creation of super-dossiers more difficult.  And why we need to understand our continuum. 
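
To make the fragility concrete, here is a minimal sketch of what “assembling the audit logs” amounts to in code.  The log formats and field names are entirely hypothetical; the point is only that when records share a common citizen identifier, building a super-dossier is a trivial join:

```python
# Sketch: two departments keep separate audit logs, each keyed by the same
# citizen identifier. "Collusion" here is nothing more than a join.
# The log formats and field names are hypothetical.

health_log = [
    {"citizen_id": "A123", "event": "prescription filled"},
    {"citizen_id": "B456", "event": "clinic visit"},
]
tax_log = [
    {"citizen_id": "A123", "event": "late filing penalty"},
]

def build_dossier(*logs):
    """Merge any number of audit logs into per-citizen dossiers."""
    dossier = {}
    for log in logs:
        for entry in log:
            dossier.setdefault(entry["citizen_id"], []).append(entry["event"])
    return dossier

print(build_dossier(health_log, tax_log))
# {'A123': ['prescription filled', 'late filing penalty'], 'B456': ['clinic visit']}
```

When the systems share no common identifier, the join above simply has nothing to pivot on – which is the property the privacy people are asking for.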

Revealing patterns when there is no need to do so

Irving Reid of Controlled Flight into Terrain has come up with exactly the kind of use case I wanted to see when I was thinking about Paul Madsen's points:

Kim Cameron responds to Paul Madsen responding to Kim Cameron, and I wonder what it is about Canadians and identity…

But I have to admit that I have not personally been that interested in the use case of presenting “managed assertions” to amnesiac web sites.  In other words, I think the cases where you would want a managed identity provider for completely amnesiac interactions are fairly few and far between.  (If someone wants to turn me around in this regard I’m wide open.)

Shibboleth, in particular, has a very clear requirement for this use case. FERPA requires that educational institutions disclose the least possible information about students, staff and faculty to their partners. The example I heard, back in the early days of SAML, was of an institution that had a contract with an on-line case law research provider such that anyone affiliated with the law school at that institution could look up cases.

In this case, the “managed identity provider” (representing the educational institution) needs to assert that the person visiting right now is affiliated with the law school. However, the provider has no need to know anything more than that, and therefore the institution has a responsibility under FERPA to not give the provider any extra information. “The person looking up Case X right now is the same person who looked up Case Y last week” is one of the pieces of information the institution shouldn’t share with the provider.

Put this way it is obvious that it breaks the law of minimal disclosure to reveal that “the person looking up Case X right now is the same person who looked up Case Y last week” when there is no need to do so.

I initially didn't see that a pseudonymous link between Case X and Case Y would leak very much information.  But on reflection, in the competitive world of academic research, these linkages could benefit an observer by revealing patterns the observer would not otherwise be aware of.  He might not know whose research he was observing, but might nonetheless cobble a paper together faster than the original researcher, beating him in terms of publication date.

I'll include this example in discussing some of the collusion issues raised by various identity technologies.

Colluding with yourself

Further to Dave Kearns’ article, here is the complete text of Paul Madsen's comment:

Kim Cameron introduces a nice diagram into his series exploring linkability & correlation in different identity systems.

Kim categorizes correlation as either ‘IP sees all’, ‘RP/RP collusion’, or ‘RP/IP collusion’, depending on which two entities can ‘talk’ about the user.

A meaningful distinction for RP/RP collusion that Kim omits (at least in the diagram and in his discussion of X.509) is ‘temporal self-correlation’, i.e. that in which the same RP is able to correlate the same user's visits occurring over time.

Were an IDP to use transient (as opposed to persistent pseudonymous) identifiers within a SAML assertion each time it asserted to an RP, then not only would RPs be unable to collude with each other (based on that identifier), they'd be unable to collude with themselves (the past or future themselves).

I was working on a diagram comparable to Kim's, but got lost in the additional axis for representing time (e.g. ‘what the provider knows and when they learned it’ when considering collusion potential).

Separately, Kim will surely acknowledge at some point (or already has) that these identity systems, with their varying degrees of inhibiting correlation & subsequent collusion, will all be deployed in an environment that, by default, does not support the same degree of obfuscation. Not to say that designing identity systems to inhibit correlation isn't important & valuable for privacy, just that there is little point in deploying such a system without addressing the other vulnerabilities (like a masked bank robber writing his ‘hand over the money’ note on a monogrammed pad).

First, I love Paul's comment that he “got lost in the additional axis”, since there are many potential axes – some of which have taken me to the steps of purgatory.  Perhaps we can collect them into a joint set of diagrams since the various axes are interesting in different ways.

Second, I want everyone to understand that I do not see correlation as being something which is in itself bad.  It depends on the context, on what we are trying to achieve.  When writing my blog, I want everyone to know it is “me again”, for better or for worse.  But as I always say, I would like to be able to use my search engine and read my newspaper without fear that some profile of me, the real-world Kim Cameron, would be assembled and shared.

The one statement Paul makes that I don't agree with is this: 

Were an IDP to use transient (as opposed to persistent pseudonymous) identifiers within a SAML assertion each time it asserted to an RP, then not only would RPs be unable to collude with each other (based on that identifier), they'd be unable to collude with themselves (the past or future themselves).

I've been through this thinking myself.

Suppose we got rid of the user identifier completely, and just kept the assertion ID that identifies a given SAML token (must be unique across time and space – totally transient).  If the relying party received such a token and colluded with the identity provider, the assertionID could be used to tie the profile at the relying party to the person who authenticated and got the token in the first place.  So it doesn't really prevent linking once you try to handle the problem of collusion.
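
A minimal sketch may make this concrete.  The record structures are invented – this is not any particular SAML implementation – but it shows how even a never-reused assertion ID becomes a join key once identity provider and relying party compare notes:

```python
# Sketch: even a fully transient SAML assertion ID links the RP's profile
# back to the authenticated user, once IP and RP collude.
# Record structures are hypothetical.
import uuid

idp_log = {}      # assertion_id -> authenticated user
rp_profiles = {}  # assertion_id -> what the RP observed

def issue_token(user):
    assertion_id = str(uuid.uuid4())  # unique, never reused: "totally transient"
    idp_log[assertion_id] = user
    return assertion_id

def rp_transaction(assertion_id, activity):
    rp_profiles[assertion_id] = activity

token = issue_token("alice@example.edu")
rp_transaction(token, "looked up Case X")

# Collusion: the RP hands its profile keys to the IP, which resolves them.
linked = {idp_log[aid]: activity for aid, activity in rp_profiles.items()}
print(linked)  # {'alice@example.edu': 'looked up Case X'}
```

The transient identifier prevents RPs from linking with each other or with their past selves, but it does nothing against collusion with the party that minted it.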

No masks in the grocery store

Dave Kearns discusses the first part of my examination of the relation between identity technologies and linking, beginning with a reference to Paul Madsen:

Paul Madsen comments on Kim Cameron's first post in a series he's about to do on privacy and collusion in on-line identity-based transactions. He notes:

A meaningful distinction for RP/RP collusion that Kim omits (at least in the diagram and in his discussion of X.509) is ‘temporal self-correlation’, i.e. that in which the same RP is able to correlate the same user's visits occurring over time.

and concludes:

Not to say that designing identity systems to inhibit correlation isn't important & valuable for privacy, just that there is little point in deploying such a system without addressing the other vulnerabilities (like a masked bank robber writing his ‘hand over the money’ note on a monogrammed pad).

Paul makes some good points.  Rereading my post I tweaked it slightly to make it somewhat clearer that correlating the same user's visits occurring over time is one possible aspect of linking.

But I have to admit that I have not personally been that interested in the use case of presenting “managed assertions” to amnesiac web sites.  In other words, I think the cases where you would want a managed identity provider for completely amnesiac interactions are fairly few and far between.  (If someone wants to turn me around in this regard I'm wide open.)  To me the interesting use cases have been those of pseudonymous identity – sites that respond to you over time, but are not linked to a natural person.  This isn't to say that whatever architecture we come out with can simply ignore use cases people think are important.

Dave continues:

I'd like to add that Kim's posting seems to fall into what I call on-line fallacy #1 – the on-line experience must be better in some way than the “real world” experience, as defined by some non-consumer “expert”. This first surfaced for me in discussions about electronic voting (see Rock the Net Vote), where I concluded “The bottom line is that computerized voting machines – even those running Microsoft operating systems [Dave, you are too cruel! – Kim] – are more secure and more reliable than any other ‘secret ballot’ vote tabulation method we've used in the past.”

When I re-visit a store, I expect to be recognized. I hope that the clerk will remember me and my preferences (and not have to ask “plastic or paper?” every single blasted time!). Customers like to be recognized when they return to the store. We appreciate it when we go to the saloon where “everybody knows your name” and the bartender presents you with a glass of “the usual” without you having to ask. And there is nothing wrong with that! It's what most people want. Fallacy #2 is that most Jeremiahs (those weeping, wailing, and tooth-gnashing doomsayers who wish to stop technology in its tracks) think that what they want is what everyone should want, and would want if the hoi polloi were only educated enough. (And people think I'm elitist! 🙂)

I do wish that all those “anonymity advocates” would start trying to anonymize themselves in the physical world, too. So here's a test – next time you need to visit your bank, wear a mask. Be anonymous. But tell your lawyer to stand by the phone…

Dave, I think you are really bringing up an important issue here.  But beyond the following brief comment, I would like to refrain from the discussion until I finish the technical exploration.  I ask you to go with me on the idea that there are cases where you want to be treated like you are in your local pub, and there are cases where you don't.  The whole world is not a pub – as much as that might have some advantages, like beer.

In the physical world we do leave impressions of the kind you describe.  But in the digital world they can all be assembled and integrated automatically and communicated intercontinentally to forces unknown to you in a way that is just impossible in the physical world.  There is absolutely no precedent for digital physics.  We need to temper your proposed fallacies with this reality.

I'm trying to do a dispassionate examination of how the different identity technologies relate to linking, without making value judgements about use cases.

That done, let's see if we can agree on some of the digital physics versus physical reality issues.

News on the Australian “Access Card”

Here is a report from The Australian about the issues surrounding Australia's Human Services Access Card.  Some of the key points: 

“By this time next year, the federal Government hopes to be interviewing and photographing 35,000 Australians each day to create the nation's first ID databank. Biometric photos, matched with names, addresses, dates of birth, signatures, sex, social security status and children's details, would be loaded into a new centralised database. Welfare bureaucrats, ASIO, the Australian Federal Police and possibly even the Australian Taxation Office would have some form of access to the unprecedented collection of identity data.

“Within three years, all Australians seeking benefits such as Medicare, pensions, childcare subsidies, family payments, unemployment or disability allowances – about 16.5 million people – would have joined the databank. They would be given a photographic access card to prove who they are and show their eligibility for social security.

“This week, however, the billion-dollar project hit a bump when Human Services Minister Chris Ellison revealed that legislation due to go before federal Parliament this month had been delayed…

“How will Australians’ privacy be protected? How will the database and cards be kept secure? Who can see information on the card? What identity documents will Australians need to acquire a card, and what will happen to the estimated 600,000 people without a birth certificate, passport or driver's licence?

“The Government's mantra is that this is not an ID card because it does not have to be carried, but users will have to show it to prove their identity when claiming welfare benefits…

“The Government claims the new system will stem between $1.6 billion and $3 billion in welfare fraud over the next decade…

“A key Government adviser, Allan Fels – a former chairman of the Australian Competition and Consumer Commission and now head of the Government's Access Card Consumer and Privacy Taskforce – is at loggerheads with Medicare, Centrelink and the AFP, who all want the new card to display the user's identification number, photograph and signature…

“The photo would be stored in a central database, as well as in a microchip that could be read by 50,000 terminals in government offices, doctors’ surgeries and pharmacies…

“Despite his official role as the citizens’ watchdog, Fels still has not seen the draft bill…

“‘The law should be specific about what is on the card, in the chip and in the database,’ he says. ‘If anyone in future wants to change that they would have to do nothing less than get an act of parliament through. We don't want a situation where, just by administrative decisions, changes can be made…’

“‘There will be no mega-database created that will record a customer's dealings with different agencies,’ the minister [Ellison] told the conference…

“Cardholders may be able to include sensitive personal information – such as their blood type, emergency contacts, allergies or illnesses such as AIDS or epilepsy – in the one-third of the microchip space that will be reserved for personal use. It is not yet clear who would have access to this private zone.

“Hansard transcripts of Senate committee hearings into the access card legislation reveal that police, spies and perhaps even the taxman will be able to glean details from the new database. The Department of Human Services admits the AFP will be able to obtain and use information from the databank and card chip to respond to threats of killing or injury, to identify disaster victims, investigate missing persons, or to ‘enforce criminal law or for the protection of the public revenue’.

“Australia's super-secretive spy agency, the Defence Signals Directorate, will test security for the new access card system…

“The Australian Privacy Foundation's no-ID-card campaign director, Anna Johnston, fears future governments could “misuse and abuse” the biometric databank…

(Full story…)

ID Cards can be deployed in ways that increase, rather than decrease, the privacy of citizens, while still achieving the goals of fraud reduction.  It's a matter of taking advantage of new card and crypto technologies.  My view is that politicians would be well advised to fund such approaches rather than massive centralized databases.

As for the Defence Signals Directorate's access to identity data, what has this got to do with databases offering generalized access to every curious official?  You would think they were without other means.

More on the iTunes approach to privacy

Reading more about Apple's decision to insert users’ names and email addresses in the songs they download from iTunes, I stumbled across a Macworld article on iTunes 6.0.2 where Rob Griffiths described the store's approach to capturing user preferences as “spyware”.

I blogged about Rob's piece, but it turns out it was 18 months old, and Apple had quickly published a fix to the “phone-home without user permission” issue.

Since I don't want to beat a dead horse, and Apple showed the right spirit in fixing things, I took that post down within a couple of hours (leaving it here for anyone who wonders what it said). 

So now, with a better understanding of the context, I can get on with thinking about what it means for Apple to insert our names and email addresses into the music files we download – again without telling us.

First I have to thank David Waite for pointing out that the original profiling issue had been resolved:

Kim, [the Macworld] article is almost 18 months old.  Apple quickly released a newer version of iTunes which ‘fixed’ this issue – the mini store is disabled by default, and today when you select to ‘Show MiniStore’ it displays:

“The iTunes MiniStore helps you discover new music and video right from your iTunes Library. As you select tracks or videos in your Library, information about your selections are sent to Apple and the MiniStore will display related songs, artists, or videos. Apple does not keep any information related to the contents of your iTunes Library.

Would you like to turn on the MiniStore now?”

The interesting thing about the more recent debacle about Apple including your name and email address in the songs you buy from their store is that they have done this since Day 1. It's only after people thought Apple selling music with no DRM was too good to be true that the current stink over it started.

It's interesting to understand the history here.  I would have thought that in light of their previous experience, Apple would have been very up front about the fact that they are embedding your name and email address in the files they give you.  After all, it is PII, and I would think it would require your knowledge and approval. 

I wonder what the Europeans will make of this?

iTunes and Identity-Based Digital Rights Management

A fascinating posting by Randy Picker at the University of Chicago Law School Faculty Blog:

Over the last week, it has become clear that Apple is embedding some identifying information in songs purchased from iTunes, including the name of the customer and his or her e-mail address. This has raised the ire of consumer advocates, including the Electronic Frontier Foundation, which addressed this again yesterday.

Last year, I published a paper entitled Mistrust-Based Digital Rights Management (online preprint available here). In that paper, I argued that as we switched from content products such as CDs and DVDs to content services such as iTunes, Google Video and YouTube, we would embrace identity-based digital rights management. This is exactly what we are seeing from iTunes. How should we assess identity-based DRM?

Take a step backwards. As long as I keep my songs to myself and don’t share them, the embedded information shouldn’t matter. The information may facilitate interactions between Apple and its customers and might make it easier to verify whether a particular song was purchased from iTunes, but this doesn’t seem to be the central point of embedding identity in the songs.

Instead, identity matters if I share the song with someone else. Identity travels with the content. If I know that and care, I will be less likely to share the content indiscriminately over p2p networks. Why should I care? It depends on what happens with the embedded information. One use would make it possible for Apple to identify who was sharing content on p2p networks. Having traced content to its purchaser, Apple might choose to drop that person as a customer.

But Apple could do this without embedding the information in the clear. As Fred von Lohmann asked in his post on the EFF blog, why embed identity in the clear rather than as encrypted data? After all, if Apple intends to scour p2p networks, it could do so just as easily looking for encrypted identities.
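
To illustrate the alternative Fred von Lohmann is pointing at, here is a sketch using a keyed hash in place of the cleartext identity.  None of this reflects Apple's actual file format; the key, fields and helper function are made up for illustration:

```python
# Sketch: embed an opaque tag rather than the identity itself. Only the
# holder of the key can map a tag back to a customer; a third party who
# harvests it from a shared file learns nothing usable.
import hmac
import hashlib

STORE_KEY = b"secret-held-only-by-the-store"  # hypothetical store-side key

def purchase_tag(customer_email: str) -> str:
    """Opaque per-customer tag derived from the identity under the store's key."""
    return hmac.new(STORE_KEY, customer_email.encode(), hashlib.sha256).hexdigest()

tag = purchase_tag("alice@example.com")

# The store can still scour p2p networks for tags it can resolve...
purchasers = {purchase_tag(e): e for e in ["alice@example.com", "bob@example.com"]}
print(purchasers[tag])  # alice@example.com

# ...but the email address itself never appears in the clear in the file.
print("alice@example.com" in tag)  # False
```

This supports first-party detection just as well as cleartext identity; what it removes is exactly the third-party punishment channel Randy describes next.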

Apple might have a different strategy, one that relies on third-party sanctions, and that strategy would require actual identities. Suppose Apple posted the following notice on iTunes:

“Songs downloaded from iTunes are not to be shared with strangers. We have embedded your name and email address into the songs. Our best guess is that if you share iTunes songs on p2p networks, your name and email will be harvested from those songs and you will receive an extra 10 spam emails per day from third parties.”

Encrypted information works if Apple is doing all of the detection. It would even work, as I suggested in my paper, if Apple relied on third parties to do the detection by turning in p2p uploaders to Apple. We could run that system with encrypted information. All that is required is that the rat knows that he is turning in someone; he doesn’t need to know who that person is exactly.

But a third-party punishment strategy would probably be implemented using actual identity. The spammer who harvests the email address inflicts the penalty for uploading, not Apple itself. For Apple to drop out of the punishment business, it needs to hand off identity. Obviously, extra spam is just one possible cost for disclosing names and emails; other costs would further reduce the incentive to upload.

Disclosing identity is a clumsy tool. It doesn’t scale very well. It will work most powerfully against the casual uploader. It offers no (marginal) deterrence against someone who would upload lots of songs anyway. My mistrust-based scheme (described in the paper) might work better in those circumstances.

So far, Apple doesn’t seem to be saying much about what it is doing. It needs to be careful. As the Sony BMG fiasco—also discussed in the paper—emphasizes, content owners may not get that many opportunities to establish technological protection schemes. Each one they get wrong makes it that much harder to try another scheme later, given the adverse public relations fallout. As I suggest above, Apple may have a legitimate strategy for disclosing identity in the clear. It will be interesting to see what Apple says next.

I haven't read Randy's paper yet but will do so now.

Linkage and identification

Inspired by some of Ben Laurie's recent postings, I want to continue exploring the issues of privacy and linkability (see related pieces here and here). 

I have explained that CardSpace is a way of selecting and transferring a relevant digital identity – not a crypto system; and that the privacy characteristics involved depend on the nature of the transaction and the identity provider being used within CardSpace – not on CardSpace itself.   I ended my last piece this way:

The question now becomes that of how identity providers behave.  Given that suddenly they have no visibility onto the relying party, is linkability still possible?

But before zeroing in on specific technologies, I want to drill into two issues.  First is the meaning of “identification”; and second, the meaning of “linkability” and its related concept of “traceability”.  

Having done this will allow us to describe different types of linkage, and set up our look at how different cryptographic approaches and transactional architectures relate to them.

Identification 

There has been much discussion of identification (which, for those new to this world, is not at all the same as digital identity).  I would like to take up the definitions used in the EU Data Protection Directive, which have been nicely summarized here, but add a few refinements.  First, we need to broaden the definition of “indirect identification” by dropping the requirement for unique attributes – as long as you end up with unambiguous identification.  Second, we need to distinguish between identification as a technical phenomenon and personal identification.

This leads to the following taxonomy:

  • Personal data:
    •  any piece of information regarding an identified or identifiable natural person.
  • Direct Personal Identification:
    • establishing that an entity is a specific natural person through use of basic personal data (e.g., name, address, etc.), plus a personal number, a widely known pseudo-identity, a biometric characteristic such as a fingerprint, PD, etc.
  • Indirect Personal Identification:
    • establishing that an entity is a specific natural person through other characteristics or attributes or a combination of both – in other words, to assemble “sufficiently identifying” information
  • Personal Non-Identification:
    • assumed if the amount and the nature of the indirectly identifying data are such that identification of the individual as a natural person is only possible with the application of disproportionate effort, or through the assistance of a third party outside the power and authority of the person responsible… 

Translating to the vocabulary we often use in the software industry, direct personal identification is done through a unique personal identifier assigned to a natural person.  Indirect personal identification occurs when enough claims are released – unique or not – that linkage to a natural person can be accomplished.  If linkage to a natural person is not possible, you have personal non-identification.  We have added the word “personal”  to each of these definitions so we could withstand the paradox that when pseudonyms are used, unique identifiers may in fact lead to personal non-identification… 
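
A toy example may help show how indirect personal identification works: none of the released claims below is unique on its own, yet in combination they single out one person.  The population data is invented:

```python
# Sketch: non-unique claims ("quasi-identifiers") can still amount to
# "sufficiently identifying" information once combined.
# The population is invented for illustration.

population = [
    {"name": "Alice", "postcode": "53703", "birth_year": 1970, "sex": "F"},
    {"name": "Bob",   "postcode": "53703", "birth_year": 1970, "sex": "M"},
    {"name": "Carol", "postcode": "53704", "birth_year": 1970, "sex": "F"},
]

def matches(person, claims):
    return all(person[k] == v for k, v in claims.items())

def identification_level(claims):
    """How many people in the population fit the released claims?"""
    hits = [p for p in population if matches(p, claims)]
    if len(hits) == 1:
        return "indirect personal identification"
    return "non-identification"

print(identification_level({"birth_year": 1970}))               # non-identification
print(identification_level({"postcode": "53703", "sex": "F"}))  # indirect personal identification
```

Each claim on its own matches several people; together, postcode and sex leave only one candidate – linkage to a natural person has been accomplished without any unique identifier.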

The notion of “disproportionate effort” is an important one.  The basic idea is useful, with the proviso that when one controls computerized systems end-to-end one may accomplish very complicated tasks,  computations and correlations very easily – and this does not in itself constitute “disproportionate effort”.

Linkability

If you search for “linkability”, you will find that about half the hits refer to the characteristics that make people want to link to your web site.  That's NOT what's being discussed here.

Instead, we're talking about being able to link one transaction to another.

The first time I heard the word used this way was in reference to the E-Cash systems of the eighties.  With physical cash, you can walk into a store and buy something with one coin, later buy something else with another coin, and be assured there is no linkage between the two transactions that is caused by the coins themselves. 

This quality is hard to achieve with electronic payments.  Think of how a credit card or debit card or bank account works.  Use the same credit card for two transactions and you create an electronic trail that connects them together.
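
The difference can be sketched in a few lines.  This is purely illustrative – real e-cash schemes use blind signatures rather than random tokens – but it shows why a reused payment identifier links transactions all by itself:

```python
# Sketch: a reused card number links transactions on its own; a fresh
# per-transaction token (standing in for a cash-like e-coin) does not.
import secrets

def pay_by_card(card_number, item):
    return {"payer_token": card_number, "item": item}  # same token every time

def pay_by_coin(item):
    return {"payer_token": secrets.token_hex(8), "item": item}  # fresh each time

t1 = pay_by_card("4111-1111", "book")
t2 = pay_by_card("4111-1111", "medicine")
print(t1["payer_token"] == t2["payer_token"])  # True: the purchases are linked

c1 = pay_by_coin("book")
c2 = pay_by_coin("medicine")
print(c1["payer_token"] == c2["payer_token"])  # False: nothing ties them together
```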

E-Cash was proposed as a means of getting characteristics similar to those of the physical world when dealing with electronic transactions.  Non-linkability was the concept introduced to describe this.  Over time it has become a key concept of privacy research, which models all identity transactions as involving similar basic issues.

Linkability is closely related to traceability.  By traceability, people mean the ability to follow a transaction through all its phases by collecting transaction information and having some way of identifying the transaction payload as it moves through the system.

Traceability is often explicitly sought.  For example, with credit card purchases, there is a transaction identifier which ties the same event together across the computer systems of the participating banks, clearing house and merchant.  This is certainly considered “a feature.”  There are other, subtler, sometimes unintended, ways of achieving traceability (timestamps and the like). 

Once you can link two transactions, many different outcomes may result.  Two transactions conveying direct personal identification might be linked.  Or, a transaction initially characterized by personal non-identification may suddenly become subject to indirect personal identification.

To further facilitate the discussion, I think we should distinguish various types of linking:

  • Intra-transaction linking is the product of traceability, and provides visibility between the claims issuer, the user presenting the claims, and the relying party  (for example, credit card transaction number).
  • Single-site transaction linking associates a number of transactions at a single site with a data subject.  The phrase “data subject” is used to clarify that no linking is implied between the transactions and any “natural person”.
  • Multi-site transaction linking associates linked transactions at one site with those at another site.
  • Natural person linking associates a data subject with a natural person.
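To make these distinctions concrete, here is a minimal sketch, with entirely hypothetical data and names, of how transactions first link to a pseudonymous data subject at one site, then across sites, and finally to a natural person:

```python
# Hypothetical transactions at two sites, each tagged with a site-local pseudonym.
site_a = [{"subject": "anon-17", "action": "login"},
          {"subject": "anon-17", "action": "purchase"}]
site_b = [{"subject": "user-9", "action": "download"}]

def single_site_link(site):
    """Single-site transaction linking: group one site's transactions by data subject."""
    linked = {}
    for txn in site:
        linked.setdefault(txn["subject"], []).append(txn["action"])
    return linked

# Multi-site transaction linking: some out-of-band correlation (a shared cookie,
# a timestamp pattern, collusion between sites) asserts two pseudonyms are the
# same data subject.
correlation = {("anon-17", "user-9")}

# Natural person linking: a single identifying record collapses the pseudonym
# into a real identity -- and every transaction linked to it comes along.
identification = {"anon-17": "Alice Example"}

profile = single_site_link(site_a)["anon-17"] + single_site_link(site_b)["user-9"]
print(identification["anon-17"], profile)
```

The sketch shows why the stages compound: each linking step is harmless-looking on its own, but once natural person linking occurs, everything reachable through the earlier links becomes personally identifying at once.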

Next time I will use these ideas to help explain how specific crypto systems and protocol approaches impact privacy.

Denial mobs & the Cyberwar on Estonia

Ross Mayfield of Socialtext writes more explicitly about the same possible social-technical phenomenon I hinted at in my recent piece on Cyber-attack against Estonia:

The latest thread of the Cyberwar attack against Estonia, as covered in the NY Times, an interview in CNET with an expert from Arbor Networks and a post by Kim Cameron, raises an interesting question.  It is unlikely that the Russian government can be directly linked to the massive, coordinated and sophisticated denial of service attack on Estonia. It is also possible that such an attack could self-organize under the right conditions.  Is a large part of our future dealing with hacktivists as denial mobs?

Given the right conditions that make a central resource a target, a decentralized attack could be decentralized in its coordination as well.  Estonia may be the first nation state to be attacked at the scale of war, but it isn't just nations that are under threat.  The largest bank in Estonia, in one of the top markets for e-banking, has losses in excess of $1M.  A relatively small amount, but the overall economic cost is far from known.

If a multinational corporation did something to spark widespread outrage, such an attack could emerge against it as a net-dependent institution.  Then we would be asking ourselves if the attack was economic warfare from a nation or terrorist organization.  But it could also be a lesser, and illegal, form of grassroots activism.  None of this is particularly new, at least in concept.

But what is new are the tools, ones that cut both ways, for easy group-forming and conversation.

Russian cyber-attacks on Estonia

Here is a report from the Guardian on what it calls the first cyber assault on a state. 

Whether it's the first or not, this type of attack is something we knew was inevitable, something destined to become a standard characteristic of political conflict.

I came across the report while browsing a must-read new identity site called blindside (more on that later…).  Here are some excerpts from the Guardian's piece:

A three-week wave of massive cyber-attacks on the small Baltic country of Estonia, the first known incidence of such an assault on a state, is causing alarm across the western alliance, with Nato urgently examining the offensive and its implications.

While Russia and Estonia are embroiled in their worst dispute since the collapse of the Soviet Union, a row that erupted at the end of last month over the Estonians’ removal of the Bronze Soldier Soviet war memorial in central Tallinn, the country has been subjected to a barrage of cyber warfare, disabling the websites of government ministries, political parties, newspapers, banks, and companies.

Nato has dispatched some of its top cyber-terrorism experts to Tallinn to investigate and to help the Estonians beef up their electronic defences.

“This is an operational security issue, something we're taking very seriously,” said an official at Nato headquarters in Brussels. “It goes to the heart of the alliance's modus operandi.”

“Frankly it is clear that what happened in Estonia in the cyber-attacks is not acceptable and a very serious disturbance,” said a senior EU official…

“Not a single Nato defence minister would define a cyber-attack as a clear military action at present. However, this matter needs to be resolved in the near future…”

Estonia, a country of 1.4 million people, including a large ethnic Russian minority, is one of the most wired societies in Europe and a pioneer in the development of “e-government”. Being highly dependent on computers, it is also highly vulnerable to cyber-attack.

It is fascinating to think about how this kind of attack could be resisted:

With their reputation for electronic prowess, the Estonians have been quick to marshal their defences, mainly by closing down the sites under attack to foreign internet addresses, in order to try to keep them accessible to domestic users…
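The defence described here, closing the sites to foreign internet addresses while keeping them reachable domestically, amounts to filtering requests by source network. A toy sketch of the idea, using made-up prefixes and Python's standard ipaddress module, might look like this:

```python
import ipaddress

# Hypothetical set of domestic network prefixes to keep serving during an attack.
# A real deployment would use the country's actual allocated address blocks.
DOMESTIC_PREFIXES = [ipaddress.ip_network("192.0.2.0/24"),
                     ipaddress.ip_network("198.51.100.0/24")]

def allow(source_ip: str) -> bool:
    """Admit a request only if its source address falls inside a domestic prefix."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in DOMESTIC_PREFIXES)

print(allow("192.0.2.7"))      # a domestic address: served
print(allow("203.0.113.50"))   # a foreign address: dropped during the attack
```

The trade-off is obvious: the sites survive for domestic users but effectively go dark to the rest of the world, which is itself a partial victory for the attacker.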

Attacks have apparently been launched from all over the world:

The crisis unleashed a wave of so-called DDoS, or Distributed Denial of Service, attacks, where websites are suddenly swamped by tens of thousands of visits, jamming and disabling them by overcrowding the bandwidths for the servers running the sites…

The attacks have been pouring in from all over the world, but Estonian officials and computer security experts say that, particularly in the early phase, some attackers were identified by their internet addresses – many of which were Russian, and some of which were from Russian state institutions…

“We have been lucky to survive this,” said Mikko Maddis, Estonia's defence ministry spokesman. “People started to fight a cyber-war against it right away. Ways were found to eliminate the attacker.”

I don't know enough about denial of service attacks to know how difficult it is to trace them after the fact.  But presumably, since the attacker has no need to receive responses for a DoS to succeed, he can spoof his source address with no problem.  That can't make things any easier.

Estonian officials say that one of the masterminds of the cyber-campaign, identified from his online name, is connected to the Russian security service. A 19-year-old was arrested in Tallinn at the weekend for his alleged involvement…

Expert opinion is divided on whether the identity of the cyber-warriors can be ascertained properly…

(A) Nato official familiar with the experts’ work said it was easy for them, with other organisations and internet providers, to track, trace, and identify the attackers.

But Mikko Hyppoenen, a Finnish expert, told the Helsingin Sanomat newspaper that it would be difficult to prove the Russian state's responsibility, and that the Kremlin could inflict much more serious cyber-damage if it chose to.  (More here…)

There was huge loss of life and bitterness between Russia and Estonia during the second world war, and there are still nationalist forces within Russia who would see this statue as symbolic of that historical reality.  It is perhaps not impossible that the DoS was mounted by individuals with those leanings rather than being government sponsored.  Someone with a clear target in mind and the right technical collaborators, who could muster bottom-up participation by thousands of sympathizers, could likely put this kind of attack in place almost as quickly as a nation state.