Jon's piece channeled below, Stephen O'Grady's comments at RedMonk and Tim O'Reilly's Blogger's Code of Conduct all say important things about the horrifying Kathy Sierra situation. I agree with everyone that reputation is important, just as it is in the physical world. But I have a fair bit of trouble with some of the technical thinking involved.
I agree we should be responsible for everything that appears on our sites over which we have control. And I agree that we should take all reasonable steps to ensure we control our systems as effectively as we can. But I think it is important for everyone to understand that our starting point must be that every system can be breached. Without such a point of departure, we will see further proliferation of Pollyannish systems that, as likely as not, end in regret.
Once you understand the possibility of breach, you can calculate the associated risks, and build the technology that has the greatest chance of being safe. You can't do this if you don't understand the risks. In this sense, all you can do is manage your risk.
When I first set up my blog to accept Information Cards, it prompted a number of people to try their hand at breaking in. They were unable to compromise the InfoCard system, but guess what? There was a security flaw in WordPress 2.0.1 that was exploited to post something in my name.
By what logic was I responsible for it? Because I chose to use WordPress – along with the other 900,000 people who had downloaded it and were thus open to this vulnerability?
I guess, by this logic, I would also be responsible for any issues related to problems in the Linux kernel operating underneath my blog, and for potential bugs in MySQL and PHP. Not to mention any improper behavior by those working at my hosting company or ISP.
I'm feeling much better now.
So let's move on to the question of non-repudiation. There is no such thing as a provably correct system of any significant size. So there is no such thing as non-repudiation in an end-to-end sense. The fact that this term emerged from the world of PKI is yet another example of its failure to grasp various aspects of reality.
There is no way to prove that a key has not been compromised – even if a fingerprint or other biometric is part of the equation. The sensors can be compromised, and the biometrics are publicly available information, not secrets.
I'm mystified by people who think cryptography can work “in reverse”. It can't. You can prove that someone has a key. You cannot prove that someone doesn't have a key. People who don't accept this belong in the ranks of those who believe in perpetual motion machines.
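The asymmetry can be made concrete with a toy challenge-response exchange (a hypothetical sketch, not any particular deployed protocol; all function names here are my own). A correct response demonstrates that the responder holds the key. Nothing in the exchange, or in any exchange, can demonstrate that nobody else holds a copy of it:

```python
import hashlib
import hmac
import secrets

def issue_challenge() -> bytes:
    """Verifier sends a fresh random nonce so old responses can't be replayed."""
    return secrets.token_bytes(32)

def prove_possession(key: bytes, challenge: bytes) -> bytes:
    """Prover returns an HMAC over the challenge; only a key holder can compute it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    """A matching response proves the prover HAS the key.
    No response, from anyone, can ever prove a key has NOT leaked."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

If an attacker has silently copied the key, their responses verify just as well as the legitimate owner's, which is exactly why "non-repudiation" cannot be established cryptographically.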
To understand security, we have to leave the nice comfortable world of certainties and embrace uncertainty. We have to think in terms of probability and risk. We need structured ways to assess risk. And we then have to ask ourselves how to reduce risk.
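One simple structured assessment, sketched here with entirely hypothetical numbers, is the classic expected-loss calculation: likelihood times impact. It never yields certainty, but it lets you compare mitigations on a common scale:

```python
def expected_loss(probability: float, impact: float) -> float:
    """Expected annual loss: likelihood of the event times its cost."""
    return probability * impact

# Hypothetical figures: a 2% chance per year of a breach costing $50,000.
baseline = expected_loss(0.02, 50_000)   # 1000.0 per year
# A control that halves the likelihood halves the expected loss.
mitigated = expected_loss(0.01, 50_000)  # 500.0 per year
```

The point is not the arithmetic but the habit: each defense is judged by how much risk it removes, not by a promise of invulnerability.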
Even though I can't prove no one has stolen my key, I can protect things a lot more effectively by using a key than by using no key!
Then I can use a key that is hard to steal rather than one that is easy to steal. I can put the lock in the hands of trustworthy people. I can choose NOT to store valuable things that I don't need.
And so, degree by degree, I can reduce my risk, and that of people around me.