Ernst Lopez Cordozo recently posted a very interesting comment about delegation issues:
“We always use delegation: if I use a piece of software on my machine to login to a site or application, it is that piece of software (e.g. the CardSpace client) that does the transaction, claiming it is authorized to do so.
“But if the software is compromised, it may misrepresent me or even somebody else. Is there a fundamental difference between software on my (owned? managed? checked?) machine and a machine under somebody else’s control?”
Ernst’s “We always use delegation” may be true, but it risks losing sight of important distinctions. Even so, it invites us to think about complex issues.
There are modules within one's operating system whose job is to convey our language and responses by sensing our behavior through a keyboard, mouse, or microphone, and transforming it into a stream of bits.
The sensors constitute a transducer performing a conversion of our behavior from the physical world to the digital one. Once converted, this behavior is communicated to service endpoints (local or distant) through what we’ve called “channels”.
There are lots of examples of similar transducers in more specialized realms – anywhere that benefit is to be had by converting physical being to electrical impulses or streams of bits. And there are more examples of this every day.
Consider the accelerator pedal in your car. Last century, it physically controlled gas flowing to your engine. Today, your car may not even use gas… If it does, there are more efficient ways of controlling fuel injection than with raw foot pressure… The “accelerator” has been transformed into a digital transducer. Directly or indirectly, it now generates a digital representation of how much you want to accelerate; this is fed into a computer as one of several inputs that actually regulate gas flow (or an electric engine).
Microphones make a more familiar example. Their role is to take sounds, which are physical vibrations, and convert them into micro currents that can then be harnessed directly – or digitized. So again, we have a transducer. This one feeds a channel representing the sound-field at the microphone.
I’m certain that Ernst would not argue that we “delegate” control of acceleration to the foot pedal in our car – the “foot-pedal-associated-components” constitute the transducer that conveys our intentions to engine control systems.
Similarly, no singer – recording through a microphone into a solid-state recording chain – would think of herself as “delegating” to the microphone… The microphone is just the transducer that converts the singer's voice from the physical realm to the electrical – hopefully introducing as little color as possible. The singer delegates to her manager or press representative – not to her microphone.
So I think we need to tease apart two classes of things that we want done when using computers. One is for our computers to serve as colorless transducers through which we can “become digital”, responding ourselves to digital events.
The other is for computers to “act in our stead” – analyze inputs, apply logic and policy (perhaps one day intuition?), and make decisions for us, or perform tasks, in the digital sphere.
Through the transducer systems, we are able to “dictate” onto post-physical media. The system which “takes dictation” performs no interpretation. It is colorless, a transcription from one medium to another. Dictation is the promulgation of “what I say”: stenography in the last century; digitization in this.
When we type at our keyboard and click with our mouse, we dictate to obedient digital scribes that convert our words and movements into the digital realm, rather than onto papyrus. These scribes differ from the executors and ambassadors and other reactive agents who are our “delegates”. There is in this sense a duality, with the transducer on one side and the delegate on the other.
Protection of the channel
No one ever worried about evil parties intercepting the audio stream when Maria Callas stood before a microphone. That is partly because the recording equipment was protected through physical security; and partly because there was no economic incentive to do so.
In the early days, computers were like that too. The primitive transducer comprised a rigid and inalterable terminal – if not a punch card reader – and the resulting signal travelled along actual WIRES under the complete control of those managing the system.
Over time the computing components became progressively more dissociated. Distances grew and wires began to disappear: ensuring the physical security of a distributed system is no longer possible. As computers evolved into a medium of exchange for business, the rewards for subverting them increased disproportionately.
In this environment, the question becomes one of how we know the “digital dictation” received from a transducer has not been altered on the way to the recipient. There is also a fundamental question about who gave the dictation in the first place.
In the digital realm, the only way to ensure integrity across component boundaries is through cryptography. One wants the dictation to be cryptographically protected – as close to the transducer as possible. The ideal answer is: “Protect the dictation in the transducer to ensure no computer process has altered it”. This is done by giving the transducer a key. Then we can have secure digital dictation.
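To make the idea concrete, here is a minimal sketch of keyed dictation: the transducer tags each event with a message authentication code so that any alteration between transducer and recipient is detectable. All names here are illustrative, and a symmetric HMAC stands in for whatever keying a real transducer would actually use.

```python
import hmac, hashlib, json

# Illustrative only: a secret provisioned into the transducer hardware.
DEVICE_KEY = b"secret-key-provisioned-into-the-transducer"

def sign_event(event: dict) -> dict:
    """Transducer side: emit the dictation event with an integrity tag."""
    payload = json.dumps(event, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def verify_event(signed: dict) -> bool:
    """Recipient side: recompute the tag and compare in constant time."""
    expected = hmac.new(DEVICE_KEY, signed["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["tag"])

signed = sign_event({"device": "keyboard", "keys": "hello"})
assert verify_event(signed)        # unaltered dictation verifies

tampered = dict(signed, payload=signed["payload"].replace("hello", "help!"))
assert not verify_event(tampered)  # any alteration is detected
```

A shared symmetric key means the recipient could forge tags itself; a real design would more likely give the transducer an asymmetric signing key so that only the transducer can produce valid dictation.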
Compromise of software
Ernst talks about software compromise. Clearly, other things being equal, the tighter the binding between a transducer (taken in a wide sense) and its cryptographic module, the less opportunity there is for attack on the dictation channel. Given digital physics, this equates to reduction of the software surface area. It is thus best to keep processes and components compartmentalized, with clear boundaries, clear identities.
This, in itself, is a compelling reason to partition activities: the transducer being one component; processes taking this channel as an input then having separate identities.
Going forward we can expect there will be widely available hardware-based Trusted Platform Modules capable of ensuring that only a trusted loader will run. Given this, one will see loaders that will only allow trusted transducers to run. If this is combined with sufficiently advanced virtualization, we can predict machines that will achieve entirely new levels of security.
Given the distinctions being presented here, I see CardSpace as a transducer capable of bringing the physical human user into the digital identity system. It provides a context for the senses and translates her actions and responses into underlying protocol activities. It aims at adding no color to her actions, and at taking no decisions on its own.
In this sense, it is not the same as a process performing delegation – and if we say it is, then we have started making “delegation” such a broad concept that it doesn’t say much at all. This lack of precision may not be the end of the world – perhaps we need new words all the way round. But in the meantime, I think separating transducers from entities that are proactive on our behalf is an indispensable precondition to achieving systems with enough isolation between components that they can be secure in the profound ways that will be required going forward.
2 thoughts on “Transducers and Delegation”
I agree with your analysis. And yes, it is difficult. Ten years ago the car of a well known Dutch opera singer caused a fatal accident while driving on the parking deck of the Amsterdam Arena. The singer, who was behind the wheel, successfully claimed that the accident was caused by his car’s cruise control, rather than his consumption of alcohol that night. I don’t make this up. Reality dovetails nicely with your examples.
Whether we use an innocent transducer or a possibly disobedient agent determines the deniability of the resulting actions.
Kim Cameron replies: Ernst, this is an absolutely incredible story.