Delegation requires multiple tokens

ID Maan comments:

Little puzzled with your views with respect to SAML:

“It is a lot cleaner for this scenario than the single-token designs such as SAML, proposed by Liberty, or the consequent “disappearing” of the user.”

As I understand it, WS-Trust is a token-agnostic protocol. Delegation, on the other hand, can essentially be considered a capability of the security token. So isn't WS-Trust in a way dependent on the security token's capabilities to provide for delegation? In other words, if you state that SAML is insufficient to solve the delegation problem, so is a WS-Trust protocol that uses SAML tokens?

No wonder he is puzzled.  I should have been clearer.  Let me try again.

We all agree the SAML token is a fine and good way of expressing sets of claims.

But beyond the token, there is the SAML protocol – one way of moving SAML tokens around. 

I think the SAML protocol suffers from having a single-token design.  Why?

I don't think delegation problems can be solved through a single token.  Once you are expressing the identities of both a user and a delegate, you need to be able to request and convey two (or more) tokens – in the sense of integral things from separate sources.  In the simplest case, one represents the user, one the delegate.

To be clear, I wasn't hitting on the SAML protocol in all its applications.  I was arguing that WS-Trust, which has the ability to move and request multiple tokens simultaneously and establish relationships between them, solves the delegation problem more cleanly from an architectural point of view. 
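To make the contrast concrete, here is a minimal sketch in Python. It is not the WS-Trust wire format, and all the keys and names are illustrative assumptions of mine. The point is simply that a delegation request carries two integral tokens from separate sources, plus the relationship between them:

```python
# Illustrative sketch only, not the WS-Trust wire format.
# It shows the shape of a two-token delegation: one token signed by the
# user's issuer, one by the delegate's, and a request that binds them.
import hashlib
import hmac
import json

def issue_token(issuer_key: bytes, claims: dict) -> dict:
    """An issuer signs a set of claims, producing an integral token."""
    body = json.dumps(claims, sort_keys=True).encode()
    return {"claims": claims,
            "sig": hmac.new(issuer_key, body, hashlib.sha256).hexdigest()}

# Two separate sources, two separate tokens.
user_idp_key = b"user-idp-secret"          # hypothetical issuer keys
delegate_idp_key = b"delegate-idp-secret"

user_token = issue_token(user_idp_key, {"subject": "user"})
delegate_token = issue_token(delegate_idp_key, {"subject": "delegate"})

# The request conveys both tokens and states the relationship between
# them; a single-token design has nowhere to put the second one.
delegation_request = {"on_behalf_of": user_token, "actor": delegate_token}
```

This corresponds very roughly to the way a WS-Trust token request can carry a second token (for instance via its OnBehalfOf element) alongside the requestor's own; the sketch is only meant to show why one signed blob cannot represent two independently attested parties.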

When SAML was being elaborated, before the user-centric identity wave, we saw the user as being represented by the portal service.  She had no independent existence.  So you didn't need multiple tokens.

Since Identity 2.0, all this has changed.

Transducers and Delegation

Ernst Lopez Cordozo recently posted a very interesting comment about delegation issues:

“We always use delegation: if I use a piece of software on my machine to login to a site or application, it is that piece of software (e.g. the CardSpace client) that does the transaction, claiming it is authorized to do so.

“But if the software is compromised, it may misrepresent me or even somebody else. Is there a fundamental difference between software on my (owned? managed? checked?) machine and a machine under somebody else’s control?”

Ernst’s “We always use delegation” may be true, but risks losing sight of important distinctions. Despite that, it invites us to think about complex issues.

Analog-to-Digital transducers

There are modules within one's operating system whose job it is to convey our language and responses by sensing our behavior through a keyboard, mouse, or microphone, and transforming it into a stream of bits.

The sensors constitute a transducer performing a conversion of our behavior from the physical to the digital worlds. Once converted, this behavior is communicated to service endpoints (local or distant) through what we’ve called “channels”.

There are lots of examples of similar transducers in more specialized realms – anywhere that benefit is to be had by converting physical being to electrical impulses or streams of bits. And there are more examples of this every day.

Consider the accelerator pedal in your car. Last century, it physically controlled gas flowing to your engine. Today, your car may not even use gas… If it does, there are more efficient ways of controlling fuel injection than with raw foot pressure… The “accelerator” has been transformed into a digital transducer. Directly or indirectly, it now generates a digital representation of how much you want to accelerate; this is fed into a computer as one of several inputs that actually regulate gas flow (or an electric engine).

Microphones make a more familiar example. Their role is to take sounds, which are physical vibrations, and convert them into micro currents that can then be harnessed directly – or digitized. So again, we have a transducer. This one feeds a channel representing the sound-field at the microphone.

I’m certain that Ernst would not argue that we “delegate” control of acceleration to the foot pedal in our car – the “foot-pedal-associated-components” constitute the transducer that conveys our intentions to engine control systems.

Similarly, no singer – recording through a microphone into a solid-state recording chain – would think of herself as “delegating” to the microphone… The microphone is just the transducer that converts the singer's voice from the physical realm to the electrical – hopefully introducing as little color as possible. The singer delegates to her manager or press representative – not to her microphone.

So I think we need to tease apart two classes of things that we want done when using computers. One is for our computers to serve as colorless transducers through which we can “become digital”, responding ourselves to digital events.

The other is for computers to “act in our stead” – analyze inputs, apply logic and policy (perhaps one day intuition?), and make decisions for us, or perform tasks, in the digital sphere.

Through the transducer systems, we are able to “dictate” onto post-physical media. The system which “takes dictation” performs no interpretation. It is colorless, a transcription from one medium to another. Dictation is the promulgation of “what I say”: stenography in the last century; digitization in this.

When we type at our keyboard and click with our mouse, we dictate to obedient digital scribes that convert our words and movements into the digital realm, rather than onto papyrus. These scribes differ from the executors and ambassadors and other reactive agents who are our “delegates”. There is in this sense a duality with the transducer on one side and the delegate on the other.

Protection of the channel

No one ever worried about evil parties intercepting the audio stream when Maria Callas stood before a microphone. That is partly because the recording equipment was protected through physical security; and partly because there was no economic incentive to do so.

In the early days, computers were like that too. The primitive transducer comprised a rigid and inalterable terminal – if not a punch card reader – and the resulting signal travelled along actual WIRES under the complete control of those managing the system.

Over time the computing components became progressively more dissociated. Distances grew and wires began to disappear: ensuring the physical security of a distributed system is no longer possible. As computers evolved into being a medium of exchange for business, the rewards for subverting them increased disproportionally.

In this environment, the question becomes one of how we know the “digital dictation” received from a transducer has not been altered on the way to the recipient. There is also a fundamental question about who gave the dictation in the first place.

In the digital realm, the only way to ensure integrity across component boundaries is through cryptography. One wants the dictation to be cryptographically protected – as close to the transducer as possible. The ideal answer is: “Protect the dictation in the transducer to ensure no computer process has altered it”. This is done by giving the transducer a key. Then we can have secure digital dictation.
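As a sketch of the idea (the key and function names here are my own assumptions, not a real device API), signing each chunk of dictation at the transducer lets any downstream recipient detect alteration:

```python
# Sketch: the transducer signs each chunk of "dictation" with its own key,
# so downstream software can detect any alteration in the channel.
# The key and function names are illustrative, not from a real device API.
import hashlib
import hmac

TRANSDUCER_KEY = b"key-embedded-in-the-transducer"  # hypothetical device key

def sign_dictation(chunk: bytes) -> tuple[bytes, str]:
    """Signing happens as close to the sensor as possible."""
    tag = hmac.new(TRANSDUCER_KEY, chunk, hashlib.sha256).hexdigest()
    return chunk, tag

def verify_dictation(chunk: bytes, tag: str) -> bool:
    """Any process that altered the chunk in transit invalidates the tag."""
    expected = hmac.new(TRANSDUCER_KEY, chunk, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

chunk, tag = sign_dictation(b"what I say")
assert verify_dictation(chunk, tag)                     # intact dictation
assert not verify_dictation(b"what I never said", tag)  # tampering detected
```

A real design would use an asymmetric key pair so the verifier never holds the transducer's secret; HMAC just keeps the sketch short.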

Compromise of software

Ernst talks about software compromise. Clearly, other things being equal, the tighter the binding between a transducer (taken in a wide sense) and its cryptographic module, the less opportunity there is for attack on the dictation channel. Given digital physics, this equates to reduction of the software surface area. It is thus best to keep processes and components compartmentalized, with clear boundaries, clear identities.

This, in itself, is a compelling reason to partition activities: the transducer being one component; processes taking this channel as an input then having separate identities.

Going forward we can expect there will be widely available hardware-based Trusted Platform Modules capable of ensuring that only a trusted loader will run. Given this, one will see loaders that will only allow trusted transducers to run. If this is combined with sufficiently advanced virtualization, we can predict machines that will achieve entirely new levels of security.


Given the distinctions being presented here, I see CardSpace as a transducer capable of bringing the physical human user into the digital identity system. It provides a context for the senses and translates her actions and responses into underlying protocol activities. It aims at adding no color to her actions, and at taking no decisions on its own. 

In this sense, it is not the same as a process performing delegation – and if we say it is, then we have started making “delegation” such a broad concept that it doesn’t say much at all. This lack of precision may not be the end of the world – perhaps we need new words all the way round. But in the meantime, I think separating transducers from entities that are proactive on our behalf is an indispensable precondition to achieving systems with enough isolation between components that they can be secure in the profound ways that will be required going forward.

My first i-names spam

 I've been using an I-name (it is here) for a couple of years now and have never received anything I considered spam.  It's been a great way for me to get feedback and input (even if I haven't always been able to respond in a very timely fashion due to the demands of my “day job”). 

But today, that period of initial innocence came to an end.  It seems that Mr. Gerg, below, has built a little contraption that makes it past 2idi's email verification process.  I'd say my friends Fen and Victor, who created the Eden in which I've been living, need now to add a Turing test to their system.

Meanwhile, the proposal made by Mr. Gerg is “too muchie”. 

If the search engines are smart enough to figure out this kind of goofie manipulation, why go to all this trouble? Just because you can?

As shown at right, Dane Carson's memey little Technorati applet calculates the value of Mr. Gerg's property as being zero, compared to the bizarre value it places on mine (if anyone wants to buy, please send check).  When I look into Google's page rank, Mr. Gerg's property is just a “5”, even though it has about 4700 links, so Google has figured out the links are to things of very low value.  Seems like we might be getting somewhere with reputation.

So you would wonder why he would spend his time building an engine that sends me i-name spam to do something that doesn't seem to be working in the first place. Anyway, if anyone wants to look at the pages he is referring to, you'll have to add the “p” to shopping that I removed from the URLs below – so as not to contribute any further links to his site or person.  I've also purposely misspelled his name.

Hello Kim Cameron,

My name is Alex Gerg and I am the manager of the project for

I have a proposal I would like to make. I have looked at your BLOG and think we can benefit from a partnership. Our site has more then 10,000,000 pages. Google, Yahoo and MSN each has already indexed over 300,000 pages with projected 2,000,000 pages in the next 2-3 month.
Google cached pages:

I would put your site’s text link to my site Your link will be placed in our Partners  section at the bottom of our site on every single page, over 10,000,000 pages.

In exchange we would like to ask you to put a text link in the footer or in other section on your web site.

I am open to any other suggestions you might have for partnership. Please fell free to ask any questions or offer other forms of partnership. I would appreciate your reply.

Alex Gerg

Ths message was sent via your 2idi I-Name Contact Service.
Sender Information:

Real Name: Alex Gerg

Doing my research on how many links he has on different systems, I noticed that he's also spammed the list at the Debian project.  I'll bet he's really going to pick up a whole lot of support there too…

Of course, maybe this is just a digital centrifuge intended to separate out the real suckers that he can then go after in some other way.

Bandits strike at BrainShare

Incredible news from Dale Olds’ VirtualSoul at Novell:

This week was Novell’s Brainshare conference. It’s a big deal for Novell folks and it’s a great event. It gives us a place to show off new technologies like the emerging Internet identity systems and some of the recent work that we have done on the Bandit team.

Our most significant demo this year was shown during the technology preview keynote on Friday. The whole series of demos is interesting — I especially liked some of the Linux desktop stuff — but if you want to just skip to the infocard stuff, it starts at about 40 minutes into the video.

For those who may want to know more detailed information about what the demo actually does, let me give some background information here:

There were 3 new open source components written by Bandits and made available this week:

  • A fully open source, cross platform identity selector service was contributed to Higgins. Written in C++, this Higgins ISS runs as a daemon (no UI) and provides core infocard selector service: it accesses multiple card stores, enumerates available cards, matches cards based on requested claims, and interacts with the appropriate STS to get a token. It is almost complete on support for personal cards, with an internal STS, etc. The real deal.
  • A UI process for the Higgins ISS. It is currently written in C#, runs on Mono, and leverages much of the management UI of the CASA component of Bandit.
  • A new OpenID context provider was contributed to Higgins. This context provider plugs into the Higgins IdAS and allows identity data to be accessed from any OpenID Provider. What this means is that, with no change to the Higgins STS code (since the STS uses IdAS), we could set up a demo such that infocards can be generated from any OpenID identity. In other words, using the Higgins STS and the new OpenID context provider, I can access any site that accepts infocards with my openID account.

So what Baber showed in the demo:

  1. A fully functional, native infocard selector running on the Mac.
  2. He accessed a shopping site with an infocard generated from an OpenID account. Put some things in the cart and logged out.
  3. Baber switched to a SUSE Linux Desktop machine. Fully functional infocard selector there as well. Accessed the same site with an OpenID infocard and see stuff in his cart from the Mac session.
  4. Goes to check out. The site asks for a card with different claims, needs a payment card.
  5. The Higgins Infocard selector supports multiple card stores. In this case Baber selects a credit card from a card store on his mobile phone via bluetooth.
  6. He authorizes a (hypothetical) payment and the online shopping site (the relying party) only gets his shipping address and an authorization code from the credit card.

It’s a simple demo, and easy to miss the number of technologies and interactions involved, but this is the kind of progress that we have been working towards for a long time.

The Bandits are happy and tired.

Hackers selling IDs for $14

This post is from David Evans, at The Progress Bar

Did you see the rejected Super Bowl commercials for GoDaddy? One particularly funny one was about two guys: one kept asking the other what his girlfriend's name was, then his mother's and his dog's. The guy would immediately purchase their names as domain names, to the other guy's frustration.

Why do I bring this up? Macworld writes about a Symantec report that says hackers are selling IDs and credit card numbers on the net.

U.S.-based credit cards with a card verification number were available for between US$1 to $6, while an identity — including a U.S. bank account, credit card, date of birth and government-issued identification number — was available for between $14 to $18.

Now it’s even easier to buy someone on the internet: only $18. Scary.

New Visual Studio Toolkit for CardSpace

If you use Visual Studio and are interested in CardSpace, you'll be interested in Christian Arnold's brand new “Visual Studio 2005 Toolbox for Windows CardSpace”.  It looks like it makes the task of CardSpace-enabling .NET 2.0 apps as easy as pie.  I'm out of the country now but can't wait to try it.

You can download the tools here.  Christian also runs what he calls a “little support forum”.

The ToolBox provides an easy way to use Windows CardSpace in your ASP.NET 2.0 web application to register and validate your users. It's also possible to use the controls to receive a SAML token and get the decrypted values of provided claims. The token-decrypting process is built based on the community sample.

The install process looks pretty straightforward – you just add the tools to your toolbox:


That adds two new controls to your Visual Studio 2005 ToolBox:

Here's a taste of how you use the CreateCardSpaceUserWizard Control:


You need to add a little configuration:

<cc1:CreateCardSpaceUserWizard ID="CreateCardSpaceUserWizard1" runat="server" BuildInRegistration="False" OnUserRegistered="CreateCardSpaceUserWizard1_UserRegistered1">

<cc1:IdentityClaim ClaimUri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/privatepersonalidentifier" />

<cc1:IdentityClaim ClaimUri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" />

</cc1:CreateCardSpaceUserWizard>
Christian explains that this causes the system to request the privatepersonalidentifier and the emailaddress of a new user via CardSpace or another Information Card identity selector.

He explains that by defining the emailaddress claim, the control will store the email address automatically, so you don't have to worry about it :-)

After registration the control will fire the UserRegistered event. The event args will tell you the result of the operation and the provided claims as a NameValueCollection.
Christian goes on to explain how to use the system with the default ASP.NET 2.0 Membership-Provider. 

Clearly, there are a great many sites built on this Membership-provider technology and the emergence of this toolkit in the identity ecosystem is a major event.

An example of delegation coupons

Even if the true meaning of uber-geek is “incomprehensible”, my last comment on the use of delegation in VRM was a real winner in terms of terseness. I was discussing Whit's notion that he wanted to give Amazon access to his behavior at Netflix, Powell's, and Last.fm – the goal being to improve Amazon's relationship with him by revealing more about himself as a complete person. So I'll try to unfold my thought process.

If you ponder the possible architectures that could be used, it becomes clear, as usual, that the identity aspects are the hardest. Let's build a little picture of the parties involved. Let's say the user (I know, I should be calling Whit a “customer”) shares his behavior with Amazon and Powell's. Now let's call some subset of his behavior at Powell's “BP”. Whit would like an outcome that would be modelled in the following diagram, assuming for the moment that U->A:BP just represents Amazon asking Powell's for the customer's relevant information.

But how does Powell's know that Whit really wants it to release information to Amazon, and not to an impostor? How does it know that the party which calls for information is really the same Amazon that Whit was dealing with? Why should Powell's ever take the privacy risk of releasing information to the wrong party? What would its liability be if it were to do so? Can it protect itself from this liability?

When I mentioned delegation, I meant that while the user is “behaving” at Amazon, it gives Amazon a “coupon” that says “User delegates to Amazon the right to see his Behavior at Powell's”. I represent this as U->A:BP, where:

  • U is the user;
  • A is Amazon;
  • P is Powell's; and
  • B is behavior

Amazon can now present this coupon to Powell's, along with cryptographic proof that it is the “A” in the coupon. By retaining this coupon and auditing any release, Powell's can indemnify itself against any accusation that it released information to the wrong party – and better still, actually defend the user's privacy.
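As a sketch of how such a coupon could work (the keys, names, and fields here are my own illustrative assumptions, not a proposed format):

```python
# Illustrative sketch of a delegation coupon U->A:BP.
# Keys, names, and fields are assumptions, not a real protocol.
import hashlib
import hmac
import json

USER_KEY = b"user-signing-key"  # hypothetical; a real system would use
                                # public-key signatures, not a shared secret

def make_coupon(user: str, delegate: str, scope: str) -> dict:
    """The user signs a statement delegating access to a scope of behavior."""
    body = {"user": user, "delegate": delegate, "scope": scope}
    payload = json.dumps(body, sort_keys=True).encode()
    return {"body": body,
            "sig": hmac.new(USER_KEY, payload, hashlib.sha256).hexdigest()}

def release_allowed(coupon: dict, presenter: str) -> bool:
    """The releasing party checks the signature AND that the presenter has
    proven it is the 'A' named in the coupon, then retains the coupon
    and audits the release."""
    payload = json.dumps(coupon["body"], sort_keys=True).encode()
    expected = hmac.new(USER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, coupon["sig"])
            and presenter == coupon["body"]["delegate"])

coupon = make_coupon("U", "A", "behavior-at-P")
assert release_allowed(coupon, "A")      # the delegate named in the coupon
assert not release_allowed(coupon, "X")  # anyone else is refused
```

The retained coupon is what lets Powell's demonstrate, after the fact, that it released information only to the party the user named.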

‘Breaking up is hard to do.’

I have left two of the most important questions for another time. First, is it really necessary (or advisable) for Powell's to know that Whit is sharing information with Amazon, rather than with “some legitimate party”? Second, how does Whit revoke the permission he has granted to Amazon if he decides the time has come for them to “break up”?

But without even opening those cans of worms, it should be evident that, for reasons of privacy, auditability and liability reduction, everyone's interests are served by making sure no service ever acts as an end user. In this example, Amazon continues to act as Amazon, and even if its access is one day anonymized, it would always be identified as “the user's delegate”. The approach contrasts starkly with current approaches – as spooky as they are kooky – in which users release their credentials on “good faith” and eventually, if enough secrets are shared, anyone can be anyone.

Note:   The notation above is my own – please propose a better one if it is just as simple…

SeaMonkeyRodeo on Amazon and VRM

Good description of Vendor Relationship Management (VRM) by Whit B. McNamara at seamonkeyrodeo (“karaoke mind control…”).  Seems like another place where user control and delegation are the right answer:

Kim Cameron, identity uber-geek, posted an enthusiastic endorsement of Amazon’s recommendation emails over the weekend.

I know what he means — I blogged about the very same positive experience with Amazon’s recommendations a couple of years ago, shortly after noting the inverse experience with eBay’s sad little attempts to send personalized email to me.

While I, like Kim, am still pretty happy with Amazon and continue to view their recommendations as useful (and not spam), my thinking about VRM has taken some of the luster off of this relationship with Amazon.

The problem isn’t anything that Amazon is doing — what they offer is already far better than what most of the market is doing; the problem is that my expectations have grown while Amazon’s capabilities appear to be fundamentally the same as they were two years ago. You see, I’d like to offer Amazon the chance to have an actual relationship with me, rather than a relationship with the incomplete model of me that they’ve built from the transactions that we have in common (I call that construction “Whit: Amazon Virtual Edition”).

Just taking the easy examples, real-world Whit leaves trails of data across the Internet that I’d be happy to share with Amazon, just to see what they could do with them. (With the explicit understanding that both the data and the decision whether or not to continue sharing it is mine, of course.)

I get at least five or six DVDs per month from Netflix, and tend to rate them after viewing. Amazon knows only that I don’t buy DVDs often at all. No recommendations for me, no opportunity to prey on my secret desire to own every episode of The Tick for Amazon.

While I buy a reasonable number of books through Amazon, the overwhelming majority of my book purchases are from Powells. Amazon knows nothing about them. No recommendations for me, and no opportunity to take business away from Powells for Amazon.

I buy some music from Amazon, but not a huge amount. Last.fm doesn’t know what I’ve bought, but it knows all about what I’ve been listening to. Amazon knows nothing about it. No recommendations for me, and no chance to take business away from eMusic, Apple, CD Baby, and a host of others for Amazon.

Now I know that I could work around this to some extent by using Amazon’s lists, wishlists, and what-have-you, but why should I? I’ve already created all of this information in a variety of places, why can’t I just use that information now, to make my own life easier? And if that means that Amazon gets the chance to make more money by knowing me better, where’s the harm? Isn’t that scenario better for everyone involved?

I know that this isn’t just Amazon’s problem: even if they make it possible for me to put data in, everyone else that I’ve mentioned needs to make it possible for me to get data out. But that’s the way I want these relationships to work. All this metadata I’m creating is mine. I should be able to actively and selectively share it with others. I should be able to offer vendors data that they can’t collect themselves, so that they can build a relationship with me, rather than a relationship with their transaction database.

And that right there is the “R” for one big piece of VRM.

I could give Amazon a “packet” of delegation coupons they could present to Netflix et al. in order to serve me better.

GoDaddy’s bad buffness day

More on buffness from Jon Udell:

Last week Kim Cameron wrote about a problem at Flickr that resulted in wrong photos being displayed. Flickr’s acknowledgement and explanation of the problem earned this commendation from Axel Eble, which Kim cited:

Folks, this is one of the best pieces of crisis management I have ever seen! It states the problem; it states the solution; it takes the blame where necessary and it gives a promise to the future. Now, if we could set this as mandatory teaching for all companies worldwide I would feel so much better. [The Quiet Earth]

Kim went on to note that while this new transparency is a great thing, it’s not enough to be transparent, you must also be competent. And he borrowed this wonderful phrase from Don Tapscott: “If you are going to be naked, you had better be buff.”

Yesterday my DNS provider, GoDaddy, had a bad buffness day. My site was offline for hours, during which time the blogosphere speculated wildly about problems related to Daylight Saving Time. GoDaddy had nothing to say about it when I checked yesterday, and has nothing now, though it seems that at some point a note about technical difficulties was posted.

Scanning the commentary on various sites yesterday yielded no conclusion. The outage either was, or wasn’t, a denial of service attack unrelated to DST. I never knew which, yesterday, and I still don’t today.

The corollary to “If you are going to be naked, you had better be buff” is clearly not “On a bad buffness day, cover up.”

Flickr hiccup and transparency

Via Perilocity from The Quiet Earth:

So flickr had a hiccup yesterday. Well, truth be told, it was a major problem on their side: the image caches ran amok and delivered the wrong pics – not a few of them a bit on the more adult oriented side (as a sidenote, this proves what we all knew anyway: The Internet Is All About Porn). To the emotional outcry from lotsa lotsa users came the fact that the problem was not resolved by restarting the flaky cache server(s) but instead resurfaced once again. So finally, after quite a few hours of downtime (and I bet beet red engineers working overtime to find the bug and fix it) the system is back up.

So that's the exposition, which just about gives you an idea of the dimension of this thingy. It didn't? Well, then let me summarize: It Was BIG. However, flickr not only took down their site but pointed to their blog – in which Eric Costello did keep the users informed (if only tersely, but this is better than just a few lame marketing lines stating that all is beautiful and the system is just being enhanced yaddayaddayadda). When it was apparent that flickr would solve the problem he sat down and wrote a decent explanation of the problem – in a way to satisfy both non-technical users and the somewhat tech-savvy ones. He explains the issue without emotional overtures nor does he play it down:

To be clear, we regard this as a serious problem, but it is something that goes away as soon as we restart the malfunctioning servers (tonight we found that the servers were going insane again shortly after restarting, but we have isolated the problem and believe we have a permanent fix).

And finally, he concludes with:

We shamefacedly apologize for the inconvenience and the scare. We understand that it probably seems very, very strange and we know that many people got the impression that their photos were lost forever. But they should all be back now, safe and sound. And everyone who works on Flickr's engineering and technical operations teams are working double time to ensure that it never happens again. Thanks for your understanding and patience!

Folks, this is one of the best pieces of crisis management I have ever seen! It states the problem; it states the solution; it takes the blame where necessary and it gives a promise to the future. Now, if we could set this as mandatory teaching for all companies worldwide I would feel so much better.

Now I feel better about my glitches upgrading to WordPress 2.0.2. Just kidding. I think this is a great story.

I'll just assert one caveat, though, directed not so much to the Flickr incident as to the notion that good communication can fix everything. 

Transparency and visibility are not the whole story, as important as they may be.   

I recently fell back into Don Tapscott's super book from way back in 2003, The Naked Corporation: How the Age of Transparency will Revolutionize Business.  (By the way, it's rated 5 stars by its Amazon peer reviewers.  Don is – rightly – a cyber guru to many Fortune 500 businesses.)  In it he says:

“From the marketing perspective, the message is clear. If you are going to be naked, you had better be buff.”

I love this.  And as Don shows through examples, “Opening the kimono, especially when you're not superbly buff, presents risks…” 

It's a great metaphor:  transparency is bringing about a whole new way of doing business, in which businesses will want – and be required – to “get in shape”.   So underneath the change in communications lies a much bigger change.