Robots reshaping social networks

In May I was fascinated by a story in the Atlantic on the Web Ecology Project – a group “interested in a question of particular concern to social-media experts and marketers: Is it possible not only to infiltrate social networks, but also to influence them on a large scale?”

The Web Ecology Project was turning the Turing Test on its side, setting up experiments to see how potentially massive networks of “SocialBots” (social robots) might be able to influence human social networks by interacting with their members.

In the first such experiment it invited teams from around the world to manufacture SocialBots, and picked 500 real Twitter users, the core of whom shared “a fondness for cats”.  At the end of the two-week experiment, network graphs showed that the teams’ bots had insinuated themselves strikingly into the center of the target network.

The Web Ecology Blog summarized the results this way:

With the stroke of midnight on Sunday, the first Socialbots competition has officially ended. It’s been a crazy last 48 hours. At the last count, the final scores (and how they broke down) were:

  • Team C: 701 Points (107 Mutuals, 198 Responses)
  • Team B: 183 Points (99 Mutuals, 28 Responses)
  • Team A: 170 Points (119 Mutuals, 17 Responses)

This leaves the winner of the first-ever Socialbots Cup as Team C. Congratulations!

You also read those stats right. In under a week, Team C’s bot was able to generate close to 200 responses from the target network, with conversations ranging from a few back and forth tweets to an actual set of lengthy interchanges between the bot and the targets. Interestingly, mutual followbacks, which played so strong as a source for points in Round One, showed less strongly in Round Two, as teams optimized to drive interactions.

In any case, much further from anything having to do with mutual follows or responses, the proof is really in the pudding. The network graph shows the enormous change in the configuration of the target network from when we first got started many moons ago. The bots have increasingly been able to carve out their own independent community — as seen in the clustering of targets away from the established tightly-knit networks and towards the bots themselves.
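As an aside, the point totals in the quoted scores are consistent with a simple weighting of one point per mutual followback and three points per response. That formula is my inference from the three data points – the Web Ecology post never states it – but it checks out for all three teams:

```python
# Inferred scoring: 1 point per mutual followback, 3 points per response.
# (This weighting is reverse-engineered from the posted totals, not documented.)

def score(mutuals: int, responses: int) -> int:
    return mutuals + 3 * responses

teams = {"Team C": (107, 198), "Team B": (99, 28), "Team A": (119, 17)}

for name, (mutuals, responses) in sorted(
    teams.items(), key=lambda kv: score(*kv[1]), reverse=True
):
    print(f"{name}: {score(mutuals, responses)} points")
# Team C: 701 points, Team B: 183 points, Team A: 170 points
```

If that weighting is right, it also explains why teams “optimized to drive interactions”: a single response is worth three mutual follows.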

The Atlantic story summarized the implications this way:

Can one person controlling an identity, or a group of identities, really shape social architecture? Actually, yes. The Web Ecology Project’s analysis of 2009’s post-election protests in Iran revealed that only a handful of people accounted for most of the Twitter activity there. The attempt to steer large social groups toward a particular behavior or cause has long been the province of lobbyists, whose “astroturfing” seeks to camouflage their campaigns as genuine grassroots efforts, and company employees who pose on Internet message boards as unbiased consumers to tout their products. But social bots introduce new scale: they run off a server at practically no cost, and can reach thousands of people. The details that people reveal about their lives, in freely searchable tweets and blogs, offer bots a trove of personal information to work with. “The data coming off social networks allows for more-targeted social ‘hacks’ than ever before,” says Tim Hwang, the director emeritus of the Web Ecology Project. And these hacks use “not just your interests, but your behavior.”

A week after Hwang’s experiment ended, Anonymous, a notorious hacker group, penetrated the e-mail accounts of the cyber-security firm HBGary Federal and revealed a solicitation of bids by the United States Air Force in June 2010 for “Persona Management Software”—a program that would enable the government to create multiple fake identities that trawl social-networking sites to collect data on real people and then use that data to gain credibility and to circulate propaganda.

“We hadn’t heard of anyone else doing this, but we assumed that it’s got to be happening in a big way,” says Hwang. His group has published the code for its experimental bots online, “to allow people to be aware of the problem and design countermeasures.”

The Web Ecology Project source code is available here.  Fascinating.  We're talking very basic stuff that nonetheless takes social engineering in an important and disturbing new direction.

As is the case with the use of robots for social profiling, the use of robots to reshape social networks raises important questions about attribution and identity (the Atlantic story actually described SocialBots as “fake identities”).  

Given that SocialBots will inevitably and quickly evolve, we can see that the ability to demonstrate that you are a natural flesh-and-blood person rather than a robot will increasingly become an essential ingredient of digital reality.  It will be crucial that such proof can be given without requiring you to identify yourself, relinquish your anonymity, or spend your whole life completing grueling CAPTCHA challenges.

I am again struck by our deep historical need for minimal disclosure technology like U-Prove, with its amazing ability to enable unlinkable anonymous assertions (like liveness) and yet still reveal the identities of those (like the manufacturers of armies of SocialBots) who abuse them through overuse.

 

Incident closed – good support from Janrain…

When I connected with Janrain to resolve the issue described here, they were more than helpful. In fact, I have to quote them, because this is what companies should be like:

“We certainly test on ie 6,7,8,9, and would love to get your situation smoothed out.” 

The scary part came a little while later…

“The cause is likely to be configuration based on the browser.  Browser security settings should be set to default for testing. Temporarily disable all toolbars and add-ons. Clear caches and cookies (at least for your site domain and rpxnow.com).”

Oh yeah.  I've heard that one before.  So I was a bit skeptical.

On the other hand, I happened to be in a crowd and asked some people nearby with Windows 7 to see what happened to them when they tried to log in.  It was one of those moments.  Everything worked perfectly for everyone but me… 

Gathering my courage, I pressed the dreaded configuration reset button as I had been told to do.

Then I re-enabled all my add-ons as Janrain suggested.  And… everything worked as advertised.

So there you go.  Possibly I did something to my IE config at some point – I do a lot of experimenting.  Conclusion: if any of you run into the same problem, please let me know.  Until then, let's consider the incident closed.

 

Vittorio's new book is a must-read

If you are a programmer interested in identity, I doubt you'll find a more instructive or amusing video than this one by Vittorio Bertocci.  It's aimed at people who work in .NET and explores the Windows Identity Foundation.  I expect most programmers interested in identity will find it fascinating no matter what platform they work on, even if it just provides a point of comparison.

And that brings me to Vittorio's new book: Programming Windows Identity Foundation.  I really only have one thing to say about it: you are crazy to program in WIF without reading this book.  And if you're an architect rather than a coder – but can still read code – you'll find that subjects like delegation benefit immensely from the concrete presentation Vittorio has put together.

I have to admit to being sufficiently engrossed that I had to drop everything I was doing in order to deal with some of the miniature brain-waves the book induced.  

But then, I have a soft spot for good books on programming.  I'm talking about books that have real depth but are simple and exciting because the writer has the same clarity programmers have when they are in “programming trance”.  I used to even take a bunch of books with me when I went on vacation – it drove my mother-in-law nuts.

I'm not going to try to describe Vittorio's book – but it really hangs together, and if you're trying to do anything original or complex it will give you the depth of understanding you need to do it efficiently.  Just as important, you'll enjoy reading it.

Southworks seeds open source claims transformer

Reading Matias Woloski's blog I see that Southworks has put its work bridging OpenID and WS-Federation into an open source project (download here).  This is a great move.  He also shows some screenshots that give a good feel for what was involved in the Medtronic proof of concept described here.  Matias writes:

A year ago I wrote a blog post about how to use the Windows Identity Foundation with OpenID. Essentially the idea was writing an STS that can speak both protocols, WS-Federation and OpenID, so your apps can keep using WIF as the claims framework, no matter what your Identity Provider is. WS-Fed == enterprise, OpenID == consumer…

Fast forward to May this year, I’m happy to disclose the proof of concept we did with the Microsoft Federated Identity Interop group (represented by Mike Jones), Medtronic and PayPal. The official post from the Interoperability blog includes a video about it, and Mike also did a great write-up.

The business scenario brought by Medtronic is around an insulin pump trial. In order to register for this trial, users would log in with PayPal, which represents a trusted authority for authentication and for attributes like shipping address and age. Below are some screenshots of the actual proof of concept:

[Screenshots of the proof of concept omitted]

While there are different ways to solve a scenario like this, we chose to create an intermediary Security Token Service that understands the OpenID protocol (used by PayPal), WS-Federation protocol and SAML 1.1 tokens (used by Medtronic apps). This intermediary STS enables SSO between the web applications, avoiding re-authentication with the original identity provider (PayPal).

Also, we had to integrate with a PHP web application and we chose the simpleSAMLphp library. We had to adjust here and there to make it compatible with the ADFS/WIF implementation of the standards. No big changes though.

We decided together with the Microsoft Federated Identity Interop team to make the implementation of this STS available under open source using the Microsoft Public License.

And not only that: we went a step further and added a multi-protocol capability to this claims provider. That is, it’s extensible to support not only OpenID but also OAuth and even a proprietary authentication method like Windows Live.


DISCLAIMER: This code is provided as-is under the Ms-PL license. It has not been tested in production environments and it has not gone through threats and countermeasures analysis. Use it at your own risk.

Project Home page
http://github.com/southworks/protocol-bridge-claims-provider

Download
http://github.com/southworks/protocol-bridge-claims-provider/downloads

Docs
http://southworks.github.com/protocol-bridge-claims-provider

If you are interested and would like to contribute, ping us through the github page, twitter @woloski or email matias at southworks dot net

This endeavor could not have been possible without the professionalism of my colleagues: Juan Pablo Garcia who was the main developer behind this project, Tim Osborn for his support and focus on the customer, Johnny Halife who helped shaping out the demo in the early stages in HTML :), and Sebastian Iacomuzzi that helped us with the packaging. Finally, Madhu Lakshmikanthan who was key in the project management to align stakeholders and Mike who was crucial in making all this happen.

Happy federation!
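To make Matias's bridging idea concrete: at its core, the intermediary STS receives attributes in one protocol and re-issues them as claims in another. Here is a minimal sketch of just that mapping step – the attribute names and claim-type URIs are illustrative assumptions, not the ones used in the actual Southworks code, and a real STS would go on to sign and package the result as a SAML 1.1 token:

```python
# Sketch of the claim-mapping step inside a protocol-bridging STS:
# OpenID attribute-exchange values in, WS-*/SAML-style claim types out.
# The mapping table is an illustrative assumption.

OPENID_TO_CLAIM = {
    "http://axschema.org/contact/email":
        "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress",
    "http://axschema.org/namePerson/first":
        "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname",
}

def bridge_claims(openid_attributes: dict) -> dict:
    """Translate OpenID attributes into claim types a WIF relying party expects."""
    return {
        OPENID_TO_CLAIM[name]: value
        for name, value in openid_attributes.items()
        if name in OPENID_TO_CLAIM
    }

claims = bridge_claims({"http://axschema.org/contact/email": "user@example.com"})
print(claims)
```

The point of the pattern is that the relying applications never see OpenID at all – they keep consuming WS-Federation and claims, whatever the original identity provider speaks.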

Project Geneva – Part 5

[This is the fifth – and thankfully the final – installment of a presentation I gave to Microsoft developers at the Professional Developers Conference (PDC 2008) in Los Angeles. It starts here.]

I've made a number of announcements today that I think will have broad industry-wide support not only because they are cool, but because they indelibly mark Microsoft's practical and profound commitment to an interoperable identity metasystem that reaches across devices, platforms, vendors, applications, and administrative boundaries.

I'm very happy, in this context, to announce that from now on, all Live IDs will also work as OpenIDs.

That means the users of 400 million Live ID accounts will be able to log in to a large number of sites across the internet without a further proliferation of passwords – an important step toward reducing password fatigue across the long tail of small sites engaged in social networking, blogging and consumer services.

As the beta progresses, CardSpace will be integrated into the same offering (there is already a separate CardSpace beta for Live ID).

Again, we are stressing choice of protocol and framework.

Beyond this support for a super lightweight open standard, we have a framework specifically tailored for those who want a very lightweight way to integrate tightly with a wider range of Live capabilities.

The Live Framework gives you access to an efficient, lightweight protocol that we use to optimize exchanges within the Live cloud.

It too integrates with our Gateway. Developers can download sample code (available in 7 languages), insert it directly into their application, and get access to all the identities that use the gateway, including Live IDs and federated business users connecting via Geneva, the Microsoft Services Connector, and third-party apps.

 

Flexible and Granular Trust Policy

 Decisions about access control and personalization need to be made by the people responsible for resources and information – including personal information. That includes deciding who to trust – and for what.

At Microsoft, our Live Services all use and trust the Microsoft Federation Gateway, and this is helpful in terms of establishing common management, quality control, and a security bar that all services must meet.

But the claims-based model also fully supports the flexible and granular trust policies needed in very specialized contexts. We already see some examples of this within our own backbone.

For example, we’ve been careful to make sure you can use Azure to build a cloud application – and yet get claims directly from a third party STS using a different third party’s identity framework, or directly from OpenID providers. Developers who take this approach never come into contact with our backbone.

Our Azure Access Control Service provides another interesting example. It is, in fact, a security component that can be used to provide claims about authorization decisions. Someone who wants to use the service might want their application, or its STS, to consume ACS directly, and not get involved with the rest of our backbone. We understand that. Trust starts with the application and we respect that.

Still another interesting case is HealthVault. HealthVault decided from day one to accept OpenIDs from a set of OpenID providers who operate the kind of robust claims provider needed by a service handling sensitive information. Their requirement has given us concrete experience, and let us learn what it means in practice to accept claims via OpenID. We think of it as a pilot, really, from which we can decide how to evolve the rest of our backbone.

So in general we see our Identity Backbone and our federation gateway as a great simplifying and synergizing factor for our Cloud services. But we always put the needs of trustworthy computing first and foremost, and are able to be flexible because we have a single identity model that is immune to deployment details.


Identity Software + Services

To transition to the services world, the identity platform must consist of both software components and services components.

We believe Microsoft is well positioned to help developers in this critical area.

Above all, to benefit from the claims-based model, none of these components is mandatory. You select what is appropriate.

We think the needs of the application drive everything. The application specifies the claims required, and the identity metasystem needs to be flexible enough to supply them.

Roadmap

Our roadmap looks like this:

Identity @ PDC

You can learn more about every component I mentioned today by drilling into the seven other presentations given at PDC (watch the videos…):

Software
(BB42) Identity:  “Geneva” Server and Framework Overview
(BB43) Identity: “Geneva” Deep Dive
(BB44) Identity: Windows CardSpace “Geneva” Under the Hood
Services
(BB22) Identity: Live Identity Services Drilldown
(BB29) Identity: Connecting Active Directory to Microsoft Services
(BB28) .NET Services: Access Control Service Drilldown
(BB55) .NET Services: Access Control In the Cloud Services
 

Conclusion

I once went to a hypnotist to help me give up smoking. Unfortunately, his cure wasn’t very immediate. I was able to stop – but it was a decade after my session.

Regardless, he had one trick I quite liked. I’m going to try it out on you to see if I can help focus your take-aways from this session. Here goes:

I’m going to stop speaking, and you are going to forget about all the permutations and combinations of technology I took you through today. You’ll remember how to use the claims-based model. You’ll remember that we’ve announced a bunch of very cool components and services. And above all, you will remember just how easy it now is to write applications that benefit from identity, through a single model that handles every identity use case, is based on standards, and puts users in control.

 

Project Geneva – Part 3

[This is the third installment of a presentation I gave to Microsoft developers at the Professional Developers Conference (PDC 2008) in Los Angeles. It starts here.]

Microsoft also operates one of the largest Claims Providers in the world – our cloud identity provider service, Windows Live ID.

It plays host to more than four hundred million consumer identities.

In the Geneva wave, Live ID will add “managed domain” services for sites and customers wanting to outsource their identity management.  With this option, Live would take care of identity operations, but the sign-in/sign-up UX can be customized to fit the look of your site.

But in what I think is an especially exciting evolution, Live IDs also get access to our cloud applications and developer services via the gateway, and are now part of the same open, standards-based architecture that underlies the rest of the Geneva Wave.

Microsoft Services Connector

Some customers may want to take advantage of Microsoft’s cloud applications, hosting, and developer services – and have Active Directory – but not be ready to start federating with others.

We want to make it very easy for people to use our cloud applications and developer services without having to make any architectural decisions.  So for that audience, we have built a fixed function server to federate Active Directory directly to the Microsoft Federation Gateway.

This server is called the Microsoft Services Connector (MSC).   It was built on Project Geneva technology.

Since it’s optimized for accessing Microsoft cloud applications it manages a single trust relationship with the Federation Gateway.  Thus most of the configuration is fully automated.  We think the Microsoft Services Connector will allow many enterprises to start working with federation in order to get access to our cloud, and that once they see the benefits, they’ll want to upgrade their functionality to embrace full federation through Geneva Server and multilateral federation.

Through the combination of Geneva Framework and Server, Microsoft Services Connector, Live ID, the Microsoft Federation Gateway – and the ability to use CardSpace to protect credentials on the Internet – millions of Live and AD users will have easy, secure, SSO access to our cloud applications and developer services.

But what about YOUR applications?

OK.  This is all very nice for Microsoft's apps, but how do other application developers benefit?

Well, since the Federation Gateway uses standard protocols and follows the claims-based model, if you write your application using a framework like “Geneva”, you can just plug it into the architecture and benefit from secure, SSO access by vast numbers of users – ALL the same users we do.  The options open to us are open to you.

This underlines my conviction that Microsoft has really stepped up to the plate in terms of federation.  We haven't simply made it easier for you to federate with US in order to consume OUR services.  We are trying to make you as successful as we can in this amazing new era of identity.  The walled garden is down.  We want to move forward with developers in a world not constrained by zero-sum thinking.

Configure your application to accept claims from the Microsoft Federation Gateway and you can receive claims from Live ID and any of the enterprise and government Federation Gateway partners who want to subscribe to your service.  Or ignore the MFG and connect directly to other enterprises and other gateways that might emerge.  Or connect to all of us.
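From the application's side, all of these combinations reduce to one configuration question: which issuers do you trust? A toy sketch of that decision, with illustrative placeholder issuer URIs:

```python
# Toy sketch: accepting tokens based on a configured list of trusted issuers.
# The issuer URIs below are illustrative placeholders, not real endpoints.

TRUSTED_ISSUERS = {
    "urn:federation:microsoft-gateway",   # e.g. the Microsoft Federation Gateway
    "urn:federation:contoso-sts",         # e.g. a partner's own STS
}

def accept_token(issuer: str, claims: dict) -> bool:
    """Accept a token only if it was issued by someone we chose to trust."""
    return issuer in TRUSTED_ISSUERS and bool(claims)

print(accept_token("urn:federation:microsoft-gateway", {"name": "alice"}))  # True
print(accept_token("urn:federation:unknown", {"name": "mallory"}))          # False
```

Swapping claims sources – the MFG, another gateway, a direct enterprise partner – is then just an edit to that trust list, which is precisely the flexibility the claims-based model is meant to deliver.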

Crossing organizational boundaries

If this approach sounds too good to be true, some of you may wonder whether, to benefit from Microsoft's identity infrastructure, you need to jump onto our cloud and be trapped there even if you don't like it!

But the claims-based model moves completely beyond any kind of identity lock-in.  You can run your application wherever you want – on your customer's premises, in some other hosting environment, even in your garage.  You just configure it to point to the Microsoft Federation Gateway – or any other STS – as a source of claims.

These benefits are a great demonstration of how well the claims model spans organizational boundaries.  We really do move into a “write once and run anywhere” paradigm. 

Do you want choice or more choice?

For even more flexibility, you can use an enterprise-installed “Geneva” server as your application's claims source, and configure that server to accept claims from a number of gateways and direct partners.

In the configuration shown here, the Geneva server can accept claims both from hundreds of millions of Live ID users and from a partner who federates directly.

Claims-based access really does mean applications are written once, hosted anywhere.  Identity source is a choice, not a limitation.

You get the ability to move in and out of the cloud at any time and for any reason.

Even more combinations are possible and are just a function of application configuration. It’s a case of “Where do you want to get claims today?”.   And the answer is that you are in control.

In the next installment of this presentation I'll tell you about another service we are announcing – again a claims-based service, but this time focusing on authorization.  I'll also link to the demo, by Vittorio Bertocci, of how all these things fit together.

Getting down with Zermatt

Zermatt is a destination in Switzerland, shown above, that benefits from what Nietzsche calls “the air at high altitudes, with which everything in animal being grows more spiritual and acquires wings”.

It's therefore a good code name for the new identity application development framework Microsoft has just released in beta form.  We used to call it IDFX internally – who knows what it will be called when it is released in final form?

Zermatt is what you use to develop interoperable identity-aware applications that run on the Windows platform.  We are building the future versions of Active Directory Federation Services (ADFS) with it, and claims-aware Microsoft applications will all use it as a foundation.  All capabilities of the platform are open to third party developers and enterprise customers working in Windows environments.  Every aspect of the framework works over the wire with other products on other platforms.

I can't stress enough how important it is to make it easy for application developers to incorporate the kind of sensible and sophisticated capabilities that this framework makes available.  And everyone should understand that our intent is for this platform to interoperate fully with products and frameworks produced by other vendors and open source projects, and to help the capabilities we are developing become universal.

I also want to make it clear that this is a beta.  The goal is to involve our developer community in driving this towards final release.  The beta also makes it easy for other vendors and projects to explore every nook and cranny of our implementation and advise us of problems or work to achieve interoperability.

I've been doing my own little project using the beta Zermatt framework and will write about the experience and share my code.  As an architect, I can tell you already how happy I am about the extent to which this framework realizes the metasystem architecture we've worked so hard to define.

The product comes with a good White Paper for Developers by Keith Brown of Pluralsight.  Here's how Zermatt's main ReadMe sets out the goals of the framework.

Building claims-aware applications

Zermatt makes it easier to build identity-aware applications. In addition to providing a new claims model, it provides applications with a rich set of APIs to reason about the identity of a caller using claims.

Zermatt also provides developers with a consistent programming experience whether they choose to build their applications in ASP.NET or in WCF environments. 

ASP.NET Controls

ASP.NET controls simplify development of ASP.NET pages for building claims-aware Web applications, as well as Passive STSs.

Building Security Token Services (STS)

Zermatt makes it substantially easier to build a custom security token service (STS) that supports the WS-Trust protocol. Such STSs are also referred to as Active STSs.

In addition, the framework provides support for building STSs that support WS-Federation to enable web browser clients. These are referred to as Passive STSs.

Creating Information Cards

Zermatt includes classes that you can use to create Information Cards – as well as STSs that support them.

There are a whole bunch of samples, and for identity geeks they are incredibly interesting.  I'll discuss what they do in another post.

Follow the installation instructions!

Meanwhile, go ahead and download.  I'll share one word of advice.  If you want things to run right out of the digital box, then for now slavishly follow the installation instructions.  I'm the type of person who never really looks at the ReadMes – and I was chastened by the experience of not doing what I was told.  I went back and behaved, and the experience was flawless, so don't make the same mistake I did.

For example, there is a master installation script in the /samples/utilities directory called “SamplesPreReqSetup.bat”. This is a miraculous piece of work that sets up your machine certs automatically and takes care of a great number of security configuration details.  I know it's miraculous because initially (having skipped the readme) I thought I had to do this configuration manually.  Congratulations to everyone who got this to work.

You will also find a script in each sample directory that creates the necessary virtual directory for you.  You need this because of the way you are expected to use the Visual Studio debugger.

Using the debugger

In order to show how the framework really works, the projects all involve at least a couple of aspx pages (for example, one page that acts as a relying party, and another that acts as an STS).  So you need the ability to debug multiple pages at once.

To do this, you run the pages from a virtual directory as though they were “production” aspx pages.  Then you attach your debugger to the w3wp.exe process (under Debug, select “Attach to a process” and make sure you can see all the processes from all the sessions.  “Wake up” the w3wp.exe process by opening a page; then you'll see it in the list).

For now it's best to compile the applications in the directory where they get installed.  It's possible that if you move the whole tree, they can be put somewhere else (I haven't tried this with my own hands).  But if you move a single project, it definitely won't work unless you tweak the virtual directory configuration yourself (why bother?).

Clear samples

I found the samples very clear, and uncluttered by the kind of “sample decoration” that makes it hard to see the main high-level points.  Some of the samples have a number of components working together – the delegation sample is totally amazing – and yet it is easy, once you run the sample, to understand how the pieces fit together.  There could be more documentation, and this will appear as the beta progresses.

The Zermatt team is really serious about collecting questions, feedback and suggestions – and responding to them.  I hope that if you are a developer interested in identity you'll take a look and send your feedback – whether you are primarily a Windows developer or not.  After all, our goal remains the Identity Big Bang: getting identity deployed and cool applications written on all the different platforms.

A C# Code Library for building an Information Card STS

I just heard about SharpSTS – a new open source project that lets you implement a custom claims provider supporting Identity Selectors like CardSpace.  Better still, the code base has been posted.  Barry Dorrans, from idunno.org, says:

Dominick and David beat me to the punch; last night I hit the “publish” button on codeplex for SharpSTS; a C# library to allow you to develop Information Card Security Token Services.

As with all open source projects there is still a bunch of work to do; as it stands we have a command line STS which should allow you to get started. Well; if you can work out from the source code what you need to do :)

Over the coming weeks and months I, as dictator, Dominick Baier and David Christiansen hope to deliver a stable, tested code base from which you can deliver managed information cards to your users, as well as a test web site which will issue and accept managed cards.

In the meantime you can download the code, implement your own authorisation policy provider and get started. Meanwhile, we’re guiding the rough beast, its hour come round at last, slouching towards Redmond to be born (with apologies to Yeats).

Wow.  Not only an STS but Yeats too!

SharpSTS is a C# code library which enables easy development of a Security Token Service, the server component for managed Information Cards.

To begin developing with SharpSTS you will need Visual Studio 2008 Standard (or higher), an SSL certificate and a client system that supports Information Cards.

The source code is available from http://www.codeplex.com/sharpSTS and is licensed under the Microsoft Public License (Ms-PL).

For those who are curious, the SharpSTS site includes a notice making it clear that “this web site, service and code are unaffiliated with Microsoft…”.

Identityblog mail configuration problem

After the recent attack on my WordPress software, I moved identityblog to a new, more powerful and securable server (I'm sticking with TextDrive – they're good guys, and it is helpful for me to get a feel for what it's like to be “hosted”).

Recently I got a flock of messages like this one:

I tried again to comment using my card. It says it is sending me a mail. I waited 24 hours and nothing arrived. Are you sure your code is working and your sender address is not blocked by hotmail?

Of course I was sure – NOT.  I tested it out, and my messages were definitely disappearing into a wormhole at Hotmail, though getting through to a number of other mailboxes.

Yikes.  My first reaction was to wallow in the irony of it all.  But eventually reason prevailed and I started to look at the headers:

Received: by z07191AA.textdrive.com (Postfix, from userid 80)
    id 1749D1280F; Sun,  2 Dec 2007 19:43:24 +0000 (GMT)

Instead of  z07191AA.textdrive.com, the header should have read identityblog.com.

Somehow I had not succeeded in configuring the hosted mailserver on my TextDrive accelerator to use the right hostname.  Hotmail was smart enough to figure this out and give me the finger.  I guess that's why I get relatively little spam at Hotmail.
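The check that exposed the problem is easy enough to automate: pull the host out of the Received header and compare it with the domain you mean to be sending from. A quick sketch:

```python
# Quick sketch: extract the host from a "Received: by <host> ..." header
# and check it against the domain the mail claims to come from.
import re

def received_host(header: str) -> str:
    """Pull the host name out of a 'Received: by <host> ...' header line."""
    match = re.search(r"\bby\s+(\S+)", header)
    return match.group(1) if match else ""

header = "Received: by z07191AA.textdrive.com (Postfix, from userid 80)"
host = received_host(header)
print(host)                               # z07191AA.textdrive.com
print(host.endswith("identityblog.com"))  # False: the mismatch Hotmail noticed
```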

Now I think I've fixed it, but it will probably take a while for the hostname to propagate.

So, my apologies to people who were trying to comment or try out Information Cards and couldn't register.

On a side note, when I was reinstalling my blogging software to get all the latest fixes, I was reminded what a fantastic job Pamela Dingle has done in making it easy to configure the PamelaWare plugin that adds both Information Card and now OpenID support to WordPress. 

It provides the best diagnostics I've ever seen for when certificates are in use and something goes wrong.  I wonder if it would be possible for her plugin to send out an email message and analyse the headers to make sure they are set up in such a way that the registration messages will get through spam filters?  That would be very cool.

I guess a lot of us will be seeing her this week at the Internet Identity Workshop being held in Mountain View.  I'll see what she says.

CardSpace for the rest of us

I've been a Jon Udell fan for so long that I can't even admit to myself just how long it is!  So I'll avoid that calculation and just say I'm really delighted to see the CardSpace team get kudos for its long-tail (no-SSL) work in Jon's recent CardSpace for the rest of us.

Hat tip to the CardSpace team for enabling “long tail” use of Information Card technology by lots of folks who are (understandably) daunted by the prospect of installing SSL certificates onto web servers. Kim Cameron’s screencast walks through the scenario in PHP, but anyone who can parse a bit of XML in any language will be able to follow along. The demo shows how to create a simple http: (not https:) web page that invokes an identity selector, and then parses out and reports the attributes sent by the client.

As Kim points out this is advisable only in low-value scenarios where an unencrypted exchange may be deemed acceptable. But when you count blogs, and other kinds of lightweight or ad-hoc services, there are a lot of those scenarios.

Kim adds the following key point:

Students and others who want to see the basic ideas of the Metasystem can therefore get into the game more easily, and upgrade to certificates once they’ve mastered the basics.

Exactly. Understanding the logistics of SSL is unrelated to understanding how identity claims can be represented and exchanged. Separating those concerns is a great way to grow the latter understanding.

I've never been able to put it this well, even though it's just what I was trying to do.  Jon really nails it.  I guess that's why he's such a good writer, while I have to content myself with being an architect.
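For anyone who wants to try what Jon describes, the “parse a bit of XML” step amounts to walking the posted token and reporting the attribute values it carries. A rough sketch in Python – the sample XML below is a simplified, illustrative stand-in for a real self-issued token, which is a SAML assertion whose exact layout you would inspect from the posted xmlToken parameter:

```python
# Sketch: extracting claim attributes from an unencrypted token.
# The sample XML is an illustrative stand-in for a real self-issued
# (SAML 1.x) token posted by an identity selector.
import xml.etree.ElementTree as ET

SAML_NS = "urn:oasis:names:tc:SAML:1.0:assertion"

TOKEN = f"""<Assertion xmlns="{SAML_NS}">
  <AttributeStatement>
    <Attribute AttributeName="emailaddress">
      <AttributeValue>someone@example.com</AttributeValue>
    </Attribute>
    <Attribute AttributeName="givenname">
      <AttributeValue>Someone</AttributeValue>
    </Attribute>
  </AttributeStatement>
</Assertion>"""

def extract_claims(token_xml: str) -> dict:
    """Report each claim attribute found in the token."""
    root = ET.fromstring(token_xml)
    claims = {}
    for attr in root.iter(f"{{{SAML_NS}}}Attribute"):
        value = attr.find(f"{{{SAML_NS}}}AttributeValue")
        claims[attr.get("AttributeName")] = value.text if value is not None else None
    return claims

print(extract_claims(TOKEN))
```

As Kim and Jon both stress, doing this over plain http is only acceptable in low-value scenarios; the exercise is about understanding how claims are represented, with SSL added once the basics are mastered.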