Trust will make or break cloud ID management services

ZDNet’s John Fontana has written about the Webinar on Identity Management as a Service hosted last week by Craig Burton of Kuppinger Cole.  The session began with a presentation by Craig on the revolutionary impact of the API economy in shaping the requirements for cloud identity.  Then I spoke about the characteristics of Identity Management as a Service as they are shaping the industry, and especially Azure Active Directory, while Chuck Mortimore gave a good description of what we will be seeing in Salesforce.com’s emerging cloud directory service.  The Webinar is available to those who want the details.

John highlights a number of the key emerging concepts in his piece, titled “Trust will make or break cloud ID management services”:

If identity-management-as-a-service is to take hold among enterprises it will need to be anchored by well-crafted rules for establishing trust that incorporate legal parameters and policy requirements, according to a pair of identity experts.

“Where we have seen trust frameworks be really successful in the past is where member organizations have some means and motivation for cooperation be that altruistic, economic, etc.,” said Chuck Mortimore, senior director of product management for identity and security at Salesforce.com. He cited the Shibboleth Project deployed in academia that highly incents universities to collaborate and cooperate.

“We are seeing both the U.S. government and the British government selecting trust frameworks for their respective identity initiatives,” said Kim Cameron, Microsoft’s identity architect. “You need a bunch of people who share the interest of having a trust framework.”

Trust frameworks ensure trust between those issuing an identity and the providers that accept that ID for authentication to access services or data, and in increasing cases, to tap application programming interfaces (APIs).

To wit, 62% of the traffic on Salesforce.com comes from API calls, mobile clients and desktop clients.

Mortimore and Cameron appeared together Tuesday on a Webinar hosted by Kuppinger Cole analyst Craig Burton.

The identity-management-as-a-service (IdMaaS) concept is rising in importance due to an emerging “API economy,” according to Burton. That economy is characterized by billions of API calls to support services sharing data on a massive, distributed scale that stretches across the enterprise and the cloud.

IdMaaS defines a cloud service that manages identity for an organization’s employees, partners and customers and connects them to applications, devices and data either in the enterprise or the cloud.

“This won’t be a point-to-point situation,” said Cameron. He said existing systems can’t handle the identity, security and privacy requirements of the cloud and its API economy. “The domain-based identity management model of the ‘90s and early 2000s is a non-starter because no one will be staying within the enterprise boundary.”

Cameron said the only way all the requirements can be met is with an identity service that fosters simplification and lower costs. And the only way that gets off the ground is through the use of trust frameworks that simplify the legal and policy requirements.

Cameron pointed to a number of current trust frameworks certification programs including Kantara and the Open Identity Exchange.

Mortimore said end-users need to start with a “baseline of security and trust” and go from there.

He said he believes most enterprises will use a hybrid identity management configuration – enterprise infrastructure plus cloud.

“We firmly believe we will see that architecture for a long time,” said Mortimore. “If you look at the core imperatives for IT, cloud and mobile apps are forcing functions for IT investments, as well as, people looking at existing IDM infrastructure that is running up against friction like how do I expose this API.”

Mortimore said cloud identity management services represent a nice transition path.

Salesforce.com backed up that idea last month when it introduced Salesforce Identity, a service baked into its applications, platform, and development environment.

Mortimore ran the list of features: a directory that anchors identity management, reliance on standard schemas and wire protocols, extensibility and programmability.

“We are not running this as a Salesforce identity service, we are running it on behalf of customers. That is a critical part of these identity cloud systems. We need to facilitate the secure exchange of identities, federation, collaboration and attribute exchange,” said Mortimore.

Cameron concurred, saying “the identity management service operates your directory for you, that has to be the model.”

Microsoft’s service is called Azure Active Directory, and it offers the cloud-based services in a similar fashion to what Active Directory and other Microsoft infrastructure products (authentication, federation, synchronization) do within the enterprise.

“You need to use the efficiencies of the cloud to enable new functions in identity and provide more capability for less money,” he said.

While they are giants, Microsoft and Salesforce.com represent just a handful of providers that offer or are building cloud identity services. (Disclaimer: My employer offers a cloud identity service).

 

The cloud ate my homework

In mid-August I got an email that made me do a real double-take.  The subject line read:  Legacy Service End of Life – Action Required.

Action Required:

Legacy Service End of Life

Dear Kim,

We’ve been analyzing customer usage of Joyent’s systems and noticed that you are one of the few customers that are still on our early products and have not migrated to our new platform, the Joyent Cloud.

For many business reasons, including infrastructure performance, service quality and manageability, these early products are nearing their End of Life. We plan to sunset these services on October 31, 2012 and we’d like to walk you through a few options.

We understand this might be an inconvenience for you, but we have a plan and options to make this transition as easy as possible.  We’ve been developing more functionality on our new cloud infrastructure, the Joyent Cloud, for our customers who care about performance, resiliency and security.  Now’s the time to take advantage of all the new capabilities you don’t have today. Everyone that’s moved to our new cloud infrastructure has been pleased with the results.

As a new user to the Joyent Cloud, you are eligible to take advantage of Joyent Cloud’s 30-Day Free Trial using this promotional code… [etc. – Kim]

Sincerely,

Jason Hoffman
Founder and CTO
Joyent
jason@joyent.com

Of course I spend a lot of my time thinking about the cloud: people who’ve heard me speak recently know that I’ve increasingly become a zealot about the new capabilities it opens up, the API economy and all that.

So I suppose that getting a pail of cold salt water thrown in my face by Joyent was probably a good thing!  Imagine telling customers their infrastructure will be shut down within three months in an “action required” email!

We understand this might be an inconvenience for you.

Or even more surrealistic, after the hurricane,

We want you to take the time you need to focus on your personal safety, so we are extending the migration deadline from October 31, 2012 to the end of day Wednesday, November 7, 2012.

By the way, don’t think I was using a free service or an unreasonably priced plan.  I had been on a Joyent “dedicated accelerator” for many years with an upgraded support plan – on which I only ever made a single call.  This site was the very one that was breached due to a WordPress cross-site scripting bug as described here [note that my view of Joyent as a professional outfit has completely changed in light of the two-month fork-lift ultimatum they have sent our way].

Anyway, to make a long and illuminating story short, I’ve decided to leave Joyent in the dust and move towards something more professionally run.  Joyent served up what has to be one of the nightmare cloud scenarios – the kind that can only give the cloud a bad name.  Note to self:  Read the fine print on service end-of-life.  Tell customers to do the same.

Meanwhile, I’ve taken advantage of the platform change to move to the latest version of WordPress.  This meant paying the price for all the modifications to WordPress I had made over the years to experiment with InfoCards, OpenID, U-Prove, SAML, WS-Trust and the like on a non-Microsoft platform.

So friends, please bear with me while I get through this – with a major goal of keeping all the history of the site intact.  There are still “major kinks” I’m working out – including dealing with the picture in the theme, re-enabling comments and porting the old category system to the new WordPress mechanisms [categories now work – Kim].  Nonetheless, if you see things that remain broken please email me or contact me by Twitter or LinkedIn.

OK – I now “throw the big DNS switch in the sky” and take you over to the new version of Identityblog.

Yes to SCIM. Yes to Graph.

Today Alex Simons, Director of Program Management for Active Directory, posted the links to the Developer Preview of Windows Azure Active Directory.  Another milestone.

I'll write about the release in my next post.  Today, since the Developer Preview focuses a lot of attention on our Graph API, I thought it would be a good idea to respond first to the discussion that has been taking place on Twitter about the relationship between the Graph API and SCIM (Simple Cloud Identity Management).

Since the River of Tweets flows without beginning or end, I'll share some of the conversation for those who had other things to do:

@NishantK: @travisspencer IMO, @johnshew’s posts talk about SaaS connecting to WAAD using Graph API (read, not prov) @IdentityMonk @JohnFontana

@travisspencer: @NishantK Check out @vibronet’s TechEd Europe talk on @ch9. It really sounded like provisioning /cc @johnshew @IdentityMonk @JohnFontana

@travisspencer: @NishantK But if it’s SaaS reading and/or writing, then I agree, it’s not provisioning /cc @johnshew @IdentityMonk @JohnFontana

@travisspencer: @NishantK But even read/write access by SaaS *could* be done w/ SCIM if it did everything MS needs /cc @johnshew @IdentityMonk @JohnFontana

@NishantK: @travisspencer That part I agree with. I previously asked about conflict/overlap of Graph API with SCIM @johnshew @IdentityMonk @JohnFontana

@IdentityMonk: @travisspencer @NishantK @johnshew @JohnFontana check slide 33 of SIA322 it is really creating new users

@IdentityMonk: @NishantK @travisspencer @johnshew @JohnFontana it is JSON vs XML over HTTP… as often, MS is doing the same as standards with its own

@travisspencer: @IdentityMonk They had to ship, so it’s NP. Now, bring those ideas & reqs to IETF & let’s get 1 std for all @NishantK @johnshew @JohnFontana

@NishantK: @IdentityMonk But isn’t that slide talking about creating users in WAAD (not prov to SF or Webex)? @travisspencer @johnshew @JohnFontana

@IdentityMonk: @NishantK @travisspencer @johnshew @JohnFontana indeed. But its like they re one step of 2nd phase. What are your partners position on that?

@IdentityMonk: @travisspencer @NishantK @johnshew @JohnFontana I hope SCIM will not face a #LetTheWookieWin situation

@NishantK: @johnshew @IdentityMonk @travisspencer @JohnFontana Not assuming anything about WAAD. Wondering about overlap between SCIM & Open Graph API

Given these concerns, let me explain what I see as the relationship between SCIM and the Graph API.

What is SCIM?

All the SCIM documents begin with a commendably unambiguous statement of what it is:

The Simple Cloud Identity Management (SCIM) specification is designed to make managing user identity in cloud based applications and services easier. The specification suite seeks to build upon experience with existing schemas and deployments, placing specific emphasis on simplicity of development and integration, while applying existing authentication, authorization and privacy models. Its intent is to reduce the cost and complexity of user management operations by providing a common user schema and extension model, as well as binding documents to provide patterns of exchanging this schema using standard protocols. In essence, make it fast, cheap and easy to move users in to, out of and around the cloud. [Kim: emphasis is mine]

I support this goal. Further, I like the concept of spec writers being crisp about the essence of what they are doing: “Make it fast, cheap and easy to move users in to, out of and around the cloud”.  For this type of spec to be useful we need it to be as widely adopted as possible, and that means keeping it constrained, focused and simple enough that everyone chooses to implement it.
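
To make that concrete, here is a minimal sketch of what “moving a user into the cloud” looks like on the wire – a SCIM create-user call in Python.  The host and token are invented for illustration; only the /Users resource and the core schema URN come from the SCIM spec itself.

import requests  # third-party HTTP library

# Hypothetical SCIM 1.1 endpoint – the host and token are invented.
SCIM_BASE = "https://idmaas.example.com/scim/v1"

new_user = {
    "schemas": ["urn:scim:schemas:core:1.0"],   # SCIM core user schema
    "userName": "bjensen",
    "name": {"givenName": "Barbara", "familyName": "Jensen"},
}

# POST to the standard /Users resource creates the user "in the cloud".
resp = requests.post(
    SCIM_BASE + "/Users",
    json=new_user,
    headers={"Authorization": "Bearer <token>"},  # auth elided
)
resp.raise_for_status()
print(resp.json()["id"])  # the service returns the resource with its new id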

I think the SCIM authors have done important work to date.  I have no comments on the specifics of the protocol or schema at this point – I assume those will continue to be worked out in accordance with the spec's “essence statement” and be vetted by a broad group of players now that SCIM is on a track towards standardization.  Microsoft will try to help move this forward:  Tony Nadalin will be attending the next SCIM meeting in Vancouver on our behalf.

Meanwhile, what is “the Graph”? 

Given that SCIM's role is clear, let's turn to the question of how it relates to a “Graph API”.  

Why does our thinking focus on a Graph API in addition to a provisioning protocol like SCIM?  There are two answers.

Let's start with the theoretical one.  It is because of the central importance of graph technology in managing connectedness – something that is at the core of the digital universe.  Treating the world as a graph allows us to have a unified approach to querying and manipulating interconnected objects of many different kinds that exist in many different relationships to each other.

But theory only appeals to some… So let's add a second answer that is more… practical.  A directory has emerged that by August is projected to contain one billion users. True, it's only one directory in a world with many directories (most agree too many).  But beyond the importance it achieves through its scale, it fundamentally changes what it means to be a directory:  it is a directory that surfaces a multi-dimensional network.  

This network isn't simply a network of devices or people.  It's a network of people and the actions they perform, the things they use and create, the things that are important to them and the places they go.  It's a network of relationships between many meaningful things.  And the challenge is now for all directories, in all domains, to meet a new bar it has set.    

Readers who come out of a computer science background are no doubt familiar with what a graph is.  But I recommend taking the time to come up to speed on the current work on connectedness, much of which is summarized in Networks, Crowds and Markets: Reasoning About a Highly Connected World (by Easley and Kleinberg).  The thesis is straightforward:  the world of technology is one where everything is connected with everything else in a great many dimensions, and by refocusing on the graph in all its diversity we can begin to grasp it. 

In early directories we had objects that represented “organizations”, “people”, “groups” and so on.  We saw organizations as “containing” people, and saw groups as “containing” people and other groups in a hierarchical and recursive fashion.  The hierarchy was a particularly rigid kind of network or graph that modeled the rigid social structures (governments, companies) the technology of the time was built to describe.

But in today's flatter, more interconnected world, the things we called “objects” in the days of X.500 and LDAP are better expressed as “nodes” with different kinds of “edges” leading to many possible kinds of other “nodes”.  Those who know my work from around 2000 may remember I used to call this polyarchy and contrast it with the hierarchical limitations of LDAP directory technology.

From a graph perspective we can see “person nodes” having “membership edges” to “group nodes”.  Or “person nodes” having “friend edges” to other “person nodes”.  Or “person nodes” having “service edges” to a “mail service node”.  In other words the edges are typed relationships between nodes, and may themselves carry additional properties.  Starting from a given node we can “navigate the graph” across different relationships (I think of them as dimensions), and reason in many new ways.

For example, we can reason about the strength of the relationships between nodes, and perform analysis, understand why things cluster together in different dimensions, and so on.
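
A toy sketch makes the model concrete.  Nothing here comes from any product – it is just an illustration of nodes joined by typed edges, and of “navigating the graph” along one relationship:

from collections import defaultdict

class Node:
    """A graph node of some kind, with typed edges to other nodes."""
    def __init__(self, kind, name):
        self.kind, self.name = kind, name
        self.edges = defaultdict(list)  # edge type -> list of target nodes

    def link(self, edge_type, target):
        self.edges[edge_type].append(target)

alice  = Node("person", "Alice")
admins = Node("group", "Administrators")
mail   = Node("service", "Mail")

alice.link("memberOf", admins)    # a "membership edge" to a group node
alice.link("usesService", mail)   # a "service edge" to a service node

# Navigate one dimension of the graph from a starting node.
for group in alice.edges["memberOf"]:
    print(alice.name, "is a member of", group.name)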

From this vantage point, a directory is a repository of nodes that serve as points of entry into a vast graph – some of them present in the same repository, others reachable only by following edges that point to resources in different repositories.  We already have forerunners of this in today's directories – for example, if the URL of my blog is contained in my directory entry it represents an edge leading to another object.  But with conventional technology, there is a veil over that distant part of the graph (my blog).  We can read it in a browser but not access the entities it contains as structured objects.  The graph paradigm invites us to take off the veil, making it possible to navigate nodes across many dimensions.

The real power of directory in this kind of interconnected world is its ability to serve as the launch pad for getting from one node to a myriad of others by virtue of different  relationships. 

This requires a Graph Protocol

To achieve this we need a simple, RESTful protocol that allows use of these launch pads to enter a multitude of different dimensions.

We already know we can build a graph with just HTTP REST operations.  After all, the web started as a graph of pages…  The pages contained URLs (edges) to other pages.  It is a pretty simple graph but that's what made it so powerful.

With JSON (or XML) the web can return objects.  And those objects can also contain URLs.  So with just JSON and HTTP you can have a graph of things.  The things can be of different kinds.  It's all very simple and very profound.
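
To make the point concrete, here is a minimal sketch of the whole traversal model, with invented URLs: GET an object, read a URL out of it, GET again.

import requests

def get_json(url):
    """One hop in the graph: fetch a node as a JSON object."""
    resp = requests.get(url)
    resp.raise_for_status()
    return resp.json()

# Hypothetical service: a person object that embeds an edge, e.g.
#   {"name": "Alice", "manager": {"uri": "https://graph.example.com/people/bob"}}
person = get_json("https://graph.example.com/people/alice")

# Following the edge is just another GET – no special protocol machinery.
manager = get_json(person["manager"]["uri"])
print(manager["name"])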

No technology ghetto

Here I'm going to put a stake in the ground.  When I was back at ZOOMIT we built the first commercial implementation of LDAP while Tim Howes was still at the University of Michigan.  It was a dramatic simplification relative to X.500 (a huge and complicated standard that ZOOMIT had also implemented) and we were all very excited at how much Tim had simplified things.  Yet in retrospect, I think the origins of LDAP in X.500 condemned directory people to life in a technology ghetto.  Much more dramatic simplifications were coming down the pike all around us in the form of HTML, latter-day SQL and XML.  For every 100 application programmers familiar with these technologies, there might have been – on a good day – one who knew something about LDAP.  I absolutely respect and am proud of all the GOOD that came from LDAP, but I am also convinced that our “technology isolation” was an important factor that kept (and keeps) directory from being used to its potential.

So one of the things that I personally want to see as we reimagine directory is that every application programmer will know how to program to it.  We know this is possible because of the popularity of the Facebook Graph API.  If you haven't seen it close up and you have enough patience to watch a stream of consciousness demo you will get the idea by watching this little walkthrough of the Facebook Graph Explorer.   Or better still just go here and try with your own account data.

You have to agree it is dead simple and yet does a lot of what is necessary to navigate the kind of graph we are talking about.  There are many other similar explorers available out there – including ours.  I chose Facebook's simply because it shows that this approach is already being used at colossal scale.  For this reason it reveals the power of the graph as an easily understood model that will work across pretty much any entity domain – i.e. a model that is not technologically isolated from programming in general.
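
For instance, reading your own node and one of its edges from the Facebook Graph API (as it worked at the time of writing) is just two GETs; you supply an access token obtained from the explorer:

import requests

TOKEN = "<access-token-from-the-graph-explorer>"

# The authenticated user's own node...
me = requests.get("https://graph.facebook.com/me",
                  params={"access_token": TOKEN}).json()

# ...and one of its edges: the "friend edges" to other person nodes.
friends = requests.get("https://graph.facebook.com/me/friends",
                       params={"access_token": TOKEN}).json()

print(me["name"], "-", len(friends["data"]), "friends on this page of results")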

A pluggable namespace with any kind of entity plugging in

In fact, the Graph API approach taken by Facebook follows a series of discussions by people now scattered across the industry where the key concept was one of creating a uniform pluggable namespace with “any” kind of entity plugging in (ideas came from many sources including the design of the Azure Service Bus).

Nishant and others have posed the question as to whether such a multidimensional protocol could do what SCIM does.  And my intuition is that if it really is multidimensional it should be able to provide the necessary functionality.  Yet I don't think that diminishes in any way the importance of or the need for SCIM as a specialized protocol.  Paradoxically it is the very importance of the multidimensional approach that explains this.

Let's have a thought experiment. 

Let's begin with the assumption that a multidimensional protocol is one of the great requirements of our time.  It then seems inevitable to me that we will continue to see the emergence of a number of different proposals for what it should be.  Human nature and the angels of competition dictate that different players in the cloud will align themselves with different proposals.  Ultimately we will see convergence – but that will take a while.   Question:  How are we to do cloud provisioning in the meantime?  Does everyone have to implement every multidimensional protocol proposal?  Fail!

So pragmatism calls for us to have a widely accepted and extremely focused way of doing provisioning that “makes it fast, cheap and easy to move users in to, out of and around the cloud”.

Meanwhile, allow developers to combine identity information with information about machines, services, web sites, databases, file systems, and line of business applications through multidimensional protocols and APIs like the Facebook and the Windows Azure Active Directory Graph APIs.  For those who are interested, you can begin exploring our Graph API here:  Windows Azure AD Graph Explorer (hosted in Windows Azure) (Select ‘Use Demo Company’ unless you have your own Azure directory and have gone through the steps to give the explorer permission to see it…)
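
As a flavor of what a query looks like, here is a sketch of reading a user from the Windows Azure AD Graph API.  The URL pattern and the “d” wrapper follow John Shewchuk's published Contoso example (quoted further down this page); acquiring a bearer token for a real tenant is elided.

import requests

# Documented URL pattern for a directory object (Contoso demo tenant).
url = "https://directory.windows.net/contoso.com/Users('Ed@Contoso.com')"

resp = requests.get(url, headers={
    "Authorization": "Bearer <token>",   # token for the tenant, elided
    "Accept": "application/json",
})
user = resp.json()["d"]                  # responses arrive wrapped in a "d" object
print(user["DisplayName"], "-", user["JobTitle"])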

To me, the goals of SCIM and the goals of the Graph API are entirely complementary and the protocols should coexist peacefully.  We can even try to find synergy and ways to make things like schema elements align so as to make it as easy as possible to move between one and the other. 

Diagram 2.0: No hub. No center.

As I wrote here, Mary Jo Foley's interpretation of one of the diagrams in John Shewchuk's second WAAD post made it clear we needed to get a lot visually crisper about what we were trying to show.  So I promised that we'd go back to the drawing board.  John put our next version out on Twitter, got more feedback (see comments below) and ended up with what Mary Jo christened “Diagram 2.0”.  Seriously, getting feedback from so many people who bring such different experiences to bear on something like this is amazing.  I know the result is infinitely clearer than what we started with.

In the last frame of the diagram, any of the directories represented by the blue symbol could be an on-premise AD, a Windows Azure AD, something hybrid, an OpenLDAP directory, an Oracle directory or anything else.  Our view is that having your directory operated in the cloud simplifies a lot.  And we want WAAD to be the best possible cloud directory service, operating directories that are completely under the control of their data owners:  enterprises, organizations, government departments and startups.

Further comments welcome.

Good news and bad news from Delaware Lawmakers

Reading the following SFGate story was a real rollercoaster ride: 

DOVER, Del. (AP) — State lawmakers have given final approval to a bill prohibiting universities and colleges in Delaware from requiring that students or applicants for enrollment provide their social networking login information.

The bill, which unanimously passed the Senate shortly after midnight Saturday, also prohibits schools and universities from requesting that a student or applicant log onto a social networking site so that school officials can access the site profile or account.

The bill includes exemptions for investigations by police agencies or a school's public safety department if criminal activity is suspected.

Lawmakers approved the bill after deleting an amendment that expanded the scope of its privacy protections to elementary and secondary school students.

First of all there was the realization that if lawmakers had to draft this law it meant universities and colleges were already strong-arming students into giving up their social networking credentials.  This descent into hell knocked my breath away. 

But I groped my way back from the burning sulfur since the new bill seemed to show a modicum of common sense. 

Until finally we learn that younger children won't be afforded the same protections…   Can teachers and principals actually bully youngsters to log in to Facebook and access their accounts?  Can they make kids hand over their passwords?  What are we teaching our young people about their identity?

Why oh why oh why oh? 

 

There is no hub. There is no center.

Mary Jo Foley knows her stuff, knows identity and knows Microsoft.  She just published a piece called “With Azure Active Directory, Microsoft wants to be the meta ID hub”.  The fact that she picked up on John Shewchuk's piece despite all the glamorous announcements made in the same timeframe testifies to the fact that she understands a lot about the cloud.  On the other hand, I hope she won't mind if I push back on part of her thesis.  But before I do that, let's hear it:

Summary: A soon-to-be-delivered preview of a Windows Azure Active Directory update will include integration with Google and Facebook identity providers.

Microsoft isn’t just reimagining Windows and reimagining tablets. It’s also reimagining Active Directory in the form of the recently (officially) unveiled Windows Azure Active Directory (WAAD).

In a June 19 blog post that largely got lost among the Microsoft Surface shuffle last week, Microsoft Technical Fellow John Shewchuk delivered the promised Part 2 of Microsoft’s overall vision for WAAD.

WAAD is the cloud complement to Microsoft’s Active Directory directory service. Here’s more about Microsoft’s thinking about WAAD, based on the first of Shewchuk’s posts. It already is being used by Office 365, Windows Intune and Windows Azure. Microsoft’s goal is to convince non-Microsoft businesses and product teams to use WAAD, too.

This is how the identity-management world looks today, in the WAAD team’s view:

And this is the ideal and brave new world they want to see, going forward.


WAAD is the center of the universe in this scenario (something with which some of Microsoft’s competitors unsurprisingly have a problem).

[Read more of the article here]

The diagrams Mary Jo uses are from John's post.  And the second clearly shows the “Active Directory Service”  triangle in the center of the picture so one can understand why Mary Jo (and others) could think we are talking about Active Directory being at the center of the universe. 

Yet in describing what we are building, John writes,

“Having a shared directory that enables this integration provides many benefits to developers, administrators, and users.”

“Shared” is not the same as “Central”.  For the Windows Azure AD team the “shared directory” is not “THE hub” or “THE center”.  There is no one center any more in our multi-centered world.  We are not building a monolithic, world-wide directory.  We are instead consciously operating a directory service that contains hundreds of thousands of directories that are actually owned by individual enterprises, startups and government organizations.  These directories are each under the control of their data owner, and are completely independent until their data owner decides to share something with someone else.

The difference may sound subtle, but I don't think it is.  When I think of a hub I think of a standalone entity mediating between a set of claims providers and a set of relying parties.  

But with Azure Active Directory the goal is quite different:  to offer a holistic “Identity Management as a Service” for organizations, whether startups, established enterprises or government organizations – in other words to “operate” on behalf of these organizations.  

One of the things such a service can do is to take care of connecting an organization to all the consumer and corporate claims providers that may be of use to it.  We've actually built that capability, and we'll operate it on a 24/7 basis as something that scales and is robust.  But IdMaaS involves a LOT of other different capabilities as well.  Some organizations will want to use it for authentication, for authorization, for registration, credential management and so on.  The big IdMaaS picture is one of serving the organizations that employ it – quite different from being an independent hub and following a “hub” business model. 

In this era of the cloud, there are many cloud operators.  Martin Kuppinger has pointed out that “the cloud” is too often vendor-speak for “this vendor's cloud”.  In reality there are “clouds” that will each host services that are premium grade and that other services constructed in different clouds will want to consume.  So we will all need the ability to reach across clouds with complete agility, security and privacy and within a single governance framework.  That's what Identity Management as a Service needs to facilitate, and the Active Directory Service triangle in the diagram above is precisely such a service.  There will be others operated by competitors handling the identity needs of other organizations.  Each of us will need to connect enterprises we serve with those served by our competitors.

This said, I really accept the point that to express this in a diagram we could (and should)  draw it very differently.  So that's something John and I are going to work on over the next few days.  Then we'll get back to you with a diagram that better expresses our intentions.

 

Disruptive Forces: The Economy and the Cloud

New generations of digital infrastructure get deployed quickly even when they are incompatible with what already exists.  But old infrastructure is incredibly slow to disappear.   The complicated business and legal mechanisms embodied in computer systems are risky and expensive to replace.  But existing systems can't function without the infrastructure that was in place when they were built…  Thus new generations of infrastructure can be easily added, but old and even antique infrastructures survive alongside them to power the applications that have not yet been updated to employ new technologies.

This persistence of infrastructure can be seen as a force likely to slow changes in Identity Management, since it is a key component of digital infrastructure.

Yet global economic and technological trends lead in the opposite direction. The current reality is one of economic contraction where enterprises and governments are under increasing pressure to produce more with less. Analysts and corporate planners don’t see this contraction as being transient or likely to rebound quickly. They see it as a long-term trend in which organizations become leaner, better focused and more fit-to-purpose – competing in an economy where only fit-to-purpose entities survive.

At the same time that these economic imperatives are shaking the enterprise and governments, the introduction of cloud computing enables many of the very efficiencies that are called for.

Cloud computing combines a number of innovations. Some represent new ways of delivering and operating computing and communications power.  But the innovations go far beyond higher density of silicon or new efficiencies in cooling technologies…  The cloud is ushering in a whole new division of labor within information technology.

Accelerating the specialization of functions

The transformational power of the cloud stems above all else from its ability to accelerate the specialization of functions so they are provided by those with the greatest expertise and lowest costs.

I was making this “theoretical” point while addressing the TSCP conference recently, which brings together people from extremely distributed industries such as aeronautics and defense.  Looking out into the audience I was suddenly struck by something that should have been totally obvious to me.  All the industries represented in that room, except for information technology, had an extensive division of labor across a huge number of parties.  Companies like Boeing or Airbus don't manufacture the spokes on the wheels of their planes, so to speak.  They develop specifications and cost-effectively assemble completed products from components manufactured and refined by a whole ecosystem.  They have massively distributed supply chains.  Yet our model in information technology has remained rather pre-industrial and there are innumerable examples of companies expending their own resources doing things they aren't expert at, rather than employing a supply chain.  And part of the reason is the lack of an infrastructure that supports this diversification.  That infrastructure is just arriving now – in the form of the cloud.

Redistributing processes to be most efficiently performed

So, technologically, the cloud is an infrastructure honed for multi-sourcing – refactoring processes and redistributing them to be most efficiently performed.

The need to become leaner and more fit-to-purpose will drive continuous change.  Organizations will attempt to take advantage of the emerging cloud ecology to substitute off-the-shelf commoditized systems offered as specialized services. When this is not possible they will construct their newly emerging systems in the cloud using other specialized ecosystem services as building blocks.

Given the fact that the best building blocks for given purposes may well be hosted on different clouds, developers will expect to be able to reach across clouds to integrate with the services of their choice. Cloud platforms that don’t offer this capability will die from synergy deficiency.

Technological innovation will need to take place before services will be able to work securely in this kind of loosely coupled world – constituting a high-value version of what has been called the “API Economy”. The precept of the API economy is to expose all functionality as simple and easily understood services (e.g. based on REST) – and allow them to be consumed at a high level of granularity on a pay-as-you-go basis.

In the organizational world, most of the data that will flow through these APIs will be private data. For enterprises and governments to participate in the API Economy they will require a system of access control in which many different applications run by different administrations in different clouds are able to reuse knowledge of identity and security policy to adequately protect the data they handle.  They will also need shared governance.

Specifically, it must be possible to reliably identify, authenticate, authorize and audit across a graph of services before reuse of specialized services becomes practicable and economical and the motor of cloud economics begins to hum.

 

Making Good on the Promise of IdMaaS

The second part of John Shewchuk's blog on Windows Azure Active Directory has been published here.  John goes into more detail about a number of things, focusing on the way it allows customers to hook their Cloud AD into the API Economy in a controlled and secure way.  

Rather than describe John's blog myself I'm going to parrot the blog post that analyst Craig Burton put up just a few hours ago.  I find it really encouraging to see his excitement:  it's the way I feel too, since I also think this is going to open up so many opportunities for innovation, make developing services simpler and make the services themselves more secure and respectful of privacy.  Here's Craig's post:

As a follow up to Microsoft’s announcement of IdMaaS, the company announced the — soon to be delivered — developer preview for Windows Azure Active Directory (WAzAD). As John Shewchuk puts it:

The developer preview, which will be available soon, builds on capabilities that Windows Azure Active Directory is already providing to customers. These include support for integration with consumer-oriented Internet identity providers such as Google and Facebook, and the ability to support Active Directory in deployments that span the cloud and enterprise through synchronization technology.

Together, the existing and new capabilities mean a developer can easily create applications that offer an experience that is connected with other directory-integrated applications. Users get SSO across third-party and Microsoft applications, and information such as organizational contacts, groups, and roles is shared across the applications. From an administrative perspective, Windows Azure Active Directory provides a foundation to manage the life cycle of identities and policy across applications.

In the Windows Azure Active Directory developer preview, we added a new way for applications to easily connect to the directory through the use of REST/HTTP interfaces.

An authorized application can operate on information in Windows Azure Active Directory through a URL such as:

https://directory.windows.net/contoso.com/Users('Ed@Contoso.com')

Such a URL provides direct access to objects in the directory. For example, an HTTP GET to this URL will provide the following JSON response (abbreviated for readability):

{ "d": {
"Manager": { "uri": "https://directory.windows.net/contoso.com/Users('User…')/Manager" },
"MemberOf": { "uri": "https://directory.windows.net/contoso.com/Users('User…')/MemberOf" },
"ObjectId": "90ef7131-9d01-4177-b5c6-fa2eb873ef19",
"ObjectReference": "User_90ef7131-9d01-4177-b5c6-fa2eb873ef19",
"ObjectType": "User",
"AccountEnabled": true,
"DisplayName": "Ed Blanton",
"GivenName": "Ed",
"Surname": "Blanton",
"UserPrincipalName": "Ed@contoso.com",
"Mail": "Ed@contoso.com",
"JobTitle": "Vice President",
"Department": "Operations",
"TelephoneNumber": "4258828080",
"Mobile": "2069417891",
"StreetAddress": "One Main Street",
"PhysicalDeliveryOfficeName": "Building 2",
"City": "Redmond",
"State": "WA",
"Country": "US",
"PostalCode": "98007" }
}

Having a shared directory that enables this integration provides many benefits to developers, administrators, and users. If an application integrates with a shared directory just once—for one corporate customer, for example—in most respects no additional work needs to be done to have that integration apply to other organizations that use Windows Azure Active Directory. For an independent software vendor (ISV), this is a big change from the situation where each time a new customer acquires an application a custom integration needs to be done with the customer’s directory. With the addition of Facebook, Google, and the Microsoft account services, that one integration potentially brings a billion or more identities into the mix. The increase in the scope of applicability is profound. (Highlighting is mine – Craig).

Now that’s What I’m Talking About

There is still a lot to consider in what an IdMaaS system should actually do, but my position is that just the little bit of code reference shown here is a huge leap for usability and simplicity for all of us. I am very encouraged. This would be a major indicator that Microsoft is on the right leadership track: not only providing a specification for an industry design for IdMaaS, but also well on its way to delivering a product that will show us all how this is supposed to work.

Bravo!

The article goes on to make commitments on support for OAuth, OpenID Connect, and SAML/P. No mention of JSON Path support but I will get back to you about that. My guess is that if Microsoft is supporting JSON, JSON Path is also going to be supported. Otherwise it just wouldn’t make sense.

JSON and JSON Path

The API Economy is being fueled by the huge trend of making organizations’ core competences accessible through APIs. Almost all of the API development occurring in this trend is based on a RESTful API design with data being encoded in JSON (JavaScript Object Notation). While JSON is not a new specification by any means, it is only in the last 5 years that JSON has emerged as the preferred data format — in lieu of XML. We see this trend only becoming stronger.

[Craig presents a table comparing XPath to XML – look at it here.]

Summary

As an industry, we are completely underwater in getting our arms around a workable — distributed and multi-centered identity management metasystem — that can even come close to addressing the issues that are already upon us. This includes the Consumerization of IT and its subsequent Identity explosion. Let alone the rise of the API Economy. No other vendor has come close to articulating a vision that can get us out of the predicament we are already in. There is no turning back.

Because of the lack of leadership (the crew that killed off Information Cards) in the past at Microsoft about its future in Identity Management, I had completely written Microsoft off as being relevant. I would never have expected Microsoft to gain its footing, do an about face, and head in the right direction. Clearly the new leadership has a vision that is ambitious and in alignment with what is needed. Shifting with this much spot-on thinking in the time frame we are talking about (a little over 18 months) is tantamount to turning an aircraft carrier 180 degrees in a swimming pool.

I am stunned, pleased and can’t wait to see what happens next.
 

I think it goes without saying that “turning an aircraft carrier 180 degrees in a swimming pool” is a fractal mixed metaphor of colossal and recursive proportions that boggles the mind – yet there is more than a little truth to it.  In fact that's really one of the things the cloud demands of us all.

Craig's question about JSON Path is a good one.  The answer is that JSON Path is essentially a way of navigating and extracting information from a JSON document.  WAzAD's Graph API returns JSON documents and if they are complex documents we expect programmers will use JSON Path – which they already know – to extract specific information.  It will be part of their local programming environment on whatever device or platform they are issuing a query from.
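
For example, using one of the existing JSON Path implementations for Python (the third-party jsonpath-ng package here – any equivalent will do), a programmer can pull every edge out of a Graph API response in one expression:

from jsonpath_ng import parse  # pip install jsonpath-ng

# A fragment shaped like the Graph API response quoted above.
doc = {"d": {
    "DisplayName": "Ed Blanton",
    "Manager":  {"uri": "https://directory.windows.net/contoso.com/Users('User…')/Manager"},
    "MemberOf": {"uri": "https://directory.windows.net/contoso.com/Users('User…')/MemberOf"},
}}

# One JSON Path expression gathers every "uri" edge, wherever it occurs.
edges = [match.value for match in parse("$..uri").find(doc)]
print(edges)   # the client decides which of these links to chase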

On the other hand, one can imagine supporting JSON Path queries in the RESTful interface itself.  Suppose you have a JSON document with many links to other JSON documents.  Do you then support “chaining” on the server so it follows the links for you and returns the distributed JSON Path result?  The problem with this approach is that a programming model we want to be ultra-simple and transparent for the programmer turns into something opaque that can have many side effects, become unpredictable and exhibit performance issues.  As far as I know, the social network APIs that are most sophisticated in their use of links don't support this.  They just get the programmer to chase the links that are of interest.

So for these reasons server support is something we have talked about but don't yet have a position on.  This is exactly the kind of thing we'd like to explore by collaborating with developers and getting their input.  I'd also like to hear what other people have experienced in this regard.

 

Viviane Reding's Speech to the Digital Enlightenment Forum

It was a remarkable day at the annual conference of the Digital Enlightenment Forum  in Luxembourg.  The Forum is an organization that has been set up over the last year to animate a dialog about how we evolve a technology that embodies our human values.  It describes its vision this way: 

The DIGITAL ENLIGHTENMENT FORUM aims to shed light on today’s rapid technological changes and their impact on society and its governance. The FORUM stimulates debate and provides guidance. By doing so, it takes reference from the Enlightenment period as well as from transformations and evolutions that have taken place since. It examines digital technologies and their application openly with essential societal values in mind. Such values might need to be given novel forms taking advantage of both today’s knowledge and unprecedented access to information.

For the FORUM, Europe’s Age of Enlightenment in the 18th century serves as a metaphor for our current times. The Enlightenment took hold after a scientific and technological revolution that included the invention of book printing, which generated a novel information and communication infrastructure. The elite cultural Enlightenment movement sought to mobilise the power of reason, in order to reform society and advance knowledge. It promoted science and intellectual interchange and opposed superstition, intolerance and abuses by the church and state. (more)

The conference was intended to address four main themes:

  • What can be an effective organisation of governance of ICT infrastructure, including clouds? What is the role of private companies in relation to the political governance in the control and management of infrastructure? How will citizens be empowered in the handling of their personal data and hence in the management of their public and private lives?
  • How do we see the relation between technology and jurisdiction? Can we envisage a techno-legal ecosystem that ensures compliance with law (’coded law’), and how can sufficient political control be ensured in a democratic society?
  • What are the consequences for privacy, freedom and creativity of the massive data collection on behaviour, location, etc. by private and public organisations and their use through mining and inferencing for profiling and targeted advertising?
  • What needs to be done to ensure open discussion and proper political decision-making to find an appropriate balance between convenience of technology use and social acceptability?
The day was packed with discussions that went beyond the usual easy over-simplifications.  I won't try to describe it here but will post the link to the webcast when it becomes available.

One of the highlights was a speech by Mme Viviane Reding, the Vice President of the European Commission (who also serves as commissioner responsible for Justice, Fundamental Rights and Citizenship) about her new proposed Data Protection legislation.   Speaking later to the press she emphasized that the principle of private data belonging to the individual has applied in the European Union since 1995, and that her new proposals are simply a continuation along three lines.  First, she wants users to understand their rights and get them enforced; second, she is trying to provide clarity for companies and reduce uncertainty about how the data protection laws will be applied; and third, she wants to make everyone understand that there will be sanctions.  She said,

“If you don't have sanctions, who cares about the rules?  Who cares about the law?”

And the sanctions are major:  2% of the world-wide turnover of the company.  Further, they apply to all companies, anywhere in the world, that collect information from Europeans.

I very much recommend that everyone involved with identity and data protection read her speech, “Outdoing Huxley: Forging a high level of data protection for Europe in the brave new digital world”.

In my view, the sanctions Mme Reding proposes will, from the point of view of computer science, be meted out as corrections for breaking the Laws of Identity.  John Fontana asked me about this very dynamic in an article he did recently on the relevance of the Laws of Identity seven years after they were written (2005).

ZDNet: The Laws of Identity predicted that government intervention in identity and privacy would increase, why is that happening now?

Cameron: There are many entities that routinely break various of these identity laws; they use universal identifiers, they collect information and use it for different purposes than were intended, they give it to parties that don’t have rights to it, they do it without user control and consent. You can say that makes the Laws irrelevant. But what I predicted is that if you break those Laws there will be counter forces to correct for that. And I believe when we look at recent developments – government and policy initiatives that go in the direction of regulation – that is what is happening. Those developments are providing the counter force necessary to bring behavior in accordance with the laws. The amount of regulation will depend on how quickly entities (Google, Facebook, etc.) respond to the pressure.

ZDNet: Do we need regulation?

Cameron: It’s not that I am calling for regulation. I am saying it is something people bring upon themselves really. And they bring it on themselves when they break the Laws of Identity.

     

Identity Management before the Cloud (Part 2)

The First Generation Identity Ecosystem Model

The biggest problem of the “domain-based model of identity management” was that it assumed each domain was an independent entity whose administrators had complete control over the things that were within it – be they machines, applications or people.

During the computational Iron Age – the earliest days of computing – this assumption worked.

But even before the emergence of the Internet we began to see domains colliding within closed organizational boundaries – as discussed here.  The idea of organizations having an “administrative authority” revealed itself to be far more complicated than anyone initially thought, since enterprises were evolving into multi-centered things with autonomous business units experiencing bottom-up innovation. The old-fashioned bureaucratic models, probably always somewhat fictional, slowly crumbled.

Many of us who worked on IT architecture were therefore already looking for ways to transcend the domain model even before the Internet began to flood the enterprise and wear away its firewalls. Yet the Internet profoundly shook up our thinking. On the one hand, organizations began to understand that it was now possible – and in fact mandatory – to interact with people as individuals and citizens and consumers. And on the other, any organization that rolled up its sleeves and got to work on this soon saw that it needed a model where it could “plug in” to systems run by partners and suppliers in seamless and flexible ways.

With increasing experience, enterprise and Internet architects concluded that standardization of identity architecture and components was the only way to achieve the flexibility essential for business agility, whether inside or outside the firewall. It simply wasn’t viable to recode or “change out” systems every time organizations were realigned or restructured.

Technologists introduced new protocols like SAML that implemented a clear separation of standardized identity provider (IdP) and relying party (RP) roles so components would no longer be hard-wired together. In this model, when users want a service the service provider sends them to an IdP which authenticates them and then returns identifying information to the service provider (an RP within the model).  All the CRUD is performed by the IdP, which issues credentials that can be understood and trusted by RPs.  It is a formal division of labor – even in scenarios where the same “Administrative Domain” runs both the IdP and the RP.
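
The division of labor can be sketched in a few lines.  This toy is not SAML – real deployments use signed XML assertions – but it shows the essential shape: the IdP authenticates and issues signed claims, and the RP verifies the issuer's signature instead of managing the user itself.

import hashlib, hmac, json

# Key agreed out-of-band when the federation between IdP and RP was set up.
SHARED_KEY = b"established-out-of-band"

def idp_issue_assertion(username):
    """The IdP authenticates the user (elided) and issues signed claims."""
    claims = json.dumps({"subject": username, "role": "employee"})
    sig = hmac.new(SHARED_KEY, claims.encode(), hashlib.sha256).hexdigest()
    return claims, sig

def rp_accept_assertion(claims, sig):
    """The RP trusts the IdP's signature; it never sees a password."""
    expected = hmac.new(SHARED_KEY, claims.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("assertion was not issued by a trusted IdP")
    return json.loads(claims)

claims, sig = idp_issue_assertion("alice")
print(rp_accept_assertion(claims, sig))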

The increasing need for inter-corporate communications, data-sharing and transactions led these credentials to become increasingly claims-based, which is to say the hard dependencies on internal identifiers and proprietary sauce that only made sense inside one party’s firewall gave way to statements that could be understood by unrelated systems. This provided the possibility of making assertions about users that could be understood in spite of crossing enterprise boundaries. It also allowed strategists to contemplate outsourcing identity roles that are not core to a company’s business (for example, the maintenance of login and password systems for retirees or consumers).

Many of the largest companies have successfully set up relations with their most important partners based on this model. Others have wisely used it to restructure their internal systems to increase their flexibility in the future. The model has represented a HUGE step forward and a number of excellent interoperable products from a variety of technology companies are being deployed.

Yet in practice, most organizations have found federation hard to do. New technology and ways of doing things had to be mastered, and there was uncertainty about liability issues and legal implications.  These difficulties grow geometrically for organizations that want to establish relationships with a large number of other organizations.  Establishing configuration and achieving secure connectivity is hard enough, but keeping the resultant matrix of connections reliable in an operational sense can be daunting and is therefore seen as a real source of risk.

When it came to using the model for internet-facing consumer registration, service providers observed that individual consumers use many different services and have accounts (or don’t have accounts) with many different web entities. Most concluded that it would be a gamble to switch from registering and managing “their own users” to figuring out how to successfully reuse peoples’ diverse existing identities. Would they confuse their users and lose their customers? Could identity providers be trusted as reliable? Was there a danger of losing their customer base? Few wanted to find out…

As a result, while standardized architecture makes identity management systems much more pluggable and flexible, the emergence of an ecosystem of parties dedicated to specialized roles has been slow. The one notable entity that has gained some momentum is Facebook, although it has not so much replaced internet-facing registration systems as supplemented them with additional information (claims).

[Next in this series: Disruptive Forces: The Economy and the Cloud]