Enterprise lockdown versus consumer applications

My friend Cameron Westland, who has worked on some cool applications for the iPhone, wrote me to complain that I linked to iPhone Privacy:

I understand the implications of what you are trying to say, but how is this any different from Mac OS X applications accessing the address book or Windows applications accessing contacts? (I'm not sure about Windows, but I know it's possible on a Mac).

Also, the article touches on storing patient information on an iPhone. I believe Seriot is guilty of a major oversight in simply correlating the fact that SpyPhone has access to contacts with it also being able to do so in a secured enterprise.

If the iPhone is deployed in the enterprise, the corporate administrators can control exactly which applications get installed. In the situations where patient information is stored on the phone, they should be using their own security review process to verify that all applications installed meet the HIPAA certification requirements. Apple makes no claim that applications meet the stringent needs of certain industries – that's why they give control to administrators to encrypt phones, restrict specific application installs, and do remote wipes.

Also, Seriot did no research on the behavior of a phone connected to a company's Active Directory, versus just plain old address book… This is cargo cult science at best, and I'm really surprised you linked to it!

I buy Cameron's point that the controls available to enterprises mitigate a number of the attacks presented by Seriot – and agree this is  important.  How do these controls work?  Corporate administrators can set policies specifying the digital signatures of applications that can be installed.  They can use their own processes to decide what applications these will be. 
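To make the mechanism concrete, here is a minimal sketch of the idea (the names and fingerprints are invented, and this is not Apple's actual provisioning or mobile device management API): administrators approve applications by the certificate that signed them, and anything not on the list is blocked.

```python
# Illustrative sketch only: a hypothetical allowlist check, not Apple's actual
# enterprise provisioning API.  Administrators approve applications by the
# fingerprint of the certificate that signed them.

APPROVED_SIGNERS = {
    # Fingerprints of signing certificates the enterprise trusts (made up here)
    "aa:bb:cc:dd:example-fingerprint",
}

def may_install(app_name: str, signer_fingerprint: str) -> bool:
    """Return True only if the app was signed by an approved certificate."""
    allowed = signer_fingerprint in APPROVED_SIGNERS
    print(f"{app_name}: {'allowed' if allowed else 'blocked'} by policy")
    return allowed

if __name__ == "__main__":
    may_install("PatientChart", "aa:bb:cc:dd:example-fingerprint")  # allowed
    may_install("SomeGame", "11:22:33:44:unknown-signer")           # blocked
```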

None of this depends on App Store verification, sandboxing, or Apple's control of platform content.  In fact it is no different from the universally available ability to use a combination of enterprise policy and digital signature to protect enterprise desktop and server systems.  Other features, like the ability for an operator to wipe information, are also pretty much universal.

If the iPhone can be locked down in enterprises, why is Seriot's paper still worth reading?  Because many companies and even governments are interested in developing customer applications that run on phones.  They can't dictate to customers what applications to install, and so lock-down solutions are of little interest.  They turn to Apple's own claims about security, and find statements like this one, taken from the otherwise quite interesting iPhone security overview.

Runtime Protection

Applications on the device are “sandboxed” so they cannot access data stored by other applications. In addition, system files, resources, and the kernel are shielded from the user’s application space. If an application needs to access data from another application, it can only do so using the APIs and services provided by iPhone OS. Code generation is also prevented.

Seriot shows that taking this claim at face value would be risky.  As he says in an eWeek interview:

“In late 2009, I was involved in discussions with the Swiss private banking industry regarding the confidentiality of iPhone personal data,” Seriot told eWEEK. “Bankers wanted to know how safe their information [stores] were, which ones are exactly at risk and which ones are not. In brief, I showed that an application downloaded from the App Store to a standard iPhone could technically harvest a significant quantity of personal data … [including] the full name, the e-mail addresses, the phone number, the keyboard cache entries, the Wi-Fi connection logs and the most recent GPS location.” 

It is worth noting that Seriot's demonstration is very easy to replicate, and doesn't depend on silly assumptions like convincing the user to disable their security settings and ignore all warnings.

The points made about banking applications apply even more to medical applications.  Doctors are effectively customers from the point of view of the information management services they use.  Those services won't be able to dictate the applications their customers deploy.  I know for sure that my doctor, bless his soul,  doesn't have an IT department that sets policies limiting his ability to play games or buy stocks.  If he starts using his phone for patient-related activities, he should be aware of the potential issues, and that's what MedPage was talking about.

Neither MedPage, nor CNET, nor eWeek nor Seriot nor I are trying to trash the iPhone – it's just that application isolation is one of the hardest problems of computer science.  We are pointing out that the iPhone is a computing device like all the others and subject to the same laws of digital physics, despite dangerous mythology to the contrary.  On this point I don't think Cameron Westland and I disagree.

 

Identity Software + Services Roadmap


I continue to receive many questions about how enterprise and government environments and systems can interact with new generations of services that are being hosted in the cloud, especially from an identity management point of view.

It is a fascinating question and getting it right is key.  I think about it a lot these days – as I'm sure everyone in the industry does.

One conclusion:  these new questions are the side-effects of trends we've been witnessing for a long time now – in particular, the decline and fall of the “closed domain”. 

Metadirectory, in the last half of the 1990’s, was the first step towards understanding that even with standards and widespread technological agreement, there would be no single “center” to the world of information.  There were multiple boundaries required by business and government, but by their very nature those boundaries always had to be crossed…  This was a profound contradiction but also a motor for innovation.  We needed kinder, gentler systems predicated on the idea they would have to interact with other systems run by independent people and organizations.

The concept of identity federation arose to facilitate this.  Over time agreement grew that federation was actually something you were able to do once you re-thought the world from a multi-centered point of view – one which allowed multiple viewpoints and criteria for action (call it truth).  This became generalized into “claims-based” system design – an approach in which assertions always have a source and must be evaluated prior to acting on them (i.e. we can accept assertions from multiple sources because our systems include mechanisms for deciding what they mean).
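To make “assertions always have a source” concrete, here is a tiny sketch (all names are invented; this is no vendor's actual API): a relying party keeps its own policy about which issuers it believes for which kinds of claims, and evaluates every assertion against that policy before acting on it.

```python
# Minimal illustration (names invented): a claim is never taken at face value.
# It carries an issuer, and a local trust policy decides which issuers are
# believed for which kinds of statements.

from dataclasses import dataclass

@dataclass
class Claim:
    issuer: str   # who asserted it
    subject: str  # who it is about
    kind: str     # e.g. "role", "employee-of", "age-over-18"
    value: str

# Each relying party keeps its own policy: issuer -> claim kinds it accepts.
TRUST_POLICY = {
    "contoso-sts": {"employee-of", "role"},
    "gov-id-provider": {"age-over-18"},
}

def evaluate(claim: Claim) -> bool:
    """Accept a claim only if its issuer is trusted for that kind of claim."""
    return claim.kind in TRUST_POLICY.get(claim.issuer, set())

claims = [
    Claim("contoso-sts", "alice", "role", "purchaser"),
    Claim("random-blog", "alice", "role", "admin"),
]
print([c for c in claims if evaluate(c)])  # only the claim from the trusted issuer survives
```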

The notion of consuming and combining services, some of which we host ourselves, and others which are hosted for us by third parties, fits perfectly into this multi-centered view.  And in a world of claims-based system design, the combination of cloud and enterprise computing is a completely natural “atomic” capability.  So all the work the industry has been doing to advance claims-based computing lays the foundation for these new computing paradigms and makes them dramatically more practicable.

My presentation to the Microsoft Professional Developers Conference was a concrete look at how claims-based system design affects developers, and the synergies they will obtain by adopting the model.  It argued, in essence, that there is ONE relevant architecture for identity (NOT to be confused with “one single monolithic identity”, which is an anathema!)  That ONE architecture works in the enterprise, in the cloud and in the home, and works on many loosely-coupled systems designed by many vendors to do many things – in the enterprise and in the cloud.

The presentation also discusses a number of the components we are beginning to make available as software products and services across Microsoft.  It underlines that these components implement widely adopted standards and their very goal is interoperable systems that are synergetic for customers.

The PDF is here, and the Word 2007 version is here.

 

Project Geneva – Part 5

[This is the fifth – and thankfully the final – installment of a presentation I gave to Microsoft developers at the Professional Developers Conference (PDC 2008) in Los Angeles. It starts here.]

I've made a number of announcements today that I think will have broad industry-wide support not only because they are cool, but because they indelibly mark Microsoft's practical and profound commitment to an interoperable identity metasystem that reaches across devices, platforms, vendors, applications, and administrative boundaries.

I'm very happy, in this context, to announce that from now on, all Live IDs will also work as OpenIDs.

That means the users of 400 million Live ID accounts will be able to log in to a large number of sites across the internet without a further proliferation of passwords – an important step forward for bringing reduced password fatigue to the long tail of small sites engaged in social networking, blogging and consumer services.

As the beta progresses, CardSpace will be integrated into the same offering (there is already a separate CardSpace beta for Live ID).

Again, we are stressing choice of protocol and framework.

Beyond this support for a super lightweight open standard, we have a framework specifically tailored for those who want a very lightweight way to integrate tightly with a wider range of Live capabilities.

The Live Framework gives you access to an efficient, lightweight protocol that we use to optimize exchanges within the Live cloud.

It too integrates with our Gateway. Developers can download sample code (available in 7 languages), insert it directly into their application, and get access to all the identities that use the gateway including Live IDs and federated business users connecting via Geneva, the Microsoft Services Connector, and third party Apps.

 

Flexible and Granular Trust Policy

 Decisions about access control and personalization need to be made by the people responsible for resources and information – including personal information. That includes deciding who to trust – and for what.

At Microsoft, our Live Services all use and trust the Microsoft Federation Gateway, and this is helpful in terms of establishing common management, quality control, and a security bar that all services must meet.

But the claims-based model also fully supports the flexible and granular trust policies needed in very specialized contexts. We already see some examples of this within our own backbone.

For example, we’ve been careful to make sure you can use Azure to build a cloud application – and yet get claims directly from a third party STS using a different third party’s identity framework, or directly from OpenID providers. Developers who take this approach never come into contact with our backbone.

Our Azure Access Control Service provides another interesting example. It is, in fact, a security component that can be used to provide claims about authorization decisions. Someone who wants to use the service might want their application, or its STS, to consume ACS directly, and not get involved with the rest of our backbone. We understand that. Trust starts with the application and we respect that.

Still another interesting case is HealthVault. HealthVault decided from day one to accept OpenIDs from a set of OpenID providers who operate the kind of robust claims provider needed by a service handling sensitive information. Their requirement has given us concrete experience, and let us learn about what it means in practice to accept claims via OpenID. We think of it as a pilot, really, from which we can decide how to evolve the rest of our backbone.

So in general we see our Identity Backbone and our federation gateway as a great simplifying and synergizing factor for our Cloud services. But we always put the needs of trustworthy computing first and foremost, and are able to be flexible because we have a single identity model that is immune to deployment details.


Identity Software + Services

To transition to the services world, the identity platform must consist of both software components and services components.

We believe Microsoft is well positioned to help developers in this critical area.

Above all, to benefit from the claims-based model, none of these components is mandatory. You select what is appropriate.

We think the needs of the application drive everything. The application specifies the claims required, and the identity metasystem needs to be flexible enough to supply them.
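Here is a toy sketch of what that looks like from the application's side (names invented, not a real API): the application declares the claims it requires, and it is up to the surrounding metasystem – whatever providers the deployment trusts – to supply what is missing.

```python
# Sketch only (invented names): the application states which claims it needs;
# the identity metasystem must be able to supply them, wherever they come from.

REQUIRED_CLAIMS = {"name", "email", "purchasing-limit"}

def missing_claims(presented: dict) -> set:
    """Claims the application still needs before it can authorize the request."""
    return REQUIRED_CLAIMS - presented.keys()

token = {"name": "Joe", "email": "joe@example.com"}
print(missing_claims(token))  # {'purchasing-limit'} -- ask a provider that can issue it
```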

Roadmap

Our roadmap looks like this:

Identity @ PDC

You can learn more about every component I mentioned today by drilling into the 7 other presentations given at PDC (watch the videos…):

Software
(BB42) Identity:  “Geneva” Server and Framework Overview
(BB43) Identity: “Geneva” Deep Dive
(BB44) Identity: Windows CardSpace “Geneva” Under the Hood
Services
(BB22) Identity: Live Identity Services Drilldown
(BB29) Identity: Connecting Active Directory to Microsoft Services
(BB28) .NET Services: Access Control Service Drilldown
(BB55) .NET Services: Access Control In the Cloud Services
 

Conclusion

I once went to a hypnotist to help me give up smoking. Unfortunately, his cure wasn’t very immediate. I was able to stop – but it was a decade after my session.

Regardless, he had one trick I quite liked. I’m going to try it out on you to see if I can help focus your take-aways from this session. Here goes:

I’m going to stop speaking, and you are going to forget about all the permutations and combinations of technology I took you through today. You’ll remember how to use the claims based model. You’ll remember that we’ve announced a bunch of very cool components and services. And above all, you will remember just how easy it now is to write applications that benefit from identity, through a single model that handles every identity use case, is based on standards, and puts users in control.

 

Project Geneva – Part 4

[This is the fourth installment of a presentation I gave to Microsoft developers at the Professional Developers Conference (PDC 2008) in Los Angeles. It starts here.]

We have another announcement that really drives home the flexibility of claims.

Today we are announcing a Community Technical Preview (CTP) of the .Net Access Control Service, an STS that issues claims for access control. I think this is especially cool work since it moves clearly into the next generation of claims, going way beyond authentication. In fact it is a claims transformer, where one kind of claim is turned into another.

An application that uses “Geneva” can use ACS to externalize access control logic, and manage access control rules at the access control service.  You just configure it to employ ACS as a claims provider, and configure ACS to generate authorization claims derived from the claims that are presented to it. 
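Conceptually, the transformation is just a set of configured rules that map incoming claims onto authorization claims. Here is a small sketch of the shape of it (this is not the actual ACS rule syntax, just an illustration):

```python
# Conceptual sketch of a claims transformer (not the actual ACS rule syntax):
# input claims come in, configured rules derive authorization claims from them,
# and only the derived claims are handed on to the application.

RULES = [
    # (input claim)                     -> (output authorization claim)
    (("group", "purchasing-agents"),       ("action", "submit-order")),
    (("group", "purchasing-managers"),     ("action", "approve-order")),
]

def transform(input_claims: list) -> list:
    """Derive authorization claims from whatever claims were presented."""
    return [output for (pattern, output) in RULES if pattern in input_claims]

presented = [("group", "purchasing-agents"), ("name", "Joe")]
print(transform(presented))  # [('action', 'submit-order')]
```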

The application can federate directly to ACS to do this, or it can federate with a “Geneva” Server which is federated with ACS.

ACS federates with the Microsoft Federation Gateway, so it can also be used with any customer who is already federated with the Gateway.

The .Net Access Control Service was built using the “Geneva” Framework.  Besides being useful as a service within Azure, it is a great example of the kind of service any other application developer could create using the Geneva Framework.

You might wonder – is there a version of ACS I can run on-premises?   Not today, but these capabilities will be delivered in the future through “Geneva”.

Putting it all together

Let me summarize our discussion so far, and then conjure up Vittorio Bertocci, who will present a demo of many of these components working together.

  • The claims-based model is a unified model for identity that puts users firmly in control of their identities.
  • The model consists of a few basic building blocks that can be put together to handle virtually any identity scenario.
  • Best of all, the whole approach is based on standards and works across platforms and vendors.

Let’s return to why this is useful, and to my friend Joe.  Developers no longer have to spend resources trying to handle all the demands their customers will make of them with respect to identity in the face of evolving technology. They no longer have to worry about where things are running. They will get colossal reach involving both hundreds of millions of consumers and corporate customers, and have complete control over what they want to use and what they don’t.

Click on this link – then skip ahead about 31 minutes – and my friend Vittorio will take you on a whirlwind tour showing all the flexibility you get by giving up complexity and programming to a simple, unified identity model putting control in the hands of its users.  Vittorio will also be blogging in depth about the demo over the next little while.  [If your media player doesn't accept WMV but understands MP4, try this link.]

In the next (and thankfully final!) installment of this series, I'll talk about the need for flexibility and granularity when it comes to trust, and a matter very important to many of us – support for OpenID.

Getting down with Zermatt

Zermatt is a destination in Switzerland that benefits from what Nietzsche calls “the air at high altitudes, with which everything in animal being grows more spiritual and acquires wings”.

It's therefore a good code name for the new identity application development framework Microsoft has just released in Beta form.  We used to call it IDFX internally  – who knows what it will be called when it is released in final form? 

Zermatt is what you use to develop interoperable identity-aware applications that run on the Windows platform.  We are building the future versions of Active Directory Federation Services (ADFS) with it, and claims-aware Microsoft applications will all use it as a foundation.  All capabilities of the platform are open to third party developers and enterprise customers working in Windows environments.  Every aspect of the framework works over the wire with other products on other platforms.

I can't stress enough how important it is to make it easy for application developers to incorporate the kind of sensible and sophisticated capabilities that this framework makes available.  And everyone should understand that our intent is for this platform to interoperate fully with products and frameworks produced by other vendors and open source projects, and to help the capabilities we are developing to become universal.

I also want to make it clear that this is a beta.  The goal is to involve our developer community in driving this towards final release.  The beta also makes it easy for other vendors and projects to explore every nook and cranny of our implementation and advise us of problems or work to achieve interoperability.

I've been doing my own little project using the beta Zermatt framework and will write about the experience and share my code.  As an architect, I can tell you already how happy I am about the extent to which this framework realizes the metasystem architecture we've worked so hard to define.

The product comes with a good White Paper for Developers by Keith Brown of Pluralsight.  Here's how Zermatt's main ReadMe sets out the goals of the framework.

Building claims-aware applications

Zermatt makes it easier to build identity-aware applications. In addition to providing a new claims model, it provides applications with a rich set of APIs to reason about the identity of a caller using claims.

Zermatt also provides developers with a consistent programming experience whether they choose to build their applications in ASP.NET or in WCF environments. 

ASP.NET Controls

ASP.NET controls simplify development of ASP.NET pages for building claims-aware Web applications, as well as Passive STS’s.

Building Security Token Services (STS)

Zermatt makes it substantially easier to build a custom security token service (STS) that supports the WS-Trust protocol. Such an STS is also referred to as an Active STS.

In addition, the framework also provides support for building STS's that support WS-Federation to enable web browser clients. Such an STS is also referred to as a Passive STS.

Creating Information Cards

Zermatt includes classes that you can use to create Information Cards – as well as STS's that support them.

There are a whole bunch of samples, and for identity geeks they are incredibly interesting.  I'll discuss what they do in another post.

Follow the installation instructions!

Meanwhile, go ahead and download.  I'll share one word of advice.  If you want things to run right out of the digital box, then for now slavishly follow the installation instructions.  I'm the type of person who never really looks at the ReadMe's – and I was chastened by the experience of not doing what I was told.  I went back and behaved, and the experience was flawless, so don't make the same mistake I did.

For example, there is a master installation script in the /samples/utilities directory called “SamplesPreReqSetup.bat”. This is a miraculous piece of work that sets up your machine certs automatically and takes care of a great number of security configuration details.  I know it's miraculous because initially (having skipped the readme) I thought I had to do this configuration manually.  Congratulations to everyone who got this to work.

You will also find a script in each sample directory that creates the necessary virtual directory for you.  You need this because of the way you are expected to use the Visual Studio debugger.

Using the debugger

In order to show how the framework really works, the projects all involve at least a couple of aspx pages (for example, one page that acts as a relying party, and another that acts as an STS).  So you need the ability to debug multiple pages at once.

To do this, you run the pages from a virtual directory as though they were “production” aspx pages.  Then you attach your debugger to the w3wp.exe process (under the Debug menu, select “Attach to a process” and make sure you can see all the processes from all the sessions.  “Wake up” the w3wp.exe process by opening a page.  Then you'll see it in the list).

For now it's best to compile the applications in the directory where they get installed.  It's possible that if you move the whole tree, they can be put somewhere else (I haven't tried this with my own hands).  But if you move a single project, it definitely won't work unless you tweak the virtual directory configuration yourself (why bother?).

Clear samples

I found the samples very clear, and uncluttered with a lot of “sample decoration” that makes it hard to understand the main high level points.  Some of the samples have a number of components working together – the delegation sample is totally amazing – and yet it is easy, once you run the sample, to understand how the pieces fit together.  There could be more documentation and this will appear as the beta progresses. 

The Zermatt team is really serious about collecting questions, feedback and suggestions – and responding to them.  I hope that if you are a developer interested in identity you'll take a look and send your feedback – whether you are primarily a Windows developer or not.  After all, our goal remains the Identity Big Bang, and getting identity deployed and cool applications written on all the different platforms. 

Key Piece of The Identity Puzzle

John Fontana, who writes expert pieces about identity for Network World, just posted this piece, called “Microsoft Sets Key Piece of Identity Puzzle”.

Microsoft Wednesday released a beta of its most important tool to date for helping developers build applications that can plug into the company's Identity Metasystem and provide what amounts to a re-usable identity service for securing network resources.

Code-named Zermatt, the tools are a new extension to the .Net Framework 3.5 that helps developers more easily build applications that incorporate a claims-based identity model for authentication/authorization. Claims are a set of statements that identify a user and provide specific information such as title or purchasing authority…

John goes on to quote Stuart Kwan:

“The model is that when a user arrives at the applications, they bring claims that they fetched from an STS ahead of time,” says Stuart Kwan, director of program management for identity and access for Microsoft. “Zermatt is one part of building apps that can more easily plug into your environment. You use Zermatt so [applications] can use the STS in your environment.”

In fact, a network would have multiple STS nodes. Those nodes will eventually include Active Directory, which will have an STS built into the directory's Federation Services in the next version slated to ship sometime after 2008.

Microsoft will use the new Federation Services capabilities, Zermatt and STS technology to build toward its ultimate goal of an “identity bus.” The nirvana of the idea is that off-the-shelf applications could plug into the bus in order to authenticate users and provide access control.

In my view, as enterprise applications and desktop suites start to integrate with the identity metasystem, it will become obvious that businesses can build “business logic” into STS's and suddenly get a huge payoff by controlling access, identity and personalization in all their off-the-shelf and enterprise-specific applications.  This is going to be huge for developers, who will be able both to simplify and deliver value.

But back to John and Stuart:

Kwan says Zermatt also can be used to build an STS that would run on top of custom built stores of user data.  He says Zermatt could be used to build applications that accept information from CardSpace, the user-centric identity system in Vista and XP.

The final release of Zermatt is expected by year-end.

It is the first time Microsoft has so directly written its sizeable development army into its Identity Metasystem plan, which was outlined first in 2005 and defines a distributed identity architecture for multi-vendor platforms.

Read the full story here.