SpyPhone for iPhone

The MedPage Today blog recently wrote about “iPhone Security Risks and How to Protect Your Data — A Must-Read for Medical Professionals.”  The story begins: 

Many healthcare providers feel comfortable with the iPhone because of its fluid operating system, and the extra functionality it offers, in the form of games and a variety of other apps.  This added functionality is missing with more enterprise-based smart phones, such as the Blackberry platform.  However, this added functionality comes with a price, and exposes the iPhone to security risks. 

Nicolas Seriot, a researcher from the Swiss University of Applied Sciences, has found some alarming design flaws in the iPhone operating system that allow rogue apps to access sensitive information on your phone.

MedPage quotes a CNET article where Elinor Mills reports:

Lax security screening at Apple's App Store and a design flaw are putting iPhone users at risk of downloading malicious applications that could steal data and spy on them, a Swiss researcher warns.

Apple's iPhone app review process is inadequate to stop malicious apps from getting distributed to millions of users, according to Nicolas Seriot, a software engineer and scientific collaborator at the Swiss University of Applied Sciences (HEIG-VD). Once they are downloaded, iPhone apps have unfettered access to a wide range of privacy-invasive information about the user's device, location, activities, interests, and friends, he said in an interview Tuesday…

In addition, a sandboxing technique limits access to other applications’ data but leaves exposed data in the iPhone file system, including some personal information, he said.

To make his point, Seriot has created open-source proof-of-concept spyware dubbed “SpyPhone” that can access the 20 most recent Safari searches, YouTube history, and e-mail account parameters like username, e-mail address, host, and login, as well as detailed information on the phone itself that can be used to track users, even when they change devices.

Following the link to Seriot's paper, iPhone Privacy, I found this abstract:

It is a little known fact that, despite Apple's claims, any applications downloaded from the App Store to a standard iPhone can access a significant quantity of personal data.

This paper explains what data are at risk and how to get them programmatically without the user's knowledge. These data include the phone number, email accounts settings (except passwords), keyboard cache entries, Safari searches and the most recent GPS location.

This paper shows how malicious applications could pass the mandatory App Store review unnoticed and harvest data through officially sanctioned Apple APIs. Some attack scenarios and recommendations are also presented.


In light of Seriot's paper, MedPage concludes:

These security risks are substantial for everyday users, but become heightened if your phone contains sensitive data, in the form of patient information, and when your phone is used for patient care.   Over at iMedicalApps.com, we are not fans of medical apps that enable you to input patient data, and there are several out there.  But we also have peers who have patient contact information stored on their phones, patient information in their calendars, or are accessible to their patients via e-mail.  You can even e-prescribe using your iPhone. 

I don't want to even think about e-prescribing using an iPhone right now, thank you.

Anyone who knows anything about security has known all along that the iPhone – like all devices – is vulnerable to some set of attacks.  For them, iPhone Privacy will be surprising not because it reveals possible attacks, but because of how amazingly elementary they are (the paper is a must-read from this point of view).  

On a positive note, the paper might awaken some of those sent into a deep sleep by proselytizers convinced that Apple's App Store censorship program is reasonable because it protects them from rogue applications.

Evidently Apple's App Store staff take their mandate to protect us from people like award-winning Mad Magazine cartoonist Tom Richmond pretty seriously (see Apple bans Nancy Pelosi bobble head).  If their approach to “protecting” the underlying platform has any merit at all, perhaps a few of them could be reassigned to work part-time on preventing trivial and obvious hacker exploits.

But I don't personally think a closed platform with a censorship board is either the right approach or one that can possibly work as attackers get more serious (in fact computer science has long known that this approach is baloney).  The real answer will lie in hard, unfashionable and (dare I say it?) expensive R&D into application isolation and related technologies.  I hope that will be the outcome:  first, for the sake of building a secure infrastructure;  second, because one of my phones is an iPhone and I like to explore downloaded applications too.

[Heads Up: Khaja Ahmed]

Information Cards in Industry Verticals

The recent European Identity Conference, hosted in Munich by the analyst firm Kuppinger Cole, had great content inspiring an ongoing stream of interesting conversations.   Importantly, attendance was up despite the economic climate, an outcome Tim Cole pointed out was predictable since identity technology is so key to efficiency in IT.

One of the people I met in person was James McGovern, well known for his Enterprise Architecture blog.  He is on a roll writing about ideas he discussed with a number of us at the conference, starting with this piece on use of Information Cards in industry verticals.  James knows a lot about both verticals and identity.  He has started a critical conversation, replete with the liminal questions he is known for:

‘Consider a scenario where you are an insurance carrier and you would like to have independent insurance agents leverage CardSpace for SSO. The rationale says that insurance agents have more personally identifiable information on consumers ranging from their financial information such as where they work, how much they earn, where they live, what they own to information about their medical history, etc. When they sell an insurance policy they will even take payment via credit cards. In other words, if there were a scenario where username/passwords should be demolished first, insurance should be at the top of the list.’

A great perception.  Scary, even.

‘Now, an independent insurance agent can do business with a plethora of carriers who all are competitors. The ideal scenario says that all of the carriers would agree to a common set of claims so as to insure card portability. The first challenge is that the insurance vertical hasn't been truly successful in forming useful standards that are pervasive (NOTE: There is ACORD but it isn't widely implemented) and therefore relying on a particular vertical to self-organize is problematic.

‘The business value – while not currently on the tongues of enterprise architects who work in the insurance vertical – says that by embracing information cards, they could minimally save money. By not having to manage so many disparate password reset approaches (each carrier has their own policies for password history, complexity and expiry) they can improve the user experience…

‘If I wanted to be a really good relying party, I think there are other challenges that would emerge. Today, I have no automated way of validating the quality of an identity provider and would have to do this as a bunch of one offs. So, within our vertical, we may have say 80,000 different insurance agencies whom could have their own identity provider. With such a large number, I couldn't rely on white listing and there has to be a better way. We should of course attempt to define what information would need to be exposed at runtime in order for trust to be consumed.’

This raises the matter of how trust would be concretized within the various verticals.  White listing is obviously too cumbersome given the numbers.  James proposes an idea that I will paraphrase as follows:  use claims transformers run by trusted entities (like state departments of insurance) to vet incoming claims.  The idea would be to reuse the authorities already involved in making this kind of decision.
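James's claims-transformer idea might look roughly like this in outline. Everything below – the issuer URLs, the registry, the function name – is hypothetical, invented purely to illustrate the pattern, not drawn from any real product:

```python
# Hypothetical sketch of a claims transformer run by a trusted vetting
# authority (e.g. a state department of insurance). All names here are
# illustrative, not part of any real product or API.

ACCREDITED_ISSUERS = {
    "https://idp.acme-agency.example": {"license": "TX-12345"},
    "https://idp.beta-agency.example": {"license": "TX-67890"},
}

def transform_claims(token: dict) -> dict:
    """Vet an incoming claim set and re-issue it under the authority's name.

    Relying parties then only need to trust the single vetting authority
    instead of white-listing tens of thousands of agency IdPs.
    """
    issuer = token.get("issuer")
    registration = ACCREDITED_ISSUERS.get(issuer)
    if registration is None:
        raise PermissionError(f"issuer not accredited: {issuer}")
    return {
        "issuer": "https://doi.state.example",   # the vetting authority
        "claims": dict(token["claims"], license=registration["license"]),
    }
```

The point of the sketch is just the indirection: the relying party sees one issuer, while the accreditation decision is delegated to the entity already responsible for it.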

He goes on to examine the challenge of figuring out what identity proofing process has actually been used by an identity provider.  In a paper I collaborated on recently (I'll be publishing it here soon) we included the proofing and registration processes as one element in a chain of factors we called the “security presentation”.  One of the points James makes is that it should be easy to include an explicit statement about the “security presentation” as one element of any claim-set being submitted (see James’ post for some good examples).  Another is that the relying party should be able to include a statement of its security presentation requirements in its policy.

James concludes with a set of action items that need to be addressed for Information Cards to be widely used in industry verticals:

’1. Microsoft needs to redouble its efforts to sell information cards as a business value proposition, where the current pitch is towards a technical audience. It is nice that it will be part of Geneva, but this doesn't mean its capabilities will be fully leveraged unless it is understood by more than folks who do just infrastructure work.

’2. Oasis is a wonderful standards organization and can add value as a forum to organize common claims at an industry vertical level. Since identity is not insurance specific, we have to acknowledge that using insurance specific bodies such as ACORD may not be appropriate. I would be game to participate on a working group to generate common claims for the insurance vertical.

’3. When it comes to developing enterprise applications using the notion of claims, …developers need to do a quick paradigm shift. I can envision a few of us individuals who are also book authors coming up with a book entitled: Thinking in Claims and XACML as there is no guide to help developers understand proper architecture going forward. If such a guide existed, we… (could avoid repeating) …the same mistakes of the past.

’4. I am wildly convinced that industry analysts are having the wrong conversations around identity. Ask yourself, how many ECM systems have on their 2009 roadmap the ability to consume a claim? How many BPM systems? In case you haven't figured it out, the answer is a big fat zero. This says that the identity crowd is evangelizing to the wrong demographic. Industry analysts are measuring identity products, not what consumers really need, which is to measure how many existing products can consume new approaches to identity. Does anyone have a clue as to how to get analysts such as Nick Malik, Gerry Gebel, Bob Blakely and others to change the conversation?

’5. We need to figure out some additional identity standards that an IDP could expose to an RP to assert vetting, attestation, indemnification and other constructs to relying parties. This will require a small change in the way that identity selectors work but B2B user-centric approaches won't scale without these approaches…’

I know some good work to formalize various aspects of the “security presentation” has been going on in one of the Liberty Alliance working groups – perhaps someone involved could post about the progress that has been made and how it ties in to some of James’ action items.

James’ action items are all good.  I buy his point that Microsoft needs to take claims beyond the current “infrastructure” community – though I still see the participation of this community as absolutely key.  But we need – as an industry and as individual companies - to widen the discussion and start figuring out how claims can be used in concrete verticals.  As we do this, I expect to see many players, with very strong participation from Microsoft,  taking the new paradigm to the “business people” who will really benefit from the technology.

When Geneva is released to manufacturing later this year, it will be seen as a fundamental part of Active Directory and the Windows platform.  I expect that many programs will then start to kick in that turn up the temperature along the lines James proposes.

My only caution with respect to James’ argument is that I hope we can keep requirements simple in the first go-around.  I don't think ALL the capabilities of claims have to be delivered “simultaneously”, though I think it is essential for architects like James to understand them and build our current deliverables in light of them. 

So I would add a sixth bullet to the five proposed by James, about beginning with extremely simplified profiles and getting them to work perfectly and interoperably before moving on to more advanced scenarios.  Of course, that means more work in nailing the most germane scenarios and determining their concrete requirements.  I expect James would agree with me on this (I guess I'll find out, eh?…)

[By the way, James also has an intriguing graphic that appears with the piece, but doesn't discuss it explicitly. I hope that is a treat that is coming...]

Project Geneva – Part 4

[This is the fourth installment of a presentation I gave to Microsoft developers at the Professional Developers Conference (PDC 2008) in Los Angeles. It starts here.]

We have another announcement that really drives home the flexibility of claims.

Today we are announcing a Community Technical Preview (CTP) of the .Net Access Control Service, an STS that issues claims for access control. I think this is especially cool work since it moves clearly into the next generation of claims, going way beyond authentication. In fact it is a claims transformer, where one kind of claim is turned into another.

An application that uses “Geneva” can use ACS to externalize access control logic, and manage access control rules at the access control service.  You just configure it to employ ACS as a claims provider, and configure ACS to generate authorization claims derived from the claims that are presented to it. 
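The rule pattern described above – incoming identity claims mapped to outgoing authorization claims by rules managed outside the application – can be sketched generically. This is an illustration of the idea only, not the actual ACS API; all rule names and claim types are invented:

```python
# Illustrative sketch (not the actual .Net Access Control Service API)
# of the claims-transformation pattern: input claims go in, authorization
# claims come out, and the rules live outside the application.

RULES = [
    # (input claim type, input value) -> (output claim type, output value)
    (("group", "purchasers"), ("action", "submit-order")),
    (("group", "auditors"),   ("action", "read-reports")),
]

def issue_authorization_claims(input_claims: list) -> list:
    """Apply the transformation rules; the application only ever sees
    the output claims, never the access control logic itself."""
    return [output for (match, output) in RULES if match in input_claims]
```

Because the rules are data rather than application code, changing who may do what requires no redeployment of the application – which is the externalization the text describes.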

The application can federate directly to ACS to do this, or it can federate with a “Geneva” Server which is federated with ACS.

ACS federates with the Microsoft Federation Gateway, so it can also be used by any customer who is already federated with the Gateway.

The .Net Access Control Service was built using the “Geneva” Framework.  Besides being useful as a service within Azure, it is a great example of the kind of service any other application developer could create using the Geneva Framework.

You might wonder – is there a version of ACS I can run on-premises?   Not today, but these capabilities will be delivered in the future through “Geneva”.

Putting it all together

Let me summarize our discussion so far, and then conjure up Vittorio Bertocci, who will present a demo of many of these components working together.

  • The claims-based model is a unified model for identity that puts users firmly in control of their identities.
  • The model consists of a few basic building blocks that can be put together to handle virtually any identity scenario.
  • Best of all, the whole approach is based on standards and works across platforms and vendors.

Let’s return to why this is useful, and to my friend Joe.  Developers no longer have to spend resources trying to handle all the demands their customers will make of them with respect to identity in the face of evolving technology. They no longer have to worry about where things are running. They will get colossal reach involving both hundreds of millions of consumers and corporate customers, and have complete control over what they want to use and what they don’t.

Click on this link – then skip ahead about 31 minutes – and my friend Vittorio will take you on a whirlwind tour showing all the flexibility you get by giving up complexity and programming to a simple, unified identity model that puts control in the hands of its users.  Vittorio will also be blogging in depth about the demo over the next little while.  [If your media player doesn't accept WMV but understands MP4, try this link.]

In the next (and thankfully final!) installment of this series, I'll talk about the need for flexibility and granularity when it comes to trust, and a matter very important to many of us – support for OpenID.

Project Geneva – Part 2

[This is the second installment of a presentation I gave to Microsoft developers at the Professional Developers Conference (PDC 2008) in Los Angeles. It starts here.]

I don’t want to overwhelm you with a shopping list of all the scenarios in which the Claims-based architecture solves problems that used to be insurmountable.

But I’ll start from the enterprise point of view, and look at how this system helps with the big new trend of federation between partners. Then we’ll look at cloud computing, and see that the same architecture dramatically simplifies developing applications that can take advantage of it.  Finally, we’ll see how the approach applies to consumer-oriented web applications.  

Enterprise Federation

The rigid Enterprise perimeter is dissolving as a result of the need for digital relationships between an enterprise and its suppliers and customers, as well as the outsourcing of functions and services, the use of temporary workers, and having employees who sometimes work from home.  The firewall is still a useful element in a concentric set of defences, but must at the same time now be permeable. 

Most of us are even learning to collaborate on a per-project basis with partners who in other contexts might be our competitors.  So the relationships between business entities must be defined with more and more granularity.

In looking at this, I’m going to start with a very simple scenario – a story of two companies, where one has built an app in-house or has installed an ISV app for their own employees, and now wants to extend access to employees from a partner.

In the past, even this simple requirement has been really hard and expensive to fulfill. How can Microsoft help you solve this problem using the claims model?

Code name Geneva

Well, I'm happy to announce today the first beta of “Geneva” software for building the claims-aware applications I’ve been talking about. It has three parts:

  1. The “Geneva” Framework: A framework you use in your .Net application for handling claims. This was formerly called “Zermatt”.
  2. “Geneva” Server: A claims provider and transformer (STS) integrated with Active Directory.  It comes with Windows, and makes managing trusts and policies easy.  Importantly, it supports Information Cards, making it easier for people to understand what identities they are using where, and to avoid phishing of their enterprise credentials. You may in the past have heard this server referred to as AD FS “2”.
  3. Windows CardSpace “Geneva”:  The second generation Information Card client for federation that is dramatically faster and smaller than the first version of CardSpace, and incorporates the feedback and ideas that have emerged from our customers and collaborators.

In the use case we’ve been considering, our solution works this way:  each enterprise puts up a single Geneva Server – leveraging the power of their Active Directory.

Then the administrators of the application alter the .NET configuration to point to their enterprise’s Geneva server (with the config change I demonstrated here). At this point, your customer's application has become part of what we call an Enterprise identity backbone, and can accept claims.

So the software framework and components provide a single identity model that users configure in any way they want.  If you have written to this model, your app now works for both “employees” and “partner users” without a code change. All that is required is to set up the Geneva STSs.

The fatal flaw

Anyone who has been around the block a few times knows there is one fatal flaw in the solution I’ve just described.

Your customer may have partners who don’t use Active Directory or don’t use Geneva or have settled on a non-Microsoft product.

No problem.  All aspects of Project Geneva are based on standards accepted across the industry – WS-Trust and WS-Federation.

I’m also very happy to announce that Geneva supports the SAML 2.0 protocol. Basically, no system that supports federation should be out of reach.

All this means your partners aren’t forced to use “Geneva” if they want to get access to your applications. They can use any third party STS, and that is part of the great power of the solution.

Does Microsoft practice what it preaches?

Microsoft is an enterprise too.  So if this architecture is supposed to be good for our enterprise customers, what about for Microsoft itself?  Are we following our own advice?

I’m here today to tell you Microsoft has fully stepped up to the plate around federation. And it is already providing a lot of benefits and solving problems.

You've heard a lot at the PDC about Azure. Microsoft offers cloud applications like hosted SharePoint and Exchange, and cloud developer services like the .Net Services and SQL Data Services, as well as a whole range of applications.  We want other enterprises to be able to access these services and sites, much like other enterprises want their own customers and partners to access the systems pertaining to their businesses.

So we make our offerings available to customers via the Microsoft Federation Gateway (MFG), which anchors our “services identity backbone”, and is based on the same industry standards and architecture delivered through the Geneva Project's server. It is all part of one wave, the Geneva wave of Identity Software + Services.

The result is pretty stunning, in terms of simplifying our own lives and allowing us to move forward very quickly – as it will be for enterprises that follow the same route. Through a single trust relationship to our gateway, our customers can get access to our full range of services.

Again, we’re not telling our customers what federation software to use. They can federate with the MFG using “Geneva” or other third party servers that support standard protocols.  And they can use the same protocols to federate with other gateways run by other organizations.

What about Live ID?

It is important to understand that the Microsoft Federation Gateway is different from Windows Live ID.  Yet Live ID feeds into the Gateway just as all our partners do.  I'll describe this, and the cool implications of this approach for application developers, in the next installment.

Resources have rights too

Paul Madsen has a knack for pithy identity wisdom.  But his recent piece on HealthVault's use of OpenID made me do a double take.

“Simon Willison defends HealthVault’s choice of OPs [OpenID providers - Kim].

“I disagree. It is I, as a user, that should be able to dictate to HealthVault the OPs from which they are to accept identity assertions through OpenID.

“Just as I, as a user of Vista, should be able to dictate to Microsoft which software partners they work with to bundle into the OS (I particularly like the Slow Down to Crawl install).

“Just as I, as a Zune user … oh wait, there are no Zune users….

“The mechanism by which I (the user) am able to indicate to HealthVault, or Vista, my preferences for their partners is called ‘the market’.”

Hmmm.  All passion aside, are Vista and HealthVault really the same things?

When you buy an operating system like Vista, it is the substratum of YOUR personal computer.  You should be able to run whatever YOU want on it.  That strikes me as part of the very definition of the PC.

But what about a cloud service like HealthVault?  And here I want to get away from the specifics of HealthVault, and talk generically about services that live in the cloud.  In terms of the points I want to make, we could just as easily be talking about Facebook, LinkedIn, Blogger or Hotmail.

As a user, do you own such a service? Do you run it in whatever way you see fit?  

I've tried a lot of services, and I don't think I've ever seen one that gives you that kind of carte blanche. 

Normally a service provides options. You can often control content, but you function within parameters.  Your biggest decision is whether you want to use the service in the first place.  That's a large part of what “the market” in services really is like.

But let me push this part of the discussion onto “the stack” for a moment.

PUSH

Last week a friend came by and told me a story.  One of his friends regularly used an Internet advertising service, and paid for it via the Internet too.  At some point, a large transaction “went missing”.  The victim contacted the service through which he was making the transaction, and was told it “wasn't their problem”.  Whose problem was it?

I don't know anything about legal matters and am not talking from that point of view.  It just seems obvious to me that if you are a company that values its relationships with customers, this kind of breach really IS your problem, and you need to face up to that.

And there is the rub.  I never want to be the one saying, “Sorry – this is your problem, not ours.”  But if I'm going to share the problem, shouldn't I have some say in preventing it and limiting my liability?

POP

I think that someone offering a service has the right to define the conditions for use of the service (let's for now ignore the fact that there may be some regulation of such conditions – for example certain conditions might be “illegal” in some jurisdictions).  And that includes security requirements.

In other words, matters of access control proceed from the resource.  The resource decides who can access it.   Identity assertions are a tool which a resource may use to accomplish this.  For years we've gotten this backwards, thinking access proceeded from the identity to the resource – we need to reverse our thinking.

Takeaway:  “user-centric” doesn't mean The Dictatorship of the Users.  In fact there are three parties whose interests must be accommodated (the user, the resource, and the claims provider).  At times this is going to be complex.  Proclamations like “It is I, as a user, that should be able to dictate…” just don't capture what is at stake here.

I like the way Simon Willison puts this:

“You have to remember that behind the excitement and marketing OpenID is a protocol, just like SMTP or HTTP. All OpenID actually provides is a mechanism for asserting ownership over a URL and then “proving” that assertion. We can build a pyramid of interesting things on top of this, but that assertion is really all OpenID gives us (well, that and a globally unique identifier). In internet theory terms, it’s a dumb network: the protocol just concentrates on passing assertions around; it’s up to the endpoints to set policies and invent interesting applications.

“Open means that providers and consumers are free to use the protocol in whatever way they wish. If they want to only accept OpenID from a trusted subset of providers, they can go ahead. If they only want to pass OpenID details around behind the corporate firewall (great for gluing together an SSO network from open-source components) they can knock themselves out. Just like SMTP or HTTP, the protocol does not imply any rules about where or how it should be used…”
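Willison's point about accepting assertions only from a trusted subset of providers fits in a few lines. The provider URLs and function name below are placeholders, a sketch of the policy decision rather than of any real OpenID library:

```python
# Minimal sketch of a relying party's own trust policy: the OpenID
# protocol verifies the assertion; whether to honor it from this
# provider is entirely the RP's decision. URLs are placeholders.

TRUSTED_OPS = {"https://op.example.org", "https://sso.corp.example"}

def accept_assertion(op_endpoint: str, verified: bool) -> bool:
    """Accept an assertion only if it verified cryptographically AND
    came from a provider on this relying party's trust list."""
    return verified and op_endpoint in TRUSTED_OPS
```

The whitelist lives at the endpoint, not in the protocol – exactly the “dumb network” separation Willison describes.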

In a later post – where he seems to have calmed down a bit – Paul mentions a Liberty framework that allows relying parties to “outsource the assessment of… OPs to accredited 3rd parties (or at least provide a common assessment framework…)”.  This sounds more like the Paul I know, and I want to learn more about his thinking in this area.

Talking about the Identity Bus

During the Second European Identity Conference, Kuppinger-Cole did a number of interviews with conference speakers. You can see these on the Kuppingercole channel at YouTube.

Dave Kearns, Jackson Shaw, Dale Olds and I had a good old time talking with Felix Gaehtgens about the “identity bus”.  I had a real “aha” during the interview while I was talking with Dave about why synchronization and replication are an important part of the bus.  I realized part of the disconnect we've been having derives from the differing “big problems” each of us is confronted with.

As infrastructure people, one of our main goals is to get over our “information chaos” headaches…  These have become even worse as the requirements of audit and compliance have matured.  Storing information in one authoritative place (and one only) seems to be a way to get around these problems.  We can then retrieve the information through web service queries and drastically reduce complexity…

What does this worldview make of application developers who don't want to make their queries across the network?   Well, there must be something wrong with them…  They aren't hip to good computing practices…  Eventually they will understand the error of their ways and “come around”…

But the truth is that the world of query looks different from the point of view of an application developer. 

Let's suppose an application wants to know the name corresponding to an email address.  It can issue a query to a remote web service or LDAP directory and get an answer back immediately.  All is well and accords with our ideal view.

But the questions application developers want to answer aren't always of the simple “do a remote search in one place” variety.

Sometimes an application needs to do complex searches involving information “mastered” in multiple locations.   I'll make up a very simple “two location” example to demonstrate the issue:  

“What purchases of computers were made by employees who have been at the company for less than two years?”

Here we have to query “all the purchases of computers” from the purchasing system, and “all employees hired within the last two years” from the HR system, and find the intersection.

Although the intersection might only represent a few records,  performing this query remotely and bringing down each result set is very expensive.   No doubt many computers have been purchased in a large company, and a lot of people are likely to have been hired in the last two years.  If an application has to perform this type of  query with great efficiency and within a controlled response time,  the remote query approach of retrieving all the information from many systems and working out the intersection may be totally impractical.   

Compare this to what happens if all the information necessary to respond to a query is present locally in a single database.  I just do a “join” across the tables, and the SQL engine understands exactly how to optimize the query so the result involves little computing power and “even less time”.  Indexes are used and distributions of values well understood: many thousands of really smart people have been working on these optimizations in many companies for the last 40 years.
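The local case is easy to make concrete. Here is a runnable sketch of the two-location question using Python's built-in sqlite3, with toy data invented for illustration: once both data sets live in one database, the intersection is a single indexed join instead of two huge remote result sets.

```python
# Toy illustration: the "computers bought by recent hires" question
# answered with one local join. Table names and data are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE purchases (employee_id INTEGER, item TEXT);
    CREATE TABLE employees (id INTEGER, name TEXT, years_at_company REAL);
    CREATE INDEX idx_purchases_emp ON purchases(employee_id);
""")
db.executemany("INSERT INTO purchases VALUES (?, ?)",
               [(1, "computer"), (2, "desk"), (3, "computer")])
db.executemany("INSERT INTO employees VALUES (?, ?, ?)",
               [(1, "Ana", 0.5), (2, "Ben", 1.0), (3, "Carl", 8.0)])

# The engine filters and intersects internally; only the (probably
# tiny) final result set ever reaches the application.
rows = db.execute("""
    SELECT e.name, p.item
    FROM purchases AS p
    JOIN employees AS e ON e.id = p.employee_id
    WHERE p.item = 'computer' AND e.years_at_company < 2
""").fetchall()
# Only Ana matches: Carl bought a computer but has 8 years' tenure.
```

Doing the same thing remotely would mean fetching every computer purchase and every recent hire over the network before intersecting them – exactly the cost the paragraph above describes.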

So, to summarize, distributed databases (or queries done through distributed services) are not appropriate for all purposes. Doing certain queries in a distributed fashion works, while in other cases it leads to unacceptable performance.

The result is that many application developers “don't want to go there” – at least some of the time.  Yet their applications must be part of the identity fabric.  That is why the identity metasystem has to include application databases populated through synchronization and business rules.

On another note, I recommend the interview with Dave Kearns on the importance of context to identity. 

Ralf Bendrath on the Credentica acquisition

Privacy, security and Internet researcher and activist Ralf Bendrath is a person who thinks about privacy deeply. The industry has a lot to learn from him about modelling and countering privacy threats. Here is his view of the recent Credentica acquisition:

Microsoft has acquired Montreal-based privacy technology company Credentica. While that probably means nothing to most of you out there, it is one of the most important and promising developments in the digital identity world.

My main criticism around user-centric identity management has been that the identity provider (the party that you and others rely on, like your credit card issuer or the agency that gave you your driver's license) knows a lot about the users. Microsoft's identity architect Kim Cameron explains it very well:

[W]ith managed cards carrying claims asserted by a third party authority, it has so far been impossible, even for CardSpace, to completely avoid artifacts that allow linkage. (…) Though relying parties are not able to collude with one another, if they collude with the identity provider, a set of claims can be linked to a given user even if they contain no obvious linking information.

This is related to the digital signatures involved in the claims flows. Kim goes on:

But there is good news. Minimal disclosure technology allows the identity provider to sign the token and proof key in such a way that the user can prove the claims come legitimately from the identity provider without revealing the signature applied by the identity provider.

Stefan Brands was among the first to invent technology for minimal disclosure or “zero knowledge” proofs in the early nineties, similar to what David Chaum did with his anonymous digital cash concept. His technology was bought by the privacy firm Zero-Knowledge, which ran out of funding and gave it back to Stefan. He has since built his own company, Credentica, and, together with his colleagues Christian Paquin and Greg Thompson, developed it into a comprehensive middleware product called “U-Prove” that was released a bit more than a year ago. U-Prove works with SAML, Liberty ID-WSF, and Windows CardSpace.

The importance of the concept of “zero-knowledge proofs” for privacy is comparable to the impact public key infrastructures (PKIs) described by Whitfield Diffie and Martin Hellman had on internet security. The U-Prove technology based on these concepts has been compared to what Ron Rivest, Adi Shamir and Leonard Adleman (RSA) did for security when they were the first to offer an algorithm and a product based on PKIs.
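For readers who have never seen a zero-knowledge proof in action, here is a toy Schnorr-style proof of knowledge of a discrete logarithm.  To be clear: this is not U-Prove's actual protocol, just an illustration of the underlying “prove you hold a secret without revealing it” idea, with parameters far too small for real use:

```python
import secrets

# Toy Schnorr identification protocol.  Parameters are tiny demo values,
# NOT cryptographically secure, and this is NOT the U-Prove algorithm.
p = 2039          # prime, p = 2q + 1
q = 1019          # prime order of the subgroup
g = 4             # generator of the order-q subgroup of Z_p*

x = secrets.randbelow(q)      # prover's secret (think: a private key)
y = pow(g, x, p)              # public value the verifier already knows

# Prover commits, verifier challenges, prover responds.
r = secrets.randbelow(q)
t = pow(g, r, p)              # commitment
c = secrets.randbelow(q)      # verifier's random challenge
s = (r + c * x) % q           # response; on its own it reveals nothing about x

# Verifier checks g^s == t * y^c (mod p): convinced the prover knows x,
# yet learning nothing about x itself.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof verified")
```

The check works because g^s = g^(r + c·x) = g^r · (g^x)^c = t · y^c (mod p).  Minimal-disclosure credential systems build far more machinery on top of this kind of primitive, but the privacy property Ralf describes starts here.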

When I was at the CFP conference in Montreal last May, I was meeting Kim and Stefan, and a colleague pointed me to the fact that Kim was being very nice to Stefan. “He has some cool patents Microsoft really wants”, my colleague said. Bruce Schneier recently also praised U-Prove, but questioned the business model for companies like Credentica. He added, “I’d like to be proven wrong.”

Kim Cameron is now bragging about having proven Bruce wrong (which is hard to imagine, given the fact that “Bruce Schneier feeds Schrödinger's cat on his back porch. Without opening the box”), while admitting that he still has no business model:

Our goal is that Minimal Disclosure Tokens will become base features of identity platforms and products, leading to the safest possible internet. I don’t think the point here is ultimately to make a dollar. It’s about building a system of identity that can withstand the ravages that the Internet will unleash. That will be worth billions.

Stefan Brands is also really happy:

For starters, the market needs in identity and access management have evolved to a point where technologies for multi-party security and privacy can address real pains. Secondly, there is no industry player around that I believe in as much as Microsoft with regard to its commitment to build security and privacy into IT systems and applications. Add to that Microsoft’s strong presence in many of the target markets for identity and access management, its brain trust, and the fact that Microsoft can influence both the client and server side of applications like no industry player can, and it is easy to see why this is a perfect match.

A good overview of other reactions is at Kim's latest blog post. The crucial issue has, again, been pointed out by Ben Laurie, who quotes the Microsoft Privacy Team's blog:

When this technology is broadly available in Microsoft products (such as Windows Communication Foundation and Windows Cardspace), enterprises, governments, and consumers all stand to benefit from the enhanced security and privacy that it will enable.

Ben sarcastically reads it like “the Microsoft we all know and love”, implying market domination based on proprietary technology. But the Microsoft we all know in the identity field is not the one we used to know with Passport and other crazy proprietary surveillance stuff. They have released the standards underlying the CardSpace claims exchange under an open specification promise, and Kim assures us that they will have their lawyers sort out the legal issues so anybody can use the technology:

I can guarantee everyone that I have zero intention of hoarding Minimal Disclosure Tokens or turning U-Prove into a proprietary Microsoft technology silo. Like, it’s 2008, right? Give me a break, guys!

Well. Given the fact that U-Prove is not just about claims flows, but involves fancy advanced cryptography, they really should do everybody a favour and release the source code and some libraries that contain the algorithm under a free license, and donate the patent to the public domain.

First of all, because yes – it's 2008, and “free is the new paid”, as even the IHT has discovered in January 2007.

Second, because yes – it's 2008, and there has been an alternative product out there under a free license for more than a year. IBM's Zurich Research Lab finished their Idemix identity software, which works with zero-knowledge proofs, in January 2007. It is part of the Higgins identity suite and will be available under an open source license. (The Eclipse lawyers seem to have been looking into this for more than a year, though. Does anybody know about the current status?)

Third, because yes – it's 2008, it's not 1883 anymore, to quote Bruce Schneier again:

A basic rule of cryptography is to use published, public, algorithms and protocols. This principle was first stated in 1883 by Auguste Kerckhoffs.

While I don't follow Ralf into every nook and cranny of his argument, I think he has a pretty balanced view.

But Ralf, you should tell your friend I was being very nice to Stefan in Montreal because I find him very amusing, especially with a scotch in him.  I would have tried to get his technology into widescale use whether I liked him or not, and I would have liked him just as much if he didn't have any patents at all.

I don't want to get into a “free is the new paid” discussion.  As the article you cite states, “Mass media given away freely or at low cost is hardly new, of course. In many countries, over-the-air television and radio have long been financed primarily by advertisers, at no direct cost to consumers.”  So what is new here?  When I can apply this paradigm to my next dinner, tell me about it. 

This having been vented, I come to exactly the same general conclusions you do:  we want a safe, privacy-friendly identity infrastructure as the basis for a safe, privacy-friendly Internet, and we should do everything possible to make it easier for everyone to bring that about.  So your suggestions go in the right direction.  If we were ultimately to give the existing code to a foundation, I would like to know what foundation people in the privacy community would suggest.

As for the business model issue, I agree with you and Bruce – and Stefan – that there is no obvious business model for a small company.  But for companies like Microsoft, our long term success depends on the flourishing of the Internet and the digital economy.  The best and most trustworthy possible identity infrastructure is key to that.  So for the Microsofts, the IBMs, the Suns and others, this technology fits very squarely into our business models.

As for the Identity and Access group at Microsoft, our goal is to have the most secure, privacy-friendly, interoperable, complete, easy to use and manageable identity products available.  As the Internet's privacy and identity problems become clearer to people, this strategy will attract many new customers and keep the loyalty of existing ones.  So there you have it.  To us, U-Prove technology is foundational to building a very significant business.

Ultimate simplicity: 30 lines of code

With the latest CardSpace bits anyone who is handy with HTML and PHP, Ruby, C#, Python or almost any other language can set up CardSpace on their site in minutes – without the pain and expense of installing a certificate.  They can do this without using any of the special libraries necessary to support high security Information Card exchanges.

This approach is only advisable for personal sites like blogs – but of course, there are millions of blogs being born every second, or… something like that.  Students and others who want to see the basic ideas of the Metasystem can therefore get into the game more easily, and upgrade to certificates once they've mastered the basics.

I've put together a demo of everything it takes to be successful (assuming you have the right software installed, as described later in this piece).

From the high security end of the spectrum to the long tail

Given the time pressures of shipping Vista, those of us working on CardSpace had to prioritize (i.e. cut) our features in order to get everything tested and out the door on schedule.  One assumption we decided to make for V1.0 was that every site would have an X.509 certificate.  We wanted our design to start from the high end of the security spectrum so the fundamental security architecture would be right.  Our thinking was that if we could get these cases working, enabling the “long tail” of sites that don't have certificates would be possible too.

Let's face it.  Getting a certificate, setting up a dedicated external IP address, and configuring your web server to use https is non-trivial for the average person.  Nor does it make much sense to require certificates for personal web sites with no actual monetary or hacker value.  I would even say that without proper security analysis, vetting of software and rigorous operating procedures, SSL isn't even likely to offer much protection against common attacks.  We need to evolve our whole digital framework towards better security practices, not just mandate certificates and think we're done.

So again, when all is said and done, it is best to promote an inclusive Identity Metasystem embracing the full range of identity scenarios – including support for the “long tail” of personal and non-commercial sites.  One way to do this is through OpenID support.  But in addition, we have extended CardSpace to work with sites that don't have a certificate.

The user experience makes the difference clear: we are careful to point out that the exchange of identity is not encrypted.

Warnings are presented in words and graphics

In spite of this, CardSpace continues to provide significant protection against attack when compared with current browsers.  You are shown the DNS name of the site you are visiting as part of the CardSpace ceremony, not on some random screen under the control (or manipulation) of a potentially evil party.  And if you have been redirected to a “look-alike” site containing an unknown DNS name, you will get the “Introductory” ceremony rather than the more streamlined “Known site” ceremony.  This unexpected behavior has been shown to make people much more careful about what is appearing on their screen.  Ruchi from the CardSpace blog has a great discussion of all the potential issues here.

What software is required? 

As my little demo shows, if you have a website to which you want to add CardSpace support, all you need to do is add an “object tag” to your login page and parse a bit of XML when you get the Information Card posted back to your site.
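The “parse a bit of XML” step can be sketched in a few lines.  The snippet below uses a simplified stand-in token – a real CardSpace post in the no-certificate case carries a SAML-style token whose exact element names and namespaces depend on your configuration, so the `Token`/`Claim` structure and claim URIs here are hypothetical:

```python
import xml.etree.ElementTree as ET

# Minimal server-side sketch: after the browser posts the Information Card
# token back to the login page, pull a couple of claims out of the XML.
# The element names below are simplified placeholders, not the real schema.
posted_xml = """
<Token>
  <Claim uri="givenname">Alice</Claim>
  <Claim uri="emailaddress">alice@example.com</Claim>
</Token>
"""

root = ET.fromstring(posted_xml)
claims = {c.get("uri"): c.text for c in root.findall("Claim")}
print(claims["emailaddress"])  # alice@example.com
```

That really is the whole server-side job in the simple, certificate-free case: receive the post, parse the token, and read out the claims you asked for in your object tag.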

On the “client” side, if you are using IE, first you will need to install an updated browser-specific extension that will work at a non-SSL site.  If you have IE7 you probably already have it as part of the October security update.  If not, download it from here.

Second, you will need to install an updated version of CardSpace that does the right thing when a website (we call it the “relying party”) does not have a certificate.  The latest version of CardSpace can be downloaded as part of the .NET Framework 3.5 from here.

For people using Mac and Linux clients, I look forward to the upcoming Internet Identity Workshop as an opportunity to catch up with my friends from Bandit, OpenInfoCard, Higgins and others about open source support for the same functionality.  I'll pass on any information I can at that time.

Once you watch the demo, more information is available here and here and here.  The code snippets shown are here.


Breached

My blog was hacked over the weekend.  It was apparently a cross-site scripting attack carried out through a vulnerability in WordPress.  WordPress has released a fix (Version 2.3.1) and I've now installed it.

ZDNet broke the news on Monday – I was awakened by PR people.  The headline read, “Microsoft privacy guru's site hacked”.  Fifteen minutes of fame:

IdentityBlog.com, a Web site run by Microsoft’s chief architect of identity and access, has been hacked and defaced.

The site, which is used by Microsoft’s Kim Cameron to promote discussion around privacy, access and security issues, now contains an “owned by me” message and a link to a third-party site (see screenshot).

Naturally there were more than a few congratulatory messages like this one from “Living in Tension” (whose tagline says he has “Christ in one hand, and the world in the other”):

Several years of working in the Information Technology world have unintentionally transformed me into an OSS, Linux, security zealot…

… Tasty little tidbits like this are just too good to be true

I wonder if he would have put it this way had he known my blog is run by a commercial host (TextDrive) using BSD Unix, MySQL, PHP and WordPress – all OSS products.  There is no Microsoft software involved at the server end – just open source.

The discussion list at ZDNet is amusing and sobering at the same time.  Of course it starts with a nice “ROTFLMAO” from someone called “Beyond the vista, a Leopard is stalking”: 

This one was priceless . How can Microsoft's Security Guru site get hacked ? Oh my all the MS fanboys claim that Microsoft products are so secure .

<NOT!>

But then “ye”, who checks his facts before opening his mouth, has a big ‘aha’:

How can this be? It runs on UNIX!

FreeBSD Apache 29-Jun-2007

Why it's the very same BSD UNIX upon which OS X is based. The very one we've heard makes OS X so ultra secure and hack proof.

This is too much for poor “Stalking Leopard” to bear:

How about explaining as to what a Microsoft employee would be doing using a UNIX server ? I don't think microsoft would be too happy hearing about their employee using… more than their inherently safe IIS server.

Gosh, is the “Stalking Leopard”  caught in a reverse-borg timewarp?

By this point “fredfarkwater” seems to have had enough:

What kind of F-in idiots write in this blog? Apple this or MS that or Linux there….. What difference doesn't it make what OS/platform you choose if it does the job you want it to? A computer is just a computer, a tool, you idiot brainless toads! A system is only as secure as you make it as stated here. You *ucking moron's need a life and to grow up and use these blogs for positive purposes rather than your childish jibbish!

But as passionate as Fred's advice might be, it doesn't seem to be able to save “Linux Geek”, who at this point proclaims:

This is a shining example why you should host on Linux + Apache .

For those who still don't get it, this shows the superiority of Linux and OSS against M$ products.

Back comes a salvo of “It's on Unix”, by mharr; “lol” by toadlife; and “Shut up Fool!” by John E. Wahd.

“Ye” and marksashton are similarly incredulous:

You do realize that you just made an idiot of yourself, right?

Man you are just too much. I'm sure all of the people who use Linux are embarassed by you and people like you who spew such nonsense.

Insults fly fast and furious until “Linux User” tells “Linux Geek”:

I really hope you know  just how idiotic you look with this post! What an ID10T.

It seems the last death rattle of the performance has sounded, but then there's a short “second breath” when “myOSX” has a brainwave:

Maybe he moved the site after it got hacked ???

After that's nixed by “ye”, “Scrat” concludes:

So it appears that dentityblog.com was being hosted by TextDrive, Inc using Apache on FreeBSD.

Bad Microsoft!

The truth of the matter is very simple.  I like WordPress, even if it has had some security problems, and I don't want to give it up.

My site practices Data Rejection, so there is no “honeypot” to protect.  My main interest is in having an application I like to use and being part of the blogosphere conversation.  If I'm breached from time to time, it will raise a few eyebrows, as it has done this week, but hopefully even that can help propagate my main message:  always design systems on the basis they will be breached – and still be safe.

Although in the past I have often hosted operational systems myself, in this project I have wanted to understand all the ins and outs and constraints of using a hosted service.  I'm pretty happy with TextDrive and think they're very professional.

After the breach at St. Francis dam
I accept that I'm a target.  Given the current state of blogging software I expect I'll be breached again (this is the second time my site has been hacked through a WordPress vulnerability). 

But I'm happy to work with WordPress and others to solve the problems, because there are no silver bullets when it comes to security, as I hope Linux Geek learns, especially in environments where there is a lot of innovation.

Zend PHP Information Cards

Dr. Dobb's Journal is dear to my heart.  My wife Adele Freedman, an architecture critic, always used to point to the copies I left lying around and tell our friends, “Check it out.  It's amazing to watch him read it.  No two words fit together.”

But to me it was like candy.  So it was exciting to read the following article today on Dobb's Portal:

Microsoft and Zend Technologies have announced a collaboration to enable support for information cards by PHP developers through a component built for Zend Framework. Using this as a stand-alone component or as part of the Framework, PHP developers will be able to specify a Web site's security policy and accept information cards from trusted third parties.

“Microsoft and Zend are making a commitment to deliver information card support to PHP developers, which will reduce development costs and help make the Web safer and more secure for people,” said Vijay Rajagopalan, principal architect for Platform & Interoperability Strategy at Microsoft.

The cooperative work on information cards extends Microsoft's previous interoperability efforts in this area. Microsoft, in collaboration with Fraunhofer Institute FOKUS and ThoughtWorks, has developed open source interoperability projects on information cards for systems based on Java and Ruby.

“Web sites developed on ASP.NET can already accept information cards,” Rajagopalan explained. “With this work, a Java-based Web site, for example, built on the Sun Java System Web Server, Apache Tomcat or IBM WebSphere Application Server can now accept a digital information card for security-enhanced identity. A Web site built on Ruby on Rails can accept an information card. There is also an open source information card library project implemented in C, developed by Ping Identity Corp.”

Information about Microsoft open source interoperability identity card projects can be found at:

When support for information cards within the Zend Framework (an open source PHP application framework for developing Web applications and Web services) is enabled, users who access PHP-enabled Web sites will receive consistent user control of their digital identities and improved confidence in the authentication process for remote applications, all with greater security than password-based Web logins offer. Zend Technologies’ implementation of information cards lets users provide their digital identities in a familiar, security-enhanced way. They are analogous to business cards, credit cards or membership cards that people use every day.

I guess everyone familiar with this blog knows I've developed a deep affection for PHP myself, so I'm very happy to see this.