More on the second generation of metadirectory

Oracle’s Clayton Donley has joined the Metadirectory discussion and maybe his participation will help clarify things.

He writes:

I was reading this posting from my friend and colleague, Phil Hunt, in which he talks about the ongoing discussion between Dave Kearns and Kim Cameron about the death of meta-directories.

Not only is he correct in pointing out that Kim’s definition of Meta 2.0 is exactly what virtual directory has been since 1.0, but it’s interesting to see that some virtual directory vendors continue to push something that looks very much like meta-directory 1.0.

Before we go further, I want to peek at how Clayton’s virtual directory works:

… If the request satisfies the in-bound security requirements, the next step is to invoke any global level mappings and plug-ins. Mappings and plug-ins have the ability to modify the operation, such as changing the name or value of attributes. The next step after global plug-ins is to determine which adapter(s) can handle the request. This determination is made based on the information provided in the operation.

The primary information used is the DN of the operation – the search base in a search, or the DN of the entry in all other LDAP operations like a bind or add. OVD will look at the DN and determine which adapters could potentially support an operation for that DN. This is possible because each adapter’s configuration declares the LDAP namespace it is responsible for.

In the case where multiple adapters can support the incoming DN namespace (for example a search whose base is the root of the directory namespace, such as dc=oracle,dc=com), then OVD will perform the operation on each adapter. The order of precedence is configurable based on priority, attributes or supported LDAP search filters.
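The adapter-selection step Clayton describes can be sketched roughly as follows. This is an illustrative mock-up, not Oracle OVD’s actual API: each adapter declares the DN suffix (namespace) it serves, and an incoming operation is routed to every adapter whose namespace overlaps the operation’s DN, ordered by configured priority.

```python
# Hypothetical sketch of DN-based adapter routing (names are illustrative,
# not OVD's real interfaces). Each adapter is responsible for one LDAP
# namespace; a search based at the root can match several adapters at once.

from dataclasses import dataclass

def dn_overlaps(base_dn: str, adapter_suffix: str) -> bool:
    """True if the operation's DN falls inside the adapter's namespace,
    or the adapter's namespace falls inside the search scope."""
    base = base_dn.lower().replace(" ", "")
    suffix = adapter_suffix.lower().replace(" ", "")
    return (base == suffix
            or base.endswith("," + suffix)    # DN is below the adapter suffix
            or suffix.endswith("," + base))   # adapter lives below the search base

@dataclass
class Adapter:
    name: str
    suffix: str     # LDAP namespace this adapter is responsible for
    priority: int   # configurable order of precedence

def route(base_dn: str, adapters: list[Adapter]) -> list[Adapter]:
    """Return all adapters that could serve this DN, highest priority first."""
    hits = [a for a in adapters if dn_overlaps(base_dn, a.suffix)]
    return sorted(hits, key=lambda a: a.priority)

adapters = [
    Adapter("hr", "ou=people,dc=oracle,dc=com", priority=1),
    Adapter("partners", "ou=partners,dc=oracle,dc=com", priority=2),
]

# A search based at the directory root matches both adapters;
# a search under ou=people matches only the "hr" adapter.
print([a.name for a in route("dc=oracle,dc=com", adapters)])
print([a.name for a in route("cn=kim,ou=people,dc=oracle,dc=com", adapters)])
```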

Pretty cool. But let’s do a historical reality check. The first metadirectory, which shipped twelve years ago, included the ability to do real-time queries that were dispatched to multiple LDAP systems depending on the query (and to several at once). The metadirectory provided the “glue” to know which directory service agents could answer which queries. The system performed the assembly of results across originating directory service agents – in other words, multiple LDAP services produced by multiple vendors.

And guess what? The distributed queries were accessed as part of “the metaverse”. The metaverse was in no way limited to “a local store”.

The metaverse was the joined information field comprising all the objects in the metadirectory. Only the smallest set of “core” attributes was stored in the local database or synchronized throughout the system. This set of attributes composed the “object root” – the things that MUST BE THE SAME in each of the applications and stores in a management continent. There actually aren’t that many of them. For example, in normal circumstances, my surname should be the same in all the systems within my enterprise. So it makes sense to synchronize surname between systems so that it actually stays the same over time.
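The core-attribute idea above can be made concrete with a minimal sketch, assuming nothing about any particular product: only the small “object root” set of attributes is synchronized from an authoritative entry out to connected systems, while each system’s application-specific attributes are left alone.

```python
# Illustrative sketch (not any vendor's API): synchronize only the "core"
# attributes -- the ones that MUST be the same everywhere, like surname --
# and leave application-specific attributes in their home systems.

CORE_ATTRIBUTES = {"surname", "givenName", "mail"}

def synchronize_core(authoritative: dict, connected_systems: list[dict]) -> None:
    """Push core attribute values from the authoritative entry to each
    connected system; application-specific attributes are untouched."""
    core_values = {k: v for k, v in authoritative.items()
                   if k in CORE_ATTRIBUTES}
    for system in connected_systems:
        system.update(core_values)

hr = {"surname": "Cameron", "mail": "kim@example.com", "payrollId": "91"}
crm = {"surname": "Camron", "accountRep": "east"}   # spelling has drifted

synchronize_core(hr, [crm])
print(crm)   # surname corrected; accountRep untouched; payrollId not copied
```

The point of keeping the core set small is exactly the one made above: most attributes are application-specific and should stay where they live, reachable by query rather than replication.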

As metadirectories started to compete in the marketplace, the problem of provisioning and managing core attributes came to predominate over that of connecting to application-specific ones. Basically, I think it was just early. That doesn’t mean one should counterpose metadirectory and virtual directory, or congratulate oneself too much for “owning” distributed query. The problem of distributed information is complex and needs multiple tools – even the dreaded “caching”.

Let me return to what I said would be the focus of “second generation metadirectory”:

Providing the framework by which next-generation applications can become part of the distributed data infrastructure. This includes publishing and subscription. But that isn’t enough. Other applications need ways to find it, name it, and so on.

If Clayton and Phil think virtual directories already do this, I can see that I wasn’t clear enough. So here are a few clarifications:

  • By “next generation application” I mean applications based on web service protocols. Our directories need to integrate completely into the web services fabric, and application developers must be able to interact with them without knowing LDAP.
  • Developers and users need places they can go to query for “core attributes”. They must be able to use those attributes to “locate” object metadata. Having done so, applications need to be able to understand what the known information content of the object is, and how they can reach it.
  • Applications need to be able to register the information fields they can serve up.
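To make the last two points more tangible, here is a hypothetical sketch of the kind of registration-and-discovery service they imply. Everything here (the `Registry` class, its method names) is invented for illustration; it is not a real protocol, just the shape of one: applications publish the information fields they can serve, and consumers use core attributes to locate where richer object metadata lives.

```python
# Hypothetical registration/discovery sketch. Applications register the
# fields they can serve up; other applications query by field name to
# learn which service endpoints can provide it. Names are illustrative.

from collections import defaultdict

class Registry:
    def __init__(self) -> None:
        # attribute name -> list of service endpoints that can serve it
        self._providers: dict[str, list[str]] = defaultdict(list)

    def register(self, endpoint: str, attributes: list[str]) -> None:
        """An application publishes which information fields it serves."""
        for attr in attributes:
            self._providers[attr].append(endpoint)

    def locate(self, attribute: str) -> list[str]:
        """A consumer asks where a given field can be obtained."""
        return list(self._providers.get(attribute, []))

registry = Registry()
registry.register("https://hr.example.com/people", ["surname", "employeeId"])
registry.register("https://badge.example.com/photos", ["photo"])

print(registry.locate("surname"))   # -> ["https://hr.example.com/people"]
print(registry.locate("photo"))     # -> ["https://badge.example.com/photos"]
```

In a real web-services fabric the registry itself would of course be a service with naming, security and subscription semantics, which is exactly the framework the bullet points above are calling for.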

Today’s virtual directories just don’t do this any better or any worse than metadirectories do. Virtual directories expose some of the fabric, just as today’s metadirectories do, but they don’t get at the big prize. It’s what I have called the unified field of information. Back in the 90’s more than one analyst friend made fun of me for thinking this was possible. But today it is not only possible, it is necessary.

Published by

Kim Cameron

Work on identity.