Software that tries to intuit our identity…

I would like to hear more of Scott Lemon's ideas about how philosophical thinkers can help us figure out ways we can write software that intuits – this is my word and perhaps it is too rhetorical – our identity decisions for us…

I've heard a number of people talk about intelligent policy engines capable of doing this type of thing, but so far, I haven't seen one I would choose for my own personal use.

I certainly think you can have simplistic policy – configuration, really – that decides things like whether, having once decided to interact with an identity, you want to do so automatically in the future.

And I can understand policies along the lines of, “Trust the identifying assertions of people recommended to me by Scott for access to my discussion papers”.

And I'll even go along with, “Place items containing the words Viagra or Investment in the Spam folder”.
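
To make the distinction concrete, here is a minimal sketch of rules at this level of simplicity – plain configuration a user could read and edit. It is written in Python purely for illustration; the identity structure, the recommender name, and the keywords are all hypothetical, not drawn from any real system.

```python
# Hypothetical sketch of the three simple policies above.
# All names and structures here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Identity:
    name: str
    recommended_by: set = field(default_factory=set)  # who vouches for it

@dataclass
class PolicyState:
    previously_accepted: set = field(default_factory=set)

def auto_accept(identity: Identity, state: PolicyState) -> bool:
    """Once the user has accepted an identity, accept it automatically."""
    return identity.name in state.previously_accepted

def trust_recommender(identity: Identity, recommender: str = "Scott") -> bool:
    """Trust identifying assertions from people a named contact recommends."""
    return recommender in identity.recommended_by

def route_message(message_text: str,
                  keywords=("viagra", "investment")) -> str:
    """Place items containing flagged words in the Spam folder."""
    text = message_text.lower()
    return "Spam" if any(k in text for k in keywords) else "Inbox"
```

Each rule is a few lines its owner can read at a glance – which matters for the point that follows.
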

But in general I have become very suspicious of systems that purport to create policy that affects me without asking me for approval. One of the worst outcomes of such technology is that the user ends up living in a “magical system” – where decisions she doesn't understand are constraining her experience. Our systems need to be translucent – we should be able to see into them and understand what is going on.

But I'm probably ranting. I'm sure Scott meant that an engine would put forward policy proposals and the user would be asked to approve or reject them.
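
If that is what he meant, the loop might look like this sketch: the engine only proposes, and a user decision is the sole gate before any rule takes effect. Again, the function names and the console prompt are hypothetical illustrations, not anyone's actual design.

```python
# Hypothetical approve-before-apply loop: proposals become active policy
# only with the user's explicit consent; rejections are simply dropped.

def review_proposals(proposals, ask_user):
    """Return only the proposals the user explicitly approves.

    `proposals` is a list of human-readable rule descriptions;
    `ask_user` shows one proposal and returns True or False.
    """
    approved = []
    for rule in proposals:
        if ask_user(f"Apply this policy? {rule}"):
            approved.append(rule)  # user said yes: rule becomes active
    return approved

if __name__ == "__main__":
    proposals = [
        "Auto-accept identities you have accepted before",
        "Trust assertions from people Scott recommends",
    ]
    active = review_proposals(
        proposals,
        ask_user=lambda q: input(q + " [y/n] ").strip().lower() == "y",
    )
    print("Active policies:", active)
```

Because every active rule entered the system through an approval the user can remember and inspect, the system stays translucent rather than magical.
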
