I would like to hear more of Scott Lemon's ideas about how philosophical thinkers can help us figure out ways we can write software that intuits – this is my word and perhaps it is too rhetorical – our identity decisions for us…
I've heard a number of people talk about intelligent policy engines capable of doing this type of thing, but so far, I haven't seen one I would choose for my own personal use.
I certainly think you can have simplistic policy – configuration, really – that decides things like whether, having once decided to interact with an identity, you want to do so automatically in the future.
And I can understand policies along the lines of, “Trust the identifying assertions of people recommended to me by Scott for access to my discussion papers”.
And I'll even go along with, “Place items containing the words Viagra or Investment in the Spam folder”.
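To make the distinction concrete, here is a minimal sketch – my own illustration, not anything Scott proposed – of what these simplistic, user-readable policies might look like as plain configuration. All names here are invented for the example:

```python
# Transparent policy as plain data: the user can read exactly
# what each rule does. Names and structure are purely illustrative.

SPAM_KEYWORDS = {"viagra", "investment"}
TRUSTED_RECOMMENDERS = {"Scott"}

def classify_message(subject: str) -> str:
    """Place items containing flagged words in the Spam folder."""
    words = {w.strip(".,!?").lower() for w in subject.split()}
    return "Spam" if words & SPAM_KEYWORDS else "Inbox"

def may_access_papers(identity: dict) -> bool:
    """Trust identifying assertions of people a trusted party recommends."""
    return identity.get("recommended_by") in TRUSTED_RECOMMENDERS

print(classify_message("Cheap Viagra now"))            # Spam
print(classify_message("Meeting notes"))               # Inbox
print(may_access_papers({"recommended_by": "Scott"}))  # True
```

The point of the sketch is that every rule is inspectable: nothing fires that the user couldn't read for herself.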
But in general I have become very suspicious of systems that purport to create policy that affects me without asking me for approval. One of the worst outcomes of such technology is that the user ends up living in a “magical system” – where decisions she doesn't understand are constraining her experience. Our systems need to be translucent – we should be able to see into them and understand what is going on.
But I'm probably ranting. I'm sure Scott meant that an engine would put forward policy proposals and the user would be asked to approve or reject them.
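If that is the model, the interaction loop is easy to picture. Here is a hypothetical sketch (again, my illustration, not a description of any real engine) where the engine only ever proposes and the user always has the last word:

```python
# Hypothetical proposal/approval loop: the engine suggests policies,
# and only those the user explicitly approves take effect.

def review_proposals(proposals, approve):
    """Return the subset of proposed policies the user approved.

    `approve` is a callback that presents one proposal to the user
    and returns True (accept) or False (reject)."""
    active = []
    for proposal in proposals:
        if approve(proposal):
            active.append(proposal)  # becomes visible, inspectable policy
    return active

proposals = [
    "Auto-accept future interactions with identities accepted before",
    "File messages containing 'Viagra' under Spam",
]

# Simulate a user who approves only the first proposal.
active = review_proposals(proposals, approve=lambda p: p.startswith("Auto"))
print(active)  # only the approved policy is in force
```

Nothing in the active set got there without the user saying yes – which is the translucency I'm asking for.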