The term “identity rights agreements” was coined by Phil Windley, Doc Searls, and friends in a discussion about identity after OSCON last summer. The full story is in a blog post with that title by Phil.
At the Internet Identity Workshop last October, we held an open space session by that name because a number of Identity Gang folks have been talking about the general concept for several years now. In particular, from an XRI/XDI perspective, identity rights agreements fit perfectly with the concept of data sharing controls embodied in link contracts.
Now the idea is moving from concept to reality. Identity rights agreements are becoming one of the galvanizing forces for a revitalized Identity Commons. One of the reasons is the oft-used analogy that “Identity Commons should be to identity rights what Creative Commons is to copyright”.
I want to take a moment to explain why I believe this analogy may be so profound — and thus why identity rights agreements may become one of the hottest topics in digital identity.
The trigger for these thoughts was Bob Blakley’s post On the Absurdity of Owning One’s Identity, in which he argues that Kim Cameron’s First Law of Identity is, to use another legal term, “unenforceable”. While I think Bob makes a number of strong points in his post (and illustrates them with fascinating, richly researched examples — who says the art of the essay is dead?), I ultimately disagree with his conclusion, and only because I think he misinterprets the importance of the first word of the First Law:
Technical identity systems must only reveal information identifying a user with the user’s consent.
In other words, although much of what Bob says is true, it applies only to the people and businesses that operate identity systems and collect and disseminate identity data, not to the technical systems themselves, which is what I believe Kim meant the First Law to govern.
But that’s a different subject. What really struck me about Bob’s essay was the knock-down-brilliant points he makes about the fundamental privacy concept of “consent”. To quote his introduction to this topic:
Negotiating the terms on which you will disclose self-image information is what Consent is all about.
In many cases there are laws and regulations constraining what an organization can do with information it collects about you in situations like this, but you don’t control the content of those laws and regulations – so you’re not making the rules (and in fact the interests of society and the interests of corporations influence the content of laws and regulations at least as strongly as the interests of individuals).
If you want to control your identity based on consent, you have to decide between two approaches:
- Build one set of terms which covers all uses of your information, and let an automated system take care of negotiating your terms and enforcing your rules. In this case, you need to figure out in advance what all the possible scenarios for use of your identity are, and write a policy which covers each scenario.
- Negotiate terms manually each time someone asks for your information. In this case, you need to get notified each time someone tries to use your identity, and make a decision about whether or not to grant consent.
Case 1 clearly isn’t going to work all the time; you can’t know in advance what benefits are going to be offered in exchange for identity information, and you can’t know in advance what risks are going to be created by giving that information out – so no matter what your policy is, there will always be cases it doesn’t handle correctly. This means there will be lots of exceptions to your policy, and when these exceptions arise you’ll have to fall back on case 2.
Case 2 doesn’t really work either. We know because we’ve tried it. Look here, or here, or here, or here for examples of what you’re already being asked to consent to. How well do you understand these terms? How likely are you to take the time to clear up the things you’re not sure about? How likely are you to say “no”?
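To make Bob’s two cases concrete, here is a minimal sketch of how they relate in practice. Everything in it is illustrative — the policy format, the names, and the fallback behavior are my assumptions, not part of any real identity system Bob or Kim describes: case 1 is a policy written in advance, and every request the policy doesn’t cover falls through to case 2, a manual decision.

```python
# Illustrative sketch of Bob's two consent cases (hypothetical names/format):
# case 1 is an automated policy decided in advance; any request it doesn't
# cover falls back to case 2, a manual per-request decision by the user.

AUTOMATED_POLICY = {
    # (requester, purpose) -> decision written in advance (case 1)
    ("my-bank", "fraud-check"): True,
    ("my-bank", "marketing"): False,
}

def ask_user(requester: str, purpose: str) -> bool:
    """Case 2: notify the user and ask for an explicit yes/no.
    Stubbed out here; a real system would prompt the user."""
    print(f"{requester} wants your data for {purpose!r} - allow? (manual decision)")
    return False  # when in doubt, refuse

def consent(requester: str, purpose: str) -> bool:
    decision = AUTOMATED_POLICY.get((requester, purpose))
    if decision is not None:
        return decision  # covered by the pre-written policy (case 1)
    # Exception the policy didn't anticipate: fall back to case 2
    return ask_user(requester, purpose)
```

The sketch makes Bob’s point visible in the structure itself: since `AUTOMATED_POLICY` can never enumerate every future `(requester, purpose)` pair, the fallback to `ask_user` is unavoidable, and case 2 carries all the obscurity and burden problems he describes.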
Bob then goes on to explain that there are three forces behind his assessment of the problems with consent:
The forces at work here are obscurity, coercion, and burdens.
I encourage anyone who’s interested in this topic to read Bob’s arguments in full. But the one I want to highlight here is:
Because Identity Allocates Risk, society makes rules to make sure Identity is used fairly. Two typical rules are (1) someone who wants to use your information has to tell you what it will be used for (“notice”), and (2) someone who wants to use your information in a way that might create risks for you has to get your permission (“consent”). You have to pay close attention here: the rules don’t say that businesses and other parties can’t create risks for you – all the rules say is that other parties have to tell you when they create risks for you, and they have to get you to agree to the creation of the risks.
These rules create obscurity, because in business, the language of risk is law. The bank makes lots of loans, and therefore it is exposed to lots of risk. Because it’s exposed to lots of risk, the bank is willing to spend some money to protect itself against that risk. It spends that money on people who speak the language of risk – lawyers – and those lawyers write consent agreements that let the business do what it needs to do profitably (in this case, it needs to create risks for you by using your identity information) without breaking the rules.
You probably aren’t a lawyer, so the language in which consent agreements are written is foreign, and confusing, to you. On the other hand, you don’t value your privacy enough to hire your own lawyer each time you encounter a consent disclosure – so you end up doing something (reading a complicated legal agreement which allocates risks between you and the corporation) which you’re not really qualified to do, and it’s confusing and frustrating (Don Davis calls this kind of situation a “compliance defect”).
Bingo! Now, if you haven’t done so already, go here right now and read Phil’s very simple and intuitive description of the purpose of an identity rights agreement.
The two fit together like hand and glove. What identity rights agreements could solve — possibly in a very short period of time — is the problem Bob labels obscurity. By establishing a small number of very well-known identity rights agreements, and giving each a simple, highly recognizable visual icon that doesn’t require a user to read A SINGLE WORD, we take away obscurity as a tool for all but eliminating the value of consent.
Why could identity rights agreements catch on so quickly? For the simple reason that sites that want to give users the real power of consent will advertise that fact by posting identity rights agreement icons right on the Web forms where they collect personal data. Just as millions of Internet users were first exposed to Creative Commons licenses by seeing a CC license icon posted on a blog or Web page they were reading, they will be exposed to Identity Commons identity rights agreement icons on Web forms. One click through to see what they mean, and I predict the reaction will be: “Wonderful! I hated those indecipherable legal agreements anyway. I’m going to support sites that use these icons to let me know they are being straight with me about the use of my personal data.”
And suddenly sites become motivated to choose this simpler and more user-friendly form of consent — possibly leading to one of those rare but real “virtuous cycles” (to use a term I first learned from Bill Washburn) that can infect an entire ecosystem.
That’s why — despite my current 150%-of-my-time focus on establishing fully operational XRI infrastructure — I plan to invest time in supporting the creation of the first operational set of identity rights agreements at the revitalized Identity Commons. I’m challenging the rest of the current and new Identity Commons supporters to do the same — I want us to present the first draft set at the next Internet Identity Workshop in May.