The following appeal reached us this week from Avaaz and is fully endorsed by Privacy First:
"Right now the US Congress wants to secretly pass a bill that would let it spy on users anywhere in the world -- and it hopes the world won't notice. Last time we helped fight off the attack on the internet; let's do it again.
More than 100 members of Congress support a bill (CISPA) that gives private companies and the US government the right to spy on any of us without a warrant, at any time, for as long as they want. This is the third time the US Congress has tried to attack our internet freedom. But we helped defeat SOPA, and PIPA -- and now we can defeat this new 'Big Brother law'.
Our global outcry has played a leading role before in protecting the internet from governments that want to track and control us online. Let's join hands once more and defeat this bill for good. Sign the petition and forward it to everyone who uses the internet: http://www.avaaz.org/nl/stop_cispa/?fp
The Cyber Intelligence Sharing and Protection Act (CISPA) provides that, on the mere suspicion of a cyber threat, the companies that give us access to the internet have the right to collect information about our online activities, share it with the government, refuse to notify us, and then enjoy immunity from prosecution for privacy violations or any other illegal act. It amounts to a reckless demolition of the privacy we all rely on in our daily emails, Skype chats, searches and so on.
But we know that the US Congress fears the world's reaction. This is the third time it has tried to dress its attack on our internet freedom in a new guise in order to push it through anyway. Each time the name of the bill is changed, in the hope that citizens won't notice. Internet rights groups such as the Electronic Frontier Foundation have already condemned the bill for violating privacy protections. It is time for us to speak out.
Sign the petition to Congress against CISPA. As soon as we have 250,000 signatures, we will deliver our appeal to each of the 100 US representatives who support the bill: http://www.avaaz.org/nl/stop_cispa/?fp
Internet freedom faces threats from governments around the world every day, but the greatest damage can be done in the US, because such a large part of the internet's infrastructure is located there. Our movement has proven time and again that global public opinion helps stop US threats to our internet. Let's do it one more time."
From the column '[x] ongeschikt' ('[x] unsuitable') by Arjen Kamphuis (Webwereld, 7 July 2011):
"Control is an illusion
Where the assumption used to be that part of the problem of control over data could be managed by using local servers, that too unfortunately turns out to be an illusion. All 'cloud' services offered by companies based in the US fall under American law, even if the servers are physically located in another country. And American law these days is rather, let's say, problematic. On suspicion of any involvement in 'terrorism', specific evidence not required, systems can be shut down or taken over.
Without warning, without any opportunity for rebuttal, and without any judicial review. The term 'terrorism' has meanwhile been stretched so far that someone who breaks no American law, is not a US citizen and is not on US territory can still be a 'terrorist'. Simply because one of the many three-letter agencies (FBI, CIA, NSA, DIA, DHS, TSA, etc.) says so. The EU is not pleased, but apparently will not go so far as to advise its citizens and fellow governments to stop using such services.
Involved in child pornography
The long arm of the US Patriot Act reaches even further than the servers of American companies on European soil. Domains are sometimes 'seized' and fitted with a sticker: 'this site was involved in trading child pornography'. Try explaining that to your business contacts as an entrepreneur or non-profit. Merely using a .com, .org or .net extension for your domain is enough to fall under American law and be liable to extradition. As a European you can therefore be extradited for breaking American law while you were simply at home. A .com domain effectively turns your server into American territory.
We already knew that proprietary platforms such as Windows and Google Docs-like solutions were unsuitable for truly important matters such as running governments or critical infrastructure. Now it turns out that any service delivered via a .com/.org/.net domain places you de facto under foreign surveillance.
Nice and close to home
The solution? Run as much open source software as possible on servers close to home; fortunately there are plenty of competent hosting companies, large and small, in the Netherlands and Europe. Stick to .nl or, if you really want to be bulletproof, take a .ch domain. These are managed by a Swiss foundation, and these people take their independence very seriously. It is no coincidence that WikiLeaks now runs under wikileaks.ch after .org and other domains got a one-way ticket to Guantanamo Bay.
And if you still want to use Google Docs, Facebook, Evernote, Mindmeister, Ning, Hotmail or Office 365, do so in the awareness that you can no longer have any expectation of privacy or of any other form of civil rights. Fine for the administration of your tennis club, but [x] unsuitable for anything that really matters."
Read the whole column at Webwereld HERE.
When it comes to fending off cyber attacks, China and other such regimes are said to lead the way, since civil rights do not apply there and the government can monitor online activity without difficulty, using techniques such as deep packet inspection. Users of the proposed "secure internet", from which banks, government suppliers, sensitive infrastructure and the government itself would operate, would have to surrender their privacy and identify themselves, much as when entering a military base.
The current web will continue to exist for people who want to remain anonymous. Security expert James Mulvenon proposes a three-tier internet: "For people who want to do online banking, there is no anonymity." Users would have to use real names and digital identities to log in there. At the middle tier, for example a .edu domain, fewer personal details would be asked of visitors. "At the bottom you can keep wandering around like a hobbit."
Source: security.nl, 11 July 2011.
By our guest columnist.
The Internet: the digital highway that nobody owns. And even though there is no owner, the technology of the Internet develops at breakneck speed. This is because a number of organizations cooperate intensively: the W3C (World Wide Web Consortium), IETF, IESG, IAB, ISOC and IANA. The US government and companies in telecommunications, satellites, networking and so on also contribute.
Every day you experience the convenience of the Internet. People surf, chat, email and sign up with abandon. They are not embarrassed to put (all) their private data on the Internet, sometimes including photos or videos that leave nothing to the imagination. Unfortunately, all this free exchange of information also has a downside. Not everything can be put out in the open on the Internet; after all, large commercial organizations surf the Internet too. If information you have posted online turns out to be harmful to the company you work for, that is grounds for dismissal. Parents, too, should do more to protect their children against, and inform them about, the Internet. After all, everything posted keeps circulating on the Internet forever. Do you know what will be done in the future with that abundance of information, and by whom?
The development of the Internet, and its dangers, has run as follows:
- Access to the Internet is low-threshold and open to everyone. At first (web 1.0) the dotcom companies determined what was published on the Internet. The data could be checked, and the company or site owner in question was responsible, and therefore also liable, for the content.
- Today (web 2.0) everyone is involved in the process. People add information themselves and exchange it via business, social and other sites and media such as Twitter. Sometimes someone creates a fake account in another person's name on Facebook or Twitter and abuses it with fictitious information. This cannot be undone, and unfortunately the information keeps circulating forever. A further complication is that national and international legislation differ. When a great deal of knowledge and information is concentrated in one place, as on LinkedIn and Hyves for example, things become more difficult still. The question is how the company handles this information, what it intends to do with it, and how reliable the information on the Internet actually is.
- The next step (web 3.0) could be the large-scale profiling of data and images. This is made easier by the use of one large, central platform (through cloud computing: a handful of servers spread across the world) on which many websites converge and local servers become entirely superfluous. All personal data known on the Internet can then be linked quickly and easily, producing a fair (though incomplete) picture of individuals. Selecting people by characteristics becomes trivial. The Internet is merciless: everything is preserved, and one mistake is a mistake forever. Individual citizens have no say over the destruction of the private data they themselves have entered on the Internet, nor is there any legislation protecting the unsuspecting citizen. A marketing machine could buy up this data, after which you are bombarded with an overload of offers and opportunities.
- The final step reveals the second, true face of the Internet: where all knowledge is gathered, there lies the power. At present a handful of leading companies jointly develop the platform; the free-market economy will sharply reduce that number. It is therefore desirable to regulate, worldwide and by law, the interests of the individual citizen and the problems surrounding a central database, with the privacy and protection of the individual citizen prevailing over economic interests.
Once again we ask: what is the goal, and does the end justify the means? LinkedIn, meanwhile, has gone public!
Privacy First considers Kim Cameron's discovery groundbreaking and therefore reproduces his article in full. You can read his website and blogs here.
The Internet was built without a way to know who and what you are connecting to. This limits what we can do with it and exposes us to growing dangers. If we do nothing, we will face rapidly proliferating episodes of theft and deception that will cumulatively erode public trust in the Internet.
This paper is about how we can prevent the loss of trust and go forward to give Internet users a deep sense of safety, privacy, and certainty about whom they are relating to in cyberspace. Nothing could be more essential if Web-based services and applications are to continue to move beyond “cyber publication” and encompass all kinds of interaction and services. Our approach has been to develop a formal understanding of the dynamics causing digital identity systems to succeed or fail in various contexts, expressed as the Laws of Identity. Taken together, these laws define a unifying identity metasystem that can offer the Internet the identity layer it so obviously requires.
The ideas presented here were extensively refined through the Blogosphere in a wide-ranging conversation documented at www.identityblog.com that crossed many of the conventional fault lines of the computer industry, and in various private communications. In particular I would like to thank Arun Nanda, Andre Durand, Bill Barnes, Carl Ellison, Caspar Bowden, Craig Burton, Dan Blum, Dave Kearns, Dave Winer, Dick Hardt, Doc Searls, Drummond Reed, Ellen McDermott, Eric Norlin, Esther Dyson, Fen Labalme, Identity Woman Kaliya, JC Cannon, James Kobielus, James Governor, Jamie Lewis, John Shewchuk, Luke Razzell, Marc Canter, Mark Wahl, Martin Taylor, Mike Jones, Phil Becker, Radovan Janocek, Ravi Pandya, Robert Scoble, Scott C. Lemon, Simon Davies, Stefan Brands, Stuart Kwan and William Heath.
The Internet was built without a way to know who and what you are connecting to.
A Patchwork of Identity “One-Offs”
Since this essential capability is missing, everyone offering an Internet service has had to come up with a workaround. It is fair to say that today’s Internet, absent a native identity layer, is based on a patchwork of identity one-offs.
As use of the Web increases, so does users’ exposure to these workarounds. Though no one is to blame, the result is pernicious. Hundreds of millions of people have been trained to accept anything any site wants to throw at them as being the “normal way” to conduct business online. They have been taught to type their names, secret passwords, and personal identifying information into almost any input form that appears on their screen.
There is no consistent and comprehensible framework allowing them to evaluate the authenticity of the sites they visit, and they don’t have a reliable way of knowing when they are disclosing private information to illegitimate parties. At the same time they lack a framework for controlling or even remembering the many different aspects of their digital existence.
Criminalization of the Internet
People have begun to use the Internet to manage and exchange things of progressively greater real-world value. This has not gone unnoticed by a criminal fringe that understands the ad hoc and vulnerable nature of the identity patchwork -- and how to subvert it. These criminal forces have increasingly professionalized and organized themselves internationally.
Individual consumers are tricked into releasing banking and other information through “phishing” schemes that take advantage of their inability to tell who they are dealing with. They are also induced to inadvertently install “spyware” which resides on their computers and harvests information in long term “pharming” attacks. Other schemes successfully target corporate, government, and educational databases with vast identity holdings, and succeed in stealing hundreds of thousands of identities in a single blow. Criminal organizations exist to acquire these identities and resell them to a new breed of innovators expert in using them to steal as much as possible in the shortest amount of time. The international character of these networks makes them increasingly difficult to penetrate and dismantle.
Phishing and pharming are now thought to be among the fastest growing segments of the computer industry, with a compound annual growth rate (CAGR) of 1000%. (For example, the Anti-Phishing Working Group "Phishing Activity Trends Report" of February 2005 cites a growth rate in phishing sites of 26% per month between July and February, which represents a compound annual growth rate of 1600%.) Without a significant change in how we do things, this trend will continue.
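The compounding arithmetic behind these figures is easy to verify: a 26% month-over-month growth rate, sustained for twelve months, compounds to roughly a sixteen-fold increase over the year, which is where a figure in the 1500-1600% range comes from:

```python
# Compound growth: a 26% month-over-month increase sustained for 12 months.
monthly_rate = 0.26

# Growth factor after one year of monthly compounding.
annual_factor = (1 + monthly_rate) ** 12

# Expressed as a percentage increase over the starting value.
annual_growth_pct = (annual_factor - 1) * 100

print(round(annual_factor, 1))   # roughly 16x the starting value
print(round(annual_growth_pct))  # on the order of 1500%
```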
It is essential to look beyond the current situation, and understand that if the current dynamics continue unchecked, we are headed toward a deep crisis: the ad hoc nature of Internet identity cannot withstand the growing assault of professionalized attackers.
A deepening public crisis of this sort would mean the Internet would begin to lose credibility and acceptance for economic transactions when it should be gaining that acceptance. But in addition to the danger of slipping backwards, we need to understand the costs of not going forward. The absence of an identity layer is one of the key factors limiting the further settlement of cyberspace.
Further, the absence of a unifying and rational identity fabric will prevent us from reaping the benefits of Web services.
Web services have been designed to let us build robust, flexible, distributed systems that can deliver important new capabilities, and evolve in response to their environment. Such living services need to be loosely coupled and organic, breaking from the paradigm of rigid premeditation and hard wiring. But as long as digital identity remains a patchwork of ad hoc one-offs that must still be hard-wired, all the negotiation and composability we have achieved in other aspects of Web services will enable nothing new. Knowing who is connecting with what is a must for the next generation of cyber services to break out of the starting gate.
It’s Hard to Add an Identity Layer
There have been attempts to add more standardized digital identity services to the Internet. And there have been partial successes in specific domains -- like the use of SSL to protect connections to public sites, or of Kerberos within enterprises. And recently, we have seen successful examples of federation in business-to-business identity sharing.
But these successes have done little to transform the identity patchwork into a rational fabric extending across the Internet.
Why is it so hard to create an identity layer for the Internet? Mainly because there is little agreement on what it should be and how it should be run. This lack of agreement arises because digital identity is related to context, and the Internet, while being a single technical framework, is experienced through a thousand kinds of content in at least as many different contexts -- all of which flourish on top of that underlying framework. The players involved in any one of these contexts want to control digital identity as it impacts them, in many cases wanting to prevent spillover from their context to any other.
Enterprises, for example, see their relationships with customers and employees as key assets, and are fiercely protective of them. It is unreasonable to expect them to restrict their own choices or give up control over how they create and represent their relationships digitally. Nor has any single approach arisen which might serve as an obvious motivation to do so. The differing contexts of discrete enterprises lead to a requirement that they be free to adopt different kinds of solutions. Even ad hoc identity one-offs are better than an identity framework that would be out of their control.
Governments too have found they have needs that distinguish them from other kinds of organization. And specific industry clusters -- "verticals" like the financial industry -- have come to see they have unique difficulties and aspirations when it comes to maintaining digital relationships with their customers.
As important as these institutions are, the individual -- as consumer -- gets the final say about any proposed cyber identity system. Anything they don't like and won't -- or can't -- use will inevitably fail. Someone else will come along with an alternative.
Consumer fears about the safety of the Internet prevent many from using credit cards to make online purchases. Increasingly, malware and identity theft have made privacy issues of paramount concern to every Internet user. This has resulted in increased awareness and readiness to respond to larger privacy issues.
As the virtual world has evolved, privacy specialists have developed nuanced and well-reasoned analyses of identity from the point of view of the consumer and citizen. In response to their intervention, legal thinkers, government policy makers, and elected representatives have become increasingly aware of the many difficult privacy issues facing society as we settle cyberspace. This has already led to vendor sensitivity and government intervention, and more is to be expected.
In summary, as grave as the dangers of the current situation may be, the emergence of a single simplistic digital identity solution as a universal panacea is not realistic.
Even if a miracle occurred and the various players could work out some kind of broad cross-sector agreement about what constitutes perfection in one country, the probability of extending that universally across international borders would be zero.
An Identity Metasystem
In the case of digital identity, the diverse needs of many players demand that we weave a single identity fabric out of multiple constituent technologies. Although this might initially seem daunting, similar things have been done many times before as computing has evolved.
For instance, in the early days of personal computing, application builders had to be aware of what type of video display was in use, and of the specific characteristics of the storage devices that were installed. Over time, a layer of software emerged that was able to provide a set of services abstracted from the specificities of any given hardware. The technology of “device drivers” enabled interchangeable hardware to be plugged in as required. Hardware became “loosely coupled” to the computer, allowing it to evolve quickly since applications did not need to be rewritten to take advantage of new features.
The same can be said about the evolution of networking. At one time applications had to be aware of the specific network devices in use. Eventually the unifying technologies of sockets and TCP/IP emerged, able to work with many specific underlying systems (Token Ring, Ethernet, X.25 and Frame Relay) -- and even with systems, like wireless, that were not yet invented.
Digital identity requires a similar approach. We need a unifying identity metasystem that can protect applications from the internal complexities of specific implementations and allow digital identity to become loosely coupled. This metasystem is in effect a system of systems that exposes a unified interface much like a device driver or network socket does. That allows one-offs to evolve towards standardized technologies that work within a metasystem framework without requiring the whole world to agree a priori.
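By analogy with device drivers and sockets, the metasystem's unified interface can be sketched in ordinary code. The class and function names below (`IdentityProvider`, `KerberosProvider`, and so on) are purely illustrative assumptions, not part of any real specification; the point is only that applications depend on one abstract interface while concrete identity technologies plug in underneath:

```python
# Illustrative sketch: applications code against the abstract interface,
# while concrete identity technologies act as interchangeable "drivers".
from abc import ABC, abstractmethod


class IdentityProvider(ABC):
    """Unified interface the metasystem exposes, like a device driver API."""

    @abstractmethod
    def get_claims(self, subject: str) -> dict:
        ...


class KerberosProvider(IdentityProvider):  # hypothetical adapter
    def get_claims(self, subject):
        return {"subject": subject, "source": "kerberos"}


class X509Provider(IdentityProvider):      # hypothetical adapter
    def get_claims(self, subject):
        return {"subject": subject, "source": "x509"}


def authorize(provider: IdentityProvider, subject: str) -> dict:
    # Application code stays loosely coupled: it never inspects which
    # concrete identity technology produced the claims.
    return provider.get_claims(subject)
```

Swapping `KerberosProvider` for `X509Provider` requires no change to `authorize`, which is the loose coupling the metasystem aims for.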
Understanding the Obstacles
To restate our initial problem, the role of an identity metasystem is to provide a reliable way to establish who is connecting with what -- anywhere on the Internet.
We have observed that various types of systems have successfully provided identification in specific contexts. Yet despite their success they have failed to attract usage in other scenarios. What factors explain these successes and failures? Moreover, what would be the characteristics of a solution that would work at Internet scale? In answering these questions, there is much to be learned from the successes and failures of various approaches since the 1970s.
This investigation has led to a set of ideas called the Laws of Identity. We chose the word "laws" in the scientific sense of hypotheses about the world -- resulting from observation -- which can be tested and are thus disprovable. (We consciously avoided the words "proposition," meaning something proven through logic rather than experiment, and "axiom," meaning something self-evident.) The reader should bear in mind that we specifically did not want to denote legal or moral precepts, nor embark on a discussion of the "philosophy of identity." (All three areas are of compelling interest, but it is necessary to tightly focus the current discussion on matters that are directly testable and applicable to solving the imminent crisis of the identity infrastructure.)
These laws enumerate the set of objective dynamics defining a digital identity metasystem capable of being widely enough accepted that it can serve as a backplane for distributed computing on an Internet scale. As such, each law ends up giving rise to an architectural principle guiding the construction of such a system.
Our goals are pragmatic. When we postulate the Law of User Control and Consent, for example, it is because experience tells us: a system that does not put users in control will -- immediately or over time -- be rejected by enough of them that it cannot become and remain a unifying technology. How this law meshes with values is not the relevant issue.
Like the other laws, this one represents a contour limiting what an identity metasystem must look like -- and must not look like -- given the many social formations and cultures in which it must be able to operate. Understanding the laws can help eliminate a lot of doomed proposals before we waste too much time on them.
The laws are testable. They allow us to predict outcomes, and we have done so consistently since proposing them. They are also objective, i.e., they existed and operated before they were formulated. That is how the Law of Justifiable Parties, for example, can account for the successes and failures of the Microsoft Passport identity system.
The Laws of Identity, taken together, define the architecture of the Internet’s missing identity layer.
Many people have thought about identity, digital identities, personas, and representations. In proposing the laws we do not expect to close this discussion. However, in keeping with the pragmatic goals of this exercise we define a vocabulary that will allow the laws themselves to be understood.
What is a Digital Identity?
We will begin by defining a digital identity as a set of claims made by one digital subject about itself or another digital subject. We ask the reader to let us define what we mean by a digital subject and a set of claims before examining this further.
What Is a Digital Subject?
The Oxford English Dictionary (OED) defines a subject as:
“A person or thing that is being discussed, described or dealt with.”
So we define a digital subject as:
“A person or thing represented or existing in the digital realm which is being described or dealt with.”
Much of the decision-making involved in distributed computing is the result of "dealing with" an initiator or requester. And it is worth pointing out that the digital world includes many subjects that need to be "dealt with" other than humans, including:
- Devices and computers (which allow us to penetrate the digital realm in the first place)
- Digital resources (which attract us to it)
- Policies and relationships between digital subjects (e.g., between humans and devices or documents or services)
The OED goes on to define subject, in a philosophical sense, as the “central substance or core of a thing as opposed to its attributes.” As we shall see, “attributes” are the things expressed in claims, and the subject is the central substance thereby described.
(We have selected the word subject in preference to alternatives such as “entity,” which means “a thing with distinct and independent existence.” The independent existence of a thing is a moot point hereï¿½it may well be an aspect of something else. What matters is that a relying party is dealing with the thing and that claims are being made about it.)
What Is a Claim?
A claim is:
“An assertion of the truth of something, typically one which is disputed or in doubt.”
Some examples of claims in the digital realm will likely help:
- A claim could just convey an identifier -- for example, that the subject's student number is 490-525, or that the subject's Windows name is REDMOND\kcameron. This is the way many existing identity systems work.
- Another claim might assert that a subject knows a given key -- and should be able to demonstrate this fact.
- A set of claims might convey personally identifying information -- name, address, date of birth and citizenship, for example.
- A claim might simply propose that a subject is part of a certain group -- for example, that she has an age less than 16.
- And a claim might state that a subject has a certain capability -- for example, to place orders up to a certain limit, or modify a given file.
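All of these examples share one small shape: an issuing subject, a subject the claim is about, and an asserted attribute/value pair. A minimal sketch of that structure might look as follows; the field names are our own illustration, not any standard's schema:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Claim:
    """One assertion made by one digital subject about another (or itself)."""
    issuer: str      # the digital subject making the claim
    subject: str     # the digital subject the claim is about
    attribute: str   # what is being asserted
    value: object    # the asserted value


# Identifier claim, as in many existing identity systems.
student_id = Claim("university", "alice", "student_number", "490-525")

# Group-membership claim that avoids disclosing the exact birth date.
under_16 = Claim("school", "alice", "age_under_16", True)

# Capability claim: an order limit rather than an identity.
order_limit = Claim("employer", "alice", "order_limit", 500)
```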
The concept of “being in doubt” grasps the subtleties of a distributed world like the Internet. Claims need to be subject to evaluation by the party depending on them. The more our networks are federated and open to participation by many different subjects, the more obvious this becomes.
The use of the word claim is therefore more appropriate in a distributed and federated environment than alternate words such as "assertion," which means "a confident and forceful statement of fact or belief." (OED) In evolving from a closed domain model to an open, federated model, the situation is transformed into one where the party making an assertion and the party evaluating it may have a complex and even ambivalent relationship. In this context, assertions need always be subject to doubt -- not only doubt that they have been transmitted from the sender to the recipient intact, but also doubt that they are true, and doubt that they are even of relevance to the recipient.
Advantages of a Claims-Based Definition
The definition of digital identity employed here encompasses all the known digital identity systems and therefore allows us to begin to unify the rational elements of our patchwork conceptually. It allows us to define digital identity for a metasystem embracing multiple implementations and ways of doing things.
In proffering this definition, we recognize it does not jibe with some widely held beliefsï¿½for example, that within a given context, identities have to be unique. Many early systems were built with this assumption, and it is a critically useful assumption in many contexts. The only error is in thinking it is mandatory for all contexts.
By way of example, consider the relationship between a company like Microsoft and an analyst service that we will call Contoso Analytics. Let’s suppose Microsoft contracts with Contoso Analytics so anyone from Microsoft can read its reports on industry trends. Let’s suppose also that Microsoft doesn’t want Contoso Analytics to know exactly who at Microsoft has what interests or reads what reports.
In this scenario we actually do not want to employ unique individual identifiers as digital identities. Contoso Analytics still needs a way to ensure that only valid customers get to its reports. But in this example, digital identity would best be expressed by a very limited claim -- the claim that the digital subject currently accessing the site is a Microsoft employee. Our claims-based approach succeeds in this regard. It permits one digital subject (Microsoft Corporation) to assert things about another digital subject without using any unique identifier.
This definition of digital identity calls upon us to separate cleanly the presentation of claims from the provability of the link to a real world object.
Our definition leaves the evaluation of the usefulness (or the truthfulness or the trustworthiness) of the claim to the relying party. The truth and possible linkage is not in the claim, but results from the evaluation. If the evaluating party decides it should accept the claim being made, then this decision just represents a further claim about the subject, this time made by the evaluating party (it may or may not be conveyed further).
Evaluation of a digital identity thus results in a simple transform of what it starts with -- again producing a set of claims made by one digital subject about another. Matters of trust, attribution, and usefulness can then be factored out and addressed at a higher layer in the system than the mechanism for expressing digital identity itself.
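This "claims in, claims out" transform can be made concrete with a small sketch built on the paper's hypothetical Contoso Analytics scenario. The function and field names below are illustrative assumptions only: a relying party evaluates an incoming employment claim and, if it accepts it, emits a further claim of its own about the same subject, without ever needing a unique identifier for the person:

```python
# Minimal sketch: a relying party turns one claim into another claim.
def evaluate(incoming, trusted_issuers):
    """Accepting a claim just produces a further claim, this time made by
    the evaluating party; no unique personal identifier is involved."""
    if incoming["issuer"] not in trusted_issuers:
        return None  # claim rejected; the relying party asserts nothing
    return {
        "issuer": "contoso-analytics",   # the evaluating party itself
        "subject": incoming["subject"],
        "attribute": "may_read_reports",
        "value": True,
    }


# The only thing asserted about the visitor is employment, not identity.
incoming = {"issuer": "microsoft", "subject": "anonymous-visitor-17",
            "attribute": "is_employee", "value": True}
decision = evaluate(incoming, trusted_issuers={"microsoft"})
```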
We can now look at the seven essential laws that explain the successes and failures of digital identity systems.
Technical identity systems must only reveal information identifying a user with the user’s consent.
No one is as pivotal to the success of the identity metasystem as the individual who uses it. The system must first of all appeal by means of convenience and simplicity. But to endure, it must earn the user’s trust above all.
Earning this trust requires a holistic commitment. The system must be designed to put the user in control: of what digital identities are used, and what information is released.
The system must also protect the user against deception, verifying the identity of any parties who ask for information. Should the user decide to supply identity information, there must be no doubt that it goes to the right place. And the system needs mechanisms to make the user aware of the purposes for which any information is being collected.
The system must inform the user when he or she has selected an identity provider able to track Internet behavior.
Further, it must reinforce the sense that the user is in control regardless of context, rather than arbitrarily altering its contract with the user. This means being able to support user consent in enterprise as well as consumer environments. It is essential to retain the paradigm of consent even when refusal might break a company’s conditions of employment. This serves both to inform the employee and indemnify the employer.
The Law of User Control and Consent allows for the use of mechanisms whereby the metasystem remembers user decisions, and users may opt to have them applied automatically on subsequent occasions.
The solution that discloses the least amount of identifying information and best limits its use is the most stable long-term solution.
We should build systems that employ identifying information on the basis that a breach is always possible. Such a breach represents a risk. To mitigate risk, it is best to acquire information only on a “need to know” basis, and to retain it only on a “need to retain” basis. By following these practices, we can ensure the least possible damage in the event of a breach.
At the same time, the value of identifying information decreases as the amount decreases. A system built with the principles of information minimalism is therefore a less attractive target for identity theft, reducing risk even further.
By limiting use to an explicit scenario (in conjunction with the use policy described in the Law of Control), the effectiveness of the “need to know” principle in reducing risk is further magnified. There is no longer the possibility of collecting and keeping information “just in case” it might one day be required.
The concept of “least identifying information” should be taken as meaning not only the fewest number of claims, but the information least likely to identify a given individual across multiple contexts. For example, if a scenario requires proof of being a certain age, then it is better to acquire and store the age category rather than the birth date. Date of birth is more likely, in association with other claims, to uniquely identify a subject, and so represents “more identifying information” which should be avoided if it is not needed.
In the same way, unique identifiers that can be reused in other contexts (for example, drivers’ license numbers, Social Security Numbers, and the like) represent “more identifying information” than unique special-purpose identifiers that do not cross context. In this sense, acquiring and storing a Social Security Number represents a much greater risk than assigning a randomly generated student or employee number.
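Both points above can be made concrete with a short sketch. The function names and brackets below are illustrative assumptions, not prescribed categories: the idea is simply to store an age bracket rather than a birth date, and to assign a random special-purpose number rather than reuse a cross-context identifier like an SSN.

```python
import secrets
from datetime import date

def age_category(birth_date: date, today: date) -> str:
    """Keep only the age bracket a scenario needs, never the birth date."""
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day))
    if age < 18:
        return "minor"
    if age < 65:
        return "adult"
    return "senior"

def new_student_number() -> str:
    """A randomly generated special-purpose identifier: meaningless
    outside this context, unlike an SSN, which correlates records
    across many contexts."""
    return f"S-{secrets.randbelow(10**8):08d}"

# The system records "adult", which matches millions of people,
# instead of a birth date, which narrows the subject sharply.
assert age_category(date(1990, 6, 1), date(2024, 1, 1)) == "adult"
```

A breach of a database built this way leaks far less identifying information than one holding birth dates and SSNs.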
Numerous identity catastrophes have occurred where this law has been broken.
We can also express the Law of Minimal Disclosure this way: aggregation of identifying information also aggregates risk. To minimize risk, minimize aggregation.
Digital identity systems must be designed so the disclosure of identifying information is limited to parties having a necessary and justifiable place in a given identity relationship.
The identity system must make its user aware of the party or parties with whom she is interacting while sharing information.
The justification requirements apply both to the subject who is disclosing information and the relying party who depends on it. Our experience with Microsoft Passport is instructive in this regard. Internet users saw Passport as a convenient way to gain access to MSN sites, and those sites were happily using Passport, to the tune of over a billion interactions per day. However, it did not make sense to most non-MSN sites for Microsoft to be involved in their customer relationships. Nor were users clamoring for a single Microsoft identity service to be aware of all their Internet activities. As a result, Passport failed in its mission of being an identity system for the Internet.
We will see many more examples of this law going forward. Today some governments are thinking of operating digital identity services. It makes sense (and is clearly justifiable) for people to use government-issued identities when doing business with the government. But it will be a cultural matter as to whether, for example, citizens agree it is “necessary and justifiable” for government identities to be used in controlling access to a family wiki, or connecting a consumer to her hobby or vice.
The same issues will confront intermediaries building a trust fabric. The law is not intended to suggest limitations of what is possible, but rather to outline the dynamics of which we must be aware.
We know from the Law of Control and Consent that the system must be predictable and “translucent” in order to earn trust. But the user needs to understand whom she is dealing with for other reasons, as we will see in the Law of Human Integration. In the physical world we are able to judge a situation and decide what we want to disclose about ourselves. Justifiable parties are the digital analogue of that judgment.
Every party to disclosure must provide the disclosing party with a policy statement about information use. This policy should govern what happens to disclosed information. One can view this policy as defining “delegated rights” issued by the disclosing party.
Any use policy would allow all parties to cooperate with authorities in the case of criminal investigations. But this does not mean the state is party to the identity relationship. Of course, this should be made explicit in the policy under which information is shared.
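One way to think about such a policy statement is as a small data structure travelling with the disclosed information. The sketch below is purely illustrative (the field names and values are assumptions, not a standard): it shows a policy that names the recipient, enumerates the delegated uses, bounds retention, and makes lawful disclosure explicit.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class UsePolicy:
    """Delegated rights the disclosing party grants along with the data."""
    recipient: str            # the party to whom disclosure is made
    permitted_uses: tuple     # the only uses the discloser delegates
    retain_until: date        # retention bound ("need to retain" basis)
    lawful_disclosure: bool = True  # cooperation with criminal investigations

# Hypothetical policy accompanying an online purchase:
policy = UsePolicy(
    recipient="example-shop.com",
    permitted_uses=("order fulfilment", "delivery"),
    retain_until=date(2025, 12, 31),
)

assert "marketing" not in policy.permitted_uses  # not a delegated right
```

Any use outside `permitted_uses`, or retention past `retain_until`, would then be a violation of the delegated rights rather than a gray area.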
A universal identity system must support both “omni-directional” identifiers for use by public entities and “unidirectional” identifiers for use by private entities, thus facilitating discovery while preventing unnecessary release of correlation handles.
Technical identity is always asserted with respect to some other identity or set of identities. To make an analogy with the physical world, we can say identity has direction, not just magnitude. One special “set of identities” is that of all other identities (the public). Other important sets exist (for example, the identities in an enterprise, an arbitrary domain, or a peer group).
Entities that are public can have identifiers that are invariant and well known. These public identifiers can be thought of as beacons, emitting identity to anyone who shows up. And beacons are “omni-directional” (they are willing to reveal their existence to the set of all other identities).
A corporate Web site with a well-known URL and public key certificate is a good example of such a public entity. There is no advantage (in fact, there is a great disadvantage) in changing a public URL. It is fine for every visitor to the site to examine the public key certificate. It is equally acceptable for everyone to know the site is there: its existence is public.
A second example of such a public entity is a publicly visible device like a video projector. The device sits in a conference room in an enterprise. Visitors to the conference room can see the projector and it offers digital services by advertising itself to those who come near it. In the thinking outlined here, it has an omni-directional identity.
On the other hand, a consumer visiting a corporate Web site is able to use the identity beacon of that site to decide whether she wants to establish a relationship with it. Her system can then set up a “unidirectional” identity relation with the site by selecting an identifier for use with that site and no other. A unidirectional identity relation with a different site would involve fabricating a completely unrelated identifier. Because of this, there is no correlation handle emitted that can be shared between sites to assemble profile activities and preferences into super-dossiers.
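One simple way to fabricate such unrelated identifiers, offered here only as a sketch under the assumption that the user's device holds a master secret, is to derive each site's identifier from that secret and the site's name. Different sites then see stable but mutually uncorrelatable identifiers.

```python
import hmac, hashlib

def unidirectional_id(master_secret: bytes, site: str) -> str:
    """Derive a per-site identifier from a secret only the user holds.

    The same user presents completely unrelated identifiers to
    different sites, so no correlation handle exists that the sites
    could share to assemble a super-dossier."""
    return hmac.new(master_secret, site.encode(), hashlib.sha256).hexdigest()[:16]

secret = b"user-master-secret"  # assumed to live only on the user's device
id_a = unidirectional_id(secret, "https://example-shop.com")
id_b = unidirectional_id(secret, "https://example-news.com")

assert id_a != id_b   # the two sites cannot correlate their records
assert id_a == unidirectional_id(secret, "https://example-shop.com")  # stable per site
```

Each relationship remains persistent from the site's point of view, yet the identifier is useless anywhere else.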
When a computer user enters a conference room equipped with the projector described above, its omni-directional identity beacon could be utilized to decide (as per the Law of Control) whether she wants to interact with it. If she does, a short-lived unidirectional identity relation could be established between the computer and the projector, providing a secure connection while divulging the least possible identifying information in accordance with the Law of Minimal Disclosure.
Bluetooth and other wireless technologies have not so far conformed to the Law of Directed Identity. They use public beacons for private entities. This explains the consumer backlash innovators in these areas are currently wrestling with.
Public key certificates have the same problem when used to identify individuals in contexts where privacy is an issue. It may be more than coincidental that certificates have so far been widely used when in conformance with this law (i.e., in identifying public Web sites) and generally ignored when it comes to identifying private individuals.
Another example involves the proposed usage of RFID technology in passports and student tracking applications. RFID devices currently emit an omni-directional public beacon. This is not appropriate for use by private individuals.
Passport readers are public devices and therefore should employ an omni-directional beacon. But passports should only respond to trusted readers. They should not be emitting signals to any eavesdropper that identify their bearers and peg them as nationals of a given country. Examples have been given of unmanned devices that could be detonated by these beacons. In California we are already seeing the first legislative measures being taken to correct abuse of identity directionality. It shows a failure of vision among technologists that legislators understand these issues before we do.
A universal identity system must channel and enable the inter-working of multiple identity technologies run by multiple identity providers.
It would be nice if there were one way to express identity. But the numerous contexts in which identity is required won’t allow it.
One reason there will never be a single, centralized monolithic system (the opposite of a metasystem) is because the characteristics that would make any system ideal in one context will disqualify it in another.
It makes sense to employ a government-issued digital identity when interacting with government services (a single overall identity neither implies nor prevents correlation of identifiers between individual government departments).
But in many cultures, employers and employees would not feel comfortable using government identifiers to log in at work. A government identifier might be used to convey taxation information; it might even be required when a person is first offered employment. But the context of employment is sufficiently autonomous that it warrants its own identity, free from daily observation via a government-run technology.
Customers and individuals browsing the Web meanwhile will in many cases want higher levels of privacy than is likely to be provided by any employer.
So when it comes to digital identity, it is not only a matter of having identity providers run by different parties (including individuals themselves), but of having identity systems that offer different (and potentially contradictory) features.
A universal system must embrace differentiation, while recognizing that each of us is simultaneously, and in different contexts, a citizen, an employee, a customer, and a virtual persona.
This demonstrates, from yet another angle, that different identity systems must exist in a metasystem. It implies we need a simple encapsulating protocol (a way of agreeing on and transporting things). We also need a way to surface information through a unified user experience that allows individuals and organizations to select appropriate identity providers and features as they go about their daily activities.
The universal identity metasystem must not be another monolith. It must be polycentric (federation implies this) and also polymorphic (existing in different forms). This will allow the identity ecology to emerge, evolve, and self-organize.
Systems like RSS and HTML are powerful because they carry any content. We need to see that identity itself will have several, perhaps many, contexts, and yet can be expressed in a metasystem.
The universal identity metasystem must define the human user to be a component of the distributed system integrated through unambiguous human-machine communication mechanisms offering protection against identity attacks.
We have done a pretty good job of securing the channel between Web servers and browsers through the use of cryptography: a channel that might extend for thousands of miles. But we have failed to adequately protect the two or three foot channel between the browser’s display and the brain of the human who uses it. This immeasurably shorter channel is the one under attack from phishers and pharmers.
No wonder. What identities is the user dealing with as she navigates the Web? How understandably is identity information conveyed to her? Do our digital identity systems interface with users in ways that objective studies have shown to work? Identity information currently takes the form of certificates. Do studies show certificates are meaningful to users?
What exactly are we doing? Whatever it is, we’ve got to do it better: the identity system must extend to and integrate the human user.
Carl Ellison and his colleagues have coined the term ‘ceremony’ to describe interactions that span a mixed network of human and cybernetic system components: the full channel from Web server to human brain. A ceremony goes beyond cyber protocols to ensure the integrity of communication with the user.
This concept calls for profoundly changing the user’s experience so it becomes predictable and unambiguous enough to allow for informed decisions.
Since the identity system has to work on all platforms, it must be safe on all platforms. The properties that lead to its safety can’t be based on obscurity or the fact that the underlying platform or software is unknown or has a small adoption.
One example is United Airlines’ Channel 9. It carries a live conversation between the cockpit of one’s plane and air traffic control. The conversation on this channel is very important, technical, and focused. Participants don’t “chat”; all parties know precisely what to expect from the tower and the airplane. As a result, even though there is a lot of radio noise and static, it is easy for the pilot and controller to pick out the exact content of the communication. When things go wrong, the broken predictability of the channel marks the urgency of the situation and draws upon every human faculty to understand and respond to the danger. The limited semiotics of the channel mean there is very high reliability in communications.
We require the same kind of bounded and highly predictable ceremony for the exchange of identity information. A ceremony is not a “whatever feels good” sort of thing. It is predetermined.
But isn’t this limitation of possibilities at odds with our ideas about computing? Haven’t many advances in computing come about through ambiguity and unintended consequences that would be ruled out in the austere light of ceremony?
These are valid questions. But we definitely don’t want unintended consequences when figuring out who we are talking to or what personal identification information to reveal.
The question is how to achieve very high levels of reliability in the communication between the system and its human users. In large part, this can be measured objectively through user testing.
The unifying identity metasystem must guarantee its users a simple, consistent experience while enabling separation of contexts through multiple operators and technologies.
Let’s project ourselves into a future where we have a number of contextual identity choices. For example:
- Browsing: a self-asserted identity for exploring the Web (giving away no real data)
- Personal: a self-asserted identity for sites with which I want an ongoing but private relationship (including my name and a long-term e-mail address)
- Community: a public identity for collaborating with others
- Professional: a public identity for collaboration, issued by my employer
- Credit card: an identity issued by my financial institution
- Citizen: an identity issued by my government
We can expect that different individuals will have different combinations of these digital identities, as well as others.
To make this possible, we must “thingify” digital identities: make them into “things” the user can see on the desktop, add and delete, select and share. (We have chosen to “localize” the more venerable word “reify”.) How usable would today’s computers be had we not invented icons and lists that consistently represent folders and documents? We must do the same with digital identities.
What type of digital identity is acceptable in a given context? The properties of potential candidates will be specified by the Web service from which a user wants to obtain a service. Matching thingified digital identities can then be displayed to the user, who can select between them and use them to understand what information is being requested. This allows the user to control what is released.
Different relying parties will require different kinds of digital identities. And two things are clear:
- A single relying party will often want to accept more than one kind of identity, and
- A user will want to understand his or her options and select the best identity for the context
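The matching step described above can be sketched as a small selector. This is an illustrative model only, with assumed identity names and claim types: the relying party states which claim types it will accept, and the selector shows the user just the thingified identities that can satisfy the request.

```python
from dataclasses import dataclass, field

@dataclass
class Identity:
    name: str                       # label the user sees ("Browsing", "Citizen", ...)
    claims: set = field(default_factory=set)  # claim types this identity can supply

def matching_identities(request: set, wallet: list) -> list:
    """Return every identity able to supply all requested claim types."""
    return [identity for identity in wallet if request <= identity.claims]

# Hypothetical set of contextual identities held by one user:
wallet = [
    Identity("Browsing", {"pseudonym"}),
    Identity("Personal", {"pseudonym", "name", "email"}),
    Identity("Citizen",  {"name", "date_of_birth", "nationality"}),
]

# A site that needs only a name and an e-mail address:
choices = matching_identities({"name", "email"}, wallet)
assert [i.name for i in choices] == ["Personal"]
```

The user sees what is being requested, which identities qualify, and chooses among them, keeping control of what is released.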
Putting all the laws together, we can see that the request, selection, and proffering of identity information must be done such that the channel between the parties is safe. The user experience must also leave no ambiguity about the user’s consent, the parties involved, or their proposed uses of the information. These options need to be consistent and clear. Consistency across contexts is required so that the system communicates unambiguously with its human components.
As users, we need to see our various identities as part of an integrated world that nonetheless respects our need for independent contexts.
Those of us who work on or with identity systems need to obey the Laws of Identity. Otherwise, we create a wake of reinforcing side effects that eventually undermine all resulting technology. The result is similar to what would happen if civil engineers were to flout the law of gravity. By following the laws, we can build a unifying identity metasystem that is universally accepted and enduring.
Join the identity discussion at http://www.identityblog.com/
The general philosophy of the Fair Information Principles
The most fundamental principle is notice. Consumers should be given notice of an entity's information practices before any personal information is collected from them. Without notice, a consumer cannot make an informed decision as to whether and to what extent to disclose personal information. Moreover, three of the other principles discussed below -- choice/consent, access/participation, and enforcement/redress -- are only meaningful when a consumer has notice of an entity's policies, and his or her rights with respect thereto.
While the scope and content of notice will depend on the entity's substantive information practices, notice of some or all of the following have been recognized as essential to ensuring that consumers are properly informed before divulging personal information:
- identification of the entity collecting the data;
- identification of the uses to which the data will be put;
- identification of any potential recipients of the data;
- the nature of the data collected and the means by which it is collected if not obvious (passively, by means of electronic monitoring, or actively, by asking the consumer to provide the information);
- whether the provision of the requested data is voluntary or required, and the consequences of a refusal to provide the requested information; and
- the steps taken by the data collector to ensure the confidentiality, integrity and quality of the data.
Some information practice codes state that the notice should also identify any available consumer rights, including: any choice respecting the use of the data; whether the consumer has been given a right of access to the data; the ability of the consumer to contest inaccuracies; the availability of redress for violations of the practice code; and how such rights can be exercised.
In the Internet context, notice can be accomplished easily by the posting of an information practice disclosure describing an entity's information practices on a company's site on the Web. To be effective, such a disclosure should be clear and conspicuous, posted in a prominent location, and readily accessible from both the site's home page and any Web page where information is collected from the consumer. It should also be unavoidable and understandable so that it gives consumers meaningful and effective notice of what will happen to the personal information they are asked to divulge.
The second widely-accepted core principle of fair information practice is consumer choice or consent. At its simplest, choice means giving consumers options as to how any personal information collected from them may be used. Specifically, choice relates to secondary uses of information -- i.e., uses beyond those necessary to complete the contemplated transaction. Such secondary uses can be internal, such as placing the consumer on the collecting company's mailing list in order to market additional products or promotions, or external, such as the transfer of information to third parties.
Traditionally, two types of choice/consent regimes have been considered: opt-in or opt-out. Opt-in regimes require affirmative steps by the consumer to allow the collection and/or use of information; opt-out regimes require affirmative steps to prevent the collection and/or use of such information. The distinction lies in the default rule when no affirmative steps are taken by the consumer. Choice can also involve more than a binary yes/no option. Entities can, and do, allow consumers to tailor the nature of the information they reveal and the uses to which it will be put. Thus, for example, consumers can be provided separate choices as to whether they wish to be on a company's general internal mailing list or a marketing list sold to third parties. In order to be effective, any choice regime should provide a simple and easily-accessible way for consumers to exercise their choice.
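The distinction between the two regimes reduces to the default rule, which a few lines of code make explicit. The function below is a didactic sketch, not a legal test: the regime names and parameter are illustrative.

```python
def may_use_data(regime: str, consumer_acted: bool) -> bool:
    """The regimes differ only in what consumer silence means.

    opt-in:  silence means the data may NOT be used;
    opt-out: silence means the data MAY be used."""
    if regime == "opt-in":
        return consumer_acted
    if regime == "opt-out":
        return not consumer_acted
    raise ValueError(f"unknown regime: {regime}")

# When the consumer takes no affirmative step:
assert may_use_data("opt-in", consumer_acted=False) is False
assert may_use_data("opt-out", consumer_acted=False) is True
```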
In the online environment, choice easily can be exercised by simply clicking a box on the computer screen that indicates a user's decision with respect to the use and/or dissemination of the information being collected. The online environment also presents new possibilities to move beyond the opt-in/opt-out paradigm. For example, consumers could be required to specify their preferences regarding information use before entering a Web site, thus effectively eliminating any need for default rules.
Access is the third core principle. It refers to an individual's ability both to access data about him or herself -- i.e., to view the data in an entity's files -- and to contest that data's accuracy and completeness. Both are essential to ensuring that data are accurate and complete. To be meaningful, access must encompass timely and inexpensive access to data, a simple means for contesting inaccurate or incomplete data, a mechanism by which the data collector can verify the information, and the means by which corrections and/or consumer objections can be added to the data file and sent to all data recipients.
The fourth widely accepted principle is that data be accurate and secure. To assure data integrity, collectors must take reasonable steps, such as using only reputable sources of data and cross-referencing data against multiple sources, providing consumer access to data, and destroying untimely data or converting it to anonymous form.
Security involves both managerial and technical measures to protect against loss and the unauthorized access, destruction, use, or disclosure of the data. Managerial measures include internal organizational measures that limit access to data and ensure that those individuals with access do not utilize the data for unauthorized purposes. Technical security measures to prevent unauthorized access include encryption in the transmission and storage of data; limits on access through use of passwords; and the storage of data on secure servers or computers that are inaccessible by modem.
It is generally agreed that the core principles of privacy protection can only be effective if there is a mechanism in place to enforce them. Absent an enforcement and redress mechanism, a fair information practice code is merely suggestive rather than prescriptive, and does not ensure compliance with core fair information practice principles.
The Fair Information Principles as put into Canadian Law
These principles are usually referred to as “fair information principles”.
They are included in the Personal Information Protection and Electronic Documents Act (PIPEDA), Canada’s private-sector privacy law, and called "Privacy Principles".
Principle 1 — Accountability
An organization is responsible for personal information under its control and shall designate an individual or individuals who are accountable for the organization’s compliance with the following principles.
Principle 2 — Identifying Purposes
The purposes for which personal information is collected shall be identified by the organization at or before the time the information is collected.
Principle 3 — Consent
The knowledge and consent of the individual are required for the collection, use, or disclosure of personal information, except where inappropriate.
Principle 4 — Limiting Collection
The collection of personal information shall be limited to that which is necessary for the purposes identified by the organization. Information shall be collected by fair and lawful means.
Principle 5 — Limiting Use, Disclosure, and Retention
Personal information shall not be used or disclosed for purposes other than those for which it was collected, except with the consent of the individual or as required by law. Personal information shall be retained only as long as necessary for the fulfilment of those purposes.
Principle 6 — Accuracy
Personal information shall be as accurate, complete, and up-to-date as is necessary for the purposes for which it is to be used.
Principle 7 — Safeguards
Personal information shall be protected by security safeguards appropriate to the sensitivity of the information.
Principle 8 — Openness
An organization shall make readily available to individuals specific information about its policies and practices relating to the management of personal information.
Principle 9 — Individual Access
Upon request, an individual shall be informed of the existence, use, and disclosure of his or her personal information and shall be given access to that information. An individual shall be able to challenge the accuracy and completeness of the information and have it amended as appropriate.
Principle 10 — Challenging Compliance
An individual shall be able to address a challenge concerning compliance with the above principles to the designated individual or individuals accountable for the organization’s compliance.