Our identities have no bodies, so, unlike you, we cannot obtain order
by physical coercion. We believe that from _ethics_, _enlightened
self-interest_, and the _commonweal_, our governance will emerge. Our
identities may be distributed across many of your jurisdictions. The
only law that all our constituent cultures would generally recognize
is the Golden Rule. We hope we will be able to build our particular
solutions on that basis. But we **cannot** accept the solutions you
are attempting to impose.
— John Perry Barlow
The first time I heard the phrase "user sovereignty" was at Mozilla. Firefox
ostensibly follows user sovereign design principles and respects the user.
It is even baked into the list of principles on page 5 of the Firefox Design
Values Booklet.
The earliest discussion of the phrase I could find is a blog post from August
4th, 2011 by the Chief Lizard Wrangler herself: Mitchell Baker.
Mitchell's Blog, August 4th, 2011
In it she prophetically describes user sovereignty as the consequence of new
"engines" that are "...open, open-source, interoperable, public-benefit,
standards-based, platforms..." She also makes the critical link between the
philosophy of openness and standards-based interoperability with that of
identity management and personal data dominion.
Where is the open source, standards-based platform for universally
accessible, decentralized, customized identity on the web? Today there
isn't one... Where is the open source, standards-based engine for
universally accessible, decentralized, customized, user-controlled
management of personal information I create about myself? Today there
isn't one.
Looking back from 2020, so much of what Mitchell says is correct that I
consider her post a founding document of user sovereign design. But Mitchell
isn't the only one. Mozilla was a hotbed for this line of thinking in 2011 and
the years that followed. Ben Adida, Mozilla's Identity Lead back then, also
posted on more technical aspects of user sovereignty.
Ben's Blog, January 13th, 2012
In his post, Ben outlines three then-new Mozilla products designed in the
spirit of user sovereignty: decentralized identity (BrowserID), a mobile
web-based OS (FirefoxOS), and an app store (progressive web apps). All three
failed in the market, I believe, because they didn't go far enough towards
full user sovereignty.
I used to think Mozilla was too afraid to make any meaningful change to the web
that would enhance user privacy, pseudonymity and sovereignty because it would
damage their ability to make money from search referrals and other deals with
surveillance capitalists. To their credit, they did try to make meaningful
change with the "do not track" setting and first-party isolation to prevent
IFRAME'd content—like the Facebook Like button—from tracking users everywhere
they go on the web. Now it is obvious to me that the problem isn't that Mozilla
won't fix the web, but that the web is so broken that they can't.
I can't fault the Mozilla leadership for not knowing, back in 2011, what we
know now simply because back then Facebook, Twitter and YouTube were still
considered benevolent wonders of the modern internet world. It wasn't until
several years later, when the social media (socmed) platforms shifted their
entire focus to global user surveillance and manipulation, that technologists
everywhere fully grasped the implications and dangers they embodied.
By 2017 the dominance of the socmed platforms was ubiquitous and global.
Leaders at Facebook and others began policing content and taking us all down
the slippery slope of choosing who gets to speak and who doesn't. Today the
socmed platforms have so much data and power that they act like digital
aristocrats that own the land, the market, and the people. They make diktats
that affect people in their real lives. They can also sway elections one way or
another giving them the power to hold onto power by backing politicians that
support their corporate interests.
We're no longer surprised that saying the wrong thing on a socmed platform can
get an account suspended or banned. But what about punishments that reach
beyond a single post, or, in the case of YouTube content creators,
demonetization or outright banning that takes their income with it?
The worst part is the lack of balance between users and the platforms. If
YouTube bans your account, you have very little opportunity to
appeal the decision and no power to force them to reinstate you.
Popular psychologist Jordan Peterson recently described his experience of being
locked out of his Google/YouTube account.
The interesting thing about Mr. Peterson's situation is that he forced
reinstatement by organizing what can best be described as a peasant revolt
against the aristocrats at Google. As he tells it, he tried to warn the Google
account management people that banning him "might not be a good idea." When
Google refused to reconsider, he contacted a number of prominent journalists
and tweeted to his 1.4 million followers. A few hours later his account was
reactivated; he suspects the publicity was the reason.
This outcome may give some people hope that there is a balance of power between
the users and the socmed platforms. Unfortunately there isn't. Or at least,
what little there is can only be leveraged by famous people with large internet
followings that share their animosity for the digital aristocracy. I'm sure Joe
Rogan, Sam Harris, Jordan Peterson and others like them can use the mob to
protect their user sovereignty but the other 4.5 billion internet users cannot.
The socmed platforms like Facebook, Twitter, and YouTube are important in that
they demonstrate exactly what an internet system with little-to-no user
sovereignty looks like. Users of those systems are faced with choosing between
giving up all sovereignty or not using the system. They have no ability to
be private and anonymous, and all authorization is based on who they are. Sure,
the platforms use open and standard protocols for communication, but the data
you upload is not retrievable in a portable, standard format. Worst of all, the
platforms don't have a balance of power with their users. Users are only
granted limited rights under consumer protection laws like the GDPR and CCPA.
All other power rests with the platforms.
If Facebook, Twitter and YouTube define one end of the spectrum of user
sovereignty, what does the other end of the spectrum look like? How would a
system designed to be fully user sovereign function? Before we can answer that
question we must decide what the principles of user sovereignty are. I think
they are easy to figure out just by thinking of the opposite of how socmed
platforms work.
There are just six principles that, when followed, produce a fully user-
sovereign system design: privacy, pseudonymity, capability-based authorization
with verifiable credentials, open standards and data portability, strong
encryption, and balanced terms of service. Each is described in turn below.
Privacy for users of a system is all about correlation across time and space.
Correlation is the ability to identify the same user over subsequent
connections (time) and even from different IP addresses (space). Correlation is
the foundation of all user tracking and the primary way in which our privacy is
violated when we use the web.
A fully user-sovereign system does not keep logs and does not attempt to track
users. Any correlation ability the system has is fully under the control of the
users and can be set to their comfort level. Users may choose to give the
system a "correlation identifier" that the user uses on future connections so
that the system can offer customized services. However, the user retains full
control over that identifier and may terminate the correlation at any point in
the future.
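To make that concrete, here is a minimal sketch of what a user-held correlation
identifier might look like. The `CorrelationHandle` class, its methods, and the
header name are hypothetical illustrations, not taken from any existing system.

```python
import secrets

class CorrelationHandle:
    """A hypothetical user-held correlation identifier.

    The user, not the service, generates and stores the identifier,
    presents it only when they want continuity (e.g. saved settings),
    and can rotate or discard it to end correlation at any time.
    """

    def __init__(self):
        # 128-bit random value; carries no identity information by itself.
        self._value = secrets.token_hex(16)

    def present(self) -> str:
        """Return the identifier to attach to a request the user wants correlated."""
        return self._value

    def rotate(self) -> None:
        """Terminate the old correlation; future requests look like a new user."""
        self._value = secrets.token_hex(16)


# Usage: the user opts in to correlation for customized service,
# then ends it whenever they choose.
handle = CorrelationHandle()
request_headers = {"X-Correlation-Id": handle.present()}  # hypothetical header name
handle.rotate()  # from the service's point of view, this user has vanished
```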
Without privacy, users lose much of their leverage in a world dominated by
surveillance capitalism.
Pseudonymity is the ability for a user to control to what degree the system and
other users know their real identity. With full pseudonymity, a user can appear
to be a first-time user of the system every time. On the other hand, the user
may also choose to present full know-your-customer (KYC) credentials that
reveal their real world identity in a verifiable way. Of course, anything in
between is possible as well. A user may wish to use a persona in a discussion
forum that isn't tied to their real identity but is given out to other users so
they may know them by that name.
Another key aspect of pseudonymity is network-level tracking. User sovereign
systems ideally operate only on internet anonymity platforms like the Tor
network. Operating as a Tor hidden service or some other IP-masked service not
only maximizes user pseudonymity but also the pseudonymity of the operator of
the service. Eventually the user sovereign internet will require a ubiquitous
and pervasive mixnet transport layer that is used by everyone for everything.
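As a rough illustration of the operator side, the sketch below publishes a
locally running service as a Tor onion service using the stem controller
library. It assumes a local Tor daemon with its control port enabled and an
application already listening on 127.0.0.1:8080, and it is a sketch rather than
a hardening guide.

```python
# Sketch: expose a locally running service as a Tor onion service using the
# stem controller library. Assumes a Tor daemon is running with its control
# port enabled (ControlPort 9051) and an app listening on 127.0.0.1:8080.
from stem.control import Controller

with Controller.from_port(port=9051) as controller:
    controller.authenticate()  # cookie or password auth, per the local torrc
    # Map onion port 80 to the local service; Tor generates the .onion address.
    service = controller.create_ephemeral_hidden_service(
        {80: 8080}, await_publication=True
    )
    print(f"Service reachable at {service.service_id}.onion")
    input("Press enter to take the service down...")
    controller.remove_ephemeral_hidden_service(service.service_id)
```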
One of the more recent improvements in user sovereign technology is the
creation of decentralized, blockchain-backed verifiable credentials (VCs) and
proofs.
Verifiable Credentials Data Model
Developed beginning in 2013, VCs allow digital systems to shift away from
identity-based authorization, such as access control lists (ACLs), to more
decentralized, capability-based authorization. It is now possible to build
systems that care about *what* you are instead of *who* you are. As my friend
Timothy Ruff likes to say:
I only care that the pilot is properly trained and licensed to fly the
plane. I do not care what their name is.
Moving away from ACLs means that systems can have proper and strong
authorization while _also_ allowing fully private and pseudonymous users. As
long as the credential presentation uses zero-knowledge proofs, the user cannot
be correlated and tracked while using authenticated services.
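The sketch below is a deliberately simplified illustration of that shift: the
issuer signs a credential containing only the needed attributes, and the
verifier checks the issuer's signature and the attribute rather than a name.
Real verifiable credentials follow the W3C VC data model and use
zero-knowledge-capable signature schemes such as BBS+; the `issue_credential`
and `verify_presentation` functions here are illustrative only.

```python
# Simplified sketch of capability/attribute-based authorization: the verifier
# checks *what* the holder is (a licensed pilot) rather than *who* they are.
# This toy example signs an attribute-only payload with Ed25519 to show the
# idea; it is not a real W3C VC or zero-knowledge presentation.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Issuer (e.g. an aviation authority) key pair.
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

def issue_credential(attributes: dict) -> dict:
    """Issuer signs a credential containing only the needed attributes."""
    payload = json.dumps(attributes, sort_keys=True).encode()
    return {"attributes": attributes, "signature": issuer_key.sign(payload)}

def verify_presentation(credential: dict) -> bool:
    """Verifier checks the issuer's signature and the attribute, never a name."""
    payload = json.dumps(credential["attributes"], sort_keys=True).encode()
    try:
        issuer_pub.verify(credential["signature"], payload)
    except InvalidSignature:
        return False
    return credential["attributes"].get("commercial_pilot_license") is True

# The credential reveals a capability, not an identity.
cred = issue_credential({"commercial_pilot_license": True, "type_rating": "B737"})
assert verify_presentation(cred)
```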
A large part of the balance of power between users and systems is a user's
ability to take their data and go to another competing service. Just like
free-market price competition puts downward pressure on prices, user and data
mobility creates pressure towards more user sovereignty in online systems.
User and data portability is only truly possible if systems use open and
standard protocols and data formats for all communication and storage. It is
why we create such standards.
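As a trivial sketch of what "portable by construction" means in practice, the
example below exports a user's data in a plain, documented JSON structure that
a competing service could read back. The field names are made up for
illustration; a real system would target an established open format.

```python
# Sketch: export a user's data in a plain, documented JSON structure so that
# a competing service can import it. The field names are illustrative only.
import json

def export_account(posts: list, settings: dict) -> str:
    """Serialize everything the user created into one portable document."""
    return json.dumps(
        {"format": "example-export/1.0", "posts": posts, "settings": settings},
        indent=2,
    )

def import_account(document: str) -> tuple:
    """A competing service only needs the published format to read it back."""
    data = json.loads(document)
    return data["posts"], data["settings"]

exported = export_account([{"body": "hello"}], {"theme": "dark"})
posts, settings = import_account(exported)
```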
Strong encryption must always be used to protect data in motion and data at
rest. Without it, users cannot enforce their privacy and pseudonymity. They
cannot use verifiable credentials and zero-knowledge proofs for authorization.
Encryption forms the backbone of all user-sovereign design. Without it users
lose all leverage and have no sovereignty on the internet.
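For illustration, here is a minimal sketch of protecting data at rest with
authenticated symmetric encryption using the cryptography library's Fernet
recipe. Key management and the transport side (e.g. TLS) are out of scope for
the sketch.

```python
# Sketch: protecting data at rest with authenticated symmetric encryption,
# using the cryptography library's Fernet recipe. Data in motion would be
# protected separately (e.g. TLS 1.3); key management is out of scope here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # held by the user, not the service
box = Fernet(key)

ciphertext = box.encrypt(b"private notes the service never sees in the clear")
plaintext = box.decrypt(ciphertext)
assert plaintext == b"private notes the service never sees in the clear"
```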
Governments around the world continue to fight legal battles to limit or ban
the use of strong encryption. Just like guns in the hands of citizens, strong
encryption represents a real threat to the power of any government. It is the
only way we will keep the internet from becoming a global full-spectrum
surveillance tool that tracks our every move and—with social credit
systems—manipulates us into becoming livestock held captive in regional people
farms.
Inevitably, internet services have tacit agreements between users and the
system as well as legal terms of service. User sovereign systems use balanced
terms of service to even out the power dynamic. Users will already have a great
deal of power from their ability to stay private, pseudonymous, and portable,
but to complete the balance, the terms of service need to also include users'
terms.
The GDPR and CCPA are governmental attempts to balance the power of users and
systems but there are so many loopholes that most internet services just throw
up an interstitial EULA that nearly all users agree to without fully
understanding it. Not so on user sovereign systems. Users won't be giving up
their information by default like they do on the web today. They won't be using
software such as web browsers that can't help but track them, and they won't be
blindly clicking through EULAs to get at content.
The six principles of user sovereignty are important because they give us a
values framework within which we can make decisions. Without having these
values, how do systems designers choose one solution over another when the
overall function is the same? Why should we choose verifiable credentials over
a real name and password for authorization? Because one respects the user and
their sovereignty and the other doesn't.
It is sometimes hard to think of a world where users have agency on the
internet because we have all been conditioned to accept the status quo as the
technology was developed over decades. It isn't the fault of past systems
designers that the world is the way it is. Many times, time and money pressures
made them choose the quickest and easiest path without really thinking about
the long term implications. Even if they did take time to consider the
tradeoffs, they didn't have any coherent set of principles to inform their
decision making. Until now that is.
The direction from here is to consider distributed systems and the problems
they solve. Then apply these principles to improve them. There are nine
fundamental problems of distributed systems and each one must be solved with
a solution that is user sovereign to build a fully user sovereign service. It
isn't as easy as you would think and until very recently, it wasn't even
possible. The rest of this series is focused on that.
CHEERS!