A Michael Tiemann collection
Open Voices, Issue 5
opensource.com
Copyright © 2013 Red Hat, Inc. All written content licensed under a
[Creative Commons Attribution-ShareAlike 3.0 Unported
License](http://creativecommons.org/licenses/by-sa/3.0/).
Linus Pauling famously said “The best way to have a great idea is to
have lots of ideas.” This is easier said than done, for many reasons.
For me, the foremost reason is that nobody wants to be known for having
a dumb idea, so we self-edit. If we self-edit too much, we end up having
not a lot of ideas, so having a great idea becomes nearly impossible. A
second challenge is creating a space where ideas can combine, coalesce,
catalyze, evolve, and, if they are truly great ideas, crystallize. Is
that best space an isolated office where the mind of a lone genius can
evaluate all and choose correctly the One Best Idea? Or is it better to
open up the process to a diverse set of perspectives, up to and
including every possible stakeholder in the outcome?
In the book The Wisdom of Crowds, James Surowiecki gives contemporary
and compelling examples of problems that are better solved by random
groups of people than by experts. Not every crowd beats the experts, and
some crowds are provably less intelligent than their least intelligent
member, but when properly (self-) selected and organized, the crowd not
only beats the experts, it acts more intelligently than its most
intelligent member.
Long before Surowiecki began his research into the Wisdom of Crowds, the
free software and open source software communities began self-organizing
in ways that are validated as best practices by Surowiecki’s research.
These communities were built from stakeholders with a great diversity of
opinion and independence from one another; they were highly
decentralized, and they had a mechanism for aggregating private
judgements and actions into collective decisions and results. Along the
way, these communities of “amateurs” proved able to write software
that delivered results sooner, with fewer defects, and with those defects
fixed faster than under proprietary approaches.
In 2002, David A. Wheeler published a report showing that, using the
cost and productivity averages of the proprietary software development
community, it would have cost $1.2B to create a Linux distribution. In
2008, that report was updated by Amanda McPherson, Brian Proffitt, and
Ron Hale-Evans of The Linux Foundation to show that the Fedora 9
distribution would have cost $10.8B. Given the capabilities and code
base of the latest Linux distributions, I would not be surprised if that
number were closer to $25B today. Red Hat has been the most successful
open source software company in history, reaching just over $1B in
annual revenue in calendar 2012. Its revenues were around $25M when
Cygnus and Red Hat started merger discussions in 1999. It’s a fun
exercise to imagine how $25B of software value can be created in that
context.
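For readers curious how such figures are constructed: estimates of this kind are driven by counting source lines of code (SLOC) and running the total through a standard cost model. The sketch below uses the Basic COCOMO "organic" formula of the kind Wheeler's report popularized; the salary and overhead numbers are illustrative assumptions, not the actual inputs of the 2002 or 2008 reports.

```python
# Minimal sketch of a SLOC-driven cost estimate (Basic COCOMO, "organic" mode).
# Salary and overhead are illustrative assumptions, not the reports' inputs.

def estimated_cost(sloc, avg_salary=75_000, overhead=2.4):
    kloc = sloc / 1000.0
    effort_person_months = 2.4 * kloc ** 1.05   # Basic COCOMO effort equation
    person_years = effort_person_months / 12.0
    return person_years * avg_salary * overhead

# Order-of-magnitude only: tens of millions of SLOC land in the billions.
print(f"${estimated_cost(30_000_000) / 1e9:.1f} billion")
```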
According to [The Seattle
Times](http://seattletimes.com/html/businesstechnology/2003460386_btview04.html),
Microsoft spent $10B creating Windows Vista. Most people would agree
that (1) Vista gave users nothing they didn’t already have and like in
Windows XP, and (2) every penny Microsoft spent developing Windows 7 and
Windows 8 went toward undoing the damage it did to itself with Windows
Vista. Merely counting the dollars spent on software is not a valid
proxy for estimating the value of software. Indeed, in the case of the
Windows platform, there really has been no measurable return on
investment (ROI) on Microsoft's software development investments since 2003.
Why is the ROI on Linux so high and the ROI on proprietary software so
low? The answer is simple: **open always wins**.
Every day I have seen examples of the meme “open always wins”, but from
time to time I have seen examples that, to me, are teachable moments.
They are not only obvious to the reader, but obvious in ways that
illuminate the less obvious. These essays are my attempt to capture and
crystallize those ideas in ways that let others bring their own
perspectives, their own experiences, and their own ideas into the mix,
thereby transforming those ideas into solutions. I hope you enjoy them.
Michael Tiemann
(originally published June 2010)
In his keynote speech at the Red Hat Summit in Boston, Red Hat CEO [Jim
Whitehurst](https://opensource.com/users/jwhitehurst) made the case that
of the $1.3 trillion USD spent in 2009 on Enterprise IT globally, $500
billion was essentially wasted (due to new project mortality and Version
2.0-itis). Moreover, because the purpose of IT spending is to create
value (typically $6-$8 for each $1 of IT spend), the $500 billion waste
in enterprise IT spending translates to $3.5 trillion of lost economic
value. He goes on to explain that with the right innovations—in software
business models, software architectures, software technologies, and
applications—we can get full value from the money that's being wasted
today, reinforcing the thesis that [innovation trumps cost
savings](http://www.linuxplanet.com/linuxplanet/reports/7010/1/).
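The arithmetic behind those keynote figures is straightforward; a minimal sketch using the numbers quoted above, taking the $6-$8 multiplier at its midpoint:

```python
# Back-of-the-envelope arithmetic for the keynote figures (illustrative).
wasted_spend = 0.5e12       # ~$500 billion of enterprise IT spend wasted per year
value_per_dollar = 7        # midpoint of the $6-$8 of value per $1 of IT spend

lost_value = wasted_spend * value_per_dollar
print(f"${lost_value / 1e12:.1f} trillion of unrealized business value")  # -> $3.5 trillion
```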
But then along comes Accenture's Chief Technology Architect Paul
Daugherty, and in *his* keynote he presents a list of the top five
reasons that customers choose open source software (which is now up to
78% among *their* customers):
#1 (76%): better quality than proprietary software.
#5 (54%): lower total cost of ownership.
So which is it? Does innovation trump cost savings? Or does quality
trump cost savings?
[According to the research of Dr. David
Upton](http://opensource.com/business/10/6/radically-simple-it-dr-david-upton),
if you practice path-based innovation (also known as continuous
innovation, or Kaizen), then quality and innovation are one and the same
thing. Or, mathematically, innovation is the integral of quality
improvement over time. Unfortunately, Dr. Upton's research also shows
that most executive compensation structures do not reward disciplined
continuous improvement, but rather efforts that are typically "win
big/lose big". And perversely, they tend to reward upfront those who
place the bets rather than those who are around when the bet can
actually be judged. This encourages executives to make innovation a
risky business when it could be a reliable engine of sustainable value
creation. And it conditions those in the trenches to fear and loathe the
Next Big Thing, especially when it has an executive sponsor. This in
turn leads to the worst-case scenario of IT departments conservatively
protecting systems that were never appropriate in the first place. But
there is a better way.
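Read literally, the "integral" framing above can be written out as a formula. This formalization is mine, offered only as an illustration of the claim, not a formula from Dr. Upton's research: if Q(t) is the quality of a system at time t, then the innovation accumulated over an interval [0, T] is

```latex
I(T) = \int_0^T \frac{dQ}{dt}\,dt = Q(T) - Q(0)
```

On this reading, a discipline of many small, continuous improvements accumulates a large I(T), while sporadic "win big/lose big" bets leave dQ/dt near zero most of the time.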
In his keynote, Jim correctly points out that modular, layered
architectures are much more susceptible to incremental improvement. Not
only do many eyes make all bugs shallow, but many hands make the burden
light. Highly modular systems encourage massive participation, and the
sum total of many, many small improvements can be seen as a large
improvement indeed. This was made absolutely clear in Boston this week
as Red Hat explained its Cloud Foundations platform—a single large
change enabled by thousands of smaller changes enabled by yet thousands
more smaller changes. Red Hat's engineering model embraces incremental
innovation, and the integral across all the communities who contribute
is simply mind-blowing.
But when we break down these innovations into their constituent
elements, what we often find is that at the finest level of detail,
there is no distinction between the atomic change from which the
innovation is derived and a very specific, very concrete improvement to
the quality of the system. Indeed, it is better (and more accurate) to
think of quality not as fixing something that is broken (as if it will
never need to be touched again), but rather making an adaptation that is
an improvement. Of course it is important to eliminate defects in order
to build a quality product, but it is equally important to eliminate
inflexible or wrong assumptions that reduce fitness in future contexts.
When *everybody* is able to make such adaptations, the result is nothing
short of
[transformation](http://opensource.com/business/10/6/tiemann-transforming-it-open-source-way).
I've spent a lot of time in the free/open source software community:
[nearly 10 years as a principal developer of the GNU C and C++ compilers
and the GNU debugger](http://www.usenix.org/about/stug.html), and more
than 10 years since then teaching others from my experiences. One of the most
profound insights I've gained about the relationship between open source
software development and software quality came from assimilating an
analysis in the paper [Two case studies of open source
software development: Apache and
Mozilla](http://portal.acm.org/citation.cfm?doid=567793.567795),
published in TOSEM, July 2002. For a full explanation, please see [this
transcript of a keynote speech I gave
in 2009](http://opensolutionsalliance.org/osa/osaalert%28apr09%29-tiemann.html).
For the purposes of this article, I want to focus on the fact that the
paper counted 388 different contributors to Apache, with Developer #1
doing 20% of everything and Developer #388 making a change so
insignificant that it could not really be seen in the graphs. The paper
explains that the open source code bases it studied produced
deliverables faster, with fewer bugs, that were themselves fixed faster,
than the comparable proprietary software it also studied. And the paper
observes that because open source software like Apache did not restrict
participation, bugs that might not have made it onto the MUSTFIX list
where developer resources are scarce (as surely they are when every
developer must be paid out of profits) can still be fixed by whoever cares
about that particular issue. And so I accepted what the paper
explained, and what I knew from my own experience: that open source was
far and away the best way to clean up all the corner cases that
inevitably arise in complex software projects. Hooray for continuous
improvement! But that was only half the story.
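To make the shape of that contribution curve concrete before moving on to the other half of the story, here is a toy model of a long-tailed (Zipf-like) distribution across 388 developers. It is illustrative only; the actual distribution measured in the Apache/Mozilla paper differs in detail.

```python
# Illustrative only: a Zipf-like model of contribution share across 388
# developers. The real distribution in the paper differs; this just shows
# the long-tail shape in which a few do most of the work and many do a little.
N = 388
weights = [1 / rank for rank in range(1, N + 1)]
total = sum(weights)
shares = [w / total for w in weights]

print(f"Developer #1:    {shares[0]:.1%} of contributions")
print(f"Developer #388:  {shares[-1]:.3%} of contributions")
print(f"Top 15 combined: {sum(shares[:15]):.0%} of contributions")
```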
After teaching what this paper taught a few dozen times as a part of New
Hire Orientation at Red Hat, I came to a new insight, which is the
flip-side of the story. Imagine you have your little world of code you
maintain, and you find one day that something is wrong. You search and
search, and you conclude that the problem is not with the code you've
written, but lies beyond, in some library or application you did not
write. You might find the problem is with Apache, and by making that
determination, you could verify your hypothesis by looking at the code,
observing the behavior, and if you were right, you could become
developer \#389 by fixing that defect, as so many have before you. But
suppose instead you find the problem lies in some proprietary software.
That is where your ability to improve the system ends. Moreover, you
still have a problem. WTF?! (What's The Fix?!)
You can document the problem, making customers suspicious of your own
software, or you can place a work-around in your own code. The
work-around is not a "correct" fix, but it might give you the behavior
you need, and now instead of fixing a problem, you've actually created a
second problem which, for the time being, cancels out the first, maybe.
You cannot know for sure because you cannot see the original problem,
only the shadows that it casts. Now imagine there are hundreds of
modules with hundreds of opportunities for fixes which instead generate
work-arounds. It is easy to see that there could be hundreds of times
the number of defects or potential defects lurking in the system when,
if the source code were available, there need be none at all!
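A rough counting sketch of that argument, with made-up numbers: if a single upstream bug cannot be fixed at its source, every dependent module that hits it carries its own compensating hack.

```python
# Hedged counting sketch; the numbers are assumptions, only the shape matters.
dependent_modules = 100   # modules that hit the same upstream bug

# Closed source: the original defect remains, plus one work-around per module.
latent_defects_closed = 1 + dependent_modules

# Open source: one fix where the bug actually lies, nothing to work around.
latent_defects_open = 0

print(latent_defects_closed, latent_defects_open)   # 101 vs 0
```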
Thus, open source not only permits developers to fix bugs where they
lie, but also provides a strong incentive (and culture) not to pollute one's
own work just because a bug lies in another module. The cumulative result
has been quality differences of 100x or more compared with
proprietary software [as measured by
Coverity](http://scan.coverity.com/). Such a difference in quality *is*
noticeable. And empowering. And encouraging, not only to fix what is
wrong, but to improve what could be better. And all of this functions as
an encouragement to raise quality and innovation to the point where IT
delivers on its real promise: creating value.
(originally published May 2010)
When Thomas Friedman enumerated [10 "flattening
forces"](http://en.wikipedia.org/wiki/The_World_Is_Flat#Ten_flatteners) in his
book [The World Is
Flat](http://en.wikipedia.org/wiki/The_World_Is_Flat), he declared that
force #4, [Open Source](http://en.wikipedia.org/wiki/Open_Source), was
the most powerful and disruptive of all. New discoveries in nature
suggest that Friedman's assessment may be more profound (and more
consistent) than even he imagined.
Friedman notes that open source engenders a feature rarely seen in
previous publishing endeavors: uploading. Traditionally, publishing
followed a waterfall model: some marketable idea or expression would
find some capital partner and the two would join to create a work that
could be purchased or otherwise consumed by a downstream market. Ideas
flowed in one direction, and capital returns would flow in the opposite
direction.
Open source created a bi-directional flow in which the market itself
could make greater intellectual contributions than any of the original
principals. Moreover, this could often be accomplished without any
particular capital partner. Whereas piracy was seen as the scourge of
the private property publisher, ubiquitous distribution was a necessary
prerequisite for open source participation.
Traditional publishers and capitalists claim that open source
turns the basis of intellectual property on its head, but I disagree. I
think it merely turns it sideways.
Long before Friedman, Alexis de Tocqueville was writing about another
flattening of the world: American Democracy. He seized upon the idea
that Democracy and Equality were profoundly related concepts, the former
operating in a world of politics and the latter observable in the
natural world.
By this device he was able to instantly perceive how America, unshackled
from central controls and authority, could create the most favorable
conditions for innovation:
When a private individual meditates an undertaking, however directly
connected it may be with the welfare of society, he never thinks of
soliciting the cooperation of the Government, but he publishes his plan,
offers to execute it himself, courts the assistance of other
individuals, and struggles manfully against all obstacles. Undoubtedly
he is often less successful than the State might have been in his
position; but in the end the sum of these private undertakings far
exceeds all that the Government could have done.
Not only does this sound like a great description of open source
software vs. proprietary software, but it actually describes quite
accurately my own experience starting the world's first open source
software company. (And though it may surprise some, I take great pride
in the fact that the Government is now embracing open source just as
quickly as it can—the best ideas are those that are good for all, not
just some.)
It also explains the relative benefit of horizontal interaction vs.
vertical integration. (To read more about this, check out [The Only
Sustainable
Edge](http://www.amazon.com/review/RHCRQ8Z1X09DT/ref=cm_cr_rdp_perm)
which really does that subject justice.)
But among the dozens of subjects he considers and the hundreds of
insights that illuminate them, he, like Friedman, holds up one as more
significant than all the rest: The Law of Descent (or, in this
translation, the Law of Inheritance):
But the law of inheritance was the last step to equality. I am surprised
that ancient and modern jurists have not attributed to this law a
greater influence on human affairs. It is true that these laws belong to
civil affairs; but they ought, nevertheless, to be placed at the head of
all political institutions; for they exercise an incredible influence
upon the social state of a people, while political laws show only what
this state already is. They have, moreover, a sure and uniform manner of
operating upon society, affecting, as it were, generations yet unborn.
Through their means man acquires a kind of preternatural power over the
future lot of his fellow creatures. When the legislator has once
regulated the law of inheritance, he may rest from his labor. The
machine once put in motion will go on for ages, and advance, as if
self-guided, towards a point indicated beforehand. When framed in a
particular manner, this law unites, draws together, and vests property
and power in a few hands; it causes an aristocracy, so to speak, to
spring out of the ground. If formed on opposite principles, its action
is still more rapid; it divides, distributes, and disperses both
property and power. Alarmed by the rapidity of its progress, those who
despair of arresting its motion endeavor at least to obstruct it by
difficulties and impediments. They vainly seek to counteract its effect
by contrary efforts; but it shatters and reduces to powder every
obstacle, until we can no longer see anything but a moving and
impalpable cloud of dust, which signals the coming of the Democracy.
de Tocqueville properly predicts that the tendency of proprietary
software, which is typically treated as indivisible property, is to
create at least aristocracies, and in degenerate cases, monopolies. And
his writing is strangely prescient about open source software as well,
but let me pick that up at the end.
There is one more writer I must invoke before introducing the actual
subject of this article, and that is Charles Darwin. Darwin's theory of
evolution is a staggering contribution to science. Read naïvely, the
theory predicts the survival of the fittest. As such it is no more
insightful than the economic theory that says "buy low, sell high."
Read more deeply, the theory is based upon the evidence of the survival
of the most adaptable, and *that* theory has proven durable not only in
the community of life sciences, but in virtually every field in which
competition and risk over time play a role, *i.e.*, virtually every
human endeavor. Our fascination with fitness likely comes from the fact
that it is so easily (and instantly) measured. The study of adaptability
takes time. But it can also lead to much deeper insights.
Consider [the evolution of the
eye](http://en.wikipedia.org/wiki/Evolution_of_the_eye). Darwin
considered this at once to be "absurd in the highest possible degree"
and yet he wrote:
...if numerous gradations from a perfect and complex eye to one very
imperfect and simple, each grade being useful to its possessor, can be
shown to exist; if further, the eye does vary ever so slightly, and the
variations be inherited, which is certainly the case; and if any
variation or modification in the organ be ever useful to an animal under
changing conditions of life, then the difficulty of believing that a
perfect and complex eye could be formed by natural selection, though
insuperable by our imagination, can hardly be considered real.
And as far as the fossil record can tell us, once the basis of
photoreceptivity appeared, the eye evolved from that origin
independently at least 50-100 times. In that regard possession of this
feature follows the law of inheritance in that if your parents had eyes,
you probably do, too. But the advantage of this feature cannot be
determined by examining the feature itself: The human eye, which from a
design perspective is "built backwards and upside-down" compared with
the elegant design of the eye of the octopus, nevertheless confers a
degree of adaptability that makes irrelevant the details of that
unfortunate design.
But now we know that genetic advantages are not only inherited. And they
are not only conferred by genetic engineering. Consider *Elysia
chlorotica*:
![](http://www.wired.com/images_blogs/wiredscience/2010/01/green_sea_slug.jpg)
(Photo credit: Nicholas E. Curtis and Ray Martinez)
This sea slug discovered in the waters of the Atlantic ocean may be one
of the most dramatic examples of [lateral gene
transfer](http://en.wikipedia.org/wiki/Horizontal_gene_transfer). It
appears that rather than relying on natural selection of random
mutations of its inherited genetic code to achieve greater fitness,
somewhere along the line *Elysia chlorotica* went from shepherding algae
as a captive food source
([symbiosis](http://en.wikipedia.org/wiki/Symbiosis)) to incorporating
the gene psbO into its own DNA, thereby allowing it to integrate a
[photosynthetic](http://en.wikipedia.org/wiki/Photosynthesis) process
based on [chloroplasts](http://en.wikipedia.org/wiki/Chloroplasts)
without any algae present. Now a "solar-powered sea slug", *Elysia
chlorotica* can feed itself for almost a year just by lying about in
the sun.
Lateral gene transfer is understood to have been fairly common among
very simple single-celled organisms, and the more closely we examine
various genomes, the more we see that there is much more to evolution
than stepwise refinement of inherited wealth.
In the world of open source, one example of such lateral transfer has
been the development of [GTK](http://en.wikipedia.org/wiki/GTK%2B) (now
[GTK+](http://en.wikipedia.org/wiki/GTK%2B)). Started initially as a
toolkit for the GNU Image Manipulation Program
([GIMP](http://en.wikipedia.org/wiki/GIMP)), GTK+ now supports dozens of
desktop environments, window managers, and applications. Many other open
source technologies have been born in one context, refactored for use in
other contexts, broken out as stand-alone projects, and then
reincorporated in yet new programs for new purposes.
To return briefly to de Tocqueville's vision, recall:
They vainly seek to counteract its effect by contrary efforts; but it
shatters and reduces to powder every obstacle, until we can no longer
see anything but a moving and impalpable cloud of dust, which signals
the coming of the Democracy.
Whereas proprietary software tends to remain monolithic, in part because
none of its constituent elements have any independent value whatsoever
if broken apart, modular open source systems transcend the logic of
partition and fragmentation. One thousand people can take copies of the
Linux source code, and yet there is still a complete copy for the
original developer and another for the 1,001st person who wants one.
All this copying does not dilute the strength of Linux, but on the contrary
only makes it stronger and more valuable in the marketplace. This is the
effect that a share-alike license like the GPL provides to Linux and all
who follow its prescription of equality. And today we see the
unstoppable growth and insatiable appetite for open source in the
enterprise. Could it be that de Tocqueville's logic properly predicted
the advent and effect of open source cloud computing?
Back to the main topic at hand...
Mathematically speaking, the combinatorial possibilities of lateral
evolution between multiple domains far exceed what can be supported by
restricting evolution to only that which can occur within a single domain.
Moreover, the adaptive potential of such cross-pollination must be far
superior to any approach that attempts to adapt whole systems to work
only with other whole systems, even when such whole systems follow
rigorous interoperability guidelines.
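A toy calculation makes the point; the numbers are assumed, only the comparison matters.

```python
# Illustrative combinatorics: with k independent domains each offering n
# reusable modules, recombining across domains yields n**k possible
# assemblies, versus only k*n variants if each domain evolves in isolation.
n, k = 10, 5
within_domain  = k * n    # 50 variants
across_domains = n ** k   # 100,000 combinations
print(within_domain, across_domains)
```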
Suddenly it becomes obvious that when the world is flat, when
evolutionary innovation can take lateral pathways, when we have the
legal, technical, and operational freedom to adopt the most sensible
approaches of the best designs, regardless of their ancestry—or our
own—then we see the kind of adaptability that ensures its own
survival.
And we see those who don't enjoy that kind of adaptability heading
one step closer to extinction.
(originally published May 2010)
*The Economist* has weighed in on the implications of the world's first [synthetic
life-form](http://www.economist.com/opinion/displayStory.cfm?story_id=16163154).
For those of you who may have missed the announcement last week, Craig
Venter and Hamilton Smith, the two American biologists who unravelled
the first DNA sequence of a living organism (a bacterium) in 1995, have
pushed the envelope again, demonstrating the first successful boot-up of
a synthetic bacterium. Editors at *The Economist* argue that the only
sensible way to protect ourselves from such creations is to require that
the DNA sequences be open source. It is a profound insight.
It would not be the first time that open source saved humanity from
Venter's creative genius. I don't want to take anything away from
Venter as a talented and creative technician—he has solved a number of
very tricky problems, and in so doing, has advanced the frontiers of
human knowledge. And while his values might place him among those who
worship Ayn Rand, they consistently threaten the rest of us who must
live in the real world, with each other and with the consequences of our
actions.
There was a time when the US Patent and Trademark Office had no idea
what to do with patent applications that merely identified a genomic
sequence and declared "it's a machine composed of amino acids that is
put together in the following way." [The Wikipedia article on the Human
Genome Project](http://en.wikipedia.org/wiki/Human_Genome_Project)
tells how first the US PTO accepted all manner of random genomic
sequences as novel inventions, then limited patents to machines that had
a defined purpose, and then in 2000 President Bill Clinton further
clarified that the Human Genome itself belonged to the public domain and
could not be patented.
That was not some fiat decision made by the President, but a
nod to the fact that the scientific and open source community, working
in concert, did the lion's share of decoding and publishing that genome.
By publishing first, we mooted the question of how much of our own DNA
Craig Venter's company should be allowed to own.
But now he's back, and he's built the one thing that sits as an
exception to the [Gene Patent](http://en.wikipedia.org/wiki/Gene_patent)
exclusions: a wholly synthetic life-form. Does Venter really want to
advance science (which he has done), or is he searching, like Charles
Muntz, villain of the PIXAR movie
[*UP*](http://en.wikipedia.org/wiki/Up_%282009_film%29), for his
ultimate, exclusive patent on life?
We may not know, but Venter's life forms are now multiplying, and what
that may mean for humanity we also may not know. But *The Economist*
argues, and I believe it is a very strong argument, that the only way we
can protect ourselves from them is to ensure that we have their source
code. We may well need it sooner than we can imagine.
(originally published January 2011)
Imagine it is 1912, but that the Titanic is fitted with an underwater
radar system. Imagine that it senses an iceberg so large that even the
captain can understand that by the law of conservation of momentum, the
ship will be stopped in its path. Should the captain use the radar
information to inform the decision to alter course, or should the
captain ignore it because radar is merely an invention of science, and
therefore prone to exaggeration and false findings?
The New Yorker Magazine has just published an immensely popular article
titled ["The Truth Wears Off —Is there something wrong with the
scientific
method?"](http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer?currentPage=all)
The article reports several examples of scientific findings that
appeared to be significant when first published, but that, when tested over
time, have demonstrated weaker and weaker results. Zyprexa is a
second-generation antipsychotic that showed great promise in clinical
trials in the nineteen-nineties. By 2001, Zyprexa earned more revenue
than Prozac, and it remains Eli Lilly's top-selling drug.
According to the article, recent studies of these second-generation
antipsychotics show that the therapeutic power of the drugs appears to
be steadily waning, down to less than half of that documented in the
first trials. It reports that many researchers began to argue that the
expensive pharmaceuticals weren’t any better than first-generation
antipsychotics, which have been in use since the fifties. And it quotes
John Davis, a professor of psychiatry at the University of Illinois at
Chicago, as saying “In fact, sometimes they now look even worse.” How
could such drugs be approved if the FDA is using the scientific method,
which requires independent reproducibility of results?
Quoting the article:
But now all sorts of well-established, multiply confirmed findings have
started to look increasingly uncertain. It’s as if our facts were losing
their truth: claims that have been enshrined in textbooks are suddenly
unprovable. This phenomenon doesn’t yet have an official name, but it’s
occurring across a wide range of fields, from psychology to ecology. In
the field of medicine, the phenomenon seems extremely widespread,
affecting not only antipsychotics but also therapies ranging from
cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming
analysis demonstrating that the efficacy of antidepressants has gone
down as much as threefold in recent decades.
In 2005, a paper was published titled "[Why Most Published Research
Findings Are
False,](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/)" and it
has become that author's most-cited work. If we paradoxically accept its
findings as true, what reasonable interpretation should we give to
President Obama's inaugural promise to "restore science to its rightful
place"? This is a serious question. If we read the science on global
climate change, it reads like a radar screen flashing RED ALERT about
the impending iceberg of environmental collapse. Should we heed the
warnings that more than 95% of all climate science papers report, or
should we maintain course, confident that all these predictions are
nothing more than statistical aberrations and gamesmanship?
Over the New Year's holiday I had a chance to watch the movie ["Fair
Game,"](http://www.fairgame-movie.com/) which is based on Valerie Plame
Wilson's book "Fair Game: My Life as a Spy, My Betrayal by the White
House". (It also incorporates material from Ambassador Joe Wilson's book
"The Politics of Truth: Inside the Lies that Led to War and Betrayed My
Wife's CIA Identity".) In that movie, the subject of truth is examined
in many contexts. There are the 16 words that George W. Bush uttered
during his January 28, 2003 State of the Union address ("The British
government has learned that Saddam Hussein recently sought significant
quantities of uranium from Africa."), words accepted as true enough to
authorize the invasion of Iraq. There is the op-ed piece that Ambassador
Wilson wrote on July 6, 2003 (["What I didn't find in
Africa"](http://www.nytimes.com/2003/07/06/opinion/what-i-didn-t-find-in-africa.html%5D)).
There is a dramatization of the discussion between CIA and the Office of
the Vice President about intelligence and whether the seriousness of a
potential scenario (Iraq acquires fissile nuclear material) should be
allowed to influence the assessment that such facts are in evidence.
Finally, the movie shows how the selective use of source information,
without proper process, led to a clearly erroneous assessment of facts
regarding Iraq's nuclear capabilities, not to mention the possibility
that erroneous decisions were made in light of those facts.
It is estimated that the NIH spends more than $30B/year on medical
research, and that the CIA spends more than $40B/year on intelligence
activities. If "most published research findings are false," would we
all be better off in a pre-scientific world? How are we to make policy
and investment decisions, be they questions of which armies to raise
against rogue nations or which drugs to take against rogue cells?
The New Yorker reports that in a forthcoming paper, Jonathan Schooler
recommends the establishment of an open source database, in which
researchers are required to outline their planned investigations and
document all their results. This is an interesting prospect, especially
because of the studies I've read about open source software.
The IT industry spends $1.5T/year knowing full well that $500B/year is
being wasted on software and systems that will never make it to
production, or that, if they do, will be "challenged" by bad software
quality: schedule slips, missing features, and bugs significant enough to
interfere with operational capabilities. The [#1 reason for choosing
open source software in the
enterprise](http://opensource.com/business/10/6/integral-innovation) is
"quality as compared with proprietary software". In [a series of studies
published by Coverity](http://scan.coverity.com/), open source software
has achieved on average (across more than 250 projects, more than 55
million source lines of code (SLOC)) 100x lower defect density than
proprietary software. Is any of that true? Or is it just a bunch of
scientific nonsense?
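For scale, here is what a 100x gap in defect density means across a code base of that size. This is a hedged arithmetic sketch; the densities below are made up, and only the ratio matches the claim.

```python
# Hedged arithmetic only; densities are illustrative, not Coverity's figures.
# Defect density is conventionally reported as defects per thousand lines (KLOC).
sloc = 55_000_000                  # scale of code scanned, per the article
kloc = sloc / 1000

open_source_density = 0.01         # defects per KLOC (assumed)
proprietary_density = 1.0          # 100x higher (assumed)

print(f"open source:  {open_source_density * kloc:,.0f} defects")
print(f"proprietary:  {proprietary_density * kloc:,.0f} defects")
```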
I believe that the answers to these questions are among the most
important we face today. What is science? What can it know? What can it
teach us? How should we make decisions based on that information? I'm
excited to discover that others believe that open source, which has been
inspired by the scientific method, may yet be called upon to rescue
science from those who merely try to confirm preconceived hypotheses.
Can open source prove itself to be valuable? That would be quite a feat.
But honestly, I see no better course.
(originally published October 2011)
If you look at the evolution of the IT landscape over the past 30 years,
you see two distinct trends: the continued growth of the IT dinosaurs
(mainframe computing and mainframe wannabes like Sun) and the emergence
of highly modular, adaptable systems, which, by their very process of
evolution, not only best suit the current needs, but plant the seeds for
the next computer revolution. In the 1980s, modular UNIX systems sowed
the seeds for Linux, which in the 1990s sowed the seeds for the rapid
spread and adoption of the World Wide Web, which in the 2000s, sowed the
seeds for companies like Amazon.com, Google, Facebook, and Twitter to
aggregate and disseminate content as never before.
In the old days, when missions were presumed to be fixed, one could
perform a fixed evaluation of a system and deem it fit or unfit for
service. Today, when any single idea can, overnight, undermine critical
infrastructure (Stuxnet), rewrite fundamental security assumptions
(Anonymous), and overthrow governments (Wikileaks and the Arab Spring),
today's "mission critical" systems are tomorrow's failures of
imagination. Today, there are far too many IT systems that, for all
intents and purposes, are "too big to fail," and that in and of itself
represents a systemic risk that must be addressed.
The history of Google's datacenter (or Facebook's for that matter) is a
history of rapid adaptation and unlimited scalability, made possible by
modular open source software. What makes these systems "mission
critical" is not their sheer size, nor the badges of the people who
delivered them, but the fact that the more completely Google and
Facebook adapt to what users need today, the more they change what users
will want tomorrow. And they have the freedom and the flexibility to
evolve their systems accordingly: faster, better, cheaper, forever.
Mission adaptation is the new mission critical.
The Fedora project is an exercise in creative destruction: every six
months, we identify the single biggest aspect of the project that has
become "too big to fail," and we blow it up. We blow up software into
more modular components; we blow up processes to create greater autonomy
and agility; we blow up governance structures to allow for greater
transparency and accountability. We encourage all our participants to
fail faster in order to succeed sooner. This approach creates the raw
materials that Red Hat uses for its commercial products, including Red
Hat Enterprise Linux. The result: in six years of commercial release,
the Linux kernel in Red Hat Enterprise Linux 4 has suffered zero
critical security failures. In four years of commercial release, the
Linux kernel in Red Hat Enterprise Linux 5 has suffered zero critical
security failures. We understand from our clients in Ft. Meade that this
is the first time they have ever encountered such a trustworthy
operating system. Ever.
The Tokyo Stock Exchange used to suffer trade-stopping outages
regularly. They changed the shape of the trading day just to give their
systems a chance to "cool down" during lunch, and they still had
outages. Other metrics were also in the red zone: non-competitive
latency and ultra-high operating costs were not sustainable. NASDAQ and
NYSE had already migrated to Red Hat Enterprise Linux when, in January of
2010, the Tokyo Stock Exchange launched "Arrowhead," its own first
deployment of it. Within seconds of the first new trading day of the
year, traders noticed that matches, which previously took seconds to
complete, were now instantaneous (2.5 ms worst case, about 6x faster than
a video refresh at 60 Hz). Imagine the feelings of first the relief (it
works!) and then the excitement (it's the fastest in the world!) on
the floor that day. To date, they have [not suffered a trading
outage](http://en.wikipedia.org/wiki/Tokyo_Stock_Exchange).
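The comparison in parentheses above is simple arithmetic, shown here only as a quick check of scale:

```python
# One frame of 60 Hz video lasts about 16.7 ms; Arrowhead's worst-case match
# time cited above is 2.5 ms.
frame_ms = 1000 / 60
match_ms = 2.5
print(f"{frame_ms / match_ms:.1f}x faster than one frame of 60 Hz video")  # ~6.7x
```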
Open source represents a profound paradigm change to the way software is
developed, deployed, and managed. But it also represents the most
effective, efficient, and reliable way to ensure that the enterprise
itself can evolve to address continuously changing requirements,
environments, challenges, and opportunities. Open source software is the
antidote to "too big to fail." It is a way to create mission capability
that anticipates the future, and thereby creates the future.
(originally published August 2011)
I read with delight Steve Denning's article [Is Montessori The Origin of
Google and
Amazon?](http://blogs.forbes.com/stevedenning/2011/08/02/is-montessori-the-origin-of-google-amazon/).
His arguments are firm, they accommodate a wide range of scientific
facts, and they show what remarkable results can be achieved when we
"follow the child." He writes well enough and clearly enough that I need
not reiterate his points here—you can (and should!) read his writings
directly. But there is more that can be said, particularly in
understanding how open source principles and philosophies fit so well
with those of Montessori education.
I came to Montessori education late in life, as a parent. I began
knowing literally nothing whatsoever about Montessori's work, but the
school my daughter attended took Montessori's writings very seriously,
and I began to see the profound and deep connections between seemingly
simple classroom activities. After reading [The Science Behind the
Genius](http://www.montessori-science.org/montessori_science_genius.htm),
the grand design became clear to me, and I have since become a dedicated
proponent of the Montessori method.
The Montessori mantra of "Follow the Child" speaks to the idea of
nurturing the agency of the individual. Montessori found that if
children are deprived of the opportunities to make authentic choices,
their selves do not fully develop, and they can become far too dependent
on others to make decisions for them. In an analogous fashion, open
source empowers all participants, whether users, developers,
distributors, or maintainers, to be authentic agents. Such empowerment
encourages not only the improvement of the software (which can be seen
by its [100x better quality than proprietary
software](http://opensource.com/business/10/6/integral-innovation)), but
more importantly it encourages the improvement of the individual. This
is what I have seen in 20+ years of open source, and what I have seen in
10 years as a Montessori parent.
"Follow the child" is not limited to seeing what a child will do with a
fixed curriculum. In Montessori education, all of nature is available
for study, and children are encouraged to spend time outdoors,
observing, journalling, asking questions, and seeking the necessary
knowledge to find answers to those questions. The scientific method is a
modular method, which is to say that results are built upon results that
are built on yet more results. Scientific results must be reproducible
or they are not acceptable as science. In much the same way, the natural
modularity of open source software makes it a kind of science of
code. Modules can be freely used in much the same way that scientific
results can be freely reproduced. And just as a great scientist tries to
make their results as simple and as accessible as possible, there is
equally a peer reward system for those who make their software as
general, portable, and technically transparent as possible.
A key value of the Montessori method is that learning should be a
life-long process. Denning paraphrases this by saying that education is
not a destination but a journey. Denning observes that those who see
their college diplomas as the all-important destination find themselves
at the end of the road when circumstances change. For those who embrace
learning as a life-long exercise, change is just a new opportunity to
learn. Similarly, open source software tends very much to have open,
expansive futures. So many proprietary programs and frameworks rise and
fall because they were conceived with an end-state in mind. By contrast,
open source software is constantly being rewritten, re-purposed, and
re-invented. Look at [the evolution of Linux over the past 20
years](http://www.linuxfoundation.org/20th/) and tell me: has there ever
been an operating system that has evolved so much, so fast, so far? This
is the genius of a life-long learning approach.
Of course the real proof of the commonality between Montessori and open
source is this: Are Montessori students excited to get their hands on
source code? [You bet!](http://www.michaelolaf.net/google.html)
(originally published May 2011)
Some of my colleagues at Red Hat have been working for some time now on a
book/wiki titled [The Open Source
Way](http://www.theopensourceway.org/book/). It is aimed at answering
the very important questions of "How?" for a given set of Whats, and it's
a valuable resource for those who are ready to roll up their
sleeves and start putting open source principles to work. But why
would anybody want to do that?
Why indeed...
Last year I saw a really great TED video by [Simon
Sinek](http://www.ted.com/talks/simon_sinek_how_great_leaders_inspire_action.html).
He titled the video "How great leaders inspire action", but my take-away
was that when it comes to really bringing about a change in thinking, a
good answer to the question "Why?" beats a good answer to the question
"What?" or "How?" He argues that great leaders inspire action by asking
the right "Why?" questions, questions that ultimately make one wonder
"Why not?"
Clay Shirky's book [Cognitive
Surplus](http://en.wikipedia.org/wiki/Cognitive_Surplus) (also available
as a [TED video](http://opensource.com/business/11/5/open-source-why))
references the estimate that the sum total of all articles, edits,
arguments, etc., ever made to Wikipedia totaled 100 million hours of
human effort. To put that number into perspective, the total time spent
watching television *each year* is 200 billion hours, or about 2,000
times the cumulative total of Wikipedia from inception through the end
of 2008. The point of his book is not to declare this as some great
shame (others have already done that), but to point out that fundamental
new properties of 21st-century media and technology provide, for the first
time, a way to harness the cognitive surplus that is currently idling
away 200 billion hours of human attention each year. That's a lot of
attention!
Clay argues, and I agree, that it is simply too easy—and wrong—to write
off those 200 billion hours per year as a kind of cognitive entropic
loss, even if for many years that has been precisely true. Clay also
argues, and I again agree, that as media has evolved from mostly one-way
communication (from author to reader, or from broadcaster to viewer) to
social and participatory (of peers, for peers, by peers), changes to
both individual motivations and new community norms reveal powerful new
forces that can effect astonishing results. Clay teaches that "more is
different" and that the new forms of association and aggregation that
two-way technologies make possible create entirely different economic
systems than the presumed (and increasingly debunked) models of pure
consumers making rational economic choices.
One of the truly great examples of using the dramatically different
dynamics of a participatory network rather than one-way broadcast
(which, come to think of it, really should have been one of the
examples that made it into Clay's book) is the story of Estonia known as
"Let's Do It!". [As I wrote](http://www.opensource.org/node/433) on [my
opensource.org blog](http://www.opensource.org/blog/8) back in 2009:
The story begins as many do, with the current generation inheriting all
the good that Earth can provide minus all the accumulated harm that
generations of human stupidity, greed, and unchallenged status quo have
wrought. In Estonia that equation had reached a point where one
visionary said "Enough!" Rainer Nõlvak organized a project which
effected the cleanup of 10,000 tons of garbage throughout the country's
forests in a single day for a cost of €500,000. It was estimated that if
this task could have been performed by the government, it would have
taken 3 years and cost €22,500,000. The project that Rainer organized
thus not only delivered a cost savings of 45:1 (on par with the 50:1
ratio achieved by Hill Air Force Base when they dumped proprietary
hardware and software for open source and commodity technologies), but
did so quickly enough that the population of Estonia as a whole could enjoy an
additional 5 million person-years of clean forests that had been
despoiled by previous generations.
The story of "Let's Do It" exemplifies how multi-way media gives
individuals a zero-cost way to address the publics that claim them as
members. It also demonstrates how enabling and engaging people's human
priorities and values can achieve transformative results, while also
further repudiating the presumption that individuals are locked in to
making "rational" (i.e., selfish) allocation decisions about of scarce
resources such as time and money. And, like Wikipedia, it represents
less than 1% of the otherwise abundant time the Estonian people have to
devote to competing interests, such as watching television or griping
about how many people are watching television when they should be doing
something more productive.
Which brings us back to the open source "why". For as long as I have
been explaining the excitement and potential of open source software,
some skeptic (or some cynic) would challenge me by saying "who has the
time to write software to solve their own problems?" The fact is that
globally, we have extraordinary amounts of time, we just don't use it
very well. Partly this is because in the past we didn't all have great
tools that would make it easier and more rewarding to use our time more
productively. Partly this is because when new tools become available,
we're stuck in old ways of thinking and old ways of behaving. The answer
as to why create Wikipedia, or Linux, or Apache, or any other great
community project can now be understood as really quite simple: because
we can. Some intellectual endeavors still have high barriers to
participation: not everybody can get unlimited time on the Hubble space
telescope or can direct particle beams at the Large Hadron Collider. But
when it comes to writing open source software, anyone can learn it and
anyone can do it, not only because we as humans have the capacity to
learn, *but because open source software provides the necessary
permissions and implicit invitations to participate as fully vested
partners*.
Which brings us back to [The Open Source
Way](http://www.theopensourceway.org/book/). The Cognitive Surplus
Hypothesis promises virtually unlimited resources for solving
collaborative creative problems, but nobody is going to lend their brain
to an activity that wastes all their effort—television already has that
position! The Open Source Way makes the "Why" of open source practical
by teaching the What and the How. Put all these together, take some
initiative, and you could see a million, ten million, or one hundred
million hours of effort applied to problems that make you wonder "why not?"
(originally published June 2012)
Open source software and open source best-practices have become truly
ubiquitous in the business world. Software used to be the new frontier,
but open source software can be found leading up to the frontier, at the
frontier, and beyond. My experience at [CGI
America 2012](http://www.cgiamerica.org/) (a US-focused subgroup of the
[Clinton Global Initiative](http://www.clintonglobalinitiative.org/))
confirmed this.
The focus areas of [CGI America](http://www.cgiamerica.org/) span the
gamut of challenges and opportunities facing America today: clean
electricity and efficiency, clean fuel and transportation, early
childhood education, entrepreneurship, financial inclusion, housing
recovery, reconnecting youth, small business, STEM education, wellness,
and workforce development drew more than 800 CEO-level attendees from
across the country to share ideas and to make commitments of concrete
progress.
I was invited to attend the working group focused on [Advanced
Manufacturing](http://www.cgiamerica.org/2012/working_groups/advanced_manufacturing.asp).
Our group of 70+ executives from the public, private, and research
sectors took on the strategic (and existential) questions of American
manufacturing in the 21st century along the topics of innovation,
competitiveness along the supply chain, environmental and economic
sustainability, exports, and maintaining/developing a skilled workforce.
I don't have much day-to-day experience with Advanced Manufacturing, but
I was familiar with the work of [Eric Von
Hippel](http://en.wikipedia.org/wiki/Eric_von_Hippel) [generalizing open
source best practices to manufacturing and industrial
processes](http://en.wikipedia.org/wiki/Democratizing_Innovation). He's
done such a great job channeling me that I figured I could channel him
for a few days.
Much to my surprise (and delight), open source was not a new idea at
CGI, CGI America, or even the Advanced Manufacturing working group.
Numerous speakers during the plenary session mentioned projects using
open source across the many topic areas already mentioned. At the
plenary lunch I met Douglas Woods, President of the [Association for
Manufacturing Technology (AMT)](http://www.mfgtech.org/) who told me
that AMT was sponsoring not one, but two open source projects to
increase standardization and innovation across their membership. Really?
Well, yes and no. I checked out [MTConnect](http://mtconnect.org/), the
home page for the technology and standards projects sponsored by AMT.
The fact is that there are published reference documents that can be
freely downloaded, and there is code that can be freely downloaded,
read, and redistributed from github, but to get to either the documents
or the code one must register (it's free) and one must agree to abide by
their terms of service (by registering). That is not altogether bad: the
website of the [Open Source Initiative](http://opensource.org/) also
demands that users abide by its own terms of service, albeit without the
up-front registration requirement, and asking for positive assent to
abide by open source terms is not contrary to the Open Source
Definition.
Having registered, I looked at the documents and briefly browsed the
code repository. It is a start, as so many other projects were at the
beginning. The aspirations of the project are bold, as they
should be. As a former OSI Board Member, I would have been really happy
to have seen the project select [a specific OSI-approved
license](http://opensource.org/licenses/alphabetical) for their code
and/or documentation, but that has not yet been done. This is probably
the most important next step, and one that will define for many years to
come the community of users that will nurture and sustain the software.
I am sure they will deliberate carefully, and they will weigh the
arguments for and against various licenses in part based on how those
licenses have worked for situations relevant to the members of the AMT.
That will take time.
But looking at the bigger picture, the manufacturing community is
studying very seriously all the success that the open source community
has created: superior innovation (aka competitive advantage),
interoperability (aka market opportunity), quality (aka cost and risk
management), and long-term sustainability. I think open source software
has a lot to offer, not just in helping them make better use of
technology, but in helping them make better use of the whole ecosystem in which they
operate. I look forward to seeing what develops over the next year!
In the meantime, I'll be checking out
[MakerFaire](http://makerfairenc.com/), which is coming to Raleigh on
June 16th. The Makers may not have the capitalization of most Advanced
Manufacturing companies, but their creativity is unmatched, and may yet
prove helpful in advancing industry as well as the imagination.
(originally published January 2013)
In 1998, Amartya Sen was awarded the Nobel Prize for Economics. The
[lecture](http://www.nobelprize.org/nobel_prizes/economics/laureates/1998/sen-lecture.pdf)
he gave, titled "The Possibility of Social Choice," succinctly captured
both the subject of his work (generalizing economic theory to cover
social groups of disparate actors rather than just individuals or
corporations) and his irrepressible sense of humor (because the
generalization applied to [Arrow's Impossibility
Theorem](http://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem)).
Sen's crucial insight (for me) is this (emphasis mine):
Thus, it should be clear that a full axiomatic determination of a
particular method of making social choice must inescapably lie next door
to an impossibility—indeed just short of it. If it lies far from an
impossibility (with various positive possibilities), then it cannot give
us an axiomatic derivation of any specific method of social choice. It
is, therefore, to be expected that constructive paths in social choice
theory, derived from axiomatic reasoning, would tend to be paved on one
side by impossibility results (opposite to the side of multiple
possibilities). No conclusion about the fragility of social choice
theory (or its subject matter) emerges from this proximity.
I am quite familiar with proximity to impossibility. When we started
Cygnus Support, the world's first company based on selling commercial
support for free software, nearly everybody thought it would be
impossible. Those few who did not thought that being so nearly
impossible would make the business too fragile to ever be interesting,
especially by Silicon Valley standards. The [success of
Cygnus](http://oreilly.com/catalog/opensources/book/tiemans.html) and
the [subsequent
success](http://opensource.com/business/12/3/billion-thanks-open-source-community-red-hat)
of [Red
Hat](http://seekingalpha.com/article/1076571-red-hat-management-discusses-q3-2013-results-earnings-call-transcript?part=single)
strongly validate Sen's bold prediction that being on the edge is not a
sign of weakness. Indeed, where do we find leaders, but [out in
front](http://www.redhat.com/about/company/history.html)?
All of the above is a preamble to the subject of this article, which is
the presentation of a new economic paradigm for understanding the future
and potential of cloud computing. With luck, economists smarter than I
will develop the formal methods and analysis that will garner them some
recognition in Sweden. But luck or not, the true beneficiaries will be
those who embrace this paradigm and profit from the insights that it
makes obvious. Insights which, according to today's nay-sayers, are
impossible or at best insignificant, but which in fact are the key to
recovering trillions of dollars in business value wasted every year
under the current paradigms.
Global IT spend tops USD $1.5T per year, and businesses are (or should
be) banking on massive IT-enabled returns on that investment. Yet 18% of
all projects are abandoned before going into production, and another 55%
are "challenged", meaning they are late to market (sometimes very late),
buggy (sometimes very buggy), or missing functionality (sometimes key
functionality). The estimated cost of these shortfalls is USD $500B per
year, but that's only part of the story. The shortfall in terms of
expected ROI is 6x to 8x that number, meaning that USD $3.5T of expected
business returns never materialize
[\[4\]](http://opensource.com/business/10/6/integral-innovation). Each
year. No other industry I can think of can tolerate such abysmal
performance results, yet that's what we have come to expect from IT.
[Which is
unsustainable](http://www.csc.ncsu.edu/corporate_relations/fi_lit/383).
This problem has remained so stubbornly entrenched in part because the
numbers militate against any solution. The probabilities of failure are
so high (an 18% chance of total failure, a 55% chance of missing
deadlines, milestones, or a clean bill of application health) that any
effort to improve one's application environment stands roughly a 50/50
chance of actually making things worse, and that even in the best of
circumstances one will only achieve 50%-80% of what was [originally
intended](http://opensource.com/business/10/6/radically-simple-it-dr-david-upton).
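For readers who want the arithmetic spelled out, here is a minimal back-of-the-envelope sketch in Python. It uses only the round numbers cited above; treating the 18%/55% outcome split and the 50%-80% scope figure as an expected value is my own illustration rather than a formal model, and the 6x-8x multiplier brackets, rather than reproduces, the $3.5T estimate.

```python
# Back-of-the-envelope arithmetic using the round numbers cited above.
# This is an illustrative sketch, not a formal economic model.

GLOBAL_IT_SPEND = 1.5e12   # USD per year
DIRECT_SHORTFALL = 0.5e12  # USD per year lost to failed and "challenged" projects
ROI_MULTIPLIER = (6, 8)    # expected business return is 6x-8x the IT investment

# Missed business returns implied by applying the 6x-8x multiplier to the shortfall
missed_low = DIRECT_SHORTFALL * ROI_MULTIPLIER[0]
missed_high = DIRECT_SHORTFALL * ROI_MULTIPLIER[1]
print(f"Missed returns: ${missed_low / 1e12:.1f}T to ${missed_high / 1e12:.1f}T per year")

# Project outcomes: 18% abandoned, 55% challenged, the rest succeed
p_abandoned, p_challenged = 0.18, 0.55
p_success = 1 - p_abandoned - p_challenged
print(f"Chance a project ships roughly as intended: {p_success:.0%}")

# A "challenged" project delivers only 50%-80% of its intended scope,
# so the expected fraction of intended scope actually delivered is modest.
expected_low = p_success + p_challenged * 0.5
expected_high = p_success + p_challenged * 0.8
print(f"Expected delivered scope: {expected_low:.0%} to {expected_high:.0%}")
```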
But there is an alternate universe in which we find a working solution:
the world of open source, where measured software defect rates are [50x
to 150x lower than typical proprietary
software](http://it.slashdot.org/story/04/12/14/1340237/linux-has-fewer-bugs-than-rivals),
and where the pace of innovation can be seen on literally a daily
basis. The first (and still one of the best) economic analyses to
explain this remarkable phenomenon was a game theory analysis by
[Baldwin and
Clark](http://www.people.hbs.edu/cbaldwin/DR2/BaldwinArchPartAll.pdf),
showing that selfish developers benefit from forced sharing (involuntary
altruism) when systems are modular and there is a community of
like-minded (i.e., similarly selfish, lazy, and capable) developers.
Their analysis also showed that these benefits are highly scalable, and that
the more modular the system, the larger the community becomes and the
greater the payoff for participating. This formal result justified what
[Tim
O'Reilly](http://www.oreillynet.com/pub/a/oreilly/tim/articles/architecture_of_participation.html)
and so many others observed when they spoke about "[The Architecture of
Participation](http://opensource.com/business/12/6/architecture-participation)."
It also validates the intuition I had when I started Cygnus Support, as
well as what I saw happening between our company and the community
pretty much from the beginning.
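To make that result concrete, here is a deliberately simplified toy payoff comparison in Python. It is not Baldwin and Clark's actual game-theoretic model; the unit module cost, the even split of work across developers, and the 10% integration overhead are all illustrative assumptions. It only shows the direction of their finding: the larger the community sharing a modular system, the more each (selfish, lazy, capable) participant saves.

```python
# Toy payoff comparison: writing every module alone vs. sharing modules with a
# community. Illustrative assumptions only, not Baldwin & Clark's formal model.

def solo_cost(modules: int, cost_per_module: float = 1.0) -> float:
    """Cost to one developer who writes every module alone."""
    return modules * cost_per_module

def sharing_cost(modules: int, developers: int,
                 cost_per_module: float = 1.0,
                 integration_overhead: float = 0.1) -> float:
    """Cost to one developer when the work is split evenly across the community.
    Each developer writes ~modules/developers modules and pays a small
    per-module integration overhead for the modules received from others."""
    own = (modules / developers) * cost_per_module
    reused = modules - modules / developers
    return own + reused * integration_overhead * cost_per_module

modules = 100  # a reasonably modular system
for developers in (1, 5, 50, 500):
    saving = solo_cost(modules) - sharing_cost(modules, developers)
    print(f"community of {developers:3d}: sharing saves {saving:5.1f} module-equivalents of work")
```

The saving grows with the size of the community, and because both terms scale with the module count, a more modular system makes the same community worth more to each of its members.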
A second finding, explained by Oliver Williamson in his 2009 Nobel Prize
lecture, was the formalization of the economics of governance and the
economics of organization, specifically to help answer the question:
"What efficiency factors determine when a firm produces a good or
service for its own needs [rather than
outsource](http://www.nobelprize.org/nobel_prizes/economics/laureates/2009/williamson-lecture.html)?"
For too long, economists, and the proprietary software industry for that
matter, have treated firms as black-boxes, ignoring all the details on
the inside and focusing on prices and outputs as the only interesting
results to study. Williamson builds a new theory of transaction cost
economics based on work first articulated by John R. Commons in 1932 and
strongly echoed by [W. Edwards Deming
in 1982](http://deming.org/index.cfm?content=66), namely that continuity
of contractual relationships is a more meaningful predictor of long-term
value than simple prices and outputs. Indeed, when so much is being
spent and so much being thrown away when it comes to proprietary
systems, the prices and outputs of those systems become almost
meaningless. *At the limit, the firm that treats IT only as a cost, not
a driver of business value, has fallen into a trap from which it is
quite difficult to escape.* By contrast, the architecture of
participation, coupled with ever-increasing utility functions (due to
user-driven innovation), shows that the Deming cycle is perfectly
applicable to software, and that the long-term relationships between
firms build far greater value for all concerned than trading price for
quitting.
So what does this all mean for the cloud? One hypothesis is that the
macroeconomics of the cloud makes the microeconomics of open source
insignificant, and therefore irrelevant. If that is true, then the game
is truly fixed: a cloud OS is just another OS, cloud apps are just like
traditional apps, cloud protocols and management tools are merely
software APIs and consoles, etc. If that is true, then we should all be
prepared for the Blue Cloud of Death.
An alternative hypothesis is that open source is the nanotechnology of
cloud computing, and its nano-scale properties (architecture of
participation, enhanced innovation cycles, quality, and transactional
efficiencies) are crucial to all innovation going forward. I argue that
this is indeed the case, not only because of the arguments made thus
far, but because cloud computing creates a new inductive force that
specifically strengthens the arguments just made. And at this point I'm
compelled to introduce a rather lengthy analogy; please bear with me. A
single tree in the Amazon rainforest can transpire 300L of water per
day, or a bit less than half a (cubic) yard of water for those of us
still using the Imperial measurement system. It seems insignificant. But
when one considers the whole Amazonian rainforest, not only do these
trees transpire as much water as flows through the Amazon river itself,
but they *propel* that sky-borne water as far and as fast as well,
effectively [creating a second Amazon
river](http://www.riosvoadores.com.br/english/the-project) in the sky.
It is one thing to see a tree as shade, as a resource for firewood, as
a carbon sink, or as any other discrete use, but when the lens changes
from the small scale to the large, its function in the larger context
cannot be imagined by looking at the smaller case. Adam Smith [said the same
thing](http://www.gutenberg.org/ebooks/3300) about the invisible hand of
the market, not to say that it always does the right thing, but to say
that it's always doing *something*. Or, as Gandhi once said:
Whatever you do will be insignificant, but it is very important that you
do it.
When I started writing open source software back in 1987, Richard
Stallman was the maintainer of the GNU project, the master repository
was his local disk, and my version control system was Emacs backup files
and, to a lesser extent, the frequent tarballs of software distinguished
by a manually-adjusted release number. Merging changes was a
time-intensive (and sometimes energy-intensive) process, but the quality
of Stallman's code, and the few others working with him at the time, was
such that I could do in weeks what companies could scarcely do in years.
The GNU C++ compiler was developed and first released in six months'
time, while I simultaneously ported the GNU compilers to half a dozen
new architectures. Everything that was wrong about the way we managed
our software changes in those days represented an opportunity for us to
develop a new software management paradigm for supporting customers
commercially. We adopted the newly-developed CVS (Concurrent Versions
System) and for a time, the world was our oyster.
Within five years, we had succeeded in many of the ways we imagined:
inclusion on the Inc 500 list, the Software 500 list, the cover of a
special edition of Fortune magazine, even mentions in the New York Times
and the Wall Street Journal. But we succeeded in ways we didn't imagine,
nor design for. We stretched CVS to its breaking point. Signing a new
customer meant potentially creating a new customer branch in the master
repository. This process, which could once be done in a matter of
minutes, could take a day, which meant that with 200 business days in a
year, signing up 200 customers in that year would leave developers
precisely zero days in which to do any work against the
repository. This frequently led to arguments about forking—developers
wanted to work in repositories unconstrained by operational bottlenecks,
but somebody had to merge changes that could be delivered to customers.
The cost of forking had become intolerable, and the social choice we had
to engineer was one of lowered expectations for both customers and
employees. Despite those shortcomings, we shone, relatively speaking,
developing and delivering custom compilers and debuggers on time and on
budget 98.5% of the time.
But things are different now, and being the best in a broken paradigm is
not good enough. In the past five years, a program called "git" has
revolutionized how developers and maintainers manage code, and how code
can be called into production on a moment's notice, sometimes for just a
moment. git has reorganized the open source world so that forking is
neither expensive nor problematic, and where projects can merge and
combine so easily that it is almost possible to think of it as a kind of
quantum superposition. This change not only solves the problem that
bottlenecked the old way of doing things (at Cygnus and the FSF), but
opens up entirely new concepts as to what an application itself might
be. Instead of being some monolithic tangle of code that was difficult
to create, expensive to test, and impossible to change, it becomes a
momentary instance of code and data, producing precisely the result
requested before vanishing back into the ether. At any moment in time,
new code, new data, new APIs, and new usage contexts guide the evolution
of each generation of the application. *An application that evolves by
the minute is fundamentally different than one that evolves only every
year or two (regardless how many new features are promised or even
delivered).*
This rapid new dimension of evolution—at the application/operational
level—requires a new economic analysis. Fortunately the groundwork has
been laid: Evolutionary Game Theory studies behavior of populations of
agents repeatedly [engaging in strategic
interactions](http://en.wikipedia.org/wiki/Evolutionary_game_theory).
Behavior changes in populations are driven either by natural selection
via differences in birth and death rates, or by the application of
myopic decision rules by individual agents. In the article Radically
Simple IT by [Dr. David
Upton](http://hbr.org/2008/03/radically-simple-it/ar/1), a deployment
model is described in which all existing functionality of the system
exists in at least two states—the original state and a modified state.
Inspired by the design of fault-tolerant systems that always avoid a
single point of failure by running independent systems in parallel, new
features can be added as optional modules in parallel with the existing
system. When new features are judged to be operationally complete and
correct, the system can "fail over" the old modules to the new, and if a
problem is then later detected, the system can "fail back" to the
original. By constantly running all versions in parallel, some version
of the correct answer is always available, while some version of a new
and better answer may also be available. When Shinsei Bank in Tokyo,
Japan, implemented this approach, it achieved its operational milestones
4x faster than conventional deployment methods would have allowed, and
did so at 1/9th the cost. And by designing its system for maximum
adaptability (rather than maximum initial functionality), the bank was
able to adapt to customer needs and expectations so successfully that it
was recognized as the #1 bank for loyalty and satisfaction two years in
a row. When [this same
approach was
implemented](http://www.redhat.com/summit/emirates/index.html) by The
Emirates Group (coached by the experience of Shinsei) the results were
even more impressive.
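A minimal sketch of that deployment pattern may help. The Python below is my own illustration under stated assumptions, not Upton's design nor Shinsei's actual system: one feature is kept alive in two versions at once, with explicit fail-over and fail-back between them. The class name and the interest-rate example are hypothetical.

```python
# Sketch of the "run old and new in parallel, fail over, fail back" pattern.
# Illustrative only; names and the example feature are hypothetical.

from typing import Any, Callable

class ParallelFeature:
    """Keeps the original and the modified implementation of one feature running
    side by side; answers come from whichever path is currently live."""

    def __init__(self, original: Callable[..., Any], modified: Callable[..., Any]):
        self.original = original
        self.modified = modified
        self.live = "original"  # start on the proven path

    def call(self, *args, **kwargs) -> Any:
        old_answer = self.original(*args, **kwargs)
        try:
            new_answer = self.modified(*args, **kwargs)
        except Exception:
            new_answer = None  # the new module may be incomplete or faulty
        # Some version of the correct answer is always available.
        if self.live == "modified" and new_answer is not None:
            return new_answer
        return old_answer

    def fail_over(self) -> None:
        """Promote the new module once it is judged operationally complete and correct."""
        self.live = "modified"

    def fail_back(self) -> None:
        """Demote the new module if a problem is detected later."""
        self.live = "original"

# Usage: wrap an existing rule and its proposed replacement.
interest = ParallelFeature(lambda balance: balance * 0.010,   # original rate rule
                           lambda balance: balance * 0.012)   # candidate replacement
print(interest.call(1000))  # served by the original path
interest.fail_over()
print(interest.call(1000))  # served by the new path
interest.fail_back()
print(interest.call(1000))  # back to the original
```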
The combination of low-cost forking (which makes new software
generations very rich and diverse) and operational models that can
easily select the fittest code in a given generation creates a
super-charged Deming cycle of sustainable innovation, quality, and
value. But to make this cycle effective, the code itself must be
susceptible to innovation. Black boxes of proprietary software define
the point at which population-driven innovation stops. To fully realize
the benefits of the population dynamics of open source innovation, the
source code must be available at every level of the system.
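As a toy illustration of that dynamic (only an illustration; the Gaussian "quality" model and every parameter are my assumptions, not results from the literature), the short Python simulation below compares a world in which each generation can afford only one experimental fork with one in which forking is cheap enough to try fifty variants and keep the fittest.

```python
# Toy simulation: cheap forking means more variants per generation, and
# operational selection keeps whichever variant performs best.
# Illustrative assumptions only.

import random

random.seed(42)  # make the toy run repeatable

def evolve(forks_per_generation: int, generations: int = 20) -> float:
    """Return final quality when each generation spawns `forks_per_generation`
    variants of the current best code and keeps the best performer."""
    quality = 1.0
    for _ in range(generations):
        variants = [quality + random.gauss(0, 0.1) for _ in range(forks_per_generation)]
        quality = max(quality, *variants)  # operational selection keeps the fittest
    return quality

print("expensive forking, 1 variant per generation: ", round(evolve(1), 2))
print("cheap forking, 50 variants per generation:   ", round(evolve(50), 2))
```

With more variants per generation, the selected quality compounds much faster, which is the super-charged Deming cycle in miniature; and because selection can only act on what it can inspect, the cycle stops wherever the source stops being open.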
We cannot solve problems by using the same thinking we used when we
created them.—Albert Einstein
To summarize this rather far-reaching thesis, the world of Enterprise IT
has been suffering under the delusion that if we throw enough money at
enough black boxes, one of them will surely solve the problems that we
were originally tasked with solving. Even if true, the world changes at
such a rate that solving a problem once relevant in the past is likely
no longer relevant in the future, especially if that problem is merely a
symptom of a deeper problem. Recent results in economic theory teach
that price and output analysis tends to reveal symptoms but rarely
uncovers real, sustainable solutions. By contrast, an economic
understanding of governance, transactions, and mutual benefit can not
only inform sustainable solutions but also induce ongoing, sustainable
innovation, thereby creating ever-increasing business or social value. Evolutionary
Game Theory provides a framework for national-level and enterprise-level
analysis of a shift from proprietary applications to cloud computing.
Factors such as financial capital, knowledge capital, business value
potential, and trust capital influence both the processes of natural
selection across populations as well as the myopic decisions of agents
within populations. Open source software enables vital mechanisms
prohibited by proprietary software, fundamentally changing the
evolutionary rate and quality of successive generations of (cloud)
applications. There is perhaps no easier or faster way to add more
value to enterprise, national, or global accounts than to embrace open
source cloud computing and evolve beyond the problems of proprietary
applications and platforms. All it requires is that you do something—as
a member of the open source community—no matter how insignificant it may
seem.
The Open Voices ebook series highlights ways open source tools and open
source values can change the world. Read more at
<http://opensource.com/resources/ebooks>.