I thought it'd be worth expanding on some of the notes I took in the margins as I was reading the introduction. I think it'll be a good way of exposing some of my preexisting biases[1] before I start going through the book.
Throughout much of the introduction, Fuller writes of a "world energy network" that would connect the energy grids of the world into a single unified system.
The world energy network grid will be responsible for the swift disappearance of planet Earth's 150 different nationalities. We now have 150 supreme admirals, all trying to command the same ship to go in different directions, with the result that the ship is going around in circles--getting nowhere. The 150 nations act as 150 blood clots in blocking the flow of recirculating metals and other traffic essential to realization of the design science revolution.
There's an aspect of this notion that feels correct: as individuals and organizations come to depend on each other more, they become closer across a variety of dimensions. The suggestion here is that creating a unified utility system would cause the world's nations to disappear, perhaps transforming us into a single world sans borders and ineffective state-level governments.
Maybe such a vision is possible. I expect that the remainder of the book will expand on this vision, so I'll engage with the specific arguments as I encounter them. But at this point the questions that emerge for me are all around the common problems of centralization. What is the effect of centralizing the world's utilities, governance, social frameworks, and spirit? I realize that Fuller isn't directly arguing for the social or spiritual scope of centralization, but the vision of a single unified world certainly evokes such questions.
Neither the great political and financial power structures of the world, nor the specialization-blinded professionals, nor the population in general realize that sum-totally the omni-engineering-integratable, invisible revolution in the metallurgical, chemical, and electronic arts now makes it possible to do so much more with ever fewer pounds and volumes of material, ergs of energy, and seconds of time per given technological function that it is now highly feasible to take care of everybody on Earth at a "higher standard of living than any have ever known."
Indeed the world's standard of living, on the whole, is increasing, and technological innovation certainly plays quite a large role in that. Our technology abstracts away tasks we once had to spend hours on and removes obstacles we once toiled over. We no longer have to farm our own food, pump and carry our own water, or worry about the disposal of our feces. We no longer have to prepare our own food, wash our own dishes, or maintain our own homes. Services, and the technology that enables them, allow these sorts of endeavors to become the concern of others.
It's worth asking: why is "standard of living" a metric worth measuring and valuing? What do we lose by streamlining our lives, removing tasks and delegating them to others? What is the result of living in such a way? Is framing such a metric as "advancement" accurate? What metric would measure "progress" most holistically?
Following the standard-of-living observation, Fuller writes:
It no longer has to be you or me. Selfishness is unnecessary and henceforth unrationalizable as mandated by survival. War is obsolete.
The phrase "war is obsolete" feels to me to rest on a rather surface-level interpretation of war. I understand Fuller to argue: the material limitations of the world force selfishness, which forces war. Without material limitations, we don't need to be selfish, and therefore don't need war.
He overlooks, I think, something much more essential about our psychology. Selfishness is an inseparable part of us. As is war. It's not something that can just be overcome by some new rationality and technology, or the search for a Universal Truth.
In The Moral Equivalent of War[2], psychologist William James[3] argues:
The earlier men were hunting men, and to hunt a neighboring tribe, kill the males, loot the village and possess the females, was the most profitable, as well as the most exciting, way of living. Thus were the more martial tribes selected, and in chiefs and peoples a pure pugnacity and love of glory came to mingle with the more fundamental appetite for plunder.
We inherit the warlike type; and for most of the capacities of heroism that the human race is full of we have to thank this cruel history. Dead men tell no tales, and if there were any tribes of other type than this they have left no survivors. Our ancestors have bred pugnacity into our bone and marrow, and thousands of years of peace won't breed it out of us. The popular imagination fairly fattens on the thought of wars. Let public opinion once reach a certain fighting pitch, and no ruler can withstand it.
I certainly find James' argument (and his resulting suggestions[4]) compelling, at least in a purely observational sense. Looking around, we're surrounded by the artifacts of our own psychology; our history is perhaps the best possible insight into who we really are[5]. Selfishness is not something that can simply be shrugged off once we've eliminated the material conditions that seem to make it necessary; it's ingrained within us in a far more fundamental way. This is what we'll really need to contend with in an ever-more dematerialized, abstract-globalistic, and service-centric world: the immutable reality of our own psychology and how it interacts with the completely novel ways of living we construct for ourselves. That underlying psychology, I think, should dictate the design[6] of those future ways of living.
Asking a computer "What shall I do?" is useless. You can get an informative answer, however, if you program your question into the computer as follows: "Under the following set of operative circumstances, each having a positive or negative value in an only-one-value system, which of the only two possible results will be obtained if I do so and so? And by how much?"
Maybe there is value to modeling the world on a computer. But until the world and the computer become the same, the model will always be an abstraction that rests on human assumptions. In this way the pure rationality of the computer is always limited by the irrational minds that program it.
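To make this concrete, here's a toy sketch in Python of the kind of question Fuller describes--"operative circumstances, each having a positive or negative value in an only-one-value system." The circumstance names and weights below are entirely made up, which is exactly the point: the computer only ever evaluates the assumptions a human fed it.

```
# A toy model of Fuller's "only-one-value system" question.
# Every circumstance name and weight here is a human assumption,
# not a fact about the world -- which is the limitation at issue.

# Operative circumstances, each assigned a positive or negative
# value by whoever programs the model.
circumstances = {
    "shared_energy_grid": +3,     # assumed benefit
    "loss_of_local_control": -2,  # assumed cost
    "construction_cost": -2,      # assumed cost
}

def decide(circumstances):
    """Return one of the 'only two possible results', and by how much."""
    score = sum(circumstances.values())
    verdict = "do it" if score > 0 else "don't do it"
    return verdict, abs(score)

verdict, margin = decide(circumstances)
print(f"{verdict}, by {margin}")  # prints: don't do it, by 1
```

Flip a single weight and the "rational" answer flips with it. The arithmetic is rational; the numbers are ours.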
Obviously this is not the sort of sentiment that I find particularly inspiring. Rationality as the sort of God-above-all we strive toward feels limited. We're not rational creatures, and if we are really made in the image of God then there's an aspect of irrationality to God too. Or maybe that's just Satan sneaking his way into us and poisoning the metaphorical well of rationality...
Computers will be used more and more to produce opinion-obsoleting answers to progressive crises-provoked questions about which way world society as a whole will enduringly profit the most. Computers will correct misinformed and disadvantaged conditioned reflexes, not only of the few officials who have heretofore blocked comprehensive technoeconomic and political evolutionary advancement, but also of the vast majorities of heretofore-ignorant total humanity.
Again, "misinformed and disadvantaged conditioned reflexes" suggests that there's something to be overcome here, that there's a level of rationality to attain and in the process that we will shed our primitive irrational selves. The computer will take us there. Maybe it will, but I don't think there is quite the rational place Fuller thinks it will be.
There also seems to be an assumption that there will always be a correct answer given a set of circumstances, that computers will "correct" us. There are plenty of problems in the world--so-called wicked problems[7]--for which there are no answers. The problem space is simply too complex to narrow down to simple choices, and even if we could, the computer's value would be questionable given that the real work was done in narrowing down the problem space to begin with.
We can sense that only God is perfect--the exact truth. We can come even nearer to God by progressively eliminating residual errors. The nearest each of us can come to God is by loving the truth. If we don't program the computer truthfully with all the truth and nothing but the truth, we won't get the answers that allow us to "make it."
I can appreciate this sentiment, though as I said it seems to frame God in a rather limited way. The Truth--Godly and Universal--feels inhuman. As far as I can see, the path to spiritual[8] fulfillment, whatever that may be, is most certainly not a rational one. Thinking of God as "the exact truth" seems to be quite a specific and distinctly Western-Enlightenment way of looking at it. But since this notion seems to be perhaps the most basic foundation on which Fuller builds his vision, I see the value in adopting it.
The computer will show that 70 percent of all jobs in America and probably an equivalently high percentage of the jobs in other Western private-enterprise countries are preoccupied with work that is not producing any wealth or life support--inspectors of inspectors, reunderwriters of insurance reinsurers, Obnoxico promoters, spies and counterspies, military personnel, gunmakers, etc.
I agree: the vast majority of the work humanity is engaged in feels derivative and non-essential. It's quite sad to think of the massive amount of skill being put to use on problems that aren't really problems.
But technology's role in this is perhaps worth considering. It might be argued that the computer has actually increased the amount of derivative work to be done. Computers add additional layers of complexity--the essential work is supported by computer systems, which must be built and maintained. And those computer systems are supported by their own computer systems, which also must be built and maintained. On and on, adding layers of complexity and removing us further and further from the essential work. The "inspectors of inspectors" Fuller writes of have become "engineers for engineers." More technology blurs the line between essential and derivative work; each supports the other, just with varying degrees of focus.
Fuller includes his reply to a boy who asked him if he was a "doer" or a "thinker":
The things to do are: the things that need doing: that you see need to be done, and that no one else seems to see need to be done. Then you will conceive your own way of doing that which needs to be done--that no one else has told you to do or how to do it. This will bring out the real you that often gets buried inside a character that has acquired a superficial array of behaviors induced or imposed by others on the individual.
I think this idea is very powerful. The individualistic "do what you see needs to be done" in your own way intuitively feels to me like a good approach to the problem of working in the world. In fact "do what you need to do" is one of my and my partner's principles[9].
[...] if anything, the little, penniless, unknown individual, with dependent wife and child, might be able to do effectively on behalf of humanity that would be inherently impossible for great nations or great corporate enterprises to do.
An inspiring thought. The problem, though, is that acting "on behalf of humanity" is a particularly lofty goal that most aren't really well equipped[10] to take on. Indeed, "the penniless, unknown individual" could, in terms of net benefit, be most effective in helping his own condition and that of his family before addressing the conditions of humanity. Acting locally[11] is perhaps the most effective way we can work. When we can see what's around us and change it, we have a tangible effect on our environment. Thinking abstractly about the problems of humanity and working toward solving those is a different matter entirely. There's probably a medium in there that makes sense too, but getting distracted by global-scale problems feels like just that--a distraction from what really matters to those in our direct community.
Fuller also seems to write from the perspective of an engineer, in terms of systems and their design. While I find this mode of thought reasonable, I also recognize that for many people it does not make much sense. There are more pressing problems they'd rather engage with than global-scale, humanity-level challenges. I don't think the assumption that everyone wants to work "on behalf of humanity" always holds true, or that simply giving people the tools to work at that scale would change that.
Whether it is to be Utopia or Oblivion will be a touch-and-go relay race right up to the final moment. The race is between a better-informed, hopefully inspired young world versus a running-scared, misinformedly brain-conditioned, older world.
A constant battle. The older world always looks irrationally conservative and traditional; the younger world always looks foolhardy and rash. But the young are not necessarily "better-informed," and the old are not necessarily "brain-conditioned." It's a natural balance[12] between the old and the new, between the known and the unknown. People become more conservative--some might say wiser--as they age, but they can no longer see beyond the world they know, and the world changes. So the young forge the path onward, but not too radically; they're always mediated by the older generation. These two forces together guide us in the right direction. The young alone will not lead us to utopia, since a part of utopia is held in the wisdom of the old.
Last updated Thu Nov 25 2021 in Berkeley, CA
2: https://www.uky.edu/~eushe2/Pajares/moral.html
3: https://en.wikipedia.org/wiki/William_James
4: /thought/binding-the-nation.gmi
7: https://en.wikipedia.org/wiki/Wicked_problem
10: /thought/global-impact.gmi