
Chapter 10: Houyhnhnms vs Martians [Fare, 2016]

2016-06-11 :: Urbit, Martian, Impedance Mismatch, Orthogonal Persistence, Persistence, Meta, In The Large, Autistic

⇖ index

What did Ngnghm (which I pronounce “Ann”) think of Urbit? Some elements in Ann’s descriptions of Houyhnhnm computing (which I pronounce “Hunam computing”) were reminiscent of the famous Martian system software stack Urbit: both computing worlds were alien to Human Computing; both had Orthogonal Persistence; and both relied heavily on pure deterministic computations to minimize the amount of data to log in the persistence journal (as contrasted, for instance, with the amount of data to manipulate to compute and display answers to end-users). What else did Houyhnhnm computing have in common with Martian software? How did it crucially differ? In what ways did each resemble or differ from Human systems? Ann took a long look at Urbit; while she concluded that the three approaches were indeed quite distinct, she also helped me identify the principles underlying their mutual differences and commonalities.

Urbit: The Martian Model

Martians have developed a peculiar operating system, Urbit (docs), the Terran port of which seems to have been semi-usable since 2015. At its formal base is a pure functional applicative virtual machine, called Nock. On top of it sits a pure functional applicative programming language, called Hoon, with an unusually terse syntax and a very barebones static type inferencer. On top of that sits an Operating System, called Arvo, which runs on each server of the network by applying the current state of the system to the next event received. The networking layer Ames implements a secure P2P protocol, while the underlying C runtime system, u3, makes it all run on top of a regular Linux machine.

The data model of Nock is that everything is a noun, which can be either a non-negative integer or a pair of nouns. Since the language is pure and applicative (and otherwise without cycle-creating primitives), there can be no cycle in this binary tree of integers. Since the only equality test is extensional, identical subtrees can be merged and the notional tree can be implemented as a Directed Acyclic Graph (DAG).
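To make this concrete, here is a minimal sketch of the noun model in Haskell (the type and names are mine, for illustration; Urbit of course defines nouns in its own terms):

```haskell
-- A noun is either an atom (a non-negative integer)
-- or a cell (an ordered pair of nouns).
data Noun = Atom Integer | Cell Noun Noun
  deriving (Eq, Show)

-- Since equality is purely structural, identical subtrees may be
-- shared, turning the notional binary tree into a DAG.
example :: Noun
example = let sub = Cell (Atom 1) (Atom 2)
          in Cell sub sub  -- one shared subtree, two notional occurrences
```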

On top of this data model, the execution model of Nock is to interpret some of these trees as programs in a variant of combinatory logic, with additional primitives for literals, Peano integers, structural equality, and tree access indexed by integers. The inefficiency of a naive implementation would be hopeless. However, just as the tree can be optimized into a DAG, the evaluation can be optimized by recognizing that some programs implement known functions, then using a special fast implementation of an equivalent program (which Martians call a jet, by contrast with JIT) rather than interpreting the original programs by following the definitional rules. Recognizing such programs in general could be hard, but in practice Urbit only needs to recognize specific instances of such programs: those generated by Hoon and/or present in the standard library.
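The jet mechanism can be pictured as a lookup before interpretation; a hedged sketch, reusing the Noun type above (the table and function names are hypothetical, not u3’s actual interface):

```haskell
type Formula = Noun

-- Before interpreting a formula by the definitional rules, check whether
-- it is a recognized program with a fast native equivalent (a "jet").
evalNock :: Formula -> Noun -> Noun
evalNock formula subject =
  case lookup formula knownJets of
    Just fast -> fast subject              -- run the optimized implementation
    Nothing   -> interpret formula subject -- fall back to the definitional rules

-- Pairs of recognized formulas and their native implementations,
-- e.g. decrement, addition, ... as generated by Hoon's standard library.
knownJets :: [(Formula, Noun -> Noun)]
knownJets = []

-- The naive, definitional interpreter, elided in this sketch.
interpret :: Formula -> Noun -> Noun
interpret = undefined
```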

Therefore, it is the C runtime system u3 that specifies the operational semantics of programs, whereas Nock only specifies their denotational semantics as arbitrary recursive functions. By recognizing and efficiently implementing specific Nock programs and subprograms, u3, like any efficient implementation of the JVM or of any other standardized virtual machine, can decompile VM programs (in this case Nock programs) into an AST and recompile them into machine code using the usual compilation techniques. At that point, like every VM, Nock is just a standardized though extremely awkward representation of programming language semantics (usually all the more awkward since such VM standards are often decided early on, at the point when the least is known about what makes a good representation). Where Urbit distinguishes itself from other VM-based systems, however, is that the semantics of its virtual machine Nock is forever fixed, totally defined, deterministic, and therefore future-proof.

Hoon is a pure functional applicative programming language. Its syntax is terse: the core syntax is specified using non-alphanumeric characters and digraphs thereof (or equivalent keyword names). The syntax allows writing expressions as one-liners using parentheses, but it is idiomatic to break functions across multiple lines where indentation is meaningful; as contrasted with other indentation-sensitive languages, however, the indentation rules are cleverly designed to prevent runaway indentation to the right as you nest expressions, by deindenting the last, tail position of a function call. Whereas Nock is trivially typed (some would say untyped or dynamically typed), Hoon has a static type system, though quite a primitive one, with a type inferencer that requires more type hints than a language with e.g. Hindley-Milner type inference (such as ML), yet fewer than one without type inference (such as Java).

Arvo is the operating system of Urbit. The Urbit model is that the state of the system (a noun) encodes a function that will be applied to the next communication event received by the system. If the processing of the event terminates, then the event is transactionally appended to the event journal, making it persistent. The value returned specifies the next state of the system and any messages to be sent to the world. Arvo is just the initial state of the system, a universal function that, depending on the next event, may do anything, but in particular provides a standard library including anything from basic arithmetic to virtualization of the entire system. The core of Arvo is typically preserved when processing a message, even as the state of the system changes to reflect the computations controlled by the user; as long as this core keeps running (as it should), Arvo remains the operating system of Urbit; but users who insist may upgrade and replace Arvo with a new version, or with another system of their own creation, if they dare.
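The shape of this event loop can be sketched as a pure transition function plus a replay fold; a hedged illustration in Haskell (the types are placeholders of my own, not Urbit’s interfaces):

```haskell
-- The whole OS is a function from (state, event) to (new state, effects).
data SystemState = SystemState  -- opaque here; in Urbit, a noun
data Event       = Event        -- console input, incoming network message, ...
data Effect      = Effect       -- console output, outgoing network message, ...

step :: SystemState -> Event -> (SystemState, [Effect])
step state _event = (state, [])  -- placeholder transition

-- Orthogonal persistence: the current state is, equivalently, the fold
-- of the pure transition function over the journal of all past events.
replay :: SystemState -> [Event] -> SystemState
replay = foldl (\s e -> fst (step s e))
```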

The events fed into Urbit are generated by the C runtime system u3, to represent console input, incoming network messages, etc. Conversely, the messages generated by Urbit are translated by the implementation into console output, outgoing network messages, etc. If processing an event results in an error, if it is interrupted by the impatient user, or if it times out after a minute (for network messages), then u3 just drops the event and doesn’t include it in the event journal. (Of course, if an adversarial network message can time out an Urbit machine for a minute or even a second, that’s probably already a denial-of-service vulnerability; on the other hand, if the owner, being remote, can’t get his long-running computations going, that’s probably another problem.) A stack trace is generated by u3 when an error occurs, and injected into Arvo as an event in place of the triggering event, which is not persisted. Users can at runtime toggle a flag in the interactive shell Dojo to control whether these stack traces are displayed.

The networking layer Ames is conceptually a global broadcast network, where network messages are conceptually visible to all other nodes. However, a message is typically addressed to a specific node, using a public key for which only that node has the private key; other nodes will drop messages they cannot decrypt. The C runtime therefore optimizes the sending of a message by routing it directly to its intended recipient, as registered on the network. A node in the network is identified by its address, or plot, which can be 8-bit (“galaxy”), 16-bit (“star”), 32-bit (“planet”), 64-bit (“moon”) or 128-bit (“comet”). A comet has as its 128-bit address the cryptographic digest of its public key, making it self-authenticating. A moon has its public key signed by the corresponding planet; a planet has its public key signed by the corresponding star; a star has its public key signed by the corresponding galaxy; a galaxy has its public key included in Arvo itself, in a hierarchical system rooted in whoever manages the base Operating System. All communications are thus authenticated by construction. Galaxies, stars, planets and moons are scarce entities, thus constituting “digital real estate” (hence the name plot), which the Urbit curators intend to sell to fund technological development.

One of Urbit’s innovations is a mapping from octets to pronounceable three-letter syllables, so that you can pronounce 8-, 16-, 32-, 64- or 128-bit addresses, making them memorable, though not meaningful. So that names with the same address prefix do not sound the same, a simple bijective mangling function is applied to an address before its pronunciation is extracted. This deemphasizes the signing authority behind an identity: the reputation of a person shouldn’t too easily wash onto another just because they used the same registrar; and it’s easier to avoid a “hash collision” in people’s minds by giving vaguely related but distinct identities notably different names. This constitutes an interesting take on Zooko’s Triangle. Actually, care was taken so that the syllables would not be too meaningful (and especially not offensive) in any human language the author knew of. Non-alphanumeric characters are also given three-letter syllable names, though this time the names were chosen with simple mnemonic rules to make them easy to remember (for instance, “wut” for the question mark “?”); this makes it easier to read and learn digraphs (though you might also name them after the corresponding keywords).
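A hedged sketch of the octet-to-syllable idea, with only the first few syllables filled in and the bijective mangling step omitted (the full tables and the scrambling are Urbit-specific details I won’t reproduce here):

```haskell
import Data.Bits (shiftR, (.&.))

-- Each byte of an address maps to a three-letter syllable; the first
-- few entries below match Urbit's published tables, the rest are elided.
prefixes, suffixes :: [String]
prefixes = ["doz", "mar", "bin", "wan"] ++ replicate 252 "..."
suffixes = ["zod", "nec", "bud", "wes"] ++ replicate 252 "..."

-- Pronounce a 16-bit "star" address as a prefix syllable plus a suffix
-- syllable (the real system first applies the bijective mangling function).
starName :: Int -> String
starName addr = prefixes !! ((addr `shiftR` 8) .&. 0xff)
             ++ suffixes !! (addr .&. 0xff)
```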

Houyhnhnms vs Martians

Most importantly, the Martians’ Urbit is actually available for humans to experiment with (as of May 2016, its authors describe its status as post-alpha and pre-beta). By contrast, no implementation of a Houyhnhnm computing system is available to humans (at the same date), though the ideas may be older. This alone makes Urbit superior in one, non-negligible, way. Yet we will hereafter examine it in all the other ways.

Superficially, both Martian and Houyhnhnm Computing provide Orthogonal Persistence. But the way they do it is very different. Martians provide a single mechanism for persistence at a very low level of their system, separately on each virtual machine in their network. But Houyhnhnms recognize that there is no one size fits all in matters of persistence: for performance reasons, the highest level of abstraction is desired for the persistence journal; at the same time, transient or loosely-persisted caches are useful for extra indices; and for robustness, a number of replicas are required, with a continuum of potential synchronization policies. Therefore, Houyhnhnms provide a general framework for first-class computations, based on which users may select what to persist under what modalities.
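To illustrate that continuum, here is a hypothetical per-computation policy type; none of these names come from an actual Houyhnhnm (or Urbit) interface:

```haskell
-- How aggressively a given piece of state is journaled.
data SyncPolicy
  = Transient      -- caches, extra indices: rebuildable, never journaled
  | Lazy Int       -- flush at most every n seconds, accepting some loss
  | Transactional  -- journal before acknowledging, as Urbit does for events
  deriving Show

-- A persistence choice made per computation, not per virtual machine.
data PersistencePolicy = PersistencePolicy
  { replicas :: Int         -- robustness: how many copies, on which hosts
  , sync     :: SyncPolicy  -- latency vs durability tradeoff
  } deriving Show
```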

One could imagine ways that Urbit could be modified so that its persistence policies would become configurable. For instance, the underlying C runtime u3 could be sensitive to special side-effects, such as messages sent to a magic comet, and modify its evaluation and persistence strategies based on the specified configuration. That would mean, however, that most of the interesting work would actually happen inside u3, and not over Nock. What would Nock’s purpose then be? It could remain as an awkward but standardized and future-proof way to represent code and data. However, unless great care is taken, using formal proofs and/or extensive testing, to ensure that the Nock code generated indeed implements the actual computations, and is in turn faithfully implemented by the underlying system, then at the first bug introduced or “shortcut” taken, the entire Nock VM becomes a sham.

Now, assuming Nock isn’t a complete sham, it remains an obligatory intermediate representation between the computations desired by users and the machine implementations provided by the system. Because Nock is never exactly what the user wants or what the machine provides, this intermediate representation always introduces an impedance mismatch, which is all the more costly the further the desired computing interactions are from the Nock model.

In an extreme case, one could imagine that u3 would be configured using a Houyhnhnm first-class computation framework. Users would develop their computations at the level of abstraction they desired; and they would dynamically configure u3 to use the desired tower of first-class implementations. At this point, any encoding in terms of Nock could be altogether short-circuited at runtime; and any impedance mismatch introduced by Nock is thus worked around. But then, Nock is purely a hurdle and not at all an asset: all the semantics that users care about is expressed in the Houyhnhnm Computing system; any Nock code generated is just for show, obfuscating the real high-level or low-level computations without bringing anything; and Nock is either a sham, or an expensive tax on the computation framework.

Future-proofing the wrong thing

Both Martians and Houyhnhnms rely heavily on pure deterministic computations to minimize the amount of data to log in the persistence journal (as contrasted, for instance, with the amount of data to manipulate to compute and display answers to end-users). But Martians rely on Nock, and to a lesser extent Hoon, Arvo, Ames, etc., having a constant deterministic semantics, cast in stone for all users at all times; Houyhnhnms frown at the notion: they consider that constraint as unnecessary as it is onerous. Martians justify the constraint as making it possible to have robust, future-proof persistence. Houyhnhnms contend that this constant semantics doesn’t actually make for robust persistence, and that on the contrary, it prevents future improvements and fixes while encouraging bad practice. Also, Houyhnhnms claim that requiring the function to be the same for everyone introduces an extraordinary coordination problem where none existed, without helping with any of the real coordination problems that users actually have.

A global consensus on deterministic computation semantics only matters if you want to replay and verify other random people’s computations, i.e. for crypto-currencies with “smart contracts” like Ethereum; but that’s not at all what Urbit is about, and such computation replay in a hostile environment indeed has issues of its own (such as misincentives, or resource abuse) that Urbit doesn’t even try to address. If you only want to replay your own computations (or those of friends), you don’t need a global consensus on a deterministic function; you only need to know what you’re talking about, and write it down.

Houyhnhnms always consider first the interactions that are supposed to be supported by computing activities. In the case of Persistence, Houyhnhnms are each interested in persisting their own code and data. There is no global entity interested in simultaneously looking at the persistence logs of everyone; there is no “collective” will, no magically coordinated knowledge. Each individual Houyhnhnm wants to ensure the persistence of their own data and that data only, or of data entrusted to them personally; and even if they wanted more, that’s both the only thing they must do and the only thing they can do. Now, they each want the most adequate technology for their purpose, taking costs and benefits into account. If they somehow had to coordinate to find a common solution, the coordination would be extraordinarily costly and would take a lot of time; they would have to settle on some old technology devised when people knew least, and could never agree on improvements. And if the technology were frozen in time at the beginning, as in Urbit, nothing short of retroactive agreement using a time machine could improve it. If on the contrary each individual is allowed to choose their own persistence solution, then those who can devise improved solutions can use them without having to convince anyone; they can also compete to have their improvements adopted, whereas users compete not to be left behind, until they all adopt the improvements that make sense. In the end, in matters of persistence as of build systems, allowing for divergence creates an incentive towards convergence, reaching better solutions through competition.

Urbit incorrectly formulates the problem as a social problem requiring a central solution, when it is actually a technical problem for which a decentralized social arrangement is much better. Persistence doesn’t require anyone to agree with other people on a low-level protocol; it only requires each person to maintain compatibility with their own previous data. To decode the data they persisted, users don’t need one deterministic function fixed forever, much less one they agree on with everyone else: what they need is to remember the old code and data, to be able to express the new code (generator) in terms of the old one (to upgrade the code), and to be able to interpret the old data schema in terms of the new one (to upgrade the data). Indeed, even the Urbit whitepaper acknowledges that, as far as data above the provided abstraction matters, such schema changes happen (see section 2.0.3 Arvo).
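A minimal sketch of that upgrade discipline, assuming a hypothetical record that gains a field between two schema versions:

```haskell
-- Old and new schema versions, remembered alongside the data.
data PersonV1 = PersonV1 { nameV1 :: String }
data PersonV2 = PersonV2 { nameV2 :: String, email :: Maybe String }

-- The only thing upgrade requires: a function from the old schema
-- to the new one, written once by whoever owns the data. No global
-- consensus on a frozen VM semantics is needed.
migrate :: PersonV1 -> PersonV2
migrate (PersonV1 n) = PersonV2 { nameV2 = n, email = Nothing }
```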

Where Martians get it just as wrong as Humans is in believing that solving one issue (e.g. persistence) at the system level is enough. But onerous local “persistence” of low-level data can actually be counter-productive when what users require is distributed persistence of high-level data at some level of service involving enough replicas yet low enough latency: local persistence costs a lot and, while bringing no actual benefit to distributed persistence, may cause a large increase in latency. The entire point of computing is to support user programs, and solving an issue for some underlying system at a lower level of abstraction without solving it at the higher level that the user cares about is actually no solution at all. It can sometimes be part of a solution, but only if (1) the desired property can also be expressed in a composable way so that higher layers of software may benefit from it, and (2) the lower layers don’t impose specific policy choices that will be detrimental to the higher layers of software. And this is what Houyhnhnm systems uniquely enable, and what Human and Martian systems can’t express because it goes against their paradigm.

Neglect for the Meta-level

The mistake common to Martians and Humans is to neglect the importance of metaprogramming.

For Humans, this is often out of ignorance and fear of the unknown: Humans are not usually trained in metaprogramming; they don’t understand its importance, or its proper usage; they don’t know how to define and use Domain Specific Languages (DSLs). Though their job consists in building machines, they “enjoy” the job security that comes from breaking machines that would replace their current jobs: mechanized modernity for me, protectionist Luddism for thee.

For Martians, unhappily, there is a conscious decision to eschew metaprogramming. One recent Urbit presentation explicitly declares that DSLs are considered harmful; the rationale given is that the base programming language should impose a low cognitive load on entry-level programmers. (There again, the very same Urbit authors who claim their programmers shouldn’t do metaprogramming themselves spend most of their time at the meta-level: base-level for thee, meta-level for me.) To Martians, making the system deliberately simpler and less sophisticated makes it easier for people to understand and adopt it. Martians with Hoon commit the same error that Humans systematically committed with COBOL, or to a lesser degree with Java: they designed languages that superficially allow any random layman (for COBOL), professional (for Java) or enthusiast (for Hoon) to understand each of the steps of the program, by making those steps very simple, minute and detailed.

But the price of this clarity at the micro-level is to make programs harder to follow at the macro-level. The abstractions that are denied expression are precisely those that would allow the concise and precise expression of the ideas behind the actual high-level problem at hand. Every issue therefore becomes mired in a mass of needless concerns, extraneous details and administrative overhead that simultaneously slows programmers down with make-work and blurs their understanding of the difficult high-level issues that matter to the user. The concepts that underlie these issues cannot be expressed explicitly, yet programmers need to confront them and know them implicitly to grasp, develop and debug the high-level program. Instead of having a DSL that automatically handles the high-level concepts, programmers have to manually compile and decompile them as “design patterns”; they must manually track and enforce consistency in the manual compilation, and restore it after every change; there are more, not fewer, things to know: both the DSL and its current manual compilation strategy; and there are more things to keep in mind: both the abstract program and the details of its concrete representation. Therefore, the rejection of abstraction in general, and of metaprogramming in particular, prevents unimpeded clear thinking where it is the most sorely needed; it makes the easy harder and the hard nearly impossible, all for the benefit of giving random neophytes a false sense of comfort.

The same mistake goes for all languages that wholly reject syntactic abstraction, or provide a version of it that is very awkward (like C++ templates or Java compile-time annotations) and/or very limited (like C macros). It also applies to all programmers and coding styles that frown upon syntactic abstraction (maybe after being bitten by bad implementations such as those above). If you don’t build DSLs, your general purpose language has all the downsides of Turing-equivalence with none of the upsides.
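For contrast, a toy embedded DSL, of the kind any functional language (Haskell here) makes cheap; the domain is deliberately trivial, the point being that the abstraction is expressed once rather than hand-compiled at every use site:

```haskell
-- A tiny arithmetic DSL: the domain concept is explicit and first-class.
data Expr = Lit Int | Add Expr Expr | Mul Expr Expr

eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

-- The same Expr can also be pretty-printed, optimized or compiled;
-- with the DSL denied, each of those becomes a manual "design pattern".
example :: Int
example = eval (Add (Lit 1) (Mul (Lit 2) (Lit 3)))  -- 7
```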

Note however that even though Urbit officially rejects abstraction, Hoon is at its core a functional programming language. Therefore, unlike Humans stuck with COBOL or Java, Martian programmers using Hoon can, if they so choose, leverage this core to develop their own set of high-level composable abstractions; and for that they can reuse, or take inspiration from, all the work done in more advanced functional languages such as Haskell or Lisp. But of course, if that’s the route chosen for further development, then in the end the programmers might do better to directly adopt Haskell or Lisp and make it persistent, rather than use Urbit. If the Urbit persistence model is exactly what they need, they could implement a Hoon backend for their favorite language; if not, they can probably more easily reimplement persistence on their platform, based on the Urbit experience, than try to evolve Urbit to suit their needs.

Finally, in their common rejection of metaprogramming, both the Human and Martian computing approaches lack first-class notions of meta-levels at runtime. Therefore, all their software is built and distributed as a fixed semantic tower on top of a provided common virtual machine. It’s just that the virtual machine is very different between the Humans and Martians: the Martian VM is oriented towards persistence and determinism, the Human VM is just a low-level portability layer for families of cheap human hardware. As we explained in our chapter 4 and subsequent chapters, this makes for rigid, brittle and expensive development processes.

Impedance Mismatch

One way that Martian systems are worse than both Human and Houyhnhnm systems, though, is that they introduce a virtual machine that makes sense neither at a high level nor at a low level, and only introduces an impedance mismatch.

Houyhnhnms clearly understand that the ultimate purpose of computer systems is to support some kind of interaction with some sentient users (be it via a console, via a robot, via a wider institutional process involving other sentient beings, etc.). In other words, the computer system is an enabler, a means, and the computing system is the goal, i.e. the user interactions involving applications. If some computer system makes it harder (than others; than it can; than it used to) to write, use or maintain such applications, then it is (comparatively) failing at its goal.

Humans clearly understand that the ultimate starting point for building computer software is whatever cost-efficient computer hardware is available. At the bottom of the software stack are thin portable abstractions over the hardware, which together constitute the operating system. Every layer you pile on top is costly and goes against the bottom line. If it’s a good intermediate abstraction on the cheapest path from the low-level hardware to the desired high-level application, then it’s part of the cost of doing business. Otherwise it’s just useless overhead.

Unhappily, Martians seem to miss both points of view. The Nock virtual machine is justified neither by sophisticated high-level concepts that would let high-level applications be easily composed and decomposed, nor by efficient low-level concepts that would let software be cost-effectively built as layers on top of existing hardware. It sits in the middle; not as a flexible and adaptable piece of scaffolding that helps connect the top to the bottom, but as a fixed detour you have to make along the way, a bottleneck in your semantic tower, a floor whose plan was designed by aliens yet compulsorily included in your architecture, that everything underneath has to support and everything above has to rest upon.

Thus, if you want your high-level programs to deal with some low-level concept that isn’t expressible in Nock (hint: it probably won’t be), then you’re in big trouble. One class of issues that Nock itself makes inexpressible, yet that any programmer developing non-trivial programs has to care about, is resource management: the programmer has no control over how much time or memory operations really take. Yet resources such as speed and memory matter, a lot: “Speed has always been important otherwise one wouldn’t need the computer.” — Seymour Cray. There is a resource model in Urbit, but it’s all defined and hidden in u3, out of sight and out of control of the Martian programmer (unless we lift the lid on u3, at which point Urbiters leave Martian computing to go back to all too Human computing — and certainly not Houyhnhnm computing). At best, you have to consider evaluation of Nock programs as happening in a big fat ugly Monad whereby programs compute functions that chain state implicitly managed by u3.
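To picture what that hidden state threading would look like if it were made explicit, here is a hedged sketch of a fuel-counting monad in Haskell; Urbit exposes no such interface, which is exactly the complaint:

```haskell
import Control.Monad.State (State, get, put)

-- Hypothetical "fuel" accounting that Nock itself cannot express:
-- every operation spends from a budget the programmer can see and set.
type Fuel = Int
type Metered a = State Fuel (Maybe a)

spend :: Int -> Metered ()
spend n = do
  fuel <- get
  if fuel >= n
    then do put (fuel - n); return (Just ())
    else return Nothing  -- out of budget: the caller decides what to do

-- In Urbit, the analogous accounting exists, but only inside u3,
-- invisible and unmodifiable from within Nock.
```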

Of course, you could write a resource-aware language as a slow interpreter on top of Nock, then reimplement it efficiently under u3 as “jets”. Sure you could. That’s exactly what a Houyhnhnm would do if forced to use Urbit. But every time you make a change to your design, you must implement things twice, where you used to do it only once on Human or Houyhnhnm systems: once as a slow interpreter in Nock, and a second time in the Human system in which u3 jets are written. And how do you ensure the equivalence between those two implementations? You can fail to, or lie, and then Urbit is all a sham; or you can spend a lot of time doing it, at which point you have wasted a lot of effort and gained nothing compared to implementing the human code without going through Urbit. What did the detour through Nock buy you? Nothing. Maybe the persistence — but only if persistence with the exact modalities offered by u3 is what you want. If you aim at a different tradeoff between latency, coherency, replication, etc., you lose. And even if perchance you aimed at the very same tradeoff, you might be better off duplicating the general persistence design of u3 without keeping any of Nock and Urbit above it.
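At best, the equivalence obligation can be property-tested; a sketch with QuickCheck, using a deliberately trivial example (names mine, and note that testing only raises confidence, it proves nothing):

```haskell
import Test.QuickCheck (quickCheck)

-- The slow, definitional version (what the Nock-level code means)...
slowDecrement :: Integer -> Integer
slowDecrement n = if n <= 0 then 0 else n - 1

-- ...and the fast native version registered as a "jet".
jetDecrement :: Integer -> Integer
jetDecrement n = max 0 (n - 1)

-- The property every jet owes its definitional counterpart.
prop_jetFaithful :: Integer -> Bool
prop_jetFaithful n = slowDecrement n == jetDecrement n

main :: IO ()
main = quickCheck prop_jetFaithful
```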

Oh, if only you had an advanced metaprogramming infrastructure capable of manipulating arbitrary program semantics in a formally correct way! You might then automatically generate both the Nock code in Monadic style and the supporting u3 code for your software, and be confident they are equivalent. And if furthermore your metaprogramming infrastructure could also dynamically replace at runtime an inefficient implementation by a more efficient one that was shown to be equivalent, and for arbitrary programs defined by the users rather than a fixed list of “jets” hardwired in the system, then you could short-circuit any inefficiency and directly call the low-level implementation you generated without ever going through any of the Urbit code. But then, you’d have been using a Houyhnhnm system all along, and Urbit would have been a terrible impediment that you had to deal with and eventually managed to do away with and make irrelevant, at the cost of a non-trivial effort.

Computing Ownership

Martian computing is presented as a technical solution to a social problem, that of allowing individuals to reclaim sovereignty over their computations. That’s a lofty goal, and it would certainly be incorrect to retort that technology can’t change the structure of society. Gunpowder did. The Internet did. But Urbit is not the solution, because it doesn’t address any of the actually difficult issues with ownership and sovereignty; I have discussed some of these issues in a previous speech: Who Controls Your Computer? (And How to make sure it’s you). The one valuable contribution of Urbit in this space is its naming scheme, with its clever take on Zooko’s triangle — which is extremely valuable, but a tiny part of Urbit (happily, that also makes it easy to duplicate in your own designs, if you wish). The rest, in the end, is mostly a waste of time as far as ownership goes (though resurrecting the idea of orthogonal persistence is still independently cool, even if its Urbit implementation is ultimately backwards).

It could be argued that the Nock VM makes it easier to verify computations, and thus to ascertain that nobody is tampering with your computations (though of course these verifications can’t protect against leakage of information at lower levels of the system). Certainly, Urbit makes this possible, where typical Human systems can’t. But if Humans wanted to verify computations, they could do it much more easily than by using Urbit, with much lighter-weight tools. Also, the apparent simplicity of Nock only hides the ridiculous complexity of the layers below (u3) and above (Arvo, Ames). To really verify the computation log, you would also have to check that the packets injected by u3 are consistent with your model of what u3 should be doing, which is extremely complex; and to make sense of the packets, you have to handle all the complexity that was moved into the higher layers of the system. Once again, introducing an intermediate virtual machine that doesn’t naturally appear when factoring an application creates an impedance mismatch and a semantic overhead, for no overall gain.

Not Invented Here

Martian computing comes with its own meta-language for sentient beings to describe computing notions. Since Martians are not Humans, it is completely understandable that the (meta)language they speak is completely different from any Human language, and that there is no exact one-to-one correspondence between Martian and Human concepts. That’s a given.

Still, those who bring Martian technology to Earth fail their audience every time they use esoteric terms that make it harder for Humans to understand Martian computing. The excuse given for using esoteric terms is that using terms familiar to Human programmers would come with the wrong connotations, and would lead Humans to an incorrect conceptual map that doesn’t fit the delineations relevant to Martians. But that’s a cop-out. Beginners will start with an incorrect map anyway, and experts will have a correct map anyway, whichever terms are chosen. Using familiar terms would speed up learning and, crucially, would make it easier to pinpoint the similarities as well as the dissimilarities between the two approaches, as you reuse a familiar term and then explain how its usage differs.

As someone who tries to translate alien ideas into Human language, I can relate to the difficulty of explaining ideas to people whose paradigm makes them inexpressible. This difficulty was beautifully evidenced and argued by Richard P. Gabriel in his article The Structure of a Programming Language Revolution. But the Urbit authors are not trying to be understood; they are trying their best not to be. That’s a shame, because whatever good and bad ideas exist in their paradigm deserve to be debated, which first requires that they be understood. Instead they lock themselves into their own autistic planet.

There is a natural tradeoff when designing computing systems, whereby a program can be easy to write, easy to read, or fast to run, and can even be two of these, but not all three. Or at least, there is a “triangle” of a tradeoff (as with Zooko’s triangle), and you can only improve one dimension so much before the others suffer. But Urbit seems to fail in all three dimensions. Its alien grammar, vocabulary, primitives, paradigm, etc., make it both hard to read and hard to write; and its forced abstraction makes programs slower to run.

If that abstraction came “naturally” when factoring some programs, then it could make writing these programs easier; but the Urbit VM looks very little like what either Humans or machines use for anything, and offers no “killer app” that can’t be implemented more simply. Its applicative functional machine with no cycles exchanging messages is reminiscent of the Erlang VM; but then it’s not obvious what advantages Nock brings for the applications that currently use the Erlang VM, and all too obvious what it costs. It would be much easier to make an Erlang VM persistent or to teach Erlang Ames-style authentication than to teach u3 to do anything useful.

Yet, by having deliberately cut themselves off from the rest of the world in so many ways, Urbit programmers find themselves forced to reinvent the world from scratch, unable to reuse much of other people’s code except at a very high cost both in implementation effort (doing things both in Nock and in u3) and in integrity (ensuring the two things are equivalent, or cheating). For instance, the Urbit authors wrote a markdown processor in Hoon, and have a “jet” that recognizes it and replaces it with a common Markdown library in C; however, the two pieces of code are not bug-compatible, so it’s all a lie.

Urbit as a demo

Urbit has none of the support for modular design necessary for programming “in the large”. But the superficial simplicity of Nock makes it suitable as a cool demo of an orthogonally persistent system.

Of course, the demo only “works” by sweeping the difficult issues under the rug, to be solved by u3, the metasystem of Urbit; and unlike Nock, u3, where most of the interesting things happen, remains informal in its all-important side-effects, and is not actually bound to behave as a faithful implementation of the parts specified by the Nock machine. In other words, the pretense of having fully formalized the state of the system and its state function, and of putting the end-user in control of it, is ultimately a sham, a corruption. The power remains in the opaque and totally unspecified centralized implementation of the metaprogram that implements Nock and issues real-world side-effects.

There is no one-size-fits-all way to handle all the issues of connecting to real-world devices, or the policies that resolve tradeoffs regarding persistence, privacy, latency, efficiency, safety, etc. A centralized implementation of the metaprogram that handles them is not a universal solution. Only a general purpose platform on which people can build their own metaprograms can enable them each to solve these issues to their satisfaction. And once you have this platform, you don’t need any of the Urbit operating system, because you already have a Houyhnhnm computing system.

Houyhnhnms have no ill feelings towards either Martians or Humans. They hope that Urbit will be a great success, and demonstrate a lot of cool things and inspire people to adopt orthogonal persistence. However, Houyhnhnms believe that Urbit won’t be able to outgrow being a cool demo unless it embraces a more general purpose metaprogramming architecture.

⇖ index

Author's References:

http://urbit.org/

http://moronlab.blogspot.com/2010/01/urbit-functional-programming-from.html

http://media.urbit.org/whitepaper.pdf

http://urbit.org/docs/

https://medium.com/@urbit/design-of-a-digital-republic-f2b6b3109902

https://en.wikipedia.org/wiki/Zooko%27s_triangle

https://www.ethereum.org/

http://common-lisp.net/project/asdf/ilc2010draft.pdf

https://wiki.haskell.org/Monad

http://fare.tunes.org/computing/reclaim_your_computer.html

https://www.dreamsongs.com/Files/Incommensurability.pdf

https://en.wikipedia.org/wiki/Programming_in_the_large_and_programming_in_the_small