💾 Archived View for dioskouroi.xyz › thread › 24986132 captured on 2020-11-07 at 00:45:19. Gemini links have been rewritten to link to archived content
-=-=-=-=-=-=-
________________________________________________________________________________
Apparently, the author of this blog post had a change of heart:
https://memo.barrucadu.co.uk/blub-crisis.html
I have realised in recent conversations about programming languages, and in reflection of my very negative and kind of arrogant blog post about Go, that I have become trapped by the Blub Paradox. I have become dismissive of non-Haskell languages. I think in Haskell. Languages less powerful than Haskell are obviously too limited to get any real work done in, and languages with more advanced features than Haskell (like dependent types) are really just weird and not actually solving a problem I would ever face.
The arrogance! Such a know-it-all!
I have decided to call such a realisation a “Blub Crisis”. Haskell is my Blub. The only solution is to become proficient—actually proficient—in a different language, and once again learn a new way of thinking.
I thought the criticisms and praise, and the article's Good/Neutral/Bad structure, were fair and thoughtful on balance. It didn't read as arrogant to me.
It did to me.
He slams go for not doing stuff the Haskell-way (e.g. pure code vs effectful code).
This is not how you approach new things
I've never written more than a couple of lines of Haskell or any other pure functional language, but the more code I write, the more convinced I become that the ability to reason about mutability and side effects is a major force multiplier in writing robust software.
I tend to agree, but, even then, one doesn't necessarily have to take things quite as far as Haskell does.
In Rust, for example, mutable data is allowed, but, when the owner of mutable data shares a reference to it, it gets to decide whether the borrower is also allowed to mutate the data. This doesn't eliminate the more challenging things you can do with shared mutable variables, but it does mean that enabling them requires mutual consent.
Nim does an interesting thing, too. It has a two-color function mechanism where "procedures" are allowed to have side effects, and "functions" are not. But even functions are allowed to use mutable variables behind closed doors. That can arguably be an ergonomic win. Many people find that, within a sufficiently bounded context, an implementation that uses mutable variables might be more maintainable than a purely functional implementation.
The main reason Haskell goes even further, and bans mutable variables from the insides of functions as well, was never really about maintainability, per se. It was done that way because Haskell, as a lazy language, couldn't allow anything that might require statements to be evaluated in a particular order. That design decision turned out to lead to an impressive bounty of interesting and useful discoveries. But there also seems to be something of a tendency to swaddle the bathwater with the baby.
It's worth noting that Haskell allows mutable variables in IO actions - a type used to model computations that are not referentially transparent. It's just that using this facility is not super ergonomic.
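For instance (a minimal sketch): mutable references live in `Data.IORef`, and the type system only lets you create and update them inside IO:

```haskell
import Data.IORef (modifyIORef', newIORef, readIORef)

-- Sum a list using a mutable accumulator. The mutation is real,
-- but the IO in the type keeps it behind an explicit boundary.
sumWithRef :: [Int] -> IO Int
sumWithRef xs = do
  ref <- newIORef 0                  -- allocate a mutable cell
  mapM_ (modifyIORef' ref . (+)) xs  -- bump it once per element
  readIORef ref                      -- read the final value
```

For mutation that stays strictly local, `Control.Monad.ST` with `runST` even lets you recover a pure interface from an internally imperative implementation.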
I agree that being able to reason about mutability and other effects is useful. However, that doesn’t necessarily imply an all-or-nothing approach where either you’re in a pure function or you can do anything with anything. In a sibling comment, gwd mentioned const in C, which is one example of something in between. Rust’s ownership semantics and borrow checker are another.
> However, that doesn’t necessarily imply an all-or-nothing approach where either you’re in a pure function or you can do anything with anything.
To elaborate this point I'd say that the most important practical use of all that "monad mumbo-jumbo" in Haskell is that you can tag your functions with what they can and can't do and then the type system tracks this for you:
```haskell
-- pure function
f1 :: Text -> Int

-- can fail
f2 :: Text -> Maybe Int

-- can read from some MyEnv record
f3 :: Text -> Reader MyEnv Int

-- can keep a set of bool as state around
f4 :: Text -> State (Set Bool) Int
```
etc... and of course to go nuclear:
```haskell
-- can do anything
f5 :: Text -> IO Int
```
The tracking part is that you can't call f5 from within f1, the type checker says no. It enforces separation between all these various effect boundaries.
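A minimal, self-contained illustration of that boundary (the function bodies here are made up for the example):

```haskell
import Data.Text (Text)
import qualified Data.Text as T

-- pure: the type promises no effects
f1 :: Text -> Int
f1 = T.length

-- effectful: the IO in the type is the permission slip
f5 :: Text -> IO Int
f5 t = do
  putStrLn "free to log, read files, anything"
  pure (T.length t)

-- This would be rejected by the type checker, because `f5 t`
-- is an `IO Int`, not an `Int`:
--
-- broken :: Text -> Int
-- broken t = f5 t + 1
```

Calling a pure function from effectful code is always fine; it's the other direction that the type checker forbids.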
Also we don't have to stop here. A natural next step is defining exact effects one is after. For example say I want my function to be able to get some entity from a DB:
```haskell
-- any type that is an instance of Entity can identify itself by uuid
class Entity e where
  identify :: e -> UUID

-- The effect we want, ie. get an entity via its uuid
class (Monad m, Entity e) => GetEntity e m where
  getEntityById :: UUID -> m (Maybe e)
```
now we can say things like:
```haskell
data User = MkUser {uId :: UUID, uName :: Text}

-- a User can identify itself
instance Entity User where
  identify = uId

-- as User is an Entity we can get it via its uuid (if it exists)
getUserById :: (GetEntity User m) => UUID -> m (Maybe User)
getUserById = getEntityById
```
Also notice how getUserById does not say anything about IO or a DB. All it states is that whatever context it runs in must know how to get a user via its uuid. You can then plug in whatever actual context you want, say:
```haskell
newtype Prod a = MkProd {unProd :: ReaderT ProdDB IO a}
  deriving newtype (Functor, Applicative, Monad, MonadReader ProdDB, MonadIO)

-- get a user from a DB for real
instance GetEntity User Prod where
  getEntityById :: UUID -> Prod (Maybe User)
  getEntityById eid = do
    ProdDB {..} <- ask
    -- query the DB, etc...
```
or
```haskell
newtype Mock a = MkMock {unMock :: State (Map UUID User) a}
  deriving newtype (Functor, Applicative, Monad, MonadState (Map UUID User))

-- get a user from a mock DB
instance GetEntity User Mock where
  getEntityById :: UUID -> Mock (Maybe User)
  getEntityById eid = gets (Map.lookup eid)
```
All in all, the programmer has fine-grained control over what various parts of their code can and can't do.
The trick seems to be how to implement this kind of idea in a composable way, so we can still have a clear, modular design and write generic, reusable code even in the presence of several different types of effect. IMHO, the research into type and effect systems in recent years has been promising, but it also shows that this is not an easy problem to solve. I'm hoping that in a few more years we might see a new generation of programming languages that incorporate this kind of "effect safety" as easily and naturally as many developers use const qualifiers and resource management idioms today.
I haven't yet looked much into effect systems so I can't really talk about them, but I think even with something like MTL you can achieve a decent level of composability/modularity, where you build up a library of capabilities that you can then pick and choose effects from, eg.:
```haskell
type FooContext m =
  (GetTime m, GenUUID m, Log m, Auth m, ReadDB m, WriteS3 m, BarApi m, ...)

foo :: (FooContext m) => UUID -> m Foo
```
The main downside is ergonomics, ie. the amount of boilerplate needed, but personally I don't think that's a high price to pay.
Unison's ability system (an implementation of algebraic effects) is looking promising as an ergonomic method of tracking effects.
> He slams go for not doing stuff the Haskell-way (e.g. pure code vs effectful code).
Heck, I come from a C background, and that's a complaint I have about Go. Sometimes you want to have a function accept a pointer to a large structure to avoid copying, but have the compiler prevent you from making any changes. In C you'd write "const"; in Go there's no way to do that.
Sure, const is great to have.
But Haskell's view on this topic is far, far more opinionated than just supporting 'const'.
That is a single dubious point amongst some serious and in my opinion legitimate issues with the language.
Complaining about strict evaluation, the std lib not doing what he wants it to, the type system, the lack of a heap profiler/ThreadScope, and... demanding Golang handle zero values in his preferred way is basically him whining that it ain't Haskell. I could blubber about it not having power conjunctions or self intervals or whatever if I wanted Golang to be J, which would be a similarly moronic roster of complaints. In reality Golang is a basket of compromises like any other programming language. There's vastly more good stuff built with it than with Haskell, despite Haskell being older and more theoretically amazing, so they must be doing something right.
His only criticism that resonated with me was go get defaulting to head, which befuddled me when I first saw it, but that's a compromise too; one oriented to the large codebases that golang is designed for. It's actually a pretty good compromise compared to the propeller head Haskell dork _"Things break backwards compatibility in Haskell, and the users just update their code because they know the library author did it for a reason."_ -aka this appears to be a statement from an academic wanker who obviously never had to ship on a timeline in _his entire fucking life._
The type system complaints are somewhat in the area of "it should be more like Haskell", but the debugger, heap profiler and thread debugger ones are certainly not.
Golang has extremely basic tools for anything beyond formatting/compiling compared to most modern languages. This is just a limitation of the current ecosystem, not something you can say has tradeoffs (except of course for the Go team's time/prioritization of features).
Delve is getting better, but it has a LOT of ground to cover before it is half as useful as gdb or a java/C# debugger. Same goes for the profiling tools, especially on the CPU side.
I've never really done Haskell, but I have Erlang, Java, C#, JS and TS under my belt.
I agree with everything the author said, after having spent some time with Go.
My Blub language has been PHP. While the language has been getting better over the years, especially with the introduction of types, I still felt at one point that I wasn't growing. I had a try at Haskell, and failed miserably. I tried OCaml, but still something felt unnatural to me, although what really intrigued me was the sound type system and the guarantee that if it compiles, then it's probably going to run well. I then tried Elixir, and I love it. I think Elixir is a wonderful transition language, but something that I wouldn't mind keeping close in my toolstack. The Actor model, immutability, pattern matching, meta programming, these are all concepts that upended my way of thinking. PHP/JavaScript array sort functions feel so weird to me now for doing the sorting directly on the variable. But to be honest, I wish Elixir had types. I don't trust myself writing functions that can return different things.
I personally believe that Elixir is an excellent 'gateway' functional programming language _because_ it is dynamically typed.
Learning OCaml and especially Haskell (I've done some of both) requires understanding a complex type system and learning how to encode what you want in it. On top of that you have to learn to code in the 'functional' way - representing your data, pure code, recursion (or reduces), higher order functions etc.
When you write Elixir you eliminate most of the type system stuff, and you just have to learn to code in a functional way. There is surprisingly little 'language' to learn - functions, tuples, lists, maps, modules and that's about it. There is then the OTP (processes and supervisors) side of it, but that's separate (and if you're writing a web backend in Phoenix, mostly ignorable).
We're just (hopefully) wrapping up a 9-month project with a team of about 8, none of whom had written much Elixir before the start (a background of mostly C). It has been a very smooth process, and definitely the right choice of language.
I sorely lack Elixir's easy entry experience while I am getting better and better at Rust. :(
And I'll look for ways to use it at my $dayjob. Rust is an excellent language but I feel that for some pieces of our stack Rust is kind of a bazooka in a scenario where a 9mm pistol would do.
I could see an argument where Rust is performant and low level enough that you can use it at the bottom of the stack, and has good enough abstractions that it's productive to use higher in the stack. There are certainly benefits to using one language for everything, but this reasoning relies on you having a team that is very familiar with it, and having your bits of tooling and libraries worked out for your setup.
Yep, exactly. Lower-level languages, however, also give you many more foot-guns.
These days my ideal app is written in Elixir for almost everything due to the insane parallelisation and fault-tolerance that it offers (and the copy semantics that make shared memory bugs impossible) with Rust sprinkled at the critically important-to-be-fast places -- like working with sqlite3 for example.
All of that could be achieved with Rust or any other language really. But it requires tons of tooling, CI/CD jobs, Git hooks, static analyzers, linters, formatters etc. And a good chunk of all that doesn't even exist for many languages.
So I prefer to make my own blend that has the important advantages at the right places.
Erlang has some type annotations. Not sure whether they made it over to Elixir?
Yep, there's `dialyxir` which is a very heroic wrapper effort to translate Dialyzer's messages to something more readable and manageable. Even with that though sometimes you get real head scratchers no matter which of both you use.
Dialyzer is fully integrated into Elixir, yes. But again, it’s optional typing.
Dialyzer also takes a while to get used to and for several teams I've been in the initial cost of learning turned out to not outweigh the theoretical savings won by it.
There are other, more modern libraries that defer the errors to runtime but also make them much clearer (like `norm`). Libraries like `boundaries` can also help you enforce, a-hem, boundaries in your code and make sure that you write less spaghetti code.
I personally stick to Dialyzer in personal projects but it's an uphill battle to constantly keep it happy while OCaml / Rust / Haskell will just immediately yell at you with a well-described error (well, Rust anyway, the other two can be vague sometimes).
I know how to use Dialyzer and am quite proficient with it. I cannot say, however, that it helps me in any way. I think strong typing is a crutch for bad programmers and I've yet to encounter anything in my 30+ years of programming to change my mind.
There are type systems out there that allow you to model more than "is a string, is a float, is an int".
There are CVEs for use-after-free bugs in Firefox, IE, the Linux kernel and more. Rust's type system helps prevent those. There are CVEs for double-free bugs in OpenSSH, OpenSSL, and Kerberos. Again, Rust's type system helps prevent those. There are CVEs for null dereferencing in the Linux kernel, CUPS, and OpenSSL. Many languages have type systems that help prevent those.
There are many more classes of bugs that can be prevented by judicious use of robust type systems. Dismissing strong, static typing as "a crutch for bad programmers" is just plain wrong.
All of this, of course, assumes the only use for static types is preventing issues. Algebraic data types actually allow you to write simpler, more concise code in languages that support them. Typeclasses make for a whole model of polymorphism that's just incredibly painful to implement in languages that don't support them.
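A small sketch of the ADT point (the `Shape` type here is invented for illustration): the type enumerates every shape a value can take, and pattern matching handles each case directly, with no tag fields or instanceof checks.

```haskell
-- An algebraic data type enumerates the possible shapes of a value.
data Shape
  = Circle Double        -- radius
  | Rect Double Double   -- width, height

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h
```

With `-Wincomplete-patterns` enabled, forgetting a constructor in `area` becomes a compiler warning rather than a runtime surprise.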
> Dismissing strong, static typing as "a crutch for bad programmers" is just plain wrong.
No, it's correct. The problem is that innocentoldguy doesn't realize that _he_ is a bad programmer. (That's not an attack on him - we _all_ are bad programmers.)
So here we are, bad programmers. Here's this crutch - or, less negatively put, here's a tool that can help. Maybe we can be somewhat better bad programmers if we use it.
I get your point, but if we're all bad then "bad" loses its meaning. I'm definitely a better programmer than many of my peers, and worse than a whole bunch of others.
We're bad, not in comparison with each other, but in comparison to the external standard of "good enough that we don't need a crutch".
In fact, if you want to be a better (less bad) programmer, learn to use the available crutches more effectively.
Well, it's a bit like saying that computers of the electronic kind are a crutch for bad computers of the human kind.
Or that planes are a crutch for bad travelers who can't fly under their own power.
There's two sides to it:
First, there's only so much I can hold in my brain at once.
If the computer can do some of the tedious busy work, let it.
Second, the first perspective talks about catching errors and oversights. A sufficiently non-bad programmer could make do without. But static typing also allows for some programming techniques that wouldn't work without it.
As a silly example, Haskell makes heavy use of overloading-by-return-type. You just don't have the information about what return type is expected available at runtime; it needs to be static.
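Two everyday instances of this: `minBound` and `read` have no arguments that could drive dispatch, so the expected result type alone picks the instance.

```haskell
-- minBound :: Bounded a => a takes no arguments at all;
-- the type annotation is what selects the Int instance.
smallestInt :: Int
smallestInt = minBound

-- read :: Read a => String -> a is likewise resolved by the
-- result type: the same string would parse differently as an Int.
answer :: Double
answer = read "42"
```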
There's also lots of optimizations that are safe only when the compiler knows more about your code. (Of course, a sufficiently non-bad programmer could write directly in the machine language of every processor she's writing code for..)
In practice, typing helps me a lot with eg exhaustiveness checking for my pattern matches.
The type system that dialyzer exposes is pretty weak. So not a good standard to measure all of typing against.
Strong typing can be a crutch for bad programmers in the same way that a prosthetic leg can be a crutch for an amputee.
The world record 100m men's sprint for double-leg-below-the-knee amputees is now frozen at 10.57s (record set in 2013); omitting Usain Bolt who is clearly freakishly good even among world-class sprinters, the 100m world record is 9.74s.
I am very happy to use strong static type systems.
That depends on what kind of software you have worked on. I also got quite far with dynamic languages in the last 10 years, but at some point strong static typing helps you a lot in making invalid state impossible at compile time.
Growing complexity kind of mandates strong static typing at some point. Sure, complexity can be managed and reduced -- and distributed among smaller projects (i.e. splitting big projects) -- but even that has a limit.
Dialyzer solves certain problems for me that still exist in many "typed" languages. A great example is pos_integer(). In something like TypeScript I can use "number", but that doesn't assert I can't receive a negative number, or a zero.
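Short of dependent types, a common Haskell-style approximation is a smart constructor: a newtype whose real constructor is kept private, so the invariant is checked once at the module boundary (a sketch; the module system does the enforcement):

```haskell
module Positive (Positive, getPositive, mkPositive) where

-- The Positive constructor is deliberately not exported, so the
-- only way to build one from outside this module is mkPositive,
-- which validates the invariant exactly once.
newtype Positive = Positive {getPositive :: Int}

mkPositive :: Int -> Maybe Positive
mkPositive n
  | n > 0     = Just (Positive n)
  | otherwise = Nothing
```

Everywhere else in the program, a value of type `Positive` is then known to be strictly positive without re-checking.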
In general, you'd need something like dependent types for solving this properly. Like eg Agda has.
Ada has this without general dependent types AFAIK
Yes and no. The definitions are in the type system, but the checks are done at runtime, just like you would do manually. You don't really get the benefit of typing.
The reason I can say that: you don't have to provide proofs that a value _has to be positive_ to get things to compile.
Checks at runtime are also a good idea, but yes, very different from static dependent types.
There's some interesting work on making such contracts work well in a lazy language with higher-order functions.
While I think there's some sense in the analogy and you get real insights using new languages, I do find that 'beating the averages' article really unconvincing.
Firstly there's the obvious observation that Y Combinator companies very rarely use lisp, but there's also really a lack of any solid evidence of any kind anywhere that language choice has such a big impact.
I am a fan of exploring languages and making a good choice for the problem at hand and always remaining open-minded but I can't stand the religious flame war side of it and I really do not think that article helps, at all.
To me languages are always a trade-off between abstraction and control, to which each problem has an appropriate range of languages to apply.
I also think the concept doesn't apply here as go is certainly less powerful and has fewer abstractions than Haskell (by design) - so if anything it is the 'blub' language* according to the article here (my point being not to criticise go but rather to point out how silly the beating the averages concept is).
I don't think his original piece was arrogant, I think this is something of an overreaction and a problem with assessing any language - adherents meeting criticism with 'you just have to use it more' can use that excuse indefinitely. Better to address individual points.
* I love go and contributed to the core compiler back in 2011/12 so this isn't an attack on it.
Such a weird thing to frame this in terms of the Blub Paradox. Regardless of what you think about the paradox, this is what it states:
> _But when our hypothetical Blub programmer looks in the other direction, up the power continuum, he doesn’t realize he’s looking up. What he sees are merely weird languages. He probably considers them about equivalent in power to Blub, but with all this other hairy stuff thrown in as well. Blub is good enough for him, because he thinks in Blub._
It's very, very hard to argue Go is "up the power continuum" from Haskell, or that Haskell is a Blub language compared to Go. Everything else can be debatable, but surely not this.
He could have simply said "I was arrogant about Go, and need to look at it from a fresh perspective instead of comparing it to a language I'm more familiar with", no appeals to Blub needed.
The Blub Paradox is flat-out wrong. To see why, just look at Haskell and Lisp.
When a Lisp user looks at Haskell, they are sure they're looking down, because Haskell doesn't have macros. But when a Haskell type (pun intended) looks at Lisp, they are _also_ sure that they're looking down, because Lisp doesn't have a decent (Hindley-Milner) type system.
This situation - two languages, each sure that they're above the other - shows the problem. _You cannot rank languages on a one-dimensional axis called power._
Another way of getting there is to ask: Power? Power for what? For writing programs. What kind of programs? General programs? I've never written a general program in my life. I've written a bunch of specific ones, though. What I care about is power to write _this_ program - the one I'm currently trying to write. (Why do I care about a language's power to write a program that I'm _not_ trying to write?)
Instead, think in terms of yak shaving. What are the yak shaving aspects of the program I'm trying to write? Think of that as a vector in a multi-dimensional space. Think of languages as branches on a tree. Which branch extends farthest in the direction of the yak-shaving vector? Use that language.
But you have to have kind of an expanded idea of what "yak shaving" means. In particular, if you pick a "non standard" language, training your team becomes part of the yak shaving.
I think he's not saying Golang is up the power continuum, but that languages like Idris and Coq are up the power continuum. He doesn't bother learning those because he's "stuck".
I think you're right! I misread it.
Also interesting: it's his only article tagged Go, from 2017. He never mentions Go again. Since he has several more articles about Haskell, I wonder if he eventually quit his job, or whether he simply didn't have anything else to say about Go.
Recently I've become a bit jaded, because I see flaws and the like in every language I've used; I (like to think I) don't have a "blub" language.
Mind you I've never gotten along with pure functional languages. I've mostly done CRUD applications and I'm not sure functional languages are appropriate for that use case.
FP languages like Elixir are perfect for CRUD apps IMO (Clojure too but it lacks the runtime guarantees of Erlang's BEAM VM so to me that's a no-go).
Reason being: they are immutable in general but allow you escape hatches that you can relatively easily identify in your code if you need to troubleshoot.
Definitely! I couldn't get my head around why people like untyped languages, but I keep an open mind on it. I won't close off that they could be better, if the right patterns/practices are used (whatever they are!)
Well, if your alternative was 90s style Java or C++ static typing, Python-style dynamic typing looks comparatively less insane.
As soon as you have algebraic data types and parametric polymorphism, static typing is less onerous, so dynamic typing is less appealing.
I liked 90s Java at the time. ;-) well the alternative seemed to be c++ or vb or Perl and I didn’t know about python.
And especially type inference.
Yes. Ideally something as powerful as Hindley-Milner or similar. But even the weak sauce versions in Go (or recent Java or C++) are a boon to developer ergonomics.
Java's is particularly weaksauce, and has problems chaining inferences within even the same expression, but it's definitely way better than not having it at all.
> I couldn't get my head around why people like untyped languages
What do you mean exactly with "liking" untyped languages?
I personally program mainly in both a ("cognitively demanding") typed language and an untyped language, and they have different use cases.
For relatively small programs, untyped languages are much faster to develop. A program I wrote yesterday in a few hours would have taken 2 to 4 times as much in a typed language.
On large libraries (I consider gaming engines as so), common wisdom suggests that typed languages make the project easier to manage (modify and so on).
There's obviously a wide range of use cases in between.
Is that true? A simple program that isn't going to see long term use means I don't have to worry about anything but the happy path, either typed or untyped. And something where I have to nail down the edge cases is easier in a typed language.
True? I personally have no doubt (of course, programs vary a lot, so there are certainly cases where the effort is comparable).
Besides the unstructured trees, another example, although more high-level, is interfaces. They're a concept required in statically typed languages (in some languages they have a simpler design than in others), but not in dynamically typed languages (which have duck typing).
If you have a couple of types that you need to abstract, in an S.T. language you'll need to define an interface, the method signatures, which include the return types, and maybe (in lower-level languages) the generic types and the super interfaces. Even something as (relatively) simple as generalizing tuple and matrix types will require some design.
In D.T. languages, zero effort is required. As long as an object responds to a method (name), it's game.
I think it's not realistic to assume that the concepts that need to be taken care of in a S.T. language are comparable to the ones in a D.T. language. And more concepts imply more cognitive load. And cognitive resources are limited :-)
By like I mean prefer. So they’d pick Ruby instead of Java for most tasks for example.
I joked with a friend that as people get older they start to prefer static typing. I personally don't have a preference; it depends on the application. I think the advantages of static typing are obvious, so let me rant about what is, for me, the main disadvantage: as soon as you have an advanced type system, people will try to get creative writing code in the most abstract possible way. It's inevitable. It's the typed equivalent of premature optimization. Why restrict ourselves to implementing matrix multiplication for floats? Why not rational numbers? Why not over arbitrary fields? This is especially true when people write libraries. And yes, you can encode a lot of things in the type system, but maybe it's not such a great metalanguage when taken outside of the well-known uses. It's similar to how people get crazy with macros, or even worse, template metaprogramming.
I don't use go so I don't have an opinion if they made the right choice but I sympathize. I suppose you have to be Ken Thompson to not give a crap and say I'm going to create a language without support for generic programming in 2009. The closest thing would be a tenured professor but they wouldn't dare unless under a pseudonym.
"I joked with a friend that as people get older they start to prefer static typing."
I don't think you can tell that right now. Statically-typed languages have gotten much, much better over the past 30 years (progressively), and over the last 20 years we've all aged, you know, 30 years, so right now I think there's a lot of correlation there rather than necessarily causation. I have also switched to static typing as I've "gotten older", but I would still _totally_ rate things in the order "1990 static < 2010 dynamic < 2020 static", even if I had to choose right now. Static was a real mess in the past.
It also isn't even necessarily big jumps that make that true as much as a steady development of innovations, since obviously it's not like all modern static languages are Hindley-Milner or anything. But a slow dribble of both little features and improvements in understanding of how to use everything over the years, like type deduction (getting rid of the usually-redundant specification of type on both sides of "="), getting away from the idea that OO === matching physical models, libraries that have learned to take more interfaces instead of concrete types, languages like Go that privilege composition over inheritance instead of the other way around and the general trend towards composition, getting iterators embedded more deeply into languages like C#, control flow improvements like usable threading methodologies ("share memory by communicating, don't communicate by sharing memory", and even as much as I dislike it vs. the alternatives, having async/await is better than not having any options even though I prefer Go-style threading by a mile), and so on and so on... a steady dribble of little improvements that one step at a time changed the cost/benefit tradeoffs of a static vs. dynamic language from clearly dynamic circa 2000 to fairly clearly (IMHO) static for any non-trivial code base in 2020. And that's before we talk about Rust or anything like that.
as a counterpoint, genericness can actually serve as a form of documentation. you can often infer a lot from just a signature, e.g:
any :: (Functor f, Foldable f) => (a -> Bool) -> f a -> Bool
tells me that `any` has to work across the whole collection `f a` (list/tree/whatever), because that's
how folding works, and that it will get the answer by calling the function on the collection's elements (the collection is a Functor, and the only thing you can do with one is `map` a function over it).
[of course this is assuming that `any` isn't implemented as `any _ _ = True`]
This advantage can be overstated and give a false sense of security, though. It’s true that for _entirely_ generic functions, you can sometimes infer useful properties just from the type signature, but once you start getting any more specific types in there, all bets may be off. You said
_[of course this is assuming that `any` isn't implemented as `any _ _ = True`]_
but actually there are many more possibilities. For example, this function might return True if the provided data structure has exactly 5 elements, never using the provided (a -> Bool) function or mapping over anything.
you're right! but i think under normal circumstances it's fair to assume that the function actually _uses_ all of its arguments. and i mean "uses" in a general handwavy sense, e.g. that it doesn't do `any f xs = length (fmap f xs)`, where `f` is technically used, but the results of applying `f` are instantly dropped.
I agree and disagree. I love how you can do abstraction in Haskell based on the minimal interface. If it's "ring-like" or whatever you can define matrix multiplication. That can often come up useful, extending the power of existing functions.
But it means your web-scraping code, CRUD app, {some other boring everyday app} or whatever is now adorned with all this deep mathematical complexity; if you had just used Node.js or something, it would be easier to understand.
Premature optimization is probably a good term for it, because the good thing about Haskell is that refactoring is easy. Make things specific now and more general later, and it should be backwards compatible for the most part.
But making things into "arrows" is part of the sport I think.
For some, it is “as developers get more mature they start to prefer static typing”.
Almost all of your “whys” have sensible, practical, answers. The practical bit is the sticky bit that gets set when you get “older”.
Prototyping or gluing components together is definitely faster without static typing.
I think this is a fault of language ergonomics rather than static typing.
If you are gluing components together, then surely the components have a common interface that uses compatible types. And surely you need to tell the runtime what those types are in some way, even if that is simply by writing them to return appropriate values.
In theory, a statically typed language with ideal ergonomics would make that as easy to specify as a dynamic language, but would have the benefit of warning you of mistakes as early as possible.
Of course, many statically-typed languages trade-off some amount of ergonomics for other features.
> If you are gluing components together, then surely the components have a common interface that uses compatible types
This is not guaranteed; for example, the representation of a JSON object is trivial in dynamically typed languages, because there are no type bounds.
Ease of metaprogramming is another. Possibly also debugging - I don't get REPLs in the statically typed languages I've used that are as powerful as those in the dynamically typed languages I use.
Concepts like these certainly don't represent best practices at large scale, but they do make things easier at small scale, and they don't necessarily need to be supported.
Taken to the extreme, Perl variable autoinitialization and context sensitivity are very convenient for 3-statement programs, but they'd be horror above that. It doesn't make sense for any language to try to support that.
In my experience when gluing things one needs at the very least dependent types to properly express relations. Without that the best hope is a good integration test, but with that in place static types for things that can be expressed in mainstream languages do not bring much extra value while contributing to the cost.
Go is a programming language for teams, not primarily designed to impress individual programmers...
I agree with all the author's points. But when working in teams, or even multiple teams on the same software, Go solves a lot of problems for you.
Yeah, it's a dumb language, missing a lot of features. But there is only one way to program, dependency management is sane (no circular dependencies), and the language is built for readability, not for writing code fast and elegantly.
Often short, dense code with complex types is just really hard to read for your average Joe programmer.
We're not all computer scientists here.
Go is designed for writing maintainable server-side programs that can utilize the concurrency in today's computers. Therefore they left out generics... I hope they're coming.
I do wish error handling and generics were part of the language... And a tuple type, indeed.
Go impedes its own readability by encouraging or even requiring such huge volumes of code.
(And Go ain't very good at concurrency. At the very least they should have let you mark values as immutable, or made communication happen entirely via copying, as in Erlang.)
Yes, tuples are a really weird thing in Go. If they had left them out completely, that would be bad but sort of defensible. But instead they give you a half-baked implementation of some of what tuples do with their 'multiple return values'.
(And multiple return values aren't even a good fit for signaling errors. For error handling you want to return _either_ the result _or_ the error, in a way that the compiler can check that you handled the error.
Instead, as far as the types are concerned, functions always return both the result and an error, and human reviewers have to make sure the errors are checked properly.)
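A minimal sketch of that point, using a hypothetical `parse` function: with Go's (value, error) convention, both the result and the error are always in scope, and nothing but convention stops you from using the zero-value result when the error is non-nil.

```go
package main

import (
	"fmt"
	"strconv"
)

// parse is a stand-in for any function following Go's (value, error)
// convention; the name is illustrative, not from the thread.
func parse(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("parse %q: %w", s, err)
	}
	return n, nil
}

func main() {
	n, err := parse("not a number")
	// Both n and err are in scope regardless of what happened, so
	// nothing stops us from computing with n even though err != nil.
	fmt.Println(n + 1)      // silently uses the zero value
	fmt.Println(err != nil) // the error was there all along
}
```

With a tagged union, the success value would only exist in the success branch, so this mistake would not type-check.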
Go is incredibly readable, I find. Yes, you tend to write a lot of code because of the lack of generics, but that is being fixed as we speak. There's a draft for generics and it looks nice from a Go developer's perspective.
And Go lets you communicate by copying. That's what a channel is. Pass a struct and it is copied. Pass a pointer and the pointer is copied. The thing it points to isn't copied, for glaringly obvious reasons.
And what would you suggest is a good alternative for returning multiple parameters. Currently this basically forces you to handle any possible errors and results in software you can very easily reason about.
Not sure what you mean about human reviewers needing to check errors.
Every isolated Go piece is pretty readable. The problem is that getting most things done requires enough code that it's a lot of work to take it all in. Run all the for loops in your head. Etc. Higher level (and esp FP) languages like Haskell or Scala will be the opposite. That bunch of function compositions may take a little bit of work to digest, but once you understand it you understand a lot.
When people disagree on readability it's often because they mean two different things. We would benefit from different terms for readable-in-the-small and readable-in-the-large.
It is a lot like reading levels. Go feels like a grade 5 reading level. This means most people can read it. It also means complex concepts take a lot longer to explain because you have to avoid all sophistication.
Counterpoint: I find the go standard libraries (which are quite extensive) incredibly readable.
> Go is incredibly readable I find.
I'm the opposite, I don't find it readable at all.
I find Go too verbose to be readable.
Exactly this. I find that each line of go is readable. However actually trying to figure out what the code is trying to accomplish becomes a chore because I can't keep all of the lines in my head.
Maybe it is because of what I am used to, but I find I am often "pattern matching" loops into `map`s, `filter`s, and similar in my head. Of course doing this, and making sure that I didn't miss a side effect or a condition that makes it not-quite-a-map, takes a lot more brainpower than just seeing `slice.map(|path| path.basename())`.
> And Go lets you communicate by copying. That's what a Channel is. Pass a struct and that is copied. Pass a pointer and the pointer is copied. The thing it points to isn't copied for glaringly obvious reasons.
I'm not sure what those glaringly obvious reasons are. In Erlang, you just send a copy of the whole struct. Not some kind of references or pointers.
See eg
https://play.golang.org/p/P3qUtFenp2q
But as I am saying, if you want to send pointers (or references) over a channel, they should support marking things as immutable.
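Concretely, a map sent over a channel still shares its underlying storage with the sender; the following is a minimal sketch of the kind of sharing that playground link presumably demonstrates:

```go
package main

import "fmt"

func main() {
	ch := make(chan map[string]int, 1)
	m := map[string]int{"x": 1}
	ch <- m     // only the small map header is copied on send
	m["x"] = 99 // sender mutates after the send
	got := <-ch
	fmt.Println(got["x"]) // prints 99: the receiver observes the mutation
}
```

This is exactly why either immutability markers or deep copies would be needed to get Erlang-style isolation between goroutines.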
> And what would you suggest is a good alternative for returning multiple parameters. Currently this basically forces you to handle any possible errors and results in software you can very easily reason about.
Go doesn't force you to handle errors at all. The compiler will happily let you mix up the branches of the `if` that checks for errors, or let you get away without checking anything at all.
My suggestion would be eg algebraic data types. Especially sum-types. Or in more C inspired terms: tagged unions plus pattern matching. Or 'an enum with parameters'.
> Not sure what you mean about human reviewers needing to check errors.
In a language without any checking at all like JavaScript, human authors and reviewers have to make sure that you don't accidentally eg add a string to an int. Or otherwise, have to make sure that at least you have enough test coverage.
In Go, the compiler can yell at you when you are trying to add an int to a string.
In OCaml or Haskell or Rust (or any language with algebraic data types), the compiler can make sure that you check your errors. So in e.g. Haskell-like syntax that looks like this:
case someFunctionThatMightGoWrong(someParameter) of
  Error errorDetails -> handleErrors(errorDetails)
  Success someValue -> doSomethingSensible(someValue)
Crucially, `someValue` is only in scope in the branch where we match the right pattern. So you can't accidentally go on computing with that variable in the wrong branch.
Does this make sense? If not, I can try to explain in some other way.
Regarding the example in your first point, you are still sending the map reference `m`. If you wanted to send an immutable type, you could declare the channel to only accept struct values (not pointers to structs) and you'd get the desired behavior.
Here's a playground for reference:
https://play.golang.org/p/U2tYZVTtUf-
Given that there is no way to make a non-pointer map in Go, there is no workaround for the example they gave.
The broader point is that there is no way in Go to ensure that data passed over a channel will not be modified concurrently without modifying the type of the data specifically for the channel use case.
A deep_copy() primitive would fix this.
Right, but they mentioned sending immutable structs which is why I gave the struct example. But you're right that every declared map is a just a pointer.
With respect to your second point, I see what you mean but I still don't think that's a negative of Go. You can pass in copied values without having a deep_copy() primitive (loop over the map's values and copy them to a new map, dereference a pointer, etc.). Like other parts of Go, if you want to make your data immutable there is no syntactic sugar. You have to explicitly write it out.
You'd have to roll your own deep_copy by hand (or code generation) for every single data type.
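For illustration, such a hand-written copy helper for one concrete type might look like this (`copyMap` is a hypothetical name, not a Go primitive, and you would need one such helper per type):

```go
package main

import "fmt"

// copyMap is the kind of per-type helper you'd have to write by hand
// (or generate) for every structure you want to send safely.
func copyMap(src map[string]int) map[string]int {
	dst := make(map[string]int, len(src))
	for k, v := range src {
		dst[k] = v
	}
	return dst
}

func main() {
	ch := make(chan map[string]int, 1)
	m := map[string]int{"x": 1}
	ch <- copyMap(m) // send an independent copy
	m["x"] = 99      // later mutation no longer reaches the receiver
	fmt.Println((<-ch)["x"]) // prints 1
}
```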
Yes, either a deep_copy() or a way to declare things immutable. (Or both.)
> And Go lets you communicate by copying. That's what a Channel is. Pass a struct and that is copied. Pass a pointer and the pointer is copied. The thing it points to isn't copied for glaringly obvious reasons.
Copying large structs is not ideal, but passing a pointer to the channel means the other side of the channel can mutate that data. If Go had immutable types, you could pass a constant pointer, allowing the other end of the channel to read from the pointer but not modify it.
> And what would you suggest is a good alternative for returning multiple parameters. Currently this basically forces you to handle any possible errors and results in software you can very easily reason about.
What he's talking about is returning a union type. You have a single return value; its type is either a valid result (e.g. a string) or an error. The compiler expects you to do runtime type checks (basically) to assert whether your return value is actually a value or not.
The explicit return values get clunky in some situations. If I have a function that processes an array of items, where each item can fail individually, in Go the cleanest way to handle that is to make a struct that holds a pointer to a value (so you can check if it's nil) and an error. And you return an array of those structs. In languages with Options, you can simply return an Option object, and let the upstream caller figure out what to do if it's an error.
Javascript/Typescript Promises are a similar, though less featureful, implementation of the same kind of idea.
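A rough sketch of the hand-rolled per-item pattern described above, with illustrative names (`result` and `processAll` are not from any real API):

```go
package main

import (
	"errors"
	"fmt"
)

// result is the hand-rolled "either" that Go pushes you toward when
// each item in a batch can fail independently: a nilable value pointer
// plus an error, with nothing enforcing that exactly one is set.
type result struct {
	Value *int
	Err   error
}

func processAll(items []int) []result {
	out := make([]result, 0, len(items))
	for _, it := range items {
		if it < 0 {
			out = append(out, result{Err: errors.New("negative input")})
			continue
		}
		v := it * 2 // fresh variable each iteration, safe to take its address
		out = append(out, result{Value: &v})
	}
	return out
}

func main() {
	for _, r := range processAll([]int{2, -1}) {
		if r.Err != nil {
			fmt.Println("err:", r.Err)
			continue
		}
		fmt.Println("ok:", *r.Value)
	}
}
```

With a real Option/Result type, the per-item struct and the nil checks would disappear into the type itself.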
Not sure what you mean about human reviewers needing to check errors.
> What he's talking about is returning a union type. You have a single return value; its type is either a valid result (e.g. a string) or an error. The compiler expects you to do runtime type checks (basically) to assert whether your return value is actually a value or not.
Not quite. I am suggesting a tagged union. Not a runtime type check.
You'd check the tag.
Some language only support the runtime type-checked version of union types. That's a bit silly. For example, it makes it much harder to write a function that may either error out itself, or produce an error message as its bona fide value.
I think even more than the lack of generics, Go is difficult to read because it forces you to mix the error path with the success path in every function which can possibly fail, often making you bubble errors up the call stack manually.
Well, the onerous error handling is a symptom of the lack of generics.
Go has no facilities that would allow you to abstract away the repetitive parts of error handling.
The argument for Go being a simple language that naturally produces code everyone can easily grok comes up time and time again.
I have never gotten it. I find Go code somewhat easy to write, if I just give up on trying to be concise or elegant, but not at all easy to read. The excessive boilerplate just makes me feel like my brain doesn't have the working memory to piece it all together.
The constructs are simple, true, but if you are trying to write anything non-trivial, then simple syntax will not make your _problem_ simpler. It is just expressed in tens of times more lines than in Elixir, Haskell or even PHP. This takes a toll on the reader.
As an example: you can accomplish interesting projects with Lego, and it is easy to get going for sure. But at some point, when you are trying to do something non-trivial, it becomes a hindrance, despite its simplicity. Hence "I built a life-size XYZ in Lego" becomes impressive in its own right.
Oh, and Go modules is a story of a train wreck.
I agree! If people believe that Go is easy to read because it is simple they must love reading brainfuck as there are only a handful of simple operations!
In what way are Go modules a train wreck?
Here is a good start:
https://donatstudios.com/Go-v2-Modules
With the relevant HN discussion:
https://news.ycombinator.com/item?id=24429045
This is repeated so often that people believe it is true.
The lack of any type of abstraction is not great for teams. The need to write boilerplate code harms readability more than writeability.
Understanding what a single statement does is not important. Understanding intent is more important. And Go provides very few tools for that.
The reason for Go's popularity is mostly tooling. I like structural typing too. But other than that it is a very tedious language.
> other than that it is a very tedious language
And that is precisely why it is a good language for teams.
Yeah it's hard to write weirdly complicated code in Go. Stick Java or C# in the hands of a team and before you know it you've got typed factory service factories and every abstraction known to man.
What higerorderma said above applies to this ^ comment as well. It is another of those nonsensical things that keeps getting repeated all the time by some Go programmers.
That line of thinking promotes workarounds over long-term solutions. If your team has people who are abusing tools (like types, abstractions etc), the solution is to teach them how to not do that. The solution is NOT to remove the tool itself! That is like saying that a drilling machine is easier to misuse so "only the hammer is a tool for teams". Ugh.
Go is awesome because of things it _can_ do (it has the speed of C and enough libraries to write full-blown webservices with it). Please don't encourage the whole marketing agenda of touting its shortcomings as advantages. It doesn't work. All it does is it turns people away from the language.
Why do you think it gets repeated by Go developers?
I was a C# developer writing all these abstractions like every other C# developer does. It's idiomatic C#. OOP exists, you're encouraged to use it to abstract and decouple code.
Even if you try and strip all that away, you'll never escape it because every library you depend on is written this way. For example, try understanding
https://github.com/protobuf-net/protobuf-net
like I once did. The cognitive load is huge.
This is C#, this is what having a huge toolset results in.
A simple language with simple code isn't a shortcoming. It's an advantage. Overly complex code is a shortcoming.
I liked writing heavily abstracted code in C#, that's what I used to do every day. But I never liked reading other people's code because without the intimate knowledge of the code it is a huge mental load. Onboarding new developers into a code base they are unfamiliar with is also very time consuming.
In Go I realised that the only abstraction you need is an interface. There's no "abstraction first" mentality like there is in C#. You can define interfaces at any point in time, whereas in C# you must define an interface first, and if you don't then you end up going the OOP route with more abstraction.
And because of this small toolset, reading other people's code is easy. It all looks the same.
> Why do you think it gets repeated by Go developers?
Not by all, only by a subset of Go developers (until very recently I was one as well, maintained a moderate-size webservice). I specifically said "some Go developers" in my comment because that is what happens (at GopherCons, meetups and so on).
I don't really have a lot of C# experience, but I have read the same story about C++ from a lot of people. I think you will agree that abstractions are not a problem; abuse is. Now, I have worked on a C# project for a very short time where I saw the problems you described. But I do not follow the thinking that a tool was abused and hence it's the tool's fault. That is silly.
I was a Go developer for ~4 years where I struggled because I had to constantly spend mental effort trying to ignore the people in the Go community who, instead of focusing on what Go does well, spent all their time (in GopherCon, meetups and so on) rationalizing missing features from Go. Here I was - with this nice language that let me write webservices which are efficient and do not carry a heavy runtime (CLR, JVM etc). But it is severely missing features that can help you be more productive when programming. After all that, listening to people say "its feature, not a defect" just makes you lose all hope that things will get any better.
A better approach is what (thankfully) some folks from the Go team take. You will find blogs from the core team where it is explained that Generics aren't missing because they are outright bad. They are missing because the Go team had not yet[1] figured out how to do it the right way (TM). _That_ is fine, it is the truth.
Finally, you say that it just isn't possible to avoid writing complex code with a powerful language. Well, I have heard that as well, from the same people. But it isn't true, I have worked on a C++ project where I was careful to keep things simple[2] and had quite a few developers on the team (with only college-level C++ experience, most of them Go devs) contribute to the project without much trouble.
But hey, maybe it is personal preference then. I would rather use powerful tooling and put cognitive effort into keeping things simple - rather than using something else and feeling limited by it all the time.
[1] now they have, for the most part
[2] for example, it would use templates but avoid nesting more than one level, and so on. You get the idea.
I'm not trying to argue that C# is at fault by any means. But having a large toolset and trying to enforce very specific code quality when handling a code base touched by 100+ developers is very hard because using all these tools is very easy.
In Go it's quite difficult to make idiomatic overly complex code. If someone is making something really complex in Go I can easily see what they're trying to do and suggest an easier more idiomatic way of doing it.
In C# trying to understand some overly complex abstraction is a task in itself. Then trying to simplify it in a way that pleases everyone else who is of the mindset that these abstractions are good is another challenge. Most Go developers are of the opinion that there is only 1 or 2 idiomatic ways to solve most problems. But in C# there's so many more because of the more advanced language.
I'm not trying to justify the shortcomings of Go either. From day 1 of learning Go I was craving for generics and I read people saying "you don't need generics, just use interface{} and type switch.". It's ugly and it's one of the worst parts of Go, I absolutely hate seeing interface{} in a function.
But in the context of working in a team I've worked on sizeable projects in Go and bringing new devs of various skill levels onto the project has been a breeze. The code they write is the same as the code I write because it almost has to be, there's not many ways they can stray from the path laid out ahead.
I've worked professionally on C# and Go. I've seen overly abstracted C# code but I also have seen Go code that was clearly missing good patterns and ended up a ton of spaghetti. The difference that I see is that C# gives you the tools to do it well whereas Go lacks many tools to make a complex code base nearly as maintainable as C#.
> _But I never liked reading other people's code because without the intimate knowledge of the code it is a huge mental load._
But that's exactly where Go projects end up. Reading any 1-4 lines of Go is easy. Trying to mentally decode a 70-line function almost never is. It is verbose as hell. Eventually you go "a-ha! this goes through those two collections, filters one of them, maps the other and then combines the results through that algorithm". But you lost 20 minutes getting to that conclusion.
Compare this with OCaml or Elixir for example where such a function would literally be 10-15 lines.
(It's not an unique disadvantage to Go, mind you, but we're discussing it currently. Other languages have the same problem, e.g. C# and Java.)
> Yeah it's hard to write weirdly complicated code in Go
Just because Go lacks many forms of abstraction doesn't mean complex code will not be created with it. Quite the opposite: its lack of expressiveness will encourage "frameworks", elaborate encodings, code generation and various productivity aids. Look at what the lack of generics has done to the Kubernetes code base:
"The core team replaced a compile-time language feature that was missing (Generics) with their home-built runtime system"
https://medium.com/@arschles/go-experience-report-generics-i...
> Quite the opposite.
No. What you see in K8S is the exception rather than the norm in Go - but is the norm in other languages.
> What you see in K8S is the exception rather than the norm in Go - but is the norm in other languages.
I suggest that is because not many large applications have yet been built in Go and it is still relatively young. If Go was used more sparingly as a "domain-specific language for concurrent network services", then I might be inclined to agree with you, but it seems to be marketed and promoted as a general purpose language.
Go is over a decade old and basically powers the majority of web infrastructure right now (k8s, Docker, Traefik, Istio, Terraform, Cloudflare) and is used by Google, Uber, Twitch, SoundCloud, Dropbox, YouTube, SendGrid... these are not exactly backyard projects. 90% of all software written now falls into concurrent network services.
Kubernetes is infamous for being one of the worst examples of a Go codebase in the community. And that proves my point really: this codebase sticks out like a sore thumb because it's not idiomatic Go.
> Often short dense code with complex types is just really hard to read for your overage joe programmer.
I think people make the mistake of measuring how long it takes you to read x lines, when really you need to measure how long it takes to read x functionality.
There've been times when I replaced a 200 line class with 4 lines of Scala. Those lines were very much the kind of complexly-typed code that people attack Scala for. It took time and effort to understand what they did. But they were still a lot more maintainable than a 200 line class.
There is a middle way here. 4 terse lines still sucks. Make it 10-15 readable lines please.
I disagree with _only 1 way_ to write programs in Go.
Go is not an old Fortran (see [1] about "the only real structure is an array") so there is always choice between array-of-structures and structure-of-arrays.
[1]
https://www.ee.ryerson.ca/~elf/hack/realmen.html
Haskell (and C++ for that matter) can hide the difference with associated types (and templates).
PS
Speaking about arrays - I once read Go's specification for some obscure fun and realized that, a third of the way in, I was still reading about arrays and such, and not about _programming_.
> I do wish error handling and generics were part of the language... And a tuple type indeed.
On top of that I wish `null` wasn't and sum types + pattern matching were (and were used for error handling).
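For what it's worth, sum types can be roughly approximated in today's Go with a sealed interface plus a type switch, though without the compiler-checked exhaustiveness being wished for here (all names below are illustrative):

```go
package main

import "fmt"

// Result approximates a sum type: a value is either Ok or Err, and the
// unexported marker method "seals" the set of variants to this package.
type Result interface{ isResult() }

type Ok struct{ Value int }
type Err struct{ Msg string }

func (Ok) isResult()  {}
func (Err) isResult() {}

func describe(r Result) string {
	// A type switch is the closest Go gets to pattern matching, but
	// the compiler does not check that every variant is handled.
	switch v := r.(type) {
	case Ok:
		return fmt.Sprintf("ok: %d", v.Value)
	case Err:
		return "err: " + v.Msg
	default:
		return "unhandled variant" // a missed case is only caught at runtime
	}
}

func main() {
	fmt.Println(describe(Ok{Value: 7}))
	fmt.Println(describe(Err{Msg: "boom"}))
}
```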
I agree with the first sentence, and I found that to be one of the strongest points of Go: you can look at your colleague's code and be almost sure what it does; there just aren't hidden traps or clever hacks. It's always the same "dull" code, telling you in plain language what is happening.
I had the opposite experience. Go code is just like code in any other language: it can still be abused into an unholy mess. I have seen Go code with channels all over the place, making the program flow hard to follow, and functions that can panic deep down the stack when fed certain input.
Channels for me make Go harder to read than most languages. Once you encounter a channel when reading Go code, good luck understanding the flow from then on.
I actually wish gofmt was _more_ opinionated. Break my lines please. Prettier has spoiled me - write absolutely hideous looking code, and one save later it is exactly what I want almost every time.
I was very apprehensive about prettier at first, but I pulled it into a side project of mine and I am never looking back. I absolutely loathe people’s obsession over formatting code. Just let a formatter do it, don’t fuck with the defaults, and work on your business logic instead of bike shedding on perfectly placed spaces.
Totally agree, but you _need_ an iron-fisted formatter to make that work. Because otherwise you get an ever-changing quilt of every dev's style du jour, and while I don't think the specific formatting you use matters, I think sloppiness and inconsistency are actually harmful.
https://github.com/mvdan/gofumpt
Drop in replacement for `go fmt` and probably does most of what you want.
Thanks! I noticed the other day I was doing some manual formatting (putting short struct initializers on a single line), and the newlines mentioned as well.
> Running gofmt after gofumpt should be a no-op.
This looks perfect actually.
(2016)
The versioning and slice sorting have since been solved, by go.mod and sort.Slice respectively.
The problem with "Make it an error to not initialise a struct field" is you lose source compatibility when adding new struct fields.
> The problem with "Make it an error to not initialise a struct field" is you lose source compatibility when adding new struct fields.
If you allow the struct definition to specify a default value then the struct author can set default in the cases where there is a sensible default, and leave it compile-time incompatible in the cases where there really is no good default and the person doing the upgrade needs to make a decision.
That is already possible by exposing a `New` function and making the struct itself unexposed; are you suggesting syntax should be added to provide default values to struct fields? Because there is high resistance (which I agree with btw) to adding additional syntax and constructs to the language.
> That is already possible by exposing a `New` function and making the struct itself unexposed;
I thought Go didn't allow functions with default argument values either?
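Right, Go has neither default field values nor default function arguments; the convention is to bake the defaults into the `New` function itself. A minimal sketch (names are illustrative):

```go
package main

import "fmt"

// config's fields are unexported, so callers outside the package must
// go through New; that constructor is the only place defaults can live,
// since Go has no default field values or default arguments.
type config struct {
	host string
	port int
}

// New forces callers to supply the fields with no sensible default
// (here: host) and fills in defaults for the rest.
func New(host string) *config {
	return &config{
		host: host,
		port: 8080, // default chosen here, not in the struct definition
	}
}

func main() {
	c := New("example.com")
	fmt.Println(c.host, c.port) // prints: example.com 8080
}
```

Adding a new field with a default to `config` then stays source-compatible, while adding a parameter to `New` deliberately breaks callers who need to decide.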
Still no official debugger though. Delve is just full of bugs.
Just do as the Head of Marketing from previousEmployer suggested and "leave out the bugs"
Revolutionary thinking right there
That's next-level.
Was he being ironic, or serious? Love to hear more.
"Head of Marketing". He was probably serious.
delve (through vscode-go) has worked fine from my perspective
In Go, you import packages by URL. If the URL points to, say, GitHub, then go get downloads HEAD of master and uses that. There is no way to specify a version, unless you have separate URLs for each version of your library.
Go modules has solved this problem.
I find that in practice Go modules work very well. Certainly better than the alternatives and I've been through most of them.
Incrementing the major version of your module is very clunky though.
I don't believe the problem is fully solved yet with modules (there's still the V1/V2 problem), but it's a step in the right direction. I've been using modules in my current project (new codebase) since the start of the year and so far have run into no problems whatsoever when it comes to modules.
Mind you, a big plus, I think, is that Go doesn't have the ridiculous library ecosystem that others I've worked with (JS/Node, Java) have; I only use a handful of libraries at the moment (for the REST API, .env file handling, database interaction and logging, and some additional ones for development like mocking, neat diffs for tests, and code generation (xsd & swagger to code)).
> Mind you, a big plus, I think, is that Go doesn't have the ridiculous library ecosystem that others I've worked with (JS/Node, Java) have; I only use a handful of libraries at the moment (for the REST API, .env file handling, database interaction and logging, and some additional ones for development like mocking, neat diffs for tests, and code generation (xsd & swagger to code)).
I think this is a great selling point. Probably due to the stdlib being so extensive.
The OP is from 2016
@dang, can you fix the title?
Other comments have claimed that Go is intended for teams, but my personal experience shows that Go code is easier to maintain than Haskell code, even for a single developer. Though I've coded in Haskell for many years, I never reached an expert level. So, when I went back to an application after a 6-month break, my Haskell skills were rusty, and I struggled to understand what I had written. What's that pragma `MultiParamTypeClasses, TypeFamilies`? What does `<*>` mean in that context? And so on. A similar break happened last year, and I still had a hard time getting back to fluent Haskell, while modifying my Go code was still easy.
Still, I agree with some of the bad points. Like him, I would have liked more types with a stricter compiler that helps refactoring (sum types, zero values).
The official documentation of the language is also very disappointing, as noted by Eric S. Raymond in his conversion notes[^1] from Python to Go.
The author of this blog post criticizes the tooling for lacking features that in fact do exist[^2], but are hard to find due to the poor documentation (split over a shallow "tour of Go", blogs, and other official docs, with no links between them).
[^1]:
https://gitlab.com/esr/reposurgeon/blob/master/GoNotes.adoc
[^2]:
https://golang.org/pkg/runtime/trace/
I think comparing Go to Haskell at all is slightly bizarre, really. When I think Haskell, enterprise maintainability coding isn't exactly what comes to mind.
I really don't like Go, but I won't deny that it basically does what it's supposed to do.
> When I think Haskell, enterprise maintainability coding isn't exactly what comes to mind.
One day hopefully
I really agree with some points. In my project, we moved from Java/Spring because it was just too much magic happening behind the scenes. We wanted something that was just clear and straight to the point. We should be able to read the code and it should do exactly that, not something else because you annotated the class.
I just wish it wasn't so bloated with error handling. It's difficult to see the actual code among all the `if err != nil` checks (even though GoLand does a pretty good job at hiding them). Something like this?
f := os.Open("file") or { return err }
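For comparison, here's a sketch of what that one-liner has to replace in today's Go (`readConfig` is a made-up example function, not real API):

```go
package main

import (
	"fmt"
	"os"
)

// readConfig shows the status quo: every fallible call needs its own
// explicit nil check before the result can be used.
func readConfig() (*os.File, error) {
	f, err := os.Open("file")
	if err != nil {
		return nil, err
	}
	return f, nil
}

func main() {
	if _, err := readConfig(); err != nil {
		fmt.Println("open failed:", err)
	}
}
```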
After a couple of years with Scala, I'm also really missing proper FP, pattern matching, optional types, immutability, etc. Still, I'm pretty happy with Go and it has made me more excited about coding again.
> when I approach a programming problem, I first think about the types and abstractions that will be useful; I think about statically enforcing behaviour;
That's it. A large majority of people don't think in those terms.
Go tries to be pragmatic and hit a decent balance between productivity and language features.
This is from 2016 and it's not accurate, especially about tooling.
I did not realize that, thanks. Perhaps the year of publication should be included in the link title?
Things have moved on in the Haskell world too, especially around garbage collection.
Go is like SQL in that it's pragmatic for TEAM work, even when your teammates have little programming background, because Go code is easy to follow and extend (copy, paste, and modify).
In my case, my team is composed mostly of DBA experts, and they have no problem following Golang code.
So your programming language of choice should be based on your team.
> So your programming language of choice should be based on your team.
While that is a good thing to keep in mind when making tech decisions, make sure not to tunnel-vision on it alone.
More about that here
https://news.ycombinator.com/item?id=24880273
Yeah, especially when it's not _only_ about preferences, but also about other strong reasons that help the team decide to pick one language or another for the job(s).
I would like to use Go, Haskell, Rust or whatever that I like (or my team as well) but this is just one piece of the whole cake IMO.
Is the stuff about GHC's garbage collector still valid?
(I have no idea, but I found this
https://www.well-typed.com/blog/2019/10/nonmoving-gc-merge/
"Low-latency garbage collector merged for GHC 8.10", which is more recent than the OP blog post)
Is the “backwards compatibility at all costs” point about Go still valid?
https://blog.golang.org/using-go-modules
...it seems like you can specify versioned deps now
> whereas Python’s significant whitespace (which is there for exactly the same reason: enforcing readable code) has been much more contentious across the programming community.
Because with gofmt there's nothing to argue about, it is what it is. With Python’s significant whitespace there was plenty of rope to play tug-o-war with: tabs or spaces? How many? That was enough to fuel the flame wars.
I've been using generics with go2go and am looking forward to replacing generated code when they are officially released.
Serious question: I've been toying around with Go, but aside from recently having to write a tree for myself, what are you using generics for?
One of the things I really lean into with Go is that your program tightly fits your problem. Where are generics a limitation for you?
Libraries and library-like code. Any code that fetches data from generic storage/protocols.
Prime examples:
- http requests. For an API it's almost always a generic request parametrized by some type.
For example, your API always returns `{result: ...some data..., nextPageToken: ...}`. Well, that's a `PagedResponse<...the type of data...>`
- cache
Caches store objects. In Go, any `.get` from a cache returns an `interface{}` that you have to type-assert, because you can't write a `Cache<T>`.
and so on.
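To make the cache point concrete: with the type parameters prototyped in go2go (the syntax that later shipped in Go 1.18), a typed cache is a short sketch. The `Cache` type here is hypothetical, not a real library:

```go
package main

import "fmt"

// Cache is a hypothetical generic cache: Get returns a T directly,
// so no type assertion is needed at the call site.
type Cache[T any] struct {
	items map[string]T
}

func NewCache[T any]() *Cache[T] {
	return &Cache[T]{items: make(map[string]T)}
}

func (c *Cache[T]) Put(key string, v T) { c.items[key] = v }

func (c *Cache[T]) Get(key string) (T, bool) {
	v, ok := c.items[key]
	return v, ok
}

func main() {
	c := NewCache[int]()
	c.Put("answer", 42)
	if v, ok := c.Get("answer"); ok {
		fmt.Println(v) // prints 42, already typed as int
	}
}
```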
Re: HTTP requests, you mean the exact HTTP response, which in your case is a JSON object that can be parsed via a different package; the two should not be conflated, and the HTTP package should not be polluted with assumptions about what (if any) data is being passed.
Re: caches, what kinda caches do you mean? For the most simple use case you have maps, which in Go are already generic. Of course, anything more advanced would be greatly helped with generics, since at the moment they (I presume) store `interface{}` types and the consumer has to cast them back to the real types. Or they use code generators, which is also pointed out in the article.
To add some potential goodness coming from generics: Option types to avoid nil, Either types to replace the value/error tuples (which the article points out is a weird outlier, because you don't have tuples elsewhere in the language).
Those make me wonder if parts of the language and codebases written therein would actually improve compared to the weirdness of multiple return values and the like.
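As a sketch of what that could look like once generics land (a hypothetical `Option` type, using go2go/Go 1.18 syntax):

```go
package main

import "fmt"

// Option is a sketch of an optional value built on type parameters;
// the zero Option is None.
type Option[T any] struct {
	value T
	ok    bool
}

func Some[T any](v T) Option[T] { return Option[T]{value: v, ok: true} }
func None[T any]() Option[T]    { return Option[T]{} }

// GetOr returns the wrapped value, or a fallback when the Option is empty.
func (o Option[T]) GetOr(fallback T) T {
	if o.ok {
		return o.value
	}
	return fallback
}

func main() {
	fmt.Println(Some(3).GetOr(0))     // prints 3
	fmt.Println(None[int]().GetOr(0)) // prints 0
}
```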
> and the HTTP package should not be polluted with assumptions about what (if any) data is being passed.
I'm not talking about the HTTP package itself. In Java and C# you do something along the lines of
    // Java
    CompletableFuture<PagedResult<Contract>> getContracts(...);
    CompletableFuture<PagedResult<Client>> getClients(...);
    CompletableFuture<PagedResult<Book>> getBooks(...);

    // C#
    JsonSerializer.Deserialize<PagedResult<Contract>>(responseBody);
    JsonSerializer.Deserialize<PagedResult<Client>>(responseBody);
    JsonSerializer.Deserialize<PagedResult<Book>>(responseBody);
And that's basically it. With Go you end up having fifteen identical PagedResult types for every single type that can be returned from the API because you can't parametrize anything:
    // Go
    type ContractsResult struct {
        Result        []*entities.Contract
        NextPageToken string
    }

    type ClientsResult struct {
        Result        []*entities.Client
        NextPageToken string
    }

    type BooksResult struct {
        Result        []*entities.Book
        NextPageToken string
    }
> Or they use code generators, which is also pointed out in the article.
Code generators are just a band-aid for glaring holes in the language. Worse still, I don't know if you can even specify code-generating tools in your go.mod. For example, wire's installation instructions say "you need to install wire globally and have it on your $GOPATH" [1]. So your `go build` will just fail mysteriously until you have all the necessary tools installed.
> Of course, anything more advanced would be greatly helped with generics
That... that is exactly what I'm talking about.
> Option types to avoid nil, Either types to replace the value/error tuples
Indeed. I forgot about those :) Yup, that would be a great use case for generics.
[1]
https://github.com/google/wire
Really late but... I use them to avoid type assertions. Most recently I need to operate on channels of all types.
Go was designed for teams and by a very opinionated designer, Rob Pike, regarding both the language features and the programming experience (tooling, etc), akin to acme/plan9.
I do enjoy this experience. Very text oriented. It’s subjective. And it has served me well.
While the author has a very Haskell-centric view and his comparison is a bit biased, I completely relate with some of his points.
"Use it more" is not an answer. If you have found a good way to produce software and it has served you well -- even better if you had other ways in the past and your latest one is objectively better! -- then you are not under any obligation to "give it a chance".
Many people longed for certain ways of doing things and one day they found the tools that enable them. That's quite fine. No need to insert platitudes in there which can't ever be satisfied (like "use it more", especially if you are not inclined to).
> The way in Go to handle possibly-failing functions is to have multiple return values: an actual result, and an error. If the error is nil, then the actual result is sensible; otherwise the actual result is meaningless. This means you can forget to check the error and use a bogus result and, because there are no compiler warnings (another wtf), you will know nothing of this until things fail at runtime.
? You can't forget to check the error: leaving `err` unused is a compile error, unless you forcibly mute it by assigning to `_`.
While you're correct, you can still avoid handling an error and not get any feedback from the compiler or linters by doing something like:
    val, err := something()
    val2, err := somethingElse()
    if err != nil {
        // etc
    }
You can - and all the examples usually do - reuse and shadow an 'err' variable, and tooling won't complain as long as something is done with it at least once.
If Go wants to enforce not ignoring errors, it still has a few holes like this to fix.
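To make the hole concrete, here's a complete program in which the first error is silently lost; `step1` and `step2` are made-up helpers, and this compiles with no warnings:

```go
package main

import (
	"errors"
	"fmt"
)

func step1() (int, error) { return 1, errors.New("step1 failed") }
func step2() (int, error) { return 2, nil }

func main() {
	// err from step1 is overwritten by step2 before it is ever checked;
	// the compiler only requires that err be used at least once.
	val, err := step1()
	val2, err := step2()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(val, val2) // prints: 1 2 -- the step1 failure is lost
}
```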
I see now. That is indeed one of the things I hated about Go: having to reuse `err` (or, worse, reusing it by mistake) several times in a single function.
Nothing about concurrency??
The concurrency models of Haskell and Go are very similar, so it's possible that nothing stood out in particular.