My overall opinion of Go is fairly mid. The error-handling is a huge source of pain for me and it has many awful footguns, but also several virtues, and I appreciate its small number of language features.
(update: Go actually has generics now and that makes a big difference to my opinion, but this article hasn't been fully updated because I haven't used them yet)
Go's error handling approach is by far the worst thing about it. There are four main problems with it:
Failure is signaled by returning error values instead of raising exceptions; for example `os.Open` has the signature `Open(name string) (*File, error)`. You're supposed to catch both the File pointer and the error and check the error manually before doing anything with the file. Here's the boilerplate:
file, err := os.Open("file")
if err != nil {
    return err
}
You have to do this after *every single function call that might fail*. (It can be written with the function call and condition on the same line, but that's questionable style.) This boilerplate can easily take up 1/4 of a Go program.
And the errors themselves carry only the error message, not any sort of context, so if an error gets returned through a stack of 5 functions, you'll see "no such file or directory" with no information about what file or what function that came from. The github.com/pkg/errors package has been created by the community (no, it's not even stdlib) to make it easier to get context with your errors.
It enables this:
if err != nil {
    return errors.Wrap(err, "When doing X") // The returned error will be "When doing X: ..."
}
That mostly solves the information problem, but we've still got two other problems to look at. The next is what happens if you *forget* to handle an error: *silent runtime failures*, very similar in principle to the Javascript behavior of `{} - []` being `NaN`, something most Gophers would rightly ridicule. And the compiler doesn't even help you out: it doesn't print a warning if you leave an error unchecked. (If you're catching from a function that returns multiple values, it'll make sure you have the right number of variables, but other than that the compiler is silent.)
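A minimal sketch of what that looks like (the path here is hypothetical): the error from `os.Remove` is simply discarded, and neither the compiler nor the runtime says a word.

package main

import (
    "fmt"
    "os"
)

func main() {
    // os.Remove returns an error, but nothing forces us to look at it.
    os.Remove("/path/that/does/not/exist")
    fmt.Println("done") // prints as if the removal succeeded
}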
I think the fact that 90% of the time what you do in these incessant error handling blocks is just return the error (or `log.Fatal` if it's top-level) is a pretty good argument that propagating errors upward should be the default.
The last problem is that it breaks composability. If `func1` returns a `Thing` and an `error` and `func2` takes a `Thing`, you can't just do `func2(func1())` - you have to call `func1` and catch the error and check it manually before deciding whether to call `func2`. Similarly, helper functions are made expensive because you can't just have a function that concisely wraps another and doesn't have to know about its possibility of failure; you have to catch and return the error at *every step* to stop it from getting swallowed (unless the return types match exactly).
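A small sketch of the point, using hypothetical `func1`/`func2` matching the description above:

package main

import (
    "fmt"
    "log"
)

func func1() (int, error) {
    return 42, nil
}

func func2(n int) {
    fmt.Println(n)
}

func main() {
    // func2(func1()) // does not compile: multiple-value func1() in single-value context
    n, err := func1()
    if err != nil {
        log.Fatal(err)
    }
    func2(n)
}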
The type system is pretty primitive: no parameterized types (generics; although I hear those will be added in an upcoming version of Go, which should significantly improve my opinion of the language), no product types (multiple return values are a special case), and no sum types.
It does have a fairly enlightened way of thinking about objects though. It uses concise structs and struct embedding for inheritance, and methods are basically just functions that take the struct as an argument with a special syntax. Polymorphism is in the form of interfaces, which consist of only method signatures and allow any type that implements them to be used as the interface type.
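A minimal sketch of what that looks like (all the names here are made up for illustration):

package main

import "fmt"

// A plain struct.
type Named struct {
    Name string
}

// A method is essentially a function whose receiver is the struct.
func (n Named) Hello() string {
    return "hi, I'm " + n.Name
}

// User embeds Named, so it gets Hello for free ("inheritance" by embedding).
type User struct {
    Named
}

// Any type with a matching Hello method satisfies this interface implicitly.
type Greeter interface {
    Hello() string
}

func main() {
    var g Greeter = User{Named{Name: "Rex"}}
    fmt.Println(g.Hello())
}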
Go can *sort of* get around the lack of generics with `interface{}` since an empty interface is implemented by every type; this is how `sort.Slice` in the standard library works (see below for why it can't take `[]interface{}`). But that sacrifices type-checking: since it only requires that its argument satisfy `interface{}`, Go can't even check at compile-time that it's getting a slice, so `sort.Slice` compiles successfully and then panics at run-time if you use it on something that isn't a slice.
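For example (a sketch; the panic only shows up if you uncomment the last call):

package main

import (
    "fmt"
    "sort"
)

func main() {
    nums := []int{3, 1, 2}
    // sort.Slice only demands an interface{}, so any first argument type-checks.
    sort.Slice(nums, func(i, j int) bool { return nums[i] < nums[j] })
    fmt.Println(nums) // [1 2 3]

    // This also compiles, but panics at run time because 42 isn't a slice:
    // sort.Slice(42, func(i, j int) bool { return false })
}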
A subtler issue is that type information is lost if you pass a value to a function that takes an interface, and the function does something to the value and returns it. It has to be returned as the interface type, since that's all the function knows about its argument - meaning the caller loses the information of which specific type it is, even if it's provably the same object. Unlike for example Haskell, Go's type checker is not smart enough for a specific type to "reappear" on the other side of a function that takes an interface. This isn't a big issue since you can basically get it back with a type assertion on the returned value, but it's inconvenient and the type assertion can't be type-checked.
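A sketch of that, using `fmt.Stringer` and `time.Duration` from the standard library:

package main

import (
    "fmt"
    "time"
)

// passThrough only knows its argument as a fmt.Stringer.
func passThrough(s fmt.Stringer) fmt.Stringer {
    return s
}

func main() {
    d := 5 * time.Second // time.Duration, which has a String method
    out := passThrough(d)
    // out is statically just a fmt.Stringer; the compiler has forgotten it's a Duration.
    d2, ok := out.(time.Duration) // type assertion, only checked at run time
    fmt.Println(d2, ok)           // 5s true
}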
Null is not dealt with. Pointers, slices, maps, and channels can all be `nil`, with no compile-time checking and disastrous results for a mistake: dereferencing a nil pointer or writing to a nil map panics at runtime, and operations on a nil channel block forever.
The non-pointer ones are easy mistakes to make because declaring one of those types, for example `var nums []int`, leaves it nil; you're supposed to use `make` to initialize them if you don't want nil.
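A short sketch of the difference: a nil slice happens to tolerate `append`, but a nil map panics on the first write.

package main

import "fmt"

func main() {
    var nums []int         // nil slice
    nums = append(nums, 1) // fine: append allocates a backing array
    fmt.Println(nums)      // [1]

    var ages map[string]int // nil map
    fmt.Println(ages["bob"]) // reading a nil map is fine (returns the zero value)
    // ages["bob"] = 30      // panics: assignment to entry in nil map
}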
There's also a gotcha with nil interfaces:
package main

import "fmt"

type Cat struct{}

func (c Cat) Speak() {
    fmt.Println("Meow")
}

type Animal interface {
    Speak()
}

func main() {
    var c *Cat
    listen(c)
}

func listen(a Animal) {
    if a == nil {
        return
    }
    a.Speak()
}
This code will compile and then panic at runtime with `panic: value method main.Cat.Speak called using nil *Cat pointer`.
But how is that possible? I checked if it was nil!
In brief, interfaces are never nil if they have a defined concrete type, even if they hold a nil pointer of that type.
There's also no covariance: a struct whose method returns a pointer to some concrete type isn't considered to implement an interface whose corresponding method returns an interface that the concrete type satisfies, and a `[]struct{}` can't be passed where a `[]interface{}` is expected.
The Go FAQ discusses this and explains that they did this because "If two methods return different types, they are not doing the same thing. Programmers who want covariant result types are often trying to express a type hierarchy through interfaces. In Go it's more natural to have a clean separation between interface and implementation."
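A sketch of the second half of that complaint, with the element-by-element conversion you're forced to write instead:

package main

import "fmt"

type Speaker interface {
    Speak() string
}

type Robot struct{}

func (Robot) Speak() string { return "beep" }

func greetAll(speakers []Speaker) {
    for _, s := range speakers {
        fmt.Println(s.Speak())
    }
}

func main() {
    robots := []Robot{{}, {}}
    // greetAll(robots) // does not compile: cannot use robots ([]Robot) as []Speaker

    // The workaround: copy into a new slice of the interface type.
    speakers := make([]Speaker, len(robots))
    for i, r := range robots {
        speakers[i] = r
    }
    greetAll(speakers)
}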
Go uses braces to mark blocks, but thankfully doesn't require semicolons or parentheses around conditions. Channels and goroutines get intuitive dedicated syntax: `go func()` to launch a goroutine running `func`, `channel <- value` to send on a channel, and `value := <-channel` to receive.
It's unfortunate that all the binary operators and syntax features like indexing and `range` are special magic for built-in types. There's no interface you can implement to use them on custom or library types. `time.Time` has methods `Add` and `Sub` instead of `+` and `-`, `After` and `Before` instead of `<` and `>`, etc.
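For instance, with `time.Time` everything goes through methods:

package main

import (
    "fmt"
    "time"
)

func main() {
    now := time.Now()
    later := now.Add(2 * time.Hour) // no now + 2*time.Hour
    fmt.Println(later.Sub(now))     // 2h0m0s, not later - now
    fmt.Println(now.Before(later))  // true, not now < later
}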
Go doesn't have a conditional operator or any other type of control-flow-as-expression. I've been in a lot of situations, like with CSV, where I have a long row of values:
field1, field2, field3,
And a lot of them need to be converted to strings, but can't be converted inline without a conditional operator, because they're a type like `null.Time` which requires null checking. So I have to do the logic before writing the CSV row and use a temporary variable for the string representation. This makes my code not only bigger but *less clear*, since the formatting of a value is separated from the place it's used.
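A sketch of the workaround, using the standard library's `sql.NullTime` as a stand-in for `null.Time` (the field names and row contents are made up):

package main

import (
    "database/sql"
    "fmt"
    "time"
)

func main() {
    deletedAt := sql.NullTime{Time: time.Now(), Valid: true}

    // What I'd like to write inline in the row: deletedAt.Valid ? format(deletedAt.Time) : ""
    var deletedStr string
    if deletedAt.Valid {
        deletedStr = deletedAt.Time.Format(time.RFC3339)
    }

    row := []string{"42", "alice", deletedStr}
    fmt.Println(row)
}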
The relevant FAQ section:
There is no ternary testing operation in Go. You may use the following to achieve the same result:
if expr {
n = trueVal
} else {
n = falseVal
}
The reason `?:` is absent from Go is that the language's designers had seen the operation used too often to create impenetrably complex expressions. The if-else form, although longer, is unquestionably clearer. A language needs only one conditional control flow construct.
The if/else does make this single construct clearer (I guess), but if it takes up 5 times as much space, you can fit less code on the screen, which harms readability.
And if a language needs only one conditional control flow construct, then why does Go have `switch`?
A function that returns two values has to be called and caught on its own if you only want one value. You can't do `f1(f2()[0])` because you'll get `multiple-value f2() in single-value context` even if f1 takes one argument and it's the same type as f2's first return value.
Curiously, while Go does support composing functions that return multiple values with other functions that accept the same types, this only works as a special case, not with more arguments. For example,
func main() {
    f1(f2())
}

func f2() (int, int, int) {
    return 4, 5, 6
}

func f1(a int, b int, c int) {
    fmt.Println(a, b, c)
}
works, but
func main() {
    f1(f2()..., 6)
}

func f2() (int, int) {
    return 4, 5
}

func f1(a int, b int, c int) {
    fmt.Println(a, b, c)
}
Is a syntax error. `...` is Go's variadic operator, equivalent to Python's `*`, except not. Why implement this as a confusing special case instead of letting it work the way Python does?
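For reference, here's what `...` actually does, and the restriction that makes the example above fail (the `sum` helper is made up):

package main

import "fmt"

func sum(nums ...int) int {
    total := 0
    for _, n := range nums {
        total += n
    }
    return total
}

func main() {
    values := []int{1, 2, 3}
    fmt.Println(sum(1, 2, 3))   // 6: normal variadic call
    fmt.Println(sum(values...)) // 6: spreading an existing slice
    // fmt.Println(sum(values..., 4)) // does not compile: ... may only follow the final argument
}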
Go requires variables to be declared, which makes little sense when the compiler also finds unused vars, but oh well. And there are two syntaxes for it: `var val = 5` or `val := 5`. Most of the time they're interchangeable, but there are subtle differences:
This doesn't compile, because `x` is redeclared:

var x = 5
var x, y = getMeTwoInts()
But this works:
var x = 5
x, y := getMeTwoInts()
So while I generally prefer the more concise `:=`, there are cases where you *have* to use `var`, and we'd certainly prefer to keep our style consistent... So basically it's an area ripe for style disagreements where `go fmt` doesn't have an opinion.
Most array operations are extremely verbose.
(With the introduction of generics, this stuff will get much better. I haven't checked what the status is now.)
For example, reversing a slice:

for i := len(items)/2 - 1; i >= 0; i-- {
    opp := len(items) - 1 - i
    items[i], items[opp] = items[opp], items[i]
}
As an additional downside of this not being a built-in oneliner, every time you type this out there's a chance you'll make a mistake, and it'll probably be a logic bug that'll take at least a good five to ten minutes to track down if it's part of a big project.
And checking whether a slice contains an element:

var found bool
for i := range items {
    if items[i] == item {
        found = true
        break
    }
}
Yes, that's literally what you have to do.
For a language that's supposedly so pragmatic, this is inexcusable.
And without generics, you can't even really implement these yourself like you could in, say, OCaml (another language sorely lacking in them, although not this bad). They *can* be implemented using reflection, but that of course sacrifices type checking as well as incurring a huge performance cost (I've heard reflection is one of the slowest things in Go).
https://github.com/thoas/go-funk
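For what it's worth, with the generics that have since landed (Go 1.18+), a type-checked version is finally expressible; a minimal sketch:

package main

import "fmt"

// Contains works for any comparable element type, with full type checking.
func Contains[T comparable](items []T, target T) bool {
    for _, it := range items {
        if it == target {
            return true
        }
    }
    return false
}

func main() {
    fmt.Println(Contains([]int{1, 2, 3}, 2))       // true
    fmt.Println(Contains([]string{"a", "b"}, "c")) // false
}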
One of Go's main marketing points is "concurrent by design", and it's true - it's probably the best language for concurrency except for Rust. It uses green threads called goroutines which have a great syntax and can do parallel processing as easily as concurrent IO. Communication between them is done with channels, a flexible message queue system that supports synchronous and asynchronous send and receive, selecting on multiple channels with the built-in `select` statement, and an option to be buffered so a channel can hold a few pending messages before sending blocks.
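A small sketch of those pieces together: two goroutines, a buffered channel, and a `select` with a timeout.

package main

import (
    "fmt"
    "time"
)

func main() {
    results := make(chan string, 2) // buffered: two sends can complete before any receive

    go func() { results <- "from goroutine 1" }()
    go func() { results <- "from goroutine 2" }()

    for i := 0; i < 2; i++ {
        select {
        case msg := <-results:
            fmt.Println(msg)
        case <-time.After(time.Second):
            fmt.Println("timed out")
        }
    }
}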
One of the things I like about channels is how their semantics correspond to Unix pipes. Receiving from a closed channel returns the zero value immediately, sending to a closed channel panics (SIGPIPE), and you can survive it with a `defer`/`recover` pattern, which is reminiscent of a signal handler.
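A sketch of those semantics:

package main

import "fmt"

func main() {
    ch := make(chan int)
    close(ch)

    v, ok := <-ch      // receiving from a closed channel returns immediately
    fmt.Println(v, ok) // 0 false

    func() {
        defer func() {
            if r := recover(); r != nil {
                fmt.Println("recovered:", r) // recovered: send on closed channel
            }
        }()
        ch <- 1 // sending on a closed channel panics
    }()
}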
Some of the other concurrency stuff is also amazing, like `sync.WaitGroup`.
My only notable gripe with channels is their behavior when `nil` (every send or receive on a nil channel blocks forever). I've also heard people complain that there's no way to have an *infinitely* buffered channel that never blocks; it sounds like a valid criticism to me, but I've never run into it.
For managing resources, Go has `defer`, which queues a function call for execution when the current function returns. Deferred calls are run even if the function panics. The typical idiom is to `defer file.Close()` after opening it, etc.
It's a very flexible solution; it can be used with any function call and doesn't require implementing an interface. I also like that `defer` doesn't indent the area the resource is used for. Python's equivalent does, and it often feels like a needless indent to me.
But a notable downside is that you can't have it execute *before* the current function exits. For example, if you open a file, defer closing it, do your stuff with it, and then do the same with a second file, the first one stays open until you're done with the second. Zig's and Hare's `defer` are block-scoped, so each resource is closed as soon as you're done with it.
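A sketch of that (the file names and `processFile` are hypothetical):

package main

import (
    "log"
    "os"
)

func processFile(f *os.File) {
    // hypothetical work with the file
}

func processBoth() error {
    f1, err := os.Open("file1.txt")
    if err != nil {
        return err
    }
    defer f1.Close() // runs when processBoth returns, not when we're done with f1
    processFile(f1)

    f2, err := os.Open("file2.txt")
    if err != nil {
        return err
    }
    defer f2.Close()
    processFile(f2)

    return nil // only here do both Close calls run, in reverse order
}

func main() {
    if err := processBoth(); err != nil {
        log.Fatal(err)
    }
}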
Go was the first language I learned with a built-in code formatter, which I fell in love with; it even inspired me to make a meme about it.
There are also some great non-built-in tools: goimports is an improved version of `go fmt` that can also handle your imports for you, adding missing ones, removing unused ones, and even sorting them automatically. And there are some really useful third-party vetting and linting tools, like golangci-lint.
My only criticism in this area: the compiler won't compile a program with an unused import or variable (although it does allow unused functions, function parameters, and global variables). It's great to have the compiler *warn* us about these things, but *preventing* us from compiling is just absurd. All the time in debugging I comment something out and it results in an unused variable or import, and Go refuses to compile. Ugh... gotta go back into the source and comment out the declaration too.
The Go FAQ actually addresses this:
First, if it's worth complaining about, it's worth fixing in the code. (And if it's not worth fixing, it's not worth mentioning.) Second, having the compiler generate warnings encourages the implementation to warn about weak cases that can make compilation noisy, masking real errors that should be fixed.
This reasoning is so obviously wrong it's hard to believe it's sincere. Yes, it *is* worth fixing, which is why I didn't `git add`! It's *not* worth delaying my test and forcing me to fix code that *will never be run outside of this test* so that it takes me longer to get the final commit ready and fixed. This justification is written as if debugging isn't a thing.
And their point about "encourages the implementation to warn about weak cases..."? Arguing that the language spec should treat unused code as fatal errors because treating it as non-fatal will lead to people writing Go compilers that will output so much unimportant noise that the unused code warnings will get ignored is such an insane stretch that you might as well say Go shouldn't allow programs to log to stderr unless they're crashing because that will lead to people writing programs that generate so much log volume that important stuff will get ignored.
Compilers are for verifying that code works and making it executable; *linters* are for things like this. In fact, `go vet` comes with Go and does this exact thing.
Go has the best documentation of any language I've seen, as well as the best CLI tool for generating and viewing it. It's thorough, clear, easy to find exactly what I'm looking for (for example, I can run `go doc library.function` where `function` is actually a method of a struct in that library and it gives me the method docs), and I can also view the source of a symbol in a library using `go doc -src`, without having to go find the source myself.
Go has been around for less than half as long as Python, but its standard library and ecosystem seem nearly as vast, if not equally so. You pretty much can't find anything there isn't a Go library for. The community is also much better than the Python community at documenting their packages.