The things that originally attracted me to Go were the concurrency model, the interface system, and the speed. I was kind of meh about static typing (and definitely meh about the `interface{}` escape hatch) but figured the benefits might be worth the price?
But it hasn’t really turned out that way. I still like the concept of having no locks exposed to the user (safely hidden in the channel internals) à la Erlang or Clojure. But I’m not going to pay for it with `err` everywhere, static types, a profusion of channels, and a lack of generics.
Seriously, all of Go’s synchronization choices seem to come down to “use another channel.” Keeping track of so many channels among a few stages of processing is a whole new layer of heavy lifting, and most of it would be unnecessary if channels could be “closed with error,” with the error then collected at the UI end.
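To make that channel tax concrete, here’s a minimal sketch (the `parseAll` stage and its behavior are invented for illustration): since `close()` can’t carry an error, every fallible stage grows a second channel just to report one.

```go
package main

import (
	"errors"
	"fmt"
)

// parseAll is a hypothetical pipeline stage. It emits results on one
// channel, and because close() carries no error, it needs a second
// channel just to say why it stopped.
func parseAll(lines []string) (<-chan int, <-chan error) {
	out := make(chan int)
	errc := make(chan error, 1) // buffered so the stage never blocks on it
	go func() {
		defer close(out)
		defer close(errc)
		for _, l := range lines {
			if l == "" {
				errc <- errors.New("blank line")
				return
			}
			out <- len(l)
		}
	}()
	return out, errc
}

func main() {
	out, errc := parseAll([]string{"a", "bb", ""})
	for n := range out {
		fmt.Println(n)
	}
	if err := <-errc; err != nil {
		fmt.Println("stage failed:", err)
	}
}
```

Multiply that `errc` by every stage in the pipeline and you get the bookkeeping I’m complaining about.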
Then there is the whole problem of generics. The runtime clearly has them: basically, anything creatable through `make()` is generic. But there’s no way for Go code to define new types that `make` can create generically. There’s no way for Go code to accept a type name and act on it as a type, either.
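To illustrate the asymmetry (the `Stack` syntax in the comments below is hypothetical; nothing like it exists):

```go
package main

import "fmt"

func main() {
	// The built-in composite types are effectively generic:
	// make works for any element, key, and value types.
	nums := make([]float64, 0, 8) // slice of float64
	ages := make(map[string]int)  // map from string to int
	jobs := make(chan string, 2)  // buffered channel of string

	// But user code can't join in. There's no way to write
	//   type Stack(T) ...     // hypothetical syntax
	//   s := make(Stack(int)) // and have make build it
	// so that door is open only to the runtime's own types.

	nums = append(nums, 1.5)
	ages["gopher"] = 3
	jobs <- "build"
	fmt.Println(nums, ages, <-jobs)
}
```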
You can pretend to hack around it with `interface{}` and runtime type assertions, but you lose all of the static checking. The compiler itself knows that a `map[string]int` uses strings as keys and can only store integers, but an `interface{}`-based pseudomap won’t fail until runtime.
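Here’s a small sketch of that failure mode:

```go
package main

import "fmt"

func main() {
	// The compiler knows this map's types and rejects misuse:
	typed := make(map[string]int)
	typed["a"] = 1
	// typed["b"] = "oops" // compile error: cannot use "oops" as int

	// The interface{} pseudomap happily accepts anything...
	loose := make(map[string]interface{})
	loose["a"] = 1
	loose["b"] = "oops" // compiles fine

	// ...and the mistake only surfaces at runtime.
	sum := 0
	for k, v := range loose {
		n, ok := v.(int)
		if !ok {
			fmt.Printf("key %q holds a %T, not an int\n", k, v)
			continue
		}
		sum += n
	}
	fmt.Println("sum:", sum)
}
```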
To get the purported advantages of static typing, the data has to be made to fit the types that are already there.
I’d almost say it doesn’t matter to my code, but it seems to be a big deal for libraries. How they choose their data layout affects callers and integration with other libraries, and I don’t want to write a tall stack of dull code to transform between one layout and the other.
The static types thing, I’m kind of ambivalent about. If the compiler can use the types to optimize the generated code, so much the better. But it radically slows down prototyping by forcing decisions earlier. On balance, it doesn’t seem like a win or a loss. Especially with all the performance-optimization work centering on dynamic languages: techniques refined (to a certain extent) in Java, C#, and JRuby are now flowing into JavaScript. It’s getting crazy out there. I don’t know if static typing is going to hold onto its edge.
I think that brings us back around to `err`. Everywhere. I really want Lisp’s condition system instead. It seems like a waste to define a new runtime, with new managed stacks, that doesn’t have restarts and handlers. With the approach they’ve chosen, half of Go code is solving the problem, and the other half is checking and re-returning `err` values.
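The shape I mean, in a hypothetical `copyConfig` (the function and filenames are invented; the APIs are the standard library’s):

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// copyConfig shows the proportions of typical Go code: a few lines of
// actual work buried in check-and-re-return ceremony.
func copyConfig(src, dst string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	if _, err := io.Copy(out, in); err != nil {
		return err
	}
	return out.Sync()
}

func main() {
	if err := copyConfig("app.conf", "app.conf.bak"); err != nil {
		fmt.Println("copy failed:", err)
	}
}
```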
Go isn’t supposed to have exceptions, but if you can live with its limitations, `recover` is a thing. (It’s still not Lisp’s condition system, though, and by convention, panic/recover isn’t supposed to leak across package boundaries.)
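For completeness, a minimal sketch of that convention (the `Parse`/`mustParse` names are hypothetical): the panic stays inside the package and is converted back into an ordinary `err` at the exported boundary.

```go
package main

import (
	"errors"
	"fmt"
)

// mustParse is an internal helper that panics on bad input.
func mustParse(s string) int {
	if s == "" {
		panic(errors.New("empty input"))
	}
	return len(s)
}

// Parse is the exported face of the package: a deferred recover
// converts the internal panic into an ordinary error before it
// can leak to callers.
func Parse(s string) (n int, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("parse: %v", r)
		}
	}()
	return mustParse(s), nil
}

func main() {
	if _, err := Parse(""); err != nil {
		fmt.Println(err) // parse: empty input
	}
}
```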
I forgot about the mess that is vendoring and `go get` ruining everything, but I guess they’re working on fixing that. It’s a transient pain that’ll be gone in a couple more years, too late for my weary soul.
But am I wrong? What about the “Go is the language of the cloud” thing that Docker, Packer, and friends have started? I don’t think Go is “natively cloud,” because that’s meaningless. I think a few devs just happened to independently pick Go when they wanted to experiment, and their experiments became popular.
It surely helps that Go makes it easy to
cross-compile machine-code binaries that will run anywhere without stupid glibc versioning issues, but you know what else is highly portable amongst systems? Anything that doesn’t compile to machine code. For instance, the AWS CLI is written in Python… while their Go SDK is still in alpha.
tl;dr
I find the limitations more troublesome than the good parts, on balance. I recently realized I do not care about Go anymore, and haven’t written any serious code in it since 1.1 at the latest. It’s not interesting from every angle the way Clojure is.