Tuesday, November 03, 2015

Test Package Naming in Go

I’m still – still – playing with Google’s Go language, making the occasional one-off utility and also working on a Crazy Web Server Project. I hesitate to call it a Framework, since all Frameworks are ultimately doomed. It’s nothing too extreme, but it’s (hopefully) going to solve some of my web-server problems for small sites; and it’s (absolutely) helping me learn Go “for reals.”

I recently discovered something very simple, more or less by accident, that has big implications. It is this: tests for package foo should declare package foo_test.

To me this was counterintuitive, since one of the big hairy things you have to adjust to when learning Go is the flat package directory. A package is made up of a bunch of .go files, and they all live in one flat directory. If you want to organize the files in a hierarchical set of directories, you must also have a corresponding set of packages (though they need not be hierarchical).

Incidentally, as I get better at Go I find myself hating such conventions less, because they are useful in forcing the programmer’s hand: be idiomatic damn you! And that certainly has its upsides.

Thus, logically, every .go file in a given directory declares the same package. If it doesn’t, you get a compile-time error!

Except, that is, for tests.

The Test Exception

For a directory with a package foo, you may have two packages: foo and foo_test. You may not have any others. So, why would you want to use foo_test?

There is a very good reason, which I’ll get to in a moment. If you already have a lot of unit-testing experience and know a little Go you may already have guessed.

Let’s first consider the way I had been doing it until recently. Apologies in advance for the lack of syntax highlighting.


package foo

func Bar() string { return "BAZ" }


package foo

import "testing"

func Test_Bar(t *testing.T) {

    exp := "BAZ"
    got := Bar()
    if got != exp {
        t.Errorf("Expected '%s', got '%s'", exp, got)

OK, great. This makes perfect sense so far. But now consider this, which in fact seems quite the obvious thing to do:


package foo

func Bar() string { return ook(1) }

func ook(i int) string {
    if i > 0 {
        return "BAZ"
    } else {
        return "BAT"


package foo

import "testing"

func Test_Bar(t *testing.T) {

    exp := "BAZ"
    got := Bar()
    if got != exp {
        t.Errorf("Expected '%s', got '%s'", exp, got)

func Test_ook(t *testing.T) {

    exp0 := "BAT"
    got0 := ook(0)
    if got0 != exp0 {
        t.Errorf("For 0, expected '%s', got '%s'", exp0, got0)

    exp1 := "BAZ"
    got1 := ook(1)
    if got1 != exp1 {
        t.Errorf("For 1, expected '%s', got '%s'", exp1, got1)


The example above shows a very common pattern, about whose virtue we could certainly argue: you can have much more thorough test coverage if you have robust unit tests for private functions. (Remember, in Go only Capitalized functions are exported.)

However, there’s a cost in clarity: you should be testing the things that can actually happen in the use of the package, i.e. all the variations of the public API. If there are conditions your public API cannot trigger, and which thus cannot be tested via the public API, then remove them from the code.

Go is strongly biased towards writing only the code that is actually used. Speculative code, while possible, is definitely not idiomatic.

As somebody who writes code for a lot of “what-if” scenarios in Perl, I can certainly appreciate that discipline.

This is why you should use the foo_test package: Go will then throw a compile-time error if you try to test a private function directly, and in your quest for 100% code coverage you will see the unreachable parts of your code.
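For instance, once the test file declares package foo_test, any attempt to call the unexported ook directly simply won’t build. The exact wording varies by Go version, but the failure looks roughly like this:

$ go test
# foo
./foo_test.go:12: cannot refer to unexported name foo.ook
FAIL    foo [build failed]

And go test -cover will then tell you how much of the package your public API actually reaches.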

So let’s get back to our example, changing the test first:


package foo_test

import (
    "testing"

    "foo" // or wherever your foo package actually lives
)

func Test_Bar(t *testing.T) {

    exp := "BAZ"
    got := foo.Bar()
    if got != exp {
        t.Errorf("Expected '%s', got '%s'", exp, got)

If Bar is the only thing that ever calls ook, then the "BAT" branch is unreachable and you’ll have to change ook or you’ll never get 100% coverage. In this contrived example the compiler will most likely inline ook entirely, but bear with me.


package foo

func Bar() string { return ook() }

func ook() string {
    return "BAZ"

And there we have it, in silly little examples. Always use a _test package name for your Go tests. Testing will be harder. Deal with it: your tests will also be better.


Sunday, November 01, 2015

Never Pay Marhaba In Advance.

I recently flew to lovely, hazy Singapore, with a stopover in Dubai. It was a trip of many firsts: my first time in Asia, let alone Singapore; my first time flying Emirates; my first flight on an Airbus A380; and my first time in the famous Dubai Airport.

Since I had arranged for a four-hour stopover, I thought it would be nice to kill the time in a lounge. Since my lovely wife and I were traveling coach – not half bad on the A380 – the business lounges were out of the question. But fortunately there was a pay lounge available: Marhaba.

After first getting a bite to eat elsewhere, we went to the Marhaba desk and asked about buying in. The attendants were quite friendly, and told us the lounge was rather full and thus we might not have a seat. I looked around, and while it was indeed full, there were some bistro-style chairs at a table. Good enough. We paid about 100 EUR for the two of us, and enjoyed the lounge.

I was planning on recommending it: it’s not as nice as a real business lounge, but it’s not too expensive and it beats just killing time in the airport. Plus, they mix a strong G & T.

However, the trip home went rather differently, and hence instead of a glowing recommendation I give you this advice:

Never Pay Marhaba In Advance.
Why not, you ask? Wouldn’t it be cheaper that way, you ask?

Well yes, it would be cheaper, except for the part about Marhaba keeping your money if your flight is delayed.

Our flight from Singapore to Dubai was significantly delayed after boarding. We sat on the plane for over two hours waiting for the engines to be replaced, or whatever technical defect they needed to fix (the Captain left us guessing). Then we were further delayed in the air; and just to make sure everyone was in the best possible mood, Emirates canceled the dinner service, at least in Coach.

When we arrived in Dubai, with clearly insufficient time to make our connection, we hustled through security and asked an attendant what we should do. He told us we should hurry. And so we jogged to the gate, and found the plane waiting.

About ten minutes later we were boarded, along with a few slower joggers, and on our way back to Europe.

Marhaba Will Not Refund Your Money.

This time I had booked the Marhaba lounge in advance, back in Singapore, pre-paid at a small discount, as I figured it would be better that way. I had entered my flight number on their web site, so their system was fully aware of the delay.

Once back at a computer, I sent them an e-mail politely reminding them that my flight had been so delayed it would have been physically impossible to go to the lounge before making my connection, and asked them to refund my payment if they had not done so yet. (Honestly I half expected them to have issued a refund already, since they knew about the flight, but then I am a computer programmer.)

Marhaba got back to me promptly and told me that unfortunately, since I failed to cancel the booking eight hours in advance – i.e., since I failed to call them from the air – they would be keeping my money. Have a nice day.

I thought, well, that can’t really be your policy if you’re providing airport services, can it? This must be a pretty common occurrence: flight delayed, services can’t be used, at least issue a credit for the next time, right?

Bear in mind that lounge access is only one of the services Marhaba provides. You can easily shell out hundreds of Euros (thousands of Dirham) on Marhaba services, and then what happens when your flight is delayed, or for that matter canceled?

So I asked for clarification. And I got it:

We regret to inform you that no refund will be processed for a no show booking since any amendments or cancellation should be done at least 10hrs prior to the flight. Please be advised that all no show bookings has no refund.

Never Pay Marhaba In Advance.

Maybe that’s just how they roll in the Emirates. Maybe it’s usually somebody else’s money. Maybe everyone’s rich enough they don’t care about such things.

But if you care about your money, or about the principle of the matter, then it’s imperative that you never pay Marhaba in advance, for anything.

If I fly through Dubai again I might use their walk-in lounge service, since a strong drink is a strong drink, but my credit card shall never again darken the Marhaba website’s door. Neither should yours.


Tuesday, October 27, 2015

Test Post from Classeur

Mostly nothing to see here folks, move along, move along…

This is just a test post from Classeur, which is a very slick in-browser Markdown editor with support for some popular blogging platforms.

I usually compose my blog posts here in Markdown first, then copy the generated HTML, then make a few corrections to get around Blogger’s buggy HTML-to-HTML converter, then paste it into the blogger UI.

Cumbersome, to say the least. Maybe this is a better way.

Also, maybe it’s an easy way to collaborate on content creation. Well, for open-source stuff it would be pretty easy to come up with a workflow, and the price is way right: $5/mo I believe.

But what if you’re, say, the editor of an online newspaper, and you want your writers to compose in Classeur?

The web site suggests you can publish to DropBox, which might be a solution, but that seems to not be implemented yet. Or you can share documents as read-write among Premium users, which would be a solution for getting it as far as the editor. The editor would then need to export it somewhere.

Yes, I realize Real Newspapers would have online workflows for all this, but first: I’m not a Real Newspaper, and second: I bet their workflows really really suck. Oh, and yes, I realize most online publications don’t employ editors, but humor me please.

So let’s assume this workflow is OK and will actually work:

  1. Reporter, who has a Classeur Premium account, writes a story.
  2. Reporter sends its link to the Editor, who also has a Premium account.
  3. Editor edits, if necessary sends it back, and so on until it’s ready.
  4. Editor exports the Markdown source and sticks it in a database, or source control, or whatever.

I like this so far. Not perfect, but the editor is really quite nice, and in my experience it’s not an easy thing to get non-techies to use plain text editors, much less parse Markdown in their minds.

The problem, though, is in the pictures. I can add a nice picture of a squash ball here:


The problem is that it’s on Imgur, an image hosting site very popular with the young and bored. And not a place I want hosting my internet-breaking Paparazzi shots of the Donald.

But hey, if I’m a literary editor maybe I don’t need pictures. And if I’m just running some random click-baiting Bored Wombat site, maybe I don’t care.

Again: nothing to see here, test post mostly, &c., YMMV, E&OE, and the like. But I think Classeur is pretty interesting, and I will keep it in mind for future use.


Thursday, June 25, 2015

Go for Minor Utilities

I continue to play around with Google’s Go language, and I continue to have very mixed feelings about it.

It’s clearly not built for You and Me, i.e. for the programmers of the world; rather it’s built for Google and shared with us. This is fine as far as it goes, but as with so many other Google products the joy of the user is mostly “out of scope.”

Also, presumably because the whole Go ecosystem is so young, there are all kinds of quirky things being done in the many, many, many third-party packages scattered around the Githubbernets. I may write more later about the Goverse’s tendency towards Database Worst Practices and its shunning of the Null. (Ironic for a language in which every thirteenth line is a nil check.)

So far, I’m pretty sure I would not use Go for any big, complex, long-term project. But it’s good enough at some things that I’d definitely consider it for any straightforward, easy-to-run subcomponents of a big project. I wouldn’t choose Go for code shared with a bunch of other developers, but if I were dropped into such a project I’d probably do just fine. I might use Go if I planned on bringing in outside help, because as others have pointed out, one of the Google use-cases it seems optimized for is bringing inexperienced programmers into a project.

But there is one thing for which I absolutely love Go, and for which I could picture myself using it for a long time to come: Go is really great for writing simple utility programs.

Tonight I found myself needing some random-ish strings for a goofy little side-project I’m doing. After not finding quite what I wanted at the otherwise wonderful Random.org, I took an hour to throw together a little utility. It will probably only ever be utilitous to me, and I may or may not bother cleaning it up and testing it, but now at least it exists:

github.com/biztos/randstr
And that means I can, should I need to on some other computer, simply do this:

$ go get github.com/biztos/randstr
$ which randstr
$ randstr --help

…assuming my $GOPATH is in order, and subject to dependencies I have not yet “vendored in,” and so on.

There are, I think, four things that make it especially attractive to write such programs in Go:

  1. It compiles, quickly no less, into a single binary.
  2. It’s reasonably cross-platform.
  3. The standard library, while sometimes bizarre, is very rich.
  4. Docopt!

The Joy of DocOpt

Command-line utilities, and Unix-like programs in general, usually take options and other arguments, and they usually have some rudimentary help text explaining these things.

DocOpt parses a structured, but very human-readable, help text in order to determine what the available options are. It’s not perfect but it more than covers the reasonable use-cases of most utilities I’ll ever need to write.

For any nerds who have not yet heard of DocOpt, here is the canonical example:

Naval Fate.

Usage:
  naval_fate ship new <name>...
  naval_fate ship <name> move <x> <y> [--speed=<kn>]
  naval_fate ship shoot <x> <y>
  naval_fate mine (set|remove) <x> <y> [--moored|--drifting]
  naval_fate -h | --help
  naval_fate --version

Options:
  -h --help     Show this screen.
  --version     Show version.
  --speed=<kn>  Speed in knots [default: 10].
  --moored      Moored (anchored) mine.
  --drifting    Drifting mine.

There are many implementations of DocOpt, but the one I have been using with Go is the excellent docopt-go by Keith Batten et al.
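Wiring it into a program takes only a few lines. Here’s a minimal sketch, assuming the Parse signature from the docopt-go version I’ve been using:

package main

import (
    "fmt"

    "github.com/docopt/docopt-go"
)

const usage = `Naval Fate.

Usage:
  naval_fate ship new <name>...
  naval_fate ship <name> move <x> <y> [--speed=<kn>]
  naval_fate -h | --help
  naval_fate --version

Options:
  -h --help     Show this screen.
  --version     Show version.
  --speed=<kn>  Speed in knots [default: 10].`

func main() {
    // Parse os.Args against the usage text; --help and --version are handled for us.
    args, _ := docopt.Parse(usage, nil, true, "Naval Fate 2.0", false)
    fmt.Println(args)
}

The result is just a map of option and argument names to values, which is plenty for a little utility.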

I do still love the completeness and flexibility of Perl’s Getopt::Long, and I’ve not found anything similar for Go; but now that I’ve gotten used to DocOpt I would think twice before committing to anything more complicated.

With DocOpt in your toolkit, Go is an awesome language for writing quick little utilities. Whatever else it may be, and whatever it may lack, this is reason enough to know some Go.


Monday, February 02, 2015

Go First Impressions

Over the last couple of weeks I’ve been playing with Go, an open-source programming language from Google.

My motivation was pretty simple: I felt like trying out a new server language, something modern and perhaps even trendy. Partly because it’s always a good idea to learn new things, and partly because I was growing frustrated with the limitations of Perl and Javascript/Node. Those are the two languages I use for most of my work, professional and otherwise. While I’d happily sing the praises of either one for many, many domains, every language has its baggage and I was lately feeling the weight of theirs.

Also, a highly respected member of the Perl community recently published a soul-searching article, The Mid-Career Crisis of the Perl Programmer. One thing it reminded me of is that I never considered myself a “Perl Programmer” any more than I consider myself an “Oil Painter.” I earn my keep as a software engineer. I have certain preferences and biases as would any serious professional, but I’d much sooner go to war over text editors than programming languages.

When I look into the crystal ball at my coding future, I see some Perl and some SQL (since 1994...) and that sure looks like Swift out there past the liquor store, but beyond that it’s a little fuzzy as we get closer to the beach. Maybe one of the fuzzies is a gopher, maybe not.

Without further ado, my first impressions of Go follow below.

Pro Go

Productivity Now!

I’m very impressed with how quickly I was able to be at least minimally productive in Go.

When trying out a new language, like most people I want to see if I can use it for something in the real world. In this case I tried a simple web application, as I’ve been writing a bunch of those lately.

A web app should be easy to prototype in this day and age, but it should also be obvious how you can go deep and scale up to a complex and high-performance system. That said, it doesn’t exercise features of the language so much as the available libraries.

After a week of part-time hacking and quite a bit of reading around the web, I feel like I know how to build a robust web app using Go and some of the popular libraries; but also, and here is where Go stands out, I’m fairly confident I could do the same with just the standard library. We’ll soon see why that’s important, but it’s worth noting that I couldn’t do the equivalent in Perl or Javascript without a lot of brute force, and I’ve been writing in those languages since before some Silicon Valley billionaires got to high school.

In fact I can’t remember the last time I played with a new language and so quickly found myself able to write real software, software I understand, and not some Thy-CRUD-Be-Thus templated Sinatrail tiger-trap.

This is a big deal: you might need to hire people, and assuming you are not an idiot you will probably hire people who don’t yet have expertise in the particular software stack you’re using, because that’s not a proxy for intelligence or productivity. (Hiring that way is, as we say in the coding trenches, an anti-pattern, like hiring only Ivy Leaguers or only those who can solve math puzzles in an interview.) These smart people you’ve hired need to get “up to speed” quickly, and it looks to me like Go is a language that would give one a leg up in that inevitable struggle.

The main criticism of Go I’ve encountered so far on the Web is that it may be a practical and a useful language, but it is also ugly and lacks this or that feature. This sounds about right, but for me ugliness exists on a sliding scale: show me the language that gets things done and I’ll show you the language that’s ugly and lacking some feature. Productivity is its own kind of beauty, at least for us Engineers.

Static Types.

As a Perl guy I didn’t expect to find myself saying this, but here it is: I’m starting to prefer static typing. It’s harder to do some things that way, but when I think of all the boilerplate crap I have to put into my dynamically typed code to make it reasonably safe, and all the tests I have to write to make sure I didn’t screw it up, static types start to look pretty good. In my experiments with Swift the static types have been more helpful than not.

And it’s not like you can’t deal with arbitrary types when you need to. Go makes it easy to write things like func DoSomething(m interface{}) {...} – though of course it would be a bad habit to do that very often.

What I’ve grown to like about statically typed languages is what we might call the enforcement of intent: if I write a function to turn Gophers into Marmots then the compiler or interpreter, by default, will make sure that I only ever give it gophers and that I know marmots, specifically, will be returned. This is probably as valuable for its encouragement of design clarity as for its bug-reduction potential.
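In Go terms it’s as simple as this (the types here are made up for illustration):

type Gopher struct{ Name string }
type Marmot struct{ Name string }

// Transform accepts only a Gopher and always returns a Marmot;
// hand it anything else and the compiler refuses to build.
func Transform(g Gopher) Marmot {
    return Marmot{Name: g.Name}
}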


Readable, If Not Beautiful.

I wouldn’t say Go is a particularly beautiful language, in an aesthetic sense. But it is fairly expressive and once you get used to a few quirks it’s quite easy to read. This is obviously a key part of productivity: every time you write public static void or my ($self,$params) = @_ a poet dies.

You do have bad things, things that are ugly and non-obvious and will be the object of ridicule when Go is a bit older: map[string]reflect.Type is, for example, a perfectly reasonable return value in Go. But most of it is fairly obvious and easily grokable if you’ve done a bit of programming already. (And if you have not, maybe you should see the turtle before you talk to the gopher.)

Compiling and Statically Linking.

This is another one of those things that might sound odd coming from a Perl guy, but when I think about deploying Go programs on servers – possibly on very many servers – I really like the idea that my program, including all its dependencies, will be compiled into a single binary file.

That’s probably not the only file I’d have to deploy, but it removes a lot of hassle from my deployment story, whether that’s just me deploying to a Linode or a team of IT operations experts deploying to a server farm on SeaLand.

Sure, this introduces the problem that you need to compile on the same platform you intend to deploy on; but isn’t that what virtual machines are for? And if you’re not already testing on your destination platform then you’re not really testing, so to my mind this is the smallest of inconveniences. Even for a one-off little project you ought to ssh into your host for a final compile-and-test anyway.

Big Fat Standard Library.

One of the great strengths of Perl is CPAN, a vast repository of free software that will save you immeasurable effort in writing your programs. Go has nothing quite like that (see below); but it does have a very deep standard library. This means you can:

  1. Build a lot without external requirements.
  2. Have a common frame of reference in talking about Go programming.
  3. Fall back to a reasonable midpoint if you find an external package buggy.

The frame of reference might be the most important part of this. If I’m building a web server I can use the net/http package and chances are very good I will be able to get help, should I need it, on sites like Stack Overflow. Because just about everyone who “knows Go” will know how net/http works.
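And the whole “hello, web” exercise really is tiny with nothing but the standard library – a sketch, with error handling kept to the bare minimum:

package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    // One handler for everything, listening on port 8080.
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "Hello from %s!\n", r.URL.Path)
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}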

Building software with fewer external requirements can be a great advantage in itself, simply because every dependency introduces a risk of bugs. If I find something genuinely wrong with net/http I am very confident it will get fixed, and that the fix will then be part of the standard software. If I find something wrong with github.com/biztos/gogogo/http then maybe the author will fix it, or maybe not; or maybe I’ll fork it, or maybe somebody else already forked it, or maybe I’ll just throw some sand into the wind and see where it lands, which from a stability point of view isn’t much less useful.

It’s also nice to know you can fall back if you need to, and not have to fall back all the way to sockets. How far this is realistically possible depends on the domain in which you’re operating, but one could at least bring up a decent web API without much external code beyond the database drivers. This might be irrelevant if you’re building the Twitter-Enabled Platform for Instagram Drivesharing (TEPID.IO, YC16). But if you’re stuck in that old-school world of solving actual problems with software it should help you sleep at night.

Scalable and Fast (they say).

I have not yet done anything at all with the features of Go that should, and purportedly do, make it scalable and fast. Unless you count compiling my code as a minimalist demonstration of speed:

$ go build hello.go
$ time ./hello
Hello world.

real    0m0.005s
user    0m0.001s
sys     0m0.003s
$ time echo Hello world.
Hello world.

real    0m0.000s
user    0m0.000s
sys     0m0.000s 

Hmm... But seriously, there’s a lot of chatter online about Go programs being pretty fast, and until I’m in a position to judge (not likely soon) I will take the Internet’s word for it. In any case it ought to be fast-ish just by virtue of its being compiled, right? Right?

The thing that looks most promising for higher-performance programming in Go is its concurrency model. I have not used this at all yet, but I like the look of it, and the fact that its keyword is go makes me think they gave it some serious thought. In fact it looks so promising that I find myself trying to think up a programming task for which I would need it. Some kind of batch processing I suppose...
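Even without a real use-case in hand, the shape of it is appealing: start goroutines with the go keyword and collect their results over a channel. A trivial sketch:

package main

import "fmt"

func main() {
    jobs := []string{"alpha", "beta", "gamma"}
    results := make(chan string, len(jobs))

    // One goroutine per job; each reports back on the channel.
    for _, job := range jobs {
        go func(j string) {
            results <- "processed " + j
        }(job)
    }

    // Collect exactly as many results as we launched jobs.
    for range jobs {
        fmt.Println(<-results)
    }
}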

Cloud Friendly, and Trendy!

What is cloud friendliness anyway? A buzzword, a buzzphrase; maybe it’s meaningless. But when you are thinking about writing things that might need “cloud scale” it’s worth asking how friendly the “cloud” is to your language. Amazon likes Go. Rackspace likes Go. Heroku tolerates Go, in a friendly sort of way. Google App Engine supports Go, in a tentative sort of way.

The concurrency model should make it fairly straightforward to write programs that scale with the size of a compute host, and that ought to help in cloud-style applications. But in the end the trendiness may be the key here: trendiness gets you support (Heroku is Trendy!) and trendiness gets you free software mashups with other trendy things. (The Cloud is Trendy!)

The other thing trendiness gets you is enthusiasm. This might sound shallow, but at the end of the day you will not be the only one writing your software. Having people excited about working with the latest, trendiest thing will improve their work. And that will make you money.

Standards You Can Live With.

With Perl, there’s always more than one way to do anything, but there are some established best practices for common scenarios. With Javascript, there’s usually more than one way to do anything, and there are dozens of competing practices all claiming to be the best. With Go, as far as I can tell, there is a consensus about the “right” way to do most things.

In coding patterns, the Go folks call this idiomatic programming, by which they mean that which follows the language team’s preferences. This is probably a good thing, but it’s hard to see how to enforce it other than to have an appointed Arbiter of the Idiom. That can work in open-source projects, where the Go Community will happily play that role. Internal projects are another story. As far as I know there is no equivalent of PerlCritic, but that hardly puts Go at a disadvantage to any language other than Perl.

Other standards are more actionable. For instance, unit testing is to be done with the built-in testing software. Code formatting is to be done with the gofmt command – a pale, sickly shadow of Perltidy but one that for better or worse brooks no argument.
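Both are one-liners, which surely helps them stick: gofmt -w rewrites files in place, and go test ./... runs every test under the current directory.

$ gofmt -w .
$ go test ./...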

Project layout also follows a standard, but it’s a very problematic one as I’ll discuss below.

In the end there is a “Go Way” and you can save a lot of time by just submitting to it instead of arguing about its merits. When this works, it’s probably worth having it not be perfect. I’m right about indentation and they are wrong, but if I have to prove I’m right to one more f---ing nerd then I might as well save the effort and live with their wrong indentation. When it doesn’t work, however, it leads to nonstandard standards, which is a slippery slope indeed.

Plan Nine!

Once upon a time, we computer types thought the world needed new and better operating systems. It turned out the world didn’t exactly need them: the Apple computer I’m writing this on, and the server rendering it for you, and the iPhone on which I’m listening to music right now, are all running their fancy software on top of UNIX, which is venerable and beautiful and stable and definitely not modern. Windows limps along apace.

Before the rise of Linux and Steve Jobs’ return to Apple, people really did think there would be something new running the world’s computers by 2015. One of the most exciting new things was Plan 9, named after a classic low-budget sci-fi movie.

It was brilliant and advanced and promising and it looked a lot like the future. But Linux rose, Steve Jobs used NeXTSTEP to save Apple, Microsoft stayed in business, and new operating systems like Plan 9 didn’t come to much.

But it was a great idea. And some of the same people who made Plan 9 are behind Go.

No Go

Living with Half-Baked Standards.

I’m sure there are others, but so far I’ve repeatedly been stymied by two poorly thought-out Go standards: project layout and packaging. The standard is in both cases so oddly incomplete that I wonder about the bubble in which it must have been created. Nice, comfortable bubble, no mess here, ah the purity of the bubble...

Project Layout

Go projects are supposed to live in workspaces, and a workspace has a few standard directories: src, pkg, and bin, with pkg holding compiled packages so you don’t have to recompile every last thing every time. You keep your workspaces in your $GOPATH environment variable, which supports multiple workspaces. Multiple workspaces can be useful in development if you want all your third-party packages in one place.
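On my Mac that looks something like this (the paths are just an example), with go get dropping everything into the first entry:

$ export GOPATH=$HOME/go:$HOME/go-3rdparty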

Here for example is a workspace in which I’ve built and installed the foo package, which has a foo/bar package within it and uses one external package, github.com/kr/text:

bin/
pkg/
    darwin_amd64/
        foo/
            bar.a
        foo.a
        github.com/
            kr/
                text.a
src/
    foo/
        foo.go
        bar/
            bar.go
    github.com/
        kr/
            text/
                .git/[shit-ton of git stuff]
                License
                Readme
                ...et al.

You will have noticed that the License and Readme files are thrown in with the source code, which is a bit messy but not unreasonable. And the shit-ton of git stuff is there because I fetched the text package with the standard go get command, and that clones the git repo.

So far so good, but what happens when I have a Real Life Project, in which I have a bunch of things that are essential but are not source code? Templates, configuration files, resources, test data. Where should it live?

Obviously I should have something like this, more or less, for a simple web app (the names here are purely illustrative):

    mywebapp/
        src/            <- the Go code, and nothing but the Go code
        templates/
        config/
        static/
        testdata/

As far as I can tell, that’s not what Go wants. Go wants me to do this, which is not wise:

    $GOPATH/
        src/
            github.com/biztos/mywebapp/
                main.go
                handlers.go
                templates/
                config/
                static/
                testdata/
        pkg/
        bin/

I have yet to encounter any defense of this craziness, nor any official way to sanely lay out real-world projects in which the source code is but one part among many. I’m an optimist so I’ll keep looking, but so far it looks like it’s up to me to hack my way around Go’s deficient workspace concept.

If I’m lucky somebody smarter than me will come up with a great solution and it will become the de-facto standard. This often happens when somebody writes a popular Web framework and ships it with a sample application and/or an application generator, the default layout of which is carried forward by thousands of people using the framework.

But if we’re going to look for de-facto standards to emerge from the community, and in the meantime roll our own, then what is the point of having an Official Standard?


Packaging

Go’s default package manager is simply the go get command, which fetches a package from a public repository, does any necessary unpacking and installing, and drops the results under the first directory in your $GOPATH. It’s very clever about cloning Git repositories, and thus gives you a lot of flexibility when dealing with GitHub-hosted open-source packages, which seem to be the majority.

This is great up to a point. That point would be when you want to ship software that’s supposed to work, the not-working of which would be your problem, and which has external dependencies. Then suddenly you realize – oh shit! – you forgot to specify the exact version of that cool parser package from GitHub. And it’s at 0.31 so even assuming malice and human error do not exist in the world, your code still might break when cool/parser hits 0.33 and the API changes.

I keep hoping I will run across some justification for this, since it’s such an obviously bad design yet Go was written by obviously smart people. I have read that “stable code on HEAD” is the answer, but whoever says that doesn’t understand the question. It’s not about hygiene, it’s about predictability. The official justification isn’t much help either.

My best guess is that Go was written in a context, in-house software development at Google, in which package versioning was not an issue, and as a result the authors simply didn’t think of it.

Standard Anarchy.

One of the wonderful things about the open-source world, especially in this age of cheap internet infrastructure, is the way problems are often solved for you before you are even aware of them.

In the case of Go, we have gopkg.in, which very elegantly solves the package versioning problem for packages hosted on GitHub (the majority, I believe).
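The convention, as I understand it, is a version suffix in the import path, which gopkg.in redirects to the matching branch or tag on GitHub. For example:

import "gopkg.in/yaml.v2"   // resolves to version 2 of the yaml package on GitHub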

We also have GoDoc for package discovery and documentation. Between these two things Go almost has a CPAN.

I get the sense that Go revels in its anarchy at the moment, while also trying to insist that there are standard ways of doing most things. Presumably something like GoPkg will become a standard, de-facto or otherwise, in time.

The problem I see with this mix of anarchy and imperative is that not everyone is going to agree on the original standards. This problem grows with the user base, particularly as older, more experienced (and more opinionated) programmers get involved. The standard documentation, for example, is ugly and hard to use. Let’s say I make a better documentation package: should I refrain? Should the best solution win, or the factory standard?

Much has been made of the time saved by having gofmt format source code in the One True Standard Way, thus obviating the need to argue about indentation. But if there’s no standard manager for versioned external packages, and no standard Posix-compliant option parser, then why shouldn’t I make a competing code formatter? Let’s say I do, and it catches on. This could be a very good thing, but reduces the point of gofmt to simply saying “automated code formatters are good.” We’ve been saying that in the Perl world for a long long time.

Objects, Not Objects, Deja Vu!

I almost put this in the Pro Go section, because after half a million years writing object-oriented code I welcome the challenge of not thinking in “OO” terms, and I very much look forward to learning more about good objectless design. And anyway Go is kinda-sorta object-y to begin with.
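For the record, the kinda-sorta object-y part I mean is methods on named types, interfaces, and struct embedding – delegation rather than inheritance. Roughly:

type Animal struct{ Name string }

func (a Animal) Speak() string { return a.Name + " says hi" }

// Gopher gets Speak via embedding: delegation, not inheritance.
type Gopher struct {
    Animal
    Burrows int
}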

In the real world, however, somebody will devise a clever hack that messily glues object inheritance onto Go’s structs and interfaces, which is where its non-inheriting yes-and-no object-orientation already lives.

In the real world, this object hack will be derided as nonsense by Go purists, will be more or less disowned by the Go community, will be exposed as borderline fraudulent in six dozen blog posts and half that many conference presentations, and will nonetheless be used, largely out of sight, in thousands of Go projects.

I think it was a big mistake of the Go authors to not build in proper object orientation. I don’t see why they couldn’t have. I get that they don’t want people to write Go that way, but sadly I believe a lot of people eventually will; and there won’t be any official way to do it, so somebody’s unofficial solution will end up as the base of some large and indispensible package that sooner or later everybody, or at least you, has to use.

Google: The Glass is Half.

To paraphrase jricher42’s Reddit comment: Google is an advertising company that realizes it must produce excellent software in order to continue dominating the advertising market.

It would be a little ridiculous to expect such a company to over-commit to any particular technology. If you’re a Go lover, and particularly if you’re one who doesn’t love Java and Python, then you probably wish Google would “Go” all-in and make Go their primary language. Then the outside world would love Go even more, because after all it’s good enough to run Google; and Google would get a bunch of free training for potential engineering hires.

But that’s not how this works, and for Go’s sake that’s mostly a good thing. First, given that Google’s support of Go is a little tepid even now, I wouldn’t assume it has any special prominence in Mountain View. As recently as last year I had a Google recruiter pitch me without once mentioning Go: he said I could code in the language of my choice as long as it was Java or Python. Second, Google for obvious reasons insists on the flexibility to discard its experiments, no matter how much “the community” might care about them (viz. Reader), and for that reason alone it’s better if Go isn’t a “Google Thing” per se. Third, Google is every bit as closed-source and poker-faced as Apple when it comes to any software that actually makes it money. If Google comes to depend on Go, then the internal Google fork of Go will start growing proprietary features if it hasn’t already. Good for Google, bad for Go.

This adds up to a risk, and one that we must hope Go outgrows. Google is capricious, and exerts an influence on the language. It might overcommit, it might meddle, it might abandon all support of Go; and in each of these cases the language might suffer, to an unknown degree.

Youth Itself.

Go is a very young language. It’s been publicly available since November 2009, which makes it five years old as of this writing.

Five years is time enough to mature some but not really time enough to prove any staying power. These have been faddish years in computer languages, and while Go’s popularity is growing it’s hard to guess how far it’ll, er, go.

Consider Perl: so very not trendy, yet so well established that an enormous amount of very serious code is written in it every day, making a lot of people a lot of money (and the programmers some too). Most of it isn’t powering startups, and I suppose none of it is powering Google, but Perl remains a reliable industry workhorse. I would much sooner trust Perl’s DBI than anything gluing databases onto Go.

Will Go become that reliable? Will an investment in Go expertise pay dividends in ten years? In twenty? Will there be a canonical Oracle driver, Perforce and Atlassian integration, that sort of thing?

I like Go largely because it solves problems I actually have, and the “canonical Go program” is a web application, of which I expect to write many more in years to come. But Go’s youth is a risk and will be for at least a few years. It would be helpful to hear success stories about large-scale adoptions of Go at major companies other than Google. Are there any such stories?

Limited Sense of Humor.

I get the feeling that, much like Google itself and despite its cute (?) gopher mascot, the Goosphere takes itself a bit too seriously. The Perl folks are much wittier.

Case in point:

$ go ogle
go: unknown subcommand "ogle"
Run 'go help' for usage.

Further Reading

If you’re interested in using Go, I recommend you start by playing with it, or following the official tutorial, or reading a free online book. If you’re the learn-by-doing type that will probably be enough to get you excited or repulsed, depending on whether Go is a natural match for you.

I also strongly recommend reading a talk the Go authors published in 2012, Go at Google: Language Design in the Service of Software Engineering. That explains the why of Go as well as much of the how, and I don’t think the FAQ really makes sense until you’ve read it.

Finally, I found it enlightening to read a few “Go vs X” writeups, such as Scala vs. Go on Quora.

Now go build something!


Wednesday, December 10, 2014

Evernote, Forevernote, Nevernote?

I write a lot of things down, most of them on a computer. What I’m writing on a computer usually falls into one of several fairly typical categories:

  • Brainstorms
  • Plans
  • Lists
  • E-mails
  • Software

I wish I could add essays and fiction but I’m not there just yet. And I do keep something vaguely like a journal, and something like a studio notebook, but those are done with pens on paper and thus, as we say, out of scope. I might yet get around to regularly blogging or otherwise writing for the web, but I’ll burn that bridge when I come to it.

So, back to my categories. Let’s start from the bottom. Software, for me, lives in GitHub as long as it’s my own software. If it’s work for hire then I’m not at liberty to say where it lives. And most software is written in an editor of choice. When the choice is mine it’s usually TextMate 2, otherwise it’s usually XCode; and of course Vim is the fallback. Thus you could say that the software-writing problem is solved, for better or for worse.

I would say the same about e-mail, though here most technically minded people would agree that “for worse” applies. I write my e-mails in Apple Mail and that’s unlikely to change. If I were a Windows user I’d probably use Outlook, and if I were on neither I’d probably use a mix of Thunderbird and something like Pine. I’ve used all of these as my primary e-mail client at some point; I’m not a Gmail type, nor do I “glass”, nor do I drive a Segway. Call me old-fashioned, or say I have a modicum of taste, or ride into the sunset on my color TV screen. As you like.

At any rate I, like most people who write software and e-mails, am sufficiently invested in my current workflows that it would take something fairly revolutionary to make me switch.

The next thing up my list is a different story: Lists! How do I list thee, let me list the ways! As banal as it might seem at first glance, list-writing and list-managing and list-using are all basically unsolved problems that are perfectly suited to software. And because nobody has really solved these problems together in a universally good way, and because they look on the surface like simple problems, there are lots and lots of computer programs that try to help you with your lists.

Most of them, as far as I can tell, are concerned with the subgenre of To-Do Lists. Among those, one of the most established and probably the most comprehensive is OmniFocus, and I have used it for years. I bought their Mac software and their iPhone software and their iPad software, and thanks to Apple’s dislike of “upgrade pricing” I may well buy them all again.

OmniFocus is GTD-oriented but it works for generalists just as well. I think it’s fair to say that OmniFocus was built for, and is championed by, people who have resigned themselves to always having large and complex To-Do lists, and in general to being “busy people” in a way that involves, or at least should ideally involve, lists. I may be such a person, and that may be a good thing, or maybe not and/or not. That’s a topic for a more philosophical blog post in The Future.

Over time I developed a serious problem with OmniFocus: it was such a good program for “capturing” (writing down) list items that I started keeping all my lists in it, not just my lists of actionable To-Do items. That was very much not the intended use of the program, and the user interface frequently reminded me of this fact.

Let’s move further up my category list: more trouble arrives with Plans. My List-Landfill slowly morphed into a Plan-Landfill: I was using OmniFocus as a project outliner, again because it was so easy to put things into it. But a Plan-Landfill – Plandfill? – is a dangerous thing. After all, these are plans! Things to be realized! Not things to be forgotten!

So after a year or so of this I decided to move the plan-writing to OmniOutliner, which seemed like just the ticket. And I suppose it might be, but I couldn’t warm to its look and feel, and its file format and export options were adding complexity where I already had enough.

Meanwhile I was still using my favorite text editor, TextMate 2, to do most of my Brainstorm writing – the top of my writing pyramid, or column at least. I would write things in Markdown, and when they got too long or needed images or diagrams I would break them up into messy directories of Stuff.

That is, when I wasn’t further abusing the OmniGroup by mixing up my brainstorms and my plans and my lists in OmniOutliner and OmniFocus. Oh recidivist I!

Eventually it dawned on me, like sign language on a chimp, that I was overcomplicating things and that none of this would ever “scale” or for that matter remain useful five or ten years down the road.

As it happened OmniFocus had just been upgraded, and it would cost some money to re-commit; and I had been growing weary of their sync service that almost never worked with my iPhone (I had to re-download my whole database several times a week). So I decided to try something new.

Enter Evernote.

But wait: first, enter Wunderlist. It occurred to me that the best way to stop overusing OmniFocus was to use something else for my To-Do lists, and there was a new startup on the block with a well-reviewed and simple app. As a bonus they are based in Berlin, as am I (more or less). Wunderlist’s business model seems to be team collaboration and subscription revenue, and they’re doing a good job as far as I can tell. As usual, I digress: Wunderlist will be the subject of Yet Another Future Blog Post.

Enter again: Evernote.

The promise of Evernote is to be, like Yojimbo before it, a catch-all information organizer. Only modern. On all the smart devices, in the “Cloud,” and so on.

This works pretty well in practice. You have “notes” (documents) which are organized in “notebooks” (which can be nested), and there’s some basic tagging support and some interesting takes on organizing it all in the UI. Notes are more or less rich-text, but can have quite a bit of styling in them, and you can add images and other media. You can “clip” (capture and save) web pages. You can graphically annotate your graphics, and your notes as well. You can share notes and collaborate and this and that and the other popular thing.

Overall, I like it, especially for brainstorms.

Evernote works on the subscription model much like Wunderlist, and is also pushing its collaboration and “business” features a lot lately. It’s free to use and quite useful at the free tier; the paid version is something like $5/month, which for us grownups is more than affordable. In fact as long as you live in the developed world it’s affordable: it costs less per month than a single Starbucks coffee-esque beverage, and it provides real utility.

As with any app I disagree with some of the UI conventions, and as with any app there are some annoying bugs, but overall it strikes me as a great product, and after a month or two of use I could see it potentially solving my brainstorming, planning, and list-writing problems. It even supports To-Do lists, but based on my experience with OmniFocus I’m keen to keep that function separate from the rest.

Evernote has been very successful and as far as I can tell the success is well deserved. It even has a public API that a hacker like me could use to integrate it into other, weirder parts of my information superduperset. Part of me wants to go all-in and make Evernote my primary note-taking, information-ingesting, brainstorming-and-more gateway to the magical Cloud in the electric-blue sky.



But what about the lock-in? And what about security?

To be fair, I think Evernote goes above and beyond the industry standard for both data-freedom and security. There is a well-documented, machine-readable document format and you can export all your data whenever you like. You can even do it through the API if you want.

The company clearly takes security seriously and does a lot, probably all they reasonably can, to keep your notes secure and private. Or at least all they reasonably can while maintaining a useful service for normal humans.

As I see it, the lock-in problem for any system comes down to two questions:

  1. If I want to move to another system, how hard will it be?
  2. How confident am I that the service (and my data) will be around (and useful) for a long time?

Moving to another system will be easy as long as the other system imports Evernote files. Given Evernote’s popularity I think it’s very likely that serious competitors, once large enough, would import your data. Or at least import my data: in my case it’s mostly text, so I’m not worried about importing a terabyte of painstakingly catalogued home video.

Moving to no system, or to a system with no Evernote import, is another matter. At least it could be done, but I’d probably spend a week writing that converter. And the thing you’d end up with is, at best, a big pile of HTML documents.

For most people that’s utterly irrelevant. But for us hackers it’s a thing to consider: the portability of a document decreases as its structure gains in complexity. Evernote’s HTML variant is pretty manageable, especially by software, but it’s still a far cry from the plain-text goodness of Markdown.

I don’t think anybody’s doing this better than Evernote, but it does give me pause that there isn’t a simpler underlying format in case I change my mind.

And what of the product’s longevity? This matters not just in terms of access to your data but also, principally, for your continuity of habit. If you learn all the tips and tricks of Evernote, build yourself a workflow around it, and use it day in and day out, your commitment has great value. The value to the company is obvious as long as they keep enough customers paying; but the value to you is much greater. The cost of learning another system would be very high.

Plus, as unpleasant as it is to think of such things, longevity isn’t just a matter of existence. You need the product to be cared for, to be developed further with the kind of passion that’s brought it this far; and above all you need them to keep caring about data-freedom and security.

In favor of Evernote’s longevity we find, mostly, their great popularity. I think their business model also makes a lot of sense: get people involved and committed on the free tier, sell them up to the premium tier, and make most of your money from a business tier that becomes more and more attractive to businesses as more of their employees already know the software.

But I don’t think there’s any proof yet that I’m right. They may well have a very lopsided user base, with the free tier consuming most of their resources and the paid tiers not contributing enough money.

They’ve apparently raised over $290 Million in venture funding over many rounds. That’s enough that there will definitely not be an OmniGroup-like small-shop happy ending to all this. Either Evernote goes public, and is subject to market pressures that may force it to change its behavior; or it is acquired by a large player like Google, and is “integrated” into something like GooglePlus.

Or, of course, they go out of business. Given their open data format they are more vulnerable than not should a game-changing revolutionary note-management thingy evolve and come shuffling up out of the mud. I don’t think that’s likely, but then I was one of the many geniuses who thought up Twitter in 2005 and then dismissed the idea as culturally pointless and commercially hopeless. (We may all be proved right in the end…)

All of these risks exist with any product you might choose, certainly with any product that’s this popular. I have no reason to believe the odds are any worse with Evernote than with any of its competitors.

But it’s still a game with odds. It would be possible to build a product that specifically avoids lock-in, but what kind of business would that be?

Finally, let’s consider the security risks.

Evernote – the app, the service, the company – wants to be used for as much of your information-world as possible. I wouldn’t be surprised if they build in e-mail at some point. And just as with e-mail, the result is that a lot of your very private information lives, unencrypted, on somebody else’s server. Or in somebody else’s “Cloud.”

That doesn’t mean it necessarily will be compromised, but it does mean it can be compromised. Through your own stupidity or theirs. And as Evernote grows, it becomes a target for government-sponsored cybercriminals who have beaten the likes of Google, so it’s not just a question of hiring good programmers. Evernote’s security was breached in 2013, and it probably will be again one day.

This problem, much like the lock-in problem, exists with any useful service, because the things that make it useful are precisely the things that limit its security.

In this day and age, sadly, you should assume that everything stored outside your own (encrypted!) hard drive will eventually be stolen by Bad Guys. Hopefully it won’t, but consider how private you want it to remain if that should happen.

Again this is something you could design for, but it’s not clear that would be smart business. Security always comes at the expense of convenience.

I don’t think “normal people” will ever really care about lock-in or security, and Facebook is the standard Exhibit A of this argument. I don’t think Evernote needs to lose any more sleep over these things than they already do. As the man said: If you’re not cop you’re little people.

But I, in this metaphor stew, am cop.

A Roll of One’s Own

I’ve visited and revisited this problem so many times over the years, and cobbled together so many half-assed “solutions,” that the temptation is very great indeed to just go all-in with Evernote and let the Everelves manage my Everstuff for Ever and Ever.

Yet still, there is the classic hacker’s weakness: I am always tempted to roll my own. (Back when I was a smoker, I rolled with the best of ’em. Just FYI.)

If I were to build a Note Manager Thing, its goals would be simplicity and security.

Simplicity would mean a strong bias towards plain text as the source format for, well, everything; and human-readability in addition to machine-readability wherever possible in metadata, i.e. no XML ever; and the assumption that anything else manipulating the notes is probably operating in a UNIX-like environment. These ideas are, unsurprisingly, very close to the original rules of Markdown.

Security would mean that everything is encrypted such that only the user can decrypt it. That’s a hard nut to crack of course. If I’m editing a bunch of text files in folders on my Mac then I can just stick them in an encrypted disk image et voilà, encryption. But then there’s the problem of moving things around, and of doing cool machine-learning tricks on the server, and a million other things. In reality I’d probably have to encrypt each piece of data separately, thus making the software slower and introducing some nontrivial number of bugs. What about auto-save? Ad nauseam.

Oh yes, and usability. That would be a goal too, at some point.

Just because it’s hard doesn’t mean it’s not worth doing.

It doesn’t seem quite sane to imagine this working as a business, at least absent some Silicon Valley Venture-Capital Slush Fund. Maybe if I called it Slush.ly.

Just because it’s not a business doesn’t mean it’s not worth doing.

Furthermore, I have a lot of things going on. I have a job, I have a wife, I have a gym, I have a painting studio (sort of), I have a photo lab, I have half a dozen experimental software projects, and I have a taste for the good life.

So for the moment, for me, it’s probably not worth doing. I’ll probably continue to use a bunch of different programs, including Evernote, for my digital writing and writing-down. And I will probably return to think about the problem once or twice a year, secretly hoping someone who shares my design goals has gone and built the thing I would build so I can just trade him (or her!) a few Merkels… er, € for it.

On the other hand:

If you’re listening, Marc Andreessen, I wouldn’t say no to your money.


Friday, September 26, 2014

’Ello Ello

There is a new social network in town: Ello. ’Ello Ello!

I’ve noticed enough people in my Facebook list joining Ello that I would definitely say it’s “trending.” And it turns out it was started by the guy who makes those nerdy-edgy plastic figurines you’ve probably seen on some programmer’s desk: Kidrobot.

I plan to give it a whirl as soon as somebody invites me in. Yes, that means Ello is trying to limit its growth rate by being invitation-only and limiting the number of invitations people get. It’s an old trick but it probably works, even if (sorry) it’s no longer cool to have a pocket full of invites.

Ello says that you are not a product.

But I see a few problems that lie in store for Ello.

1. Exponential Growth

Sooner or later they have to stop rationing invitations, and then since they are the buzz-child of the moment their growth will explode.

This is a really, really hard thing to handle even if you have a lot of money and great connections in the relevant nerd circles.

The normal way to deal with this is to take on venture funding so you can hire the experts and build up the infrastructure. But as soon as you take VC money you have somebody who can veto your Manifesto.

Now, a smart VC will let them burn money and follow their Manifesto and hope for an Instagram Moment, when Facebook fears all the cool kids will be on Ello and thus buys them for billions of dollars.

Either way, I don’t think there is a way to fund a conventional social network (which this is) without taking investment, and if they do then sooner or later they will sell out.

2. Scope Creep vs. Focus

Ello’s business model, such as it is, basically amounts to scope creep. I think they are seriously underestimating how hard it will be to just keep their minimalist feature set intact and usable at scale. If they want to add a bunch of features at the same time, that’s great, but then who’s going to keep the lights on?

In that sense it sounds like they want to be Craigslist — a lifestyle business that “goes viral” — but running a social network is a lot harder and more expensive than running Craig’s little apartments-and-prostitution site.

They also will need serious apps for iOS, Android and Windows Mobile. The iOS and Android apps are at least on their official feature backlog. Maybe Microsoft will write the Windows Mobile app for them.

In the best case you might have a group of talented programmers and designers who are willing to quit their jobs and work full-time for a year or two for equity. But sooner or later (most likely sooner) people need to get paid, and Ello needs real talent in two very high-end disciplines: mobile app engineering and scalability.

Again this comes down to the VC question. Ello has already taken a small seed investment, according to Crunchbase. But it’s not enough to buy full-time engineers, and until they can hire people they will have to choose between growth and features. Or perhaps I should say I hope they are smart enough to realize it’s one or the other.

3. Financial Sustainability

If by some miracle they manage to avoid the VC trap, how are they supposed to make money?

So far they talk about selling “premium features” but can you really operate a big social network on subscription revenue, with most of your users not paying anything? With server farms and a bunch of engineers who get market pay because they have to do the boring work? With a legal department that’s not going to be little pro-bono junior Lessigs?

I don’t buy it, not for a second. Even if enough of my friends went on Ello to make it worth splitting my activity into two networks (“cool” Ello vs “dull” Facebook) and even if I would pay $50/year for the service, I don’t think anybody else I know in Budapest would pay for it. Say that’s 200 people, now you have revenue of 25c/user/year, with which you can buy exactly nothing. I’m sure the “premium conversion” rate would be higher in the Bay Area but globally I don’t think 0.5% sounds pessimistic at all.

To put it another way: I don’t think Facebook is an ad platform because Zuckerberg is Evil, I think Facebook is an ad platform because there’s no other way to pay for a full-scale social network with all the features people want. That they’re very very good at being an ad platform, and utterly craven about their users’ privacy, accounts for their huge profits; but without ads per se I don’t think they could keep the blinkenlights on.

When the founder says things like “data is cheap” you have to wonder whether he’s been exposed to real-world, large-scale data-management problems. Of course the physical and virtual infrastructure required to launch a startup is much, much cheaper than it was back in 2004, but engineering talent is more expensive and a lot of the problems are vastly more complex. And sooner or later you have to scale up.

Never Say Never

Of course there is nothing new about these problems, just as there is nothing new about making the Anti-Facebook. But that’s no reason to not try.

I personally think the Next Big Social Thing will be less centralized, if only because Facebook sits in social networks where Google sits in web search: behind a massive capital-intensive barrier to entry, namely its server farms. To a lesser extent there is also the network effect on Facebook, but I think that’s easier to break than it seems. The rush of people wanting to try Ello points to the depth of dissatisfaction with Facebook; I would expect any reasonably convincing Anti-Facebook would benefit from that.

To be clear: if I were the Ello people I’d take this buzz and run as fast and far as I could with it. I’d get half a billion dollars in venture capital and make every investor sign off on my utterly unproved business model. I’d use the successful obstinance of Mark Zuckerberg as my template. I’d build a war chest big enough to run Ello according to my chosen principles for years and years and figure that sooner or later I’ll either figure out how to make money at it or find good money to chase the bad.

In today’s Silicon Valley investment climate I would even be a little surprised if they didn’t build a preposterously large war chest. And if they really could force their investors to sign on to the Manifesto, returns & exits be damned, then that would solve all three of the problems I mentioned above.

While I’m more than a little sceptical about their long-term prospects, maybe it doesn’t matter.

Maybe Ello is the next Facebook. Maybe Ello is the Anti-Facebook. Maybe Ello lives forever. Maybe it’s better to switch social networks every few years anyway.

I wish them the best. I know they will go through a lot of pain when they have to scale, but the very fact that it looks like they will have to scale puts them at the head of the pack.
