Monday, April 03, 2017

A Thought on Airlines

I recently bought a ticket on AirBerlin, which is my preferred airline for traveling between my home in Berlin and my favorite destination of Budapest.

It was a horrible experience.  Their preferred payment system, the security of which seems questionable, failed to process my payment (from my Berlin bank no less); it was impossible to return to a booking in progress if the browser window closed (hello, cookies?); and in the middle of retrying this, literally within minutes, they raised the price of one of the flights.  And that's just the technical part.  It was impossible to figure out which fare class would actually be cheapest once luggage and seats were added; it wasn't obvious who operates the flight (Alitalia, says SeatGuru, which would be a first for this route); and in general things were designed to keep the customer in the dark.

Unfortunately, there's nothing particularly unique about that.  Airlines are in general hostile to transparency, and constantly try to squeeze you for an extra five Euros here, an extra $19.95 there.  (Here at least the cheapos like WizzAir are honest: pay for the good seat or your trip will suck.  Pay for your booze, but they'll be happy to sell it to you.  Ryan and Easy, however, remain below contempt.)

The strange thing in all this is that I expect my flight to be pleasant, professionally flown, and more or less on time.

And it's like that almost every time.

We buy a ticket from a company that obviously hates its customers, and we then put our lives in that same company's hands.  And somehow this works.

Wednesday, November 09, 2016

Ten Quiet Predictions for a Trump Presidency

It’s done: in an upset that should have surprised no one, Donald Trump has been elected President of the United States. As to how this happened and why, the pundit classes have been so busy dissecting the cause and the method that they didn’t look up to see the oncoming train. The train is upon us now, lumbering forward in its orange sputtering of non sequitur trutherisms.

For the moment, I’m not very interested in how we got here; as an American from the sticks, I accept that he speaks for the majority. Most of that majority really does stand behind him, and a few (like Spineless Paul Ryan, and most of the Religious Right) are mere opportunists who have nonetheless ceded their voice to the Orange King. He is their voice. For real this time.

(I read somewhere that the Republican Party had created the monster that is Trump’s base; and Trump had stolen their monster for his own uses. This is probably true, but it’s also true that this kind of talk inspired the monster to don a Deplorable Me t-shirt and get out to vote.)

It’s early. Hillary has conceded, so there is no second-guessing Trump’s victory. (And in at least one point I’m relieved: I will not have to see a second attempt at dynastic leadership just yet, at least not until Ivanka runs).

The Republican Party – the Trump Party, as it frankly ought to be renamed, its logo painted in gold, with a 5% members’ discount at select properties – will also control both houses of Congress, and most likely the Supreme Court with the ideologue of their choice by mid-January.

That means they will have to govern, or at least try. This will not be easy, especially after so long in cushy obstructionism. The likes of Ryan are a little too used to writing budgets without using math, and taking responsibility will be hard. But I do believe they will take action, because without action the Donald will bore and worry and fear for his popularity; and if there’s one thing we now know for certain, it’s that the Orange King can eat Ryan and his covey of twerps for lunch.

And now to my predictions. Like most predictions, they will most likely not age well. But here we go all the same.

1. Trump will return to a limited pragmatism.

He will certainly remain an oafish, bigoted conspiracy theorist, but as far as he can he will avoid outright confrontation with half the country.

Instead he will try for things he can win without protest, or win despite protest, or look good losing.

Ivanka will be chief of staff and unofficial COO.

2. Environmental protections will be gutted to the point of meaninglessness.

Fracking ho! This will be the price of the Koch brothers’ support, but Trump will pay it gladly as time and again his base has come out in self-defeating opposition to government regulation of industry.

In the short term, the poor will pay the price as they usually do. Long-term this means the worst for climate change, but the worst was coming anyway.

3. Black Lives will Matter Less.

This one is obvious: after all we’re talking about the Birther President.

The government will no longer support any efforts towards racial equality. The Justice Department will no longer pursue vote-suppression investigations. There will be laws against recording police actions on video.

These things will be to the great satisfaction of the white supremacists, but Trump will appoint a few token People of Color in his administration and swear he’s not a racist.

4. Assad will win in Syria, with Russia’s help, and with some partition.

Trump will wash his hands of Syria, but look like a statesman of sorts for brokering a deal between Turkey and Russia that will give most of Syria back to Assad while carving out a small zone for the anti-Assad forces loyal to Turkey. The Kurds may or may not be sold out – largely depending on whether Trump learns anything about them before making the deal.

The deal may well be proposed by Russia, but giving Trump credit will be part of the deal.

5. Trump will play chicken with Mexico and Canada on trade, and win.

I think NAFTA will be an easy target for Trump. It’s insanely unpopular with his base, who blame it above all else for the gutting of American factory jobs. And while it certainly benefits the US as such, that benefit accrues mostly to big business.

His argument will be that NAFTA was a lousy deal for the USA, and that Mexico and to a lesser extent Canada are the big winners. He will demand concessions, maybe even a whole new treaty. And he will get his concessions, particularly from Mexico.

The big question is whether they will be symbolic concessions or substantive ones. If they are substantive, we might see it helping agriculture in the US, but I have a hard time picturing it doing anything for manufacturing.

6. Trump will play chicken with China on trade, and lose.

Bolstered by the popularity of his NAFTA re-do, Trump will try to “get a better deal” with China. China will play him like a Guzheng and extract serious political concessions in return for meaningless trade platitudes, then use the cudgel of their dollar reserves to prevent anything substantive changing to their disadvantage.

Trump will find a way to spin this in his favor, but the financial press will give him a very hard time for it.

7. US isolationism will be taken advantage of across the globe.

China, Russia, Iran, Pakistan, Saudi Arabia, North Korea, and Poland – just to name a few – will all do things at home and with (or to) their close neighbors that would previously have gotten them in deep trouble with America. Trump will let it slide.

His political intuition on this point will turn out to be right: most Americans will prefer to not care who’s killing whom on the other side of the world.

8. Obamacare will be abolished, but the pre-existing-condition protection will remain.

This point, and the next, are where I see a glimmer of optimism around the Trump presidency. I think he has enough populist sense to not take away something as important as the pre-existing condition protection. (For any international readers: pre-Obamacare you could not buy individual insurance in America that covered any illness or injury you already had.)

I also think Ivanka will push him to do this, and unlike any pre-Trumpian Republican he will not have any problem sticking it to the (obscenely profitable) insurance industry, nor will Spineless Paul Ryan be able to stand up to him. On anything, really.

9. The tax code will be reformed.

Over the objections of the business elite, and to the consternation of the entrenched Republicans in Congress, Trump will force through a simplified tax code that even he can explain to the people. It won’t be great for anyone but the rich but it will “seem fair” at first glance.

As with the “something better” for Obamacare he will face stiff opposition from his own ranks, but he will bulldoze them, and also get some unexpected support from left-leaning Democrats if the tax is not completely regressive.

This, together with the NAFTA deal, will be his signature achievement.

If he gets the Trump Party in line early, don’t rule out a flat tax on income, possibly with a lower rate for investment income. This kind of regressive tax plays well with the middle class because it “seems fair,” and it also buys a lot of loyalty from the upper-middle class that might not be with you ideologically. It worked in Hungary and I’m sure Orbán will suggest it to Trump at some point.

10. The poor will get poorer, etc; victory will be declared.

Chaos, unpredictability, and amateurish mistakes will exacerbate the problems you’d already expect from the normal, growth-killing Republican policies.

The economy won’t tank unless there is a major unforeseen catastrophe, but it will be sluggish at best. That part of Trump’s base that could be called the economic losers of globalization will be even worse off than they are now, and except for the very rich nobody will be much better off.

But Trump will have a few big wins to offset his big losses, and the Democratic Party will be split as usual between the business-establishment wing and the Sanders leftists.

2020 is his to win, looking at it from here. But to do it he has to be a little bit lucky with global events – no new wars, no economic meltdown – and he has to avoid the temptation to stock his administration with characters from the clown car of his recent campaign.

While I’m sure there will be pressure to appoint, say, Rudy Giuliani as Attorney General, I hope that Trump the Opportunist at least recognizes that he’s now the Winner and can choose from a more competent class of sycophants.

Labels: , ,

Thursday, April 14, 2016

Interesting JSON Benchmarks Go.

These days lots of people are building microservices, and microservices usually involve HTTP APIs, which in turn usually exchange data as JSON.

Not long ago somebody pointed out that a lot of effort goes into generating and parsing that JSON. It would be unwise to simply ignore this part of your system’s design.

Since it’s very easy to benchmark things in Go, I decided to do a quick comparison of JSON encoding strategies.

Go JSON Encoding

The normal way to generate JSON in Go is to use the encoding/json package, and feed your struct into the json.Marshal function. This function will take anything and try to convert it to JSON. If your struct, or anything in it, has its own MarshalJSON method then that is used; otherwise it’s examined using reflection.

Reflection is (supposed to be) expensive, so I wanted to see how much I might save by making my own JSON encoder for a struct. The main point being that I already know what the struct is made of, so I can save the encoder the trouble of examining it.

Benchmarked Variations

I started with several, er, structurally identical structs:

  1. A naïve one, with no MarshalJSON function of its own.
  2. A hinted one, with field names provided.
  3. A smart one, with its own proper MarshalJSON function.
  4. A fake one, which returns previously set data from its MarshalJSON.

The point of the fake one, of course, is to isolate the overhead of the actual JSON encoding.

All of these, when set up with a bit of standard fake data, generate the following JSON:

   "Id" : 123,
   "Stuff" : [
   "Desc" : "Something with \"quotes\" to untangle.",
   "Time" : "1970-01-01T01:16:40+01:00",
   "Insiders" : {
      "One" : {
         "Id" : 321,
         "Name" : "Eenie"
      "Two" : {
         "Id" : 421,
         "Name" : "Meenie"

Surprising Results

Here are the benchmark results for this little experiment, as run on a MacBook Pro (Mid 2014) with 2.8 GHz i5, 8 GB RAM, under Go 1.6.1.

BenchmarkNaïveJsonMarshal-4               300000          4299 ns/op
BenchmarkHintedJsonMarshal-4              300000          4293 ns/op
BenchmarkSmartJsonMarshal-4               200000          6490 ns/op
BenchmarkSmartJsonMarshalDirect-4         300000          4149 ns/op
BenchmarkFakeSmartJsonMarshal-4          1000000          2299 ns/op
BenchmarkFakeSmartJsonMarshalDirect-4   10000000           115 ns/op

In the “Direct” benchmarks, the struct’s own MarshalJSON function is called without going through encoding/json, i.e. without any sanity-checking.

I expected to see a lot of overhead from the reflection, i.e. the unknown struct being examined. Instead I found that using your own MarshalJSON method is actually slower, because json.Marshal (sensibly enough) validates your method’s output for you, lest it accidentally return invalid JSON itself.

Also, the hinting doesn’t make much of a difference, but it can make your JSON output prettier and more predictable: one usually uses it to have lowercase and/or underscore_separated key names in JSON objects, and to omit null objects in order to compact the JSON.

Using the numbers above we can very crudely estimate:

  • Custom encoding with validity checks is about 50% slower.
  • Custom encoding without validity checks is about 3.5% faster.
  • Best-case custom encoding with validity checks is about 50% faster.

In order to use the custom encoding without validity checks, you have to do all the encoding in a non-idiomatic way. This makes your codebase more fragile, because a new collaborator can’t just step in and do the obvious thing without undoing your optimizations.

It would be interesting to see how these numbers scaled with more complex structs, in particular deeper nested objects.

Based on these benchmarks, which I admit are oversimplified, I recommend avoiding custom MarshalJSON functions unless you absolutely need them for handling unusual data structures. If you want them for speed, make sure to benchmark your implementation before making a final decision.

Source Code

Labels: , , ,

Wednesday, April 06, 2016

I Am Possibly Legend

Recently I finally – finally! – got around to seeing the 2007 Will Smith SciFi vehicle I Am Legend. Apparently it was a remake, of sorts, of Omega Man, which I saw as a teenager and barely remember.

Spoiler Alert! Just in case you haven’t seen it.

Here’s the really big problem. The Zombies aren’t trying to kill Will Smith because he’s the savior of the human race, they’re trying to kill him because he’s a serial killer, responsible for hundreds of disappearances, tortures, and grisly murders. He’s more than a bad guy, he’s Zombietown’s own Josef Mengele.

He abducts the Zombie King’s girlfriend/daughter/wife/buddy/chess-partner (we aren’t told which), drawing the Zombie King out of his lair at risk of death by exposure. Our Hero then performs sick medical experiments on her, fully expecting her to die of them, in the name of “curing” her. She dies.

Then, the Zombie King – proving, by the way, they’re not zombies at all in the traditional film sense – comes up with an elaborate trap to capture Lt. Col. Mengele. After that fails, the Zombie King directs an army that almost kills Our Hero, but the Lt. Col. is saved by another NonZombie who shows up out of nowhere.

It should be said that Our Hero is at that point basically suicidal, because the Zombie King’s dogs zombified the Hero’s dog in their failed capture attempt, and the dog was Our Hero’s best and only friend, there being no other NonZombies around, and to top it off Hero had to kill Dog to prevent the zombification.

But! But all along our Hero makes audio notes, shared with the audience, in which he says things about the Zombies that are demonstrably not true. Mostly, that they are actually zombies, and not merely Very Different Humans.

The only part of this I don’t understand is whether the filmmakers were trying to gloss over the murderous aspect, or whether they were through it trying to point out the futility of all life, of our folly before the gods. Because – spoiler alert – in the end the Good Dr. Lt. Col. Hero’s life’s work of abduction, torture and murder succeeds, and ends the plague of zombism.

Well, if you believe the narrator it does. Considering how reliable the narration is up to that point, I am much more inclined to believe there was a successful genocide launched against the Zombies. And for that matter I am specifically disinclined to believe they ate all the nice folks in the first place.


Saturday, March 12, 2016

Many Servers in One, with Go.

I’m still fiddling with a web server idea, currently trying my umpteenth version of it in Go. One of the things I want to do is serve a bunch of different things from one binary; they would usually live behind Nginx anyway, and I like the simplicity of serving a bunch of (somewhat) dynamic, small sites from a single program.

So, how does one do it? Why, one does it with Goroutines!

I had expected that to be the answer, but even then I was surprised at the simplicity of this solution. You don’t even need channels: http.ListenAndServe is a blocking call, so as long as the last one is called outside a Goroutine, the program keeps running.

Here’s the code, which should be self-explanatory.

Labels: ,

Friday, January 08, 2016

How to Test Logging in Go

Google’s Go language has a very rich and at times utterly infuriating standard library. One of the things it includes is logging. Since there are six million ways to log (choose one!) we shouldn’t get too worked up about the ways in which the log package doesn’t log the right way for, well, probably for anyone outside of Google. That’s not the point.

The point, and it’s a happy one today, is that a mere ounce of prevention will get you easily testable logging with Go’s standard log package. Caveat lector: I have no idea whether this works with Go prior to version 1.5.

Your Ounce of Prevention

Especially when hacking together quick utilities or packages, it’s always tempting to do this:

package logdemo

import "log"

func Log() {
    log.Println("woo hoo")
}

However, that will be very hard to test. Take the time to set up a logger, and expose it. Ideally you can do this with a struct:

package logdemo

import (
    "log"
    "os"
)

type LoggingThing struct {
    Logger *log.Logger
}

func New(name string) *LoggingThing {
    prefix := "thing-" + name + " "
    flags := log.Ldate | log.Ltime | log.Lmicroseconds
    return &LoggingThing{Logger: log.New(os.Stdout, prefix, flags)}
}

func (lt *LoggingThing) Log(msg string) {
    lt.Logger.Println(msg)
}

Now you have something that can be messed with in your unit tests, and is also much easier to debug should the need arise.

But if you really can’t deal in structs, at least set up a package variable for the logger. This too can be manipulated.

package logdemo

import (
    "log"
    "os"
)

var logger_prefix = "global "
var logger_flags = log.Ldate | log.Ltime | log.Lmicroseconds
var Logger = log.New(os.Stdout, logger_prefix, logger_flags)

func Log(msg string) {
    Logger.Println(msg)
}

Your Pound Sterling of Cure

Now that we have an exposed Logger (two, really), we only need to know two things: a Logger can have its Output set after the fact, and that Output needs only to be an io.Writer, which as you may recall simply requires implementing this:

type Writer interface {
        Write(p []byte) (n int, err error)
}

In our test package, we can set up a simple struct to catch logs instead of writing them:

type LogCatcher struct {
    Logs []string
    Last string
}

func (lc *LogCatcher) Write(p []byte) (n int, err error) {
    s := string(p)
    lc.Logs = append(lc.Logs, s)
    lc.Last = s
    return len(p), nil
}

That will give us raw logs, but they may include prefixes and timestamps. Prefixes, if they are used at all, are quite useful; but timestamps are very hard to test for, since you then have to match log entries with regular expressions instead of simple strings.

Fortunately, that too can be overridden. Our overrides then look something like this:

thing := logdemo.New("one")

catcher := &LogCatcher{}

// If you want to test the prefix and output format you can of course
// do that separately.  For testing the written logs it's asking a
// lot, but we can override!
thing.Logger.SetOutput(catcher)
thing.Logger.SetPrefix("")
thing.Logger.SetFlags(0)

// Here we log, and catch it!
thing.Log("here")
assert.Equal("here\n", catcher.Last, "caught first log")

Note that I am using the excellent assert package in order to make unit testing sane. (Again with the rich but infuriating standard library…)

If you’ve read this far and you aren’t totally lost then you can imagine how this continues. But you don’t have to imagine, because the code is there for you on GitHub under biztos/go-demos, specifically as
logdemo.go and logdemo_test.go.

Congratulations, now your logging will not prevent you from achieving the coveted (and utterly spurious) 100% code-coverage metric!

Labels: , , ,

Tuesday, November 03, 2015

Test Package Naming in Go

I’m still – still – playing with Google’s Go language, making the occasional one-off utility and also working on a Crazy Web Server Project. I hesitate to call it a Framework, since all Frameworks are ultimately doomed. It’s nothing too extreme, but it’s (hopefully) going to solve some of my web-server problems for small sites; and it’s (absolutely) helping me learn Go “for reals.”

I recently discovered something very simple, more or less by accident, that has big implications. It is this: tests for package foo should declare package foo_test.

To me this was counterintuitive, since one of the big hairy things you have to adjust to when learning Go is the flat package directory. A package is made up of a bunch of .go files, and they all live in one flat directory. If you want to organize the files in a hierarchical set of directories, you must also have a corresponding set of packages (though they need not be hierarchical).

Incidentally, as I get better at Go I find myself hating such conventions less, because they are useful in forcing the programmer’s hand: be idiomatic damn you! And that certainly has its upsides.

Thus, logically, every .go file in a given directory declares the same package. If it doesn’t, you get a compile-time error!

Except, that is, for tests.

The Test Exception

For a directory with a package foo, you may have two packages: foo and foo_test. You may not have any others. So, why would you want to use foo_test?

There is a very good reason, which I’ll get to in a moment. If you already have a lot of unit-testing experience and know a little Go you may already have guessed.

Let’s first consider the way I had been doing it until recently. Apologies in advance for the lack of syntax highlighting.


package foo

func Bar() string { return "BAZ"}


package foo

import "testing"

func Test_Bar(t *testing.T) {

    exp := "BAZ"
    got := Bar()
    if got != exp {
        t.Errorf("Expected '%s', got '%s'", exp, got)
    }
}

OK, great. This makes perfect sense so far. But now consider this, which in fact seems quite the obvious thing to do:


package foo

func Bar() string { return ook(1) }

func ook(i int) string {
    if i > 0 {
        return "BAZ"
    } else {
        return "BAT"
    }
}

package foo

import "testing"

func Test_Bar(t *testing.T) {

    exp := "BAZ"
    got := Bar()
    if got != exp {
        t.Errorf("Expected '%s', got '%s'", exp, got)
    }
}

func Test_ook(t *testing.T) {

    exp0 := "BAT"
    got0 := ook(0)
    if got0 != exp0 {
        t.Errorf("For 0, expected '%s', got '%s'", exp0, got0)
    }

    exp1 := "BAZ"
    got1 := ook(1)
    if got1 != exp1 {
        t.Errorf("For 1, expected '%s', got '%s'", exp1, got1)
    }
}


The example above shows a very common pattern, about whose virtue we could certainly argue: you can have much more thorough test coverage if you have robust unit tests for private functions. (Remember, in Go only Capitalized functions are exported.)

However, there’s a cost in clarity: you should be testing all things that could happen in the use of the package, i.e. you should be testing all variations of the public API. If there are conditions your public API can not trigger, and which thus can not be tested via the public API, then remove them from the code.

Go is strongly biased towards writing only the code that is actually used. Speculative code, while possible, is definitely not idiomatic.

As somebody who writes code for a lot of “what-if” scenarios in Perl, I can certainly appreciate that discipline.

This is why you should use the foo_test package: Go will then throw a compile time error if you try to test a private function, and in your quest for 100% code coverage you will see the unreachable parts of your code.

So let’s get back to our example, changing the test first:


package foo_test

import (
    "testing"

    "foo" // the real import path depends on your project layout
)

func Test_Bar(t *testing.T) {

    exp := "BAZ"
    got := foo.Bar()
    if got != exp {
        t.Errorf("Expected '%s', got '%s'", exp, got)
    }
}

If Bar is the only thing that ever calls ook then you’ll have to change ook or you’ll never get 100% coverage. In this contrived example the compiler will most likely inline ook entirely, but bear with me.


package foo

func Bar() string { return ook() }

func ook() string {
    return "BAZ"
}

And there we have it, in silly little examples. Always use a _test package name for your Go tests. Testing will be harder. Deal with it: your tests will also be better.

Labels: , , , ,