Havoc's Blog

this blog contains blog posts

Layout APIs don’t have to be terrible – lessons from Bokeh

Our APIs for arranging elements on the screen are stuck in the stone age. Using these APIs in the simplest way results in an ugly layout. To achieve pretty layouts we deploy tricks that aren’t obvious to newcomers, though many of us have internalized them and don’t notice anymore.

If you ask a designer to arrange items on a page, by default they’re going to line shit up. (They’ll think a lot about exactly which things to align with which other things in order to communicate what’s related to what, and there may be some strategic non-alignment, but … alignment will be happening.)

Layout APIs aren’t designed to implement what graphic designers design.

In the print design tradition, designers don’t have to think about the window size changing or users choosing a larger font. Developers have to translate mockups to code in a way that keeps the design intent while allowing stuff to scale. Alignment of items will often be the most important aspect of the design to preserve.

When we aren’t working with a designer, lining stuff up is a good default anyway. It’ll look better than not. But typical layout APIs require extra work to get stuff aligned.

In this post I’ll talk about:

  • how we tried to improve this in the Bokeh project
  • three ideas that any UI toolkit could explore to escape the limitations of today’s layout APIs

I suspect there’s a large innovation space here that hasn’t been explored.

Bokeh layout

If you aren’t familiar with Bokeh, it’s a toolkit to create plots and data-centric applications or dashboards. Bokeh lets someone build a web app in plain Python without becoming a web developer. Many data scientists are Python developers but don’t keep up with the modern web stack (JS, HTTP, CSS). Web development is a full-time job in itself, and it’s a distraction.

We faced an issue that typical Bokeh data visualization apps didn’t look nice; after all, they were created by data scientists without design help. How could we make the easy approach a data scientist might choose look nice by default?

Bokeh users were placing their plots in rows and columns (also known to some as hboxes and vboxes). But if you had two plots in a column, for example, the different label widths on their Y axes would keep those axes from aligning with one another. If you stacked two rows into a column, the rows might not be the same total width, and even if each row had the same number of plots, the result wouldn’t be an aligned grid.

We asked designer Sara Schnadt to mock up some prototypical layouts for data visualization apps. She used rows and columns just as data scientists were doing, but she aligned a bunch of the elements.

Here’s the goal we arrived at: let the data scientist group plots into rows and columns, and then match widths/heights and align axes as a designer might.

That is, the input should be something like this with no additional hints or manual layout tweaks:

column(row(plot1, plot2), row(plot3, plot4))

Using that input alone, the output has to look good.
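For concreteness, here’s roughly what that usage looks like with Bokeh’s plotting and layout APIs (a minimal sketch, assuming a reasonably recent Bokeh; the data is made up, with deliberately different y ranges so the tick-label widths differ):

from bokeh.layouts import column, row
from bokeh.plotting import figure, show

# Four plots with different y ranges, so the y-axis tick labels
# (and therefore the natural axis widths) differ.
plot1 = figure(title="plot1")
plot1.line([1, 2, 3], [4, 6, 5])

plot2 = figure(title="plot2")
plot2.line([1, 2, 3], [400000, 600000, 500000])

plot3 = figure(title="plot3")
plot3.line([1, 2, 3], [0.1, 0.2, 0.15])

plot4 = figure(title="plot4")
plot4.line([1, 2, 3], [7, 8, 9])

# No hints, no manual sizing: the layout machinery is expected to
# match widths/heights and line up the axes across the grid.
show(column(row(plot1, plot2), row(plot3, plot4)))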

Existing solutions used in native toolkits and in CSS were not appropriate:

  • boxes and variations on them are simple but the results look bad without a lot of manual massaging;
  • grid/table systems require manually choosing the grid size and manually specifying coordinates to attach each element, which is too much work, and making changes later is a pain;
  • constraint systems are complicated to understand and horrible to debug;
  • fixed sizes and positions aren’t acceptable for most apps.

Here’s what we ended up doing. We converted the rows and columns into a constraint system (using the same Cassowary algorithm that’s made it from scwm to iOS); but for Bokeh, the constraint system is an internal implementation detail, so data scientists shouldn’t end up debugging it. And our constraints are about alignment.

Bryan van de Ven and I created an initial prototype of the algorithm, and Sarah Bird did 95% of the work fixing it and converting the concept into a reality. Bokeh uses Chris Colbert’s implementation of Cassowary for JavaScript.

Our prototype used wireframe dummy widgets. Here’s what happens when you arrange some dummy “plots” with a traditional box layout algorithm:


And here’s what happens when the same dummy widgets (plus I guess a couple more at the bottom) are arranged with the new algorithm:


The code for that prototype can be found here. After a ton of work by Sarah, you can find some real Bokeh code running at http://demo.bokehplots.com, and the source on GitHub.

There’s a lot more that Bokeh could do to be even smarter about layout, but this is a start.

Three ideas to explore further

In the spirit of “yes, and…” I’d love to see other toolkits (and Bokeh) take this further. Here are some ideas I think are useful.

1. Do layout globally, not recursively.

For implementation reasons, toolkits use recursive layout algorithms; each node in the widget tree is responsible for arranging its immediate children.

But often we want to align things that may be in different sub-branches. There are ways to do this, such as hacking in some fixed sizes, or adding hints (one such hint API is GtkSizeGroup). In Bokeh, we use the tree of rows and columns to infer constraints, but we pull everything up into a global toplevel constraint system. With global layout, constraints can span the entire tree.

2. Use constraints as implementation, but try to avoid them in the public API.

Cassowary is a clever algorithm, but not a friendly API in any way. Read these iOS docs; simple, right?

Bokeh takes rows and columns as input. It could probably allow you to provide additional hints, but manually providing constraints gets ugly quickly.

I initially tried to prototype the Bokeh layout algorithm without using a constraint solver; I think it’s possible to do, but the constraint solver approach is much, much simpler. As long as app developers aren’t exposed to debugging constraint systems, it’s a good route to take.
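To make “alignment as a constraint” concrete, here’s a toy sketch using kiwisolver, a Python solver in the same Cassowary/kiwi family (Bokeh itself uses the JavaScript implementation mentioned above). The variables and numbers are invented for illustration:

from kiwisolver import Solver, Variable

# Two plots stacked in a column; each has a y-axis label area with a
# different measured width (numbers invented for the example).
label_w1, label_w2 = Variable("label_w1"), Variable("label_w2")
axis_x1, axis_x2 = Variable("axis_x1"), Variable("axis_x2")

solver = Solver()
solver.addConstraint(label_w1 == 30)           # measured label width, plot 1
solver.addConstraint(label_w2 == 55)           # measured label width, plot 2
solver.addConstraint(axis_x1 >= label_w1)      # each axis sits after its labels
solver.addConstraint(axis_x2 >= label_w2)
solver.addConstraint(axis_x1 == axis_x2)       # the alignment constraint
solver.addConstraint((axis_x1 == 0) | "weak")  # otherwise keep axes as far left as possible

solver.updateVariables()
print(axis_x1.value(), axis_x2.value())        # both 55.0: aligned at the wider label

The alignment itself is a single equality constraint; the rest is bookkeeping that the toolkit can generate automatically from the row/column tree, which is essentially what Bokeh does internally.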

3. Align visual elements, not logical bounds of widgets.

For example:

  • even though a plot including its axes might be one widget, we want to consider the axes an element for alignment purposes
  • even though a widget such as a slider might have some whitespace around it, we want to consider the visible pixel bounds for alignment purposes

We could think of something like text baseline alignment as a special case of this idea.

Lots more to do

For me, layout remains an unsolved problem. I think it’s because we’re in a rut of old assumptions, such as recursive algorithms that use logical widget bounds without ever “peeking in” to the visual elements inside the widget. We’ve also failed to find the right role for constraint systems; we are trying to use them as the API itself, but in fact they are a powerful implementation technique that could support a lot of different higher-level layout APIs. We could do more to figure out what those APIs could be.

Historically, layout APIs have been implementation-driven; they didn’t start by thinking about “what graphic designers design,” or “what app developers would like to maintain,” they started with something like “how GUI toolkits work.” Rethinking these APIs in a more top-down, experience-oriented way could find quite a bit of room for improvement.


Professional corner-cutting

Steve Jobs famously cared about the unseen backs of cabinets. Antique furniture built with hand tools isn’t like that at all. Cabinetmakers made each part to the tolerance that mattered. The invisible parts were left rough, with plane and saw marks, to save time. The visible parts, however, were cleaned up and polished. Some surfaces were made precisely straight and square, for structural reasons; while nonstructural surfaces were only straight enough to look good to the eye.

Think about an apprentice in an old cabinet shop. An apprentice painstakingly smoothing an invisible surface would be yelled at for wasting time. An apprentice failing to smooth a visible surface would be yelled at for producing crappy work. To become a professional, the apprentice learned to work efficiently but still do a good job. Crucially, a “good job” was defined in terms of customer concerns. [1]

Cabinetmakers were focused on what their customers cared about. Customers wanted the furniture to look good, and they wanted it to be structurally sound. They didn’t care about invisible tool marks, and didn’t want to pay extra to have those removed.

Software remains a craft rather than a science, relying on the experience of the craftsperson. Like cabinetmakers, we proceed one step at a time, making judgments about what’s important and what isn’t at each step.

A professional developer does thorough work when it matters, and cuts irrelevant corners that aren’t worth wasting time on. Extremely productive developers don’t have supernatural coding skills; their secret is to write only the code that matters.

How can we do a better job cutting corners? I think we can learn a lot from people building tables and dressers.

1. Own the implementation decisions

It is irresponsible to ask the customer (or manager, or other not-doing-the-work stakeholder) to tell you how to make technical tradeoffs. Cabinetmakers didn’t ask their customers how flat a tenon had to be and this is not the customer’s problem. The customer wants us to do it properly but not wastefully. It is our decision how to go about this, and if we get it wrong it’s our fault.

On software teams, there’s often a developer trying to push these decisions up to management or onto the customer, because they don’t want to “get in trouble” later. Perhaps they complain to management about “technical debt” and being “given time to work on it.” This is a sign that we aren’t owning our decisions. If the technical debt is a problem, 1) we shouldn’t have put it in there, and 2) we should include it in our estimates and address it. A cabinetmaker would not ask the customer to put “make tenons straight” on the sprint. Nobody cares. Technical debt is our problem; that’s the job.

If you don’t own your technical decisions, you can never get them right, because nobody else knows how to make them. Insist on making them. And yes, that means getting them wrong is your fault. It may mean giving people bad news about how long things will take. It may mean you get yelled at sometimes.

2. Understand the customer’s needs and preferences

Because we must make tradeoffs and not push choices onto the customer, we have to understand what matters and what doesn’t. It’s easier to establish this in the world of furniture (“doesn’t break when you sit on it,” “looks nice”). In software, we have to know what job our software will do for the customer.

This is where we should be getting customer input (though watching what they do may be more valuable than asking them what they think), and reaching a consensus with our management team or client.

We should not ask customers for more precision than they can give us (a symptom of this is to badger customers or managers for detailed “requirements,” then complain endlessly about “changing requirements”). Our job involves converting vague needs into concrete software — if we’re lucky, we have the help of a skilled product designer, or guidance from a management team that’s able to be somewhat precise, but if not we have to do it ourselves. Accept the job and learn to do it.

It’s unprofessional to be the kind of developer who doesn’t care about user experience, doesn’t care about business context, or “just wants to be told the requirements.” It’s impossible to work efficiently or to do a good job without understanding the context.

A professional developer can take a desired UX and work out the technical steps to get there as efficiently as possible. And they do get there; they don’t build something odd that doesn’t meet the need, or something slapdash that doesn’t work.

3. Don’t be lazy

Corner-cutting should be a deliberate decision; “this truly doesn’t matter.” It should not be because it’s 5pm and we’re going home. When we find ourselves asking “do I really have to redo this…” then we need to redo it.

Cutting corners should feel like you have a clear focus and you’re skipping tasks that don’t matter for that focus. Cutting corners should not feel like you’re doing poor-quality work.

To push back on an unrealistic schedule, work to narrow the scope or weaken the requirements.

Let’s say you’re making some kitchen cabinets. You could make them all with hand tools, no metal connectors and no plywood. They would be gorgeous and cost about $150,000. When the customer says that’s too much time and too expensive, you could make them the usual modern way with machines, screws and plywood, which is a sound approach, though a little uglier. This is like offering to build a web app that’s not quite as slick and beautiful — something a little more off-the-shelf.

That’s all fine. What’s not fine: delivering either of those choices unfinished and broken. “Oh, I forgot the cabinet doors.” “Sorry these things aren’t painted!”

To cut scope, we should do something defined (such as leave out a feature or refinement), rather than something undefined (like skipping testing).

Professionals are doing it for others

All of this sounds hard, and it is. As in Amy Hoy’s description of students learning a craft, at first we may fight it and focus on our own needs and emotions.

Professional software developers are performing a service for others. That’s the difference between a professional and a hobbyist or an artist. To perform a service for others, we have to know what others need, and apply our expertise to meet those needs.


[1] Furniture made by machine doesn’t have the same ability to flex tolerances to save time. For the most part, with woodworking machines you get what you get; the machine doesn’t know whether your surface will be visible, or how flat it has to be. It makes one kind of surface and that’s it. Some parts of machine-made furniture aren’t as good as a handmade joint or surface could be, while other parts are far more precise than necessary. Check out this discussion of how to cut a dado joint by hand, which mentions several ways to save time on the back, non-show side of the piece.


The dangerous “UI team”

Background: customers hire products to do a job

I enjoyed Nikkel Blaase’s recent discussion of product design. In this article he puts it this way:


In another article he puts it this way:


This isn’t a new insight, but it’s still important, and well-stated here in these graphics. The product has to work and it can only work if you know what it’s supposed to do, and who it’s supposed to do it for.

“Interact with a UI” is not a job

Customers do not want to click on UI controls. Nor do they want to browse a web site, or “log in,” or “manage” anything, or for that matter interact with your product in any way. Those aren’t goals people have when they wake up in the morning. They’re more like tedious tasks they discover later.

So why does your company have a “UI team”? You’ve chartered a team with the mission “people need to click on stuff, let’s give them some clicky pixels.”

The “UI team” has “UI” right there in the name (sounds user-friendly doesn’t it?). But this is a bottom-up, implementation-driven way to define a team. You’ve defined the team by solution rather than by problem.

Instead, define your teams by asking them to solve real customer problems in the best way they can come up with. Product design doesn’t mean “come up with some pixels,” it means “solve the problem.”

The best UX will often be no UI at all: if the customer gets your product and their problem goes away instantly with no further steps, that’s amazing. Failing that, less UI beats more UI; and for many customers and problems, a traditional GUI won’t be right. Alternatives: command line, voice recognition, gestures, remote control, documentation, custom hardware, training, …

When we were inexperienced and clueless at Red Hat back in the day (1999 or so), we started stamping GUIs on all the things — because “hard to use” was a frequent (and true) criticism of Linux, and we’d heard that a GUI would solve that. Our naïveté resulted in stuff like this (this is a later, improved version, slightly more recent than the 1999-era original):


We took whatever was in the config file or command line tools and translated it into GTK+ widgets. This may be mildly useful (because it’s more discoverable than the command line), but it’s still a learning curve… plus this UI was a ton of work to implement!

In the modern Linux community, people have a little more clue about UX.  There’s still a (nicer) screen like this buried somewhere. However, I never use it. The main UX is that if you plug in a network cable, the computer connects to the network. There’s no need to open a window or click any widgets at all.

Most of the work to implement “the computer connects automatically” was behind the scenes; it was backend work, not UI work.

At Red Hat, we should have gone straight for some important problem that real people had, such as “how do I get this computer on the network?”, and solved that. Eventually we did, but only after wasting time.

Don’t start with the mission “make a UI for <insert hard-to-use thing here>.” Nobody wants to click your widgets.

Define teams by the problem they’ll be solving. Put some people on the team that know how to wrangle the backend. Put some people on the team that know HTML or UI toolkit APIs. Put a designer on the team. Maybe some QA and a copywriter. Give the team all the skills they need, but ask them to solve an articulated customer problem.

Another way to put it: a team full of hammers will go around looking for nails. A team that’s a whole toolbox might figure out what really needs doing.

P.S. a related wrong belief: that you can “build the backend first” then “put a UI on it” later.


JSON-like config, a spectrum of under/overengineering

Cynics might say that “overengineered” means “I didn’t write it and don’t understand it yet.”

I found a nice real-world example, JSON-like configuration file formats, where reasonable developers have implemented many points on a complexity spectrum. (Full disclosure: I implemented one of these.)

STOP. Don’t take this post as an excuse to defend the answer you already like!

We’ll learn more if we spend some time wrapping our heads around other developers’ thinking. Someone saw a good rationale for each point on this spectrum. The point of the exercise is to look at why each of these could make sense, to see what we can learn.

Most of these file formats are principled. They have a rationale.

Here’s an overview of the spectrum I’ve chosen (not an exhaustive list of JSON-like formats, but an illustrative range):

(sorry for the unclickable links, click on the section headers below)

Most of these are JSON supersets or near-supersets, and they end up producing a JSON-style data structure in memory.

The points on the spectrum

I’d encourage you to click on these. Go look at the details of each one.


1. JSON

You all know this one already. JSON’s principle might be ease of implementation, which means not much code to write, and less room for interoperability problems. Every software stack you’re likely to use comes with a JSON parser.

(It’s not uncommon to see JSON-with-comments, as a simple single-feature extension to JSON.)


2. HJSON

One step beyond JSON-with-comments, HJSON adds more syntactic sugar to JSON, including comments, multiline strings, and the ability to omit quotes and commas. But HJSON avoids features that introduce abstraction (HJSON does not give a config file maintainer any way to clean up duplicate or repetitive configuration). Everything in the file is a literal value.
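For instance, a small HJSON sketch (the keys are invented) showing comments, unquoted keys and strings, omitted commas, and a multiline string:

{
  # comments are allowed (// and /* */ work too)
  title: My Data App
  retries: 3
  motd:
    '''
    Multiline strings
    use triple quotes.
    '''
}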

3. Ad Hoc Play 1.x and Ad Hoc Akka 1.x

Unlike the other examples on my spectrum, these aren’t specifications or libraries. They are obsolete chunks of code that I’m using to illustrate “I’ll just do something simple and custom,” a common developer decision. Neither one has a specification, and both have implementations involving regular expressions. HOCON (discussed next) replaced both of these in Play 2.x and Akka 2.x.

Play 1.x’s ad hoc format is a riff on Java properties, adding include statements and a ${foo} syntax to define one property in terms of another.

Akka 1.x’s ad hoc format is sort of like HOCON or HJSON in syntax, and also adds include statements to allow assembling a config from multiple files.

These ad hoc formats evolved organically and may be interesting data points showing what people want from a config format.


4. HOCON

HOCON includes similar syntactic niceties to HJSON, but introduces abstractions. That is, it tries to help the config file maintainer avoid duplication. It does this by adding two features: “merging” (two objects or two files can be combined in a defined way), and “substitution” (a reference syntax ${foo} used to point to other parts of the config or to environment variables). Include statements are also supported (defined in terms of merging, that is, an include inserts another file inline and merges its fields).

HOCON avoids anything that feels like “programming”; it lacks loops, conditionals, or arithmetic. It remains purely a data file.
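Here’s a small sketch of what those features look like in a HOCON file (the key names and the included file name are invented):

standard-timeout = 30s

service {
  # substitution: refer to another value (or an environment variable)
  timeout = ${standard-timeout}
  host = localhost
}

# a duplicate key whose value is an object merges with the earlier one
service {
  host = "10.0.0.5"
}

# include inserts another file inline and merges its fields
include "local-overrides.conf"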


5. YAML

YAML doesn’t quite belong here, because it wasn’t designed for configuration specifically. It’s a more readable way to write JSON. In that sense, it’s closer to HJSON than it is to HOCON or Jsonnet, but I’ve put it on the “more engineering” end of the spectrum because YAML has a large specification with quite a few features. Because YAML has an extension mechanism, it could in principle be extended (using tags) to support abstraction features such as includes.

6. Jsonnet

With Jsonnet we jump into the world of configuration-as-code. Jsonnet is a domain-specific programming language designed to generate JSON, with conditionals, expressions, and functions.

7. Writing code in a general-purpose programming language

Many developers are passionate advocates of avoiding config-specific languages entirely; they prefer to load and evaluate a chunk of regular code, instead. This code could be anything from JavaScript to Scala (often, it’s the same language used to implement the application).
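As a sketch of what that can look like in Python (the file name and setting names are invented), the application simply evaluates the config file and pulls values out of the resulting namespace:

import runpy

# Execute "config.py" and get back its global variables as a dict.
# config.py is ordinary Python: it can compute values, read os.environ,
# import other modules, and so on.
settings = runpy.run_path("config.py")

timeout = settings.get("TIMEOUT", 30)
hosts = settings.get("HOSTS", ["localhost"])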

Principled Design

Most of these formats have a thoughtful philosophy — an overall approach that guides them as they include or exclude features. This is a Good Thing, and it’s often overlooked by less-experienced developers.


What are some of the tradeoffs, when choosing a point on this spectrum? Here are some that I came up with.

  • Dependencies. Do you need a custom library?
  • Library size. How large is the code to read/write config files?
  • Leakiness of abstraction. How much are you going to have to care about the file format, when you’re using it to get some settings for your app?
  • Config file readability. Can people tell what your config file means?
  • DRY-ness of config files. Are there any means of abstraction?
  • Composing external sources. Can config files reference environment variables, remote resources, and the like?
  • Machine-editability. Can a program reliably load/edit/save a config file without sci-fi AI?
  • Cross-language interoperability. Are multiple implementations of the config file format likely to be compatible?
  • Learnability. Can the people editing your file format guess or easily learn how the format works?

The right answer hinges on people, not tech

Often, tradeoffs like these push a problem around between people.

An application developer who chooses to use JSON config to keep things simple, may be pushing complexity onto someone else — perhaps a customer, or someone in ops who will be deploying the app.

An application developer who uses anything more complex than JSON for their config may be asking customers, ops, or support to learn a new syntax, or even to learn how to program.

When we think about engineering tradeoffs, sometimes we feel we’re advocating the Right Thing, but in fact we’re advocating the Easiest Thing For Us Personally.

There won’t be a single right way to balance different interests. Who will configure your app? What background do they have? The people matter.

All of the choices work

None of these choices for config are categorically broken. When we choose one, we’re making a judgment that matters about tradeoffs, and we’re applying some measure of personal taste, but we aren’t choosing between broken and not-broken. (That’s what makes this an example worth discussing, I think.)

Working on Data Science User Experience

For the last month or so, I’ve been working at Continuum Analytics, doing product design and development to help out data scientists. I would be overjoyed to hire a few more awesome people to work with, and I’m hoping some of you reading this will be interested enough to learn more about it.

If you’re used to a “software geek” vs. “nontechnical end user” dichotomy, data scientists might challenge your assumptions. They are a technical audience and they write code, but many wouldn’t consider themselves general software developers. Some of them are research scientists who write code to analyze their data. Others work in industry, finance, nonprofits, and journalism. The “data scientist” title could refer to a domain expert who’s picked up some coding, or to a software developer who’s picked up some statistics. The best data scientists, of course, are good at all sides of the hybrid role.

A cool thing about data scientists is that they are focused on a non-software goal (understanding some aspect of the world through data), rather than tangled up in software as an end in itself.

Peter Wang and Travis Oliphant, from the open source Python data science world, are the founders of Continuum.

At Continuum so far I’ve been involved with Bokeh, which boils down to a specialized UI toolkit for building interactive data visualizations. (Most of the technical challenges are the same ones found in any UI toolkit or “canvas” library.)

Here’s the first small feature I’ve been working on, auto-reload of Bokeh apps:

(This idea is not new, shout out to Play Framework, Bret Victor, and Bruce Hauman for the inspiration.)

There’s a lot to do on Bokeh but it’s only part of the picture. For the projects I’m a part of, we’d like to hire more people with a background in building apps, UI toolkits, vector graphics libraries, and the like. There’s room for both design and development skillsets (anyone who cares about user experience, understands that it has to work, and knows how to make it work). The tech stack is mostly Python plus web tech (JS/HTML/CSS). You might enjoy the projects at Continuum if you think apps that involve numbers and data are neat (think spreadsheets, IPython Notebook, Reinteract, for example); if you enjoy UI-toolkit kind of problems; or if you like the idea of designing a development framework especially for data scientists.

There are a couple of official job descriptions on the Continuum site focused on web tech (web application architect and web application developer). However, if you come at this from another background (as I do), such as UI toolkit implementation or spreadsheet implementation, that would be interesting too. If you have a track record of good work as a developer or designer, that’s what counts. Enthusiasm for open source or data science is a big plus. Official applications should go through the website, but feel free to send me email and ask questions.

It’s not new

If you’ve ever written a technical article, or announced some software you created, chances are someone commented “this isn’t new, it’s just like _____.”

Commenters of the world, slow down. Think about why you would say that. Readers, ask why you would think it, even if you don’t comment.

Do you mean:

  • “I have already heard of this, and the article was written only for me, so you wasted your time.”
  • “This is not suitable for publication in an academic journal.”
  • “This could not be patented due to prior art.”
  • “There was another article about this once, so we need never mention it again.”
  • “I don’t know why you wrote this software, the only reason to write software is to demo a novel idea.”

I guess all of those are pretty silly.

So here is my theory. There’s an old, in no way new, cliché question: “Do you want to be right, or do you want to be effective?”

Most of us software people, at some point, had our self-esteem tied up in the idea of being “smart.” (Try to get over it.)

When we don’t watch ourselves, we would rather be right than effective. And we would rather think about a shiny new idea than learn, practice, refine, and teach a tried-and-true idea.

There are lots of old, endlessly-repeated ideas out there which you are not applying. I’m sure you can find some thousand-year-old ones in the world’s religious and philosophical heritage, unless you have your shit together a lot more than I do. And I’m sure you can find some 5- and 30- and 50-year-old ones related to software, which you should be using, and are not. I know I could.

So when someone writes an article about one of those ideas, or brings together some well-known ideas in a new piece of software, it is not because OH MY GOD I JUST THOUGHT OF THIS. Effective people do not ignore old ideas, nor do they consider “knowing” an idea to be the purpose of ideas. Ideas are for applying, not for cataloging.

Commenters, I’d ask you to work harder. Link to related articles and software; compare and contrast; discuss how you’ve used the idea; add to the discussion.

Here’s the thing: if you click on something on the Internet, and it’s not news to you and you learned nothing, the rest of us don’t need to be told that. We don’t plan to launch an initiative to remove all information you already know from the net. So close the browser tab, and move on.

Thanks for listening to this rant, and I welcome your pointers to prior art.

P.S. I drafted this post some time ago, but was just reminded to post it by a comment on an article about racial (re)segregation. Someone said “this is not new” and cited a previous academic research paper! The comment seems to be gone now (perhaps they came to their senses).

Fedora 20 Beta on Thinkpad T440s Report

Before buying a T440s I kept asking people on Twitter to tell me how it works with Linux, so I figure I should write down the answer.

Note: this is a beta distribution on a brand-new laptop model.

Punchline: Lenovo’s new clickpad is worse than the old physical buttons for me with the trackpoint, but I find it usable. YMMV. If you’re a touchpad user then the clickpad is probably an upgrade from older Thinkpads. The rest of the laptop is mostly solid, but has a couple of bugs, not surprising for brand-new hardware and a Fedora beta. I like the hardware a lot, other than wanting my trackpoint buttons back.



  • The clickpad is not configured to have middle/right click out of the box; fortunately the installer only needs left click. Bug report
  • To configure the middle/right button areas with synclient you need to use undocumented options. Bug report
  • Disabling the touchpad in the BIOS seems to be useless (it also turns off clicking, so the trackpoint has no buttons).
  • I also set PalmDetect, HorizHysteresis, and VertHysteresis to prevent accidental mouse motion (a sketch of the resulting configuration follows this list).
  • With the trackpoint, the issue is that you will occasionally click the wrong mouse button.
  • It may be my imagination, but I think the line between the soft buttons may move depending on whether you last touched the pad on the right or left side of the line, or something. I am mostly used to it now but I think it could be a showstopper for people who are picky. It makes it very hard to configure the soft buttons to match the physical affordances on the touchpad. Whatever is going on, there could be something the synaptics driver could do to reduce clicking on the wrong button, because I can’t seem to configure the soft buttons to always be where I expect them to be. (Update: maybe the confusing thing when trying to experimentally configure the button areas is that the driver ignores motion once the click begins, and so uses where your finger first touched down? But that would also make it hard to feel around for the middle button bumps and then click. Anyway, with a ruler, the middle button affordance is from 40% to 60% on the physical touchpad, so I’m going to try Option "SoftButtonAreas" "60% 0 0 0 40% 60% 0 0".)
  • I want a mode where the touchpad has clickpad-clicking and two-finger scroll (or other gestures) but NO pointer motion and NO tap to click. Synaptics doesn’t seem to have a way to have scrolling without pointer motion.
  • At one point my touchpad got into a mode where it had one-finger scrolling and no pointer motion, but it went away on reboot and I don’t know how to reproduce it.
  • Despite the issues it’s still better than having to use a touchpad. Trackpoint forever!
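For reference, here’s roughly how those settings could be made persistent in an xorg.conf.d snippet for the synaptics driver. This is my own sketch, not the exact configuration from the bug reports; the SoftButtonAreas value is the one measured above, and the hysteresis values are just placeholders to tune.

Section "InputClass"
        Identifier       "Thinkpad T440s clickpad"
        MatchIsTouchpad  "on"
        MatchDriver      "synaptics"
        # right soft button from 60% of the pad width to the edge,
        # middle soft button from 40% to 60%
        Option "SoftButtonAreas"  "60% 0 0 0 40% 60% 0 0"
        # reduce accidental pointer motion
        Option "PalmDetect"       "1"
        Option "HorizHysteresis"  "10"
        Option "VertHysteresis"   "10"
EndSection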


Networking

  • I ordered with the Intel wifi card (strongly recommended for Linux) and it works great. I installed over wifi in fact.
  • The ethernet card picks 10 Mb/s instead of 1000 Mb/s on my network. Since my wifi is fast this isn’t bothering me much but if I needed ethernet it would be pretty bad. Bug report
  • Not sure if it’s the card or something more general, but wired autoconnect was disabled by default. Bug report


Dock

  • All screens blank when you dock with a monitor connected to the dock. This would be a showstopper if I needed to use an external monitor. Bug report (anyone know which module the logic to adjust xrandr on monitor plug/unplug lives in?)
  • Dock ethernet works the same as non-dock ethernet (i.e. broken, but in the same way).
  • Dock power seems to work fine.


Hardware

  • Nice size and weight. Power brick is smaller than pre-Haswell Thinkpads too.
  • Screen is pretty (I got the 1080p one). Pixels are small, text is tiny without tweaking.
  • I think the new keyboard is fine or even better than the old Thinkpad style. Probably less prone to getting gooped up too.
  • Home/End/PgUp/PgDn moved again but I think they’ve moved every time I bought a new Thinkpad so I’m used to it.
  • I don’t care about lack of dedicated volume buttons and lack of status LEDs but some people don’t like that in the new Thinkpads.
  • Battery life (with the internal 3-cell and a removable 3-cell) seems to be 4-6 hours depending on what you are doing, and how many powertop tunables you toggle. I haven’t rigorously tested.
  • I opened the laptop to swap out the hard drive. This was pretty difficult (it requires a spudger or thin blade, I used a plastic scraper). Maybe the price of a thin laptop that feels solid.
  • The factory drive had a protective sheet wrapped around it to separate it from the case, so I’m a little worried my replacement drive might short out against the case or something. But it seems to be working so far.


Software

  • Minor cosmetic artifact in gnome-shell, not something you’ll care about. Bug report
  • It comes with a 16G SSD designed to be a cache for the main HD. This shows up in Linux but I’m not using it. I was thinking of using it as a boot disk but just didn’t bother yet. Linux only has experimental support for the caching trick. I might rather put a larger SSD in the slot and use it as a non-cache, but SSDs in this form factor aren’t widely available yet.
  • Default fonts are too small on the high-DPI screen and GNOME has the configuration for this only in tweak tool. OS X puts this config in their tweak tool too, but my guess is that they have better defaults on all of their hardware.
  • Adjusting fonts upward doesn’t affect web sites. Firefox has no “just automatically zoom all pages by N steps” setting that I can figure out. Text is too small on most sites.
  • Powertop doesn’t have the obvious “make all these tunings persistent” button (or more importantly, the tunables are not properly tuned by default).
  • Switching wifi networks involves 3 more clicks than it used to. I have an upstairs and a downstairs one so I do this a lot.
  • GNOME 3.10 feels extremely solid and smooth. Fedora install was seamless with no troubles. Overall it was an easy upgrade and I was back to productivity after a day; most of the time was spent copying over my data.

Thanks to all the developers involved! Great hardware and software upgrade from my T510/F17, overall.

Don’t screw up your next presentation

Have you ever screwed up a talk or high-stakes presentation? You thought it would be fine, had a plan, did well in practice runs, but under pressure you fell apart and fell flat. In short: you choked.

There’s almost a binary switch: sometimes you can tell a few minutes into an hour-long talk whether it’s going to be confident or lackluster. Graphing the quality of many talks from the same person, I bet it would distribute like this:

Rather than a normal distribution:

Performance has two parts:

  1. Your on-a-good-day peak ability. Most public speaking advice intends to help you with this.
  2. Your consistency. Avoid choking!

Even if it’s tough to boost your peak performance, limiting your chokes can bring your average up quite a bit; in fact, that might be the quickest way to improve it.

It turns out there’s some research on the subject and some tactics to avoid collapsing under pressure. I should have read about it years ago, since I choke all the time.

What makes us choke?

According to Wikipedia, there are two popular theories:

  • Explicit monitoring or “thinking too hard.”
  • Distraction or split attention.

To understand “explicit monitoring” intuitively, try to walk by carefully and consciously monitoring the process to be sure you get it right: now I’m picking up my left foot while balancing on my right, now I’m moving my leg forward flexing the knee just so, etc. Imagine learning to walk by reading a book. This is the difference between knowing “what” (words you can articulate) and knowing “how” (being able to do it without thinking).

The “distraction” theory is just what it sounds like: worry and anxiety give you something to think about besides the task at hand, compromising your working memory with clutter and lowering performance.

(If Wikipedia doesn’t do it for you, here’s a pretty readable paper that happens to have a summary of other research.)

Strategies to be more consistent

Here are some approaches. By writing them down, I’m hoping to remember them in the future; I hope some of them work for you as well.

  1. Practice more. Almost too obvious, but here’s the key: don’t stop when you can do the task well in practice. You have to go past that and make it automatic, because your goal under pressure is to be on autopilot.
  2. Focus on making your first words confident. I learned this one in a public speaking class run by Second City. Don’t think about the whole talk you’re about to give, just think about the tone of the first sentence or two, starting with a confident “Hi, my name is … “
  3. Banish tips, advice, and content from your mind. When practicing, maybe you were coached or self-coached with things to remember, advice about what to say, mannerisms, etc. Never repeat this stuff in your mind just before a talk. Write it down in slide notes, or have it memorized cold, but don’t stand up on stage repeating it in your mind. You know it or you don’t at that point. Use advice when practicing but not in live performance.
  4. If something throws you off, press reset before you start talking. It’s easy to get flustered by travel troubles or a flaky projector or an unexpected audience size. Techniques such as meditation or going for a walk might help. Or try tactical breathing (it even has an iPhone app from the federal government).
  5. If you start off badly, take a pause. You may notice mid-performance that you’re choking. Consider inventing some excuse, perhaps in between slides: maybe have a drink of water. Take a deep breath. People may find the pause a little long, but it beats them suffering through a flat talk.
  6. Try to avoid and forget praise. Believing you’re “good at” something can be poison. In that Second City course I mentioned, we had to give two practice speeches. They loved my first one and said so; not coincidentally, my second one was a horrible choke in an attempt to live up to the first.
  7. Pep rally. Psych yourself up in whatever way might work for you: music, mantra, remembering “why you do it.”
  8. The basics. Get enough sleep. Arrive early. Drink water, and caffeine if you’re accustomed. Once I gave a talk by flying to Europe arriving in the morning, talking almost immediately, then going straight back to the airport. It did not go well.

The common factor is that it’s about emotional state, not details. Once you’re about to start speaking or on your way to that big meeting, you want to be focused on feeling confident, rested, and calm; your brain shouldn’t be spending cycles on what you’re going to say or how you’ll say it, only on feeling good.

If you have your own tips, please share in the comments!

What Matters In Software Development

Lots of traffic on Twitter about Steve Yegge’s post defining a “software ideology” spectrum. Myles Recny made a survey to help you place yourself along said spectrum.

Thinking about it over the weekend, I can’t identify with this framing of software development. Those survey questions don’t seem to cover what I think about most in my own software work; in fact I’d be a little worried if I saw colleagues focused on some of these.

Here’s what I do think about, that could be called an ideology.

Ideological View 1: Risk = Getting Worse On Average

Whether we’re talking about dieting, finance, or software, flows matter more than stocks.

The risk I worry about is: are you adding bugs faster than you’re fixing them? Is your technical debt going up? Is this code getting worse, on average?

If the average direction is “worse” then sooner or later your code will be an incomprehensible, hopeless disaster that nobody will want to touch. The risk is descent into unmaintainable chaos where everyone involved hates their life and the software stops improving. I’ve been there on the death march.

Bugs vs. Features: Contextual Question!

In Steve’s post, he says conservatives are focused on safety (bugs in production) while liberals are focused on features. I don’t have an ideological view on “bugs in production risk”; it’s contextual.

For example: Red Hat maintains both Fedora and Enterprise Linux, two branches of the same software project with mostly the same team but with distinct “bugs in production” risk profiles and distinct processes to match. Red Hat uses the same code and the same people to support different tradeoffs in different contexts. Maybe they’re a post-partisan company?

If I were working on software for the Mars rover, I’d strenuously object to continuous deployment. (Maybe we should test that software update before we push it to Mars?) If I were working on I Can Has Cheezburger, bugs in production wouldn’t bother me much, so I’d be happy to keep the process lightweight.

But in both cases I don’t want to see the code getting worse on average, because in both cases I’d want to keep that code alive over a period of years. That’s the ideology that stays constant.

A project that’s getting worse on average will achieve neither safety nor features. A healthy project might have both (though not in the same release stream).

How to Avoid Getting Worse

To avoid the risk of steadily getting worse, a couple of issues come up every time.

Ideological View 2: Clarity and Simplicity Are Good

Can the team understand it?

This is relative to the team. If your team doesn’t know language XYZ you can’t write code in that language. If your API is intended for mainstream, general programmers, it can’t be full of niche jargon. If your team doesn’t speak German you can’t write your comments in German. Etc.

Software developers learn to make judgment calls about complexity and over- vs. under-engineering. These calls are countless, contextual, and always about tradeoffs. Experience matters.

A definition of “competent software developer” surely includes:

  • they worry about complexity and can make judgments about when it’s worth it
  • they can write both prose and code such that the rest of the team understands it

Not all teams have the same complexity capacity, but they all have some limit, and good ones use it wisely.

Ideological View 3: Process: Have One

I’ve seen many different methodologies and processes work. They optimize for different team skills, or different levels of “bugs in production” risk. My belief is that you need some method to your madness; something other than free-for-all. Examples:

  • Good unit test coverage with mandatory coverage for new code.
  • OR hardass code review. (Hardass = reviewer spends a lot of time and most patches get heavily revised at least once. Most reviews will not be “looks good to me.”)
  • OR just one developer on a codebase small enough to keep in one head.
  • OR Joel’s approach.

You don’t need all of those, but you need at least one thing like that. There has to be some daily habit or structural setup that fights entropy, no matter how smart the individual team members are.

Companies may have rule-based or trust-based cultures, and pick different processes. Lots of different approaches can work.


Ideological lines in the sand framing my thinking about software development:

  • Risk = the project becomes intractable.
  • Prerequisite to avoid this risk: you have to be understandable and understood.
  • Process to avoid this risk: have one and stick to it.

If you can write clear, maintainable code, and keep it that way, using your OS, text editor, dynamic language, static language, XML-configured framework, agile process, or whatever, then I’m open to your approach.

If you’re creating complexity that doesn’t pay its way, not making any sense to the rest of the team, don’t have a working process, etc. then I’m against it.

“How many bugs in production are OK,” “static vs. dynamic languages,” “do we need a spec for this,” “do we need a schema here”, “what do I name this function”: these are pragmatic, context-dependent issues. I like to consider them case-by-case.

Postscript: Me me me

A lot of these example “liberal/conservative” statements feel ego-driven. I’d look bad if we shipped a bug, I’m smart and can learn stuff, I never write slow code, I always write small code, blah blah.

It’s not about you.

When you agree or disagree with “programmers are only newbies for a little while” – are you thinking of software creation as an IQ test for developers? The goal is not to “dumb down” the code, nor to prove that for you it needn’t be dumbed down.

Let me suggest a better framing: is this complexity worth it (in the context of our customers and our team). If we’re trying to maximize how useful our software can be given a certain level of complexity our team can cope with, should we use our brain cycles in this corner of the code or some other corner?

When you agree or disagree with “software should aim to be bug free before it launches” – do you have the same opinion about both the Mars lander and icanhascheezburger? If you do, you might need to refocus on the outside world our software’s supposed to be serving.

Better framing: it has to work.

You get the point I guess…

This article is translated to Serbo-Croatian by Jovana Milutinovich from Webhostinggeeks.com.

Developers: request for complaints

I’m looking for a new personal weekend project that would be useful to others. Maybe there’s a useful book or useful piece of software I could create for fellow developers, tech leads, project managers, etc. Doesn’t have to be anything related to any of my current or past work (Linux, C, GTK+, Scala, etc.), but it could be.

Here’s how you can help: complain in the comments (or in private email or on Twitter), about an aggravation, anxiety, ignorance, or unsolved problem that you encounter as part of your work. Where are you wasting time or being kept up nights?

“+1” comments are awesome because if something annoys a bunch of people that makes it more worthwhile to try to solve.

This is once-in-a-lifetime: I’m going to appreciate hearing you complain, and also appreciate “+1” comments! Nobody else is going to do that for you.

I’m not asking for project ideas directly (though if you have them great), I’m just looking for crap you put up with that you wish someone would do something about, problems you had that you couldn’t find a solution to, topics you wanted to learn about that appeared to be undocumented, whatever.

No need to be original; obvious problems are great. As long as it’s still an unsolved problem for you, I don’t care whether it’s obvious or how many people have already tried to solve it.

No need to have a solution; some of the most important problems are hard to solve. I’m just wondering what’s at the top of your list of “sure wish someone would figure out this problem.”

Some examples to get you started, if these resonate you could “+1”, but just as good is to come up with more:

  • Wish you knew more about <some topic>? Tell me which and maybe I could research it for everyone and report back.
  • Anything about your bug tracking, sprint planning, etc. that is tedious, ineffective, or broken?
  • Baffled by how to handle trademarks, copyrights, and patents in your projects?
  • Unhappy with how your team communicates?
  • Are there any software tools you’ve looked for only to say “all the choices suck”?
  • Wish you could write your own window manager, if only you had esoteric and rapidly-obsoleting X11 skills? (j/k … right?)

I don’t promise to solve everyone’s problems, but maybe I can solve one real-world actual problem and that would be cool.

Who knows — if we gather enough data someone other than me might run with some of these problems too. Or your fellow commenters might point you to a good existing solution. So let us know what you’re having trouble with.