Havoc's Blog

this blog contains blog posts

practice and belief

This NYTimes blog post scrolled past the other day; it discusses an article by John Gray, who has this to say:

The idea that religions are essentially creeds, lists of propositions that you have to accept, doesn’t come from religion. It’s an inheritance from Greek philosophy, which shaped much of Western Christianity and led to practitioners trying to defend their way of life as an expression of what they believe.

The most common threads of religion, science, and philosophy I learned about in school shared this frame; their primary focus was accurate descriptions of outside reality. Which is fine and useful, but perhaps not everything. In some very tiresome debates (atheism vs. religion, “Truth” vs. “relativism strawman”), both sides share the assumption that what matters most is finding a set of words that best describe the world.

There is at least one alternative, which is to also ask “what should we practice?” not only “what should we believe?”

If you’re interested in this topic, I’ve stumbled on several traditions that have something to say about it, so I thought I’d make a list:

  1. Pragmatist philosophy, for example this book is a collection of readings I enjoyed, or see Pragmatism on Wikipedia.
  2. Unitarian Universalism, which borrows much of the format and practice of a Protestant church but leaves the beliefs up to the individual. I’ve often heard people say that their belief is what matters but they don’t like organized religion; UU is the reverse of that. (Not that UU is against having beliefs, it just doesn’t define its membership as the set of people who agree on X, Y, and Z. It is a community of shared practice rather than shared belief.)
  3. Behavioral economics and psychology. For example, they have piled on the evidence that one’s beliefs might flow from one’s actions (not the other way around), and in general made clear that knowing facts does not translate straightforwardly into behavior.
  4. Buddhism, not something I know a lot about, but as explained by Thich Nhat Hanh for example in The Heart of the Buddha’s Teaching. Themes include the limitations of language as a way to describe reality, and what modern bloggers might call “mind hacks” (practical ways to convince the human body and mind to work better).

A few thoughts on open projects, with mention of Scala

Most of my career has been in commercial companies related to open source. I learned to code just out of college at a financial company using Linux. I was at Red Hat from just before the IPO when we were selling T-shirts as a business model, until just before the company joined the S&P500. I worked at litl making a Linux-based device, and now I’m doing odd jobs at Typesafe.

I’ve seen a lot of open source project evolution and I’ve seen a lot of open-source-related commercial companies come and go.

Here are some observations that seem relevant to the Scala world.

Open source vs. open project

In some cases, one company “is” the project; all the important contributors are from the company, and they don’t let outsiders contribute effectively. There are lots of ways to block outsiders: closed infrastructure such as bug trackers, key decisions made in private meetings, taking forever to accept patches, lagged source releases, whatever. This is “one-way-ware,” it’s open source but not an open project.

In other cases, a project is bigger than the company. As noted in this article, Red Hat has a mission statement “To be the catalyst in communities of partners and customers and contributors building better technology the open-source way” where the key word is “catalyst.” And when Linux Weekly News runs their periodic analysis of Linux kernel contributions, Red Hat is a large but not predominant contributor. This is despite a freaking army of kernel developers at Red Hat. Red Hat has many hundreds of developers, while most open source startups probably have a dozen or two.

In a really successful project, any one company will be doing only a fraction of the work, and this will remain true even for a billion-dollar company. As a project grows, an associated company will grow too; but other companies will appear, more hobbyists and customers will also contribute, etc.  The project will remain larger than any one company.

(In projects I’ve been a part of, this has gone in “waves”; sometimes a company will hire a bunch of the contributors and become more dominant for a time, but this rarely lasts, because new contributors are always appearing.)

Project direction and priorities

Commercial companies will tend to do a somewhat random grab-bag of idiosyncratic paying-customer-driven tasks, plus maybe some strategic projects here and there. The nature of open projects is that most work is pretty grab-bag, because it’s a bunch of people scratching their own itches, or hiring others to scratch a certain itch.

In the Scala community for example, some work is coming from the researchers at EPFL, and (as I understand it) their itch is to write a paper or thesis.  Given dictatorial powers over Scala, one could say “we don’t want any of that work” but one could never say “EPFL people will work on fixing bugs” because they have to do something suitable for publication. Similarly, if you’re building an app on Scala, maybe you are willing to work on a patch to fix some scalability issue you are encountering, but you’re unlikely to stop and work on bugs you aren’t experiencing, or on a new language feature.

An open project and its community are the sum of individual people doing what they care about. It’s flat-out wrong to think that any healthy open project is a pool of developers who can be assigned priorities that “make sense” globally. There’s no product manager. The community priorities are simply the union of all community-member priorities.

It’s true that contributors can band together, sometimes forming a company, and help push things in a certain direction. But it’s more like these bands of contributors are rowing harder on one side of the boat; they aren’t keeping the other side of the boat from rowing, or forcing people on the other side of the boat to change sides.

Commercial diversity

My experience is that most “heavy lifting” and perhaps the bulk of the work overall in big open projects tends to come  from commercial interests; partly people using the technology who send in patches, partly companies that do support or consulting around the technology, and partly companies that have some strategic need for the technology (for example Intel needs Linux to run on its hardware).

There’s generally a fair bit of research activity, student activity, and hobbyist activity as well, but commercial activity is a big part of what gets done.

However, the commercial activity tends to be from a variety of commercial entities, not from just one. There are several major “Linux companies,” then all the companies that use Linux in some way (from IBM to Google to Wall Street), not to mention all the small consulting shops. This isn’t unique to Linux. I’ve also been heavily involved in the GNOME Project, where the commercial landscape has changed a lot over the years, but it’s always been a multi-company landscape.

The Scala community will be diverse as long as it’s growing

With the above in mind, here’s a personal observation, as a recent member of the Scala community: some people have the wrong idea about how the community is likely to play out.

I’ve seen a number of comments that pretty much assume that anything that happens in the Scala world is going to come from Typesafe, or that Typesafe can set community priorities, etc.

From what I can tell, this is currently untrue; there are a lot more contributors in the ecosystem, both individuals and companies. And in my opinion, it’s likely to remain untrue. If the technology is successful, there will be a never-ending stream of new contributors, including researchers, hobbyists, companies building apps on the technology, and companies offering support and consulting. Empirically, this is what happens in successful open projects.

I’ve seen other comments that assume the research aspect of the Scala community will always drive the project, swamping us in perpetual innovation. From what I can tell, this is also currently untrue, and likely to remain untrue.

Some open communities do get taken over by narrow interests. This can kill a community, or it can happen to a dead community because only one narrow interest cares anymore. But the current Scala ecosystem trend is that it’s growing: more contributors, more different priorities, more stuff people are working on.

How to handle it

Embrace growth, embrace more contributors, embrace diversity.

The downside is that more contributors means more priorities and thus more conflicts.

When priorities conflict, the community will have to work it out. My advice is to get people together in-person and tackle conflicts in good faith, but head-on. Find a solution. In-person meetings are critical. If you have a strong opinion about Scala ecosystem priorities, you must make a point of attending conferences or otherwise building personal relationships with other contributors.

Never negotiate truly hard issues via email.

As the community grows and new contributors appear, there will be growing pains figuring out how to work together. All projects that get big have to sort out these issues. There will be drama; it’s best taken as evidence that people are passionate.

Structural solutions will appear. For example, in the Linux world, the “enterprise Linux” branches are a structural solution allowing the community to roll forward while offering customers a usable, stable target. Red Hat’s Fedora vs. Enterprise Linux split is a structural solution to separate its open project from its customer-driven product. In GNOME, the time-based release was a structural solution that addressed endless fights about when to release. Most large projects end up explicitly spelling out some kind of governance model, and there are many different models out there.

Whatever the details, the role of Typesafe — and every other contributor, commercial or not — will be to discuss and work on their priorities. And the overall community priorities will include, but not be limited to, what any one contributor decides to do. That’s the whole reason to use an open project rather than a closed one — you have the opportunity, should you need it, to contribute your own priorities.

When talking about an open project, it can be valuable (and factually accurate) to think “we” rather than “they.”

(Hopefully-unnecessary note: this is my personal opinion, not speaking for anyone else, and I am not a central figure in the Scala community. If I got it wrong then let me know in the comments.)


The Java ecosystem and Scala ABI versioning

On the sbt mailing list there’s a discussion of where to go with “cross versioning.” Here’s how I’ve been thinking about it.


I’m a relative newcomer to the Scala community. If I push anyone’s buttons it’s not intentional. This is a personal opinion.


Two theories:

  • The largest problem created by changing ABI contracts is an explosion of combinations rather than the ABI change per se.
  • The ABI of the Scala standard library is only one of the many ABIs that can cause problems by changing. A general solution to ABI issues would help cope with ABI changes to any jar file, even those unrelated to Scala.

Proposal: rather than attacking the problem piecemeal by cross-versioning with respect to a single jar (such as the Scala library), cross-version with respect to a global universe of ABI-consistent jars.

This idea copies from the Linux world, where wide enterprise adoption has been achieved despite active hostility to a fixed ABI from the open source Linux kernel project, and relatively frequent ABI changes in userspace (for example from GTK+ 1.2, to 2.0, to 3.0). I believe there’s a sensible balance between allowing innovation and providing a stable platform for application developers.

Problem definition: finding an ABI-consistent universe

If you’re writing an application or library in Scala, you have to select a Scala ABI version; then also select an ABI version for any dependencies you use, whether they are implemented in Scala or not. For example, Play, Akka, Netty, slf4j, whatever.

Not all combinations of dependencies exist and work. For example, Play 1.2 cannot be used with Akka 1.2 because Play depends on an SBT version which depends on a different Scala version from Akka.

Due to a lack of coordination, identifying an ABI-consistent universe involves trial-and-error, and the desired set of dependencies may not exist.

Projects don’t reliably use something like semantic versioning so it can be hard to even determine which versions of a given jar have the same ABI. Worse, if you get this wrong, the JVM will complain very late in the game (often at runtime — unfortunately, there are no mechanisms on the JVM platform to encode an ABI version in a jar).

Whenever one jar in your stack changes its ABI, you have a problem. To upgrade that jar, anything which depends on it (directly or transitively) also has to be upgraded. This is a coordination problem for the community.

To see the issue on a small scale, look at what happens when a new SBT version comes out. Initially, no plugins are using the new version so you cannot upgrade to it if you’re using plugins. Later, half your plugins might be using it and half not using it: you still can’t upgrade. Eventually all the plugins move, but it takes a while. You must upgrade all your plugins at once.

Whenever a dependency, such as sbt, changes its ABI, then the universe becomes a multiverse: the ecosystem of dependencies splits. Changing the ABI of the Scala library, or any widely-used dependency such as Akka, has the same effect. The real pain arrives when many modules change their ABI, slicing and dicing the ecosystem into numerous incompatible, undocumented, and ever-changing universes.

Developers must choose among these universes, finding a working one through trial and error.

For another description of the problem, see this post from David Pollak.

Often, projects are reluctant to have dependencies on other projects, because the more dependencies you have the worse this problem becomes.

One solution: coordinate an explicit universe

This idea shamelessly takes a page from Linux distributions.

We could declare that there is a Universe 1.0. This universe contains a fixed ABI version of the Scala standard library, of SBT, of Akka, of Play — in principle, though initially not in practice, of everything.

To build your application, rather than being forced to specify the version of each individual dependency, you could specify that you would like Universe 1.0. Then you get the latest release for each dependency as long as its ABI remains Universe-1.0-compatible.
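In build terms, the idea might look something like this hypothetical sbt sketch (the “universe-1.0” version marker and the repository URL are invented purely for illustration; no such feature exists today):

```scala
// Hypothetical sketch only: nothing here exists today.
// One repository per universe; everything in it is ABI-consistent:
resolvers += "Universe 1.0" at "https://repo.example.org/universe/1.0"

// Each dependency means "latest release that is still
// Universe-1.0-compatible", rather than a hand-picked version per jar:
libraryDependencies ++= Seq(
  "com.typesafe.akka" % "akka-actor" % "universe-1.0",
  "play"              % "play"       % "universe-1.0"
)
```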

There’s also a Universe 2.0. In Universe 2.0, the ABI can be changed with respect to Universe 1.0, but again Universe 2.0 is internally consistent; everything in Universe 2.0 works with everything else in Universe 2.0, and the ABI of Universe 2.0 does not ever change.

The idea is simple: convert an undocumented, ever-changing set of implicit dependency sets into a single series of documented, explicit, testable dependency sets. Rather than an ad hoc M versions of Scala times N versions of SBT times O versions of Akka times P versions of whatever else, there’s Universe 1.0, Universe 2.0, Universe 3.0, etc.

This could be straightforwardly mapped to repositories; a repository per universe. Everything in the Universe 1.0 repository has guaranteed ABI consistency. Stick to that repository and you won’t have ABI problems.

One of the wins could be community around these universes. With everyone sharing the same small number of dependency sets, everyone can contribute to solving problems with those sets. Today, every application developer has to figure out and maintain their own dependency set.

How to do it

Linux distributions and large multi-module open source projects such as GNOME provide a blueprint; see the current Fedora and GNOME descriptions of their release processes, for example.

For these projects, there’s a schedule with a development phase (not yet ABI frozen), freeze periods, and release dates. During the development phase incompatibilities are worked out and the final ABI version of everything is selected.

At some point in time it’s all working, and there’s a release. Post-release, the ABI of the released universe isn’t allowed to change anymore. ABI changes can only happen in the next version of the universe.

Creating the universe is simply another open source project, one which develops release engineering infrastructure. “Meta-projects” such as Fedora and GNOME involve a fair amount of code to automate and verify their releases as a whole. The code in a Universe project would convert some kind of configuration describing the Universe into a published repository of artifacts.

There are important differences between the way the Linux ecosystem works today and the way the Java ecosystem works. Linux packages are normally released as source code by upstream open source developers, leaving Linux distributions to compile against particular system ABIs and to sign the resulting binaries. Java packages are released as binaries by upstream, and while they could be signed, often they are not. As far as I know, however, there is nothing stopping a “universe repository” project from picking and choosing which jar versions to include, or even signing everything in the universe repository with a common key.

I believe that in practice, there must be a central release engineering effort of some kind (with automated checks to ensure that ABIs don’t change, for example). Another approach would be completely by convention, similar to the current cross-build infrastructure, where individual package maintainers could put a universe version in their builds when they publish. I don’t believe a by-convention-only approach can work.

To make this idea practical, there would have to be a “release artifact” (which would be the entire universe repository) and it would have to be tested as a whole and stamped “released” on a certain flag day. There would have to be provisions for “foreign” jars, where a version of an arbitrary already-published Java jar could be included in the universe.

It would not work to rely on getting everyone on earth to buy into the plan and follow it closely. A small release engineering team would have to create the universe repository independently, without blocking on others. Close coordination with the important packages in the universe would still be very helpful, of course, but a workable plan can’t rely on getting hundreds of individuals to pay attention and take action.

Scala vs. Java

I don’t believe this is a “Scala” problem. It’s really a Java ecosystem problem. The Scala standard library is a jar which changes ABI when the major version is bumped. A lot of other jars depend on the standard library jar. Any widely-used plain-Java jar that changes ABI creates the same issues.

(Technicality: the Scala compiler also changes its code generation which changes ABIs, but since that breaks ABIs at the same time that the standard library does, I don’t think it creates unique issues.)

Thinking of this as a “Scala problem” frames it poorly and leads to incomplete solutions like cross-versioning based only on the Scala version. A good solution would also support ABI changes in something like slf4j or commons-codec or whatever example you’d like to use.

btw, it would certainly be productive to look at what .NET and Ruby and Python and everyone else have done in this area. I won’t try to research and catalog all those in this post (but feel free to discuss in comments).


The goal is that rather than specifying the version for every dependency in your build, you would specify “Universe 1.0”, which would mean “the latest version of everything in the ABI-frozen and internally consistent 1.0 universe of dependencies.” When you get ready to update to a newer stack, you’d change that to “Universe 2.0” and you’d get another ABI-frozen, internally-consistent universe of dependencies (but everything would be shinier and newer).

This solution scales to any number of ABI changes in any number of dependencies; no matter how many dependencies or how many ABI changes in those dependencies, application developers only have to specify one version number (the universe version). Given the universe, an application will always get a coherent set of dependencies, and the ABI will never change for that universe version.

This solution is tried and true. It works well for the universe of open source C/C++ programs. Enterprise adoption has been just fine.

After all, the problem here is not new and unique to Java. It wasn’t new in Linux either; when we were trying to work out what to do in the GNOME Project in 1999–2001 or so, in part we looked at Sun’s longstanding internal policies for Solaris. Other platforms such as .NET and Ruby have wrestled with it. There’s a whole lot of prior art. If there’s an issue unique to Java and Scala, it seems to be that we find the problem too big and intimidating to solve, given the weight of Java tradition.

I’m just writing down half-baked ideas in a blog post; making anything like this a reality hinges on people doing a whole lot of work.


You are welcome to comment on this post, but it may make more sense to add to the sbt list thread (use your judgment).


Configuring the Typesafe Stack

My latest work project was a quick side-track to unify the config file handling for Akka 2.0 and Play 2.0. The result is on GitHub and feels pretty well-baked. Patches have now landed in both Akka and Play, thanks to Patrik and Peter.

I can’t make this project seem glamorous. It was a code cleanup that reinvented one of the most-reinvented wheels around. But I thought I’d introduce this iteration of the wheel for those who might encounter it or want to adopt it in their own project.

The situation with Akka 1.2 and Play 1.2 was:

  • Akka 1.2 used a custom syntax that was JSON-like semantically, but prettier to human-edit. It supported features such as including one file in another.
  • Play 1.2 used a Java properties file that was run through Play’s template engine, and supported some other custom stuff such as substituting environment variables (the syntax looked like ${HOME}).

Akka’s format looked like this:

actor {
    timeout = 5
    serialize-messages = off
}

While Play was like this:
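An illustrative example (keys invented here, not taken from a real app):

```properties
# Java properties syntax, plus Play's custom ${} environment substitution
application.name=myapp
db.url=${DATABASE_URL}
```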


With the new 2.0 setup, both Akka and Play support your choice of three formats: JSON, Java properties, or a new one called “Human-Optimized Config Object Notation” (HOCON). You can mix and match; if you have multiple files in different formats, their contents are combined.

HOCON has a flexible syntax that can be JSON (it’s a superset), or look much like Akka’s previous file format, or look much like a Java properties file. As a result, some existing Akka and Play config files will parse with no changes; others will require minor changes.
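For instance, all three of these spellings parse to the same setting (key invented for illustration):

```hocon
# Three equivalent spellings of one setting:
#   JSON:             { "actor" : { "timeout" : 5 } }
#   Akka-1.2-style:   actor { timeout = 5 }
#   properties-style: actor.timeout = 5
actor.timeout = 5
```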

A single configuration file for the whole app

Play 1.2 has a single configuration file; everything you might want to set up is done in application.conf. We wanted to keep a single configuration, even as Play 2.0 adds a dependency on Akka.

With the new setup, once Play moves to Akka 2.0, you should be able to set Akka settings in your Play application.conf. If other libraries in your app also use the config lib, you should be able to set their settings from this single config file, as well.

To make this happen, apps and libraries have to follow some simple conventions. A configuration is represented by an object called a Config, and after loading a Config applications should provide it to all of their libraries. Libraries should have a way to accept a Config object to use, as shown in this example.

Applications can avoid having to pass a Config instance around by using the default Config instance; to make this possible, all libraries should use the same default, obtained from ConfigFactory.load(). This default loads application.conf, application.json, and application.properties from the classpath, along with any resources called reference.conf.
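A minimal sketch of the convention in Scala (the library name and setting path are invented, and it assumes the config jar is on the classpath):

```scala
import com.typesafe.config.{Config, ConfigFactory}

// A library accepts a Config, defaulting to the shared standard one:
class MyLib(config: Config) {
  def this() = this(ConfigFactory.load())
  val timeout: Int = config.getInt("mylib.timeout")
}

// The application loads (or here, for illustration, parses) one Config
// and hands the same instance down to all of its libraries:
val appConfig = ConfigFactory.parseString("mylib.timeout = 5")
val lib = new MyLib(appConfig)
```

Either way, the default constructor and an explicitly passed Config end up reading the same kind of settings tree.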

For a given app, either everyone gets a Config passed down from the app and uses that, or everyone defaults to the same “standard” Config.

Keeping useful features from the existing formats

Akka allowed you to split up the config into multiple files assembled through include statements, and the new format does too.

Play allowed you to grab settings such as ${DATABASE_URL} from the system environment, and the new format does too.

In the spirit of those two features, the new format also allows ${} references within the config, which enables “inheritance” and otherwise avoids cut-and-paste; there are some examples in the README.
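A sketch of the substitution feature (keys invented; the README has real examples):

```hocon
# Common settings defined once...
standard-timeouts {
  connect = 5
  read = 10
}

# ..."inherited" via substitution, with one field overridden:
service-a = ${standard-timeouts}
service-a.read = 30

# Environment variables use the same syntax:
data-dir = ${HOME}/data
```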

Migration path

Some existing Akka and Play config files will parse unchanged in the new format. Handling of special characters, escaping, and whitespace does differ, however, and you could encounter those differences. To migrate from an existing Play application.conf, you can use one of two strategies:

  1. Rename the file to application.properties, which will make the escaping rules more like the old format. However, you won’t be able to use environment variable substitution; it’s just a plain vanilla properties file.
  2. Add quoting and escaping. If you get parse errors, add JSON-style double quotes around the strings causing the problem.
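For example, an unquoted URL value is a common tripping point, since HOCON treats `//` as starting a comment (key invented for illustration):

```hocon
# Parse error: the value is cut off where the // comment begins
# application.url = http://example.com/foo

# Fixed with JSON-style double quotes:
application.url = "http://example.com/foo"
```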

Akka is similar; if you have parse errors, you might need quoting and escaping to avoid them. The error messages should be clear; if they are not, let me know.

There’s a section in the HOCON spec (search for “Note on Java properties”, near the end) with a list of ways the new format differs from a Java properties file.

Override config at deploy time

After compiling your app, you may want to modify a configuration at deploy time. This can be done in several ways:

  • With environment variables if you refer to them using ${DATABASE_URL} syntax in your config.
  • System properties override the config, by default. Set -Dfoo.bar=42 on the command line and it will replace foo.bar in the app’s config.
  • Force an alternative config to load using the system properties config.file, config.resource, or config.url. (Only works on apps using the default ConfigFactory.load(), or apps that independently implement support for these properties.)

A more machine-friendly syntax

To generate the previous Play or Akka formats, you would need custom escaping code. Now you can just generate JSON or a properties file, using any existing library that supports those standard formats.

Implemented in Java

The config library is implemented in Java. This allows Java libraries to “join” the config lib way of doing things. In general the Typesafe stack (including Play and Akka) has both Java and Scala APIs, and in this case it seemed most appropriate to implement in Java and wrap in Scala.

That said, I haven’t implemented a Scala wrapper, since it seems barely necessary; the API is small and not complicated. You can easily create an implicit enhancement of Config with any convenience methods you would like to have. While the API is a Java API, it does some things in a Scala-inspired way: most notably the objects are all immutable.

The implementation is much larger and more complex than it would have been if it were implemented in Scala. But I endured the Java pain for you.

Conventional way of managing defaults

By convention, libraries using the config lib should ship a file called reference.conf in their jar. The config lib loads all resources with that name into ConfigFactory.defaultReference() which is used in turn by ConfigFactory.load(). This approach to loading defaults allows all libraries to contribute defaults, without any kind of runtime registration which would create ordering problems.
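So a library jar might ship defaults like these (paths invented):

```hocon
# reference.conf inside mylib.jar: a default for every mylib setting
mylib {
  timeout = 10
  use-cache = true
}
```

Settings in the application’s own application.conf override these, because ConfigFactory.load() stacks the application config above defaultReference().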

(By the way: all of these conventions can be bypassed; there are also methods to just parse an arbitrary file, URL, or classpath resource.)

Well-defined merge semantics

The HOCON spec defines semantics for merging two config objects. Merging happens for duplicate keys in the same config file, or when combining multiple config files.
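For instance, duplicate object keys merge, while duplicate non-object values are simply overridden by the later one (keys invented):

```hocon
foo { a = 1 }
foo { b = 2 }
# foo is now equivalent to { a = 1, b = 2 }

bar = 1
bar = 2
# bar is simply 2; only objects merge
```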

The API exports the merge operation as a method called withFallback(). You can combine any two config objects like this:

val merged = config.withFallback(otherConfig)

And you can combine multiple config objects with chained invocations of withFallback(), for example:

val merged = configs.reduce(_.withFallback(_))

withFallback() is associative and config objects are immutable so the potentially-parallel reduce() should work fine.

Retrieving settings

This is straightforward:

val foobar = config.getInt("foo.bar")

The getters such as getInt() throw an exception if the setting is missing or has the wrong type. Typically you have a reference.conf in your jar, which should ensure that all settings are present. There’s also a method checkValid() you can use to sanity-check a config against the reference config up front and fail early (this is nicer for users).
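A sketch of the fail-early pattern (paths invented; in a real app the reference config would come from the reference.conf files on the classpath rather than from parseString):

```scala
import com.typesafe.config.ConfigFactory

// Illustrative stand-ins for reference.conf and application.conf:
val reference = ConfigFactory.parseString(
  "mylib { timeout = 10, use-cache = true }")
val config = ConfigFactory.parseString(
  "mylib.timeout = 30").withFallback(reference)

// Throws a descriptive exception up front if anything under "mylib"
// is missing or has the wrong type compared to the reference:
config.checkValid(reference, "mylib")

val timeout = config.getInt("mylib.timeout")  // 30, overriding the default
```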

Each Config object conceptually represents a one-level map of paths to non-null values, but also has an underlying JSON-parse-tree-style representation available via the root() method. root() gives you a ConfigObject which corresponds pretty exactly to a JSON object, including null values and nested child object values. Config and ConfigObject are alternative views on the same data.

Any subtree of a config is just as good as the root; handy if you want multiple separately-configurable instances of something.
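A sketch of both views (keys invented; assumes the config jar):

```scala
import com.typesafe.config.ConfigFactory

// Illustrative config with two instances of the same component:
val config = ConfigFactory.parseString(
  "instances { a { port = 8080 }, b { port = 8081 } }")

// Any subtree behaves like a root, so each instance can be handed
// its own Config:
val aPort = config.getConfig("instances.a").getInt("port")
val bPort = config.getConfig("instances.b").getInt("port")

// root() exposes the JSON-style tree view (a ConfigObject) instead:
val topLevelKeys = config.root().keySet()  // contains "instances"
```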

Configuration as data or code

I know many people are experimenting with configuration as Scala code. For this cleanup, we kept configuration as data and implemented the library in plain Java. My impression is that often a machine-manipulable-as-data layer ends up useful, even though there’s a code layer also. (See your .emacs file after using M-x customize, for example: it inserts a “do not edit this” section. SBT’s equivalent of that section is the .sbt file format.) But we did not think about this too hard here; we just kept things similar to the way they were in 1.2, while improving the implementation.

Have fun

Not a whole lot else to it. Please let me know if you have any trouble.

Task Dispatch and Nonblocking IO in Scala


Modern application development platforms are addressing the related issues of globally-coordinated task dispatch and nonblocking IO.

Here’s my definition of the problem, an argument for why it matters, and some suggestions for specific standard library features to add to Scala in particular.

The same ideas apply to any application development platform, though. It’s rapidly becoming mandatory for a competitive platform to offer an answer here.


Let’s define a blocking task to be anything that ties up a thread or process but does not use CPU cycles. The most common ways to block are on IO channels and locks.

A busy loop is not a blocking operation in this sense; it takes up a thread, but it’s using the CPU, not “wasting” the thread.

By “task” I mean any piece of executable code. A task is blocking if it spends part of its time waiting, or nonblocking if it needs the CPU the whole time.

Dispatch just means scheduling the task on a thread and executing it.

Dispatch for nonblocking tasks, in an ideal world

For nonblocking tasks (which are CPU-bound), the goal is to use 100% of all CPU cores. There are two ways to lose:

  • Fail to use all the cores (not enough threads or processes).
  • Too many threads for the number of cores (inefficient and wastes memory).

The ideal solution is a fixed thread or process pool with a number of threads related to the number of cores. This fixed pool must be global to the app and used for all nonblocking tasks. If you have five libraries in your app and they each create a thread per CPU core, you’re losing, even though each library’s approach makes sense in isolation.

When the fixed number of threads are all in use, tasks should be queued up for later dispatch.
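A sketch of this ideal setup for CPU-bound tasks, using a plain Java fixed pool (the names and task bodies are illustrative):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class CpuPool {
    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        // One fixed pool sized to the core count, shared by the whole app.
        ExecutorService cpuPool = Executors.newFixedThreadPool(cores);

        AtomicInteger completed = new AtomicInteger();
        int tasks = cores * 4; // more tasks than threads: the extras queue up
        for (int i = 0; i < tasks; i++) {
            cpuPool.submit(() -> {
                long acc = 0;
                for (int j = 0; j < 1_000_000; j++) acc += j; // busy work
                completed.incrementAndGet();
            });
        }
        cpuPool.shutdown();
        cpuPool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(completed.get() == tasks ? "all tasks ran" : "missing tasks");
    }
}
```

The queuing happens automatically: a fixed pool backs onto an unbounded work queue, so the tasks beyond the first `cores` simply wait their turn.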

Dispatch for blocking tasks, in an ideal world

Blocking tasks pose several competing concerns, and the trouble is that these concerns are hard to balance:

  • Memory: each blocking task ties up a thread, which adds overhead (the thread) to the task. A super-tiny http parser gets you nowhere if you accompany each one with a thread.
  • Deadlocks: blocking tasks are often waiting for another blocking task. Limiting the number of threads can easily create deadlocks.
  • Tasks outstanding: with IO, it is desirable to send lots of requests at once or have lots of sockets open at once. (With CPU-bound tasks, the opposite is true.)

The ideal solution (if you must block) is an “as huge as memory allows” thread/process pool.

If you run blocking tasks on a bounded pool, you could have deadlocks, and you would not maximize tasks outstanding. Still, as memory pressure arrives, it would be better to start making some tasks wait than it would be to exhaust memory. Apps inevitably become pathological when memory is exhausted (either you have swap and performance goes to hell, or you don’t have swap and an out-of-memory exception breaks the app). But as long as memory is available, it’s better to add threads to the pool than it is to queue tasks.
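Java’s Executors.newCachedThreadPool() approximates this “grow as needed” policy: it adds a thread whenever every existing thread is tied up. A small sketch (the task count and sleep durations are arbitrary):

```java
import java.util.concurrent.*;

public class BlockingPool {
    public static void main(String[] args) throws Exception {
        // "As huge as memory allows": newCachedThreadPool adds a thread
        // whenever every existing thread is tied up blocking.
        ExecutorService blockingPool = Executors.newCachedThreadPool();

        CountDownLatch done = new CountDownLatch(20);
        for (int i = 0; i < 20; i++) {
            blockingPool.submit(() -> {
                try {
                    Thread.sleep(200); // all 20 tasks block at the same time
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                done.countDown();
            });
        }
        // On a small fixed pool these would serialize (or deadlock, if they
        // waited on each other); the cached pool runs them all concurrently.
        boolean finished = done.await(2, TimeUnit.SECONDS);
        System.out.println(finished ? "all blocking tasks ran concurrently" : "timed out");
        blockingPool.shutdown();
    }
}
```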

An automatic solution to this problem might require special help from the JVM and/or the OS. You’d want to have an idea about whether it’s reasonable to create another thread, in light of each thread’s memory usage, the amount of memory free, and whether you can GC to recover memory.

In practice, you have to do some manual tuning and configuration to get a thread pool setup that works for a particular deployment: perhaps setting a large, but fixed, thread pool size that happens to keep your app using about the right amount of memory.

Different tasks, different pools

It’s broken to dispatch nonblocking tasks to an unbounded (or large) pool, and broken to dispatch blocking tasks to a small bounded pool. I can’t think of a nice way to handle both kinds of task with the same pool.

Possible conclusion: a dispatch API should permit the application to treat the two differently.

Physical resource coordination requires global state

We’ve all been taught to avoid global variables, global state, and singletons. They cause a lot of trouble, for sure.

Assuming your app runs on a single machine, you have N CPUs on your computer — period. You can’t create a new “CPUs context” with your own CPUs every time you need more CPUs. You have N megabytes of RAM — period. You can’t create a new “RAM context” with its own RAM.

These resources are shared among all threads and processes. Thus you need global, coordinated task dispatch.

Nonblocking IO creates another global resource

With nonblocking IO APIs, such as poll(), you can have multiple IO operations outstanding, using only one thread or process.

However, to use poll() or equivalent you have a new problem: every IO operation (on the same thread) must be coordinated so that the file descriptors end up in a single call to poll(). The system for coordinating this is called an “event loop” or “main loop.”

In an API such as libev or GMainLoop, applications can say “I need to wake up in 3 seconds” or “I need to know about activity on this socket handle,” and the library aggregates all such requests into a single invocation of poll(). The single poll() puts the thread to sleep until one of the requests is ready.

Nonblocking IO requires a globally-coordinated “managed poll()” — also known as an event loop. Otherwise you’re back to needing threads.
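To illustrate the aggregation idea, here’s a toy timer-only event loop in Java; a real loop would also hand file descriptors to poll() or select(). All names here are made up for the sketch:

```java
import java.util.*;
import java.util.concurrent.locks.LockSupport;

// Toy event loop: several modules register timeouts, and one shared
// "managed poll()" (here just parkNanos with a computed deadline) serves
// them all. A real loop would also pass file descriptors to poll()/select().
public class ToyEventLoop {
    private final PriorityQueue<long[]> timers =
        new PriorityQueue<>(Comparator.comparingLong((long[] t) -> t[0]));
    private final Map<Long, Runnable> callbacks = new HashMap<>();
    private long nextId = 0;

    // "I need to wake up in delayMillis" -- the request libev/GMainLoop accept.
    void addTimeout(long delayMillis, Runnable callback) {
        long id = nextId++;
        timers.add(new long[] { System.nanoTime() + delayMillis * 1_000_000L, id });
        callbacks.put(id, callback);
    }

    void run() {
        while (!timers.isEmpty()) {
            long[] head = timers.peek();
            long waitNanos = head[0] - System.nanoTime();
            if (waitNanos > 0) {
                // The single sleep shared by every pending request.
                LockSupport.parkNanos(waitNanos);
            } else {
                timers.poll();
                callbacks.remove(head[1]).run();
            }
        }
    }

    public static void main(String[] args) {
        ToyEventLoop loop = new ToyEventLoop();
        // Two independent "modules" request wakeups; one loop serves both.
        loop.addTimeout(20, () -> System.out.println("second"));
        loop.addTimeout(5, () -> System.out.println("first"));
        loop.run();
    }
}
```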

How Java sucks at this

In brief:

  1. No global task dispatcher to coordinate CPU and memory usage.
  2. The APIs are mostly blocking.
  3. The nonblocking APIs in nio have limited utility because there’s no global event loop.

1. No global dispatcher

Java has all sorts of nice executors allowing you to dispatch tasks in many different ways.

But for average apps doing average things, we need two global singleton executors, not a zillion ways to create our own.

An average app needs the executor for nonblocking CPU-bound tasks, so that executor can coordinate CPU-limited tasks. And it needs the executor for blocking tasks, so that executor can coordinate memory-limited tasks.

In the JVM ecosystem, you start using a library for X and a library for Y, and each one starts up some tasks. Because there’s no global executor, each one creates its own. All those per-library executors are probably great by themselves, but running them together sucks. You may never create a thread by hand, but when you run your app there are 100 threads from a dozen different libraries.

2. Blocking APIs

With the standard Java APIs, many things are hard or impossible to do without tying up a thread waiting on IO or waiting on a lock. If you want to open a URL, there’s URL.openStream() right there in the standard library, but if you want to open a URL without blocking you’ll end up with a far more involved external dependency (such as AsyncHttpClient).

Just to kick you while you’re down, many of the external dependencies you might use for nonblocking IO will create at least one dedicated thread, if not a whole thread pool. You’ll need to figure out how to configure it.

3. No event loop

Low-level nonblocking APIs in the spirit of poll() are not enough. Even if every library or code module uses poll() to multiplex IO channels, each library or code module needs its own thread in which to invoke poll().

In Java, a facility as simple as Timer has to spawn its own threads. On platforms with an event loop, such as node.js, or browsers, or many UI toolkits, you tell the platform how long to wait, and it ensures that a single, central poll() has the right timeout to wake up and notify you. Timer needs a thread (or two) because there’s no centralized event loop.

The impact in practice

If you just use Java and Scala APIs naively in the most convenient way, you end up with a whole lot of threads. Then you have to start tracking down thread pools inside of all your libraries, sharing them when possible, and tuning their settings to match the size of your hardware and your actual production load. Or just buy a single piece of hardware more powerful than you’ll ever need and allow the code to be inefficient (not a rare strategy).

I recently wrote a demo app called Web Words, and even though it’s not complex, it shows off this problem well. Separately, the libraries it uses (such as Akka, AsyncHttpClient, Scala parallel collections, RabbitMQ) are well-behaved. Together, there are too many threads, resulting in far more memory usage than should be required, and inefficient parallelization of the CPU-bound work.

This is a whole category of developer tedium that should not exist. It’s an accident of broken, legacy platform design.

The node.js solution

node.js has a simple solution: don’t have any APIs that block. Implement all nonblocking APIs on top of a singleton, standard event loop. Run one process per CPU. Done.

Dispatch of blocking tasks is inherently hard, so node.js makes it impossible to implement a blocking task and avoids the problem.

This would fail without the global singleton event loop. If node.js provided poll() instead of an event loop, poll() would be a blocking API, and any task using it would take over the node.js process.

People often say that “no threads” is the secret to node.js; my take is that, first, the global singleton event loop enables all APIs to be nonblocking, and second, the lack of blocking APIs removes the need for threads. The global singleton event loop is the “root cause” that unlocks the big picture.

(The snarky among you may be thinking, “if you like node.js so much, why don’t you marry it?”; node.js is great, but I love a lot of things about Scala too. Type safety goes without saying, and I’ll also take actors over callbacks, and lots more. I’m comparing just one aspect of the two platforms here.)

Event loop as a third-party library: not a solution

You can get event loop libraries for lots of languages. This is in no way equivalent to a standard, default global singleton event loop.

First: the whole point of an event loop is to share a single call to poll() among all parts of the application. If the event loop library doesn’t give you a singleton, then every code module has to create its own loop with its own poll().

Second: if the event loop singleton is not in the standard library, then the platform’s standard library can’t use it. Which means the standard library either has no IO facilities, or it has only broken, blocking IO facilities.

Solving it in Scala

This could be solved on the Java level also, and maybe people are planning to do so — I hope so.

In the meantime, if you’ve read this far, you can probably guess what I’d propose for Scala.

Blocking tasks are inherently somewhat intractable, but they are also a legacy fact of life on the JVM. My suggested philosophy: design APIs assuming nonblocking tasks, but give people tools to manage blocking tasks as best we can.

The critical missing pieces (the first three here) should not be a lot of work in a strictly technical sense; it’s more a matter of getting the right people interested in understanding the problem and powering through the logistics of landing a patch.

1. Two global singleton thread pools

The Scala standard library should manage one thread pool intended for CPU-bound nonblocking tasks, and one intended for blocking tasks.

  • The simplest API would let you submit a function to be executed in one of the pools.
  • Another useful API would create an ExecutorService proxy that used the pools to obtain threads. ExecutorService.shutdown() and ExecutorService.awaitTermination() would have to work correctly: wait for tasks submitted through the proxy to complete, for example. But shutting down the proxy should not interfere with the underlying global thread pool. The proxy would be provided to legacy Java APIs that allow you to set a custom ExecutorService.

Built-in Scala features such as actors and parallel collections should make use of these thread pools, of course.
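In Java terms, the proposal amounts to something like this sketch (GlobalPools and its field names are hypothetical, not a real API; a real version would also need the ExecutorService proxies described above):

```java
import java.util.concurrent.*;

// Hypothetical sketch: the two proposed global singletons, one bounded pool
// for CPU-bound tasks and one unbounded pool for blocking tasks.
public final class GlobalPools {
    public static final ExecutorService CPU_BOUND =
        Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
    public static final ExecutorService BLOCKING =
        Executors.newCachedThreadPool();

    private GlobalPools() {}

    public static void main(String[] args) throws Exception {
        Future<Integer> sum = CPU_BOUND.submit(() -> 2 + 2);  // CPU-bound work
        Future<String> io = BLOCKING.submit(() -> {           // simulated IO wait
            Thread.sleep(50);
            return "io done";
        });
        System.out.println(sum.get() + " " + io.get());
        CPU_BOUND.shutdown();
        BLOCKING.shutdown();
    }
}
```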

2. A global default event loop instance

The goal is to allow various code modules to add/remove channels to be watched for IO events, and to set the timeout on poll() (or equivalent) to the earliest timeout requested by any code module.

The event loop can be very simple; remember, it’s just a way to build up a poll() invocation that “takes requests” from multiple code modules. More complexity can be assembled on top.

3. A standard Future trait

The standard Future in Scala doesn’t quite have what people need, so there’s a proliferation of third-party solutions, most of them similar in spirit. There’s even a wrapper around all the flavors of Future, called sff4s.

(Note: all of these Futures are nonblocking, while Java’s Future requires you to block on it eventually.)

A standard Future is essential because it’s the “interoperability point” for nonblocking APIs. Without a good Future in the standard library, there can’t be good nonblocking APIs in the standard library itself. And without a standard Future, third-party nonblocking APIs don’t play nicely together.
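For illustration, here’s a stripped-down version of what such a nonblocking Future might look like, written in Java; the names are invented:

```java
import java.util.*;
import java.util.function.Consumer;

// Sketch of a nonblocking future: instead of blocking in get(), callers
// register a callback that fires on completion. Names here are invented.
public class MiniFuture<T> {
    private T value;
    private boolean completed = false;
    private final List<Consumer<T>> listeners = new ArrayList<>();

    public synchronized void onComplete(Consumer<T> listener) {
        if (completed) {
            listener.accept(value);   // already done: invoke right away
        } else {
            listeners.add(listener);  // pending: remember for later
        }
    }

    public synchronized void complete(T result) {
        value = result;
        completed = true;
        // A production version would hand these to a dispatcher rather
        // than invoke them while holding the lock.
        for (Consumer<T> l : listeners) l.accept(value);
        listeners.clear();
    }

    public static void main(String[] args) {
        MiniFuture<String> f = new MiniFuture<>();
        f.onComplete(v -> System.out.println("got: " + v));
        f.complete("hello"); // fires the registered callback
    }
}
```

The onComplete hook is the interoperability point: any library can hand back a MiniFuture, and any other library can chain off it without blocking a thread.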

4. Bonus Points and Optional Elaborations


async and await

In my opinion, the C# await operator with the async keyword is the Right Thing. Microsoft understands that great async support has become a required language feature. At my last company, C. Scott Ananian built a similar continuation-passing-style feature on JavaScript generators and it worked great.

Scala has the shift/reset primitives for continuation-passing style, but it isn’t clear to me whether they could be used to build something like await, and if they could, there’s nothing in the standard library yet that does so.

await depends on a standard Future because it’s an operator on Future. In Scala, await would probably not be a language feature, it would be a library function that operated on Future. (Assuming the shift/reset language feature is sufficient to implement that.)

Nonblocking streams

Many, many Java APIs let you pass in or obtain an InputStream or an OutputStream. Like all blocking APIs, these are problematic. But there isn’t a great alternative to use when designing an API; you’d have to invent something custom. The alternative should exist, and be standard. The standard library should have a nonblocking version of URL.openStream(), too.


What about actors?

If you code exclusively with actors and write only nonblocking code inside your actors, there’s never a need to mess with dispatchers, futures, or event loops. In that sense, actors solve all the problems I’m describing here. However: to code exclusively with actors, you need libraries that implement actor-based alternatives to all blocking APIs. And those libraries in turn need to be implemented somehow. It can’t be actors all the way down. There’s also the matter of using existing, non-actor APIs.

Ideally, Akka and associated actor-based APIs would build on a standard task dispatch and event loop facility. Akka already builds on regular Java executors; it doesn’t need anything super-fancy.

An issue with actors today is that they’re allowed to block. My idea: there should be a way to mark a blocking actor as such. It would then be dispatched to the unbounded executor intended for blocking tasks. By default, Akka could dispatch to the bounded executor intended for nonblocking tasks. People who want to block in their actors could either mark them blocking so Akka knows, or (brokenly) configure their nonblocking task executor to be unbounded. If you do everything right (no blocking), it would work by default. Remember the theory “design for nonblocking by default, give people tools to damage-control blocking.”

JDBC and similar

Peter Hausel pointed out to me recently that JDBC only comes in blocking form. Now there’s a fun problem… There are quite a few libraries with similar issues, along with people trying to solve them such as Brendan McAdams’s Hammersmith effort for MongoDB.

What’s the big deal?

Developers on platforms with no good solution here are used to the status quo. When I first came to server-side development from client-side UI toolkit development, the lack of an event loop blew my mind. I’m sure I thought something impolitic like How can anybody use anything so freaking broken??? In practice of course it turns out to be a manageable issue (obviously lots of stuff works great running on the JVM), but it could be better.

In my opinion, with node.js, C# adding async and await to the language, and so on, good answers in this area are now mandatory. People are going to know there’s a better way and complain when they get stuck using a platform that works the old way.

A high-quality application development platform has to address global task dispatch coordination, have an event loop, and offer nice nonblocking APIs.

Related posts…

Some old posts on related topics.

It has to work

Often when our minds turn to design, we think first of cosmetics. With all the talk of Apple these days, it’s easy to think they’re successful because of shiny aluminum or invisible screw holes.

But this is a side issue. The Steve Jobs quote that gets to the heart of product development might be: “So why the fuck doesn’t it do that?”

If you’re trying to make a successful tech product, 90% of the battle is that it works at all.

“Works” means: the intended customers really use it for its intended purpose.

Plain old bugs (deviations from the design specification) can keep a product from working, if they mean people don’t use the product. But bugs can be irrelevant.

It’s easy to fail even without bugs:

  • Using the product has too many steps or a single “too hard” step. People give up.
  • The product solves a non-problem or the wrong problem. Nobody will use it.

It doesn’t matter why people don’t or can’t use the product. If they don’t use it, it does not work.

Startups based on making it work

Plenty of startups were successful because they broke the “it has to work” barrier. A few famous examples:

How many “file sync” products existed before Dropbox? It must have been thousands. But it’s easy to create a nonworking file sync product. Too many steps, too much complexity, or too many bugs: any of these could mean “file sync” didn’t work.

Can you imagine pitching these companies to potential investors? “Yeah, thousands of people have failed to do our idea. But we’re going to finally do it right.” Tough sell.

Working vs. well-designed

Dropbox, Sonos, and Flip are working products people seem to agree were well-designed. But some successful products have pretty bad cosmetics:

  • despite its eventual failure, MySpace became popular initially because it met a need and it worked
  • craigslist
  • eBay
  • git got the big picture right, but every small UI design decision or “cosmetic” issue seemed to be wrong
  • HTML/CSS/JavaScript (now I’m just trolling, sorry)

“Working” is a little bit different from “well-designed.” Sometimes working can involve “worse is better,” perhaps. “Working” has to do with solving a problem, not (necessarily) doing so in an elegant or beautiful way.

“It just works” vs. solving the problem

The phrase “it just works” comes up a lot, and that’s almost enough. “Just working” seems to mean there aren’t many manual steps to use the product.

But you still have to solve the problem. I remember the iPod because it solved my music problem: its use of a hard drive meant I could get all my music on there, and listen to all my music. That’s what I wanted to do. Other players didn’t work for me because they didn’t hold all my music, which created a problem (deciding what to put on there) rather than solving one (getting rid of all those CDs). To this day, I find the iTunes app hard to use (bordering on terrible), but it works.

Easy to use is not quite the same as working, though perhaps it’s a helpful step.

QA that asks “does it work?”

At Red Hat, we used to use a sophisticated QA technique called the “yellow pad.” To yellow pad a product, you get one of those yellow legal pads. You need a fresh setup just like the one the customer will have (say, a computer your software has never been installed on). Then you install and try to use your software, writing down on the yellow pad anything that fails, looks embarrassing, or that nobody would ever understand.

Plenty of “finished” products will fail miserably.

QA teams and developers get tunnel vision. It’s easy to go for months with nobody on the team taking a fresh look at the product, step-by-step.

Once you can pass your own “yellow pad” criticism, or in parallel if you like, you can jump to hallway usability testing, maybe still using a yellow pad: watch someone else try to use the product for the first time, and again take notes. Fix that stuff.

The point is this: over and over and over, iteratively as you fix stuff, you need to try the product by walking through it, not by looking at features in isolation. Feature lists and requirements documents are death. Step-by-step stories are life.

I’m sure you can see the resonance with agile software development and lean startup methodology here, but you need not buy into a complex method or theoretical framework.

Yellow pads won’t help you solve the right problem, but they’ll get you past a lot of details that can sabotage you even if you’re solving the right problem.


A good way to kill a product: start adding extra features when it doesn’t work. First it needs to work. Focus on that. Then you can elaborate.

People who can’t tell if it works

Many people argue that non-technical CEOs can’t lead a technology company. One reason, I believe, is that many non-technical CEOs can’t tell whether a product works, or can’t understand which technical barriers to workingness are surmountable and which are fundamental.

Often, IT departments can’t tell if software works; they produce lists of requirements, but “it works” may not be one of them. Remember, “working” means “people will really use it in practice,” not “it can be made to do something if you have enough patience.”

Interaction design is a branch of design with an emphasis on step-by-step, and I find that designers with this background understand that it has to work. Many (not all) designers with other backgrounds may have an emphasis on the appearance of static snapshots, rather than the flow through the steps; and I’ve found that some of those designers don’t know whether a design will work.

Software developers are good at recognizing some ways that products don’t work, but as frequently noted, many of us overestimate the general public’s tolerance for complexity.

It might take a few years

Some kinds of product (notably, many web apps) can go from concept to launch in a few months or less, but whole categories cannot. Operating systems; anything involving hardware; non-casual video games such as World of Warcraft. In my experience, which spans a few projects anyway, complicated tech products tend to take a year or two to be “done” and around three years to work.

Sometimes I wonder if the “secret sauce” at Apple is no more than understanding this. Other hardware manufacturers have structural problems investing enough time in a single product. While I have no idea how long Apple spent developing the iPhone, I’m willing to bet it was at least three years. That’s more than enough time for most big companies to cancel a project.

Video game companies seem a little more open to investing the time required than other kinds of software companies, with major games spending several years in development. Video games don’t have the luxury of launching a “1.0” and then fixing it, I guess.

The first milestone that matters

If you’re building something, celebrate on the day that the product works.

You may have a long way to go yet (does anyone know your product exists?), but you’ve already succeeded where others failed.

Callbacks, synchronous and asynchronous

Here are two guidelines for designing APIs that use callbacks, to add to my inadvertent collection of posts about minor API design points. I’ve run into the “sync vs. async” callback issue many times in different places; it’s a real issue that burns both API designers and API users.

Most recently, this came up for me while working on Hammersmith, a callback-based Scala API for MongoDB. I think it’s a somewhat new consideration for a lot of people writing JVM code, because traditionally the JVM uses blocking APIs and threads. For me, it’s a familiar consideration from writing client-side code based on an event loop.


Definitions

  • A synchronous callback is invoked before a function returns, that is, while the API receiving the callback remains on the stack. An example might be: list.foreach(callback); when foreach() returns, you would expect that the callback had been invoked on each element.
  • An asynchronous or deferred callback is invoked after a function returns, or at least on another thread’s stack. Mechanisms for deferral include threads and main loops (other names include event loops, dispatchers, executors). Asynchronous callbacks are popular with IO-related APIs, such as socket.connect(callback); you would expect that when connect() returns, the callback may not have been called, since it’s waiting for the connection to complete.
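The two styles can be seen side by side in a short Java sketch (the helper names are made up):

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.function.Consumer;

public class SyncVsAsync {
    // Synchronous: every callback runs before this method returns.
    static void forEachSync(List<Integer> list, Consumer<Integer> cb) {
        for (Integer i : list) cb.accept(i);
    }

    // Asynchronous: the callback is deferred to another thread's stack.
    static void laterAsync(ExecutorService executor, Runnable cb) {
        executor.submit(cb);
    }

    public static void main(String[] args) throws Exception {
        List<Integer> seen = new ArrayList<>();
        forEachSync(Arrays.asList(1, 2, 3), seen::add);
        // By the time forEachSync returns, the callback ran on each element:
        System.out.println("sync saw " + seen);

        ExecutorService executor = Executors.newSingleThreadExecutor();
        CompletableFuture<String> result = new CompletableFuture<>();
        laterAsync(executor, () -> result.complete("async ran"));
        // When laterAsync returns, the callback may not have run yet,
        // so we have to wait for it explicitly:
        System.out.println(result.get(1, TimeUnit.SECONDS));
        executor.shutdown();
    }
}
```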


Two rules that I use, based on past experience:

  • A given callback should be either always sync or always async, as a documented part of the API contract.
  • An async callback should be invoked by a main loop or central dispatch mechanism directly, i.e. there should not be unnecessary frames on the callback-invoking thread’s stack, especially if those frames might hold locks.

How are sync and async callbacks different?

Sync and async callbacks raise different issues for both the app developer and the library implementation.

Synchronous callbacks:

  • Are invoked in the original thread, so do not create thread-safety concerns by themselves.
  • In languages like C/C++, may access data stored on the stack such as local variables.
  • In any language, they may access data tied to the current thread, such as thread-local variables. For example many Java web frameworks create thread-local variables for the current transaction or request.
  • May be able to assume that certain application state is unchanged, for example assume that objects exist, timers have not fired, IO has not occurred, or whatever state the structure of a program involves.

Asynchronous callbacks:

  • May be invoked on another thread (for thread-based deferral mechanisms), so apps must synchronize any resources the callback accesses.
  • Cannot touch anything tied to the original stack or thread, such as local variables or thread-local data.
  • If the original thread held locks, the callback will be invoked outside them.
  • Must assume that other threads or events could have modified the application’s state.

Neither type of callback is “better”; both have uses. Consider something like:

list.foreach(callback)

in most cases, you’d be pretty surprised if that callback were deferred and did nothing on the current thread! While something like:

socket.connect(callback)

would be totally pointless if it never deferred the callback; why have a callback at all?

These two cases show why a given callback should be defined as either sync or async; they are not interchangeable, and don’t have the same purpose.

Choose sync or async, but not both

Not uncommonly, it may be possible to invoke a callback immediately in some situations (say, data is already available) while the callback needs to be deferred in others (the socket isn’t ready yet). The tempting thing is to invoke the callback synchronously when possible, and otherwise defer it. Not a good idea.

Because sync and async callbacks have different rules, they create different bugs. It’s very typical that the test suite only triggers the callback asynchronously, but then some less-common case in production runs it synchronously and breaks. (Or vice versa.)

Requiring application developers to plan for and test both sync and async cases is just too hard, and it’s simple to solve in the library: If the callback must be deferred in any situation, always defer it.
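A sketch of the “always defer” rule in Java, with a hypothetical fetch API that defers the callback even when the result is already cached:

```java
import java.util.concurrent.*;
import java.util.function.Consumer;

// Sketch of the "always defer" rule: even when the value is already cached,
// hand the callback to the dispatcher instead of invoking it on the
// caller's stack. The fetch API and names here are hypothetical.
public class AlwaysDefer {
    private final ExecutorService dispatcher = Executors.newSingleThreadExecutor();
    private String cached = null;

    void fetch(Consumer<String> callback) {
        if (cached != null) {
            // Tempting: callback.accept(cached) right here, synchronously.
            // Deferring instead keeps the callback's contract always-async.
            dispatcher.submit(() -> callback.accept(cached));
        } else {
            dispatcher.submit(() -> {
                cached = "data"; // stand-in for a slow load
                callback.accept(cached);
            });
        }
    }

    public static void main(String[] args) throws Exception {
        AlwaysDefer api = new AlwaysDefer();
        api.cached = "data"; // simulate the "already available" case
        final Thread caller = Thread.currentThread();
        CompletableFuture<Boolean> deferred = new CompletableFuture<>();
        api.fetch(v -> deferred.complete(Thread.currentThread() != caller));
        System.out.println(deferred.get(1, TimeUnit.SECONDS)
            ? "callback was deferred even with cached data"
            : "callback ran synchronously");
        api.dispatcher.shutdown();
    }
}
```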

Example case: GIO

There’s a great concrete example of this issue in the documentation for GSimpleAsyncResult in the GIO library, scroll down to the Description section and look at the example about baking a cake asynchronously. (GSimpleAsyncResult is equivalent to what some frameworks call a future or promise.) There are two methods provided by this library, a complete_in_idle() which defers callback invocation to an “idle handler” (just an immediately-dispatched one-shot main loop event), and plain complete() which invokes the callback synchronously. The documentation suggests using complete_in_idle() unless you know you’re already in a deferred callback with no locks held (i.e. if you’re just chaining from one deferred callback to another, there’s no need to defer again).

GSimpleAsyncResult is used in turn to implement IO APIs such as g_file_read_async(), and developers can assume the callbacks used in those APIs are deferred.

GIO works this way and documents it at length because the developers building it had been burned before.

Synchronized resources should defer all callbacks they invoke

Really, the rule is that a library should drop all its locks before invoking an application callback. But the simplest way to drop all locks is to make the callback async, thereby deferring it until the stack unwinds back to the main loop, or running it on another thread’s stack.

This is important because applications can’t be expected to avoid touching your API inside the callback. If you hold locks and the app touches your API while you do, the app will deadlock. (Or if you use recursive locks, you’ll have a scary correctness problem instead.)

Rather than deferring the callback to a main loop or thread, the synchronized resource could try to drop all its locks; but that can be very painful because the lock might be well up in the stack, and you end up having to make each method on the stack return the callback, passing the callback all the way back up the stack to the outermost lock holder who then drops the lock and invokes the callback. Ugh.
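Here’s a minimal Java sketch of deferral as the way to get the lock off the stack before the callback runs; the callback can then safely re-enter the library (names are invented):

```java
import java.util.concurrent.*;
import java.util.function.Consumer;

// Sketch: defer callbacks to a dispatcher so they never run while the
// library's lock is held; the app callback can then safely re-enter the
// library, as in the cursor-iteration scenario. Names are invented.
public class DeferOutsideLock {
    private final ExecutorService dispatcher = Executors.newSingleThreadExecutor();

    synchronized String queryOnce() {
        return "row"; // stand-in for work done under the connection lock
    }

    void query(Consumer<String> callback) {
        String result = queryOnce();
        // Defer: by the time the callback runs, the monitor is released.
        dispatcher.submit(() -> callback.accept(result));
    }

    public static void main(String[] args) throws Exception {
        DeferOutsideLock conn = new DeferOutsideLock();
        CompletableFuture<String> roundTrip = new CompletableFuture<>();
        // The callback touches the connection again, like iterating a cursor:
        conn.query(first -> roundTrip.complete(first + "+" + conn.queryOnce()));
        System.out.println(roundTrip.get(1, TimeUnit.SECONDS));
        conn.dispatcher.shutdown();
    }
}
```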

Example case: Hammersmith without Akka

In Hammersmith as originally written, the following pseudocode would deadlock:

connection.query({ cursor => /* iterate cursor here, touching connection again */ })

Iterating the cursor will go back through the MongoDB connection. The query callback was invoked from code in the connection object… which held the connection lock. Not going to work, but this is natural and convenient code for an application developer to write. If the library doesn’t defer the callback, the app developer has to defer it themselves. Most app developers will get this wrong at first, and once they catch on and fix it, their code will be cluttered by some deferral mechanism.

Hammersmith inherited this problem from Netty, which it uses for its connections; Netty does not try to defer callbacks (I can understand the decision since there isn’t an obvious default/standard/normal/efficient way to defer callbacks in Java).

My first fix for this was to add a thread pool just to run app callbacks. Unfortunately, the recommended thread pool classes that come with Netty don’t solve the deadlock problem, so I had to fix that. (Any thread pool that solves deadlock problems has to have an unbounded size and no resource limits…)

In the end it works, but imagine what happens if callback-based APIs become popular and every jar you use with a callback in its API has to have its own thread pool. Kind of sucks. That’s probably why Netty punts on the issue. Too hard to make policy decisions about this in a low-level networking library.

Example case: Akka actors

Partly to find a better solution, next I ported Hammersmith to the Akka framework. Akka implements the Actor model. Actors are based on messages rather than callbacks, and in general messages must be deferred. In fact, Akka goes out of its way to force you to use an ActorRef to communicate with an actor, where all messages to the actor ref go through a dispatcher (event loop). Say you have two actors communicating, they will “call back” to each other using the ! or “send message” method:

actorOne ! Request("Hello")
// then in actorOne
sender ! Reply("World")

These messages are dispatched through the event loop. I was expecting my deadlock problems to be over in this model, but I found a little gotcha — the same issue all over again, invoking application callbacks with a lock held. This time it was the lock on an actor while the actor is processing a message.

Akka actors can receive messages from either another actor or from a Future, and Akka wraps the sender in an object called Channel. The ! method is in the interface to Channel. Sending to an actor with ! will always defer the message to the dispatcher, but sending to a future will not; as a result, the ! method on Channel does not define sync vs. async in its API contract.

This becomes an issue because part of the “point” of the actor model is that an actor runs in only one thread at a time; actors are locked while they’re handling a message and can’t be re-entered to handle a second message. Thus, making a synchronous call out from an actor is dangerous; there’s a lock held on the actor, and if the synchronous call tries to use the actor again inside the callback, it will deadlock.

I wrapped MongoDB connections in an actor, and immediately had exactly the same deadlock I’d had with Netty, where a callback from a query would try to touch the connection again to iterate a cursor. The query callback came from invoking the ! method on a future. The ! method on Channel breaks my first guideline (it doesn’t define sync vs. async in the API contract), but I was expecting it to be always async; as a result, I accidentally broke my second guideline and invoked a callback with a lock held.

If it were me, I would probably put deferral in the API contract for Channel.! to fix this; however, as Akka is currently written, if you’re implementing an actor that sends replies, and the application’s handler for your reply may want to call back and use the actor again, you must manually defer sending the reply. I stumbled on this approach, though there may be better ones:

private def asyncSend(channel: AkkaChannel[Any], message: Any) = {
    Future(channel ! message, self.timeout)(self.dispatcher)
}

An unfortunate aspect of this solution is that it double-defers replies to actors, in order to defer replies to futures once.

The good news about Akka is that at least this solution exists: there’s a dispatcher to use. With plain Netty, I had to create a dedicated thread pool instead.

Akka gives an answer to “how do I defer callbacks,” but it does require special-casing futures in this way to be sure they’re really deferred.

(UPDATE: the Akka team is already working on this, here’s the ticket.)


While I found one little gotcha in Akka, the situation is much worse on the JVM without Akka because there isn’t a dispatcher to use.

Callback-based APIs really work best if you have an event loop, because it’s so important to be able to defer callback invocation.

That’s why callbacks work pretty well in client-side JavaScript and in node.js, and in UI toolkits such as GTK+. But if you start coding a callback-based API on the JVM, there’s no default answer for this critical building block. You’ll have to go pick some sort of event loop library (Akka works great), or reinvent the equivalent, or use a bloated thread-pools-everywhere approach.
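“Reinventing the equivalent” can be as small as a single thread draining a task queue. A hypothetical sketch (not any particular library’s API) of the critical building block, an event loop that guarantees callbacks are deferred rather than run in the caller’s stack frame:

```scala
import java.util.concurrent.LinkedBlockingQueue

// Minimal event loop: tasks submitted via defer() always run later,
// on the loop's own thread, never in the submitting thread's frame.
object EventLoop {
  private val queue = new LinkedBlockingQueue[() => Unit]()

  private val thread = new Thread(new Runnable {
    def run(): Unit = while (true) queue.take()()
  })
  thread.setDaemon(true)
  thread.start()

  def defer(task: () => Unit): Unit = queue.put(task)
}
```

An ExecutorService would do the same job; the point is that some such dispatcher has to exist before a callback-based API can honor an “always async” contract.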

Since callback-based APIs are so trendy these days… if you’re going to write one, I’d think about this topic up front.

Update: Health insurance credits and penalties

In a post a couple months ago I argued that a given financial effect on an individual could be called a “tax increase plus a tax credit for doing XYZ,” or a “tax penalty for not doing XYZ,” and that the two are identical in terms of how many dollars all parties involved end up with. Based on that, I was wondering why a tax penalty for not buying insurance would be legally different from the existing, longstanding tax credits for buying insurance.

Two of the judges in the recent Sixth Circuit opinion upholding the law addressed this issue, though the court as a whole declined to rule on taxing power grounds (because they upheld the law on commerce clause grounds anyway).

First reaction: wow, the Sixth Circuit reads my blog and answers my questions! Nice!  (Note to the thick: kidding.)

I was arguing (as a non-lawyer) that the government shouldn’t have the power to do something, or lack that power, based purely on what label they stick on it. Surely if the individual mandate were unconstitutional, the government could not fix the constitutional problem with a rewording that would have no practical effect. Should abridging free speech or skipping due process become OK as long as we use the right words to describe them?

(A small twist that might merit a footnote: due to a special rule in the health care bill, the IRS isn’t allowed to enforce the individual mandate’s penalty as strongly as they can enforce falsely claiming a credit. That’s a small practical difference between the penalty and a credit, but one that makes the penalty a weaker infringement on individual rights than the existing credits.)

In the Sixth Circuit opinion, the two judges addressing the tax issue write (see page 29 for the relevant stuff):

…it is easy to envision a system of national health care, including one with a minimum-essential-coverage provision, permissibly premised on the taxing power. Congress might have raised taxes on everyone in an amount equivalent to the current penalty, then offered credits to those with minimum essential insurance. Or it might have imposed a lower tax rate on people with health insurance than those without it.

That is, they agree with me that the tax increase plus the credit would have been the same thing as the penalty, economically speaking. (“Economically speaking” = the same parties end up with the same number of dollars in the same situations.)
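With made-up numbers (not the actual amounts in the law), the equivalence is plain arithmetic:

```scala
// Invented figure for illustration.
val amount = 100

// Penalty framing: pay $100 only if you go without insurance.
def owedUnderPenalty(insured: Boolean): Int =
  if (insured) 0 else amount

// Tax-plus-credit framing: everyone's tax rises by $100, and buying
// insurance earns a $100 credit.
def owedUnderCredit(insured: Boolean): Int =
  amount - (if (insured) amount else 0)

// Either way, the insured owe $0 and the uninsured owe $100.
```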

But they go on and say it matters what you call it. In other words, they argue the taxing power does not include tax penalties for XYZ, but does include tax credits for not-XYZ.

I thought that was an absurdity proving either 1) the penalty is allowed or 2) the credits are not allowed, depending on your political bent. They embraced the position I found absurd. Lawyers! (eye roll) (Sorry, lawyer friends.)

I see a big gap between those who are all for, or up in arms about, the individual mandate (everyone cites political-philosophy-oriented arguments), and what the two judges are arguing here. Those I’ve heard arguing about political philosophy would be against both penalty and credit, or for both penalty and credit. I don’t know of a philosophical argument where the labeling makes the difference.

Relabeling something doesn’t change what rights an individual has, or what rights a state has, in practice. The distinction, if any, is legalistic.

I’m imagining the founding fathers: “Call it a credit rather than a penalty, or give me death!”

I’m blown away that a law with such huge practical effects — some will say positive, some will say negative, not the point — can be in limbo over this. Neither advocates nor opponents of reform would consider the wording of the bill “what’s at issue,” but it could be what the courts base a decision on. Congress, break out your thesaurus next time.

(As with the previous post, spare us the generic debate about health care reform in the comments, we’ve all heard it before. I haven’t heard much discussion of this very specific legal topic though, insights are welcome on that.)

Some personal finance answers

I contribute to money.stackexchange.com sometimes; the site doesn’t get a ton of traffic yet, sadly. Some of the stuff I’ve written over there is as good (or bad) as my blog posts over here, so I thought I’d link to some highlights.

It’s not a bad Q&A site on personal finance if you browse around, though it needs more people asking and answering.

Book Review: Selfish Reasons to Have More Kids

Despite the title of Selfish Reasons to Have More Kids, the “why you should have more kids” part feels tacked on. The interesting part of the book reviews twin and adoption studies, making the case that parenting style doesn’t matter much in the long run.

The book’s argument in brief is:

  • in twin studies and adoption studies, most variation in how adults turn out can’t be explained by what their parents did. “Normal” variation in first-world parenting (i.e. excluding abuse, malnutrition, etc.) does not affect how people turn out in adulthood very much.
  • parenting affects how kids act while they are kids, but once people move out of the house they bounce back to their “natural state.” (“As an adult, if I want a cookie, I have a cookie, okay?” — Seinfeld)
  • one long-run thing parents can strongly affect is whether their kids have fond memories of them and enjoy having them around.
  • parents should be more relaxed and invest less time in “kid improvement,” more time in everyone enjoying themselves even if it’s by watching TV.
  • discipline is mostly for the benefit of the parents (“keep kids from being annoying around us”) not for the benefit of the kids (“build character”).
  • (the conclusion in the title) since parenting can and should be less unpleasant, people should consider having more kids than they originally planned (i.e. the cost/benefit analysis is better than they were thinking).

Can you tell the author is an economist?

I get annoyed by “nature vs. nurture” popular science writing, and have a couple thoughts along those lines about this book.

Thank you for skipping the just-so stories

There’s a mandatory paragraph in nature vs. nurture articles where someone speculates on the just-so story. Something along the lines of “such-and-such behavior evolved so that women could get their mates to bond with them and hang around to care for children,” or whatever. Bryan Caplan 100% skips the silly evolutionary speculation. Thank you!

Cultural and genetic factors

However, I did find the book a little quick to jump to the genes. In “Appendix to Chapter 2,” Caplan explains that twin and adoption studies try to estimate three variables:

  1. fraction of variance explained by heredity (similarity of twins raised apart)
  2. fraction of variance explained by shared family environment (similarity of biologically unrelated children raised together)
  3. fraction of variance explained by non-shared family environment (the rest of variance)
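These three fractions partition total variance, so they sum to one. With invented numbers (not the book’s estimates):

```scala
// Invented illustrative estimates, not results from any actual study.
val heredity = 0.40      // from similarity of twins raised apart
val sharedEnv = 0.10     // from similarity of unrelated children raised together
val nonShared = 1.0 - heredity - sharedEnv  // the residual: 0.50
```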

Caplan (to his credit) goes to some lengths to point out that the twin and adoptions studies are virtually all from first-world countries and typical homes in those countries. For example, one of the studies was done in Sweden, likely even more homogeneous than the ones done in the United States.

Obvious point, I’m sure Caplan would agree: when something correlates with genes, that doesn’t mean “there’s a gene for it.” For example, attractive people have higher incomes, and attractiveness is mostly genetic; so one way income can be “genetic” is through genes for appearance. Most people would find it misleading to say there’s a gene for high income, even though income correlates with various genes. This is a pretty simple example, but it can be much, much more complicated. Look at this diagram for how genes might get involved in autism, for example.

An outcome such as income often involves genes interacting with culture — say, standards of appearance. The ideal genes for appearance change over time and around the world.

But income isn’t just affected by appearance. Who-knows-how-many genes get involved: countless genes affect appearance, personality, intelligence, and then all of those factors in turn affect your income… in ways that depend on your culture and environment. Make it more complicated: there are also genes that affect how we react to certain appearances or personalities. If standards of attractiveness have a genetic component, then you’d expect that there are also genetic variations in what people find attractive, and then cultural variations layered on that.

Genes and culture also interact with what I think of as tradeoffs or “physical properties of the world.” All I mean here is that you can’t combine behaviors and traits arbitrarily, they tend to come in clusters that make sense together or work well together. This seems true to me for both personalities and for cultures. If you slice-and-dice your time or your concepts in one way, then you didn’t slice-and-dice them another way. Some ways of thinking or doing work better than others, some are more compatible with each other, etc. There’s a source of universals here that need not point to genes.

Finally, culture, like genes, gets passed on between generations. A very simple example: if you had a culture that was fundamentally anti-natalist, it would not last very long. And in fact most cultures are very enthusiastic about having children. This is a cultural universal (other than short-lived sub-groups), but just its universality doesn’t connect it to genes; it could be cultural rather than genetic evolution. Humans inherit so many ways of thinking and doing, and so much background knowledge, through interaction with other people, rather than through DNA.

Getting back around to the book. If you do a twin study in Sweden, then you might find that twins raised apart have similar outcomes. But it’s important to recognize that there aren’t (necessarily) genes for those outcomes; there are genes that cause the twins to have (likely unknown) traits which somehow result in those outcomes, in Sweden.

A couple thoughts:

  • This probably doesn’t matter so much for the book’s practical conclusions. Parents can’t change the culture surrounding their children any more than they can change their kids’ genes.
  • But from a wider perspective, it sure would be interesting — and perhaps useful at times — to know the mechanism for hereditary outcomes. i.e. the causality and not only the correlation.

When Caplan says “it’s genetic” I would say that’s true in that there appears to be a Rube Goldberg chain of causality, such that somehow certain genes are resulting in certain outcomes. However, my feeling is that the mechanism matters.

There’s a difference between a gene for unattractiveness/low-IQ/bad-personality (or sex, or race, for that matter) resulting in low income because the world (including culture) works against high income for people with those traits, and saying that “income is genetic.” Does it really feel accurate to say that “such-and-such percent of variation in income is explained by genetics” here, without mentioning the intermediate traits that are genetic, which in turn affect income?

I’m not sure Caplan would really disagree, but I do think the “nature vs. nurture” genre, including this book, glosses over a lot of complexity that kinda matters.

(The mistake is inherent in the phrase “nature vs. nurture”; if you start spelling out the mechanisms, it’s pretty clear that the two interact and the real question is how specifically in this case, right?)

Bottom line, when talking about the “fraction of variance explained by heredity” I’d add the footnote in this physical and cultural environment, because the background reality, cultural and otherwise, shared among families in the study — or even shared among all families on Earth — has a lot to do with which genetic traits matter and exactly how they matter.

More hours spent parenting

Caplan talks a bit about how people spend a lot more time parenting these days than they used to, and mostly blames increased parental guilt (people thinking they have to “improve” their children).

I’d wonder about other trends, such as the decline of extended family ties and the tendency for people to move across the country. If you’ve had a child, you may have noticed that they’re designed to be raised by more than two people. (The fact that some single parents raise them alone blows my mind.)

Missing from the book, I thought, was research into how other social/demographic trends were affecting parenting. More geographic mobility, more tendency toward social isolation, more tendency to need two incomes to achieve a “middle class” lifestyle, etc. — I have no idea which trends are most important, but surely some of these factors go into parenting decisions.

Caplan kind of implies that changes in parenting are almost all “by choice,” due to beliefs about the importance of parenting, and I’m not sure I buy it. It seems plausible to me that changes in parenting could be mostly by choice, but also plausible that they could be mostly due to socioeconomic trends.


The book has several semi-related side trails, which may be interesting:

  • some discussion of safety statistics, probably familiar from Free-Range Kids
  • some discussion of the decades-old Julian Simon vs. Paul Ehrlich “will population outstrip resources” debate
  • editorial in favor of reproductive technology
  • thoughts for grandparents or future grandparents on how to end up with more grandchildren

Child services risk

On the Free-Range Kids tangent, I always wonder about “child services risk”: the risk that some nosy neighbor gets incensed and creates a whole traumatic drama with child services. To me this risk seems plausible. In the book, Caplan says they let their 7-year-old stay home alone, which seems fine to me for the right 7-year-old in the right context, but I’m pretty sure a lot of state governments say it’s not fine.

I’d sort of like some “child services screwing up your child’s life” statistics to go along with the stats on abductions and drownings. Should we worry about that?

It’s one thing to say you don’t care what other people think, but when people can turn you in and just the fact of being reported creates a nightmare, I can understand why one wouldn’t want to appear unconventional.


I found the review of twin and adoption research interesting (and practical, for parents). Can’t hurt to remember that your kids are mostly going to do whatever they want anyway, once they move out of the house, and that chances are they’ll do a bunch of the same stuff their parents did. Caplan follows through nicely with implications for discipline, family harmony, and so on.

Some of the rest of the book wasn’t as interesting to me; I’d heard the Free-Range Kids and Simon vs. Ehrlich material, for example, many times before. But you can skim those bits if you like.

Buy on Amazon — I get money if you do!