Havoc's Blog

this blog contains blog posts

New blog hosting

I’m trying out WPEngine instead of self-managed EC2 since my self-managed uptime stats were pretty bad.

Let me know if anything about the blog is more broken than it was before.

Desktop Task Switching Could Be Improved

In honor of GUADEC 2012, a post about desktop UI. (On Linux, though I think some of these points could apply to Windows and OS X.)

When I’m working, I have to stop and think when I flip between two tabs or windows. If I don’t stop and think, I flip to the wrong destination a high percentage of the time. I see this clunkiness every minute or two.

For me to do the most common action (flip between documents/terminals/websites) I may need to use my workspace switch hotkey (Alt+number), app switch (Alt+`), window switch (Alt+Tab), tab switch (Alt+PgUp, Alt+PgDn, C-x-b), or possibly a sequence of these (like change workspace then change window or change window then change tab).

I believe it could be reduced to ONE key which always works.

The key means “back to what I was doing last” and it works whether you were last on a tab, a window, or another workspace. There’s a big drop-off in goodness between:

  • one key that always works
  • two keys to choose from

Once you have two, you have the potential to get it wrong and you have to slow down to think.

Adding more than two (such as the current half-dozen, including sequences) makes it worse. But the big cliff is from one to two.

User model vs. implementation model

Can’t speak for others, but I may have two layers of hierarchy in my head:

  • A project: some real-world task like “file expense report” or “write blog post” or “develop feature xyz”
  • A screen: a window/tab/buffer within the project, representing some document I need to refer to or document I’m creating

The most common action for me is to switch windows/tabs/buffers within a project, for example between the document I’m copying from and the one I’m pasting to, or the docs I’m referring to and the code I’m writing, or whatever it is.

The second most common action for me is to move among projects or start a new project.

Desktop environments give me all sorts of hierarchy unrelated to the model in my head:

  • Workspace
  • Application
  • Window
  • Tab (including idiosyncratic “tabs” like Emacs buffers)
  • Monitor (multihead)

None of these correspond to “projects” or “screens.” You can kind of build a “projects” concept from these building blocks, but I’m not sure the desktop is helping me do so. There’s no way to get a unified view of “screens.”

I don’t know what model other people have in their head, but I doubt it’s as complex as the one the desktop implements.

Not a new problem

I’m using GNOME 3 on Fedora 17 today, but this is a long-standing issue. Back when I was working on Metacity for GNOME 2, we tried to get somewhere on this, but we accepted the existing setup as a constraint (apps, windows, workspaces, etc.) and therefore failed. At litl we spent a long time wrestling with the answer and found something pretty good though perhaps not directly applicable to a regular desktop. I wish I had a good video or link to show for litl’s solution (essentially a zoomable grid of maximized windows, but lots of details matter).

iPhone has simplified things here as well. They combine windows and applications into one. But part of the simplification on iPhone is that it’s difficult to do things that involve more than one “screen” at a time. On a desktop, it wouldn’t be OK to make that difficult.

In GNOME 3, I also use the Windows key to open the overview and pick a window by thumbnail. Some issues with this:

  • It does not include tabs, only windows.
  • In practice, I have to scan all the thumbnails every time to find the one I want.

These were addressed in the litl design:

  • Tabs and windows were the same thing.
  • Windows remained in a stable, predictable location in the overview.
  • The overview was spatially related to the window, that is you were actually zooming in and out, which meant during the animation you got an indication of where you were.
  • I believe you could even click on a window before the zoom in/out animation was complete, though I could be wrong. In any case you could be moving toward it while it was coming onto the screen.

As a result, the litl design was much faster for task switching via overview key plus mouse. If you were repeatedly flipping between two tasks, you could memorize their location in space and find them quickly based on that. If other windows were opened and closed, the remaining ones might slide over, but they’d never reshuffle entirely.

I think GNOME tries to “shrink the windows in their current location” rather than “zoom out”, so it’s trying to have a spatial relationship. A problem is that I have everything maximized (or halfscreen-maximized). “Shrink to current location” ends up as “appears random” when windows don’t have any meaningful relationships on the x/y axes (they’re just in a z-axis stack). (Direction for thought: is there some way maximized windows could be presented as adjacent rather than stacked?)

Overall I vastly prefer Fedora 17 to my previous GNOME 2 setup and I think it’s a step on the path to cleaning this up for good. In the short term, a couple things seem to make the problem worse:

  • The “application” layer of hierarchy (Alt+Tab vs. Alt+`) adds one more way to switch “screens,” though for me this just made an existing problem slightly worse (the bulk of the problem is longstanding and we were already far from one key).
  • The window list on the panel had a fixed order and was always onscreen, so it was faster than the thumbnail overview. I believe the thumbnail overview approach could be fixed; on the litl, for me zoom-out-to-thumbnails was as fast as the window list. The old window list was an ugly kluge (it creates an abstraction/indirection where you have to match up two objects, a button and a window — direct manipulation would be so much better). But its fixed spatial layout made it fast.

GNOME 3 opens the door to improving matters; GNOME 2’s technology (e.g. without animation and compositing) made it hard to implement ideas that might help. GNOME 3 directions like encouraging maximized apps, automatic workspace management, the overview button, etc. may be on the path to the solution.

Can it be improved?

I’ll limit this post to framing the problem and hinting at a couple of directions. I don’t know the right design answer. I’m definitely going to omit speculation on how to implement (for example, getting tabs into the rotation would be possible, but require some implementation heroics).

I know everything is the way it is now for good historical reasons, valid technical and practical constraints, and so on. But I bet there’s a way to get past those with enough effort.

ACA constitutionality doesn’t hinge on what you call it

TL;DR I got it right, you may now send me your offers for lucrative legal consulting work. Note: I am not a lawyer.

I’ve had a little series of posts, first and second, arguing that the tax code already punishes you for failure to purchase health insurance and health care. (Because any tax credit can be framed as an equivalent tax increase + tax penalty.) Thus, the individual mandate should be constitutional in the same way that existing credits are, because practically and economically speaking, it’s the same thing as many credits already in the tax code, including health insurance and health care credits.

I’ve only read the syllabus of the Supreme Court decision so far, but it looks like John Roberts bought the argument that something can’t be unconstitutional just because it’s named the wrong thing:

4. CHIEF JUSTICE ROBERTS delivered the opinion of the Court with respect to Part III–C, concluding that the individual mandate may be upheld as within Congress’s power under the Taxing Clause. Pp. 33–44.

(a) The Affordable Care Act describes the “[s]hared responsibility payment” as a “penalty,” not a “tax.” That label is fatal to the application of the Anti-Injunction Act. It does not, however, control whether an exaction is within Congress’s power to tax. In answering that constitutional question, this Court follows a functional approach, “[d]isregarding the designation of the exaction, and viewing its substance and application.” United States v. Constantine, 296 U. S. 287, 294. Pp. 33–35.

(b) Such an analysis suggests that the shared responsibility payment may for constitutional purposes be considered a tax. The payment is not so high that there is really no choice but to buy health insurance; the payment is not limited to willful violations, as penalties for unlawful acts often are; and the payment is collected solely by the IRS through the normal means of taxation. Cf. Bailey v. Drexel Furniture Co., 259 U. S. 20, 36–37. None of this is to say that payment is not intended to induce the purchase of health insurance. But the mandate need not be read to declare that failing to do so is unlawful. Neither the Affordable Care Act nor any other law attaches negative legal consequences to not buying health insurance, beyond requiring a payment to the IRS. And Congress’s choice of language—stating that individuals “shall” obtain insurance or pay a “penalty”—does not require reading §5000A as punishing unlawful conduct. It may also be read as imposing a tax on those who go without insurance. See New York v. United States, 505 U. S. 144, 169–174. Pp. 35–40.

(c) Even if the mandate may reasonably be characterized as a tax, it must still comply with the Direct Tax Clause, which provides: “No Capitation, or other direct, Tax shall be laid, unless in Proportion to the Census or Enumeration herein before directed to be taken.” Art. I, §9, cl. 4. A tax on going without health insurance is not like a capitation or other direct tax under this Court’s precedents. It therefore need not be apportioned so that each State pays in proportion to its population. Pp. 40–41.

On a more serious note, this law will have huge positive consequences for my family, and I’m grateful that it held up in court. I was going to be particularly upset to suffer giant practical problems in my own life just because someone failed to open their search-and-replace function in a word processor and change “penalty” to “tax.” I’m very happy we weren’t screwed on that technicality.

While I haven’t read the whole decision yet, it looks like those looking for limitations on federal power will be happy with the discussion of commerce powers and the precedents established in that area.

The best answer requires some aggravation

Once you think you have a good answer to an important problem, it’s time to drive everyone crazy looking for an even better answer.

Here’s a scenario I’ve been through more times than I can count:

  • I thought I had a pretty good approach, or didn’t think anything better was possible, and wasn’t looking to spend more time on the problem.
  • Someone had the passion to keep pushing, and we either stayed in the room or kept the email thread going beyond a “reasonable” amount of effort.
  • We came up with a much better approach, often reframing the problem to eliminate the tradeoff we were arguing about at first.

Steve Jobs was legendarily cruel about pushing for more. But in my experience good results come from more mundane aggravation; there’s no need to make people cry, but there probably is a need to make them annoyed. Annoyed about spending three extra hours in the meeting room, annoyed about the length of the email thread, annoyed about compromising their artistic vision… if the human mind thinks it already has an answer, it will fight hard not to look for a new answer.

That might be the key: people have to be in so much pain from the long meeting or thread or harsh debate or Jobsian tongue-lashing that they’re willing to explore new ideas and even commit to one.

It shows just how much we hate to change our mind. I often need to be well past dinnertime or half a novel into an email thread before my brain gives up: “I’ll set aside my answer and look for a new one, because that’s the fastest way out of here.”

The feeling that you know the answer already is a misleading feeling, not a fact.

Some people use brainstorming rules, like the improv-inspired “yes, and…” rule, trying to separate generative thinking from critical thinking. First find and explore lots of alternatives, then separately critique them and select one. Avoid sticking on an answer prematurely (before there’s been enough effort generating options). Taking someone else’s idea and saying “I like this part, what about this twist…” can be great mental exercise.

To know you’ve truly found the best decision possible, your team might need to get fed up twice:

  • Brainstorm: stay in the room finding more ideas, long after everyone thinks they’re tapped out.
  • Decide: stay in the room debating, refining, and arguing until everyone thinks a decision should have been made hours ago.

A feeling of harmony or efficiency probably means you’re making a boring, routine decision. Which is fine, for routine stuff. But if you have an important decision to make, work on it until the whole team wants to kill each other. Grinding out a great decision will feel emotional, difficult, and time-consuming.

Binding an implicit to a Scala instance

In several real-world cases I’ve had a pair of types like this:
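
(A sketch with invented method signatures, just to show the shape; the real types vary case by case.)

trait CacheContext

trait Cache {
  // every method needs an implicit CacheContext to do anything
  def get(key: String)(implicit ctx: CacheContext): Option[String]
  def put(key: String, value: String)(implicit ctx: CacheContext): Unit
}

// the "bound" version captures the context once, so callers no longer
// need the implicit in scope
class BoundCache(cache: Cache)(implicit ctx: CacheContext) {
  def get(key: String): Option[String] = cache.get(key)
  def put(key: String, value: String): Unit = cache.put(key, value)
}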

An implicit often leaves a policy decision undecided. At some layer of your code, though, you want to make the decision and stick to it.

Passing around a tuple with an object and the implicit needed to invoke its methods can be awkward. If you want to work with two such tuples in the same scope, you can’t import their associated implicits, so it’s verbose too.

It would be nice to have bind[A,I](a: A, asImplicit: I), where bind(cache, cacheContext) would return the equivalent of BoundCache.

I guess this could be done with macro types someday, but probably not with the macro support in Scala 2.10.

If implemented in the language itself, it’s possible BoundCache wouldn’t need any Java-visible methods (the delegations to Cache could be generated inline).

However, one use of “bound” classes could be to adapt Scala APIs to Java. In Java you could bind one time, instead of explicitly passing implicit parameters all over the place.

Has anyone else run into this?

practice and belief

This NYTimes blog post scrolled past the other day, a discussion of an article by John Gray, who has this to say:

The idea that religions are essentially creeds, lists of propositions that you have to accept, doesn’t come from religion. It’s an inheritance from Greek philosophy, which shaped much of Western Christianity and led to practitioners trying to defend their way of life as an expression of what they believe.

The most common threads of religion, science, and philosophy I learned about in school shared this frame; their primary focus was accurate descriptions of outside reality. Which is fine and useful, but perhaps not everything. In some very tiresome debates (atheism vs. religion, “Truth” vs. “relativism strawman”), both sides share the assumption that what matters most is finding a set of words that best describe the world.

There is at least one alternative, which is to also ask “what should we practice?” not only “what should we believe?”

If you’re interested in this topic, I’ve stumbled on several traditions that have something to say about it so I thought I’d make a list:

  1. Pragmatist philosophy, for example this book is a collection of readings I enjoyed, or see Pragmatism on Wikipedia.
  2. Unitarian Universalism, which borrows much of the format and practice of a Protestant church but leaves the beliefs up to the individual. I’ve often heard people say that their belief is what matters but they don’t like organized religion; UU is the reverse of that. (Not that UU is against having beliefs, it just doesn’t define its membership as the set of people who agree on X, Y, and Z. It is a community of shared practice rather than shared belief.)
  3. Behavioral economics and psychology. For example, they have piled on the evidence that one’s beliefs might flow from one’s actions (not the other way around), and in general made clear that knowing facts does not translate straightforwardly into behavior.
  4. Buddhism, not something I know a lot about, but as explained by Thich Nhat Hanh for example in The Heart of the Buddha’s Teaching. Themes include the limitations of language as a way to describe reality, and what modern bloggers might call “mind hacks” (practical ways to convince the human body and mind to work better).

A few thoughts on open projects, with mention of Scala

Most of my career has been in commercial companies related to open source. I learned to code just out of college at a financial company using Linux. I was at Red Hat from just before the IPO when we were selling T-shirts as a business model, until just before the company joined the S&P500. I worked at litl making a Linux-based device, and now I’m doing odd jobs at Typesafe.

I’ve seen a lot of open source project evolution and I’ve seen a lot of open-source-related commercial companies come and go.

Here are some observations that seem relevant to the Scala world.

Open source vs. open project

In some cases, one company “is” the project; all the important contributors are from the company, and they don’t let outsiders contribute effectively. There are lots of ways to block outsiders: closed infrastructure such as bug trackers, key decisions made in private meetings, taking forever to accept patches, lagged source releases, whatever. This is “one-way-ware,” it’s open source but not an open project.

In other cases, a project is bigger than the company. As noted in this article, Red Hat has a mission statement “To be the catalyst in communities of partners and customers and contributors building better technology the open-source way” where the key word is “catalyst.” And when Linux Weekly News runs their periodic analysis of Linux kernel contributions, Red Hat is a large but not predominant contributor. This is despite a freaking army of kernel developers at Red Hat. Red Hat has many hundreds of developers, while most open source startups probably have a dozen or two.

In a really successful project, any one company will be doing only a fraction of the work, and this will remain true even for a billion-dollar company. As a project grows, an associated company will grow too; but other companies will appear, more hobbyists and customers will also contribute, etc.  The project will remain larger than any one company.

(In projects I’ve been a part of, this has gone in “waves”; sometimes a company will hire a bunch of the contributors and become more dominant for a time, but this rarely lasts, because new contributors are always appearing.)

Project direction and priorities

Commercial companies will tend to do a somewhat random grab-bag of idiosyncratic paying-customer-driven tasks, plus maybe some strategic projects here and there. The nature of open projects is that most work is pretty grab-bag; because it’s a bunch of people scratching their own itches, or hiring others to scratch a certain itch.

In the Scala community for example, some work is coming from the researchers at EPFL, and (as I understand it) their itch is to write a paper or thesis.  Given dictatorial powers over Scala, one could say “we don’t want any of that work” but one could never say “EPFL people will work on fixing bugs” because they have to do something suitable for publication. Similarly, if you’re building an app on Scala, maybe you are willing to work on a patch to fix some scalability issue you are encountering, but you’re unlikely to stop and work on bugs you aren’t experiencing, or on a new language feature.

An open project and its community are the sum of individual people doing what they care about. It’s flat-out wrong to think that any healthy open project is a pool of developers who can be assigned priorities that “make sense” globally. There’s no product manager. The community priorities are simply the union of all community-member priorities.

It’s true that contributors can band together, sometimes forming a company, and help push things in a certain direction. But it’s more like these bands of contributors are rowing harder on one side of the boat; they aren’t keeping the other side of the boat from rowing, or forcing people on the other side of the boat to change sides.

Commercial diversity

My experience is that most “heavy lifting” and perhaps the bulk of the work overall in big open projects tends to come  from commercial interests; partly people using the technology who send in patches, partly companies that do support or consulting around the technology, and partly companies that have some strategic need for the technology (for example Intel needs Linux to run on its hardware).

There’s generally a fair bit of research activity, student activity, and hobbyist activity as well, but commercial activity is a big part of what gets done.

However, the commercial activity tends to be from a variety of commercial entities, not from just one. There are several major “Linux companies,” then all the companies that use Linux in some way (from IBM to Google to Wall Street), not to mention all the small consulting shops. This isn’t unique to Linux. I’ve also been heavily involved in the GNOME Project, where the commercial landscape has changed a lot over the years, but it’s always been a multi-company landscape.

The Scala community will be diverse as long as it’s growing

With the above in mind, here’s a personal observation, as a recent member of the Scala community: some people have the wrong idea about how the community is likely to play out.

I’ve seen a number of comments that pretty much assume that anything that happens in the Scala world is going to come from Typesafe, or that Typesafe can set community priorities, etc.

From what I can tell, this is currently untrue; there are a lot more contributors in the ecosystem, both individuals and companies. And in my opinion, it’s likely to remain untrue. If the technology is successful, there will be a never-ending stream of new contributors, including researchers, hobbyists, companies building apps on the technology, and companies offering support and consulting. Empirically, this is what happens in successful open projects.

I’ve seen other comments that assume the research aspect of the Scala community will always drive the project, swamping us in perpetual innovation. From what I can tell, this is also currently untrue, and likely to remain untrue.

Some open communities do get taken over by narrow interests. This can kill a community, or it can happen to a dead community because only one narrow interest cares anymore. But the current Scala ecosystem trend is that it’s growing: more contributors, more different priorities, more stuff people are working on.

How to handle it

Embrace growth, embrace more contributors, embrace diversity.

The downside is that more contributors means more priorities and thus more conflicts.

When priorities conflict, the community will have to work it out. My advice is to get people together in-person and tackle conflicts in good faith, but head-on. Find a solution. In-person meetings are critical. If you have a strong opinion about Scala ecosystem priorities, you must make a point of attending conferences or otherwise building personal relationships with other contributors.

Never negotiate truly hard issues via email.

As the community grows and new contributors appear, there will be growing pains figuring out how to work together. All projects that get big have to sort out these issues. There will be drama; it’s best taken as evidence that people are passionate.

Structural solutions will appear. For example, in the Linux world, the “enterprise Linux” branches are a structural solution allowing the community to roll forward while offering customers a usable, stable target. Red Hat’s Fedora vs. Enterprise Linux split is a structural solution to separate its open project from its customer-driven product. In GNOME, the time-based release was a structural solution that addressed endless fights about when to release. Most large projects end up explicitly spelling out some kind of governance model, and there are many different models out there.

Whatever the details, the role of Typesafe — and every other contributor, commercial or not — will be to discuss and work on their priorities. And the overall community priorities will include, but not be limited to, what any one contributor decides to do. That’s the whole reason to use an open project rather than a closed one — you have the opportunity, should you need it, to contribute your own priorities.

When talking about an open project, it can be valuable (and factually accurate) to think “we” rather than “they.”

(Hopefully-unnecessary note: this is my personal opinion, not speaking for anyone else, and I am not a central figure in the Scala community. If I got it wrong then let me know in the comments.)

 

The Java ecosystem and Scala ABI versioning

On the sbt mailing list there’s a discussion of where to go with “cross versioning.” Here’s how I’ve been thinking about it.

Disclaimer

I’m a relative newcomer to the Scala community. If I push anyone’s buttons it’s not intentional. This is a personal opinion.

Summary

Two theories:

  • The largest problem created by changing ABI contracts is an explosion of combinations rather than the ABI change per se.
  • The ABI of the Scala standard library is only one of the many ABIs that can cause problems by changing. A general solution to ABI issues would help cope with ABI changes to any jar file, even those unrelated to Scala.

Proposal: rather than attacking the problem piecemeal by cross-versioning with respect to a single jar (such as the Scala library), cross-version with respect to a global universe of ABI-consistent jars.

This idea copies from the Linux world, where wide enterprise adoption has been achieved despite active hostility to a fixed ABI from the open source Linux kernel project, and relatively frequent ABI changes in userspace (for example from GTK+ 1.2, to 2.0, to 3.0). I believe there’s a sensible balance between allowing innovation and providing a stable platform for application developers.

Problem definition: finding an ABI-consistent universe

If you’re writing an application or library in Scala, you have to select a Scala ABI version; then also select an ABI version for any dependencies you use, whether they are implemented in Scala or not. For example, Play, Akka, Netty, slf4j, whatever.

Not all combinations of dependencies exist and work. For example, Play 1.2 cannot be used with Akka 1.2 because Play depends on an SBT version which depends on a different Scala version from Akka.

Due to a lack of coordination, identifying an ABI-consistent universe involves trial-and-error, and the desired set of dependencies may not exist.

Projects don’t reliably use something like semantic versioning so it can be hard to even determine which versions of a given jar have the same ABI. Worse, if you get this wrong, the JVM will complain very late in the game (often at runtime — unfortunately, there are no mechanisms on the JVM platform to encode an ABI version in a jar).

Whenever one jar in your stack changes its ABI, you have a problem. To upgrade that jar, anything which depends on it (directly or transitively) also has to be upgraded. This is a coordination problem for the community.

To see the issue on a small scale, look at what happens when a new SBT version comes out. Initially, no plugins are using the new version so you cannot upgrade to it if you’re using plugins. Later, half your plugins might be using it and half not using it: you still can’t upgrade. Eventually all the plugins move, but it takes a while. You must upgrade all your plugins at once.

Whenever a dependency, such as sbt, changes its ABI, then the universe becomes a multiverse: the ecosystem of dependencies splits. Changing the ABI of the Scala library, or any widely-used dependency such as Akka, has the same effect. The real pain arrives when many modules change their ABI, slicing and dicing the ecosystem into numerous incompatible, undocumented, and ever-changing universes.

Developers must choose among these universes, finding a working one through trial and error.

For another description of the problem, see this post from David Pollak.

Often, projects are reluctant to have dependencies on other projects, because the more dependencies you have the worse this problem becomes.

One solution: coordinate an explicit universe

This idea shamelessly takes a page from Linux distributions.

We could declare that there is a Universe 1.0. This universe contains a fixed ABI version of the Scala standard library, of SBT, of Akka, of Play — in principle, though initially not in practice, of everything.

To build your application, rather than being forced to specify the version of each individual dependency, you could specify that you would like Universe 1.0. Then you get the latest release for each dependency as long as its ABI remains Universe-1.0-compatible.
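
In build-file terms, the idea might look something like this. To be clear, this is a purely hypothetical sketch; no such “universe” setting exists in sbt or any other tool today, and the group/artifact names are just examples:

// hypothetical: name the universe once...
universe := "1.0"

// ...and leave out per-dependency versions; each dependency resolves to its
// latest release in the Universe 1.0 repository, which is ABI-consistent
libraryDependencies ++= Seq(
  "com.typesafe.akka" % "akka-actor",
  "play" %% "play"
)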

There’s also a Universe 2.0. In Universe 2.0, the ABI can be changed with respect to Universe 1.0, but again Universe 2.0 is internally consistent; everything in Universe 2.0 works with everything else in Universe 2.0, and the ABI of Universe 2.0 does not ever change.

The idea is simple: convert an undocumented, ever-changing set of implicit dependency sets into a single series of documented, explicit, testable dependency sets. Rather than an ad hoc M versions of Scala times N versions of SBT times O versions of Akka times P versions of whatever else, there’s Universe 1.0, Universe 2.0, Universe 3.0, etc.

This could be straightforwardly mapped to repositories; a repository per universe. Everything in the Universe 1.0 repository has guaranteed ABI consistency. Stick to that repository and you won’t have ABI problems.

One of the wins could be community around these universes. With everyone sharing the same small number of dependency sets, everyone can contribute to solving problems with those sets. Today, every application developer has to figure out and maintain their own dependency set.

How to do it

Linux distributions and large multi-module open source projects such as GNOME provide a blueprint. Here are the current Fedora and GNOME descriptions of their process for example.

For these projects, there’s a schedule with a development phase (not yet ABI frozen), freeze periods, and release dates. During the development phase incompatibilities are worked out and the final ABI version of everything is selected.

At some point in time it’s all working, and there’s a release. Post-release, the ABI of the released universe isn’t allowed to change anymore. ABI changes can only happen in the next version of the universe.

Creating the universe is simply another open source project, one which develops release engineering infrastructure. “Meta-projects” such as Fedora and GNOME involve a fair amount of code to automate and verify their releases as a whole. The code in a Universe project would convert some kind of configuration describing the Universe into a published repository of artifacts.

There are important differences between the way the Linux ecosystem works today and the way the Java ecosystem works. Linux packages are normally released as source code by upstream open source developers, leaving Linux distributions to compile against particular system ABIs and to sign the resulting binaries. Java packages are released as binaries by upstream, and while they could be signed, often they are not. As far as I know, however, there is nothing stopping a “universe repository” project from picking and choosing which jar versions to include, or even signing everything in the universe repository with a common key.

I believe that in practice, there must be a central release engineering effort of some kind (with automated checks to ensure that ABIs don’t change, for example). Another approach would be completely by convention, similar to the current cross-build infrastructure, where individual package maintainers could put a universe version in their builds when they publish. I don’t believe a by-convention-only approach can work.

To make this idea practical, there would have to be a “release artifact” (which would be the entire universe repository) and it would have to be tested as a whole and stamped “released” on a certain flag day. There would have to be provisions for “foreign” jars, where a version of an arbitrary already-published Java jar could be included in the universe.

It would not work to rely on getting everyone on earth to buy into the plan and follow it closely. A small release engineering team would have to create the universe repository independently, without blocking on others. Close coordination with the important packages in the universe would still be very helpful, of course, but a workable plan can’t rely on getting hundreds of individuals to pay attention and take action.

Scala vs. Java

I don’t believe this is a “Scala” problem. It’s really a Java ecosystem problem. The Scala standard library is a jar which changes ABI when the major version is bumped. A lot of other jars depend on the standard library jar. Any widely-used plain-Java jar that changes ABI creates the same issues.

(Technicality: the Scala compiler also changes its code generation which changes ABIs, but since that breaks ABIs at the same time that the standard library does, I don’t think it creates unique issues.)

Thinking of this as a “Scala problem” frames it poorly and leads to incomplete solutions like cross-versioning based only on the Scala version. A good solution would also support ABI changes in something like slf4j or commons-codec or whatever example you’d like to use.

btw, it would certainly be productive to look at what .NET and Ruby and Python and everyone else have done in this area. I won’t try to research and catalog all those in this post (but feel free to discuss in comments).

Rehash

The goal is that rather than specifying the version for every dependency in your build, you would specify “Universe 1.0”; which would mean “the latest version of everything in the ABI-frozen and internally consistent 1.0 universe of dependencies.” When you get ready to update to a newer stack, you’d change that to “Universe 2.0” and you’d get another ABI-frozen, internally-consistent universe of dependencies (but everything would be shinier and newer).

This solution scales to any number of ABI changes in any number of dependencies; no matter how many dependencies or how many ABI changes in those dependencies, application developers only have to specify one version number (the universe version). Given the universe, an application will always get a coherent set of dependencies, and the ABI will never change for that universe version.

This solution is tried and true. It works well for the universe of open source C/C++ programs. Enterprise adoption has been just fine.

After all, the problem here is not new and unique to Java. It wasn’t new in Linux either; when we were trying to work out what to do in the GNOME Project in 1999-2001 or so, in part we looked at Sun’s longstanding internal policies for Solaris. Other platforms such as .NET and Ruby have wrestled with it. There’s a whole lot of prior art. If there’s an issue unique to Java and Scala, it seems to be that we find the problem too big and intimidating to solve, given the weight of Java tradition.

I’m just writing down half-baked ideas in a blog post; making anything like this a reality hinges on people doing a whole lot of work.

Comments

You are welcome to comment on this post, but it may make more sense to add to the sbt list thread (use your judgment).

 

Configuring the Typesafe Stack

Update: see also this post on JSON-like config formats.

My latest work project was a quick side-track to unify the config file handling for Akka 2.0 and Play 2.0. The result is on GitHub and feels pretty well-baked. Patches have now landed in both Akka and Play, thanks to Patrik and Peter.

I can’t make this project seem glamorous. It was a code cleanup that reinvented one of the most-reinvented wheels around. But I thought I’d introduce this iteration of the wheel for those who might encounter it or want to adopt it in their own project.

The situation with Akka 1.2 and Play 1.2 was:

  • Akka 1.2 used a custom syntax that was JSON-like semantically, but prettier to human-edit. It supported features such as including one file in another.
  • Play 1.2 used a Java properties file that was run through Play’s template engine, and supported some other custom stuff such as substituting environment variables (the syntax looked like ${HOME}).

Akka’s format looked like this:

actor {
    timeout = 5
    serialize-messages = off
}

While Play was like this:

application.name=yabe
db=${DATABASE_URL}
date.format=yyyy-MM-dd

With the new 2.0 setup, both Akka and Play support your choice of three formats: JSON, Java properties, or a new one called “Human-Optimized Config Object Notation” (HOCON). You can mix and match; if you have multiple files in different formats, their contents are combined.

HOCON has a flexible syntax that can be JSON (it’s a superset), or look much like Akka’s previous file format, or look much like a Java properties file. As a result, some existing Akka and Play config files will parse with no changes; others will require minor changes.
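
For example, these three files all express the same settings as the Akka snippet above; the first is plain JSON, the second is HOCON in the old Akka style, the third is HOCON written like a properties file:

{ "actor" : { "timeout" : 5, "serialize-messages" : "off" } }

actor {
  timeout = 5
  serialize-messages = off
}

actor.timeout = 5
actor.serialize-messages = off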

A single configuration file for the whole app

Play 1.2 has a single configuration file; everything you might want to set up is done in application.conf. We wanted to keep a single configuration, even as Play 2.0 adds a dependency on Akka.

With the new setup, once Play moves to Akka 2.0, you should be able to set Akka settings in your Play application.conf. If other libraries in your app also use the config lib, you should be able to set their settings from this single config file, as well.

To make this happen, apps and libraries have to follow some simple conventions. A configuration is represented by an object called a Config, and after loading a Config applications should provide it to all of their libraries. Libraries should have a way to accept a Config object to use, as shown in this example.

Applications can avoid having to pass a Config instance around by using the default Config instance; to make this possible, all libraries should use the same default, obtained from ConfigFactory.load(). This default loads application.conf, application.json, and application.properties from the classpath, along with any resources called reference.conf.

For a given app, either everyone gets a Config passed down from the app and uses that, or everyone defaults to the same “standard” Config.
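
In code, the convention looks roughly like this (MyLibrary and its “mylibrary” path are invented for the example):

import com.typesafe.config.{ Config, ConfigFactory }

// a library following the convention: accept a Config, default to load()
class MyLibrary(config: Config = ConfigFactory.load()) {
  val maxRetries = config.getInt("mylibrary.max-retries")
}

// an app can load one Config and hand it to all its libraries...
val appConfig = ConfigFactory.load()
val lib = new MyLibrary(appConfig)

// ...or let everything share the same default
val alsoFine = new MyLibrary()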

Keeping useful features from the existing formats

Akka allowed you to split up the config into multiple files assembled through include statements, and the new format does too.

Play allowed you to grab settings such as ${DATABASE_URL} from the system environment, and the new format does too.

In the spirit of those two features, the new format also allows ${} references within the config, which enables “inheritance” and otherwise avoids cut-and-paste; there are some examples in the README.
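
For example (the key names here are invented):

# shared defaults, "inherited" by substitution instead of cut-and-paste
timeouts-default {
  connect = 5s
  read = 10s
}

service-a = ${timeouts-default}
service-a.read = 30s

# substituted from the system environment, as in the old Play format
db.url = ${DATABASE_URL}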

Migration path

Some existing Akka and Play config files will parse unchanged in the new format. Handling of special characters, escaping, and whitespace does differ, however, and you could encounter those differences. To migrate from an existing Play application.conf, you can use one of two strategies:

  1. Rename the file to application.properties, which will make the escaping rules more like the old format. However, you won’t be able to use environment variable substitution; it’s just a plain vanilla properties file.
  2. Add quoting and escaping. If you get parse errors, add JSON-style double quotes around the strings causing the problem.

Akka is similar; if you have parse errors, you might need quoting and escaping to avoid them. The error messages should be clear: if they are not, let me know.

There’s a section in the HOCON spec (search for “Note on Java properties”, near the end) with a list of ways the new format differs from a Java properties file.

Override config at deploy time

After compiling your app, you may want to modify a configuration at deploy time. This can be done in several ways:

  • With environment variables if you refer to them using ${DATABASE_URL} syntax in your config.
  • System properties override the config, by default. Set -Dfoo.bar=42 on the command line and it will replace foo.bar in the app’s config.
  • Force an alternative config to load using the system properties config.file, config.resource, or config.url. (Only works on apps using the default ConfigFactory.load(), or apps that independently implement support for these properties.)
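
For example, combining the system property mechanisms on one command line (the jar and file names are made up):

java -Dfoo.bar=42 -Dconfig.file=/etc/myapp/production.conf -jar myapp.jar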

A more machine-friendly syntax

To generate the previous Play or Akka formats, you would need custom escaping code. Now you can just generate JSON or a properties file, using any existing library that supports those standard formats.

Implemented in Java

The config library is implemented in Java. This allows Java libraries to “join” the config lib way of doing things. In general the Typesafe stack (including Play and Akka) has both Java and Scala APIs, and in this case it seemed most appropriate to implement in Java and wrap in Scala.

That said, I haven’t implemented a Scala wrapper, since it seems barely necessary; the API is small and not complicated. You can easily create an implicit enhancement of Config with any convenience methods you would like to have. While the API is a Java API, it does some things in a Scala-inspired way: most notably the objects are all immutable.

The implementation is much larger and more complex than it would have been if it were implemented in Scala. But I endured the Java pain for you.

Conventional way of managing defaults

By convention, libraries using the config lib should ship a file called reference.conf in their jar. The config lib loads all resources with that name into ConfigFactory.defaultReference() which is used in turn by ConfigFactory.load(). This approach to loading defaults allows all libraries to contribute defaults, without any kind of runtime registration which would create ordering problems.
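
So a library might bundle something like this (library name invented):

# reference.conf inside mylibrary.jar
mylibrary {
  max-retries = 3
  timeout = 10s
}

An application.conf then only has to mention the settings it wants to change, for example a single line like mylibrary.max-retries = 5.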

(By the way: all of these conventions can be bypassed; there are also methods to just parse an arbitrary file, URL, or classpath resource.)

Well-defined merge semantics

The HOCON spec defines semantics for merging two config objects. Merging happens for duplicate keys in the same config file, or when combining multiple config files.

The API exports the merge operation as a method called withFallback(). You can combine any two config objects like this:

val merged = config.withFallback(otherConfig)

And you can combine multiple config objects with chained invocations of withFallback(), for example:

val merged = configs.reduce(_.withFallback(_))

withFallback() is associative and config objects are immutable so the potentially-parallel reduce() should work fine.

Retrieving settings

This is straightforward:

val foobar = config.getInt("foo.bar")

The getters such as getInt() throw an exception if the setting is missing or has the wrong type. Typically you have a reference.conf in your jar, which should ensure that all settings are present. There’s also a method checkValid() you can use to sanity-check a config against the reference config up front and fail early (this is nicer for users).
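
For example, using the invented “mylibrary” settings from above:

val config = ConfigFactory.load()

// fails fast, with a descriptive error, if anything under mylibrary is missing
// or has the wrong type compared to the merged reference.conf defaults
config.checkValid(ConfigFactory.defaultReference(), "mylibrary")

val retries = config.getInt("mylibrary.max-retries")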

Each Config object conceptually represents a one-level map of paths to non-null values, but also has an underlying JSON-parse-tree-style representation available via the root() method. root() gives you a ConfigObject which corresponds pretty exactly to a JSON object, including null values and nested child object values. Config and ConfigObject are alternative views on the same data.

Any subtree of a config is just as good as the root; handy if you want multiple separately-configurable instances of something.

Configuration as data or code

I know many people are experimenting with configuration as Scala code. For this cleanup, we kept configuration as data and implemented the library in plain Java. My impression is that often a machine-manipulable-as-data layer ends up useful, even though there’s a code layer also. (See your .emacs file after using M-x customize, for example, it inserts a “do not edit this” section; or, SBT’s equivalent of that section is the .sbt file format.) But we did not think about this too hard here, just kept things similar to the way they were in 1.2, while improving the implementation.

Have fun

Not a whole lot else to it. Please let me know if you have any trouble.

Task Dispatch and Nonblocking IO in Scala

TL;DR

Modern application development platforms are addressing the related issues of globally-coordinated task dispatch and nonblocking IO.

Here’s my definition of the problem, an argument for why it matters, and some suggestions for specific standard library features to add to Scala in particular.

The same ideas apply to any application development platform, though. It’s rapidly becoming mandatory for a competitive platform to offer an answer here.

Definitions

Let’s define a blocking task to be anything that ties up a thread or process but does not use CPU cycles. The most common ways to block are on IO channels and locks.

A busy loop is not a blocking operation in this sense; it takes up a thread, but it’s using the CPU, not “wasting” the thread.

By “task” I mean any piece of executable code. A task is blocking if it spends part of its time waiting, or nonblocking if it needs the CPU the whole time.

Dispatch just means scheduling the task on a thread and executing it.

Dispatch for nonblocking tasks, in an ideal world

For nonblocking tasks (which are CPU-bound), the goal is to use 100% of all CPU cores. There are two ways to lose:

  • Fail to use all the cores (not enough threads or processes).
  • Too many threads for the number of cores (inefficient and wastes memory).

The ideal solution is a fixed thread or process pool with a number of threads related to the number of cores. This fixed pool must be global to the app and used for all nonblocking tasks. If you have five libraries in your app and they each create a thread per CPU core, you’re losing, even though each library’s approach makes sense in isolation.

When the fixed number of threads are all in use, tasks should be queued up for later dispatch.

Dispatch for blocking tasks, in an ideal world

Blocking tasks pose some competing concerns; the trouble with blocking tasks is that these concerns are hard to balance.

  • Memory: each blocking task ties up a thread, which adds overhead (the thread) to the task. A super-tiny http parser gets you nowhere if you accompany each one with a thread.
  • Deadlocks: blocking tasks are often waiting for another blocking task. Limiting the number of threads can easily create deadlocks.
  • Tasks outstanding: with IO, it is desirable to send lots of requests at once or have lots of sockets open at once. (With CPU-bound tasks, the opposite is true.)

The ideal solution (if you must block) is an “as huge as memory allows” thread/process pool.

If you run blocking tasks on a bounded pool, you could have deadlocks, and you would not maximize tasks outstanding. Still, as memory pressure arrives, it would be better to start making some tasks wait than it would be to exhaust memory. Apps inevitably become pathological when memory is exhausted (either you have swap and performance goes to hell, or you don’t have swap and an out-of-memory exception breaks the app). But as long as memory is available, it’s better to add threads to the pool than it is to queue tasks.

An automatic solution to this problem might require special help from the JVM and/or the OS. You’d want to have an idea about whether it’s reasonable to create another thread, in light of each thread’s memory usage, the amount of memory free, and whether you can GC to recover memory.

In practice, you have to do some amount of manual tuning and configuration to get a thread pool setup that works in practice for a particular deployment. Maybe setting a large, but fixed, thread pool size that happens to keep your app using about the right amount of memory.

Different tasks, different pools

It’s broken to dispatch nonblocking tasks to an unbounded (or large) pool, and broken to dispatch blocking tasks to a small bounded pool. I can’t think of a nice way to handle both kinds of task with the same pool.

Possible conclusion: a dispatch API should permit the application to treat the two differently.

Physical resource coordination requires global state

We’ve all been taught to avoid global variables, global state, and singletons. They cause a lot of trouble, for sure.

Assuming your app runs on a single machine, you have N CPUs on your computer – period. You can’t create a new “CPUs context” with your own CPUs every time you need more CPUs. You have N megabytes of RAM – period. You can’t create a new “RAM context” with its own RAM.

These resources are shared among all threads and processes. Thus you need global, coordinated task dispatch.

Nonblocking IO creates another global resource

With nonblocking IO APIs, such as poll(), you can have multiple IO operations outstanding, using only one thread or process.

However, to use poll() or equivalent you have a new problem: every IO operation (on the same thread) must be coordinated so that the file descriptors end up in a single call to poll(). The system for coordinating this is called an “event loop” or “main loop.”

In an API such as libev or GMainLoop, applications can say “I need to wake up in 3 seconds” or “I need to know about activity on this socket handle,” and the library aggregates all such requests into a single invocation of poll(). The single poll() puts the thread to sleep until one of the requests is ready.

Nonblocking IO requires a globally-coordinated “managed poll()” — also known as an event loop. Otherwise you’re back to needing threads.

How Java sucks at this

In brief:

  1. No global task dispatcher to coordinate CPU and memory usage.
  2. The APIs are mostly blocking.
  3. The nonblocking APIs in nio have limited utility because there’s no global event loop.

1. No global dispatcher

Java has all sorts of nice executors allowing you to dispatch tasks in many different ways.

But for average apps doing average things, we need two global singleton executors, not a zillion ways to create our own.

An average app needs the executor for nonblocking CPU-bound tasks, so that executor can coordinate CPU-limited tasks. And it needs the executor for blocking tasks, so that executor can coordinate memory-limited tasks.

In the JVM ecosystem, you start using a library for X and a library for Y, and each one starts up some tasks. Because there’s no global executor, each one creates its own. All those per-library executors are probably great by themselves, but running them together sucks. You may never create a thread by hand, but when you run your app there are 100 threads from a dozen different libraries.

2. Blocking APIs

With the standard Java APIs, many things are hard or impossible to do without tying up a thread waiting on IO or waiting on a lock. If you want to open a URL, there’s URL.openStream() right there in the standard library, but if you want to open a URL without blocking you’ll end up with a far more involved external dependency (such as AsyncHttpClient).

Just to kick you while you’re down, many of the external dependencies you might use for nonblocking IO will create at least one dedicated thread, if not a whole thread pool. You’ll need to figure out how to configure it.

3. No event loop

Low-level nonblocking APIs in the spirit of poll() are not enough. Even if every library or code module uses poll() to multiplex IO channels, each library or code module needs its own thread in which to invoke poll().

In Java, a facility as simple as Timer has to spawn its own threads. On platforms with an event loop, such as node.js, or browsers, or many UI toolkits, you tell the platform how long to wait, and it ensures that a single, central poll() has the right timeout to wake up and notify you. Timer needs a thread (or two) because there’s no centralized event loop.

The impact in practice

If you just use Java and Scala APIs naively in the most convenient way, you end up with a whole lot of threads. Then you have to start tracking down thread pools inside of all your libraries, sharing them when possible, and tuning their settings to match the size of your hardware and your actual production load. Or just buy a single piece of hardware more powerful than you’ll ever need and allow the code to be inefficient (not a rare strategy).

I recently wrote a demo app called Web Words, and even though it’s not complex, it shows off this problem well. Separately, the libraries it uses (such as Akka, AsyncHttpClient, Scala parallel collections, RabbitMQ) are well-behaved. Together, there are too many threads, resulting in far more memory usage than should be required, and inefficient parallelization of the CPU-bound work.

This is a whole category of developer tedium that should not exist. It’s an accident of broken, legacy platform design.

The node.js solution

node.js has a simple solution: don’t have any APIs that block. Implement all nonblocking APIs on top of a singleton, standard event loop. Run one process per CPU. Done.

Dispatch of blocking tasks is inherently hard, so node.js makes it impossible to implement a blocking task and avoids the problem.

This would fail without the global singleton event loop. If node.js provided poll() instead of an event loop, poll() would be a blocking API, and any task using it would take over the node.js process.

People often say that “no threads” is the secret to node.js; my take is that first the global singleton event loop enables all APIs to be nonblocking; second the lack of blocking APIs removes the need for threads. The global singleton event loop is the “root cause” which unlocks the big picture.

(The snarky among you may be thinking, “if you like node.js so much, why don’t you marry it?”; node.js is great, but I love a lot of things about Scala too. Type safety goes without saying, and I’ll also take actors over callbacks, and lots more. Comparing one aspect of the two platforms here.)

Event loop as a third-party library: not a solution

You can get event loop libraries for lots of languages. This is in no way equivalent to a standard, default global singleton event loop.

First: the whole point of an event loop is to share a single call to poll() among all parts of the application. If the event loop library doesn’t give you a singleton, then every code module has to create its own loop with its own poll().

Second: if the event loop singleton is not in the standard library, then the platform’s standard library can’t use it. Which means the standard library either has no IO facilities, or it has only broken, blocking IO facilities.

Solving it in Scala

This could be solved on the Java level also, and maybe people are planning to do so — I hope so.

In the meantime, if you’ve read this far, you can probably guess what I’d propose for Scala.

Blocking tasks are inherently somewhat intractable, but they are also a legacy fact of life on the JVM. My suggested philosophy: design APIs assuming nonblocking tasks, but give people tools to manage blocking tasks as best we can.

The critical missing pieces (the first three here) should not be a lot of work in a strictly technical sense; it’s more a matter of getting the right people interested in understanding the problem and powering through the logistics of landing a patch.

1. Two global singleton thread pools

The Scala standard library should manage one thread pool intended for CPU-bound nonblocking tasks, and one intended for blocking tasks.

  • The simplest API would let you submit a function to be executed in one of the pools.
  • Another useful API would create an ExecutorService proxy that used the pools to obtain threads. ExecutorService.shutdown() and ExecutorService.awaitTermination() would have to work correctly: wait for tasks submitted through the proxy to complete, for example. But shutting down the proxy should not interfere with the underlying global thread pool. The proxy would be provided to legacy Java APIs that allow you set a custom ExecutorService.

Built-in Scala features such as actors and parallel collections should make use of these thread pools, of course.
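
A minimal sketch of the idea using plain java.util.concurrent (the names are invented, and a real version would need more care about sizing, memory pressure, and shutdown):

import java.util.concurrent.{ ExecutorService, Executors }

object GlobalPools {
  // bounded pool for CPU-bound, nonblocking tasks: roughly one thread per core
  val nonblocking: ExecutorService =
    Executors.newFixedThreadPool(Runtime.getRuntime.availableProcessors)

  // grows as needed, for tasks that tie up a thread waiting on IO or locks
  val blocking: ExecutorService =
    Executors.newCachedThreadPool()

  def submitNonblocking(task: => Unit): Unit =
    nonblocking.execute(new Runnable { def run() { task } })

  def submitBlocking(task: => Unit): Unit =
    blocking.execute(new Runnable { def run() { task } })
}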

2. A global default event loop instance

The goal is to allow various code modules to add/remove channels to be watched for IO events, and to set the timeout on poll() (or equivalent) to the earliest timeout requested by any code module.

The event loop can be very simple; remember, it’s just a way to build up a poll() invocation that “takes requests” from multiple code modules. More complexity can be assembled on top.
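
The interface could be as small as something like this (a made-up sketch, not any real library’s API):

import java.nio.channels.SelectableChannel

trait EventLoop {
  // run the handler once, roughly delayMillis from now
  def addTimeout(delayMillis: Long)(handler: () => Unit): Unit

  // run the handler whenever the channel has data to read, until removed
  def watchReadable(channel: SelectableChannel)(handler: () => Unit): Unit
  def unwatch(channel: SelectableChannel): Unit
}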

3. A standard Future trait

The standard Future in Scala doesn’t quite have what people need, so there’s a proliferation of third-party solutions, most of them similar in spirit. There’s even a wrapper around all the flavors of Future, called sff4s.

(Note: all of these Future are nonblocking, while Java’s Future requires you to block on it eventually.)

A standard Future is essential because it’s the “interoperability point” for nonblocking APIs. Without a good Future in the standard library, there can’t be good nonblocking APIs in the standard library itself. And without a standard Future, third-party nonblocking APIs don’t play nicely together.
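
The shape most of those third-party flavors share is roughly this (a sketch, not any particular library’s API):

// completion is delivered to callbacks; nothing ever blocks a thread in get()
trait Future[+A] {
  def onComplete(handler: Either[Throwable, A] => Unit): Unit
  def map[B](f: A => B): Future[B]
  def flatMap[B](f: A => Future[B]): Future[B]
}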

4. Bonus Points and Optional Elaborations

await

In my opinion, the C# await operator with async keyword is the Right Thing. Microsoft understands that great async support has become a required language feature. At my last company, C. Scott Ananian built a similar continuation-passing-style feature on JavaScript generators and it worked great.

Scala has the shift/reset primitives for continuation-passing style, but it isn’t clear to me whether they could be used to build something like await, and if they could, there’s nothing in the standard library yet that does so.

await depends on a standard Future because it’s an operator on Future. In Scala, await would probably not be a language feature, it would be a library function that operated on Future. (Assuming the shift/reset language feature is sufficient to implement that.)

Nonblocking streams

Many, many Java APIs let you pass in or obtain an InputStream or an OutputStream. Like all blocking APIs, these are problematic. But there isn’t a great alternative to use when designing an API; you’d have to invent something custom. The alternative should exist, and be standard. The standard library should have a nonblocking version of URL.openStream(), too.

Actors

If you code exclusively with actors and write only nonblocking code inside your actors, there’s never a need to mess with dispatchers, futures, or event loops. In that sense, actors solve all the problems I’m describing here. However: to code exclusively with actors, you need libraries that implement actor-based alternatives to all blocking APIs. And those libraries in turn need to be implemented somehow. It can’t be actors all the way down. There’s also the matter of using existing, non-actor APIs.

Ideally, Akka, and associated actor-based APIs, would build on a standard task dispatch and event loop facility. Akka already builds on regular Java executors; it doesn’t need anything super-fancy.

An issue with actors today is that they’re allowed to block. My idea: there should be a way to mark a blocking actor as such. It would then be dispatched to the unbounded executor intended for blocking tasks. By default, Akka could dispatch to the bounded executor intended for nonblocking tasks. People who want to block in their actors could either mark them blocking so Akka knows, or (brokenly) configure their nonblocking task executor to be unbounded. If you do everything right (no blocking), it would work by default. Remember the theory “design for nonblocking by default, give people tools to damage-control blocking.”

JDBC and similar

Peter Hausel pointed out to me recently that JDBC only comes in blocking form. Now there’s a fun problem… There are quite a few libraries with similar issues, along with people trying to solve them such as Brendan McAdams’s Hammersmith effort for MongoDB.

What’s the big deal?

Developers on platforms with no good solution here are used to the status quo. When I first came to server-side development from client-side UI toolkit development, the lack of an event loop blew my mind. I’m sure I thought something impolitic like How can anybody use anything so freaking broken??? In practice of course it turns out to be a manageable issue (obviously lots of stuff works great running on the JVM), but it could be better.

In my opinion, with node.js, C# adding async and await to the language, and so on, good answers in this area are now mandatory. People are going to know there’s a better way and complain when they get stuck using a platform that works the old way.

A high-quality application development platform has to address global task dispatch coordination, have an event loop, and offer nice nonblocking APIs.

Related posts…

Some old posts on related topics.