Havoc's Blog

this blog contains blog posts

A Sequential, Actor-like API for Server-side JavaScript

Idea: how a JavaScript request handler could look

When I saw node.js, we’d been writing a huge pile of JavaScript code for litl based on gjs (gjs is a custom-built JavaScript runtime, created for native desktop/mobile/consumer-gadget apps rather than server-side code).

At litl, we use a syntax for asynchronicity invented by C. Scott Ananian – who had to talk me into it at some length, but now I’m glad he did. In this syntax, a function that needs to do asynchronous work is a “promise generator”; it passes promises out to a driver loop using the “yield” keyword. The driver loop resumes the async function when the promise is completed. There’s a bug to add this to gjs upstream.

Here’s how a web request handler which relies on a backend service and a cache service might look:

var someBackendService = require('someBackendService');
var someCacheThing = require('someCacheThing');

var id = request.queryParams['id'];

var promiseOfFoo = someCacheThing.lookupFoo(id);

// yield saves continuation and suspends
var foo = yield promiseOfFoo;

if (!foo) {
    promiseOfFoo = someBackendService.fetchFoo(id);

    // again suspend while waiting on IO
    foo = yield promiseOfFoo;

    var promiseOfSaveFoo = someCacheThing.storeFoo(id, foo);

    // wait for cache to complete, in case there's an exception
    yield promiseOfSaveFoo;
}

// this write would be async via event loop also of course
request.response.write(foo);

You see the idea. There are potentially three asynchronous requests here to serve the incoming request. While waiting on them, we don’t use up a thread – we return to the event loop. But, the code is still sequential and readable.

In Java, you can do this with Kilim for example. You can get the same performance effect, with less syntactic help, using Jetty Continuations. I’m not claiming this is a new idea or anything. But more frameworks, including node.js, could work this way. The abstraction might be pretty nice in desktop apps as well.

If you aren’t familiar with it, “yield” is how JavaScript supports continuations. See this page on Mozilla Developer Network. The HTTP handler in the above example would be implicitly enclosed in a generator function that generates promises. The framework would generate a promise, wait in the event loop until the promise was completed, then resume the generator. When the generator resumes, its yield statements either evaluate to the value of the promise, or they throw an exception.
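
Here is how such a driver loop might be sketched. This is only an illustration in modern JavaScript (standard generators plus Promises), not the actual gjs or litl code; `runTask` is an invented name:

```javascript
// A minimal sketch of the driver loop: it pulls promises out of a
// generator and resumes the generator with each promise's value, or
// throws the promise's error back into it so the handler can catch it.
function runTask(generatorFn) {
  const gen = generatorFn();
  return new Promise((resolve, reject) => {
    function step(method, value) {
      let result;
      try {
        result = gen[method](value); // resume the suspended handler
      } catch (err) {
        return reject(err); // an exception escaped the handler
      }
      if (result.done) {
        return resolve(result.value); // handler finished
      }
      // Suspend until the yielded promise settles, then resume.
      Promise.resolve(result.value).then(
        (v) => step('next', v),
        (e) => step('throw', e)
      );
    }
    step('next', undefined);
  });
}
```

The handler above would then run as `runTask(function* () { /* handler body */ })`, returning control to the event loop at each `yield`.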

I believe Scala/Akka actors could use a style like this now that Scala 2.8 has continuation support. I’m not sure whether anyone has tried that yet. With continuations, a pipeline of actors could be replaced by a single actor that suspends itself whenever it’s waiting on a future.

Why

Each request is a suspendable actor that can save a continuation whenever it’s waiting on the event loop.

  • No callback spaghetti, just clear concise syntax and sequential code.
  • Threads aren’t visible in JavaScript; thread bugs are only possible in native code modules.
  • A bunch of actors like this will automatically max out all cores. Whether you’re doing IO or computation, the right thing happens.
  • Rather than “worker threads” you can just have more actors, using the same thread pool and event loop.

You just write code. As long as the code doesn’t block on IO, the framework uses the CPU as efficiently as possible. No need to jump through hoops. Even if the code does block on IO or does a long computation, you can get usable results if you allow the thread pool to add threads.

According to the node.js home page, they aren’t open to adding threads to the framework; node.js also removed promises (aka futures or deferreds). It may still be possible to implement this syntax in node.js as a third-party module, however – I’m not sure.

I agree shared state across threads in JavaScript would be bad, but I’d love to see threads on the framework level.

  • It’s easier for people using the framework if they only have to mess with one process.
  • The JavaScript level would have less API because worker threads would be replaced by a more general idea of an actor.
  • The opportunities for fast message-passing and smart load balancing between request handlers (or more generally, actors) would be increased.
  • In short the framework could do more stuff for you, so application code Just Works.

Partial implementation

I started on some code to try out this idea. I decided not to keep going on it for now, so I’m posting the incomplete work just in case someone’s interested or finds it useful. The license is MIT/BSD-style. You’re welcome to fork on github or just cut-and-paste, whatever you find useful.

When plotting how I’d implement the above request-handling code, I wasn’t familiar with actors (an idea from Erlang, taken up by Kilim, Jetlang, Scala, etc.). I ended up re-inventing the idea of a code module, which I called a Task, which would always run in a single thread, but would not be bound to a particular thread or share state with other threads. I didn’t come up with the mailboxes-and-messages idea found in existing actor frameworks, though, so that isn’t implemented.

The most potentially-useful part of the code I have so far is a C GObject called Task; this object is pretty much an actor. You could also think of it as a collection of related event watchers where the event handlers never run concurrently.

My code is based on GLib, libev, SpiderMonkey, and http-parser.

The code falls short of the http request handler described above. I have a good implementation with test coverage of a Task object in C, with a GLib-style API. This may well be useful already for C developers. There’s a lot left to be done though as described in the README.

node.js couldn’t use the code I have here, unfortunately. I didn’t patch node.js because I thought V8 lacked generators – apparently that was wrong – and because node.js upstream has stated opposition to both threads and promises. And I was already familiar with GLib/SpiderMonkey but not V8. If I really wanted to use an API like this in production, building it as a module on top of node.js would probably be logical. An issue to be overcome would be any unlocked global state in the node.js core. I’m not sure what else would be involved.

What’s not implemented

To run the hypothetical HTTP request handler above, you’d have to add some large missing pieces to my code:

  • HTTP. The lovely http-parser from node.js is in the code tree, but after parsing there has to be code to handle things like chunking. That is, it needs to implement HTTP, not just HTTP parsing.
  • a JavaScript platform. A module system and a way to write native-code modules.
  • some simple HTTP container stuff, such as a convention to map URL paths to a tree of JS files, auto-reloading changed JS files, and executing the JS handlers on the assumption that each contains a generator that yields promises back to the main loop
  • Unlike most actor implementations, I haven’t done anything with message passing among actors. I don’t think it’s even necessary for the web request case, but it would make the framework more useful and general.

See the README for more details.

How it works, for desktop developers who know GLib

If you already understand the GLib main loop or similar, here’s what my code adds:

  • Event callbacks are invoked by a thread pool, rather than the GMainContext thread
  • All callbacks belonging to the same actor are serialized, so we don’t run the same actor on two threads at once
  • Actors automatically disappear when they don’t have any event sources remaining

As long as the actors (which you can think of as groups of main loop sources) don’t share any state, their code doesn’t have to be thread-safe in any way.
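
In JavaScript terms, that serialization invariant might be sketched like this (the real implementation is C running on a thread pool; `SerializedActor` and `dispatch` are invented names for illustration):

```javascript
// Sketch of per-actor callback serialization: each actor chains its
// handlers so that no two ever run concurrently, no matter how many
// event sources feed it. Names here are invented, not the C API.
class SerializedActor {
  constructor() {
    this.tail = Promise.resolve(); // end of the actor's callback chain
    this.log = [];
  }
  dispatch(handler) {
    // Append to the chain; a handler only starts when the previous
    // one for this actor has finished.
    this.tail = this.tail.then(() => handler(this));
    return this.tail;
  }
}
```

Two different actors can run their handlers in parallel; two handlers for the same actor never overlap, so the actor's state needs no locking.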

In GTK+ programming, it’s discouraged to write sequential code by recursively blocking on the main loop whenever an event is pending. There are two key differences between suspending an actor until the next event, and a recursive main loop:

  • the stack doesn’t recurse, so you don’t have unbounded stack growth (or weird side effects such as inability to quit an outer loop until the inner one also quits).
  • because actors don’t have shared state, you don’t care about the fact that random event handlers can run; those only affect other actors. In a typical GTK+ app on the other hand, recursing the main loop causes reentrancy bugs due to shared state between your code and stuff that might run in another event handler.

My implementation uses GMainContext “outside” of the actor pool (so things look like a regular GLib API) but there’s a choice of GMainContext or libev “inside” the actor pool. GMainContext can’t be used directly from actors. Unfortunately, GMainContext doesn’t perform well enough for server-side applications, for example this bug, and actors need a custom API to add event sources in any case because the sources have to be associated with the actor.

Have fun!

This is just a code doodle; I figured I should put it out there since I spent time on it. I hope the code or the ideas are useful to someone.

The Task (aka actor) implementation is pretty solid though, I believe, and I’d encourage trying it out if you have an appropriate application.

Playing a sound file

Is there really no Linux API to just play a file?

Something like:

id = cache_file("foo.ogg"); // cache the sample
play(id); // play to default device

libcanberra seems to only support playing stuff from a sound theme, not a file. PulseAudio seems to require setting up a main loop thing, creating a context, converting the file to a stream object in the right format, uploading the sample, and then finally playing it.

This should be a two-liner (with caching) and a one-liner without!

Which API should I know about?

(Is there at least an example of doing it with PulseAudio that can be quickly copied?)

Take risks in life, for savings choose a balanced fund

Jackson Building and Blue Ridge Savings Bank

Many of us read one or two investment books, and take away that younger people should take more risk. If you buy a “target date retirement” fund in your 20s, you might end up invested 85% or more in stocks.

If you’re in the tech industry, young, fired up about entrepreneurship, immune to risk — you might be even more open to this message than the average person.

While younger people can afford to invest more money in risky assets, I don’t believe that they should.

You should not go as high-risk along the efficient frontier as your time to retirement theoretically allows.

Here’s an example plan that would work well for many people’s retirement money, to show what I mean. After the plan, I’ll explain why I like it.

  • Select a balanced portfolio of 60%, 40%, or 20% risky assets (such as stocks, commodities, high-yield) and the rest in high-quality bonds. Choose 60% or 40% based on how comfortable you are losing money. Choose 20% risky stuff if you’re near retirement.
  • Save until you have at least 25 times your annual income dedicated for retirement, then you can stop. The rule of thumb for withdrawing money without running out is 4% per year. Once you can do this, you don’t need more (for retirement), though more would add extra safety. You could also add safety by continuing to work while your investments grow, not continuing to save, but not withdrawing anything either. If you have spending goals other than retirement – a house, college funds – money saved toward those goals doesn’t count toward your 25x.
  • Save as much as possible, via automatic withdrawal; if it’s less than 10% of your income, you’re hosed, and more is better. Saving less money is too risky. You’ll need money for goals other than retirement. Your working life might not be as long or as highly-compensated as you expect. If you’re living so close to your means that you’re saving only 5%, it’s too easy to slip beyond your means. You probably can’t get your employer’s full 401k match at only 5%, either. How much you save matters far far far far far more than what you invest in. Investments do not have lottery-like results. Stop thinking that way. If you aren’t saving enough, ratchet up by 1% of income or the amount of any raise, every year, until you are.
  • An ideal balanced portfolio would be a single fund. This forces you to look at your investments as one big bucket, eliminating mental accounting. It ensures you rebalance continuously, something it’s tough to do otherwise. More funds might be OK if you have a financial planner who automatically rebalances for you.
  • An ideal balanced portfolio includes lots of asset classes. For example, both US and international stocks, government bonds, inflation-protected bonds, high-yield bonds, commodities, etc. … as long as they’re all in one fund and aren’t creating complexity for you. Don’t overcomplicate it and add more mutual funds to get exotic asset classes. “Just one fund” is a bigger win than owning all this stuff.
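
The 25x target and the 4% withdrawal rule are two sides of the same arithmetic, which a couple of lines make obvious (the salary figure here is purely illustrative):

```javascript
// Rule-of-thumb arithmetic from the plan above, with a made-up salary.
// Note that 25 × income × 4% = income: hitting the 25x target means the
// 4% rule pays out roughly your annual income indefinitely.
const annualIncome = 80000;                   // hypothetical example
const targetNestEgg = 25 * annualIncome;      // the "save 25x" target
const safeWithdrawal = 0.04 * targetNestEgg;  // the "withdraw 4% per year" rule

console.log(targetNestEgg);   // 2000000
console.log(safeWithdrawal);  // 80000, equal to the annual income
```

(The usual refinement is 25x annual *spending* rather than income; using income, as the post does, just adds a margin of safety.)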

The balanced fund approach works in real life, with real emotions, and real unexpected circumstances. Here’s why:

  • Focus on your life. Maybe it’s a startup, maybe it’s art, maybe it’s children. Risky and/or complex investments are distracting. You have better things to do.
  • The goal is to have enough. Enough is not the same as the maximum possible. Investment books keep talking about “beating the market” – that has nothing to do with anything. You care about absolute returns, not relative. Losing 35% instead of 40% … congrats, you beat the market.
  • The extra returns aren’t as high as you think. Vanguard has a nice page showing how asset allocations worked out historically. Even using those numbers, the extra risk just isn’t worth it (8.7% vs. 9.9%). But there’s a strong argument that those numbers are misleadingly high. And these numbers neglect your counterproductive behaviors, which will include failure to rebalance, buying low and selling high, and losing motivation to save, among others. Those counterproductive behaviors negate the extra returns — and then some.
  • You don’t need to take the extra risk. If you’re saving enough to handle adverse scenarios (unexpected life events, ugly macroeconomic events, questionable government policies) then you’re saving enough to be fine with a good, but not “maximum possible,” rate of return. So optimize to be sure you get the good rate of return, consistently.  The question is how much risk you need to take, not how much you could probably get away with. If you have a reasonable plan overall, including an adequate savings rate, there is no way you need to max out your risk. If your plan relies on maxing out risk to succeed, it’s a risky plan, right? Choosing a risky plan is dumb. Reducing uncertainty is more important than maximum theoretical returns.
  • Extra risk will cause you pointless angst and pain. It is not human nature to think “I’ve been miserable opening my 401k statement for years, but it will all work out over a 30-year horizon.” You are not a robot. Be sure your savings balance will be going in the right direction as often as possible. (Granted: it’s tough to be making slow and steady progress in those years when the crazy speculators are doing well. But a 60/40 portfolio will still be doing very well in those years, just not insanely well, and you’ll be auto-rebalancing to harvest some of the high numbers while the getting is good.)
  • Nobody has a 30-year time horizon (and knows for sure that they do). Risky portfolios only win on average, over very very long times such as 20-30 years. Emotionally, nobody’s time horizon is that long. Practically, nobody knows the future: whether they’ll have an emergency, become disabled, get a divorce, have children, or what will happen to the world economic and political situation. Or on the positive side, maybe you want to retire early or change careers. A 30-year time horizon is a fairy tale.
  • Motivation. With an overly risky portfolio, you can be saving 10% per year and still spend a decade with 0% growth in your account balance. What does that do to your motivation to save more money? Savings rate is far more important than investment returns, so stay motivated to save.
  • Narrow down the choices. If you’re trying to decide between 35% and 40% stocks, you’re a victim of false precision. It doesn’t matter. Narrow down the choices to 20%, 40%, or 60% stocks. Less than 20% starts to be risky in itself (inflation, or a bad decade for bonds). Over 60% is too risky to be required for any reasonable plan. Tuning more precisely than 20% increments is false precision. So pick from the three options. More options create paralysis.
  • If you’re a risk-seeker, stable savings allow you to take risks elsewhere in life. Join a startup. Move across the country. Take a sabbatical. Even go to Vegas. Whatever. If you don’t have any savings, lots of great experiences may be out of reach. Real life risks are more emotionally rewarding, and often more profitable, than treating your investments like a casino.

Here are a few related thoughts and implementation tips:

  • Index funds are cheaper, but low expenses can’t compensate for complexity and emotions. If you have to pay extra to get on autopilot, I say do it. For example if your retirement plan has a well-regarded, actively-managed, well-diversified balanced fund and a collection of single-asset-class index funds, I’d take the actively-managed diversified fund rather than messing with a collection of index funds. If your plan has a well-diversified, balanced index fund, that’s awesome.  Here’s a previous post on index funds.
  • Investment and savings decisions should use rules of thumb, not precise calculations. It’s easy to read a book about financial theory and then use some software to run complicated scenarios where you plug in your savings rate, your salary, future inflation, future equity returns, future bond returns, future tax rates, future health care costs, future social security payouts…  waste of time. Small tweaks to these numbers change the outcome dramatically and nobody has any idea what the real future will hold. (If you must use a complex calculator, use one that simulates a “worst case,” and plan for that.) Avoid precision bias. Because of compounding, small changes in rate of return or timing of cash flows lead to huge changes in end result. This makes financial calculations dangerous tools in support of wishful thinking.
  • Target-date retirement funds can be used as balanced funds. Your retirement plan may well lack a balanced fund, but you can often use a target date fund instead; choose one that will have about the right stocks/bonds allocation over the next decade, rather than choosing according to the date (a date-based choice will be too risky). Revisit your choice in ten years after the target date fund has shifted its allocation.
  • Please don’t invest your retirement savings in company stock. Especially if you have restricted stock or options outside the retirement plan.
  • Risk is risk. Don’t start abstracting it into something theoretical like “volatility.” Risk is the risk you don’t reach your goals. The idea is to minimize the risk of missing your goals. Too conservative a portfolio makes it harder to reach your goals; too aggressive a portfolio makes it harder to reach your goals. Go with Goldilocks here.

Balanced funds used to be the norm, before the financial industry took off and grew enormous. They’re simple, practical, and time-tested. They’re compatible with how your brain works. They ought to be the default for long-term savings (though other approaches have a time and a place).

Lincoln in Illinois (2009 Proof Lincoln Cent)

2009 Proof Lincoln Cent

Getting a Real Blog

I wrote my own blog software in 2003, and today is the day I’m giving it up. My custom blog code just generated static files, so it never had a security bug or a performance problem. But it was kind of annoying to use.

If all goes well and it doesn’t break, I’ve set up WordPress on an Amazon EC2 Micro instance. Part of the experiment is to learn to use EC2; I may post about how that goes once I see how it goes. (If it turns out to be a giant pain, it’s back to my 2003 hack I suppose.) I have a single command to fire up my EC2 server from a pristine AMI, install all the needed software, configure everything, and enable this website.

On WordPress I have fancy blog features I’d heard about, such as “photos” and “comments” – even “search.”  On top of that, a web-based editor so I can work on posts from any computer and save drafts in progress. Welcome to the future. Maybe this will mean more posts.

Freddie the dachshund

Freddie, our recently departed dachshund

GTK+ becomes a canvas

Wishing I could be at the GTK+ hackfest this week.

Hoping the heroes there will discuss converting GTK+ to a nice canvas
library. I think it’s a couple of person-months of work… but there may
not be time to do it for 3.0.

  • Implement a “hovered” or “contains-pointer” property to replace enter/leave notify events in most cases.
  • Allow no-window widgets to get events (this is a big API headache that could use much discussion).
  • Remove all input-only windows from widgets that come with GTK+.
  • Implement a way to scroll and clip without a GdkWindow. (Not yet investigated. Not too hard… client-side GdkWindow already does it…)
  • Remove all output windows from widgets that come with GTK+.
  • Implement paint clock.
  • Profit!

Advantages include:

  • Deletes a bunch of code from GTK+ widget implementations (realizing/unrealizing GdkWindow, translating window coords to widget coords, tracking enter/leave state, etc.).
  • GdkWindow is freaking huge, while GtkWidget is not.
  • A recursive rather than “global knowledge” event and redraw system mixes more nicely with other toolkits (e.g. a clutter-gtk setup). Destroy gtkmain.c.
  • Can do a GDK backend on COGL, tuned for a clutter-gtk combo, that does not have to mess with child windows – only implement the toplevel, similar to Clutter backends. The recent Wayland backend for Clutter was much easier than a GDK backend would have been; that disparity is bad, because GDK has important code (i18n, multihead, multidevice, drag and drop, etc.) that should not be lost just to get proper hardware rendering. Part of the appeal of Clutter is the ease of doing a custom backend. (Food for thought: Clutter got it right by having the platform portability layer be “implement the toplevel window” instead of “implement a window system.”)
  • Can add GtkRectangle, GtkEllipse, and other “canvasy” primitives, and GTK+ would be a good “canvas.” No more need to invent a new widget system based on “canvas items” just to get more flexible drawing.
  • Straightforward future directions would allow widgets to have a paint/pick transform similar to Clutter’s (but 2D); support widgets drawing outside their allocation; support 0x0 allocations; and generally remove limitations inspired by GdkWindow.
  • Another future direction is to always have GtkWidget delegate to its parent, rather than using global state.

I’m sure in-person discussions can improve on this.

An important point maybe is that lots of Clutter users are doing
their own backend or can use an “embedded” custom setup
rather than one that could be the default desktop GTK+. That’s why
it’s useful to strip GdkWindow out of all the default non-toplevel
widgets, even though certain widgets in a full desktop will still have
windows and window widgets will still have to be supported during 3.x
on a default desktop build.

An overall goal in my mind is to allow apps to switch to proper
hardware rendering, without rewriting the app or regressing on all the
“solutions to hard problems” GTK+ code that ought to be
kept.

(This post was originally found at http://log.ometer.com/2010-10.html#18)

Flexible UI Toolkit Implementation

I’m not the sort of programmer that runs around quoting design
patterns and drawing UML diagrams.

However, maybe it’s useful to discuss this rule of thumb, in the
context of toolkits such as GTK+ and Clutter: It’s best to avoid
code that “knows about” the scene graph or toolkit as a whole.

Here are some approaches I like, along with examples that came to
mind. I hope nobody takes anything personally.

Containers forward recursively to children.

At the root of the tree, the root container (such as GtkWindow or
ClutterStage) gets context from the platform. It then passes
information or operations down to children of the root, the children
pass on beyond that, etc.

Example: the new GtkWidget draw() method passes down a cairo
context which is transformed as it goes down recursively.

Example: HippoCanvas
does both drawing and events in this way. Events are translated as
they go down through the tree.

Bad: gtk_propagate_event()
just walks the widget tree directly from outside the widget tree. This
logic makes it annoying to nicely integrate a widget in a new context,
such as in a Clutter scene graph.
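
The recursive-forwarding pattern can be sketched in a few lines of JavaScript (invented names throughout; none of this is actual GTK+ or HippoCanvas API):

```javascript
// Sketch of recursive event propagation: each container forwards events
// to its own children, translating coordinates on the way down. No code
// outside the tree walks the tree. All names here are invented.
class Group {
  constructor(x, y) { this.x = x; this.y = y; this.children = []; }
  add(child) { this.children.push(child); return child; }
  handleEvent(event) {
    for (const child of this.children) {
      // Translate into the child's coordinate space as we descend.
      child.handleEvent({ x: event.x - child.x, y: event.y - child.y });
    }
  }
}
class Item extends Group {
  handleEvent(event) { this.lastEvent = event; } // a leaf just records it
}
```

A root `Group` at the origin, holding a child group at (10, 10) that holds an `Item` at (5, 5), delivers an event at (18, 17) to the item as (3, 2); each container only ever touches its immediate children.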

Children bubble up information to containers.

Children can report news up to their parent container, which can
in turn hand it up the chain.

Example: clutter_actor_queue_redraw()
works by having each child notify its parent that it needs to
repaint.

Bad: gtk_widget_queue_draw_area()
walks up the widget tree to find a parent with a GdkWindow it can poke
at. Instead, the container that has the window should contain the
logic to invalidate the window, and nobody else in the tree should
know anything about GdkWindow. (Note how the existence of GdkWindow has
been leaked out so all widgets have to know about it. In the
alternative design, only widgets that have windows need to
know about them.)
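
A sketch of that alternative design, again with invented names: the redraw request bubbles up parent by parent, and only the container that actually owns a window acts on it.

```javascript
// Sketch of bubbling: children hand requests to their parent; only the
// windowed container knows how to invalidate anything. Names invented.
class Actor {
  constructor() { this.parent = null; }
  queueRedraw(origin = this) {
    if (this.parent) this.parent.queueRedraw(origin); // bubble up one level
  }
}
class WindowedActor extends Actor {
  constructor() { super(); this.invalidated = []; }
  queueRedraw(origin = this) {
    this.invalidated.push(origin); // the window owner handles it here
  }
}
```

Nothing below the windowed container knows windows exist; everything in between just passes the request one level up.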

Interfaces provided by containers.

A special case of bubbling up to containers is to ask containers for
an interface to use – delegate to the parent, in effect.

Rather than using global singletons, child actors or widgets can ask
their container for what they need. Containers then have the
opportunity to override anything that the child sees by reimplementing
the interface, possibly delegating most methods back up to the
implementation they received from their own parent.

Example: HippoCanvasContext
is a grab-bag interface provided by containers to children, providing
a very reduced and simplified set of things a widget might need, such
as a Cairo surface or Pango layout specialized for the window system –
or print-to-PDF operation – currently in progress. The
context is used to obtain a PangoLayout or a Cairo context, which are
also abstractions. The parent container can pick the right settings
and backend for Pango and Cairo.

Bad: gtk_grab_add()
goes straight to the toplevel window and then to a global list of
window groups. Instead, widgets could ask their container for a grab,
each container could do grabbing within that container, and it would
recursively move up to the toplevel; the toplevel would then deal with
the window groups. By delegating this, it could even be possible to
make grabs work on a tree that contains non-GtkWidget in it.
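
The context-delegation idea can be sketched like this (invented names, loosely modeled on the HippoCanvasContext description above, not its real API):

```javascript
// Sketch: a child asks its parent for a rendering context instead of a
// global singleton, so any container along the chain can override what
// its subtree sees. Everything here is an invented illustration.
class Widget {
  constructor(parent) { this.parent = parent; }
  getContext() { return this.parent.getContext(); } // delegate upward
}
class Toplevel extends Widget {
  constructor() { super(null); }
  getContext() { return { backend: 'screen' }; }
}
class PdfGroup extends Widget {
  getContext() { return { backend: 'pdf' }; } // override for this subtree
}
```

A widget parented under the `PdfGroup` sees a PDF backend; its sibling directly under the `Toplevel` sees the screen, and no global state is consulted either way.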

Use interfaces rather than concrete objects.

I don’t see a lot of value to making a specific control, such
as a GtkTextView, into an interface. However, the main “touch points”
that form the core of a toolkit really ought to be. This includes
GtkWidget, ClutterActor, and all the global state getters and setters
(which in turn should come from the parent container, rather than
using singletons).

Interfaces let you make bridges between “worlds.” If you’re putting a
widget or actor into a nonstandard context, whether it’s a PDF
printout, or a container that rotates or clips or clones, or drawing
an actor inside a widget or a widget inside an actor, the cleanest
solution will involve reimplementing interfaces to match the context.

Use model-view rather than omniscient objects.

This one seems obvious, but isn’t always done.

Bad: gtk_main_do_event()
hardcodes knowledge of other bits of GTK+, for example invoking _gtk_settings_handle_event() and
_gtk_clipboard_handle_event(). These should be connecting to a signal
so they don’t leak out of their own modules. gtk_propagate_event() is
another nasty piece of non-modularity. Future direction: add event
signals to GdkWindow and GdkDisplay and drop this central dispatch mess.

Bad: the Clutter master clock should be a model, not a controller. It knows about
stages and timelines specifically and just tells them what to do.
Instead of saying “anything that does repainting, repaint now” it says
“repaint the stage.” If this code were model-view, it could simply be
dropped into GTK+ and used there as well, for example. Flexibility.

Summary

Replace code that knows about “the world” with:

  • Containers that know about only their immediate children.
  • Children that know only about their parent and purpose-built interfaces
    provided by the parent.
  • Models with multiple views.

Widgets and actors should know about their parents and their children,
never their grandparents, never their grandchildren, and certainly not
strangers they met in a singleton bar.

Getting this right in Clutter and GTK+ would make
both toolkits more robust in situations where the two toolkits are
mixed – something I’d like to see much more of – and in situations
where the toolkits are used in odd ways, such as in a
window/compositing manager, or just in any creative way that isn’t
quite what was intended. In short, getting this right makes the code
more modular and reusable and clear, not to mention less fragile.

Disentangling widgets and actors from the “global view of the world”
is great for unit testing, as well.

(This post was originally found at http://log.ometer.com/2010-09.html#19)

XCB enhancements

Disclaimer: I’ve never followed XCB development, this may have
all been discussed already, but I hope this post is at least helpful
as a data point illustrating what I understood and didn’t understand.

Julien Danjou was
disappointed recently
because after porting dbus-launch to XCB, we had a bit of a “what’s
the point?” reaction to the patch. Why move “#ifdef XCB” up into apps,
when libX11 is already a compat layer that uses XCB if available?

However, XCB remains a useful idea. For the litl OS shell, we’ve
discovered that we really need certain X requests to be async. I ended
up digging into the XCB code to find an obscure bug. Since I’d just
read a bunch of XCB code, I figured I’d try to expand on the API
docs. Hopefully someone can fix up my docs efforts and land them
upstream. (If you happen to notice bugs in my docs patch, please
comment on the bug.)

Better docs ought to help people know when and how to use XCB. But
the library could also benefit from two larger changes inspired by the
higher-level desktop toolkits.

1. Better support a main loop, in addition to threads.

Heck, even server-side developers are
piling on this bandwagon lately. XCB goes to a lot of trouble to
support multithreaded operation. Unfortunately, multiple threads
messing with UI code is highly problematic for most actual Linux
toolkits and apps. (More conceptually: because X replies are always
returned in request order, and the X server is single-threaded,
everything is serialized anyhow…)

I filed a bug with my theory on what XCB needs for better main loop
support. Please comment on the bug with corrections and improvements.

I’d like to be able to queue a reply handler as a main loop callback,
something that’s difficult with XCB as it stands, unless I’m
missing something.

2. Export the protocol introspection data (and use it to “dynamicize”
protocol bindings)

XCB is in the pre-gobject-introspection school of binding
implementation, i.e. libraries full of autogenerated stubs. (Not
saying gobject-introspection invented dynamic stubs; COM and CORBA
have it, as does Qt, doubtless many other things before that. But
gobject-introspection is the GNOME version.)

I filed
another bug
proposing that XCB could export a binary “typelib” for
the protocol. This could remove all request-specific code
from non-C language bindings. It could make the C language bindings
smaller, too. And it makes it easy to write debugging tools and code.

If the C bindings were dynamic, they could export two other flavors of
each request without adding bloat: a “fire and forget” flavor that
ignores both errors and replies; and a blocking, synchronous flavor
(matching the convenience of libX11). These two flavors would be nice
to have; without them, XCB code can be more verbose than it ought to
be. (It’s a poor advertisement for XCB when the patch to go from
libX11 to XCB adds extra lines of code, which happens most of the time
when you make a request with a reply – at least two lines in xcb,
vs. one in Xlib.)
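A toy version of the “typelib” idea, as a C data table: the struct layout and lookup function are invented for illustration (though the opcodes are real core-protocol opcodes). An actual typelib would also describe field types and reply layouts, generated from the same protocol XML that produces today’s stubs.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical protocol-description table that dynamic bindings and
 * debug tools could consult at runtime instead of generated stubs. */

typedef struct {
    unsigned char opcode;
    const char *name;
    int has_reply;      /* does the server send a reply? */
} request_descriptor_t;

/* A few real X11 core-protocol opcodes, for flavor. */
static const request_descriptor_t core_requests[] = {
    { 3,  "GetWindowAttributes", 1 },
    { 8,  "MapWindow",           0 },
    { 20, "GetProperty",         1 },
    { 98, "QueryExtension",      1 },
};

static const request_descriptor_t *lookup_request(unsigned char opcode)
{
    for (size_t i = 0;
         i < sizeof(core_requests) / sizeof(core_requests[0]); i++)
        if (core_requests[i].opcode == opcode)
            return &core_requests[i];
    return NULL;
}
```

A binding for a dynamic language could walk such a table at startup and synthesize one method per request; a debugger could use the same table to print request names instead of raw opcodes.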

A bonus suggestion!

An easy X protocol debug hook would be fantastic. xtrace doesn’t provide easy
customization for a particular debugging problem or particular application.
I’m thinking something like:

void xcb_set_trace(xcb_connection_t *c,
                   xcb_trace_callback_t out_callback,
                   void *out_closure,
                   xcb_trace_callback_t in_callback,
                   void *in_closure);

Where I’m not sure yet if xcb_trace_callback_t should just take raw
bytes to be parsed by the app, or should be passed something
higher-level including sequence numbers and opcodes. A problem with
higher-level would be the need to parse the requests Xlib pushes
through xcb_writev() inside XCB. A potential middle road would provide
raw bytes to the trace callbacks, but export a simple X protocol
parser API that apps could use for tracing.

A trace hook wouldn’t be as useful without exporting the protocol
description “typelib” so apps and tools can make sense out of what
they’re seeing.
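Expanding the xcb_set_trace prototype above into a compilable sketch, assuming the “raw bytes” flavor of the callback; the connection struct and all names here are hypothetical, since real XCB has no such hook today:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical trace callback: receives the raw bytes crossing the
 * socket, to be parsed by the app (or by a helper protocol parser). */
typedef void (*trace_callback_t)(const unsigned char *bytes,
                                 size_t len, void *closure);

/* Stand-in for the trace-related fields of xcb_connection_t. */
typedef struct {
    trace_callback_t out_callback;  /* bytes written to the server */
    void *out_closure;
    trace_callback_t in_callback;   /* bytes read from the server */
    void *in_closure;
} traced_connection_t;

static void set_trace(traced_connection_t *c,
                      trace_callback_t out_cb, void *out_closure,
                      trace_callback_t in_cb, void *in_closure)
{
    c->out_callback = out_cb;
    c->out_closure = out_closure;
    c->in_callback = in_cb;
    c->in_closure = in_closure;
}

/* The library would call this just before writev() on the socket. */
static void notify_write(traced_connection_t *c,
                         const unsigned char *bytes, size_t len)
{
    if (c->out_callback)
        c->out_callback(bytes, len, c->out_closure);
}
```

An app could install a callback that logs only the requests it cares about, which is exactly the per-application customization xtrace doesn’t offer.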

Adding XCB utility

At the moment, for a single-threaded application XCB only helps you
vs. Xlib in a narrow set of situations where you are willing to block
on a reply eventually, but not right now. This is handy to “batch”
requests, as in the tutorial examples (search for useXCBProperly).
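The batching win can be sketched with a mock connection; send_request and get_reply below simulate cookies and replies and are not XCB calls. Issuing all requests before fetching any reply costs one round trip instead of one per request:

```c
#include <assert.h>

/* Mock illustrating the pattern the tutorial calls useXCBProperly:
 * send every request before fetching any reply, so the requests
 * share one round trip. This simulates a connection; nothing here
 * is a real XCB function. */

static int round_trips = 0;   /* times we waited on the (mock) server */
static int last_flushed = 0;  /* highest sequence number sent */
static int last_waited = 0;   /* highest sequence we have a reply for */

typedef struct { int sequence; } cookie_t;

static cookie_t send_request(int sequence)
{
    cookie_t c = { sequence };
    last_flushed = sequence;   /* request goes out immediately */
    return c;
}

static int get_reply(cookie_t c)
{
    if (c.sequence > last_waited) {
        round_trips++;                 /* block until server catches up */
        last_waited = last_flushed;    /* all outstanding replies arrive */
    }
    return c.sequence * 10;            /* fake reply payload */
}
```

With three requests sent up front, the first get_reply blocks once and the other two replies are already in hand; interleaving send/reply pays the round trip three times.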

But XCB could offer much more, such as main loop async callbacks,
dynamic language bindings, and debug/trace support. These are simple
features to add to XCB that would have been hard to add to the old
libX11 codebase.

Just a bit more “what’s in it for me?” would be helpful when it comes
to convincing apps and toolkits to port over to XCB.

One more thing!

While I’m posting: it seems to be hard (impossible?) to use libxcb-glx
because libGL doesn’t export a way to get or set the context tag.
Not sure where to file this bug or if I’m just missing something.

(This post was originally found at http://log.ometer.com/2010-08.html#22)

litl and computer frustration

Nat Friedman has interesting results up for his informal survey on computer frustration, noting that “About a third of these issues could be addressed by webbook efforts like ChromeOS and litl, although the webbook model will probably raise new issues as well.”

Seems like a good time to discuss how we designed the litl webbook to reduce computer frustration.

Design with a computer-frustrated audience in mind

We designed litl OS with Cooper, Pentagram, and our own design team. Cooper contributed a
set of personas, adding to our own thinking about who would
love the litl. We focus on busy families at home. While we have big
dreams for how litl OS can evolve, for now we didn’t think about work
computing, ignoring the needs of business travelers and IT guys.

Windows will ask hundreds of questions busy families don’t care about
understanding. It’s not that they can’t understand, but they do not
care. (The most famous example might be Vista’s overzealous need
to “Allow or Deny?”) We can say definitively that our audience
doesn’t care about this stuff, and so we don’t ask it. Period.

As geeks who have spent our entire adult lives using and
administering PCs, we tend to think the entire world is like us… the
more the better… we want total control.  Our research (and our
own families) has shown that there’s a huge portion of the world,
such as busy moms, who only care about results. They don’t care about
tech specs, and they don’t care about tweaking what Tufte calls
“computer administrative debris.”

As software developers, we don’t realize how much worthless debris we put in front of people.  Stuff they don’t care about or don’t need to know. At litl, we’re trying to take a different approach.

Make the OS automatic

If your favorite web app or web site fixes a bug, it isn’t nagging
you about whether you want the fix. You simply get the fix. We
approached litl OS in the same way. litl OS is smart about avoiding
updates while you’re using the webbook, and quietly updates itself
while you sleep.

Hide implementation detail – manage it for you

File management is one of the more complex features of traditional
operating systems, and litl OS avoids it entirely. Web apps just store
their stuff; they don’t ask you where to store it. We carry that
spirit through the entire OS.

Sandboxed Sites and Channels

Applications on the litl don’t have free run of the operating
system. We have two kinds of “app”; web apps running in our browser,
and channels. (Channels are a special kind of app with three states,
one for lean-forward/laptop, one for lean-back/easel, and one
widget-like state in card view.) Channels are run by a custom Flash
player in their own process.

This gives us a number of tools to control malware (since we don’t
have to distinguish it from “normal” unsandboxed apps), and it throws
out all kinds of complexity associated with installing and updating
traditional application software.

Sandboxing eliminates a whole class of “system integration” issues
where applications interfere with one another or with the OS. On the
litl, web pages and channels can’t (and need not) install their own
annoying updater software. They can’t add tray icons to your
screen. They can’t break other apps in unforeseen ways.

Hardware/Software Integration

Building for a single hardware platform throws out whole
domains of complexity. There’s no mess of interface on the litl
related to hardware drivers; we know about our hardware already. We
know which buttons are on the keyboard (and incidentally, a bunch of
useless ones are not). We know the screen resolution.

This means no setup or configuration to start using the litl. It means
our help and instructions can be precise – instead of “look for the
key that says…” we can say “press the big blue key in the lower
left.” It means we can ship the litl preconfigured with information
entered during the ordering process. It means any number of OS
features “just work” instead of requiring tuning to the particular
hardware the customer has.

Eliminate the hard drive

The hard drive is the number one point of failure in PCs, and when it
breaks, it’s a disaster – you lose all your stuff. Best practice is to
use the hard drive only as a cache, keeping a backup copy of
everything on some web service. litl does this by default, going
further to automatically manage the cache so it only has what you’re
actively using. No hard drive failures; no data loss; no setting up or
managing backups.

A new issue: web service integration

The webbook model isn’t all positive complexity-wise (yet) – as Nat
says, it may raise new issues. Here’s one: a litl OS design principle
is to use any and all existing web services and apps, rather than
reinventing the wheel. We decided to use web mail rather than create
our own litl mail app, we decided to use Flickr and Shutterfly rather
than invent our own photo storage and sharing site, and so forth. We
see our goal as improving the web, and helping people use the web,
rather than replacing the web with a “walled garden” of litl-branded
services.

There’s no question that a “walled garden” of services we
controlled completely would be simpler and easier to use. But we don’t
think our customers would be happy as hothouse flowers. We want to be
the best OS for using the whole Internet, rather than a limited
appliance.

A Challenge: Internet and WiFi setup

Internet and WiFi setup are tough to address, because problems on
the access point side are outside litl’s control. Still, on the litl
itself, wifi configuration couldn’t be simpler – we start with a big
list of access points, instead of a tiny little tray icon. People need
to recognize their network name and know their password. If they have
those two things, we automate everything else.

Personal anecdote: I recently helped my sister fix her wifi; there
were two problems, and both were caused by Windows complexity.

First, Dell had installed some garbage “wifi manager” software that
interfered with Apple’s AirPort software. On the litl, we don’t ship
OEM crapware.

Second, when you add a network, Windows opens this absurd, verbose dialog that makes no sense; she’d
clicked the wrong answer. litl OS does not ask this sort of question,
by design. If we don’t think our customers care about a question, we
don’t ask it. (This has nothing to do with the webbook model per se;
but it does have to do with our well-defined target audience. We know
our customers don’t care about this question.)

Only the beginning

We’ve come a long way with litl OS, but
there’s a lot more we could do. Nat’s survey mentions printing; we
could automatically discover printers with no driver installation. He
mentions performance; we could manage CPU usage of sandboxed sites and
channels to keep the “too much stuff” problem (too many open sites)
from degrading performance. We could much more extensively lock down
the OS using SELinux-style technology, to further restrain malware.
There are so many possibilities because the OS is truly managed on
behalf of our customers, not managed by our customers when they have
better things to do.

To be sure we get this right, we’re planning to rotate the litl
development team through customer support, giving every software
developer firsthand knowledge of our customers.

We would love to hear your ideas on how to further reduce computer
frustration – let us know!

(This post was originally found at http://log.ometer.com/2009-11.html#16)

Is the litl webbook a netbook?

We’ve had lots of great comments on the litl webbook (see here and
here for samples). There’s been some discussion about whether a
“webbook” is really different from a “netbook.” Here’s why the litl
webbook is not a netbook.

  • litl OS is completely different from Windows 7. You can’t
    even run Windows or Linux very well on the litl because our custom
    hardware is missing legacy keys and ports. litl OS is entirely
    managed, all state stored online, with web apps and channels
    only.
  • No hard drive. And you don’t need one. There’s no way in
    litl OS to even see how much disk space you have, because in litl OS
    disk space gets used like a web browser cache. You never manually
    create or delete anything. The hard drive size affects how often we’ll
    get a cache hit. Otherwise, who cares. Web apps don’t store files on
    the hard drive.
  • Comes with online storage and services. litl OS syncs its
    state to the server, supports sharing any card (web page or channel)
    to anyone in your friends network, and has a friends network integrated
    into the OS. Backup and sharing are built-in, trivial, and automatic.
    There’s no subscription fee.
  • litl webbook is a fabulous photo frame. It has a
    nicer screen and nicer software than this $650 frame, for example.
    Family and friends can share new
    slideshows directly to your litl webbook, which will show all your own
    photos and those shared with you in one big slideshow. (Or add channels
    with just one album, if you like.) Deep integration with photo
    services you already use means you don’t have to do anything special
    to see photos on the litl. Most photo frames end up in the closet
    because loading new photos is a manual process. With the litl webbook,
    your family members post new photos and they appear as a channel
    on your litl. No work to do.
  • litl’s build quality and design blow away netbooks. Most
    netbooks are “cranked out,” with the engineering done in a few months
    with an eye to minimal cost. litl’s engineering was highly refined
    over time, with an eye to quality, ease-of-use, and aesthetics.
  • Screen size and quality. Only one or two netbooks have a
    screen as large as the litl webbook’s, and none have a screen with the
    same brightness or viewing angle. The litl’s screen quality enables
    “lean back” mode with photos and channels.
  • Mobility within the home. The usual use-case for a netbook
    is travel. We designed the litl to live at home. That’s why it has a
    larger screen, and displays useful and attractive channels when you
    leave it sitting around the house.
  • Hardware/software integration. The software is finely-tuned
    to the hardware, and the flippable hardware inspires one of litl OS’s core
    features, that it’s both “desktop” and “media center” all in one
    smoothly-integrated UI. The litl “look” spans the beautiful packaging,
    hardware, and software.
  • 100% legacy-free. No caps lock. HDMI, not VGA. etc.
  • Amazing guarantee. litl’s
    included warranty
    is better than the service plans you have to pay
    extra for when you buy a netbook.

Here’s how litl webbook is like a netbook:

  • It uses an Atom processor. litl webbook uses Atom because
    it makes the webbook smaller, lighter, thinner, and quieter; not to
    mention more efficient, saving some trees.

Here’s my question: when you go shopping for a cell phone or set-top
box, is your first question which CPU it runs? Would you choose iPhone
vs. Blackberry based on which one had the fastest CPU clock? Or would
you instead first look at what the device does, and in
particular look at the details of the hardware and software experience?

Fast Company says:

But litl isn’t selling hardware specs; they’re selling a stone-cold brilliant design. And to appreciate it, you have to be able to play with the device.

But for now, litl is only being sold online. And therein lies the
problem. Without handling it, you’ll never appreciate the thoroughness
of the design language–the scroll wheel on the laptop, echoed in the
scroll wheel of the remote; the perfectly weighted hinge which doubles
as a handle and hides the battery; the sturdiness of the case; the
brightness of the screen; the way the packaging and branding looks
domestic but not quite feminine; or even the fact that when the power
pack is plugged in, a tiny, embedded LED illuminates the dot of the
“i” in “litl”.

(This post was originally found at http://log.ometer.com/2009-11.html#13)

Which piece of big government are you against?

If you’re against big government, it’s time to be specific.
You can see the budget pie chart at Wikipedia. Over the next decades,
remember that Social Security and especially Medicare become
ever-larger slices of the pie.

As an against-big-government activist, how many of the following will you
have the integrity to advocate dropping:

  • 21%: Social Security
  • 16.6%: Defense
  • 13.3%: Medicare
  • 11.2%: Unemployment Insurance and Welfare
  • 9.0%: Interest Payments
  • 7.2%: Medicaid and SCHIP
  • 5.0%: War on Terror

Total of the above: 83.3%

Everything else is rounding error in terms of cost, though
important in terms of impact (education, highways, court system,
national parks, bailouts, etc.)

There are two options here.

Option One: You are in favor of eliminating or deeply
cutting several of the Big Items in the list above: Social Security,
Defense, Medicare/Medicaid, or Unemployment Insurance.

If you believe we should eliminate Social Security, Medicare,
Defense, and other stuff with 60-80%-plus public support then I
respect your argument and your integrity, but let’s face it, you
probably aren’t a politician facing re-election, and you’re advocating
something that’s not going to happen soon.

Option Two: You are not against “big government”; you are in
favor of “let’s trim 10-20% off the government while leaving it pretty
big” or something like that.

If you really mean “let’s trim 10-20%” can we please stop being so
melodramatic? As I’ve whined before, moving government size, or tax
brackets, by a few percent is not the difference between capitalism
and socialism.

  • Libertarian: government should be 5% of its current size.
  • Socialist: government should be 200% of its current size.
  • Republicans and Democrats judged by actions not rhetoric:
    government should be 105% of whatever it just was. Disagreement on
    where the new 5% goes.

Politicians are obligated to be in favor of cutting taxes while
raising spending, because the public in the aggregate is in favor of
that impossibility. Ridiculous, right? But if you oppose the vague
abstraction of “big government” without bringing up which of the big
programs you’re wanting to cut, you’re part of the problem.

There are only 7 areas accounting for 83.3% of the budget. Should be
pretty easy to pick one and encourage cutting it as a concrete path to
meaningful government-ectomy. It’s time to get specific.

(This post was originally found at http://log.ometer.com/2009-09.html#12)