Havoc's Blog

this blog contains blog posts

Direction: Abstract vs. Specific

There’s some talk on desktop-devel-list about exactly what “online
desktop” means, and in private mail I got a good suggestion to focus
it such that end users would understand.

“Online Desktop” is an abstraction. First, let me try to convince you
that it’s more specific than what GNOME purports to be about right
now. Then I’ll suggest a way to avoid architecture astronauting the
Online Desktop abstraction.

Right now it says on gnome.org:

GNOME offers an easy to understand desktop for your Linux or UNIX computer.

This is aimless, in my opinion. As Alex Graveley said in his keynote,
the “easy to understand” (i.e. usability) point is a feature, like the
color of a car. It doesn’t give any direction (sports car or
truck?). “Runs on Linux” is a feature too, like “car that drives on
roads.”

Chop out the two features, and the only direction here is “desktop” -
that word by itself just means “a thing like Windows or OS X” -
it’s a category name, and thus defined by what’s already in the
category. It dooms us to cloning, in other words.

Here’s what I offered at GUADEC as an alternative:

GNOME Online Desktop: The perfect window to the Internet: integrated
with all your favorite online apps, secure and virus-free, simple to
set up and zero-maintenance thereafter.

This is still a conceptual mission statement (or something), not a
product. I went on and had a series of slides about possible products
that fit into the above mission; the idea is that end users would be
offered, and marketed, one of those specific products. Here are the
products I mentioned:

  • An amped-up mobile device (internet tablet or phone) designed to
    work with popular web sites and seamlessly share your stuff with your
    desktop (including your proprietary desktop, if you have one!). The
    closest current products to this might be the Helio Ocean, the iPhone, and of
    course GNOME’s own Nokia N800.
  • An Internet appliance. To date, the only one of these I was ever
    impressed with was the Netpliance — as best
    I can tell, the company cratered because it sold a $300 device for
    $99, hoping to make it up on service contracts cell-phone-style, but
    did not lock people in to a 2‑year contract. So people bought the
    device, canceled the contract, and the company went bankrupt. Anyway,
    my grandfather had one of these, and it was perfect for him. I think
    it was a good idea and might have gone somewhere if they hadn’t dug
    themselves into a big old hole by losing $200 per device sold.
    The idea is also more timely today than in 1999, because you can do a
    lot more with a simple device as long as it has a web browser.
  • PCs for students. College students love the cuteness of the One Laptop
    Per Child. Imagine a slightly scaled-up version of it: a cheap,
    well-designed, durable laptop, with an OS designed for online use
    rather than peer-to-peer.
  • Managed clients for small/medium businesses. If Google is successful
    with Google Apps for Your Domain, or someone else is successful with
    something similar (already there are other examples — QuickBooks
    Online Edition, salesforce.com), then companies can outsource their
    server-side IT at low cost. But they’ll still be maintaining Windows,
    with its downsides. GNOME could offer an alternative client, perhaps
    managed by the service provider just as the online services
    themselves are, but in any case better optimized for a “window to
    the Internet” role than Windows and lower maintenance to boot.
  • An awesomer version of developer distributions like Fedora and Ubuntu,
    with neat features for services Linux lovers tend to use, such as
    Flickr, and nice support for stuff Linux lovers care about such as
    ssh.

You can probably imagine how to improve the above products, or even
come up with a few more.

In deciding what to hack on next, we should probably always be
thinking of one of the specific products, rather than the abstract
Online Desktop mission statement.

If you were selling GNOME to someone, you’d want to tell them about
one of these products, not the “window to the Internet”
blurb.

I proposed the Online Desktop abstraction because 1) a high-concept
mission sounds more exciting to many people and 2) the specific
products each exclude some of the primary GNOME constituents. The
GNOME project can support several of these products. The Online
Desktop abstraction is meant to be something a large part of the GNOME
community can have in common, even though we’re working on a variety
of different products.
But we should keep working on products, not
abstract missions.

Even though Online Desktop is an abstraction, I think
it’s both more specific and a better direction than
the current abstraction on www.gnome.org — “a desktop.” “Perfect
window to the Internet” is still vague, and I’m sure can be improved
on, but at least it isn’t a pre-existing product category that’s
already been defined by proprietary competitors.

You may notice that I tacked a bunch of features onto the Online
Desktop definition: “integrated with all your favorite online apps,
secure and virus-free, simple to set up and zero-maintenance
thereafter.” I guess these are more illustrations than anything
else. The point is to capture what the various products built around
GNOME could have in common.

(This post was originally found at http://log.ometer.com/2007-07.html#23)

Last 5%

Talking to lots of developers at GUADEC about their designs, I’m
reminded of the hardest thing to get right in software engineering:
when are you doing too much of it?

The “agile development” model is to always do as little as possible,
adding code and design complexity only as needed. I’m a big fan of
this, especially for apps. It breaks down a bit for libraries and
APIs, though; it’s too hard to get anybody to try the API until you
have it fairly far along, and then it becomes too hard to change the
API. A good approach that helps a bit is to always develop an API as
part of writing an app — for example we’ve developed HippoCanvas as
needed while writing Mugshot, Big Board, and Sugar. Compared to a
written-in-a-vacuum API the core turned out very nicely IMO, but one
consequence of as-needed development is that the API has a lot of
gaps. Still, an API founded in as-needed development would often be a
better start for a productized API than a from-scratch design.

Another guideline I use is that the last 5% of your use cases
or corner cases should be addressed with hacks and
workarounds. Otherwise you will double your complexity to cover that
last 5% and make the first 95% suck. The classic Worse is Better
paper says about the same thing, without the made-up percentages.
Typical hacks in this context might include:

  • “Just don’t do that then” — declare that while the API could be
    misused or something bad could happen in a particular case, the case
    is avoidable and people should just avoid it.
  • “Convention rather than enforcement” — all of Ruby on Rails is
    based on this one — rather than jumping through hoops to “enforce”
    something that’s hard to enforce, just don’t.
  • “Slippery slope avoidance” — pick some bright line for what to add
    vs. what not to add in a particular category and stick to it, even
    though each individual addition seems sensible in isolation.

I bet someone could write (or has written) a whole book on
“complexity avoidance patterns.”

I’ve tried different complexity levels in different projects. For
example, libdbus is flexible in various ways and even adds probably
30% more code just to handle rare out-of-memory situations. (The GLib
and GTK+ stack reduce the API and code complexity that other C
libraries have by punting on the out-of-memory issue.) Metacity, by
contrast, doesn’t even use GObject to speak of, just structs in a
loose object-oriented style. (I frequently prefer a struct with
new/ref/unref methods to GObject in application code, though not in
APIs.)
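
To make that concrete, here is a minimal sketch of the new/ref/unref
pattern in plain C (illustrative only; the FooThing type and its
fields are made up, not code from Metacity or any real project):

    #include <stdlib.h>
    #include <string.h>

    /* A hypothetical refcounted object: a plain struct with
     * new/ref/unref functions, no GObject involved.
     * (Out-of-memory checks omitted for brevity.) */
    typedef struct {
        int   refcount;
        char *name;
    } FooThing;

    FooThing *
    foo_thing_new (const char *name)
    {
        FooThing *thing = calloc (1, sizeof (FooThing));
        thing->refcount = 1;
        thing->name = strdup (name);
        return thing;
    }

    FooThing *
    foo_thing_ref (FooThing *thing)
    {
        thing->refcount += 1;
        return thing;
    }

    void
    foo_thing_unref (FooThing *thing)
    {
        thing->refcount -= 1;
        if (thing->refcount == 0) {
            free (thing->name);
            free (thing);
        }
    }

The appeal is that the whole “type system” is a few dozen lines you
can read at a glance, which is often plenty for application code; the
tradeoff is giving up GObject niceties such as signals and properties,
which is part of why the pattern is a poor fit for public APIs.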

It’s a useful thing to think about and experiment with.

(This post was originally found at http://log.ometer.com/2007-07.html#18)

Keynote Reactions

Lost my voice last night talking to people about our GNOME Online Desktop keynote.
I’ll try to remember some interesting things people brought up.

Some commenters thought the talk wasn’t alarmist enough and should
have more strongly stressed the urgency of the situation. I don’t
think we need to panic, but I do think a lot of hard work is in order,
whether in the online desktop direction or some other focused
direction others may propose.

“Aren’t many web services proprietary?” was a good question
raised during the talk Q&A. My short answer to this is yes, but
ignoring the real user benefits of these services will result in
everyone using them anyway, while not using GNOME or any other open
source software. We have to engage with where the world is going and
what users want to have. That will put us in a position to affect
openness. Taking a “stop progress!” attitude won’t help.

Even among GNOME developers, almost everyone uses
Google or Flickr or something. Expecting a wider
non-geek audience to forego these services on ideological grounds
while we aren’t even doing it ourselves doesn’t seem very reasonable
to me.

Also, web services may well not be proprietary in the sense of the
Open Source Definition, but are proprietary in effect. See
below, on the need for an Open Service Definition.

“I don’t want to put my data on someone else’s server” and other
security issues are a common concern. Let’s be clear that of course it
will always be possible to keep your data locally, or run your own
server.

But I think the privacy issues are very solvable, even for people who
care deeply about them. An Open Service Definition or the like might
address these. And of course you can use strong cryptography, though
for the average consumer, the prospect of losing 5 years of data along
with a lost private key is not acceptable. Mozy is an example of a
service that gives you the option to strongly encrypt with your own
private key; it doesn’t default to that choice since it’s too risky
for the average person.

As with the issue of proprietary web services, though, a “stop
progress!” attitude won’t put us in a position to affect security or
privacy. If we want to affect these things, we first have to offer the
user benefits and be a project people really care about. And then we
can affect what other participants in the industry do.

Several people suggested the argument against security concerns is
“do you use online banking?” and that seems like a good point, since
most people do use it.

We need a Free Services License, Open Service Definition, Free
Terms of Service, or whatever we want to call it.
I see more and
more people talking about this, even aside from the GNOME Online
Desktop conversation. Topics to cover in an Open Service Definition
might include ability to export your personal data, your right to own
your data’s copyright, etc. There may also be a requirement to use an
Affero GPL type of license. This is very open-ended and unclear at the
moment.

To me the reason open source works is that multiple parties with
competing interests can collaborate on the software. What would make
multiple parties interested in collaborating on a service? Probably a
fairly radical-sounding set of requirements. But the GPL was pretty
radical-sounding too, many years ago.

Running servers that require real bandwidth, hardware, and
administration will be hard for the open source community.
This is
absolutely true. On the other hand, I can imagine a lot of ways we can
approach this, and we don’t need very much in the way
of servers to get started. As I said in the talk, if we produce
something compelling people will be excited about it and we’ll have a
number of opportunities to work with for-profit and nonprofit funding
sources to get the server problem solved. If we don’t produce
something compelling, then there won’t be a scalability issue.

There are some precedents, the main one being Wikipedia, but
I’m also thinking of the Internet Archive, iBiblio, and ourmedia.org
as examples of nonprofit services.

What about just using a WebDAV home directory or syncing files
around?
If you start to prototype this, I would bet it produces a
distinctly different and probably worse user experience than building
around domain-specific services like calendar, photos, etc., for a
variety of reasons. But it could be part of the answer and is
certainly worth prototyping.

(This post was originally found at http://log.ometer.com/2007-07.html#18.2)

Online Desktop Talk

For those not at GUADEC, we put up some slides and screencasts from our
talk about GNOME Online Desktop.

(This post was originally found at http://log.ometer.com/2007-07.html#17)

GPL + AFL

Since it’s come up at GUADEC, I wanted to post a bit about D‑Bus
licensing. D‑Bus is dual licensed under your choice of the GPL or the
Academic Free License 2.1. The AFL is essentially an MIT/X11 style
license, i.e. “you can do almost anything” — however, it has the
following patent clause:

This License shall terminate automatically and You may no longer
exercise any of the rights granted to You by this License as of the
date You commence an action, including a cross-claim or counterclaim,
against Licensor or any licensee alleging that the Original Work
infringes a patent. This termination provision shall not apply for an
action alleging patent infringement by combinations of the Original
Work with other software or hardware.

In other words, if you sue claiming that D‑Bus infringes your patent,
you have to discontinue distributing D‑Bus. The patent clause does not
affect anything “outside” of D‑Bus, i.e. it does not affect patents on
stuff other than D‑Bus, or your right to distribute stuff other than
D‑Bus.

Versions of the AFL prior to 2.1 had a more scary patent
clause. However, I have not heard any objections to this more
limited one in 2.1.

Let’s compare the situation here to the LGPL. The LGPL is effectively
a dual license: the LGPL terms plus the GPL. Quoting from the LGPL:

You may opt to apply the terms of the ordinary GNU General Public
License instead of this License to a given copy of the Library.

As I understand it, this is why the LGPL is GPL-compatible. If you
link your GPL app to an LGPL library, you are using the library under
GPL.

I believe if you distributed D‑Bus under GPL or LGPL, you would be
making a patent grant of any patents affecting D‑Bus. The AFL
patent clause does not require you to make a patent grant; it still
allows you to sue. You just have to stop distributing D‑Bus while you
do it. With the GPL or LGPL, you can never distribute in the first
place, without giving up the right to sue at all. Unless I’m missing
something, there’s no way the AFL patent clause can be a problem
unless LGPL or GPL would be a problem in the same context.

That said, there may be some advantages to relicensing D‑Bus; some of
the options would be:

  • Add LGPL as a choice (so LGPL + GPL + AFL)
  • Add GPLv3 as a choice
  • Switch the whole thing to MIT/X11
  • Some combination of the above

For the record, I’m not against any of these in principle. I would
just say, 1) it’s a lot of work due to all the authors,
so someone would have to volunteer to sort this out (and figure out
what to switch to), and 2) I think some people are not understanding
the current licensing — in particular, at the moment it isn’t clear to
me at least what LGPL would allow you to do that the current licensing
does not. AFL is much less restrictive than the LGPL, and the
GPL is not compatible with the LGPL either — the GPL is only
LGPL-compatible because LGPL programs are dual-licensed under GPL,
just as D‑Bus is.

I may be confused on point 2): it would seem the implication is that
if your app is “GPL + exception” you can’t use an LGPL library such as
GTK+, except by adding another exception to allow using GTK+ under
LGPL rather than GPL. This is the same with GPL+AFL. But people don’t
worry about linking their GPL+exception apps to GTK+, and they do
worry about linking them to D‑Bus. What am I missing?

Historically, the intent of using AFL rather than LGPL was to be less
restrictive (and less vague) than the LGPL, and the intent of AFL rather
than MIT/X11 was to retain some very minimal patent protection for
patents that affect D‑Bus itself while keeping the MIT/X11 concept
otherwise. Also, AFL is a slightly more “legally complete and correct”
way to write the MIT/X11 type of license.

There isn’t any ideology here, just an attempt to pick the best
license, and we can always revise the decision.

(This post was originally found at http://log.ometer.com/2007-07.html#17.2)

Nonrecursive make advocacy

When I set up the Mugshot client build I noticed that the automake
manual suggests a non-recursive setup, so I thought I’d try it. I’ve
used a non-recursive setup for every project since. The automake
manual points to the classic 1997 paper, Recursive Make Considered
Harmful, if you want the detailed rationale.

My first attempt at the Mugshot client build had one big Makefile.am
with every target inline in it; that was a downside. Owen nicely
fixed it with a convention: for build subcomponent “libfoo” put
“include Makefile-libfoo.am” in the Makefile.am, then put the build
for stuff related to “libfoo” in “Makefile-libfoo.am”.

I recommend doing your project this way. I’ve spent a lot less
time messing with weird build issues; nonrecursive make “just works”
as long as you get the dependencies right, while recursive make
involves various hacks and workarounds for the fact that make can’t
see the whole dependency graph. In particular, nonrecursive make
supports “make ‑jN” without extra effort, a big win since
most new computers have multiple cores these days.

Nonrecursive make has the aesthetic benefit that it keeps all
your build stuff separate from your source code. On top of
that, since srcdir != builddir will work more easily, you can make a
habit of building in a separate directory. Result: your source tree
contains source and nothing else.

GNOME unfortunately makes nonrecursive automake painful. Two issues
I’ve encountered are that “gtkdocize” generates a Makefile that is
broken in a nonrecursive setup, and that jhbuild always wants to do
srcdir=builddir even though my project is nice and clean and doesn’t
require that.

I’m not sure why GNOME started with recursive make and everyone has
cut-and-pasted it ever since; it’s possible automake didn’t support
nonrecursive make in older versions, or maybe it was just dumb luck.

With “weird build bugs due to recursive make” knocked off the
list, my current top automake feature requests:

  • A way to echo only the filename “foo.c” instead of a 10-line
    list of compiler flags, so I can see all warnings and errors at once without
    scrolling through pages of junk.
  • In a nonrecursive make setup, a way to set a base directory for
    SOURCES, so instead of “foo_SOURCES=src/foo.c src/bar.c” I can have
    “foo_SOURCE_BASEDIR=src foo_SOURCES=foo.c bar.c” or something along
    those lines.
  • Ability to include in the build a relative directory outside the
    source tree, like “../common” or “../some-not-installed-dependency” -
    this almost works but breaks in “make dist” because it tries
    to copy “../common” to “distdir/../common” — we fix that in the
    Mugshot client build with a little hack, but I can imagine an
    automake-level convention for how to handle it.

These are obviously pretty minor quibbles.

(This post was originally found at http://log.ometer.com/2007-07.html#14)

jhbuild modules for online desktop

I added a jhbuild moduleset for online desktop related modules such
as the canvas, local-export-daemon, bigboard, etc.

(This post was originally found at http://log.ometer.com/2007-07.html#5)

Ideas that won’t die

gnomedesktop.org linked to a VentureCake article on ways to improve
GNOME. While the article covers a lot of ground, as a kind of aside it
brings up the age-old “X would be faster if it knew about widgets and
not just drawing” theory.

I wonder why this theory comes up repeatedly — it is one hundred
percent bogus. In fact the direction over the last few years is the
opposite. With Cairo, the X server is responsible for hardware
acceleration and the client side implements operations like drawing a
rectangle. Moving more logic client-side was one of the reasons for
the switch to Cairo.
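
To make the division of labor concrete, here is a minimal sketch of
client-side drawing with the public Cairo API (an illustration of the
general model, not code from any particular app). The filled rectangle
is rasterized entirely in the client library; the server’s role is
limited to acceleration and final display.

    #include <cairo.h>

    int
    main (void)
    {
        /* Client-side: create an image surface and draw into it.
         * The rectangle is computed by Cairo in the client process;
         * no “draw a rectangle” request goes to the X server here. */
        cairo_surface_t *surface =
            cairo_image_surface_create (CAIRO_FORMAT_ARGB32, 200, 100);
        cairo_t *cr = cairo_create (surface);

        cairo_set_source_rgb (cr, 0.2, 0.4, 0.8);
        cairo_rectangle (cr, 10, 10, 180, 80);
        cairo_fill (cr);

        /* Write the result out so the sketch is self-contained. */
        cairo_surface_write_to_png (surface, "rect.png");

        cairo_destroy (cr);
        cairo_surface_destroy (surface);
        return 0;
    }

A real GTK+ app draws to a window’s Cairo surface rather than a PNG
file, but the drawing API is the same; how much the backend offloads
to the server is an implementation detail of Cairo, not something the
X server knowing about widgets would change.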

Must be a lesson here somewhere about the value of applying intuition
to performance.

(And yes, some things may be slower and others faster in recent
Cairo-using apps vs. older apps, but I assure you it has nothing to do
with whatever you think it has to do with — unless your thinking is
based on profile results from solid benchmarks.)

(This post was originally found at http://log.ometer.com/2007-07.html#5.2)

No More Spaces

If you haven’t seen Marc Andreessen’s new blog, it is very worthwhile.
(2015 update: Marc’s blog doesn’t exist anymore except as an ebook, so all links to it below are 404, don’t try to click them. Other than adding this note, this post is the same as it was written in 2007.)

Among others I like this post, which touched one of my pet issues better than I could have said it — “space” disease:

But here’s the problem.

Web 2.0 has been picked up as a term by the entrepreneurial community and its corollaries in venture capital, the press, analysts, large media and Internet companies, and Wall Street to describe a theoretical new category of startup companies.

Or a “space”, if you will.

As in, “Foobarxango.com is in the Web 2.0 space”.

At its simplest level, this is just shorthand to indicate a new Web company.

The technology industry has a long history of creating and naming such “spaces” to use as shorthand.

Before the “Web 2.0 space”, you had the “dot com space”, the
“intranet space”, the “B2B space”, the “B2C space”, the “security
space”, the “mobile space” (still going strong!)… and before that,
the “pen computing” space, the “CD-ROM multimedia space”, the
“artificial intelligence” space, the “mini-supercomputer space”, and
going way back, the “personal computer space”. And many others.

But there is no such thing as a “space”.

There is such a thing as a market — that’s a group of people who
will directly or indirectly pay money for something.

There is such a thing as a product — that’s an offering of a new
kind of good or service that is brought to a market.

There is such a thing as a company — that’s an organized business
entity that brings a product to a market.

But there is no such thing as a “space”.

As tech industry participants, anytime we describe our ideas as either a “space” or some generic product category (“groupware”, “desktop”, etc.) we should be kicked in the ass. If you can’t describe the market (specific target audience) and the product (specific new value to audience that said audience doesn’t already have) and the company (people who will create the new value and bring it to the audience) then you are talking nonsense.

It seems to me that there are a good number of people in the industry who take something more seriously if you label it with an existing, proven category or “space” — and less seriously if you get specific. You explain the specifics and they say “that’s it?” and perhaps later they say “oh, you mean this is a play in the _____ space” — as if that makes things clearer and in some way more credible!

The thing is, once a category or space has a name, and a bunch of existing companies, it’s almost certainly a stupid idea to pile on (unless you can explain your specific new value offered that isn’t the same as those existing companies — in which case the name of a category or space is wholly inadequate to explain what you want to do).

There’s no real difference on this if you’re an open source project instead of a company. Project motives may be fame or popularity or freedom rather than profits, but from the perspective of the market, success will still hinge on specific value to specific audience.

Another way to think about it is that while a “space” is a descriptive abstraction, it isn’t one that’s helpful for guiding action. Specifically, it doesn’t set any priorities, because it doesn’t say “here’s the thing we’re doing that’s different.”

I love the project name One Laptop Per Child because it says right there in the name what the project does for what audience. In certain people’s hands, this would have been about the “sub-$200 laptop space” or the “international/developing-nations client OS space” or something awful like that. Bullet well-dodged, kudos to whoever got this right.

Other sample posts I liked so far on Marc’s blog:

I’m leaving out a bunch of stuff too, such as a list of killer OS X apps in 2007, top 10 sci-fi novelists of the 2000s, and a series on venture capital. And the guy started blogging on June 2. Either he’s doing nothing but blogging, he built up posts for the last year before starting, or he has minions to help. Alternatively, he’s some type of freak of nature.

(This post was originally found at http://log.ometer.com/2007-06.html#23)

New GTK+ Book

Apress sent me a free copy of Foundations of GTK+ Development by
Andrew Krause. On gtkbook.com,
Andrew has some sample chapters, supplementary articles, and an eBook
version.

As far as I know, this is the only up-to-date GTK+ book. It’s
introductory-level and strikes a nice balance of tutorial and
reference. It goes through the major GTK+ features in a comprehensive
way and has simple examples for each one. If you’re new to GTK+ or
just want to be able to look up a new widget you haven’t used and get
an overview, this looks like the book to get.

While I don’t know him personally, based on his book I’d say Andrew is
a smart guy who writes well, and it’s obvious he put in a lot of
effort researching GTK+ and developing nice example code.

Either a pro or a con depending on what you’re looking for: the book
doesn’t include much “deep secrets” or background-commentary material;
in other words, there isn’t a lot of “here is how this subsystem works
internally” or “this API sucks, don’t use it” information. On the
macro scale, such recommendations might include “never use gdk_draw_*,
always use Cairo” or “use PyGTK rather than C whenever possible,” for
example. As a more trivial example, it’s useful to know that “gint” is
historical cruft and plain “int” is fine to use.

The book would risk being too long and too cluttered with that kind of
thing mixed in, though — one can only learn so many details at once,
and the book has quite a bit of detail already. It does mention major
“gotchas” that are likely to matter in real life, for example warning
about using gtk_scrolled_window_add_with_viewport() with widgets such
as TextView that support scrolling natively. Choosing the level of
detail and background info is a tough balance to strike and Andrew did
a nice job.
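
As a hedged illustration of that scrolled-window gotcha (a minimal
GTK+ 2 sketch, not an example from the book): GtkTextView supports
scrolling natively, so it should be added to a GtkScrolledWindow
directly rather than wrapped in a viewport.

    #include <gtk/gtk.h>

    int
    main (int argc, char **argv)
    {
        GtkWidget *window, *scrolled, *text_view;

        gtk_init (&argc, &argv);

        window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
        scrolled = gtk_scrolled_window_new (NULL, NULL);
        text_view = gtk_text_view_new ();

        /* Right: the text view scrolls natively, so add it directly
         * and the scrolled window hooks up its adjustments. */
        gtk_container_add (GTK_CONTAINER (scrolled), text_view);

        /* Wrong (the gotcha): this would wrap the text view in an
         * extra GtkViewport and break its native scrolling:
         *
         *   gtk_scrolled_window_add_with_viewport
         *       (GTK_SCROLLED_WINDOW (scrolled), text_view);
         */

        gtk_container_add (GTK_CONTAINER (window), scrolled);
        g_signal_connect (window, "destroy",
                          G_CALLBACK (gtk_main_quit), NULL);
        gtk_widget_show_all (window);
        gtk_main ();
        return 0;
    }

gtk_scrolled_window_add_with_viewport() is only meant for widgets that
cannot scroll themselves, such as a box full of labels.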

One chapter I feel could use more elaboration covers writing a
custom widget. I’d like to see more information on GdkWindow
double-buffering and invalidation, info on no-window widgets and why
they are preferred when possible, and more discussion of the default
implementations of the widget methods and when to override them or
not.

Two nice supplements to this book might be a “deep GTK+
secrets”/“design and implementation of GTK+” type of book, and a
“rapid development with PyGTK and Glade” type of book.

That said, if you want to get started with GTK+, you’ll do well with
Foundations of GTK+ Development.
Thanks to Andrew and Apress for offering us an up-to-date GTK+ guide.

(This post was originally found at http://log.ometer.com/2007-06.html#23.2)