Havoc's Blog


It has to work

Often when our minds turn to design, we think first of cosmetics. With all the talk of Apple these days, it’s easy to think they’re successful because of shiny aluminum or invisible screw holes.

But this is a side issue. The Steve Jobs quote that gets to the heart of product development might be: “So why the fuck doesn’t it do that?”

If you’re trying to make a successful tech product, 90% of the battle is that it works at all.

“Works” means: the intended customers really use it for its intended purpose.

Plain old bugs (deviations from the design specification) can keep a product from working, if they mean people don’t use the product. But bugs can be irrelevant.

It’s easy to fail even without bugs:

  • Using the product has too many steps or a single “too hard” step. People give up.
  • The product solves a non-problem or the wrong problem. Nobody will use it.

It doesn’t matter why people don’t or can’t use the product. If they don’t use it, it does not work.

Startups based on making it work

Plenty of startups were successful because they broke the “it has to work” barrier. A few famous examples:

How many “file sync” products existed before Dropbox? There must have been thousands. But it’s easy to create a nonworking file sync product. Too many steps, too much complexity, or too many bugs: any of these could mean “file sync” didn’t work.

Can you imagine pitching these companies to potential investors? “Yeah, thousands of people have failed to do our idea. But we’re going to finally do it right.” Tough sell.

Working vs. well-designed

Dropbox, Sonos, and Flip are working products people seem to agree were well-designed. But some successful products have pretty bad cosmetics:

  • despite its eventual failure, MySpace became popular initially because it met a need and it worked
  • craigslist
  • eBay
  • git got the big picture right, but every small UI design decision or “cosmetic” issue seemed to be wrong
  • HTML/CSS/JavaScript (now I’m just trolling, sorry)

“Working” is a little bit different from “well-designed.” Sometimes working can involve “worse is better,” perhaps. “Working” has to do with solving a problem, not (necessarily) doing so in an elegant or beautiful way.

“It just works” vs. solving the problem

The phrase “it just works” comes up a lot, and that’s almost enough. “Just working” seems to mean there aren’t many manual steps to use the product.

But you still have to solve the problem. I remember the iPod because it solved my music problem: its use of a hard drive meant I could get all my music on there, and listen to all my music. That’s what I wanted to do. Other players didn’t work for me because they didn’t hold all my music, which created a problem (deciding what to put on there) rather than solving one (getting rid of all those CDs). To this day, I find the iTunes app hard to use (bordering on terrible), but it works.

Easy to use is not quite the same as working, though perhaps it’s a helpful step.

QA that asks “does it work?”

At Red Hat, we used to use a sophisticated QA technique called the “yellow pad.” To yellow pad a product, you get one of those yellow legal pads. You need a fresh setup just like the one the customer will have (say, a computer your software has never been installed on). Then you install and try to use your software, writing down on the yellow pad anything that fails or looks embarrassing or nobody would ever understand.

Plenty of “finished” products will fail miserably.

QA teams and developers get tunnel vision. It’s easy to go for months with nobody on the team taking a fresh look at the product, step-by-step.

Once you can pass your own “yellow pad” criticism, or in parallel if you like, you can jump to hallway usability testing, maybe still using a yellow pad: watch someone else try to use the product for the first time, and again take notes. Fix that stuff.

The point is this: over and over and over, iteratively as you fix stuff, you need to try the product by walking through it, not by looking at features in isolation. Feature lists and requirements documents are death. Step-by-step stories are life.

I’m sure you can see the resonance with agile software development and lean startup methodology here, but you need not buy into a complex method or theoretical framework.

Yellow pads won’t help you solve the right problem, but they’ll get you past a lot of details that can sabotage you even if you’re solving the right problem.


A good way to kill a product: start adding extra features when it doesn’t work. First it needs to work. Focus on that. Then you can elaborate.

People who can’t tell if it works

Many people argue that non-technical CEOs can’t lead a technology company. One reason, I believe, is that many non-technical CEOs can’t tell whether a product works, or can’t understand which technical barriers to workingness are surmountable and which are fundamental.

Often, IT departments can’t tell if software works; they produce lists of requirements, but “it works” may not be one of them. Remember, “working” means “people will really use it in practice,” not “it can be made to do something if you have enough patience.”

Interaction design is a branch of design with an emphasis on step-by-step, and I find that designers with this background understand that it has to work. Many (not all) designers with other backgrounds may have an emphasis on the appearance of static snapshots, rather than the flow through the steps; and I’ve found that some of those designers don’t know whether a design will work.

Software developers are good at recognizing some ways that products don’t work, but as frequently noted, many of us overestimate the general public’s tolerance for complexity.

It might take a few years

Some kinds of product (notably, many web apps) can go from concept to launch in a few months or less, but whole categories cannot. Operating systems; anything involving hardware; non-casual video games such as World of Warcraft. In my experience, which spans a few projects anyway, complicated tech products tend to take a year or two to be “done” and around three years to work.

Sometimes I wonder if the “secret sauce” at Apple is no more than understanding this. Other hardware manufacturers have structural problems investing enough time in a single product. While I have no idea how long Apple spent developing the iPhone, I’m willing to bet it was at least three years. That’s more than enough time for most big companies to cancel a project.

Video game companies seem a little more open to investing the time required than other kinds of software companies, with major games spending several years in development. Video games don’t have the luxury of launching a “1.0” and then fixing it, I guess.

The first milestone that matters

If you’re building something, celebrate on the day that the product works.

You may have a long way to go yet (does anyone know your product exists?), but you’ve already succeeded where others failed.

Callbacks, synchronous and asynchronous

Here are two guidelines for designing APIs that use callbacks, to add to my inadvertent collection of posts about minor API design points. I’ve run into the “sync vs. async” callback issue many times in different places; it’s a real issue that burns both API designers and API users.

Most recently, this came up for me while working on Hammersmith, a callback-based Scala API for MongoDB. I think it’s a somewhat new consideration for a lot of people writing JVM code, because traditionally the JVM uses blocking APIs and threads. For me, it’s a familiar consideration from writing client-side code based on an event loop.


  • A synchronous callback is invoked before a function returns, that is, while the API receiving the callback remains on the stack. An example might be: list.foreach(callback); when foreach() returns, you would expect that the callback had been invoked on each element.
  • An asynchronous or deferred callback is invoked after a function returns, or at least on another thread’s stack. Mechanisms for deferral include threads and main loops (other names include event loops, dispatchers, executors). Asynchronous callbacks are popular with IO-related APIs, such as socket.connect(callback); you would expect that when connect() returns, the callback may not have been called, since it’s waiting for the connection to complete.
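The two definitions above can be sketched on the JVM. This is a hypothetical illustration (the names `eachSync` and `connectAsync` are invented, and a plain `ExecutorService` stands in for a main loop), not any particular library’s API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

public class CallbackKinds {
    // Synchronous: the callback runs while eachSync() is still on the stack.
    static void eachSync(List<Integer> xs, Consumer<Integer> cb) {
        for (Integer x : xs) cb.accept(x);
    }

    // Asynchronous: the callback is deferred to another thread's stack and
    // may run after connectAsync() has already returned.
    static void connectAsync(ExecutorService dispatcher, Runnable cb) {
        dispatcher.submit(cb);
    }

    public static void main(String[] args) throws InterruptedException {
        List<Integer> seen = new ArrayList<>();
        eachSync(List.of(1, 2, 3), seen::add);
        // Sync contract: every callback has fired by the time eachSync() returns.
        System.out.println(seen);

        ExecutorService dispatcher = Executors.newSingleThreadExecutor();
        CountDownLatch done = new CountDownLatch(1);
        connectAsync(dispatcher, done::countDown);
        // Async contract: we cannot assume the callback has fired yet; wait for it.
        done.await();
        dispatcher.shutdown();
    }
}
```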


Two rules that I use, based on past experience:

  • A given callback should be either always sync or always async, as a documented part of the API contract.
  • An async callback should be invoked by a main loop or central dispatch mechanism directly, i.e. there should not be unnecessary frames on the callback-invoking thread’s stack, especially if those frames might hold locks.

How are sync and async callbacks different?

Sync and async callbacks raise different issues for both the app developer and the library implementation.

Synchronous callbacks:

  • Are invoked in the original thread, so do not create thread-safety concerns by themselves.
  • In languages like C/C++, may access data stored on the stack such as local variables.
  • In any language, may access data tied to the current thread, such as thread-local variables. For example, many Java web frameworks create thread-local variables for the current transaction or request.
  • May be able to assume that certain application state is unchanged, for example assume that objects exist, timers have not fired, IO has not occurred, or whatever state the structure of a program involves.

Asynchronous callbacks:

  • May be invoked on another thread (for thread-based deferral mechanisms), so apps must synchronize any resources the callback accesses.
  • Cannot touch anything tied to the original stack or thread, such as local variables or thread-local data.
  • If the original thread held locks, the callback will be invoked outside them.
  • Must assume that other threads or events could have modified the application’s state.

Neither type of callback is “better”; both have uses. Consider:

list.foreach(callback)

in most cases, you’d be pretty surprised if that callback were deferred and did nothing on the current thread!

While

socket.connect(callback)

would be totally pointless if it never deferred the callback; why have a callback at all?

These two cases show why a given callback should be defined as either sync or async; they are not interchangeable, and don’t have the same purpose.

Choose sync or async, but not both

Not uncommonly, it may be possible to invoke a callback immediately in some situations (say, data is already available) while the callback needs to be deferred in others (the socket isn’t ready yet). The tempting thing is to invoke the callback synchronously when possible, and otherwise defer it. Not a good idea.

Because sync and async callbacks have different rules, they create different bugs. It’s very typical that the test suite only triggers the callback asynchronously, but then some less-common case in production runs it synchronously and breaks. (Or vice versa.)

Requiring application developers to plan for and test both sync and async cases is just too hard, and it’s simple to solve in the library: If the callback must be deferred in any situation, always defer it.
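Here is a sketch of that rule using a hypothetical callback-based cache (the class and method names are invented for illustration): even when the answer is already in hand, defer it anyway, so the callback’s contract is uniformly asynchronous.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

// Hypothetical illustration of "if the callback must ever be deferred, always defer it."
public class AsyncCache {
    private final ExecutorService dispatcher = Executors.newSingleThreadExecutor();
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public void get(String key, Consumer<String> callback) {
        String hit = cache.get(key);
        if (hit != null) {
            // Tempting: callback.accept(hit) right here, synchronously.
            // Don't. Defer anyway, so callers never face two different contracts.
            dispatcher.submit(() -> callback.accept(hit));
        } else {
            dispatcher.submit(() -> {
                String value = fetchSlowly(key); // stand-in for real I/O
                cache.put(key, value);
                callback.accept(value);
            });
        }
    }

    private String fetchSlowly(String key) {
        return key.toUpperCase(); // placeholder computation
    }

    public void shutdown() { dispatcher.shutdown(); }
}
```

With this shape, application code only ever has to handle the async case, whether or not the value was cached.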

Example case: GIO

There’s a great concrete example of this issue in the documentation for GSimpleAsyncResult in the GIO library: scroll down to the Description section and look at the example about baking a cake asynchronously. (GSimpleAsyncResult is equivalent to what some frameworks call a future or promise.) The library provides two completion methods: complete_in_idle(), which defers callback invocation to an “idle handler” (just an immediately-dispatched one-shot main loop event), and plain complete(), which invokes the callback synchronously. The documentation suggests using complete_in_idle() unless you know you’re already in a deferred callback with no locks held (i.e. if you’re just chaining from one deferred callback to another, there’s no need to defer again).

GSimpleAsyncResult is used in turn to implement IO APIs such as g_file_read_async(), and developers can assume the callbacks used in those APIs are deferred.

GIO works this way and documents it at length because the developers building it had been burned before.

Synchronized resources should defer all callbacks they invoke

Really, the rule is that a library should drop all its locks before invoking an application callback. But the simplest way to drop all locks is to make the callback async, thereby deferring it until the stack unwinds back to the main loop, or running it on another thread’s stack.

This is important because applications can’t be expected to avoid touching your API inside the callback. If you hold locks and the app touches your API while you do, the app will deadlock. (Or if you use recursive locks, you’ll have a scary correctness problem instead.)

Rather than deferring the callback to a main loop or thread, the synchronized resource could try to drop all its locks; but that can be very painful because the lock might be well up in the stack, and you end up having to make each method on the stack return the callback, passing the callback all the way back up the stack to the outermost lock holder who then drops the lock and invokes the callback. Ugh.
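The deferral pattern looks roughly like this (a hypothetical resource, not any library’s real code; note that Java’s built-in monitors are reentrant, so on the JVM the hazard of invoking the callback under the lock is the “scary correctness problem” of re-entering the resource mid-operation rather than a hard deadlock):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.IntConsumer;

// Hypothetical synchronized resource showing the pattern: never invoke an
// application callback while holding the resource's lock.
public class Counter {
    private final ExecutorService dispatcher = Executors.newSingleThreadExecutor();
    private final Object lock = new Object();
    private int value = 0;

    public void increment(IntConsumer callback) {
        int snapshot;
        synchronized (lock) {
            value += 1;
            snapshot = value;
            // Do NOT call callback.accept() here, inside the lock.
        }
        // Deferred: the callback runs on the dispatcher's stack with no locks
        // held, so it can freely call back into get() or increment().
        dispatcher.submit(() -> callback.accept(snapshot));
    }

    public int get() {
        synchronized (lock) { return value; }
    }

    public void shutdown() { dispatcher.shutdown(); }
}
```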

Example case: Hammersmith without Akka

In Hammersmith as originally written, the following pseudocode would deadlock:

connection.query({ cursor => /* iterate cursor here, touching connection again */ })

Iterating the cursor will go back through the MongoDB connection. The query callback was invoked from code in the connection object… which held the connection lock. Not going to work, but this is natural and convenient code for an application developer to write. If the library doesn’t defer the callback, the app developer has to defer it themselves. Most app developers will get this wrong at first, and once they catch on and fix it, their code will be cluttered by some deferral mechanism.

Hammersmith inherited this problem from Netty, which it uses for its connections; Netty does not try to defer callbacks (I can understand the decision since there isn’t an obvious default/standard/normal/efficient way to defer callbacks in Java).

My first fix for this was to add a thread pool just to run app callbacks. Unfortunately, the recommended thread pool classes that come with Netty don’t solve the deadlock problem, so I had to fix that. (Any thread pool that solves deadlock problems has to have an unbounded size and no resource limits…)
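The shape of that workaround corresponds to the stock JDK cached pool, which grows without bound. A sketch (not Hammersmith’s actual code) of why the pool must be unbounded: a callback that blocks waiting on another callback run by the same pool would deadlock a fixed-size pool of one, but completes on a pool that can always add a thread.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class UnboundedPoolDemo {
    public static void main(String[] args) throws Exception {
        // Unbounded: a new thread is created whenever all existing ones are busy.
        ExecutorService pool = Executors.newCachedThreadPool();

        // The outer task blocks on an inner task submitted to the same pool.
        // On Executors.newFixedThreadPool(1) this would deadlock; here the
        // pool grows, so it completes.
        Future<String> outer = pool.submit(() -> {
            Future<String> inner = pool.submit(() -> "inner done");
            return inner.get();
        });
        System.out.println(outer.get());
        pool.shutdown();
    }
}
```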

In the end it works, but imagine what happens if callback-based APIs become popular and every jar you use with a callback in its API has to have its own thread pool. Kind of sucks. That’s probably why Netty punts on the issue. Too hard to make policy decisions about this in a low-level networking library.

Example case: Akka actors

Partly to find a better solution, next I ported Hammersmith to the Akka framework. Akka implements the Actor model. Actors are based on messages rather than callbacks, and in general messages must be deferred. In fact, Akka goes out of its way to force you to use an ActorRef to communicate with an actor, where all messages to the actor ref go through a dispatcher (event loop). Say you have two actors communicating, they will “call back” to each other using the ! or “send message” method:

actorOne ! Request("Hello")
// then in actorOne
sender ! Reply("World")

These messages are dispatched through the event loop. I was expecting my deadlock problems to be over in this model, but I found a little gotcha – the same issue all over again, invoking application callbacks with a lock held. This time it was the lock on an actor while the actor is processing a message.

Akka actors can receive messages from either another actor or from a Future, and Akka wraps the sender in an object called Channel. The ! method is in the interface to Channel. Sending to an actor with ! will always defer the message to the dispatcher, but sending to a future will not; as a result, the ! method on Channel does not define sync vs. async in its API contract.

This becomes an issue because part of the “point” of the actor model is that an actor runs in only one thread at a time; actors are locked while they’re handling a message and can’t be re-entered to handle a second message. Thus, making a synchronous call out from an actor is dangerous; there’s a lock held on the actor, and if the synchronous call tries to use the actor again inside the callback, it will deadlock.

I wrapped MongoDB connections in an actor, and immediately had exactly the same deadlock I’d had with Netty, where a callback from a query would try to touch the connection again to iterate a cursor. The query callback came from invoking the ! method on a future. The ! method on Channel breaks my first guideline (it doesn’t define sync vs. async in the API contract), but I was expecting it to be always async; as a result, I accidentally broke my second guideline and invoked a callback with a lock held.

If it were me, I would probably put deferral in the API contract for Channel.! to fix this; however, as Akka is currently written, if you’re implementing an actor that sends replies, and the application’s handler for your reply may want to call back and use the actor again, you must manually defer sending the reply. I stumbled on this approach, though there may be better ones:

private def asyncSend(channel: AkkaChannel[Any], message: Any) = {
    Future(channel ! message, self.timeout)(self.dispatcher)
}

An unfortunate aspect of this solution is that it double-defers replies to actors, in order to defer replies to futures once.

The good news about Akka is that at least it has this solution – there’s a dispatcher to use! While with plain Netty, I had to use a dedicated thread pool.

Akka gives an answer to “how do I defer callbacks,” but it does require special-casing futures in this way to be sure they’re really deferred.

(UPDATE: Akka team is already working on this, here’s the ticket.)


While I found one little gotcha in Akka, the situation is much worse on the JVM without Akka because there isn’t a dispatcher to use.

Callback-based APIs really work best if you have an event loop, because it’s so important to be able to defer callback invocation.

That’s why callbacks work pretty well in client-side JavaScript and in node.js, and in UI toolkits such as GTK+. But if you start coding a callback-based API on the JVM, there’s no default answer for this critical building block. You’ll have to go pick some sort of event loop library (Akka works great), or reinvent the equivalent, or use a bloated thread-pools-everywhere approach.

Since callback-based APIs are so trendy these days… if you’re going to write one, I’d think about this topic up front.

Update: Health insurance credits and penalties

In a post a couple months ago I argued that a given financial effect on an individual could be called a “tax increase plus a tax credit for doing XYZ,” or a “tax penalty for not doing XYZ,” and that the two are identical in terms of how many dollars all parties involved end up with. Based on that, I was wondering why a tax penalty for not buying insurance would be legally different from the existing, longstanding tax credits for buying insurance.

Two of the judges in the recent Sixth Circuit opinion upholding the law addressed this issue, though the court as a whole declined to rule on taxing power grounds (because they upheld the law on commerce clause grounds anyway).

First reaction: wow, the Sixth Circuit reads my blog and answers my questions! Nice!  (Note to the thick: kidding.)

I was arguing (as a non-lawyer) that the government shouldn’t have the power to do something, or lack that power, based purely on what label they stick on it. Surely if the individual mandate were unconstitutional, the government could not fix the constitutional problem with a rewording that would have no practical effect. Should abridging free speech or skipping due process become OK as long as we use the right words to describe them?

(A small twist that might merit a footnote: due to a special rule in the health care bill, the IRS isn’t allowed to enforce the individual mandate’s penalty as strongly as they can enforce falsely claiming a credit. That’s a small practical difference between the penalty and a credit, but one that makes the penalty a weaker infringement on individual rights than the existing credits.)

In the Sixth Circuit opinion, the two judges addressing the tax issue write (see page 29 for the relevant stuff):

…it is easy to envision a system of national health care, including one with a minimum-essential-coverage provision, permissibly premised on the taxing power. Congress might have raised taxes on everyone in an amount equivalent to the current penalty, then offered credits to those with minimum essential insurance. Or it might have imposed a lower tax rate on people with health insurance than those without it.

That is, they agree with me that the tax increase plus the credit would have been the same thing as the penalty, economically speaking. (“Economically speaking” = the same parties end up with the same number of dollars in the same situations.)

But they go on and say it matters what you call it. In other words, they argue the taxing power does not include tax penalties for XYZ, but does include tax credits for not-XYZ.

I thought that was an absurdity proving either 1) the penalty is allowed or 2) the credits are not allowed, depending on your political bent. They embraced the position I found absurd. Lawyers! (eye roll) (Sorry, lawyer friends.)

I see a big gap between those who are all for, or up in arms about, the individual mandate (everyone cites political-philosophy-oriented arguments), and what the two judges are arguing here. Those I’ve heard arguing about political philosophy would be against both penalty and credit, or for both penalty and credit. I don’t know of a philosophical argument where the labeling makes the difference.

Relabeling something doesn’t change what rights an individual has, or what rights a state has, in practice. The distinction, if any, is legalistic.

I’m imagining the founding fathers: “Call it a credit rather than a penalty, or give me death!”

I’m blown away that a law with such huge practical effects — some will say positive, some will say negative, not the point — can be in limbo over this. Neither advocates nor opponents of reform would consider the wording of the bill “what’s at issue,” but it could be what the courts base a decision on. Congress, break out your thesaurus next time.

(As with the previous post, spare us the generic debate about health care reform in the comments, we’ve all heard it before. I haven’t heard much discussion of this very specific legal topic though, insights are welcome on that.)

Some personal finance answers

I contribute to money.stackexchange.com sometimes; the site doesn’t get a ton of traffic yet, sadly. Some of the stuff I’ve written over there is as good (or bad) as my blog posts over here, so I thought I’d link to some highlights.

It’s not a bad Q&A site on personal finance, if you browse around. Though it needs more people asking and answering.

Book Review: Selfish Reasons to Have More Kids

Despite the title, Selfish Reasons to Have More Kids, the “why you should have more kids” part feels tacked-on. The interesting part of the book reviews twin and adoption studies, making the case that parenting style doesn’t matter much in the long run.

The book’s argument in brief is:

  • in twin studies and adoption studies, most variation in how adults turn out can’t be explained by what their parents did. “Normal” variation in first-world parenting (i.e. excluding abuse, malnutrition, etc.) does not affect how people turn out in adulthood very much.
  • parenting affects how kids act while they are kids, but once people move out of the house they bounce back to their “natural state.” (“As an adult, if I want a cookie, I have a cookie, okay?” – Seinfeld)
  • one long-run thing parents can strongly affect is whether their kids have fond memories of them and enjoy having them around.
  • parents should be more relaxed and invest less time in “kid improvement,” more time in everyone enjoying themselves even if it’s by watching TV.
  • discipline is mostly for the benefit of the parents (“keep kids from being annoying around us”) not for the benefit of the kids (“build character”).
  • (the conclusion in the title) since parenting can and should be less unpleasant, people should consider having more kids than they originally planned (i.e. the cost/benefit analysis is better than they were thinking).

Can you tell the author is an economist?

I get annoyed by “nature vs. nurture” popular science writing, and have a couple thoughts along those lines about this book.

Thank you for skipping the just-so stories

There’s a mandatory paragraph in nature vs. nurture articles where someone speculates on the just-so story. Something along the lines of “such-and-such behavior evolved so that women could get their mates to bond with them and hang around to care for children,” or whatever. Bryan Caplan 100% skips the silly evolutionary speculation. Thank you!

Cultural and genetic factors

However, I did find the book a little quick to jump to the genes. In “Appendix to Chapter 2,” Caplan explains that twin and adoption studies try to estimate three variables:

  1. fraction of variance explained by heredity (similarity of twins raised apart)
  2. fraction of variance explained by shared family environment (similarity of biologically unrelated children raised together)
  3. fraction of variance explained by non-shared family environment (the rest of variance)

Caplan (to his credit) goes to some lengths to point out that the twin and adoptions studies are virtually all from first-world countries and typical homes in those countries. For example, one of the studies was done in Sweden, likely even more homogeneous than the ones done in the United States.

Obvious point, I’m sure Caplan would agree: when something correlates with genes, that doesn’t mean “there’s a gene for it.” For example, attractive people have higher incomes; attractiveness is mostly genetic. So one way income can be genetic has to do with your genes for appearance. Most people would find it misleading to say there’s a gene for high income, even though income correlates with various genes. This is a pretty simple example, but it can be much, much more complicated. Look at this diagram for how genes might get involved in autism, for example.

An outcome such as income often involves genes interacting with culture – say, standards of appearance. The ideal genes for appearance change over time and around the world.

But income isn’t just affected by appearance. Who-knows-how-many genes get involved: countless genes affect appearance, personality, intelligence, and then all of those factors in turn affect your income… in ways that depend on your culture and environment. Make it more complicated: there are also genes that affect how we react to certain appearances or personalities. If standards of attractiveness have a genetic component, then you’d expect that there are also genetic variations in what people find attractive, and then cultural variations layered on that.

Genes and culture also interact with what I think of as tradeoffs or “physical properties of the world.” All I mean here is that you can’t combine behaviors and traits arbitrarily, they tend to come in clusters that make sense together or work well together. This seems true to me for both personalities and for cultures. If you slice-and-dice your time or your concepts in one way, then you didn’t slice-and-dice them another way. Some ways of thinking or doing work better than others, some are more compatible with each other, etc. There’s a source of universals here that need not point to genes.

Finally, culture, like genes, gets passed on between generations. A very simple example: if you had a culture that was fundamentally anti-natalist, it would not last very long. And in fact most cultures are very enthusiastic about having children. This is a cultural universal (other than short-lived sub-groups), but just its universality doesn’t connect it to genes; it could be cultural rather than genetic evolution. Humans inherit so many ways of thinking and doing, and so much background knowledge, through interaction with other people, rather than through DNA.

Getting back around to the book. If you do a twin study in Sweden, then you might find that twins raised apart have similar outcomes. But it’s important to recognize that there aren’t (necessarily) genes for those outcomes; there are genes that cause the twins to have (likely unknown) traits which somehow result in those outcomes, in Sweden.

A couple thoughts:

  • This probably doesn’t matter so much for the book’s practical conclusions. Parents can’t change the culture surrounding their children any more than they can change their kids’ genes.
  • But from a wider perspective, it sure would be interesting — and perhaps useful at times — to know the mechanism for hereditary outcomes. i.e. the causality and not only the correlation.

When Caplan says “it’s genetic” I would say that’s true in that there appears to be a Rube Goldberg chain of causality, such that somehow certain genes are resulting in certain outcomes. However, my feeling is that the mechanism matters.

There’s a difference between a gene for unattractiveness/low-IQ/bad-personality (or sex, or race, for that matter) resulting in low income because the world (including culture) works against high income for people with those traits, and saying that “income is genetic.” Does it really feel accurate to say that “such-and-such percent of variation in income is explained by genetics” here, without mentioning the intermediate traits that are genetic, which in turn affect income?

I’m not sure Caplan would really disagree, but I do think the “nature vs. nurture” genre, including this book, glosses over a lot of complexity that kinda matters.

(The mistake is inherent in the phrase “nature vs. nurture”; if you start spelling out the mechanisms, it’s pretty clear that the two interact and the real question is how specifically in this case, right?)

Bottom line, when talking about the “fraction of variance explained by heredity” I’d add the footnote in this physical and cultural environment, because the background reality, cultural and otherwise, shared among families in the study — or even shared among all families on Earth — has a lot to do with which genetic traits matter and exactly how they matter.

More hours spent parenting

Caplan talks a bit about how people spend a lot more time parenting these days than they used to, and mostly blames increased parental guilt (people thinking they have to “improve” their children).

I’d wonder about other trends, such as the decline of extended family ties and the tendency for people to move across the country. If you’ve had a child, you may have noticed that they’re designed to be raised by more than two people. (The fact that some single parents raise them alone blows my mind.)

Missing from the book, I thought, was research into how other social/demographic trends were affecting parenting. More geographic mobility, more tendency toward social isolation, more tendency to need two incomes to achieve a “middle class” lifestyle, etc. – I have no idea which trends are most important, but surely some of these factors go into parenting decisions.

Caplan kind of implies that changes in parenting are almost all “by choice,” due to beliefs about the importance of parenting, and I’m not sure I buy it. It seems plausible to me that changes in parenting could be mostly by choice, but also plausible that they could be mostly due to socioeconomic trends.


The book has several semi-related side trails, which may be interesting:

  • some discussion of safety statistics, probably familiar from Free-Range Kids
  • some discussion of the decades-old Julian Simon vs. Paul Ehrlich “will population outstrip resources” debate
  • editorial in favor of reproductive technology
  • thoughts for grandparents or future grandparents on how to end up with more grandchildren

Child services risk

On the Free-Range Kids tangent, I always wonder about “child services risk,” the risk that some nosy neighbor gets incensed and creates a whole traumatic drama with child services. To me this risk seems plausible. In the book, Caplan says they let their 7-year-old stay home alone, which seems fine to me for the right 7-year-old in the right context, but I’m pretty sure a lot of state governments say it’s not fine.

I’d sort of like some “child services screwing up your child’s life” statistics to go along with the stats on abductions and drownings. Should we worry about that?

It’s one thing to say you don’t care what other people think, but when people can turn you in and just the fact of being reported creates a nightmare, I can understand why one wouldn’t want to appear unconventional.


I found the review of twin and adoption research interesting (and practical, for parents). Can’t hurt to remember that your kids are mostly going to do whatever they want anyway, once they move out of the house, and that chances are they’ll do a bunch of the same stuff their parents did. Caplan follows through nicely with implications for discipline, family harmony, and so on.

Some of the rest of the book wasn’t as interesting to me; I’d heard the Free-Range Kids and Simon vs. Ehrlich material many times before, for example. But you can skim those bits if you like.

Buy on Amazon – I get money if you do!

Keith Pennington

My Dad died from cancer one year ago; today I’d like to write something about him.

Dad liked animals, (certain) children, the woods, hunting, fine guns and knives, books, history, sharing his knowledge, strong coffee, and arguing about politics.

He grew up partly at my grandparents’ summer camp in Michigan called Chippewa Ranch, and partly on their cattle ranch in Georgia.

With brother Kenny

Keith On Poco

Dad’s favorite book, The Old Man and the Boy, is almost a blueprint for how he wanted to live and what kind of father he wanted to be. Robert Ruark’s epigraph in that book says “Anyone who reads this book is bound to realize that I had a real fine time as a kid,” and we spent our childhood weekends doing all sorts of things other kids weren’t allowed to do. In my copy of the book when I was 15, Dad wrote “This is all you really need to know, all you have to do is ‘do'”.

Havoc hunting

With my sister

I inherited a lot more of Dad’s reading-a-book-constantly side than his outdoor adventure side, but a little of both rubbed off.

Dad signed up for Vietnam, and while he never talked about it much, I’m guessing in some ways it was the last time he mostly enjoyed his day job.

His closest friend summarized his military career:

Diverted in 1968 from assignment to 5th SFGA to the Americal Division he was a LRRP Platoon Commander for his first tour. He extended in country to serve with the II Corps Mike Force in their separate 4th Battalion in Kontum. He chose to command a Rhade company in preference to available staff positions and was wounded severely during the Joint Mike Force Operation at Dak Seang in 1970 sufficient to require medevac to Japan with a severe leg wound from taking a grenade at about 4 feet while leading an assault on an NVA position. He was awarded three Silver Stars if the third one ever caught up with him–he certainly never searched it out.

with a dog in Vietnam

explaining something in Vietnam

Dad wasn’t one to define himself by military glory days, though. I think the adventure in Vietnam was just one more adventure, preceded by others, and he continued throughout his life.

One of the themes running through Dad’s life was his dislike for convention, and for people who were too conventional in his eyes. He wasn’t afraid to name his son Havoc, for example. He loved revolutionaries and adventurers of all stripes, right-wing or left-wing. His military dogtags list his religion as “animist.” As we were growing up, he had nothing to say about religion one way or the other; he felt we ought to figure it out for ourselves. That was another of his parenting philosophies: he wasn’t going to tell us what to think. The fastest way to earn Dad’s contempt was to have an opinion just because other people had it, or to have an ignorant opinion because you hadn’t read enough books.

Another quick way to earn contempt was to be unprepared or incompetent. We had to have enough equipment at all times; I still have a basement full of equipment and a house full of books. Some old friends may remember laughing about my pile of assorted axes and hatchets. Dad could never remember for sure whether I had enough, including the several necessary varieties, so he’d send another one along every so often.

Whenever we got into some activity, whether cycling or leatherworking or hunting or backpacking, we’d end up with several times more equipment for that activity than we could ever use, as Dad tried everything out to be sure we had what worked best. We’d also have a complete library of books on the topic. And we got into a lot of activities.

Dad loved anything he thought was neat, which included most animals. We had a lot of crazy pets, from a squirrel to a 500-pound wild hog. As I’m looking through old photos, he’s always hanging out with a dog.

With a hunting hound

With another hound

It turned out that he more or less killed himself with cigarettes. He’d always rationalized bad habits saying he didn’t want to get old and dependent anyway, but in the end I think he’d rather have lived to see his grandchildren grow up. He died at home with family and friends, and was only confined to bed for his last day or two.

When he died my own son was six months old, and I stood outside the house where I grew up and hugged my son for all I was worth. Whenever I start to think about my son knowing my Dad, learning some of the things I learned as a kid, that’s what brings on the tears. I wish we’d had some adventures with the three of us.

I know my son and I will have some adventures anyhow, and I’ll think about Dad every time, and tell my son what advice Grandpa would have had, as best I can remember it.

At Horace Kephart's grave in Bryson City, September 2009


Individual mandate and the power to tax

I happened to stumble on this Yale Law Journal Online article yesterday. I hadn’t realized what the court cases about the individual mandate were (at least in part) arguing.

Credits and penalties

The individual mandate is structured as a tax penalty (i.e. an extra tax hike) if you don’t have health insurance. Those of us who have lived in Massachusetts know what it might look like, since it’s modeled on “Romneycare.”

The thing about a tax penalty, as far as I can see, is that it’s economically identical to a tax hike plus a tax credit. That is, say I’m the government and I want people who do not buy child care to pay $500 more tax than people who do. I can either raise taxes across the board by $500 and then offer a $500 credit if you buy child care, or I can add a $500 penalty if you don’t buy child care. Either way, if you don’t buy child care, you pay $500 more than before, and if you do buy it, you pay the same as before.

Certainly the credit and the penalty are a different “spin” – I’m sure people have a different reaction if the tax forms say “you must do this or pay a penalty” vs. “you can do this to get a credit.” There’s a psychological difference. But if people were completely rational and didn’t look at the wording, the fact is that it doesn’t matter to their pocketbook what the tax forms call the rule. The rule is simply “you pay less if you buy X and more if you don’t.”
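The equivalence is just arithmetic; here’s a toy sketch with made-up numbers (the $10,000 base tax and $500 amount are invented for illustration, not real tax figures):

```scala
// Toy illustration: a penalty for not buying insurance is economically
// identical to an across-the-board hike plus a credit for buying it.
object TaxEquivalence {
  val baseTax = 10000.0
  val amount  = 500.0

  // Framing 1: penalty if you don't buy.
  def withPenalty(buys: Boolean): Double =
    if (buys) baseTax else baseTax + amount

  // Framing 2: raise everyone's taxes, then credit buyers.
  def withHikeAndCredit(buys: Boolean): Double =
    if (buys) (baseTax + amount) - amount else baseTax + amount

  def main(args: Array[String]): Unit =
    for (buys <- Seq(true, false))
      assert(withPenalty(buys) == withHikeAndCredit(buys))
}
```

Either framing, the buyer pays $10,000 and the non-buyer pays $10,500.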

The tax code is already full of credits for buying stuff. Well-known ones include child care and the temporary first-time homebuyer credit. And… you can even deduct health insurance costs already.

The new law’s penalty for not having insurance is identical to raising taxes by the penalty, and then allowing anyone who has insurance an additional credit, on top of the existing deduction, equal to the penalty.

The penalty means that if you buy insurance, you pay less in taxes, partially offsetting the cost of the insurance. That’s all it means.

In the same sense, the tax code already requires you to buy insurance (if self-employed anyway). The new penalty increases the incentive somewhat, but there’s a tax incentive to buy insurance today.

(I realize there are complicating elements to how this works — phase-outs, credits vs. deductions, refundable vs. nonrefundable credits, etc. — but I don’t think they matter for this discussion.)

Some implications

  • There’s a claim in these cases that the government has never required people to actively buy a certain product. However, at least the enforcement of this requirement, i.e. a tax savings if you do buy, is precisely equivalent to all the credits and deductions you can already get for buying various things. Go into TurboTax and look at the credits and deductions available. You are “required” to buy all of that in exactly the same sense that you are required to buy health insurance under the new law — at least as far as enforcement goes. The punishment is the same: you pay higher taxes if you don’t buy.
  • In fact there’s a popular argument “can the government make you buy GM cars?” — and yes, there have been tax credits for buying certain kinds of car (hybrid, electric, whatever). Which means you pay more (you are penalized) if you don’t buy those cars. The government can, under current law, punish you through taxation if you don’t buy the right car.
  • Because there’s not an economic difference between the penalty and a hike+credit, and people don’t want to argue all the existing credits are unconstitutional, the legal argument in these court cases seems to be that it’s unconstitutional because Congress called it a penalty and not a tax. “No calling it a penalty when it’s a tax!” is some kind of grammar-nerd point, not something that should inspire throwing crates of tea into the harbor…
  • The Yale Law Journal Online article makes two points on this, first that the statute does call it a tax in many places, and second that Supreme Court precedent since the 1860s is that it doesn’t matter whether it’s called a tax or not, when judging constitutionality.

Laurence Tribe’s argument

Laurence Tribe predicted an 8-1 vote to uphold the individual mandate. He says:

There is every reason to believe that a strong, nonpartisan majority of justices will do their constitutional duty, set aside how they might have voted had they been members of Congress and treat this constitutional challenge for what it is — a political objection in legal garb.

He feels that the law is so clearly constitutional according to precedent, on both interstate commerce and tax power grounds, that the Court will have no coherent way to strike it down.

I see his point, based on the power to tax. Interstate commerce may be a fuzzier issue, I don’t know. But the government only has to have the power on one ground. If it’s constitutional using the power to tax, it’s constitutional.


There’s no need to post comments about whether the government should have a taxation power in the Constitution, or whether the health care law is a good idea, or any generic debate like that.

In this post I wanted to raise the issue of whether a tax incentive to buy insurance is constitutional following existing precedent, and whether it can be legally distinguished from other tax incentives (whether framed as credit or penalty) that involve buying particular goods and services. I don’t see where the distinction can be made. Anyone have any good theories?

I am not a lawyer, if you are one, please add your thoughts!

Update July 2011

The Sixth Circuit discussed this topic in their ruling, here’s a new post on it.

Update June 2012

The deciding vote from John Roberts was based on this same tax power argument.

Some stuff I like

As you may have noticed, this blog is a big old grab-bag of random topics. In that spirit, here are some products I enjoyed lately, that people may not have heard of. In several cases these products are from small companies and I think they deserve a mention for their good work.

I’m going to affiliate-link the stuff, because why not, so be aware of that if it bothers you.


GeekDesk

The computer geeks reading my blog have probably seen this, but for the rest of you, I’d recommend it if you do any kind of desk work.

GeekDesk lets you use a standing desk part of the day without committing to standing up always. It has a little motor so you can quickly raise or lower the desktop.

(Why a standing desk? Sitting down all day is really bad for you, even if you exercise daily.)

I’ve found that I almost always stand, now that I’m used to it. But it’s nice to have the option to sit.

I have the GeekDesk Mini which is still large, about 3 laptops wide by 2 deep.

When standing, a laptop screen is too low for me and requires hunching over, so I had to get a monitor with a stand, and then I had to pile some books under the stand. With the monitor, I can stand up straight and look at the screen directly.

You can also buy only the motorized legs and put your own top on the GeekDesk if you have a nice piece of wood in mind.

The desk has decent cable management, but I also screwed a power strip to the bottom of the desktop so only one power cord goes to the floor.


Quakehold

Quakehold is a removable museum putty that makes things stay put.

As the name suggests, one use of it is to keep stuff on shelves during an earthquake, and I’m guessing those of you who live in an earthquake zone already know about it. I didn’t know about it.

It’s useful in a duct-tape kind of way. Some examples in our house:

  • making our lamps harder for kids to knock over
  • keeping a power strip on the bottom of my GeekDesk from slipping off its mounting screws
  • sticking down a diaper changing apparatus to keep it from sliding around
  • keeping our toddler from sliding an anti-toddler fence across the room

In most cases you could also use duct tape, I suppose, but the putty is easier to remove without damaging surfaces, and avoids looking too There, I Fixed It.


SimpliSafe

SimpliSafe is an alarm system we installed in our house a few weeks ago, and it’s a Boston startup, for those of you in Boston wanting to support local companies.

I’m very impressed with the product, but boy was it hard to discover. I just Googled “alarm system” for example, and they aren’t in the ads and aren’t in the first 8 pages of organic results. (If you’re an SEO consultant you might want to get in touch.)

SimpliSafe uses completely wireless (battery powered) sensors that stick to your wall with 3M Command. When you get the system it’s preloaded with the codes for your sensors, so there’s no pairing process. All you do is pull the plastic tab blocking the battery from each sensor, stick it to the wall or put it on a shelf, and then plug a USB stick they provide into your computer. On the computer there’s a simple setup process to tell the monitoring service who to call and so on. After setting up, you put the USB stick in the base station to transfer the settings, and that’s it.

It takes about half an hour to install and set up. Maybe an hour if you’re the kind to read the (clear and excellent) instructions.

Here’s the comparison:

  • ADT: you have to talk to a salesperson on commission. They are selling a 3-year contract that auto-renews if you don’t cancel in time, and it costs almost $50/month if you get cellular monitoring. The up-front equipment can be expensive (they have free or cheap packages, but those don’t include what you probably need).
  • SimpliSafe: you order online and self-install in half an hour. There’s no contract, and cellular monitoring is $15/month. No need for a land line. The up-front equipment is reasonably-priced.

As a middle ground, I guess there’s a whole community out there of people who roll their own alarm and home automation systems, and it looks possible to get a lot cheaper than ADT that way as well. However, it looked way too time consuming for me. I think you can also switch your ADT equipment over to a cheaper monitoring service, at least after your 3 year contract expires if you catch it prior to autorenew.

SimpliSafe‘s industrial and interaction design are great. The web UI is simple, and everything is pre-configured as much as possible. (For example, the base unit already knows about your sensors when you get it.) They really thought through the whole experience.

The product is marketed for apartments (because there’s no permanent installation), but it seems to be fine for our house. If you live in a large enough place or have metal walls, it may not work for you.

Other possible downsides:

  • I’m guessing the system uses ZigBee or something similar, but they don’t say, and they don’t claim to support any sensors they don’t sell. Basically it isn’t an open system, you have to buy components from them.
  • They only have basic sensors right now, for example no fire alarm or glass break detector yet, though they say they will in the future.
  • There’s no “home automation” stuff, it’s purely a burglar alarm.

By the way, in researching this two other interesting companies I saw using low-power wireless were VueZone and AlertMe (UK only). I have not tried either one, but they seem to be similar in spirit to SimpliSafe (i.e. low-power wireless technology with decent industrial design).

SLS-Free Toothpaste

This product won’t be relevant to everyone, but if it is and you don’t know about it, you might thank me.

At the risk of too much information, I used to get canker sores, anytime I bit my lip or flew on an airplane or just had bad luck. These would make me miserable and grumpy for days, every other week or so. (Yeah, more grumpy than usual, haha.) While not life-threatening, it was unpleasant.

Now I use some stuff called Biotene which has been essentially a miracle cure. Instead of being in pain on a regular basis, I never have a problem. It isn’t some placebo effect “maybe it’s a bit better” kind of thing, it’s a change from “have constant chronic problem for years” to “never have the problem at all.” If I travel or something and don’t use the miracle toothpaste, I can get a canker sore again, but on resuming the toothpaste it will clear up.

Biotene claims to have magic enzymes. I don’t know if the enzymes do anything, or if it’s primarily the SLS-freeness that works. You may have luck with other toothpastes as well. Anyway, I pay for my overpriced toothpaste and it is worth every penny. Most drugstores, Target, etc. carry it.

GoGo Babyz Travelmate car seat wheels

The biggest problem here is the name, GoGo Babyz Travelmate. Bad name.

If you take your baby or toddler on a plane, this eliminates the need for a stroller, giving you one less thing to check and one more free hand. If you’ve taken a baby or toddler on a plane, you understand the value of that.

The gadget adds roller-bag wheels and handle to your car seat, so you can push or pull your kid like a roller bag. Comical but it works.

We gate-checked the GoGo Babyz, but I think you could get it in the overhead bin especially if you pop the wheels off (which is pretty easy).

Keep the JVM, dump the rest (Scala+Play+MongoDB)

I decided to try three new things at once and get back into some server-side development.

  • Scala uses the JVM for the hard work, but feels more like Ruby or Python. It’s statically-typed, but for the most part infers types rather than making you type them in.
  • Play ignores the Tomcat/J2EE legacy in favor of a more modern approach — “just press reload” to see changes.
  • MongoDB dodges the ORM problem by storing objects in the first place. People like JSON, so the database should store JSON. Done!

I was curious whether I could lose all traces of Enterprise Readiness and use the JVM with a nice language, a nice web framework, and no annoying SQL/ORM gunge.

TL;DR Answer: Promising, but lots of rough edges.

Some disjointed thoughts follow.


Before starting, I read Programming in Scala and MongoDB: The Definitive Guide. I also read the free Programming Scala, but I’d recommend the Odersky book instead.

I like Scala a lot.

If you aren’t familiar with it, Scala is a pragmatic language design.

  • it compiles to Java bytecode and runs on the JVM. Thus, no headaches for operations team, no need to port it to new architectures, quality JIT and GC, etc. A Scala app looks like a regular Java jar/war/ear/*ar.
  • you can use any existing Java library from Scala with no special effort, just import it.
  • it’s statically typed, but sophisticated generics and type inference keep the code looking clean and concise.
  • it supports both functional and OO styles, sanely mixing the two.
  • it has multiple inheritance (mixin traits) that works properly, thanks to linearization.
  • it has a basket of features that save typing, such as pattern matching and automatically creating accessors for fields.
  • language features give you ways to avoid Java’s preprocessor/reflection/codegen hacks (think aspect-oriented programming, dependency injection, Kilim, and other extralinguistic hacks)
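A few of these features in one short sketch (standard Scala, nothing specific to the app described in this post):

```scala
// Case classes give you accessors, equality, and pattern matching for free;
// types below are inferred rather than spelled out.
sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Rect(w: Double, h: Double) extends Shape

object Demo {
  // Pattern matching replaces instanceof/cast chains.
  def area(s: Shape): Double = s match {
    case Circle(r)  => math.Pi * r * r
    case Rect(w, h) => w * h
  }

  // Functional style (map, sum) over plain objects; `shapes` needs no
  // type annotation.
  val shapes = List(Circle(1.0), Rect(2.0, 3.0))
  val totalArea = shapes.map(area).sum
}
```

The equivalent Java would need getters, equals/hashCode, and a visitor or cast chain to do the same work.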

While C# fixes Java by adding a bunch of features but keeping a pretty similar language design, Scala fixes Java by trying to bring the conciseness and fun people find in Ruby, Python, Haskell, or whatever onto the JVM.

There’s a popular controversy about whether Scala is too complex, and just reading about it I had the same fear.

  • I found I could get a web app working without hitting any complexity hiccups. You don’t have to design a DSL or understand variance annotations to write a web app.
  • I didn’t get any mystery error messages of the kind I used to get all the time in the early days of C++.
  • I’m not sure it’s more complex than Java if you include the usual stack of extralinguistic hacks in Java, and I doubt Scala is a larger language than C#.

Scala’s Option type wasn’t very helpful.

Scala has a thing called Option, which I believe Haskell calls a Maybe type. It’s a container that holds a single value or else nothing. While Scala has null (for Java compatibility), Scala encourages the use of Option instead, to indicate clearly in the type system whether a value can be missing.

In a web app, this was often used to check whether data store lookups returned anything. And in most cases I wanted to show an error page if not, accomplished in Play by returning a special object from the controller method.

While I’ve read posts such as this one, I couldn’t find a great pattern for using Option in this context. It was leaking up the call stack like a checked exception (code using an Option value had to return another Option value and all the way up). I found myself wanting to do:

val foo = maybeFoo.getOrThrow(WebErrorPageException("whatever"))

then I wanted the framework to globally catch that (unchecked) exception and show an error page. For all I know Play has such an exception. You can unconditionally Option.get, but that will turn up as an internal server error, which isn’t nice.

Without using exceptions, Option tended to produce nested if() (or pattern match) goo just like null, or creep up into calling functions like a checked exception.
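The wished-for helper is easy to sketch yourself; note that getOrThrow and WebErrorPageException are invented names from this post, not part of Play or the Scala standard library (and the implicit-class syntax needs Scala 2.10+; older versions would use an implicit def conversion):

```scala
// Hypothetical: an exception a framework could catch globally and turn
// into an error page instead of an internal server error.
class WebErrorPageException(msg: String) extends RuntimeException(msg)

object OptionHelpers {
  // Pimp Option with getOrThrow so missing values stop leaking up the
  // call stack like checked exceptions.
  implicit class RichOption[A](opt: Option[A]) {
    def getOrThrow(e: => Throwable): A = opt.getOrElse(throw e)
  }
}

// Usage in a controller action (sketch):
//   import OptionHelpers._
//   val foo = maybeFoo.getOrThrow(new WebErrorPageException("not found"))
```

The framework would still need a global handler mapping WebErrorPageException to an error page; without that piece this is just Option.getOrElse with a throw.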

Play itself is very polished, but Play with Scala is not.

Play’s big win, which you get with Scala too, is to bring “just press reload” development to the JVM. You change either code or templates, then go to your browser and test. There’s no pressing build or even waiting for Eclipse to build, and there’s no creating a war file and deploying it.

Play brings other niceties, using convention rather than configuration (in the Rails style). Its template engine is much nicer than JSP/JSF/etc. ugliness, and you get quick and easy mapping from URLs to controller methods. If you’ve used Rails or Django or whatever, Play will feel pretty normal. But this normal is a prettier picture than the traditional Tomcat/JSP/blah setup.

Originally, Play was for Java only. The Scala support is marked experimental, and I hit some issues such as:

  • The docs are spotty and sort of outdated.
  • The docs talk about using JPA (Hibernate) with Scala, but Googling around I learned it’s broken and not recommended. Some of Play’s magic convenience is lost if you aren’t using Play’s database and ORM. I started out trying to use JPA with SQL, but when I learned I’d be manually doing DB stuff anyhow, I switched to MongoDB.
  • Some bug in the MongoDB driver makes it break if you also use Joda Time.
  • The template engine sometimes couldn’t find object fields from superclasses (my guess, I haven’t tested yet, is that adding @BeanProperty would work around this).
  • File upload support only works via a temporary file (a File object) and not streaming.
  • Some hacky-feeling conversion from Scala to Java collections/iterators was sometimes needed for the benefit of the template engine.

Little bugs aside, I did get a simple app working and I enjoyed it a lot more than writing in Java.

The stack isn’t nonblocking by default as node.js is.

Play supports suspending an HTTP request and returning to it later, without tying up a thread. Scala, especially in combination with Akka, offers some nice primitives for concurrency while avoiding shared state.

node.js gets huge mileage because the core platform defines what the “main loop” looks like and has primitives for a nonblocking file descriptor watch, nonblocking timeout, and so on. All libraries and modules then work within this framework.

In theory, Play and Akka would let you do the same thing, but since it isn’t the default, you’re going to suffer. You would have to manually write little stub controller methods that always suspended the request and forwarded it to an actor. And your persistence layer probably has a blocking API (such as MongoDB’s blocking API) that would need another bunch of glue code and a thread or actor pool to encapsulate. It’s even an issue, though a simple one, that Akka doesn’t come with Play or Scala, and the Play module for it seems to be well behind the latest Akka version.

I suspect that in a real-world project, you’d start finding blocking IO all over the place, hidden in every third-party jar you tried to use.

It would be great if I could write an actor, annotate it as a controller, and it would receive web request messages and send back web reply messages. Similarly, it would be great if there were a MongoDB-via-actors kind of nonblocking API.

Play’s philosophy is that most controller methods should block because they should be fast; this is likely true, but it’d be nice if the framework overengineered it for me. BTW, here’s a comparison of Play and Node.js.

(I can’t decide whether I like the idea of this Play/Scala/JVM stack or node.js more. I think I like both. The simplicity of node.js, the nonblockingness, and the same language from client to server, make it appealing. But sometimes I like the idea of JVM solidity, the full-featured statically-typed loveliness of Scala, and the giant ecosystem of Java tools and libraries.)

Eclipse support isn’t great yet.

I’ve heard the Scala plugin for Eclipse was recently improved, and honestly I’m not sure whether I have the important improvements in the version I’m using. But it was not great.

  • I had an error “class scala.annotation.implicitNotFound not found” that just wouldn’t go away, though it seemed harmless. Suspect that it’s some version mismatch.
  • Perhaps because Scala is less verbose and redundant than Java, it’s much easier to get the editor confused about your code.
  • Autocomplete/intellisense/whatever-you-call-it rarely worked, the IDE didn’t seem to know what methods my variables had most of the time.
  • Refactorings and code generation shortcuts you’d be used to from Java weren’t there (though they were also far less necessary).

All this said, with Play and Scala you’d be fine with an editor instead of an IDE. Working autocomplete would be nice, though.

MongoDB is the Right Thing.

I love MongoDB (disclaimer: I haven’t deployed it, only coded to it). From a development perspective, it’s exactly what I want.

If everyone wants objects anyway, and in particular they want objects with JavaScript’s type system since they’re eventually going to display in a browser (or use node.js), then the data store should freaking store objects! Figure it out for me.

If the data store’s native representation matches the form I really want my data in, then it should (at least eventually) be able to store and query that data intelligently. JPA/Hibernate is a very, very leaky abstraction. I don’t need part of the storage engine inside my app, trying to convert what I really want into SQL. Then the SQL engine has to optimize without high-level information.

As far as I know, MongoDB is the “NoSQL” thing that adapts to how my app works, instead of vice versa. Nice.


There’s a lot of promise here, but if I were building a real app on this stack, I’d be tempted to hack on the stack itself a fair bit (always dangerous!). Play defaults to Java+SQL, and Scala+MongoDB isn’t the well-traveled path.

Disclaimer: I’ve done a lot of Java in the old school Tomcat/JSP/Hibernate style, but Scala, Play, and MongoDB are new to me. Your corrections and elaborations in the comments are very welcome.

Why I hope my kid won’t like The Phantom Menace…

… because it’s a terrible movie, but I have one other reason.

We have a young child and I read parenting books. More than one talks about agency and effort. If you emphasize innate attributes rather than choices and habits, people get messed up.  They value themselves in terms of something they have no control over.

In real life, effort gets more reward than inborn attributes. (There are studies on it, aside from common sense.) Believing that what you do matters more than who you are is a freeing idea. It creates optimism that it’s worth trying and learning, rather than pessimism that you and the world are what they are.

Who knows if parents can affect how children think about these things, but one can hope.

Like a lot of nerds, I enjoy science fiction and fantasy. These books and movies tend to involve heroes, frequently young, who save the world or some such.

Consider some classics everyone knows. In The Lord of the Rings, the hero’s virtue is perseverance. The book hammers you with just how long it took to walk across Middle Earth. (A good movie version had to be 3 movies.) Frodo doesn’t have any special talents, other than finishing the journey. Even then, he fails at the end and has to be rescued by luck.

In the original Star Wars trilogy, sure the force is strong with Luke, but he has to do a bunch of training, and when he leaves Yoda without enough practice he gets his hand chopped off.

Not exactly fantasy, but take Seven Samurai. A bunch of old pros illustrating character and experience as they save a village, with one young samurai bumbling along for the ride. Some of them get killed.

Now consider some less-classics. In The Phantom Menace, an annoying kid saves the day more than once, using his inborn scientology midi-chlorians. Even though he’s a little punk, everyone praises his midi-chlorian count. No wonder he turned out to be evil.

I recently finished, and didn’t enjoy, The Name of the Wind, in which some kid is the best at everything without doing any work, and while having no character at all. (I could go on about other problems with this book; let’s just say the praise it gets seems baffling. Forgive me, I know it has a lot of fans.)

Aside from a bad message, there’s no interesting story in these. Someone is born special and then they do special things and … whatever. Where’s the meaning? To me that isn’t a good story. Jar Jar is an extra insult – the real problem is bad story and characters.

I’m still debating how Harry Potter fits in to my argument.