This is what I wanted to do when I was in school – take a
reverse-engineering pragmatic approach to figuring out how to copy
human intelligence. I couldn’t figure out how to do it in an academic
context though. Most people seemed to be doing research where they
presupposed questionable-sounding theories (the cognitive science
“mind as a bunch of processing modules” or AI “mind as Prolog”/”mind
as logic engine” nonsense). After presupposing everything interesting
they would design experiments to test little nitpicky aspects of it.
The day-to-day life of a grad student looked sort of horrible to me.
So I figured software development is a lot more fun day-to-day, and
even if working on intelligence is interesting, it’d be preferable to
come back to it in some other way later in life.
I’m pretty interested in this Numenta thing then. My guess is that
we’ve decided AI is more star-trek-impossible than it is because of
all the people who’ve approached it in the wrong way. If Numenta gets
even a basic, highly limited version of their concept working, a whole
class of applications that were computationally intractable will open up.
The reason I think Hawkins is more likely to be right than some of the
past train-wreck AI attempts is that his theories have a hope of
explaining actual conversations, creativity, culture, etc. I studied
anthropology/linguistics/psychology, and it always struck me that the
psychology theories of mind had no chance of explaining the world as
documented by anthropology.
My favorite reading in school was a monster called The
improvisational performance of “culture” in realtime discursive
practice, notable for its awe-inspiringly complex and pretentious
writing style (starting with the title). Once parsed, it turns out to
contain a theory of why and how people talk. It spells out the theory
by explaining a single conversation and why that conversation happened
as it did.
If you take this one conversation as an example, the general direction
of Hawkins’s AI thinking has a hope in hell of replicating the
conversation, and the general direction of a lot of other AI thinking
has no hope.
The other thing I like about Hawkins is that he doesn’t seem to be
religious. A lot of people studying intelligence seem to have gone
into it with an emotional investment in the value of
Of course, I never knew that much about all this, and anything
I did know is now almost a decade out of date… plus I’ve forgotten
most of it. So don’t invest in Numenta on my advice. But I’m hoping it
will turn out to be interesting.
(This post was originally found at http://log.ometer.com/2005-04.html#10)