Monday, January 15, 2007

What Matters--the Expected or Unexpected?

I encourage you to read Cognomad's comment on my "On Intelligence II" post below (as well as checking out his blog). I was about to just respond to the comment, but the thing that caught my eye seemed to deserve a new entry. First--to Cognomad--I enjoyed your blog...thanks for coming by and commenting.

Cognomad seems to have far more physiological background in this stuff than I do, so I'll stay at the model level. If we accept any flavor of Hawkins's "escalation" approach--that new perceptions get "noticed" or "passed upward" in the brain based on something about those perceptions--it's a pretty fundamental question whether the brain notices and passes upward the expected or the unexpected. Hawkins asserts the unexpected; Cognomad asserts the expected. It's a tough question because seeing the difference physiologically is way beyond us for now (hmm, a neural packet sniffer?) AND I suspect the same sort of Ptolemaic/Copernican thing is going on. Remember learning in high school science that Ptolemy said the earth was the center of the universe and Copernicus said we went around the Sun? I remember being interested at the time that the Ptolemaic model--with a few tweaks--was apparently working just fine for helping ships navigate. So the question was still important, but on a practical basis it didn't matter. For a while at least, I bet the same is true here: a good model (i.e. one that is pretty good at predicting outcomes) might be possible assuming either expected or unexpected things get escalated in the brain.

Anyway...I'm still on the Hawkins side that "unexpected stuff gets passed upward." My current pondering is whether the underlying algorithm might be something like a default binary test of "this doesn't matter." Then, when a perception breaks that default (i.e. it does matter because it's not only different, but different in a way that seems important), it gets passed upward. Arguably, this just raises the bigger question of how "importance" is measured. But at least it gets me going thinking about it....
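To make that a little more concrete, here's a minimal sketch in Python of how such a default test might look. Every name, weight, and threshold is invented for illustration; this is the shape of the idea, not a claim about how the brain actually measures importance.

```python
# Toy sketch of a default-to-"this doesn't matter" escalation test.
# All names, weights, and thresholds here are hypothetical.

def matters(perception, prediction, importance, threshold=1.0):
    """Default answer is "no": escalate only if the perception differs
    from the prediction AND differs in a way weighted as important."""
    difference = abs(perception - prediction)
    return difference * importance > threshold

def process(perception, prediction, importance):
    if matters(perception, prediction, importance):
        return "pass upward"     # different in a way that matters
    return "handle locally"      # the default: this doesn't matter

# A large but unimportant difference stays local...
print(process(perception=5.0, prediction=1.0, importance=0.1))  # handle locally
# ...while a smaller but important one gets escalated.
print(process(perception=2.0, prediction=1.0, importance=2.0))  # pass upward
```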

Cognomad makes a case in his/her comment that escalating the unexpected would get bogged down by random noise. But given how long the connections between neurons can be, I'm not sure this causes the implied problem. That is, it seems plausible that direct connections between neurons at different levels could--once the escalation is established--bypass lots of levels to support the speed of thought (a toy sketch of what I mean follows below). But that's just guessing. And although this is another topic (sorry), I also wonder whether this same mechanism might address Cognomad's comment about "deep" hierarchies being slower. In fact, this seems to be exactly what Gladwell, in Blink, called the benefit of expertise: lots of connections offering a deeply textured understanding of a topic allow extremely fast, accurate conclusions.
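Here's that toy sketch of the "bypass" idea. The structure and the numbers are made up, and this is certainly not a claim about real neurons:

```python
# Toy hierarchy with learned shortcuts (structure and numbers invented).
# A novel signal climbs level by level; once the escalation path is
# established, a direct cross-level connection answers in one hop.

class ToyHierarchy:
    def __init__(self, levels):
        self.levels = levels    # intermediate levels a novel signal climbs
        self.shortcuts = {}     # signal -> already-established response

    def perceive(self, signal):
        if signal in self.shortcuts:        # established path: one hop
            return self.shortcuts[signal], 1
        response = f"response-to-{signal}"  # novel: climb every level
        self.shortcuts[signal] = response   # lay down a direct connection
        return response, self.levels

h = ToyHierarchy(levels=6)
print(h.perceive("tiger"))  # first encounter: ('response-to-tiger', 6)
print(h.perceive("tiger"))  # thereafter: ('response-to-tiger', 1)
```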

2 comments:

Boris Kazachenko said...

Thanks for the welcome & the link; great to see someone raising the big questions.
I am not an expert in neuroscience. The human brain is an accident of evolution, full of irrelevant baggage & biases on all levels. I am more concerned with how "ideal" intelligence should work.

My disagreement with Hawkins is somewhat more subtle. It's obvious that there's no sense in resending upward the info that's already there. Rather, the question is how that info is selected for elevation (sorry, I like it better than "escalation":).

Hawkins says the patterns/features/concepts are selected for elevation by "invariance". I call it recurrence, generality, accumulated match, or predictive power: the same thing in different contexts. These patterns must be sufficiently "expected" on their own level but "unexpected" on the higher, destination level. This means that higher-level patterns, having passed through multiple levels of selection by recurrence, are more general (& less novel) than those on lower levels.
So, what I object to is that Hawkins is trying to have it both ways. It seems obvious to me that if the purpose is to predict, which can only be done by projecting past recurrence, then such recurrence is what the hierarchy must select for.

The interesting point is that change does appear to be valuable too. However, I believe we're dealing with a "contrast" effect here: the value of a change is determined by, & subtracted from, the recurrent pattern it interrupts. In other words, change has "negative" value; it's important only to the extent that it cancels the positive predictive value of the interrupted pattern.
A change within noise does not interrupt any pattern & has no independent value.
I don't know if you want to get into precise economics of the matter :).
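But in case you do, here's one toy way to write it down. It's a rough sketch; every quantity and function name is invented just to show the shape of the idea, not a worked-out formulation.

```python
# Toy "economics" of recurrence vs. change -- all quantities invented.

def accumulated_match(matches):
    """A pattern's positive value: match accumulated across contexts."""
    return sum(matches)

def change_value(change_magnitude, pattern_value):
    """Change has negative value, but only to the extent that it cancels
    the positive predictive value of the pattern it interrupts. Within
    noise there is no pattern to interrupt, so the change is worth nothing."""
    return -min(change_magnitude, max(pattern_value, 0.0))

strong = accumulated_match([3, 3, 3, 3])  # a recurrent pattern, value 12
print(strong + change_value(2, strong))   # 10: interrupted, still valuable
print(0.0 + change_value(2, 0.0))         # 0.0: change within noise
```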

By random noise I don't necessarily mean noise generated by the brain itself; the environment (mediated by the senses) is full of it too. In fact, the difference between noise & a pattern is a matter of degree & is subject-dependent. I don't see how the length of connections makes any difference here.

Regarding depth vs speed of a hierarchy:
Cross-level shortcuts would in effect "flatten" the hierarchy, bypassing the filtering function of the intermediate levels. It's a great way to speed things up after learning, but it would impair future learning ability by reallocating resources from generalization (critical thinking) to retrieval (as in an expert system). Perhaps that's what happens to the experts.
Actually, another trade-off is that a lower critical recurrence per level would allow one to build a deeper hierarchy faster, at the expense of selectiveness (less critical thinking).

BTW, call me a sexist, but I thought my gender was an easy guess. I've never heard of a woman seriously & independently interested in this kind of generalization (I'd love to be proven wrong). Also, I've read somewhere that women have a higher proportion of white to grey matter than men :).

Todor "Tosh" Arnaudov - Twenkid said...

Cool blog - "we are not alone"!

cognomad, I know one woman who is very advanced... and she was trying to discuss this type of high-level AI stuff back in 2004.

 