In studies, it turned out that a single neuron fired in test subjects when they were asked to recognise famous people. Each time the subjects were shown various pictures of Halle Berry, the same neuron fired. The same neuron also fired when they viewed the sequence of letters 'H-A-L-L-E-B-E-R-R-Y' or drawings of the star.
It turns out that the brain uses a whole lot fewer neurons to store information than was previously thought - not like bits in an array but more like individual computers in a network.
Now, I've done absolutely no study into this at all, and have no qualifications in the field (or any field, for that matter) but here's my half-arsed hypothesis.
Perhaps the brain's awesome power to store and recall things is tied directly to its ability to imagine and create. Maybe when I'm recalling a picture of Halle Berry in a Catwoman costume, I'm actually just triggering a switch that asks for the recall to occur, and then a few neurons get together to imagine and create the recalled information based on some complex imagination theory. Throw in some ego and self-esteem for good measure, and I've got a pretty good recollection of what I saw.
I often feel that when I'm going about my day-to-day life, I'm really only seeing and analysing a very narrow field of view. My peripheral vision, all the things that I think I can see, seems to be pre-rendered in my head based on some time in the past when I looked at them. Of course, as soon as I look at anything in my peripheral vision, I end up focusing my attention on it, and everything else becomes peripheral. It turns out that neurologists call this the Attention Window (I learned about it here), and that the image you see is generally built up over time, not rendered out all at once like in an FPS. That explains why sleepwalkers can go adventuring in their sleep without falling over stuff.
All this reminded me of how lame computers are by comparison, and of the eventual doom of metadata. Someday, people are going to think that it's cute that we spent so much time telling computers how to describe their data.
Eventually, a computer will be able to derive as much information from something as we can. It will be able to look at a webcam image and then say "The weather looks crap today", probably by firing a few silicon 'neurons' and 'imagining' the result. The metadata is inferred, not explicit.
Meanwhile, we're all busy tagging everything so a computer can understand it. Nowadays, information management is all about explicit metadata, often for elements that aren't meta at all - they're just data:
<report>
  <datestamp>2000-09-01</datestamp>
  <station fullname="San Jose" abbrev="KSJC">
    <latitude>37.3618619</latitude>
    <longitude>-121.9290089</longitude>
  </station>
  <temperature>
    <min>-5</min>
    <max>10</max>
    <forecast-low>0</forecast-low>
    <forecast-high>11</forecast-high>
  </temperature>
  <wind>
    <speed>5</speed>
    <direction>NNW</direction>
  </wind>
</report>
# forecast_high pulled from the <forecast-high> element above
if forecast_high < 18
  puts "The weather is crap today"
end
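In case you're wondering where forecast_high comes from, here's a rough sketch of the plumbing, assuming the report above is saved in a file called report.xml (the filename is made up) and using REXML from Ruby's standard library:

require 'rexml/document'

# Parse the explicitly-tagged report and dig out the one number we care about
doc = REXML::Document.new(File.read('report.xml'))
forecast_high = doc.elements['report/temperature/forecast-high'].text.to_i

Note that I still had to spell out the exact path to the element by hand - the computer isn't inferring anything here.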
Your brain doesn't need all that stuff tagged. If you just see the report, you'll figure out what's what. Surely someday a computer will do it reliably too.
Meantime, on with the tagging...
Hi Sam,
I agree that all that data is required - but then I don't think of those key things as meta - they're actually the data itself. We're always going to have to tell a computer what to compute somehow. The real thing my example lacked was a DTD or schema definition...
With regards to the AI problem, I'd like to think that sophisticated algorithms and elegant thinking would win out over brute force (It makes us so much more interesting!) but your chess example certainly points the other way.
As someone who spends far too much of their life poking around with metadata, I think I probably have a hidden agenda when it comes to its demise.
Of course, it will all get really complicated when computers start having hidden agendas...