Sunday, June 26, 2005

We are the Taggers.

How cool is this: a single human brain cell can recognise a person.

In studies, it turned out that a single neuron fired in test subjects when they were asked to recognise famous people. Each time the test guys were shown various pictures of Halle Berry, the same neuron fired. The same neuron also fired when viewing the sequence of letters 'H-A-L-L-E-B-E-R-R-Y' or when viewing drawings of the star.

It turns out that the brain uses a whole lot fewer neurons than was previously thought to store information - not like bits in an array, but more like individual computers in a network.

Now, I've done absolutely no study into this at all, and have no qualifications in the field (or any field, for that matter) but here's my half-arsed hypothesis.

Perhaps the brain's awesome power to store and recall things is tied directly to its ability to imagine and create. Maybe when I'm recalling a picture of Halle Berry in a Catwoman costume, I'm actually just triggering a switch that asks for the recall to occur, and then a few neurons get together to imagine and create the recalled information based on some complex imagination theory. Throw in some ego and self-esteem for good measure, and I've got a pretty good recollection of what I saw.

I often feel that when I'm going about my day-to-day life, I'm really only seeing and analysing a very narrow field of view. My peripheral vision - all the things I think I can see - seems to be pre-rendered in my head, based on some time in the past when I actually looked at them. Of course, as soon as I look at anything in my peripheral vision, I end up focusing my attention on it, and everything else becomes peripheral. Turns out that neurologists call this the Attention Window (I learned about it here) and that the image you see is generally built up over time, not rendered out all at once like in an FPS. That explains why sleepwalkers can go adventuring in their sleep without falling over stuff.

All this reminded me of how lame computers are by comparison, and of the eventual doom of metadata. Someday, people are going to think it's cute that we spent so much time telling computers how to describe their data.

Eventually, a computer will be able to derive as much information from something as we can. It will be able to look at a webcam image and then say "The weather looks crap today", probably by firing a few silicon 'neurons' and 'imagining' the result. The metadata is inferred, not explicit.

Whereas we're all busy tagging everything so a computer can understand it. Nowadays, information management is all about explicit metadata, often for elements that aren't meta at all - they're just data:

<report>
  <datestamp>2000-09-01</datestamp>
  <station fullname="San Jose" abbrev="KSJC">
    <latitude>37.3618619</latitude>
    <longitude>-121.9290089</longitude>
  </station>
  <temperature>
    <min>-5</min>
    <max>10</max>
    <forecast-low>0</forecast-low>
    <forecast-high>11</forecast-high>
  </temperature>
  <wind>
    <speed>5</speed>
    <direction>NNW</direction>
  </wind>
</report>


forecast_high = 11   # the <forecast-high> value from the report above
if forecast_high < 18
  puts "The weather is crap today"
end

Your brain doesn't need all that stuff tagged. If you just see the report, you'll figure out what's what. Surely someday a computer will do it reliably too.
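To make that concrete, here's a rough sketch (the report string and the patterns are entirely made up for illustration) of a program pulling the same facts out of an untagged, human-style report with nothing but pattern matching - the 'metadata' is inferred from the shape of the text rather than declared up front:

# A rough sketch: inferring the facts from an untagged, plain-English report.
report = "San Jose (KSJC), 1 Sep 2000: low of -5, high of 10, " \
         "forecast 0 to 11, wind NNW at 5."

forecast_high  = report[/forecast \S+ to (-?\d+)/, 1].to_i   # => 11
wind_direction = report[/wind ([NSEW]+)/, 1]                 # => "NNW"

puts "The weather is crap today" if forecast_high < 18

Fragile, obviously - but nothing told the program which number was which; it worked that out from context, which is roughly what your brain does when it reads the raw report.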

Meantime, on with the tagging...

2 comments:

  1. Hi Gordon,

    Interesting post.

    It's not true to say that humans don't need the meta-data though.

    In your example XML, almost all of that meta-data (min, max, speed, direction etc.) would be required by a human reader (datestamp and report perhaps not), although the syntax in which it might be expressed would be much more flexible.

    (Although not as flexible as people might think: how much can you play with a date before humans can't recognise it as a date? What about human names? Our approach to recognising names as names seems basically to be a list of previously verified names.)

    I think that with AI in general, sophisticated algorithms and knowledge representation will turn out to be far less important than information size and processing speed.

    Back when chess programs played poorly, people theorized about the nuanced, abstract strategic approaches that humans were using - the relatively brute-force approach of the programs would never beat them.

    It turned out that a sufficiently powerful brute-force chess algorithm is close to indistinguishable from a 'nuanced abstract strategy' when analysed (and no doubt within 5 years humans will not be able to tell a game played by a human from one played by a computer). The programs won.

    I suggest that computer programs will appear 'intelligent' and be able to deal with meaning rather than form when they have a massively increased knowledge base to work on at massively increased speeds.

    Obviously the data capacity of a human brain (even if you assume a 1-bit-per-neuron equivalence, which your article suggests is probably unfair to the neuron) is vastly superior to even our largest computers' data banks.

    And even the fact that neurons fire dramatically more slowly than a computer processor may be deceptive, given the massively parallel architecture of the brain.

    Given that a truly massive set of data, the internet, is now available to play with, we may see some real breakthroughs.

    That's my two cents worth :)

  2. Hi Sam,

    I agree that all that data is required - but then I don't think of those key things as meta; they're actually the data itself. We're always going to have to tell a computer what to compute somehow. The real thing my example lacked was a DTD or schema definition...
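    Just to show what I mean, a bare-bones DTD for that report might look something like this (sketched for illustration, not taken from a real schema):

    <!ELEMENT report (datestamp, station, temperature, wind)>
    <!ELEMENT station (latitude, longitude)>
    <!ATTLIST station fullname CDATA #REQUIRED
                      abbrev   CDATA #REQUIRED>
    <!ELEMENT temperature (min, max, forecast-low, forecast-high)>
    <!ELEMENT wind (speed, direction)>
    <!-- ...and the leaf elements (datestamp, latitude, min, max etc.) are all just #PCDATA -->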

    With regards to the AI problem, I'd like to think that sophisticated algorithms and elegant thinking would win out over brute force (it makes us so much more interesting!), but your chess example certainly points the other way.

    As someone who spends far too much of their life poking around with metadata, I think I probably have a hidden agenda when it comes to its demise.

    Of course, it will all get really complicated when computers start having hidden agendas...
