Ben Cantrick (mackys) wrote,

Jeff Hawkins and machine vision.


George and Hawkins called the new technology hierarchical temporal memory, or HTM. An HTM consists of a pyramid of nodes, each encoded with a set of statistical formulas. The whole HTM is pointed at a data set, and the nodes create representations of the world the data describes — whether a series of pictures or the temperature fluctuations of a river. The temporal label reflects the fact that in order to learn, an HTM has to be fed information with a time component — say, pictures moving across a screen or temperatures rising and falling over a week. Just as with the brain, the easiest way for an HTM to learn to identify an object is by recognizing that its elements - the four legs of a dog, the lines of a letter in the alphabet - are consistently found in similar arrangements. Other than that, an HTM is agnostic; it can form a model of just about any set of data it’s exposed to. And, just as your cortex can combine sound with vision to confirm that you are seeing a dog instead of a fox, HTMs can also be hooked together. Most important, Hawkins says, an HTM can do what humans start doing from birth but that computers never have: not just learn, but generalize.

http://www.wired.com/wired/archive/15.03/hawkins.html?pg=1
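To make the idea concrete, here's a toy sketch in Python of the kind of node the article describes: it memorizes the spatial patterns it sees, counts which ones tend to follow each other in time, and collapses patterns that consistently follow one another into a single "temporal group" it can pass up the pyramid. This is just my own illustration of the concept, not Numenta's actual HTM algorithm; all the names here (ToyNode and so on) are made up.

from collections import defaultdict

class ToyNode:
    """One node in the pyramid: memorizes spatial patterns and groups them
    by how consistently they follow one another in time."""

    def __init__(self):
        self.patterns = []                    # distinct input patterns seen so far
        self.transitions = defaultdict(int)   # (from_idx, to_idx) -> count
        self.prev = None

    def _index(self, pattern):
        key = tuple(pattern)
        if key not in self.patterns:
            self.patterns.append(key)
        return self.patterns.index(key)

    def learn(self, pattern):
        # Count which pattern followed which -- the "temporal" part of HTM.
        idx = self._index(pattern)
        if self.prev is not None:
            self.transitions[(self.prev, idx)] += 1
        self.prev = idx

    def groups(self):
        # Union-find: patterns linked by frequent transitions end up in the
        # same temporal group.
        parent = list(range(len(self.patterns)))
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i
        for (a, b), count in self.transitions.items():
            if count >= 2:                    # arbitrary "consistent" threshold
                parent[find(a)] = find(b)
        return [find(i) for i in range(len(self.patterns))]

    def infer(self, pattern):
        # Report the temporal group of a pattern -- the invariant label that a
        # parent node higher up the pyramid would consume as its input.
        idx = self._index(pattern)
        return self.groups()[idx]

# Feed the node a little "movie": a bar sweeping across three pixels, looped.
node = ToyNode()
frames = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
for _ in range(5):
    for f in frames:
        node.learn(f)

# All three frames land in one group: the node has generalized "bar at any
# position" into a single concept it can hand up the hierarchy.
print([node.infer(f) for f in frames])

The print statement reports the same group index for all three frames, which is the toy version of the "generalize, not just learn" point the article makes: the node treats the moving bar as one thing, no matter where it appears.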

I do not expect to see human-level ("strong") AI in my lifetime. However, I do think that this stuff is probably either the key, or one of the keys, to good machine vision. And that's pretty cool in and of itself. We shall see if it can prove itself in the real world...
Tags: ai, reddit