Using this model, the program then analyzed a set of 120 brand-new pictures and predicted the fMRI pattern each would evoke in the visual cortex.
After that, the volunteers themselves looked at the 120 new pictures while being scanned. The computer then matched the measured brain activity against its predictions and picked the image it judged the closest match.
They notched up a 92 percent success rate with one volunteer, and 72 percent with the other. The probability of getting any single image right by chance -- the computer picking the correct one out of 120 -- is only 0.8 percent.
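The arithmetic behind that chance figure is easy to check, and it also shows how wildly unlikely the reported hit rates are under pure guessing. A minimal sketch, assuming each of the 120 novel pictures is an independent trial (the article implies but does not state the trial count):

```python
from math import comb

# Chance baseline: picking the right image out of 120 candidates at random.
n_images = 120
p_chance = 1 / n_images          # ~0.0083, the "0.8 percent" quoted above

# Assumption: 120 independent trials (one per novel picture), 92% correct.
n_trials = 120
n_correct = round(0.92 * n_trials)   # 110 hits

# Binomial tail: probability of at least n_correct lucky guesses.
p_lucky = sum(
    comb(n_trials, k) * p_chance**k * (1 - p_chance)**(n_trials - k)
    for k in range(n_correct, n_trials + 1)
)

print(f"chance per trial: {p_chance:.4f}")
print(f"P(>= {n_correct}/{n_trials} correct by luck): {p_lucky:.3e}")
```

Under those assumptions the odds of hitting 92 percent by luck come out astronomically small, which is why the result is so striking.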
In short, if the computer knows the set of images the viewer could be seeing, it can guess which one it is almost all the time. Accuracy drops as that set grows larger, but not to an insignificant level:
The ambitious experiment was taken a stage further, expanding the set of novel images from 120 to 1,000. The first volunteer took this test, and accuracy declined only slightly, from 92 percent to 82 percent.
"Our estimates suggest that even with a set of one billion images -- roughly the number of images indexed by Google on the Internet -- the decoder would correctly identify the image about 20 percent of the time," said Gallant.
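Gallant's billion-image estimate is consistent with a simple extrapolation of the two measured data points. A crude sketch, assuming accuracy falls off linearly in the logarithm of the set size (an illustrative model on my part, not necessarily the one the lab used):

```python
from math import log10

# Measured points for the first volunteer: (set size, percent correct).
pts = [(120, 92.0), (1000, 82.0)]

# Assumed model: accuracy declines linearly in log10(set size).
(x1, y1), (x2, y2) = [(log10(n), acc) for n, acc in pts]
slope = (y2 - y1) / (x2 - x1)        # percentage points lost per tenfold growth

def predicted_accuracy(n_images):
    """Extrapolated identification accuracy for a candidate set of n_images."""
    return y1 + slope * (log10(n_images) - x1)

print(f"{-slope:.1f} points lost per tenfold increase")
print(f"predicted accuracy at 1e9 images: {predicted_accuracy(1e9):.0f}%")
```

The fit loses roughly 11 points per tenfold increase and lands in the high teens at a billion images, in the same ballpark as the quoted 20 percent, and still vastly above the one-in-a-billion chance level.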
That's very, very meaningful. If we extrapolate further, assuming a computer could catalog all the sensations a human can experience, it would be possible to train one to recognize the brain activity associated with each of them. The better computers and their programming get, the closer this comes to an actual mind-reading machine. It'll be a while (at least a decade) before this becomes really practical, but in that time I can see it developing into a number of ways of controlling and manipulating computers. Very interesting.