Artificial intelligence drew much inspiration from the human brain but went off in its own direction. Now, AI has come full circle and is helping neuroscientists better understand how our own brains work.
In 2014, Yamins and colleagues showed that a deep learning system that had learned to identify objects in pictures – nearly as well as humans could – did so in a way that closely mimicked the way the brain processes vision. In fact, the computations the deep learning system performed matched activity in the brain’s vision-processing circuits substantially better than any other model of those circuits.
“Once you have a sensory impression of the world, you want to make decisions based on it,” Yamins said. “We’re trying to make models of decision making, learning to make decisions and how you interface between sensory systems, decision making and memory.”
Visible Legacy Comment
The research team sees promising next goals for cognitively inspired artificial intelligence. Tech scouts following the Stanford Bio-X ecosystem will find this collaboration between psychology and physics of interest.
[Image: Map of the Stanford Neurosciences Institute]