Extra-condensed knowledge
- A serious pursuit: using AI to understand the brain.
- Today, deep nets rule AI in part because of an algorithm called backpropagation, or backprop. The algorithm enables deep nets to learn from data, endowing them with the ability to classify images, recognize speech, translate languages, make sense of road conditions for self-driving cars, and accomplish a host of other tasks.
- But real brains are highly unlikely to be relying on the same algorithm.
- For a variety of reasons, backpropagation isn’t compatible with the brain’s anatomy and physiology, particularly in the cortex.
- Learning Through Backpropagation.
- No one knew how to effectively train artificial neural networks with hidden layers — until 1986, when Geoffrey Hinton, the late David Rumelhart and Ronald Williams (now of Northeastern University) published the backpropagation algorithm.
- In essence, the algorithm’s backward phase calculates how much each neuron’s synaptic weights contribute to the error and then updates those weights to improve the network’s performance (the update rule is sketched after this list).
- Impossible for the Brain
- The invention of backpropagation immediately elicited an outcry from some neuroscientists, who said it could never work in real brains.
- By analyzing 1,056 artificial neural networks implementing different models of learning, Daniel Yamins and his colleagues at Stanford found that the type of learning rule governing a network can be identified from the activity of a subset of neurons over time.
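A minimal sketch of that backward-phase update, under assumptions the summary does not spell out (a squared-error loss E and a learning rate η, both conventional choices): each synaptic weight is nudged against its own contribution to the error.

```latex
% Gradient-descent update performed in the backward phase (illustrative)
% E   : error between the network's output and the target
% eta : learning rate (step size)
\Delta w_{ij} = -\eta \,\frac{\partial E}{\partial w_{ij}},
\qquad
w_{ij} \leftarrow w_{ij} + \Delta w_{ij}
```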
Condensed knowledge
- Learning Through Backpropagation
- For decades, neuroscientists’ theories about how brains learn were guided primarily by a rule introduced in 1949 by the Canadian psychologist Donald Hebb, which is often paraphrased as “Neurons that fire together, wire together.”
- It was obvious even in the 1960s that solving more complicated problems required one or more “hidden” layers of neurons sandwiched between the input and output layers. No one knew how to effectively train artificial neural networks with hidden layers — until 1986, when Hinton, the late David Rumelhart and Ronald Williams (now of Northeastern University) published the backpropagation algorithm.
- The algorithm works in two phases. In the “forward” phase, when the network is given an input, it infers an output, which may be erroneous. The second “backward” phase updates the synaptic weights, bringing the output more in line with a target value (both phases are sketched in code after this list).
- Impossible for the Brain. The invention of backpropagation immediately elicited an outcry from some neuroscientists, who said it could never work in real brains. The most notable naysayer was Francis Crick, the Nobel Prize-winning co-discoverer of the structure of DNA who later became a neuroscientist. In 1989 Crick wrote, “As far as the learning process is concerned, it is unlikely that the brain actually uses back propagation.”
- The Role of Attention
- An implicit requirement for a deep net that uses backprop is the presence of a “teacher”: something that can calculate the error made by a network of neurons.
- Given these advances, computational neuroscientists are quietly optimistic. “There are a lot of different ways the brain could be doing backpropagation,” said Konrad Kording of the University of Pennsylvania. “And evolution is pretty damn awesome. Backpropagation is useful. I presume that evolution kind of gets us there.”
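To make the two phases concrete, here is a minimal, illustrative sketch rather than code from the article: a tiny one-hidden-layer network trained with a forward pass and a backward, error-driven weight update, with the purely local Hebbian rule included for contrast. The network size, sigmoid activations, squared-error loss and learning rate are assumed choices, and the target vector `t` stands in for the “teacher” that supplies the error.

```python
# Illustrative sketch only: a tiny one-hidden-layer network trained with the
# two-phase procedure described above, plus a Hebbian update for contrast.
import numpy as np

rng = np.random.default_rng(0)
eta = 0.5                                   # learning rate (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hebbian_update(w, pre, post):
    # "Neurons that fire together, wire together": strengthen a weight in
    # proportion to joint pre- and post-synaptic activity; no error needed.
    return w + eta * np.outer(post, pre)

# Network: 2 inputs -> 3 hidden units -> 1 output
W1 = rng.normal(scale=0.5, size=(3, 2))     # input-to-hidden weights
W2 = rng.normal(scale=0.5, size=(1, 3))     # hidden-to-output weights

x = np.array([0.0, 1.0])                    # input
t = np.array([1.0])                         # target supplied by the "teacher"

for step in range(100):
    # Forward phase: given the input, infer an output (which may be erroneous).
    h = sigmoid(W1 @ x)                     # hidden activity
    y = sigmoid(W2 @ h)                     # output
    error = y - t                           # signed error vs. the target

    # Backward phase: compute how much each weight contributed to the error
    # (chain rule), then nudge every weight to bring the output toward the target.
    delta_out = error * y * (1 - y)                   # output-layer error signal
    delta_hid = (W2.T @ delta_out) * h * (1 - h)      # error propagated to hidden layer
    W2 -= eta * np.outer(delta_out, h)
    W1 -= eta * np.outer(delta_hid, x)

print("output after training:", sigmoid(W2 @ sigmoid(W1 @ x)))  # moves toward 1.0

# For contrast, one Hebbian step uses only locally available activity:
W1_hebb = hebbian_update(W1, pre=x, post=sigmoid(W1 @ x))
```

The contrast is the tension the article highlights: the Hebbian step needs nothing beyond activity available at the synapse, whereas the backward phase needs an error computed against a target and sent back through the layers, which is exactly the “teacher” requirement flagged above.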
Category 2: The Big Picture of the Digital Age
[genioux fact produced, deduced or extracted from Quanta Magazine]
Type of essential knowledge of this “genioux fact”: Essential Deduced and Extracted Knowledge (EDEK).
Type of validity of the “genioux fact”:
- Inherited from sources + Supported by the knowledge of one or more experts + Supported by research.
Authors of the genioux fact