Sunday, February 21, 2021

g-f(2)133 THE BIG PICTURE OF THE DIGITAL AGE, Quanta Magazine, Artificial Neural Nets Finally Yield Clues to How Brains Learn.




Extra-condensed knowledge


The learning algorithm that enables the runaway success of deep neural networks doesn’t work in biological brains, but researchers are finding alternatives that could.
  • A serious pursuit: using AI to understand the brain. 
    • Today, deep nets rule AI in part because of an algorithm called backpropagation, or backprop. The algorithm enables deep nets to learn from data, endowing them with the ability to classify images, recognize speech, translate languages, make sense of road conditions for self-driving cars, and accomplish a host of other tasks.
    • But real brains are highly unlikely to be relying on the same algorithm.
    • For a variety of reasons, backpropagation isn’t compatible with the brain’s anatomy and physiology, particularly in the cortex.
  • Learning Through Backpropagation
    • No one knew how to effectively train artificial neural networks with hidden layers — until 1986, when Geoffrey Hinton, the late David Rumelhart and Ronald Williams (now of Northeastern University) published the backpropagation algorithm.
    • In essence, the algorithm’s backward phase calculates how much each neuron’s synaptic weights contribute to the error and then updates those weights to improve the network’s performance; a minimal code sketch of this procedure appears after this list.
  • Impossible for the Brain
    • The invention of backpropagation immediately elicited an outcry from some neuroscientists, who said it could never work in real brains. 
  • By analyzing 1,056 artificial neural networks implementing different models of learning, Daniel Yamins and his colleagues at Stanford found that the type of learning rule governing a network can be identified from the activity of a subset of neurons over time.
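As a concrete illustration of the backward-phase point above, here is a minimal Python/NumPy sketch of backpropagation for a tiny network with one hidden layer. It is an assumed illustration written for this post, not code from the Quanta article; the network size, toy data, and learning rate are arbitrary choices.

```python
# Minimal backpropagation sketch for a network with one hidden layer.
# All sizes, data, and the learning rate are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = rng.normal(size=(4, 3))              # 4 toy inputs, 3 features each
y = rng.normal(size=(4, 1))              # target value for each input

W1 = rng.normal(scale=0.5, size=(3, 5))  # input -> hidden synaptic weights
W2 = rng.normal(scale=0.5, size=(5, 1))  # hidden -> output synaptic weights
lr = 0.1                                 # learning rate

for step in range(1000):
    # Forward phase: the network infers an output, which may be erroneous.
    h = sigmoid(X @ W1)                  # hidden-layer activity
    y_hat = h @ W2                       # network output
    error = y_hat - y                    # deviation from the target

    # Backward phase: compute how much each weight contributed to the error
    # (the gradient), then update the weights to shrink that error.
    grad_W2 = h.T @ error / len(X)
    grad_W1 = X.T @ ((error @ W2.T) * h * (1 - h)) / len(X)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
```

The explicit target `y` plays the role of the “teacher” discussed later in this post: without it there is no error signal to send backward through the network.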


Genioux knowledge fact condensed as an image


Condensed knowledge  


  • Learning Through Backpropagation
    • For decades, neuroscientists’ theories about how brains learn were guided primarily by a rule introduced in 1949 by the Canadian psychologist Donald Hebb, which is often paraphrased as “Neurons that fire together, wire together.” A sketch of this local rule appears after this list.
    • It was obvious even in the 1960s that solving more complicated problems required one or more “hidden” layers of neurons sandwiched between the input and output layers. No one knew how to effectively train artificial neural networks with hidden layers — until 1986, when Hinton, the late David Rumelhart and Ronald Williams (now of Northeastern University) published the backpropagation algorithm.
    • The algorithm works in two phases. In the “forward” phase, when the network is given an input, it infers an output, which may be erroneous. The second “backward” phase updates the synaptic weights, bringing the output more in line with a target value.
  • Impossible for the Brain
    • The invention of backpropagation immediately elicited an outcry from some neuroscientists, who said it could never work in real brains. The most notable naysayer was Francis Crick, the Nobel Prize-winning co-discoverer of the structure of DNA who later became a neuroscientist. In 1989 Crick wrote, “As far as the learning process is concerned, it is unlikely that the brain actually uses back propagation.”
  • The Role of Attention 
    • An implicit requirement for a deep net that uses backprop is the presence of a “teacher”: something that can calculate the error made by a network of neurons.
  • Given the advances, computational neuroscientists are quietly optimistic. “There are a lot of different ways the brain could be doing backpropagation,” said Konrad Kording. “And evolution is pretty damn awesome. Backpropagation is useful. I presume that evolution kind of gets us there.”
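To contrast Hebb’s local rule (“neurons that fire together, wire together”) with backprop’s backward-flowing error signal, here is a minimal Python/NumPy sketch. It is an assumed illustration for this post, not code from the article; the network size, inputs, and learning rate are arbitrary.

```python
# Minimal Hebbian-learning sketch: "neurons that fire together, wire together."
# All sizes, rates, and inputs are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

n_pre, n_post = 4, 2
W = rng.random((n_pre, n_post)) * 0.1  # small random initial synaptic weights
lr = 0.01                              # learning rate

for _ in range(100):
    x = rng.random(n_pre)              # presynaptic firing rates
    y = W.T @ x                        # postsynaptic response
    # Hebbian update: each weight grows in proportion to the joint activity
    # of the two neurons it connects. No error is propagated backward.
    # (Pure Hebbian growth is unbounded; this short loop is only illustrative.)
    W += lr * np.outer(x, y)
```

Each weight changes using only the activity of the two neurons it connects, with no teacher and no error signal sent back from an output layer, which is why Hebbian-style rules have long seemed more compatible with the brain’s physiology.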

Category 2: The Big Picture of the Digital Age

[genioux fact produced, deduced or extracted from Quanta Magazine]

Type of essential knowledge of this “genioux fact”: Essential Deduced and Extracted Knowledge (EDEK).

Type of validity of the "genioux fact". 

  • Inherited from sources + Supported by the knowledge of one or more experts + Supported by research.


Authors of the genioux fact

Fernando Machuca


References

Anil Ananthaswamy, “Artificial Neural Nets Finally Yield Clues to How Brains Learn,” Quanta Magazine.

ABOUT THE AUTHORS


Anil Ananthaswamy is a journalist and author. He is a 2019-20 MIT Knight Science Journalism fellow. His latest book, Through Two Doors at Once, is about quantum mechanics and the double-slit experiment. He is a former deputy news editor for New Scientist magazine and currently a freelance feature editor for PNAS’s Front Matter. Besides Quanta, he writes for New Scientist, Scientific American, Knowable and Undark, among others. He won the UK Institute of Physics’ Physics Journalism award and the British Association of Science Writers’ award for Best Investigative Journalism. His first book, The Edge of Physics, was voted book of the year in 2010 by Physics World, and his second book, The Man Who Wasn’t There, was long-listed for the 2016 PEN/E. O. Wilson Literary Science Writing Award.


Key “genioux facts”