Stanford Seminar - Can the brain do back-propagation? Geoffrey Hinton

"Can the brain do back-propagation?" - Geoffrey Hinton of Google & University of Toronto

About the talk:
Deep learning has been very successful for a variety of difficult perceptual tasks. This suggests that the sensory pathways in the brain might also be using back-propagation to ensure that lower cortical areas compute features that are useful to higher cortical areas. Neuroscientists have not taken this possibility seriously because there are so many obvious objections: Neurons do not communicate real numbers; the output of a neuron cannot represent both a feature of the world and the derivative of a cost function with respect to the neuron's output; the feedback connections to lower cortical areas that are needed to communicate error derivatives do not have the same weights as the feedforward connections; the feedback connections do not even go to the neurons from which the feedforward connections originate; there is no obvious source of labelled data. I will describe joint work with Timothy Lillicrap on ways of overcoming these objections.
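
The last objection about the feedback weights is the one addressed by feedback alignment: rather than requiring the feedback connections to carry the transpose of the forward weights, a fixed random feedback matrix turns out to be enough for the error signals to drive useful learning (the final chapter of the talk asks why this works). What follows is a minimal NumPy sketch of that idea, not code from the talk; the network sizes, learning rate, and toy regression task are assumptions made for illustration.

# Feedback alignment sketch: a fixed random matrix B replaces W2.T
# when sending error signals back to the hidden layer.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 20, 30, 5
W1 = rng.normal(0, 0.1, (n_hid, n_in))    # forward weights, layer 1 (learned)
W2 = rng.normal(0, 0.1, (n_out, n_hid))   # forward weights, layer 2 (learned)
B  = rng.normal(0, 0.1, (n_hid, n_out))   # random feedback weights (never learned)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy regression target produced by a random linear map (an assumption).
T = rng.normal(0, 1, (n_out, n_in))
lr = 0.05

for step in range(2000):
    x = rng.normal(0, 1, n_in)
    y_target = T @ x

    h = sigmoid(W1 @ x)          # hidden activity
    y = W2 @ h                   # linear output
    e = y - y_target             # output error

    # True backprop would use W2.T @ e here; feedback alignment uses B instead.
    delta_h = (B @ e) * h * (1 - h)

    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)

print("final squared error:", float(e @ e))

Despite the mismatch between B and W2, the forward weights tend to adjust so that the random feedback still points the hidden-layer updates in a useful direction, which is the phenomenon the closing section of the talk examines.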

Support for the Stanford Colloquium on Computer Systems Seminar Series provided by the Stanford Computer Forum.

Speaker Abstract and Bio can be found here:
ee380.stanford.edu/Abstracts/160427.html

Colloquium on Computer Systems Seminar Series (EE380) presents the current research in design, implementation, analysis, and use of computer systems. Topics range from integrated circuits to operating systems and programming languages. It is free and open to the public, with new lectures each week.

Learn more: bit.ly/WinYX5

0:00 Introduction
0:48 Online stochastic gradient descent
2:43 Four reasons why the brain cannot do backprop
5:20 Sources of supervision that allow backprop learning without a separate supervision signal
8:18 The wake-sleep algorithm (Hinton et al., 1995)
12:15 New methods for unsupervised learning
13:39 Conclusion about supervision signals
14:03 Can neurons communicate real values?
16:16 Statistics and the brain
18:39 Big data versus big models
23:32 Dropout as a form of model averaging
24:53 Different kinds of noise in the hidden activities
28:38 How are the derivatives sent backwards?
30:18 A fundamental representational decision: temporal derivatives represent error derivatives
32:24 An early use of the idea that temporal derivatives encode error derivatives (Hinton & McClelland, 1988)
35:17 Combining STDP with reverse STDP
37:02 If this is what is happening, what should neuroscientists see?
39:22 What the two top-down passes achieve
40:11 A way to encode the top-level error derivatives
48:28 A consequence of using temporal derivatives to code error derivatives
48:40 The next problem
50:18 Now a miracle occurs
56:44 Why does feedback alignment work?
