In a recent post on Oct 21, I linked to Michael Jordan's interview in IEEE Spectrum, in which he comments on various topics including neural nets and deep learning. Mike was somewhat unhappy about the way his opinions were expressed in the interview and felt compelled to write a long comment to my post to clarify some of his positions. Some of his comments could be construed as largely dismissive of neural nets and, by extension, of deep learning. In fact, he does not criticize deep learning. He said nice things about the recent practical success of deep learning and convolutional nets in his recent Reddit AMA. What he does criticize is the hype that surrounds some works that claim to be neurally inspired or to work like the brain. Let me say, as forcefully as I can, that he and I totally agree on that.

Mike and I have been friends since we met at the first Connectionist Summer School at CMU in 1986 (which was co-organized by Geoff Hinton). We have a lot of common interests, even if our favorite topics of research have seemed orthogonal for many years. In a way, it was inevitable that our paths would diverge. Mike's research direction tends to take radical turns every 5 years or so: from cognitive psychology, to neural nets, to motor control, to probabilistic approaches, graphical models, variational methods, Bayesian non-parametrics, etc. Mike is the Miles Davis of machine learning: he reinvents himself periodically and sometimes leaves fans scratching their heads after he changes direction.

Here are a few things Mike and I agree on regarding deep learning, neural nets, and such (he will comment if he disagrees):

1. There is nothing wrong with deep learning as a topic of investigation, and there is definitely nothing wrong with models that work well, such as convolutional nets.

2. There is nothing wrong with getting a bit of inspiration from neuroscience. Old-style neural nets, convnets, SIFT, HoG, and many other successful methods have all been inspired by neuroscience to some degree.

3. The neural inspiration in models like convolutional nets is very tenuous. That's why I call them "convolutional nets", not "convolutional neural nets", and why we call the nodes "units" and not "neurons". As Mike says in his interview, our units are very simple, cartoonish elements when compared to real neurons. Yes, most of the ideas behind some of the most successful deep learning models have been around since the 80s. That doesn't make them less useful.

4. There is something very wrong with claiming that a model is good just because it is inspired by the brain. Several efforts have attracted the attention of the press and have increased the hype level by claiming to work "like the brain" or to be "cortical". There is quite a bit of hype around brain-like chips, brain-scale simulations, spiking this, and cortical that. Many of these claims are unsubstantiated and are not backed by real and believable results. Hype has killed AI several times in the past. We don't want that to happen again.

5. Serious research in deep learning and computational neuroscience should not be conflated with over-hyped work on brain-like systems. The fact that an organization receives 10^7 or 10^9 dollars or euros in investment or in research funding does not make it serious. Real results and the recognition of the research community make it serious.

6. Among serious researchers, there are four kinds of people: (1) people who want to explain/understand learning (and perhaps intelligence) at the fundamental/theoretical level; (2) people who want to solve practical problems and have no interest in neuroscience; (3) people who want to understand intelligence, build intelligent machines, and have a side interest in understanding how the brain works; (4) people whose primary interest is to understand how the brain works, but who feel they need to build computer models that actually work in order to do so. There is nothing wrong with any of these approaches to research.

7. People whose primary interest is to understand how the brain works will be driven to work on models that are biologically plausible. They will occasionally come up with (or work on) methods that are useful but not particularly plausible biologically. Our dear friend Geoff Hinton falls into this category.

8. Trying to figure out a few principles that could be the basis of how the brain works (through mathematics and computer models) is a perfectly valid topic of investigation. How does the brain solve the credit assignment problem? How does the brain build representations of the perceptual world? These are important questions that must be researched.
Posted on: Fri, 24 Oct 2014 08:42:29 +0000
