**Interview with Demis Hassabis of Google, co-founder of DeepMind**

Interesting snippets:

The idea for our research program is to slowly widen and widen those domains. We have a prototype of this: the human brain. We can tie our shoelaces, we can ride bicycles and we can do physics with the same architecture. So we know this is possible.

**Tell me about the two companies, both out of Oxford University, that you just bought.**

These Oxford guys are amazingly talented groups of professors. One team [formerly Dark Blue Labs] will focus on natural language understanding, using deep neural networks to do that. So rather than the old kind of logic techniques for NLP, we're using deep networks and word embeddings and so on. That's led by Phil Blunsom. We're interested in eventually having language embedded into our systems so we can actually converse. At the moment they are obviously prelinguistic; there is no language capability in there. So we'll see all of those things marrying up. And the second group, Vision Factory, is led by Andrew Zisserman, a world-famous computer vision guy.

**When will we see this happening?**

In six months to a year's time we'll start seeing some aspects of what we're doing embedded in Google Plus: natural language and maybe some recommendation systems.

**What do you hope to do for Google in the long run?**

I'm really excited about the potential for general AI. Things like AI-assisted science.

**What's the big problem you're working on now?**

The big thing is what we call transfer learning. You've mastered one domain of things; how do you abstract that into something that's almost like a library of knowledge that you can now usefully apply in a new domain? That's the key to general knowledge. At the moment we are good at processing perceptual information and then picking an action based on that. But when it goes to the next level, the concept level, nobody has been able to do that.
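Hassabis contrasts old logic-based NLP with "deep networks and word embeddings". As a toy illustration of the distributional idea behind embeddings (this is not DeepMind's method; the corpus, window size, and count-based vectors are illustrative assumptions), here is a minimal sketch: words are represented by the contexts they appear in, so words used in similar contexts end up with similar vectors.

```python
from collections import defaultdict
from math import sqrt

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count how often each word co-occurs with its neighbors (+/-1 word window).
cooc = defaultdict(lambda: defaultdict(int))
for i, w in enumerate(corpus):
    for j in (i - 1, i + 1):
        if 0 <= j < len(corpus):
            cooc[w][corpus[j]] += 1

vocab = sorted(set(corpus))

def vector(word):
    # A word's "embedding" here is simply its row of co-occurrence counts.
    return [cooc[word][v] for v in vocab]

def cosine(a, b):
    # Cosine similarity between two count vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# "cat" and "dog" occur in near-identical contexts ("the _ sat"),
# so their vectors come out highly similar.
print(cosine(vector("cat"), vector("dog")))
```

Real embedding models learn dense vectors by training a neural network to predict context rather than by raw counting, but the principle is the same: distributional similarity becomes geometric closeness, which is what lets a deep network generalize across words it has seen in similar surroundings.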
**One condition you set on the Google purchase was that the company set up some sort of AI ethics board. What was that about?**

It was part of the agreement of the acquisition. It's an independent advisory committee like they have in other areas.

**Why did you do that?**

I think AI could be world-changing; it's an amazing technology. All technologies are inherently neutral, but they can be used for good or bad, so we have to make sure that it's used responsibly. My cofounders and I have felt this for a long time. Another attraction about Google was that they felt as strongly about those things, too.

**What has this group done?**

Certainly there is nothing yet. The group is just being formed; I wanted it in place well ahead of the time that anything came up that would be an issue. One constraint we do have, which wasn't part of the committee but part of the acquisition terms, is that no technology coming out of DeepMind will be used for military or intelligence purposes.

**Do you feel like a committee really could make an impact on controlling a technology once you bring it into the world?**

I think if they are sufficiently educated, yes. That's why they're forming now, so they have enough time to really understand the technical details, the nuances of this. There are some top professors in computation, neuroscience and machine learning on this committee.

**And the committee is in place now?**

It's formed, yes, but I can't tell you who is on it.

**Why not?**

Well, because it's confidential. We think it's important [that it stay out of public view], especially during this initial ramp-up phase where there is no tech. I mean, we're working on computers playing Pong, right? There are no issues here currently, but in the next five or ten years maybe there will be. So really it's just getting ahead of the game.

**Will you eventually release the names?**

Potentially. That's something also to be discussed.

**Transparency is important in this too.**

Sure, sure.
There are lots of interesting questions that have to be answered on a technical level about what these systems are capable of, what they might be able to do, and how we are going to control those things. At the end of the day they need goals set by the human programmers. Our research team here works on those theoretical aspects, partly because we want to advance [the science], but also to make sure that these things are controllable and there are always humans in the loop and so on.

Link: https://medium.com/backchannel/the-deep-mind-of-demis-hassabis-156112890d8a
Posted on: Mon, 19 Jan 2015 19:36:00 +0000
