Computers will understand sarcasm before Americans do.

I have a Reagan-like ability to believe in my own data.

I feel slightly embarrassed by being called 'the godfather.'

Making everything more efficient should make everybody happier.

I think it's very clear now that we will have self-driving cars.

Humans are still much better than computers at recognizing speech.

I got fed up with academia and decided I would rather be a carpenter.

The brain sure as hell doesn't work by somebody programming in rules.

Backhoes can save us a lot of digging. But of course, you can misuse them.

In A.I., the holy grail was how do you generate internal representations.

I am betting on Google's team to be the epicenter of future breakthroughs.

My main interest is in trying to find radically different kinds of neural nets.

I refuse to say anything beyond five years because I don't think we can see much beyond five years.

In a sensibly organised society, if you improve productivity, there is room for everybody to benefit.

We want to take AI and CIFAR to wonderful new places, where no person, no student, no program has gone before.

My view is we should be doing everything we can to come up with ways of exploiting the current technology effectively.

To deal with a 14-dimensional space, visualize a 3-D space and say 'fourteen' to yourself very loudly. Everyone does it.

The question is, can we make neural networks that are 1,000 times bigger? And how can we do that with existing computation?

Most people at CMU thought it was perfectly reasonable for the U.S. to invade Nicaragua. They somehow thought they owned it.

I think we should think of AI as the intellectual equivalent of a backhoe. It will be much better than us at a lot of things.

The pooling operation used in convolutional neural networks is a big mistake, and the fact that it works so well is a disaster.

I get very excited when we discover a way of making neural networks better - and when that's closely related to how the brain works.

Any new technology, if it's used by evil people, bad things can happen. But that's more a question of the politics of the technology.

The NSA is already bugging everything that everybody does. Each time there's a new revelation from Snowden, you realise the extent of it.

In the long run, curiosity-driven research just works better... Real breakthroughs come from people focusing on what they're excited about.

In the brain, you have connections between the neurons called synapses, and they can change. All your knowledge is stored in those synapses.

Once your computer is pretending to be a neural net, you get it to be able to do a particular task by just showing it a whole lot of examples.

The role of radiologists will evolve from doing perceptual things that could probably be done by a highly trained pigeon to doing far more cognitive things.

I am scared that if you make the technology work better, you help the NSA misuse it more. I'd be more worried about that than about autonomous killer robots.

All you need is lots and lots of data and lots of information about what the right answer is, and you'll be able to train a big neural net to do what you want.

Take any old classification problem where you have a lot of data, and it's going to be solved by deep learning. There's going to be thousands of applications of deep learning.

In science, you can say things that seem crazy, but in the long run they can turn out to be right. We can get really good evidence, and in the end the community will come around.

Irony is going to be hard to get. You have to be master of the literal first. But then, Americans don't get irony either. Computers are going to reach the level of Americans before Brits.

In deep learning, the algorithms we use now are versions of the algorithms we were developing in the 1980s, the 1990s. People were very optimistic about them, but it turns out they didn't work too well.

The brain has about ten thousand parameters for every second of experience. We do not really have much experience about how systems like that work or how to make them be so good at finding structure in data.

Early AI was mainly based on logic. You're trying to make computers that reason like people. The second route is from biology: You're trying to make computers that can perceive and act and adapt like animals.

You look at these past predictions like there's only a market in the world for five computers [as allegedly said by IBM founder Thomas Watson] and you realize it's not a good idea to predict too far into the future.

In the brain, you have connections between the neurons called synapses, and they can change. All your knowledge is stored in those synapses. You have about 1,000-trillion synapses - 10 to the 15, it's a very big number.

The paradigm for intelligence was logical reasoning, and the idea of what an internal representation would look like was it would be some kind of symbolic structure. That has completely changed with these big neural nets.

I think people need to understand that deep learning is making a lot of things, behind-the-scenes, much better. Deep learning is already working in Google search, and in image search; it allows you to image search a term like "hug."

Everybody right now, they look at the current technology, and they think, 'OK, that's what artificial neural nets are.' And they don't realize how arbitrary it is. We just made it up! And there's no reason why we shouldn't make up something else.

Now that neural nets work, industry and government have started calling neural nets AI. And the people in AI who spent all their life mocking neural nets and saying they'd never do anything are now happy to call them AI and try and get some of the money.

Deep learning is already working in Google search and in image search; it allows you to image-search a term like 'hug.' It's used to get you Smart Replies in your Gmail. It's in speech and vision. It will soon be used in machine translation, I believe.

Machines can do things cheaper and better. We're very used to that in banking, for example. ATM machines are better than tellers if you want a simple transaction. They're faster, they're less trouble, they're more reliable, so they put tellers out of work.

My father was an entomologist who believed in continental drift. In the early '50s, that was regarded as nonsense. It was in the mid-'50s that it came back. Someone had thought of it 30 or 40 years earlier named Alfred Wegener, and he never got to see it come back.

As soon as you have good mechanical technology, you can make things like backhoes that can dig holes in the road. But of course a backhoe can knock your head off. But you don't want to not develop a backhoe because it can knock your head off, that would be regarded as silly.

I have always been convinced that the only way to get artificial intelligence to work is to do the computation in a way similar to the human brain. That is the goal I have been pursuing. We are making progress, though we still have lots to learn about how the brain actually works.

We now think of internal representation as great big vectors, and we do not think of logic as the paradigm for how to get things to work. We just think you can have these great big neural nets that learn, and so, instead of programming, you are just going to get them to learn everything.

I had a stormy graduate career, where every week we would have a shouting match. I kept doing deals where I would say, 'Okay, let me do neural nets for another six months, and I will prove to you they work.' At the end of the six months, I would say, 'Yeah, but I am almost there. Give me another six months.'
