Driverless cars are a great thing.

Israel is a wonderful place to grow up.

The only rollercoasters I get on are startups.

My dream is to achieve AI for the common good.

Science is going to be revolutionized by AI assistants.

AI is a tool. The choice about how it gets deployed is ours.

The biggest reason we want autonomous cars is to prevent accidents.

I love sophisticated algorithms that help consumers in a tangible way.

AI is neither good nor evil. It's a tool. It's a technology for us to use.

A.I. should not be weaponized, and any A.I. must have an impregnable 'off switch.'

Taking new technology and incorporating into how people work and live is not easy.

Everybody should do at least one startup sometime in life. It's such an amazing ride.

The Turing Test was a brilliant idea, but it's evolved into a competition of chatbots.

A universal basic income doesn't give people dignity or protect them from boredom and vice.

I think that there are so many problems that we have as a society that AI can help us address.

When there are hiring decisions and promotion decisions to be made, people are hungry for data.

Automation has emerged as a bigger threat to American jobs than globalization or immigration combined.

Our highways and our roads are underutilized because of the allowances we have to make for human drivers.

The best students are ones that are willing to take intellectual risks and challenge conventional thinking.

We have an obligation to figure out how to help people cope with the rapidly changing nature of technology.

I'm not so worried about super-intelligence and 'Terminator' scenarios. Frankly I think those are quite farfetched.

Deep learning is a subfield of machine learning, which is a vibrant research area in artificial intelligence, or AI.

It's much more likely that an asteroid will strike the Earth and annihilate life as we know it than AI will turn evil.

Machines and people are both necessary for Facebook, Twitter, Wikipedia, Google, and neither is sufficient on its own.

If you believe everything you read, you are probably quite worried about the prospect of a superintelligent, killer AI.

Infrastructure investment in science is an investment in jobs, in health, in economic growth and environmental solutions.

Sooner or later, the U.S. will face mounting job losses due to advances in automation, artificial intelligence, and robotics.

Even seemingly innocuous housecleaning robots create maps of your home. That is information you want to make sure you control.

A lot of people are scared that machines will take over the world, machines will turn evil: the Hollywood 'Terminator' scenario.

To take intellectual risks is to think about something that can't be done, that doesn't make any sense, and go for it responsibly.

If you step back a little and say we want to do A.I., then you will realize that A.I. needs knowledge, reasoning, and explanation.

One of my favorite sayings is, 'Much have I learned from my teachers, but even more from my friends and even more from my students.'

To say that AI will start doing what it wants for its own purposes is like saying a calculator will start making its own calculations.

It's paradoxical that things that are hard for people are easy for the computer, and things that are hard for the computer, any child can understand.

Machine learning is looking for patterns in data. If you start with racist data, you will end up with even more racist models. This is a real problem.

When you think of driverless cars, there's a huge potential for these cars to save lives by preventing accidents and by reducing congestion on highways.

I'm trying to use AI to make the world a better place. To help scientists. To help us communicate more effectively with machines and collaborate with them.

I could do a whole talk on the question of 'Is AI dangerous?' My response is that AI is not going to exterminate us. It's a tool that's going to empower us.

There are many valid concerns about AI, from its impact on jobs to its uses in autonomous weapons systems and even to the potential risk of superintelligence.

Understanding of natural language is what sometimes is called 'AI complete,' meaning if you can really do that, you can probably solve artificial intelligence.

Because of their exceptional ability to automatically elicit, record, and analyze information, A.I. systems are in a prime position to acquire confidential information.

In the past, much power and responsibility over life and death was concentrated in the hands of doctors. Now, this ethical burden is increasingly shared by the builders of AI software.

Just as our roads and bridges are overdue for investment, so is the infrastructure for scientific research; that is, the body of scientific thought and the tools for searching through it.

Ultimately, to me, the computer is just a big pencil. What can we sketch using this pencil that makes a positive difference to society and advances the state of the art, hopefully in an outsized way?

Life is short. Don't do the same thing everyone else is doing - that's such a herd mentality. And don't do something that's two percent better than the other person. Do something that changes the world.

The truth is that behind any AI program that works is a huge amount of, A, human ingenuity and, B, blood, sweat and tears. It's not the kind of thing that suddenly takes off like in 'Her' or 'Ex Machina.'

At least inside the city of Seattle, driving is going to be a hobby in 2035. It's not going to be a mode of commuting the same way hunting is a hobby for some people, but it's not how most of us get our food.

I don't think that all the coal miners - or even more realistically, say, the truck drivers whose jobs may be put out by self-driving cars and trucks - they're all going to go and become web designers and programmers.

All these things that we've contemplated, whether it's space travel or solutions to diseases that plague us, Ebola virus, all of these things would be a lot more tractable if the machines are trying to solve these problems.

The mechanical loom and the calculator have shown us that technology is both disruptive and filled with opportunities. But it would be hard to find a decent argument that we would have been better off without these inventions.