Creating AI Could Be the Biggest & Last Event in Human History | Stephen Hawking


Stephen Hawking, in full Stephen William Hawking (born January 8, 1942, Oxford, Oxfordshire, England; died March 14, 2018, Cambridge, Cambridgeshire), was an English theoretical physicist whose theory of exploding black holes drew upon both relativity theory and quantum mechanics. He also worked with space-time singularities.

Below are Stephen Hawking's words from one of his speeches, which you can also watch him deliver in the accompanying video.



Today I would like to speak about the origin and destiny of intelligence in our universe. I shall take this to include the human race, even though much of its behaviour throughout history has been pretty stupid and not calculated to aid the survival of the species.

We all know that, over time, things tend to get messy. The second law of thermodynamics says that the total amount of disorder, or entropy, always increases over time. However, there is a loophole allowing a small system to decrease its disorder, as long as it increases the disorder of its surroundings by an even greater amount.
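As a worked statement of that loophole (the notation here is illustrative, not from the speech): the second law constrains only the total entropy, so a local decrease is allowed whenever the surroundings pay for it.

```latex
% Second law: the total entropy change is non-negative.
\Delta S_{\mathrm{total}}
  = \Delta S_{\mathrm{system}} + \Delta S_{\mathrm{surroundings}} \ge 0
% Hence a local decrease is permitted whenever the surroundings
% compensate with a greater increase:
\Delta S_{\mathrm{system}} < 0
  \quad \text{is allowed if} \quad
  \Delta S_{\mathrm{surroundings}} \ge \lvert \Delta S_{\mathrm{system}} \rvert
```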

Our initially barren universe has evolved remarkably complex entities doing just this, as well as reproducing. We call these entities "life".

Information is at the heart of life. DNA passes the blueprints of life between generations. Ever more complex life forms input information from sensors such as eyes and ears, process the information in brains or other systems to figure out how to act, and interact with the world by outputting information to muscles, for example.
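The input-process-output loop described here can be sketched as a minimal sense-think-act cycle. The Python below is a toy illustration of that pattern; every name in it is invented for the example, and nothing here comes from the speech itself.

```python
# A minimal sense-process-act loop: information flows in from sensors,
# is processed into a decision, and flows out to actuators.

def sense(environment):
    """Read raw information from the environment (the 'eyes and ears')."""
    return environment.get("light_level", 0.0)

def process(observation, threshold=0.5):
    """Turn the observation into a decision (the 'brain')."""
    return "move_toward_light" if observation > threshold else "stay_put"

def act(decision):
    """Output the decision to the world (the 'muscles')."""
    print(f"action: {decision}")

environment = {"light_level": 0.8}
act(process(sense(environment)))  # prints: action: move_toward_light
```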

At some point during our 13.8 billion years of cosmic history, something beautiful happened. This information processing got so intelligent that life forms became conscious. Our universe has now awoken, becoming aware of itself.

I've given you a brief history of intelligence. What's next? Some think that humanity today is the pinnacle of evolution, and that this is as good as it gets. I disagree.

There ought to be something very special about the boundary conditions of our universe, and what can be more special than that there is no boundary? And there should be no boundary to human endeavour.

I think there is no qualitative difference between the brain of an earthworm and a computer. I also believe that evolution implies there can be no qualitative difference between the brain of an earthworm and that of a human. It therefore follows that computers can, in principle, emulate human intelligence, or even better it.

Up to now, computers have obeyed Moore's law, which says that computers double their speed and memory capacity every two years. Human intelligence may also increase because of genetic engineering, but not so fast. The result is that computers are likely to overtake humans in intelligence at some point in the next 100 years.
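Taken at face value, that doubling rule is plain exponential growth. A small sketch of the arithmetic, assuming an idealised two-year doubling period (the function and numbers are illustrative only):

```python
# Moore's-law-style growth: capacity(t) = capacity(0) * 2 ** (t / 2),
# with t measured in years and a two-year doubling period assumed.

def capacity_after(years, initial=1.0, doubling_period_years=2.0):
    """Capacity relative to `initial` after `years` of steady doubling."""
    return initial * 2 ** (years / doubling_period_years)

for years in (2, 10, 20, 100):
    print(f"after {years:>3} years: x{capacity_after(years):,.0f}")
# after   2 years: x2
# after  10 years: x32
# after  20 years: x1,024
# after 100 years: x1,125,899,906,842,624
```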

When that happens, we will need to ensure that our computers have goals aligned with ours.

It's tempting to dismiss the notion of highly intelligent machines as mere science fiction, but this would be a mistake, and potentially our worst mistake ever.

Artificial intelligence research is now progressing rapidly. Recent landmarks, such as self-driving cars, a computer winning at Jeopardy!, and the digital personal assistants Siri, Google Now and Cortana, are merely symptoms of an IT arms race, a race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what our coming decades will bring.

The potential benefits are huge. Everything that civilisation has to offer is a product of human intelligence. We cannot predict what we might achieve when this intelligence is amplified by the tools AI may provide, but the eradication of war, disease and poverty would be high on anyone's list. Success in creating AI would be the biggest event in human history.

Unfortunately, it might also be the last, unless we learn how to avoid the risks.

In the near term, for example, world militaries are considering starting an arms race in autonomous weapon systems that can choose and eliminate their own targets, while the UN is debating a treaty banning such weapons.

Autonomous weapons proponents usually forget to ask the most important question: what is the likely end point of an arms race, and is that desirable for the human race? Do we really want cheap AI weapons to become the Kalashnikovs of tomorrow, sold to criminals and terrorists on the black market?

Given concerns about the long-term controllability of ever more advanced AI systems, should we arm them and turn over our defence to them?

In 2010, computerised trading systems created a stock market flash crash. What would a computer-triggered crash look like in the defence arena? The best time to stop the autonomous weapons arms race is now.

In the medium term, AI may automate our jobs, to bring both great prosperity and equality.

Looking further ahead, there are no fundamental limits to what can be achieved. There is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains.

An explosive transition is possible, although it may play out differently than in the movies. As Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called "a singularity".
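Good's feedback loop can be caricatured as a recurrence in which the size of each improvement grows with current ability. The toy model below is a deliberately crude illustration of why such a loop runs away; it is not from the speech and makes no claim about real AI systems.

```python
# Toy recursive self-improvement: I_{n+1} = I_n * (1 + gain * I_n).
# Because the growth factor itself grows with I_n, the sequence climbs
# faster than any fixed exponential: a caricature of an
# 'intelligence explosion'.

def self_improve(intelligence=1.0, gain=0.5, generations=10):
    """Iterate the recurrence and record each generation's level."""
    history = [intelligence]
    for _ in range(generations):
        intelligence *= 1 + gain * intelligence
        history.append(intelligence)
    return history

for generation, level in enumerate(self_improve()):
    print(f"generation {generation:>2}: {level:.3g}")
```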

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and potentially subduing us with weapons we cannot even understand.

Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

In short, the advent of superintelligent AI would be either the best or the worst thing ever to happen to humanity, so we should plan ahead.

If a superior alien civilisation sent us a text message saying, "We'll arrive in a few decades," would we just reply, "Okay. Call us when you get here. We'll leave the lights on"? Probably not, but this is more or less what has happened with AI.

Little serious research has been devoted to these issues, outside of a few small non-profit institutes. Fortunately, this is now changing.

Technology pioneers Elon Musk, Bill Gates and Steve Wozniak have echoed my concerns, and a healthy culture of risk assessment and awareness of societal implications is beginning to take root in the AI community.

Many of the world's leading AI researchers recently signed an open letter calling for the goal of AI to be redefined, from simply creating raw, undirected intelligence to creating intelligence directed at benefiting humanity.

The Future of Life Institute, where I serve on the scientific advisory board, has just launched a global research programme aimed at keeping AI beneficial.

When we invented fire, we messed up repeatedly, then invented the fire extinguisher. With more powerful technology, such as nuclear weapons, synthetic biology and strong artificial intelligence, we should instead plan ahead and aim to get things right the first time, because it may be the only chance we will get.

I am an optimist and don't believe in boundaries, neither for what we can do in our personal lives, nor for what life and intelligence can accomplish in our universe.

This means that the brief history of intelligence that I have told you about is not the end of the story, but just the beginning of what I hope will be billions of years of life flourishing in the cosmos.

Our future is a race between the growing power of our technology and the wisdom with which we use it. Let's make sure that wisdom wins.

Thank you for listening.