Is Superintelligent AI an Existential Risk? - Nick Bostrom on ASI


Artificial superintelligence, or superintelligence in general, is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds.

According to the most popular version of the singularity hypothesis, called the intelligence explosion, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.

I. J. Good's "intelligence explosion" model predicts that a future superintelligence will trigger a singularity.
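To make the feedback loop concrete, here is a minimal toy simulation of the idea (an illustrative sketch with made-up numbers, not a model taken from Bostrom or the video): each self-improvement cycle multiplies the agent's capability by a fixed gain, and the time needed for the next cycle shrinks in proportion to current capability, so later generations arrive faster and faster.

# Toy model of the "intelligence explosion" feedback loop (illustrative only).
# Assumed parameters: each cycle multiplies capability by GAIN, and the
# duration of the next cycle is inversely proportional to current capability.

GAIN = 1.5              # hypothetical improvement factor per generation
BASE_CYCLE_YEARS = 2.0  # hypothetical time the first self-improvement cycle takes

capability = 1.0        # human-baseline capability, normalized to 1
elapsed = 0.0

for generation in range(1, 11):
    cycle_time = BASE_CYCLE_YEARS / capability  # smarter agents improve themselves faster
    elapsed += cycle_time
    capability *= GAIN
    print(f"gen {generation:2d}: capability x{capability:6.1f} after {elapsed:5.2f} years")

# Capability grows geometrically while the gap between generations shrinks,
# so the total elapsed time stays bounded -- the "runaway reaction" in this toy model.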

Four polls of AI researchers, conducted in 2012 and 2013 by Nick Bostrom and Vincent C. Müller, suggested a median estimate of a 50% probability that artificial general intelligence (AGI) would be developed by 2040 to 2050.

Public figures such as Stephen Hawking and Elon Musk have expressed concern that full artificial intelligence, or strong AI, could result in human extinction. The consequences of the singularity, and its potential benefit or harm to the human race, remain intensely debated.

Philosopher Nick Bostrom defines an existential risk as one that threatens to annihilate Earth-originating intelligent life or to permanently and drastically curtail its potential. He argues that advanced artificial intelligence is a leading candidate for such a risk.
Timing is also a factor: a superintelligence might decide to act quickly, before humans have a chance to mount any countervailing response. It might even preemptively eliminate all of humanity for reasons that are incomprehensible to us.

A superintelligence might also seek to colonize the universe, for example to maximize the amount of computation it could perform or to obtain raw materials for manufacturing new supercomputers.

While it may seem alarmist to worry about these scenarios in a world where only narrow AI exists, we do not know how long it will take, or whether it is even possible, to develop a safe artificial superintelligence that shares our goals. Thus, we had better start planning for the advent of ASI today... while we still can!

#AI #ASI #AGI

SUBSCRIBE to our channel "Science Time": / sciencetime24
SUPPORT us on Patreon: / sciencetime
BUY Science Time Merch: https://teespring.com/science-time-merch

Sources:
Nick Bostrom talks at Google: • Superintelligence | Nick Bostrom | Ta...
https://en.wikipedia.org/wiki/Superin...
