How could we control superintelligent AI?


The advent of superintelligence would be transformative. Superintelligence, or ASI, is an AI that is many times more intelligent than humans. It could arise quickly in a so-called “hard takeoff” scenario: once an AI is allowed to engage in recursive self-improvement, each improvement makes it better at improving itself, producing dramatically faster breakthroughs on the way to a technological singularity.

Superintelligence could lead to powerful and beneficial technologies: curing any biological disease, halting climate change, and so on. On the other hand, it could also be very hard to control and may make decisions of its own that are detrimental to humans. In the worst case, it might wipe out the human race.

That's why there is a lot of research on AI alignment, also called AI safety. The goal is to make sure an ASI’s actions are aligned with human values and morality. Current efforts include government regulation and sponsorship, industry grants, and of course academic research. Everyone can help by raising awareness of the issue and of how economic and military pressures could lead to an uncontrollable intelligence explosion.

This video is a Christmas special in the tradition of Doctor Who. At least, that's my excuse for why it's so long.

#ai #asi #superintelligence

The AI Revolution: The Road to Superintelligence
https://waitbutwhy.com/2015/01/artifi...

The AI Revolution: Our Immortality or Extinction
https://waitbutwhy.com/2015/01/artifi...

I did not really understand the scope of ASI even after browsing this sub for months until tonight
  / i_did_not_really_understand_the_scope_of_a...  

OpenAI Demos a Control Method for Superintelligent AI
https://spectrum.ieee.org/openai-alig...

Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision
https://arxiv.org/abs/2312.09390

Weak-to-strong generalization
https://openai.com/research/weak-to-s...

AI will give birth to an alien civilization | Max Tegmark and Lex Fridman
   • AI will give birth to an alien civili...  

The dawn of the singularity, a visual timeline of Ray Kurzweil’s predictions
https://www.kurzweilai.net/futurism-t...

0:00 Intro
0:22 Contents
0:28 Part 1: What is superintelligence?
0:52 Visualizing AGI that can replace humans
1:41 Properties of AI vs human brains
2:27 ASI or superintelligence
2:41 Recursively self-improving AI
3:25 Intelligence explosion
3:50 Soft takeoff to ASI (long time period)
4:17 Hard takeoff to ASI (very rapid)
5:06 Dangerous to give AGI access to itself
5:28 Human-level intelligence is not special
5:51 Example: AlphaGo training
6:22 We are the minimum viable intelligence
6:54 Part 2: Death or immortality
7:09 Tangent: Doctor Who Christmas special
7:42 Would a superintelligence do what we wanted?
8:01 Anxiety and/or optimism
8:20 Optimism: What would ASI be capable of?
9:15 Anxiety: We have doubts: fragility of goals
9:57 Competition and other types of peril
10:40 ASI would not rely on humans to survive
10:51 Definitions: AI alignment and AI safety
11:26 Be careful what you wish for
12:33 Emergent properties from superintelligence
13:26 Unstable civilization
14:11 How ASI can prevent future ASIs
14:38 Part 3: What we can do
15:01 AI safety research is very far behind
15:22 Academic research in AI safety
15:57 Governments investing in AI safety
16:27 US executive order on AI safety
17:18 Industry grants for startups
17:32 Everyone can increase awareness
17:59 Cannot keep an AI in a box
19:02 Paper: weak-to-strong generalization
19:44 Result: strong model infers what we want
20:30 Personal perspective at the moment
20:49 Conclusion
21:27 Solving AI alignment
22:25 Outro
