Will AI labs lose their models to espionage?

AI labs training new frontier models are becoming increasingly attractive attack targets. They spend millions of dollars training individual models, and the result is a single file containing the model weights. It's very tempting for attackers to steal that file rather than pay for their own training runs, and the weights are an equally tempting target for rival intelligence agencies looking to steal or sabotage them.

We consider what type of security an AI lab should have in place. We describe a taxonomy of five attacker capability levels and five defender security levels, as presented by RAND. Levels four and five involve intelligence agency attackers and the extreme defenses needed to counter them. Companies can't reasonably be expected to defend against these kinds of attacks without assistance from intelligence agencies.
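To make the taxonomy concrete, here is a minimal Python sketch of the attacker (OC) and defender (SL) levels covered in the chapters below. The level labels are paraphrased from the chapter titles, and the likely_defended rule of thumb is illustrative, not taken from the RAND report itself.

```python
from enum import IntEnum

class AttackerLevel(IntEnum):
    """Attacker operational-capacity (OC) levels, roughly as discussed in the video."""
    OC1 = 1  # amateur attack capability
    OC2 = 2  # professional, opportunistic attackers
    OC3 = 3  # criminal groups and insider attacks
    OC4 = 4  # standard intelligence agency operations
    OC5 = 5  # top-priority operations by top agencies (e.g. Stuxnet)

class SecurityLevel(IntEnum):
    """Defender security levels (SL), each aimed at defeating the matching OC level."""
    SL1 = 1  # defeats amateur attacks
    SL2 = 2  # defeats small-scale professional attacks (startup-level security)
    SL3 = 3  # defeats criminal groups (large-company security)
    SL4 = 4  # defends against intelligence agencies
    SL5 = 5  # thwarts $1 billion intelligence agency attacks

def likely_defended(defense: SecurityLevel, attacker: AttackerLevel) -> bool:
    # Rough heuristic: a lab is only plausibly safe when its security level
    # is at least as high as the attacker's capability level.
    return int(defense) >= int(attacker)

# Example: startup-level security (SL2) against an intelligence agency (OC4)
print(likely_defended(SecurityLevel.SL2, AttackerLevel.OC4))  # False
```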

So right now, there's no easy way for an AI lab to protect its crown jewels against theft by rogue actors, let alone by rival countries' intelligence agencies. To achieve that goal, AI model weights would have to become some of the most closely guarded secrets in the world. In the long run, it may prove very difficult to keep model weights closed.

#ai #ailabs #cybersecurity

Securing AI Model Weights: Preventing Theft and Misuse of Frontier Models
https://www.rand.org/pubs/research_re...

Mistral CEO confirms ‘leak’ of new open source AI model nearing GPT-4 performance
https://venturebeat.com/ai/mistral-ce...

An over-enthusiastic employee of one of our early access customers leaked…
https://x.com/arthurmensch/status/175...

Anthropic’s Responsible Scaling Policy
https://www.anthropic.com/news/anthro...

Meta’s powerful AI language model has leaked online — what happens now?
https://www.theverge.com/2023/3/8/236...

0:00 Intro
0:27 Contents
0:34 Part 1: Securing model weights
0:57 Three groups of malicious actors
1:26 What can organizations do to defend themselves?
1:35 Two cases where model weights were leaked
1:44 Llama model from Meta was leaked
2:06 Mistral model leaked
2:48 How labs are changing their security posture
3:25 Where model weights need protecting
3:56 RAND report "Securing AI model weights"
4:22 Video series about AI lab security
4:35 Part 2: Organizational security levels
4:57 Five levels of attacker capability
5:02 OC1: Amateur attack capability
5:18 OC2: Professional opportunistic
5:34 OC3: Criminal groups and insider attacks
6:10 OC4: Intelligence agency operation
6:36 OC5: Top cyber operation by top agency
7:10 Stuxnet example (OC5)
7:44 Five different levels of defense capability
7:51 SL1: Defenses that would defeat OC1
8:16 SL2: Defeat small scale professional attacks (startup level security)
8:55 SL3: Defeating criminal groups (large company security)
9:50 Questions to consider for SL3
10:17 SL4: Defending against intelligence agencies
11:15 Need involvement from local intelligence agency
11:51 SL5: Thwarting $1 billion intelligence agency attacks
12:45 Prioritizing security even during outages
13:32 Part 3: What labs should do
14:01 General suggestions for labs
14:27 Confidential computing
14:44 Why the NSA has a poor reputation in the private sector
15:37 Just go to Canada instead
15:53 Physical bandwidth constraints don't do much
16:31 Cryptographic hardware model weight storage
17:13 Conclusion
17:35 Intelligence agencies need to get involved
18:01 Outro
