A [[Focusing event]], often called a warning shot in AI safety circles, is a critical moment that pushes a policy issue to the forefront of public and political attention.
AI might not give us a warning shot: a sufficiently intelligent agent would know not to reveal its misalignment while still weak, and would instead make a [treacherous turn](https://www.lesswrong.com/w/treacherous-turn), showing no warning signs until it's too late.
However, this no-warning scenario seems unlikely, given the current slow(ish) take-off and AI's [[Jagged capabilities]].
A sufficiently large focusing event is likely to open a policy window, which we should be prepared to use for [[AI regulation]].
Previous focusing events in AI:
- AlphaGo beats Lee Sedol
- ChatGPT launch
- CAIS AI statement
- Cambridge Analytica
Likely future focusing events:
- AI-assisted [[CBRN]] incident
- AI-assisted large-scale cyberattack
- Large voice-cloning fraud incident
- Deployed model caught attempting self-exfiltration/shutdown resistance
- Large-scale labor displacement
- Agent AI causing large real-world harm