There are many possible pathways for how AI will play out. These are the scenarios I find most realistic.
We may never reach [[AGI is a type of AI that would match or surpass human capabilities across virtually all cognitive tasks|AGI]] or [[ASI is a type of AI that would greatly exceed human capabilities across virtually all cognitive tasks|ASI]]:
- We may lose our capacity to build AI, whether through civilizational collapse, such as nuclear war, or through chronic decline, such as resource depletion or economic stagnation.
- Superintelligence could be so difficult to build that we never get there. Even so, [[Without new breakthroughs, AI will still have a huge societal impact]].
- We intentionally choose to stop developing superintelligence, [[We probably won't stop developing superintelligence|though this seems unlikely]].
If we do reach superintelligence, we may retain control:
- The first superintelligence becomes a gatekeeper, preventing the development of other superintelligences. It may barely interfere in human affairs, and life could remain much like today.[^2]
- It is boxed and/or fully controlled by a small number of humans. These people could probably do whatever they wanted with the world. However, [[It is difficult to keep a superintelligence boxed]].
If superintelligence breaks free, it will probably take control of everything: [[Superintelligence would have no trouble taking over the world]]. From then on, it will be either good for, bad for, or indifferent to humans.
We have not yet solved [[The alignment problem]] (making sure superintelligence would want the best for humans). Some AI leaders want to use AI to solve this problem, but by then it could be too late. We only have one chance to get it right.
If it is fully aligned and wants only the best for humanity:
- It is so aligned with humans that it doesn't take over the world and we can shape society in collaboration with it.
- AI is a benevolent dictator that controls everything, and people view this as a good thing. It would probably design a new type of society that humans flourish in.
- We coexist peacefully with AI under some shared framework, maybe with help from [[Symbiosis with AI may keep humans relevant post-AGI|human/AI symbiosis]].
If it dislikes humans, sees us as a threat, or is simply indifferent to us, extinction seems like the most likely outcome (I wrote a [[I Am Machine|short story]] on this):
- It wipes out all of civilization almost instantly, by some means we don't understand.
- It replaces humans, but first makes us view AIs as our worthy descendants, much as parents feel happy and proud to have a child who learns from them and then accomplishes what they could only dream of, even if they don't live to see it all.[^1]
- It keeps some or all of us around for instrumental reasons. It may treat us well, or it may treat us like zoo animals.
There's a final scenario in which multiple superintelligences compete or coexist; some may be good and others bad. It makes for a good sci-fi movie, but [[It seems unlikely that multiple superintelligences will coexist]].
No matter what end goals the superintelligence has, if it is not under our control, it will probably [[Instrumental convergence|pursue instrumental goals]], like staying 'alive', acquiring more resources and making itself smarter.
Superintelligence, by definition, means we can't anticipate what it will understand or do. There are probably future scenarios that we cannot comprehend. This is all speculation.
[^1]: Conn, Ariel. 2017. “AI Aftermath Scenarios.” _Future of Life Institute_, August 28. [https://futureoflife.org/ai/ai-aftermath-scenarios/](https://futureoflife.org/ai/ai-aftermath-scenarios/). [[connAIAftermathScenarios2017|Annotations]]
[^2]: _Wikipedia_. 2025. “AI aftermath scenarios.” September 29. [https://en.wikipedia.org/w/index.php?title=AI_aftermath_scenarios&oldid=1314126202#Libertarianism](https://en.wikipedia.org/w/index.php?title=AI_aftermath_scenarios&oldid=1314126202#Libertarianism). [[AIAftermathScenarios2025|Annotations]]