'The Statement on [[ASI is a type of AI that would greatly exceed human capabilities across virtually all cognitive tasks|Superintelligence]]' was released in October 2025 by the [Future of Life Institute](https://futureoflife.org/). It states:
> We call for a prohibition on the development of superintelligence, not lifted before there is
> - broad scientific consensus that it will be done safely and controllably, and
> - strong public buy-in. [^1]
Many experts, thought leaders, and celebrities have signed the statement, including two of the "Godfathers of AI". As of November 15, 2025, it has 122,000 signatures, among them mine and those of the friends and family I could convince.
I support this prohibition. I think AI has a significant chance of [[AI aftermath scenarios|causing human extinction]]. [[The potential upsides of AI are incredible]], but with all current and future human lives at stake, I don't think we should take that risk.
It is widely considered infeasible to pause the development of superintelligence, mostly because doing so is a [[Collective action problem|collective action problem]].
![[ad7ke8.jpg|300]]
I still think there is a lot to fight for. Small changes in regulation or public discourse could push labs to slow down (even if only a little) and invest more in safety. It is a noble fight, and anyone who takes it up has a chance to make a huge difference for the future of humanity.
[^1]: Future of Life Institute. 2025. "Statement on Superintelligence." Accessed November 15, 2025. [https://superintelligence-statement.org](https://superintelligence-statement.org). [[StatementSuperintelligence|Annotations]]