Vitalik Buterin Calls for AI Hardware Pause to Safeguard Humanity
Ethereum co-founder proposes measures to slow AI progress
Vitalik Buterin, co-founder of Ethereum, has suggested a temporary global reduction in industrial-scale computing power to give humanity more time to address the potential risks of artificial superintelligence. In a blog post dated January 5, Buterin explored the idea of a “soft pause” on industrial-scale AI hardware as part of his “defensive accelerationism” (d/acc) approach to technology.
According to Buterin, superintelligent AI, capable of surpassing human intellect across all fields, might emerge within five years. While acknowledging that the consequences of such a development are uncertain, he floated the idea of reducing global computing power by 99% for up to two years to prepare for the challenges it could bring.
The concept of AI superintelligence has sparked significant concern among tech leaders. In March 2023, over 2,600 experts signed an open letter urging a pause on AI development, warning of its potential risks to humanity. Buterin’s proposal echoes those concerns but is more conditional, framing a temporary slowdown as a last-resort measure rather than an outright halt.
Buterin outlined potential safeguards for managing AI hardware during a pause, including mandatory registration of AI chips and international oversight. He even proposed integrating blockchain technology to ensure transparency in hardware regulation. However, he emphasized that such measures would only be pursued if legal frameworks, like liability rules for AI damages, proved insufficient.
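To make the idea concrete, the sketch below shows one way a transparent chip registry with multi-party sign-off could work in principle. It is not taken from Buterin’s post: the class names, oversight bodies, and quorum rule are all hypothetical, and the “blockchain” element is reduced to a simple hash-chained, publicly auditable log.

```python
# Illustrative sketch only (not Buterin's design): a hash-chained, append-only
# registry of industrial-scale AI chips, plus a quorum check that a chip has
# sign-off from several oversight bodies before it may operate.
# All names (ChipRegistry, "oversight-body-A", etc.) are hypothetical.

import hashlib
import json
import time


class ChipRegistry:
    """Append-only, hash-chained log of chip registrations and approvals."""

    def __init__(self):
        self.entries = []    # each entry links to the hash of the previous one
        self.approvals = {}  # chip_id -> set of oversight bodies that signed off

    def _append(self, record: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = {"record": record, "prev_hash": prev_hash, "ts": time.time()}
        entry_hash = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**payload, "hash": entry_hash})
        return entry_hash

    def register_chip(self, chip_id: str, operator: str) -> str:
        # Mandatory registration: every chip gets a public, tamper-evident entry.
        self.approvals.setdefault(chip_id, set())
        return self._append({"type": "register", "chip_id": chip_id, "operator": operator})

    def approve(self, chip_id: str, body: str) -> str:
        # An international oversight body records its approval for this chip.
        self.approvals[chip_id].add(body)
        return self._append({"type": "approve", "chip_id": chip_id, "body": body})

    def may_operate(self, chip_id: str, quorum: int) -> bool:
        """A chip may run only if enough oversight bodies have approved it."""
        return len(self.approvals.get(chip_id, set())) >= quorum

    def verify_chain(self) -> bool:
        """Anyone can recompute the hash chain to audit the registry."""
        prev = "0" * 64
        for e in self.entries:
            expected = hashlib.sha256(
                json.dumps(
                    {"record": e["record"], "prev_hash": e["prev_hash"], "ts": e["ts"]},
                    sort_keys=True,
                ).encode()
            ).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True


if __name__ == "__main__":
    registry = ChipRegistry()
    registry.register_chip("chip-001", operator="ExampleLab")
    registry.approve("chip-001", "oversight-body-A")
    registry.approve("chip-001", "oversight-body-B")
    print(registry.may_operate("chip-001", quorum=2))  # True
    print(registry.verify_chain())                     # True
```

The point of the sketch is the transparency property: because every registration and approval is chained to the previous entry by its hash, any outside observer can detect tampering with the record, which is the role Buterin suggests blockchain-style infrastructure could play in hardware regulation.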
Buterin’s stance reflects his broader philosophy of d/acc, which champions careful, responsible tech development over unrestricted innovation. As the AI landscape evolves, his proposal has ignited fresh debate on how society should navigate the delicate balance between progress and safety.