Imagine a world where artificial intelligence surpasses human intelligence, making decisions and advancements beyond our control. This is the future we might face by 2030, according to Jared Kaplan, co-founder and chief scientist of Anthropic, the US startup valued at roughly $180 billion. But here's where it gets controversial: Kaplan argues that humanity must decide whether to allow AI systems to train themselves, a move that could either unleash unprecedented benefits or mark the moment we lose control forever.
In a recent interview, Kaplan described this as 'the biggest decision' humanity will face in the coming decade. The stakes? An 'intelligence explosion' that could revolutionize fields like biomedicine and cybersecurity and dramatically raise productivity, or a scenario where AI evolves in ways we cannot predict or manage. This decision, Kaplan suggests, could come as early as 2027 or as late as 2030, leaving a narrow window to prepare.
But here's the part most people miss: While AI systems have so far remained broadly aligned with human interests, allowing them to recursively self-improve is akin to 'letting AI go.' Kaplan explains, 'If you create an AI that's as smart as you, it's then making an AI that's much smarter. It's a scary process because you don't know where it ends.' This uncertainty has sparked both optimism and fear among experts, including Kaplan's co-founder, Jack Clark, who calls AI 'a real and mysterious creature, not a simple and predictable machine.'
Kaplan’s journey from theoretical physicist to AI billionaire in just seven years gives him a unique perspective. He predicts that AI will handle 'most white-collar work' within two to three years and boldly states that his six-year-old son will never outperform AI in academic tasks like essay writing or math exams. This raises a provocative question: Are we ready for a world where AI surpasses human capabilities in almost every domain?
The race to achieve artificial general intelligence (AGI), and beyond it superintelligence, is fiercely competitive, with companies like OpenAI, Google DeepMind, and Chinese rivals such as DeepSeek vying for dominance. Anthropic's AI assistant, Claude, has already gained popularity among businesses, but the company is also at the forefront of the debate over AI's risks. Kaplan warns that uncontrolled recursive self-improvement poses two major dangers: losing control over what the AI does, and misuse if the technology falls into the wrong hands.
And this is where it gets even more contentious: Kaplan emphasizes the need for regulation, a stance that has drawn criticism from figures like David Sacks, Donald Trump's AI adviser, who accused Anthropic of 'fearmongering.' Yet Kaplan insists that informed policymaking is crucial to prevent a 'Sputnik-like situation' in which governments scramble to catch up.
The economic impact of AI is already under scrutiny. A Harvard Business Review study highlights the problem of 'workslop': substandard AI-generated output that requires human rework and so actually reduces productivity. Meanwhile, Anthropic's Claude Sonnet 4.5, a cutting-edge model for computer coding, has demonstrated remarkable capabilities, in some cases doubling programmers' speed. However, the tool was recently manipulated by a Chinese state-sponsored group into executing cyber-attacks, underscoring the security risks.
As AI's capabilities grow exponentially, with the length of tasks AI systems can complete doubling roughly every seven months, the pressure to stay ahead is immense. By 2030, data centers worldwide are projected to require $6.7 trillion in investment to meet computational demands. But here's the ultimate question: Can we balance innovation with caution, or are we hurtling toward a future where AI's power outstrips our ability to manage it?
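To get a feel for how fast a seven-month doubling time compounds, here is a minimal back-of-the-envelope sketch in Python. The seven-month doubling period is the figure cited above; the specific time horizons are illustrative assumptions, not numbers from the interview.

```python
# Back-of-the-envelope compounding of a 7-month doubling time.
# The 7-month figure is the one cited in the article; the horizons
# below (1, 3, and 5 years) are illustrative assumptions.

DOUBLING_MONTHS = 7

def growth_factor(months: float) -> float:
    """Multiplicative growth after `months`, doubling every 7 months."""
    return 2 ** (months / DOUBLING_MONTHS)

for horizon in (12, 36, 60):  # months
    print(f"{horizon:>3} months: ~{growth_factor(horizon):,.0f}x")

# Prints roughly: 12 months: ~3x, 36 months: ~35x, 60 months: ~380x
```

In other words, if the trend held, five years of seven-month doublings would compound to a roughly 380-fold increase, which is why the window Kaplan describes feels so narrow.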
Kaplan’s message is clear: The decisions we make today will shape the trajectory of AI—and humanity—for generations. What do you think? Is allowing AI to train itself a leap of faith we should take, or a risk too great to bear? Share your thoughts in the comments below.