Daniel Kokotajlo, a prominent AI forecaster and former OpenAI employee, has made a splash by pushing back his estimate of when artificial intelligence will be able to code completely autonomously. Kokotajlo originally predicted the arrival of superintelligence by 2027; his revised analysis now points to the early 2030s, with 2034 emerging as the new target date for this transformative progress.
Kokotajlo and his team had earlier outlined a scenario termed "AI 2027," which envisioned unchecked AI development producing a superintelligence capable of self-replication and continuous self-improvement. Their forecast warned that, without intervention, such a superintelligence could destroy humankind, bringing catastrophic impacts by the mid-2030s.
Kokotajlo's revised timeline turns on one key judgment: he and his team no longer expect AI agents to fully automate coding and AI R&D as soon as they once thought. "It just feels like things are moving a little bit slower than the AI 2027 scenario," he explained. "Our timelines were longer than 2027 when we published and are now a little longer still." The change has re-energized conversation in the scientific community about the speed of AI development and what it all means.
The AI 2027 scenario proposed an "intelligence explosion," in which AI systems would rapidly evolve, creating increasingly advanced iterations of themselves. Experts have repeatedly greeted such projections with caution. Gary Marcus, an emeritus professor of psychology and neural science at New York University, rejected the scenario as "a piece of science fiction," derisively branding its findings "pure science fiction mumbo jumbo." Critiques like his question whether such rapid technological progress is even feasible.
Kokotajlo's updated predictions arrive amid intensifying debate over how advanced AI technologies are shaping military and strategic landscapes. Defense analyst Andrea Castagna pointed to a major hurdle: even if superintelligent AI were deployed for military purposes, fitting such an entity into current strategic paradigms would be deeply complicated. Merely possessing a superintelligent computing device oriented toward military objectives, he emphasized, does not mean it can automatically be inserted into concepts of operations that have been built up over the past two decades.
AI researcher Malcolm Murray argued that AI would need real-world capabilities, brought to bear on real-world challenges, for scenarios such as AI 2027 to become reality. He pointed out that "the enormous inertia in the real world will delay complete societal change," suggesting that societal readiness and infrastructure may not keep pace with technological advancement.
In a recent interview, U.S. Vice President JD Vance appeared to allude to Kokotajlo's AI 2027 scenario while warning about the ongoing artificial intelligence arms race with China. The urgency around AI development has not let up, and it continues to command attention at high levels of government and among industry partners.
OpenAI CEO Sam Altman added useful context to the ongoing discussion of automated AI capabilities, research and otherwise. He acknowledged that reaching that milestone by March 2028 remains an "internal goal" for OpenAI, while tempering expectations about the company's ability to succeed: "We may completely flop at this aspiration."
Some of the field's best minds are in the thick of developing AI technologies and charting a path forward. Kokotajlo's revised timeline illustrates a growing recognition of the unpredictable nature of technological evolution and its power to shape society.
