Silicon Valley’s Elite Turn to Bunkers as AGI Approaches

For some tech billionaires, a sense of urgency is setting in. Apocalyptic forecasts about the near-future development of Artificial General Intelligence (AGI), a technology expected to surpass human understanding and dramatically reshape the world, have led some powerful AI players down survivalist paths, including planning their own underground shelters. OpenAI co-founder and then-chief scientist Ilya Sutskever reportedly went further, suggesting the organization should build a doomsday bunker for its most important scientists before releasing technology this powerful.

As the conversation around AGI intensifies, Sam Altman, CEO of OpenAI, has remarked that AGI will arrive “sooner than most people in the world think.” Sir Demis Hassabis, CEO of DeepMind, predicts AGI within the next five to ten years. Progress has accelerated so sharply that Dario Amodei, co-founder of the AI startup Anthropic, has suggested “general-purpose AI” could emerge as soon as 2026.

The implications of AGI are profound. Experts broadly define it as the point at which machines can replicate human cognitive abilities, a prospect that raises both possibilities and ethical concerns. OpenAI’s launch of ChatGPT brought the technology to the attention of hundreds of millions of people and sparked sweeping conversations about what unpredictable, advanced AI systems might do.

In 2023, amid mounting pressure over AI safety, President Biden signed an executive order requiring certain companies to share their safety testing results with the federal government, an important step given the broad concern over the risks of such advanced AI technologies. That same year, the AI Safety Institute was established to investigate those risks further.

The mood in the tech community mixes optimism with wariness. Elon Musk has suggested that super-intelligent AI could usher in an era of “universal basic income,” arguing that making the technology widely available would radically improve economic opportunity and mobility for all. Even so, he acknowledges the need for safety precautions.

“If it’s smarter than you, then we have to keep it contained.” – Tim Berners-Lee

Containment is one response; escape is another. Reid Hoffman, co-founder of LinkedIn, has characterized the purchase of property in remote locations as a form of “apocalypse insurance,” reflecting a sentiment shared among Silicon Valley elites about the potential existential risks of AGI.

Mark Zuckerberg’s reported acquisition of nearly a dozen properties in Palo Alto, totaling approximately $110 million, points to a move toward more secure living arrangements. Zuckerberg himself has played the project down, describing it as “just like a little shelter, it’s like a basement,” a comparison that nonetheless hints at concern about future uncertainties.

With many tech billionaires investing in land whose underground spaces are suited for conversion into luxury bunkers, a clear pattern emerges. Sutskever’s reported declaration that “We’re definitely going to build a bunker before we release AGI” underscores the sense of urgency within the industry.

Not everyone is convinced AGI is even coming. Neil Lawrence, professor of machine learning at Cambridge University, has criticized the idea itself. He argues that equating AI with generalized human intelligence is misleading, stating, “The notion of Artificial General Intelligence is as absurd as the notion of an ‘Artificial General Vehicle.’” To Lawrence, the purpose of a vehicle is fundamentally shaped by its context, and in the same way most AI systems will be tailored to specific needs rather than made universally general.

“The right vehicle is dependent on the context.” – Neil Lawrence

This skeptical view highlights a sharp split among AI researchers over how soon, and whether, AGI will be achieved. Many emphasize that achieving true AGI is not merely a matter of time: it demands substantial computational resources, human ingenuity, and extensive trial and error.

Despite these opposing views, leaders in AI development are increasingly adopting practices that resemble safety and contingency planning. According to Fortune, some billionaires are buying remote ranches and farms equipped for high-security self-isolation.

“Saying you’re ‘buying a house in New Zealand’ is kind of a wink, wink, say no more.” – Reid Hoffman

Fears about what AI could eventually do, and its impact on society, continue to reverberate through tech circles. As advancements accelerate, so do calls for strong regulation and robust ethical guidelines. The founding of bodies like the AI Safety Institute represents an effort to address threats before they materialize.
