AI 2027: an artificial intelligence future that’s only two years away?
28 May 2025
A speculative essay on the (perhaps) faster-than-anticipated rise of a superhuman, superintelligent AI, by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean. It’s a long, possibly unsettling read, but well worth it.
The CEOs of OpenAI, Google DeepMind, and Anthropic have all predicted that AGI will arrive within the next 5 years. Sam Altman has said OpenAI is setting its sights on “superintelligence in the true sense of the word” and the “glorious future.” What might that look like? We wrote AI 2027 to answer that question. Claims about the future are often frustratingly vague, so we tried to be as concrete and quantitative as possible, even though this means depicting one of many possible futures.
Artificial General Intelligence (AGI) would mimic the full range of human cognitive abilities, whereas today’s AI performs specific tasks that would otherwise require human intelligence. I’m thinking HAL, the human-like computer in the 1968 film 2001: A Space Odyssey, might be an example of AGI, while ChatGPT and Claude are AI bots.
Some people think AGI will never arrive, but even an almost-superintelligent AI could be as menacing as some fear:
A week before release, OpenBrain gave Agent-3-mini to a set of external evaluators for safety testing. Preliminary results suggest that it’s extremely dangerous. A third-party evaluator finetunes it on publicly available biological weapons data and sets it to provide detailed instructions for human amateurs designing a bioweapon — it looks to be scarily effective at doing so. If the model weights fell into terrorist hands, the government believes there is a significant chance it could succeed at destroying civilization.
Doesn’t sound like much of a “glorious future” to me.