New paper
Published on 14.02.2026

AI going rogue? Let's ask science.


This paper by Jascha Bareis and team presents an integrative narrative review of the tacit background assumptions underlying AI existential risk (X-risk) futures. Once confined to science fiction, concerns about AI X-risks now shape debates at the crossroads of the tech world, NGOs, politics and (social) media. Despite growing attention, the plausibility of AI surpassing human controllability remains highly contested. Examining 81 peer-reviewed papers from Scopus and Web of Science, we find a fragmented discourse characterized by bold yet often unsubstantiated claims, including accelerationist growth models and speculative calculations of catastrophic tipping points. Anthropomorphic and speculative AI conceptualizations prevail, while interdisciplinary perspectives that consider issues of infrastructure, social agency, Big Tech's power position and politics remain scarce. Delineating how these speculative tendencies are detrimental to the current regulatory need to tackle AI harms, we deduce an AI X-risk heuristic and advocate for a shift in attention from the maximum possible negative consequences to the structural and socio-technical characteristics of how AI is embedded, which are the prerequisites for any AI futures to emerge.

https://link.springer.com/article/10.1007/s43681-025-00928-w

Concretely, we set out to investigate:

1. How is AI defined and related to existential risk, and how is risk understood?
2. How are time, probability and plausibility horizons conceptualised concerning the risks of an out-of-control AI?
3. Which background conditions (material, institutional, economic) and societal circumstances are discussed in the futures leading towards out-of-control AI?

Results:

- A significant portion of authors privileges alarmist narratives that rest on anthropomorphic conceptualisations of AI and a functionalist theory of mind; some even attribute faculties such as ‘consciousness’, ‘autonomy’ and ‘sentience’ to computational systems.

- Navigating these categories quickly brings a new background condition into play: a jump from assessing Artificial Intelligence to a far more speculative object of investigation, namely the attainment of Artificial General Intelligence (AGI).

- Calculations of probability, thresholds and occurrences are widespread in the AI X-risk community, suggesting that one can “calculate” the future. Some scholars proclaim the likelihood of a sudden loss of control (an ‘AGI flash’), depicted as unstoppable and accelerating.

- There is a complete lack of interdisciplinarity in the indexed scientific sample. The X-risk discourse is dominated by computer scientists and some analytical philosophers. Critical voices (they exist!) seem to publish elsewhere, but not in Web of Science and Scopus.

- Authors often overlook the socio-technical foundations of AI, dedicating minimal attention to the infrastructural and material preconditions that a supposedly accelerating AI would require.