Opinion & Analysis

The longtermist fear of a future malevolent superintelligence is hindering our progress today

After many months of intense negotiations, a provisional agreement on the AI Act was finally reached at the end of 2023. This makes the EU the first jurisdiction with comprehensive AI legislation, while other countries are still trying to figure out how to regulate AI. Their efforts are increasingly challenged by a concerning turn in the discourse around AI, and even though Europe may be a little further ahead, we should still be paying close attention to this shift.

The fear of a rogue superintelligence taking over the world (à la Terminator’s Skynet) is increasingly influencing how decision makers perceive AI. The drama around the firing and swift rehiring of OpenAI CEO Sam Altman showed how entrenched this radical shift has become, revealing a deep split in the AI community between the ‘doomer’ and ‘boomer’ factions.

The OpenAI debacle was not an isolated incident unique to Silicon Valley, though. Even European Commission President Ursula von der Leyen alluded to the risk of human extinction through AI in her 2023 State of the Union address (language that thankfully does not appear in the AI Act), not to mention the UK’s AI Safety Summit, which was likewise surrounded by alarmism over existential risks.

Introducing longtermism – a flawed philosophy
Importantly for any country still working towards regulation, these are not just fringe concerns, but the echoes of a philosophy that is becoming increasingly rooted in the AI community. This philosophy, often labelled ‘longtermism’ (and closely related to Effective Altruism), maintains that positively influencing the long-term future should be the absolute priority for today’s policymakers.

Longtermists prefer to discuss a hypothetical future human extinction rather than regulate the AI systems that affect our everyday lives today, such as the AI tools used by public administrations, in healthcare systems or in hiring processes. In short, they want the opposite of an AI Act that establishes concrete rules that can be applied now.

While neither long-term thinking nor risk prevention is inherently bad, the way longtermists translate their core belief into policy proposals is deeply flawed and could fundamentally mislead policymakers in other jurisdictions.

Usually, good policymaking rests on demonstrating that a proposed measure is preferable to the ‘counterfactual’, a scenario in which no new policy is adopted. To take decisions, policymakers need to identify and measure the positive and negative impacts of their proposed policy and of alternative courses of action over time. Inevitably, when conducting a cost-benefit impact assessment, they need to ‘discount’ future consequences: discounting prevents one euro of impact in the future from being equated with one euro of impact today, reflecting inflation rates and opportunity costs.

But discounting also reflects the risk that the future impact does not materialise at all. It lets policymakers weigh competing goals by specifying how likely the impacts of different courses of action are and how large those positive or negative impacts could be. This illustrates the inherent complexity of decision making. And it is in this methodological detail of policymaking that longtermists get it wrong.

They essentially claim that the discount rate should be zero. Because their goal is to maximise impact on the lives of far-off future people, they argue that people today have no special claim on policy. If anything, future populations should be given priority: they are in the (hypothetical) majority, and it is their interests that should determine today’s actions.
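A simple worked illustration (with hypothetical numbers, not drawn from any actual impact assessment) shows why the discount rate matters so much. In a standard cost-benefit calculation, a future impact of value v, expected in year t with probability p, is weighted by a discount rate r:

\[
PV = \frac{p \cdot v}{(1 + r)^{t}}
\]

With r = 3%, an impact worth 100 arriving in 50 years counts for roughly 23 today, and impacts centuries away count for almost nothing. Set r = 0, as longtermists effectively propose, and even a tiny probability of an astronomically large impact in the distant future will outweigh any benefit or harm to people alive now.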

How longtermists get it wrong on AI
Consequently, the longtermist approach requires policymakers to single-mindedly prioritise avoiding the worst-case scenario. In the very far future, human extinction caused by a malevolent superintelligence is an excellent candidate for such a scenario.

However, is it the only one? Is a rogue superintelligence really the most likely scenario? And should we therefore disregard other courses of action that may make sense for us today?

Longtermists’ exclusive focus on a single far-future risk from superintelligence (which may never materialise) leads to two dramatic oversights in long-term thinking (while also distracting us from negative impacts that are already relevant today – but that’s another story for another day).

First, the longtermists overlook other, arguably more tangible, existential threats like climate change, another pandemic, or even disinformation-driven conflict. Ironically for longtermism, AI could even be part of the solution to these threats. AI can help reduce carbon emissions, monitor infectious diseases, boost scientific breakthroughs in medicine and physics, and moderate online content.

Yet longtermists disregard these positive effects and AI’s potential to mitigate other existential risks. They overlook the fact that AI could come out favourably in an impact assessment precisely because it helps avert human extinction by other means.

This leads us to the second flaw in longtermist reasoning – the inability to weigh competing goals, because there is only one priority, namely the survival of Homo sapiens.

However, mere survival might not be enough for future populations – and neither is it enough for the present population. Education, meaningful employment opportunities, wellbeing, and healthy ecosystems are just a few of the additional criteria we might want to consider for both the present and the future. This is why several countries around the world have committed to preserving the so-called ‘four capitals’ (financial and physical, human, social and natural) as their long-term policy goals.

Again, AI can be an essential element in the pursuit of this form of inter-generational social contract. Stopping AI’s development now, or subordinating it to a single very long-term (and, again, far from certain) risk, may prevent our societies from harnessing these benefits in the here and now.

Policymaking is about unpacking trade-offs, both between different impacts and between courses of action across time. This is why policymakers need sound advice on the real and realistic impacts of AI – the ones now regulated under the AI Act – rather than incomplete and methodologically flawed inputs. There should be room for the long term, but not for long-sightedness, and certainly not for overly narrow visions of the future. We don’t need narrow telescopes trained on AI but rather a 360° vision.

Thankfully, it seems that EU regulators have not been swayed by the alarmism surrounding the AI Act’s drafting. If the final text preserves the current focus and balance, the AI Act could become an inspiration for a more meaningful, less long-sighted approach to global AI governance.

And that’s a very good outcome – both for us living in the here and now, and for future generations yet to arrive.

About the author

Paula Gürtler is Research Assistant in the Global Governance, Regulation, Innovation and Digital Economy (GRID) unit at CEPS
