Existential Risk – Diplomacy and Governance

Today, we launched our latest report at the Embassy of Finland in London. It lays out three concrete recommendations for the international community to mitigate existential risk. The executive summary is here. The full report can be found here. The 2015 Paris Agreement represented a huge global effort to safeguard future generations from damaging climate […]

More Must Be Done to Guard Against Global Catastrophic Risks

Crossposted from Huffington Post. By Sebastian Farquhar. Global catastrophes sometimes strike. In 1918 the Spanish Flu killed as many as one in twenty people. There have been even more devastating pandemics – the Black Death and the 6th-century Plague of Justinian may each have killed nearer to one in every six people on Earth. […]

Workshop: Existential Risk – Diplomacy and Governance

On February 8th and 9th, around twenty leading academics and policy-makers from the UK, USA, Germany, Finland, and Sweden gathered at our workshop in Oxford to discuss the governance of existential risks. This brought together a mixture of specialists in relevant subject domains, diplomats, policy experts, and researchers with broad methodological expertise in existential risk. We had […]

Three areas of research on the superintelligence control problem

This is a guide to research on the problem of preventing significant accidental harm from superintelligent AI systems, designed to make it easier to get started on work in this area and to understand how different kinds of work could help mitigate risk. I’ll be updating this guide with a longer reading list and more detailed […]

How much does work in AI safety help the world?

Owen Cotton-Barratt and Daniel Dewey. There has been some discussion lately about whether we can estimate how likely efforts to mitigate existential risk from AI are to succeed, and about what reasonable estimates of that probability might be. In a recent conversation between the two of us, Daniel mentioned that he didn't have a […]

Global risks: the wildfire in the commons

Abstract: “Technological developments can create new types of global risk, including risks from climate change, geo-engineering, and emerging biotechnology. These technologies have enormous potential to make people better off, but the benefits of innovation must be balanced against the risks they create. Risk reduction is a global public good, which we should expect to be […]

"Czech-2013-Prague-Astronomical clock face" by Godot13 - Own work. Licensed under CC BY-SA 3.0 via Wikimedia Commons - http://commons.wikimedia.org/wiki/File:Czech-2013-Prague-Astronomical_clock_face.jpg#mediaviewer/File:Czech-2013-Prague-Astronomical_clock_face.jpg

Allocating risk mitigation across time – FHI technical report

Owen Cotton-Barratt has just released a Future of Humanity Institute technical report, "Allocating risk mitigation across time", on the strategic considerations in the timing of work on catastrophic risks such as artificial intelligence. From his abstract: This article is about priority-setting for work aiming to reduce existential risk. Its chief claim is that all else […]

Strategic considerations about different speeds of AI takeoff

Owen Cotton-Barratt and Toby Ord. There are several kinds of artificial general intelligence (AGI) which might be developed, and different scenarios which could play out after one of them reaches a roughly human level of ability across a wide range of tasks. We shall discuss some of the implications we can see […]

The timing of labour aimed at reducing existential risk

Toby Ord, Oxford University, Future of Humanity Institute. Work towards reducing existential risk is likely to happen over a timescale of decades. For many parts of this work, the benefits of that labour are greatly affected by when it happens. This has a large effect when it comes to strategic thinking about what to do […]