"Czech-2013-Prague-Astronomical clock face" by Godot13 - Own work. Licensed under CC BY-SA 3.0 via Wikimedia Commons - http://commons.wikimedia.org/wiki/File:Czech-2013-Prague-Astronomical_clock_face.jpg#mediaviewer/File:Czech-2013-Prague-Astronomical_clock_face.jpg

Allocating risk mitigation across time – FHI technical report

Owen Cotton-Barratt has just released a Future of Humanity Institute technical report, “Allocating risk mitigation across time”, on strategic considerations in the timing of work on catastrophic risks such as those from artificial intelligence.

From his abstract:

This article is about priority-setting for work aiming to reduce existential risk. Its chief claim is that all else being equal we should prefer work earlier and prefer to work on risks that might come early. This is because we are uncertain about when we will have to face different risks, because we expect diminishing returns of extra work, and because we expect that more people will work on these risks in the future.

I explore this claim both qualitatively and with explicit models. I consider its implications for two questions: first, “When is it best to do different kinds of work?”; second, “Which risks should we focus on?”.

As a major application, I look at the case of risk from artificial intelligence. The best strategies for reducing this risk depend on when the risk is coming. I argue that we may be underinvesting in scenarios where AI comes soon even though these scenarios are relatively unlikely, because we will not have time later to address them.
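The report itself develops this argument with explicit models. As a very rough, hypothetical sketch of the diminishing-returns intuition (not taken from the report), the toy Python model below assumes that risk reduction grows logarithmically in cumulative effort, that a fixed amount of effort arrives from others each year, and that only effort completed before the risk materialises counts. The arrival-date distribution and all numbers are made up for illustration.

```python
# Toy illustration (hypothetical, not from the report): marginal value of one
# extra unit of risk-mitigation work done today, under different AI arrival dates.

import math

# Hypothetical probability distribution over when the risk arrives.
arrival_probs = {2025: 0.1, 2035: 0.3, 2050: 0.6}
baseline_effort_per_year = 1.0   # effort contributed by others each year (assumed)
start_year = 2015

def cumulative_effort(year, extra_now=0.0):
    """Total effort accumulated by `year`, plus any extra unit added today."""
    return baseline_effort_per_year * (year - start_year) + extra_now

def risk_reduction(effort):
    """Diminishing returns: each additional unit of effort helps less."""
    return math.log1p(effort)

def marginal_value_of_work_now():
    """Expected gain from one extra unit of effort done today, by scenario."""
    for year, p in arrival_probs.items():
        gain = (risk_reduction(cumulative_effort(year, 1.0))
                - risk_reduction(cumulative_effort(year)))
        print(f"AI in {year} (p={p}): marginal gain {gain:.3f}, "
              f"probability-weighted {p * gain:.3f}")

marginal_value_of_work_now()
```

In this toy setup the “AI comes soon” scenario gets a much larger marginal gain per unit of probability, because far less other effort will have accumulated by then; that is the qualitative shape of the argument in the abstract, not a result from the report.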

You can read the full technical report here.


