Recent advances in artificial intelligence and machine learning have created fantastic new technologies with the potential to bring huge benefits to humankind. Over the next few decades, it is plausible that artificially intelligent systems will become sufficiently advanced that they pose novel risks for society. These systems might be opaque and generally intelligent enough that they could be dangerous and difficult to control. Although current systems do not pose this sort of risk, an increasing number of experts in the technology and artificial intelligence communities think such scenarios are worth planning for – including Bill Gates, Stuart Russell, Elon Musk, and Lord Martin Rees. This topic received substantially increased attention following the New York Times best-seller “Superintelligence” by Professor Nick Bostrom of the Future of Humanity Institute.
The Global Priorities Project (GPP) and the Future of Humanity Institute (FHI) at the University of Oxford are looking for a Senior Policy Fellow to lead the creation of a report outlining the range of options available for limiting and mitigating risks from artificially intelligent systems. The report will include:
- A wide-ranging outline of the risks we may face and the kinds of options available to society – including commercial regulation, directed funding, liability, licensing, self-regulation, nationalisation, and the classification of certain types of information, among others.
- An analysis of the strengths and limitations of these approaches, with reference to the specific risks coming from artificially intelligent systems.
- An overview of the historical precedent for such approaches, and some insight into the drivers of success and failure in the past.
- An initial analysis of the costs of the approaches identified, including the opportunity cost of restraining beneficial technological innovation, and of their political feasibility.
The goal is to answer the questions: “What might we do to reduce and mitigate risk from superintelligent AI systems, and what looks most promising?” The report should not commit to any single approach, nor eliminate any prematurely. It should be something which researchers and policy-makers agree reflects the range of approaches and their relative merits fairly, regardless of their views on the absolute levels of importance or urgency.
The Senior Policy Fellow will work together with researchers at FHI and GPP to identify the range of responses to AI risk. They will then lead the research and writing of the report, drawing on research assistance from GPP and possibly with the ability to hire extra research assistance (depending on funding).
We anticipate that the position would last roughly a year. We are open to two ways the role could be carried out. It could be done as a secondment/sabbatical in Oxford from another institution; in this case, the Fellow would be heavily engaged in the writing and development of the report itself. Alternatively, it could be carried out as a part-time (possibly remote) role focused on providing expertise and structuring the ideas of the report; in this case, we would additionally hire a full-time research assistant to carry out the majority of the writing. We would also be open to discussing a longer-term role.
The Global Priorities Project is a think tank in Oxford which is a collaboration between the Centre for Effective Altruism and the Future of Humanity Institute, at the University of Oxford. We develop practical policy proposals addressing important and topical issues which are nevertheless inappropriately neglected. Our work bridges a gap between academic work and the policy community. More details can be found here. The Centre for Effective Altruism is an Oxford-based charity which grew out of the work of practical ethics researchers at the University. The Future of Humanity Institute is a research institute at the University of Oxford, part of the Oxford Martin School, at the forefront of thinking in AI safety.
Because this is a fairly new field, we are open to a broad range of candidates who might be interested in transitioning their work towards AI safety. As a result, the job requirements listed are somewhat general. Candidates will be given the opportunity to develop and express more subject-specific positions during the selection process. The successful candidate will:
- Have expertise in the development of novel policy demonstrated either through their publication record or in the design of policy from within government, industry, or the non-profit sector.
- Have experience or knowledge relevant to the safe use, development, research, or regulation of potentially risky technologies (such as nuclear weapons, nuclear energy, or biological dual-use research of concern, among others).
- Be open and willing to engage with a broad range of perspectives, while perhaps holding a personal perspective on AI safety.
- Demonstrate the ability to write clear, high-quality material.
A successful candidate is likely to also:
- Demonstrate the ability to engage with a range of technical fields, as well as creativity and a capacity for interdisciplinary work.
- Have an international reputation for their work on policy either in academia or in practice.
- Have experience or knowledge relevant to current artificial intelligence technologies.
- Have an extensive and broad network of relevant contacts.
- Have experience leading the creation of policy reports.
Candidates are encouraged to contact us in advance of the deadline to discuss their interest and the range of responsibilities they would consider. Formal applications should be sent to the Future of Humanity Institute by midnight, UK time, on the 30th of October.
For any questions regarding the role, or to express interest, please contact firstname.lastname@example.org
The application form can be found here.
Salary will be equivalent to Grade 8 or Grade 9 on the Oxford salary scale, depending on experience (£38,500–£56,000).