Why You Should Be Concerned About AI (And What You Can Do About It)

What happens when AI becomes smarter than us?
Preventing an AI-related catastrophe - Problem profile
Why do we think that reducing risks from AI is one of the most pressing issues of our time? There are technical safety issues that we believe could, in the worst case, pose an existential threat to humanity.

The following is a summary of this article (linked above and to the left) written by Benjamin Hilton of the 80,000 Hours team. He neatly lays out why AI poses existential risks, aka the end of the world. This is why I'm writing this and getting involved in the ethical AI space. Achieving profound cognitive enhancement and moral maturation may not be possible before an AI-driven extinction event, and even if it is, that window may not last long if AI continues to evolve rapidly. No one knows with certainty what the outcome of superintelligent AI will be, but we had better do our best to align it with human values and mitigate existential risk. I encourage you to take the time to read the whole article, but for those who can't, here is the summary, also written by Hilton.

Summary

I expect that there will be substantial progress in AI in the next few decades, potentially even to the point where machines come to outperform humans in many, if not all, tasks. This could have enormous benefits, helping to solve currently intractable global problems, but could also pose severe risks. These risks could arise accidentally (for example, if we don’t find technical solutions to concerns about the safety of AI systems), or deliberately (for example, if AI systems worsen geopolitical conflict). I think more work needs to be done to reduce these risks.

Some of these risks from advanced AI could be existential — meaning they could cause human extinction, or an equally permanent and severe disempowerment of humanity. There have not yet been any satisfying answers to concerns — discussed below — about how this rapidly approaching, transformative technology can be safely developed and integrated into our society. Finding answers to these concerns is neglected and may well be tractable. I estimated that there were around 400 people worldwide working directly on this in 2022, though I believe that number has grown. As a result, the possibility of AI-related catastrophe may be the world’s most pressing problem — and the best thing to work on for those who are well-placed to contribute.

Promising options for working on this problem include technical research on how to create safe AI systems, strategy research into the particular risks AI might pose, and policy research into ways in which companies and governments could mitigate these risks. As policy approaches continue to be developed and refined, we need people to put them in place and implement them. There are also many opportunities to have a big impact in a variety of complementary roles, such as operations management, journalism, earning to give, and more — some of which we list below.

Our overall view

Recommended - highest priority

We think this is among the most pressing problems in the world.

Scale  

AI will have a variety of impacts and has the potential to do a huge amount of good. But we're particularly concerned about the possibility of extremely bad outcomes, especially an existential catastrophe. Some experts on AI risk think the odds of this are as low as 0.5%; others think they're higher than 50%. We're open to either being right, and you can see further discussion of this here. My overall guess is that the risk of an existential catastrophe caused by artificial intelligence by 2100 is around 1%, perhaps stretching into the low single digits. This puts me on the less worried end of 80,000 Hours staff: as an organisation, our take is that the risk is somewhere between 3% and 50%.

Neglectedness  

Around $50 million was spent on reducing catastrophic risks from AI in 2020, while billions were spent advancing AI capabilities. While we are seeing increasing concern from AI experts, in 2022 I estimated there were around 400 people working directly on reducing the chances of an AI-related existential catastrophe (with a 90% confidence interval ranging between 200 and 1,000). Of these, around three quarters appeared to be working on technical AI safety research, with the rest split between strategy (and other governance) research and advocacy, though the field is changing fast.

Solvability  

Making progress on preventing an AI-related catastrophe seems hard, but there are a lot of avenues for more research, and the field is very young. Governments started taking an active interest in regulating AI and mitigating these threats in 2023. So I think it's moderately tractable, though I'm highly uncertain; again, assessments of the tractability of making AI safe vary enormously.