A rhetorical analysis of the public debate around AI Safety

<aside> 🚨 *Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Statement signed in 2023 by many prominent AI scientists, AI company CEOs, and public figures*

</aside>

Today, the very people developing a new technology are warning us that it carries a significant chance of literally killing everyone on Earth within our lifetime. Surely such a situation should precipitate an all-out reaction from the public, at the very least a demand to pause and reconsider whether we should really be developing such a technology?

And yet, progress is continuing at full speed and funding is higher than ever.

Many people in key positions are not that worried about the above statement. This project is an investigation of the public debate around AI Safety. Its aim is to determine whether there are any serious arguments falsifying the following model:

  1. Artificial Super Intelligence (ASI) will be created soon
  2. ASI will be able to wipe out humanity
  3. ASI will have incentives to wipe out humanity
  4. There won’t be a solution to this by the time ASI is created
  5. Humanity being wiped out is a bad thing

<aside> πŸ‘‰ One would assume that 5 does not need to be included in this model, but there are actually many influential people who say they are OK with the disempowerment or even extinction of humanity.

</aside>

Unfamiliar with AI X-Risk? Read This

What is this Project about?

<aside> πŸ—οΈ This is a work in progress. Expect that everything can and will change: this page, every other page, the project goals, deliverables, etc.

</aside>

I am aggregating and analyzing the arguments for and against AI Safety.

My plan is to: