A rhetorical analysis of the public debate around AI Safety
<aside>
*"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."*

Statement signed in 2023 by many prominent AI scientists, AI company CEOs, and public figures.
</aside>
Today, the very people developing a new technology are warning us that it carries a significant chance of literally killing everyone on Earth within our lifetimes. Surely such a situation should provoke an all-out public reaction, or at the very least a demand to pause and reconsider whether we should be developing this technology at all?
And yet progress continues at full speed, and funding is higher than ever.
Many people in key positions are not particularly worried by the statement above. This project is an investigation of the public debate around AI Safety. Its aim is to determine whether any serious arguments falsify the following model:
<aside> One would assume that point 5 does not need to be added to this model, but there are in fact many influential people who say they would accept the disempowerment, or even the extinction, of humanity.
</aside>
<aside> This is a work in progress. Expect that everything can and will change: this page, every other page, the project goals, deliverables, etc.
</aside>
I am aggregating and analyzing all arguments for and against AI Safety.
My plan is to: