I hate this kind of thing:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Statement on AI Risk | CAIS (safe.ai)
That statement is bullshit.
Now, I am sure that many of the people signing the statement are sincere in their belief that one day an AI god will rise above us and have the power to destroy us all. Let me just say that we shall agree to disagree on that point. Note, however, that other signatories of that statement, such as Sam Altman, the CEO of OpenAI, have a vested interest in focusing on long-term issues with AI rather than short-term problems with their existing products.
An artificial superintelligence is not an immediate problem. We have spent billions of dollars over the last decade or so on automated cars, and we haven't come close to solving that problem. The bloody things aren't even smart enough to reliably get out of the way of emergency vehicles. ChatGPT cannot do basic research without lying to us. I don't think we are in any danger from our robot overlords anytime soon.
But if we pretend we are, well, then that is awfully convenient for certain companies that, for example, create products that spew misinformation after being trained on copyrighted works, isn't it? Or that make drivers sign NDAs to discourage them from reporting safety concerns about allegedly self-driving cars, and have had a treasure trove of leaked material show that those cars perform much worse than the company lets on in public. Or that create programs that discriminate in hiring, housing, or criminal proceedings.
Look, I love technology. I went to college to be the next Perry Mason (though not as telegenic; Raymond Burr was a handsome man) and changed to engineering because I fell in love with solving problems. I am a self-taught programmer. I have an e-ink tablet, for pity's sake. Do you know how useless an e-ink tablet is for 90% of what a tablet does? But do you know how cool an e-ink tablet is for everything else? And yet I hate technology bullshit. I hate how finance bros have turned what should be a path to better lives for everyone into a route to immiseration and loss for almost everyone but them.
And we seem to be in a golden age of technology bullshit.
This statement, and the letter that preceded it, is at best a misguided overreaction to hype. At worst, and what I believe is more likely, it is a piece of propaganda being used by at least some of the people behind it to distract the public and regulators from the very real harms and ethics violations occurring today. It sounds altruistic to protect people from the oncoming AI apocalypse. But you might as well be protecting people from the oncoming Easter Bunny apocalypse.
Regulation of AI is important, yes. But it has to be focused on the real AI systems in use today. It has to be designed to alleviate real harms being inflicted on real people living today. This handwaving about some theoretical god-like AI coming for our precious essence sometime in the future is a distraction from the real damage real companies are doing to real people in the real world in real time. We shouldn't let their bullshit distract us from that very real problem.