Today, a group of AI industry leaders, academics, and even a few celebrities raised their voices. Their urgent call? To reduce the risk of AI causing a global catastrophe.
They’re saying it loud and clear: the risk of extinction from AI should sit alongside pandemics and nuclear war on the global priority list.
The call came in a statement published by the Center for AI Safety. The signatories are a who’s who of the AI world, including OpenAI CEO Sam Altman and AI pioneer Geoffrey Hinton.
Joining them are executives and researchers from Google DeepMind and Anthropic, Microsoft CTO Kevin Scott, security expert Bruce Schneier, climate activist Bill McKibben, and even musician Grimes.
The Center’s director, Dan Hendrycks, offered more context on Twitter. The initiative, suggested by University of Cambridge AI professor David Krueger, isn’t about sidelining other AI concerns like bias or misinformation. Hendrycks likened the move to atomic scientists sounding the alarm about their own creations, and his message is that tackling multiple dangers at once is doable. It’s not an ‘either/or’ situation but a ‘yes/and’ one.
Hendrycks emphasizes that focusing solely on current harms while ignoring future ones is risky, just as the reverse would be. Sound risk management means striking a balance: staying vigilant about present threats without turning a blind eye to those on the horizon.