Nature's latest editorial highlights how AI fear-mongering from tech companies narrows the debate on AI risks and regulation.
Talk of artificial intelligence destroying humanity plays into the tech companies’ agenda, and hinders effective regulation of the societal harms AI is causing right now.
Many AI researchers and ethicists to whom Nature has spoken are frustrated by the doomsday talk dominating debates about AI. It is problematic in at least two ways. First, the spectre of AI as an all-powerful machine fuels competition between nations to develop AI so that they can benefit from and control it. This works to the advantage of tech firms: it encourages investment and weakens arguments for regulating the industry. An actual arms race to produce next-generation AI-powered military technology is already under way, increasing the risk of catastrophic conflict — doomsday, perhaps, but not of the sort much discussed in the dominant ‘AI threatens human extinction’ narrative.
Second, it allows a homogeneous group of company executives and technologists to dominate the conversation about AI risks and regulation, while other communities are left out. Letters written by tech-industry leaders are “essentially drawing boundaries around who counts as an expert in this conversation”, says Amba Kak, director of the AI Now Institute in New York City, which focuses on the social consequences of AI.
For more, read the full editorial at Nature.