If you’ve heard a lot of pro-AI chatter in recent days, you’re probably not alone. AI developers, prominent AI ethicists and even Microsoft co-founder Bill Gates have spent the past week defending their work.
That’s in response to an open letter published last week by the Future of Life Institute, signed by Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, calling for a six-month pause on work on AI systems that can compete with human-level intelligence.
The letter, which now has more than 13,500 signatures, expressed fears that an “out-of-control race” to develop programs such as OpenAI’s ChatGPT, Microsoft’s Bing AI chatbot and Alphabet’s Bard could have negative consequences.
But large swaths of the tech industry, including at least one of its biggest names, are pushing back.
“I don’t think asking any one particular group to stop is the solution to the challenges,” Gates told Reuters on Monday. Halting development across a global industry would be difficult, Gates said, though he agreed that more research is needed to “identify the hard areas” the industry is facing.
Experts say that’s what makes the debate interesting: The open letter may cite some valid concerns, but its proposed solution seems impossible to achieve.
What are Musk and Wozniak worried about?
The open letter’s concerns are relatively straightforward: “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
AI systems often come with programming biases and potential privacy issues. They can spread misinformation widely, especially when used maliciously.
And it’s easy to imagine companies trying to save money by replacing human jobs, from personal assistants to customer service representatives, with AI language systems.
Italy has already temporarily banned ChatGPT over privacy concerns stemming from an OpenAI data breach. The U.K. government published regulation recommendations last week, and the European Consumer Organisation has also called on lawmakers across Europe to ramp up rules.
In the U.S., some members of Congress have called for new laws to regulate AI technology. Last month, the Federal Trade Commission issued guidance for businesses developing such chatbots, implying that the federal government is keeping a close eye on AI systems that could be used by fraudsters.
And several state privacy laws passed last year aim to force companies to disclose when and how their AI products work, and to give customers the chance to opt out of providing personal data for AI-automated decisions.
Those laws are currently active in California, Connecticut, Colorado, Utah, and Virginia.
What do AI developers say?
At least one AI safety and research company isn’t worried just yet: San Francisco-based Anthropic wrote in a blog post last month that “existing technologies” do not pose an imminent concern.
Anthropic, which received a $400 million investment from Alphabet in February, has its own AI chatbot. In the same blog post, it noted that future AI systems could become “more powerful” over the next decade, and that building guardrails now could “help reduce risk” further down the road.
The problem: As Anthropic wrote, no one is sure what those guardrails could or should look like.
A company spokesperson told CNBC Make It that the open letter’s ability to foster dialogue on the topic is useful. The spokesperson did not specify whether Anthropic would support a six-month pause.
In a Wednesday tweet, OpenAI CEO Sam Altman acknowledged that “an effective global regulatory framework, including democratic governance” and “substantial coordination” among leading Artificial General Intelligence (AGI) companies could help.
But Altman, whose Microsoft-backed company makes ChatGPT and helped develop Bing’s AI chatbot, did not specify what those policies might look like, and did not respond to CNBC’s request for comment on the open letter.
Some researchers raise another issue: Pausing research could stall progress in a fast-moving industry and allow authoritarian countries to push ahead with AI systems of their own.
Highlighting the potential dangers of AI could even encourage bad actors to adopt the technology for nefarious purposes, says Richard Socher, an AI researcher and CEO of AI-powered search engine startup You.com.