Elon Musk and a group of AI experts and industry executives are calling for a six-month pause in the development of systems more powerful than OpenAI's recently released GPT-4, in an open letter citing potential risks to society and humanity.
In the petition, posted on the futureoflife.org site, they call for a moratorium until safety measures are in place, including new regulatory authorities, oversight of AI systems, techniques that help distinguish the real from the artificial, and institutions capable of coping with the "dramatic economic and political disruptions (especially to democracy) that AI will cause."
Earlier this month, OpenAI, backed by Microsoft, unveiled the fourth version of its GPT (Generative Pre-trained Transformer) AI program, which has captivated users with its wide range of applications, from engaging users in human-like conversation to composing songs and summarizing long documents.
The letter, issued by the nonprofit Future of Life Institute and signed by more than 1,000 people, including Musk, called for a pause in the development of advanced AI until shared safety protocols for such designs have been developed, implemented and audited by independent experts.
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said. "In recent months we have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can reliably understand, predict, or control."
OpenAI did not immediately respond to a request for comment. The letter detailed the potential risks to society and civilization posed by human-competitive AI systems in the form of economic and political disruptions, and called on developers to work with policymakers on governance and regulatory authorities.
"Should we let machines flood our information channels with propaganda and lies? Should we automate away all the jobs, including the fulfilling ones? (…) Should we risk losing control of our civilization? Such decisions must not be delegated to unelected tech leaders," they concluded.
Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the "godfathers of AI," and Stuart Russell, a pioneer of research in the field.
The concerns arise as the European Union's police agency, Europol, on Monday joined a chorus of ethical and legal concerns over advanced AI such as ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime. Meanwhile, the UK government unveiled proposals for an "adaptable" regulatory framework around AI.
The government's approach, described in a policy paper published on Wednesday, would split responsibility for governing artificial intelligence (AI) among its existing regulators for human rights, health and safety, and competition, rather than creating a new body dedicated to the technology.
Musk, whose carmaker Tesla uses AI for its Autopilot system, has long voiced concerns about the technology. Since its release last year, OpenAI's ChatGPT has prompted rivals to accelerate the development of similar large language models, and companies to integrate generative AI models into their products.
Last week, OpenAI announced it had partnered with around a dozen companies to build their services into its chatbot, allowing ChatGPT users to order groceries via Instacart or book flights through Expedia. Sam Altman, chief executive of OpenAI, has not signed the letter, a Future of Life spokesperson told Reuters.
"The letter isn't perfect, but the spirit is right: we need to slow down until we better understand the ramifications," said Gary Marcus, a New York University professor who signed the letter. "The big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialize."
Critics accused the letter's signatories of promoting "AI hype," arguing that claims about the technology's current potential had been greatly exaggerated. Rather than pausing research, they argued, AI researchers should be subjected to greater transparency requirements. "If you do AI research, you should be very transparent about how you do it."
"These kinds of statements are meant to generate enthusiasm. They are meant to make people worried," said Johanna Björklund, an AI researcher and associate professor at Umeå University. "I don't think there's a need to pull the handbrake."