GPT-4 alert: More than a thousand CEOs and academics call for a six-month pause in the development of powerful artificial intelligence systems

Tesla founder Elon Musk (NTB/Carina Johansen via REUTERS)

In an open letter citing potential risks to society and humanity, Elon Musk and a group of academics, artificial intelligence (AI) experts, and industry executives are calling for a six-month pause in the development of systems more powerful than OpenAI's recently released GPT-4.

Earlier this month, Microsoft-backed OpenAI unveiled the fourth version of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users with its wide range of applications, from engaging in human-like conversations to composing songs and summarizing long documents.

The letter, issued by the nonprofit Future of Life Institute and signed by more than 1,000 people including Musk, calls for a pause in advanced AI development until shared safety protocols for such systems are developed, implemented and audited by independent experts. “Powerful artificial intelligence systems should be developed only once we are sure that their effects will be positive and their risks manageable,” the letter said.

The letter details potential risks to society and civilization from human-competitive AI systems in the form of economic and political disruptions, and calls on developers to work with policymakers on governance and regulatory authorities. Co-signers included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI,” and Stuart Russell, a pioneer of research in the field.

Iceland would train GPT-4 to keep its local language from going extinct in the face of the digital advance. (Freepik)

Concerns in the European Union

Concerns grew on Monday when the EU police force, Europol, joined a chorus of ethical and legal concerns about advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime. Meanwhile, the UK government unveiled proposals for an “adaptive” regulatory framework around artificial intelligence.

The government’s approach, outlined in a policy document released Wednesday, would split responsibility for governing artificial intelligence among its regulators for human rights, health and safety, and competition, rather than creating a new body dedicated to the technology.

Even Elon Musk, whose automaker Tesla uses AI for its Autopilot system, has been vocal about his concerns over AI development.

A response from ChatGPT, an artificial intelligence chatbot developed by OpenAI (REUTERS/Florence Lo/Illustration/File)

The GPT case

Since its launch last year, OpenAI’s ChatGPT has prompted rivals to accelerate the development of similar large language models and companies to integrate generative AI models into their products. Last week, OpenAI announced it had partnered with around a dozen companies to incorporate their services into its chatbot, allowing ChatGPT users to order groceries via Instacart or book flights via Expedia.

Sam Altman, CEO of OpenAI, has not signed the letter, a spokesperson for Future of Life told Reuters.

“The letter is not perfect, but the spirit is right: We need to slow down until we better understand the ramifications,” said Gary Marcus, a New York University professor who signed the letter. “The big players are becoming increasingly secretive about what they are doing, making it harder for society to defend itself from any harm that may materialize.”

However, critics have accused the letter’s signatories of promoting “AI hype,” arguing that claims about the technology’s current potential have been grossly exaggerated.

“These kinds of statements are meant to generate excitement. They are meant to make people worry,” said Johanna Björklund, an AI researcher and associate professor at Umeå University. “I don’t think there’s a need to pull the parking brake.” Rather than halting research, she said, AI researchers should be subject to greater transparency requirements. “If you do AI research, you have to be very transparent about how you do it.”

With information from Reuters
