Who will be the Robert Oppenheimer of the artificial intelligence revolution? That was the question I asked myself as I read Mustafa Suleyman’s dazzling new book, The Coming Wave: Technology, Power and the 21st Century’s Greatest Dilemma. Perhaps it is Suleyman himself.
While Suleyman’s ideas about how to deal with the challenges posed by AI command our respect and attention, they are reminiscent of the agonizing dilemmas of the nuclear age, not to mention the toxic politics that coincided with the onset of the Cold War.
By now you will have read a great deal of hype and doom and gloom on the subject. But Suleyman’s is the book you cannot afford not to read. As one of the co-founders of DeepMind, along with Demis Hassabis and Shane Legg, Suleyman has been a key figure in the explosive advance of AI. He was in the room when DeepMind had its first breakthrough with the Deep Q-Network, which learned to excel at the computer game Breakout. He was “present at the creation” when DeepMind’s AlphaGo defeated Lee Sedol at the ancient Asian board game Go in Seoul in 2016, an event watched live by 280 million people.
Most recently, he helped Google create LaMDA (short for Language Model for Dialogue Applications), one of the revolutionary new Large Language Models (LLMs). LaMDA is so plausible in conversation that it convinced the Google engineer Blake Lemoine that it was sentient to the point of being a person. Surely that is the essence of passing the “Turing Test,” named after the English pioneer of information technology Alan Turing. Last year, Suleyman founded Inflection AI with Reid Hoffman, whose book Impromptu, co-authored with GPT-4, I discussed in my last column.
What makes Suleyman unusual in this field is that he didn’t start out as a data scientist. Born in Islington, north London, the son of a Syrian-born taxi driver and an English nurse, he left Oxford to found the Muslim Youth Helpline, a telephone advice service, and went on to work for the socialist mayor of London, Ken Livingstone (“Red Ken”). His experience of both city government and the United Nations informs his thinking on the likely policy response to the challenges posed by AI.
Suleyman’s starting point is the familiar, if hyperbolic, claim that AI is going to revolutionize almost everything. “With AI, we could unlock the secrets of the universe, cure diseases that have long eluded us, and create new forms of art and culture that transcend the limits of imagination. … The coming wave is a supercluster, an evolutionary outburst like the Cambrian explosion, the most intense eruption of new species in the Earth’s history.”
While reading its first chapters, I was reminded of “Why AI Will Save the World,” the recent essay in which the venture capital titan Marc Andreessen argues, against the Cassandras and Luddites, that AI will not “kill us all, ruin our society, take away all our jobs, or lead to crippling inequality.” It will just “make it easier for bad people to do bad things,” like almost any new technology.
But Suleyman is much less optimistic. “With AI,” he warns, “we could create systems that are out of our control and find ourselves at the mercy of algorithms we don’t understand.” He foresees “an existential threat to nation-states: risks so profound that they could disrupt or even topple the current geopolitical order.” He fears “huge AI-powered cyberattacks, automated warfare that could devastate countries, [and] engineered pandemics,” not to mention “an avalanche of misinformation, job losses, and the prospect of catastrophic accidents.”

This sounds much closer to another former Google AI expert, Geoffrey Hinton, who recently told Wired: “There are times when I think we probably won’t be able to contain it [AI], and we are merely a passing phase in the evolution of intelligence.” Hinton’s latest suggestion for slowing down the AI revolution is to require that it run on analog computers.
On my more optimistic days, I hope that LLMs pollute the internet so thoroughly with their “hallucinations” (made-up things that ring true) that we all lose trust in anything we find online. LLMs have already begun to learn from the massive amounts of content they themselves are spewing out, which must surely have garbage-in, garbage-out consequences. As Deepak Seth has argued, LLMs are already ingesting AI-generated content and learning from it. This process will tend to amplify hallucinations. Earlier this month, the Wall Street Journal reported that GPT-4 is getting worse at math. The technical term for this is “drift,” which gives new meaning to the question, “Do you get my drift?”
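To see why this feedback loop amplifies error, here is a deliberately crude, self-contained sketch (my own toy illustration, not anything from Suleyman’s book or from any real training pipeline): a trivial statistical “model” is refit each generation on a corpus increasingly dominated by its own slightly biased output, and its estimate of reality drifts further with every pass.

```python
# A toy illustration of "drift": a model trained partly on its own
# previous output moves steadily away from the real distribution.
# All numbers here are invented for illustration.
import random
import statistics

random.seed(42)

REAL_MEAN, REAL_STD = 100.0, 15.0  # the "ground truth" web content
real_data = [random.gauss(REAL_MEAN, REAL_STD) for _ in range(1000)]

train = real_data
for generation in range(1, 6):
    # Fit a trivial "model": just the mean and spread of its training set.
    mu = statistics.fmean(train)
    sigma = statistics.stdev(train)
    # The model "publishes" synthetic content; a small systematic offset
    # stands in for hallucination, a shrunken spread for lost diversity.
    synthetic = [random.gauss(mu + 1.0, sigma * 0.9) for _ in range(1000)]
    # The next scrape of the web is mostly the model's own output.
    train = real_data[:200] + synthetic[:800]
    print(f"gen {generation}: mean drift = {mu - REAL_MEAN:+.2f}, "
          f"std = {sigma:.2f}")
```

Run it and the drift compounds generation after generation, while the spread of the data quietly narrows: a cartoon of garbage in, garbage out.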
The less we can trust the plausible verbiage GPT-4 serves up, the more we will be driven back to good, old-fashioned libraries, where knowledge is far more reliable and ordered rationally rather than to maximize eyeball share. That is why my biggest investment of the last five years has been a large, “Name of the Rose”-style library to house books printed on paper.
The most immediate short-term danger that AI poses is to the democratic political process. Earlier this summer, Archon Fung and Lawrence Lessig published a chilling essay in Scientific American, in which they imagined an AI called “Clogger” deciding the outcome of the 2024 presidential election.
First, its language model would generate messages (texts, social media posts, and emails, perhaps including images and videos) tailored to you personally. … Second, Clogger would use a technique called reinforcement learning to generate a succession of messages, each more likely to change your vote. … Finally, over the course of a campaign, Clogger’s messages might evolve to take into account your responses to previous dispatches and what it has learned about changing the minds of others.
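Fung and Lessig publish no code, so what follows is only a hypothetical sketch of the kind of reinforcement-learning loop they describe: a simple epsilon-greedy bandit that tests message themes on a simulated voter and converges on whatever moves that voter most. The themes, the voter model, and the “reward” signal are all invented for illustration.

```python
# A crude sketch of the reinforcement-learning loop Fung and Lessig
# describe. Everything here is hypothetical: the message themes, the
# simulated voter, and the engagement "reward" standing in for real data.
import random

random.seed(7)

MESSAGES = ["economy", "security", "healthcare", "culture"]

# Running estimate of how well each theme shifts this particular voter.
value = {m: 0.0 for m in MESSAGES}
count = {m: 0 for m in MESSAGES}

def simulated_voter_response(theme: str) -> float:
    """Stand-in for a real engagement signal; this voter happens to
    respond most to 'healthcare' messaging."""
    base = {"economy": 0.3, "security": 0.2, "healthcare": 0.7, "culture": 0.1}
    return base[theme] + random.uniform(-0.1, 0.1)

EPSILON = 0.1  # exploration rate
for step in range(500):
    # Epsilon-greedy: mostly send the best-performing theme so far,
    # occasionally explore another one.
    if random.random() < EPSILON:
        theme = random.choice(MESSAGES)
    else:
        theme = max(MESSAGES, key=lambda m: value[m])
    reward = simulated_voter_response(theme)
    count[theme] += 1
    value[theme] += (reward - value[theme]) / count[theme]  # incremental mean

print(max(value, key=value.get))  # the theme "Clogger" converges on
```

The unnerving point is how little machinery this requires; a real Clogger would differ mainly in scale and in having genuine engagement data as its reward signal.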
Another clear and present danger is that more and more military decisions will be delegated to AI, as is already the case with Israel’s Iron Dome missile defense system, and as increasingly seems to be true of drone warfare in Ukraine. The most questionable claim in Andreessen’s essay is that “AI is going to improve warfare, when it has to happen, by dramatically reducing wartime death rates,” because AI will help statesmen and commanders “make much better strategic and tactical decisions, minimizing risk, error and unnecessary bloodshed.”
I strongly suspect the opposite will happen. In the coming AI wars, military death rates will be very, very high, precisely because AI will make missiles and other weapons much more accurate. Any half-decent AI that has read Clausewitz will seek the annihilation of the enemy as quickly as possible. AI-enabled commanders may also be more willing to sacrifice their own men to secure victory, in the same way that AI chess programs sacrifice their own pieces more ruthlessly than human grandmasters do.
In short, I agree with Suleyman’s analysis. AI (especially when combined with genetic engineering, robotics, quantum computers, fusion reactors, and nanotechnology) implies a proliferation of new technologies that are asymmetric, hyper-evolutionary, “all-purpose,” and autonomous. Not all consequences will be benign.

The problem is that such a tsunami of technological change is almost impossible to contain, much less stop. As Suleyman argues, our political institutions lack the capacity to regulate AI. On one side, criminal actors will soon be able to deploy unstoppable malware (far worse than WannaCry), killer bots or drones, and deepfake disinformation engines. On the other, legitimate power is increasingly concentrated in the hands of the leaders of a few technology companies: the new East India Companies. Meanwhile, AI is about to cause massive disruption in the labor market, undermining the modern social contract whereby the liberal nation-state of the 20th century offered its citizens security and a high rate of employment.
Is there anything we can do to avert this dystopian outcome? In a new piece co-authored with Ian Bremmer in Foreign Affairs, Suleyman offers an ambitious blueprint for an international “technoprudential” regime to regulate AI. The analogy is partly with financial regulation, as he and Bremmer make clear by proposing as a potential model “the macroprudential role played by global financial institutions such as the Financial Stability Board, the Bank for International Settlements, and the International Monetary Fund.” Specifically, they call for the creation of a Geotechnology Stability Board, modeled on the Financial Stability Board created in April 2009, in the depths of the global financial crisis. However, they envisage big tech companies participating as “parties to international summits and signatories to any agreement on AI,” implying an even bigger role than the big banks have in financial regulation.
Like me, you may be inclined to despair at the thought of AI being regulated as badly as we regulate finance. But note the other two elements of the Bremmer-Suleyman model. One is a body similar to the Intergovernmental Panel on Climate Change, to ensure regular and rigorous assessments of the impacts of AI. The other is, in my view, more compelling:
Washington and Beijing should aim to create areas of commonality and even guardrails proposed and monitored by a third party. The monitoring and verification approaches often found in arms control regimes could be applied here. … There may be room for Beijing and Washington to cooperate on global antiproliferation efforts.
This came as a surprise to me, as I had inferred from The Coming Wave that Suleyman had little time for analogies between AI and nuclear weapons. He and Bremmer even say: “Artificial intelligence systems are not only infinitely easier to develop, steal, and copy than nuclear weapons; they are controlled by private companies, not governments.” And yet they (like almost everyone who tries to think systematically about how to deal with the threats posed by AI) inevitably come back to the Cold War arms race.
Of course, it is an imperfect analogy. (Imagine if the atomic bomb had emerged from private-sector competition between, say, General Electric and IBM. And AI has many more uses and users than nuclear fission.) Still, it is not entirely a coincidence that innovation in AI accelerated more or less simultaneously with the transition of the US-China relationship from economic symbiosis (“Chimerica”) to a Second Cold War. Eric Schmidt, the former CEO of Google, was skeptical in 2018 when I first argued that we were in a new cold war, but the final 2021 report of the National Security Commission on Artificial Intelligence, which he chaired, essentially concurred:
The US military has enjoyed military-technical superiority over all potential adversaries since the end of the Cold War. Now its technical prowess is being challenged, especially by China and Russia. … If current trend lines remain unchanged, the US military will lose its military-technical superiority in the coming years. … AI is a key aspect of this challenge, as both of our major competitor powers believe they will be able to offset our military advantage by using AI-enabled systems and autonomy. … In the coming decades, the United States will prevail against technically sophisticated adversaries only if it accelerates the adoption of AI-based sensors, command and control, weapons, and logistics systems.
Marc Andreessen’s decisive argument for pursuing AI “with maximum force and speed” is that “the biggest risk of AI is that China achieves global AI dominance and we (the US and the West) don’t”.

That implies, as Andreessen acknowledges, an arms race as headlong as the one that followed the Soviets’ acquisition (through espionage rather than the excellence of their own physicists) of the atomic bomb and then the hydrogen bomb. It is true that today the United States is ahead in one key respect: we have access to the most sophisticated microchips and, thanks to various US sanctions, the Chinese do not. But doesn’t this put Xi Jinping in the position of Stalin when the United States alone had the bomb?
Is there an alternative to an all-out AI arms race? Tellingly, the best examples Suleyman himself gives of successful regimes of technological containment (a word made famous by George Kennan, of course) are drawn from the First Cold War: the nuclear nonproliferation regime and the bans on chemical and biological weapons. Of course, arms control was not an unqualified success. But nor did it achieve nothing. And that is why Suleyman is right to defend it.
Which brings us back to “Oppie.” In a recent article here, Hal Brands argued that Oppenheimer was wrong to oppose the building of the hydrogen bomb, the “Super,” as physicists called it. Brands’s argument seems to be that the nuclear arms race was fine because the good guys ultimately won it. This surely understates how risky that race was, especially in 1962, when the superpowers came within an inch of World War III over Cuba. In the end we got into the crazy situation of putting far more effort into building nuclear missiles than into building nuclear power plants, as if the latter were the more dangerous! Is this really how we want the AI race to play out?
Bremmer and Suleyman are right: The United States and China urgently need to begin arms control negotiations, not only to limit the weaponization of AI but also to ensure that more resources are devoted to its benign applications. Right now, there are virtually no restraints other than the economic restrictions the United States has placed on China. Meanwhile, China is very likely forging ahead with bioweapons research. As Schmidt et al. have noted, the risk of AI being used for that purpose is “a very short-term concern.” Nothing we are currently doing prevents it; the Biden administration’s current squeeze on China may even be encouraging such activity.
Biden’s national security team thinks it can rebrand economic decoupling as “derisking” and then schedule some high-level meetings in Beijing. But that is not the path to meaningful détente. The United States and China need to talk about substantive issues, and arms control (not just of AI, but also of nuclear, biological, and other weapons of mass destruction) is the right place to start. In fact, I would not be surprised if it were the Chinese who suggested it. The drive for arms control in a cold war tends to come from the side that fears losing the arms race.
As for the brilliant Mr. Suleyman, he must be careful. He is right to warn of the dangers of a runaway AI race. He is right to call for AI arms control. But his argument for global institutions is reminiscent of Oppenheimer’s 1946 case for an Atomic Development Authority that would have limited national sovereignty over nuclear technology. Like Oppenheimer, Suleyman has a left-wing political past. And it worries me that, like Oppenheimer, he may one day find that held against him, as the new Cold War intensifies.
(c)Bloomberg