Artificial Intelligence and the world’s “Oppenheimer Moment”


By Bernd Debusmann

WASHINGTON — The rapid advance of Artificial Intelligence has pushed the world to the brink of a technological revolution that will affect most of the world’s eight billion people. It raises a question of crucial importance: will AI be a force for good or an existential threat?

There have been impressive AI-assisted advances in medicine to help us lead longer and healthier lives. But AI also raises the specter of killer robots and applications that could lead to the extermination of humanity.

The answer to whether it is blessing or curse is both, and the debate tends to pit techies against techies.

Although few will admit it, the tens of thousands of people who work on AI at companies like Google, OpenAI and Anthropic, part of a tech industry that now employs more than nine million people, don’t themselves know what the future will bring.

Media coverage on AI has tended to focus on applications like ChatGPT, frequently used by students to write essays, and on AI-aided Internet postings to spread misinformation and disinformation. Then there are “deep fakes” that mimic the voice and appearance of a person.

Early this year, an AI-generated robocall mimicking U.S. President Joe Biden’s voice urged voters in New Hampshire not to vote in the state’s presidential primary.

The tempo of the debate on where AI will take mankind has accelerated sharply since a non-profit organization little known outside the technology community, the San Francisco-based Center for AI Safety, issued a blunt, one-sentence statement a year ago.

It said: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

That urgent call to take the potential impact of AI as seriously as nuclear war was signed by more than 350 researchers, engineers and top executives from the leading companies working in AI. The signatories included Geoffrey Hinton and Yoshua Bengio, two Canadian scientists often called godfathers of advanced AI for their pioneering work on artificial neural networks.

The parallel with the development of nuclear weapons rang alarm bells outside the tech world.

Those who voiced concern included Warren Buffett, the 92-year-old multibillionaire investor with a reputation for sage judgment. At the annual meeting of his Berkshire Hathaway company in May, he said: “We let the genie out of the bottle when we developed nuclear weapons. AI is somewhat similar — it’s part way out of the bottle.”

Political leaders and scientists around the world also took note.

China and the United States, the two countries thought to have the largest array of AI tools, have paid relatively little public attention to the potential hazards of Artificial Intelligence.

But on May 6, the administration of Joe Biden made a surprise announcement: American and Chinese diplomats plan to begin what a New York Times analysis termed “the first, tentative arms control talks over the use of artificial intelligence.”

Britain took action much earlier. Just five months after the Center for AI Safety’s “risk of extinction” warning, the British government convened an AI summit attended by representatives of 28 countries at Bletchley Park, site of the World War II facility where British scientists broke the code Nazi Germany used for military communications.

The summit ended with a lengthy communique that noted the potential risks of AI, particularly in cybersecurity and biotechnology, and urged international cooperation and a global dialogue to better understand the impact of AI on societies around the world.

Curiously, the Bletchley Declaration produced by the summit made no explicit mention of Artificial Intelligence in war, a sensitive subject military leaders have discussed for at least two decades of steadily accelerating progress on Lethal Autonomous Weapons, or LAWs, better known as killer robots.

In contrast, a meeting convened by the Austrian government in the last two days of April spelt out what is at stake in unambiguous terms. The conference, entitled “Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation,” brought together representatives of 143 countries, along with delegates from non-governmental and international organisations.

“Now is the time to agree on international rules and norms to ensure human control,” Austrian Foreign Minister Alexander Schallenberg told the meeting. “At least let us make sure that the most profound and far-reaching decision, who lives and who dies, remains in the hands of humans and not of machines.”

“This is our generation’s ‘Oppenheimer moment’ where geopolitical tensions threaten to lead a major scientific breakthrough down a very dangerous path for the future of humanity,” said the summary at the end of the April 29–30 conference.

The reference was to Robert Oppenheimer, the U.S. physicist who led the project to develop the atomic bomb, the first two of which the U.S. dropped on the Japanese cities of Hiroshima and Nagasaki. A biographical movie on Oppenheimer broke box office records last summer and won seven Oscars in March.

The reference to life-and-death decisions remaining in the hands of humans reflects fears that artificial intelligence could give weapons systems the capability to make decisions themselves after processing surveillance data.

In October, the Secretary General of the United Nations, Antonio Guterres, and the President of the International Committee of the Red Cross, Mirjana Spoljaric, called on political leaders to establish new international rules on autonomous weapons systems by 2026.

This is an extremely ambitious goal, more aspirational than based on reality. It brings to mind the Nuclear Non-Proliferation Treaty (NPT), which entered into force in 1970 after years of arduous negotiations by a small army of experts, lawyers and government leaders. It was hailed as a success, and there are now 191 countries party to it.

However, four nuclear-armed countries remain outside it: India, Pakistan and Israel never joined, and North Korea withdrew in 2003.

In an ideal world, countries with the capacity to advance Artificial Intelligence would conclude a pre-emptive ban on lethal autonomous weapons, a goal pursued for the past decade by a campaigning coalition of non-governmental organisations whose website is stopkillerrobots.org.

For a glimpse of the horrifying consequences of unchecked development of LAWs, the Future of Life Institute, which has branches in Belgium and the United States, has produced a mock sci-fi documentary that some experts say comes closer to reality than Hollywood movies such as The Terminator.

The video is worth watching: https://www.youtube.com/watch?v=HipTO_7mUOw

Donors to the work of the institute include Elon Musk, who gave $10 million. The billionaire entrepreneur’s interest in artificial intelligence stems from his ambition to eliminate flaws from the AI-driven self-driving systems in his Tesla cars, which have been involved in a number of fatal accidents.

Musk is particularly bullish on the pace of AI progress. “We’ll have Artificial Intelligence that is smarter than any one human probably around the end of the year,” he said recently.

On the bright side, AI has been a blessing in a number of fields, in particular healthcare.

Using deep-learning algorithms, it has been effective in the early detection of cancers and in predicting the development of cancers of the liver, rectum and prostate with 94% accuracy, according to new research by America’s Mount Sinai hospital group.

When you ask Americans what comes to mind when they hear the phrase Artificial Intelligence, the answer is more frequently “jobs” than killer robots.

Worries about the AI-driven technological revolution and its impact on the global economy are shared by deeply knowledgeable leaders in finance and the global economy.

Introducing a new analysis by the International Monetary Fund (IMF) early this year, its managing director, Kristalina Georgieva, said “the findings are striking. Almost 40 percent of global employment is exposed to AI. Historically, automation and information technology have tended to affect routine tasks but one of the things that sets AI apart is its ability to impact high-skilled jobs.”

There are no estimates on how many of these jobs will disappear and how many high-skilled workers will benefit by using AI to complement their work and thus boost productivity. “In most scenarios,” the IMF found, “AI will likely worsen overall inequality.”

Other leaders from outside the technology community, such as Buffett, are taking a wait-and-see approach.

“It has enormous potential for good and enormous potential for harm,” Buffett said when asked how he saw AI. “And I just don’t know how that plays out.”

Bernd Debusmann is a former columnist for Reuters who worked as a correspondent, bureau chief and editor in Europe, Latin America, the Middle East, Africa and the United States. He has reported from more than 100 countries and lived in nine. He was shot twice in the course of his work – once covering a night battle in the center of Beirut and once in an assassination attempt prompted by his reporting on Syria. This analysis was originally published on Medium.
