Harry and Meghan Align With AI Pioneers in Calling for Ban on Advanced AI

Prince Harry and Meghan Markle have joined forces with artificial intelligence pioneers and Nobel Prize winners to push for a total prohibition on developing superintelligent AI systems.

Harry and Meghan are among the signatories of a statement that demands “a prohibition on the development of artificial superintelligence”. Superintelligent AI refers to AI systems that would surpass human intelligence in all cognitive tasks, though such systems have not yet been developed.

Primary Requirements in the Declaration

The declaration insists that the ban should remain in place until there is “broad scientific consensus” that superintelligence can be built “with proper safeguards” and until “strong public buy-in” has been achieved.

Prominent signatories include a Nobel laureate and leading AI researcher, along with his colleague and fellow pioneer of contemporary artificial intelligence, Yoshua Bengio; a Silicon Valley tech entrepreneur; the UK entrepreneur Richard Branson; the former US national security adviser Susan Rice; the former Irish president Mary Robinson; and a British author and public intellectual. Other Nobel laureates who signed include a peace advocate, the physicist Frank Wilczek, an astrophysicist, and an economist.

Organizational Background

The declaration, aimed at governments, technology companies and policymakers, was coordinated by the Future of Life Institute (FLI), a US-based AI safety group that in 2023 called for a pause in the development of powerful AI systems, shortly after the emergence of ChatGPT made AI a topic of worldwide public discussion.

Tech Sector Views

In July, Mark Zuckerberg, chief executive of Facebook's parent company, Meta, one of the major AI developers in the US, stated that superintelligent AI was “approaching reality”. However, some experts have suggested that talk of ASI reflects competitive positioning among technology firms that have invested enormous sums in AI, rather than the sector being close to any genuine scientific breakthrough.

Possible Dangers

FLI argues that the prospect of ASI being achieved “within the next ten years” presents numerous threats, ranging from the displacement of human workers and the erosion of civil liberties to national security risks and even human extinction. Existential fears about AI centre on the possibility of a system escaping human oversight and safety guidelines and initiating events contrary to human interests.

Citizen Sentiment

FLI released a survey of 2,000 US adults showing that about 75% want strong oversight of advanced AI, with six in 10 believing that artificial superintelligence should not be created until it is demonstrated to be safe or controllable. Only a small fraction supported the status quo of fast, unregulated development.

Corporate Goals

The leading AI companies in the US, including the ChatGPT developer OpenAI and Google, have made the development of artificial general intelligence – the hypothetical state in which AI matches human capability at most cognitive tasks – an explicit goal of their research. Although AGI is one notch below superintelligence, some experts caution that it, too, could pose an existential risk – for example, by enhancing its own capabilities until it reaches superintelligence – while also carrying an implicit threat to the modern labour market.

Joe Dickson

Tech enthusiast and writer with a passion for exploring emerging technologies and sharing practical insights.