Rishi Sunak: AI firms cannot mark their own homework

Monitoring the risks posed by artificial intelligence (AI) is too important to be left to big tech firms, Prime Minister Rishi Sunak has said.

He told the BBC that governments needed to take action and AI firms could not be left to “mark their own homework”.

He was speaking ahead of the AI Safety Summit, where a global declaration on managing AI risks has been announced.

King Charles told delegates the issue required “urgency, unity and collective strength”.

It comes amid growing concerns about highly advanced forms of AI with as-yet unknown capabilities.

So far countries are only starting to address the potential risks, which may include breaches to privacy, cyberattacks and the displacement of jobs.

In an interview with the BBC at Downing Street, Mr Sunak said AI was a “transformative technology” that could have huge benefits in the NHS or in schools.

But he said he wanted the UK and other countries to be able to “do the testing that is necessary to make sure that we are keeping our citizens and everyone at home safe”.

“There has to be governments or external people who do that work,” he said.

Speaking to the BBC’s technology editor Zoe Kleinman, he said that many AI firms had already given the UK access to their models before their release.

And he claimed the UK was “investing more” in AI risk management than any other country.

“We’ve already invested £100 million in our task force, which will become our Safety Institute,” he said.

“And we’re attracting the best and the brightest researchers from around the world to come and work in that institution.”

Around 100 world leaders, tech bosses and academics are currently gathering at the UK’s first AI safety summit at Bletchley Park, in Buckinghamshire.

Earlier on Wednesday, the delegates agreed the world’s first ever “international statement” on so-called frontier AI – the government’s term for AI that could exceed the capabilities of today’s most advanced systems.

The Bletchley Declaration calls for global cooperation on tackling the risks, which include potential breaches to privacy and the displacement of jobs.

Signed by 28 countries and the EU, it also says AI should be kept “safe, in such a way as to be human-centric, trustworthy and responsible”. (BBC)