Thursday, November 21, 2024

Fake content is getting harder to detect but Hinton has an idea to make it easier


TORONTO — Artificial intelligence pioneer Geoffrey Hinton says it’s getting more difficult to tell videos, voices and images generated with the technology from material that’s real — but he has an idea to aid in the battle.

The increased struggle has contributed to a shift in how the British-Canadian computer scientist and recent Nobel Prize recipient thinks the world could address fake content.

“For a while, I thought we may be able to label things as generated by AI,” Hinton said Monday at the inaugural Hinton Lectures.

“I think it’s more plausible now to be able to recognize that things are real by taking a code in them and going to some websites and seeing the same things on that website.”

He reasons this approach would let viewers confirm content is genuine, and he imagines it could be particularly handy for political video advertisements.

“You could have something like a QR code in them (taking you) to a website, and if there’s an identical video on that website, all you have to do is know that that website is real,” Hinton explained.
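Hinton did not describe an implementation, but the check he outlines amounts to simple provenance matching: a code embedded in the clip points to a trusted website, and software on the viewer's side confirms that an identical copy exists there. The following is a minimal sketch of that idea, assuming the embedded code decodes to a plain URL and that "identical" means byte-for-byte equality; the URL and file names are hypothetical.

```python
# Sketch of the provenance check Hinton describes: a video carries a code
# (for example, inside a QR code) pointing to a trusted website, and the
# viewer verifies the clip by checking that an identical copy is hosted there.
# The URL and file names below are illustrative assumptions, not a real service.

import hashlib
import urllib.request


def sha256_of_file(path: str) -> str:
    """Hash the local copy of the video, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def sha256_of_url(url: str) -> str:
    """Hash the copy published on the (assumed trusted) website."""
    digest = hashlib.sha256()
    with urllib.request.urlopen(url) as response:
        for chunk in iter(lambda: response.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def looks_authentic(local_path: str, canonical_url: str) -> bool:
    """True only if the local video is byte-for-byte identical to the
    version hosted at the URL the embedded code points to."""
    return sha256_of_file(local_path) == sha256_of_url(canonical_url)


if __name__ == "__main__":
    # Hypothetical example: the QR code in a political ad decodes to this URL.
    url = "https://example-campaign.ca/ads/official-ad.mp4"
    print(looks_authentic("downloaded_ad.mp4", url))
```

A real system would still need to establish that the website itself is legitimate, which is the remaining trust step Hinton points to ("all you have to do is know that that website is real").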

Most Canadians have spotted deepfakes online and almost a quarter encounter them weekly, according to an April survey of 2,501 Canadians conducted by the Dais, a public policy organization at Toronto Metropolitan University.

Deepfakes are digitally manipulated images or videos depicting scenes that have not happened. Recent deepfakes have depicted Pope Francis in a Balenciaga puffer jacket and pop star Taylor Swift in sexually explicit poses.

The Hinton Lectures are a two-night event the Global Risk Institute is hosting this week at the John W. H. Bassett Theatre in Toronto.

The first evening saw Hinton, who is often called the godfather of AI, take the stage briefly to remind the audience of the litany of risks the technology poses that he has been warning the public about over the last few years. He feels AI could cause or contribute to accidental disasters, joblessness, cybercrime, discrimination and biological and existential threats.

However, the bulk of the evening was dedicated to a talk from Jacob Steinhardt, an assistant professor of electrical engineering and computer sciences and statistics at UC Berkeley in California.

Steinhardt told the audience he believes AI will advance even faster than many expect, but there will be surprises along the way.

By 2030, he imagines AI will be “superhuman” when it comes to math, programming and hacking.

He also thinks large language models, which underpin AI systems, could become capable of persuasion or manipulation.
