Monday, December 23, 2024

The internet is rife with fake reviews. Will AI make it worse?


The emergence of generative artificial intelligence tools that let people produce detailed, original-sounding online reviews with almost no effort has put merchants, service providers and consumers in uncharted territory, watchdog groups and researchers say.

Phony reviews have long plagued many popular consumer websites, such as Amazon and Yelp. They are typically traded in private social media groups between fake-review brokers and businesses willing to pay. Sometimes, such reviews are initiated by businesses that offer customers incentives such as gift cards for positive feedback.

But AI-infused text generation tools, popularized by OpenAI’s ChatGPT, enable fraudsters to produce reviews faster and in greater volume, according to tech industry experts.

The deceptive practice, which is illegal in the U.S., is carried out year-round but becomes a bigger problem for consumers during the holiday shopping season, when many people rely on reviews to help them purchase gifts.

Where are AI-generated reviews showing up?

Fake reviews are found across a wide range of industries, from e-commerce, lodging and restaurants to services such as home repairs, medical care and piano lessons.

The Transparency Company, a tech company and watchdog group that uses software to detect fake reviews, said it started to see AI-generated reviews show up in large numbers in mid-2023 and they have multiplied ever since.

For a report released this month, The Transparency Company analyzed 73 million reviews in three sectors: home, legal and medical services. Nearly 14% of the reviews were likely fake, and the company expressed a “high degree of confidence” that 2.3 million reviews were partly or entirely AI-generated.
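The Transparency Company has not published the internals of its detection software, so what follows is only a minimal sketch of how text-based fake-review scoring is commonly approached: train a classifier on reviews already labeled as genuine or fake, then score new reviews by probability. The training examples, labels and feature choices below are hypothetical illustrations, not the company's actual pipeline.

```python
# A minimal sketch of text-based fake-review scoring, assuming a labeled
# dataset of reviews exists. This is NOT The Transparency Company's
# proprietary method; it illustrates one common baseline: TF-IDF features
# fed to a logistic-regression classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: review text paired with labels
# (1 = known fake/AI-generated, 0 = known genuine).
train_texts = [
    "Absolutely phenomenal service that exceeded every expectation!",
    "An unparalleled experience; I cannot recommend them highly enough.",
    "Tech showed up 20 minutes late but fixed the garage door fast.",
    "Decent haircut, parking was a pain. Would probably go back.",
]
train_labels = [1, 1, 0, 0]

# Word and bigram TF-IDF plus logistic regression: a simple,
# interpretable baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Score unseen reviews; a higher probability means "more fake-like" under
# this toy model.
new_reviews = ["Truly a remarkable and flawless experience in every way."]
print(model.predict_proba(new_reviews)[:, 1])
```

Text alone is a weak signal against fluent AI output, which is why real detectors typically combine it with behavioral evidence such as posting bursts, reviewer history and account clusters.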

“It’s just a really, really good tool for these review scammers,” said Maury Blackman, an investor and adviser to tech startups who reviewed The Transparency Company’s work and is set to lead the organization starting Jan. 1.

In August, software company DoubleVerify said it was observing a “significant increase” in mobile phone and smart TV apps with reviews crafted by generative AI. The reviews often were used to deceive customers into installing apps that could hijack devices or run ads constantly, the company said.

The following month, the Federal Trade Commission sued the company behind an AI writing tool and content generator called Rytr, accusing it of offering a service that could pollute the marketplace with fraudulent reviews.

The FTC, which this year banned the sale or purchase of fake reviews, said some of Rytr’s subscribers used the tool to produce hundreds and perhaps thousands of reviews for garage door repair companies, sellers of “replica” designer handbags and other businesses.
