I’ve always relied on user reviews when deciding whether to download an app or buy something online. Honestly, if a product doesn’t have at least a four-star rating, I usually don’t even consider it. But now, artificial intelligence is making those reviews harder to trust, because they could be completely fake.
According to research shared with Lifewire by digital ad verification firm DoubleVerify, the number of AI-generated fake app reviews has tripled since 2023. In one case, a reviewer accidentally left in the phrase, “I’m sorry, but as an AI language model,” revealing the review’s origins. In another example, DoubleVerify found that more than half of the reviews for a certain streaming app were fake and created by AI.
The issue has become so widespread that in August, the Federal Trade Commission (FTC) announced a new rule aimed at cracking down on fake reviews. FTC Chair Lina M. Khan emphasized that these fake reviews mislead consumers, waste time and money, and hurt fair competition. The new regulation allows for penalties against companies caught using AI-generated reviews to manipulate consumer perception. Still, identifying these reviews remains a challenge: some are obvious, but many are difficult to detect.
This growing problem is more than just a nuisance. It’s becoming nearly impossible to know whether a highly rated app is safe or potentially malicious. Consumers could unknowingly download malware or purchase low-quality products based on fake reviews. While companies like DoubleVerify offer tools to help businesses detect and limit fraudulent reviews, everyday users are left with little reassurance.
For people like me, this uncertainty means becoming much more cautious and less willing to trust app store ratings or product reviews without real, hands-on feedback from reliable sources.
At the end of the day, it’s yet another blow to consumer trust. And unfortunately, there’s no clear solution on the horizon.