TikTok Review Bombing, Google Whack-a-Mole, EU Acts on AI

Teen TikTok Inspires 1-Star Barrage

Marketing today can be a complex and volatile proposition for local business owners. A case in point: a nail salon in Philadelphia, Pennsylvania is reportedly being bombarded with 1-star reviews after a customer posted a TikTok video complaining about the poor job the salon did on her nails. Laura Bagella, who created the video, has only about 4,500 followers, but the video apparently went viral, prompting sympathetic users to review-bomb the salon on Google and Yelp and drag down its aggregate review score. The owner has appealed for help in the Google Business Profile (GBP) forums, characterizing all the 1-star reviews as fake. It's not entirely clear that all of them are, though many may be. One assumes that once Google and Yelp detect or are notified of the review bombing, the 1-star reviews will come down (review timing and velocity are key signals for detecting inauthentic reviews). Yet the reputational damage of the TikTok video will linger for some time.

Our take:

  • It's tempting to read this as a sad commentary on 1) the vulnerability of SMBs, 2) teen entitlement, and 3) the general state of culture and society.
  • Bagella went back to the salon to express her frustration and reported that the salon was defensive. Bagella may have been unreasonable, but the salon could have avoided this debacle by simply refunding her $35.
  • The platforms will catch review bombing. But these incidents will keep happening. And Google's emphasis on short video and its Perspectives filter will give such videos even more visibility and influence.

Review Fraud Suits Not Enough

Amazon and Google periodically sue fake review brokers and SEO fraudsters. These efforts are well publicized but tend to be inconsistent and ad hoc. Despite the use of AI and more sophisticated algorithms, it appears the big platforms are not winning the war on fake profiles and reviews. Most recently Google sued "a bad actor who ... posted more than 350 fraudulent Business Profiles and tried to bolster them with more than 14,000 fake reviews." That "bad actor," Ethan Hu, set up hundreds of fake listings and reviews as part of a lead-generation scheme aimed at local businesses. He's accused of misleading consumers (via the fraudulent listings and reviews) and of misleading SMBs with false claims about the SEO capabilities and benefits of his service. The complaint details how Hu created and fraudulently verified hundreds of fake profiles and generated fake reviews for them. Google will win this litigation, but it's just one instance of a much more pervasive problem that doesn't seem to be getting better.

Our take:

  • Google says the suit will "build awareness that we will not sit idly by as bad actors misuse our products." In other words, it will "make an example" of Hu.
  • But periodic litigation won't deter fraudsters. With so much at stake in online reviews (SEO, conversions), the incentives to cheat remain high.
  • One question to ask: what action would be required if Google truly sought to minimize/eliminate fake reviews?

EU Acts on AI, US Dithers

This week US Senators Klobuchar and Grassley reintroduced antitrust legislation to prevent big tech companies from "self-preferencing." A similar bill failed to get a vote in the last Congress. Europe's Digital Markets Act, meanwhile, already prohibits the practice. Indeed, as US legislators dither, the EU is moving quickly on multiple digital-regulatory fronts. The latest is AI regulation: the AI Act passed the European Parliament last week, although it still has hoops to jump through before becoming law. The Act seeks to bring more transparency to AI models and to ban "high risk" use cases: facial recognition in public places, predictive policing, indiscriminate scraping of facial images online, and other uses that might pose "significant harm to people's health, safety, fundamental rights or the environment." US discussions of AI regulation are happening, but legislation is far off. Meanwhile, private industry is rapidly deploying AI systems, sometimes with very negative outcomes (healthcare, debt collection, hiring discrimination, racist predictive policing). Most US stakeholders agree that AI should be regulated, but the what and how are in dispute.

Our take:

  • The EU's AI Act imposes multiple obligations on AI platforms, including transparency requirements, labeling and risk mitigation.
  • Existing "foundational" AI systems (e.g., ChatGPT, PaLM 2) are far from compliant with the new EU rules, which are not yet law across Europe.
  • Given US inaction, Europe has become the de facto global internet regulator. Yet that doesn't always benefit North Americans (e.g., privacy/GDPR).

Listen to our latest podcast.

How can we make this better? Email us with suggestions and recommendations.