

Google search is losing the fight with SEO spam, study says

Just wait until more AI sites arrive —

Study finds “search engines seem to lose the cat-and-mouse game that is SEO spam.”


It’s not just you—Google Search is getting worse. A new study from Leipzig University, Bauhaus-University Weimar, and the Center for Scalable Data Analytics and Artificial Intelligence looked at Google search quality for a year and found the company is losing the war against SEO (Search Engine Optimization) spam.

The study, first spotted by 404media, “monitored Google, Bing and DuckDuckGo for a year on 7,392 product review queries,” using queries like “best headphones” to study search results. The focus was on product review queries because the researchers felt those searches were “particularly vulnerable to affiliate marketing due to its inherent conflict of interest between users, search providers, and content providers.”

Overall, the study found that “the majority of high-ranking product reviews in the result pages of commercial search engines (SERPs) use affiliate marketing, and significant amounts are outright SEO product review spam.” Search engines occasionally update their ranking algorithms to try to combat spam, but the study found that “search engines seem to lose the cat-and-mouse game that is SEO spam” and that there are “strong correlations between search engine rankings and affiliate marketing, as well as a trend toward simplified, repetitive, and potentially AI-generated content.”

The study found “an inverse relationship between a page’s optimization level and its perceived expertise, indicating that SEO may hurt at least subjective page quality.” Since Google’s treatment of pages is the primary force behind what does and doesn’t count as SEO, a finding that optimizing for Google’s guidelines reduces subjective page quality is a strike against Google’s entire ranking algorithm.

The bad news is that it doesn’t seem like this will get better any time soon. The study mentions generative AI sites only once or twice, but those sites have only arrived in the past year. The elephant in the room is that generative AI is starting to be able to completely automate the process of SEO spam. Some AI content farms can scan a human-written site, use it for “training data,” rewrite it slightly, and then stave off the actual humans with more aggressive SEO tactics. There are already people bragging about doing AI-powered “SEO heists” on X (formerly Twitter). The New York Times is taking OpenAI to court for copyright infringement, and a class-action suit on behalf of book publishers calls ChatGPT and LLaMA (Large Language Model Meta AI) “industrial-strength plagiarists.” Artists are in the same boat thanks to tools like Midjourney and Stable Diffusion. Most websites do not have the legal capacity to take on an infinite wave of automated spam sites enabled by these tools, and Google’s policy is to not penalize AI-generated content in its search results.

A Google spokesperson responded to the study by pointing out that Google is still doing better than its competition: “This particular study looked narrowly at product review content, and it doesn’t reflect the overall quality and helpfulness of Search for the billions of queries we see every day. We’ve launched specific improvements to address these issues – and the study itself points out that Google has improved over the past year and is performing better than other search engines. More broadly, numerous third parties have measured search engine results for other types of queries and found Google to be of significantly higher quality than the rest.”

This post was updated at 6:00 pm ET to add a statement from Google.



Lazy use of AI leads to Amazon products called “I cannot fulfill that request”

FILE NOT FOUND —

The telltale error messages are a sign of AI-generated pablum all over the Internet.

I know naming new products can be hard, but these Amazon sellers made some particularly odd naming choices.


Amazon

Amazon users are at this point used to search results filled with products that are fraudulent, scams, or quite literally garbage. These days, though, they also may have to pick through obviously shady products, with names like “I’m sorry but I cannot fulfill this request it goes against OpenAI use policy.”

As of press time, some version of that telltale OpenAI error message appears in Amazon products ranging from lawn chairs to office furniture to Chinese religious tracts. A few similarly named products that were available as of this morning have been taken down as word of the listings spreads across social media (one such example is archived here).

ProTip: Don't ask OpenAI to integrate a trademarked brand name when generating a name for your weird length of rubber tubing.


Other Amazon product names don’t mention OpenAI specifically but feature apparent AI-related error messages, such as “Sorry but I can’t generate a response to that request” or “Sorry but I can’t provide the information you’re looking for” (available in a variety of colors). Sometimes, the product names even highlight the specific reason why the apparent AI-generation request failed, noting that OpenAI can’t provide content that “requires using trademarked brand names” or “promotes a specific religious institution” or, in one case, “encourage unethical behavior.”

The repeated invocation of a “commitment to providing reliable and trustworthy product descriptions” cited in this description is particularly ironic.

The descriptions for these oddly named products are also riddled with obvious AI error messages like, “Apologies, but I am unable to provide the information you’re seeking.” One product description for a set of tables and chairs (which has since been taken down) hilariously noted: “Our [product] can be used for a variety of tasks, such [task 1], [task 2], and [task 3]].” Another set of product descriptions, seemingly for tattoo ink guns, repeatedly apologizes that it can’t provide more information because: “We prioritize accuracy and reliability by only offering verified product details to our customers.”

Spam spam spam spam

Using large language models to help generate product names or descriptions isn’t against Amazon policy. On the contrary, in September Amazon launched its own generative AI tool to help sellers “create more thorough and captivating product descriptions, titles, and listing details.” And we could only find a small handful of Amazon products slipping through with the telltale error messages in their names or descriptions as of press time.

Still, these error-message-filled listings highlight the lack of care, or even basic editing, that many Amazon scammers are exercising when putting their spammy product listings on the Amazon marketplace. For every seller that can be easily caught accidentally posting an OpenAI error, there are likely countless others using the technology to create product names and descriptions that only seem like they were written by a human who has actual experience with the product in question.

A set of clearly real people conversing on Twitter / X.


Amazon isn’t the only online platform where these AI bots are outing themselves, either. A quick search for “goes against OpenAI policy” or “as an AI language model” can find a whole lot of artificial posts on Twitter / X or Threads or LinkedIn, for example. Security engineer Dan Feldman noted a similar problem on Amazon back in April, though searching with the phrase “as an AI language model” doesn’t seem to generate any obviously AI-generated search results these days.
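As a purely illustrative aside (this is not from the reporting above, nor any platform’s actual moderation tooling), catching this kind of low-effort listing can be as simple as string matching. The short Python sketch below checks a listing’s title and description against a few of the refusal phrases quoted earlier; the phrase list and example listing are made up for demonstration.

# Illustrative sketch only: flag listings whose text contains telltale AI refusal phrases.
TELLTALE_PHRASES = [
    "i cannot fulfill this request",
    "goes against openai use policy",
    "as an ai language model",
    "unable to provide the information",
]

def looks_ai_generated(title: str, description: str = "") -> bool:
    """Return True if any known refusal phrase appears in the listing text."""
    text = f"{title} {description}".lower()
    return any(phrase in text for phrase in TELLTALE_PHRASES)

# The office-furniture listing quoted above would be flagged:
print(looks_ai_generated(
    "I'm sorry but I cannot fulfill this request it goes against OpenAI use policy"
))  # prints True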

As fun as it is to call out these obvious mishaps from AI-generated content mills, a flood of harder-to-detect AI content is threatening to overwhelm everyone from art communities to sci-fi magazines to Amazon’s own ebook marketplace. Pretty much any platform that accepts user submissions involving text or visual art now has to worry about being flooded with wave after wave of AI-generated work trying to crowd out the human communities those platforms were created for. It’s a problem that’s likely to get worse before it gets better.

Listing image by Getty Images | Leon Neal
