spam


“Zero warnings”: Longtime YouTuber rails against unexplained channel removal

Artemiy Pavlov, the founder of a small but mighty music software brand called Sinesvibes, spent more than 15 years building a YouTube channel with all original content to promote his business’s products. Over all those years, he never had any issues with YouTube’s automated content removal system—until Monday, when YouTube, without issuing a single warning, abruptly deleted his entire channel.

“What a ‘nice’ way to start a week!” Pavlov posted on Bluesky. “Our channel on YouTube has been deleted due to ‘spam and deceptive policies.’ Which is the biggest WTF moment in our brand’s history on social platforms. We have only posted demos of our own original products, never anything else….”

Officially, YouTube told Pavlov that his channel violated YouTube’s “spam, deceptive practices, and scam policy,” but Pavlov could think of no videos that might be labeled as violative.

“We have nothing to hide,” Pavlov told Ars, calling YouTube’s decision to delete the channel with “zero warnings” a “terrible, terrible day for an independent, honest software brand.”

“We have never been involved with anything remotely shady,” Pavlov said. “We have never taken a single dollar dishonestly from anyone. And we have thousands of customers that stand by our brand.”

Ars saw Pavlov’s post and reached out to YouTube to find out why the channel was targeted for takedown. About three hours later, the channel was suddenly restored. That’s remarkably fast, as YouTube can sometimes take days or weeks to review an appeal. A YouTube spokesperson later confirmed that the Sinesvibes channel was reinstated through the regular appeals process, suggesting that YouTube recognized the removal as an obvious mistake.

Developer calls for more human review

For small brands like Sinesvibes, even spending half a day in limbo was a cause for crisis. Immediately, the brand worried about 50 broken product pages for one of its distributors, as well as “hundreds if not thousands of news articles posted about our software on dozens of different websites.” Unsure if the channel would ever be restored, Sinesvibes spent most of Monday surveying the damage.

Now that the channel is restored, Pavlov is confronting how much of the Sinesvibes brand depends on the YouTube channel staying online, even as he grapples with uncertainty because the reason behind the ban remains unknown. He told Ars that’s why, for small brands, simply having a channel reinstated doesn’t resolve all their concerns.



Here’s how hucksters are manipulating Google to promote shady Chrome extensions

The people overseeing the security of Google’s Chrome browser explicitly forbid third-party extension developers from trying to manipulate how the browser extensions they submit are presented in the Chrome Web Store. The policy specifically calls out search-manipulating techniques such as listing multiple extensions that provide the same experience or plastering extension descriptions with loosely related or unrelated keywords.

On Wednesday, security and privacy researcher Wladimir Palant revealed that developers are flagrantly violating those terms in hundreds of extensions currently available for download from Google. As a result, searches for a particular term or terms can return extensions that are unrelated, inferior knockoffs, or carry out abusive tasks such as surreptitiously monetizing web searches, something Google expressly forbids.

Not looking? Don’t care? Both?

A search Wednesday morning in California for Norton Password Manager, for example, returned not only the official extension but three others, all of which are unrelated at best and potentially abusive at worst. The results may look different for searches at other times or from different locations.

Search results for Norton Password Manager.

It’s unclear why someone who uses a password manager would be interested in spoofing their time zone or boosting the audio volume. Yes, they’re all extensions for tweaking or otherwise extending the Chrome browsing experience, but isn’t every extension? The Chrome Web Store doesn’t want extension users to get pigeonholed or to see the list of offerings as limited, so it doesn’t just return the title searched for. Instead, it draws inferences from descriptions of other extensions in an attempt to promote ones that may also be of interest.

In many cases, developers are exploiting Google’s eagerness to promote potentially related extensions in campaigns that foist irrelevant or abusive offerings on users. But Chrome’s security team has put developers on notice that they’re not permitted to engage in keyword spam and other search-manipulation techniques. So how is this happening?
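To make the abuse concrete, here is a rough, hypothetical heuristic for the kind of keyword stuffing Palant describes: a description that devotes most of its words to a handful of repeated search terms is a strong spam signal. The function name and scoring scheme are illustrative assumptions on my part, not anything Google or Palant actually uses.

```python
import re
from collections import Counter

def keyword_stuffing_score(description: str, top_n: int = 5) -> float:
    """Rough heuristic: the share of a description taken up by its
    most-repeated words. Heavily stuffed listings repeat the same
    brand names and search terms over and over."""
    words = re.findall(r"[a-z][a-z0-9'-]+", description.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    top = sum(count for _, count in counts.most_common(top_n))
    return top / len(words)

# A stuffed description scores far higher than a normal one.
stuffed = "password manager password vault password security password " * 10
normal = ("Securely stores your logins and fills them in automatically "
          "when you visit a site you have saved.")
assert keyword_stuffing_score(stuffed) > keyword_stuffing_score(normal)
```

A real spam filter would weigh many more signals, but even a crude ratio like this separates a keyword-stuffed listing from an honestly written one.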



Google search is losing the fight with SEO spam, study says

Just wait until more AI sites arrive

Study finds “search engines seem to lose the cat-and-mouse game that is SEO spam.”


It’s not just you—Google Search is getting worse. A new study from Leipzig University, Bauhaus-University Weimar, and the Center for Scalable Data Analytics and Artificial Intelligence looked at Google search quality for a year and found the company is losing the war against SEO (Search Engine Optimization) spam.

The study, first spotted by 404 Media, “monitored Google, Bing and DuckDuckGo for a year on 7,392 product review queries,” using queries like “best headphones” to study search results. The focus was on product review queries because the researchers felt those searches were “particularly vulnerable to affiliate marketing due to its inherent conflict of interest between users, search providers, and content providers.”

Overall, the study found that “the majority of high-ranking product reviews in the result pages of commercial search engines (SERPs) use affiliate marketing, and significant amounts are outright SEO product review spam.” Search engines occasionally update their ranking algorithms to try to combat spam, but the study found that “search engines seem to lose the cat-and-mouse game that is SEO spam” and that there are “strong correlations between search engine rankings and affiliate marketing, as well as a trend toward simplified, repetitive, and potentially AI-generated content.”
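To illustrate the kind of signal the researchers measured, here is a hypothetical sketch of counting what share of a review page’s outbound links are affiliate links. The URL patterns (Amazon’s “tag=” tracking parameter, a few common affiliate-network domains) are my own assumptions for illustration, not the study’s actual methodology.

```python
from urllib.parse import urlparse, parse_qs

# Illustrative sample of affiliate-network domains; a real analysis
# would use a much larger, curated list.
AFFILIATE_DOMAINS = {"go.skimresources.com", "shareasale.com", "anrdoezrs.net"}

def is_affiliate_link(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.netloc in AFFILIATE_DOMAINS:
        return True
    # Amazon Associates links carry a "tag=" tracking parameter.
    if "amazon." in parsed.netloc and "tag" in parse_qs(parsed.query):
        return True
    return False

# Hypothetical outbound links scraped from a "best headphones" review page.
links = [
    "https://www.amazon.com/dp/B000123?tag=examplesite-20",
    "https://en.wikipedia.org/wiki/Headphones",
]
affiliate_share = sum(map(is_affiliate_link, links)) / len(links)
```

Computing that share across thousands of result pages, and correlating it with rank position, is the gist of how a correlation between rankings and affiliate marketing can be quantified.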

The study found “an inverse relationship between a page’s optimization level and its perceived expertise, indicating that SEO may hurt at least subjective page quality.” Google and its treatment of pages is the primary force behind what does and doesn’t count as SEO, and to say Google’s guidelines reduce subjective page quality is a strike against Google’s entire ranking algorithm.

The bad news is that this doesn’t seem likely to get better any time soon. The study mentions generative AI sites only in passing, but that’s because they have emerged only in the past year. The elephant in the room is that generative AI is starting to be able to completely automate the process of SEO spam. Some AI content farms can scan a human-written site, use it as “training data,” rewrite it slightly, and then outrank the actual humans with more aggressive SEO tactics. There are already people bragging about pulling off AI-powered “SEO heists” on X (formerly Twitter). The New York Times is taking OpenAI to court for copyright infringement, and a class-action suit from book publishers calls ChatGPT and LLaMA (Large Language Model Meta AI) “industrial-strength plagiarists.” Artists are in the same boat with tools like Midjourney and Stable Diffusion. Most websites do not have the legal capacity to take on an infinite wave of automated spam sites enabled by these tools. Google’s policy is to not penalize AI-generated content in its search results.

A Google spokesperson responded to the study by pointing out that Google is still doing better than its competition: “This particular study looked narrowly at product review content, and it doesn’t reflect the overall quality and helpfulness of Search for the billions of queries we see every day. We’ve launched specific improvements to address these issues – and the study itself points out that Google has improved over the past year and is performing better than other search engines. More broadly, numerous third parties have measured search engine results for other types of queries and found Google to be of significantly higher quality than the rest.”

This post was updated at 6:00 pm ET to add a statement from Google.



Lazy use of AI leads to Amazon products called “I cannot fulfill that request”

FILE NOT FOUND —

The telltale error messages are a sign of AI-generated pablum all over the Internet.

I know naming new products can be hard, but these Amazon sellers made some particularly odd naming choices.


Amazon

Amazon users are at this point used to search results filled with products that are fraudulent, scams, or quite literally garbage. These days, though, they also may have to pick through obviously shady products, with names like “I’m sorry but I cannot fulfill this request it goes against OpenAI use policy.”

As of press time, some version of that telltale OpenAI error message appears in Amazon products ranging from lawn chairs to office furniture to Chinese religious tracts. A few similarly named products that were available as of this morning have been taken down as word of the listings spreads across social media (one such example is archived here).

ProTip: Don't ask OpenAI to integrate a trademarked brand name when generating a name for your weird length of rubber tubing.


Other Amazon product names don’t mention OpenAI specifically but feature apparent AI-related error messages, such as “Sorry but I can’t generate a response to that request” or “Sorry but I can’t provide the information you’re looking for” (available in a variety of colors). Sometimes, the product names even highlight the specific reason why the apparent AI-generation request failed, noting that OpenAI can’t provide content that “requires using trademarked brand names” or “promotes a specific religious institution” or in one case “encourage unethical behavior.”

The repeated invocation of a “commitment to providing reliable and trustworthy product descriptions” cited in this description is particularly ironic.

The descriptions for these oddly named products are also riddled with obvious AI error messages like, “Apologies, but I am unable to provide the information you’re seeking.” One product description for a set of tables and chairs (which has since been taken down) hilariously noted: “Our [product] can be used for a variety of tasks, such [task 1], [task 2], and [task 3]].” Another set of product descriptions, seemingly for tattoo ink guns, repeatedly apologizes that it can’t provide more information because: “We prioritize accuracy and reliability by only offering verified product details to our customers.”

Spam spam spam spam

Using large language models to help generate product names or descriptions isn’t against Amazon policy. On the contrary, in September Amazon launched its own generative AI tool to help sellers “create more thorough and captivating product descriptions, titles, and listing details.” And we could only find a small handful of Amazon products slipping through with the telltale error messages in their names or descriptions as of press time.

Still, these error-message-filled listings highlight the lack of care, or even basic editing, many Amazon scammers exercise when putting their spammy product listings on the Amazon marketplace. For every seller who gets caught accidentally posting an OpenAI error, there are likely countless others using the technology to create product names and descriptions that only seem like they were written by a human who has actual experience with the product in question.

A set of clearly real people conversing on Twitter / X.


Amazon isn’t the only online platform where these AI bots are outing themselves, either. A quick search for “goes against OpenAI policy” or “as an AI language model” can find a whole lot of artificial posts on Twitter / X or Threads or LinkedIn, for example. Security engineer Dan Feldman noted a similar problem on Amazon back in April, though searching with the phrase “as an AI language model” doesn’t seem to generate any obviously AI-generated search results these days.
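The quick-search trick described above can be sketched as a simple scanner. The phrase list below is drawn from the refusal messages quoted in this article; any real detector would need a far larger and regularly updated list, and this function is a hypothetical illustration, not a tool any platform is known to use.

```python
# Telltale AI "refusal" boilerplate, taken from the listings quoted above.
REFUSAL_PHRASES = [
    "i cannot fulfill this request",
    "goes against openai use policy",
    "as an ai language model",
    "i can't generate a response to that request",
    "unable to provide the information you're seeking",
]

def looks_ai_generated(text: str) -> bool:
    """Flag text containing a known AI-refusal phrase (case-insensitive)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in REFUSAL_PHRASES)

assert looks_ai_generated(
    "I'm sorry but I cannot fulfill this request it goes against "
    "OpenAI use policy")
assert not looks_ai_generated("Adjustable office chair with lumbar support")
```

The obvious limitation, as the article notes, is that this only catches sellers careless enough to paste the error verbatim; lightly edited AI output sails right through.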

As fun as it is to call out these obvious mishaps for AI-generated content mills, a flood of harder-to-detect AI content is threatening to overwhelm everyone from art communities to sci-fi magazines to Amazon’s own ebook marketplace. Pretty much any platform that accepts user submissions that involve text or visual art now has to worry about being flooded with wave after wave of AI-generated work trying to crowd out the human community they were created for. It’s a problem that’s likely to get worse before it gets better.

Listing image by Getty Images | Leon Neal
