Amazon

Lawsuit against Prime Video ads shows perils of annual streaming subscriptions

Priyanka Chopra (left) and Richard Madden (right) in the Amazon Prime Video original series Citadel.

Streaming services like Amazon Prime Video promote annual subscriptions as a way to save money. But long-term commitments to streaming companies that are in the throes of trying to determine how to maintain or achieve growth typically end up biting subscribers in the butt—and they’re getting fed up.

As first reported by The Hollywood Reporter, a lawsuit seeking class-action certification [PDF] hit Amazon on February 9. The complaint centers on Amazon showing ads with Prime Video streams, which it started doing for US subscribers in January unless customers paid an extra $2.99/month. This approach differed from how other streaming services previously introduced ads: by launching a new subscription plan with ads and lower prices and encouraging subscribers to switch.

A problem with this approach, per the lawsuit, is that people who signed up for an annual Prime Video subscription before Amazon’s September 2023 announcement about ads had already paid for a service that’s different from what they expected.

And that’s not the only risk people face when opting in to a yearlong relationship with streaming services these days.

Paying extra “for something they already paid for”

The lawsuit recently filed against Prime Video names California resident Wilbert Napoleon as a plaintiff and argues that Amazon’s advertisements for Prime Video made “reasonable consumers” think that they would get ad-free movie and TV-show streaming for the duration of their subscription.

The lawsuit reads:

Reasonable consumers expect that, if you purchase a subscription with ad-free streaming of movies and tv shows, that the ad-free streaming for movies and tv shows is available for the duration of the purchased subscription.

… however, Plaintiff and class members’ reasonable expectations were not met. Instead of receiving a subscription that included ad-free streaming of [TV] shows and movies, they received something worth less.

Napoleon bought an annual subscription to Prime Video in June 2023, per the court filings. The lawsuit accuses Amazon of falsely advertising Prime Video.

“Subscribers must now pay extra to get something that they already paid for,” the lawsuit says.

The idea of expectations not being met is common for streaming customers. That said, the lawsuit hasn’t gotten far enough yet that we should expect big changes to Prime Video or financial penalties for Amazon. Changing the user experience mid-deal is aggravating for customers, but Prime Video’s terms of use claim that Amazon maintains the right to diminish the value of Prime Video:

Offers and pricing for subscriptions (also referred to at times as memberships), the subscription services, the extent of available Subscription Digital Content, and the specific titles available through subscription services, may change over time and by location without notice (except as may be required by applicable law).

But there’s still a broader point to be made around streaming services trying to lure people into yearlong commitments knowing that the product they offer today might drastically change over the next 12 months.

Amazon, for example, announced in September that it would bring commercials to Prime Video but didn’t confirm when it would introduce ads until December, about a month ahead of the changes. Yet, Amazon reportedly had plans to bring ads to the service as early as June, per a report from The Wall Street Journal that cited anonymous “people familiar with the situation.” Despite these reported plans to alter the user experience significantly, Amazon continued to sell annual subscriptions to Prime Video. For months, people were committing to something that they expected would include commercial-free viewing, which used to be a popular draw of Prime Video compared to rival streaming services.

Prime Video also seemingly didn’t give a heads-up that it was removing Dolby Vision and Dolby Atmos support unless subscribers agreed to pay $2.99 more per month for an ad-free plan.

Amazon declined to comment on this story. Lawyers for the lawsuit filed against Amazon didn’t respond to a request for comment.

Prime Video cuts Dolby Vision, Atmos support from ad tier—and didn’t tell subs

Surprise —

To get them back, you must pay an extra $2.99/month for the ad-free tier.

The Rings of Power… now in HDR10+ for ad-tier users.

On January 29, Amazon started showing ads to Prime Video subscribers in the US unless they pay an additional $2.99 per month. But this wasn’t the only change to the service. Those who don’t pay up also lose features; their accounts no longer support Dolby Vision or Dolby Atmos.

As noticed by German tech outlet 4K Filme on Sunday, Prime Video users who choose to sit through ads can no longer use Dolby Vision or Atmos while streaming. Ad-tier subscribers are limited to HDR10+ and Dolby Digital 5.1.

4K Filme confirmed that this was the case on TVs from both LG and Sony; Forbes also confirmed the news using a TCL TV.

“In the ads-free account, the TV throws up its own confirmation boxes to say that the show is playing in Dolby Vision HDR and Dolby Atmos. In the basic, with-ads account, however, the TV’s Dolby Vision and Dolby Atmos pop-up boxes remain stubbornly absent,” Forbes said.

Amazon hasn’t explained its reasoning for the feature removal, but it may be trying to cut back on licensing fees paid to Dolby Laboratories. Amazon may also hope to push HDR10+, a Dolby Vision competitor that’s free and open. It also remains possible that we could one day see the return of Dolby Vision and Dolby Atmos to the ad tier through a refreshed licensing agreement.

Amazon has had a back-and-forth history with supporting Dolby features. In 2016, it first made Dolby Vision available on Prime Video. In 2017, though, Prime Video stopped supporting the format in favor of HDR10+. Amazon announced the HDR10+ format alongside Samsung, and it subsequently made the entire Prime Video library available in HDR10+. But in 2022, Prime Video started offering content like The Lord of the Rings: The Rings of Power in Dolby Vision once again.

Amazon wasn’t upfront about removals

Amazon announced in September 2023 that it would run ads on Prime Video accounts in 2024; in December, Amazon confirmed that the ads would start running on January 29 unless subscribers paid extra. In the interim, Amazon failed to mention that it was also removing support for Dolby Vision and Atmos from the ad-supported tier.

When Forbes first reported that Prime Video’s ad-based tier doesn’t support Dolby Vision and Atmos, it assumed the omission was a technical error. Not until after Forbes published its article did Amazon officially confirm the changes. That’s not how people subscribing to a tech giant’s service expect to learn that their current plan is being diminished.

It also seems that Amazon’s removal of the Dolby features has been done in such a way that it could lead some users to think they’re getting Dolby Vision and Atmos support even when they’re not.

As Forbes’ John Archer reported, “To add a bit of confusion to the mix, on the TCL TV I used, the Prime Video header information for the Jack Ryan show that appears on the with-ads basic account shows Dolby Vision and Dolby Atmos among the supported technical features—yet when you start to play the episode, neither feature is delivered to the TV.”

As streaming services overtake traditional media, many customers are growing increasingly discouraged by how the industry seems to be evolving into something strongly reminiscent of cable. While there are some aspects of old-school TV worth emulating, others—like confusing plans that don’t make it clear what you get with each package—are not.

Amazon didn’t respond to questions Ars Technica sent in time for publication, but we’ll update this story if we hear back.

Amazon hides cheaper items with faster delivery, lawsuit alleges

A game of hide-and-seek —

Hundreds of millions of Amazon’s US customers have overpaid, class action says.

Amazon rigged its platform to “routinely” push an overwhelming majority of customers to pay more for items that could’ve been purchased at lower costs with equal or faster delivery times, a class-action lawsuit has alleged.

The lawsuit claims that a biased algorithm drives Amazon’s “Buy Box,” which appears on an item’s page and prompts shoppers to “Buy Now” or “Add to Cart.” According to customers suing, nearly 98 percent of Amazon sales are of items featured in the Buy Box, because customers allegedly “reasonably” believe that featured items offer the best deal on the platform.

“But they are often wrong,” the complaint said, claiming that instead, Amazon features items from its own retailers and sellers that participate in Fulfillment By Amazon (FBA), both of which pay Amazon higher fees and gain secret perks like appearing in the Buy Box.

“The result is that consumers routinely overpay for items that are available at lower prices from other sellers on Amazon—not because consumers don’t care about price, or because they’re making informed purchasing decisions, but because Amazon has chosen to display the offers for which it will earn the highest fees,” the complaint said.
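
To make the alleged mechanism concrete, here is a toy sketch of the difference between picking a featured offer by price and delivery speed and weighting the pick by the fees a seller pays the platform. It is purely hypothetical; the names, fee rates, and logic below are illustrative assumptions, not Amazon's actual Buy Box algorithm.

```python
# Purely hypothetical sketch of the behavior the complaint alleges: a featured-offer
# picker that weights seller fees instead of simply choosing the cheapest, fastest
# offer. This is NOT Amazon's actual Buy Box algorithm; all numbers are made up.
from dataclasses import dataclass

@dataclass
class Offer:
    seller: str
    price: float          # price to the customer, in dollars
    delivery_days: int    # promised delivery time
    fee_rate: float       # share of the sale the marketplace collects (assumed)

def best_for_customer(offers: list[Offer]) -> Offer:
    """What shoppers assume the featured box shows: cheapest, then fastest."""
    return min(offers, key=lambda o: (o.price, o.delivery_days))

def fee_weighted_pick(offers: list[Offer]) -> Offer:
    """The alleged behavior: favor the offer that earns the platform the most."""
    return max(offers, key=lambda o: o.price * o.fee_rate)

offers = [
    Offer("independent seller", price=19.99, delivery_days=2, fee_rate=0.08),
    Offer("FBA seller",         price=24.99, delivery_days=2, fee_rate=0.15),
]

print(best_for_customer(offers).seller)   # independent seller
print(fee_weighted_pick(offers).seller)   # FBA seller, despite the higher price
```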

Authorities in the US and the European Union have investigated Amazon’s allegedly anticompetitive Buy Box algorithm, confirming that it’s “favored FBA sellers since at least 2016,” the complaint said. In 2021, Amazon was fined more than $1 billion by the Italian Competition Authority over these unfair practices, and in 2022, the European Commission ordered Amazon to “apply equal treatment to all sellers when deciding what to feature in the Buy Box.”

These investigations served as the first public notice that Amazon’s Buy Box couldn’t be trusted, customers suing said. Amazon claimed that the algorithm was fixed in 2020, but so far, Amazon does not appear to have addressed all concerns over its Buy Box algorithm. As of 2023, European regulators have continued pushing Amazon “to take further action to remedy its Buy Box bias in their respective jurisdictions,” the customers’ complaint said.

The class action was filed by two California-based long-time Amazon customers, Jeffrey Taylor and Robert Selway. Both feel that Amazon “willfully” and “deceptively” tricked them and hundreds of millions of US customers into purchasing the featured item in the Buy Box when better deals existed.

Taylor and Selway’s lawyer, Steve Berman, told Reuters that Amazon has placed “a great burden” on its customers, who must invest more time on the platform to identify the best deals. Unlike other lawsuits over Amazon’s Buy Box, this is the first lawsuit to seek compensation over harms to consumers, not over antitrust concerns or harms to sellers, Reuters noted.

The lawsuit has been filed on behalf of “all persons who made a purchase using the Buy Box from 2016 to the present.” Because Amazon supposedly “frequently” features more expensive items in the Buy Box and most sales result from Buy Box placements, the plaintiffs have alleged that “the chances that any Class member was unharmed by one or more purchases is virtually non-existent.”

“Our team expects the class to include hundreds of millions of Amazon consumers because virtually all purchases are made from the Buy Box,” a spokesperson for plaintiffs’ lawyers told Ars.

Customers suing are hoping that a jury will decide that Amazon continues to “deliberately steer” customers to purchase higher-priced items in the Buy Box to spike its own profits. They’ve asked a US district court in Washington, where Amazon is based, to permanently stop Amazon from using allegedly biased algorithms to drive sales through its Buy Box.

The extent of damages that Amazon could owe is currently unknown but appears significant. An estimated 80 percent of Amazon’s 300 million-strong user base, or roughly 240 million people, are US subscribers, each allegedly overpaying on most of their purchases over the past seven years. Last year, Amazon’s US sales exceeded $574 billion.

“Amazon claims to be a ‘customer-centric’ company that works to offer the lowest prices to its customers, but in violation of the Washington Consumer Protection Act, Amazon employs a deceptive scheme to keep its profits—and consumer prices—high,” the customers’ lawsuit alleged.

Amazon Ring stops letting police request footage in Neighbors app after outcry

Neighborhood watch —

Warrantless access may still be granted during vaguely defined “emergencies.”

Amazon Ring has shut down a controversial feature in its community safety app Neighbors that has allowed police to contact homeowners and request doorbell and surveillance camera footage without a warrant for years.

In a blog post, Neighbors app head Eric Kuhn confirmed that “public safety agencies like fire and police departments can still use the Neighbors app to share helpful safety tips, updates, and community events,” but the Request for Assistance (RFA) tool will be disabled.

“They will no longer be able to use the RFA tool to request and receive video in the app,” Kuhn wrote.

Kuhn did not explain why Neighbors chose to “sunset” the RFA tool, but privacy advocates and lawmakers have long criticized Ring for helping to expand police surveillance in communities, seemingly threatening privacy and enabling racial profiling, CNBC reported. Among the staunchest critics of Ring’s seemingly tight relationship with law enforcement is the Electronic Frontier Foundation (EFF), which has long advocated for Ring and its users to stop sharing footage with police without a warrant.

In a statement provided to Ars, EFF senior policy analyst Matthew Guariglia noted that Ring had launched the RFA tool after EFF and other organizations had criticized Ring for allowing police to privately email warrantless requests for footage in the Neighbors app. Rather than end requests through the app entirely, Ring appeared to see the RFA tool as a middle ground, providing transparency about how many requests were being made, without ending police access to community members readily sharing footage on the app.

“Now, Ring hopefully will altogether be out of the business of platforming casual and warrantless police requests for footage to its users,” Guariglia said.

Moving forward, police and public safety agencies with warrants will still be able to request footage, which Amazon documents in transparency reports published every six months. These reports show thousands of search warrant requests and even more “preservation requests,” which allow government agencies to request to preserve user information for up to 90 days, “pending the receipt of a legally valid and binding order.”

“If we are legally required to comply, we will provide information responsive to the government demand,” Ring’s website says.

Ring rebrand embraces “hope and joy”

Guariglia said that Ring sunsetting the RFA tool “is a step in the right direction,” but it has “come after years of cozy relationships with police and irresponsible handling of data” that has, for many, damaged trust in Ring.

In 2022, EFF reported that Ring admitted that “there are ’emergency’ instances when police can get warrantless access to Ring personal devices without the owner’s permission.” And last year, Ring reached a $5.8 million settlement with the Federal Trade Commission, refunding customers for what the FTC described as “compromising its customers’ privacy by allowing any employee or contractor to access consumers’ private videos and by failing to implement basic privacy and security protections, enabling hackers to take control of consumers’ accounts, cameras, and videos.”

Because of this history, Guariglia said that EFF is “still deeply skeptical about law enforcement’s and Ring’s ability to determine what is, or is not, an emergency that requires the company to hand over footage without a warrant or user consent.”

EFF recommends additional steps that Ring could take to enhance user privacy, like enabling end-to-end encryption by default and turning off default audio collection, Guariglia said.

Bloomberg noted that this change to the Neighbors app comes after a new CEO, Liz Hamren, came on board, announcing last year that “Ring was rethinking its mission statement.” Because Ring was adding indoor and backyard home monitoring and business services, the company’s initial mission statement—”to reduce crime in neighborhoods”—was no longer, as founding Ring CEO Jamie Siminoff had promoted it, “at the core” of what Ring does.

In Kuhn’s blog post, barely any attention is given to ending the RFA tool. A Ring spokesperson declined to tell Ars how many users had volunteered to use the tool, so it remains unclear how popular it was.

Rather than clarifying the RFA tool controversy, Kuhn’s blog primarily focused on describing how much Ring users loved “heartwarming or silly” footage like a “bear relaxing in a pool.” Under Hamren and Kuhn’s guidance, it appears that the Neighbors app is embracing a new mission of connecting communities to find “hope and joy” in their areas by adding new features to Neighbors like Moments and Best of Ring.

By contrast, when Ring introduced the RFA tool, it said that its mission was “to make neighborhoods safer for everyone.” On a help page, Ring bragged that police had used Neighbors to recover stolen guns and medical supplies. Because of these selling points, Ring’s community safety features may still be priorities for some users. So, while Ring may be ready to move on from highlighting its partnership with law enforcement as a “core” part of its service, its users may still be used to seeing their cameras as tools that should be readily accessible to police.

As law enforcement agencies lose access to Neighbors’ RFA tool, Guariglia said that it’s important to raise awareness among Ring owners that police can’t demand access to footage without a warrant.

“This announcement will not stop police from trying to get Ring footage directly from device owners without a warrant,” Guariglia said. “Ring users should also know that when police knock on their door, they have the right to, and should, request that police get a warrant before handing over footage.”

“Alexa is in trouble”: Paid-for Alexa gives inaccurate answers in early demos

Amazon demoed future generative AI capabilities for Alexa in September.

“If this fails to get revenue, Alexa is in trouble.”

A quote from an anonymous Amazon employee in a Wednesday Business Insider report paints a dire picture. Amazon needs its upcoming subscription version of Alexa to drive revenue in ways that its voice assistant never has before.

Amazon declined Ars’ request for comment on the report. But the opening quote in this article could have been uttered by anyone following voice assistants for the past year-plus. All voice assistants have struggled to drive revenue since people tend to use voice assistants for basic queries, like checking the weather, rather than transactions.

Amazon announced plans to drive usage and interest in Alexa by releasing a generative AI version that it said would one day require a subscription.

This leads to the question: Would you pay to use Alexa? Amazon will be challenged to convince people to change how they use Alexa while also asking them to suddenly pay a monthly fee to enable that unprecedented behavior.

Workers within Amazon seemingly see this obstacle. Insider, citing an anonymous Amazon employee, reported that “some were questioning the entire premise of charging for Alexa. For example, people who already pay for an existing Amazon service, such as Amazon Music, might not be willing to pay additional money to get access to the newer version of Alexa.”

“There is tension over whether people will pay for Alexa or not,” one of the anonymous Amazon workers reportedly said.

Subscription-based Alexa originally planned for June release

Amazon hasn’t publicly confirmed a release date for generative AI Alexa. But Insider’s report, citing “internal documents and people familiar with the matter,” said Amazon has been planning to release its subscription plan on June 30. However, plans for what Insider said will be called “Alexa Plus” and built on “Remarkable Alexa” technology could be delayed due to numerous development challenges.

According to the report, the Remarkable Alexa tech has been demoed by 15,000 customers and currently succeeds in being conversational but is “deflecting answers, often giving unnecessarily long or inaccurate responses.”

In September, then-SVP of devices and services at Amazon David Limp demoed Alexa understanding more complex commands, including Alexa not requiring the “Hey Alexa” prompt and being able to understand multiple demands for multiple apps through a single spoken phrase.

Insider reported: “The new Alexa still didn’t meet the quality standards expected for Alexa Plus, these people added, noting the technical challenges and complexity of redesigning Alexa.”

“Legacy constraints”

According to the report, people working on the original Alexa insisted on using what they had already built for the standard voice assistant with the paid-for version, resulting in a bloated technology and “internal politics.”

However, the original Alexa is based on a natural language model with multiple parts doing multiple things, in contrast to the single, colossal large language model behind generative AI Alexa.
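
As a loose illustration of the architectural gap the report describes (a hypothetical sketch; the function names, intents, and stub handlers below are invented and do not reflect Amazon's internals), a classic assistant routes each utterance through an intent classifier to purpose-built handlers, while a generative assistant hands the whole request to one large model:

```python
# Toy contrast between the two assistant architectures described above.
# Purely illustrative; none of this is Amazon's actual Alexa code.

def classify_intent(utterance: str) -> str:
    """Crude keyword-based classifier standing in for a narrow NLU component."""
    if "weather" in utterance:
        return "weather"
    if "timer" in utterance:
        return "timer"
    return "unknown"

def legacy_assistant(utterance: str) -> str:
    """Old-style pipeline: classify the intent, then dispatch to a dedicated handler."""
    handlers = {
        "weather": lambda u: "It's 72 degrees and sunny.",    # stub weather skill
        "timer": lambda u: "Timer set for 10 minutes.",       # stub timer skill
    }
    intent = classify_intent(utterance)
    return handlers.get(intent, lambda u: "Sorry, I can't help with that.")(utterance)

def generative_assistant(utterance: str, call_llm=lambda prompt: "(LLM reply)") -> str:
    """New-style approach: one large language model handles the whole request."""
    prompt = f"You are a helpful voice assistant. User said: {utterance}"
    return call_llm(prompt)  # a single model replaces the per-skill routing above

if __name__ == "__main__":
    print(legacy_assistant("what's the weather like"))
    print(generative_assistant("what's the weather like"))
```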

Now, generative AI Alexa is reportedly moving to a new technology stack to avoid the “legacy constraints” of today’s Alexa, a move that could itself delay things further.

Would Luddites find the gig economy familiar?

Machine Breakers Unite! —

Luddites were hardly the anti-tech dullards historians have painted them to be.

The term Luddite is usually used as an insult. It suggests someone who is backward-looking, averse to progress, afraid of new technology, and frankly, not that bright. But Brian Merchant claims that that is not who the Luddites were at all. They were organized, articulate in their demands, very much understood how factory owners were using machinery to supplant them, and highly targeted in their destruction of that machinery.

Their pitiable reputation is the result of a deliberate smear campaign by elites in their own time who (successfully, as it turned out) tried to discredit their coherent and justified movement. In his book Blood in the Machine: The Origins of the Rebellion Against Big Tech, Merchant memorializes the Luddites not as the hapless dolts with their heads in the sand that they’ve become synonymous with, but rather as the first labor organizers. Longing for the halcyon days of yore when we were more in touch with nature isn’t Luddism, Merchant writes; that’s pastoralism—totally different thing.

OG Luddites

Weavers used to work at home, using hand-powered looms (i.e., machines). The whole family pitched in to make cloth; they worked on their own schedules and spent their leisure time and meals together. Master weavers apprenticed for seven years to learn their trade. It worked this way in the north of England for hundreds of years.

In 1786 Edmund Cartwright invented the power-loom. Now, instead of a master weaver being required to make cloth, an unschooled child could work a loom. Anyone who could afford these “automated” looms (they did still need some human supervision) could cram a bunch of them into a factory and bring in orphans from the poorhouse to oversee them all day long. The orphans could churn out a lot more cloth much faster than before, and owners didn’t have to pay the 7-year-olds what they had been paying the master weavers. By the beginning of the 19th century, that is exactly what the factory owners did.

The weavers, centered in Nottinghamshire—Robin Hood country—obviously did not appreciate factory owners using these automated looms to obviate their jobs, their training—their entire way of life, really. They tried to negotiate with the factory owners for fair wages and to get protective legislation enacted to limit the impacts of the automated looms and protect their rights and products. But Parliament was having none of it; instead, Parliament—somewhat freaked out by the French Revolution—passed the Combination Acts in 1801, which made unionizing illegal. So, the workers took what they saw as their only remaining avenue of recourse; they started smashing the automated looms.

The aristocrats in the House of Lords told them they didn’t understand, that this automation would make things better for everyone. But it wasn’t improving things for anyone the Luddites knew or saw. They watched factory owners get richer and richer, their own families get thinner and thinner, and markets get flooded with inferior cloth made by child slaves working in unsafe conditions. So they continued breaking the machines, even after the House of Lords made it a capital crime in 1812.

Merchant tells his story through the experience of selected individuals. One is Robert Blincoe, an orphan whose memoir of mistreatment in his 10 years of factory work is thought to have inspired Dickens’ Oliver Twist. Another is Lord Byron, who, like other Romantic poets, sympathized with the Luddites and who spoke (beautifully but futilely) in the House of Lords on their behalf. George Mellor, another figure Merchant spends time with, is one of the primary candidates for a real-life General Ludd.

Edward Ludd himself doesn’t qualify, as he was mythical. Supposedly an apprentice in the cloth trade who smashed his master’s device with a hammer in 1799, he became the movement’s figurehead, with the disparate raiders breaking machines all over northern England, leaving notes signed with his name. George Mellor, by contrast, was one of the best writers and organizers the Luddites had. He’d spent the requisite seven years to learn his cloth finishing job and in 1811 was ready to get to work. The West Riding of York, where he lived, had been home to wool weavers for centuries. But now greedy factory owners were using machines and children to do the work he had spent his adolescence mastering. After over a year of pleading with the owners and the government, and then resorting to machine breaking, there was no change and no hope in sight.

Finally, Mellor led a raid in which a friend was killed, and he snapped. He murdered a factory owner and was hanged, along with 14 of his fellows (only four were involved in the murder; the rest were killed for other Luddite activities).

Even as their bodies were still practically swinging on the gallows, the aristocracy and press were already undermining and reshaping the Luddite story, depicting them as deluded and small-minded men who smashed machines they couldn’t understand—not the strategic, grassroots labor activists they were. That misrepresentation is largely how they are still remembered.

Lazy use of AI leads to Amazon products called “I cannot fulfill that request”

FILE NOT FOUND —

The telltale error messages are a sign of AI-generated pablum all over the Internet.

I know naming new products can be hard, but these Amazon sellers made some particularly odd naming choices.

Amazon

Amazon users are at this point used to search results filled with products that are fraudulent, scams, or quite literally garbage. These days, though, they also may have to pick through obviously shady products, with names like “I’m sorry but I cannot fulfill this request it goes against OpenAI use policy.”

As of press time, some version of that telltale OpenAI error message appears in Amazon products ranging from lawn chairs to office furniture to Chinese religious tracts. A few similarly named products that were available as of this morning have been taken down as word of the listings spreads across social media (one such example is archived here).

ProTip: Don’t ask OpenAI to integrate a trademarked brand name when generating a name for your weird length of rubber tubing.

Other Amazon product names don’t mention OpenAI specifically but feature apparent AI-related error messages, such as “Sorry but I can’t generate a response to that request” or “Sorry but I can’t provide the information you’re looking for” (available in a variety of colors). Sometimes, the product names even highlight the specific reason why the apparent AI-generation request failed, noting that OpenAI can’t provide content that “requires using trademarked brand names” or “promotes a specific religious institution” or in one case “encourage unethical behavior.”

The repeated invocation of a “commitment to providing reliable and trustworthy product descriptions” cited in this description is particularly ironic.

The descriptions for these oddly named products are also riddled with obvious AI error messages like, “Apologies, but I am unable to provide the information you’re seeking.” One product description for a set of tables and chairs (which has since been taken down) hilariously noted: “Our [product] can be used for a variety of tasks, such [task 1], [task 2], and [task 3]].” Another set of product descriptions, seemingly for tattoo ink guns, repeatedly apologizes that it can’t provide more information because: “We prioritize accuracy and reliability by only offering verified product details to our customers.”

Spam spam spam spam

Using large language models to help generate product names or descriptions isn’t against Amazon policy. On the contrary, in September Amazon launched its own generative AI tool to help sellers “create more thorough and captivating product descriptions, titles, and listing details.” And we could only find a small handful of Amazon products slipping through with the telltale error messages in their names or descriptions as of press time.

Still, these error-message-filled listings highlight the lack of care or even basic editing many Amazon scammers are exercising when putting their spammy product listings on the Amazon marketplace. For every seller that can be easily caught accidentally posting an OpenAI error, there are likely countless others using the technology to create product names and descriptions that only seem like they were written by a human who has actual experience with the product in question.

A set of clearly real people conversing on Twitter / X.

Amazon isn’t the only online platform where these AI bots are outing themselves, either. A quick search for “goes against OpenAI policy” or “as an AI language model” can find a whole lot of artificial posts on Twitter / X or Threads or LinkedIn, for example. Security engineer Dan Feldman noted a similar problem on Amazon back in April, though searching with the phrase “as an AI language model” doesn’t seem to generate any obviously AI-generated search results these days.
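
This kind of spotting is trivial to automate. Here is a minimal, hypothetical sketch (not a tool Amazon or any of these platforms actually runs; the phrase list and function name are assumptions drawn from the quotes above) that flags text containing the telltale refusal boilerplate:

```python
# Hypothetical sketch: flag listings or posts containing telltale AI refusal phrases
# like the ones quoted in this article. Not an official Amazon or OpenAI tool.
import re

TELLTALE_PHRASES = [
    "i cannot fulfill this request",
    "goes against openai use policy",
    "as an ai language model",
    "sorry but i can't generate a response",
    "unable to provide the information",
]

def looks_ai_generated(text: str) -> bool:
    """Return True if the text contains any of the known refusal boilerplate."""
    normalized = re.sub(r"\s+", " ", text.lower())
    return any(phrase in normalized for phrase in TELLTALE_PHRASES)

# Example: the product name quoted in this article trips the check.
print(looks_ai_generated(
    "I'm sorry but I cannot fulfill this request it goes against OpenAI use policy"
))  # -> True
```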

As fun as it is to call out these obvious mishaps for AI-generated content mills, a flood of harder-to-detect AI content is threatening to overwhelm everyone from art communities to sci-fi magazines to Amazon’s own ebook marketplace. Pretty much any platform that accepts user submissions that involve text or visual art now has to worry about being flooded with wave after wave of AI-generated work trying to crowd out the human community they were created for. It’s a problem that’s likely to get worse before it gets better.

Listing image by Getty Images | Leon Neal

AI firms’ pledges to defend customers from IP issues have real limits

Read the fine print —

Indemnities offered by Amazon, Google, and Microsoft are narrow.

The Big Tech groups are competing to offer new services such as virtual assistants and chatbots as part of a multibillion-dollar bet on generative AI.

FT

The world’s biggest cloud computing companies that have pushed new artificial intelligence tools to their business customers are offering only limited protections against potential copyright lawsuits over the technology.

Amazon, Microsoft and Google are competing to offer new services such as virtual assistants and chatbots as part of a multibillion-dollar bet on generative AI—systems that can spew out humanlike text, images and code in seconds.

AI models are “trained” on data, such as photographs and text found on the internet. This has led to concern that rights holders, from media companies to image libraries, will make legal claims against third parties who use the AI tools trained on their copyrighted data.

The big three cloud computing providers have pledged to defend business customers from such intellectual property claims. But an analysis of the indemnity clauses published by the cloud computing companies shows that the legal protections only extend to the use of models developed by or with oversight from Google, Amazon and Microsoft.

“The indemnities are quite a smart bit of business . . . and make people think ‘I can use this without worrying’,” said Matthew Sag, professor of law at Emory University.

But Brenda Leong, a partner at Luminos Law, said it was “important for companies to understand that [the indemnities] are very narrowly focused and defined.”

Google, Amazon and Microsoft declined to comment.

The indemnities provided to customers do not cover use of third-party models, such as those developed by AI start-up Anthropic, which counts Amazon and Google as investors, even if these tools are available for use on the cloud companies’ platforms.

In the case of Amazon, only content produced by its own models, such as Titan, as well as a range of the company’s AI applications, is covered.

Similarly, Microsoft only provides protection for the use of tools that run on its in-house models and those developed by OpenAI, the startup with which it has a multibillion-dollar alliance.

“People needed those assurances to buy, because they were hyper aware of [the legal] risk,” said one IP lawyer working on the issues.

The three cloud providers, meanwhile, have been adding safety filters to their tools that aim to screen out any potentially problematic content that is generated. The tech groups had become “more satisfied that instances of infringements would be very low,” but did not want to provide “unbounded” protection, the lawyer said.

While the indemnification policies announced by Microsoft, Amazon, and Alphabet are similar, their customers may want to negotiate more specific indemnities in contracts tailored to their needs, though that is not yet common practice, people close to the cloud companies said.

OpenAI and Meta are among the companies fighting the first generative AI test cases brought by prominent authors and the comedian Sarah Silverman. They have focused in large part on allegations that the companies developing models unlawfully used copyrighted content to train them.

Indemnities were being offered as an added layer of “security” to users who might be worried about the prospect of more lawsuits, especially since the test cases could “take significant time to resolve,” which created a period of “uncertainty,” said Angela Dunning, a partner at law firm Cleary Gottlieb.

However, Google’s indemnity does not extend to models that have been “fine-tuned” by customers using their internal company data—a practice that allows businesses to train general models to produce more relevant and specific results—while Microsoft’s does.

Amazon’s covers Titan models that have been customized in this way, but if the alleged infringement is due to the fine-tuning, the protection is voided.

Legal claims brought against the users—rather than the makers—of generative AI tools may be challenging to win, however.

When dismissing part of a claim brought by three artists a year ago against AI companies Stability AI, DeviantArt, and Midjourney, US Judge William Orrick said one “problem” was that it was “not plausible” that every image generated by the tools had relied on “copyrighted training images.”

For copyright infringement to apply, the AI-generated images must be shown to be “substantially similar” to the copyrighted images, Orrick said.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Amazon marketplace crackdown has sellers searching for legal help

Legit or not —

Clean-up drive has led to some small businesses having their accounts suspended.

Leon Neal | Getty Images

Merchants who have been suspended from selling goods on Amazon’s marketplace are turning to a cottage industry of lawyers to regain access to their accounts and money, amid growing scrutiny of how the retailer treats independents.

Millions of accounts on the leading ecommerce platform have been prevented from engaging in sales for alleged violations of Amazon’s broad range of policies and other bad behavior. Even temporary suspensions can be a critical blow to the small business owners who rely on online sales.

Four ecommerce-focused US law firms told the Financial Times that the majority of the cases they took on were complaints brought by aggrieved Amazon sellers, with each handling hundreds or thousands of cases every year.

About a dozen sellers also said they had grown worried about Amazon’s power to suspend their accounts or product listings, as it was not always clear what had triggered the suspension and Amazon’s seller support services did not always help to sort out the issue.

Account suspension was “a big fear of mine,” said one seller, who declined to be named. “At the end of the day, it’s not really your business. One day you can wake up and it’s all gone.”

Amazon’s recent efforts to crack down on issues such as fake product reviews have come as US and European regulators have upped their scrutiny of the online harms facing shoppers.

But critics said the existence of a growing army of lawyers and consultants to deal with the fallout from Amazon’s actions pointed to a problem with the way the retailer treats its sellers.

“If you’re a seller and you need help to navigate the system, that’s a real vulnerability for the marketplace. If you’re operating a business where the people you’re deriving revenue from feel that they’re being treated in an arbitrary way without due process, that is a problem,” said Marianne Rowden, chief executive of the E-Merchants Trade Council.

“The fact that there are entire law firms dedicated to dealing with Amazon says a lot,” said one seller, who like many who spoke to the FT asked to remain anonymous for fear of reprisals.

Amazon declined to comment in detail but said its selling partners were “incredibly important” and the company worked hard to “protect and help them grow their business.” The company worked to “eliminate mistakes and ‘false positive’ enforcements” and had an appeal process for sellers in place.

Sellers on Amazon’s marketplace account for more than 60 percent of sales in its store. In the nine months to September 30, Amazon recorded $96bn in commissions and fees paid by sellers, a jump of nearly 20 percent compared with the same period a year earlier.

As the marketplace has grown, Amazon has had to do more to police it. During the first half of 2023 in its EU store, Amazon took 274mn “actions” in response to potential policy violations and other suspected problems, which included the removal of content and 4.2mn account suspensions. Amazon revealed the numbers as part of its first European transparency report, newly required by EU law.

Amazon typically withholds any money in the account of a seller it has suspended for alleged fraudulent or abusive practices, which it may keep permanently if the account is not reinstated and the merchant is deemed to have been a bad actor.

Figuring out what caused a suspension and how to reverse it can be difficult. “We had a listing shut down during Prime Big Deals Days with no warning, no cause, no explanation,” said one kitchenware seller who has been selling on Amazon.com since 2014. “That’s pretty common.”

Amazon gave no further information when the listing was reinstated days later, the seller said.

Such confusion drives some sellers towards lawyers and consultants who advise on underlying problems, such as intellectual property disputes.

Amazon-focused US firms said they typically charged flat fees of between $1,300 and $3,500 per case.

CJ Rosenbaum, founding partner of the Amazon and ecommerce-focused law firm Rosenbaum Famularo, said the practice experienced a “big jump” in demand during the pandemic.

Many cases related to IP complaints from bigger brands “trying to control who sells their products” and making “a baseless counterfeit complaint” against a smaller Amazon seller, he added.

Lawyers said some sellers had been wrongly accused by the company’s automated systems that identify breaches of rules and policies. They added, though, that others had broken Amazon’s rules.

The retailer has become “more draconian” in the enforcement of its policies in recent years, said attorney Jeff Schick.

“Clients will say Amazon is unfair,” he said, but added that if the company did not strictly enforce its rules “then the platform becomes the next [US classified advertisements website] Craigslist.”

As part of escalated disputes, lawyers might steer merchants through a costly arbitration process that the company requires US sellers to use for most issues, rather than filing lawsuits against it.

Sellers were subject to “forced” arbitration clauses that required them to “sign away the right to their day in court if a dispute with Amazon arises,” said a 2022 US government report.

The details of arbitrations are not public, and decisions do not typically set binding precedents. They can also be hugely expensive: the up to three arbitrators who preside over a case can charge hundreds of dollars an hour.

“Quickly, you’re at $25,000 of costs or more,” said sole practitioner Leo Vaisburg, who left firm Wilson Elser in 2022 to pursue Amazon-related work full time. For many small businesses the high costs were “a barrier to entry,” he added. “Very few cases are worth that kind of money.”

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

FDA would like to stop finding Viagra in supplements sold on Amazon

Well, that’s one kind of energy —

“Big Guys Male Energy Supplement” turns out to be a vehicle for prescription drugs.

If you were to search for a product called “Mens Maximum Energy Supplement” on Amazon, you’d be bombarded with everything from caffeine pills to amino acid supplements to the latest herb craze. But at some point last year, the FDA purchased a specific product by that name from Amazon and sent it off to one of its labs to find out if the self-proclaimed “dietary supplement” contained anything that would actually boost energy.

In August, the FDA announced that the supposed supplement was actually a vehicle for a prescription drug that offered a very specific type of energy boost. It contained sildenafil, a drug much better known by its brand name: Viagra.

Four months later, the FDA is finally getting around to issuing a warning letter to Amazon, giving it 15 days not only to address Mens Maximum Energy Supplement and a handful of similar vehicles for prescription erection boosters, but also to explain how the company is going to keep similarly mislabeled prescription drugs from being hawked on its site in the future.

Prescription energy

Mens Maximum Energy Supplement was just one of seven products that the FDA found for sale on Amazon that contained either sildenafil or tadalafil (marketed as Cialis). The product names ranged from the jokey (WeFun and Genergy) to the vaguely suggestive (Round 2) to the verbose (Big Guys Male Energy Supplement and X Max Triple Shot Energy Honey). All of them were marketed as supplements and contained no indication of their active ingredients.

And that, as the FDA explains to Amazon in detail, means selling those products violates a whole host of laws and regulations. They’re being marketed as dietary supplements, but don’t fit the operative legal definition of these supplements. They’re offering prescription drugs without providing directions for their intended and safe use. They contain no warnings about unsafe doses or how long they can be used safely.

The FDA points out that these rules exist for very good reasons. Both of the drugs found in these supplements inhibit an enzyme called a type-5 phosphodiesterase, which, among other things, influences the circulatory system. One potential side effect is a dangerous drop in blood pressure. Both sildenafil and tadalafil can also have dangerous interactions with a specific class of drugs often taken by those with diabetes, high blood pressure, or heart disease.

Legal remedies

The FDA’s letter makes it clear that the highlighted supplements aren’t intended to be an exhaustive list of the products that Amazon offers in violation of federal law. And it is very explicit about the fact that it is Amazon’s responsibility (and not the FDA’s) to ensure compliance: “You are responsible for investigating and determining the causes of any violations and for preventing their recurrence or the occurrence of other violations.”

And Amazon clearly has its work cut out for it. None of the products cited by the FDA’s letter appear to still be for sale under the same name at Amazon—a company spokesperson told Ars that it pulled them in response to the original FDA findings. But searches for them at Amazon brought up a number of similar products, many of which included pills with the blue color that Viagra was marketed with.

So, the FDA wants to see a plan that describes how Amazon will not only deal with the products at issue in this letter, but prevent all similar violations in the future: “Include an explanation of each step being taken to prevent the recurrence of violations, including steps you will take to ensure that Amazon will no longer introduce or deliver for introduction into interstate commerce unapproved new drugs and/or misbranded products with undeclared drug ingredients, as well as copies of related documentation.”

Amazon is being given 15 days to respond to the warning letter. Failure to adequately address these violations, the FDA warns, will result in legal action.

Big Tech is spending more than VC firms on AI startups

money cannon —

Microsoft, Google, and Amazon have crowded out traditional Silicon Valley investors.

A string of deals by Microsoft, Google and Amazon amounted to two-thirds of the $27 billion raised by fledgling AI companies in 2023.

FT montage/Dreamstime

Big tech companies have vastly outspent venture capital groups with investments in generative AI startups this year, as established giants use their financial muscle to dominate the much-hyped sector.

Microsoft, Google and Amazon last year struck a series of blockbuster deals, amounting to two-thirds of the $27 billion raised by fledgling AI companies in 2023, according to new data from private market researchers PitchBook.

The huge outlay, which exploded after the launch of OpenAI’s ChatGPT in November 2022, highlights how the biggest Silicon Valley groups are crowding out traditional tech investors for the biggest deals in the industry.

The rise of generative AI—systems capable of producing humanlike video, text, image and audio in seconds—has also attracted top Silicon Valley investors. But VCs have been outmatched, having been forced to slow down their spending as they adjust to higher interest rates and falling valuations for their portfolio companies.

“Over the past year, we’ve seen the market quickly consolidate around a handful of foundation models, with large tech players coming in and pouring billions of dollars into companies like OpenAI, Cohere, Anthropic and Mistral,” said Nina Achadjian, a partner at US venture firm Index Ventures, referring to some of the top AI startups.

“For traditional VCs, you had to be in early and you had to have conviction—which meant being in the know on the latest AI research and knowing which teams were spinning out of Google DeepMind, Meta and others,” she added.

Financial Times

A string of deals, such as Microsoft’s $10 billion investment in OpenAI as well as billions of dollars raised by San Francisco-based Anthropic from both Google and Amazon, helped push overall spending on AI groups to nearly three times as much as the previous record of $11 billion set two years ago.

Venture investing in tech hit record levels in 2021, as investors took advantage of ultra-low interest rates to raise and deploy vast sums across a range of industries, particularly those most disrupted by Covid-19.

Microsoft has also committed $1.3 billion to Inflection, another generative AI start-up, as it looks to steal a march on rivals such as Google and Amazon.

Building and training generative AI tools is an intensive process, requiring immense computing power and cash. As a result, start-ups have preferred to partner with Big Tech companies which can provide cloud infrastructure and access to the most powerful chips as well as dollars.

That has rapidly pushed up the valuations of private start-ups in the space, making it harder for VCs to bet on the companies at the forefront of the technology. An employee stock sale at OpenAI is seeking to value the company at $86 billion, almost treble the valuation it received earlier this year.

“Even the world’s top venture investors, with tens of billions under management, can’t compete to keep these AI companies independent and create new challengers that unseat the Big Tech incumbents,” said Patrick Murphy, founding partner at Tapestry VC, an early-stage venture capital firm.

“In this AI platform shift, most of the potentially one-in-a-million companies to appear so far have been captured by the Big Tech incumbents already.”

VCs are not absent from the market, however. Thrive Capital, Josh Kushner’s New York-based firm, is the lead investor in OpenAI’s employee stock sale, having already backed the company earlier this year. Thrive has continued to invest throughout a downturn in venture spending in 2023.

Paris-based Mistral raised around $500 million from investors including venture firms Andreessen Horowitz and General Catalyst, and chipmaker Nvidia since it was founded in May this year.

Some VCs are seeking to invest in companies building applications on top of so-called “foundation models” developed by OpenAI and Anthropic, in much the same way apps began being developed for mobile devices in the years after smartphones were introduced.

“There is this myth that only the foundation model companies matter,” said Sarah Guo, founder of AI-focused venture firm Conviction. “There is a huge space of still-unexplored application domains for AI, and a lot of the most valuable AI companies will be fundamentally new.”

Additional reporting by Tim Bradshaw.

© 2023 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

CVS, Rite Aid, Walgreens hand out medical records to cops without warrants

prescription for privacy —

Lawmakers want HHS to revise health privacy law to require warrants.

All of the big pharmacy chains in the US hand over sensitive medical records to law enforcement without a warrant—and some will do so without even running the requests by a legal professional, according to a congressional investigation.

The revelation raises grave medical privacy concerns, particularly in a post-Dobbs era in which many states are working to criminalize reproductive health care. Even if people in states with restrictive laws cross state lines for care, pharmacists in massive chains, such as CVS, can access records across borders.

Lawmakers noted the pharmacies’ policies for releasing medical records in a letter dated Tuesday to Department of Health and Human Services (HHS) Secretary Xavier Becerra. The letter—signed by Sen. Ron Wyden (D-Ore.), Rep. Pramila Jayapal (D-Wash.), and Rep. Sara Jacobs (D-Calif.)—said their investigation pulled information from briefings with eight big prescription drug suppliers.

They include the seven largest pharmacy chains in the country: CVS Health, Walgreens Boots Alliance, Cigna, Optum Rx, Walmart Stores, Inc., The Kroger Company, and Rite Aid Corporation. The lawmakers also spoke with Amazon Pharmacy.

All eight of the pharmacies said they do not require law enforcement to have a warrant prior to sharing private and sensitive medical records, which can include the prescription drugs a person used or uses and their medical conditions. Instead, all the pharmacies hand over such information with nothing more than a subpoena, which can be issued by government agencies and does not require review or approval by a judge.

Three pharmacies—CVS Health, The Kroger Company, and Rite Aid Corporation—told lawmakers they didn’t even require their pharmacy staff to consult legal professionals before responding to law enforcement requests at pharmacy counters. According to the lawmakers, CVS, Kroger, and Rite Aid said that “their pharmacy staff face extreme pressure to immediately respond to law enforcement demands and, as such, the companies instruct their staff to process those requests in store.”

The rest of the pharmacies—Amazon, Cigna, Optum Rx, Walmart, and Walgreens Boots Alliance—at least require that law enforcement requests be reviewed by legal professionals before pharmacists respond. But, only Amazon said it had a policy of notifying customers of law enforcement demands for pharmacy records unless there were legal prohibitions to doing so, such as a gag order.

HIPAA and transparency

The lawmakers note that the pharmacies aren’t violating regulations under the Health Insurance Portability and Accountability Act (HIPAA). The pharmacies pointed to language in HIPAA regulations that allow health care providers, including pharmacists, to provide medical records if required by law, with subpoenas being a sufficient legal process for such a request. However, the lawmakers note that the HHS has discretion in determining the legal standard here—that is, it has the power to strengthen the regulation to require a warrant, which the lawmakers say it should do.

“We urge HHS to consider further strengthening its HIPAA regulations to more closely align them with Americans’ reasonable expectations of privacy and Constitutional principles,” the three lawmakers wrote.

They also pushed for pharmacies to do better, encouraging them to follow the lead of tech companies. “Pharmacies can and should insist on a warrant, and invite law enforcement agencies that insist on demanding patient medical records with solely a subpoena to go to court to enforce that demand. The requirement for a warrant is exactly the approach taken by tech companies to protect customer privacy.” The trio noted that Google, Microsoft, and Yahoo have since 2010 required law enforcement to have a warrant to obtain customers’ emails.

Also noting tech companies’ lead, the lawmakers encouraged pharmacies to publish annual transparency reports. In the course of the investigation, only CVS Health said it planned to do so.

“Americans deserve to have their private medical information protected at the pharmacy counter and a full picture of pharmacies’ privacy practices, so they can make informed choices about where to get their prescriptions filled,” the lawmakers wrote.

For now, HIPAA regulations grant patients the right to know who is accessing their health records. But, to do so, patients have to specifically request that information—and almost no one does that. “Last year, CVS Health, the largest pharmacy in the nation by total prescription revenue, only received a single-digit number of such consumer requests,” the lawmakers noted.

“The average American is likely unaware that this is even a problem,” the lawmakers said.
