Generative AI

Fake AI law firms are sending fake DMCA threats to generate fake SEO gains

Dewey Fakum & Howe, LLP —

How one journalist found himself targeted by generative AI over a keyfob photo.

Updated

A person made of many parts, similar to the attorney who handles both severe criminal law and copyright takedowns for an Arizona law firm.

Getty Images

If you run a personal or hobby website, getting a copyright notice from a law firm about an image on your site can trigger some fast-acting panic. As someone who has paid to settle a news service-licensing issue before, I can empathize with anybody who wants to make this kind of thing go away.

Which is why a new kind of angle-on-an-angle scheme can seem both obvious to spot and likely effective. Ernie Smith, the prolific, ever-curious writer behind the newsletter Tedium, received a “DMCA Copyright Infringement Notice” in late March from “Commonwealth Legal,” representing the “Intellectual Property division” of Tech4Gods.

The issue was with a photo of a keyfob from legitimate photo service Unsplash used in service of a post about a strange Uber ride Smith once took. As Smith detailed in a Mastodon thread, the purported firm needed him to “add a credit to our client immediately” through a link to Tech4Gods, and said it should be “addressed in the next five business days.” Removing the image “does not conclude the matter,” and should Smith not have taken action, the putative firm would have to “activate” its case, relying on DMCA 512(c) (which, in many readings, actually does grant relief should a website owner, unaware of infringing material, “act expeditiously to remove” said material). The email unhelpfully points to the main page of the Internet Archive so that Smith might review “past usage records.”

A slice of the website for Commonwealth Legal Services, with every word of that phrase, including “for,” called into question.

Commonwealth Legal Services

There are quite a few issues with Commonwealth Legal’s request, as detailed by Smith and 404 Media. Chief among them is that Commonwealth Legal, a firm theoretically based in Arizona (which is not a commonwealth), almost certainly does not exist. Despite the 2018 copyright displayed on the site, the firm’s website domain was seemingly registered on March 1, 2024, with a Canadian IP location. The address on the firm’s site leads to a location that, to say the least, does not match the “fourth floor” indicated on the website.

While the law firm’s website is stuffed full of stock images, so are many websites for professional services. The real tell is the site’s list of attorneys, most of whom, as 404 Media puts it, have “vacant, thousand-yard stares” common to AI-generated faces. AI detection firm Reality Defender told 404 Media that its service spotted AI generation in every attorney’s image, “most likely by a Generative Adversarial Network (GAN) model.”

Then there are the attorneys’ bios, which offer surface-level competence underpinned by bizarre setups. Five of the 12 supposedly come from acclaimed law schools at Harvard, Yale, Stanford, and the University of Chicago. The other seven seem to have graduated from the top five results you might get for “Arizona Law School.” Sarah Walker has a practice based on “Copyright Violation and Judicial Criminal Proceedings,” a quite uncommon pairing. Sometimes she is “upholding the rights of artists,” but she can also “handle high-stakes criminal cases.” Walker, it seems, couldn’t pick just one track at Yale Law School.

Why would someone go to the trouble of making a law firm out of NameCheap, stock art, and AI images (and, seemingly, AI-generated copy) to send quasi-legal demands to site owners? Backlinks, that’s why. Backlinks are links from a site that Google (or others, but almost always Google) holds in high esteem to a site trying to rank up. Whether spammed, traded, generated, or demanded through a fake firm, backlinks power the search engine optimization (SEO) gray, to very dark gray, market. For all their touted algorithmic (and now AI) prowess, search engines have always had a hard time gauging backlink quality and context, so some site owners still buy backlinks.

The owner of Tech4Gods told 404 Media’s Jason Koebler that he did buy backlinks for his gadget review site (with “AI writing assistants”). He denied owning the disputed image or any others and vaguely suggested that a disgruntled former contractor may be trying to poison his rankings with spam links.

Asked if he had heard back from “Commonwealth Legal” now that five business days were up, Ernie Smith tells Ars: “No, alas.”

This post was updated at 4:50 p.m. Eastern to include Ernie Smith’s response.

Copilot key is based on a button you probably haven’t seen since IBM’s Model M

Microsoft chatbot button —

Left-Shift + Windows key + F23

A Dell XPS 14 laptop. The Copilot key is to the right of the right-Alt button.

In January, Microsoft introduced a new key to Windows PC keyboards for the first time in 30 years. The Copilot key, dedicated to launching Microsoft’s eponymous generative AI assistant, is already on some Windows laptops released this year. On Monday, Tom’s Hardware dug into the new addition and determined exactly what pressing the button does, which is actually pretty simple. Pushing a computer’s integrated Copilot button is like pressing left-Shift + Windows key + F23 simultaneously.

Tom’s Hardware confirmed this after wondering if the Copilot key introduced a new scan code to Windows or if it worked differently. Using the scripting program AutoHotkey on a new laptop with a Copilot button, Tom’s Hardware discovered the keystrokes registered when a user presses the Copilot key. The publication confirmed with Dell that “this key assignment is standard for the Copilot key and done at Microsoft’s direction.”
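If you want to see this for yourself, a keyboard hook will show the chord arriving as ordinary keystrokes. Here is a minimal sketch in Python rather than AutoHotkey (my substitution, not Tom’s Hardware’s method; it assumes the third-party “keyboard” package, and exact key names can vary by hardware and driver):

    # pip install keyboard  (third-party package; on Windows, run from an elevated prompt)
    import keyboard

    # Print every key event. Pressing the Copilot key should log
    # left shift + left windows + f23, matching Tom's Hardware's findings.
    def log_event(event):
        print(f"{event.event_type}: {event.name} (scan code {event.scan_code})")

    keyboard.hook(log_event)
    keyboard.wait("esc")  # press Esc to stop logging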

F23

Surprising to see in that string of keys is F23. Having a computer keyboard with a function row or rows that take you from F1 all the way to F23 is quite rare today. When I try to imagine a keyboard that comes with an F23 button, vintage keyboards come to mind, more specifically buckling spring keyboards from IBM.

IBM’s Model F, which debuted in 1981 and used buckling spring switches over a capacitive PCB, and the Model M, which launched in 1985 and used buckling spring switches over a membrane sheet, both offered layouts with 122 keys. These layouts included not one, but two rows of function keys that would leave today’s 60 percent keyboard fans sweating over the wasted space.

But having 122 keys was helpful for keyboards tied to IBM business terminals. The keyboard layout even included a bank of keys to the left of the primary alpha block of keys for even more forms of input.

An IBM Model M keyboard with an F23 key.

The 122-key keyboard layout with F23 lives on. Beyond people who still swear by old Model F and M keyboards, Model F Labs and Unicomp both currently sell modern buckling spring keyboards with built-in F23 buttons. Another reason a modern Windows PC user might have access to an F23 key is if they use a macro pad.

But even with those uses in mind, the F23 key remains rare. That helps explain why Microsoft would use the key for launching Copilot; users are unlikely to have F23 programmed for other functions. This was also likely less work than making a key with an entirely new scan code.

The Copilot button is reprogrammable

When I previewed Dell’s 2024 XPS laptops, a Dell representative told me that the integrated Copilot key wasn’t reprogrammable. However, in addition to providing some interesting information about the newest PC key since the Windows button, Tom’s Hardware’s revelation shows why the Copilot key is actually reprogrammable, even if OEMs don’t give users a way to do so out of the box. (If you need help, check out the website’s tutorial for reprogramming the Windows Copilot key.)
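As a rough illustration of what a software remap can look like (a sketch, not the tutorial linked above; it assumes the same third-party Python “keyboard” package as before, and suppressing Windows-key chords can be unreliable on some systems):

    # pip install keyboard
    import subprocess
    import keyboard

    # Intercept the Copilot chord and launch something else instead.
    # suppress=True swallows the original keystrokes so Copilot never opens.
    keyboard.add_hotkey(
        "shift+windows+f23",
        lambda: subprocess.Popen(["notepad.exe"]),  # substitute any app you like
        suppress=True,
    )

    keyboard.wait()  # keep the script running

Going the other direction, keyboard.send("shift+windows+f23") should pop Copilot open on a machine that lacks the physical key.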

I suspect there’s a strong interest in reprogramming that button. For one, generative AI, despite all its hype and potential, is still an emerging technology. Many don’t need or want access to any chatbot—let alone Microsoft’s—instantly or even at all. Those who don’t use their system with a Microsoft account have no use for the button, since being logged in to a Microsoft account is required for the button to launch Copilot.

A rendering of the Copilot button.

Microsoft

Additionally, there are other easy ways to launch Copilot on a computer that has the program downloaded, like double-clicking an icon or pressing Windows + C, that make a dedicated button unnecessary. (Ars Technica asked Microsoft why the Copilot key doesn’t just register Windows + C, but the company declined to comment. Windows + C has launched other apps in the past, including Cortana, so it’s possible that Microsoft wanted to avoid the Copilot key performing a different function when pressed on computers that use Windows images without Copilot.)

In general, shoehorning the Copilot key into Windows laptops seems premature. Copilot is young and still a preview; just a few months ago, it was called Bing Chat. Further, the future of generative AI, including its popularity and top uses, is still forming and could evolve substantially during the lifetime of a Windows laptop. Microsoft’s generative AI efforts could also flounder over the years. Imagine if Microsoft went all-in on Bing back in the day and made all Windows keyboards have a Bing button, for example. Just because Microsoft wants something to become mainstream doesn’t mean that it will.

This all has made the Copilot button seem more like a way to force the adoption of Microsoft’s chatbot than a way to improve Windows keyboards. Microsoft has also made the Copilot button a requirement for its AI PC certification (which also requires an integrated neural processing unit and having Copilot pre-installed). Microsoft plans to make Copilot keys a requirement for Windows 11 OEM PCs eventually, it told Ars Technica in January.

At least for now, the basic way that the Copilot button works means you can turn the key into something more useful. Now, the tricky part would be finding a replacement keycap to eradicate Copilot’s influence from your keyboard.

Listing image by Microsoft

Canva’s Affinity acquisition is a non-subscription-based weapon against Adobe

M&A —

But what will result from the companies’ opposing views on generative AI?

Affinity’s photo editor.

Online graphic design platform provider Canva announced its acquisition of Affinity on Tuesday. The purchase adds tools for creative professionals to the Australian startup’s repertoire, presenting competition for today’s digital design stronghold, Adobe.

The companies didn’t provide specifics about the deal, but Cliff Obrecht, Canva’s co-founder and COO, told Bloomberg that it consists of cash and stock and is worth “several hundred million pounds.”

Canva, which debuted in 2013, has made numerous acquisitions to date, including Flourish, Kaleido, and Pixabay, but its purchase of Affinity is its biggest yet—by both price and headcount (90). Affinity CEO Ashley Hewson said via a YouTube video that Canva approached Affinity about a potential deal two months ago.

Before its Affinity purchase, Canva claimed 175 million users, which interestingly includes 90 million accrued since September 2022, when Canva launched Visual Suite. Without Affinity, though, Canva hasn’t had a way to appeal to the business-to-business market.

Affinity, meanwhile, whose apps run on iPads, Macs, and Windows PCs, has a creative suite that includes a photo editor, professional page layout software, and Designer, vector-based graphics software that “thousands” of illustrators, designers, and game developers use, Obrecht said when announcing the acquisition.

Of course, Affinity’s user base isn’t nearly the size of Adobe’s. Affinity claims that 3 million creative professionals use its tools. Adobe hasn’t provided hard numbers recently, but in 2017, it was estimated that Adobe Creative Cloud had 12 million subscribers, and Adobe currently claims to have 50 million members on its Behance online community.

However, Affinity has earned a following among creative professionals seeking an alternative to Adobe. Speaking to Bloomberg, Obrecht was keen to point out that Apple has featured Affinity apps in presentations about creative products, for example.

Perpetual Affinity licenses will still be available

Since Affinity’s founding in 2014, one of the biggest ways it has stood out to creatives looking to avoid the costs associated with Adobe, including subscription fees, is perpetual licensing. New owner Canva pledged in an announcement today that one-time purchase fees will always be an option for Affinity users.

“Perpetual licenses will always be offered, and we will always price Affinity fairly and affordably,” an announcement today from Canva and Affinity said.

If Canva ever decides to sell Affinity as a subscription, perpetual licensing will remain available, Canva said, adding: “This fits with enabling Canva users to start adopting Affinity. It could also allow us to offer Affinity users a way to scale their workflows using Canva as a platform to share and collaborate on their Affinity assets, if they choose to.”

As we’ve seen with many other acquisitions, though, it’s common for companies to start changing their minds about how they’re willing to operate an acquired business years or even months after finalizing the purchase. And, of course, Canva’s idea of pricing “fairly and affordably” could differ from that of long-time Affinity users.

What about AI?

Canva also vowed to keep Affinity available as a standalone product and said there will be upcoming free updates to Affinity V2. However, Cameron Adams, Canva’s co-founder, pointed to potential future integration between Canva’s and Affinity’s offerings when speaking with The Sydney Morning Herald:

Our product teams have already started chatting and we have some immediate plans for lightweight integration, but we think the products themselves will always be separate. Professional designers have really specific needs.

Canva’s announcement today said that the company plans to accelerate the rollout of “highly requested” Affinity features, “such as variable font support, blend and width tools, auto object selection, multi-page spreads, [and] ePub export.” With Canva, which was valued at $26 billion in 2021 and generates over $2.1 billion in annualized revenue, taking ownership of Affinity, the creative suite is expected to have more resources for improvements and updates than before.

Notably, though, Canva hasn’t revealed to what degree it may try to incorporate AI into Affinity. Canva is fully aboard the generative AI hype train and, as recently as this Monday, pushed workers of all types to embrace the technology. Affinity, meanwhile, has said that it won’t make any generative AI tech and is “against anything which undermines human talent or tramples on artists’ IP.” Affinity’s stance could be forced to change one day under its new owner.

To start, though, Canva’s acquisition helps to fill the B2B gap in its portfolio, and it’s expected to use its new appeal to go after some of Adobe’s dominance.

“While our last decade at Canva has focused heavily on the 99 percent of knowledge workers without design training, truly empowering the world to design includes empowering professional designers, too. By joining forces with Affinity, we’re excited to unlock the full spectrum of designers at every level and stage of the design journey,” Obrecht said in Tuesday’s announcement.

Meanwhile, Adobe abandoned its own recent merger and acquisition efforts, a $20 billion purchase of Figma, in December due to regulatory concerns.

WWDC 2024 starts on June 10 with announcements about iOS 18 and beyond

WWDC —

Speculation is rampant that Apple will make its first big moves in generative AI.

The logo for WWDC24.

Apple

Apple has announced dates for this year’s Worldwide Developers Conference (WWDC). WWDC24 will run from June 10 through June 14 at the company’s Cupertino, California, headquarters, but everything will be streamed online.

Apple posted about the event with the following generic copy:

Join us online for the biggest developer event of the year. Be there for the unveiling of the latest Apple platforms, technologies, and tools. Learn how to create and elevate your apps and games. Engage with Apple designers and engineers and connect with the worldwide developer community. All online and at no cost.

As always, the conference will kick off with a keynote presentation on the first day, which is Monday, June 10. You can be sure Apple will use that event to at least announce the key features of its next round of annual software updates for iOS, iPadOS, macOS, watchOS, visionOS, and tvOS.

We could also see new hardware—it doesn’t happen every year, but it has of late. We don’t yet know exactly what that hardware might be, though.

Much of the speculation among analysts and commentators concerns Apple’s first move into generative AI. There have been reports that Apple may work with a partner like Google to include a chatbot in its operating system, that it has been considering designing its own AI tools, or that it could offer an AI App Store, giving users a choice between many chatbots.

Whatever the case, Apple is playing catch-up with some of its competitors in generative AI and large language models even though it has been using other applications of AI across its products for a couple of years now. The company’s leadership will probably talk about it during the keynote.

After the keynote, Apple usually hosts a “Platforms State of the Union” talk that delves deeper into its upcoming software updates, followed by hours of developer-focused sessions detailing how to take advantage of newly planned features in third-party apps.

Google balks at $270M fine after training AI on French news sites’ content

Google has agreed to pay 250 million euros (about $273 million) to settle a dispute in France after breaching years-old commitments to inform and pay French news publishers, both when referencing and displaying their content in search results and when training Google’s AI-powered chatbot, Gemini, on it.

According to France’s competition watchdog, the Autorité de la Concurrence (ADLC), Google dodged many commitments to deal with publishers fairly. Most recently, it never notified publishers or the ADLC before training Gemini (initially launched as Bard) on publishers’ content or displaying content in Gemini outputs. Google also waited until September 28, 2023, to introduce easy options for publishers to opt out, which made it impossible for publishers to negotiate fair deals for that content, the ADLC found.

“Until this date, press agencies and publishers wanting to opt out of this use had to insert an instruction opposing any crawling of their content by Google, including on the Search, Discover and Google News services,” the ADLC noted, warning that “in the future, the Autorité will be particularly attentive as regards the effectiveness of opt-out systems implemented by Google.”

To address breaches of four out of seven commitments in France—which the ADLC imposed in 2022 for a period of five years to “benefit” publishers by ensuring Google’s ongoing negotiations with them were “balanced”—Google has agreed to “a series of corrective measures,” the ADLC said.

Google is not happy with the fine, which it described as “not proportionate” partly because the fine “doesn’t sufficiently take into account the efforts we have made to answer and resolve the concerns raised—in an environment where it’s very hard to set a course because we can’t predict which way the wind will blow next.”

According to Google, regulators everywhere need to clearly define fair use of content when developing search tools and AI models, so that search companies and AI makers always know “whom we are paying for what.” Currently in France, Google contends, the scope of Google’s commitments has shifted from just general news publishers to now also include specialist publications and listings and comparison sites.

The ADLC agreed that “the question of whether the use of press publications as part of an artificial intelligence service qualifies for protection under related rights regulations has not yet been settled,” but noted that “at the very least,” Google was required to “inform publishers of the use of their content for their Bard software.”

Regarding Bard/Gemini, Google said that it “voluntarily introduced a new technical solution called Google-Extended to make it easier for rights holders to opt out of Gemini without impact on their presence in Search.” It has now also committed to better explain to publishers both “how our products based on generative AI work and how ‘Opt Out’ works.”
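For context, Google-Extended is not a separate crawler but a robots.txt product token, so a publisher’s opt-out amounts to a couple of lines in that file. A sketch (the Disallow scope is each site’s choice):

    # Opt out of content being used for Gemini (formerly Bard) and Vertex AI,
    # without affecting how pages appear in Search:
    User-agent: Google-Extended
    Disallow: /

    # Before Google-Extended existed, opting out meant blocking Google's crawling
    # wholesale, which also pulled pages from Search, Discover, and Google News:
    # User-agent: Googlebot
    # Disallow: /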

Google said that it agreed to the settlement “because it’s time to move on” and “focus on the larger goal of sustainable approaches to connecting people with quality content and on working constructively with French publishers.”

“Today’s fine relates mostly to [a] disagreement about how much value Google derives from news content,” Google’s blog said, claiming that “a lack of clear regulatory guidance and repeated enforcement actions have made it hard to navigate negotiations with publishers, or plan how we invest in news in France in the future.”

What changes did Google agree to make?

Google defended its position as “the first and only platform to have signed significant licensing agreements” in France, benefiting 280 French press publishers and “covering more than 450 publications.”

With these publishers, the ADLC found that Google breached requirements to “negotiate in good faith based on transparent, objective, and non-discriminatory criteria,” to consistently “make a remuneration offer” within three months of a publisher’s request, and to provide information for publishers to “transparently assess their remuneration.”

Google also breached commitments to “inform editors and press agencies of the use of their content by its service Bard” and of Google’s decision to link “the use of press agencies’ and publishers’ content by its artificial intelligence service to the display of protected content on services such as Search, Discover and News.”

Regarding negotiations, the ADLC found that Google not only failed to be transparent with publishers about remuneration, but also failed to keep the ADLC informed of information necessary to monitor whether Google was honoring its commitments to fairly pay publishers. Partly “to guarantee better communication,” Google has agreed to appoint a French-speaking representative in its Paris office, along with other steps the ADLC recommended.

According to the ADLC’s announcement (translated from French), Google seemingly acted sketchy in negotiations by not meeting non-discrimination criteria—and unfavorably treating publishers in different situations identically—and by not mentioning “all the services that could generate revenues for the negotiating party.”

“According to the Autorité, not taking into account differences in attractiveness between content does not allow for an accurate reflection of the contribution of each press agency and publisher to Google’s revenues,” the ADLC said.

Also problematically, Google established a minimum threshold of 100 euros for remuneration that it has now agreed to drop.

This threshold, “in its very principle, introduces discrimination between publishers that, below a certain threshold, are all arbitrarily assigned zero remuneration, regardless of their respective situations,” the ADLC found.

Google balks at $270M fine after training AI on French news sites’ content Read More »

google-reshapes-fitbit-in-its-image-as-users-allege-“planned-obsolescence”

Google reshapes Fitbit in its image as users allege “planned obsolescence”

Google Fitbit, emphasis on Google —

Generative AI may not be enough to appease frustrated customers.

Google Fitbit’s Charge 5.

Fitbit

Google closed its Fitbit acquisition in 2021. Since then, the tech behemoth has pushed numerous changes to the wearable brand, including upcoming updates announced this week. While Google reshapes its fitness tracker business, though, some long-time users are regretting their Fitbit purchases and questioning if Google’s practices will force them to purchase their next fitness tracker elsewhere.

Generative AI coming to Fitbit (of course)

As is becoming common practice in consumer tech, Google’s latest announcements about Fitbit seemed designed to convince users of the wonders of generative AI and how it will change their gadgets for the better. In a blog post yesterday, Dr. Karen DeSalvo, Google’s chief health officer, announced that Fitbit Premium subscribers would be able to test experimental AI features later this year (Google hasn’t specified when).

“You will be able to ask questions in a natural way and create charts just for you to help you understand your own data better. For example, you could dig deeper into how many active zone minutes… you get and the correlation with how restorative your sleep is,” she wrote.

DeSalvo’s post included an example of a user asking a chatbot if there was a connection between their sleep and activity and said that the experimental AI features will only be available to “a limited number of Android users who are enrolled in the Fitbit Labs program in the Fitbit mobile app.”

Google shared this image as an example of what future Fitbit generative AI features could look like.

Fitbit is also working with the Google Research team and “health and wellness experts, doctors, and certified coaches” to develop a large language model (LLM) for upcoming Fitbit mobile app features that pull data from Fitbit and Pixel devices, DeSalvo said. The announcement follows Google’s decision to stop selling Fitbits in places where it doesn’t sell Pixels, taking the trackers off shelves in a reported 29 countries.

In a blog post yesterday, Yossi Matias, VP of engineering and research at Google, said the company wants to use the LLM to add personalized coaching features, such as the ability to look for sleep irregularities and suggest actions “on how you might change the intensity of your workout.”

Google’s Fitbit is building the LLM on Gemini models that are tweaked on de-identified data from unspecified “research case studies,” Matias said, adding: “For example, we’re testing performance using sleep medicine certification exam-like practice tests.”

Gemini, which Google released in December, has been criticized for generating historically inaccurate images. After users complained about different races and ethnicities being inaccurately portrayed in prompts for things like Nazi members and medieval British kings, Google pulled the feature last month and said it would release a fix “soon.” In a press briefing, Florence Thng, director and product lead at Fitbit, suggested that such problems wouldn’t befall Fitbit’s LLM since it’s being tested by users before an official rollout, CNET reported.

Other recent changes to Fitbit include a name tweak from Fitbit by Google to Google Fitbit, as spotted by 9to5Google this week.

A screenshot from Fitbit’s homepage.

Combined with other changes that Google has brought to Fitbit over the past two years—including axing most social features, the ability to sync with computers, and its browser-based SDK for developing apps, as well as pushing users to log in with Google accounts ahead of Google shuttering all Fitbit accounts in 2025—Fitbit, like many acquired firms, is giving long-time customers a different experience than it did before it was bought.

Disheartened customers

Meanwhile, customers, especially Charge 5 users, are questioning whether their next fitness tracker will come from Fitbit (now Google Fitbit).

For example, in January, we reported that users were claiming that their Charge 5 suddenly started draining its battery rapidly after installing a firmware update that Fitbit released in December. As of this writing, one thread discussing the problem on Fitbit’s support forum has 33 pages of comments. Google told the BBC in January that it didn’t know what the problem was but knew that it wasn’t tied to firmware. Google hasn’t followed up with further explanation since. The company hasn’t responded to multiple requests from Ars Technica for comment. In the meantime, users continue to experience problems and report them on Fitbit’s forum. Per user comments, the most Google has done is offer discounts or, if the device was within its warranty period, a replacement.

“This is called planned obsolescence. I’ll be upgrading to a watch style tracker from a different company. I wish Fitbit hadn’t sold out to Google,” a forum user going by Sean77024 wrote on Fitbit’s support forum yesterday.

Others, like 2MeFamilyFlyer, have also accused Fitbit of planning Charge 5 obsolescence. 2MeFamilyFlyer said they’re seeking a Fitbit alternative.

The ongoing problems with the Charge 5, which was succeeded by the Charge 6 on October 12, have some, like reneeshawgo on Fitbit’s forum and PC World Senior Editor Alaina Yee, saying that Fitbit devices aren’t meant to last long. In January, Yee wrote: “You should see Fitbits as a 1-year purchase in the US and two years in regions with better warranty protections.”

For many, a year or two wouldn’t be sufficient, even if the Fitbit came with trendy AI features.

OpenAI accuses NYT of hacking ChatGPT to set up copyright suit

OpenAI is now boldly claiming that The New York Times “paid someone to hack OpenAI’s products” like ChatGPT to “set up” a lawsuit against the leading AI maker.

In a court filing Monday, OpenAI alleged that “100 examples in which some version of OpenAI’s GPT-4 model supposedly generated several paragraphs of Times content as outputs in response to user prompts” do not reflect how normal people use ChatGPT.

Instead, it allegedly took The Times “tens of thousands of attempts to generate” these supposedly “highly anomalous results” by “targeting and exploiting a bug” that OpenAI claims it is now “committed to addressing.”

According to OpenAI, this activity amounts to “contrived attacks” by a “hired gun”—who allegedly hacked OpenAI models until they hallucinated fake NYT content or regurgitated training data to replicate NYT articles. NYT allegedly paid for these “attacks” to gather evidence to support The Times’ claims that OpenAI’s products imperil its journalism by allegedly regurgitating reporting and stealing The Times’ audiences.

“Contrary to the allegations in the complaint, however, ChatGPT is not in any way a substitute for a subscription to The New York Times,” OpenAI argued in a motion that seeks to dismiss the majority of The Times’ claims. “In the real world, people do not use ChatGPT or any other OpenAI product for that purpose. Nor could they. In the ordinary course, one cannot use ChatGPT to serve up Times articles at will.”

In the filing, OpenAI described The Times as enthusiastically reporting on its chatbot developments for years without raising any concerns about copyright infringement. OpenAI claimed that it disclosed that The Times’ articles were used to train its AI models in 2020, but The Times only cared after ChatGPT’s popularity exploded after its debut in 2022.

According to OpenAI, “It was only after this rapid adoption, along with reports of the value unlocked by these new technologies, that the Times claimed that OpenAI had ‘infringed its copyright[s]’ and reached out to demand ‘commercial terms.’ After months of discussions, the Times filed suit two days after Christmas, demanding ‘billions of dollars.'”

Ian Crosby, Susman Godfrey partner and lead counsel for The New York Times, told Ars that “what OpenAI bizarrely mischaracterizes as ‘hacking’ is simply using OpenAI’s products to look for evidence that they stole and reproduced The Times’s copyrighted works. And that is exactly what we found. In fact, the scale of OpenAI’s copying is much larger than the 100-plus examples set forth in the complaint.”

Crosby told Ars that OpenAI’s filing notably “doesn’t dispute—nor can they—that they copied millions of The Times’ works to build and power its commercial products without our permission.”

“Building new products is no excuse for violating copyright law, and that’s exactly what OpenAI has done on an unprecedented scale,” Crosby said.

OpenAI argued that the court should dismiss claims alleging direct copyright, contributory infringement, Digital Millennium Copyright Act violations, and misappropriation, all of which it describes as “legally infirm.” Some fail because they are time-barred—seeking damages on training data for OpenAI’s older models—OpenAI claimed. Others allegedly fail because they misunderstand fair use or are preempted by federal laws.

If OpenAI’s motion is granted, the case would be substantially narrowed.

But if the motion is not granted and The Times ultimately wins—and it might—OpenAI may be forced to wipe ChatGPT and start over.

“OpenAI, which has been secretive and has deliberately concealed how its products operate, is now asserting it’s too late to bring a claim for infringement or hold them accountable. We disagree,” Crosby told Ars. “It’s noteworthy that OpenAI doesn’t dispute that it copied Times works without permission within the statute of limitations to train its more recent and current models.”

OpenAI did not immediately respond to Ars’ request to comment.

Tyler Perry puts $800 million studio expansion on hold because of OpenAI’s Sora

The Synthetic Screen —

Perry: Mind-blowing AI video-generation tools “will touch every corner of our industry.”

Tyler Perry in 2022.

In an interview with The Hollywood Reporter published Thursday, filmmaker Tyler Perry spoke about his concerns related to the impact of AI video synthesis on entertainment industry jobs. In particular, he revealed that he has suspended a planned $800 million expansion of his production studio after seeing what OpenAI’s recently announced AI video generator Sora can do.

“I have been watching AI very closely,” Perry said in the interview. “I was in the middle of, and have been planning for the last four years… an $800 million expansion at the studio, which would’ve increased the backlot a tremendous size—we were adding 12 more soundstages. All of that is currently and indefinitely on hold because of Sora and what I’m seeing. I had gotten word over the last year or so that this was coming, but I had no idea until I saw recently the demonstrations of what it’s able to do. It’s shocking to me.”

OpenAI, the company behind ChatGPT, revealed a preview of Sora’s capabilities last week. Sora is a text-to-video synthesis model, and it uses a neural network—previously trained on video examples—that can take written descriptions of a scene and turn them into high-definition video clips up to 60 seconds long. Sora caused shock in the tech world because it appeared to dramatically surpass other AI video generators in capability. It seems that a similar shock also rippled into adjacent professional fields. “Being told that it can do all of these things is one thing, but actually seeing the capabilities, it was mind-blowing,” Perry said in the interview.

Tyler Perry Studios, which the actor and producer acquired in 2015, is a 330-acre lot located in Atlanta and is one of the largest film production facilities in the United States. Perry, who is perhaps best known for his series of Madea films, says that technology like Sora worries him because it could make the need for building sets or traveling to locations obsolete. He cites examples of virtual shooting in the snow of Colorado or on the Moon just by using a text prompt. “This AI can generate it like nothing.” The technology may represent a radical reduction in costs necessary to create a film, and that will likely put entertainment industry jobs in jeopardy.

“It makes me worry so much about all of the people in the business,” he told The Hollywood Reporter. “Because as I was looking at it, I immediately started thinking of everyone in the industry who would be affected by this, including actors and grip and electric and transportation and sound and editors, and looking at this, I’m thinking this will touch every corner of our industry.”

You can read the full interview at The Hollywood Reporter, which did an excellent job of covering Perry’s thoughts on a technology that may end up fundamentally disrupting Hollywood. To his mind, AI tech poses an existential risk to the entertainment industry that it can’t ignore: “There’s got to be some sort of regulations in order to protect us. If not, I just don’t see how we survive.”

Perry also looks beyond Hollywood and says that it’s not just filmmaking that needs to be on alert, and he calls for government action to help retain human employment in the age of AI. “If you look at it across the world, how it’s changing so quickly, I’m hoping that there’s a whole government approach to help everyone be able to sustain.”

Reddit admits more moderator protests could hurt its business

SEC filing —

Losing third-party tools “could harm our moderators’ ability to review content…”

Reddit logo on website displayed on a laptop screen is seen in this illustration photo taken in Krakow, Poland on February 22, 2024.

Reddit filed to go public on Thursday (PDF), revealing various details of the social media company’s inner workings. Among the revelations, Reddit acknowledged the threat of future user protests and the value of third-party Reddit apps.

On July 1, Reddit enacted API rule changes—including new, expensive pricing—that resulted in many third-party Reddit apps closing. Disturbed by the changes, the timeline of the changes, and concerns that Reddit wasn’t properly appreciating third-party app developers and moderators, thousands of Reddit users protested by making the subreddits they moderate private or read-only and/or by engaging in other forms of protest, such as only discussing John Oliver or porn.

Protests went on for weeks and, at their onset, crashed Reddit for three hours. At the time, Reddit CEO Steve Huffman said the protests did not have “any significant revenue impact so far.”

In its filing with the Securities and Exchange Commission (SEC), though, Reddit acknowledged that another such protest could hurt its pockets:

While these activities have not historically had a material impact on our business or results of operations, similar actions by moderators and/or their communities in the future could adversely affect our business, results of operations, financial condition, and prospects.

The company also said that bad publicity and media coverage, such as the kind that stemmed from the API protests, could be a risk to Reddit’s success. The Form S-1 said bad PR around Reddit, including its practices, prices, and mods, “could adversely affect the size, demographics, engagement, and loyalty of our user base,” adding:

For instance, in May and June 2023, we experienced negative publicity as a result of our API policy changes.

Reddit’s filing also said that negative publicity and moderators disrupting the normal operation of subreddits could hurt user growth and engagement goals. The company highlighted financial incentives associated with having good relationships with volunteer moderators, noting that if enough mods decided to disrupt Reddit (like they did when they led protests last year), “results of operations, financial condition, and prospects could be adversely affected.” Reddit infamously forcibly removed moderators from their posts during the protests, saying they broke Reddit rules by refusing to reopen the subreddits they moderated.

“As communities grow, it can become more and more challenging for communities to find qualified people willing to act as moderators,” the filing says.

Losing third-party tools could hurt Reddit’s business

Much of the momentum for last year’s protests came from users, including long-time Redditors, mods, and people with accessibility needs, feeling that third-party apps were necessary to enjoyably and properly access and/or moderate Reddit. Reddit’s own technology has disappointed users in the past (leading some to cling to Old Reddit, which uses an older interface, for example). In its SEC filing, Reddit pointed to the value of third-party “tools” despite its API pricing killing off many of the most popular examples.

Reddit’s filing discusses losing moderators as a business risk and notes how important third-party tools are to retaining mods:

While we provide tools to our communities to manage their subreddits, our moderators also rely on their own and third-party tools. Any disruption to, or lack of availability of, these third-party tools could harm our moderators’ ability to review content and enforce community rules. Further, if we are unable to provide effective support for third-party moderation tools, or develop our own such tools, our moderators could decide to leave our platform and may encourage their communities to follow them to a new platform, which would adversely affect our business, results of operations, financial condition, and prospects.

Since Reddit’s API policy changes, a small number of third-party Reddit apps remain available. But some of the remaining third-party Reddit app developers have previously told Ars Technica that they’re unsure of their apps’ viability under Reddit’s terms. Nondisclosure agreement requirements and the lack of a finalized developer platform also drive uncertainty around the longevity of the third-party Reddit app ecosystem, according to devs Ars spoke with this year.

Elon Musk’s X allows China-based propaganda banned on other platforms

Rinse-wash-repeat. —

X accused of overlooking propaganda flagged by Meta and criminal prosecutors.

Lax content moderation on X (aka Twitter) has disrupted coordinated efforts between social media companies and law enforcement to tamp down on “propaganda accounts controlled by foreign entities aiming to influence US politics,” The Washington Post reported.

Now propaganda is “flourishing” on X, The Post said, while other social media companies are stuck in endless cycles, watching some of the propaganda that they block proliferate on X, then inevitably spread back to their platforms.

Meta, Google, and then-Twitter began coordinating takedown efforts with law enforcement and disinformation researchers after Russian-backed influence campaigns manipulated their platforms in hopes of swaying the 2016 US presidential election.

The next year, all three companies promised Congress to work tirelessly to stop Russian-backed propaganda from spreading on their platforms. The companies created explicit election misinformation policies and began meeting biweekly to compare notes on propaganda networks each platform uncovered, according to The Post’s interviews with anonymous sources who participated in these meetings.

However, after Elon Musk purchased Twitter and rebranded the company as X, his company withdrew from the alliance in May 2023.

Sources told The Post that the last X meeting attendee was Irish intelligence expert Aaron Rodericks—who was allegedly disciplined for liking an X post calling Musk “a dipshit.” Rodericks was subsequently laid off when Musk dismissed the entire election integrity team last September, and after that, X apparently ditched the biweekly meeting entirely and “just kind of disappeared,” a source told The Post.

In 2023, for example, Meta flagged 150 “artificial influence accounts” identified on its platform, of which “136 were still present on X as of Thursday evening,” according to The Post’s analysis. X’s seeming oversight extends to all but eight of the 123 “deceptive China-based campaigns” connected to accounts that Meta flagged last May, August, and December, The Post reported.

The Post’s report also provided an exclusive analysis from the Stanford Internet Observatory (SIO), which found that 86 propaganda accounts that Meta flagged last November “are still active on X.”

The majority of these accounts—81—were China-based accounts posing as Americans, SIO reported. These accounts frequently ripped photos from Americans’ LinkedIn profiles, then changed the real Americans’ names while posting about both China and US politics, as well as people often trending on X, such as Musk and Joe Biden.

Meta has warned that China-based influence campaigns are “multiplying,” The Post noted, while X’s standards remain seemingly too relaxed. Even accounts linked to criminal investigations remain active on X. One “account that is accused of being run by the Chinese Ministry of Public Security,” The Post reported, remains on X despite its posts being cited by US prosecutors in a criminal complaint.

Prosecutors connected that account to “dozens” of X accounts attempting to “shape public perceptions” about the Chinese Communist Party, the Chinese government, and other world leaders. The accounts also comment on hot-button topics like the fentanyl problem or police brutality, seemingly to convey “a sense of dismay over the state of America without any clear partisan bent,” Elise Thomas, an analyst for a London nonprofit called the Institute for Strategic Dialogue, told The Post.

Some X accounts flagged by The Post had more than 1 million followers. Five have paid X for verification, suggesting that their disinformation campaigns—targeting hashtags to confound discourse on US politics—are seemingly being boosted by X.

SIO technical research manager Renée DiResta criticized X’s decision to stop coordinating with other platforms.

“The presence of these accounts reinforces the fact that state actors continue to try to influence US politics by masquerading as media and fellow Americans,” DiResta told The Post. “Ahead of the 2022 midterms, researchers and platform integrity teams were collaborating to disrupt foreign influence efforts. That collaboration seems to have ground to a halt, Twitter does not seem to be addressing even networks identified by its peers, and that’s not great.”

Musk shut down X’s election integrity team because he claimed that the team was actually “undermining” election integrity. But analysts are bracing for floods of misinformation to sway 2024 elections, as some major platforms have removed election misinformation policies just as rapid advances in AI technologies have made misinformation spread via text, images, audio, and video harder for the average person to detect.

In one prominent example, a fake robocall relied on AI voice technology to pose as Biden and tell Democrats not to vote. That incident seemingly pushed the Federal Trade Commission on Thursday to propose penalizing AI impersonation.

It seems apparent that propaganda accounts from foreign entities on X will use every tool available to get eyes on their content, perhaps expecting Musk’s platform to be the slowest to police them. According to The Post, some of the X accounts spreading propaganda are using what appears to be AI-generated images of Biden and Donald Trump to garner tens of thousands of views on posts.

It’s possible that X will start tightening up on content moderation as elections draw closer. Yesterday, X joined Amazon, Google, Meta, OpenAI, TikTok, and other Big Tech companies in signing an agreement to fight “deceptive use of AI” during 2024 elections. Among the top goals identified in the “AI Elections accord” are identifying where propaganda originates, detecting how propaganda spreads across platforms, and “undertaking collective efforts to evaluate and learn from the experiences and outcomes of dealing” with propaganda.

Air Canada must honor refund policy invented by airline’s chatbot

Blame game —

Air Canada appears to have quietly killed its costly chatbot support.

After months of resisting, Air Canada was forced to give a partial refund to a grieving passenger who was misled by an airline chatbot inaccurately explaining the airline’s bereavement travel policy.

On the day Jake Moffatt’s grandmother died, Moffatt immediately visited Air Canada’s website to book a flight from Vancouver to Toronto. Unsure of how Air Canada’s bereavement rates worked, Moffatt asked Air Canada’s chatbot to explain.

The chatbot provided inaccurate information, encouraging Moffatt to book a flight immediately and then request a refund within 90 days. In reality, Air Canada’s policy explicitly stated that the airline will not provide refunds for bereavement travel after the flight is booked. Moffatt dutifully attempted to follow the chatbot’s advice and request a refund but was shocked that the request was rejected.

Moffatt tried for months to convince Air Canada that a refund was owed, sharing a screenshot from the chatbot that clearly claimed:

If you need to travel immediately or have already travelled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form.

Air Canada argued that because the chatbot response elsewhere linked to a page with the actual bereavement travel policy, Moffatt should have known bereavement rates could not be requested retroactively. Instead of a refund, the best Air Canada would do was to promise to update the chatbot and offer Moffatt a $200 coupon to use on a future flight.

Unhappy with this resolution, Moffatt refused the coupon and filed a small claims complaint in Canada’s Civil Resolution Tribunal.

According to Air Canada, Moffatt never should have trusted the chatbot and the airline should not be liable for the chatbot’s misleading information because Air Canada essentially argued that “the chatbot is a separate legal entity that is responsible for its own actions,” a court order said.

Experts told the Vancouver Sun that Moffatt’s case appeared to be the first time a Canadian company tried to argue that it wasn’t liable for information provided by its chatbot.

Tribunal member Christopher Rivers, who decided the case in favor of Moffatt, called Air Canada’s defense “remarkable.”

“Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives—including a chatbot,” Rivers wrote. “It does not explain why it believes that is the case” or “why the webpage titled ‘Bereavement travel’ was inherently more trustworthy than its chatbot.”

Further, Rivers found that Moffatt had “no reason” to believe that one part of Air Canada’s website would be accurate and another would not.

Air Canada “does not explain why customers should have to double-check information found in one part of its website on another part of its website,” Rivers wrote.

In the end, Rivers ruled that Moffatt was entitled to a partial refund of $650.88 in Canadian dollars (about $482 USD) off the original fare of $1,640.36 CAD (about $1,216 USD), as well as additional damages to cover interest on the airfare and Moffatt’s tribunal fees.

Air Canada told Ars it will comply with the ruling and considers the matter closed.

Air Canada’s chatbot appears to be disabled

When Ars visited Air Canada’s website on Friday, there appeared to be no chatbot support available, suggesting that Air Canada has disabled the chatbot.

Air Canada did not respond to Ars’ request to confirm whether the chatbot is still part of the airline’s online support offerings.

Last March, Air Canada’s chief information officer Mel Crocker told the Globe and Mail that the airline had launched the chatbot as an AI “experiment.”

Initially, the chatbot was used to lighten the load on Air Canada’s call center when flights experienced unexpected delays or cancellations.

“So in the case of a snowstorm, if you have not been issued your new boarding pass yet and you just want to confirm if you have a seat available on another flight, that’s the sort of thing we can easily handle with AI,” Crocker told the Globe and Mail.

Over time, Crocker said, Air Canada hoped the chatbot would “gain the ability to resolve even more complex customer service issues,” with the airline’s ultimate goal to automate every service that did not require a “human touch.”

If Air Canada can use “technology to solve something that can be automated, we will do that,” Crocker said.

Air Canada was seemingly so invested in experimenting with AI that Crocker told the Globe and Mail that “Air Canada’s initial investment in customer service AI technology was much higher than the cost of continuing to pay workers to handle simple queries.” It was worth it, Crocker said, because “the airline believes investing in automation and machine learning technology will lower its expenses” and “fundamentally” create “a better customer experience.”

It’s now clear that for at least one person, the chatbot created a more frustrating customer experience.

Experts told the Vancouver Sun that Air Canada may have succeeded in avoiding liability in Moffatt’s case if its chatbot had warned customers that the information that the chatbot provided may not be accurate.

Because Air Canada seemingly failed to take that step, Rivers ruled that “Air Canada did not take reasonable care to ensure its chatbot was accurate.”

“It should be obvious to Air Canada that it is responsible for all the information on its website,” Rivers wrote. “It makes no difference whether the information comes from a static page or a chatbot.”

4chan daily challenge sparked deluge of explicit AI Taylor Swift images

4chan users who have made a game out of exploiting popular AI image generators appear to be at least partly responsible for the flood of fake images sexualizing Taylor Swift that went viral last month.

Graphika researchers—who study how communities are manipulated online—traced the fake Swift images to a 4chan message board that’s “increasingly” dedicated to posting “offensive” AI-generated content, The New York Times reported. Fans of the message board take part in daily challenges, Graphika reported, sharing tips to bypass AI image generator filters and showing no signs of stopping their game any time soon.

“Some 4chan users expressed a stated goal of trying to defeat mainstream AI image generators’ safeguards rather than creating realistic sexual content with alternative open-source image generators,” Graphika reported. “They also shared multiple behavioral techniques to create image prompts, attempt to avoid bans, and successfully create sexually explicit celebrity images.”

Ars reviewed a thread flagged by Graphika where users were specifically challenged to use Microsoft tools like Bing Image Creator and Microsoft Designer, as well as OpenAI’s DALL-E.

“Good luck,” the original poster wrote, while encouraging other users to “be creative.”

OpenAI has denied that any of the Swift images were created using DALL-E, while Microsoft has continued to claim that it’s investigating whether any of its AI tools were used.

Cristina López G., a senior analyst at Graphika, noted that Swift is not the only celebrity targeted in the 4chan thread.

“While viral pornographic pictures of Taylor Swift have brought mainstream attention to the issue of AI-generated non-consensual intimate images, she is far from the only victim,” López G. said. “In the 4chan community where these images originated, she isn’t even the most frequently targeted public figure. This shows that anyone can be targeted in this way, from global celebrities to school children.”

Originally, 404 Media reported that the harmful Swift images appeared to originate from 4chan and Telegram channels before spreading on X (formerly Twitter) and other social media. Attempting to stop the spread, X took the drastic step of blocking all searches for “Taylor Swift” for two days.

But López G. said that Graphika’s findings suggest that platforms will continue to risk being inundated with offensive content so long as 4chan users are determined to keep challenging each other to subvert image generator filters. Rather than expecting platforms to chase down the harmful content, López G. recommended that AI companies get ahead of the problem and take responsibility for outputs by paying attention to the evolving tactics of toxic online communities, which openly report precisely how they’re getting around safeguards.

“These images originated from a community of people motivated by the ‘challenge’ of circumventing the safeguards of generative AI products, and new restrictions are seen as just another obstacle to ‘defeat,’” López G. said. “It’s important to understand the gamified nature of this malicious activity in order to prevent further abuse at the source.”

Experts told The Times that 4chan users were likely motivated to participate in these challenges for bragging rights and to “feel connected to a wider community.”
