Generative AI

Sky voice actor says nobody ever compared her to ScarJo before OpenAI drama

Scarlett Johansson attends the Golden Heart Awards in 2023.

OpenAI is sticking to its story that it never intended to copy Scarlett Johansson’s voice when seeking an actor for ChatGPT’s “Sky” voice mode.

The company provided The Washington Post with documents and recordings clearly meant to support OpenAI CEO Sam Altman’s defense against Johansson’s claims that Sky was made to sound “eerily similar” to her critically acclaimed voice acting performance in the sci-fi film Her.

Johansson has alleged that OpenAI hired a soundalike to steal her likeness and confirmed that she declined to provide the Sky voice. Experts have said that Johansson would have a strong case should she decide to sue OpenAI for violating her right of publicity, which gives the actress exclusive rights to the commercial use of her likeness.

In OpenAI’s defense, The Post reported that the company’s voice casting call flier did not seek a “clone of actress Scarlett Johansson,” and initial voice test recordings of the unnamed actress hired to voice Sky showed that her “natural voice sounds identical to the AI-generated Sky voice.” Because of this, OpenAI has argued that “Sky’s voice is not an imitation of Scarlett Johansson.”

What’s more, an agent for the unnamed Sky actress who was cast—both granted anonymity to protect her client’s safety—confirmed to The Post that her client said she was never directed to imitate either Johansson or her character in Her. She simply used her own voice and got the gig.

The agent also provided a statement from her client that claimed that she had never been compared to Johansson before the backlash started.

This all “feels personal,” the voice actress said, “being that it’s just my natural voice and I’ve never been compared to her by the people who do know me closely.”

However, OpenAI apparently reached out to Johansson after casting the Sky voice actress. During outreach last September and again this month, OpenAI seemed to want to substitute the Sky voice actress’s voice with Johansson’s voice—which is ironically what happened when Johansson got cast to replace the original actress hired to voice her character in Her.

Altman has clarified that timeline in a statement provided to Ars that emphasized that the company “never intended” Sky to sound like Johansson. Instead, OpenAI tried to snag Johansson to voice the part after realizing—seemingly just as Her director Spike Jonze did—that the voice could potentially resonate with more people if Johansson did it.

“We are sorry to Ms. Johansson that we didn’t communicate better,” Altman’s statement said.

Johansson has not yet made any public indications that she intends to sue OpenAI over this supposed miscommunication. But if she did, legal experts told The Post and Reuters that her case would be strong because of legal precedent set in high-profile lawsuits raised by singers Bette Midler and Tom Waits blocking companies from misappropriating their voices.

Why Johansson could win if she sued OpenAI

In 1988, Bette Midler sued Ford Motor Company for hiring a soundalike to perform Midler’s song “Do You Want to Dance?” in a commercial intended to appeal to “young yuppies” by referencing popular songs from their college days. Midler had declined to do the commercial and accused Ford of exploiting her voice to endorse its product without her consent.

This groundbreaking case established that a distinctive voice like Midler’s cannot be deliberately imitated to sell a product. It did not matter that the singer in the commercial had used her natural singing voice, because “a number of people” told Midler that the performance “sounded exactly” like her.

Midler’s case set a powerful precedent preventing companies from appropriating parts of performers’ identities—essentially stopping anyone from stealing a well-known voice that otherwise could not be bought.

“A voice is as distinctive and personal as a face,” the court ruled, concluding that “when a distinctive voice of a professional singer is widely known and is deliberately imitated in order to sell a product, the sellers have appropriated what is not theirs.”

Like in Midler’s case, Johansson could argue that plenty of people think that the Sky voice sounds like her and that OpenAI’s product might be more popular if it had a Her-like voice mode. Comics on popular late-night shows joked about the similarity, including Johansson’s husband, Saturday Night Live comedian Colin Jost. And other people close to Johansson agreed that Sky sounded like her, Johansson has said.

Johansson’s case seems to differ from Midler’s primarily because of the casting timeline that OpenAI is working hard to defend.

OpenAI seems to think that because Johansson was offered the gig after the Sky voice actor was cast, she has no case to claim that the company hired the other actor after she declined.

The timeline may not matter as much as OpenAI may think, though. In the 1990s, Tom Waits cited Midler’s case when he won a $2.6 million lawsuit after Frito-Lay hired a Waits impersonator to perform a song that “echoed the rhyming word play” of a Waits song in a Doritos commercial. Waits won his suit even though Frito-Lay never attempted to hire the singer before casting the soundalike.

Robert F. Kennedy Jr. sues Meta, citing chatbot’s reply as evidence of shadowban

Screenshot from the documentary Who Is Bobby Kennedy?

In a lawsuit that seems determined to ignore that Section 230 exists, Robert F. Kennedy Jr. has sued Meta for allegedly shadowbanning his million-dollar documentary, Who Is Bobby Kennedy? and preventing his supporters from advocating for his presidential campaign.

According to Kennedy, Meta is colluding with the Biden administration to sway the 2024 presidential election by suppressing Kennedy’s documentary and making it harder to support Kennedy’s candidacy. This allegedly has caused “substantial donation losses,” while also violating the free speech rights of Kennedy, his supporters, and his film’s production company, AV24.

Meta had initially restricted the documentary on Facebook and Instagram but later fixed the issue after discovering that the film was mistakenly flagged by the platforms’ automated spam filters.

But Kennedy’s complaint claimed that Meta is still “brazenly censoring speech” by “continuing to throttle, de-boost, demote, and shadowban the film.” In an exhibit, Kennedy’s lawyers attached screenshots representing “hundreds” of Facebook and Instagram users whom Meta allegedly sent threats, intimidated, and sanctioned after they shared the documentary.

Some of these users remain suspended on Meta platforms, the complaint alleged. Others whose temporary suspensions have been lifted claimed that their posts are still being throttled, though, and Kennedy’s lawyers earnestly insisted that an exchange with Meta’s chatbot proves it.

Two days after the documentary’s release, Kennedy’s team apparently asked the Meta AI assistant, “When users post the link whoisbobbykennedy.com, can their followers see the post in their feeds?”

“I can tell you that the link is currently restricted by Meta,” the chatbot answered.

Chatbots, of course, are notoriously inaccurate sources of information, and Meta AI’s terms of service note this. In a section labeled “accuracy,” Meta warns that chatbot responses “may not reflect accurate, complete, or current information” and should always be verified.

Perhaps more significantly, there is little reason to think that Meta’s chatbot would have access to information about internal content moderation decisions.

Techdirt’s Mike Masnick mocked Kennedy’s reliance on the chatbot in the case. He noted that Kennedy seemed to have no evidence of the alleged shadow-banning, while there’s plenty of evidence that Meta’s spam filters accidentally remove non-violative content all the time.

Meta’s chatbot is “just a probabilistic stochastic parrot, repeating a probable sounding answer to users’ questions,” Masnick wrote. “And these idiots think it’s meaningful evidence. This is beyond embarrassing.”

Neither Meta nor Kennedy’s lawyer, Jed Rubenfeld, responded to Ars’ request to comment.

OpenAI’s flawed plan to flag deepfakes ahead of 2024 elections

As the US moves toward criminalizing deepfakes—deceptive AI-generated audio, images, and videos that are increasingly hard to discern from authentic content online—tech companies have rushed to roll out tools to help everyone better detect AI content.

But efforts so far have been imperfect, and experts fear that social media platforms may not be ready to handle the ensuing AI chaos during major global elections in 2024—despite tech giants committing to making tools specifically to combat AI-fueled election disinformation. The best AI detection remains observant humans, who, by paying close attention to deepfakes, can pick up on flaws like AI-generated people with extra fingers or AI voices that speak without pausing for a breath.

Among the splashiest tools announced this week, OpenAI shared details today about a new AI image detection classifier that it claims can detect about 98 percent of AI outputs from its own sophisticated image generator, DALL-E 3. It also “currently flags approximately 5 to 10 percent of images generated by other AI models,” OpenAI’s blog said.

According to OpenAI, the classifier provides a binary “true/false” response “indicating the likelihood of the image being AI-generated by DALL·E 3.” A screenshot of the tool shows how it can also be used to display a straightforward content summary confirming that “this content was generated with an AI tool” and includes fields ideally flagging the “app or device” and AI tool used.
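OpenAI hasn’t published the classifier’s internals, but the basic shape of a tool like this is familiar: a model emits a likelihood score, and the tool collapses it into the binary “true/false” answer shown to the user. A minimal sketch under assumptions—the `classify` wrapper and the 0.5 cutoff are illustrative, not OpenAI’s published implementation:

```python
# Hypothetical sketch: collapsing a detector's likelihood score into a
# binary "true/false" verdict. The threshold value is an assumption for
# illustration; OpenAI hasn't disclosed how its tool picks a cutoff.

def classify(likelihood: float, threshold: float = 0.5) -> str:
    """Map a model's likelihood score to a binary verdict string."""
    return "true" if likelihood >= threshold else "false"

print(classify(0.97))  # true  (image very likely AI-generated)
print(classify(0.08))  # false
```

The interesting tradeoff is where the cutoff sits: a high threshold yields fewer false accusations of AI generation but misses more deepfakes, which is presumably why OpenAI reports very different hit rates for DALL-E 3 images versus other generators.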

To develop the tool, OpenAI spent months adding tamper-resistant metadata to “all images created and edited by DALL·E 3” that “can be used to prove the content comes” from “a particular source.” The detector reads this metadata to accurately flag DALL-E 3 images as fake.

That metadata follows “a widely used standard for digital content certification” set by the Coalition for Content Provenance and Authenticity (C2PA), often likened to a nutrition label. And reinforcing that standard has become “an important aspect” of OpenAI’s approach to AI detection beyond DALL-E 3, OpenAI said. When OpenAI broadly launches its video generator, Sora, C2PA metadata will be integrated into that tool as well, OpenAI said.

Of course, this solution is not comprehensive because that metadata could always be removed, and “people can still create deceptive content without this information (or can remove it),” OpenAI said, “but they cannot easily fake or alter this information, making it an important resource to build trust.”
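To make the metadata approach (and its fragility) concrete, here is a minimal sketch of a presence check. C2PA manifests are embedded in JUMBF boxes labeled “c2pa,” so a naive scan for those bytes can suggest a manifest exists—but this is an assumption-laden heuristic, not OpenAI’s detector; a real verifier parses the box structure and cryptographically validates the manifest with a full C2PA implementation:

```python
# Naive heuristic: C2PA manifests live in JUMBF boxes whose payload
# includes the ASCII label "c2pa". Scanning for that byte sequence only
# hints that a manifest *might* be present; it proves nothing about
# validity, and stripping these bytes removes the signal entirely.

def appears_to_have_c2pa_manifest(data: bytes) -> bool:
    """Return True if the raw bytes contain a 'c2pa' JUMBF label."""
    return b"c2pa" in data

# Fabricated blobs standing in for image bytes (not real JPEGs):
with_manifest = b"\xff\xd8...jumb...c2pa...\xff\xd9"
without_manifest = b"\xff\xd8...plain jpeg bytes...\xff\xd9"

print(appears_to_have_c2pa_manifest(with_manifest))     # True
print(appears_to_have_c2pa_manifest(without_manifest))  # False
```

The second call illustrates OpenAI’s own caveat: an image that was simply re-encoded or had its metadata stripped looks indistinguishable from one that never carried a manifest.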

Because OpenAI is all in on C2PA, the AI leader announced today that it would join the C2PA steering committee to help drive broader adoption of the standard. OpenAI will also launch a $2 million fund with Microsoft to support broader “AI education and understanding,” seemingly partly in the hopes that the more people understand about the importance of AI detection, the less likely they will be to remove this metadata.

“As adoption of the standard increases, this information can accompany content through its lifecycle of sharing, modification, and reuse,” OpenAI said. “Over time, we believe this kind of metadata will be something people come to expect, filling a crucial gap in digital content authenticity practices.”

OpenAI joining the committee “marks a significant milestone for the C2PA and will help advance the coalition’s mission to increase transparency around digital media as AI-generated content becomes more prevalent,” C2PA said in a blog.

Tech brands are forcing AI into your gadgets—whether you asked for it or not

Tech brands love hollering about the purported thrills of AI these days.

Logitech announced a new mouse last week. A company rep reached out to inform Ars of Logitech’s “newest wireless mouse.” The gadget’s product page reads the same as of this writing.

I’ve had good experience with Logitech mice, especially wireless ones, one of which I’m using now. So I was keen to learn what Logitech might have done to improve on its previous wireless mouse designs. A quieter click? A new shape to better accommodate my overworked right hand? Multiple onboard profiles in a business-ready design?

I was disappointed to learn that the most distinct feature of the Logitech Signature AI Edition M750 is a button located south of the scroll wheel. This button is preprogrammed to launch the ChatGPT prompt builder, which Logitech recently added to its peripherals configuration app Options+.

That’s pretty much it.

Beyond that, the M750 looks just like the Logitech Signature M650, which came out in January 2022. Also, the new mouse’s forward button (on the left side of the mouse) is preprogrammed to launch Windows or macOS dictation, and the back button opens ChatGPT within Options+. As of this writing, the new mouse’s MSRP is $10 higher ($50) than the M650’s.

  • The new M750 (pictured) is 4.26×2.4×1.52 inches and 3.57 ounces.

    Logitech

  • The M650 (pictured) comes in 3 sizes. The medium size is 4.26×2.4×1.52 inches and 3.58 ounces.

    Logitech

I asked Logitech about the M750 appearing to be the M650 but with an extra button, and a spokesperson responded by saying:

M750 is indeed not the same mouse as M650. It has an extra button that has been preprogrammed to trigger the Logi AI Prompt Builder once the user installs Logi Options+ app. Without Options+, the button does DPI toggle between 1,000 and 1,600 DPI.

However, a reprogrammable button south of a mouse’s scroll wheel that can be set to launch an app or toggle DPI out of the box is pretty common, including among Logitech mice. Logitech’s rep further claimed to me that the two mice use different electronic components, which Logitech refers to as the mouse’s platform. Logitech can reuse platforms for different models, the spokesperson said.

Logitech’s rep declined to comment on why the M650 didn’t have a button south of its scroll wheel. Price is a potential reason, but Logitech also sells cheaper mice with this feature.

Still, the minimal differences between the two suggest that the M750 isn’t worth a whole product release. I suspect that if it weren’t for Logitech’s trendy new software feature, the M750 wouldn’t have been promoted as a new product.

The M750 also raises the question of how many computer input devices need to be equipped with some sort of buzzy, generative AI-related feature.

Logitech’s ChatGPT prompt builder

Logitech’s much bigger release last week wasn’t a peripheral but an addition to its Options+ app. You don’t need the “new” M750 mouse to use Logitech’s AI Prompt Builder; I was able to program my MX Master 3S to launch it. Several Logitech mice and keyboards support AI Prompt Builder.

When you press a button that launches the prompt builder, an Options+ window appears. There, you can input text that Options+ will use to create a ChatGPT-appropriate prompt based on your needs:

A Logitech-provided image depicting its AI Prompt Builder software feature.

Logitech

After you make your choices, another window opens with ChatGPT’s response. Logitech said the prompt builder requires a ChatGPT account, but I was able to use GPT-3.5 without entering one (the feature can also work with GPT-4).

The typical Arsian probably doesn’t need help creating a ChatGPT prompt, and Logitech’s new capability doesn’t work with any other chatbots. The prompt builder could appeal to less technically savvy people who want some handholding for their early ChatGPT experiences. Even so, I doubt that people with only an elementary understanding of generative AI need instant access to ChatGPT.

The point, though, is instant access to ChatGPT capabilities, something Logitech argues is worthwhile for its professional users. Some Logitech customers seem to disagree, though, especially since the AI Prompt Builder means that Options+ consumes even more resources in the background.

But Logitech isn’t the only gadget company eager to tie one-touch AI access to a hardware button.

Pinching your earbuds to talk to ChatGPT

Similarly to Logitech, Nothing is trying to give its customers quick access to ChatGPT; in this case, access comes by pinching the device. This month, Nothing announced that it “integrated Nothing earbuds and Nothing OS with ChatGPT to offer users instant access to knowledge directly from the devices they use most, earbuds and smartphones.” The feature requires the latest Nothing OS and a Nothing phone with ChatGPT installed. ChatGPT gestures work with Nothing’s Phone (2) and the Nothing Ear and Ear (a) earbuds, but Nothing plans to expand to additional phones via software updates.

Nothing's Ear and Ear (a) earbuds.

Nothing

Nothing also said it would embed “system-level entry points” to ChatGPT, like screenshot sharing and “Nothing-styled widgets,” to Nothing smartphone OSes.

A peek at setting up ChatGPT integration on the Nothing X app.

Nothing’s ChatGPT integration may be a bit less intrusive than Logitech’s since users who don’t have ChatGPT on their phones won’t be affected. But, again, you may wonder how many people asked for this feature and how reliably it will function.

Amazon virtually kills efforts to develop Alexa Skills, disappointing dozens

disincentives —

Most devs would need to pay out of pocket to host Alexa apps after June.

The 4th-gen Amazon Echo Dot smart speaker.

Amazon

Alexa hasn’t worked out the way Amazon originally planned.

There was a time when it thought that Alexa would yield a robust ecosystem of apps, or Alexa Skills, that would make the voice assistant an integral part of users’ lives. Amazon envisioned tens of thousands of software developers building valued abilities for Alexa that would grow the voice assistant’s popularity—and help Amazon make some money.

But about seven years after launching a rewards program to encourage developers to build Skills, Alexa’s most popular abilities are still the basic ones, like checking the weather. And on June 30, Amazon will stop giving out the monthly Amazon Web Services credits that have made it free for third-party developers to build and host Alexa Skills. The company also recently told devs that its Alexa Developer Rewards program was ending, effectively removing third-party devs’ remaining incentive to build for Alexa.

Death knell for third-party Alexa apps

The news has left dozens of Alexa Skills developers wondering if they have a future with Alexa, especially as Amazon preps a generative AI and subscription-based version of the assistant. “Dozens” may sound like a dig at Alexa’s ecosystem, but it’s an estimate from Skills developers Mark Tucker and Allen Firstenberg, who agreed in a recent podcast that “dozens” of third-party devs were weighing whether it’s still worthwhile to develop Alexa Skills. The figure wasn’t stated as a hard fact or confirmed by Amazon; rather, it seemed like a rough, quick estimate based on the developers’ familiarity with the Skills community. But with such minimal interest and money associated with Skills, dozens isn’t an implausible figure either.

Amazon admitted that there’s little interest in its Skills incentives programs. Bloomberg reported that “fewer than 1 percent of developers were using the soon-to-end programs,” per Amazon spokesperson Lauren Raemhild.

“Today, with over 160,000 skills available for customers and a well-established Alexa developer community, these programs have run their course, and we decided to sunset them,” she told the publication.

The writing on the wall, though, is that Amazon doesn’t have the incentive or money to grow the Alexa app ecosystem it once imagined. Voice assistants largely became money pits, and the Alexa division has endured recent layoffs as it fights for survival and relevance. Meanwhile, Google Assistant stopped using third-party apps in 2022.

“Many developers are now going to need to make some tough decisions about maintaining existing or creating future experiences on Alexa,” Tucker said via a LinkedIn post.

Alexa Skills criticized as “useless”

As of this writing, the top Alexa skills, in order, are: Jeopardy, Are You Smarter Than a 5th Grader?, Who Wants to Be a Millionaire?, and Calm. That’s not exactly a futuristic list of must-have technological feats. For years, people have wondered when the “killer app” would come to catapult Alexa’s popularity. But now it seems like Alexa’s only hope at that killer use case is generative AI (a gamble filled with its own obstacles).

But like Amazon, third-party developers found it hard to make money off Skills, with a rare few pointing to making thousands of dollars at most and the vast majority not making anything.

“If you can’t make money off it, no one’s going to seriously engage,” Joseph “Jo” Jaquinta, a developer who had made over 12 Skills, told CNET in 2017.

By 2018, Amazon had paid developers millions to grow Alexa Skills. But by 2020, Amazon reduced the amount of money it paid out to third-party developers, an anonymous source told Bloomberg. The source noted that the apps made by paid developers weren’t making the company much money. Come 2024, the most desirable things you can make Alexa do remain basic tasks, like playing a song and, apparently, playing trivia games.

Amazon hasn’t said it’s ending Skills. That would seem premature considering that its Alexa chatbot isn’t expected until June. Developers can still make money off Skills with in-app purchases, but the incentive is minimal.

“Developers like you have and will play a critical role in the success of Alexa, and we appreciate your continued engagement,” Amazon’s notice to devs said, per Bloomberg.

We’ll see how “critical” Amazon treats those remaining developers once its generative AI chatbot is ready.

US lawmaker proposes a public database of all AI training material

Who’s got the receipts? —

Proposed law would require more transparency from AI companies.

Amid a flurry of lawsuits over AI models’ training data, US Representative Adam Schiff (D-Calif.) has introduced a bill that would require AI companies to disclose exactly which copyrighted works are included in datasets training AI systems.

The Generative AI Disclosure Act “would require a notice to be submitted to the Register of Copyrights prior to the release of a new generative AI system with regard to all copyrighted works used in building or altering the training dataset for that system,” Schiff said in a press release.

The bill is retroactive and would apply to all AI systems available today, as well as to all AI systems to come. It would take effect 180 days after it’s enacted, requiring anyone who creates or alters a training set not only to list works referenced by the dataset, but also to provide a URL to the dataset no later than 30 days before the AI system is released to the public. That URL would presumably give creators a way to double-check whether their materials have been used and to seek any credit or compensation available before the AI tools are in use.

All notices would be kept in a publicly available online database.

Schiff described the act as championing “innovation while safeguarding the rights and contributions of creators, ensuring they are aware when their work contributes to AI training datasets.”

“This is about respecting creativity in the age of AI and marrying technological progress with fairness,” Schiff said.

Currently, creators who don’t have access to training datasets rely on AI models’ outputs to figure out if their copyrighted works may have been used in training. The New York Times, for example, identified training data by prompting ChatGPT to produce excerpts of specific articles, a tactic that OpenAI has curiously described as “hacking.”

Under Schiff’s law, The New York Times would need to consult the database to ID all articles used to train ChatGPT or any other AI system.

Any AI maker who violates the act would risk a “civil penalty in an amount not less than $5,000,” the proposed bill said.

At a hearing on artificial intelligence and intellectual property, Rep. Darrell Issa (R-Calif.)—who chairs the House Judiciary Subcommittee on Courts, Intellectual Property, and the Internet—told Schiff that his subcommittee would consider the “thoughtful” bill.

Schiff told the subcommittee that the bill is “only a first step” toward “ensuring that at a minimum” creators are “aware of when their work contributes to AI training datasets,” saying that he would “welcome the opportunity to work with members of the subcommittee” on advancing the bill.

“The rapid development of generative AI technologies has outpaced existing copyright laws, which has led to widespread use of creative content to train generative AI models without consent or compensation,” Schiff warned at the hearing.

In Schiff’s press release, Meredith Stiehm, president of the Writers Guild of America West, joined leaders from other creative groups celebrating the bill as an “important first step” for rightsholders.

“Greater transparency and guardrails around AI are necessary to protect writers and other creators” and address “the unprecedented and unauthorized use of copyrighted materials to train generative AI systems,” Stiehm said.

Until the thorniest AI copyright questions are settled, Ken Doroshow, a chief legal officer for the Recording Industry Association of America, suggested that Schiff’s bill filled an important gap by introducing “comprehensive and transparent recordkeeping” that would provide “one of the most fundamental building blocks of effective enforcement of creators’ rights.”

A senior adviser for the Human Artistry Campaign, Moiya McTier, went further, celebrating the bill as stopping AI companies from “exploiting” artists and creators.

“AI companies should stop hiding the ball when they copy creative works into AI systems and embrace clear rules of the road for recordkeeping that create a level and transparent playing field for the development and licensing of genuinely innovative applications and tools,” McTier said.

AI copyright guidance coming soon

While courts weigh copyright questions raised by artists, book authors, and newspapers, the US Copyright Office announced in March that it would be issuing guidance later this year, but the office does not seem to be prioritizing questions on AI training.

Instead, the Copyright Office will focus first on issuing guidance on deepfakes and AI outputs. This spring, the office will release a report “analyzing the impact of AI on copyright” of “digital replicas, or the use of AI to digitally replicate individuals’ appearances, voices, or other aspects of their identities.” Over the summer, another report will focus on “the copyrightability of works incorporating AI-generated material.”

Regarding “the topic of training AI models on copyrighted works as well as any licensing considerations and liability issues,” the Copyright Office did not provide a timeline for releasing guidance, only confirming that their “goal is to finalize the entire report by the end of the fiscal year.”

Once guidance is available, it could sway court opinions, although courts do not necessarily have to apply Copyright Office guidance when weighing cases.

The Copyright Office’s aspirational timeline does seem to be ahead of when at least some courts can be expected to decide on some of the biggest copyright questions for some creators. The class-action lawsuit raised by book authors against OpenAI, for example, is not expected to be resolved until February 2025, and the New York Times’ lawsuit is likely on a similar timeline. However, artists suing Stability AI face a hearing on that AI company’s motion to dismiss this May.

Fake AI law firms are sending fake DMCA threats to generate fake SEO gains

Dewey Fakum & Howe, LLP —

How one journalist found himself targeted by generative AI over a keyfob photo.

Updated

A person made of many parts, similar to the attorney who handles both severe criminal law and copyright takedowns for an Arizona law firm.

Getty Images

If you run a personal or hobby website, getting a copyright notice from a law firm about an image on your site can trigger some fast-acting panic. As someone who has paid to settle a news service-licensing issue before, I can empathize with anybody who wants to make this kind of thing go away.

Which is why a new kind of angle-on-an-angle scheme can seem both obvious to spot and likely effective. Ernie Smith, the prolific, ever-curious writer behind the newsletter Tedium, received a “DMCA Copyright Infringement Notice” in late March from “Commonwealth Legal,” representing the “Intellectual Property division” of Tech4Gods.

The issue was with a photo of a keyfob from legitimate photo service Unsplash, used in service of a post about a strange Uber ride Smith once took. As Smith detailed in a Mastodon thread, the purported firm needed him to “add a credit to our client immediately” through a link to Tech4Gods, and said it should be “addressed in the next five business days.” Removing the image “does not conclude the matter,” and should Smith fail to take action, the putative firm would have to “activate” its case, relying on DMCA 512(c) (which, in many readings, actually grants relief should a website owner, unaware of infringing material, “act expeditiously to remove” said material). The email unhelpfully points to the main page of the Internet Archive so that Smith might review “past usage records.”

A slice of the website for Commonwealth Legal Services, with every word of that phrase, including “for,” called into question.

Commonwealth Legal Services

There are quite a few issues with Commonwealth Legal’s request, as detailed by Smith and 404 Media. Chief among them is that Commonwealth Legal, a firm theoretically based in Arizona (which is not a commonwealth), almost certainly does not exist. Despite the 2018 copyright displayed on the site, the firm’s website domain was seemingly registered on March 1, 2024, with a Canadian IP location. The address on the firm’s site leads to a location that, to say the least, does not match the “fourth floor” indicated on the website.
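Checking a domain’s actual registration date, as Smith and 404 Media did, is something anyone can reproduce with the plain WHOIS protocol (RFC 3912): open a TCP connection to port 43, send the domain name, and read the response. A hedged sketch—the Verisign server name and the “Creation Date:” response field are conventions for .com domains, not guarantees for every registry:

```python
import re
import socket
from typing import Optional

def query_whois(domain: str, server: str = "whois.verisign-grs.com") -> str:
    """RFC 3912 WHOIS: send 'domain\\r\\n' to port 43, read until close."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(domain.encode() + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

def parse_creation_date(whois_text: str) -> Optional[str]:
    """Pull the 'Creation Date:' value from a Verisign-style response."""
    match = re.search(r"Creation Date:\s*(\S+)", whois_text)
    return match.group(1) if match else None

# The parser works against a canned response (no network needed):
sample = "Domain Name: EXAMPLE.COM\n   Creation Date: 2024-03-01T00:00:00Z\n"
print(parse_creation_date(sample))  # 2024-03-01T00:00:00Z
```

A registration date months old on a site claiming years of history is exactly the mismatch that exposed “Commonwealth Legal.”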

While the law firm’s website is stuffed full of stock images, so are many websites for professional services. The real tell is the site’s list of attorneys, most of whom, as 404 Media puts it, have “vacant, thousand-yard stares” common to AI-generated faces. AI detection firm Reality Defender told 404 Media that its service spotted AI generation in every attorney’s image, “most likely by a Generative Adversarial Network (GAN) model.”

Then there are the attorneys’ bios, which offer surface-level competence underpinned by bizarre setups. Five of the 12 supposedly come from acclaimed law schools at Harvard, Yale, Stanford, and University of Chicago. The other seven seem to have graduated from the top five results you might get for “Arizona Law School.” Sarah Walker has a practice based on “Copyright Violation and Judicial Criminal Proceedings,” a quite uncommon pairing. Sometimes she is “upholding the rights of artists,” but she can also “handle high-stakes criminal cases.” Walker, it seems, couldn’t pick just one track at Yale Law School.

Why would someone go to the trouble of building a law firm out of NameCheap, stock art, AI images, and (seemingly) AI-generated copy to send quasi-legal demands to site owners? Backlinks, that’s why. Backlinks are links pointing to a site trying to rank up from a site that Google (or others, but almost always Google) holds in high esteem. Whether spammed, traded, generated, or demanded through a fake firm, backlinks power the gray-to-very-dark-gray search engine optimization (SEO) market. For all their touted algorithmic (and now AI) prowess, search engines have always had a hard time gauging backlink quality and context, so some site owners still buy backlinks.
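Whether a backlink passes any ranking value also depends on its rel attribute: links marked rel="nofollow", rel="sponsored", or rel="ugc" tell search engines not to pass credit, so a demanded "credit" link only pays off for the scammer if it is a plain followed link. A rough sketch of auditing a page for such links, using only Python's standard-library HTML parser (the domain and sample HTML are made up):

```python
from html.parser import HTMLParser

class BacklinkAudit(HTMLParser):
    """Collect links pointing at a target domain, noting their rel attributes."""

    def __init__(self, target_domain: str):
        super().__init__()
        self.target = target_domain
        self.links = []  # (href, rel) pairs

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href") or ""
        if self.target in href:
            self.links.append((href, attrs.get("rel") or ""))

# Hypothetical page containing the demanded "credit" link:
page = '<p>Photo credit: <a href="https://example-gadget-site.test/">Example</a></p>'
audit = BacklinkAudit("example-gadget-site.test")
audit.feed(page)
for href, rel in audit.links:
    followed = not any(t in rel for t in ("nofollow", "sponsored", "ugc"))
    print(href, "passes link equity" if followed else "is neutralized by rel")
```

A site owner pressured into adding such a credit could comply while adding rel="nofollow", which satisfies the "credit" demand without delivering the SEO payoff.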

The owner of Tech4Gods told 404 Media’s Jason Koebler that he did buy backlinks for his gadget review site (with “AI writing assistants”). He disclaimed owning the disputed image or any images and made vague suggestions that a disgruntled former contractor may be trying to poison his ranking with spam links.

Asked by Ars if he had heard back from “Commonwealth Legal” now that five business days were up, Ernie Smith tells Ars: “No, alas.”

This post was updated at 4:50 p.m. Eastern to include Ernie Smith’s response.

Fake AI law firms are sending fake DMCA threats to generate fake SEO gains Read More »

copilot-key-is-based-on-a-button-you-probably-haven’t-seen-since-ibm’s-model-m

Copilot key is based on a button you probably haven’t seen since IBM’s Model M

Microsoft chatbot button —

Left-Shift + Windows key + F23

Enlarge / A Dell XPS 14 laptop. The Copilot key is to the right of the right-Alt button.

In January, Microsoft introduced a new key to Windows PC keyboards for the first time in 30 years. The Copilot key, dedicated to launching Microsoft’s eponymous generative AI assistant, is already on some Windows laptops released this year. On Monday, Tom’s Hardware dug into the new addition and determined exactly what pressing the button does, which is actually pretty simple. Pushing a computer’s integrated Copilot button is like pressing left-Shift + Windows key + F23 simultaneously.

Tom’s Hardware confirmed this after wondering whether the Copilot key introduced a new scan code to Windows or worked differently. Using the scripting program AutoHotkey on a new laptop with a Copilot button, Tom’s Hardware discovered the keystrokes registered when a user presses the Copilot key. The publication confirmed with Dell that “this key assignment is standard for the Copilot key and done at Microsoft’s direction.”
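In Win32 terms, that chord corresponds to three documented virtual-key codes: VK_LSHIFT (0xA0), VK_LWIN (0x5B), and VK_F23 (0x86). A small, platform-independent illustration of matching the combination (plain Python, no Windows APIs; the function name is our own):

```python
# Standard Win32 virtual-key codes (VK_* constants from the Windows SDK):
VK_LSHIFT = 0xA0  # left Shift
VK_LWIN = 0x5B    # left Windows key
VK_F23 = 0x86     # F23 (VK_F1 is 0x70, so F23 = 0x70 + 22)

COPILOT_CHORD = {VK_LSHIFT, VK_LWIN, VK_F23}

def is_copilot_press(keys_down):
    """True if the set of currently held key codes contains the Copilot chord."""
    return COPILOT_CHORD <= set(keys_down)

print(is_copilot_press({VK_LSHIFT, VK_LWIN, VK_F23}))  # True: the Copilot key
print(is_copilot_press({VK_LWIN, 0x43}))               # False: Windows + C
```

Because the key emits an ordinary chord rather than a new scan code, any remapping tool that can intercept Shift + Win + F23 can repurpose it.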

F23

Surprising to see in that string of keys is F23. Having a computer keyboard with a function row or rows that take you from F1 all the way to F23 is quite rare today. When I try to imagine a keyboard that comes with an F23 button, vintage keyboards come to mind, more specifically buckling spring keyboards from IBM.

IBM’s Model F, which debuted in 1981 and used buckling spring switches over a capacitive PCB, and the Model M, which launched in 1985 and used buckling spring switches over a membrane sheet, both offered layouts with 122 keys. These layouts included not one, but two rows of function keys that would leave today’s 60 percent keyboard fans sweating over the wasted space.

But having 122 keys was helpful for keyboards tied to IBM business terminals. The keyboard layout even included a bank of keys to the left of the primary alpha block of keys for even more forms of input.

Enlarge / An IBM Model M keyboard with an F23 key.

The 122-key keyboard layout with F23 lives on. Beyond people who still swear by old Model F and M keyboards, Model F Labs and Unicomp both currently sell modern buckling spring keyboards with built-in F23 buttons. Another reason a modern Windows PC user might have access to an F23 key is if they use a macro pad.

But even with those uses in mind, the F23 key remains rare. That helps explain why Microsoft would use the key for launching Copilot; users are unlikely to have F23 programmed for other functions. This was also likely less work than making a key with an entirely new scan code.

The Copilot button is reprogrammable

When I previewed Dell’s 2024 XPS laptops, a Dell representative told me that the integrated Copilot key wasn’t reprogrammable. However, in addition to providing some interesting information about the newest PC key since the Windows button, Tom’s Hardware’s revelation shows why the Copilot key is actually reprogrammable, even if OEMs don’t give users a way to do so out of the box. (If you need help, check out the website’s tutorial for reprogramming the Windows Copilot key.)

I suspect there’s a strong interest in reprogramming that button. For one, generative AI, despite all its hype and potential, is still an emerging technology. Many don’t need or want access to any chatbot—let alone Microsoft’s—instantly or even at all. Those who don’t use their system with a Microsoft account have no use for the button, since being logged in to a Microsoft account is required for the button to launch Copilot.

Enlarge / A rendering of the Copilot button.

Microsoft

Additionally, there are other easy ways to launch Copilot on a computer that has the program downloaded, like double-clicking an icon or pressing Windows + C, that make a dedicated button unnecessary. (Ars Technica asked Microsoft why the Copilot key doesn’t just register Windows + C, but the company declined to comment. Windows + C has launched other apps in the past, including Cortana, so it’s possible that Microsoft wanted to avoid the Copilot key performing a different function when pressed on computers that use Windows images without Copilot.)

In general, shoehorning the Copilot key into Windows laptops seems premature. Copilot is young and still a preview; just a few months ago, it was called Bing Chat. Further, the future of generative AI, including its popularity and top uses, is still forming and could evolve substantially during the lifetime of a Windows laptop. Microsoft’s generative AI efforts could also flounder over the years. Imagine if Microsoft went all-in on Bing back in the day and made all Windows keyboards have a Bing button, for example. Just because Microsoft wants something to become mainstream doesn’t mean that it will.

This all has made the Copilot button seem more like a way to force the adoption of Microsoft’s chatbot than a way to improve Windows keyboards. Microsoft has also made the Copilot button a requirement for its AI PC certification (which also requires an integrated neural processing unit and having Copilot pre-installed). Microsoft plans to make Copilot keys a requirement for Windows 11 OEM PCs eventually, it told Ars Technica in January.

At least for now, the basic way that the Copilot button works means you can turn the key into something more useful. Now, the tricky part would be finding a replacement keycap to eradicate Copilot’s influence from your keyboard.

Listing image by Microsoft

Copilot key is based on a button you probably haven’t seen since IBM’s Model M Read More »

canva’s-affinity-acquisition-is-a-non-subscription-based-weapon-against-adobe

Canva’s Affinity acquisition is a non-subscription-based weapon against Adobe

M&A —

But what will result from the companies’ opposing views on generative AI?

Enlarge / Affinity’s photo editor.

Online graphic design platform provider Canva announced its acquisition of Affinity on Tuesday. The purchase adds tools for creative professionals to the Australian startup’s repertoire, presenting competition for today’s digital design stronghold, Adobe.

The companies didn’t provide specifics about the deal, but Cliff Obrecht, Canva’s co-founder and COO, told Bloomberg that it consists of cash and stock and is worth “several hundred million pounds.”

Canva, which debuted in 2013, has made numerous acquisitions to date, including Flourish, Kaleido, and Pixabay, but its purchase of Affinity is its biggest yet—by both price and headcount (90). Affinity CEO Ashley Hewson said via a YouTube video that Canva approached Affinity about a potential deal two months ago.

Before its Affinity purchase, Canva claimed 175 million users, which interestingly includes 90 million accrued since September 2022, when Canva launched Visual Suite. Without Affinity, though, Canva hasn’t had a way to appeal to the business-to-business market.

Affinity, which works with iPads, Macs, and Windows PCs, meanwhile, has a creative suite that includes a photo editor, professional page layout software, and Designer, a vector-based graphics software that “thousands” of illustrators, designers, and game developers use, Obrecht said when announcing the acquisition.

Of course, Affinity’s user base isn’t nearly the size of Adobe’s. Affinity claims that 3 million creative professionals use its tools. Adobe hasn’t provided hard numbers recently, but in 2017, it was estimated that Adobe Creative Cloud had 12 million subscribers, and Adobe currently claims to have 50 million members on its Behance online community.

However, Affinity has earned a following among creative professionals seeking an alternative to Adobe. Speaking to Bloomberg, Obrecht was keen to point out that Apple has featured Affinity apps in presentations about creative products, for example.

Perpetual Affinity licenses will still be available

Since being founded in 2014, one of the biggest ways that Affinity has stood out to creatives looking to avoid the costs associated with Adobe, including subscription fees, is perpetual licensing. New owner Canva pledged in an announcement today that one-time purchase fees will always be an option for Affinity users.

“Perpetual licenses will always be offered, and we will always price Affinity fairly and affordably,” an announcement today from Canva and Affinity said.

If Canva ever decides to sell Affinity as a subscription, perpetual licensing will remain available, Canva said, adding: “This fits with enabling Canva users to start adopting Affinity. It could also allow us to offer Affinity users a way to scale their workflows using Canva as a platform to share and collaborate on their Affinity assets, if they choose to.”

As we’ve seen with many other acquisitions, though, it’s common for companies to start changing their minds about how they’re willing to operate an acquired business years or even months after finalizing the purchase. And, of course, Canva’s idea of pricing “fairly and affordably” could differ from that of long-time Affinity users.

What about AI?

Canva also vowed to keep Affinity available as a standalone product and said there will be upcoming free updates to Affinity V2. However, Cameron Adams, Canva’s co-founder, pointed to potential future integration between Canva’s and Affinity’s offerings when speaking with The Sydney Morning Herald:

Our product teams have already started chatting and we have some immediate plans for lightweight integration, but we think the products themselves will always be separate. Professional designers have really specific needs.

Canva’s announcement today said that the company plans to accelerate the rollout of “highly requested” Affinity features, “such as variable font support, blend and width tools, auto object selection, multi-page spreads, [and] ePub export.” With Canva, which was valued at $26 billion in 2021 and generates over $2.1 billion in annualized revenue, taking ownership of Affinity, the creative suite is expected to have more resources for improvements and updates than before.

Notably, though, Canva hasn’t revealed to what degree it may try to incorporate AI into Affinity. Canva is fully aboard the generative AI hype train and, as recently as this Monday, pushed workers of all types to embrace the technology. Affinity, meanwhile, has said that it won’t make any generative AI tech and is “against anything which undermines human talent or tramples on artists’ IP.” Affinity’s stance could be forced to change one day under its new owner.

To start, though, Canva’s acquisition helps to fill the B2B gap in its portfolio, and it’s expected to use its new appeal to go after some of Adobe’s dominance.

“While our last decade at Canva has focused heavily on the 99 percent of knowledge workers without design training, truly empowering the world to design includes empowering professional designers, too. By joining forces with Affinity, we’re excited to unlock the full spectrum of designers at every level and stage of the design journey,” Obrecht said in Tuesday’s announcement.

Meanwhile, Adobe abandoned its own recent merger and acquisition efforts, a $20 billion purchase of Figma, in December due to regulatory concerns.

Canva’s Affinity acquisition is a non-subscription-based weapon against Adobe Read More »

wwdc-2024-starts-on-june-10-with-announcements-about-ios-18-and-beyond

WWDC 2024 starts on June 10 with announcements about iOS 18 and beyond

WWDC —

Speculation is rampant that Apple will make its first big moves in generative AI.

Enlarge / The logo for WWDC24.

Apple

Apple has announced dates for this year’s Worldwide Developers Conference (WWDC). WWDC24 will run from June 10 through June 14 at the company’s Cupertino, California, headquarters, but everything will be streamed online.

Apple posted about the event with the following generic copy:

Join us online for the biggest developer event of the year. Be there for the unveiling of the latest Apple platforms, technologies, and tools. Learn how to create and elevate your apps and games. Engage with Apple designers and engineers and connect with the worldwide developer community. All online and at no cost.

As always, the conference will kick off with a keynote presentation on the first day, which is Monday, June 10. You can be sure Apple will use that event to at least announce the key features of its next round of annual software updates for iOS, iPadOS, macOS, watchOS, visionOS, and tvOS.

We could also see new hardware—it doesn’t happen every year, but it has of late. We don’t yet know exactly what that hardware might be, though.

Much of the speculation among analysts and commentators concerns Apple’s first move into generative AI. There have been reports that Apple may work with a partner like Google to include a chatbot in its operating system, that it has been considering designing its own AI tools, or that it could offer an AI App Store, giving users a choice between many chatbots.

Whatever the case, Apple is playing catch-up with some of its competitors in generative AI and large language models even though it has been using other applications of AI across its products for a couple of years now. The company’s leadership will probably talk about it during the keynote.

After the keynote, Apple usually hosts a “Platforms State of the Union” talk that delves deeper into its upcoming software updates, followed by hours of developer-focused sessions detailing how to take advantage of newly planned features in third-party apps.

WWDC 2024 starts on June 10 with announcements about iOS 18 and beyond Read More »

google-balks-at-$270m-fine-after-training-ai-on-french-news-sites’-content

Google balks at $270M fine after training AI on French news sites’ content

Google has agreed to pay 250 million euros (about $273 million) to settle a dispute in France after breaching years-old commitments to inform and pay French news publishers when referencing and displaying content in both search results and when training Google’s AI-powered chatbot, Gemini.

According to France’s competition watchdog, the Autorité de la Concurrence (ADLC), Google dodged many commitments to deal with publishers fairly. Most recently, it never notified publishers or the ADLC before training Gemini (initially launched as Bard) on publishers’ content or displaying content in Gemini outputs. Google also waited until September 28, 2023, to introduce easy options for publishers to opt out, which made it impossible for publishers to negotiate fair deals for that content, the ADLC found.

“Until this date, press agencies and publishers wanting to opt out of this use had to insert an instruction opposing any crawling of their content by Google, including on the Search, Discover and Google News services,” the ADLC noted, warning that “in the future, the Autorité will be particularly attentive as regards the effectiveness of opt-out systems implemented by Google.”

To address breaches of four out of seven commitments in France—which the ADLC imposed in 2022 for a period of five years to “benefit” publishers by ensuring Google’s ongoing negotiations with them were “balanced”—Google has agreed to “a series of corrective measures,” the ADLC said.

Google is not happy with the fine, which it described as “not proportionate” partly because the fine “doesn’t sufficiently take into account the efforts we have made to answer and resolve the concerns raised—in an environment where it’s very hard to set a course because we can’t predict which way the wind will blow next.”

According to Google, regulators everywhere need to clearly define fair use of content when developing search tools and AI models, so that search companies and AI makers always know “whom we are paying for what.” Currently in France, Google contends, the scope of Google’s commitments has shifted from just general news publishers to now also include specialist publications and listings and comparison sites.

The ADLC agreed that “the question of whether the use of press publications as part of an artificial intelligence service qualifies for protection under related rights regulations has not yet been settled,” but noted that “at the very least,” Google was required to “inform publishers of the use of their content for their Bard software.”

Regarding Bard/Gemini, Google said that it “voluntarily introduced a new technical solution called Google-Extended to make it easier for rights holders to opt out of Gemini without impact on their presence in Search.” It has now also committed to better explain to publishers both “how our products based on generative AI work and how ‘Opt Out’ works.”
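Google-Extended is not a separate crawler but a robots.txt user-agent token: a publisher who wants content excluded from Gemini training while remaining indexed in Search can add rules along these lines (the paths shown are illustrative):

```text
# Opt out of Gemini/AI training via Google-Extended, without affecting Search
User-agent: Google-Extended
Disallow: /

# Ordinary Search crawling continues under Googlebot
User-agent: Googlebot
Allow: /
```

Before this token existed, the ADLC notes, blocking AI use meant blocking Google's crawlers outright, which also removed content from Search, Discover, and Google News.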

Google said that it agreed to the settlement “because it’s time to move on” and “focus on the larger goal of sustainable approaches to connecting people with quality content and on working constructively with French publishers.”

“Today’s fine relates mostly to [a] disagreement about how much value Google derives from news content,” Google’s blog said, claiming that “a lack of clear regulatory guidance and repeated enforcement actions have made it hard to navigate negotiations with publishers, or plan how we invest in news in France in the future.”

What changes did Google agree to make?

Google defended its position as “the first and only platform to have signed significant licensing agreements” in France, benefiting 280 French press publishers and “covering more than 450 publications.”

With these publishers, the ADLC found that Google breached requirements to “negotiate in good faith based on transparent, objective, and non-discriminatory criteria,” to consistently “make a remuneration offer” within three months of a publisher’s request, and to provide information for publishers to “transparently assess their remuneration.”

Google also breached commitments to “inform editors and press agencies of the use of their content by its service Bard” and of Google’s decision to link “the use of press agencies’ and publishers’ content by its artificial intelligence service to the display of protected content on services such as Search, Discover and News.”

Regarding negotiations, the ADLC found that Google not only failed to be transparent with publishers about remuneration, but also failed to keep the ADLC informed of information necessary to monitor whether Google was honoring its commitments to fairly pay publishers. Partly “to guarantee better communication,” Google has agreed to appoint a French-speaking representative in its Paris office, along with other steps the ADLC recommended.

According to the ADLC’s announcement (translated from French), Google seemingly acted sketchy in negotiations by not meeting non-discrimination criteria—and unfavorably treating publishers in different situations identically—and by not mentioning “all the services that could generate revenues for the negotiating party.”

“According to the Autorité, not taking into account differences in attractiveness between content does not allow for an accurate reflection of the contribution of each press agency and publisher to Google’s revenues,” the ADLC said.

Also problematically, Google established a minimum threshold of 100 euros for remuneration that it has now agreed to drop.

This threshold, “in its very principle, introduces discrimination between publishers that, below a certain threshold, are all arbitrarily assigned zero remuneration, regardless of their respective situations,” the ADLC found.

Google balks at $270M fine after training AI on French news sites’ content Read More »

google-reshapes-fitbit-in-its-image-as-users-allege-“planned-obsolescence”

Google reshapes Fitbit in its image as users allege “planned obsolescence”

Google Fitbit, emphasis on Google —

Generative AI may not be enough to appease frustrated customers.

Enlarge / Google Fitbit’s Charge 5.

Fitbit

Google closed its Fitbit acquisition in 2021. Since then, the tech behemoth has pushed numerous changes to the wearable brand, including upcoming updates announced this week. While Google reshapes its fitness tracker business, though, some long-time users are regretting their Fitbit purchases and questioning if Google’s practices will force them to purchase their next fitness tracker elsewhere.

Generative AI coming to Fitbit (of course)

As is becoming common practice with consumer tech announcements, Google’s latest announcements about Fitbit seemed to be trying to convince users of the wonders of generative AI and how that will change their gadgets for the better. In a blog post yesterday, Dr. Karen DeSalvo, Google’s chief health officer, announced that Fitbit Premium subscribers would be able to test experimental AI features later this year (Google hasn’t specified when).

“You will be able to ask questions in a natural way and create charts just for you to help you understand your own data better. For example, you could dig deeper into how many active zone minutes… you get and the correlation with how restorative your sleep is,” she wrote.

DeSalvo’s post included an example of a user asking a chatbot if there was a connection between their sleep and activity and said that the experimental AI features will only be available to “a limited number of Android users who are enrolled in the Fitbit Labs program in the Fitbit mobile app.”

Google shared this image as an example of what future Fitbit generative AI features could look like.

Fitbit is also working with the Google Research team and “health and wellness experts, doctors, and certified coaches” to develop a large language model (LLM) for upcoming Fitbit mobile app features that pull data from Fitbit and Pixel devices, DeSalvo said. The announcement follows Google’s decision to stop selling Fitbits in places where it doesn’t sell Pixels, taking the trackers off shelves in a reported 29 countries.

In a blog post yesterday, Yossi Matias, VP of engineering and research at Google, said the company wants to use the LLM to add personalized coaching features, such as the ability to look for sleep irregularities and suggest actions “on how you might change the intensity of your workout.”

Google’s Fitbit is building the LLM on Gemini models that are tweaked on de-identified data from unspecified “research case studies,” Matias said, adding: “For example, we’re testing performance using sleep medicine certification exam-like practice tests.”

Gemini, which Google released in December, has been criticized for generating historically inaccurate images. After users complained about different races and ethnicities being inaccurately portrayed in prompts for things like Nazi members and medieval British kings, Google pulled the feature last month and said it would release a fix “soon.”

In a press briefing, Florence Thng, director and product lead at Fitbit, suggested that such problems wouldn’t befall Fitbit’s LLM since it’s being tested by users before an official rollout, CNET reported.

Other recent changes to Fitbit include a name tweak from Fitbit by Google, to Google Fitbit, as spotted by 9to5Google this week.

Enlarge / A screenshot from Fitbit’s homepage.

Combined with other changes that Google has brought to Fitbit over the past two years (axing most social features, computer syncing, and the browser-based SDK for developing apps, while pushing users to log in with Google accounts ahead of Google shuttering all Fitbit accounts in 2025), Fitbit, like many acquired firms, is giving long-time customers a different experience than it did before it was bought.

Disheartened customers

Meanwhile, customers, especially Charge 5 users, are questioning whether their next fitness tracker will come from Fitbit (make that Google Fitbit).

For example, in January, we reported that users were claiming that their Charge 5 suddenly started draining battery rapidly after installing a firmware update that Fitbit released in December. As of this writing, one thread discussing the problem on Fitbit’s support forum has 33 pages of comments. Google told BBC in January that it didn’t know what the problem was but knew that it wasn’t tied to firmware. Google hasn’t followed up with further explanation since. The company hasn’t responded to multiple requests from Ars Technica for comment. In the meantime, users continue experiencing problems and have reported so on Fitbit’s forum. Per user comments, the most Google has done is offer discounts or, if the device was within its warranty period, a replacement.

“This is called planned obsolescence. I’ll be upgrading to a watch style tracker from a different company. I wish Fitbit hadn’t sold out to Google,” a forum user going by Sean77024 wrote on Fitbit’s support forum yesterday.

Others, like 2MeFamilyFlyer, have also accused Fitbit of planning Charge 5 obsolescence. 2MeFamilyFlyer said they’re seeking a Fitbit alternative.

The ongoing problems with the Charge 5, which was succeeded by the Charge 6 on October 12, have some, like reneeshawgo on Fitbit’s forum and PC World Senior Editor Alaina Yee, saying that Fitbit devices aren’t meant to last long. In January, Yee wrote: “You should see Fitbits as a 1-year purchase in the US and two years in regions with better warranty protections.”

For many, a year or two wouldn’t be sufficient, even if the Fitbit came with trendy AI features.

Google reshapes Fitbit in its image as users allege “planned obsolescence” Read More »