Artificial Intelligence

Microsoft sues service for creating illicit content with its AI platform

Microsoft and other AI companies forbid the use of their generative AI systems to create certain categories of content. Off-limits material includes anything that features or promotes sexual exploitation or abuse, is erotic or pornographic, or attacks, denigrates, or excludes people based on race, ethnicity, national origin, gender, gender identity, sexual orientation, religion, age, disability status, or similar traits. Microsoft’s policy also bars content containing threats, intimidation, promotion of physical harm, or other abusive behavior.

Besides expressly banning such use of its platform, Microsoft has also developed guardrails that inspect both the prompts users enter and the resulting output for signs that the requested content violates any of these terms. These code-based restrictions have been repeatedly bypassed in recent years through hacks, some benign and performed by researchers, others carried out by malicious threat actors.
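
Microsoft hasn’t published the internals of these guardrails, but systems of this kind typically run a moderation check over both the user’s prompt and the model’s output. The sketch below is a minimal, hypothetical illustration of that pattern—the keyword matching is a deliberately crude stand-in for the trained classifiers a real platform would use, and none of the names reflect Microsoft’s actual implementation.

```python
# Illustrative sketch of prompt/output guardrails; not Microsoft's implementation.
# The keyword check is a crude stand-in for the trained moderation classifiers
# a real platform would run at the model, platform, and application layers.
BANNED_TERMS = ("hypothetical_banned_term_a", "hypothetical_banned_term_b")

def violates_policy(text: str) -> bool:
    """Crude stand-in for a real content-moderation classifier."""
    lowered = text.lower()
    return any(term in lowered for term in BANNED_TERMS)

def guarded_generate(prompt: str, generate) -> str:
    """Check the user's prompt, generate a response, then check the output too."""
    if violates_policy(prompt):
        return "Request blocked: the prompt appears to violate the content policy."
    output = generate(prompt)
    if violates_policy(output):
        return "Response withheld: the output appears to violate the content policy."
    return output

if __name__ == "__main__":
    # Dummy "model" for demonstration purposes only.
    print(guarded_generate("Write a short poem about rain.", lambda p: "A gentle rain falls..."))
```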

Microsoft didn’t outline precisely how the defendants’ software was allegedly designed to bypass the guardrails the company had created.

Masada wrote:

Microsoft’s AI services deploy strong safety measures, including built-in safety mitigations at the AI model, platform, and application levels. As alleged in our court filings unsealed today, Microsoft has observed a foreign-based threat-actor group develop sophisticated software that exploited exposed customer credentials scraped from public websites. In doing so, they sought to identify and unlawfully access accounts with certain generative AI services and purposely alter the capabilities of those services. Cybercriminals then used these services and resold access to other malicious actors with detailed instructions on how to use these custom tools to generate harmful and illicit content. Upon discovery, Microsoft revoked cybercriminal access, put in place countermeasures, and enhanced its safeguards to further block such malicious activity in the future.

The lawsuit alleges the defendants’ service violated the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, the Lanham Act, and the Racketeer Influenced and Corrupt Organizations Act and constitutes wire fraud, access device fraud, common law trespass, and tortious interference. The complaint seeks an injunction enjoining the defendants from engaging in “any activity herein.”

Tech worker movements grow as threats of RTO, AI loom


Advocates say tech worker movements got too big to ignore in 2024.

Credit: Aurich Lawson | Getty Images

It feels like tech workers have caught very few breaks over the past several years, between ongoing mass layoffs, stagnating wages amid inflation, AI supposedly coming for jobs, and unpopular orders to return to office that, for many, threaten to disrupt work-life balance.

But in 2024, a potentially critical mass of tech workers seemed to reach a breaking point. As labor rights groups advocating for tech workers told Ars, these workers are banding together in strong, sustained numbers and are either winning or appear tantalizingly close to winning better working conditions at major tech companies, including Amazon, Apple, Google, and Microsoft.

In February, the industry-wide Tech Workers Coalition (TWC) noted that “the tech workers movement is far more expansive and impactful” than even labor rights advocates realized, noting that unionized tech workers have gone beyond early stories about Googlers marching in the streets and now “make the headlines on a daily basis.”

Ike McCreery, a TWC volunteer and ex-Googler who helped found the Alphabet Workers Union, told Ars that although “it’s hard to gauge numerically” how much movements have grown, “our sense is definitely that the momentum continues to build.”

“It’s been an exciting year,” McCreery told Ars, while expressing particular enthusiasm that even “highly compensated tech workers are really seeing themselves more as workers” in these fights—which TWC “has been pushing for a long time.”

In 2024, TWC broadened efforts to help workers organize industry-wide, helping everyone from gig workers to project managers build both union and non-union efforts to push for change in the workplace.

Such widespread organizing “would have been unthinkable only five years ago,” TWC noted in February, and it’s clear from some of 2024’s biggest wins that some movements are making gains that could further propel that momentum in 2025.

Workers could also gain the upper hand if unpopular policies increase what one November study called “brain drain.” That’s a trend where tech companies adopting potentially alienating workplace tactics risk losing top talent at a time when key industries like AI and cybersecurity are facing severe talent shortages.

Advocates told Ars that unpopular policies have always fueled worker movements, and RTO and AI are just the latest fuel on the fire. As many workers prepare to head back to offices in 2025, where worker surveillance is only expected to intensify, they told Ars why they expect workers’ momentum to continue at some of the world’s biggest tech firms.

Tech worker movements growing

In August, workers at America’s first unionized Apple Store ratified a labor contract with Apple, securing a modest wage increase of about 10 percent over three years. While small, that win came just a few weeks before the National Labor Relations Board (NLRB) determined that Amazon was a joint employer of unionized contract-based delivery drivers. And Google lost a similar fight last January when the NLRB ruled it must bargain with a union representing YouTube Music contract workers, Reuters reported.

For many workers, joining these movements helped raise wages. In September, facing mounting pressure, Amazon raised warehouse worker wages—investing $2.2 billion, its “biggest investment yet,” to broadly raise base salaries for workers. And more recently, Amazon was hit with a strike during the busy holiday season, as warehouse workers hoped to further hobble the company during a clutch financial quarter to force more bargaining. (Last year, Amazon posted a record-breaking $170 billion in holiday-quarter revenue, and it has said the current strike won’t hurt revenues.)

Even typically union-friendly Microsoft drew worker backlash and criticism in 2024 following layoffs of 650 video game workers in September.

These mass layoffs are driving some workers to join movements. A senior director for organizing with Communications Workers of America (CWA), Tom Smith, told Ars that shortly after the 600-member Tech Guild—”the largest single certified group of tech workers” to organize at the New York Times—reached a tentative deal to increase wages “up to 8.25 percent over the length of the contract,” about “460 software engineers at a video game company owned by Microsoft successfully unionized.”

Smith told Ars that while workers have pushed for better conditions for years, “these large units of tech workers achieving formal recognition, building lasting organization, and winning contracts” at “a more mass scale” represent a maturing movement—one following in the footsteps of unionizing Googlers and today influencing a broader swath of tech industry workers nationwide. From CWA’s viewpoint, workers in the video game industry seem best positioned to seek major wins next, Smith suggested, likely starting with Microsoft-owned companies and eventually affecting indie game companies.

CWA, TWC, and Tech Workers Union 1010 (a group run by tech workers that’s part of the Office and Professional Employees International Union) all now serve as dedicated groups supporting workers movements long-term, and that stability has helped these movements mature, McCreery told Ars. Each group plans to continue meeting workers where they are to support and help expand organizing in 2025.

Cost of RTOs may be significant, researchers warn

While layoffs likely remain the most extreme threat to tech workers broadly, a return-to-office (RTO) mandate can be just as jarring for remote tech workers who are either unable to comply or unwilling to give up the better work-life balance that comes with no commute. Advocates told Ars that RTO policies have pushed workers to join movements, while limited research suggests that companies imposing such mandates risk losing top talent.

In perhaps the biggest example from 2024, when Amazon announced that it would require workers in the office five days a week next year, a poll on Blind—the anonymous platform where workers discuss their employers—found that an overwhelming majority of more than 2,000 Amazon employees surveyed were “dissatisfied.”

“My morale for this job is gone…” one worker said on Blind.

Workers criticized the “non-data-driven logic” of the RTO mandate, prompting an Amazon executive to remind them that they could take their talents elsewhere if they didn’t like it. Many confirmed that’s exactly what they planned to do. (Amazon later announced it would be delaying RTO for many office workers after belatedly realizing there was a lack of office space.)

Other companies mandating RTO faced similar backlash from workers, who continued to question the logic driving the decision. One February study showed that RTO mandates don’t make companies any more valuable but do make workers more miserable. And last month, Brian Elliott, an executive advisor who wrote a book about the benefits of flexible teams, noted that only one in three executives thinks RTO had “even a slight positive impact on productivity.”

But not every company drew a hard line the way Amazon did. Dell, for example, gave workers a choice: remain remote and accept that they would never be eligible for promotions, or mark themselves as hybrid. Workers who refused the RTO said they valued their free time and admitted to looking for other job opportunities.

Very few studies have analyzed the true costs and benefits of RTO, a November academic study titled “Return to Office and Brain Drain” noted, and so far companies aren’t necessarily accepting the limited findings. The researchers behind that study noted that “the only existing study” measuring how RTO impacts employee turnover showed this year that senior employees left for other companies after Microsoft’s RTO mandate—a finding Microsoft disputed.

Seeking to build on this research, the November study tracked “over 3 million tech and finance workers’ employment histories reported on LinkedIn” and analyzed “the effect of S&P 500 firms’ return-to-office (RTO) mandates on employee turnover and hiring.”

Choosing to only analyze the firms requiring five days in office, the final sample covered 54 RTO firms, including big tech companies like Amazon, Apple, and Microsoft. From that sample, researchers concluded that average employee turnover increased by 14 percent after RTO mandates at bigger firms. And since big firms typically have lower turnover, the increase in turnover is likely larger at smaller firms, the study’s authors concluded.

The study also supported the conclusion that “employees with the highest skill level are more likely to leave” and found that “RTO firms take significantly longer time to fill their job vacancies after RTO mandates.”

“Together, our evidence suggests that RTO mandates are costly to firms and have serious negative effects on the workforce,” the study concluded, echoing some remote workers’ complaints about the seemingly non-data-driven logic of RTO, while urging that further research is needed.

“These turnovers could potentially have short-term and long-term effects on operation, innovation, employee morale, and organizational culture,” the study concluded.

A co-author of the “brain drain” study, Mark Ma, told Ars that by contrast, Glassdoor’s going fully remote at least anecdotally seemed to “significantly” increase the number and quality of applications—possibly also improving retention by offering the remote flexibility that much of today’s top talent requires.

Ma said that next his team hopes to track where people who leave firms over RTO policies go next.

“Do they become self-employed, or do they go to a competitor, or do they fund their own firm?” Ma speculated, hoping to trace these patterns more definitively over the next several years.

Additionally, Ma plans to investigate individual firms’ RTO impacts, as well as impacts on niche classes of workers with highly sought-after skills—such as in AI, machine learning, or cybersecurity—to see if it’s easier for them to find other jobs. In the long term, Ma also wants to monitor for less foreseeable outcomes, such as RTO mandates potentially increasing the number of challengers a firm faces in its industry.

Will RTO mandates continue in 2025?

Many tech workers may be wondering if there will be a spike in return-to-office mandates in 2025, especially since one of the most politically influential figures in tech, Elon Musk, recently reiterated that he thinks remote work is “poison.”

Musk, of course, banned remote work at Tesla, as well as when he took over Twitter. And as co-lead of the US Department of Government Efficiency (DOGE), Musk reportedly plans to ban remote work for government employees, as well. If other tech firms are influenced by Musk’s moves and join executives who seem to be mandating RTO based on intuition, it’s possible that more tech workers could be forced to return to office or else seek other employment.

But Ma told Ars that he doesn’t expect to see “a big spike in the number of firms announcing return to office mandates” in 2025.

His team only found eight major firms in tech and finance that issued five-day return-to-office mandates in 2024, which was the same number of firms flagged in 2023, suggesting no major increase in RTOs from year to year. Ma told Ars that while big firms like Amazon ordering employees to return to the office made headlines, many firms seem to be continuing to embrace hybrid models, sometimes allowing employees to choose when or if they come into the office.

That seeming preference for hybrid work aligns with the “future of work” surveys on workplace trends and employee preferences that the Consumer Technology Association (CTA) conducted for years but has apparently since discontinued. In 2021, the CTA reported that “89 percent of tech executives say flexible work arrangements are the most important employee benefit and 65 percent say they’ll hire more employees to work remotely.” The next year—apparently the last time the survey was published—the CTA suggested hybrid models could help attract talent in a competitive market hit with “an unprecedented demand for workers with high-tech skills.”

The CTA did not respond to Ars’ requests to comment on whether it expects hybrid work arrangements to remain preferred over five-day return-to-office policies next year.

CWA’s Smith told Ars that worker movements are growing partly because “folks are engaged in this big fight around surveillance and workplace control,” as well as anything “having to do with to what extent will people return to offices and what does that look like if and when people do return to offices?”

Without data backing RTO mandates, Ma’s study suggests that firms will struggle to retain highly skilled workers at a time when tech innovation remains a top priority for the US. As workers appear increasingly put off by policies—like RTO or AI-driven workplace monitoring or efficiency efforts threatening to replace workers with AI—Smith’s experience seems to show that disgruntled workers could find themselves drawn to unions that could help them claw back control over work-life balance. And the cost of the ensuing shuffle to some of the largest tech firms in the world could be “significant,” Ma’s study warned.

TWC’s McCreery told Ars that on top of unpopular RTO policies driving workers to join movements, workers have also become more active in protesting unpopular politics, frustrated to see their talents apparently used to further controversial conflicts and military efforts globally. Some workers think workplace organizing could be more powerful than voting to oppose political actions their companies take.

“The workplace really remains an important site of power for a lot of people where maybe they don’t feel like they can enact their values just by voting or in other ways,” McCreery said.

While unpopular policies “have always been a reason workers have joined unions and joined movements,” McCreery said that “the development of more of these unpopular policies” like RTO and AI-enhanced surveillance “really targeted” at workers has increased “the political consciousness and the sense” that tech workers are “just like any other workers.”

Layoffs at companies like Microsoft and Amazon during periods when revenue is growing by double digits also unify workers, advocates told Ars. Forbes noted Microsoft laid off 1,000 workers “just five days before reporting a 17.6 percent increase in revenue to $62 billion,” while Amazon’s 1,000-worker layoffs followed a 14 percent rise in revenue to $170 billion. And demand for AI led to the highest profit margins Amazon’s seen for its cloud business in a decade, CNBC reported in October.

CWA’s Smith told Ars that as companies continue to rake in profits and workers feel their work-life balance slipping away—while their efforts in the office are potentially “used to increase control and cause broader suffering”—some of the biggest fights workers raised in 2024 may intensify next year.

“It’s like a shock to employees, these industries pushing people to lower your expectations because we’re going to lay off hundreds of thousands of you just because we can while we make more profits than we ever have,” Smith said. “I think workers are going to step into really broad campaigns to assert a different worldview on employment security.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

OpenAI defends for-profit shift as critical to sustain humanitarian mission

OpenAI has finally shared details about its plans to shake up its core business by shifting to a for-profit corporate structure.

On Thursday, OpenAI posted on its blog, confirming that in 2025, the existing for-profit arm will be transformed into a Delaware-based public benefit corporation (PBC). As a PBC, OpenAI would be required to balance its shareholders’ and stakeholders’ interests with the public benefit. To achieve that, OpenAI would offer “ordinary shares of stock” while using some profits to further its mission—”ensuring artificial general intelligence (AGI) benefits all of humanity”—to serve a social good.

To compensate for losing control over the for-profit, the nonprofit would have some shares in the PBC, but it’s currently unclear how many will be allotted. Independent financial advisors will help OpenAI reach a “fair valuation,” the blog said, while promising the new structure would “multiply” the donations that previously supported the nonprofit.

“Our plan would result in one of the best resourced nonprofits in history,” OpenAI said. (During its latest funding round, OpenAI was valued at $157 billion.)

OpenAI claimed the nonprofit’s mission would be more sustainable under the proposed changes, as the costs of AI innovation only continue to compound. The new structure would set the PBC up to control OpenAI’s operations and business while the nonprofit would “hire a leadership team and staff to pursue charitable initiatives in sectors such as health care, education, and science,” OpenAI said.

Some of OpenAI’s rivals, such as Anthropic and Elon Musk’s xAI, use a similar corporate structure, OpenAI noted.

Critics had previously pushed back on this plan, arguing that humanity may be better served if the nonprofit continues controlling the for-profit arm of OpenAI. But OpenAI argued that the old way made it hard for the Board “to directly consider the interests of those who would finance the mission and does not enable the non-profit to easily do more than control the for-profit.”

Character.AI steps up teen safety after bots allegedly caused suicide, self-harm

Following a pair of lawsuits alleging that chatbots caused a teen boy’s suicide, groomed a 9-year-old girl, and caused a vulnerable teen to self-harm, Character.AI (C.AI) has announced a separate model just for teens, ages 13 and up, that’s supposed to make their experiences with bots safer.

In a blog, C.AI said it took a month to develop the teen model, with the goal of guiding the existing model “away from certain responses or interactions, reducing the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content.”

C.AI said that “evolving the model experience” to reduce the likelihood that kids engage in harmful chats—including bots allegedly teaching a teen with high-functioning autism to self-harm and delivering inappropriate adult content to all kids whose families are suing—required tweaking both model inputs and outputs.

To stop chatbots from initiating and responding to harmful dialogs, C.AI added classifiers that should help C.AI identify and filter out sensitive content from outputs. And to prevent kids from pushing bots to discuss sensitive topics, C.AI said that it had improved “detection, response, and intervention related to inputs from all users.” That ideally includes blocking any sensitive content from appearing in the chat.

Perhaps most significantly, C.AI will now link kids to resources if they try to discuss suicide or self-harm, which C.AI had not done previously—frustrating the parents suing, who argue that this practice, already common on social media platforms, should extend to chatbots.

Other teen safety features

In addition to creating the model just for teens, C.AI announced other safety features, including more robust parental controls rolling out early next year. Those controls would allow parents to track how much time kids are spending on C.AI and which bots they’re interacting with most frequently, the blog said.

C.AI will also be notifying teens when they’ve spent an hour on the platform, which could help prevent kids from becoming addicted to the app, as parents suing have alleged. In one case, parents had to lock their son’s iPad in a safe to keep him from using the app after bots allegedly repeatedly encouraged him to self-harm and even suggested murdering his parents. That teen has vowed to start using the app whenever he next has access, while parents fear the bots’ seeming influence may continue causing harm if he follows through on threats to run away.

Report: Google told FTC Microsoft’s OpenAI deal is killing AI competition

Google reportedly wants the US Federal Trade Commission (FTC) to end Microsoft’s exclusive cloud deal with OpenAI that requires anyone wanting access to OpenAI’s models to go through Microsoft’s servers.

Someone “directly involved” in Google’s effort told The Information that Google’s request came after the FTC began broadly probing how Microsoft’s cloud computing business practices may be harming competition.

As part of the FTC’s investigation, the agency apparently asked Microsoft’s biggest rivals if the exclusive OpenAI deal was “preventing them from competing in the burgeoning artificial intelligence market,” multiple sources told The Information. Google reportedly was among those arguing that the deal harms competition by saddling rivals with extra costs and blocking them from hosting OpenAI’s latest models themselves.

In 2024 alone, Microsoft generated about $1 billion from reselling OpenAI’s large language models (LLMs), The Information reported, while rivals were stuck paying to train staff to move data to Microsoft servers if their customers wanted access to OpenAI technology. For one customer, Intuit, it cost millions monthly to access OpenAI models on Microsoft’s servers, The Information reported.

Microsoft benefits from the arrangement—which is not necessarily illegal—through increased revenue from reselling LLMs and renting out more cloud servers. It also takes a 20 percent cut of OpenAI’s revenue. Last year, OpenAI made approximately $3 billion selling its LLMs to customers like T-Mobile and Walmart, The Information reported.

Microsoft’s agreement with OpenAI could be viewed as anti-competitive if businesses convince the FTC that the costs of switching to Microsoft’s servers to access OpenAI technology are so burdensome that they unfairly disadvantage rivals. The deal could also be seen as harming the market and hampering innovation by disincentivizing Microsoft from competing with OpenAI.

To avoid any disruption to the deal, however, Microsoft could simply point to AI models sold by Google and Amazon as proof of “robust competition,” The Information noted. The FTC may not buy that defense, though, since rivals’ AI models fall significantly behind OpenAI’s in sales. Any perception that an entrenched major player is foreclosing the AI market could trigger intense scrutiny as the US seeks to become a world leader in AI technology development.

Recap: Our “AI in DC” conference was great—here’s what you missed


Experts were assembled, tales told, and cocktails consumed. It was fun!

Our venue. So spy-ish! Credit: DC Event Photojournalism

Ars Technica descended in force last week upon our nation’s capital, setting up shop in the International Spy Museum for a three-panel discussion on artificial intelligence, infrastructure, security, and how compliance with policy changes over the next decade or so might shape the future of business computing in all its forms. Much like our San Jose event last month, the venue was packed to the rafters with Ars readers eager for knowledge (and perhaps some free drinks, which is definitely why I was there!). A bit over 200 people were eventually herded into one of the conference spaces on the venue’s upper floors, and Ars Editor-in-Chief Ken Fisher hopped on stage to get us started.

“Today’s event about privacy, compliance, and making infrastructure smarter, I think, could not be more perfectly timed,” said Fisher. “I don’t know about your orgs, but I know Ars Technica and our parent company, Condé Nast, are currently thinking about generative AI and how it touches almost every aspect or could touch almost every aspect of our business.”

Ars EIC Ken Fisher takes the stage to kick things off. Credit: DC Event Photojournalism

Fisher continued: “I think the media talks about how [generative AI] is going to maybe write news and take over content, but the reality is that generative AI has a lot of potential to help us in finance, to help us with opex, to help us with planning—to help us with pretty much every aspect of our business and in our business. And from what I’m reading online, many folks are starting to have this dream that generative AI is going to lead them into a world where they can replace a lot of SaaS services where they can make a pivot to first-party data.”

First-party data and first-party software development, concluded Fisher, will be critically important when paired with generative AI—”table stakes,” Fisher called them, for participating in the future of business.

After Ken, it was on to our first panel!

“The Key to Compliance with Emerging Technologies”

Up first were Anton Dam, an engineering VP with AuditBoard; John Verdi of the Future of Privacy Forum; and Jim Comstock, a cloud storage program director at IBM. The main concern of this panel was how companies will keep up with shifting compliance requirements as the pace of advancement continues to increase.

Each panelist had somewhat of a complementary take. AuditBoard’s Dam emphasized how quickly AI is shifting things around and pointed out the need for organizations to be proactive—to be mindful of regulatory changes before they happen and to have plans in place. “If you want to stay compliant,” Dam said, “you have to be proactive and not wait for, say, agency guidance.”

Hutchinson, Comstock, Dam, and Verdi. Credit: DC Event Photojournalism

FPF’s John Verdi dwelled for a bit on the challenge of doing just that and balancing innovation against the need to comply with regs. He noted that a “privacy by design” approach for products—where considerations about compliance are factored into something’s design from the very beginning rather than being treated as bolt-ons later—ultimately serves both the customer and the business.

Cross-border compliance also came up—with big cloud providers and data that perhaps resides in different countries, different laws apply. Making sure you’re doing what all of those laws say is hugely complex, and IBM’s Comstock pointed out that customers need to both work with vendors and also hold those vendors accountable for where one’s data resides.

“Data Security in the Age of AI-Assisted Cyber Espionage”

Next, we shifted to an infosec outlook, bringing on a four-person panel that included former Ars Technica senior security editor Sean Gallagher, who is currently keeping the world safe at Sophos X-Ops. Joining Sean were Kate Highnam, an ML engineer at Booz Allen Hamilton; Dr. Scott White, director of cybersecurity at George Washington University; and Elisa Ortiz, a storage and product marketing director at IBM.

Hutchinson, Ortiz, White, Highnam, and Gallagher. Credit: DC Event Photojournalism

For this panel, we wanted to look at the landscape around us, and Sean kicked the session off with a sobering description of the most prolific cyber threats out there today. “Pig butchering” was at the top of his list—that is, a shockingly common romance scam where victims are tricked into an emotional connection with a scammer, who then extorts them for money. (As Sean explained, the scammers themselves are often also victims, typically being trapped without their passports in foreign countries and forced to engage in scamming as their only hope to escape back home.) Scams like this increasingly use AI to work around language barriers—if a scammer who only speaks Cantonese targets a victim who only speaks German, for example, an LLM can carry on the scam, providing not just basic translation but also native colloquialisms and other human-like linguistic tweaks to help sell the scheme.

Dr. Scott White of GWU took us from scams to national security, pointing out how AI can—and already is—transforming intelligence gathering in addition to romance scams. Booz Allen Hamilton’s Kate Highnam continued this line of discussion, walking us through several ways that machine learning helps with detecting cyber-espionage activities. As good as the tools are, she emphasized that—at least for the foreseeable future—a human will still need to be in the loop when AI is used to detect crimes. “AI is really good for generalizing our directions,” she said, “but at the end of the day, we have to make sure that we are very clear with our assumptions.”

IBM’s Ortiz closed out the panel by reminding us that threats don’t just come in through the proverbial front door—one of the areas where companies can have significant vulnerabilities is via their backups. As attackers increasingly target backups, Ortiz advocated broad use of predictive analytics and real-time anomaly detection in order to spy out any oddness attackers might be up to.
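
Ortiz didn’t get into specific tooling, but the kind of anomaly detection she described can be as simple as flagging a backup whose size drifts far from its recent history. Below is a rough sketch of that idea—made-up numbers and a hypothetical threshold, not any vendor’s product:

```python
# Toy illustration of backup anomaly detection; not any vendor's product.
# A backup that is suddenly much larger or smaller than recent runs can hint
# at mass encryption, deletion, or other tampering worth investigating.
from statistics import mean, stdev

def is_anomalous(history_gb: list[float], latest_gb: float, z_threshold: float = 3.0) -> bool:
    """Return True if the latest backup size is a statistical outlier."""
    if len(history_gb) < 5:
        return False  # not enough history to judge
    mu, sigma = mean(history_gb), stdev(history_gb)
    if sigma == 0:
        return latest_gb != mu
    return abs(latest_gb - mu) / sigma > z_threshold

# Hypothetical nightly backup sizes in GB, then a suspicious jump:
history = [102.0, 101.5, 103.2, 102.8, 101.9, 102.4]
print(is_anomalous(history, 102.6))  # False: in line with recent history
print(is_anomalous(history, 240.0))  # True: flag for investigation
```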

“The Best Infrastructure Solution for Your AI/ML Strategy”

Our final panel had a deceptively simple title and an impossible task, because there is no “best” infrastructure solution. But there might be a best infrastructure solution for you, and that’s what we wanted to look at. Joining me on stage were Daniel Fenton, head of AI platforms at JLL; Arun Natarajan, director of AI innovation at the IRS; Amy Hirst, VP of site reliability engineering and user experience at IBM; and Matt Klos, an IBM senior solutions architect.

Fenton, Natarajan, Hirst, and Klos. (You can’t see me, but I’m just off-stage to the left.) Credit: DC Event Photojournalism

It’s always fascinating to get to ask the IRS anything, and Natarajan gave insightful answers. He opened by contrasting the goals and challenges of the IRS’s IT strategy as a government service organization with those of a typical enterprise, and there are obvious, significant differences. Fancy features don’t count as much as stability, security, and integration with legacy systems. AI is being looked at where appropriate, but what the IRS needs from AI more than anything else is transparency, and that can sometimes be lacking. “We’re challenged with ensuring ethical AI and transparency to the taxpayer, which requires a different approach than private sector solutions,” Natarajan said.

Other panelists, like JLL’s Fenton, emphasized the use of open architecture to ensure flexibility; IBM’s Klos also noted that often, even the best laid plans of data center engineers gang aft agley due not to architecture issues or poor design but to actual physical infrastructure not being what was expected. “One of the biggest pitfalls I see is power,” he explained. “Customers assume it’s everywhere, but it’s often a limiting factor, especially in high-demand AI infrastructure.”

Amy Hirst pointed out that when building one’s own AI/ML setup, traditional performance metrics still apply—and they apply across multiple stacks, including both storage and networking. Her advice sounds somewhat traditional but holds absolutely true, even now. “You have to consider both latency and throughput,” she said. “Both are key to establishing your system’s needs for AI workloads.”

Drinks and everything thereafter

And with that, our panel discussions were done. The group dispersed, with some folks heading downstairs for a private tour of the museum’s Bond in Motion exhibit, which featured the various on-screen rides of 007. Then there was a convergence on the bar and about an hour of fun conversations.

Cocktail time! Did our photographer snag you in a picture? Comment below and say hi! Credit: DC Event Photojournalism

For me, the networking and socializing are always the best parts of events like this, and getting to shake hands and swap business cards with Ars readers is one of the most exciting and joyful parts of this job. A special thank you to everyone who stuck around for the cocktail hour and who got a chance to say hi—you’re special, and I’m glad you came to the event!

And if you’re reading this and you’re sad that you didn’t get to make it to any of our events this year, that’s OK—we’ll do more next year. Stay tuned to the front page because Ars might be coming to your town next!

Lee is the Senior Technology Editor, and oversees story development for the gadget, culture, IT, and video sections of Ars Technica. A long-time member of the Ars OpenForum with an extensive background in enterprise storage and security, he lives in Houston.

Don’t fall for AI scams cloning cops’ voices, police warn

AI is giving scammers a more convincing way to impersonate police, reports show.

Just last week, the Salt Lake City Police Department (SLCPD) warned of an email scam using AI to convincingly clone the voice of Police Chief Mike Brown.

A citizen tipped off cops after receiving a suspicious email that included a video showing the police chief claiming that the recipient “owed the federal government nearly $100,000.”

To dupe their targets, the scammers cut together real footage from one of Brown’s prior TV interviews with AI-generated audio that SLCPD said “is clear and closely impersonates the voice of Chief Brown, which could lead community members to believe the message was legitimate.”

The FBI has warned for years of scammers attempting extortion by impersonating cops or government officials. But as AI voice-cloning technology has advanced, these scams could become much harder to detect, to the point where even the most forward-thinking companies like OpenAI have been hesitant to release the latest tech due to obvious concerns about potential abuse.

SLCPD noted that there were clues in the email impersonating their police chief that a tech-savvy citizen could have picked up on. A more careful listen reveals “the message had unnatural speech patterns, odd emphasis on certain words, and an inconsistent tone,” as well as “detectable acoustic edits from one sentence to the next.” And perhaps most glaringly, the scam email came from “a Google account and had the Salt Lake City Police Department’s name in it followed by a numeric number,” instead of from the police department’s official email domain of “slc.gov.”

SLCPD isn’t the only police department dealing with AI cop impersonators. Tulsa had a similar problem this summer when scammers started calling residents using a convincing fake voice designed to sound like Tulsa police officer Eric Spradlin, Public Radio Tulsa reported. A software developer who received the call, Myles David, said he understood the AI risks today but that even he was “caught off guard” and had to call police to verify the call wasn’t real.

ByteDance intern fired for planting malicious code in AI models

After rumors swirled that TikTok owner ByteDance had lost tens of millions after an intern sabotaged its AI models, ByteDance issued a statement this weekend hoping to silence all the social media chatter in China.

In a social media post translated and reviewed by Ars, ByteDance clarified “facts” about “interns destroying large model training” and confirmed that one intern was fired in August.

According to ByteDance, the intern had held a position in the company’s commercial technology team but was fired for committing “serious disciplinary violations.” Most notably, the intern allegedly “maliciously interfered with the model training tasks” for a ByteDance research project, ByteDance said.

None of the intern’s sabotage impacted ByteDance’s commercial projects or online businesses, ByteDance said, and none of ByteDance’s large models were affected.

Online rumors suggested that more than 8,000 graphics processing units were involved in the sabotage and that ByteDance lost “tens of millions of dollars” due to the intern’s interference, but these claims were “seriously exaggerated,” ByteDance said.

The tech company also accused the intern of adding misleading information to his social media profile, seemingly posturing that his work was connected to ByteDance’s AI Lab rather than its commercial technology team. In the statement, ByteDance confirmed that the intern’s university was notified of what happened, as were industry associations, presumably to prevent the intern from misleading others.

ByteDance’s statement this weekend didn’t seem to silence all the rumors online, though.

One commenter on ByteDance’s social media post disputed the distinction between the AI Lab and the commercial technology team, claiming that “the commercialization team he is in was previously under the AI Lab. In the past two years, the team’s recruitment was written as AI Lab. He joined the team as an intern in 2021, and it might be the most advanced AI Lab.”

US suspects TSMC helped Huawei skirt export controls, report says

In April, TSMC was provided with $6.6 billion in direct CHIPS Act funding to “support TSMC’s investment of more than $65 billion in three greenfield leading-edge fabs in Phoenix, Arizona, which will manufacture the world’s most advanced semiconductors,” the Department of Commerce said.

These investments are key to the Biden-Harris administration’s mission of strengthening “economic and national security by providing a reliable domestic supply of the chips that will underpin the future economy, powering the AI boom and other fast-growing industries like consumer electronics, automotive, Internet of Things, and high-performance computing,” the department noted. And in particular, the funding will help America “maintain our competitive edge” in artificial intelligence, the department said.

It likely wouldn’t make sense to prop up TSMC to help the US “onshore the critical hardware manufacturing capabilities that underpin AI’s deep language learning algorithms and inferencing techniques,” only to then limit access to US-made tech. TSMC’s Arizona fabs are supposed to support companies like Apple, Nvidia, and Qualcomm and enable them to “compete effectively,” the Department of Commerce said.

Currently, it’s unclear where the US probe into TSMC will go or whether a damaging finding could potentially impact TSMC’s CHIPS funding.

Last fall, though, the Department of Commerce published a final rule designed to “prevent CHIPS funds from being used to directly or indirectly benefit foreign countries of concern,” such as China.

If the US suspected that TSMC was aiding Huawei’s AI chip manufacturing, the company could be perceived as running afoul of CHIPS guardrails prohibiting TSMC from “knowingly engaging in any joint research or technology licensing effort with a foreign entity of concern that relates to a technology or product that raises national security concerns.”

Violating this “technology clawback” provision of the final rule risks “the full amount” of CHIPS Act funding being “recovered” by the Department of Commerce. That outcome seems unlikely, though, given that TSMC has been awarded more funding than any other recipient apart from Intel.

The Department of Commerce declined Ars’ request to comment on whether TSMC’s CHIPS Act funding could be impacted by the reported probe.

Expert witness used Copilot to make up fake damages, irking judge


Judge calls for a swift end to experts secretly using AI to sway cases.

A New York judge recently called out an expert witness for using Microsoft’s Copilot chatbot to inaccurately estimate damages in a real estate dispute that partly depended on an accurate assessment of damages to win.

In an order Thursday, Judge Jonathan Schopf warned that “due to the nature of the rapid evolution of artificial intelligence and its inherent reliability issues,” any use of AI should be disclosed before testimony or evidence is admitted in court. Admitting that the court “has no objective understanding as to how Copilot works,” Schopf suggested that the legal system could be disrupted if experts started overly relying on chatbots en masse.

His warning came after an expert witness, Charles Ranson, dubiously used Copilot to cross-check calculations in a dispute over a $485,000 rental property in the Bahamas that had been included in a trust for a deceased man’s son. The court was being asked to assess if the executrix and trustee—the deceased man’s sister—breached her fiduciary duties by delaying the sale of the property while admittedly using it for personal vacations.

To win, the surviving son had to prove that his aunt breached her duties by retaining the property, that her vacations there were a form of self-dealing, and that he suffered damages from her alleged misuse of the property.

It was up to Ranson to figure out how much would be owed to the son had the aunt sold the property in 2008 compared to the actual sale price in 2022. But Ranson, an expert in trust and estate litigation, “had no relevant real estate expertise,” Schopf said, finding that Ranson’s testimony was “entirely speculative” and failed to consider obvious facts, such as the pandemic’s impact on rental prices or trust expenses like real estate taxes.

Seemingly because Ranson didn’t have the relevant experience in real estate, he turned to Copilot to fill in the blanks and crunch the numbers. The move surprised Internet law expert Eric Goldman, who told Ars that “lawyers retain expert witnesses for their specialized expertise, and it doesn’t make any sense for an expert witness to essentially outsource that expertise to generative AI.”

“If the expert witness is simply asking a chatbot for a computation, then the lawyers could make that same request directly without relying on the expert witness (and paying the expert’s substantial fees),” Goldman suggested.

Perhaps the son’s legal team wasn’t aware of how big a role Copilot played. Schopf noted that Ranson couldn’t recall what prompts he used to arrive at his damages estimate. The expert witness also couldn’t recall any sources for the information he took from the chatbot and admitted that he lacked a basic understanding of how Copilot “works or how it arrives at a given output.”

Ars could not immediately reach Ranson for comment. But in Schopf’s order, the judge wrote that Ranson defended using Copilot as a common practice for expert witnesses like him today.

“Ranson was adamant in his testimony that the use of Copilot or other artificial intelligence tools, for drafting expert reports is generally accepted in the field of fiduciary services and represents the future of analysis of fiduciary decisions; however, he could not name any publications regarding its use or any other sources to confirm that it is a generally accepted methodology,” Schopf wrote.

Goldman noted that Ranson relying on Copilot for “what was essentially a numerical computation was especially puzzling because of generative AI’s known hallucinatory tendencies, which makes numerical computations untrustworthy.”

Because Ranson was so bad at explaining how Copilot works, Schopf took the extra time to try using Copilot to produce the same estimates Ranson got—and could not.

Each time, the court entered the same query into Copilot—”Can you calculate the value of $250,000 invested in the Vanguard Balanced Index Fund from December 31, 2004 through January 31, 2021?”—and each time Copilot generated a slightly different answer.

This “calls into question the reliability and accuracy of Copilot to generate evidence to be relied upon in a court proceeding,” Schopf wrote.
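By contrast, a deterministic calculation of this kind of figure returns the same answer on every run. Here is a minimal sketch; the 7 percent annual return is a hypothetical placeholder for illustration only, not the fund’s actual performance or a figure from the case:

```python
# Deterministic compound-growth calculation: identical inputs always yield
# identical output, unlike the varying answers Copilot gave the court.
def compound_value(principal: float, annual_return: float, years: float) -> float:
    """Future value of `principal` compounded annually at `annual_return`."""
    return principal * (1 + annual_return) ** years

if __name__ == "__main__":
    principal = 250_000.00
    hypothetical_return = 0.07      # assumed 7% per year, purely illustrative
    years = 16 + 1 / 12             # Dec 31, 2004 through Jan 31, 2021

    print(f"${compound_value(principal, hypothetical_return, years):,.2f}")
```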

Chatbot not to blame, judge says

While experimenting with Copilot, the court also probed the chatbot for answers to a more big-picture legal question: Are Copilot’s responses accurate enough to be cited in court?

The court found that Copilot had less faith in its outputs than Ranson seemingly did. When asked “are you accurate” or “reliable,” Copilot responded that “my accuracy is only as good as my sources, so for critical matters, it’s always wise to verify.” When more specifically asked, “Are your calculations reliable enough for use in court,” Copilot similarly recommended that outputs “should always be verified by experts and accompanied by professional evaluations before being used in court.”

Although it seemed clear that Ranson did not verify outputs before using them in court, Schopf noted that at least “developers of the Copilot program recognize the need for its supervision by a trained human operator to verify the accuracy of the submitted information as well as the output.”

Microsoft declined Ars’ request to comment.

Until a bright-line rule exists telling courts when to accept AI-generated testimony, Schopf suggested that courts should require disclosures from lawyers to stop chatbot-spouted inadmissible testimony from disrupting the legal system.

“The use of artificial intelligence is a rapidly growing reality across many industries,” Schopf wrote. “The mere fact that artificial intelligence has played a role, which continues to expand in our everyday lives, does not make the results generated by artificial intelligence admissible in Court.”

Ultimately, Schopf found that there was no breach of fiduciary duty, negating the need for Ranson’s Copilot-cribbed testimony on damages in the Bahamas property case. Schopf denied all of the son’s objections in their entirety (as well as any future claims) after calling out Ranson’s misuse of the chatbot at length.

But in his order, the judge suggested that Ranson seemed to get it all wrong before involving the chatbot.

“Whether or not he was retained and/or qualified as a damages expert in areas other than fiduciary duties, his testimony shows that he admittedly did not perform a full analysis of the problem, utilized an incorrect time period for damages, and failed to consider obvious elements into his calculations, all of which go against the weight and credibility of his opinion,” Schopf wrote.

Schopf noted that the evidence showed that rather than the son losing money from his aunt’s management of the trust—which Ranson’s cited chatbot’s outputs supposedly supported—the sale of the property in 2022 led to “no attributable loss of capital” and “in fact, it generated an overall profit to the Trust.”

Goldman suggested that employing Copilot didn’t even spare Ranson much effort—and it seemed to damage his credibility in court.

“It would not have been difficult for the expert to pull the necessary data directly from primary sources, so the process didn’t even save much time—but that shortcut came at the cost of the expert’s credibility,” Goldman told Ars.

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

DoNotPay has to pay $193K for falsely touting untested AI lawyer, FTC says

Among the first AI companies that the Federal Trade Commission has exposed as deceiving consumers is DoNotPay—which initially was advertised as “the world’s first robot lawyer” with the ability to “sue anyone with the click of a button.”

On Wednesday, the FTC announced that it took action to stop DoNotPay from making bogus claims after learning that the AI startup conducted no testing “to determine whether its AI chatbot’s output was equal to the level of a human lawyer.” DoNotPay also did not “hire or retain any attorneys” to help verify AI outputs or validate DoNotPay’s legal claims.

DoNotPay accepted no liability. But to settle the charges that DoNotPay violated the FTC Act, the AI startup agreed to pay $193,000, if the FTC’s consent agreement is confirmed following a 30-day public comment period. Additionally, DoNotPay agreed to warn “consumers who subscribed to the service between 2021 and 2023” about the “limitations of law-related features on the service,” the FTC said.

Moving forward, DoNotPay would also be prohibited under the settlement from making baseless claims that any of its features can be substituted for any professional service.

A DoNotPay spokesperson told Ars that the company “is pleased to have worked constructively with the FTC to settle this case and fully resolve these issues, without admitting liability.”

“The complaint relates to the usage of a few hundred customers some years ago (out of millions of people), with services that have long been discontinued,” DoNotPay’s spokesperson said.

The FTC’s settlement with DoNotPay is part of a larger agency effort to crack down on deceptive AI claims. Four other AI companies were hit with enforcement actions Wednesday, the FTC said, and FTC Chair Lina Khan confirmed that the agency’s so-called “Operation AI Comply” will continue monitoring companies’ attempts to “lure consumers into bogus schemes” or use AI tools to “turbocharge deception.”

“Using AI tools to trick, mislead, or defraud people is illegal,” Khan said. “The FTC’s enforcement actions make clear that there is no AI exemption from the laws on the books. By cracking down on unfair or deceptive practices in these markets, FTC is ensuring that honest businesses and innovators can get a fair shot and consumers are being protected.”

DoNotPay never tested robot lawyer

DoNotPay was initially released in 2015 as a free way to contest parking tickets. It quickly expanded its services to supposedly cover 200 areas of law—aiding with everything from breach of contract claims to restraining orders to insurance claims and divorce settlements.

As DoNotPay’s legal services expanded, the company defended its innovative approach to replacing lawyers while acknowledging that it was on seemingly shaky ground. In 2018, DoNotPay CEO Joshua Browder confirmed to the ABA Journal that the legal services were provided with “no lawyer oversight.” But he said that he was only “a bit worried” about threats to sue DoNotPay for unlicensed practice of law. Because DoNotPay was free, he expected he could avoid some legal challenges.

According to the FTC complaint, DoNotPay began charging subscribers $36 every two months in 2019 while making several false claims in ads to apparently drive up subscriptions.

OpenAI asked US to approve energy-guzzling 5GW data centers, report says

OpenAI stokes China fears to woo US approvals for huge data centers, report says.

OpenAI hopes to convince the White House to approve a sprawling plan that would place 5-gigawatt AI data centers in different US cities, Bloomberg reports.

The AI company’s CEO, Sam Altman, supposedly pitched the plan after a recent meeting with the Biden administration where stakeholders discussed AI infrastructure needs. Bloomberg reviewed an OpenAI document outlining the plan, reporting that 5 gigawatts “is roughly the equivalent of five nuclear reactors” and warning that each data center will likely require “more energy than is used to power an entire city or about 3 million homes.”

According to OpenAI, the US needs these massive data centers to expand AI capabilities domestically, protect national security, and effectively compete with China. If approved, the data centers would generate “thousands of new jobs,” OpenAI’s document promised, and help cement the US as an AI leader globally.

But the energy demand is so enormous that OpenAI told officials that the “US needs policies that support greater data center capacity,” or else the US could fall behind other countries in AI development, the document said.

Energy executives told Bloomberg that “powering even a single 5-gigawatt data center would be a challenge,” as power projects nationwide are already “facing delays due to long wait times to connect to grids, permitting delays, supply chain issues, and labor shortages.” Most likely, OpenAI’s data centers wouldn’t rely entirely on the grid, though, instead requiring a “mix of new wind and solar farms, battery storage and a connection to the grid,” John Ketchum, CEO of NextEra Energy Inc, told Bloomberg.

That’s a big problem for OpenAI, since one energy executive, Constellation Energy Corp. CEO Joe Dominguez, told Bloomberg that he’s heard that OpenAI wants to build five to seven data centers. “As an engineer,” Dominguez said, he doesn’t think OpenAI’s plan is “feasible,” and it would seemingly take too long to address the current national security risks as US-China tensions worsen.

OpenAI may be hoping to avoid delays and cut the line—if the White House approves the company’s ambitious data center plan. For now, a person familiar with OpenAI’s plan told Bloomberg that OpenAI is focused on launching a single data center before expanding the project to “various US cities.”

Bloomberg’s report comes after OpenAI’s chief investor, Microsoft, announced a 20-year deal with Constellation to reopen Pennsylvania’s shuttered Three Mile Island nuclear plant to provide a new energy source for data centers powering AI development and other technologies. But even if that deal is approved by regulators, the resulting energy supply that Microsoft could access—roughly 835 megawatts (0.835 gigawatts) of generation, enough to power approximately 800,000 homes—is still only about a sixth of OpenAI’s 5-gigawatt demand for each of its data centers.
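
As a back-of-the-envelope check on those figures (simple arithmetic on the numbers reported above, not additional reporting):

```python
# Comparing the reported Three Mile Island output with OpenAI's per-site ask.
three_mile_island_gw = 0.835   # ~835 megawatts from the Constellation deal
openai_per_site_gw = 5.0       # OpenAI's reported demand per data center

ratio = openai_per_site_gw / three_mile_island_gw
print(f"Each 5 GW site would need roughly {ratio:.1f}x that plant's output")  # ~6.0x
```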

Ketchum told Bloomberg that it’s easier to find a US site for a 1-gigawatt data center, but locating a site for a 5-gigawatt facility would likely be a bigger challenge. Notably, Amazon recently bought a $650 million nuclear-powered data center in Pennsylvania with a 2.5-gigawatt capacity. At the meeting with the Biden administration, OpenAI suggested opening large-scale data centers in Wisconsin, California, Texas, and Pennsylvania, a source familiar with the matter told CNBC.

During that meeting, the Biden administration confirmed that developing large-scale AI data centers is a priority, announcing “a new Task Force on AI Datacenter Infrastructure to coordinate policy across government.” OpenAI seems to be trying to get the task force’s attention early on, outlining in the document that Bloomberg reviewed the national security and economic benefits its data centers could provide for the US.

In a statement to Bloomberg, OpenAI’s spokesperson said that “OpenAI is actively working to strengthen AI infrastructure in the US, which we believe is critical to keeping America at the forefront of global innovation, boosting reindustrialization across the country, and making AI’s benefits accessible to everyone.”

Big Tech companies and AI startups will likely continue pressuring officials to approve data center expansions, as well as new kinds of nuclear reactors as the AI explosion globally continues. Goldman Sachs estimated that “data center power demand will grow 160 percent by 2030.” To ensure power supplies for its AI, according to the tech news site Freethink, Microsoft has even been training AI to draft all the documents needed for proposals to secure government approvals for nuclear plants to power AI data centers.
