

Appeals court blocks FCC’s efforts to bring back net neutrality rules

“The key here is not whether Broadband Internet Service Providers utilize telecommunications; it is instead whether they do so while offering to consumers the capability to do more,” Griffin wrote, concluding that “they do.”

“The FCC exceeded its statutory authority,” Griffin wrote, at one point accusing the FCC of arguing for a reading of the statute “that is too sweeping.”

The three-judge panel ordered a stay of the FCC’s order imposing net neutrality rules—known as the Safeguarding and Securing the Open Internet Order.

In a statement, FCC chair Jessica Rosenworcel suggested that Congress would likely be the only remaining path to safeguard net neutrality. Experts have noted in the Federal Register that net neutrality is critical to fostering new applications, services, and content, warning that without clear rules, the next Amazon or YouTube could be throttled before it can get off the ground.

“Consumers across the country have told us again and again that they want an Internet that is fast, open, and fair,” Rosenworcel said. “With this decision it is clear that Congress now needs to heed their call, take up the charge for net neutrality, and put open Internet principles in federal law.”

Rosenworcel will soon leave the FCC and will be replaced by Trump’s incoming FCC chair pick, Brendan Carr, who helped overturn net neutrality in 2017 and is expected to loosen broadband regulations once he’s confirmed.



China’s plan to dominate legacy chips globally sparks US probe

Under Joe Biden’s direction, the US Trade Representative (USTR) launched a probe Monday into China’s plans to globally dominate markets for legacy chips—alleging that China’s unfair trade practices threaten US national security and could thwart US efforts to build up a domestic semiconductor supply chain.

Unlike the most advanced chips used to power artificial intelligence that are currently in short supply, these legacy chips rely on older manufacturing processes and are more ubiquitous in mass-market products. They’re used in tech for cars, military vehicles, medical devices, smartphones, home appliances, space projects, and much more.

China apparently “plans to build more than 60 percent of the world’s new legacy chip capacity over the next decade,” and Commerce Secretary Gina Raimondo said evidence showed this was “discouraging investment elsewhere and constituted unfair competition,” Reuters reported.

Most buyers of common goods, including government agencies, don’t even realize they’re using Chinese chips, and the probe is meant to fix that by flagging everywhere Chinese chips are found in the US. Raimondo said she was “fairly alarmed” that research showed “two-thirds of US products using chips had Chinese legacy chips in them, and half of US companies did not know the origin of their chips including some in the defense industry.”

To deter harms from any of China’s alleged anticompetitive behavior, the USTR plans to spend a year investigating all of China’s acts, policies, and practices that could be helping China achieve global dominance in the foundational semiconductor market.

The agency will start by probing “China’s manufacturing of foundational semiconductors (also known as legacy or mature node semiconductors),” the press release said, “including to the extent that they are incorporated as components into downstream products for critical industries like defense, automotive, medical devices, aerospace, telecommunications, and power generation and the electrical grid.”

Additionally, the probe will assess China’s potential impact on “silicon carbide substrates (or other wafers used as inputs into semiconductor fabrication)” to ensure China isn’t burdening or restricting US commerce.

Some officials were frustrated that Biden didn’t launch the probe sooner, the Financial Times reported. It will ultimately be up to Donald Trump’s administration to complete the investigation, but Biden and Trump have long been aligned on US-China trade strategies, so Trump is not necessarily expected to meddle with the probe. Reuters noted that the probe could set Trump up to pursue his campaign promise of imposing a 60 percent tariff on all goods from China, but FT pointed out that Trump could also plan to use tariffs as a “bargaining chip” in his own trade negotiations.



OpenAI accused of trying to profit off AI model inspection in court


Experiencing some technical difficulties

How do you get an AI model to confess what’s inside? Credit: Aurich Lawson | Getty Images

Since ChatGPT became an instant hit roughly two years ago, tech companies around the world have rushed to release AI products while the public is still in awe of AI’s seemingly radical potential to enhance their daily lives.

But at the same time, governments globally have warned it can be hard to predict how rapidly popularizing AI can harm society. Novel uses could suddenly debut and displace workers, fuel disinformation, stifle competition, or threaten national security—and those are just some of the obvious potential harms.

While governments scramble to establish systems to detect harmful applications—ideally before AI models are deployed—some of the earliest lawsuits over ChatGPT show just how hard it is for the public to crack open an AI model and find evidence of harms once a model is released into the wild. That task is seemingly only made harder by an increasingly thirsty AI industry intent on shielding models from competitors to maximize profits from emerging capabilities.

The less the public knows, the seemingly harder and more expensive it is to hold companies accountable for irresponsible AI releases. This fall, ChatGPT-maker OpenAI was even accused of trying to profit off discovery by seeking to charge litigants retail prices to inspect AI models alleged as causing harms.

In the copyright lawsuit filed by The New York Times, OpenAI suggested the same model inspection protocol used in a similar lawsuit filed by book authors.

Under that protocol, the NYT could hire an expert to review highly confidential OpenAI technical materials “on a secure computer in a secured room without Internet access or network access to other computers at a secure location” of OpenAI’s choosing. In this closed-off arena, the expert would have limited time and limited queries to try to get the AI model to confess what’s inside.

The NYT seemingly had few concerns about the actual inspection process but balked at OpenAI’s proposed protocol, which capped the number of queries its expert could make through an application programming interface (API) at $15,000 worth of retail credits. Once litigants hit that cap, OpenAI suggested that the parties split the costs of remaining queries, charging the NYT and co-plaintiffs half-retail prices to finish the rest of their discovery.

In September, the NYT told the court that the parties had reached an “impasse” over this protocol, alleging that “OpenAI seeks to hide its infringement by professing an undue—yet unquantified—’expense.'” According to the NYT, plaintiffs would need $800,000 worth of retail credits to seek the evidence they need to prove their case, but there’s allegedly no way it would actually cost OpenAI that much.
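To put those figures in perspective, here is a rough, back-of-the-envelope sketch of the math the two sides are arguing about. It assumes, purely for illustration, that plaintiffs cover the initial credit cap and that every query beyond it is billed to them at half retail; the filings do not spell out the split in that much detail.

```python
# Illustrative arithmetic only, using the figures cited in the filings:
# a $15,000 cap of retail API credits, additional queries billed to
# plaintiffs at half-retail prices, and the NYT's estimate that full
# discovery would take $800,000 worth of retail credits.
RETAIL_CAP = 15_000            # initial cap, in retail dollars (assumed paid by plaintiffs)
TOTAL_RETAIL_NEEDED = 800_000  # NYT's estimate of retail credits needed
HALF_RETAIL_RATE = 0.5         # plaintiffs' share of queries beyond the cap

overage = max(TOTAL_RETAIL_NEEDED - RETAIL_CAP, 0)
plaintiff_cost = RETAIL_CAP + HALF_RETAIL_RATE * overage

print(f"Retail value of queries beyond the cap: ${overage:,.0f}")
print(f"Plaintiffs' cost under the proposed split: ${plaintiff_cost:,.0f}")
# Roughly $407,500 if the NYT's $800,000 estimate holds.
```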

“OpenAI has refused to state what its actual costs would be, and instead improperly focuses on what it charges its customers for retail services as part of its (for profit) business,” the NYT claimed in a court filing.

In its defense, OpenAI has said that setting the initial cap is necessary to reduce the burden on OpenAI and prevent a NYT fishing expedition. The ChatGPT maker alleged that plaintiffs “are requesting hundreds of thousands of dollars of credits to run an arbitrary and unsubstantiated—and likely unnecessary—number of searches on OpenAI’s models, all at OpenAI’s expense.”

How this court debate resolves could have implications for future cases where the public seeks to inspect models causing alleged harms. If a court agrees that OpenAI can charge retail prices for model inspection, that precedent could deter lawsuits from plaintiffs who can’t afford to pay an AI expert or commercial prices for model inspection.

Lucas Hansen, co-founder of CivAI—a company that seeks to enhance public awareness of what AI can actually do—told Ars that a lot of inspection can probably be done on public models. But public models are often fine-tuned, perhaps censoring certain queries and making it harder to find the information a model was trained on, which is exactly what the NYT’s suit seeks to uncover. By gaining API access to the original, underlying models instead, litigants could have an easier time finding evidence to prove alleged harms.
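For a sense of what that kind of API-level probing can look like, here is a minimal, hypothetical sketch of a memorization check: prompt a base model with the opening of an article and see whether it continues with the original text verbatim. The endpoint shape follows OpenAI’s public completions API, but the model name, article text, and overlap check are placeholders, not details from any court-ordered protocol.

```python
# Hypothetical memorization probe; not the actual inspection protocol.
import os
import requests

API_URL = "https://api.openai.com/v1/completions"  # assumed completions-style endpoint
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

def continuation(prompt: str, model: str = "base-model-placeholder") -> str:
    """Ask the model to continue a prompt deterministically (temperature 0)."""
    resp = requests.post(API_URL, headers=HEADERS, json={
        "model": model,
        "prompt": prompt,
        "max_tokens": 200,
        "temperature": 0,
    }, timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"]

article_opening = "The first few hundred words of a copyrighted article go here..."
known_next_passage = "...the passage that actually follows in the published article..."

output = continuation(article_opening)
# A long verbatim overlap with the real continuation suggests the article was
# in the training data; a paraphrase or refusal does not.
print(known_next_passage.strip()[:80] in output)
```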

It’s unclear exactly what it costs OpenAI to provide that level of access. Hansen told Ars that the cost of training and experimenting with models “dwarfs” the cost of running them to provide full capability solutions. Developers have noted in forums that the costs of API queries quickly add up, with one claiming OpenAI’s pricing is “killing the motivation to work with the APIs.”

The NYT’s lawyers and OpenAI declined to comment on the ongoing litigation.

US hurdles for AI safety testing

Of course, OpenAI is not the only AI company facing lawsuits over popular products. Artists have sued makers of image generators for allegedly threatening their livelihoods, and several chatbots have been accused of defamation. Other emerging harms include very visible examples—like explicit AI deepfakes, harming everyone from celebrities like Taylor Swift to middle schoolers—as well as underreported harms, like allegedly biased HR software.

A recent Gallup survey suggests that Americans are more trusting of AI than ever but still twice as likely to believe AI does “more harm than good” as to believe its benefits outweigh its harms. Hansen’s CivAI creates demos and interactive software for education campaigns that help the public understand firsthand the real dangers of AI. He told Ars that while it’s hard for outsiders to trust a study from “some random organization doing really technical work” to expose harms, CivAI provides a controlled way for people to see for themselves how AI systems can be misused.

“It’s easier for people to trust the results, because they can do it themselves,” Hansen told Ars.

Hansen also advises lawmakers grappling with AI risks. In February, CivAI joined the Artificial Intelligence Safety Institute Consortium—a group including Fortune 500 companies, government agencies, nonprofits, and academic research teams that helps advise the US AI Safety Institute (AISI). But so far, Hansen said, CivAI has not been very active in that consortium beyond scheduling a talk to share demos.

The AISI is supposed to protect the US from risky AI models by conducting safety testing to detect harms before models are deployed. Testing should “address risks to human rights, civil rights, and civil liberties, such as those related to privacy, discrimination and bias, freedom of expression, and the safety of individuals and groups,” President Joe Biden said in a national security memo last month, urging that safety testing was critical to support unrivaled AI innovation.

“For the United States to benefit maximally from AI, Americans must know when they can trust systems to perform safely and reliably,” Biden said.

But the AISI’s safety testing is voluntary, and while companies like OpenAI and Anthropic have agreed to the voluntary testing, not every company has. Hansen is worried that AISI is under-resourced and under-budgeted to achieve its broad goals of safeguarding America from untold AI harms.

“The AI Safety Institute predicted that they’ll need about $50 million in funding, and that was before the National Security memo, and it does not seem like they’re going to be getting that at all,” Hansen told Ars.

Biden had $50 million budgeted for AISI in 2025, but Donald Trump has threatened to dismantle Biden’s AI safety plan upon taking office.

The AISI was probably never going to be funded well enough to detect and deter all AI harms, but with its future unclear, even the limited safety testing the US had planned could be stalled at a time when the AI industry continues moving full speed ahead.

That could largely leave the public at the mercy of AI companies’ internal safety testing. As frontier models from big companies will likely remain under society’s microscope, OpenAI has promised to increase investments in safety testing and help establish industry-leading safety standards.

According to OpenAI, that effort includes making models safer over time and less prone to producing harmful outputs, even when users attempt jailbreaks. But OpenAI has a lot of work to do in that area, as Hansen told Ars that he has a “standard jailbreak” for OpenAI’s most popular release, ChatGPT, “that almost always works” to produce harmful outputs.

The AISI did not respond to Ars’ request for comment.

NYT “nowhere near done” inspecting OpenAI models

For the public, who often become guinea pigs when AI acts unpredictably, risks remain, as the NYT case suggests that the costs of fighting AI companies could go up while technical hiccups could delay resolutions. Last week, an OpenAI filing showed that NYT’s attempts to inspect pre-training data in a “very, very tightly controlled environment” like the one recommended for model inspection were allegedly continuously disrupted.

“The process has not gone smoothly, and they are running into a variety of obstacles to, and obstructions of, their review,” the court filing describing NYT’s position said. “These severe and repeated technical issues have made it impossible to effectively and efficiently search across OpenAI’s training datasets in order to ascertain the full scope of OpenAI’s infringement. In the first week of the inspection alone, Plaintiffs experienced nearly a dozen disruptions to the inspection environment, which resulted in many hours when News Plaintiffs had no access to the training datasets and no ability to run continuous searches.”

OpenAI was additionally accused of refusing to install software the litigants needed and randomly shutting down ongoing searches. Frustrated after more than 27 days of inspecting data and getting “nowhere near done,” the NYT keeps pushing the court to order OpenAI to provide the data instead. In response, OpenAI said plaintiffs’ concerns were either “resolved” or discussions remained “ongoing,” suggesting there was no need for the court to intervene.

So far, the NYT claims that it has found millions of plaintiffs’ works in the ChatGPT pre-training data but has been unable to confirm the full extent of the alleged infringement due to the technical difficulties. Meanwhile, costs keep accruing in every direction.
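The filings don’t describe the inspection tooling itself, but the basic task plaintiffs describe, finding verbatim copies of their articles inside massive training datasets, can be sketched roughly as follows. Everything here is a placeholder: the directory path, the window size, and the articles are illustrative assumptions, not details of the actual inspection environment.

```python
# Rough sketch of a verbatim-overlap search over a (hypothetical) corpus directory.
from pathlib import Path

WINDOW = 100  # characters of contiguous overlap treated as a "hit"

def fingerprints(text: str, window: int = WINDOW) -> set[str]:
    """Fixed-size text chunks used as crude fingerprints of an article."""
    text = " ".join(text.split())  # normalize whitespace
    return {text[i:i + window] for i in range(0, max(len(text) - window, 0), window)}

def scan_corpus(corpus_dir: str, articles: dict[str, str]) -> dict[str, list[str]]:
    """For each article, list the corpus files that contain one of its chunks verbatim."""
    prints = {title: fingerprints(body) for title, body in articles.items()}
    hits: dict[str, list[str]] = {title: [] for title in articles}
    for path in Path(corpus_dir).rglob("*.txt"):
        data = " ".join(path.read_text(errors="ignore").split())
        for title, fps in prints.items():
            if any(fp in data for fp in fps):
                hits[title].append(str(path))
    return hits

# Example usage with placeholder paths:
# results = scan_corpus("/mnt/training_data", {"Article A": open("article_a.txt").read()})
```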

“While News Plaintiffs continue to bear the burden and expense of examining the training datasets, their requests with respect to the inspection environment would be significantly reduced if OpenAI admitted that they trained their models on all, or the vast majority, of News Plaintiffs’ copyrighted content,” the court filing said.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Trump plans to dismantle Biden AI safeguards after victory

That’s not the only uncertainty at play. Just last week, House Speaker Mike Johnson—a staunch Trump supporter—said that Republicans “probably will” repeal the bipartisan CHIPS and Science Act, which is a Biden initiative to spur domestic semiconductor chip production, among other aims. Trump has previously spoken out against the bill. After getting some pushback on his comments from Democrats, Johnson said he would like to “streamline” the CHIPS Act instead, according to The Associated Press.

Then there’s the Elon Musk factor. The tech billionaire spent tens of millions through a political action committee supporting Trump’s campaign and has been angling for regulatory influence in the new administration. His AI company, xAI, which makes the Grok-2 language model, stands alongside his other ventures—Tesla, SpaceX, Starlink, Neuralink, and X (formerly Twitter)—as businesses that could see regulatory changes in his favor under a new administration.

What might take its place

If Trump strips away federal regulation of AI, state governments may step in to fill any federal regulatory gaps. For example, in March, Tennessee enacted protections against AI voice cloning, and in May, Colorado created a tiered system for AI deployment oversight. In September, California passed multiple AI safety bills, one requiring companies to publish details about their AI training methods and a contentious anti-deepfake bill aimed at protecting the likenesses of actors.

So far, it’s unclear what Trump’s policies on AI might represent besides “deregulate whenever possible.” During his campaign, Trump promised to support AI development centered on “free speech and human flourishing,” though he provided few specifics. He has called AI “very dangerous” and spoken about its high energy requirements.

Trump allies at the America First Policy Institute have previously stated they want to “Make America First in AI” with a new Trump executive order, which still only exists as a speculative draft, to reduce regulations on AI and promote a series of “Manhattan Projects” to advance military AI capabilities.

During his previous administration, Trump signed AI executive orders that focused on research institutes and directing federal agencies to prioritize AI development while mandating that federal agencies “protect civil liberties, privacy, and American values.”

But with the AI environment transformed by ChatGPT and media-reality-warping image synthesis models, those earlier orders likely don’t point the way to his future positions on the topic. For more details, we’ll have to wait and see what unfolds.



Google and Kairos sign nuclear reactor deal with aim to power AI

Google isn’t alone in eyeballing nuclear power as an energy source for massive data centers. In September, Ars reported on a plan from Microsoft to reopen the Three Mile Island nuclear power plant in Pennsylvania to fulfill some of its power needs. And the US administration is getting into the nuclear act as well, signing the bipartisan ADVANCE Act in July with the aim of jump-starting new nuclear power technology.

AI is driving demand for nuclear

In some ways, it would be an interesting twist if demand for training and running power-hungry AI models, which are often criticized as wasteful, ends up kick-starting a nuclear power renaissance that helps wean the US off fossil fuels and eventually reduces the impact of global climate change. These days, almost every Big Tech corporate position could be seen as an optics play designed to increase shareholder value, but this may be one of the rare times when the needs of giant corporations accidentally align with the needs of the planet.

Even from a cynical angle, the partnership between Google and Kairos Power represents a step toward the development of next-generation nuclear power as an ostensibly clean energy source (especially when compared to coal-fired power plants). As the world sees increasing energy demands, collaborations like this one, along with adopting solutions like solar and wind power, may play a key role in reducing greenhouse gas emissions.

Despite that potential upside, some experts are deeply skeptical of the Google-Kairos deal, suggesting that this recent rush to nuclear may result in Big Tech ownership of clean power generation. Dr. Sasha Luccioni, Climate and AI Lead at Hugging Face, wrote on X, “One step closer to a world of private nuclear power plants controlled by Big Tech to power the generative AI boom. Instead of rethinking the way we build and deploy these systems in the first place.”



Due to AI fakes, the “deep doubt” era is here


Given the flood of photorealistic AI-generated images washing over social media networks like X and Facebook these days, we’re seemingly entering a new age of media skepticism: the era of what I’m calling “deep doubt.” While questioning the authenticity of digital content stretches back decades—and analog media long before that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people’s existing skepticism toward online content from strangers may be reaching new heights.

Deep doubt is skepticism of real media that stems from the existence of generative AI. This manifests as broad public skepticism toward the veracity of media artifacts, which in turn leads to a notable consequence: People can now more credibly claim that real events did not happen and suggest that documentary evidence was fabricated using AI tools.

The concept behind “deep doubt” isn’t new, but its real-world impact is becoming increasingly apparent. Since the term “deepfake” first surfaced in 2017, we’ve seen a rapid evolution in AI-generated media capabilities. This has led to recent examples of deep doubt in action, such as conspiracy theorists claiming that President Joe Biden has been replaced by an AI-powered hologram and former President Donald Trump’s baseless accusation in August that Vice President Kamala Harris used AI to fake crowd sizes at her rallies. And on Friday, Trump cried “AI” again at a photo of him with E. Jean Carroll, a writer who successfully sued him for sexual assault, that contradicts his claim of never having met her.

Legal scholars Danielle K. Citron and Robert Chesney foresaw this trend years ago, coining the term “liar’s dividend” in 2019 to describe the consequence of deep doubt: deepfakes being weaponized by liars to discredit authentic evidence. But whereas deep doubt was once a hypothetical academic concept, it is now our reality.

The rise of deepfakes, the persistence of doubt

Doubt has been a political weapon since ancient times. This modern AI-fueled manifestation is just the latest evolution of a tactic where the seeds of uncertainty are sown to manipulate public opinion, undermine opponents, and hide the truth. AI is the newest refuge of liars.

Over the past decade, the rise of deep-learning technology has made it increasingly easy for people to craft false or modified pictures, audio, text, or video that appear to be non-synthesized organic media. Deepfakes were named after a Reddit user going by the name “deepfakes,” who shared AI-faked pornography on the service, swapping out the face of a performer with the face of someone else who wasn’t part of the original recording.

In the 20th century, one could argue that a certain part of our trust in media produced by others was a result of how expensive and time-consuming it was, and the skill it required, to produce documentary images and films. Even texts required a great deal of time and skill. As the deep doubt phenomenon grows, it will erode this 20th-century media sensibility. But it will also affect our political discourse, legal systems, and even our shared understanding of historical events, all of which rely on that media to function because we depend on others for information about the world. From photorealistic images to pitch-perfect voice clones, our perception of what we consider “truth” in media will need recalibration.

In April, a panel of federal judges highlighted the potential for AI-generated deepfakes to not only introduce fake evidence but also cast doubt on genuine evidence in court trials. The concern emerged during a meeting of the US Judicial Conference’s Advisory Committee on Evidence Rules, where the judges discussed the challenges of authenticating digital evidence in an era of increasingly sophisticated AI technology. Ultimately, the judges decided to postpone making any AI-related rule changes, but their meeting shows that the subject is already being considered by American judges.



Biden rushes to avert labor shortage with CHIPS act funding for workers

Less than one month to apply —

To dodge labor shortage, US finally aims CHIPS Act funding at training workers.

US President Joe Biden (C) speaks during a tour of the TSMC Semiconductor Manufacturing Facility in Phoenix, Arizona, on December 6, 2022.

In the hopes of dodging a significant projected worker shortage in the next few years, the Biden administration will finally start funding workforce development projects to support America’s ambitions to become the world’s leading chipmaker through historic CHIPS and Science Act investments.

The Workforce Partner Alliance (WFPA) will be established through the CHIPS Act’s first round of funding focused on workers, officials confirmed in a press release. The program is designed to “focus on closing workforce and skills gaps in the US for researchers, engineers, and technicians across semiconductor design, manufacturing, and production,” a program requirements page said.

Bloomberg reported that the US risks a technician shortage reaching 90,000 by 2030. This differs slightly from Natcast’s forecast, which found that out of “238,000 jobs the industry is projected to create by 2030,” the semiconductor industry “will be unable to fill more than 67,000.”

Whatever the industry demand will actually be, with a projected tens of thousands of jobs needing to be filled just as the country is hoping to produce more chips than ever, the Biden administration is hoping to quickly train enough workers to fill openings for “researchers, engineers, and technicians across semiconductor design, manufacturing, and production,” a WFPA site said.

To do this, a “wide range of workforce solution providers” are encouraged to submit “high-impact” WFPA project proposals that can be completed within two years, with total budgets of between $500,000 and $2 million per award, the press release said.

Examples of “evidence-based workforce development strategies and methodologies that may be considered for this program” include registered apprenticeship and pre-apprenticeship programs, colleges or universities offering semiconductor industry-relevant degrees, programs combining on-the-job training with effective education or mentorship, and “experiential learning opportunities such as co-ops, externships, internships, or capstone projects.” While programs supporting construction activities will not be considered, programs designed to “reduce barriers” to entry in the semiconductor industry can use funding to support workers’ training, such as for providing childcare or transportation for workers.

“Making investments in the US semiconductor workforce is an opportunity to serve underserved communities, to connect individuals to good-paying sustainable jobs across the country, and to develop a robust workforce ecosystem that supports an industry essential to the national and economic security of the US,” Natcast said.

Between four and 10 projects will be selected, providing opportunities for “established programs with a track record of success seeking to scale,” as well as for newer programs “that meet a previously unaddressed need, opportunity, or theory of change” to be launched or substantially expanded.

The deadline to apply for funding is July 26, which gives applicants less than one month to get their proposals together. Applicants must have a presence in the US but can include for-profit organizations, accredited education institutions, training programs, state and local government agencies, and nonprofit organizations, Natcast’s eligibility requirements said.

Natcast—the nonprofit entity created to operate the National Semiconductor Technology Center Consortium—will manage the WFPA. An FAQ will be provided soon, Natcast said, but in the meantime, the nonprofit is giving applicants a brief window to submit questions about the program. Curious applicants can send questions to [email protected] until 11:59 pm ET on July 9.

Awardees will be notified by early fall, Natcast said.

Planning the future of US chip workforce

In Natcast’s press release, Deirdre Hanford, Natcast’s CEO, said that the WFPA will “accelerate progress in the US semiconductor industry by tackling its most critical challenges, including the need for a highly skilled workforce that can meet the evolving demands of the industry.”

And the senior manager of Natcast’s workforce development programs, Michael Barnes, said that the WFPA will be critical to accelerating the industry’s growth in the US.

“It is imperative that we develop a domestic semiconductor workforce ecosystem that can support the industry’s anticipated growth and strengthen American national security, economic prosperity, and global competitiveness,” Barnes said.



Critics question tech-heavy lineup of new Homeland Security AI safety board

Adventures in 21st century regulation —

CEO-heavy board to tackle elusive AI safety concept and apply it to US infrastructure.


On Friday, the US Department of Homeland Security announced the formation of an Artificial Intelligence Safety and Security Board that consists of 22 members pulled from the tech industry, government, academia, and civil rights organizations. But given the nebulous nature of the term “AI,” which can apply to a broad spectrum of computer technology, it’s unclear if this group will even be able to agree on what exactly they are safeguarding us from.

President Biden directed DHS Secretary Alejandro Mayorkas to establish the board, which will meet for the first time in early May and subsequently on a quarterly basis.

The fundamental assumption posed by the board’s existence, and reflected in Biden’s AI executive order from October, is that AI is an inherently risky technology and that American citizens and businesses need to be protected from its misuse. Along those lines, the goal of the group is to help guard against foreign adversaries using AI to disrupt US infrastructure; develop recommendations to ensure the safe adoption of AI tech into transportation, energy, and Internet services; foster cross-sector collaboration between government and businesses; and create a forum for AI leaders to share information on AI security risks with the DHS.

It’s worth noting that the ill-defined nature of the term “Artificial Intelligence” does the new board no favors regarding scope and focus. AI can mean many different things: It can power a chatbot, fly an airplane, control the ghosts in Pac-Man, regulate the temperature of a nuclear reactor, or play a great game of chess. It can be all those things and more, and since many of those applications of AI work very differently, there’s no guarantee any two people on the board will be thinking about the same type of AI.

This confusion is reflected in the quotes provided by the DHS press release from new board members, some of whom are already talking about different types of AI. While OpenAI, Microsoft, and Anthropic are monetizing generative AI systems like ChatGPT based on large language models (LLMs), Ed Bastian, the CEO of Delta Air Lines, refers to entirely different classes of machine learning when he says, “By driving innovative tools like crew resourcing and turbulence prediction, AI is already making significant contributions to the reliability of our nation’s air travel system.”

So, defining the scope of what AI exactly means—and which applications of AI are new or dangerous—might be one of the key challenges for the new board.

A roundtable of Big Tech CEOs attracts criticism

For the inaugural meeting of the AI Safety and Security Board, the DHS selected a tech industry-heavy group, populated with CEOs of four major AI vendors (Sam Altman of OpenAI, Satya Nadella of Microsoft, Sundar Pichai of Alphabet, and Dario Amodei of Anthropic), CEO Jensen Huang of top AI chipmaker Nvidia, and representatives from other major tech companies like IBM, Adobe, Amazon, Cisco, and AMD. There are also reps from big aerospace and aviation: Northrop Grumman and Delta Air Lines.

Upon reading the announcement, some critics took issue with the board composition. On LinkedIn, founder of The Distributed AI Research Institute (DAIR) Timnit Gebru especially criticized OpenAI’s presence on the board and wrote, “I’ve now seen the full list and it is hilarious. Foxes guarding the hen house is an understatement.”



Bill that could ban TikTok passes in House despite constitutional concerns


On Wednesday, the US House of Representatives passed a bill with a vote of 352–65 that could block TikTok in the US. Fifteen Republicans and 50 Democrats voted in opposition, and one Democrat voted present, CNN reported.

TikTok is not happy. A spokesperson told Ars, “This process was secret and the bill was jammed through for one reason: it’s a ban. We are hopeful that the Senate will consider the facts, listen to their constituents, and realize the impact on the economy, 7 million small businesses, and the 170 million Americans who use our service.”

Lawmakers insist that the Protecting Americans from Foreign Adversary Controlled Applications Act is not a ban. Instead, they claim the law gives TikTok a choice: either divest from ByteDance’s China-based owners or face the consequences of TikTok being cut off in the US.

Under the law—which still must pass the Senate, a more significant hurdle, where less consensus is expected and a companion bill has not yet been introduced—app stores and hosting services would face steep consequences if they provide access to apps controlled by US foreign rivals. That includes allowing the app to be updated or maintained for US users who already have it on their devices.

Violations subject app stores and hosting services to fines of $5,000 for each individual US user “determined to have accessed, maintained, or updated a foreign adversary-controlled application.” With 170 million Americans currently on TikTok, that could add up quickly to eye-popping fines.
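For a sense of scale, the figures reported above imply an enormous theoretical ceiling; the quick arithmetic below is purely illustrative and not a prediction of any actual penalty.

```python
# Quick arithmetic on the reported figures: a $5,000 fine per US user and
# roughly 170 million American TikTok users. Theoretical ceiling only.
FINE_PER_USER = 5_000
US_USERS = 170_000_000

max_exposure = FINE_PER_USER * US_USERS
print(f"Theoretical maximum exposure: ${max_exposure:,}")  # $850,000,000,000
```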

If the bill becomes law, app stores and hosting services would have 180 days to limit access to foreign adversary-controlled apps. The bill specifically names TikTok and ByteDance as restricted apps, making it clear that lawmakers intend to quash the alleged “national security threat” that TikTok poses in the US.

House Energy and Commerce Committee Chair Cathy McMorris Rodgers (R-Wash.), a proponent of the bill, has said that “foreign adversaries like China pose the greatest national security threat of our time. With applications like TikTok, these countries are able to target, surveil, and manipulate Americans.” The proposed bill “ends this practice by banning applications controlled by foreign adversaries of the United States that pose a clear national security risk.”

McMorris Rodgers has also made it clear that “our goal is to get this legislation onto the president’s desk.” Joe Biden has indicated he will sign the bill into law, leaving the Senate as the final hurdle to clear. Senators told CNN that they were waiting to see what happened in the House before seeking a path forward in the Senate that would respect TikTok users’ civil liberties.

Attempts to ban TikTok have historically not fared well in the US, with a recent ban in Montana being blocked by a federal judge last December. Judge Donald Molloy granted TikTok’s request for a preliminary injunction, denouncing Montana’s ban as an unconstitutional infringement of Montana-based TikTok users’ rights.

More recently, the American Civil Liberties Union (ACLU) has slammed House lawmakers for rushing the bill through Congress, accusing lawmakers of attempting to stifle free speech. ACLU senior policy counsel Jenna Leventoff said in a press release that lawmakers were “once again attempting to trade our First Amendment rights for cheap political points during an election year.”

“Just because the bill sponsors claim that banning TikTok isn’t about suppressing speech, there’s no denying that it would do just that,” Leventoff said.



US funds $5B chip effort after lagging on semiconductor innovation

Now hiring? —

US had failed to fund the “science half” of CHIPS and Science Act, critic said.

US President Joe Biden speaks before signing the CHIPS and Science Act of 2022.

The Biden administration announced investments Friday totaling more than $5 billion in semiconductor research and development intended to re-establish the US as a global leader in manufacturing the “next generation of semiconductor technologies.”

Through sizeable investments, the US will “advance US leadership in semiconductor R&D, cut down on the time and cost of commercializing new technologies, bolster US national security, and connect and support workers in securing good semiconductor jobs,” a White House press release said.

Currently, the US produces “less than 10 percent” of the global chips supply and “none of the most advanced chips,” the White House said. But investing in programs like the National Semiconductor Technology Center (NSTC)—considered the “centerpiece” of the CHIPS and Science Act’s four R&D programs—and training a talented workforce could significantly increase US production of semiconductors that the Biden administration described as the “backbone of the modern economy.”

The White House projected that the NSTC’s workforce activities would launch in the summer of 2024. The Center’s prime directive will be developing new semiconductor technologies by “supporting design, prototyping, and piloting and through ensuring innovators have access to critical capabilities.”

Moving forward, the NSTC will operate as a public-private consortium, involving both government and private sector institutions, the White House confirmed. It will be run by a recently established nonprofit called the National Center for the Advancement of Semiconductor Technology (Natcast), which will coordinate with the secretaries of Commerce, Defense, and Energy, as well as the National Science Foundation’s director. Any additional stakeholders can provide input on the NSTC’s goals by joining the NSTC Community of Interest at no cost.

The National Institute of Standards and Technology (NIST) has explained why achieving the NSTC’s mission to develop cutting-edge semiconductor technology in the US will not be easy:

The smallest dimensions of leading-edge semiconductor devices have reached the atomic scale and the complexity of the circuit architecture is increasing exponentially with the use of three-dimensional structures, the incorporation of new materials, and improvements in the thousands of process steps needed to make advanced chips. Into the future, as new applications demand higher-performance semiconductors, their design and production will become even more complex. This complexity makes it increasingly difficult and costly to implement innovations because of the dependencies between design and manufacturing, between manufacturing steps, and between front-end and back-end processes.

The complexity of keeping up with semiconductor tech is why it’s critical for the US to create clear pathways for skilled workers to break into this burgeoning industry. The Biden administration said it plans to invest “at least hundreds of millions of dollars in the NSTC’s workforce efforts,” creating a Workforce Center of Excellence with locations throughout the US and piloting new training programs, including initiatives engaging underserved communities. The Workforce Center will start by surveying best practices in semiconductor education programs, then establish a baseline program to attract workers seeking dependable paths to break into the industry.

Last year, the Semiconductor Industry Association (SIA) released a study showing that the US was not adequately preparing a highly skilled workforce. SIA estimated that “67,000, or 58 percent, of projected new jobs, may remain unfulfilled at the current trajectory.”

A skilled workforce is just part of the equation, though. The US also needs facilities where workers can experiment with new technologies without breaking the bank. To that end, the Department of Commerce announced it would be investing “at least $200 million” in a first-of-its-kind CHIPS Manufacturing USA Institute. That institute will “allow innovators to replicate and experiment with physical manufacturing processes at low cost.”

Other Commerce Department investments announced include “up to $300 million” for advanced packaging R&D necessary for discovering new applications for semiconductor technologies and over $100 million in funding for dozens of projects to help inventors “more easily scale innovations into commercial products.”

A Commerce Department spokesperson told Ars that “the location of the NSTC headquarters has not yet been determined” but will “directly support the NSTC research strategy and give engineers, academics, researchers, engineers at startups, small and large companies, and workforce developers the capabilities they need to innovate.” In 2024, NSTC’s efforts to kick off research appear modest, with the center expecting to prioritize engaging community members and stakeholders, launching workforce programs, and identifying early start research programs.

So far, Biden’s efforts to ramp up semiconductor manufacturing in the US have not gone smoothly. Earlier this year, TSMC predicted further delays at chips plants under construction in Arizona and confirmed that the second plant would not be able to manufacture the most advanced chips, as previously expected.

That news followed criticism from private entities last year. In November, Nvidia CEO Jensen Huang predicted that the US was “somewhere between a decade and two decades away” from semiconductor supply chain independence. The US Chamber of Commerce said last August that the US remained so far behind because it had failed to prioritize funding for the “science half” of the CHIPS and Science Act.

In 2024, the Biden administration appears to be attempting to finally start funding a promised $11 billion total in research and development efforts. Once NSTC kicks off research, the pressure will be on to chase the Center’s highest ambition of turning the US into a consistent birthplace of life-changing semiconductor technologies once again.



Facebook rules allowing fake Biden “pedophile” video deemed “incoherent”

Not to be misled —

Meta may revise AI policies that experts say overlook “more misleading” content.


A fake video manipulated to falsely depict President Joe Biden inappropriately touching his granddaughter has revealed flaws in Facebook’s “deepfake” policies, Meta’s Oversight Board concluded Monday.

Last year when the Biden video went viral, Facebook repeatedly ruled that it did not violate policies on hate speech, manipulated media, or bullying and harassment. Since the Biden video is not AI-generated content and does not manipulate the president’s speech—making him appear to say things he’s never said—the video was deemed OK to remain on the platform. Meta also noted that the video was “unlikely to mislead” the “average viewer.”

“The video does not depict President Biden saying something he did not say, and the video is not the product of artificial intelligence or machine learning in a way that merges, combines, replaces, or superimposes content onto the video (the video was merely edited to remove certain portions),” Meta’s blog said.

The Oversight Board—an independent panel of experts—reviewed the case and ultimately upheld Meta’s decision despite being “skeptical” that current policies work to reduce harms.

“The board sees little sense in the choice to limit the Manipulated Media policy to cover only people saying things they did not say, while excluding content showing people doing things they did not do,” the board said, noting that Meta claimed this distinction was made because “videos involving speech were considered the most misleading and easiest to reliably detect.”

The board called upon Meta to revise its “incoherent” policies that it said appear to be more concerned with regulating how content is created, rather than with preventing harms. For example, the Biden video’s caption described the president as a “sick pedophile” and called out anyone who would vote for him as “mentally unwell,” which could affect “electoral processes” that Meta could choose to protect, the board suggested.

“Meta should reconsider this policy quickly, given the number of elections in 2024,” the Oversight Board said.

One problem, the Oversight Board suggested, is that in its rush to combat AI technologies that make generating deepfakes a fast, cheap, and easy business, Meta policies currently overlook less technical ways of manipulating content.

Instead of using AI, the Biden video relied on basic video-editing technology to edit out the president placing an “I Voted” sticker on his adult granddaughter’s chest. The crude edit looped a 7-second clip altered to make the president appear to be, as Meta described in its blog, “inappropriately touching a young woman’s chest and kissing her on the cheek.”

Meta making this distinction is confusing, the board said, partly because videos altered using non-AI technologies are not considered less misleading or less prevalent on Facebook.

The board recommended that Meta update policies to cover not just AI-generated videos, but other forms of manipulated media, including all forms of manipulated video and audio. Audio fakes currently not covered in the policy, the board warned, offer fewer cues to alert listeners to the inauthenticity of recordings and may even be considered “more misleading than video content.”

Notably, earlier this year, a fake Biden robocall attempted to mislead Democratic voters in New Hampshire by encouraging them not to vote. The Federal Communications Commission promptly responded by declaring AI-generated robocalls illegal, but the Federal Election Commission was not able to act as swiftly to regulate AI-generated misleading campaign ads easily spread on social media, AP reported. In a statement, Oversight Board Co-Chair Michael McConnell said that manipulated audio is “one of the most potent forms of electoral disinformation.”

To better combat known harms, the board suggested that Meta revise its Manipulated Media policy to “clearly specify the harms it is seeking to prevent.”

Rather than pushing Meta to remove more content, however, the board urged Meta to use “less restrictive” methods of coping with fake content, such as relying on fact-checkers applying labels noting that content is “significantly altered.” In public comments, some Facebook users agreed that labels would be most effective. Others urged Meta to “start cracking down” and remove all fake videos, with one suggesting that removing the Biden video should have been a “deeply easy call.” Another commenter suggested that the Biden video should be considered acceptable speech, as harmless as a funny meme.

While the board wants Meta to also expand its policies to cover all forms of manipulated audio and video, it cautioned that including manipulated photos in the policy could “significantly expand” the policy’s scope and make it harder to enforce.

“If Meta sought to label videos, audio, and photographs but only captured a small portion, this could create a false impression that non-labeled content is inherently trustworthy,” the board warned.

Meta should therefore stop short of adding manipulated images to the policy, the board said. Instead, Meta should conduct research into the effects of manipulated photos and then consider updates when the company is prepared to enforce a ban on manipulated photos at scale, the board recommended. In the meantime, Meta should move quickly to update policies ahead of a busy election year where experts and politicians globally are bracing for waves of misinformation online.

“The volume of misleading content is rising, and the quality of tools to create it is rapidly increasing,” McConnell said. “Platforms must keep pace with these changes, especially in light of global elections during which certain actors seek to mislead the public.”

Meta’s spokesperson told Ars that Meta is “reviewing the Oversight Board’s guidance and will respond publicly to their recommendations within 60 days.”
