Major government research lab appears to be squeezing out foreign scientists

One of the US government’s top scientific research labs is taking steps that could drive away foreign scientists, a shift that lawmakers and sources tell WIRED could cost the country valuable expertise and damage the agency’s credibility.

The National Institute of Standards and Technology (NIST) helps determine the frameworks underpinning everything from cybersecurity to semiconductor manufacturing. Some of NIST’s recent work includes establishing guidelines for securing AI systems and identifying health concerns with air purifiers and firefighting gloves. Many of the agency’s thousands of employees, postdoctoral scientists, contractors, and guest researchers are brought in from around the world for their specialized expertise.

“For weeks now, rumors of draconian new measures have been spreading like wildfire, while my staff’s inquiries to NIST have gone unanswered,” Zoe Lofgren, the top Democrat on the House Committee on Science, Space, and Technology, wrote in a letter sent to acting NIST Director Craig Burkhardt on Thursday. April McClain Delaney, a fellow Democrat on the committee, cosigned the message.

Lofgren wrote that while her staff has heard about multiple rumored changes, what they have confirmed through unnamed sources is that the Trump administration “has begun taking steps to limit the ability of foreign-born researchers to conduct their work at NIST.”

The congressional letter follows a Boulder Reporting Lab article on February 12 that said international graduate students and postdoctoral researchers would be limited to a maximum of three years at NIST going forward, despite many of them needing five to seven years to complete their work.

A NIST employee tells WIRED that some plans to bring on foreign workers through the agency’s Professional Research and Experience Program have recently been canceled because of uncertainty about whether they would make it through the new security protocols. The staffer, who spoke on the condition of anonymity because they were not authorized to speak to the media, says the agency has yet to widely communicate what the new hurdles will be or why it believes they are justified.

On Thursday, the Colorado Sun reported that “noncitizens” lost after-hours access to a NIST lab last month and could soon be banned from the facility entirely.

Jennifer Huergo, a spokesperson for NIST, tells WIRED that the proposed changes are aimed at protecting US science from theft and abuse, echoing a similar statement issued this week to other media outlets. Huergo declined to comment on who needs to approve the proposal for it to be finalized and when a decision will be made. She also didn’t immediately respond to a request for comment on the lawmakers’ letter.

Preventing foreign adversaries from stealing valuable American intellectual property has been a bipartisan priority, with NIST among the agencies in recent years to receive congressional scrutiny about the adequacy of its background checks and security policies. Just last month, Republican lawmakers renewed calls to put restrictions in place preventing Chinese nationals from working at or with national labs run by the Department of Energy.

But Lofgren’s letter contends that the rumored restrictions on non-US scientists at NIST go beyond “what is reasonable and appropriate to protect research security.” The letter demands transparency about new policies by February 26 and a pause on them “until Congress can weigh in on whether these changes are necessary at all.”

The potential loss of research talent at NIST would add to a series of other Trump administration policies that some US tech industry leaders have warned will dismantle the lives of immigrant researchers already living in the US and hamper economic growth. Hiking fees on H-1B tech visas, revoking thousands of student visas, and carrying out legally dubious mass deportations all stand to push people eager to work on science and tech research in the US to go elsewhere instead. The Trump administration has also announced plans to limit post-graduation job training for international students.

Pat Gallagher, who served as the director of NIST from 2009 to 2013 under President Barack Obama, says the changes could erode trust in the agency, which has long provided the technical foundations that industry and governments around the world rely on. “What has made NIST special is it is scientifically credible,” he tells WIRED. “Industry, universities, and the global measurement community knew they could work with NIST.”

Like much of the federal government, NIST has been in turmoil for most of the past year. Parts of it were paralyzed for months as rumors of DOGE cuts spread. Ultimately, the agency lost hundreds of its several thousand workers to budget cuts, with further funding pressure to come.

As of a couple of years ago, NIST welcomed 800 researchers on average annually from outside the US to work in its offices and collaborate directly with staff.

Lofgren expressed fear that rumors may be enough to scare away researchers and undermine US competitiveness in vital research. “Our scientific excellence depends upon attracting the best and brightest from around the world,” she wrote in the letter.

This story originally appeared on wired.com.

White House unveils sweeping plan to “win” global AI race through deregulation

Trump’s plan was not welcomed by everyone. J.B. Branch, Big Tech accountability advocate for Public Citizen, in a statement provided to Ars, criticized Trump as giving “sweetheart deals” to tech companies that would cause “electricity bills to rise to subsidize discounted power for massive AI data centers.”

Infrastructure demands and energy requirements

Trump’s new AI plan tackles infrastructure head-on, stating that “AI is the first digital service in modern life that challenges America to build vastly greater energy generation than we have today.” To meet this demand, it proposes streamlining environmental permitting for data centers through new National Environmental Policy Act (NEPA) exemptions, making federal lands available for construction and modernizing the power grid—all while explicitly rejecting “radical climate dogma and bureaucratic red tape.”

The document embraces what it calls a “Build, Baby, Build!” approach—echoing a Trump campaign slogan—and promises to restore semiconductor manufacturing through the CHIPS Program Office, though stripped of “extraneous policy requirements.”

On the technology front, the plan directs Commerce to revise NIST’s AI Risk Management Framework to “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.” Federal procurement would favor AI developers whose systems are “objective and free from top-down ideological bias.” The document strongly backs open source AI models and calls for exporting American AI technology to allies while blocking administration-labeled adversaries like China.

Security proposals include high-security military data centers and warnings that advanced AI systems “may pose novel national security risks” in cyberattacks and weapons development.

Critics respond with “People’s AI Action Plan”

Before the White House unveiled its plan, more than 90 organizations launched a competing “People’s AI Action Plan” on Tuesday, characterizing the Trump administration’s approach as “a massive handout to the tech industry” that prioritizes corporate interests over public welfare. The coalition includes labor unions, environmental justice groups, and consumer protection nonprofits.

Windows 11’s most important new feature is post-quantum cryptography. Here’s why.

Microsoft is updating Windows 11 with a set of new encryption algorithms that can withstand future attacks from quantum computers in a move aimed at jump-starting what’s likely to be the most formidable and important technology transition in modern history.

Computers that are based on the physics of quantum mechanics don’t yet exist outside of sophisticated labs, but it’s well-established science that they eventually will. Instead of processing data in the binary state of zeros and ones, quantum computers run on qubits, which encompass myriad states all at once. This new capability promises to bring about new discoveries of unprecedented scale in a host of fields, including metallurgy, chemistry, drug discovery, and financial modeling.
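A rough way to see the scale involved: describing the state of an n-qubit register takes exponentially more numbers than describing n classical bits, which is where the promised speedups come from. A minimal sketch of that growth (the comparison to atoms in the observable universe uses the common ~10^80 estimate):

```python
# n classical bits hold one n-bit value at a time; an n-qubit register's
# state is described by 2**n complex amplitudes.
def amplitudes_needed(n_qubits: int) -> int:
    """Number of complex amplitudes describing an n-qubit state."""
    return 2 ** n_qubits

for n in (10, 50, 300):
    print(n, amplitudes_needed(n))

# 300 qubits already require more amplitudes (~2e90) than the estimated
# number of atoms in the observable universe (~1e80).
```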

Averting the cryptopocalypse

One of the most disruptive changes quantum computing will bring is the breaking of some of the most common forms of encryption, specifically, the RSA cryptosystem and those based on elliptic curves. These systems are the workhorses that banks, governments, and online services around the world have relied on for more than four decades to keep their most sensitive data confidential. RSA and elliptic curve encryption keys securing web connections would require millions of years to be cracked using today’s computers. A quantum computer could crack the same keys in a matter of hours or minutes.
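The threat to RSA comes from Shor’s algorithm, which reduces factoring to finding the period of modular exponentiation. The period-finding step is exponentially slow on classical hardware but polynomial-time on a quantum computer. A toy classical sketch of the reduction, using the textbook example N = 15 (the function and variable names are illustrative, not from any real library):

```python
# Toy illustration of Shor's algorithm on N = 15. The quantum speedup
# lives entirely in find_period(); everything else is classical.
from math import gcd

def find_period(a: int, n: int) -> int:
    """Smallest r > 0 with a**r ≡ 1 (mod n), found by brute force.
    This is the step a quantum computer does in polynomial time."""
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def shor_classical(n: int, a: int) -> tuple[int, int]:
    """Recover a nontrivial factor pair of n from the period of a."""
    assert gcd(a, n) == 1, "a must be coprime to n"
    r = find_period(a, n)
    assert r % 2 == 0, "need an even period; retry with another a"
    x = pow(a, r // 2, n)
    return gcd(x - 1, n), gcd(x + 1, n)

print(shor_classical(15, 7))  # period of 7 mod 15 is 4 → factors (3, 5)
```

For the 2,048-bit moduli used in practice, the brute-force loop above is hopeless, which is exactly why RSA is safe today and not against a large quantum computer.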

At Microsoft’s BUILD 2025 conference on Monday, the company announced the addition of quantum-resistant algorithms to SymCrypt, the core cryptographic code library in Windows. The updated library is available in Build 27852 and higher versions of Windows 11. Additionally, Microsoft has updated SymCrypt-OpenSSL, its open source project that allows the widely used OpenSSL library to use SymCrypt for cryptographic operations.
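During the transition, deployments typically use hybrid schemes: a classical key exchange is combined with a post-quantum KEM, so the session stays secure as long as either one holds. A minimal sketch of the combining step using an HKDF built from Python's stdlib; the two input secrets here are random stand-ins for real X25519 and ML-KEM-768 outputs, not an actual handshake:

```python
import hashlib
import hmac
import os

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract (RFC 5869) with SHA-256."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    """HKDF-Expand (RFC 5869) with SHA-256."""
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Stand-ins for the two shared secrets a hybrid handshake produces.
ecdh_secret = os.urandom(32)  # classical share (e.g., X25519)
pq_secret = os.urandom(32)    # post-quantum KEM share (e.g., ML-KEM-768)

# Concatenate both secrets and derive one session key: an attacker must
# break BOTH the classical and post-quantum parts to recover it.
session_key = hkdf_expand(
    hkdf_extract(salt=b"", ikm=ecdh_secret + pq_secret),
    info=b"hybrid handshake key",
)
print(len(session_key))  # 32
```

This mirrors the design of hybrid TLS key exchange, where the concatenated secrets feed the protocol's key schedule.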

Feds to get early access to OpenAI, Anthropic AI to test for doomsday scenarios

AI companies agreed that ensuring AI safety was key to innovation.

OpenAI and Anthropic have each signed unprecedented deals granting the US government early access to conduct safety testing on the companies’ flashiest new AI models before they’re released to the public.

According to a press release from the National Institute of Standards and Technology (NIST), the deal creates a “formal collaboration on AI safety research, testing, and evaluation with both Anthropic and OpenAI” and the US Artificial Intelligence Safety Institute.

Through the deal, the US AI Safety Institute will “receive access to major new models from each company prior to and following their public release.” This will ensure that public safety won’t depend exclusively on how the companies “evaluate capabilities and safety risks, as well as methods to mitigate those risks,” NIST said, but also on collaborative research with the US government.

The US AI Safety Institute will also be collaborating with the UK AI Safety Institute when examining models to flag potential safety risks. Both groups will provide feedback to OpenAI and Anthropic “on potential safety improvements to their models.”

NIST said that the agreements also build on voluntary AI safety commitments that AI companies made to the Biden administration to evaluate models to detect risks.

Elizabeth Kelly, director of the US AI Safety Institute, called the agreements “an important milestone” to “help responsibly steward the future of AI.”

Anthropic co-founder: AI safety “crucial” to innovation

The announcement comes as California is poised to pass one of the country’s first AI safety bills, which will regulate how AI is developed and deployed in the state.

Among the most controversial aspects of the bill is a requirement that AI companies build in a “kill switch” to stop models from introducing “novel threats to public safety and security,” especially if the model is acting “with limited human oversight, intervention, or supervision.”

Critics say the bill overlooks existing safety risks from AI—like deepfakes and election misinformation—to prioritize prevention of doomsday scenarios and could stifle AI innovation while providing little security today. They’ve urged California’s governor, Gavin Newsom, to veto the bill if it arrives at his desk, but it’s still unclear if Newsom intends to sign.

Anthropic was one of the AI companies that cautiously supported California’s controversial AI bill, Reuters reported, claiming that the potential benefits of the regulations likely outweigh the costs after a late round of amendments.

The company’s CEO, Dario Amodei, told Newsom why Anthropic supports the bill now in a letter last week, Reuters reported. He wrote that although Anthropic isn’t certain about aspects of the bill that “seem concerning or ambiguous,” Anthropic’s “initial concerns about the bill potentially hindering innovation due to the rapidly evolving nature of the field have been greatly reduced” by recent changes to the bill.

OpenAI has notably joined critics opposing California’s AI safety bill and has been called out by whistleblowers for lobbying against it.

In a letter to the bill’s co-sponsor, California Senator Scott Wiener, OpenAI’s chief strategy officer, Jason Kwon, suggested that “the federal government should lead in regulating frontier AI models to account for implications to national security and competitiveness.”

The ChatGPT maker striking a deal with the US AI Safety Institute seems in line with that thinking. As Kwon told Reuters, “We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on.”

While some critics worry California’s AI safety bill will hamper innovation, Anthropic’s co-founder, Jack Clark, told Reuters today that “safe, trustworthy AI is crucial for the technology’s positive impact.” He confirmed that Anthropic’s “collaboration with the US AI Safety Institute” will leverage the government’s “wide expertise to rigorously test” Anthropic’s models “before widespread deployment.”

In NIST’s press release, Kelly agreed that “safety is essential to fueling breakthrough technological innovation.”

By directly collaborating with OpenAI and Anthropic, the US AI Safety Institute also plans to conduct its own research to help “advance the science of AI safety,” Kelly said.

US agency tasked with curbing risks of AI lacks funding to do the job

Lawmakers fear NIST will have to rely on companies developing the technology.

US President Joe Biden’s plan for containing the dangers of artificial intelligence already risks being derailed by congressional bean counters.

A White House executive order on AI announced in October calls on the US to develop new standards for stress-testing AI systems to uncover their biases, hidden threats, and rogue tendencies. But the agency tasked with setting these standards, the National Institute of Standards and Technology (NIST), lacks the budget needed to complete that work independently by the July 26, 2024, deadline, according to several people with knowledge of the work.

Speaking at the NeurIPS AI conference in New Orleans last week, Elham Tabassi, associate director for emerging technologies at NIST, described this as “an almost impossible deadline” for the agency.

Some members of Congress have grown concerned that NIST will be forced to rely heavily on AI expertise from private companies that, due to their own AI projects, have a vested interest in shaping standards.

The US government has already tapped NIST to help regulate AI. In January 2023 the agency released an AI risk management framework to guide business and government. NIST has also devised ways to measure public trust in new AI tools. But the agency, which standardizes everything from food ingredients to radioactive materials and atomic clocks, has puny resources compared to those of the companies on the forefront of AI. OpenAI, Google, and Meta each likely spent upwards of $100 million to train the powerful language models that undergird applications such as ChatGPT, Bard, and Llama 2.

NIST’s budget for 2023 was $1.6 billion, and the White House has requested that it be increased by 29 percent in 2024 for initiatives not directly related to AI. Several sources familiar with the situation at NIST say that the agency’s current budget will not stretch to figuring out AI safety testing on its own.

On December 16, the same day Tabassi spoke at NeurIPS, six members of Congress signed a bipartisan open letter raising concern about the prospect of NIST enlisting private companies with little transparency. “We have learned that NIST intends to make grants or awards to outside organizations for extramural research,” they wrote. The letter warns that there does not appear to be any publicly available information about how those awards will be decided.

The lawmakers’ letter also claims that NIST is being rushed to define standards even though research into testing AI systems is at an early stage. As a result there is “significant disagreement” among AI experts over how to work on or even measure and define safety issues with the technology, it states. “The current state of the AI safety research field creates challenges for NIST as it navigates its leadership role on the issue,” the letter claims.

NIST spokesperson Jennifer Huergo confirmed that the agency had received the letter and said that it “will respond through the appropriate channels.”

NIST is making some moves that would increase transparency, including issuing a request for information on December 19, soliciting input from outside experts and companies on standards for evaluating and red-teaming AI models. It is unclear if this was a response to the letter sent by the members of Congress.

The concerns raised by lawmakers are shared by some AI experts who have spent years developing ways to probe AI systems. “As a nonpartisan scientific body, NIST is the best hope to cut through the hype and speculation around AI risk,” says Rumman Chowdhury, a data scientist and CEO of Parity Consulting, who specializes in testing AI models for bias and other problems. “But in order to do their job well, they need more than mandates and well wishes.”

Yacine Jernite, machine learning and society lead at Hugging Face, a company that supports open source AI projects, says big tech has far more resources than the agency given a key role in implementing the White House’s ambitious AI plan. “NIST has done amazing work on helping manage the risks of AI, but the pressure to come up with immediate solutions for long-term problems makes their mission extremely difficult,” Jernite says. “They have significantly fewer resources than the companies developing the most visible AI systems.”

Margaret Mitchell, chief ethics scientist at Hugging Face, says the growing secrecy around commercial AI models makes measurement more challenging for an organization like NIST. “We can’t improve what we can’t measure,” she says.

The White House executive order calls for NIST to perform several tasks, including establishing a new Artificial Intelligence Safety Institute to support the development of safe AI. In April, a UK taskforce focused on AI safety was announced. It will receive $126 million in seed funding.

The executive order gave NIST an aggressive deadline for coming up with, among other things, guidelines for evaluating AI models, principles for “red-teaming” (adversarially testing) models, a plan to get US-allied nations to agree to NIST standards, and a plan for “advancing responsible global technical standards for AI development.”

Although it isn’t clear how NIST is engaging with big tech companies, discussions on NIST’s risk management framework, which took place prior to the announcement of the executive order, involved Microsoft; Anthropic, a startup formed by ex-OpenAI employees that is building cutting-edge AI models; Partnership on AI, which represents big tech companies; and the Future of Life Institute, a nonprofit dedicated to existential risk, among others.

“As a quantitative social scientist, I’m both loving and hating that people realize that the power is in measurement,” Chowdhury says.

This story originally appeared on wired.com.
