Regulation

EPA’s PFAS rules: We’d prefer zero, but we’ll accept 4 parts per trillion

Approaching zero —

For two chemicals, any presence in water supplies is too much.

Today, the Environmental Protection Agency announced that it has finalized rules for handling water supplies that are contaminated by a large family of chemicals collectively termed PFAS (perfluoroalkyl and polyfluoroalkyl substances). Commonly called “forever chemicals,” these contaminants have been linked to a huge range of health issues, including cancers, heart disease, immune dysfunction, and developmental disorders.

The final rules keep one striking aspect of the initial proposal intact: a goal of completely eliminating exposure to two members of the PFAS family. The new rules require all drinking water suppliers to monitor for the chemicals’ presence, and the EPA estimates that as many as 10 percent of them may need to take action to remove them. While that will be costly, the health benefits are expected to exceed those costs.

Going low

PFAS are a collection of hydrocarbons where some of the hydrogen atoms have been swapped out for fluorine. This swap retains the water-repellent behavior of hydrocarbons while making the molecules highly resistant to breaking down through natural processes—hence the forever chemicals moniker. They’re widely used in water-resistant clothing and non-stick cooking equipment and have found uses in firefighting foam. Their widespread use and disposal have allowed them to get into water supplies in many locations.

They’ve also been linked to an enormous range of health issues. The EPA expects that its new rules will have the following effects: fewer cancers, lower incidence of heart attacks and strokes, reduced birth complications, and a drop in other developmental, cardiovascular, liver, immune, endocrine, metabolic, reproductive, musculoskeletal, and carcinogenic effects. These are not chemicals you want to be drinking.

The striking thing was how far the EPA was willing to go to get them out of drinking water. For two chemicals, perfluorooctanoic acid (PFOA) and perfluorooctanesulfonic acid (PFOS), the agency’s ideal contamination level is zero, meaning no exposure to these chemicals whatsoever. Since current testing equipment is limited to a sensitivity of four parts per trillion, the new rules settle for using that as the standard. Other family members see limits of 10 parts per trillion, and an additional limit sets a cap on how much total exposure is acceptable when a mixture of PFAS is present.
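To make the structure of the limits concrete, here is a minimal sketch, in Python, of how a supplier might screen a sample against per-chemical caps plus a combined check for mixtures. The threshold values, the list of compounds, and the simple hazard-index calculation are illustrative assumptions rather than the authoritative text of the EPA rule.

```python
# Illustrative sketch only: screen one water sample against per-chemical
# limits (in parts per trillion) and a simple combined index for mixtures.
# The thresholds and compound list below are assumptions for demonstration.

INDIVIDUAL_LIMITS_PPT = {
    "PFOA": 4.0,      # limit set at the practical detection floor
    "PFOS": 4.0,
    "PFHxS": 10.0,
    "PFNA": 10.0,
    "HFPO-DA": 10.0,  # "GenX chemicals"
}

# Hypothetical reference levels used to weight the mixture check.
MIXTURE_REFERENCE_PPT = {"PFHxS": 10.0, "PFNA": 10.0, "HFPO-DA": 10.0, "PFBS": 2000.0}


def check_sample(sample_ppt: dict) -> list:
    """Return human-readable violations found in one sample."""
    violations = []
    for chemical, limit in INDIVIDUAL_LIMITS_PPT.items():
        level = sample_ppt.get(chemical, 0.0)
        if level > limit:
            violations.append(f"{chemical}: {level} ppt exceeds the {limit} ppt limit")

    # Mixture check: sum each compound's level relative to its reference value;
    # a total above 1.0 flags the combination even if no single limit is exceeded.
    hazard_index = sum(
        sample_ppt.get(chem, 0.0) / ref for chem, ref in MIXTURE_REFERENCE_PPT.items()
    )
    if hazard_index > 1.0:
        violations.append(f"Mixture index {hazard_index:.2f} exceeds 1.0")
    return violations


print(check_sample({"PFOA": 6.1, "PFHxS": 7.0, "PFNA": 5.5}))
# ['PFOA: 6.1 ppt exceeds the 4.0 ppt limit', 'Mixture index 1.25 exceeds 1.0']
```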

Overall, the EPA estimates that there are roughly 66,000 drinking water suppliers that will be subject to these new rules. They’ll be given three years to get monitoring and testing programs set up, along with access to funds from the Bipartisan Infrastructure Law to help offset the costs. All told, over $20 billion will be made available for the testing and equipment improvements needed for compliance.

The agency expects that somewhere between 4,000 and 6,500 of those systems will require some form of decontamination. While those represent a relatively small fraction of the total drinking water suppliers, it’s estimated that nearly a third of the US population will see its exposure to PFAS drop. Several technologies, including reverse osmosis and exposure to activated carbon, are capable of pulling PFAS from water, and the EPA is leaving it up to each supplier to choose a preferred method.

Cost/benefit

All of that monitoring and decontamination will not come cheap. The EPA estimates that the annual costs will be in the neighborhood of $150 billion, which will likely be passed on to consumers via their water suppliers. Those same consumers, however, are expected to see health benefits that outweigh these costs. EPA estimates place the impact of just three of the health improvements (cancer, cardiovascular, and birth complications) at $150 billion annually; adding in the rest of the health improvements should push the total benefits well past the costs.

The problem, of course, is that people will immediately recognize the increased cost of their water bills, while the savings from medical problems that never happen are much more abstract.

Overall, the final plan is largely unchanged from the EPA’s original proposal. The biggest differences are that the agency is giving water suppliers more time to comply, has set somewhat more specific exposure allowances, and will allow suppliers with minimal contamination to go longer between submitting test results.

“People will live longer, healthier lives because of this action, and the benefits justify the costs,” the agency concluded in announcing the new rules.

US agency tasked with curbing risks of AI lacks funding to do the job

more dollars needed —

Lawmakers fear NIST will have to rely on companies developing the technology.

US President Joe Biden’s plan for containing the dangers of artificial intelligence already risks being derailed by congressional bean counters.

A White House executive order on AI announced in October calls on the US to develop new standards for stress-testing AI systems to uncover their biases, hidden threats, and rogue tendencies. But the agency tasked with setting these standards, the National Institute of Standards and Technology (NIST), lacks the budget needed to complete that work independently by the July 26, 2024, deadline, according to several people with knowledge of the work.

Speaking at the NeurIPS AI conference in New Orleans last week, Elham Tabassi, associate director for emerging technologies at NIST, described this as “an almost impossible deadline” for the agency.

Some members of Congress have grown concerned that NIST will be forced to rely heavily on AI expertise from private companies that, due to their own AI projects, have a vested interest in shaping standards.

The US government has already tapped NIST to help regulate AI. In January 2023 the agency released an AI risk management framework to guide business and government. NIST has also devised ways to measure public trust in new AI tools. But the agency, which standardizes everything from food ingredients to radioactive materials and atomic clocks, has puny resources compared to those of the companies on the forefront of AI. OpenAI, Google, and Meta each likely spent upwards of $100 million to train the powerful language models that undergird applications such as ChatGPT, Bard, and Llama 2.

NIST’s budget for 2023 was $1.6 billion, and the White House has requested that it be increased by 29 percent in 2024 for initiatives not directly related to AI. Several sources familiar with the situation at NIST say that the agency’s current budget will not stretch to figuring out AI safety testing on its own.

On December 16, the same day Tabassi spoke at NeurIPS, six members of Congress signed a bipartisan open letter raising concern about the prospect of NIST enlisting private companies with little transparency. “We have learned that NIST intends to make grants or awards to outside organizations for extramural research,” they wrote. The letter warns that there does not appear to be any publicly available information about how those awards will be decided.

The lawmakers’ letter also claims that NIST is being rushed to define standards even though research into testing AI systems is at an early stage. As a result there is “significant disagreement” among AI experts over how to work on or even measure and define safety issues with the technology, it states. “The current state of the AI safety research field creates challenges for NIST as it navigates its leadership role on the issue,” the letter claims.

NIST spokesperson Jennifer Huergo confirmed that the agency had received the letter and said that it “will respond through the appropriate channels.”

NIST is making some moves that would increase transparency, including issuing a request for information on December 19, soliciting input from outside experts and companies on standards for evaluating and red-teaming AI models. It is unclear if this was a response to the letter sent by the members of Congress.

The concerns raised by lawmakers are shared by some AI experts who have spent years developing ways to probe AI systems. “As a nonpartisan scientific body, NIST is the best hope to cut through the hype and speculation around AI risk,” says Rumman Chowdhury, a data scientist and CEO of Parity Consulting, who specializes in testing AI models for bias and other problems. “But in order to do their job well, they need more than mandates and well wishes.”

Yacine Jernite, machine learning and society lead at Hugging Face, a company that supports open source AI projects, says big tech has far more resources than the agency given a key role in implementing the White House’s ambitious AI plan. “NIST has done amazing work on helping manage the risks of AI, but the pressure to come up with immediate solutions for long-term problems makes their mission extremely difficult,” Jernite says. “They have significantly fewer resources than the companies developing the most visible AI systems.”

Margaret Mitchell, chief ethics scientist at Hugging Face, says the growing secrecy around commercial AI models makes measurement more challenging for an organization like NIST. “We can’t improve what we can’t measure,” she says.

The White House executive order calls for NIST to perform several tasks, including establishing a new Artificial Intelligence Safety Institute to support the development of safe AI. In April, a UK taskforce focused on AI safety was announced. It will receive $126 million in seed funding.

The executive order gave NIST an aggressive deadline for coming up with, among other things, guidelines for evaluating AI models, principles for “red-teaming” (adversarially testing) models, a plan for getting US-allied nations to agree to NIST standards, and a plan for “advancing responsible global technical standards for AI development.”

Although it isn’t clear how NIST is engaging with big tech companies, discussions on NIST’s risk management framework, which took place prior to the announcement of the executive order, involved Microsoft; Anthropic, a startup formed by ex-OpenAI employees that is building cutting-edge AI models; Partnership on AI, which represents big tech companies; and the Future of Life Institute, a nonprofit dedicated to existential risk, among others.

“As a quantitative social scientist, I’m both loving and hating that people realize that the power is in measurement,” Chowdhury says.

This story originally appeared on wired.com.

Challenges Behind Applying Real-World Laws to XR Spaces and Ensuring User Safety

Immersive technologies bridging the gap between the physical and digital worlds can create new business opportunities. However, they also give rise to new challenges in regulation and in applying real-world laws to XR spaces. According to a World Economic Forum report, we have been relatively slow to develop new legal frameworks for emerging technologies like AR and VR.

Common Challenges of Applying Laws to AR and VR

XR technologies like AR and VR are already considered beneficial and are used in industries like medicine and education. However, XR still harbors risks to human rights, according to an Electronic Frontier Foundation (EFF) article.

Issues like data harvesting and online harassment pose real threats to users, and self-regulation when it comes to data protection and ethical guidelines is insufficient in mitigating such risks. Some common challenges that crop up when applying real-world laws to AR and VR include intellectual property, virtual privacy and security, and product liability.

There’s also the need for a new framework tailored to fit emerging technologies, but legislative attempts at regulation may face several hurdles. It’s also worth noting that while regulation can help keep users safe, it may also potentially hamper the development of such technologies, according to Digikonn co-founder Chirag Prajapati.

Can Real-World Laws Be Applied to XR Spaces?

In an interview with IEEE Spectrum in 2018, Robyn Chatwood, an intellectual property and information technology partner at Dentons Australia, gave an example of an incident in a VR space where a user experienced sexual assault. Unfortunately, Chatwood noted that there are no laws saying that sexual assault in VR is the same as in the real world. When asked when she thinks these issues will be addressed, Chatwood suggested that, in several years, another incident could draw more widespread attention to the problems in XR spaces. It’s also possible that, through increased adoption, society will begin to recognize the need to develop regulations for XR spaces.

On a more positive note, the trend toward regulations for XR spaces has been changing recently. For instance, Meta has rolled out a minimum distance between avatars in Horizon Worlds, its VR social media platform. This boundary prevents other avatars from getting into your avatar’s personal space. The system works by halting a user’s forward movement as they get closer to that boundary.
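As a rough illustration of how such a boundary might be enforced frame by frame, the sketch below stops a proposed movement step whenever it would bring one avatar within a minimum distance of another. The 1.2-meter threshold, the 2D coordinates, and the function names are assumptions for illustration, not Meta’s actual implementation.

```python
import math

# Hypothetical minimum allowed distance between avatars, in meters.
PERSONAL_BOUNDARY_M = 1.2


def step_with_boundary(position, velocity, other_positions, dt):
    """Advance an avatar one frame, halting movement that would breach
    another avatar's personal boundary. Points are (x, z) tuples."""
    proposed = (position[0] + velocity[0] * dt, position[1] + velocity[1] * dt)
    for other in other_positions:
        if math.dist(proposed, other) < PERSONAL_BOUNDARY_M:
            # Too close: keep the avatar where it is for this frame instead
            # of letting it push into the other avatar's personal space.
            return position
    return proposed


# Example: walking toward an avatar standing 1.0 m ahead gets halted.
print(step_with_boundary((0.0, 0.0), (0.0, 1.5), [(0.0, 1.0)], dt=0.5))  # (0.0, 0.0)
```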

There are also new laws being drafted to protect users in online spaces. In particular, the UK’s Online Safety Bill, which had its second reading in the House of Commons in April 2022, aims to protect users by ensuring that online platforms have safety measures in place against harmful and illegal content and covers four new criminal offenses.

In the paper “The Law and Ethics of Virtual Assault,” author John Danaher proposes a broader definition of virtual sexual assault, which allows for what he calls the different “sub-types of virtual sexual assault.” Danaher also provides suggestions on when virtual acts should be criminalized and how virtual sexual assault can be criminalized. The paper also touches on topics like consent and criminal responsibility for such crimes.

There’s even a short film that brings to light pressing metaverse concerns. Privacy Lost aims to educate policymakers about the potential dangers, such as manipulation, that come with emerging technologies.

While many legal issues in the virtual world are resolved through criminal courts and tort systems, according to Gamma Law’s David B. Hoppe, these approaches lack the necessary nuance and context to resolve such legal disputes. Hoppe remarks that real-world laws may not have the specificity that will allow them to tackle new privacy issues in XR spaces and shares that there is a need for a more nuanced legal strategy and tailored legal documents to help protect users in XR spaces.

Issues with Existing Cyber Laws

The novelty of AR and VR technologies makes it challenging to implement legislation. However, for users to maximize the benefits of such technologies, their needs should be considered by developers, policymakers, and organizations that implement them. While cyber laws are in place, persistent issues still need to be tackled, such as difficulties in enforcing sanctions against offenders and the lack of adequate responses.

The United Nations Office on Drugs and Crime (UNODC) also cites several obstacles to cybercrime investigations, such as the anonymity that some technologies afford users, attribution (determining who or what is responsible for a crime), and traceback, which can be time-consuming. The UNODC also notes that the lack of coordinated national cybercrime laws and international standards for evidence can hamper cybercrime investigations.

Creating Safer XR Spaces for Users

Based on guidelines provided by the World Economic Forum, there are several key issues that legislators should consider. These include how laws and regulations apply to XR conduct governed by private platforms and how rules can potentially apply when an XR user’s activities have direct, real-world effects.

The XR Association (XRA) has also provided guidelines to help create safe and inclusive immersive spaces. Its conduct policy tips to address abuse include creating tailored policies that align with a business’ product and community and including notifications of possible violations. Moreover, the XRA has been proactive in rolling out measures for the responsible development and adoption of XR. For instance, it has held discussions on user privacy and safety in mixed reality spaces, zeroing in on how developers, policymakers, and organizations can better promote privacy, safety, and inclusion, as well as tackle issues that are unique to XR spaces. It also works with XRA member companies to create guidelines for age-appropriate use of XR technology, helping develop safer virtual spaces for younger users.

Other Key Players in XR Safety

Aside from the XRA, other organizations are also taking steps to create safer XR spaces. X Reality Safety Intelligence (XRSI), formerly known as X Reality Safety Initiative, is one of the world’s leading organizations focused on providing intelligence and advisory services to promote the safety and well-being of ecosystems for emerging technologies.

It has created a number of programs that help tackle critical issues and risks in the metaverse, focusing on aspects like diversity and inclusion, trustworthy journalism, and child safety. For instance, the organization has shown support for the Kids PRIVACY Act, legislation that aims to implement more robust measures to protect younger users online.

XRSI has also published research and shared guidelines to create standards for XR spaces. It has partnered with Standards Australia to create the first-ever Metaverse Standards whitepaper, which serves as a guide for metaverse standards designed to protect users against risks unique to these spaces. The risks are categorized as Human Risks, Regulatory Risks, Financial Risks, and Legal Risks, among others.

The whitepaper is a collaborative effort that brings together cybersecurity experts, VR and AR pioneers, strategists, and AI and metaverse specialists. One of its authors, Dr. Catriona Wallace, is the founder of the social enterprise The Responsible Metaverse Alliance. Cybersecurity professional Kavya Pearlman, the founder and CEO of XRSI, is also one of its authors. Pearlman works with various organizations and governments, advising on policymaking and cybersecurity to help keep users safe in emerging technology ecosystems.

One issue being highlighted by the XRSI is the set of risks that come with XR data collection in three areas: medical XR and healthcare, learning and education, and employment and work. The report highlights how emerging technologies create new privacy and safety concerns; risks such as a lack of inclusivity, a lack of equality in education, and a lack of experience in handling data collected in XR spaces are cropping up.

In light of these issues, the XRSI has created goals and guidelines to help address these risks. Some of the goals include establishing a standards-based workflow to manage XR-collected data and adopting a new approach to classifying such data.
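As a purely hypothetical sketch of what a classification-based approach to XR-collected data could look like, the snippet below tags common XR data streams with assumed sensitivity tiers and flags the ones that would plausibly require explicit consent. The tier names, stream names, and mapping are invented for illustration and do not come from XRSI’s published guidance.

```python
from enum import Enum


class Sensitivity(Enum):
    """Hypothetical sensitivity tiers for data collected during XR sessions."""
    BIOMETRIC = "biometric"          # e.g., eye tracking, gait, heart rate
    BEHAVIORAL = "behavioral"        # e.g., movement traces, interaction logs
    ENVIRONMENTAL = "environmental"  # e.g., room scans, bystander capture
    ACCOUNT = "account"              # e.g., profile and device identifiers


# Assumed mapping from data streams to tiers; a real mapping would come from
# a standards body or an organization's own data governance policy.
STREAM_CLASSIFICATION = {
    "eye_gaze": Sensitivity.BIOMETRIC,
    "hand_pose": Sensitivity.BEHAVIORAL,
    "room_mesh": Sensitivity.ENVIRONMENTAL,
    "user_id": Sensitivity.ACCOUNT,
}


def streams_needing_explicit_consent(streams):
    """Flag streams whose assumed tier would call for explicit user consent."""
    high_risk = {Sensitivity.BIOMETRIC, Sensitivity.ENVIRONMENTAL}
    return [s for s in streams if STREAM_CLASSIFICATION.get(s) in high_risk]


print(streams_needing_explicit_consent(["eye_gaze", "hand_pose", "room_mesh"]))
# ['eye_gaze', 'room_mesh']
```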

The EU is also taking steps to ensure data protection in emerging technologies, with new EU laws aiming to complement the GDPR’s requirements for XR technologies and services. Moreover, EU data protection law applies to most XR technologies, particularly in commercial applications, and a user’s explicit consent may be required to make data processing operations legitimate.

According to the Information Technology & Innovation Foundation (ITIF), policymakers need to mitigate so-called regulatory uncertainty by making it clear how and when laws apply to AR and VR technologies. The same ITIF report stresses that they need to collaborate with stakeholder communities and industry leaders to create and implement comprehensive guidelines and clear standards for AR and VR use.

However, while creating safer XR spaces is of utmost importance, the ITIF also highlights the risks of over-regulation, which can stifle the development of new technologies. To mitigate this risk, policymakers can instead focus on developing regulations that help promote innovation in the field, such as creating best practices for law enforcement agencies to tackle cybercrime and focusing on funding for user safety research.

Moreover, the ITIF also provides some guidelines regarding privacy concerns from AR in public spaces, as well as what steps leaders and policymakers could take to mitigate the risks and challenges that come with the use of immersive technologies.

The EFF also notes that governments need to enact or update data protection legislation to protect users and their data.

There is still a long way to go when applying real-world laws to XR spaces. However, many organizations, policymakers, and stakeholders are already taking steps to help make such spaces safer for users.
