AI harms

Biden orders every US agency to appoint a chief AI officer

Mission control —

Federal agencies rush to appoint chief AI officers with “significant expertise.”

The White House has announced the “first government-wide policy to mitigate risks of artificial intelligence (AI) and harness its benefits.” To coordinate these efforts, every federal agency must appoint a chief AI officer with “significant expertise in AI.”

Some agencies have already appointed chief AI officers, but any agency that has not must appoint a senior official over the next 60 days. If an official already appointed as a chief AI officer does not have the necessary authority to coordinate AI use in the agency, they must be granted additional authority or else a new chief AI officer must be named.

Ideal candidates might include chief information officers, chief data officers, or chief technology officers, the Office of Management and Budget (OMB) policy said.

As chief AI officers, appointees will serve as senior advisers on AI initiatives, monitoring and inventorying all agency uses of AI. They must conduct risk assessments to consider whether any AI uses are impacting “safety, security, civil rights, civil liberties, privacy, democratic values, human rights, equal opportunities, worker well-being, access to critical resources and services, agency trust and credibility, and market competition,” OMB said.

Perhaps most urgently, by December 1, the officers must correct all non-compliant AI uses in government, unless an extension of up to one year is granted.

The chief AI officers will seemingly enjoy a lot of power and oversight over how the government uses AI. It’s up to the chief AI officers to develop a plan to comply with minimum safety standards and to work with chief financial and human resource officers to develop the necessary budgets and workforces to use AI to further each agency’s mission and ensure “equitable outcomes,” OMB said. Here’s a brief summary of OMB’s ideals:

Agencies are encouraged to prioritize AI development and adoption for the public good and where the technology can be helpful in understanding and tackling large societal challenges, such as using AI to improve the accessibility of government services, reduce food insecurity, address the climate crisis, improve public health, advance equitable outcomes, protect democracy and human rights, and grow economic competitiveness in a way that benefits people across the United States.

Among the chief AI officer’s primary responsibilities is determining what AI uses might impact the safety or rights of US citizens. They’ll do this by assessing AI impacts, conducting real-world tests, independently evaluating AI, regularly evaluating risks, properly training staff, providing additional human oversight where necessary, and giving public notice of any AI use that could have a “significant impact on rights or safety,” OMB said.

OMB breaks down several AI uses that could impact safety, including controlling “safety-critical functions” within everything from emergency services to food-safety mechanisms to systems controlling nuclear reactors. Using AI to maintain election integrity could be safety-impacting, too, as could using AI to move industrial waste, control health insurance costs, or detect the “presence of dangerous weapons.”

Uses of AI presumed to be rights-impacting include censoring protected speech and a wide range of law enforcement efforts, such as predicting crimes, sketching faces, or using license plate readers to track personal vehicles in public spaces. Other rights-impacting AI uses include “risk assessments related to immigration,” “replicating a person’s likeness or voice without express consent,” or detecting students cheating.

Chief AI officers will ultimately decide whether any AI use is safety- or rights-impacting and therefore subject to OMB’s minimum standards for responsible AI use. Once a determination is made, the officers will “centrally track” the determinations, informing OMB of any major changes to “conditions or context in which the AI is used.” The officers will also regularly convene “a new Chief AI Officer Council to coordinate” efforts and share innovations government-wide.

As agencies advance AI uses—which the White House says is critical to “strengthen AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more”—chief AI officers will become the public-facing figures accountable for decisions made. In that role, the officer must consult with the public and incorporate “feedback from affected communities,” notify “negatively affected individuals” of new AI uses, and maintain options to opt-out of “AI-enabled decisions,” OMB said.

However, OMB noted that chief AI officers also have the power to waive opt-out options “if they can demonstrate that a human alternative would result in a service that is less fair (e.g., produces a disparate impact on protected classes) or if an opt-out would impose undue hardship on the agency.”

World’s first global AI resolution unanimously adopted by United Nations

We hold these seeds to be self-evident —

Nonbinding agreement seeks to protect personal data and safeguard human rights.

The United Nations building in New York.

On Thursday, the United Nations General Assembly unanimously consented to adopt what some call the first global resolution on AI, reports Reuters. The resolution aims to foster the protection of personal data, enhance privacy policies, ensure close monitoring of AI for potential risks, and uphold human rights. It emerged from a proposal by the United States and received backing from China and 121 other countries.

Being a nonbinding agreement and thus effectively toothless, the resolution seems broadly popular in the AI industry. On X, Microsoft Vice Chair and President Brad Smith wrote, “We fully support the @UN’s adoption of the comprehensive AI resolution. The consensus reached today marks a critical step towards establishing international guardrails for the ethical and sustainable development of AI, ensuring this technology serves the needs of everyone.”

The resolution, titled “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development,” resulted from three months of negotiation, and the stakeholders involved seem pleased at the level of international cooperation. “We’re sailing in choppy waters with the fast-changing technology, which means that it’s more important than ever to steer by the light of our values,” one senior US administration official told Reuters, highlighting the significance of this “first-ever truly global consensus document on AI.”

In the UN, adoption by consensus means that all members agree to adopt the resolution without a vote. “Consensus is reached when all Member States agree on a text, but it does not mean that they all agree on every element of a draft document,” writes the UN in a FAQ found online. “They can agree to adopt a draft resolution without a vote, but still have reservations about certain parts of the text.”

The initiative joins a series of efforts by governments worldwide to influence the trajectory of AI development following the launch of ChatGPT and GPT-4, and the enormous hype raised by certain members of the tech industry in a public worldwide campaign waged last year. Critics fear that AI may undermine democratic processes, amplify fraudulent activities, or contribute to significant job displacement, among other issues. The resolution seeks to address the dangers associated with the irresponsible or malicious application of AI systems, which the UN says could jeopardize human rights and fundamental freedoms.

Resistance from nations such as Russia and China was anticipated, and US officials acknowledged the presence of “lots of heated conversations” during the negotiation process, according to Reuters. However, they also emphasized successful engagement with these countries and others typically at odds with the US on various issues, agreeing on a draft resolution that sought to maintain a delicate balance between promoting development and safeguarding human rights.

The new UN agreement may be the first “global” agreement, in the sense of having the participation of every UN country, but it wasn’t the first multi-state international AI agreement. That honor seems to fall to the Bletchley Declaration signed in November by the 28 nations attending the UK’s first AI Summit.

Also in November, the US, Britain, and other nations unveiled an agreement focusing on the creation of AI systems that are “secure by design” to protect against misuse by rogue actors. Europe is slowly moving forward with provisional agreements to regulate AI and is close to implementing the world’s first comprehensive AI regulations. Meanwhile, the US government still lacks consensus on legislative action related to AI regulation, with the Biden administration advocating for measures to mitigate AI risks while enhancing national security.
