AI risks


Sam Altman accused of being shady about OpenAI’s safety efforts

Sam Altman, chief executive officer of OpenAI, during an interview at Bloomberg House on the opening day of the World Economic Forum (WEF) in Davos, Switzerland, on Tuesday, Jan. 16, 2024.


OpenAI is facing increasing pressure to prove it’s not hiding AI risks after whistleblowers alleged to the US Securities and Exchange Commission (SEC) that the AI company’s non-disclosure agreements had illegally barred employees from disclosing major safety concerns to lawmakers.

In a letter to OpenAI yesterday, Senator Chuck Grassley (R-Iowa) demanded evidence that OpenAI is no longer requiring agreements that could be “stifling” its “employees from making protected disclosures to government regulators.”

Specifically, Grassley asked OpenAI to produce its current employment, severance, non-disparagement, and non-disclosure agreements to reassure Congress that those contracts don’t discourage disclosures. That’s critical, Grassley said, because whistleblowers who expose emerging threats will be needed to help shape effective AI policies that safeguard against existential AI risks as the technology advances.

Grassley has already requested these records twice without receiving a response from OpenAI, his letter said, and so far the company has not answered the most recent request to send documents, Grassley’s spokesperson, Clare Slattery, told The Washington Post.

“It’s not enough to simply claim you’ve made ‘updates,’” Grassley said in a statement provided to Ars. “The proof is in the pudding. Altman needs to provide records and responses to my oversight requests so Congress can accurately assess whether OpenAI is adequately protecting its employees and users.”

In addition to requesting OpenAI’s recently updated employee agreements, Grassley pushed OpenAI to be more transparent about the total number of requests it has received from employees seeking to make federal disclosures since 2023. The senator wants to know what information employees wanted to disclose to officials and whether OpenAI actually approved their requests.

Along the same lines, Grassley asked OpenAI to confirm how many investigations the SEC has opened into OpenAI since 2023.

Together, these documents would shed light on whether OpenAI employees are potentially still being silenced from making federal disclosures, what kinds of disclosures OpenAI denies, and how closely the SEC is monitoring OpenAI’s seeming efforts to hide safety risks.

“It is crucial OpenAI ensure its employees can provide protected disclosures without illegal restrictions,” Grassley wrote in his letter.

He has requested a response from OpenAI by August 15 so that “Congress may conduct objective and independent oversight on OpenAI’s safety protocols and NDAs.”

OpenAI did not immediately respond to Ars’ request for comment.

On X, Altman wrote that OpenAI has taken steps to increase transparency, including “working with the US AI Safety Institute on an agreement where we would provide early access to our next foundation model so that we can work together to push forward the science of AI evaluations.” He also confirmed that OpenAI wants “current and former employees to be able to raise concerns and feel comfortable doing so.”

“This is crucial for any company, but for us especially and an important part of our safety plan,” Altman wrote. “In May, we voided non-disparagement terms for current and former employees and provisions that gave OpenAI the right (although it was never used) to cancel vested equity. We’ve worked hard to make it right.”

In July, whistleblowers told the SEC that OpenAI should be required to produce not just current employee contracts but every contract that contained a non-disclosure agreement, to ensure that OpenAI isn’t hiding a history, or an ongoing practice, of obscuring AI safety risks. They want all current and former employees to be notified of any contract that included an illegal NDA, and for OpenAI to be fined for every illegal contract.



Biden orders every US agency to appoint a chief AI officer


Federal agencies rush to appoint chief AI officers with “significant expertise.”


The White House has announced the “first government-wide policy to mitigate risks of artificial intelligence (AI) and harness its benefits.” To coordinate these efforts, every federal agency must appoint a chief AI officer with “significant expertise in AI.”

Some agencies have already appointed chief AI officers, but any agency that has not must name a senior official within the next 60 days. If an official already appointed as chief AI officer lacks the authority needed to coordinate AI use within the agency, they must be granted additional authority, or a new chief AI officer must be named.

Ideal candidates might include chief information officers, chief data officers, or chief technology officers, the Office of Management and Budget (OMB) policy said.

As chief AI officers, appointees will serve as senior advisers on AI initiatives, monitoring and inventorying all agency uses of AI. They must conduct risk assessments to consider whether any AI uses are impacting “safety, security, civil rights, civil liberties, privacy, democratic values, human rights, equal opportunities, worker well-being, access to critical resources and services, agency trust and credibility, and market competition,” OMB said.

Perhaps most urgently, by December 1, the officers must correct all non-compliant AI uses in government, unless an extension of up to one year is granted.

The chief AI officers will seemingly enjoy a lot of power and oversight over how the government uses AI. It’s up to the chief AI officers to develop a plan to comply with minimum safety standards and to work with chief financial and human resource officers to develop the necessary budgets and workforces to use AI to further each agency’s mission and ensure “equitable outcomes,” OMB said. Here’s a brief summary of OMB’s ideals:

Agencies are encouraged to prioritize AI development and adoption for the public good and where the technology can be helpful in understanding and tackling large societal challenges, such as using AI to improve the accessibility of government services, reduce food insecurity, address the climate crisis, improve public health, advance equitable outcomes, protect democracy and human rights, and grow economic competitiveness in a way that benefits people across the United States.

Among the chief AI officer’s primary responsibilities is determining what AI uses might impact the safety or rights of US citizens. They’ll do this by assessing AI impacts, conducting real-world tests, independently evaluating AI, regularly evaluating risks, properly training staff, providing additional human oversight where necessary, and giving public notice of any AI use that could have a “significant impact on rights or safety,” OMB said.

OMB breaks down several AI uses that could impact safety, including controlling “safety-critical functions” within everything from emergency services to food-safety mechanisms to systems controlling nuclear reactors. Using AI to maintain election integrity could be safety-impacting, too, as could using AI to move industrial waste, control health insurance costs, or detect the “presence of dangerous weapons.”

Uses of AI presumed to be rights-impacting include censoring protected speech and a wide range of law enforcement efforts, such as predicting crimes, sketching faces, or using license plate readers to track personal vehicles in public spaces. Other rights-impacting AI uses include “risk assessments related to immigration,” “replicating a person’s likeness or voice without express consent,” or detecting students cheating.

Chief AI officers will ultimately decide whether any given AI use is safety- or rights-impacting, and those uses must then adhere to OMB’s minimum standards for responsible AI use. Once a determination is made, the officers will “centrally track” the determinations, informing OMB of any major changes to “conditions or context in which the AI is used.” The officers will also regularly convene “a new Chief AI Officer Council to coordinate” efforts and share innovations government-wide.

As agencies advance AI uses—which the White House says is critical to “strengthen AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more”—chief AI officers will become the public-facing figures accountable for the decisions made. In that role, each officer must consult with the public and incorporate “feedback from affected communities,” notify “negatively affected individuals” of new AI uses, and maintain options to opt out of “AI-enabled decisions,” OMB said.

However, OMB noted that chief AI officers also have the power to waive opt-out options “if they can demonstrate that a human alternative would result in a service that is less fair (e.g., produces a disparate impact on protected classes) or if an opt-out would impose undue hardship on the agency.”
