Generative AI

Sharing deepfake porn could lead to lengthy prison time under proposed law

Fake nudes, real harms —

Teen “shouting for change” after fake nude images spread at NJ high school.

The US seems to be getting serious about criminalizing deepfake pornography after teen boys at a New Jersey high school used AI image generators to create and share non-consensual fake nude images of female classmates last October.

On Tuesday, Rep. Joseph Morelle (D-NY) announced that he has re-introduced the “Preventing Deepfakes of Intimate Images Act,” which seeks to “prohibit the non-consensual disclosure of digitally altered intimate images.” Under the proposed law, anyone sharing deepfake pornography without an individual’s consent risks damages that could go as high as $150,000 and imprisonment of up to 10 years if sharing the images facilitates violence or impacts the proceedings of a government agency.

The hope is that steep penalties will deter companies and individuals from allowing the disturbing images to be spread. The bill creates a criminal offense for sharing deepfake pornography “with the intent to harass, annoy, threaten, alarm, or cause substantial harm to the finances or reputation of the depicted individual” or with “reckless disregard” or “actual knowledge” that images will harm the individual depicted. It also provides a path for victims to sue offenders in civil court.

Rep. Tom Kean (R-NJ), who co-sponsored the bill, said that “proper guardrails and transparency are essential for fostering a sense of responsibility among AI companies and individuals using AI.”

“Try to imagine the horror of receiving intimate images looking exactly like you—or your daughter, or your wife, or your sister—and you can’t prove it’s not,” Morelle said. “Deepfake pornography is sexual exploitation, it’s abusive, and I’m astounded it is not already a federal crime.”

Joining Morelle in pushing to criminalize deepfake pornography were Dorota and Francesca Mani, who have spent the past two months meeting with lawmakers, The Wall Street Journal reported. The mother and daughter experienced the horror Morelle described firsthand when the New Jersey high school confirmed that 14-year-old Francesca was among the students targeted last year.

“What happened to me and my classmates was not cool, and there’s no way I’m just going to shrug and let it slide,” Francesca said. “I’m here, standing up and shouting for change, fighting for laws, so no one else has to feel as lost and powerless as I did on October 20th.”

Morelle’s office told Ars that “advocacy from partners like the Mani family” is “critical to bringing attention to this issue” and getting the proposed law “to the floor for a vote.”

Morelle introduced the law in December 2022, but it failed to pass that year or in 2023. He’s re-introducing the law in 2024 after seemingly gaining more support during a House Oversight subcommittee hearing on “Advances in Deepfake Technology” last November.

At that hearing, many lawmakers warned of the dangers of AI-generated deepfakes, citing a study from the Dutch AI company Sensity, which found that 96 percent of deepfakes online are deepfake porn—the majority of which targets women.

But lawmakers also made clear that it’s currently hard to detect AI-generated images and distinguish them from real images.

According to a hearing transcript posted by the nonprofit news organization Tech Policy Press, David Doermann—currently interim chair of the University at Buffalo’s computer science and engineering department and former program manager at the Defense Advanced Research Projects Agency (DARPA)—told lawmakers that DARPA was already working on advanced deepfake detection tools but still had more work to do.

To support laws like Morelle’s, lawmakers have called for more funding for DARPA and the National Science Foundation to aid in ongoing efforts to create effective detection tools. At the same time, President Joe Biden—through a sweeping AI executive order—has pushed for solutions like watermarking deepfakes. Biden’s executive order also instructed the Department of Commerce to establish “standards and best practices for detecting AI-generated content and authenticating official content.”

Morelle is working to push his law through in 2024, warning that deepfake pornography is already affecting a “generation of young women like Francesca,” who are “ready to stand up against systemic oppression and stand in their power.”

Until the federal government figures out how to best prevent the sharing of AI-generated deepfakes, Francesca and her mom plan to keep pushing for change.

“Our voices are our secret weapon, and our words are like power-ups in Fortnite,” Francesca said. “My mom and I are advocating to create a world where being safe isn’t just a hope; it’s a reality for everyone.”

Regulators aren’t convinced that Microsoft and OpenAI operate independently

Under Microsoft’s thumb? —

EU is fielding comments on potential market harms of Microsoft’s investments.

European Union regulators are concerned that Microsoft may be covertly controlling OpenAI as its biggest investor.

On Tuesday, the European Commission (EC) announced that it is currently “checking whether Microsoft’s investment in OpenAI might be reviewable under the EU Merger Regulation.”

The EC’s executive vice president in charge of competition policy, Margrethe Vestager, said in the announcement that rapidly advancing AI technologies are “disruptive” and have “great potential,” but to protect EU markets, a forward-looking analysis scrutinizing antitrust risks has become necessary.

Hoping to thwart predictable anticompetitive risks, the EC has called for public comments. Regulators are particularly keen to hear from policy experts, academics, and industry and consumer organizations who can identify “potential competition issues” stemming from tech companies partnering to develop generative AI and virtual world/metaverse systems.

The EC worries that partnerships like Microsoft and OpenAI could “result in entrenched market positions and potential harmful competition behavior that is difficult to address afterwards.” That’s why Vestager said that these partnerships needed to be “closely” monitored now—”to ensure they do not unduly distort market dynamics.”

Microsoft has denied having control over OpenAI.

A Microsoft spokesperson told Ars that, rather than stifling competition, since 2019, the tech giant has “forged a partnership with OpenAI that has fostered more AI innovation and competition, while preserving independence for both companies.”

But ever since Sam Altman was bizarrely ousted by OpenAI’s board, then quickly reappointed as OpenAI’s CEO—joining Microsoft for the brief time in between—regulators have begun questioning whether recent governance changes mean that Microsoft has more control over OpenAI than the companies have publicly stated.

OpenAI did not immediately respond to Ars’ request to comment. Last year, OpenAI confirmed that “it remained independent and operates competitively,” CNBC reported.

Beyond the EU, the UK’s Competition and Markets Authority (CMA) and reportedly the US Federal Trade Commission have also launched investigations into Microsoft’s OpenAI investments. On January 3, the CMA ended its comments period, but it’s currently unclear whether significant competition issues were raised that could trigger a full-fledged CMA probe.

A CMA spokesperson declined Ars’ request to comment on the substance of comments received or to verify how many comments were received.

Antitrust legal experts told Reuters that authorities should act quickly to prevent “critical emerging technology” like generative AI from being “monopolized,” noting that before launching a probe, the CMA will need to find evidence showing that Microsoft’s influence over OpenAI materially changed after Altman’s reappointment.

The EC is also investigating partnerships beyond Microsoft and OpenAI, questioning whether agreements “between large digital market players and generative AI developers and providers” may impact EU market dynamics.

Microsoft observing OpenAI board meetings

In total, Microsoft has pumped $13 billion into OpenAI, CNBC reported, and the AI company has a somewhat opaque corporate structure. OpenAI’s parent company, Reuters reported in December, is a nonprofit, which is “a type of entity rarely subject to antitrust scrutiny.” But in 2019, as Microsoft started investing billions into the AI company, OpenAI also “set up a for-profit subsidiary, in which Microsoft owns a 49 percent stake,” an insider source told Reuters. On Tuesday, the nonprofit consumer rights group Public Citizen called for California Attorney General Rob Bonta to “investigate whether OpenAI should retain its non-profit status.”

A Microsoft spokesperson told Reuters that the source’s information was inaccurate, reiterating that the terms of Microsoft’s agreement with OpenAI are confidential. Microsoft has maintained that while it is entitled to OpenAI’s profits, it does not own “any portion” of OpenAI.

After OpenAI’s drama with Altman ended with an overhaul of OpenAI’s board, Microsoft appeared to increase its involvement with OpenAI by receiving a non-voting observer role on the board. That’s what likely triggered lawmakers’ initial concerns that Microsoft “may be exerting control over OpenAI,” CNBC reported.

The EC’s announcement comes days after Microsoft confirmed that Dee Templeton would serve as the observer on OpenAI’s board, as initially reported by Bloomberg.

Templeton has spent 25 years working for Microsoft and is currently vice president for technology and research partnerships and operations. According to Bloomberg, she has already attended OpenAI board meetings.

Microsoft’s spokesperson told Ars that adding a board observer was the only recent change in the company’s involvement in OpenAI. An OpenAI spokesperson told CNBC that Microsoft’s board observer has no “governing authority or control over OpenAI’s operations.”

By appointing Templeton as a board observer, Microsoft may simply be seeking to avoid any further surprises that could affect its investment in OpenAI, but the CMA has suggested that Microsoft’s involvement in the board may have created “a relevant merger situation” that could shake up competition in the UK if not appropriately regulated.