Author name: Kris Guyer

From sci-fi to state law: California’s plan to prevent AI catastrophe

Adventures in AI regulation —

Critics say SB-1047, proposed by “AI doomers,” could slow innovation and stifle open source AI.

The California State Capitol building in Sacramento.

California’s “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” (a.k.a. SB-1047) has led to a flurry of headlines and debate concerning the overall “safety” of large artificial intelligence models. But critics are concerned that the bill’s overblown focus on existential threats by future AI models could severely limit research and development for more prosaic, non-threatening AI uses today.

SB-1047, introduced by State Senator Scott Wiener, passed the California Senate in May with a 32-1 vote and seems well positioned for a final vote in the State Assembly in August. The text of the bill requires companies behind sufficiently large AI models (currently set at $100 million in training costs and the rough computing power implied by those costs today) to put testing procedures and systems in place to prevent and respond to “safety incidents.”

The bill lays out a legalistic definition of those safety incidents that in turn focuses on defining a set of “critical harms” that an AI system might enable. That includes harms leading to “mass casualties or at least $500 million of damage,” such as “the creation or use of chemical, biological, radiological, or nuclear weapon” (hello, Skynet?) or “precise instructions for conducting a cyberattack… on critical infrastructure.” The bill also alludes to “other grave harms to public safety and security that are of comparable severity” to those laid out explicitly.

An AI model’s creator can’t be held liable for harm caused through the sharing of “publicly accessible” information from outside the model—simply asking an LLM to summarize The Anarchist Cookbook probably wouldn’t put it in violation of the law, for instance. Instead, the bill seems most concerned with future AIs that could come up with “novel threats to public safety and security.” More than a human using an AI to brainstorm harmful ideas, SB-1047 focuses on the idea of an AI “autonomously engaging in behavior other than at the request of a user” while acting “with limited human oversight, intervention, or supervision.”

Would California’s new bill have stopped WOPR?

To prevent this straight-out-of-science-fiction eventuality, anyone training a sufficiently large model must “implement the capability to promptly enact a full shutdown” and have policies in place for when such a shutdown would be enacted, among other precautions and tests. The bill also focuses at points on AI actions that would require “intent, recklessness, or gross negligence” if performed by a human, suggesting a degree of agency that does not exist in today’s large language models.

Attack of the killer AI?

This kind of language in the bill likely reflects the particular fears of its original drafter, Center for AI Safety (CAIS) co-founder Dan Hendrycks. In a 2023 Time Magazine piece, Hendrycks makes the maximalist existential argument that “evolutionary pressures will likely ingrain AIs with behaviors that promote self-preservation” and lead to “a pathway toward being supplanted as the earth’s dominant species.”

If Hendrycks is right, then legislation like SB-1047 seems like a common-sense precaution—indeed, it might not go far enough. Supporters of the bill, including AI luminaries Geoffrey Hinton and Yoshua Bengio, agree with Hendrycks’ assertion that the bill is a necessary step to prevent potential catastrophic harm from advanced AI systems.

“AI systems beyond a certain level of capability can pose meaningful risks to democracies and public safety,” wrote Bengio in an endorsement of the bill. “Therefore, they should be properly tested and subject to appropriate safety measures. This bill offers a practical approach to accomplishing this, and is a major step toward the requirements that I’ve recommended to legislators.”

“If we see any power-seeking behavior here, it is not of AI systems, but of AI doomers.”

Tech policy expert Dr. Nirit Weiss-Blatt

However, critics argue that AI policy shouldn’t be led by outlandish fears of future systems that resemble science fiction more than current technology. “SB-1047 was originally drafted by non-profit groups that believe in the end of the world by sentient machine, like Dan Hendrycks’ Center for AI Safety,” Daniel Jeffries, a prominent voice in the AI community, told Ars. “You cannot start from this premise and create a sane, sound, ‘light touch’ safety bill.”

“If we see any power-seeking behavior here, it is not of AI systems, but of AI doomers,” added tech policy expert Nirit Weiss-Blatt. “With their fictional fears, they try to pass fictional-led legislation, one that, according to numerous AI experts and open source advocates, could ruin California’s and the US’s technological advantage.”

Are you a workaholic? Here’s how to spot the signs

bad for business —

Psychologists now view an out-of-control compulsion to work as an addiction.

Man works late in dimly lit cubicle amid a dark office space

An accountant who fills out spreadsheets at the beach, a dog groomer who always has time for one more client, a basketball player who shoots free throws to the point of exhaustion.

Every profession has its share of hard chargers and overachievers. But for some workers—perhaps more than ever in our always-on, always-connected world—the drive to send one more email, clip one more poodle, sink one more shot becomes all-consuming.

Workaholism is a common feature of the modern workplace. A recent review gauging its pervasiveness across occupational fields and cultures found that roughly 15 percent of workers qualify as workaholics. That adds up to millions of overextended employees around the world who don’t know when—or how, or why—to quit.

Whether driven by ambition, a penchant for perfectionism, or the small rush of completing a task, they work past any semblance of reason. A healthy work ethic can cross the line into an addiction, a shift with far-reaching consequences, says Toon Taris, a behavioral scientist and work researcher at Utrecht University in the Netherlands.

“Workaholism” is a word that gets thrown around loosely and sometimes glibly, says Taris, but the actual affliction is more common, more complex, and more dangerous than many people realize.

What workaholism is—and isn’t

Psychologists and employment researchers have tinkered with measures and definitions of workaholism for decades, and today the picture is coming into focus. In a major shift, workaholism is now viewed as an addiction with its own set of risk factors and consequences, says Taris, who, with occupational health scientist Jan de Jonge of Eindhoven University of Technology in the Netherlands, explored the phenomenon in the 2024 Annual Review of Organizational Psychology and Organizational Behavior.

Taris stresses that the “workaholic” label doesn’t apply to people who put in long hours because they love their jobs. Those people are considered engaged workers, he says. “That’s fine. No problems there.” People who temporarily put themselves through the grinder to advance their careers or keep up on car or house payments don’t count, either. Workaholism is in a different category from capitalism.

The growing consensus is that true workaholism encompasses four dimensions: motivations, thoughts, emotions, and behaviors, says Malissa Clark, an industrial/organizational psychologist at the University of Georgia in Athens. In 2020, Clark and colleagues proposed in the Journal of Applied Psychology  that, in sum, workaholism involves an inner compulsion to work, having persistent thoughts about work, experiencing negative feelings when not working, and working beyond what is reasonably expected.

Some personality types are especially likely to fall into the work trap. Perfectionists, extroverts, and people with type A (ambitious, aggressive, and impatient) personalities are prone to workaholism, Clark and coauthors found in a 2016 meta-analysis. They had expected people with low self-esteem to be at risk, but that link was nowhere to be found. Workaholics may put themselves through the wringer, but it’s not necessarily out of a sense of inadequacy or self-loathing.

Hang out with Ars in San Jose and DC this fall for two infrastructure events

Arsmeet! —

Join us as we talk about the next few years in AI & storage, and what to watch for.

Infrastructure!

Howdy, Arsians! Last year, we partnered with IBM to host an in-person event in the Houston area where we all gathered together, had some cocktails, and talked about resiliency and the future of IT. Location always matters for things like this, and so we hosted it at Space Center Houston and had our cocktails amidst cool space artifacts. In addition to learning a bunch of neat stuff, it was awesome to hang out with all the amazing folks who turned up at the event. Much fun was had!

This year, we’re back partnering with IBM again and we’re looking to repeat that success with not one, but two in-person gatherings—each featuring a series of panel discussions with experts and capping off with a happy hour for hanging out and mingling. Where last time we went central, this time we’re going to the coasts—both east and west. Read on for details!

September: San Jose, California

Our first event will be in San Jose on September 18, and it’s titled “Beyond the Buzz: An Infrastructure Future with GenAI and What Comes Next.” The idea will be to explore what generative AI means for the future of data management. The topics we’ll be discussing include:

  • Playing the infrastructure long game to address any kind of workload
  • Identifying infrastructure vulnerabilities with today’s AI tools
  • Infrastructure’s environmental footprint: Navigating impacts and responsibilities

We’re getting our panelists locked down right now, and while I don’t have any names to share, many will be familiar to Ars readers from past events—or from the front page.

As a neat added bonus, we’re going to host the event at the Computer History Museum, which any Bay Area Ars reader can attest is an incredibly cool venue. (Just nobody spill anything. I think they’ll kick us out if we break any exhibits!)

October: Washington, DC

Switching coasts, on October 29 we’ll set up shop in our nation’s capital for a similar show. This time, our event title will be “AI in DC: Privacy, Compliance, and Making Infrastructure Smarter.” Given that we’ll be in DC, the tone shifts a bit to some more policy-centric discussions, and the talk track looks like this:

  • The key to compliance with emerging technologies
  • Data security in the age of AI-assisted cyber-espionage
  • The best infrastructure solution for your AI/ML strategy

Same deal here with the speakers as with the September event—I can’t name names yet, but the list will be familiar to Ars readers, and I’m excited. We’re still considering venues, but we’re hoping to find something that matches our previous events in terms of style and coolness.

Interested in attending?

While it’d be awesome if everyone could come, the old song and dance applies: space, as they say, will be limited at both venues. We’d like to make sure local folks in both locations get priority in being able to attend, so we’re asking anyone who wants a ticket to register for the events at the sign-up pages below. You should get an email immediately confirming we’ve received your info, and we’ll send another note in a couple of weeks with further details on timing and attendance.

On the Ars side, at minimum both our EIC Ken Fisher and I will be in attendance at both events, and we’ll likely have some other Ars staff showing up where we can—free drinks are a strong lure for the weary tech journalist, so there ought to be at least a few appearing at both. Hoping to see you all there!

AI and ML enter motorsports: How GM is using them to win more races

not LLM or generative AI —

From modeling tire wear and fuel use to predicting cautions based on radio traffic.

The Cadillac V-Series.R is one of General Motors’ factory-backed racing programs.

James Moy Photography/Getty Images

It is hard to escape the feeling that a few too many businesses are jumping on the AI hype train because it’s hype-y, rather than because AI offers an underlying benefit to their operation. So I will admit to a little inherent skepticism, and perhaps a touch of morbid curiosity, when General Motors got in touch wanting to show off some of the new AI/machine learning tools it has been using to win more races in NASCAR, sportscar racing, and IndyCar. As it turns out, that skepticism was misplaced.

GM has fingers in a lot of motorsport pies, but there are four top-level programs it really, really cares about. Number one for an American automaker is NASCAR—still the king of motorsport here—where Chevrolet supplies engines to six Cup teams. IndyCar, which could once boast of being America’s favorite form of racing, is home to another six Chevy-powered teams. And then there’s sportscar racing; right now, Cadillac is competing in IMSA’s GTP class and the World Endurance Championship’s Hypercar class, plus a factory Corvette Racing effort in IMSA.

“In all the series we race we either have key partners or specific teams that run our cars. And part of the technical support that they get from us are the capabilities of my team,” said Jonathan Bolenbaugh, motorsports analytics leader at GM, based at GM’s Charlotte Technical Center in North Carolina.

Unlike generative AI that’s being developed to displace humans from creative activities, GM sees the role of AI and ML as supporting human subject-matter experts so they can make the cars go faster. And it’s using these tools in a variety of applications.

One of GM’s command centers at its Charlotte Technical Center in North Carolina.

General Motors

Each team in each of those various series (obviously) has people on the ground at each race, and invariably more engineers and strategists helping them from Indianapolis, Charlotte, or wherever it is that the particular race team has its home base. But they’ll also be tied in with a team from GM Motorsport, working from one of a number of command centers at its Charlotte Technical Center.

What did they say?

Connecting all three are streams and streams of data from the cars themselves (in series that allow car-to-pit telemetry) but also voice comms, text-based messaging, timing and scoring data from officials, trackside photographs, and more. And one thing Bolenbaugh’s team and their suite of tools can do is help make sense of that data quickly enough for it to be actionable.

“In a series like F1, a lot of teams will have students who are potentially newer members of the team literally listening to the radio and typing out what is happening, then saying, ‘hey, this is about pitting. This is about track conditions,'” Bolenbaugh said.

Instead of giving that to the internship kids, GM built a real-time audio transcription tool to do that job. After trying out a commercial off-the-shelf solution, it decided to build its own, “a combination of open source and some of our proprietary code,” Bolenbaugh said. As anyone who has ever been to a race track can attest, it’s a loud environment, so GM had to train models with all the background noise present.

“We’ve been able to really improve our accuracy and usability of the tool to the point where some of the manual support for that capability is now dwindling,” he said, with the benefit that it frees up the humans, who would otherwise be transcribing, to apply their brains in more useful ways.
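
GM hasn’t published the internals of its transcription tool, but the overall shape—transcribe noisy radio audio with an open source speech model, then tag each message by topic—can be sketched in a few lines. The following is a minimal illustration assuming the open source Whisper package; the topic keywords and audio file name are made up for the example, not GM’s taxonomy.

```python
# Minimal sketch of radio transcription plus topic tagging, assuming the open
# source "openai-whisper" package; GM's real tool mixes open source and
# proprietary code, and its keyword taxonomy is not public.
import whisper

# Illustrative topic keywords, not GM's actual categories.
TOPICS = {
    "pitting": ["pit", "box", "stop", "fuel only"],
    "track conditions": ["slick", "loose", "tight", "debris", "rain"],
}

model = whisper.load_model("base.en")  # small English model; larger ones handle noise better

def transcribe_and_tag(clip_path: str) -> dict:
    """Transcribe one radio clip and flag which topics it appears to mention."""
    text = model.transcribe(clip_path)["text"].lower()
    tags = [topic for topic, words in TOPICS.items() if any(w in text for w in words)]
    return {"clip": clip_path, "text": text, "tags": tags}

if __name__ == "__main__":
    print(transcribe_and_tag("driver24_lap113.wav"))  # hypothetical audio clip
```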

Take a look at this

Another tool developed by Bolenbaugh and his team was built to quickly analyze images taken by trackside photographers working for the teams and OEMs. While some of the footage they shoot might be for marketing or PR, a lot of it is for the engineers.

Two years ago, getting those photos from the photographer’s camera to the team was the work of two to three minutes. Now, “from shutter click at the racetrack in a NASCAR event to AI-tagged into an application for us to get information out of those photos is seven seconds,” Bolenbaugh said.

Sometimes you don’t need an ML tool to analyze a photo to tell you the car is damaged.

Jeffrey Vest/Icon Sportswire via Getty Images

“Time is everything, and the shortest lap time that we run—the Coliseum would be an outlier, but maybe like 18 seconds is probably a short lap time. So we need to be faster than from when they pass that pit lane entry to when they come back again,” he said.

At the rollout of this particular tool at a NASCAR race last year, one of GM’s partner teams was able to avoid a precautionary pit stop after its driver scraped the wall, when the young engineer who developed the tool was able to show them a seconds-old photo of the right side of the car that showed it had escaped any damage.

“They didn’t have to wait for a spotter to look, they didn’t have to wait for the driver’s opinion. They knew that didn’t have damage. That team made the playoffs in that series by four points, so in the event that they would have pitted, there’s a likelihood where they didn’t make it,” he said. In cases where a car is damaged, the image analysis tool can automatically flag that and make that known quickly through an alert.
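
GM hasn’t described the classifier behind that alert, so the sketch below only illustrates the surrounding plumbing: watch an ingest folder for new trackside photos, score each one with a damage model, and raise an alert when the score crosses a threshold. The scoring function, folder name, and threshold are placeholders, not GM’s implementation.

```python
# Sketch of the "flag damage and alert" step; the classifier itself is stubbed
# out because GM has not published its model.
from pathlib import Path
import time

DAMAGE_THRESHOLD = 0.8  # illustrative cutoff

def score_damage(photo: Path) -> float:
    """Placeholder for a trained image classifier returning P(visible damage)."""
    return 0.0  # stub: always "no damage"; swap in a real model here

def alert(photo: Path, score: float) -> None:
    # In a real pit stand this might push to a timing-stand dashboard or chat channel.
    print(f"ALERT: possible damage ({score:.0%}) in {photo.name}")

def watch(folder: Path, seen: set[Path]) -> None:
    """Poll a drop folder for new photos and flag likely damage within seconds."""
    for photo in sorted(folder.glob("*.jpg")):
        if photo in seen:
            continue
        seen.add(photo)
        score = score_damage(photo)
        if score >= DAMAGE_THRESHOLD:
            alert(photo, score)

if __name__ == "__main__":
    seen: set[Path] = set()
    while True:
        watch(Path("incoming_photos"), seen)  # hypothetical ingest directory
        time.sleep(1)
```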

Not all of the images are used for snap decisions like that—engineers can glean a lot about their rivals from photos, too.

“We would be very interested in things related to the geometry of the car for the setup settings—wicker settings, wing angles… ride heights of the car, how close the car is to the ground—those are all things that would be great to know from an engineering standpoint, and those would be objectives that we would have in doing image analysis,” said Patrick Canupp, director of motorsports competition engineering at GM.

Many of the photographers you see working trackside will be shooting on behalf of teams or manufacturers.

Steve Russell/Toronto Star via Getty Images

“It’s not straightforward to take a set of still images and determine a lot of engineering information from those. And so we’re working on that actively to help with all the photos that come in to us on a race weekend—there’s thousands of them. And so it’s a lot of information that we have at our access, that we want to try to maximize the engineering information that we glean from all of that data. It’s kind of a big data problem that AI is really geared for,” Canupp said.

The computer says we should pit now

Remember that transcribed audio feed from earlier? “If a bunch of drivers are starting to talk about something similar in the race like the track condition, we can start inferring, based on… the occurrence of certain words, that the track is changing,” said Bolenbaugh. “It might not just be your car… if drivers are talking about something on track, the likelihood of a caution, which is a part of our strategy model, might be going up.”
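
Bolenbaugh describes this only at a high level, but the idea—more distinct drivers mentioning track conditions within a short window means a higher chance of a caution—maps naturally onto a small sliding-window counter. The sketch below assumes transcripts arrive already tagged with a driver ID; the keyword list, window length, and logistic weighting are illustrative guesses, not GM’s strategy model.

```python
# Sketch of turning radio chatter into a caution-likelihood signal; the window
# size, keywords, and logistic squash are assumptions for illustration only.
from collections import deque
import math
import time

WINDOW_SECONDS = 120
CONDITION_WORDS = {"slick", "loose", "debris", "rain", "oil"}

mentions: deque[tuple[float, int]] = deque()  # (timestamp, driver_id)

def record_transcript(driver_id: int, text: str, now: float) -> None:
    """Log a mention if a driver's radio traffic references track conditions."""
    if any(word in text.lower() for word in CONDITION_WORDS):
        mentions.append((now, driver_id))

def caution_likelihood(now: float) -> float:
    """More distinct drivers talking about conditions -> higher caution probability."""
    while mentions and now - mentions[0][0] > WINDOW_SECONDS:
        mentions.popleft()
    n_drivers = len({driver for _, driver in mentions})
    return 1.0 / (1.0 + math.exp(-(n_drivers - 3)))  # toy logistic weighting

record_transcript(5, "Track is getting slick in turn two", time.time())
record_transcript(24, "Lots of debris off of four", time.time())
print(f"caution likelihood: {caution_likelihood(time.time()):.2f}")
```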

That feeds into a strategy tool that also takes in lap times from timing and scoring, fuel efficiency data (shared for all cars in series that provide it, or estimated by a predictive model in series like NASCAR and IndyCar, where teams can’t see that data from their competitors), and models of tire wear.

“One of the biggest things that we need to manage is tires, fuel, and lap time. Everything is a trade-off between trying to execute the race the fastest,” Bolenbaugh said.

Obviously races are dynamic situations, and so “multiple times a lap as the scenario changes, we’re updating our recommendation. So, with tire fall off [as the tire wears and loses grip], you’re following up in real time, predicting where it’s going to be. We are constantly evolving during the race and doing transfer learning so we go into the weekend, as the race unfolds, continuing to train models in real time,” Bolenbaugh said.
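
GM’s strategy model isn’t public, but the core trade-off Bolenbaugh describes—project lap times forward as the tires fall off, then compare staying out against paying the pit-lane penalty for fresh tires—can be shown with a toy calculation. The pit-loss figure, fresh-tire pace, and linear falloff fit below are assumptions for illustration only.

```python
# Toy stay-out-vs-pit-now comparison based on a fitted tire-falloff trend.
# The pit-lane time loss and fresh-tire baseline are illustrative numbers.
import numpy as np

PIT_LOSS_S = 22.0        # assumed total time lost for a pit stop
FRESH_TIRE_LAP_S = 18.4  # assumed pace on new tires

def projected_stint_time(recent_laps: list[float], laps_remaining: int) -> float:
    """Fit lap time vs. lap number and extrapolate total time if we stay out."""
    x = np.arange(len(recent_laps))
    slope, intercept = np.polyfit(x, recent_laps, 1)  # falloff in seconds per lap
    future = intercept + slope * np.arange(len(recent_laps), len(recent_laps) + laps_remaining)
    return float(future.sum())

def should_pit(recent_laps: list[float], laps_remaining: int) -> bool:
    stay_out = projected_stint_time(recent_laps, laps_remaining)
    pit_now = PIT_LOSS_S + FRESH_TIRE_LAP_S * laps_remaining  # ignores fresh-tire falloff for brevity
    return pit_now < stay_out

laps = [18.6, 18.7, 18.9, 19.0, 19.2, 19.4]  # hypothetical degrading stint
print("pit now" if should_pit(laps, laps_remaining=25) else "stay out")
```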

Lego’s newest retro art piece is a 1,215-piece Super Mario World homage

let’s-a-go —

$130 set is available for preorder now, ships on October 1.

  • The Lego Mario & Yoshi set is an homage to 1990’s Super Mario World.

    The Lego Group

  • From the front, it looks like a fairly straightforward re-creation of the game’s 16-bit sprites.

    The Lego Group

  • Behind the facade are complex mechanics that move Yoshi’s feet and arms and bob his body up and down, to make him look like he’s walking. A separate dial opens his mouth and extends his tongue.

    The Lego Group

Nintendo and Lego are at it again—they’ve announced another collaboration today as a follow-up to the interactive Mario sets, the replica Nintendo Entertainment System, the unfolding question mark block with the Mario 64 worlds inside, and other sets besides.

The latest addition is an homage to 1990’s Super Mario World, Mario’s debut outing on the then-new 16-bit Super Nintendo Entertainment System. At first, the 1,215-piece set just looks like a caped Mario sitting on top of Yoshi. But a look at the back reveals more complex mechanics, including a hand crank that makes Yoshi’s feet and arms move and a dial that opens his mouth and extends his tongue.

Most of the Mario sets have included some kind of interactive moving part, even if it’s as simple as the movable mouth on the Lego Piranha Plant. Yoshi’s mechanical crank most strongly resembles the NES set, though, which included a CRT-style TV set with a crank that made the contents of the screen scroll so that Mario could “walk.”

The Mario & Yoshi set is available to preorder from Lego’s online store for $129.99. It begins shipping on October 1.

Lego has also branched out into other video game-themed sets. In 2022, the company began selling a replica Atari 2600, complete with faux-wood paneling. More recently, Lego has collaborated with Epic Games on several Fortnite-themed sets, including the Battle Bus.

Listing image by The Lego Group

New Zealand “deeply shocked” after Canada drone-spied on its Olympic practices—twice

Droned —

Two Canadians have already been sent home over the incident.

Aurich Lawson | Getty Images

On July 22, the New Zealand women’s football (soccer) team was training in Saint-Étienne, France, for its upcoming Olympics matchup against Canada when team officials noticed a drone hovering near the practice pitch. Suspecting skullduggery, the New Zealand squad called the local police, and gendarmes located and then detained the nearby drone operator. He turned out to be one Joseph Lombardi, an “unaccredited analyst with Canada Soccer”—and he was apparently spying on the New Zealand practice and relaying information to a Canadian assistant coach.

On July 23, the New Zealand Olympic Committee put out a statement saying it was “deeply shocked and disappointed by this incident, which occurred just three days before the sides are due to face each other in their opening game of Paris 2024.” It also complained to the official International Olympic Committee integrity unit.

Early today, July 24, the Canadian side issued its own statement saying that it “stands for fair-play and we are shocked and disappointed. We offer our heartfelt apologies to New Zealand Football, to all the players affected, and to the New Zealand Olympic Committee.”

Later in the day, a follow-up Canadian statement revealed that this was actually the second drone-spying incident; the New Zealand side had also been watched by drone at its July 19 practice.

Team Canada announced four responses to these incidents:

  • “Joseph Lombardi, an unaccredited analyst with Canada Soccer, is being removed from the Canadian Olympic Team and will be sent home immediately.
  • Jasmine Mander, an assistant coach to whom Mr. Lombardi report sent [sic], is being removed from the Canadian Olympic Team and will be sent home immediately.
  • [The Canadian Olympic Committee] has accepted the decision of Head Coach Bev Priestman to remove herself from coaching the match against New Zealand on July 25th.
  • Canada Soccer staff will undergo mandatory ethics training.”

Drones are now everywhere—swarming the skies over Ukraine’s battlefields, flying from Houthi-controlled Yemen to Tel Aviv, scouting political assassination attempt options. Disney is running an 800-drone light show in Florida. The roofer who recently showed up to look at my shingles brought a drone with him. My kid owns one.

So, from a technical perspective, stories like this little spying scandal are no surprise at all. But for the Olympics, already awash in high-tech cheating scandals such as years-long state-sponsored doping campaigns, drone spying is just one more depressing example of how humans excel at using our tools to ruin good things in creative new ways.

And it’s a good reminder that every crazy example in those terrible HR training videos your boss makes you watch every year is included for a reason. So if you see “drone ethics” creeping into your compliance program right after sections on “how to avoid being phished” and “don’t let anyone else follow you through the door after you swipe your keycard”… well, now you know why.

CrowdStrike blames testing bugs for security update that took down 8.5M Windows PCs

oops —

Company says it’s improving testing processes to avoid a repeat.

CrowdStrike’s Falcon security software brought down as many as 8.5 million Windows PCs over the weekend.

CrowdStrike

Security firm CrowdStrike has posted a preliminary post-incident report about the botched update to its Falcon security software that caused as many as 8.5 million Windows PCs to crash over the weekend, delaying flights, disrupting emergency response systems, and generally wreaking havoc.

The detailed post explains exactly what happened: At just after midnight Eastern time, CrowdStrike deployed “a content configuration update” to allow its software to “gather telemetry on possible novel threat techniques.” CrowdStrike says that these Rapid Response Content updates are tested before being deployed, and one of the steps involves checking updates using something called the Content Validator. In this case, “a bug in the Content Validator” failed to detect “problematic content data” in the update responsible for the crashing systems.

CrowdStrike says it is making changes to its testing and deployment processes to prevent something like this from happening again. The company is specifically including “additional validation checks to the Content Validator” and adding more layers of testing to its process.

The biggest change will probably be “a staggered deployment strategy for Rapid Response Content” going forward. In a staggered deployment system, updates are initially released to a small group of PCs, and then availability is slowly expanded once it becomes clear that the update isn’t causing major problems. Microsoft uses a phased rollout for Windows security and feature updates after a couple of major hiccups during the Windows 10 era. To this end, CrowdStrike will “improve monitoring for both sensor and system performance” to help “guide a phased rollout.”
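
CrowdStrike hasn’t detailed how its staggered deployment will work, but ring-based rollouts generally follow the same pattern: release to a small slice of the fleet, watch a health signal, and only widen availability if nothing alarming shows up. Here is a generic sketch of that logic; the ring sizes, soak time, crash-rate threshold, and telemetry stub are assumptions, not CrowdStrike’s system.

```python
# Generic ring/canary rollout sketch; ring sizes, health checks, and thresholds
# are illustrative, not CrowdStrike's actual deployment pipeline.
import time

RINGS = [0.01, 0.05, 0.25, 1.00]   # fraction of the fleet eligible at each stage
MAX_CRASH_RATE = 0.001             # abort if more than 0.1% of updated hosts crash
SOAK_SECONDS = 3600                # how long each ring "soaks" before expanding

def crash_rate_for(ring_fraction: float) -> float:
    """Placeholder for telemetry: fraction of hosts in this ring reporting crashes."""
    return 0.0  # stub; a real system would query sensor and system health metrics

def staged_rollout(update_id: str) -> bool:
    for fraction in RINGS:
        print(f"{update_id}: releasing to {fraction:.0%} of hosts")
        time.sleep(SOAK_SECONDS)  # wait for telemetry to accumulate
        rate = crash_rate_for(fraction)
        if rate > MAX_CRASH_RATE:
            print(f"{update_id}: crash rate {rate:.3%} exceeds limit, halting rollout")
            return False
    return True  # every ring stayed healthy; the update is fully deployed

# staged_rollout("rapid-response-2024-xx-xx")  # hypothetical update identifier
```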

CrowdStrike says it will also give its customers more control over when Rapid Response Content updates are deployed so that updates that take down millions of systems aren’t deployed at (say) midnight when fewer people are around to notice or fix things. Customers will also be able to subscribe to release notes about these updates.

Recovery of affected systems is ongoing. Rebooting systems multiple times (as many as 15, according to Microsoft) can give them enough time to grab a new, non-broken update file before they crash, resolving the issue. Microsoft has also created tools that can boot systems via USB or a network so that the bad update file can be deleted, allowing systems to restart normally.

In addition to this preliminary incident report, CrowdStrike says it will release “the full Root Cause Analysis” once it has finished investigating the issue.

Appeals Court denies stay to states trying to block EPA’s carbon limits

You can’t stay here —

The EPA’s plan to cut carbon emissions from power plants can go ahead.

Cooling towers emitting steam, viewed from above.

On Friday, the US Court of Appeals for the DC Circuit denied a request to put a hold on recently formulated rules that would limit carbon emissions made by fossil fuel power plants. The request, made as part of a case that sees 25 states squaring off against the EPA, would have put the federal government’s plan on hold while the case continued. Instead, the EPA will be allowed to continue the process of putting its rules into effect, and the larger case will be heard under an accelerated schedule.

Here we go again

The EPA’s efforts to regulate carbon emissions from power plants go back all the way to the second Bush administration, when a group of states successfully sued the EPA to force it to regulate greenhouse gas emissions. This led to a formal endangerment finding regarding greenhouse gases during the Obama administration, something that remained unchallenged even during Donald Trump’s term in office.

Obama tried to regulate emissions through the Clean Power Plan, but his second term came to an end before this plan had cleared court hurdles, allowing the Trump administration to formulate a replacement that did far less than the Clean Power Plan. This took place against a backdrop of accelerated displacement of coal by natural gas and renewables that had already surpassed the changes envisioned under the Clean Power Plan.

In any case, the Trump plan was thrown out by the courts the day before Biden’s inauguration, allowing his EPA to start with a clean slate. Biden’s original plan, which would have had states regulate emissions from their electric grids by treating them as a single system, was thrown out by the Supreme Court, which ruled that emissions would need to be regulated on a per-plant basis in a decision known as West Virginia v. EPA.

So, that’s what the agency is now trying to do. Its plan, issued last year, would allow fossil-fuel-burning plants that are being shut down in the early 2030s to continue operating without restrictions. Plants that keep running longer will need to either install carbon capture equipment or, in the case of natural gas plants, swap in green hydrogen as their primary fuel.

And again

In response, 25 states have sued to block the rule (you can check out this filing to see if yours is among them). The states also sought a stay that would prevent the rule from being implemented while the case went forward. In it, they argue that carbon capture technology isn’t mature enough to form the basis of these regulations (something we predicted was likely to be a point of contention). The suit also suggests that the rules would effectively put coal out of business, something that’s beyond the EPA’s remit.

The DC Court of Appeals, however, was not impressed, ruling that the states’ arguments regarding carbon capture are insufficient: “Petitioners have not shown they are likely to succeed on those claims given the record in this case.” And that’s the key hurdle for determining whether a stay is justified. And the regulations don’t pose a likelihood of irreparable harm, as the court notes that states aren’t even expected to submit a plan for at least two years, and the regulations won’t kick in until 2030 at the earliest.

Meanwhile, the states cited the Supreme Court’s West Virginia v. EPA decision to argue against these rules, suggesting they represent a “major question” that requires input from Congress. The Court was also not impressed, writing that “EPA has claimed only the power to ‘set emissions limits under Section 111 based on the application of measures that would reduce pollution by causing the regulated source to operate more cleanly,’ a type of conduct that falls well within EPA’s bailiwick.”

To respond to the states’ concerns about the potential for irreparable harm, the court plans to consider them during the 2024 term and has given the parties just two weeks to submit proposed schedules for briefings on the case.

Intel has finally tracked down the problem making 13th- and 14th-gen CPUs crash

crash no more? —

But microcode update can’t fix CPUs that are already crashing or unstable.

Intel’s Core i9-13900K.

Andrew Cunningham

For several months, Intel has been investigating reports that high-end 13th- and 14th-generation desktop CPUs (mainly, but not exclusively, the Core i9-13900K and 14900K) were crashing during gameplay. Intel partially addressed the issue by insisting that third-party motherboard makers adhere to Intel’s recommended default power settings in their motherboards, but the company said it was still working to identify the root cause of the problem.

The company announced yesterday that it has wrapped up its investigation and that a microcode update to fix the problem should be shipping out to motherboard makers in mid-August “following full validation.” Microcode updates like this generally require a BIOS update, so exactly when the patch hits your specific motherboard will be up to the company that made it.

Intel says that an analysis of defective processors “confirms that the elevated operating voltage is stemming from a microcode algorithm resulting in incorrect voltage requests to the processor.” In other words, the CPU is receiving too much power, which is degrading stability over time.

If you’re using a 13th- or 14th-generation CPU and you’re not noticing any problems, the microcode update should prevent your processor from degrading. But if you’re already noticing stability problems, Tom’s Hardware reports that “the bug causes irreversible degradation of the impacted processors” and that the fix will not be able to reverse the damage that has already happened.

There has been no mention of 12th-generation processors, including the Core i9-12900K, suffering from the same issues. The 12th-gen processors use Intel’s Alder Lake architecture, whereas the high-end 13th- and 14th-gen chips use a modified architecture called Raptor Lake that comes with higher clock speeds, a bit more cache memory, and additional E-cores.

Tom’s Hardware also says that Intel will continue to replace CPUs that are exhibiting problems and that the microcode update shouldn’t noticeably affect CPU performance.

Intel also separately confirmed speculation that there was an oxidation-related manufacturing issue with some early 13th-generation Core processors but that the problems were fixed in 2023 and weren’t related to the crashes and instability that the microcode update is fixing.

SpaceX just stomped the competition for a new contract—that’s not great

With Dragon and Falcon, SpaceX has become an essential contractor for NASA.

SpaceX

There is an emerging truth about NASA’s push toward commercial contracts that is increasingly difficult to escape: Companies not named SpaceX are struggling with NASA’s approach of awarding firm, fixed-price contracts for space services.

This belief is underscored by the recent award of an $843 million contract to SpaceX for a heavily modified Dragon spacecraft that will be used to deorbit the International Space Station by 2030.

The recently released source selection statement for the “US Deorbit Vehicle” contract, a process led by NASA head of space operations Ken Bowersox, reveals that the competition was a total stomp. SpaceX faced just a single serious competitor in this process, Northrop Grumman. And in all three categories—price, mission suitability, and past performance—SpaceX significantly outclassed Northrop.

Although it’s wonderful that NASA has an excellent contractor in SpaceX, it’s not healthy in the long term that there are so few credible competitors. Moreover, a careful reading of the source selection statement reveals that NASA had to really work to get a competition at all.

“I was really happy that we got proposals from the companies that we did,” Bowersox said during a media teleconference last week. “The companies that sent us proposals are both great companies, and it was awesome to see that interest. I would have expected a few more [proposals], honestly, but I was very happy to get the ones that we got.”

Commercial initiatives struggling

NASA’s push into “commercial” space began nearly two decades ago with a program to deliver cargo to the International Space Station. The space agency initially selected SpaceX and Rocketplane Kistler to develop rockets and spacecraft to accomplish this, but after Kistler missed milestones, the company was replaced by Orbital Sciences Corporation. The cargo delivery program was largely successful, resulting in the Cargo Dragon (SpaceX) and Cygnus (Orbital Sciences) spacecraft. It continues to this day.

A commercial approach generally means that NASA pays a “fixed” price for a service rather than paying a contractor’s costs plus a fee. It also means that NASA hopes to become one of many customers. The idea is that, as the first mover, NASA is helping to stimulate a market by which its fixed-priced contractors can also sell their services to other entities—both private companies and other space agencies.

NASA has since extended this commercial approach to crew, with SpaceX and Boeing winning large contracts in 2014. However, only SpaceX has flown operational astronaut missions, while Boeing remains in the development and test phase, with its ongoing Crew Flight Test. Whereas SpaceX has sold half a dozen private crewed missions on Dragon, Boeing has yet to announce any.

Such a commercial approach has also been tried with lunar cargo delivery through the “Commercial Lunar Payload Services” program, as well as larger lunar landers (Human Landing System), next-generation spacesuits, and commercial space stations. Each of these programs has a mixed record at best. For example, NASA’s inspector general was highly critical of the lunar cargo program in a recent report, and one of the two spacesuit contractors, Collins Aerospace, recently dropped out because it could not execute on its fixed-price contract.

Some of NASA’s most important traditional space contractors, including Lockheed Martin, Boeing, and Northrop Grumman, have all said they are reconsidering whether to participate in fixed-price contract competitions in the future. For example, Northrop CEO Kathy Warden said last August, “We are being even more disciplined moving forward in ensuring that we work with the government to have the appropriate use of fixed-price contracts.”

So the large traditional space contractors don’t like fixed-price contracts, and many new space companies are struggling to survive in this environment.

Apple “clearly underreporting” child sex abuse, watchdogs say

After years of controversies over plans to scan iCloud to find more child sexual abuse materials (CSAM), Apple abandoned those plans last year. Now, child safety experts have accused the tech giant of not only failing to flag CSAM exchanged and stored on its services—including iCloud, iMessage, and FaceTime—but also allegedly failing to report all the CSAM that is flagged.

The United Kingdom’s National Society for the Prevention of Cruelty to Children (NSPCC) shared UK police data with The Guardian showing that Apple is “vastly undercounting how often” CSAM is found globally on its services.

According to the NSPCC, police investigated more CSAM cases in just the UK alone in 2023 than Apple reported globally for the entire year. Between April 2022 and March 2023 in England and Wales, the NSPCC found, “Apple was implicated in 337 recorded offenses of child abuse images.” But in 2023, Apple only reported 267 instances of CSAM to the National Center for Missing & Exploited Children (NCMEC), supposedly representing all the CSAM on its platforms worldwide, The Guardian reported.

Large tech companies in the US must report CSAM to NCMEC when it’s found, but while Apple reports a couple hundred CSAM cases annually, its big tech peers like Meta and Google report millions, NCMEC’s report showed. Experts told The Guardian that there’s ongoing concern that Apple “clearly” undercounts CSAM on its platforms.

Richard Collard, the NSPCC’s head of child safety online policy, told The Guardian that he believes Apple’s child safety efforts need major improvements.

“There is a concerning discrepancy between the number of UK child abuse image crimes taking place on Apple’s services and the almost negligible number of global reports of abuse content they make to authorities,” Collard told The Guardian. “Apple is clearly behind many of their peers in tackling child sexual abuse when all tech firms should be investing in safety and preparing for the rollout of the Online Safety Act in the UK.”

Outside the UK, other child safety experts shared Collard’s concerns. Sarah Gardner, the CEO of a Los Angeles-based child protection organization called the Heat Initiative, told The Guardian that she considers Apple’s platforms a “black hole” obscuring CSAM. And she expects that Apple’s efforts to bring AI to its platforms will intensify the problem, potentially making it easier to spread AI-generated CSAM in an environment where sexual predators may expect less enforcement.

“Apple does not detect CSAM in the majority of its environments at scale, at all,” Gardner told The Guardian.

Gardner agreed with Collard that Apple is “clearly underreporting” and has “not invested in trust and safety teams to be able to handle this” as it rushes to bring sophisticated AI features to its platforms. Last month, Apple integrated ChatGPT into Siri, iOS, and macOS, perhaps setting expectations for continually enhanced generative AI features to be touted in future Apple gear.

“The company is moving ahead to a territory that we know could be incredibly detrimental and dangerous to children without the track record of being able to handle it,” Gardner told The Guardian.

So far, Apple has not commented on the NSPCC’s report. Last September, Apple did respond to the Heat Initiative’s demands to detect more CSAM, saying that rather than focusing on scanning for illegal content, its focus is on connecting vulnerable or victimized users directly with local resources and law enforcement that can assist them in their communities.

Astronomers discover technique to spot AI fakes using galaxy-measurement tools

stars in their eyes —

Researchers use technique to quantify eyeball reflections that often reveal deepfake images.

Researchers write, “In this image, the person on the left (Scarlett Johansson) is real, while the person on the right is AI-generated. Their eyeballs are depicted underneath their faces. The reflections in the eyeballs are consistent for the real person, but incorrect (from a physics point of view) for the fake person.”

In 2024, it’s almost trivial to create realistic AI-generated images of people, which has led to fears about how these deceptive images might be detected. Researchers at the University of Hull recently unveiled a novel method for detecting AI-generated deepfake images by analyzing reflections in human eyes. The technique, presented at the Royal Astronomical Society’s National Astronomy Meeting last week, adapts tools used by astronomers to study galaxies for scrutinizing the consistency of light reflections in eyeballs.

Adejumoke Owolabi, an MSc student at the University of Hull, headed the research under the guidance of Dr. Kevin Pimbblet, professor of astrophysics.

Their detection technique is based on a simple principle: A pair of eyes being illuminated by the same set of light sources will typically have a similarly shaped set of light reflections in each eyeball. Many AI-generated images created to date don’t take eyeball reflections into account, so the simulated light reflections are often inconsistent between each eye.

A series of real eyes showing largely consistent reflections in both eyes.

In some ways, the astronomy angle isn’t always necessary for this kind of deepfake detection because a quick glance at a pair of eyes in a photo can reveal reflection inconsistencies, which is something artists who paint portraits have to keep in mind. But the application of astronomy tools to automatically measure and quantify eye reflections in deepfakes is a novel development.

Automated detection

In a Royal Astronomical Society blog post, Pimbblet explained that Owolabi developed a technique to detect eyeball reflections automatically and ran the reflections’ morphological features through indices to compare similarity between left and right eyeballs. Their findings revealed that deepfakes often exhibit differences between the pair of eyes.

The team applied methods from astronomy to quantify and compare eyeball reflections. They used the Gini coefficient, typically employed to measure light distribution in galaxy images, to assess the uniformity of reflections across eye pixels. A Gini value closer to 0 indicates evenly distributed light, while a value approaching 1 suggests concentrated light in a single pixel.
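
The paper’s full pipeline isn’t reproduced here, but the Gini comparison itself is straightforward: compute the coefficient over each eye crop’s pixel intensities and treat a large left/right gap as a red flag. A minimal sketch follows; the random “eye crops” and the 0.15 decision threshold are stand-ins for real detector output and a tuned cutoff.

```python
# Gini coefficient over eye-region pixel values, plus a left/right comparison;
# the synthetic eye crops and the 0.15 threshold are illustrative assumptions.
import numpy as np

def gini(pixels: np.ndarray) -> float:
    """Gini coefficient of pixel intensities: 0 = evenly distributed light, 1 = all in one pixel."""
    x = np.sort(pixels.astype(float).ravel())
    n = x.size
    total = x.sum()
    if total == 0:
        return 0.0
    i = np.arange(1, n + 1)
    return float((2 * np.sum(i * x) / (n * total)) - (n + 1) / n)

def reflection_inconsistency(left_eye: np.ndarray, right_eye: np.ndarray) -> float:
    """A larger gap between the two eyes' Gini values suggests inconsistent reflections."""
    return abs(gini(left_eye) - gini(right_eye))

# Hypothetical 32x32 grayscale crops of each eye (in practice, from a face/eye detector).
rng = np.random.default_rng(0)
left = rng.integers(0, 255, (32, 32))
right = rng.integers(0, 255, (32, 32))
print("possible deepfake" if reflection_inconsistency(left, right) > 0.15 else "looks consistent")
```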

A series of deepfake eyes showing inconsistent reflections in each eye.

In the Royal Astronomical Society post, Pimbblet drew comparisons between how they measured eyeball reflection shape and how they typically measure galaxy shape in telescope imagery: “To measure the shapes of galaxies, we analyze whether they’re centrally compact, whether they’re symmetric, and how smooth they are. We analyze the light distribution.”

The researchers also explored the use of CAS parameters (concentration, asymmetry, smoothness), another tool from astronomy for measuring galactic light distribution. However, this method proved less effective in identifying fake eyes.

A detection arms race

While the eye-reflection technique offers a potential path for detecting AI-generated images, the method might not work if AI models evolve to incorporate physically accurate eye reflections, perhaps applied as a subsequent step after image generation. The technique also requires a clear, up-close view of eyeballs to work.

The approach also risks producing false positives, as even authentic photos can sometimes exhibit inconsistent eye reflections due to varied lighting conditions or post-processing techniques. But analyzing eye reflections may still be a useful tool in a larger deepfake detection toolset that also considers other factors such as hair texture, anatomy, skin details, and background consistency.

While the technique shows promise in the short term, Dr. Pimbblet cautioned that it’s not perfect. “There are false positives and false negatives; it’s not going to get everything,” he told the Royal Astronomical Society. “But this method provides us with a basis, a plan of attack, in the arms race to detect deepfakes.”
