Author name: Paul Patrick

cybersecurity-takes-a-big-hit-in-new-trump-executive-order

Cybersecurity takes a big hit in new Trump executive order

Cybersecurity practitioners are voicing concerns over a recent executive order issued by the White House that guts requirements for securing the software the government uses, punishing people who compromise sensitive networks, and preparing new encryption schemes that can withstand attacks from quantum computers, along with other existing controls.

The executive order (EO), issued on June 6, reverses several key cybersecurity orders put in place by President Joe Biden, some as recently as a few days before his term ended in January. A statement that accompanied Donald Trump’s EO said the Biden directives “attempted to sneak problematic and distracting issues into cybersecurity policy” and amounted to “political football.”

Pro-business, anti-regulation

Specific orders Trump dropped or relaxed included ones mandating (1) that federal agencies and contractors adopt products with quantum-safe encryption as they become available in the marketplace, (2) a stringent Secure Software Development Framework (SSDF) for software and services used by federal agencies and contractors, (3) the adoption of phishing-resistant login regimens such as the WebAuthn standard for networks used by contractors and agencies, (4) the implementation of new tools for securing Internet routing through the Border Gateway Protocol, and (5) the encouragement of digital forms of identity.

In many respects, executive orders are at least as much performative displays as they are a vehicle for creating sound policy. Biden’s cybersecurity directives were mostly in this second camp.

The provision regarding the Secure Software Development Framework, for instance, was born out of the devastating consequences of the SolarWinds supply chain attack of 2020. During that event, hackers linked to the Russian government breached the network of SolarWinds, maker of widely used network-management software. The hackers went on to push a malicious update that distributed a backdoor to more than 18,000 customers, many of them contractors and agencies of the federal government.

Cybersecurity takes a big hit in new Trump executive order Read More »

reddit-user-surprised-when-1960s-computer-panel-emerged-from-collapsed-family-garage

Reddit user surprised when 1960s computer panel emerged from collapsed family garage

The Spectra 70 family included five models: the 70/15, 70/25, 70/35, 70/45, and 70/55, with progressively faster and larger memory. Operators could configure the system with up to 32,768 bytes of memory (32K), achieved by combining two 16,384-byte core memory modules—a respectable amount for the mid-1960s, though minuscule by today’s standards. By comparison, a decade later, the Apple II personal computer supported a maximum of 48K of memory.

A view of operators using an RCA Spectra 70/45 control panel similar to the one found in the garage, circa 1965. Credit: RCA

SonOfADeadMeme believes the 70/35 panel ended up in his family’s garage as a keepsake from the computer’s decommissioning. “I think the system may had been dismantled at IBM and the guy kept the terminal as a souvenir unfortunately, searched high and low while it was still standing but only other computers there was a Apple IIE and a Compaq that I think got tossed (kept the Apple II but cant find the Compaq). I did make sure to pretty much clean the whole place out before the collapse though,” they explained.

RCA discontinued the Spectra series in 1971 when the company exited the mainframe computer business, making surviving examples increasingly scarce. The company sold its computer division to Univac, which briefly continued supporting existing Spectra installations before phasing them out entirely.

As for the control panel’s future, the original poster has creative plans for this piece of computing history. “Unfortunately I don’t think I’m ever finding the other 1,500lbs of mainframe needed to use the luxurious 34 kilobytes of memory so I may (without altering a single Goddamn thing) string some LEDs behind the front panel and set them to blink at random.”
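For what it’s worth, the blinking-lights plan is about as simple as retro display hacks get. Here is a minimal sketch, purely as an illustration (the poster hasn’t said what hardware they’d use), assuming a Raspberry Pi drives a handful of LEDs behind the panel:

```python
import random
import time

import RPi.GPIO as GPIO  # standard Raspberry Pi GPIO library

# Hypothetical BCM pin numbers, one per lamp behind the panel.
LED_PINS = [17, 27, 22, 23, 24]

GPIO.setmode(GPIO.BCM)
for pin in LED_PINS:
    GPIO.setup(pin, GPIO.OUT)

try:
    while True:
        # Flip a random lamp on or off, then wait a random beat,
        # for that 1960s "the computer is thinking" look.
        pin = random.choice(LED_PINS)
        GPIO.output(pin, random.choice([GPIO.HIGH, GPIO.LOW]))
        time.sleep(random.uniform(0.05, 0.4))
finally:
    GPIO.cleanup()
```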

Reddit user surprised when 1960s computer panel emerged from collapsed family garage Read More »

rtfb:-the-raise-act

RTFB: The RAISE Act

The RAISE Act has overwhelmingly passed the New York Assembly (95-0 among Democrats and 24-22 among Republicans) and New York Senate (37-1 among Democrats, 21-0 among Republicans).

Governor Kathy Hochul now has to decide whether to sign it. She has 10 non-Sunday days to do so once the bill is delivered (30 if the legislature is out of session), but the bill might not be delivered for six months.

The aim of this post, now that we are seeing increasing public discussion, is to go through the bill and understand exactly what it would and would not do.

The RAISE Act is centrally a transparency bill. It requires frontier model developers to maintain, publish, and adhere to (one might say ‘open source,’ except that they can redact details for various reasons) a safety and security protocol (SSP) that outlines how they will, before releasing their frontier models, take appropriate steps to reduce the risk of critical harm (100 casualties or $1 billion in damages) caused or materially enabled by those models. It must designate senior people as responsible for implementation.

It also requires companies to disclose (as in, write two sentences informing us about) safety incidents within 72 hours.

Enforcement is done only by the attorney general, and limited to injunctive or declaratory relief and fines of a maximum of $10 million for the first violation and $30 million for subsequent violations. This can happen if a company fails to take appropriate preventative steps, even if no critical harm has yet resulted, so if the SSP proves sufficiently inadequate preemptive action can be taken.

My take on the RAISE Act is that it seems clearly to be bending over backwards to avoid imposing substantial costs on the companies involved even if the state were to attempt to enforce it maximally and perversely, to give those companies maximum flexibility in how they respond, and to only apply to a handful of major players.

The bill is thus insufficient on its own but an important improvement upon the status quo. I strongly support this bill. I am very much not alone. The RAISE Act is a highly popular bill, supported (with admittedly very low salience) by 84% of New Yorkers.

a16z has already attempted to kill this bill before it overwhelmingly passed both houses, circulating an opposition memo and reportedly calling members. We should expect a continued flurry of industry lobbying against RAISE, likely following the usual playbooks, and for them to greatly outspend bill advocates.

o3-pro thinks Hochul is likely to ultimately sign the bill, with a 65% chance it becomes law in its current form and a 15% chance it becomes law with negotiated chapter amendments. The Manifold market gives a 57% chance that the bill becomes law.

There are two big advantages we have in reading the RAISE Act.

  1. It is short and simple.

  2. We’ve analyzed similar things before.

Relax. This will be a breeze.

The bill is mostly definitions.

These are mostly standard. The AI definition has been consistent for a while. Compute cost is defined as the published market price cost of cloud compute, as reasonably assessed by the person doing the training, which is as clear and generous as one could hope.

The most important definition is ‘frontier model’:

6. “Frontier model” means either of the following:

(a) an artificial intelligence model trained using greater than 10^26 computational operations (e.g., integer or floating-point operations), the compute cost of which exceeds one hundred million dollars;

OR

(b) an artificial intelligence model produced by applying knowledge distillation to a frontier model as defined in paragraph (a) of this subdivision, provided that the compute cost for such model produced by applying knowledge distillation exceeds five million dollars.

The first provision will centrally be ‘you spent $100 million.’ Which remains a lot of dollars, and means this will only apply to a handful of frontier labs. But also note that 10^26 will for a while remain a lot of FLOPS. Epoch looked at this question, and also estimates the costs of various models, with the only current model over 10^26 likely being Grok 3 (o3-pro suggests it is not impossible that Gemini Ultra or a few others might just barely also qualify, although I find this highly unlikely).

The question is the second provision. How often will companies make distillations that cost more than $5 million and result in ‘similar or equivalent capabilities’ to the original, as required by the definition of distillation used here?

o3-pro believes the current number of such models, even without considering the capabilities requirement, is probably zero (the possible exception is Claude Haiku, if you think it has sufficiently comparable capabilities). It anticipates the number of $5 million distillations will not remain zero, and expects the distillations to mostly (but not entirely) be from the same companies releasing the $100 million frontier models.

Its baseline scenario is that by 2029, there will be ~6 American frontier-trainers, in particular OpenAI, DeepMind, Anthropic, Meta, xAI, and then maybe Amazon or Apple or perhaps an open source collective, and ~6 more distillers on top of that passing the $5 million mark, starting with Cohere, then maybe Databricks or Perplexity.
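To make these thresholds concrete, here is a minimal sketch (my own illustration, not anything in the bill text) of the two-prong ‘frontier model’ test, with the operation count and dollar figures taken straight from the definition quoted above:

```python
# Rough reading of the RAISE Act's 'frontier model' definition (illustrative only).
FRONTIER_OPS_THRESHOLD = 1e26            # computational operations, prong (a)
FRONTIER_COST_THRESHOLD = 100_000_000    # dollars of compute cost, prong (a)
DISTILLATION_COST_THRESHOLD = 5_000_000  # dollars of compute cost, prong (b)

def is_frontier_model(training_ops: float,
                      compute_cost_usd: float,
                      distilled_from_frontier: bool = False,
                      distillation_cost_usd: float = 0.0) -> bool:
    prong_a = (training_ops > FRONTIER_OPS_THRESHOLD
               and compute_cost_usd > FRONTIER_COST_THRESHOLD)
    # Prong (b) also presumes the distillate has similar or equivalent
    # capabilities, per the bill's definition of knowledge distillation.
    prong_b = (distilled_from_frontier
               and distillation_cost_usd > DISTILLATION_COST_THRESHOLD)
    return prong_a or prong_b

# A $2 million fine-tune of an open model: not covered.
print(is_frontier_model(training_ops=5e24, compute_cost_usd=2_000_000))    # False
# A 2e26-operation run costing $150 million: covered under prong (a).
print(is_frontier_model(training_ops=2e26, compute_cost_usd=150_000_000))  # True
```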

A ‘large developer’ means spending a combined $100 million in training compute, or someone who buys the full intellectual rights to the results of that, with academic institutions doing research excluded.

This bill would have zero impact on everyone else.

So yes, there will be talk about how this will be ‘more difficult’ for ‘smaller’ companies. But by ‘smaller’ companies we mean a handful of large companies, and by ‘more difficult’ we mean a tiny fraction of overall costs. And as always: please point to the specific thing these companies would have to do that you don’t think is worth doing, or that would even have a substantial impact on their business costs.

Bill opponents, of course, are telling the same lies about this they told about SB 1047. Brianna January of the ‘Chamber of Progress’ calls this ‘an eviction notice for New York’s 9,000 AI startups,’ saying it ‘would send AI innovators packing,’ when exactly zero of these 9,000 startups would have to lift a single finger in response to this bill.

This is pure bad faith Obvious Nonsense, and you should treat anyone who says similar things accordingly. (The other Obvious Nonsense claim here is that the bill was ‘rushed’ and lacked a public hearing. The bill very much followed normal procedures and had debate on the floor, the bill was in the public pipeline for months, and bills in New York do not otherwise get public hearings, so that’s a non sequitur.)

“Critical harm” means the death or serious injury of one hundred or more people or at least one billion dollars of damages to rights in money or property caused or materially enabled by a large developer’s use, storage, or release of a frontier model, through either of the following:

(a) The creation or use of a chemical, biological, radiological, or nuclear weapon; or

(b) An artificial intelligence model engaging in conduct that does both of the following:

(i) Acts with no meaningful human intervention; and

(ii) Would, if committed by a human, constitute a crime specified in the penal law that requires intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.

A harm inflicted by an intervening human actor shall not be deemed to result from a developer’s activities unless such activities were a substantial factor in bringing about the harm, the intervening human actor’s conduct was reasonably foreseeable as a probable consequence of the developer’s activities, and could have been reasonably prevented or mitigated through alternative design, or security measures, or safety protocols.

We have ‘caused or materially enabled,’ plus ‘substantial factor’ and harm that could have been ‘reasonably prevented or mitigated,’ with the death or serious injury of 100 or more people or a billion dollars in damage as the threshold, and the harm has to come either from a CBRN weapon or from the model acting autonomously in a way that would constitute a crime under the penal law.

That seems like a robust way of saying ‘if you trigger this provision, you screwed up.’

Safety incidents have to be reported, so what exactly are they?

“Safety incident” means a known incidence of critical harm

OR an incident of the following kinds that occurs in such a way that it provides demonstrable evidence of an increased risk of critical harm:

  1. A frontier model autonomously engaging in behavior other than at the request of a user;

  2. Theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights of a frontier model;

  3. The critical failure of any technical or administrative controls, including controls limiting the ability to modify a frontier model;

  4. Unauthorized use of a frontier model.

The incidence of an actual critical harm is clear.

The second half of the definition has two requirements.

  1. It has to involve one of the four things listed.

  2. It has to provide demonstrable evidence of an increased risk of critical harm.

As in, something in your safety protocols goes wrong, in a way that makes you more worried about risk. That seems like the kind of thing you should report. I will be very happy to see these systematically written down, and even happier to have them disclosed.

As in, within 72 hours of any safety incident, you have to notify the attorney general and the Division of Homeland Security and Emergency Services (DHSES). This is the common standard used for cybersecurity breaches. You have to include:

  1. The date of the incident.

  2. Why it qualifies as a safety incident.

  3. ‘A short and plain statement describing the safety incident.’

Does this, as some have suggested, constitute such a burden that it interferes with the ability to respond to the incident? That seems difficult to believe.

For example, you could write ‘On Tuesday, June 17, 2025, someone gained unauthorized access to our frontier model. This makes us more worried about future unauthorized access.’ That’s it.

I have no sympathy for the claim that asking for that style of statement within three days is a distracting or undue burden that outweighs our right to know, or its costs exceed benefits. In many cases, waiting longer could have serious repercussions.
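To underline how small the ask is, here is a minimal sketch (my own illustration; the bill prescribes the three required contents but no particular format) of the entire notification as a structured record:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical structure for the 72-hour notice; the field names are mine.
@dataclass
class SafetyIncidentNotice:
    incident_date: date        # 1. the date of the incident
    qualifying_reason: str     # 2. why it qualifies as a safety incident
    plain_statement: str       # 3. 'a short and plain statement' describing it

notice = SafetyIncidentNotice(
    incident_date=date(2025, 6, 17),
    qualifying_reason="Unauthorized access to the model weights of a frontier model, "
                      "providing demonstrable evidence of increased risk of critical harm.",
    plain_statement="Someone gained unauthorized access to our frontier model. "
                    "This makes us more worried about future unauthorized access.",
)
```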

What are we actually asking companies to produce, exactly? Documentation and a description of technical and organizational protocols that, if fully implemented, would:

  1. ‘Appropriately reduce the risk of critical harm.’

  2. ‘Appropriately reduce the risk of’ unauthorized access to or misuse of the model weights ‘leading to critical harm.’

  3. Describe a detailed test procedure to evaluate potential misuse or loss of control or combination with other software to potentially cause critical harm.

  4. Enable compliance with this article.

  5. Designate senior personnel to be responsible for ensuring compliance.

This requires ‘detailed test procedures’ to be described in advance, which seems like a very good idea, and does not preclude additional tests. The rest seems so basic that it seems laughable to object to being told to do any of it.

Before deploying a frontier model (meaning externally, as in giving a third party access), the developer must write and implement an SSP, retain an up-to-date copy of that SSP, conspicuously publish a redacted copy of the SSP, give the attorney general and DHSES access to the full SSP upon request, and retain copies of all test results sufficient to allow third-party replication.

As always, there are no specific requirements for the SSP, other than that it must ‘appropriately reduce the risk’ of critical harms, both directly or through unauthorized access, and that it spell out your testing procedure, and that you actually have someone ensure you use it. If you want to write the classic ‘lol we’re Meta, we don’t run tests, full open weights release without them seems appropriate, I’m sure it will be fine’ you can do that, although you might not like what happens when people notice you did that, or when the risks materialize, or potentially the AG notices you’re not taking the appropriate actions and sues you.

You need to conduct an annual review of the SSP to adjust for increased model capabilities, and make and publish any appropriate adjustments. Seems wise.

I for one think that if your model would create an unreasonable risk of critical harm then that means you shouldn’t release it. But that’s just me.

Again, yeah, I mean, I hope that stands to reason.

The attorney general can bring a civil action with penalties of:

  1. $10 million for the first violation, $30 million for additional ones.

  2. Injunctive or declaratory relief.

And that’s it. Explicitly no private right of action, no limit of the application of other laws, everything is cumulative with other requirements. If you cause an incident that costs billions of dollars, your fines don’t scale with that.

I don’t see any clause allowing compensatory relief. So if there’s a violation related to an actual critical harm, I presume any fines involved will be the least of your problems.

The main actual consequences are that frontier labs will be forced to be transparent about their safety and security protocols (SSPs) and what tests they intend to run and other precautions they intend to take, in order to guard against critical harms. Most labs impacted already do this, and will only have to newly include the evals they intend to run. Publishing these details will allow us to critique them, and apply pressure to create better protocols.

Again, while I have concerns that the bill is insufficiently strong, I think all of this is a very good thing. I strongly support the bill.


RTFB: The RAISE Act Read More »

founder-of-23andme-buys-back-company-out-of-bankruptcy-auction

Founder of 23andMe buys back company out of bankruptcy auction

TTAM, the nonprofit led by 23andMe co-founder Anne Wojcicki, submitted the winning offer, which requires judicial approval; a court hearing to approve the bid is set for next week.

Several US states have filed objections or lawsuits with the court expressing concerns about the transfer of customers’ genetic data to a new company, though those may now be moot because of Wojcicki’s continued involvement.

An expert hired by the court to review data privacy concerns over a sale of 23andMe submitted a report on Wednesday that noted Wojcicki had been chief executive when a 2023 data breach compromised 7 million customer accounts. Litigation over the breach continues, although that liability remains with the bankruptcy estate to be paid off with the proceeds from the winning bid.

Wojcicki was once married to Google co-founder Sergey Brin. 23andMe went public in 2021 through a merger with a blank cheque vehicle sponsored by Richard Branson, quickly reaching a market cap of nearly $6 billion.

The company has been plagued by years of falling revenue as it was unable to grow beyond its genetic testing business, in which customers sent in saliva samples to be analyzed for medical conditions and family genealogy.

Wojcicki had bid 40 cents a share to acquire the company prior to the bankruptcy filing.

Shares of 23andMe, which now trade over the counter, have rocketed to $5.49 on the belief the company will stage a recovery after settling the litigation.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Founder of 23andMe buys back company out of bankruptcy auction Read More »

meta-beefs-up-disappointing-ai-division-with-$15-billion-scale-ai-investment

Meta beefs up disappointing AI division with $15 billion Scale AI investment

Meta has invested heavily in generative AI, with the majority of its planned $72 billion in capital expenditure this year earmarked for data centers and servers. The deal underlines the high price AI companies are willing to pay for data that can be used to train AI models.

Zuckerberg pledged last year that his company’s models would outstrip rivals’ efforts in 2025, but Meta’s most recent release, Llama 4, has underperformed on various independent reasoning and coding benchmarks.

The long-term goal of researchers at Meta “has always been to reach human intelligence and go beyond it,” said Yann LeCun, the company’s chief AI scientist, at the VivaTech conference in Paris this week.

Building artificial “general” intelligence—AI technologies that have human-level intelligence—is a popular goal for many AI companies. An increasing number of Silicon Valley groups are also seeking to reach “superintelligence,” a hypothetical scenario where AI systems surpass human intelligence.

The core of Scale’s business has been data-labeling, a manual process of ensuring images and text are accurately labeled and categorized before they are used to train AI models.

Wang has forged relationships with Silicon Valley’s biggest investors and technologists, including OpenAI’s Sam Altman. Scale AI’s early customers were autonomous vehicle companies, but the bulk of its expected $2 billion in revenues this year will come from labeling the data used to train the massive AI models built by OpenAI and others.

The deal will result in a substantial payday for Scale’s early venture capital investors, including Accel, Tiger Global Management, and Index Ventures. Tiger’s $200 million investment is worth more than $1 billion at the company’s new valuation, according to a person with knowledge of the matter.

Additional reporting by Tabby Kinder in San Francisco

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Meta beefs up disappointing AI division with $15 billion Scale AI investment Read More »

how-to-draft-a-will-to-avoid-becoming-an-ai-ghost—it’s-not-easy

How to draft a will to avoid becoming an AI ghost—it’s not easy


Why requests for “no AI resurrections” will probably go ignored.

All right! This AI is TOAST! Credit: Aurich Lawson

As artificial intelligence has advanced, AI tools have emerged to make it possible to easily create digital replicas of lost loved ones, which can be generated without the knowledge or consent of the person who died.

Trained on the data of the dead, these tools, sometimes called grief bots or AI ghosts, may be text-, audio-, or even video-based. Chatting provides what some mourners feel is a close approximation to ongoing interactions with the people they love most. But the tech remains controversial, perhaps complicating the grieving process while threatening to infringe upon the privacy of the deceased, whose data could still be vulnerable to manipulation or identity theft.

Because of suspected harms and perhaps a general repulsion to the idea of it, not everybody wants to become an AI ghost.

After a realistic video simulation was recently used to provide a murder victim’s impact statement in court, Futurism summed up social media backlash, noting that the use of AI was “just as unsettling as you think.” And it’s not the first time people have expressed discomfort with the growing trend. Last May, The Wall Street Journal conducted a reader survey seeking opinions on the ethics of so-called AI resurrections. Responding, a California woman, Dorothy McGarrah, suggested there should be a way to prevent AI resurrections in your will.

“Having photos or videos of lost loved ones is a comfort. But the idea of an algorithm, which is as prone to generate nonsense as anything lucid, representing a deceased person’s thoughts or behaviors seems terrifying. It would be like generating digital dementia after your loved ones’ passing,” McGarrah said. “I would very much hope people have the right to preclude their images being used in this fashion after death. Perhaps something else we need to consider in estate planning?”

For experts in estate planning, the question may start to arise as more AI ghosts pop up. But for now, writing “no AI resurrections” into a will remains a complicated process, experts suggest, and such requests may not be honored by all unless laws are changed to reinforce a culture of respecting the wishes of people who feel uncomfortable with the idea of haunting their favorite people through AI simulations.

Can you draft a will to prevent AI resurrection?

Ars contacted several law associations to find out if estate planners are seriously talking about AI ghosts. Only the National Association of Estate Planners and Councils responded; it connected Ars to Katie Sheehan, an expert in the estate planning field who serves as a managing director and wealth strategist for Crestwood Advisors.

Sheehan told Ars that very few estate planners are prepared to answer questions about AI ghosts. She said not only does the question never come up in her daily work, but it’s also “essentially uncharted territory for estate planners since AI is relatively new to the scene.”

“I have not seen any documents drafted to date taking this into consideration, and I review estate plans for clients every day, so that should be telling,” Sheehan told Ars.

Although Sheehan has yet to see a will attempting to prevent AI resurrection, she told Ars that there could be a path to make it harder for someone to create a digital replica without consent.

“You certainly could draft into a power of attorney (for use during lifetime) and a will (for use post death) preventing the fiduciary (attorney in fact or executor) from lending any of your texts, voice, image, writings, etc. to any AI tools and prevent their use for any purpose during life or after you pass away, and/or lay the ground rules for when they can and cannot be used after you pass away,” Sheehan told Ars.

“This could also invoke issues with contract, property and intellectual property rights, and right of publicity as well if AI replicas (image, voice, text, etc.) are being used without authorization,” Sheehan said.

And there are likely more protections for celebrities than for everyday people, Sheehan suggested.

“As far as I know, there is no law” preventing unauthorized non-commercial digital replicas, Sheehan said.

Widely adopted by states, the Revised Uniform Fiduciary Access to Digital Assets Act—which governs who gets access to online accounts of the deceased, like social media or email accounts—could be helpful but isn’t a perfect remedy.

That law doesn’t directly “cover someone’s AI ghost bot, though it may cover some of the digital material some may seek to use to create a ghost bot,” Sheehan said.

“Absent any law” blocking non-commercial digital replicas, Sheehan expects that people’s requests for “no AI resurrections” will likely “be dealt with in the courts and governed by the terms of one’s estate plan, if it is addressed within the estate plan.”

Those potential fights seemingly could get hairy, as “it may be some time before we get any kind of clarity or uniform law surrounding this,” Sheehan suggested.

In the future, Sheehan said, requests prohibiting digital replicas may eventually become “boilerplate language in almost every will, trust, and power of attorney,” just as instructions on digital assets are now.

As “all things AI become more and more a part of our lives,” Sheehan said, “some aspects of AI and its components may also be woven throughout the estate plan regularly.”

“But we definitely aren’t there yet,” she said. “I have had zero clients ask about this.”

Requests for “no AI resurrections” will likely be ignored

Whether loved ones would—or even should—respect requests blocking digital replicas appears to be debatable. But at least one person who built a grief bot wished he’d done more to get his dad’s permission before moving forward with his own creation.

A computer science professor at the University of Washington Bothell, Muhammad Aurangzeb Ahmad, was one of the earliest AI researchers to create a grief bot more than a decade ago after his father died. He built the bot to ensure that his future kids would be able to interact with his father after seeing how incredible his dad was as a grandfather.

When Ahmad started his project, there was no ChatGPT or other advanced AI model to serve as the foundation, so he had to train his own model based on his dad’s data. Putting immense thought into the effort, Ahmad decided to close off the system from the rest of the Internet so that only his dad’s memories would inform the model. To prevent unauthorized chats, he kept the bot on a laptop that only his family could access.

Ahmad was so intent on building a digital replica that felt just like his dad that it didn’t occur to him until after his family started using the bot that he never asked his dad if this was what he wanted. Over time, he realized that the bot was biased to his view of his dad, perhaps even feeling off to his siblings who had a slightly different relationship with their father. It’s unclear if his dad would similarly view the bot as preserving just one side of him.

Ultimately, Ahmad didn’t regret building the bot, and he told Ars he thinks his father “would have been fine with it.”

But he did regret not getting his father’s consent.

For people creating bots today, seeking consent may be appropriate if there’s any chance the bot may be publicly accessed, Ahmad suggested. He told Ars that he would never have been comfortable with the idea of his dad’s digital replica being publicly available because the question of an “accurate representation” would come even more into play, as malicious actors could potentially access it and sully his dad’s memory.

Today, anybody can use ChatGPT’s model to freely create a similar bot with their own loved one’s data. And a wide range of grief tech services have popped up online, including HereAfter AI, SeanceAI, and StoryFile, Axios noted in an October report detailing the latest ways “AI could be used to ‘resurrect’ loved ones.” As this trend continues “evolving very fast,” Ahmad told Ars that estate planning is probably the best way to communicate one’s AI ghost preferences.

But in a recently published article on “The Law of Digital Resurrection,” law professor Victoria Haneman warned that “there is no legal or regulatory landscape against which to estate plan to protect those who would avoid digital resurrection, and few privacy rights for the deceased. This is an intersection of death, technology, and privacy law that has remained relatively ignored until recently.”

Haneman agreed with Sheehan that “existing protections are likely sufficient to protect against unauthorized commercial resurrections”—like when actors or musicians are resurrected for posthumous performances. However, she thinks that for personal uses, digital resurrections may best be blocked not through estate planning but by passing a “right to deletion” that would focus on granting the living or next of kin the rights to delete the data that could be used to create the AI ghost rather than regulating the output.

A “right to deletion” could help people fight inappropriate uses of their loved ones’ data, whether AI is involved or not. After her article was published, a lawyer reached out to Haneman about a client’s deceased grandmother whose likeness was used to create a meme of her dancing in a church. The grandmother wasn’t a public figure, and the client had no idea “why or how somebody decided to resurrect her deceased grandmother,” Haneman told Ars.

Although Haneman sympathized with the client, “if it’s not being used for a commercial purpose, she really has no control over this use,” Haneman said. “And she’s deeply troubled by this.”

Haneman’s article offers a rare deep dive into the legal topic. It sensitively maps out the vague territory of digital rights of the dead and explains how those laws—or the lack thereof—interact with various laws dealing with death, from human remains to property rights.

In it, Haneman also points out that, on balance, the rights of the living typically outweigh the rights of the dead, and even specific instructions on how to handle human remains aren’t generally considered binding. Some requests, like organ donation that can benefit the living, are considered critical, Haneman noted. But there are mixed results on how courts enforce other interests of the dead—like a famous writer’s request to destroy all unpublished work or a pet lover’s insistence to destroy their cat or dog at death.

She told Ars that right now, “a lot of people are like, ‘Why do I care if somebody resurrects me after I’m dead?’ You know, ‘They can do what they want.’ And they think that, until they find a family member who’s been resurrected by a creepy ex-boyfriend or their dead grandmother’s resurrected, and then it becomes a different story.”

Existing law may protect “the privacy interests of the loved ones of the deceased from outrageous or harmful digital resurrections of the deceased,” Haneman noted, but in the case of the dancing grandma, her meme may not be deemed harmful, no matter how much it troubles the grandchild to see her grandma’s memory warped.

Limited legal protections may not matter so much if, culturally, communities end up developing a distaste for digital replicas, particularly if it becomes widely viewed as disrespectful to the dead, Haneman suggested. Right now, however, society is more fixated on solving other problems with deepfakes rather than clarifying the digital rights of the dead. That could be because few people have been impacted so far, or it could also reflect a broader cultural tendency to ignore death, Haneman told Ars.

“We don’t want to think about our own death, so we really kind of brush aside whether or not we care about somebody else being digitally resurrected until it’s in our face,” Haneman said.

Over time, attitudes may change, especially if the so-called “digital afterlife industry” takes off. And there is some precedent that the law could be changed to reinforce any culture shift.

“The throughline revealed by the law of the dead is that a sacred trust exists between the living and the deceased, with an emphasis upon protecting common humanity, such that data afforded no legal status (or personal data of the deceased) may nonetheless be treated with dignity and receive some basic protections,” Haneman wrote.

An alternative path to prevent AI resurrection

Preventing yourself from becoming an AI ghost seemingly now falls in a legal gray zone that policymakers may need to address.

Haneman calls for a solution that doesn’t depend on estate planning, which she warned “is a structurally inequitable and anachronistic approach that maximizes social welfare only for those who do estate planning.” More than 60 percent of Americans die without a will, often including “those without wealth,” as well as women and racial minorities who “are less likely to die with a valid estate plan in effect,” Haneman reported.

“We can do better in a technology-based world,” Haneman wrote. “Any modern framework should recognize a lack of accessibility as an obstacle to fairness and protect the rights of the most vulnerable through approaches that do not depend upon hiring an attorney and executing an estate plan.”

Rather than twist the law to “recognize postmortem privacy rights,” Haneman advocates for a path for people resistant to digital replicas that focuses on a right to delete the data that would be used to create the AI ghost.

“Put simply, the deceased may exert control over digital legacy through the right to deletion of data but may not exert broader rights over non-commercial digital resurrection through estate planning,” Haneman recommended.

Sheehan told Ars that a right to deletion would likely involve estate planners, too.

“If this is not addressed in an estate planning document and not specifically addressed in the statute (or deemed under the authority of the executor via statute), then the only way to address this would be to go to court,” Sheehan said. “Even with a right of deletion, the deceased would need to delete said data before death or authorize his executor to do so post death, which would require an estate planning document, statutory authority, or court authority.”

Haneman agreed that for many people, estate planners would still be involved, recommending that “the right to deletion would ideally, from the perspective of estate administration, provide for a term of deletion within 12 months.” That “allows the living to manage grief and open administration of the estate before having to address data management issues,” Haneman wrote, and perhaps adequately balances “the interests of society against the rights of the deceased.”

To Haneman, it’s also the better solution for the people left behind because “creating a right beyond data deletion to curtail unauthorized non-commercial digital resurrection creates unnecessary complexity that overreaches, as well as placing the interests of the deceased over those of the living.”

Future generations may be raised with AI ghosts

If a dystopia that experts paint comes true, Big Tech companies may one day profit by targeting grieving individuals to seize the data of the dead, which could be more easily abused since it’s granted fewer rights than data of the living.

Perhaps in that future, critics suggest, people will be tempted into free trials in moments when they’re missing their loved ones most, then forced to either pay a subscription to continue accessing the bot or else perhaps be subjected to ad-based models where their chats with AI ghosts may even feature ads in the voices of the deceased.

Today, even in a world where AI ghosts aren’t yet compelling ad clicks, some AI ethicists have warned that interacting with AI ghosts could cause mental health harms, especially if the digital afterlife industry isn’t carefully designed, New Scientist reported. Some people may end up getting stuck maintaining an AI ghost if it’s left behind as a gift, and ethicists suggested that the emotional weight of that could eventually take a negative toll. While saying goodbye is hard, letting go is considered a critical part of healing during the mourning process, and AI ghosts may make that harder.

But the bots can be a helpful tool to manage grief, some experts suggest, provided that their use is limited to allow for a typical mourning process or combined with therapy from a trained professional, Al Jazeera reported. Ahmad told Ars that working on his bot has not only kept his father close to him but also helped him think more deeply about relationships and memory.

Haneman noted that people have many ways of honoring the dead. Some erect statues, and others listen to saved voicemails or watch old home movies. For some, just “smelling an old sweater” is a comfort. And creating digital replicas, as creepy as some people might find them, is not that far off from these traditions, Haneman said.

“Feeding text messages and emails into existing AI platforms such as ChatGPT and asking the AI to respond in the voice of the deceased is simply a change in degree, not in kind,” Haneman said.

For Ahmad, the decision to create a digital replica of his dad was a learning experience, and perhaps his experience shows why any family or loved one weighing the option should carefully consider it before starting the process.

In particular, he warns families to be careful introducing young kids to grief bots, as they may not be able to grasp that the bot is not a real person. When he initially saw his young kids growing confused with whether their grandfather was alive or not—the introduction of the bot was complicated by the early stages of the pandemic, a time when they met many relatives virtually—he decided to restrict access to the bot until they were older. For a time, the bot only came out for special events like birthdays.

He also realized that introducing the bot also forced him to have conversations about life and death with his kids at ages younger than he remembered fully understanding those concepts in his own childhood.

Now, Ahmad’s kids are among the first to be raised among AI ghosts. To enhance the family’s experience, Ahmad continually updates his father’s digital replica. He is currently most excited about recent audio advancements that make it easier to add a voice element. He hopes that within the next year, he might be able to use AI to finally nail down his South Asian father’s accent, which up to now has always sounded “just off.” For others working in this space, the next frontier is realistic video or even augmented reality tools, Ahmad told Ars.

To this day, the bot retains sentimental value for Ahmad, but, as Haneman suggested, the bot was not the only way he memorialized his dad. He also created a mosaic, and while his father never saw it, either, Ahmad thinks his dad would have approved.

“He would have been very happy,” Ahmad said.

There’s no way to predict how future generations may view grief tech. But while Ahmad said he’s not sure he’d be interested in an augmented reality interaction with his dad’s digital replica, kids raised seeing AI ghosts as a natural part of their lives may not be as hesitant to embrace or even build new features. Talking to Ars, Ahmad fondly remembered his young daughter once saw that he was feeling sad and came up with her own AI idea to help her dad feel better.

“It would be really nice if you can just take this program and we build a robot that looks like your dad, and then add it to the robot, and then you can go and hug the robot,” she said, according to her father’s memory.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

How to draft a will to avoid becoming an AI ghost—it’s not easy Read More »

ai-overviews-hallucinates-that-airbus,-not-boeing,-involved-in-fatal-air-india-crash

AI Overviews hallucinates that Airbus, not Boeing, involved in fatal Air India crash

When major events occur, most people rush to Google to find information. Increasingly, the first thing they see is an AI Overview, a feature that already has a reputation for making glaring mistakes. In the wake of a tragic plane crash in India, Google’s AI search results are spreading misinformation claiming the incident involved an Airbus plane—it was actually a Boeing 787.

Travelers are more attuned to the airliner models these days after a spate of crashes involving Boeing’s 737 lineup several years ago. Searches for airline disasters are sure to skyrocket in the coming days, with reports that more than 200 passengers and crew lost their lives in the Air India Flight 171 crash. The way generative AI operates means some people searching for details may get the wrong impression from Google’s results page.

Not all searches get AI answers, but Google has been steadily expanding this feature since it debuted last year. One searcher on Reddit spotted a troubling confabulation when searching for crashes involving Airbus planes. AI Overviews, apparently overwhelmed with results reporting on the Air India crash, stated confidently (and incorrectly) that it was an Airbus A330 that fell out of the sky shortly after takeoff. We’ve run a few similar searches—some of the AI results say Boeing, some say Airbus, and some include a strange mashup of both Airbus and Boeing. It’s a mess.

In this search, Google’s AI says the crash involved an Airbus A330 instead of a Boeing 787. Credit: /u/stuckintrraffic

But why is Google bringing up the Air India crash at all in the context of Airbus? Unfortunately, it’s impossible to predict whether you’ll get an AI Overview that blames Boeing or Airbus—generative AI is non-deterministic, meaning the output can differ from one run to the next, even for identical inputs. Our best guess for the underlying cause is that numerous articles on the Air India crash mention Airbus as Boeing’s main competitor. AI Overviews essentially summarizes those results, and the AI goes down the wrong path because it lacks the ability to understand what is true.
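For intuition on the non-determinism point, here is a minimal sketch (my own illustration, not Google’s actual pipeline) of temperature-based token sampling: the same input can yield different outputs on different runs because each next token is drawn from a probability distribution.

```python
import numpy as np

rng = np.random.default_rng()

def sample_next_token(logits, temperature=0.8):
    """Sample one token index from softmax(logits / temperature)."""
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy vocabulary and hypothetical scores: the model only slightly favors "Boeing".
vocab = ["Boeing", "Airbus", "aircraft"]
logits = [2.1, 1.9, 0.5]

for run in range(3):
    print(f"run {run}: {vocab[sample_next_token(logits)]}")  # can differ each run
```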

AI Overviews hallucinates that Airbus, not Boeing, involved in fatal Air India crash Read More »

amazon-prime-video-subscribers-sit-through-up-to-6-minutes-of-ads-per-hour

Amazon Prime Video subscribers sit through up to 6 minutes of ads per hour

Amazon forced all Prime Video subscribers onto a new ad-based subscription tier in January 2024 unless users paid more for their subscription type. Now, the tech giant is reportedly showing twice as many ads to subscribers as it did when it started selling ad-based streaming subscriptions.

Currently, anyone who signs up for Amazon Prime (which is $15 per month or $139 per year) gets Prime Video with ads. If they don’t want to see commercials, they have to pay an extra $3 per month. One can also subscribe to Prime Video alone for $9 per month with ads or $12 per month without ads.

When Amazon originally announced the ad tier, it said it would deliver “meaningfully fewer ads than linear TV and other streaming TV providers.” But based on “six ad buyers and documents” that ad trade publication AdWeek reported viewing, Amazon’s ad load now averages four to six minutes of advertisements per hour.

“Prime Video ad load has gradually increased to four to six minutes per hour,” an Amazon representative said via email to an ad buyer this month, AdWeek reported.

That would mean that Prime Video subscribers are spending significantly more time sitting through ads than they did at the launch of Prime Video with ads. According to a report from The Wall Street Journal (WSJ) at the time, which cited an Amazon presentation it said it reviewed, “the average ad load at launch was two to three-and-a-half minutes.” However, when reached for comment, an Amazon Ads representative told Ars Technica that the WSJ didn’t confirm that figure directly with Amazon.

Amazon’s Ads spokesperson, however, declined to specify to Ars how many ads Amazon typically shows to Prime Video subscribers today or showed in the past.

Instead, they shared a statement saying:

We remain focused on prioritizing ad innovation over volume. While demand continues to grow, our commitment is to improving ad experiences rather than simply increasing the number of ads shown. Since the beginning of this year alone, we’ve announced multiple capabilities, including Brand+, Complete TV, and new ad formats—all designed to deliver industry-leading relevancy and enhanced customer experiences. We will continue to invest in this important work, creating meaningful innovations that benefit both customers and advertisers alike.

Kendra Tang, programmatic supervisor at ad firm Rain the Growth Agency, told AdWeek that Amazon “told us the ad load would be increasing” and that she’s seen more ad opportunities made available in Amazon’s ad system.

Amazon Prime Video subscribers sit through up to 6 minutes of ads per hour Read More »

experimental-retina-implants-give-mice-infrared-vision

Experimental retina implants give mice infrared vision

Finally, the tellurium meshes, especially the infrared vision capability they offered, were tested on healthy macaques, an animal model that’s much closer to humans than mice. It turned out implanted macaques could perceive infrared light, and their normal vision remained unchanged.

However, there are still a few roadblocks before we go all Cyberpunk with eye implants.

Sensitivity issues

Tellurium meshes, as the Fudan team admits in their paper, are far less sensitive to light than natural photoreceptors, and it’s hard to say if they really are a good candidate for retinal prostheses. The problem with using animal models in vision science is that it’s hard to ask a mouse or a macaque what they actually see with the implants and figure out how the electrical signals from their tellurium meshes are converted into perception in the brain.

Based on the Fudan experiments, we know the implanted animals reacted to light, albeit a bit less effectively than those with healthy vision. We also know they needed an adaptation period; the implanted mice didn’t score their impressive results on their first try. They needed to learn what the sudden signals coming from their eyes meant, just like humans who had used electrode arrays in the past. Finally, shapes in the shape recognition tests were projected with lasers, which makes it difficult to tell how the implant would perform in normal daylight.

There are also risks that come with the implantation procedure itself. The surgery involves making a local retina detachment, followed by a small retinal incision to insert the implant. According to Eduardo Fernández, a Spanish bioengineer who published a commentary to Fudan’s work in Science, doing this in fragile, diseased retinas poses a risk of fibrosis and scarring. Still, Fernández found the Chinese implants “promising.” The Fudan team is currently working on long-term safety assessments of their implants in non-human primates and on improving the coupling between the retina and the implant.

The Fudan team’s work on tellurium retinal implants is published in Science.

Science, 2025. DOI: 10.1126/science.ady4439

Experimental retina implants give mice infrared vision Read More »

rfk-jr.-announces-8-appointees-to-cdc-vaccine-panel—they’re-not-good

RFK Jr. announces 8 appointees to CDC vaccine panel—they’re not good

Anti-vaccine advocate and current Health Secretary Robert F. Kennedy Jr. took to social media Wednesday to announce the names of eight people he is appointing to a critical federal vaccine advisory committee—which is currently empty after Kennedy abruptly fired all 17 previous members Monday.

In the past, the vetting process for appointing new members to the Centers for Disease Control and Prevention’s Advisory Committee on Immunization Practices (ACIP) could take years. But Kennedy has taken just two days.

The panel, typically stocked with vaccine, infectious disease, and public health experts, carefully and publicly reviews, analyzes, and debates vaccine data and offers recommendations to the CDC via votes. The CDC typically adopts the recommendations, which set clinical practices nationwide and determine insurance coverage for vaccinations.

Yesterday, Kennedy pledged that none of the new ACIP members would be “ideological anti-vaxxers.” However, the list of today’s appointees includes Robert Malone, who falsely claims to have invented mRNA vaccines and has spent the past several years spreading misinformation and conspiracy theories about them.

Speaking at an anti-vaccine rally in 2022, Malone spread dangerous falsehoods about mRNA COVID-19 vaccines: “These genetic vaccines can damage your children. They may damage their brains, their heart, their immune system and their ability to have children in the future. Many of these damages cannot be repaired.”

Troubling list

Malone aligned with the anti-vaccine crowd during the pandemic and has become a mainstay in conspiratorial circles and an ally to Kennedy. He has claimed that vaccines cause a “form of AIDS,” among other nonsense. He has also meddled with responses to the measles outbreak that erupted in West Texas in January. In April, Malone was the first to publicize news that a second child had died from the highly infectious and serious infection, but he did so to falsely claim that measles wasn’t the cause and to spread other dangerous misinformation.

RFK Jr. announces 8 appointees to CDC vaccine panel—they’re not good Read More »

scientists-built-a-badminton-playing-robot-with-ai-powered-skills

Scientists built a badminton-playing robot with AI-powered skills

It also learned fall avoidance and determined how much risk was reasonable to take given its limited speed. The robot did not attempt impossible plays that would create the potential for serious damage—it was committed, but not suicidal.

But when it finally played humans, it turned out ANYmal, as a badminton player, was amateur at best.

The major leagues

The first problem was its reaction time. An average human reacts to visual stimuli in around 0.2–0.25 seconds. Elite badminton players with trained reflexes, anticipation, and muscle memory can cut this time down to 0.12–0.15 seconds. ANYmal needed roughly 0.35 seconds after the opponent hit the shuttlecock to register trajectories and figure out what to do.

Part of the problem was poor eyesight. “I think perception is still a big issue,” Ma said. “The robot localized the shuttlecock with the stereo camera and there could be a positioning error introduced at each timeframe.” The camera also had a limited field of view, which meant the robot could see the shuttlecock for only a limited time before it had to act. “Overall, it was suited for more friendly matches—when the human player starts to smash, the success rate goes way down for the robot,” Ma acknowledged.

But his team already has some ideas on how to make ANYmal better. Reaction time can be improved by predicting the shuttlecock trajectory based on the opponent’s body position rather than waiting to see the shuttlecock itself—a technique commonly used by elite badminton or tennis players. To improve ANYmal’s perception, the team wants to fit it with more advanced hardware, like event cameras—vision sensors that register movement with ultra-low latencies in the microseconds range. Other improvements might include faster, more capable actuators.

“I think the training framework we propose would be useful in any application where you need to balance perception and control—picking objects up, even catching and throwing stuff,” Ma suggested. Sadly, one thing that’s almost certainly off the table is taking ANYmal to major leagues in badminton or tennis. “Would I set up a company selling badminton-playing robots? Well, maybe not,” Ma said.

Science Robotics, 2025. DOI: 10.1126/scirobotics.adu3922

Scientists built a badminton-playing robot with AI-powered skills Read More »

mario-kart-world-review:-getting-there-is-half-the-game

Mario Kart World review: Getting there is half the game

While that kind of item-based back-and-forth isn’t new to Mario Kart, it feels like it has been taken to a new extreme by World‘s more crowded race track. If you’re in the middle of the pack, every tranche of item boxes you pass can lead, in short order, to a flurry of near-unavoidable projectiles and item-enhanced opponents cluttering your immediate space. That’s especially true in online races, where human opponents tend to be much more ruthless with their item use than even the hardest computer-controlled opponents.

That blue “Kaboom!” can send you from first to 17th in a hurry.

The change ultimately rewards defensive driving, where you do your best to avoid other racers and utilize protective items until you have a chance to rocket into the relative safety of the top few positions. Sometimes, though, there’s simply no avoiding a maddening series of bad breaks that can literally send you from first place to 19th in an instant.

It’s not the destination, it’s the journey

Once you’ve adjusted to the more crowded field of racers, you’ll then have to get used to the odd structure of Mario Kart World‘s main racing modes. Rather than racing multiple laps around the game’s well-designed tracks, you’ll spend the bulk of your race time in most racing modes trekking between those tracks across the great expanses of Mario Kart World‘s, uh, world.

Get used to seeing very straight sections like this in most of the game’s racing modes. Credit: Nintendo

These inter-course interludes offer a decent variety to the structure, which will see you traveling through desert wastelands, down traffic-clogged highways, along the surface of tiered waterfalls, across frozen tundra, and more. What’s unavoidably similar about most of them, though, is their unbearable straightness. Players used to the undulating, crisscrossing curves of a standard Mario Kart course will marvel at just how rarely they have to powerslide around turns while shuttling between those courses in World.

These long straightaways aren’t boring, per se. The designers have done their best to dress them up with plenty of obstacles (of the stationary, vehicular, and livestock varieties), as well as jumps, dash pads, and frequent item boxes to make sure you’re still paying attention. But it can still be a jarring transition to go from two or three minutes across one of these mostly straight interregnums into the usual twisty wildness of the game’s more familiar pre-designed courses.

Mario Kart World review: Getting there is half the game Read More »