By Mike M.


Music industry giants allege mass copyright violation by AI firms

No one wants to be defeated —

Suno and Udio could face damages of up to $150,000 per song allegedly infringed.

Michael Jackson in concert, 1986. Sony Music owns a large portion of publishing rights to Jackson’s music.

Universal Music Group, Sony Music, and Warner Records have sued AI music-synthesis companies Udio and Suno for allegedly committing mass copyright infringement by using recordings owned by the labels to train music-generating AI models, reports Reuters. Udio and Suno can generate novel song recordings based on text-based descriptions of music (e.g., “a dubstep song about Linus Torvalds”).

The lawsuits, filed in federal courts in New York and Massachusetts, claim that the AI companies’ use of copyrighted material to train their systems could lead to AI-generated music that directly competes with and potentially devalues the work of human artists.

Like other generative AI models, both Udio and Suno (which we covered separately in April) rely on a broad selection of existing human-created artworks that teach a neural network the relationship between words in a written prompt and styles of music. The record labels correctly note that these companies have been deliberately vague about the sources of their training data.

Until generative AI models hit the mainstream in 2022, it was common practice in machine learning to scrape and use copyrighted information without seeking permission to do so. But now that the applications of those technologies have become commercial products themselves, rightsholders have come knocking to collect. In the case of Udio and Suno, the record labels are seeking statutory damages of up to $150,000 per song used in training.

In the lawsuits, the record labels cite specific examples of AI-generated content that allegedly re-creates elements of well-known songs, including The Temptations’ “My Girl,” Mariah Carey’s “All I Want for Christmas Is You,” and James Brown’s “I Got You (I Feel Good).” The suits also claim the music-synthesis models can produce vocals resembling those of famous artists, such as Michael Jackson and Bruce Springsteen.

Reuters claims it’s the first instance of lawsuits specifically targeting music-generating AI, but music companies and artists alike have been gearing up for some time to deal with the challenges the technology may pose.

In May, Sony Music sent warning letters to over 700 AI companies (including OpenAI, Microsoft, Google, Suno, and Udio) and music-streaming services, prohibiting them from using its music to train AI models. In April, over 200 musical artists signed an open letter calling on AI companies to stop using AI to “devalue the rights of human artists.” And last November, Universal Music filed a copyright infringement lawsuit against Anthropic for allegedly including artists’ lyrics in its Claude LLM training data.

Similar to The New York Times’ lawsuit against OpenAI over the use of training data, the outcome of the record labels’ new suit could have deep implications for the future development of generative AI in creative fields, including requiring companies to license all musical training data used in creating music-synthesis models.

Compulsory licenses for AI training data could make AI model development economically impractical for small startups like Udio and Suno—and judging by the aforementioned open letter, many musical artists may applaud that potential outcome. But such a development would not preclude major labels from eventually developing their own AI music generators, allowing only large corporations with deep pockets to control generative music tools for the foreseeable future.



Astronomers think they’ve figured out how and when Jupiter’s Red Spot formed

a long-lived vortex —

Astronomers concluded that today’s Great Red Spot is not the same one Cassini observed, which vanished from the record after 1708.

Enhanced image of Jupiter’s Great Red Spot, as seen from a Juno flyby in 2018. The Red Spot we see today is likely not the same one famously observed by Cassini in the 1600s.

The planet Jupiter is particularly known for its so-called Great Red Spot, a swirling vortex in the gas giant’s atmosphere that has been around since at least 1831. But how it formed and how old it is remain matters of debate. Astronomers in the 1600s, including Giovanni Cassini, also reported a similar spot in their observations of Jupiter that they dubbed the “Permanent Spot.” This prompted scientists to question whether the spot Cassini observed is the same one we see today. We now have an answer to that question: The spots are not the same, according to a new paper published in the journal Geophysical Research Letters.

“From the measurements of sizes and movements, we deduced that it is highly unlikely that the current Great Red Spot was the ‘Permanent Spot’ observed by Cassini,” said co-author Agustín Sánchez-Lavega of the University of the Basque Country in Bilbao, Spain. “The ‘Permanent Spot’ probably disappeared sometime between the mid-18th and 19th centuries, in which case we can now say that the longevity of the Red Spot exceeds 190 years.”

The planet Jupiter was known to Babylonian astronomers in the 7th and 8th centuries BCE, as well as to ancient Chinese astronomers; the latter’s observations would eventually give birth to the Chinese zodiac in the 4th century BCE, with its 12-year cycle based on the gas giant’s orbit around the Sun. In 1610, aided by the emergence of telescopes, Galileo Galilei famously observed Jupiter’s four largest moons, thereby bolstering the Copernican heliocentric model of the solar system.

(a) 1711 painting of Jupiter by Donato Creti showing the reddish Permanent Spot. (b) November 2, 1880, drawing of Jupiter by E.L. Trouvelot. (c) November 28, 1881, drawing by T.G. Elger.

Public domain

Robert Hooke may have observed the “Permanent Spot” as early as 1664, with Cassini following suit a year later and multiple further sightings recorded through 1708. Then it disappeared from the astronomical record. A pharmacist named Heinrich Schwabe made the earliest known drawing of the Red Spot in 1831, and by 1878 it was once again quite prominent in observations of Jupiter, fading again in 1883 and at the onset of the 20th century.

Perhaps the spot is not the same…

But was this the same Permanent Spot that Cassini had observed? Sánchez-Lavega and his co-authors set out to answer this question, combing through historical sources—including Cassini’s notes and drawings from the 17th century—and more recent astronomical observations and quantifying the results. They conducted a year-by-year measurement of the sizes, ellipticity, area, and motions of both the Permanent Spot and the Great Red Spot from the earliest recorded observations into the 21st century.

The team also performed multiple numerical computer simulations testing different models of vortex behavior in Jupiter’s atmosphere that could explain the Great Red Spot, which is essentially a massive, persistent anticyclonic storm. In one of the models the authors tested, the spot forms in the wake of a massive superstorm. Alternatively, several smaller vortices created by wind shear may have merged, or an instability in the planet’s wind currents could have produced an elongated atmospheric cell shaped like the spot.

Sánchez-Lavega et al. concluded that the current Red Spot is probably not the same as that observed by Cassini and others in the 17th century. They argue that the Permanent Spot had faded by the start of the 18th century, and a new spot formed in the 19th century—the one we observe today, making it more than 190 years old.

Comparison between the Permanent Spot and the current Great Red Spot. (a) December 1690. (b) January 1691. (c) January 19, 1672. (d) August 10, 2023.

Public domain/Eric Sussenbach

But maybe it is?

Others remain unconvinced of that conclusion, such as astronomer Scott Bolton of the Southwest Research Institute in Texas. “What I think we may be seeing is not so much that the storm went away and then a new one came in almost the same place,” he told New Scientist. “It would be a very big coincidence to have it occur at the same exact latitude, or even a similar latitude. It could be that what we’re really watching is the evolution of the storm.”

The numerical simulations ruled out the merging vortices model for the spot’s formation; it is much more likely that it’s due to wind currents producing an elongated atmospheric cell. Furthermore, in 1879, the Red Spot measured about 24,200 miles (39,000 kilometers) at its longest axis; it now measures about 8,700 miles (14,000 kilometers). So the spot has been shrinking and becoming more rounded over the ensuing decades. The Juno mission’s most recent observations also revealed that the spot is thin and shallow.

The question of why the Great Red Spot is shrinking remains a matter of debate. The team plans further simulations aimed at reproducing the shrinking dynamics and predicting whether the spot will stabilize at a certain size or eventually disappear, as Cassini’s Permanent Spot presumably did.

Geophysical Research Letters, 2024. DOI: 10.1029/2024GL108993  (About DOIs).



Pornhub prepares to block five more states rather than check IDs

“Uphill battle” —

The number of states blocked by Pornhub will soon nearly double.


Aurich Lawson | Getty Images

Pornhub will soon be blocked in five more states as the adult site continues to fight what it considers privacy-infringing age-verification laws that require Internet users to provide an ID to access pornography.

On July 1, according to a blog post on the adult site announcing the impending block, Pornhub visitors in Indiana, Idaho, Kansas, Kentucky, and Nebraska will be “greeted by a video featuring” adult entertainer Cherie Deville, “who explains why we had to make the difficult decision to block them from accessing Pornhub.”

Pornhub explained that—similar to blocks in Texas, Utah, Arkansas, Virginia, Montana, North Carolina, and Mississippi—the site refuses to comply with soon-to-be-enforceable age-verification laws in this new batch of states that allegedly put users at “substantial risk” of identity theft, phishing, and other harms.

Age-verification laws requiring adult site visitors to submit “private information many times to adult sites all over the Internet” normalize the unnecessary disclosure of personally identifiable information (PII), Pornhub argued, warning that “this is not a privacy-by-design approach.”

Pornhub does not outright oppose age verification but advocates for laws that require device-based age verification, which allows users to access adult sites after authenticating their identity on their devices. That’s “the best and most effective solution for protecting minors and adults alike,” Pornhub argued, because the age-verification technology is proven and less PII would be shared.

“Users would only get verified once, through their operating system, not on each age-restricted site,” Pornhub’s blog said, claiming that “this dramatically reduces privacy risks and creates a very simple process for regulators to enforce.”
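
As a thought experiment, here is a minimal sketch of the flow Pornhub describes, assuming a hypothetical OS-level service that verifies a user’s age once and then answers yes/no queries from sites; every name in it is invented for illustration and is not taken from any real API.

```python
# Hypothetical sketch of device-based age verification as Pornhub describes it:
# the user verifies once with the OS, which keeps the birthdate locally and
# answers boolean attestation queries. All class and function names are invented.
from datetime import date

class DeviceVerifier:
    """Stands in for an OS-level service holding a once-verified birthdate."""

    def __init__(self, verified_birthdate: date):
        self._birthdate = verified_birthdate  # stays on the device, never shared

    def attest_age(self, minimum_age: int) -> bool:
        """Return only a boolean; the asking site learns no PII."""
        today = date.today()
        age = today.year - self._birthdate.year - (
            (today.month, today.day) < (self._birthdate.month, self._birthdate.day)
        )
        return age >= minimum_age

# An age-restricted site queries the device rather than collecting an ID:
verifier = DeviceVerifier(verified_birthdate=date(1990, 6, 1))
if verifier.attest_age(18):
    print("Serve content")  # the site knows only "old enough"
else:
    print("Block content")
```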

A spokesperson for Pornhub owner Aylo told Ars that “unfortunately, the way many jurisdictions worldwide have chosen to implement age verification is ineffective, haphazard, and dangerous.”

“Any regulations that require hundreds of thousands of adult sites to collect significant amounts of highly sensitive personal information is putting user safety in jeopardy,” Aylo’s spokesperson told Ars. “Moreover, as experience has demonstrated, unless properly enforced, users will simply access non-compliant sites or find other methods of evading these laws.”

Age-verification laws are harmful, Pornhub says

Pornhub’s big complaint with current age-verification laws is that they are hard to enforce and seem to make visiting an adult site riskier than ever.

“Since age verification software requires users to hand over extremely sensitive information, it opens the door for the risk of data breaches,” Pornhub’s blog said. “Whether or not your intentions are good, governments have historically struggled to secure this data. It also creates an opportunity for criminals to exploit and extort people through phishing attempts or fake [age verification] processes, an unfortunate and all too common practice.”

Over the past few years, the risk of identity theft or stolen PII on both widely used and smaller niche adult sites has been well-documented.

Hundreds of millions of people were impacted by major leaks exposing PII shared with popular adult sites like Adult Friend Finder and Brazzers in 2016, while likely tens of thousands of users were targeted on eight poorly secured adult sites in 2018. Niche and free sites have also been vulnerable to attacks, including millions collectively exposed through breaches of fetish porn site Luscious in 2019 and MyFreeCams in 2021.

And those are just the big breaches that make headlines. In 2019, Kaspersky Lab reported that malware targeting online porn account credentials more than doubled in 2018, and researchers analyzing 22,484 pornography websites estimated that 93 percent were leaking user data to a third party.

That’s why Pornhub argues that, as states have passed age-verification laws requiring ID, they’ve “introduced harm” by redirecting visitors to adult sites that have fewer privacy protections and worse security, allegedly exposing users to more threats.

As an example, Pornhub reported that traffic to its site in Louisiana “dropped by approximately 80 percent” after the state’s age-verification law passed. That allegedly showed not just how few users were willing to show an ID to access the popular platform, but also how “very easily” users could simply move to “pirate, illegal, or other non-compliant sites that don’t ask visitors to verify their age.”

Pornhub has continued to argue that states passing laws like Louisiana’s cannot effectively enforce the laws and are simply shifting users to make riskier choices when accessing porn.

“The Louisiana law and other copycat state-level laws have no regulator, only civil liability, which results in a flawed enforcement regime, effectively making it an option for platform operators to comply,” Pornhub’s blog said. As one of the world’s most popular adult platforms, Pornhub would surely be targeted for enforcement if found to be non-compliant, while smaller adult sites perhaps plagued by security risks and disincentivized to check IDs would go unregulated, the thinking goes.

Aylo’s spokesperson shared 2023 Similarweb data with Ars showing that sites complying with age-verification laws in Virginia, including Pornhub and xHamster, lost substantial traffic while seven non-compliant sites saw a sharp uptick in traffic. Similar trends appeared in Google Trends data for Utah and Mississippi, while market shares seemingly held steady in California, a state not yet checking IDs to access adult sites.



Radioactive drugs strike cancer with precision

Pharma interest and investment in radiotherapy drugs is heating up.

Knowable Magazine

On a Wednesday morning in late January 1896, at a small light bulb factory in Chicago, a middle-aged woman named Rose Lee found herself at the heart of a groundbreaking medical endeavor. With an X-ray tube positioned above the tumor in her left breast, Lee was treated with a torrent of high-energy particles that penetrated the malignant mass.

“And so,” as her treating clinician later wrote, “without the blaring of trumpets or the beating of drums, X-ray therapy was born.”

Radiation therapy has come a long way since those early beginnings. The discovery of radium and other radioactive metals opened the doors to administering higher doses of radiation to target cancers located deeper within the body. The introduction of proton therapy later made it possible to precisely guide radiation beams to tumors, thus reducing damage to surrounding healthy tissues—a degree of accuracy that was further refined through improvements in medical physics, computer technologies, and state-of-the-art imaging techniques.

But it wasn’t until the new millennium, with the arrival of targeted radiopharmaceuticals, that the field achieved a new level of molecular precision. These agents, akin to heat-seeking missiles programmed to hunt down cancer, journey through the bloodstream to deliver their radioactive warheads directly at the tumor site.

Use of radiation to kill cancer cells has a long history. In this 1915 photo, a woman receives “roentgenotherapy”—treatment with X-rays—directed at an epithelial-cell cancer on her face.

Wikimedia Commons

Today, only a handful of these therapies are commercially available for patients—specifically, for forms of prostate cancer and for tumors originating within hormone-producing cells of the pancreas and gastrointestinal tract. But this number is poised to grow as major players in the biopharmaceutical industry begin to invest heavily in the technology.

AstraZeneca became the latest heavyweight to join the field when, on June 4, the company completed its purchase of Fusion Pharmaceuticals, maker of next-generation radiopharmaceuticals, in a deal worth up to $2.4 billion. The move follows similar billion-dollar-plus transactions made in recent months by Bristol Myers Squibb (BMS) and Eli Lilly, along with earlier takeovers of innovative radiopharmaceutical firms by Novartis, which continued its acquisition streak—begun in 2018—with another planned $1 billion upfront payment for a radiopharma startup, as revealed in May.

“It’s incredible how, suddenly, it’s all the rage,” says George Sgouros, a radiological physicist at Johns Hopkins University School of Medicine in Baltimore and the founder of Rapid, a Baltimore-based company that provides software and imaging services to support radiopharmaceutical drug development. This surge in interest, he points out, underscores a wider recognition that radiopharmaceuticals offer “a fundamentally different way of treating cancer.”

Treating cancer differently, however, means navigating a minefield of unique challenges, particularly in the manufacturing and meticulously timed distribution of these new therapies, before the radioactivity decays. Expanding the reach of the therapy to treat a broader array of cancers will also require harnessing new kinds of tumor-killing particles and finding additional suitable targets.

“There’s a lot of potential here,” says David Nierengarten, an analyst who covers the radiopharmaceutical space for Wedbush Securities in San Francisco. But, he adds, “There’s still a lot of room for improvement.”

Atomic advances

For decades, a radioactive form of iodine stood as the sole radiopharmaceutical available on the market. Once ingested, this iodine gets taken up by the thyroid, where it helps to destroy cancerous cells of that butterfly-shaped gland in the neck—a treatment technique established in the 1940s that remains in common use today.

But the targeted nature of this strategy is not widely applicable to other tumor types.

The thyroid is naturally inclined to absorb iodine from the bloodstream since this mineral, which is found in its nonradioactive form in many foods, is required for the synthesis of certain hormones made by the gland.

Other cancers don’t have a comparable affinity for radioactive elements. So instead of hijacking natural physiological pathways, researchers have had to design drugs that are capable of recognizing and latching onto specific proteins made by tumor cells. These drugs are then further engineered to act as targeted carriers, delivering radioactive isotopes—unstable atoms that emit nuclear energy—straight to the malignant site.



Anthropic introduces Claude 3.5 Sonnet, matching GPT-4o on benchmarks

The Anthropic Claude 3 logo, jazzed up by Benj Edwards.

Anthropic / Benj Edwards

On Thursday, Anthropic announced Claude 3.5 Sonnet, its latest AI language model and the first in a new series of “3.5” models that build upon Claude 3, launched in March. Claude 3.5 can compose text, analyze data, and write code. It features a 200,000-token context window and is available now on the Claude website and through an API. Anthropic also introduced Artifacts, a new feature in the Claude interface that displays generated documents and code in a dedicated window beside the conversation.
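
For readers who want to try the model programmatically, here is a minimal sketch using Anthropic’s Python SDK; the model identifier and response handling follow the SDK’s launch-day conventions, so treat the specifics as assumptions that may change.

```python
# Minimal sketch of calling Claude 3.5 Sonnet via Anthropic's Python SDK.
# Assumes `pip install anthropic` and an ANTHROPIC_API_KEY environment variable;
# the model ID reflects launch-day naming and may be updated over time.
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}
    ],
)

print(message.content[0].text)  # the model's reply as plain text
```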

So far, people outside of Anthropic seem impressed. “This model is really, really good,” wrote independent AI researcher Simon Willison on X. “I think this is the new best overall model (and both faster and half the price of Opus, similar to the GPT-4 Turbo to GPT-4o jump).”

As we’ve written before, benchmarks for large language models (LLMs) are troublesome because they can be cherry-picked and often do not capture the feel and nuance of using a machine to generate outputs on almost any conceivable topic. But according to Anthropic, Claude 3.5 Sonnet matches or outperforms competitor models like GPT-4o and Gemini 1.5 Pro on certain benchmarks like MMLU (undergraduate-level knowledge), GSM8K (grade school math), and HumanEval (coding).

Claude 3.5 Sonnet benchmarks provided by Anthropic.

If all that makes your eyes glaze over, that’s OK; it’s meaningful to researchers but mostly marketing to everyone else. A more useful performance metric comes from what we might call “vibemarks” (coined here first!) which are subjective, non-rigorous aggregate feelings measured by competitive usage on sites like LMSYS’s Chatbot Arena. The Claude 3.5 Sonnet model is currently under evaluation there, and it’s too soon to say how well it will fare.

Claude 3.5 Sonnet also outperforms Anthropic’s previous-best model (Claude 3 Opus) on benchmarks measuring “reasoning,” math skills, general knowledge, and coding abilities. For example, the model demonstrated strong performance in an internal coding evaluation, solving 64 percent of problems compared to 38 percent for Claude 3 Opus.

Claude 3.5 Sonnet is also a multimodal AI model that accepts visual input in the form of images, and the new model is reportedly excellent at a battery of visual comprehension tests.

Claude 3.5 Sonnet benchmarks provided by Anthropic.

Roughly speaking, the visual benchmarks mean that 3.5 Sonnet is better at pulling information from images than previous models. For example, you can show it a picture of a rabbit wearing a football helmet, and the model knows it’s a rabbit wearing a football helmet and can talk about it. That’s fun for tech demos, but the technology is still not accurate enough for applications where reliability is mission-critical.



Researchers describe how to tell if ChatGPT is confabulating


Aurich Lawson | Getty Images

It’s one of the world’s worst-kept secrets that large language models give blatantly false answers to queries and do so with a confidence that’s indistinguishable from when they get things right. There are a number of reasons for this. The AI could have been trained on misinformation; the answer could require an extrapolation from known facts that the LLM isn’t capable of making; or some aspect of the LLM’s training might have incentivized a falsehood.

But perhaps the simplest explanation is that an LLM doesn’t recognize what constitutes a correct answer but is compelled to provide one. So it simply makes something up, a habit that has been termed confabulation.

Figuring out when an LLM is making something up would obviously have tremendous value, given how quickly people have started relying on them for everything from college essays to job applications. Now, researchers from the University of Oxford say they’ve found a relatively simple way to determine when LLMs appear to be confabulating that works with all popular models and across a broad range of subjects. And, in doing so, they develop evidence that most of the alternative facts LLMs provide are a product of confabulation.

Catching confabulation

The new research is strictly about confabulations, and not instances such as training on false inputs. As the Oxford team defines them in their paper describing the work, confabulations are where “LLMs fluently make claims that are both wrong and arbitrary—by which we mean that the answer is sensitive to irrelevant details such as random seed.”

The reasoning behind their work is actually quite simple. LLMs aren’t trained for accuracy; they’re simply trained on massive quantities of text and learn to produce human-sounding phrasing from it. If enough examples in an LLM’s training data consistently present something as a fact, then the model is likely to present it as a fact. But if the examples are few, or inconsistent in their facts, then the model synthesizes a plausible-sounding answer that is likely incorrect.

But the LLM could also run into a similar situation when it has multiple options for phrasing the right answer. To use an example from the researchers’ paper, “Paris,” “It’s in Paris,” and “France’s capital, Paris” are all valid answers to “Where’s the Eiffel Tower?” So, statistical uncertainty, termed entropy in this context, can arise either when the LLM isn’t certain about how to phrase the right answer or when it can’t identify the right answer.

This means it’s not a great idea to simply force the LLM to return “I don’t know” when confronted with several roughly equivalent answers. We’d probably block a lot of correct answers by doing so.

So instead, the researchers focus on what they call semantic entropy. This looks at all the statistically likely answers the LLM could produce and determines how many of them are semantically equivalent. If a large number all have the same meaning, then the LLM is likely uncertain about phrasing but has the right answer. If not, then it is presumably in a situation where it would be prone to confabulation and should be prevented from doing so.
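
To make the idea concrete, here is a toy sketch of the computation; a crude keyword-overlap check stands in for the bidirectional entailment model the Oxford team actually uses, and the sampled answers are invented for illustration.

```python
# Toy sketch of semantic entropy: cluster sampled answers by meaning, then
# compute entropy over the clusters. A real system would sample answers from
# an LLM and judge equivalence with an entailment model, not word overlap.
from math import log

def semantically_equal(a: str, b: str) -> bool:
    """Stand-in for bidirectional entailment: shared content words."""
    stop = {"its", "it's", "in", "the", "a", "of", "is", "capital"}
    words = lambda s: {w.strip(".,'").lower() for w in s.split()} - stop
    return bool(words(a) & words(b))

def semantic_entropy(samples: list[str]) -> float:
    clusters: list[list[str]] = []
    for s in samples:
        for cluster in clusters:
            if semantically_equal(s, cluster[0]):
                cluster.append(s)
                break
        else:  # no existing cluster matched; start a new one
            clusters.append([s])
    probs = [len(c) / len(samples) for c in clusters]
    return -sum(p * log(p) for p in probs)

# Mostly consistent answers -> low entropy (one dissenting sample):
print(semantic_entropy(["Paris", "It's in Paris", "France's capital, Paris", "Rome"]))  # ~0.56
# Arbitrary, inconsistent answers -> high entropy, a confabulation signal:
print(semantic_entropy(["Paris", "Rome", "Berlin", "Madrid"]))  # ~1.39
```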



Why Interplay’s original Fallout 3 was canceled 20+ years ago

The path untaken —

OG Fallout producer says “Project Van Buren” ran out of time and money.

What could have been.

PC gamers of a certain vintage will remember tales of Project Van Buren, a title that early ’00s Interplay intended as the sequel to 1998’s hit Fallout 2. Now, original Fallout producer Timothy Cain is sharing some behind-the-scenes details about how he contributed to the project’s cancellation during a particularly difficult time for publisher Interplay.

Cain famously left Interplay during Fallout 2‘s development in the late ’90s to help form short-lived RPG house Troika Games. After his departure, though, he was still in touch with some people from his former employer, including an unnamed Interplay vice president looking for some outside opinions on the troubled Van Buren project.

“Would you mind coming over and playing one of my game prototypes?” Cain recalls this vice president asking him sometime in mid-2003. “We’re making a Fallout game and I’m going to have to cancel it. I don’t think they can get it done… but if you could come over and look at it and give me an estimate, there’s a chance I wouldn’t cancel it.”

Cain discusses his memories of testing “Project Van Buren.”

Cain recalls walking “across the street” from Troika to the Interplay offices, motivated to help because, as he remembers it, “if you don’t do it, bad things will happen to other people.” There, he got to see the latest build of Project Van Buren, running on the 3D Jefferson Engine that was intended to replace the sprite-based isometric view of the first two Fallout games. Cain said the version he played was similar or identical to a tech demo obtained by fan site No Mutants Allowed in 2007 and featured in a recent YouTube documentary about the failed project.

After playing for about two hours and talking to the team behind the project, Cain said the VP asked him directly how long the demo needed to become a shippable game. The answer Cain reportedly gave—18 months of standard development for “a really good game” or 12 months of “death march” crunch time for an unbalanced, buggy mess—was too long for the financially strapped publisher to devote to funding the project.

“He could not afford a development period of more than six months,” Cain said. “To me, that time frame was out of the question… He thought it couldn’t be done in six months; I just confirmed that.”

Show me the money

Looking back today, Cain said it’s hard to pinpoint a single “villain” responsible for Van Buren’s failure. Even reusing the engine from the first Fallout game—as the Fallout 2 team did for that title’s quick 12-month development process—wouldn’t have necessarily helped, Cain said. “Would that engine have been acceptable five years later [after Fallout 2]?” he asked rhetorically. “Had anyone really looked at it? I started the engine in 1994… it’s creaky.”

Real “Van Buren”-heads will enjoy this in-depth look at the game’s development, including details of Interplay’s troubled financial situation in the early ’00s.

In the end, Van Buren’s cancellation (and that of a planned Interplay Fallout MMO years later) simply “comes down to money,” Cain said. “I do not believe that [with] the money they had left, the game in the state it was in, and the people who were working on it could have completed it within six months,” he said. “And [if they did], I don’t think it would have been a game you would have liked playing.”

Luckily, the shuttering of Interplay in all but name in the years after Van Buren’s cancellation wouldn’t spell the end of the Fallout series. Bethesda acquired the license in 2007, leading to a completely reimagined Fallout 3 that has become the cornerstone of a fan-favorite franchise. But for those still wondering what Interplay’s original “Fallout 3” could have been, a group of fans is trying to rebuild the Project Van Buren demo from the ground up for modern audiences.



NatGeo documents salvage of Tuskegee Airman’s lost WWII plane wreckage

Remembering a hero this Juneteenth —

The Real Red Tails investigates the fatal crash of 2nd Lt. Frank Moody in 1944.

Michigan’s State Maritime Archaeologist Wayne R. Lusardi takes notes underwater at the Lake Huron WWII wreckage of 2nd Lt. Frank Moody’s P-39 Airacobra. Moody, one of the famed Tuskegee Airmen, fatally crashed in 1944.

National Geographic

In April 1944, a pilot with the Tuskegee Airmen, Second Lieutenant Frank Moody, was on a routine training mission when his plane malfunctioned. Moody lost control of the aircraft and plunged to his death in the chilly waters of Lake Huron. His body was recovered two months later, but the airplane was left at the bottom of the lake—until now. Over the last few years, a team of divers working with the Tuskegee Airmen National Historical Museum in Detroit has been diligently recovering the various parts of Moody’s plane to determine what caused the pilot’s fatal crash.

That painstaking process is the centerpiece of The Real Red Tails, a new documentary from National Geographic narrated by Sheryl Lee Ralph (Abbott Elementary). The documentary features interviews with the underwater archaeologists working to recover the plane, as well as firsthand accounts from Moody’s fellow airmen and stunning underwater footage from the wreck itself.

The Tuskegee Airmen were the first Black military pilots in the US Armed Forces and helped pave the way for the desegregation of the military. The men painted the tails of their P-47 planes red, earning them the nickname the Red Tails. (They initially flew Bell P-39 Airacobras like Moody’s downed plane, and later flew P-51 Mustangs.) It was then-First Lady Eleanor Roosevelt who helped tip popular opinion in favor of the fledgling unit when she flew with the Airmen’s chief instructor, C. Alfred Anderson, in March 1941. The Airmen earned praise for their skill and bravery in combat during World War II, with members being awarded three Distinguished Unit Citations, 96 Distinguished Flying Crosses, 14 Bronze Stars, 60 Purple Hearts, and at least one Silver Star.

  • 2nd Lt. Frank Moody’s official military portrait.

    National Archives and Records Administration

  • Tuskegee Airman Lt. Col. (Ret.) Harry T. Stewart.

    National Geographic/Rob Lyall

  • Stewart’s official portrait as a US Army Air Force pilot.

    National Archives and Records Administration

  • Tuskegee Airman Lt. Col. (Ret.) James H. Harvey.

    National Geographic/Rob Lyall

  • Harvey’s official portrait as a US Army Air Force pilot.

    National Archives and Records Administration

  • Stewart and Harvey (second and third, l-r).

    James Harvey

  • Stewart stands next to a restored WWII Mustang airplane at the Tuskegee Airmen National Museum in Detroit.

    National Geographic/Rob Lyall

A father-and-son team, David and Drew Losinski, discovered the wreckage of Moody’s plane in 2014 during cleanup efforts for a sunken barge. They saw what looked like a car door lying on the lake bed that turned out to be a door from a WWII-era P-39. The red paint on the tail proved it had been flown by a “Red Tail” and it was eventually identified as Moody’s plane. The Losinskis then joined forces with Wayne Lusardi, Michigan’s state maritime archaeologist, to explore the remarkably well-preserved wreckage. More than 600 pieces have been recovered thus far, including the engine, the propeller, the gearbox, machine guns, and the main 37mm cannon.

Ars caught up with Lusardi to learn more about this fascinating ongoing project.

Ars Technica: The area where Moody’s plane was found is known as Shipwreck Alley. Why have there been so many wrecks—of both ships and airplanes—in that region?

Wayne Lusardi: Well, the Great Lakes are big, and if you haven’t been on them, people don’t really understand they’re literally inland seas. Consequently, there has been a lot of maritime commerce on the lakes for hundreds of years. Wherever there’s lots of ships, there’s usually lots of accidents. It’s just the way it goes. What we have in the Great Lakes, especially around some places in Michigan, are really bad navigation hazards: hidden reefs, rock piles that are just below the surface that are miles offshore and right near the shipping lanes, and they often catch ships. We have bad storms that crop up immediately. We have very chaotic seas. All of those combined to take out lots of historic vessels. In Michigan alone, there are about 1,500 shipwrecks; in the Great Lakes, maybe close to 10,000 or so.

One of the biggest causes of airplanes getting lost offshore here is fog. Especially before they had good navigation systems, pilots got lost in the fog and sometimes crashed into the lake or just went missing altogether. There are also thunderstorms, weather conditions that impact air flight here, and a lot of ice and snow storms.

Just like commercial shipping, the aviation heritage of the Great Lakes is extensive; a lot of the bigger cities on the Eastern Seaboard extend into the Great Lakes. It’s no surprise that they populated the waterfront, the shorelines first, and in the early part of the 20th century, started connecting them through aviation. The military included the Great Lakes in their training regimes because during World War I, the conditions that you would encounter in the Great Lakes, like flying over big bodies of water, or going into remote areas to strafe or to bomb, mimicked what pilots would see in the European theater during the first World War. When Selfridge Field near Detroit was developed by the Army Air Corps in 1917, it was the farthest northern military air base in the United States, and it trained pilots to fly in all-weather conditions to prepare them for Europe.



Shadow of the Erdtree has ground me into dust, which is why I recommend it

Elden Ring: Shadow of the Erdtree DLC —

Souls fans seeking real challenge should love it. Casuals like me might wait.

Image of a fight from Shadow of the Erdtree

Bandai

Elden Ring was my first leap into FromSoftware titles (and Dark-Souls-like games generally), and I fell in deep. Over more than 200 hours, I ate up the cryptic lore, learned lots of timings, and came to appreciate the feeling of achievement through perseverance.

Months ago, in preparation for Elden Ring’s expansion, Shadow of the Erdtree (also on PlayStation and Xbox, arriving June 21), I ditched the save file with which I had beaten the game and started over. I wanted to try out big swords and magic casting. I wanted to try a few new side quests. And I wanted to have a fresh experience with the game before Shadow arrived.

I have had a very fresh experience, in that this DLC has made me feel like I’m still in the first hour of my first game. Reader, this expansion is mopping the floor with me. It looked at my resume, which has “Elden Lord” as its most recent job title, and tossed it into the slush pile. If you’re wondering whether Shadow, like the base game, offers a different kind of challenge plus easier paths for Souls newcomers: No, not really. At least not until you’re already far along. This DLC is for people who beat Elden Ring, or all but beat it, and want capital-M More.

That should be great news for longtime Souls devotees who fondly recall the difficulty spikes of some of the earlier games’ DLC, or for those who want a slightly more linear, dungeon-by-dungeon, boss-by-boss experience. For everybody else, I’d suggest waiting until you’re confidently through most of the main game—and for the giant wiki/YouTube apparatus around the game to catch up and provide some guidance.

What “ready for the DLC” really means

Technically, you can play Shadow of the Erdtree once you’ve done two things in Elden Ring: beaten Starscourge Radahn and Mohg, Lord of Blood. Radahn is a mid-game boss, and Mohg is generally encountered in the later stages. But, perhaps anticipating the DLC, the game allows you to get to Mohg relatively early by using a specific item.

Just getting to a level where you’re reasonably ready to tackle Mohg will be a lot. As of a week ago, more than 60 percent of players on Steam (PC) had not yet beaten Mohg; that number is even higher on consoles. On my replay, I got to about level 105 at around 50 hours, but I remembered a lot about both the mechanics and the map. I had the item to travel to Mohg and the other item that makes him easier to beat. Maybe it’s strange to avoid spoilers for a game that came out more than two years ago, but, again, most players have not gotten this far.

I took down Mohg in one try; I’m not bragging, just setting expectations. I had a fully upgraded Moonlight Greatsword, a host of spells, a fully upgraded Mimic Tear spirit helper, and a build focused on Intelligence (for the sword and spell casting), but I could also wear decent armor while still adequately rolling. Up until this point, I was surprised by how much easier the bosses and dungeons I revisited had felt (except the Valiant Gargoyle, which was just as hard).

I stepped into the DLC, wandered around a bit, killed a few shambling souls (“Shadows of the Dead”), and found a sealed chasm (“Blackgaol”) in the first area. The knight inside took me out, repeatedly, usually in two quick sword flicks. Sometimes he would change it up and perforate me with gatling-speed flaming crossbow bolts or a wave emanating from his sword. Most of the time, he didn’t even touch his healing flask before I saw “YOU DIED.”

Ah, but most Elden Ring players will remember that the game put an intentionally way-too-hard enemy in the very first open area, almost as a lesson about leveling up and coming back. So I hauled my character and bruised ego toward a nearby ruin, filled mostly with more dead Shadows. The first big “legacy dungeon,” Belurat, Tower Settlement, was just around the corner. I headed in and started compiling my first of what must be 100 deaths by now.

There are the lumbering Shadows, yes, but there are also their bigger brothers, who love to ambush with a leaping strike and take me down in two hits. There are Man-Flies, which unsurprisingly swarmed and latched onto my head, killing me if I wasn’t at full health (40 Vigor, if you must know). There are Gravebirds, which, like all birds in Elden Ring, are absolute jerks that mess with your camera angles. And there are Horned Warriors, who are big, fast, relentless, and responsible for maybe a dozen each of my deaths.

At level 105, with a known build strategy centered around a weapon often regarded as overpowered and all the knowledge I had of the game’s systems and strategies, I was barely hanging on, occasionally inching forward. What gives?



T-Mobile defends misleading “Price Lock” claim but agrees to change ads

T-Mobile logo displayed in front of a stock market chart.

Getty Images | SOPA Images

T-Mobile has agreed to change its advertising for the “Price Lock” guarantee that doesn’t actually lock in a customer’s price, but it continues to defend the offer.

T-Mobile users expressed their displeasure about being hit with up to $5 per-line price hikes on plans that seemed to have a lifetime price guarantee, but it was a challenge by AT&T that forced T-Mobile to agree to change its advertising. AT&T filed the challenge with the advertising industry’s self-regulatory group, which ruled that T-Mobile’s Price Lock ads were misleading.

As we’ve reported, T-Mobile’s guarantee (currently called “Price Lock” and previously the “Un-contract”) is simply a promise that T-Mobile will pay your final month’s bill if the carrier raises your price and you decide to cancel. Despite that, T-Mobile promised users that it “will never change the price you pay” if you’re on a plan with the provision.

BBB National Programs’ National Advertising Division (NAD), the ad industry’s self-regulatory body, ruled against T-Mobile in a decision issued yesterday. BBB National Programs is an independent nonprofit that is affiliated with the International Association of Better Business Bureaus.

The NAD’s decisions aren’t binding, but advertisers usually comply with them. That’s what T-Mobile is doing.

“T-Mobile is proud of its innovative Price Lock policy, where customers can get their last month of service on T-Mobile if T-Mobile ever changes the customer’s price, and the customer decides to leave,” the company said in its official response to the NAD’s decision. “While T-Mobile believes the challenged advertisements appropriately communicate the generous terms of its Price Lock policy, T-Mobile is a supporter of self-regulation and will take NAD’s recommendations to clarify the terms of its policy into account with respect to its future advertising.”

AT&T: Price Lock not a real price lock

While our recent reports on Price Lock concerned mobile plans, the ads challenged by AT&T were for T-Mobile’s 5G home Internet service.

“AT&T argued that the ‘Price Lock’ claims are false because T-Mobile is not committing to locking the pricing of its service for any amount of time,” the NAD’s decision said. “AT&T also argued that T-Mobile’s disclosures contradict the ‘Price Lock’ claim because they set forth limitations which make clear that T-Mobile may increase the price of service for any reason at any time.”

T-Mobile countered “that its home Internet service ‘price lock’ is innovative and unique in the industry, serving as a strong disincentive to T-Mobile against raising prices and offering a potential benefit of free month’s service, and that it has the discretion as to how to define a ‘price lock’ so long as it clearly communicates the terms,” the NAD noted.

AT&T challenged print and online ads, and a TV commercial featuring actors Zach Braff, Donald Faison, and Jason Momoa. The ads displayed a $50 monthly rate with the text “Price Lock” and included language clarifying the actual details of the offer.

The NAD said that “impactful claims about pricing policies require clear communication of what those policies are and cannot leave consumers with a fundamental misunderstanding about what those policies mean.” T-Mobile’s ads created a fundamental misunderstanding, the NAD found.



Supermassive black hole roars to life as astronomers watch in real time

Sleeping Beauty —

A similar awakening may one day occur with the Milky Way’s supermassive black hole.

Artist’s animation of the black hole at the center of SDSS1335+0728 awakening in real time—a first for astronomers.

In December 2019, astronomers were surprised to observe a long-quiet galaxy, 300 million light-years away, suddenly come alive, emitting ultraviolet, optical, and infrared light into space. Far from quieting down again, by February of this year the galaxy had begun emitting X-ray light; it is becoming more active. Astronomers think it is most likely an active galactic nucleus (AGN), which gets its energy from the supermassive black hole at the galaxy’s center and/or from the black hole’s spin. That’s the conclusion of a new paper accepted for publication in the journal Astronomy and Astrophysics, although the authors acknowledge the possibility that it might also be some kind of rare tidal disruption event (TDE).

The brightening of SDSS1335+0728 in the constellation Virgo, after decades of quietude, was first detected by the Zwicky Transient Facility telescope. Its supermassive black hole is estimated to be about 1 million solar masses. To get a better understanding of what might be going on, the authors combed through archival data and combined that with data from new observations from various instruments, including the X-shooter, part of the Very Large Telescope (VLT) in Chile’s Atacama Desert.

There are many reasons why a normally quiet galaxy might suddenly brighten, including supernovae or a TDE, in which part of the shredded star’s original mass is ejected violently outward. This, in turn, can form an accretion disk around the black hole that emits powerful X-rays and visible light. But these events don’t last nearly five years—usually not more than a few hundred days.

So the authors concluded that the galaxy has awakened and now has an AGN. First discovered by Carl Seyfert in 1943, the glow is the result of the cold dust and gas surrounding the black hole, which can form orbiting accretion disks. Gravitational forces compress the matter in the disk and heat it to millions of degrees Kelvin, producing radiation across the electromagnetic spectrum.
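
For a sense of the physics behind those temperatures, the textbook thin-disk model gives a characteristic effective temperature profile; this is the standard Shakura-Sunyaev result, offered as background rather than anything taken from the new paper.

```latex
% Effective temperature of a steady, optically thick thin accretion disk
% (Shakura & Sunyaev, 1973) at radius r around a black hole of mass M
% accreting at rate \dot{M}; \sigma is the Stefan-Boltzmann constant and
% r_in is the disk's inner edge. Far from the inner edge, T falls off as r^{-3/4},
% so the disk grows steeply hotter toward the black hole.
T(r) = \left[\frac{3 G M \dot{M}}{8 \pi \sigma r^{3}}
       \left(1 - \sqrt{\frac{r_{\mathrm{in}}}{r}}\right)\right]^{1/4}
```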

Alternatively, the activity might be due to an especially long and faint TDE—the longest and faintest yet detected, if so. Or it could be an entirely new phenomenon. So SDSS1335+0728 is a galaxy to watch. Astronomers are already preparing for follow-up observations with the VLT’s Multi Unit Spectroscopic Explorer (MUSE) and the Extremely Large Telescope, among others, and perhaps even the Vera Rubin Observatory, slated to come online next summer. Its Legacy Survey of Space and Time (LSST) will be capable of imaging the entire southern sky continuously, potentially capturing even more galaxy awakenings.

“Regardless of the nature of the variations, [this galaxy] provides valuable information on how black holes grow and evolve,” said co-author Paula Sánchez Sáez, an astronomer at the European Southern Observatory in Germany. “We expect that instruments like [these] will be key in understanding [why the galaxy is brightening].”

There is also a supermassive black hole at the center of our Milky Way galaxy (Sgr A*), but there is not yet enough material that has accreted for astronomers to pick up any emitted radiation, even in the infrared. So, its galactic nucleus is deemed inactive. It may have been active in the past, and it’s possible that it will reawaken again in a few million (or even billion) years when the Milky Way merges with the Andromeda Galaxy and their respective supermassive black holes combine. Only much time will tell.

Astronomy and Astrophysics, 2024. DOI: 10.1051/0004-6361/202347957  (About DOIs).

Listing image by ESO/M. Kornmesser



Proton is taking its privacy-first apps to a nonprofit foundation model

Proton going nonprofit —

Because of Swiss laws, there are no shareholders, and only one mission.

Swiss flag flying over a landscape of Swiss mountains, with tourists looking on from a nearby ledge.

Getty Images

Proton, the privacy-minded email and productivity suite, is moving to a nonprofit foundation model, but it doesn’t want you to think about it in the way you think about other notable privacy and web foundations.

“We believe that if we want to bring about large-scale change, Proton can’t be billionaire-subsidized (like Signal), Google-subsidized (like Mozilla), government-subsidized (like Tor), donation-subsidized (like Wikipedia), or even speculation-subsidized (like the plethora of crypto “foundations”),” Proton CEO Andy Yen wrote in a blog post announcing the transition. “Instead, Proton must have a profitable and healthy business at its core.”

The announcement comes exactly 10 years to the day after a crowdfunding campaign saw 10,000 people give more than $500,000 to launch Proton Mail. To make it happen, Yen, along with co-founder Jason Stockman and first employee Dingchao Lu, endowed the Proton Foundation with some of their shares. The Proton Foundation is now the primary shareholder of the business Proton, which Yen states will “make irrevocable our wish that Proton remains in perpetuity an organization that places people ahead of profits.” Among other members of the Foundation’s board is Sir Tim Berners-Lee, inventor of HTML, HTTP, and almost everything else about the web.

Of particular importance is where Proton and the Proton Foundation are located: Switzerland. As Yen noted, Swiss foundations do not have shareholders and are instead obligated to act “in accordance with the purpose for which they were established.” While the for-profit entity Proton AG can still do things like offer stock options to recruits and even raise its own capital on private markets, the Foundation serves as a backstop against moving too far from Proton’s founding mission, Yen wrote.

There’s a lot more Proton to protect these days

Proton has gone from a single email offering to a wide range of services, many of which specifically target the often invasive offerings of other companies (read, mostly: Google). You can now take your cloud files, passwords, and calendars over to Proton and use its VPN services, most of which offer end-to-end encryption and open source core software hosted in Switzerland, with its notably strong privacy laws.

None of that guarantees that a Swiss court can’t compel some forms of compliance from Proton, as happened in 2021. But compared to most service providers, Proton offers a far clearer and easier-to-grasp privacy model: It can’t see your stuff, and it only makes money from subscriptions.

Of course, foundations are only as strong as the people who guide them, and seemingly firewalled profit/non-profit models can be changed. Time will tell if Proton’s new model can keep up with changing markets—and people.
