Author name: Mike M.


Unity is dropping its unpopular per-install Runtime Fee

It certainly was novel —

Cross-platform game engine saw the downside to “novel and controversial” plan.


Unity, maker of a popular cross-platform engine and toolkit, will not pursue a broadly unpopular Runtime Fee that would have charged developers based on game installs rather than per-seat licenses. The move comes exactly one year after the fee’s initial announcement.

In a blog post, President and CEO Matt Bromberg writes that the company cannot continue “democratizing game development” without “a partnership built on trust.” Bromberg states that customers understand the necessity of price increases, but not in “a novel and controversial new form.” So game developers will not be charged per installation; instead, they will be sorted into Personal, Pro, and Enterprise tiers by level of revenue or funding.

“Canceling the Runtime Fee for games and instituting these pricing changes will allow us to continue investing to improve game development for everyone while also being better partners,” Bromberg writes.

A year of discontent

Unity announced the new “Runtime Fee that’s based on game installs” in mid-September 2023 (Wayback archive). Though bundled with cloud storage and “AI at runtime” offerings, the fee would have been costly for smaller developers who found success. The Runtime Fee would have cost 20 cents per install on the otherwise free Personal tier after a game had reached $200,000 in revenue and more than 200,000 installs. Fees decreased slightly for Pro and Enterprise customers after $1 million in revenue and 1 million installs.
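As a rough sketch of the announced Personal-tier math (this assumes the simplest reading of the announcement, that only installs beyond the threshold would have been billed; Unity never finalized the billing mechanics, and `runtime_fee` is a hypothetical name):

```python
def runtime_fee(installs, revenue, rate=0.20,
                install_threshold=200_000, revenue_threshold=200_000):
    """Hypothetical Personal-tier Runtime Fee: 20 cents per install
    once a game passes both $200,000 in revenue and 200,000 installs."""
    if revenue < revenue_threshold or installs < install_threshold:
        return 0.0
    # Assumption: only installs beyond the threshold are billed.
    return (installs - install_threshold) * rate

print(runtime_fee(150_000, 250_000))  # under the install threshold: 0.0
print(runtime_fee(500_000, 250_000))  # 300,000 billable installs: 60000.0
```

Even under this charitable reading, a $1 free-to-play-adjacent hit with millions of installs could have owed fees far out of proportion to its revenue, which is what drove the backlash.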

The move led to almost immediate backlash from many developers. Unity, which then-CEO John Riccitiello had described in 2015 as having “no royalties, no [f-ing] around,” was “quite simply not a company to be trusted,” wrote Necrosoft Games’ Brandon Sheffield. Developers said they would hold off on updates or switch engines rather than absorb the fee, which would have retroactively counted installs from before January 2024 toward its calculations.

Unity’s terms of service seemed to allow for such sudden shifts. Unity softened the impact of the fee on Personal users, removed the backdated install counting, and capped fees at 2.5 percent of revenue. Unity Create President and General Manager Marc Whitten told Ars at that time that while the fee would seemingly not affect the vast majority of Unity users, he understood that a stable agreement should be a feature of the engine.

Coincidentally or not, Riccitiello announced his retirement the next month, which led to some celebration among devs but not a total restoration of trust. A massive wave of layoffs throughout the winter of 2023 and 2024 showed that Unity’s financial position was precarious, partly due to acquisitions made during Riccitiello’s tenure. The Runtime Fee would have minimal impact in 2024, the company said in filings, but would “ramp from there as customers adopt our new releases.”

Instead of ramping from there, the Runtime Fee is now gone, and Unity has made other changes to its pricing structure:

  • Unity Personal remains free, and its revenue/funding ceiling increases from $100,000 to $200,000.
  • Unity Pro, for customers over the Personal limit, sees an 8 percent price increase to $2,200 per seat.
  • Unity Enterprise, with customized packages for those over $25 million in revenue or funding, sees a 25 percent increase.

“From this point forward, it’s our intention to revert to a more traditional cycle of considering any potential price increases only on an annual basis,” Bromberg wrote in his post. Changes to Unity’s Editor software should allow customers to keep using their existing version under previously agreed terms, he wrote.



OpenAI’s new “reasoning” AI models are here: o1-preview and o1-mini

fruit by the foot —

New o1 language model can solve complex tasks iteratively, count R’s in “strawberry.”

An illustration of a strawberry made out of pixel-like blocks.

OpenAI finally unveiled its rumored “Strawberry” AI language model on Thursday, claiming significant improvements in what it calls “reasoning” and problem-solving capabilities over previous large language models (LLMs). Formally named “OpenAI o1,” the model family will initially launch in two forms, o1-preview and o1-mini, available today for ChatGPT Plus and certain API users.

OpenAI claims that o1-preview outperforms its predecessor, GPT-4o, on multiple benchmarks, including competitive programming, mathematics, and “scientific reasoning.” However, people who have used the model say it does not yet outclass GPT-4o in every metric. Other users have criticized the delay in receiving a response from the model, owing to the multi-step processing occurring behind the scenes before answering a query.

In a rare display of public hype-busting, OpenAI product manager Joanne Jang tweeted, “There’s a lot of o1 hype on my feed, so I’m worried that it might be setting the wrong expectations. what o1 is: the first reasoning model that shines in really hard tasks, and it’ll only get better. (I’m personally psyched about the model’s potential & trajectory!) what o1 isn’t (yet!): a miracle model that does everything better than previous models. you might be disappointed if this is your expectation for today’s launch—but we’re working to get there!”

OpenAI reports that o1-preview ranked in the 89th percentile on competitive programming questions from Codeforces. In mathematics, it scored 83 percent on a qualifying exam for the International Mathematics Olympiad, compared to GPT-4o’s 13 percent. OpenAI also states, in a claim that may later be challenged as people scrutinize the benchmarks and run their own evaluations over time, that o1 performs comparably to PhD students on specific tasks in physics, chemistry, and biology. The smaller o1-mini model is designed specifically for coding tasks and is priced 80 percent lower than o1-preview.

A benchmark chart provided by OpenAI. They write, “o1 improves over GPT-4o on a wide range of benchmarks, including 54/57 MMLU subcategories. Seven are shown for illustration.”

OpenAI attributes o1’s advancements to a new reinforcement learning (RL) training approach that teaches the model to spend more time “thinking through” problems before responding, similar to how “let’s think step-by-step” chain-of-thought prompting can improve outputs in other LLMs. The new process allows o1 to try different strategies and “recognize” its own mistakes.
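The prompting technique referenced above is simple to illustrate. A minimal sketch (the question and exact prompt wording are illustrative examples, not OpenAI's):

```python
question = ("A bat and a ball cost $1.10 in total. The bat costs "
            "$1.00 more than the ball. How much does the ball cost?")

# A plain prompt asks the model to answer directly.
direct_prompt = question

# A chain-of-thought prompt appends an instruction that nudges the
# model to emit intermediate reasoning steps before its final answer.
cot_prompt = f"{question}\nLet's think step by step."

print(cot_prompt)
```

The reported difference with o1 is that this step-by-step behavior is trained in via reinforcement learning rather than coaxed out through the prompt.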

AI benchmarks are notoriously unreliable and easy to game; however, independent verification and experimentation from users will show the full extent of o1’s advancements over time. It’s worth noting that research from MIT showed earlier this year that some of the benchmark claims OpenAI touted with GPT-4 last year were erroneous or exaggerated.

A mixed bag of capabilities

OpenAI demos “o1” correctly counting the number of Rs in the word “strawberry.”

Amid many demo videos of o1 completing programming tasks and solving logic puzzles that OpenAI shared on its website and social media, one demo stood out as perhaps the least consequential and least impressive. It may nonetheless become the most talked about, thanks to a recurring meme in which people ask LLMs to count the number of R’s in the word “strawberry.”

Due to tokenization, where the LLM processes words in data chunks called tokens, most LLMs are typically blind to character-by-character differences in words. Apparently, o1 has the self-reflective capabilities to figure out how to count the letters and provide an accurate answer without user assistance.

Beyond OpenAI’s demos, we’ve seen optimistic but cautious hands-on reports about o1-preview online. Wharton Professor Ethan Mollick wrote on X, “Been using GPT-4o1 for the last month. It is fascinating—it doesn’t do everything better but it solves some very hard problems for LLMs. It also points to a lot of future gains.”

Mollick shared a hands-on post in his “One Useful Thing” blog that details his experiments with the new model. “To be clear, o1-preview doesn’t do everything better. It is not a better writer than GPT-4o, for example. But for tasks that require planning, the changes are quite large.”

Mollick gives the example of asking o1-preview to build a teaching simulator “using multiple agents and generative AI, inspired by the paper below and considering the views of teachers and students,” then asking it to build the full code, and it produced a result that Mollick found impressive.

Mollick also gave o1-preview eight crossword puzzle clues, translated into text, and the model took 108 seconds to solve them over many steps, getting all of the answers correct but confabulating a particular clue Mollick did not give it. We recommend reading Mollick’s entire post for a good early hands-on impression. Given his experience with the new model, it appears that o1 works very similarly to GPT-4o but iteratively in a loop, which is something that the so-called “agentic” AutoGPT and BabyAGI projects experimented with in early 2023.

Is this what could “threaten humanity?”

Speaking of agentic models that run in loops, Strawberry has been subject to hype since last November, when it was initially known as Q* (Q-star). At the time, The Information and Reuters claimed that, just before Sam Altman’s brief ouster as CEO, OpenAI employees had internally warned OpenAI’s board of directors about a new OpenAI model called Q* that could “threaten humanity.”

In August, the hype continued when The Information reported that OpenAI showed Strawberry to US national security officials.

We’ve been skeptical about the hype around Q* and Strawberry since the rumors first emerged, as this author noted last November, and Timothy B. Lee covered thoroughly in an excellent post about Q* from last December.

So even though o1 is out, AI industry watchers should note how this model’s impending launch was played up in the press as a dangerous advancement, a framing OpenAI never publicly pushed back on. For an AI model that takes 108 seconds to solve eight clues in a crossword puzzle and hallucinates one answer, we can say that its potential danger was likely hype (for now).

Controversy over “reasoning” terminology

It’s no secret that some people in tech have issues with anthropomorphizing AI models and using terms like “thinking” or “reasoning” to describe the synthesizing and processing operations that these neural network systems perform.

Just after the OpenAI o1 announcement, Hugging Face CEO Clement Delangue wrote, “Once again, an AI system is not ‘thinking,’ it’s ‘processing,’ ‘running predictions,’… just like Google or computers do. Giving the false impression that technology systems are human is just cheap snake oil and marketing to fool you into thinking it’s more clever than it is.”

“Reasoning” is also a somewhat nebulous term since, even in humans, it’s difficult to define exactly what the term means. A few hours before the announcement, independent AI researcher Simon Willison tweeted in response to a Bloomberg story about Strawberry, “I still have trouble defining ‘reasoning’ in terms of LLM capabilities. I’d be interested in finding a prompt which fails on current models but succeeds on strawberry that helps demonstrate the meaning of that term.”

Reasoning or not, o1-preview currently lacks some features present in earlier models, such as web browsing, image generation, and file uploading. OpenAI plans to add these capabilities in future updates, along with continued development of both the o1 and GPT model series.

While OpenAI says the o1-preview and o1-mini models are rolling out today, neither model is available in our ChatGPT Plus interface yet, so we have not been able to evaluate them. We’ll report our impressions on how this model differs from other LLMs we have previously covered.



Reported Dreamcast addict Tim Walz is now an unofficial Crazy Taxi character

Are you ready? Here. We. Go! —

New “Tim Walz Edition” mod lets the VP hopeful earn some ca-razy (campaign) money.

The “VP” on the cab light is a nice touch.

Last month, in a profile of newly named Democratic vice presidential candidate Tim Walz, The New York Times included a throwaway line about “the time his wife had seized his Dreamcast, the Sega video game console, because he had been playing to excess.” Weeks later, that anecdote formed the unlikely basis for the Crazy Taxi: Tim Walz Edition mod, which inserts the Minnesota governor (and top-of-the-ticket running mate Kamala Harris) into the Dreamcast classic driving game.

“Rumor has it that Tim Walz played Crazy Taxi so much his wife took his Dreamcast away from him… so I decided to put him in the game,” modder Edward La Barbera wrote on the game’s Itch.io page.

Unfortunately, the pay-what-you-want mod can’t just be burned to a CD-R and played on actual Dreamcast hardware. Currently, the mod’s visual files are tuned to work only with Dreamcast emulator Flycast, which includes built-in features for replacing in-game textures.

After a few minutes of playing with settings and folder structures, launching the mod replaces the Crazy Taxi character Gus with a model of Walz sporting an open-buttoned black and red flannel shirt (a look The Wall Street Journal said has “made him a DNC fashion icon”). There are plenty of little visual touches beyond the Walz character, too, including “D” and “R” gears labeled as “Democrat” and “Republican,” a fare counter that’s labeled as “campaign $$$,” and a countdown measuring the seconds “till election time.”

The mod files also include new voice lines for Walz and Harris sourced from “their respective DNC speeches.” Getting those files to work in a standard Crazy Taxi ROM requires a good deal of fiddling with third-party ISO management programs, though, so you might be better off watching a gameplay video if you just want to hear a taxi-driving Walz cry out, “Mind your own business” to an impatient passenger.

The mod lets Walz and Harris (unofficially) join a long line of real political figures appearing in video games, including Bill Clinton and Al Gore as hidden NBA Jam characters, Barack Obama and Sarah Palin as DLC characters in Mercenaries 2 and, uh, Abraham Lincoln leading a mechanical strike team in 3DS title Code Name: S.T.E.A.M.

Where are they now: Walz’s Dreamcast edition

Current owner Bryn Tanner poses with the actual Dreamcast once owned by Tim Walz.

While Walz and the campaign haven’t commented publicly on his reported Dreamcast addiction, there’s been a surprising amount of detailed reporting on the fate of the Governor’s fabled Dreamcast itself. Former Walz student and campaign intern Tom Johnson told IGN that Walz brought the old console in as a potential donation in the summer of 2007. Johnson said it didn’t get much use in the office break room, but after the campaign, he took the system—complete with a copy of Crazy Taxi in the drive—home to play with roommate Alex Gaterud.

From there, Gaterud sold the console on Craigslist for a mere $25 in 2012—he recalled to IGN that, at the time, “advertising something ‘formerly owned by a US Congressman’ doesn’t add any value on Craigslist.” The customer in that Craigslist sale, Bryn Tanner, was posting about the acquisition online years before Walz became a national political celebrity.

More recently, Tanner has taken to posting TikTok videos about his relatively notable console and showing it off at local gaming convention 2D Con. “I’m basically famous,” Tanner joked in a post following the NYT story.

Walz, 60, is on the edge of the first generation of national politicians that grew up in the era of early home video game consoles (he would have been about 13 when the Atari 2600 launched). Younger politicians have even started to cater more specifically to the gaming demographic, too, as when congresswoman Alexandria Ocasio-Cortez appeared on a charity-focused Donkey Kong 64 marathon Twitch stream to promote transgender awareness and political action.

But some of Walz’s political opponents have tried to make hay of the fact that he was apparently hooked on a console that came out when he was in his mid-30s. “This is reported as a fun, relatable story, but doesn’t anybody else find a 35-year-old man getting that addicted to video games a little sad?” conservative activist Charlie Kirk posted on social media.

To the people who knew Walz as a gamer, though, it just makes him more human. “I think it’s sort of the specificity of it that really gets me,” Tanner told the Minneapolis Star-Tribune last month. “The fact that it wasn’t just, ‘Oh, my wife took away my video game,’ because that could be anything. It was the Dreamcast. It was this kind of underrated flash in the pan.”



Old Easter Island genomes show no sign of a population collapse

A row of grey rock sculptures of human torsos and heads, arranged in a long line.

Rapa Nui, often referred to as Easter Island, is one of the most remote populated islands in the world. It’s so distant that Europeans didn’t stumble onto it until centuries after they had started exploring the Pacific. When they arrived, though, they found that the relatively small island supported a population of thousands, one that had built imposing monumental statues called moai. Arguments over how this population got there and what happened once it did have gone on ever since.

Some of these arguments, such as the idea that the island’s indigenous people had traveled there from South America, have since been put to rest. Genomes from people native to the island show that its original population was part of the Polynesian expansion across the Pacific. But others, such as the role of ecological collapse in limiting the island’s population and altering its culture, continue to be debated.

Researchers have now obtained genome sequences from the remains of 15 Rapa Nui natives who predate European contact. They indicate that the population of the island appears to have grown slowly and steadily, without any sign of a bottleneck that could be associated with an ecological collapse. And roughly 10 percent of the genomes appear to have a Native American source that likely dates from roughly the same time that the island was settled.

Out of the museum

The remains that provided these genomes weren’t found on Rapa Nui, at least not recently. Instead, they reside at the Muséum National d’Histoire Naturelle in France, having been obtained at some uncertain point in the past. Their presence there is a point of contention for the indigenous people of Rapa Nui, but the researchers behind the new work had the cooperation of the islanders in this project, having worked with them extensively. The researchers’ description of these interactions could be viewed as a model for how this sort of work should be done:

Throughout the course of the study, we met with representatives of the Rapanui community on the island, the Comisión de Desarrollo Rapa Nui and the Comisión Asesora de Monumentos Nacionales, where we presented our research goals and ongoing results. Both commissions voted in favor of us continuing with the research… We presented the research project in public talks, a short video and radio interviews on the island, giving us the opportunity to inquire about the questions that are most relevant to the Rapanui community. These discussions have informed the research topics we investigated in this work.

Given the questionable record-keeping at various points in the past, one of the goals of this work was simply to determine whether these remains truly had originated on Rapa Nui. That was unambiguously true. All comparisons with genomes of modern populations show that all 15 of these genomes have a Polynesian origin and are most closely related to modern residents of Rapa Nui. “The confirmation of the origin of these individuals through genomic analyses will inform repatriation efforts led by the Rapa Nui Repatriation Program (Ka Haka Hoki Mai Te Mana Tupuna),” the authors suggest.

A second question was whether the remains predate European contact. The researchers attempted to perform carbon dating, but it produced dates that made no sense. Some of the remains had dates that were potentially after they had been collected, according to museum records. And all of them were from the 1800s, well after European contact and introduced diseases had shrunk the native population and mixed in DNA from non-Polynesians. Yet none of the genomes showed more than one percent European ancestry, a fraction low enough to be ascribed to a statistical fluke.

So the precise date these individuals lived is uncertain. But the genetic data clearly indicates that they were born prior to the arrival of Europeans. They can therefore tell us about what the population was experiencing in the period between Rapa Nui’s settlement and the arrival of colonial powers.

Back from the Americas

While these genomes showed no sign of European ancestry, they were not fully Polynesian. Instead, roughly 10 percent of the genome appeared to be derived from a Native American population. This is the highest percentage seen in any Polynesian population, including some that show hints of Native American contact that dates to before Europeans arrived on the scene.

Isolating these DNA sequences and comparing them to populations from across the world showed that the group most closely related to the one that contributed to the Rapa Nui population presently resides in the central Andes region of South America. That’s in contrast to the earlier results, which suggested the contact was with populations further north in South America.



Google’s ad tech empire may be $95B and “too big” to sell, analysts warn DOJ

“Impossible to negotiate” —

Google Ad Manager is key to ad tech monopoly, DOJ aims to prove.

A staffer with the Paul, Weiss legal firm wheels boxes of legal documents into the Albert V. Bryan US Courthouse at the start of a Department of Justice antitrust trial against Google over its advertising business in Alexandria, Virginia, on September 9, 2024. Google faces its second major antitrust trial in less than a year, with the US government accusing the tech giant of dominating online advertising and stifling competition.

Just a couple of days into the Google ad tech antitrust trial, it seems clear that the heart of the US Department of Justice’s case is proving that Google Ad Manager is the key to the tech giant’s alleged monopoly.

Google Ad Manager is the buy-and-sell side ad tech platform launched following Google’s acquisition of DoubleClick and AdX in 2008 for $3 billion. It is currently used to connect Google’s publisher ad servers with its ad exchanges, tying the two together in a way that allegedly locks the majority of publishers into paying higher fees on the publisher side because they can’t afford to drop Google’s ad exchange.

The DOJ has argued that Google Ad Manager “serves 90 percent of publishers that use the ad tech tools to sell their online ad inventory,” AdAge reported, and through it, Google clearly wields monopoly powers.

In her opening statement, DOJ attorney Julia Tarver Wood argued that acquisitions helped Google manipulate the rules of ad auctions to maximize profits while making it harder for rivals to enter and compete in the markets Google allegedly monopolized. The DOJ has argued those alleged monopolies are in markets “for publisher ad servers, advertiser ad networks, and the ad exchanges that connect the two,” Reuters reported.

Google has denied this characterization of its ad tech dominance, calling the DOJ’s market definitions too narrow. The tech company also pointed out that the Federal Trade Commission (FTC) investigated and unconditionally approved the DoubleClick merger in 2007, amidst what the FTC described as urgent “high profile public discussions of the competitive merits of the transaction, in which numerous (sometimes conflicting) theories of competitive harm were proposed.” At that time, the FTC concluded that the acquisition “was unlikely to reduce competition in any relevant antitrust market.”

But in its complaint, the DOJ argued that the DoubleClick “acquisition vaulted Google into a commanding position over the tools publishers use to sell advertising opportunities, complementing Google’s existing tool for advertisers, Google Ads, and set the stage for Google’s later exclusionary conduct across the ad tech industry.”

To set things right, at the very least, the DOJ has asked the court to order Google to spin off Google Ad Manager, which may or may not include valuable products like Google’s Display and Video 360 (DV360) platform. There is also the possibility that the US district judge, Leonie Brinkema, could order Google to sell off its ad tech business entirely.

One problem with those proposed remedies, analysts told AdAge, is that no one knows how big Google’s ad tech business really is or the actual value of Google Ad Manager.

Google Ad Manager could be worth less if Google’s DV360 platform isn’t included in the sale or if selling either the publisher or advertiser side cuts out data allowing Google to set the prices that it wants. The CEO of an ad platform called Permutive, Joe Root, told AdAge that “it is hard to say how much of the value of Google’s ads business is because it has this advertiser product and DV360, versus how much of its value comes from Google Ad Manager alone.”

Root doubts that Google Ad Manager is “on its own that valuable.” However, based on “newly released documents for the trial,” some analysts predict that “any new entity spun out of Google” would be “almost too big for any buyer,” AdAge reported.

One estimate from Terence Kawaja, an ad tech consultant who helms the strategic advisory firm Luma Partners, suggested that Google’s ad tech business as a standalone company “could be worth up to $95 billion” today, AdAge reported.

“You can’t divest $100 billion,” Kawaja said. “There is no buyer for it. [Google] would have to spin it off to shareholders, that’s how any forced remedy would manifest.”



The 2024 VW Golf GTI is the last of its kind with a manual transmission

blame Europe this time —

Get the manual while you can.

The latest Volkswagen Golf GTI isn’t perfect, but it has enough charm to overcome its flaws.

Jonathan Gitlin

“They won’t make them like this much longer” is a pretty hackneyed aphorism, but it certainly applies to the Volkswagen Golf GTI. The Mk 8 Golf is due for a mid-life refresh next year, and when that happens, VW will be simplifying things by dropping the manual transmission option. That means model year 2024 is the final chance anyone will have to buy a GTI with three pedals. Yes, it has some flaws, but it’s also small and nimble, both attributes lacking in so much of what the automotive industry has to offer these days.

We’ve been a bit deficient in not reviewing the Mk 8 Golf GTI until now. I reviewed the more expensive, more powerful Golf R in 2022, but the last GTI we drove was the outgoing Mk 7 car in mid-2020. That time, we were only able to source a GTI with the two-pedal, dual-clutch gearbox, a transmission I felt didn’t quite suit the engine it was mated to. On the other hand, I was effusive about the old GTI’s infotainment, calling it “one of the best systems on the market.” Well, it was 2020, remember.

Under the hood, you’ll find yet another version of VW Group’s venerable EA888 four-cylinder engine, here with a turbocharger and direct injection. It generates 241 hp (180 kW) and 273 lb-ft (370 Nm), with that peak torque arriving at just 1,750 rpm. This sends its power to the front wheels via a seven-speed DSG or the soon-to-be-retired six-speed manual.

  • For this review, we tested both the DSG and manual transmission versions of the GTI.

  • I don’t know if generations of Golf styling is exactly like Star Trek movies, but I do usually prefer the even numbers…

  • VW decided to drop the three-door Golf body style when it introduced the Mk 8 Golf.

  • There’s 19.9 cubic feet (563 L) of cargo space with the seats in use, or 34.5 cubic feet (977 L) with the rear seats folded flat.

You can blame enlightened Europe for the six-speed’s demise. Over there, buyers prefer the two-pedal version by a massive margin, which even the high take rate for three-pedal GTIs in the US and Canada couldn’t make up for. (This is, of course, contrary to popular wisdom, which has it that all Europeans shun auto ‘boxes as a matter of course.) On top of that, getting the six-speed to comply with incoming Euro 7 emissions regulations proved to be just too much, according to VW, so it decided to drop the option.

Here in the US, both transmissions are rated at a combined 27 mpg (8.7 L/100 km), with the DSG getting the edge in city driving (24 mpg/9.8 L/100 km) and the manual beating it slightly for highway (34 mpg/6.9 L/100 km). In practice, I saw as high as 36 mpg (6.5 L/100 km) on highway trips with the three-pedal GTI.
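The paired figures above are just two expressions of the same quantity: US mpg and L/100 km are reciprocals of each other, related by a single constant. A quick sketch (constant rounded; the function name is ours):

```python
def mpg_to_l_per_100km(mpg):
    # 1 US gallon ≈ 3.78541 L and 1 mile ≈ 1.60934 km,
    # so L/100 km = 100 * 3.78541 / (1.60934 * mpg) ≈ 235.215 / mpg.
    return 235.215 / mpg

print(round(mpg_to_l_per_100km(27), 1))  # combined: 8.7
print(round(mpg_to_l_per_100km(36), 1))  # observed highway: 6.5
```

Note the relationship is inverted: a higher mpg number means a lower (better) L/100 km figure.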

A smarter GTI

A more modern electronic architecture was one of the improvements to the Golf from Mk 7 to Mk 8. On the plus side, it enables some clever vehicle dynamics control via the torque-sensing limited slip differential, the GTI’s stability and traction control, and the adaptive dampers, if fitted. Very keen drivers might prefer a mechanical limited slip diff, but in day-to-day driving, you’d never have an issue with the Mk 8 GTI’s electronic version.

The new electronics meant a big tech upgrade for the interior, too. Out went the physical analog gauges, replaced by a 10.25-inch digital display with various user-configurable views. A move to capacitive control panels instead of discrete buttons adds an extra level of minimalism to VW’s traditionally spartan approach to cabin design, but they’re far too easy to activate by mistake.



“HAIL HOLY TERROR”: Two US citizens charged for running online “Terrorgram Collective”

out in the open —

White accelerationist terror meets social media.

The US government recently announced multiple charges against the alleged leaders of the “Terrorgram Collective,” which does just what it sounds like—it promotes terrorism on the Telegram messaging platform. In this case, the terrorism was white racial terror, complete with a “hit list” of US officials and activists, a homemade “White Terror” video glorifying “saints” who had killed others, and instructions for taking down US infrastructure such as electrical substation transformers. (Read the indictment.)

The group’s criteria for “sainthood.”

Chaos was the point. Terrorgram promoted “white supremacist accelerationism,” the belief that society must be pushed into a civil war or apocalyptic confrontation in order to bring down the existing system of government and establish a white nationalist state.

The group’s manifestos and chat rooms sometimes felt suffused with the habits of the extremely online: hand-clap emojis between every important word, instructional videos on how to make bombs, the language of trolling, catchphrases so over the top they sound ironic (“HAIL HOLY TERROR” in all caps).

Despite using technology to organize and publicize its ideology, though, the group was skeptical of technology—or at least of certain kinds. “Do not let those technophiles have a day of rest!” said one post encouraging its readers to go after the local power grid.

“LEAVE. YOUR. PHONE. AT. HOME,” said another. “Death to the grid. Death to the System,” concluded a third. The group’s accelerationist manifesto was called “Hard Reset.”

An “encyclopedia” of killers, produced by Terrorgram.

But they were apparently happy to use other tech to spread the word. One Terrorgram publication was called “Do it for the Gram,” and Terrorgram admins created audiobooks of shooter manifestos, such as “A White Boy Summer to Remember.”

Telegram itself, which combines the wide reach of unencrypted channels and chat rooms with the possibility of direct messaging (which can be encrypted), was a favorite spot for recruiting and sharing information. According to the government, Dallas Humber (34) of Elk Grove, California, and Matthew Allison (37) of Boise, Idaho, were the leaders of Terrorgram, which they appear to have run out in the open.

The group constantly encouraged violence, and it stressed the need for attackers to mentally prepare themselves to kill so as not to chicken out. But neither Humber nor Allison is accused of violence themselves; they seem to have been content to cheer on new martyrs to their cause.

The government traces several real-world killers to the Terrorgram community, including a 19-year-old from Slovakia who, in 2022, killed two people at an LGBTQ+ bar in Bratislava before sending his manifesto to Allison and then killing himself in a park. The manifesto specifically listed “Hard Reset” in its “Recommended Reading” section.



Satisfactory is officially released, officially a scary wonderful time sink

What is a “game”? —

Even people with 1,000 hours in the game are still learning about it.

Updated

Where are the gentle creatures and native plants you first saw when you landed? More importantly, could this conveyor belt run on a shorter path?

Coffee Stain Studios

The company that compels you to industrialize an untouched alien planet in Satisfactory, FICSIT, is similar to Portal‘s Aperture Science or Fallout‘s Vault-Tec. You are a disposable employee, fed misinformation and pushed to ignore awful or incongruous things, all for the greater good of science, profit, or an efficient mixture of the two.

And yet even FICSIT was a bit concerned about how deep into the 1.0 release of Satisfactory (Steam, Epic Games, on sale until September 23) I had fallen. I got a warning that I had been playing for two hours straight. While FICSIT approved of hard work, it was important to have some work-life balance, it suggested.

Friends of mine had told me that they had to stop playing Factorio when it began to feel like an unpaid part-time job. Given a chance to check out Satisfactory, I presumed, like I always do, That Could Never Be Me. Folks, it was definitely me. I’m having a hard time writing this post, not because it’s hard to describe or recommend Satisfactory. I just stayed up very late “reviewing” it, woke up thinking about it, and am wondering whether enough friends would want to join me that I should set up a private server.

  • Up to four players can explore, gather, and argue about optimal workflows together.

  • The exhaust from your efforts surely won’t affect things long-term. Keep building.

  • The control panel for a nuclear power unit. Enough said.

Travel the galaxy, meet interesting creatures, ignore them, and automate

You are a Pioneer, working for FICSIT (motto: “We do anything to find short-term solutions to long-term problems”). You are dropped onto an alien planet, with a first-person view, and your first job is to disassemble your landing craft so you can use its parts for a HUB (Habitat and Utility Base). Your second job is to upgrade your HUB so you can build tools and workshops. Your third job is to upgrade it again, unlocking even more tools and workshops. You’ll need resources to keep building, like mined ore, fuel for a generator, and, sadly, animal parts for research.

How should you feel about the lush landscape you are slowly stripping away and populating with smoke-belching machines? Is there a greater plan for all this stuff you’re making? What is a “Space Elevator” and where does it take things? How bad should you feel about putting down animals that charge you as you invade their space?

You might think about these things, but there’s a stronger pull on your brain. Can you better optimize the flow of ore from a mining machine into the smelter, onto the tool machine (Constructor), and then into a storage bin? What about your power—do you really have to manually feed your system leaves and wood and turn it off between productions? Are we really balancing the wattage input and clock speed of our machines in our spare time, for fun?

5.5 million copies, a thousand hours, no end in sight

Yes, we are. I have only glimpsed into my future in Satisfactory, and it’s already full of excuses for why I didn’t get other things done. People with 540 hours in the game are advising me on how to site my permanent factory while I kludge it out with a “starter factory,” a wonderfully evocative phrase about human nature. Searching for people with 1,000 hours of experience yields a remarkable number of people who are still asking questions and figuring things out.

One of the things being added to the 1.0 release is a “Quantum Encoder.” I have no idea what this is or what it does, but I have a sense of where this is all headed.

Coffee Stain Studios’ 1.0 release trailer for Satisfactory.

Satisfactory launched exclusively on the Epic Games Store and was one of the very few games that earned enough to actually make Epic some money. Its developer, Coffee Stain Studios (maker of Goat Simulator, publisher of Deep Rock Galactic), cites 5.5 million copies sold since it hit early access in March 2019. With its 1.0 release, the developer has promised “Premium plumbing,” such that the toilet in your living quarters “has been updated with an advanced flushing mechanism, providing an extra luxurious worker experience for fans.”

Perhaps my one saving grace is that Satisfactory is “Playable,” not Verified, on Steam Deck. There are some text and input quirks and video inefficiencies to contend with, though I fear the active community offers solutions for all of them. It’s up to me, and all of us, to find some work/life/game-work/sleep balance. Time to do your part.

This post was updated at 12:45 p.m. to modify a reference to the Space Elevator.

Listing image by Coffee Stain Studios



Microsoft performs operations with multiple error-corrected qubits

Quantinuum’s H2 “racetrack” quantum processor.

Quantinuum

On Tuesday, Microsoft made a series of announcements related to its Azure Quantum Cloud service. Among them was a demonstration of logical operations using the largest number of error-corrected qubits yet.

“Since April, we’ve tripled the number of logical qubits here,” said Microsoft Technical Fellow Krysta Svore. “So we are accelerating toward that hundred-logical-qubit capability.” The company has also lined up a new partner in the form of Atom Computing, which uses neutral atoms to hold qubits and has already demonstrated hardware with over 1,000 hardware qubits.

Collectively, the announcements are the latest sign that quantum computing has emerged from its infancy and is rapidly progressing toward the development of systems that can reliably perform calculations that would be impractical or impossible to run on classical hardware. We talked with people at Microsoft and some of its hardware partners to get a sense of what’s coming next to bring us closer to useful quantum computing.

Making error correction simpler

Logical qubits are a way out of a grim realization: on their own, hardware qubits will never stop producing too many errors for reliable calculation. Error correction on classical computers involves measuring the state of bits and comparing their values against redundant parity information. Unfortunately, you can’t do the analogous measurement on a qubit to determine whether an error has occurred, since measurement forces it to adopt a concrete value, destroying the superposition of values that makes quantum computing useful.

Logical qubits get around this by spreading a single qubit’s worth of quantum information across a collection of hardware qubits, which makes any one error less catastrophic. Detecting an error involves adding ancillary qubits to the logical qubit, set up so that their values depend on the qubits holding the data. Measuring these ancillary qubits reveals whether a problem has occurred and can provide information on how to correct it.

There are many potential error-correction schemes, some of which involve dedicating around a thousand hardware qubits to each logical qubit. It’s possible to get away with far fewer than that—schemes using fewer than 10 qubits exist. But in general, the fewer hardware qubits you use, the greater your chance of experiencing errors you can’t recover from. This trend can be offset in part by using hardware qubits that are less error-prone.
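The syndrome-measurement idea can be illustrated with the smallest such scheme, the 3-qubit bit-flip repetition code. Here is a toy classical simulation (plain Python, not real quantum code), where parity checks between pairs of bits play the role of the ancilla measurements described above—they locate a single flipped bit without ever reading the data value directly:

```python
import random

def encode(bit):
    # Spread one logical bit across three physical bits.
    return [bit, bit, bit]

def syndrome(q):
    # Parity of neighboring pairs, analogous to ancilla measurements:
    # the result identifies the error without revealing the data.
    return (q[0] ^ q[1], q[1] ^ q[2])

def correct(q):
    # Map each nonzero syndrome to the bit it implicates, and flip it back.
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(q))
    if flip is not None:
        q[flip] ^= 1
    return q

def decode(q):
    # Majority vote recovers the logical bit.
    return max(set(q), key=q.count)

codeword = encode(1)
codeword[random.randrange(3)] ^= 1  # inject one random bit-flip error
assert decode(correct(codeword)) == 1
```

Any single bit-flip is corrected; two simultaneous flips defeat the code, which is why lower hardware error rates and larger codes both help.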

The challenge is that this only works if error rates are low enough that you don’t run into errors during the correction process. In other words, the hardware qubits have to be good enough that they don’t produce so many errors that it’s impossible to know when an error has occurred and how to correct it. That threshold has been passed only relatively recently.

Microsoft’s earlier demonstration involved the use of hardware from Quantinuum, which uses qubits based on ions trapped in electrical fields. These have some of the best error rates yet reported, and Microsoft had shown that this allowed it to catch and correct errors over several rounds of error correction. In the new work, the collaboration went further, performing multiple logical operations with error correction on a collection of logical qubits.



DOJ claims Google has “trifecta of monopolies” on Day 1 of ad tech trial

Karen Dunn, one of the lawyers representing Google, outside of the Albert V. Bryan US Courthouse at the start of a Department of Justice antitrust trial against Google over its advertising business in Alexandria, Virginia, on September 9, 2024.

On Monday, the US Department of Justice’s next monopoly trial against Google started in Virginia—this time challenging the tech giant’s ad tech dominance.

The trial comes after Google lost two major cases that proved Google had a monopoly in both general search and the Android app store. During her opening statement, DOJ lawyer Julia Tarver Wood told US District Judge Leonie Brinkema—who will be ruling on the case after Google cut a check to avoid a jury trial—that “it’s worth saying the quiet part out loud,” AP News reported.

“One monopoly is bad enough,” Wood said. “But a trifecta of monopolies is what we have here.”

In its complaint, the DOJ argued that Google broke competition in the ad tech space “by engaging in a systematic campaign to seize control of the wide swath of high-tech tools used by publishers, advertisers, and brokers, to facilitate digital advertising.”

The result of such “insidious” allegedly anti-competitive behavior is that today Google pockets at least 30 cents “of each advertising dollar flowing from advertisers to website publishers through Google’s ad tech tools … and sometimes far more,” the DOJ alleged.

Meanwhile, as Google profits off both advertisers and publishers, “website creators earn less, and advertisers pay more” than “they would in a market where unfettered competitive pressure could discipline prices and lead to more innovative ad tech tools,” the DOJ alleged.

On Monday, Wood told Brinkema that Google intentionally put itself in this position to “manipulate the rules of ad auctions to its own benefit,” The Washington Post reported.

“Publishers were understandably furious,” Wood said. “The evidence will show that they could do nothing.”

Wood confirmed that the DOJ planned to call several publishers as witnesses in the coming weeks to explain the harms caused. Expected to take the stand will be “executives from companies including USA Today, [Wall Street] Journal parent company News Corp., and the Daily Mail,” the Post reported.

The ad tech trial, which is expected to last four to six weeks, may be the most consequential of the monopoly trials Google has recently faced, experts have said.

That’s because during the DOJ’s trial proving Google’s monopoly in search, it remained unclear what remedies the DOJ sought. Some ways to destroy Google’s search monopoly could be “unlikely to create meaningful competition” or hurt Google’s bottom line, experts told Ars, but a more drastic order to spin out its Chrome browser or Android operating system could really impact Google’s revenue. It won’t be until December that the DOJ will even provide a rough outline of proposed remedies in that case, Reuters reported, with the judge not expected to rule until next August.

But the DOJ has been very clear about the remedies needed in the ad tech case, “asking Brinkema to order a divestment of Google’s Ad Manager suite of services, which is responsible for many of the rectangular ads that populate the tops and sides of webpages across the Internet,” the Post reported.

Because the most “obvious” remedy would be to require Google to sell off parts of its ad business, experts told AP News that the ad tech trial “could potentially be more harmful to Google” than the search trial. Perhaps at the furthest extreme, antitrust expert Shubha Ghosh told Ars that “if this case goes against Google as the last one did, it could set the stage for splitting it into separate search and advertising companies.”

In the DOJ’s complaint, prosecutors argued that it “is critical to restore competition in these markets by enjoining Google’s anticompetitive practices, unwinding Google’s anticompetitive acquisitions, and imposing a remedy sufficient both to deny Google the fruits of its illegal conduct and to prevent further harm to competition in the future.”

Ghosh said that undoing Google’s acquisitions could lead to Google no longer representing both advertisers’ and sellers’ interests in each ad auction—instead requiring Google to either pick a side or perhaps involve a broker.

Although the Post reported that Google has argued that “customers prefer the convenience of a one-stop shop,” the DOJ hopes to prove that Google’s alleged monopoly has shuttered newspapers across the US and threatens to do more harm if left unchecked.



Unlocked, loaded guns more common among parents who give kids firearm lessons

parenting paradox —

It’s unknown if demonstrating responsible handling actually keeps kids safe.

A man helps a boy look at a handgun during the National Rifle Association’s Annual Meetings & Exhibits at the Indiana Convention Center in Indianapolis on April 16, 2023.

Gun-owning parents who teach their kids how to responsibly handle and shoot a gun are less likely to store those deadly weapons safely, according to a survey-based study published Monday in JAMA Pediatrics.

The study, conducted by gun violence researchers at Rutgers University, analyzed survey responses from 870 gun-owning parents. Of those, the parents who responded that they demonstrated proper handling to their child or teen, had their kid practice safe handling under supervision, and/or taught their kid how to shoot a firearm were more likely than other gun-owning parents to keep at least one gun unsecured—that is, unlocked and loaded. In fact, each of the three responses carried at least double the odds of the parent having an unlocked, loaded gun around, the study found.

The survey responses may seem like a paradox for parents who value safe and responsible gun handling. Previous studies have suggested that safe storage of firearms can reduce the risk of injuries and deaths among children and teens. A 2005 JAMA study, for instance, found lower risks of firearm injuries among children and teens when parents securely store their firearms—meaning they kept them locked, unloaded, and stored separately from locked ammunition. And in 2020, firearm-related injuries became the leading cause of death among children and teens in the US.

Still, earlier surveys of gun-owning parents have hinted that some parents believe responsible gun-handling lessons are enough to keep children safe.

Safe handling, unsecured storage

To put data behind that sentiment, the Rutgers researchers collected survey responses from adults in nine states (New Jersey, Pennsylvania, Ohio, Minnesota, Florida, Mississippi, Texas, Colorado, and Washington) who participated in an Ipsos poll. The states span varying geography, firearm ownership, firearm policies, and gun violence rates. The responses were collected between June and July of 2023. The researchers then analyzed the data, adjusting their calculations to account for a variety of factors, including military status, education level, income, and political beliefs.

Among the 870 parents included in the study, 412 (47 percent) said they demonstrated proper firearm handling for their kids, 321 (37 percent) said they had their children practice proper handling with supervision, and 324 (37 percent) taught their kids how to shoot their firearm.

The researchers then split the group into those who did not keep an unlocked, loaded gun around and those who did. Of the 870 gun-owning parents, 720 (83 percent) stored firearms securely, while 150 (17 percent) reported that they kept at least one firearm unlocked and loaded. Compared with the 720 secure-storage parents, the 150 parents with an unlocked, loaded gun had adjusted odds of 2.03-fold higher for saying they demonstrated proper handling, 2.29-fold higher for practicing handling with their kids, and 2.27-fold higher for teaching their kids to shoot.
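For readers unfamiliar with the "2.03-fold odds" framing, here is how a raw (unadjusted) odds ratio falls out of a 2×2 table. The counts below are a hypothetical split invented for illustration—the study reports only adjusted odds from a regression controlling for covariates, not this cross-tabulation:

```python
def odds_ratio(exposed_yes, exposed_no, unexposed_yes, unexposed_no):
    # Odds of the behavior in the exposed group divided by the odds
    # in the unexposed group.
    return (exposed_yes / exposed_no) / (unexposed_yes / unexposed_no)

# Hypothetical: suppose 90 of the 150 unsecured-storage parents taught
# shooting, vs. 234 of the 720 secure-storage parents.
print(round(odds_ratio(90, 60, 234, 486), 2))  # → 3.12
```

An adjusted odds ratio answers the same question after statistically holding the other surveyed factors (income, education, and so on) constant, which is why the study's figures differ from anything computable from the raw totals.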

“Consistent with qualitative research results, these findings suggest that some parents may believe that modeling responsible firearm use negates the need for secure storage,” the authors concluded. “However, it is unknown whether parents’ modeling responsible behavior is associated with a decreased risk of firearm injury.”



How did volcanism trigger climate change before the eruptions started?

Loads of lava: Kasbohm with a few solidified lava flows of the Columbia River Basalts.

Joshua Murray

As our climate warms beyond its historical range, scientists increasingly need to study climates deeper in the planet’s past to get information about our future. One object of study is a warming event known as the Miocene Climate Optimum (MCO) from about 17 to 15 million years ago. It coincided with floods of basalt lava that covered a large area of the Northwestern US, creating what are called the “Columbia River Basalts.” This timing suggests that volcanic CO2 was the cause of the warming.

Those eruptions were the most recent example of a “Large Igneous Province,” a phenomenon that has repeatedly triggered climate upheavals and mass extinctions throughout Earth’s past. The Miocene version was relatively benign; it saw CO2 levels and global temperatures rise, causing ecosystem changes and significant melting of Antarctic ice, but didn’t trigger a mass extinction.

A paper just published in Geology, led by Jennifer Kasbohm of the Carnegie Science’s Earth and Planets Laboratory, upends the idea that the eruptions triggered the warming while still blaming them for the peak climate warmth.

The study is the result of the world’s first successful application of high-precision radiometric dating on climate records obtained by drilling into ocean sediments, opening the door to improved measurements of past climate changes. As a bonus, it confirms the validity of mathematical models of the Solar System’s orbital motions over deep time.

A past climate with today’s CO2 levels

“Today, with 420 parts per million [of CO2], we are basically entering the Miocene Climate Optimum,” said Thomas Westerhold of the University of Bremen, who peer-reviewed Kasbohm’s study. While our CO2 levels match, global temperatures have not yet reached the MCO temperatures of up to 8° C above the preindustrial era. “We are moving the Earth System from what we call the Ice House world… in the complete opposite direction,” said Westerhold.

When Kasbohm began looking into the link between the basalts and the MCO’s warming in 2015, she found that the correlation had huge uncertainties. So she applied high-precision radiometric dating, using the radioactive decay of uranium trapped within zircon crystals to determine the age of the basalts. She found that her new ages no longer spanned the MCO warming. “All of these eruptions [are] crammed into just a small part of the Miocene Climate Optimum,” said Kasbohm.
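The underlying age equation for this kind of dating is simple: from a measured ratio of daughter to remaining parent atoms, t = (1/λ) · ln(1 + D/P). A sketch in Python—the decay constant is the accepted U-238 value, but the D/P ratio below is invented purely to show the scale of a Miocene age, not a measurement from the paper:

```python
import math

LAMBDA_U238 = 1.55125e-10  # U-238 decay constant, per year

def age_years(daughter_parent_ratio):
    # t = (1/λ) · ln(1 + D/P), the basic radiometric age equation
    return math.log(1 + daughter_parent_ratio) / LAMBDA_U238

# A ~16-million-year-old zircon has accumulated only a tiny amount of
# radiogenic lead relative to its remaining uranium:
print(age_years(0.00248))  # roughly 16 million years
```

The precision Kasbohm cites comes not from the equation but from decades of work shrinking every uncertainty in measuring that ratio and calibrating the decay constant.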

But there were also huge uncertainties in the dates for the MCO, so it was possible that the mismatch was an artifact of those uncertainties. Kasbohm set out to apply the same high-precision dating to the marine sediments that record the MCO.

A new approach to an old problem

“What’s really exciting… is that this is the first time anyone’s applied this technique to sediments in these ocean drill cores,” said Kasbohm.

Normally, dates for ocean sediments drilled from the seabed are determined using a combination of fossil changes, magnetic field reversals, and aligning patterns of sediment layers with orbital wobbles calculated by astronomers. Each of those methods has uncertainties that are compounded by gaps in the sediment caused by the drilling process and by natural pauses in the deposition of material. Those make it tricky to match different records with the precision needed to determine cause and effect.

The uncertainties made the timing of the MCO unclear.

Tiny clocks: Zircon crystals from volcanic ash that fell into the Caribbean Sea during the Miocene.

Jennifer Kasbohm

Radiometric dating would circumvent those uncertainties. But until about 15 years ago, its dates had such large errors that they were useless for addressing questions like the timing of the MCO. The technique also typically needs kilograms of material to find enough uranium-containing zircon crystals, whereas ocean drill cores yield just grams.

But scientists have significantly reduced those limitations: “Across the board, people have been working to track and quantify and minimize every aspect of uncertainty that goes into the measurements we make. And that’s what allows me to report these ages with such great precision,” Kasbohm said.
