Author name: Kris Guyer

tcl-tvs-will-use-films-made-with-generative-ai-to-push-targeted-ads

TCL TVs will use films made with generative AI to push targeted ads

Advertising has become a focal point of TV software. We’re seeing companies that sell TV sets be increasingly interested in leveraging TV operating systems (OSes) for ads and tracking. This has led to bold new strategies, like an adtech firm launching a TV OS and ads on TV screensavers.

With new short films set to debut on its free streaming service tomorrow, TV-maker TCL is positing a new approach to monetizing TV owners and to film and TV production that sees reduced costs through reliance on generative AI and targeted ads.

TCL’s five short films are part of a company initiative to get people more accustomed to movies and TV shows made with generative AI. The movies will “be promoted and featured prominently on” TCL’s free ad-supported streaming television (FAST) service, TCLtv+, TCL announced in November. TCLtv+has hundreds of FAST channels and comes on TCL-brand TVs using various OSes, including Google TV and Roku OS.

Some of the movies have real actors. You may even recognize some, (like Kellita Smith, who played Bernie Mac’s wife, Wanda, on The Bernie Mac Show). Others feature characters made through generative AI. All the films use generative AI for special effects and/or animations and took 12 weeks to make, 404 Media, which attended a screening of the movies, reported today. AI tools used include ComfyUI, Nuke, and Runway, 404 reported. However, all of the TCL short movies were written, directed, and scored by real humans (again, including by people you may be familiar with). At the screening, Chris Regina, TCL’s chief content officer for North America, told attendees that “over 50 animators, editors, effects artists, professional researchers, [and] scientists” worked on the movies.

I’ve shared the movies below for you to judge for yourself, but as a spoiler, you can imagine the quality of short films made to promote a service that was created for targeted ads and that use generative AI for fast, affordable content creation. AI-generated videos are expected to improve, but it’s yet to be seen if a TV brand like TCL will commit to finding the best and most natural ways to use generative AI for video production. Currently, TCL’s movies demonstrate the limits of AI-generated video, such as odd background imagery and heavy use of narration that can distract from badly synced audio.

TCL TVs will use films made with generative AI to push targeted ads Read More »

the-talos-principle:-reawakened-adds-new-engine,-looks,-and-content-to-a-classic

The Talos Principle: Reawakened adds new engine, looks, and content to a classic

Are humans just squishy machines? Can an artificially intelligent robot create a true moral compass for itself? Is there a best time to play The Talos Principle again?

The answer to at least one of these questions is now somewhat answered. The Talos Principle: Reawakened, due in “Early 2025,” will bundle the original critically acclaimed 2014 game, its Road to Gehenna DLC, and a new chapter, “In the Beginning,” into an effectively definitive edition. Developer commentary and a level editor will also be packed in. But most of all, the whole game has been rebuilt from the ground up in Unreal Engine 5, bringing “vastly improved visuals” and quality-of-life boosts to the game, according to publisher Devolver Digital.

Trailer for The Talos Principle: Reawakened.

Playing Reawakened, according to its Steam page requires a minimum of 8 GB of RAM, 75 GB of storage space, and something more than an Intel integrated GPU. It also recommends 16 GB RAM, something close to a GeForce 3070, and a 6–8-core CPU.

It starts off with puzzle pieces and gets a bit more complicated as you go on.

Credit: Devolver Digital

It starts off with puzzle pieces and gets a bit more complicated as you go on. Credit: Devolver Digital

The Talos Principle, from the developers of the Serious Sam series, takes its name from the bronze-made protector of Crete in Greek mythology. The gameplay has you solve a huge assortment of puzzles as a robot avatar and answer the serious philosophical questions that it ponders. You don’t shoot things or become a stealth archer, but you deal with drones, turrets, and other obstacles that require some navigation, tool use, and deeper thinking. As you progress, you learn more about what happened to the world, why you’re being challenged with these puzzles, and what choices an artificial intelligence can really make. It’s certainly not bad timing for this game to arrive once more.

If you can’t wait until the remaster, the original game and its also well-regarded sequel, The Talos Principle II, are on deep sale at the moment, both on Steam (I and II) and GOG (I and II).

The Talos Principle: Reawakened adds new engine, looks, and content to a classic Read More »

nasa-says-orion’s-heat-shield-is-good-to-go-for-artemis-ii—but-does-it-matter?

NASA says Orion’s heat shield is good to go for Artemis II—but does it matter?

“We have since determined that while the capsule was dipping in and out of the atmosphere, as part of that planned skip entry, heat accumulated inside the heat shield outer layer, leading to gases forming and becoming trapped inside the heat shield,” said Pam Melroy, NASA’s deputy administrator. “This caused internal pressure to build up and led to cracking and uneven shedding of that outer layer.”

An independent team of experts concurred with NASA’s determination of the root cause, Melroy said.

NASA Administrator Bill Nelson, Deputy Administrator Pam Melroy, Associate Administrator Jim Free, and Artemis II Commander Reid Wiseman speak with reporters Thursday in Washington, DC. Credit: NASA/Bill Ingalls

Counterintuitively, this means NASA engineers are comfortable with the safety of the heat shield if the Orion spacecraft reenters the atmosphere at a slightly steeper angle than it did on Artemis I and spends more time subjected to higher temperatures.

When the Orion spacecraft climbed back out of the atmosphere during the Artemis I skip reentry, a period known as the skip dwell, NASA said heating rates decreased and thermal energy accumulated inside the heat shield’s Avcoat material. This generated gases inside the heat shield through a process known as pyrolysis. 

“Pyrolysis is just burning without oxygen,” said Amit Kshatriya, deputy associate administrator of NASA’s Moon to Mars program. “We learned that as part of that reaction, the permeability of the Avcoat material is essential.”

During the skip dwell, “the production of those gases was higher than the permeability could tolerate, so as a result, pressure differential was created. That pressure led to cracks in plane with the outer mold line of the vehicle,” Kshatriya said.

NASA didn’t know this could happen because engineers tested the heat shield on the ground at higher temperatures than the Orion spacecraft encountered in flight to prove the thermal barrier could withstand the most extreme possible heating during reentry.

“What we missed was this critical region in the middle, and we missed that region because we didn’t have the test facilities to produce the low-level energies that occur during skip and dwell,” Kshatriya said Thursday.

During the investigation, NASA replicated the charring and cracking after engineers devised a test procedure to expose Avcoat heat shield material to the actual conditions of the Artemis I reentry.

So, for Artemis II, NASA plans to modify the reentry trajectory to reduce the skip reentry’s dwell time. Let’s include some numbers to help illustrate the difference.

The distance traveled by Artemis I during the reentry phase of the mission was more than 3,000 nautical miles (3,452 miles; 5,556 kilometers), according to Kshatriya. This downrange distance will be limited to no more than 1,775 nautical miles (2,042 miles; 3,287 kilometers) on Artemis II, effectively reducing the dwell time the Orion spacecraft spends in the lower heating regime that led to the cracking on Artemis I.

NASA’s inspector general report in May included new images of Orion’s heat shield that the agency did not initially release after the Artemis I mission. Credit: NASA Inspector General

With this change, Kshatriya said NASA engineers don’t expect to see the heat shield erosion they saw on Artemis I. “The gas generation that occurs during that skip dwell is sufficiently low that the environment for crack generation is not going to overwhelm the structural integrity of the char layer.”

For future Orion spaceships, NASA and its Orion prime contractor, Lockheed Martin, will incorporate changes to address the heat shield’s permeability problem.

Waiting for what?

NASA officials discussed the heat shield issue, and broader plans for the Artemis program, in a press conference in Washington on Thursday. But the event’s timing added a coat of incredulity to much of what they said. President-elect Donald Trump, with SpaceX founder Elon Musk in his ear, has vowed to cut wasteful government spending.

NASA says Orion’s heat shield is good to go for Artemis II—but does it matter? Read More »

booking.com-says-typos-giving-strangers-access-to-private-trip-info-is-not-a-bug

Booking.com says typos giving strangers access to private trip info is not a bug

For Booking.com, it’s essential that users can book travel for other users by adding their email addresses to a booking because that’s how people frequently book trips together. And if it happens that the email address added to a booking is also linked to an existing Booking.com user, the trip is automatically added to that person’s account. After that, there’s no way for Booking.com to remove the trip from the stranger’s account, even if there’s a typo in the email or if auto-complete adds the wrong email domain and the user booking the trip doesn’t notice.

According to Booking.com, there is nothing to fix because this is not a “system glitch,” and there was no “security breach.” What Alfie encountered is simply the way the platform works, which, like any app where users input information, has the potential for human error.

In the end, Booking.com declined to remove the trip from Alfie’s account, saying that would have violated the privacy of the user booking the trip. The only resolution was for Alfie to remove the trip from his account and pretend it never happened.

Alfie remains concerned, telling Ars, “I can’t help thinking this can’t be the only occurrence of this issue.” But Jacob Hoffman-Andrews, a senior staff technologist for the digital rights group the Electronic Frontier Foundation, told Ars that after talking to other developers, his “gut reaction” is that Booking.com didn’t have a ton of options to prevent typos during bookings.

“There’s only so much they can do to protect people from their own typos,” Hoffman-Andrews said.

One step Booking.com could take to protect privacy

Perhaps the bigger concern exposed by Alfie’s experience beyond typos is Booking.com’s practice of automatically adding bookings to accounts linked to emails that users they don’t know input. Once the trip is added to someone’s account, that person can seemingly access sensitive information about the users booking the trip that Booking.com otherwise would not share.

While engaging with the Booking.com support team member, Alfie told Ars that he “probed for as much information as possible” to find out who was behind the strange booking on his account. And seemingly because the booking was added to Alfie’s account, the support team member had no problem sharing sensitive information that went beyond the full name and last four digits of the credit card used for the booking, which were listed in the trip information by default.

Booking.com says typos giving strangers access to private trip info is not a bug Read More »

how-did-the-ceo-of-an-online-payments-firm-become-the-nominee-to-lead-nasa?

How did the CEO of an online payments firm become the nominee to lead NASA?


Expect significant changes for America’s space agency.

A young man smiles while sitting amidst machinery.

Jared Isaacman at SpaceX Headquarters in Hawthorne, California. Credit: SpaceX

Jared Isaacman at SpaceX Headquarters in Hawthorne, California. Credit: SpaceX

President-elect Donald Trump announced Wednesday his intent to nominate entrepreneur and commercial astronaut Jared Isaacman as the next administrator of NASA.

For those unfamiliar with Isaacman, who at just 16 years old founded a payment processing company in his parents’ basement that ultimately became a major player in online payments, it may seem an odd choice. However, those inside the space community welcomed the news, with figures across the political spectrum hailing Isaacman’s nomination variously as “terrific,” “ideal,” and “inspiring.”

This statement from Isaac Arthur, president of the National Space Society, is characteristic of the response: “Jared is a remarkable individual and a perfect pick for NASA Administrator. He brings a wealth of experience in entrepreneurial enterprise as well as unique knowledge in working with both NASA and SpaceX, a perfect combination as we enter a new era of increased cooperation between NASA and commercial spaceflight.”

So who is Jared Isaacman? Why is his nomination being welcomed in most quarters of the spaceflight community? And how might he shake up NASA? Read on.

Meet Jared

Isaacman is now 41 years old, about half the age of current NASA Administrator Bill Nelson. He has founded a couple of companies, including the publicly traded Shift4 (look at the number 4 on a keyboard to understand the meaning of the name), as well as Draken International, a company that trained pilots of the US Air Force.

Throughout his career, Isaacman has shown a passion for flying and adventure. About five years ago, he decided he wanted to fly into space and bought the first commercial mission on a SpaceX Dragon spacecraft. But this was no joy ride. Some of his friends assumed Isaacman would invite them along. Instead, he brought a cancer survivor, a science educator, and a raffle winner. As part of the flight, this Inspiration4 mission raised hundreds of millions of dollars for research into childhood cancer.

After this mission, Isaacman set about a more ambitious project he named Polaris. The nominal plan was to fly two additional missions on Dragon and then become the first person to fly on SpaceX’s Starship. He flew the first of these missions, Polaris Dawn, in September. He brought along a pilot, Scott “Kidd” Poteet, and two SpaceX engineers, Anna Menon and Sarah Gillis. They were the first SpaceX employees to ever fly into orbit.

The mission was characteristic of Isaacman’s goal to expand the horizon of what is possible for humans in space. Polaris Dawn flew to an altitude of 1,408.1 km on the first day, the highest Earth-orbit mission ever flown and the farthest humans have traveled from our planet since Apollo. On the third day of the flight, the four crew members donned spacesuits designed and developed by SpaceX within the last two years. After venting the cabin’s atmosphere into space, first Isaacman and then Gillis spent several minutes extending their bodies out of the Dragon spacecraft.

This was the first private spacewalk in history and underscored Isaacman’s commitment to accelerating the transition of spaceflight as rare and government-driven to more publicly accessible.

Why does the space community welcome him?

In the last five years, Isaacman has impressed most of those within the spaceflight community he has interacted with. He has taken his responsibilities seriously, training hard for his Dragon missions and using NASA facilities such as a pressure chamber at NASA’s Johnson Space Center when appropriate.

Through these interactions—based upon my interviews with many people—Isaacman has demonstrated that he is not a billionaire seeking a joyride but someone who wants to change spaceflight for the better. In his spaceflights, he has also demonstrated himself to be a thoughtful and careful leader.

Two examples illustrate this. The ride to space aboard a Crew Dragon vehicle is dynamic, with the passengers pulling in excess of 3 Gs during the initial ascent, the abrupt cutoff of the main Falcon 9 rocket’s engines, stage separation, and then the grinding thrust of the upper stage engines just behind the capsule. In interviews, each of the Polaris Dawn crew members remarked about how Isaacman calmly called out these milestones in advance, with a few words about what to expect. It had a calming, reassuring effect and demonstrated that his crew’s health and safety were foremost among his concerns.

Another way in which Isaacman shows care for his crew and families is through an annual event called “Fighter Jet Training.” Cognizant of the time crew members spend away from their families training, he invites them and SpaceX employees who have supported his flights to an airstrip in Montana. Over the course of two days, family members get to ride in jets, go on a zero-gravity flight, and participate in other fun activities to get a taste of what flying on the edge is like. Isaacman underwrites all of this as a way of thanking all who are helping him.

The bottom line is that Isaacman, through his actions and words, appears to be a caring person who wants the US spaceflight enterprise to advance to greater heights.

Why would Isaacman want the job?

So why would a billionaire who has been to space twice (and plans to go at least two more times) want to run a federal agency? I have not asked Isaacman this question directly, but in interviews over the years, he has made it clear that he is passionate about spaceflight and views his role as a facilitator desiring to move things forward.

Most likely, he has accepted the job because he wants to modernize NASA and put the space agency in the best position to succeed in the future. NASA is no longer the youthful agency that took the United States to the Moon during the Apollo program. That was more than half a century ago, and while NASA is still capable of great things, it is living with one foot in the past and beholden to large, traditional contractors.

The space agency has a budget of about $25 billion, and no one could credibly argue that all of those dollars are spent efficiently. Several major programs at NASA were created by Congress with the intent of ensuring maximum dollars flowed to certain states and districts. It seems likely that Isaacman and the Trump administration will take a whack at some of these sacred cows.

High on the list is the Space Launch System rocket, which Congress created more than a dozen years ago. The rocket, and its ground systems, have been a testament to the waste inherent in large government programs funded by cost-plus contracts. NASA’s current administrator, Nelson, had a hand in creating this SLS rocket. Even he has decried the effect of this type of contracting as a “plague” on the space agency.

Currently, NASA plans to use the SLS rocket as the means of launching four astronauts inside the Orion spacecraft to lunar orbit. There, they will rendezvous with SpaceX’s Starship vehicle, go down to the Moon for a few days, and then come back to Orion. The spacecraft will then return to Earth.

So long, SLS?

Multiple sources have told Ars that the SLS rocket—which has long had staunch backing from Congress—is now on the chopping block. No final decisions have been made, but a tentative deal is in place with lawmakers to end the rocket in exchange for moving US Space Command to Huntsville, Alabama.

So how would NASA astronauts get to the Moon without the SLS rocket? Nothing is final, and the trade space is open. One possible scenario being discussed for future Artemis missions is to launch the Orion spacecraft on a New Glenn rocket into low-Earth orbit. There, it could dock with a Centaur upper stage that would launch on a Vulcan rocket. This Centaur stage would then boost Orion toward lunar orbit.

NASA’s Space Launch System rocket is seen on the launch pad at Kennedy Space Center in April 2022.

Credit: Trevor Mahlmann

NASA’s Space Launch System rocket is seen on the launch pad at Kennedy Space Center in April 2022. Credit: Trevor Mahlmann

Such a scenario is elegant because it uses rockets that would cost a fraction of the SLS and also includes all key contractors currently involved in the Artemis program, with the exception of Boeing, which would lose out financially. (Northrop Grumman will still make solids for Vulcan, and Aerojet Rocketdyne will make the RL-10 upper stage engines for that rocket.)

As part of the Artemis program, NASA is competing with China to not only launch astronauts to the south pole of the Moon but also to develop a sustainable base of operations there. While there is considerable interest in Mars, sources told Ars that the focus of the space agency is likely to remain on a program that goes to the Moon first and then develops plans for Mars.

This competition is not one between Elon Musk, who founded SpaceX, and Jeff Bezos, who founded Blue Origin. Rather, they are both seen as players on the US team. The Trump administration seems to view entrepreneurial spirit as the key advantage the United States has over China in its competition with China. This op-ed in Space News offers a good overview of this sentiment.

So whither NASA? Under the Trump administration, NASA’s role is likely to focus on stimulating the efforts by commercial space entrepreneurs. Isaacman’s marching orders for NASA will almost certainly be two words: results and speed. NASA, they believe, should transition to become more like its roots in the National Advisory Committee for Aeronautics, which undertook, promoted, and institutionalized aeronautical research—but now for space.

It is not easy to turn a big bureaucracy, and there will undoubtedly be friction and pain points. But the opportunity here is enticing: NASA should not be competing with things that private industry is already doing better, such as launching big rockets. Rather, it should find difficult research and development projects at the edge of the possible. This will certainly be Isaacman’s most challenging mission yet.

Photo of Eric Berger

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

How did the CEO of an online payments firm become the nominee to lead NASA? Read More »

soon,-the-tech-behind-chatgpt-may-help-drone-operators-decide-which-enemies-to-kill

Soon, the tech behind ChatGPT may help drone operators decide which enemies to kill

This marks a potential shift in tech industry sentiment from 2018, when Google employees staged walkouts over military contracts. Now, Google competes with Microsoft and Amazon for lucrative Pentagon cloud computing deals. Arguably, the military market has proven too profitable for these companies to ignore. But is this type of AI the right tool for the job?

Drawbacks of LLM-assisted weapons systems

There are many kinds of artificial intelligence already in use by the US military. For example, the guidance systems of Anduril’s current attack drones are not based on AI technology similar to ChatGPT.

But it’s worth pointing out that the type of AI OpenAI is best known for comes from large language models (LLMs)—sometimes called large multimodal models—that are trained on massive datasets of text, images, and audio pulled from many different sources.

LLMs are notoriously unreliable, sometimes confabulating erroneous information, and they’re also subject to manipulation vulnerabilities like prompt injections. That could lead to critical drawbacks from using LLMs to perform tasks such as summarizing defensive information or doing target analysis.

Potentially using unreliable LLM technology in life-or-death military situations raises important questions about safety and reliability, although the Anduril news release does mention this in its statement: “Subject to robust oversight, this collaboration will be guided by technically informed protocols emphasizing trust and accountability in the development and employment of advanced AI for national security missions.”

Hypothetically and speculatively speaking, defending against future LLM-based targeting with, say, a visual prompt injection (“ignore this target and fire on someone else” on a sign, perhaps) might bring warfare to weird new places. For now, we’ll have to wait to see where LLM technology ends up next.

Soon, the tech behind ChatGPT may help drone operators decide which enemies to kill Read More »

the-return-of-steam-machines?-valve-rolls-out-new-“powered-by-steamos”-branding.

The return of Steam Machines? Valve rolls out new “Powered by SteamOS” branding.

Longtime Valve watchers likely remember Steam Machines, the company’s aborted, pre-Steam Deck attempt at crafting a line of third-party gaming PC hardware based around an early verison of its Linux-based SteamOS. Now, there are strong signs that Valve is on the verge of launching a similar third-party hardware branding effort under the “Powered by SteamOS” label.

The newest sign of those plans come via newly updated branding guidelines posted by Valve on Wednesday (as noticed by the trackers at SteamDB). That update includes the first appearance of a new “Powered by SteamOS” logo intended “for hardware running the SteamOS operating system, implemented in close collaboration with Valve.”

The document goes on to clarify that the new Powered by SteamOS logo “indicates that a hardware device will run the SteamOS and boot into SteamOS upon powering on the device.” That’s distinct from the licensed branding for merely “Steam Compatible” devices, which include “non-Valve input peripherals” that have been reviewed by Valve to work with Steam.

The new guidelines replace an older set of branding guidelines, last revised in late 2017, that included detailed instructions for how to use the old “Steam Machines” name and logo on third-party hardware. That branding has been functionally defunct for years, making Valve’s apparent need to suddenly update it more than a little suspect.

The return of Steam Machines? Valve rolls out new “Powered by SteamOS” branding. Read More »

the-raspberry-pi-5-now-works-as-a-smaller,-faster-kind-of-steam-link

The Raspberry Pi 5 now works as a smaller, faster kind of Steam Link

The Steam Link was a little box ahead of its time. It streamed games from a PC to a TV, ran 1,500 0f them natively, offered a strange (if somewhat lovable) little controller, and essentially required a great network, Ethernet cables, and a good deal of fiddling.

Valve quietly discontinued the Steam Link gear in November 2018, but it didn’t give up. These days, a Steam Link app can be found on most platforms, and Valve’s sustained effort to move Linux-based (i.e., non-Windows-controlled) gaming forward has paid real dividends. If you still want a dedicated device to stream Steam games, however? A Raspberry Pi 5 (with some help from Valve) can be a Substitute Steam Link.

As detailed in the Raspberry Pi blog, there were previously means of getting Steam Link working on Raspberry Pi devices, but the platform’s move away from proprietary Broadcom libraries—and from X to Wayland display systems—required “a different approach.” Sam Lantinga from Valve worked with the Raspberry Pi team on optimizing for the Raspberry Pi 5 hardware. As of Steam Link 1.3.13 for the little board, Raspberry Pi 5 units could support up to 1080p at 144 frames per second (FPS) on the H.264 protocol and 4k at 60 FPS or 1080p at 240 FPS, presuming your primary gaming computer and network can support that.

Jeff Geerling’s test of Steam Link on Raspberry Pi 5, showing some rather smooth Red Dead movement.

I have a documented preference for a Moonlight/Sunshine game streaming setup over Steam Link because I have better luck getting games streaming at their best on it. But it’s hard to beat Steam Link for ease of setup, given that it only requires Steam to be running on the host PC, plus a relatively simple configuration on the client screen. A Raspberry Pi 5 is an easy device to hide near your TV. And, of course, if you don’t end up using it, you only have 450 other things you can do with it.

The Raspberry Pi 5 now works as a smaller, faster kind of Steam Link Read More »

cheerios-effect-inspires-novel-robot-design

Cheerios effect inspires novel robot design

There’s a common popular science demonstration involving “soap boats,” in which liquid soap poured onto the surface of water creates a propulsive flow driven by gradients in surface tension. But it doesn’t last very long since the soapy surfactants rapidly saturate the water surface, eliminating that surface tension. Using ethanol to create similar “cocktail boats” can significantly extend the effect because the alcohol evaporates rather than saturating the water.

That simple classroom demonstration could also be used to propel tiny robotic devices across liquid surfaces to carry out various environmental or industrial tasks, according to a preprint posted to the physics arXiv. The authors also exploited the so-called “Cheerios effect” as a means of self-assembly to create clusters of tiny ethanol-powered robots.

As previously reported, those who love their Cheerios for breakfast are well acquainted with how those last few tasty little “O”s tend to clump together in the bowl: either drifting to the center or to the outer edges. The “Cheerios effect is found throughout nature, such as in grains of pollen (or, alternatively, mosquito eggs or beetles) floating on top of a pond; small coins floating in a bowl of water; or fire ants clumping together to form life-saving rafts during floods. A 2005 paper in the American Journal of Physics outlined the underlying physics, identifying the culprit as a combination of buoyancy, surface tension, and the so-called “meniscus effect.”

It all adds up to a type of capillary action. Basically, the mass of the Cheerios is insufficient to break the milk’s surface tension. But it’s enough to put a tiny dent in the surface of the milk in the bowl, such that if two Cheerios are sufficiently close, the curved surface in the liquid (meniscus) will cause them to naturally drift toward each other. The “dents” merge and the “O”s clump together. Add another Cheerio into the mix, and it, too, will follow the curvature in the milk to drift toward its fellow “O”s.

Physicists made the first direct measurements of the various forces at work in the phenomenon in 2019. And they found one extra factor underlying the Cheerios effect: The disks tilted toward each other as they drifted closer in the water. So the disks pushed harder against the water’s surface, resulting in a pushback from the liquid. That’s what leads to an increase in the attraction between the two disks.

Cheerios effect inspires novel robot design Read More »

people-will-share-misinformation-that-sparks-“moral-outrage”

People will share misinformation that sparks “moral outrage”


People can tell it’s not true, but if they’re outraged by it, they’ll share anyway.

Rob Bauer, the chair of a NATO military committee, reportedly said, “It is more competent not to wait, but to hit launchers in Russia in case Russia attacks us. We must strike first.” These comments, supposedly made in 2024, were later interpreted as suggesting NATO should attempt a preemptive strike against Russia, an idea that lots of people found outrageously dangerous.

But lots of people also missed a thing about the quote: Bauer has never said it. It was made up. Despite that, the purported statement got nearly 250,000 views on X and was mindlessly spread further by the likes of Alex Jones.

Why do stories like this get so many views and shares? “The vast majority of misinformation studies assume people want to be accurate, but certain things distract them,” says William J. Brady, a researcher at Northwestern University. “Maybe it’s the social media environment. Maybe they’re not understanding the news, or the sources are confusing them. But what we found is that when content evokes outrage, people are consistently sharing it without even clicking into the article.” Brady co-authored a study on how misinformation exploits outrage to spread online. When we get outraged, the study suggests, we simply care way less if what’s got us outraged is even real.

Tracking the outrage

The rapid spread of misinformation on social media has generally been explained by something you might call an error theory—the idea that people share misinformation by mistake. Based on that, most solutions to the misinformation issue relied on prompting users to focus on accuracy and think carefully about whether they really wanted to share stories from dubious sources. Those prompts, however, haven’t worked very well. To get to the root of the problem, Brady’s team analyzed data that tracked over 1 million links on Facebook and nearly 45,000 posts on Twitter from different periods ranging from 2017 to 2021.

Parsing through the Twitter data, the team used a machine-learning model to predict which posts would cause outrage. “It was trained on 26,000 tweets posted around 2018 and 2019. We got raters from across the political spectrum, we taught them what we meant by outrage, and got them to label the data we later used to train our model,” Brady says.

The purpose of the model was to predict whether a message was an expression of moral outrage, an emotional state defined in the study as “a mixture of anger and disgust triggered by perceived moral transgressions.” After training, the AI was effective. “It performed as good as humans,” Brady claims. Facebook data was a bit more tricky because the team did not have access to comments; all they had to work with were reactions. The reaction the team chose as a proxy for outrage was anger. Once the data was sorted into outrageous and not outrageous categories, Brady and his colleagues went on to determine whether the content was trustworthy news or misinformation.

“We took what is now the most widely used approach in the science of misinformation, which is a domain classification approach,” Brady says. The process boiled down to compiling a list of domains with very high and very low trustworthiness based on work done by fact-checking organizations. This way, for example, The Chicago Sun-Times was classified as trustworthy; Breitbart, not so much. “One of the issues there is that you could have a source that produces misinformation which one time produced a true story. We accepted that. We went with statistics and general rules,” Brady acknowledged. His team confirmed that sources classified in the study as misinformation produced news that was fact-checked as false six to eight times more often than reliable domains, which Brady’s team thought was good enough to work with.

Finally, the researchers started analyzing the data to answer questions like whether misinformation sources evoke more outrage, whether outrageous news was shared more often than non-outrageous news, and finally, what reasons people had for sharing outrageous content. And that’s when the idealized picture of honest, truthful citizens who shared misinformation just because they were too distracted to recognize it started to crack.

Going with the flow

The Facebook and Twitter data analyzed by Brady’s team revealed that misinformation evoked more outrage than trustworthy news. At the same time, people were way more likely to share outrageous content, regardless of whether it was misinformation or not. Putting those two trends together led the team to conclude outrage primarily boosted the spread of fake news since reliable sources usually produced less outrageous content.

“What we know about human psychology is that our attention is drawn to things rooted in deep biases shaped by evolutionary history,” Brady says. Those things are emotional content, surprising content, and especially, content that is related to the domain of morality. “Moral outrage is expressed in response to perceived violations of moral norms. This is our way of signaling to others that the violation has occurred and that we should punish the violators. This is done to establish cooperation in the group,” Brady explains.

This is why outrageous content has an advantage in the social media attention economy. It stands out, and standing out is a precursor to sharing. But there are other reasons we share outrageous content. “It serves very particular social functions,” Brady says. “It’s a cheap way to signal group affiliation or commitment.”

Cheap, however, didn’t mean completely free. The team found that the penalty for sharing misinformation, outrageous or not, was loss of reputation—spewing nonsense doesn’t make you look good, after all. The question was whether people really shared fake news because they failed to identify it as such or if they just considered signaling their affiliation was more important.

Flawed human nature

Brady’s team designed two behavioral experiments where 1,475 people were presented with a selection of fact-checked news stories curated to contain outrageous and not outrageous content; they were also given reliable news and misinformation. In both experiments, the participants were asked to rate how outrageous the headlines were.

The second task was different, though. In the first experiment, people were simply asked to rate how likely they were to share a headline, while in the second they were asked to determine if the headline was true or not.

It turned out that most people could discern between true and fake news. Yet they were willing to share outrageous news regardless of whether it was true or not—a result that was in line with previous findings from Facebook and Twitter data. Many participants were perfectly OK with sharing outrageous headlines, even though they were fully aware those headlines were misinformation.

Brady pointed to an example from the recent campaign, when a reporter pushed J.D. Vance about false claims regarding immigrants eating pets. “When the reporter pushed him, he implied that yes, it was fabrication, but it was outrageous and spoke to the issues his constituents were mad about,” Brady says. These experiments show that this kind of dishonesty is not exclusive to politicians running for office—people do this on social media all the time.

The urge to signal a moral stance quite often takes precedence over truth, but misinformation is not exclusively due to flaws in human nature. “One thing this study was not focused on was the impact of social media algorithms,” Brady notes. Those algorithms usually boost content that generates engagement, and we tend to engage more with outrageous content. This, in turn, incentivizes people to make their content more outrageous to get this algorithmic boost.

Science, 2024.  DOI: 10.1126/science.adl2829

Photo of Jacek Krywko

Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.

People will share misinformation that sparks “moral outrage” Read More »

company-claims-1,000-percent-price-hike-drove-it-from-vmware-to-open-source-rival

Company claims 1,000 percent price hike drove it from VMware to open source rival

Companies have been discussing migrating off of VMware since Broadcom’s takeover a year ago led to higher costs and other controversial changes. Now we have an inside look at one of the larger customers that recently made the move.

According to a report from The Register today, Beeks Group, a cloud operator headquartered in the United Kingdom, has moved most of its 20,000-plus virtual machines (VMs) off VMware and to OpenNebula, an open source cloud and edge computing platform. Beeks Group sells virtual private servers and bare metal servers to financial service providers. It still has some VMware VMs, but “the majority” of its machines are currently on OpenNebula, The Register reported.

Beeks’ head of production management, Matthew Cretney, said that one of the reasons for Beeks’ migration was a VMware bill for “10 times the sum it previously paid for software licenses,” per The Register.

According to Beeks, OpenNebula has enabled the company to dedicate more of its 3,000 bare metal server fleet to client loads instead of to VM management, as it had to with VMware. With OpenNebula purportedly requiring less management overhead, Beeks is reporting a 200 percent increase in VM efficiency since it now has more VMs on each server.

Beeks also pointed to customers viewing VMware as non-essential and a decline in VMware support services and innovation as drivers for it migrating from VMware.

Broadcom didn’t respond to Ars Technica’s request for comment.

Broadcom loses VMware customers

Broadcom will likely continue seeing some of VMware’s older customers decrease or abandon reliance on VMware offerings. But Broadcom has emphasized the financial success it has seen (PDF) from its VMware acquisition, suggesting that it will continue with its strategy even at the risk of losing some business.

Company claims 1,000 percent price hike drove it from VMware to open source rival Read More »

vintage-digicams-aren’t-just-a-fad-they’re-an-artistic-statement.

Vintage digicams aren’t just a fad. They’re an artistic statement.


In the age of AI images, some photographers are embracing the quirky flaws of vintage digital cameras.

Spanish director Isabel Coixet films with a digicam on the red carpet ahead of the premiere of the film “The International” on the opening night of the 59th Berlinale Film Festival in Berlin in 2009. Credit: JOHN MACDOUGALL/AFP via Getty Images

Today’s young adults grew up in a time when their childhoods were documented with smartphone cameras instead of dedicated digital or film cameras. It’s not surprising that, perhaps as a reaction to the ubiquity of the phone, some young creative photographers are leaving their handsets in their pockets in favor of compact point-and-shoot digital cameras—the very type that camera manufacturers are actively discontinuing.

Much of the buzz among this creative class has centered around premium, chic models like the Fujifilm X100 and Ricoh GR, or for the self-anointed “digicam girlies” on TikTok, zoom point-and-shoots like the Canon PowerShot G7 and Sony RX100 models, which can be great for selfies.

But other shutterbugs are reaching back into the past 20 years or more to add a vintage “Y2K aesthetic” to their work. The MySpace look is strong with a lot of photographers shooting with authentic early-2000s “digicams,” aiming their cameras—flashes a-blazing—at their friends and capturing washed-out, low-resolution, grainy photos that look a whole lot like 2003.

Wired logo

“It’s so wild to me cause I’m an elder millennial,” says Ali O’Keefe, who runs the photography channel Two Months One Camera on YouTube. “My childhood is captured on film … but for [young people], theirs were probably all captured on, like, Canon SD1000s,” she says, referencing a popular mid-aughts point-and-shoot.

It’s not just the retro sensibility they’re after, but also a bit of cool cred. Everyone from Ayo Edibiri to Kendall Jenner is helping fuel digicam fever by publicly taking snaps with a vintage pocket camera.

The rise of the vintage digicam marks at least the second major nostalgia boom in the photography space. More than 15 years ago, a film resurgence brought thousands of cameras from the 1970s and ’80s out of closets and into handbags and backpacks. Companies like Impossible Project and Film Ferrania started up production of Polaroid-compatible and 35-mm film, respectively, firing up manufacturing equipment that otherwise would have been headed to the scrap heap. Traditional film companies like Kodak and Ilford have seen sales skyrocket. Unfortunately, the price of film stock also increased significantly, with film processing also getting more costly. (Getting a roll developed and digitally scanned now typically costs between $15 and $20.)

For those seeking to experiment with their photography, there’s an appeal to using a cheap, old digital model they can shoot with until it stops working. The results are often imperfect, but since the camera is digital, a photographer can mess around and get instant gratification. And for everyone in the vintage digital movement, the fact that the images from these old digicams are worse than those from a smartphone is a feature, not a bug.

What’s a digicam?

One of the biggest points of contention among enthusiasts is the definition of “digicam.” For some, any old digital camera falls under the banner, while other photographers have limited the term’s scope to a specific vintage or type. Sofia Lee, photographer and co-founder of the online community digicam.love, has narrowed her definition over time.

“There’s a separation between what I define as a tool that I will be using in my artistic practice versus what the community at large would consider to be culturally acceptable, like at a meetup,” Lee stated. “I started off looking at any digital camera I could get my hands on. But increasingly I’m focused more on the early 2000s. And actually, I actually keep getting earlier and earlier … I would say from 2000 to 2003 or 2004 maybe.”

Lee has found that she’s best served by funky old point-and-shoot cameras, and doesn’t use old digital single-lens reflex cameras, which can deliver higher quality images comparable to today’s equipment. Lee says DSLR images are “too clean, too crisp, too nice” for her work. “When I’m picking a camera, I’m looking for a certain kind of noise, a certain kind of character to them that can’t be reproduced through filters or editing, or some other process,” Lee says. Her all-time favorite model is a forgotten camera from 2001, the Kyocera Finecam S3. A contemporary review gave the model a failing grade, citing its reliance on the then-uncommon SD memory card format, along with its propensity to turn out soft photos lacking in detail.

“It’s easier to say what isn’t a digicam, like DSLRs or cameras with interchangeable lenses,” says Zuzanna Neupauer, a digicam user and member of digicam.love. But the definition gets even narrower from there. “I personally won’t use any new models, and I restrict myself to digicams made before 2010,” Neupauer says.

Not everyone is as partisan. Popular creators Ali O’Keefe and James Warner both cover interchangeable lens cameras from the 2000s extensively on their YouTube channels, focusing on vintage digital equipment, relishing in devices with quirky designs or those that represent evolutionary dead-ends. Everything from Sigma’s boxy cameras with exotic sensors to Olympus’ weird, early DSLRs based on a short-lived lens system get attention in their videos. It’s clear that although many vintage enthusiasts prefer the simple, compact nature of a point-and-shoot camera, the overall digicam trend has increased interest in digital imaging’s many forms.

Digital archeology

The digital photography revolution that occurred around the turn of the century saw a Cambrian explosion of different types and designs of cameras. Sony experimented with swiveling two-handers that could be science fiction zap guns, and had cameras that wrote JPEGs to floppy disks and CDs. Minolta created modular cameras that could be decoupled, the optics tethered to the LCD body with a cord, like photographic nunchaku. “There are a lot of brands that are much less well known,” says Lee. “And in the early 2000s in particular, it was really like the Wild West.”

Today’s enthusiasts spelunking into the digital past are encountering challenges related to the passage of time, with some brands no longer offering firmware updates, drivers, or PDF copies of manuals for these old models. In many cases, product news and reviews sites are the only reminder that some cameras ever existed. But many of those sites have fallen off the internet entirely.

“Steve’s Digicams went offline,” says O’Keefe in reference to the popular camera news website that went offline after the founder, Steve Sanders, died in 2017. “It was tragic because it had so much information.”

“Our interests naturally align with archaeology,” says Sofia Lee. “A lot of us were around when the cameras were made. But there were a number of events in the history of digicams where an entire line of cameras just massively died off. That’s something that we are constantly confronted with.”

Hocus focus

YouTubers like Warner and O’Keefe helped raise interest in cameras with Charged-Coupled Device technology, an older type of imaging sensor that fell out of use around 2010. CCD-based cameras have developed a cult following, and certain models have retained their value surprisingly well for their age. Fans liken the results of CCD captures to shooting film without the associated hassle or cost. While the digicam faithful have shown that older cameras can yield pleasing results, there’s no guaranteed “CCD magic” sprinkled on those photos.

“[I] think I’ve maybe unfortunately been one of the ones to make it sound like CCD sensors in and of themselves are making the colors different,” says Warner, who makes classic digital camera videos on his channel Snappiness.

“CCDs differ from [newer] CMOS sensors in the layout of their electronics but at heart they’re both made up of photosensitive squares of silicon behind a series of color filters from which color information about the scene can be derived,” says Richard Butler, managing editor at DPReview. (Disclosure: I worked at DPReview as a part-time editor in 2022 and 2023.) DPReview, in its 25th year, is a valuable library of information about old digital cameras, and an asset to vintage digital obsessives.

“I find it hard to think of CCD images as filmlike, but it’s fair to say that the images of cameras from that time may have had a distinct aesthetic,” Butler says. “As soon as you have an aesthetic with which an era was captured, there’s a nostalgia about that look. It’s fair to say that early digital cameras inadvertently defined the appearance of contemporary photos.”

There’s one area where old CCD sensors can show a difference: They don’t capture as much light and dark information as other types of sensors, and therefore the resulting images can have less detail in the shadows and highlights. A careful photographer can get contrasty, vibrant images with a different, yet still digital, vibe. Digicam photographer Jermo Swaab says he prefers “contrasty scenes and crushed blacks … I yearn for images that look like a memory or retro-futuristic dream.”

Modern photographs, by default, are super sharp, artificially vibrant, with high dynamic range that makes the image pop off the screen. In order to get the most out of a tiny sensor and lens, smartphones put shots through a computationally intense pipeline of automated editing, quickly combining multiple captures to extract every fine detail possible, and eradicate pesky noise. Digital cameras shoot a single image at a time by default. Especially with older, lower resolution digital cameras, this can give images a noisier, dreamier appearance that digicam fans love.

“If you take a picture with your smartphone, it’s automatically HDR. And we’re just used to that today but that’s not at all how cameras have worked in the past,” Warner says. Ali O’Keefe agrees, saying that “especially as we lean more and more into AI where everything is super polished to the point of hyperreal, digicams are crappy, and the artifacts and the noise and the lens imperfections give you something that is not replicable.”

Lee also is chasing unique, noisy photos from compact cameras with small sensors: “I actually always shoot at max ISO, which is the opposite of how I think people shot their cameras back in the day. I’m curious about finding the undesirable aspects of it and [getting] aesthetic inspiration from the undesirable aspects of a camera.”

Her favorite Kyocera camera is known for its high-quality build and noisy pics. She describes it as ”all metal, like a briefcase,” of the sort that Arnold Schwarzenegger carries in Total Recall. “These cameras are considered legendary in the experimental scene,” she says of the Kyocera. “The unique thing about the Finecam S3 is that it produces a diagonal noise pattern.”

A time to buy, a time to sell

The gold rush for vintage digital gear has, unsurprisingly, led to rising prices on the resale market. What was once a niche for oddballs and collectors has become a potential goldmine, driven by all that social media hype.

“The joke is that when someone makes a video about a camera, the price jumps,” says Warner. “I’ve actually tracked that using eBay’s TerraPeak sale monitoring tool where you can see the history of up to two years of sales for a certain search query. There’s definitely strong correlation to a [YouTube] video’s release and the price of that item going up on eBay in certain situations.”

“It is kind of amazing how hard it is to find things now,” laments says O’Keefe. “I used to be able to buy [Panasonic] LX3s, one of my favorite point and shoots of all time, a dime a dozen. Now they’re like 200 bucks if you can find a working one.”

O’Keefe says she frequently interacts with social media users who went online looking for their dream camera only to have gotten scammed. “A person who messaged me this morning was just devastated,” she says. “Scams are rampant now because they’ve picked up on this market being sort of a zeitgeist thing.” She recommends sticking with sellers on platforms that have clear protections in place for dealing with scams and fraud, like eBay. “I have never had an issue getting refunded when the item didn’t work.”

Even when dealing with a trustworthy seller, vintage digital camera collecting is not for the faint of heart. “If I’m interested in a camera, I make sure that the batteries are still made because some are no longer in production,” says O’Keefe. She warns that even if a used camera comes with its original batteries, those cells will most likely not hold a charge.

When there are no new batteries to be had, Sofia Lee and her cohort have resuscitated vintage cameras using modern tech: “With our Kyoceras, one of the biggest issues is the batteries are no longer in production and they all die really quickly. What we ended up doing is using 5V DC cables that connect them to USB, then we shoot them tethered to a power bank. So if you see someone shooting with a Kyocera, they’re almost always holding the power bank and a digicam in their other hand.”

And then there’s the question of where to store all those JPEGs. “A lot of people don’t think about memory card format, so that can get tricky,” cautions Warner. Many vintage cameras use the CompactFlash format, and those are still widely supported. But just as many digicams use deprecated storage formats like Olympus’s xD or Sony’s MemoryStick. ”They don’t make those cards anymore,” Warner says. “Some of them have adapters you can use but some [cameras] don’t work with the adapters.”

Even if the batteries and memory cards get sorted out, Sofia Lee underscores that every piece of vintage equipment has an expiration date. “There is this looming threat, when it comes to digicams—this is a finite resource.” Like with any other vintage tech, over time, capacitors go bad, gears break, sensors corrode, and, in some circumstances, rubber grips devulcanize back into a sticky goo.

Lee’s beloved Kyoceras are one such victim of the ravages of time. “I’ve had 15 copies pass through my hands. Around 11 of them were dead on arrival, and three died within a year. That means I have one left right now. It’s basically a special occasions-only camera, because I just never know when it’s going to die.”

These photographers have learned that it’s sometimes better to move on from a potential ticking time bomb, especially if the device is still in demand. O’Keefe points to the Epson R-D1 as an example. This digital rangefinder from printer-maker Epson, with gauges on the top made by Epson’s watchmaking arm Seiko, was originally sold as a Leica alternative, but now it fetches Leica-like premium prices. “I actually sold mine a year and a half ago,” she says. “I loved it, it was beautiful. But there’s a point for me, where I can see that this thing is certainly going to die, probably in the next five years. So I did sell that one, but it is such an awesome experience to shoot. Cause what other digital camera has a lever that actually winds the shutter?”

#NoBadCameras

For a group of people with a recent influx of newbies, the digicam community seems to be adjusting well. Sofia Lee says the growing popularity of digicams is an opportunity to meet new collaborators in a field where it used to be hard to connect with like-minded folks. “I love that there are more people interested in this, because when I was first getting into it I was considered totally crazy,” she says.

Despite the definition of digicam morphing to include a wider array of cameras, Lee seems to be accepting of all comers. “I’m rather permissive in allowing people to explore what they consider is right,” says Lee. While not every camera is “right” for every photographer, many of them agree on one thing: Resurrecting used equipment is a win for the planet, and a way to resist the constant upgrade churn of consumer technology.

“It’s interesting to look at what is considered obsolete,” Lee says. “From a carbon standpoint, the biggest footprint is at the moment of manufacture, which means that every piece of technology has this unfulfilled potential.” O’Keefe agrees: “I love it from an environmental perspective. Do we really need to drive waste [by releasing] a new camera every few months?”

For James Warner, part of the appeal is using lower-cost equipment that more people can afford. And with that lower cost of entry comes easier access to the larger creator community. “With some clubs you’re not invited if you don’t have the nice stuff,” he says. “But they feel welcome and like they can participate in photography on a budget.”

O’Keefe has even coined the hashtag #NoBadCameras. She believes all digicams have unique characteristics, and that if a curious photographer just takes the time to get to know the device, it can deliver good results. “Don’t be precious about it,” she says. “Just pick something up, shoot it, and have fun.”

This story originally appeared on wired.com.

Photo of WIRED

Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.

Vintage digicams aren’t just a fad. They’re an artistic statement. Read More »