Author name: Ari B

Searching for a female partner for the world’s “loneliest” plant

getting no help from dating apps —

AI assists in the search for one threatened plant species.

Map from a drone mission searching for Encephalartos woodii in the Ngoye Forest in South Africa.

“Surely this is the most solitary organism in the world,” wrote paleontologist Richard Fortey in his book about the evolution of life.

He was talking about Encephalartos woodii (E. woodii), a plant from South Africa. E. woodii is a member of the cycad family, heavy plants with thick trunks and large stiff leaves that form a majestic crown. These resilient survivors have outlasted dinosaurs and multiple mass extinctions. Once widespread, they are today one of the most threatened species on the planet.

The only known wild E. woodii was discovered in 1895 by the botanist John Medley Wood while he was on a botanical expedition in the Ngoye Forest in South Africa. He searched the vicinity for others, but none could be found. Over the next couple of decades, botanists removed stems and offshoots and cultivated them in gardens.

Fearing that the final stem would be destroyed, the Forestry Department removed it from the wild in 1916 for safekeeping in a protective enclosure in Pretoria, South Africa, making it extinct in the wild. The plant has since been propagated worldwide. However, the E. woodii faces an existential crisis. All the plants are clones from the Ngoye specimen. They are all males, and without a female, natural reproduction is impossible. E. woodii’s story is one of both survival and solitude.

My team’s research was inspired by the dilemma of the lonely plant and the possibility that a female may still be out there. Our research involves using remote sensing technologies and artificial intelligence to assist in our search for a female in the Ngoye Forest.

The evolutionary journey of cycads

Cycads are among the oldest surviving plant groups alive today and are often referred to as “living fossils” or “dinosaur plants” due to their evolutionary history dating back to the Carboniferous period, approximately 300 million years ago. During the Mesozoic era (250-66 million years ago), also known as the Age of Cycads, these plants were ubiquitous, thriving in the warm, humid climates that characterised the period.

Although they resemble ferns or palms, cycads are not related to either. Cycads are gymnosperms, a group that includes conifers and ginkgos. Unlike flowering plants (angiosperms), cycads reproduce using cones. It is impossible to tell males and females apart until they mature and produce their magnificent cones.

Female cones are typically wide and round, and male cones appear elongated and narrower. The male cones produce pollen, which is carried by insects (weevils) to the female cones. This ancient method of reproduction has remained largely unchanged for millions of years.

Despite their longevity, cycads today rank among the most endangered living organisms on Earth, with the majority of species considered threatened with extinction. This is because of their slow growth and reproductive cycles, typically taking 10 to 20 years to mature, and habitat loss due to deforestation, grazing, and over-collection. Cycads have become symbols of botanical rarity.

Their striking appearance and ancient lineage make them popular in exotic ornamental horticulture, and that has led to illegal trade. Rare cycads can command exorbitant prices, from $620 (£495) per centimeter, with some specimens selling for millions of pounds each. The poaching of cycads is a threat to their survival.

Among the most valuable species is the E. woodii. It is protected in botanical gardens with security measures such as alarmed cages designed to deter poachers.

AI in the sky

In our search to find a female E. woodii, we have used innovative technologies to explore areas of the forest from a vertical vantage point. In 2022 and 2024, our drone surveys covered an area of 195 acres (about 148 football fields), creating detailed maps from thousands of photos taken by the drones. It’s still a small portion of the Ngoye Forest, which covers 10,000 acres.

An example of the still images used to train the AI software.

Our AI system enhanced the efficiency and accuracy of these searches. As E. woodii is considered extinct in the wild, synthetic images were used in the AI model’s training to improve its ability, via an image recognition algorithm, to recognise cycads by shape in different ecological contexts.
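The article doesn’t detail the model itself, but the core idea, training a recognizer on synthetic examples of a plant that cannot be photographed in the wild, can be sketched with a toy classifier. Everything here is illustrative: the two shape features, the class centers, and the nearest-centroid approach stand in for whatever image-recognition pipeline the team actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: each "image" is reduced to two hypothetical shape
# features, e.g. crown roundness and frond-length ratio.
def synthesize(center, n=200):
    return rng.normal(center, 0.1, size=(n, 2))

# Synthetic training data stands in for real E. woodii photos,
# which don't exist in the wild.
cycad_like = synthesize([0.8, 0.6])
other_vegetation = synthesize([0.3, 0.4])

X = np.vstack([cycad_like, other_vegetation])
y = np.array([0] * len(other_vegetation) + [1] * len(cycad_like))
y = np.array([1] * len(cycad_like) + [0] * len(other_vegetation))

# Nearest-centroid classifier: label a new feature vector by
# whichever class mean it is closer to.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    return int(np.argmin(np.linalg.norm(centroids - np.asarray(x), axis=1)))

print(predict([0.75, 0.62]))  # → 1: near the cycad-like centroid
```

A real system would train a convolutional detector on rendered or augmented cycad imagery, but the principle is the same: the model only ever sees stand-ins for the thing it is meant to find.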

Plant species globally are disappearing at an alarming rate. Since all existing E. woodii specimens are clones, their potential for genetic diversity in the face of environmental change and disease is limited.

Notable examples include the Great Famine in 1840s Ireland, where the uniformity of cloned potatoes worsened the crisis, and the vulnerability of clonal Cavendish bananas to Panama disease, which threatens their production as it did with the Gros Michel banana in the 1950s.

Finding a female would mean E. woodii is no longer at the brink of extinction and could revive the species. A female would allow for sexual reproduction, bring in genetic diversity, and signify a breakthrough in conservation efforts.

E. woodii is a sobering reminder of the fragility of life on Earth. But our quest to discover a female E. woodii shows there is hope even for the most endangered species if we act fast enough.

Laura Cinti, Research Fellow in bio art & plant behavior, University of Southampton. This article is republished from The Conversation under a Creative Commons license. Read the original article.

How do brainless creatures control their appetites?

Feed me! —

Separate systems register when the animals have eaten and control feeding behaviors.

Image of a greenish creature with a long stalk and tentacles, against a black background.

The hydra is a Lovecraftian-looking microorganism with a mouth surrounded by tentacles on one end, an elongated body, and a foot on the other end. It has no brain or centralized nervous system. Despite the lack of either of those things, it can still feel hunger and fullness. How can these creatures know when they are hungry and realize when they have had enough?

While they lack brains, hydra do have a nervous system. Researchers from Kiel University in Germany found they have an endodermal (in the digestive tract) and ectodermal (in the outermost layer of the animal) neuronal population, both of which help them react to food stimuli. Ectodermal neurons control physiological functions such as moving toward food, while endodermal neurons are associated with feeding behavior such as opening the mouth—which also vomits out anything indigestible.

Even such a limited nervous system is capable of some surprisingly complex functions. Hydras might even give us some insights into how appetite evolved and what the early evolutionary stages of a central nervous system were like.

No, thanks, I’m full

Before finding out how the hydra’s nervous system controls hunger, the researchers focused on what causes the strongest feeling of satiety, or fullness, in the animals. They were fed with the brine shrimp Artemia salina, which is among their usual prey, and exposed to the antioxidant glutathione. Previous studies have suggested that glutathione triggers feeding behavior in hydras, causing them to curl their tentacles toward their mouths as if they are swallowing prey.

Hydra fed with as much Artemia as they could eat were given glutathione afterward, while the other group was given only glutathione and no actual food. Hunger was gauged by how fast and how often they opened their mouths.

It turned out that the first group, which had already glutted themselves on shrimp, showed hardly any response to glutathione eight hours after being fed. Their mouths barely opened—and slowly if so—because they were not hungry enough for even a feeding trigger like glutathione to make them feel they needed seconds.

It was only at 14 hours post-feeding that the hydra that had eaten shrimp opened their mouths wide enough and fast enough to indicate hunger. However, those that were not fed and only exposed to glutathione started showing signs of hunger only four hours after exposure. Mouth opening was not the only behavior provoked by hunger since starved animals also somersaulted through the water and moved toward light, behaviors associated with searching for food. Sated animals would stop somersaulting and cling to the wall of the tank they were in until they were hungry again.

Food on the “brain”

After observing the behavioral changes in the hydra, the research team looked into the neuronal activity behind those behaviors. They focused on two neuronal populations, the ectodermal population known as N3 and the endodermal population known as N4, both known to be involved in hunger and satiety. While these had been known to influence hydra feeding responses, how exactly they were involved was unknown until now.

Hydra have N3 neurons all over their bodies, especially in the foot. Signals from these neurons tell the animal that it has eaten enough and is experiencing satiety. The frequency of these signals decreased as the animals grew hungrier and displayed more behaviors associated with hunger. The frequency of N3 signals did not change in animals that were only exposed to glutathione and not fed, and these hydra behaved just like animals that had gone without food for an extended period of time. It was only when they were given actual food that the N3 signal frequency increased.

“The ectodermal neuronal population N3 is not only responding to satiety by increasing neuronal activity, but is also controlling behaviors that changed due to feeding,” the researchers said in their study, which was recently published in Cell Reports.

Though N4 neurons were only seen to communicate indirectly with the N3 population in the presence of food, they were found to influence eating behavior by regulating how wide the hydras opened their mouths and how long they kept them open. A lower frequency of N4 signals was seen in hydra that were starved or only exposed to glutathione. A higher frequency of N4 signals was associated with the animals keeping their mouths shut.

So, what can the neuronal activity of a tiny, brainless creature possibly tell us about the evolution of our own complex brains?

The researchers think the hydra’s simple nervous system may parallel the much more complex central and enteric (in the gut) nervous systems that we have. While N3 and N4 operate independently, there is still some interaction between them. The team also suggests that the way N4 regulates the hydra’s eating behavior is similar to the way the digestive tracts of mammals are regulated.

“A similar architecture of neuronal circuits controlling appetite/satiety can be also found in mice where enteric neurons, together with the central nervous system, control mouth opening,” they said in the same study.

Maybe, in a way, we really do think with our gut.

Cell Reports, 2024. DOI: 10.1016/j.celrep.2024.114210

Blue Origin joins SpaceX and ULA in new round of military launch contracts

Playing with the big boys —

“Lane 1 serves our commercial-like missions that can accept more risk.”

Blue Origin's New Glenn rocket on the launch pad for testing earlier this year.

After years of lobbying, protests, and bidding, Jeff Bezos’s space company is now a military launch contractor.

The US Space Force announced Thursday that Blue Origin will compete with United Launch Alliance and SpaceX for at least 30 military launch contracts over the next five years. These launch contracts have a combined value of up to $5.6 billion.

This is the first of two major contract decisions the Space Force will make this year as the military seeks to foster more competition among its roster of launch providers and reduce its reliance on just one or two companies.

For more than a decade following its formation from the merger of Boeing and Lockheed Martin rocket programs, ULA was the sole company certified to launch the military’s most critical satellites. This changed in 2018, when SpaceX started launching national security satellites for the military. In 2020, despite protests from Blue Origin seeking eligibility, the Pentagon selected ULA and SpaceX to continue sharing launch duties.

The National Security Space Launch (NSSL) program is in charge of selecting contractors to deliver military surveillance, navigation, and communications satellites into orbit.

Over the next five years, the Space Force wants to tap into new launch capabilities from emerging space companies. The procurement approach for this new round of contracts, known as NSSL Phase 3, is different from the way the military previously bought launch services. Instead of grouping all national security launches into one monolithic contract, the Space Force is dividing them into two classifications: Lane 1 and Lane 2.

The Space Force’s contract announced Thursday was for Lane 1, which is for less demanding missions to low-Earth orbit. These missions include smaller tech demos, experiments, and launches for the military’s new constellation of missile-tracking and data-relay satellites, an effort that will eventually include hundreds or thousands of spacecraft managed by the Pentagon’s Space Development Agency.

This fall, the Space Force will award up to three contracts for Lane 2, which covers the government’s most sensitive national security satellites, which require “complex security and integration requirements.” These are often large, heavy spacecraft weighing many tons and sometimes needing to go to orbits thousands of miles from Earth. The Space Force will require Lane 2 contractors to go through a more extensive certification process than is required in Lane 1.

“Today marks the beginning of this innovative, dual-lane approach to launch service acquisition, whereby Lane 1 serves our commercial-like missions that can accept more risk and Lane 2 provides our traditional, full mission assurance for the most stressing heavy-lift launches of our most risk-averse missions,” said Frank Calvelli, assistant secretary of the Air Force for space acquisition and integration.

Meeting the criteria

The Space Force received seven bids for Lane 1, but only three companies met the criteria to join the military’s roster of launch providers. The basic requirement to win a Lane 1 contract was for a company to show its rocket can place at least 15,000 pounds of payload mass into low-Earth orbit, either on a single flight or over a series of flights within a 90-day period.
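The Space Force’s evaluation mechanics aren’t public, but the stated threshold, 15,000 pounds to low-Earth orbit in a single flight or across flights within a 90-day window, is simple enough to express directly. The function name and the sample flight data below are invented for illustration:

```python
# Illustrative check of the Lane 1 criterion described above:
# at least 15,000 lb to LEO, either in one flight or across
# flights inside some 90-day window.
def meets_lane1(flights, threshold=15000, window_days=90):
    """flights: list of (day, pounds_to_leo) tuples."""
    flights = sorted(flights)
    for start_day, _ in flights:
        total = sum(lbs for day, lbs in flights
                    if start_day <= day < start_day + window_days)
        if total >= threshold:
            return True
    return False

print(meets_lane1([(0, 16000)]))                         # True: one flight
print(meets_lane1([(0, 6000), (30, 6000), (60, 4000)]))  # True: 16,000 lb in 90 days
print(meets_lane1([(0, 6000), (120, 6000)]))             # False: windows never overlap
```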

The bidders also had to substantiate their plan to launch the rocket they proposed to use for Lane 1 missions by December 15 of this year. A spokesperson for Space Systems Command said SpaceX proposed using its Falcon 9 and Falcon Heavy rockets, and ULA offered its Vulcan rocket. Those launchers are already flying. Blue Origin proposed its heavy-lift New Glenn rocket, slated for an inaugural test flight no earlier than September.

“As we anticipated, the pool of awardees is small this year because many companies are still maturing their launch capabilities,” said Brig. Gen. Kristin Panzenhagen, program executive officer for the Space Force’s assured access to space division. “Our strategy accounted for this by allowing on-ramp opportunities every year, and we expect increasing competition and diversity as new providers and systems complete development.”

A SpaceX Falcon Heavy rocket lifts off from NASA's Kennedy Space Center in Florida.

Trevor Mahlmann/Ars Technica

The Space Force plans to open up the first on-ramp opportunity for Lane 1 as soon as the end of this year. Companies with medium-lift rockets in earlier stages of development, such as Rocket Lab, Relativity Space, Firefly Aerospace, and Stoke Space, will have the chance to join ULA, SpaceX, and Blue Origin in the Lane 1 pool at that time. The structure of the NSSL Phase 3 contracts allows the Pentagon to take advantage of emerging launch capabilities as soon as they become available, according to Calvelli.

In a statement, Panzenhagen said having additional launch providers will increase the Space Force’s “resiliency” in a time of increasing competition between the US, Russia, and China in orbit. “Launching more risk-tolerant satellites on potentially less mature launch systems using tailored independent government mission assurance could yield substantial operational responsiveness, innovation, and savings,” Panzenhagen said.

More competition, theoretically, will also deliver lower launch prices to the Space Force. SpaceX and Blue Origin rockets are partially reusable, while ULA eventually plans to recover and reuse Vulcan main engines.

Over the next five years, Space Systems Command will dole out fixed-price “task orders” to ULA, SpaceX, and Blue Origin for groups of Lane 1 missions. The first batch of missions up for awards in Lane 1 include seven launches for the Space Development Agency’s missile tracking mega-constellation, plus a task order for the National Reconnaissance Office, the government’s spy satellite agency. However, military officials require a rocket to have completed at least one successful orbital launch to win a Lane 1 task order, and Blue Origin’s New Glenn doesn’t yet satisfy this requirement.

The Space Force will pay Blue Origin $5 million for an “initial capabilities assessment” for Lane 1. SpaceX and ULA, the military’s incumbent launch contractors, will each receive $1.5 million for similar assessments.

ULA, SpaceX, and Blue Origin are also the top contenders to win Lane 2 contracts later this year. In order to compete in Lane 2, a launch provider must show it has a plan for its rockets to meet the Space Force’s stringent certification requirements by October 1, 2026. SpaceX’s Falcon 9 and Falcon Heavy are already certified, and ULA’s Vulcan is on a path to achieve this milestone by the end of this year, pending a successful second test flight in the next few months. A successful debut of New Glenn by the end of this year would put the October 2026 deadline within reach of Blue Origin.

This photo got 3rd in an AI art contest—then its human photographer came forward

Say cheese —

Humans pretending to be machines isn’t exactly a victory for the creative spirit.

To be fair, I wouldn't put it past an AI model to forget the flamingo's head.

A juried photography contest has disqualified one of the images that was originally picked as a top three finisher in its new AI art category. The reason for the disqualification? The photo was actually taken by a human and not generated by an AI model.

The 1839 Awards launched last year as a way to “honor photography as an art form,” with a panel of experienced judges who work with photos at The New York Times, Christie’s, and Getty Images, among others. The contest rules sought to segregate AI images into their own category as a way to separate out the work of increasingly impressive image generators from “those who use the camera as their artistic medium,” as the 1839 Awards site puts it.

For the non-AI categories, the 1839 Awards rules note that they “reserve the right to request proof of the image not being generated by AI as well as for proof of ownership of the original files.” Apparently, though, the awards did not request any corresponding proof that submissions in the AI category were generated by AI.

The 1839 Awards winners page for the “AI” category, before Astray’s photo was disqualified.

Because of this, the photographer, who goes by the pen name Miles Astray, was able to enter his photo “F L A M I N G O N E” into that AI-generated category, where it was shortlisted and then picked for third place over plenty of other entries that were not made by a human holding a camera. The photo also won the People’s Choice Award for the AI category after Astray publicly lobbied his social media followers to vote for it multiple times.

Making a statement

On his website, Astray tells the story of a 5 am photo shoot in Aruba where he captured the photo of a flamingo that appears to have lost its head. Astray said he entered the photo in the AI category “to prove that human-made content has not lost its relevance, that Mother Nature and her human interpreters can still beat the machine, and that creativity and emotion are more than just a string of digits.”

That’s not a completely baseless concern. Last year, German artist Boris Eldagsen made headlines after his AI-generated picture “The Electrician” won first prize in the Creative category of the World Photography Organization’s Sony World Photography Award. Eldagsen ended up refusing the prize, writing that he had entered “as a cheeky monkey, to find out if the competitions are prepared for AI images to enter. They are not.”

In a statement provided to press outlets after Astray revealed his deception, the 1839 Awards organizers noted that Astray’s entry was disqualified because it “did not meet the requirements for the AI-generated image category. We understand that was the point, but we don’t want to prevent other artists from their shot at winning in the AI category. We hope this will bring awareness (and a message of hope) to other photographers worried about AI.”

For his part, Astray says his disqualification from the 1839 Awards was “a completely justified and right decision that I expected and support fully.” But he also writes that the work’s initial success at the awards “was not just a win for me but for many creatives out there.”

Even a mediocre human-written comedy special might seem impressive if you thought an AI wrote it.

I’m not sure I buy that interpretation, though. Art isn’t like chess, where the brute force of machine-learning efficiency has made even the best human players relatively helpless. Instead, as conceptual artist Danielle Baskin told Ars when talking about the DALL-E image generator, “all modern AI art has converged on kind of looking like a similar style, [so] my optimistic speculation is that people are hiring way more human artists now.”

The whole situation brings to mind the ostensibly AI-generated George Carlin-style comedy special released earlier this year, which the creators later admitted was written entirely by a human. At the time, I noted how our views of works of art are immediately colored as soon as the “AI generated” label is applied. Maybe you grade the work on a bit of a curve (“Well, it’s not bad for a machine“), or maybe you judge it more harshly for its artificial creation (“It obviously doesn’t have the human touch“).

In any case, reactions to AI artwork are “a reflection of all the fear and promise inherent in computers continuing to encroach on areas we recently thought were exclusively ‘human,’ as well as the economic and philosophical impacts of that trend,” as I wrote when talking about the fake AI Carlin. And those human-centric biases mean we can’t help but use a different eye to judge works of art presented as AI creations.

Entering a human photograph into an AI-generated photo contest says more about how we can exploit those biases than it does about the inherent superiority of man or machine in a field as subjective as art. This isn’t John Henry bravely standing up to a steam engine; it’s Homer Simpson winning a nuclear plant design contest that was not intended for him.

IV infusion enables editing of the cystic fibrosis gene in lung stem cells

Right gene in the right place —

Approach relies on lipid capsules like those in the mRNA vaccines.

Abstract drawing of a pair of human hands using scissors to cut a DNA strand, with a number of human organs in the background.

The development of gene editing tools, which enable the specific targeting and correction of mutations, holds the promise of allowing us to correct the mutations that cause genetic diseases. However, the technology has been around for a while now—the researchers behind it won a Nobel Prize in 2020—and there have been only a few cases where gene editing has been used to target diseases.

One of the reasons for that is the challenge of targeting specific cells in a living organism. Many genetic diseases affect only a specific cell type, such as red blood cells in sickle-cell anemia, or specific tissue. Ideally, to limit potential side effects, we’d like to ensure that enough of the editing takes place in the affected tissue to have an impact, while minimizing editing elsewhere to limit side effects. But our ability to do so has been limited. Plus, a lot of the cells affected by genetic diseases are mature and have stopped dividing. So, we either need to repeat the gene editing treatments indefinitely or find a way to target the stem cell population that produces the mature cells.

On Thursday, a US-based research team said that they’ve done gene editing experiments that targeted a high-profile genetic disease: cystic fibrosis. Their technique largely targets the tissue most affected by the disease (the lung), and occurs in the stem cell populations that produce mature lung cells, ensuring that the effect is stable.

Getting specific

The foundation of the new work is the technology that gets the mRNAs of the COVID-19 mRNA vaccines inside cells. mRNAs are large nucleic acid molecules carrying many charged groups, which makes it difficult for them to cross a membrane and get inside a cell. To overcome that problem, the researchers package the mRNA inside a bubble of lipids, which can then fuse with cell membranes, dumping the mRNA inside the cell.

This process, as the researchers note, has two very large advantages: We know it works, and we know it’s safe. “More than a billion doses of lipid nanoparticle–mRNA COVID-19 vaccines have been administered intramuscularly worldwide,” they write, “demonstrating high safety and efficacy sustained through repeatable dosing.” (As an aside, it’s interesting to contrast the research community’s view of the mRNA vaccines to the conspiracies that circulate widely among the public.)

There’s one big factor that doesn’t matter for vaccine delivery but does matter for gene editing: These lipid particles are not especially fussy about which cells they deliver their cargo to. So, if you want to target something like blood stem cells, then you need to alter the lipid particles in some way to get them to preferentially target the cells of your choice.

There are a lot of ideas on how to do this, but the team behind this new work found a relatively simple one: changing the amount of positively charged lipids on the particle. In 2020, they published a paper in which they describe the development of selective organ targeting (SORT) lipid nanoparticles. By default, many of the lipid particles end up in the liver. But, as the fraction of positively charged lipids increases, the targeting shifts to the spleen and then to the lung.

Because they knew they could target the lung, the researchers decided to use SORT particles to deliver a gene editing system specific to cystic fibrosis, which primarily affects that tissue and is caused by mutations in a single gene. While it’s relatively easy to get things into the lung, it’s tough to get them to lung cells, given all the mucus, cilia, and immune cells that are meant to take care of foreign items in the lung.

Retired engineer discovers 55-year-old bug in Lunar Lander computer game code

The world’s oldest feature —

A physics simulation flaw in a text-based 1969 computer game went unnoticed until now.

Illustration of the Apollo lunar lander Eagle over the Moon.

On Friday, a retired software engineer named Martin C. Martin announced that he recently discovered a bug in the original Lunar Lander computer game’s physics code while tinkering with the software. Created by a 17-year-old high school student named Jim Storer in 1969, this primordial game rendered the action only as text status updates on a teletype, but it set the stage for future versions to come.

The legendary game—which Storer developed on a PDP-8 minicomputer in a programming language called FOCAL just months after Neil Armstrong and Buzz Aldrin made their historic moonwalks—allows players to control a lunar module’s descent onto the Moon’s surface. Players must carefully manage their fuel usage to achieve a gentle landing, making critical decisions every ten seconds to burn the right amount of fuel.
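The mechanics described above, a burn decision every ten seconds against lunar gravity, can be sketched in a few lines. This is a simplified toy loop, not Storer’s FOCAL code; the thrust constant, burn policy, and starting state are all invented for illustration.

```python
# A simplified sketch of the game loop (not Storer's actual FOCAL code):
# every 10 seconds the player picks a fuel burn, which sets thrust;
# gravity pulls the module down; touchdown speed decides the outcome.

G = 1.62        # lunar gravity, m/s^2
DT = 10.0       # decision interval, s

def step(alt, vel, fuel, burn):
    """Advance one 10-second turn. Each unit of burn gives a fixed
    deceleration (an illustrative constant, not the game's value)."""
    burn = min(burn, fuel)
    accel = 0.05 * burn - G              # net acceleration, up positive
    alt += vel * DT + 0.5 * accel * DT**2
    vel += accel * DT
    fuel -= burn
    return alt, vel, fuel

alt, vel, fuel = 1000.0, -50.0, 120.0    # start: 1 km up, falling at 50 m/s
while alt > 0:
    burn = 40 if vel < -10 else 0        # crude "slow down when fast" policy
    alt, vel, fuel = step(alt, vel, fuel, burn)

print(f"touchdown speed: {abs(vel):.1f} m/s, fuel left: {fuel:.1f}")
```

Even this crude policy ends in a hard landing, which hints at why the game was compelling: finding a burn schedule that arrives at zero altitude and near-zero speed at the same moment is genuinely difficult.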

In 2009, just short of the 40th anniversary of the first Moon landing, I set out to find the author of the original Lunar Lander game, which was then primarily known as a graphical game, thanks to the graphical version from 1974 and a 1979 Atari arcade title. When I discovered that Storer created the oldest known version as a teletype game, I interviewed him and wrote up a history of the game. Storer later released the source code to the original game, written in FOCAL, on his website.

A scan of printed teletype output from the original Lunar Lander game, provided by Jim Storer.

Jim Storer

Fast forward to 2024, when Martin—an AI expert, game developer, and former postdoctoral associate at MIT—stumbled upon a bug in Storer’s high school code while exploring what he believed was the optimal strategy for landing the module with maximum fuel efficiency—a technique known among Kerbal Space Program enthusiasts as the “suicide burn.” This method involves falling freely to build up speed and then igniting the engines at the last possible moment to slow down just enough to touch down safely. He also tried another approach—a more gentle landing.

“I recently explored the optimal fuel burn schedule to land as gently as possible and with maximum remaining fuel,” Martin wrote on his blog. “Surprisingly, the theoretical best strategy didn’t work. The game falsely thinks the lander doesn’t touch down on the surface when in fact it does. Digging in, I was amazed by the sophisticated physics and numerical computing in the game. Eventually I found a bug: a missing ‘divide by two’ that had seemingly gone unnoticed for nearly 55 years.”

A matter of division

Diagram of launch escape system on top of the Apollo capsule.

NASA

Despite applying what should have been a textbook landing strategy, Martin found that the game inconsistently reported that the lander had missed the Moon’s surface entirely. Intrigued by the anomaly, Martin dug into the game’s source code and discovered that the landing algorithm was based on highly sophisticated physics for its time, including the Tsiolkovsky rocket equation and a Taylor series expansion.

As mentioned in the quote above, the root of the problem was a simple computational oversight—a missing division by two in the formula used to calculate the lander’s trajectory. This seemingly minor error had big consequences, causing the simulation to underestimate the time until the lander reached its lowest trajectory point and miscalculate the landing.
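To make the failure mode concrete, here is a toy sketch in Python (invented numbers and deliberately simplified kinematics, not Storer's actual FOCAL code) of how dropping a single "/2" from the constant-acceleration displacement formula can convince a simulation that a lander which actually touched down is still airborne:

```python
# Toy sketch (invented numbers, not Storer's actual FOCAL code) of how a
# missing divide-by-two in the displacement formula skews a touchdown check.

def distance_descended(v, a, t, buggy=False):
    """Distance descended in time t, starting at downward speed v while the
    engine supplies a net upward (braking) acceleration a."""
    if buggy:
        return v * t - a * t * t        # missing the divide-by-two
    return v * t - a * t * t / 2.0      # correct: d = v*t - a*t^2/2

altitude = 20.0   # meters above the surface
v, a, t = 10.0, 1.5, 4.0

correct = distance_descended(v, a, t)        # 40 - 12 = 28 m: past the surface
wrong = distance_descended(v, a, t, True)    # 40 - 24 = 16 m: "still 4 m up"

print(correct >= altitude)  # True: the lander really touches down
print(wrong >= altitude)    # False: the buggy check says it missed the surface
```

Because the uncorrected quadratic term here overstates the braking effect, the buggy math places the lander higher than it really is, mirroring the symptom Martin observed: a touchdown the game refuses to register.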

Despite the bug, Martin was impressed that Storer, then a high school senior, managed to incorporate advanced mathematical concepts into his game, a feat that holds up even by today’s standards. Martin reached out to Storer himself, and the Lunar Lander author told Martin that his father was a physicist who helped him derive the equations used in the game simulation.

People played and enjoyed Storer’s game for years with the bug in place, which goes to show that realism isn’t always the most important part of a compelling interactive experience. And thankfully for Aldrin and Armstrong, the real Apollo lunar landing didn’t suffer from the same issue.

You can read more about Martin’s exciting debugging adventure over on his blog.


Apple punishes women for same behaviors that get men promoted, lawsuit says

Apple has spent years “intentionally, knowingly, and deliberately paying women less than men for substantially similar work,” a proposed class action lawsuit filed in California on Thursday alleged.

If the women prevail, more than 12,000 current and former female employees in California could collectively claw back potentially millions in lost wages from an apparently ever-widening wage gap allegedly perpetuated by Apple policies.

The lawsuit was filed by two employees who have each been with Apple for more than a decade, Justina Jong and Amina Salgado. They claimed that Apple violated California employment laws between 2020 and 2024 by unfairly discriminating against California-based female employees in Apple’s engineering, marketing, and AppleCare divisions and “systematically” paying women “lower compensation than men with similar education and experience.”

Apple has allegedly displayed an ongoing bias toward male employees, offering them higher starting salaries and promoting them for the “same behaviors” that female employees were punished for.

Jong, currently a customer/technical training instructor on Apple’s global developer relations/app review team, said that she only became aware of a stark pay disparity by chance.

“One day, I saw a W-2 left on the office printer,” Jong said. “It belonged to my male colleague, who has the same job position. I noticed that he was being paid almost $10,000 more than me, even though we performed substantially similar work. This revelation made me feel terrible.”

But Salgado had long been aware of the problem. Salgado, currently on a temporary assignment as a development manager in the AppleCare division, spent years complaining about her lower wages, prompting Apple internal investigations that never led to salary increases.

Finally, late last year, Salgado’s insistence on fair pay was resolved after Apple hired a third-party firm that concluded she was “paid less than men performing substantially similar work.” Apple subsequently increased her pay rate but dodged responsibility for back pay that Salgado now seeks to recover.

Eve Cervantez, a lawyer for the women suing Apple, said in a press release shared with Ars that these women were put in “a no-win situation.”

“Once women are hired into a lower pay range at Apple, subsequent pay raises or any bonuses are tracked accordingly, meaning they don’t correct the gender pay gap,” Cervantez said. “Instead, they perpetuate and widen the gap because raises and bonuses are based on a percentage of the employee’s base salary.”
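The mechanism Cervantez describes is simple compounding arithmetic. A quick sketch with hypothetical salaries and raise rate (not figures from the lawsuit):

```python
# Hypothetical numbers, not figures from the lawsuit: equal percentage
# raises preserve a lower starting base and widen the gap in dollars.

def project(salary, annual_raise, years):
    """Yearly salary trajectory under a fixed percentage raise."""
    trajectory = [salary]
    for _ in range(years):
        salary *= 1 + annual_raise
        trajectory.append(round(salary, 2))
    return trajectory

his = project(110_000, 0.04, 5)    # hired at a $110k base
hers = project(100_000, 0.04, 5)   # identical 4% raises, lower base

gaps = [h - w for h, w in zip(his, hers)]
print(gaps[0], gaps[-1])  # the $10,000 starting gap grows to about $12,167
```

Even with identical percentage raises, the dollar gap never shrinks; it compounds along with the salaries.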

Apple did not immediately respond to Ars’ request for comment.


Customer-Centric Marketing for Technology Vendors

In today’s fast-paced, highly competitive market, technology vendors often struggle to connect with their customers on a meaningful level. Traditional marketing approaches, which focus on pushing products and services to a broad audience, are no longer effective. Customers demand more personalized and relevant experiences. Without a customer-centric approach, companies risk losing customer loyalty and market share to competitors who better understand and cater to their customers’ needs.

Historical Context

Marketing has evolved significantly over the decades. In the early 20th century, marketing was primarily product-focused, emphasizing mass production and broad-reaching advertising. As markets became more saturated, the focus shifted to differentiation and brand building in the mid-20th century.

The late 20th and early 21st centuries saw the rise of digital marketing, enabling more targeted and data-driven approaches. However, despite these advancements, many businesses continued to prioritize their products and services over the needs and preferences of their customers.

The advent of the internet and social media further transformed the marketing landscape, giving customers a powerful voice and more choices than ever before. This shift necessitated a more customer-centric approach, but many companies have struggled to fully embrace this change.

Why It’s Critical Now

The importance of customer-centric marketing has never been more pronounced. Today’s consumers are more informed, connected, and empowered. They have higher expectations for personalized experiences and are quick to switch brands if their expectations are not met. Additionally, the rise of digital technologies has created a more competitive environment, where startups and smaller companies can challenge established players by offering superior customer experiences.

COVID-19 has also accelerated the need for customer-centric marketing. The pandemic has fundamentally changed consumer behavior, driving more people online and increasing the demand for seamless digital interactions. Customers now expect brands to understand their unique situations and provide relevant solutions.

Investing in customer-centric marketing is not just a trend; it’s a necessity. Companies that prioritize their customers are better positioned to build long-term loyalty, increase customer lifetime value, and achieve sustainable growth. By truly understanding and addressing customer needs, businesses can differentiate themselves and thrive in an increasingly competitive market.

Practical Strategies for Customer-Centric Marketing

1. Understand Your Customer

First things first, you need to know your customer. Not just demographics, but their pain points, preferences, and behaviors. Start by regularly asking for customer feedback through short, focused surveys to understand their needs and expectations. Additionally, analyze purchase history, website interactions, and social media engagement to gather deeper insights into their behavior. By combining direct feedback with data analysis, you can create a comprehensive profile of your customer that goes beyond basic demographics.

Segment your customers based on industry and role. Each segment will have different pain points, preferences, and behaviors. Understand the unique challenges and trends in each industry you serve. For example, the needs of a healthcare provider will differ significantly from those of a financial services firm. Tailor your understanding to the specific roles within these industries. A CTO might focus on technological innovation, while a CFO might prioritize cost efficiency.

2. Personalize Your Communication

Customers today expect personalization. They want to feel like you understand them. Implementing segmentation allows you to divide your audience into groups based on their behavior and preferences. This enables you to tailor your messages to each group, ensuring relevance and increasing engagement. Utilize dynamic content tools that allow you to change the content of your emails or website based on who is viewing them. This could mean showing different product recommendations or messaging depending on the customer’s past interactions with your brand.

Develop segmented communication strategies that cater to the unique needs of different industries and roles. Customize your messages to address industry-specific challenges. Use industry jargon and case studies relevant to their field. Personalize your communication based on the roles of your customers. For example, send technical insights to IT professionals and strategic overviews to executive leaders.

3. Create Valuable Content

Content is still king, but it needs to be valuable. Focus on providing educational content that helps your customers solve their problems. Blog posts, how-to videos, and webinars can be very effective in this regard. Additionally, share engaging stories that highlight customer success. By making your customers the heroes of your narratives, you not only build trust but also demonstrate real-life applications of your products or services. Valuable content should aim to inform, entertain, and inspire your audience, making your brand a go-to resource.

Content should be tailored to provide value to different industries and roles. Develop content that addresses industry-specific pain points. For example, create whitepapers on compliance for healthcare and financial industries. Generate role-specific content, such as technical guides for IT professionals, financial analyses for CFOs, and strategic trends for CEOs.

4. Be Where Your Customers Are

You need to be present on the platforms your customers use. This could be social media, forums, or even offline events. Engage in social listening to monitor what your customers are talking about and join the conversation where relevant. Providing an omnichannel presence ensures a seamless experience across all touchpoints. Your customers should feel like they’re dealing with the same brand whether they’re on your website, your app, or in your store. This consistency builds trust and reinforces your brand’s reliability.

Ensure your presence on platforms popular in different industries and roles. Participate in industry-specific forums, trade shows, and online communities. Engage on platforms and at events where specific roles are active, such as LinkedIn for professionals and GitHub for developers.

5. Build a Community

People like to feel part of a community. Foster this by creating forums where your customers can interact with each other and your brand. These forums can be online spaces such as social media groups or dedicated sections on your website. Develop engagement programs such as loyalty programs or ambassador programs to reward your most engaged customers. These programs not only incentivize repeat business but also encourage word-of-mouth promotion, as loyal customers are more likely to recommend your brand to others.

Foster a sense of community within each industry and role. Create industry-specific forums or groups where customers can interact. Develop role-specific engagement programs, such as technical meetups for developers or financial strategy workshops for CFOs.

6. Measure and Adapt

Finally, always measure your efforts and be ready to adapt. Regularly check in with your customers to see how they feel about your marketing efforts. Use customer feedback to gauge their satisfaction and areas for improvement. This means looking at key metrics like conversion rates, engagement rates, and customer retention rates. By continuously measuring and adapting your strategies, you ensure that your marketing efforts remain effective and aligned with customer needs.

Validate the effectiveness of your strategies and be ready to adapt based on industry and role-specific feedback. Use analytics tools to track industry-specific performance metrics. Gather role-specific feedback to understand the impact on different positions within your customer base.

By putting your customers at the center of your marketing efforts, you not only meet their needs but also build lasting relationships that drive your business forward. Let’s move beyond the jargon and focus on what truly matters—delivering value to our customers.


Cop busted for unauthorized use of Clearview AI facial recognition resigns

Secret face scans —

Indiana cop easily hid frequent personal use of Clearview AI face scans.


An Indiana cop has resigned after it was revealed that he frequently used Clearview AI facial recognition technology to track down social media users not linked to any crimes.

According to a press release from the Evansville Police Department, this was a clear “misuse” of Clearview AI’s controversial face scan tech, which some US cities have banned over concerns that it gives law enforcement unlimited power to track people in their daily lives.

To help identify suspects, police can scan what Clearview AI describes on its website as “the world’s largest facial recognition network.” The database pools more than 40 billion images collected from news media, mugshot websites, public social media, and other open sources.

But these scans must always be linked to an investigation, and Evansville police chief Philip Smith said that instead, the disgraced cop repeatedly disguised his personal searches by deceptively “utilizing an actual case number associated with an actual incident” to evade detection.

Smith’s department discovered the officer’s unauthorized use after performing an audit before renewing their Clearview AI subscription in March. That audit showed “an anomaly of very high usage of the software by an officer whose work output was not indicative of the number of inquiry searches that they had.”

Another clue to the officer’s abuse of the tool was that most face scans conducted during investigations are “usually live or CCTV images”—shots taken in the wild—Smith said. However, the officer who resigned was mainly searching social media images, which was a red flag.

An investigation quickly “made clear that this officer was using Clearview AI” for “personal purposes,” Smith said, declining to name the officer or verify whether targets of these searches were notified.

As a result, Smith recommended that the department terminate the officer. However, the officer resigned “before the Police Merit Commission could make a final determination on the matter,” Smith said.

Easily dodging Clearview AI’s built-in compliance features

Clearview AI touts the face image network as a public safety resource, promising to help law enforcement make arrests sooner while committing to “ethical and responsible” use of the tech.

On its website, the company says that it understands that “law enforcement agencies need built-in compliance features for increased oversight, accountability, and transparency within their jurisdictions, such as advanced admin tools, as well as user-friendly dashboards, reporting, and metrics tools.”

To “help deter and detect improper searches,” its website says that a case number and crime type is required, and “every agency is required to have an assigned administrator that can see an in-depth overview of their organization’s search history.”

It seems that neither of those safeguards stopped the Indiana cop from repeatedly scanning social media images for undisclosed personal reasons, seemingly rubber-stamping the case number and crime type requirement and going unnoticed by his agency’s administrator. This incident could have broader implications in the US, where Clearview AI’s technology has been widely used by police to conduct nearly 1 million searches, CEO Hoan Ton-That told the BBC last year.

In 2022, Ars reported when Clearview AI told investors it had ambitions to collect more than 100 billion face images, ensuring that “almost everyone in the world will be identifiable.” As privacy concerns about the controversial tech mounted, it became hotly debated. Facebook moved to stop the company from scraping faces on its platform, and the ACLU won a settlement that banned Clearview AI from contracting with most businesses. But the US government retained access to the tech, including “hundreds of police forces across the US,” Ton-That told the BBC.

Most law enforcement agencies are hesitant to discuss their Clearview AI tactics in detail, the BBC reported, so it’s often unclear who has access and why. But the Miami Police confirmed that “it uses this software for every type of crime,” the BBC reported.

Now, at least one Indiana police department has confirmed that an officer can sneakily abuse the tech and conduct unapproved face scans with apparent ease.

According to Kashmir Hill—the journalist who exposed Clearview AI’s tech—the disgraced cop was following in the footsteps of “billionaires, Silicon Valley investors, and a few high-wattage celebrities” who got early access to Clearview AI tech in 2020 and considered it a “superpower on their phone, allowing them to put a name to a face and dig up online photos of someone that the person might not even realize were online.”

Advocates have warned that stronger privacy laws are needed to stop law enforcement from abusing Clearview AI’s network, which Hill described as “a Shazam for people.”

Smith said the officer disregarded department guidelines by conducting the improper face scans.

“To ensure that the software is used for its intended purposes, we have put in place internal operational guidelines and adhere to the Clearview AI terms of service,” Smith said. “Both have language that clearly states that this is a tool for official use and is not to be used for personal reasons.”


Musk says he’s winning Tesla shareholder vote on pay plan by “wide margin”

Tesla shareholder vote —

Court battle over pay plan will continue even if Musk wins shareholder vote.

Elon Musk wearing a suit and waving with his hand as he walks away from a courthouse.

Getty Images | Bloomberg

Elon Musk said last night that Tesla shareholders provided enough votes to re-approve his 2018 pay package, which was previously nullified by a Delaware judge. A proposal to transfer Tesla’s state of incorporation from Delaware to Texas also has enough votes to pass, according to a post by Musk.

“Both Tesla shareholder resolutions are currently passing by wide margins!” Musk wrote. His post included charts indicating that both shareholder resolutions had more than enough yes votes to surpass the “guaranteed win” threshold.

The Wall Street Journal notes that the “results provided by Musk are preliminary, and voters can change their votes until the polls close at the meeting on Thursday.” The shareholder meeting is at 3:30 pm Central Time. An official announcement on the results is expected today.

Under a settlement with the Securities and Exchange Commission, Musk is required to get pre-approval from a Tesla securities lawyer for social media posts that may contain information material to the company or its shareholders. Tesla today submitted an SEC filing containing a screenshot of Musk’s X post describing the preliminary results, but the company otherwise did not make an announcement.

Legal uncertainty remains

The vote isn’t the last word on the pay package that was once estimated to be worth $56 billion and more recently valued at $46 billion based on Tesla’s stock price. The pay plan was nullified by a Delaware Court of Chancery ruling in January 2024 after a lawsuit filed by a shareholder.

Judge Kathaleen McCormick ruled that the pay plan was unfair to Tesla’s shareholders, saying the proxy information given to investors before 2018 was materially deficient. McCormick said that “the proxy statement inaccurately described key directors as independent and misleadingly omitted details about the process.”

As the Financial Times wrote, there would still be legal uncertainty even if shareholders re-approve the pay deal today:

In asking shareholders to approve of the same 2018 pay package that was nullified by the Delaware Court of Chancery in January, Tesla is relying on a legal principle known as “ratification,” in which the validity of a corporate action can be cemented by a shareholder vote. Ratification, the company told shareholders in a proxy note earlier this year, “will restore Tesla’s stockholder democracy.”

This instance, however, is the first time a company has tried to leverage that principle after its board was found to have breached its fiduciary duty to approve the deal in the first place.

Even Tesla admits it does not know what happens next. “The [Tesla board] special committee and its advisers noted that they could not predict with certainty how a stockholder vote to ratify the 2018 CEO performance award would be treated under Delaware law in these novel circumstances,” it said in a proxy statement sent to shareholders.

The BBC writes that “legal experts say it is not clear if a court that blocked the deal will accept the re-vote, which is not binding, and allow the company to restore the pay package.”

New lawsuit challenges re-vote

The re-vote was already being challenged in the same Delaware court that nullified the 2018 vote. Donald Ball, who owns 28,245 shares of Tesla stock, last week sued Musk and Tesla in a complaint that alleges the Tesla “Board has not disclosed a complete or fair picture” to shareholders of the impact of re-approving Musk’s pay plan.

That includes “radical tax implications for Tesla that will potentially wipe out Tesla’s pre-tax profits for the last two years,” the lawsuit said. The Ball lawsuit also alleged that “Musk has engaged in strong-arm, coercive tactics to obtain stockholder approval for both the Redomestication Vote and the Ratification Vote.”

Tesla Board Chairperson Robyn Denholm urged shareholders to re-approve the Musk pay plan, suggesting that Musk could leave Tesla or devote less time to the company if the resolution is voted down.


Turkish student creates custom AI device for cheating university exam, gets arrested

spy hard —

Elaborate scheme involved hidden camera and an earpiece to hear answers.

A photo illustration of what a shirt-button camera could look like.

Aurich Lawson | Getty Images

On Saturday, Turkish police arrested and detained a prospective university student who is accused of developing an elaborate scheme to use AI and hidden devices to help him cheat on an important entrance exam, Reuters and The Daily Mail report.

The unnamed student is reportedly jailed pending trial after the incident, which took place in the southwestern province of Isparta, where the student was caught behaving suspiciously during the TYT. The TYT is a nationally held university aptitude exam that determines a person’s eligibility to attend a university in Turkey—and cheating on the high-stakes exam is a serious offense.

According to police reports, the student used a camera disguised as a shirt button, connected to AI software via a “router” (possibly a mistranslation of a cellular modem) hidden in the sole of their shoe. The system worked by scanning the exam questions using the button camera, which then relayed the information to an unnamed AI model. The software generated the correct answers and recited them to the student through an earpiece.

A video released by the Isparta police demonstrated how the cheating system functioned. In the video, a police officer scans a question, and the AI software provides the correct answer through the earpiece.

In addition to the student, Turkish police detained another individual for assisting the student during the exam. The police discovered a mobile phone that could allegedly relay spoken sounds to the other person, allowing for two-way communication.

A history of calling on computers for help

The recent arrest recalls other attempts to cheat using wireless communications and computers, such as the famous case of the Eudaemons in the late 1970s. The Eudaemons were a group of physics graduate students from the University of California, Santa Cruz, who developed a wearable computer device designed to predict the outcome of roulette spins in casinos.

The Eudaemons’ device consisted of a shoe with a computer built into it, connected to a timing device operated by the wearer’s big toe. The wearer would click the timer when the ball and the spinning roulette wheel were in a specific position, and the computer would calculate the most likely section of the wheel where the ball would land. This prediction would be transmitted to an earpiece worn by another team member, who would quickly place bets on the predicted section.
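The calculation the shoe computer performed can be sketched in a greatly simplified toy form. The real device modeled the ball's deceleration and the wheel's motion; the version below uses invented constants and an unrealistic constant-speed assumption, and only shows the shape of the idea:

```python
# Toy sketch of the Eudaemons' timing idea (invented constants,
# constant-speed assumption; the real device modeled ball deceleration).

def predict_section(t1, t2, drop_after, sections=8):
    """t1 and t2 are the times (seconds) at which the ball passes a fixed
    reference point on successive laps; drop_after is how long after t2 the
    ball is expected to fall. Returns the wheel section (0..sections-1)
    under the ball at that moment, assuming constant ball speed."""
    lap = t2 - t1                       # observed lap time of the ball
    laps_remaining = drop_after / lap   # laps completed before the drop
    fraction = laps_remaining % 1.0     # fraction of a lap past the reference
    return int(fraction * sections)

# Ball laps the wheel in 0.8 s and is expected to drop 2.0 s later:
print(predict_section(0.0, 0.8, 2.0))  # 2.5 laps -> half a lap -> section 4
```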

While the Eudaemons’ plan didn’t involve a university exam, it shows that the urge to call upon remote computational powers greater than oneself is apparently timeless.


Ridiculed Stable Diffusion 3 release excels at AI-generated body horror

unstable diffusion —

Users react to mangled SD3 generations and ask, “Is this release supposed to be a joke?”

An AI-generated image created using Stable Diffusion 3 of a girl lying in the grass.

On Wednesday, Stability AI released weights for Stable Diffusion 3 Medium, an AI image-synthesis model that turns text prompts into AI-generated images. Its arrival has been ridiculed online, however, because it generates images of humans in a way that seems like a step backward from other state-of-the-art image-synthesis models like Midjourney or DALL-E 3. As a result, it can churn out wild anatomically incorrect visual abominations with ease.

A thread on Reddit, titled, “Is this release supposed to be a joke? [SD3-2B],” details the spectacular failures of SD3 Medium at rendering humans, especially human limbs like hands and feet. Another thread, titled, “Why is SD3 so bad at generating girls lying on the grass?” shows similar issues, but for entire human bodies.

Hands have traditionally been a challenge for AI image generators due to a lack of good examples in early training data sets, but more recently, several image-synthesis models seemed to have overcome the issue. In that sense, SD3 appears to be a huge step backward for the image-synthesis enthusiasts who gather on Reddit—especially compared to recent Stability releases like SD XL Turbo in November.

“It wasn’t too long ago that StableDiffusion was competing with Midjourney, now it just looks like a joke in comparison. At least our datasets are safe and ethical!” wrote one Reddit user.

  • An AI-generated image created using Stable Diffusion 3 Medium.

  • An AI-generated image created using Stable Diffusion 3 of a girl lying in the grass.

  • An AI-generated image created using Stable Diffusion 3 that shows mangled hands.

  • An AI-generated SD3 Medium image a Reddit user made with the prompt “woman wearing a dress on the beach.”

  • An AI-generated SD3 Medium image a Reddit user made with the prompt “photograph of a person napping in a living room.”

AI image fans are so far blaming Stable Diffusion 3’s anatomy fails on Stability’s insistence on filtering out adult content (often called “NSFW” content) from the SD3 training data that teaches the model how to generate images. “Believe it or not, heavily censoring a model also gets rid of human anatomy, so… that’s what happened,” wrote one Reddit user in the thread.

Basically, any time a user prompt homes in on a concept that isn’t represented well in the AI model’s training dataset, the image-synthesis model will confabulate its best interpretation of what the user is asking for. And sometimes that can be completely terrifying.

The release of Stable Diffusion 2.0 in 2022 suffered from similar problems in depicting humans well, and AI researchers soon discovered that censoring adult content that contains nudity can severely hamper an AI model’s ability to generate accurate human anatomy. At the time, Stability AI reversed course with SD 2.1 and SD XL, regaining some abilities lost by strongly filtering NSFW content.

Another issue that can occur during model pre-training is that the NSFW filter researchers use to remove adult images from the dataset is sometimes too picky, accidentally removing images that might not be offensive and depriving the model of depictions of humans in certain situations. “[SD3] works fine as long as there are no humans in the picture, I think their improved nsfw filter for filtering training data decided anything humanoid is nsfw,” wrote one Redditor on the topic.

Using a free online demo of SD3 on Hugging Face, we ran prompts and saw similar results to those being reported by others. For example, the prompt “a man showing his hands” returned an image of a man holding up two giant-sized backward hands, although each hand at least had five fingers.

  • An SD3 Medium example we generated with the prompt “A woman lying on the beach.”

  • An SD3 Medium example we generated with the prompt “A man showing his hands.”

    Stability AI

  • An SD3 Medium example we generated with the prompt “A woman showing her hands.”

    Stability AI

  • An SD3 Medium example we generated with the prompt “a muscular barbarian with weapons beside a CRT television set, cinematic, 8K, studio lighting.”

  • An SD3 Medium example we generated with the prompt “A cat in a car holding a can of beer.”

Stability first announced Stable Diffusion 3 in February, and the company plans to make it available in a variety of model sizes. Today’s release is for the “Medium” version, a 2 billion-parameter model. The weights are hosted on Hugging Face and can also be tried through the company’s Stability Platform; they are free to download and use under a non-commercial license only.

Soon after its February announcement, delays in releasing the SD3 model weights inspired rumors that the release was being held back due to technical issues or mismanagement. Stability AI as a company fell into a tailspin recently with the resignation of its founder and CEO, Emad Mostaque, in March and then a series of layoffs. Just prior to that, three key engineers—Robin Rombach, Andreas Blattmann, and Dominik Lorenz—left the company. And its troubles go back even farther, with news of the company’s dire financial position lingering since 2023.

To some Stable Diffusion fans, the failures with Stable Diffusion 3 Medium are a visual manifestation of the company’s mismanagement—and an obvious sign of things falling apart. Although the company has not filed for bankruptcy, some users made dark jokes about the possibility after seeing SD3 Medium:

“I guess now they can go bankrupt in a safe and ethically [sic] way, after all.”
