Author name: Kris Guyer

RAM now represents 35 percent of bill of materials for HP PCs

In an illustration of the severity of the current memory shortage, HP Inc. CFO Karen Parkhill said that RAM has gone from accounting for “roughly 15 percent to 18 percent” of HP PCs’ bill of materials in its fiscal Q4 2025 to “roughly 35 percent” for the rest of the year.

Parkhill was speaking during HP’s Q1 2026 earnings call, where the company said it expects the total addressable market for its Personal Systems business to decline by double digits this calendar year, as higher prices hurt customer demand.

“We have seen memory costs increase roughly 100 percent sequentially, and we do forecast that to further increase as we move into the fiscal year,” Parkhill said, per a transcript of the call by Seeking Alpha.

HP expects its financials to be most severely impacted by the RAM shortage in the second half of its fiscal year.

“We are seeing increased input costs driven primarily by the rising prices of DRAM and NAND,” Bruce Broussard, HP’s interim CEO and director, said. “We expect this volatility to remain throughout fiscal [year 2026] and likely into fiscal [year 2027].”

RAM shortage drives higher prices, lower specs

HP’s CFO noted that a third of the margin for HP’s Personal Systems business comes from non-RAM-related categories, including IT services and peripherals. However, HP has also raised PC prices to keep making money while paying significantly more for RAM.

Panasonic, the former plasma king, will no longer make its own TVs

Panasonic, once revered for its plasma TVs, is giving up on making its own TV sets. Today, it announced that Chinese company Skyworth will take over manufacturing, marketing, and selling Panasonic-branded TVs.

Skyworth is a Shenzhen-headquartered TV brand. The company claims to be “a top three global provider of the Android TV platform.” In July, research firm Omdia reported that Skyworth was one of the top-five TV brands by sales revenue in Q1 2025, though Skyworth hasn’t held that position consistently.

Panasonic made its announcement at a “launch event,” FlatpanelsHD reported today. During the event, a Panasonic representative reportedly said:

Under the agreement the new partner will lead sales, marketing, and logistics across the region, while Panasonic provide expertise and quality assurance to uphold its renowned audiovisual standards with full joint development on top-end OLED models.

Panasonic also said that it will provide support “for all Panasonic TVs sold up to March 2026 and all those available from April.”

Skyworth-made Panasonic TVs will be sold in the US and Europe. In Europe, the companies are aiming for double-digit market share.

Panasonic’s wavering TV business

Panasonic has been wavering on its commitment to the TV business for at least 12 years.

When plasma ruled the living room, Panasonic dominated the market. In 2010, Panasonic controlled 40.7 percent of the plasma panel market, beating Samsung (33.7 percent) and LG (23.2 percent), according to research from consultancy DisplaySearch. But in March 2014, Panasonic quit making plasma TVs, pointing to increasing interest in flat-screen LCD TVs and economic challenges stemming from the bankruptcy of global investment bank Lehman Brothers. At the time, Panasonic reportedly hadn’t made money on its popular, high-contrast plasma TVs for years.

New Microsoft gaming chief has “no tolerance for bad AI”

A gaming education

Unlike Spencer, who spent years at Microsoft Game Studios before heading Microsoft’s gaming division, Sharma has no professional experience in the video game industry. And her personal experience with Xbox also seems somewhat limited; after sharing her Gamertag on social media over the weekend, curious gamers found that her Xbox play history dates back roughly one month. That’s also in stark contrast to Spencer, who has amassed a Gamerscore of over 121,000 across decades of play.

In her interview with Variety, Sharma cited 2016’s Firewatch as an example of the kinds of games with “deep emotional resonance” and “a distinct point of view” that she’s looking for from Microsoft. And on social media, Sharma shared her list of the three greatest games ever: “Halo, Valheim, Goldeneye,” for what it’s worth. Sharma also seems to be taking recommendations for games to catch up on; after saying on social media that she would try Borderlands 2, the game appeared in her recently played games over the weekend.

A look at some of Sharma’s recently played Xbox games, as of this writing.

Credit: Xbox.com

Being a personal fan of video games isn’t necessarily required to succeed in running a gaming company. Nintendo President Hiroshi Yamauchi famously didn’t care for video games even as he launched the Famicom and Nintendo Entertainment System to worldwide success in the 1980s. Still, the lack of direct experience with the gaming world marks a sharp change after Spencer’s long tenure at a time when Microsoft is struggling to redefine the Xbox brand amid cratering hardware sales, a pivot away from software exclusives, and a move to extend the Xbox brand to many different devices.

Xbox President and COO Sarah Bond, who by all accounts was being set up to succeed Spencer, also announced her departure from Microsoft on Friday, ending a nearly nine-year stint as a public face for the company’s gaming efforts. The Verge reports that Bond caused a lot of friction within the Xbox team when she championed the “Xbox Everywhere” strategy and “This is an Xbox” marketing campaign, which focused on streaming Xbox games to hardware like mobile phones and tablets, according to anonymous sources. Shortly before the launch of that campaign in 2024, Microsoft lost marketing executives Jerrett West and Kareem Choudry, leading to significant internal reorganization.

Longtime Xbox Game Studios executive Matt Booty, whose history in the game industry dates back to working for Williams Electronics in the ’90s, has been promoted to executive vice president and chief content officer for Xbox and “will continue working closely with [Sharma] to ensure a smooth transition,” Microsoft said in its announcement Friday.

The first cars bold enough to drive themselves


Quevedo’s Telekino of 1904 was the first step on the road to autonomous Waymos.

Credit: Aurich Lawson | Getty Images

No one knows exactly when the vehicles we drive will finally wrest the steering wheel from us. But the age of the autonomous automobile isn’t some sudden Big Bang. It’s more of a slow crawl, one that started during the Roosevelt administration. And that’s Theodore, not Franklin. And not in America, but in Spain, by someone you’ve probably never heard of.

His name was Leonardo Torres Quevedo, a Spanish engineer born in Santa Cruz, Spain, in 1852. Smart? In 1914, he developed a mechanical chess machine that autonomously played against humans. But more than a decade earlier, he pioneered the development of remote-control systems. What he wrought was brilliant, if crude—and certainly ahead of its time.

The first wireless control

It was called the Telekino, a name drawn from the Greek “tele,” meaning at a distance, and “kino,” meaning movement. Patented in Spain, France, and the United States, it was conceived as a way to prevent airship accidents. The Telekino transmitted wireless signals to a small receiver known as a coherer, which detected electromagnetic waves and transformed them into an electrical current. This current was amplified and sent on to electromagnets that slowly rotated a switch controlling the proper servomotor. Quevedo could issue 19 distinct commands to the systems of an airship without ever touching a control cable.
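
Functionally, the Telekino receiver behaved like a pulse-driven rotary selector: each received pulse advanced a switch one step, and the switch’s resting position determined which servomotor circuit was energized. The sketch below models that idea in miniature; the 19 command slots match the account above, but the stepping behavior is simplified and the command names are hypothetical placeholders, not taken from Torres Quevedo’s patents.

```python
# Illustrative model of a Telekino-style rotary command selector.
# Each radio pulse advances the switch one step; the final resting
# position selects which circuit to energize. Command names are
# hypothetical placeholders, not from Torres Quevedo's patents.

class RotarySelector:
    def __init__(self, commands):
        self.commands = commands
        self.position = 0

    def pulse(self):
        """Advance the switch one step, wrapping around."""
        self.position = (self.position + 1) % len(self.commands)

    def execute(self):
        """Return the command at the current switch position."""
        return self.commands[self.position]

selector = RotarySelector([f"command_{i}" for i in range(19)])
for _ in range(5):          # five pulses arrive over the radio link
    selector.pulse()
print(selector.execute())   # -> command_5
```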

By 1904, he was using the Telekino to direct a small, three-wheeled vehicle from nearly 100 feet away. It was the earliest recorded instance of a vehicle being controlled by radio. After that, Quevedo demonstrated the system’s usefulness aboard boats and even torpedoes, but here the story slows. The Spanish Crown, cautious and reluctant to invest, withheld its support. Without funding, Quevedo couldn’t build and sell the Telekino.

But he had shown that a machine could be guided by signals. It would be more than a century before that notion would reach fruition. But that doesn’t mean others didn’t try.

Leave it to Ohio

Dayton, Ohio, August 5, 1921. The country was in the thick of the automotive age, and Dayton stood as one of its industrious nerve centers. General Motors had established a strong presence there with its Frigidaire Division, promising a future of electrified domestic bliss. Meanwhile, across town, engineers at Delco, the Dayton Engineering Laboratories Company, were refining the very heart of the automobile. This was a place where invention was not merely encouraged, but expected.

But on this particular summer afternoon, the most remarkable innovation did not come from the factory floor or the corporate drafting room. It came instead from the US Army, an outfit not usually known for whimsical experimentation. It sent a small, three-wheeled vehicle, scarcely eight feet long and fitted with radio equipment, rolling through the city’s business district. The vehicle moved without a driver. Some 50 feet behind it, Captain R. E. Vaughn of nearby McCook Field guided its movement by radio signal.

A 1926 Chandler. Obviously, this one is human-driven—you can tell by the human waving from the driver’s seat.

Credit: American Stock/Getty Images

Four years later, the spectacle reappeared. This time it was on the streets of New York City, where a crowd along Broadway watched as a 1926 Chandler, sitting quietly at the curb, came to life. The engine turned, the gears engaged, and it pulled smoothly into the stream of traffic before making its way up Fifth Avenue without a driver. Dubbed the “American Wonder” by its creator, Francis P. Houdina, the car responded to radio commands transmitted from a chase car. Signals were received by antennas atop the Chandler, where they triggered circuit breakers and small electric motors that operated the steering, throttle, brakes, and horn.

The idea proved too tantalizing to fade. In Cincinnati, a Toledo inventor named Maurice J. Francill took up the cause in 1928. Francill, who styled himself “America’s Radio Wizard,” demonstrated how radio control could move Ford automobiles without a driver. In a series of stage-like performances, he also milked cows, baked bread, and operated a laundry, all through radio command. By 1936, newspapers from Ohio to California were still reporting his feats.

“Francill claims that he can accomplish anything the human hand can do by radio,” the Orange County News observed. “Eight pounds [3.6 kg] of delicate brain-like radio apparatus was employed to control the lights, ignition system, horn and start the motor running. Five pounds [2.3 kg] of radio apparatus is required to guide the car.”

These vehicles may seem like novelties today, but they were early proof that an automobile could be guided by something other than a human.

Detroit buys into the dream

The dream of a self-driving automobile did not vanish when these moments passed. It lingered, an idea returned to again and again, particularly in the years when America believed that anything was possible.

At the 1939 New York World’s Fair, General Motors offered a glimpse of that future with its enormous Futurama exhibit. Seated above a raised platform, fairgoers saw a miniature city where tiny electric cars moved serenely along highways without drivers. The cars, they were told, would one day be guided by radio signals and electric currents running through cables and circuits beneath the pavement, creating an electromagnetic field that could both power the vehicles and guide their course. It was a bold, imaginative vision—and characteristic of a time when modern engineering was forecast to remake the world.

After the war, engineers did not let the idea fade. They continued to work on the idea of communication between road and machine. At General Motors’ Motorama, a traveling showcase of the company’s newest vehicles and latest ideas, one display in 1956 captured the imagination of audiences across the country. GM unveiled a sleek, gas turbine–powered automobile, sheathed in titanium and brimming with the promise of autonomous driving.

The Firebird II concept from 1956 could drive itself on special roads.

Credit: General Motors

Beneath certain stretches of highway, GM proposed laying an electronic strip. When the car traveled over it, sensors would lock onto the signal, guiding the vehicle automatically along its lane. The driver would simply lean back, hands free from the wheel, and watch the miles roll by. Onboard amenities inexplicably included an orange juice dispenser.

Proof of concept

By 1958, the idea became a reality. On a plain stretch of highway outside Lincoln, Nebraska, it was put to the test. The state’s Department of Roads embedded a 400-foot (121 m) length of the roadway with electric circuits, while engineers from RCA and General Motors brought specially fitted Chevrolets to test it. Observers watched as the driverless cars steered themselves, responding to the buried signal beneath the pavement.
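
Wire-following vehicles of this general kind typically steer by comparing the signal picked up by two sensors straddling the buried cable: the coil nearer the wire sees a stronger field, and the difference drives the steering. Here’s a toy feedback loop showing the principle; the sensor model and gain are invented for illustration and aren’t drawn from the actual RCA/GM electronics.

```python
# Toy model of wire-guided steering: two pickup coils straddle a
# current-carrying cable buried in the pavement. The coil nearer
# the wire sees a stronger signal; steering toward the weaker side
# recenters the car. Geometry and gains are illustrative only.

def coil_signal(distance_to_wire):
    # Field strength falls off with distance from the cable.
    return 1.0 / (0.1 + abs(distance_to_wire))

def steering_command(lateral_offset, coil_spacing=0.5, gain=0.05):
    left = coil_signal(lateral_offset - coil_spacing / 2)
    right = coil_signal(lateral_offset + coil_spacing / 2)
    # Negative output steers left (back toward the wire) when the
    # car has drifted right, and vice versa.
    return gain * (right - left)

offset = 0.3  # car starts 0.3 m to the right of the cable
for _ in range(10):
    offset += steering_command(offset)
print(round(offset, 3))  # shrinks toward 0 as the car recenters
```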

A few years later, across the Atlantic, the United Kingdom’s Transport and Road Research Laboratory undertook its own experiments. Using a Citroën DS, they laid magnetic cables beneath a test track and sent the car down it at speeds of up to 80 mph (129 km/h). Wind and weather made no difference; the DS held its line faithfully.

Autonomy emerges in the modern age

Fast forward to 1986, and German scientist Ernst Dickmanns, as part of his position with the German armed forces, began testing an autonomously driving Mercedes-Benz using computers, cameras, and sensors, not unlike modern-day cars. Within a year, it was traveling down the Autobahn at nearly 55 mph (89 km/h). That was enough to capture the attention of Daimler-Benz, which helped fund further research.

Several years later, in October 1994, Dickmanns gathered his research team at Charles de Gaulle Airport outside Paris, where they met a delegation of high-ranking officials. Parked at the curb were two sedans. They appeared ordinary but were fitted with cameras, sensors, and onboard computers. The guests climbed in, and the cars made their way toward the nearby thoroughfare. Then, with the traffic flowing steadily around them, the engineers switched the vehicles into self-driving mode and took their hands off the wheel. The cars held their lanes, adjusted their speed, and followed the road’s gentle curves without driver intervention.

The experimental driverless car VaMP (Versuchsfahrzeug für autonome Mobilität und Rechnersehen), which was developed during the European research project PROMETHEUS: (top left) components for autonomous driving; (right) VaMP and view into passenger cabin (lower right); (lower left) bifocal camera arrangement (front) on yaw platform.

Credit: CC BY-SA 3.0

A year later, Dickmanns would travel from Bavaria to Denmark, a trip of more than 1,056 miles (1,700 km), reaching speeds of nearly 110 mph (177 km/h). Unfortunately, Daimler lost interest and cut funding for the effort. Dickmanns’ project came to a halt, but the modern-day technology was in place to set the stage for what came next.

The military sparks innovation–again

By the turn of the century, the Pentagon’s research arm, the Defense Advanced Research Projects Agency, or DARPA, had taken up the problem. Its mission was ambitious: to develop technologies that could protect American soldiers on the battlefield. Among its goals was the creation of vehicles that could drive themselves, sparing troops the dangers of roadside ambushes and explosive traps.

To accelerate progress, DARPA announced a competition to build a driverless vehicle capable of traveling 142 miles (229 km) across the Mojave Desert. The prize was $1 million, though the real prize was the knowledge gained along the way.

When race day arrived, the results were humbling. One by one, every vehicle failed to finish. But in the sun and dust of the Mojave, a community emerged, one of engineers, programmers, and dreamers who believed that the autonomous vehicle was not a fantasy but a problem to be solved. Twenty years later, their work has brought the idea closer to everyday reality than ever before.

By themselves, these efforts did not yet give the world the self-driving car. But they demonstrated that the fantasy could become reality. It’s also a reminder that while the tech industry likes to position itself as a disruptor bringing self-driving cars to market, Detroit was dreaming about and demonstrating autonomous transportation long before Silicon Valley existed.

Study shows how rocket launches pollute the atmosphere

Atmospheric scientist Laura Revell, with the University of Canterbury in New Zealand, presented research showing that rocket exhaust in the atmosphere can erase some of the hard-won gains in mitigating ozone depletion.

In a high-growth scenario for the space industry, there could be as many as 2,000 launches per year, which her modeling shows could result in about 3 percent ozone loss, equal to the atmospheric impacts of a bad wildfire season in Australia. She said most of the damage comes from chlorine-rich solid rocket fuels and black carbon in the plumes.

The black carbon could also warm parts of the stratosphere by about half a degree Celsius as it absorbs sunlight. That heats the surrounding air and can shift winds that steer storms and areas of precipitation.

“This is probably not a fuel type that we want to start using in massive quantities in the future,” she added.

Researchers at the conference estimated that in the past five years, the mass of human-made material injected into the upper atmosphere by re-entries has doubled to nearly a kiloton a year. For some metals like lithium, the amount is already much larger than that contributed by disintegrating meteors.

In the emerging field of space sustainability science, researchers say orbital space and near-space should be considered part of the global environment. A 2022 journal article co-authored by Moriba Jah, a professor of aerospace engineering and engineering mechanics at the University of Texas at Austin, argued that the upper reaches of the atmosphere are experiencing increased impacts from human activities.

The expanding commercial use of what appears to be a free resource is actually shifting its real costs onto others, the article noted.

At last year’s European Geosciences Union conference, Leonard Schulz, who studies space pollution at the Technical University Braunschweig in Germany, said, “If you put large amounts of catalytic metals in the atmosphere, I immediately think about geoengineering.”

There may not be time to wait for more scientific certainty, Schulz said: “In 10 years, it might be too late to do anything about it.”

Bob Berwyn is an Austria-based reporter who has covered climate science and international climate policy for more than a decade. Previously, he reported on the environment, endangered species and public lands for several Colorado newspapers, and also worked as editor and assistant editor at community newspapers in the Colorado Rockies.

This story originally appeared on Inside Climate News.

Dinosaur eggshells can reveal the age of other fossils

When dinosaur fossils surface at a site, it is often not possible to tell how many millions of years ago their bones were buried. While the different strata of sedimentary rock represent periods of geologic history frozen in time, accurately dating them or the fossils trapped within them has frequently proven to be frustrating.

Fossilized bones and teeth have been dated with some success before, but that success is inconsistent and depends on the specimens. Both fossilization and the process of sediment turning to rock can alter the bone in ways that interfere with accuracy. While uranium-lead dating is among the most widely used methods for dating materials, it is just an emerging technology when applied to directly dating fossils.
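
The math behind uranium-lead dating itself is simple exponential decay. As a sketch, assuming a closed system (no uranium or lead gained or lost after burial) and the standard decay constant for uranium-238, the age follows directly from the measured ratio of radiogenic lead-206 to uranium-238; the sample ratio below is illustrative, not a figure from the eggshell study.

```python
# Sketch of the standard uranium-lead age calculation. In a closed
# system, radiogenic 206Pb accumulates as 238U decays, so
#     206Pb / 238U = exp(lambda * t) - 1
# which rearranges to t = ln(1 + 206Pb/238U) / lambda.

import math

LAMBDA_238U = 1.55125e-10  # decay constant of 238U, per year

def u_pb_age(pb206_u238_ratio):
    """Age in years from a measured radiogenic 206Pb/238U ratio."""
    return math.log(1.0 + pb206_u238_ratio) / LAMBDA_238U

# An illustrative ratio of ~0.0117 works out to roughly 75 million
# years, i.e., Late Cretaceous material.
print(f"{u_pb_age(0.0117) / 1e6:.1f} Myr")  # -> 75.0 Myr
```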

Dinosaur eggshells might have finally cracked a way to date surrounding rocks and fossils. Led by paleontologist Ryan Tucker of Stellenbosch University, a team of researchers has devised a method of dating eggshells that reveals how long ago they were covered in what was once sand, mud, or other sediments. That information will give the burial time of any other fossils embedded in the same layer of rock.

“If validated, this approach could greatly expand the range of continental sedimentary successions amenable to radioisotopic dating,” Tucker said in a study recently published in Communications Earth & Environment.

This goes way back

Vertebrates have been laying calcified eggs for hundreds of millions of years (although the first dinosaur eggs had soft shells). What makes fossil eggshells so useful for figuring out the age of other fossils is the unique microstructure of the calcium carbonate found in them. The arrangement of its crystals captures a record of diagenetic changes, or the physical and chemical changes that occurred during fossilization. These can include water damage, along with breaks and fissures caused by compaction between layers of sediment. That record makes it easier to screen eggshells for signs of alteration when determining how old they are.

Have we leapt into commercial genetic testing without understanding it?


A new book argues that tests might reshape human diversity even if they don’t work.

Daphne O. Martschenko and Sam Trejo both want to make the world a better, fairer, more equitable place. But they disagree on whether studying social genomics—elucidating any potential genetic contributions to behaviors ranging from mental illnesses to educational attainment to political affiliation—can help achieve this goal.

Martschenko’s argument is largely that genetic research and data have almost always been used thus far as a justification to further entrench extant social inequalities. But we know the solutions to many of the injustices in our world—trying to lift people out of poverty, for example—and we certainly don’t need more genetic research to implement them. Trejo’s point is largely that more information is generally better than less. We can’t foresee the benefits that could come from basic research, and this research is happening anyway, whether we like it or not, so we may as well try to harness it as best we can toward good and not ill.

Obviously, they’re both right. In What We Inherit: How New Technologies and Old Myths Are Shaping Our Genomic Future, we get to see how their collaboration can shed light on our rapidly advancing genetic capabilities.

An “adversarial collaboration”

Trejo is a (quantitative) sociologist at Princeton; Martschenko is a (qualitative) bioethicist at Stanford. He’s a he, and she’s a she; he looks white, she looks black; he’s East Coast, she’s West. On the surface, it seems clear that they would hold different opinions. But they still chose to spend 10 years writing this book in an “adversarial collaboration.” While they still disagree, by now at least they can really listen to and understand each other. In today’s world, that seems pretty worthwhile in and of itself.

The titular “What we inherit” refers to both actual DNA (Trejo’s field) and the myths surrounding it (Martschenko’s). There are two “genetic myths” that most concern them. One is the Destiny Myth: the notion, first promulgated by Francis Galton in his 1869 book Hereditary Genius, that the effects of DNA can be separated from the effects of environment. He didn’t deny the effects of nurture; he just erroneously pitted nurture against nature, as if it were one versus the other instead of each impacting and working through the other. (The most powerful “genetic” determinant of educational attainment in his world was a Y chromosome.) His ideas reached their apotheosis in the forced sterilizations of the eugenics movement in the early 20th century in the US and, eventually, in the policies of Nazi Germany.

The other genetic myth the authors address is the Race Myth, “the false belief that DNA differences divide humans into discrete and biologically distinct racial groups.” (Humans can be genetically sorted by ancestry, but that’s not quite the same thing.) But they spend most of the book discussing polygenic scores, which sum up the impact of lots of small genetic influences. They cover what they are, their strengths and weaknesses, their past, present, and potential uses, and how and how much their use should be regulated. And of course, their ultimate question: Are they worth generating and studying at all?

One thing they agree on is that science education in this country is abysmal and needs to be improved immediately. Most people’s knowledge of genetics is stuck at Mendel and his green versus yellow, smooth versus wrinkled peas: dominant and recessive traits with manifestations that can be neatly traced in Punnett squares. Alas, most human traits are much more complicated than that, especially the interesting ones.

Polygenic scores: uses and abuses

Polygenic scores tally the contributions of many genes to particular traits to predict certain outcomes. There’s no single gene for height, depression, or heart disease; there are a bunch of genes that each make very small contributions to making an outcome more or less likely. Polygenic scores can’t tell you that someone will drop out of high school or get a PhD; they can just tell you that someone might be slightly more or less likely to do so. They are probabilistic, not deterministic, because people’s mental health and educational attainment and, yes, even height, are determined by environmental factors as well as genes.
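
Mechanically, a polygenic score is just a weighted sum: each variant’s effect-allele count (0, 1, or 2 copies) is multiplied by an effect size estimated from large association studies, and the products are added up. Here’s a minimal sketch; the variant IDs and effect sizes are fabricated for illustration, since real scores aggregate thousands to millions of variants.

```python
# Minimal illustration of a polygenic score: a weighted sum of
# effect-allele counts (0, 1, or 2 per variant), each weighted by
# an effect size from a genome-wide association study. All variant
# IDs and weights below are fabricated for illustration.

effect_sizes = {  # hypothetical per-allele effects on height, in cm
    "rs0001": 0.12,
    "rs0002": -0.08,
    "rs0003": 0.05,
}

def polygenic_score(genotype):
    """genotype maps variant ID -> effect-allele count (0, 1, or 2)."""
    return sum(effect_sizes[v] * genotype.get(v, 0) for v in effect_sizes)

person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(polygenic_score(person))  # 0.16: a small nudge, not a destiny
```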

Besides being merely probabilistic, polygenic scores (a) are not that accurate to begin with; (b) become less accurate for each trait if you select for more than one trait, like height and intelligence; and (c) are less accurate for those not of European descent, since most genetic studies have thus far been done only with Europeans. So right out of the gate, any potential benefits of the technology will be distributed unevenly.

Another thing that Martschenko and Trejo agree on is that the generation, sale, and use of polygenic scores must be regulated much more assiduously than they currently are to ensure that they are implemented responsibly and equitably. “While scientists and policymakers are guarding the front gate against gene editing, genetic embryo selection (using polygenic scores) is slipping in through the backdoor,” they write. Potential parents using IVF have long been able to choose which embryos to implant based on gender and the presence of very clear-cut genetic markers for certain serious diseases. Now, they can choose which embryos they want to implant based on their polygenic scores.

In 2020, a company called Genomic Prediction started offering genomic scores for diabetes, skin cancer, high blood pressure, elevated cholesterol, intellectual disability, and “idiopathic short stature.” They’ve stopped advertising the last two “because it’s too controversial.” Not, mind you, because the effects are minor and the science is unreliable. The theoretical maximum polygenic score for height would make a difference of 2.5 inches, and that theoretical maximum has not been seen yet, even in studies of Europeans. Polygenic scores for most other traits lag far behind. (And that’s just one company; another called Herasight has since picked up the slack and claims to offer embryo selection based on intelligence.)

Remember, the more traits one selects for, the less accurate each prediction is. Moreover, many genes affect multiple biological processes, so a gene implicated in one undesirable trait may have as yet undefined impacts on other desirable ones.

And all of this is ignoring the potential impact of the child’s environment. The first couple who used genetic screening for their daughter opted for an embryo that had a reduced risk of developing heart disease; her risk was less than 1 percent lower than the three embryos they rejected. Feeding her vegetables and sticking her on a soccer team would have been cheaper and probably more impactful.

The risks of reduced genetic diversity

Almost every family I know has a kid who has taken growth hormones, and plenty of them get tutoring, too. These interventions are hardly equitably distributed. But if embryos are selected based on polygenic scores, the authors fear that a new form of social inequality could arise. While growth hormone injections affect only one individual, embryonic selection based on polygenic scores affects all of that embryo’s descendants going forward. So the chosen embryos’ progeny could eventually be treated as a new class of optimized people whose status might be elevated simply because their parents could afford to comb through their embryonic genomes—regardless of whether their “genetic” capabilities are actually significantly different from everyone else’s.

While it is understandable that parents want to give their kids the best chance of success, eliminating traits that they find objectionable will make humanity as a whole more uniform and society as a whole poorer for the lack of heterogeneity. Everyone can benefit from exposure to people who are different from them; if everyone is bred to be tall, smart, and good-looking, how will we learn to tolerate otherness?

Polygenic embryo selection is currently illegal in the UK, Israel, and much of Europe. In 2024, the FDA made some noise about planning to regulate the market, but for now, companies offering polygenic scores to the public fall under the same nonmedical category as nutritional supplements—i.e., effectively unregulated. These companies advertise scores for traits like musical ability and acrophobia, but only for “wellness” or “educational” purposes.

So Americans are largely at the mercy of corporations that want to profit off of them at least as much as they claim to want to help them. And because this is still in the private sector, people who have the most social and environmental advantages—wealthy people with European ancestry—are often the only ones who can afford to try to give their kids any genetic advantages that might be had, further entrenching those social inequalities and potentially creating genetic inequalities that didn’t exist before. Hopefully, these parents will just be funding the snake-oil phase of the process so that if we can ever generate enough data to make polygenic scores actually reliable at predicting anything meaningful, they will be inexpensive enough to be accessible to anyone who wants them.

Microsoft gaming chief Phil Spencer steps down after 38 years with company

Microsoft Executive Vice President for Gaming Phil Spencer announced he will retire after 38 years at Microsoft and 12 years leading the company’s video game efforts. Asha Sharma, an executive currently in charge of Microsoft’s CoreAI division, will take his place.

Xbox President Sarah Bond, who many assumed was being groomed as Spencer’s eventual replacement, is also resigning from the company. Current Xbox Studios Head Matt Booty, meanwhile, is being promoted to Executive Vice President and Chief Content Officer and will work closely with Sharma.

In his departure note, Spencer said he told Microsoft CEO Satya Nadella last fall that he was “thinking about stepping back and starting the next chapter of my life.” Spencer will remain at Microsoft “in an advisory role” through the summer to help Sharma during the transition, he wrote.

Spencer, who got his start at Microsoft as an intern in 1988, moved into management at Microsoft Game Studios in 2003. In 2014, he took over as Head of Xbox, guiding the company through the aftermath of the troubled, Kinect-bundled launch of the Xbox One. More recently, he helped shepherd the company’s 2020 purchase of Bethesda Softworks and its $68.7 billion merger with Activision Blizzard, including the many regulatory battles that followed that latter announcement.

Meet the new boss

Sharma, who joined Microsoft just two years ago after stints at Meta and Instacart, promised in an introductory message to preside over “the return of Xbox,” and a “recommit[ment] to our core fans and players.” That commitment would “start with console which has shaped who we are,” but expand “across PC, mobile, and cloud,” Sharma wrote.

Wikipedia blacklists Archive.today, starts removing 695,000 archive links

The English-language edition of Wikipedia is blacklisting Archive.today after the controversial archive site was used to direct a distributed denial of service (DDoS) attack against a blog.

In the course of discussing whether Archive.today should be deprecated because of the DDoS, Wikipedia editors discovered that the archive site altered snapshots of webpages to insert the name of the blogger who was targeted by the DDoS. The alterations were apparently fueled by a grudge against the blogger over a post that described how the Archive.today maintainer hid their identity behind several aliases.

“There is consensus to immediately deprecate archive.today, and, as soon as practicable, add it to the spam blacklist (or create an edit filter that blocks adding new links), and remove all links to it,” stated an update today on Wikipedia’s Archive.today discussion. “There is a strong consensus that Wikipedia should not direct its readers towards a website that hijacks users’ computers to run a DDoS attack (see WP:ELNO#3). Additionally, evidence has been presented that archive.today’s operators have altered the content of archived pages, rendering it unreliable.”

More than 695,000 links to Archive.today are distributed across 400,000 or so Wikipedia pages. The archive site is commonly used to bypass news paywalls, and the FBI has sought information on the site operator’s identity with a subpoena to domain registrar Tucows.

“Those in favor of maintaining the status quo rested their arguments primarily on the utility of archive.today for verifiability,” said today’s Wikipedia update. “However, an analysis of existing links has shown that most of its uses can be replaced. Several editors started to work out implementation details during this RfC [request for comment] and the community should figure out how to efficiently remove links to archive.today.”

Editors urged to remove links

Guidance published as a result of the decision asked editors to help remove and replace links to the following domain names used by the archive site: archive.today, archive.is, archive.ph, archive.fo, archive.li, archive.md, and archive.vn. The guidance says editors can remove Archive.today links when the original source is still online and has identical content; replace the archive link so it points to a different archive site, like the Internet Archive, Ghostarchive, or Megalodon; or “change the original source to something that doesn’t need an archive (e.g., a source that was printed on paper), or for which a link to an archive is only a matter of convenience.”
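
The mechanical part of that cleanup is finding the links in the first place. A rough sketch of the detection step follows; the wikitext handling is deliberately simplified, and actual Wikipedia cleanup bots are built on richer tooling such as Pywikibot, with per-link judgment about which replacement strategy to apply.

```python
# Rough sketch of scanning wikitext for links to the blacklisted
# archive.today domains. Real cleanup bots must then decide, link by
# link, whether to drop the archive URL, swap in another archive,
# or change the underlying source.

import re

ARCHIVE_DOMAINS = [
    "archive.today", "archive.is", "archive.ph", "archive.fo",
    "archive.li", "archive.md", "archive.vn",
]

pattern = re.compile(
    r"https?://(?:www\.)?(?:"
    + "|".join(re.escape(d) for d in ARCHIVE_DOMAINS)
    + r")/\S*"
)

wikitext = (
    "<ref>{{cite web |url=https://example.com/story "
    "|archive-url=https://archive.ph/AbCdE |title=Story}}</ref>"
)
print(pattern.findall(wikitext))  # ['https://archive.ph/AbCdE']
```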

Why Final Fantasy is now targeting PC as its “lead platform”

For a long time now, PC gamers have been used to the Final Fantasy series treating their platform as somewhat secondary to the core console versions. There are some signs that may be starting to change, though, as director Naoki Hamaguchi has confirmed that the PC is now the “lead platform” for development of the Final Fantasy VII Remake trilogy.

In a recent interview with Automaton, Hamaguchi clarified that the team takes the relatively common practice of creating visual assets for its multiplatform games by targeting “high-end environments first,” then performing a “reduction” for less powerful platforms. These days, that means “our 3D assets are created at the highest quality level based on PC as the foundation,” he said. Players have already noticed this graphical difference in the PC version of Final Fantasy VII Rebirth, Hamaguchi said, and “our philosophy will not change for the third installment.”

While PC gaming is only “gradually expanding in Japan,” Hamaguchi said the rapid growth in international PC gamers has led the company to “develop assets with the broad PC market in mind.”

The PC versions of recent Final Fantasy VII Remake games have sold well on Steam and the Epic Games Store, he added.

It’s unclear if that means PC gamers will have to wait longer than console owners for future Final Fantasy games. The first Final Fantasy VII Remake didn’t hit PCs until 19 months after the PlayStation 4 version, and Rebirth was first available on PC 11 months after its PS5 launch. Elsewhere in the franchise, the PC versions of both Final Fantasy XVI and Final Fantasy XV didn’t hit until over a year after their console counterparts.

Rubik’s WOWCube adds complexity, possibility by reinventing the puzzle cube


Technology is a double-edged sword in the $399 Rubik’s Cube-inspired toy.

There’s something special about a gadget that “just works.” Technology can open up new possibilities for such devices, but it can also complicate and weigh down products that have done just fine without things like sensors and software.

So when a product like the beloved Rubik’s Cube gets stuffed with wires, processors, and rechargeable batteries, there’s demand for it to be not just on par with the original but markedly better.

The Cubios Rubik’s WOWCube successfully breathes fresh life into the classic puzzle, but it’s also an example of when too much technology can cannibalize a gadget’s main appeal.

The WOWCube showing off one of its screensavers.

Credit: Scharon Harding

The WOWCube is a modern take on the Rubik’s Cube, an experiment from Hungarian architecture professor Ernő Rubik. Rubik aimed to make a structure composed of eight cubes that could move independently without the structure collapsing. The Rubik’s Cube became a widely distributed toy, an ’80s craze, and, eventually, a puzzle icon.

The Rubik’s Cube did all that without electronics and with a current MSRP of $10. The WOWCube takes the opposite approach. It’s $399 (as of this writing) and ditches the traditional 3×3 grid in favor of a 2×2 grid that can still do the traditional Rubik’s puzzle (albeit on a smaller scale) and perform a host of other tricks, including playing other games and telling the weather.

A smaller puzzle

The WOWCube’s 2×2 grid will disappoint hardcore puzzlers. There’s no way to play the traditional 3×3 version or even harder modified versions of the 2×2 grid. With only 24 squares, compared to the traditional 54, solving the WOWCube is significantly easier than solving a standard Rubik’s Cube, though skilled players might enjoy the challenge of trying to solve it extra rapidly.

For people who are awful at the original Rubik’s Cube, like this author, a more accessible version of the puzzle is welcome. Solving the new Rubik’s Cube feels more attainable and less frustrating.

The WOWCube is made up of eight modules, each with its own PCB, processor, gyroscope, and accelerometer. That may explain why Cubios went with this smaller design. It also raises the question of whether electronics really improve the Rubik’s Cube.

Games and other apps

Once I played some of the WOWCube’s other games, I saw the advantage of the smaller grid. The 2×2 layout is more appropriate for games like White Rabbit, which is like Pac-Man but relies on tilting and twisting the cube, or Ladybug, where you twist the cube to create a path for a perpetually crawling ladybug. A central module might add unneeded complexity and space to these games and other WOWCube apps, like Pixel World, which is like a Rubik’s Cube puzzle but with images depicting global landmarks, or the WOWCube implementation of Gabriele Cirulli’s puzzle game, 2048.

At the time of writing, the WOWCube has 15 “games,” including the Rubik’s Cube puzzle. Most of the games are free, but some, such as Space Invaders Cubed ($30) and Sunny Side Up ($5), cost money.

Unlike the original Rubik’s Cube, which is content to live on your shelf until you need a brain exercise or go on a road trip, the WOWCube craves attention with dozens of colorful screens, sound effects, and efforts to be more than a toy.

With its Widgets app open, the cube can display information, like the time, temperature, and alerts, from a limited selection of messaging apps. More advanced actions, like checking the temperature for tomorrow or opening a WhatsApp message, are unavailable. There’s room for improvement, but further development, perhaps around features like an alarm clock or reminders, could turn the WOWCube into a helpful desk companion.

Technology overload

The new technology makes the Rubik’s Cube more versatile, exciting, and useful while bringing the toy back into the spotlight; at times, though, it also adds complexity to a simple, beloved concept.

Usually, to open an app, make a selection, or otherwise input yes, you “knock” twice on the side of the WOWCube. You also have to shake the cube three times to exit an app, and you can’t open an app while another app is open. Being able to tap an icon or press an actual button would make tasks like opening apps or controlling volume and brightness levels easier. On a couple of occasions, my device got buggy and inadvertently turned off some, but not all, of its screens. The reliance on a battery and a charging dock that plugs into a wall presents limitations, too.
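
Gesture inputs like these are typically implemented by thresholding accelerometer readings and counting spikes within a short time window. The sketch below shows that general pattern; the thresholds, window, and structure are guesses for illustration, not Cubios’ actual firmware logic.

```python
# Abstract sketch of "knock twice" / "shake three times" detection:
# count accelerometer spikes above a threshold within a time window.
# All thresholds and timings are invented for illustration.

def detect_gesture(samples, threshold, needed, window):
    """samples: list of (timestamp_sec, acceleration_g) readings."""
    spikes = [t for t, g in samples if g > threshold]
    for i in range(len(spikes) - needed + 1):
        if spikes[i + needed - 1] - spikes[i] <= window:
            return True
    return False

# Two sharp taps 0.3 seconds apart register as a double knock.
knocks = [(0.00, 0.1), (0.10, 2.5), (0.15, 0.2), (0.40, 2.7)]
print(detect_gesture(knocks, threshold=2.0, needed=2, window=1.0))  # True
```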

The WOWCube showing its main menu while sitting next to its charging dock.

Credit: Scharon Harding

The WOWCube’s makers brag about the device’s octads of speakers, processors, accelerometers, and gyroscopes, but I found the tilting mechanism unreliable and, at times, frustrating for tasks like highlighting an icon. Perhaps I don’t hold the WOWCube at the angles its creators intended. There were also times when the image was upside down, or when key information was displayed on a side of the cube facing away from me.

One of my favorite features: WOWCube’s pomodoro-like timer app.

Credit: Scharon Harding

The WOWCube has its own iOS and Android app, WOWCube Connect, which lets you connect the toy to your phone via Bluetooth and download new apps to the device via the dock’s Wi-Fi connection. You can also use the app to customize things like widgets, screensavers, and display brightness. If you don’t want to do any of those things, you can disconnect the WOWCube from your phone and reconnect it only when you want to.

I wasn’t able to use the iOS app unless I agreed to allow the “app to track activity.” This gives me privacy concerns, and I’ve reached out to Cubios to ask if there’s a way to use the app without the company tracking your activity.

New-age Rubik’s Cube

Cubios attempted to reinvent a classic puzzle with the WOWCube. In the process, it added bells and whistles that detract from what originally made Rubik’s Cubes great.

The actual Rubik’s Cube puzzle is scaled back, and the idea of spending hours playing with the cube is hindered by its finite battery life (the WOWCube lasts up to five hours under constant play, Cubios claims). The device’s reliance on sensors and chips doesn’t always yield a predictable user experience, especially when navigating apps. And all of its tech makes the puzzle about 40 times pricier than the classic toy.

IPS screens, integrated speakers, and app integration add more possibilities, but some might argue that the Rubik’s Cube was sufficient without them. Notably, the WOWCube began as its own product and got the rights to use Rubik’s branding in 2024.

We’ve seen technology come for the Rubik’s Cube before. The Rubik’s Revolution we tested years ago had pressure-sensitive, LED-lit buttons for faces. In 2020, Rubik’s Connected came out with its own companion app. Clearly, there’s interest in bringing the Rubik’s Cube into the 21st century. For those who believe in that mission, the WOWCube is a fascinating new chapter for the puzzle.

I applaud Cubios’ efforts to bring the Rubik’s Cube new relevance and remain intrigued by the potential of new software-driven puzzles and uses. But it’s hard to overlook the downsides of its reliance on tech.

And the WOWCube could never replace the classic.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.

Diablo II’s new Warlock is a great excuse to revisit a classic game

Of the Warlock’s three Demonic partner options, I found myself leaning most on the Tainted, which can stay out of harm’s way while harassing slower enemies from afar with fireballs. The other Demon options both had their charms but often got too caught up in massive enemy swarms to be as effective as I wanted. I also didn’t see much point in the skill option that let me teleport my demon into a specific fight or have it sacrifice itself for some splash damage; the demons’ standard, AI-controlled attack patterns were usually sufficient.

Then there’s the Chaos upgrade branch, which is focused mostly on area-of-effect (AoE) spells. My build thus far has ended up pretty reliant on the direct-damage AoE options; the Flame Wave, in particular, is especially good for quickly clearing out long, narrow corridors. I also leaned on the Sigil of Lethargy, which effectively slows down some of the more frenetic enemy swarms and gives you some time to gather your attack plan.

Something borrowed, something blue…

Combining these Chaos skills with the weapon-improving options in the Eldritch branch has made my time with the Diablo II Warlock feel like a bit of a “best of both worlds” situation. The mixture of ranged combat options, area-of-effect magic, and ally-summoning abilities ends up feeling like a weird cross between a Sorceress, Amazon, and Necromancer, without feeling like a carbon copy of any of those classes.

I haven’t yet gotten to the new late-game content in the “Reign of the Warlock” DLC, so I can’t say how well the Warlock holds up in the extreme difficulty of the Terror Zones. I also haven’t experimented with any of the truly broken Warlock builds that some committed high-level min-maxxers have been busy discovering.

As a casual excuse to revisit the world of Diablo II, though, the Warlock class provides just enough of a new twist on some familiar gameplay mechanics to make it worth the trip.
