Author name: Rejus Almole

Z-Wave Long Range and its mile-long capabilities will arrive next year

Z-Wave can be a very robust automation network, free from the complications and fragility of Wi-Fi and Bluetooth. Just how robust, you ask? Robust enough to span more than a mile, under the right circumstances, according to hardware soon to hit the market.

All claims of radio distances should be taken with amounts of salt unhealthy for consumption. What can be accomplished across an empty field is not the same as what can be done through buildings, interference, and scatter. But Z-Wave Long Range (or Z-Wave LR), operating “in long range mode at full power,” can hit 1.5 miles, according to the Z-Wave Alliance, presuming you’ve got the right star-shaped hub network.

By using a star network topology instead of a more traditional mesh, Z-Wave LR eliminates the need for repeaters, relying instead on a single central hub. It can be more reliable for larger commercial spaces, security setups, and bigger homes, and it is also more power efficient. Devices automatically adjust their signal strength while on Z-Wave LR networks, extending the battery life of a single coin cell up to 10 years—again, under best-case circumstances. If you’re really a glutton for punishment, you can fit up to 4,000 devices on a network running Z-Wave LR, because LR can coexist on the same network as standard Z-Wave meshes.
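That 10-year coin-cell claim implies a very tight power budget, which is why automatic transmit-power adjustment matters. A rough back-of-envelope sketch, assuming a CR2032 cell and full usable capacity (my assumptions, not the Z-Wave Alliance's figures):

```python
# Rough current budget behind the 10-year coin cell claim.
# Assumptions (not from the Z-Wave Alliance): a CR2032 cell with
# ~225 mAh of capacity, all of it usable at a constant average draw.
CELL_CAPACITY_MAH = 225
YEARS = 10

hours = YEARS * 365 * 24                           # 87,600 hours
avg_current_ua = CELL_CAPACITY_MAH * 1000 / hours  # microamps

print(f"Average draw must stay under ~{avg_current_ua:.2f} uA")
```

A sensor that sleeps most of the time and wakes briefly to transmit at the lowest power that still reaches the hub can plausibly stay under that ceiling; one transmitting at full LR power constantly could not.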

Nvidia’s new app is causing large frame rate dips in many games

When Nvidia replaced the longstanding GeForce Experience App with a new, unified Nvidia App last month, most GPU owners probably noted the refresh and rebranding with nothing more than bemusement (though the new lack of an account login requirement was a nice improvement). Now, testing shows that running the new app with default settings can lead to some significant frame rate dips on many high-end games, even when the app’s advanced AI features aren’t being actively used.

Tom’s Hardware noted the performance dip after reading reports of related problems around the web. The site’s testing with and without the Nvidia App installed confirms that, across five games running on an RTX 4060, the app reduced average frame rates by around 3 to 6 percent, depending on the resolution and graphical quality level.

The site’s measured frame rate drop peaked at 12 percent for Assassin’s Creed Mirage running at 1080p Ultra settings; other tested games (including Baldur’s Gate 3, Black Myth: Wukong, Flight Simulator 2024, and Stalker 2) showed a smaller drop at most settings.
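To put those percentages in concrete terms, here is a quick sketch; the 120 fps baseline is an assumed round number for illustration, not a figure from Tom's Hardware's data:

```python
# Translate relative frame rate drops into absolute fps at an assumed
# 120 fps baseline (illustrative only; not measured data).
baseline_fps = 120.0

for drop_pct in (3, 6, 12):
    reduced = baseline_fps * (1 - drop_pct / 100)
    print(f"{drop_pct:>2}% drop: {baseline_fps:.0f} fps -> {reduced:.1f} fps")
```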

Unfiltered

This is a significant performance impact for an app that simply runs quietly in the background for most users. The impact is roughly comparable to that of going from a top-of-the-line RTX 4070 Ti Super to an older RTX 4070 Ti or 4070 Super, based on our earlier testing of those cards.

Why do we get headaches from drinking red wine?

Putting enzymes to the test

Testing ALDH was the next step. We set up an inhibition assay in test tubes. In the assay, we measured how fast the enzyme ALDH breaks down acetaldehyde. Then, we added the suspected inhibitors—quercetin, as well as some other phenolics we wanted to test—to see whether they slowed the process.
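The logic of an inhibition assay like this can be sketched in code. The toy model below uses Michaelis-Menten kinetics with a competitive inhibitor; the inhibition mode and every constant are illustrative assumptions, not measured properties of ALDH or quercetin:

```python
# Toy Michaelis-Menten model of an enzyme inhibition assay: compute the
# reaction rate with and without an inhibitor and compare.
# All constants are invented for illustration.

def rate(substrate, vmax, km, inhibitor=0.0, ki=1.0):
    """Reaction rate; a competitive inhibitor raises the apparent Km."""
    km_apparent = km * (1 + inhibitor / ki)
    return vmax * substrate / (km_apparent + substrate)

# Same substrate (a stand-in for acetaldehyde) and enzyme, with and
# without the candidate inhibitor present.
v_control = rate(substrate=5.0, vmax=10.0, km=2.0)
v_inhibited = rate(substrate=5.0, vmax=10.0, km=2.0, inhibitor=4.0, ki=1.0)

print(f"control: {v_control:.2f}, inhibited: {v_inhibited:.2f}")
```

A compound that pushes the measured rate well below the control, as quercetin glucuronide did in our assay, gets flagged as an inhibitor.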

The chemical structure of quercetin, which may cause red wine headaches. Credit: Johannes Botne (CC BY-SA)

These tests confirmed that quercetin was a good inhibitor. Some other phenolics had varying effects, but quercetin glucuronide was the winner. When your body absorbs quercetin from food or wine, the liver converts most of it to the glucuronide form so it can be eliminated from the body quickly.

Our enzyme tests suggest that quercetin glucuronide disrupts your body’s metabolism of alcohol. This disruption means extra acetaldehyde circulates, causing inflammation and headaches. This discovery points to what’s known as a secondary, or synergistic, effect.

These secondary effects are much harder to identify because two factors must both be in play for the outcome to arise. In this case, other foods that contain quercetin are not associated with headaches, so you might not initially consider quercetin as the cause of the red wine problem.

The next step could be to give human subjects two red wines that are low and high in quercetin and ask whether either wine causes a headache. If the high-quercetin wine induces more headaches, we’d know we’re on the right track.

So, if quercetin causes headaches, are there red wines without it? Unfortunately, the data available on specific wines is far too limited to provide any helpful advice. However, grapes exposed to the Sun do produce more quercetin, and many inexpensive red wines are made from grapes that see less sunlight.

If you’re willing to take a chance, look for an inexpensive, lighter red wine.

Andrew Waterhouse is professor of enology, University of California, Davis, and Apramita Devi is a postdoctoral researcher in food science and technology, University of California, Davis.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Werner Herzog muses on mysteries of the brain in Theater of Thought

That mind is partly revealed through Herzog’s running narration, such as when he muses about collective behavior and whether fish have souls—a digression sparked by his interview with Siri co-inventor Tom Gruber. “In the background, I saw his TV screen still on, we didn’t switch it off, and I saw some very, very strange school of fish,” said Herzog. “I asked him about the school of fish, which he had filmed himself. And all of a sudden, I’m only interested in the fish and common behavior. Why do they behave in big schools, in unison? Why do they do that? Do they dream? And if they think, what are they thinking about? I immerse the audience into a very strange form of underwater landscape and behavior of fish.”

Werner Herzog’s inspiration for Theater of Thought arose from conversations with Columbia University neuroscientist Rafael Yuste, who served as science advisor on the film. Argot Pictures

We glimpse the inner workings of Herzog’s mind in the kinds of questions he asks his subjects, such as when he queries IBM’s Dario Gil, who works on quantum computing, about his passion for fishing, eliciting an enthusiastic smile in response. He agrees to interview University of Washington neuroscientist Christof Koch after Koch’s early-morning row on the Puget Sound and includes music from New York University neuroscientist Joseph LeDoux‘s band, the Amygdaloids, in the film’s soundtrack. He asks married scientists Cori Bargmann and Richard Axel about music, their dinner conversations, and the linguistic capabilities of parrots. In so doing, he brings out their innate humanity, not just their scientific expertise.

“That’s what I do. If you don’t have it in you, you shouldn’t be a filmmaker,” said Herzog. “But you see, also, the joy of getting into all of this and the joy of meeting these scientists. We are talking about speaking parrots. What if two parrots learned a language that is already extinct and they would speak to each other? What would we make of it? So I’m asking, spontaneously, because I saw it, I sensed it, there was something I should depart completely from scientific quests. And yet there’s a deep scientific background to it.”

Elon Musk slams SEC as agency threatens charges in Twitter stock probe

An SEC spokesperson told Ars today that the commission’s policy is “to conduct investigations on a confidential basis to preserve the integrity of its investigative process. The SEC therefore does not comment on the existence or nonexistence of a possible investigation.”

A Reuters source confirmed the settlement offer. “The SEC sent Musk a settlement offer on Tuesday seeking a response in 48 hours, but extended it to Monday after a request for more time, the source said,” according to a Reuters article today.

The settlement offer was also confirmed by a source who spoke to The Washington Post. “One person familiar with the probe, who spoke on the condition of anonymity to describe a confidential law enforcement proceeding, confirmed that Musk had been sent a settlement offer in recent days,” the Post wrote last night. “But the person said they believed the tech billionaire had actually been given until Monday to evaluate the offer—adding that rejecting a settlement still would not immediately trigger charges by the SEC, which typically sends formal notices before such cases are brought.”

Musk has had several legal battles with the SEC. In 2018, he and Tesla each agreed to $20 million payments in a settlement over the SEC’s complaint that “Musk’s misleading tweets” about taking Tesla private caused the stock price to jump “and led to significant market disruption.” He has tried and failed to get out of that settlement, claiming that he was “forced” into signing the deal and that the SEC used the 2018 consent decree to “micro-manage” his social media activity.

Musk to have influence in Trump admin

Musk won’t have to worry as much about government regulation once Trump takes over. Trump picked Musk to lead a new Department of Government Efficiency, or “DOGE,” which will make recommendations for eliminating regulations, cutting expenses, and restructuring federal agencies.

As Reuters wrote today, Musk “is set to gain extraordinary influence after spending more than a quarter of a billion dollars to help Donald Trump win November’s presidential election. His companies are expected to be well insulated from regulation and enforcement measures.”

The SEC’s November announcement of Chair Gary Gensler’s planned departure from the agency touted his work to adopt “several rules to ensure that investors get the disclosure they need from public companies and companies seeking to go public.”

Trump chose Paul Atkins to replace Gensler as SEC chair, calling Atkins an advocate “for common sense regulations.” Atkins, a former SEC commissioner who founded the Patomak Global Partners consultancy firm, testified to Congress in 2019 that the SEC should reduce its disclosure requirements.

The US military is now talking openly about going on the attack in space

Mastalir said China is “copying the US playbook” with the way it integrates satellites into more conventional military operations on land, in the air, and at sea. “Their specific goals are to be able to track and target US high-value assets at the time and place of their choosing,” Mastalir said.

China’s strategy, known as Anti-Access/Area Denial, or A2AD, is centered on preventing US forces from accessing international waters extending hundreds or thousands of miles from mainland China. Some of the islands occupied by China within the last 15 years are closer to the Philippines, another treaty ally, than to China itself.

The A2AD strategy first “extended to the first island chain (bounded by the Philippines), and now the second island chain (extending to the US territory of Guam), and eventually all the way to the West Coast of California,” Mastalir said.

US officials say China has based anti-ship, anti-air, and anti-ballistic weapons in the region, and many of these systems rely on satellite tracking and targeting. Mastalir said his priority at Indo-Pacific Command, headquartered in Hawaii, is to defend US and allied satellites, or “blue assets,” and challenge “red assets” to break the Chinese military’s “long-range kill chains and protect the joint force from space-enabled attack.”

What this means is the Space Force wants to have the ability to disable or destroy the satellites China would use to provide communication, command, tracking, navigation, or surveillance support during an attack against the US or its allies.

Buildings and structures are seen on October 25, 2022, on an artificial island built by China on Subi Reef in the Spratly Islands of the South China Sea. China has progressively asserted its claim of ownership over disputed islands in the region. Credit: Ezra Acayan/Getty Images

Mastalir said he believes China’s space-based capabilities are “sufficient” to achieve the country’s military ambitions, whatever they are. “The sophistication of their sensors is certainly continuing to increase—the interconnectedness, the interoperability. They’re a pacing challenge for a reason,” he said.

“We’re seeing all signs point to being able to target US aircraft carriers… high-value assets in the air like tankers, AWACS (Airborne Warning And Control System),” Mastalir said. “This is a strategy to keep the US from intervening, and that’s what their space architecture is.”

That’s not acceptable to Pentagon officials, so Space Force personnel are now training for orbital warfare. Just don’t expect to know the specifics of any of these weapons systems any time soon.

“The details of that? No, you’re not going to get that from any war-fighting organization—’let me tell you precisely how I intend to attack an adversary so that they can respond and counter that’—those aren’t discussions we’re going to have,” Saltzman said. “We’re still going to protect some of those (details), but broadly, from an operational concept, we are going to be ready to contest space.”

A new administration

The Space Force will likely receive new policy directives after President-elect Donald Trump takes office in January. The Trump transition team hasn’t identified any changes coming for the Space Force, but a list of policy proposals known as Project 2025 may offer some clues.

Published by the Heritage Foundation, a conservative think tank, Project 2025 calls for the Pentagon to pivot the Space Force from a mostly defensive posture toward offensive weapons systems. Christopher Miller, who served as acting secretary of defense in the first Trump administration, authored the military section of Project 2025.

Miller wrote that the Space Force should “reestablish offensive capabilities to guarantee a favorable balance of forces, efficiently manage the full deterrence spectrum, and seriously complicate enemy calculations of a successful first strike against US space assets.”

Trump disavowed Project 2025 during the campaign, but since the election, he has nominated several of the policy agenda’s authors and contributors to key administration posts.

Saltzman met with Trump last month while attending a launch of SpaceX’s Starship rocket in Texas, but he said the encounter was incidental. Saltzman was already there for discussions with SpaceX officials, and Trump’s travel plans only became known the day before the launch.

The conversation with Trump at the Starship launch didn’t touch on any policy details, according to Saltzman. He added that the Space Force hasn’t yet had any formal discussions with the Trump transition team.

Regardless of the direction Trump takes with the Space Force, Saltzman said the service is already thinking about what to do to maintain what the Pentagon now calls “space superiority”—a twist on the term air superiority, which might have seemed equally as fanciful at the dawn of military aviation more than a century ago.

“That’s the reason we’re the Space Force,” Saltzman said. “So administration to administration, that’s still going to be true. Now, it’s just about resourcing and the discussions about what we want to do and when we want to do it, and we’re ready to have those discussions.”

Twirling body horror in gymnastics video exposes AI’s flaws


The slithy toves did gyre and gimble in the wabe

Nonsensical jabberwocky movements created by OpenAI’s Sora are typical for current AI-generated video, and here’s why.

A still image from an AI-generated video of an ever-morphing synthetic gymnast. Credit: OpenAI / Deedy

On Wednesday, a video from OpenAI’s newly launched Sora AI video generator went viral on social media, featuring a gymnast who sprouts extra limbs and briefly loses her head during what appears to be an Olympic-style floor routine.

As it turns out, the nonsensical synthesis errors in the video—what we like to call “jabberwockies”—hint at technical details about how AI video generators work and how they might get better in the future.

But before we dig into the details, let’s take a look at the video.

An AI-generated video of an impossible gymnast, created with OpenAI Sora.

In the video, we see a view of what looks like a floor gymnastics routine. The subject of the video flips and flails as new legs and arms rapidly and fluidly emerge and morph out of her twirling and transforming body. At one point, about 9 seconds in, she loses her head, and it reattaches to her body spontaneously.

“As cool as the new Sora is, gymnastics is still very much the Turing test for AI video,” wrote venture capitalist Deedy Das when he originally shared the video on X. The video inspired plenty of reaction jokes, such as this reply to a similar post on Bluesky: “hi, gymnastics expert here! this is not funny, gymnasts only do this when they’re in extreme distress.”

We reached out to Das, and he confirmed that he generated the video using Sora. He also provided the prompt, which was very long and split into four parts, generated by Anthropic’s Claude, using complex instructions like “The gymnast initiates from the back right corner, taking position with her right foot pointed behind in B-plus stance.”

“I’ve known for the last 6 months having played with text to video models that they struggle with complex physics movements like gymnastics,” Das told us in a conversation. “I had to try it [in Sora] because the character consistency seemed improved. Overall, it was an improvement because previously… the gymnast would just teleport away or change their outfit mid flip, but overall it still looks downright horrifying. We hoped AI video would learn physics by default, but that hasn’t happened yet!”

So what went wrong?

When examining how the video fails, you must first consider how Sora “knows” how to create anything that resembles a gymnastics routine. During the training phase, when the Sora model was created, OpenAI fed example videos of gymnastics routines (among many other types of videos) into a specialized neural network that associates the progression of images with text-based descriptions of them.

That type of training is a distinct phase that happens once before the model’s release. Later, when the finished model is running and you give a video-synthesis model like Sora a written prompt, it draws upon statistical associations between words and images to produce a predictive output. It’s continuously making next-frame predictions based on the last frame of the video. But Sora has another trick for attempting to preserve coherency over time. “By giving the model foresight of many frames at a time,” reads OpenAI’s Sora System Card, “we’ve solved a challenging problem of making sure a subject stays the same even when it goes out of view temporarily.”

A still image from a moment where the AI-generated gymnast loses her head. It soon reattaches to her body. Credit: OpenAI / Deedy

Maybe not quite solved yet. In this case, rapidly moving limbs prove a particular challenge when attempting to predict the next frame properly. The result is an incoherent amalgam of gymnastics footage that shows the same gymnast performing running flips and spins, but Sora doesn’t know the correct order in which to assemble them. It’s drawing on statistical averages of wildly different body movements in its relatively limited training data of gymnastics videos, which also likely did not include limb-level precision in its descriptive metadata.

Sora doesn’t know anything about physics or how the human body should work, either. It’s drawing upon statistical associations between pixels in the videos in its training dataset to predict the next frame, with a little bit of look-ahead to keep things more consistent.
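That prediction loop can be caricatured in a few lines. This is emphatically not Sora’s architecture (real video models work on latent spacetime patches with a diffusion transformer); it’s a toy autoregressive generator where a “frame” is a list of numbers and the “model” is just a weighted blend over a context window of recent frames:

```python
# Toy autoregressive frame generator with a multi-frame context window.
# Illustrates next-frame prediction plus look-ahead context; bears no
# resemblance to a real video model's internals.

def predict_next_frame(frames, weights):
    """Blend the last len(weights) frames, pixel by pixel."""
    recent = frames[-len(weights):]
    total = sum(weights)
    return [
        sum(w * frame[i] for w, frame in zip(weights, recent)) / total
        for i in range(len(recent[0]))
    ]

def generate(seed_frames, weights, n_new):
    frames = list(seed_frames)
    for _ in range(n_new):
        frames.append(predict_next_frame(frames, weights))
    return frames

# Two seed "frames" of one pixel each. A wider context window damps
# sudden jumps between frames, which is roughly why multi-frame
# foresight helps keep a subject stable over time.
clip = generate([[0.0], [1.0]], weights=[1.0, 1.0], n_new=3)
print(clip)
```

Even in this caricature, the generator only smooths what it has seen; nothing in it knows that a gymnast has exactly two arms, which is the gap the article describes.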

This problem is not unique to Sora. All AI video generators can produce wildly nonsensical results when your prompts reach too far past their training data, as we saw earlier this year when testing Runway’s Gen-3. In fact, we ran some gymnast prompts through the latest open source AI video model that may rival Sora in some ways, Hunyuan Video, and it produced similar twirling, morphing results, seen below. And we used a much simpler prompt than Das did with Sora.

An example from open source Chinese AI model Hunyuan Video with the prompt, “A young woman doing a complex floor gymnastics routine at the olympics, featuring running and flips.”

AI models based on transformer technology are fundamentally imitative in nature. They’re great at transforming one type of data into another type or morphing one style into another. What they’re not great at (yet) is producing coherent generations that are truly original. So if you happen to provide a prompt that closely matches a training video, you might get a good result. Otherwise, you may get madness.

As we wrote about image-synthesis model Stable Diffusion 3’s body horror generations earlier this year, “Basically, any time a user prompt homes in on a concept that isn’t represented well in the AI model’s training dataset, the image-synthesis model will confabulate its best interpretation of what the user is asking for. And sometimes that can be completely terrifying.”

For the engineers who make these models, success in AI video generation quickly becomes a question of how many examples (and how much training) you need before the model can generalize enough to produce convincing and coherent results. It’s also a question of metadata quality—how accurately the videos are labeled. In this case, OpenAI used an AI vision model to describe its training videos, which helped improve quality, but apparently not enough—yet.

We’re looking at an AI jabberwocky in action

In a way, the type of generation failure in the gymnast video is a form of confabulation (or hallucination, as some call it), but it’s even worse because it’s not coherent. So instead of calling it a confabulation, which is a plausible-sounding fabrication, we’re going to lean on a new term, “jabberwocky,” which Dictionary.com defines as “a playful imitation of language consisting of invented, meaningless words; nonsense; gibberish,” taken from Lewis Carroll’s nonsense poem of the same name. Imitation and nonsense, you say? Check and check.

We’ve covered jabberwockies in AI video before with people mocking Chinese video-synthesis models, a monstrously weird AI beer commercial, and even Will Smith eating spaghetti. They’re a form of misconfabulation where an AI model completely fails to produce a plausible output. This will not be the last time we see them, either.

How could AI video models get better and avoid jabberwockies?

In our coverage of Gen-3 Alpha, we called the threshold where you get a level of useful generalization in an AI model the “illusion of understanding,” where training data and training time reach a critical mass that produces good enough results to generalize across enough novel prompts.

One of the key reasons language models like OpenAI’s GPT-4 impressed users was that they finally reached a size where they had absorbed enough information to give the appearance of genuinely understanding the world. With video synthesis, achieving this same apparent level of “understanding” will require not just massive amounts of well-labeled training data but also the computational power to process it effectively.

AI boosters hope that these current models represent one of the key steps on the way to something like truly general intelligence (often called AGI) in text, or in AI video, what OpenAI and Runway researchers call “world simulators” or “world models” that somehow encode enough physics rules about the world to produce any realistic result.

Judging by the morphing alien shoggoth gymnast, that may still be a ways off. Still, it’s early days in AI video generation, and judging by how quickly AI image-synthesis models like Midjourney progressed from crude abstract shapes into coherent imagery, it’s likely video synthesis will have a similar trajectory over time. Until then, enjoy the AI-generated jabberwocky madness.

Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

Generating power with a thin, flexible thermoelectric film

The No. 1 nuisance with smartphones and smartwatches is that we need to charge them every day. As warm-blooded creatures, however, we generate heat all the time, and that heat can be converted into electricity for some of the electronic gadgetry we carry.

Flexible thermoelectric devices, or F-TEDs, can convert thermal energy into electric power. The problem is that F-TEDs weren’t actually flexible enough to comfortably wear or efficient enough to power even a smartwatch. They were also very expensive to make.

But now, a team of Australian researchers thinks it has finally achieved a breakthrough that could get F-TEDs off the ground.

“The power generated by the flexible thermoelectric film we have created would not be enough to charge a smartphone but should be enough to keep a smartwatch going,” said Zhi-Gang Chen, a professor at Queensland University of Technology in Brisbane, Australia. Does that mean we have reached a point where it would be possible to make a thermoelectric Apple Watch band that could keep the watch charged all the time? “It would take some industrial engineering and optimization, but we can definitely achieve a smartwatch band like that,” Chen said.

Manufacturing heaven

Thermoelectric generators producing enough power to run something like an Apple Watch were, so far, made with rigid bulk materials. The obvious problem with them was that nobody would want to wear a metal slab on their wrist or run a power cable from anywhere else to their watch. Flexible thermoelectric devices, on the other hand, were perfectly wearable but offered efficiencies that made them good for low-power health-monitoring electronics rather than more power-hungry hardware like smartwatches.

Back in 2021, generating 35 microwatts per square centimeter in a wristband worn during a typical walk outside was impressive enough to land your research paper in Nature. Today, Chen and his colleagues made a flexible thermoelectric device that performed over 34 times better at room temperature. “To the best of our knowledge, we hold a current record in this field,” Chen says.
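Some rough arithmetic on those figures, with two caveats: the 34x comparison is a device-level performance claim that may not map one-to-one onto areal power density, and the band area is my assumption:

```python
# Back-of-envelope estimate of wearable power from the reported figures.
# Assumptions: the 34x improvement applies directly to the 2021 areal
# power density, and a watch band exposes ~20 cm^2 of skin contact.
BASELINE_UW_PER_CM2 = 35   # 2021 wristband result
IMPROVEMENT_FACTOR = 34    # "over 34 times better"
BAND_AREA_CM2 = 20         # assumed band area

density_uw = BASELINE_UW_PER_CM2 * IMPROVEMENT_FACTOR    # uW/cm^2
band_power_mw = density_uw * BAND_AREA_CM2 / 1000        # milliwatts

print(f"~{density_uw} uW/cm^2, ~{band_power_mw:.1f} mW from the band")
```

Tens of milliwatts is at least in the ballpark of a smartwatch’s average draw, which squares with Chen’s claim that a band could keep a watch charged, given a favorable temperature difference between skin and air.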

Russia takes unusual route to hack Starlink-connected devices in Ukraine

“Microsoft assesses that Secret Blizzard either used the Amadey malware as a service (MaaS) or accessed the Amadey command-and-control (C2) panels surreptitiously to download a PowerShell dropper on target devices,” Microsoft said. “The PowerShell dropper contained a Base64-encoded Amadey payload appended by code that invoked a request to Secret Blizzard C2 infrastructure.”

The ultimate objective was to install Tavdig, a backdoor Secret Blizzard used to conduct reconnaissance on targets of interest. The Amadey sample Microsoft uncovered collected information from device clipboards and harvested passwords from browsers. It would then go on to install a custom reconnaissance tool that was “selectively deployed to devices of further interest by the threat actor—for example, devices egressing from STARLINK IP addresses, a common signature of Ukrainian front-line military devices.”

When Secret Blizzard assessed a target was of high value, it would then install Tavdig to collect information, including “user info, netstat, and installed patches and to import registry settings into the compromised device.”

Earlier in the year, Microsoft said, company investigators observed Secret Blizzard using tools belonging to Storm-1837 to also target Ukrainian military personnel. Microsoft researchers wrote:

In January 2024, Microsoft observed a military-related device in Ukraine compromised by a Storm-1837 backdoor configured to use the Telegram API to launch a cmdlet with credentials (supplied as parameters) for an account on the file-sharing platform Mega. The cmdlet appeared to have facilitated remote connections to the account at Mega and likely invoked the download of commands or files for launch on the target device. When the Storm-1837 PowerShell backdoor launched, Microsoft noted a PowerShell dropper deployed to the device. The dropper was very similar to the one observed during the use of Amadey bots and contained two base64 encoded files containing the previously referenced Tavdig backdoor payload (rastls.dll) and the Symantec binary (kavp.exe).

As with the Amadey bot attack chain, Secret Blizzard used the Tavdig backdoor loaded into kavp.exe to conduct initial reconnaissance on the device. Secret Blizzard then used Tavdig to import a registry file, which was used to install and provide persistence for the KazuarV2 backdoor, which was subsequently observed launching on the affected device.

Although Microsoft did not directly observe the Storm-1837 PowerShell backdoor downloading the Tavdig loader, based on the temporal proximity between the execution of the Storm-1837 backdoor and the observation of the PowerShell dropper, Microsoft assesses that it is likely that the Storm-1837 backdoor was used by Secret Blizzard to deploy the Tavdig loader.

Wednesday’s post comes a week after both Microsoft and Lumen’s Black Lotus Labs reported that Secret Blizzard co-opted the tools of a Pakistan-based threat group tracked as Storm-0156 to install backdoors and collect intel on targets in South Asia. Microsoft first observed the activity in late 2022. In all, Microsoft said, Secret Blizzard has used the tools and infrastructure of at least six other threat groups in the past seven years.

iOS 18.2, macOS 15.2 updates arrive today with image and emoji generation

Apple has announced that it will be releasing the iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2 updates to the public later this afternoon, following weeks of beta testing for developers and users. As with iOS 18.1, the headlining features are new additions to Apple Intelligence, mainly the image-generation capabilities: Image Playground for general images, and “Genmoji” for making custom images in the style of Apple’s built-in Unicode-based emoji characters.

Other AI features include “Image Wand,” which will take sketched images from the Notes app and turn them into a “polished image” using context clues from other notes; and ChatGPT integration for the Writing Tools feature.

The updates also include a long list of bug fixes and security updates, for those who don’t care about Apple Intelligence. Safari gets better data importing and exporting support, an HTTPS Priority feature that “upgrades URLs to HTTPS whenever possible,” and a download status indicator for iPhones with a Dynamic Island. Mail in iOS offers to automatically sort messages to bring important ones to the top of your inbox. There are also various tweaks and improvements for the Photos, Podcasts, Voice Memos, and Stocks apps, while the Weather app in macOS can optionally display the weather in your menu bar.
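The "HTTPS Priority" behavior mentioned above boils down to rewriting a URL's scheme before the request is made. A minimal sketch of the idea (not Safari's actual implementation, which also handles fallback when the secure connection fails):

```python
from urllib.parse import urlsplit, urlunsplit

def upgrade_to_https(url: str) -> str:
    """Rewrite http:// URLs to https://, leaving other schemes untouched."""
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)
```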

iOS 18.2, macOS 15.2 updates arrive today with image and emoji generation Read More »

nasa-believes-it-understands-why-ingenuity-crashed-on-mars

NASA believes it understands why Ingenuity crashed on Mars

Eleven months after the Ingenuity helicopter made its final flight on Mars, engineers and scientists at NASA and a private company that helped build the flying vehicle said they have identified what probably caused it to crash on the surface of Mars.

In short, the helicopter’s on-board navigation sensors were unable to discern enough features in the relatively smooth surface of Mars to determine its position, so when it touched down, it did so moving horizontally. This caused the vehicle to tumble, snapping off all four of the helicopter’s blades.

Delving into the root cause

It is not easy to conduct a forensic analysis like this on Mars, which is typically about 100 million miles from Earth. Ingenuity carried no black box on board, so investigators have had to piece together their findings from limited data and imagery.

“While multiple scenarios are viable with the available data, we have one we believe is most likely: Lack of surface texture gave the navigation system too little information to work with,” said Ingenuity’s first pilot, Håvard Grip of NASA’s Jet Propulsion Laboratory, in a news release.

A team from NASA and a company that specializes in unmanned aerial vehicles, AeroVironment, started by looking at the terrain Ingenuity was flying over during its 72nd flight, on January 18 of this year. The helicopter’s navigation system tracked visual features on the surface using a downward-looking camera. During its initial flights, Ingenuity was able to discern pebbles and other features to determine its position. But nearly three years later, Ingenuity was flying in a region of Jezero Crater filled with steep, relatively featureless sand ripples.
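The failure mode is easy to demonstrate in miniature. The toy sketch below (not Ingenuity's actual algorithm, and the threshold is an assumption for illustration) counts "corner-like" pixels, points where image intensity changes strongly in both directions, which is the kind of landmark a visual tracker can lock onto. Noisy, pebbled terrain yields many; smooth one-directional ripples yield essentially none.

```python
import numpy as np

def trackable_features(image: np.ndarray, threshold: float = 0.1) -> int:
    """Count pixels with strong intensity gradients along both axes."""
    gy, gx = np.gradient(image.astype(float))
    corner_like = (np.abs(gx) > threshold) & (np.abs(gy) > threshold)
    return int(np.count_nonzero(corner_like))

rng = np.random.default_rng(0)
pebbles = rng.random((64, 64))                       # texture everywhere
x = np.linspace(0, 4 * np.pi, 64)
ripples = np.tile(np.sin(x), (64, 1)) * 0.5          # varies along one axis only
```

With no gradient at all across the ripple rows, the "corner" count collapses to zero, a crude stand-in for why the navigation system had "too little information to work with."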

NASA believes it understands why Ingenuity crashed on Mars Read More »

startup-will-brick-$800-emotional-support-robot-for-kids-without-refunds

Startup will brick $800 emotional support robot for kids without refunds

In addition to the robot being bricked, Embodied noted that warranties, repair services, the corresponding parent app and guides, and support staff will no longer be accessible.

“Unable to offer refunds”

Embodied said it is “unable” to offer most Moxie owners refunds due to its “financial situation and impending dissolution.” The potential exception is for people who bought a Moxie within 30 days. For those customers, Embodied said that “if the company or its assets are sold, we will do our best to prioritize refunds for purchases,” but it emphasized that this is not a guarantee.

Embodied also acknowledged complications for those who acquired the expensive robot through a third-party lender. Embodied advised such customers to contact their lender, but it’s possible that some will end up paying interest on a toy that no longer works.

Embodied said it’s looking for another company to buy Moxie. Should that happen, the new company will receive Embodied customer data and determine how it may use it, according to Embodied’s Terms of Service. Otherwise, Embodied said it “securely” erases user data “in accordance with our privacy policy and applicable law,” which includes deleting personally identifiable information from Embodied systems.

Another smart gadget bites the dust

Things look grim for Moxie owners, but there’s still some hope that Moxies can be resurrected; we’ve seen failed smart device companies, like Insteon, come back before. It’s also possible that someone will release an open-source version of the product, like the one made for Spotify Car Thing, which Spotify officially bricked today.

But the short-lived, expensive nature of Moxie is exactly why some groups, like right-to-repair activists, are pushing the FTC to more strongly regulate smart devices, particularly when it comes to disclosure and commitments around software support. With smart gadget makers trying to determine how to navigate challenging economic landscapes, the owners of various types of smart devices—from AeroGarden indoor gardening systems to Snoo bassinets—have had to deal with the consequences, including broken devices and paywalled features. Last month, the FTC noted that smart device manufacturers that don’t commit to software support may be breaking the law.

For Moxie owners, disappointment doesn’t just come from wasted money and e-waste creation but also from the pain of giving a child a tech “companion” to grow with and then having it suddenly taken away.

Startup will brick $800 emotional support robot for kids without refunds Read More »