Author name: Mike M.

Sony removes still-unmet “8K” promise from PS5 packaging

8K? We never said 8K! —

Move could presage an expected resolution bump in the rumored PS5 Pro.

  • The new PS5 packaging, as seen on the PlayStation Direct online store, is missing the “8K” label in the corner.

  • The original PS5 packaging with the 8K label, as still seen on the GameStop website.

When we first received our PlayStation 5 review unit from Sony in 2020, we reacted with some bemusement to the “8K” logo on the box and its implied promise of full 7680×4320 resolution output. We then promptly forgot all about it, since native 8K content and 8K-compatible TVs have remained a relative curiosity thus far in the PS5’s lifespan.

But on Wednesday, Digital Foundry’s John Linneman discovered that Sony has quietly removed that longstanding 8K label from the PS5 box. The ultra-high-resolution promise no longer appears on the packaging shown on Sony’s official PlayStation Direct store, a change that appears to have happened between late January and mid-February, according to Internet Archive captures of the store page (the old “8K” box can still be seen at other online retailers, though).

A promise deferred

This packaging change has been a long time coming since the PS5 hasn’t technically been living up to its 8K promise for years now. While Sony’s Mark Cerny mentioned the then-upcoming hardware’s 8K support in a 2019 interview, the system eventually launched with a pretty big “coming soon” caveat for that feature. “PS5 is compatible with 8K displays at launch, and after a future system software update will be able to output resolutions up to 8K when content is available, with supported software,” the company said in an FAQ surrounding the console’s 2020 launch.

Well over three years later, that 8K-enabling software update has yet to appear, meaning the console’s technical ability to push 8K graphics is still a practical impossibility for users. Until Sony’s long-promised software patch hits, even PS5 games that render frames internally at a full 8K resolution are still pushing out a downscaled 4K framebuffer through that HDMI 2.1 cable.

A slide from TV manufacturer TCL guesses at some details for the next micro-generation of high-end game consoles.

At this point, though, there’s some reason to expect that the promised patch may never come to the standard PS5. At the moment, the ever-churning rumor mill is expecting an impending mid-generation PS5 Pro upgrade that could offer true, native 8K resolution support right out of the box. If that comes to pass, removing the outdated “8K” promise from the original PS5 packaging could be a subtle way to highlight the additional power of the upcoming “Pro” upgrade.

A slight majority of participants in a double-blind study saw no discernible difference between 4K and 8K versions of the same video clips.

So will console gamers be missing out if they don’t upgrade to an 8K-compatible display? Probably not, as studies show extremely diminishing returns in the perceived quality jump from 4K to 8K visual content for most users and living room setups. Unless you are sitting extremely close to an extremely large display, it’s pretty unlikely you’ll even be able to tell the difference.

DuckDuckGo offers “anonymous” access to AI chatbots through new service

anonymous confabulations —

DDG offers LLMs from OpenAI, Anthropic, Meta, and Mistral for factually iffy conversations.

DuckDuckGo's AI Chat promotional image.

DuckDuckGo

On Thursday, DuckDuckGo unveiled a new “AI Chat” service that allows users to converse with four mid-range large language models (LLMs) from OpenAI, Anthropic, Meta, and Mistral in an interface similar to ChatGPT while attempting to preserve privacy and anonymity. While the AI models involved can readily output inaccurate information, the site allows users to test different mid-range LLMs without having to install anything or sign up for an account.

DuckDuckGo’s AI Chat currently features access to OpenAI’s GPT-3.5 Turbo, Anthropic’s Claude 3 Haiku, and two open source models, Meta’s Llama 3 and Mistral’s Mixtral 8x7B. The service is currently free to use within daily limits. Users can access AI Chat through the DuckDuckGo search engine, direct links to the site, or by using “!ai” or “!chat” shortcuts in the search field. AI Chat can also be disabled in the site’s settings for users with accounts.

According to DuckDuckGo, chats on the service are anonymized, with metadata and IP address removed to prevent tracing back to individuals. The company states that chats are not used for AI model training, citing its privacy policy and terms of use.

“We have agreements in place with all model providers to ensure that any saved chats are completely deleted by the providers within 30 days,” says DuckDuckGo, “and that none of the chats made on our platform can be used to train or improve the models.”
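
DuckDuckGo hasn’t published the code behind this relay, but the general idea it describes, an intermediary that strips identifying details before passing a prompt along to a model provider, can be sketched in a few lines. The field names below are hypothetical; this is an illustration of the concept, not DuckDuckGo’s implementation.

```python
# Illustrative sketch of an anonymizing relay; NOT DuckDuckGo's actual code.
# The field names here are hypothetical.

def anonymize(request: dict) -> dict:
    """Return only what the model provider needs: the model name and messages.

    Anything that could tie the chat to a person (IP address, user agent,
    account identifiers) is dropped before the request is relayed, so the
    provider only ever sees the relay, not the user.
    """
    return {
        "model": request["model"],
        "messages": request["messages"],
    }

if __name__ == "__main__":
    incoming = {
        "client_ip": "203.0.113.7",   # would normally identify the user
        "user_agent": "Mozilla/5.0 ...",
        "account_id": None,           # no sign-in required
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "What is a confabulation?"}],
    }
    print(anonymize(incoming))        # only 'model' and 'messages' survive
```

Note that no relay of this kind can scrub identifying details a user types into the prompt itself, which is the caveat the article returns to below.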

An example of DuckDuckGo AI Chat with GPT-3.5 answering a silly question in an inaccurate way.

Benj Edwards

However, the privacy experience is not bulletproof because, in the case of GPT-3.5 and Claude Haiku, DuckDuckGo is required to send a user’s inputs to remote servers for processing over the Internet. Given certain inputs (e.g., “Hey, GPT, my name is Bob, and I live on Main Street, and I just murdered Bill”), a user could still potentially be identified if such an extreme need arose.

While the service appears to work well for us, there’s a question about its utility. For example, while GPT-3.5 initially wowed people when it launched with ChatGPT in 2022, it also confabulated a lot—and it still does. GPT-4 was the first major LLM to get confabulations under control to the point where the bot became more reasonably useful for some tasks (though this itself is a controversial point), but that more capable model isn’t present in DuckDuckGo’s AI Chat. Also missing are similar GPT-4-level models like Claude Opus or Google’s Gemini Ultra, likely because they are far more expensive to run. DuckDuckGo says it may roll out paid plans in the future, and those may include higher daily usage limits or access to “more advanced models.”

It’s true that the other three models generally (and subjectively) surpass GPT-3.5 in capability for coding, with fewer hallucinations, but they can still make things up, too. With DuckDuckGo AI Chat as it stands, the company is left with a chatbot novelty with a decent interface and the promise that your conversations with it will remain private. But what use are fully private AI conversations if they are full of errors?

Mixtral 8x7B on DuckDuckGo AI Chat when asked about the author. Everything in red boxes is sadly incorrect, but it provides an interesting fantasy scenario. It’s a good example of an LLM plausibly filling gaps between concepts that are underrepresented in its training data, called confabulation. For the record, Llama 3 gives a more accurate answer.

Benj Edwards

As DuckDuckGo itself states in its privacy policy, “By its very nature, AI Chat generates text with limited information. As such, Outputs that appear complete or accurate because of their detail or specificity may not be. For example, AI Chat cannot dynamically retrieve information and so Outputs may be outdated. You should not rely on any Output without verifying its contents using other sources, especially for professional advice (like medical, financial, or legal advice).”

So, have fun talking to bots, but tread carefully. They’ll easily “lie” to your face because they don’t understand what they are saying and are tuned to output statistically plausible information, not factual references.

Radio telescope finds another mystery long-repeat source

File under W for WTF —

Unlike an earlier object, the new source’s pulses of radio waves are erratic.

A slowly rotating neutron star is still our best guess as to the source of the mystery signals.

Roughly a year ago, astronomers announced that they had observed an object that shouldn’t exist. Like a pulsar, it emitted regularly timed bursts of radio emissions. But unlike a pulsar, those bursts were separated by over 20 minutes. If the 22-minute gap between bursts represents the rotation period of the object, then it is rotating too slowly to produce radio emissions by any known mechanism.

Now, some of the same team (along with new collaborators) are back with the discovery of something that, if anything, is acting even more oddly. The new source of radio bursts, ASKAP J193505.1+214841.0, takes nearly an hour between bursts. And it appears to have three different settings, sometimes producing weaker bursts and sometimes skipping them entirely. While the researchers suspect that, like pulsars, this is also powered by a neutron star, it’s not even clear that it’s the same class of object as their earlier discovery.

How pulsars pulse

Contrary to the section heading, pulsars don’t actually pulse. Neutron stars can create the illusion by having magnetic poles that aren’t lined up with their rotational pole. The magnetic poles are a source of constant radio emissions but, as the neutron star rotates, the emissions from the magnetic pole sweep across space in a manner similar to the light from a rotating lighthouse. If Earth happens to be caught up in that sweep, then the neutron star will appear to blink on and off as it rotates.

The star’s rotation is also needed to generate the radio emissions themselves. If the neutron star rotates too slowly, its spinning magnetic field can no longer generate electric fields strong enough to power radio emissions. So, it’s thought that if a pulsar’s rotation slows down enough (causing its pulses to be separated by too much time), it will simply shut down, and we’ll stop observing any radio emissions from the object.
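
In the simplest textbook picture of a pulsar’s polar cap, that reasoning can be written down compactly: the voltage available to accelerate particles scales with the magnetic field strength and inversely with the square of the spin period, giving a rough “death line” below which radio emission switches off. The scaling below is a standard back-of-the-envelope relation, not something taken from the new paper, and the exact threshold is model-dependent.

```latex
% Back-of-the-envelope polar-cap scaling (not from the paper itself).
% B = surface magnetic field, P = rotation period, R = neutron star radius.
\Delta\Phi \;\propto\; \frac{B R^{3}}{c^{2} P^{2}}
\qquad\Longrightarrow\qquad
\text{radio emission requires roughly } \frac{B}{P^{2}} \gtrsim \text{const.}
```

Stretching the period from seconds out to tens of minutes cuts B/P² by many orders of magnitude, which is why such slow rotators shouldn’t be able to emit at all.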

We don’t have a clear idea of how long the time between pulses can get before a pulsar will shut down. But we do know that it’s going to be far less than 22 minutes.

Which is why the 2023 discovery was so strange. The object, GPM J1839–10, not only took a long time between pulses, but archival images showed that it had been pulsing on and off for at least 35 years.

To figure out what is going on, we really have two options. One is more and better observations of the source we know about. The second is to find other examples of similar behavior. There’s a chance we now have a second object like this, although there are enough differences that it’s not entirely clear.

An enigmatic find

The object, ASKAP J193505.1+214841.0, was discovered by accident when the Australian Square Kilometre Array Pathfinder telescope was used to perform observations in the area following the detection of a gamma-ray burst. It picked up a bright radio burst in the same field of view, but one unrelated to the gamma-ray burst. Further radio bursts showed up in later observations, as did a few far weaker bursts. A search of the telescope’s archives also spotted a weaker burst from the same location.

Checking the timing of the radio bursts, the team found that they could be explained by an object that emitted bursts every 54 minutes, with bursts lasting from 10 seconds to just under a minute. Checking additional observations, however, showed that there were often instances where a 54-minute period would not end with a radio burst, suggesting the source sometimes skipped radio emissions entirely.
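
Finding a period like that is essentially a folding exercise: take the burst arrival times, fold them modulo a trial period, and see whether they pile up at a single rotational phase. The snippet below is a generic illustration of that idea using made-up arrival times, not the team’s actual analysis pipeline.

```python
# Toy period-folding check with invented burst arrival times (in minutes since
# the first burst). A generic illustration of the technique, not the ASKAP
# team's actual analysis pipeline.

arrival_times_min = [0, 54, 162, 486, 540, 1026, 1080, 1566]

def clustering_score(times, trial_period):
    """Fold arrival times on a trial period (minutes) and measure how tightly
    the resulting phases cluster: 0.0 means every burst lands at the same
    rotational phase, larger values mean the phases are smeared out."""
    phases = sorted((t % trial_period) / trial_period for t in times)
    gaps = [b - a for a, b in zip(phases, phases[1:])]
    gaps.append(1.0 - phases[-1] + phases[0])  # gap that wraps around the cycle
    return 1.0 - max(gaps)  # small when one big empty gap remains

# Scan trial periods from 40 to 120 minutes and keep the tightest clustering.
best_period = min(range(40, 121),
                  key=lambda p: clustering_score(arrival_times_min, p))
print(best_period, clustering_score(arrival_times_min, best_period))
# -> 54 0.0 for these made-up times: every burst folds to the same phase.
```

A real search also has to deal with harmonics, measurement uncertainties, and the skipped pulses described above, but the underlying idea is the same: the best-fitting period is the one that folds every detected burst onto the same phase.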

Odder still, the photons in the strong and weak bursts appeared to have different polarizations. These differences arise from the magnetic fields present where the bursts originate, suggesting that the two types of bursts differ not only in total energy but also in the magnetic field environment of the object producing them.

So, the researchers suggest that the object has three modes: strong pulses, faint pulses, and an off mode, although they can’t rule out the off mode producing weak radio signals that are below the detection capabilities of the telescopes we’re using. Over about eight months of sporadic observations, there’s no apparent pattern to the bursts.

What is this thing?

Checks at other wavelengths indicate there’s a magnetar and a supernova remnant in the vicinity of the mystery object, but not at the same location. There’s also a nearby brown dwarf at that point in the sky, but the researchers strongly suspect that’s just a chance overlap. So, none of that tells us more about what produces these erratic bursts.

As with the earlier find, there seem to be two possible explanations for the ASKAP source. One is a neutron star that’s still managing to emit radiofrequency radiation from its poles despite rotating extremely slowly. The second is a white dwarf that has a reasonable rotation period but an unreasonably strong magnetic field.

To get at this issue, the researchers estimate the strength of the magnetic field needed to produce the larger bursts and come up with a value that’s significantly higher than any previously observed to originate on a white dwarf. So they strongly argue for the source being a neutron star. Whether that argues for the earlier source being a neutron star will depend on whether you feel that the two objects represent a single phenomenon despite their somewhat different behaviors.

In any case, we now have two of these mystery slow-repeat objects to explain. It’s possible that we’ll be able to learn more about this newer one if we can get some information as to what’s involved in its mode switching. But then we’ll have to figure out if what we learn applies to the one we discovered earlier.

Nature Astronomy, 2024. DOI: 10.1038/s41550-024-02277-w  (About DOIs).

Toyota tests liquid hydrogen-burning Corolla in another 24-hour race

yep, still at it —

The experience has taught it how to improve thermal efficiency, Toyota says.

“It got more attention than last year, and the development feels steadier, faster, and safer,” said Toyota Chairman Akio Toyoda when asked how the hydrogen-powered Corolla had improved from 2023.

Toyota

A couple of weekends ago, when most of the world’s motorsport attention was focused on Monaco and Indianapolis, Toyota Chairman Akio “Morizo” Toyoda was taking part in the Super Taikyu Fuji 24 Hours at Fuji Speedway in Japan. Automotive executives racing their own products is not exactly unheard of, but few instances have been quite as unexpected as competing in endurance races with a hydrogen-burning Corolla.

A hydrogen-powered Toyota has shown up at this race for the past few years, in fact, as the company uses the race track to learn new things about thermal efficiency, lessons it says have benefited its latest generation of internal-combustion engines, which it debuted to the public at the end of May.

With backing from its government, the Japanese auto industry has continued to explore hydrogen as an alternative vehicle energy source instead of liquid hydrocarbons or batteries. Commercially, that’s been in the form of hydrogen fuel cells, although with very little success among drivers, even in areas that have some hydrogen fueling infrastructure.

But the hydrogen powertrain in the GR Corolla uses an internal combustion engine, not a fuel cell. The project first competed in the 24-hour race at Fuji in 2021, then again with a little more success in 2022.

For 2023, there was a significant change to the car, which is now fueled by liquid hydrogen rather than gaseous hydrogen. Instead of trying to fill tanks pressurized to 70 MPa (700 bar), the hydrogen just has to be cooled to minus 253° C (minus 423° F). Liquid hydrogen has almost twice the energy density—although still only a third as much as gasoline—and the logistics and equipment required to support cryogenic refueling at the racetrack were much less than with pressurized hydrogen.

The new (left) and old (right) liquid hydrogen tanks.

Toyota

The liquid hydrogen is stored in a double-walled tank that was much easier to package within the compact interior of the GR Corolla than the four pressurized cylinders it replaced. This year, the tank is 50 percent larger (storing 15 kg of hydrogen) and elliptical, which proved quite an interesting technical challenge for supplier Shinko. The new tank required Toyota to rebuild the car to repackage everything, taking the opportunity to cut 50 kg (110 lbs) of weight in the process.

From the tank, a high-pressure pump injects the fuel into a vaporizer, where it becomes a gas again and then heads to the engine to be burned. Unfortunately, the pump wasn’t so durable in 2023 and had to be replaced twice during the race, costing hours in the process.

For 2024, a revised pump was designed to last the full 24 hours, although during testing, it proved to be the source of a fuel leak, wasting the team’s time while the problem was isolated. Luckily, this was much less severe than when, in 2023, a gaseous hydrogen pipe leak in the engine bay led to a fire at a test.

Sadly, the new fuel pump had intermittent problems actually pumping fuel during the race, most likely due to sloshing in the tank. Later on, an ABS module failure sidelined the car in the garage for five hours, and while the team was able to take the checkered flag, it had completed fewer laps in 2024 than in 2023.

But 24-hour racing is really hard, and the race wasn’t a write-off for Toyota. It achieved its goal of 30-lap stints between refueling, and while the new pump wasn’t problem-free throughout the race (nor did it have to run for the entire 24 hours), it didn’t need to be replaced once, let alone twice.

For 2024, there was an automated system to clean the CO2 filter.

Toyota

I’m still scratching my head slightly about the carbon capture device that’s fitted to the car’s air filter. This adsorbs CO2 out of the air as the car drives, storing it in a small tank. It’s a nice gesture, I guess.

Since starting development of the hydrogen ICE, Toyota has found real gains in performance and efficiency, and the switch to liquid hydrogen has cut refueling times by 40 percent. All of those make it more viable as a carbon-free fuel, it says. But the chances of seeing production vehicles that get refueled with liquid hydrogen seem remote to me.

Even though Toyota still has optimism that one day it will be able to sell combustion cars that just emit water from their tailpipes, it’s pragmatic enough to know there needs to be some real-world payoff now, beyond the fact that the chairman likes racing and people like to keep him happy.

“Hydrogen engine development has really contributed to our deeper understanding of engine heat efficiency. It was a trigger that brought this technology,” Toyota CTO Hiroki Nakajima told Automotive News at the debut of the automaker’s new 1.5 L and 2.0 L four-cylinder engines, which are designed to meet the European Union’s new Euro 7 emissions regulations that go into effect in 2027.

Canada demands 5% of revenue from Netflix, Spotify, and other streamers

Streaming fees —

Canada says $200M in annual fees will support local news and other content.

Illustrative photo featuring Canadian 1-cent coins with the Canadian flag displayed on a computer screen in the background.

Getty Images | NurPhoto

Canada has ordered large online streaming services to pay 5 percent of their Canadian revenue to the government in a program expected to raise $200 million per year to support local news and other home-grown content. The Canadian Radio-television and Telecommunications Commission (CRTC) announced its decision yesterday after a public comment period.

“Based on the public record, the CRTC is requiring online streaming services to contribute 5 percent of their Canadian revenues to support the Canadian broadcasting system. These obligations will start in the 2024–2025 broadcast year and will provide an estimated $200 million per year in new funding,” the regulator said.

The fees apply to both video and music streaming services. The CRTC imposed the rules despite opposition from Amazon, Apple, Disney, Google, Netflix, Paramount, and Spotify.

The new fees are scheduled to take effect in September and apply to online streaming services that make at least $25 million a year in Canada. The regulations exclude revenue from audiobooks, podcasts, video game services, and user-generated content. The exclusion of revenue from user-generated content is a win for Google’s YouTube.

Streaming companies have recently been raising prices charged to consumers, and the CBC notes that streamers might raise prices again to offset the fees charged in Canada.

Fees to support local news, Indigenous content

The CRTC said it is relying on authority from the Online Streaming Act, which was approved by Canada’s parliament in 2023. The new fees are similar to the ones already imposed on licensed broadcasters.

“The funding will be directed to areas of immediate need in the Canadian broadcasting system, such as local news on radio and television, French-language content, Indigenous content, and content created by and for equity-deserving communities, official language minority communities, and Canadians of diverse backgrounds,” the CRTC said.

CRTC Chairperson Vicky Eatrides said the agency’s “decision will help ensure that online streaming services make meaningful contributions to Canadian and Indigenous content.” The agency also said that streaming companies “will have some flexibility to direct parts of their contributions to support Canadian television content directly.”

Industry groups blast CRTC

The Motion Picture Association-Canada criticized the CRTC yesterday, saying the fee ruling “reinforces a decades-old regulatory approach designed for cable companies” and is “discriminatory.” The fees “will make it harder for global streamers to collaborate directly with Canadian creatives and invest in world-class storytelling made in Canada for audiences here and around the world,” the lobby group said.

The MPA-Canada said the CRTC didn’t fully consider “the significant contributions streamers make in working directly with Canada’s creative communities.” The group represents streamers including Netflix, Disney Plus, HAYU, Sony’s Crunchyroll, Paramount Plus, and PlutoTV.

“Global studios and streaming services have spent over $6.7 billion annually producing quality content in Canada for local and international audiences and invested more in the content made by Canadian production companies last year than the CBC, or the Canada Media Fund and Telefilm combined,” the group said.

The fees were also criticized by the Digital Media Association, which represents streaming music providers including Amazon Music, Apple Music, and Spotify. The “discriminatory tax on music streaming services… is effectively a protectionist subsidy for radio” and may worsen “Canada’s affordability crisis,” the group said.

The Canadian Media Producers Association praised the CRTC decision, saying the decision benefits independent producers and “tilts our industry toward a more level playing field.”

China’s plan to dominate EV sales around the world


FT montage/Getty Images

The resurrection of a car plant in Brazil’s poor northeast stands as a symbol of China’s global advance—and the West’s retreat.

BYD, the Shenzhen-based conglomerate, has taken over an old Ford factory in Camaçari, which was abandoned by the American automaker nearly a century after Henry Ford first set up operations in Brazil.

When Luiz Inácio Lula da Silva, Brazil’s president, visited China last year, he met BYD’s billionaire founder and chair, Wang Chuanfu. After that meeting, BYD picked the country for its first carmaking hub outside of Asia.

Under a $1 billion-plus investment plan, BYD intends to start producing electric and hybrid automobiles this year at the site in Bahia state, which will also manufacture bus and truck chassis and process battery materials.

The new Brazil plant is no outlier—it is part of a wave of corporate Chinese investment in electric vehicle manufacturing supply chains in the world’s most important developing economies.

Financial Times

The inadvertent result of rising protectionism in the US and Europe could be to drive many emerging markets into China’s hands.

Last month, Joe Biden issued a new broadside against Beijing’s deep financial support of Chinese industry as he unveiled sweeping new tariffs on a range of cleantech products—most notably, a 100 percent tariff on electric vehicles. “It’s not competition. It’s cheating. And we’ve seen the damage here in America,” Biden said.

The measures were partly aimed at boosting Biden’s chances in his presidential battle with Donald Trump. But the tariffs, paired with rising restrictions on Chinese investment on American soil, will have an immense impact on the global auto market, in effect shutting China’s world-leading EV makers out of the world’s biggest economy.

The EU’s own anti-subsidy investigation into Chinese electric cars is expected to conclude next week as Brussels tries to protect European carmakers by stemming the flow of low-cost Chinese electric vehicles into the bloc.

Government officials, executives, and experts say that the series of new cleantech tariffs issued by Washington and Brussels are forcing China’s leading players to sharpen their focus on markets in the rest of the world.

This, they argue, will lead to Chinese dominance across the world’s most important emerging markets, including Southeast Asia, Latin America, and the Middle East, as well as the remaining Western economies that are less protectionist than the US and Europe.

“That is the part that seems to be lost in this whole discussion of ‘can we raise some tariffs and slow down the Chinese advance.’ That’s only defending your homeland. That’s leaving everything else open,” says Bill Russo, the former head of Chrysler in Asia and founder of Automobility, a Shanghai consultancy.

“Those markets are in play and China is aggressively going after those markets.”

Microsoft to test “new features and more” for aging, stubbornly popular Windows 10

but the clock is still ticking —

Support ends next year, but Windows 10 remains the most-used version of the OS.


In October 2025, Microsoft will stop supporting Windows 10 for most PC users, which means no more technical support and (crucially) no more security updates unless you decide to pay for them. To encourage adoption of its newer operating system, Microsoft is doing the vast majority of new Windows development in Windows 11, which will get one of its biggest updates since release sometime this fall.

But Windows 10 is casting a long shadow. It remains the most-used version of Windows by all publicly available metrics, including Statcounter (where Windows 11’s growth has been largely stagnant all year) and the Steam Hardware Survey. And last November, Microsoft decided to release a fairly major batch of Windows 10 updates that introduced the Copilot chatbot and other changes to the aging operating system.

That may not be the end of the road. Microsoft has announced that it is reopening a Windows Insider Beta Channel for PCs still running Windows 10, which will be used to test “new features and more improvements to Windows 10 as needed.” Users can opt into the Windows 10 Beta Channel regardless of whether their PC meets the requirements for Windows 11; if your PC is compatible, signing up for the less-stable Dev or Canary channels will still upgrade your PC to Windows 11.

Any new Windows 10 features that are released will be added to Windows 10 22H2, the operating system’s last major yearly update. Per usual for Windows Insider builds, Microsoft may choose not to release all new features that it tests, and new features will be released for the public version of Windows 10 “when they’re ready.”

One thing this new beta program doesn’t change is the end-of-support date for Windows 10, which Microsoft says is still October 14, 2025. Microsoft says that joining the beta program doesn’t extend support. The only way to continue getting Windows 10 security updates past 2025 is to pay for the Extended Security Updates (ESU) program; Microsoft plans to offer these updates to individual users but still hasn’t announced pricing for individuals. Businesses will pay as much as $61 per PC for the first year of updates, while schools will pay as little as $1 per PC.

Beta program or no, we still wouldn’t expect Windows 10 to change dramatically between now and its end-of-support date. We’d guess that most changes will relate to the Copilot assistant, given how aggressively Microsoft has moved to add generative AI to all of its products. For example, the Windows 11 version of Copilot is shedding its “preview” tag and becoming an app that runs in a regular window rather than a persistent sidebar, changes Microsoft could also choose to implement in Windows 10.

Could Network Modeling Replace Observability?

Over the past four years, I’ve consolidated a representative list of network observability vendors, but have not yet considered any modeling-based solutions. That changed when Forward Networks and NetBrain requested inclusion in the network observability report.

These two vendors have built their products on top of network modeling technology, and both of them met the report’s table stakes, which meant they qualified for inclusion. In this fourth iteration of the report, including these two modeling-based vendors did not have a huge impact. Vendors have shifted around on the Radar chart, but generally speaking, the report is consistent with the third iteration.

However, these modeling solutions are a fresh take on observability, which is a category that has so far been evolving incrementally. While there have been occasional leaps forward, driven by the likes of ML and eBPF, there hasn’t been an overhaul of the whole solution.

I cannot foresee any future version of network observability that does not include some degree of modeling, so I’ve been thinking about the evolution of these technologies, the current vendor landscape, and whether modeling-based products will overtake non-modeling-based observability products.

Even though it’s still early days for modeling-based observability, I want to explore and validate these two ideas:

  • It’s harder for observability-only tools to pivot into modeling than the other way around.
  • Modeling products offer some distinct advantages.

Pivoting to Modeling

Modeling solutions have their roots in observability—specifically, in collecting information about the configuration and state of the network. With this information, these solutions create a digital twin, which can simulate traffic to understand how the network currently behaves or would behave in hypothetical conditions.
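
To make the digital-twin idea concrete, here is a minimal sketch of what “simulating traffic” over a model can look like: a graph of devices plus simple per-link filter rules, and a search that answers whether a given flow could get from one point to another, and through which hops. The topology, rule format, and names are invented for illustration; products like Forward Networks and NetBrain build far richer models from real device configurations and state.

```python
# Minimal "digital twin" sketch: a network model we can query without touching
# the real network. Topology, rules, and names are invented for illustration.
from collections import deque

# Each link carries a predicate deciding whether a given flow may traverse it,
# standing in for ACLs/firewall rules extracted from device configs.
topology = {
    "edge-router": [("core-switch", lambda flow: True)],
    "core-switch": [("fw-1", lambda flow: True)],
    "fw-1":        [("db-server", lambda flow: flow["dst_port"] == 5432),
                    ("web-server", lambda flow: flow["dst_port"] in (80, 443))],
    "db-server":   [],
    "web-server":  [],
}

def simulate(flow, src, dst):
    """Breadth-first search over the model: returns a permitted path or None."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt, allowed in topology[node]:
            if nxt not in seen and allowed(flow):
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(simulate({"dst_port": 5432}, "edge-router", "db-server"))
# ['edge-router', 'core-switch', 'fw-1', 'db-server']
print(simulate({"dst_port": 22}, "edge-router", "db-server"))
# None: the modeled firewall rule blocks that flow, so the "what if" answer is no.
```

The key property is that the question is answered against the model, not the live network, which is what makes this kind of “what if” analysis of hypothetical changes possible.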

Observability tools do not need to simulate traffic to do their job. They can report near-real-time network performance information to provide network operations center (NOC) analysts with the right information to maintain the level of performance. Observability tools can definitely incorporate modeling features (and some solutions already do), but the point here is that they don’t have to.

My understanding of today’s network modeling tools indicates that these solutions cannot yet deliver the same set of features as network observability tools. This is rather expected, as a large percentage of network observability tools have more than three decades of continuous development behind them.

However, when looking at future developments, we need to consider that network modeling tools use proprietary algorithms, which have been developed over a number of years and require a highly specific set of skills. I do not expect that developers and engineers equipped with network modeling skills are readily available in the job market, and these use cases are not as trendy as other topics. For example, AI developers are also in demand, but there is going to be a continuous increase in their supply over the next few years as younger generations choose to specialize in the subject.

In contrast, modeling tools can tap into existing observability knowledge and mimic a very mature set of products to implement comparable features.

Modeling Advantages

In the vendor questionnaires, I’ve been asking these two questions for a few years:

  • Can the tool correlate changes in network performance with configuration changes?
  • Can the tool learn from the administrator’s decisions and remediation actions to autonomously solve similar incidents or propose resolutions?

The majority of network observability vendors don’t focus on these sorts of features. But the modeling solutions do, and they do so very well.

This list is by no means exhaustive; I’m only highlighting it because I’ve been asking myself whether these sorts of features are out of scope for network observability tools. But this is the first time since I started researching this space that the responses to these questions went from “we sort of do that” to “yes, this is our core strength.”

This leads me to think there is an extensive set of features that could benefit NOC analysts and that could be developed on top of the underlying technology, which may very well be network modeling.

Next Steps

Whether modeling tools can displace today’s observability tools is something that remains to be determined. I expect that the answer to this question will lie with the organizations whose business model heavily relies on network performance. If such an organization deploys both an observability and modeling tool, and increasingly favors modeling for observability tasks to the point where they decommission the observability tool, we’ll have a much clearer indication of the direction of the market.

To learn more, take a look at GigaOm’s network observability Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.

If you’re not yet a GigaOm subscriber, sign up here.

GameStop stock influencer Roaring Kitty may lose access to E-Trade, report says

“I like the stock” —

E-Trade fears restricting influencer’s trading may trigger boycott, sources say.

Keith Gill, known on Reddit under the pseudonym DeepFuckingValue and as Roaring Kitty, is seen on a fragment of a YouTube video.

E-Trade is apparently struggling to balance the risks and rewards of allowing Keith Gill to continue trading volatile meme stocks on its platform, The Wall Street Journal reported.

The meme-stock influencer known as “Roaring Kitty” and “DeepF—Value” is considered legendary for instantly sending stock prices skyrocketing, notably GameStop’s, most recently with a single tweet.

E-Trade is concerned, according to The Journal’s insider sources, that on the one hand, Gill’s social media posts are potentially illegally manipulating the market—and possibly putting others’ investments at risk. But on the other, the platform worries that restricting Gill’s trading could incite a boycott fueled by his “meme army” closing their accounts “in solidarity.” That could also sharply impact trading on the platform, sources said.

It’s unclear what gamble E-Trade, which is owned by Morgan Stanley, might be willing to make. The platform could decide to take no action at all, the WSJ reported, but through its client agreement has the right to restrict or close Gill’s account “at any time.”

As of late Monday, Gill’s account was still active, the WSJ reported, apparently showing total gains of $85 million over the past three weeks. After Monday’s close, Gill’s GameStop positions “were valued at more than $289 million,” the WSJ reported.

Trading platforms unprepared for Gill’s comeback

In 2021, Gill’s social media activity on Reddit helped drive GameStop stock to historic highs. At that time, Gill encouraged others to invest in the stock—not based on the fundamentals of GameStop’s business but on his pure love for GameStop. The craze that he helped spark rapidly triggered temporary restrictions on GameStop trading, as well as a congressional hearing, but ultimately there were few consequences for Gill, who disappeared after making at least $30 million, the WSJ reported.

All remained quiet until a few weeks ago when Roaring Kitty suddenly came back. On X (formerly Twitter), Gill posted a meme of a man sitting up in his chair, then blitzed his feed with memes and movie clips, seemingly sending a continual stream of coded messages to his millions of followers who eagerly posted about their trades and gains on Reddit.

“Welcome back, legend,” one follower responded.

Once again, Gill’s abrupt surge in online activity immediately kicked off a GameStop stock craze, fueling a price spike of more than 60 percent. And once again, because of the stock’s extreme volatility, Gill’s social posts prompted questions from both trading platforms and officials who continue to fret over whether Gill’s online influencing should be considered market manipulation.

For Gill’s biggest fans, the goal is probably to profit as much as possible before the hammer potentially comes down again and trading gets restricted. That started happening late on Sunday night, when it became harder or impossible to purchase GameStop shares on Robinhood, prompting some traders to complain on X.

The WallStreetBets account shared a warning that Robinhood sent to would-be buyers, which showed that trading was being limited, but not by Robinhood. Instead, the platform that facilitates Robinhood’s overnight trading of the stock, Blue Ocean ATS, set the limit, only accepting “trades 20 percent above or below” that day’s reference price—a move designed for legal or compliance reasons to stop trading once the stock exceeds a certain price.
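
The mechanics of that kind of limit are simple to express: a venue takes a reference price and rejects any order priced more than a set percentage away from it. The sketch below is a generic illustration of such a price collar, using the 20 percent figure from the warning quoted above; the function name, prices, and threshold default are made up, and this is not Blue Ocean ATS’s actual system.

```python
# Generic price-collar check, illustrating the kind of limit described above.
# Names, prices, and the default band are invented; this is not Blue Ocean ATS code.

def within_collar(order_price: float, reference_price: float, band_pct: float = 0.20) -> bool:
    """Accept an order only if its price is within +/- band_pct of the reference price."""
    low = reference_price * (1 - band_pct)
    high = reference_price * (1 + band_pct)
    return low <= order_price <= high

reference = 24.50  # hypothetical reference price for the session
for price in (26.00, 30.00):
    status = "accepted" if within_collar(price, reference) else "rejected"
    print(f"order at ${price:.2f}: {status}")
# Orders priced more than 20 percent away from the reference are rejected,
# which halts further buying once the price runs too far from that day's reference.
```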

These limits are set, the Securities and Exchange Commission (SEC) noted in 2021, partly to prevent fraudsters from spreading misleading tips online and profiting at the expense of investors from illegal price manipulation. A common form of this fraud is a pump-and-dump scheme, where fraudsters “make false and misleading statements to create a buying frenzy, and then sell shares at the pumped-up price.”

Google’s AI Overviews misunderstand why people use Google

robot hand holding glue bottle over a pizza and tomatoes

Aurich Lawson | Getty Images

Last month, we looked into some of the most incorrect, dangerous, and downright weird answers generated by Google’s new AI Overviews feature. Since then, Google has offered a partial apology/explanation for generating those kinds of results and has reportedly scaled back the feature’s rollout for at least some types of queries.

But the more I’ve thought about that rollout, the more I’ve begun to question the wisdom of Google’s AI-powered search results in the first place. Even when the system doesn’t give obviously wrong results, condensing search results into a neat, compact, AI-generated summary seems like a fundamental misunderstanding of how people use Google in the first place.

Reliability and relevance

When people type a question into the Google search bar, they only sometimes want the kind of basic reference information that can be found on a Wikipedia page or corporate website (or even a Google information snippet). Often, they’re looking for subjective information where there is no one “right” answer: “What are the best Mexican restaurants in Santa Fe?” or “What should I do with my kids on a rainy day?” or “How can I prevent cheese from sliding off my pizza?”

The value of Google has always been in pointing you to the places it thinks are likely to have good answers to those questions. But it’s still up to you, as a user, to figure out which of those sources is the most reliable and relevant to what you need at that moment.

  • This wasn’t funny when the guys at Pep Boys said it, either. (via)

    Kyle Orland / Google

  • Weird Al recommends “running with scissors” as well! (via)

    Kyle Orland / Google

  • This list of steps actually comes from a forum thread response about doing something completely different. (via)

    Kyle Orland / Google

  • An island that’s part of the mainland? (via)

    Kyle Orland / Google

  • If everything’s cheaper now, why does everything seem so expensive?

    Kyle Orland / Google

  • Pretty sure this Truman was never president… (via)

    Kyle Orland / Google

For reliability, any savvy Internet user makes use of countless context clues when judging a random Internet search result. Do you recognize the outlet or the author? Is the information from someone with seeming expertise/professional experience or a random forum poster? Is the site well-designed? Has it been around for a while? Does it cite other sources that you trust, etc.?

But Google also doesn’t know ahead of time which specific result will fit the kind of information you’re looking for. When it comes to restaurants in Santa Fe, for instance, are you in the mood for an authoritative list from a respected newspaper critic or for more off-the-wall suggestions from random locals? Or maybe you scroll down a bit and stumble on a loosely related story about the history of Mexican culinary influences in the city.

One of the unseen strengths of Google’s search algorithm is that the user gets to decide which results are the best for them. As long as there’s something reliable and relevant in those first few pages of results, it doesn’t matter if the other links are “wrong” for that particular search or user.

Windows Recall demands an extraordinary level of trust that Microsoft hasn’t earned

The Recall feature as it currently exists in Windows 11 24H2 preview builds.

Andrew Cunningham

Microsoft’s Windows 11 Copilot+ PCs come with quite a few new AI and machine learning-driven features, but the tentpole is Recall. Described by Microsoft as a comprehensive record of everything you do on your PC, the feature is pitched as a way to help users remember where they’ve been and to provide Windows extra contextual information that can help it better understand requests from and meet the needs of individual users.

This, as many users in infosec communities on social media immediately pointed out, sounds like a potential security nightmare. That’s doubly true because Microsoft says that by default, Recall’s screenshots take no pains to redact sensitive information, from usernames and passwords to health care information to NSFW site visits. By default, on a PC with 256GB of storage, Recall can store a couple dozen gigabytes of data across three months of PC usage, a huge amount of personal data.

The line between “potential security nightmare” and “actual security nightmare” is at least partly about the implementation, and Microsoft has been saying things that are at least superficially reassuring. Copilot+ PCs are required to have a fast neural processing unit (NPU) so that processing can be performed locally rather than sending data to the cloud; local snapshots are protected at rest by Windows’ disk encryption technologies, which are generally on by default if you’ve signed into a Microsoft account; neither Microsoft nor other users on the PC are supposed to be able to access any particular user’s Recall snapshots; and users can choose which apps or (in most browsers) which individual websites to exclude from Recall’s snapshots.

This all sounds good in theory, but some users are beginning to use Recall now that the Windows 11 24H2 update is available in preview form, and the actual implementation has serious problems.

“Fundamentally breaks the promise of security in Windows”

This is Recall, as seen on a PC running a preview build of Windows 11 24H2. It takes and saves periodic screenshots, which can then be searched for and viewed in various ways.

Andrew Cunningham

Security researcher Kevin Beaumont, first in a thread on Mastodon and later in a more detailed blog post, has written about some of the potential implementation issues after enabling Recall on an unsupported system (which is currently the only way to try Recall since Copilot+ PCs that officially support the feature won’t ship until later this month). We’ve also given this early version of Recall a try on a Windows Dev Kit 2023, which we’ve used for all our recent Windows-on-Arm testing, and we’ve independently verified Beaumont’s claims about how easy it is to find and view raw Recall data once you have access to a user’s PC.

To test Recall yourself, developer and Windows enthusiast Albacore has published a tool called AmperageKit that will enable it on Arm-based Windows PCs running Windows 11 24H2 build 26100.712 (the build currently available in the Windows Insider Release Preview channel). Other Windows 11 24H2 versions are missing the underlying code necessary to enable Recall.

  • Windows uses OCR on all the text in all the screenshots it takes. That text is also saved to an SQLite database to facilitate faster searches.

    Andrew Cunningham

  • Searching for “iCloud,” for example, brings up every single screenshot with the word “iCloud” in it, including the app itself and its entry in the Microsoft Store. If I had visited websites that mentioned it, they would show up here, too.

    Andrew Cunningham

The short version is this: In its current form, Recall takes screenshots and uses OCR to grab the information on your screen; it then writes the contents of windows plus records of different user interactions in a locally stored SQLite database to track your activity. Data is stored on a per-app basis, presumably to make it easier for Microsoft’s app-exclusion feature to work. Beaumont says “several days” of data amounted to a database around 90KB in size. In our usage, screenshots taken by Recall on a PC with a 2560×1440 screen come in at 500KB or 600KB apiece (Recall saves screenshots at your PC’s native resolution, minus the taskbar area).
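
Beaumont’s point about how accessible this data is can be illustrated with nothing more than Python’s standard library. The database path, table, and column names below are placeholders, since the real schema isn’t publicly documented and could change; the sketch simply shows that once someone can read the file, searching every piece of OCR’d screen text is trivial.

```python
# Illustrative only: the path, table, and column names below are placeholders,
# not Recall's documented schema. The point is that a readable SQLite file of
# OCR'd screen text can be searched with nothing more than the standard library.
import sqlite3

DB_PATH = r"C:\hypothetical\path\to\recall\database.db"  # placeholder path

def search_captured_text(term: str):
    """Return every stored snapshot record whose OCR'd text contains `term`."""
    conn = sqlite3.connect(DB_PATH)
    try:
        # Hypothetical table/columns standing in for wherever the OCR text lives.
        rows = conn.execute(
            "SELECT captured_at, app_name, ocr_text FROM captured_text "
            "WHERE ocr_text LIKE ?",
            (f"%{term}%",),
        ).fetchall()
    finally:
        conn.close()
    return rows

# e.g., search_captured_text("password") would surface every snapshot whose
# on-screen text contained that word, since Recall does not redact such data.
```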

Recall works locally thanks to Azure AI code that runs on your device, and it works without Internet connectivity and without a Microsoft account. Data is encrypted at rest, sort of, at least insofar as your entire drive is generally encrypted when your PC is either signed into a Microsoft account or has Bitlocker turned on. But in its current form, Beaumont says Recall has “gaps you can drive a plane through” that make it trivially easy to grab and scan through a user’s Recall database if you either (1) have local access to the machine and can log into any account (not just the account of the user whose database you’re trying to see), or (2) are using a PC infected with some kind of info-stealer virus that can quickly transfer the SQLite database to another system.

Surgeons remove pig kidney transplant from woman

Interspecies —

No rejection, just a matter of blood flow.

Transplant team

Courtesy of NYU Langone

Surgeons in New York have removed a pig kidney less than two months after transplanting it into Lisa Pisano, a 54-year-old woman with kidney failure who also needed a mechanical heart pump. The team behind the transplant says there were problems with the heart pump, not the pig kidney, and that the patient is in stable condition.

Pisano was facing heart and kidney failure and required routine dialysis. She wasn’t eligible to receive a traditional heart and kidney transplant from a human donor because of several chronic medical conditions that reduced the likelihood of a good outcome.

Pisano first received a heart pump at NYU Langone Health on April 4, followed by the pig kidney transplant on April 12. The heart pump, a device called a left ventricular assist device or LVAD, is used in patients who are either awaiting heart transplantation or otherwise aren’t a candidate for a heart transplant.

In a statement provided to WIRED, Pisano’s medical team explained that they electively removed the pig kidney on May 29—47 days after transplant—after several episodes of the heart pump not being able to pass enough blood through the transplanted kidney. Steady blood flow is important so that the kidney can produce urine and filter waste. Without it, Pisano’s kidney function began to decline.

“On balance, the kidney was no longer contributing enough to justify continuing the immunosuppression regimen,” said Robert Montgomery, director of the NYU Langone Transplant Institute, in the statement. Like traditional transplant patients, Pisano needed to take immunosuppressive drugs to prevent her immune system from rejecting the donor organ.

The kidney came from a pig genetically engineered by Virginia biotech company Revivicor to lack a gene responsible for the production of a sugar known as alpha-gal. In previous studies at NYU Langone, researchers found that removing this sugar prevented immediate rejection of the organ when transplanted into brain-dead patients. During Pisano’s surgery, the donor pig’s thymus gland, which is responsible for “educating” the immune system, was also transplanted to reduce the likelihood of rejection.

A recent biopsy did not show signs of rejection, but Pisano’s kidney was injured due to a lack of blood flow, according to the statement. The team plans to study the explanted pig kidney to learn more.

Pisano is now back on dialysis, a treatment for kidney-failure patients, and her heart pump is still functioning. She would not have been a candidate for the heart pump if she had not received the pig kidney.

“We are hoping to get Lisa back home to her family soon,” Montgomery said, calling Pisano a “pioneer and a hero in the effort to create a sustainable option for people waiting for an organ transplant.”

Pisano was the second living person to receive a kidney from a genetically engineered pig. The first, Richard Slayman of Massachusetts, died in May just two months after the historic transplant. The surgery was carried out on March 16 at Massachusetts General Hospital. In a statement released on May 11, the hospital said it had “no indication” that Slayman’s death was the result of the pig kidney transplant. The donor pig used in Slayman’s procedure had a total of 69 different genetic edits.

The global donor organ shortage has led researchers including the NYU and Massachusetts teams to pursue the possibility of using pigs as an alternative source. But the body immediately recognizes pig tissue as foreign, so scientists are using gene editing in an effort to make pig organs look more like human ones to the immune system. Just how many gene edits will be needed to keep pig organs working in people is a topic of much debate.

Pig heart transplants have also been carried out in two individuals—one in 2022 and the other in 2023—at the University of Maryland. In both cases, the patients were not eligible for human heart transplants. Those donor pigs had 10 genetic edits and were also bred by Revivicor. Both recipients died around two months after their transplants.

This story originally appeared on wired.com.
