Author name: Mike M.

China’s plan to dominate EV sales around the world

The resurrection of a car plant in Brazil’s poor northeast stands as a symbol of China’s global advance—and the West’s retreat.

BYD, the Shenzhen-based conglomerate, has taken over an old Ford factory in Camaçari, which was abandoned by the American automaker nearly a century after Henry Ford first set up operations in Brazil.

When Luiz Inácio Lula da Silva, Brazil’s president, visited China last year, he met BYD’s billionaire founder and chair, Wang Chuanfu. After that meeting, BYD picked the country for its first carmaking hub outside of Asia.

Under a $1 billion-plus investment plan, BYD intends to start producing electric and hybrid automobiles this year at the site in Bahia state, which will also manufacture bus and truck chassis and process battery materials.

The new Brazil plant is no outlier—it is part of a wave of Chinese corporate investment in electric vehicle manufacturing supply chains across the world’s most important developing economies.

The inadvertent result of rising protectionism in the US and Europe could be to drive many emerging markets into China’s hands.

Last month, Joe Biden issued a new broadside against Beijing’s deep financial support of Chinese industry as he unveiled sweeping new tariffs on a range of cleantech products—most notably, a 100 percent tariff on electric vehicles. “It’s not competition. It’s cheating. And we’ve seen the damage here in America,” Biden said.

The measures were partly aimed at boosting Biden’s chances in his presidential battle with Donald Trump. But the tariffs, paired with rising restrictions on Chinese investment on American soil, will have an immense impact on the global auto market, in effect shutting China’s world-leading EV makers out of the world’s biggest economy.

The EU’s own anti-subsidy investigation into Chinese electric cars is expected to conclude next week as Brussels tries to protect European carmakers by stemming the flow of low-cost Chinese electric vehicles into the bloc.

Government officials, executives, and experts say that the series of new cleantech tariffs issued by Washington and Brussels are forcing China’s leading players to sharpen their focus on markets in the rest of the world.

This, they argue, will lead to Chinese dominance across the world’s most important emerging markets, including Southeast Asia, Latin America, and the Middle East, as well as in the remaining Western economies that are less protectionist than the US and Europe.

“That is the part that seems to be lost in this whole discussion of ‘can we raise some tariffs and slow down the Chinese advance.’ That’s only defending your homeland. That’s leaving everything else open,” says Bill Russo, the former head of Chrysler in Asia and founder of Automobility, a Shanghai consultancy.

“Those markets are in play and China is aggressively going after those markets.”

Microsoft to test “new features and more” for aging, stubbornly popular Windows 10

but the clock is still ticking —

Support ends next year, but Windows 10 remains the most-used version of the OS.

In October 2025, Microsoft will stop supporting Windows 10 for most PC users, which means no more technical support and (crucially) no more security updates unless you decide to pay for them. To encourage upgrades, Microsoft is doing the vast majority of new Windows development in Windows 11, which will get one of its biggest updates since release sometime this fall.

But Windows 10 is casting a long shadow. It remains the most-used version of Windows by all publicly available metrics, including Statcounter (where Windows 11’s growth has been largely stagnant all year) and the Steam Hardware Survey. And last November, Microsoft decided to release a fairly major batch of Windows 10 updates that introduced the Copilot chatbot and other changes to the aging operating system.

That may not be the end of the road. Microsoft has announced that it is reopening a Windows Insider Beta Channel for PCs still running Windows 10, which will be used to test “new features and more improvements to Windows 10 as needed.” Users can opt into the Windows 10 Beta Channel regardless of whether their PC meets the requirements for Windows 11; if your PC is compatible, signing up for the less-stable Dev or Canary channels will still upgrade your PC to Windows 11.

Any new Windows 10 features that are released will be added to Windows 10 22H2, the operating system’s last major yearly update. Per usual for Windows Insider builds, Microsoft may choose not to release all new features that it tests, and new features will be released for the public version of Windows 10 “when they’re ready.”

One thing this new beta program doesn’t change is the end-of-support date for Windows 10, which Microsoft says is still October 14, 2025. Microsoft says that joining the beta program doesn’t extend support. The only way to continue getting Windows 10 security updates past 2025 is to pay for the Extended Security Updates (ESU) program; Microsoft plans to offer these updates to individual users but still hasn’t announced pricing for individuals. Businesses will pay as much as $61 per PC for the first year of updates, while schools will pay as little as $1 per PC.

Beta program or no, we still wouldn’t expect Windows 10 to change dramatically between now and its end-of-support date. We’d guess that most changes will relate to the Copilot assistant, given how aggressively Microsoft has moved to add generative AI to all of its products. For example, the Windows 11 version of Copilot is shedding its “preview” tag and becoming an app that runs in a regular window rather than a persistent sidebar, changes Microsoft could also choose to implement in Windows 10.

Could Network Modeling Replace Observability?

Over the past four years, I’ve consolidated a representative list of network observability vendors, but have not yet considered any modeling-based solutions. That changed when Forward Networks and NetBrain requested inclusion in the network observability report.

These two vendors have built their products on top of network modeling technology, and both met the report’s table stakes, which qualified them for inclusion. In this, the fourth iteration of the report, including the two modeling-based vendors did not have a huge impact. Vendors have shifted around on the Radar chart, but generally speaking, the report is consistent with the third iteration.

However, these modeling solutions are a fresh take on observability, which is a category that has so far been evolving incrementally. While there have been occasional leaps forward, driven by the likes of ML and eBPF, there hasn’t been an overhaul of the whole solution.

I cannot foresee any future version of network observability that does not include some degree of modeling, so I’ve been thinking about the evolution of these technologies, the current vendor landscape, and whether modeling-based products will overtake non-modeling-based observability products.

Even though it’s still early days for modeling-based observability, I want to explore and validate these two ideas:

  • It’s harder for observability-only tools to pivot into modeling than the other way around.
  • Modeling products offer some distinct advantages.

Pivoting to Modeling

Modeling solutions have their roots in observability—specifically, in collecting information about the configuration and state of the network. With this information, these solutions create a digital twin, which can simulate traffic to understand how the network currently behaves or would behave under hypothetical conditions.

Observability tools do not need to simulate traffic to do their job. They can report near-real-time network performance information, giving network operations center (NOC) analysts the right information to maintain performance levels. Observability tools can definitely incorporate modeling features (and some solutions already do), but the point here is that they don’t have to.

My understanding of today’s network modeling tools indicates that these solutions cannot yet deliver the same set of features as network observability tools. This is to be expected, as many network observability tools are backed by more than three decades of continuous development.

However, when looking at future developments, we need to consider that network modeling tools use proprietary algorithms, which have been developed over a number of years and require a highly specific set of skills. I do not expect that developers and engineers equipped with network modeling skills are readily available in the job market, and these use cases are not as trendy as other topics. For example, AI developers are also in demand, but there’s also going to be a continuous increase in supply over the next few years as younger generations choose to specialize in this subject.

In contrast, modeling tools can tap into existing observability knowledge and mimic a very mature set of products to implement comparable features.

Modeling Advantages

In the vendor questionnaires, I’ve been asking these two questions for a few years:

  • Can the tool correlate changes in network performance with configuration changes?
  • Can the tool learn from the administrator’s decisions and remediation actions to autonomously solve similar incidents or propose resolutions?

The majority of network observability vendors don’t focus on these sorts of features. But the modeling solutions do, and they do so very well.

This list is by no means exhaustive; I’m only highlighting it because I’ve been asking myself whether these sorts of features are out of scope for network observability tools. But this is the first time since I started researching this space that the responses to these questions went from “we sort of do that” to “yes, this is our core strength.”

This leads me to think there is an extensive set of features that can benefit NOC analysts that can be developed on top of the underlying technology, which may very well be network modeling.

Next Steps

Whether modeling tools can displace today’s observability tools is something that remains to be determined. I expect that the answer to this question will lie with the organizations whose business model heavily relies on network performance. If such an organization deploys both an observability and modeling tool, and increasingly favors modeling for observability tasks to the point where they decommission the observability tool, we’ll have a much clearer indication of the direction of the market.

To learn more, take a look at GigaOm’s network observability Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.

If you’re not yet a GigaOm subscriber, sign up here.

GameStop stock influencer Roaring Kitty may lose access to E-Trade, report says

“I like the stock” —

E-Trade fears restricting influencer’s trading may trigger boycott, sources say.

Keith Gill, known on Reddit under the pseudonym DeepFuckingValue and as Roaring Kitty, is seen on a fragment of a YouTube video.

E-Trade is apparently struggling to balance the risks and rewards of allowing Keith Gill to continue trading volatile meme stocks on its platform, The Wall Street Journal reported.

The meme-stock influencer known as “Roaring Kitty” and “DeepF—Value” is considered legendary for instantly skyrocketing the price of stocks, notably GameStop, most recently with a single tweet.

E-Trade is concerned, according to The Journal’s insider sources, that on the one hand, Gill’s social media posts are potentially illegally manipulating the market—and possibly putting others’ investments at risk. But on the other, the platform worries that restricting Gill’s trading could incite a boycott fueled by his “meme army” closing their accounts “in solidarity.” That could also sharply impact trading on the platform, sources said.

It’s unclear what gamble E-Trade, which is owned by Morgan Stanley, might be willing to make. The platform could decide to take no action at all, the WSJ reported, but through its client agreement has the right to restrict or close Gill’s account “at any time.”

As of late Monday, Gill’s account was still active, the WSJ reported, apparently showing total gains of $85 million over the past three weeks. After Monday’s close, Gill’s GameStop positions “were valued at more than $289 million,” the WSJ reported.

Trading platforms unprepared for Gill’s comeback

In 2021, Gill’s social media activity on Reddit helped drive GameStop stock to historic highs. At that time, Gill encouraged others to invest in the stock—not based on the fundamentals of GameStop’s business but on his pure love for GameStop. The craze that he helped spark rapidly triggered temporary restrictions on GameStop trading, as well as a congressional hearing, but ultimately there were few consequences for Gill, who disappeared after making at least $30 million, the WSJ reported.

All remained quiet until a few weeks ago when Roaring Kitty suddenly came back. On X (formerly Twitter), Gill posted a meme of a man sitting up in his chair, then blitzed his feed with memes and movie clips, seemingly sending a continual stream of coded messages to his millions of followers who eagerly posted about their trades and gains on Reddit.

“Welcome back, legend,” one follower responded.

Once again, Gill’s abrupt surge in online activity immediately kicked off a GameStop stock craze, fueling a price spike of more than 60 percent. And once again, because of the stock’s extreme volatility, Gill’s social posts prompted questions from both trading platforms and officials, who continue to fret over whether Gill’s online influencing should be considered market manipulation.

For Gill’s biggest fans, the goal is probably to profit as much as possible before the hammer potentially comes down again and trading gets restricted. That started happening late on Sunday night, when it became harder or impossible to purchase GameStop shares on Robinhood, prompting some traders to complain on X.

The WallStreetBets account shared a warning that Robinhood sent to would-be buyers, which showed that trading was being limited, but not by Robinhood. Instead, the platform that facilitates Robinhood’s overnight trading of the stock, Blue Ocean ATS, set the limit, only accepting “trades 20 percent above or below” that day’s reference price—a move designed for legal or compliance reasons to stop trading once the stock exceeds a certain price.
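
For a concrete sense of how such a band works, here is a minimal sketch. The 20 percent figure comes from the warning described above; the reference price, order prices, and helper function are made-up illustrations, not Blue Ocean ATS’s actual rules engine.

```python
# Minimal sketch of a reference-price band check like the one described above.
# The 20 percent band comes from the warning quoted in the article; the prices
# and function below are hypothetical, for illustration only.
BAND = 0.20

def within_band(order_price: float, reference_price: float) -> bool:
    """Accept an order only if it sits within +/- 20% of the reference price."""
    lower = reference_price * (1 - BAND)
    upper = reference_price * (1 + BAND)
    return lower <= order_price <= upper

reference = 30.00  # hypothetical reference price for the session
for price in (24.50, 35.90, 37.25):
    status = "accepted" if within_band(price, reference) else "rejected"
    print(f"order at ${price:.2f}: {status}")
```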

These limits are set, the Securities and Exchange Commission (SEC) noted in 2021, partly to prevent fraudsters from spreading misleading tips online and profiting at the expense of investors from illegal price manipulation. A common form of this fraud is a pump-and-dump scheme, where fraudsters “make false and misleading statements to create a buying frenzy, and then sell shares at the pumped-up price.”

Google’s AI Overviews misunderstand why people use Google

robot hand holding glue bottle over a pizza and tomatoes

Aurich Lawson | Getty Images

Last month, we looked into some of the most incorrect, dangerous, and downright weird answers generated by Google’s new AI Overviews feature. Since then, Google has offered a partial apology/explanation for generating those kinds of results and has reportedly scaled back the feature’s rollout for at least some types of queries.

But the more I’ve thought about that rollout, the more I’ve begun to question the wisdom of Google’s AI-powered search results in the first place. Even when the system doesn’t give obviously wrong results, condensing search results into a neat, compact, AI-generated summary seems like a fundamental misunderstanding of how people use Google in the first place.

Reliability and relevance

When people type a question into the Google search bar, they only sometimes want the kind of basic reference information that can be found on a Wikipedia page or corporate website (or even a Google information snippet). Often, they’re looking for subjective information where there is no one “right” answer: “What are the best Mexican restaurants in Santa Fe?” or “What should I do with my kids on a rainy day?” or “How can I prevent cheese from sliding off my pizza?”

The value of Google has always been in pointing you to the places it thinks are likely to have good answers to those questions. But it’s still up to you, as a user, to figure out which of those sources is the most reliable and relevant to what you need at that moment.

  • This wasn’t funny when the guys at Pep Boys said it, either.

  • Weird Al recommends “running with scissors” as well!

  • This list of steps actually comes from a forum thread response about doing something completely different.

  • An island that’s part of the mainland?

  • If everything’s cheaper now, why does everything seem so expensive?

  • Pretty sure this Truman was never president…

    Kyle Orland / Google

For reliability, any savvy Internet user makes use of countless context clues when judging a random Internet search result. Do you recognize the outlet or the author? Is the information from someone with seeming expertise/professional experience or a random forum poster? Is the site well-designed? Has it been around for a while? Does it cite other sources that you trust? And so on.

But Google also doesn’t know ahead of time which specific result will fit the kind of information you’re looking for. When it comes to restaurants in Santa Fe, for instance, are you in the mood for an authoritative list from a respected newspaper critic or for more off-the-wall suggestions from random locals? Or maybe you scroll down a bit and stumble on a loosely related story about the history of Mexican culinary influences in the city.

One of the unseen strengths of Google’s search algorithm is that the user gets to decide which results are the best for them. As long as there’s something reliable and relevant in those first few pages of results, it doesn’t matter if the other links are “wrong” for that particular search or user.

Windows Recall demands an extraordinary level of trust that Microsoft hasn’t earned

The Recall feature as it currently exists in Windows 11 24H2 preview builds.

Andrew Cunningham

Microsoft’s Windows 11 Copilot+ PCs come with quite a few new AI and machine learning-driven features, but the tentpole is Recall. Described by Microsoft as a comprehensive record of everything you do on your PC, the feature is pitched as a way to help users remember where they’ve been and to provide Windows extra contextual information that can help it better understand requests from and meet the needs of individual users.

This, as many users in infosec communities on social media immediately pointed out, sounds like a potential security nightmare. That’s doubly true because Microsoft says that by default, Recall’s screenshots take no pains to redact sensitive information, from usernames and passwords to health care information to NSFW site visits. By default, on a PC with 256GB of storage, Recall can store a couple dozen gigabytes of data across three months of PC usage, a huge amount of personal data.

The line between “potential security nightmare” and “actual security nightmare” is at least partly about the implementation, and Microsoft has been saying things that are at least superficially reassuring. Copilot+ PCs are required to have a fast neural processing unit (NPU) so that processing can be performed locally rather than sending data to the cloud; local snapshots are protected at rest by Windows’ disk encryption technologies, which are generally on by default if you’ve signed into a Microsoft account; neither Microsoft nor other users on the PC are supposed to be able to access any particular user’s Recall snapshots; and users can choose to exclude specific apps or (in most browsers) individual websites from Recall’s snapshots.

This all sounds good in theory, but some users are beginning to use Recall now that the Windows 11 24H2 update is available in preview form, and the actual implementation has serious problems.

“Fundamentally breaks the promise of security in Windows”

This is Recall, as seen on a PC running a preview build of Windows 11 24H2. It takes and saves periodic screenshots, which can then be searched for and viewed in various ways.

Andrew Cunningham

Security researcher Kevin Beaumont, first in a thread on Mastodon and later in a more detailed blog post, has written about some of the potential implementation issues after enabling Recall on an unsupported system (which is currently the only way to try Recall since Copilot+ PCs that officially support the feature won’t ship until later this month). We’ve also given this early version of Recall a try on a Windows Dev Kit 2023, which we’ve used for all our recent Windows-on-Arm testing, and we’ve independently verified Beaumont’s claims about how easy it is to find and view raw Recall data once you have access to a user’s PC.

To test Recall yourself, developer and Windows enthusiast Albacore has published a tool called AmperageKit that will enable it on Arm-based Windows PCs running Windows 11 24H2 build 26100.712 (the build currently available in the Windows Insider Release Preview channel). Other Windows 11 24H2 versions are missing the underlying code necessary to enable Recall.

  • Windows uses OCR on all the text in all the screenshots it takes. That text is also saved to an SQLite database to facilitate faster searches.

  • Searching for “iCloud,” for example, brings up every single screenshot with the word “iCloud” in it, including the app itself and its entry in the Microsoft Store. If I had visited websites that mentioned it, they would show up here, too.

    Andrew Cunningham

The short version is this: In its current form, Recall takes screenshots and uses OCR to grab the information on your screen; it then writes the contents of windows plus records of different user interactions in a locally stored SQLite database to track your activity. Data is stored on a per-app basis, presumably to make it easier for Microsoft’s app-exclusion feature to work. Beaumont says “several days” of data amounted to a database around 90KB in size. In our usage, screenshots taken by Recall on a PC with a 2560×1440 screen come in at 500KB or 600KB apiece (Recall saves screenshots at your PC’s native resolution, minus the taskbar area).

Recall works locally thanks to Azure AI code that runs on your device, and it works without Internet connectivity and without a Microsoft account. Data is encrypted at rest, sort of, at least insofar as your entire drive is generally encrypted when your PC is either signed into a Microsoft account or has BitLocker turned on. But in its current form, Beaumont says Recall has “gaps you can drive a plane through” that make it trivially easy to grab and scan through a user’s Recall database if you either (1) have local access to the machine and can log into any account (not just the account of the user whose database you’re trying to see), or (2) are using a PC infected with some kind of info-stealer virus that can quickly transfer the SQLite database to another system.
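
To make concrete how little stands between an attacker and that data, here is a minimal sketch of searching an SQLite database of OCR’d text once someone has a copy of the file. It is an illustration only: the file path, table name, and column names below are invented placeholders, not Recall’s actual schema.

```python
import sqlite3

# Illustrative sketch only: the path, table, and column names are invented
# placeholders, not Recall's documented schema. The point is that reading a
# copied SQLite file takes just a few lines of standard-library code.
DB_PATH = r"C:\stolen_copy\recall_snapshots.db"  # hypothetical copied file

conn = sqlite3.connect(DB_PATH)
rows = conn.execute(
    "SELECT app_name, captured_at, ocr_text "
    "FROM window_captures WHERE ocr_text LIKE ?",  # placeholder schema
    ("%password%",),
)
for app_name, captured_at, ocr_text in rows:
    print(app_name, captured_at, ocr_text[:80])
conn.close()
```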

Surgeons remove pig kidney transplant from woman

Interspecies —

No rejection, just a matter of blood flow.

Transplant team

Courtesy of NYU Langone

Surgeons in New York have removed a pig kidney less than two months after transplanting it into Lisa Pisano, a 54-year-old woman with kidney failure who also needed a mechanical heart pump. The team behind the transplant says there were problems with the heart pump, not the pig kidney, and that the patient is in stable condition.

Pisano was facing heart and kidney failure and required routine dialysis. She wasn’t eligible to receive a traditional heart and kidney transplant from a human donor because of several chronic medical conditions that reduced the likelihood of a good outcome.

Pisano first received a heart pump at NYU Langone Health on April 4, followed by the pig kidney transplant on April 12. The heart pump, a device called a left ventricular assist device, or LVAD, is used in patients who are awaiting heart transplantation or who otherwise aren’t candidates for a heart transplant.

In a statement provided to WIRED, Pisano’s medical team explained that they electively removed the pig kidney on May 29—47 days after transplant—after several episodes of the heart pump not being able to pass enough blood through the transplanted kidney. Steady blood flow is important so that the kidney can produce urine and filter waste. Without it, Pisano’s kidney function began to decline.

“On balance, the kidney was no longer contributing enough to justify continuing the immunosuppression regimen,” said Robert Montgomery, director of the NYU Langone Transplant Institute, in the statement. Like traditional transplant patients, Pisano needed to take immunosuppressive drugs to prevent her immune system from rejecting the donor organ.

The kidney came from a pig genetically engineered by Virginia biotech company Revivicor to lack a gene responsible for the production of a sugar known as alpha-gal. In previous studies at NYU Langone, researchers found that removing this sugar prevented immediate rejection of the organ when transplanted into brain-dead patients. During Pisano’s surgery, the donor pig’s thymus gland, which is responsible for “educating” the immune system, was also transplanted to reduce the likelihood of rejection.

A recent biopsy did not show signs of rejection, but Pisano’s kidney was injured due to a lack of blood flow, according to the statement. The team plans to study the explanted pig kidney to learn more.

Pisano is now back on dialysis, a treatment for kidney-failure patients, and her heart pump is still functioning. She would not have been a candidate for the heart pump if she had not received the pig kidney.

“We are hoping to get Lisa back home to her family soon,” Montgomery said, calling Pisano a “pioneer and a hero in the effort to create a sustainable option for people waiting for an organ transplant.”

Pisano was the second living person to receive a kidney from a genetically engineered pig. The first, Richard Slayman of Massachusetts, died in May just two months after the historic transplant. The surgery was carried out on March 16 at Massachusetts General Hospital. In a statement released on May 11, the hospital said it had “no indication” that Slayman’s death was the result of the pig kidney transplant. The donor pig used in Slayman’s procedure had a total of 69 different genetic edits.

The global donor organ shortage has led researchers including the NYU and Massachusetts teams to pursue the possibility of using pigs as an alternative source. But the body immediately recognizes pig tissue as foreign, so scientists are using gene editing in an effort to make pig organs look more like human ones to the immune system. Just how many gene edits will be needed to keep pig organs working in people is a topic of much debate.

Pig heart transplants have also been carried out in two individuals—one in 2022 and the other in 2023—at the University of Maryland. In both cases, the patients were not eligible for human hearts. Those donor pigs had 10 genetic edits and were also bred by Revivicor. Both recipients died around two months after their transplants.

This story originally appeared on wired.com.

Google accidentally published internal Search documentation to GitHub

My author ranking is super high, right Google? —

Commit snafu slapped an irrevocable Apache 2.0 license on confidential API Docs.

A large Google logo at a trade fair.

Getty Images | Alexander Koerner

Google apparently accidentally posted a big stash of internal technical documents to GitHub, partially detailing how the search engine ranks webpages. For most of us, the question of search rankings is just “are my web results good or bad,” but the SEO community is both thrilled to get a peek behind the curtain and up in arms since the docs apparently contradict some of what Google has told them in the past. Most of the commentary on the leak is from SEO experts Rand Fishkin and Mike King.

Google confirmed the authenticity of the documents to The Verge, saying, “We would caution against making inaccurate assumptions about Search based on out-of-context, outdated, or incomplete information. We’ve shared extensive information about how Search works and the types of factors that our systems weigh, while also working to protect the integrity of our results from manipulation.”

The fun thing about accidentally publishing to the GoogleAPI GitHub is that, while these are sensitive internal documents, Google technically released them under an Apache 2.0 license. That means anyone who stumbled across the documents was granted a “perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license” to them, so these are freely available online now.

One of the leaked documents.

The leak contains a ton of API documentation for Google’s “ContentWarehouse,” which sounds a lot like the search index. As you’d expect, even this incomplete look at how Google ranks webpages is impossibly complex. King writes that there are “2,596 modules represented in the API documentation with 14,014 attributes (features).” These are all documents written by programmers for programmers and rely on a lot of background information that you’d probably only know if you worked on the search team. The SEO community is still poring over the documents and using them to build assumptions on how Google Search works.

Both Fishkin and King accuse Google of “lying” to SEO experts in the past. One of the revelations in the documents is that the click-through rate of a search result listing affects its ranking, something Google has repeatedly denied goes into the results “stew.” The click-tracking system is called “Navboost”; in other words, it boosts websites that users navigate to. Naturally, a lot of this click data comes from Chrome, even when you leave search. For instance, some results can show a small set of “sitemap” results below the main listing, and part of what powers this is apparently the most popular subpages as determined by Chrome’s click tracking.

The documents also suggest Google has whitelists that will artificially boost certain websites for certain topics. The two mentioned were “isElectionAuthority” and “isCovidLocalAuthority.”

A lot of the documentation is exactly how you would expect a search engine to work. Sites have a “SiteAuthority” value that will rank well-known sites higher than lesser-known ones. Authors also have their own rankings, but as with everything here, it’s impossible to know how everything interacts with everything else.

Both bits of commentary from our SEO experts make them sound offended that Google would ever mislead them, but doesn’t the company need to maintain at least a slightly adversarial relationship with the people who try to manipulate the search results? One recent study found that “search engines seem to lose the cat-and-mouse game that is SEO spam” and found “an inverse relationship between a page’s optimization level and its perceived expertise, indicating that SEO may hurt at least subjective page quality.” None of this additional documentation is likely great for users or Google’s results quality. For instance, now that people know that the click-through rate affects search ranking, couldn’t you boost a website’s listing with a click farm?

No physics? No problem. AI weather forecasting is already making huge strides.

AI weather models are arriving just in time for the 2024 Atlantic hurricane season.

Aurich Lawson | Getty Images

Much like the invigorating passage of a strong cold front, major changes are afoot in the weather forecasting community. And the end game is nothing short of revolutionary: an entirely new way to forecast weather based on artificial intelligence that can run on a desktop computer.

Today’s artificial intelligence systems require one resource more than any other to operate—data. For example, large language models such as ChatGPT voraciously consume data to improve answers to queries. The more high-quality data they consume, the better their training and the sharper the results.

However, there is a finite limit to quality data, even on the Internet. These large language models have hoovered up so much data that they’re being sued widely for copyright infringement. And as they’re running out of data, the operators of these AI models are turning to ideas such as synthetic data to keep feeding the beast and produce ever more capable results for users.

If data is king, what about other applications for AI technology similar to large language models? Are there untapped pools of data? One of the most promising that has emerged in the last 18 months is weather forecasting, and recent advances have sent shockwaves through the field of meteorology.

That’s because there’s a secret weapon: an extremely rich dataset. The European Centre for Medium-Range Weather Forecasts, the premier organization in the world for numerical weather prediction, maintains atmospheric, land, and oceanic weather data for every day, at points around the world, every few hours, going back to 1940. The last 50 years of data, after the advent of global satellite coverage, is especially rich. This dataset is known as ERA5, and it is publicly available.
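
Because ERA5 is publicly available, anyone can pull a slice of it themselves. Here is a minimal sketch using the Copernicus Climate Data Store’s cdsapi package; it assumes a free CDS account with a configured ~/.cdsapirc file, and the exact dataset and request keys may differ slightly between versions of the CDS API.

```python
# Minimal sketch of downloading a small slice of ERA5 from the Copernicus
# Climate Data Store. Assumes a free CDS account with ~/.cdsapirc configured;
# dataset and request keys may vary slightly between CDS API versions.
import cdsapi

client = cdsapi.Client()
client.retrieve(
    "reanalysis-era5-single-levels",
    {
        "product_type": "reanalysis",
        "variable": ["2m_temperature", "mean_sea_level_pressure"],
        "year": "1940",
        "month": "01",
        "day": "01",
        "time": ["00:00", "06:00", "12:00", "18:00"],
        "format": "netcdf",
    },
    "era5_sample.nc",  # output file
)
```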

It was not created to fuel AI applications, but ERA5 has turned out to be incredibly useful for this purpose. Computer scientists only really got serious about using this data to train AI models to forecast the weather in 2022. Since then, the technology has made rapid strides. In some cases, the output of these models is already superior to that of the global weather models that scientists have labored for decades to design and build, and which require some of the most powerful supercomputers in the world to run.

“It is clear that machine learning is a significant part of the future of weather forecasting,” said Matthew Chantry, who leads AI forecasting efforts at the European weather center known as ECMWF, in an interview with Ars.

It’s moving fast

John Dean and Kai Marshland met as undergraduates at Stanford University in the late 2010s. Dean, an electrical engineer, interned at SpaceX during the summer of 2017. Marshland, a computer scientist, interned at the launch company the next summer. Both graduated in 2019 and were trying to figure out what to do with their lives.

“We decided we wanted to solve the problem of weather uncertainty,” Marshland said, so they co-founded a company called WindBorne Systems.

The premise of the company was simple: For about 85 percent of the Earth and its atmosphere, we have no good data about weather conditions there. A lack of quality data, which establishes initial conditions, represents a major handicap for global weather forecast models. The company’s proposed solution was in its name—wind borne.

Dean and Marshland set about designing small weather balloons they could release into the atmosphere and which would fly around the world for up to 40 days, relaying useful atmospheric data that could be packaged and sold to large, government-funded weather models.

Weather balloons provide invaluable data about atmospheric conditions—readings such as temperature, dewpoints, and pressures—that cannot be captured by surface observations or satellites. Such atmospheric “profiles” are helpful in setting the initial conditions models start with. The problem is that traditional weather balloons are cumbersome and only operate for a few hours. Because of this, the National Weather Service only launches them twice daily from about 100 locations in the United States.

To pee or not to pee? That is a question for the bladder—and the brain

💦

The basic urge to pee is surprisingly complex and can go awry as we age.

You’re driving somewhere, eyes on the road, when you start to feel a tingling sensation in your lower abdomen. That extra-large Coke you drank an hour ago has made its way through your kidneys into your bladder. “Time to pull over,” you think, scanning for an exit ramp.

To most people, pulling into a highway rest stop is a profoundly mundane experience. But not to neuroscientist Rita Valentino, who has studied how the brain senses, interprets, and acts on the bladder’s signals. She’s fascinated by the brain’s ability to take in sensations from the bladder, combine them with signals from outside of the body, like the sights and sounds of the road, then use that information to act—in this scenario, to find a safe, socially appropriate place to pee. “To me, it’s really an example of one of the beautiful things that the brain does,” she says.

Scientists used to think that our bladders were ruled by a relatively straightforward reflex—an “on-off” switch between storing urine and letting it go. “Now we realize it’s much more complex than that,” says Valentino, now director of the division of neuroscience and behavior at the National Institute of Drug Abuse. An intricate network of brain regions that contribute to functions like decision-making, social interactions, and awareness of our body’s internal state, also called interoception, participates in making the call.

In addition to being mind-bogglingly complex, the system is also delicate. Scientists estimate, for example, that more than 1 in 10 adults have overactive bladder syndrome—a common constellation of symptoms that includes urinary urgency (the sensation of needing to pee even when the bladder isn’t full), nocturia (the need for frequent nightly bathroom visits) and incontinence. Although existing treatments can improve symptoms for some, they don’t work for many people, says Martin Michel, a pharmacologist at Johannes Gutenberg University in Mainz, Germany, who researches therapies for bladder disorders. Developing better drugs has proven so challenging that all major pharmaceutical companies have abandoned the effort, he adds.

Recently, however, a surge of new research is opening the field to fresh hypotheses and treatment approaches. Although therapies for bladder disorders have historically focused on the bladder itself, the new studies point to the brain as another potential target, says Valentino. Combined with studies aimed at explaining why certain groups, such as post-menopausal women, are more prone to bladder problems, the research suggests that we shouldn’t simply accept symptoms like incontinence as inevitable, says Indira Mysorekar, a microbiologist at Baylor College of Medicine in Houston. We’re often told such problems are just part of getting old, particularly for women—“and that’s true to some extent,” she says. But many common issues are avoidable and can be treated successfully, she says: “We don’t have to live with pain or discomfort.”

A delicate balance

The human bladder is, at the most basic level, a stretchy bag. To fill to capacity—a volume of 400 to 500 milliliters (about 2 cups) of urine in most healthy adults—it must undergo one of the most extreme expansions of any organ in the human body, expanding roughly sixfold from its wrinkled, empty state.

To stretch that far, the smooth muscle wall that wraps around the bladder, called the detrusor, must relax. Simultaneously, sphincter muscles that surround the bladder’s lower opening, or urethra, must contract, in what scientists call the guarding reflex.

It’s not just sensory neurons (purple) that can detect stretch, pressure, pain and other sensations in the bladder. Other types of cells, like the umbrella-shaped cells that form the urothelium’s barrier against urine, can also sense and respond to mechanical forces — for example, by releasing chemical signaling molecules such as adenosine triphosphate (ATP) as the organ expands to fill with urine.

Filling or full, the bladder spends more than 95 percent of its time in storage mode, allowing us to carry out our daily activities without leaks. At some point—ideally, when we decide it’s time to pee—the organ switches from storage to release mode. For this, the detrusor muscle must contract forcefully to expel urine, while the sphincter muscles surrounding the urethra simultaneously relax to let urine flow out.

For a century, physiologists have puzzled over how the body coordinates the switch between storage and release. In the 1920s, a surgeon named Frederick Barrington, of University College London, went looking for the on-off switch in the brainstem, the lowermost part of the brain that connects with the spinal cord.

Working with sedated cats, Barrington used an electrified needle to damage slightly different areas in the pons, part of the brainstem that handles vital functions like sleeping and breathing. When the cats recovered, Barrington noticed that some demonstrated a desire to urinate—by scratching, circling, or squatting—but were unable to voluntarily go. Meanwhile, cats with lesions in a different part of the pons seemed to have lost any awareness of the need to urinate, peeing at random times and appearing startled whenever it happened. Clearly, the pons served as an important command center for urinary function, telling the bladder when to release urine.

Is a colonial-era drop in CO₂ tied to regrowing forests?

More trees, less carbon —

Carbon dioxide dropped after colonial contact wiped out Native Americans.

A slice through an ice core showing bubbles of trapped air.

British Antarctic Survey

Did the massive scale of death in the Americas following colonial contact in the 1500s affect atmospheric CO2 levels? That’s a question scientists have debated over the last 30 years, ever since they noticed a sharp drop in CO2 around the year 1610 in air preserved in Antarctic ice.

That drop in atmospheric CO2 levels is the only significant decline in recent millennia, and scientists suggested that it was caused by reforestation in the Americas, which resulted from their depopulation via pandemics unleashed by early European contact. It is so distinct that it was proposed as a candidate for the marker of the beginning of a new geological epoch—the “Anthropocene.”

But the record from that ice core, taken at Law Dome in East Antarctica, shows that CO2 starts declining a bit late to match European contact, and it plummets over just 90 years, which is too drastic for feasible rates of vegetation regrowth. A different ice core, drilled in the West Antarctic, showed a more gradual decline starting earlier, but lacked the fine detail of the Law Dome ice.

Which one was right? Beyond the historical interest, it matters because it is a real-world, continent-scale test of reforestation’s effectiveness at removing CO2 from the atmosphere.

In a recent study, Amy King of the British Antarctic Survey and colleagues set out to test if the Law Dome data is a true reflection of atmospheric CO2 decline, using a new ice core drilled on the “Skytrain Ice Rise” in West Antarctica.

Precious tiny bubbles

In 2018, scientists and engineers from the British Antarctic Survey and the University of Cambridge drilled the ice core, a cylinder of ice 651 meters long by 10 centimeters in diameter (2,136 feet by 4 inches), from the surface down to the bedrock. The ice contains bubbles of air that got trapped as snow fell, forming tiny capsules of past atmospheres.

The project’s main aim was to investigate ice from the time about 125,000 years ago when the climate was about as warm as it is today. But King and colleagues realized that the younger portion of ice could shed light on the 1610 CO2 decline.

“Given the resolution of what we could obtain with Skytrain Ice Rise, we predicted that, if the drop was real in the atmosphere as in Law Dome, we should see the drop in Skytrain, too,” said Thomas Bauska of the British Antarctic Survey, a co-author of the new study.

The ice core was cut into 80-centimeter (31-inch) lengths, put into insulated boxes, and shipped to the UK, all the while held at -20°C (-4°F) to prevent it from melting and releasing its precious cargo of air from millennia ago. “That’s one thing that keeps us up at night, especially as gas people,” said Bauska.

In the UK they took a series of samples across 31 depth intervals spanning the period from 1454 to 1688 CE: “We went in and sliced and diced our ice core as much as we could,” said Bauska. They sent the samples, still refrigerated, off to Oregon State University where the CO2 levels were measured.

The results didn’t show a sharp drop of CO2—instead, they showed a gentler CO2 decline of about 8 ppm over 157 years between 1516 and 1670 CE, matching the other West Antarctic ice core.

“We didn’t see the drop,” said Bauska, “so we had to say, OK, is our understanding of how smooth the records are accurate?”

A tent on the Antarctic ice where the core is cut into segments for shipping.

British Antarctic Survey

To test if the Skytrain ice record is too blurry to show a sharp 1610 drop, they analyzed the levels of methane in the ice. Because methane is much less soluble in water than CO2, they were able to melt continuously along the ice core to liberate the methane and get a more detailed graph of its concentration than was possible for CO2. If the atmospheric signal was blurred in Skytrain, it should have smoothed the methane record. But it didn’t.

“We didn’t see that really smoothed out methane record,” said Bauska, “which then told us the CO2 record couldn’t have been that smoothed.”

In other words, the gentler Skytrain CO2 signal is real, not an artifact.

Does this mean the sharp drop at 1610 in the Law Dome data is an artifact? It looks that way, but Bauska was cautious, saying, “the jury will still be out until we actually get either re-measurements of the Law Dome, or another ice core drilled with a similarly high accumulation.”

Boeing’s Starliner test flight scrubbed again after hold in final countdown

Hold Hold Hold —

The ground launch sequencer computer called a hold at T-minus 3 minutes, 50 seconds.

NASA commander Butch Wilmore exits the Starliner spacecraft Saturday following the scrubbed launch attempt.

A computer controlling the Atlas V rocket’s countdown triggered an automatic hold less than four minutes prior to liftoff of Boeing’s commercial Starliner spacecraft Saturday, keeping the crew test flight on the ground at least a few more days.

NASA astronauts Butch Wilmore and Suni Williams were already aboard the spacecraft when the countdown stopped due to a problem with a ground computer. “Hold. Hold. Hold,” a member of the Atlas V launch team called out on an audio feed.

With the hold, the mission missed an instantaneous launch opportunity at 12:25 pm EDT (16:25 UTC), and later Saturday, NASA announced teams will forego a launch opportunity Sunday. The next chance to send Starliner into orbit will be 10:52 am EDT (14:52 UTC) Wednesday. The mission has one launch opportunity every one to two days, when the International Space Station’s orbital track moves back into proper alignment with the Atlas V rocket’s launch pad in Florida.

Wilmore and Williams will take the Starliner spacecraft on its first crew flight into low-Earth orbit. The capsule will dock with the International Space Station around a day after launch, spend at least a week there, then return to a parachute-assisted landing at one of two landing zones in New Mexico or Arizona. Once operational, Boeing’s Starliner will join SpaceX’s Crew Dragon capsule to give NASA two independent human-rated spacecraft for transporting astronauts to and from the space station.

It’s been a long road to get here with the Starliner spacecraft, and there’s more work to do before the capsule’s long-delayed first flight with astronauts.

Technicians from United Launch Alliance, builder of the Atlas V rocket, will begin troubleshooting the computer glitch at the launch pad Saturday evening, after draining propellant from the launch vehicle. Early indications suggest that a card in one of three computers governing the final minutes of the Atlas V’s countdown didn’t boot up as quickly as anticipated.

“You can imagine a large rack that is a big computer where the functions of the computer as a controller are broken up separately into individual cards or printed wire circuit boards with their logic devices,” said Tory Bruno, ULA’s president and CEO. “They’re all standalone, but together it’s an integrated controller.”

The computers are located at the launch pad inside a shelter near the base of the Atlas V rocket at Cape Canaveral Space Force Station. All three computers must be fully functioning in the final phase of the countdown to ensure triple redundancy. At the moment of liftoff, these computers control things like retracting umbilical lines and releasing bolts holding the rocket to its mobile launch platform.

Two of the computers activated as the final countdown sequence began at T-minus 4 minutes. A single card in the third computer took about six more seconds to come online, although it did boot up eventually, Bruno said.

“Two came up normally and the third one came up, but it was slow to come up, and that tripped a red line,” he said.
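
To illustrate the kind of all-up gate described here, the toy sketch below requires every redundant controller card to report ready within a time window. It is not ULA’s ground launch sequencer, and the window and boot times are hypothetical numbers.

```python
# Toy model of a triple-redundancy readiness gate like the one the article
# describes: every controller card must come up within its window, or the
# countdown holds. Not ULA's actual software; all numbers are hypothetical.
BOOT_WINDOW_SECONDS = 4.0  # hypothetical allowed startup time per card

def all_cards_ready(boot_times_seconds):
    """True only if every redundant card booted within the allowed window."""
    return all(t <= BOOT_WINDOW_SECONDS for t in boot_times_seconds)

# Two cards come up normally; the third is several seconds slow, tripping a hold.
observed = [1.2, 1.5, 10.3]
print("GO" if all_cards_ready(observed) else "HOLD HOLD HOLD")
```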

A disappointment

Wilmore and Williams, both veteran astronauts and former US Navy test pilots, exited the Starliner spacecraft with the help of Boeing’s ground team. They returned to NASA crew quarters at the nearby Kennedy Space Center to wait for the next launch attempt.

The schedule for the next try will depend on what ULA workers find when they access the computers at the launch pad. Officials initially said they could start another launch countdown early Sunday if they found a simple solution to the computer problem, such as swapping out a faulty card. The computers are networked together, but the architecture is designed with replaceable cards, each responsible for different functions during the countdown, to allow for a quick fix without having to replace the entire unit, Bruno said.

United Launch Alliance's Atlas V rocket and Boeing's Starliner spacecraft at Cape Canaveral Space Force Station, Florida.

Later Saturday, NASA announced the launch won’t happen Sunday, giving teams additional time to assess the computer issue. The next launch opportunities are Wednesday and Thursday.

Bruno said ULA’s engineers suspect a hardware problem or a network communication glitch caused the computer issue during Saturday’s countdown. That is what ULA’s troubleshooting team will try to determine overnight. NASA said officials will share another update Sunday.

If it doesn’t get off the ground by Thursday, the Starliner test flight could face a longer delay to allow time for ULA to change out limited-life batteries on the Atlas V rocket. Bruno said the battery swap would take about 10 days.

Saturday’s aborted countdown was the latest in a string of delays for Boeing’s Starliner program. The spacecraft’s first crew test flight is running seven years behind the schedule Boeing announced when NASA awarded the company a $4.2 billion contract for the crew capsule in 2014. Put another way, Boeing has arrived at this moment nine years after the company originally said the spacecraft could be operational, when the program was first announced in 2010.

“Of course, this is emotionally disappointing,” said Mike Fincke, a NASA astronaut and a backup to Wilmore and Williams on the crew test flight. “I know Butch and Suni didn’t sound disappointed when we heard them on the loops, and it’s because it comes back to professionalism.”

NASA and Boeing were on the cusp of launching the Starliner test flight May 6, but officials called off the launch attempt due to a valve problem on the Atlas V rocket. Engineers later discovered a helium leak on the Starliner spacecraft’s service module, but managers agreed to proceed with the launch Saturday if the leak did not worsen during the countdown.

A check of the helium system Saturday morning showed the leak rate had decreased from a prior measurement, and it was no longer a constraint to launch. Instead, a different problem emerged to keep Starliner on Earth.

“Everybody is a little disappointed, but you kind of roll your sleeves up and get right back to work,” said Steve Stich, manager of NASA’s commercial crew program.
