Author name: Mike M.

Unleashing Transformation

AI isn’t just another tool in the technology toolkit; it’s a revolution waiting to be led. As tech leaders, this is your moment—not merely to optimize but to revolutionize. This isn’t about minor efficiency gains; it’s about redefining what’s possible. AI has the potential to transform your specialists into versatile, strategic thinkers and to amplify your generalists into powerhouses of productivity. As the leader, you’re at the helm of this revolution, so lean into it. This is your chance to create something spectacular, to be the one who leads the way across the finish line. And when you cross it, don’t just celebrate—let everyone know you’re setting a new standard.

1. Start Small, But Think Big

Revolutions don’t always start with fireworks. They start with steady wins that build momentum. In AI, begin with small, budgetable projects—ones that can scale over time. These are about creating quick, valuable wins that prove AI’s worth to the business. But as you do this, keep the bigger picture in mind. These small steps should ladder up to a vision that’s much larger. With each project, you’re setting the stage for bigger transformations, paving the way for AI to eventually touch every corner of the organization.

2. Make Trust the Core Metric

In today’s IT landscape, trust is everything. The greatest silent threat to modern enterprises isn’t a technical vulnerability but shadow IT—the projects people start outside of sanctioned channels because they don’t trust IT to deliver. And with shadow IT comes unmanaged risk, scattered governance, and countless security gaps. To counteract this, focus on trust as your ultimate KPI. Trust isn’t measured by words; it’s seen in the number of projects in your backlog and the speed at which they’re delivered. If your backlog is robust and your delivery is steady, trust is growing. This isn’t just an IT metric; it’s a company-wide indicator of how aligned and connected your teams are. Bubble these metrics up, celebrate them, and make sure the whole organization knows that trust in IT is climbing.

3. Champions: The Lifeblood of Transformational Success

In AI and beyond, champions are everything. Champions don’t just amplify your work—they are the lifeblood of a culture of change. Think of them as the ultimate multiplier, bringing new projects to you and generating excitement for what’s possible. They’re the ones telling the story of AI’s value to their peers and advocating for your team’s contributions. The presence of champions signals that you’re creating a sustainable, scalable transformation that resonates at every level.

But here’s the kicker: champions don’t come from rigid structures or executive edicts. They’re grown organically, at the peer level, where their influence is strongest. Don’t force it or set arbitrary criteria; let champions emerge naturally, based on their enthusiasm and impact. Executive leaders can define what a champion looks like and provide air cover when needed, but let the team breathe life into it. Trust me, if you create an environment where people feel valued and rewarded for driving change, champions will come out in force.

And if you’re doing it right, champions will bring champions. With each new advocate in your ranks, you’re not only building momentum—you’re creating an unstoppable movement. A movement where your backlog is filled not by top-down initiatives but by genuine, grassroots demand for AI to make work better, faster, and more exciting.

4. Embrace Failure as a Learning Engine

The path to AI-driven success isn’t linear. It’s a loop of small experiments, constant adjustments, and, yes, failures. Each failure is just as valuable as a win; it’s a guidepost pointing out what doesn’t work so you can zero in on what does. If a project falters, don’t overanalyze. Just pick another approach, adjust, and try again. Like any scientist, identify your variables, change one at a time, and see what sticks. Failure, in this context, isn’t the enemy—it’s a tool for refinement, a path to the ideal solution.

5. Build a Culture of Feedback and Recognition

For this revolution to succeed, feedback must flow freely. You want all ideas—not just the “good” ones. Keep feedback channels open and easy, and make sure people know they’re being heard. Even when a suggestion doesn’t pan out, employees should feel valued in the process. Celebrate wins loudly and visibly. Acknowledge everyone who contributes to a successful project, regardless of their role. Set up a dashboard to track accepted ideas and feature requests, and make it public. Broadcast the wins far and wide—in newsletters, on office screens, in board reports. Recognition shouldn’t be just an afterthought; it should be a cornerstone of the culture you’re building.

Rewarding each accepted idea, even in small ways like a coffee gift card, creates a culture where people feel inspired to bring their best ideas forward. It’s not about setting up hoops to jump through; it’s about creating a space where people are excited to contribute.

6. Lead the Charge, Don’t Micromanage the Details

Your role as a leader isn’t in the trenches; it’s in the vision. Enable your team to succeed by setting the direction, then letting them own the journey. Guide, support, and celebrate their wins, but resist the urge to do the work for them. Give them the autonomy to test, iterate, and implement. This approach builds both capability and confidence, giving your team the space to become their own champions for change.

7. When You’ve Built Enough Champions, Scale Up

When the number of champions in your organization reaches a critical mass, you’ll have the trust and support to move from smaller projects to transformative ones. By then, your backlog will be brimming with projects that have organic buy-in, and your team will be experienced enough to handle larger, more complex initiatives. This is where the revolution goes full-scale. And remember: the more you focus on trust, champion growth, and continuous feedback, the easier it will be to sustain this momentum.

Call to Action: Seize the Revolution

The era of incrementalism is over. This is your chance to redefine what it means to be a transformational leader. Trust, champions, and a culture of continuous learning aren’t just buzzwords—they’re the foundation of an AI-driven revolution that you, as a tech leader, are uniquely positioned to lead. Don’t just let AI happen to your organization; use it to drive unparalleled value and unleash your team’s true potential.

And if you’re ready to go deeper, to push harder, and to make this transformation a reality, let’s talk. Reach out to me and my team to explore how we can support you on this journey. Together, we’ll make sure your organization doesn’t just adopt AI but thrives because of it.

Apple silicon Macs will get their ultimate gaming test with Cyberpunk 2077 release

Cyberpunk 2077, one of the most graphically demanding and visually impressive games in recent years, will soon get a Mac release, according to developer and publisher CD Projekt Red.

The announcement was published on CD Projekt Red’s blog and also appeared briefly during Apple’s pre-recorded MacBook Pro announcement video. The game will be sold on the Mac App Store, Steam, GOG, and the Epic Games Store when it launches, and it will be labeled Cyberpunk 2077: Ultimate Edition, which simply means it also includes Phantom Liberty, the expansion released a couple of years after the original game.

Cyberpunk 2077 launched in a rough state in 2020, especially on low-end hardware. Subsequent patches and a significant overhaul with Phantom Liberty largely redeemed it in critics’ eyes—the result of all that post-launch work is the version Mac users will get.

Apple has been working with AAA game publishers to bring games made for consoles or Windows gaming PCs to the Mac and iPhone, including Assassin’s Creed Mirage, Death Stranding, and Resident Evil Village, among others. But the addition of Cyberpunk 2077 is notable because of its history of running poorly on low-end hardware, and because it uses newer technologies like ray-traced illumination, reflections, and shadows. It also relies heavily on AI upscaling like DLSS or FSR to be playable even on high-end machines.

Proton is the latest entrant in the quirky “VPN for your TV” market

Netflix started blocking VPN and proxy providers as early as 2015, then stepped up its efforts in 2021. VPN providers aiming to keep offering geofence-avoiding services to customers would sometimes lease IP addresses generally associated with residential subnets. This resulted in Netflix banning larger swaths of the IP addresses that VPNs were using as exit proxies.

Amazon’s Prime Video, Paramount+, and other services, including the BBC, have similarly ramped up efforts to block anything resembling tunneled traffic. Proton has, for example, a guide to “unblock Amazon Prime Video with Proton VPN”; Proton also writes on that page that it “does not condone the use of our VPN service to bypass copyright regulations.”

You can search the web and find freshly updated lists of the best VPNs for getting around various services’ geo-filtering blocks, but the fact that so many are dated by the year, or even month, gives you some clue as to how effective any one solution may be.

For the purposes of getting back to the content you’re entitled to view, or maybe keeping your viewing habits private on an Apple TV you’re using outside your home, Proton VPN is likely more useful. As for the other stuff, hey, it might be worth a shot. Using the Apple TV app requires a paid Proton VPN plan.

“Impact printing” is a cement-free alternative to 3D-printed structures

Recently, construction company ICON announced that it is close to completing the world’s largest 3D-printed neighborhood in Georgetown, Texas. This isn’t the only 3D-printed housing project. Hundreds of 3D-printed homes are under construction in the US and Europe, and more such housing projects are in the pipeline.

There are many factors fueling the growth of 3D printing in the construction industry. It reduces the construction time; a home that could take months to build can be constructed within days or weeks with a 3D printer. Compared to traditional methods, 3D printing also reduces the amount of material that ends up as waste during construction. These advantages lead to reduced labor and material costs, making 3D printing an attractive choice for construction companies.

A team of researchers from the Swiss Federal Institute of Technology (ETH) Zurich, however, claims to have developed a robotic construction method that is even better than 3D printing. They call it impact printing, and instead of typical construction materials, it uses Earth-based materials such as sand, silt, clay, and gravel to make homes. According to the researchers, impact printing is less carbon-intensive and much more sustainable and affordable than 3D printing.

This is because Earth-based materials are abundant, recyclable, available at low costs, and can even be excavated at the construction site. “We developed a robotic tool and a method that could take common material, which is the excavated material on construction sites, and turn it back into usable building products, at low cost and efficiently, with significantly less CO2 than existing industrialized building methods, including 3D printing,” said Lauren Vasey, one of the researchers and an SNSF Bridge Fellow at ETH Zurich.

How does impact printing work?

Excavated materials can’t be used directly for construction. So before beginning the impact printing process, researchers prepare a mix of Earth-based materials that has a balance of fine and coarse particles, ensuring both ease of use and structural strength. Fine materials like clay act as a binder, helping the particles stick together, while coarser materials like sand or gravel make the mix more stable and strong. This optimized mix is designed such that it can move easily through the robotic system without getting stuck or causing blockages.

How The New York Times is using generative AI as a reporting tool

If you don’t have a 1960s secretary who can do your audio transcription for you, AI tools can now serve as a very good stand-in. Credit: Getty Images

This rapid advancement is definitely bad news for people who make a living transcribing spoken words. But for reporters like those at the Times—who can now transcribe hundreds of hours of audio quickly and accurately at a much lower cost—these AI systems are just another important tool in the reporting toolbox.
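The Times hasn’t said in this excerpt exactly which transcription system it used. Purely as an illustration, here is a minimal sketch of batch transcription with the open-source Whisper library; the model size and folder path are placeholder assumptions, not details from the article.

```python
# A minimal sketch of batch transcription with the open-source Whisper library.
# The model size and the ./interviews/ folder are illustrative placeholders,
# not details reported by the Times.
import glob

import whisper  # pip install openai-whisper

model = whisper.load_model("medium")  # bigger models are slower but more accurate

for path in glob.glob("interviews/*.mp3"):
    result = model.transcribe(path)
    # Each result contains the full transcript text plus timestamped segments.
    with open(path + ".txt", "w", encoding="utf-8") as out:
        out.write(result["text"])
```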

Leave the analysis to us?

With the automated transcription done, the NYT reporters still faced the difficult task of reading through 5 million words of transcribed text to pick out relevant, reportable news. To do that, the team says it “employed several large-language models,” which let them “search the transcripts for topics of interest, look for notable guests and identify recurring themes.”
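The Times doesn’t publish its prompts or name the specific models behind that workflow. The sketch below shows the general pattern under stated assumptions: the OpenAI client, model name, chunk size, and prompt wording are all illustrative placeholders, not the Times’ actual tooling.

```python
# Sketch of the general pattern described above: split long transcripts into
# chunks and ask an LLM to flag topics of interest, notable guests, and
# recurring themes. Model name, prompt, and chunk size are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are helping a reporter review podcast transcripts. "
    "List any topics of interest, notable guests, and recurring themes "
    "in the following excerpt, with a short supporting quote for each."
)

def chunk(text: str, size: int = 8000) -> list[str]:
    """Split a transcript into roughly size-character pieces."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def review_transcript(transcript: str) -> list[str]:
    """Return the model's notes for each chunk of one transcript."""
    findings = []
    for piece in chunk(transcript):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[
                {"role": "system", "content": PROMPT},
                {"role": "user", "content": piece},
            ],
        )
        findings.append(response.choices[0].message.content)
    return findings
```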

Summarizing complex sets of documents and identifying themes has long been touted as one of the most practical uses for large language models. Last year, for instance, Anthropic hyped the expanded context window of its Claude model by showing off its ability to absorb the entire text of The Great Gatsby and “then interactively answer questions about it or analyze its meaning,” as we put it at the time. More recently, I was wowed by Google’s NotebookLM and its ability to form a cogent review of my Minesweeper book and craft an engaging spoken-word podcast based on it.

There are important limits to LLMs’ text analysis capabilities, though. Earlier this year, for instance, an Australian government study found that Meta’s Llama 2 was much worse than humans at summarizing public responses to a government inquiry committee.

Australian government evaluators found AI summaries were often “wordy and pointless—just repeating what was in the submission.” Credit: Getty Images

In general, the report found that the AI summaries showed “a limited ability to analyze and summarize complex content requiring a deep understanding of context, subtle nuances, or implicit meaning.” Even worse, the Llama summaries often “generated text that was grammatically correct, but on occasion factually inaccurate,” highlighting the ever-present problem of confabulation inherent to these kinds of tools.

Study: DNA corroborates “Well-man” tale from Norse saga

The results: The Well-man was indeed male, between 30 and 40, with blue eyes and blond or light-brown hair, and his ancestry was traced to southern Norway, most likely present-day Vest-Agder. This is interesting because King Sverre’s men were from central Norway, and it had long been assumed that the dead body thrown into the well was part of that army. It was the invading Baglers who hailed from southern Norway. The authors are careful to note that one cannot definitively conclude that therefore the Well-man was a Bagler, but it’s certainly possible that the Baglers tossed one of their own dead into the well.

As for whether the action was a form of 12th-century biological warfare intended to poison the well, the authors weren’t able to identify any pathogens in their analysis. But that might be because of the strict decontamination procedures that were used to prepare the tooth samples, which may have also removed traces of any pathogen DNA. So they could not conclude one way or another whether the Well-man had been infected with a deadly pathogen at the time of his death.

Seven Well-man teeth recovered from the excavation. Credit: Norwegian Institute for Cultural Heritage Research

“It was a compromise between removing surface contamination of the people who have touched the tooth and then removing some of the possible pathogens. There are lots of ethical considerations,” said co-author Martin Ellegaard, also of the Norwegian University of Science and Technology. “We need to consider what kind of tests we’re doing now because it will limit what we can do in the future.”

The fact that the Well-man hailed from southern Norway indicates that the distinctive genetic drift observed in southern Norway populations already existed during King Sverre’s reign. “This has implications for our understanding of Norwegian populations, insofar as it implies that this region must have been relatively isolated not only since that time, but also at least for a few hundred years beforehand and perhaps longer,” the authors concluded. Future research sequencing more ancient Norwegian DNA would shed further light on this finding—perhaps even the remains of the Norwegian Saint Olaf, believed to be buried near Trondheim Cathedral.

iScience, 2024. DOI: 10.1016/j.isci.2024.111076 (About DOIs).

Google, Microsoft, and Perplexity promote scientific racism in AI search results


AI-powered search engines are surfacing deeply racist, debunked research.

Literal Nazis

LOS ANGELES, CA – APRIL 17: Members of the National Socialist Movement (NSM) salute during a rally near City Hall on April 17, 2010, in Los Angeles, California. Credit: David McNew via Getty

AI-infused search engines from Google, Microsoft, and Perplexity have been surfacing deeply racist and widely debunked research promoting race science and the idea that white people are genetically superior to nonwhite people.

Patrik Hermansson, a researcher with UK-based anti-racism group Hope Not Hate, was in the middle of a monthslong investigation into the resurgent race science movement when he needed to find out more information about a debunked dataset that claims IQ scores can be used to prove the superiority of the white race.

He was investigating the Human Diversity Foundation, a race science company funded by Andrew Conru, the US tech billionaire who founded Adult Friend Finder. The group, founded in 2022, was the successor to the Pioneer Fund, a group founded by US Nazi sympathizers in 1937 with the aim of promoting “race betterment” and “race realism.”

Hermansson logged in to Google and began looking up results for the IQs of different nations. When he typed in “Pakistan IQ,” rather than getting a typical list of links, Hermansson was presented with Google’s AI-powered Overviews tool, which, confusingly to him, was on by default. It gave him a definitive answer of 80.

When he typed in “Sierra Leone IQ,” Google’s AI tool was even more specific: 45.07. The result for “Kenya IQ” was equally exact: 75.2.

Hermansson immediately recognized the numbers being fed back to him. They were being taken directly from the very study he was trying to debunk, published by one of the leaders of the movement that he was working to expose.

The results Google was serving up came from a dataset published by Richard Lynn, a University of Ulster professor who died in 2023 and was president of the Pioneer Fund for two decades.

“His influence was massive. He was the superstar and the guiding light of that movement up until his death. Almost to the very end of his life, he was a core leader of it,” Hermansson says.

A WIRED investigation confirmed Hermansson’s findings and discovered that other AI-infused search engines—Microsoft’s Copilot and Perplexity—are also referencing Lynn’s work when queried about IQ scores in various countries. While Lynn’s flawed research has long been used by far-right extremists, white supremacists, and proponents of eugenics as evidence that the white race is genetically and intellectually superior to nonwhite races, experts now worry that its promotion through AI could help radicalize others.

“Unquestioning use of these ‘statistics’ is deeply problematic,” Rebecca Sear, director of the Center for Culture and Evolution at Brunel University London, tells WIRED. “Use of these data therefore not only spreads disinformation but also helps the political project of scientific racism—the misuse of science to promote the idea that racial hierarchies and inequalities are natural and inevitable.”

To back up her claim, Sear pointed out that Lynn’s research was cited by the white supremacist who committed the mass shooting in Buffalo, New York, in 2022.

Google’s AI Overviews were launched earlier this year as part of the company’s effort to revamp its all-powerful search tool for an online world being reshaped by artificial intelligence. For some search queries, the tool, which is only available in certain countries right now, gives an AI-generated summary of its findings. The tool pulls the information from the Internet and gives users the answers to queries without needing to click on a link.

The AI Overview answer does not always immediately say where the information is coming from, but after complaints that it showed no source articles, Google now puts the title of one of the links to the right of the AI summary. AI Overviews have already run into a number of issues since launching in May, forcing Google to admit it had botched the heavily hyped rollout. AI Overviews is turned on by default for search results and can’t be turned off without installing third-party extensions. (“I haven’t enabled it, but it was enabled,” Hermansson, the researcher, tells WIRED. “I don’t know how that happened.”)

In the case of the IQ results, Google referred to a variety of sources, including posts on X, Facebook, and a number of obscure listicle websites, including World Population Review. In nearly all of these cases, when you click through to the source, the trail leads back to Lynn’s infamous dataset. (In some cases, while the exact numbers Lynn published are referenced, the websites do not cite Lynn as the source.)

When querying Google’s Gemini AI chatbot directly using the same terms, it provided a much more nuanced response. “It’s important to approach discussions about national IQ scores with caution,” read text that the chatbot generated in response to the query “Pakistan IQ.” The text continued: “IQ tests are designed primarily for Western cultures and can be biased against individuals from different backgrounds.”

Google tells WIRED that its systems weren’t working as intended in this case and that it is looking at ways it can improve.

“We have guardrails and policies in place to protect against low quality responses, and when we find Overviews that don’t align with our policies, we quickly take action against them,” Ned Adriance, a Google spokesperson, tells WIRED. “These Overviews violated our policies and have been removed. Our goal is for AI Overviews to provide links to high quality content so that people can click through to learn more, but for some queries there may not be a lot of high quality web content available.”

While WIRED’s tests suggest AI Overviews have now been switched off for queries about national IQs, the results still amplify the incorrect figures from Lynn’s work in what’s called a “featured snippet,” which displays some of the text from a website before the link.

Google did not respond to a question about this update.

But it’s not just Google promoting these dangerous theories. When WIRED put the same query to other AI-powered online search services, we found similar results.

Perplexity, an AI search company that has been found to make things up out of thin air, responded to a query about “Pakistan IQ” by stating that “the average IQ in Pakistan has been reported to vary significantly depending on the source.”

It then lists a number of sources, including a Reddit thread that relied on Lynn’s research and the same World Population Review site that Google’s AI Overview referenced. When asked for Sierra Leone’s IQ, Perplexity directly cited Lynn’s figure: “Sierra Leone’s average IQ is reported to be 45.07, ranking it among the lowest globally.”

Perplexity did not respond to a request for comment.

Microsoft’s Copilot chatbot, which is integrated into its Bing search engine, generated confident text—“The average IQ in Pakistan is reported to be around 80”—citing a website called IQ International, which does not reference its sources. When asked for “Sierra Leone IQ,” Copilot’s response said it was 91. The source linked in the results was a website called Brainstats.com, which references Lynn’s work. Copilot also referenced Brainstats.com work when queried about IQ in Kenya.

“Copilot answers questions by distilling information from multiple web sources into a single response,” Caitlin Roulston, a Microsoft spokesperson, tells WIRED. “Copilot provides linked citations so the user can further explore and research as they would with traditional search.”

Google added that part of the problem it faces in generating AI Overviews is that, for some very specific queries, there’s an absence of high quality information on the web—and there’s little doubt that Lynn’s work is not of high quality.

“The science underlying Lynn’s database of ‘national IQs’ is of such poor quality that it is difficult to believe the database is anything but fraudulent,” Sear said. “Lynn has never described his methodology for selecting samples into the database; many nations have IQs estimated from absurdly small and unrepresentative samples.”

Sear points to Lynn’s estimation of the IQ of Angola being based on information from just 19 people and that of Eritrea being based on samples of children living in orphanages.

“The problem with it is that the data Lynn used to generate this dataset is just bullshit, and it’s bullshit in multiple dimensions,” Rutherford said, pointing out that the Somali figure in Lynn’s dataset is based on one sample of refugees aged between 8 and 18 who were tested in a Kenyan refugee camp. He adds that the Botswana score is based on a single sample of 104 Tswana-speaking high school students aged between 7 and 20 who were tested in English.

Critics of the use of national IQ tests to promote the idea of racial superiority point out not only that the quality of the samples being collected is weak, but also that the tests themselves are typically designed for Western audiences, and so are biased before they are even administered.

“There is evidence that Lynn systematically biased the database by preferentially including samples with low IQs, while excluding those with higher IQs for African nations,” Sear added, a conclusion backed up by a preprint study from 2020.

Lynn published various versions of his national IQ dataset over the course of decades, the most recent of which, called “The Intelligence of Nations,” was published in 2019. Over the years, Lynn’s flawed work has been used by far-right and racist groups as evidence to back up claims of white superiority. The data has also been turned into a color-coded map of the world, showing sub-Saharan African countries with purportedly low IQ colored red compared to the Western nations, which are colored blue.

“This is a data visualization that you see all over [X, formerly known as Twitter], all over social media—and if you spend a lot of time in racist hangouts on the web, you just see this as an argument by racists who say, ‘Look at the data. Look at the map,’” Rutherford says.

But the blame, Rutherford believes, does not lie with the AI systems alone, but also with a scientific community that has been uncritically citing Lynn’s work for years.

“It’s actually not surprising [that AI systems are quoting it] because Lynn’s work in IQ has been accepted pretty unquestioningly from a huge area of academia, and if you look at the number of times his national IQ databases have been cited in academic works, it’s in the hundreds,” Rutherford said. “So the fault isn’t with AI. The fault is with academia.”

This story originally appeared on wired.com

Ars Live: What else can GLP-1 drugs do? Join us Tuesday for a discussion.

News and talk of GLP-1 drugs are everywhere these days—from their smash success in treating Type 2 diabetes and obesity to their astronomical pricing, drug shortages, compounding disputes, and what sometimes seems like an ever-growing list of other conditions the drugs could potentially treat. There are new headlines every day.

However, while the drugs have abruptly stolen the spotlight in recent years, researchers have been toiling away at developing and understanding them for decades, stretching back to the 1970s. And even since they were developed, the drugs still have held mysteries and unknowns. For instance, researchers thought for years that they worked directly in the gut to decrease blood sugar levels and make people feel full. After all, the drugs mimic an incretin hormone, glucagon-like peptide-1, that does exactly that. But, instead, studies have since found that they work in the brain.

In fact, the molecular receptors for GLP-1 are sprinkled in many places around the body. They’re found in the central nervous system, the heart, blood vessels, liver, and kidney. Their presence in the brain even plays a role in inflammation. As such, research on GLP-1 continues to flourish as scientists work to understand the role it could play in treating a range of other chronic conditions.

For the first time, beloved IDE JetBrains Rider will be available for free

The integrated development environment (IDE) Rider by JetBrains is now available for free for the first time ever.

After trialing non-commercial free licenses with other products like RustRover and Aqua, JetBrains has introduced a similar option for Rider. It also says this is a permanent change, not a limited-time initiative.

In a blog post announcing the change, JetBrains’ Ekaterina Ryabukha acknowledges that there are numerous cases where people use an IDE without any commercial intent—for example, hobbyists, open source developers, and educators or students. She also cites a Stack Overflow survey finding that 68 percent of professional developers “code outside of work as a hobby.”

Rider has always been a bit niche, but it’s often beloved by those who use it. Making it free could greatly expand its user base, and it could also make it more popular in the long run because learners could start with it without having to pay an annual fee, and some learners go pro.

It’s also good news for some macOS developers, as Microsoft not long ago chose to end support for Visual Studio on that platform. Yes, you can use VS Code, Xcode, or other options, but there were some types of projects that were left in the lurch, especially for developers who don’t find VS Code robust enough for their purposes.

There is one drawback that might matter to some: users working in Rider on the non-commercial free license “cannot opt out of the collection of anonymous usage statistics.”

There are some edge cases that fall into a gray area when it comes to using a free license versus a paid one. Sometimes, projects that start without commercial intent become commercial later on. JetBrains simply says that “if your intentions change over time, you’ll need to reassess whether you still qualify for non-commercial use.”

Good Omens will wrap with a single 90-minute episode

The third and final season of Good Omens, Prime Video’s fantasy series adapted from the classic 1990 novel by Neil Gaiman and Terry Pratchett, will not be a full season after all, Deadline Hollywood reports. In the wake of allegations of sexual assault against Gaiman this summer, the streaming platform has decided that rather than a full slate of episodes, the series finale will be a single 90-minute episode—the equivalent of a TV movie.

(Major spoilers for the S2 finale of Good Omens below.)

As reported previously, the series is based on the original 1990 novel by Gaiman and the late Pratchett. Good Omens is the story of an angel, Aziraphale (Michael Sheen), and a demon, Crowley (David Tennant), who gradually become friends over the millennia and team up to avert Armageddon. Gaiman’s obvious deep-down, fierce love for this project—and the powerful chemistry between its stars—made the first season a sheer joy to watch. Apart from a few minor quibbles, it was pretty much everything book fans could have hoped for in a TV adaptation of Good Omens.

S2 found Aziraphale and Crowley getting back to normal when the archangel Gabriel (Jon Hamm) turned up unexpectedly at the door of Aziraphale’s bookshop with no memory of who he was or how he got there. The duo had to evade the combined forces of Heaven and Hell to solve the mystery of what happened to Gabriel and why.

In the cliffhanger S2 finale, the pair discovered that Gabriel had defied Heaven and refused to support a second attempt to bring about Armageddon. He hid his own memories from himself to evade detection. Oh, and he and Beelzebub (Shelley Conn) had fallen in love. They ran off together, and the Metatron (Derek Jacobi) offered Aziraphale Gabriel’s old job. That’s when Crowley professed his own love for the angel and asked him to leave Heaven and Hell behind, too. Aziraphale wanted Crowley to join him in Heaven instead. So Crowley kissed him and they parted. And once Aziraphale got to Heaven, he learned his task was to bring about the Second Coming.

Bird flu hit a dead end in Missouri, but it’s running rampant in California

So, in all, Missouri’s case count in the H5N1 outbreak will stay at one for now, and there remains no evidence of human-to-human transmission. Though both the household contact and the index case had evidence of an exposure, their identical blood test results and simultaneous symptom development suggest that they were exposed at the same time by a single source—what that source was, we may never know.

California and Washington

While the virus seems to have hit a dead end in Missouri, it’s still running rampant in California. Since state officials announced the first dairy herd infections at the end of August, the state has now tallied 137 infected herds and at least 13 infected dairy farm workers. California, the country’s largest dairy producer, now has the most herd infections and human cases in the outbreak, which was first confirmed in March.

In the briefing Thursday, officials announced another front in the bird flu fight. A chicken farm in Washington state with about 800,000 birds became infected with a different strain of H5 bird flu than the one circulating among dairy farms. This strain likely came from wild birds. While the chickens on the infected farm were being culled, the virus spread to farmworkers. So far, two workers have been confirmed to be infected, and five others are presumed to be positive.

As of publication time, at least 31 humans have been confirmed infected with H5 bird flu this year.

With the spread of bird flu in dairies and the fall bird migration underway, the virus will continue to have opportunities to jump to mammals and gain access to people. Officials have also expressed anxiety as seasonal flu ramps up, given influenza’s penchant for swapping genetic fragments to generate new viral combinations. Reassortment and repeated human exposures increase the risk of the virus adapting to spread from human to human and sparking an outbreak.

Google offers its AI watermarking tech as free open source toolkit

Google also notes that this kind of watermarking works best when there is a lot of “entropy” in the LLM distribution, meaning multiple valid candidates for each token (e.g., “my favorite tropical fruit is [mango, lychee, papaya, durian]”). In situations where an LLM “almost always returns the exact same response to a given prompt”—such as basic factual questions or models tuned to a lower “temperature”—the watermark is less effective.
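To make the entropy point concrete, here is a small illustrative calculation. The two example distributions are invented for illustration; the takeaway is that the more evenly a model spreads probability across valid next tokens, the more room a watermark has to nudge the choice without degrading the text.

```python
# Illustration of the entropy point: a watermark has more room to nudge token
# choices when probability is spread across many candidates. The two example
# distributions below are invented for illustration, not taken from SynthID.
import math

def entropy(probs):
    """Shannon entropy (in bits) of a next-token distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# "My favorite tropical fruit is ..." -- many valid candidates, high entropy.
open_ended = [0.3, 0.25, 0.25, 0.2]   # mango, lychee, papaya, durian
# A basic factual completion -- essentially one answer, near-zero entropy.
factual = [0.99, 0.005, 0.005]

print(entropy(open_ended))  # ~1.99 bits: plenty of room to bias the choice
print(entropy(factual))     # ~0.09 bits: the watermark has little to work with
```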

A diagram explaining how SynthID’s text watermarking works. Credit: Google / Nature

Google says SynthID builds on previous similar AI text watermarking tools by introducing what it calls a Tournament sampling approach. During the token-generation loop, this approach runs each potential candidate token through a multi-stage, bracket-style tournament, where each round is “judged” by a different randomized watermarking function. Only the final winner of this process makes it into the eventual output.
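Google’s Nature paper has the full algorithm; the toy sketch below captures only the bracket idea described above, in which candidate tokens advance through rounds, each judged by a different keyed scoring function, and the last survivor becomes the output token. The hash-based scoring and the fixed candidate list are simplified stand-ins, not SynthID’s actual implementation.

```python
# Rough sketch of the bracket idea: candidates advance through rounds, each
# judged by a different keyed pseudorandom function, and the last survivor is
# emitted. Hash-based scoring and a fixed candidate list are simplifications,
# not SynthID's actual implementation.
import hashlib

def round_score(token: str, context: str, round_key: int) -> int:
    """Deterministic pseudorandom score keyed by the round and recent context."""
    digest = hashlib.sha256(f"{round_key}|{context}|{token}".encode()).digest()
    return int.from_bytes(digest[:4], "big")

def tournament_sample(candidates: list[str], context: str, rounds: int = 3) -> str:
    """Run a bracket-style tournament over candidate tokens."""
    survivors = list(candidates)
    for r in range(rounds):
        next_round = []
        for i in range(0, len(survivors), 2):
            pair = survivors[i:i + 2]
            # The higher-scoring token in each pairing advances to the next round.
            next_round.append(max(pair, key=lambda t: round_score(t, context, r)))
        survivors = next_round
        if len(survivors) == 1:
            break
    return survivors[0]

print(tournament_sample(["mango", "lychee", "papaya", "durian"],
                        context="my favorite tropical fruit is"))
```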

Can they tell it’s Folgers?

Changing the token selection process of an LLM with a randomized watermarking tool could obviously have a negative effect on the quality of the generated text. But in its paper, Google shows that SynthID can be “non-distortionary” on the level of either individual tokens or short sequences of text, depending on the specific settings used for the tournament algorithm. Other settings can increase the “distortion” introduced by the watermarking tool while at the same time increasing the detectability of the watermark, Google says.

To test how any potential watermark distortions might affect the perceived quality and utility of LLM outputs, Google routed “a random fraction” of Gemini queries through the SynthID system and compared them to unwatermarked counterparts. Across 20 million total responses, users gave 0.1 percent more “thumbs up” ratings and 0.2 percent fewer “thumbs down” ratings to the watermarked responses, showing barely any human-perceptible difference across a large set of real LLM interactions.

Google’s research shows SynthID is more dependable than other AI watermarking tools, but its success rate depends heavily on length and entropy. Credit: Google / Nature

Google’s testing also showed its SynthID detection algorithm successfully detected AI-generated text significantly more often than previous watermarking schemes like Gumbel sampling. But the size of this improvement—and the total rate at which SynthID can successfully detect AI-generated text—depends heavily on the length of the text in question and the temperature setting of the model being used. SynthID was able to detect nearly 100 percent of 400-token-long AI-generated text samples from Gemma 7B-1T at a temperature of 1.0, for instance, compared to about 40 percent for 100-token samples from the same model at a 0.5 temperature.
