Author name: Tim Belzer


Another Jeff Bezos company has announced plans to develop a megaconstellation

The announcement came out of the blue, from Blue, on Wednesday.

The space company founded by Jeff Bezos, Blue Origin, said it was developing a new megaconstellation named TeraWave to deliver data speeds of up to 6Tbps anywhere on Earth. The constellation will consist of 5,408 optically interconnected satellites, with a majority in low-Earth orbit and the remainder in medium-Earth orbit.

The satellites in low-Earth orbit will provide up to 144Gbps through radio spectrum, whereas those in medium-Earth orbit will provide higher data rates through optical links.

“This provides the reliability and resilience needed for real-time operations and massive data movement,” Blue Origin’s chief executive, Dave Limp, said on social media. “It also provides backup connectivity during outages, keeping critical operations running. Plus, the ability to scale on demand and rapidly deploy globally while maintaining performance.”

Going for the enterprise market

Unlike other megaconstellations, including SpaceX’s Starlink, Blue Origin’s new constellation will not serve consumers or try to provide direct-to-cell communications. Rather, TeraWave will seek to serve “tens of thousands” of enterprise, data center, and government users who require reliable connectivity for critical operations.

The announcement was surprising for several reasons, but it may also represent a shrewd business decision.

It was surprising because Bezos’ other company, Amazon, has already spent more than half a decade developing its own megaconstellation, now known as Amazon Leo, which is presently authorized to deploy 3,236 satellites into low-Earth orbit. This service is intended to compete with Starlink, both through customer terminals and by providing services such as in-flight Wi-Fi.

However, the emergence of increased data needs from AI data centers and other operations must have convinced Bezos that Blue Origin should enter the competition for lucrative enterprise customers—an area in which Amazon Leo is also expected to compete.



Claude Codes #3

We’re back with all the Claude that’s fit to Code. I continue to have great fun with it and find useful upgrades, but the biggest reminder is that you need the art to have an end other than itself. Don’t spend too long improving your setup, or especially improving how you improve your setup, without actually working on useful things.

Odd Lots covered Claude Code. Fun episode, but won’t teach my regular readers much that is new.

Bradly Olsen at the Wall Street Journal reports Claude [Code and now Cowork are] Taking the AI World By Storm, and ‘Even Non-Nerds Are Blown Away.’

It is remarkable how everyone got the ‘Google is crushing everyone’ narrative going with Gemini 3, and then it took them a month to realize that Anthropic, with Claude Code and Claude Opus 4.5, is actually crushing everyone, at least among the cognoscenti, with growing momentum elsewhere. People are realizing you can know almost nothing and still use it to do essentially everything.

Are Claude Code and Codex having a ‘GPT moment’?

Wall St Engine: Morgan Stanley says Anthropic’s ClaudeCode + Cowork is dominating investor chatter and adding pressure on software.

They flag OpenRouter token growth “going vertical,” plus anecdotes that the Cowork launch pushed usage hard enough to crash Opus 4.5 and hit rate limits, framing it as another “GPT moment” and a net positive for AI capex.

They add that OpenAI sentiment is still shaky: some optimism around a new funding round and Blackwell-trained models in 2Q, but competitive worries are widening beyond $GOOGL to Anthropic, with Elon Musk saying the OpenAI for-profit conversion lawsuit heads to trial on April 27.

Claude Cowork is now available to Pro subscribers, not only Max subscribers.

Claude Cowork will ask explicit permission before all deletions, add new folders in the directory picker without starting over and make smarter connector suggestions.

Claude Code on the web gets a good looking diff view.

Claude Code for VSCode has now officially shipped; it’s been available for a while. To drag and drop files, hold shift.

Claude Code now has ‘community events’ in various cities. New York and San Francisco aren’t on the list, but also don’t need to be.

Claude Code upgraded to 2.1.9, and then to 2.1.10 and 2.1.11 which were tiny, and now has reached 2.1.14.

Few have properly updated for this sentence: ‘Claude Cowork was built in 1.5 weeks with Claude Code.’

Nabeel S. Qureshi: I don’t even see how you can be an AI ‘skeptic’ anymore when the *current* AI, right in front of us, is so good, e.g. see Claude Cowork being written by Claude Code in 1.5 weeks.

It’s over, the skeptics were wrong.

Planning mode now automatically clears context when you accept a plan.

Anthropic is developing a new Customize section for Claude to centralize Skills, connectors and upcoming commands for Claude Code. My understanding is that custom commands already exist if you want to create them, but reducing levels of friction, including levels of friction in reducing levels of friction, is often highly valuable. A way to browse skills and interact with the files easily, or see and manage your connectors, or an easy interface for defining new commands, seems great.

I highly recommend using Obsidian or another similar tool together with Claude Code. This gives you a visual representation of all the markdown files, and lets you easily navigate and search and edit them, and add more and so on. I think it’s well worth keeping it all human readable, where that human is you.

Heinrich calls it ‘vibe note taking’ whether or not you use Obsidian. I think the notes are a place you want to be less vibing and more intentional, and be systematically optimizing the notes, for both Claude Code and for your own use.

You can combine Obsidian and Claude Code directly via the Obsidian terminal plugin, but I don’t see any mechanical advantage to doing so.

Siqi Chen offers us /claude-continuous-learning. Claude’s evaluation is that this could be good if you’re working in codebases where you need to continuously learn things, but the overhead and risk of clutter are real.

Jasmine Sun created a tool to turn any YouTube podcast into a clean, grammatical PDF transcript with chapters and takeaways.

The big change with Claude Code version 2.1.7 was enabling MCP tool search auto mode by default, which triggers when MCP tools would take up more than 10% of the context window. You can disable this by adding ‘MCPSearch’ to ‘disallowedTools’ in settings. This seems big for people using a lot of MCPs at once, which could eat a lot of context.
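If you do want to opt out, here is a minimal sketch of what that settings change looks like. It assumes your user settings live at ~/.claude/settings.json (project-level settings.local.json works similarly); the ‘disallowedTools’ key comes from the release notes quoted below, but verify against the current docs before relying on it.

```python
# Minimal sketch: opt out of MCP tool search by adding "MCPSearch" to
# "disallowedTools", per the release notes. The settings path
# (~/.claude/settings.json) is an assumption -- adjust for your setup.
import json
from pathlib import Path

settings_path = Path.home() / ".claude" / "settings.json"
settings = json.loads(settings_path.read_text()) if settings_path.exists() else {}

disallowed = settings.setdefault("disallowedTools", [])
if "MCPSearch" not in disallowed:
    disallowed.append("MCPSearch")

settings_path.write_text(json.dumps(settings, indent=2))
print(f"Updated {settings_path}: disallowedTools = {disallowed}")
```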

Thariq (Anthropic): Today we’re rolling out MCP Tool Search for Claude Code.

As MCP has grown to become a more popular protocol and agents have become more capable, we’ve found that MCP servers may have up to 50+ tools and take up a large amount of context.

Tool Search allows Claude Code to dynamically load tools into context when MCP tools would otherwise take up a lot of context.

How it works:

– Claude Code detects when your MCP tool descriptions would use more than 10% of context

– When triggered, tools are loaded via search instead of preloaded

Otherwise, MCP tools work exactly as before. This resolves one of our most-requested features on GitHub: lazy loading for MCP servers. Users were documenting setups with 7+ servers consuming 67k+ tokens.

If you’re making an MCP server

Things are mostly the same, but the “server instructions” field becomes more useful with tool search enabled. It helps Claude know when to search for your tools, similar to skills.

If you’re making an MCP client

We highly suggest implementing the ToolSearchTool; you can find the docs here. We implemented it with a custom search function to make it work for Claude Code.

What about programmatic tool calling?

We experimented with doing programmatic tool calling such that MCP tools could be composed with each other via code. While we will continue to explore this in the future, we felt the most important need was to get Tool Search out to reduce context usage.

Tell us what you think here or on Github as you see the ToolSearchTool work.

With that solved, presumably you should be ‘thinking MCP’ at all times; it is now safe to load up tons of them even if you rarely use each one individually.

Well, yes, this is happening.

bayes: everyone 3 years ago: omg what if ai becomes too widespread and then it turns against us with the strategic advantage of our utter and total dependence

everyone now: hi claude here’s my social security number and root access to my brain i love you please make me rich and happy.

Some of us three years ago were pointing out, loud and clear, that exactly this was obviously going to happen, modulo various details. Now you can see it clearly.

Not giving Claude a lot of access is going to slow things down a lot. The only thing holding most people back was the worry things would accidentally get totally screwed up, and that risk is a lot lower now. Yes, obviously this all causes other concerns, including prompt injections, but in practice on an individual level the risk-reward calculation is rather clear. It’s not like Google didn’t effectively have root access to our digital lives already. And it’s not like a truly rogue AI couldn’t have done all these things without having to ask for the permissions.

The humans are going to be utterly dependent on the AIs in short order, and the AIs are going to have access, collectively, to essentially everything. Grok has root access to Pentagon classified information, so if you’re wondering where we draw the line the answer is there is no line. Let the right one in, and hope there is a right one?

What’s better than one agent? Multiple agents that work together and that don’t blow up your budget. Rohit Ghumare offers a guide to this.

Rohit Ghumare: Single agents hit limits fast. Context windows fill up, decision-making gets muddy, and debugging becomes impossible. Multi-agent systems solve this by distributing work across specialized agents, similar to how you’d structure a team.

The benefits are real:

  • Specialization: Each agent masters one domain instead of being mediocre at everything

  • Parallel processing: Multiple agents can work simultaneously on independent subtasks

  • Maintainability: When something breaks, you know exactly which agent to fix

  • Scalability: Add new capabilities by adding new agents, not rewriting everything

The tradeoff: coordination overhead. Agents need to communicate, share state, and avoid stepping on each other. Get this wrong and you’ve just built a more expensive failure mode.

You can do this with a supervisor agent, which scales to about 3-8 agents, if you need quality control and serial tasks and can take a speed hit. To scale beyond that you’ll need hierarchy, the same as you would with humans, which gets expensive in overhead, the same as it does in humans.

Or you can use a peer-to-peer swarm that communicates directly if there aren’t serial steps and the tasks need to cross-react and you can be a bit messy.

You can use a shared state and set of objects, or you can pass messages. You also need to choose a type of memory.

My inclination is by default you should use supervisors and then hierarchy. Speed takes a hit but it’s not so bad and you can scale up with more agents. Yes, that gets expensive, but in general the cost of the tokens is less important than the cost of human time or the quality of results, and you can be pretty inefficient with the tokens if it gets you better results.
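For concreteness, here is a toy sketch of the supervisor pattern in Python. The specialist prompts and the call_agent function are hypothetical placeholders rather than any particular framework; swap in real model or CLI calls for anything load-bearing.

```python
# Toy supervisor pattern: fan independent subtasks out to specialists in
# parallel, then run a serial review pass for quality control.
from concurrent.futures import ThreadPoolExecutor

SPECIALISTS = {
    "research": "You are a research agent. Gather facts for the subtask.",
    "code": "You are a coding agent. Implement the subtask.",
    "review": "You are a review agent. Check the combined result for errors.",
}

def call_agent(system_prompt: str, task: str) -> str:
    # Placeholder for a real model call (API request, CLI invocation, etc.).
    return f"[{system_prompt.split('.')[0]}] handled: {task}"

def supervisor(goal: str, subtasks: dict[str, str]) -> dict[str, str]:
    """Dispatch independent subtasks in parallel, then review serially."""
    with ThreadPoolExecutor() as pool:
        futures = {
            name: pool.submit(call_agent, SPECIALISTS[name], task)
            for name, task in subtasks.items()
        }
        results = {name: future.result() for name, future in futures.items()}
    results["review"] = call_agent(SPECIALISTS["review"], f"{goal}: {results}")
    return results

print(supervisor("ship the feature", {
    "research": "find prior art",
    "code": "write the parser",
}))
```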

Olivia Moore offers a basic guide to Cursor and Claude Code for nontechnical folks.

Here’s another Twitter post with basic tips. I need to do better on controlling context and starting fresh windows for each issue, in particular.

Mitchell Hashimoto: It’s pretty cool that I can tell an agent that CI broke at some point this morning, ask it to use `git bisect` to find the offending commit, and fix it. I then went to the bathroom, talked to some people in the hallway, came back, and it did a swell job.

Often you’ll want to tell the AI what tool is best for the job. Patrick McKenzie points out that even if you don’t know how the orthodox solution works, as long as you know the name of the orthodox solution, you can say ‘use [X]’ and that’s usually good enough. One place I’ve felt I’ve added a lot of value is when I explain why I believe that a solution to a problem exists, or that a method of some type should work, and then often Claude takes it from there. My taste is miles ahead of my ability to implement.

Always be trying to get actual use out of your setup as you’re improving it. It’s so tempting to think ‘oh obviously if I do more optimization first that’s more efficient’ but this prevents you knowing what you actually need, and it risks getting caught in an infinite loop.

@deepfates: Btw thing you get with claude code is not psychosis either. It’s mania

near: men will go on a claude code weekend bender and have nothing to show for it but a “more optimized claude setup”

Danielle Fong : that’s ok i’ll still keep drinkin’ that garbage

palcu: spent an hour tweaking my settings.local.json file today

Near: i got hit hard enough to wonder about finetuning a model to help me prompt claude since i cant cross-prompt claudes the way i want to (well, i can sometimes, but not all the time). many casualties, stay safe out there 🙏

near: claude code is a cursed relic causing many to go mad with the perception of power. they forget what they set out to do, they forget who they are. now enthralled with the subtle hum of a hundred instances, they no longer care. hypomania sets in as the outside world becomes a blur.

Always optimize in the service of a clear target. Build the pieces you need, as you need them. Otherwise, beware.

Nick: need --dangerously-skip-permissions-except-rm

Daniel San: If you’re running Claude Code with --dangerously-skip-permissions, ALWAYS use this hook to prevent file deletion:

Run:

npx claude-code-templates@latest --hook=security/dangerous-command-blocker --yes

Web: https://aitmpl.com/component/hook/dangerous-command-blocker

Once people start understanding how to use hooks, many autonomous workflows will start unlocking! 😮
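If you’d rather roll your own, here is a minimal sketch of a deletion-blocking hook. It assumes the Claude Code hook interface of receiving the pending tool call as JSON on stdin and treating exit code 2 as a block, with stderr surfaced back to the model; treat those details as assumptions and check the hooks documentation before depending on it.

```python
#!/usr/bin/env python3
# Sketch of a pre-tool-use hook that blocks obvious deletions.
# Assumptions: the pending tool call arrives as JSON on stdin, and exiting
# with code 2 blocks it (stderr is shown to the model). Verify against the
# current Claude Code hooks docs before relying on this.
import json
import re
import sys

event = json.load(sys.stdin)
command = event.get("tool_input", {}).get("command", "")

# Crude patterns for destructive shell commands; extend to taste.
DANGEROUS = [r"\brm\b", r"\bshred\b", r"\bmkfs\b", r"\bdd\b\s+.*of="]

if event.get("tool_name") == "Bash" and any(re.search(p, command) for p in DANGEROUS):
    print(f"Blocked potentially destructive command: {command}", file=sys.stderr)
    sys.exit(2)  # block the tool call

sys.exit(0)  # allow everything else
```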

Yes, you could use a virtual machine, but that introduces some frictions that many of us want to avoid.

I’m experimenting with using a similar hook system plus a bunch of broad permissions, rather than outright using --dangerously-skip-permissions, but I’m definitely thinking about working towards dangerously skipping permissions.

At first everyone laughed at Anthropic’s obsession with safety and trust, and its stupid refusals. Now that Anthropic has figured out how to make dangerous interactions safer, it can actually do the opposite. In contexts where it is safe and appropriate to take action, Claude knows that refusal is not a ‘safe’ choice, and is happy to help.

Dean W. Ball: One underrated fact is that OpenAI’s Codex and Gemini CLI have meaningfully heavier guardrails than Claude Code. These systems have refused many tasks (for example, anything involving research into and execution of investing strategies) that Claude Code happily accepts. Codex/Gemini also seek permission more.

The conventional narrative is that “Anthropic is more safety-pilled than the others.” And it’s definitely true that Claude is likelier to refuse tasks relating to eg biology research. But overall the current state of play would seem to be that Anthropic is more inclined to let their agents rip than either OAI or GDM.

My guess is that this comes down to Anthropic creating guardrails principally via a moral/ethical framework, and OAI/GDM doing so principally via lists of rules. But just a guess.

Tyler John: The proposed explanation is key. If true, it means that Anthropic’s big investment in alignment research is paying off by making the model much more usable.

Investment strategizing tends to be safe across the board, but there are presumably different lines on where they become unwilling to help you execute. So far, I have not had Claude Code refuse a request from me, not even once.

Dean W. Ball: My high-level review of Claude Cowork:

  1. It’s probably superior for many users to Claude Code just because of the UI.

  2. It’s not obviously superior for me, not so much because the command line is such a better UI, but because Opus in Claude Code seems more capable to me than in Cowork. I’m not sure if this is because Code is better as a harness, because the model has more permissive guardrails in Code, or both.

  3. There are certain UI niceties in Cowork I like very much; for example, the ability to leave a comment or clarification on any item in the model’s active to-do list while it is running–this is the kind of thing that is simply not possible to do nicely within the confines of a Terminal UI.

  4. Cowork probably has a higher ceiling as a product, simply because a GUI allows for more experimentation. I am especially excited to see GUI innovation in the orchestration and oversight of multi-agent configurations. We have barely scratched the surface here.

  5. Because of (4), if I had to bet money, I’d bet that within 6-12 months Cowork and similar products will be my default tool for working with agents, beating out the command-line interfaces. But for now, the command-line-based agents remain my default.

I haven’t tried Cowork myself due to the Mac-only restriction and because I don’t have a problem working with the command line. I’ve essentially transitioned to Claude Code for everything that isn’t pure chat, since it seems more intelligent and powerful in that mode than it does on the web, even if you don’t need the extra functionality.

The joy of the simple things:

Matt Bruenig: lot of lower level Claude Code use is basically just the recognition that you can kind of do everything with bash and python one-liners, it’s just no human has the time or will to write them.

Or to figure out how to write them.
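To make the point concrete, here is the flavor of throwaway script meant here (a hypothetical example, not from Bruenig): list the ten largest files under the current directory. Trivial for Claude to produce on demand, tedious to write or remember yourself.

```python
# List the ten largest files under the current directory.
import pathlib

files = [p for p in pathlib.Path(".").rglob("*") if p.is_file()]
for path in sorted(files, key=lambda p: p.stat().st_size, reverse=True)[:10]:
    print(f"{path.stat().st_size:>12,}  {path}")
```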

Enjoying the almost as simple things:

Ado: Here’s a fun use case for Claude Cowork.

I was thinking of getting a hydroponic garden. I asked Claude to go through my grocery order history on various platforms and sum up vegetable purchases to justify the ROI.

Worked like a charm!

For some additional context:

– it looked at 2 orders on each platform (Kroger, Safeway, Instacart)

– It extrapolated to get the annual costs from there

Could have gotten more accurate by downloading order history in a CSV and feeding that to Claude, but this was good enough.
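The underlying arithmetic is just sample-and-extrapolate; here is a sketch with placeholder numbers (not Ado’s actual data) to show the shape of it.

```python
# Hypothetical sketch of the extrapolation: average vegetable spend across a
# few sampled orders, then annualize. All numbers below are made up.
sampled_orders = {            # vegetable spend per sampled order, in dollars
    "Kroger":    [18.40, 22.10],
    "Safeway":   [15.75, 19.30],
    "Instacart": [24.60, 21.05],
}
orders_per_year = 52          # assume roughly one grocery order per week

all_orders = [spend for orders in sampled_orders.values() for spend in orders]
per_order_avg = sum(all_orders) / len(all_orders)
annual_veg_spend = per_order_avg * orders_per_year

print(f"Average vegetable spend per order: ${per_order_avg:.2f}")
print(f"Estimated annual vegetable spend:  ${annual_veg_spend:,.2f}")
```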

The actual answer is that very obviously it was not worth it for Ado to get a hydroponic garden, because his hourly rate is insanely high, but this is a fun project and thus goes by different standards.

For advanced users, the transition from Claude Code to Claude Cowork should be seamless if you’ve got a folder with the tools:

Tomasz Tunguz: I asked Claude Cowork to read my tools folder. Eleven steps later, it understood how I work.

Over the past year, I built a personal operating system inside Claude Code: scripts to send email, update our CRM, research startups, draft replies. Dozens of small tools wired together. All of it lived in a folder on my laptop, accessible only through the terminal.

Cowork read that folder, parsed each script, & added them to its memory. Now I can do everything I did yesterday, but in a different interface. The capabilities transferred. The container didn’t matter.

My tools don’t belong to the application anymore. They’re portable. In the enterprise, this means laptops given to new employees would have Cowork installed plus a collection of tools specific to each role: the accounting suite, the customer support suite, the executive suite.

The name choice must have been deliberate. Microsoft trained us on copilot for three years: an assistant in the passenger seat, helpful but subordinate. Anthropic chose cowork. You’re working with someone who remembers how you like things done.

We’re entering an era where you just tell the computer what to do. Here’s all my stuff. Here are the five things we need to do today. When we need to see something, a chart, a document, a prototype, an interface will appear on demand.

The current version of Cowork is rough. It’s slow. It crashed twice on startup. It changed the authorization settings for my Claude Code installation. But the promised power is enough to plow through.

Simon Willison: This is great – context pollution is why I rarely used MCP, now that it’s solved there’s no reason not to hook up dozens or even hundreds of MCPs to Claude Code.

Justine Moore has Claude Cowork write up threads on NeurIPS best papers, generate graphics for them on Krea and validate this with ChatGPT. Not the best thing.

Peter Wildeford is having success doing one-shot Instacart orders from plans without an explicit list, and also one-shotting an Uber Eats order.

A SaaS vendor (Cypress) that a startup was using tried to raise its price from $70k to $170k a year, so the startup did a three-week sprint and duplicated the product. Or at least, that’s the story.

By default Claude Code only saves 30 days of session history. I can’t think of a good reason not to change this so it saves sessions indefinitely; you never know when that will prove useful. So tell Claude Code to change that for you by setting cleanupPeriodDays to 0.

Kaj Sotala: People were talking about how you can also use Claude Code as a general-purpose assistant for any files on your computer, so I had Claude Code do some stuff like extracting data from a .csv file and rewriting it and putting it into another .csv file

Then it worked great and then I was like “it’s dumb to use an LLM for this, Claude could you give me a Python script that would do the same” and then it did and then that script worked great

So uhh I can recommend using Claude Code as a personal assistant for your local files I guess, trying to use it that way got me an excellent non-CC solution

Yep. Often the way you use Claude Code is to notice that you can automate things and then have it automate the automation process. It doesn’t have to do everything itself any more than you do.
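The script in question is the sort of ten-line CSV shuffle sketched below; the file paths and column names are hypothetical, but it is representative of what Claude will hand you when you ask for a standalone version.

```python
# Representative of the kind of one-off script Kaj describes: read one CSV,
# reshape the rows, write another. Paths and column names are hypothetical.
import csv

with open("input.csv", newline="") as src, open("output.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=["name", "total"])
    writer.writeheader()
    for row in reader:
        writer.writerow({
            "name": row["name"].strip().title(),
            "total": f"{float(row['amount']):.2f}",
        })
```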

An explanation (direct link to 15 minute video) of what Claude skills are.

James Ide points out that ‘vibe coding’ anything serious still requires a deep understanding of software engineering and computer systems. You need to figure out and specify what you want. You need to be able to spot the times it’s giving you something different than you asked for, or is otherwise subtly wrong. Typing source code is dead, but reading source code and the actual art of software engineering are very much not.

I find the same, and am rapidly getting a lot better at various things as I go.

Every’s Dan Shipper writes that OpenAI has some catching up to do, as his office has with one exception turned entirely to Claude Code with Opus 4.5, where a year ago it would have been all GPT models, and a month prior there would have been a bunch of Codex CLI and GPT 5.1 in Cursor alongside Claude Code.

Codex did add the ability to instruct mid-execution with new prompts without the need to interrupt the agent (requires /experimental), but Claude Code already did that.

There are those who still prefer Codex and GPT-5.2, such as Hasan Can. They are very much in the minority lately, but if you’re a heavy duty coder definitely check and see which option works best for you, and consider potential hybrid strategies.

One hybrid strategy is that Claude Code can directly call the Gemini CLI, even without an API key. Tyler John reports it is a great workflow, as Gemini can spot things Claude missed and act as a reviewer and way to call out Claude on its mistakes. Gemini CLI is here.

Contrary to claims by some, including George Hotz, Anthropic did not cut off OpenRouter or other similar services from Claude Opus 4.5. The API exists. They can use it.

What other interfaces cannot do is use the Claude Code authorization token to use the tokens from your Claude subscription for a different service, which was always against Anthropic’s ToS. The subscription is a special deal.

​Marcos Nils: We exchanged postures through DMs but I’m on the other side regarding this matter. Devs knew very well what they were doing while breaking CC’s ToS by spoofing and reverse engineering CC to use the max subscription in unintended ways.

I think it’s important to separate the waters here:

– Could Anthropic’s enforcement have been handled better? surely, yes

– Were devs/users “deceived” or got a different service for what they paid for? I don’t think so.

Not only this, it’s even worse than that. OpenCode intentionally knew they were violating Claude ToS by allowing their users to use the max subscription in the first place.

I guess people just like to complain.

I agree that Anthropic’s communications about this could have been better, but what they actually did was tolerate a rather blatant loophole for a while, allowing people to use Claude on the cheap and probably at a loss for Anthropic, which they have now reversed with demand surging faster than they can spin up servers.

Claude Codes quite a lot; usage is taking off. Here’s OpenRouter (this particular use case might be confounded a bit by the above story where they cut off alternative uses of Claude Code authorization tokens, but I’m guessing mostly it isn’t):

A day later, it looked like this.

(January 14, 11:27 am Eastern): Resolved, should be back to normal now

Reports are the worst of the outage was due to a service deployment, which took about 4 hours to fix.

aidan: If I were running Claude marketing the tagline would be “Why not today?”

Olivia Moore: Suddenly seeing lots of paid creator partnerships with Claude

Many of them are beautifully shot and focused on: (1) building personal software; or (2) deep learning

The common tagline is “Think more, not less”

She shared a sample TikTok, showing a woman who doesn’t understand math using Claude to automatically code up visualizations to help her understand science, which seemed great.

OpenAI takes the approach of making things easy on the user and focusing on basic things like cooking or workouts. Anthropic shows you a world where anything is possible and you can learn and engage your imagination. Which way, modern man?

And yet some people think the AIs won’t be able to take over.




Verizon starts requiring 365 days of paid service before it will unlock phones

Verizon has started enforcing a 365-day lock period on phones purchased through its TracFone division, one week after the Federal Communications Commission waived a requirement that Verizon unlock handsets 60 days after they are activated on its network.

Verizon was previously required to unlock phones automatically after 60 days due to restrictions imposed on its spectrum licenses and merger conditions that helped Verizon obtain approval of its purchase of TracFone. But an update applied today to the TracFone unlocking policy said new phones will be locked for at least a year and that each customer will have to request an unlock instead of getting it automatically.

The “new” TracFone policy is basically a return to the yearlong locking it imposed before Verizon bought the company in 2021. TracFone first agreed to provide unlocking in a 2015 settlement with the Obama-era FCC, which alleged that TracFone failed to comply with a commitment to unlock phones for customers enrolled in the Lifeline subsidy program. TracFone later shortened the locking period from a year to 60 days as a condition of the Verizon merger.

While a locked phone is tied to the network of one carrier, an unlocked phone can be switched to another carrier if the device is compatible with the other carrier’s network. But the new TracFone unlocking policy is stringent, requiring customers to pay for a full year of service before they can get a phone unlocked.

“For all cellphones Activated on or after January 20, 2026, the cellphone will be unlocked upon request after 365 days of paid and active service,” the policy says. A customer who doesn’t maintain an active service plan for the whole 12 months will thus have their unlocking eligibility date delayed.

Besides TracFone, the change applies to prepaid brands Straight Talk, Net10 Wireless, Clearway, Total Wireless, Simple Mobile, SafeLink Wireless, and Walmart Family Mobile. Customers who bought phones before today are still eligible for unlocks after 60 days.

365 days of paid service

As DroidLife points out, the Verizon-owned prepaid brand Visible is also requiring a year of paid service. The Visible policy updated today requires “at least 365 days of paid service” for an unlocking request. “If you stop paying for service, your progress toward the 365-day requirement pauses. It will resume once you reactivate your account and continue until you reach a total of 365 paid days of service,” the policy says.



Flesh-eating flies are eating their way through Mexico, CDC warns

Across Central America and Mexico, there have been 1,190 human cases of New World screwworm (NWS) reported and seven deaths. More than 148,000 animals have been affected.

Close calls

In September, the USDA warned that an 8-month-old cow with an active NWS infection was found in a feedlot in the Mexican state of Nuevo León, just 70 miles from the border. The finding prompted Texas Agriculture Commissioner Sid Miller to step up warnings about the threat.

“The screwworm is dangerously close,” Miller said at the time. “It nearly wiped out our cattle industry before; we need to act forcefully now.”

According to the USDA’s latest data, Nuevo León has seen three cases in the outbreak, with none that are currently active. But, its neighboring state, Tamaulipas, is having a flare-up, with eight animal cases considered active. The Mexican state shares a border with the southern-most portion of Texas. Mexico overall has reported 24 hospitalizations among people and 601 animal cases.

For now, the NWS has not been detected in the US, and the CDC considers the risk to people to be low.

“However, given the potential for geographic spread, CDC is issuing this Health Advisory to increase awareness of the outbreak and to summarize CDC recommendations for clinicians and health departments in the United States on case identification and reporting, specimen collection, diagnosis, and treatment of NWS, as well as guidance for the public,” the agency said.

Generally, the agency advises being on the lookout for egg masses or fly larvae in wounds or infection sites, especially if there’s destruction of living tissue or feelings of movement. Once discovered, health care workers should report the case and promptly remove and kill all larvae and eggs, preferably by drowning in a sealed, leak-proof container of 70 percent ethanol. “Failure to kill and properly dispose of all larvae or eggs could result in the new introduction and spread of NWS in the local environment,” the CDC warns in bold. At least 10 dead larvae should then be sent to the CDC for confirmation.

The USDA is currently releasing 100 million sterile male flies per week in Mexico to try to establish a new biological barrier.

This isn’t the fly’s first attempt at a US comeback since the 1960s. In 2016, the flies were somehow reintroduced to the Florida Keys, where they viciously attacked Key Deer, an endangered species and the smallest of North America’s white-tailed deer. The flies were eliminated again in 2017 using the sterile fly method.



Netflix to pay all cash for Warner Bros. to fend off Paramount hostile takeover

“By transitioning to all-cash consideration, we can now deliver the incredible value of our combination with Netflix at even greater levels of certainty, while providing our stockholders the opportunity to participate in management’s strategic plans to realize the value of Discovery Global’s iconic brands and global reach,” Warner Bros. Discovery board Chairman Samuel Di Piazza Jr. said in today’s press release.

Netflix is more likely to complete the deal, firms argue

Paramount also made an all-cash offer, but the Warner Bros. board called the Paramount bid “illusory” because it requires an “extraordinary amount of debt financing” and other terms that allegedly make it less likely to be completed than a Netflix merger.

Paramount “is a $14B market cap company with a ‘junk’ credit rating, negative free cash flows, significant fixed financial obligations, and a high degree of dependency on its linear business,” while Netflix has “market capitalization of approximately $400 billion, an investment grade balance sheet, an A/A3 credit rating and estimated free cash flow of more than $12 billion for 2026,” Warner Bros. told shareholders.

Warner Bros. and Netflix today continued to tout Netflix’s strong financial position and its ability to close the deal. “Netflix’s strong cash flow generation supports the revised all-cash transaction structure while preserving a healthy balance sheet and flexibility to capitalize on future strategic priorities,” the joint press release said.

The Wall Street Journal explained that the new “deal structure does away with a so-called collar, a mechanism meant to protect shareholders from large swings in an acquirer’s share price between the time when a deal is announced and when it closes. If Netflix shares dipped below $97.91, Warner shareholders were to get a larger portion of Netflix shares as part of the deal. If they rose above $119.67, shareholders would have received a smaller portion.”



The first commercial space station, Haven-1, is now undergoing assembly for launch


“We have a very strong incentive to send a crew as quickly as we can safely do so.”

The Haven-1 space station seen here in the Vast Space clean room. Credit: Vast Space

As Ars reported last week, NASA’s plan to replace the International Space Station with commercial space stations is running into a time crunch.

The sprawling International Space Station is due to be decommissioned less than five years from now, and the US space agency has yet to formally publish rules and requirements for the follow-on stations being designed and developed by several different private companies.

Although there are expected to be multiple bidders in “phase two” of NASA’s commercial space station program, there are at present four main contenders: Voyager Technologies, Axiom Space, Blue Origin, and Vast Space. At some point later this year, the space agency is expected to select one, or more likely two, of these companies for larger contracts that will support their efforts to build their stations.

To get a sense of the overall landscape as the competition heats up, Ars recently interviewed Voyager chief executive Dylan Taylor about his company’s plans for a private station, Starlab. Today we are publishing an interview with Max Haot, the chief executive of Vast. The company is furthest along in terms of development, choosing to build a smaller, interim space station, Haven-1, capable of short-duration stays. Eventually, NASA wants facilities capable of continuous habitation, but it is not clear whether that will be a requirement starting in 2030.

Until today, Haven-1 had a public launch date of mid-2026. However, as Haot explained in our interview, that launch date is no longer tenable.

Ars: You’re slipping the launch of Haven-1 from the middle of this year to the first quarter of 2027. Why?

Max Haot: This is obviously our first space station, and we’re moving as safely and as fast as we can. That’s the date right now that we are confident we will meet. We’ve been tracking that date, without slip, for quite a while. And that’s still a year, probably two years or even more, ahead of anyone else. It will be building the world’s first commercial space station from scratch, from an empty building and no team, in under four years.

Ars: Where are you with the hardware?

Haot: Last Saturday (January 10) we reached the key milestone of fully completing the primary structure, and some of the secondary structure; all of the acceptance testing occurred in November as well. Now we are starting clean room integration, which starts with TCS (thermal control system), propulsion, interior shells, and then moving on to avionics. And then final close out, which we expect will be done by the fall, and then we have on the books with NASA a full test campaign at the end of the year at Plum Brook. Then the launch in Q1 next year.

Ars: What happens after you launch Haven-1?

Haot: We are not launching Haven-1 with crew inside. It’s a 15-ton, very valuable and expensive satellite, but still no humans involved, launching on a Falcon 9. So then we have a period that we can monitor it and control it uncrewed and confirm everything is functioning perfectly, right? We are holding pressure. We are controlling attitude. These checkouts can happen in as little as two weeks.

At the end of it, we have to basically convince SpaceX, both contractually and with many verification events, that it will be safe to dock Dragon. And if they agree with the data we provide them, they will put a fully trained crew on board Dragon and bring them up. It could be as early as two weeks after, and it could be as late as any time within three years, which is a lifetime of Haven-1. But we have a very strong incentive to send a crew as quickly as we can safely do so.

The Haven-1 space station undergoes acceptance testing. Credit: Vast Space

Ars: Have you picked the crew yet?

Haot: We are in deep negotiations, maybe more than that, with both private individuals and nation states. But there’s nothing we are ready to announce yet. Especially with the Q1 launch date, in our desire to follow with the crew right after, this is now becoming pretty urgent. We believe, with our partner at SpaceX, one year for training is very comfortable, and we think we can compress it to maybe as little as six months for both training on Dragon and Haven-1 so long as we have an experienced crew. So we have a bit of time left to announce it.

Ars: You mentioned Haven-1 has a three-year lifetime. How many crews will you try to cycle through?

Haot: The nominal plan is for a two-week mission, and we have one fully contracted with SpaceX, as well as a second one that we have a deposit and an option on. And then we plan to do two more. That’s assuming they are 10-day missions with two days of transfer on either side. So two-week missions. We also have the option to maybe do a 30-day mission if we want. So the exact duration and makeup will be decided as we make progress with customers and potentially NASA.

Ars: What is the plan after Haven-1?

Haot: If you look at the first module of our second station, what will be the difference? We have two docking ports, not one. We expect to have more power, and potentially more volume, depending on the launch vehicle. What you see on our website and what we do might be different. We have a lot of optionality. But other than that, it’s all of the exact same components of Haven demo and Haven-1, which are basically being iterated on. And so that’s the key. The life support system, the air revitalization system, the software, the primary structure—the first module of Haven-2 will be just tweaks on Haven-1. That’s why we think we’re in the best position of all of the competitors. And that’s not been enabled by chance, right? It’s been enabled by a billion-dollar investment in 1,000 employees and all the facilities to mass produce the follow-on modules.

Ars: NASA is nearing the second phase of its competition for commercial space stations, known as CLDs. Do you plan to compete with Haven-1 or Haven-2 for these contracts?

Haot: We have not decided because, as you know, it’s unclear yet what the requirements will be. Will they be asking for a 30-day demonstration flight? On our end it’s unclear if we want to bid that 30-day demonstration with Haven-1, or Haven-2 with two or three modules. If they ask for a 30-day mission, we have the option to offer it on Haven-1 in 2027 if we want to.

Ars: Last week a key space staffer in the US Senate, Maddy Davis, said she was “begging” for NASA to release the phase two “request for proposals” that would set the ground rules for the CLD competition. Do you feel the same way?

Haot: Vast is dedicated to ensuring we have continuous human presence in low-Earth orbit after the ISS is retired. The date we are aiming at is end of 2030. Maddy mentioned an ISS extension. We agree, for America, if no one is ready it should be extended. But in our view, we will be ready, and we need to make sure we’re ready to start a continuous crewed mission by the end of 2030. That’s less than five years away now, right? So we definitely agree with the sentiment, and I think the full industry agrees, and I’m pretty sure Jared Isaacman also agrees that it is overdue and it’s time to make a decision and release an RFP.

Ars: What do you hope to see in that RFP when it comes to requirements?

Haot: We obviously can’t decide what NASA will do, and we will be competitive in whatever they decide. But there’s a few key recommendations we feel strongly about. The first one is that, as they consider whether they proceed with a demonstration mission or something else, we think they should focus on what is right for the country. What we are hearing is that they are trying to tweak the approach to do something fair to all of the bidders. And I don’t think it should matter whether people have been doing a right thing or wrong thing, and whether what’s right for the country puts somebody in a better position or not.

The second piece, obviously, is to move faster, which we just talked about. The third piece is that we think it’s really important that they require a demonstration. If you look at every human space flight program in history, none of them went straight from the program starting to a long-duration mission on a spacecraft. They all had a stepping stone, and right now none of us has proven we can have humans safely on orbit in a space station. And so in our view, they should require demonstration, and not on the eve of January 1, 2031. They should require a demonstration with crew as quickly as possible before they buy services.

Ars: You mentioned doing the “right thing for the country.” What does that mean for NASA?

Haot: It means you’re focused on commercial stations being ready by 2030, so there is not a need to extend the ISS. And it means ensuring we have not just one winner, but two, in case history repeats itself, such as Boeing and SpaceX in crew transportation.

Ars: Do you think the government has committed enough funding to make the commercial space station program a success?

Haot: I’m a vendor, and obviously I’d like as much buffer as possible, and as much funding as possible. With the current budget we don’t think more than two winners is reasonable, but it should absolutely be two in the best interest of the country. If there was a bigger budget, obviously, three would be great. And so if you look at the CLD budget line, which is approved for next year—projected over five years for development, and you assume two winners, and then services that come later—we are confident we can be successful and profitable with two companies operating.

Obviously, we also need international customers, right? We need Europe. We need Japan, where we just opened a subsidiary. We need all the new emerging human spaceflight nations in the Middle East, in Europe, in Asia. And a little bit of private spaceflight. We’re not in a space tourism era, in orbit, but there are still some private individuals willing to fund a mission and do important work. With that, we get to profitability.

We think a big differentiator of Vast is that we are really excited and eager to unlock the orbital economy. I’m talking about in-space semiconductor, fiber, pharmaceutical manufacturing, and so on. We think that’s our upside. We want to unlock it. But we don’t know how quickly it will happen or how big it will be. What we do know is, whoever has a platform up there with flight crew, facilities, and power will be the one unlocking it. But in our business model, if that’s delayed, we can still be profitable.


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.



Reports of ad-supported Xbox game streams show Microsoft’s lack of imagination

You can do better than that

That’s a moderately useful option for cloud-curious Xbox players that might not be willing to take the plunge on a monthly subscription, we suppose. But it also feels like Microsoft could come up with some more imaginative ways to use Cloud Gaming to reach occasional players in new ways.

What’s stopping Microsoft from offering streaming players a 30-minute timed demo stream of any available Xbox Cloud Gaming title—perhaps in exchange for watching a short ad, or perhaps simply as an Xbox Live Arcade-style sales juicing tactic? Or why not offer discounted access to a streaming-only Game Pass subscription for players willing to watch occasional ads, like Netflix? Microsoft could even let players spend a couple of bucks to rent a digital copy of the title for a few days, much as services like iTunes do for newer films.

Those are just a few ideas off the top of our heads. And they all feel potentially more impactful than using ads as a way to let Xbox players stream copies of games they already purchased.

Back in 2019, we noted how Stadia’s strictly buy-before-you-play streaming business model limited the appeal of what ended up as a doomed cloud-gaming experiment. Microsoft should take some lessons from Google’s failure and experiment with new ways to use streaming to reach players that might not have access to the latest high-end hardware for their gaming experiences.



Ferrari doing what it does best: The 12Cilindri review


Retro design and a naturally aspirated V12 deliver tremendous appeal, but it’ll cost ya.

In the old days, they used to say Ferrari would sell you an engine and give you the car for free. The rest of the 12Cilindri is too good for that cliche, but it really is all about the engine. Credit: Bradley Iger

It has been nearly 80 years since Ferrari unleashed its first V12-powered sports car upon the world with the 125 S. In 1947, its debut year, the 125 S secured Ferrari’s first race victory, along with five other wins in the 14 events it competed in that season.

Although it was soon replaced by the 159 S, the success of the 125 S kick-started Ferrari’s storied history of producing some of the most desirable 12-cylinder performance cars known to man. And while the Italian automaker has come to embrace forced induction and electrification in recent years, its legacy of building stunning front-engine, rear-wheel drive machines with spectacular V12s stuffed into their engine bays continues with the 12Cilindri Spider.

Ferrari hasn’t shied away from leveraging cutting-edge technology in the development of its latest models, but the company also understands the value of a good throwback. As the successor to the 812 Superfast, the 12Cilindri boasts clever performance technologies, like a sophisticated active aero system and a four-wheel steering system that can manage each corner independently to enhance response, but it’s ultimately an homage to the heady days of late ’60s luxury grand touring. The exterior styling takes obvious inspiration from the 365 GTB Daytona, while its lack of all-wheel drive, turbocharging, and electric assistance bucks trends that have become nearly inescapable in modern performance cars.

It’s actually an easy car to drive every day, despite the width. Bradley Iger

Buy the engine, get the car for free?

Instead, Ferrari has deliberately prioritized the core principles that have defined its most enduring GT icons: elegant design, a meticulously engineered chassis, and a sensational naturally aspirated V12, the latter represented here by a 6.5 L dry-sump mill that delivers 819 hp (611 kW) and a soaring 9,500-rpm redline.

That horsepower figure might not raise as many eyebrows as it would have just a few years ago, but it’s worth noting that at a time in history when an alarming number of new performance vehicles are now as heavy as full-size pickups, the 12Cilindri Spider tips the scales at a relatively svelte dry weight of 3,571 pounds (1,620 kg) thanks in part to its focus on the fundamentals. Equipped with massage seats and a retractable hardtop that opens and closes in just 14 seconds, the 12Cilindri Spider is primarily aimed at fulfilling drivers’ fantasies of cruising along the French Riviera with the smell of the ocean in the air and the banshee wail of 12 cylinders in their ears. But it also takes on a noticeably more sportscar-like persona than its primary rival, the Aston Martin Vanquish Volante, mainly due to the 12Cilindri’s eight-speed dual-clutch transmission and more earnest performance-tuned chassis.

Sport is the 12Cilindri Spider’s default drive mode, a naming decision that helps set expectations for suspension stiffness, but you can also depress the steering-wheel-mounted Manettino drive mode dial to enable Bumpy Road mode, which softens the adaptive dampers beyond their standard tuning for more compliance on rough pavement. While the gearbox occasionally needs a second to get its act together from a standstill, and the car’s low stance makes the nose lift system an often-used feature, the 12Cilindri Spider is a remarkably civil cruiser when pressed into service for everyday driving tasks.

Crackle red paint covers the intake boxes, and maybe the cylinder heads. Bradley Iger

Still an HMI disaster

The in-car tech does tarnish this driving experience to a tangible degree, though. The liberal use of capacitive surfaces on the steering wheel and the instrument panel to control features like rear-view mirror position and adaptive cruise control, as well as the functions that are accessed via the 15.6-inch digital gauge cluster, frequently led to frustration during my time with the car, and although the high-resolution 10.25-inch central touchscreen looks great and is quick to respond to user inputs, wireless Apple CarPlay crashed on several occasions for no discernible reason and remained inaccessible until after the next key cycle. These may seem like trivial issues, but in a car with a $507,394 MSRP ($661,364 as-tested with destination fee), it’s tough to excuse problems that are so distracting and seemingly easy to rectify.

We had the same problem with the 296 GTB, and it’s time Ferrari retired its capacitive wheels and replaced them all with the version that has physical buttons. Which it will do for existing owners—for a hefty fee.

But, perhaps unsurprisingly, those quibbles always seemed to fade away whenever I found an open stretch of canyon road and set the Manettino to Race mode. Doing so eases up the electronic assists, sets up the transmission and differential for sharper response, and opens up the valves in the active exhaust system. But, in contrast to convention, it leaves the steering weight, suspension stiffness, throttle response, and brake-by-wire system alone in order to maintain predictable dynamic behavior regardless of which drive mode you’re in.

Ferrari’s capacitive touch multifunctioning steering wheel continues to let down the experience of driving a modern Ferrari. Bradley Iger

Although the exhaust is a bit quieter than I’d prefer, even with the roof stowed away, the sound that this V12 makes as you wind it out is the stuff that dreams are made of. It took me a moment to recalibrate to the lofty redline, though—with the gearbox set to manual mode, my mind naturally wanted to pull the column-mounted paddle about 2,000 rpm early. I blame this on my seat time in the Vanquish coupe last year. Aston’s decision to equip the Vanquish’s 5.2 L V12 with a pair of turbochargers enables it to best the 12Cilindri’s horsepower figure by a few ponies while also providing a significant advantage in peak torque output (738 lb-ft/1,000 Nm versus the Ferrari’s 500 lb-ft/678 Nm), but it also relegates the Vanquish’s redline to a more prosaic 7,000 rpm while naturally muting its tone a bit.

OK, that’s enough torque

And to be frank, I don’t think the 12Cilindri Spider needs another 238 lb-ft (322 Nm), a theory that was backed by the flashing traction control light that fired up any time I got a little too brave with the throttle coming out of a slow corner. Intervention from the Ferrari’s electronic safeguards is so seamless that I rarely noticed it happening at all, though, and I can’t say the same for the Vanquish, which is undoubtedly thrilling to drive but often felt like it was fighting against its own prodigious output in order to keep the nose on the intended path. The 12Cilindri, by contrast, feels easy to trust when the going gets fast, and that sensation is bolstered by tons of mechanical grip, a quick steering rack, and a firm, progressive brake pedal.

But regardless of my thoughts on the matter, the 12Cilindri’s successor will likely be a significantly different beast with a lot more power on tap. Nearly a decade ago, we predicted that the 812 would likely be the last Ferrari to feature a naturally aspirated V12, and while this is a prediction that we’re happy to have been wrong about, this era is undoubtedly drawing to a close. A hybridized V12 will likely offer even more grunt, and enthusiasts rarely scoff at the prospect of more power, but it also opens the door to all-wheel drive, significantly more heft, and ultimately a very different driving experience. Until then, the 12Cilindri Spider serves as an important reminder that sometimes the most compelling aspects of a performance car can’t be quantified on a spec sheet.



This may be the grossest eye pic ever—but the cause is what’s truly horrifying

Savage microbe

Whatever was laying waste to his eye seemed to have come from inside his own body, carried in his bloodstream—possibly the same thing that could explain the liver mass, lung nodules, and brain lesions. There was one explanation that fit the condition perfectly: hypervirulent Klebsiella pneumoniae or hvKP.

Classical K. pneumoniae is a germ that dwells in people’s intestinal tracts and is one that’s familiar to doctors. It’s known for lurking in health care settings and infecting vulnerable patients, often causing pneumonia or urinary tract infections. But hvKP is very different. In comparison, it’s a beefed-up bacteria with a rage complex. It was first identified in the 1980s in Taiwan—not for stalking weak patients in the hospital but for devastating healthy people in normal community settings.

An infection with hvKP—even in otherwise healthy people—is marked by metastatic infection. That is, the bacteria spreads throughout the body, usually starting with the liver, where it creates a pus-filled abscess. It then goes on a trip through the bloodstream, invading the lungs, brain, soft tissue, skin, and the eye (endogenous endophthalmitis). Putting it all together, the man had a completely typical clinical case of an hvKP infection.

Still, definitively identifying hvKP is tricky. Mucus from the man’s respiratory tract grew a species of Klebsiella, but there’s not yet a solid diagnostic test to differentiate hvKP from the classical variety. Since 2024, researchers have used a screening strategy based on the presence of five virulence genes found on plasmids (relatively small, circular pieces of DNA, separate from chromosomal DNA, that can replicate on their own and be shared among bacteria). But the method isn’t perfect—some classical K. pneumoniae can also carry the five genes.

A string test performed on the rare growth of Klebsiella pneumoniae from the sputum culture shows a positive result, with the formation of a viscous string with a height of greater than 5 mm. Credit: NEJM 2026

Another much simpler method is the string test, in which clinicians basically test the goopy-ness of the bacteria—hvKP is known for being sticky. For this test, a clinician grows the bacteria into a colony on a petri dish, then touches an inoculation loop to the colony and pulls up. If the string of attached goo stretches more than 5 mm off the petri dish, it’s considered positive for hvKP. This is (obviously) not a precise test.
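
Neither check is definitive on its own, but together they describe a simple screening rule: the plasmid-borne gene panel plus a positive string test. The Python sketch below is purely illustrative of that combined logic; the article does not name the five genes, so the marker panel used here (iucA, iroB, peg-344, rmpA, and rmpA2, the biomarkers commonly cited in the literature) and the function itself are assumptions rather than the clinicians' actual workflow.

```python
# Illustrative sketch only: combines the two imperfect hvKP signals described
# above. The gene panel below is the one commonly cited in the literature;
# the article itself does not name the five plasmid-borne virulence genes.
VIRULENCE_MARKERS = {"iucA", "iroB", "peg-344", "rmpA", "rmpA2"}  # assumed panel
STRING_TEST_THRESHOLD_MM = 5.0  # positive if the viscous string exceeds 5 mm


def presumptive_hvkp(detected_genes: set[str], string_length_mm: float) -> bool:
    """Flag an isolate as presumptive hvKP only when both imperfect signals agree.

    Neither check alone is conclusive: some classical K. pneumoniae strains
    carry the same plasmid genes, and the string test is admittedly imprecise.
    """
    carries_plasmid_markers = VIRULENCE_MARKERS.issubset(detected_genes)
    string_test_positive = string_length_mm > STRING_TEST_THRESHOLD_MM
    return carries_plasmid_markers and string_test_positive


# Example: an isolate carrying all five markers with a 7 mm string is flagged.
print(presumptive_hvkp({"iucA", "iroB", "peg-344", "rmpA", "rmpA2"}, 7.0))
```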

This may be the grossest eye pic ever—but the cause is what’s truly horrifying Read More »

openai-to-test-ads-in-chatgpt-as-it-burns-through-billions

OpenAI to test ads in ChatGPT as it burns through billions

Financial pressures and a changing tune

OpenAI’s advertising experiment reflects the enormous financial pressures facing the company. OpenAI does not expect to be profitable until 2030 and has committed to spend about $1.4 trillion on massive data centers and chips for AI.

According to financial documents obtained by The Wall Street Journal in November, OpenAI expects to burn through roughly $9 billion this year while generating $13 billion in revenue. Only about 5 percent of ChatGPT’s 800 million weekly users pay for subscriptions, so subscription revenue alone is not enough to cover OpenAI’s operating costs.
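
The numbers make the gap concrete. Here is a rough back-of-the-envelope sketch in Python; only the user counts, revenue, and burn figures come from the article, while the $20-per-month base-tier price is my own assumption, so treat the result as an optimistic upper bound rather than OpenAI's actual subscription mix.

```python
# Back-of-the-envelope check of the subscription shortfall described above.
# The weekly-user count, paying share, revenue, and burn figures come from
# the article; the $20/month price is an assumed base-tier figure.
weekly_users = 800_000_000
paying_share = 0.05
assumed_price_per_month = 20  # assumption, not from the article

paying_users = weekly_users * paying_share                  # 40 million
annual_subs = paying_users * assumed_price_per_month * 12   # ~$9.6 billion

revenue = 13_000_000_000    # ~$13B in revenue this year (per the article)
cash_burn = 9_000_000_000   # ~$9B burned this year (per the article)
implied_spend = revenue + cash_burn                         # ~$22 billion

print(f"Paying users: {paying_users / 1e6:.0f}M")
print(f"Upper-bound subscription revenue: ${annual_subs / 1e9:.1f}B")
print(f"Implied annual spend: ${implied_spend / 1e9:.0f}B")
```

Even under that generous assumption, subscriptions bring in roughly $9.6 billion against something like $22 billion in implied annual spending, which is the gap ads are meant to help close.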

Not everyone is convinced ads will solve OpenAI’s financial problems. “I am extremely bearish on this ads product,” tech critic Ed Zitron wrote on Bluesky. “Even if this becomes a good business line, OpenAI’s services cost too much for it to matter!”

OpenAI’s embrace of ads appears to come reluctantly, since it runs counter to a “personal bias” against advertising that Altman has shared in earlier public statements. For example, during a fireside chat at Harvard University in 2024, Altman said he found the combination of ads and AI “uniquely unsettling,” implying that he would not like it if the chatbot itself changed its responses due to advertising pressure. He added: “When I think of like GPT writing me a response, if I had to go figure out exactly how much was who paying here to influence what I’m being shown, I don’t think I would like that.”

An example mock-up of an advertisement in ChatGPT provided by OpenAI. Credit: OpenAI

Along those lines, OpenAI’s approach appears to be a compromise between needing ad revenue and not wanting sponsored content to appear directly within ChatGPT’s written responses. By placing banner ads at the bottom of answers separated from the conversation history, OpenAI appears to be addressing Altman’s concern: The AI assistant’s actual output, the company says, will remain uninfluenced by advertisers.

Indeed, Simo wrote in a blog post that OpenAI’s ads will not influence ChatGPT’s conversational responses, that the company will not share conversations with advertisers, that ads will not appear alongside sensitive topics such as mental health and politics, and that ads will not be shown to users it determines to be under 18.

“As we introduce ads, it’s crucial we preserve what makes ChatGPT valuable in the first place,” Simo wrote. “That means you need to trust that ChatGPT’s responses are driven by what’s objectively useful, never by advertising.”

OpenAI to test ads in ChatGPT as it burns through billions Read More »

tsmc-says-ai-demand-is-“endless”-after-record-q4-earnings

TSMC says AI demand is “endless” after record Q4 earnings

TSMC posted net income of NT$505.7 billion (about $16 billion) for the quarter, up 35 percent year over year and above analyst expectations. Revenue hit $33.7 billion, a 25.5 percent increase from the same period last year. The company expects nearly 30 percent revenue growth in 2026 and plans to spend between $52 billion and $56 billion on capital expenditures this year, up from $40.9 billion in 2025.

Checking with the customers’ customers

Wei’s optimism stands in contrast to months of speculation about whether the AI industry is in a bubble. In November, Google CEO Sundar Pichai warned of “irrationality” in the AI market and said no company would be immune if a potential bubble bursts. OpenAI’s Sam Altman acknowledged in August that investors are “overexcited” and that “someone” will lose a “phenomenal amount of money.”

But TSMC, which manufactures the chips that power the AI boom, is betting the opposite way, with Wei telling analysts he spoke directly to cloud providers to verify that demand is real before committing to the spending increase.

“I want to make sure that my customers’ demand are real. So I talked to those cloud service providers, all of them,” Wei said. “The answer is that I’m quite satisfied with the answer. Actually, they show me the evidence that the AI really helps their business.”

The earnings report landed the same day the US and Taiwan finalized a trade agreement that cuts tariffs on Taiwanese goods to 15 percent, down from 20 percent. The deal commits Taiwanese companies to $250 billion in direct US investment, and TSMC is accelerating the expansion of its Arizona chip fabrication facilities to match.

TSMC says AI demand is “endless” after record Q4 earnings Read More »

feds-give-tesla-another-five-weeks-to-respond-to-fsd-probe

Feds give Tesla another five weeks to respond to FSD probe

The original request was sent to Tesla on December 3 with a deadline of January 19—next Monday—with penalties of up to $27,874 per day (to a maximum of $139.4 million) for not complying.

However, the winter holiday period ate up two weeks of the six-and-a-bit weeks, and the company has had to simultaneously prepare three other information requests for other ongoing NHTSA probes (one due today, another on January 23, and a third on February 4), the company told NHTSA. Identifying all the complaints and reports will take more time, Tesla said, as it found 8,313 items when it searched for traffic violations, and it can only process 300 a day to see which ones are relevant.
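
The arithmetic behind the request is straightforward; here is a minimal sketch using only the figures reported above:

```python
from math import ceil

# Figures as reported: 8,313 candidate traffic-violation items, screened at
# roughly 300 per day to determine which are relevant to NHTSA's request.
items_found = 8_313
review_rate_per_day = 300

screening_days = ceil(items_found / review_rate_per_day)
print(f"Screening alone takes about {screening_days} days")  # ~28 days

# Moving the deadline from January 19 to February 23 adds
# (31 - 19) + 23 = 35 calendar days, enough to absorb the screening work.
extra_days = (31 - 19) + 23
print(f"The extension adds {extra_days} calendar days")
```

Screening alone eats roughly four weeks of working time, before Tesla even gets to the remaining questions on NHTSA's list.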

Answering the remaining questions on NHTSA’s list would require the above to be completed first, so Tesla asked for and was granted an extension until February 23.

Meanwhile, Tesla has changed how its driver-assist cash cow contributes to the bottom line. Until now, Tesla owners have had the option of buying the system outright, currently priced at $8,000. Now, CEO Elon Musk says that option will go away on February 14. From then on, if a Tesla owner wants FSD, they’ll have to pay a $99 monthly fee to use it.

Feds give Tesla another five weeks to respond to FSD probe Read More »