

Signs point to a sooner-rather-than-later M5 MacBook Pro refresh

Mac power users waiting on new high-end MacBook Pro models may have been disappointed last fall, when Apple released an M5 upgrade for the low-end 14-inch MacBook Pro without touching the M4 Pro or Max versions of the laptop. But the wait for M5 Pro and M5 Max models may be nearing its end.

The tea-leaf readers at MacRumors noticed that shipping times for a handful of high-end MacBook Pro configurations have slipped into mid-to-late February, rather than being available immediately as most Mac models are. This is often, though not always, a sign that Apple has slowed down or stopped production of an existing product in anticipation of an update.

Currently, the shipping delays affect the M4 Max versions of both the 14-inch and 16-inch MacBook Pros. If you order them today, these models will arrive sometime between February 3 and February 24, depending on the configuration you choose; many M4 Pro versions are still available for same-day shipping, though adding a nano-texture display or upgrading RAM can still add a week or so to the shipping time.

Apple could choose to launch new Pro hardware on January 28, to go with the new Creator Studio subscription it announced last week. Aimed primarily at independent content creators that make their own video, audio, and images, the Creator Studio subscription bundles Final Cut Pro, Logic Pro, Pixelmator Pro, and enhancements for the Pages, Numbers, and Keynote apps (along with some other odds and ends) for $13 a month or $130 a year. None of these apps require a MacBook Pro, but many would benefit in some way from the additional CPU and GPU power, RAM, and storage available in Apple’s high-end laptops.

Of course, an imminent replacement isn’t the only reason why the shipping estimates for any given Mac might slip. Ongoing, AI-fueled RAM shortages could be causing problems, and Apple probably prioritizes production of the widely used base-model M4 and M5 chips over the larger, more expensive, more complex Max models.

But the only other device in Apple’s lineup that offers the M4 Max and similar RAM configuration options is the high-end Mac Studio, which currently isn’t subject to the same shipping delays. That does imply that the delays are specific to the MacBook Pro—and one explanation for this is that the laptop is about to be replaced.



Meet Veronika, the tool-using cow

Each time, Veronika used her tongue to lift and position the broom in her mouth, clamping down with her teeth for a stable grip. This enabled her to use the broom to scratch otherwise hard-to-reach areas on the rear half of her body. Veronika seemed to prefer the brush end to the stick end (i.e., the exploitation of distinct properties of a single object for different functions), although which end she used depended on body area. For example, she used the brush end to scratch her upper body using a scrubbing motion, while using the stick end to scratch more sensitive lower areas like her udders and belly skin flaps using precisely targeted, gentle forward pushes. She also anticipated the need to adjust her grip.

The authors conclude that this behavior demonstrates “goal-directed, context-sensitive tooling,” as well as versatility in her tool use, anticipation, and fine-motor targeting. Veronika’s scratching behavior is likely motivated by the desire to relieve itching from insect bites, but her environment, more open and complex than that of most livestock, and her regular interactions with humans enabled her unusual cognitive abilities to emerge.

The implication is that this kind of technical problem-solving is not confined to species with large brains and hands or beaks. “[Veronika] did not fashion tools like the cow in Gary Larson’s cartoon, but she selected, adjusted, and used one with notable dexterity and flexibility,” the authors wrote. “Perhaps the real absurdity lies not in imagining a tool-using cow, but in assuming such a thing could never exist.”

DOI: Current Biology, 2025. 10.1016/j.cub.2025.11.059 (About DOIs).



10 things I learned from burning myself out with AI coding agents


Opinion: As software power tools, AI agents may make people busier than ever before.

Credit: Aurich Lawson | Getty Images

If you’ve ever used a 3D printer, you may recall the wondrous feeling when you first printed something you could have never sculpted or built yourself. Download a model file, load some plastic filament, push a button, and almost like magic, a three-dimensional object appears. But the result isn’t polished and ready for mass production, and creating a novel shape requires more skills than just pushing a button. Interestingly, today’s AI coding agents feel much the same way.

Since November, I have used Claude Code and Claude Opus 4.5 through a personal Claude Max account to extensively experiment with AI-assisted software development (I have also used OpenAI’s Codex in a similar way, though not as frequently). Fifty projects later, I’ll be frank: I have not had this much fun with a computer since I learned BASIC on my Apple II Plus when I was 9 years old. This opinion comes not as an endorsement but as personal experience: I voluntarily undertook this project, and I paid out of pocket for both OpenAI and Anthropic’s premium AI plans.

Throughout my life, I have dabbled in programming as a utilitarian coder, writing small tools or scripts when needed. In my web development career, I wrote some small tools from scratch, but I primarily modified other people’s code for my needs. Since 1990, I’ve programmed in BASIC, C, Visual Basic, PHP, ASP, Perl, Python, Ruby, MUSHcode, and some others. I am not an expert in any of these languages—I learned just enough to get the job done. I have developed my own hobby games over the years using BASIC, Torque Game Engine, and Godot, so I have some idea of what makes a good architecture for a modular program that can be expanded over time.

In December, I used Claude Code to create a multiplayer online clone of Katamari Damacy called “Christmas Roll-Up.” Credit: Benj Edwards

Claude Code, Codex, and Google’s Gemini CLI can seemingly perform software miracles on a small scale. They can spit out flashy prototypes of simple applications, user interfaces, and even games, but only as long as they borrow patterns from their training data. Much as with a 3D printer, doing production-level work takes far more effort. Creating durable production code, managing a complex project, or crafting something truly novel still requires experience, patience, and skill beyond what today’s AI agents can provide on their own.

And yet these tools have opened a world of creative potential in software that was previously closed to me, and they feel personally empowering. Even with that impression, though, I know these are hobby projects, and the limitations of coding agents lead me to believe that veteran software developers probably shouldn’t fear losing their jobs to these tools any time soon. In fact, they may become busier than ever.

So far, I have created over 50 demo projects in the past two months, fueled in part by a bout of COVID that left me bedridden with a laptop and a generous 2x Claude usage cap that Anthropic put in place during the last few weeks of December. As I typed furiously all day, my wife kept asking me, “Who are you talking to?”

You can see a few of the more interesting results listed on my personal website. Here are 10 interesting things I’ve learned from the process.

1. People are still necessary

Even with the best AI coding agents available today, humans remain essential to the software development process. Experienced human software developers bring judgment, creativity, and domain knowledge that AI models lack. They know how to architect systems for long-term maintainability, how to balance technical debt against feature velocity, and when to push back when requirements don’t make sense.

For hobby projects like mine, I can get away with a lot of sloppiness. But for production work, having someone who understands version control, incremental backups, testing one feature at a time, and debugging complex interactions between systems makes all the difference. Knowing something about how good software development works helps a lot when guiding an AI coding agent—the tool amplifies your existing knowledge rather than replacing it.

As independent AI researcher Simon Willison wrote in a post distinguishing serious AI-assisted development from casual “vibe coding,” “AI tools amplify existing expertise. The more skills and experience you have as a software engineer the faster and better the results you can get from working with LLMs and coding agents.”

With AI assistance, you don’t have to remember how to do everything. You just need to know what you want to do.

Card Miner: Heart of the Earth is entirely human-designed, but it was AI-coded using Claude Code. It represents about a month of iterative work. Credit: Benj Edwards

So I like to remind myself that coding agents are software tools best used to enact human ideas, not autonomous coding employees. They are not people (and not people replacements) no matter how the companies behind them might market them.

If you think about it, everything you do on a computer was once a manual process. Programming a computer like the ENIAC involved literally making physical bits (connections) with wire on a plugboard. The history of programming has been one of increasing automation, so even though this AI-assisted leap is somewhat startling, one could think of these tools as an advancement similar to the advent of high-level languages, automated compilers and debugger tools, or GUI-based IDEs. They can automate many tasks, but managing the overarching project scope still falls to the person telling the tool what to do.

And they can have rapidly compounding benefits. I’ve now used AI tools to write better tools—such as changing the source of an emulator so a coding agent can use it directly—and those improved tools are already having ripple effects. But a human must be in the loop for the best execution of my vision. This approach has kept me very busy, and contrary to some prevailing fears about people becoming dumber due to AI, I have learned many new things along the way.

2. AI models are brittle beyond their training data

Like all AI models based on the Transformer architecture, the large language models (LLMs) that underpin today’s coding agents have a significant limitation: They can only reliably apply knowledge gleaned from training data, and they have a limited ability to generalize that knowledge to novel domains not represented in that data.

What is training data? In this case, when building coding-flavored LLMs, AI companies download millions of examples of software code from sources like GitHub and use them to train the models, which are later specialized for coding through fine-tuning.

The ability of AI agents to use trial and error—attempting something and then trying again—helps mitigate the brittleness of LLMs somewhat. But it’s not perfect, and it can be frustrating to see a coding agent spin its wheels trying and failing at a task repeatedly, either because it doesn’t know how to do it or because it previously learned how to solve a problem but then forgot because the context window got compacted (more on that here).

Violent Checkers is a physics-based corruption of the classic board game, coded using Claude Code. Credit: Benj Edwards

To get around this, it helps to have the AI model take copious notes as it goes, recording how it solved certain problems so that future instances of the agent can learn from them again. You also want to set ground rules in the claude.md file that the agent reads when it begins its session.

This brittleness means that coding agents are almost frighteningly good at what they’ve been trained and fine-tuned on—modern programming languages, JavaScript, HTML, and similar well-represented technologies—and generally terrible at tasks on which they have not been deeply trained, such as 6502 Assembly or programming an Atari 800 game with authentic-looking character graphics.

It took me five minutes to make a nice HTML5 demo with Claude but a week of torturous trial and error, plus actual systematic design on my part, to make a similar demo of an Atari 800 game. To do so, I had to use Claude Code to invent several tools, like command-line emulators and MCP servers, that allow it to peek into the operation of the Atari 800’s memory and chipset to even begin to make it happen.

3. True novelty can be an uphill battle

Due to what might poetically be called “preconceived notions” baked into a coding model’s neural network (more technically, statistical semantic associations), it can be difficult to get AI agents to create truly novel things, even if you carefully spell out what you want.

For example, I spent four days trying to get Claude Code to create an Atari 800 version of my HTML game Violent Checkers, but it had trouble because in the game’s design, the squares on the checkerboard don’t matter beyond their starting positions. No matter how many times I told the agent (and made notes in my Claude project files), it would come back to trying to center the pieces on the squares, snap them within squares, or use the squares as a logical basis for the game’s calculations when they should really just form a background image.

To get around this in the Atari 800 version, I started over and told Claude that I was creating a game with a UFO (instead of a circular checker piece) flying over a field of adjacent squares—never once mentioning the words “checker,” “checkerboard,” or “checkers.” With that approach, I got the results I wanted.

A screenshot of Benj’s Mac while working on a Violent Checkers port for the Atari 800 home computer, amid other projects. Credit: Benj Edwards

Why does this matter? Because with LLMs, context is everything, and in language, context changes meaning. Take the word “bank” and add the words “river” or “central” in front of it, and see how the meaning changes. In a way, words act as addresses that unlock the semantic relationships encoded in a neural network. So if you put “checkerboard” and “game” in the context, the model’s self-attention process links up a massive web of semantic associations about how checkers games should work, and that semantic baggage throws things off.

A couple of tricks can help AI coders navigate around these limitations. First, avoid contaminating the context with irrelevant information. Second, when the agent gets stuck, try this prompt: “What information do you need that would let you implement this perfectly right now? What tools are available to you that you could use to discover that information systematically without guessing?” This forces the agent to identify (semantically link up) its own knowledge gaps, spelled out in the context window and subject to future action, instead of flailing around blindly.

4. The 90 percent problem

The first 90 percent of an AI coding project comes in fast and amazes you. The last 10 percent involves tediously filling in the details through back-and-forth trial-and-error conversation with the agent. Tasks that require deeper insight or understanding than what the agent can provide still require humans to make the connections and guide it in the right direction. The limitations we discussed above can also cause your project to hit a brick wall.

From what I have observed over the years, larger LLMs can potentially make deeper contextual connections than smaller ones. They have more parameters (encoded data points), and those parameters are linked in more multidimensional ways, so they tend to have a deeper map of semantic relationships. As deep as those go, it seems that human brains still have an even deeper grasp of semantic connections and can make wild semantic jumps that LLMs tend not to.

Creativity, in this sense, may be when you jump from, say, basketball to how bubbles form in soap film and somehow make a useful connection that leads to a breakthrough. Instead, LLMs tend to follow conventional semantic paths that are more conservative and entirely guided by mapped-out relationships from the training data. That limits their creative potential unless the prompter unlocks it by guiding the LLM to make novel semantic connections. That takes skill and creativity on the part of the operator, which once again shows the role of LLMs as tools used by humans rather than independent thinking machines.

5. Feature creep becomes irresistible

While creating software with AI coding tools, the joy of experiencing novelty makes you want to keep adding interesting new features rather than fixing bugs or perfecting existing systems. And Claude (or Codex) is happy to oblige, churning away at new ideas that are easy to sketch out in a quick and pleasing demo (the 90 percent problem again) rather than polishing the code.

Flip-Lash started as a “Tetris but you can flip the board,” but feature creep made me throw in the kitchen sink, losing focus. Credit: Benj Edwards

Fixing bugs can also create bugs elsewhere. This is not new to coding agents—it’s a time-honored problem in software development. But agents supercharge this phenomenon because they can barrel through your code and make sweeping changes in pursuit of narrow-minded goals that affect lots of working systems. We’ve already talked above about the importance of a good architecture guided by the human behind the wheel, and that comes into play here.

6. AGI is not here yet

Given the limitations I’ve described above, it’s very clear that an AI model with general intelligence—what people usually call artificial general intelligence (AGI)—is still not here. AGI would hypothetically be able to navigate around baked-in stereotype associations and not have to rely on explicit training or fine-tuning on many examples to get things right. AI companies will probably need a different architecture in the future.

I’m speculating, but AGI would likely need to learn permanently on the fly—as in modify its own neural network weights—instead of relying on what is called “in-context learning,” which only persists until the context fills up and gets compacted or wiped out.

Grapheeti is a “drawing MMO” where people around the world share a canvas. Credit: Benj Edwards

In other words, you could teach a true AGI system how to do something by explanation or let it learn by doing, noting successes, and having those lessons permanently stick, no matter what is in the context window. Today’s coding agents can’t do that—they forget lessons from earlier in a long session or between sessions unless you manually document everything for them. My favorite trick is instructing them to write a long, detailed report on what happened when a bug is fixed. That way, you can point to the hard-earned solution the next time the amnestic AI model makes the same mistake.

7. Even fast isn’t fast enough

After using Claude Code for a while, it’s easy to take for granted that you suddenly have the power to create software without knowing certain programming languages. This is amazing at first, but you can quickly become frustrated that what is conventionally a very fast development process isn’t fast enough. Impatience at the coding machine sets in, and you start wanting more.

But even if you do know the programming languages being used, you don’t get a free pass. You still need to make key decisions about how the project will unfold. And when the agent gets stuck or makes a mess of things, your programming knowledge becomes essential for diagnosing what went wrong and steering it back on course.

8. People may become busier than ever

After guiding way too many hobby projects through Claude Code over the past two months, I’m starting to think that most people won’t become unemployed due to AI—they will become busier than ever. Power tools allow more work to be done in less time, and the economy will demand more productivity to match.

It’s almost too easy to make new software, in fact, and that can be exhausting. One project idea would lead to another, and I was soon spending eight hours a day during my winter vacation shepherding about 15 Claude Code projects at once. That’s too much split attention for good results, but the novelty of seeing my ideas come to life was addictive. In addition to the game ideas I’ve mentioned here, I made tools that scrape and search my past articles, a graphical MUD based on ZZT, a new type of MUSH (text game) that uses AI-generated rooms, a new type of Telnet display proxy, and a Claude Code client for the Apple II (more on that soon). I also put two AI-enabled emulators for Apple II and Atari 800 on GitHub. Phew.

Consider the advent of the steam shovel, which allowed humans to dig holes faster than a team using hand shovels. It made existing projects faster and new projects possible. But think about the human operator of the steam shovel. Suddenly, we had a tireless tool that could work 24 hours a day if fueled up and maintained properly, while the human piloting it would need to eat, sleep, and rest.

I used Claude Code to create a windowing GUI simulation of the Mac that works over Telnet. Credit: Benj Edwards

In fact, we may end up needing new protections for human knowledge workers using these tireless information engines to implement their ideas, much as unions rose as a response to industrial production lines over 100 years ago. Humans need rest, even when machines don’t.

Will an AI system ever replace the human role here? Even if AI coding agents could eventually work fully autonomously, I don’t think they’ll replace humans entirely because there will still be people who want to get things done, and new AI power tools will emerge to help them do it.

9. Fast is scary to people

AI coding tools can turn what was once a year-long personal project into a five-minute session. I fed Claude Code a photo of a two-player Tetris game I sketched in a notebook back in 2008, and it produced a working prototype in minutes (prompt: “create a fully-featured web game with sound effects based on this diagram”). That’s wild, and even though the results are imperfect, it’s a bit frightening to comprehend what kind of sea change in software development this might entail.

Since early December, I’ve been posting some of my more amusing experimental AI-coded projects to Bluesky for people to try out, but I discovered I needed to deliberately slow down with updates because they came too fast for people to absorb (and too fast for me to fully test). I’ve also received comments like “I’m worried you’re using AI, you’re making games too fast” and so on.

Benj’s handwritten game design note about a two-player Tetris concept from 2007. Credit: Benj Edwards

Regardless of my own habits, the flow of new software will not slow down. There will soon be a seemingly endless supply of AI-augmented media (games, movies, images, books), and that’s a problem we’ll have to figure out how to deal with. These products won’t all be “AI slop,” either; some will be done very well, and the acceleration in production times due to these new power tools will balloon the quantity beyond anything we’ve seen.

Social media tends to prime people to believe that AI is all good or all bad, but that kind of black-and-white thinking may be the easy way out. You’ll have no cognitive dissonance, but you’ll miss a far richer third option: seeing these tools as imperfect and deserving of critique but also as useful and empowering when they bring your ideas to life.

AI agents should be considered tools, not entities or employees, and they should be amplifiers of human ideas. My game-in-progress Card Miner is entirely my own high-level creative design work, but the AI model handled the low-level code. I am still proud of it as an expression of my personal ideas, and it would not exist without AI coding agents.

10. These tools aren’t going away

For now, at least, coding agents remain very much tools in the hands of people who want to build things. The question is whether humans will learn to wield these new tools effectively to empower themselves. Based on two months of intensive experimentation, I’d say the answer is a qualified yes, with plenty of caveats.

We also have social issues to face: Professional developers already use these tools, and with the prevailing stigma against AI tools in some online communities, many software developers and the platforms that host their work will face difficult decisions.

Ultimately, I don’t think AI tools will make human software designers obsolete. Instead, they may well help those designers become more capable. This isn’t new, of course; tools of every kind have been serving this role since long before the dawn of recorded history. The best tools amplify human capability while keeping a person behind the wheel. The 3D printer analogy holds: amazing fast results are possible, but mastery still takes time, skill, and a lot of patience with the machine.


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.



Mother of one of Elon Musk’s offspring sues xAI over sexualized deepfakes

The news comes as xAI and Musk have come under fire over fake sexualized images of women and children, which proliferated on the platform this year, particularly after Musk jokingly shared an AI-altered post of himself in a bikini.

Over the past week, the issue has prompted threats of fines and bans in the EU, UK, and France, as well as investigations by the California attorney-general and Britain’s Ofcom regulator. Grok has also been banned in Indonesia and Malaysia.

On Wednesday, xAI took action to restrict the image-generation function on its Grok AI model to block the chatbot from undressing users, insisting that it removed Child Sexual Abuse Material (CSAM) and non-consensual nudity.

St Clair, who has in recent months been increasingly critical of Musk, is also seeking a temporary restraining order to prevent xAI from generating images that undress her.

“Ms St Clair is humiliated, depressed, fearful for her life, angry and desperately in need of action from this court to protect her against xAI’s facilitation of this unfathomable nightmare,” lawyers wrote in a filing seeking the restraining order.

xAI filed a lawsuit against St Clair in Texas on Thursday, claiming she had breached the company’s terms of service by bringing her lawsuit against the company in a New York court instead of in Texas.

Earlier this week, Musk also said on X that he would be filing for “full custody” of their 1-year-old son Romulus, after St Clair apologized for sharing posts critical of transgender people in the past. Musk, who has a transgender child, has repeatedly been critical of transgender people and the rights of trans individuals.

Additional reporting by Kaye Wiggins in New York.

© 2026 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Rackspace customers grapple with “devastating” email hosting price hike

“We had really good reseller pricing that we negotiated with Rackspace due to the number of mailboxes we had with them and how long we had been a customer. All of that seemed to vanish when they notified us of their new pricing,” he said.

Ars contacted Rackspace asking about the 706 percent price hike that Laughing Squid says it’s facing, why Rackspace decided to increase its prices now, and why it didn’t give its partners more advance notice. A company spokesperson responded, saying:

Rackspace Email is a reliable and secure business-class email solution for small businesses. To continue delivering the service levels our customers expect, effective March 2026, Rackspace Technology is increasing the price of Rackspace Email. We have a support team available to help our customers to discuss their options.

The spokesperson added that Rackspace’s “mission is to deliver quality, trusted and reliable hosted email solution for businesses.”

Email hosting is a tough business

Despite Rackspace’s stated commitment to email hosting, the prohibitive pricing reads like an effort to steer customers away from a business widely viewed as high-effort and low-margin. Email has grown complex over the years, requiring time and expertise for proper management at scale. It’s become simpler, or more lucrative, for some cloud companies to focus on selling their managed services on top of offerings like Microsoft 365—as Rackspace does—or Google Workspace and let the larger companies behind those solutions deal with infrastructure costs and complexities.

Rackspace’s price hike also comes as an AI-driven RAM shortage is impacting the availability and affordability of other computing components, including storage.

Rackspace, which went public in 2020, also quit hosting Microsoft Exchange following a costly 2022 ransomware attack, so the Texas-headquartered company may be looking to minimize its email hosting duties as much as possible.

Meanwhile, Laughing Squid is increasing prices for Rackspace mailboxes and offering services with a different email provider, PolarisMail, to customers at lower prices. Beale said he has reached out to Rackspace about the new pricing but hasn’t heard back yet.



Archaeologists find a supersized medieval shipwreck in Denmark


the wreck and the story of the wreck

The sunken ship reveals that the medieval European economy was growing fast.

This is a replica of another cog, based on an excavated shipwreck from Bremen. Note the sterncastle. Credit: VollwertBIT

Archaeologists recently found the wreck of an enormous medieval cargo ship lying on the seafloor off the Danish coast, and it reveals new details of medieval trade and life at sea.

Archaeologists discovered the shipwreck while surveying the seabed in preparation for a construction project for the city of Copenhagen, Denmark. It lay on its side, half-buried in the sand, 12 meters below the choppy surface of the Øresund, the strait that runs between Denmark and Sweden. By comparing the tree rings in the wreck’s wooden planks and timbers with rings from other, precisely dated tree samples, the archaeologists concluded that the ship had been built around 1410 CE.

The Svaelget 2 shipwreck, with a diver for scale. Credit: Viking Ship Museum

A medieval megaship

Svaelget 2, as archaeologists dubbed the wreck (its original name is long since lost to history), was a type of merchant ship called a cog: a wide, flat-bottomed, high-sided ship with an open cargo hold and a square sail on a single mast. A bigger, heavier, more advanced version of the Viking knarrs of centuries past, the cog was the high-tech supertanker of its day. It was built to carry bulky commodities from ports in the Netherlands, north around the coast of Denmark, and then south through the Øresund to trading ports on the Baltic Sea—but this one didn’t quite make it.

Most cogs would have been about 15 to 25 meters long and 5 to 8 meters wide, capable of carrying about 200 tons of cargo—big, impressive ships for their time. But Svaelget 2, an absolute unit of a ship, measured about 28 meters from bow to stern and 9 meters wide, and could have carried about 300 tons. Its size alone was a surprise to the archaeologists.

“We now know, undeniably, that cogs could be this large—that the ship type could be pushed to this extreme,” said archaeologist Otto Uldum of Denmark’s Viking Ship Museum, who led the excavation, in a press release.

Medieval Europe’s merchant class was growing in both size and wealth in the early 1400s, and the cog was both a product of that growth and the engine driving it. The mere fact of its existence points to a society that could afford to invest in building big, expensive trading ships (and could confidently expect a return on that investment). And physically, it’s a product of the same trading networks it supplied: while the heavy timbers of its frame were cut locally in the Netherlands, the Pomeranian oak planks of Svaelget 2’s hull came from Poland.

“The cog revolutionized trade in northern Europe,” said Uldum. “It made it possible to transport goods on a scale never seen before.”

The super ship’s superb superstructure

For about 600 years, layers of sand had protected the starboard (right, for you landlubbers) side of the wreck from erosion and decay. Nautical archaeologists usually find only the very bottoms of cogs; the upper structures of the ship—rigging, decks, and castles—quickly decay in the ocean. That means that some of the most innovative parts of the ships’ construction appear only in medieval drawings and descriptions.

But Svaelget 2 offers archaeologists a hands-on look at the real deal, from rigging to the ship’s galley and the stern castle: a tall wooden structure at the back of the ship, where crew and passengers could have sought at least a little shelter from the elements. Medieval drawings and texts describe cogs having high castles at both bow and stern, but archaeologists have never gotten to examine a real one to learn how it’s put together or how it connects with the rest of the ship’s construction.

“We have plenty of drawings of castles, but they have never been found because usually only the bottom of the ship survives,” said Uldum. “[The castle] is a big step forward compared to Viking Age ships, which had only open decks in all kinds of weather.”

Lying on and around the remains of the cog’s decks, Uldum and his colleagues also found stays (ropes that would have held the mast in place) and lines for controlling the ship’s single square sail, along with ropes and chains that would once have secured the merchant vessel’s cargo in the open hold.

Life at sea in the Middle Ages

The cog would probably have sailed with between 30 and 45 crew members. No remains were found on the wreck, but the lost crew left behind small, tantalizing traces of their lives and their presence. Uldum and his colleagues found combs, shoes, and rosary beads, along with dishes and tableware.

“The sailor brought his comb to keep his hair neat and his rosary to say his prayers,” said Uldum (and one has to picture the sailor’s grandmother beaming proudly at that description). “These personal objects show us that the crew brought everyday items with them. They transferred their life on land to life at sea.”

Life at sea, for the medieval sailors aboard Svaelget 2, would have included at least occasional hot meals, cooked in bronze pots over an open fire in the ship’s galley and eaten on dishes of ceramic and painted wood. Bricks (about 200 of them) and tiles formed a sort of fireplace where the cook could safely build a fire aboard the otherwise very flammable ship.

“It speaks of remarkable comfort and organization on board,” said Uldum. “Now sailors could have hot meals similar to those on land, instead of the dried and cold food that previously dominated life at sea.” Plenty of dried meat and cold biscuits still awaited sailors for the next several centuries, of course, but when weather and time permitted, at least the crew of Svaelget 2 could gather around a hot meal. The galley would have been a relatively new part of shipboard life for sailors in the early 1400s—and it quickly became a vital one.

Cargo? Go where?

One thing usually marks the site of a shipwreck, even when everything else has disintegrated into the ocean: ballast stones. When merchant ships were empty, they carried stones in their holds to help keep the ship stable; otherwise, the empty ship would be top-heavy and prone to tipping over, which is usually not ideal. (Modern merchant vessels use water, in special tanks, for ballast.) But Uldum and his colleagues didn’t find ballast stones on Svaelget 2, which means the cog was probably fully laden with cargo when it sank.

But the cargo is also conspicuously absent. Cogs were built to carry bulk goods—things like bricks, grain and other staple foods, fabric, salt, and timber. Those goods would have been stowed in an open hold amidships, secured by ropes and chains (some of which remain on the wreck). But barrels, boards, and bolts of fabric all float. As the ship sank and water washed into the hold, it would have carried away the cargo.

Some of it may have washed up on the shores or even more distant beaches, becoming a windfall for local residents. The rest probably sank to the bottom of the sea, far from the ship and its destination.


Kiona is a freelance science journalist and resident archaeology nerd at Ars Technica.



Mandiant releases rainbow table that cracks weak admin password in 12 hours

Microsoft introduced NTLMv1 in the 1980s with the release of OS/2. In 1999, cryptanalysts Bruce Schneier and Mudge published research that exposed key weaknesses in the NTLMv1 underpinnings. Microsoft had already introduced NTLMv2, which fixed the weakness, with the release of Windows NT SP4 in 1998. Even so, at the 2012 Defcon 20 conference, researchers released a tool set that allowed attackers to move from untrusted network guest to admin in 60 seconds by exploiting the underlying weakness.

Organizations that rely on Windows networking aren’t the only laggards. Microsoft only announced plans to deprecate NTLMv1 last August.

Despite the public awareness that NTLMv1 is weak, “Mandiant consultants continue to identify its use in active environments,” the company said. “This legacy protocol leaves organizations vulnerable to trivial credential theft, yet it remains prevalent due to inertia and a lack of demonstrated immediate risk.”

The attack typically starts with tools such as Responder, PetitPotam, and DFSCoerce, which coerce a victim machine into authenticating to a server the attacker controls. That server issues the fixed challenge 1122334455667788 during the authentication handshake, and the victim answers with a Net-NTLMv1 response derived from that known challenge and its password hash. Because the challenge never varies, the attacker can then use the precomputed rainbow table to rapidly crack the captured response.
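
To make the exchange concrete, here is a minimal sketch of how a Net-NTLMv1 response is computed, assuming PyCryptodome is installed; the function names and the example password are hypothetical, and this illustrates the protocol math rather than Mandiant's tooling.

```python
# Sketch of the Net-NTLMv1 response calculation (assumes: pip install pycryptodome).
from Crypto.Cipher import DES
from Crypto.Hash import MD4

FIXED_CHALLENGE = bytes.fromhex("1122334455667788")  # the well-known static challenge

def expand_des_key(key7: bytes) -> bytes:
    """Expand 7 key bytes into an 8-byte DES key (parity bits left clear)."""
    bits = int.from_bytes(key7, "big")
    return bytes(((bits >> (49 - 7 * i)) & 0x7F) << 1 for i in range(8))

def ntlmv1_response(password: str, challenge: bytes = FIXED_CHALLENGE) -> bytes:
    """Return the 24-byte Net-NTLMv1 response for a password and challenge."""
    nt_hash = MD4.new(password.encode("utf-16-le")).digest()  # 16-byte NT hash
    padded = nt_hash + b"\x00" * 5                            # pad to 21 bytes
    return b"".join(
        DES.new(expand_des_key(padded[7 * i : 7 * i + 7]), DES.MODE_ECB).encrypt(challenge)
        for i in range(3)
    )

# "Winter2026!" is a made-up example password, not taken from the article.
print(ntlmv1_response("Winter2026!").hex())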

In a thread on Mastodon, researchers and admins applauded the move, because they said it would give them added ammunition when trying to convince decision makers to make the investments to move off the insecure function.

“I’ve had more than one instance in my (admittedly short) infosec career where I’ve had to prove the weakness of a system and it usually involves me dropping a sheet of paper on their desk with their password on it the next morning,” one person said. “These rainbow tables aren’t going to mean much for attackers as they’ve likely already got them or have far better methods, but where it will help is in making the argument that NTLMv1 is unsafe.”

The Mandiant post provides basic steps required to move off of NTLMv1. It links to more detailed instructions.

“Organizations should immediately disable the use of Net-NTLMv1,” Mandiant said. Organizations that get hacked because they failed to heed that warning will have only themselves to blame.



RAM shortage chaos expands to GPUs, high-capacity SSDs, and even hard drives

Big Tech’s AI-fueled memory shortage is set to be the PC industry’s defining story for 2026 and beyond. Standalone, direct-to-consumer RAM kits were some of the first products to feel the bite, with prices spiking by 300 or 400 percent by the end of 2025; prices for SSDs had also increased noticeably, albeit more modestly.

The rest of 2026 is going to be all about where, how, and to what extent those price spikes flow downstream into computers, phones, and other components that use RAM and NAND chips—areas where the existing supply of products and longer-term supply contracts negotiated by big companies have helped keep prices from surging too noticeably so far.

This week, we’re seeing signs that the RAM crunch is starting to affect the GPU market—Asus made some waves when it inadvertently announced that it was discontinuing its GeForce RTX 5070 Ti.

Though the company has since tried to walk this announcement back, if you’re a GPU manufacturer, there’s a strong argument for either discontinuing this model or de-prioritizing it in favor of other GPUs. The 5070 Ti uses 16GB of GDDR7, plus a partially disabled version of Nvidia’s GB203 GPU silicon. This is the same chip and the same amount of RAM used in the higher-end RTX 5080—the thinking goes, why continue to build a graphics card with an MSRP of $749 when the same basic parts could go to a card with a $999 MSRP instead?

Whether Asus or any other company is canceling production or not, you can see why GPU makers would be tempted by the argument: Street prices for the RTX 5070 Ti models start in the $1,050 to $1,100 range on Newegg right now, where RTX 5080 cards start in the $1,500 to $1,600 range. Though 5080 models may need more robust boards, heatsinks, and other components than a 5070 Ti, if you’re just trying to maximize the profit-per-GPU you can get for the same amount of RAM, it makes sense to shift allocation to the more expensive cards.



Calif. counters FCC attack on DEI with conditions on Verizon/Frontier merger

Verizon has received all approvals it needs for a $9.6 billion acquisition of Frontier Communications, an Internet service provider with about 3.3 million broadband customers in 25 states. Verizon said it expects to complete the merger on January 20.

The last approval came from the California Public Utilities Commission (CPUC), which allowed the deal in a 5–0 vote yesterday. Months of negotiations resulted in requirements to deploy more fiber and wireless infrastructure and to offer $20-per-month Internet service to people with low incomes for the next decade, along with other commitments, including some designed to replace the DEI (diversity, equity, and inclusion) policies that Verizon had to end because of demands by the Trump administration.

“The approval follows extensive public participation, testimony from multiple parties, and negotiated settlement agreements with consumer advocates and labor organizations,” the CPUC said yesterday.

Verizon struck the merger deal with Frontier in September 2024, agreeing to pay $9.6 billion in cash and assume over $10 billion in debt held by Frontier. The all-cash transaction is valued at $20 billion including debt. Verizon said yesterday that the merged firm “will have an expanded reach of almost 30 million fiber passings across 31 states and Washington, DC.”

Verizon to expand network, maintain low-income plans

Verizon’s interest in its home Internet business has waxed and waned over the years, but the company seems pretty committed to fiber and fixed wireless home Internet these days. Part of the deal involves Verizon buying back a portion of the network it sold to Frontier almost 10 years ago. In 2016, Frontier bought Verizon’s FiOS and DSL operations in Florida, California, and Texas.

At yesterday’s CPUC meeting, Commissioner John Reynolds described Verizon’s commitments. Verizon will deploy fiber to 75,000 new locations within five years, prioritizing census blocks with income at or below 90 percent of the county median, he said. For wireless service, Verizon is required to deploy 250 new cell sites with 5G and fixed wireless capability in areas eligible for state broadband grants and areas with high fire threats, he said.



“I am very annoyed”: Pharma execs blast RFK Jr.’s attack on vaccines

Waiting for the midterms

But pharmaceutical executives don’t appear comforted by the pushback. “Today it may be childhood vaccines or mRNA, but tomorrow it’s everything,” Noubar Afeyan, co-founder and chairman of Moderna, maker of mRNA vaccines, said. “We have to say not just ‘why is this happening?,’ but ‘Where will it stop?’”

As a bad flu season gets underway, Dean Li, president of Merck Research Laboratories, noted that the anti-vaccine rhetoric is hitting seasonal flu shots. “With the pressure on vaccination, I cannot foresee flu vaccination increasing in this country over the next three years,” he said in a presentation.

Sanofi Chief Executive Paul Hudson had a similarly pessimistic outlook. “It’s clear this administration has a particular sensitivity around vaccination, and indeed pediatric vaccination,” Hudson said. “I’m asked all the time ‘what are you going to do to fix this?,’ and the truth is we just need to stay extremely objective and continue presenting the evidence. There’s really very little else we can do,” except wait for the midterm elections, he said.

“We will have to maintain a steely focus on the long-term future of vaccines and deal with any uncertainty around vaccine coverage rates in the short term based on misinformation, Facebook posts, and statements from the top,” he said.

Bourla also worried about the conditions Kennedy is creating to attack drug makers. Kennedy, who is an environmental lawyer with no scientific or medical background, has profited from lawsuits against vaccine makers, as have many of his allies and advisors. “There is also a lot of plaintiffs’ playbook there,” Bourla said. “Everybody will start litigating.”



Are people avoiding iOS 26 because of Liquid Glass? It’s complicated.


are people really skipping Liquid Glass?

Liquid Glass is controversial, but adoption rates aren’t as low as they seem.

iPhones running iOS 26. Credit: Apple

Last week, news about the adoption rates for Apple’s iOS 26 update started making the rounds. The new update, these reports claim, was being installed at dramatically lower rates than past iOS updates. And while we can’t infer anything about why people might choose not to install iOS 26, the conclusion many have jumped to is that iPhone users are simply desperate to avoid the redesigned Liquid Glass user interface.

The numbers do, in fact, look bad: Statcounter data for January suggests that the various versions of iOS 26 are running on just 16.6 percent of all devices, compared to around 70 percent for the various versions of iOS 18. The iOS 18.7 update alone—released at the same time as iOS 26.0 in September for people who wanted the security patches but weren’t ready to step up to a brand-new OS—appears to be running on nearly one-third of all iOS devices.

Those original reports were picked up and repeated because they tell a potentially interesting story of the “huge if true” variety: that users’ aversion to the Liquid Glass design is so intense and widespread that it’s actively keeping users away from the operating system. But after examining our own traffic numbers, as well as some technical changes made in iOS 26, it appears Statcounter’s data is dramatically undercounting the number of iOS 26 devices in the wild.

We’ve taken a high-level look at all iPhone traffic across all Condé Nast websites for October, November, and December of 2025 and compared it to traffic from October, November, and December of 2024. This data suggests that iOS 26 is being adopted more slowly than iOS 18 was the year before—roughly 76 percent of all iPhone pageviews came from devices running iOS 18 in December of 2024, compared to about 45 percent for iOS 26 in December of 2025.

That’s not as cataclysmic a dropoff as Statcounter’s data suggests, even before considering other mitigating factors—iOS 26 dropped support for 2018’s iPhone XS, XS Max, and XR, for example, while iOS 18 ran on every iPhone that could run iOS 17.

But it’s still a much slower rate of adoption than we’re used to for most iOS versions, and it’s something to monitor as we get closer to iOS 27 and Apple’s first opportunity to make major changes to Liquid Glass. And to monitor it, it’s important to be able to measure it correctly. There have been behind-the-scenes changes to iOS 26 that appear to have thrown off Statcounter’s data collection—let’s talk about those, about what our own data shows, and about why you may want to upgrade to iOS 26 soon even if you don’t care for Liquid Glass.

User agent string changes in iOS 26

It turns out that telling an iOS 18 device from an iOS 26 device is harder than it ought to be, and that’s because of a change Apple made to Safari in iOS 26.

Web analytics software (and services like Statcounter) attempt to gather device data by looking at the browser’s user agent string, a short list of information about the hardware, operating system, browser, and browser engine. There are benign and useful reasons to collect this kind of data. If you’re a web developer fielding a ton of user complaints from people who are all using a specific browser or OS version, it can help you narrow down what the issue is and test a fix. You could also use the user agent string to decide whether to show the desktop or mobile version of your site to a user.

But if this information is too accurate or detailed, it can lead to “fingerprinting”—the ability for sites to identify a specific user or specific type of user from their user-agent string. Browser makers have taken steps, both together and separately, to reduce the amount of fingerprinting that is possible.

And occasionally, browsers will intentionally misrepresent their user agent string for compatibility reasons. For example, the default user agent string for Safari running on modern versions of iPadOS claims that the browser is running on top of macOS to make sites rendered on an iPad work more like sites rendered on a Mac. Apple froze the macOS version in Safari’s user agent string to 10.15.7 several years ago, partly to reduce fingerprinting and partly to resolve compatibility problems that some sites had when Apple put “macOS 11” in the user agent string after decades of macOS 10.

All of this is to say: information derived from the user-agent string is only as accurate as the OSes and browsers that are reporting their user-agent strings. And in iOS 26, Apple decided to freeze the iOS version in Safari’s user agent string to version 18 in order to reduce fingerprinting (credit to developer and blogger Niels Leenheer, who both explained this change and confirmed with Apple engineer Karl Dubost why it was made).

Which explains why anyone looking at Statcounter’s data could draw incorrect conclusions about iOS 26 adoption: because most iOS users are running Safari, and because all Safari versions running on iOS 26 are claiming to be running on iOS 18.6 or 18.7 instead.

Only third-party browsers like Google Chrome or Microsoft Edge are reporting an iOS version of 26 in their user agent strings, so what Statcounter is inadvertently measuring is the number of Chrome users who have updated to iOS 26, not the total number of users who have updated.

What our data says

There is a workaround for this, at least for iOS. Safari on iOS 26 will report an iOS version of 18.6 or 18.7, but it also reports a Safari version of 26.x. This isn’t as useful on macOS, where Safari 26 could be running on macOS 14 Sonoma, macOS 15 Sequoia, or macOS 26 Tahoe. But on iOS, Safari 26 only runs on iOS 26, so it’s a useful proxy for identifying the operating system version.

Month       iOS 18 Safari pageviews in 2024    iOS 26 Safari pageviews in 2025
October     24.9%                              22.1%
November    35.1%                              26.3%
December    75.9%                              45.3%

For these stats, we’ve grouped together all devices claiming to run Safari 26 on an iPhone, regardless of whether the underlying iOS version is listed as 18.x or 26.x (some apps or third-party browsers using Apple’s built-in WebKit engine can still identify themselves as “Safari,” though Chrome, Edge, and Mozilla Firefox at least report their own user-agent strings). We’ve compared those numbers to all devices claiming to run Safari 18 on iPhones claiming to run iOS 18. This does screen out users running third-party browsers on iPhones, but Statcounter data suggests that the ratio of Safari to Chrome users on iOS hasn’t changed much over that period.
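
As a rough illustration of that grouping, here is a minimal sketch in Python; the function name, the labels, and the example user agent string are hypothetical, and this is not the analytics pipeline Condé Nast actually runs. The point it demonstrates is that the Safari major version, not the frozen iOS version field, is what separates iOS 26 traffic from iOS 18 traffic.

```python
import re

SAFARI_VERSION = re.compile(r"Version/(\d+)")
# Third-party iOS browsers identify themselves with their own tokens.
THIRD_PARTY_TOKENS = ("CriOS", "FxiOS", "EdgiOS")

def classify_iphone_pageview(user_agent: str) -> str | None:
    """Group an iPhone pageview by Safari major version, or return None."""
    if "iPhone" not in user_agent:
        return None
    if any(token in user_agent for token in THIRD_PARTY_TOKENS):
        return None  # Chrome, Firefox, and Edge report their own strings
    match = SAFARI_VERSION.search(user_agent)
    if not match:
        return None
    major = int(match.group(1))
    # On iOS, Safari 26 ships only with iOS 26, so the browser version is a
    # usable proxy even though the OS field in the string is frozen at 18.x.
    if major >= 26:
        return "Safari 26 (iOS 26 era)"
    if major == 18:
        return "Safari 18 (iOS 18 era)"
    return f"Safari {major}"

# Illustrative string modeled on Safari's usual format, not a captured UA:
# an iOS 26 device that still claims "iPhone OS 18_7" but reports Version/26.1.
ua = ("Mozilla/5.0 (iPhone; CPU iPhone OS 18_7 like Mac OS X) "
      "AppleWebKit/605.1.15 (KHTML, like Gecko) Version/26.1 "
      "Mobile/15E148 Safari/604.1")
print(classify_iphone_pageview(ua))  # -> "Safari 26 (iOS 26 era)"
```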

What’s interesting is that for October 2024 and October 2025—the first full month that iOS 18 and iOS 26 were available, respectively—adoption numbers don’t look all that different. About 25 percent of iPhone pageviews across all Condé Nast sites were served to devices running Safari on iOS 18, compared to 22 percent for iOS 26 the following year. That is a step down, but it suggests that early adopters weren’t repelled en masse by Liquid Glass or anything else about the operating system.

But the gap widens over the next two months, which does suggest that “normal” users aren’t in a rush to get the update. By December 2024, our data shows that 76 percent of iPhone Safari pageviews were going to iOS 18 devices, compared to just 45 percent for iOS 26 in December 2025.

Adoption of new iOS versions does plateau after a while. Adoption of iOS 18 hit 80 percent in January 2025, according to our data, and then rose more slowly afterward, peaking at around 91 percent in August 2025. Those stats are in the same ballpark as both Statcounter data (78 percent as of August 2025) and the last stats Apple has published (82 percent of all iPhones as of June 2025) for iOS 18. (We’ve asked the company if it has any updated internal stats to share and will update the article if we receive a response.)

We’ll see where iOS 26 eventually settles. If I’m Apple, I’m a bit less worried about slower adoption as long as iOS 26 eventually hits that same 80 to 90 percent range. But if usage settles significantly below that historical watermark, it could signal a more lasting negative response to the iOS 26 update that needs to be addressed in future versions.

Why it’s time to take the plunge, even if you don’t like Liquid Glass

Apple’s most recent security updates for iOS 18 are only available for phones that can’t run iOS 26 at all, like the iPhone XR. That means it’s probably time to install iOS 26 even if you don’t like Liquid Glass. Credit: Samuel Axon

However you feel about Liquid Glass, we’re getting to the point that upgrading is going to become necessary for people who want security patches and functional fixes for their phones.

For a short time after each new iOS version is released, Apple continues to provide security patches for the previous version of iOS, for people who would rather wait for early bugs in the new OS to be patched. The company started this practice in 2021, when it provided security patches for iOS 14 for a couple of months after the release of iOS 15. But those patches don’t last forever, and eventually devices that can upgrade to the new operating system will need to do it to stay patched.

Apple never formally announces when these security updates have stopped, but you can tell by looking at the company’s security updates page. The iOS 18.7, 18.7.1, and 18.7.2 updates all apply to the “iPhone XS and later.” But the iOS 18.7.3 update released on December 12, 2025, only applies to the iPhone XS, iPhone XS Max, and iPhone XR. It’s a subtle difference, but it means that Apple is only continuing to patch iOS 18 on devices that can’t run iOS 26.

This is standard practice for iPhones and iPads, but it differs from the update model Apple uses for macOS—any Mac can continue to download and install security updates for macOS 14 Sonoma and macOS 15 Sequoia, regardless of whether they’re eligible for the macOS 26 Tahoe upgrade.

If you skipped the early versions of iOS 26 and iPadOS 26 because of Liquid Glass, the good news is that Apple provided options to allow users to tone down the effect. The iOS 26.1 update added a “tinted” option for Liquid Glass, increasing the interface’s contrast and opacity to help with the legibility issues you’ll occasionally run into with the default settings. The company also added opacity controls for the lock screen clock in iOS 26.2. Personally, I also found it helpful to switch the Tabs view in the Safari settings from “Compact” to “Bottom” to make the browser look and act more like it did in its iOS 18-era iteration.

Those settings may feel like half-measures to hardcore Liquid Glass haters who just want Apple to revert to its previous design language. But if you’ve got a modern iPhone or iPad and you want to stay up to date and secure, those toggles (plus additional controls for motion and transparency in the Accessibility settings) may at least ease the transition for you.


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.



Bully Online mod taken down abruptly one month after launch

A PC mod that added online gameplay to Rockstar’s 2006 school-exploration title Bully was abruptly taken down on Wednesday, roughly a month after it was first made available. While the specific reason for the “Bully Online” takedown hasn’t been publicly discussed, a message posted by the developers to the project’s now-defunct Discord server clarifies that “this was not something we wanted.”

The Bully Online mod was spearheaded by Swegta, a Rockstar-focused YouTuber who formally announced the project in October as a mod that “allows you and your friends to play minigames, role-play, compete in racing, fend off against NPCs, and much more.”

At the time of the announcement, Swegta said the mod was “a project me and my team have been working on for a very long time” and that early access in December would be limited to those who contributed at least $8 to a Ko-Fi account. When December actually rolled around, though, a message on Swegta.com (archived) suggested that the mod was being released freely as an open source project, with a registration page (archived) offering new accounts to anyone.

That source code has now been completely removed from Swegta.com, along with any webpages referencing the project or offering downloads for the mod’s custom launcher. On Discord, the team said that development of any Bully Online scripts would stop and that any account data created by users would be deleted.
