

Samsung’s Android 15 update has been halted

When asked about specifics, Samsung doesn’t have much to say. “The One UI 7 rollout schedule is being updated to ensure the best possible experience. The new timing and availability will be shared shortly,” the company said.

Samsung’s flagship foldables, the Z Flip 6 and Z Fold 6, are among the phones waiting on the One UI 7 update. Credit: Ryan Whitwam

One UI 7 is based on Android 15, which is the latest version of the OS for the moment. Google plans to release the first version of Android 16 in June, which is much earlier than in previous cycles. Samsung’s current-gen Galaxy S25 family launched with One UI 7, so owners of those devices don’t need to worry about the buggy update.

Samsung is no doubt working to fix the issues and restart the update rollout. Its statement is vague about timing—“shortly” can mean many things. We’ve reached out and will report if Samsung offers any more details on the pause or when it will be over.

When One UI 7 finally arrives on everyone’s phones, the experience will be similar to what you get on the Galaxy S25 lineup. There are a handful of base Android features in the update, but it’s mostly a Samsung affair. There’s the new AI-infused Now Bar, more expansive AI writing tools, camera UI customization, and plenty of interface tweaks.



Report: Apple will take another crack at iPad multitasking in iPadOS 19

Apple is taking another crack at iPad multitasking, according to a report from Bloomberg’s Mark Gurman. This year’s iPadOS 19 release, due to be unveiled at Apple’s Worldwide Developers Conference on June 9, will apparently include an “overhaul that will make the tablet’s software more like macOS.”

The report is light on details about what’s actually changing, aside from a broad “focus on productivity, multitasking, and app window management.” But Apple will apparently continue to stop short of allowing users of newer iPads to run macOS on their tablets, despite the fact that modern iPad Airs and Pros use the same processors as Macs.

If this is giving you déjà vu, you’re probably thinking about iPadOS 16, the last time Apple tried making significant upgrades to the iPad’s multitasking model. Gurman’s reporting at the time even used similar language, saying that iPads running the new software would work “more like a laptop and less like a phone.”

The result of those efforts was Stage Manager. It had steep hardware requirements and launched in pretty rough shape, even though Apple delayed the release of the update by a month to keep polishing it. Stage Manager did allow for more flexible multitasking, and on newer models, it enabled true multi-monitor support for the first time. But early versions were buggy and frustrating in ways that still haven’t fully been addressed by subsequent updates. (MacStories’ Federico Viticci keeps the Internet’s most comprehensive record of the issues with the software.)



A history of the Internet, part 1: An ARPA dream takes form


Intergalactic Computer Network

In our new 3-part series, we remember the people and ideas that made the Internet.

A collage of vintage computer elements. Credit: Collage by Aurich Lawson

In a very real sense, the Internet, this marvelous worldwide digital communications network that you’re using right now, was created because one man was annoyed at having too many computer terminals in his office.

The year was 1966. Robert Taylor was the director of the Advanced Research Projects Agency’s Information Processing Techniques Office. The agency, created in 1958 by President Eisenhower in response to the launch of Sputnik, was housed in the Pentagon, a great place for acronyms like ARPA and IPTO. Taylor had three massive terminals crammed into a room next to his office. Each one was connected to a different mainframe computer. They all worked slightly differently, and it was frustrating to remember multiple procedures to log in and retrieve information.

Author’s re-creation of Bob Taylor’s office with three teletypes. Credit: Rama & Musée Bolo (Wikipedia/Creative Commons), steve lodefink (Wikipedia/Creative Commons), The Computer Museum @ System Source

In those days, computers took up entire rooms, and users accessed them through teletype terminals—electric typewriters hooked up to either a serial cable or a modem and a phone line. ARPA was funding multiple research projects across the United States, but users of these different systems had no way to share their resources with each other. Wouldn’t it be great if there was a network that connected all these computers?

The dream is given form

Taylor’s predecessor, Joseph “J.C.R.” Licklider, had released a memo in 1963 that whimsically described an “Intergalactic Computer Network” that would allow users of different computers to collaborate and share information. The idea was mostly aspirational, and Licklider wasn’t able to turn it into a real project. But Taylor knew that he could.

In a 1998 interview, Taylor explained: “In most government funding, there are committees that decide who gets what and who does what. In ARPA, that was not the way it worked. The person who was responsible for the office that was concerned with that particular technology—in my case, computer technology—was the person who made the decision about what to fund and what to do and what not to do. The decision to start the ARPANET was mine, with very little or no red tape.”

Taylor marched into the office of his boss, Charles Herzfeld. He described how a network could save ARPA time and money by allowing different institutions to share resources. He suggested starting with a small network of four computers as a proof of concept.

“Is it going to be hard to do?” Herzfeld asked.

“Oh no. We already know how to do it,” Taylor replied.

“Great idea,” Herzfeld said. “Get it going. You’ve got a million dollars more in your budget right now. Go.”

Taylor wasn’t lying—at least, not completely. At the time, there were multiple people around the world thinking about computer networking. Paul Baran, working for RAND, published a paper in 1964 describing how a distributed military networking system could be made resilient even if some nodes were destroyed in a nuclear attack. Over in the UK, Donald Davies independently came up with a similar concept (minus the nukes) and invented a term for the way these types of networks would communicate. He called it “packet switching.”

On a regular phone network, after some circuit switching, a caller and answerer would be connected via a dedicated wire. They had exclusive use of that wire until the call was completed. Computers communicated in short bursts and didn’t require pauses the way humans did. So it would be a waste for two computers to tie up a whole line for extended periods. But how could many computers talk at the same time without their messages getting mixed up?

Packet switching was the answer. Messages were divided into multiple snippets. The order and destination were included with each message packet. The network could then route the packets in any way that made sense. At the destination, all the appropriate packets were put into the correct order and reassembled. It was like moving a house across the country: It was more efficient to send all the parts in separate trucks, each taking their own route to avoid congestion.

A simplified diagram of how packet switching works. Credit: Jeremy Reimer
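To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python (the field names and the in-memory “network” are inventions for this example, not any historical packet format): a message is chopped into tagged packets, the packets arrive in arbitrary order, and the destination reorders and reassembles them.

```python
import random

MTU = 4  # payload bytes per packet, kept tiny so the example makes several packets

def packetize(message: bytes, dest: str) -> list[dict]:
    """Split a message into packets tagged with sequence number and destination."""
    chunks = [message[i:i + MTU] for i in range(0, len(message), MTU)]
    return [{"dest": dest, "seq": n, "total": len(chunks), "payload": chunk}
            for n, chunk in enumerate(chunks)]

def network_deliver(packets: list[dict]) -> list[dict]:
    """Stand-in for the network: each packet may take its own route,
    so arrival order is not guaranteed."""
    arrived = list(packets)
    random.shuffle(arrived)
    return arrived

def reassemble(packets: list[dict]) -> bytes:
    """At the destination, sort by sequence number and rebuild the message."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    if len(ordered) != ordered[0]["total"]:
        raise ValueError("packet loss: message incomplete")
    return b"".join(p["payload"] for p in ordered)

sent = packetize(b"an intergalactic hello", dest="ucla")
print(reassemble(network_deliver(sent)))  # b'an intergalactic hello'
```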

By the end of 1966, Taylor had hired a program director, Larry Roberts. Roberts sketched a diagram of a possible network on a napkin and met with his team to propose a design. One problem was that each computer on the network would need to use a big chunk of its resources to manage the packets. In a meeting, Wes Clark passed a note to Roberts saying, “You have the network inside-out.” Clark’s alternative plan was to ship a bunch of smaller computers to connect to each host. These dedicated machines would do all the hard work of creating, moving, and reassembling packets.

With the design complete, Roberts sent out a request for proposals for constructing the ARPANET. All they had to do now was pick the winning bid, and the project could begin.

BB&N and the IMPs

IBM, Control Data Corporation, and AT&T were among the first to respond to the request. They all turned it down. Their reasons were the same: None of these giant companies believed the network could be built. IBM and CDC thought the dedicated computers would be too expensive, but AT&T flat-out said that packet switching wouldn’t work on its phone network.

In late 1968, ARPA announced a winner for the bid: Bolt Beranek and Newman. It seemed like an odd choice. BB&N had started as a consulting firm that calculated acoustics for theaters. But the need for calculations led to the creation of a computing division, and its first manager had been none other than J.C.R. Licklider. In fact, some BB&N employees had been working on a plan to build a network even before the ARPA bid was sent out. Robert Kahn led the team that drafted BB&N’s proposal.

Their plan was to create a network of “Interface Message Processors,” or IMPs, out of Honeywell 516 computers. They were ruggedized versions of the DDP-516 16-bit minicomputer. Each had 24 kilobytes of core memory and no mass storage other than a paper tape reader, and each cost $80,000 (about $700,000 today). In comparison, an IBM 360 mainframe cost between $7 million and $12 million at the time.

An original IMP, the world’s first router. It was the size of a large refrigerator. Credit: Steve Jurvetson (CC BY 2.0)

The 516’s rugged appearance appealed to BB&N, who didn’t want a bunch of university students tampering with its IMPs. The computer came with no operating system; it didn’t really have enough RAM for one anyway. The software to control the IMPs was written on bare metal using the 516’s assembly language. One of the developers was Will Crowther, who went on to create the first computer adventure game.

One other hurdle remained before the IMPs could be put to use: The Honeywell design was missing certain components needed to handle input and output. BB&N employees were dismayed that the first 516, which they named IMP-0, didn’t have working versions of the hardware additions they had requested.

It fell on Ben Barker, a brilliant undergrad student interning at BB&N, to manually fix the machine. Barker was the best choice, even though he had slight palsy in his hands. After several stressful 16-hour days wrapping and unwrapping wires, all the changes were complete and working. IMP-0 was ready.

In the meantime, Steve Crocker at the University of California, Los Angeles, was working on a set of software specifications for the host computers. It wouldn’t matter if the IMPs were perfect at sending and receiving messages if the computers themselves didn’t know what to do with them. Because the host computers were part of important academic research, Crocker didn’t want to seem like he was a dictator telling people what to do with their machines. So he titled his draft a “Request for Comments,” or RFC.

This one act of politeness forever changed the nature of computing. Every change since has been done as an RFC, and the culture of asking for comments pervades the tech industry even today.

RFC No. 1 proposed two types of host software. The first was the simplest possible interface, in which a computer pretended to be a dumb terminal. This was dubbed a “terminal emulator,” and if you’ve ever done any administration on a server, you’ve probably used one. The second was a more complex protocol that could be used to transfer large files. This became FTP, which is still used today.
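Neither of RFC No. 1’s protocols survives in its original form, but the “dumb terminal” idea is easy to sketch with modern tools. Here is a rough, hypothetical Python version of the simplest possible interface: forward each line the user types to a remote host and print the reply. The host and port are placeholders, and real terminal emulators work character by character rather than line by line.

```python
import socket
import sys

# Host and port are placeholders; port 7 is the classic TCP "echo" service.
HOST, PORT = "example.com", 7

with socket.create_connection((HOST, PORT), timeout=10) as conn:
    for line in sys.stdin:               # each line the user types...
        conn.sendall(line.encode())      # ...is forwarded to the remote host
        reply = conn.recv(4096)          # then we print whatever comes back
        if not reply:                    # empty read: remote side hung up
            break
        sys.stdout.write(reply.decode(errors="replace"))
```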

A single IMP connected to one computer wasn’t much of a network. So it was very exciting in September 1969 when IMP-1 was delivered to BB&N and then shipped via air freight to UCLA. The first test of the ARPANET was done with simultaneous phone support. The plan was to type “LOGIN” to start a login sequence. This was the exchange:

“Did you get the L?”

“I got the L!”

“Did you get the O?”

“I got the O!”

“Did you get the G?”

“Oh no, the computer crashed!”

It was an inauspicious beginning. The computer on the other end was helpfully filling in the “GIN” part of “LOGIN,” but the terminal emulator wasn’t expecting three characters at once and locked up. It was the first time that autocomplete had ruined someone’s day. The bug was fixed, and the test completed successfully.

IMP-2, IMP-3, and IMP-4 were delivered to the Stanford Research Institute (where Doug Engelbart was keen to expand his vision of connecting people), UC Santa Barbara, and the University of Utah.

Now that the four-node test network was complete, the team at BB&N could work with the researchers at each node to put the ARPANET through its paces. They deliberately created the first ever denial of service attack in January 1970, flooding the network with packets until it screeched to a halt.

The original ARPANET, predecessor of the Internet. Circles are IMPs, and rectangles are computers. Credit: DARPA

Surprisingly, many of the administrators of the early ARPANET nodes weren’t keen to join the network. They didn’t like the idea of anyone else being able to use resources on “their” computers. Taylor reminded them that their hardware and software projects were mostly ARPA-funded, so they couldn’t opt out.

The next month, Stephen Carr, Stephen Crocker, and Vint Cerf released RFC No. 33. It described a Network Control Protocol (NCP) that standardized how the hosts would communicate with each other. After this was adopted, the network was off and running.

J.C.R. Licklider, Bob Taylor, Larry Roberts, Steve Crocker, and Vint Cerf. Credit: US National Library of Medicine, WIRED, Computer Timeline, Steve Crocker, Vint Cerf

The ARPANET grew significantly over the next few years. Important events included the first ever email between two different computers, sent by Ray Tomlinson in July 1972. Another groundbreaking demonstration involved a PDP-10 at Harvard simulating, in real time, an aircraft landing on a carrier. The data was sent over the ARPANET to an MIT-based graphics terminal, and the wireframe graphical view was shipped back to a PDP-1 at Harvard and displayed on a screen. Although it was primitive and slow, it was technically the first gaming stream.

A big moment came in October 1972 at the International Conference on Computer Communication. This was the first time the network had been demonstrated to the public. Interest in the ARPANET was growing, and people were excited. A group of AT&T executives noticed a brief crash and laughed, confident that it proved packet switching would never work. Overall, however, the demonstration was a resounding success.

But the ARPANET was no longer the only network out there.

The two keystrokes on a Model 33 Teletype that changed history. Credit: Marcin Wichary (CC BY 2.0)

A network of networks

The rest of the world had not been standing still. In Hawaii, Norman Abramson and Franklin Kuo created ALOHAnet, which connected computers on the islands using radio. It was the first public demonstration of a wireless packet switching network. In the UK, Donald Davies’ team developed the National Physical Laboratory (NPL) network. It seemed like a good idea to start connecting these networks together, but they all used different protocols, packet formats, and transmission rates. In 1972, the heads of several national networking projects created an International Networking Working Group. Cerf was chosen to lead it.

The first attempt to bridge this gap was SATNET, also known as the Atlantic Packet Satellite Network. Using satellite links, it connected the US-based ARPANET with networks in the UK. Unfortunately, SATNET itself used its own set of protocols. In true tech fashion, an attempt to make a universal standard had created one more standard instead.

Robert Kahn asked Vint Cerf to try and fix these problems once and for all. They came up with a new plan called the Transmission Control Protocol, or TCP. The idea was to connect different networks through specialized computers, called “gateways,” that translated and forwarded packets. TCP was like an envelope for packets, making sure they got to the right destination on the correct network. Because some networks were not guaranteed to be reliable, when one computer successfully received a complete and undamaged message, it would send an acknowledgement (ACK) back to the sender. If the ACK wasn’t received in a certain amount of time, the message was retransmitted.
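That acknowledge-and-retransmit loop is simple enough to sketch. Below is a minimal stop-and-wait version in Python; the bare b"ACK" reply and one-packet-at-a-time flow are simplifications for illustration, not the real TCP wire format, which keeps many packets in flight and tracks them with sequence numbers.

```python
import socket

def send_reliably(sock: socket.socket, packet: bytes,
                  timeout: float = 1.0, max_tries: int = 5) -> None:
    """Send one packet and wait for an acknowledgement, retransmitting on timeout."""
    sock.settimeout(timeout)
    for _ in range(max_tries):
        sock.sendall(packet)
        try:
            if sock.recv(16) == b"ACK":   # receiver confirmed a clean copy
                return
        except socket.timeout:
            pass                          # no ACK in time; fall through and resend
    raise ConnectionError("no acknowledgement after retries")
```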

In December 1974, Cerf, Yogen Dalal, and Carl Sunshine wrote a complete specification for TCP. Two years later, Cerf and Kahn, along with a dozen others, demonstrated the first three-network system. The demo connected packet radio, the ARPANET, and SATNET, all using TCP. Afterward, Cerf, Jon Postel, and Danny Cohen suggested a small but important change: They should take out all the routing information and put it into a new protocol, called the Internet Protocol (IP). All the remaining stuff, like breaking and reassembling messages, detecting errors, and retransmission, would stay in TCP. Thus, in 1978, the protocol officially became TCP/IP, the name it has carried ever since.

A map of the Internet in 1977. White dots are IMPs, and rectangles are host computers. Jagged lines connect to other networks. Credit: The Computer History Museum

If the story of creating the Internet was a movie, the release of TCP/IP would have been the triumphant conclusion. But things weren’t so simple. The world was changing, and the path ahead was murky at best.

At the time, joining the ARPANET required leasing high-speed phone lines for $100,000 per year. This limited it to large universities, research companies, and defense contractors. The situation led the National Science Foundation (NSF) to propose a new network that would be cheaper to operate. Other educational networks arose at around the same time. While it made sense to connect these networks to the growing Internet, there was no guarantee that this would continue. And there were other, larger forces at work.

By the end of the 1970s, computers had improved significantly. The invention of the microprocessor set the stage for smaller, cheaper computers that were just beginning to enter people’s homes. Bulky teletypes were being replaced with sleek, TV-like terminals. The first commercial online service, CompuServe, was released to the public in 1979. For just $5 per hour, you could connect to a private network, get weather and financial reports, and trade gossip with other users. At first, these systems were completely separate from the Internet. But they grew quickly. By 1987, CompuServe had 380,000 subscribers.

A magazine ad for CompuServe from 1980. Credit: marbleriver

Meanwhile, the adoption of TCP/IP was not guaranteed. At the beginning of the 1980s, the Open Systems Interconnection (OSI) group at the International Organization for Standardization (ISO) decided that what the world needed was more acronyms—and also a new, global, standardized networking model.

The OSI model was first drafted in 1980, but it wasn’t published until 1984. Nevertheless, many European governments, and even the US Department of Defense, planned to transition from TCP/IP to OSI. It seemed like this new standard was inevitable.

The seven-layer OSI model. If you ever thought there were too many layers, you’re not alone. Credit: BlueCat Networks

While the world waited for OSI, the Internet continued to grow and evolve. In 1981, the fourth version of the IP protocol, IPv4, was released. On January 1, 1983, the ARPANET itself fully transitioned to using TCP/IP. This date is sometimes referred to as the “birth of the Internet,” although from a user’s perspective, the network still functioned the same way it had for years.

A map of the Internet from 1982. Ovals are networks, and rectangles are gateways. Hosts are not shown, but number in the hundreds. Note the appearance of modern-looking IPv4 addresses. Credit: Jon Postel

In 1986, the NSFNET came online, running under TCP/IP and connected to the rest of the Internet. It also used a new standard, the Domain Name System (DNS). This system, still in use today, used easy-to-remember names to point to a machine’s individual IP address. Computer names were assigned “top-level” domains based on their purpose, so you could connect to “frodo.edu” at an educational institution, or “frodo.gov” at a governmental one.
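That name-to-address mapping is still a one-call affair on any modern system. A quick Python illustration, using a placeholder hostname since “frodo.edu” is the article’s invention:

```python
import socket

# Ask the system resolver to translate a human-friendly name into the
# IP address(es) a machine actually connects to.
for family, _, _, _, sockaddr in socket.getaddrinfo("example.edu", 80,
                                                    proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr[0])  # e.g. AF_INET 93.184.216.34
```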

The NSFNET grew rapidly, dwarfing the ARPANET in size. In 1989, the original ARPANET was decommissioned. The IMPs, long since obsolete, were retired. However, all the ARPANET hosts were successfully migrated to other Internet networks. Like a Ship of Theseus, the ARPANET lived on even after every component of it was replaced.

The exponential growth of the ARPANET/Internet during its first two decades. Credit: Jeremy Reimer

Still, the experts and pundits predicted that all of these systems would eventually have to transfer over to the OSI model. The people who had built the Internet were not impressed. In 1987, writing RFC No. 1000, Crocker said, “If we had only consulted the ancient mystics, we would have seen immediately that seven layers were required.”

The Internet pioneers felt they had spent many years refining and improving a working system. But now, OSI had arrived with a bunch of complicated standards and expected everyone to adopt their new design. Vint Cerf had a more pragmatic outlook. In 1982, he left ARPA for a new job at MCI, where he helped build the first commercial email system (MCI Mail) that was connected to the Internet. While at MCI, he contacted researchers at IBM, Digital, and Hewlett-Packard and convinced them to experiment with TCP/IP. Leadership at these companies still officially supported OSI, however.

The debate raged on through the latter half of the 1980s and into the early 1990s. Tired of the endless arguments, Cerf contacted the head of the National Institute of Standards and Technology (NIST) and asked him to write a blue ribbon report comparing OSI and TCP/IP. Meanwhile, while planning a successor to IPv4, the Internet Activities Board (IAB) was looking at the OSI Connectionless Network Protocol and its 128-bit addressing for inspiration. In an interview with Ars, Vint Cerf explained what happened next.

“It was deliberately misunderstood by firebrands in the IETF [Internet Engineering Task Force] that we are traitors by adopting OSI,” he said. “They raised a gigantic hoo-hah. The IAB was deposed, and the authority in the system flipped. IAB used to be the decision makers, but the fight flips it, and IETF becomes the standard maker.”

To calm everybody down, Cerf performed a striptease at a meeting of the IETF in 1992. He revealed a T-shirt that said “IP ON EVERYTHING.” At the same meeting, David Clark summarized the feelings of the IETF by saying, “We reject kings, presidents, and voting. We believe in rough consensus and running code.”

Vint Cerf strips down to the bare essentials. Credit: Boardwatch and Light Reading

The fate of the Internet

The split design of TCP/IP, which was a small technical choice at the time, had long-lasting political implications. In 2001, David Clark and Marjory Blumenthal wrote a paper that looked back on the Protocol War. They noted that the Internet’s complex functions were performed at the endpoints, while the network itself ran only the IP part and was concerned simply with moving data from place to place. These “end-to-end principles” formed the basis of “… the ‘Internet Philosophy’: freedom of action, user empowerment, end-user responsibility for actions undertaken, and lack of controls ‘in’ the Net that limit or regulate what users can do,” they said.

In other words, the battle between TCP/IP and OSI wasn’t just about two competing sets of acronyms. On the one hand, you had a small group of computer scientists who had spent many years building a relatively open network and wanted to see it continue under their own benevolent guidance. On the other hand, you had a huge collective of powerful organizations that believed they should be in charge of the future of the Internet—and maybe the behavior of everyone on it.

But this impossible argument and the ultimate fate of the Internet were about to be decided, and not by governments, committees, or even the IETF. The world was changed forever by the actions of one man. He was a mild-mannered computer scientist, born in England and working for a physics research institute in Switzerland.

That’s the story covered in the next article in our series.


I’m a writer and web developer. I specialize in the obscure and beautiful, like the Amiga and newLISP.



Turbulent global economy could drive up prices for Netflix and rivals


“… our members are going to be punished.”

A scene from BBC’s Doctor Who. Credit: BBC/Disney+

Debate over how much tax US-based streaming services should pay internationally, among other factors, could result in people paying more for subscriptions to services like Netflix and Disney+.

On April 10, the United Kingdom’s Culture, Media and Sport (CMS) Committee reignited calls for a streaming tax on subscription revenue earned from UK residents. The recommendation came alongside the committee’s 120-page report [PDF], which makes numerous recommendations for how to support and grow Britain’s film and high-end television (HETV) industry.

For the US, the recommendation garnering the most attention is one calling for a 5 percent levy on UK subscriber revenue from streaming video on demand services, such as Netflix. That’s because if streaming services face higher taxes in the UK, costs could be passed on to consumers, resulting in more streaming price hikes. The CMS committee wants money from the levy to support HETV production in the UK and wrote in its report:

The industry should establish this fund on a voluntary basis; however, if it does not do so within 12 months, or if there is not full compliance, the Government should introduce a statutory levy.

Calls for a streaming tax in the UK come after a 25 percent decrease in spending on UK-produced high-end TV productions in 2024 and a 27 percent decline in productions overall, per the report. Companies like the BBC have said that they lack funds to keep making premium dramas.

In a statement, the CMS committee called for streamers, “such as Netflix, Amazon, Apple TV+, and Disney+, which benefit from the creativity of British producers, to put their money where their mouth is by committing to pay 5 percent of their UK subscriber revenue into a cultural fund to help finance drama with a specific interest to British audiences.” The committee’s report argues that public service broadcasters and independent movie producers are “at risk,” due to how the industry currently works. More investment into such programming would also benefit streaming companies by providing “a healthier supply of [public service broadcaster]-made shows that they can license for their platforms,” the report says.

The Department for Culture, Media and Sport has said that it will respond to the CMS Committee’s report.

Streaming companies warn of higher prices

In response to the report, a Netflix spokesperson said in a statement shared by the BBC yesterday that the “UK is Netflix’s biggest production hub outside of North America—and we want it to stay that way.” Netflix reportedly claims to have spent billions of pounds in the UK via work with over 200 producers and 30,000 cast and crew members since 2020, per The Hollywood Reporter. In May 2024, Benjamin King, Netflix’s senior director of UK and Ireland public policy, told the CMS committee that the streaming service spends “about $1.5 billion” annually on UK-made content.

Netflix’s statement this week, responding to the CMS Committee’s levy, added:

… in an increasingly competitive global market, it’s key to create a business environment that incentivises rather than penalises investment, risk taking, and success. Levies diminish competitiveness and penalise audiences who ultimately bear the increased costs.

Adam Minns, executive director for the UK’s Association for Commercial Broadcasters and On-Demand Services (COBA), highlighted how a UK streaming tax could impact streaming providers’ content budgets.

“Especially in this economic climate, a levy risks impacting existing content budgets for UK shows, jobs, and growth, along with raising costs for businesses,” he said, per the BBC.

An anonymous source that The Hollywood Reporter described as “close to the matter” said that “Netflix members have already paid the BBC license fee. A levy would be a double tax on them and us. It’s unfair. This is a tariff on success. And our members are going to be punished.”

The anonymous source added: “Ministers have already rejected the idea of a streaming levy. The creation of a Cultural Fund raises more questions than it answers. It also begs the question: Why should audiences who choose to pay for a service be then compelled to subsidize another service for which they have already paid through the license fee. Furthermore, what determines the criteria for ‘Britishness,’ which organizations would qualify for funding … ?”

In May, Mitchel Simmons, Paramount’s VP of EMEA public policy and government affairs, also questioned the benefits of a UK streaming tax when speaking to the CMS committee.

“Where we have seen levies in other jurisdictions on services, we then see inflation in the market. Local broadcasters, particularly in places such as Italy, have found that the prices have gone up because there has been a forced increase in spend and others have suffered as a consequence,” he said at the time.

Tax threat looms large over streaming companies

Interest in the UK putting a levy on streaming services follows other countries recently pushing similar fees onto streaming providers.

Music streaming providers, like Spotify, for example, pay a 1.2 percent tax on streaming revenue made in France. Spotify blamed the tax for a 1.2 percent price hike issued in the country in May. France’s streaming taxes are supposed to go toward the Centre National de la Musique.

Last year, Canada issued a 5 percent tax on Canadian streaming revenue that’s been halted as companies including Netflix, Amazon, Apple, Disney, and Spotify battle it in court.

Lawrence Zhang, head of policy of the Centre for Canadian Innovation and Competitiveness at the Information Technology and Innovation Foundation think tank, has estimated that a 5 percent streaming tax would result in the average Canadian family paying an extra CA$40 annually.
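That estimate is easy to sanity-check if you assume the full levy is passed straight through to subscribers; a quick back-of-the-envelope in Python (the pass-through assumption is mine, not the think tank’s):

```python
levy_rate = 0.05            # proposed 5 percent tax
extra_per_family = 40       # CA$ per year, from the ITIF estimate
implied_spend = extra_per_family / levy_rate
print(f"implies ~CA${implied_spend:.0f}/year spent on streaming")  # ~CA$800
```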

A streaming provider group called the Digital Media Association has argued that the Canadian tax “could lead to higher prices for Canadians and fewer content choices.”

“As a result, you may end up paying more for your favourite streaming services and have less control over what you can watch or listen to,” the Digital Media Association’s website says.

Streaming companies hold their breath

Uncertainty around US tariffs and their implications for the global economy has also resulted in streaming companies moving slower than expected regarding new entrants, technologies, mergers and acquisitions, and even business failures, Alan Wolk, co-founder and lead analyst at TVRev, pointed out today. “The rapid-fire nature of the executive orders coming from the White House” has a massive impact on the media industry, he said.

“Uncertainty means that deals don’t get considered, let alone completed,” Wolk mused, noting that the growing stability of the streaming industry overall also contributes to slowing market activity.

For consumers, higher prices for other goods and/or services could result in smaller budgets for spending on streaming subscriptions. Establishing and growing advertising businesses is already a priority for many US streaming providers. However, the realities of stingier customers who are less willing to buy multiple streaming subscriptions or opt for premium tiers or buy on-demand titles are poised to put more pressure on streaming firms’ advertising plans. Simultaneously, advertisers are facing pressures from tariffs, which could result in less money being allocated to streaming ads.

“With streaming platform operators increasingly turning to ad-supported tiers to bolster profitability—rather than just rolling out price increases—this strategy could be put at risk,” Matthew Bailey, senior principal analyst of advertising at Omdia, recently told Wired. He added:

Against this backdrop, I wouldn’t be surprised if we do see some price increases for some streaming services over the coming months.

Streaming service providers are likely to tighten their purse strings, too. As we’ve seen, this can result in price hikes and smaller or less daring content selection.   

Streaming customers may soon be forced to reduce their subscriptions. The good news is that most streaming viewers are already accustomed to growing prices and have figured out which streaming services align with their needs around affordability, ease of use, content, and reliability. Customers may set higher standards, though, as streaming companies grapple with the industry and global changes.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.



Chrome’s new dynamic bottom bar gives websites a little more room to breathe

The Internet might look a bit different on Android soon. Last month, Google announced its intent to make Chrome for Android a more immersive experience by hiding the navigation bar background. The promised edge-to-edge update is now rolling out to devices on Chrome version 135, giving you a touch more screen real estate. However, some websites may also be a bit harder to use.

Moving from button to gesture navigation reduced the amount of screen real estate devoted to the system UI, which leaves more room for apps. Google’s move to a “dynamic bottom bar” in Chrome creates even more space for web content. When this feature shows up, the pages you visit will be able to draw all the way to the bottom of the screen instead of stopping at the navigation area, which Google calls the “chin.”

Chrome edge-to-edge. Credit: Google

As you scroll down a page, Chrome hides the address bar. With the addition of the dynamic bottom bar, the chin also vanishes. The gesture handle itself remains visible, shifting between white and black based on what is immediately behind it to maintain visibility. Unfortunately, this feature will not work if you have chosen to stick with the classic three-button navigation option.



Powerful programming: BBC-controlled electric meters are coming to an end

Two rare tungsten-centered, hand-crafted cooled anode modulators (CAM) are needed to keep the signal going, and while the BBC bought up the global supply of them, they are running out. The service is seemingly on its last two valves and has been telling the public about Long Wave radio’s end for nearly 15 years. Trying to remanufacture the valves is hazardous, as any flaws could cause a catastrophic failure in the transmitters.

BBC Radio 4’s 198 kHz transmitting towers at Droitwich. Credit: Bob Nienhuis (Public domain)

Rebuilding the transmitter, or moving to different, higher frequencies, is not feasible for the very few homes that cannot get other kinds of lower-power radio, or internet versions, the BBC told The Guardian in 2011. What’s more, keeping Droitwich powered such that it can reach the whole of the UK, including Wales and lower Scotland, requires some 500 kilowatts of power, more than most other BBC transmission types.

As of January 2025, roughly 600,000 UK customers still use RTS meters to manage their power switching, after 300,000 were switched away in 2024. Utilities and the BBC have agreed that the service will stop working on June 30, 2025, and have pushed to upgrade RTS customers to smart meters.

In a combination of sad reality and rich irony, more than 4 million smart meters in the UK are not working properly. Some have delivered eye-popping charges to their customers, based on estimated bills instead of real readings, like Sir Grayson Perry‘s 39,000 pounds due on 15 simultaneous bills. But many have failed because the UK, like other countries, phased out the 2G and 3G networks older meters relied upon without coordinated transition efforts.



OnePlus releases Watch 3 with inflated $500 price tag, won’t say why

Watch 3 pricing. Credit: OnePlus

The tariff fees are typically paid on a product’s declared value rather than the retail cost. So a $170 price bump could be close to what the company’s US arm will pay to import the Watch 3 in the midst of a trade war. Many technology firms have attempted to stockpile products in the US ahead of tariffs, but it’s possible OnePlus simply couldn’t do that because it had to fix its typo.

Losing its greatest advantage?

Like past OnePlus wearables, the Watch 3 is a chunky, high-power device with a stainless steel case. It sports a massive 1.5-inch OLED screen, the latest Snapdragon W5 wearable processor, 32GB of storage, and 2GB of RAM. It runs Google’s Wear OS for smart features, but it also has a dialed-back power-saving mode that runs separate RTOS software. This robust hardware adds to the manufacturing cost, which also means higher tariffs now. As it currently stands, the Watch 3 is just too expensive given the competition.

OnePlus has managed to piece together a growing ecosystem of devices, including phones, tablets, earbuds, and, yes, smartwatches. With a combination of competitive prices and high-end specs, it successfully established a foothold in the US market, something few Chinese OEMs have accomplished.

The implications go beyond wearables. OnePlus also swings for the fences with its phone hardware, using the best Arm chips and expensive, high-end OLED panels. OnePlus tends to price its phones lower than similar Samsung and Google hardware, so it doesn’t make as much on each phone. If the tariffs stick, that strategy could be unviable.



Google takes advantage of federal cost-cutting with steep Workspace discount

Google has long been on the lookout for ways to break Microsoft’s stranglehold on US government office software, and the current drive to cut costs may be it. Google and the federal government have announced an agreement that makes Google Workspace available to all agencies at a significant discount, trimming 71 percent from the service’s subscription price tag.

Since Donald Trump returned to the White House, the government has engaged in a campaign of unbridled staffing reductions and program cancellations, all with the alleged aim of reducing federal spending. It would appear Google recognized this opportunity, negotiating with the General Services Administration (GSA) to offer Workspace at a lower price. Google claims the deal could yield up to $2 billion in savings.

Google has previously offered discounts for federal agencies interested in migrating to Workspace, but it saw little success displacing Microsoft. The Windows maker has enjoyed decades as an entrenched tech giant, and its 365 productivity tools have proliferated throughout the government. While Google has gotten some agencies on board, Microsoft has traditionally won the lion’s share of contracts, including the $8 billion Defense Enterprise Office Solutions contract that pushed Microsoft 365 to all corners of the Pentagon beginning in 2020.



Hands-on: Handwriting recognition app brings sticky notes into the 21st century


Rocketbook Reusable Sticky Notes are an excessive solution for too many sticky notes.

For quick reminders and can’t-miss memos, sticky notes are effective tools, and I’d argue that the simplicity of the sticky note is its best attribute. But the ease of posting sticky notes also means that it’s easy for people to find their desks covered in the things, making it difficult to glean critical information quickly.

Rocketbook, a Boston-based company that also makes reusable notebooks, thinks it has a solution for sticky note overload in the form of an app that interprets handwriting and organizes reusable sticky notes. But not everyone has the need—or time—for a dedicated sticky notes app.

Rocketbook’s Reusable Sticky Notes

Like Rocketbook’s flagship notebooks, its Reusable Sticky Notes rely on erasable pens that allow you to use the paper repeatedly. The Reusable Sticky Notes work with the Rocketbook app (available for iOS or Android), which transforms the sticky notes into images that are automatically stored in the app and can be emailed to specified people (as a PDF) or shared with third-party apps.

The $30 starter kit I used comes with weeks’, if not months’, worth of materials: That includes 15 3×3-inch reusable sticky notes, a case for said notes, a small microfiber towel for wiping the text off of the sticky notes, and a pen from Pilot’s FriXion line of erasable pens, markers, and highlighters. Rocketbook claims that any FriXion writing utensil will write and erase on its sticky notes. I only tried the pen included in the starter kit, a FriXion Ball gel pen with a 0.7 mm tip. Using the built-in eraser, I could usually remove enough ink from the notes so that only a faint imprint of what I wrote remained. For total clarity, I’d need to whip out the included microfiber cloth and some water. The notes seemed able to withstand water well and without getting flimsy.

The Pilot FriXion pen. The gray tip on the right side of the open pen is the eraser. Credit: Scharon Harding

Rocketbook claims that the adhesive on its sticky notes is so strong that they can be stuck and re-stuck hundreds of times. I didn’t get to put that to the test but can confirm that the notes’ adhesive area is thicker than that of a normal sticky note. The paper is thicker and smoother than a normal sticky note, too, while still being lightweight and comfortable enough to write on.

A picture of the back of an unused Reusable Sticky Note (left) and the back of a used one with the adhesive covering removed (right). Credit: Scharon Harding

Sticky note software

The Reusable Sticky Notes are among the most technologically advanced scraps of paper you can find. In my experience, the technology, including the optical character recognition, worked reliably.

For example, scanning a sticky note was seamless. The camera in the iOS app quickly identified any sticky notes in the shot and snapped an image (or images) without me having to do much aligning or pressing more buttons.

Afterward, it was easy to share the image. I could send it to frequently used email addresses I saved in the app or pass it to other apps and services, like AirDrop, Google Drive, ToDoist, or a search engine. The app can read the text in sticky note images, but it shares them as images rather than converting them to text. So, while Google could interpret an image of a sticky note as text via Google Lens, for example, ToDoist only saw a JPEG.

The app uses optical character recognition to convert handwriting into machine-readable text. This enables you to use the app to search uploaded sticky notes for specific words or phrases. I initially feared that the app wouldn’t be able to read my cursive, but even when I scribbled quickly and deviated from writing in a straight line, the app understood my writing. Don’t expect it to pick up chicken scratch, though. My handwriting didn’t need to be perfect for the app to understand it, but the app couldn’t comprehend my sloppiest notes—the type that only I could read, or ones that are common when someone is quickly jotting something on a sticky note.
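Rocketbook hasn’t published its recognition pipeline, but the general OCR-then-search flow is easy to sketch. Here is a hypothetical Python version using the open source Tesseract engine (file names are invented, and Tesseract handles print far better than the handwriting-specific models a product like this presumably uses):

```python
from PIL import Image       # pip install pillow pytesseract
import pytesseract          # also requires the Tesseract engine to be installed

# Hypothetical scanned-note files; recognize each, then search the text.
paths = ["note1.jpg", "note2.jpg"]
recognized = {p: pytesseract.image_to_string(Image.open(p)) for p in paths}

query = "dentist"
matches = [p for p, text in recognized.items() if query.lower() in text.lower()]
print(matches)  # notes whose recognized text mentions the query
```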

Further, I didn’t always notice which notes I wrote neatly enough for the app to read. That made it confusing when I searched for terms that I knew I wrote on scanned notes but that were scrawled, per the app, illegibly.

A screenshot of the Rocketbook app. Credit: Scharon Harding/Rocketbook

Perhaps most useful for sticky note aficionados is the app’s ability to quickly group sticky notes. Sure, you could put sticky notes with to-do list items on the left side of your computer monitor and place notes with appointments to remember on the right side of your monitor. However, the app offers superior organization by letting you add tags to each scanned note. Then, it’s easy to look at all notes with the same tag on one page. But because each scanned note shown on a tag page is shown as a thumbnail, you can’t read everything written on all notes with the same tag simultaneously. That’s a con for people who prefer seeing all relevant notes and their contents at once.

There are additional ways that the Rocketbook app can help bring order to workstations containing so many posted sticky notes that they look like evidence boards. Typically, I denote titles on sticky notes by trying to write the title larger than the rest of the text and then underlining it. In the Rocketbook app, you can manually add titles to each sticky note. Alternatively, if you physically write “##” before and after the title on the actual Sticky Note, the app will automatically read the words in between the pound signs as a title and name the image as such. This is a neat trick, but I also found it distracting to have four pound signs written on my notes.

Another Reusable Sticky Notes feature lets you turn scanned notes into to-do lists that are accessible via the companion app. If you write a list on a note using square boxes at the start of each line, the app will read it as a “Smart List.” Once scanned, the app converts this into a to-do list with boxes that you can check off as you complete tasks. This is easier than trying to check off items on a sticky note that’s, for example, dangling on your computer screen. But it’s not always possible to fit every to-do list item on one line. And numerous times, the app failed to read my Smart List properly, as you can see in the gallery below. This could be due to my handwriting being unclear or misaligned. But as someone merely trying to write down a to-do list quickly, I lack the time or patience for thorough troubleshooting.

Organizing your organizational tools

Sticky notes can help you stay on schedule, but it’s easy to accumulate so many that the memos become a distracting crutch rather than handy organizational tools. For people who live by sticky notes, Rocketbook’s solution is excellent for grouping related tasks, appointments, and reminders and preventing things from getting overlooked.

However, leveraging Reusable Sticky Notes to their maximum potential requires scanning notes into the app. This doesn’t take long, but it is an extra step that detracts from the instant gratification of writing something down on a note and slapping it somewhere visible. For people who just like to write it down and post it, the Rocketbook app can feel cumbersome and unnecessary. The problems I had using Smart Lists hindered the product’s helpfulness, simplicity, and productivity as well.

Rocketbook’s sticky notes are also more beneficial to people who are more likely to look at an app on their phone than a bunch of papers surrounding them. There’s also a distinct advantage to being able to read your notes via an app when you’re not near the physical pieces of paper. Going further, it would be helpful if the app leveraged the phone it runs on more fully by, for example, setting alarms that correspond with scanned notes.

Much like with their app-free counterparts, for me, the best part of Rocketbook’s Reusable Sticky Notes lies in their simpler features. The ability to easily reuse notes is more helpful than the ability to catalogue and archive memos. And while the handwriting recognition was mostly impressive, it seems more advantageous in something like a reusable notebook than a sticky memo.

But if you find yourself drowning in crumpled, flailing pieces of sticky paper, Rocketbook offers an option for organizing your organizational tools.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.



Google announces faster, more efficient Gemini AI model

We recently spoke with Google’s Tulsee Doshi, who noted that the 2.5 Pro (Experimental) release was still prone to “overthinking” its responses to simple queries. However, the plan was to further improve dynamic thinking for the final release, and the team also hoped to give developers more control over the feature. That appears to be happening with Gemini 2.5 Flash, which includes “dynamic and controllable reasoning.”

The newest Gemini models will choose a “thinking budget” based on the complexity of the prompt. This helps reduce wait times and processing for 2.5 Flash. Developers even get granular control over the budget to lower costs and speed things along where appropriate. Gemini 2.5 models are also getting supervised tuning and context caching for Vertex AI in the coming weeks.
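For developers, that control surfaces as an API parameter. A minimal sketch assuming the google-genai Python SDK; the model string reflects the preview release, the API key is a placeholder, and exact field names may change:

```python
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")          # placeholder key
response = client.models.generate_content(
    model="gemini-2.5-flash-preview-04-17",            # preview name at launch
    contents="Summarize packet switching in two sentences.",
    config=types.GenerateContentConfig(
        # Cap the tokens spent on "thinking"; lower budgets cut cost and latency.
        thinking_config=types.ThinkingConfig(thinking_budget=256)
    ),
)
print(response.text)
```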

In addition to the arrival of Gemini 2.5 Flash, the larger Pro model has picked up a new gig. Google’s largest Gemini model is now powering its Deep Research tool, which was previously running Gemini 2.0 Pro. Deep Research lets you explore a topic in greater detail simply by entering a prompt. The agent then goes out into the Internet to collect data and synthesize a lengthy report.

Gemini vs. ChatGPT chart. Credit: Google

Google says that the move to Gemini 2.5 has boosted the accuracy and usefulness of Deep Research. The graphic above shows Google’s alleged advantage compared to OpenAI’s deep research tool. These stats are based on user evaluations (not synthetic benchmarks) and show a greater than 2-to-1 preference for Gemini 2.5 Pro reports.

Deep Research is available for limited use on non-paid accounts, but you won’t get the latest model. Deep Research with 2.5 Pro is currently limited to Gemini Advanced subscribers. However, we expect before long that all models in the Gemini app will move to the 2.5 branch. With dynamic reasoning and new TPUs, Google could begin lowering the sky-high costs that have thus far made generative AI unprofitable.



Windows 11’s Copilot Vision wants to help you learn to use complicated apps

Some elements of Microsoft’s Copilot assistant in Windows 11 have felt like a solution in search of a problem—and it hasn’t helped that Microsoft has frequently changed Copilot’s capabilities, turning it from a native Windows app into a web app and back again.

But I find myself intrigued by a new addition to Copilot Vision that Microsoft began rolling out this week to testers in its Windows Insider program. Copilot Vision launched late last year as a feature that could look at pages in the Microsoft Edge browser and answer questions based on those pages’ contents. The new Vision update extends that capability to any app window, allowing you to ask Copilot not just about the contents of a document but also about the user interface of the app itself.

Microsoft’s Copilot Vision update can see the contents of any app window you share with it. Credit: Microsoft

Provided the app works as intended—not a given for any software, but especially for AI features—Copilot Vision could replace “frantic Googling” as a way to learn how to use a new app or how to do something new or obscure in complex PC apps like Word, Excel, or Photoshop. I recently switched from Photoshop to Affinity Photo, for example, and I’m still finding myself tripped up by small differences in workflows and UI between the two apps. Copilot Vision could, in theory, ease that sort of transition.



Japanese railway shelter replaced in less than 6 hours by 3D-printed model

Hatsushima is not a particularly busy station, relative to Japanese rail commuting as a whole. It serves a town (Arida) of about 25,000, known for mandarin oranges and scabbardfish, that is shrinking in population, like most of Japan. The station sees between one and three trains per hour, helping about 530 riders find their way. Its wooden building was due for replacement, and the replacement could be smaller.

The replacement, it turned out, could also be a trial for industrial-scale 3D-printing of custom rail shelters. Serendix, a construction firm that previously 3D-printed 538-square-foot homes for about $38,000, built a shelter for Hatsushima in about seven days, as shown at The New York Times. The fabricated shelter was shipped in four parts by rail, then pieced together in a span that the site Futurism says is “just under three hours,” but which the Times, seemingly present at the scene, pegs at six. It was in place by the first train’s arrival at 5:45 am.

Either number of hours is a marked decrease from the days or weeks you might expect for a new rail station to be constructed. In one overnight, teams assembled a shelter that is 2.6 meters (8.5 feet) tall and 10 square meters (about 108 square feet) in area. It’s not actually in use yet, as it needs ticket machines and finishing, but is expected to operate by July, according to the Japan Times.
