Author name: Kris Guyer

ULA hasn’t given up on developing a long-lived cryogenic space tug


With Friday’s launch, United Launch Alliance will test the limits of its Centaur upper stage.

United Launch Alliance’s second Vulcan rocket underwent a countdown dress rehearsal Tuesday. Credit: United Launch Alliance

The second flight of United Launch Alliance’s Vulcan rocket, planned for Friday morning, has a primary goal of validating the launcher’s reliability for delivering critical US military satellites to orbit.

Tory Bruno, ULA’s chief executive, told reporters Wednesday that he is “supremely confident” the Vulcan rocket will succeed in accomplishing that objective. The Vulcan’s second test flight, known as Cert-2, follows a near-flawless debut launch of ULA’s new rocket on January 8.

“As I come up on Cert-2, I’m pretty darn confident I’m going to have a good day on Friday, knock on wood,” Bruno said. “These are very powerful, complicated machines.”

The Vulcan launcher, a replacement for ULA’s Atlas V and Delta IV rockets, is on contract to haul the majority of the US military’s most expensive national security satellites into orbit over the next several years. The Space Force is eager to certify Vulcan to launch these payloads, but military officials want to see two successful test flights before committing one of its satellites to flying on the new rocket.

If Friday’s test flight goes well, ULA is on track to launch at least one—and perhaps two—operational missions for the Space Force by the end of this year. The Space Force has already booked 25 launches on ULA’s Vulcan rocket for military payloads and spy satellites for the National Reconnaissance Office. Including the launch Friday, ULA has 70 Vulcan rockets in its backlog, mostly for the Space Force, the NRO, and Amazon’s Kuiper satellite broadband network.

The Vulcan rocket is powered by two methane-fueled BE-4 engines produced by Jeff Bezos’ space company Blue Origin, and ULA can mount zero, two, four, or six strap-on solid rocket boosters from Northrop Grumman around the Vulcan’s first stage to propel heavier payloads to space. The rocket’s Centaur V upper stage is fitted with a pair of hydrogen-burning RL10 engines from Aerojet Rocketdyne.

The second Vulcan rocket will fly in the same configuration as the first launch earlier this year, with two strap-on solid-fueled boosters. The only noticeable modification to the rocket is the addition of some spray-on foam insulation around the outside of the first stage methane tank, which will keep the cryogenic fuel at the proper temperature as Vulcan encounters aerodynamic heating on its ascent through the atmosphere.

“This will give us just over one second more usable propellant,” Bruno wrote on X.

There is one more change from Vulcan’s first launch, which boosted a commercial lunar lander for Astrobotic on a trajectory toward the Moon. This time, there are no real spacecraft on the Vulcan rocket. Instead, ULA mounted a dummy payload to the Centaur V upper stage to simulate the mass of a functioning satellite.

ULA originally planned to launch Sierra Space’s first Dream Chaser spaceplane on the second Vulcan rocket. But the Dream Chaser won’t be ready to fly its first mission to resupply the International Space Station until next year. Under pressure from the Pentagon, ULA decided to move ahead with the second Vulcan launch without a customer payload at the company’s own expense, which Bruno tallied in the “high tens of millions of dollars.”

Heliocentricity

The test flight will begin with liftoff from Cape Canaveral Space Force Station, Florida, during a three-hour launch window opening at 6 am EDT (10:00 UTC). The 202-foot-tall (61.6-meter) Vulcan rocket will head east over the Atlantic Ocean, shedding its boosters, first stage, and payload fairing in the first few minutes of flight.

The Centaur upper stage will fire its RL10 engines two times, completing the primary mission within about 35 minutes of launch. The rocket will then continue on for a series of technical demonstrations before ending up on an Earth escape trajectory into a heliocentric orbit.

“We have a number of experiments that we’re conducting that are really technology demonstrations and measurements that are associated with our high-performance, longer-duration version of Centaur V that we’ll be introducing in the future,” Bruno said. “And these will help us go a little bit faster on that development. And, of course, because we don’t have an active spacecraft as a payload, we also have more instrumentation that we’re able to use for just characterizing the vehicle.”

The Centaur V upper stage for the Vulcan rocket. Credit: United Launch Alliance

ULA engineers have worked on the design of a long-lived upper stage for more than a decade. Their vision was to develop an upper stage fed by super-efficient cryogenic liquid hydrogen and liquid oxygen propellants that could generate its own power and operate in space for days, weeks, or longer rather than an upper stage’s usual endurance limit of several hours. This would allow the rocket to not only deliver satellites into bespoke high-altitude orbits but also continue on to release more payloads at different altitudes or provide longer-term propulsion in support of other missions.

The concept was called the Advanced Cryogenic Evolved Stage (ACES). ULA’s corporate owners, Boeing and Lockheed Martin, never authorized the full development of ACES, and the company said in 2020 that it was no longer pursuing the ACES concept.

The Centaur V upper stage currently used on the Vulcan rocket is a larger version of the thin-walled, pressure-stabilized Centaur upper stage that has been flying since the 1960s. Bruno said the Centaur V design, as it is today, offers as much as 12 hours of operating life in space. That is longer than any other existing rocket stage that uses cryogenic propellants, which boil off over time.

ULA’s chief executive still harbors an ambition for regaining some of the same capabilities promised by ACES.

“What we are looking to do is to extend that by orders of magnitude,” Bruno said. “And what that would allow us to do is have an in-space transportation capability for in-space mobility and servicing and things like that.”

Space Force leaders have voiced a desire for future spacecraft to freely maneuver between different orbits, a concept the military calls “dynamic space operations.” This would untether spacecraft operations from fuel limitations and eventually require the development of in-orbit refueling, propellant depots, or novel propulsion technologies.

No one has tried to store large amounts of super-cold propellants in space for weeks or longer. Accomplishing this is a non-trivial thermal problem, requiring insulation to keep heat from the Sun from reaching the liquid cryogenic propellants, which are stored at temperatures hundreds of degrees below zero (liquid hydrogen must be kept near minus 423° Fahrenheit).

Bruno hesitated to share details of the experiments ULA plans for the Centaur V upper stage on Friday’s test flight, citing proprietary concerns. He said the experiments will confirm analytical models about how the upper stage performs in space.

“Some of these are devices, some of these are maneuvers because maneuvers make a difference, and some are related to performance in a way,” he said. “In some cases, those maneuvers are helping us with the thermal load that tries to come in and boil off the propellants.”

Eventually, ULA would like to eliminate hydrazine attitude control fuel and battery power from the Centaur V upper stage, Bruno said Wednesday. This sounds a lot like what ULA wanted to do with ACES, which would have used an internal combustion engine called Integrated Vehicle Fluids (IVF) to recycle gasified waste propellants to pressurize its propellant tanks, generate electrical power, and feed thrusters for attitude control. This would mean the upper stage wouldn’t need to rely on hydrazine, helium, or batteries.

ULA hasn’t talked much about the IVF system in recent years, but Bruno said the company is still developing it. “It’s part of all of this, but that’s all I will say, or I’ll start revealing what all the gadgets are.”

A comparison between ULA’s legacy Centaur upper stage and the new Centaur V. Credit: United Launch Alliance

George Sowers, former vice president and chief scientist at ULA, was one of the company’s main advocates for extending the lifetime of upper stages and developing technologies for refueling and propellant depots. He retired from ULA in 2017 and is now a professor at the Colorado School of Mines and an independent aerospace industry consultant.

In an interview with Ars earlier this year, Sowers said ULA solved many of the problems with keeping cryogenic propellants at the right temperature in space.

“We had a lot of data on boil-off, just from flying Centaurs all the way to geosynchronous orbit, which doesn’t involve weeks, but it involves maybe half a day or so, which is plenty of time to get all the temperatures to stabilize at deep space levels,” Sowers said. “So you have to understand the heat transfer very well. Good models are very important.”

ULA experimented with different types of insulation and with vapor cooling, which involves taking the cold gas that boils off the cryogenic fuel and routing it over the points where heat penetrates into the tanks.

“There are tricks to managing boil-off,” he said. “One of the tricks is that you never want to boil oxygen. You always want to boil hydrogen. So you size your propellant tanks and your propellant loads, assuming you’re going to have that extra hydrogen boil-off. Then what you can do is use the hydrogen to keep the oxygen cold to keep it from boiling.

“The amount of heat that you can reject by boiling off one kilogram of hydrogen is about five times what you would reject by boiling off one kilogram of oxygen. So those are some of the thermodynamic tricks,” Sowers said. “The way ULA accomplished that is by having a common bulkhead, so the hydrogen tank and the oxygen tank are in thermal contact. So hydrogen keeps the oxygen cold.”
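Sowers’ five-to-one figure can be roughly sanity-checked. The sketch below is a hedged back-of-envelope calculation using approximate textbook property values (not ULA data); it illustrates why the vapor-cooling step, letting the frigid boil-off gas absorb sensible heat on its way out, matters so much:

```python
# Why boiling hydrogen rejects far more heat than boiling oxygen.
# Approximate textbook property values -- illustrative, not ULA data.
H2_LATENT_KJ_PER_KG = 446.0   # latent heat of vaporization of liquid hydrogen (~20 K)
O2_LATENT_KJ_PER_KG = 213.0   # latent heat of vaporization of liquid oxygen (~90 K)
H2_CP_KJ_PER_KG_K = 14.3      # specific heat of hydrogen gas, treated as constant here

def h2_heat_rejected(warm_to_k: float, boil_point_k: float = 20.0) -> float:
    """Heat absorbed per kg of hydrogen: boil it off, then let the frigid
    gas soak up sensible heat as it warms (the vapor-cooling step)."""
    return H2_LATENT_KJ_PER_KG + H2_CP_KJ_PER_KG_K * (warm_to_k - boil_point_k)

# Let the vented hydrogen warm to roughly the liquid oxygen temperature (~90 K):
ratio = h2_heat_rejected(90.0) / O2_LATENT_KJ_PER_KG
print(f"heat rejected per kg of boil-off, H2 vs O2: {ratio:.1f}x")
```

Latent heat alone gives hydrogen only about a twofold edge over oxygen; the rest comes from the sensible heat the cold gas soaks up before venting, which is why the common-bulkhead and vapor-cooling tricks Sowers describes are central to passive boil-off management.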

ULA’s experiments showed it could get the hydrogen boil-off rate down to about 10 percent per year, based on thermodynamic models calibrated by data from flying older versions of the Centaur upper stage on Atlas V rockets, according to Sowers.
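To put that 10 percent annual figure in mission terms, here is a hedged illustrative calculation (treating boil-off as a constant fractional loss rate, a simplification of the real thermal behavior):

```python
import math

ANNUAL_BOILOFF = 0.10  # ~10 percent of the hydrogen lost per year (Sowers' figure)

def fraction_remaining(days: float) -> float:
    # A constant fractional loss rate means exponential decay in time.
    rate_per_day = -math.log(1.0 - ANNUAL_BOILOFF) / 365.0
    return math.exp(-rate_per_day * days)

for days in (7, 30, 180):
    print(f"after {days:3d} days: {fraction_remaining(days) * 100:.1f}% of the hydrogen remains")
```

At that rate, a stage would still hold the vast majority of its hydrogen after a six-month mission, which is why Sowers argues depots are achievable without power-hungry cryocoolers.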

“In my mind, that kind of cemented the idea that distribution depots and things like that are very well in hand without having to have exotic cryocoolers, which tend to use a lot of power,” Sowers said. “It’s about efficiency. If you can do it passively, you don’t have to expend energy on cryocoolers.”

“We’re going to go to days, and then we’re going to go to weeks, and then we think it’s possible to take us to months,” Bruno said. “That’s a game changer.”

However, ULA’s corporate owners haven’t yet fully bought into this vision. Bruno said the Vulcan rocket and its supporting manufacturing and launch infrastructure cost between $5 billion and $7 billion to develop. ULA also plans to eventually recover and reuse BE-4 main engines from the Vulcan rocket, but that is still at least several years away.

But ULA is reportedly up for sale, and a well-capitalized buyer might find the company’s long-duration cryogenic upper stage more attractive and worth the investment.

“There’s a whole lot of missions that enables,” Bruno said. “So that’s a big step in capability, both for the United States and also commercially.”

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

EVgo gets $1.05B loan to build 7,500 DC fast chargers

The electric vehicle charging company EVgo has secured conditional approval for a $1.05 billion loan from the US Department of Energy, the company revealed this morning. EVgo applied to the DOE’s Title 17 program, which provides US Treasury-backed loans or loan guarantees for clean energy projects. If the deal is finalized, the money will be used to build around 7,500 DC fast chargers, with powerful 350 kW units as the priority, EVgo said.
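As a rough yardstick (a back-of-envelope division that ignores grid upgrades, land, permitting, and operating costs), the loan works out to:

```python
# Back-of-envelope: average loan dollars per planned charger.
loan_usd = 1.05e9   # conditional DOE Title 17 loan amount
chargers = 7_500    # planned DC fast charger build-out
per_charger = loan_usd / chargers
print(f"≈ ${per_charger:,.0f} per charger")
```

That comes to about $140,000 per charger, a ceiling on average per-stall spend rather than a hardware price.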

EVgo said the charger build-out will be concentrated in Arizona, California, Florida, Georgia, Illinois, New York, New Jersey, Michigan, Pennsylvania, and Texas and should be completed by 2030.

Since the federal government is already spending billions on a network of DC fast chargers along highway corridors, EVgo is instead focusing on creating community charging stations, particularly in areas with a high density of multifamily developments and other communities where EV drivers have to rely on public chargers.

In fact, the company said 40 percent of the chargers will be deployed “in marginalized areas that have been overburdened by environmental impacts.”

Backup to the Future!

“I finally invented something that works!” This iconic quote from Back to the Future perfectly encapsulates the transformative journey of cloud-native data protection. As organizations increasingly migrate to the cloud, the landscape of data protection is evolving at a breakneck pace, much like Marty McFly’s adventures through time. Let’s take a closer look at the current state of the market, its history, and what the future holds for cloud-native data protection.

A Brief History of Cloud-Native Data Protection

“And in the future, we don’t need horses. We have motorized carriages called automobiles,” explains Doc Brown in Back to the Future III. Similarly, the journey of data protection has been nothing short of revolutionary. Initially, organizations relied on traditional on-premises solutions, which were often cumbersome and expensive. As cloud adoption grew, these legacy systems struggled to keep up with the dynamic and scalable nature of cloud environments. This led to the emergence of cloud-native data protection solutions, designed to leverage the inherent benefits of the cloud.

Cloud-native data protection aligns with the cloud’s operating model, offering pay-as-you-grow pricing, comprehensive workload support, and seamless cloud integration. These solutions are built using modern cloud principles such as microservices, containerization, and serverless architectures, providing greater scalability, resilience, and agility compared to traditional monolithic applications.

The Current State of the Market

“Wait a minute, Doc. Are you telling me you built a time machine … out of a DeLorean?” Just as Doc Brown’s DeLorean was a game changer, cloud-native data protection solutions have revolutionized how organizations safeguard their data. Today, the market is characterized by a diverse array of solutions that cater to various needs, from backup and disaster recovery to cyber resilience and compliance.

Key players in the market, such as Clumio, Rubrik, Commvault, and Acronis, offer robust solutions that integrate seamlessly with public cloud providers like AWS, Azure, and Google Cloud. These solutions provide features such as end-to-end encryption, granular recovery options, and advanced threat detection, ensuring that data is protected against both accidental loss and malicious attacks.

Emerging Technologies and Future Trends

“If my calculations are correct, when this baby hits 88 miles per hour, you’re gonna see some serious s–t,” said Doc Brown of his invention. The future of cloud-native data protection is also brimming with potential, driven by emerging technologies and evolving market demands. Here are some key trends to watch:

  • AI and ML: AI and ML are set to play a pivotal role in enhancing data protection. These technologies can automate threat detection, predict potential vulnerabilities, and optimize backup and recovery processes. For instance, Commvault’s AI-driven capabilities help organizations detect threats quickly and ensure clean recovery points.
  • Serverless architectures: The adoption of serverless architectures is expected to increase, providing greater scalability and cost efficiency. Solutions like Rubrik’s serverless design simplify deployment and management, allowing organizations to focus on their core business activities.
  • Multicloud and hybrid environments: As organizations continue to adopt multicloud and hybrid strategies, data protection solutions will need to offer seamless integration across different cloud platforms. This will ensure consistent protection and management of data, regardless of where it resides.
  • Cyber resilience: With the rise in cyber threats, the focus on cyber resilience will intensify. Solutions will need to offer advanced features such as data immutability, air-gapped backups, and proactive threat monitoring to safeguard against ransomware and other attacks.

The Next 12-24 Months: Market and Technology Evolution

The next couple of years will see significant advancements in cloud-native data protection, reshaping the market and technology landscape. “Where we’re going, we don’t need roads,” but here’s what to expect:

  • Increased adoption of SaaS models: The shift toward software-as-a-service (SaaS) models will continue, offering organizations flexible, pay-as-you-go options. This will reduce the need for heavy upfront investments and allow for more agile and scalable data protection strategies.
  • Enhanced integration with cloud-native services: Data protection solutions will increasingly integrate with cloud-native services such as Kubernetes and serverless functions. This will enable more efficient and automated protection of containerized and serverless workloads.
  • Focus on data privacy and compliance: As data privacy regulations become more stringent, solutions will need to offer robust compliance features. This includes capabilities for data discovery, classification, and policy-based management to ensure adherence to regulations like GDPR and CCPA.
  • Expansion of cyber resilience capabilities: The emphasis on cyber resilience will lead to the development of more sophisticated threat detection and response mechanisms. Solutions will leverage AI and ML to provide real-time insights and automated remediation, minimizing the impact of cyber incidents.

Preparing for the Future

“Your future is whatever you make it.” To stay ahead in the evolving landscape of cloud-native data protection, organizations should take proactive steps to position themselves for success.

  • Embrace automation: Leverage AI and ML to automate data protection processes, from threat detection to backup and recovery. This will enhance efficiency and reduce the risk of human error.
  • Adopt a multicloud strategy: Ensure your data protection solution supports multicloud and hybrid environments. This will provide flexibility and resilience, allowing you to protect data across different platforms.
  • Focus on cyber resilience: Implement advanced cyber resilience features such as data immutability, air-gapped backups, and proactive threat monitoring. This will safeguard your data against evolving cyber threats.
  • Stay compliant: Keep abreast of data privacy regulations and ensure your data protection solution offers robust compliance features. This will help you avoid costly fines and protect your organization’s reputation.
  • Invest in training: Equip your IT teams with the skills and knowledge needed to manage modern data protection solutions. This will ensure they can effectively leverage the latest technologies and best practices.

Next Steps

“Great Scott!” The future of cloud-native data protection is bright, with emerging technologies and evolving market demands driving innovation. By embracing these advancements and preparing for the future, organizations can ensure their data is protected, resilient, and compliant. As Doc Brown wisely said, “Your future is whatever you make it, so make it a good one.”

To learn more, take a look at GigaOm’s cloud-native data protection Sonar report. This report provides a comprehensive overview of the market, outlines the criteria you’ll want to consider in a purchase decision, and evaluates how a number of vendors perform against those decision criteria.

If you’re not yet a GigaOm subscriber, sign up here.

Meta smart glasses can be used to dox anyone in seconds, study finds

To prevent anyone from being doxxed, the co-creators are not releasing the code, Nguyen said on social media site X. They did, however, outline how their disturbing tech works, and they described how shocked the random strangers used as test subjects were to discover how easily identifiable they are from nothing more than the smart glasses and information posted publicly online.

Nguyen and Ardayfio tested out their technology at a subway station “on unsuspecting people in the real world,” 404 Media noted. To demonstrate how the tech could be abused to trick people, the students even claimed to know some of the test subjects, seemingly using information gleaned from the glasses to make resonant references and fake an acquaintance.

Dozens of test subjects were identified, the students claimed, although some results have been contested, 404 Media reported. To keep their face-scanning under the radar, the students covered up a light that automatically comes on when the Meta Ray Bans 2 are recording, Ardayfio said on X.

Opt out of PimEyes now, students warn

For Nguyen and Ardayfio, the point of the project was to persuade people to opt out of invasive search engines to protect their privacy online. An attempt to use I-XRAY to identify 404 Media reporter Joseph Cox, for example, didn’t work because he’d opted out of PimEyes.

But while privacy is clearly important to the students and their demo video strove to remove identifying information, at least one test subject was “easily” identified anyway, 404 Media reported. That test subject couldn’t be reached for comment, 404 Media reported.

So far, neither Facebook nor Google has chosen to release similar technologies that they developed linking smart glasses to face search engines, The New York Times reported.

Microsoft releases Office 2024, the latest buy-once-own-forever version of Office

Microsoft’s push to get Windows and Office users to buy Microsoft 365 subscriptions can be irritating, but to the company’s credit, it’s one of the few that has continued to sell buy-once, use-forever versions of its flagship software after launching a subscription model. This week, the company officially launched Microsoft Office 2024, a new “locked-in-time” update to Word, Excel, PowerPoint, and other apps for people who don’t want the continuously developed Microsoft 365 versions.

For end users, Office Home 2024 gets you Word, Excel, PowerPoint, and OneNote for $149.99. Office Home & Business 2024 costs $249.99, which adds Outlook “and the rights to use the apps for commercial purposes.” Both licenses cover a single PC or Mac.

New Office Long-Term Servicing Channel (LTSC) products are also being released for businesses and other organizations; Office LTSC Professional Plus 2024 for Windows also includes the Access database management app. Project 2024 and Visio 2024 are also still being offered as standalone products.

The new apps include most changes made since Office 2021 was released three years ago; Microsoft details those updates here and on its Learn documentation site. Unlike the Microsoft 365 versions, the perpetually licensed versions of the apps won’t get ongoing feature updates, they’re missing some real-time collaboration capabilities, and they also won’t get any features related to the Copilot AI assistant.

DirecTV/Dish merger has a problem as debt holders object to $1.6 billion loss

DirecTV’s agreement to buy the Dish satellite and streaming TV business from EchoStar is facing opposition from Dish creditors who would be forced to take a loss on the value of their debt.

Dish creditors “plan to block a distressed exchange that’s a key part of its tie-up with rival DirecTV, according to people familiar with the matter,” Bloomberg reported today. “A group of steering committee investors has gained a blocking position in order to negotiate with the company, the people said. They may even explore a better outcome through litigation, said some of the people.” The Bloomberg article was titled, “Dish-DirecTV Deal Sparks Creditor Revolt Over $1.6 Billion Loss.”

As Bloomberg notes, “Dish needs consent from its bondholders to exchange old debts for notes issued out of the new combined entity” in order to complete the deal. A previous Bloomberg article said that “just over two-thirds of [Dish] bondholders in each series of notes have to agree to the exchange, with the deadline set for October 29.” EchoStar executives argue that debt holders will benefit from the merger by “owning debt of a stronger company with lower leverage,” the article said.

Credit-rating firm S&P Global said in a research note that it views “these transactions as tantamount to a default because investors will receive less value than the promise of the original securities,” according to Variety. On the other hand, S&P Global “added that in exchange the new notes will carry a higher rate of 8.875 percent and be secured by assets of the combined businesses of DirecTV and Dish,” Variety wrote.

Debt exchange

DirecTV agreed to buy the Dish satellite TV and Sling TV business for a nominal fee of $1 in exchange for taking on $9.75 billion of Dish debt. But DirecTV’s deal announcement on Monday said the merger needs approval from Dish debt holders, who would see their investments devalued.

Dish notes would be exchanged with “a reduced principal amount of DirecTV debt which will have terms and collateral that mirror DirecTV’s existing secured debt,” DirecTV said. DirecTV’s announcement goes on to say that the principal amount will be reduced by at least $1.568 billion and that the deal can be scrapped if debt holders object.

“Obviously a failure”: Sonos execs not getting bonuses due to app fiasco

Sonos’ controversial app update in May was “obviously a failure,” Sonos CEO Patrick Spence told Reuters today.

When the update launched in May, customers revolted over missing features, like the ability to search music libraries, edit song and playlist queues, and set sleep timers. In addition, some already-purchased hardware, especially older models, began having problems.

In a note to investors on Tuesday, Sonos said that “more than 80 percent of the app’s missing features have been reintroduced.” The app should be “almost 100 percent restored in the coming weeks.” Sonos has been updating the app every two weeks in an effort to bring it to parity with the old one.

Speaking to Reuters, Spence took the blame for the app, which was reportedly rushed out prematurely ahead of Sonos’ first headphones, the Ace.

“This is obviously a failure of Sonos, but it starts with me in terms of where it started,” he said.

The CEO reportedly admitted that the botched rollout stemmed from a lack of proper testing and a desire to push out many features simultaneously:

We underestimated the complexity of the system, and so our testing didn’t capture all of the things that it should. We released it too soon.

Sonos was reportedly eager to get the app out to accommodate the Ace, an effort that involved overhauling the app, its player-side software, and Sonos’ cloud infrastructure. Last month, purported former and current Sonos employees said the app had accumulated technical debt and was pushed into an update that wasn’t ready, with some workers’ concerns overlooked.

No executive bonuses for now

Reuters reported today that Spence and seven other execs “would forgo their bonus in the most recent fiscal year,” which ended on September 30. The publication noted that Spence got a bonus of approximately $72,000 for fiscal year 2023. Reuters reported that the company heads have “certain benchmarks” to meet to receive bonuses for the October 2024 to September 2025 fiscal year.

It’s not hard to understand why the executives aren’t getting their bonuses. In addition to the damage that the botched app redesign has wrought on Sonos’ reputation—aggravating long-time customers and deterring prospective ones—the app has had tangible financial consequences. The Santa Barbara, California, company is expecting to pay up to $30 million in the short term to fix the app and try to restore customer and partner trust. The company also delayed two hardware releases, which led it to reduce its fiscal 2024 guidance. Sonos shares have fallen more than 30 percent since before the app update, Reuters noted.

Sidecarless Service Meshes: Are They Ready for Prime Time?

Service meshes have become a cornerstone in the architecture of modern microservices, providing a dedicated infrastructure layer to manage service-to-service communication. Traditionally, service meshes have relied on sidecar proxies to handle tasks such as load balancing, traffic routing, and security enforcement. However, the emergence of sidecarless service meshes has introduced a new paradigm, promising to simplify operations and reduce overhead.

This blog offers a detailed overview of the pros and cons of sidecarless service meshes, focusing on the security aspects that can make a significant difference, to help you navigate the complexities of managing a modern microservices architecture. Whether you stick with the traditional sidecar model, explore the emerging sidecarless approach, or mix both depending on the use case, understanding the trade-offs allows you to optimize your microservices communication and achieve greater efficiency and reliability in your deployments.

The Pros and Cons of Sidecarless Service Meshes

A sidecarless service mesh operates by integrating the service mesh layer directly into the underlying infrastructure, such as the kernel, rather than deploying individual sidecar proxies alongside each microservice. This approach leverages shared resources such as DaemonSets or node-level proxies or technologies like eBPF (extended Berkeley Packet Filter) to manage network connectivity and application protocols at the kernel level, handling tasks like traffic management, security enforcement, and observability.

Pros

  • Reduced operational complexity: Sidecarless service meshes, such as Istio’s Ambient Mesh and Cilium’s eBPF-based approach, aim to simplify operations by eliminating the need for sidecar proxies. Instead, they use shared resources like DaemonSets or node-level proxies, reducing the number of components that need to be managed and maintained.
  • Improved performance: By removing resource-intensive sidecar proxies such as Envoy, sidecarless service meshes can reduce the latency and performance overhead associated with routing traffic through additional containers. This can lead to improved network performance and more efficient resource utilization.
  • Lower infrastructure costs: Without the need for individual sidecar proxies, sidecarless service meshes can reduce overall resource consumption, leading to lower infrastructure costs. This is particularly beneficial in large-scale environments with numerous microservices.
  • Simplified upgrades and maintenance: Upgrading and maintaining a sidecarless service mesh can be more straightforward, as there are fewer components to update. This can lead to reduced downtime and fewer disruptions during maintenance windows.
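The resource argument can be put in rough numbers. The footprints below are assumptions chosen for illustration, not benchmarks of Envoy or of any particular node-level proxy:

```python
# Back-of-the-envelope memory comparison (all figures are assumptions,
# not measurements) for a mid-sized cluster.
PODS = 500
NODES = 20
SIDECAR_MB = 60       # assumed per-sidecar proxy footprint
NODE_PROXY_MB = 200   # assumed per-node shared-proxy footprint

sidecar_total = PODS * SIDECAR_MB          # scales with pod count
sidecarless_total = NODES * NODE_PROXY_MB  # scales with node count
print(sidecar_total, sidecarless_total)    # 30000 4000 (MB)
```

Even with a much heavier per-node proxy, the shared model's total footprint grows with node count rather than pod count, which is where the savings come from at scale.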

Cons

  • Limited maturity and adoption: Sidecarless service meshes are relatively new and may not be as mature or widely adopted as their sidecar-based counterparts. This can lead to potential stability and reliability issues, as well as a steeper learning curve for teams adopting the technology.
  • Security concerns: Some experts argue that sidecarless service meshes may not provide the same level of security isolation as sidecar-based meshes. Shared proxies can introduce potential vulnerabilities and may not offer the same granularity of security controls.
  • Compatibility issues: Not all existing tools and frameworks may be compatible with sidecarless service meshes. This can create challenges when integrating with existing infrastructure and may require additional effort to adapt or replace tools.
  • Feature limitations: While sidecarless service meshes can handle many of the same tasks as sidecar-based meshes, they may not support all the advanced features and capabilities. For example, some complex traffic management and routing functions may still require sidecar proxies.

The Security Debate

Security is a critical consideration when choosing a service mesh, and the debate over whether a sidecarless service mesh can meet the needs of an evolving threat landscape continues to rage. When it comes to sidecarless service meshes, the primary security risks include:

  • Reduced isolation: Without dedicated sidecars for each service, there is less isolation between services, potentially allowing security issues to spread more easily across the mesh.
  • Shared resources: Sidecarless approaches often use shared resources like DaemonSets or node-level proxies, which may introduce vulnerabilities if compromised, affecting multiple services simultaneously.
  • Larger attack surface: Some argue that sidecarless architectures may present a larger attack surface, especially when using node-level proxies or shared components.
  • Fine-grained policy challenges: Implementing fine-grained security policies can be more difficult without the granular control offered by per-service sidecars.
  • Certificate and mTLS concerns: There are debates about the security of certificate management and mutual TLS (mTLS) implementation in sidecarless architectures, particularly regarding the separation of authentication from data payloads.
  • eBPF security implications: For eBPF-based sidecarless approaches, there are ongoing discussions about potential security risks associated with kernel-level operations.
  • Reduced security boundaries: The lack of clear pod-level boundaries in sidecarless designs may make it harder to contain security breaches.
  • Complexity in security management: Without dedicated proxies per service, managing and auditing security across the mesh may become more complex.
  • Potential for “noisy neighbor” issues: Shared proxy resources might lead to security problems where one compromised service affects others.
  • Evolving security practices: As sidecarless architectures are relatively new, best practices for securing these environments are still developing, potentially leaving gaps in an organization’s security posture.
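The isolation concern at the top of this list can be sketched as a "blast radius" comparison. The service and node names below are invented:

```python
# Illustrative blast radius of a compromised proxy (hypothetical layout):
# a compromised sidecar exposes one service; a compromised shared
# node-level proxy exposes every service scheduled on that node.

node_pods = {
    "node-a": ["payments", "auth", "catalog"],
    "node-b": ["search", "recommendations"],
}

def blast_radius(compromised: str, sidecarless: bool) -> list:
    if sidecarless:
        # 'compromised' names a node proxy: all co-located services exposed
        return node_pods[compromised]
    # 'compromised' names a single sidecar: only its own service exposed
    return [compromised]

print(blast_radius("payments", sidecarless=False))  # ['payments']
print(blast_radius("node-a", sidecarless=True))     # ['payments', 'auth', 'catalog']
```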

It’s important to note that while concerns exist, proponents of sidecarless architectures argue that they can be addressed through careful design and implementation. Moreover, some advocates of the sidecarless approach believe that the separation of L4 and L7 processing in sidecarless designs may actually improve security by reducing the attack surface for services that don’t require full L7 processing.

The Middle Road

A mixed deployment, integrating both sidecar and sidecarless modes, can offer a balanced approach that leverages the strengths of both models while mitigating their respective weaknesses. Here are the key benefits and relevant use cases of using a mixed sidecar and sidecarless service mesh deployment:

Benefits

  • Optimized Resource Utilization
    • Sidecarless for lightweight services: Sidecarless deployments can be used for lightweight services that do not require extensive security or observability features. This reduces the overhead associated with running sidecar proxies, leading to more efficient resource utilization.
    • Sidecar for critical services: Critical services that require enhanced security, fine-grained traffic management, and detailed observability can continue to use sidecar proxies. This ensures that these services benefit from the robust security and control features provided by sidecars.
  • Enhanced Security and Compliance
    • Granular security control: By using sidecars for services that handle sensitive data or require strict compliance, organizations can enforce granular security policies, including mutual TLS (mTLS), access control, and encryption.
    • Simplified security for less critical services: For less critical services, sidecarless deployments can provide adequate security without the complexity and overhead of sidecar proxies.
  • Improved Performance and Latency
    • Reduced latency for high-performance services: Sidecarless deployments can reduce the latency introduced by sidecar proxies, making them suitable for high-performance services where low latency is critical.
    • Balanced performance for mixed workloads: By selectively deploying sidecars only where necessary, organizations can achieve a balance between performance and security, optimizing the overall system performance.
  • Operational Flexibility and Simplification
    • Simplified operations for non-critical services: Sidecarless deployments can simplify operations by reducing the number of components that need to be managed and maintained. This is particularly beneficial for non-critical services where operational simplicity is a priority.
    • Flexible deployment strategies: A mixed deployment allows organizations to tailor their service mesh strategy to the specific needs of different services, providing flexibility in how they manage and secure their microservices.
  • Cost Efficiency
    • Lower infrastructure costs: Organizations can lower their infrastructure costs by reducing the number of sidecar proxies (or replacing Envoy with lightweight proxies), particularly in large-scale environments with numerous microservices.
    • Cost-effective security: Sidecar proxies can be reserved for services that truly need them, ensuring that resources are allocated efficiently and cost-effectively.
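In practice, a mixed deployment comes down to a per-workload placement rule. The sketch below is hypothetical: the attribute names and the rule itself are illustrative, not part of any particular mesh's API:

```python
# Hypothetical placement policy for a mixed mesh: route each workload to
# sidecar or sidecarless mode based on a few declared attributes.

def dataplane_mode(handles_sensitive_data: bool, needs_l7_policy: bool) -> str:
    # Critical services keep the per-pod sidecar for granular security and
    # L7 features; everything else runs on the cheaper shared data plane.
    if handles_sensitive_data or needs_l7_policy:
        return "sidecar"
    return "sidecarless"

print(dataplane_mode(True, False))   # sidecar     (e.g., a payments service)
print(dataplane_mode(False, False))  # sidecarless (e.g., a stateless cache)
```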

Use Cases

  • Hybrid cloud environments: In hybrid cloud environments, a mixed deployment can provide the flexibility to optimize resource usage and security across different cloud and on-premises infrastructures. Sidecarless deployments can be used in cloud environments where resource efficiency is critical, while sidecars can be deployed on-premises for services requiring stringent security controls.
  • Microservices with varying security requirements: In microservices architectures where different services have varying security and compliance requirements, a mixed deployment allows for tailored security policies. Critical services handling sensitive data can use sidecar proxies for enhanced security, while less critical services can leverage sidecarless deployments for better performance and lower overhead.
  • Performance-sensitive applications: Applications requiring high performance and low latency can benefit from lightweight sidecars or sidecarless deployments for performance-sensitive components. At the same time, sidecar proxies can be used for components where security and observability are more critical, ensuring a balanced approach.
  • Development and test environments: In development and test environments, sidecarless deployments can simplify the setup and reduce resource consumption, making it easier for developers to iterate quickly. Sidecar proxies can be introduced in staging or production environments where security and observability become more critical.
  • Gradual migration to sidecarless architectures: Organizations looking to gradually migrate to sidecarless architectures can start with a mixed deployment. This allows them to transition some services to sidecarless mode while retaining sidecar proxies for others, providing a smooth migration path and minimizing disruption.

While much depends on the service mesh chosen, a mixed sidecar and sidecarless service mesh deployment may offer a versatile and balanced approach to managing microservices. However, a mixed environment also adds a layer of complexity, requiring additional expertise, which may be prohibitive for some organizations.

The Bottom Line

Both sidecar and sidecarless approaches offer distinct advantages and disadvantages. Sidecar-based service meshes provide fine-grained control, enhanced security, and compatibility with existing tools but may come with increased operational complexity, performance overhead, and resource usage depending on the service mesh and proxy chosen. On the other hand, sidecarless service meshes promise reduced operational complexity, improved performance, and lower infrastructure costs but face challenges related to maturity, security, and compatibility.

The choice between sidecar and sidecarless service meshes ultimately depends on your specific use case, requirements, existing infrastructure, in-house expertise, and timeframe. For organizations with immediate requirements or complex, large-scale microservices environments that require advanced traffic management and security features, sidecar-based service meshes may be the better choice. However, for those looking to simplify operations and reduce overhead, sidecarless service meshes are maturing to the point where they may offer a compelling alternative in the next 12 to 18 months. In the meantime, it’s worth evaluating them in a controlled environment.

As the technology continues to evolve, it is essential to stay informed about the latest developments and best practices in the service mesh landscape. By carefully evaluating the pros and cons of each approach, you can make an informed decision that aligns with your organization’s goals and needs.

Next Steps

To learn more, take a look at GigaOm’s Service Mesh Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.

If you’re not yet a GigaOm subscriber, sign up here.

Sidecarless Service Meshes: Are They Ready for Prime Time?

Crook made millions by breaking into execs’ Office365 inboxes, feds say

WHAT IS THE NAME OF YOUR FIRST PET? —

Email accounts inside 5 US companies unlawfully breached through password resets.


Federal prosecutors have charged a man for an alleged “hack-to-trade” scheme that earned him millions of dollars by breaking into the Office365 accounts of executives at publicly traded companies and obtaining quarterly financial reports before they were released publicly.

The action, taken by the office of the US Attorney for the District of New Jersey, accuses UK national Robert B. Westbrook of earning roughly $3.75 million in 2019 and 2020 from stock trades that capitalized on the illicitly obtained information. After accessing the reports, prosecutors said, he executed stock trades, and the advance notice allowed him to profit on the information before the general public could. The US Securities and Exchange Commission filed a separate civil suit against Westbrook seeking an order that he pay civil penalties and return all ill-gotten gains.

Buy low, sell high

“The SEC is engaged in ongoing efforts to protect markets and investors from the consequences of cyber fraud,” Jorge G. Tenreiro, acting chief of the SEC’s Crypto Assets and Cyber Unit, said in a statement. “As this case demonstrates, even though Westbrook took multiple steps to conceal his identity—including using anonymous email accounts, VPN services, and utilizing bitcoin—the Commission’s advanced data analytics, crypto asset tracing, and technology can uncover fraud even in cases involving sophisticated international hacking.”

A federal indictment filed in US District Court for the District of New Jersey said that Westbrook broke into the email accounts of executives from five publicly traded companies in the US. He pulled off the breaches by abusing the password reset mechanism Microsoft offered for Office365 accounts. In some cases, Westbrook allegedly went on to create forwarding rules that automatically sent all incoming emails to an email address he controlled.

Prosecutors alleged in one such incident:

On or about January 26, 2019, WESTBROOK gained unauthorized access to the Office365 email account of Company-1’s Director of Finance and Accounting (“Individual-1”) through an unauthorized password reset. During the intrusion, an auto-forwarding rule was implemented, which was designed to automatically forward content from Individual-1’s compromised email account to an email account controlled by WESTBROOK. At the time of the intrusion, the compromised email account of Individual-1 contained non-public information about Company-1’s quarterly earnings, which indicated that Company-1’s sales were down.

Once a person gains unauthorized access to an email account, it’s possible to conceal the breach by disabling or deleting password reset alerts and burying password reset rules deep inside account settings.

Prosecutors didn’t say how the defendant managed to abuse the reset feature. Typically such mechanisms require control of a cell phone or registered email account belonging to the account holder. In 2019 and 2020 many online services would also allow users to reset passwords by answering security questions. The practice is still in use today but has been slowly falling out of favor as the risks have come to be more widely understood.

By obtaining material information, Westbrook was able to predict how a company’s stock would perform once it became public. When results were likely to drive down stock prices, he would place “put” options, which give the purchaser the right to sell shares at a specific price within a specified span of time. The practice allowed Westbrook to profit when shares fell after financial results became public. When positive results were likely to send stock prices higher, Westbrook allegedly bought shares while they were still low and later sold them for a higher price.
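A worked example with invented numbers shows why advance knowledge of bad earnings makes puts profitable. The strike, premium, and post-earnings price below are hypothetical, not figures from the case:

```python
# Hypothetical put-option trade ahead of a bad earnings report.
contracts = 10
shares_per_contract = 100   # a standard US equity option covers 100 shares
strike = 50.00              # sale price per share the put guarantees
premium = 2.00              # cost per share to buy the option
post_earnings = 40.00       # share price after the bad report goes public

# The put lets the holder sell at $50 shares now worth $40.
payoff = (strike - post_earnings) * contracts * shares_per_contract
cost = premium * contracts * shares_per_contract
profit = payoff - cost
print(profit)  # 8000.0
```

The deeper the post-earnings drop below the strike, the larger the payoff; if the stock had stayed above the strike, the trader would have lost only the premium.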

The prosecutors charged Westbrook with one count each of securities fraud and wire fraud and five counts of computer fraud. The securities fraud count carries a maximum penalty of up to 20 years’ prison time and $5 million in fines. The wire fraud count carries a maximum penalty of up to 20 years in prison and a fine of either $250,000 or twice the gain or loss from the offense, whichever is greater. Each computer fraud count carries a maximum of five years in prison and a maximum fine of either $250,000 or twice the gain or loss from the offense, whichever is greater.

The US Attorney’s office in the District of New Jersey didn’t say if Westbrook has made an initial appearance in court or if he has entered a plea.

OpenAI unveils easy voice assistant creation at 2024 developer event

Developers developers developers —

Altman steps back from the keynote limelight and lets four major API additions do the talking.

Benj Edwards

On Monday, OpenAI kicked off its annual DevDay event in San Francisco, unveiling four major API updates for developers that integrate the company’s AI models into their products. Unlike last year’s single-location event featuring a keynote by CEO Sam Altman, DevDay 2024 is more than just one day, adopting a global approach with additional events planned for London on October 30 and Singapore on November 21.

The San Francisco event, which was invitation-only and closed to press, featured on-stage speakers going through technical presentations. Perhaps the most notable new API feature is the Realtime API, now in public beta, which supports speech-to-speech conversations using six preset voices and enables developers to build features very similar to ChatGPT’s Advanced Voice Mode (AVM) into their applications.

OpenAI says that the Realtime API streamlines the process of creating voice assistants. Previously, developers had to use multiple models for speech recognition, text processing, and text-to-speech conversion. Now, they can handle the entire process with a single API call.

The company plans to add audio input and output capabilities to its Chat Completions API in the next few weeks, allowing developers to input text or audio and receive responses in either format.

Two new options for cheaper inference

OpenAI also announced two features that may help developers balance performance and cost when making AI applications. “Model distillation” offers a way for developers to fine-tune (customize) smaller, cheaper models like GPT-4o mini using outputs from more advanced models such as GPT-4o and o1-preview. This potentially allows developers to get more relevant and accurate outputs while running the cheaper model.

Also, OpenAI announced “prompt caching,” a feature similar to one introduced by Anthropic for its Claude API in August. It speeds up inference (the AI model generating outputs) by remembering frequently used prompts (input tokens). Along the way, the feature provides a 50 percent discount on input tokens and faster processing times by reusing recently seen input tokens.
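A rough cost sketch shows how that discount compounds when the same long prompt is reused across many calls. The per-token price below is a placeholder, not OpenAI's actual rate, and the "first call pays full price" simplification glosses over per-request cache mechanics:

```python
# Rough economics of prompt caching (placeholder prices, simplified model):
# cached input tokens are billed at a 50% discount per the announcement.
price_per_m_input = 5.00   # assumed $ per 1M uncached input tokens
cached_discount = 0.50     # 50% off cached input tokens

prompt_tokens = 8_000      # shared system prompt reused on every call
calls = 1_000

uncached_cost = calls * prompt_tokens / 1e6 * price_per_m_input
# Simplification: first call pays full price, later calls hit the cache.
cached_cost = (prompt_tokens / 1e6 * price_per_m_input
               + (calls - 1) * prompt_tokens / 1e6
               * price_per_m_input * cached_discount)
print(round(uncached_cost, 2), round(cached_cost, 2))  # 40.0 20.02
```

For workloads dominated by a large, repeated prompt prefix, input costs approach half of the uncached price.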

And last but not least, the company expanded its fine-tuning capabilities to include images (what it calls “vision fine-tuning”), allowing developers to customize GPT-4o by feeding it both custom images and text. Basically, developers can teach the multimodal version of GPT-4o to visually recognize certain things. OpenAI says the new feature opens up possibilities for improved visual search functionality, more accurate object detection for autonomous vehicles, and possibly enhanced medical image analysis.

Where’s the Sam Altman keynote?

OpenAI CEO Sam Altman speaks during the OpenAI DevDay event on November 6, 2023, in San Francisco.

Getty Images

Unlike last year, DevDay isn’t being streamed live, though OpenAI plans to post content later on its YouTube channel. The event’s programming includes breakout sessions, community spotlights, and demos. But the biggest change since last year is the lack of a keynote appearance from the company’s CEO. This year, the keynote was handled by the OpenAI product team.

On last year’s inaugural DevDay, November 6, 2023, OpenAI CEO Sam Altman delivered a Steve Jobs-style live keynote to assembled developers, OpenAI employees, and the press. During his presentation, Microsoft CEO Satya Nadella made a surprise appearance, talking up the partnership between the companies.

Eleven days later, the OpenAI board fired Altman, triggering a week of turmoil that resulted in Altman’s return as CEO and a new board of directors. Just after the firing, Kara Swisher relayed insider sources that said Altman’s DevDay keynote and the introduction of the GPT store had been a precipitating factor in the firing (though not the key factor) due to some internal disagreements over the company’s more consumer-like direction since the launch of ChatGPT.

With that history in mind—and the focus on developers above all else for this event—perhaps the company decided it was best to let Altman step away from the keynote and let OpenAI’s technology become the key focus of the event instead of him. We are purely speculating on that point, but OpenAI has certainly experienced its share of drama over the past month, so it may have been a prudent decision.

Despite the lack of a keynote, Altman is present at DevDay San Francisco today and is scheduled to do a closing “fireside chat” at the end (which has not yet happened as of this writing). Also, Altman made a statement about DevDay on X, noting that since last year’s DevDay, OpenAI had seen some dramatic changes (literally):

From last devday to this one:

*98% decrease in cost per token from GPT-4 to 4o mini

*50x increase in token volume across our systems

*excellent model intelligence progress

*(and a little bit of drama along the way)

In a follow-up tweet delivered in his trademark lowercase, Altman shared a forward-looking message that referenced the company’s quest for human-level AI, often called AGI: “excited to make even more progress from this devday to the next one,” he wrote. “the path to agi has never felt more clear.”

eBay listings for banned chemicals shielded by Section 230, judge rules

No sale —

DOJ can’t force eBay to remove environmentally harmful product listings.

eBay has defeated a lawsuit that the US Department of Justice raised last fall, which alleged that eBay violated environmental protection and public safety laws by allowing users to sell hundreds of thousands of banned products.

Among products targeted by the DOJ suit were at least 343,011 “aftermarket products for motor vehicles” used to “tamper with or disable vehicle emissions control systems” and at least 23,000 “unregistered, misbranded, or restricted use pesticides.” The DOJ also took issue with sales of products containing methylene chloride, which is used as a “thinning agent in paint and coating removal products.” Most uses of that chemical were banned by the Environmental Protection Agency this April because it can cause cancer, liver harm, and death.

In her order, US District Judge Orelia Merchant agreed with eBay that the DOJ failed to prove that eBay was liable for selling some of these targeted products. Ultimately, Merchant ruled that whether the products violated environmental laws or not, Section 230 barred all of the DOJ’s claims, as eBay is shielded from liability for third-party postings (in this case, listings) on its platform.

“eBay contends that it does not actually ‘sell’ any item listed on its platform,” Merchant wrote, pointing to a precedent set in a 2004 lawsuit where the jewelry company Tiffany attempted to sue eBay over counterfeit items. Merchant agreed with the Second Circuit, which affirmed that “eBay did not itself sell counterfeit Tiffany goods; only the fraudulent vendors did,” mainly due to the fact that eBay “never physically possesses” the goods that are sold on its platform. For the same reason, Merchant found that eBay never sold any of the restricted items the DOJ flagged last year.

While the entire motion to dismiss was granted, the DOJ did succeed in arguing that eBay had violated the Toxic Substances Control Act (TSCA) and the Methylene Chloride Rule by not removing some listings for products containing methylene chloride.

Under those laws, the DOJ persuasively alleged that eBay was a “retailer” who introduced and “distributed in commerce” products containing methylene chloride, Merchant’s order noted.

eBay’s attempt to defend against that claim by narrowly arguing that the TSCA should only be applied to the literal first seller to introduce a product to market not only failed, Merchant said, but also threatened to “undermine the TSCA’s regulatory scope” as a law designed to protect the public from any introduction of harmful substances.

However, none of that matters, eBay argued, because Section 230 bars that claim, too. Merchant agreed that without “allegations… eBay fails to remove third-party listings (conduct that is plainly immune under Section 230),” the government’s complaint “would not state a claim.”

eBay vows to help prevent toxic sales

Perhaps the government had hoped that eBay might settle the lawsuit, as the company did last February in a DOJ case over the sales of pill presses. Similar to the DOJ’s bid to hold eBay liable for enabling product sales causing environmental harms, the DOJ had accused eBay of selling pill presses tied to fentanyl drug rings amid an opioid epidemic killing 100,000 people annually at its peak. Both suits were designed to stop eBay from distributing products causing harms, but only one succeeded.

In the pill press case, eBay did not invoke the Section 230 shield. Instead, eBay admitted no wrongdoing while agreeing to “pay $59 million” and voluntarily removing products targeted by the DOJ. In a statement, eBay said this was “in the best interest of the company and its shareholders as it avoids the costs, uncertainty, and distraction associated with protracted litigation.”

eBay did not appear concerned that the environmental lawsuit might have similarly long legs in court. An eBay spokesperson told Ars that the company appreciated the court’s “thoughtful review,” which “found that the government’s lawsuit should not be permitted to move forward.”

“Maintaining a safe and trusted marketplace for our global community of sellers and buyers is a fundamental principle of our business at eBay,” eBay’s spokesperson said. “As we have throughout our history, eBay will continue to invest significant resources to support its well-recognized and proactive efforts to help prevent prohibited items from being listed on our marketplace.”

Because Merchant granted eBay’s motion to dismiss the DOJ’s lawsuit over alleged environmental harms with prejudice, the DOJ will not have a chance to re-file the case in the same court but could possibly appeal to a higher court.

The DOJ declined Ars’ request for comment.

In fear of more user protests, Reddit announces controversial policy change

Protest blowback —

Moderators now need Reddit’s permission to turn subreddits private, NSFW.

Following site-wide user protests last year that featured moderators turning thousands of subreddits private or not-safe-for-work (NSFW), Reddit announced that mods now need its permission to make those changes.

Reddit’s VP of community, going by Go_JasonWaterfalls, made the announcement about what Reddit calls Community Types today. Reddit’s permission is also required to make subreddits restricted or to go from NSFW to safe-for-work (SFW). Reddit’s employee claimed that requests will be responded to “in under 24 hours.”

Reddit’s employee said that “temporarily going restricted is exempt” from this requirement, adding that “mods can continue to instantly restrict posts and/or comments for up to 7 days using Temporary Events.” Additionally, if a subreddit has fewer than 5,000 members or is less than 30 days old, the request “will be automatically approved,” per Go_JasonWaterfalls.

Reddit’s post includes a list of “valid” reasons that mods tend to change their subreddit’s Community Type and provides alternative solutions.

Last year’s protests “accelerated” this policy change

Last year, Reddit announced that it would be charging a massive amount for access to its previously free API. This caused many popular third-party Reddit apps to close down. Reddit users then protested by turning subreddits private (or read-only) or by only showing NSFW content or jokes and memes. Reddit then responded by removing some moderators; eventually, the protests subsided.

Reddit, which previously admitted that another similar protest could hurt it financially, has maintained that moderators’ actions during the protests broke its rules. Now, it has solidified a way to prevent something like last year’s site-wide protests from happening again.

Speaking to The Verge, Laura Nestler, who The Verge reported is Go_JasonWaterfalls, claimed that Reddit has been talking about making this change since at least 2021. The protests, she said, were a wake-up call that moderators’ ability to turn subreddits private “could be used to harm Reddit at scale.” The protests “accelerated” the policy change, per Nestler.

The announcement on r/modnews reads:

… the ability to instantly change Community Type settings has been used to break the platform and violate our rules. We have a responsibility to protect Reddit and ensure its long-term health, and we cannot allow actions that deliberately cause harm.

After shutting down a tactic for responding to unfavorable Reddit policy changes, Go_JasonWaterfalls claimed that Reddit still wants to hear from users.

“Community Type settings have historically been used to protest Reddit’s decisions,” they wrote.

“While we are making this change to ensure users’ expectations regarding a community’s access do not suddenly change, protest is allowed on Reddit. We want to hear from you when you think Reddit is making decisions that are not in your communities’ best interests. But if a protest crosses the line into harming redditors and Reddit, we’ll step in.”

Last year’s user protests illustrated how dependent Reddit is on unpaid moderators and user-generated content. At times, things turned ugly, pitting Reddit executives against long-time users (Reddit CEO Steve Huffman infamously called Reddit mods “landed gentry,” something that some were quick to remind Go_JasonWaterfalls of) and reportedly worrying Reddit employees.

Although the protests failed to reverse Reddit’s prohibitive API fees or to save most third-party apps, they succeeded in getting users’ concerns heard and even crashed Reddit for three hours. Further, NSFW protests temporarily prevented Reddit from selling ads on some subreddits. Since going public this year and amid a push to reach profitability, Reddit has been more focused on ads than ever. (Most of Reddit’s money comes from ads.)

Reddit’s Nestler told The Verge that the new policy was reviewed by Reddit’s Mod Council. Reddit is confident that it won’t lose mods because of the change, she said.

“Demotes us all to janitors”

The news marks another broad policy change that is likely to upset users and make Reddit seem unwilling to give into user feedback, despite Go_JasonWaterfalls saying that “protest is allowed on Reddit.” For example, in response, Reddit user CouncilOfStrongs said:

Don’t lie to us, please.

Something that you can ignore because it has no impact cannot be a protest, and no matter what you say that is obviously the one and only point of you doing this – to block moderators from being able to hold Reddit accountable in even the smallest way for malicious, irresponsible, bad faith changes that they make.

Reddit user belisaurius, who is listed as a mod for several active subreddits, including a 336,000-member one for the Philadelphia Eagles NFL team, said that the policy change “removes moderators from any position of central responsibility and demotes us all to janitors.”

As Reddit continues seeking profits and seemingly more control over a platform built around free user-generated content and moderation, users will have to either accept that Reddit is changing or leave the platform.

Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder in Reddit.
