Author name: Kris Guyer

Sidecarless Service Meshes: Are They Ready for Prime Time?

Service meshes have become a cornerstone in the architecture of modern microservices, providing a dedicated infrastructure layer to manage service-to-service communication. Traditionally, service meshes have relied on sidecar proxies to handle tasks such as load balancing, traffic routing, and security enforcement. However, the emergence of sidecarless service meshes has introduced a new paradigm, promising to simplify operations and reduce overhead.

This blog offers a detailed overview of the pros and cons of sidecarless service meshes, focusing on the security aspects that can make a significant difference, to help you navigate the complexities of managing a modern microservices architecture. Whether you choose to stick with the traditional sidecar model, explore the emerging sidecarless approach, or use a mix of both based on the use case, understanding the trade-offs allows you to optimize your microservices communication and achieve greater efficiency and reliability in your deployments.

The Pros and Cons of Sidecarless Service Meshes

A sidecarless service mesh operates by integrating the service mesh layer directly into the underlying infrastructure, such as the kernel, rather than deploying individual sidecar proxies alongside each microservice. This approach leverages shared resources such as DaemonSets or node-level proxies or technologies like eBPF (extended Berkeley Packet Filter) to manage network connectivity and application protocols at the kernel level, handling tasks like traffic management, security enforcement, and observability.
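
To make the operational difference concrete, here is a minimal sketch, assuming Istio (where both sidecar injection and Ambient Mesh enrollment are driven by namespace labels) and the official kubernetes Python client; the namespace name is hypothetical:

```python
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context
core = client.CoreV1Api()

# Sidecar model: opt the namespace into automatic Envoy sidecar injection,
# so every new pod gets its own proxy container.
core.patch_namespace(
    "payments",
    {"metadata": {"labels": {"istio-injection": "enabled"}}},
)

# Sidecarless model: drop the injection label and enroll the namespace in
# Istio's ambient data plane, where shared node-level components handle
# traffic instead of per-pod proxies.
core.patch_namespace(
    "payments",
    {"metadata": {"labels": {"istio-injection": None, "istio.io/dataplane-mode": "ambient"}}},
)
```

In both cases the application pods themselves are unchanged; what differs is whether each pod carries its own proxy or relies on shared, node-level infrastructure.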

Pros

  • Reduced operational complexity: Sidecarless service meshes, such as Istio’s Ambient Mesh and Cilium’s eBPF-based approach, aim to simplify operations by eliminating the need for sidecar proxies. Instead, they use shared resources like DaemonSets or node-level proxies, reducing the number of components that need to be managed and maintained.
  • Improved performance: By removing resource-intensive sidecar proxies such as Envoy, sidecarless service meshes can reduce the latency and performance overhead associated with routing traffic through additional containers. This can lead to improved network performance and more efficient resource utilization.
  • Lower infrastructure costs: Without the need for individual sidecar proxies, sidecarless service meshes can reduce overall resource consumption, leading to lower infrastructure costs. This is particularly beneficial in large-scale environments with numerous microservices.
  • Simplified upgrades and maintenance: Upgrading and maintaining a sidecarless service mesh can be more straightforward, as there are fewer components to update. This can lead to reduced downtime and fewer disruptions during maintenance windows.

Cons

  • Limited maturity and adoption: Sidecarless service meshes are relatively new and may not be as mature or widely adopted as their sidecar-based counterparts. This can lead to potential stability and reliability issues, as well as a steeper learning curve for teams adopting the technology.
  • Security concerns: Some experts argue that sidecarless service meshes may not provide the same level of security isolation as sidecar-based meshes. Shared proxies can introduce potential vulnerabilities and may not offer the same granularity of security controls.
  • Compatibility issues: Not all existing tools and frameworks may be compatible with sidecarless service meshes. This can create challenges when integrating with existing infrastructure and may require additional effort to adapt or replace tools.
  • Feature limitations: While sidecarless service meshes can handle many of the same tasks as sidecar-based meshes, they may not support all the advanced features and capabilities. For example, some complex traffic management and routing functions may still require sidecar proxies.

The Security Debate

Security is a critical consideration when choosing a service mesh, and the debate over whether a sidecarless architecture can meet the needs of an evolving threat landscape continues. The primary security risks of sidecarless service meshes include:

  • Reduced isolation: Without dedicated sidecars for each service, there is less isolation between services, potentially allowing security issues to spread more easily across the mesh.
  • Shared resources: Sidecarless approaches often use shared resources like DaemonSets or node-level proxies, which may introduce vulnerabilities if compromised, affecting multiple services simultaneously.
  • Larger attack surface: Some argue that sidecarless architectures may present a larger attack surface, especially when using node-level proxies or shared components.
  • Fine-grained policy challenges: Implementing fine-grained security policies can be more difficult without the granular control offered by per-service sidecars.
  • Certificate and mTLS concerns: There are debates about the security of certificate management and mutual TLS (mTLS) implementation in sidecarless architectures, particularly regarding the separation of authentication from data payloads.
  • eBPF security implications: For eBPF-based sidecarless approaches, there are ongoing discussions about potential security risks associated with kernel-level operations.
  • Reduced security boundaries: The lack of clear pod-level boundaries in sidecarless designs may make it harder to contain security breaches.
  • Complexity in security management: Without dedicated proxies per service, managing and auditing security across the mesh may become more complex.
  • Potential for “noisy neighbor” issues: Shared proxy resources might lead to security problems where one compromised service affects others.
  • Evolving security practices: As sidecarless architectures are relatively new, best practices for securing these environments are still developing, potentially leaving gaps in an organization’s security posture.

It’s important to note that while concerns exist, proponents of sidecarless architectures argue that they can be addressed through careful design and implementation. Moreover, some advocates of the sidecarless approach believe that the separation of L4 and L7 processing in sidecarless designs may actually improve security by reducing the attack surface for services that don’t require full L7 processing.

The Middle Road

A mixed deployment, integrating both sidecar and sidecarless modes, can offer a balanced approach that leverages the strengths of both models while mitigating their respective weaknesses. Here are the key benefits and relevant use cases of using a mixed sidecar and sidecarless service mesh deployment:

Benefits

  • Optimized Resource Utilization
    • Sidecarless for lightweight services: Sidecarless deployments can be used for lightweight services that do not require extensive security or observability features. This reduces the overhead associated with running sidecar proxies, leading to more efficient resource utilization.
    • Sidecar for critical services: Critical services that require enhanced security, fine-grained traffic management, and detailed observability can continue to use sidecar proxies. This ensures that these services benefit from the robust security and control features provided by sidecars.
  • Enhanced Security and Compliance
    • Granular security control: By using sidecars for services that handle sensitive data or require strict compliance, organizations can enforce granular security policies, including mutual TLS (mTLS), access control, and encryption.
    • Simplified security for less critical services: For less critical services, sidecarless deployments can provide adequate security without the complexity and overhead of sidecar proxies.
  • Improved Performance and Latency
    • Reduced latency for high-performance services: Sidecarless deployments can reduce the latency introduced by sidecar proxies, making them suitable for high-performance services where low latency is critical.
    • Balanced performance for mixed workloads: By selectively deploying sidecars only where necessary, organizations can achieve a balance between performance and security, optimizing the overall system performance.
  • Operational Flexibility and Simplification
    • Simplified operations for non-critical services: Sidecarless deployments can simplify operations by reducing the number of components that need to be managed and maintained. This is particularly beneficial for non-critical services where operational simplicity is a priority.
    • Flexible deployment strategies: A mixed deployment allows organizations to tailor their service mesh strategy to the specific needs of different services, providing flexibility in how they manage and secure their microservices.
  • Cost Efficiency
    • Lower infrastructure costs: Organizations can lower their infrastructure costs by reducing the number of sidecar proxies (or replacing Envoy with lightweight proxies), particularly in large-scale environments with numerous microservices.
    • Cost-effective security: Sidecar proxies can be reserved for services that truly need them, ensuring that resources are allocated efficiently and cost-effectively.

Use Cases

  • Hybrid cloud environments: In hybrid cloud environments, a mixed deployment can provide the flexibility to optimize resource usage and security across different cloud and on-premises infrastructures. Sidecarless deployments can be used in cloud environments where resource efficiency is critical, while sidecars can be deployed on-premises for services requiring stringent security controls.
  • Microservices with varying security requirements: In microservices architectures where different services have varying security and compliance requirements, a mixed deployment allows for tailored security policies. Critical services handling sensitive data can use sidecar proxies for enhanced security, while less critical services can leverage sidecarless deployments for better performance and lower overhead.
  • Performance-sensitive applications: Applications requiring high performance and low latency can benefit from lightweight sidecars or sidecarless deployments for performance-sensitive components. At the same time, sidecar proxies can be used for components where security and observability are more critical, ensuring a balanced approach.
  • Development and test environments: In development and test environments, sidecarless deployments can simplify the setup and reduce resource consumption, making it easier for developers to iterate quickly. Sidecar proxies can be introduced in staging or production environments where security and observability become more critical.
  • Gradual migration to sidecarless architectures: Organizations looking to gradually migrate to sidecarless architectures can start with a mixed deployment. This allows them to transition some services to sidecarless mode while retaining sidecar proxies for others, providing a smooth migration path and minimizing disruption.

While much depends on the service mesh chosen, a mixed sidecar and sidecarless service mesh deployment may offer a versatile and balanced approach to managing microservices. However, a mixed environment also adds a layer of complexity, requiring additional expertise, which may be prohibitive for some organizations.

The Bottom Line

Both sidecar and sidecarless approaches offer distinct advantages and disadvantages. Sidecar-based service meshes provide fine-grained control, enhanced security, and compatibility with existing tools but may come with increased operational complexity, performance overhead, and resource usage depending on the service mesh and proxy chosen. On the other hand, sidecarless service meshes promise reduced operational complexity, improved performance, and lower infrastructure costs but face challenges related to maturity, security, and compatibility.

The choice between sidecar and sidecarless service meshes ultimately depends on your specific use case, requirements, existing infrastructure, in-house expertise, and timeframe. For organizations with immediate requirements or complex, large-scale microservices environments that require advanced traffic management and security features, sidecar-based service meshes may be the better choice. However, for those looking to simplify operations and reduce overhead, sidecarless service meshes are maturing to the point where they may offer a compelling alternative in the next 12 to 18 months. In the meantime, it’s worth evaluating them in a controlled environment.

As the technology continues to evolve, it is essential to stay informed about the latest developments and best practices in the service mesh landscape. By carefully evaluating the pros and cons of each approach, you can make an informed decision that aligns with your organization’s goals and needs.

Next Steps

To learn more, take a look at GigaOm’s Service Mesh Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.

If you’re not yet a GigaOm subscriber, sign up here.

Crook made millions by breaking into execs’ Office365 inboxes, feds say

WHAT IS THE NAME OF YOUR FIRST PET? —

Email accounts inside 5 US companies unlawfully breached through password resets.

Federal prosecutors have charged a man for an alleged “hack-to-trade” scheme that earned him millions of dollars by breaking into the Office365 accounts of executives at publicly traded companies and obtaining quarterly financial reports before they were released publicly.

The action, taken by the office of the US Attorney for the District of New Jersey, accuses UK national Robert B. Westbrook of earning roughly $3.75 million in 2019 and 2020 from stock trades that capitalized on the illicitly obtained information. After accessing the reports, prosecutors said, he executed trades before the results became public, allowing him to profit on the advance notice. The US Securities and Exchange Commission filed a separate civil suit against Westbrook seeking an order that he pay civil penalties and return all ill-gotten gains.

Buy low, sell high

“The SEC is engaged in ongoing efforts to protect markets and investors from the consequences of cyber fraud,” Jorge G. Tenreiro, acting chief of the SEC’s Crypto Assets and Cyber Unit, said in a statement. “As this case demonstrates, even though Westbrook took multiple steps to conceal his identity—including using anonymous email accounts, VPN services, and utilizing bitcoin—the Commission’s advanced data analytics, crypto asset tracing, and technology can uncover fraud even in cases involving sophisticated international hacking.”

A federal indictment filed in US District Court for the District of New Jersey said that Westbrook broke into the email accounts of executives from five publicly traded companies in the US. He pulled off the breaches by abusing the password reset mechanism Microsoft offered for Office365 accounts. In some cases, Westbrook allegedly went on to create forwarding rules that automatically sent all incoming emails to an email address he controlled.

Prosecutors alleged in one such incident:

On or about January 26, 2019, WESTBROOK gained unauthorized access to the Office365 email account of Company-1’s Director of Finance and Accounting (“Individual-1”) through an unauthorized password reset. During the intrusion, an auto-forwarding rule was implemented, which was designed to automatically forward content from Individual-1’s compromised email account to an email account controlled by WESTBROOK. At the time of the intrusion, the compromised email account of Individual-1 contained non-public information about Company-1’s quarterly earnings, which indicated that Company-1’s sales were down.

Once a person gains unauthorized access to an email account, it’s possible to conceal the breach by disabling or deleting password reset alerts and burying password reset rules deep inside account settings.
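
For defenders, the forwarding-rule tactic described above is something that can be audited. The following is a rough sketch, assuming you already hold a Microsoft Graph access token with a delegated mailbox-read scope; the endpoint and field names follow Graph’s messageRules resource, and the token placeholder is hypothetical:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<Microsoft Graph access token>"  # hypothetical placeholder; acquiring it is out of scope

def audit_inbox_forwarding(token: str) -> None:
    """List inbox rules and flag any that forward or redirect mail elsewhere."""
    resp = requests.get(
        f"{GRAPH}/me/mailFolders/inbox/messageRules",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    for rule in resp.json().get("value", []):
        actions = rule.get("actions") or {}
        targets = actions.get("forwardTo", []) + actions.get("redirectTo", [])
        if targets:
            addresses = [t["emailAddress"]["address"] for t in targets]
            print(f"Rule {rule.get('displayName')!r} sends mail to: {addresses}")

audit_inbox_forwarding(ACCESS_TOKEN)
```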

Prosecutors didn’t say how the defendant managed to abuse the reset feature. Typically such mechanisms require control of a cell phone or registered email account belonging to the account holder. In 2019 and 2020 many online services would also allow users to reset passwords by answering security questions. The practice is still in use today but has been slowly falling out of favor as the risks have come to be more widely understood.

By obtaining material information, Westbrook was able to predict how a company’s stock would perform once it became public. When results were likely to drive down stock prices, he would place “put” options, which give the purchaser the right to sell shares at a specific price within a specified span of time. The practice allowed Westbrook to profit when shares fell after financial results became public. When positive results were likely to send stock prices higher, Westbrook allegedly bought shares while they were still low and later sold them for a higher price.
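
A small worked example (with made-up numbers, not figures from the case) shows why puts pay off when bad news is coming:

```python
# Hypothetical numbers, for illustration only.
contracts = 100                # each standard equity option contract covers 100 shares
strike = 50.00                 # right to sell at $50 per share
premium_per_share = 1.50       # cost of buying the puts
price_after_earnings = 42.00   # stock drops once weak results become public

shares = contracts * 100
payoff = max(strike - price_after_earnings, 0) * shares  # intrinsic value at exercise
cost = premium_per_share * shares
profit = payoff - cost
print(f"Profit: ${profit:,.2f}")  # Profit: $65,000.00
```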

The prosecutors charged Westbrook with one count each of securities fraud and wire fraud and five counts of computer fraud. The securities fraud count carries a maximum penalty of up to 20 years’ prison time and $5 million in fines. The wire fraud count carries a maximum penalty of up to 20 years in prison and a fine of either $250,000 or twice the gain or loss from the offense, whichever is greater. Each computer fraud count carries a maximum five years in prison and a maximum fine of either $250,000 or twice the gain or loss from the offense, whichever is greater.

The US Attorney’s office in the District of New Jersey didn’t say if Westbrook has made an initial appearance in court or if he has entered a plea.

OpenAI unveils easy voice assistant creation at 2024 developer event

Developers developers developers —

Altman steps back from the keynote limelight and lets four major API additions do the talking.

On Monday, OpenAI kicked off its annual DevDay event in San Francisco, unveiling four major API updates for developers that integrate the company’s AI models into their products. Unlike last year’s single-location event featuring a keynote by CEO Sam Altman, DevDay 2024 is more than just one day, adopting a global approach with additional events planned for London on October 30 and Singapore on November 21.

The San Francisco event, which was invitation-only and closed to press, featured on-stage speakers going through technical presentations. Perhaps the most notable new API feature is the Realtime API, now in public beta, which supports speech-to-speech conversations using six preset voices and enables developers to build features very similar to ChatGPT’s Advanced Voice Mode (AVM) into their applications.

OpenAI says that the Realtime API streamlines the process of creating voice assistants. Previously, developers had to use multiple models for speech recognition, text processing, and text-to-speech conversion. Now, they can handle the entire process with a single API call.
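
For context, here is a rough sketch of the multi-step pipeline the Realtime API is meant to replace, using the openai Python SDK’s existing speech-to-text, chat, and text-to-speech endpoints; the audio file names, prompt, and model choices are illustrative assumptions:

```python
from openai import OpenAI

client = OpenAI()

# 1. Speech recognition: transcribe the user's spoken question with Whisper.
with open("user_question.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)

# 2. Text processing: generate a reply with a chat model.
chat = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": transcript.text}],
)
reply_text = chat.choices[0].message.content

# 3. Text-to-speech: synthesize the reply so it can be played back to the user.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply_text)
with open("assistant_reply.mp3", "wb") as out:
    out.write(speech.read())
```

The Realtime API collapses these three hops into a single streaming, speech-to-speech session.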

The company plans to add audio input and output capabilities to its Chat Completions API in the next few weeks, allowing developers to input text or audio and receive responses in either format.

Two new options for cheaper inference

OpenAI also announced two features that may help developers balance performance and cost when making AI applications. “Model distillation” offers a way for developers to fine-tune (customize) smaller, cheaper models like GPT-4o mini using outputs from more advanced models such as GPT-4o and o1-preview. This potentially allows developers to get more relevant and accurate outputs while running the cheaper model.
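
A sketch of what that distillation workflow might look like with the openai Python SDK: generate examples with the stronger “teacher” model, then start a fine-tuning job on the cheaper “student” model. The prompts, file name, and fine-tunable model snapshot ID are assumptions for illustration:

```python
import json
from openai import OpenAI

client = OpenAI()
prompts = [
    "Summarize our refund policy in one sentence.",
    "Draft a polite reply to a late-delivery complaint.",
]

# Collect teacher outputs as chat-format training examples.
with open("distillation.jsonl", "w") as f:
    for prompt in prompts:
        teacher = client.chat.completions.create(
            model="gpt-4o",  # the more capable "teacher" model
            messages=[{"role": "user", "content": prompt}],
        )
        example = {
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": teacher.choices[0].message.content},
            ]
        }
        f.write(json.dumps(example) + "\n")

# Upload the examples and fine-tune the smaller "student" model on them.
training_file = client.files.create(file=open("distillation.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # assumed fine-tunable snapshot of GPT-4o mini
)
print(job.id)
```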

Also, OpenAI announced “prompt caching,” a feature similar to one introduced by Anthropic for its Claude API in August. It speeds up inference (the AI model generating outputs) by remembering frequently used prompts (input tokens). Along the way, the feature provides a 50 percent discount on input tokens and faster processing times by reusing recently seen input tokens.
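
In practice, developers mostly benefit by keeping the long, unchanging part of the prompt at the front so repeated requests share a prefix. A rough sketch, assuming the openai Python SDK and that cached-token counts surface under the response’s usage details:

```python
from openai import OpenAI

client = OpenAI()

# Keep the long, static instructions first so repeated requests share a prefix;
# only the short user question changes between calls.
STATIC_SYSTEM_PROMPT = "You are a support assistant for ExampleCo. ..."  # imagine ~2,000 tokens

def ask(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": STATIC_SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    details = getattr(resp.usage, "prompt_tokens_details", None)
    cached = getattr(details, "cached_tokens", 0) if details else 0
    print(f"prompt tokens: {resp.usage.prompt_tokens}, cached: {cached}")
    return resp.choices[0].message.content

ask("How do I reset my password?")  # first call: cached count is likely 0
ask("What is your refund window?")  # later calls can reuse the cached prefix
```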

And last but not least, the company expanded its fine-tuning capabilities to include images (what it calls “vision fine-tuning”), allowing developers to customize GPT-4o by feeding it both custom images and text. Basically, developers can teach the multimodal version of GPT-4o to visually recognize certain things. OpenAI says the new feature opens up possibilities for improved visual search functionality, more accurate object detection for autonomous vehicles, and possibly enhanced medical image analysis.
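
A sketch of what a single vision fine-tuning training example might look like, reusing the chat-style message format with an image URL; the exact field layout here is an assumption for illustration, not a confirmed spec:

```python
import json

# One hypothetical training example: an image plus the label we want the model to learn.
example = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What traffic sign is shown in this photo?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/sign_042.jpg"}},
            ],
        },
        {"role": "assistant", "content": "A yield sign."},
    ]
}

with open("vision_finetune.jsonl", "a") as f:
    f.write(json.dumps(example) + "\n")
```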

Where’s the Sam Altman keynote?

OpenAI CEO Sam Altman speaks during the OpenAI DevDay event on November 6, 2023, in San Francisco. (Image: Getty Images)

Unlike last year, DevDay isn’t being streamed live, though OpenAI plans to post content later on its YouTube channel. The event’s programming includes breakout sessions, community spotlights, and demos. But the biggest change since last year is the lack of a keynote appearance from the company’s CEO. This year, the keynote was handled by the OpenAI product team.

On last year’s inaugural DevDay, November 6, 2023, OpenAI CEO Sam Altman delivered a Steve Jobs-style live keynote to assembled developers, OpenAI employees, and the press. During his presentation, Microsoft CEO Satya Nadella made a surprise appearance, talking up the partnership between the companies.

Eleven days later, the OpenAI board fired Altman, triggering a week of turmoil that resulted in Altman’s return as CEO and a new board of directors. Just after the firing, Kara Swisher relayed insider sources that said Altman’s DevDay keynote and the introduction of the GPT store had been a precipitating factor in the firing (though not the key factor) due to some internal disagreements over the company’s more consumer-like direction since the launch of ChatGPT.

With that history in mind—and the focus on developers above all else for this event—perhaps the company decided it was best to let Altman step away from the keynote and let OpenAI’s technology become the key focus of the event instead of him. We are purely speculating on that point, but OpenAI has certainly experienced its share of drama over the past month, so it may have been a prudent decision.

Despite the lack of a keynote, Altman is present at DevDay San Francisco today and is scheduled to hold a closing “fireside chat” (which has not yet happened as of this writing). Also, Altman made a statement about DevDay on X, noting that since last year’s DevDay, OpenAI had seen some dramatic changes (literally):

From last devday to this one:

*98% decrease in cost per token from GPT-4 to 4o mini

*50x increase in token volume across our systems

*excellent model intelligence progress

*(and a little bit of drama along the way)

In a follow-up tweet delivered in his trademark lowercase, Altman shared a forward-looking message that referenced the company’s quest for human-level AI, often called AGI: “excited to make even more progress from this devday to the next one,” he wrote. “the path to agi has never felt more clear.”

eBay listings for banned chemicals shielded by Section 230, judge rules

No sale —

DOJ can’t force eBay to remove environmentally harmful product listings.

eBay has defeated a lawsuit that the US Department of Justice raised last fall, which alleged that eBay violated environmental protection and public safety laws by allowing users to sell hundreds of thousands of banned products.

Among products targeted by the DOJ suit were at least 343,011 “aftermarket products for motor vehicles” used to “tamper with or disable vehicle emissions control systems” and at least 23,000 “unregistered, misbranded, or restricted use pesticides.” The DOJ also took issue with sales of products containing methylene chloride, which is used as a “thinning agent in paint and coating removal products.” Most uses of that chemical were banned by the Environmental Protection Agency this April because it can cause cancer, liver harm, and death.

In her order, US District Judge Orelia Merchant agreed with eBay that the DOJ failed to prove that eBay was liable for selling some of these targeted products. Ultimately, Merchant ruled that whether the products violated environmental laws or not, Section 230 barred all of the DOJ’s claims, as eBay is shielded from liability for third-party postings (in this case, listings) on its platform.

“eBay contends that it does not actually ‘sell’ any item listed on its platform,” Merchant wrote, pointing to a precedent set in a 2004 lawsuit where the jewelry company Tiffany attempted to sue eBay over counterfeit items. Merchant agreed with the Second Circuit, which affirmed that “eBay did not itself sell counterfeit Tiffany goods; only the fraudulent vendors did,” mainly due to the fact that eBay “never physically possesses” the goods that are sold on its platform. For the same reason, Merchant found that eBay never sold any of the restricted items the DOJ flagged last year.

While the entire motion to dismiss was granted, the DOJ did succeed in arguing that eBay had violated the Toxic Substances Control Act (TSCA) and the Methylene Chloride Rule by not removing some listings for products containing methylene chloride.

Under those laws, the DOJ persuasively alleged that eBay was a “retailer” who introduced and “distributed in commerce” products containing methylene chloride, Merchant’s order noted.

eBay’s attempt to defend against that claim by narrowly arguing that the TSCA should only be applied to the literal first seller to introduce a product to market not only failed, Merchant said, but also threatened to “undermine the TSCA’s regulatory scope” as a law designed to protect the public from any introduction of harmful substances.

However, none of that matters, eBay argued, because Section 230 bars that claim, too. Merchant agreed that without “allegations… eBay fails to remove third-party listings (conduct that is plainly immune under Section 230),” and the government’s complaint “would not state a claim.”

eBay vows to help prevent toxic sales

Perhaps the government had hoped that eBay might settle the lawsuit, as the company did last February in a DOJ case over the sales of pill presses. Similar to the DOJ’s bid to hold eBay liable for enabling product sales causing environmental harms, the DOJ had accused eBay of selling pill presses tied to fentanyl drug rings amid an opioid epidemic killing 100,000 people annually at its peak. Both suits were designed to stop eBay from distributing products causing harms, but only one succeeded.

In the pill press case, eBay did not invoke the Section 230 shield. Instead, eBay admitted no wrongdoing while agreeing to “pay $59 million” and voluntarily removing products targeted by the DOJ. In a statement, eBay said this was “in the best interest of the company and its shareholders as it avoids the costs, uncertainty, and distraction associated with protracted litigation.”

eBay did not appear concerned that the environmental lawsuit might have similarly long legs in court. An eBay spokesperson told Ars that the company appreciated the court’s “thoughtful review,” which “found that the government’s lawsuit should not be permitted to move forward.”

“Maintaining a safe and trusted marketplace for our global community of sellers and buyers is a fundamental principle of our business at eBay,” eBay’s spokesperson said. “As we have throughout our history, eBay will continue to invest significant resources to support its well-recognized and proactive efforts to help prevent prohibited items from being listed on our marketplace.”

Because Merchant granted eBay’s motion to dismiss the DOJ’s lawsuit over alleged environmental harms with prejudice, the DOJ will not have a chance to re-file the case in the same court but could possibly appeal to a higher court.

The DOJ declined Ars’ request for comment.

In fear of more user protests, Reddit announces controversial policy change

Protest blowback —

Moderators now need Reddit’s permission to turn subreddits private, NSFW.

Following site-wide user protests last year that featured moderators turning thousands of subreddits private or not-safe-for-work (NSFW), Reddit announced that mods now need its permission to make those changes.

Reddit’s VP of community, going by Go_JasonWaterfalls, made the announcement about what Reddit calls Community Types today. Reddit’s permission is also required to make subreddits restricted or to go from NSFW to safe-for-work (SFW). Reddit’s employee claimed that requests will be responded to “in under 24 hours.”

Reddit’s employee said that “temporarily going restricted is exempt” from this requirement, adding that “mods can continue to instantly restrict posts and/or comments for up to 7 days using Temporary Events.” Additionally, if a subreddit has fewer than 5,000 members or is less than 30 days old, the request “will be automatically approved,” per Go_JasonWaterfalls.

Reddit’s post includes a list of “valid” reasons that mods tend to change their subreddit’s Community Type and provides alternative solutions.

Last year’s protests “accelerated” this policy change

Last year, Reddit announced that it would be charging a massive amount for access to its previously free API. This caused many popular third-party Reddit apps to close down. Reddit users then protested by turning subreddits private (or read-only) or by only showing NSFW content or jokes and memes. Reddit then responded by removing some moderators; eventually, the protests subsided.

Reddit, which previously admitted that another similar protest could hurt it financially, has maintained that moderators’ actions during the protests broke its rules. Now, it has solidified a way to prevent something like last year’s site-wide protests from happening again.

Speaking to The Verge, Laura Nestler, who The Verge reported is Go_JasonWaterfalls, claimed that Reddit has been talking about making this change since at least 2021. The protests, she said, were a wake-up call that moderators’ ability to turn subreddits private “could be used to harm Reddit at scale.” The protests “accelerated” the policy change, per Nestler.

The announcement on r/modnews reads:

… the ability to instantly change Community Type settings has been used to break the platform and violate our rules. We have a responsibility to protect Reddit and ensure its long-term health, and we cannot allow actions that deliberately cause harm.

After shutting down a tactic for responding to unfavorable Reddit policy changes, Go_JasonWaterfalls claimed that Reddit still wants to hear from users.

“Community Type settings have historically been used to protest Reddit’s decisions,” they wrote.

“While we are making this change to ensure users’ expectations regarding a community’s access do not suddenly change, protest is allowed on Reddit. We want to hear from you when you think Reddit is making decisions that are not in your communities’ best interests. But if a protest crosses the line into harming redditors and Reddit, we’ll step in.”

Last year’s user protests illustrated how dependent Reddit is on unpaid moderators and user-generated content. At times, things turned ugly, pitting Reddit executives against long-time users (Reddit CEO Steve Huffman infamously called Reddit mods “landed gentry,” something that some were quick to remind Go_JasonWaterfalls of) and reportedly worrying Reddit employees.

Although the protests failed to reverse Reddit’s prohibitive API fees or to save most third-party apps, they succeeded in getting users’ concerns heard and even crashed Reddit for three hours. Further, NSFW protests temporarily prevented Reddit from selling ads on some subreddits. Since going public this year and amid a push to reach profitability, Reddit has been more focused on ads than ever. (Most of Reddit’s money comes from ads.)

Reddit’s Nestler told The Verge that the new policy was reviewed by Reddit’s Mod Council. Reddit is confident that it won’t lose mods because of the change, she said.

“Demotes us all to janitors”

The news marks another broad policy change that is likely to upset users and make Reddit seem unwilling to give into user feedback, despite Go_JasonWaterfalls saying that “protest is allowed on Reddit.” For example, in response, Reddit user CouncilOfStrongs said:

Don’t lie to us, please.

Something that you can ignore because it has no impact cannot be a protest, and no matter what you say that is obviously the one and only point of you doing this – to block moderators from being able to hold Reddit accountable in even the smallest way for malicious, irresponsible, bad faith changes that they make.

Reddit user belisaurius, who is listed as a mod for several active subreddits, including a 336,000-member one for the Philadelphia Eagles NFL team, said that the policy change “removes moderators from any position of central responsibility and demotes us all to janitors.”

As Reddit continues seeking profits and seemingly more control over a platform built around free user-generated content and moderation, users will have to either accept that Reddit is changing or leave the platform.

Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder in Reddit.

Can addressing gut issues treat long COVID in children?

Four years after the outbreak of the COVID-19 pandemic, doctors and researchers are still seeking ways to help patients with long COVID, the persistent and often debilitating symptoms that can continue long after a COVID-19 infection.

In adults, the most common long COVID symptoms include fatigue and brain fog, but for children the condition can look different. A study published last month suggests preteens are more likely to experience symptoms such as headaches, stomach pain, trouble sleeping, and attention difficulties. Even among children, effects seem to vary by age. “There seems to be some differences between age groups, with less signs of organ damage in younger children and more adultlike disease in adolescents,” says Petter Brodin, professor of pediatric immunology at Imperial College London.

While vast sums have been devoted to long COVID research—the US National Institutes of Health have spent more than a billion dollars on research projects and clinical trials—research into children with the condition has been predominantly limited to online surveys, calls with parents, and studies of electronic health records. This is in spite of a recent study suggesting that between 10 and 20 percent of children may have developed long COVID following an acute infection, and another report finding that while many have recovered, some still remain ill three years later.

Now, what’s believed to be the first clinical trial specifically aimed at children and young adults with long COVID is underway, recruiting subjects aged 7 to 21 on which to test a potential treatment. It builds on research that suggests long COVID in children may be linked to the gut.

In May 2021, Lael Yonker, a pediatric pulmonologist at Massachusetts General Hospital in Boston, published a study of multisystem inflammatory syndrome in children (MIS-C), which she says is now regarded as a more severe and acute version of long COVID. It showed that these children had elevated levels of a protein called zonulin, a sign of a so-called leaky gut. Higher levels of zonulin are associated with greater permeability in the intestine, which could enable SARS-CoV-2 viral particles to leak out of the intestines and into the bloodstream instead of being excreted out of the body. From there, they could trigger inflammation.

As Yonker began to see more and more children with long COVID, she theorized that many of the gastrointestinal and neurological symptoms they were experiencing might be linked. But her original study also pointed to a possible solution. When she gave the children with MIS-C a drug called larazotide, an existing treatment for people with issues relating to a leaky gut, the levels of viral particles in their blood decreased and their symptoms improved.

iFixit’s iPhone 16 teardown finds a greatly improved battery removal process

iFixit —

The new iPhones received a repair score of 7 out of 10.

iFixit’s iPhone 16 and 16 Plus teardown.

iFixit has published teardown videos for the iPhone 16 and iPhone 16 Pro, along with their larger cousins, the Plus and Pro Max.

The videos are really marketing for iFixit’s various repair kits and other tools and products that you can buy—and sometimes these videos now have lengthy plugs for some new product or another—but nonetheless, the videos almost always include interesting insights about devices’ components.

Tearing down the iPhone 16, iFixit confirmed one thing we already suspected: One of the mmWave antennas was removed and replaced in that same spot by the Camera Control button. It also found that the camera systems in the 16 Pro and 16 Pro Max are almost interchangeable, but sadly aren’t because of the placement of a single screw and the length of a single cable. Too bad.

The disassembly process for the Pro phones is mostly the same as before, but thankfully, there’s been a redesign that reduces the risk of damaging the OLED panel when tearing the phone down.

The biggest discovery was that the iPhone 16 and iPhone 16 Plus have a superior battery replacement process compared to earlier phones. Instead of pull tabs, they use an adhesive that releases when an electric current is applied.

iFixit says this is one of the easiest battery removal processes in the industry, which is high praise, especially when it’s directed toward a company with a difficult record on that front.

iFixit’s iPhone 16 Pro and 16 Pro Max teardown.

Unfortunately, the 16 Pro and 16 Pro Max haven’t moved to the new battery replacement process found in the 16 and 16 Plus. On the bright side, it’s much easier to service the USB-C port than before, though Apple doesn’t sell that part separately.

iFixit gave all the new iPhones a 7 out of 10 repairability score, which is historically high for an iPhone.

The videos go into much more detail, so check them out.

“Not a good look”: Google’s ad tech monopoly defense widely criticized

Google wound down its defense in the US Department of Justice’s ad tech monopoly trial this week, following a week of testimony from witnesses that experts said seemed to lack credibility.

The tech giant started its defense by showing a widely mocked chart that Google executive Scott Sheffer called a “spaghetti football,” supposedly showing a fluid industry thriving thanks to Google’s ad tech platform but mostly just “confusing” everyone and possibly even helping to debunk its case, Open Markets Institute policy analyst Karina Montoya reported.

“The effect of this image might have backfired as it also made it evident that Google is ubiquitous in digital advertising,” Montoya reported. “During DOJ’s cross-examination, the spaghetti football was untangled to show only the ad tech products used specifically by publishers and advertisers on the open web.”

One witness, Marco Hardie, Google’s current head of industry, was even removed from the stand, his testimony deemed irrelevant by US District Judge Leonie Brinkema, Big Tech On Trial reported. Another, Google executive Scott Sheffer, gave testimony Brinkema considered “tainted,” Montoya reported. But perhaps the most heated exchange about a witness’ credibility came during the DOJ’s cross-examination of Mark Israel, the key expert that Google is relying on to challenge the DOJ’s market definition.

Google’s case depends largely on Brinkema agreeing that the DOJ’s market definition is too narrow, with an allegedly outdated focus on display ads on the open web, as opposed to a broader market including display ads appearing in apps or on social media. But experts monitoring the trial suggested that Brinkema may end up questioning Israel’s credibility after DOJ lawyer Aaron Teitelbaum’s aggressive cross-examination.

According to Big Tech on Trial, which posted the exchange on X (formerly Twitter), Teitelbaum’s line of questioning came across as a “striking and effective impeachment of Mark Israel’s credibility as a witness.”

During his testimony, Israel told Brinkema that Google’s share of the US display ads market is only 25 percent, minimizing Google’s alleged dominance while emphasizing that Google faced “intense competition” from other Big Tech companies like Amazon, Meta, and TikTok in this broader market, Open Markets Institute policy analyst Karina Montoya reported.

On cross-examination, Teitelbaum called Israel out as a “serial ‘expert’ for companies facing antitrust challenges” who “always finds that the companies ‘explained away’ market definition,” Big Tech on Trial posted on X. Teitelbaum even read out quotes from past cases “in which judges described” Israel’s “expert testimony as ‘not credible’ and having ‘misunderstood antitrust law.'”

Israel was also accused by past judges of rendering his opinions “based on false assumptions,” according to USvGoogleAds, a site run by the digital advertising watchdog Check My Ads with ad industry partners. And specifically for the Google ad tech case, Teitelbaum noted that Israel omitted ad spend data to seemingly manipulate one of his charts.

“Not a good look,” the watchdog’s site opined.

Perhaps most damaging, Teitelbaum asked Israel to confirm that “80 percent of his income comes from doing this sort of expert testimony,” suggesting that Israel seemingly depended on being paid by companies like Jet Blue and Kroger-Albertsons—and even previously by Google during the search monopoly trial—to muddy the waters on market definition. Lee Hepner, an antitrust lawyer with the American Economic Liberties Project, posted on X that the DOJ’s antitrust chief, Jonathan Kanter, has grown wary of serial experts supposedly sowing distrust in the court system.

“Let me say this clearly—this will not end well,” Kanter said during a speech at a competition law conference this month. “Already we see a seeping distrust of expertise by the courts and by law enforcers.”

“Best witnesses money can buy”

In addition to experts and Google staffers backing up Google’s proposed findings of fact and conclusions of law, Google brought in Courtney Caldwell—the CEO of a small business that once received a grant from Google and appears in Google’s marketing materials—to back up claims that a DOJ win could harm small businesses, Big Tech on Trial reported.

Google’s direct examination of Caldwell was “basically just a Google ad,” Big Tech on Trial said, while Check My Ads’ site suggested that Google mostly just called upon “the best witnesses their money can buy, and it still did not get them very far.”

According to Big Tech on Trial, Google is using a “light touch” in its defense, refusing to go “pound for pound” to refute the DOJ’s case. Using this approach, Google can seemingly ignore any argument the DOJ raises that doesn’t fit into the picture Google wants Brinkema to accept of Google’s ad empire growing organically, rather than anti-competitively constructed with the intent to shut out rivals through mergers and acquisitions.

Where the DOJ wants the judge to see “a Google-only pipeline through the heart of the ad tech stack, denying non-Google rivals the same access,” Google argues that it has only “designed a set of products that work efficiently with each other and attract a valuable customer base.”

The main problem with Google’s defense appears to be the evidence emerging from its own internal documents. AdExchanger’s Allison Schiff, who has been monitoring the trial, pulled out the spiciest quotes from the courtroom, where Google’s own employees seem to show intent to monopolize the ad tech industry.

Evidence that Brinkema might find hard to ignore includes a 2008 statement from Google’s former president of display advertising, David Rosenblatt, confirming that it would “take an act of god” to get people to switch ad platforms because of extremely high switching costs. Rosenblatt also suggested in a 2009 presentation that Google acquiring DoubleClick for Publishers would make Google’s ad tech like the New York Stock Exchange, putting Google in a position to monitor every ad sale and doing for display ads “what Google did to search.” There’s also a 2010 email where now-YouTube CEO Neal Mohan recommended getting Google ahead in the display ad market by “parking” a rival with “the most traction.”

On Friday, testimony concluded abruptly after the DOJ only called one rebuttal witness, Big Tech on Trial posted on X. Brinkema is expected to hear closing arguments on November 25, Big Tech on Trial reported, and rule in December, Montoya reported.

Dell sales team told to return to office 5 days a week, starting Monday

office culture —

“… sales teams are more productive when onsite.”

Most members of Dell’s sales team will no longer have the option to work remotely, starting on Monday, Reuters reported this week, citing an internal memo. The policy applies to salespeople worldwide and is aimed at helping “grow skills,” per the note.

Like the rest of Dell’s workforce, Dell’s salespeople have previously been allowed to work remotely two days per week. A memo, which a Reddit user claims to have posted online (The Register reported that the post “mirrors” one that it viewed separately), says that field sellers aren’t required to go into an office but “should prioritize time spent in person with customers and partners.” The policy doesn’t apply to “remote sales team members,” but Dell said to expect additional unspecified communications regarding remote workers “in the coming weeks.” Bloomberg reported that top sales executives Bill Scannell, Dell’s president of global sales and customer operations, and John Byrne, president of sales and global regions at Dell Tech Select, signed the memo, saying:

… our data showed that sales teams are more productive when onsite.

Dell is viewing mandatory on-site work as a way to maintain its sales team’s culture and drive growth, according to the memo, which mentions things like “real-time feedback” and “dynamic” office energy. Moving forward, remote work will be permitted as an exception, Dell said.

Notably, the letter, which was reportedly sent to workers on Thursday, doesn’t give employees much time for adjustments. The memo acknowledges that workers have built schedules around working from home regularly but doesn’t offer immediate solutions.

In a statement to The Register, a Dell spokesperson confirmed the policy change.

“We continually evolve our business so we’re set up to deliver the best innovation, value and service to our customers and partners,” they said. “That includes more in-person connection to drive market leadership.”

Dell’s RTO push

After permitting full-time remote work in response to the COVID-19 pandemic, in February, Dell started requiring workers to go into the office 39 days per quarter (or about three days per week) or be totally remote. The latter, however, seemed discouraged, as Dell reportedly told remote workers they were ineligible for promotions in March. Still, nearly 50 percent of Dell workers chose to stay remote, Business Insider reported in June, citing internal Dell data.

Dell’s return-to-office (RTO) mandates have reportedly been enforced with VPN and badge tracking. Some employees have accused Dell of trying to reduce headcount with RTO policies. Other companies pushing workers back into offices have also been accused of this; there’s research showing that at least some companies have used RTO policies to lower headcount while avoiding layoffs. Dell laid off 13,000 people in 2023, and in August it announced plans to lay off an undisclosed additional number of people. The company is estimated to have about 120,000 employees.

Dell’s RTO change follows an announcement this week requiring Amazon employees to work on-site five days a week starting next year. Following the announcement, a survey of 2,585 US Amazon employees found that 73 percent of Amazon workers are “considering looking for another job” in response.

“Yes, this is a shift…”

The memo, according to Reddit, acknowledges to workers, “Yes, this is a shift from current expectations.” Dell’s RTO push represents an about-face from previously stated positions on remote work from the company. In 2022, for example, CEO and founder Michael Dell wrote a blog that said Dell “found no meaningful differences” between remote and on-site workers, including before the pandemic. Dell COO Jeff Clarke made similar arguments in 2020.

The idea that remote work hinders productivity has been a hot topic of debate, especially as companies grapple with their remote work policies following pandemic restrictions. Dell says that its decision to force sales workers back into offices is backed by data, and its claims of boosted productivity could potentially be true when it comes to this specific Dell division. However, there have also been studies suggesting that return-to-office mandates hurt productivity. For example, a Great Place to Work survey conducted in July 2023 of 4,400 employees concluded that “productivity was lower for both on-site and remote employees when their employer mandated where they work.” Workers with companies allowing employees to choose between remote and on-site work were more likely to give “extra effort,” the survey found.

Exponential growth brews 1 million AI models on Hugging Face

The sand has come alive —

Hugging Face cites community-driven customization as fuel for diverse AI model boom.

On Thursday, AI hosting platform Hugging Face surpassed 1 million AI model listings for the first time, marking a milestone in the rapidly expanding field of machine learning. An AI model is a computer program (often using a neural network) trained on data to perform specific tasks or make predictions. The platform, which started as a chatbot app in 2016 before pivoting to become an open source hub for AI models in 2020, now hosts a wide array of tools for developers and researchers.

The machine-learning field represents a far bigger world than just large language models (LLMs) like the kind that power ChatGPT. In a post on X, Hugging Face CEO Clément Delangue wrote about how his company hosts many high-profile AI models, like “Llama, Gemma, Phi, Flux, Mistral, Starcoder, Qwen, Stable diffusion, Grok, Whisper, Olmo, Command, Zephyr, OpenELM, Jamba, Yi,” but also “999,984 others.”

The reason why, Delangue says, stems from customization. “Contrary to the ‘1 model to rule them all’ fallacy,” he wrote, “smaller specialized customized optimized models for your use-case, your domain, your language, your hardware and generally your constraints are better. As a matter of fact, something that few people realize is that there are almost as many models on Hugging Face that are private only to one organization—for companies to build AI privately, specifically for their use-cases.”

A Hugging Face-supplied chart showing the number of AI models added to Hugging Face over time, month to month.

Hugging Face’s transformation into a major AI platform follows the accelerating pace of AI research and development across the tech industry. In just a few years, the number of models hosted on the site has grown dramatically along with interest in the field. On X, Hugging Face product engineer Caleb Fahlgren posted a chart of models created each month on the platform (and a link to other charts), saying, “Models are going exponential month over month and September isn’t even over yet.”

The power of fine-tuning

As hinted by Delangue above, the sheer number of models on the platform stems from the collaborative nature of the platform and the practice of fine-tuning existing models for specific tasks. Fine-tuning means taking an existing model and giving it additional training to add new concepts to its neural network and alter how it produces outputs. Developers and researchers from around the world contribute their results, leading to a large ecosystem.
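
Here is a minimal fine-tuning sketch using the transformers Trainer API; the base model and dataset are common public examples chosen for illustration, not ones named in the article:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# Tokenize a public sentiment dataset for additional task-specific training.
dataset = load_dataset("imdb")
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)
dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="my-finetuned-model", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset["train"].shuffle(seed=42).select(range(2000)),  # small slice for the sketch
)
trainer.train()
trainer.push_to_hub()  # publish the new variant back to Hugging Face (requires a login)
```

Each such run produces yet another model repository on the Hub, which is how one base model spawns hundreds of specialized variants.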

For example, the platform hosts many variations of Meta’s open-weights Llama models that represent different fine-tuned versions of the original base models, each optimized for specific applications.

Hugging Face’s repository includes models for a wide range of tasks. Browsing its models page shows categories such as image-to-text, visual question answering, and document question answering under the “Multimodal” section. In the “Computer Vision” category, there are sub-categories for depth estimation, object detection, and image generation, among others. Natural language processing tasks like text classification and question answering are also represented, along with audio, tabular, and reinforcement learning (RL) models.
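
Each of those task categories maps to a one-line pipeline in the transformers library. A small sketch, letting pipeline() pick its default checkpoint for each task (the image path is a placeholder):

```python
from transformers import pipeline

captioner = pipeline("image-to-text")    # Multimodal: describe an image in text
qa = pipeline("question-answering")      # NLP: extractive question answering
detector = pipeline("object-detection")  # Computer vision: find objects in an image

print(captioner("photo.jpg")[0]["generated_text"])
print(qa(question="Who hosts the models?",
         context="Hugging Face hosts over a million models.")["answer"])
print(detector("photo.jpg")[0]["label"])
```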

A screenshot of the Hugging Face models page captured on September 26, 2024. (Image: Hugging Face)

When sorted for “most downloads,” the Hugging Face models list reveals trends about which AI models people find most useful. At the top, with a massive lead at 163 million downloads, is Audio Spectrogram Transformer from MIT, which classifies audio content like speech, music, and environmental sounds. Following that, with 54.2 million downloads, is BERT from Google, an AI language model that learns to understand English by predicting masked words and sentence relationships, enabling it to assist with various language tasks.

Rounding out the top five AI models are all-MiniLM-L6-v2 (which maps sentences and paragraphs to 384-dimensional dense vector representations, useful for semantic search), Vision Transformer (which processes images as sequences of patches to perform image classification), and OpenAI’s CLIP (which connects images and text, allowing it to classify or describe visual content using natural language).
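
As a quick illustration of why all-MiniLM-L6-v2 is downloaded so heavily, here is a minimal semantic-search sketch using the sentence-transformers library; the documents and query are made up:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

docs = ["How to replace an iPhone battery",
        "Kubernetes service mesh basics",
        "Quarterly earnings report template"]
query = "swap the battery in my phone"

doc_vecs = model.encode(docs)                  # each text becomes a 384-dimensional vector
query_vec = model.encode(query)
scores = util.cos_sim(query_vec, doc_vecs)[0]  # cosine similarity against every document
best = int(scores.argmax())
print(docs[best])  # expected: the battery-replacement document
```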

No matter what the model or the task, the platform just keeps growing. “Today a new repository (model, dataset or space) is created every 10 seconds on HF,” wrote Delangue. “Ultimately, there’s going to be as many models as code repositories and we’ll be here for it!”


openai-asked-us-to-approve-energy-guzzling-5gw-data-centers,-report-says

OpenAI asked US to approve energy-guzzling 5GW data centers, report says

Great Scott! —

OpenAI stokes China fears to woo US approvals for huge data centers, report says.


OpenAI hopes to convince the White House to approve a sprawling plan that would place 5-gigawatt AI data centers in different US cities, Bloomberg reports.

The AI company’s CEO, Sam Altman, supposedly pitched the plan after a recent meeting with the Biden administration where stakeholders discussed AI infrastructure needs. Bloomberg reviewed an OpenAI document outlining the plan, reporting that 5 gigawatts “is roughly the equivalent of five nuclear reactors” and warning that each data center will likely require “more energy than is used to power an entire city or about 3 million homes.”

According to OpenAI, the US needs these massive data centers to expand AI capabilities domestically, protect national security, and effectively compete with China. If approved, the data centers would generate “thousands of new jobs,” OpenAI’s document promised, and help cement the US as an AI leader globally.

But the energy demand is so enormous, the document said, that OpenAI told officials the “US needs policies that support greater data center capacity” or else the country could fall behind other nations in AI development.

Energy executives told Bloomberg that “powering even a single 5-gigawatt data center would be a challenge,” as power projects nationwide are already “facing delays due to long wait times to connect to grids, permitting delays, supply chain issues, and labor shortages.” Most likely, OpenAI’s data centers wouldn’t rely entirely on the grid, though, instead requiring a “mix of new wind and solar farms, battery storage and a connection to the grid,” John Ketchum, CEO of NextEra Energy Inc, told Bloomberg.

That’s a big problem for OpenAI, since one energy executive, Constellation Energy Corp. CEO Joe Dominguez, told Bloomberg that he’s heard OpenAI wants to build five to seven data centers. Speaking “as an engineer,” Dominguez said he doesn’t think OpenAI’s plan is “feasible,” and that the buildout would seemingly take more time than is available to address current national security risks as US-China tensions worsen.

OpenAI may be hoping to avoid those delays and skip to the front of the line, if the White House approves the company’s ambitious data center plan. For now, a person familiar with OpenAI’s plan told Bloomberg that OpenAI is focused on launching a single data center before expanding the project to “various US cities.”

Bloomberg’s report comes after OpenAI’s chief investor, Microsoft, announced a 20-year deal with Constellation to reopen Pennsylvania’s shuttered Three Mile Island nuclear plant to provide a new energy source for data centers powering AI development and other technologies. But even if that deal is approved by regulators, the resulting energy supply that Microsoft could access (roughly 835 megawatts, or 0.835 gigawatts, enough to power approximately 800,000 homes) is still only about one-sixth of OpenAI’s 5-gigawatt demand for its data centers.

Ketchum told Bloomberg that it’s easier to find a US site for a 1-gigawatt data center, but locating a site for a 5-gigawatt facility would likely be a bigger challenge. Notably, Amazon recently bought a $650 million nuclear-powered data center in Pennsylvania with a 2.5-gigawatt capacity. At the meeting with the Biden administration, OpenAI suggested opening large-scale data centers in Wisconsin, California, Texas, and Pennsylvania, a source familiar with the matter told CNBC.

During that meeting, the Biden administration confirmed that developing large-scale AI data centers is a priority, announcing “a new Task Force on AI Datacenter Infrastructure to coordinate policy across government.” OpenAI seems to be trying to get the task force’s attention early on, outlining in the document that Bloomberg reviewed the national security and economic benefits its data centers could provide for the US.

In a statement to Bloomberg, OpenAI’s spokesperson said that “OpenAI is actively working to strengthen AI infrastructure in the US, which we believe is critical to keeping America at the forefront of global innovation, boosting reindustrialization across the country, and making AI’s benefits accessible to everyone.”

Big Tech companies and AI startups will likely keep pressuring officials to approve data center expansions, as well as new kinds of nuclear reactors, as the global AI boom continues. Goldman Sachs estimated that “data center power demand will grow 160 percent by 2030.” To secure power for its AI, Microsoft has even been training AI to draft the regulatory paperwork needed to win government approval for nuclear plants that would power AI data centers, according to the tech news site Freethink.


terminator’s-cameron-joins-ai-company-behind-controversial-image-generator

Terminator’s Cameron joins AI company behind controversial image generator

a net in the sky —

Famed sci-fi director joins board of embattled Stability AI, creator of Stable Diffusion.

Filmmaker James Cameron.

On Tuesday, Stability AI announced that renowned filmmaker James Cameron—of Terminator and Skynet fame—has joined its board of directors. Stability is best known for its pioneering but highly controversial Stable Diffusion series of AI image-synthesis models, first launched in 2022, which can generate images based on text descriptions.

“I’ve spent my career seeking out emerging technologies that push the very boundaries of what’s possible, all in the service of telling incredible stories,” said Cameron in a statement. “I was at the forefront of CGI over three decades ago, and I’ve stayed on the cutting edge since. Now, the intersection of generative AI and CGI image creation is the next wave.”

Cameron is perhaps best known as the director behind blockbusters like Avatar, Titanic, and Aliens, but in AI circles, he may be most relevant for co-creating Skynet, the fictional AI system that triggers nuclear Armageddon and dominates humanity in the Terminator media franchise. Similar fears of AI taking over the world have since moved from fiction into real-world policy debates, recently sparking attempts to regulate existential risk from AI systems through measures like SB-1047 in California.

In a 2023 interview with CTV News, Cameron referenced The Terminator’s release year when asked about AI’s dangers: “I warned you guys in 1984, and you didn’t listen,” he said. “I think the weaponization of AI is the biggest danger. I think that we will get into the equivalent of a nuclear arms race with AI, and if we don’t build it, the other guys are for sure going to build it, and so then it’ll escalate.”

Hollywood goes AI

Of course, Stability AI isn’t building weapons controlled by AI. Instead, Cameron’s interest in cutting-edge filmmaking techniques apparently drew him to the company.

“James Cameron lives in the future and waits for the rest of us to catch up,” said Stability CEO Prem Akkaraju. “Stability AI’s mission is to transform visual media for the next century by giving creators a full stack AI pipeline to bring their ideas to life. We have an unmatched advantage to achieve this goal with a technological and creative visionary like James at the highest levels of our company. This is not only a monumental statement for Stability AI, but the AI industry overall.”

Cameron joins other recent additions to Stability AI’s board, including Sean Parker, former president of Facebook, who serves as executive chairman. Parker called Cameron’s appointment “the start of a new chapter” for the company.

Despite significant protest from actors’ unions last year, parts of Hollywood seem to be gradually embracing generative AI. Last Wednesday, we covered a deal between Lionsgate and AI video-generation company Runway that will see the creation of a custom AI model for film production use. In March, the Financial Times reported that OpenAI was actively showing off its Sora video synthesis model to studio executives.

Unstable times for Stability AI

Cameron’s appointment to the Stability AI board comes during a tumultuous period for the company. Stability AI has faced a series of challenges this past year, including an ongoing class-action copyright lawsuit, a troubled Stable Diffusion 3 model launch, significant leadership and staff changes, and ongoing financial concerns.

In March, founder and CEO Emad Mostaque resigned, followed by a round of layoffs. This came on the heels of the departure of three key engineers (Robin Rombach, Andreas Blattmann, and Dominik Lorenz), who have since founded Black Forest Labs and released a new open-weights image-synthesis model called Flux, which has begun to take over the r/StableDiffusion community on Reddit.

Despite the issues, Stability AI claims its models are widely used, with Stable Diffusion reportedly surpassing 150 million downloads. The company states that thousands of businesses use its models in their creative workflows.

While Stable Diffusion has indeed spawned a large community of open-weights-AI image enthusiasts online, it has also been a lightning rod for controversy among some artists because Stability originally trained its models on hundreds of millions of images scraped from the Internet without seeking licenses or permission to use them.

Apparently that association is not a concern for Cameron, according to his statement: “The convergence of these two totally different engines of creation [CGI and generative AI] will unlock new ways for artists to tell stories in ways we could have never imagined. Stability AI is poised to lead this transformation.”
