Author name: Kris Guyer

Amazon hides cheaper items with faster delivery, lawsuit alleges

A game of hide-and-seek —

Hundreds of millions of Amazon’s US customers have overpaid, class action says.

Amazon rigged its platform to “routinely” push an overwhelming majority of customers to pay more for items that could’ve been purchased at lower costs with equal or faster delivery times, a class-action lawsuit has alleged.

The lawsuit claims that a biased algorithm drives Amazon’s “Buy Box,” which appears on an item’s page and prompts shoppers to “Buy Now” or “Add to Cart.” According to customers suing, nearly 98 percent of Amazon sales are of items featured in the Buy Box, because customers allegedly “reasonably” believe that featured items offer the best deal on the platform.

“But they are often wrong,” the complaint said, claiming that instead, Amazon features items from its own retailers and sellers that participate in Fulfillment By Amazon (FBA), both of which pay Amazon higher fees and gain secret perks like appearing in the Buy Box.

“The result is that consumers routinely overpay for items that are available at lower prices from other sellers on Amazon—not because consumers don’t care about price, or because they’re making informed purchasing decisions, but because Amazon has chosen to display the offers for which it will earn the highest fees,” the complaint said.

Authorities in the US and the European Union have investigated Amazon’s allegedly anticompetitive Buy Box algorithm, confirming that it’s “favored FBA sellers since at least 2016,” the complaint said. In 2021, Amazon was fined more than $1 billion by the Italian Competition Authority over these unfair practices, and in 2022, the European Commission ordered Amazon to “apply equal treatment to all sellers when deciding what to feature in the Buy Box.”

These investigations served as the first public notice that Amazon’s Buy Box couldn’t be trusted, customers suing said. Amazon claimed that the algorithm was fixed in 2020, but so far, Amazon does not appear to have addressed all concerns over its Buy Box algorithm. As of 2023, European regulators have continued pushing Amazon “to take further action to remedy its Buy Box bias in their respective jurisdictions,” the customers’ complaint said.

The class action was filed by two California-based long-time Amazon customers, Jeffrey Taylor and Robert Selway. Both feel that Amazon “willfully” and “deceptively” tricked them and hundreds of millions of US customers into purchasing the featured item in the Buy Box when better deals existed.

Taylor and Selway’s lawyer, Steve Berman, told Reuters that Amazon has placed “a great burden” on its customers, who must invest more time on the platform to identify the best deals. Unlike other lawsuits over Amazon’s Buy Box, this is the first lawsuit to seek compensation over harms to consumers, not over antitrust concerns or harms to sellers, Reuters noted.

The lawsuit has been filed on behalf of “all persons who made a purchase using the Buy Box from 2016 to the present.” Because Amazon supposedly “frequently” features more expensive items in the Buy Box and most sales result from Buy Box placements, they’ve alleged that “the chances that any Class member was unharmed by one or more purchases is virtually non-existent.”

“Our team expects the class to include hundreds of millions of Amazon consumers because virtually all purchases are made from the Buy Box,” a spokesperson for plaintiffs’ lawyers told Ars.

Customers suing are hoping that a jury will decide that Amazon continues to “deliberately steer” customers to purchase higher-priced items in the Buy Box to spike its own profits. They’ve asked a US district court in Washington, where Amazon is based, to permanently stop Amazon from using allegedly biased algorithms to drive sales through its Buy Box.

The extent of damages that Amazon could owe is currently unknown but appears significant. An estimated 80 percent of Amazon’s 300 million-strong user base consists of US subscribers, each allegedly overpaying on most of their purchases over the past seven years. Last year, Amazon’s net sales exceeded $574 billion.

“Amazon claims to be a ‘customer-centric’ company that works to offer the lowest prices to its customers, but in violation of the Washington Consumer Protection Act, Amazon employs a deceptive scheme to keep its profits—and consumer prices—high,” the customers’ lawsuit alleged.

Wade Wilson is kidnapped by the TVA in Deadpool and Wolverine teaser

Everyone deserves a happy ending —

“Your little cinematic universe is about to change forever.”

Wade Wilson (Ryan Reynolds), aka Deadpool, is back to save the MCU: “I am Marvel Jesus.”

After some rather lackluster performances at the box office over the last year or so, Marvel Studios has scaled back its MCU offerings for 2024. We’re getting just one: Deadpool and Wolverine. Maybe one is all we need. Marvel released a two-minute teaser during yesterday’s Super Bowl. And if this is the future of the MCU, count us in. The teaser has already racked up more than 12 million views on YouTube, and deservedly so. It has the cheeky irreverence that made audiences embrace Ryan Reynolds’ R-rated superhero in the first place, plus a glimpse of Hugh Jackman’s Wolverine—or rather, his distinctive shadow. And yes, Marvel is retaining that R rating—a big step given that all the prior MCU films have been resoundingly PG-13.

(Some spoilers for the first two films below.)

Reynolds famously made his first foray into big-screen superhero movies in 2011’s Green Lantern, which was a box office disappointment and not especially good. But he found the perfect fit with 2016’s Deadpool, starring as Wade Wilson, a former Canadian special forces operative (dishonorably discharged) who develops regenerative healing powers that heal his cancer but leave him permanently disfigured with scars all over his body. Wade decides to become a masked vigilante, turning down an invitation to join the X-Men and abandon his bad-boy ways.

The first Deadpool was a big hit, racking up $782 million at the global box office, critical praise, and a couple of Golden Globe nominations for good measure. So 20th Century Fox naturally commissioned a sequel. Deadpool 2 was released in 2018 and was just as successful. The adult humor and playful pop culture references were a big part of both films’ appeal, including their respective post-credits scenes. The first film had a post-credits scene spoofing Ferris Bueller’s Day Off. The sequel’s mid-credits sequence showed a couple of X-Men repairing a time travel device for Deadpool, which he used to save his girlfriend Vanessa (Morena Baccarin), whose tragic death kicked off Deadpool 2, and to kill Ryan Reynolds just as the actor finished reading the script for Green Lantern.

This time around, Shawn Levy takes the director’s chair; he also directed Reynolds in the thoroughly delightful Free Guy (2021), which had similar tonal elements, minus the R-rated humorous riffs. Once we learned that Jackman had agreed to co-star, reprising his iconic X-Men role, fan anticipation shot through the roof. Filming (and hence the release date) was delayed by last summer’s Hollywood strikes but finally wrapped early this year.

Deadpool and Wolverine reunites many familiar faces from the first two films: Reynolds and Baccarin, obviously, but also Leslie Uggams as Blind Al; Karan Soni as Wade’s personal chauffeur, taxi driver Dopinder; Brianna Hildebrand as Negasonic Teenage Warhead; Stefan Kapičić as the voice of Colossus; Shioli Kutsuna as Negasonic’s mutant girlfriend Yukio; Randal Reeder as Buck; and Lewis Tan as X-Force member Shatterstar.

We’re also getting some characters drawn from various films under the 20th Century Fox Marvel umbrella: Pyro (Aaron Stanford)—last seen in 2006’s X-Men: The Last Stand—and Jennifer Garner’s Elektra, who appeared in the 2003 Daredevil film as well as 2005’s Elektra. Apparently, the mutants Sabretooth and Toad will also appear, along with Dogpool. New to the franchise are Matthew Macfadyen as a Time Variance Authority agent named Paradox and Emma Corrin as the lead villain. There are rumors that Owen Wilson’s Mobius and the animated Miss Minutes from Loki will also appear in the film, which makes sense, given the TVA’s key role in the plot.

The teaser opens with Wade celebrating his birthday with Vanessa and all their friends, only to then have a group of formidable TVA agents knock on his door, brandishing their wands. (“Is that supposed to be scary?” Wade responds. “Pegging isn’t new for me, friendo, but it is for Disney.”) He’s tossed through a portal and ends up at TVA headquarters, face to face with Paradox, who offers him a chance to be “a hero among heroes.” And Wade decides he’s game, declaring himself a superhero Messiah: “I… am… Marvel Jesus.” He suits up as Deadpool, and violence inevitably ensues.

Then comes the shot we’ve all been waiting for: Deadpool lying on his back on icy terrain after being tossed through a wall, with a Wolverine-shaped shadow falling across his body. “Don’t just stand there, you ape—give me a hand up,” Deadpool says, and then sees the claws. We get the briefest glimpse of Wolverine’s trademark yellow X-Men uniform before the credits roll.

Deadpool and Wolverine hits theaters on July 26, 2024.

Listing image by YouTube/Marvel Studios

On the Proposed California SB 1047

California Senator Scott Wiener of San Francisco introduces SB 1047 to regulate AI. I have put up a market on how likely it is to become law.

“If Congress at some point is able to pass a strong pro-innovation, pro-safety AI law, I’ll be the first to cheer that, but I’m not holding my breath,” Wiener said in an interview. “We need to get ahead of this so we maintain public trust in AI.”

Congress is certainly highly dysfunctional. I am still generally against California trying to act like it is the federal government, even when the cause is good, but I understand.

Can California effectively impose its will here?

On the biggest players, for now, presumably yes.

In the longer run, when things get actively dangerous, then my presumption is no.

There is a potential trap here: if we put our rules in a place where someone with enough upside can ignore them, we may never then pass anything in Congress.

So what does it do, according to the bill’s author?

California Senator Scott Wiener: SB 1047 does a few things:

  1. Establishes clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems. These standards apply only to the largest models, not startups.

  2. Establish CalCompute, a public AI cloud compute cluster. CalCompute will be a resource for researchers, startups, & community groups to fuel innovation in CA, bring diverse perspectives to bear on AI development, & secure our continued dominance in AI.

  3. prevent price discrimination & anticompetitive behavior

  4. institute know-your-customer requirements

  5. protect whistleblowers at large AI companies

@geoffreyhinton called SB 1047 “a very sensible approach” to balancing these needs. Leaders representing a broad swathe of the AI community have expressed support.

People are rightfully concerned that the immense power of AI models could present serious risks. For these models to succeed the way we need them to, users must trust that AI models are safe and aligned w/ core values. Fulfilling basic safety duties is a good place to start.

With AI, we have the opportunity to apply the hard lessons learned over the past two decades. Allowing social media to grow unchecked without first understanding the risks has had disastrous consequences, and we should take reasonable precautions this time around.

As usual, RTFC (Read the Card, or here the bill) applies.

Section 1 names the bill.

Section 2 says California is winning in AI (see this song), AI has great potential but could do harm. A missed opportunity to mention existential risks.

Section 3 22602 offers definitions. I have some notes.

  1. Usual concerns with the broad definition of AI.

  2. Odd that ‘a model autonomously engaging in a sustained sequence of unsafe behavior’ only counts as an ‘AI safety incident’ if it is not ‘at the request of a user.’ If a user requests that, aren’t you supposed to ensure the model doesn’t do it? Sounds to me like a safety incident.

  3. Covered model is defined primarily via compute; I am not sure why this isn’t a ‘foundation’ model. I like the secondary extension clause: “The artificial intelligence model was trained using a quantity of computing power greater than 10^26 integer or floating-point operations in 2024, or a model that could reasonably be expected to have similar performance on benchmarks commonly used to quantify the performance of state-of-the-art foundation models, as determined by industry best practices and relevant standard setting organizations OR The artificial intelligence model has capability below the relevant threshold on a specific benchmark but is of otherwise similar general capability.”

  4. Critical harm is either mass casualties or $500 million in damage, or comparable.

  5. Full shutdown means full shutdown but only within your possession and control. So when we really need a full shutdown, this definition won’t work. The whole point of a shutdown is that it happens everywhere whether you control it or not.

  6. Open-source artificial intelligence model is defined to only include models that ‘may be freely modified and redistributed’ so that raises the question of whether that is legal or practical. Such definitions need to be practical, if I can do it illegally but can clearly still do it, that needs to count.

  7. Definition (s): [“Positive safety determination” means a determination, pursuant to subdivision (a) or (c) of Section 22603, with respect to a covered model that is not a derivative model that a developer can reasonably exclude the possibility that a covered model has a hazardous capability or may come close to possessing a hazardous capability when accounting for a reasonable margin for safety and the possibility of posttraining modifications.]

    1. Very happy to see the mention of post-training modifications, which is later noted to include access to tools and data, so scaffolding explicitly counts.

Section 3 22603 (a) says that before you train a new non-derivative model, you need to determine whether you can make a positive safety determination.

I like that this happens before you start training. But of course, this raises the question of how you can know in advance how it will score on the benchmarks.

One thing I worry about is the concept that if you score below another model on various benchmarks, this counts as a positive safety determination. There are at least four obvious failure modes for this.

  1. The developer might choose to sabotage performance against the benchmarks, either by excluding relevant data and training, or otherwise. Or, alternatively, a previous developer might have gamed the benchmarks, which happens all the time, such that all you have to do to score lower is to not game those benchmarks yourself.

  2. The model might have situational awareness, and choose to get a lower score. This could be various degrees of intentional on the part of the developers.

  3. The model might not adhere to your predictions or scaling laws. So perhaps you say it will score lower on benchmarks, but who is to say you are right?

  4. The benchmarks might simply not be good at measuring what we care about.

Similarly, it is good to make a safety determination before beginning training, but also if the model is worth training then you likely cannot actually know its safety in advance, especially since this is not only existential safety.

Section 3 22603 (b) covers what you must do if you cannot make the positive safety determination. Here are the main provisions:

  1. You must prevent unauthorized access.

  2. You must be capable of a full shutdown.

  3. You must implement all covered guidance. Okie dokie.

  4. You must implement a written and separate safety and security protocol, that provides ‘reasonable assurance’ that it would ensure the model will have safeguards that prevent critical harms. This has to include clear tests that verify if you have succeeded.

  5. You must say how you are going to do all that, how you would change how you are doing it, and what would trigger a shutdown.

  6. Provide a copy of your protocol and keep it updated.

You can then make a ‘positive safety determination’ after training and testing, subject to the safety protocol.

Section (d) says that if your model is ‘not subject to a positive safety determination,’ in order to deploy it (you can still deploy it at all?!) you need to implement ‘reasonable safeguards and requirements’ that allow you to prevent harms and to trace any harms that happen. I worry this section is not taking such scenarios seriously. To not be subject to such a determination, the model needs to be breaking new ground in capabilities, and you were unable to assure that it wouldn’t be dangerous. So what are these ‘reasonable safeguards and requirements’ that would make deploying it acceptable? Perhaps I am misunderstanding here.

Section (g) says safety incidents must be reported.

Section (h) says if your positive safety determination is unreasonable it does not count, and that to be reasonable you need to consider any risk that has already been identified elsewhere.

Overall, this seems like a good start, but I worry it has loopholes, and I worry that it is not thinking about the future scenarios where the models are potentially existentially dangerous, or might exhibit unanticipated capabilities or situational awareness and so on. There is still the DC-style ‘anticipate and check specific harm’ approach throughout.

Section 22604 is about KYC: a large computing cluster has to collect customer information and check whether customers are trying to train a covered model.

Section 22605 requires sellers of inference or a computing cluster to provide a transparent, uniform, publicly available price schedule, banning price discrimination, and bans ‘unlawful discrimination or noncompetitive activity in determining price or access.’

I always wonder about laws that say ‘you cannot do things that are already illegal.’ I mean, I thought that was the whole point of them already being illegal.

I am not sure to what extent this rule has an impact in practice, and whether it effectively means that anyone selling such services has to be a kind of common carrier unable to pick who gets its limited services, and unable to make deals of any kind. I see the appeal, but also I see clear economic downsides to forcing this.

Section 22606 covers penalties. The fines are relatively limited in scope, the main relief is injunction against and possible deletion of the model. I worry in practice that there is not enough teeth here.

Section 22607 covers whistleblower protections. Odd that this is necessary; one would think there would be such protections universally by now? There are no unexpectedly strong provisions here, only the normal stuff.

Section 4 11547.6 tasks the new Frontier Model Division with its official business, including collecting reports and issuing guidance.

Section 5 11547.7 is for the CalCompute public cloud computing cluster. This seems like a terrible idea; there is no reason for public involvement here, and there is no stated or allocated budget. Assuming it is small, it does not much matter.

Sections 6-9 are standard boilerplate disclaimers and rules.

What should we think about all that?

It seems like a good faith effort to put forward a helpful bill. It has a lot of good ideas in it. I believe it would be net helpful. In particular, it is structured such that if your model is not near the frontier, your burden here is very small.

My worry is that this has potential loopholes in various places, and does not yet strongly address the nature of the future more existential threats. If you want to ignore this law, you probably can.

But it seems like a good beginning, especially on dealing with relatively mundane but still potentially catastrophic threats, without imposing an undue burden on developers. This could then be built upon.

Ah, Tyler Cowen has a link on this and it’s… California’s Effort to Strangle AI.

Because of course it is. We do this every time. People keep saying ‘this law will ban satire’ or spreadsheets or pictures of cute puppies or whatever, based on what on its best day would be a maximalist anti-realist reading of the proposal, if it were enacted straight with no changes and everyone actually enforced it to the letter.

Dean Ball: This week, California’s legislature introduced SB 1047: The Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act. The bill, introduced by State Senator Scott Wiener (liked by many, myself included, for his pro-housing stance), would create a sweeping regulatory regime for AI, apply the precautionary principle to all AI development, and effectively outlaw all new open source AI models—possibly throughout the United States.

This is a line pulled out whenever anyone proposes that AI be governed by any regulatory regime whatsoever even with zero teeth of any kind. When someone says that someone, somewhere might be legally required to write an email.

At least one of myself and Dean Ball is extremely mistaken about what this bill says.

The definition of covered model seems to me to be clearly intended to apply only to models that are effectively at the frontier of model capabilities.

Let’s look again at the exact definition:

(1) The artificial intelligence model was trained using a quantity of computing power greater than 10^26 integer or floating-point operations in 2024, or a model that could reasonably be expected to have similar performance on benchmarks commonly used to quantify the performance of state-of-the-art foundation models, as determined by industry best practices and relevant standard setting organizations.

(2) The artificial intelligence model has capability below the relevant threshold on a specific benchmark but is of otherwise similar general capability.

That seems clear as day on what it means, and what it means is this:

  1. If your model is over 10^26 we assume it counts.

  2. If it isn’t, but it is as good as state-of-the-art current models, it counts.

  3. Being ‘as good as’ is a general capability thing, not hitting specific benchmarks.
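
To make that reading concrete, here is a minimal sketch in Python of how I interpret the coverage test. It is an interpretation, not language from the bill; the function name and boolean inputs are my own illustrative stand-ins, and only the 10^26 figure comes from the text.

```python
# Minimal sketch of my reading of the "covered model" definition.
# Only the 10^26 compute threshold comes from the bill's text; everything
# else (names, inputs) is an illustrative assumption.

COMPUTE_THRESHOLD_FLOP = 1e26  # the 2024 training-compute trigger


def is_covered_model(training_flop: float,
                     matches_frontier_benchmarks: bool,
                     similar_general_capability: bool) -> bool:
    """Return True if a model would count as 'covered' under this reading.

    matches_frontier_benchmarks: could it reasonably be expected to perform
        like state-of-the-art foundation models on common benchmarks?
    similar_general_capability: is it of similar general capability even if
        it scores below the threshold on some specific benchmark?
    """
    if training_flop > COMPUTE_THRESHOLD_FLOP:
        return True  # (1) over the raw compute threshold
    if matches_frontier_benchmarks:
        return True  # (2) benchmark-equivalent to state of the art
    if similar_general_capability:
        return True  # (3) comparable general capability despite a low benchmark
    return False


# Example: a model trained with 5e24 FLOP that is nowhere near frontier
# performance would not be covered under this reading.
print(is_covered_model(5e24, False, False))  # -> False
```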

Under this definition, if no one was actively gaming benchmarks, at most three existing models would plausibly qualify for this definition: GPT-4, Gemini Ultra and Claude. I am not even sure about Claude.

If the open source models are gaming the benchmarks so much that they end up looking like a handful of them are matching GPT-4 on benchmarks, then what can I say, maybe stop gaming the benchmarks?

Or point out quite reasonably that the real benchmark is user preference, and in those terms, you suck, so it is fine. Either way.

Ball continues: But notice that this isn’t what the bill does. The bill applies to large models and to any models that reach the same performance regardless of the compute budget required to make them. This means that the bill applies to startups as well as large corporations.

Um, no, because the open model weights models do not remotely reach the performance level of OpenAI?

Maybe some will in the future.

But this very clearly does not ‘ban all open source.’ There are zero existing open model weights models that this bans.

There are a handful of companies that might plausibly have to worry about this in the future, if OpenAI doesn’t release GPT-5 for a while, but we’re talking Mistral and Meta, not small start-ups. And we’re talking about them exactly because they would be trying to fully play with the big boys in that scenario.

Ball is also wrong about the precautionary principle being imposed before training.

I do not see any such rule here. What I see is that if you cannot show that your model will definitely be safe before training, then you have to wait until after the training run to certify that it is safe.

In other words, this is an escape clause. Are we seriously objecting to that?

Then, if you also can’t certify that it is safe after the training run, then we talk precautions. But no one is saying you cannot train, unless I am missing something?

As usual, people such as Ball are imagining a standard of ‘my product could never be used to do harm’ that no one is trying to apply here in any way. That is why any model not at the frontier can automatically get a positive safety determination, which flies in the face of this theory. Then, if you are at the frontier, you have to obey industry standard safety procedures and let California know what procedures you are following. Woe is you. And of course, the moment someone else has a substantially better model, guess who is now positively safe?

The ‘covered guidance’ that Ball claims to be alarmed about does not mean ‘do everything any safety organization says and if they are contradictory you are banned.’ The law does not work that way. Here is what it actually says:

(e) “Covered guidance” means any of the following:

(1) Applicable guidance issued by the National Institute of Standards and Technology and by the Frontier Model Division.

(2) Industry best practices, including relevant safety practices, precautions, or testing procedures undertaken by developers of comparable models, and any safety standards or best practices commonly or generally recognized by relevant experts in academia or the nonprofit sector.

(3) Applicable safety-enhancing standards set by standards setting organizations.

So what that means is, we will base our standards off an extension of NIST’s, and also we expect you to be liable to implement anything that is considered ‘industry best practice’ even if we did not include it in the requirements. But obviously it’s not going to be best practices if it is illegal. Then we have the third rule, which only counts ‘applicable’ standards. California will review them and decide what is applicable, so that is saying they will use outside help.

Also, note the term ‘non-derivative’ when talking about all the models. If you are a derivative model, then you are fine by default. And almost all models with open weights are derivative models, because of course that is the point, distillation and refinement rather than starting over all the time.

So here’s what the law would actually do, as far as I can tell:

  1. If your model is not projected to be at the state-of-the-art level, and it is not over the 10^26 limit that no one has hit yet and that no one except the big three is anywhere near, this law has only trivial impact upon you: a trivial amount of paperwork. Every other business in America, and especially in the state of California, is jealous.

  2. If your model is a derivative of an existing model, you’re fine, that’s it.

  3. If your model you want to train is projected to be state of the art, but you can show it is safe before you even train it, good job, you’re golden.

  4. If your model is projected to be state of the art, and can’t show it is safe before training it, you can still train it as long as you don’t release it and you make sure it isn’t stolen or released by others. Then if you show it is safe or show it is not state of the art, you’re golden again.

  5. If your model is state of the art, and you train it and still don’t know if it is ‘safe,’ and by safe we do not mean ‘no one ever does anything wrong’ we mean things more like ‘no one ever causes 500 million dollars in damages or mass casualties,’ then you have to implement a series of safety protocols (regulatory requirements) to be determined by California, and you have to tell them what you are doing to ensure safety.

  6. You have to have abilities like ‘shut down AIs running on computers under my control’ and ‘plausibly prevent unauthorized people from accessing the model if they are not supposed to.’ This does not even apply to copies of the program you no longer control. Is that going to be a problem?

  7. You also have to report any ‘safety incidents’ that happen.

  8. Also some ‘pro-innovation’ stuff of unknown size and importance.
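
To summarize that ladder, here is a rough sketch in Python of the decision flow as I read it. The labels and function are my own shorthand for the points above, not the bill’s text, and they gloss over plenty of detail.

```python
# Rough sketch of the obligations ladder described above, using my own
# shorthand labels. An interpretation of the bill, not its text.

def obligations(is_derivative: bool,
                projected_frontier_or_over_threshold: bool,
                safe_before_training: bool,
                safe_after_training: bool) -> str:
    if is_derivative:
        return "fine by default"                                   # point 2
    if not projected_frontier_or_over_threshold:
        return "trivial paperwork only"                            # point 1
    if safe_before_training:
        return "positive safety determination up front; proceed"   # point 3
    if safe_after_training:
        return "train without release, secure the weights, then proceed"  # point 4
    # point 5: still cannot show safety after training
    return ("implement safety protocols, maintain shutdown capability, "
            "and report safety incidents")                         # points 5-7


# Example: a non-derivative frontier-scale model whose safety can only be
# shown after training lands on point 4.
print(obligations(False, True, False, True))
```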

Not only does SB 1047 not attempt to ‘strangle AI,’ not only does it not attempt regulatory capture or target startups, it would do essentially nothing to anyone but a handful of companies unless they have active safety incidents. If there are active safety incidents, then we get to know about them, which could introduce liability concerns or publicity concerns, and that seems like the main downside? That people might learn about your failures and existing laws might sometimes apply?

The arguments against such rules often come from the implicit assumption that we enforce our laws as written, reliably and without discretion. Which we don’t. What would happen if, as Eliezer recently joked, the law actually worked the way critics of such regulations claim that it does? If every law were strictly enforced as written, with no common sense used, as they warn will happen? And somehow our courts could handle the caseloads involved? Everyone would be in jail within the week.

When people see proposals for treating AI slightly more like anything else, and subjecting it to remarkably ordinary regulation, with an explicit and deliberate effort to only target frontier models that are exclusively fully closed, and they say that this ‘bans open source’ what are they talking about?

They are saying that Open Model Weights Are Unsafe and Nothing Can Fix This, and we want to do things that are patently and obviously unsafe, so asking any form of ‘is this safe?’ and having an issue with the answer being ‘no’ is a ban on open model weights. Or, alternatively, they are saying that their business model and distribution plans are utterly incompatible with complying with any rules whatsoever, so we should never pass any, or they should be exempt from any rules.

The idea that this would “spell the end of America’s leadership in AI” is laughable. If you think America’s technology industry cannot stand a whiff of regulation, I mean, do they know anything about America or California? And have they seen the other guy? Have they seen American innovation across the board, almost entirely in places with rules orders of magnitude more stringent? This here is so insanely nothing.

But then, when did such critics let that stop them? It’s the same rhetoric every time, no matter what. And some people seem willing to amplify such voices, without asking whether their words make sense.

What would happen if there was actually a wolf?

Humans are living longer than ever no matter where they come from 

Live long and prosper? —

Disease outbreaks and human conflicts help dictate regional differences in longevity.

An older person drinking coffee in an urban environment.

Most of us want to stay on this planet as long as possible. While there are still differences depending on sex and region, we are now living longer as a species—and it seems life spans will only continue to grow longer.

Researcher David Atance of Universidad de Alcalá, Spain, and his team gathered data on the trends of the past. They then used their findings to project what we can expect to see in the future. Some groups have had it harder than others because of factors such as war, poverty, natural disasters, or disease, but the researchers found that mortality and longevity trends are becoming more similar regardless of disparities between sexes and locations.

“The male-female gap is decreasing among the [clusters],” they said in a study recently published in PLOS One.

Remembering the past

The research team used specific mortality indicators—such as life expectancy at birth and most common age at death—to identify five global clusters that reflect the average life expectancy in different parts of the world. The countries in these clusters changed slightly from 1990 to 2010 and are projected to change further by 2030 (though 2030 projections are obviously tentative). Data for both males and females was considered when deciding which countries belonged in which cluster during each period. Sometimes, one sex thrived while the other struggled within a cluster—or even within the same country.

Clusters that included mostly wealthier countries had the best chance at longevity in 1990 and 2010. Low-income countries predictably had the worst mortality rate. In 1990, these countries, many of which are in Africa, suffered from war, political upheaval, and the lethal spread of HIV/AIDS. Rwanda endured a bloody civil war during this period. Around the same time, Uganda had tensions with Rwanda, as well as Sudan and Zaire. In the Middle East, the Gulf War and its aftermath inevitably affected 1990 male and female populations.

Along with a weak health care system, the factors that gave most African countries a high mortality rate were still just as problematic in 2010. In all clusters, male life spans tended to differ slightly less between countries than female life spans. However, in some regions, there were differences between how long males lived compared to females. Mortality significantly increased in 1990 male populations from former Soviet countries after the dissolution of the Soviet Union, and this trend continued in 2010. Deaths in those countries were attributed to violence, accidents, cardiovascular disease, alcohol, an inadequate healthcare system, poverty, and psychosocial stress.

Glimpsing the future

2030 predictions must be taken with caution. Though past trends can be good indicators of what is to come, trends do not always continue. While things may change between now and 2030 (and those changes could be drastic), these estimates project what would happen if past and current trends continue into the relatively near future.

Some countries might be worse off in 2030. The lowest-income, highest-mortality cluster will include several African countries that have been hit hard with wars as well as political and socioeconomic challenges. The second low-income, high-mortality cluster, also with mostly African countries, will now add some Eastern European and Asian countries that suffer from political and socioeconomic issues; most have recently been involved in conflicts and wars, or still are, such as Ukraine.

The highest-income, lowest-mortality cluster will gain some countries. These include Chile, which has made strides in development that are helping people live longer.

Former Soviet countries will probably continue to face the same issues they did in 1990 and 2010. They fall into one of the middle-income, mid-longevity clusters and will most likely be joined by some Latin American countries that were once in a higher bracket but presently face high levels of homicide, suicide, and accidents among middle-aged males. Meanwhile, there are some other countries in Latin America that the research team foresees as moving toward a higher income and lower mortality rate.

Appearances can be deceiving

The study places the US in the first or second high-income, low-mortality bracket, depending on the timeline. This could make it look like it is doing well on a global scale. While the study doesn’t look at the US specifically, there are certain local issues that say otherwise.

A 2022 study by the Centers for Disease Control and Prevention suggests that pregnancy and maternal care in the US is abysmal, with a surprisingly high (and still worsening) maternal death rate of about 33 deaths per 100,000 live births. This is more than double what it was two decades ago. In states like Texas, which banned abortion after the overturn of Roe v. Wade, infant deaths have also spiked. The US also has the most expensive health care system among high-income countries, which was only worsened by the pandemic.

The CDC also reports that life expectancy in the US keeps plummeting. Cancer, heart disease, stroke, drug overdose, and accidents are the culprits, especially in middle-aged Americans. There has also been an increase in gun violence and suicides. Guns have become the No. 1 killer of children and teens, a grim distinction that previously belonged to car accidents.

Whether the US will stay in that top longevity bracket is also unclear, especially if maternal death rates keep rising and there aren’t significant improvements made to the health care system. There and elsewhere, there’s no way of telling what will actually happen between now and 2030, but Atance and his team want to revisit their study then and compare their estimates to actual data. The team is also planning to further analyze the factors that contribute to longevity and mortality, as well as conduct surveys that could support their predictions. We will hopefully live to see the results.

PLOS One, 2024. DOI:  10.1371/journal.pone.0295842

Hermit crabs find new homes in plastic waste: Shell shortage or clever choice?

ocean real estate bargains —

The crustaceans are making the most of what they find on the seafloor.

Scientists have found that hermit crabs are increasingly using plastic and other litter as makeshift shell homes.

Land hermit crabs have been using bottle tops, parts of old light bulbs, and broken glass bottles instead of shells.

A new study by Polish researchers examined 386 images of hermit crabs occupying these artificial shells. The photos had been uploaded by users to online platforms and were then analyzed using a research approach known as iEcology. Of the 386 photos, the vast majority (326 cases) featured hermit crabs using plastic items as shelters.

At first glance, this is a striking example of how human activities can alter the behavior of wild animals and potentially the ways that populations and ecosystems function as a result. But there are lots of factors at play and, while it’s easy to jump to conclusions, it’s important to consider exactly what might be driving this particular change.

Shell selection

Hermit crabs are an excellent model organism to study because they behave in many different ways and those differences can be easily measured. Instead of continuously growing their own shell to protect their body, like a normal crab or a lobster would, they use empty shells left behind by dead snails. As they walk around, the shell protects their soft abdomen but whenever they are threatened they retract their whole body into the shell. Their shells act as portable shelters.

Having a good enough shell is critical to an individual’s survival so they acquire and upgrade their shells as they grow. They fight other hermit crabs for shells and assess any new shells that they might find for suitability. Primarily, they look for shells that are large enough to protect them, but their decision-making also takes into account the type of snail shell, its condition and even its color—a factor that could impact how conspicuous the crab might be.

Another factor that constrains shell choice is the actual availability of suitable shells. For some as yet unknown reason, a proportion of land hermit crabs are choosing to occupy plastic items rather than natural shells, as highlighted by this latest study.

Housing crisis or ingenious new move?

Humans have intentionally changed the behavior of animals for millennia through the process of domestication. Any unintended behavioral changes in natural animal populations are potentially concerning, but how worried should we be about hermit crabs using plastic litter as shelter?

The Polish research raises a number of questions. First, how prevalent is the adoption of plastic litter instead of shells? While 326 crabs using plastic seems like a lot, this is likely to be an underestimation of the raw number given that users are likely to encounter crabs only in accessible parts of the populations. Conversely, it seems probable that users could be biased towards uploading striking or unusual images, so the iEcology approach might produce an exaggerated impression of the proportion of individuals in a population opting for plastic over natural shells. We need structured field surveys to clarify this.

Second, why are some individual crabs using plastic? One possibility is that they are forced to due to a lack of natural shells, but we can’t test this hypothesis without more information on the demographics of local snail populations. Or perhaps the crabs prefer plastic or find it easier to locate, compared with real shells? As the authors point out, plastic might be lighter than equivalent shells, affording the same amount of protection at a lower energy cost of carrying it. Intriguingly, chemicals that leach out of plastic are known to attract marine hermit crabs by mimicking the odor of food.

As hermit crabs adapt to an increase in plastic pollution, more research is needed to investigate the nuances.

This leads to a third question about the possible downsides of using plastic. Compared to real shells, plastic waste tends to be brighter and might contrast more with the background, making the crabs more vulnerable to predators. Additionally, we know that exposure to microplastics and compounds that leach from plastic can change the behavior of hermit crabs, making them less fussy about the shells that they choose, less adept at fighting for shells, and even changing their personalities by making them more prone to take risks. To answer these questions about the causes and consequences of hermit crabs using plastic waste in this way, we need to investigate their shell selection behavior through a series of laboratory experiments.

Pollution changes behavior

Plastic pollution is just one of the ways we are changing our environment. It’s by far the most highly reported form of debris that we have introduced to marine environments. But animal behavior is affected by other forms of pollution too, including microplastics, pharmaceuticals, light, and noise, plus the rising temperatures and ocean acidification caused by climate change.

So while investigating the use of plastic waste by hermit crabs could help us better understand the consequences of certain human impacts on the environment, it doesn’t show how exactly animals will adjust to the Anthropocene, the era during which human activity has been having a significant impact on the planet. Will they cope by using plastic behavioral responses or evolve across generations, or perhaps both? In my view, the iEcology approach cannot answer questions like this. Rather, this study acts as an alarm bell highlighting potential changes that now need to be fully investigated.

Mark Briffa, Professor of Animal Behaviour, University of Plymouth. This article is republished from The Conversation under a Creative Commons license. Read the original article.

The 2024 Rolex 24 at Daytona put on very close racing for a record crowd

actually 23 hours and 58 minutes this time —

The around-the-clock race marked the start of the North American racing calendar.

The current crop of GTP hybrid prototypes looks wonderful, thanks to rules that cap the amount of downforce they can generate in favor of more dramatic styling.

Porsche Motorsport

DAYTONA BEACH, Fla.—Near-summer temperatures greeted a record crowd at the Daytona International Speedway in Florida last weekend. At the end of each January, the track hosts the Rolex 24, an around-the-clock endurance race that’s now as high-profile as it has ever been during the event’s 62-year history.

Between the packed crowd and the 59-car grid, there’s proof that sports car racing is in good shape. Some of that might be attributable to Drive to Survive‘s rising tide lifting a bunch of non-F1 boats, but there’s more to the story than just a resurgent interest in motorsport. The dramatic-looking GTP prototypes have a lot to do with it—powerful hybrid racing cars from Acura, BMW, Cadillac, and Porsche are bringing in the fans and, in some cases, some pretty famous drivers with F1 or IndyCar wins on their resumes.

But IMSA and the Rolex 24 are about more than just the top class of cars; in addition to the GTP hybrids, the field also comprised the very competitive pro-am LMP2 prototype class and a pair of classes (one for professional teams, another for pro-ams) for production-based machines built to a global set of rules, called GT3. (To be slightly confusing, in IMSA, those classes are known as GTD-Pro and GTD. More on sports car racing being needlessly confusing later.)

The crowd for the 2024 Rolex 24 was even larger than last year’s. This is the pre-race grid walk, which I chose to watch from afar.

Jonathan Gitlin

There was even a Hollywood megastar in attendance, as the Jerry Bruckheimer-produced, Joseph Kosinski-directed racing movie starring Brad Pitt was at the track filming scenes for its opening.

GTP finds its groove

Last year’s Rolex 24 was the debut of the new GTP cars, and they didn’t have an entirely trouble-free race. These cars are some of the most complicated sports prototypes ever to turn a wheel thanks to their hybrid systems, and during the 2023 race, two of the entrants required lengthy stops to replace their hybrid batteries. Those teething troubles are a thing of the past, and over the last 12 months, the cars have found an awful lot more speed, with most of the 10-car class breaking Daytona’s lap record during qualifying.

Most of that new speed has come from the teams’ familiarity with the cars after a season of racing but also from a year of software development. Only Porsche’s 963 has had any mechanical upgrades during the off-season. “You… will not notice anything on the outside shell of the car,” explained Urs Kuratle, Porsche Motorsport’s director of factory racing. “So the aerodynamics, all [those] things, they look the same… Sometimes it’s a material change, where a fitting used to be out of aluminum and due to reliability reasons we change to steel or things like this. There are minor details like this.”

  • This year, the Wayne Taylor Racing team had not one but two ARX-06s. I expected the cars to be front-runners, but a late BoP change added another 40 kg.

  • The Cadillacs are fan favorites because of their loud, naturally aspirated V8s. I think the car looks better than the other GTP cars, too.

  • Porsche’s 963 is the only GTP car that has had any changes since last year, but they’re all under the bodywork.

  • Porsche is the only manufacturer to start selling customer GTP cars so far. The one on the left is the Proton Competition Mustang Sampling car; the one on the right belongs to JDC-Miller MotorSports.

Photos: Jonathan Gitlin

GTP cars aren’t as fast or even as powerful as an F1 single-seater, but the driver workload from inside the cockpit may be even higher. At last year’s season-ending Petit Le Mans, former F1 champion Jenson Button—then making a guest appearance in the privateer-run JDC Miller Motorsport Porsche 963—came away with a newfound respect for how many different systems could be tweaked from the steering wheel.

Tesla’s week gets worse: Fines, safety investigation, and massive recall

toxic waste, seriously? —

There have been 2,388 complaints about steering failure in the Model 3 and Model Y.

More than 2,000 Tesla model-year 2023 Model Y and Model 3s have suffered steering failure, according to a new NHTSA safety defect investigation.

Sjoerd van der Wal/Getty Images

It’s been a rough week for Tesla. On Tuesday, a court in Delaware voided a massive $55.8 billion pay package for CEO Elon Musk. Then, news emerged that Tesla was being sued by 25 different counties in California for years of dumping toxic waste. That was followed by a recall affecting 2.2 million Teslas. Now, Ars has learned that the National Highway Traffic Safety Administration’s Office of Defects Investigation is investigating the company after 2,388 complaints of steering failure affecting the model-year 2023 Model 3 sedan and Model Y crossover.

Paint, brake fluid, used batteries, antifreeze, diesel

Tesla has repeatedly run afoul of laws designed to protect the environment from industrial waste. In 2019, author Edward Niedermeyer cataloged the troubles the company ran into with air pollution from its paint shop in Fremont, California, some of which occurred when the automaker took to painting its cars in a temporary tent-like marquee.

In 2022, the US Environmental Protection Agency fined Tesla $275,000 for violating the Clean Air Act, which followed a $31,000 penalty Tesla paid to the EPA in 2019. But EPA data shows that Tesla continued to violate the Clean Air Act in 2023.

And on Wednesday, Reuters reported that 25 Californian counties sued Tesla for violating the state’s hazardous waste laws and unfair business laws by improperly labeling hazardous waste before sending it to landfills that were not able to deal with the material.

The suit alleged that violations occurred at more than 100 facilities, including the factory in Fremont, and that Tesla disposed of hazardous materials including “but not limited to: lubricating oils, brake fluids, lead acid batteries, aerosols, antifreeze, cleaning fluids, propane, paint, acetone, liquified petroleum gas, adhesives and diesel fuel.”

Despite potentially large penalties for these industrial waste violations, which could have resulted in tens of thousands of dollars of fines for each day the automaker was not compliant, the counties and Tesla swiftly settled the suit on Thursday. Tesla, which had annual revenues of $96.8 billion in 2023, will pay just $1.3 million in civil penalties and an additional $200,000 in costs. The company is supposed to properly train its employees and hire a third party to conduct annual waste audits at 10 percent of its facilities, according to the Office of the District Attorney in San Francisco.

“While electric vehicles may benefit the environment, the manufacturing and servicing of these vehicles still generates many harmful waste streams,” said District Attorney Brooke Jenkins. “Today’s settlement against Tesla, Inc. serves to provide a cleaner environment for citizens throughout the state by preventing the contamination of our precious natural resources when hazardous waste is mismanaged and unlawfully disposed. We are proud to work with our district attorney partners to enforce California’s environmental laws to ensure these hazardous wastes are handled properly.”

An easy recall, a not-so-easy defect investigation

Tesla’s latest recall is a big one, affecting 2,193,869 vehicles—nearly every Tesla sold in the US, including the Model S (model years 2012–2023), the Model X (model years 2016–2024), the Model 3 (model years 2014–2023), the Model Y (model years 2019–2024) and the Cybertruck.

According to the official Part 573 Safety Notice, the issue is due to the cars’ displays, which use a font for the brake, park, and antilock brake warning indicators that is smaller than is legally required under the federal motor vehicle safety standards. NHTSA says it noticed the problem as part of a routine compliance audit on a Model Y in early January. After the agency informed the automaker, Tesla looked into the issue itself, and on January 24, it decided to issue a safety recall. Fortunately for the automaker, it can fix this problem with a software update.

A software patch is unlikely to help its other safety defect problem, however. Yesterday, NHTSA’s ODI upgraded a preliminary evaluation (begun in July 2023) to a full investigation of the steering components fitted to model-year 2023 Models 3 and Y.

NHTSA’s ODI says the problem affects up to 334,569 vehicles, which could suffer a loss of steering control. There have been 124 complaints of steering failure to NHTSA, and the agency says Tesla identified a further 2,264 customer complaints related to the problem. So far, at least one Tesla has crashed as a result of being unable to complete a right turn in an intersection.

A third of the complaints were reported to have happened at speeds below 5 mph, with the majority occurring between 5 and 35 mph and about 10 percent occurring above 35 mph (at least one complaint alleges the problem occurred at 75 mph). “A majority of allegations reported seeing a warning message, ‘Steering assist reduced,’ either before, during, or after the loss of steering control. A portion of drivers described their steering begin to feel ‘notchy’ or ‘clicky’ either prior to or just after the incident,” NHTSA’s investigation said.

NHTSA says there have been “multiple allegations of drivers blocking intersections and/or roadways,” and that more than 50 Teslas had to be towed as a result of the problem. The problem appears to be related to two of the four steering rack part numbers that Tesla used for these model-year 2023 EVs. They were installed in 2,187 of the vehicles, according to the complaints.

Daily Telescope: A Wolf-Rayet star puts on a howling light show

Hungry like the wolf —

I’d like to see it go boom.

The Crescent Nebula.

1Zach1

Welcome to the Daily Telescope. There is a little too much darkness in this world and not enough light, a little too much pseudoscience and not enough science. We’ll let other publications offer you a daily horoscope. At Ars Technica, we’re going to take a different route, finding inspiration from very real images of a universe that is filled with stars and wonder.

Good morning. It’s February 2, and today’s image concerns an emission nebula about 5,000 light-years away in the Cygnus constellation.

Discovered more than 230 years ago by William Herschel, the Crescent Nebula is thought by astronomers to be formed by an energetic stellar wind from the Wolf-Rayet star at its core colliding with slower-moving material ejected earlier in the star’s lifetime. Ultimately, this should all go supernova, which will be quite spectacular.

Will you or I be alive to see it? Probably not.

But in the meantime, we can enjoy the nebula for what it is. This photo was captured by Ars reader 1Zach1 with an Astro-Tech AT80ED Refractor telescope. It was the product of 11 hours of integration, or 228 exposures each lasting three minutes. It was taken in rural southwestern Washington.

Have a great weekend, everyone.

Source: 1Zach1

Do you want to submit a photo for the Daily Telescope? Reach out and say hello.

Why interstellar objects like ‘Oumuamua and Borisov may hold clues to exoplanets

celestial nomads —

Two celestial interlopers in the Solar System have scientists eagerly anticipating more.

The first interstellar interloper detected passing through the Solar System, 1I/‘Oumuamua, came within 24 million miles of the Sun in 2017. It’s difficult to know exactly what ‘Oumuamua looked like, but it was probably oddly shaped and elongated, as depicted in this illustration.

On October 17 and 18, 2017, an unusual object sped across the field of view of a large telescope perched near the summit of a volcano on the Hawaiian island of Maui. The Pan-STARRS1 telescope was designed to survey the sky for transient events, like asteroid or comet flybys. But this was different: The object was not gravitationally bound to the Sun or to any other celestial body. It had arrived from somewhere else.

The mysterious object was the first visitor from interstellar space observed passing through the Solar System. Astronomers named it 1I/‘Oumuamua, borrowing a Hawaiian word that roughly translates to “messenger from afar arriving first.” Two years later, in August 2019, amateur astronomer Gennadiy Borisov discovered the only other known interstellar interloper, now called 2I/Borisov, using a self-built telescope at the MARGO observatory in Nauchnij, Crimea.

While typical asteroids and comets in the Solar System orbit the Sun, ‘Oumuamua and Borisov are celestial nomads, spending most of their time wandering interstellar space. The existence of such interlopers in the Solar System had been hypothesized, but scientists expected them to be rare. “I never thought we would see one,” says astrophysicist Susanne Pfalzner of the Jülich Supercomputing Center in Germany. At least not in her lifetime.

With these two discoveries, scientists now suspect that interstellar interlopers are much more common. Right now, within the orbit of Neptune alone, there could be around 10,000 ‘Oumuamua-size interstellar objects, estimates planetary scientist David Jewitt of UCLA, coauthor of an overview of the current understanding of interstellar interlopers in the 2023 Annual Review of Astronomy and Astrophysics.
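
As a rough back-of-the-envelope check of what that estimate implies (my own arithmetic, not a figure from the review), 10,000 objects spread through a sphere the size of Neptune's orbit works out to roughly 0.1 object per cubic AU:

    import math

    # Rough check: implied spatial density if ~10,000 'Oumuamua-size objects
    # sit inside a sphere the size of Neptune's orbit (radius ~30 AU).
    n_objects = 10_000
    radius_au = 30.0

    volume_au3 = (4.0 / 3.0) * math.pi * radius_au**3   # about 113,000 cubic AU
    density = n_objects / volume_au3                    # about 0.09 per cubic AU

    print(f"Volume inside Neptune's orbit: {volume_au3:,.0f} cubic AU")
    print(f"Implied density: {density:.2f} interstellar objects per cubic AU")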

Researchers are busy trying to answer basic questions about these alien objects, including where they come from and how they end up wandering the galaxy. Interlopers could also provide a new way to probe features of distant planetary systems.

But first, astronomers need to find more of them.

“We’re a little behind at the moment,” Jewitt says. “But we expect to see more.”

2I/Borisov appears as a fuzzy blue dot in front of a distant spiral galaxy (left) in this November 2019 image taken by the Hubble Space Telescope when the object was approximately 200 million miles from Earth.

Alien origins

At least since the beginning of the 18th century, astronomers have considered the possibility that interstellar objects exist. More recently, computer models have shown that the Solar System sent its own population of smaller bodies into the voids of interstellar space long ago due to gravitational interactions with the giant planets.

Scientists expected most interlopers to be exocomets composed of icy materials. Borisov fit this profile: It had a tail made of gases and dust created by ices that evaporated during its close passage to the Sun. This suggests that it originated in the outer region of a planetary system where temperatures were cold enough for gases like carbon monoxide to have frozen into its rocks. At some point, something tossed Borisov, roughly a kilometer across, out of its system.

One potential culprit is a stellar flyby. The gravity of a passing star can eject smaller bodies, known as planetesimals, from the outer reaches of a system, according to a recent study led by Pfalzner. A giant planet could also eject an object from the outer regions of a planetary system if an asteroid or comet gets close enough for the planet’s gravitational tug to speed up the smaller body enough for it to escape its star’s hold. Close approaches can also happen when planets migrate across their planetary systems, as Neptune is thought to have done in the early Solar System.
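
To put the required kick in perspective, here is a minimal sketch (my own illustration using standard constants, not numbers from Pfalzner's study) of the Sun's escape speed at a few distances; an encounter only has to accelerate a planetesimal past this threshold for it to leave the system:

    import math

    # Escape speed from a star at distance r: v_esc = sqrt(2 * G * M / r).
    # A planetesimal nudged past this speed by a giant planet or a passing
    # star is no longer gravitationally bound.
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30   # mass of the Sun, kg
    AU = 1.496e11      # astronomical unit, m

    for r_au in (5, 30, 100):   # Jupiter-like, Neptune-like, and outer-disk distances
        v_esc = math.sqrt(2 * G * M_SUN / (r_au * AU))
        print(f"Escape speed from the Sun at {r_au} AU: {v_esc / 1000:.1f} km/s")

Out at tens of AU the bar is only a few kilometers per second, which is part of why a system's outer reaches are where ejections come easiest.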

Why interstellar objects like ‘Oumuamua and Borisov may hold clues to exoplanets Read More »

rocket-report:-spacex-at-the-service-of-a-rival;-endeavour-goes-vertical

Rocket Report: SpaceX at the service of a rival; Endeavour goes vertical

Stacked —

The US military appears interested in owning and operating its own fleet of Starships.

Space shuttle Endeavour, seen here in protective wrapping, was mounted on an external tank and inert solid rocket boosters at the California Science Center.

Welcome to Edition 6.29 of the Rocket Report! Right now, SpaceX’s Falcon 9 rocket is the only US launch vehicle offering crew or cargo service to the International Space Station. The previous version of Northrop Grumman’s Antares rocket retired last year, forcing that company to sign a contract with SpaceX to launch its Cygnus supply ships to the ISS. And we’re still waiting on United Launch Alliance’s Atlas V (no fault of ULA) to begin launching astronauts on Boeing’s Starliner crew capsule to the ISS. Basically, it’s SpaceX or bust. It’s a good thing that the Falcon 9 has proven to be the most reliable rocket in history.

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar.

Virgin Galactic flies four passengers to the edge of space. Virgin Galactic conducted its first suborbital mission of 2024 on January 26 as the company prepares to end flights of its current spaceplane, Space News reports. The flight, called Galactic 06 by Virgin Galactic, carried four customers for the first time, along with its two pilots, on a suborbital hop over New Mexico aboard the VSS Unity rocket plane. Previous commercial flights had three customers on board, along with a Virgin Galactic astronaut trainer. The customers, whom Virgin Galactic didn’t identify until after the flight, held US, Ukrainian, and Austrian citizenship.

Pending retirement … Virgin Galactic announced last year it would soon wind down flights of VSS Unity, citing the need to conserve its cash reserves for development of its next-generation Delta class of suborbital vehicles. Those future vehicles are intended to fly more frequently and at lower costs than Unity. After Galactic 06, Virgin Galactic said it will fly Unity again on Galactic 07 in the second quarter of the year with a researcher and private passengers. The company could fly Unity a final time later this year on the Galactic 08 mission. Since 2022, Virgin Galactic has been the only company offering commercial seats on suborbital spaceflights. The New Shepard rocket and spacecraft from competitor Blue Origin hasn’t flown people since a launch failure in September 2022. (submitted by Ken the Bin)

Iran launches second rocket in eight days. Iran launched a trio of small satellites into low-Earth orbit on January 28, Al Jazeera reports. This launch used Iran’s Simorgh rocket, which made its first successful flight into orbit after a series of failures dating back to 2017. The two-stage, liquid-fueled Simorgh rocket deployed three satellites. The largest of the group, named Mehda, was designed to measure the launch environments on the Simorgh rocket and test its ability to deliver multiple satellites into orbit. Two smaller satellites will test narrowband communication and geopositioning technology, according to Iran’s state media.

Back to back … This was a flight of redemption for the Simorgh rocket, which is managed by the civilian-run Iranian Space Agency. While the Simorgh design has repeatedly faltered, the Iranian military’s Islamic Revolutionary Guard Corps has launched two new orbital-class rockets in recent years. The military’s Qased launch vehicle delivered small satellites into orbit on three successful flights in 2020, 2022, and 2023. Then, on January 20, the military’s newest rocket, named the Qaem 100, put a small remote-sensing payload into orbit. Eight days later, the Iranian Space Agency finally achieved success with the Simorgh rocket. Previously, Iranian satellite launches have been spaced apart by at least several months. (submitted by Ken the Bin)

Rocket Lab’s first launch of 2024. Rocket Lab was back in action on January 31, kicking off its launch year with a recovery Electron mission from New Zealand. This was its second mission since a mishap late last year, Spaceflight Now reports. Rocket Lab’s Electron rocket released four Space Situational Awareness (SSA) satellites into orbit for Spire Global and NorthStar Earth & Space. Peter Beck, Rocket Lab’s founder and CEO, said in a statement that the company has more missions on the books for 2024 than in any year before. Last year, Rocket Lab launched 10 flights of its light-class Electron launcher.

Another recovery … Around 17 minutes after liftoff, the Electron’s first-stage booster splashed down in the Pacific Ocean under parachute. A recovery vessel was stationed nearby downrange from the launch base at Mahia Peninsula, located on the North Island of New Zealand. Rocket Lab has ambitions of re-flying a first stage booster in its entirety. Last August, it demonstrated partial reuse with the re-flight of a Rutherford engine salvaged from a booster recovered on a prior mission. (submitted by Ken the Bin)

PLD Space wins government backing. PLD Space has won the second and final round of a Spanish government call to develop sovereign launch capabilities, European Spaceflight reports. Spain’s Center for Technological Development and Innovation announced on January 26 that it selected PLD Space, which is developing a small launch vehicle called Miura 5, to receive a 40.5-million euro loan from a government fund devoted to aiding the Spanish aerospace sector, with a particular emphasis on access to space. Last summer, the Spanish government selected PLD Space and Pangea Aerospace to each receive 1.5 million euros in a preliminary funding round to mature their designs. PLD Space won the second round of the loan competition.

Moving toward Miura 5 … “The technical decision in favor of PLD Space confirms that our technological development strategy is sound and is based on a solid business plan,” said Ezequiel Sanchez, PLD Space’s executive president. “Winning this public contract to create a strategic national capability reinforces our position as a leading company in securing Europe’s access to space.” Miura 5 will be capable of launching about a half-ton of payload mass into low-Earth orbit and is scheduled to make its debut launch from French Guiana in late 2025 or early 2026, followed by the start of commercial operations later in 2026. PLD Space will need to repay the loan through royalties over the first 10 years of the commercial operation of Miura 5. (submitted by Leika)

Rocket Report: SpaceX at the service of a rival; Endeavour goes vertical Read More »

on-dwarkesh’s-3rd-podcast-with-tyler-cowen

On Dwarkesh’s 3rd Podcast with Tyler Cowen

This post offers extensive thoughts on Tyler Cowen’s excellent talk with Dwarkesh Patel.

It is interesting throughout. You can read this while listening, after listening, or instead of listening; it is written to be compatible with all three options. The notes are in order of what they are reacting to and were mostly written as I listened.

I see this as having been a few distinct intertwined conversations. Tyler Cowen knows more about more different things than perhaps anyone else, so that makes sense. Dwarkesh chose excellent questions throughout, displaying an excellent sense of when to follow up and how, and when to pivot.

The first conversation is about Tyler’s book GOAT, about the world’s greatest economists. Fascinating stuff; it made me more likely to read and review GOAT in the future if I ever find the time. I mostly agreed with Tyler’s takes here, to the extent I am in a position to know, as I have not read that much of what these men wrote, and even though I very much loved The Wealth of Nations at the time (don’t skip even the digression on silver; I remember it being great), it is now largely a blur to me.

There were also questions about the world and philosophy in general, though not about AI, which I would mostly put in this first category. As usual, I have lots of thoughts.

The second conversation is about expectations given what I typically call mundane AI. What would the future look like, if AI progress stalls out without advancing too much? We cannot rule such worlds out and I put substantial probability on them, so it is an important and fascinating question.

If you accept the premise of AI remaining within the human capability range in some broad sense, where it brings great productivity improvements and rewards those who use it well but remains foundationally a tool and everything seems basically normal, essentially the AI-Fizzle world, then we have disagreements but Tyler is an excellent thinker about these scenarios. Broadly our expectations are not so different here.

That brings us to the third conversation, about the possibility of existential risk or the development of more intelligent and capable AI that would have greater affordances. For a while now, Tyler has asserted that such greater intelligence likely does not much matter, that not so much would change, that transformational effects are highly unlikely, whether or not they constitute existential risks. That the world will continue to seem normal, and follow the rules and heuristics of economics, essentially Scott Aaronson’s Futurama. Even when he says AIs will be decentralized and engage in their own Hayekian trading with their own currency, he does not think this has deep implications, nor does it imply much about what else is going on beyond being modestly (and only modestly) productive.

Then at other times he affirms the importance of existential risk concerns, and indeed says we will be in need of a hegemon, but the thinking here seems oddly divorced from other statements, and thus often rather confused. Mostly it seems consistent with the view that it is much easier to solve alignment quickly, build AGI and use it to generate a hegemon, than it would be to get any kind of international coordination. And also that failure to quickly build AI risks our civilization collapsing. But also I notice this implies that the resulting AIs will be powerful enough to enable hegemony and determine the future, when in other contexts he does not think they will even enable sustained 10% GDP growth.

Thus at this point, I choose to treat most of Tyler’s thoughts on AI as if they are part of the second conversation, with an implicit ‘assuming an AI at least semi-fizzle’ attached to them, at which point they become mostly excellent thoughts.

Dealing with the third conversation is harder. There is a place where I feel Tyler misinterprets a few statements, in ways I find extremely frustrating and that I do not see him do in other contexts, and I pause to set the record straight in detail. I definitely see hope in finding common ground and perhaps working together. But so far I have been unable to find the road in.

  1. I don’t buy the idea that investment returns have tended to be negative, or that VC investment returns have overall been worse than the market, but I do notice that this is entirely compatible with long term growth due to positive externalities not captured by investors.

  2. I agree with Tyler that the entrenched VCs are highly profitable, but that other VCs, due to lack of good deal flow, adverse selection, and lack of skill, don’t have good returns. I do think excessive optimism produces competition that drives down returns, but that returns would otherwise be insane.

  3. I also agree with Tyler that those with potential for big innovations or otherwise very large returns both do well themselves and also capture only a small fraction of total returns they generate, and I agree that the true rate is unknown and 2% is merely a wild guess.

  4. And yes, many people foolishly (or due to highly valuing independence) start small businesses that will have lower expected returns than a job. But I think that they are not foolish to value that independence highly versus taking a generic job, and also I believe that with proper attention to what actually causes success plus hard work small business can have higher private returns than a job for a great many people. A bigger issue is that many small businesses are passion projects such as restaurants and bars where the returns tend to be extremely bad. But the reason the returns are low is exactly because so many are passionate and want to do it.

  5. I find it silly to think that literal Keynes did not at the time have the ability to beat the market by anticipating what others would do. I am on record as saying the efficient market hypothesis is false, and certainly in this historical context it should be expected to be highly false. The reason you cannot easily make money from this kind of anticipation is that the anticipation is priced in, but Keynes was clearly in a position to notice when it was not priced in. I share Tyler’s disdain for where the argument was leading regarding socializing long-term investment, and also think that long-term fundamentals-based investing or building factories is profitable; having less insight and more risk should get priced in. That is indeed what I am doing with most of my investments.

  6. The financial system at 2% of wealth might not be growing in those terms, and maybe it’s not outrageous on its face, but it is at least suspicious; that’s a hell of a management fee, especially given that many assets aren’t financialized, and 8% of GDP still seems like a huge issue. And yes, I think that if that number goes up as wealth goes up, that still constitutes a very real problem.

  7. Risk behavior where you buy insurance for big things and take risks in small things makes perfect sense, both as mood management and otherwise, considering marginal utility curves and blameworthiness. You need to take a lot of small risks at minimum. No Gamble, No Future.

  8. The idea that someone’s failures are highly illustrative seems right, though I also worry about people applying that idea too rigorously.

  9. The science of what lets people ‘get away with’ what is generally considered socially unacceptable behaviors while being prominent seems neglected.

  10. Tyler continues to bet on economic growth meaning things turn out well pretty much no matter what, whereas shrinking fertility risks things turning out badly. I find it so odd to model the future in ways that implicitly assume away AI.

  11. If hawks always gain long term status and pacifists always lose it, that does not seem like it can be true in equilibrium?

  12. I think that Hayek’s claim that there is a general natural human trend towards more socialism has been proven mostly right, and I’m confused why Tyler disagrees. I do think there are other issues we are facing now that are at least somewhat distinct from that question, and those issues are important, but also I would notice that those other problems are mostly closely linked to larger government intervention in markets.

  13. Urbanization is indeed very underrated. Housing theory of everything.

  14. ‘People overrate the difference between government and market’ is quite an interesting claim, that the government acts more like a market than you think. I don’t think I agree with this overall, although some doubtless do overrate it?

  15. (30: 00) The market as the thing that finds a solution that gets us to the next day is a great way to think about it. And the idea that doing that, rather than solving for the equilibrium, is the secret of its success, seems important. It turns out that, partly because humans anticipate the future and plan for it, this changes what they are willing to do at what price today, and that this getting to tomorrow to fight another day will also do great things in the longer term. That seems exactly right, and also helps us point to the places this system might fail, while keeping in mind that it tends to succeed more than you would expect. A key question regarding AI is whether this will continue to work.

  16. Refreshing to hear that the optimum amount of legibility and transparency is highly nonzero but also not maximally high either.

  17. (34: 00): Tyler reiterates that AIs will create their own markets, and use their own currencies, property rights and perhaps Bitcoins and NFTs will be involved, and that decentralized AI systems acting in self-interested ways will be an increasing portion of our economic activity. Which I agree is a baseline scenario of sorts if we dodge some other bullets. He even says that the human and AI markets will be fully integrated. And that those who are good at AI integration, at outsourcing their activities to AI, will be vastly more productive than those who do not (and by implication, outcompete them).

  18. What I find frustrating is Tyler failing to then solve for the equilibrium, and asking what happens next. If we are increasingly handing economic activity over to self-interested competitive AI agents who compete against each other in a market and to get humans to allocate power and resources to them, subject to the resulting capitalistic and competitive and evolutionary and selection dynamics, where does that lead? How do we survive? I would as Tyler often requests Model This, except that I don’t see how not to assume the conclusion.

  19. (37: 00) Tyler expresses skepticism that GPT-N can scale up its intelligence that far, that beyond 5.5 maybe integration with other systems matters more, and says ‘maybe the universe is not that legible.’ I essentially read this as Tyler engaging in superintelligence denialism, consistent with his idea that humans with very high intelligence are themselves overrated, and saying that there is no meaningful sense in which intelligence can much exceed generally smart human level other than perhaps literal clock speed.

  20. A lot of this, that I see from many economists, seems to be based on the idea that the world will still be fundamentally normal and respond to existing economic principles and dynamics, and effectively working backwards from there, although of course it is not framed or presented that way. Thus intelligence and other AI capabilities will ‘face bottlenecks’ and regulations that they will struggle to overcome, which will doubtless be true, but I think gets easily overrun or gone around at some point relatively quickly.

  21. (39: 00) Tyler asks: Is more intelligence likely to be good or bad against existential risk? He says he thinks it is more likely to be good. There are several ways to respond with ‘it depends.’ The first is that, while I would very much be against this as a strategy of course, if we were never as intelligent as we actually are, such that we never industrialized, then we would not face substantial existential risks except over very long time horizons. Talk of asteroid impacts is innumerate; without burning coal we wouldn’t be worried about climate; nuclear and biological threats and AI would be irrelevant; fertility would remain high.

  22. Then on the flip side of adding more intelligence, I agree that adding more actually human intelligence will tend to be good, so the question again is how to think about this new intelligence and how it will get directed and to what extent we will remain in control of it and of what happens, and so on. How exactly will this new intelligence function and to what extent will it be on our behalf? Of course I have said much of this before as has Tyler, so I will stop there.

  23. The idea that AI potentially prevents other existential risks is of course true. It also potentially causes them. We are (or should be) talking price. As I have said before, if AI posed a non-trivial but sufficiently low existential risk, its upsides including preventing other risks would outweigh that.

  24. (40: 30) Tyler made an excellent point here, that market participants notice a lot more than the price level. They care about size, about reaction speed and more, and take in the whole picture. The details teach you so much more. This is also another way of illustrating that the efficient market hypothesis is false.

  25. How do some firms improve over time? It is a challenge for my model of Moral Mazes that there are large centuries old Japanese or Dutch companies. It means there is at least some chance to reinvigorate such companies, or methods that can establish succession and retain leadership that can contain the associated problems. I would love to see more attention paid to this. The fact that Israel and the United States only have young firms and have done very well on economic growth suggests the obvious counterargument.

  26. I love the point that a large part of the value of free trade is that it bankrupts your very worst firms. Selection is hugely important.

  27. (48: 00) Tyler says we should treat children better and says we have taken quite a few steps in that direction. I would say that we are instead treating children vastly worse. Children used to have copious free time and extensive freedom of movement, and now they lack both. If they do not adhere to the programs we increasingly put them on medication and under tremendous pressure. The impacts of smartphones and social media are also ‘our fault.’ There are other ways in which we treat them better, in particular not tolerating using corporal punishment or other forms of what we now consider abuse. Child labor is a special case, where we have gone from forcing children to do productive labor in often terrible ways to instead forcing children to do unproductive labor in often terrible ways, and also banning children from doing productive labor for far too long, which is also its own form of horrific. But of course most people will say that today’s abuses are fine and yesterday’s are horrific.

  28. Mill getting elected to Parliament I see as less reflecting differential past ability for a top intellectual to win an election, and more a reflection of his willingness to put himself up for the office and take one for the team. I think many of our best intellectuals could absolutely make it to Congress if they cared deeply about making it to Congress, but that they (mostly wisely) choose not to do that.

  29. (53: 00) Despite persistent, millennia-long, very slow growth if any, Smith noticed that economic growth was coming by observing a small group and seeing those dynamics as the future. The parallels to AI are obvious, and Patel asks about it. Cowen says that to Smith 10% growth would likely be inconceivable, and he wouldn’t predict it because it would just shock him. I think this is right, and I also believe a lot of current economists are taking exactly that mental step today.

  30. Cowen also says he finds 10% growth for decades on end implausible. I would agree that seems unlikely, but I would say that not because it is too high but because you would then see such growth accelerate if it failed to rapidly hit a hard wall or cause a catastrophe, not because there would be insufficient room for continued growth. I do think his point that GDP growth ceases to be a good measure under sufficiently large level changes is sensible.

  31. I am curious how he would think about all these questions with regard to, for example, China’s emergence in the late 20th century. China has grown at 9% a year since 1978, so it is an existence proof that this can happen for some time (see the rough compounding sketch after this list). In some sense you can think of growth under AI potentially as a form of catch-up growth as well, in the sense that AI unlocks a superior standard of technological, intellectual, and physical capacity for production (assuming the world is somehow recognizable at all) and we would be adapting to it.

  32. Tyler asks: If you had the option to buy from today’s catalogue or the Sears catalogue from 1905 and had $50,000 to spend, which would you choose? He points out you have to think about it, which indeed you do if this is to be your entire consumption bundle. If you are allowed trade, of course, it is a very easy decision, you can turn that $50,000 into vastly more.

  33. (1: 05: 00) Dwarkesh says my exact perspective on Tyler’s thinking: that he is excellent on GPT-5 level stuff, then seems (in my words, not his) to hit a wall, and fails (in Dwarkesh’s words) to take all his wide-ranging knowledge and extrapolate. That seems exactly right to me; there is an assumption of normality of sorts, and when we get to the point where normality as a baseline stops making sense, the predictions stop making sense. Tyler responds saying he writes about AI a lot and shares ideas as he has them, and I don’t doubt those claims, but it does not address the point. I like that Dwarkesh asked the right question, and also realized that it would not be fruitful to pursue it once Tyler dodged answering. Dwarkesh has GOAT-level podcast question game.

  34. Should we subsidize savings? Tyler says he will come close to saying yes, at minimum we should stop taxing savings, which I agree with. He warns that the issue with subsidizing savings is it is regressive and would be seen as unacceptable.
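
As a rough aside on the growth numbers in the items above (my arithmetic, not anything said on the podcast), simple compounding shows what sustained 9-10% growth implies over decades:

    # What sustained growth compounds to over decades (illustrative only).
    def growth_multiple(rate: float, years: int) -> float:
        """Total output multiple after compounding at `rate` for `years` years."""
        return (1 + rate) ** years

    print(f"10% growth for 30 years: {growth_multiple(0.10, 30):.0f}x output")   # ~17x
    print(f"10% growth for 50 years: {growth_multiple(0.10, 50):.0f}x output")   # ~117x
    print(f"9% growth for 45 years (China since 1978): {growth_multiple(0.09, 45):.0f}x")  # ~48x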

  1. (1: 14: 00) Tyler worries about the fragile world hypothesis, not in terms of what AI could do but in terms of what could be done with… cheap energy? He asks what would happen if a nuclear bomb costs $50k. Which is a great question, but seems rather odd to worry about it primarily in terms of cost of energy?

  2. Tyler notes that due to intelligence we are doing better than the other great apes. I would reply that this is very true, that being the ape with the most intelligence has gone very well for us, and perhaps we should hesitate to create something that in turn has more intelligence than we do, for similar reasons?

  3. He says the existential risk people say ‘we should not risk all of this’ for AI, and that this is not how you should view history. Well, all right, then let’s talk price?

  4. Tyler thinks there is a default outcome of retreating to a kind of Medieval Balkans style existence with a much lower population ‘with or without AI.’ The with or without part really floors me, and makes me more confident that when he thinks about AI he simply is not pondering what I am pondering, for whatever reason, at all? But the more interesting claim is that, absent ‘going for it’ via AI, we face this kind of outcome.

  5. Tyler says things are hard to control, that we cannot turn back (and that we ‘chose a decentralized world well before humans even existed’) and such, although he does expect us to turn back via the decline scenario? He calls for some set of nations to establish dominance in AI, to at least buy us some amount of time. In some senses he has a point, but he seems to be doing some sort of confluence of the motte and bailey here. Clearly some forms of centralization are possible.

  6. By calling for nations such as America and the UK to establish dominance in this way, he must mean for particular agents within those nations to establish that dominance. It is not possible for every American to have root access and model weights and have that stay within America, or be functionally non-decentralized in the way he sees as necessary here. It could be the governments themselves, a handful of corporations or a combination or synthesis thereof. I would note this is, among other things, entirely incompatible with open model weights for frontier systems, and will require a compute monitoring regime.

  7. It certainly seems like Tyler is saying that we need to avoid misuse and proliferation of sufficiently capable AI systems at the cost of establishment of hegemonic control over AI, with all that implies? There is ultimately remarkable convergence of actual models of the future and of what is to be done, on many fronts, even without Tyler buying the full potential of such systems or thinking their consequences fully through. But notice the incompatibility of American dominance in AI with the idea of everyone’s AIs engaging in Hayekian commerce under a distinct ecosystem, unless you think that there is some form of centralized control over those AIs and access to them. So what exactly is he actually proposing? And how does he propose that we lay the groundwork now in order to get there?

  1. I get a mention and am praised as super smart which is always great to hear, but in the form of Tyler once again harping on the fact that when China came out saying they would require various safety checks on their AIs, I and others pointed out that China was open to potential cooperation and was willing to slow down its AI development in the name of safety even without such cooperation. He says that I and others said “see, China is not going to compete with us, we can shut AI down.”

So I want to be clear: That is simply not what I said or was attempting to convey.

I presume he is in particular referring to this:

Zvi Mowshowitz (April 19, 2023): Everyone: We can’t pause or regulate AI, or we’ll lose to China.

China: All training data must be objective, no opinions in the training data, any errors in output are the provider’s responsibility, bunch of other stuff.

I look forward to everyone’s opinions not changing.

[I quote tweeted MMitchell saying]: Just read the draft Generative AI guidelines that China dropped last week. If anything like this ends up becoming law, the US argument that we should tiptoe around regulation ‘cos China will beat us will officially become hogwash. Here are some things that stood out…

So in this context, Tyler and many others were claiming that if we did any substantive regulations on AI development we risked losing to China.

I was pointing out that China was imposing substantial regulations for its own reasons. These requirements, even if ultimately watered down, would be quite severe restrictions on their ability to deploy such systems.

The intended implication was that China clearly was not going to go full speed ahead with AI, they were going to impose meaningfully restrictive regulations, and so it was silly to say that unless we imposed zero restrictions we would ‘lose to China.’ And also that perhaps China would be open to collaboration if we would pick up the phone.

And yes, that we could pause the largest AI training runs for some period of time without substantively endangering our lead, if we choose to do that. But the main point was that we could certainly do reasonable regulations.

The argument was not that we could permanently shut down all AI development forever without any form of international agreement, and China and others would never move forward or never catch up to that.

I believe actually that the rest of 2023 has borne out that China’s restrictions in various ways have mattered a lot, that even within specifically AI they have imposed more meaningful barriers than we have, that they remain quite behind, and that they have shown willingness to sit down to talk on several occasions, including the UK Summit, the agreement on nuclear weapons and AI, a recent explicit statement of the importance of existential risk and more.

Tyler also says we seem to have “zero understanding of some properties of decentralized worlds.” On many such fronts I would strongly deny this, I think we have been talking extensively about these exact properties for a long time, and treating them as severe problems to finding any solutions. We studied game theory and decision theory extensively, we say ‘coordination is hard’ all the time, we are not shy about the problem that places like China exist. Yes, we think that such issues could potentially be overcome, or at least that if we see no other paths to survival or victory that we need to try, and that we should not treat ‘decentralized world’ as a reason to completely give up on any form of coordination and assume that we will always be in a fully competitive equilibrium where everyone defects.

Based on his comments in the last two minutes, perhaps instead the thing he thinks we do not understand is that the AI itself will naturally and inevitably also be decentralized, and there will not be only one AI? But again that seems like something we talk about a lot, and something I actively try to model and think about a lot, and try to figure out how to deal with or prevent the consequences. This is not a neglected point.

There are also the cases made by Eliezer and others that with sufficiently advanced decision theory and game theory and ability to model others or share source code and generate agents with high correlations and high overlap of interests and identification and other such affordances then coordination between various entities becomes more practical, and thus we should indeed expect that the world with sufficiently advanced agents will act in a centralized fashion even if it started out decentralized, but that is not a failure to understand the baseline outcome absent such new affordances. I think you have to put at least substantial weight on those possibilities.

Tyler once warned me – wisely and helpfully – in an email that I was too often falling into strawmanning or caricaturing opposing views and needed to be careful to avoid that. I agree and have attempted to take those words to heart; the fact that I could say many others do vastly worse on this front, both to views I hold and to many others, is irrelevant. I am of course not perfect at this, but I do what I can, and I think I do substantially less of it than I would absent his note.

Then he notes that Eliezer made a Tweet that Tyler thinks probably was not a joke – that I distinctly remember and that was 100% very much a joke – that the AI could read all the legal code and threaten us with enforcement of the legal system. That Eliezer does not seem to understand how screwed up the legal system is, talking about how this would cause very long courtroom waits and would be impractical and so on.

That’s the joke. The whole point was that the legal system is so screwed up that it would be utterly catastrophic if we actually enforced it, and also that this is bad. Eliezer is constantly tweeting and talking, independently of AI, about how screwed up the legal system is; if you follow him it is rather impossible to miss. There are also lessons here about the potential misalignment of what is socially and verbally affirmed with what we actually want to happen, and also an illustration of the fact that a sufficiently capable AI would have lots of different forms of leverage over humans; it works on many levels. I laughed at the time, and knew it was a joke without being told. It was funny.

I would say to him, please try to give a little more benefit of the doubt, perhaps?

  1. Tyler predicts that until there is an ‘SBF-like’ headline incident, the government won’t do much of anything about AI even though the smartest people in the government in national security will think we should, and then after the incident we will overreact. If that is the baseline, it seems odd to oppose (as Tyler does) doing anything at all now, as this is how you get that overreaction.

  2. Should we honor past generations more because we want our views to be respected more in the future? Tyler says probably yes, that there is no known philosophically consistent view on this that anyone lives by. I can’t think of one either. He points out the Burke perspective on this is time inconsistent, as you are honoring the recent dead only, which is how most of us actually behave. Perhaps one way to think about this is that we care about the wishes of the dead in the sense that people still alive care about those particular dead, and thus we should honor the dead to the extent that they have a link to those who are alive? Which can in turn pass along through the ages, as A begets B begets C on to Z, and we also care about such traditions as traditions, but ultimately this fades, faster with some than others? But if we do not care about that particular person at all anymore, then we also don’t care about their preferences, because dead is dead? On top of that, we can say that there are certain specific things which we feel the dead are entitled to, like a property right or human right, such as their funerals and graves, and the right to a proper burial even if we did not know them at all, and we honor those things for everyone as a social compact exactly to keep that compact going. However, none of this bodes especially well for getting future generations, or especially future AIs, to care much about our quirky preferences in the long run.

  3. Why does Argentina go crazy with the printing press and have hyperinflation so often? Tyler points out this is a mystery. My presumption is that this begets itself. The markets expect it again, although not to the extent they should; I can’t believe (and didn’t at the time) that some of the bond sales over the years actually happened at the prices they got, which seems like another clear case of the EMH being false. But certainly everyone involved has ‘hyperinflation expectations’ that make it much harder to pull back from the brink, and that make people far more tolerant of irresponsible policies that go down such roads in the future, because such policies look relatively responsible, and because, as Tyler asks about, various interest groups presumably are used to capturing more rents than the state can afford. Of course, this can also go the other way: at some point you get fed up with all that, and thus you get Milei.

  4. So weird to hear Tyler talk about the power of American civic virtue, but he still seems right compared to most places. We have so many clearly smart and well meaning people in government, yet it in many ways functions so poorly, as they operate under such severe constraints.

  5. Agreement that in the past economists and other academics were inclined to ask bigger questions, and now they more often ask smaller questions and overspecialize.

  6. (1: 29: 00) Tyler worries about competing against AI as an academic or thinker, that people might prefer to read what the AI writes for 10-20 years. This seems to me like a clear case of ‘if this is true then we have much bigger problems.’

  7. I love Tyler’s ‘they just say that’ to the critique that you can’t have capitalism with proper moral equality. And similarly with Fukuyama. Tyler says today’s problems are more manageable than those of any previous era, although we might still all go poof. I think that if you judge relative to standards and expectations and what counts as success that is not true, but his statement that we are in the fight and have lots of resources and talent is very true. I would say, we have harder problems that we aim to solve, while also having much better tools to solve them. As he says, let’s do it, indeed. This all holds with or without AI concerns.

  8. Tyler predicts that volatility will go up a lot due to AI. I am trying out two manifold markets to attempt to capture this.

  9. It seems like Tyler is thinking of greater intelligence in terms of ‘fitting together quantum mechanics and relativity’ and thus thinking it might cap out, rather than thinking about what that intelligence could do in various more practical areas. Strange to see a kind of implicit Straw Vulcan situation.

  10. Tyler says (responding to Dwarkesh’s suggestion) that maybe the impact of AI will be like the impact of Jews in the 20th century, in terms of innovation and productivity, where they were 2% of the population and generated 20% of the Nobel Prizes. That what matters is the smartest model, not how many copies you have (or presumably how fast it can run). So once again, the expectation that the capabilities of these AIs will cap out in intelligence, capabilities and affordances essentially within the human range, even with our access to them to help us go farther? I again don’t get why we would expect that.

  11. Tyler says existential risk is indeed one of the things we should be most thinking about. He would change his position most if he thought international cooperation were very possible or no other country could make AI progress, this would cause very different views. He notices correctly that his perspective is more pessimistic than what he would call a ‘doomer’ view. He says he thinks you cannot ‘just wake up in the morning and legislate safety.’

  12. In the weak sense, well, of course you can do that, the same way we legislate safe airplanes. In the strong sense, well, of course you cannot do that one morning, it requires careful planning, laying the groundwork, various forms of coordination including international coordination and so on. And in many ways we don’t know how to get safety at all, and we are well aware of many (although doubtless not all) of the incentive issues. This is obviously very hard. And that’s exactly why we are pushing now, to lay groundwork now. In particular that is why we want to target large training runs and concentrations of compute and high end chips, where we have more leverage. If we thought you could wake up and do it in 2027, then I would be happy to wait for it.

  13. Tyler reiterates that the only safety possible here, in his view, comes from a hegemon that stays good, which he admits is a fraught proposition on both counts.

  14. His next book is going to be The Marginal Revolution, not about the blog but about the actual revolution, only 40k words. Sounds exciting; I predict I will review it.

So in the end, if you combine his point that he would think very differently if international coordination were possible or others were rendered powerless, his need for a hegemon if we want to achieve safety, and clear preference for the United States (or one of its corporations?) to take that role if someone has to, and his statement that existential risk from AI is indeed one of the top things we should be thinking about, what do you get? What policies does this suggest? What plan? What ultimate world?

As he would say: Solve for the equilibrium.

On Dwarkesh’s 3rd Podcast with Tyler Cowen Read More »

convicted-console-hacker-says-he-paid-nintendo-$25-a-month-from-prison

Convicted console hacker says he paid Nintendo $25 a month from prison

Crime doesn’t pay —

As Gary Bowser rebuilds his life, fellow Team Xecuter indictees have yet to face trial.

It's-a me, the long arm of the law.

Aurich Lawson / Nintendo / Getty Images

When 54-year-old Gary Bowser pleaded guilty to his role in helping Team Xecuter with their piracy-enabling line of console accessories, he realized he would likely never pay back the $14.5 million he owed Nintendo in civil and criminal penalties. In a new interview with The Guardian, though, Bowser says he began making $25 monthly payments toward those massive fines even while serving a related prison sentence.

Last year, Bowser was released after serving 14 months of that 40-month sentence (in addition to 16 months of pre-trial detention), which was spread across several different prisons. During part of that stay, Bowser tells The Guardian, he was paid $1 an hour for four-hour shifts counseling other prisoners on suicide watch.

From that money, Bowser says he “was paying Nintendo $25 a month” while behind bars. That lines up roughly with a discussion Bowser had with the Nick Moses podcast last year, where he said he had already paid $175 to Nintendo during his detention.

According to The Guardian, Nintendo will likely continue to take 20 to 30 percent of Bowser’s gross income (after paying for “necessities such as rent”) for the rest of his life.
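
For a sense of scale (my own arithmetic, not a figure from The Guardian), $25 a month barely dents a $14.5 million judgment, even before any interest:

    # How far $25/month goes against the $14.5 million owed, ignoring interest.
    total_owed = 14_500_000   # combined civil and criminal penalties, USD
    monthly_payment = 25      # what Bowser says he paid while in prison

    months = total_owed / monthly_payment
    print(f"Months to repay at $25/month: {months:,.0f}")   # 580,000 months
    print(f"Roughly {months / 12:,.0f} years")              # about 48,000 years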

The fall guy?

While people associated with piracy often face fines rather than prison, Nintendo lawyers were upfront that they pushed for jail time for Bowser to “send a message that there are consequences for participating in a sustained effort to undermine the video game industry.” That seems to have been effective, at least as far as Bowser’s concerned; he told The Guardian that “The sentence was like a message to other people that [are] still out there, that if they get caught … [they’ll] serve hard time.”

Bowser appears on the Nick Moses Gaming Podcast from a holding center in Washington state in 2023.

Nick Moses 05 Gaming Podcast/YouTube

But Bowser also maintains that he wasn’t directly involved with the coding or manufacture of Team Xecuter’s products, and only worked on incidental details like product testing, promotion, and website coding. Speaking to Ars in 2020, Aurora, a writer for hacking news site Wololo, described Bowser as “kind of a PR guy” for Team Xecuter. Despite this, Bowser said taking a plea deal on just two charges saved him the time and money of fighting all 14 charges made against him in court.

Bowser was arrested in the Dominican Republic in 2020. Fellow Team Xecuter member and French national Max “MAXiMiLiEN” Louarn, who was indicted and detained in Tanzania at the same time as Bowser’s arrest, was still living in France as of mid-2022 and has yet to be extradited to the US. Chinese national and fellow indictee Yuanning Chen remains at large.

“If Mr. Louarn comes in front of me for sentencing, he may very well be doing double-digit years in prison for his role and his involvement, and the same with the other individual [Chen],” US District Judge Robert Lasnik said during Bowser’s sentencing.

Returning to society

During his stay in prison, Bowser tells The Guardian that he suffered a two-week bout of COVID that was serious enough that “a priest would come over once a day to read him a prayer.” A bout of elephantiasis also left him unable to wear a shoe on his left foot and required the use of a wheelchair, he said.

Now that he’s free, Bowser says he has been relying on friends and a GoFundMe page to pay for rent and necessities as he looks for a job. That search could be somewhat hampered by his criminal record and by terms of the plea deal that prevent him from working with any modern gaming hardware.

Despite this, Bowser told The Guardian that his current circumstances are still preferable to a period of homelessness he experienced during his 20s. And while console hacking might be out for Bowser, he is reportedly still “tinkering away with old-school Texas Instruments calculators” to pass the time.

Convicted console hacker says he paid Nintendo $25 a month from prison Read More »