Author name: Ryan Harris


AI ruling on jobless claims could make mistakes courts can’t undo, experts warn

Nevada will soon become the first state to use AI to help speed up the decision-making process when ruling on appeals that impact people’s unemployment benefits.

The state’s Department of Employment, Training, and Rehabilitation (DETR) agreed to pay Google $1,383,838 for the AI technology, a 2024 budget document shows, and it will be launched within the “next several months,” Nevada officials told Gizmodo.

Nevada’s first-of-its-kind AI will rely on a Google cloud service called Vertex AI Studio. Using Google’s servers, the state will fine-tune the AI system to reference only information from DETR’s database, which officials think will ensure its decisions are “more tailored” and the system provides “more accurate results,” Gizmodo reported.

Under the contract, DETR will essentially transfer data from transcripts of unemployment appeals hearings and rulings, after which Google’s AI system will process that data, upload it to the cloud, and then compare the information to previous cases.
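
A minimal, purely illustrative sketch of what that retrieve-and-compare step could look like (all names, and the crude word-overlap scoring, are invented here; the production system presumably uses Vertex AI embeddings over DETR’s records, and nothing about its actual code is public):

```python
from dataclasses import dataclass

@dataclass
class PastCase:
    case_id: str
    transcript: str
    ruling: str

def similarity(a: str, b: str) -> float:
    """Toy word-overlap score standing in for a real embedding model."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, len(wa | wb))

def retrieve_similar(new_transcript: str, db: list, k: int = 3) -> list:
    """Return the k past appeals most similar to the new hearing transcript."""
    ranked = sorted(db, key=lambda c: similarity(new_transcript, c.transcript),
                    reverse=True)
    return ranked[:k]

def draft_ruling(new_transcript: str, precedents: list) -> str:
    """Stand-in for the generative step; the real system would prompt a
    cloud-hosted model grounded only in DETR's own records."""
    cited = ", ".join(c.case_id for c in precedents)
    return f"DRAFT ruling for human review (precedents consulted: {cited})"
```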

In as little as five minutes, the AI will issue a ruling that would’ve taken a state employee about three hours to reach without using AI, DETR’s information technology administrator, Carl Stanfield, told The Nevada Independent. That’s highly valuable to Nevada, which has a backlog of more than 40,000 appeals stemming from a pandemic-related spike in unemployment claims while dealing with “unforeseen staffing shortages” that DETR reported in July.

“The time saving is pretty phenomenal,” Stanfield said.

As a safeguard, the AI’s determination is then reviewed by a state employee who will hopefully catch any mistakes, biases, or, perhaps worse, hallucinations, in which the AI makes up facts that could affect the outcome of a claimant’s case.

Google’s spokesperson Ashley Simms told Gizmodo that the tech giant will work with the state to “identify and address any potential bias” and to “help them comply with federal and state requirements.” According to the state’s AI guidelines, the agency must prioritize ethical use of the AI system, “avoiding biases and ensuring fairness and transparency in decision-making processes.”

If the reviewer accepts the AI ruling, they’ll sign off on it and issue the decision. Otherwise, the reviewer will edit the decision and submit feedback so that DETR can investigate what went wrong.
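
A minimal sketch of that sign-off gate (names invented; the actual review software has not been described beyond the paragraph above):

```python
def review_ai_ruling(ai_draft: str, accept: bool, edited: str = "", feedback: str = ""):
    """Human-in-the-loop gate: the AI draft only issues if a state employee
    signs off; otherwise the reviewer's edit ships and feedback goes to DETR."""
    if accept:
        return {"issued": ai_draft, "feedback": None}
    return {"issued": edited, "feedback": feedback}  # feedback drives investigation
```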

Gizmodo noted that this novel use of AI “represents a significant experiment by state officials and Google in allowing generative AI to influence a high-stakes government decision—one that could put thousands of dollars in unemployed Nevadans’ pockets or take it away.”

Google declined to comment on whether more states are considering using AI to weigh jobless claims.



Sony announces PS5 Pro, a $700 graphical upgrade available Nov. 7

More power —

New unit won’t include a disc drive, but will improve frame rate in high-fidelity games.

The cool racing stripe means it’s faster.

Sony today announced the PlayStation 5 Pro, a mid-generation hardware upgrade that will play the same game library as 2020’s PlayStation 5, but with higher frame rates and better resolution than on the original system. The new units will be available on November 7 for $700, Sony said.

The updated hardware will come complete with 2TB of solid-state storage (up from 1TB on the original PS5), but without an Ultra HD Blu-ray disc drive, which users can purchase as an add-on accessory for $80.

In a video presentation Tuesday, Sony’s Mark Cerny said PS5 developers “desire more graphics performance” in order to deliver the visuals they want at a frame rate of 60 fps. The lack of enough graphical power on the PS5 leads to a difficult decision for players between the higher resolution of “fidelity” mode and the smoother frame rates of “performance” mode (with three-quarters of players choosing the latter, according to Cerny).

The goal of the PS5 Pro, Cerny says, is delivering “the graphics that the game creators aspire to, at the high frame rates players typically prefer.” To do this, the new system will sport a larger GPU that can support “up to 45 percent faster rendering,” Cerny said, with 67 percent more compute units and 28 percent faster video RAM than the PS5. This will allow for “almost fidelity-like graphics at ‘performance’ frame rates” of 60 frames per second in many existing PS5 games, Cerny said.

  • The two sides of this comparison are supposed to look comparable in fidelity, but the PS5 Pro is running at 60 fps, as opposed to 30 fps on the PS5.

  • Spider-Man 2 is one of the games that will benefit from smoother frame rates at high-fidelity on PS5 Pro.

  • Ratchet & Clank: Rift Apart was shown running much more smoothly on the PS5 Pro.

  • Compared to the high-frame-rate “performance” mode on PS5, many games show increased fidelity on the PS5 Pro.

  • AI upscaling helps Ratchet & Clank: Rift Apart crowd scenes look sharper on PS5 Pro.

  • Incidental details like Spider-Man 2’s street traffic look sharper on the PS5 Pro than in “performance” mode on the PS5.

  • The PS5 Pro allows for ray-traced car reflections in Gran Turismo 7 while maintaining a 60 fps frame rate.

  • The PS5 Pro allows for “further realism in the casting of shadows” in Hogwarts Legacy, Cerny said.

  • Horizon Forbidden West gets “improvements to lighting and visual effects” on the PS5 Pro, Cerny said.

  • Horizon Forbidden West cinematics will show “[improvements to] the hair and skin” on the PS5 Pro, Cerny said.

The PS5 Pro will also bring what Cerny says is a “streamlined and accelerated approach” to ray-tracing, with individual rays calculated at “double or even triple the speeds of PlayStation 5.” In examples shown on video, Cerny highlighted how reflections between cars are now available at 60 fps for Gran Turismo 7 on the PS5 Pro, and how games like Hogwarts Legacy could have more realistic shadow effects.

An “AI library” called PlayStation Spectral Super Resolution (PSSR) will be available on the PS5 Pro to automatically upscale in-game scenes as well. Cerny highlighted how this can make distant crowds in a game like Ratchet & Clank: Rift Apart look much clearer.

Titles that can take advantage of the PS5 Pro’s more powerful GPU will be marketed as “PS5 Pro Enhanced.” Titles that will sport that designation include:

  • Alan Wake 2
  • Assassin’s Creed: Shadows
  • Demon’s Souls
  • Dragon’s Dogma 2
  • Final Fantasy 7 Rebirth
  • Gran Turismo 7
  • Hogwarts Legacy
  • Horizon Forbidden West
  • Marvel’s Spider-Man 2
  • Ratchet & Clank: Rift Apart
  • The Crew Motorfest
  • The First Descendant
  • The Last of Us Part II Remastered

Other titles will be able to take advantage of “PS5 Pro Game Boost,” which Sony says “may stabilize or improve the performance of supported PS4 and PS5 games.”

PS5.5

The upgrade follows the pattern of the PS4 Pro, which launched almost exactly three years after the PS4 and offered higher resolutions, faster frame rates, or both in many PS4 library games. The PS5 Pro arrives further into its console’s lifecycle, though, at a point where Sony has yet to lower the $500 launch price of the standard PS5 and actually raised the price of the disc-drive-free Digital Edition last year (though inflation has taken some of the sting out of that nominal pricing).

The PS5 Pro comes after last year’s launch of a redesigned PS5 “Slim” model, which reduced the original PS5’s famously massive bulk while keeping the internal processing power the same.

Microsoft has yet to show any sign of plans for a similar update for the Xbox Series X/S, which also launched in late 2020. Earlier this year, though, Microsoft announced the first disc-drive-free edition of the Xbox Series X for this holiday season.

Nintendo, which launched a Switch with an improved OLED screen in 2021, is widely expected to launch a backward-compatible follow-up to the Switch in 2025.

This story has been updated with additional details and visuals from Sony’s announcement.



Economics Roundup #3

I am posting this now largely because it is the right place to get in discussion of unrealized capital gains taxes and other campaign proposals, but also there is always plenty of other stuff going on. As always, remember that there are plenty of really stupid proposals always coming from all sides. I’m not spending as much time talking about why it’s awful to, for example, impose gigantic tariffs on everything, because if you are reading this I presume you already know.

The problem, perhaps, in a nutshell:

Tess: like 10% of people understand how markets work and about 10% deeply desire and believe in a future that’s drastically better than the present but you need both of these to do anything useful and they’re extremely anticorrelated so we’re probably all fucked.

In my world the two are correlated. If you care about improving the world, you invest in learning about markets. Alas, in most places, that is not true.

The problem, in a nutshell, attempt number two:

Robin Hanson: There are two key facts near this:

  1. Government, law, and social norms in fact interfere greatly in many real markets.

  2. Economists have many ways to understand “market failure” deviations from supply and demand, and the interventions that make sense for each such failure.

Economists’ big error is: claiming that fact #2 is the main explanation for fact #1. This strong impression is given by most introductory econ textbooks, and accompanying lectures, which are the main channels by which economists influence the world.

As a result, when considering actual interventions in markets, the first instinct of economists and their students is to search for nearby plausible market failures which might explain interventions there. Upon finding a match, they then typically quit and declare this as the best explanation of the actual interventions.

Yep. There are often market failures, and a lot of the time it will be very obvious why the government is intervening (e.g. ‘so people don’t steal other people’s stuff’), but if you see a government intervention that does not have an obvious explanation, your first thought should not be to assume the policy is there to sensibly correct a market failure.

Kamala Harris endorses Biden’s no-good-very-bad 44.6% capital gains tax rate proposal, including the cataclysmic 25% tax on unrealized capital gains, via confirming she supports all Biden budget proposals. Which is not the same as calling for it on the campaign trail, but is still support.

She later pared back the proposed topline rate to 33%, which is still a big jump, and I don’t see anything there about her pulling back on the unrealized capital gains tax.

Technically speaking, the proposal for those with a net worth over $100 million is an annual minimum 25% tax on your net annual income, realized and unrealized including the theoretical ‘value’ of fully illiquid assets, with taxes on unrealized gains counting as prepayments against future realized gains (including allowing refunds if you ultimately make less). Also, there is a ‘deferral’ option on your illiquid assets if you are insufficiently liquid, but that carries a ‘deferral charge’ up to 10%, which I presume will usually be correct to take given the cost of not compounding.
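
To make those mechanics concrete, here is a deliberately simplified toy model of the prepayment logic as summarized above (my own sketch of the description, not the statutory computation; the real proposal’s crediting, refund, and deferral rules are more intricate):

```python
def minimum_tax_year(realized: float, unrealized: float, prepaid_credit: float,
                     rate: float = 0.25):
    """One tax year under the summarized mechanism (toy version)."""
    liability = rate * max(0.0, realized + unrealized)
    applied = min(prepaid_credit, liability)   # use banked prepayments first
    due = liability - applied
    # Tax attributable to this year's unrealized gains is banked as a
    # prepayment against the eventual realized gain.
    new_credit = prepaid_credit - applied + rate * max(0.0, unrealized)
    return due, new_credit

# Year 1: no sales, $40M of paper gains -> $10M due now, banked as credit.
due1, credit = minimum_tax_year(realized=0.0, unrealized=40e6, prepaid_credit=0.0)
# Year 2: the position is sold for a $40M gain over original basis; the
# banked $10M wipes out the bill, so the same gain isn't taxed twice.
due2, _ = minimum_tax_year(realized=40e6, unrealized=0.0, prepaid_credit=credit)
assert (due1, due2) == (10e6, 0.0)
```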

All of this seems like a huge unforced error, offered without much consideration, as the people who know how bad this is care quite a lot. It effectively invokes what I dub Deadpool’s Law, which, to quote Cassandra Nova, is: You don’t f***ing matter.

The most direct ‘you’ is a combination of anyone who cares about startups, successful private businesses or creation of value, and anyone with a rudimentary understanding of economics. The broader ‘you’ is, well, everyone and everything, since we all depend on the first two groups.

One might think that ‘private illiquid business’ is an edge case here. It’s not.

Owen Zidar: The discussions of unrealized capital gains for the very rich (ie those with wealth exceeding $100M) are often missing a key fact – two thirds of unrealized gains at the very top is from gains in private businesses (From SCF data)

When people discuss capital gains, they often evoke esoteric theories and think about simple assets like selling stock. They should really have ordinary private businesses in mind (like beverage distributors and car dealers) when thinking about these proposals at the top.

The share of transactions that stock sales represent has fallen substantially. It’s not about someone’s Apple stock. We should be thinking about the decisions of private business owners when analyzing how behavior might respond to these policy changes.

As in, 64% of gains for those worth over $100m are in shares in private business, versus only 26% public stocks.

Here Dylan Matthews gives a steelman defense of the proposal. He says Silicon Valley is actually ‘exempt,’ so they should stop worrying, because taxes can be deferred if over 80% of your assets are illiquid. That’s quite the conditional: there is an additional charge for deferring, there is very much a spot where you don’t cross 80% yet also cannot reasonably pay the tax, and if your illiquid assets then go to zero (as happens in startups) you could be screwed beyond words. All the rates involved are outrageous, but yes, it isn’t as bad as the headlines sound.

Roon: Probably not on board but it’s definitely clear sans behavioral econ stuff charging only at realization time causes pretty distorted incentives including eg holding onto windfall long after you think it’s stopped accumulating and thereby depriving new opportunities of capital.

Otoh this has caused untold gains for people who would’ve been too paperhands to hold otherwise.

Also untold unrealized gains that turned into losses. Easy come, easy go. Certainly the distortion here is massive. I do agree this is a problem. I like the realistic solutions even less, especially as they would effectively make it impossible for founders to maintain control.

Perhaps in some narrow circumstances there is something that could be done, and one could argue for taxing unrealized gains on some highly liquid and fungible investment types, so you avoid perverse outcomes including sabotage of value (e.g. in the hypothetical ‘Harberger tax’ world, you actively want to sabotage the resale value of everything you own that you want to actually use).

But would that come with a reduction in the headline rate? Here, clearly that is not the intention. So the compounding of the taxes every year would mean an insane effective tax rate, where you lose most of your gains, and in general capital would be taxed prohibitively. No good. Very bad.

And also of course if you tax liquidity you get a lot less liquidity. Already companies postpone going public a lot, we have private equity, and so on. You have lots of reasons to not want to be part of the market. What happens if you add to that bigly?

We should not be shocked that Silicon Valley talked about the consequences of a proposal while hallucinating a different proposal. They have a pattern of doing that. But it is not like the details added here solve the problem.

When you look at the details further, and start asking practical questions, as Tyler Cowen does here, you see how disastrously deep the problems go. Even in an ideal version of the policy, you are facing nightmare after nightmare starting with the accounting, massive distortion after distortion, you cripple innovation and business activity, and liquidity takes a massive hit. Tax planning becomes central to life, everything gets twisted and much value is intentionally destroyed or hidden, as with all Harberger tax proposals. This is in the context of Tyler critiquing an attempted defense of the proposal by Jason Furman.

Jason Furman on Twitter says that his critiques are for a given overall level of taxation of capital gains. Arthur calls him out on the obvious, which is that we are not proposing to hold the overall level of taxation of capital gains constant, the headline rate is not coming down and thus if this passes the effective rate is going way, way up. And Harris proposes to raise the baseline rate to 44.6%.

Tyler Cowen successfully persuaded me the proposal is worse than I thought it was, which is impressive given how bad I already thought it to be.

Alex Tabarrok throws in the distortion that a lot of valuation of investments is a change in the discount rate. The stock price can and sometimes does double without expected returns changing. And he emphasizes Tyler’s point that divorcing the entrepreneur from his capital is terrible for productivity, and is likely to happen at exactly the worst time. Many have also added the obvious, which is that the entrepreneur and investors involved can backchain from that, so likely you never even reach that point; the company will often never be founded.

Here the CEO of Dune Analytics reports on the exodus of wealthy individuals from Norway, including himself, after he closed a Series B and was about to face an outright impossible tax bill.

The most ironic part of this is that the arguments for taxing unrealized capital gains are relatively strong among people with much lower net worths, if you could keep the overall level of capital taxation constant. You encourage someone like me to rebalance, and not to feel ‘locked into’ my portfolio, and my tax planning on those assets stops mattering. The need to diversify actually matters. Whereas it scarcely matters if people with over $100 million get to ‘diversify’ and indeed I hope they do not do so with most of their wealth.

Also a 44.6% capital gains tax, or even the reduced 33% later proposal, is disastrous enough on its own, on its face.

The good point Dylan makes here, in addition to ‘step-up on death is the no-brainer,’ is that currently capital gains taxes involve a huge distortion because you can indefinitely dodge them by not selling even fully liquid assets. I have a severely ‘unbalanced’ portfolio of assets for this reason, and even getting rid of the step-up on death would not change that in many cases.

The good news is I don’t see this actually happening. Neither do most others, for example here’s Brian Riedl pointing out this isn’t designed to be a real proposal, which is why they never vote on it.

The better news is that this could create momentum for the actually good proposals, like ending the capital gains step-up on death or taxing borrowing against appreciated stocks. I do think those wins could be a big deal.

The bad news is that the downside if it did happen is so so bad, and you can never be fully sure. The other bad news is that there are lots of other bad economic proposals, including Trump’s tariffs and Harris’s war on ‘price gouging,’ to worry about.

The more I think about this, the more the issue is that we fail to close other obvious tax loopholes that are used by the wealthy. In particular, borrowing against assets without paying capital gains, and especially doing this combined with the step-up of cost basis at death.

In today’s age of electronic records, asking that cost basis be tracked indefinitely for assets of substantial value seems eminently reasonable.

So I see two potential compromises here, we can do either or both.

  1. Backdate this responsibility to a fixed date, and say that you can choose the known true cost basis or the cost basis from a fixed date of let’s say 2010, whichever is higher, to not retroactively kill people without good records. If we want to exempt family farms up to some size limit or whatever special interests, sure, I don’t care.

  2. Set a size limit. Your estate can only ‘step up’ the cost basis by some fixed amount, let’s say $10 million total. If you’re worth more than that, you’re worth enough to keep good records or pay up.

Then we can combine that with the obvious way to deal with loans against assets, which is to say for tax purposes that any loan on an asset that exceeds its cost basis is a realized capital gain (which also counts as a basis step-up). You just realized part of the gain. You have to pay taxes on that part of your gain now.
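
A toy version of that rule, to make the arithmetic concrete (the 23.8% rate below is merely illustrative, roughly the current top long-term rate plus the net investment income tax):

```python
def loan_realization(basis: float, loan: float, rate: float = 0.238):
    """Treat loan proceeds above cost basis as a realized gain, with a
    matching basis step-up so the same slice isn't taxed again at sale."""
    realized = max(0.0, loan - basis)
    tax_due = rate * realized
    new_basis = basis + realized
    return tax_due, new_basis

# Asset bought for $20M, now worth $100M; the owner borrows $100M against it:
tax, stepped_basis = loan_realization(basis=20e6, loan=100e6)
# -> $80M treated as realized now (about $19M of tax); basis steps up to $100M.
```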

Tyler Cowen argues against this by noting that if the loan against an asset charges interest, you aren’t any wealthier from having access to it. Someone is loaning you that money for a reason. Except I would say, if you have a so-far untaxed asset worth $100, and you borrow $100 against it (yes obviously in real life you don’t get the full amount), you should be able to use appreciation of the asset to pay the interest on the borrowing, so you can indeed effectively spend down what you have earned, in expectation. So I see why people see this as evading a tax.

And we should also say that if you donate stocks or other appreciated assets, you can only claim the cost basis as a deduction unless you also pay the capital gains tax on the gain.

We might also have to do something regarding trust funds.

In exchange, lower the overall capital gains rate to keep total revenue constant.

Scott Sumner uses this opportunity to ask why we would even tax realized capital gains, with the original sin being taxing income rather than consumption. I strongly agree with him. Given that we are stuck with income taxes as a baseline, we should strive to minimize capital gains as a revenue source.

I’ll stop there rather than continue beating on the dead horse.

We can now move on to talking about ordinary decent deeply stupid and destructive ideas, such as the Trump proposal, now copied by Harris, to not tax service tips.

Alex Tabarrok: If tips aren’t taxed, tips will increase, wages will fall, no increase in compensation.

The potential catch on total compensation is if the minimum wage binds. If tips are untaxed, the market wage for many jobs would presumably be actively negative. So even limiting it at $0 would cause an issue. That’s a relatively small issue.

The straightforward and obvious issue is this is stupid tax policy. Why should the waiter who lives off tips and earns more pay lower taxes than the cook in the back? This does not make any sense, on any level, other than political bribery.

Tyler Cowen tries to focus on the contradictory economic logic between tips and minimum wage. Either labor supply is elastic or inelastic, so either the minimum wage kills jobs or this new subsidy gets captured by employers in the form of lower wages. Unless, of course, this is illegal, via the minimum wage?

I see this as asking the wrong questions. You do not need to know the elasticity of labor supply to know not taxing tips is bad policy. No one involved is thinking in these economic terms, or cares. Bribery is bribery, pure and simple.

Also one can make a case that this will increase tipping. If people know tips are untaxed, then this is a reason to tip generously. Perhaps this increases total amount paid by consumers, so there is room for both employee and employer to benefit? But it would do so by being effectively inflationary, if this did not go along with lower base prices.

The bigger and more fun to think about issue is: What happens when there is a very strong incentive to classify as much of everyone’s income as possible as tips? What other jobs can qualify as offering ‘service tips’ versus not, and which can be used to effectively launder pay?

As an obvious example, a wide variety of sex workers can non-suspiciously be paid mostly in tips, and if elite can be paid vast amounts of money per hour, and it is definitely service work. Who is to say what happens there? What you’re into? Could be anything.

The case for not taxing tips is that if honest workers declare their tips while dishonest workers are free not to and mostly suffer no consequences, what you are actually taxing is honesty and abiding by the law. I do find this unfortunate. The de facto law here is to not report tips up to some murky threshold and, similar to the rule for gamblers, to declare the money if you want to conspicuously spend it. Alas, there are many such cases where you need a law on the books to prevent rampant abuse.

On the question of food and grocery prices, they aren’t even up in real terms.

Dean Baker: Contrary to what you read in the papers, food is not expensive. In the last decade, food prices have risen 27.3 percent, the overall inflation rate has been 32.0 percent. The average hourly wage has risen 43.3 percent.

Why weren’t reporters telling us about all the people who couldn’t afford food ten years ago?

Your periodic reminder that an hour of unskilled labor buys increasing amounts of higher quality food over time. Food prices are something people notice and feel. They look for the ones that go up, not the ones that go down. The fact that food is cheaper in real terms, and also better, is seemingly not going to change that.
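
Baker’s figures translate directly into that claim; a quick check of the arithmetic:

```python
food, cpi, wage = 1.273, 1.320, 1.433  # 10-year growth factors from the quote

print(f"food vs. inflation: {food / cpi - 1:+.1%}")   # -3.6%: cheaper in real terms
print(f"food vs. wages:     {food / wage - 1:+.1%}")  # -11.2%: an hour of work
                                                      # buys ~11% more food
```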

I sympathize. It gets to me too, when I see the prices of staples like milk. Yet I also know that those additional costs are trivial, and that the time I spend to get good prices is either spent on principle, or time wasted.

I do not sympathize with those warning about ‘price gouging’ or attempting to impose anything remotely resembling price controls, especially on food. If this actually happens, the downside risk of shortages here should not be underestimated.

John Cochrane fully and correctly bites all the bullets and writes Praise for Price Gouging.

These two paragraphs are one attempt among many attempts to explain why letting prices adjust when demand rises is good, actually.

John Cochrane: But what about people who can’t “afford” $10 gas and just have to get, say, to work? Rule number one of economics is, don’t distort prices in order to transfer income. In the big scheme of things, even a month of having to pay $10 for gas is not a huge change in the distribution of lifetime resources available to people. “Afford” is a squishy concept. You say you can’t afford $100 to fill your tank. But if I offer to sell you a Porsche for $100 you might suddenly be able to “afford” it.

But more deeply, if distributional consequences of a shock are important, then hand out cash. So long as everyone faces the same prices. Give everyone $100 to “pay for gas.” But let them keep the $100 or spend it on something else if they look at the $10 price of gas and decide it’s worth inconvenient substitutes like car pooling, public transit, bicycles, or not going, and using the money on something else instead.

The post contains many excellent arguments, yet I predict (as did others) this will persuade approximately zero people, the same way John failed to persuade his mother as described in the post. People simply don’t want to hear it.

The FTC’s attempted ban on noncompetes is blocked nationwide for now, with a court in the Fifth Circuit (drink?) setting it aside and holding it unlawful because the FTC lacks the statutory authority. The FTC does quite obviously lack the statutory authority. Also, as noted earlier, IANAL, and I doubt courts still think like this, but to me this seems like a retroactive abrogation of contracts, and de facto a rather large taking without compensation in many cases. And That’s Terrible?

Your periodic reminder that no matter how tough it might be today in various ways, we used to be poor, and work hard, and have little, and yes, that kind of sucked.

Eric Nelson: These posts make me crazy. My father worked two jobs and my mother worked one to support 3 kids. We lived a life this woman would consider abuse now. Canned food, 1 TV, no microwave, no computers, no vacations, no air conditioning, beater cars, no dentist, no brand name anything.

In the mid-80s, as a teen, I worked 365 days a year at 5am to make money so I could afford Converse sneakers, cassette tapes, and college applications.

Mark Miller tries to defend this as legit, saying that everyone clipped coupons and bought what was in season and bought their clothes all on sale and ate all their meals at home, but it was all on one income so it was great.

Even if that was typical, that sounds to me like one person earning income and two people working. Except the second person’s job was optimizing to save money, and doing various tasks we now get to largely outsource to either other people or technology. A lot of that saving money was navigating price discrimination schemes. All of it amounted to an extremely low effective wage for the extra non-job work. People went to extreme lengths, like yard sales, to raise even a little extra cash because they had no good way to use that time productively and turn it into money.

As usual, a lot of responses are not aware that home ownership rates are essentially unchanged versus older statistics, both overall and by generation at a given age, and that despite fewer children the new homes are bigger.

So, once again: We are vastly richer than we were. We consume vastly superior quality goods, in larger quantities, with more variety, even if you exclude pure technology and electronics (televisions, computers, phones and so on), including housing. An hour of work buys you vastly more of all that. Those who dispute this are flat out wrong.

The part I most appreciate: All things entertainment and information and communication, which are hugely important, are night and day different. You can access the world’s information for almost free. You can play the best games in history for almost free. You can watch infinite content for free. Those used to be damn expensive.

However, the ‘standard of living’ going up also means that what we consider the bare necessities, the things we must buy, have also gone way up. As Eric points out, we would not find what those ‘comfortable’ families of yesteryear had to be remotely acceptable, in any area.

In many cases, it would be illegal to offer something that shoddy, or be considered neglect to deny such goods. In others, you would simply be mocked and disheartened or be unable to properly function.

Then there are the things that actually did get vastly worse or more expensive. Housing (we end up with more of it anyway, because we pay the price), healthcare, and education are vastly more expensive, mostly just to stay in place. On top of that, various forms of social connection are much harder to get: friendship and community are in freefall and difficult to find even with great effort, atomization and loneliness are up, attention is down while addiction is up, dating is more toxic and difficult, people feel more constrained, freedom for children has collapsed, and expectations for resources invested in children, especially time, are way up. As a result of all this, the felt ability to raise children is way down.

As a result, many people do find life tougher, and feel less able to accomplish reasonable life goals including having a family and children. And That’s Terrible. But we need to focus on the actual problems, not on an imagined idyllic past.

In particular, we need to be willing to let people live more like they did in the actual past, if they prefer to do that. Rather than rendering it illegal to live the way Eric Nelson grew up, we should enable and not look down upon living cheap and giving up many of the modern comforts if that is their best option.

The Revolution of Rising Expectations, and the Revolution of Rising Requirements, are the key to understanding what has happened, and why people struggle so much.

Arnold Kling questions the productivity statistics, citing all the measurement problems, and notes that life in 2024 is dramatically better than 1974 in many ways. Yes, our physical stuff is dramatically better in ways people do not appreciate.

The big problem is that most of the examples here are also examples of new stuff that raises our expected standard of living. We went from torture root canals to painless root canals, that is great, so is the better food and entertainment and phones and so on, but that does not allow us to make ends meet or raise a family. Whereas other aspects that are not our ‘stuff’ have also gotten worse and more onerous, or more expensive, or we are forced to purchase a lot more of them.

I would definitely take 2024 over any previous time (at least ignoring AI existential risks), but the downsides are quite real. People miss something real, but they don’t know how to properly describe what they have lost, and glom onto a false image.

That’s why I emphasize the need to consider what an accurate ‘Cost of Thriving’ index would look like.

And that’s more often than not a good thing.

We instead get this persistent claim that ‘we don’t make things like we used to,’ that the older furniture and appliances were better and especially more durable. J.D. Vance is the latest to make such claims. Is it true?

Jeremy Horpedahl has some things to say about J.D.’s 40 year old fridge.

Jeremy Horpedahl: I don’t know anything about whether fridges of the past preserved lettuce longer, but let me tell you a few things about 40-year-old fridges in this here thread. Most important point: fridges are *much cheaper* today. How much? Almost 5 times cheaper…

Let’s compare apples to apples as much as we can.

In 1984, you could buy a 25.7 cu foot side-by-side fridge/freezer with water/ice in the door for $1,359.99.

Today, you can get a similar model at Home Depot for $998

That’s right… it’s cheaper today in *nominal terms.*

But wages also increased since 1984, from about $8.50 to $30

So to buy the 1984 fridge new, you would have had to work about 160 hours.

With 160 hours of work today, you would earn $4,800, enough to buy almost FIVE FRIDGES today.

But wait, there’s more!

Sears estimated the annual cost to operate that fridge was $116.

Using the national average electricity price in 1984 (8.2 cents/kWh), it used about 1,415 kWh. To operate the 1984 fridge today (assuming no efficiency loss) would cost about $250/year.

But the 2024 fridge only uses about 647 kWh per year — only about 45% as much electricity (the improvement over 1970s fridges is even more dramatic). That will cost $115 today.

In other words, if you still have a 40-year old fridge, you could throw it out and buy a new one, and in about 7.5 years it will have paid for itself in lower energy costs (assuming current prices, but also assuming that 1984 fridge is still as efficient as it was new).

But wait, you might ask, will that new fridge even last 7.5 years? Doesn’t stuff wear out faster today? Data is a little harder to find (anecdotes are easy to find!), but using the Residential Energy Consumption Survey we can see this is a bit overstated.

The oldest cutoff is 20 years. In 1990, just 8.4% of households used a fridge that was 20 years or older. In 2020, this was slightly lower: 5.5%.

But 20-year-old fridges were never common. What about fridges 10 years or older? Again a decline, from 38.2% to 35.1% — but not dramatically different.

It’s fine if J.D. Vance enjoys his 1984 fridge, but this doesn’t mean “economics is fake.”

Bottom line on this question: if you were offered a $5,000 fridge, costing $250 per year in electricity to operate, keeping vegetables fresh slightly longer (4 weeks instead of 3), but it was guaranteed to last 50 years, would you buy it?

There are many valid complaints about 2024. Our appliances are not one of them.
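
Horpedahl’s numbers hang together arithmetically; a back-of-the-envelope check (the roughly 17.7¢/kWh electricity rate is inferred from the thread’s own $250 estimate, not an official figure):

```python
price_1984, wage_1984 = 1359.99, 8.50
price_2024, wage_2024 = 998.00, 30.00

hours_1984 = price_1984 / wage_1984                # ~160 hours of work in 1984
fridges_now = hours_1984 * wage_2024 / price_2024  # ~4.8 fridges for those hours

kwh_old = 116 / 0.082        # ~1,415 kWh implied by $116/yr at 8.2 cents/kWh
rate_now = 250 / kwh_old     # ~17.7 cents/kWh, inferred from the thread
cost_new = 647 * rate_now    # ~$114/yr to run the 2024 model
payback = price_2024 / (250 - cost_new)  # ~7.4 years to recoup a new fridge
```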

Furniture might be one area where the old stuff is competitive, but my guess is this is also a failure to account for real prices, or how much we actually (don’t) value durability.

Another study finds that public disclosure of wages causes wage suppression.

Abstract (Junyoung Jeong): This study examines whether wage disclosure assists employers in suppressing wages.

These findings suggest that wage disclosure enables employers in concentrated markets to tacitly coordinate and suppress wages.

I continue to think this gets the mechanism wrong. Wage disclosure does not primarily assist employers in suppressing wages. What it primarily does, as I’ve discussed before, is give employers much stronger incentive to suppress wages. Everyone is constantly comparing their salary to the salary of others, and comparing that to the status hierarchy (or sometimes to productivity or market value or seniority or what not).

Thus, before, it often would make sense to pay someone more because they were worth the money, or because they had other offers, or they negotiated hard. Now, if you do that, everyone else will get mad, treat that as a status and productivity claim, and also use that information against you. This hits especially hard when you have a 10x programmer or other similarly valuable employee. People won’t accept them getting what they are worth.

I first encountered this watching the old show L.A. Law. One prospective associate asks for more money than is typical. He’s Worth It, so they want to give it to him, but they worry about what would happen if the other more senior associates found out, as they’d either have to match the raise, or the contradiction between wages and social hierarchy would cause a rebellion.

Exactly. It is because wage disclosure allows comparison and coordination on wages, and allows employees to complain when they are treated ‘unfairly,’ that it ends up raising the cost of offering higher wages, thus suppressing them. Chalk up one more for ‘you can make people better off on average, or you can make things look fair.’

Due to a combination of factors but it is claimed primarily due to unions, it costs about five times as much to put the same show on in New York as it does in London, with exactly the same cast and set. It is amazing that such additional costs (however they arise) can be overcome and we still put on lots of shows here.

National industrial concentration is up, in the sense that within-industry concentration is up, but the shift from manufacturing to services means that local employment concentration is down. I notice I don’t know why we should care, exactly.

The 2002 Bush steel tariffs cost more jobs due to high steel prices than they protected, including losing jobs in steel processing. Long term, it makes the whole industry uncompetitive as well, same as our shipbuilding under the Jones Act.

Department of Justice lawsuits for alleged fraud in FHA mortgages caused a 20% reduction in subsequent FHA mortgage lending in the area, concentrated among the most heavily litigated-against banks. Demand, meet supply. What else would you expect? The banks are acting rationally; if you wanted the banks to issue mortgages to poor people, you wouldn’t sue them for doing that.

SwiftOnSecurity thread about people’s delusions about car insurance, thinking it is a magical incantation that fixes what is wrong rather than a commercial product. Patrick McKenzie has additional thoughts about how to deal with such delusions as a customer service department.

Tyler Cowen talks eight things Republicans get wrong about free trade. He is of course right about all of them. It is especially dismaying that we might get highly destructive tariffs soon, especially on intermediate goods. On the margin shouting into the void like this can only be helpful.

New study on changes in entrepreneurship on online platforms like Shopify, with minority and female entrepreneurs especially appreciating the support such platforms bring despite the associated costs. A lot of businesses saw strong growth.

A challenge: Does it ‘conserve resources’ if the conserved resources fall into the hands of someone who would waste them?

From Chris Freiman.

Market to fire the CEO you say?

It is not as strong as it looks, because in this case everyone knew to fire the CEO. I knew the market wanted this CEO fired, and I don’t care at all. Still, a strong case.

New York City’s biggest taxi insurer, insuring 60% of taxis including rideshare vehicles, is insolvent, risking a crisis. Other than ‘liabilities exceeding premiums’ the article doesn’t explain how this happened? Medallion values crashed but that was a while ago and shouldn’t be causing this. They worry that ‘drivers will face increased premiums’ but if the insurer offering current premiums is now insolvent, presumably they were indeed not charging enough.

What, me leave California because they tax me over 10% of my gross income?

Roon: the only reason to leave California for tax reasons is if you believe you’ve made most of the money you’ll ever make in the past.

Which is fine but there’s a vaguely giving up vibe to it.

So first off, yeah, can’t leave, the vibes would be off. Classic California.

Second, obviously there are other good reasons to want to live somewhere else, and 10%+ of gross income is a huge cost, especially if you do plan to earn a lot more. Of course it is a good reason to leave. Roon is essentially assuming that one can only make money in California, that it would obviously be a much bigger hit than 10% (really 15%+ given other taxes) to be somewhere else. Why assume that? Especially since most people do not work in AI.

The Less-Efficient Market Hypothesis, a paper by Clifford Asness.

Abstract: Market efficiency is a central issue in asset pricing and investment management, but while the level of efficiency is often debated, changes in that level are relatively absent from the discussion.

I argue that over the past 30+ years markets have become less informationally efficient in the relative pricing of common stocks, particularly over medium horizons.

I offer three hypotheses for why this has occurred, arguing that technologies such as social media are likely the biggest culprit.

Looking ahead, investors willing to take the other side of these inefficiencies should rationally be rewarded with higher expected returns, but also greater risks. I conclude with some ideas to make rational, diversifying strategies easier to stick with amid a less-efficient market.

The Efficient Market Hypothesis is Now More False. I find the evidence here for less efficient markets unconvincing. I do suspect that markets are indeed less long-term efficient, for other reasons, including ‘the reaction to AI does not make sense’ and also the whole meme stock craze.



Roblox announces AI tool for generating 3D game worlds from text

ease of use —

New AI feature aims to streamline game creation on popular online platform.


On Friday, Roblox announced plans to introduce an open source generative AI tool that will allow game creators to build 3D environments and objects using text prompts, reports MIT Tech Review. The feature, which is still under development, may streamline the process of creating game worlds on the popular online platform, potentially opening up more aspects of game creation to those without extensive 3D design skills.

Roblox has not announced a specific launch date for the new AI tool, which is based on what it calls a “3D foundational model.” The company shared a demo video of the tool where a user types, “create a race track,” then “make the scenery a desert,” and the AI model creates a corresponding model in the proper environment.

The system will also reportedly let users make modifications, such as changing the time of day or swapping out entire landscapes, and Roblox says the multimodal AI model will ultimately accept video and 3D prompts, not just text.

A video showing Roblox’s generative AI model in action.

The 3D environment generator is part of Roblox’s broader AI integration strategy. The company reportedly uses around 250 AI models across its platform, including one that monitors voice chat in real time to enforce content moderation, which is not always popular with players.

Next-token prediction in 3D

Roblox’s 3D foundational model approach involves a custom next-token prediction model—a foundation not unlike the large language models (LLMs) that power ChatGPT. Tokens are fragments of text data that LLMs use to process information. Roblox’s system “tokenizes” 3D blocks by treating each block as a numerical unit, which allows the AI model to predict the most likely next structural 3D element in a sequence. In aggregate, the technique can build entire objects or scenery.
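
Roblox has not published its tokenization scheme, but the idea of treating each block as a numerical unit can be shown with a toy encoding (the grid size and block vocabulary below are invented for illustration):

```python
BLOCK_TYPES = ["air", "road", "sand", "rock", "grass"]  # invented vocabulary

def block_to_token(x: int, y: int, z: int, block: str, grid: int = 32) -> int:
    """Map a (position, block type) pair to one integer token, analogous to
    how a text LLM maps word fragments to token IDs."""
    cell = (x * grid + y) * grid + z  # flatten the X, Y, Z coordinate
    return cell * len(BLOCK_TYPES) + BLOCK_TYPES.index(block)

def token_to_block(token: int, grid: int = 32):
    cell, b = divmod(token, len(BLOCK_TYPES))
    xy, z = divmod(cell, grid)
    x, y = divmod(xy, grid)
    return (x, y, z), BLOCK_TYPES[b]

# A scene becomes a sequence of integers, and "next-token prediction" means
# predicting the most likely next structural element in that sequence.
scene = [block_to_token(x, 0, z, "sand") for x in range(4) for z in range(4)]
assert token_to_block(scene[0]) == ((0, 0, 0), "sand")
```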

Anupam Singh, vice president of AI and growth engineering at Roblox, told MIT Tech Review about the challenges in developing the technology. “Finding high-quality 3D information is difficult,” Singh said. “Even if you get all the data sets that you would think of, being able to predict the next cube requires it to have literally three dimensions, X, Y, and Z.”

According to Singh, lack of 3D training data can create glitches in the results, like a dog with too many legs. To get around this, Roblox is using a second AI model as a kind of visual moderator to catch the mistakes and reject them until the proper 3D element appears. Through iteration and trial and error, the first AI model can create the proper 3D structure.
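
That generate-and-reject loop is a standard pattern; a minimal sketch with invented function names:

```python
def generate_with_critic(generate, critic_accepts, max_tries: int = 10):
    """Sample from a generator model until a second 'visual moderator' model
    accepts the output, as in the dog-with-too-many-legs example above."""
    for _ in range(max_tries):
        candidate = generate()         # propose a 3D element
        if critic_accepts(candidate):  # second model screens for glitches
            return candidate
    raise RuntimeError("no acceptable element produced within the retry budget")
```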

Notably, Roblox plans to open-source its 3D foundation model, allowing developers and even competitors to use and modify it. But it’s not just about giving back—open source can be a two-way street. Choosing an open source approach could also allow the company to utilize knowledge from AI developers if they contribute to the project and improve it over time.

The ongoing quest to capture gaming revenue

News of the new 3D foundational model arrived at the 10th annual Roblox Developers Conference in San Jose, California, where the company also announced an ambitious goal to capture 10 percent of global gaming content revenue through the Roblox ecosystem, and the introduction of “Party,” a new feature designed to facilitate easier group play among friends.

In March 2023, we detailed Roblox’s early foray into AI-powered game development tools, as revealed at the Game Developers Conference. The tools included a Code Assist beta for generating simple Lua functions from text descriptions, and a Material Generator for creating 2D surfaces with associated texture maps.

At the time, Roblox Studio head Stef Corazza described these as initial steps toward “democratizing” game creation, with plans for AI systems that are now coming to fruition. The 2023 tools focused on discrete tasks like code snippets and 2D textures, laying the groundwork for the more comprehensive 3D foundational model announced at this year’s Roblox Developers Conference.

The upcoming AI tool could potentially streamline content creation on the platform, possibly accelerating Roblox’s path toward its revenue goal. “We see a powerful future where Roblox experiences will have extensive generative AI capabilities to power real-time creation integrated with gameplay,” Roblox said in a statement. “We’ll provide these capabilities in a resource-efficient way, so we can make them available to everyone on the platform.”



iPhone 16 gets two new buttons and a new camera layout

Flagship phones —

The 16 is positioned as the first non-Pro iPhone optimized for generative AI.

  • The iPhone 16 has a new rear camera arrangement. (Apple)

  • There are two new buttons: Action and Capture. (Apple)

Apple’s new iPhone 16 isn’t a revolution by any means, but it’s a solid upgrade with a handful of new features (including two new physical buttons), plus better performance and Apple Intelligence support.

The design is similar to the iPhone 15 but with a vertical camera arrangement on the back, which enables spatial photos and videos. A new camera control button can be clicked to take a photo and is also touch-sensitive, allowing you to slide your finger across it to tweak image settings like zoom. It can tell the difference between a full click and a lighter press, which lets you access customization features instead of taking a picture right away.

That’s not the only new button; the configurable Action button has arrived on the iPhone 16. It was introduced in the Pro models last year and can be used for various predetermined purposes or assigned to work with Shortcuts.

There’s a new chip, too: the A18. It has a new 16-core Neural Engine, which is up to two times faster than what we saw in the iPhone 15. The memory subsystem has 17 percent more memory bandwidth. These features allow it to support Apple’s generative models and Apple Intelligence, which was exclusive to the Pro phones last year. The 3 nm chip has a 6-core CPU with two performance cores and four efficiency cores. It’s up to 30 percent faster than the CPU in the iPhone 15, Apple claims.

There’s a new, ray-tracing-enabled, 5-core GPU that Apple says is up to 40 percent faster. Apple says a new thermal design allows up to 30 percent higher sustained performance for gaming, and that the iPhone 16 can run AAA games like Assassin’s Creed Mirage that only worked on the 15 Pro last year.

Thanks to this chip, Apple says the iPhone 16 is the first non-Pro iPhone to support the upcoming suite of generative AI features the company calls Apple Intelligence. This will allow features like taking pictures of restaurants to pull up a menu or identifying dog breeds on the street by pointing the camera their way. Apple is calling those two examples “Visual Intelligence,” and it leans on the camera control feature of the iPhone 16, so that’s unique to this new model.

Speaking of the camera, there aren’t a ton of significant improvements on the hardware side over the iPhone 15. Apple talked up the 48-megapixel wide-angle camera, but most of the features applied to the previous model. There’s a new ultra-wide camera that has a larger aperture, with up to 2.6 times more light capture. It also includes macro photography and spatial video capture support, which was previously limited to the Pro phones.

The screen comes in the same sizes as before (6.1 inches for the regular iPhone 16, 6.7 for the Plus), but it can reach 2,000 nits of brightness in a sunny environment and go as low as 1 nit in the dark. There’s a 50 percent tougher ceramic shield on the screen, too.

The iPhone 16 starts at $799 (128GB), and the Plus model starts at $899 (also 128GB). Both are available for pre-order this coming Friday and ship September 20.




After another Boeing letdown, NASA isn’t ready to buy more Starliner missions

Boeing’s Starliner spacecraft sits atop a United Launch Alliance Atlas V rocket before liftoff in June to begin the Crew Flight Test.

NASA is ready for Boeing’s Starliner spacecraft, stricken with thruster problems and helium leaks, to leave the International Space Station as soon as Friday, wrapping up a disappointing test flight that has clouded the long-term future of the Starliner program.

Astronauts Butch Wilmore and Suni Williams, who launched aboard Starliner on June 5, closed the spacecraft’s hatch Thursday in preparation for departure Friday. But it wasn’t what they envisioned when they left Earth on Starliner three months ago. Instead of closing the hatch from a position in Starliner’s cockpit, they latched the front door to the spacecraft from the space station’s side of the docking port.

The Starliner spacecraft is set to undock from the International Space Station at 6:04 pm EDT (22:04 UTC) Friday. If all goes according to plan, Starliner will ignite its braking rockets at 11:17 pm EDT (03:17 UTC) for a minute-long burn to target a parachute-assisted, airbag-cushioned landing at White Sands Space Harbor, New Mexico, at 12:03 am EDT (04:03 UTC) Saturday.

The Starliner mission set to conclude this weekend was the spacecraft’s first test flight with astronauts, running seven years behind Boeing’s original schedule. But due to technical problems with the spacecraft, it won’t come home with the two astronauts who flew it into orbit back in June, leaving some of the test flight’s objectives incomplete.

This outcome is, without question, a setback for NASA and Boeing, which must resolve two major problems in Starliner’s propulsion system—supplied by Aerojet Rocketdyne—before the capsule can fly with people again. NASA officials haven’t said whether they will require Boeing to launch another Starliner test flight before certifying the spacecraft for the first of up to six operational crew missions on Boeing’s contract.

A noncommittal from NASA

For over a decade, the space agency has worked with Boeing and SpaceX to develop two independent vehicles to ferry astronauts to and from the International Space Station (ISS). SpaceX launched its first Dragon spacecraft with astronauts in May 2020, and six months later, NASA cleared SpaceX to begin flying regular six-month space station crew rotation missions.

Officially, NASA has penciled in Starliner’s first operational mission for August 2025. But the agency set that schedule before realizing Boeing and Aerojet Rocketdyne would need to redesign seals and perhaps other elements in Starliner’s propulsion system.

No one knows how long that will take, and NASA hasn’t decided if it will require Boeing to launch another test flight before formally certifying Starliner for operational missions. If Starliner performs flawlessly after undocking and successfully lands this weekend, perhaps NASA engineers can convince themselves Starliner is good to go for crew rotation flights once Boeing resolves the thruster problems and helium leaks.

In any event, the schedule for launching an operational Starliner crew flight in less than a year seems improbable. Aside from the decision on another test flight, the agency also must decide whether it will order any more operational Starliner missions from Boeing. These “post-certification missions” will transport crews of four astronauts between Earth and the ISS, orbiting roughly 260 miles (420 kilometers) above the planet.

NASA has only given Boeing the “Authority To Proceed” for three of its six potential operational Starliner missions. This milestone, known as ATP, is a decision point in contracting lingo where the customer—in this case, NASA—places a firm order for a deliverable. NASA has previously said it awards these task orders about two to three years prior to a mission’s launch.

Josh Finch, a NASA spokesperson, told Ars that the agency hasn’t made any decisions on whether to commit to any more operational Starliner missions from Boeing beyond the three already on the books.

“NASA’s goal remains to certify the Starliner system for crew transportation to the International Space Station,” Finch said in a written response to questions from Ars. “NASA looks forward to its continued work with Boeing to complete certification efforts after Starliner’s uncrewed return. Decisions and timing on issuing future authorizations are on the work ahead.”

This means NASA’s near-term focus is on certifying Starliner so that Boeing can start executing its commercial crew contract. The space agency hasn’t determined when or if it will authorize Boeing to prepare for any Starliner missions beyond the three already on the books.

When it awarded commercial crew contracts to SpaceX and Boeing in 2014, NASA pledged to buy at least two operational crew flights from each company. The initial contracts from a decade ago had options for as many as six crew rotation flights to the ISS after certification.

Since then, NASA has extended SpaceX’s commercial crew contract to cover as many as 14 Dragon missions with astronauts, and SpaceX has already launched eight of them. The main reason for this contract extension was to cover NASA’s needs for crew transportation after delays with Boeing’s Starliner, which was originally supposed to alternate with SpaceX’s Dragon for human flights every six months.


nasa-wants-starliner-to-make-a-quick-getaway-from-the-space-station

NASA wants Starliner to make a quick getaway from the space station

WSSHing for success —

Starliner is set to land at White Sands Space Harbor in New Mexico shortly after midnight.


Enlarge / Boeing’s Starliner spacecraft is set to undock from the International Space Station on Friday evening.

NASA

Boeing’s Starliner spacecraft will gently back away from the International Space Station Friday evening, then fire its balky thrusters to rapidly depart the vicinity of the orbiting lab and its nine-person crew.

NASA asked Boeing to adjust Starliner’s departure sequence so the spacecraft gets away from the space station faster, easing the workload on the thrusters and lowering the risk of the overheating that caused some of the control jets to drop offline as the spacecraft approached the outpost for docking in June.

The action begins at 6:04 pm EDT (22:04 UTC) on Friday, when hooks in the docking mechanism connecting Starliner with the International Space Station (ISS) will open, and springs will nudge the spacecraft away from its mooring on the forward end of the massive research complex.

Around 90 seconds later, a set of forward-facing thrusters on Starliner’s service module will fire in a series of 12 pulses over a few minutes to drive the spacecraft farther away from the space station. These maneuvers will send Starliner on a trajectory over the top of the ISS, then behind it until it is time for the spacecraft to perform a deorbit burn at 11:17 pm EDT (03:17 UTC) to target landing at White Sands Space Harbor, New Mexico, shortly after midnight EDT (10 pm local time at White Sands).

How to watch, and what to watch for

The two videos embedded below will show NASA TV’s live coverage of the undocking and landing of Starliner.

Starliner is leaving its two-person crew behind on the space station after NASA officials decided last month they did not have enough confidence in the spacecraft’s reaction control system (RCS) thrusters, which are used to make fine adjustments to the capsule’s trajectory and orientation in orbit. Five of the 28 RCS thrusters on Starliner’s service module failed during the craft’s rendezvous with the space station three months ago. Subsequent investigations showed overheating could cause Teflon seals in a poppet valve to swell, restricting the flow of propellant to the thrusters.

Engineers recovered four of the five thrusters after they temporarily stopped working, but NASA officials couldn’t be sure the thrusters would not overheat again on the trip home. NASA decided it was too risky for Starliner to come home with astronauts Butch Wilmore and Suni Williams, who launched on Boeing’s crew test flight on June 5, becoming the first people to fly on the commercial capsule. They will remain aboard the station until February, when they will return to Earth on a SpaceX Dragon spacecraft.

The original flight plan, had Wilmore and Williams been aboard Starliner for the trip home, called for the spacecraft to make a gentler departure from the ISS, allowing engineers to fully check out the performance of its navigation sensors and test the craft’s ability to loiter in the vicinity of the station for photographic surveys of its exterior.

“In this case, what we’re doing is the break-out burn, which will be a series of 12 burns, each not very large, about 0.1 meters per second (0.2 mph) and that’s just to take the Starliner away from the station, and then immediately start going up and away, and eventually it’ll curve around to the top and deorbit from above the station a few orbits later,” said Anthony Vareha, NASA’s flight director overseeing ISS operations during Starliner’s undocking sequence.

Astronauts won’t be inside Starliner’s cockpit to take manual control in the event of a major problem, so NASA managers want the spacecraft to get away from the space station as quickly as possible.

On this path, Starliner will exit the so-called approach ellipsoid, a 2.5-by-1.25-by-1.25-mile (4-by-2-by-2-kilometer) invisible boundary around the orbiting laboratory, about 20 to 25 minutes after undocking, NASA officials said. That’s less than half the time Starliner would normally take to leave the vicinity of the ISS.
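For a rough sense of scale, some back-of-the-envelope arithmetic on the figures above (our numbers, not NASA's, ignoring orbital mechanics and the spring-imparted drift) shows they hang together. The twelve pulses sum to

$$\Delta v \approx 12 \times 0.1~\mathrm{m/s} = 1.2~\mathrm{m/s},$$

and coasting at roughly that speed for 20 to 25 minutes covers

$$d \approx 1.2~\mathrm{m/s} \times (20\text{–}25) \times 60~\mathrm{s} \approx 1.4\text{–}1.8~\mathrm{km},$$

comparable to the 1-to-2-kilometer half-dimensions of the approach ellipsoid.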

“It’s a quicker way to get away from the station, with less stress on the thrusters,” said Steve Stich, NASA’s commercial crew program manager. “Essentially, once we open the hooks, the springs will push Starliner away and then we’ll do some really short thruster firings to put us on a trajectory that will take us above the station and behind, we’ll be opening to a nice range to where we can execute the deorbit burn.”

In the unlikely event of a more significant series of thruster failures, the springs that push Starliner away from the station should be enough to ensure there’s no risk of collision, according to Vareha.

“Then, after that, we really are going to just stay in some very benign attitudes and not fire the thrusters very much at all,” Stich said.

Starliner will need to use the RCS thrusters again to point itself in the proper direction to fire four larger rocket engines for the deorbit burn. Once this burn is complete, the RCS thrusters will reorient the spacecraft to jettison the service module to burn up in the atmosphere. The reusable crew module relies on a separate set of thrusters during reentry.

Finally, the capsule will approach the landing zone in New Mexico from the southwest, flying over the Pacific Ocean and Mexico before deploying three main parachutes and airbags to cushion its landing at White Sands. Boeing and NASA teams there will meet the spacecraft and secure it for transport by road back to Kennedy Space Center in Florida for refurbishment.

Meanwhile, engineers must resolve the causes of the thruster problems and helium leaks that plagued the Starliner test flight before it can fly astronauts again.

NASA wants Starliner to make a quick getaway from the space station Read More »

at&t-sues-broadcom-for-refusing-to-renew-perpetual-license-support

AT&T sues Broadcom for refusing to renew perpetual license support

AT&T vs. Broadcom —

Ars cited in lawsuit AT&T recently filed against Broadcom.

Signage is displayed outside the Broadcom offices on June 7, 2018 in San Jose, California.

AT&T filed a lawsuit against Broadcom on August 29 accusing it of seeking to “retroactively change existing VMware contracts to match its new corporate strategy.” The lawsuit, spotted by Channel Futures, concerns claims that Broadcom is not letting AT&T renew support services for previously purchased perpetual VMware software licenses unless AT&T meets certain conditions.

Broadcom closed its $61 billion VMware acquisition in November and swiftly enacted sweeping changes. For example, in December, Broadcom announced the end of VMware perpetual license sales in favor of subscriptions to bundled products. Combined with higher core requirements per CPU subscription, the changes prompted complaints that VMware products were becoming more expensive to use.

AT&T uses VMware software to run 75,000 virtual machines (VMs) across about 8,600 servers, per the complaint filed at the Supreme Court of the State of New York [PDF]. It reportedly uses the VMs to support customer service operations and for operations management efficiency.

AT&T feels it should be granted a one-year renewal for VMware support services, which it claimed would be the second of three one-year renewals to which its contract entitles it. According to AT&T, support services are critical in case of software errors and for upkeep, like security patches, software upgrades, and daily maintenance. Without support, “an error or software glitch” could result in disruptive failure, AT&T said.

AT&T claims Broadcom refuses to renew support and plans to terminate AT&T’s VMware support services on September 9. It asked the court to stop Broadcom from cutting VMware support services and for “further relief” deemed necessary. The New York Supreme Court has told Broadcom to respond within 20 days of the complaint’s filing.

In a statement to Ars Technica, an AT&T spokesperson said: “We have filed this complaint to preserve continuity in the services we provide and protect the interests of our customers.”

AT&T accuses Broadcom of trying to make it spend millions on unwanted software

AT&T’s lawsuit claims that Broadcom has refused to renew support services for AT&T’s perpetual licenses unless AT&T agrees to what it deems are unfair conditions that would cost it “tens of millions more than the price of the support services alone.”

The lawsuit reads:

Specifically, Broadcom is threatening to withhold essential support services for previously purchased VMware perpetually licensed software unless AT&T capitulates to Broadcom’s demands that AT&T purchase hundreds of millions of dollars’ worth of bundled subscription software and services, which AT&T does not want.

After buying VMware, Broadcom consolidated VMware’s offering from about 8,000 SKUs to four bundles, per Channel Futures. AT&T claims these subscription offerings “would impose significant additional contractual and technological obligations.” AT&T claims it might have to invest millions to “develop its network to accommodate the new software.”

VMware and AT&T’s agreement precludes “Broadcom’s attempt to bully AT&T into paying a king’s ransom for subscriptions AT&T does not want or need, or risk widespread network outages,” AT&T reckons.

In its lawsuit, AT&T claims “bullying tactics” were expected from Broadcom post-acquisition. Quoting Ars Technica reporting, the lawsuit claims that “Broadcom wasted no time strong-arming customers into highly unfavorable subscription models marked by ‘steeply increased prices[,]’ ‘refusing to maintain security conditions for perpetual license[d] [software,]’ and threatening to cut off support for existing products already licensed by customers—exactly as it has done here.”

“Without the Support Services, the more than 75,000 virtual machines operated by AT&T—impacting millions of its customers worldwide—would all be just an error or software glitch away from failing,” AT&T’s lawsuit says.

Broadcom’s response

According to the lawsuit, Broadcom contends that AT&T is not eligible to renew support services for another year because, in Broadcom’s view, AT&T was supposed to exercise all three one-year support service renewals by the end of 2023.

In a statement to Ars Technica, a Broadcom company spokesperson said:

Broadcom strongly disagrees with the allegations and is confident we will prevail in the legal process. VMware has been moving to a subscription model, the standard for the software industry, for several years – beginning before the acquisition by Broadcom. Our focus will continue to be providing our customers choice and flexibility while helping them address their most complex technology challenges.

Communications for Office of the President, first responders could be affected

AT&T’s lawsuit emphasizes that should it lose support for VMware offerings, communications for the Office of the President and first responders would be at risk. AT&T claims that about 22,000 of its 75,000 VMs relying on VMware “are used in some way to support AT&T’s provision of services to millions of police officers, firefighters, paramedics, emergency workers and incident response team members nationwide… for use in connection with matters of public safety and/or national security.”

When reached for comment, AT&T’s spokesperson declined to comment on AT&T’s backup plan for minimizing disruption should it lose VMware support in a few days.

Ultimately, the case turns on “multiple documents involved, and resolution of the dispute will require interpretation as to which clauses prevail,” as Benjamin B. Kabak, a partner practicing technology and outsourcing law at Loeb & Loeb LLP in New York, points out.

AT&T sues Broadcom for refusing to renew perpetual license support Read More »

what-to-expect-from-apple’s-“it’s-glowtime”-event

What to expect from Apple’s “It’s Glowtime” event


Enlarge / Apple’s event will likely discuss Apple Intelligence, though that’s not going to launch until later in the year with iOS 18.1

Apple

For years, Apple’s September event has focused almost exclusively on new flagship iPhones and new Apple Watch models. Once in a while, other second-tier products make an appearance. And in recent cycles, the Mac and high-end iPads had their shining moment later in the year—often in October or November.

We expect the same to happen this time. You can almost certainly count on new iPhones and Watches. As for what else to expect: well, no Macs, but there are a couple of interesting possibilities.

Here’s what we expect to see next week.

iPhone 16 and 16 Pro

Gone are the days of radical changes to the iPhone; the last dramatic redesign was the iPhone X in 2017. Since then, Apple has iterated a little bit each year—never enough to drive yearly upgrades, but perhaps enough to entice consumers with phones that are three years old or so.

The iPhone 16 and 16 Pro are expected to continue this pacing, with a grab bag of improvements to existing features but nothing too radical.

The only notable design change that has been rumored is the introduction of the “Capture” button on all models, which will allow taking pictures without using the touchscreen. Something similar could already be done with the Action button on last year’s iPhone 15 Pro, and that Action button is expected to come to all iPhone 16 models (not just the Pro) this year.

But adding a Capture button frees the Action button up for other things, and the Capture button is expected to produce different results depending on how you press it, making it more useful.

The iPhone 16 and iPhone 16 Plus rear camera arrangement will switch to two vertically aligned lenses instead of the diagonal arrangement of the previous model. Apart from that and the new buttons, there will be no noticeable design changes in the non-Pro phones this year.

The iPhone 16 Pro and iPhone 16 Pro Max will also not have noticeable design changes, but they will have slightly larger screens. The Pro is going from a 6.1-inch screen to 6.3 inches, while the larger Max version will go from 6.7 to 6.9 inches. The phones will be slightly larger, but much of the screen-size gain will come from a Border Reduction Structure (BRS) implementation that will further shrink the already barely-there bezels.

Speaking of the screens, the Pro models will feature new panels that will provide just a bit more maximum brightness, following a trend of improvements in that area that has spanned the last few iPhones.

  • The general look of the new iPhones isn’t expected to change compared to these designs from last year, except for the camera arrangement on the base iPhone 16.

    Samuel Axon

  • The Action Button, seen here on the iPhone 15 Pro Max, will reach the non-Pro iPhones this year.

    Samuel Axon

That’s it for changes visible on the outside. Inside, the phones are expected to get an improved thermal design—which hopefully addresses our biggest complaint when we reviewed the iPhone 15 Pro—as well as faster 5G modems in the Pros and a new A-series chip that will probably offer modest gains in performance and efficiency over the top-tier chip from last year.

All the remaining changes that are rumored from leaks, supply-chain insights, or news reports are tweaks to the camera systems. All models will get better ultra-wide cameras that handle low light better, and the iPhone 16 Pro will go to a 48-megapixel ultra-wide camera to better match the wide-angle lens’ overall performance. Additionally, the 5x zoom telephoto lens that was reserved only for the Pro Max last year will make its way to the smaller Pro this time.

That’s all we’ve heard so far. Looking back on paragraphs of text here, it sounds like a lot, but most of these things are pretty modest improvements. Those coming from an iPhone 13 Pro or earlier may be tempted by all this, but it’ll be pretty silly to upgrade from an iPhone 15 to an iPhone 16 unless Apple has managed to keep some earth-shattering new feature a secret.

What to expect from Apple’s “It’s Glowtime” event Read More »

after-seeing-wi-fi-network-named-“stinky,”-navy-found-hidden-starlink-dish-on-us-warship

After seeing Wi-Fi network named “STINKY,” Navy found hidden Starlink dish on US warship

I need that sweet, sweet wi-fi —

To be fair, it’s hard to live without Wi-Fi.


Enlarge / The USS Manchester. Just the spot for a Starlink dish.

Department of Defense

It’s no secret that government IT can be a huge bummer. The records retention! The security! So government workers occasionally take IT into their own hands with creative but, err, unauthorized solutions.

For instance, a former US Ambassador to Kenya in 2015 got in trouble after working out of an embassy compound bathroom—the only place where he could use his personal computer (!) to access an unsecured network (!!) that let him log in to Gmail (!!!), where he did much of his official business—rules and security policies be damned.

Still, the ambassador had nothing on senior enlisted crew members of the littoral combat ship USS Manchester, who didn’t like the Navy’s restriction of onboard Internet access. In 2023, they decided that the best way to deal with the problem was to secretly bolt a Starlink terminal to the “O-5 level weatherdeck” of a US warship.

They called the resulting Wi-Fi network “STINKY”—and when officers on the ship heard rumors and began asking questions, the leader of the scheme brazenly lied about it. Then, when exposed, she went so far as to make up fake Starlink usage reports suggesting that the system had only been accessed while in port, where cybersecurity and espionage concerns were lower.

Rather unsurprisingly, the story ends badly, with a full-on Navy investigation and court-martial. Still, for half a year, life aboard the Manchester must have been one hell of a ride.

Enlarge / A photo included in the official Navy investigation report, showing the location of the hidden Starlink terminal on the USS Manchester.

DOD (through Navy Times)

One stinky solution

The Navy Times has all the new and gory details, and you should read their account, because they went to the trouble of using the Freedom of Information Act (FOIA) to uncover the background of this strange story. But the basics are simple enough: People are used to Internet access. They want it, even (perhaps especially!) when at sea on sensitive naval missions to Asia, where concern over Chinese surveillance and hacking runs hot.

So, in early 2023, while in the US preparing for a deployment, Command Senior Chief Grisel Marrero—the enlisted shipboard leader—led a scheme to buy a Starlink for $2,800 and to install it inconspicuously on the ship’s deck. The system was only for use by chiefs—not by officers or by most enlisted personnel—and a Navy investigation later revealed that at least 15 chiefs were in on the plan.

The Navy Times describes how Starlink was installed:

The Starlink dish was installed on the Manchester’s O-5 level weatherdeck during a “blanket” aloft period, which requires a sailor to hang high above or over the side of the ship.

During a “blanket” aloft, duties are not documented in the deck logs or the officer of the deck logs, according to the investigation.

It’s unclear who harnessed up and actually installed the system for Marrero due to redactions in the publicly released copy of the probe, but records show Marrero powered up the system the night before the ship got underway to the West Pacific waters of U.S. 7th Fleet.

This was all extremely risky, and the chiefs don’t appear to have taken amazing security precautions once everything was installed. For one thing, they called the network “STINKY.” For another, they were soon adding more gear around the ship, which was bound to raise further questions. The chiefs found that the Wi-Fi signal coming off the Starlink satellite transceiver couldn’t cover the entire ship, so during a stop in Pearl Harbor, they bought “signal repeaters and cable” to extend coverage.

Sailors on the ship then began finding the STINKY network and asking questions about it. Some of these questions came to Marrero directly, but she denied knowing anything about the network… and then privately changed its Wi-Fi name to “another moniker that looked like a wireless printer—even though no such general-use wireless printers were present on the ship, the investigation found.”

Marrero even went so far as to remove questions about the network from the commanding officer’s “suggestion box” aboard ship to avoid detection.

Finding the stench

Ship officers heard the scuttlebutt about STINKY, of course, and they began asking questions and doing inspections, but they never found the concealed device. On August 18, though, a civilian worker from the Naval Information Warfare Center was installing an authorized SpaceX “Starshield” device and came across the unauthorized SpaceX device hidden on the weatherdeck.

Marrero’s attempt to create fake data showing that the system had only been used in port then failed spectacularly due to the “poorly doctored” statements she submitted. At that point, the game was up, and Navy investigators looked into the whole situation.

All of the chiefs who used, paid for, or even knew about the system without disclosing it were given “administrative nonjudicial punishment at commodore’s mast,” said Navy Times.

Marrero herself was relieved of her post last year, and she pled guilty during a court-martial this spring.

So there you go, kids: two object lessons in poor decision-making. Whether working from an embassy bathroom or the deck of a littoral combat ship, if you’re a government employee, think twice before giving in to the sweet temptation of unsecured, unauthorized wireless Internet access.

Update, Sept. 5, 3:30 pm: A reader has claimed that the default Starlink SSID is actually… “STINKY.” This seemed almost impossible to believe, but Elon Musk in fact tweeted about it in 2022, Redditors have reported it in the wild, and back in 2022 (thanks, Wayback Machine), the official Starlink FAQ said that the device’s “network name will appear as ‘STARLINK’ or ‘STINKY’ in device WiFi settings.” (A check of the current Starlink FAQ, however, shows that the default network name now is merely “STARLINK.”)

In other words, not only was this asinine conspiracy a terrible OPSEC idea, but the ringleaders didn’t even change the default Wi-Fi name until they started getting questions about it. Yikes.


2022 Twitter thread announcing that “STINKY” would be the default SSID for Starlink.

After seeing Wi-Fi network named “STINKY,” Navy found hidden Starlink dish on US warship Read More »

new-ai-model-“learns”-how-to-simulate-super-mario-bros.-from-video-footage

New AI model “learns” how to simulate Super Mario Bros. from video footage

At first glance, these AI-generated Super Mario Bros. videos are pretty impressive. The more you watch, though, the more glitches you’ll see.

Last month, Google’s GameNGen AI model showed that generalized image diffusion techniques can be used to generate a passable, playable version of Doom. Now, researchers are using some similar techniques with a model called MarioVGG to see if an AI model can generate plausible video of Super Mario Bros. in response to user inputs.

The results of the MarioVGG model—available as a pre-print paper published by the crypto-adjacent AI company Virtuals Protocol—still display a lot of apparent glitches, and it’s too slow for anything approaching real-time gameplay at the moment. But the results show how even a limited model can infer some impressive physics and gameplay dynamics just from studying a bit of video and input data.

The researchers hope this represents a first step toward “producing and demonstrating a reliable and controllable video game generator,” or possibly even “replacing game development and game engines completely using video generation models” in the future.

Watching 737,000 frames of Mario

To train their model, the MarioVGG researchers (GitHub users erniechew and Brian Lim are listed as contributors) started with a public data set of Super Mario Bros. gameplay containing 280 levels’ worth of input and image data arranged for machine-learning purposes (level 1-1 was removed from the training data so images from it could be used in the evaluation). The more than 737,000 individual frames in that data set were “preprocessed” into 35-frame chunks so the model could start to learn what the immediate results of various inputs generally looked like.

To “simplify the gameplay situation,” the researchers decided to focus only on two potential inputs in the data set: “run right” and “run right and jump.” Even this limited movement set presented some difficulties for the machine-learning system, though, since the preprocessor had to look backward for a few frames before a jump to figure out if and when the “run” started. Any jumps that included mid-air adjustments (i.e., the “left” button) also had to be thrown out because “this would introduce noise to the training dataset,” the researchers write.
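The paper's actual pipeline isn't reproduced here, but the chunking-and-filtering step it describes is simple to picture. Below is a minimal Python sketch of that kind of preprocessing; the function names, the action encoding, and the exact filtering rule are our assumptions, not the authors' implementation.

```python
# Hypothetical sketch of MarioVGG-style preprocessing (names/encoding assumed).
from typing import Iterator

WINDOW = 35  # the paper chunks gameplay into 35-frame clips

# Restricted movement set: "run right" / "run right and jump" only.
ALLOWED = {"right", "run", "jump"}

def is_clean(clip_actions: list[set[str]]) -> bool:
    """Reject clips containing 'left' (e.g., mid-air corrections) or any
    other button outside the restricted set, which the researchers say
    would add noise to the training data."""
    return all(pressed <= ALLOWED for pressed in clip_actions)

def chunk_episode(frames: list, actions: list[set[str]]) -> Iterator[tuple[list, list]]:
    """Slide a fixed 35-frame window over one level's recording and keep
    only the clips whose inputs match the restricted movement set."""
    for start in range(0, len(frames) - WINDOW + 1, WINDOW):
        clip_frames = frames[start:start + WINDOW]
        clip_actions = actions[start:start + WINDOW]
        if is_clean(clip_actions):
            yield clip_frames, clip_actions
```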

  • MarioVGG takes a single gameplay frame and a text input action to generate multiple video frames.

  • The last frame of a generated video sequence can be used as the baseline for the next set of frames in the video.

  • The AI-generated arc of Mario’s jump is pretty accurate (even as the algorithm creates random obstacles as the screen “scrolls”).

  • MarioVGG was able to infer the physics of behaviors like running off a ledge or running into obstacles.

  • A particularly bad example of a glitch that causes Mario to simply disappear from the scene at points.

After preprocessing (and about 48 hours of training on a single RTX 4090 graphics card), the researchers used a standard convolution and denoising process to generate new frames of video from a static starting game image and a text input (either “run” or “jump” in this limited case). While these generated sequences only last for a few frames, the last frame of one sequence can be used as the first of a new sequence, feasibly creating gameplay videos of any length that still show “coherent and consistent gameplay,” according to the researchers.
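That chaining step is easy to express in code. Here's a minimal sketch of the autoregressive rollout, assuming a hypothetical model.generate(frame, action) call that returns a short clip of frames; MarioVGG's real interface may differ.

```python
# Hypothetical autoregressive rollout (model.generate is an assumed API).

def rollout(model, start_frame, actions):
    """Chain short generated clips into an arbitrarily long gameplay video
    by feeding the last frame of each clip back in as the start of the next."""
    video = [start_frame]
    frame = start_frame
    for action in actions:  # e.g., ["run", "jump", "run", ...]
        clip = model.generate(frame, action)  # returns a few video frames
        video.extend(clip)
        frame = clip[-1]  # last generated frame seeds the next segment
    return video
```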

New AI model “learns” how to simulate Super Mario Bros. from video footage Read More »

after-starliner,-nasa-has-another-big-human-spaceflight-decision-to-make

After Starliner, NASA has another big human spaceflight decision to make

Heat shield a hot decision —

“We still have a lot of work to do to close out the heat shield investigation.”


Enlarge / The Artemis II Orion spacecraft being prepared for tests at NASA’s Kennedy Space Center in Florida in June 2024.

NASA / Rad Sinyak

Now that NASA has resolved the question of the Starliner spacecraft and its two crew members on the International Space Station, the agency faces another high-stakes human spaceflight decision.

The choice concerns the Orion spacecraft’s heat shield and whether NASA will make any changes before the Artemis II mission, which will make a lunar flyby. Although Starliner has garnered a lot of media attention, this will be an even higher-profile decision for NASA, with higher consequences—four astronauts will be on board, and hundreds of millions, if not billions, of people will be watching humanity’s first deep space mission in more than five decades.

The issue is the safety of the heat shield, located at the base of the capsule, which protects Orion’s crew during its return to Earth. During the Artemis I mission that sent Orion beyond the Moon in late 2022, without astronauts on board, chunks of charred material cracked and chipped away from Orion’s heat shield during reentry into Earth’s atmosphere. Once the spacecraft landed, engineers found more than 100 locations where the stresses of reentry damaged the heat shield.

After assessing the issue for more than a year, NASA convened an “independent review team” to conduct its own analysis of NASA’s work. Initially, this review team’s work was due to be completed in June, but its deliberations continued throughout much of the summer and only recently concluded.

The team’s findings are not public yet, but NASA essentially faces two choices with the heat shield: It can fly Artemis II with a similar heat shield that Orion used on Artemis I, or the agency can revamp the design and construct a new heat shield, likely delaying Artemis II from its September 2025 launch date for multiple years.

What they’re saying

In recent comments, NASA officials have been relatively tight-lipped when asked how the heat shield issue will be resolved:

  • NASA Administrator Bill Nelson, in an interview with Ars, in early August: “They are still deciding. I’m very confident [in a launch date of September 2025] unless there is the problem with the heat shield. Obviously, that would be a big hit. But I have no indication at this point that the final recommendation is going to be to go with another heat shield.”
  • NASA Associate Administrator Jim Free, in conversation with Ars, in late August: “That’s on a good path right now.”
  • NASA Associate Administrator for Exploration Systems Development Catherine Koerner, in an interview with Ars in mid-August: “The entire trade space is open. But as far as the actual Artemis II mission, right now, we’re still holding to the September ’25 launch date, knowing that we still have a lot of work to do to close out the heat shield investigation.”
  • NASA Deputy Associate Administrator for Moon to Mars Program Amit Kshatriya to the NASA Advisory Committee in late August: “The independent review team has just wrapped up their analysis, so I expect that to close out. We should have a disposition there in terms of how they incorporate those findings.”

In summary, the independent review team’s work is done, and it has begun to brief NASA officials. A final decision will then be made by NASA’s senior leadership.

What happens now

In preparation for Artemis II, the Orion spacecraft underwent thermal and vacuum testing this year before it will be stacked onto the Space Launch System rocket. Initially, NASA planned to begin the stacking process this month but ultimately delayed this until there was clarity on the heat shield question. The shield is already attached to the spacecraft.

Most people Ars spoke to believe NASA will likely fly with the heat shield as is. Sources have indicated that NASA engineers believe the best way to preserve the heat shield during Artemis II is by changing its trajectory through Earth’s atmosphere.


The inspector general’s report released May 1 included new images of Orion’s heat shield.

NASA Inspector General

During Artemis I, the spacecraft followed a “skip” reentry profile, in which Orion dipped into the atmosphere, skipped back into space, and then made a final descent into the atmosphere. This allowed for precise control over Orion’s splashdown location and reduced g-forces on the vehicle. There are other options, including a ballistic reentry, with a steeper trajectory that is harder on the crew in terms of gravitational forces, and a direct reentry, which involves a miniature skip.

A steeper trajectory would expose Orion’s heat shield to atmospheric heating and air resistance for a shorter period of time. NASA engineers believe the cracking observed during Artemis I was due to the duration of exposure to atmospheric heating. Less time, theoretically, means less damage during Orion’s reentry on Artemis II.
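The underlying tradeoff is the standard one for atmospheric entry (a general textbook relationship, not a figure from NASA's analysis): the total heat the shield absorbs is the heating rate integrated over the duration of entry,

$$Q = \int_0^{t_{\mathrm{entry}}} \dot{q}(t)\,dt,$$

so a steeper, shorter profile can shrink the integrated load Q even as the peak heating rate and the g-forces on the crew go up.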

After Starliner, NASA has another big human spaceflight decision to make Read More »