Author name: Paul Patrick


Notes on Dwarkesh Patel’s Podcast with Sholto Douglas and Trenton Bricken

Dwarkesh Patel continues to be on fire, and the podcast notes format seems like a success, so we are back once again.

This time the topic is how LLMs are trained, how they work, and how they will work in the future. Timestamps are for YouTube. Where I inject my own opinions or takes, I do my best to make that explicit and clear.

This was highly technical compared to the average podcast I listen to, or that Dwarkesh does. This podcast definitely threatened to go over my head technically at times, and some details definitely did go over my head outright. I still learned a ton, and expect you will too if you pay attention.

This is an attempt to distill what I found valuable, and what questions I found most interesting. I did my best to make it intuitive to follow even if you are not technical, but in this case one can only go so far. Enjoy.

  • (1: 30) Capabilities only podcast, Trenton has ‘solved alignment.’ April fools!

  • (2: 15) Huge context tokens is underhyped, a huge deal. It occurs to me that the issue is about the trivial inconvenience of providing the context. Right now I mostly do not bother providing context on my queries. If that happened automatically, it would be a whole different ballgame.

  • (2: 50) Could the models be sample efficient if you can fit it all in the context window? Speculation is it might work out of the box.

  • (3: 45) Does this mean models are already in some sense superhuman, with this much context and memory? Well, yeah, of course. Computers have been superhuman at math and chess and so on for a while. Now LLMs have quickly gone from having worse short term working memory than humans to vastly superior short term working memory. Which will make a big difference. The pattern will continue.

  • (4: 30) In-context learning is similar to gradient descent. It gets problematic for adversarial attacks, but of course you can ignore that because as Trenton reiterates alignment is solved, and certainly it is solved for such mundane practical concerns. But it does seem like he’s saying if you do this then ‘you’re fine-tuning but in a way where you cannot control what is going on’?

  • (6: 00) Models need to learn how to learn from examples in order to take advantage of long context. So does that mean the task of intelligence requires long context? That this is what causes the intelligence, in some sense, they ask? I don’t think you can reverse it that way, but it is possible that this will orient work in directions that are more effective?

  • (7: 00) Dwarkesh asks about how long contexts link to agent reliability. Douglas says this is more about lack of nines of reliability, and GPT-4-level models won’t cut it there. And if you need to get multiple things right, the reliability numbers have to multiply together, which does not go well in bulk. If that is indeed the issue then it is not obvious to me the extent to which scaffolding and tricks (e.g. Devin, probably) render this fixable.

  • (8: 45) Performance on complex tasks follows log scores. It gets it right one time in a thousand, then one in a hundred, then one in ten. So there is a clear window where the thing is in practice useless, but you know it soon won’t be. And we are in that window on many tasks. This goes double if you have complex multi-step tasks. If you have a three-step task and are getting each step right one time in a thousand, the full task is one in a billion, but you are not so far from being able to do the task in practice.
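
To make the compounding arithmetic concrete, here is a minimal sketch in Python (illustrative numbers only, assuming independent step failures, which is the simplification being used above):

```python
# Minimal sketch: how per-step reliability compounds over a multi-step task.
# Step counts and probabilities are illustrative, not figures from the podcast.

def task_success_rate(per_step_rate: float, num_steps: int) -> float:
    """Probability of getting every step right, assuming independent steps."""
    return per_step_rate ** num_steps

for p in (0.001, 0.5, 0.9, 0.99):
    print(f"per-step {p}: three-step task succeeds {task_success_rate(p, 3):.2e} of the time")

# per-step 0.001 gives ~1e-9 (one in a billion); per-step 0.99 gives ~0.97.
# Small per-step gains translate into enormous end-to-end gains, which is why
# the "useless in practice" window can close quickly.
```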

  • (9: 15) The model being presented here is predicting scary capabilities jumps in the future. LLMs can actually (unreliably) do all the subtasks, including identifying what the subtasks are, for a wide variety of complex tasks, but they fall over on subtasks too often and we do not know how to get the models to correct for that. But that is not so far from the whole thing coming together, and that would include finding scaffolding that lets the model identify failed steps and redo them until they work, provided which subtasks fail is sufficiently non-deterministic rather than tied to the core difficulties.

  • (11: 30) Attention costs for context window size are quadratic, so how is Google getting the window so big? Suggestion is the cost is still actually dwarfed by the MLP block, and while generating tokens the cost is no longer n-squared, your marginal cost becomes linear.
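
A rough way to see the linear marginal cost claim: with a key/value cache, each newly generated token attends once over the cached keys and values, so per-token work grows linearly with context length even though processing the full prompt is quadratic. A toy single-head sketch in numpy (my illustration of the general mechanism, not how Gemini specifically implements it):

```python
import numpy as np

# Toy single-head attention with a KV cache. Processing an n-token prompt is
# O(n^2), but each newly generated token only attends over the cache: O(n).
# Purely illustrative; real models batch heads, layers, and positions.

d = 8
rng = np.random.default_rng(0)

def attend(q, K, V):
    """One query against the cache. q: (d,), K and V: (n, d). Cost ~ O(n)."""
    scores = K @ q / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

K_cache = np.empty((0, d))
V_cache = np.empty((0, d))
for step in range(5):                          # generate 5 tokens
    q = rng.normal(size=d)                     # query for the new token
    k, v = rng.normal(size=d), rng.normal(size=d)
    K_cache = np.vstack([K_cache, k])          # grow the cache by one row
    V_cache = np.vstack([V_cache, v])
    out = attend(q, K_cache, V_cache)          # marginal cost scales with cache size
```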

  • (13: 30) Are we shifting where the models learn, with more and more in the forward pass? Douglas says essentially no, the context length allows useful working memory, but is not ‘the key thing towards actual reasoning.’

  • (15: 10) Which scaling up counts? Tokens, compute, model size? Can you loop through the model or brain or language? Yes, but notice humans in practice only do 5-7 steps in complex sentences because of working memory limits.

  • (17: 15) Where is the model reasoning? No crisp answer. The residual stream that the model carries forward packs in a lot of different vectors that encode all the info. Attention is about what to pick up and put into what is effectively RAM.
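
The ‘residual stream’ picture they describe can be sketched as a cartoon transformer block (not any particular model’s code): every sublayer reads the stream, computes an update, and adds it back, so one vector per position accumulates whatever attention chose to copy in.

```python
import numpy as np

# Cartoon of the residual-stream view. Attention decides what to pull in from
# other positions (the "what to put into RAM" part); the MLP transforms what is
# already there. The zero-output sublayers below are placeholders.

def transformer_block(x, attention, mlp):
    """x: (seq_len, d_model), the residual stream entering this block."""
    x = x + attention(x)   # attention writes its output into the stream
    x = x + mlp(x)         # the MLP writes its output into the stream
    return x               # carried forward, layer after layer

seq_len, d_model = 4, 16
stream = np.zeros((seq_len, d_model))
stream = transformer_block(stream, attention=lambda x: 0 * x, mlp=lambda x: 0 * x)
```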

  • (20: 40) Does the brain work via this residual stream? Yes. Humans implement a bunch of efficient algorithms and really scale up our cerebral cortex investment. A key thing we do is very similar to the attention algorithm.

  • (24: 00) How does the brain reason? Trenton thinks mostly intelligence is pattern matching. ‘Association is all you need.’

  • (25: 45) Paper from Demis in 2008 noted that memory is reconstructive, so it is linked to creativity and also is horribly unreliable.

  • (26: 45) What makes Sherlock Holmes so good? Under this theory: A really long context length and working memory, and better high-level association. Also a good algorithm for his queries and how to build representations. Also proposed: A Sherlock Holmes evaluation. Give a mystery novel or story, ask for probability distribution over ‘The suspect is X.’

  • (28: 30) A vector in the residual stream is the composite of all the tokens to which I have previously paid attention, even by layer two.

  • (30: 30) Could we do an unsupervised benchmark? It has been explored, such as with constitutional AI. Again, alignment-free podcast here.

  • (31: 45) If intelligence is all associations, should we be less worried about superintelligence, because there’s not this sense in which it is Sherlock++ and it can’t solve physics from a world frame? The response is, they would need to learn the associations, but also the tech makes that quick to do, and silicon can be about as generally intelligent as humans and can recursively improve anyway.

  • My response here would strongly be that if this is true, we should be more worried rather than less worried, because it means there is no secret or trick, and scale really would be all you would need, if you scale enough distinct aspects, and we should expect that we would do that.

  • (32: 45) Dwarkesh asks if this means disagreeing with the premise of them not being that much more powerful. To which I would strongly say yes. If it turns out that the power comes from associations, then that still leads to unbounded power, so what if it does not sound impressive? What matters is if it works.

  • (33: 30) If we got thousands of you, do we get an intelligence explosion? We do dramatically speed up research but compute is a binding constraint. Trenton thinks we would need longer contexts, more reliability and lower cost to get an intelligence explosion, but getting there within a few years seems plausible.

  • (37: 30) Trenton expects this to speed up a lot of the engineering soon, accelerating research and compounding, but not (yet) a true intelligence explosion.

  • (39: 00) What about the costs of training orders-of-magnitude bigger models? Does this break recursive intelligence explosion? It’s a braking mechanism. We should be trying hard to estimate how much of this is automatable. I agree that the retraining costs and required time are a braking mechanism, but also efficiency gains could quickly reduce those costs, and one could choose to work around the need to do that via other methods. One should not be confident here.

  • (41: 00) Understanding what goes wrong is key to making AI progress. There are lots of ideas but figuring out which ideas are worth exploring is vital. This includes anticipating which trend lines will hold when scaled up and which won’t. There’s an invisible graveyard of trend lines that looked promising and then failed to hold.

  • (44: 20) A lot of good research works backwards from solving actual problems. Trying to understand what is going on, figuring out how to run experiments. Performance is lots of low-level hard engineering work. Ruthless prioritization is key to doing high quality research, the most effective people attack the problem, do really fast experiments and do not get attached to solutions. Everything is empirical.

  • (48: 00) “Even though we wouldn’t want to admit it, the whole community is kind of doing greedy evolutionary optimization over the landscape of possible AI architectures and everything else. It’s no better than evolution. And that’s not even a slight against evolution.” Does not fill one with confidence on safety.

  • (49: 30) Compute and taste on what to do are the current limiting factors for capabilities. Scaling to properly use more humans is hard. For interpretability they need more good engineers.

  • (51: 00) “I think the Gemini program would probably be maybe five times faster with 10 times more compute or something like that. I think more compute would just directly convert into progress.”

  • (51: 30) If compute is such a bottleneck is it being insufficiently allocated to such research and smaller training tasks? You also need the big training runs to avoid getting off track.

  • (53: 00) What does it look like for AI to speed up AI research? Could be algorithmic progress from AI. That takes more compute, but seems quite reasonable this could act as a force multiplier for humans. Also could be synthetic data.

  • (55: 30) Reasoning traces are missing from data sets, and seem important.

  • (56: 15) Is progress going to be about making really amazing AI maps of the training data? Douglas says clearly a very important part. Doing next token on a sufficiently good data set requires so many other things.

  • (58: 30) Language as synthetic data by humans for humans? With verifier via real world.

  • (59: 30) Yeah, whole development process is largely evolutionary, more people means more recombination, more shots on target. That does to me seem in conflict with the best people being the ones who can discriminate over potential tasks and ideas. But also they point out serendipity is a big deal and it scales. They expect AGI to be the sum of a bunch of marginal things.

  • (1: 01: 30) If we don’t get AGI by GPT-7-levels-of-OOMs are we stuck? Sholto basically buys this: orders of magnitude have diminishing returns at their core, and although they unlock reliability, reasoning progress is sublinear in OOMs. Dwarkesh notes this is highly bearish, which seems right.

  • (1: 03: 15) Sholto points out that even with smaller progress, another 3.5→4 jump in GPT-levels is still pretty huge. We should expect smart plus a lot of reliability. This is not to undersell what is coming, rather the jumps so far are huge, and even smaller jumps from here unlock lots of value. I agree.

  • (1: 07: 30) Bigger models allow you to minimize superposition (overloading more features onto fewer parameters), making results less noisy, whereas smaller ones are under-parameterized given their goal of representing the entire internet. Speculation that superposition is why interpretability is so hard. I wonder if that means it could get easier with more parameters? Could we use ‘too many’ parameters on purpose in order to help with this?
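
One intuition for why superposition works at all (my illustration, not from the podcast): a d-dimensional space can hold far more than d nearly-orthogonal directions, so a model can overload many sparse features onto fewer dimensions at the price of some interference, and that interference shrinks as the model gets bigger.

```python
import numpy as np

# Superposition toy: pack 8x more "feature" directions than dimensions.
# Random unit vectors in moderately high dimensions are nearly orthogonal,
# so sparse features can share the space with limited interference.

rng = np.random.default_rng(0)
d, n_features = 64, 512

features = rng.normal(size=(n_features, d))
features /= np.linalg.norm(features, axis=1, keepdims=True)

overlaps = features @ features.T
np.fill_diagonal(overlaps, 0.0)
print("worst-case interference between distinct features:", np.abs(overlaps).max())
# Well below 1.0 (identical directions) but not zero: noisy, which is one
# intuition for why smaller, more superposed models are harder to interpret.
```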

  • (1: 11: 00) What’s happening with distilled models? Dwarkesh suggests GPT-4-Turbo is distilled, Sholto suggests it could instead be new architecture.

  • (1: 12: 30) Distillation is powerful because the full probability distribution gives you much richer data to work with.
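
A minimal sketch of why the full distribution is richer than a sampled token (my illustration, with made-up numbers): the student can match the teacher’s whole probability vector, for example via a KL term against soft targets, instead of a single one-hot label.

```python
import numpy as np

# Distillation toy: the teacher's full next-token distribution carries far more
# signal per example than one sampled token. Vocabulary of 5 for clarity.

teacher_probs = np.array([0.55, 0.25, 0.10, 0.07, 0.03])   # soft targets
student_logits = np.array([1.2, 0.4, -0.3, -0.8, -1.0])
student_probs = np.exp(student_logits) / np.exp(student_logits).sum()

hard_loss = -np.log(student_probs[0])   # cross-entropy against the single sampled token
soft_loss = np.sum(teacher_probs * (np.log(teacher_probs) - np.log(student_probs)))  # KL(teacher || student)

print(f"hard-label loss: {hard_loss:.3f}, distillation (KL) loss: {soft_loss:.3f}")
# The KL term grades the student on the teacher's entire ranking of tokens,
# which is the "much richer data" being referred to.
```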

  • (1: 13: 30) Adaptive compute means spend more cycles on harder questions. How do you do that via chain of thought? You get to pass a KV-value during forward passes, rather than passing only the token, which helps, so the KV-cache is (headcanon-level, not definitively) pushing forward the CoT without having to link to the output tokens. This is ‘secret communication’ (from the user’s perspective) of the model to its forward inferences, and we don’t know how much of that is happening. Not always the thing going on, but there is high weirdness.

  • (1: 19: 15) Anthropic sleeper agents paper, notice the CoT reasoning does seem to impact results and the reasoning it does is pretty creepy. But in another paper, the model will figure out the multiple choice answer is always ‘A’ but the reasoning in its CoT will be something else that sounds plausible. Dwarkesh notes humans also come up with crazy explanations for what they are doing, such as when they have split brains. “It’s just that some people will hail chain-of-thought reasoning as a great way to solve AI safety, but actually we don’t know whether we can trust it.”

  • (1: 23: 30) Agents, how will they work once they work well enough? Short term expectation from Sholto is agent talking together. Sufficiently long context windows could make fine-tuning unnecessary or irrelevant.

  • (1: 26: 00) With sufficient context could you train everything on a global goal like ‘did the firm make money?’ In the limit, yes, that is ‘the dream of reinforcement learning.’ Can you feel the instrumental convergence? At first, though, they say, in practice, no, it won’t work.

  • (1: 27: 45) Suggestion that languages evolve to be good at encoding things to teach children important things, such as ‘don’t die.’

  • (1: 29: 30) In other modalities figuring out exactly what you are predicting is key to success. For language you predict the next token; it is easy mode in that sense.

  • (1: 31: 30) “there are interesting interpretability pieces where if we fine-tune on math problems, the model just gets better at entity recognition.” It makes the model better at attending to positions of things and such.

  • (1: 32: 30) Getting better at code makes the model a better thinker. Code is reasoning, you can see how it would transfer. I certainly see this happening in humans.

  • (1: 35: 00) Section on their careers. Sholto’s story is a lot of standard things you hear from high-agency, high-energy, high-achieving people. They went ahead and did things, and also pivoted, went in different directions, followed curiosity, and read all the papers. Strong ideas, loosely held, carefully selected, vigorously pursued. Dwarkesh notes one of the most important things is to go do the things, and managers are desperate for people who will make sure things get done. If you get bottlenecked because you need lawyers, well, why didn’t you go get the lawyers? Lots of impact is convincing people to work with you to do a thing.

  • (1: 43: 30) Sholto is working on AI largely because he thinks it can lead to a wonderful future, and was sucked into scaling by Gwern’s scaling hypothesis post. That is indeed the right reason, if you are also taking into account the downside risks including existential risks, and still think this is a good idea. It almost certainly is not a neutral idea, it is either a very good idea or extremely ill-advised.

  • (1: 43: 35) Sholto says McKinsey taught him how to actually do work, and the value of not taking no for an answer, whereas often things don’t happen because no individual cares enough to make it happen. The consultant can be that person, and you can be that person otherwise without being a consultant. He got hired largely by being seen on the internet asking questions about how things work, causing Google to reach out. It turns out at Google you can ask the algorithm and systems experts and they will gladly teach you everything they know.

  • (1: 51: 30) Being in the office all the time, collaborating with others including pair programming with Sergey Brin sometimes, knowing the people who make decisions, matters a lot.

  • (1: 54: 00) Trenton’s story begins, his was more standard and direct.

  • (1: 55: 30) Dwarkesh notes that these stories are framed as highly contingent, that people tend to think their own stories are contingent and those of others are not. Sholto mentions the idea of shots on goal, putting yourself in position to get lucky. I buy this. There are a bunch of times I got lucky and something important happened. If you take those times away, or add different ones, my life could look very different. Also a lot of what was happening was, effectively, engineering the situation to allow those events to happen, without having a particular detailed event in mind. Same with these two.

  • (1: 57: 00) Google is continuing the experiment to find high-agency people and bootstrap them. Seems highly promising. Also Chris Olah was hired off a cold email. You need to send and look out for unusual signals. I agree with Dwarkesh that is very good for the world that a lot of this hiring is not done legibly, and instead is people looking out for agency and contributions generally. If you write a great paper or otherwise show you have the goods, the AI labs will find you.

  • (2: 01: 45) You still need to do the interview process, make sure people can code or what not and you are properly debiased, but that process should be designed not to get in the way otherwise.

  • (2: 03: 00) Emphasis on need to care a ton, and go full blast towards what you want, doing everything that would help.

  • (2: 04: 30) When you get your job then is that the time to relax or to put the pedal to the metal? There’s pros and cons. Not everyone can go all out, many people want to focus on their families or otherwise relax. Others need to be out there working every hour of the week, and the returns are highly superlinear. And yes, this seems very right to me, returns to going fully in on something have been much higher than returns to ordinary efforts. Jane Street would have been great for me if I could have gone fully in, but I was not in a position to do that.

  • (2: 06: 00) Dwarkesh: “I just try to come up with really smart questions to send to them. In that entire process I’ve always thought, if I just cold email them, it’s like a 2% chance they say yes. If I include this list, there’s a 10% chance. Because otherwise, you go through their inbox and every 34 seconds, there’s an interview for some podcast or interview. Every single time I’ve done this they’ve said yes.” And yep, story checks out.

  • (2: 09: 30) A discussion of what is a feature. It is whatever you call a feature, or it is anything you can turn on and off; it is any of these things. Is that a useful definition? Not if the features were not predictive, or if the features did not do anything. The point is to compose the features into something higher level.

  • (2: 17: 00) Trenton thinks you can detect features that correspond to deceptive behavior, or malicious behavior, when evaluating a request. I’ve discussed my concerns on this before. It is only a feature if you can turn it on and off, perhaps?

  • (2: 20: 00) There are a bunch of circuits that have various jobs they try to do, sometimes as simple as ‘copy the last token,’ and then there are other heads that suppress that behavior. Reasons to do X, versus reasons not to do X.

  • (2: 20: 45) Deception circuit gets labeled as whatever fires in examples where you find deception, or similar? Well, sure, basically.

  • (2: 22: 00) RLHF induces theory of mind.

  • (2: 22: 05) What do we do if the model is superhuman, will our interpretability strategies still work, would we understand what was going on? Trenton says that the models are deterministic (except when finally sampling) so we have a lot to work with, and we can do automated interpretability. And if it is all associations, then in theory that means what in my words would be ‘no secret’ so you can break down whatever it is doing into parts that we can understand and thus evaluate. A claim that evaluation in this sense is easier than generation, basically.

  • (2: 24: 00) Can we find things without knowing in advance what they are? It should be possible to identify a feature and how it relates to other features even if you do not know what the feature is in some sense. Or you can train in the new thing and see what activates, or use other strategies.

  • (2: 26: 00) Is red teaming Gemma helping jailbreak Gemini? How universal are features across models? To some extent.

  • (2: 27: 00) Curriculum learning, which is trying to teach the model things in an intentional order to facilitate learning, is interesting and mentioned in the Gemini paper.

  • (2: 29: 45) Very high confidence that this general model of what is going on with superposition is right, based on success of recent work.

  • (2: 31: 00) A fascinating question: Should humans learn a real representation of the world, or would a distorted one be more useful in some cases? Should venomous animals flash neon pink, a kind of heads-up display baked into your eyes? The answer is that you have too many different use cases, distortions do more harm than good, you want to use other ways to notice key things, and so that is what we do. So Trenton is optimistic the LLMs are doing this too.

  • (2: 32: 00) “Another dinner party question. Should we be less worried about misalignment? Maybe that’s not even the right term for what I’m referring to, but alienness and Shoggoth-ness? Given feature universality there are certain ways of thinking and ways of understanding the world that are instrumentally useful to different kinds of intelligences. So should we just be less worried about bizarro paperclip maximizers as a result?” I quote this question because I do not understand it. If we have feature universality, how is that not saying that the features are compatible with any set of preferences, over next tokens or otherwise? So why is this optimistic? The response is that components of LLMs are often very Shoggoth-like.

  • (2: 34: 00) You can talk to any of the current models in Base64 and it works great.
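
For the curious, ‘talking in Base64’ just means encoding the text bytes; a two-line illustration (the actual model call is omitted):

```python
import base64

prompt = "What is the capital of France?"
encoded = base64.b64encode(prompt.encode("utf-8")).decode("ascii")
print(encoded)  # V2hhdCBpcyB0aGUgY2FwaXRhbCBvZiBGcmFuY2U/
# The claim is that sending strings like this to current models yields coherent
# answers, and that a recognizable Base64 feature shows up in interpretability work.
```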

  • (2: 34: 10) Dwarkesh asks, doesn’t the fact that you needed a Base64 expert to happen to be there to recognize what the Base64 feature was mean that interpretability on smarter models is going to be really hard, if no human can grok it? Anomaly detection is suggested, you look for something different. Any new feature is a red flag. Also you can ask the model for help sometimes, or automate the process. All of this strikes me as exactly how you train a model not to be interpretable.

  • (2: 36: 45) Feature splitting is where if you only have so much space in the model for birds it will learn ‘birds’ and call it a day, whereas if it has more room it will learn features for different specific birds.

  • (2: 38: 30) We have this mess of neurons and connections. The dream is bootstrapping to making sense of all that. Not claiming we have made any progress here.

  • (2: 39: 45) What parts of the process for GPT-7 will be expensive? Training the sparse encoder and doing projection into a wider space of features, or labeling those features? Trenton says it depends on how much data goes in and how high-dimensional your space is, which I think means how overloaded and full of superpositions you are or are measuring.
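
For reference, the ‘projection into a wider space of features’ being discussed is, as I understand it, a sparse autoencoder over model activations. A minimal sketch of the shape of that computation (sizes and weights are placeholders; no claim this matches Anthropic’s exact setup):

```python
import numpy as np

# Minimal sparse-autoencoder shape: encode a residual-stream vector (d_model)
# into a much wider, overcomplete feature space (d_features) with a ReLU that
# encourages sparsity, then decode back. Training would minimize reconstruction
# error plus an L1 penalty on the feature activations.

rng = np.random.default_rng(0)
d_model, d_features = 512, 8192

W_enc = rng.normal(scale=0.02, size=(d_model, d_features))
W_dec = rng.normal(scale=0.02, size=(d_features, d_model))
b_enc = np.zeros(d_features)

def sae(activation):
    """activation: (d_model,) vector pulled from the model's residual stream."""
    features = np.maximum(activation @ W_enc + b_enc, 0.0)   # sparse feature activations
    reconstruction = features @ W_dec
    return features, reconstruction

x = rng.normal(size=d_model)
feats, recon = sae(x)
loss = np.mean((recon - x) ** 2) + 1e-3 * np.abs(feats).sum()  # MSE + L1 sparsity
```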

  • (2: 42: 00) Dwarkesh asks: Why should the features be things we can understand? In Mixtral of Experts they noticed their experts were not distinctive in ways they could understand. They are excited to study this question more but so far don’t know much. It is empirical, and they will know when they look and find out. They claim there is usually clear breakdown of expert types, but that you can also get distinctions that break up what you would naively expect.

  • (2: 45: 00) Try to disentangle all these neurons, audience. Sholto’s challenge to you.

  • (2: 48: 00) Bruno Olshausen theorizes that all the brain regions you do not hear about are doing a ton of computation in superposition. And sure, why not? The human brain sure seems under-parameterized.

  • (2: 49: 25) Superposition is a combinatorial code, not an artifact of one neuron.

  • (2: 51: 20) GPT-7 has been trained. Your interpretability research succeeded. What will you do next? Try to get it to do the work, of course. But no, before that, what do you need to do to be convinced it is safe to deploy? ‘I mean we have our RSP.’ I mean, no you don’t, not yet, not for GPT-7-level models, it says ‘fill this in later’ over there. So Trenton rightfully says we would need a lot more interpretability progress. Right now he would not give the green light, he’d be crying and hoping the tears interfered with GPUs.

  • (2: 53: 00) He says ‘Ideally we can find some compelling deception circuit which lights up when the model knows that it’s not telling the full truth to you.’ Dwarkesh asks about linear probes, Trenton says that does not look good.
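
For readers unfamiliar with the term: a linear probe is just a linear classifier trained on a model’s internal activations to predict some property, here ‘is the model being deceptive.’ A schematic sketch (activations and labels below are random placeholders, purely to show the shape of the method):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Schematic linear probe: fit a linear classifier on residual-stream activations
# against a hypothetical "deceptive" label. Everything here is placeholder data.

rng = np.random.default_rng(0)
d_model, n_examples = 512, 1000

activations = rng.normal(size=(n_examples, d_model))    # stand-in activation vectors
is_deceptive = rng.integers(0, 2, size=n_examples)      # stand-in labels

probe = LogisticRegression(max_iter=1000).fit(activations, is_deceptive)
predictions = probe.predict(activations)
# With real data you would score held-out examples; the open question is whether
# "the model is not telling the full truth" is linearly readable from activations
# at all, which is what Trenton is pessimistic about here.
```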

  • I would ask, what makes you think that you have found the only such circuit? If the model had indeed found a way around your interpretability research, would you not expect it to give you a deception circuit to find, in addition to the one you are not supposed to find, because you are optimizing for exactly that which will fool you? Wouldn’t you expect the unsupervised learning to give you what you want to find either way? Fundamentally, this seems like saying ‘oh sure he lies all the time, but when he lies he never looks the person in the eye, so there is nothing to worry about, there is no way he would ever lie while looking you in the eye.’ And you do this with a thing much smarter than you, that knows you will notice this, and expect it to go well. For you, that is.

  • Also I would reiterate all my ‘not everything you should be worried about requires the model to be deceptive in a way that is distinct from its normal behavior, even in the worlds where this distinction is maximally real,’ and also ‘deception is not a distinct thing from what is imbued into almost every communication.’ And that’s without things smarter than us. None of this seems to me to have any hope, on a very fundamental level.

  • (2: 56: 15) Yet Trenton continues to be optimistic such techniques will understand GPT-7. A third of the team is scaling up dictionary learning, a second group is identifying circuits, a third is working to identify attention heads.

  • (3: 01: 00) A good test would be, we found feature X, we ablated it, and now we can’t elicit X to happen. That does sound a little better?
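
In code terms that test amounts to removing a feature’s contribution and checking whether the behavior can still be elicited. A schematic sketch (the feature direction and the surrounding evaluation workflow are hypothetical):

```python
import numpy as np

# Schematic feature ablation: project out one learned feature direction from an
# activation vector, then (in a real setup) apply this as a hook while running
# the model and test whether behavior X can still be elicited.

def ablate_feature(activation, feature_direction):
    """Remove the component of `activation` along a unit-norm feature direction."""
    return activation - (activation @ feature_direction) * feature_direction

d = 16
rng = np.random.default_rng(0)
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)
x = rng.normal(size=d)

assert abs(ablate_feature(x, direction) @ direction) < 1e-9   # component is gone
# If behavior X can no longer be elicited after ablation, the feature label has
# causal support rather than being a mere correlate of the behavior.
```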

  • (3: 02: 00) What are the unknown unknowns for superhuman models? The answer is ‘we’ll see,’ our hope is automated interpretability. And I mean, yes, ‘we’ll see’ is in some sense the right way to discuss unknown unknowns, there are far worse answers, but my despair is palpable.

  • (3: 03: 00) Should we worry if alignment succeeds ‘too hard’ and people get fine-grained control over AIs? “That is the whole value lock-in argument in my mind. It’s definitely one of the strongest contributing factors for why I am working on capabilities at the moment. I think the current player set is actually extremely well-intentioned.”

  • (3: 07: 00) “If it works well, it’s probably not being published.” Finally.



    Cows in Texas and Kansas test positive for highly pathogenic bird flu

    viral spread —

    The risk to the public is low, and the milk supply is safe.


    Wild migratory birds likely spread a deadly strain of bird flu to dairy cows in Texas and Kansas, state and federal officials announced this week.

    It is believed to be the first time the virus, a highly pathogenic avian influenza (HPAI), has been found in cows in the US. Last week, officials in Minnesota confirmed finding an HPAI case in a young goat, marking the first time the virus has been found in a domestic ruminant in the US.

    According to the Associated Press, officials with the Texas Animal Health Commission confirmed the flu virus is the Type A H5N1 strain, which has been ravaging bird populations around the globe for several years. The explosive, ongoing spread of the virus has led to many spillover events into mammals, making epidemiologists anxious that the virus could adapt to spread widely in humans.

    For now, the risk to the public is low. According to a release from the US Department of Agriculture (USDA), genetic testing by the National Veterinary Services Laboratories indicated that the H5N1 strain that spread to the cows doesn’t appear to contain any mutations that would make it more transmissible to humans. Though the flu strain was found in some milk samples from the infected cows, the USDA emphasized that all the milk from affected animals is being diverted and destroyed. Dairy farms are required to send only milk from healthy animals to be processed for human consumption. Still, even if some flu-contaminated milk was processed for human consumption, the standard pasteurization process inactivates viruses, including influenza, as well as bacteria.

    So far, officials believe the virus is primarily affecting older cows. The virus was detected in milk from sick cows on two farms in Kansas and one in Texas, as well as in a throat swab from a cow on a second Texas farm. The USDA noted that farmers have found dead birds on their properties, indicating exposure to infected birds. Sick cows have also been reported in New Mexico. Symptoms of the bird flu in cows appear to include decreased milk production and low appetite.

    But so far, the USDA believes the spread of H5N1 will not significantly affect milk production or the herds. Milk loss has been limited; only about 10 percent of affected herds have shown signs of the infection, and there has been “little to no associated mortality.” The USDA suggested it will remain vigilant, calling the infections a “rapidly evolving situation.”

    While federal and state officials continue to track the virus, Texas officials aim to assure consumers. “There is no threat to the public and there will be no supply shortages,” Texas Agriculture Commissioner Sid Miller said in a statement. “No contaminated milk is known to have entered the food chain; it has all been dumped. In the rare event that some affected milk enters the food chain, the pasteurization process will kill the virus.”



    Taylor Swift fans dancing and jumping created last year’s “Swift quakes”

    Good vibrations —

    “Shake It Off” produced tremors equivalent to a local magnitude earthquake of 0.851.

    Taylor Swift during her Eras Tour. Crowd motions likely caused mini “Swift quakes” recorded by seismic monitoring stations.

    When mega pop star Taylor Swift gave a series of concerts last August at the SoFi Stadium in Los Angeles, regional seismic network stations recorded unique harmonic vibrations known as “concert tremor.” A similar “Swift quake” had occurred the month before in Seattle, prompting scientists from the California Institute of Technology and UCLA to take a closer look at seismic data collected during Swift’s LA concert.

    The researchers concluded that the vibrations were largely generated by crowd motion as “Swifties” jumped and danced enthusiastically to the music and described their findings in a new paper published in the journal Seismological Research Letters. The authors contend that gaining a better understanding of atypical seismic signals like those generated by the Swift concert could improve the analysis of seismic signals in the future, as well as bolster emerging applications like using signals from train noise for seismic interferometry.

    Concert tremor consists of low-frequency signals of extended duration with harmonic frequency peaks between 1 and 10 Hz, similar to the signals generated by volcanoes or trains. There has been considerable debate about the source of these low-frequency concert tremor signals: Are they produced by the synchronized movement of the crowd, or by the sound systems or instruments coupled to the stage? Several prior studies of stadium concerts have argued for the former hypothesis, while a 2015 study found that a chanting crowd at a football game produced similar harmonic seismic tremors. However, a 2008 study concluded that such signals generated during an outdoor electronic dance music festival came from the sound system vibrating to the musical beat.

    The Caltech/UCLA team didn’t just rely on the data from the regional network stations. The scientists placed additional motion sensors throughout the stadium prior to the concert, enabling them to characterize all the seismic signals produced during the concert. The signals had such unique characteristics that it was relatively easy to identify them with a spectrogram. In fact, the authors were able to identify 43 of the 45 songs Swift performed based on the distinctive signal of each song.

    They also calculated how much radiated energy was produced by each song. “Shake It Off” produced the most radiated energy, equivalent to a local magnitude earthquake of 0.851. “Keep in mind this energy was released over a few minutes compared to a second for an earthquake of that size,” said co-author Gabrielle Tepp of Caltech.

    Tepp is a volcanologist and musician in her own right. That combination came in handy when it was time to conduct a lab-based experiment to test the team’s source hypothesis using a portable public announcement speaker system. They played Swift’s “Love Story” and Tepp gamely danced and jumped with the beat during the last chorus while sensors recorded the seismic vibrations. “Even though I was not great at staying in the same place—I ended up jumping around in a small circle, like at a concert—I was surprised at how clear the signal came out,” said Tepp. They also tested a steady beat as Tepp played her bass guitar in order to isolate the signal from a single instrument.

    The resulting fundamental harmonic during the jumping was consistent with the song’s beat rate. However, the bass beats didn’t produce a harmonic signal, which was surprising since those beats were better synchronized with the actual musical beats than Tepp’s jumping motions. This might be due to the rounder shape of the bass beat signals compared to sharper spiking signals in response to the jumping.

    Map showing the concert venue and nearby seismic stations (circles) that recorded signals from the Swift concerts (blue). (Gabrielle Tepp et al., 2024)

    The authors noted that their experiment did not involve a stage or stadium-grade sound system, “so we cannot completely rule out loudspeakers as a vibrational energy source,” they wrote. Nonetheless, “Overall the evidence suggests that crowd movement is the primary source of the low-frequency signals, with the speaker system or instruments potentially contributing via stage or building vibrations.” The fact that the same kind of low-frequency seismic signals were not detected during pre-concert sound checks seems to support that conclusion, although there were higher frequency signals during sound checks.

    The team also studied the structural response of the stadium and conducted a similar analysis of seismic readings from three other concerts at SoFi Stadium that summer: country music’s Morgan Wallen, Beyoncé, and Metallica, as well as picking up clear signals at one monitoring station for the three opening acts: Pantera, DJ Khaled, and Five Finger Death Punch, respectively. The results were markedly similar to the seismic data gathered from the Taylor Swift concerts, although none of the signals matched the strongest of those detected during the Swift concerts.

    The researchers were surprised to find that the seismic signals from the Metallica concert were the weakest among all the concerts and markedly different from the others, “slanted and kind of weird looking,” per Tepp. They found several comments in music forums from fans complaining about poor sound quality at the Metallica concert. “If fans had a hard time discerning the song or beat, it may explain the more variable signals because it would have influenced their movements,” the authors wrote.

    It’s also possible that heavy metal live performances are less tightly choreographed than Beyoncé or Swift performances, or that heavy metal fans don’t move with the music in quite the same way. “Metal fans like to headbang a lot, so they’re not necessarily bouncing,” said Tepp. “It might just be that the ways in which they move don’t create as strong of a signal.”

    Seismological Research Letters, 2024. DOI: 10.1785/0220230385



    SCOTUS mifepristone case: Justices focus on anti-abortion groups’ legal standing

    Demonstrators participate in an abortion-rights rally outside the Supreme Court as the justices of the court hear oral arguments in the case of US Food and Drug Administration v. Alliance for Hippocratic Medicine on March 26, 2024 in Washington, DC.

    The US Supreme Court on Tuesday heard arguments in a case seeking to limit access to the abortion and miscarriage drug mifepristone, with a majority of justices expressing skepticism that the anti-abortion groups that brought the case have the legal standing to do so.

    The case threatens to dramatically alter access to a drug that has been safely used for decades and, according to the Guttmacher Institute, was used in 63 percent of abortions documented in the health care system in 2023. But, it also has sweeping implications for the Food and Drug Administration’s authority over drugs, marking the first time that courts have second-guessed the agency’s expert scientific analysis and moved to restrict access to an FDA-approved drug.

    As such, the case has rattled health experts, reproductive health care advocates, the FDA, and the pharmaceutical industry alike. But, based on the line of questioning in today’s oral arguments, they have reason to breathe a sigh of relief.

    Standing

    The case was initially filed in 2022 by a group of anti-abortion organizations led by the Alliance for Hippocratic Medicine. They collectively claimed that the FDA’s approval of mifepristone in 2000 was unlawful, as were FDA actions in 2016 and 2021 that eased access to the drug, allowing for it to be prescribed via telemedicine and dispensed through the mail. The anti-abortion groups justified bringing the lawsuit by claiming that the doctors in their ranks are harmed by the FDA’s actions because they are forced to treat girls and women seeking emergency medical care after taking mifepristone and experiencing complications.

    The FDA and numerous medical organizations have emphatically noted that mifepristone is extremely safe and the complications the lawsuit references are exceedingly rare. Serious side effects occur in less than 1 percent of patients, and major adverse events, including infection, blood loss, or hospitalization, occur in less than 0.3 percent, according to the American College of Obstetricians and Gynecologists. Deaths are almost non-existent.

    Still, a conservative federal judge in Texas sided with the anti-abortion groups last year, revoking the FDA’s 2000 approval. A conservative panel of judges for the Court of Appeals for the 5th Circuit in New Orleans then partially overturned the ruling, undoing the lower court’s ruling on the 2000 approval, allowing the FDA’s approval to stand, but still finding the FDA’s 2016 and 2021 actions unlawful. The ruling was frozen until the Supreme Court weighed in.

    Today, many of the Supreme Court Justices went back to the very beginning: the claimed scenario that the plaintiff doctors have been or will imminently be harmed by the FDA’s actions. At the outset of the hearings, Solicitor General Elizabeth Prelogar argued that the plaintiffs had not been harmed, and, even if they were, they already had federal protections and recourse. Any doctor who conscientiously objects to caring for a patient who has had an abortion already has federal protections that prevent them from being forced to provide that care, Prelogar argued. As such, hospitals have legal obligations and have set up contingency and staffing plans to prevent violating those doctors’ federal conscience protections.



    Thousands of phones and routers swept into proxy service, unbeknownst to users

    ANONYMIZERS ON THE CHEAP —

    Two new reports show criminals may be using your device to cover their online tracks.


    Crooks are working overtime to anonymize their illicit online activities using thousands of devices of unsuspecting users, as evidenced by two unrelated reports published Tuesday.

    The first, from security firm Lumen Labs, reports that roughly 40,000 home and office routers have been drafted into a criminal enterprise that anonymizes illicit Internet activities, with another 1,000 new devices being added each day. The malware responsible is a variant of TheMoon, a malicious code family dating back to at least 2014. In its earliest days, TheMoon almost exclusively infected Linksys E1000 series routers. Over the years it branched out to targeting the Asus WRTs, Vivotek Network Cameras, and multiple D-Link models.

    In the years following its debut, TheMoon’s self-propagating behavior and growing ability to compromise a broad base of architectures enabled a growth curve that captured attention in security circles. More recently, the visibility of the Internet of Things botnet trailed off, leading many to assume it was inert. To the surprise of researchers in Lumen’s Black Lotus Labs, during a single 72-hour stretch earlier this month, TheMoon added 6,000 ASUS routers to its ranks, an indication that the botnet is as strong as it’s ever been.

    More stunning than the discovery of more than 40,000 infected small office and home office routers located in 88 countries is the revelation that TheMoon is enrolling the vast majority of the infected devices into Faceless, a service sold on online crime forums for anonymizing illicit activities. The proxy service gained widespread attention last year following this profile by KrebsOnSecurity.

    “This global network of compromised SOHO routers gives actors the ability to bypass some standard network-based detection tools—especially those based on geolocation, autonomous system-based blocking, or those that focus on TOR blocking,” Black Lotus researchers wrote Tuesday. They added that “80 percent of Faceless bots are located in the United States, implying that accounts and organizations within the US are primary targets. We suspect the bulk of the criminal activity is likely password spraying and/or data exfiltration, especially toward the financial sector.”

    The researchers went on to say that more traditional ways to anonymize illicit online behavior may have fallen out of favor with some criminals. VPNs, for instance, may log user activity despite some service providers’ claims to the contrary. The researchers say that the potential for tampering with the Tor anonymizing browser may also have scared away some users.

    The second post came from Satori Intelligence, the research arm of security firm HUMAN. It reported finding 28 apps available in Google Play that, unbeknownst to users, enrolled their devices into a residential proxy network of 190,000 nodes at its peak for anonymizing and obfuscating the Internet traffic of others.


    ProxyLib, the name Satori gave to the network, has its roots in Oko VPN, an app that was removed from Play last year after it was revealed to be using infected devices for ad fraud. The 28 apps Satori discovered all copied the Oko VPN code, which made them nodes in the residential proxy service Asock.


    The researchers went on to identify a second generation of ProxyLib apps developed through lumiapps[.]io, a software development kit deploying exactly the same functionality and using the same server infrastructure as Oko VPN. The LumiApps SDK allows developers to integrate their custom code into a library to automate standard processes. It also allows developers to do so without having to create a user account or having to recompile code. Instead they can upload their custom code and then download a new version.


    “Satori has observed individuals using the LumiApps toolkit in the wild,” researchers wrote. “Most of the applications we identified between May and October 2023 appear to be modified versions of known legitimate applications, further indicating that users do not necessarily need to have access to the applications’ source code in order to modify them using LumiApps. These apps are largely named as ‘mods’ or indicated as patched versions and shared outside of the Google Play Store.”

    The researchers don’t know if the 190,000 nodes comprising Asock at its peak were made up exclusively of infected Android devices or if they included other types of devices compromised through other means. Either way, the number indicates the popularity of anonymous proxies.

    People who want to prevent their devices from being drafted into such networks should take a few precautions. The first is to resist the temptation to keep using devices once they’re no longer supported by the manufacturer. Most of the devices swept into TheMoon, for instance, have reached end-of-life status, meaning they no longer receive security updates. It’s also important to install security updates in a timely manner and to disable UPnP unless there’s a good reason for it remaining on and then allowing it only for needed ports. Users of Android devices should install apps sparingly and then only after researching the reputation of both the app and the app maker.



    Missouri AG sues Media Matters over its X research, demands donor names


    Missouri Attorney General Andrew Bailey yesterday sued Media Matters in an attempt to protect Elon Musk and X from the nonprofit watchdog group’s investigations into hate speech on the social network. Bailey’s lawsuit claims that “Media Matters has used fraud to solicit donations from Missourians in order to trick advertisers into removing their advertisements from X, formerly Twitter, one of the last platforms dedicated to free speech in America.”

    Bailey didn’t provide much detail on the alleged fraud but claimed that Media Matters is guilty of “fraudulent manipulation of data on X.com.” That’s apparently a reference to Media Matters reporting that X placed ads for major brands next to posts touting Hitler and Nazis. X has accused Media Matters of manipulating the site’s algorithm by endlessly scrolling and refreshing.

    Bailey yesterday issued an investigative demand seeking names and addresses of all Media Matters donors who live in Missouri and a range of internal communications and documents regarding the group’s research on Musk and X. Bailey anticipates that Media Matters won’t provide the requested materials, so he filed the lawsuit asking Cole County Circuit Court for an order to enforce the investigative demand.

    “Because Media Matters has refused such efforts in other states and made clear that it will refuse any such efforts, the Attorney General seeks an order… compelling Media Matters to comply with the CID [Civil Investigative Demand] within 20 days,” the lawsuit said.

    Media Matters slams Musk and Missouri AG

    Media Matters, which is separately fighting similar demands made by Texas, responded to Missouri’s legal action in a statement provided to Ars today.

    “Far from the free speech advocate he claims to be, Elon Musk has actually intensified his efforts to undermine free speech by enlisting Republican attorneys general across the country to initiate meritless, expensive, and harassing investigations against Media Matters in an attempt to punish critics,” Media Matters President Angelo Carusone said. “This Missouri investigation is the latest in a transparent endeavor to squelch the First Amendment rights of researchers and reporters; it will have a chilling effect on news reporters.”

    Musk thanked Bailey for filing the lawsuit in a post that said, “Media Matters is doing everything it can to undermine the First Amendment. Truly an evil organization.”

    Bailey is seeking the names and addresses of all Media Matters donors from Missouri since January 1, 2023, and the amounts of each donation. He wants all promotional or marketing material sent to potential donors and documents showing how the donations were used.

    Ads next to pro-Nazi content

    Several of Bailey’s demands relate to the Media Matters article titled, “As Musk endorses antisemitic conspiracy theory, X has been placing ads for Apple, Bravo, IBM, Oracle, and Xfinity next to pro-Nazi content.” Bailey wants all “documents related to the article, or to the events described in the article.”

    The Media Matters article displayed images of advertisements next to pro-Nazi posts. Musk previously sued Media Matters over the article, claiming the group “manipulated the algorithms governing the user experience on X to bypass safeguards and create images of X’s largest advertisers’ paid posts adjacent to racist, incendiary content.”

    X said Media Matters did this by “endlessly scrolling and refreshing its unrepresentative, hand-selected feed, generating between 13 and 15 times more advertisements per hour than viewed by the average X user repeating this inauthentic activity until it finally received pages containing the result it wanted: controversial content next to X’s largest advertisers’ paid posts.”

    X also sued the Center for Countering Digital Hate, but the lawsuit was thrown out by a federal judge yesterday.



    WWDC 2024 starts on June 10 with announcements about iOS 18 and beyond

    WWDC —

    Speculation is rampant that Apple will make its first big moves in generative AI.


    Apple has announced dates for this year’s Worldwide Developers Conference (WWDC). WWDC24 will run from June 10 through June 14 at the company’s Cupertino, California, headquarters, but everything will be streamed online.

    Apple posted about the event with the following generic copy:

    Join us online for the biggest developer event of the year. Be there for the unveiling of the latest Apple platforms, technologies, and tools. Learn how to create and elevate your apps and games. Engage with Apple designers and engineers and connect with the worldwide developer community. All online and at no cost.

    As always, the conference will kick off with a keynote presentation on the first day, which is Monday, June 10. You can be sure Apple will use that event to at least announce the key features of its next round of annual software updates for iOS, iPadOS, macOS, watchOS, visionOS, and tvOS.

    We could also see new hardware—it doesn’t happen every year, but it has of late. We don’t yet know exactly what that hardware might be, though.

    Much of the speculation among analysts and commentators concerns Apple’s first move into generative AI. There have been reports that Apple may work with a partner like Google to include a chatbot in its operating system, that it has been considering designing its own AI tools, or that it could offer an AI App Store, giving users a choice between many chatbots.

    Whatever the case, Apple is playing catch-up with some of its competitors in generative AI and large language models even though it has been using other applications of AI across its products for a couple of years now. The company’s leadership will probably talk about it during the keynote.

    After the keynote, Apple usually hosts a “Platforms State of the Union” talk that delves deeper into its upcoming software updates, followed by hours of developer-focused sessions detailing how to take advantage of newly planned features in third-party apps.



    Cities: Skylines 2 gets long-awaited official mod support and map editor

    Cities: Skylines 2 —

    Modding was seen as the most important next step by developer’s leader.

    Kudos to the designer of this umbrella-shaded rooftop terrace at Colossal Order, perhaps the only worker who can imagine a place that isn’t overwhelmed by Steam reviewers. (Paradox Interactive)

    Under the very unassuming name of patch 1.1.0f1, Cities: Skylines 2 is getting something quite big. The sequel now has the modding, map editing, and code modding support that made its predecessor such a sprawling success.

    Only time will tell if community energy can help restore some of the momentum that has been dispersed by the fraught launch of Cities: Skylines 2 (C:S2). The project of relatively small developer Colossal Order arrived in October 2023 with performance issues and a lack of content compared to its predecessor. Some of that content perception stemmed from the game’s lack of modding support, which had contributed to entire aspects of the original game not yet available in the sequel.

    When Ars interviewed Colossal Order CEO Mariina Hallikainen in December, she said that modding support was the thing she was most looking forward to arriving. Modding support was intended to be available at launch, but the challenges of building the new game’s technical base, amid many other technical issues, pushed it back, along with console releases.

    “[W]e can’t wait to have the support out there, so we can have the modding community ‘fully unleashed,'” Hallikainen said then. “Because I know they are waiting to get to work. They are actually already at it, but this will make it easier. … We just can’t wait to give them the full set of tools.” She noted that character modding, a “technically difficult thing to support,” would arrive further out, and indeed, asset modding is listed as “available later this year.”

    The base-level modding support is now available, though in “Beta” and in a different form than fans are used to. Instead of working through Steam Workshop, C:S2 mods will be available through Paradox Mods to support console players. There are, of course, issues at launch, including slowdown with the in-game mod browser. Most non-incensed commenters and reviewers consider the tools themselves to be an upgrade over the prior game’s editing suite.

    Beyond supporting the creation of mods, the in-game tools should also make it easier to load preset “Playsets” of mod combinations. We’ll have to see how long it takes assets like Spaghetti Junction, the most popular mod for the original Cities: Skylines, to arrive in C:S2 so that all may experience the municipal engineering regrets of Birmingham, England.

    Along with modding tools, Colossal Order issued some of its first proper DLC for C:S2. Beach Properties, an asset pack, adds both North American and European waterfront zoning and buildings, palm trees, and six signature buildings. There’s also a Deluxe Relax Station that puts 16 new songs and DJ patter on the soundtrack. The recent patch also contains a number of optimizations and bug fixes. Steam reviewers and Paradox forum members are asking why the beach DLC doesn’t contain actual beaches.

    Cities: Skylines 2 gets long-awaited official mod support and map editor Read More »

    chrome-launches-native-build-for-arm-powered-windows-laptops

    Chrome launches native build for Arm-powered Windows laptops

    Firefox works, too —

    When the big Windows-on-Arm relaunch happens in mid-2024, Chrome will be ready.


    We are quickly barreling toward an age of viable Arm-powered Windows laptops with the upcoming launch of Qualcomm’s Snapdragon X Elite CPU. Hardware options are great, but getting useful computers out of them will require a lot of new software, and a big one has just launched: Chrome for Windows on Arm.

    Google has had a nightly “canary” build running since January, but now it has a blog post up touting a production-ready version of Chrome for “Arm-compatible Windows PCs powered by Snapdragon.” That’s right, Qualcomm has a big hand in this release, too, with its own press announcement touting Google’s browser release for its upcoming chip. Google promises a native version of Chrome will be “fully optimized for your PC’s [Arm] hardware and operating system to make browsing the web faster and smoother.”

    Apple upended laptop CPU architecture when it dumped Intel and launched the Arm-based Apple Silicon M1. A few years later, Qualcomm is ready to answer—mostly by buying a company full of Apple Silicon veterans—with the upcoming launch of the Snapdragon X Elite chip. Qualcomm claims the X Elite will bring Apple Silicon-class hardware to Windows, but the chip isn’t out yet—it’s due for a “mid-2024” release. Most of the software you’ll be running will still be compiled for x86 and will need to go through a translation layer, which slows things down, but at least your primary browser won’t have to be one of those apps.

    Google says the release will be out this week. Assuming you don’t have an Arm laptop yet, you can visit “google.com/chrome,” scroll all the way down to the footer, and click “other platforms,” which will eventually show the new release.

    Chrome launches native build for Arm-powered Windows laptops Read More »

    bridge-collapses-put-transportation-agencies’-emergency-plans-to-the-test

    Bridge collapses put transportation agencies’ emergency plans to the test

    Image caption: The Dali container vessel after striking the Francis Scott Key Bridge, which collapsed into the Patapsco River in Baltimore on March 26, causing vehicles to plunge into the water and halting shipping traffic at one of the most important ports on the US East Coast.

    A container ship rammed into the Francis Scott Key Bridge in Baltimore around 1:30 am on March 26, 2024, causing a portion of the bridge to collapse into Baltimore Harbor. Officials called it a mass casualty event and were searching for people in the waters of the busy port.

    This event occurred less than a year after a portion of Interstate 95 collapsed in north Philadelphia during a truck fire. That disaster was initially expected to snarl traffic for months, but a temporary six-lane roadway was constructed in 12 days to serve motorists while a permanent overpass was rebuilt.

    US cities often face similar challenges when routine wear and tear, natural disasters, or major accidents damage roads and bridges. Transportation engineer Lee D. Han explains how planners, transit agencies, and city governments anticipate and manage these disruptions.

    How do agencies plan for disruptions like this?

    Planning is a central mission for state and metropolitan transportation agencies.

    Traditional long-term planning focuses on anticipating and preparing for growing and shifting transportation demand patterns. These changes are driven by regional and national economic and population trends.

    Shorter-term planning is about ensuring mobility and safety during service disruptions. These disruption events can include construction, major scheduled events like music festivals, traffic incidents such as crashes and hazardous material spills, emergency evacuations, and events like the bridge collapse in Baltimore.

    Agencies have limited resources, so they typically set priorities based on how likely a given scenario is, its potential adverse effects, and the countermeasures that officials have available.

    For bridges, the Federal Highway Administration sets standards and requires states to carry out periodic inspections. In addition, agencies develop a detouring plan for each bridge in case of a structural failure or service disruption. In Baltimore, Key Bridge traffic will be routed through two tunnels that pass under the harbor, but trucks carrying hazardous materials will have to take longer detours.

    Major bridges, such as those at Mississippi River crossings, are crucial to the nation’s economy and security. They require significant planning, commitment, and coordination between multiple agencies. There usually are multiple contingency plans in place to deal with immediate traffic control, incident response, and field operations during longer-term bridge repair or reconstruction projects.

    What are some major challenges of rerouting traffic?

    Bridges are potential choke points in highway networks. When a bridge fails, traffic immediately stops and begins to flow elsewhere, even without a formal detouring plan. Transportation agencies need to build or find excess capacity before a bridge fails so that the disrupted traffic has alternative routes.

    This is usually manageable in major urban areas that have many parallel routes and bridges and built-in redundancy in their road networks. But for rural areas, failure of a major bridge can mean extra hours or even days of travel.

    When traffic has to be rerouted off an interstate highway, it can cause safety and access problems. If large trucks are diverted to local streets that were not designed for such vehicles, they may get stuck on railroad tracks or in spaces too small for them to turn around. Heavy trucks can damage roads and bridges with low weight limits, and tall trucks may be too large to fit through low-clearance underpasses.

    Successful rerouting requires a lot of coordination between agencies and jurisdictions. They may have to adjust road signal timing to deal with extra cars and changed traffic patterns. Local drivers may need to be directed away from these alternative routes to prevent major congestion.

    It’s also important to communicate with navigation apps like Google Maps and Waze, which every driver has access to. Route choices that speed up individual trips may cause serious congestion if everyone decides to take the same alternate route and it doesn’t have enough capacity to handle the extra traffic.

    Bridge collapses put transportation agencies’ emergency plans to the test Read More »

    florida-braces-for-lawsuits-over-law-banning-kids-from-social-media

    Florida braces for lawsuits over law banning kids from social media


    On Monday, Florida became the first state to ban kids under 14 from social media without parental permission. It appears likely that the law—considered one of the most restrictive in the US—will face significant legal challenges, however, before taking effect on January 1.

    Under HB 3, apps like Instagram, Snapchat, or TikTok would need to verify the ages of users, then delete any accounts for users under 14 when parental consent is not granted. Companies that “knowingly or recklessly” fail to block underage users risk fines of up to $10,000 in damages to anyone suing on behalf of child users. They could also be liable for up to $50,000 per violation in civil penalties.

    In a statement, Florida governor Ron DeSantis said the “landmark law” gives “parents a greater ability to protect their children” from a variety of social media harms. Florida House Speaker Paul Renner, who spearheaded the law, explained some of that harm, saying that passing HB 3 was critical because “the Internet has become a dark alley for our children where predators target them and dangerous social media leads to higher rates of depression, self-harm, and even suicide.”

    But tech groups critical of the law have suggested that they are already considering suing to block it from taking effect.

    In a statement provided to Ars, the Computer & Communications Industry Association (CCIA), a nonprofit opposing the law, said that while it “supports enhanced privacy protections for younger users online,” it is concerned that “any commercially available age verification method that may be used by a covered platform carries serious privacy and security concerns for users while also infringing upon their First Amendment protections to speak anonymously.”

    “This law could create substantial obstacles for young people seeking access to online information, a right afforded to all Americans regardless of age,” Khara Boender, CCIA’s state policy director, warned. “It’s foreseeable that this legislation may face legal opposition similar to challenges seen in other states.”

    Carl Szabo, vice president and general counsel for Netchoice—a trade association with members including Meta, TikTok, and Snap—went even further, warning that Florida’s “unconstitutional law will protect exactly zero Floridians.”

    Szabo suggested that there are “better ways to keep Floridians, their families, and their data safe and secure online without violating their freedoms.” Democratic state house representative Anna Eskamani opposed the bill, arguing that “instead of banning social media access, it would be better to ensure improved parental oversight tools, improved access to data to stop bad actors, alongside major investments in Florida’s mental health systems and programs.”

    Netchoice expressed “disappointment” that DeSantis agreed to sign a law requiring an “ID for the Internet” after “his staunch opposition to this idea both on the campaign trail” and when vetoing a prior version of the bill.

    “HB 3 in effect will impose an ‘ID for the Internet’ on any Floridian who wants to use an online service—no matter their age,” Szabo said, warning of invasive data collection needed to verify that a user is under 14 or a parent or guardian of a child under 14.

    “This level of data collection will put Floridians’ privacy and security at risk, and it violates their constitutional rights,” Szabo said, noting that in court rulings in Arkansas, California, and Ohio over similar laws, “each of the judges noted the similar laws’ constitutional and privacy problems.”

    Florida braces for lawsuits over law banning kids from social media Read More »