Author name: Kris Guyer


Google can’t defend shady Chrome data hoarding as “browser agnostic,” court says


Chrome users who declined to sync their browsers with their Google accounts secured a big privacy win this week, reviving a proposed class action that claims Google secretly collected personal data, without consent, from more than 100 million Chrome users who opted out of syncing.

On Tuesday, the 9th US Circuit Court of Appeals reversed the prior court’s finding that Google had properly gained consent for the contested data collection.

The appeals court said that the US district court had erred in ruling that Google’s general privacy policies secured consent for the data collection. The district court failed to consider conflicts with Google’s Chrome Privacy Notice (CPN), which said that users’ “choice not to sync Chrome with their Google accounts meant that certain personal information would not be collected and used by Google,” the appeals court ruled.

Rather than analyzing the CPN, it appears that the US district court completely bought into Google’s argument that the CPN didn’t apply because the data collection at issue was “browser agnostic” and occurred whether a user was browsing with Chrome or not. But the appeals court—by a 3–0 vote—did not.

In his opinion, Circuit Judge Milan Smith wrote that the “district court should have reviewed the terms of Google’s various disclosures and decided whether a reasonable user reading them would think that he or she was consenting to the data collection.”

“By focusing on ‘browser agnosticism’ instead of conducting the reasonable person inquiry, the district court failed to apply the correct standard,” Smith wrote. “Viewed in the light most favorable to Plaintiffs, browser agnosticism is irrelevant because nothing in Google’s disclosures is tied to what other browsers do.”

Smith suggested that the US district court wasted time holding a “7.5-hour evidentiary hearing which included expert testimony” about whether the data collection at issue was “browser agnostic.”

“Rather than trying to determine how a reasonable user would understand Google’s various privacy policies,” the district court improperly “made the case turn on a technical distinction unfamiliar to most ‘reasonable’” users, Smith wrote.

Now, the case has been remanded to the district court where Google will face a trial over the alleged failure to get consent for the data collection. If the class action is certified, Google risks owing currently unknown damages to any Chrome users who opted out of syncing between 2016 and 2024.

According to Smith, the key focus of the trial will be weighing the CPN terms and determining “what a ‘reasonable user’ of a service would understand they were consenting to, not what a technical expert would.”

The same privacy policy last year triggered a Google settlement with Chrome users whose data was collected despite using “Incognito” mode.

Matthew Wessler, a lawyer representing the Chrome users, told Ars that “we are pleased with the Ninth Circuit’s decision” and “look forward to taking this case on behalf of Chrome users to trial.”

A Google spokesperson, José Castañeda, told Ars that Google disputes the decision.

“We disagree with this ruling and are confident the facts of the case are on our side,” Castañeda told Ars. “Chrome Sync helps people use Chrome seamlessly across their different devices and has clear privacy controls.”



Ars Technica content is now available in OpenAI services

Adventures in capitalism —

Condé Nast joins other publishers in allowing OpenAI to access its content.


On Tuesday, OpenAI announced a partnership with Ars Technica parent company Condé Nast to display content from prominent publications within its AI products, including ChatGPT and a new SearchGPT prototype. It also allows OpenAI to use Condé content to train future AI language models. The deal covers well-known Condé brands such as Vogue, The New Yorker, GQ, Wired, Ars Technica, and others. Financial details were not disclosed.

One immediate effect of the deal will be that users of ChatGPT or SearchGPT will now be able to see information from Condé Nast publications pulled from those assistants’ live views of the web. For example, a user could ask ChatGPT, “What’s the latest Ars Technica article about Space?” and ChatGPT can browse the web and pull up the result, attribute it, and summarize it for users while also linking to the site.

In the longer term, the deal also means that OpenAI can openly and officially utilize Condé Nast articles to train future AI language models, which includes successors to GPT-4o. In this case, “training” means feeding content into an AI model’s neural network so the AI model can better process conceptual relationships.

AI training is an expensive and computationally intense process that happens rarely, usually prior to the launch of a major new AI model, although a secondary process called “fine-tuning” can continue over time. Having access to high-quality training data, such as vetted journalism, improves AI language models’ ability to provide accurate answers to user questions.

It’s worth noting that Condé Nast internal policy still forbids its publications from using text created by generative AI, which is consistent with its AI rules before the deal.

Not waiting on fair use

With the deal, Condé Nast joins a growing list of publishers partnering with OpenAI, including Associated Press, Axel Springer, The Atlantic, and others. Some publications, such as The New York Times, have chosen to sue OpenAI over content use, and there’s reason to think they could win.

In an internal email to Condé Nast staff, CEO Roger Lynch framed the multi-year partnership as a strategic move to expand the reach of the company’s content, adapt to changing audience behaviors, and ensure proper compensation and attribution for using the company’s IP. “This partnership recognizes that the exceptional content produced by Condé Nast and our many titles cannot be replaced,” Lynch wrote in the email, “and is a step toward making sure our technology-enabled future is one that is created responsibly.”

The move also brings additional revenue to Condé Nast, Lynch added, at a time when “many technology companies eroded publishers’ ability to monetize content, most recently with traditional search.” The deal will allow Condé to “continue to protect and invest in our journalism and creative endeavors,” Lynch wrote.

OpenAI COO Brad Lightcap said in a statement, “We’re committed to working with Condé Nast and other news publishers to ensure that as AI plays a larger role in news discovery and delivery, it maintains accuracy, integrity, and respect for quality reporting.”



Disney cancels The Acolyte after one season

haters gonna hate —

Star Wars series was admittedly uneven, but didn’t deserve the online hate it received.

We have doubts that any amount of Force powers will bring the show back.

YouTube/Disney+

In news that will delight some and disappoint others, Disney has canceled Star Wars series The Acolyte after just one season, Deadline Hollywood reports. The eight-episode series got off to a fairly strong start, with mostly positive reviews and solid ratings, albeit lower than prior Star Wars series. But it couldn’t maintain and build upon that early momentum, and given the production costs, it’s not especially surprising that Disney pulled the plug.

The Acolyte arguably wrapped up its major narrative arc pretty neatly in the season finale, but it also took pains to set the stage for a possible sophomore season. In this streaming age, no series is ever guaranteed renewal. Still, it would have been nice to see what showrunner Leslye Headland had planned; when given the chance, many shows hit their stride on those second-season outings.

(Spoilers for the series below. We’ll give you another heads-up when we get to major spoilers.)

As I’ve written previously, The Acolyte is set at the end of the High Republic Era, about a century before the events of The Phantom Menace. In this period, the Jedi aren’t the underdog rebels battling the evil Galactic Empire. They are at the height of their power and represent the dominant mainstream institution—not necessarily a benevolent one, depending on one’s perspective. That’s a significant departure from most Star Wars media and perhaps one reason why the show was so divisive among fans. (The show had its issues, but I dismiss the profoundly unserious lamentations of those who objected to the female-centric storyline and presence of people of color by dubbing it “The Wokelyte” and launching a review-bombing campaign.)

The Acolyte opened on the planet Ueda, where a mysterious masked woman wielding daggers attacked the Jedi Master Indara (Carrie-Anne Moss) and killed her. The assassin was quickly identified as Osha Aniseya (Amandla Stenberg), a former padawan now working as a meknek, making repairs on spaceships. Osha was arrested by her former classmate, Yord Fandar (Charlie Barnett), but claimed she was innocent. Her twin sister, Mae, died in a fire on their home planet of Brendok when they were both young. Osha concluded that Mae was still alive and had killed Indara. Osha’s former Jedi master, Sol (Lee Jung-jae), believed her, and subsequent events proved Osha right.

Mae’s targets were not random. She was out to kill the four Jedi she blamed for the fire on Brendok: Indara, Sol, Torbin (Dean-Charles Chapman), and a Jedi Wookiee named Kelnacca (Joonas Suotamo). The quartet had arrived on Brendok to demand they be allowed to test the twins as potential Jedi.

The twins had been raised by a coven of “Force witches” there, led by Mother Aniseya (Jodie Turner-Smith), who believed the Jedi were misusing the Force. While Mae was keen to follow in their mother’s footsteps, Osha wanted to train with the Jedi. When the fire broke out, both Mae and Osha believed the other twin had been killed along with the rest of the coven. How the fire really started, and the identity of Mae’s mysterious Master who trained her in the dark side of the Force, were the primary mysteries that played out over the course of the season.

(WARNING: Major spoilers below. Stop reading now if you haven’t finished watching the series.)

Lightsabers and wuxia

The camera moved on a single axis for the wuxia-inspired fight scenes.

Lucasfilm/Disney+

From the start, The Acolyte was a bit of a departure from a typical Star Wars series, weaving in elements from wuxia films and detective stories while remaining true to the established Star Wars aesthetic and design. That alone made it an intriguing effort, with fresh characters and new takes on classic Star Wars lore. And the martial arts-inspired fight choreography was clever and fun to watch—especially in the shocking, action-packed fifth episode (“Night”).

But there were some obvious shortcomings as well, most notably the clunky dialogue—although that’s kind of a long-standing attribute of the Star Wars franchise. (Alec Guinness notoriously hated his dialogue as Obi-Wan Kenobi in A New Hope.) The pacing lagged at times, and there was a surprisingly high body count among the central characters.

A high body count: All of these Jedi are dead.

Lucasfilm/Disney+

That alone might have made a second season challenging. I mean, they killed off Moss’ Jedi master in the first 10 minutes (although she reappeared in flashbacks), with Torbin and Kelnacca meeting the same fate over the next few episodes. By the time the final credits rolled, almost all the Jedi lead characters were dead. And senior leader Vernestra (Rebecca Henderson) opted to blame the murders on Sol (RIP) rather than Mae’s master, who turned out to be Vernestra’s former apprentice, Qimir (a scene-stealing Manny Jacinto)—now apprentice to Sith lord Darth Plagueis. (This was strongly implied in the finale and subsequently confirmed by Headland.)

Ultimately, however, it all came down to the ratings. Per Deadline, The Acolyte garnered 11.1 million views over its first five days (and 488 million minutes viewed)—not bad, but below Ahsoka‘s 14 million views over the same period. But those numbers declined sharply over the ensuing weeks, with the finale earning the dubious distinction of posting the lowest minutes viewed (335 million) for any Star Wars series finale. That simply didn’t meet Disney’s threshold for renewal, so we won’t get to learn more about the Qimir/Darth Plagueis connection.



CEO of failing hospital chain got $250M amid patient deaths, layoffs, bankruptcy

“Outrageous corporate greed” —

Steward Health Care System, run by CEO Ralph de la Torre, filed for bankruptcy in May.

Hospital staff and community members held a protest in front of Carney Hospital in Boston on August 5 after Steward announced it will close the hospital. “Ralph” refers to Steward’s CEO, Ralph de la Torre, who owns a yacht.

As the more than 30 hospitals in the Steward Health Care System scrounged for cash to cover supplies, shuttered pediatric and neonatal units, closed maternity wards, laid off hundreds of health care workers, and put patients in danger, the system paid out at least $250 million to its CEO and his companies, according to a report by The Wall Street Journal.

The newly revealed financial details bring yet more scrutiny to Steward CEO Ralph de la Torre, a Harvard University-trained cardiac surgeon who, in 2020, took over majority ownership of Steward from the private equity firm Cerberus. De la Torre and his companies were reportedly paid at least $250 million since that takeover. In May, Steward, which has hospitals in eight states, filed for Chapter 11 bankruptcy.

Critics—including members of the Senate Committee on Health, Education, Labor, and Pensions (HELP)—allege that de la Torre stripped the system’s hospitals of assets, siphoned payments from them, and loaded them with debt, all while reaping huge payouts that made him obscenely wealthy.

Alleged greed

For instance, de la Torre sold the land under the system’s hospitals to a large hospital landlord, Medical Properties Trust, leaving Steward hospitals on the hook for large rent payments. Under de la Torre’s leadership, Steward also paid a management consulting firm $30 million a year to “provide executive oversight and overall strategic directive.” But de la Torre was the majority owner of the consulting firm, which also employed other Steward executives. As the WSJ put it, Steward “effectively paid its CEO’s firm, which employed Steward executives, for executive-management services for Steward.”

In 2021, while the COVID-19 pandemic strained hospitals, Steward distributed $111 million to shareholders. With de la Torre owning 73 percent of the company at the time, his share would have been around $81 million, the WSJ reported. That year, de la Torre bought a 190-foot yacht for $40 million. He also owns a $15 million custom-made luxury fishing boat called Jaruco. The Senate HELP Committee, meanwhile, notes that a Steward affiliate owned two jets, one valued at $62 million and a second “backup” jet valued at $33 million.

In 2022, de la Torre got married in an elaborate wedding on Italy’s Amalfi Coast and bought a 500-acre Texas ranch for at least $7.2 million. His new wife, Nicole Acosta, 29, is a competitive equestrian who trains at a facility near the ranch. She competes on a horse that was sold in 2014 for $3.5 million, though it’s unclear how much the couple paid for it. Besides the ranch, de la Torre, 58, owns an 11,108-square-foot mansion in Dallas valued at $7.2 million, the WSJ reported.

While de la Torre was living a lavish lifestyle, Steward hospitals faced dire situations—as they had been for years. An investigation by the Senate HELP committee noted that Steward had shut down several hospitals in Massachusetts, Ohio, Arizona, and Texas between 2014 and this year, laying off thousands of health care workers and leaving communities in the lurch. It closed several pediatric wards in Massachusetts and Texas; in Florida, it closed neonatal units and eliminated maternity services. In Louisiana, Steward patients faced “immediate jeopardy.”

“Third-world medicine”

In a July hearing, Sen. Bill Cassidy (R-LA), ranking member of the HELP Committee, spoke of the conditions at Glenwood Regional Medical Center in West Monroe, Louisiana, which Steward allegedly mismanaged. “According to a report from the Centers for Medicare and Medicaid Services, a physician at Glenwood told a Louisiana state inspector that the hospital was performing ‘third-world medicine,’” Cassidy said.

Further, “one patient died while waiting for a transfer to another hospital because Glenwood did not have the resources to treat them,” the senator said. “Unfortunately, Glenwood is not unique,” he went on. “At a Steward-owned Massachusetts hospital, a woman died after giving birth when doctors realized mid-surgery that the supplies needed to treat her were previously repossessed due to Steward’s financial troubles.” The hospital reportedly owed the supplier $2.5 million in unpaid bills.

Additionally, the WSJ investigation dug up records that showed that a pest control company discovered 3,000 bats living in one of Steward’s Florida hospitals. In Arizona, a Phoenix-area hospital was without air conditioning during scorching temperatures, and its kitchen was closed for health-code violations. The state ordered it to shut down last week.

“Dr. de la Torre and his executive teams’ poor financial decisions and gross mismanagement of its hospitals is shocking,” Cassidy said. “Patients’ lives are at risk. The American people deserve answers.”

Outrage

Senate HELP Committee chair Bernie Sanders (I-VT) went further, saying that the US health care system “is designed not to make patients well, but to make health care executives and stockholders extraordinarily wealthy. … Perhaps more than anyone else in America, Ralph de la Torre, the CEO of Steward Health Care, epitomizes the type of outrageous corporate greed that is permeating throughout our for-profit health care system.”

Sanders lamented how de la Torre’s payouts could have instead benefited patients and communities, asking: “How many of Steward’s hospitals could have been prevented from closing down, how many lives could have been saved, how many health care workers would still have their jobs if Dr. de la Torre spent $150 million on high-quality health care instead of a yacht, two private jets and a luxury fishing boat?”

On July 25, the committee voted 16–4 to subpoena de la Torre so they could ask him such questions in person. To date, de la Torre has refused to voluntarily appear before the committee and declined to comment on the WSJ report. The committee’s vote marks the first time since 1981 that it has issued a subpoena.

Separately, Steward and de la Torre are under investigation by the Department of Justice over allegations of fraud and corruption in a deal to run hospitals in Malta.



How accurate are wearable fitness trackers? Less than you might think

some misleading metrics —

Wide variance underscores need for a standardized approach to validation of devices.


Back in 2010, Gary Wolf, then the editor of Wired magazine, delivered a TED talk in Cannes called “The Quantified Self.” It was about what he termed a “new fad” among tech enthusiasts. These early adopters were using gadgets to monitor everything from their physiological data to their mood and even the number of nappies their children used.

Wolf acknowledged that these people were outliers—tech geeks fascinated by data—but their behavior has since permeated mainstream culture.

From the smartwatches that track our steps and heart rate, to the fitness bands that log sleep patterns and calories burned, these gadgets are now ubiquitous. Their popularity is emblematic of a modern obsession with quantification—the idea that if something isn’t logged, it doesn’t count.

At least half the people in any given room are likely wearing a device, such as a fitness tracker, that quantifies some aspect of their lives. Wearables are being adopted at a pace reminiscent of the mobile phone boom of the late 2000s.

However, the quantified self movement still grapples with an important question: Can wearable devices truly measure what they claim to?

Along with my colleagues Maximus Baldwin, Alison Keogh, Brian Caulfield, and Rob Argent, I recently published an umbrella review (a systematic review of systematic reviews) examining the scientific literature on whether consumer wearable devices can accurately measure metrics like heart rate, aerobic capacity, energy expenditure, sleep, and step count.

At a surface level, our results were quite positive. Accepting some error, wearable devices can measure heart rate with an error rate of plus or minus 3 percent, depending on factors like skin tone, exercise intensity, and activity type. They can also accurately measure heart rate variability and show good sensitivity and specificity for detecting arrhythmia, a problem with the rate of a person’s heartbeat.

Additionally, they can accurately estimate what’s known as cardiorespiratory fitness, which is how the circulatory and respiratory systems supply oxygen to the muscles during physical activity. This can be quantified by something called VO2Max, which is a measure of how much oxygen your body uses while exercising.

The ability of wearables to accurately measure this is better when those predictions are generated during exercise (rather than at rest). In the realm of physical activity, wearables generally underestimate step counts by about 9 percent.
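To make the error figures above concrete, here is a minimal sketch (our illustration, not code or data from the umbrella review) of the metric validation studies commonly use for claims like "underestimates step counts by about 9 percent": mean absolute percentage error (MAPE) of device readings against a reference measurement. The step counts below are made-up numbers.

```python
def mape(device_readings, reference_readings):
    """Mean absolute percentage error, in percent, of a device
    against a gold-standard reference measurement."""
    assert len(device_readings) == len(reference_readings)
    errors = [
        abs(device - ref) / ref * 100
        for device, ref in zip(device_readings, reference_readings)
    ]
    return sum(errors) / len(errors)

# Hypothetical example: wearable step counts vs. a hand-tallied reference.
device = [9100, 10250, 7800]
reference = [10000, 11000, 8500]
print(f"MAPE: {mape(device, reference):.1f}%")  # roughly 8% underestimation
```

Note that MAPE alone hides the direction of the error; studies therefore often also report signed error margins, which is why the energy-expenditure figures in this article span a negative-to-positive range.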

Challenging endeavour

However, discrepancies were larger for energy expenditure (the number of calories you burn when exercising), with error margins ranging from −21.27 percent to +14.76 percent, depending on the device used and the activity undertaken.

Results weren’t much better for sleep. Wearables tend to overestimate total sleep time and sleep efficiency, typically by more than 10 percent. They also tend to underestimate sleep onset latency (a lag in getting to sleep) and wakefulness after sleep onset. Errors ranged from 12 percent to 180 percent, compared to the gold standard measurements used in sleep studies, known as polysomnography.

The upshot is that, despite the promising capabilities of wearables, we found conducting and synthesizing research in this field to be very challenging. One hurdle we encountered was the inconsistent methodologies employed by different research groups when validating a given device.

This lack of standardization leads to conflicting results and makes it difficult to draw definitive conclusions about a device’s accuracy. A classic example from our research: one study might assess heart rate accuracy during high-intensity interval training, while another focuses on sedentary activities, leading to discrepancies that can’t be easily reconciled.

Other issues include varying sample sizes, participant demographics, and experimental conditions—all of which add layers of complexity to the interpretation of our findings.

What does it mean for me?

Perhaps most importantly, the rapid pace at which new wearable devices are released exacerbates these issues. With most companies following a yearly release cycle, we and other researchers find it challenging to keep up. The timeline for planning a study, obtaining ethical approval, recruiting and testing participants, analyzing results, and publishing can often exceed 12 months.

By the time a study is published, the device under investigation is likely to already be obsolete, replaced by a newer model with potentially different specifications and performance characteristics. This is demonstrated by our finding that less than 5 percent of the consumer wearables that have been released to date have been validated for the range of physiological signals they purport to measure.

What do our results mean for you? As wearable technologies continue to permeate various facets of health and lifestyle, it is important to approach manufacturers’ claims with a healthy dose of skepticism. Gaps in research, inconsistent methodologies, and the rapid pace of new device releases underscore the need for a more formalized and standardized approach to the validation of devices.

The goal here would be to foster collaborative synergies between formal certification bodies, academic research consortia, popular media influencers, and the industry so that we can augment the depth and reach of wearable technology evaluation.

Efforts are already underway to establish a collaborative network that can foster a richer, multifaceted dialogue that resonates with a broad spectrum of stakeholders—ensuring that wearables are not just innovative gadgets but reliable tools for health and wellness.

Cailbhe Doherty, assistant professor in the School of Public Health, Physiotherapy and Sports Science, University College Dublin. This article is republished from The Conversation under a Creative Commons license. Read the original article.



That book is poison: Even more Victorian covers found to contain toxic dyes

Arsenic and old books —

Old books with toxic dyes may be in universities, public libraries, private collections.

Composite image showing color variation of emerald green bookcloth on book spines, likely a result of air pollution.

In April, the National Library of France removed four 19th century books, all published in Great Britain, from its shelves because the covers were likely laced with arsenic. The books have been placed in quarantine for further analysis to determine exactly how much arsenic is present. It’s part of an ongoing global effort to test cloth-bound books from the 19th and early 20th centuries because of the common practice of using toxic dyes during that period.

Chemists from Lipscomb University in Nashville, Tennessee, have also been studying Victorian books from that university’s library collection in order to identify and quantify levels of poisonous substances in the covers. They reported their initial findings this week at a meeting of the American Chemical Society in Denver. Using a combination of spectroscopic techniques, they found that several books had lead concentrations more than twice the limit imposed by the US Centers for Disease Control and Prevention (CDC).

The Lipscomb effort was inspired by the University of Delaware’s Poison Book Project, established in 2019 as an interdisciplinary crowdsourced collaboration between university scientists and the Winterthur Museum, Garden, and Library. The initial objective was to analyze all the Victorian-era books in the Winterthur circulating and rare books collection for the presence of an arsenic compound called copper acetoarsenite, an emerald green pigment that was very popular at the time to dye wallpaper, clothing, and cloth book covers. Book covers dyed with chrome yellow, aka lead chromate—a pigment favored by Vincent van Gogh—were also examined, and the project’s scope has since expanded worldwide.

The Poison Book Project is ongoing, but 50 percent of the 19th century cloth-case bindings tested so far contain lead in the cloth across a range of colors, as well as other highly toxic heavy metals: arsenic, chromium, and mercury. The French National Library’s affected books included the two-volume Ballads of Ireland by Edward Hayes (1855), an anthology of translated Romanian poetry (1856), and the Royal Horticultural Society’s book from 1862–1863.

Levels were especially high in those bindings that contain chrome yellow. However, the project researchers also determined that, for the moment at least, the chromium and lead in chrome yellow dyed book covers are still bound to the cloth. The emerald green pigment, on the other hand, is highly “friable,” meaning that the particles break apart under even small amounts of stress or friction, like rubbing or brushing up against the surface—and that pigment dust is hazardous to human health, particularly if inhaled.

Lipscomb University undergraduate Leila Ais cuts a sample from a book cover to test for toxic dyes.

Kristy Jones

The project lists several recommendations for the safe handling and storage of such books, such as wearing nitrile gloves—prolonged direct contact with arsenical green pigment, for instance, can lead to skin lesions and skin cancer—and not eating, drinking, biting one’s fingernails or touching one’s face during handling, as well as washing hands thoroughly and wiping down surfaces. Arsenical green books should be isolated for storage and removed from circulating collections, if possible. And professional conservators should work under a chemical fume hood to limit their exposure to arsenical pigment dust.

X-ray diffraction marks the spot

In 2022, Lipscomb librarians heard about the Poison Book Project and approached the chemistry department about conducting a similar analytical survey of the 19th century books in the Beaman Library. “These old books with toxic dyes may be in universities, public libraries, and private collections,” said Abigail Hoermann, an undergraduate studying chemistry at Lipscomb University who is among those involved in the effort, led by chemistry professor Joseph Weinstein-Webb. “So, we want to find a way to make it easy for everyone to be able to find what their exposure is to these books, and how to safely store them.”

The team relied upon X-ray fluorescence spectroscopy to conduct a broad survey of the collection to determine the presence of arsenic or other heavy metals in the covers, followed by plasma optical emission spectroscopy to measure the concentrations in snipped samples from book covers where such poisons were found. They also took their analysis one step further by using X-ray diffraction to identify the specific pigment molecules within the detected toxic metals.

The results so far: Lead and chromium were present in several books in the Lipscomb collection, with high levels of lead and chromium in some of those samples. The highest lead level measured was more than twice the CDC limit, while the highest chromium concentration was six times the limit.

The Lipscomb library decided to seal any colored 19th century books not yet tested in plastic for storage pending analysis. Those books, now known to have covers colored with dangerous dyes, have been removed from public circulation and also sealed in plastic bags, per Poison Book Project recommendations.

The XRD testing showed that some of the covers laden with those heavy metals also contained lead(II) chromate, the compound behind the chrome yellow pigment. In fact, the team was surprised to find that the book covers contained far more lead than chromium, given that lead(II) chromate contains equal molar amounts of both metals. Further research is needed, but the working hypothesis is that other lead-based pigments, perhaps lead(II) oxide or lead(II) sulfide, were also used in the dyes on those covers.
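A quick back-of-envelope check, using standard atomic masses rather than anything from the study itself, shows why the lead excess is notable: a 1:1 molar ratio in lead(II) chromate already implies roughly four times as much lead as chromium by mass, so "far more lead" means an excess well beyond that baseline.

```python
# Lead-to-chromium mass ratio in pure chrome yellow (PbCrO4).
# The two metals occur in a 1:1 molar ratio, but lead's much
# larger atomic mass means any mass-based concentration
# measurement of the pure pigment should still read ~4x more lead.
ATOMIC_MASS = {"Pb": 207.2, "Cr": 52.0}  # g/mol, standard values
mass_ratio = ATOMIC_MASS["Pb"] / ATOMIC_MASS["Cr"]
print(f"Pb:Cr mass ratio in PbCrO4 ~= {mass_ratio:.2f}")
```

Finding a lead-to-chromium ratio well above even that mass baseline is what points toward additional lead-only pigments in the dye mix.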

That book is poison: Even more Victorian covers found to contain toxic dyes Read More »


Texas judge who bought Tesla stock won’t recuse himself from X v. Media Matters


A federal judge who bought more than $15,000 worth of Tesla stock has rejected a motion that could have forced him to recuse himself from a lawsuit that Elon Musk’s X Corp. filed against the nonprofit Media Matters for America.

US District Judge Reed O’Connor of the Northern District of Texas bought Tesla stock valued between $15,001 and $50,000 in 2022, a financial disclosure report shows. He was overseeing two lawsuits filed by X and recused himself from only one of the cases.

Media Matters argued in a July court filing that Tesla should be disclosed by X as an “interested party” in the case because of the public association between Musk and the Tesla brand. O’Connor rejected the Media Matters motion in a ruling issued Friday.

O’Connor wrote that financial interest “means ownership of a legal or equitable interest, however small, or a relationship as director, adviser, or other active participant in the affairs of a party.” His ruling said the standard is not met in this case and accused Media Matters of gamesmanship:

Defendants failed to show facts that X’s alleged connection to Tesla meets this standard. Instead, it appears Defendants seek to force a backdoor recusal through their Motion to Compel. Gamesmanship of this sort is inappropriate and contrary to the rules of the Northern District of Texas.

Judge should exit case, law professor writes

O’Connor made the ruling three days after recusing himself from a similar lawsuit filed by X. In that case, X sued the World Federation of Advertisers (WFA) and several large corporations that it accuses of an illegal boycott. Antitrust law professors have described X’s claims as weak.

O’Connor didn’t explain why he recused himself, but it seems clear that it wasn’t because of his Tesla stock. O’Connor also invested in Unilever, one of the defendants in X’s advertising lawsuit. Since Unilever is directly involved in the case, that’s likely what drove O’Connor’s recusal decision.

Musk’s case against Media Matters is also related to X’s problem with advertisers fleeing the platform formerly named Twitter. Media Matters published research on ads being placed next to pro-Nazi content on X, and the lawsuit blames the group for X’s advertising losses.

The federal code of judges’ conduct says that “a judge shall disqualify himself or herself in a proceeding in which the judge’s impartiality might reasonably be questioned.” This includes cases in which the judge has a direct financial interest, and cases where the judge has “any other interest that could be affected substantially by the outcome of the proceeding.”

Harvard Law School Professor Noah Feldman argued in a Bloomberg Opinion piece last week that O’Connor should recuse himself from X v. Media Matters. While X and Tesla are legally separate entities, Feldman wrote, O’Connor should exit the case under the “impartiality might reasonably be questioned” rule.

“The basic idea is that a judge should recuse himself if a reasonable person in possession of the relevant facts would believe that the judge has reason for bias. And there is good reason to think that this rule covers O’Connor,” Feldman wrote. “Because Musk is so closely identified with both X and Tesla, Tesla share prices are arguably affected by the performance of X.”



Your 10-year-old graphics card can run Dragon Age: The Veilguard

Still kicking —

2014’s Nvidia GTX 970 is still a “minimum requirements” workhorse.

At this rate, it might be the only graphics card you'll ever need?

When Dragon Age: Inquisition came out nearly 10 years ago, PC players could have invested $329 (~$435 in today’s dollars) in a brand-new GTX 970 graphics card to make the game look as good as possible on their high-end gaming rig. Surprisingly enough, that very same 2014 graphics card will still be able to run the follow-up, Dragon Age: The Veilguard (previously known as Dreadwolf), when it launches on October 31. On the AMD side, an even older Radeon R9 card purchased back in 2013 will be able to run the game.

Veilguard’s minimum specs are just the latest to show the workmanlike endurance of the humble GTX 970, which is currently available used on Newegg for as low as $140. Relatively recent big-budget PC releases like Baldur’s Gate 3 and Call of Duty: Modern Warfare 3 both use the old card (or its less powerful sibling, the GTX 960) as their “minimum requirement” benchmark.

Not every big-budget PC game these days is so forgiving with its minimum specs, though. When Cyberpunk 2077 and Doom: Eternal launched in 2020, they both asked players to be sporting at least a GTX 1060, which had come out around four years prior.

For a bit of context, the GTX 970 was used as the “recommended” baseline spec for the mid-range “Oculus Ready” PCs needed to power the then-new Rift VR headset when it launched in 2016. Today, a $500 Meta Quest 3 headset gives you much better graphical performance in a self-contained portable package, no gaming PC required.

Veilguard players sticking with a GTX 970 shouldn’t expect to get the best graphical experience, of course. EA suggests an RTX 2070 (circa 2018) or a Radeon RX 5700 XT (circa 2019) to run the game at “recommended” specs. And you’ll need at least 16 GB of RAM and 100 GB of storage space.

Since work on Veilguard began in earnest in 2015, the game has suffered a string of high-profile staff departures: Creative Director Mike Laidlaw left in 2017; Executive Producer Mark Darrah and BioWare General Manager Casey Hudson left in late 2020; Senior Creative Director Matt Goldman left in late 2021; replacement Executive Producer Christian Daley left in early 2022; and producer Mac Walters left in early 2023.

The full requirements for Dragon Age: The Veilguard are as follows.

Minimum Requirements

OS: Windows 10/11 64-bit

Processor: Intel Core i5-8400 / AMD Ryzen 3 3300X (see notes)

Memory: 16GB

Graphics: Nvidia GTX 970/1650 / AMD Radeon R9 290X

DirectX: Version 12

Storage: 100GB available space

Additional Notes: SSD preferred, HDD supported; AMD CPUs on Windows 11 require AGESA V2 1.2.0.7

Recommended Requirements

OS: Windows 10/11 64-bit

Processor: Intel Core i9-9900K / AMD Ryzen 7 3700X (see notes)

Memory: 16GB

Graphics: Nvidia RTX 2070 / AMD Radeon RX 5700XT

DirectX: Version 12

Storage: 100GB SSD available space

Additional Notes: SSD required; AMD CPUs on Windows 11 require AGESA V2 1.2.0.7



Monthly Roundup #21: August 2024

Strictly speaking I do not have that much ‘good news’ to report, but it’s all mostly fun stuff one way or another. Let’s go.

Is this you?

Patrick McKenzie: This sounds like a trivial observation and it isn’t:

No organization which makes its people pay for coffee wants to win.

There are many other questions you can ask about an organization but if their people pay for coffee you can immediately discount their realized impact on the world by > 90%.

This is not simply for the cultural impact of stupid decisions, though goodness as a Japanese salaryman I have stories to tell. Management, having priced coffee, seeking expenses to cut, put a price on disposable coffee cups, and made engineers diligently count those paper cups.

Just try to imagine how upside down the world is when you think one of the highest priority tasks for a software engineer this Monday is updating the disposable coffee cup consumption spreadsheet.

And no, Japanese megacorps are not the only place where these insanities persist. And there are many isomorphic ones.

Dominic Cummings: Cf No10 Cafe.

One of the secrets of my productivity, such as it is, is that I know many (but not all!) of the things to not track or treat as having a price. Can you imagine thinking it was a good idea to charge the people at No10 for coffee? Well, bad news.

Tyler Cowen asks, why do we no longer compose music like Bach? Or rather, why do we not care when someone does, as when Nikolaus Matthes (born 1981) produced high quality (if not as high quality as Bach’s best) Bach-style work. All reviews strongly positive, stronger than many older musicians who are still popular, yet little interest.

To me the answer is simple enough. There is quite a lot of Bach, and many contemporaries, and we have filtered what is available rather well and turned it into a common frame of reference. One could listen to that music all of one’s life, and there is still plenty of it. Why complicate matters now with modern mimicry, even if it is quite good? In popular music there are cultural reasons to need ‘new music’ periodically even if it is only variation on the old, yet we are increasingly converging on the classic canon instead except for particular ‘new music’ spaces. And I think we are right to do so.

The fabrication of the Venezuelan election wasn’t even trying. This matches my model. Yes, it is possible to generate plausible fake election data that would make fraud hard to prove, but those with the fraudulent election nature rarely do that. Often they actively want you to know. The point generalizes well beyond elections.

Indeed, it seems that in the wake of his new 0% approval rating, Maduro is going Full Stalin, with maximum security reeducation camps for political prisoners. Also the antisemitism, and I could go on. The playbook never changes.

I am guessing this happens a lot, including the He Admit It part.

Kelsey McR: ‼️ HVAC rep legit just said “We know our prices are competitive because we meet with all the other vendors in the area at least once a year to make sure we’re in alignment.” ‼️

This was their defense to my husband’s complaint on how they completely took advantage of my mother.

Some $h!t about to go down in Charlotte, NC if they don’t fix their mistake.

A whole different reason to beware when engaging in Air Conditioner Repair.

Disney tries to pull a literal ‘you signed up for a Disney+ free trial so you can’t sue us for killing your wife’ defense, saying that he agreed to arbitration in ‘all disputes with Disney.’ Others claim this is bad reporting and it’s due to buying tickets for Epcot, and I guess that is slightly better? Still, it’s a bold strategy.

I’ve been everywhere, man. Where am I gonna go?

Kevin Lacker: Peter Thiel on his struggle to leave California:

Seattle: worst weather in the country

Las Vegas: “not that big a fan”

Houston: just an oil town

Dallas: has an inferiority complex

Austin: government town

Miami: the vibe is that you don’t work

Nashville:

Americans spent 1 hour, 39 minutes more per day at home in 2022 than they did in 2003. Or are we sure this isn’t good news?

Abstract: Results show that from 2003 to 2022, average time spent at home among American adults has risen by one hour and 39 minutes in a typical day. Time at home has risen for every subset of the population and for virtually all activities. Preliminary analysis indicates that time at home is associated with lower levels of happiness and less meaning, suggesting the need for enhanced empirical attention to this major shift in the setting of American life.

Vivek: There’s no proof of causation here, but it is interesting that participants reported sleeping half an hour more and commuting half an hour less. And then they reported working at home 40 minutes more and away about the same less, and a smaller identical ~1:1 shift for leisure activity towards home.

As someone who spends most of their time at home? Home is amazing. Up to a point.

I do think I spend too much time at home and don’t go to enough things. It is because home got more awesome, not because away got worse, but it still happened. It’s too damn easy to not go outside.

Tyler Cowen warns that larger teams and difficulty in attributing credit and productivity often means greater credentialism. Without other ways to tell who is good, companies fall back upon legible signals like degrees or GitHub profiles. He predicts credentialism will become more important, not less. I agree with his problem statement, and disagree with his assessment of the impact of AI on this, for which see the post AI #78 (when available).

As with many things, when the capitalists declined to open a grocery store in a ‘food desert’ there was probably a reason. In this case the reason was ‘there aren’t that many people around and they mostly prefer to shop at a Dollar Store or a relatively far away WalMart or other store anyway because it is cheaper.’

I do see the argument. A grocery store in an area provides substantial consumer surplus over and above existing options. It is not crazy to think that such a store could be socially good even if it is not profitable. The problem is that these are poor communities. We might think what the inhabitants want is fresh produce and better availability of otherwise healthy food.

The residents disagree. Their revealed preference is that what they need are lower prices, the ability to buy in bulk and feed families for less, and independent stores have higher supplier costs. Which is another way of saying that consumers mostly prefer the big businesses and their lower prices. Yes, they like having easily available fresh lettuce and a store that is closer, but how much are they willing to pay for that? Not much, as it turns out.

What would happen if we broke up the big supermarket chains, including WalMart? Or if we invalidated their deals with suppliers and forced such suppliers to price match for other customers? There is certainly active talk of going after Big Grocery. The problem is that Big Grocery primarily uses its market power not to raise prices on customers, but to lower the prices charged by suppliers. If you destroy that, you do not lower prices and make consumers better off. You raise prices and make consumers worse off.

This could also offer perspective on all the talk about supposedly predatory evil capitalist grocery chains, and how they are supposedly engaging in ‘price gouging’ while their profit margins are 1.5% and often their retail prices are better than some wholesale prices.

On the congestion pricing front, NYC comptroller Brad Lander has filed two new lawsuits to challenge Hochul’s shameful indefinite pause order. Attempts to replace the lost revenue remain stalled.

(Whereas Congressman Hakeem Jeffries betrays NYC, calls the pause in congestion pricing ‘reasonable.’ No.)

Track records of various people on Manifold. I no longer am mysteriously winning actual 100% of the time, but it is going well.

One big opportunity in the election prediction markets is the spread between electoral college and popular vote. Nate Silver thinks there is a 12% chance that Kamala Harris will win the popular vote but not the electoral college. Polymarket says this is 21%. It could of course happen, but 21% seems clearly too high.
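To make the disagreement concrete, here is a minimal sketch of the implied edge, using only the two probabilities from the text and ignoring fees, spread, capital lockup, and resolution risk (all real costs on an actual exchange):

```python
# If you trust the 12% model probability, a "No" share priced off
# the market's 21% implied probability looks underpriced.
# Illustrative only: real trading costs are not modeled.
p_model = 0.12              # Silver's estimate that the split happens
price_yes = 0.21            # Polymarket's implied probability
cost_no = 1 - price_yes     # a "No" share costs $0.79
ev_no = (1 - p_model) * 1.0 # pays $1 if the split doesn't happen
edge = ev_no - cost_no      # expected profit per share
print(f"Expected profit per ${cost_no:.2f} 'No' share: ${edge:.2f}")
```

About nine cents of expected value per 79-cent share, if (and only if) the model probability is right.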

Shameless plug, take two: My 501c3 Balsa Research is looking to fund two Jones Act studies, but only has the funds right now to do one of them. Help us do both instead. I think these are very worth doing, and if it works out we have a model we can scale.

My dear and deeply brilliant and talented friend Sarah Constantin is looking for work on ambitious science and tech projects on strategy, research, marketing and more. Here is her LinkedIn, an in-depth doc and her Calendly. You should hire her. But also if you cause her to move out of NYC I will not forgive you, you bastard.

YC is doing a fall batch; the deadline is August 27, so move fast. If you are considering doing this then you should do it.

If you think you’re applying ‘too early’ or without enough done yet:

Paul Graham: I was sent stats for the YC board meeting tomorrow. The second number is the fraction of companies with no revenue when YC funded them. High is good because it means we’re investing early. If this doesn’t convince you that you don’t have to wait to apply, I don’t know what will.

Adam Veroni: Can you apply with just an idea?

Paul Graham: Yes, many people do.

If I wasn’t already so deep into my writing and didn’t have a family, especially if I was younger, I would 100% be applying, and assume I was getting positive selection – if I was accepted it would be a big sign I should do it and a giant leg up doing it.

(I also would note that this is an example of how metrics, especially involving revenue, can get very weird with venture capital. If you can’t show impressive revenue, there are reasons to consider postponing revenue until it can look impressive, or until you don’t need funding for a while.)

IFP is hiring an Assistant Editor for Santi Ruiz, and paying $3k for a successful referral.

Who has food the locals are actually excited to constantly eat?

Epic Maps: Europe’s great divide.

Maia: Revealed preferences for which countries have good cuisine.

The locals, they know. The interesting zone is the Balkans (not counting Greece), you essentially never see their cuisine in America so it’s hard to know if they’re right to stay local. Iceland is presumably more about supply than demand. Otherwise, the border seems to clearly be in the right place.

Tyler Cowen offers thoughts on Ranked Choice Voting, saying it reduces negative campaigning and calling it a ‘voting system for the self-satisfied.’ Yes, it has a moderating influence, but it also opens the door to real change and third parties or independent runs. Tyler has made several similar arguments recently, essentially saying that if things are not going well by default, which he believes they are not, then it is good to shake things up, let essentially arbitrary major-party groups govern despite minority support, and see what happens. This is at most a highly second-best approach, especially given who I expect to most often be doing the shaking up. He doesn’t get too deep into the game theory here given the venue, so I will finish by noting that I do think that if you are going to do something complex, RCV is the way. It has theoretical game theory issues, but from what I can see the similar issues for other complex systems are far worse.
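For readers who have not seen the mechanics, here is a minimal instant-runoff sketch (the counting rule behind most RCV systems). This is an illustration, not a reference implementation; it ignores tie-breaking rules and real-world ballot validity edge cases.

```python
from collections import Counter

def instant_runoff(ballots):
    """Instant runoff: repeatedly drop the candidate with the fewest
    first-choice votes, redistributing those ballots, until one
    candidate holds a majority of the still-live ballots."""
    candidates = {c for ballot in ballots for c in ballot}
    while candidates:
        # Each ballot counts for its highest-ranked surviving candidate.
        tallies = Counter(
            next(c for c in ballot if c in candidates)
            for ballot in ballots
            if any(c in candidates for c in ballot)
        )
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > sum(tallies.values()):
            return leader
        # Eliminate the last-place candidate and loop again.
        candidates.discard(min(tallies, key=tallies.get))

# A plurality count would elect A (4 of 9 first choices), but once C
# is eliminated, C's voters break for B and B wins the runoff.
print(instant_runoff(
    [("A",)] * 4 + [("B", "C")] * 3 + [("C", "B")] * 2
))  # B
```

Note how a candidate who trails on first choices can win once eliminations redistribute ballots; that redistribution step is exactly where both the moderating effect and the game-theoretic complications enter.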

Blackberry invented push notifications exactly so you didn’t have to check your phone.

The goal is to hit the sweet spot. You want sufficient notifications that a lack of them means you can relax and ignore, without notifications that hijack your attention. On the instinctive margin you want fewer notifications.

Twitter to remove the like and comment counts from replies, and soon from the news feed as well.

I notice I am confused. This is a really stupid idea. The replies were 90% ‘don’t do this.’

Like counts have their downsides. I do like that ACX does not have likes. But on Twitter, that context is necessary.

And taking out the reply counts is madness. Taking reply counts out of the newsfeed? That would be complete and utter insanity. You don’t know if there are replies unless you click through? What the hell?

The question to me is not ‘is this a good idea,’ it is ‘is this the kind of thing that does enough damage to endanger Twitter.’ In its full version, I think it very much might.

Emmett Shear: As a (very small) investor in SubStack maybe I should be rooting for this change. It’s the first idea I’ve seen that’s so bad that it could actually destroy Twitter. Incredible stuff. Reminds me of when Digg self-destructed and thrust Reddit into the lead.

Making a tool much shittier does mean it’s harder to do bad things with it, I suppose that’s true. I’ll make you a deal: if this happens you can stay and use the plastic kids cutlery, and I’ll go somewhere they let me have a real fork.

I hope they think better of this, and also hope Tweetdeck does not follow this change.

Also it would be great if Twitter stopped all-but-blocking Substack links.

We keep seeing results like this: 41% of people in this survey would enter a Utopia-level Experience Machine, 17% would do it purely if it was ‘better than real life,’ and I am guessing this group is less inclined to do so than many others. This is the experience machine from the thought experiment ‘you would obviously never plug into the experience machine.’ Something is very wrong.

A bizarre claim that the Pixel Watch has a terrible UI, especially by not automatically showing notifications, and this was largely because Google didn’t force those building its products to switch away from iPhones and Apple Watches. Except that I asked Gemini and Claude and no, the Pixel Watch does notifications in the obviously correct way?

The culture issue is still there. You absolutely have to use your own products.

Emmett Shear: On the other hand, when I interned at Microsoft on Hotmail in 2004 everyone used Internet Explorer and Outlook. So when I tried to tell them about Gmail on Firefox and that they were in deep trouble, no one really reacted. They didn’t disagree but they didn’t really *get* it.

PRoales: Yes, this is why, when challenged in an all-hands meeting about being photographed using an iPhone, Eric Schmidt shot back that everyone in Google should switch back and forth between iPhone and Android once a quarter.

Switching back and forth is plausibly even better.

Periodically I see people reinvent the proposal of communication services (here text and email, often also phone and so on) where the sender pays money, usually with the option to waive the fee if the communication was legit and worthwhile.

Switches and physical buttons are better than touchscreens, the Navy finally realized in 2019. When will the rest of us catch up? Certainly there are times and places for touchscreens, but if a system includes a touch screen then on the margin there are never, ever enough buttons and switches.

An in-depth case study on the enshittification of Google results, and how major media products and brands are one by one being mined in ‘bust out’ operations that burn their earned credibility for brief revenue via SEO glory. And that’s (mostly) without AI generating the content, which will doubtless accelerate this.

Why is this a hard problem to solve?

I get the argument that ‘if 99% of SEO spam is detected you still lose to the 1%.’

The problem with that argument is that these are brands.

Suppose Google has to deal with 10 million pages, all from different sources, 9.9 million of which are SEO spam optimized to defeat whatever algorithms Google was found to be using yesterday or last month or last year. They can iterate more and faster than you can. You have to use some algorithm on all of it, you have lots of restrictions on how that works, you move at the speed of a megacorp. Sounds hard.

I think there are solutions to that, at least until everyone adjusts again, given that Google has Gemini and can fine tune (or even outright pretrain) versions of it for exactly this purpose.

There are also a bunch of other things one could try. Google has not even tried integrating direct user feedback despite this being the One Known Answer for sorting quality, and Google having every advantage in filtering that data for users that are providing good information. I realize this is a super hard problem and a continuous arms race. But I flat out think if you put me in charge of Google Search and gave me a free hand and their current budget I would solve this.

What I don’t understand at all is how the major brands get away, for extended periods, with their ‘busting out’ and selling out their quality, often dramatically.

If a large percentage of users know that (without loss of generality, going off the OP’s claim without verifying) Better Homes & Gardens is now SEO Optimized Homes & Gardens, and has increasingly been for years, don’t tell me it is hard for Google to notice.

The point of a major brand is that it has an ongoing linked reputation. It is not as if such moves are not naked eye obvious. If you have to, you can have a human annual review, at a random time, of all major websites above some traffic threshold, based on a random sample of recent Google Search directed activity. Then that modifier gets applied to all searches there for a year, up to and including essentially an Internet Death Penalty. Even if you went overboard on this, it likely costs only eight figures a year to maintain, nine at the most. A small price to pay in context.

Here is a new candidate for most not okay thing someone openly did in a study. So this is mostly offered for fun, but also because Oliver Traldi is importantly right here.

Oliver Traldi: However low your opinion of “studies”, it should probably be lower.

sucks: lmfao. the “dAtA jOuRnAliSt” who did this study didn’t believe the alcoholics either so he just doubled their numbers for no good reason. now people quoting it as if it’s fact. really amazing stuff. at least every other study besides this one is Real And Reliable!!

Forbes: The source for this figure is “Paying the Tab,” by Phillip J. Cook, which was published in 2007. If we look at the section where he arrives at this calculation, and go to the footnote, we find that he used data from 2001-2002 from NESARC, a survey by the National Institute on Alcohol Abuse and Alcoholism that had a representative sample of 43,093 adults over the age of 18.

But following this footnote, we find that Cook corrected these data for underreporting by multiplying the number of drinks each respondent claimed they had drunk by 1.97 in order to comport with the previous year’s sales data for alcohol in the US. Why? It turns out that alcohol sales in the US in 2000 were double what NESARC’s respondents (a nationally representative sample, remember) claimed to have drunk.

I mean… you can’t… just… do that. You know you can’t just do that, right?

One obvious reason is that the distribution looks like that because it is missing people who say they don’t drink and are lying. And in general there’s no reason to think drinks unreported scale linearly with drinks reported.

The other reason is that not all alcohol that gets sold gets consumed? You can’t simply assume that every time someone buys a drink or a bottle that it gets fully consumed. That very obviously is not what happens.
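To see what this kind of correction does mechanically, here is a toy version with made-up numbers; nothing below comes from NESARC or "Paying the Tab":

```python
# Sketch of a uniform sales-matching correction: scale every
# respondent's self-reported drinks so the sample total matches
# the sales-implied total. All figures are hypothetical.
reported = [0, 0, 0, 1, 2, 3, 5, 10, 20, 40]  # drinks/week per respondent
sales_equivalent = 162                         # total implied by sales data
factor = sales_equivalent / sum(reported)      # uniform multiplier (here 2.0)
corrected = [d * factor for d in reported]

# The problem: a uniform multiplier leaves the zeros at zero and
# doubles the heaviest drinker (40 -> 80), even though the gap may
# come from non-drinkers who lie, or from alcohol bought but never
# consumed. The shape of the distribution is assumed, not measured.
print(max(corrected))  # 80.0
```

The multiplier forces the total to match sales, but every assumption about where the missing alcohol actually goes is baked into the choice of a uniform factor.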

Government actually working, hopefully.

More Perfect Union:

BREAKING: Banks, credit card companies, and more will be required to let customers talk to a human by pressing a single button under a new Biden administration proposed rule.

The @CFPB rule is part of a campaign to crack down on customer service “doom loops.”

The @FCC is launching an inquiry into considering similar requirements for phone, broadband, and cable companies.

And @HHSGov and @USDOL are calling on health plan providers to make it easier to talk to a customer service agent, according to the White House.

Rachel Tobac: From a personal perspective: I love this.

From a hacking-over-the-phone perspective: I’m hoping these Banks, Credit Card companies etc update their ☎️ identity verification protocols or we’re going to see quicker hacking / account takeover when reaching a human is required quick.

Andrew Rettek: Does this apply to when I reach out to government services that have frozen my bank account? It took over a week to get a person on the phone who could do anything at all about the issue.

My cynical take is that this won’t apply to federal or state call centers that cause way more damage than any private company. I hope I’m wrong.

Imagine being so despairing that you think slowing down bank phone calls is necessary to introduce friction into identity theft. Still, yes, that is a real concern, especially if banks are actually stupid enough to continue to allow voice ID. Every time the bank apologizes for asking me security questions, I reply “no, this is good, I would be worried if you weren’t asking, thank you for checking.”

Is graft here in the good old USA different?

Ben Landau-Taylor: Every time I talk about graft in the U.S., someone says “Oh but graft here is different, they have to go through sinecures and patronage networks, no one just steals the money.” And no, that’s ridiculous cope, they can also just steal half a billion dollars. [links to a story about Medicaid fraud and provides text]

Certainly the PPP showed that we do fraud on a massive scale when given the opportunity, or at least allow it, same as everyone else.

Your periodic moment of appreciation for the First Amendment, and periodic reminder that this degree of free speech is a very specifically American thing.

British politician Miriam Cates: But the invention of social media has exponentially increased the speed at which protests can be triggered, organised and spread.

Yet online anonymous users can say whatever they like without repercussions. Freedom without responsibility is just anarchy.

We should not try to regulate what is said online. But what keeps society civilised offline is the accountability of being responsible for what you say. Online anonymity is destroying the values and virtues that underpin peaceful society – responsibility, dignity, empathy.

Richard Ngo: Absolutely disgusting behavior from British authorities, who are becoming more authoritarian on a daily basis.

I lived there for six years, and the decline since then has been deeply disappointing.

If Brits can’t retweet what’s going on then the rest of us will have to.

Joe Rogan: The fact that they’re comfortable with finding people who’ve said something that they disagree with and putting them in a f—king cage in England in 2024 is really wild.

Especially, they’re saying you can get arrested for retweeting something.

Or here’s a call for ‘militant democracy’ which means shutting down the opposition’s media entirely.

Or here’s the UK National Health Service data analytics blaming Twitter having private likes for the UK’s riots.

3,300 people in the UK were arrested in a single year for social media posts.

Or it seems even for posting in private?

Francois Valentin: In the UK you can get arrested and sentenced to prison for offensive jokes in a private whatsapp group.

I’m not an American free speech absolutist but such a vile overreach by the state could radicalise me.

As in, 20 weeks for offensive jokes in a WhatsApp chat group with friends. What?

The EU also joined the fun, having the nerve to threaten Americans who might dare talk to each other online.

Mason: The EU is threatening X with legal action “in relation to” a planned interview between Elon and Trump, as it may “generate detrimental effects on civic discourse.”

Thierry Breton: With great audience comes greater responsibility #DSA

As there is a risk of amplification of potentially harmful content in 🇪🇺 in connection with events with major audience around the world, I sent this letter to @elonmusk.

EUROPEAN COMMISSION

Thierry Breton

Member of the Commission

Brussels, 12 August 2024

Dear Mr Musk,

I am writing to you in the context of recent events in the United Kingdom and in relation to the planned broadcast on your platform X of a live conversation between a US presidential candidate and yourself, which will also be accessible to users in the EU.

I understand that you are currently doing a stress test of the platform. In this context, I am compelled to remind you of the due diligence obligations set out in the Digital Services Act (DSA), as outlined in my previous letter. As the individual entity ultimately controlling a platform with over 300 million users worldwide, of which one third in the EU, that has been designated as a Very Large Online Platform, you have the legal obligation to ensure X’s compliance with EU law and in particular the DSA in the EU.

This notably means ensuring, on one hand, that freedom of expression and of information, including media freedom and pluralism, are effectively protected and, on the other hand, that all proportionate and effective mitigation measures are put in place regarding the amplification of harmful content in connection with relevant events, including live streaming, which, if unaddressed, might increase the risk profile of X and generate detrimental effects on civic discourse and public security. This is important against the background of recent examples of public unrest brought about by the amplification of content that promotes hatred, disorder, incitement to violence, or certain instances of disinformation.

It also implies i) informing EU judicial and administrative authorities without undue delay on the measures taken to address their orders against content considered illegal, according to national and/ or EU law, ii) taking timely, diligent, non-arbitrary and objective action upon receipt of notices by users considering certain content illegal, iii) informing users concerning the measures taken upon receipt of the relevant notice, and iv) publicly reporting about content moderation measures.

In this respect, I note that the DSA obligations apply without exceptions or discrimination to the moderation of the whole user community and content of X (including yourself as a user with over 190 million followers) which is accessible to EU users and should be fulfilled in line with the risk-based approach of the DSA, which requires greater due diligence in case of a foreseeable increase of the risk profile.

As you know, formal proceedings are already ongoing against X under the DSA, notably in areas linked to the dissemination of illegal content and the effectiveness of the measures taken to combat disinformation.

As the relevant content is accessible to EU users and being amplified also in our jurisdiction, we cannot exclude potential spillovers in the EU. Therefore, we are monitoring the potential risks in the EU associated with the dissemination of content that may incite violence, hate and racism in conjunction with major political – or societal – events around the world, including debates and interviews in the context of elections.

Let me clarify that any negative effect of illegal content on X in the EU, which could be attributed to the ineffectiveness of the way in which X applies the relevant provisions of the DSA, may be relevant in the context of the ongoing proceedings and of the overall assessment of X’s compliance with EU law. This is in line with what has already been done in the recent past, for example in relation to the repercussions and amplification of terrorist content or content that incites violence, hate and racism in the EU, such as in the context of the recent riots in the United Kingdom.

I therefore urge you to promptly ensure the effectiveness of your systems and to report measures taken to my team. My services and I will be extremely vigilant to any evidence that points to breaches of the DSA and will not hesitate to make full use of our toolbox, including by adopting interim measures, should it be warranted to protect EU citizens from serious harm.

Yours sincerely,

Cc: Linda Yaccarino, CEO of X

Thierry Breton

Elon Musk: Bonjour!

Remember the absurdity that is Einstein, Descartes, Feynman and others saying ‘oh I am not especially talented or smart?’ Yeah. Not so much.

Ross Rheingans-Yoo: Once upon a time at [trading firm], I realized that most interns were terribly miscalibrated about their own skill level because they only really thought about the other interns who were at their skill level or better.

This rhymes with @RichardMCNgo’s observation that highly-intelligent people are often bad at understanding what it’s like to not be highly-intelligent — I would posit, because their attention tends to slide off the cases around them where people are not!

Today’s mental lightning bolt, courtesy of Richard, is that the same process can happen on other qualities. He notes empathy, but I’d add: – conscientiousness – appearance – enthusiasm for bird-watching – artistic skill – wealth – EA-ness – blog readership.

I definitely underestimated (and at other times overestimated!) my talents and advantages, but I was never under the illusion that I had ‘no special talent.’ But I didn’t before think I was that special about recognizing I had talent, and still can’t actually relate to Einstein thinking he didn’t have any (beyond curiosity).

Richard Ngo is saying, this applies to a lot of other things beyond intelligence.

Richard Ngo: Highly intelligent people understand most things very well, but are often terrible at understanding what it’s like to be dumb. Similarly, highly empathetic people understand most experiences very well, but are often terrible at understanding what it’s like to be selfish or evil.

Anecdotally, people who are brilliant in most other ways can be terrible teachers – picture academics giving talks that only a handful of people can follow.

That last part I thought was common knowledge, which perhaps reinforces the point. Brilliant people can be brilliant teachers, or they can go over your head, and I have been known to draw from both columns.

Some theories on why people do not take advice. It’s a good list. My main emphasis would be that mostly people absolutely do take advice, especially the standard advice. So we’re left giving the advice that people already aren’t listening to, or we focus on the parts they don’t listen to, rightly or wrongly. If I had to guess, I would say people take advice roughly as often as they should?

More speculation on why Rome never had an Industrial Revolution, this time from Maxwell Tabarrok.

Music as intentional barrier to communication, to facilitate communication?

TLevin: I’m confident enough in this take to write it as a PSA: playing music at medium-size-or-larger gatherings is a Chesterton’s Fence situation.

It serves the very important function of reducing average conversation size: the louder the music, the more groups naturally split into smaller groups, as people on the far end develop a (usually unconscious) common knowledge that it’s too much effort to keep participating in the big one and they can start a new conversation without being unduly disruptive.

If you’ve ever been at a party with no music where people gravitate towards a single (or handful of) group of 8+ people, you’ve experienced the failure mode that this solves: usually these conversations are then actually conversations of 2-3 people with 5-6 observers, which is usually unpleasant for the observers and does not facilitate close interactions that easily lead to getting to know people.

By making it hard to have bigger conversations, the music naturally produces smaller ones; you can modulate the volume to have the desired effect on a typical discussion size. Quiet music (e.g. at many dinner parties) makes it hard to have conversations bigger than ~4-5, which is already a big improvement. Medium-volume music (think many bars) facilitates easy conversations of 2-3. The extreme end of this is dance clubs, where very loud music (not coincidentally!) makes it impossible to maintain conversations bigger than 2.

I suspect that high-decoupler hosts are just not in the habit of thinking “it’s a party, therefore I should put music on,” or even actively think “music makes it harder to talk and hear each other, and after all isn’t that the point of a party?” But it’s a very well-established cultural practice to play music at large gatherings, so, per Chesterton’s Fence, you need to understand what function it plays. The function it plays is to stop the party-destroying phenomenon of big group conversations.

My experience is usually that a conversation with 2-3 people and 5-6 observers is fine, even 20 observers can be fine (that’s a panel!), but only if those 5-6 observers know they are observers. When there are 5+ people trying to actively participate, that is usually a disaster.

There are of course other conversations where you do not want observers, and you benefit from intimacy or privacy. And yes there can be that situation where it would be higher value to split the conversation, but people do not feel social permission or see a good way to do so.

So I can see an argument that some amount of this can be useful. But also, no.

In general, we should be wary of this sort of ‘make things worse in order to make things better.’ You are making all conversations of all sizes worse in order to override people’s decisions.

You should be very suspicious of this, especially given that you have to do actual damage in order to have much impact.

I can see ‘light dinner music’ levels in some settings, especially actual dinner parties, where you really want the groups to stay small. Also the music itself can be nice.

I would still confidently say that by default, the music ends up far too loud for everyone, and a nightmare for people like me that don’t have the best hearing.

For example, I’d offer this slight modification: Dance clubs make it impossible to maintain conversations bigger than 1. The sound is by default, to me, physically painful at all times, potentially injuriously so. You have to yell to the person right next to you to do even the most basic things. Yes, the argument is that you let your body do the talking. Perhaps getting rid of people like me is part of the point. But yikes.

Does typical bar music ‘facilitate easy conversations of 2-3 people?’ Perhaps, but mostly I see it make even those conversations harder. It’s impossible to make an N-person conversation actively hard without making an (N-2)-person conversation worse.

It’s so easy to go so loud it’s hard to talk. One of my otherwise favorite restaurants, Tortaria, plays music loud enough that I don’t take people there for conversations.

Eliezer Yudkowsky asks a question I often wonder about: Why do people so often choose to learn via video rather than over text?

Eliezer Yudkowsky: I don’t understand people who learn better from video than text. Why would your own thoughts about absorbing material always run at the same rate, and that rate is the lecturer’s voice?

Do they never stop and think? Do they never need to?

Huh, maybe this is a skill issue and I need to learn the UI? (Quotes Great Big Dot saying “I find it a lot more annoying if it’s not YouTube, because on YouTube I have keyboard shortcuts for pausing, rewinding, fastforwarding, speeding up, and slowing down.”)

I should clarify for the benefit of yung’uns: My words are meant literally enough that when I say “I don’t understand” I actually mean that I am epistemically confused and curious not that I morally disapprove of the act of preferring video.

I really had not expected, before today, that video-likers would consider frequent ongoing speed-manipulation to be part of their standard process! Today I learned!

To me there are two big advantages to voice or video over text.

  1. You can listen to voice in situations where reading won’t work well. The central examples are when you are walking down the street, riding in a vehicle, or working out. Or you want to do it as more of a relaxation thing.

  2. Audio and especially video is higher bandwidth than the transcript. You get to see people interact and move, you get to hear the details of their voices. If all you do is read the words, you are potentially missing a lot. Sometimes that matters. Or it is important to have good fluid visual aids.

I vastly prefer reading in most cases. I especially hate that videos are impossible to search and scan properly, or to know if you have the right one. Super frustrating. When people send me videos, I have a very high bar to watching, whereas it’s easy to check out text and quickly tell if it has value.

But also I recognize that my hearing and audio processing is if anything below average, whereas my ability to process written words is very good (although vastly slower than others like Tyler Cowen).

Scott Aaronson’s daily reading list is to reading what I am to writing. I am honored that he spends 12 hours a week on my blog; one does not have many of those bullets. He also reads WaPo and NYT, ACX, Not Even Wrong (although this one rarely updates anymore), Quanta, Quillette, The Free Press, Mosaic, Tablet, Commentary, several Twitter accounts (Graham, Yudkowsky, Deutsch), many Facebook updates and comments that he says in total often take hours a day, and ~50 arXiv abstracts per day, plus books.

He has noticed that this is approaching eight hours a day, seven days a week. And that this means often the day ends and Scott hasn’t created anything, and often without him even feeling ‘more informed.’

So the obvious first thing to say is: He’s going to have to make some cuts.

Let’s start with the newspapers.

I subscribe to Bloomberg, the Washington Post and the Wall Street Journal, so I can access links as needed, and likely I ‘should’ bite the bullet and add a few more to that list even though it feels very bad to subscribe to things you mostly don’t read or even check (e.g. NYT, The Atlantic, FT…)

How many newspapers do I ‘read’ on a daily basis? Zero. I will occasionally scan one, or check for news on a particular event or on AI generally. What I do not find useful is the thing my family used to do in the mornings, which is to ‘read the newspaper.’

Twitter allows me to do this, while having confidence that if something is important it will still come to my attention. I do not think Facebook can substitute for Twitter here, so if concerned with current events one would otherwise still need to scan and partially read one newspaper.

I do think you can very safely cut this down to one newspaper. If you want two, it’s to have both a blue paper and a red paper. You don’t need both WaPo and NYT.

So I would absolutely lose one or the other, and also be more selective on articles.

If you are literally Tyler Cowen and can read at 10x speed, sure, read five papers. The rest of us mortals, not so much.

Next up are what one might call the magazines. This seems like a reasonably sized list of choices here, in terms of places to look for good material. But surely one would not be so foolish as to read most of their offerings? I have The Free Press on my RSS feed, but well over half the time I see a post headline, maybe read one paragraph or do a few seconds of skimming, and move along; most of what they offer is not relevant to my interests. That will be less true for Scott’s interests, but still a lot of it is doubtless irrelevant or duplicative.

As an experiment, I’m going to go to Quanta, a name I didn’t recognize. Okay, it’s a science magazine. A decent chunk of these posts sound potentially interesting to either of us, but how many of them seem vital enough if one is overloaded? I say none, unless I recognized a good author or otherwise got a recommendation.

I decided to keep going with Quillette, which I remember can host good posts sometimes, but again when I checked I didn’t see anything important or compelling. It is odd what they choose to focus on. I went back as far as June 2, when they had a post on AI existential risk that if I’d seen it at the time I would have been compelled to read and cover, but I can already tell it’s bad. I tried the least uninteresting other teaser (about the X trilogy, since I’ve seen two of them) and it was a snoozefest. So I would definitely use the ‘you need a reason’ rule here.

As I do on every magazine-style website. If it’s worthwhile, you’ll find out. At most, check once a month and see what catches your eye, with a short hook.

Then there’s Facebook. One of the decisions I am most happy with is that I am not on Facebook – although many others could say the same about Twitter. Given I’m writing this, I checked it again, and wow the feed was stupider than I thought. If this is taking hours of reading, that’s got to be a big mistake. If it’s a place to chat with friends, sure, I could see that working and being worthwhile. This sure sounds like something else, given it is taking hours. At minimum, I’d start very very aggressively unfollowing all but a core of actual good friends and a few high hit-rate other accounts.

People often ask how I am so productive. One of the keys is that I am ruthless about filtering information and choosing what to consume in what amount of detail. And I’m still nowhere near ruthless enough.

It is indeed frustrating when people deny one’s own lived experiences.

Brittany Wilson: One disorienting thing about getting older that nobody tells you about is how weird it feels to get a really passionate, extremely wrong lecture from a much younger person about verifiable historical events you personally remember pretty well.

Memetic Sisyphus: I worked retail when Obama care became law and before I could work OT as much as I wanted but when it passed it meant my hours got restricted to 34 a week so they didn’t have to give me full benefits. So I didn’t get healthcare and my paychecks were smaller.

Aelita (QTing MS): No, you stopped getting overtime because the economy was in a recession and unemployment spiked to 11 percent, your employer just lied to you.

MS: Yeah this is exactly what the OP was talking about.

The replies are mostly full of other people telling stories about what happened to their jobs, or ability to find jobs, or to their insurance. Almost none of it is good.

My own experience is that Obamacare made it extremely expensive to not have a legible full-time job with a large employer. The marketplace is outrageously expensive, and what you get in exchange is not good insurance. Luckily I didn’t have to deal with employers trying to dodge insurance mandates so I can’t speak to that, but it seems like what people responding to incentives would do.

Do not assume people understand why they do what they do, such as Praying for Rain.

It turns out you pray for rain in order to convince people you caused it to rain.

We study the climate as a determinant of religious belief. People believe in the divine when religious authorities (the “church”) can credibly intervene in nature on their behalf. We present a model in which nature sets the pattern of rainfall over time and the church chooses when optimally to pray in order to persuade people that it has caused the rain. We present evidence from prayers for rain in Murcia, Spain that the church follows such an optimal policy and that its prayers therefore predict rainfall.

In our model, praying for rain can only persuade people to believe if the hazard of rainfall during a dry spell is increasing over time, so that the probability of rainfall is highest when people most want rain.

We test this prediction in an original data set of whether ethnic groups around the world traditionally prayed for rain. We find that prayer for rain is more likely among ethnic groups dependent on intensive agriculture for subsistence and that ethnic groups facing an increasing rainfall hazard are 53% more likely to pray for rain, consistent with our model. We interpret these findings as evidence for the instrumentality of religious belief.

None of this implies that anyone involved understands why the prayers correlate with rain. Instead, everyone involved is making the mistake of confusing correlation with causation. The main thesis suggested is ‘the instrumentality of religious belief,’ which seems like one of those ‘why did we need a study for this’ conclusions when construed this broadly. Yes, people choose to believe and be more religious when they think there is something in it for them; the evidence for this is overwhelming. Also overwhelming is the evidence that when people around you are religious, that makes you and future generations more similarly religious.

Still, it’s pretty cool to notice the pattern that in many places prayers for rain happen most when rain is most likely. What else follows this pattern? Many medical remedies are similar, happening when people would naturally get better. Calling timeout or anything else will ‘break up’ a scoring run, since such runs are mostly random. More generally, if there is any kind of mean reversion effect, anything that responds to poor outcomes will correlate with improvement in results.
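The mechanism is easy to see in a toy simulation (my own illustrative sketch, not the paper’s model; the hazard numbers and prayer threshold are made up): if rainfall hazard rises with the length of the dry spell, a ‘church’ that simply waits until the spell is long before praying will see its prayers predict rain, with zero causal effect.

```python
import random

rng = random.Random(42)

def hazard(dry_days):
    # Probability of rain today, rising with the length of the current dry spell.
    return min(0.05 + 0.03 * dry_days, 0.9)

PRAY_THRESHOLD = 10  # church prays only once the dry spell is this long (made up)

days = 200_000
rain_days = 0
prayer_days = 0
rain_on_prayer_days = 0
dry = 0
for _ in range(days):
    prayed = dry >= PRAY_THRESHOLD  # prayer has no effect on the weather
    rains = rng.random() < hazard(dry)
    if prayed:
        prayer_days += 1
        rain_on_prayer_days += rains
    rain_days += rains
    dry = 0 if rains else dry + 1

base_rate = rain_days / days
given_prayer = rain_on_prayer_days / prayer_days
print(f"P(rain) = {base_rate:.3f}, P(rain | church prayed) = {given_prayer:.3f}")
```

The prayers ‘work’ (rain is roughly twice as likely on prayer days) purely because the church is timing them, which is exactly the paper’s point about optimal prayer policy.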

A fun reminder that the wisdom of crowds technique works best when people do not compare notes. Otherwise people (correctly) mostly discount their private information in the wake of all the public information, which prevents proper accounting for the private info. Robin Hanson suggests the implication would be to ban people who do research from participating in markets, while observing this move would be obviously dumb. I would note the distinction between markets, where you express opinion largely directionally, versus wisdom of crowds, where you care a lot about magnitude. For markets, giving people more information is fine; you don’t mind if people move towards the market price.
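The note-comparing failure can be made concrete with a toy simulation (my own sketch; the 0.8 anchoring weight and noise level are made-up parameters): a crowd that averages fully independent private signals beats a crowd where each person anchors on the running public average, because early noise gets baked into everyone’s reports.

```python
import random
import statistics

def trial(rng, n=50, true_value=100.0, noise_sd=20.0, anchor=0.8):
    """One run: n people each get a noisy private signal of true_value."""
    signals = [true_value + rng.gauss(0, noise_sd) for _ in range(n)]
    # Independent crowd: everyone reports their own private signal.
    independent = statistics.mean(signals)
    # Note-comparing crowd: each person anchors on the running public mean.
    reports = []
    for s in signals:
        if not reports:
            reports.append(s)
        else:
            public = statistics.mean(reports)
            reports.append((1 - anchor) * s + anchor * public)
    herded = statistics.mean(reports)
    return independent, herded

rng = random.Random(0)
sq_err_ind, sq_err_herd = [], []
for _ in range(500):
    ind, herd = trial(rng)
    sq_err_ind.append((ind - 100.0) ** 2)
    sq_err_herd.append((herd - 100.0) ** 2)
rmse_ind = statistics.mean(sq_err_ind) ** 0.5
rmse_herd = statistics.mean(sq_err_herd) ** 0.5
print(f"independent crowd RMSE: {rmse_ind:.2f}")
print(f"note-comparing crowd RMSE: {rmse_herd:.2f}")
```

The note-comparing crowd ends up dominated by whatever the first few people happened to say, which is the sense in which comparing notes destroys the magnitude information that wisdom-of-crowds aggregation needs.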

Lyman Stone is back to remind us that the cell phone-based data on church attendance makes no sense and is obvious measurement error.

I loved this especially, because… I mean…

Lyman Stone: and I commend the author for following up the 2023 version with a n~5k sample asking people religion + cell phone behaviors.

he found almost a third of Jews don’t take their phones to church…

… and that’s almost a third of Jews who take online surveys!

As a Jew you are very much not supposed to take your phone to church.

I mean, if you did for some reason go to a church then go ahead, presumably you are visiting a friend or viewing the architecture.

But if you are attending weekly services, which would be at a synagogue, then it would be Shabbat. You are not supposed to operate electronic equipment on Shabbat, or according to many even turn on a light. It is very hard to even carry a cell phone without accidentally doing that. For the Orthodox, it is clearly forbidden, as it is the carrying of a non-essential item. So, yeah.

Even if it were not required anyway, it would seem obvious to me that one should do one’s best not to take one’s phone into religious services, for overdetermined reasons.

There is a bunch of other cool stuff in the thread.

Devin Pope then responded to Lyman here, including this chart, which suggests that this method works more generally. Devin admits the task is super hard and notes everyone mentions the Orthodox Jew measuring problem, but suggests this is the best we can do.

So, have you talked to a user?

I laugh, but I have created multiple companies and in no case did I do remotely enough user talking.

Devon: “Allegations of market failures often reflect ‘imagination failures’ by analysts rather than a genuine incentive problem”

“Lighthouses were long used by economists as a textbook example of the free-rider problem—until Coase discovered that many lighthouses were supported by fees charged by nearby ports”

Michael Nielsen: That’s not so much an imagination failure as a basic-lack-of-contact-with-reality failure…

Patrick McKenzie: “Have you actually talked to a user?” is a question which I wish tech could export to e.g. economists researching impact of financial innovation on particular populations of interest.

Dave Guarino: I get many policy people coming to me per month and to all of them I say “oh you should help one person with the process and see what you learn.”

The take up rate is about 10%.

(Epistemic blinders abetted by social norms are blinding!)

Devon: A recent highlight was when a guy who’d never spent time in a high-inflation country sent me an email about this post saying “that’s not right, theory predicts X so Y can’t be true even though you’re seeing right in front of your eyes” 🤣

Dave Guarino: Now that’s some “it’s simple – assume a can opener” energy right there.

Dave Kasten: Corollary: you can rapidly become the person in your office with the argument-winning anecdotes on a subset of issues with <1 week of labor.

(I now wonder if this is the actual causal arrow for why CEOs care about anecdotes so much — it was an early career cheat code for them?)

Mr. Smith: This is one of the secrets of McKinsey; I show up and do that week of work and then I’m the most credible guy until I leave

Anecdotes are a sign that you know the particulars of time and place and have some idea what you’re talking about.

Most people don’t. It sets you apart.

Patrick McKenzie: An internship project worth doing at any age: go out into the world, learn one relevant thing, write it down, then bring it back to us (who are equally capable of going out into the world and writing things down *but will not do this*).

I have literally suggested this to interns over the years, but it was also my default marching order for my executive assistant: if you don’t know what to do, to learn one interesting thing and write it down.

The ceiling for this being useful is crazily high.

And while one could perform years of academic effort to do a study with controls etc. etc., given how low the fruit hangs you can probably have an artifact worth reading for the price of a single coffee conversation or five user interviews or similar.

There are very many companies at which “conduct five user interviews” is a Deliverable and there is a Process requiring Multi-Stakeholder Coordination and *bah humbug*, you have email, you have Zoom, this can be done any afternoon you decide to do it.

So help me if I have one more conversation with someone whose objection is “But how would I find a user of [a product which has as many users as Macbooks].”

“Have you considered walking into a Starbucks and briefly visually inspecting surroundings?”

“What no that’s crazy.”

Patrick McKenzie’s podcast with Dwarkesh Patel about VaccinateCA and how that group had to be the ones to tell people where to get vaccinated was… suppressed on YouTube out of ‘misinformation’ concerns, with a banner telling the user to go to the CDC for more information. The good news is that by the time I went to YouTube to verify, there at least was no banner, but I can’t tell if it is still suppressed.

“What are my options,” asks the Dangerous Professional. Full thread is recommended.

Patrick McKenzie: Now returning to why I have learned to ask about options here: if you have someone who is either in a rush or very low sophistication, and you *guess* at a resolution path, you might have them engage that resolution path even if that is a much worse option.

Patrick McKenzie explains CrowdStrike.

Interview with art dealer Larry Gagosian turned into maxims. Great format, would be cool to build a GPT for this, would be a good example except we don’t have the source interview handy.

Thread on ‘busting out.’ Maxing out use of your credit before you default (in any sense of both words) on it is a great trick, except you can only do it once. The good news is we have gotten a lot better at noticing this happening in real time. I had experience with a variation of it myself, the transformation of recreational gamblers into ‘beards’ that place bets for professionals, including the parallel action in actual financial OTC markets.

Patrick discusses the question of who his audience is.

One way to think about Starlink and Elon Musk.

On joining the ‘winning team.’ I consider pressure to join the winning team to be, in various forms and on various levels, one of the most pernicious forces out there. Indeed, Patrick identifies one of them, that the ‘winning team’ cares about things other than winning, and will punish you for caring about other things. But also often the winning team very much does not care about other things. Often it cares exactly about being the winning team, and supporting those who support the winning team, and will punish any signs of caring about anything else at all.

This is very different from the question of ‘do you want to be right, or do you want to win?’ Which has different answers at different times. People forget that the best way to win, either locally or generally, and especially in the ways that matter most, is often to care a lot (but not entirely!) about being right.

Patrick McKenzie on deposit pricing, as in banks not paying a fair price for deposits and in exchange providing lots of other costly stuff for free because you can’t charge directly for that other stuff. And especially this:

Patrick McKenzie: Speaking of which: a professional skill of bankers of the well-off is knowing who you should give the “We’ll knock a percentage point off your new mortgage if you have $1 million in deposits!” pitch to, who you should give the pitch to while winking, and who you never pitch.

Then there’s Wells Fargo. Where the banker will give that pitch (for 50bps not a full 1%), allow you to include other assets like stocks, and then when you flat out ask ‘are you expecting me to keep those assets with you after we close?’ will tell you he does not in any way expect you to keep those assets there after the close.

Spencer Greenberg tests whether astrology works using a cool methodology. He showed a large group of astrologers detailed biographical information about twelve people, and asked them to pick each person’s true full astrological chart from five choices. The astrologers predicted they could do it; afterwards, they predicted they had done it. As you would expect, they hadn’t done it, with a success rate under 21% versus a pure chance rate of 20%, and none of them getting more than five charts correct.

Indeed, they failed even to agree on the same wrong answers. Even the most experienced astrologers only agreed with each other 28% of the time.
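For calibration on those numbers (my own back-of-the-envelope arithmetic, not from the study): with twelve five-choice matches per astrologer, pure guessing gives any individual astrologer only about a 2% chance of getting more than five correct, so ‘none got more than five’ is consistent with everyone guessing.

```python
from math import comb

def binom_tail(n, p, k_min):
    """P(X >= k_min) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(k_min, n + 1))

# Twelve people, five chart choices each: pure guessing is p = 0.2 per match.
p_six_or_more = binom_tail(12, 0.2, 6)
print(f"P(6+ of 12 correct by pure guessing) = {p_six_or_more:.3f}")  # ~0.019
```

With enough astrologers you would even expect a few to clear five by luck alone, so the result is, if anything, slightly worse than chance.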

Shea Levy said this was still a ‘win for astrology’ because it indulges and legitimizes Obvious Nonsense despite showing that it is indeed nonsense. Spencer points out that 20% of Americans say they believe in astrology, and also I don’t see this as ‘legitimizing’ anything.

Even I have encountered enough believers that having more convincing responses is highly useful.

Sarah Constantin: Disagree.

We live in an Eternal September world. There are people who don’t know astrology doesn’t work.

Every now and then somebody has to explicitly argue against an “obviously” dumb idea, or debunk an “obvious” superstition. It renews the credibility of science/inquiry.

There’s an argument for not bringing more attention to bad ideas because you’re “giving them a platform”…but astrology is already hugely popular.

Spencer has a gift for doing lots and lots of social-science stuff that I’d find too dull to do myself, including this study. But there’s nothing intellectually wrong with it! I’m glad somebody’s doing the debunking thing with high standards.

Indeed, I would find doing this study extremely boring. Kudos to Spencer for doing it.

A group of MR links led to a group of links that led to this list of Obvious Travel Advice. It seems like very good Obvious Travel Advice, and I endorse almost all points.

My biggest disagreement is actually jet lag. It can absolutely be beaten (by most people, anyway), if you want to make that a priority and are willing to devote a day to doing that. I did a lot of things right when I won Pro Tour Tokyo, but one of them was flying in a day early in order to spend it on fixing jet lag – I basically rented a hotel room, listened to music, relaxed and did nothing else except go to sleep at the right time. If you have to ‘be on’ badly enough you should totally do that.

With the warning that jetlag when you return tends to be worse, as you’ve tapped out certain resources, and I still don’t know how to properly handle that when going to places like Japan, so ‘do something important right after coming back’ is mostly a bad idea if you couldn’t have done it on the destination’s schedule. Notice that often you very much can.

The list also highlights three things.

  1. A lot of the value of travel is essentially this old Chelm story, you experience things that are worse to make you appreciate how good you have it. Yet I agree with the author here that this does not last long enough to justify such trips repeatedly. Get vaccinated once but ‘booster shots’ are not worth the side effects.

  2. Travel is all about mindset, actual value, and who you are with. A lot of travel is about ‘performing a vacation’ or a trip, and some people enjoy the anticipation and preparation work. Whereas for me, I’ve learned that basically the only good reason to travel far is to see particular people – it’s who you are with, and that’s something I can have a good attitude about. But otherwise, why not have the Vacation Nature at home or close to home? This is especially true in a place like New York City, where there’s so much available close to home that you’ve ignored.

  3. Most vacation or ‘for fun’ travel is not, as it is actually done, worthwhile, unless it is a proper Quest. Tyler Cowen seems to know how to get a lot out of travel but you are not going to do what he would do even if you follow the Obvious Advice.

You know, in the Atlantic Coast Conference.

Erik Brynjolfsson: Athletes from four California universities won 89 Olympic medals. (The United States won 126 total).

Athletes from Stanford University alone won more medals than all but seven countries in the world.

Olympic success is a choice. You have to want it.

Caitlin Clark started off slow in the WNBA due to the learning curve, but she adapted, and now her numbers are rather insane. She did not make the Olympic team and share in its probable gold medal because the roster was selected a while ago, when one could not be confident things would go this way. That is bad for the sport, but that’s how these things go, and it’s good not to warp selections for marketing even if in this case it would have worked out.

Are you ready for some football?

The top of this list is very good. Some rather awesome matchups.

However, it falls off quickly. On average there is only about one exciting non-conference game per week. Also some strange rankings here.

And as a Wisconsin fan, I must ask: We wanted Bama? Why would we want Bama?

Aside from ‘playing great games is really cool,’ which it is. With the end of the 4-team playoff era, hopefully we can see more great games. If you have any chance to actually be national champion, a game like this is highly unlikely to actually keep you out under the new system.

The obvious question is, can they reliably tell who is cheating, or not? If they can, then the 1% that cheats will get caught by automated checks, and we should not have a big issue. If they cannot tell, how do they know how many people are cheating? It is easy to catch someone who suddenly plays like Stockfish.

It seems next to impossible to catch a cheater who does something sufficiently subtle, especially if the cheat is ‘in the negative’ and all it is doing is avoiding some portion of your mistakes, and you do not make the mistake of using it only with high leverage.

As usual, I presume what is actually protecting us is that cheaters never stop. It takes a lot to be good enough at chess to play at an elite level even if you use subtle cheats. Once you start using subtle cheats, it is not long before you get greedier with them.
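The ‘suddenly plays like Stockfish’ case is easy to model as a statistical test: treat each move as a trial of matching the engine’s top choice and flag match rates far above an honest baseline. This is only a sketch; the 55% baseline and the thresholds are illustrative assumptions, not any site’s actual detection method:

```python
from math import sqrt

def engine_match_z(matches, moves, baseline=0.55):
    """Z-score of an observed engine-match rate against an assumed
    honest baseline rate, using the normal approximation."""
    rate = matches / moves
    se = sqrt(baseline * (1 - baseline) / moves)  # standard error under the null
    return (rate - baseline) / se

# Blatant cheating stands out: 95% engine match over 200 moves.
z_blatant = engine_match_z(190, 200)
# Subtle "avoid some blunders" play does not: 60% over 200 moves.
z_subtle = engine_match_z(120, 200)
```

The blatant case lands more than ten standard errors above baseline, while the subtle case is statistically indistinguishable from a strong day, which is exactly the asymmetry described above.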

All growth in MMO gaming revenue after 2004 comes from increasing spending by whales. A large portion of the gaming world is completely dominated by revenue from whales, whom QCU describes here as ‘the bored children of tycoons in the developing world.’ The rest of the players either play for free or spend amounts too small to matter; the point of all the masses being there is to provide the social context for the whales to enjoy spending their money, plus the opportunity to try to convert a tiny portion of them into whales. That’s it. The extended thread goes into various dynamics involved.

The simple rule in response to this is, of course: If the game allows any form of pay to win or other whale play, then it is not for you. It will make your life miserable in order to motivate whale purchases, use timed actions and delayed variable rewards, it is a Skinner box, get out. Spend your gaming time in places where there is a hard upper limit on what can meaningfully be spent (cosmetics excluded) sufficient for the game to be optimized for the average player and not for the whale. Ideally stick to games where there is a fixed one-time or subscription fee and nothing else.

Collectable card games are a weird case where the good ones (like Magic: the Gathering) are good enough that they can survive quite a bit of heavy spending and justify their costs. But notice the difference between paper Magic, where you can reasonably spend your way out and recoup through trade, and Magic Arena, where the price for getting out of the grinding entirely is prohibitive. You might opt into Arena anyway, Magic is that good, but the need to minimize costs will warp your actions a lot.

Extend this to other non-game activities, as well. The club where people spend money on tables and drinks and women as eye candy to show they spend money? Don’t go there unless your business networking demands it.

From 2023: Reid Duke tells you everything you need to know about Vintage Cube.

There were more discussions this month about collusion and related issues in Magic. One note by Sam Black is that the ability of players to cooperate on prize splits, on draws and to otherwise help each other was indeed very helpful in forming a positive community. It was one more incentive for everyone to stay on good terms, and when you had a chance to help someone out it reliably won you a friend. And I definitely don’t think we need draconian penalties for people who say the incantations wrong, especially regarding prize splits.

I understand the argument that scooping or even splits can be damaging to tournament integrity. I even hear the arguments against intentional draws. But I disagree and find such arguments mostly misplaced. I especially hear Gerry Thompson’s point that it would be better if we didn’t have vastly asymmetric rewards for winning particular matches. And that the solution is to fix the incentive design.

Proposed solutions within a tournament include expanding use of the rule of ‘first players to X wins automatically make top 8’ which seems great. You could go further, if you wanted to get a bit messy, in engineering the last 1-2 rounds into an explicit bracket, where opponents had identical incentives the way they do in the top 8.

This month’s game activity included continued play of Hades, where I’m rapidly approaching diminishing returns but for now it’s still fun, and Shin Megami Tensei V Vengeance, where I’ve been postponing going for the win to try and figure out how to get to the hidden ending but one of the quests isn’t appearing right and it requires a bunch of grinding. I have enough stashed items that if I wanted to give up on being level 99 and just win on one of the other paths, I could probably do that rather quickly.

I do notice I’m disappointed in the choices I’m offered at the end, given the story, and that they don’t seem to contrast as interestingly as past games in the series.

I tried out Vault of the Void. It has some cool different mechanics than most Slay the Spire variants – you can hold onto cards but only draw up to 5, you carry energy over with a hard cap, you can discard cards for more energy, you build a deck of 20 out of your collection each battle rather than looking for card removes. The game doesn’t support a third full act, so it doesn’t have one, bravo on that.

Alas, it has severe problems. The balance is off. Each character (so far anyway) seems like it has a powerful thing you’re supposed to do that scales, but it’s always fiddly and feels like piling incremental advantages on top of each other.

Most of all, a huge portion of the challenge is in the last fight against the Void, and a lot of this is that it slowly adds a bunch of curses to your deck and otherwise scales. So in a genre where your top priority is always card draw and card selection, they’re screaming at you to do more of that.

My last run I found a card that lets you remove a curse from your deck in-battle, and it’s in a class about deck manipulation and making things cost zero, so I basically recursed that card over and over and I got bored enough I accidentally took one damage (out of 95) and I’m sad about that.

I do like the idea of ‘souls are a currency, and also they reduce the HP of the final boss which is an attrition war so try not to spend them’ but the execution needs work. Another issue is that the other boss battles simply are not scary enough, also your route planning too often forces your hand on a simple ‘which path lets me go to more stuff’ theory.

Also I’m officially sick of all these unlocks and making us play tons of runs to see what games offer.

Once Upon a Galaxy, still in early development, is potentially the lightweight successor to Storybook Brawl. I’ve given it a try, and it can be fun. I do miss the complexity of Storybook Brawl, but others might appreciate something lighter. Storybook Brawl had really quite a lot going on. And while I miss (for now) playing against other people directly, being able to proceed at your own pace and never wait or feel time pressure is nice.

His day will come.

This takes the cake.

Deferrence, presumably?

So much this.

That kid has a bright future.

Paul Graham: At a startup event, someone asked 12 yo if he was working on a startup. He convinced her that he had started a company to make hats out of skunks, a restaurant where everything (even the drinks) was made of bass, and a pest control company that used catapults.

Mandrel: Such a bad idea to incentivize kids to do startups instead of enjoying life and learning as much as possible at school, something PG advised Stanford students to do 10 years ago.

Paul Graham: He’s not actually starting any of those companies.

Monthly Roundup #21: August 2024


Judge calls foul on Venu, blocks launch of ESPN-Warner-Fox streaming service

Out of bounds —

Upcoming launch of $42.99 sports package likely to “substantially lessen competition.”

Texas losing to Alabama in the 2010 BCS championship

Gina Ferazzi via Getty

A US judge has temporarily blocked the launch of a sports streaming service formed by Disney’s ESPN, Warner Bros and Fox, finding that it was likely to “substantially lessen competition” in the market.

The service, dubbed Venu, was expected to launch later this year. But FuboTV, a sports-focused streaming platform, filed an antitrust suit in February to block it, arguing its business would “suffer irreparable harm” as a result.

On Friday, US District Judge Margaret Garnett in New York granted an injunction to halt the launch of the service while Fubo’s lawsuit against the entertainment giants works its way through the court.

The opinion was sealed but the judge noted in an entry on the court docket that Fubo was “likely to succeed on its claims” that by entering the agreement, the companies “will substantially lessen competition and restrain trade in the relevant market” in violation of antitrust law.

In a statement, ESPN, Fox and Warner Bros Discovery said they planned to appeal against the decision.

Venu was aimed at US consumers who had either ditched their traditional pay TV packages for streaming or never signed up for a cable subscription. “Cord cutting” has been eroding the traditional TV business for years, but live sports has remained a primary draw for customers who have held on to their cable subscriptions.

FuboTV was launched in 2015 as a sports-focused streamer. It offers more than 350 channels—including those carrying major sporting events such as Premier League football matches, baseball, the National Football League and the US National Basketball Association—for monthly subscription prices starting at $79.99. Its offerings included networks owned by Disney and Fox.

ESPN, Fox and Warner Bros said Venu was “pro-competitive,” aimed at reaching “viewers who currently are not served by existing subscription options.”

Venu was expected to charge $42.99 a month when it launched later this month. It “will feature just 15 channels, all featuring popular live sports—the kind of skinny sports bundle that Fubo has tried to offer for nearly a decade, only to encounter tooth-and-nail resistance,” Fubo said in a court filing seeking the injunction.

Venu was expected to aggregate about $16 billion worth of sports rights, analysts have estimated. It was not expected to have an impact on the individual companies’ ability to strike new rights deals.

Analysts had questioned its position in the marketplace. Disney plans to roll out ESPN as a “flagship” streaming service in August 2025 that will carry programming that appears on the TV network as well as gaming, shopping and other interactive content. Disney chief executive Bob Iger said he wants the service to become the “pre-eminent digital sports platform.”

Fubo shares rose 16.8 percent after the ruling, but the stock is down 51 percent this year.

© 2022 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Rocket Report: ULA is losing engineers; SpaceX is launching every two days

Every other day —

The first missions of Stoke Space’s reusable Nova rocket will fly in expendable mode.

A Falcon 9 booster returns to landing at Cape Canaveral Space Force Station following a launch Thursday with two WorldView Earth observation satellites for Maxar.

Welcome to Edition 7.07 of the Rocket Report! SpaceX has not missed a beat since the Federal Aviation Administration gave the company a green light to resume Falcon 9 launches after a failure last month. In 19 days, SpaceX has launched 10 flights of the Falcon 9 rocket, taking advantage of all three of its Falcon 9 launch pads. This is a remarkable cadence in its own right, and, small sample size though it is, it is especially impressive coming right out of the gate after the rocket’s grounding.

As always, we welcome reader submissions. If you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

A quick turnaround for Rocket Lab. Rocket Lab launched its 52nd Electron rocket on August 11 from its private spaceport on Mahia Peninsula in New Zealand, Space News reports. The company’s light-class Electron rocket deployed a small radar imaging satellite into a mid-inclination orbit for Capella Space. This was the shortest turnaround between two Rocket Lab missions from its primary launch base in New Zealand, coming less than nine days after an Electron rocket took off from the same pad with a radar imaging satellite for the Japanese company Synspective. Capella’s Acadia 3 satellite was originally supposed to launch in July, but Capella requested a delay to perform more testing of its spacecraft. Rocket Lab swapped its place in the Electron launch sequence and launched the Synspective mission first.

Now, silence at the launch pad … Rocket Lab hailed the swap as an example of the flexibility provided by Electron, as well as the ability to deliver payloads to specific orbits that are not feasible with rideshare missions, according to Space News. For this tailored launch service, Rocket Lab charges a premium launch price over the price of launching a small payload on a SpaceX rideshare mission. However, SpaceX’s rideshare launches gobble up the lion’s share of small satellites within Rocket Lab’s addressable market. On Friday, a Falcon 9 rocket is slated to launch 116 small payloads into polar orbit. Rocket Lab, meanwhile, projects just one more launch before the end of September and expects to perform 15 to 18 Electron launches this year, a record for the company but well short of the 22 it forecasted earlier in the year. Rocket Lab says customer readiness is the reason it will be far short of projections.

The easiest way to keep up with Eric Berger’s space reporting is to sign up for his newsletter; we’ll collect his stories in your inbox.

Defense contractors teaming up on solid rockets. Lockheed Martin and General Dynamics are joining forces to kickstart solid rocket motor production, announcing a strategic teaming agreement today that could see new motors roll off the line as early as 2025, Breaking Defense reports. The new agreement could position a third vendor to enter into the ailing solid rocket motor industrial base, which currently only includes L3Harris subsidiary Aerojet Rocketdyne and Northrop Grumman in the United States. Both companies have struggled to meet demands from weapons makers like Lockheed and RTX, which are in desperate need of solid rocket motors for products such as Javelin or the PAC-3 missiles used by the Patriot missile defense system.

Pressure from startups … Demand for solid rocket motors has skyrocketed since Russia’s invasion of Ukraine as the United States and its partners sought to backfill stocks of weapons like Javelin and Stinger, as well as provide motors to meet growing needs in the space domain. Although General Dynamics has kept its interest in the solid rocket motor market quiet until now, several defense tech startups, such as Ursa Major Technologies, Anduril, and X-Bow Systems, have announced plans to enter the market. (submitted by Ken the Bin)

Going polar with crew. SpaceX will fly the first human spaceflight over the Earth’s poles, possibly before the end of this year, Ars reports. The private Crew Dragon mission will be led by a Chinese-born cryptocurrency entrepreneur named Chun Wang, and he will be joined by a polar explorer, a roboticist, and a filmmaker whom he has befriended in recent years. The “Fram2” mission, named after the Norwegian research ship Fram, will launch into a polar corridor from SpaceX’s launch facilities in Florida and fly directly over the north and south poles. The three- to five-day mission is being timed to fly over Antarctica near the summer solstice in the Southern Hemisphere, to afford maximum lighting.

Wang’s inclination is Wang’s prerogative … Wang told Ars he wanted to try something new, and flying a polar mission aligned with his interests in cold places on Earth. He’s paying the way on a commercial basis, and SpaceX in recent years has demonstrated it can launch satellites into polar orbit from Cape Canaveral, Florida, something no one had done in more than 50 years. The highest-inclination flight ever by a human spacecraft was the Soviet Vostok 6 mission in 1963 when Valentina Tereshkova’s spacecraft reached 65.1 degrees. Now, Fram2 will fly repeatedly and directly over the poles.


Navigating the Unique Landscape of OT Security Solutions

Exploring the operational technology (OT) security sector has been both enlightening and challenging, particularly due to its distinct priorities and requirements compared to traditional IT security. One of the most intriguing aspects of this journey has been understanding how the foundational principles of security differ between IT and OT environments. Typically, IT security is guided by the CIA triad—confidentiality, integrity, and availability, in that order. However, in the world of OT, the priority sequence shifts dramatically to AIC—availability, integrity, and confidentiality. This inversion underscores the unique nature of OT environments where system availability and operational continuity are paramount, often surpassing the need for confidentiality.

Learning through Contrast and Comparison

My initial approach to researching OT security solutions involved drawing parallels with familiar IT security strategies. However, I quickly realized that such a comparison, while useful, only scratches the surface. To truly understand the nuances of OT security, I delved into case studies, white papers, and real-world incidents that highlighted the critical need for availability and integrity above all. Interviews with industry experts and interactive webinars provided deeper insights into why disruptions in service, even for a brief period, can have catastrophic outcomes in sectors like manufacturing, energy, or public utilities, far outweighing concerns about data confidentiality.

Challenges for Adopters

One of the most significant challenges for organizations adopting OT security solutions is the integration of these systems into existing infrastructures without disrupting operational continuity. Many OT environments operate with legacy systems that are not only sensitive to changes but also may not support the latest security protocols. The delicate balance of upgrading security without hampering the availability of critical systems presents a steep learning curve for adopters. This challenge is compounded by the need to ensure that security measures are robust enough to prevent increasingly sophisticated cyberattacks, which are now more frequently targeting vulnerable OT assets.

Surprising Discoveries

Perhaps the most surprising discovery during my research was the level of interconnectedness between IT and OT systems in many organizations. While still developing, this convergence is driving a new wave of cybersecurity strategies that must cover the extended attack surface without introducing new vulnerabilities. Additionally, the rate of technological adoption in OT—such as IoT devices in industrial settings—has accelerated, creating both opportunities and unprecedented security challenges. The pace at which OT environments are becoming digitized is astonishing and not without risks, as seen in several high-profile security breaches over the past year.

YoY Changes in OT Security

Comparing the state of OT security solutions now to just a year ago, the landscape has evolved rapidly. There has been a marked increase in the adoption of machine learning and artificial intelligence to predict and respond to threats in real time, a trend barely in its nascent stages last year. Vendors are also emphasizing the creation of more integrated platforms that offer both deeper visibility into OT systems and more comprehensive management tools. This shift toward more sophisticated, unified solutions is a direct response to the growing complexity and connectivity of modern industrial environments.

Looking Forward

Moving forward, the OT security sector is poised to continue its rapid evolution. The integration of AI and predictive analytics is expected to deepen, with solutions becoming more proactive rather than reactive. For IT decision-makers, staying ahead means not only adopting cutting-edge security solutions, but also fostering a culture of continuous learning and adaptation within their organizations.

Understanding the unique aspects of researching and implementing OT security solutions highlights the importance of tailored approaches in cybersecurity. As the sector continues to grow and transform, the journey of discovery and adaptation promises to be as challenging as it is rewarding.

Next Steps

To learn more, take a look at GigaOm’s OT security Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.

If you’re not yet a GigaOm subscriber, sign up here.
