Author name: Kris Guyer

Karaoke reveals why we blush

Singing for science —

Volunteers watched their own performances as an MRI tracked brain activity.

A hand holding a microphone against a blurry backdrop, taken from an angle that implies the microphone is directly in front of your face.

Singing off-key in front of others is one way to get embarrassed. Regardless of how you get there, why does embarrassment almost inevitably come with burning cheeks that turn an obvious shade of red (which is possibly even more embarrassing)?

Blushing starts not in the face but in the brain, though exactly where has been debated. Previous thinking often reasoned that the blush reaction was associated with higher socio-cognitive processes, such as thinking of how one is perceived by others.

After studying subjects who watched videos of themselves singing karaoke, however, researchers led by Milica Nicolic of the University of Amsterdam have found that blushing is really the result of specific emotions being aroused.

Nicolic’s findings suggest that blushing “is a consequence of a high level of ambivalent emotional arousal that occurs when a person feels threatened and wants to flee but, at the same time, feels the urge not to give up,” as she and her colleagues put it in a study recently published in Proceedings of the Royal Society B.

Taking the stage

The researchers sought out test subjects who were most likely to blush when watching themselves sing bad karaoke: adolescent girls. Adolescents tend to be much more self-aware and more sensitive to being judged by others than adults are.

The subjects couldn’t pick just any song. Nicolic and her team gave them a choice of four songs that music experts had deemed difficult to sing: “Hello” by Adele, “Let It Go” from Frozen, “All I Want for Christmas Is You” by Mariah Carey, and “All the Things She Said” by t.A.T.u. Videos of the subjects were recorded as they sang.

On their second visit to the lab, subjects were put in an MRI scanner and were shown videos of themselves and others singing karaoke. They watched 15 video clips of themselves singing and, as a control, 15 segments of someone who was thought to have similar singing ability, so secondhand embarrassment could be ruled out.

The other control factor was videos of professional singers disguised as participants. Because the professionals sang better overall, it was unlikely they would trigger secondhand embarrassment.

Enough to make you blush

The researchers checked for an increase in cheek temperature, as blood flow measurements had been used in past studies but are more prone to error. This was measured with a fast-response temperature transducer as the subjects watched karaoke videos.

It was only when the subjects watched themselves sing that cheek temperature went up. There was virtually no increase or decrease when watching others—meaning no secondhand embarrassment—and a slight decrease when watching a professional singer.

The MRI scans revealed which regions of the brain were activated as subjects watched videos of themselves. These include the anterior insular cortex, or anterior insula, which responds to a range of emotions, including fear, anxiety, and, of course, embarrassment. There was also the mid-cingulate cortex, which emotionally and cognitively manages pain—including embarrassment—by trying to anticipate that pain and reacting with aversion and avoidance. The dorsolateral prefrontal cortex, which helps process fear and anxiety, also lit up.

There was also more activity detected in the cerebellum, which is responsible for much of the emotional processing in the brain, when subjects watched themselves sing. Those who blushed more while watching their own video clips showed the most cerebellum activity. This could mean they were feeling stronger emotions.

What surprised the researchers was that there was no additional activation in areas known for being involved in the process of understanding one’s mental state, meaning someone’s opinion of what others might think of them may not be necessary for blushing to happen.

So blushing is really more about the surge of emotions someone feels when being faced with things that pertain to the self and not so much about worrying what other people think. That can definitely happen if you’re watching a video of your own voice cracking at the high notes in an Adele song.

Proceedings of the Royal Society B, 2024.  DOI: 10.1098/rspb.2024.0958

Nothing’s new AI widget is trying to make its CFO a news star

something out of nothing —

Its news app is available on all Nothing and CMF handsets, including the new Phone (2a) Plus.

Nothing has a new smartphone—the Phone (2a) Plus—nearly identical to the Phone (2a) it released earlier this year, but with slightly beefed-up specs. It costs $399 and is available in the US through the same beta program as Nothing’s previous phones. But it isn’t the new Android handset we find most interesting; it’s the company’s new widget.

The “News Reporter” widget, available by default on all Nothing and CMF smartphones plus other Android and iOS devices via the Nothing X app, lets you quickly play a news bulletin summarized by artificial intelligence. It is read out by the synthesized voice of Tim Holbrow, the company’s chief financial officer. (Nothing is using ElevenLabs’ tech for sound synthesis and output.) As soon as you tap the widget, you’re greeted by a soothing British voice:

“Welcome to Nothing News, where the only thing we take seriously is not taking anything seriously. I’m Tim, your CFO and reluctant news reader. Today, we’re making something out of nothing, because that’s literally our job.”

The widget will start cycling through a selection of news stories—you can press and hold the widget and tap Edit to add or remove categories you’re interested in, such as business, entertainment, tech, and sports. These news stories are pulled from “trusted English-language news sources” through News API, using Meta’s Llama large language models for the summary.
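
Nothing hasn’t published how the widget’s pipeline is wired together, but the pieces it names (News API for headlines, Meta’s Llama models for summarization) suggest a fairly simple fetch-then-summarize loop. Purely as an illustrative sketch, not Nothing’s actual code, here is roughly what that could look like in Python; the API key, the category default, and the summarize() helper are all assumptions:

```python
# Illustrative sketch only; Nothing has not published its implementation.
# Assumes a newsapi.org API key and some Llama endpoint behind the
# hypothetical summarize() helper below.
import requests

NEWS_API_KEY = "YOUR_KEY_HERE"  # assumption: a newsapi.org key

def fetch_headlines(category="technology", count=8):
    """Pull top headlines for one category from News API."""
    resp = requests.get(
        "https://newsapi.org/v2/top-headlines",
        params={"category": category, "pageSize": count, "apiKey": NEWS_API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("articles", [])

def summarize(article):
    """Hypothetical helper: hand the title and description to a Llama model
    and get back a roughly one-minute spoken summary. Not implemented here."""
    raise NotImplementedError

for article in fetch_headlines():
    print(article["title"])
```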

Nothing’s News Reporter widget is available on all Nothing and CMF phones by default. If you download the Nothing X app, you can also access it on Android and iOS.

You can swipe down the notification bar and press the next button on the media playback notification to skip a story, to which Holbrow will add a quip. “Not feeling that one? Let’s find another.” After I skipped quite a few in a row, AI Holbrow asked, “Do you even like news?”

The summaries are one minute each (roughly), and you get eight stories per day. Every morning, the widget will refresh with a fresh batch. Unfortunately, and frustratingly, the widget doesn’t give you much to go on if you want to read more. There’s no attribution to where it pulled the news from, and no links are provided to read directly from the source.

Every smartphone company has been touting some kind of generative AI feature in new devices this year. Samsung has Galaxy AI; Google has its Gemini chatbot and a bevy of AI features in Pixel phones; Motorola introduced Moto AI recently; and even OnePlus has been teasing a few AI features in its phones, like AI Eraser, which lets you remove unwanted objects from photos. Nothing introduced a ChatGPT integration in its earbuds earlier this year, and this widget is the latest generative AI feature to land.

That said, it’s hardly the first time we’ve seen a news summarization feature. Back when Amazon Alexa and Google Assistant were gaining popularity, one of the top features was asking the voice assistant to play the news—you’d hear short news clips from various sources, like NPR and CNN. Still, I like the implementation in Nothing’s widget, but I’d also like to see attribution and a way to dig deeper into a story if it’s interesting.

What about that phone?

As for the Nothing Phone (2a) Plus, I’ve been using it for several days and it’s … indiscernible from the Phone (2a) I reviewed positively in March. I love the new gray color option, which hides smudges on the rear better and makes the phone’s already fun design pop even more. You still get the same Glyph light functionality, allowing the LEDs to light up for notifications and calendar events, and even double as a visualizer when playing music.

Nothing Phone (2a) on the left, Nothing Phone (2a) Plus on the right.

The top change here is the processor. Inside is MediaTek’s Dimensity 7350 Pro 5G (as opposed to the Phone (2a)’s Dimensity 7200 Pro), which offers a 10 percent increase in CPU power, and a 30 percent jump in graphics performance. Honestly, I didn’t notice a huge bump in speed, and my benchmark scores show a very tiny boost.

The next upgrade is the selfie camera, which gets a new 50-MP sensor (up from 32 megapixels) that can shoot 4K video at 30 frames per second. The company says it has issued seven updates since the launch of the Phone (2a), with 26 improvements to the camera, including upgrades to loading speeds, color consistency, and blur accuracy in portrait mode. The Phone (2a) Plus launches with all of those improvements, and the 50-MP main and ultrawide cameras on the rear are the same.

Selfies indeed look much nicer, especially in low light, where my face appears sharper with better HDR and a more balanced exposure. The rear cameras produce nice results considering the price, and I found daytime renders to deliver natural-looking colors. It can still struggle with super high-contrast scenes, but this is a solid camera system.

Lastly, the wired charging on the phone now supports 50 watts (up from 45 watts), which supposedly gets you a 10 percent charging speed boost. Everything else is identical from the Phone (2a)’s specs, from the 6.7-inch AMOLED display to the 5,000-mAh battery.

Nothing new

I’ve enjoyed the phone over the past few days, but its launch is so peculiar, considering it doesn’t introduce any groundbreaking updates to the Phone (2a). So I asked the company why it decided to launch the (2a) Plus now. “We aren’t launching Phone (3) until next year, and we saw an opportunity to enhance the smartphone we launched in March with Phone (2a) Plus, a new smartphone—catered towards power users—at an accessible price point,” says Jane Nho, Nothing’s head of PR in the US. The company launched its last flagship phone, the Phone (2), in July 2023.

So there you have it: The Phone (2a) Plus is a seemingly painless way for Nothing to stay relevant amid all the other smartphone launches, still have an AI story, boost sales, and, oddly, try to make some sort of digital celebrity out of its CFO.

Nothing says it’ll go on sale August 3 in London at Nothing’s store in Soho, in gray and black, with 12GB RAM and 256GB storage. In the US, the device will follow the same beta program system as the Phone (2a) and CMF Phone 1. That means you’ll have to sign up for the beta, and once you’re accepted, you’ll be able to purchase the device for $399. It’ll be available on August 7 at 9 am ET.

This story originally appeared on wired.com.

7 million pounds of meat recalled amid deadly outbreak

7 million pounds across 71 products —

Authorities worry that the contaminated meats are still sitting in people’s fridges.

Shelves sit empty where Boar’s Head meats are usually displayed at a Safeway store on July 31, 2024, in San Anselmo, California.

Over 7 million pounds of Boar’s Head brand deli meats are being recalled amid a bacterial outbreak that has killed two people. The outbreak, which began in late May, has sickened a total of 34 people across 13 states, leading to 33 hospitalizations, according to the US Department of Agriculture.

On June 26, Boar’s Head recalled 207,528 pounds of products, including liverwurst, beef bologna, ham, salami, and “heat and eat” bacon. On Tuesday, the Jarratt, Virginia-based company expanded the recall to include about 7 million additional pounds of meat, including 71 different products sold on the Boar’s Head and Old Country brand labels. The products were sold nationwide.

The meats may be contaminated with Listeria monocytogenes, a foodborne pathogen that is particularly dangerous to pregnant people, people over the age of 65, and people with compromised immune systems. Infections during pregnancy can cause miscarriage, stillbirth, premature delivery, or a life-threatening infection in newborns. For others who develop invasive illness, the fatality rate is nearly 16 percent. Symptoms of listeriosis can include fever, muscle aches, headache, stiff neck, confusion, loss of balance, and convulsions that are sometimes preceded by diarrhea or other gastrointestinal symptoms.

The problem was discovered when the Maryland Department of Health—working with the Baltimore City Health Department—collected an unopened liverwurst product from a retail store and found that it was positive for L. monocytogenes. In later testing, the strain in the liverwurst was linked to those isolated from people sickened in the outbreak.

According to the Centers for Disease Control and Prevention, six of the 34 known cases were identified in Maryland, and 12 were identified in New York. The other 11 states have only reported one or two cases each. However, the CDC expects the true number of infections to be much higher, given that many people recover without medical care and, even if people did seek care, health care providers do not routinely test for L. monocytogenes in people with mild gastrointestinal illnesses.

In the outbreak so far, there has been one case in a pregnant person, who recovered and remained pregnant. The two deaths occurred in New Jersey and Illinois.

In a statement on the company’s website, Boar’s Head said that it learned from the USDA on Monday night that the L. monocytogenes strain found in its liverwurst had been linked to the multistate outbreak. “Out of an abundance of caution, we decided to immediately and voluntarily expand our recall to include all items produced at the Jarratt facility. We have also decided to pause ready-to-eat operations at this facility until further notice. As a company that prioritizes safety and quality, we believe it is the right thing to do.”

The USDA said it is “concerned that some product may be in consumers’ refrigerators and in retail deli cases.” The USDA, the company, and CDC warn people not to eat the recalled products. Instead, they should either be thrown away or returned to the store where they were purchased for a full refund. And if you’ve purchased one of the recalled products, the USDA also advises you to thoroughly clean your fridge to prevent cross-contamination.

How Kepler’s 400-year-old sunspot sketches helped solve a modern mystery

A naked-eye sunspot group on May 11, 2024. There are typically 40,000 to 50,000 sunspots observed in ~11-year solar cycles.

E. T. H. Teague

A team of Japanese and Belgian astronomers has re-examined the sunspot drawings made by 17th century astronomer Johannes Kepler with modern analytical techniques. By doing so, they resolved a long-standing mystery about solar cycles during that period, according to a recent paper published in The Astrophysical Journal Letters.

Precisely who first observed sunspots was a matter of heated debate in the early 17th century. We now know that ancient Chinese astronomers between 364 and 28 BCE observed these features and included them in their official records. A Benedictine monk in 807 thought he’d observed Mercury passing in front of the Sun when, in reality, he had witnessed a sunspot; similar mistaken interpretations were also common in the 12th century. (An English monk made the first known drawings of sunspots in December 1128.)

English astronomer Thomas Harriot made the first telescope observations of sunspots in late 1610 and recorded them in his notebooks, as did Galileo around the same time, although the latter did not publish a scientific paper on sunspots (accompanied by sketches) until 1613. Galileo also argued that the spots were not, as some believed, solar satellites but more like clouds in the atmosphere or the surface of the Sun. But he was not the first to suggest this; that credit belongs to Dutch astronomer Johannes Fabricius, who published his scientific treatise on sunspots in 1611.

Kepler read that particular treatise and admired it, having made his sunspot observations using a camera obscura in 1607 (published in a 1609 treatise), which he initially thought was a transit of Mercury. He retracted that report in 1618, concluding that he had actually seen a group of sunspots. Kepler made his solar drawings based on observations conducted both in his own house and in the workshop of court mechanic Justus Burgi in Prague.  In the first case, he reported “a small spot in the size of a small fly”; in the second, “a small spot of deep darkness toward the center… in size and appearance like a thin flea.”

The earliest datable sunspot drawings based on Kepler’s solar observations with camera obscura in May 1607.

Public domain

The long-standing debate that is the subject of this latest paper concerns the period from around 1645 to 1715, during which there were very few recorded observations of sunspots despite the best efforts of astronomers. This was a unique event in astronomical history. Despite only observing some 59 sunspots during this time—compared to the 40,000 to 50,000 sunspots observed over a similar span in our current age—astronomers were nonetheless able to determine that sunspots seemed to occur in 11-year cycles.

German astronomer Gustav Spörer noted the steep decline in 1887 and 1889 papers, and his British colleagues, Edward and Annie Maunder, expanded on that work to study how the latitudes of sunspots changed over time. That period became known as the “Maunder Minimum.” Spörer also came up with “Spörer’s law,” which holds that spots at the start of a cycle appear at higher latitudes and then emerge at successively lower latitudes, closer to the equator, as the cycle runs its course, until a new cycle of sunspots begins at the higher latitudes.

But precisely how the solar cycle transitioned to the Maunder Minimum has been far from clear. Reconstructions based on tree rings have produced conflicting data. For instance, one such reconstruction concluded that the gradual transition was preceded either by an extremely short solar cycle of about five years or an extremely long solar cycle of about 16 years. Another tree ring reconstruction concluded the solar cycle would have been of normal 11-year duration.

Independent observational records can help resolve the discrepancy. That’s why Hisashi Hayakawa of Nagoya University in Japan and co-authors turned to Kepler’s sunspot drawings, which predate existing telescopic observations by several years, for additional insight.

Webb confirms: Big, bright galaxies formed shortly after the Big Bang

They grow up so fast —

Structure of galaxy rules out the idea that early, bright objects were supermassive black holes.

Some of the galaxies in the JADES images.

One of the things that the James Webb Space Telescope was designed to do was look at some of the earliest objects in the Universe. And it has already succeeded spectacularly, imaging galaxies as they existed just 250 million years after the Big Bang. But these galaxies were small, compact, and similar in scope to what we’d consider a dwarf galaxy today, which made it difficult to determine what was producing their light: stars or an actively feeding supermassive black hole at their core.

This week, Nature is publishing confirmation that some additional galaxies we’ve imaged also date back to just 300 million years after the Big Bang. Critically, one of them is bright and relatively large, allowing us to infer that most of its light was coming from a halo of stars surrounding its core, rather than originating in the same area as the central black hole. The finding implies that it formed through a continuing burst of star formation that started just 200 million years after the Big Bang.

Age checks

The galaxies at issue here were first imaged during the JADES (JWST Advanced Deep Extragalactic Survey) imaging program, which includes part of the area imaged for the Hubble Ultra Deep Field. Initially, old galaxies were identified by using a combination of filters on one of Webb’s infrared imaging cameras.

Most of the Universe is made of hydrogen, and figuring out the age of early galaxies involves looking for the most energetic transitions of hydrogen’s electron, called the Lyman series. These transitions produce photons that are in the UV area of the spectrum. But the redshift of light that’s traveled for billions of years will shift these photons into the infrared area of the spectrum, which is what Webb was designed to detect.

What this looks like in practice is that hydrogen-dominated material will emit a broad range of light right up to the highest energy Lyman transition. Above that energy, photons will be sparse (they may still be produced by things like processes that accelerate particles). This point in the energy spectrum is called the “Lyman break,” and its location on the spectrum will change based on how distant the source is—the greater the distance to the source, the deeper into the infrared the break will appear.
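
The arithmetic behind that statement is simple cosmological redshift: the observed wavelength equals the rest-frame wavelength times (1 + z). As a rough back-of-the-envelope check (using standard textbook wavelengths, not figures from the new paper), here is where the relevant transitions end up for a few redshifts:

```python
# Back-of-the-envelope check of where the Lyman break lands for distant
# galaxies: lambda_observed = lambda_rest * (1 + z).
LYMAN_ALPHA_NM = 121.6  # rest-frame Lyman-alpha wavelength
LYMAN_LIMIT_NM = 91.2   # rest-frame Lyman limit (highest-energy transition)

def observed_nm(rest_nm, z):
    """Redshift a rest-frame wavelength to its observed wavelength."""
    return rest_nm * (1 + z)

for z in (7, 10, 14):
    limit_um = observed_nm(LYMAN_LIMIT_NM, z) / 1000
    alpha_um = observed_nm(LYMAN_ALPHA_NM, z) / 1000
    print(f"z = {z}: Lyman limit at {limit_um:.2f} um, Lyman-alpha at {alpha_um:.2f} um")

# At z = 14 the break sits around 1.4-1.8 micrometers, comfortably inside
# the near-infrared range Webb's instruments were built to observe.
```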

Initial surveys checked for the Lyman break using filters on Webb’s cameras that cut off different areas of the IR spectrum. Researchers looked for objects that showed up at low energies but disappeared when a filter that selected for higher-energy infrared photons was swapped in. The difference in energies between the photons allowed through by the two filters can provide a rough estimate of where the Lyman break must be.

Precisely locating the Lyman break, however, requires a spectrograph, which can sample the full spectrum of near-infrared light. Fortunately, Webb has one of those, too. The newly published study involved turning the NIRSpec instrument onto three early galaxies found in the JADES images.

Too many, too soon

The researchers involved in the analysis only ended up with data from two of these galaxies. NIRSpec doesn’t gather as much light as one of Webb’s cameras can, and so the faintest of the three just didn’t produce enough data to enable analysis. The other two, however, produced very clear data that placed the galaxies at a redshift of roughly z = 14, which means we’re seeing them as they looked 300 million years after the Big Bang. Both show sharp Lyman breaks, with the amount of light dropping gradually as you move further into the lower-energy part of the spectrum.

There’s a slight hint of emissions from heavily ionized carbon atoms in one of the galaxies, but no sign of any other specific elements beyond hydrogen.

One of the two galaxies was quite compact, making it similar to the other galaxies of this age that we’d confirmed previously. But the other, JADES-GS-z14-0, was quite distinct. For starters, it’s extremely bright, being the third most luminous distant galaxy out of hundreds we’ve imaged so far. And it’s big enough that it’s not possible for all its light to be originating from the core. That rules out the possibility that what we’re looking at is a blurred view of an active galactic nucleus powered by a supermassive black hole feeding on material.

Instead, much of the light we’re looking at seems to have originated in the stars of JADES-GS-z14-0. Most of those stars are young, and there seems to be very little of the dust that characterizes modern galaxies. The researchers estimate that star formation started at least 100 million years earlier (meaning just 200 million years after the Big Bang) and continued at a rapid pace in the intervening time.

Combined with earlier data, the researchers write that this confirms that “bright and massive galaxies existed already only 300 [million years] after the Big Bang, and their number density is more than ten times higher than extrapolations based on pre-JWST observations.” In other words, there were a lot more galaxies around in the early Universe than we thought, which could pose some problems for our understanding of the Universe’s contents and their evolution.

Meanwhile, the early discovery of the extremely bright galaxy implies that there are a number of similar ones out there awaiting our discovery. This means there’s going to be a lot of demand for time on NIRSpec in the coming years.

Nature, 2024. DOI: 10.1038/s41586-024-07860-9  (About DOIs).

Charter failed to notify 911 call centers and FCC about VoIP phone outages

Charter admits violations —

Charter blames error with email notification and misunderstanding of FCC rules.

A parked van used by a Spectrum cable technician. The van has the Spectrum logo on its side and a ladder stowed on the roof.

Charter Communications agreed to pay a $15 million fine after admitting that it failed to notify more than a thousand 911 call centers about an outage caused by a denial-of-service attack and separately failed to meet the Federal Communications Commission’s reporting deadlines for hundreds of planned maintenance outages.

“As part of the settlement, Charter admits to violating the agency’s rules regarding notifications to public safety officials and the Commission in connection with three unplanned network outages and hundreds of planned, maintenance-related network outages that occurred last year,” the FCC said in an announcement yesterday.

A consent decree said Charter admits that it “failed to timely notify more than 1,000 PSAPs [Public Safety Answering Points] of an outage on February 19, 2023.” The decree notes that failure to notify the PSAPs, or 911 call centers, “impedes the ability of public safety officials to mitigate the effects of an outage by notifying the public of alternate ways to contact emergency services.”

Phone providers like Charter must also provide required outage notifications to the FCC through the Network Outage Reporting System (NORS). However, Charter admits that it “failed to meet reporting deadlines for reports in the NORS associated with the [February 2023] Outage, and separate outages on March 31 and April 26, 2023; and failed to meet other NORS reporting deadlines associated with hundreds of planned maintenance outages, all in violation of the Commission’s rules.”

Error with email notification

With the February 2023 outage, “Charter was required to notify all of the impacted PSAPs ‘as soon as possible,’ but due to a clerical error associated with the sending of an email notification, over 1,000 PSAPs were not contacted,” the consent decree said. Charter also “failed to file the required NORS notification until almost six hours after it was due.”

Failure to meet NORS deadlines “impairs the Commission’s ability to assess the magnitude of major outages, identify trends, and promote network reliability best practices that can prevent or mitigate future disruptions. Therefore, it is imperative for the Commission to hold providers, like Charter, accountable for fulfilling these essential obligations,” the consent decree said.

In addition to paying a $15 million civil penalty to the US Treasury, “Charter has agreed to implement a robust compliance plan, including cybersecurity provisions related to compliance with the Commission’s 911 rules,” the FCC said. Charter reported revenue of $13.7 billion and net income of $1.2 billion in the most recent quarter.

The February 2023 outage was caused by what the FCC described as “a minor, low and slow Denial of Service (DoS) attack.” The resulting outage in Charter’s VoIP service affected about 400,000 “residential and commercial interconnected VoIP customers in portions of 41 states and the District of Columbia.” Charter restored service in less than four hours.

The FCC said its rules require VoIP providers like Charter “to notify 911 call centers as soon as possible of outages longer than 30 minutes that potentially affect such call centers. Providers are also required to file by set deadlines in the FCC’s Network Outage Reporting System when outages reach a certain severity threshold.”

The FCC investigation into the February 2023 outage led to Charter admitting violations related to hundreds of other outages:

Charter indicated that based on a misunderstanding of the Commission’s rules, hundreds of planned maintenance events may have met the criteria for filing a NORS report but were never submitted. Thereafter, Charter also identified two additional, unplanned outages—which occurred on March 31, 2023, and April 26, 2023—that each met the NORS reporting threshold but Charter failed to report.

Charter downplays violations

In a statement provided to Ars, Charter said, “We’re glad to have resolved these issues, which will primarily result in Charter reporting certain planned maintenance to the FCC.” Charter downplayed the outage reporting violations, saying that “the fine has nothing to do with cybersecurity violations and is attributable solely to administrative notifications.”

Charter’s statement emphasized that the company did not violate cybersecurity rules. “No provision within either the CISA Cybersecurity Best Practices or the NIST Cybersecurity Framework would have prevented this attack, and no flaws were identified by the FCC regarding Charter’s cybersecurity practices. We agreed with the FCC that we should continue doing what we’re already doing,” the company said.

Although Charter said the settlement “will primarily result in Charter reporting certain planned maintenance to the FCC,” the consent decree also requires changes to ensure that the company promptly notifies 911 call centers. It says that Charter must create “an automated PSAP notification system to automatically contact PSAPs after a network outage that meets the reporting thresholds in the 911 Rules.”

The FCC said the “compliance plan includes the first-of-its-kind application of certain cybersecurity measures—including network segmentation and vulnerability mitigation management—related to 911 communications services and network outage reporting. Charter has agreed to maintain and evolve its overall cybersecurity risk management program in accordance with the voluntary National Institute of Standards and Technology (NIST) Cyber Security Framework, and other applicable industry standards and best practices, and applicable state and/or federal laws covering cybersecurity risk management and governance practices.”

The compliance plan requirements are set to remain in effect for three years.

Disclosure: The Advance/Newhouse Partnership, which owns 12.4 percent of Charter, is part of Advance Publications, which also owns Ars Technica parent Condé Nast.

SpaceX moving Dragon splashdowns to Pacific to solve falling debris problem

A Crew Dragon spacecraft is seen docked at the International Space Station in 2022. The section of the spacecraft on the left is the pressurized capsule, while the rear section, at right, is the trunk.

NASA

Sometime next year, SpaceX will begin returning its Dragon crew and cargo capsules to splashdowns in the Pacific Ocean and end recoveries of the spacecraft off the coast of Florida.

This will allow SpaceX to make changes to the way it brings Dragons back to Earth and eliminate the risk, however tiny, that a piece of debris from the ship’s trunk section might fall on someone and cause damage, injury, or death.

“After five years of splashing down off the coast of Florida, we’ve decided to shift Dragon recovery operations back to the West Coast,” said Sarah Walker, SpaceX’s director of Dragon mission management.

Public safety

In the past couple of years, landowners have discovered debris from several Dragon missions on their property, and the fragments all came from the spacecraft’s trunk, an unpressurized section mounted behind the capsule as it carries astronauts or cargo on flights to and from the International Space Station.

SpaceX returned its first 21 Dragon cargo missions to splashdowns in the Pacific Ocean southwest of Los Angeles. When an upgraded human-rated version of Dragon started flying in 2019, SpaceX moved splashdowns to the Atlantic Ocean and the Gulf of Mexico to be closer to the company’s refurbishment and launch facilities at Cape Canaveral, Florida. The benefits of landing near Florida included a faster handover of astronauts and time-sensitive cargo back to NASA and shorter turnaround times between missions.

The old version of Dragon, known as Dragon 1, separated its trunk after the deorbit burn, allowing the trunk to fall into the Pacific. With the new version of Dragon, called Dragon 2, SpaceX changed the reentry profile to jettison the trunk before the deorbit burn. This meant that the trunk remained in orbit after each Dragon mission, while the capsule reentered the atmosphere on a guided trajectory. The trunk, which is made of composite materials and lacks a propulsion system, usually takes a few weeks or a few months to fall back into the atmosphere and doesn’t have control of where or when it reenters.

Air resistance from the rarefied upper atmosphere gradually slows the trunk’s velocity enough to drop it out of orbit, and the amount of aerodynamic drag the trunk sees is largely determined by fluctuations in solar activity.

SpaceX and NASA, which funded a large portion of the Dragon spacecraft’s development, initially determined the trunk would entirely burn up when it reentered the atmosphere and would pose no threat of surviving reentry and causing injuries or damaging property. However, that turned out to not be the case.

In May, a 90-pound chunk of a SpaceX Dragon spacecraft that departed the International Space Station fell on the property of a “glamping” resort in North Carolina. At the same time, a homeowner in a nearby town found a smaller piece of material that also appeared to be from the same Dragon mission.

These events followed the discovery in April of another nearly 90-pound piece of debris from a Dragon capsule on a farm in the Canadian province of Saskatchewan. SpaceX and NASA later determined the debris fell from orbit in February, and earlier this month, SpaceX employees came to the farm to retrieve the wreckage, according to CBC.

Pieces of a Dragon spacecraft also fell over Colorado last year, and a farmer in Australia found debris from a Dragon capsule on his land in 2022.

From sci-fi to state law: California’s plan to prevent AI catastrophe

Adventures in AI regulation —

Critics say SB-1047, proposed by “AI doomers,” could slow innovation and stifle open source AI.

The California State Capitol Building in Sacramento.

California’s “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” (a.k.a. SB-1047) has led to a flurry of headlines and debate concerning the overall “safety” of large artificial intelligence models. But critics are concerned that the bill’s overblown focus on existential threats by future AI models could severely limit research and development for more prosaic, non-threatening AI uses today.

SB-1047, introduced by State Senator Scott Wiener, passed the California Senate in May with a 32-1 vote and seems well positioned for a final vote in the State Assembly in August. The text of the bill requires companies behind sufficiently large AI models (currently set at $100 million in training costs and the rough computing power implied by those costs today) to put testing procedures and systems in place to prevent and respond to “safety incidents.”

The bill lays out a legalistic definition of those safety incidents that in turn focuses on defining a set of “critical harms” that an AI system might enable. That includes harms leading to “mass casualties or at least $500 million of damage,” such as “the creation or use of chemical, biological, radiological, or nuclear weapon” (hello, Skynet?) or “precise instructions for conducting a cyberattack… on critical infrastructure.” The bill also alludes to “other grave harms to public safety and security that are of comparable severity” to those laid out explicitly.

An AI model’s creator can’t be held liable for harm caused through the sharing of “publicly accessible” information from outside the model—simply asking an LLM to summarize The Anarchist’s Cookbook probably wouldn’t put it in violation of the law, for instance. Instead, the bill seems most concerned with future AIs that could come up with “novel threats to public safety and security.” More than a human using an AI to brainstorm harmful ideas, SB-1047 focuses on the idea of an AI “autonomously engaging in behavior other than at the request of a user” while acting “with limited human oversight, intervention, or supervision.”

Would California’s new bill have stopped WOPR?

To prevent this straight-out-of-science-fiction eventuality, anyone training a sufficiently large model must “implement the capability to promptly enact a full shutdown” and have policies in place for when such a shutdown would be enacted, among other precautions and tests. The bill also focuses at points on AI actions that would require “intent, recklessness, or gross negligence” if performed by a human, suggesting a degree of agency that does not exist in today’s large language models.

Attack of the killer AI?

This kind of language in the bill likely reflects the particular fears of its original drafter, Center for AI Safety (CAIS) co-founder Dan Hendrycks. In a 2023 Time Magazine piece, Hendrycks makes the maximalist existential argument that “evolutionary pressures will likely ingrain AIs with behaviors that promote self-preservation” and lead to “a pathway toward being supplanted as the earth’s dominant species.”

If Hendrycks is right, then legislation like SB-1047 seems like a common-sense precaution—indeed, it might not go far enough. Supporters of the bill, including AI luminaries Geoffrey Hinton and Yoshua Bengio, agree with Hendrycks’ assertion that the bill is a necessary step to prevent potential catastrophic harm from advanced AI systems.

“AI systems beyond a certain level of capability can pose meaningful risks to democracies and public safety,” wrote Bengio in an endorsement of the bill. “Therefore, they should be properly tested and subject to appropriate safety measures. This bill offers a practical approach to accomplishing this, and is a major step toward the requirements that I’ve recommended to legislators.”

“If we see any power-seeking behavior here, it is not of AI systems, but of AI doomers.”

Tech policy expert Dr. Nirit Weiss-Blatt

However, critics argue that AI policy shouldn’t be led by outlandish fears of future systems that resemble science fiction more than current technology. “SB-1047 was originally drafted by non-profit groups that believe in the end of the world by sentient machine, like Dan Hendrycks’ Center for AI Safety,” Daniel Jeffries, a prominent voice in the AI community, told Ars. “You cannot start from this premise and create a sane, sound, ‘light touch’ safety bill.”

“If we see any power-seeking behavior here, it is not of AI systems, but of AI doomers,” added tech policy expert Nirit Weiss-Blatt. “With their fictional fears, they try to pass fictional-led legislation, one that, according to numerous AI experts and open source advocates, could ruin California’s and the US’s technological advantage.”

Are you a workaholic? Here’s how to spot the signs

bad for business —

Psychologists now view an out-of-control compulsion to work as an addiction.

Man works late in dimly lit cubicle amid a dark office space

An accountant who fills out spreadsheets at the beach, a dog groomer who always has time for one more client, a basketball player who shoots free throws to the point of exhaustion.

Every profession has its share of hard chargers and overachievers. But for some workers—perhaps more than ever in our always-on, always-connected world—the drive to send one more email, clip one more poodle, sink one more shot becomes all-consuming.

Workaholism is a common feature of the modern workplace. A recent review gauging its pervasiveness across occupational fields and cultures found that roughly 15 percent of workers qualify as workaholics. That adds up to millions of overextended employees around the world who don’t know when—or how, or why—to quit.

Whether driven by ambition, a penchant for perfectionism, or the small rush of completing a task, they work past any semblance of reason. A healthy work ethic can cross the line into an addiction, a shift with far-reaching consequences, says Toon Taris, a behavioral scientist and work researcher at Utrecht University in the Netherlands.

“Workaholism” is a word that gets thrown around loosely and sometimes glibly, says Taris, but the actual affliction is more common, more complex, and more dangerous than many people realize.

What workaholism is—and isn’t

Psychologists and employment researchers have tinkered with measures and definitions of workaholism for decades, and today the picture is coming into focus. In a major shift, workaholism is now viewed as an addiction with its own set of risk factors and consequences, says Taris, who, with occupational health scientist Jan de Jonge of Eindhoven University of Technology in the Netherlands, explored the phenomenon in the 2024 Annual Review of Organizational Psychology and Organizational Behavior.

Taris stresses that the “workaholic” label doesn’t apply to people who put in long hours because they love their jobs. Those people are considered engaged workers, he says. “That’s fine. No problems there.” People who temporarily put themselves through the grinder to advance their careers or keep up on car or house payments don’t count, either. Workaholism is in a different category from capitalism.

The growing consensus is that true workaholism encompasses four dimensions: motivations, thoughts, emotions, and behaviors, says Malissa Clark, an industrial/organizational psychologist at the University of Georgia in Athens. In 2020, Clark and colleagues proposed in the Journal of Applied Psychology  that, in sum, workaholism involves an inner compulsion to work, having persistent thoughts about work, experiencing negative feelings when not working, and working beyond what is reasonably expected.

Some personality types are especially likely to fall into the work trap. Perfectionists, extroverts, and people with type A (ambitious, aggressive, and impatient) personalities are prone to workaholism, Clark and coauthors found in a 2016 meta-analysis. They had expected people with low self-esteem to be at risk, but that link was nowhere to be found. Workaholics may put themselves through the wringer, but it’s not necessarily out of a sense of inadequacy or self-loathing.

Hang out with Ars in San Jose and DC this fall for two infrastructure events

Arsmeet! —

Join us as we talk about the next few years in AI & storage, and what to watch for.

Infrastructure!

Howdy, Arsians! Last year, we partnered with IBM to host an in-person event in the Houston area where we all gathered together, had some cocktails, and talked about resiliency and the future of IT. Location always matters for things like this, and so we hosted it at Space Center Houston and had our cocktails amidst cool space artifacts. In addition to learning a bunch of neat stuff, it was awesome to hang out with all the amazing folks who turned up at the event. Much fun was had!

This year, we’re back partnering with IBM again and we’re looking to repeat that success with not one, but two in-person gatherings—each featuring a series of panel discussions with experts and capping off with a happy hour for hanging out and mingling. Where last time we went central, this time we’re going to the coasts—both east and west. Read on for details!

September: San Jose, California

Our first event will be in San Jose on September 18, and it’s titled “Beyond the Buzz: An Infrastructure Future with GenAI and What Comes Next.” The idea will be to explore what generative AI means for the future of data management. The topics we’ll be discussing include:

  • Playing the infrastructure long game to address any kind of workload
  • Identifying infrastructure vulnerabilities with today’s AI tools
  • Infrastructure’s environmental footprint: Navigating impacts and responsibilities

We’re getting our panelists locked down right now, and while I don’t have any names to share, many will be familiar to Ars readers from past events—or from the front page.

As a neat added bonus, we’re going to host the event at the Computer History Museum, which any Bay Area Ars reader can attest is an incredibly cool venue. (Just nobody spill anything. I think they’ll kick us out if we break any exhibits!)

October: Washington, DC

Switching coasts, on October 29 we’ll set up shop in our nation’s capital for a similar show. This time, our event title will be “AI in DC: Privacy, Compliance, and Making Infrastructure Smarter.” Given that we’ll be in DC, the tone shifts a bit to some more policy-centric discussions, and the talk track looks like this:

  • The key to compliance with emerging technologies
  • Data security in the age of AI-assisted cyber-espionage
  • The best infrastructure solution for your AI/ML strategy

Same deal with the speakers as with the September event—I can’t name names yet, but the list will be familiar to Ars readers, and I’m excited. We’re still considering venues, but we’re hoping to find something that matches our previous events in terms of style and coolness.

Interested in attending?

While it’d be awesome if everyone could come, the old song and dance applies: space, as they say, will be limited at both venues. We’d like to make sure local folks in both locations get priority in being able to attend, so we’re asking anyone who wants a ticket to register for the events at the sign-up pages below. You should get an email immediately confirming we’ve received your info, and we’ll send another note in a couple of weeks with further details on timing and attendance.

On the Ars side, at minimum both our EIC Ken Fisher and I will be in attendance at both events, and we’ll likely have some other Ars staff showing up where we can—free drinks are a strong lure for the weary tech journalist, so there ought to be at least a few appearing at both. Hoping to see you all there!

AI and ML enter motorsports: How GM is using them to win more races

not LLM or generative AI —

From modeling tire wear and fuel use to predicting cautions based on radio traffic.

The Cadillac V-Series.R is one of General Motors’ factory-backed racing programs.

James Moy Photography/Getty Images

It is hard to escape the feeling that a few too many businesses are jumping on the AI hype train because it’s hype-y, rather than because AI offers an underlying benefit to their operation. So I will admit to a little inherent skepticism, and perhaps a touch of morbid curiosity, when General Motors got in touch wanting to show off some of the new AI/machine learning tools it has been using to win more races in NASCAR, sportscar racing, and IndyCar. As it turns out, that skepticism was misplaced.

GM has fingers in a lot of motorsport pies, but there are four top-level programs it really, really cares about. Number one for an American automaker is NASCAR—still the king of motorsport here—where Chevrolet supplies engines to six Cup teams. IndyCar, which could once boast of being America’s favorite racing, is home to another six Chevy-powered teams. And then there’s sportscar racing; right now, Cadillac is competing in IMSA’s GTP class and the World Endurance Championship’s Hypercar class, plus a factory Corvette Racing effort in IMSA.

“In all the series we race we either have key partners or specific teams that run our cars. And part of the technical support that they get from us are the capabilities of my team,” said Jonathan Bolenbaugh, motorsports analytics leader at GM, based at GM’s Charlotte Technical Center in North Carolina.

Unlike generative AI that’s being developed to displace humans from creative activities, GM sees the role of AI and ML as supporting human subject-matter experts so they can make the cars go faster. And it’s using these tools in a variety of applications.

One of GM’s command centers at its Charlotte Technical Center in North Carolina.

General Motors

Each team in each of those various series (obviously) has people on the ground at each race, and invariably more engineers and strategists helping them from Indianapolis, Charlotte, or wherever it is that the particular race team has its home base. But they’ll also be tied in with a team from GM Motorsport, working from one of a number of command centers at its Charlotte Technical Center.

What did they say?

Connecting all three are streams and streams of data from the cars themselves (in series that allow car-to-pit telemetry) but also voice comms, text-based messaging, timing and scoring data from officials, trackside photographs, and more. And one thing Bolenbaugh’s team and their suite of tools can do is help make sense of that data quickly enough for it to be actionable.

“In a series like F1, a lot of teams will have students who are potentially newer members of the team literally listening to the radio and typing out what is happening, then saying, ‘hey, this is about pitting. This is about track conditions,'” Bolenbaugh said.

Instead of giving that to the internship kids, GM built a real-time audio transcription tool to do that job. After trying out a commercial off-the-shelf solution, it decided to build its own, “a combination of open source and some of our proprietary code,” Bolenbaugh said. As anyone who has ever been to a race track can attest, it’s a loud environment, so GM had to train models with all the background noise present.

“We’ve been able to really improve our accuracy and usability of the tool to the point where some of the manual support for that capability is now dwindling,” he said, with the benefit that it frees up the humans, who would otherwise be transcribing, to apply their brains in more useful ways.
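
GM hasn’t detailed the transcription stack beyond “a combination of open source and some of our proprietary code,” but the open-source end of such a tool can be sketched with an off-the-shelf speech model. The example below is an illustration, not GM’s tool; the audio file name and keyword list are placeholders, and a production system would also need the noise-robust training Bolenbaugh describes. It uses OpenAI’s open-source Whisper model to turn a radio clip into timestamped text and flag strategy-relevant words:

```python
# Illustrative sketch, not GM's actual tool: transcribe a clip of pit-to-car
# radio with an open-source speech model, then flag strategy-relevant words.
# Requires: pip install openai-whisper
import whisper

model = whisper.load_model("base")           # small general-purpose model
result = model.transcribe("radio_clip.wav")  # placeholder file name

KEYWORDS = {"pit", "tires", "fuel", "caution", "debris"}  # placeholder list

for segment in result["segments"]:
    text = segment["text"].lower()
    hits = sorted(w for w in KEYWORDS if w in text)
    if hits:
        print(f"[{segment['start']:6.1f}s] {segment['text'].strip()}  <- {hits}")
```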

Take a look at this

Another tool developed by Bolenbaugh and his team was built to quickly analyze images taken by trackside photographers working for the teams and OEMs. While some of the footage they shoot might be for marketing or PR, a lot of it is for the engineers.

Two years ago, getting those photos from the photographer’s camera to the team was the work of two to three minutes. Now, “from shutter click at the racetrack in a NASCAR event to AI-tagged into an application for us to get information out of those photos is seven seconds,” Bolenbaugh said.

Sometimes you don’t need a ML tool to analyze a photo to tell you the car is damaged.

Jeffrey Vest/Icon Sportswire via Getty Images

“Time is everything, and the shortest lap time that we run—the Coliseum would be an outlier, but maybe like 18 seconds is probably a short lap time. So we need to be faster than from when they pass that pit lane entry to when they come back again,” he said.

At the rollout of this particular tool at a NASCAR race last year, one of GM’s partner teams was able to avoid a cautionary pitstop after its driver scraped the wall, when the young engineer who developed the tool was able to show them a seconds-old photo of the right side of the car that showed it had escaped any damage.

“They didn’t have to wait for a spotter to look, they didn’t have to wait for the driver’s opinion. They knew that didn’t have damage. That team made the playoffs in that series by four points, so in the event that they would have pitted, there’s a likelihood where they didn’t make it,” he said. In cases where a car is damaged, the image analysis tool can automatically flag that and make that known quickly through an alert.

Not all of the images are used for snap decisions like that—engineers can glean a lot about their rivals from photos, too.

“We would be very interested in things related to the geometry of the car for the setup settings—wicker settings, wing angles… ride heights of the car, how close the car is to the ground—those are all things that would be great to know from an engineering standpoint, and those would be objectives that we would have in doing image analysis,” said Patrick Canupp, director of motorsports competition engineering at GM.

Many of the photographers you see working trackside will be shooting on behalf of teams or manufacturers.

Steve Russell/Toronto Star via Getty Images

“It’s not straightforward to take a set of still images and determine a lot of engineering information from those. And so we’re working on that actively to help with all the photos that come in to us on a race weekend—there’s thousands of them. And so it’s a lot of information that we have at our access, that we want to try to maximize the engineering information that we glean from all of that data. It’s kind of a big data problem that AI is really geared for,” Canupp said.

The computer says we should pit now

Remember that transcribed audio feed from earlier? “If a bunch of drivers are starting to talk about something similar in the race like the track condition, we can start inferring, based on… the occurrence of certain words, that the track is changing,” said Bolenbaugh. “It might not just be your car… if drivers are talking about something on track, the likelihood of a caution, which is a part of our strategy model, might be going up.”
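
GM hasn’t said how that inference actually works, but the word-occurrence idea can be illustrated with a toy example: count how many distinct drivers mention track-condition words within a recent window and turn that into a caution-likelihood feature a strategy model could consume. Everything below (the word list, the scaling, the sample radio traffic) is invented for illustration only:

```python
# Toy illustration of the idea, not GM's model: if several drivers start
# using track-condition words within a short window, raise a "caution
# likelihood" feature that a strategy model could consume.
from collections import defaultdict

CONDITION_WORDS = {"slick", "debris", "oil", "rain", "dusty"}  # invented list

def caution_signal(messages, window_s=120.0, now_s=0.0):
    """messages: list of (timestamp_s, driver_id, transcribed_text)."""
    drivers_flagging = defaultdict(set)
    for ts, driver, text in messages:
        if now_s - ts > window_s:
            continue  # ignore radio traffic outside the recent window
        words = set(text.lower().split())
        for w in CONDITION_WORDS & words:
            drivers_flagging[w].add(driver)
    # More distinct drivers mentioning the same condition -> stronger signal.
    score = max((len(d) for d in drivers_flagging.values()), default=0)
    return min(1.0, score / 4.0)  # arbitrary scaling to [0, 1]

radio = [(10.0, "car5", "Track is really slick in turn two"),
         (35.0, "car24", "Getting slick out here"),
         (60.0, "car9", "Lots of debris on the backstretch")]
print(caution_signal(radio, now_s=90.0))  # -> 0.5 with these made-up messages
```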

That caution signal feeds into a strategy tool that also takes in lap times from timing and scoring; fuel efficiency data in racing series that provide it for all cars (or a predictive model that estimates it in series like NASCAR and IndyCar, where teams don’t get to see that kind of data from their competitors); and models of tire wear.

“One of the biggest things that we need to manage is tires, fuel, and lap time. Everything is a trade-off between trying to execute the race the fastest,” Bolenbaugh said.

Obviously races are dynamic situations, and so “multiple times a lap as the scenario changes, we’re updating our recommendation. So, with tire fall off [as the tire wears and loses grip], you’re following up in real time, predicting where it’s going to be. We are constantly evolving during the race and doing transfer learning so we go into the weekend, as the race unfolds, continuing to train models in real time,” Bolenbaugh said.
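
Bolenbaugh’s “constantly evolving during the race” workflow is, stripped down, a model that gets refreshed every time new laps of data arrive. The toy example below is a drastically simplified stand-in, re-fitting a quadratic tire fall-off curve after each completed lap with invented numbers, rather than the transfer-learning setup GM describes, but it shows the basic update-and-re-project loop:

```python
# Generic illustration of refreshing a tire fall-off model as laps complete,
# not GM's implementation. All numbers are invented.
import numpy as np

# (laps on current tires, lap-time loss vs. fresh tires in seconds)
observed = [(1, 0.05), (2, 0.12), (3, 0.22)]

def predict_falloff(history, future_lap):
    """Fit a simple quadratic degradation curve to the laps seen so far
    and extrapolate the expected time loss at a future tire age."""
    laps = np.array([lap for lap, _ in history], dtype=float)
    loss = np.array([fall for _, fall in history], dtype=float)
    coeffs = np.polyfit(laps, loss, deg=2)
    return float(np.polyval(coeffs, future_lap))

# As each new lap comes in, refresh the fit and re-project lap 15 on tires.
for new_point in [(4, 0.35), (5, 0.50)]:
    observed.append(new_point)
    print(f"after lap {new_point[0]}: projected loss at lap 15 = "
          f"{predict_falloff(observed, 15):.2f} s")
```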

Lego’s newest retro art piece is a 1,215-piece Super Mario World homage

let’s-a-go —

$130 set is available for preorder now, ships on October 1.

  • The Lego Mario & Yoshi set is an homage to 1990’s Super Mario World.

    The Lego Group

  • From the front, it looks like a fairly straightforward re-creation of the game’s 16-bit sprites.

    The Lego Group

  • Behind the facade are complex mechanics that move Yoshi’s feet and arms and bob his body up and down, to make him look like he’s walking. A separate dial opens his mouth and extends his tongue.

    The Lego Group

Nintendo and Lego are at it again—they’ve announced another collaboration today as a follow-up to the interactive Mario sets, the replica Nintendo Entertainment System, the unfolding question mark block with the Mario 64 worlds inside, and other sets besides.

The latest addition is an homage to 1990’s Super Mario World, Mario’s debut outing on the then-new 16-bit Super Nintendo Entertainment System. At first, the 1,215-piece set just looks like a caped Mario sitting on top of Yoshi. But a look at the back reveals more complex mechanics, including a hand crank that makes Yoshi’s feet and arms move and a dial that opens his mouth and extends his tongue.

Most of the Mario sets have included some kind of interactive moving part, even if it’s as simple as the movable mouth on the Lego Piranha Plant. Yoshi’s mechanical crank most strongly resembles the NES set, though, which included a CRT-style TV set with a crank that made the contents of the screen scroll so that Mario could “walk.”

The Mario & Yoshi set is available to preorder from Lego’s online store for $129.99. It begins shipping on October 1.

Lego has also branched out into other video game-themed sets. In 2022, the company began selling a replica Atari 2600, complete with faux-wood paneling. More recently, Lego has collaborated with Epic Games on several Fortnite-themed sets, including the Battle Bus.

Listing image by The Lego Group
