
The urban-rural death divide is getting alarmingly wider for working-age Americans

Growing divide —

The cause is unclear, but poverty and worsening health care access are likely factors.

Dental students, working as volunteers, attend to patients at a Remote Area Medical (RAM) mobile dental and medical clinic on October 7, 2023 in Grundy, Virginia. More than a thousand people were expected to seek free dental, medical and vision care at the two-day event in the rural and financially struggling area of western Virginia.


In the 1960s and 1970s, people who lived in rural America fared a little better than their urban counterparts. The rate of deaths from all causes was a tad lower outside of metropolitan areas. In the 1980s, though, things evened out, and in the early 1990s, a gap emerged, with rural areas seeing higher death rates—and the gap has been growing ever since. By 1999, the gap was 6 percent. In 2019, just before the pandemic struck, the gap was over 20 percent.

While this news might not be surprising to anyone following mortality trends, a recent analysis by the Department of Agriculture’s Economic Research Service drilled down further, finding a yet more alarming chasm in the urban-rural divide. The report focused on a key indicator of population health: mortality among prime working-age adults (people ages 25 to 54), and specifically their natural-cause mortality (NCM) rates—deaths per 100,000 residents from chronic and acute diseases—clearing away external causes of death, including suicides, drug overdoses, violence, and accidents. On this metric, rural areas saw dramatically worsening trends compared with urban populations.

Change in age-adjusted, prime working-age, external- and natural-cause mortality rates for metro and nonmetro areas, 1999–2001 to 2017–2019.


The federal researchers compared NCM rates of prime working-age adults in two three-year periods: 1999 to 2001 and 2017 to 2019. In the earlier period, the NCM rate of 25- to 54-year-olds in rural areas was 6 percent higher than that of the same age group in urban areas. By the later period, the gap had grown to a whopping 43 percent. In fact, prime working-age adults in rural areas were the only age group in the US that saw an increased NCM rate over this span. In urban areas, working-age adults’ NCM rate declined.
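The arithmetic behind those gap figures is simple to sketch. A minimal example, with made-up death and population counts (only the rate and gap formulas follow the report's definitions):

```python
def ncm_rate(natural_cause_deaths: int, population: int) -> float:
    """Natural-cause mortality rate, expressed per 100,000 residents."""
    return natural_cause_deaths / population * 100_000

def rate_gap_percent(rural_rate: float, urban_rate: float) -> float:
    """How far the rural rate sits above the urban rate, in percent."""
    return (rural_rate / urban_rate - 1) * 100

# Hypothetical numbers: a rural rate of 286 vs. an urban rate of 200
# per 100,000 would reproduce the report's 43 percent gap.
gap = rate_gap_percent(286.0, 200.0)  # about 43
```

The same formulas applied to the earlier period's rates would yield the 6 percent starting gap.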

Broken down further, the researchers found that non-Hispanic White people in rural areas had the largest NCM rate increases when compared to their urban counterparts. Among rural residents alone, American Indian and Alaska Native (AIAN) and non-Hispanic White people registered the largest increases between the two time periods. In both groups, women had the largest increases. Regionally, rural residents in the South had the highest NCM rate, with rural residents in the Northeast maintaining the lowest. But again, across all regions, women saw larger increases than men.

  • Age-adjusted prime working-age natural-cause mortality rates, metro and nonmetro areas, 1999–2019.

  • Change in natural-cause, crude mortality rates by 5-year age cohorts for metro and nonmetro areas, 1999–2001 to 2017–2019.

Among all rural working-age residents, the leading natural causes of death were cancer and heart disease—as was true among urban residents. But in rural residents, these conditions had significantly higher mortality rates than those seen in urban residents. In 2019, for example, women in rural areas had a mortality rate from heart disease that was 69 percent higher than that of their urban counterparts. Beyond the leading causes, lung disease- and hepatitis-related mortality saw the largest increases in rural residents compared with urban peers. Breaking causes down by gender, rural working-age women saw a 313 percent increase in mortality from pregnancy-related conditions between the study’s two time periods, the largest increase among the mortality causes. For rural working-age men, the largest increase was in hypertension-related deaths, up 132 percent between the two time periods.

Nonmetro age-adjusted, prime working-age mortality rates by sex for 15 leading natural causes of death, 1999–2001 and 2017–2019, as percent above or below corresponding metro rates.


The study, which drew from CDC death certificate and epidemiological data, did not explore the reasons for the increases. But there are a number of plausible factors, the authors note. Rural areas have higher rates of poverty, which contributes to poor health outcomes and higher probabilities of death from chronic diseases. Rural areas also differ from urban areas in health behaviors, including higher incidences of smoking and obesity. Further, rural areas have less access to health care and fewer health care resources; both rural hospital closures and physician shortages have been of growing concern among health experts, the researchers note. Finally, some of the states with higher rural mortality rates, particularly in the South, have failed to implement Medicaid expansions under the 2010 Affordable Care Act, which could help improve health care access and, thus, mortality rates among rural residents.


EPA’s PFAS rules: We’d prefer zero, but we’ll accept 4 parts per trillion

Approaching zero —

For two chemicals, any presence in water supplies is too much.

A young person drinks from a public water fountain.

Today, the Environmental Protection Agency announced that it has finalized rules for handling water supplies that are contaminated by a large family of chemicals collectively termed PFAS (perfluoroalkyl and polyfluoroalkyl substances). Commonly called “forever chemicals,” these contaminants have been linked to a huge range of health issues, including cancers, heart disease, immune dysfunction, and developmental disorders.

The final rules keep one striking aspect of the initial proposal intact: a goal of completely eliminating exposure to two members of the PFAS family. The new rules require all drinking water suppliers to monitor for the chemicals’ presence, and the EPA estimates that as many as 10 percent of them may need to take action to remove them. While that will be costly, the health benefits are expected to exceed those costs.

Going low

PFAS are a collection of hydrocarbons in which some of the hydrogen atoms have been swapped out for fluorine. This swap retains the water-repellent behavior of hydrocarbons while making the molecules highly resistant to breaking down through natural processes—hence the forever chemicals moniker. They’re widely used in water-resistant clothing and non-stick cooking equipment and have found uses in firefighting foam. Their widespread use and disposal have allowed them to get into water supplies in many locations.

They’ve also been linked to an enormous range of health issues. The EPA expects that its new rules will have the following effects: fewer cancers, lower incidence of heart attacks and strokes, reduced birth complications, and a drop in other developmental, cardiovascular, liver, immune, endocrine, metabolic, reproductive, musculoskeletal, and carcinogenic effects. These are not chemicals you want to be drinking.

The striking thing was how far the EPA was willing to go to get them out of drinking water. For two chemicals, perfluorooctanoic acid (PFOA) and perfluorooctanesulfonic acid (PFOS), the Agency’s ideal contamination level is zero, meaning no exposure to these chemicals whatsoever. Since current testing equipment is limited to a sensitivity of four parts per trillion, the new rules settle for that as the standard. Other family members get limits of 10 parts per trillion, and an additional limit caps total exposure when a mixture of PFAS is present.
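As a rough sketch of how a supplier might apply that two-part standard, per-chemical limits plus a combined cap for mixtures: the 4 and 10 parts-per-trillion values follow the article, but the specific chemical names in the mixture check and the simplified hazard-index form below are illustrative assumptions, not the rule's exact text.

```python
# Individual limits in parts per trillion (ppt), per the article's figures.
# The named chemicals besides PFOA/PFOS are illustrative assumptions.
LIMITS_PPT = {"PFOA": 4.0, "PFOS": 4.0, "PFHxS": 10.0, "PFNA": 10.0, "HFPO-DA": 10.0}

def exceeds_standard(sample_ppt: dict) -> bool:
    """True if any single PFAS is over its limit, or if the mixture as a
    whole is over a simplified hazard-index cap of 1."""
    if any(sample_ppt.get(chem, 0.0) > limit for chem, limit in LIMITS_PPT.items()):
        return True
    # Mixture check: sum of (measured level / limit) across the 10-ppt group.
    hazard_index = sum(sample_ppt.get(chem, 0.0) / LIMITS_PPT[chem]
                       for chem in ("PFHxS", "PFNA", "HFPO-DA"))
    return hazard_index > 1.0
```

A sample at 5 ppt of PFOA alone would fail the per-chemical check, while several chemicals each below their individual limits could still fail the mixture check together.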

Overall, the EPA estimates that there are roughly 66,000 drinking water suppliers that will be subject to these new rules. They’ll be given three years to get monitoring and testing programs set up and provided access to funds from the Bipartisan Infrastructure Law to help offset the costs. All told, over $20 billion will be made available for the testing and improvements to equipment needed for compliance.

The Agency expects that somewhere between 4,000 and 6,500 of those systems will require some form of decontamination. While those represent a relatively small fraction of the total drinking water suppliers, it’s estimated that nearly a third of the US’ population will see its exposure to PFAS drop. Several technologies, including reverse osmosis and exposure to activated carbon, are capable of pulling PFAS from water, and the EPA is leaving it up to each supplier to choose a preferred method.

Cost/benefit

All of that monitoring and decontamination will not come cheap. The EPA estimates that the annual costs will be in the neighborhood of $1.5 billion, which will likely be passed on to consumers via their water suppliers. Those same consumers, however, are expected to see health benefits that outweigh these costs. EPA estimates place the impact of just three of the health improvements (cancer, cardiovascular, and birth complications) at $1.5 billion annually; adding in the rest of the health improvements should push the benefits well past the costs.

The problem, of course, is that people will immediately notice the increased cost on their water bills, while the savings from medical problems that never happen are much more abstract.

Overall, the final plan is largely unchanged from the EPA’s original proposal. The biggest differences are that the Agency is giving water suppliers more time to comply, setting somewhat more specific exposure allowances, and allowing suppliers with minimal contamination to go longer between submitting test results.

“People will live longer, healthier lives because of this action, and the benefits justify the costs,” the agency concluded in announcing the new rules.


After a fiery finale, the Delta rocket family now belongs to history

Delta 389 —

“It is bittersweet to see the last one, but there are great things ahead.”

In this video frame from ULA's live broadcast, three RS-68A engines power the Delta IV Heavy rocket into the sky over Cape Canaveral, Florida.


United Launch Alliance

The final flight of United Launch Alliance’s Delta IV Heavy rocket took off Tuesday from Cape Canaveral, Florida, with a classified spy satellite for the National Reconnaissance Office.

The Delta IV Heavy, one of the world’s most powerful rockets, launched for the 16th and final time Tuesday. It was the 45th and last flight of a Delta IV launcher and the final rocket named Delta to ever launch, ending a string of 389 missions dating back to 1960.

United Launch Alliance (ULA) tried to launch this rocket on March 28 but aborted the countdown about four minutes prior to liftoff due to trouble with nitrogen pumps at an off-site facility at Cape Canaveral. The nitrogen is necessary for purging parts inside the Delta IV rocket before launch, reducing the risk of a fire or explosion during the countdown.

The pumps, operated by Air Liquide, are part of a network that distributes nitrogen to different launch pads at the Florida spaceport. The nitrogen network has caused problems before, most notably during the first launch campaign for NASA’s Space Launch System rocket in 2022. Air Liquide did not respond to questions from Ars.

A flawless liftoff

With a solution in place, ULA gave the go-ahead for another launch attempt Tuesday. After a smooth countdown, the final Delta IV Heavy lifted off from Cape Canaveral Space Force Station at 12:53 pm EDT (16:53 UTC).

Three hydrogen-fueled RS-68A engines made by Aerojet Rocketdyne flashed to life in the final seconds before launch and throttled up to produce more than 2 million pounds of thrust. The ignition sequence was accompanied by a dramatic hydrogen fireball, a hallmark of Delta IV Heavy launches, that singed the bottom of the 235-foot-tall (71.6-meter) rocket, turning a patch of its orange insulation black. Then, 12 hold-down bolts fired and freed the Delta IV Heavy for its climb into space with a top-secret payload for the US government’s spy satellite agency.

Heading east from Florida’s Space Coast, the Delta IV Heavy appeared to perform well in the early phases of its mission. After the rocket faded from the view of ground-based cameras, its two liquid-fueled side boosters jettisoned around four minutes into the flight, a moment captured by onboard video cameras. The core stage engine then throttled up and fired for a couple more minutes. Nearly six minutes after liftoff, the core stage was released, and the Delta IV upper stage took over for a series of burns with its RL10 engine.

At that point, ULA cut the public video and audio feeds from the launch control center, and the mission flew into a news blackout. The final portions of rocket launches carrying National Reconnaissance Office (NRO) satellites are usually performed in secret.

In all likelihood, the Delta IV Heavy’s upper stage was expected to fire its engine at least three times to place the classified NRO satellite into a circular geostationary orbit more than 22,000 miles (nearly 36,000 kilometers) over the equator. In this orbit, the spacecraft will move in lock-step with the planet’s rotation, giving the NRO’s newest spy satellite constant coverage over a portion of the Earth.
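That altitude is not arbitrary: it falls out of Kepler's third law when you ask for an orbital period equal to one rotation of the Earth. A back-of-the-envelope check (the physical constants below are standard published values, not from this article):

```python
import math

GM_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86_164.1     # one Earth rotation, seconds
EARTH_RADIUS = 6_378_137.0  # equatorial radius, meters

# Kepler's third law solved for the semi-major axis: a = (GM * T^2 / 4pi^2)^(1/3)
semi_major_axis = (GM_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (semi_major_axis - EARTH_RADIUS) / 1000
# altitude_km comes out near 35,786 km, the "nearly 36,000 kilometers" in the text
```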

It will take about six hours for the rocket’s upper stage to deploy its payload into this high-altitude orbit, and only then will ULA and the NRO declare the launch a success.

Eavesdropping from space

While the payload is classified, experts can glean a few insights from the circumstances of its launch. Only the largest NRO spy satellites require a launch on a Delta IV Heavy, and the payload on this mission is “almost certainly” a type of satellite known publicly as an “Advanced Orion” or “Mentor” spacecraft, according to Marco Langbroek, an expert Dutch satellite tracker.

The Advanced Orion satellites require the combination of the Delta IV Heavy rocket’s lift capability, long-duration upper stage, and huge, 65-foot-long (19.8-meter) trisector payload fairing, the largest payload enclosure of any operational rocket. In 2010, Bruce Carlson, then-director of the NRO, referred to the Advanced Orion platform as the “largest satellite in the world.”

When viewed from Earth, these satellites shine with the brightness of an eighth-magnitude star, making them easily visible with small binoculars despite their distant orbits, according to Ted Molczan, a skywatcher who tracks satellite activity.
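The astronomical magnitude scale makes that claim easy to put in numbers: five magnitudes correspond to a factor of 100 in brightness. Assuming the usual naked-eye limit of roughly magnitude 6 (a standard figure, not stated in the article):

```python
# Flux ratio between two magnitudes: 100^((m2 - m1) / 5) = 10^((m2 - m1) / 2.5)
naked_eye_limit = 6.0
satellite_mag = 8.0
times_fainter = 10 ** ((satellite_mag - naked_eye_limit) / 2.5)
# times_fainter is about 6.3: below naked-eye visibility, but an easy
# target for small binoculars, which gather far more light than the eye
```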

“The satellites feature a very large parabolic unfoldable mesh antenna, with estimates of the size of this antenna ranging from 20 to 100 (!) meters,” Langbroek writes on his website, citing information leaked by Edward Snowden.

The purpose of these Advanced Orion satellites, each with mesh antennas that unfurl to a diameter of up to 330 feet (100 meters), is to listen in on communications and radio transmissions from US adversaries, and perhaps allies. Six previous Delta IV Heavy missions also likely launched Advanced Orion or Mentor satellites, giving the NRO a global web of listening posts parked high above the planet.

With the last Delta IV Heavy off the launch pad, ULA has achieved a goal of the corporate strategy it set into motion a decade ago, when the company decided to retire the Delta IV and Atlas V rockets in favor of a new-generation rocket named Vulcan. The first Vulcan rocket successfully launched in January, so the last few months have been a time of transition for ULA, a 50-50 joint venture owned by Boeing and Lockheed Martin.

“This is such an amazing piece of technology: 23 stories tall, half a million gallons of propellant, two and a quarter million pounds of thrust, and the most metal of all rockets, setting itself on fire before it goes to space,” ULA CEO Tory Bruno said of the Delta IV Heavy before its final launch. “Retiring it is (key to) the future, moving to Vulcan, a less expensive, higher-performance rocket. But it’s still sad.”

“Everything that Delta has done … is being done better on Vulcan, so this is a great evolutionary step,” said Bill Cullen, ULA’s launch systems director. “It is bittersweet to see the last one, but there are great things ahead.”


RIP Peter Higgs, who laid foundation for the Higgs boson in the 1960s

A particle physics hero —

Higgs shared the 2013 Nobel Prize in Physics with François Englert.


A visibly emotional Peter Higgs was present when CERN announced the Higgs boson discovery in July 2012.

University of Edinburgh

Peter Higgs, the shy, somewhat reclusive physicist who won a Nobel Prize for his theoretical work on how the Higgs boson gives elementary particles their mass, has died at the age of 94. According to a statement from the University of Edinburgh, the physicist passed “peacefully at home on Monday 8 April following a short illness.”

“Besides his outstanding contributions to particle physics, Peter was a very special person, a man of rare modesty, a great teacher and someone who explained physics in a very simple and profound way,” Fabiola Gianotti, director general at CERN and former leader of one of the experiments that helped discover the Higgs particle in 2012, told The Guardian. “An important piece of CERN’s history and accomplishments is linked to him. I am very saddened, and I will miss him sorely.”

The Higgs boson is a manifestation of the Higgs field, an invisible entity that pervades the Universe. Interactions between the Higgs field and particles help provide particles with mass, with particles that interact more strongly having larger masses. The Standard Model of Particle Physics describes the fundamental particles that make up all matter, like quarks and electrons, as well as the particles that mediate their interactions through forces like electromagnetism and the weak force. Back in the 1960s, theorists extended the model to incorporate what has become known as the Higgs mechanism, which provides many of the particles with mass. One consequence of the Standard Model’s version of the Higgs mechanism is that there should be a particle, called a boson, associated with the Higgs field.

Despite its central role in the function of the Universe, the road to predicting the existence of the Higgs boson was bumpy, as was the process of discovering it. As previously reported, the idea of the Higgs boson was a consequence of studies on the weak force, which controls the decay of radioactive elements. The weak force only operates at very short distances, which suggests that the particles that mediate it (the W and Z bosons) are likely to be massive. While it was possible to use existing models of physics to explain some of their properties, these predictions had an awkward feature: just like another force-carrying particle, the photon, the resulting W and Z bosons were massless.

Schematic of the Standard Model of particle physics.


Over time, theoreticians managed to craft models that included massive W and Z bosons, but they invariably came with a hitch: a massless partner, which would imply a longer-range force. In 1964, however, a series of papers was published in rapid succession that described a way to get rid of this problematic particle. If a certain symmetry in the models was broken, the massless partner would go away, leaving only a massive one.
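The symmetry breaking those papers relied on is usually illustrated with the textbook scalar potential (a standard form, not taken from this article):

```latex
V(\phi) = \mu^2\, \phi^\dagger \phi + \lambda\, (\phi^\dagger \phi)^2,
\qquad \mu^2 < 0,\ \lambda > 0
```

With \(\mu^2\) negative, the minimum of \(V\) sits not at \(\phi = 0\) but on a circle of nonzero field values, and the vacuum must pick one point on that circle, breaking the symmetry. The problematic massless partner is absorbed by the massive force carriers, and the leftover excitation of the field is what we now call the Higgs boson.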

The first of these papers, by François Englert and Robert Brout, proposed the new model in terms of quantum field theory; the second, by Higgs (then 35), noted that a single quantum of the field would be detectable as a particle. A third paper, by Gerald Guralnik, Carl Richard Hagen, and Tom Kibble, provided an independent validation of the general approach, as did a completely independent derivation by students in the Soviet Union.

At that time, “There seemed to be excitement and concern about quantum field theory (the underlying structure of particle physics) back then, with some people beginning to abandon it,” David Kaplan, a physicist at Johns Hopkins University, told Ars. “There were new particles being regularly produced at accelerator experiments without any real theoretical structure to explain them. Spin-1 particles could be written down comfortably (the photon is spin-1) as long as they didn’t have a mass, but the massive versions were confusing to people at the time. A bunch of people, including Higgs, found this quantum field theory trick to give spin-1 particles a mass in a consistent way. These little tricks can turn out to be very useful, but also give the landscape of what is possible.”

“It wasn’t clear at the time how it would be applied in particle physics.”

Ironically, Higgs’ seminal paper was rejected by the European journal Physics Letters. He then added a crucial couple of paragraphs noting that his model also predicted the existence of what we now know as the Higgs boson. He submitted the revised paper to Physical Review Letters in the US, where it was accepted. He examined the properties of the boson in more detail in a 1966 follow-up paper.


EPA seeks to cut “Cancer Alley” pollutants

Out of the air —

Chemical plants will have to monitor how much is escaping and stop leaks.


An oil refinery in Louisiana. Facilities such as this have led to a proliferation of petrochemical plants in the area.

On Tuesday, the US Environmental Protection Agency announced new rules intended to cut emissions of two chemicals that have been linked to elevated incidence of cancer: ethylene oxide and chloroprene. While production and use of these chemicals take place in a variety of locations, they’re particularly associated with an area of petrochemical production in Louisiana that has become known as “Cancer Alley.”

The new regulations would require chemical manufacturers to monitor the emissions at their facilities and take steps to repair any problems that result in elevated emissions. Despite extensive evidence linking these chemicals to elevated risk of cancer, industry groups are signaling their opposition to these regulations, and the EPA has seen two previous attempts at regulation set aside by courts.

Dangerous stuff

The two chemicals at issue are primarily used as intermediates in the manufacture of common products. Chloroprene, for example, is used for the production of neoprene, a synthetic rubber-like substance that’s probably familiar from products like insulated sleeves and wetsuits. It’s a four-carbon chain with two double bonds that allow for polymerization and an attached chlorine that alters its chemical properties.

According to the National Cancer Institute (NCI), chloroprene “is a mutagen and carcinogen in animals and is reasonably anticipated to be a human carcinogen.” Given that cancers are driven by DNA damage, any mutagen would be “reasonably anticipated” to drive the development of cancer. Beyond that, it appears to be pretty nasty stuff, with the NCI noting that “exposure to this substance causes damage to the skin, lungs, CNS, kidneys, liver and depression of the immune system.”

The NCI’s take on ethylene oxide is even more definitive, with the Institute placing it on its list of cancer-causing substances. The chemical is very simple: two carbons that are linked to each other directly and also linked via an oxygen atom, which makes the molecule look a bit like a triangle. This configuration allows the molecule to participate in a broad range of reactions that break one of the oxygen bonds, making it useful in the production of a huge range of chemicals. Its reactivity also makes it useful for sterilizing items such as medical equipment.

Its sterilizing action works by damaging DNA, which again makes it prone to causing cancers.

In addition to these two chemicals, the EPA’s new regulations will target a number of additional airborne pollutants, including benzene, 1,3-butadiene, ethylene dichloride, and vinyl chloride, all of which have similar entries at the NCI.

Despite the extensive record linking these chemicals to cancer, The New York Times quotes the US Chamber of Commerce, a pro-industry group, as saying that “EPA should not move forward with this rule-making based on the current record because there remains significant scientific uncertainty.”

A history of exposure

The petrochemical industry is the main source of these chemicals, so their release is associated with areas where the oil and gas industry has a major presence; the EPA notes that the regulations will target sources in Delaware, New Jersey, and the Ohio River Valley. But the primary focus will be on chemical plants in Texas and Louisiana. These include the area that has picked up the moniker Cancer Alley due to a high incidence of the disease in a stretch along the Mississippi River with a large concentration of chemical plants.

As is the case with many examples of chemical pollution, the residents of Cancer Alley are largely poor and belong to minority groups. As a result, the EPA had initially attempted to regulate the emissions under a civil rights provision of the Clean Air Act, but that has been bogged down due to lawsuits.

The new regulations simply set limits on permissible levels of release at what’s termed the “fencelines” of the facilities where these chemicals are made, used, or handled. If levels exceed an annual limit, the owners and operators “must find the source of the pollution and make repairs.” This gets rid of previous exemptions for equipment startup, shutdown, and malfunctions; those exemptions had been held to violate the Clean Air Act in a separate lawsuit.

The EPA estimates that the sites subject to regulation will see their collective emissions of these chemicals drop by nearly 80 percent, which works out to be 54 tons of ethylene oxide, 14 tons of chloroprene, and over 6,000 tons of the other pollutants. That in turn will reduce the cancer risk from these toxins by 96 percent among those subjected to elevated exposures. Collectively, the chemicals subject to these regulations also contribute to smog, so these reductions will have an additional health impact by reducing its levels as well.

While the EPA says that “these emission reductions will yield significant reductions in lifetime cancer risk attributable to these air pollutants,” it was unable to come up with an estimate of the financial benefits that will result from that reduction. By contrast, it estimates that the cost of compliance will end up being approximately $150 million annually. “Most of the facilities covered by the final rule are owned by large corporations,” the EPA notes. “The cost of implementing the final rule is less than one percent of their annual national sales.”

This sort of cost-benefit analysis is a required step during the formulation of Clean Air Act regulations, so it’s worth taking a step back and considering what’s at stake here: the EPA is basically saying that companies that work with significant amounts of carcinogens need to take stronger steps to make sure that they don’t use the air people breathe as a dumping ground for them.

Unsurprisingly, The New York Times quotes a neoprene manufacturer that the EPA is currently suing over its chloroprene emissions as claiming the new regulations are “draconian.”


Moments of totality: How Ars experienced the eclipse

Total eclipse of the Ars —

The 2024 total eclipse is in the books. Here’s how it looked across the US.

Baily's Beads are visible in this shot taken by Stephen Clark in Athens, Texas.


Stephen Clark

“And God said, Let there be light: and there was light. And God saw the light, that it was good: and God divided the light from the darkness. And God called the light Day, and the darkness he called Night. And the evening and the morning were the first day.”

The steady rhythm of the night-day, dark-light progression is a phenomenon acknowledged in ancient sacred texts as a given. When it’s interrupted, people take notice. In the days leading up to the eclipse, excitement within the Ars Orbiting HQ grew, and plans to experience the last total eclipse in the continental United States until 2045 were made. Here’s what we saw across the country.

Kevin Purdy (watched from Buffalo, New York)

  • 3:19 pm on April 8 in Buffalo overlooking Richmond Ave. near Symphony Circle.

    Kevin Purdy

  • A view of First Presbyterian Church from Richmond Avenue in Buffalo, NY.

    Kevin Purdy

  • The cloudy, strange skies at 3:12 pm Eastern time in Buffalo on April 8.

    Kevin Purdy

  • A kind of second sunrise at 3:21 pm on April 8 in Buffalo.

    Kevin Purdy

  • A clearer view of the total eclipse from Colden, New York, 30 minutes south of Buffalo on April 8, 2024.

    Sabrina May

Buffalo, New York, is a frequently passed-over city. Super Bowl victories, the shift away from Great Lakes shipping and American-made steel, being the second-largest city in a state that contains New York City: This city doesn’t get many breaks.

So, with Buffalo in the eclipse’s prime path, I, a former resident and booster, wanted to be there. So did maybe a million people, doubling the wider area’s population. With zero hotels, negative Airbnbs, and no flights below trust-fund prices, I arrived early, stayed late, and slept on sofas and air mattresses. I wanted to see if Buffalo’s moment of global attention would go better than last time.

The day started cloudy, as is typical in early April here. With one hour to go, I chatted with Donald Blank. He was filming an eclipse time-lapse as part of a larger documentary on Buffalo: its incredible history, dire poverty, heroes, mistakes, everything. The shot he wanted had the First Presbyterian Church, with its grand spire and Tiffany windows, in the frame. A 200-year-old stone church adds a certain context to a solar event many of us humans will never see again.

The sky darkened. Automatic porch lights flicked on at 3:15 pm, then street lights, then car lights, for those driving to somehow more important things. People on front lawns cheered, clapped, and quietly couldn’t believe it. When it was over, I heard a neighbor say they forgot their phone inside. Blank walked over and offered to email her some shots he took. It was very normal in Buffalo, even when it was strange.

Benj Edwards (Raleigh, North Carolina)

  • Benj’s low-tech, but creative way of viewing the eclipse.

    Benj Edwards

  • So many crescents.

    Benj Edwards

I’m in Raleigh, North Carolina, and we were lucky to have a clear day today. We reached peak eclipse at around 3:15 pm (but not total eclipse, sadly), and leading up to that time, the sun slowly began to dim as I looked out my home office window. Around 3 pm, I went outside on the back deck and began crafting makeshift pinhole lenses using cardboard and a steel awl, poking holes so that my kids and I could see the crescent shape of the eclipse projected indirectly on a dark surface.
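For anyone curious how big that projected crescent should be, the geometry is simple: the image diameter is roughly the pinhole-to-surface distance times the Sun's angular size (about 0.53 degrees). A quick sketch, illustrative only and not part of the setup described above:

```python
import math

# Apparent angular diameter of the Sun as seen from Earth, in degrees.
SUN_ANGULAR_DIAMETER_DEG = 0.53

def pinhole_image_diameter(distance_m: float) -> float:
    """Approximate diameter (in meters) of the Sun's projected image
    for a small pinhole held distance_m from the projection surface."""
    return distance_m * math.tan(math.radians(SUN_ANGULAR_DIAMETER_DEG))

# A surface held 1 m behind the pinhole yields an image roughly 9 mm across;
# during a partial eclipse, that disk appears as the crescent of uncovered Sun.
image_mm = pinhole_image_diameter(1.0) * 1000
print(f"image diameter at 1 m: {image_mm:.1f} mm")
```

The takeaway: the farther you hold the cardboard from the surface, the bigger (but dimmer) the crescent.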

My wife had also bought some eclipse glasses from a local toy store, and I very briefly tried them while squinting. I could see the eclipse well, but my eyes still felt a little blurry. I didn’t trust the glasses enough to let the kids use them. For the 2017 eclipse, I had purchased very dark welder’s lenses that I have since lost. Even then, I think I got a little bit of eye damage: A floater formed in my left eye that still plagues me to this day. I have the feeling I’ll never learn this lesson, and the next time an eclipse comes around, I’ll just continue to get progressively more blind. But oh, what fun to see the sun eclipsed.

Beth Mole (Raleigh, North Carolina)

Another view from Raleigh.

Beth Mole

It was a perfect day for eclipse watching in North Carolina—crystal clear blue sky and a high of 75. Our peak was at 3:15 pm with 78.6 percent sun coverage. The first hints of the moon’s pass came just before 2 pm. The whole family was out in the backyard (alongside a lot of our neighbors!), ready with pin-hole viewers, a couple of the NASA-approved cereal-box viewers, and eclipse glasses. We all watched as the moon progressively slipped in and stole the spotlight. At peak coverage, it was noticeably dimmer and it got remarkably cooler and quieter. It was not nearly as dramatic as being in the path of totality, but still really neat and fun. My 5-year-old had a blast watching the sun go from circle to bitten cookie to banana and back again.

Moments of totality: How Ars experienced the eclipse Read More »

teen’s-vocal-cords-act-like-coin-slot-in-worst-case-ingestion-accident

Teen’s vocal cords act like coin slot in worst-case ingestion accident

What are the chances? —

Luckily his symptoms were relatively mild, but doctors noted ulceration of his airway.

Most of the time, when kids accidentally gulp down a non-edible object, it travels toward the stomach. In the best-case scenarios for these unfortunate events, it’s a small, benign object that safely sees itself out in a day or two. But in the worst-case scenarios, it can go down an entirely different path.

That was the case for a poor teen in California, who somehow swallowed a quarter. The quarter didn’t head down the esophagus toward the stomach but veered into the airway, sliding past the vocal cords like they were a vending-machine coin slot.

Radiographs of the chest (Panel A, posteroanterior view) and neck (Panel B, lateral view). Removal with optical forceps (Panel C and Video 1), and reinspection of ulceration (Panel D, asterisks).

In a clinical report published recently in the New England Journal of Medicine, doctors who treated the 14-year-old boy reported how they found—and later retrieved—the quarter from its unusual and dangerous resting place. Once it passed the vocal cords and the glottis, the coin got lodged in the subglottis, a small region between the vocal cords and the trachea.

Luckily, when the boy arrived at the emergency department, his main symptoms were hoarseness and difficulty swallowing. Surprisingly, he was breathing comfortably and not drooling, the doctors noted. But imaging quickly revealed the danger his airway was in when the vertical coin lit up his scans.

“Airway foreign bodies—especially those in the trachea and larynx—necessitate immediate removal to reduce the risk of respiratory compromise,” they wrote in the NEJM report.

The teen was given a general anesthetic while doctors used long optical forceps, guided by a camera, to pluck the coin from its snug spot. After grabbing the coin, they re-inspected the boy’s airway, noting ulcerations on each side matching the coin’s ribbed edge.

After the coin’s retrieval, the boy’s symptoms improved and he was discharged home, the doctors reported.

Teen’s vocal cords act like coin slot in worst-case ingestion accident Read More »

kamikaze-bacteria-explode-into-bursts-of-lethal-toxins

Kamikaze bacteria explode into bursts of lethal toxins

The needs of the many… —

If you make a big enough toxin, it’s difficult to get it out of the cells.

Colorized scanning electron microscope (SEM) image of Yersinia pestis, the plague bacteria and a close relative of the toxin-producing species studied here.

Life-forms with no brain are capable of some astounding things. It might sound like sci-fi nightmare fuel, but some bacteria can wage kamikaze chemical warfare.

Pathogenic bacteria make us sick by secreting toxins. While the release of smaller toxin molecules is well understood, methods of releasing larger toxin molecules have mostly eluded us until now. Researcher Stefan Raunser, director of the Max Planck Institute of Molecular Physiology, and his team finally found out how the insect pathogen Yersinia entomophaga (which attacks beetles) releases its large-molecule toxin.

They found that designated “soldier cells” sacrifice themselves and explode to deploy the poison inside their victim. “YenTc appears to be the first example of an anti-eukaryotic toxin using this newly established type of secretion system,” the researchers said in a study recently published in Nature Microbiology.

Silent and deadly

Y. entomophaga is part of the Yersinia genus, relatives of the plague bacteria, which produce what are known as Tc toxins. Their molecules are huge as far as bacterial toxins go, but, like most smaller toxin molecules, they still need to make it through the bacteria’s three cell membranes before they escape to damage the host. Raunser had already found in a previous study that Tc toxin molecules do show up outside the bacteria. What he wanted to see next was how and when they exit the bacteria that make them.

To find out what kind of environment is ideal for Y. entomophaga to release YenTc, the bacteria were placed in acidic (pH under 7) and alkaline (pH over 7) media. While they did not release much in the acidic medium, the bacteria thrived at the high pH of the alkaline medium, and increasing the pH led them to release even more of the toxin. The higher-pH environment in a beetle occurs around the mid-to-end region of its gut, so it is now thought that most of the toxin is liberated when the bacteria reach that area.

How YenTc is released was more difficult to determine. When the research team used mass spectrometry to take a closer look at the toxin, they found that it was missing something: There was no signal sequence indicating to the bacteria that the protein needed to be transported outside the bacterium. Signal sequences, also known as signal peptides, are kind of like built-in tags for secretion. They are in charge of connecting proteins (toxins are proteins) to a complex at the innermost cell membrane that pushes them through. But YenTc apparently doesn’t need a signal sequence to make it out of the bacterium.

About to explode

So how does this insect killer release YenTc, its most formidable toxin? The first test was a process of elimination. While YenTc has no signal sequence, the bacteria have other secretion systems for the other toxins they release. Raunser thought that knocking out these secretion systems using gene editing could reveal which one was responsible for secreting YenTc. Every secretion system in Y. entomophaga was knocked out until none were left, yet the bacteria were still able to secrete YenTc.

The researchers then used fluorescence microscopy to observe the bacteria releasing their toxin. They inserted a gene encoding a fluorescent protein into the toxin gene so the bacteria would glow when making the toxin. While not all Y. entomophaga cells produced YenTc, those that did (and so glowed) tended to be larger and more sluggish. To induce secretion, the pH was raised to alkaline levels. Non-producing cells went about their business, but YenTc-expressing cells took only minutes to collapse and release the toxin.

This is what’s called a lytic secretion system, which involves the rupture of cell walls or membranes to release toxins.

“This prime example of self-destructive cooperation in bacteria demonstrates that YenTc release is the result of a controlled lysis strictly dedicated to toxin release rather than a typical secretion process, explaining our initially perplexing observation of atypical extracellular proteins,” the researchers said in the same study.

Yersinia also includes the pathogenic bacteria that cause bubonic plague and pseudotuberculosis, diseases that have devastated humans. Now that the secretion mechanism of one Yersinia species has been found, Raunser wants to study more of them, along with other types of pathogens, to see if any others have kamikaze soldier cells that use the same lytic mechanism to release toxins.

The discovery of Y. entomophaga’s exploding cells could eventually mean human treatments that target kamikaze cells. In the meantime, we can at least be relieved we aren’t beetles.

Nature Microbiology, 2024. DOI: 10.1038/s41564-023-01571-z

Kamikaze bacteria explode into bursts of lethal toxins Read More »

gravitational-waves-reveal-“mystery-object”-merging-with-a-neutron-star

Gravitational waves reveal “mystery object” merging with a neutron star

mind the gap —

The so-called “mass gap” might be less empty than physicists previously thought.

Artistic rendition of a black hole merging with a neutron star. LIGO/VIRGO/KAGRA detected a merger involving a neutron star and what might be a very light black hole falling within the “mass gap” range.

LIGO-India / Soheb Mandhai

The LIGO/VIRGO/KAGRA collaboration searches the universe for gravitational waves produced by the mergers of black holes and neutron stars. It has now announced the detection of a signal indicating a merger between two compact objects, one of which has an unusual intermediate mass—heavier than a neutron star and lighter than a black hole. The collaboration provided specifics of their analysis of the merger and the “mystery object” in a draft manuscript posted to the physics arXiv, suggesting that the object might be a very low-mass black hole.

LIGO detects gravitational waves via laser interferometry, using high-powered lasers to measure tiny changes in the distance between two objects positioned kilometers apart. LIGO has detectors in Hanford, Washington state, and in Livingston, Louisiana. A third detector in Italy, Advanced VIRGO, came online in 2016. In Japan, KAGRA is the first gravitational-wave detector in Asia and the first to be built underground. Construction began on LIGO-India in 2021, and physicists expect it will turn on sometime after 2025.
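To get a feel for just how tiny those measured changes are, a back-of-envelope calculation helps. The 1e-21 strain figure below is a typical order of magnitude for a detectable event, not a number from this particular detection:

```python
# Order-of-magnitude sketch of why interferometry must be so precise.
ARM_LENGTH_M = 4_000.0  # LIGO's arms are 4 km long
STRAIN = 1e-21          # dimensionless strain h = delta_L / L (typical strong signal)

# Strain is the fractional change in arm length, so the absolute change is:
delta_L = STRAIN * ARM_LENGTH_M
print(f"arm length change: {delta_L:.0e} m")
# About 4e-18 m, hundreds of times smaller than a proton's diameter (~1.7e-15 m).
```

That sub-proton-scale displacement is what the laser interferometers have to resolve against seismic and thermal noise.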

To date, the collaboration has detected dozens of merger events since its first Nobel Prize-winning discovery. Early detected mergers involved either two black holes or two neutron stars, but in 2021, LIGO/VIRGO/KAGRA confirmed the detection of two separate “mixed” mergers between black holes and neutron stars.

Most objects involved in the mergers detected by the collaboration fall into two groups: stellar-mass black holes (ranging from a few solar masses to tens of solar masses) and supermassive black holes, like the one in the middle of our Milky Way galaxy (ranging from hundreds of thousands to billions of solar masses). The former are the result of massive stars dying in a core-collapse supernova, while the latter’s formation process remains something of a mystery. The range between the heaviest known neutron star and the lightest known black hole is known as the “mass gap” among scientists.

There have been gravitational wave hints of compact objects falling within the mass gap before. For instance, as reported previously, in 2019, LIGO/VIRGO picked up a gravitational wave signal from a black hole merger dubbed GW190521 that produced the most energetic signal detected thus far, showing up in the data as more of a “bang” than the usual “chirp.” Even weirder, the two black holes that merged were locked in elliptical (rather than circular) orbits, and their spin axes were tipped far more than usual relative to those orbits. And the new black hole resulting from the merger had an intermediate mass of 142 solar masses—smack in the middle of the mass gap.

Masses in the stellar graveyard.

LIGO-Virgo-KAGRA / Aaron Geller / Northwestern

That same year, the collaboration detected another signal, GW190814, a compact binary merger involving a mystery object that also fell within the mass gap. With no corresponding electromagnetic signal to accompany the gravitational wave signal, astrophysicists were unable to determine whether that object was an unusually heavy neutron star or an especially light black hole. And now we have a new mystery object within the mass gap in a merger event dubbed GW230529.

“While previous evidence for mass-gap objects has been reported both in gravitational and electromagnetic waves, this system is especially exciting because it’s the first gravitational-wave detection of a mass-gap object paired with a neutron star,” said co-author Sylvia Biscoveanu of Northwestern University. “The observation of this system has important implications for both theories of binary evolution and electromagnetic counterparts to compact-object mergers.”

See where this discovery falls within the mass gap.

Shanika Galaudage / Observatoire de la Côte d’Azur

LIGO/VIRGO/KAGRA started its fourth observing run last spring and soon picked up GW230529’s signal. Scientists determined that one of the two merging objects had a mass between 1.2 and 2 times the mass of our sun (most likely a neutron star), while the other’s mass fell in the mass-gap range of 2.5 to 4.5 times the mass of our sun. As with GW190814, there were no accompanying bursts of electromagnetic radiation, so the team wasn’t able to conclusively identify the nature of the more massive mystery object, located some 650 million light-years from Earth, but they think it is probably a low-mass black hole. If so, the finding implies an increase in the expected rate of neutron star–black hole mergers with electromagnetic counterparts, per the authors.

“Before we started observing the universe in gravitational waves, the properties of compact objects like black holes and neutron stars were indirectly inferred from electromagnetic observations of systems in our Milky Way,” said co-author Michael Zevin, an astrophysicist at the Adler Planetarium. “The idea of a gap between neutron-star and black-hole masses, an idea that has been around for a quarter of a century, was driven by such electromagnetic observations. GW230529 is an exciting discovery because it hints at this ‘mass gap’ being less empty than astronomers previously thought, which has implications for the supernova explosions that form compact objects and for the potential light shows that ensue when a black hole rips apart a neutron star.”

arXiv, 2024. DOI: 10.48550/arXiv.2404.04248

Gravitational waves reveal “mystery object” merging with a neutron star Read More »

why-are-there-so-many-species-of-beetles?

Why are there so many species of beetles?

The beetles outnumber us —

Diet played a key role in the evolution of the vast beetle family tree.

A box of beetles

Caroline Chaboo’s eyes light up when she talks about tortoise beetles. Like gems, they exist in myriad bright colors: shiny blue, red, orange, leaf green and transparent flecked with gold. They’re members of a group of 40,000 species of leaf beetles, the Chrysomelidae, one of the most species-rich branches of the vast beetle order, Coleoptera. “You have your weevils, longhorns, and leaf beetles,” she says. “That’s really the trio that dominates beetle diversity.”

An entomologist at the University of Nebraska, Lincoln, Chaboo has long wondered why the kingdom of life is so skewed toward beetles: The tough-bodied creatures make up about a quarter of all animal species. Many biologists have wondered the same thing for a long time. “Darwin was a beetle collector,” Chaboo notes.

Despite their kaleidoscopic variety, most beetles share the same three-part body plan. The insects’ ability to fold their flight wings, origami-like, under protective forewings called elytra allows beetles to squeeze into rocky crevices and burrow inside trees. Beetles’ knack for thriving in a large range of microhabitats could also help explain their abundance of species, scientists say.

Of the roughly 1 million named insect species on Earth, about 400,000 are beetles. And that’s just the beetles described so far. Scientists typically describe thousands of new species each year. So—why so many beetle species? “We don’t know the precise answer,” says Chaboo. But clues are emerging.

One hypothesis is that there are lots of them because they’ve been around so long. “Beetles are 350 million years old,” says evolutionary biologist and entomologist Duane McKenna of the University of Memphis in Tennessee. That’s a great deal of time in which existing species can speciate, or split into new, distinct genetic lineages. By way of comparison, modern humans have existed for only about 300,000 years.

Yet just because a group of animals is old doesn’t necessarily mean it will have more species. Some very old groups have very few species. Coelacanth fish, for example, have been swimming in the ocean for approximately 360 million years, reaching a maximum of around 90 species and then declining to the two species known to be living today. Similarly, the lizard-like reptile the tuatara is the only living member of a once globally diverse ancient order of reptiles that originated about 250 million years ago.

Another possible explanation for why beetles are so rich in species is that, in addition to being old, they have unusual staying power. “They have survived at least two mass extinctions,” says Cristian Beza-Beza, a University of Minnesota postdoctoral fellow. Indeed, a 2015 study using fossil beetles to explore extinctions as far back as the Permian, 284 million years ago, concluded that a lack of extinction may be at least as important as diversification in explaining beetle species abundance. In past eras, at least, beetles have demonstrated a striking ability to shift their ranges in response to climate change, which may explain their extinction resilience, the authors hypothesize.

Why are there so many species of beetles? Read More »

the-science-of-smell-is-fragrant-with-submolecules

The science of smell is fragrant with submolecules

Did you smell something? —

A chemical that we smell may be a composite of multiple smell-making pieces.

cartoon of roses being smelled, with the nasal passages, neurons, and brain visible through cutaways.

When we catch a whiff of perfume or indulge in a scented candle, we are smelling much more than Floral Fantasy or Lavender Vanilla. We are actually detecting odor molecules that enter our nose and interact with cells that send signals to be processed by our brain. While certain smells feel like they’re unchanging, the complexity of this system means that large odorant molecules are perceived as the sum of their parts—and we are capable of perceiving the exact same molecule as a different smell.

Smell is more complex than we might think. It doesn’t consist of simply detecting specific molecules. Researcher Wen Zhou and his team from the Institute of Psychology of the Chinese Academy of Sciences have now found that parts of our brains analyze smaller parts of the odor molecules that make things smell.

Smells like…

So how do we smell? Odor molecules that enter our noses stimulate olfactory sensory neurons by binding to odorant receptors on them (each neuron makes only one of approximately 500 different odorant receptors). Smelling something activates different neurons depending on which molecules are in that smell and which receptors they interact with. Neurons in the piriform cortex of the brain then take the information from the sensory neurons and interpret it as a message that makes us smell vanilla. Or a bouquet of flowers. Or whatever else.

Odor molecules were previously thought to be coded only as whole molecules, but Zhou and his colleagues wanted to see whether the brain’s analysis of odor molecules could perceive something less than a complete molecule. They reasoned that, if only whole molecules work, then after being exposed to a part of an odorant molecule, the test subjects would smell the original molecule exactly the same way. If, by contrast, the brain was able to pick up on the smell of a molecule’s substructures, neurons would adapt to the substructure. When re-exposed to the original molecule, subjects would not sense it nearly as strongly.

“If [sub-molecular factors are part of our perception of an odor]—the percept[ion] and its neural representation would be shifted towards those of the unadapted part of that compound,” the researchers said in a study recently published in Nature Human Behaviour.

Doesn’t smell like…

To see whether their hypothesis held up, Zhou’s team presented test subjects with a compound abbreviated CP, its separate components C and P, and an unrelated component, U. P and U were supposed to have equal aromatic intensity despite being different scents.

In one session, subjects smelled CP and then sniffed P until they had adapted to it. When they smelled CP again, they reported it smelling more like C than P. Despite being exposed to the entire molecule, they were mostly smelling C, which was unadapted. In another session, subjects adapted to U, after which there was no change in how they perceived CP. So, the effect is specific to smelling a portion of the odorant molecule.

In yet another experiment, subjects were told to first smell CP and then adapt to the smell of P with just one nostril while keeping the other nostril closed. Once adapted, CP and C smelled similar, but only when sniffed through the nostril that had been open. The two smelled much more distinct through the nostril that had been closed.

Previous research has shown that adaptation to odors takes place in the piriform cortex. Substructure adaptation causes this part of the brain to respond differently to the portions of a chemical that the nose has recently been exposed to.

This olfactory experiment showed that our brains perceive smells by doing more than just recognizing the presence of a whole odor molecule. Some molecules can be perceived as a collection of submolecular units that are perceived separately.

“The smells we perceived are the products of continuous analysis and synthesis in the olfactory system,” the team said in the same study, “breath by breath, of the structural features and relationships of volatile compounds in our ever-changing chemical environment.”

Nature Human Behaviour, 2024. DOI: 10.1038/s41562-024-01849-0

The science of smell is fragrant with submolecules Read More »

here-are-the-winners-and-losers-when-it-comes-to-clouds-for-monday’s-eclipse

Here are the winners and losers when it comes to clouds for Monday’s eclipse

Happy hunting —

News you can use in regard to chasing cloud-free skies.

Cloud cover forecast for 2 pm ET on Monday, April 8.

Tomer Burg

The best opportunity to view a total solar eclipse in the United States for the next two decades is nearly at hand. Aside from making sure you’re in the path of totality, the biggest question for most eclipse viewers has been: Will it be cloudy?

This has posed a challenge to the meteorological community, because clouds are notoriously difficult to forecast for a number of reasons. The first is that they are localized features, sometimes only a few miles or kilometers across, which is smaller than the resolution of the global models that provide forecasts five, seven, or more days out.

Weather models also struggle with predicting clouds because they can form anywhere from a few thousand feet (2,000 meters) above the ground to 50,000 feet (15,000 meters), and therefore they require good information about conditions in the atmosphere near the surface all the way into the stratosphere. The problem is that the combination of ground-based observations, weather balloons, data from aircraft, and satellites do not provide the kind of comprehensive atmospheric profile needed at locations around the world for completely accurate cloud forecasting.

Finally, there is the issue of partly cloudy skies and the transience of clouds themselves. Most places, most days, have a mixture of sunshine and cloudy skies. So let’s say the forecast looks pretty good for your location: Forecasters call for only 30 percent sky cover on Monday afternoon. Sounds great! But if a large cloud moves over the Sun during the few minutes of totality, it won’t matter that the day was mostly sunny.
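That last point can be made concrete with a toy probability sketch. This is a deliberately simplified model, not how forecasters actually compute sky cover: assume each cloud "pass" during totality independently blocks the Sun with probability equal to the fractional sky cover.

```python
import random

def chance_blocked(sky_cover: float, passes: int = 1, trials: int = 50_000) -> float:
    """Toy Monte Carlo: each of `passes` cloud passes during totality
    independently blocks the Sun with probability `sky_cover`.
    Returns the estimated chance of at least one blockage."""
    blocked = sum(
        any(random.random() < sky_cover for _ in range(passes))
        for _ in range(trials)
    )
    return blocked / trials

# With 30 percent sky cover and a single cloud pass, the chance of losing
# the view is roughly the sky-cover fraction itself; with drifting clouds
# (several independent passes), the risk climbs quickly toward 1 - 0.7**n.
print(chance_blocked(0.3, passes=1))
print(chance_blocked(0.3, passes=3))
```

The point of the sketch: a "70 percent clear" forecast is not a 70 percent guarantee of seeing totality once cloud motion is accounted for.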

With that in mind, here’s the forecast at three days out, with some strategies for finding the clear skies on Monday.

The forecast

The cloud forecast has actually been remarkably consistent for the last several days, in general terms. Texas has looked rather poor for visibility, the central region of the United States including bits of Missouri, Arkansas, Illinois, and Indiana have looked fairly good, areas along Lake Erie have been iffy, and the northeastern United States has looked optimal.

Our highest confidence area is northern New York, Vermont, New Hampshire, and Maine. The reason is that high pressure will be firmly in place for these locations on Monday, virtually guaranteeing mostly sunny skies. If you want to be confident of seeing the eclipse in North America, this is the place to be. But there is a catch—isn’t there always? A snowstorm this week, which may persist into Saturday morning, has made travel difficult. Conditions should improve by Sunday, however.

Rising pressures in the central United States will also make for good viewing conditions. The band of totality running from Northern Arkansas through Indiana is not guaranteed to have clear skies, but the odds are favorable for most locations here.

The Lake Erie region, including Cleveland, is probably the biggest wildcard in the national forecast. The atmospheric setup here is fairly complex, with the region just on the edge of high pressure ridging that will help keep skies clear. I’d be cautiously optimistic.

Finally, there’s Texas. The forecast overall has been poor since I began tracking it two weeks ago. (And as I live in Texas, I’ve been following it closely.) The global models with the best predictive value—the European-based ECMWF and US-based GFS—have shown consistently cloudy skies across much of the state on Monday, with a non-zero chance of rain. I do think there will be some breaks in the clouds at the time of the eclipse, perhaps in locations near Dallas or to the west of Austin, and hopefully some of the cloud cover will be thin, high clouds. But whereas the skies at night are big and bright in Texas, the solar eclipse viewing conditions might just bite.

Some strategies for Monday

There are a lot of helpful resources online for tracking cloud cover over the weekend. One of the best hacks is to search the web for the nearest city or town, e.g., “NWS Cleveland, Ohio,” and find the “forecast discussion” section of the National Weather Service website. This will give you a credible local forecaster’s outlook on conditions. Most have been doing a great job of providing eclipse context in twice-daily discussions.

A meteorologist at the University of Oklahoma, Tomer Burg, has set up an excellent website to provide both an overview of the eclipse and a probabilistic outlook for localized conditions. Your best bets are the national blend of models forecast for average cloud cover (direct link), and a city dashboard that provides key information for more than 100 locations about precise eclipse timing and sky cover.

Good luck, Austin!

Tomer Burg

Finally, if you’re in the path of totality and are expected to have partly to mostly cloudy skies, don’t despair. There’s always a chance the forecast will change, even a few days out. There’s always a chance for a break in the clouds at the right time. There’s always a chance the clouds will be thin and high, with the disk of the Sun shining through.

And finally, if it is thickly overcast, it will still get eerily dark outside in the middle of the day. It will get noticeably colder. Animals will do nighttime things. So it will be special, just not as special as it could have been.

Here are the winners and losers when it comes to clouds for Monday’s eclipse Read More »