In their testing, they use a couple of unusual electrode materials, such as a chromium oxide (Cr8O21) and an organic polymer (a sulfurized polyacrylonitrile). Both of these have significant weight advantages over the typical materials used in today’s batteries, although the resulting batteries typically lasted less than 500 cycles before dropping to 80 percent of their original capacity.
But the striking experiment came when they used LiSO2CF3 to rejuvenate a battery that had been manufactured as normal but had lost capacity due to heavy use. Treating a lithium-iron phosphate battery that had lost 15 percent of its original capacity restored almost all of what was lost, allowing it to hold over 99 percent of its original charge. They also ran a battery for repeated cycles with rejuvenation every few thousand cycles. At just short of 12,000 cycles, it still could be restored to 96 percent of its original capacity.
Before you get too excited, there are a couple of things worth noting about lithium-iron phosphate cells. The first is that, relative to their charge capacity, they’re a bit heavy, so they tend to be used in large, stationary batteries like the ones in grid-scale storage. They’re also long-lived on their own; with careful management, they can take over 8,000 cycles before they drop to 80 percent of their initial capacity. It’s not clear whether similar rejuvenation is possible in the battery chemistries typically used for the sorts of devices that most of us own.
The final caution is that the battery needs to be modified so that fresh electrolytes can be pumped in and the gases released by the breakdown of the LiSO2CF3 removed. It’s safest if this sort of access is built into the battery from the start, rather than provided by modifying it much later, as was done here. And the piping needed for that access would put a small dent in the battery’s capacity per volume.
All that said, the treatment demonstrated here could bring even a well-managed battery closer to its original capacity, and it could largely restore the capacity of one that hadn’t been carefully managed. That would let us get far more out of the initial expense of battery manufacturing, which might make the approach especially attractive for batteries destined for a large storage facility, where many of them could potentially be treated at the same time.
Getting oxygen from regolith takes 24 kWh per kilogram, and we’d need tonnes.
If humanity is ever to spread out into the Solar System, we’re going to need to find a way to put fuel into rockets somewhere other than the cozy confines of a launchpad on Earth. One option for that is in low-Earth orbit, which has the advantage of being located very close to said launch pads. But it has the considerable disadvantage of requiring a lot of energy to escape Earth’s gravity—it takes a lot of fuel to put substantially less fuel into orbit.
One alternative is to produce fuel on the Moon. We know there is hydrogen and oxygen present, and the Moon’s gravity is far easier to overcome, meaning more of what we produce there can be used to send things deeper into the Solar System. But there is a tradeoff: any fuel production infrastructure will likely need to be built on Earth and sent to the Moon.
How much infrastructure is that going to involve? A study released today in PNAS evaluates the energy costs of producing oxygen on the Moon and finds that they’re substantial: about 24 kWh per kilogram. That doesn’t sound bad until you start considering how many kilograms we’re eventually going to need.
Free the oxygen!
The math that makes refueling from the Moon appealing is pretty simple. “As a rule of thumb,” write the authors of the new study on the topic, “rockets launched from Earth destined for [Earth-Moon Lagrange Point 1] must burn ~25 kg of propellant to transport one kg of payload, whereas rockets launched from the Moon to [Earth-Moon Lagrange Point 1] would burn only ~four kg of propellant to transport one kg of payload.” Departing from the Earth-Moon Lagrange Point for locations deeper into the Solar System also requires less energy than leaving low-Earth orbit, meaning the fuel we get there is ultimately more useful, at least from an exploration perspective.
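That rule of thumb is easy to sanity-check. A minimal sketch, using only the ratios quoted from the study:

```python
# Back-of-the-envelope comparison of propellant burned per kilogram of
# payload delivered to Earth-Moon Lagrange Point 1, per the study's rule of thumb.
def propellant_needed(payload_kg, ratio):
    """Propellant mass for a given payload, at a fixed propellant-to-payload ratio."""
    return payload_kg * ratio

payload = 1000.0  # one tonne of payload to EML1
from_earth = propellant_needed(payload, 25)  # ~25 kg propellant per kg payload
from_moon = propellant_needed(payload, 4)    # ~4 kg propellant per kg payload

print(f"From Earth: {from_earth / 1000:.0f} t of propellant")
print(f"From Moon:  {from_moon / 1000:.0f} t of propellant")
print(f"Launching from the Moon burns {from_earth / from_moon:.2f}x less propellant")
```

The ratios work out to roughly a sixfold saving per kilogram delivered, before even counting the easier departure from the Lagrange point.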
But, of course, you need to make the fuel there in the first place. The obvious choice for that is water, which can be split to produce hydrogen and oxygen. We know there is water on the Moon, but we don’t yet know how much, and whether it’s concentrated into large deposits. Given that uncertainty, people have also looked at other materials that we know are present in abundance on the Moon’s surface.
And there’s probably nothing more abundant on that surface than regolith, the dust left over from constant tiny impacts that have, over time, eroded lunar rocks. The regolith is composed of a variety of minerals, many of which contain oxygen, typically the heavier component of rocket fuel. And a variety of people have figured out the chemistry involved in separating oxygen from these minerals on the scale needed for rocket fuel production.
But knowing the chemistry is different from knowing what sort of infrastructure is needed to get that chemistry done at a meaningful scale. To get a sense of this, the researchers decided to focus on isolating oxygen from a mineral called ilmenite, or FeTiO3. It’s not the easiest way to get oxygen—iron oxides win out there—but it’s well understood. Someone actually patented oxygen production from ilmenite back in the 1970s, and two hardware prototypes have been developed, one of which may be sent to the Moon on a future NASA mission.
The researchers propose a system that would harvest regolith, partly purify the ilmenite, then combine it with hydrogen at high temperatures, which would strip the oxygen out as water, leaving behind purified iron and titanium (both of which may be useful to have). The resulting water would then be split to feed the hydrogen back into the system, while the oxygen can be sent off for use in rockets.
(This wouldn’t solve the issue of what that oxygen will ultimately oxidize to power a rocket. But oxygen is the heavier component of most rocket fuel combinations—typically about 80 percent of the mass—and so the bigger challenge to get to a fuel depot.)
Obviously, this process will require a lot of infrastructure, like harvesters, separators, high-temperature reaction chambers, and more. But the researchers focus on a single element: how much power will it suck down?
More power!
To get their numbers, the researchers made a few simplifying assumptions. These include assuming that it’s possible to purify ilmenite from raw regolith and that it will be present in particles small enough that about half the material present will participate in chemical reactions. They ignored both the potential to get even more oxygen from the iron and titanium oxides present, as well as the potential for contamination from problematic materials like hydrogen sulfide or hydrochloric acid.
The team found that almost all of the energy is consumed at three steps in the process: the high-temperature hydrogen reaction that produces water (55 percent), splitting the water afterwards (38 percent), and converting the resulting oxygen to its liquid form (5 percent). The typical total usage, depending on factors like the concentration of ilmenite in the regolith, worked out to be about 24 kWh for each kilogram of liquid oxygen.
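Those percentages translate into a simple per-kilogram energy budget. A quick sketch, using the figures above:

```python
# Rough breakdown of the ~24 kWh consumed per kilogram of liquid oxygen,
# using the percentages reported for the three dominant steps.
TOTAL_KWH_PER_KG = 24.0

steps = {
    "hydrogen reduction of ilmenite": 0.55,
    "water electrolysis": 0.38,
    "oxygen liquefaction": 0.05,
}

for name, frac in steps.items():
    print(f"{name}: {TOTAL_KWH_PER_KG * frac:.1f} kWh/kg")

# The three steps account for ~98 percent of the budget; the small remainder
# covers everything else (harvesting, separation, and so on).
print(f"everything else: {TOTAL_KWH_PER_KG * (1 - sum(steps.values())):.1f} kWh/kg")
```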
Obviously, the numbers are sensitive to how efficiently you can do things like heat the reaction mix. (It might be possible to do this heating with concentrated solar, avoiding the use of electricity entirely, but the authors didn’t analyze that.) But they are also sensitive to less obvious efficiencies. For example, a better separation of the ilmenite from the rest of the regolith means you’re using less energy to heat contaminants. So, while the energetic cost of that separation is small, it pays off to do it effectively.
Based on orbital observations, the researchers map out the areas where ilmenite is present at high enough concentrations for this approach to make sense. These include some of the mares on the near side of the Moon, so they’re easy to get to.
A map of the lunar surface, with areas with high ilmenite concentrations shown in blue.
Credit: Leger et al.
On its own, 24 kWh doesn’t seem like a lot of power. The problem is that we will need a lot of kilograms. The researchers estimate that getting an empty SpaceX Starship from the lunar surface to the Earth-Moon Lagrange Point takes 80 tonnes of liquid oxygen. And a fully fueled Starship can hold over 500 tonnes of liquid oxygen.
We can compare that to something like the solar array on the International Space Station, which has a capacity of about 100 kW. That means it could power the production of about four kilograms of oxygen an hour. At that rate, it would take a bit over 10 days to produce a tonne, and a bit more than two years to make enough oxygen to send an empty Starship to the Lagrange Point—assuming 24-7 production. And since any facility on the near side spends half of each lunar day in darkness, a solar-powered one would only produce for half the time.
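The timeline arithmetic is straightforward to reproduce. A minimal sketch, using the article's figures and a 50 percent duty cycle for the lunar night:

```python
# How long a 100 kW array takes to make 80 tonnes of liquid oxygen
# at the study's ~24 kWh per kilogram.
ARRAY_KW = 100.0
KWH_PER_KG = 24.0
TARGET_KG = 80_000.0  # empty Starship, lunar surface to EML1

kg_per_hour = ARRAY_KW / KWH_PER_KG             # ~4.2 kg of oxygen per hour
days_per_tonne = 1000.0 / kg_per_hour / 24      # ~10 days per tonne
days_continuous = TARGET_KG / kg_per_hour / 24  # ~800 days of 24-7 production
days_near_side = days_continuous / 0.5          # half of each lunar day is dark

print(f"{kg_per_hour:.1f} kg/hr, {days_per_tonne:.0f} days per tonne")
print(f"{days_continuous / 365:.1f} years continuous, "
      f"{days_near_side / 365:.1f} years with the lunar night")
```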
Obviously, we can build larger arrays than that, but doing so boosts the amount of material that needs to be sent to the Moon from Earth. It may make more sense to use nuclear power. While that would likely involve more infrastructure than solar arrays, it would allow the facilities to run around the clock, getting more production out of everything else we’ve shipped from Earth.
This paper isn’t meant to be the final word on the possibilities for lunar-based refueling; it’s simply an early attempt to put hard numbers on what ultimately might be the best way to explore our Solar System. Still, it provides some perspective on just how much effort we’ll need to make before that sort of exploration becomes possible.
John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.
And it worked. Repeating the same process with an added PLACER screening step boosted the number of enzymes with catalytic activity more than threefold.
Unfortunately, all of these enzymes stalled after a single reaction. It turns out they were much better at cleaving the ester, but they left one part of it chemically bonded to the enzyme. In other words, the enzymes acted like part of the reaction, not a catalyst. So the researchers started using PLACER to screen for structures that could adopt a key intermediate state of the reaction. This produced a much higher rate of reactive enzymes (18 percent of them cleaved the ester bond), and two—named “super” and “win”—could actually cycle through multiple rounds of reactions. The team had finally made an enzyme.
By adding additional rounds alternating between structure suggestions using RFDiffusion and screening using PLACER, the team saw the frequency of functional enzymes increase and eventually designed one that had an activity similar to some produced by actual living things. They also showed they could use the same process to design an esterase capable of digesting the bonds in PET, a common plastic.
If that sounds like a lot of work, it clearly was—designing enzymes, especially ones with no similar enzymes known in living things, will remain a serious challenge. But at least much of it can be done on computers, rather than requiring someone to order up the DNA that encodes the enzyme, get bacteria to make it, and screen for activity. And despite the process involving references to known enzymes, the designed ones didn’t share much sequence in common with them. That suggests there should be added flexibility if we want to design one that will react with esters that living things have never come across.
I’m curious about what might happen if we design an enzyme that is essential for survival, put it in bacteria, and then allow it to evolve for a while. I suspect life could find ways of improving on even our best designs.
It’s the “hydrophilic capillary-enhanced adhesion” of gecko feet that most interested the authors of this latest paper. Per the World Health Organization, 684,000 people die and another 38 million are injured every year in slips and falls, with correspondingly high health care costs. Most antislip products (crampons, chains, studs, cleats), tread designs, or materials (fiberglass, carbon fiber, rubber) are generally only effective for specific purposes or short periods of time. And they often don’t perform as well on wet ice, which has a nanoscale quasi-liquid layer (QLL) that makes it even more slippery.
So Vipin Richhariya of the University of Minho in Portugal and co-authors turned to gecko toe pads (as well as those of toads) for a better solution. To get similar properties in their silicone rubber polymers, they added zirconia nanoparticles, which attract water molecules. The polymers were rolled into a thin film and hardened, and then a laser etched groove patterns onto the surface—essentially creating micro cavities that exposed the zirconia nanoparticles, thus enhancing the material’s hydrophilic effects.
Infrared spectroscopy and simulated friction tests revealed that the composites containing 3 percent and 5 percent zirconia nanoparticles were the most slip-resistant. “This optimized composite has the potential to change the dynamics of slip-and-fall accidents, providing a nature-inspired solution to prevent one of the most common causes of accidents worldwide,” the authors concluded. The material could also be used for electronic skin, artificial skin, or wound healing.
Sulfur can store a lot more lithium but is problematically reactive in batteries.
If you weren’t aware, sulfur is pretty abundant. Credit: P_Wei
Lithium may be the key component in most modern batteries, but it doesn’t make up the bulk of the material used in them. Instead, much of the material is in the electrodes, where the lithium gets stored when the battery isn’t charging or discharging. So one way to make lighter and more compact lithium-ion batteries is to find electrode materials that can store more lithium. That’s one of the reasons that recent generations of batteries are starting to incorporate silicon into the electrode materials.
There are materials that can store even more lithium than silicon; a notable example is sulfur. But sulfur has a tendency to react with itself, producing ions that can float off into the electrolyte. Plus, like any electrode material, it tends to expand in proportion to the amount of lithium that gets stored, which can create physical strains on the battery’s structure. So while it has been easy to make lithium-sulfur batteries, their performance has tended to degrade rapidly.
But this week, researchers described a lithium-sulfur battery that still has over 80 percent of its original capacity after 25,000 charge/discharge cycles. All it took was a solid electrolyte that was more reactive than the sulfur itself.
When lithium meets sulfur…
Sulfur is an attractive battery material. It’s abundant and cheap, and sulfur atoms are relatively lightweight compared to many of the other materials used in battery electrodes. Sodium-sulfur batteries, which rely on two very cheap raw materials, have already been developed, although they only work at temperatures high enough to melt both of these components. Lithium-sulfur batteries, by contrast, could operate more or less the same way that current lithium-ion batteries do.
With a few major exceptions, that is. One is that the elemental sulfur used as an electrode is a very poor conductor of electricity, so it has to be dispersed within a mesh of conductive material. (You can contrast that with graphite, which both stores lithium and conducts electricity relatively well, thanks to being composed of countless sheets of graphene.) Lithium is stored there as Li2S, which occupies substantially more space than the elemental sulfur it’s replacing.
Both of these issues, however, can be solved with careful engineering of the battery’s structure. A more severe problem comes from the properties of the lithium-sulfur reactions that occur at the electrode. Elemental sulfur exists as an eight-atom ring, and the reactions with lithium are slow enough that semi-stable intermediates with smaller chains of sulfur end up forming. Unfortunately, these tend to be soluble in most electrolytes, allowing them to travel to the opposite electrode and participate in chemical reactions there.
This process essentially discharges the battery without allowing the electrons to be put to use. And it gradually leaves the electrode’s sulfur unavailable for participating in future charge/discharge cycles. The net result is that early generations of the technology would discharge themselves while sitting unused and would only survive a few hundred cycles before performance decayed dramatically.
But there has been progress on all these fronts, and some lithium-sulfur batteries with performance similar to lithium-ion have been demonstrated. Late last year, a company announced that it had lined up the money needed to build the first large-scale lithium-sulfur battery factory. Still, work on improvements has continued, and the new work seems to suggest ways to boost performance well beyond lithium-ion.
The need for speed
The paper describing the new developments, done by a collaboration between Chinese and German researchers, focuses on one aspect of the challenges posed by lithium-sulfur batteries: the relatively slow chemical reaction between lithium ions and elemental sulfur. It presents that aspect as a roadblock to fast charging, something that will be an issue for automotive applications. But at the same time, finding a way to limit the formation of inactive intermediate products during this reaction goes to the root of the relatively short usable life span of lithium-sulfur batteries.
As it turns out, the researchers found two ways to do just that.
One of the problems with the lithium-sulfur reaction intermediates is that they dissolve in most electrolytes. But that’s not a problem if the electrolyte isn’t a liquid. Solid electrolytes are materials that have a porous structure at the atomic level, with the environment inside the pores being favorable for ions. This allows ions to diffuse through the solid. If there’s a way to trap ions on one side of the electrolyte, such as a chemical reaction that traps or de-ionizes them, then it can enable one-way travel.
Critically, pores that favor the transit of lithium ions, which are quite compact, aren’t likely to allow the transit of the large ionized chains of sulfur. So a solid electrolyte should help cut down on the problems faced by lithium-sulfur batteries. But it won’t necessarily help with fast charging.
The researchers began by testing a glass formed from a mixture of boron, sulfur, and lithium (B2S3 and Li2S). But this glass had terrible conductivity, so they started experimenting with related glasses and settled on a combination that substituted in some phosphorus and iodine.
The iodine turned out to be a critical component. While the exchange of electrons with sulfur is relatively slow, iodine undergoes electron exchange (technically termed a redox reaction) extremely quickly. So it can act as an intermediate in the transfer of electrons to sulfur, speeding up the reactions that occur at the electrode. In addition, iodine has relatively low melting and boiling points, and the researchers suggest there’s some evidence that it moves around within the electrolyte, allowing it to act as an electron shuttle.
Successes and caveats
The result is a far superior electrolyte—and one that enables fast charging. It’s typical that fast charging cuts into the total capacity that can be stored in a battery. But when charged at an extraordinarily fast rate (50C, meaning a full charge in just over a minute), a battery based on this system still had half the capacity of a battery charged 25 times more slowly (2C, or a half-hour to full charge).
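C-rates translate directly into charging times: an nC rate delivers a full charge in 1/n hours. A quick sketch of the rates mentioned above:

```python
# Convert a battery C-rate into the time for a full charge.
# An nC rate means the full capacity is delivered in 1/n hours.
def charge_time_minutes(c_rate):
    """Minutes needed for a full charge at the given C-rate."""
    return 60.0 / c_rate

print(f"50C: {charge_time_minutes(50):.1f} minutes")  # just over a minute
print(f" 5C: {charge_time_minutes(5):.0f} minutes")
print(f" 2C: {charge_time_minutes(2):.0f} minutes")   # a half-hour to full
```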
But the striking thing was how durable the resulting battery was. Even at an intermediate charging rate (5C), it still had over 80 percent of its initial capacity after over 25,000 charge/discharge cycles. By contrast, lithium-ion batteries tend to hit that level of decay after about 1,000 cycles. If that sort of performance is possible in a mass-produced battery, it’s only a slight exaggeration to say it can radically alter our relationships with many battery-powered devices.
What’s not at all clear, however, is whether this takes full advantage of one of the original promises of lithium-sulfur batteries: more charge in a given weight and volume. The researchers specify the battery being used for testing; one electrode is an indium/lithium metal foil, and the other is a mix of carbon, sulfur, and the glass electrolyte. A layer of the electrolyte sits between them. But when giving numbers for the storage capacity per weight, only the weight of the sulfur is mentioned.
Still, even if weight issues would preclude this from being stuffed into a car or cell phone, there are plenty of storage applications that would benefit from something that doesn’t wear out even with 65 years of daily cycling.
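The "65 years" figure follows directly from the cycle counts. A quick sketch, assuming one full charge/discharge cycle per day:

```python
# Service life implied by cycle counts, assuming one full cycle per day.
def years_of_service(cycles, cycles_per_day=1):
    return cycles / cycles_per_day / 365

# Cycles to the 80 percent capacity threshold, from the article.
print(f"lithium-sulfur (new work): {years_of_service(25_000):.0f} years")
print(f"typical lithium-ion:       {years_of_service(1_000):.1f} years")
```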
In recent years, materials scientists experimenting with ceramics have started adding an oxidized form of graphene to the mix to produce ceramics that are tougher, more durable, and more resistant to fracture, among other desirable properties. Researchers at the National University of Singapore (NUS) have developed a new method that uses ultrasound to more evenly distribute graphene oxide (GO) in ceramics, according to a new paper published in the journal ACS Omega. And as a bonus, they collaborated with an artist who used the resulting ceramic tiles to create a unique art exhibit at the NUS Museum—a striking merger of science and art.
As reported previously, graphene is the thinnest material yet known, composed of a single layer of carbon atoms arranged in a hexagonal lattice. That structure gives it many unusual properties that hold great promise for real-world applications: batteries, supercapacitors, antennas, water filters, transistors, solar cells, and touchscreens, just to name a few.
In 2021, scientists found that this wonder material might also provide a solution to the fading of colors of many artistic masterpieces. For instance, several of Georgia O’Keeffe’s oil paintings housed in the Georgia O’Keeffe Museum in Santa Fe, New Mexico, have, over the decades, developed tiny pin-sized blisters, almost like acne. Conservators have found similar deterioration in oil-based masterpieces across all time periods, including works by Rembrandt.
Van Gogh’s Sunflower series has been fading over the last century due to constant exposure to light. A 2011 study found that chromium in the chrome yellow Van Gogh favored reacted strongly with other compounds like barium and sulfur when exposed to sunlight. A 2016 study pointed the finger at the sulfates, which absorb in the UV spectrum, leading to degradation.
Even contemporary art materials are prone to irreversible color changes from exposure to light and oxidizing agents, among other hazards. That’s why there has been recent work on the use of nanomaterials for conservation of artworks. Graphene has a number of properties that make it attractive for art-conservation purposes. The one-atom-thick material is transparent, adheres easily to various substrates, and serves as an excellent barrier against oxygen, gases (corrosive or otherwise), and moisture. It’s also hydrophobic and is an excellent absorber of UV light.
The new work, then, is based on a hypothetical: What if we just threw silicon particles in, let them fragment, and then fixed them afterward?
As mentioned, the reason fragmentation is a problem is that it leads to small chunks of silicon that have essentially dropped off the grid—they’re no longer in contact with the system that shuttles charges into and out of the electrode. In many cases, these particles are also partly filled with lithium, which takes it out of circulation, cutting the battery’s capacity even if there’s sufficient electrode material around.
The researchers involved here, all based at Stanford University, decided there was a way to nudge these fragments back into contact with the electrical system and demonstrated it could restore a lot of capacity to a badly degraded battery.
Bringing things together
The idea behind the new work was that it could be possible to attract the fragments of silicon to an electrode, or at least some other material connected to the charge-handling network. On their own, the fragments in the anode shouldn’t have a net charge; when the lithium gives up an electron there, it should go back into solution. But the lithium is unlikely to be evenly distributed across a fragment, making it polar—net neutral overall, but with regions of higher and lower electron density. And polar materials will move in an uneven electric field.
And, because of the uneven, chaotic structure of an electrode down at the nanoscale, any voltage applied to it will create an uneven electric field. Depending on its local structure, that may attract or repel some of the particles. But because these are mostly within the electrode’s structure, most of the fragments of silicon are likely to bump into some other part of the electrode in short order. And that could potentially re-establish a connection to the electrode’s current-handling system.
To demonstrate that what should happen in theory actually does happen in an electrode, the researchers started by taking a used electrode and brushing some of its surface off into a solution. They then passed a voltage through the solution and confirmed the small bits of material from the battery started moving toward one of the electrodes that they used to apply a voltage to the solution. So, things worked as expected.
Instead of needing constant power, new system adjusts to use whatever is available.
Mobile desalination plants might be easier to operate with renewable power. Credit: Ismail BELLAOUALI
Fresh water we can use for drinking or agriculture is only about 3 percent of the global water supply, and nearly 70 percent of that is trapped in glaciers and ice caps. So far, that has been enough to keep us going, but severe droughts have left places like Jordan, Egypt, sub-Saharan Africa, Spain, and California with limited access to potable water.
One possible solution is to tap into the remaining 97 percent of the water we have on Earth. The problem is that this water is saline, and we need to get the salt out of it to make it drinkable. Desalination is also an energy-expensive process. But MIT researchers led by Jonathan Bessette might have found an answer to that. They built an efficient, self-regulating water desalination system that runs on solar power alone with no need for batteries or a connection to the grid.
Probing the groundwaters
Oceans are the most obvious source of water for desalination. But they are a good option only for a small portion of people who live in coastal areas. Most of the global population—more or less 60 percent—lives farther than 100 kilometers from the coast, which makes using desalinated ocean water infeasible. So, Bessette and his team focused on groundwater instead.
“In terms of global demand, about 50 percent of low- to middle-income countries rely on groundwater,” Bessette says. This groundwater is trapped in underground reservoirs, abundant, and, in most places, present at depths of less than 300 meters. It comes mostly from rain that penetrates the ground and fills the empty spaces left by fractured rock formations. Sadly, as the rainwater seeps down, it also picks up salts from the soil. As a result, in New Mexico, for example, around 75 percent of groundwater is brackish, meaning less salty than seawater, but still too salty to drink.
Getting rid of the salt
We already have the ability to get the salt back out. “There are two broad categories within desalination technologies. The first is thermal and the other is based on using membranes,” Bessette explains.
Thermal desalination is something we figured out ages ago. You just boil the water and condense the steam, which leaves the salt behind. Boiling, however, needs lots of energy. Bringing 1 liter of room temperature water to 100° Celsius costs around 330 kilojoules of energy, assuming there’s no heat lost in the process. If you want a sense of how much energy that is, stop using your electric kettle for a month and see how your bill shrinks.
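The 330-kilojoule figure is a simple sensible-heat calculation. A quick sketch, assuming 1 liter of water is about 1 kg and room temperature is around 21° C (note this counts only heating the water to the boiling point, not the much larger latent heat needed to actually boil it off as steam):

```python
# Energy to heat water from room temperature to boiling, ignoring losses.
SPECIFIC_HEAT = 4.186  # kJ/(kg*K), liquid water

def heating_energy_kj(mass_kg, t_start_c, t_end_c):
    """Sensible heat: m * c * delta-T, in kilojoules."""
    return mass_kg * SPECIFIC_HEAT * (t_end_c - t_start_c)

energy = heating_energy_kj(1.0, 21.0, 100.0)
print(f"{energy:.0f} kJ")  # roughly 330 kJ, matching the article's figure
```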
“So, around 100 years ago we developed reverse osmosis and electrodialysis, which are two membrane-based desalination technologies. This way, we reduced the power consumption by a factor of 10,” Bessette claims.
Reverse osmosis is a pressure-driven process; you push the water through a membrane that works like a very fine sieve that lets the molecules of water pass but stops other things like salts. Technologically advanced implementations of this idea are widely used at industrial facilities such as the Sydney Desalination Plant in Australia. Reverse osmosis today is the go-to technology when you want to desalinate water at scale. But it has its downsides.
“The issue is reverse osmosis requires a lot of pretreatment. We have to treat the water down to a pretty good quality, making sure the physical, chemical, or biological foul doesn’t end up on the membrane before we do the desalination process,” says Bessette. Another thing is that reverse osmosis relies on pressure, so it requires a steady supply of power to maintain this pressure, which is difficult to achieve in places where the grid is not reliable. Sensitivity to power fluctuations also makes it challenging to use with renewable energy sources like wind or solar. This is why to make their system work on solar energy alone, Bessette’s team went for electrodialysis.
Syncing with the Sun
“Unlike reverse osmosis, electrodialysis is an electrically driven process,” Bessette says. The membranes are arranged in such a way that the water is not pushed through them but flows along them. On both sides of those membranes are positive and negative electrodes that create an electric field, which draws salt ions through the membranes and out of the water.
Off-grid desalination systems based on electrodialysis operate at constant power levels like toasters or other appliances, which means they require batteries to even out renewable energy’s fluctuations. Using batteries, in most cases, made them too expensive for the low-income communities that need them the most. Bessette and his colleagues solved that by designing a clever control system.
The two most important parameters in electrodialysis desalination are the flow rate of the water and the power you apply to the electrodes. To make the process efficient, you need to match those two. The advantage of electrodialysis is that it can operate at different power levels. When you have more available power, you can just pump more water through the system. When you have less power, you can slow the system down by reducing the water flow rate. You’ll produce less freshwater, but you won’t break anything this way.
Bessette’s team simplified the control problem down to two feedback loops. The first, outer loop tracked the power coming from the solar panels: on a sunny day, when the panels generated plenty of power, it fed more water into the system; when there was less power, it fed less water. The second, inner loop tracked the flow rate: when the flow rate was high, it applied more power to the electrodes; when it was low, it applied less. The trick was to apply the maximum available power while avoiding splitting the water into hydrogen and oxygen.
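A minimal sketch of that two-loop logic might look like the following Python. All of the constants are illustrative assumptions, not values from the team’s actual controller.

```python
# Toy sketch of the two nested feedback loops described above.
# FLOW_PER_WATT and the watts-per-(L/min) limit are made-up calibration
# numbers, chosen only to show the control structure.

FLOW_PER_WATT = 0.05  # hypothetical pump calibration: L/min per available watt

def outer_loop(solar_power_w):
    """Outer loop: more solar power -> pump more water through the stack."""
    return solar_power_w * FLOW_PER_WATT

def inner_loop(flow_rate_lpm, solar_power_w):
    """Inner loop: drive the electrodes as hard as the flow allows,
    capped by what the panels can actually supply."""
    # Hypothetical limit: faster-moving water can absorb more power before
    # the voltage would start electrolyzing water into hydrogen and oxygen.
    max_electrode_power = flow_rate_lpm * 18.0  # illustrative W per (L/min)
    return min(max_electrode_power, solar_power_w)

def control_step(solar_power_w):
    flow = outer_loop(solar_power_w)
    electrode_power = inner_loop(flow, solar_power_w)
    return flow, electrode_power

# Sunny moment: high flow, electrodes driven near their safe limit.
print(control_step(400.0))  # (20.0, 360.0)
# A cloud passes: the system slows down instead of failing.
print(control_step(100.0))  # (5.0, 90.0)
```

The key design property is graceful degradation: a drop in solar power lowers the flow rate, which in turn lowers the electrode power, so nothing needs a battery to ride through the fluctuation.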
Once Bessette and his colleagues figured out the control system, they built a prototype desalination device. And it worked, with very little supervision, for half a year.
Water production at scale
Bessette’s prototype system, complete with solar panels, pumps, electronics, and an electrodialysis stack with all the electrodes and membranes, was compact enough to fit in a trailer. They took this trailer to the Brackish Groundwater National Research Facility in Alamogordo, New Mexico, and ran it for six months. On average, it desalinated around 5,000 liters of water per day—enough for a community of roughly 2,000 people.
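Those two figures are consistent with basic drinking-water needs, as a quick check shows:

```python
# Back-of-the-envelope check on the reported output figures.
daily_output_liters = 5000
community_size = 2000

per_person = daily_output_liters / community_size
print(per_person)  # 2.5 liters per person per day, roughly the commonly
                   # cited minimum for drinking water
```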
“The nice thing with our technology is it is more of a control method. The concept can be scaled anywhere from this small community treatment system all the way to large-scale plants,” Bessette says. He said his team is now busy building an equivalent of a single water treatment train, a complete water desalination unit designed for big municipal water supplies. “Multiple such [systems] are implemented in such plants to increase the scale of water desalination process,” Bessette says. But he also thinks about small-scale solutions that can be fitted on a pickup truck and deployed rapidly in crisis scenarios like natural disasters.
“We’re also working on building a company. Me, two other staff engineers, and our professor. We’re really hoping to bring this technology to market and see that it reaches a lot of people. Our aim is to provide clean drinking water to folks in remote regions around the world,” Bessette says.
Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.
Swierk et al. use various methods, including Raman spectroscopy, nuclear magnetic resonance spectroscopy, and electron microscopy, to analyze a broad range of commonly used tattoo inks. This enables them to identify specific pigments and other ingredients in the various inks.
Earlier this year, Swierk’s team identified major labeling discrepancies in 45 of the 54 inks (83 percent) it tested in the US. Allergic reactions to the pigments, especially red inks, have already been documented. For instance, a 2020 study found a connection between contact dermatitis and how tattoos degrade over time. But additives can also have adverse effects. More than half of the tested inks contained unlisted polyethylene glycol, which can cause organ damage with repeated exposure, and 15 of the inks contained a potential allergen called propylene glycol.
Meanwhile, across the pond…
That’s a major reason why the European Commission has recently begun to crack down on harmful chemicals in tattoo ink, including banning two widely used blue and green pigments (Pigment Blue 15 and Pigment Green 7), claiming they are often of low purity and can contain hazardous substances. (US regulations are less strict than those adopted by the EU.) Swierk’s team has now expanded its chemical analysis to include 10 different tattoo inks from five different manufacturers supplying the European market.
According to Swierk et al., nine of those 10 inks did not meet EU regulations; five simply failed to list all the components, but four contained prohibited ingredients. The other main finding was that Raman spectroscopy is not very reliable for figuring out which of three common structures of Pigment Blue 15 has been used. (Only one has been banned.) Different instruments failed to reliably distinguish between the three forms, so the authors concluded that the current ban on Pigment Blue 15 is simply unenforceable.
“There are regulations on the book that are not being complied with, at least in part because enforcement is lagging,” said Swierk. “Our work cannot determine whether the issues with inaccurate tattoo ink labeling is intentional or unintentional, but at a minimum, it highlights the need for manufacturers to adopt better manufacturing standards. At the same time, the regulations that are on the books need to be enforced and if they cannot be enforced, like we argue in the case of Pigment Blue 15, they need to be reevaluated.”
Our planet is choking on plastics. Some of the worst offenders, which can take decades to degrade in landfills, are polypropylene—which is used for things such as food packaging and bumpers—and polyethylene, found in plastic bags, bottles, toys, and even mulch.
Polypropylene and polyethylene can be recycled, but the process can be difficult and often produces large quantities of the greenhouse gas methane. They are both polyolefins, which are the products of polymerizing ethylene and propylene, raw materials that are mainly derived from fossil fuels. The bonds of polyolefins are also notoriously hard to break.
Now, researchers at the University of California, Berkeley have come up with a method of recycling these polymers that uses catalysts that easily break their bonds, converting them into propylene and isobutylene, which are gasses at room temperature. Those gasses can then be recycled into new plastics.
“Because polypropylene and polyethylene are among the most difficult and expensive plastics to separate from each other in a mixed waste stream, it is crucial that [a recycling] process apply to both polyolefins,” the research team said in a study recently published in Science.
Breaking it down
The recycling process the team used is known as isomerizing ethenolysis, which relies on a catalyst to break down olefin polymer chains into their small molecules. Polyethylene and polypropylene bonds are highly resistant to chemical reactions because both of these polyolefins have long chains of single carbon-carbon bonds. Most polymers have at least one carbon-carbon double bond, which is much easier to break.
While isomerizing ethenolysis had been tried by the same researchers before, the previous catalysts were expensive metals that did not remain pure long enough to convert all of the plastic into gas. Using sodium on alumina followed by tungsten oxide on silica proved much more economical and effective, even though the high temperatures required for the reaction added a bit to the cost.
In both plastics, exposure to sodium on alumina broke each polymer chain into shorter polymer chains and created breakable carbon-carbon double bonds at the ends. The chains continued to break over and over. Both then underwent a second process known as olefin metathesis. They were exposed to a stream of ethylene gas flowing into a reaction chamber while being introduced to tungsten oxide on silica, which resulted in the breakage of the carbon-carbon bonds.
The reaction breaks all the carbon-carbon bonds in polyethylene and polypropylene, with the carbon atoms released during the breaking of these bonds ending up attached to molecules of ethylene. “The ethylene is critical to this reaction, as it is a co-reactant,” researcher R.J. Conk, one of the authors of the study, told Ars Technica. “The broken links then react with ethylene, which removes the links from the chain. Without ethylene, the reaction cannot occur.”
The entire chain is catalyzed until polyethylene is fully converted to propylene, and polypropylene is converted to a mixture of propylene and isobutylene.
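To make the repeated-scission idea concrete, here is a deliberately crude toy model in Python. It only tracks carbon counts as three-carbon propylene units are clipped off a polyethylene chain; it ignores the real mechanism, the kinetics, and the carbons contributed by the ethylene co-reactant, so it is bookkeeping, not chemistry.

```python
# Toy model of repeated chain scission, NOT real reaction stoichiometry:
# a chain of n carbons loses one 3-carbon propylene unit per
# "isomerize + ethenolyze" step, until the leftover stub is too short.

def depolymerize(chain_length_carbons):
    """Return (propylene molecules clipped, leftover carbons)."""
    propylene = 0
    remaining = chain_length_carbons
    while remaining >= 3:
        remaining -= 3  # one propylene (C3) clipped off the chain end
        propylene += 1
    return propylene, remaining

# A short polyethylene chain of 3,000 carbons:
print(depolymerize(3000))  # (1000, 0)
```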
This method has high selectivity, meaning it produces a large proportion of the desired products: propylene from polyethylene, and both propylene and isobutylene from polypropylene. Both chemicals are in high demand. Propylene is an important raw material for the chemical industry, while isobutylene is a monomer used in many polymers, including synthetic rubber, and also serves as a gasoline additive.
Mixing it up
Because plastics are often mixed at recycling centers, the researchers wanted to see what would happen if polypropylene and polyethylene underwent isomerizing ethenolysis together. The reaction was successful, converting the mixture into propylene and isobutylene, with slightly more propylene than isobutylene.
Mixtures also typically include contaminants in the form of additional plastics, so the team wanted to see whether the reaction would still work in their presence. They experimented with plastic objects that would otherwise be thrown away, including a centrifuge tube and a bread bag, both of which contained traces of other polymers besides polypropylene and polyethylene. The reaction yielded only slightly less propylene and isobutylene than it did with unadulterated versions of the polyolefins.
Another test involved introducing different plastics, such as PET and PVC, to polypropylene and polyethylene to see if that would make a difference. These did lower the yield significantly. If this approach is going to be successful, then all but the slightest traces of contaminants will have to be removed from polypropylene and polyethylene products before they are recycled.
While this recycling method sounds like it could prevent tons upon tons of waste, it will need to be scaled up enormously for this to happen. When the research team increased the scale of the experiment, it produced the same yield, which looks promising for the future. Still, we’ll need to build considerable infrastructure before this could make a dent in our plastic waste.
“We hope that the work described…will lead to practical methods for…[producing] new polymers,” the researchers said in the same study. “By doing so, the demand for production of these essential commodity chemicals starting from fossil carbon sources and the associated greenhouse gas emissions could be greatly reduced.”
As power utilities and industrial companies seek to use more renewable energy, the market for grid-scale batteries is expanding rapidly. Alternatives to lithium-ion technology may provide environmental, labor, and safety benefits. And these new chemistries can work in markets like the electric grid and industrial applications that lithium doesn’t address well.
“I think the market for longer-duration storage is just now emerging,” said Mark Higgins, chief commercial officer and president of North America at Redflow. “We have a lot of… very rapid scale-up in the types of projects that we’re working on and the size of projects that we’re working on. We’ve deployed about 270 projects around the world. Most of them have been small off-grid or remote-grid systems. What we’re seeing today is much more grid-connected types of projects.”
“Demand… seems to be increasing every day,” said Giovanni Damato, president of CMBlu Energy. Media projections of growth in this space are huge. “We’re really excited about the opportunity to… just be able to play in that space and provide as much capacity as possible.”
New industrial markets are also becoming active. Chemical plants, steel plants, and metal processing plants have not been able to deploy renewable energy well so far due to batteries’ fire hazards, said Mukesh Chatter, co-founder and CEO of Alsym Energy. “When you already are generating a lot of heat in these plants and there’s a risk of fire to begin with, you don’t want to deploy any battery that’s flammable.”
Chatter said that the definition of long-duration energy storage is not agreed upon by industry organizations. Still, there are a number of potential contenders developing storage for this market. Here, we’ll look at Redflow, CMBlu Energy, and BASF Stationary Energy Storage.
Zinc-bromine batteries
Redflow has been manufacturing zinc-bromine flow batteries since 2010, Higgins said. These batteries do not require the critical minerals that lithium-ion batteries need, minerals that sometimes come from parts of the world with unsafe labor practices or geopolitical risks. The minerals for these zinc-bromine batteries are affordable and easy to obtain.
The interconversion of energy between electrical and stored chemical energy takes place in the electrochemical cell. This consists of two half cells separated by a porous or an ion-exchange membrane. The battery can be constructed of low-cost and readily available materials, such as thermoplastics and carbon-based materials. Many parts of the battery can be recycled. Electrolytes can be recovered and reused, leading to low cost of ownership.
Building these can be quite different from other batteries. “I would say that our manufacturing process is much more akin to… an automotive manufacturing process than to [an] electronics manufacturing process… like [a] lithium-ion battery,” Higgins said. “Essentially, it is assembling batteries that are made out of plastic tanks, pumps, fans, [and] tubing. It’s a flow battery, so it’s a liquid that flows through the system that goes through an electrical stack that has cells in it, which is where most of Redflow’s intellectual property resides. The rest of the battery is all… parts that we can obtain just about anywhere.”
The charging and discharging happen inside an electrical stack. In the stack, zinc is plated onto a carbon surface during the charging process. It is then dissolved into the liquid during the discharging process, Higgins said.
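The energy bookkeeping of that plating step follows Faraday’s law: every zinc atom plated or dissolved moves two electrons. A quick sketch, using standard physical constants (the 100-gram zinc mass is an arbitrary example, not a Redflow specification):

```python
# Faraday's-law estimate of the charge stored by plated zinc.
# Constants are standard values; the zinc mass is illustrative only.

FARADAY = 96485.0     # coulombs per mole of electrons
M_ZINC = 65.38        # g/mol, molar mass of zinc
ELECTRONS_PER_ZN = 2  # Zn2+ + 2e- <-> Zn during charge/discharge

def capacity_amp_hours(zinc_grams):
    moles = zinc_grams / M_ZINC
    coulombs = moles * ELECTRONS_PER_ZN * FARADAY
    return coulombs / 3600.0  # convert coulombs to amp-hours

print(round(capacity_amp_hours(100.0), 1))  # ~82.0 Ah from 100 g of zinc
```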
The zinc-bromine electrolyte is derived from an industrial chemical that has been used in the oil and gas sector for a long time, Higgins added.
This battery cannot catch fire, and all of its parts are recyclable, Higgins told Ars. “You don’t have any of the toxic materials that you do in a lithium-ion battery.” The electrolyte liquid can be reused in other batteries. If it’s contaminated, it can be used by the oil and gas industry. If the battery leaks, the contents can be neutralized quickly and are subsequently not hazardous.
“Right now, we manufacture our batteries in Thailand,” Higgins said. “The process and wages are all fair wages and we follow all relevant environmental and labor standards.” The largest sources of bromine come from the Dead Sea or within the United States. The zinc comes from Northern Europe, the United States, or Canada.
The batteries typically use an annual maintenance program to replace components that wear out or fail, something that’s not possible with many other battery types. Higgins estimated that two to four years down the road, this technology will be “completely competitive with lithium-ion” from a cost perspective. Some government grants have helped with the commercialization process.
A lot of gold deposits are found embedded in quartz crystals.
One of the reasons gold is so valuable is that it’s highly unreactive: if you make something out of gold, it keeps its lustrous radiance. Even when you can get it to react with another material, the results are barely soluble, a combination that makes it difficult to purify away from other materials. That’s part of why a large majority of the gold we’ve obtained comes from deposits where it’s present in large chunks, some of them reaching hundreds of kilograms.
Those of you paying careful attention to the previous paragraph may have noticed a problem here: If gold is so difficult to get into its pure form, how do natural processes create enormous chunks of it? On Monday, a group of Australian researchers published a hypothesis, and a bit of evidence supporting it. They propose that an earthquake-triggered piezoelectric effect essentially electroplates gold onto quartz crystals.
The hypothesis
Approximately 75 percent of the gold humanity has obtained has come from what are called orogenic gold deposits. Orogeny is a term for the tectonic processes that build mountains, and orogenic gold deposits form in the seams where two bodies of rock are moving past each other. These areas are often filled with hot hydrothermal fluids, and the heat can increase the solubility of gold from “barely there” to “extremely low,” meaning generally less than a single milligram in a liter of water.
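That milligram-per-liter ceiling is exactly what makes large lumps of gold puzzling, as a quick back-of-the-envelope calculation shows (the nugget mass here is an illustrative choice, not a figure from the study):

```python
# How much fluid would have to give up ALL its dissolved gold to build
# one large deposit, at the solubility ceiling quoted in the text.

solubility_mg_per_liter = 1.0  # upper bound: "less than a single milligram"
nugget_kg = 100.0              # illustrative large deposit

liters_needed = nugget_kg * 1e6 / solubility_mg_per_liter  # kg -> mg
print(f"{liters_needed:.0e} liters")  # a hundred million liters, hence the
                                      # need for something that concentrates
                                      # deposition in one spot
```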
The other striking thing about these deposits is that they’re generally associated with the mineral quartz, a crystalline form of silicon dioxide. And that fact formed the foundation for the new hypothesis, which brings together a number of topics that are generally considered largely unrelated.
It turns out that quartz is the only abundant mineral that’s piezoelectric, meaning that it generates a charge when it’s placed under strain. While you don’t need to understand why that’s the case to follow this hypothesis, the researchers’ explanation of the piezoelectric effect is remarkably cogent and clear, so I’ll just quote it here for people who want to come away from this having learned something: “Quartz is the only common mineral that forms crystals lacking a center of symmetry (non-centrosymmetric). Non-centrosymmetric crystals distorted under stress have an imbalance in their internal electric configuration, which produces an electrical potential—or voltage—across the crystal that is directly proportional to the applied mechanical force.”
Quartz happens to be an insulator, so this electric potential doesn’t easily dissipate on its own. It can, however, be eliminated through the transfer of electrons to or from any materials that touch the quartz crystals, including fluids. In practice, that means the charge can drive redox (reduction/oxidation) reactions in any nearby fluids, potentially neutralizing any dissolved ions and causing them to come out of solution.
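To get a feel for the magnitudes involved, here is a back-of-the-envelope calculation that treats a stressed quartz crystal as a charged capacitor. The piezoelectric coefficient and dielectric constant are textbook values for quartz; the crystal size and applied force are arbitrary illustrative choices, not numbers from the study.

```python
# Rough scale of the piezoelectric effect in quartz: charge proportional
# to force (Q = d * F), then V = Q / C with the crystal as a capacitor.

EPS0 = 8.854e-12      # F/m, vacuum permittivity
D11_QUARTZ = 2.3e-12  # C/N, piezoelectric coefficient of quartz (textbook)
EPS_R_QUARTZ = 4.5    # approximate relative permittivity of quartz

def piezo_voltage(force_n, area_m2, thickness_m):
    charge = D11_QUARTZ * force_n                          # Q = d * F
    capacitance = EPS0 * EPS_R_QUARTZ * area_m2 / thickness_m
    return charge / capacitance

# 1 kN on a 1 cm x 1 cm x 1 cm crystal -> on the order of kilovolts,
# plenty of potential to drive redox chemistry in an adjacent fluid.
print(round(piezo_voltage(1000.0, 1e-4, 1e-2)))
```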
This has the potential to be self-reinforcing. Once a small metal deposit forms on the surface of quartz, it will ease the exchange of electrons with the fluid in its immediate vicinity, meaning more metal will be deposited in the same location. This will also lower the concentration of the metal in the nearby solution, which will favor the diffusion of additional metal ions into the location, meaning that the fluid itself doesn’t need to keep circulating past the same spot.
Finally, the concept also needs a source of strain to generate the piezoelectric effect in the first place. But remember that this is all happening in an active fault zone, so strain is not in short supply.
And the evidence
Figuring out whether this happens in active fault zones would be extremely challenging for all sorts of reasons. But it’s relatively easy to dunk some quartz crystals in a solution containing gold and see what happens. So the latter is the route the Australians took.
The gold came in the form of either a solution of gold chloride ions or a suspension of gold nanoparticles. The quartz crystals were either pure quartz or obtained from a gold-rich area and already containing some small gold deposits. The crystals themselves were subjected to strain at a frequency similar to that produced by small earthquakes, and the experiment was left to run for an hour.
An hour was enough to get small gold deposits to form on the pure quartz crystals, regardless of whether it was from dissolved gold or suspended gold nanoparticles. In the case of the naturally formed quartz, the gold ended up being deposited on the existing sites where gold metal is present, rather than forming additional deposits.
The researchers note that a lot of the quartz in deposits is disordered rather than in the form of single crystals. In disordered material, there are lots of small crystals oriented randomly, meaning the piezoelectric effect of any one of these crystals is typically canceled out by its neighbors. So, gold will preferentially form on single crystals, which also helps explain why it’s found in large lumps in these deposits.
So, this is a pretty compelling hypothesis—it explains something puzzling, relies on well-established processes, and has a bit of experimental support. Given that activity in active faults is likely to remain both slow and inaccessible, the next steps are probably going to involve getting longer-term information on the rate of deposition through this process and a physical comparison of these deposits with those found in natural settings.