

Ten cool science stories we almost missed


Bronze Age combat, moral philosophy and Reddit’s AITA, Mondrian’s fractal tree, and seven other fascinating papers.

There is rarely time to write about every cool science paper that comes our way; many worthy candidates sadly fall through the cracks over the course of the year. But as 2024 comes to a close, we’ve gathered ten of our favorite such papers at the intersection of science and culture as a special treat, covering a broad range of topics: from reenacting Bronze Age spear combat and applying network theory to the music of Johann Sebastian Bach, to Spider-Man inspired web-slinging tech and a mathematical connection between a turbulent phase transition and your morning cup of coffee. Enjoy!

Reenacting Bronze Age spear combat

An experiment with experienced fighters who spar freely using different styles. Credit: Valerio Gentile/CC BY

The European Bronze Age saw the rise of institutionalized warfare, evidenced by the many spearheads and similar weaponry archaeologists have unearthed. But how might these artifacts be used in actual combat? Dutch researchers decided to find out by constructing replicas of Bronze Age shields and spears and using them in realistic combat scenarios. They described their findings in an October paper published in the Journal of Archaeological Science.

There have been a couple of prior experimental studies on bronze spears, but per Valerio Gentile (now at the University of Göttingen) and coauthors, practical research to date has been quite narrow in scope, focusing on throwing weapons against static shields. Coauthors C.J. van Dijk of the National Military Museum in the Netherlands and independent researcher O. Ter Mors each had more than a decade of experience teaching traditional martial arts, specializing in medieval polearms and one-handed weapons. So they were ideal candidates for testing the replica spears and shields.

Of course, there is no direct information on prehistoric fighting styles, so van Dijk and Ter Mors relied on the basic biomechanics of combat movements with similar weapons detailed in historic manuals. They ran three versions of the experiment: one focused on engagement and controlled collisions, another on delivering wounding body blows, and the third on free sparring. They then studied the wear marks left on the spearheads and found they matched those on similar genuine weapons excavated from Bronze Age sites. They also gleaned helpful clues to the skills required to use such weapons.

DOI: Journal of Archaeological Science, 2024. 10.1016/j.jas.2024.106044 (About DOIs).

Physics of Ned Kahn’s kinetic sculptures

Shimmer Wall, The Franklin Institute, Philadelphia, Pennsylvania. Credit: Ned Kahn

Environmental artist and sculptor Ned Kahn is famous for his kinetic building facades, inspired by his own background in science. An exterior wall on the Children’s Museum of Pittsburgh, for instance, consists of hundreds of flaps that move in response to wind, creating distinctive visual patterns. Kahn used the same method to create his Shimmer Wall at Philadelphia’s Franklin Institute, as well as several other similar projects.

Physicists at Sorbonne Université in Paris have studied videos of Kahn’s kinetic facades and conducted experiments to measure the underlying physical mechanisms, outlined in a November paper published in the journal Physical Review Fluids. The authors analyzed 18 YouTube videos taken of six of Kahn’s kinetic facades, working with Kahn and building management to get the dimensions of the moving plates, scaling up from the video footage to get further information on spatial dimensions.

They also conducted their own wind tunnel experiments, using strings of pendulum plates. Their measurements confirmed that the kinetic patterns were propagating waves to create the flickering visual effects. The plates’ movement is driven primarily by their natural resonant frequencies at low speeds, and by pressure fluctuations from the wind at higher speeds.
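
For a feel for the low-speed regime, here is a minimal sketch (our illustration, not a calculation from the paper) of the natural frequency of a thin plate swinging freely about a hinge along its top edge, treated as a physical pendulum; the 10 cm plate length is an assumed, illustrative value.

```python
import math

def plate_natural_frequency(length_m: float, g: float = 9.81) -> float:
    """Natural frequency (Hz) of a thin, uniform plate swinging about a
    hinge along its top edge, treated as a physical pendulum:
    I = (1/3) m L^2 about the hinge, restoring torque m g (L/2) theta,
    so omega = sqrt(3 g / (2 L))."""
    omega = math.sqrt(3.0 * g / (2.0 * length_m))
    return omega / (2.0 * math.pi)

# Assumed 10 cm plate, purely illustrative: roughly a couple of hertz.
print(f"{plate_natural_frequency(0.10):.2f} Hz")
```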

DOI: Physical Review Fluids, 2024. 10.1103/PhysRevFluids.9.114604 (About DOIs).

How brewing coffee connects to turbulence

Trajectories in time traced out by turbulent puffs as they move along a simulated pipe and in experiments, with blue regions indicating puff “traffic jams.” Credit: Grégoire Lemoult et al., 2024

Physicists have been studying turbulence for centuries, particularly the transitional period where flows shift from predictably smooth (laminar flow) to highly turbulent. That transition is marked by localized turbulent patches known as “puffs,” which often form in fluids flowing through a pipe or channel. In an October paper published in the journal Nature Physics, physicists used statistical mechanics to reveal an unexpected connection between the process of brewing coffee and the behavior of those puffs.

Traditional mathematical models of percolation date back to the 1940s. Directed percolation is when the flow occurs in a specific direction, akin to how water moves through freshly ground coffee beans, flowing down in the direction of gravity. There’s a sweet spot for the perfect cuppa, where the rate of flow is sufficiently slow to absorb most of the flavor from the beans, but also fast enough not to back up in the filter. That sweet spot in your coffee brewing process corresponds to the aforementioned laminar-turbulent transition in pipes.
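
To make the directed-percolation analogy concrete, here is a minimal Monte Carlo sketch (an illustration, not the study’s model) of 1+1D directed bond percolation, in which activity flowing “downward” either survives or dies out depending on the wetting probability p; the lattice size, number of steps, and the quoted threshold near 0.64 are illustrative assumptions.

```python
import random

def survives(p: float, width: int = 200, steps: int = 200) -> bool:
    """One run of 1+1D directed bond percolation: each active site wets
    each of its two downstream neighbors independently with probability p.
    Returns True if any activity survives for `steps` rows."""
    active = {width // 2}                      # start from a single wet site
    for _ in range(steps):
        nxt = set()
        for s in active:
            for t in (s, s + 1):               # the two downstream neighbors
                if 0 <= t < width and random.random() < p:
                    nxt.add(t)
        active = nxt
        if not active:
            return False
    return True

# Survival probability rises sharply near the directed-percolation threshold
# (roughly p ~ 0.64 on this lattice), the analogue of the laminar-turbulent
# transition discussed above.
for p in (0.55, 0.60, 0.64, 0.70):
    rate = sum(survives(p) for _ in range(200)) / 200
    print(f"p = {p:.2f}: survival fraction ~ {rate:.2f}")
```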

Physicist Nigel Goldenfeld of the University of California, San Diego, and his coauthors used pressure sensors to monitor the formation of puffs in a pipe, focusing on how the puffs influenced each other’s motion. Next, they tried to mathematically model the relevant phase transitions to predict puff behavior. They found that the puffs behave much like cars moving on a freeway during rush hour: they are prone to traffic jams—i.e., when a turbulent patch matches the width of the pipe, causing other puffs to build up behind it—that form and dissipate on their own. And they tend to “melt” at the laminar-turbulent transition point.

DOI: Nature Physics, 2024. 10.1038/s41567-024-02513-0 (About DOIs).

Network theory and Bach’s music

In a network representation of music, notes are represented by nodes, and transitions between notes are represented by directed edges connecting the nodes. Credit: S. Kulkarni et al., 2024

When you listen to music, does your ability to remember or anticipate the piece tell you anything about its structure? Physicists at the University of Pennsylvania developed a model based on network theory to do just that, describing their work in a February paper published in the journal Physical Review Research. Johann Sebastian Bach’s works were an ideal choice given their highly mathematical structure, and the composer was prolific across so many different kinds of musical compositions—preludes, fugues, chorales, toccatas, concertos, suites, and cantatas—that useful comparisons were possible.

First, the authors built a simple “true” network for each composition, in which individual notes served as “nodes” and the transitions from note to note served as “edges” connecting them. Then they calculated the amount of information in each network. They found it was possible to tell the difference between compositional forms based on their information content (entropy). The more complex toccatas and fugues had the highest entropy, while simpler chorales had the lowest.
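
As a rough illustration of that construction (a simplified sketch, not the authors’ code), the snippet below builds a note-transition network from a toy note sequence and computes the Shannon entropy of its transition distribution; the example sequences are invented, not actual Bach.

```python
import math
from collections import Counter

def transition_entropy(notes: list[str]) -> float:
    """Treat notes as nodes and note-to-note transitions as directed edges,
    then return the Shannon entropy (bits) of the edge distribution, a
    rough proxy for the information content discussed above."""
    edges = Counter(zip(notes, notes[1:]))
    total = sum(edges.values())
    return -sum((n / total) * math.log2(n / total) for n in edges.values())

# Toy sequences (not actual Bach): a repetitive, chorale-like line versus a
# more varied, toccata-like line.
chorale_like = ["C", "D", "C", "D", "C", "D", "C", "D", "C"]
toccata_like = ["C", "E", "G", "B", "D", "F", "A", "C", "E"]
print(transition_entropy(chorale_like))  # 1.0 bit: only two distinct transitions
print(transition_entropy(toccata_like))  # 3.0 bits: every transition is different
```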

Next, the team wanted to quantify how effectively this information was communicated to the listener, a task made more difficult by the innate subjectivity of human perception. They developed a fuzzier “inferred” network model for this purpose, capturing an essential aspect of our perception: we find a balance between accuracy and cost, simplifying some details so as to make it easier for our brains to process incoming information like music.

The results: There were fewer differences between the true and inferred networks for Bach’s compositions than for randomly generated networks, suggesting that clustering and the frequent repetition of transitions (represented by thicker edges) in Bach networks were key to effectively communicating information to the listener. The next step is to build a multi-layered network model that incorporates elements like rhythm, timbre, chords, or counterpoint (a Bach specialty).

DOI: Physical Review Research, 2024. 10.1103/PhysRevResearch.6.013136 (About DOIs).

The philosophy of Reddit’s AITA

Count me among the many people practically addicted to Reddit’s “Am I the Asshole” (AITA) forum. It’s such a fascinating window into the intricacies of how flawed human beings navigate different relationships, whether personal or professional. That’s also what makes it a fantastic source of illustrative, commonplace dilemmas of moral decision-making for philosophers like Daniel Yudkin of the University of Pennsylvania. Relational context matters, as Yudkin and several co-authors ably demonstrated in a PsyArXiv preprint earlier this year.

For their study, Yudkin et al. compiled a dataset of nearly 370,000 AITA posts, along with over 11 million comments, posted between 2018 and 2021. They used machine learning to analyze the language used to sort all those posts into different categories. They relied on an existing taxonomy identifying six basic areas of moral concern: fairness/proportionality, feelings, harm/offense, honesty, relational obligation, and social norms.
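
As a loose illustration of how posts might be sorted into those six areas (a hedged sketch, not the authors’ pipeline), one could train a simple bag-of-words classifier on a hand-labeled sample; the posts and labels below are invented placeholders.

```python
# Requires scikit-learn; the tiny training set here is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labeled_posts = [
    ("AITA for splitting the bill unevenly with my roommates?", "fairness/proportionality"),
    ("AITA for telling my friend her haircut looks bad?", "feelings"),
    ("AITA for lying about why I missed the party?", "honesty"),
    ("AITA for skipping my mom's birthday dinner?", "relational obligation"),
]
texts, labels = zip(*labeled_posts)

# Bag-of-words features plus a linear classifier over the moral-concern areas.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

print(model.predict(["AITA for not calling my grandmother back?"]))
```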

Yudkin et al. identified 29 of the most common dilemmas in the AITA dataset and grouped them according to moral theme. Two of the most common were relational transgression and relational omission (failure to do what was expected), followed by behavioral over-reaction and unintended harm. Cheating and deliberate misrepresentation/dishonesty were the moral dilemmas rated most negatively in the dataset—even more so than intentional harm. Being judgmental was also evaluated very negatively, as it was often perceived as being self-righteous or hypocritical. The least negatively evaluated dilemmas were relational omissions.

As for relational context, cheating and broken promise dilemmas typically involved romantic partners like boyfriends rather than one’s mother, for example, while mother-related dilemmas more frequently fell under relational omission. Essentially, “people tend to disappoint their mothers but be disappointed by their boyfriends,” the authors wrote. Less close relationships, by contrast, tend to be governed by “norms of politeness and procedural fairness.” Hence, Yudkin et al. prefer to think of morality “less as a set of abstract principles and more as a ‘relational toolkit,’ guiding and constraining behavior according to the demands of the social situation.”

DOI: PsyArXiv, 2024. 10.31234/osf.io/5pcew (About DOIs).

Fractal scaling of trees in art

De grijze boom (Gray tree) by Piet Mondrian, 1911. Credit: Public domain

Leonardo da Vinci famously invented a so-called “rule of trees” as a guide to realistically depicting trees in artistic representations according to their geometric proportions. In essence, if you took all the branches of a given tree, folded them up and compressed them into something resembling a trunk, that trunk would have the same thickness from top to bottom. That rule in turn implies a fractal branching pattern, with a scaling exponent of about 2 describing the proportions between the diameters of nearby boughs and the number of boughs with a given diameter.

According to the authors of a preprint posted to the physics arXiv in February, however, recent biological research suggests a higher scaling exponent for the rule of trees: 3, a relation known as Murray’s Law. Their analysis of 16th-century Islamic architecture, Japanese paintings from the Edo period, and 20th-century European art showed fractal scaling between 1.5 and 2.5. However, when they analyzed an abstract tree painting by Piet Mondrian, they found it exhibited fractal scaling of 3, even though Mondrian painted it years before mathematicians formulated Murray’s Law and the tree features no explicit branching.
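
As a back-of-the-envelope illustration of what those exponents mean, the sketch below solves Leonardo’s relation d_parent^a = sum(d_child^a) for the exponent a; the branch diameters are assumed, illustrative values, not measurements from the paper or from any artwork.

```python
def branching_exponent(parent_d: float, child_ds: list[float]) -> float:
    """Solve parent_d**a == sum(d**a for d in child_ds) for the scaling
    exponent a by bisection. a ~ 2 corresponds to Leonardo's rule
    (cross-sectional area is preserved); a ~ 3 corresponds to Murray's law."""
    def excess(a: float) -> float:
        return sum(d ** a for d in child_ds) - parent_d ** a

    lo, hi = 1.0, 6.0
    for _ in range(100):
        mid = (lo + hi) / 2
        # excess() is positive below the root and negative above it (there is
        # exactly one crossing when every child is thinner than the parent).
        if excess(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Assumed diameters, chosen only to land near the two exponents of interest.
print(branching_exponent(10.0, [7.1, 7.1]))  # ~2: Leonardo-like
print(branching_exponent(10.0, [7.9, 7.9]))  # ~3: Murray-like
```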

The findings intrigued physicist Richard Taylor of the University of Oregon, whose work over the last 20 years includes analyzing fractal patterns in the paintings of Jackson Pollock. “In particular, I thought the extension to Mondrian’s ‘trees’ was impressive,” he told Ars earlier this year. “I like that it establishes a connection between abstract and representational forms. It makes me wonder what would happen if the same idea were to be applied to Pollock’s poured branchings.”

Taylor himself published a 2022 paper about climate change and how nature’s stress-reducing fractals might disappear in the future. “If we are pessimistic for a moment, and assume that climate change will inevitably impact nature’s fractals, then our only future source of fractal aesthetics will be through art, design and architecture,” he said. “This brings a very practical element to studies like [this].”

DOI: arXiv, 2024. 10.48550/arXiv.2402.13520 (About DOIs).

IDing George Washington’s descendants

A DNA study identified descendants of George Washington from unmarked remains. Credit: Public domain

DNA profiling is an incredibly useful tool in forensics, but the most common method—short tandem repeat (STR) analysis—typically doesn’t work when remains are badly degraded, particularly if they have been preserved with embalming methods using formaldehyde. This includes the remains of US service members who died in such past conflicts as World War II, Korea, Vietnam, and the Cold War. That’s why scientists at the Armed Forces Medical Examiner System’s identification lab at the Dover Air Force Base have developed new DNA sequencing technologies.

They used those methods to identify the previously unmarked remains of descendants of George Washington, according to a March paper published in the journal iScience. The team tested three sets of remains and compared the results with those of a known living descendant, using methods for assessing paternal and maternal relationships, as well as a new method for next-generation sequencing data involving some 95,000 single-nucleotide polymorphisms (SNPs) in order to better predict more distant ancestry. The combined data confirmed that the remains belonged to Washington’s descendants and the new method should help do the same for the remains of as-yet-unidentified service members.

In related news, in July, forensic scientists successfully used descendant DNA to identify a victim of the 1921 Tulsa Race Massacre in Oklahoma, buried in a mass grave containing more than a hundred victims. C.L. Daniel was a World War I veteran, still in his 20s when he was killed. More than 120 such graves have been found since 2020, with DNA collected from around 30 sets of remains, but this is the first time those remains have been directly linked to the massacre. There are at least 17 other victims in the grave where Daniel’s remains were found.

DOI: iScience, 2024. 10.1016/j.isci.2024.109353 (About DOIs).

Spidey-inspired web-slinging tech

A stream of liquid silk quickly turns to a strong fiber that sticks to and lifts objects. Credit: Marco Lo Presti et al., 2024

Over the years, researchers in Tufts University’s Silklab have come up with all kinds of ingenious bio-inspired uses for the sticky fibers found in silk moth cocoons: adhesive glues, printable sensors, edible coatings, and light-collecting materials for solar cells, to name a few. Their latest innovation is a web-slinging technology inspired by Spider-Man’s ability to shoot webbing from his wrists, described in an October paper published in the journal Advanced Functional Materials.

Coauthor Marco Lo Presti was cleaning glassware with acetone in the lab one day when he noticed something that looked a lot like webbing forming on the bottom of a glass. He realized this could be the key to better replicating spider threads for the purpose of shooting the fibers from a device like Spider-Man—something actual spiders don’t do. (They spin the silk, find a surface, and draw out lines of silk to build webs.)

The team boiled silk moth cocoons in a solution to break them down into proteins called fibroin. The fibroin was then extruded through bore needles into a stream. Spiking the fibroin solution with just the right additives will cause it to solidify into fiber once it comes into contact with air. For the web-slinging technology, they added dopamine to the fibroin solution and then shot it through a needle in which the solution was surrounded by a layer of acetone, which triggered solidification.

The acetone quickly evaporated, leaving just the webbing attached to whatever object it happened to hit. The team tested the resulting fibers and found they could lift a steel bolt, a tube floating on water, a partially buried scalpel, and a wooden block—all from as far away as 12 centimeters. Sure, natural spider silk is about 1,000 times stronger than these fibers, but this is still a significant step forward that paves the way for novel future technological applications.

DOI: Advanced Functional Materials, 2024. 10.1002/adfm.202414219

Solving a mystery of a 12th century supernova

Pa 30 is the supernova remnant of SN 1181. Credit: unWISE (D. Lang)/CC BY-SA 4.0

In 1181, astronomers in China and Japan recorded the appearance of a “guest star” that shone as bright as Saturn and was visible in the sky for six months. We now know it was a supernova (SN1181), one of only five such known events occurring in our Milky Way. Astronomers got a closer look at the remnant of that supernova and have determined the nature of strange filaments resembling dandelion petals that emanate from a “zombie star” at its center, according to an October paper published in The Astrophysical Journal Letters.

The Chinese and Japanese astronomers only recorded an approximate location for the unusual sighting, and for centuries no one managed to make a confirmed identification of a likely remnant from that supernova. Then, in 2021, astronomers measured the speed of expansion of a nebula known as Pa 30, which enabled them to determine its age: around 1,000 years, roughly coinciding with the recorded appearance of SN1181. Pa 30 is an unusual remnant because of its zombie star—most likely itself a remnant of the original white dwarf that produced the supernova.

This latest study relied on data collected by Caltech’s Keck Cosmic Web Imager, a spectrograph at the Keck Observatory in Hawaii. One of the unique features of this instrument is that it can measure the motion of matter in a supernova and use that data to create something akin to a 3D movie of the explosion. The authors were able to create such a 3D map of Pa 30 and calculated that the zombie star’s filaments have ballistic motion, moving at approximately 1,000 kilometers per second.

Nor has that velocity changed since the explosion, enabling them to date that event almost exactly to 1181. The findings also raised fresh questions: the ejected filament material is asymmetrical, which is unusual for a supernova remnant. The authors suggest the asymmetry may originate with the initial explosion.

There’s also a weird inner gap around the zombie star. Both will be the focus of further research.
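
The dating argument itself is simple kinematics: filaments coasting at a constant speed cover a distance proportional to the elapsed time. As a rough consistency check using only the quoted 1,000 km/s, here is the expected present-day extent of material launched in 1181 (the resulting ~0.86 parsec is our own arithmetic, not a figure from the paper).

```python
KM_PER_PARSEC = 3.086e13
SECONDS_PER_YEAR = 3.156e7

velocity_km_s = 1_000            # ballistic filament speed reported in the study
elapsed_years = 2024 - 1181      # time since the recorded "guest star"

extent_km = velocity_km_s * elapsed_years * SECONDS_PER_YEAR
print(f"expected filament extent: ~{extent_km / KM_PER_PARSEC:.2f} pc")  # ~0.86 pc
```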

DOI: Astrophysical Journal Letters, 2024. 10.3847/2041-8213/ad713b (About DOIs).

Reviving a “lost” 16th century score

Fragment of music from The Aberdeen Breviary: Volume 1. Credit: National Library of Scotland/CC BY 4.0

Never underestimate the importance of marginalia in old manuscripts. Scholars from the University of Edinburgh and KU Leuven in Belgium can attest to that, having discovered a fragment of “lost” music from 16th-century pre-Reformation Scotland in a collection of worship texts. The team was even able to reconstruct the fragment and record it to get a sense of what music sounded like from that period in northeast Scotland, as detailed in a December paper published in the journal Music and Letters.

King James IV of Scotland commissioned the printing of several copies of The Aberdeen Breviary—a collection of prayers, hymns, readings, and psalms for daily worship—so that his subjects wouldn’t have to import such texts from England or Europe. One 1510 copy, known as the “Glamis copy,” is currently housed in the National Library of Scotland in Edinburgh. It was while examining handwritten annotations in this copy that the authors discovered the musical fragment on a page bound into the book—so it hadn’t been slipped between the pages at a later date.

The team figured out the piece was polyphonic, and then realized it was the tenor part from a harmonization for three or four voices of the hymn “Cultor Dei,” typically sung at night during Lent. (You can listen to a recording of the reconstructed composition here.) The authors also traced some of the history of this copy of The Aberdeen Breviary, including its use at one point by a rural chaplain at Aberdeen Cathedral, before a Scottish Catholic acquired it as a family heirloom.

“Identifying a piece of music is a real ‘Eureka’ moment for musicologists,” said coauthor David Coney of Edinburgh College of Art. “Better still, the fact that our tenor part is a harmony to a well-known melody means we can reconstruct the other missing parts. As a result, from just one line of music scrawled on a blank page, we can hear a hymn that had lain silent for nearly five centuries, a small but precious artifact of Scotland’s musical and religious traditions.”

DOI: Music and Letters, 2024. 10.1093/ml/gcae076 (About DOIs).

Photo of Jennifer Ouellette

Jennifer is a senior reporter at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.


YouTube tries convincing record labels to license music for AI song generator

Jukebox zeroes —

Video site needs labels’ content to legally train AI song generators.

Chris Ratcliffe/Bloomberg via Getty

YouTube is in talks with record labels to license their songs for artificial intelligence tools that clone popular artists’ music, hoping to win over a skeptical industry with upfront payments.

The Google-owned video site needs labels’ content to legally train AI song generators, as it prepares to launch new tools this year, according to three people familiar with the matter.

The company has recently offered lump sums of cash to the major labels—Sony, Warner, and Universal—to try to convince more artists to allow their music to be used in training AI software, according to several people briefed on the talks.

However, many artists remain fiercely opposed to AI music generation, fearing it could undermine the value of their work. Any move by a label to force their stars into such a scheme would be hugely controversial.

“The industry is wrestling with this. Technically the companies have the copyrights, but we have to think through how to play it,” said an executive at a large music company. “We don’t want to be seen as a Luddite.”

YouTube last year began testing a generative AI tool that lets people create short music clips by entering a text prompt. The product, initially named “Dream Track,” was designed to imitate the sound and lyrics of well-known singers.

But only 10 artists agreed to participate in the test phase, including Charli XCX, Troye Sivan and John Legend, and Dream Track was made available to just a small group of creators.

YouTube wants to sign up “dozens” of artists to roll out a new AI song generator this year, said two of the people.

YouTube said: “We’re not looking to expand Dream Track but are in conversations with labels about other experiments.”

Licenses or lawsuits

YouTube is seeking new deals at a time when AI companies such as OpenAI are striking licensing agreements with media groups to train large language models, the systems that power AI products such as the ChatGPT chatbot. Some of those deals are worth tens of millions of dollars to media companies, insiders say.

The deals being negotiated in music would be different. They would not be blanket licenses but rather would apply to a select group of artists, according to people briefed on the discussions.

It would be up to the labels to encourage their artists to participate in the new projects. That means the final amounts YouTube might be willing to pay the labels are at this stage undetermined.

The deals would look more like the one-off payments from social media companies such as Meta or Snap to entertainment groups for access to their music, rather than the royalty-based arrangements labels have with Spotify or Apple, these people said.

YouTube’s new AI tool, which is unlikely to carry the Dream Track brand, could form part of YouTube’s Shorts platform, which competes with TikTok. Talks continue and deal terms could still change, the people said.

YouTube’s latest move comes as the leading record companies on Monday sued two AI start-ups, Suno and Udio, which they allege are illegally using copyrighted recordings to train their AI models. A music industry group is seeking “up to $150,000 per work infringed,” according to the filings.

After facing the threat of extinction following the rise of Napster in the 2000s, music companies are trying to get ahead of disruptive technology this time around. The labels are keen to get involved with licensed products that use AI to create songs using their music copyrights—and get paid for it.

Sony Music, which did not participate in the first phase of YouTube’s AI experiment, is in negotiations with the tech group to make available some of its music to the new tools, said a person familiar with the matter. Warner and Universal, whose artists participated in the test phase, are also in talks with YouTube about expanding the product, these people said.

In April, more than 200 musicians including Billie Eilish and the estate of Frank Sinatra signed an open letter.

“Unchecked, AI will set in motion a race to the bottom that will degrade the value of our work and prevent us from being fairly compensated for it,” the letter said.

YouTube added: “We are always testing new ideas and learning from our experiments; it’s an important part of our innovation process. We will continue on this path with AI and music as we build for the future.”

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.


Sony Music opts out of AI training for its entire catalog

Taking a hard line —

Music group contacts more than 700 companies to prohibit use of content

The Sony Music letter expressly prohibits artificial intelligence developers from using its music — which includes artists such as Beyoncé.

Kevin Mazur/WireImage for Parkwood via Getty Images

Sony Music is sending warning letters to more than 700 artificial intelligence developers and music streaming services globally in the latest salvo in the music industry’s battle against tech groups ripping off artists.

The Sony Music letter, which has been seen by the Financial Times, expressly prohibits AI developers from using its music—which includes artists such as Harry Styles, Adele and Beyoncé—and opts out of any text and data mining of any of its content for any purposes such as training, developing or commercializing any AI system.

Sony Music is sending the letter to companies developing AI systems including OpenAI, Microsoft, Google, Suno, and Udio, according to those close to the group.

The world’s second-largest music group is also sending separate letters to streaming platforms, including Spotify and Apple, asking them to adopt “best practice” measures to protect artists and songwriters and their music from scraping, mining and training by AI developers without consent or compensation. It has asked them to update their terms of service, making it clear that mining and training on its content is not permitted.

Sony Music declined to comment further.

The letter, which is being sent to tech companies around the world this week, marks an escalation of the music group’s attempts to stop the melodies, lyrics and images from copyrighted songs and artists being used by tech companies to produce new versions or to train systems to create their own music.

The letter says that Sony Music and its artists “recognize the significant potential and advancement of artificial intelligence” but adds that “unauthorized use . . . in the training, development or commercialization of AI systems deprives [Sony] of control over and appropriate compensation.”

It says: “This letter serves to put you on notice directly, and reiterate, that [Sony’s labels] expressly prohibit any use of [their] content.”

Executives at the New York-based group are concerned that their music has already been ripped off, and want to set out a clearly defined legal position that would be the first step to taking action against any developer of AI systems it considers to have exploited its music. They argue that Sony Music would be open to doing deals with AI developers to license the music, but want to reach a fair price for doing so.

The letter says: “Due to the nature of your operations and published information about your AI systems, we have reason to believe that you and/or your affiliates may already have made unauthorized uses [of Sony content] in relation to the training, development or commercialization of AI systems.”

Sony Music has asked developers to provide details of all content used by next week.

The letter also reflects concerns over the fragmented approach to AI regulation around the world. Global regulations over AI vary widely, with some regions moving forward with new rules and legal frameworks to cover the training and use of such systems but others leaving it to creative industries companies to work out relationships with developers.

In many countries around the world, particularly in the EU, copyright owners are advised to state publicly that content is not available for data mining and training for AI.

The letter says the prohibition includes using any bot, spider, scraper or automated program, tool, algorithm, code, process or methodology, as well as any “automated analytical techniques aimed at analyzing text and data in digital form to generate information, including patterns, trends, and correlations.”

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.


Song lyrics are getting more repetitive, angrier

The song remains the same —

An analysis of 50 years of popular music lyrics reveals a number of trends.

A female singer gestures towards an enthusiastic crowd.

From ‘80s new wave to ‘90s grunge to the latest pop single, music has changed a lot over the decades. Those changes have come not only in terms of sound, though; lyrics have also evolved as time has passed.

So what has changed about the lyrics we can’t get out of our heads? After analyzing 12,000 English-language pop, rock, rap, R&B, and country songs released between 1970 and 2020, researcher Eva Zangerle of the University of Innsbruck and her team have found that lyrics have been getting simpler and more repetitive over time. This trend is especially evident in rap and rock, but it applies to other genres as well. The team also found that lyrics tend to be more personal and emotionally charged now than they were 50 years ago.

Know the words…

“Just as literature can be considered a portrayal of society, lyrics also provide a reflection of a society’s shifting norms, emotions, and values over time,” the researchers wrote in a study recently published in Scientific Reports.

That’s why Zangerle created a dataset to find out the different ways in which lyrics have changed. She and her colleagues used the virtual music encyclopedia Genius, which also provides release year and genre information. From that dataset, the team pulled data on the structure, language, emotion, and complexity of songs. Five genres—pop, rock, rap, R&B, and country—were chosen because they had the most lyrics among songs popular on the streaming platform last.fm.

There were two types of analyses done on the music. The first looked for the lyrical trends that were most prevalent for each release year, while the second went deeper into online views of lyrics, characteristics of lyrics (such as emotion), and release year. The researchers obtained the play count from last.fm and the lyrics view count from Genius.

How often people view a song’s lyrics is unexpectedly important. Unlike play counts, this stat captures interest in the lyrics themselves, independent of how popular the song or genre is.

…and the meaning

What can lyrics tell us about different genres and eras? Results for the first analysis showed that certain characteristics are most important across genres, including repeated lines, choruses, and emotional language. The genres in which emotion was most important were country and R&B.

Repeated lines increased over the decades in all genres analyzed, and later lyrics contain more choruses than earlier ones. These increases are further proof that songs have become simpler and more repetitive since the ‘70s.

Lyrics also became more personal and angrier across all genres studied. Personal lyrics were identified by the number of personal pronouns, which especially increased in rap and pop, while rock and R&B saw moderate increases and country stayed nearly the same. Anger and other negative emotions (as expressed through words associated with these emotions) also increased across genres. Rap had the highest increase here, especially in anger, while country showed the lowest increase. Positive emotions decreased in pop and rock, while they increased somewhat in rap.
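
As a rough illustration of how such lyric features can be quantified (an illustrative sketch, not the authors’ actual feature extraction), here are two simple per-song measures: the fraction of lines that repeat an earlier line and the rate of first-person pronouns.

```python
import re

def lyric_metrics(lyrics: str) -> dict:
    """Two crude per-song metrics in the spirit of the study: how often a
    line repeats an earlier line (repetitiveness) and how often first-person
    singular pronouns appear (how 'personal' the lyrics are)."""
    lines = [ln.strip().lower() for ln in lyrics.splitlines() if ln.strip()]
    repeated = sum(1 for i, ln in enumerate(lines) if ln in lines[:i])
    words = re.findall(r"[a-z']+", lyrics.lower())
    personal = sum(w in {"i", "me", "my", "mine", "i'm", "i'll", "i've"} for w in words)
    return {
        "repeated_line_fraction": repeated / len(lines) if lines else 0.0,
        "personal_pronouns_per_100_words": 100 * personal / len(words) if words else 0.0,
    }

# Invented toy lyric, used only to show the metrics in action.
print(lyric_metrics("I want you back\nI want you back\nMy heart is on the line"))
```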

When looking at the results from the second analysis, Zangerle noticed that lyric views were higher for older rock songs than newer ones, and vice versa for country, which had lower view counts for older songs and higher view counts for new songs. This means that the popularity of country lyrics has increased over time in comparison to rock. Listening count had no relationship to this, meaning interest in the sound of a song was not related to interest in its lyrics.

Through the decades, it seems that music has gotten simpler, more repetitive, more personal, and more emotional—especially angrier. The study didn’t look into what events and societal changes might have influenced this trend, but the researchers still had some sociological insights. They think pop is all about record sales and what’s hot from one moment to the next, while the preference for older rock songs shows that the main audience of rock is middle-class and against commercialism. Emotionally charged words could also convey feelings toward shifts in society.

The researchers “believe that the role of lyrics has been understudied and that our results can be used to further study and monitor cultural artifacts and shifts in society,” the study said.

Scientific Reports, 2024.  DOI: 10.1038/s41598-024-55742-x

Song lyrics are getting more repetitive, angrier Read More »

new-ai-music-generator-udio-synthesizes-realistic-music-on-demand

New AI music generator Udio synthesizes realistic music on demand

Battle of the AI bands —

But it still needs trial and error to generate high-quality results.

A screenshot of AI-generated songs listed on Udio on April 10, 2024.

Benj Edwards

Between 2002 and 2005, I ran a music website where visitors could submit song titles that I would write and record a silly song around. In the liner notes for my first CD release in 2003, I wrote about a day when computers would potentially put me out of business, churning out music automatically at a pace I could not match. While I don’t actively post music on that site anymore, that day is almost here.

On Wednesday, a group of ex-DeepMind employees launched Udio, a new AI music synthesis service that can create novel high-fidelity musical audio from written prompts, including user-provided lyrics. It’s similar to Suno, which we covered on Monday. With some key human input, Udio can create facsimiles of human-produced music in genres like country, barbershop quartet, German pop, classical, hard rock, hip hop, show tunes, and more. It’s currently free to use during a beta period.

Udio is also freaking out some musicians on Reddit. As we mentioned in our Suno piece, Udio is exactly the kind of AI-powered music generation service that over 200 musical artists were afraid of when they signed an open protest letter last week.

But as impressive as the Udio songs first seem from a technical AI-generation standpoint (not necessarily judging by musical merit), its generation capability isn’t perfect. We experimented with its creation tool and the results felt less impressive than those created by Suno. The high-quality musical samples showcased on Udio’s site likely resulted from a lot of creative human input (such as human-written lyrics) and cherry-picking the best compositional parts of songs out of many generations. In fact, Udio lays out a five-step workflow to build a 1.5-minute-long song in a FAQ.

For example, we created an Ars Technica “Moonshark” song on Udio using the same prompt as one we used previously with Suno. In its raw form, the results sound half-baked and almost nightmarish (here is the Suno version for comparison). It’s also a lot shorter by default at 32 seconds compared to Suno’s 1-minute and 32-second output. But Udio allows songs to be extended, or you can regenerate a poor result with different prompts to get different results.

After registering a Udio account, anyone can create a track by entering a text prompt that can include lyrics, a story direction, and musical genre tags. Udio then tackles the task in two stages. First, it utilizes a large language model (LLM) similar to ChatGPT to generate lyrics (if necessary) based on the provided prompt. Next, it synthesizes music using a method that Udio does not disclose, but it’s likely a diffusion model, similar to Stability AI’s Stable Audio.

From the given prompt, Udio’s AI model generates two distinct song snippets for you to choose from. You can then publish the song for the Udio community, download the audio or video file to share on other platforms, or directly share it on social media. Other Udio users can also remix or build on existing songs. Udio’s terms of service say that the company claims no rights over the musical generations and that they can be used for commercial purposes.

Although the Udio team has not revealed the specific details of its model or training data (which is likely filled with copyrighted material), it told Tom’s Guide that the system has built-in measures to identify and block tracks that too closely resemble the work of specific artists, ensuring that the generated music remains original.

And that brings us back to humans, some of whom are not taking the onset of AI-generated music very well. “I gotta be honest, this is depressing as hell,” wrote one Reddit commenter in a thread about Udio. “I’m still broadly optimistic that music will be fine in the long run somehow. But like, why do this? Why automate art?”

We’ll hazard an answer by saying that replicating art is a key target for AI research because the results can be inaccurate and imprecise and still seem notable or gee-whiz amazing, which is a key characteristic of generative AI. It’s flashy and impressive-looking while allowing for a general lack of quantitative rigor. We’ve already seen AI come for still images, video, and text with varied results regarding representative accuracy. Fully composed musical recordings seem to be next on the list of AI hills to (approximately) conquer, and the competition is heating up.


Billie Eilish, Pearl Jam, 200 artists say AI poses existential threat to their livelihoods

artificial music —

Artists say AI will “set in motion a race to the bottom that will degrade the value of our work.”

Billie Eilish attends the 2024 Vanity Fair Oscar Party hosted by Radhika Jones at the Wallis Annenberg Center for the Performing Arts on March 10, 2024, in Beverly Hills, California.

On Tuesday, the Artist Rights Alliance (ARA) announced an open letter critical of AI signed by over 200 musical artists, including Pearl Jam, Nicki Minaj, Billie Eilish, Stevie Wonder, Elvis Costello, and the estate of Frank Sinatra. In the letter, the artists call on AI developers, technology companies, platforms, and digital music services to stop using AI to “infringe upon and devalue the rights of human artists.” A tweet from the ARA added that AI poses an “existential threat” to their art.

Visual artists began protesting the advent of generative AI after the rise of the first mainstream AI image generators in 2022, and as generative AI research has since been extended to other forms of creative media, that protest has spread to professionals in other creative domains, such as writers, actors, filmmakers—and now musicians.

“When used irresponsibly, AI poses enormous threats to our ability to protect our privacy, our identities, our music and our livelihoods,” the open letter states. It alleges that some of the “biggest and most powerful” companies (unnamed in the letter) are using the work of artists without permission to train AI models, with the aim of replacing human artists with AI-created content.

A list of musical artists that signed the ARA open letter against generative AI.

In January, Billboard reported that AI research taking place at Google DeepMind had trained an unnamed music-generating AI on a large dataset of copyrighted music without seeking artist permission. That report may have been referring to Google’s Lyria, an AI-generation model announced in November that the company positioned as a tool for enhancing human creativity. The tech has since powered musical experiments from YouTube.

We’ve previously covered AI music generators that seemed fairly primitive throughout 2022 and 2023, such as Riffusion, Google’s MusicLM, and Stability AI’s Stable Audio. We’ve also covered open source musical voice-cloning technology that is frequently used to make musical parodies online. While we have yet to see an AI model that can generate perfect, fully composed high-quality music on demand, the quality of outputs from music synthesis models has been steadily improving over time.

In considering AI’s potential impact on music, it’s instructive to remember historical instances where tech innovations initially sparked concern among artists. For instance, the introduction of synthesizers in the 1960s and 1970s and the advent of digital sampling in the 1980s both faced scrutiny and fear from parts of the music community, but the music industry eventually adjusted.

While we’ve seen fear of the unknown related to AI going around quite a bit for the past year, it’s possible that AI tools will be integrated into the music production process like any other music production tool or technique that came before. It’s also possible that even if that kind of integration comes to pass, some artists will still get hurt along the way—and the ARA wants to speak out about it before the technology progresses further.

“Race to the bottom”

The Artist Rights Alliance is a nonprofit advocacy group that describes itself as an “alliance of working musicians, performers, and songwriters fighting for a healthy creative economy and fair treatment for all creators in the digital world.”

The signers of the ARA’s open letter say they acknowledge the potential of AI to advance human creativity when used responsibly, but they also claim that replacing artists with generative AI would “substantially dilute the royalty pool” paid out to artists, which could be “catastrophic” for many working musicians, artists, and songwriters who are trying to make ends meet.

In the letter, the artists say that unchecked AI will set in motion a race to the bottom that will degrade the value of their work and prevent them from being fairly compensated. “This assault on human creativity must be stopped,” they write. “We must protect against the predatory use of AI to steal professional artists’ voices and likenesses, violate creators’ rights, and destroy the music ecosystem.”

The emphasis on the word “human” in the letter is notable (“human artist” appears twice, and “human creativity” and “human artistry” each appear once) because it suggests the clear distinction the signers are drawing between the work of human artists and the output of AI systems. It implies recognition that we’ve entered a new era where not all creative output is made by people.

The letter concludes with a call to action, urging all AI developers, technology companies, platforms, and digital music services to pledge not to develop or deploy AI music-generation technology, content, or tools that undermine or replace the human artistry of songwriters and artists or deny them fair compensation for their work.

While it’s unclear whether companies will meet those demands, so far, protests from visual artists have not stopped development of ever-more advanced image-synthesis models. On Threads, frequent AI industry commentator Dare Obasanjo wrote, “Unfortunately this will be as effective as writing an open letter to stop the sun from rising tomorrow.”


Official AmazeVR Concerts App Launches With an Exclusive Zara Larsson Concert

Do you remember missing an amazing concert by your favorite artist because you could not travel to another country or continent to attend it? This is no longer a problem. Thanks to AmazeVR, anyone can experience live shows using their newly-launched VR Concerts app.

Drawing on its previous experience working with artists like Megan Thee Stallion and Ceraadi, the company is celebrating the launch of the AmazeVR Concerts app with “Zara Larsson VR Concert,” a one-of-a-kind show by Swedish pop star Zara Larsson. Now, anyone can install the AmazeVR Concerts app and attend any concert available on the platform from the comfort of their home.

Virtual Events – the Future of Entertainment

The global health crisis we experienced made us rethink all types of interactions, from healthcare appointments and business meetings to concerts and theater shows. The VR concerts app developed by AmazeVR is one of the latest additions to immersive and interactive tools for entertainment.

This is a huge step forward both for artists and audiences. For artists, VR shows allow them to interact with more fans and monetize their work in new ways. For music fans, the barriers represented by long distances and finances for traveling suddenly disappear.

Zara Larsson Excited to Collaborate with AmazeVR

Known for hits such as “Lush Life”, “Ain’t My Fault”, and “End of Time”, Swedish pop star Zara Larsson exuded enthusiasm about collaborating with AmazeVR for the launch of the AmazeVR Concerts app.

“I’ve always believed that live music has the power to unite and transcend boundaries. As an artist, finding new ways to connect with my fans and deliver a truly immersive and unforgettable experience is super important to me,” she said in a press release shared with ARPost. “I’m thrilled to be working with AmazeVR to break through the fourth wall, and directly into the homes of fans around the world.”

Bringing Artists and Fans Together in the Virtual World

For AmazeVR, the VR Concerts app, available on Meta Quest 2 (App Lab) and SteamVR, is the crowning achievement of years of developing and improving immersive solutions for the entertainment industry. Creating its first VR concerts and measuring the public response to them showed the company that it was on the right path.

“At AmazeVR we are ushering a new wave of innovation for music experiences, by providing artists with extraordinary and unparalleled avenues to be up close and personal with their fans,” said AmazeVR co-CEO and co-founder Steve Lee. “It is an honor to be launching the AmazeVR app alongside such an incredible artist like Zara. Her creativity has come together to create a showstopping performance and we can’t wait for her fans to enjoy the experience.”

A Busy Schedule for the Newly Launched AmazeVR Concerts App

The virtual reality concert experience app is set to attract fans of all types of music, including pop-rock, hip-hop, K-pop, rap, and more. Right now, the app is downloadable for free and offers one free song per artist. For the exclusive Zara Larsson VR concert, fans can purchase access for one year at an exclusive launch price of $6.99.


Vibe to Hit Music and Hit Fitness Goals With New Music Collections in FitXR

 

FitXR is capping the year off with new music collections of chart-topping hits. The latest collections feature over two dozen new workout classes in Box, HIIT, and Dance that’ll give users a head start in their fitness resolution goals next year.

Themed Collections of Music Hits From Then ‘Til Now

On November 25, FitXR launched its 80s and 90s Favorites Collection featuring iconic hits from Billy Idol, Chic, A-ha, Wham!, Chaka Khan, Survivor, Sister Sledge, the Weather Girls, and Backstreet Boys. Get your fit bod back and work out with throwback music in four Box, three HIIT, and three Dance studio classes.

Right on the heels of this launch is the release of two more collections on December 12. FitXR is bringing us Pop Hits Volume 1 and Holiday Jams for 15 more new classes.

Pop Hits Volume 1 Collection lets you groove to music from Miley Cyrus, Lady Gaga, Pink, Dua Lipa, Zara Larsson, Troye Sivan, Megan Thee Stallion, and the British girl group Little Mix. Enjoy these pop hits in three Box, two HIIT, and three Dance classes.

To feel the holiday cheer you can work out in two Box, two HIIT, and three Dance classes set to music from the Holiday Jams Collection—which will go live just in time to get you looking your best this season while spreading holiday cheer.

We’re in for more awesome music as FitXR prepares to launch Pop Hits Volume 2 in February 2023. Along with more hits from Lady Gaga, Pink, Dua Lipa, and Megan Thee Stallion, FitXR is adding smash hits from Green Day, Tegan & Sara, and Nicky Youre. Music from rappers DMX, Lil Nas X, and SAINt JHN will also pump up classes in this collection.

FitXR: Good Music for Great Workouts

FitXR brings more than good music. At the core of this virtual reality fitness club is a vast library of total body workouts designed by top fitness experts. All on-demand classes are choreographed by professionals and led by world-class trainers.

During the workouts, users are transported to a virtual workout environment as they don their VR headsets. The digital avatars of trainers direct their movements and provide motivation. Multiplayer mode allows users to interact with each other through voice.

An analysis of energy expenditure shows that one workout in FitXR burns approximately 8.34 to 8.84 calories per minute. That’s similar to the energy used in one game of tennis. This substantiates the effectiveness of the virtual workouts even when done in limited space inside homes.

FitXR is available on Meta Quest. A monthly subscription of $9.99 gives members access to on-demand classes within three distinct workout studios—Box, Dance, and HIIT. It also gives them access to a community of professional trainers and fitness enthusiasts where they can share stories, hear advice, and get inspiration.

An Exciting Future for the World of Fitness

Immersive fitness apps like FitXR usher us into a future where having fun and working out overlap. By holding fitness classes in virtual studios, users feel more like they are playing in an arcade rather than having a gym session. This gamification makes them forget they are working out, so they often end up spending more time exercising.

Moreover, virtual studios foster interaction with other people from all over the globe without users having to leave the comfort of their own homes. Indeed, combining music and fitness in a virtual environment is bound to be the norm for group fitness from here on.


With Music In New Realities, We Can Go Deeper Together

 

A look around the media landscape will make it clear that virtual reality has become a major player in the music industry and virtual concerts are on the rise with performances by mainstream artists in popular games and other platforms.

Yet, with all the hope promised by the “metaverse,” not only do these events fail to optimally leverage the innovation of VR, but they also fall short in using music to help create immersive social spaces for people to gather virtually where they feel connected to each other and their humanity.

Today, music-related virtual reality and augmented reality content falls into three major categories:

  1. Virtual concerts and music videos by mainstream, popular artists represented by their avatar likeness;
  2. “Rhythm games” and music-making apps focused on popular music;
  3. Music visualizers.

Audiences and Artists Still Adjusting

In the wake of the COVID-19 pandemic and social distancing, many artists are including virtual and hybrid events as part of their tour schedules.

Last year, United Talent Agency (UTA) polls indicated that three out of four people attended online events during the pandemic and, of those, 88% planned to continue even when in-person events came back.

Given the investment in this virtual space by companies including Meta, HTC, ByteDance’s Pico, and soon… Apple with their anticipated headset likely to be announced in 2023, the AR/VR market is a major player in the music industry, even spawning the “Best Metaverse Performance” category in the 2022 MTV VMAs.

With virtual concerts on the rise, major artists like Eminem and Snoop Dogg, Travis Scott, Ariana Grande, and BTS are presenting in-game music events—albeit with mixed results.

Some of these events are being called nothing more than a “kiddie cash grab,” leaving audiences wanting more out of the virtual experience that will truly make use of VR as a medium and a new form of expression.

Possibility for a New Mode of Discovery

There are, however, burgeoning examples of innovative and thoughtful approaches to VR/AR music experiences. The 2018 Sigur Rós and Magic Leap collaboration, Tónandi, demonstrated what can be possible with an immersive and interactive AR music experience, though not currently available on all platforms. This ambitious project featured the Icelandic pop-rock band in a music experience for a high-end AR device that brought music, visuals, and interaction together equally to create a synesthetic experience.

Tónandi - an interactive audio-visual exploration

One of the promises of the metaverse is to bring people together virtually. Traditionally, live music events have been a place where people could gather for a communal experience. This is the missing piece to current VR music events, which have yet to find an organic way for audience members to interact both with the artist and with each other.

Then, there is the possibility of bringing composed scores into virtual spaces, to connect with people’s psyches and emotions as music has done in concert halls, films, and television shows for a long time.

Music and… Miniature Golf?

While not a music-centered app, Mighty Coconut’s Walkabout Mini Golf – a virtual reality game for which I compose the original scores – gives an example of how VR/AR can become a gathering space for people to experience visuals and music while exploring the virtual world or just hanging out together.

VR game Walkabout Mini Golf

Each course presents a captivating world with a distinct mood, created by the music, visuals, and course design that present an alternative to typical VR/AR games and music experiences. Players consider it a place as much as a game, and their connection to the soundtrack has led them to stream it on various services just to bring them back to that sense of place.

VR Music Experience Is Here to Stay

Virtual reality music experiences are here to stay. While VR/AR is currently most strongly associated with games and major companies, there is much to hope for with content put out by independent studios and artists, who are able to be more flexible in adapting to changes in technology and audience demographics. This virtual space will offer new and exciting possibilities for musicians and audiences.

Anyone invested in music going forward—artists, academia, fans, bookers, labels, music supervisors, and even advertisers—would be well advised to keep an eye on VR/AR and to start learning what’s happening in this space.

Like music albums and films, these tools are just another mode of expression for artists to connect to audiences and, hopefully, encourage people to connect with each other.

Guest Post

