audio


Treble Technologies Brings Realistic Sound to Virtual Spaces

Immersive spaces can be strikingly convincing visually, but they can still sound pretty flat. In games and social applications that merely disrupts immersion, but in architecture, engineering, and construction (AEC), understanding how a space sounds can be crucial to the design process. That’s why Treble is working on realistic sound reproduction for virtual spaces.

We spoke with Treble CEO Finnur Pind about the opportunities and obstacles in believable immersive sound in enterprise and beyond.

Sound Simulation and Rendering

A conversation inside a car sounds very different from a conversation in your living room, and a conversation in your living room sounds very different from one in an auditorium. If you’re trying to follow that conversation with an assistive device like a hearing aid, things get more complicated still.

Right now, a conversation in any of those spaces recreated in a virtual environment probably sounds about the same. Designers can include environmental sound like water or wind or a crackling fire as they often do for games, but the sonic profile of the environment itself is difficult to replicate.

That’s because sound is vibration traveling through air. A physical environment absorbs and reflects those vibrations in ways determined by its own physical properties. Virtual environments have no such properties, and their sound is conveyed electronically rather than acoustically.

The closest we’ve come to truly immersive sound is “spatial audio.” Spatial audio conveys where a sound is coming from and how far away it is from the listener by manipulating stereo volume, but it still doesn’t account for environmental factors. That doesn’t mean spatial audio isn’t good enough: it does what it does, and it plays a part in “sound simulation and rendering.”
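To get a feel for what that volume-based approach involves, here is a minimal Python sketch (a rough illustration of the idea, not any particular engine’s implementation; the function and parameter names are our own):

```python
import numpy as np

def spatialize(mono, azimuth_deg, distance_m):
    """Toy stereo spatialization: constant-power panning for direction plus
    inverse-distance attenuation. Real spatial audio engines add HRTFs and
    delays, but the volume-manipulation idea is the same."""
    # Map azimuth (-90 = hard left, +90 = hard right) to a pan position in [0, 1].
    pan = (np.clip(azimuth_deg, -90, 90) + 90) / 180.0
    left_gain, right_gain = np.cos(pan * np.pi / 2), np.sin(pan * np.pi / 2)
    # Simple 1/r distance roll-off, clamped so very close sources don't blow up.
    attenuation = 1.0 / max(distance_m, 1.0)
    return np.stack([mono * left_gain * attenuation,
                     mono * right_gain * attenuation], axis=-1)

# Example: a 1 kHz tone placed 30 degrees to the listener's right, 4 meters away.
t = np.linspace(0, 1, 48000, endpoint=False)
stereo = spatialize(0.5 * np.sin(2 * np.pi * 1000 * t), azimuth_deg=30, distance_m=4)
```

Notice that nothing in this sketch knows anything about the room the listener is in, which is exactly the gap Treble is trying to fill.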

Sound simulation and sound rendering are “two sides of the same coin,” according to Pind. The approach, which had its roots in academic research before Treble started in 2020, involves simulating the acoustics of an environment and rendering the resulting sound for the listener.

How Treble Rethinks Virtual Sound

“Solving the mathematics of sound has been developed for some time but it never found practice because it’s too computationally heavy,” said Pind. “What people have been doing until now is this kind of ray-tracing simulation. … It works up to a certain degree.”

Treble - Acoustic simulation suite

Treble uses a “wave-based approach” that accounts for the source of the audio, as well as the geometry of the space and the physical properties of the building material. In the event that the virtual space includes fantastical or unspecified materials, the company assigns a set of physical characteristics from a known real-world material.
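Treble’s actual solver is far more sophisticated, but the core “wave-based” idea can be sketched in one dimension: march the acoustic wave equation forward in time and let partially absorbing boundaries stand in for wall materials. The toy Python example below (assumed values throughout, not Treble’s method) records a crude impulse response at a virtual microphone:

```python
import numpy as np

# Toy 1-D finite-difference simulation of the acoustic wave equation.
c = 343.0           # speed of sound in air (m/s)
dx = 0.01           # spatial step (m); 1000 points ~ a 10 m "room"
dt = dx / (2 * c)   # time step chosen to satisfy the stability (CFL) condition
n = 1000
absorption = 0.3    # assumed fraction of energy the wall material absorbs

p_prev = np.zeros(n)
p = np.zeros(n)
p[n // 2] = 1.0     # an impulse ("clap") in the middle of the room

impulse_response = []
for _ in range(4000):
    lap = np.zeros(n)
    lap[1:-1] = p[:-2] - 2 * p[1:-1] + p[2:]            # discrete Laplacian
    p_next = 2 * p - p_prev + (c * dt / dx) ** 2 * lap
    # Partially absorbing boundaries stand in for the wall materials.
    p_next[0] = (1 - absorption) * p[1]
    p_next[-1] = (1 - absorption) * p[-2]
    p_prev, p = p, p_next
    impulse_response.append(p[n // 4])                  # "microphone" position

# impulse_response now holds the direct sound followed by progressively
# weaker reflections from the two absorbing walls.
```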

Such fantastical materials don’t come up often so far because, while Pind is open to Treble working with entertainment and consumer applications, the company is mainly focused on enhancing digital design models for the AEC industry.

“It’s not just seeing what your building will look like, but hearing what your building will sound like,” said Pind. “As long as you have a 3D building model … our platform connects directly, understands the geometry, building models, and sound sources.”

Pind says that the concept may one day have applications in augmented reality and mixed reality as well. In a platform like Microsoft Mesh or Varjo Reality Cloud, where users essentially share or exchange their surroundings via VR, recreating one user’s real space as another user’s virtual space could greatly aid immersion and realism.

Treble - sound in VR

“Research has shown that having realistic sound in a VR environment improves the immersion,” said Pind. “In AR it’s more the idea of being in a real space but having sound augmented.”

Machine Learning, R&D, and Beyond

As strange as it may sound, this approach also works in reverse. Instead of recreating a physical environment, Treble can create sound profiles for physically plausible spaces that don’t exist and may never exist, in order to model how sound would behave in them. It’s an approach called “synthetic data generation.”

Treble - synthetic data generation

“AI is kind of the talk of the town these days and one of the major issues of training AI is a lack of data,” said Pind. Training AI to work with sound requires a lot of audio which, historically, had to be sourced from physical equipment transported and set up in physical environments. “Now they’re starting to come to us to synthetically generate it.”

This same approach is increasingly being used to test audio hardware ranging from hearing aids to XR headsets.
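The general recipe for that kind of synthetic audio data can be sketched simply: convolve clean, “dry” recordings with simulated room impulse responses, add noise, and you have labeled training samples at whatever scale the simulator allows. The Python below uses placeholder arrays and assumed parameters; in practice the impulse responses would come from an acoustic simulation platform:

```python
import numpy as np
from scipy.signal import fftconvolve

def make_training_sample(dry_audio, room_impulse_response, noise_level=0.01):
    """Turn one clean recording into a 'recorded in this room' training sample
    by convolving it with a simulated impulse response and adding noise."""
    reverberant = fftconvolve(dry_audio, room_impulse_response)[: len(dry_audio)]
    return reverberant + noise_level * np.random.randn(len(reverberant))

# Placeholder inputs: a real pipeline would pull dry clips from a speech corpus
# and impulse responses from simulations of many different rooms.
dry = np.random.randn(16000)                                      # 1 s at 16 kHz
rir = np.exp(-np.linspace(0, 8, 8000)) * np.random.randn(8000)    # decaying toy IR
sample = make_training_sample(dry, rir)
```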

Sounds Pretty Good

Pind thinks that the idea of using sound simulation and rendering for things like immersive concerts is interesting, even though that’s not what Treble does right now. It’s another resource already in the hands of forward-thinking companies and potentially soon coming to an XR venue in your headset.



Redefining Immersive Virtual Experiences With Embodied Audio

EDGE Sound Research is pioneering “embodied audio,” a new technology that changes the way we experience virtual reality. When we think of virtual reality, the focus tends to fall on engaging the sense of sight. EDGE Sound Research’s embodied audio aims to transform how we experience audio in VR worlds through its use of audible and tactile frequencies.

One of the things that sets this technology apart is that it stems from co-founder Ethan Castro’s own experience. Castro had issues with his hearing and, as a result, came to rely on feeling sound as much as hearing it. He also loved music, and went on to become a professional audio engineer and composer, researching how sound can be perceived by combining hearing and feeling. Eventually, he teamed up with co-founder Val Salomaki to start EDGE Sound Research.

Bringing Embodied Audio to Life

Embodied audio adds realism to sound. This groundbreaking technology combines the auditory and physical sensations of sound in an “optimized and singular embodiment.”

“This means a user can enjoy every frequency range they can hear (acoustic audio) and feel (haptic and tactile audio, also known as physical audio),” said Castro and Salomaki.

Castro and Salomaki go on to explain that they invented a new patent-pending technology for embodied audio, which they dubbed ResonX™. The technology, which has been nominated for a CES Innovation Award, can transform any physical space or environment into an embodied audio experience, reproducing an expansive range of physical (7-5,000+ Hz) and acoustic (80-17,000 Hz) frequencies.
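EDGE’s actual ResonX™ processing is proprietary, but the basic idea of serving those two overlapping ranges can be illustrated by splitting one signal into a “physical” band for the surface transducer and an “acoustic” band for conventional playback. The Python sketch below is only a conceptual illustration using the frequency figures quoted above; the filter design is our own assumption:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def split_embodied(signal, sample_rate=48000):
    """Split one signal into a 'physical' band (felt through a surface) and an
    'acoustic' band (heard), using the ranges quoted in the article."""
    physical_sos = butter(4, [7, 5000], btype="bandpass", fs=sample_rate, output="sos")
    acoustic_sos = butter(4, [80, 17000], btype="bandpass", fs=sample_rate, output="sos")
    physical = sosfilt(physical_sos, signal)   # drives the transducer on the couch or floor
    acoustic = sosfilt(acoustic_sos, signal)   # goes to ordinary speakers or headphones
    return physical, acoustic

# Example: split one second of noise into the two bands.
felt, heard = split_embodied(np.random.randn(48000))
```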

Crafting New Experiences With the ResonX™ System

“The ResonX™ system is a combination of hardware and software. A user places the ResonX™ Core (hardware component) on the surface of a material and the ResonX™ software calibrates the surface of the material to resonate reliable hi-fidelity sound that the user can hear and feel,” said Castro and Salomaki.

ResonX Core - Embodied audio by Edge Sound Research

For example, someone using the ResonX™ system at home can attach the ResonX™ Core to their couch, effectively turning it into an embodied audio experience. When they sit on the couch to watch their favorite show, say a basketball game, they will feel as if they’re there in person, hearing and feeling every detail, from the ball being dribbled to the subtler squeak of sneakers on the court.

According to Castro and Salomaki, if a user wants to take their movie-viewing experience to the next level, here’s what they can do:

“An individual can attach the ResonX™ to flooring and then be fully immersed in walking around a new planet by hearing and feeling every moment to make the experience feel life-like.”

Aside from enriching users’ experiences in the metaverse, this new technology finally enables us to engage our other senses, thus adding a new dimension to how we experience music, games, live entertainment, and more.

Embodied audio - traditional sound vs ResonX

“This opens the door to new possibilities in storytelling and connectivity around the world as an experience can now begin to blur what is real because of three senses simultaneously informing a user that a moment is happening. Not as an effect, but as an embodied reality,” shared the EDGE Sound Research co-founders.

Embracing Innovation in the VR Space

With ResonX™ and its ability to bring embodied audio to life, users can now have richer experiences in virtual worlds. Not only will they engage their sense of sight, but they’ll also experience these virtual worlds through hearing and touch. Users now have the chance to transform their physical environment into a cohesive sound system.

The good news is, users can enjoy the embodied audio experience in many public venues. According to Castro and Salomaki, they’ve already deployed the ResonX™ in various sports stadiums, bars, and art installations. Furthermore, if you want to bring home the ResonX™ experience, you can get in touch with EDGE Sound Research for a custom installation.

What will embodied audio look like in the future?

It’s likely going to become more widely accessible. “Over time, we will release a more widely available consumer version of the ResonX™ system that will make this ResonX™ technology more accessible to all,” said Castro and Salomaki.



Edifier Honored with Four CES Innovation Awards at CES 2022

December 30, 2021

Edifier, the award-winning manufacturer of premium sound systems and bookshelf speakers, today announces that four of their newest products, the NeoBuds Pro true wireless earbuds, MC500 sound console, MP230 portable speaker and M100 Plus portable waterproof speaker, were selected as Innovation Award Honorees for CES 2022.

“This is an immense honor for the brand as we always strive to provide our consumers with cutting edge technology and sound design all at a price point that they can afford,” says Edifier’s CTO, Stanley Wen. “With our years of research within the audio industry, bringing consumers high quality audio products has always been part of our core values. We also seek to remain at the forefront and incorporate the latest technology and features into each of our products for the purest sounds in their respective classes.”

In the $138 billion global audio market, Edifier’s team of research and design experts always puts the consumer’s needs first. As hybrid work environments continue to proliferate, the need for premium headphones and portable speakers is at an all-time high. The NeoBuds Pro are among the first hi-res-audio-certified true wireless earbuds on the market, giving users everything they need for remote work. Whether in the office, at home, or on the go, the earbuds’ six-microphone setup, combined with active noise cancellation of up to 42 dB, delivers crystal-clear audio for any one-on-one or conference call while detecting and suppressing interfering noise.

Within the speaker category, the MP230 and M100 Plus were created to fit consumers with varying needs. The MP230 seamlessly blends portability with the design and technology of wood-framed bookshelf speakers, creating unrivaled soundscapes that can match nearly any aesthetic.

For those with a more adventurous spirit, the M100 Plus provides high-quality sound with minimal distortion in a portable, palm-sized package. With its durable, double-woven lanyard and IPX7 rating, the M100 Plus is the waterproof portable speaker you will want attached to yourself or your bag on your next outing.

With the brand also continuing to expand its audio offerings, Edifier is showcasing its first-ever livestreaming sound console and mixer to further assist streamers with their audio needs. The MC500’s innovative design matches any streamer’s aesthetic, and it puts sound effects, audio control, and more at the streamer’s disposal, allowing for a more integrated, intuitive, and interactive livestream.
