Author name: Paul Patrick


Auto industry, take note: This student-made EV cleans the air while driving

An EV that cleans the air while driving might seem like a pipe dream, but a student team based at the Eindhoven University of Technology has made it a reality. TU/ecomotive — as the team is called — has been creating inspiring, environmentally conscious concept cars for over a decade now.

Among the concept vehicles presented by the students, last year’s Zem — which stands for “zero emission mobility” — is the most outstanding. It’s a passenger EV that not only paves the way towards vehicle carbon neutrality, but also cleans the air while driving, something that, in turn, reduces CO2 emissions.

EV that cleans the air while driving
Credit: TU/ecomotive

Zem was unveiled in July 2022 at the Louwman Museum in The Hague. Its message is clear: if a team of 32 students can create a car like this in under 12 months, then what’s stopping the automotive industry from doing more?

“We were inspired by the EU’s Green Deal,” Louise de Laat, Industrial Design student and team manager of the Zem project, told TNW. “Reducing our CO2 emissions is something very important for us, and we would really like to make a carbon neutral car. And that’s the reason for the recent project’s focus on zero emission mobility,” she explained.

CO2-neutral mobility requires a vehicle to have zero carbon emissions across its entire lifecycle, and Zem is an apt example of how close to this goal an EV can get.

In this piece, we’ll look at how Zem achieves this through its use, production, and afterlife — as well as what the car industry can learn from these sorts of schemes.

The air-cleaning technology

As we mentioned at the start, instead of emitting CO2, Zem captures it. Effectively, it cleans the air while driving. That’s thanks to an innovative technology called direct air capture (DAC), which “traps” carbon dioxide in a filter. Companies such as Climeworks and Carbyon have been applying this air-cleaning method via large installations. But the Zem team decided to implement it in the car.

It works like this: while driving, air moves through the car into a self-designed filter, which captures and stores CO2, allowing clean air to flow out of the vehicle. This is meant to compensate for the total emissions across all of the car’s life phases.

EV that cleans the air
Credit: TU/ecomotive

But what happens when the filter is saturated?

“We have designed a special charging pole for this,” Louise explained. “While Zem is charging you can remove the filter and place it in a special tank inside the pole. Cleaning the filter takes about the same time as charging. At the same time, the CO2 absorbed and saved in the tank can be repurposed and used by industries that need it, to make carbon fibers, for instance,” she added.

And to increase the vehicle’s sustainability even when not in use, TU/ecomotive has equipped it with bi-directional charging technology to provide electricity to homes, as well as solar panels to store energy.

Maximising sustainable production and afterlife

To achieve a high level of sustainability, TU/ecomotive opted for a novel production method: additive manufacturing — or simply, 3D printing. The team collaborated with partners — such as CEAD and Royal3D — to develop the car’s fundamental structure. Specifically, the monocoque and the body panels.

As Louise explained, they also 3D-printed parts of the interior, including the car seat shell, the dashboards, the middle console, the steering wheel, and the roof beams.

According to the team, this manufacturing process results in nearly zero waste materials, as the various car parts were printed in the exact shape needed. At the same time, they did the printing using circular plastics. These are granulates that had already been used and can be shredded and reused afresh in other projects.

“You can use that same material again to make the same part over three times before it loses its specifications,” Louise noted.

The vision of circularity has been applied throughout Zem’s design as well.

For example, the seat upholstery is made from the residue released during pineapple production. The roof upholstery and the floor mats consist of ocean plastics. And, through a collaboration with Black Bear Carbon, recycled carbon black from worn tires has been used for the EV’s coating and tires.

As a result, the concept car boasts “as little CO2 emissions as possible” during the production phase. At the same time, the types of materials, their ease of separation, and their circularity, all contribute to keeping CO2 emissions during the end-of-use phase at a lower level — especially when compared to conventional cars.

Concept EV
Credit: TU/ecomotive

But, according to Louise, it proved extremely challenging to give a specific number to Zem’s overall emissions via the Life Cycle Assessment (LCA) method, revealing a gap in the industry.

“We need a lot of data from the partners where we get the parts from, and some of them don’t know the exact LCA of their product,” she said. On the upside, she considers it beneficial that the project pushed its partners to acknowledge the vehicle’s environmental footprint. She also remains hopeful that legislation from national governments and the EU will standardise the use of LCA.

According to Louise, Zem has succeeded in its goal of drastically lowering CO2 emissions. Yet the EV does come with disadvantages that would require further work to enable its scale-up into a marketable product.

“If you build a car in less than one year, there will be some flaws that you still need to work on,” she noted. “Zem drove smoothly on the DRC track during the US tour, but the closer you get to the vehicle, the easier it is to see its flaws.” And that’s to be expected when you work with new materials and new technologies within a short period of time, Louise added.

A win-win for students and commercial partners

Now that the Zem project has been concluded, a renewed team has started working on the next concept vehicle. Stijn Plekkenpol — a sustainable innovation student — will lead the next project.

“What we really want to do now is build a climate positive car by 2030. This means, a vehicle which is marketable, which could be produced, and actually have a positive impact on the environment instead of any negative ones,” Stijn told TNW.

In the meantime, Louise aims to keep working on the filter technology and would be excited to see Zem turn into a mass-produced car. After all, it’s not uncommon for a student concept to grow into a startup and a real-life product. Think of Lightyear, the now-famous Dutch solar EV startup, which was also started by students of the Eindhoven University of Technology.

EV
Credit: Bart van Overbeeke/ TU/ecomotive

While both Louise and Stijn attribute Zem’s success to the student team’s “long working hours and [their] dedication”, they explained the vital role commercial partners played as well.

“The majority of our partners are from Eindhoven’s Brainport region, which is known for its high density of R&D, and is called the Silicon Valley of the Netherlands,” Louise said.

These partners supported the project by providing parts, materials, knowledge, and financial support. And as for what they gained in return, Louise summarised three main advantages: employee recruitment, exposure, and the enjoyment and inspiration stemming from the collaboration with young people bringing bold ideas to the table.

Both Louise and Stijn have optimistic views on the future of mobility. They believe that cars will remain an integral part of transportation, but that they have the potential to be climate-positive instead of adding to carbon emissions.

And, as Zem showcases, we should trust in the innovative ideas of the younger generations, and keep fostering collaboration between daring university projects and commercial partners.

The new concept vehicle will be revealed on July 27 — and I, for one, can’t wait to see what the students have in store for us.



Meta Reality Labs Earnings Reveal a Less Successful Holiday Season & Highest Operating Costs Yet

Meta today revealed its latest quarterly earnings results, showing that Reality Labs, the company’s XR and metaverse arm, had a smaller holiday season than the last, while operating costs have reached their highest levels yet.

Today during the company’s Q4 earnings call, Meta revealed the latest revenue and operating cost figures for its XR and metaverse division, Reality Labs, providing one of the clearest indicators of the success the company is seeing in this space.

The fourth quarter has consistently been the best performer for Reality Labs, no doubt thanks to the holiday season driving sales of the company’s offerings.

In the fourth quarter of 2022, the company saw $727 million in revenue, which was 17% less compared to the fourth quarter of 2021 when the company pulled in $877 million in revenue.
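The 17% figure follows directly from the two reported revenue numbers; a quick sanity check, using only the figures quoted above:

```python
# Year-over-year change in Reality Labs Q4 revenue, from the reported figures.
q4_2021 = 877_000_000  # Q4 2021 revenue
q4_2022 = 727_000_000  # Q4 2022 revenue

drop = (q4_2021 - q4_2022) / q4_2021
print(f"{drop:.1%}")  # prints "17.1%", matching the roughly 17% decline reported
```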

The fourth quarter of 2021 was a good performer for Reality Labs revenue thanks to the success of Quest 2 which had launched earlier that year.

In the fourth quarter of 2022, the company’s latest headset to launch was Quest Pro, its high-end MR headset. Unsurprisingly, the more expensive device—which has yet to find a strong value proposition at $1,500—doesn’t seem to have performed as well as Quest 2 did in its launch year. Just days ago, Meta temporarily discounted the price of the headset to $1,100, appearing to test the waters at that lower price. Granted, XR headsets aren’t the only product Reality Labs offers, which means the division’s other product lines—video calling speakers and smart glasses—may have had a role to play.

In addition to a smaller holiday season than last year, the latest earnings for Reality Labs show the division’s expenses were greater than in any previous quarter, surpassing $4 billion for the first time.

This continues a trend of Meta’s ever-growing investments in Reality Labs which the company has warned investors may not flourish until the 2030s.

In the face of operating costs far outpacing revenue, Meta CEO Mark Zuckerberg told investors that his management theme for 2023 was “efficiency,” saying he wants to focus the company on streamlining its structure to move faster while being more aggressive about shutting down projects that aren’t performing.



Meta’s Social VR App is Coming to Web & Mobile Soon, Alpha Begins for Members-only Rooms

Horizon Worlds, Meta’s social VR platform for Quest users, is expanding with alpha tests of new members-only spaces, allowing creators to manage up to 150 card-carrying members in their private worlds. Meta says it’s also gearing up to release Horizon Worlds on non-Quest devices for the first time.

Meta is now rolling out alpha access to its new members-only worlds, which aim to let creators build and cultivate a space in Horizon Worlds. Each members-only world can have up to 150 members, although only 25 concurrent visitors can gather at any given time.

“Every community develops its own norms, etiquette, and social rules over time as it fosters a unique culture,” the company says in a blogpost. “To enable that, we’ll provide the tools that allow the creators of members-only worlds to set the rules for their communities and maintain those rules for their closed spaces.”

Meta says moderation responsibilities can be shared among trusted members, so creators can better control who gets in and who’s kicked out. However, the company says its Code of Conduct for Virtual Experiences is still in effect in privately owned spaces.

What’s more, the Quest-only social platform is also going to be available on the Web and mobile devices “soon”, the company says, adding that rules will be made and enforced “similarly to how mobile operating systems manage experiences on their platforms.”

As it is today, Horizon Worlds plays host to a growing amount of user-generated content in addition to first-party worlds. The release of Horizon Worlds outside of Quest would represent a massive potential influx of users and user-generated content, putting it in direct competition with cross-platform social gaming titans such as Roblox and Rec Room.

As a similar free-to-play app, Horizon Worlds offers an Avatar Store featuring premium digital outfits—very likely only a first step in the company’s monetization strategy. For now, the company says it allows creators to earn revenue from purchases people make in their worlds, which includes hardware platform fees and a Horizon Worlds fee, which Meta says is 25 percent.
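For a rough sense of what that fee structure means for creators: assuming a 30 percent hardware platform cut (an assumption for illustration—the article doesn’t state the platform fee), with the 25 percent Horizon Worlds fee applied to the remainder, a creator would keep roughly half of each sale:

```python
def creator_net(price, hardware_fee=0.30, horizon_fee=0.25):
    """Estimate a creator's take-home from an in-world purchase.

    hardware_fee is an assumed 30% platform cut (not confirmed here);
    horizon_fee is the 25% Horizon Worlds fee Meta has cited,
    applied to what remains after the platform cut.
    """
    after_platform = price * (1 - hardware_fee)
    return after_platform * (1 - horizon_fee)

# On a $1.00 purchase, the creator would keep about $0.53.
print(round(creator_net(1.00), 3))
```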

In late October, Meta showed off a tempting preview of its next-gen avatars, although it’s clear there’s still a ton of work to be done to satisfy its existing userbase. Floating torsos are still very much a thing in Horizon Worlds, and that’s despite Meta CEO Mark Zuckerberg’s insistence that full body tracking was in the works. It was too good to be true.

For now, Horizon Worlds is only available on Quest 2 headsets in the US, Canada, UK, France, Iceland, Ireland and Spain—something we hope they change well before it ushers in flatscreen users.



‘A new way of doing artificial intelligence’: UK’s Mignon has a fresh proposition for AI on the edge

This story is syndicated from the premium edition of PreSeed Now, a newsletter that digs into the product, market, and founder story of UK-founded startups so you can understand how they fit into what’s happening in the wider world and startup ecosystem.

The reignited excitement around the potential of AI as we hurtle into 2023 brings with it concerns about how best to process all the data needed to make it work. This is far from a new challenge though, and next-generation AI chips are being developed in labs around the world to address it in different ways.

One of the first startups we ever covered at PreSeed Now takes a ‘neuromorphic’ approach, influenced by the human brain. Coming from a different direction is a brand new spinout from Newcastle University called Mignon (so new, in fact, that there’s no website yet).

Mignon has developed an artificial intelligence chipset that, according to CEO Xavier Parkhouse-Parker, has “in the order of 10,000x performance improvements against alternative neural-network based chips for classification tasks”.

Classification is, essentially, the process of figuring out what the AI is looking at, hearing, reading, etc — the first step in understanding the world around it, whatever use case it’s put to. Mignon’s chipset is designed to be used in edge computing as a “classification coprocessor” on devices, rather than in the cloud.

What’s more, Parkhouse-Parker says Mignon’s chipset can also train AI models on the edge, meaning the models can be optimised for the specific, individual environments in which they’re used.

A prototype design of Mignon’s gen-1 chipset

A propositional proposition

What Mignon says gives its tech an advantage over the competition is a less resource-intensive approach based on propositional logic.

“Neural networks, the dominant algorithm in AI and machine learning today, typically require running many layers of increasingly resource-intensive calculations. They can take a very long time and a huge amount of energy to train and deploy, and they also exist as a black box; you cannot explain why the algorithms have come to a particular conclusion,” Parkhouse-Parker says.

“Mignon is based on an algorithm that can be done in a single layer, using propositional logic, maintaining accuracy but enabling calculations to be run much more quickly, using far less energy.”

And when it comes to launching into the market, Mignon could have a strong advantage, too.

“The investment and commercial scale required for success in the semiconductor industry is significant. Some of the biggest challenges for many other competitors in this sector are that they rely on non-standard, or ‘exotic’, features which are not easily scalable within the current semiconductor manufacturing ecosystem,” says Parkhouse-Parker.

Instead, Mignon’s chipset uses a standard CMOS fabrication approach, meaning mass-production is much more straightforward.

How can it be used?

Edge AI has already made a notable difference to consumers’ lives. Just look at how the likes of Apple and Google have put AI chips into their smartphones to run tasks like face and object recognition in photos or audio transcription locally, increasing privacy and speed, and reducing data transfer costs.

Parkhouse-Parker says Mignon could eventually make a difference here, along with in the next generation of ‘6G’ telecoms networks, where signal processing could be optimised by AI.

But the first market they’re looking at is industrial spaces where connectivity and energy resources are low, but there’s a need for high-performance AI classification.

And while the tech isn’t ready for it yet, Parkhouse-Parker says Mignon is working towards another selling point that its offering enables — “explainable AI.” That is, transparency around how and why AI made a particular decision.

To give a timely example, if you ask OpenAI’s ChatGPT to explain a concept to you, you can’t see why it comes up with the specific answer it gives. You just get an answer based on the pathway it took through its sea of data in response to your prompt.

In an industrial setting, where AI might be making business-critical decisions, or decisions with safety implications, it would be very useful to be able to look back and see how the AI came to the conclusion that it did.

“With neural networks, all of the inferences are done within a black box, and you cannot see how or why this node connects to this node, or how things have been calculated. With Mignon, because it’s based on propositional logic, it allows for a researcher to be able to look in and see exactly where a decision had been made, and why, and what led it to that point,” explains Parkhouse-Parker.
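The interpretability Parkhouse-Parker describes can be illustrated with a minimal sketch of propositional-clause inference, loosely modelled on the Tsetlin machine research the chipset builds on. The clause sets and inputs below are invented for illustration, not taken from Mignon's actual design: each clause is an AND of input literals, the decision is a vote over clauses, and the list of clauses that fired is itself the explanation.

```python
# Illustrative propositional-clause classifier, in the spirit of
# Tsetlin-machine-style inference. Clauses and inputs are made up here.

def clause_fires(clause, x):
    # A clause is a conjunction of literals: (feature_index, expected_value) pairs.
    return all(x[i] == v for i, v in clause)

def classify(x, positive_clauses, negative_clauses):
    """Vote: each firing positive clause adds +1, each firing negative clause -1.

    Returns the score plus the list of clauses that fired, which serves as a
    human-readable trace of exactly why the decision was made.
    """
    score = sum(clause_fires(c, x) for c in positive_clauses) \
          - sum(clause_fires(c, x) for c in negative_clauses)
    fired = [c for c in positive_clauses + negative_clauses if clause_fires(c, x)]
    return score, fired

# Toy rule set: "positive" if feature 0 is on AND feature 2 is off;
# "negative" if feature 1 is on.
pos = [[(0, 1), (2, 0)]]
neg = [[(1, 1)]]

score, fired = classify([1, 0, 0], pos, neg)
print(score > 0, fired)  # the fired clause shows exactly which conditions drove the call
```

Unlike a neural network's opaque weight matrices, the single layer of clauses can be read back directly, which is the property that makes "explainable AI" a natural fit for this approach.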

Mignon wants to make it possible for this kind of accountability to be available via software, which could be appealing in fields such as medicine, defence, and the automotive industry.

The brains behind the Mignon product. L-R: Professor Alex Yakovlev and Dr Rishad Shafik

Building Mignon

Mignon’s technology comes from the work of Professor Alex Yakovlev and Dr Rishad Shafik at Newcastle University.

Their research into taking the Tsetlin machine and putting it into computational hardware caught the attention of deep tech venture builder Cambridge Future Tech, which, among others, also works with GitLife Biotech and Mimicrete, both of which have previously featured in this newsletter.

Since spring last year, Parkhouse-Parker (Cambridge Future Tech’s COO) has been working on developing a commercial proposition for Yakovlev and Shafik’s research. He has taken the CEO role at Mignon as it spins out of the university.

Getting to market

First on the to-do list for the new startup is further refining its technology with the development of a ‘generation-2’ chipset before they bring it to market. 

“Even though we’ve got fantastic performance improvements, and it’s actually quite remarkable, this has all been done on the 65-nanometer node, which is an old technology and should mean worse performance improvements, because effectively the transistors are bigger, and that’s what makes us really remarkable,” says Parkhouse-Parker. 

“We think that when we move to a 28-nanometer node, all of the numbers we have for the benchmarks are going to be significantly greater at this scale.”

Commercial validation is obviously another important step after that. The eventual goal is to partner with fabless chip companies to build the Mignon technology into a commercially available system-on-a-chip. Mignon has a number of hires planned for the near future to help it get there.

Mignon CEO, Xavier Parkhouse-Parker

Investment plans and future potential

Parkhouse-Parker expects the spin-out process to be complete in March this year, after which they will formally open a £2.55 million funding round.

This will be used to expand the team, develop, test, and fabricate the next generation of chipset, and to get commercial validation in a number of verticals. Software to allow AI development on the chipset is also a key part of the roadmap.

Eventually, Parkhouse-Parker wants Mignon’s combination of low-power performance and widespread compatibility to usher in whole new opportunities for AI.

“What Mignon does is open up a possibility for what is genuinely a completely new world of devices that people haven’t even thought about yet. Think about the opportunities that would be there with product people like a Steve Jobs or a Jony Ive that could use this and run wild with the potential. I think there really is a completely new world of possibilities.”

The big “hump”

There’s no clear road from where Mignon is now to that future. Aside from the additional development work to refine the chipset, there’s a shift in mindset required from the people who build AI applications.

“The big ‘hump’, as one of our advisors calls it, is that it’s a new way of doing artificial intelligence,” says Parkhouse-Parker. “The transition between neural networks and Tsetlin is not incredibly significant, but it will require a little bit of a mindset difference. It may require new ways of thinking around how artificial intelligence problems can be designed and how these things can be brought into market.

“There’s a great community already being built around this, but that’s one of the biggest challenges — building a Tsetlin ecosystem and transitioning things that are neural networks into Tsetlin.”

But despite these challenges, Parkhouse-Parker believes Mignon’s vision is very much achievable. 

“Several orders of magnitude improvement warrant a look at something that’s new, novel, and exciting.”




Critically Acclaimed Propaganda Sim ‘Not for Broadcast’ Coming to Quest 2 & PC VR in March

NotGames, the indie studio behind ingenious propaganda simulator Not For Broadcast (2022), announced it’s releasing a separate VR version in March, coming to SteamVR and Meta Quest 2.

Releasing on Steam and the Quest Store on March 23rd, Not For Broadcast VR is putting the power of mass media into your hands, as you control what people see and how they see it in your very own TV studio control booth, set in an alternate ’80s timeline in Britain.

Promising all of the original game’s dystopian tale of power, greed and resistance, the VR adaptation seems like a natural fit for the seated, button-heavy game—looking a bit like Please, Don’t Touch Anything.

The game is chock full of egotistical celebrities, dishonest politicians, and strange sponsors—and the show must go on uninterrupted. Pop in your lineup of VHS tapes, frame and edit shots, bleep out expletives, and keep everything moving smoothly—even as disaster strikes outside your window. Whatever you do, your mission is to keep those ratings up.

You can wishlist the game now on Steam. We’re still waiting for the Store link for Quest, however we’ll update this article when we see it. In addition to its VR launch, the game is also coming to PlayStation and Xbox on March 23rd as well.

At the time of this writing, the flatscreen version of Not For Broadcast has garnered an ‘Overwhelmingly Positive’ user review score on Steam, coming from over 7,000 players.



Virtex Stadium Holds First Major Events, Inches Toward Open Access

A number of the attractions of watching live sports carry over into esports. However, unless you’re watching an esports tournament in person, a lot of those attractions go away. Interactions with other fans are limited. The game view is limited. The game is flattened and there’s little environmental ambiance. Virtex wants to fix that.

A History of Virtex

Virtex co-founders Tim Mcguinness and Christoph Ortlepp met at an esports event in 2019. Mcguinness presented the idea of “taking that whole experience that we were doing there in the physical world and bringing it into the virtual world,” Ortlepp said in a video call with ARPost. The two officially launched the company in 2020.

The following year saw the company’s first major hires (and its first coverage from ARPost). The company was focusing on integrating Echo VR and needed permission from Meta (then Facebook), which purchased the game’s developer Ready At Dawn in 2020.

“The first thing we had to do was get something that we could show to Meta,” said Ortlepp. “For us, Echo was a good community to start with.”

Virtex got the green light from Meta. It also got Jim Purbrick who had previously been a technical director at Linden Lab and an engineering manager for Oculus.

“Moderation is an area where he had a big impact on us,” said Ortlepp. “We need live moderators to keep people safe… If now we have two or three hundred people in the platform, what if we have ten thousand people? Can we keep users safe and prevent a toxic environment?”

Meta’s support also meant that Virtex could finally launch its beta application. The beta is still technically closed – meaning that it isn’t on any app store, and you have to go through the Virtex website to access it. However, the closed beta isn’t limited. Testers have the opportunity to participate in “test sessions” – live streamed games every Thursday.

The platform held its first major tournament in December, with another about to kick off as this article was being written. Games are scheduled every week into the spring.

A Tour of the Stadium

Right now, the Virtex virtual world consists of a stadium entrance, a lounge area, and a commentator booth in addition to the stadium itself.

“The purpose [of the entrance and lounge] is really to set the stage for the user, to welcome them,” said Ortlepp.

Virtex Stadium Environment - Exterior

In the lounge, users can socialize, modify their avatars (through a Ready Player Me integration), and even watch a miniaturized version of the live match. The lounge itself is still being developed with plans for mini-games and walls of fame. Connected areas including a virtual store and bar area are also in the works.

In the stadium itself, users can see and interact with other spectators. They can watch a 3D reproduction of the live game in real time, or watch a Twitch stream of the game on a jumbo screen above the stadium floor.

“We feature the video because we didn’t want to take away from esports viewers what they’re currently used to,” said Ortlepp. Virtex wants to give spectators options to explore viewing in new ways, without leaving them in an entirely unfamiliar setting.

A teleport system allows faster movement to different areas of the stadium, including the stadium floor to watch from within the game or even follow players through the action. This is possible thanks to the unique solution that Virtex has developed for recreating the game within the virtual stadium.

Virtex Stadium streaming view

The studio also adds special recording and hosting tools like camera bots for streaming games within the stadium to Twitch and YouTube. Aspects of the stadium’s appearance can even be changed to match whatever game is being played.

“We are the platform. Ideally, we don’t ever want to be the content creators,” said Ortlepp. “So we have certain user modes for the ones that are actually operating the tournaments.”

When Can We Expect an App?

Virtex Stadium is up and running. But, the team plans to spend at least the next few months in their “closed” beta phase. For one thing, they really want to have their moderation plan in place before making the app more discoverable. They’re also still collecting feedback on their production tools – and thinking of new ones.

Further, while the platform currently has a decent schedule, the team wants to work with more games and more gaming communities. That includes other VR titles as well as more traditional esports. Ideally, one day, something will be happening in Virtex no matter when a user signs in.

“Where do we take it from here? There are no standards – no one has done this before,” said Ortlepp. “The virtual home of esports is basically the vision. It’s something we don’t claim yet – we have to earn it.”

It’s Not Too Early to Check It Out

Everything about Virtex is exciting, from their plans for the virtual venue itself, to their passion and concern for their community. Ortlepp said that the company is “careful about making dated timeline promises.” In a way that’s a little frustrating, but it’s only because the company would rather hold off on something amazing than push something that falls short of their vision.



Europe’s homegrown battery cells could end its reliance on China by 2027

Story by Ioanna Lykiardopoulou

Ioanna is a writer at SHIFT. She likes the transition from old to modern, and she’s all about shifting perspectives.

By 2027, Europe has the potential to fully rely on domestic production of battery cells, meeting its EV and energy storage demands without any Chinese imports. That’s according to the latest forecast by Transport & Environment (T&E), a campaign group, which analyzed a range of manufacturer reports and press releases.

The European NGO further estimates that, in 2030, the companies with the largest battery cell production on the continent will be CATL, Northvolt, ACC, Freyr, and the Volkswagen Group.

About two-thirds of Europe’s needs for cathodes — an integral battery part — could also be produced in-house, the report finds. So far, 12 companies plan to become active in this part of the battery supply chain, with 17 plants announced in the region. Existing and scheduled projects include Umicore in Poland, Northvolt in Sweden, and BASF in Germany.

Northvolt battery cell
Northvolt’s first battery cell produced at the company’s Ett gigafactory in Sweden. Credit: Northvolt

Projections about the refining and processing of lithium are optimistic as well. While 100% of the refined lithium required for European batteries is imported from China and other countries, the bloc is expected to meet 50% of its demand by 2030. T&E has identified 24 projects so far, including Vulcan Energy Resources in Germany and Eramet in France.

The NGO warns, however, that these scenarios will not be realized unless backed by sufficient and timely funding, highlighting that the US’ Inflation Reduction Act (IRA) could attract European talent and factories to America.

“Europe needs the financial firepower to support its green industries in the global race with America and China,” Julia Poliscanova, senior director for vehicles and e-mobility at T&E, said. “A European Sovereignty Fund would support a truly European industrial strategy and not just countries with deep pockets. But spending rules need to be streamlined so that building a battery plant does not take the same amount of time as a coal plant.”

Europe’s homegrown battery cells could end its reliance on China by 2027 Read More »

How will ChatGPT, DALL-E and other AI tools impact the future of work? We asked 5 experts

From steam power and electricity to computers and the internet, technological advancements have always disrupted labor markets, pushing out some careers while creating others. Artificial intelligence remains something of a misnomer — the smartest computer systems still don’t actually know anything — but the technology has reached an inflection point where it’s poised to affect new classes of jobs: artists and knowledge workers.

Specifically, the emergence of large language models – AI systems that are trained on vast amounts of text – means computers can now produce human-sounding written language and convert descriptive phrases into realistic images. The Conversation asked five artificial intelligence researchers to discuss how large language models are likely to affect artists and knowledge workers. And, as our experts noted, the technology is far from perfect, which raises a host of issues — from misinformation to plagiarism — that affect human workers.

To jump ahead to each response, here’s the list:

Creativity for all – but loss of skills?

Potential inaccuracies, biases and plagiarism

With humans surpassed, niche and ‘handmade’ jobs will remain

Old jobs will go, new jobs will emerge

Leaps in technology lead to new skills

Creativity for all – but loss of skills?

Lynne Parker, Associate Vice Chancellor, University of Tennessee

Large language models are making creativity and knowledge work accessible to all. Everyone with an internet connection can now use tools like ChatGPT or DALL-E 2 to express themselves and make sense of huge stores of information by, for example, producing text summaries.

Especially notable is the depth of humanlike expertise large language models display. In just minutes, novices can create illustrations for their business presentations, generate marketing pitches, get ideas to overcome writer’s block, or generate new computer code to perform specified functions, all at a level of quality typically attributed to human experts.

These new AI tools can’t read minds, of course. A new, yet simpler, kind of human creativity is needed in the form of text prompts to get the results the human user is seeking. Through iterative prompting — an example of human-AI collaboration — the AI system generates successive rounds of outputs until the human writing the prompts is satisfied with the results. For example, the (human) winner of the recent Colorado State Fair competition in the digital artist category, who used an AI-powered tool, demonstrated creativity, but not of the sort that requires brushes and an eye for color and texture.
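
The iterative prompting loop described above can be sketched in a few lines of Python. Note that `generate`, `satisfied`, and `refine` here are hypothetical placeholders for illustration, not real ChatGPT or DALL-E APIs: the human judges each round’s output and rewrites the prompt until satisfied.

```python
# Minimal sketch of iterative prompting (human-AI collaboration).
# `generate` is a stand-in for any text-to-X model call; here it just
# echoes the prompt so the loop's behavior is easy to follow.

def generate(prompt: str) -> str:
    """Placeholder model: echoes the prompt as its 'output'."""
    return f"output for: {prompt}"

def iterative_prompting(initial_prompt, satisfied, refine, max_rounds=5):
    """Repeatedly generate, judge, and refine until the human is satisfied."""
    prompt = initial_prompt
    result = generate(prompt)
    for _ in range(max_rounds):
        if satisfied(result):          # the human judges this round's output...
            break
        prompt = refine(prompt, result)  # ...and rewrites the prompt
        result = generate(prompt)
    return result

# Example: keep refining until the output mentions "watercolor".
final = iterative_prompting(
    "a mountain at dusk",
    satisfied=lambda out: "watercolor" in out,
    refine=lambda p, out: p + ", watercolor style",
)
print(final)  # → output for: a mountain at dusk, watercolor style
```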

While there are significant benefits to opening the world of creativity and knowledge work to everyone, these new AI tools also have downsides. First, they could accelerate the loss of human skills that will remain important in the coming years, especially writing skills. Educational institutions need to craft and enforce policies on allowable uses of large language models to ensure fair play and desirable learning outcomes.

How will ChatGPT, DALL-E and other AI tools impact the future of work? We asked 5 experts Read More »

Hands-on: Pimax Crystal Touts Impressive Clarity, But Suffers From a (potentially fixable) Flaw

At CES 2023 Pimax was showing off its latest high-resolution headset, the Pimax Crystal, which uses new lenses and new displays for what the company says is its clearest looking image yet. And while it’s definitely an improvement in many areas over the company’s headsets, there’s a key flaw that I hope the Pimax will be able to address.

Pimax Crystal employs new lenses that promise to eliminate the glare and god rays apparent in prior Pimax headsets (and many others) that used Fresnel lenses. Those lenses are joined by high-resolution displays, purported HDR capability, swappable optics (to trade field-of-view for pixel density), and up to a 160Hz refresh rate. For a full breakdown of the headset’s specs, see our announcement article.

At CES 2023 I got to see the headset myself for the first time. Although the headset is technically capable of running in standalone mode, I saw it running as a PC VR headset with SteamVR Tracking.

Pimax Crystal (pictured without the SteamVR Tracking faceplate) | Photo by Road to VR

Naturally, the demo I was shown was running Half-Life: Alyx—arguably VR’s best looking game—to show off the detail the headset can reproduce with its 2,880 × 2,880 (8.3MP) per-eye displays. From the quick hands-on I got with Pimax Crystal, I could see this was a big step up in clarity over the company’s prior headsets, especially with regards to edge-to-edge clarity. The visual basics were solid too in terms of pupil swim, geometric distortion, and chromatic aberration. There was a little mura visible on this headset but nothing egregious as far as I could tell.
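
As a quick sanity check, the 8.3MP figure quoted above follows directly from the per-eye resolution:

```python
# Per-eye pixel count for a 2,880 × 2,880 display.
width = height = 2880
pixels = width * height
print(f"{pixels:,} pixels per eye, or about {pixels / 1e6:.1f} MP")
# → 8,294,400 pixels per eye, or about 8.3 MP
```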

But there was one thing that immediately stood out to my eyes which otherwise foils a good looking image: blur during head movement. While the static image seen through the headset looks quite sharp, as soon as you start moving your head to look around the world you’ll see a lot of blur—that’s a problem for VR considering that your head is very frequently in motion.

Photo by Road to VR

My best guess is that this is persistence blur: a display artifact that’s mostly solved on other headsets and thus rarely seen anymore. Persistence blur is caused by the display staying lit for too long, so that as you turn your head the pixels remain lit even as their position becomes inaccurate (they are ‘frozen’ in place each frame until the next frame arrives and updates their position to account for your head movement). Most headsets employ a form of ‘low-persistence’ that counteracts this by illuminating the display for only a fraction of the time between frames; as you move your head, the pixels aren’t ‘frozen’ in place but actually unlit, leaving your brain to fill in the gaps without seeing the pixels smear between frames.
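
The size of the effect can be approximated with simple arithmetic: perceived smear (in pixels) is roughly head speed × lit time × angular resolution. The numbers below are illustrative assumptions for a generic headset, not measured Pimax Crystal specs.

```python
# Back-of-the-envelope model of display persistence blur.
# All numbers are illustrative assumptions, not measured specs.

def persistence_blur_px(head_speed_deg_s: float,
                        persistence_ms: float,
                        pixels_per_degree: float) -> float:
    """Pixels a point smears across while the display stays lit."""
    return head_speed_deg_s * (persistence_ms / 1000.0) * pixels_per_degree

PPD = 35.0          # assumed angular resolution, pixels per degree
HEAD_SPEED = 100.0  # a moderate head turn, degrees per second

# Full persistence at 90 Hz: pixels stay lit ~11.1 ms per frame.
full = persistence_blur_px(HEAD_SPEED, 1000.0 / 90.0, PPD)

# Low persistence: pixels lit for only ~2 ms of each frame.
low = persistence_blur_px(HEAD_SPEED, 2.0, PPD)

print(f"full persistence: ~{full:.0f} px of smear")  # ~39 px
print(f"low persistence:  ~{low:.0f} px of smear")   # ~7 px
```

This is why low-persistence modes matter so much: cutting the lit time from a full frame to a couple of milliseconds shrinks the smear by roughly the same factor.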

The amount of blur I saw through Pimax Crystal I would say notably compromises what is otherwise an impressively clean image, though there’s a chance that Pimax could fix this issue, depending upon exactly what’s causing it.

For one, it’s possible that the headsets being shown at CES 2023 were still not fully tuned and that low-persistence hasn’t been properly tuned (or maybe isn’t even enabled yet). In that case it might be a matter of final tweaks before they get the correct display behavior which could reduce persistence blur.

Another factor could be the headset’s ‘HDR’ capability. While I don’t believe Pimax has shared any information on peak brightness, it’s possible that the display can’t do both low-persistence and HDR brightness at the same time (indeed this is a challenge because HDR needs high brightness while low-persistence needs pixels to be illuminated only for a minimal amount of time).

Curiously, I also noticed what appeared to be persistence blurring on pre-release versions of PSVR 2… which also purports to have an HDR display. For both PSVR 2 and Pimax Crystal, I’m hoping we’ll see improvements by the time the finished headsets are headed to customers.

And there are still other possibilities—this might not be persistence blur at all, but simply slow pixel switching times causing some form of ghosting, which could be an inherent limitation of the display or something that could be tweaked.

– – — – –

Ultimately I’m pretty impressed with the clarity and wide field-of-view of the Pimax Crystal, but the blur I’ve seen during head movement compromises the image in my book. My gut says this is probably a persistence blurring issue, though it could be something else. We’ll have to wait to see what Pimax says about this and if they’re able to make improvements by the time Crystal ships.

Photo by Road to VR

Speaking of Crystal shipping: the headset was originally planned for release in Q3 2022, but that date has slipped. Although the company hosted a ‘Pimax Crystal Launch Event’ back in November, at CES 2023 Pimax said the first headsets will start being delivered at the end of this month, though the company also indicates that it won’t reach full production capacity until the middle of the year. Even when the first units do start shipping, key accessories and features, like the headset’s standalone mode—which makes up about half of its value proposition—aren’t expected to be available until unspecified points in the future.

Hands-on: Pimax Crystal Touts Impressive Clarity, But Suffers From a (potentially fixable) Flaw Read More »

‘Gorilla Tag’ Reports $26M in Revenue, Over 700K Users Played on Christmas Day

Gorilla Tag is undoubtedly a hit. Its primate-centric locomotion style and infectious game of tag has vaulted it into the top spot as the most-rated game on the Quest Store, surpassing even the Meta-owned rhythm game Beat Saber. Now, the indie team behind Quest’s most popular game revealed they’ve generated over $26 million with Gorilla Tag.

Speaking to VentureBeat, developer Another Axiom reported that its gorilla-themed game has not only brought in $26 million from in-app purchases, but has also attracted more players than previously reported.

Having initially launched on App Lab in March 2021 and later released on the official Quest Store this past December, the devs behind the free-to-play game say it has now reached a peak of 2.3 million monthly active users. On Christmas, when Meta typically sees a big influx of users, over 760,000 people played Gorilla Tag.

The game is free-to-play on Quest—its biggest platform—although a $20 Steam Early Access version is available for PC VR headsets, which comes with an equal value of the game’s in-game currency, shiny rocks.

Therein lies Gorilla Tag’s monetization strategy, as in-app purchases include a range of cosmetic items such as hats, glasses, and seasonal items like Santa beards and candy canes.

Developer Kerestell Smith told Road to VR last month that the game’s main driver for getting players in the door (and spending cash) was some well-timed virality on TikTok, with the hashtag #gorillatag seeing 4.4 billion views to date.

Today, the game sits at over 52,000 reviews, ranking above Beat Saber’s 46,000 reviews, making it the most-rated game on the platform. At the time of this writing, Gorilla Tag is the fourth best-rated free game on Quest, sitting behind GYM CLASS – BASKETBALL VR, Innerworld, and First Steps for Quest 2.

Check out the full rankings from this month, which we break down into best and most rated games for both paid and free titles on Quest.

‘Gorilla Tag’ Reports $26M in Revenue, Over 700K Users Played on Christmas Day Read More »

RealWear Announces Navigator 520 Assisted Reality Enterprise Headset

RealWear’s Navigator series of enterprise “assisted reality” headsets just got bigger. The company recently announced the Navigator 520, an updated version of the series’ flagship model released just over a year ago.

Improvements Due to New “HyperDisplay”

The RealWear Navigator 500 launched in December 2021, and it does what it was designed to do well. But in XR, doing something well is seldom an excuse not to improve. As a result, you have to look pretty closely to notice the differences between the 500 and the recently announced 520 – at least from the outside.

Looking at side-by-side product images, you can see that the Navigator 520 has improved eye relief – that is, the screen sits farther from the wearer’s eye. In industrial settings, this means users can see more of their surroundings while still getting what they need on the display. It also improves eye comfort, which is important in a device designed for all-day wear.

RealWear Navigator 500 vs Navigator 520

Of course, RealWear didn’t just move the same display and call it a new product. The company improved eye relief by improving the display itself: the Navigator 520 features RealWear’s new HyperDisplay technology, which integrates a larger eye box and a higher-definition screen with brighter colors.

“With the launch of RealWear Navigator 520 we’ve continued to put ourselves in the shoes of a modern frontline professional who wants to stay connected and empowered,” RealWear Chief Product Officer Rama Oruganti said in a release. “This product brings together a year of major improvements and innovations on the RealWear platform.”

Navigator 520

The hardware similarities are a benefit to the Navigator 520, as the modular device is compatible with a number of components and accessories already developed for the Navigator 500, including the voice-operated thermal camera announced by the company in November 2022.

Is Upgrading to the Navigator 520 Worth It?

Whenever an updated version of a standby comes out, there are two natural responses: excitement and skepticism. Is it worth updating to the 520 if you already use the 500? Is the 520 worth the extra money while the 500 is still available for less?

There are demos that simulate the 520’s resolution difference behind the HyperDisplay link above so you can get an idea of the display changes. It’s also worth asking whether your particular use case would benefit from improved eye relief. Are long shifts and situational awareness pain points in your particular situation?

It’s also worth remembering that given the cross-compatibility between the two devices, upgrading from the 500 to the 520 doesn’t necessarily mean that you have to replace any modules, accessories, and mounts that you may already be using.

RealWear Navigator 520 worker

What is the cost difference? The Navigator 500 is $2,500 and the Navigator 520 is $2,700. If you’re looking at getting started with RealWear, the difference may be negligible given all of the improvements of the newer model.

If you already have a fleet of 500s, replacing them all could be rough. However, replacing 500s with 520s as needed might be the way to go given component compatibility. And, after all, one year seems to be becoming the standard XR product cycle these days. RealWear headsets are built to last, but that doesn’t mean their specs will never go out of date.

Options for Improvement

RealWear is keeping up with the current trend in XR wearables: releasing new devices while the previous generation still has shelf life. While this can be frustrating when it means replacing whole fleets of units, the Navigator 520’s place in RealWear’s product structure provides flexibility for users at different stages of device deployment.

RealWear Announces Navigator 520 Assisted Reality Enterprise Headset Read More »