Earlier this year, TikToker Gabrielle Judge, aka the “anti-work girlboss,” posted a now-viral two and a half minute video.
The subject was the “Lazy Girl Job,” which captured viewers’ imaginations to such an extent that Judge’s video now has more than 345,000 views. The concept then took on a life of its own, spawning the #lazygirljobs hashtag, which has rocketed past 17 million views on the platform.
Judge was quick to spell out what a lazy girl job actually is. “A lazy girl job is basically something you can just quiet quit […] There’s lots of jobs out there where you could make, like, 60 to 80k, and not do that much work and be remote.”
Not working unsociable hours and having time for childcare are two elements she flags as essential to the quintessential lazy girl job, which, she says, is more easily found in non-technical tech jobs, such as marketing associate, account manager, or customer success roles.
Typically these jobs offer decent pay and equity in the company. They’re safe, representing what Judge calls “an easy job that is extremely flexible.” But, as the name suggests, the concept is gendered, and without proper examination it could be misconstrued as women wanting to sit back and be carried in the workplace.
After decades of women striving to be treated equally, the gender pay gap in the EU stood at 12.7% in 2021, meaning women earned on average 12.7% less per hour than men. Given that context, it is important to consider that the idea of the lazy girl job is actually a reaction to the grind culture that has beset the workplace in recent years.
Judge has clarified that the term is not actually about laziness, or what she calls “mouse jiggling”—also known as being present, but doing little.
Reasonable responsibilities and expectations
Where Millennial workers popularised the idea of side hustles and working every available hour, Gen Z is pushing back against these expectations. This cohort of workers wants careers that support their work-life balance and don’t leave them wrung out at the end of the day.
Reframed in this way, there’s a lot to recommend a lazy girl job, for all genders. Or, as it is otherwise known, a job with reasonable responsibilities and expectations, decent pay and a manageable level of stress.
While Judge is from the US, and operating in an environment where paid time off, maternity leave, and social security protections are significantly weaker than those enjoyed by European workers, employees on this side of the Atlantic are also burned out and looking for better work options.
The pandemic took its toll: 44% of workers said their work stress increased as a result of the Covid-19 crisis, and 46% said they are exposed to severe time pressure or work overload.
Work-related health issues have also increased in Europe, with 30% reporting at least one health problem such as overall fatigue, headaches, eyestrain, muscle problems or pain, caused or made worse by work.
Lack of engagement
That bloc-wide stress and fatigue has led to another trending topic: the phenomenon of quiet quitting, which is what happens when employees put in the minimum amount of effort to keep their jobs and get paid, but never go above and beyond.
McKinsey data has found that this is happening in Europe, and that workplace engagement is poor here too. It also says that 79% of Europeans who report low levels of engagement or support factors are likely to leave their jobs.
What workers want, says McKinsey, is more workplace flexibility, as well as a physically and psychologically safe workplace. According to Amy Edmondson, the Novartis Professor of Leadership and Management at Harvard Business School, this “means an absence of interpersonal fear. When psychological safety is present, people are able to speak up with work-relevant content.”
That sounds a lot like an engaged workforce, which really matters, because employees who are engaged with their roles are not only committed to their work but also more invested in their company’s success.
The Gaia spacecraft has unearthed a new treasure trove of secrets about our galaxy — and beyond.
The European Space Agency (ESA) mission plans to produce the largest, most precise 3D map of the Milky Way. To achieve this lofty goal, Gaia is surveying almost 2 billion celestial objects.
Using two optical telescopes, the satellite is monitoring their motion, luminosity, temperature, and composition. Every observation could unravel new details about Earth and the surrounding Universe.
The new release fills in some big gaps in the maps.
Gaia had already provided a pretty comprehensive view of the Milky Way and beyond — but some areas of sky required further exploration.
Regions that are densely packed with stars required particular attention. One notable example is globular clusters.
Globular clusters are among the oldest objects in the universe, which makes them valuable sources of information about our cosmic history. Telescopes, however, struggle to scrutinise their bright cores crammed with stars.
To find new pieces of the jigsaw puzzle, Gaia targeted Omega Centauri — the largest globular cluster that’s viewable from Earth.
Typically, the spacecraft would focus on individual stars. But in this case, the observatory surveyed a wider stretch of sky around Omega Centauri’s core, which was mapped whenever the cluster came into view.
The technique exposed over half a million new stars in space — all within a single cluster.
With the new data, scientists can study the cluster’s structure, as well as the distribution and movements of the constituent stars. Together, these details could produce a complete large-scale map of Omega Centauri.
“It’s using Gaia to its full potential — we’ve deployed this amazing cosmic tool at maximum power,” Alexey Mints, a member of the Gaia collaboration, said in a statement.
In fact, the findings exceed the initial objectives for Gaia. To discover the stars, the researchers used an observing mode that was designed only to keep the spacecraft’s instruments running smoothly.
“We didn’t expect to ever use it for science, which makes this result even more exciting,” said the study’s lead author, Katja Weingrill, a researcher at the Leibniz-Institute for Astrophysics Potsdam (AIP).
Gaia is now using the technique to explore eight more regions. The results will deepen our understanding of what happens in the ancient bodies.
According to ESA, the data will help scientists confirm our galaxy’s age, locate its centre, and determine whether it ever experienced any collisions.
Astronomers could also track changes to stars and constrain models of galactic evolution. They could even infer the possible age of the Universe.
Gaia also turned its attention to gravitational lenses. These form when a large quantity of matter, such as a galaxy cluster, sits between Earth and a distant light source, bending the light to create a multiple-image effect.
By studying the configurations, scientists can uncover new information about the Universe’s history.
Gaia revealed that certain objects in gravitational lenses aren’t what they appear to be. While the bodies look like stars, they’re actually distant lensed quasars — extremely bright, energetic galactic cores powered by black holes.
“With this data release, Gaia is the first mission to achieve an all-sky survey of gravitational lenses at high resolution,” said Laurent Galluccio, a member of the Gaia collaboration.
Further studies published today pinpointed the positions of asteroids and mapped the disc of the Milky Way. Another paper characterises the dynamics of 10,000 pulsating and binary red giant stars.
“This data release further demonstrates Gaia’s broad and fundamental value – even on topics it wasn’t initially designed to address,” said Timo Prusti, Project Scientist for Gaia at ESA.
“Although its key focus is as a star surveyor, Gaia is exploring everything from the rocky bodies of the Solar System to multiply imaged quasars lying billions of light-years away, far beyond the edges of the Milky Way.”
One of the most interesting things about Vision Pro is the way Apple is positioning its fully immersive capabilities. While many have interpreted the company’s actions as relegating VR to an afterthought, the reality is much more considered.
Vision Pro is somewhat ironic. It’s an incredibly powerful and capable VR headset, but Apple has done extensive work to make the default mode feel as little like being in VR as possible. This is of course what’s called ‘passthrough AR’, or sometimes ‘mixed reality’. We’re not quite there yet, but it’s clear that in Apple’s ideal world when you first put on the headset it should feel like nothing around you has even changed.
Apple doesn’t want Vision Pro to take over your reality… at least not all the time. It has gone to extensive lengths to try to seamlessly blend virtual imagery into the room around you. When floating UI panels are created, they are not only subtly transparent (to reveal the real world behind them), but the system even estimates the room’s lighting to cast highlights and shadows on the panels, making them look like they’re really floating there in front of you. It’s impressively convincing.
But none of this negates the fact that Vision Pro is a powerful VR headset. In my hands-on demo earlier this year, Apple clearly showed the headset is not only capable of fully immersive VR experiences, but that VR is a core capability of the platform. It even went so far as to add the ‘digital crown’ dial on the top of the headset to make it easy for people to transition between passthrough AR and a fully immersive view.
Much of the commentary surrounding Vision Pro focused on the fact that Apple never actually said the words “virtual reality,” and how the headset lacks the kind of dedicated controllers that are core to most VR headsets today. It was reasoned that this is because the company doesn’t really want Vision Pro to have anything to do with VR.
As I’ve had more time to process my experience of using the headset and my post-demo discussions with some of the people behind the product, it struck me that Apple doesn’t want to avoid fully immersive VR; it’s actually embracing it, but in a way that’s essentially the opposite of what we’ve seen in most other headsets today. And frankly, I think their way is probably the approach the entire industry will adopt.
To understand that, let’s think about Meta’s Quest headsets. Though things might be changing soon with the release of Quest 3, up to this point the company has essentially used VR as the primary mode on its headsets, while passthrough AR was a sort of optional and occasional bonus mode—something apps only sometimes used, or that users had to consciously toggle on.
On Vision Pro, Apple is doing the reverse. Passthrough AR is the default mode. But fully immersive VR is not being ignored; to the contrary, the company is treating VR as the most focused presentation of content on the headset.
In short, Apple is treating VR like a ‘full-screen’ mode for Vision Pro; the thing you consciously enable when you want to rid yourself of other distractions and get lost in one specific piece of media.
If you think about it, that’s exactly how we use full-screen on our computers and phones today.
Not every application on my computer launches in full-screen and removes my system UI or hides my other windows. In fact, the majority of apps on my computer don’t work this way. Most of the time I want to see my taskbar and my desktop and the various windows and controls that I use to manipulate data on my screen.
But if I’m going to watch a movie or play a game? Full-screen, every time.
That’s because these things are focused experiences where we don’t want to be distracted by anything else. We want to be engrossed by them so we remove the clutter and even let the application hide the mouse and give us a custom interface to better blend it with the media we’re about to engage with.
In the same way that you wouldn’t want every application on your computer to be in full-screen mode—with its own interface and style—Apple doesn’t think every application on your headset should be that way either.
Most should follow familiar patterns and share common interface language. And most do not need to be full-screen (or immersive). In fact, some things not only don’t benefit from being more immersive, in some cases they are made worse. I don’t need a fully immersive environment to view a PDF or spreadsheet. Nor do I need to get rid of all of my other windows and data if I want to play a game of chess. All of those things can still happen, but they don’t need to be my one and only focus.
Most apps can (and should) work seamlessly alongside each other. It’s only when we want that ‘full-screen’ experience that we should give an app permission to take over completely and block out the rest.
And that’s how Apple is treating fully immersive VR on Vision Pro. It isn’t being ignored; the company is simply baking-in the expectation that people don’t want their apps ‘full-screen’ all the time. When someone does want to go full-screen, it’s always a conscious opt-in action, rather than opt-out.
As for the dial on the top of the headset—while some saw this as evidence that Apple wants to make it quick and easy for people to escape fully immersive VR experiences on the headset, I’d argue the company sees the dial as a two-way street: it’s both an ‘enter full-screen’ and ‘exit full-screen’ button, the same as we expect to see in most media apps.
Ultimately, I think the company’s approach to this will become the norm across the industry. Apple is right: people don’t want their apps full-screen all the time. Wanting to be fully immersed in one thing is the exception, not the rule.
Meta Connect 2023 has wrapped up, bringing with it a deluge of info from one of the XR industry’s biggest players. Here’s a look at the biggest announcements from Connect 2023, but more importantly, what it all means for the future of XR.
Last week marked the 10th annual Connect conference, and the first Connect conference after the Covid pandemic to have an in-person component. The event originally began as Oculus Connect in 2014. Having been around for every Connect conference, it’s amazing when I look around at just how much has changed and how quickly it all flew by. For those of you who have been reading and following along for just as long—I’m glad you’re still on this journey with us!
So here we are after 10 Connects. What were the big announcements and what does it all mean?
Meta Quest 3
Obviously, the single biggest announcement is the reveal and rapid release of Meta’s latest headset, Quest 3. You can check out the full announcement details and specs here and my hands-on preview with the headset here. The short and skinny is that Quest 3 is a big hardware improvement over Quest 2 (but still being held back by its software) and it will launch on October 10th starting at $500.
Quest 3 marks the complete dissolution of Oculus—the VR startup that Facebook bought back in 2014 to jump-start its entrance into XR. It’s the company’s first mainline Quest headset to launch following Facebook’s big rebrand to Meta, leaving behind no trace of the original and very well-regarded Oculus brand.
Apples and Oranges
On stage at Connect, Meta CEO Mark Zuckerberg called Quest 3 the “first mainstream mixed reality headset.” By “mainstream” I take it he meant ‘accessible to the mainstream’, given its price point. This was clearly in purposeful contrast to Apple’s upcoming Vision Pro which, to his point, is significantly less accessible given its $3,500 price tag. Though he didn’t mention Apple by name, his comments about accessibility, ‘no battery pack’, and ‘no tether’ were clearly aimed at Vision Pro.
Mixed Marketing
Meta is working hard to market Quest 3’s mixed reality capabilities, but for all the potential the feature has, there is no killer app for the technology. And yes, having the tech out there is critical to creating more opportunity for such a killer app to be created, but Meta is essentially treating its developers and customers as beta testers of this technology. It’s the same ‘market it and they will come’ approach that didn’t seem to pan out too well for Quest Pro.
Personally I worry about the newfangled feature being pushed so heavily by Meta that it will distract the body of VR developers who would otherwise better serve an existing customer base that’s largely starving for high-quality VR content.
Regardless of whether or not there’s a killer app for Quest 3’s improved mixed reality capabilities, there’s no doubt that the tech could be a major boon to the headset’s overall UX, which is in substantial need of a radical overhaul. I truly hope the company has mixed reality passthrough turned on as the default mode, so when people put on the headset they don’t feel immediately blind and disconnected from reality—or need to feel around to find their controllers. A gentle transition in and out of fully immersive experiences is a good idea, and one that’s well served with a high quality passthrough view.
Apple, on the other hand, has already established passthrough mixed reality as the default when putting on the headset, and for now even imagines it’s the mode users will spend most of their time in. Apple has baked this in from the ground-up, but Meta still has a long way to go to perfect it in their headsets.
Augments vs. Volumes
Several Connect announcements also showed us how Meta is already responding to the threat of Apple’s XR headset, despite the vast price difference between the offerings.
For one, Meta announced ‘Augments’, which are applets developers will be able to build that users can place in permanently anchored positions in their home in mixed reality. For instance, you could place a virtual clock on your wall and always see it there, or a virtual chessboard on your coffee table.
This is of course very similar to Apple’s concept of ‘Volumes’, and while Apple certainly didn’t invent the idea of having MR applets that live indefinitely in the space around you (nor did Meta), it’s clear that the looming Vision Pro is forcing Meta to tighten its focus on this capability.
Meta says developers will be able to begin building ‘Augments’ on the Quest platform sometime next year, but it isn’t clear if that will happen before or after Apple launches Vision Pro.
Microgestures
Augments aren’t the only way that Meta showed at Connect that it’s responding to Apple. The company also announced that it’s working on a system for detecting ‘microgestures’ for hand-tracking input—planned for initial release to developers next year—which look awfully similar to the subtle pinching gestures that are primarily used to control Vision Pro:
Again, neither Apple nor Meta can take credit for inventing this ‘microgesture’ input modality. Just like Apple, Meta has been researching this stuff for years, but there’s no doubt the sudden urgency to get the tech into the hands of developers is related to what Apple is soon bringing to market.
A Leg Up for Developers
Meta’s legless avatars have been the butt of many-a-joke. The company had avoided the issue of showing anyone’s legs because they are very difficult to track with an inside-out headset like Quest, and doing a simple estimation can result in stilted and awkward leg movements.
But now the company is finally adding leg estimation to its avatar models, and giving developers access to the same tech to incorporate it into their games and apps.
And it looks like the company isn’t just succumbing to the pressure of the legless avatar memes by spitting out the same kind of third-party leg IK solutions that are being used in many existing VR titles. Meta is calling its solution ‘generative legs’, and says the system leans on tracking of the user’s upper body to estimate plausibly realistic leg movements. A demo at Connect shows things looking pretty good:
It remains to be seen how flexible the system is (for instance, how will it look if a player is bowling or skiing, etc?).
Meta says the system can replicate common leg movements like “standing, walking, jumping, and more,” but also notes that there are limitations. Because the legs aren’t actually being tracked (just estimated) the generative legs model won’t be able to replicate one-off movements, like raising your knee toward your chest or twisting your feet at different angles.
Virtually You
The addition of legs coincides with another coming improvement to Meta’s avatar modeling, which the company is calling inside-out body tracking (IOBT).
While Meta’s headsets have always tracked the player’s head and hands using the headset and controllers, the rest of the upper body (arms, shoulders, neck) was entirely estimated using mathematical modeling to figure out what position it should be in.
For the first time on Meta’s headsets, IOBT will actually track parts of the player’s upper body, allowing the company’s avatar model to incorporate more of the player’s real movements, rather than making guesses.
Specifically, Meta says its new system can use the headset’s cameras to track wrist, elbow, shoulder, and torso positions, leading to more natural and accurate avatar poses. The IOBT capability can work with both controller tracking and controller-free hand-tracking.
Both capabilities will be rolled into Meta’s ‘Movement SDK’. The company says ‘generative legs’ will be coming to Quest 2, 3, and Pro, but the IOBT capability might end up being exclusive to Quest 3 (and maybe Pro) given the different camera placements that seem aimed toward making IOBT possible.
Calm Before the Storm, or Calmer Waters in General?
At Connect, Meta also shared the latest revenue milestone for the Quest store: more than $2 billion has been spent on games and apps. That means Meta has pocketed some $600 million from its store, while the remaining $1.4 billion has gone to developers.
That’s certainly nothing to sneeze at, and while many developers are finding success on the Quest store, the figure amounts to a slowdown in revenue momentum over the last 12 months, one which many developers have told me they’d been feeling.
The reason for the slowdown is likely a combination of Quest 2’s age (now three years old), the rather early announcement of Quest 3, a library of content that’s not quite meeting users’ expectations, and a still-struggling retention rate driven by core UX issues.
Quest 3 is poised for a strong holiday season, but with its higher price point and missing killer app for the heavily marketed mixed reality feature, will it do as well as Quest 2’s breakout performance in 2021?
The much lauded Echo VR might no longer be with us, but one of its innovations is living on in a new wave of VR games.
Echo VR (and its single-player counterpart, Lone Echo) were among the first major VR games to build a game around a virtual movement system based entirely on the player’s arm movement. While most VR games used (and continue to use) thumbsticks to allow players to glide around on their feet, the Echo games actually gave players no control over their feet, and instead had them floating around exclusively in zero-G environments with only their hands to push and pull themselves around the game space.
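To make the mechanic concrete, here’s a minimal sketch, in plain Python with all names and constants invented for illustration, of how a hands-only zero-G movement loop can work: while the player grips a surface, the body is propelled opposite the hand’s motion, and momentum persists after letting go. This isn’t drawn from any actual game’s code.

```python
from dataclasses import dataclass, field

@dataclass
class Vec3:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    def __add__(self, o): return Vec3(self.x + o.x, self.y + o.y, self.z + o.z)
    def __mul__(self, s): return Vec3(self.x * s, self.y * s, self.z * s)

@dataclass
class Player:
    position: Vec3 = field(default_factory=Vec3)
    velocity: Vec3 = field(default_factory=Vec3)  # persists after release: zero-G drift

def update_zero_g(player: Player, grabbing: bool, hand_delta: Vec3, dt: float) -> None:
    """One frame of hands-only zero-G locomotion.

    While gripping a surface, the body moves opposite the hand's motion:
    pulling your hand toward you pushes your body forward. On release,
    the last velocity is kept, so the player drifts until the next grab.
    """
    if grabbing:
        # Body velocity mirrors the hand's motion relative to the grab point.
        player.velocity = hand_delta * (-1.0 / dt)
    # No gravity and no drag in zero-G: just integrate position.
    player.position = player.position + player.velocity * dt

# Example: pulling the hand back 2cm in one 90Hz frame propels the body forward,
# and the drift continues on the next frame after release.
p = Player()
update_zero_g(p, grabbing=True, hand_delta=Vec3(0.0, 0.0, -0.02), dt=1 / 90)
update_zero_g(p, grabbing=False, hand_delta=Vec3(), dt=1 / 90)
print(p.velocity.z, p.position.z)  # 1.8 m/s forward, ~0.04 m travelled
```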
While other early VR games definitely contributed to the idea of arm-based movement rather than sliding thumbstick movement (shout-out to Lucid Trips, Climbey, Sprint Vector, and many more), the Echo games did a lot of heavy lifting to popularize this novel locomotion concept.
And from there, the idea has grown and evolved.
Gorilla Tag (2021), whose creator specifically says he was inspired by Echo VR, has become one of VR’s most popular games, bringing its spin on arm-based locomotion to a much wider audience. With that exposure, more and more players are learning how this particular way of moving in VR can be fun, making them more likely to try games with similar mechanics.
And this goes far beyond the smattering of Gorilla Tag clones you can find on Steam.
Nock (2022) went several steps further with a much faster type of sliding and gliding arm movement, while also weaving in bows and arrows, challenging players to both navigate and shoot with their hands in a continuous flow.
Space Ball (2023) took the Gorilla Tag movement and fused it with a Rocket League style game, letting players bound around the arena and launch themselves to dunk a huge ball into a hoop.
It’s not just multiplayer games either. Arm-based locomotion systems are popping up in single-player adventures like Phantom Covert Ops (2020), which had a very literal take on arm-movement in VR—asking players to paddle themselves around in a covert kayak. It sounds silly on the surface, but there’s no doubt the game’s arm-based movement was both unique and successful.
In 2023 alone we’ve seen more arm-based movement games like No More Rainbows, Toss!, and Outta Hand. If you peruse the reviews of these games, you find a common theme of advice from reviewers: ‘if you liked Gorilla Tag, check this out!’. Clearly the players enjoying these games want more like them, with the desired similarity being the use of arms for movement.
And there’s more to come. One of the most intriguing upcoming Quest titles, Underdogs, takes the concept in a different direction, where a player brawls it out in a mech using their arms to pull themselves around the arena.
And in a truly full-circle moment, the creators of Gorilla Tag (who were inspired by Echo VR) are building a spiritual successor to Echo VR. Currently codenamed ‘Project A2’, the game will revisit arm-based movement in zero-G in an effort to revive the very game that popularized arm-based movement to so many in the first place.
It’s apparent that VR developers and players alike are beginning to find that controlling your arms with… your arms, is much more engaging than controlling your legs with… a thumbstick. I have a feeling that this new wave of games built entirely around arm-based movement is here to stay. The question on my mind is if they will remain as their own genre within VR, or perhaps come to define the way movement works in most VR games.
Blade & Sorcery, the hit physics-based combat sandbox, is moving towards its 1.0 release, which is coming with a major update slated to arrive early next year.
In development by indie studio WarpFrog for almost five years now, Blade & Sorcery has basically been the go-to fantasy combat simulator for PC VR headset users, letting you live out all of your sword and sorcery dreams with suitably malleable enemies at the ready.
WarpFrog is nearly ready to bring the game out of Early Access too, detailing on Steam the game’s massive 1.0 update, known as “Crystal Hunt”, which will feature a host of new content, including a bona fide storyline, a new dungeon biome, and a new game mode. It’s also said to be the game’s final update.
Revolving around an ancient and mysterious race called the Dalgarians, the game’s storyline will be presented through environmental storytelling, written text, and ciphers—something that the studio says will offer players “deep lore” exploration for the first time.
The Crystal Hunt also involves character progression, loot gathering, and a unique skill tree—no small feat. Loot gathered in dungeons can be sold to purchase weapons and armor from a physical shop in the game. Then there are the much sought-after Crystal Cores themselves.
The titular crystals are “the very rare resource found in the Dalgarian ruins and what you and everyone else is chasing,” the studio explains. “Crystal Cores can be siphoned of their magical power to make a sorcerer more powerful. In game terms, this is the currency you will use to unlock new skill branches, [in] which you can then invest your shards.”
The update also introduces new armors and weapons, including tiered weapons with functional benefits.
The release date for the update is estimated to be in Q1 2024, WarpFrog says, but it’s subject to change due to development challenges, and also the team’s anti-crunch culture—the studio now counts 21 full-time and six part-time members.
Exactly when 1.0 arrives, we aren’t sure. The final release date will be confirmed when the 1.0 trailer drops, so stay tuned to the YouTube channel of The Baron, the game’s producer and community lead—and of course, check back here for all of the latest news.
In a scientific first, researchers at the University of Oxford have 3D printed stem cells that can mimic the architecture of the cerebral cortex, the human brain’s outer layer. The technique could potentially be used to treat brain injuries.
Such injuries typically cause significant damage to the cerebral cortex, leading to movement, cognition, and communication challenges. Currently, there’s no effective treatment for severe cases, which negatively impacts the patients’ quality of life.
Hoping to change this, the research team fabricated two-layered brain tissue by 3D printing human neural stem cells.
To achieve this, the researchers used human induced pluripotent stem cells (hiPSCs), which can be easily derived from cells harvested from the patients themselves, reducing the risk of an immune response.
Initially, the hiPSCs were differentiated into neural progenitor cells for two separate layers of the cerebral cortex. They were then suspended in a solution to produce two “bioinks,” which were printed to create a two-layered structure of brain tissue.
Notably, when implanted into mouse brains, the printed cells showed both structural and functional integration with the host tissue.
“Our droplet printing technique provides a means to engineer living 3D tissues with desired architectures, which brings us closer to the creation of personalised implantation treatments for brain injury,” said Dr Linna Zhou, senior author of the study.
The researchers now aim to further refine their technique and create complex multi-layered cerebral cortex tissues that mimic the human brain’s architecture more realistically. Beyond brain injuries, these 3D-printed cells could benefit drug evaluation and our knowledge of brain development and cognition.
Today, France’s Eviden (part of cybersecurity, cloud, and high-performance computing group Atos) and German modular supercomputing company ParTec announced they had won a contract to provide the very first exascale supercomputer in Europe.
The JUPITER project will cost €500mn in total. The computer itself will cost €273mn and run on Arm architecture SiPearl Rhea processors and Nvidia accelerator technology. It will be operated by the Jülich Supercomputing Centre in Germany.
JUPITER will be the first system in Europe to surpass the threshold of one billion billion calculations per second. The aim is for it to support the development of high-precision models of complex systems, which could help solve questions regarding areas such as climate change, pandemics, and fusion energy. Of course, it would also enable intensive use of AI and analysis of large data volumes.
JUPITER stands for Joint Undertaking Pioneer for Innovative and Transformative Exascale Research (just in case you were wondering). The European High Performance Computing Joint Undertaking (EuroHPC JU) announced the project last year, and put out a call for tenders in January.
But let’s back up for a moment — what exactly is an exascale supercomputer?
One billion billion calculations per second
An exascale system, as already mentioned, is a supercomputer or high-performance computing (HPC) system capable of performing a billion billion calculations per second. This is equivalent to one exaflop.
In other words, an exaflop is a measure of performance for a supercomputer that can calculate at least one quintillion (exa-) floating point operations (flops) per second. An exabyte, meanwhile, is a quintillion bytes of data.
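To put those prefixes in perspective, here’s a quick back-of-the-envelope calculation in Python (the laptop figure is an illustrative assumption, not a benchmark):

```python
EXA = 10**18  # the 'exa-' prefix: one quintillion

exascale_flops = 1 * EXA    # an exascale machine: 10^18 floating point ops per second
laptop_flops = 100 * 10**9  # assume a laptop sustaining ~100 gigaflops

# How long would that laptop need to match one second of exascale work?
seconds = exascale_flops / laptop_flops
print(f"{seconds:.0e} seconds, i.e. about {seconds / 86400:.0f} days")
# => 1e+07 seconds, i.e. about 116 days
```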
Building and operating exascale systems pose various technical challenges, including power consumption, heat management, scalability, and software optimisation.
The world’s first exascale supercomputer is the Frontier, built by Hewlett Packard Enterprise (HPE) and housed at the Oak Ridge National Laboratory in Tennessee. It was deployed in 2021 and reached full capacity in 2022. It is set to be superseded by El Capitan at the Lawrence Livermore National Laboratory in California, also by HPE. El Capitan will deliver over 2 exaflops when it comes online mid-2024.
Meanwhile, the fastest supercomputer in Europe, owned by the EuroHPC JU, is LUMI (Large Unified Modern Infrastructure). It sits in the CSC data centre in Kajaani, Finland, began operating in 2021, and can achieve more than 375 petaflops (a petaflop being one thousand million million floating point operations per second), with a “theoretical peak” of 550 petaflops. That made it the third fastest supercomputer in the world as of June 2022.
The UK’s competition watchdog (CMA) has opened an investigation into Amazon’s and Microsoft’s cloud services after concerns over their dominant position in the market.
The move follows a study by telecoms regulator Ofcom, which “identified features that make it more difficult for UK businesses to switch and use multiple cloud suppliers,” mainly concerning the two US tech giants.
“We welcome Ofcom’s referral of public cloud infrastructure services to us for in-depth scrutiny,” said Sarah Cardell, CEO at the CMA. “This is a £7.5bn market that underpins a whole host of online services — from social media to AI foundation models. Many businesses now completely rely on cloud services, making effective competition in this market essential.”
Ofcom’s study found that Amazon and Microsoft are the two leading cloud providers in the UK, with a combined market share of 70% to 80% in 2022. Google came third at 5%-10%.
The regulator is mostly worried about three features of their services:
Egress fees: charges that customers pay to move their data out of a cloud.
Discounts: that may incentivise customers to use only one cloud provider, even if better quality alternatives are available.
Technical barriers to interoperability: which can prevent customers from switching between different clouds, or from using more than one provider.
“Some UK businesses have told us they’re concerned about it being too difficult to switch or mix and match cloud providers, and it’s not clear that competition is working well,” said Fergal Farragher, the Ofcom director responsible for the study.
The CMA will conclude its investigation by April 2025. It has the power to make recommendations to the government or even impose its own remedies, including requiring companies to sell off parts of their business to improve competition.
Meanwhile, Microsoft and Amazon are facing tough competition measures enforced by the EU’s Digital Markets Act (DMA). Amazon Marketplace and Amazon Ads alongside Microsoft’s LinkedIn and Windows PC OS have five months to comply with a list of rules, such as allowing consumers to uninstall pre-installed apps, or enabling business users to promote and sell their products on their own websites.
Bigscreen Beyond is the most interesting and promising new dedicated PC VR headset to come out in years, and while there’s a lot to like, we’re still waiting on a key piece that will make or break the headset.
Bigscreen Beyond has one goal in mind: make the smallest possible headset with the highest possible image quality.
Generally speaking, this unlikely headset (born from a VR software startup, after all) has ‘pulled it off.’ It’s an incredibly compact VR headset with built-in SteamVR tracking. It feels like a polished, high-end product with a look and feel that’s all its own. The visuals are great, though not without a few compromises. And it delivers something that no other headset to date has: a completely custom facepad that’s specially made for each customer.
I’ll dig more into the visual details soon, but first I need to point out that Bigscreen Beyond is missing something important: built-in audio.
While there’s an official deluxe audio strap on the way, as of right now the only way to use Bigscreen Beyond is with your own headphones. In my case that means a pair of wireless gaming headphones connected to my PC. And it also means another thing to put on my head.
For some headsets this would be a notable but not deal-breaking inconvenience; for Bigscreen Beyond, however, it’s amplified because the headset’s custom-fit facepad means absolutely zero light leakage. It wasn’t until I started using Beyond that I realized just how often I use the nose-gap in the bottom of most headsets to get a quick glimpse into the real world, whether that’s to grab controllers, make sure I didn’t miss an important notification on my phone, or even pick up a pair of headphones.
With no nose-gap and no passthrough camera, you are 100% blind to the real world when you put on Beyond. Then you need to feel around to find your headphones. Then you need to feel around for your controllers.
Oops, something messed up on your PC and you need to restart SteamVR? Sure, you can lift the headset to your forehead to deal with it in a pinch, but then you put it back down and realize you got some oil on the lenses from your hair or forehead. So now you need to wipe the lenses… ok, let me put down the controllers, take off the headphones, take off the headset, wipe the lenses, then put on the headset, feel around for my headphones, then feel around for my controllers. Now I want to fix my headstrap… oops the headphones are in the way. Let me take those off for a minute…
All of this and more was the most frustrating part of an otherwise quite good experience when using Beyond. And sure, I could use wireless earbuds or even external speakers. But both have downsides that don’t exist with a built-in audio solution.
A lack of built-in audio on a VR headset just feels like a huge step back in 2023. It’s a pain in the ass. Full stop.
Until we have the upcoming deluxe audio strap to pair with Beyond, it feels incomplete. We’re patiently waiting to get our hands on the strap—as it will really make-or-break the headset—and plan to update our review when that time comes. Bigscreen says it expects the deluxe audio strap to be available sometime in Q4.
Bigscreen Beyond Review
With the audio situation in the back of our minds, we can certainly talk about the rest of the headset. Before we dive in, here’s a look at the tech specs for some context:
Adjustments: IPD (fixed, customized per headset), eye-relief (fixed, customized per facepad)
IPD Adjustment Range: 53–74mm (fixed, single IPD value per device)
Connectors: DisplayPort 1.4, USB 3.0 (2x)
Accessory Ports: USB 2.0 (USB-C connector) (1x)
Cable Length: 5m
Tracking: SteamVR Tracking 1.0 or 2.0 (external beacons)
On-board Cameras: None
Input: SteamVR Tracking controllers
On-board Audio: None
Optional Audio: Audio Strap accessory, USB-C audio output
Microphone: Yes (2x)
Pass-through View: No
Weight: 170–185g
MSRP: $1,000
MSRP (with tracking & controllers): $1,580
And here’s where it fits into the landscape of high-end PC VR headsets from a pricing standpoint:
               Bigscreen Beyond   Varjo Aero   Vive Pro 2   Reverb G2   Valve Index
Headset Only   $1,000             $1,000       $800         –           $500
Full Kit       $1,580             $1,580       $1,400       $600        $1,000
Smaller Than it Looks
Bigscreen Beyond is an incredibly unique offering in a landscape of mostly much larger and much bulkier PC VR headsets. Beyond is even smaller than it looks in photos. In fact, it’s so small that it nearly fits inside other VR headsets.
Getting it so small required that the company individually create custom-fit facepads for each and every customer. Doing so involves using an app to 3D scan your face, which is sent to the company and used as the blueprint to make the facepad that ships with your headset. At present the face scan is only supported on iOS devices (specifically iPhone XR or newer) which means anyone without access to such a device can’t even order the headset.
And this isn’t an illusion of customization: the company isn’t just picking from one of, say, 5 or 10 facepad shapes to find the one that most closely fits your face. Each facepad is completely unique—and the result is that it fits your face like a glove.
That means zero light leakage (which can be good for immersion, but problematic for the reasons described above). The headset is also dialed in—at the hardware level—for your specific IPD, based on your face scan.
Eyebox is Everything
If there’s one thing you should take away from this review, it’s that Bigscreen Beyond has very good visuals and is uniquely conformable, but getting your eyes in exactly the correct position is critical for a good experience.
The eyebox (the zone in which your eyes get an optimal view through the lenses) is so tight that even small deviations can amplify artifacts and reduce the field-of-view. In any other headset, an eyebox this small would make the product barely viable, but Beyond’s commitment to custom-fit facepads makes it possible, because the company has relatively precise control over where each customer’s pupils will sit.
The first facepad the company sent me fit my face well, but the headset’s sweet spot (the area of peak clarity across the lens) felt so tight that it made the already somewhat small field-of-view feel even smaller—too small for my taste. But by testing the headset without any facepad, I could tell that having my eyes closer would give me a notably better visual experience.
When I reached out to the company about this, they sent back a newly made facepad, this time with an even tighter eye-relief. This was the key to opening up the headset’s field-of-view and sweet spot, and improving some other artifacts just enough that it didn’t feel like too much of a sacrifice next to other headsets.
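A simplified bit of geometry shows why a few millimetres of eye-relief matter so much: if you model the lens as a circular window viewed from the eye’s position, the field-of-view grows as the eye moves closer. The numbers below are illustrative assumptions, not Bigscreen’s actual optical parameters:

```python
import math

def approx_fov_deg(lens_aperture_mm: float, eye_relief_mm: float) -> float:
    """Crude window model: the visible field is the angle subtended by a
    circular lens of the given diameter at the eye-relief distance.
    Real pancake optics are far more complex; this only shows the trend."""
    half_angle = math.atan((lens_aperture_mm / 2.0) / eye_relief_mm)
    return math.degrees(2.0 * half_angle)

# Moving the eye just 3mm closer noticeably widens the view:
print(f"{approx_fov_deg(50, 18):.0f} deg at 18mm eye relief")  # ~108 deg
print(f"{approx_fov_deg(50, 15):.0f} deg at 15mm eye relief")  # ~118 deg
```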
Here’s a look at my field-of-view measurements for Bigscreen Beyond (with the optimal facepad), next to some other PC VR headsets. While the field-of-view only increased slightly from the first facepad to the second, the improvement in the sweet spot was significant.
Personal Measurements – 64mm IPD (minimum-comfortable eye-relief, no glasses, measured with TestHMD 1.2)
                 Bigscreen Beyond   Varjo Aero   Vive Pro 2   Reverb G2   Valve Index
Horizontal FOV   98°                84°          102°         82°         106°
Vertical FOV     90°                65°          78°          78°         106°
It’s sort of incredible that moving from the first facepad to the second made such an improvement. At most, the difference in my pupil position between the two facepads was likely just a handful of millimeters. But the headset’s eyebox is just so tight that even small deviations will influence the visual experience.
Comfort & Visuals
With the ideal facepad—and ignoring the annoyance of dealing with an off-board audio solution—Bigscreen Beyond felt like a jump a few years into the future of headsets. It’s tiny, fits my face perfectly, the OLED displays offer true blacks, and the resolution is incredibly sharp with zero evidence of any screen-door-effect (unlit space between pixels).
While it does feel like you give up some field-of-view compared to other headsets, and there’s notable glare, the compact form-factor and light weight really makes a big difference to wearability.
With most VR headsets today I find myself adjusting them slightly on my head every 10 or 15 minutes to relieve pressure points and stay comfortable over a longer period. With Beyond, I found myself making those adjustments far less often, or not at all in some sessions. When playing over longer periods you just don’t notice the headset nearly as much as others, and you’re even less likely to have the occasional bonk on the headset from your flailing controllers, thanks to its much smaller footprint.
Brightness vs. Persistence
While Beyond’s resolution is very good—with resolving power that I found about equal to Varjo’s Aero headset—the default brightness level (100) leads to more persistence blur than I personally think is reasonable. Fortunately Bigscreen makes available a simple utility that lets you turn down brightness in favor of lower persistence blur.
I found that dialing it down to 50 was roughly the optimal balance between brightness and persistence for my taste. This level keeps the image sharp during head movement, but leaves dark scenes truly dark. Granted you can adjust the brightness on the fly if you really want.
Of course this will be content dependent, and Bigscreen is ostensibly tuning the headset with an eye toward movie viewing (considering their VR app is all about movie watching), where persistence blur wouldn’t be quite as bad because you move your head considerably less while watching a movie vs. playing a VR game.
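A rough rule of thumb explains why this trade-off exists: while a frame’s pixels stay lit (the persistence window), head rotation sweeps the image across your retina, and the smear scales with both speed and lit time. The figures below are illustrative assumptions, not measured values for Beyond:

```python
def persistence_blur_px(head_speed_deg_s: float,
                        persistence_ms: float,
                        pixels_per_degree: float) -> float:
    """Approximate motion smear in pixels: angular speed x lit time x pixel density."""
    return head_speed_deg_s * (persistence_ms / 1000.0) * pixels_per_degree

# A 100 deg/s head turn on a ~32 pixels-per-degree display:
print(persistence_blur_px(100, 2.0, 32))  # 6.4 px of smear at 2.0 ms persistence
print(persistence_blur_px(100, 0.5, 32))  # 1.6 px at 0.5 ms (dimmer but sharper)
```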
Clarity
While Beyond doesn’t have Fresnel lenses, its pancake optics still end up with a lot of glare in high-contrast scenes. I’d say it’s not quite as bad as what you get with most Fresnel optics, but it’s still quite noticeable. While Fresnel lenses tend to create ‘god rays’ which emanate from specific objects in the scene, Beyond’s pancake optics create glare that appears less directly attached to what’s in the scene.
Beyond the issues noted so far, other visual factors are all top notch: no pupil swim, geometric distortion, or chromatic aberration (again, this is all highly dependent on how well your facepad fits, so if you see much of the above, you might want to look into the fit of the headset).
Glassbreakers: Champions of Moss, the 1v1 battler from Polyarc that launched into early access late last month, is releasing a new character today which aims to “hook” players into returning for more tactical rat-bashing action.
Revealed last week, new champion ‘Mojo’ is now available for players on Quest, which for now is the only platform with an open beta.
Polyarc says players can wishlist Glassbreakers now on Steam, with an open beta slated to arrive sometime in October.
Mojo (aka ‘MJ22’) brings a few new ranged abilities to the 1v1 real-time battler, such as the ‘Free Hugs’ ability which lets Mojo launch a hook attack to pull the opposing squad’s Champions in towards them.
Leveling up, the hook not only launches farther, but it also applies a slowing effect to enemies. Besides grabbing enemies, Mojo’s hook can also snag high-priority targets that the other squad is trying to protect.
The studio announced it’s also hosting a special ‘Quest for the Chest’ event from now until October 5th, which is boosting the speed at which players level up their weekly chests. What’s more, for the next two weeks the top-tier chests will contain “extra special rewards including a chance at never-before-seen masks and emblems for players and their squads,” Polyarc says.
The game is slated to make the transition from App Lab game to the Quest Store proper early next year.
To follow along with progress, take a look at the game’s Trello board to see how events are shaping up, and how bug fixes are coming along.
German energy giant RWE has put a new airborne wind test facility in Ireland to work for the first time, as it explores alternative forms of green electricity.
The experimental technology was developed by Dutch startup Kitepower. It connects a large kite to a generator with an ultra-strong rope, generating electricity as the kite goes up in altitude.
“Kitepower, as the name suggests, uses a large kite structure with a hybrid inflatable and fixed fibreglass skeleton to hold the kite open. It has a [wing area] of 60 square metres and weighs only 80kg, including the Kite Control and sensor unit,” explains Johannes Peschel, the company’s CEO.
The Kite Control Unit (KCU) is attached to the tether and controls the direction the kite flies. The Dyneema tether (an ultra-strong rope which is stronger than a steel wire of the same dimension, but has less than one-tenth of its weight) is attached to a Ground Station, housed in a conventional seven-metre container.
Electricity is produced when the kite is flown in a cross-wind, figure-of-eight pattern, achieving a high pulling force. This draws the tether from a winch in the ground station. Once the maximum tether length is reached, the kite is reeled in and the process starts anew. Normally, these two operations take just 100 seconds — 80 seconds for reeling out and 20 seconds for reeling in.
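Those cycle timings make it easy to estimate what a pumping-cycle system nets per pass. The power levels below are illustrative assumptions rather than Kitepower’s published ratings:

```python
# Back-of-the-envelope for one pumping cycle, using the article's
# 80-second reel-out / 20-second reel-in split.
reel_out_s, reel_in_s = 80.0, 20.0
generating_kw = 100.0  # assumed average output while the kite reels out
winching_kw = 20.0     # assumed consumption while winching the kite back in

net_kj = generating_kw * reel_out_s - winching_kw * reel_in_s
net_kwh = net_kj / 3600.0
avg_kw = net_kj / (reel_out_s + reel_in_s)
print(f"{net_kwh:.2f} kWh per cycle, {avg_kw:.0f} kW average")
# => 2.11 kWh per cycle, 76 kW average across the full 100 seconds
```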