NVIDIA


Nvidia’s new app doesn’t require you to log in to update your GPU driver

Some updates are good, actually —

Removing little-used features also improved responsiveness and shrank the size.

Nvidia app promo image

Nvidia

Nvidia has announced a public beta of a new app for Windows, one that does a few useful things and one big thing.

The new app combines the functions of three apps you’d previously have to hunt through—the Nvidia Control Panel, GeForce Experience, and RTX Experience—into one. Setting display preferences for a game, and seeing exactly how each notch between “Performance” and “Quality” will affect its settings, is far easier and more visible inside the new app. The old-fashioned control panel is still there if you right-click the Nvidia app’s notification area icon. Installing the new beta essentially upgrades and replaces the Experience and Control Panel apps, though both are still available online.

But perhaps most importantly, Nvidia’s new app allows you to update the driver for your graphics card, the one you paid for, without having to log in to an Nvidia account. I tested it, it worked, and I don’t know why I was surprised, but I’ve been conditioned that way. Given that driver updates are something people often do with new systems and the prior tendencies of Nvidia’s apps to log you out, this is a boon that will pay small but notable cumulative dividends for some time to come.

Proof that you can, miracle of miracles, download an Nvidia driver update in Nvidia’s new app without having to sign in.

Game performance tools are much easier to use, or at least understand, in the new Nvidia app. It depends on the game, but you get a slider to move between “Performance” and “Quality.” Some games don’t offer more than one or two notches to use, like Monster Train or Against the Storm. Some, like Hitman 3 or Deep Rock Galactic, offer so many notches that you could make a day out of adjusting and testing. Whenever you move the slider, you can see exactly what changed in a kind of diff display.

Changing the settings in Elden Ring with the more granular controls available in Nvidia’s new beta app.

Nvidia/Kevin Purdy
If you use Nvidia’s in-game overlay, triggered with Alt+Z, you can test that out, see its new look and feel, set up performance metrics, and change its settings from Nvidia’s beta app. Driver updates now come with more information about what changed, rather than sending you to a website of release notes. On cards with AI-powered offerings, you’ll also get tools for Nvidia Freestyle, RTX Dynamic Vibrance, RTX HDR, and other such nit-picky options.

Not everything available in the prior apps is making it into this new all-in-one app, however. Nvidia notes that GPU overclocking and driver rollback are on the way. And the company says it has decided to “discontinue a few features that were underutilized,” including the ability to broadcast to Twitch and YouTube, share video or stills to Facebook and YouTube, and make Photo 360 and Stereo captures. Noting that “good alternatives exist,” Nvidia says culling these things halves the new app’s install time, improves responsiveness by 50 percent, and leaves an app that takes up 17 percent less disk space.


US funds $5B chip effort after lagging on semiconductor innovation

Now hiring? —

US had failed to fund the “science half” of CHIPS and Science Act, critic said.

US President Joe Biden speaks before signing the CHIPS and Science Act of 2022.

The Biden administration on Friday announced investments totaling more than $5 billion in semiconductor research and development, intended to re-establish the US as a global leader in manufacturing the “next generation of semiconductor technologies.”

Through sizeable investments, the US will “advance US leadership in semiconductor R&D, cut down on the time and cost of commercializing new technologies, bolster US national security, and connect and support workers in securing good semiconductor jobs,” a White House press release said.

Currently, the US produces “less than 10 percent” of the global chips supply and “none of the most advanced chips,” the White House said. But investing in programs like the National Semiconductor Technology Center (NSTC)—considered the “centerpiece” of the CHIPS and Science Act’s four R&D programs—and training a talented workforce could significantly increase US production of semiconductors that the Biden administration described as the “backbone of the modern economy.”

The White House projected that the NSTC’s workforce activities would launch in the summer of 2024. The Center’s prime directive will be developing new semiconductor technologies by “supporting design, prototyping, and piloting and through ensuring innovators have access to critical capabilities.”

Moving forward, the NSTC will operate as a public-private consortium, involving both government and private sector institutions, the White House confirmed. It will be run by a recently established nonprofit called the National Center for the Advancement of Semiconductor Technology (Natcast), which will coordinate with the secretaries of Commerce, Defense, and Energy, as well as the National Science Foundation’s director. Any additional stakeholders can provide input on the NSTC’s goals by joining the NSTC Community of Interest at no cost.

The National Institute of Standards and Technology (NIST) has explained why achieving the NSTC’s mission to develop cutting-edge semiconductor technology in the US will not be easy:

The smallest dimensions of leading-edge semiconductor devices have reached the atomic scale and the complexity of the circuit architecture is increasing exponentially with the use of three-dimensional structures, the incorporation of new materials, and improvements in the thousands of process steps needed to make advanced chips. Into the future, as new applications demand higher-performance semiconductors, their design and production will become even more complex. This complexity makes it increasingly difficult and costly to implement innovations because of the dependencies between design and manufacturing, between manufacturing steps, and between front-end and back-end processes.

The complexity of keeping up with semiconductor tech is why it’s critical for the US to create clear pathways for skilled workers to break into this burgeoning industry. The Biden administration said it plans to invest “at least hundreds of millions of dollars in the NSTC’s workforce efforts,” creating a Workforce Center of Excellence with locations throughout the US and piloting new training programs, including initiatives engaging underserved communities. The Workforce Center will start by surveying best practices in semiconductor education programs, then establish a baseline program to attract workers seeking dependable paths to break into the industry.

Last year, the Semiconductor Industry Association (SIA) released a study showing that the US was not adequately preparing a highly skilled workforce. SIA estimated that about 67,000 of the projected new jobs, or 58 percent, may go unfilled at the current trajectory.

A skilled workforce is just part of the equation, though. The US also needs facilities where workers can experiment with new technologies without breaking the bank. To that end, the Department of Commerce announced it would be investing “at least $200 million” in a first-of-its-kind CHIPS Manufacturing USA Institute. That institute will “allow innovators to replicate and experiment with physical manufacturing processes at low cost.”

Other Commerce Department investments announced include “up to $300 million” for advanced packaging R&D necessary for discovering new applications for semiconductor technologies and over $100 million in funding for dozens of projects to help inventors “more easily scale innovations into commercial products.”

A Commerce Department spokesperson told Ars that “the location of the NSTC headquarters has not yet been determined” but will “directly support the NSTC research strategy and give engineers, academics, researchers, engineers at startups, small and large companies, and workforce developers the capabilities they need to innovate.” In 2024, NSTC’s efforts to kick off research appear modest, with the center expecting to prioritize engaging community members and stakeholders, launching workforce programs, and identifying early start research programs.

So far, Biden’s efforts to ramp up semiconductor manufacturing in the US have not gone smoothly. Earlier this year, TSMC predicted further delays at chips plants under construction in Arizona and confirmed that the second plant would not be able to manufacture the most advanced chips, as previously expected.

That news followed criticism from private entities last year. In November, Nvidia CEO Jensen Huang predicted that the US was “somewhere between a decade and two decades away” from semiconductor supply chain independence. The US Chamber of Commerce said last August that the US remained so far behind because it had failed to prioritize funding for the “science half” of the CHIPS and Science Act.

In 2024, the Biden administration appears to be attempting to finally start funding a promised $11 billion total in research and development efforts. Once NSTC kicks off research, the pressure will be on to chase the Center’s highest ambition of turning the US into a consistent birthplace of life-changing semiconductor technologies once again.



Nvidia’s G-Sync Pulsar is anti-blur monitor tech aimed squarely at your eyeball

What will they sync of next? —

Branded monitors can sync pixels to backlighting, refresh rate, and GPU frames.

None of this would be necessary if it weren’t for your inferior eyes, which retain the colors of pixels for fractions of a second longer than is optimal for shooting dudes.

Nvidia

Gaming hardware has done a lot in the last decade to push ever more pixels very quickly across screens. But one piece of hardware has always led to complications: the eyeball. Nvidia is targeting that last part of the visual quality chain with its newest G-Sync offering, Pulsar.

Motion blur, when it’s not caused by slow LCD pixel transitions, is caused by “the persistence of an image on the retina, as our eyes track movement on-screen,” as Nvidia explains it. Prior improvements in display tech, like variable refresh rate, Ultra Low Motion Blur, and Variable Overdrive, have helped with the hardware causes of this deficiency. The eye’s persistence of vision, however, can only be addressed by strobing a monitor’s backlight.

You can’t just set that light blinking, however. Variable strobing frequencies cause flicker, and timing the strobe to the monitor refresh rate—itself also tied to the graphics card output—was tricky. Nvidia says it has solved that issue with its G-Sync Pulsar tech, employing “a novel algorithm” in “synergizing” its variable refresh smoothing and monitor pulsing. The result is that pixels transition from one color to another at a rate that reduces motion blur and pixel ghosting.

Nvidia also claims that Pulsar can help with the visual discomfort caused by some strobing effects, as the feature “intelligently controls the pulse’s brightness and duration.”
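Nvidia hasn’t published how Pulsar decides when and how hard to fire the backlight, but the constraint it has to satisfy is easy to state: emit one pulse per refresh, after the panel’s pixels have settled, and compensate brightness as the refresh interval stretches or shrinks so average luminance stays constant. Here is a minimal illustrative sketch; every name and constant is invented, and this is not Nvidia’s algorithm.

```python
# Hypothetical sketch of strobe timing under variable refresh.
# Not Nvidia's algorithm; all names and constants are invented.

def schedule_strobe(frame_interval_s: float,
                    pulse_width_s: float = 0.0005,
                    target_luminance: float = 1.0):
    """Plan one backlight pulse for one refresh interval.

    A fixed pulse width keeps motion clarity constant, so the duty
    cycle shrinks at lower refresh rates and instantaneous brightness
    must rise to hold average luminance steady (no visible dimming).
    """
    duty_cycle = pulse_width_s / frame_interval_s
    # Fire near the end of the interval, after the panel's pixels have
    # finished transitioning to the new frame's colors.
    pulse_delay_s = frame_interval_s - pulse_width_s
    brightness = target_luminance / duty_cycle  # shorter duty -> brighter pulse
    return pulse_delay_s, duty_cycle, brightness

for hz in (360, 200, 100):
    delay, duty, brightness = schedule_strobe(1 / hz)
    print(f"{hz:>3} Hz: fire at {delay * 1e3:5.2f} ms, "
          f"duty {duty:6.1%}, drive brightness x{brightness:4.1f}")
```

The real hardware presumably handles much messier cases, such as frame-time spikes and panel response curves, which is where the “novel algorithm” claim comes in.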

  • The featureless axis labels make my brain hurt, but I believe this chart suggests that G-Sync Pulsar does the work of timing out exactly when to refresh screen pixels at 360 Hz.

    Nvidia

  • The same, but this time at 200 Hz.

    Nvidia

  • And again, this time at 100 Hz. Rapidly changing pixels are weird, huh?

    Nvidia

To accommodate this “radical rethinking of display technology,” a monitor will need Nvidia’s own chips built in. None exist yet, but the Asus ROG Swift PG27 Series G-Sync monitor, with its 360 Hz refresh rate, is coming “later this year.” No price for that monitor is available yet.

It’s hard to verify how this looks and feels without hands-on time. PC Gamer checked out Pulsar at CES this week and verified that, yes, it’s easier to read the name of the guy you’re going to shoot while you’re strafing left and right at an incredibly high refresh rate. Nvidia also provided a video, captured at 1,000 frames per second, for those curious.

Nvidia’s demonstration of G-Sync Pulsar, using Counter-Strike 2 filmed at 1000 fps, on a 360 Hz monitor, with Pulsar on and off, played back at 1/24 speed.

Pulsar signals Nvidia’s desire to once again create an exclusive G-Sync monitor feature designed to encourage a wraparound Nvidia presence on the modern gaming PC. It’s a move that has sometimes backfired on the firm before. The company relented to market pressure in 2019 and enabled G-Sync on various variable refresh rate monitors powered by VESA’s DisplayPort Adaptive-Sync tech (more commonly known by its use in AMD’s FreeSync monitors). G-Sync monitors typically sold for hundreds of dollars more than their FreeSync counterparts, and while they technically had some exclusive additional features, the higher price points likely hurt Nvidia’s appeal when a gamer was looking at the full cost of a new or upgraded system.

There will not be any such cross-standard compatibility with G-Sync Pulsar, which will be offered only on monitors that carry a G-Sync Ultimate badge and, beyond that, specifically support Pulsar. There’s always a chance that another group will develop its own synced-strobe technology that could work across GPUs, but nothing has happened as of yet.

In related frame-rate news, Nvidia also announced this week that its GeForce Now game streaming service will offer G-Sync capabilities to those on Ultimate or Priority memberships and playing on capable screens. Nvidia claims that, paired with its Reflex offering on GeForce Now, the two “make cloud gaming experiences nearly indistinguishable from local ones.” I’ll emphasize here that those are Nvidia’s words, not the author’s.



They’re not cheap, but Nvidia’s new Super GPUs are a step in the right direction

supersize me —

RTX 4080, 4070 Ti, and 4070 Super arrive with price cuts and/or spec bumps.

Nvidia’s latest GPUs, apparently dropping out of hyperspace.

Nvidia

  • Nvidia’s latest GPUs, apparently dropping out of hyperspace.

    Nvidia

  • The RTX 4080 Super.

    Nvidia

  • Comparing it to the last couple of xx80 GPUs (but not the original 4080).

    Nvidia

  • The 4070 Ti Super.

    Nvidia

  • Comparing to past xx70 Ti generations.

    Nvidia

  • The 4070 Super.

    Nvidia

  • Compared to past xx70 generations.

    Nvidia

If there’s been one consistent criticism of Nvidia’s RTX 40-series graphics cards, it’s been pricing. All of Nvidia’s product tiers have seen their prices creep up over the last few years, but cards like the 4090 raised prices to new heights, while lower-end models like the 4060 and 4060 Ti kept pricing the same but didn’t improve performance much.

Today, Nvidia is sprucing up its 4070 and 4080 tiers with a mid-generation “Super” refresh that at least partially addresses some of these pricing problems. Like older Super GPUs, the 4070 Super, 4070 Ti Super, and 4080 Super use the same architecture and support all the same features as their non-Super versions, but with bumped specs and tweaked prices that might make them more appealing to people who skipped the originals.

The 4070 Super will launch first, on January 17, for $599. The $799 RTX 4070 Ti Super launches on January 24, and the $999 4080 Super follows on January 31.

|                  | RTX 4090 | RTX 4080 | RTX 4080 Super | RTX 4070 Ti | RTX 4070 Ti Super | RTX 4070 | RTX 4070 Super |
|------------------|----------|----------|----------------|-------------|-------------------|----------|----------------|
| CUDA Cores       | 16,384 | 9,728 | 10,240 | 7,680 | 8,448 | 5,888 | 7,168 |
| Boost Clock      | 2,520 MHz | 2,505 MHz | 2,550 MHz | 2,610 MHz | 2,610 MHz | 2,475 MHz | 2,475 MHz |
| Memory Bus Width | 384-bit | 256-bit | 256-bit | 192-bit | 256-bit | 192-bit | 192-bit |
| Memory Clock     | 1,313 MHz | 1,400 MHz | 1,437 MHz | 1,313 MHz | 1,313 MHz | 1,313 MHz | 1,313 MHz |
| Memory Size      | 24GB GDDR6X | 16GB GDDR6X | 16GB GDDR6X | 12GB GDDR6X | 16GB GDDR6X | 12GB GDDR6X | 12GB GDDR6X |
| TGP              | 450 W | 320 W | 320 W | 285 W | 285 W | 200 W | 220 W |
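The bus widths and memory clocks in the table are what set peak memory bandwidth, which is where the 4070 Ti Super’s move from a 192-bit to a 256-bit bus matters most. As a rough sanity check, assuming the listed figure is the base memory clock and GDDR6X’s effective per-pin data rate is 16 times that (an assumption that matches these cards’ published numbers):

```python
# Rough sanity check of peak memory bandwidth from the spec table.
# Assumes the listed "Memory Clock" is the base clock and GDDR6X's
# effective data rate is 16x that (e.g., 1,313 MHz -> ~21 Gbps/pin).

def bandwidth_gb_s(bus_width_bits: int, mem_clock_mhz: float) -> float:
    """Peak bandwidth in GB/s: per-pin data rate times pins, bits to bytes."""
    gbps_per_pin = mem_clock_mhz * 16 / 1000
    return bus_width_bits * gbps_per_pin / 8

cards = {
    "RTX 4090":          (384, 1313),  # ~1,008 GB/s
    "RTX 4080 Super":    (256, 1437),  # ~736 GB/s
    "RTX 4070 Ti":       (192, 1313),  # ~504 GB/s
    "RTX 4070 Ti Super": (256, 1313),  # ~672 GB/s, a ~33% jump over the 4070 Ti
}
for name, (bus, clock) in cards.items():
    print(f"{name:<18} {bandwidth_gb_s(bus, clock):7,.0f} GB/s")
```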

Of the three cards, the 4080 Super probably brings the least significant spec bump, with a handful of extra CUDA cores and small clock speed increases but the same amount of memory and the same 256-bit memory interface. Its main innovation is its price, which at $999 is $200 lower than the original 4080’s $1,199 launch price. This doesn’t make it a bargain—we’re still talking about a $1,000 graphics card—but the 4080 Super feels like a more proportionate step down from the 4090 and a good competitor to AMD’s flagship Radeon RX 7900 XTX.

The 4070 Ti Super stays at the same $799 price as the 4070 Ti (which, if you’ll recall, was nearly launched at $899 as the “RTX 4080 12GB“) but addresses two major gripes with the original by stepping up to a 256-bit memory interface and 16GB of RAM. It also picks up some extra CUDA cores, while staying within the same power envelope as the original 4070 Ti. These changes should help it keep up with modern 4K games, where the smaller pool of memory and narrower memory interface of the original 4070 Ti could sometimes be a drag on performance.

Most of the RTX 40-series lineup. The original 4080 and 4070 Ti are going away, while the original 4070 now slots in at $549. It’s not shown here, but Nvidia confirmed that the 16GB 4060 Ti is also sticking around at $449.

Nvidia

Finally, we get to the RTX 4070 Super, which also keeps the 4070’s $599 price tag but sees a substantial uptick in processing hardware, from 5,888 CUDA cores to 7,168 (the power envelope also increases, from 200 W to 220 W). The memory system remains unchanged. The original 4070 was already a decent baseline for entry-level 4K gaming and very good 1440p gaming, and the 4070 Super should make 60 FPS 4K attainable in even more games.

Nvidia says that the original 4070 Ti and 4080 will be phased out. The original 4070 will stick around at a new $549 price, $50 less than before, but not particularly appealing compared to the $599 4070 Super. The 4090, 4060, and the 8GB and 16GB versions of the 4060 Ti all remain available for the same prices as before.

  • The Super cards’ high-level average performance compared to some past generations of GPU, without DLSS 3 frame generation numbers muddying the waters. The 4070 should be a bit faster than an RTX 3090 most of the time.

    Nvidia

  • Some RTX 4080 performance comparisons. Note that the games at the top all have DLSS 3 frame generation enabled for the 4080 Super, while the older cards don’t support it.

    Nvidia

  • The 4070 Ti Super vs the 3070 Ti and 2070 Super.

    Nvidia

  • The 4070 Super versus the 3070 and the 2070.

    Nvidia

Nvidia’s performance comparisons focus mostly on older-generation cards rather than the non-Super versions, and per usual for 40-series GPU announcements, they lean heavily on performance numbers that are inflated by DLSS 3 frame generation. In terms of pure rendering performance, Nvidia says the 4070 Super should outperform an RTX 3090—impressive, given that the original 4070 was closer to an RTX 3080. The RTX 4080 Super is said to be roughly twice as fast as an RTX 3080, and Nvidia says the RTX 4070 Ti Super will be roughly 2.5 times faster than a 3070 Ti.

Though all three of these cards provide substantially more value than their non-Super predecessors at the same prices, the fact remains that prices have still gone up compared to past generations. Nvidia last released a Super refresh during the RTX 20-series back in 2019; the RTX 2080 Super went for $699 and the 2070 Super for $499. But the 4080 Super, 4070 Ti Super, and 4070 Super will give you more for your money than you could get before, which is at least a move in the right direction.



2023 was the year that GPUs stood still

Andrew Cunningham

In many ways, 2023 was a long-awaited return to normalcy for people who build their own gaming and/or workstation PCs. For the entire year, most mainstream components were available at or a little under their official retail prices, making it possible to build all kinds of PCs at relatively reasonable prices without worrying about restocks or waiting for discounts. It was a welcome continuation of a GPU trend that started in 2022: Nvidia, AMD, or Intel could release a new GPU, and you could consistently buy that GPU for roughly what it was supposed to cost.

That’s where we get into how frustrating 2023 was for GPU buyers, though. Cards like the GeForce RTX 4090 and Radeon RX 7900 series launched in late 2022 and boosted performance beyond what any last-generation cards could achieve. But 2023’s midrange GPU launches were less ambitious. Not only did they offer the performance of a last-generation GPU, but most of them did it for around the same price as the last-gen GPUs whose performance they matched.

The midrange runs in place

Not every midrange GPU launch will get us a GTX 1060—a card that was roughly 50 percent faster than its immediate predecessor and that beat the previous-generation GTX 980 despite costing just a bit over half as much money. But even if your expectations were low, this year’s midrange GPU launches have been underwhelming.

The worst was probably the GeForce RTX 4060 Ti, which sometimes struggled to beat the card it replaced at around the same price. The 16GB version of the card was particularly maligned since it was $100 more expensive but was only faster than the 8GB version in a handful of games.

The regular RTX 4060 was slightly better news, thanks partly to a $30 price drop from where the RTX 3060 started. The performance gains were small, and a drop from 12GB to 8GB of RAM isn’t the direction we prefer to see things move, but it was still a slightly faster and more efficient card at around the same price. AMD’s Radeon RX 7600, RX 7700 XT, and RX 7800 XT all belong in this same broad category—some improvements, but generally similar performance to previous-generation parts at similar or slightly lower prices. Not an exciting leap for people with aging GPUs who waited out the GPU shortage to get an upgrade.

The best midrange card of the generation—and at $600, we’re definitely stretching the definition of “midrange”—might be the GeForce RTX 4070, which can generally match or slightly beat the RTX 3080 while using much less power and costing $100 less than the RTX 3080’s suggested retail price. That seems like a solid deal once you consider that the RTX 3080 was essentially unavailable at its suggested retail price for most of its life span. But $600 is still a $100 increase from the 2070 and a $220 increase from the 1070, making it tougher to swallow.

In all, 2023 wasn’t the worst time to buy a $300 GPU; that dubious honor belongs to the depths of 2021, when you’d be lucky to snag a GTX 1650 for that price. But “consistently available, basically competent GPUs” are harder to be thankful for the further we get from the GPU shortage.

Marketing gets more misleading

1.7 times faster than the last-gen GPU? Sure, under exactly the right conditions in specific games.

Nvidia

If you just looked at Nvidia’s early performance claims for each of these GPUs, you might think that the RTX 40-series was an exciting jump forward.

But these numbers were only possible in games that supported these GPUs’ newest software gimmick, DLSS Frame Generation (FG). The original DLSS and DLSS 2 improve performance by upsampling the images generated by your GPU, generating interpolated pixels that turn lower-res images into higher-res ones without the blurriness and loss of image quality you’d get from simple upscaling. DLSS FG generates entire frames in between the ones being rendered by your GPU, theoretically providing big frame rate boosts without requiring a more powerful GPU.

The technology is impressive when it works, and it’s been successful enough to spawn hardware-agnostic imitators like the AMD-backed FSR 3 and an alternate implementation from Intel that’s still in early stages. But it has notable limitations—mainly, it needs a reasonably high base frame rate to have enough data to generate convincing extra frames, something that these midrange cards may struggle to do. Even when performance is good, it can introduce weird visual artifacts or lose fine detail. The technology isn’t available in all games. And DLSS FG also adds a bit of latency, though this can be offset with latency-reducing technologies like Nvidia Reflex.
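Those limitations are easy to put rough numbers on. Because frame generation interpolates, it has to hold each rendered frame back until the next one arrives, and input is still sampled only at the base rate. Below is a simplified back-of-the-envelope model, not Nvidia’s actual pipeline; it ignores render queuing, Reflex, and display scanout.

```python
# Simplified model of DLSS Frame Generation's costs and benefits.
# Illustrative only; ignores render queuing, Reflex, and scanout.

def with_frame_generation(base_fps: float):
    """Return (displayed_fps, added_latency_ms) for a given base frame rate."""
    base_frame_ms = 1000 / base_fps
    displayed_fps = base_fps * 2  # one interpolated frame per rendered frame
    # The interpolated frame needs the *next* rendered frame to exist first,
    # so presentation is delayed by roughly one base frame time.
    added_latency_ms = base_frame_ms
    return displayed_fps, added_latency_ms

for base_fps in (30, 60, 120):
    shown, extra = with_frame_generation(base_fps)
    print(f"{base_fps:>3} fps base -> {shown:>3.0f} fps shown, "
          f"~{extra:4.1f} ms extra latency")
# At a 30 fps base you see 60 fps, but input still samples 30 times a
# second and latency grows by ~33 ms: smooth-looking, not smooth-feeling.
```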

As another tool in the performance-enhancing toolbox, DLSS FG is nice to have. But to put it front-and-center in comparisons with previous-generation graphics cards is, at best, painting an overly rosy picture of what upgraders can actually expect.



MagiScan App Lets Users Create 3D Models With Their Smartphone

As if our smartphones weren’t already incredible enough, startup company AR-Generation is using them to bridge the gap between the real and virtual worlds. With their new cutting-edge app, you can create 3D models with only your smartphone and use them for any AR or metaverse application.

Introducing MagiScan

Meet MagiScan, an AI-powered 3D scanner app that produces high-quality 3D models for any AR or metaverse application. Developed by AR-Generation, a member of the NVIDIA Inception program, MagiScan is the first and only 3D scanner in NVIDIA Omniverse, a real-time 3D graphics collaboration platform.

The MagiScan app, available on both iOS and Android devices, allows users to capture an image of any object using their smartphone camera and quickly generate a detailed 3D model of it. AR-Generation co-founder and CEO Kiryl Sidarchuk estimates that this process is up to 100 times less expensive and 10 times faster than manual 3D modeling, making it an accessible and user-friendly option for creators. With MagiScan, creators can easily refine their work and increase accessibility to AR technology.

3D scanning objects with MagiScan

While 3D scanning with smartphones is not new technology, it has significantly improved over the years. In 2015, researchers at Carnegie Mellon University developed a tool for measuring real objects in 3D space using “average” cellphone cameras. They developed their technology to perform accurate measurements so it could be helpful in self-driving cars and virtual shopping for eyeglass frames.

A similar technology was also created in 2021, called the PhotoCatch app, which uses the then-new Apple Object Capture photogrammetry technology.

How MagiScan Works

MagiScan is incredibly easy to use. Simply open the app, scan an object from all angles, and wait a few seconds for the app to generate a 3D model. Once done, you can export your 3D model in various formats, including STL, which lets you 3D print your model.
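MagiScan does the conversion in-app, but the export step is easy to picture with the open-source trimesh library. The following is a generic post-processing sketch under that assumption, not MagiScan’s actual pipeline, and the file names are placeholders.

```python
# Generic sketch: convert a scanned mesh to STL for 3D printing.
# Not MagiScan's pipeline; file names are placeholders.
# Requires `pip install trimesh`.
import trimesh

# Load a scan exported in a common interchange format.
mesh = trimesh.load("scan.glb", force="mesh")

# STL stores only geometry, so the scan's color and texture are dropped.
print(f"{len(mesh.vertices)} vertices; watertight: {mesh.is_watertight}")
mesh.export("scan.stl")
```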

In addition to personal use, brands can also use MagiScan for their online platforms. Just enable “Connect MagiScan for Business,” then scan your products and add their 3D models to your website.

Exporting 3D Models Directly to Omniverse

AR-Generation also created an extension allowing MagiScan users to export their 3D models directly to the NVIDIA Omniverse. “We customized our app to allow export of 3D models based on real-world objects directly to Omniverse, enabling users to showcase the models in AR and integrate them into any metaverse or game,” Sidarchuk said.

Magiscan to Omniverse

This extension is made possible by OpenUSD, or Universal Scene Description, open-source software originally developed by Pixar Animation Studios for simulating, describing, composing, and collaborating in the 3D realm. The OpenUSD compatibility is Sidarchuk’s favorite Omniverse feature, and he believes that OpenUSD is the “format of the future.”
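For a concrete sense of what OpenUSD looks like, here is a minimal example using the official pxr Python bindings to reference a scanned asset into a new stage. The prim paths and file names are illustrative, not anything MagiScan actually generates.

```python
# Minimal OpenUSD example: reference a scanned asset into a new stage.
# Illustrative only; paths are placeholders. Requires `pip install usd-core`.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("scene.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.y)

# Define a transformable prim and reference the exported scan under it.
scan = stage.DefinePrim("/World/Scan", "Xform")
scan.GetReferences().AddReference("./magiscan_export.usd")

stage.GetRootLayer().Save()
print(stage.GetRootLayer().ExportToString())  # human-readable .usda text
```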

The company chose to build an extension for Omniverse because the platform, according to Sidarchuk,  “provides a convenient environment that integrates all the tools for working with 3D and generative AI.”

MagiScan and Augmented Reality’s Impact on E-Commerce

The impact of 3D models and AR is not limited to the gaming and metaverse realms. E-commerce businesses can also benefit from the rapid advancements in this technology.

About 60% of online shoppers consider high-quality images a critical factor in their purchasing decisions. To keep up with the competition, brands must provide more than just photos with a white background. They can also display 3D models of their products on their websites or online marketplaces to provide a more immersive browsing experience.

Through MagiScan, AR-Generation believes that conversion rates can increase up to 94%, while returns can drop up to 58%. Crisp and accurate 3D models allow consumers to visualize a product in real life, helping them make more informed purchasing decisions. This may be the same reason why Carnegie Mellon University researchers developed their 3D scanner to aid people in buying eyeglass frames online.

The Growing Significance of AR in Daily Life

Sidarchuk believes that AR will become an integral part of everyday life. And it’s not hard to see why. AR has grown in popularity over the years and is now widely used in various industries, from gaming to shopping to employee training. With AR, individuals and corporations can experience immersive virtual environments in a safe and secure way.

Thanks to advancements in technology, high-quality 3D experiences are now possible on smartphones. This means that AR and the Omniverse have the potential to impact even the most mundane activities of our daily lives. With this in mind, it’s clear that AR technology is here to stay.



NVIDIA CloudXR 4.0 Enables Developers to Customize the SDK and Scale XR Deployment

In January, NVIDIA announced new products and innovations at CES 2023. At this year’s NVIDIA GTC, “the developer conference for the era of AI and the metaverse,” NVIDIA announced the latest release of CloudXR. Businesses can definitely look forward to boosting their AR and VR capabilities with the new NVIDIA CloudXR developments, enhanced to bring more flexibility and scalability for XR deployments.

The latest release augurs well for developers looking to improve the customer experience of their apps, whether delivered from the cloud, through 5G Mobile Edge Computing, or over corporate networks.

In CloudXR 4.0, new APIs allow flexibility in the development of client apps, as well as in using various distribution points to deliver XR experiences. Scalability across platforms is another plus, as broader options for the CloudXR interface are likewise made available. The new NVIDIA CloudXR also makes it possible for developers to create custom user interfaces through the Unity plug-in architecture.

Among the benefits that developers can enjoy with the new NVIDIA CloudXR 4.0 are:

  • No Need for OpenVR or OpenXR Runtime – CloudXR Server API lets developers build CloudXR directly into their applications, although OpenVR API via the SteamVR runtime continues to be fully supported by the new version.
  • More Deployment Options With the Use of the Unity Plug-in – Developers can build on the Unity engine and create a full-featured CloudXR Client using Unity APIs.

NVIDIA CloudXR 4.0 - Unity Plug-in

  • Reduced Lag and Delay Problems Through L4S Technology – Lag in interactive cloud-based video streaming is reduced, as the new NVIDIA CloudXR release implements the advanced 5G packet-delivery optimization L4S behind a convenient “togglable” feature.

More Immersive Experiences With the New NVIDIA CloudXR Developments

The new NVIDIA CloudXR developments now make it possible to provide more immersive high-fidelity XR experiences to the users. Developers and businesses can offer high-performance XR streaming to their customers through the most accessible platforms and devices. They can now customize their applications to give the kind of XR experiences their customers are looking for.

“At VMware we’re using NVIDIA CloudXR to enable our customers to stream high-fidelity XR experiences from platforms, like VMware Horizon, to standalone VR devices running VMware Workspace ONE XR Hub,” said VMware Director of Product Management, Matt Coppinger, in a press release shared with ARPost. “This gives our customers the power of a graphics workstation along with the mobility of a standalone VR device.” 

With CloudXR 4.0, developers are able to improve integrations and consequently the overall performance of their apps and solutions.

NVIDIA also revealed strategic partnerships with tech companies like Ericsson and Deutsche Telekom to ensure that integrations, specifically of the L4S, into the new NVIDIA CloudXR developments, are implemented seamlessly.

These alliances also assure the availability of the high-bandwidth, low-latency networks needed for optimal streaming performance. Dominik Schnieders, head of edge computing at Deutsche Telekom, reiterated the company’s belief that CloudXR with L4S optimization is a critical component of streaming XR to both enterprises and consumers on the public 5G network.

Most Requested Features on the New NVIDIA CloudXR Developments

The new version of CloudXR puts together more “in-demand” features, among them generic controller support, callback-based logging, and flexible stream creation. This demonstrates responsiveness to the needs of the XR developer community and is perceived as a significant improvement in the distribution of enterprise XR software.



CES 2023 Highlights Featuring News and Innovations From Canon, MICLEDI, and NVIDIA

CES is considered the world’s tech event, showcasing groundbreaking technologies and innovations from some of the world’s biggest brands, developers, manufacturers, and suppliers of consumer technology. At CES 2023, attendees saw the unveiling of the latest developments from over 3,200 exhibitors, including technology companies Canon, MICLEDI, and NVIDIA.

Canon Immersive Movie Experience and Immersive Calling Experience

Canon USA has partnered with filmmaker and director M. Night Shyamalan (The Sixth Sense, The Village, Signs) to create an immersive movie experience for CES 2023 attendees. Tied to Shyamalan’s upcoming film Knock at the Cabin (in theaters February 3), Canon unveiled Kokomo, virtual reality software that leverages VR to give users an immersive calling experience.

Canon Kokomo - CES 2023
Kokomo

With Kokomo, users can now connect with their friends and family as if they’re there in person by using a compatible VR headset and smartphone. In a 3D call, Kokomo will emulate a photo-real environment and mirror the physical appearance of the user. CES 2023 participants were able to witness Kokomo in action at the Canon booth, where they were able to have a one-on-one Kokomo conversation with select characters from the movie Knock at the Cabin.

Aside from Kokomo, Canon also unveiled its Free Viewpoint Video System, which creates point-cloud-based 3D models for more immersive viewing experiences in larger areas like arenas and stadiums. At CES 2023, attendees were able to experience the Free Viewpoint System, which allowed them to watch an action scene from Knock at the Cabin from multiple viewpoints.

CES 2023 attendees also had the opportunity to see Canon’s mixed reality system MREAL in action, by experiencing a scene from Knock at the Cabin as if they were a character in the movie.

Canon MREAL X1 headset
MREAL X1

MICLEDI Demonstrates New Red µLEDs at CES 2023

MICLEDI Microdisplays, a technology company developing microLED displays for the augmented reality market, also showcased its advancements in microLED display tech for AR glasses at CES 2023.

At the event, the company demonstrated its new red microLEDs on AlInGaP starting material. This development is in line with MICLEDI’s aim to create high-performance individual color-performing microLEDs that can be combined with the company’s full-color microLED display module.

Through MICLEDI’s innovations in microLED technology, users can begin to experience clearer and more precise digital images via AR glasses that are more portable and lightweight. The red AlInGaP microLEDs, along with MICLEDI’s three-panel full-color microLED display module, are poised to raise the standards of AR glasses in the coming years.

MICLEDI - Red GaN and Red AlInGaP microLED displays - CES 2023

“There is no one-size-fits-all solution for AR glasses,” said MICLEDI CEO, Sean Lord. “This achievement, with our previously announced blue, green, and red GaN µLEDs, opens the door to a broader offering of display module performance parameters which enables MICLEDI to serve customers developing AR glasses from medium to high resolution and medium to high brightness.”

Demonstration units of both Red GaN and Red AlInGaP were shown at the company’s booth at CES 2023.

NVIDIA Announces New Products and Innovations at CES 2023

NVIDIA announced new developments and NVIDIA Omniverse capabilities at CES 2023. The tech company, which is known for designing and building GPUs, unveiled its new GeForce RTX GPUs, which come with a host of new features that can be found in NVIDIA’s new studio laptops and GeForce RTX 4070 Ti graphics cards. This new series of portable laptops gives artists, creators, and gamers access to more powerful solutions and AI tools that will help them create 2D and 3D content faster.

NVIDIA also shared new developments to its Omniverse, including AI add-ons for Blender, access to new and free USD assets, and an update on the NVIDIA Canvas, which will be available for download in the future.

Aside from these updates, the company also released a major update to its Omniverse Enterprise, which enables users to access enhancements that will let them develop and operate more accurate virtual worlds. This major update is also set to expand the Omniverse’s capabilities through features such as new connectors, Omniverse Cloud, and Omniverse DeepSearch. More new partners are planning to use NVIDIA Omniverse to streamline their workflows and operations, including Dentsu International, Zaha Hadid Architects, and Mercedes-Benz.

NVIDIA Omniverse ACE - CES 2023
NVIDIA Omniverse ACE

Moreover, this January, NVIDIA opened its early-access program for NVIDIA Omniverse Avatar Cloud Engine (ACE), allowing developers and teams to build interactive avatars and virtual assistants at scale.

Demos of VITURE One XR Glasses and Mobile Dock

Aside from these established tech companies, VITURE, a new XR startup that received accolades from CES, TIME, and Fast Company for its flagship product, the VITURE One XR glasses, also prepared something interesting for CES 2023 attendees.

VITURE One XR glasses and Mobile Dock
VITURE One XR glasses and Mobile Dock

The company made both their VITURE One XR glasses, compatible with Steam Deck, laptops, and PCs, and their Mobile Dock, which introduces co-op play and Nintendo Switch compatibility, available for testing.



Nvidia is giving up on GameStream to the dismay of Shield TV owners




How Different XR Companies Approach Cloud Services

 

XR hardware is on the move. But software is important too. The bigger your XR needs are, the larger your software needs are. So, more and more XR providers are offering cloud services in addition to their hardware and platform offerings. But what is the cloud, anyway?

Generally, “the cloud” refers to remote servers that do work off of a device. This allows devices to become smaller while running more robust software. For example, some of the cloud services that we’ll look at are cloud storage solutions. Cloud storage is increasingly important because 3D assets can take up a lot of space. Others run computations on the cloud.

Other solutions make up “local clouds.” These are networks of devices managed from a central portal all on location. This kind of solution is usually used by organizations managing a large number of devices from one central computer.

Varjo’s Reality Cloud

“Cloud” takes on yet another meaning for Varjo. For Varjo clients, a lot of the management and IT solutions that make up cloud services for other developers are handled through software subscriptions bundled with almost all Varjo hardware. Varjo’s “Reality Cloud” allows users to join XR meetings including remotely present coworkers and virtual assets.

Varjo Reality Cloud - XR cloud services

“Varjo Reality Cloud is our platform that will allow the ultimate science fiction dream – photo-realistic teleportation – to come true,” CTO Urho Konttori said in a launch event last summer. “What this means, in practice, is true virtual teleportation – sharing your reality, your environment, with other people in real time so that others can experience your world.”

At the beginning of this year, Varjo announced that XR content will soon stream through Reality Cloud services as well. Just like streaming other forms of media, XR streaming aims to provide more content to smaller devices by hosting that content remotely and serving it to users on demand.

“These scalability opportunities that the cloud provides are significantly meaningful when we talk about XR deployment in the corporate world,” Konttori told ARPost in January. “We are now at the level that we are super happy with the latency and deployments.”

In a recent funding announcement, Varjo announced the most recent development in their cloud services. Patrick Wyatt, a C-suite veteran, has been appointed the company’s new CPO and “will be the primary lead for Varjo’s software and cloud development initiatives.” As this article was being written, Varjo further expanded its cloud with Unreal and Unity engine integrations.

CloudXR From NVIDIA

XR streaming is already a reality on other cloud platforms. NVIDIA offers CloudXR that streams XR content to Android and Windows devices. (Remember that Android isn’t a hardware manufacturer, but an operating system. While almost all non-Apple mobile devices run Android, it is also the backbone of many XR headsets.)

NVIDIA CloudXR - XR cloud services

According to NVIDIA, “CloudXR lets you leverage NVIDIA RTX-powered servers with GPU virtualization software to stream stunning augmented and virtual reality experiences from any OpenVR application. This means you can run the most complex VR and AR experiences from a remote server across 5G and Wi-Fi networks to any device, while embracing the freedom to move—no wires, no limits.”

This can be a “pure” cloud application, but it can also be an “edge” application that does some lifting on the device and some remotely. While NVIDIA promotes their cloud services for use cases like location-based experiences and virtual production, edge computing is being embraced by enterprises who may want to keep sensitive content offline.

RealWear’s New Cloud Services

Enterprise XR hardware manufacturer RealWear recently launched its own cloud, of the last kind discussed above. The solution allows IT specialists to “easily control and manage their entire RealWear device fleet from one easy-to-use interface.” That includes content, but it also includes managing updates.

If you own one headset, you know that installing software and updates can be a chore. Now, imagine owning a dozen headsets, or even a hundred or more. Putting on each headset individually to add content and install updates quickly becomes unscalable. The RealWear Cloud also allows real-time tech support, which wouldn’t be possible otherwise.
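The scale argument is simple to sketch in code. With a management API of any kind (the one below is entirely hypothetical, not RealWear’s), updating a fleet collapses from one manual session per headset into a single loop:

```python
# Hypothetical fleet-update sketch; not RealWear's API.
# Models why central management beats per-device manual updates.
from dataclasses import dataclass

@dataclass
class Device:
    device_id: str
    firmware: str

def push_update(fleet: list[Device], target_version: str) -> list[str]:
    """Queue an update for every device that is behind; return their IDs."""
    outdated = [d for d in fleet if d.firmware != target_version]
    for device in outdated:
        # In a real cloud portal this would be an API call, not a print.
        print(f"queueing {target_version} for {device.device_id}")
        device.firmware = target_version
    return [d.device_id for d in outdated]

fleet = [Device(f"hmt-{i:03}", "12.1") for i in range(100)]
updated = push_update(fleet, "12.2")
print(f"{len(updated)} headsets updated in one pass")
```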

RealWear Cloud

The RealWear Cloud also allows data analysis across headsets. This is vital in enterprise applications which may be tracking items as they move through a supply chain or tracking employees as they move through tasks or training modules. Handling this data for an individual on an individual headset is possible but, again, becomes unbearable at scale sans cloud.

Cloud Storage in Lens Studio

As for cloud storage, Snapchat recently announced a solution in a Lens Studio update that gives creators up to 25MB of remote storage. While the file size is still capped per asset (you can’t have one 25MB asset), it drastically increases the abilities of Lens Creators working with large or complex models.

Snap Lens Cloud

“Prior to the launch of Remote Assets, if a project was over the Lens size limit, you only had two options: either remove the asset if it wasn’t critical to the experience or resize the image to lower its RAM usage and re-submit,” reads the release. “Now you can utilize our Lens Cloud service to host assets of larger sizes outside of the Lens, and then load them in at run time.”

This is significant because Snap Lenses run on mobile devices that not only have limited space but also share that computing power with a slew of non-XR applications. At least, until Snapchat makes a consumer version of Spectacles.

“At first, we were just building for the phone and porting to the glasses,” Lens Creator Alex Bradt told me when I got to demo Snap’s Spectacles at AWE. “Now we’re like, ‘what can we actually do with these that will solve problems for people that they didn’t know they had?’”

Parents and Partners

Not all XR companies offer their own cloud services. For example, Magic Leap has had a partnership with Google Cloud for the past year now. Likewise, Autodesk offers its XR cloud services through a partnership with Amazon.

Similarly, ThinkReality cloud services are offered through parent company Lenovo. A similar relationship exists between Azure and Microsoft’s MR hardware.

Partnerships like these help each company get the most out of their existing offerings without needing to build services from the ground up. As enterprises explore entering XR, these offerings also help them integrate into cloud services offered by suppliers that they may already be working with, like Microsoft, Google, Amazon, or Lenovo.

Your Forecast: Cloudy

Right now, a lot of cloud services serve industry, where they are doing very impactful things. That doesn’t mean that people with just one headset (or a phone) shouldn’t take note. Developments in XR cloud services (for enterprise or for consumer applications) are making smoother, faster, lighter-weight, and more robust XR applications possible for everyone.



NVIDIA and Autodesk Bring Collaborative XR Experiences to the Cloud

 

XR technology is evolving quickly. Today, millions of people use AR and VR as they go through their daily lives. While many of the popular use cases of AR and VR are still in the realm of gaming and entertainment, other industries are finding practical use cases unique to their sectors.

Developments in extended reality are expanding from innovating hardware to elaborating experiences through advanced technologies and accessible systems. In October, tech giants NVIDIA and Autodesk announced the official launch of NVIDIA CloudXR and Autodesk VRED on Amazon Web Services (AWS), a cloud computing platform for users to run their choice of applications and software.

The joint NVIDIA-Autodesk release is available as a “Quick Start” deployment system on AWS, giving virtually any user access to Autodesk VRED on the powerful NVIDIA CloudXR infrastructure. A collaborative environment for designing and manipulating high-fidelity immersive XR experiences on the cloud speeds up the design workflows of industry professionals. It also makes extended reality environments more accessible, accelerating the adoption of XR technologies.

Bolstering the Future of Accessible XR Technologies

The world’s first virtual reality (VR) machine was built in 1956 (and patented in 1961)—the Sensorama was a movie booth incorporating 3D, audio, and video with a vibrating seat for an immersive viewing experience.

Inspired by the Sensorama came the development of the world’s first VR headset in 1961. The Headsight headset was built for military operations, complete with motion tracking technology. By 1968, the world witnessed the creation of the first augmented reality headset. Invented by Ivan Sutherland, a Harvard professor, the Sword of Damocles set the blueprint for generating present-day immersive AR experiences.

The long and exciting evolution of XR has yet to reach its turning point: becoming accessible for mainstream use. The general public has yet to have firsthand experience of using extended reality technologies.

The World Economic Forum states that user experience is pivotal to the mainstream success of many technologies, including XR and the metaverse. For now, the target demographic is strongly engaged in 2D platforms, sharing and communicating content in 2D format. Web3 developers have yet to devise a solution for users to relay their immersive experiences to one another.

The Significance of Collaboration for Globally Immersive XR Experiences

The joint decision of NVIDIA and Autodesk to launch their technologies as a “Quick Start” option on AWS is a step forward toward closing the gap between extended reality technologies and mainstream use. Users can now execute NVIDIA CloudXR and Autodesk VRED to create high-quality and immersive XR experiences, anytime, anywhere.

NVIDIA Autodesk VRED on AWS

Autodesk VRED is a 3D visualization solution that professionals in the architecture, engineering, and construction (AEC) industries are familiar with. VRED users design dynamic presentations and interactive environments with real-time 3D assets.

NVIDIA CloudXR is based on NVIDIA RTX technology, delivering seamless streaming of extended reality experiences across various networks—on the cloud, from data centers, or mobile data networks.

Anyone can easily access these technologies via AWS Quick Start. VRED users can design and stream immersive XR experiences with the support of NVIDIA CloudXR, dedicated NVIDIA RTX graphics cards, and virtual workstation platforms.

Transformative Partnerships to Scale XR Across Industries

The collaborative effort between Autodesk and NVIDIA did not come out of the blue. In fact, NVIDIA has been sealing partnership deals with various tech and automotive firms to scale extended reality in industrial action.

For instance, NVIDIA collaborated with automaker BMW to showcase a digital twin of the brand’s car assembly system. This summer, both NVIDIA and Autodesk collaborated with Lenovo and Varjo to bring the Porsche Mission R to life with an AR and MR demo.

Germany-based infrastructure company Siemens engaged with NVIDIA to leverage extended reality technologies and the metaverse for the production and manufacturing industries. NVIDIA Omniverse enables digital twin design and simulation of workflows in factories.

Autodesk also collaborated with game developer Epic Games to streamline workflows and tools for AEC designers. And XR headset manufacturer Varjo worked with Autodesk VRED for AR/VR headset support and remote collaboration through its Reality Cloud platform.

The recent Autodesk University event welcomed industry professionals to discover more of the CloudXR Quick Start option. Featured courses were led by David Randle, the Global Head of GTM for Spatial Computing at AWS.
