Artificial Intelligence


Spotify CEO’s startup for AI-powered preventive healthcare raises €60M

Spotify founder and CEO Daniel Ek’s preventive healthcare startup just received a very strong vote of confidence from venture capitalists. Earlier today, Neko Health announced it had raised €60mn in a round led by Lakestar and backed by Atomico and General Catalyst.

The funds will be put towards expanding the concept outside of the company’s native Sweden, where it currently operates a private body-scan clinic. 

Neko Health, named after the Japanese word for cat, was founded in 2018 by Ek and Hjalmar Nilsonne. After much secrecy, its first clinic opened in February this year in Stockholm. Within two hours, it was fully booked out and 5,000 people were placed on a waiting list. 

“I’ve spent more than 10 years exploring the untapped potential of healthcare innovation,” Ek said in a statement. “We are dedicated to building a healthcare system that focuses on prevention and patient care, aiming to serve not just our generation, but those that follow.”

3D body scans

At the clinic, people go through a 3D full-body scan in a minimalist booth that would not look out of place in an episode of Star Trek, fitted with dozens of sensors and powered by, you guessed it, artificial intelligence. In particular, algorithms can immediately detect potential skin conditions and risk of cardiovascular disease. 


Patients (are they still called that in preventive care?) also go through laser scans and an ECG, which, altogether, takes between 10 and 20 minutes. They may not be met by Bones himself after the examinations, but their results are reviewed and explained by an actual human doctor.

“We have our own nurses, doctors and specialists,” Nilsonne told Bloomberg. “We have dermatologists employed just to review the skin images. There is a doctor on site who can make qualified medical judgments for anything that comes up.”

The price for a Neko Health assessment is €250, and the company has performed over 1,000 scans since launch. Close to 80% of customers have reportedly prepaid for follow-up scans after a year. 

AI does indeed hold great potential for disease prevention and early detection — but only if the results are interpretable. It is unclear how much insight Neko Health physicians have into how the algorithm makes its predictions (as in, which factors contribute to the risk of cardiovascular disease, so the patient/client can be better informed about what measures to take).

Purpose and ambition

One of the company’s backers is Skype co-founder and founder of Atomico, Niklas Zennström, who will also be taking a seat on the board. He sees enormous potential in the new venture from the man who essentially changed how we consume music. 

“Neko Health is exactly the type of mission that gets us excited at Atomico. It’s that rare combination of a firm with a purpose and outsized ambition, and founders with a world class track record,” Zennström said. “They’re solving a problem we can all relate to, with the potential to fundamentally transform global healthcare forever.” 



AI to Help Everyone Unleash Their Inner Creator With Masterpiece X

Empowering independent creators is an often-touted benefit of AI in XR. We’ve seen examples from professional development studios with little to no public offering, but precious few examples of AI-powered authoring tools for individual users. Masterpiece Studio is adding one more, “Masterpiece X”, to help everyone “realize and elevate more of their creative potential.”

“A New Form of Literacy”

Masterpiece Studio doesn’t just want to release an app – they want to start a movement. The team believes that “everyone is a creator” but the modern means of creation are inaccessible to the average person – and that AI is the solution.

Masterpiece X Meta Quest 3D Remix screenshot

“As our world increasingly continues to become more digital, learning how to create becomes a crucial skill: a new form of literacy,” says a release shared with ARPost.

Masterpiece Studio has already been in the business of 3D asset generation for over eight years now. The company took home the 2021 Auggie Award for Best Creator and Authoring Tool, and is a member of the Khronos Group and the Metaverse Standards Forum.

So, what’s the news? A new AI-powered asset generation platform called Masterpiece X, currently available as a beta application through a partnership with Meta.

The Early Days of Masterpiece X

Masterpiece X is already available on the Quest 2, and it’s already useful if you have your own 3D assets to import. There’s a free asset library, but it only contains sample content at the moment. The big feature of the app – creating 3D models from text prompts – is still rolling out and will (hopefully) result in a more highly populated asset library.

Masterpiece X Meta community library

“Please keep in mind that this is an ‘early release’ phase of the Masterpiece X platform. Some features are still in testing with select partners,” reads the release.

That doesn’t mean it’s too early to get the app. It’s already a powerful tool. Creators who download and master the app now will be better prepared to unlock its full potential when it’s ready.

Creating an account isn’t a lengthy process, but it’s a bit clunky – it can’t be done entirely online or entirely in-app, which means switching between a desktop and the VR headset to enter URLs and passwords. After that, you can take a brief tutorial or experiment on your own.

The app already incorporates a number of powerful tools into the entirely spatial workflow. Getting used to the controls might take some work, though people who already have experience with VR art tools might have a leg up. Users can choose a beginner menu with a cleaner look and fewer tools, or an expert menu with more options.

So far, tools allow users to change the size, shape, color, and texture of assets. Some of these are simple objects, while others come with rigged skeletons that can take on a variety of animations.

I Had a Dream…

For someone like me who isn’t very well-versed in 3D asset editing, now is the moment to spend time in Masterpiece X – honing my skills until the day that asset creation on the platform is streamlined by AI. Maybe then I can finally make a skateboarding Gumby-shaped David Bowie to star in an immersive music video for “Twinkle Song” by Miley Cyrus. Maybe.



The Intersections of Artificial Intelligence and Extended Reality

It seems like just yesterday it was the AR this, VR that, metaverse, metaverse, metaverse. Now all anyone can talk about is artificial intelligence. Is that a bad sign for XR? Some people seem to think so. However, people in the XR industry understand that it’s not a competition.

In fact, artificial intelligence has a huge role to play in building and experiencing XR content – and it’s been part of high-level metaverse discussions for a very long time. I’ve never claimed to be a metaverse expert and I’m not about to claim to be an AI expert, so I’ve been talking to the people building these technologies to learn more about how they help each other.

The Types of Artificial Intelligence in Extended Realities

For the sake of this article, there are three main branches of artificial intelligence: computer vision, generative AI, and large language models. AI is more complicated than this, but it helps get us started talking about how it relates to XR.

Computer Vision

In XR, computer vision helps apps recognize and understand elements in the environment. This places virtual elements in the environment and sometimes lets them react to that environment. Computer vision is also increasingly being used to streamline the creation of digital twins of physical items or locations.

Niantic is one of XR’s big world-builders using computer vision and scene understanding to realistically augment the world. 8th Wall, an acquisition that runs its own projects while also serving as Niantic’s WebXR division, uses some AI of its own and is compatible with other AI tools, as teams showcased in a recent Innovation Lab hackathon.

“During the sky effects challenge in March, we saw some really interesting integrations of sky effects with generative AI because that was the shiny object at the time,” Caitlin Lacey, Niantic’s Senior Director of Product Marketing told ARPost in a recent interview. “We saw project after project take that spin and we never really saw that coming.”

The winner used generative AI to create the environment that replaced the sky through a recent tool developed by 8th Wall. While some see artificial intelligence (that “shiny object”) as taking the wind out of immersive tech’s sails, Lacey sees this as an evolution rather than a distraction.

“I don’t think it’s one or the other. I think they complement each other,” said Lacey. “I like to call them the peanut butter and jelly of the internet.”

Generative AI

Generative AI takes a prompt and turns it into some form of media, whether an image, a short video, or even a 3D asset. Generative AI is often used in VR experiences to create “skyboxes” – the flat image over the virtual landscape where players have their actual interactions. However, as AI gets stronger, it is increasingly used to create virtual assets and environments themselves.

Artificial Intelligence and Professional Content Creation

Talespin makes immersive XR experiences for training soft skills in the workplace. The company has been using artificial intelligence internally for a while now and recently rolled out a whole AI-powered authoring tool for their clients and customers.

A release shared with ARPost calls the platform “an orchestrator of several AI technologies behind the scenes.” That includes developing generative AI tools for character and world building, but it also includes work with other kinds of artificial intelligence that we’ll explore further in the article, like LLMs.

“One of the problems we’ve all had in the XR community is that there’s a very small contingent of people who have the interest and the know-how and the time to create these experiences, so this massive opportunity is funneled into a very narrow pipeline,” Talespin CEO Kyle Jackson told ARPost. “Internally, we’ve seen a 95-97% reduction in time to create [with AI tools].”

Talespin isn’t introducing these tools to put themselves out of business. On the contrary, Jackson said that his team is able to be even more involved in helping companies workshop their experiences because his team is spending less time building the experiences themselves. Jackson further said this is only one example of a shift happening to more and more jobs.

“What should we be doing to make ourselves more valuable as these things shift? … It’s really about metacognition,” said Jackson. “Our place flipped from needing to know the answer to needing to know the question.”

Artificial Intelligence and Individual Creators

DEVAR launched MyWebAR in 2021 as a no-code authoring tool for WebAR experiences. In the spring of 2023, that platform became more powerful with a neural network for AR object creation.

In creating a 3D asset from a prompt, the network determines the necessary polygon count and replicates the texture. The resulting 3D asset can exist in AR experiences and serve as a marker itself for second-layer experiences.

“A designer today is someone who can not just draw, but describe. Today, it’s the same in XR,” DEVAR founder and CEO Anna Belova told ARPost. “Our goal is to make this available to everyone … you just need to open your imagination.”

Blurring the Lines

“From strictly the making a world aspect, AI takes on a lot of the work,” Mirrorscape CEO Grant Anderson told ARPost. “Making all of these models and environments takes a lot of time and money, so AI is a magic bullet.”

Mirrorscape is looking to “bring your tabletop game to life with immersive 3D augmented reality.” Of course, much of the beauty of tabletop games comes from the fact that players are creating their own worlds and characters as they go along. While the roleplaying element has been reproduced by other platforms, Mirrorscape is bringing in the individual creativity through AI.

“We’re all about user-created content, and I think in the end AI is really going to revolutionize that,” said Anderson. “It’s going to blur the lines around what a game publisher is.”

Even for those who are professional builders but who might be independent or just starting out, artificial intelligence, whether to create assets or just for ideation, can help level the playing field. That was a theme of a recent Zapworks workshop “Can AI Unlock Your Creating Potential? Augmenting Reality With AI Tools.”

“AI is now giving individuals like me and all of you sort of superpowers to compete with collectives,” Zappar executive creative director Andre Assalino said during the workshop. “If I was a one-man band, if I was starting off with my own little design firm or whatever, if it’s just me freelancing, I now will be able to do so much more than I could five years ago.”

NeRFs

Neural Radiance Fields (NeRFs) weren’t included in the introduction because they can be seen as a combination of generative AI and computer vision. A NeRF starts with a special kind of neural network called a multilayer perceptron (MLP). A “neural network” is any artificial intelligence modeled on the human brain, and an MLP is … well, look at it this way:

If you’ve ever taken an engineering course, or even a high school shop class, you’ve been introduced to drafting. Technical drawings represent a 3D structure as a series of 2D images, each showing different angles of the 3D structure. Over time, you can get pretty good at visualizing the complete structure from these flat images. An MLP can do the same thing.

The difference is the output. When a human does this, the output is a thought – a spatial understanding of the object in your mind’s eye. When an MLP does this, the output is a NeRF – a 3D rendering generated from the 2D images.
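At its core, the MLP behind a NeRF is just a stack of fully connected layers that maps a 5D input (a 3D position plus a 2D viewing direction) to a color and a volume density. The sketch below is a minimal illustration of that mapping in plain NumPy, not any particular NeRF implementation — real NeRFs add positional encodings and train the weights on images, and the layer sizes and random weights here are arbitrary assumptions:

```python
import numpy as np

def mlp(x, weights):
    """Pass input x through fully connected layers with ReLU activations."""
    for W, b in weights[:-1]:
        x = np.maximum(0.0, x @ W + b)  # hidden layers: linear + ReLU
    W, b = weights[-1]
    return x @ W + b                    # final layer: no activation

rng = np.random.default_rng(0)
sizes = [5, 64, 64, 4]  # 5D in (x, y, z, theta, phi) -> 4D out (R, G, B, density)
weights = [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
           for m, n in zip(sizes[:-1], sizes[1:])]

sample = np.array([0.1, -0.2, 0.5, 0.0, 1.2])  # one point along a camera ray
rgb_sigma = mlp(sample, weights)
print(rgb_sigma.shape)  # (4,): predicted color plus volume density
```

Rendering a NeRF then amounts to querying this function at many points along each camera ray and compositing the colors by density.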

Early on, this meant feeding countless images into the MLP. However, in the summer of 2022, Apple and the University of British Columbia developed a way to do it with one video. Their approach was specifically interested in generating 3D models of people from video clips for use in AR applications.

Whether a NeRF recreates a human or an object, it’s quickly becoming the fastest and easiest way to make digital twins. Of course, the only downside is that NeRF can only create digital models of things that already exist in the physical world.

Digital Twins and Simulation

Digital twins can be built with or without artificial intelligence. However, some use cases of digital twins are powered by AI. These include simulations like optimization and disaster readiness. For example, a digital twin of a real campus can be created, but then modified on a computer to maximize production or minimize risk in different simulated scenarios.

“You can do things like scan in areas of a refinery, but then create optimized versions of that refinery … and have different simulations of things happening,” MeetKai co-founder and executive chairwoman Weili Dai told ARPost in a recent interview.

A recent suite of authoring tools launched by the company (which started in AI before branching into XR solutions) includes AI-powered tools for creating virtual environments from the real world. These can be left as exact digital twins, or they can be edited to streamline the production of more fantastical virtual worlds by providing a foundation built in reality.

Large Language Models

Large Language Models take in language prompts and return language responses. This is on the list of AI interactions that runs largely under the hood so that, ideally, users don’t realize that they’re interacting with AI. For example, large language models could be the future of NPC interactions and “non-human agents” that help us navigate vast virtual worlds.

“In these virtual world environments, people are often more comfortable talking to virtual agents,” Inworld AI CEO Ilya Gelfenbeyn told ARPost in a recent interview. “In many cases, they are acting in some service roles and they are preferable [to human agents].”

Inworld AI makes brains that can animate Ready Player Me avatars in virtual worlds. Creators get to decide what the artificial intelligence knows – or what information it can access from the web – and what its personality is like as it walks and talks its way through the virtual landscape.

“You basically are teaching an actor how it is supposed to behave,” Inworld CPO Kylan Gibbs told ARPost.
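Inworld’s actual tooling is proprietary, but the idea of deciding what a character knows and what its personality is like can be illustrated generically as prompt assembly for an LLM. Everything below — the function name, the character “Mira,” her facts — is a hypothetical sketch, not Inworld’s API:

```python
def npc_system_prompt(name, persona, knowledge, unknowns):
    """Assemble a system prompt constraining an LLM to play one character."""
    facts = "\n".join(f"- {fact}" for fact in knowledge)
    return (
        f"You are {name}, a character in a virtual world. {persona}\n"
        f"You know only the following:\n{facts}\n"
        f"If asked about {unknowns}, say you don't know. Stay in character."
    )

prompt = npc_system_prompt(
    name="Mira",
    persona="A cheerful museum guide who loves Renaissance art.",
    knowledge=["The gallery opens at 9am.", "The Vermeer room is on floor 2."],
    unknowns="anything outside the museum",
)
print(prompt.startswith("You are Mira"))  # True
```

The resulting string would be sent as the system message of a chat-style LLM call, with the player’s speech as the user message — which is what makes the AI feel like a character rather than a chatbot.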

Large language models are also used by developers to speed up back-end processes like generating code.

How XR Gives Back

So far, we’ve talked about ways in which artificial intelligence makes XR experiences better. However, the opposite is also true, with XR helping to strengthen AI for other uses and applications.

Evolving AI

We’ve already seen that some approaches to artificial intelligence are modeled after the human brain. We know that the human brain developed essentially through trial and error as it rose to meet the needs of our early ancestors. So, what if virtual brains had the same opportunity?

Martine Rothblatt, PhD, reports on that very opportunity in the excellent book “Virtually Human: The Promise – and the Peril – of Digital Immortality”:

“[Academics] have even programmed elements of autonomy and empathy into computers. They even create artificial software worlds in which they attempt to mimic natural selection. In these artificial worlds, software structures compete for resources, undergo mutations, and evolve. Experimenters are hopeful that consciousness will evolve in their software as it did in biology, with vastly greater speed.”

Feeding AI

Like any emerging technology, people’s expectations of artificial intelligence can grow faster than AI’s actual capabilities. AI learns by having data entered into it. Lots of data.

For some applications, there is a lot of extant data for artificial intelligence to learn from. But, sometimes, the answers that people want from AI don’t exist yet as data from the physical world.

“One sort of major issue of training AI is the lack of data,” Treble Technologies CEO Finnur Pind told ARPost in a recent interview.

Treble Technologies works on creating realistic sound in virtual environments. To train an artificial intelligence to work with sound, you need audio files. Historically, these were painstakingly sampled, with different things causing different sounds in different environments.

Usually, during the early design phases, an architect or automotive designer will approach Treble to predict what audio will sound like in a future space. However, Treble can also use its software to generate specific sounds in specific environments to train artificial intelligence without all of the time- and labor-intensive sampling. Pind calls this “synthetic data generation.”
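Treble hasn’t published its pipeline, but the core idea of synthetic data generation for audio can be sketched generically: instead of recording a sound in every room, you simulate the room’s impulse response and convolve it with a clean recording. The toy example below uses exponentially decaying noise as a stand-in for a physically simulated impulse response; all names and numbers are illustrative assumptions:

```python
import numpy as np

def synthetic_room_ir(length, decay, rng):
    """Toy impulse response: white noise shaped by an exponential decay."""
    t = np.arange(length)
    return rng.normal(size=length) * np.exp(-decay * t)

def render_in_room(dry_signal, impulse_response):
    """Simulate how a dry signal sounds in the room: convolve with the IR."""
    return np.convolve(dry_signal, impulse_response)

rng = np.random.default_rng(42)
dry = rng.normal(size=1000)                   # stand-in for a clean recording
ir = synthetic_room_ir(200, decay=0.02, rng=rng)
wet = render_in_room(dry, ir)                 # labeled training pair: (dry, wet)
print(wet.shape)  # (1199,): len(dry) + len(ir) - 1
```

Each simulated room yields a new (dry, wet) pair, so an audio model can be trained on thousands of acoustic environments that were never physically recorded.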

The AI-XR Relationship Is “and” Not “or”

Holding up artificial intelligence as the new technology on the block that somehow takes away from XR is an interesting narrative. However, experts are in agreement that these two emerging technologies reinforce each other – they don’t compete. XR helps AI grow in new and fantastic ways, while AI makes XR tools more powerful and more accessible. There’s room for both.



Mistral AI secures €105M in Europe’s largest-ever seed round


Story by Linnea Ahlgren

Linnea is the senior editor at TNW, having joined in April 2023. She has a background in international relations and covers clean and climate tech, AI and quantum computing. But first, coffee.

The artificial intelligence hype shows no sign of fading just yet, and investors are practically falling over themselves to fund the next big thing in AI. Yesterday, Paris-based startup Mistral AI announced it had secured €105mn in what is reportedly Europe’s largest-ever seed round. 

Mistral AI was founded only four weeks ago by a trio of AI researchers. Arthur Mensch, the company’s CEO, was formerly employed by Google’s DeepMind. His co-founders, Timothée Lacroix (CTO) and Guillaume Lample (Chief Science Officer), previously worked for Meta. 

The company has yet to develop its first product. However, on a mission to “make AI useful,” it plans to launch a large language model (LLM) similar to the system behind OpenAI’s ChatGPT in early 2024. 

A large part of the funds raised will be used to rent the computing power to train it. The idea is to only use publicly available data to avoid the legal issues and copyright backlash faced by others in the industry. 

While Mistral hopes to take on OpenAI with actual open-sourced models and data sets, it is setting itself apart from the Microsoft-backed step-change initiator by targeting enterprises instead of consumers. The company says its goal is to help business clients improve processes around R&D, customer care, and marketing, as well as giving them the tools to build new products with AI.

Vision coupled with hands-on experience

The funding round is led by Lightspeed Venture Partners. The VC’s partner, Antoine Moyroud, says that Lightspeed has had the opportunity to meet with several talented researchers-turned-founders in AI, but few had a vision beyond the technical field. 

Mensch, Lacroix and Lample, on the other hand, according to Moyroud, are “part of a group of select few globally who have both the technical understanding required to build out their own vision along with the hands-on experience of training and operating large language models at scale.”

Generally, the American VC says it believes that Europe has “a decisive role” to play in the AI field. Just in September last year, the firm opened up new offices in London, Berlin, and Paris, and says it is looking to partner with more European founders of the same ambition behind Mistral.



“Our investment in Mistral, and all our portfolio companies in Europe, are evidence of our firm conviction that a new generation of global players will emerge from this ecosystem,” Lightspeed said in a statement.  

JCDecaux Holding, Motier Ventures, La Famiglia, Headline, Exor Ventures, Sofina, First Minute Capital, and LocalGlobe also participated in the round, along with private investors including French billionaires Rodolphe Saadé and Xavier Niel, as well as former Google CEO Eric Schmidt and French investment bank BpiFrance.



Italy to launch €150M fund for AI startups


Story by Linnea Ahlgren

Italy is the latest country looking to quickly shore up domestic development of an AI ecosystem. As part of its Strategic Program for Artificial Intelligence, the government will “soon” launch a €150 million fund to support startups in the field, backed by development bank Cassa Depositi e Prestiti (CDP). 

As reported by Corriere Communazione, Alessio Butti, Italy’s cabinet undersecretary in charge of technological innovation, relayed the news of the state-backed fund yesterday. While he didn’t provide specific details on the amount to be made available, government sources subsequently told Reuters the figure being discussed in Rome was in the vicinity of €150 million. 

“Our goal is to increase the independence of Italian industry and cultivate our national capacity to develop skills and research in the sector,” Butti said. “This is why we are working with CDP on the creation of an investment fund for the most innovative startups, so that study, research, and programming on AI can be promoted in Italy.”

Navigating regulation and support

Indeed, the AI boom is here in earnest. Yesterday, Nvidia became the first chipmaker to hit $1 trillion in valuation. The boost to its stock followed a prediction of sales reaching $11 billion in Q2 off the back of the company’s chips powering OpenAI’s ChatGPT (which, coincidentally, got off on the wrong foot with Italy).

Those who do not yet have their hands in the (generative) AI pie are now racing to be part of the algorithm-driven gold rush of the 21st Century. 

While intent on regulatory oversight, governments are also, for various reasons, keen on supporting domestic developers in the field of artificial intelligence. Last month, the UK made £100 million in funding available for a task force to help build and adopt the “next generation of safe AI.” 

Italy is also looking to set up its own “ad hoc” task force. Butti stated, “In Italy we must update the strategy of the sector, and therefore the Department for Digital Transformation is working on the establishment of an authoritative group of Italian experts and scholars.”

Part of national AI strategy

Italy adopted the Strategic Program for Artificial Intelligence 2022-2024 in 2021 but, of course, the industry is evolving at breakneck speed. The strategy is a joint project between the ministries for university and research, economic development, and technological innovation and digital transition. Additionally, it is guided by a working group on the national strategy for AI. 

The program outlines 24 policies the government will implement over the course of the three years. Beyond measures to support the domestic development of AI, these include promotion of STEM subjects, and increasing the number of doctorates to attract international researchers. Furthermore, they target the creation of data infrastructure for public administration and specific support for startups working in GovTech and looking to solve critical problems in the public sector.



Google launches €10M social innovation AI fund for European entrepreneurs


Story by Linnea Ahlgren

In conjunction with CEO Sundar Pichai’s visit to Stockholm yesterday, Google announced the launch of the second Google.org Social Innovation Fund on AI to “help social enterprises solve some of Europe’s most pressing challenges.” 

Through the fund, Google is making €10 million available, along with mentoring and support, for entrepreneurs from underserved backgrounds. The aim is to help them develop transformative AI solutions that specifically target problems they face on a daily basis.

The fund will provide capital via a grant to INCO for the expansion of Social Tides, an accelerator program funded by Google.org, that will provide cash support of up to $250,000 (€232,000). 

In 2021, Google put up €20 million for European AI social innovation startups through the same mechanism. Among the beneficiaries at that time was The Newsroom in Portugal, which uses an AI-powered app to encourage a more contextualised reading experience to take people out of their bubble and reduce polarisation.

Mini-European tour ahead of AI Act

Of the money offered by the tech giant this time around, €1 million will be earmarked for nonprofits that are helping to strengthen and grow social entrepreneurship in Sweden.

During his brief stay, Pichai met with the country’s prime minister and visited the KTH Royal Institute of Technology to meet with students and professors.

Google CEO Sundar Pichai visited KTH and talked about artificial intelligence. He noted that it is okay to be afraid, as long as the fear is put to good use. https://t.co/imbtxxbSVn pic.twitter.com/oWal43dc2a

— KTH Royal Institute of Technology (@KTHuniversity) May 24, 2023

Sweden currently holds the six-month-long rotating Presidency of the European Union. Pichai’s visit to Stockholm preceded a trip to meet with European Commission deputy chief Vera Jourova and EU industry chief Thierry Breton on Wednesday. 

Breton is one of the drivers behind the EU’s much-anticipated AI Act, a world-first attempt at far-reaching AI regulation. One of the biggest sources of contention — and surely subject to much lobbying from the industry — is whether so-called general purpose AI, such as the technology behind ChatGPT or Google’s Bard, should be considered “high-risk.” 

Speaking to Swedish news outlet SVT on the day of his visit, Pichai stated that he believes that AI is indeed too important not to regulate, and to regulate well. “It is definitely going to involve governments, companies, academic universities, nonprofits, and other stakeholders,” Google’s top executive said. 

However, he may be doing some convincing of his own in Brussels, further adding, “These AI systems are going to be used for everything, from recommending a nearby coffee shop to potentially recommending a health treatment for you. As you can imagine, these are very different applications. So where we could get it wrong is to apply a high-risk assessment to all these use cases.” 

Will Pichai be successful in convincing the Commission? Then, just maybe, Bard will launch in Europe too.



Tech’s role in the quest for climate justice: What not to miss at TNW Conference


Story by Linnea Ahlgren

Award-winning innovators Caroline Lair and Lucia Gallardo will be speaking at TNW Conference, which takes place on June 15 & 16 in Amsterdam. If you want to experience the event (and say hi to our editorial team!), we’ve got something special for our loyal readers. Use the promo code READ-TNW-25 and get a 25% discount on your business pass for TNW Conference. See you in Amsterdam!

Social inequality and climate risk have become central to understanding what will drive innovation – and investment – for the future. On day two of TNW Conference, Caroline Lair, founder of startup and scaleup communities The Good AI and Women in AI, and Lucia Gallardo, founder and CEO of impact innovation “socio-technological experimentation lab” Emerge, will be on the Growth Quarters stage for a session titled “Technology-Driven Climate Justice.” 

The climate crisis is itself the result of a deeply embedded and systemic exploitation of nature and people in the name of profit. Its impact is already being felt disproportionately around the world, with severe heat waves, droughts, and entire nations disappearing below sea level. What’s more, the people worst affected are those who have contributed little to the greenhouse gas emissions driving global warming.

Climate justice is the idea that climate change is not just an environmental but also a social justice issue, and aims to ensure that the transition to a low-carbon economy is equitable and benefits everyone. Lair and Gallardo will specifically speak about how technologies such as AI, blockchain, and Web3 can play a crucial role in addressing these issues. 

AI for good

Artificial intelligence can be applied in the quest for climate justice in several ways, provided it is implemented in a manner that ensures transparency, accountability, and fairness. These include data analysis and prediction, discovering patterns and informing policies, as well as evaluating their effectiveness. 

It can also enhance climate modelling capabilities, crucial for developing adaptation strategies. Furthermore, AI-powered technologies can monitor, for instance, weather systems with real-time data and also optimise resource allocation and energy distribution.

Reimagining value

Emerge’s objective is to “reimagine impact innovation with regenerative monetisation models.” Regenerative finance goes beyond traditional models that focus on profit, taking into account broader social, environmental, and economic impacts. 

Blockchain technology can, for instance, offer transparency for transactions, ensuring that funds are indeed directed to regenerative investments. It can also tokenise regenerative assets such as renewable energy installations, sustainable agriculture initiatives, or ecosystem restoration projects, representing them as digital tokens and making them more accessible to a broader range of investors. 

Meanwhile, in the words of Gallardo, “Integrating crypto into existing ecological initiatives doesn’t automatically mean it is applied regenerative finance. We must be intentional about how we’re reimagining value.”

Reclaiming an equitable future

Why am I looking forward to this session? The theme of this year’s TNW Conference is “Reclaim The Future”. In all honesty, I belong to a generation that, while hopefully having several decades more of on-earth experience ahead, will most likely not have to deal with full-on dystopian scenarios, battling to survive climate catastrophe.

I am also privileged in terms of geographical location and socioeconomic status not to have to worry about immediate drought and famine. (Flooding may be another matter, but as someone said when convincing me to move to Amsterdam – “wouldn’t you prefer to live in a place that is already used to keeping water out?”) 

However, this does not mean that we who enjoy such privileges get to simply shrug our shoulders and carry on indulging in business as usual. TNW has always been about the good technology can do in the world. And what is better than employing it in service of one of the greatest challenges of our time?

This ‘Skyrim VR’ Mod Shows How AI Can Take VR Immersion to the Next Level

The latest version of the project introduces Skyrim scripting for the first time, which the developer says allows for lip syncing of voices and NPC awareness of in-game events. While still a little rigid, it feels like a pretty big step towards climbing out of the uncanny valley.

Here’s how ‘Art from the Machine’ describes the project in a recent Reddit post showcasing their work:

A few weeks ago I posted a video demonstrating a Python script I am working on which lets you talk to NPCs in Skyrim via ChatGPT and xVASynth. Since then I have been working to integrate this Python script with Skyrim’s own modding tools and I have reached a few exciting milestones:

NPCs are now aware of their current location and time of day. This opens up lots of possibilities for ChatGPT to react to the game world dynamically instead of waiting to be given context by the player. As an example, I no longer have issues with shopkeepers trying to barter with me in the Bannered Mare after work hours. NPCs are also aware of the items picked up by the player during conversation. This means that if you loot a chest, harvest an animal pelt, or pick a flower, NPCs will be able to comment on these actions.

NPCs are now lip synced with xVASynth. This is obviously much more natural than the floaty proof-of-concept voices I had before. I have also made some quality of life improvements such as getting response times down to ~15 seconds and adding a spell to start conversations.

When everything is in place, it is an incredibly surreal experience to be able to sit down and talk to these characters in VR. Nothing takes me out of the experience more than hearing the same repeated voice lines, and with this no two responses are ever the same. There is still a lot of work to go, but even in its current state I couldn’t go back to playing without this.
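The context handling described in the post above — folding location, time of day, and recently looted items into the model’s instructions — could be sketched roughly like this. All names and prompt wording here are hypothetical illustrations, not taken from the actual mod:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of dynamic prompt-building for an LLM-driven NPC.
# None of these names or strings come from the actual mod.

@dataclass
class NPCState:
    name: str
    location: str
    time_of_day: str
    items_seen: list = field(default_factory=list)  # items the player picked up mid-conversation

def build_system_prompt(npc: NPCState) -> str:
    """Fold game context into the prompt so the model can react to the
    world instead of waiting for the player to explain it."""
    lines = [
        f"You are {npc.name}, a character in Skyrim.",
        f"You are currently in {npc.location}, and it is {npc.time_of_day}.",
    ]
    if npc.items_seen:
        lines.append(
            "The player just picked up: " + ", ".join(npc.items_seen)
            + ". You may comment on this."
        )
    lines.append("Stay in character and keep replies brief.")
    return "\n".join(lines)

# Example: a shopkeeper in the Bannered Mare, prompted after work hours.
hulda = NPCState("Hulda", "the Bannered Mare", "late at night", ["wolf pelt"])
print(build_system_prompt(hulda))
```

A real integration would send something like this as the system message of each chat request and rebuild it whenever the game state changes.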

You might notice the actual voice prompting the NPCs is fairly robotic too, although ‘Art from the Machine’ says they’re using speech-to-text to talk to the ChatGPT 3.5-driven system. The voice heard in the video is generated by xVASynth and plugged in during video editing to replace what they call their “radio-unfriendly voice.”

And when can you download and play for yourself? Well, the developer says publishing their project is still a bit of a sticky issue.

“I haven’t really thought about how to publish this, so I think I’ll have to dig into other ChatGPT projects to see how others have tackled the API key issue. I am hoping that it’s possible to alternatively connect to a locally-run LLM model for anyone who isn’t keen on paying the API fees.”
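Since many local LLM servers expose an OpenAI-compatible HTTP endpoint, the fallback the developer hopes for can come down to a configuration switch rather than a rewrite. A minimal sketch — the URLs, model names, and helper function are illustrative assumptions, not part of the mod:

```python
# Hypothetical sketch: choose between OpenAI's hosted API and a locally
# hosted, OpenAI-compatible server. All values below are illustrative.

def chat_backend(use_local: bool) -> dict:
    """Return connection settings for the dialogue backend."""
    if use_local:
        # A local server avoids per-request API fees entirely.
        return {
            "base_url": "http://localhost:8080/v1",
            "api_key": "not-needed",
            "model": "local-model",
        }
    return {
        "base_url": "https://api.openai.com/v1",
        "api_key": "YOUR_OPENAI_KEY",
        "model": "gpt-3.5-turbo",
    }
```

Client libraries that allow overriding the base URL can then talk to either backend with the same request code.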

Serving up more natural NPC responses is also an area that needs to be addressed, the developer says.

For now I have it set up so that NPCs say “let me think” to indicate that I have been heard and the response is in the process of being generated, but you’re right this can be expanded to choose from a few different filler lines instead of repeating the same one every time.

And while the video is noticeably sped up after prompts, this mostly comes down to the voice generation software xVASynth, which admittedly slows the response pipeline down since it’s being run locally. ChatGPT itself doesn’t affect performance, the developer says.

This isn’t the first project we’ve seen using chatbots to enrich user interactions. Lee Vermeulen, a long-time VR pioneer and the developer behind Modbox, released a video in 2021 showing off one of his first tests using OpenAI’s GPT-3 and voice acting software Replica. In Vermeulen’s video, he talks about how he set parameters for each NPC, giving them the body of knowledge they should have, all of which guides the sort of responses they’ll give.

Check out Vermeulen’s video below, the very same that inspired ‘Art from the Machine’ to start working on the Skyrim VR mod:

As you’d imagine, this is really only the tip of the iceberg for AI-driven NPC interactions. Being able to naturally talk to NPCs, even if a little stuttery and not exactly at human-level, may be preferable to having to wade through a ton of 2D text menus, or go through slow and ungainly tutorials. It also offers up the chance to bond more with your trusty AI companion, like Skyrim’s Lydia or Fallout 4’s Nick Valentine, who instead of offering up canned dialogue might actually, you know, help you out every once in a while.

And that’s really only the surface level stuff that a mod like ‘Art from the Machine’ might deliver to existing games that aren’t built with AI-driven NPCs. Imagining a game that is actually predicated on your ability to ask the right questions and do your own detective work—well, that’s a role-playing game we’ve never experienced before, either in VR or otherwise.

This ‘Skyrim VR’ Mod Shows How AI Can Take VR Immersion to the Next Level Read More »


German creatives want EU to address ChatGPT copyright concerns

Story by Linnea Ahlgren

ChatGPT has had anything but a triumphant welcome tour around Europe. Following grumbling regulators in Italy and the European Parliament, the turn has come for German trade unions to express their concerns over potential copyright infringement. 

No fewer than 42 trade organisations representing over 140,000 of the country’s authors and performers have signed a letter urging the EU to impose strict rules on AI’s use of copyrighted material. 

As first reported by Reuters, the letter, which underlined increasing concerns about copyright and privacy issues stemming from the material used to train the large language model (LLM), stated, 

“The unauthorised usage of protected training material, its non-transparent processing, and the foreseeable substitution of the sources by the output of generative AI raise fundamental questions of accountability, liability and remuneration, which need to be addressed before irreversible harm occurs.”

Signatories include major German trade unions Verdi and DGB, as well as other associations for photographers, designers, journalists and illustrators. The letter’s authors further added that, 

“Generative AI needs to be at the centre of any meaningful AI market regulation.”

ChatGPT is not the only target of copyright contention. In January, visual media company Getty Images filed a copyright claim against Stability AI. According to the lawsuit, the developer of the image-making tool allegedly copied over 12 million photos, captions, and metadata without permission. 

LLM training offers diminishing returns

The arrival of OpenAI’s ChatGPT has sparked a flurry of concerns. Thus far, these have covered everything from aggressive development due to a commercially motivated AI “arms race,” to matters of privacy, data protection and copyright. 

Meanwhile, one of the originators of the controversy, the company’s CEO Sam Altman, stated last week that the strategy of ever-larger models behind ChatGPT has run its course. Indeed, OpenAI forecasts diminishing returns on scaling up model size. The company trained its latest model, GPT-4, using over a trillion words at a cost of about $100 million. 

At the same time, the EU’s Artificial Intelligence Act is nearing its home stretch. While it may well set a global regulatory standard, the question is how well it will be able to adapt as developers find other new and innovative ways of making algorithms more efficient.



German creatives wants EU to address ChatGPT copyright concerns Read More »