Artificial Intelligence

AI startup launches ‘fastest data processing engine’ on the market

Paris-based and female-founded AI startup Pathway has announced the general launch of its data processing engine. Reportedly, it is up to 90x faster than existing streaming solutions, and promises to be the “fastest data processing engine on the market.” 

The secret? A unique ability to mix batch and streaming logic in the same workflow, which lets the system forget things that are no longer useful. Basically, this means it can learn and react to changes in real time, like humans.

Traditionally, the complexity of building batch and streaming architectures has resulted in a division between the two approaches, says Pathway CEO and co-founder Zuzanna Stamirowska. 

This, she adds, has slowed down the adoption of data streaming for AI systems and fixed their intelligence at a moment in time. Not to mention the added complexity of a third workflow — generative AI. 

According to Stamirowska, there’s now a “critical need” for rapid data processing and more adaptable AI. “That’s why our mission has been to enable real-time data processing, while giving developers a simple experience regardless of whether they work with batch, streaming, or LLM systems,” she states.

Stamirowska says Pathway’s aim is to give developers a simple experience, regardless of whether they work with batch, streaming, or LLM systems. Credit: Pathway

Revisions to data points without AI retraining

Machines “forgetting” incorrect or outdated information in real-time has been a near-impossible feat in the past, due to models being trained on static data uploads. Traditionally, unlearning would require retraining of the model. 

Indeed, when we put the question to ChatGPT for fun — can you unlearn things should they prove to be inaccurate — this is the response we received: 

“As an AI language model, I don’t have the ability to ‘unlearn’ information in the same way humans do.

“However, the developers and researchers at OpenAI can update and retrain the model based on new data and improvements.”

But Pathway says it can make revisions to certain data points without requiring a full batch data upload, akin to updating the value of one cell within an Excel document. Changing one cell doesn’t force the whole document to be reprocessed, only the cells that depend on it.
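As a rough mental model of that kind of incremental recomputation, consider the sketch below. It is a generic illustration of dependency-tracked updates, not Pathway’s actual engine or API: when an input cell changes, only the values downstream of it are recalculated.

```python
# A generic sketch of incremental recomputation: when an input "cell" changes,
# only the values that depend on it are recalculated. Illustrative only; this
# is not Pathway's engine or API.

class Cell:
    def __init__(self, value=None, compute=None, deps=()):
        self.value = value            # current cached value
        self.compute = compute        # function of the dependency values
        self.deps = list(deps)        # upstream cells this cell reads
        self.dependents = []          # downstream cells to notify on change
        for dep in self.deps:
            dep.dependents.append(self)

    def set(self, value):
        """Update an input cell and propagate only to its dependents."""
        self.value = value
        for cell in self.dependents:
            cell.recompute()

    def recompute(self):
        self.value = self.compute(*[d.value for d in self.deps])
        for cell in self.dependents:
            cell.recompute()


# total depends on price and quantity; tax depends only on total
price = Cell(10.0)
quantity = Cell(3)
total = Cell(compute=lambda p, q: p * q, deps=(price, quantity))
total.recompute()
tax = Cell(compute=lambda t: t * 0.2, deps=(total,))
tax.recompute()

price.set(12.0)                 # only total and tax are recomputed
print(total.value, tax.value)   # 36.0 7.2
```

A production streaming engine applies the same principle at far greater scale, but the idea of reprocessing only what changed carries over.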

One of the startup’s existing clients, German logistics specialist DB Schenker, reduced the time-to-market of anomaly-detection analytics projects from three months to one hour. Meanwhile, French postal services company La Poste saw a fleet CAPEX reduction of 16%.

‘Lingua franca’ for developers

Polish-French duo Stamirowska and Claire Nouet, the company’s COO, founded Pathway in 2020. Thus far, the startup has raised $4.5mn (approx. €4mn) in a pre-seed round in December last year, and counts 20+ employees across Europe and North America.

The female-led deep tech startup is hoping for its system to become a “lingua franca” of all data pipelines (stream, batch, and generative AI). Beyond cutting costs for clients, it says it is looking to democratise the ability for developers to design streaming workflows, which have typically required a specialist skill set. 

Greek AI shipping startup acquired by Japanese automation giant

Greek shipping software startup DeepSea Technologies has sold a majority share to Japan’s automation giant Nabtesco for an undisclosed amount. 

DeepSea will continue to develop the company’s fuel optimisation platforms that reduce emissions (and cut costs) of fossil-based maritime fleets, while also becoming a “centre of excellence for AI research and product development.” Furthermore, the Athens-based startup will support Nabtesco Marine Control Systems in its quest for scalable semi-autonomous shipping. 

The company will continue to function (fittingly enough) autonomously, and carry on work on its two platforms — Cassandra and Pythia — and on “the broader digital transformation of the maritime industry.” 

Cassandra is a vessel monitoring and optimisation platform that allows customers to see emissions for a specific vessel and across an entire fleet, while also understanding how each component of the ship contributes to its performance. In addition, the tool offers notifications in real-time when something requires attention, such as fuel waste and maintenance requirements. 

Meanwhile, Pythia is a world-first weather routing platform, tailored to the exact performance of a specific ship. It comes up with tailor-made routes, speed and trim policies, assessing overall cost and CO2 emissions, while providing minute-by-minute updates on conditions. 

The company claims that it is possible to unlock energy efficiency improvements of up to 10% across almost any fleet in 12 months, using its optimisation technologies. 

Best of both worlds

DeepSea was founded in 2017 and has previously raised €8mn, €5mn of which came from Nabtesco Technology Ventures in 2021. The company has offices in Athens, London, and Rotterdam, and employs over 70 specialised engineers, most of them in AI and software development. The two co-founders of DeepSea, Dr. Konstantinos Kyriakopoulos and Roberto Coustas, will continue in their roles as CEO and President, respectively.

“The deepening of our existing partnership with Nabtesco unlocks even greater potential for our technology and approach, and will be key to unlocking the next wave of innovation for our customers,” Kyriakopoulos stated when announcing the news last week.



“It’s truly the best of both worlds: DeepSea will maintain its startup culture and focus on disruptive technology, whilst harnessing all the expertise and support of a global powerhouse.”

Staying on top of CO2 emissions will become increasingly important for shipping companies worldwide. Not only from a “the world is burning, let’s get our act together” kind of perspective, but also from a business and regulatory point of view.

Maritime carbon emissions regulations

According to the International Energy Agency (IEA), in 2022, maritime shipping accounted for about 2% of energy-related global CO2 emissions. While there is no legally binding agreement holding the industry to emission reduction targets, the International Maritime Organisation (IMO), a specialised agency of the UN, has adopted measures to reduce emissions of greenhouse gases from international shipping. 

As stated in the latest version of the IMO’s GHG strategy from July 2023, it is now targeting net-zero emissions from international shipping by or around 2050. Member states have also agreed to “indicative checkpoints”: reducing total emissions by at least 20% (striving for 30%) by 2030, and by at least 70% (striving for 80%) by 2040, compared to 2008 levels.

From 2024, shipping will also be included in the EU emissions trading scheme (ETS), which means that every kilogram of CO2 will be of financial — not to mention planetary — importance.

EU rules on AI must do more to protect human rights, NGOs warn

A group of 150 NGOs, including Human Rights Watch, Amnesty International, Transparency International, and Algorithm Watch, has signed a statement addressed to the European Union. In it, they entreat the bloc not only to maintain but to enhance human rights protection when adopting the AI Act.

Between the apocalypse-by-algorithm and the cancer-free utopia that different camps say the technology could bring lies a whole spectrum of pitfalls to avoid on the way to responsible deployment of AI.

As Altman, Musk, Zuckerberg, et al., dive head first into the black box, legislation aiming to at least curb their enthusiasm is on the way. The European Union’s proposed law on artificial intelligence — the AI Act — is the first of its kind by any major regulatory body. Two different camps are claiming that it is either a) crippling Europe’s tech sovereignty or b) not going far enough in curtailing dangerous deployment of AI. 

Transparency and redress

The signatories of Wednesday’s collective statement warn that, “Without strong regulation, companies and governments will continue to use AI systems that exacerbate mass surveillance, structural discrimination, centralised power of large technology companies, unaccountable public decision-making, and environmental damage.” 

This is no “AI poses risk of extinction” one-liner statement. It includes specific segments of the Act the writers feel must be kept or enhanced. For instance, a “framework of accountability, transparency, accessibility, and redress,” must include the obligation of AI deployers to publish fundamental rights impact assessments, register use in a publicly accessible database, and ensure people affected by AI-made decisions have the right to be informed. 

The NGOs are also taking a strong stance against AI-based public surveillance (such as the one deployed during the coronation of King Charles). They are calling for a full ban on “real-time and post remote biometric identification in publicly accessible spaces, by all actors, without exception.” They also ask that the EU prohibit AI in predictive and profiling systems in law enforcement, as well as migration contexts and emotional recognition systems. 

In addition, the letter writers urge lawmakers not to “give into lobbying efforts of big tech companies to circumvent regulation for financial interest,” and uphold an objective process to determine which systems will be classified as high-risk. 

The proposed act will divide AI systems into four tiers, depending on the level of risk they pose to health and safety or fundamental rights. The tiers are: unacceptable, high, limited, and minimal. 

High risk vs. general purpose AI

Applications deemed unacceptable include social scoring systems used by governments, whereas systems used for things like spam filters or video games would be considered minimal risk.

Under the proposed legislation, the EU will allow high-risk systems (for instance those used for medical equipment or autonomous vehicles), but deployers must adhere to strict rules regarding testing, data collection documentation, and accountability frameworks. 

The original proposal did not contain any reference to general purpose or generative AI. However, following the meteoric rise of ChatGPT last year, the EU approved last minute amendments to include an additional section. 

Business leaders have been hard at work over the past few months trying to influence the EU to water down the proposed text. They have been particularly focused on what should be classified as high-risk AI, a designation that comes with much higher compliance costs. Some, such as OpenAI’s Sam Altman, went on a personal charm offensive (throwing a threat or two into the mix).

Others, specifically more than 160 executives from major companies around the world (including Meta, Renault, and Heineken), have also sent a letter to the Commission. In it, they warned that the draft legislation would “jeopardise Europe’s competitiveness and technological sovereignty.”

The European Parliament adopted its negotiating position on the AI Act on June 14, and trilogue negotiations have now begun. These entail discussions between the Parliament, the Commission, and the Council before the final text is adopted.

With the law set to establish a global precedent (albeit hopefully one capable of evolving as the technology does), Brussels is, in all likelihood, currently abuzz with solicitous advocates — on behalf of all interested parties. 

Europe must act against AI-written reviews before it’s too late

Parts of modern life are inescapable. We all use mapping software for directions, check the news on our phones, and read online reviews of products before buying them.

Technology didn’t create these things, but what it has done is democratise them, making them easier to access or add to. Take online reviews. Nowadays, people can share their honest opinions about products and services in a way that, in times gone by, would’ve been impossible.

Yet, what the internet giveth, it can, uh, taketh away too?

It didn’t take long for nefarious actors to realise they could exploit this newfound ability to flood the market with fake reviews, creating an entirely new industry along the way.

In recent years, much of the discussion around fake reviews has dissipated, but now? They’re back with a vengeance — and it’s all because of AI.

The ascension of large language models (LLMs) like ChatGPT means we’re entering a new era of fake reviews, and governments in Europe and the rest of the world need to act before it’s too late.

AI-written reviews? Who cares?

As pithy as that sounds, it’s a valid question. Fake reviews have been part of online life for almost as long as the internet has existed. Will things really change if it’s sophisticated machines writing them instead of humans?

Spoiler: yes. Yes it will.

The key differentiator is scale. Previously, text-generating software was relatively unsophisticated. What it created was often sloppy and vague, meaning the public could immediately see it was untrustworthy, crafted by a dumb computer rather than a slightly less dumb person.

This meant that for machine-written fake reviews to be successful and trick people, other humans had to be involved with the text. The rise of LLMs and AI means that’s no longer the case.

Using ChatGPT, almost anyone can produce hundreds of fake reviews that, to all intents and purposes, read like they could be written by a real person.

But, again, so what? More fake reviews? Who cares? I put this to Kunal Purohit, Chief Digital Services Officer at Tech Mahindra, an IT consulting firm.

He tells me that “reviews are essential for businesses of any size,” as they help them “build brand recognition and trust with potential customers or prospects.”

This is increasingly important in the modern world, as the competitiveness of the business sector is causing customers to become more aware and demanding of companies.

Now that user experience is a core selling point — and brands prioritise this aspect of their business — Purohit says that bad reviews can shatter organisations’ abilities to do business effectively.

To put that another way, fake reviews aren’t just something that can convince you to buy a well-reviewed book that, in reality, is a bit boring. They can be used for both negative and positive reasons, and, when levelled at a company, can seriously impact that business’ reputation and ability to work.

This is why we — and the EU — must take computer-generated reviews seriously.

But what’s actually going on out there?

At this point, much of the discussion is academic. Yes, we’re aware that AI-written reviews could be a problem, but are they? What’s actually happening?

Purohit tells me that, already, “AI-powered chatbots are being used to create fake reviews on marketplace products.” Despite the platforms’ best efforts, they’ve become inundated with computer-generated reviews.

This is confirmed by Keith Nealon, the CEO of Bazaarvoice, a company that helps retailers show user-generated content on their site. He says he’s seen how “generative AI has recently been used to write fake product reviews,” with the goal being to “increase the volume of reviews for a product with the intent to drive greater conversion.”

AI-written reviews are gaining momentum, but, friends, this is just the beginning.

Long, hard years are on the horizon

The trust we have in reviews is about to be shattered.

Nealon from Bazaarvoice says the use of AI at scale could have “serious implications for the future of online shopping,” especially if we reach a situation where “shoppers can’t trust whether a product review is authentic.”

The temptation to use computer-generated reviews on the business side of things will also only increase.

“We all want our apps to be at the top of the rankings, and we all know one way to get this is through user engagement with reviews,” Simon Bain — CEO of OmniIndex, an encrypted data platform — tells me. “If there’s the option to mass produce these quickly with AI, then some companies are going to take that route, just as some already do with click farms for other forms of user engagement.”

He continues, saying that while computer-written reviews are bad on their own, the fact that this methodology becomes an extra tool for click farms is even worse. Bain foresees a world where AI-generated text can “be combined with other activities like click fraud and mass-produced in a professional way very cheaply.”

What this means is rather than AI-written reviews being a standalone problem, they have the potential to become a huge cog in an even bigger misinformation machine. This could lead to trust in all aspects of online life being eroded.

So… can anything be done?

Hitting back against AI-written reviews

There were two common themes across all the experts I spoke with regarding fighting computer-generated reviews. The first was that it’s going to be tough. And the second? We’re going to need artificial intelligence to fight against… artificial intelligence.

“It can be incredibly difficult to spot AI-written content — especially if it is being produced by professionals,” Bain says. He believes we need to crack down on this practice in the same way we’ve been doing so for similar fraudulent activities: with AI.

According to Bain, this would work by analysing huge pools of data around app use and engagement, using tactics like “pattern recognition, natural language processing, and machine learning” to spot fraudulent content.
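Bain doesn’t detail the tooling, but as a rough illustration of the machine-learning piece, a minimal text-classification sketch could look like the following. The tiny labelled dataset is invented purely for illustration; a real system would lean heavily on behavioural signals (accounts, timing, click patterns) alongside the text itself.

```python
# A minimal sketch of machine-learning-based fake-review detection:
# a TF-IDF text representation feeding a logistic-regression classifier.
# The toy labelled data below is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "Absolutely perfect product, best purchase ever, five stars, highly recommend!",
    "Amazing amazing amazing, changed my life, everyone must buy this now!",
    "Decent kettle, though the lid feels flimsy and it takes a while to boil.",
    "Arrived two days late. Works fine, but the manual is confusingly translated.",
]
labels = [1, 1, 0, 0]  # 1 = suspected fake, 0 = genuine (toy labels)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),  # word and word-pair features
    LogisticRegression(),
)
model.fit(reviews, labels)

# Score a new review: the probability this toy model assigns to "fake".
new_review = ["Incredible, flawless, a must-buy for every single person alive!"]
print(model.predict_proba(new_review)[0][1])
```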

Purohit and Nealon agree, each of them pointing, in our conversations, to the potential AI has to solve its own problems.

Despite this, it’s Chelsea Ashbrook — Senior Manager, Corporate Digital Experience at Genentech, a biotechnology company — who summed it up best: “Looking into the future, though, we might need to develop new tools and techniques. It is what it is; AI is getting smarter, and so must we.”

The government must get involved

At this stage, we encounter another problem: yes, AI tools can combat computer-generated reviews, but how does this actually work? What can be done?

And this is where governing bodies like the EU come in. 

I put this to Ashbrook: “They certainly have their work cut out for them,” she says. Ashbrook then suggests one way governments can combat this upcoming plague may be to “establish guidelines that necessitate transparency about the origin of reviews.”

Bain from OmniIndex, on the other hand, mentions the importance of ensuring that existing laws and regulations around elements like “fraud, and cybercrime keep up to date with how [AI] is being used.”

Purohit from Tech Mahindra believes we’re already seeing many positive initiatives and policies from governments and key AI industry professionals around the responsible use of the tech. Despite this, “there are several ways official bodies such as the EU … can prevent [it] from getting out of hand.”

He points towards “increasing research and development, [and] strengthening regulatory frameworks” as two important elements of this strategy.

Beyond that, Purohit believes governments should update consumer protection laws to combat the dangers posed by AI-generated content. This could cover a range of things, including “enforcing penalties for the misuse or manipulation of AI-generated reviews” or “holding platforms accountable for providing accurate and reliable information to consumers.”

There you go, Europe, feel free to use those ideas to get the ball rolling.

AI-written reviews: Here to stay

Want to read the least shocking thing you’ve seen in some time? AI is going to change the world.

Despite that, the topics the press tends to be obsessed with are things like the singularity or a potential AI-driven apocalypse. Which, to be honest, sound far sexier — and drive far more clicks.

But in my mind, it’s the smaller things like AI-written reviews that will have the most impact on our immediate lives.

Fundamentally, society is based on trust, on the idea that there are people around us who share a vague set of similar values. AI being used in these small ways has the potential to undercut that. If we can no longer believe what we see, hear, or read, then we no longer trust anything. 

And if that happens? Well, it won’t take long until things start crumbling around us.

This is why governmental bodies like the EU can’t adopt a “wait-and-see” approach to regulating areas as seemingly inconsequential as AI-written reviews. There must be regulation — and it must be fast. Because if we delay too long, it may already be too late.

Spotify CEO’s startup for AI-powered preventive healthcare raises €60M

Spotify founder and CEO Daniel Ek’s preventive healthcare startup just received a very strong vote of confidence from venture capitalists. Earlier today, Neko Health announced it had raised €60mn in a round led by Lakestar and backed by Atomico and General Catalyst.

The funds will be put towards expanding the concept outside of the company’s native Sweden, where it currently operates a private body-scan clinic. 

Neko Health, named after the Japanese word for cat, was founded in 2018 by Ek and Hjalmar Nilsonne. After much secrecy, its first clinic opened in February this year in Stockholm. Within two hours, it was fully booked out and 5,000 people were placed on a waiting list. 

“I’ve spent more than 10 years exploring the untapped potential of healthcare innovation,” Ek said in a statement. “We are dedicated to building a healthcare system that focuses on prevention and patient care, aiming to serve not just our generation, but those that follow.”

3D body scans

At the clinic, people go through a 3D full-body scan in a minimalist booth that would not look out of place in an episode of Star Trek, fitted with dozens of sensors and powered by, you guessed it, artificial intelligence. In particular, algorithms can immediately detect potential skin conditions and risk of cardiovascular disease. 

Patients (are they still called that in preventive care?) also go through laser scans and an ECG, which altogether takes between 10 and 20 minutes. They may not be met by Bones himself after the examinations, but their results are looked over and explained by an actual human doctor.

“We have our own nurses, doctors and specialists,” Nilsonne told Bloomberg. “We have dermatologists employed just to review the skin images. There is a doctor on site who can make qualified medical judgments for anything that comes up.”

The price for a Neko Health assessment is €250, and the company has performed over 1,000 scans since launch. Close to 80% of customers have reportedly prepaid for follow-up scans after a year. 

AI does indeed hold great potential for disease prevention and early detection — but only if the results are interpretable. It is unclear how much insight Neko Health physicians have into how the algorithm makes its predictions (as in, which factors contribute to the risk of cardiovascular disease, so the patient, or client, can be better informed about what measures to take).

Purpose and ambition

One of the company’s backers is Skype co-founder and founder of Atomico, Niklas Zennström, who will also be taking a seat on the board. He sees enormous potential in the new venture from the man who essentially changed how we consume music. 

“Neko Health is exactly the type of mission that gets us excited at Atomico. It’s that rare combination of a firm with a purpose and outsized ambition, and founders with a world class track record,” Zennström said. “They’re solving a problem we can all relate to, with the potential to fundamentally transform global healthcare forever.” 

AI to Help Everyone Unleash Their Inner Creator With Masterpiece X

Empowering independent creators is an often-touted benefit of AI in XR. We’ve seen examples from professional development studios with little to no public offering, but precious few examples of AI-powered authoring tools for individual users. Masterpiece Studio is adding one more, “Masterpiece X”, to help everyone “realize and elevate more of their creative potential.”

“A New Form of Literacy”

Masterpiece Studio doesn’t just want to release an app – they want to start a movement. The team believes that “everyone is a creator” but the modern means of creation are inaccessible to the average person – and that AI is the solution.

“As our world increasingly continues to become more digital, learning how to create becomes a crucial skill: a new form of literacy,” says a release shared with ARPost.

Masterpiece Studio has already been in the business of 3D asset generation for over eight years now. The company took home the 2021 Auggie Award for Best Creator and Authoring Tool, and is a member of the Khronos Group and the Metaverse Standards Forum.

So, what’s the news? A new AI-powered asset generation platform called Masterpiece X, currently available as a beta application through a partnership with Meta.

The Early Days of Masterpiece X

Masterpiece X is already available on the Quest 2, and it’s already useful if you have your own 3D assets to import. There’s a free asset library, but it only contains sample content at the moment. The big feature of the app – creating 3D models from text prompts – is still rolling out and will (hopefully) result in a more highly populated asset library.

“Please keep in mind that this is an ‘early release’ phase of the Masterpiece X platform. Some features are still in testing with select partners,” reads the release.

That doesn’t mean that it’s too early to bother getting the app. It’s already a powerful tool. Creators that download and master the app now will be better prepared to unlock its full potential when it’s ready.

Creating an account isn’t a lengthy process, but it’s a bit clunky – it can’t be done entirely online or entirely in-app, which means switching between a desktop and the VR headset to enter URLs and passwords. After that, you can take a brief tutorial or experiment on your own.

The app already incorporates a number of powerful tools into the entirely spatial workflow. Getting used to the controls might take some work, though people who already have experience with VR art tools might have a leg up. Users can choose a beginner menu with a cleaner look and fewer tools, or an expert menu with more options.

So far, tools allow users to change the size, shape, color, and texture of assets. Some of these are simple objects, while others come with rigged skeletons that can take on a variety of animations.

I Had a Dream…

For someone like me who isn’t very well-versed in 3D asset editing, now is the moment to spend time in Masterpiece X – honing my skills until the day that asset creation on the platform is streamlined by AI. Maybe then I can finally make a skateboarding Gumby-shaped David Bowie to star in an immersive music video for “Twinkle Song” by Miley Cyrus. Maybe.

The Intersections of Artificial Intelligence and Extended Reality

It seems like just yesterday it was the AR this, VR that, metaverse, metaverse, metaverse. Now all anyone can talk about is artificial intelligence. Is that a bad sign for XR? Some people seem to think so. However, people in the XR industry understand that it’s not a competition.

In fact, artificial intelligence has a huge role to play in building and experiencing XR content – and it’s been part of high-level metaverse discussions for a very long time. I’ve never claimed to be a metaverse expert and I’m not about to claim to be an AI expert, so I’ve been talking to the people building these technologies to learn more about how they help each other.

The Types of Artificial Intelligence in Extended Realities

For the sake of this article, there are three main branches of artificial intelligence: computer vision, generative AI, and large language models. AI is more complicated than this, but it helps get us started talking about how it relates to XR.

Computer Vision

In XR, computer vision helps apps recognize and understand elements in the environment. This lets apps place virtual elements in the environment and sometimes have those elements react to it. Computer vision is also increasingly being used to streamline the creation of digital twins of physical items or locations.

Niantic is one of XR’s big world-builders using computer vision and scene understanding to realistically augment the world. 8th Wall, an acquisition that runs its own projects but also serves as Niantic’s WebXR division, uses some AI itself and is compatible with other AI tools, as teams showcased in a recent Innovation Lab hackathon.

“During the sky effects challenge in March, we saw some really interesting integrations of sky effects with generative AI because that was the shiny object at the time,” Caitlin Lacey, Niantic’s Senior Director of Product Marketing told ARPost in a recent interview. “We saw project after project take that spin and we never really saw that coming.”

The winner used generative AI to create the environment that replaced the sky through a recent tool developed by 8th Wall. While some see artificial intelligence (that “shiny object”) as taking the wind out of immersive tech’s sails, Lacey sees this as an evolution rather than a distraction.

“I don’t think it’s one or the other. I think they complement each other,” said Lacey. “I like to call them the peanut butter and jelly of the internet.”

Generative AI

Generative AI takes a prompt and turns it into some form of media, whether an image, a short video, or even a 3D asset. Generative AI is often used in VR experiences to create “skyboxes” – the flat image over the virtual landscape where players have their actual interactions. However, as AI gets stronger, it is increasingly used to create virtual assets and environments themselves.
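To make the prompt-to-skybox idea concrete, here is a hedged sketch using an off-the-shelf text-to-image diffusion model via the diffusers library. The checkpoint name, prompt, and resolution are illustrative assumptions, not what any particular studio uses, and proper 360° skyboxes need a dedicated equirectangular pipeline rather than the flat strip produced here.

```python
# Illustrative sketch: generate a wide "sky strip" from a text prompt with an
# off-the-shelf diffusion model. Requires a CUDA GPU; the checkpoint is just
# a well-known public example, not any studio's actual pipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # example public checkpoint (assumption)
    torch_dtype=torch.float16,
).to("cuda")

# A wide aspect ratio approximates a panoramic strip that can be wrapped
# around a scene; real skybox tools output proper 360° equirectangular images.
image = pipe(
    "a dramatic alien sky at sunset, towering clouds, two moons, panoramic",
    width=1024,
    height=512,
).images[0]

image.save("skybox_strip.png")          # imported into the engine as a sky texture
```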

Artificial Intelligence and Professional Content Creation

Talespin makes immersive XR experiences for training soft skills in the workplace. The company has been using artificial intelligence internally for a while now and recently rolled out a whole AI-powered authoring tool for their clients and customers.

A release shared with ARPost calls the platform “an orchestrator of several AI technologies behind the scenes.” That includes developing generative AI tools for character and world building, but it also includes work with other kinds of artificial intelligence that we’ll explore further in the article, like LLMs.

“One of the problems we’ve all had in the XR community is that there’s a very small contingent of people who have the interest and the know-how and the time to create these experiences, so this massive opportunity is funneled into a very narrow pipeline,” Talespin CEO Kyle Jackson told ARPost. “Internally, we’ve seen a 95-97% reduction in time to create [with AI tools].”

Talespin isn’t introducing these tools to put themselves out of business. On the contrary, Jackson said that his team is able to be even more involved in helping companies workshop their experiences because his team is spending less time building the experiences themselves. Jackson further said this is only one example of a shift happening to more and more jobs.

“What should we be doing to make ourselves more valuable as these things shift? … It’s really about metacognition,” said Jackson. “Our place flipped from needing to know the answer to needing to know the question.”

Artificial Intelligence and Individual Creators

DEVAR launched MyWebAR in 2021 as a no-code authoring tool for WebAR experiences. In the spring of 2023, that platform became more powerful with a neural network for AR object creation.

In creating a 3D asset from a prompt, the network determines the necessary polygon count and replicates the texture. The resulting 3D asset can exist in AR experiences and serve as a marker itself for second-layer experiences.

“A designer today is someone who can not just draw, but describe. Today, it’s the same in XR,” DEVAR founder and CEO Anna Belova told ARPost. “Our goal is to make this available to everyone … you just need to open your imagination.”

Blurring the Lines

“From strictly the making a world aspect, AI takes on a lot of the work,” Mirrorscape CEO Grant Anderson told ARPost. “Making all of these models and environments takes a lot of time and money, so AI is a magic bullet.”

Mirrorscape is looking to “bring your tabletop game to life with immersive 3D augmented reality.” Of course, much of the beauty of tabletop games comes from the fact that players are creating their own worlds and characters as they go along. While the roleplaying element has been reproduced by other platforms, Mirrorscape is bringing in the individual creativity through AI.

“We’re all about user-created content, and I think in the end AI is really going to revolutionize that,” said Anderson. “It’s going to blur the lines around what a game publisher is.”

Even for those who are professional builders but who might be independent or just starting out, artificial intelligence, whether to create assets or just for ideation, can help level the playing field. That was a theme of a recent Zapworks workshop “Can AI Unlock Your Creating Potential? Augmenting Reality With AI Tools.”

“AI is now giving individuals like me and all of you sort of superpowers to compete with collectives,” Zappar executive creative director Andre Assalino said during the workshop. “If I was a one-man band, if I was starting off with my own little design firm or whatever, if it’s just me freelancing, I now will be able to do so much more than I could five years ago.”

NeRFs

Neural Radiance Fields (NeRFs) weren’t included in the introduction because they can be seen as a combination of generative AI and computer vision. A NeRF starts out with a special kind of neural network called a multilayer perceptron (MLP). A “neural network” is any artificial intelligence loosely modelled on the human brain, and an MLP is … well, look at it this way:

If you’ve ever taken an engineering course, or even a high school shop class, you’ve been introduced to drafting. Technical drawings represent a 3D structure as a series of 2D images, each showing different angles of the 3D structure. Over time, you can get pretty good at visualizing the complete structure from these flat images. An MLP can do the same thing.

The difference is the output. When a human does this, the output is a thought – a spatial understanding of the object in your mind’s eye. When an MLP does this, the output is a NeRF – a 3D rendering generated from the 2D images.
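To make that concrete, below is a heavily simplified sketch of the network at the core of a NeRF: an MLP that maps a 3D point to a colour and a density. Real implementations add positional encoding, viewing directions, and a differentiable volume renderer that composites samples along camera rays into pixels; this sketch only shows the bare mapping.

```python
# A heavily simplified NeRF-style MLP: 3D point in, colour + density out.
# Positional encoding, view direction, and volume rendering are omitted.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),   # input: an (x, y, z) point in space
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),              # output: RGB colour + density
        )

    def forward(self, xyz):
        out = self.net(xyz)
        rgb = torch.sigmoid(out[..., :3])      # colour in [0, 1]
        density = torch.relu(out[..., 3:])     # how "solid" space is at this point
        return rgb, density

# Training pairs this network with a differentiable volume renderer: rays are
# cast through the 2D photos, points are sampled along each ray, and the
# predicted colours/densities are composited and compared to the real pixels.
model = TinyNeRF()
points = torch.rand(1024, 3)                   # random 3D sample points
colours, densities = model(points)
print(colours.shape, densities.shape)          # torch.Size([1024, 3]) torch.Size([1024, 1])
```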

Early on, this meant feeding countless images into the MLP. However, in the summer of 2022, Apple and the University of British Columbia developed a way to do it with one video. Their approach was specifically interested in generating 3D models of people from video clips for use in AR applications.

Whether a NeRF recreates a human or an object, it’s quickly becoming the fastest and easiest way to make digital twins. Of course, the only downside is that NeRF can only create digital models of things that already exist in the physical world.

Digital Twins and Simulation

Digital twins can be built with or without artificial intelligence. However, some use cases of digital twins are powered by AI. These include simulations like optimization and disaster readiness. For example, a digital twin of a real campus can be created, but then modified on a computer to maximize production or minimize risk in different simulated scenarios.

“You can do things like scan in areas of a refinery, but then create optimized versions of that refinery … and have different simulations of things happening,” MeetKai co-founder and executive chairwoman Weili Dai told ARPost in a recent interview.

A recent suite of authoring tools launched by the company (which started in AI before branching into XR solutions) includes AI-powered tools for creating virtual environments from the real world. These can be left as exact digital twins, or they can be edited to streamline the production of more fantastic virtual worlds by providing a foundation built in reality.

Large Language Models

Large language models take in language prompts and return language responses. This is one of the AI interactions that run largely under the hood so that, ideally, users don’t realize they’re interacting with AI. For example, large language models could be the future of NPC interactions and “non-human agents” that help us navigate vast virtual worlds.

“In these virtual world environments, people are often more comfortable talking to virtual agents,” Inworld AI CEO Ilya Gelfenbeyn told ARPost in a recent interview. “In many cases, they are acting in some service roles and they are preferable [to human agents].”

Inworld AI makes brains that can animate Ready Player Me avatars in virtual worlds. Creators get to decide what the artificial intelligence knows – or what information it can access from the web – and what its personality is like as it walks and talks its way through the virtual landscape.

“You basically are teaching an actor how it is supposed to behave,” Inworld CPO Kylan Gibbs told ARPost.
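“Teaching an actor” maps quite naturally onto a system prompt. Below is a generic sketch of that pattern, not Inworld’s actual product or API: `llm_complete` is a hypothetical stand-in for whichever chat-style model is used, and the character details are invented.

```python
# A generic sketch of an LLM-driven NPC, not Inworld's actual product or API.
# `llm_complete` is a hypothetical stand-in for a call to any chat-style model.

def llm_complete(messages):
    """Hypothetical placeholder: send chat messages to an LLM and return its reply."""
    raise NotImplementedError("wire this up to a real model API")

# The creator-defined "brain": personality plus the knowledge the NPC may draw on.
SYSTEM_PROMPT = (
    "You are Mira, a cheerful innkeeper in the port town of Saltmere. "
    "You know local rumours and the ferry timetable, and nothing about the "
    "world outside the game. Stay in character; keep replies under three sentences."
)

def npc_reply(history, player_line):
    """Append the player's line and ask the model for the NPC's next line."""
    history.append({"role": "user", "content": player_line})
    reply = llm_complete([{"role": "system", "content": SYSTEM_PROMPT}] + history)
    history.append({"role": "assistant", "content": reply})
    return reply

conversation = []
# npc_reply(conversation, "Any news from the harbour?") would return Mira's next
# line once llm_complete is backed by a real model.
```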

Large language models are also used by developers to speed up back-end processes like generating code.

How XR Gives Back

So far, we’ve talked about ways in which artificial intelligence makes XR experiences better. However, the opposite is also true, with XR helping to strengthen AI for other uses and applications.

Evolving AI

We’ve already seen that some approaches to artificial intelligence are modeled after the human brain. We know that the human brain developed essentially through trial and error as it rose to meet the needs of our early ancestors. So, what if virtual brains had the same opportunity?

Martine Rothblatt, PhD, describes that very opportunity in the excellent book “Virtually Human: The Promise – and the Peril – of Digital Immortality”:

“[Academics] have even programmed elements of autonomy and empathy into computers. They even create artificial software worlds in which they attempt to mimic natural selection. In these artificial worlds, software structures compete for resources, undergo mutations, and evolve. Experimenters are hopeful that consciousness will evolve in their software as it did in biology, with vastly greater speed.”

Feeding AI

Like any emerging technology, people’s expectations of artificial intelligence can grow faster than AI’s actual capabilities. AI learns by having data entered into it. Lots of data.

For some applications, there is a lot of extant data for artificial intelligence to learn from. But, sometimes, the answers that people want from AI don’t exist yet as data from the physical world.

“One sort of major issue of training AI is the lack of data,” Treble Technologies CEO Finnur Pind told ARPost in a recent interview.

Treble Technologies works on creating realistic sound in virtual environments. To train an artificial intelligence to work with sound, it needs audio files. Historically, these were painstakingly sampled, with different things causing different sounds in different environments.

Usually, during the early design phases, an architect or automotive designer will approach Treble to predict what audio will sound like in a future space. However, Treble can also use its software to generate specific sounds in specific environments to train artificial intelligence without all of the time- and labor-intensive sampling. Pind calls this “synthetic data generation.”

The AI-XR Relationship Is “and” Not “or”

Holding up artificial intelligence as the new technology on the block that somehow takes away from XR is an interesting narrative. However, experts are in agreement that these two emerging technologies reinforce each other – they don’t compete. XR helps AI grow in new and fantastic ways, while AI makes XR tools more powerful and more accessible. There’s room for both.

Mistral AI secures €105M in Europe’s largest-ever seed round

Story by Linnea Ahlgren

The artificial intelligence hype shows no sign of fading just yet, and investors are practically falling over themselves to fund the next big thing in AI. Yesterday, Paris-based startup Mistral AI announced it had secured €105mn in what is reportedly Europe’s largest-ever seed round. 

Mistral AI was founded only four weeks ago, by a trio of AI researchers. Arthur Mensch, the company’s CEO, was formerly employed by Google’s DeepMind. His co-founders, Timothée Lacroix (CTO) and Guillaume Lample (Chief Science Officer), previously worked for Meta. 

The company has yet to develop its first product. However, on a mission to “make AI useful,” it plans to launch a large language model (LLM) similar to the system behind OpenAI’s ChatGPT in early 2024. 

A large part of the funds raised will be used towards renting the computing power needed to train it. The idea is to only use publicly available data to avoid the legal issues and copyright backlash faced by others in the industry.

While Mistral hopes to take on OpenAI with actual open-sourced models and data sets, it is setting itself apart from the Microsoft-backed step-change initiator by targeting enterprises instead of consumers. The company says its goal is to help business clients improve processes around R&D, customer care, and marketing, as well as giving them the tools to build new products with AI.

Vision coupled with hands-on experience

The funding round is led by Lightspeed Venture Partners. The VC’s partner, Antoine Moyroud, says that Lightspeed has had the opportunity to meet with several talented researchers-turned-founders in AI, but few had a vision beyond the technical field. 

Mensch, Lacroix and Lample, on the other hand, according to Moyroud, are “part of a group of select few globally who have both the technical understanding required to build out their own vision along with the hands-on experience of training and operating large language models at scale.”

Generally, the American VC says it believes that Europe has “a decisive role” to play in the AI field. Just last September, the firm opened new offices in London, Berlin, and Paris, and says it is looking to partner with more European founders with the same ambition as Mistral’s.



“Our investment in Mistral, and all our portfolio companies in Europe, are evidence of our firm conviction that a new generation of global players will emerge from this ecosystem,” Lightspeed said in a statement.  

JCDecaux Holding, Motier Ventures, La Famiglia, Headline, Exor Ventures, Sofina, First Minute Capital, and LocalGlobe also participated in the round, along with private investors including French billionaires Rodolphe Saadé and Xavier Niel, as well as former Google CEO Eric Schmidt and French investment bank BpiFrance.

Italy to launch €150M fund for AI startups

Story by Linnea Ahlgren

Italy is the latest country looking to quickly shore up domestic development of an AI ecosystem. As part of its Strategic Program for Artificial Intelligence, the government will “soon” launch a €150 million fund to support startups in the field, backed by development bank Cassa Depositi e Prestiti (CDP). 

As reported by Corriere Comunicazioni, Alessio Butti, Italy’s cabinet undersecretary in charge of technological innovation, relayed the news of the state-backed fund yesterday. While he didn’t provide specific details on the amount to be made available, government sources subsequently told Reuters the figure being discussed in Rome was in the vicinity of €150 million.

“Our goal is to increase the independence of Italian industry and cultivate our national capacity to develop skills and research in the sector,” Butti said. “This is why we are working with CDP on the creation of an investment fund for the most innovative startups, so that study, research, and programming on AI can be promoted in Italy.”

Navigating regulation and support

Indeed, the AI boom is here in earnest. Yesterday, Nvidia became the first chipmaker to hit $1 trillion in valuation. The boost to its stock followed a prediction of sales reaching $11 billion in Q2, off the back of the company’s chips powering OpenAI’s ChatGPT (which, coincidentally, got off on the wrong foot with Italy).

Those who do not yet have their hands in the (generative) AI pie are now racing to be part of the algorithm-driven gold rush of the 21st Century. 

While intent on regulatory oversight, governments are also, for various reasons, keen on supporting domestic developers in the field of artificial intelligence. Last month, the UK made £100 million in funding available for a task force to help build and adopt the “next generation of safe AI.” 

Italy is also looking to set up its own “ad hoc” task force. Butti stated, “In Italy we must update the strategy of the sector, and therefore the Department for Digital Transformation is working on the establishment of an authoritative group of Italian experts and scholars.”

Part of national AI strategy

Italy adopted the Strategic Program for Artificial Intelligence 2022-2024 in 2021 but, of course, the industry is evolving at breakneck speed. The strategy is a joint project between the ministries for university and research, economic development, and technological innovation and digital transition. Additionally, it is guided by a working group on the national strategy for AI. 

The program outlines 24 policies the government aims to implement over the course of the three years. Beyond measures to support the domestic development of AI, these include promoting STEM subjects and increasing the number of doctorates to attract international researchers. Furthermore, they target the creation of data infrastructure for public administration and specific support for startups working in GovTech and looking to solve critical problems in the public sector.

Google launches €10M social innovation AI fund for European entrepreneurs

Story by Linnea Ahlgren

In conjunction with CEO Sundar Pichai’s visit to Stockholm yesterday, Google announced the launch of the second Google.org Social Innovation Fund on AI to “help social enterprises solve some of Europe’s most pressing challenges.”

Through the fund, Google is making €10 million available, along with mentoring and support, for entrepreneurs from underserved backgrounds. The aim is to help them develop transformative AI solutions that specifically target problems they face on a daily basis.

The fund will provide capital via a grant to INCO for the expansion of Social Tides, an accelerator program funded by Google.org, which will provide cash support of up to $250,000 (€232,000).

In 2021, Google put up €20 million for European AI social innovation startups through the same mechanism. Among the beneficiaries at that time was The Newsroom in Portugal, which uses an AI-powered app to encourage a more contextualised reading experience to take people out of their bubble and reduce polarisation.

Mini-European tour ahead of AI Act

Of the money offered by the tech giant this time around, €1 million will be earmarked for nonprofits that are helping to strengthen and grow social entrepreneurship in Sweden.

During his brief stay, Pichai met with the country’s prime minister and visited the KTH Royal Institute of Technology to meet with students and professors.

Google CEO Sundar Pichai visited KTH and talked about artificial intelligence. He noted that it is okay to be afraid, as long as the fear is put to sensible use. https://t.co/imbtxxbSVn pic.twitter.com/oWal43dc2a

— KTH Royal Institute of Technology (@KTHuniversity) May 24, 2023

Sweden currently holds the six-month-long rotating Presidency of the European Union. Pichai’s visit to Stockholm preceded a trip to meet with European Commission deputy chief Vera Jourova and EU industry chief Thierry Breton on Wednesday. 

Breton is one of the drivers behind the EU’s much-anticipated AI Act, a world-first attempt at far-reaching AI regulation. One of the biggest sources of contention — and surely the subject of much lobbying from the industry — is whether so-called general purpose AI, such as the technology behind ChatGPT or Google’s Bard, should be considered “high-risk.”

Speaking to Swedish news outlet SVT on the day of his visit, Pichai stated that he believes that AI is indeed too important not to regulate, and to regulate well. “It is definitely going to involve governments, companies, academic universities, nonprofits, and other stakeholders,” Google’s top executive said. 

However, he may be doing some convincing of his own in Brussels, further adding, “These AI systems are going to be used for everything, from recommending a nearby coffee shop to potentially recommending a health treatment for you. As you can imagine, these are very different applications. So where we could get it wrong is to apply a high-risk assessment to all these use cases.” 

If Pichai is successful in convincing the Commission, then, just maybe, Bard will launch in Europe too.

Tech’s role in the quest for climate justice: What not to miss at TNW Conference

Story by Linnea Ahlgren

Award-winning innovators Caroline Lair and Lucia Gallardo will be speaking at TNW Conference, which takes place on June 15 & 16 in Amsterdam. If you want to experience the event (and say hi to our editorial team!), we’ve got something special for our loyal readers. Use the promo code READ-TNW-25 and get a 25% discount on your business pass for TNW Conference. See you in Amsterdam!

Social inequality and climate risk have become central to understanding what will drive innovation – and investment – for the future. On day two of TNW Conference, Caroline Lair, founder of startup and scaleup communities The Good AI and Women in AI, and Lucia Gallardo, founder and CEO of impact innovation “socio-technological experimentation lab” Emerge, will be on the Growth Quarters stage for a session titled “Technology-Driven Climate Justice.”

The climate crisis is itself the result of a deeply embedded and systemic exploitation of nature and people in the name of profit. Its impact is already being felt disproportionately over the world, with severe heat waves, droughts, and entire nations disappearing below sea level. What’s more, the people worst affected are those who have contributed little to the greenhouse gas emissions driving global warming.

Climate justice is the idea that climate change is not just an environmental but also a social justice issue, and aims to ensure that the transition to a low-carbon economy is equitable and benefits everyone. Lair and Gallardo will specifically speak about how technologies such as AI, blockchain, and Web3 can play a crucial role in addressing these issues. 

AI for good

Artificial intelligence can be applied in the quest for climate justice in several ways, given that it is implemented in a way that ensures transparency, accountability, and fairness. These include data analysis and prediction, discovering patterns and informing policies, as well as evaluating their effectiveness. 

It can also enhance climate modelling capabilities, crucial for developing adaptation strategies. Furthermore, AI-powered technologies can monitor, for instance, weather systems with real-time data and also optimise resource allocation and energy distribution.

Reimagining value

Emerge’s objective is to “reimagine impact innovation with regenerative monetisation models.” Regenerative finance goes beyond traditional models that focus on profit, taking into account broader social, environmental, and economic impacts. 

Blockchain technology can, for instance, offer transparency for transactions, ensuring that funds are indeed directed to regenerative investments. It can also tokenise regenerative assets such as renewable energy installations, sustainable agriculture initiatives, or ecosystem restoration projects, representing them as digital tokens and making them more accessible to a broader range of investors. 

Meanwhile, in the words of Gallardo, “Integrating crypto into existing ecological initiatives doesn’t automatically mean it is applied regenerative finance. We must be intentional about how we’re reimagining value.”

Reclaiming an equitable future

Why am I looking forward to this session? The theme of this year’s TNW Conference is “Reclaim The Future”. In all honesty, I belong to a generation that, while I hopefully have several decades more of on-earth experience ahead of me, will most likely not have to deal with full-on dystopian scenarios, battling to survive climate catastrophe.

I am also privileged in terms of geographical location and socioeconomic status not to have to worry about immediate drought and famine. (Flooding may be another matter, but as someone said when convincing me to move to Amsterdam – “wouldn’t you prefer to live in a place that is already used to keeping water out?”) 

However, this does not mean that we who enjoy such privileges get to simply shrug our shoulders and carry on indulging in business as usual. TNW has always been about the good technology can do in the world. And what is better than employing it in service of one of the greatest challenges of our time?