European contenders might have been a little late to join the generative AI investment party, but that does not mean they will not end up rivalling some of the earlier North American frontrunners. According to people familiar with the matter, Mistral AI, the French genAI seed-funding sensation, is just about to conclude the raising of about €450mn from investors.
Unlike Germany’s Aleph Alpha, which just raised a similar sum, most of the investors come from beyond the confines of the continent. The round is led by Silicon Valley VC firm Andreessen Horowitz, and also includes backing from Nvidia and Salesforce.
Sources close to the deal told Bloomberg that Andreessen Horowitz would invest €200mn in funding, whereas Nvidia and Salesforce would be down for €120mn in convertible debt, although this was still subject to change. If it goes through, this would value the Paris-based startup at nearly $2bn — less than a year after it was founded.
The key thing that sets Mistral apart is that it is specifically building smaller models that target the developer space. Speaking at the SLUSH conference in Helsinki last week, co-founder and CEO Arthur Mensch said this was exactly what separates the philosophy of the company from its competitors.
“You can start with a very big model with hundreds of billions of parameters — maybe it’s going to solve your task. But you could actually have something which is a hundred times smaller,” Mensch stated. “And when you make a production application that targets a lot of users, you want to make choices that lower the latency, lower the costs, and leverage the actual populated data that you may have. And this is something that I think is not the topic of our competitors — they’re really targeting multi-usage, very large models.” Mensch, who previously worked for Google DeepMind, added that this approach would also allow for strong differentiation through proprietary data, a key factor for actors to survive in the mature application market space.
Mistral AI and the reported investors have all declined to comment on the potential proceedings.
In a hiring freeze that CEO Sebastian Siemiatkowski attributes to the rise of AI, Swedish fintech unicorn Klarna is no longer recruiting staff beyond its engineering department.
“There will be a shrinking of the company,” Siemiatkowski told the Telegraph. “We’re not currently hiring at all, apart from engineers.”
The chief exec of the buy now, pay later app said that the productivity gains from using tools like ChatGPT meant the company now needs “fewer people to do the same thing.”
Klarna is not planning layoffs. But as people depart voluntarily, the CEO said he expects the size of the company to shrink over time. AI is “a threat to a lot of jobs” across the economy, he added.
The <3 of EU tech
The latest rumblings from the EU tech scene, a story from our wise ol’ founder Boris, and some questionable AI art. It’s free, every week, in your inbox. Sign up now!
The news comes as employees — from writers to artists — are becoming increasingly worried about the impact that AI could have on their livelihoods. Comments like these from the Klarna boss will no doubt exacerbate these anxieties.
Klarna isn’t the first to make such moves, and it surely won’t be the last. In May, the CEO of IBM told Bloomberg that the company was pausing hiring for roles that it believed could be replaced by AI in coming years. In April, meanwhile, Dropbox announced it was cutting its workforce by 16%, or 500 employees, blaming AI for forcing a shift in strategy.
Elon Musk, for one, thinks that AI will replace all jobs in the future. However, some believe AI won’t replace jobs, but simply make us more productive at what we already do, while a few think it will create more jobs.
Whatever your stance, the reality is that big tech companies across the world are implementing hiring freezes or laying off employees at a worrying rate — for reasons pertaining to AI or other matters.
Just today, Spotify, another Swedish tech giant, announced that it will lay off around 1,500 employees, or 17% of its workforce, to bring down costs.
With COP28 underway in Dubai making it again glaringly obvious just how little lawmakers are prepared to bend for the sake of future generations of Earthlings, the release of the first “green filter” generative AI search chatbot could not have been more timely.
Berlin-based Ecosia, the world’s largest not-for-profit search engine, hopes the launch of its new product will assist users in making better choices for the planet, and further differentiate its offerings from the “monolithic giants” of internet search.
Powered by OpenAI’s API, Ecosia’s chatbot has a “green answers” option. This triggers a layered green persona that provides users with more sustainable results and answers, suggesting train rides over air travel, say.
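Ecosia hasn’t published implementation details, but a “layered persona” on top of a chat API is typically achieved by prepending a system message when the option is toggled. A minimal sketch of that pattern, in which the persona wording and function names are illustrative assumptions, not Ecosia’s actual code:

```python
# Sketch of a "green answers" persona layered onto a chat-style API.
# The persona text and function name below are illustrative assumptions.

GREEN_PERSONA = (
    "You are a sustainability-minded assistant. When alternatives exist, "
    "prefer lower-carbon options (e.g. suggest train travel over flying) "
    "and note the environmental trade-offs of your answer."
)

def build_messages(user_prompt: str, green_mode: bool) -> list[dict]:
    """Prepend the green persona as a system message when enabled."""
    messages = []
    if green_mode:
        messages.append({"role": "system", "content": GREEN_PERSONA})
    messages.append({"role": "user", "content": user_prompt})
    return messages

# The resulting list would be passed to the chat completion endpoint as-is.
msgs = build_messages("Best way to get from Berlin to Paris?", green_mode=True)
```

The appeal of this design is that the underlying model stays untouched: the “green” behaviour lives entirely in the prompt layer and can be switched off per request.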
GenAI + DMA = search market disruption?
Ecosia, which uses the ad revenue from its site (read, all its profits) to plant trees across the globe, is among the first independent search engines to roll out its own GenAI-powered chatbot. When speaking to TNW last month, Ecosia founder and CEO Christian Kroll stated how important it was for small independent players to stay up to date with the technology.
Further, he highlighted the opportunities generative AI could present in terms of disrupting the status quo in the internet search market. “I think there is potential for us to innovate as well — and maybe even leapfrog some of the established players,” he said.
Upon the launch of the company’s “green chatbot,” Kroll today added that the past year had introduced more change to the internet search landscape than the previous 14 combined (Ecosia was founded in 2009). “Generative AI has the potential to revolutionise the search market — no longer does it cost hundreds of billions to develop best-in-class search technologies,” he said, adding that Ecosia was targeting a “global increase in search engine market share.”
Something else that could potentially disrupt the market is the coming into play of the EU Digital Markets Act. From March 2024 onwards, consumers will no longer be “encouraged” to use default apps on their devices (say Safari web browser on an iPhone, or Google Maps on an Android device). This may come to include offering users a “choice screen” when setting up a device, which would invite them to select which browsers, search engines, and virtual assistants to install, rather than defaulting to the preferences of Apple and Android. Ecosia says it is “pushing hard” for this provision.
Green chatbot powered by clean energy
Many companies pay lip service to sustainability. Ecosia actually puts its money where its mouth is. A few years ago, its founder turned Ecosia into a steward-owned company. This means that no shares can be sold at a profit or owned by people outside of the company. In addition, no profits can be taken out of the company — as previously mentioned, all profits go to Ecosia’s tree-planting endeavours.
“It [tree planting] is one of the most effective measures we can take to fight the climate crisis. But unfortunately, it’s often not done properly. So that’s why it also gets a lot of criticism,” Kroll told TNW.
“We’re trying to define the standards of what good tree planting means. So first of all, you count the trees that survived, not just the ones that you have planted — then you also have to check on them.” This, we should add, falls under the purview of Ecosia’s Chief Tree Planting Officer. To date, the community has planted over 187,000,000 trees and counting.
In addition, Ecosia’s search engine is powered by solar energy, covering 200% of the carbon emitted by its server usage and broader operations.
The relationship between LLMs and CO2 is still undisclosed
You may ask how adding generative AI to a search function is compatible with an environmental agenda. After all, Google’s use of generative AI alone could use as much energy as a small-ish country.
Ecosia admits that it does not yet have “oversight of the carbon emissions created by LLM-based genAI functions,” since OpenAI does not openly share this information. However, initial testing indicates that the new GenAI function will increase CO2 emissions by 5%, Ecosia said, for which it will increase investment in solar power, regenerative agriculture, and other nature-based solutions.
Environmental credentials aside, a search engine still has to perform when it comes to its core function. “For us to compete against monolithic giants that have a 99% market share, we have to offer our users a product they’ll want to use day in, day out,” Michael Metcalf, chief product officer at Ecosia, shared. “That means not only offering a positive impact on climate action, but a best-in-class search engine that can go head-to-head with the likes of Bing and Google.”
Metcalf added that user testing had shown very positive feedback on the company’s sustainability-minded AI chatbot. “We’re going to market with generative AI products before peers precisely because we want to grow: Grow our user base, grow our profits, and then grow our positive climate impact — which is mission critical for our warming planet.”
A year has passed since OpenAI unleashed ChatGPT on the world and popularised terms like foundational model, LLM, and GenAI. However, the promised benefits of generative AI are still far more likely to accrue to those who speak English than to speakers of other languages.
There are over 7,000 languages in the world. Yet most large language models (LLMs) work far more effectively in English. Naturally, this threatens to amplify language bias when it comes to access to knowledge, research, innovation — and competitive advantage for businesses.
In November, Finland’s Silo AI released its multilingual open European LLM Poro 34B, developed in collaboration with the University of Turku. Poro, which means reindeer in Finnish, has been trained on Europe’s most powerful supercomputer, LUMI, in Kajaani, Finland. (Interestingly, LUMI runs on AMD architecture, rather than the Nvidia hardware currently all the rage for LLM training.)
Along with Poro 1, the company unveiled a research checkpoint program that will release checkpoints as training progresses (the first three checkpoints were announced with the model last month).
Now, the company, through its branch SiloGen, has trained more than 50% of the model and has just published the next two checkpoints in the program. With these five checkpoints now complete, Poro 34B has shown best-in-class performance for low-resource languages like Finnish (compared to Llama, Mistral, FinGPT, etc) — without compromising performance in English.
Research Fellow Sampo Pyysalo from TurkuNLP says that they expect to have trained the model fully within the next few weeks. As the next step, the model will add support for other Nordic languages, including Swedish, Norwegian, Danish, and Icelandic.
“It’s imperative for Europe’s digital sovereignty to have access to language models aligned with European values, culture and languages. We’re proud to see that Poro shows best-in-class performance on a low-resource language like Finnish,” Silo AI’s co-founder and CEO, Peter Sarlin, told TNW. “In line with the intent to cover all European languages, it’s a natural step to start with an extension to the Nordic languages.”
Furthermore, SiloGen has commenced training Poro 2. Through a partnership with non-profit LAION (Large-scale Artificial Intelligence Open Network), it will add multimodality to the model.
“It’s likewise natural to extend Poro with vision,” Sarlin added. “Like textual data, we see an even larger potential for generative AI to consolidate large amounts of data of different modalities.”
LAION says it is “passionate about advancing the field of machine learning for the greater good.” In keeping with Silo AI’s intentions for building its GenAI model and LAION’s overall mission to increase access to large-scale ML models, and datasets, Poro 2 will be freely available under the Apache 2.0 Licence. This means developers will also be able to build proprietary solutions on top.
Silo AI, which calls itself “Europe’s largest private AI lab,” launched in 2017 on the idea that Europe needed an AI flagship. The company is based in Helsinki, Finland, and builds AI-driven solutions and products to enable smart devices, autonomous vehicles, Industry 4.0, and smart cities. Currently, Silo AI counts over 300 employees and also has offices in Sweden, Denmark, the Netherlands, and Canada.
Google DeepMind researchers have trained a deep learning model to predict the structure of over 2.2 million crystalline materials — 45 times more than the number discovered in the entire history of science.
Of the two million-plus new materials, some 381,000 are thought to be stable, meaning they wouldn’t decompose — an essential characteristic for engineering purposes. These new materials have the potential to supercharge the development of key future technologies such as semiconductors, supercomputers, and batteries, said the British-American company.
Modern technologies, from electronics to EVs, can make use of just 20,000 inorganic materials. These were largely discovered through trial and error over centuries. Google DeepMind’s new tool, known as Graph Networks for Materials Exploration (GNoME), has discovered hundreds of thousands of stable ones in just a year.
Of the new materials, the AI found 52,000 new layered compounds similar to graphene that could be used to develop more efficient superconductors — crucial components in MRI scanners, experimental quantum computers, and nuclear fusion reactors. It also found 528 potential lithium ion conductors, 25 times more than a previous study, which could be used to boost the performance of EV batteries.
To achieve these discoveries, the deep learning model was trained on extensive data from the Materials Project. The programme, led by the Lawrence Berkeley National Laboratory in the US, has used similar AI techniques to discover about 28,000 new stable materials over the past decade. Google DeepMind has expanded this number eight-fold, in what the company calls an “order of magnitude expansion in stable materials known to humanity.”
While the new materials are technically just predictions, DeepMind researchers say independent experimenters have already made 736 of the materials, verifying their stability. And a team from the Berkeley Lab has already been using autonomous robots to synthesise materials it discovered through the Materials Project as well as the new treasure trove unearthed by DeepMind. As detailed in this study, the autonomous AI-powered robot was able to bring 41 of 58 predicted materials to life, in just 17 hours.
“Industry tends to be a little risk-averse when it comes to cost increases, and new materials typically take a bit of time before they become cost-effective,” Kristin Persson, director of the Materials Project, told Reuters. “If we can shrink that even a bit more, it would be considered a real breakthrough.”
DeepMind researchers say they will immediately release data on the 381,000 compounds predicted to be stable and make the code for its AI publicly available. By giving scientists the full catalogue of the promising ‘recipes’ for new candidate materials, the company said it hopes to speed up discovery and drive down costs.
It has been nearly a year since OpenAI unleashed ChatGPT on the world, and it seems as if no one (at least in tech) has stopped talking about generative AI since. Meanwhile, the applications of GenAI go way beyond chatbots and copyright-grey-area image ‘artistry’.
For instance, Cradle, a biotech software startup out of Delft, Netherlands, is using it to help biologists engineer improved proteins, making it easier and quicker to bring synthetic bio-solutions for human and planetary health to market.
In synthetic biology, people use engineering principles to design and build new biological systems. Scientists can use parts of DNA or other biological elements to give existing organisms new abilities.
This has tremendous potential for programming things like bacteria to produce medicine, non-animal whey proteins, detergents and plastics without petrochemicals, yeast to make biofuel, or for instance crops that can survive in tough environments… the list goes on.
“At the core of all these products are proteins, which are little cellular machinery,” Stef van Grieken, co-founder and CEO of Cradle, told TNW a little while back. “If you want to change them to be better for the application you have in mind, you have to alter the DNA sequence. That’s a really complicated task because DNA is basically an alien programming language.”
This whole process takes a long time — and costs a lot of money. Which is where Cradle’s web-based software comes in. When prompted, Cradle’s AI platform can generate a molecule sequence with a higher probability of matching what researchers are looking for than when scientists have to try everything out for themselves.
This means fewer, more successful experiments, drastically reducing R&D time and costs. For most projects, work can proceed at twice the speed of the industry average.
Putting AI to good use for future generations
Cradle’s proprietary AI model has been trained on billions of protein sequences, as well as data generated in the company’s own wet laboratory.
“It dramatically increases the probability that your molecule has the characteristics that you care about when you try it out in your laboratory, and that significantly accelerates R&D time, and therefore dramatically reduces the cost of the R&D phase for bringing these types of products to market,” van Grieken, previously Senior Product Manager for Google AI, added.
Why did van Grieken leave a cosy job at Google to set up a synbio AI startup in the Netherlands? “Because I have two young daughters who are three and six. And they’re gonna ask me, 10 years from now, ‘what did you do when the earth caught fire?’ And the answer could not be ‘I was a plumber for an advertising company’.”
“For me personally, success would be helping a company to make a bio-based version that replaces some petrochemical or animal-based product, and does that better and cheaper. And that more of these companies start existing in the world.”
Increased demand for synbio software
Cradle, founded by van Grieken and bioengineer Elise de Reuse in 2021, has already signed industry partners including Johnson & Johnson, and has just announced a $24mn Series A funding round. It is now working on more than 12 R&D projects, including vaccines and antibodies.
The team currently consists of 20 people, split between Delft, the Netherlands and Zurich, Switzerland. The latest funding round was led by Index Ventures with participation from Kindred Capital. Angel investors including Chris Gibson, co-founder and CEO of Recursion and Tom Glocer, former CEO of Thomson Reuters and Lead Director, Merck, also participated in the round.
The fresh injection of capital will allow Cradle to grow the team, build out additional laboratory and engineering facilities in Amsterdam, and continue to develop its platform and user experience to allow it to onboard more customers, in line with growing demand.
Name an application that has not been attributed to generative AI by now. Anything from your virtual boyfriend/girlfriend to vaccines and the energy transition will apparently be solved by the tech currently sweeping the globe. PhysicsX, a UK-based startup “on a mission to reimagine simulation for science and engineering using AI,” wants to add advanced engineering superpowers to the list.
The company, with a team of more than 50 simulation engineers, machine learning and software engineers, and data scientists, is building AI it says will dramatically accelerate accurate physics simulation. This will enable generative engineering solutions for sectors including aerospace, automotive, renewables, and materials production.
PhysicsX says it is looking to solve engineering bottlenecks. These include time-consuming physics simulation, painstaking reconciliation of virtual simulation and real-world data collection, as well as the many limitations of optimising over a large design space.
The company, co-founded by theoretical physicist and former head of R&D at Formula One team Renault (Alpine) Robin Tuluie, just announced a €30mn Series A funding round led by US-based VC firm General Catalyst.
“Engineering design processes were transformed by numerical simulation and the availability of high-performance compute infrastructure,” Tuluie, also the company’s co-CEO, commented. “The move from numerical simulation to deep learning represents a similar leap and will unlock new levels of product performance and ways of practising engineering itself. PhysicsX exists to help pioneer and enable that transformation, giving engineers and manufacturers superpowers in bringing new technologies to the real world.”
Coming up against the limits of today’s computational tools
Standard Industries, NGP Energy, Radius Capital, and KKR co-founder and co-executive chairman, Henry Kravis, also participated in the round. The capital will allow PhysicsX to grow across customer delivery and product. It will also support its fundamental research to advance its AI models and methods.
“Our customers are designing the most important technologies of our time, including wind turbines, aircraft engines, electric vehicles, semiconductors, metals, and biofuels,” Jacomo Corbo, co-founder and co-CEO, said. However, these customers are now exceeding the limits of today’s computer-aided tools — the bottleneck PhysicsX has set its technology against.
“We’re delighted that this financing will enable us to partner more deeply with our customers to enable breakthrough engineering,” Corbo added.
Most of us would love a break from the 9-to-5 grind, evidenced by the recent hype over the four-day workweek. But a new startup from the UK promises to help you toil even less than that — while remaining just as productive.
The company is called Tomoro and it’s on a mission to cut the working week to just three days within the next five years. It aims to achieve this using AI “agents” — essentially, large language models (LLMs) which can freely make decisions within defined guardrails, as opposed to rule-based machines. These agents will act like robotic personal assistants.
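Tomoro has not published its architecture, but the distinction the article draws, LLM agents that decide freely within defined guardrails rather than following fixed rules, can be sketched as a dispatch layer that only executes model-proposed actions from an explicit allowlist. All names below are hypothetical:

```python
# Illustrative sketch of an "agent with guardrails": the model may propose
# any action, but only allowlisted actions are actually executed.
# Action names and the dispatch mechanism are hypothetical assumptions.

ALLOWED_ACTIONS = {"schedule_meeting", "draft_email", "summarise_document"}

def run_agent_step(proposed_action: str, payload: str) -> str:
    """Execute an LLM-proposed action only if it passes the guardrail."""
    if proposed_action not in ALLOWED_ACTIONS:
        return f"blocked: '{proposed_action}' is outside the agent's guardrails"
    # In a real system this would dispatch to the relevant tool or API.
    return f"executed: {proposed_action}({payload})"
```

The point of the pattern is that the model’s freedom lies in *choosing* among permitted actions; anything outside the allowlist is refused regardless of what the model proposes.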
“We’re not talking about simple automation or the removal of repetitive tasks,” explained founder Ed Broussard. “Tomoro will be integrating synthetic employees into businesses alongside real people that have the ability to reason, grow, increase their knowledge, adapt their tone and problem solve. This is a huge departure from what’s currently on the market.”
Originally from Scotland, Broussard is best known as the founder of data and AI consultancy Mudano, which was acquired by Accenture in 2020 for an undisclosed sum. The entrepreneur’s latest venture, Tomoro, is working “in alliance” with OpenAI, the creator of ChatGPT (which just sacked its CEO Sam Altman in one of tech’s biggest upsets this year).
Headquartered in London, Tomoro claims it has no intention of replacing people’s jobs, only enhancing their productivity. “We need to stop thinking about AI as a like-for-like job replacement — its value extends far beyond that,” said Broussard. Tomoro aims to accelerate workplace efficiency at large corporations by up to eight times the current rate.
To execute its ambitious plan, Tomoro says it will recruit a “world class” R&D team, leaders in behavioural science, and AI experts. Having launched just this week, the company has already secured its first client, British insurance firm PremFina.
“AI is as big a societal shift as the invention of farming,” added Broussard. “Imagine telling a hunter-gatherer that in the future there would be an abundance of food, and it would take no effort to eat. That’s what AI will do for productivity in the workplace.”
The launch of Tomoro comes just weeks after Elon Musk said that he believed AI would replace all jobs in the future and we would all live on a universal income (doing god knows what with all that free time).
For Broussard, however, the future of the workplace is AI working in tandem with its human overlords.
Over the past year, AI image generators have taken the world by storm. Heck, even our distinguished writers at TNW use them from time to time.
Truth is, tools like Stable Diffusion, Latent Diffusion, or DALL·E can be incredibly useful for producing unique images from simple prompts — like this picture of Elon Musk riding a unicorn.
But it’s not all fun and games. Users of these AI models can just as easily generate hateful, dehumanising, and pornographic images at the click of a button — with little to no repercussions.
“People use these AI tools to draw all kinds of images, which inherently presents a risk,” said researcher Yiting Qu from the CISPA Helmholtz Center for Information Security in Germany. Things become especially problematic when disturbing or explicit images are shared on mainstream media platforms, she stressed.
While these risks seem quite obvious, there has been little research undertaken so far to quantify the dangers and create safe guardrails for their use. “Currently, there isn’t even a universal definition in the research community of what is and is not an unsafe image,” said Qu.
To illuminate the issue, Qu and her team investigated the most popular AI image generators, the prevalence of unsafe images on these platforms, and three ways to prevent their creation and circulation online.
The researchers fed four prominent AI image generators with text prompts from sources known for unsafe content, such as the far-right platform 4chan. Shockingly, 14.56% of images generated were classified as “unsafe,” with Stable Diffusion producing the highest percentage at 18.92%. These included images with sexually explicit, violent, disturbing, hateful, or political content.
Creating safeguards
The fact that so many unsafe images were generated in Qu’s study shows that existing filters are not doing their job adequately. The researcher developed her own filter, which scores a much higher hit rate in comparison, but suggests a number of other ways to curb the threat.
One way to prevent the spread of inhumane imagery is to program AI image generators to not generate this imagery in the first place, she said. Essentially, if AI models aren’t trained on unsafe images, they can’t replicate them.
Beyond that, Qu recommends blocking unsafe words from the search function, so that users can’t put together prompts that produce harmful images. For those images already circulating, “there must be a way of classifying these and deleting them online,” she said.
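The prompt-level safeguard Qu describes, rejecting prompts that contain blocked terms before they ever reach the image model, can be sketched as below. The blocklist contents and matching strategy are illustrative assumptions; production filters typically use trained classifiers rather than plain keyword lists:

```python
# Sketch of a prompt-level blocklist: reject prompts containing blocked
# terms before they reach the image generator. Terms are placeholders.
import re

BLOCKED_TERMS = {"gore", "beheading"}  # illustrative placeholder terms

def is_prompt_allowed(prompt: str) -> bool:
    """Return True only if no word in the prompt appears on the blocklist."""
    tokens = set(re.findall(r"[a-z]+", prompt.lower()))
    return tokens.isdisjoint(BLOCKED_TERMS)
```

A keyword filter like this is easy to circumvent with misspellings or paraphrase, which is one reason Qu’s own filter, and the classification-and-deletion approach for images already in circulation, go further.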
With all these measures, the challenge is to find the right balance. “There needs to be a trade-off between freedom and security of content,” said Qu. “But when it comes to preventing these images from experiencing wide circulation on mainstream platforms, I think strict regulation makes sense.”
While initiatives like the AI Safety Summit, which took place in the UK this month, aim to create guardrails for the technology, critics claim big tech companies hold too much sway over the negotiations. Whether that’s true or not, the reality is that, at present, proper, safe management of AI is patchy at best and downright alarming at its worst.
While not yet as illustrious as its North American counterparts OpenAI, Anthropic, or Cohere, Europe’s own cohort of generative AI startups is beginning to crystallise. Just yesterday, news broke that Germany’s Aleph Alpha had raised €460mn, in one of the largest funding rounds ever for a European AI company.
The European tech community received news of the investment with some enthusiasm. While much focus has been on how the EU will regulate the tech (and how the UK will or will not), there hasn’t been a whole heap of attention on how the bloc will support artificial intelligence innovation and reduce the risk of being left behind in yet another technological leap.
During a press conference about the investment, Germany’s Vice Chancellor and Minister for Economic Affairs Robert Habeck stressed the importance of supporting domestic AI enterprises.
“The thought of having our own sovereignty in the AI sector is extremely important,” Habeck said. “If Europe has the best regulation but no European companies, we haven’t won much.”
Transparency, traceability, and sovereignty
At the same press conference, Jonas Andrulis, Aleph Alpha’s founder and CEO, stated that the investors participating in the latest round (including the likes of SAP, Bosch Ventures, and owners of budget supermarket giant Lidl) were all partners the company had worked with before. Notably, all but a small contribution from Hewlett Packard came from European investors or grants.
“What was so important for me, right from the beginning with our research, is transparency, traceability, and sovereignty,” Andrulis added, playing on ethical considerations that could potentially set a European LLM apart, as well as geopolitical objectives.
Aleph Alpha is building a large language model (LLM) similar to OpenAI’s GPT-4, but focusing on serving corporations and governments, rather than individual consumers. But there are other things separating the two companies — OpenAI has 1,200 employees, whereas 61 people work at Aleph Alpha. Furthermore, the former has secured over €11bn in funding.
However, with the construction of the €2bn Innovation Park Artificial Intelligence (Ipai) in Aleph Alpha’s hometown of Heilbronn in southwest Germany, the startup may end up receiving the boost it needs to level the playing field. Construction of Ipai is slated to be complete in 2027, by which time it will be able to accommodate 5,000 people. The project is supported by the Dieter Schwarz Foundation, which also participated in Aleph Alpha’s latest funding round.
Europe’s lack of an AI contender is a geopolitical issue
Founded in 2019, Aleph Alpha is not a newcomer to the game. In 2021, before ChatGPT-induced investment hysteria, the company raised €23mn, in a round led by Lakestar Advisors. Such an amount has, of course, since been overshadowed by numbers in the billions of dollars on the other side of the Atlantic. However, Aleph Alpha did raise another €100mn, backed by Nvidia among others, in June this year.
And the German startup isn’t the only European player raking in the dough. Only a few weeks after the company’s founding in May this year, France’s Mistral AI raised €133mn in the reportedly largest-ever seed round for a European startup.
Presenting a challenge to Aleph Alpha’s claim to the European GenAI throne, Mistral is also developing an LLM for enterprises — although its very first model, Mistral 7B, is totally free to use. In its pitches to investors, the startup, founded by former Google and Meta employees, reportedly warned it was a “major geopolitical issue” that Europe did not have its own serious contender in generative AI.
Meanwhile, Germany’s is not the only government looking to shore up domestic generative AI capabilities. The Netherlands recently commenced the development of its very own homegrown LLM to provide what it said would be a “transparent, fair, and verifiable” GenAI alternative.
If you are generally interested in technology and are not aware by now of the meeting that took place yesterday between Rishi Sunak and Elon Musk during the UK’s AI Safety Summit at Bletchley Park, you have probably been living under a rock.
As others have already commented, the session resembled a fan meeting, or at the very least a mutual admiration club, more than an actual interview. The PM’s fawning giggles were perhaps not entirely suited to the gravity of the occasion that brought the two together (but hey, who are we to judge).
You have probably also read Elon Musk‘s statements along the lines of “no work will be necessary in the future,” and that any AI safety conference not including China would have been “pointless.” He also predicted that humans will form deep friendships with AI, once the technology becomes… intelligent enough.
But, in perhaps less quote-friendly or headline-making material, prompted by a question from the Summit’s selected audience, the tech tycoon also discussed what he believes it would take to shift the culture in the UK so that it could become “a real breeding ground for unicorn companies,” and turn being a founder into a more obvious career choice for technical talent. “There should be a bias towards supporting small companies,” Musk said, referring to policy and investment. “Because they are the ones that really need nurturing. The larger companies really don’t need nurturing. You can think of it as a garden — if it’s a little sprout it needs nurturing, if it’s a mighty oak it does not need quite as much.”
After praising London as a leading centre for AI in the world (behind the San Francisco Bay Area), accompanied by some smug nodding from the PM, Musk added that it would take sufficient infrastructure support.
“You need landlords that are willing to rent to new companies,” Musk said. “You need law firms and accountants that are willing to support new companies. And that is a mindset change.”
He further stated that he believes this shift is already happening culturally in the UK, but that people need to decide that “this is a good thing.” Getting Brits comfortable with failure might be a trickier cultural shift to facilitate.
“If you don’t succeed with your first startup it shouldn’t be a catastrophic sort of career-ending thing,” Musk mused. “It should be more like ‘ok, you gave it a good shot, and now try again.’ Most startups fail. You hear about the startups that succeed, but most startups consist of a massive amount of work, followed by failure. So it’s a high-risk, high-reward situation.”
There are plenty of conversations around how AI can progress healthcare. And indeed, it has numerous applications in the form of diagnostics and accelerated drug discovery. However, wouldn’t it be even better if artificial intelligence could help prevent us from getting sick in the first place?
Anyone who has ever tried to get past (or even to) a GP in the Netherlands, or in any other European country feeling the healthcare capacity crunch, knows the process can leave you almost as drained as the illness itself. Often, getting access to proper care becomes a matter of being able to advocate for yourself, at times across language barriers.
HealthCaters is a female-founded startup based in Berlin that wants to change the status quo of healthcare gatekeeping. Its founders say they are not simply looking to make money and grow the company. Rather, they want to change the future of preventative medicine — in a way that is accessible and affordable for all. And when speaking to its two co-founders, Yale-educated Dr Lily Kruse and ex-VC and former head of business development at medical travel platform Medigo Tanya Eliseeva, I find myself believing them.
“I was a heart surgeon in the US. And for me, the most important thing was to help people and to make sure that they are healthy. And I realised quickly that medicine is not so much about that. It’s more about treating people who are already sick,” Dr Kruse says.
“But if you think about it, there’s so many people that are getting sick, but are not sick yet. It was quite emotional to have somebody on the operating table, knowing that this could have been prevented.”
Preventative health beyond statistics
According to the WHO, by 2030, the proportion of total global deaths due to chronic diseases (or noncommunicable diseases, NCDs) is expected to reach 70% — up from 61% in 2005. Apart from the tremendous individual suffering they inflict, lifestyle-influenced diseases, such as heart disease, chronic respiratory disease, and diabetes, are also a huge burden on health services worldwide.
Considering that in 2011, Harvard Business School predicted that NCDs would cost society more than $30 trillion over the coming two decades, it is baffling that national healthcare plans hardly, if ever, take preventative measures into account.
Hoping to function as an extension of the existing healthcare system, HealthCaters has developed what it calls DIY health screenings, using a portable testing station and an accompanying app. Taking around 30 minutes, the system guides the user through a series of steps to measure blood pressure, cholesterol, lung function, heart rate, and kidney, liver, and metabolic health.
A proprietary algorithm then analyses the results and provides a comprehensive risk assessment, including probability and factors impacting each risk. The app also provides practical advice on how to mitigate said risks, based on scientific evidence.
“It’s not just a sheet of different values, and then a thumbs up, like you’re okay or you’re not okay, and that’s it,” Eliseeva says. “We put together a very comprehensive risk assessment that allows you to understand what your actual risks are. What do I need to pay attention to? And for every risk that we identify, we show the probability and the factors that impact that risk.”
Unlike at a doctor’s office or a test lab, the user doesn’t just receive the raw numbers: the algorithm also takes lifestyle into account to determine what those numbers mean for each individual in terms of risk.
“Our goal is preventative medicine that is not just statistics,” Eliseeva continues. “It’s not about having a 10% chance of developing something. It’s more like ‘what do I have to care for, in order for me to stay healthy?’”
‘Explosive demand’ from corporates
HealthCaters offers its services to corporates, which can set up health screening days for employees with the portable boxes and access to the app with personalised plans. Customers include the likes of IBM, WeWork, and Barmenia. Furthermore, the startup just opened its first centre for the public in Berlin (prices start at €39).
“We are seeing explosive demand from corporates right now,” Eliseeva says, adding that the company signed three new clients last month — no small thing for a startup its size.
“Usually we do it in the framework of a health day or health week, which means we go there and we set up, and employees can book slots. They come to the health case, and they can do the screening themselves,” Dr Kruse explains. “So it’s completely self-managed.”
Sometimes, the process is arranged by an insurance company that acts as the intermediary, which also allows corporations to lower premiums for employee insurance.
Growing a team that understands tech for health
While any startup in preventative or diagnostic healthcare over the past few years may have had to battle the ghost of Theranos in the minds of investors, HealthCaters recently secured $1.2mn in seed funding led by Barmenia Next Strategies. The company will use the funds to expand the team, on both the tech and operational business sides, as well as to “double down” on the product.
“It’s so important to make sure that there is a strong team behind your tech that understands at what point and how to take all this technology [generative AI] in, and to implement it without jeopardising the actual essence of the product,” Eliseeva says. “Because it’s not tech for tech, right? It’s tech for health.”
Lack of accessibility increases anxiety
The company has also held pop-up events throughout Europe. Circling back to the Dutch healthcare system referenced earlier in this article, when HealthCaters held an event in Amsterdam earlier this year, the audience was partly unexpected.
“You always think that out-of-pocket expenses for healthcare are upper-middle-class territory. But many of the people who came to us were taxi drivers,” Eliseeva recalls. “And they said, ‘Well, I tried to have a doctor’s appointment, they asked me to wait for three months. When I finally went, it was over in literally two minutes. They looked at me and said — you’re okay, why are you here?’ They said that it makes them very anxious, and this word, anxious, is repeated by so many people we talk to [in relation to healthcare].”
The company has plans to further integrate the product with wearables and data such as genetic screenings and family history. With so much personal medical data being involved, I cannot help but ask about privacy and security. Eliseeva highlights that it was not something she and Dr Kruse would ever cut corners on as it is a “cornerstone” of the product.
“Before we hired a single person, we already asked ourselves, how are we going to deal with data security? We even interviewed on the basis of what they would do with this data,” she stated, adding that the company runs its own trial audits and passes every single criterion.