Artificial Intelligence


Cops bogged down by flood of fake AI child sex images, report says

“Particularly heinous” —

Investigations tied to harmful AI sex images will grow “exponentially,” experts say.


Law enforcement is continuing to warn that a “flood” of AI-generated fake child sex images is making it harder to investigate real crimes against abused children, The New York Times reported.

Last year, after researchers uncovered thousands of realistic but fake AI child sex images online, every attorney general across the US quickly called on Congress to set up a committee to squash the problem. But so far, Congress has moved slowly, while only a few states have specifically banned AI-generated non-consensual intimate imagery. Meanwhile, law enforcement continues to struggle to figure out how to confront bad actors found to be creating and sharing images that, for now, largely exist in a legal gray zone.

“Creating sexually explicit images of children through the use of artificial intelligence is a particularly heinous form of online exploitation,” Steve Grocki, the chief of the Justice Department’s child exploitation and obscenity section, told The Times. Experts told The Washington Post in 2023 that risks of realistic but fake images spreading included normalizing child sexual exploitation, luring more children into harm’s way, and making it harder for law enforcement to find actual children being harmed.

In one example, the FBI announced earlier this year that an American Airlines flight attendant, Estes Carter Thompson III, was arrested “for allegedly surreptitiously recording or attempting to record a minor female passenger using a lavatory aboard an aircraft.” A search of Thompson’s iCloud revealed “four additional instances” where Thompson allegedly recorded other minors in the lavatory, as well as “over 50 images of a 9-year-old unaccompanied minor” sleeping in her seat. While police attempted to identify these victims, the FBI further alleged that “hundreds of images of AI-generated child pornography” were found on Thompson’s phone.

The troubling case seems to illustrate how AI-generated child sex images can be linked to real criminal activity while also showing how police investigations could be bogged down by attempts to distinguish photos of real victims from AI images that could depict real or fake children.

Robin Richards, the commander of the Los Angeles Police Department’s Internet Crimes Against Children task force, confirmed to the NYT that due to AI, “investigations are way more challenging.”

And because image generators and AI models that can be trained on photos of children are widely available, “using AI to alter photos” of children online “is becoming more common,” Michael Bourke—a former chief psychologist for the US Marshals Service who spent decades supporting investigations into sex offenses involving children—told the NYT. Richards said that cops don’t know what to do when they find these AI-generated materials.

Currently, there aren’t many cases involving AI-generated child sex abuse materials (CSAM), The NYT reported, but experts expect that number will “grow exponentially,” raising “novel and complex questions of whether existing federal and state laws are adequate to prosecute these crimes.”

Platforms struggle to monitor harmful AI images

At a Senate Judiciary Committee hearing today grilling Big Tech CEOs over child sexual exploitation (CSE) on their platforms, Linda Yaccarino—CEO of X (formerly Twitter)—warned in her opening statement that artificial intelligence is also making it harder for platforms to monitor CSE. Yaccarino suggested that industry collaboration is imperative to get ahead of the growing problem, as is providing more resources to law enforcement.

However, US law enforcement officials have indicated that platforms are also making it harder to police CSAM and CSE online. Platforms relying on AI to detect CSAM are generating “unviable reports” gumming up investigations managed by already underfunded law enforcement teams, The Guardian reported. And the NYT reported that other investigations are being thwarted by messaging services adding end-to-end encryption options, which “drastically limit the number of crimes the authorities are able to track.”

The NYT report noted that in 2002, the Supreme Court struck down a law that had been on the books since 1996 preventing “virtual” or “computer-generated child pornography.” South Carolina’s attorney general, Alan Wilson, has said that AI technology available today may test that ruling, especially if minors continue to be harmed by fake AI child sex images spreading online. In the meantime, federal laws such as obscenity statutes may be used to prosecute cases, the NYT reported.

Congress has recently re-introduced some legislation to directly address AI-generated non-consensual intimate images after a wide range of fake AI porn images depicting pop star Taylor Swift went viral this month. That includes the Disrupt Explicit Forged Images and Non-Consensual Edits Act, which creates a federal civil remedy for victims of any age who are identifiable in AI images depicting them as nude or engaged in sexually explicit conduct or sexual scenarios.

There’s also the “Preventing Deepfakes of Intimate Images Act,” which seeks to “prohibit the non-consensual disclosure of digitally altered intimate images.” That was re-introduced this year after teen boys generated AI fake nude images of female classmates and spread them around a New Jersey high school last fall. Francesca Mani, one of the teen victims in New Jersey, was there to help announce the proposed law, which includes penalties of up to two years imprisonment for sharing harmful images.

“What happened to me and my classmates was not cool, and there’s no way I’m just going to shrug and let it slide,” Mani said. “I’m here, standing up and shouting for change, fighting for laws, so no one else has to feel as lost and powerless as I did on October 20th.”



Game developer survey: 50% work at a studio already using generative AI tools

Do androids dream of Tetris? —

But 84% of devs are at least somewhat concerned about ethical use of those tools.

The future of game development?


A new survey of thousands of game development professionals finds a near-majority saying generative AI tools are already in use at their workplace. But a significant minority of developers say their company has no interest in generative AI tools or has outright banned their use.

The Game Developers Conference’s 2024 State of the Industry report, released Thursday, aggregates the thoughts of over 3,000 industry professionals as of last October. While the annual survey (conducted in conjunction with research partner Omdia) has been running for 12 years, this is the first time respondents were asked directly about their use of generative AI tools such as ChatGPT, DALL-E, GitHub Copilot, and Adobe Generative Fill.

Forty-nine percent of the survey’s developer respondents said that generative AI tools are currently being used in their workplace. That near-majority includes 31 percent (of all respondents) that say they use those tools themselves and 18 percent that say their colleagues do.

A majority of game developers said their workplace was at least interested in using generative AI tools.


The survey also found that different studio departments showed different levels of willingness to embrace AI tools. Forty-four percent of employees in business and finance said they were using AI tools, for instance, compared to just 16 percent in visual arts and 13 percent in “narrative/writing.”

Among the 38 percent of respondents who said their company didn’t use AI tools, 15 percent said their company was “interested” in pursuing them, while 23 percent said they had “no interest.” In a separate question, 12 percent of respondents said their company didn’t allow the use of AI tools at all, a number that went up to 21 percent for respondents working at the largest “AAA developers.” An additional 7 percent said the use of some specific AI tools was not allowed, while 30 percent said AI tool use was “optional” at their company.

Worries abound

The wide embrace of AI tools hasn’t seemed to lessen worries about their use among developers, though. A full 42 percent of respondents said they were “very concerned” about the ethics of using generative AI in game development, with an additional 42 percent being “somewhat concerned.” Only 12 percent said they were “not concerned at all” about those usage ethics.

Developer policies on AI use varied greatly, with a plurality saying their company had no official policy.


Overall, respondents were divided on whether the use of AI tools would be positive (21 percent) or negative (18 percent) for the industry, with a 57 percent majority saying the impact would be “mixed.”

Developers cited coding assistance, content creation efficiency, and the automation of repetitive tasks as the primary uses for AI tools, according to the report.

“I’d like to see AI tools that help with the current workflows and empower individual artists with their own work,” one anonymous respondent wrote. “What I don’t want to see is a conglomerate of artists being enveloped in an AI that just does 99% of the work a creative is supposed to do.”

Elsewhere in the report, the survey found that only 17 percent of developers were at least somewhat interested in using blockchain technology in their upcoming projects, down significantly from 27 percent in 2022. An overwhelming 77 percent of respondents said they had no interest in blockchain technology, similar to recent years.

The survey also found that 57 percent of respondents thought that workers in the game industry should unionize, up from 53 percent last year. Despite this, only 23 percent said they were either in a union or had discussed unionization at their workplace.



Google’s Gemini AI won’t be available in Europe — for now

Yesterday, Google launched its much-anticipated response to OpenAI’s ChatGPT (the first release of Bard didn’t really count, did it?). However, the new set of generative AI models that Google is dubbing “the start of the Gemini era” will not yet be available in Europe — due to regulatory hurdles.

The tech giant is calling Gemini the “most capable model ever” and says it has been trained to recognise, understand, and combine different types of information including text, images, audio, video, and code. 

According to Demis Hassabis, CEO of Google DeepMind, Gemini performs as well as the best human experts in the 50 different subject areas the model was tested on. Furthermore, it scored more than 90% on industry-standard benchmarks for large language models (LLMs).

The three models of the Gemini AI family

The Gemini family of models will be available in three sizes: Gemini Ultra, the largest (but also slowest), intended for highly complex tasks; Gemini Pro, the best-performing across a broad range of tasks; and Gemini Nano, for on-device tasks.

Google says it has trained Gemini 1.0 on its AI-optimised infrastructure using the company’s in-house Tensor Processing Units (TPUs) v4 and v5e. Along with unveiling the Gemini family, Google also announced the Cloud TPU v5p, which is specifically designed for training cutting-edge AI models. 

Google’s TPU v5p is designed specifically for training advanced AI models. Credit: Google

Perhaps the true evolution in LLM application is Nano, which is optimised for mobile devices. As Google told the Financial Times, Nano will allow developers to build AI applications that can also work offline — with the additional benefit of enhanced data privacy options.

As the company explains in greater detail in a blog post, Google is also providing AI Studio, a free, web-based developer tool for prototyping and launching apps using an API key. Gemini Pro will be available to developers and enterprise customers from December 13.

Just as with Bard, Europe will need to wait for Gemini

A “fine-tuned” version of Gemini Pro launched for Google’s existing Bard chatbot yesterday in 170 countries and territories. The company says it will also be available across more of its services, such as Search, Ads, and Chrome, in the coming months. 

However, users in the EU and the UK eager to test the mettle of Google’s “new era” of AI will have to wait a little longer. Google did not give extensive details, but said it is planning to “expand to different modalities and support new languages and locations in the near future.” 

Indeed, Google is reportedly planning a preview of “Bard Advanced,” powered by the multimodal Gemini Ultra next year. Google first released Bard in March 2023, but due to concerns around compliance with the GDPR, it did not reach European users until June. Let’s see how long we will have to wait for Gemini. 



EU settles on rules for generative AI, moves to surveillance

The tech world is waiting with bated breath for the results from the final negotiations in Brussels regarding the EU’s landmark AI Act. The discussions that commenced at 14:00 CET on Wednesday failed to reach a conclusion before the end of the day. However, negotiators did reportedly reach a compromise for the control of generative AI systems, such as ChatGPT.

According to sources familiar with the talks, they will now continue on the topic of the controversial use of AI for biometric surveillance — which lawmakers want to ban. As reported by Reuters, governments may have made concessions on other accounts in order to be able to use the tech for purposes related to “national security, defence, and military.” 

Sources expect negotiations to continue for several more hours on Thursday. 

AI Act: innovation vs. regulation

While the AI Act — the first attempt globally at regulating artificial intelligence — has been in the works since April 2021, the rapid evolution of the technology and the emergence of GenAI has thrown a wrench in the gears of the Brussels machinery. 


In addition to having to understand the technological side of foundation models — and anticipate the evolution of the technology over time so as not to render regulation obsolete within a couple of years — member states have settled into different camps.

Lawmakers have proposed requirements for developers to maintain information on how they train their models, along with disclosing use of copyrighted material, and labelling content produced by AI, as opposed to humans. 

France and Germany (home to European frontrunners Mistral AI and Aleph Alpha) have opposed binding rules they say would handicap the bloc’s homegrown generative AI companies. Along with Italy, they would prefer to let developers self-regulate, adhering to a code of conduct. 

If Thursday’s talks fail to generate (see what we did there) any definitive conclusions, there are fears that the whole act could be shelved until after the European elections next year — which will usher in a new Commission and Parliament. Given the barrage of news of developments, such as Google’s Gemini and AMD’s new super AI chip, regulators may well need to rewrite the rules entirely by then. Oh well, that is Brussels bureaucracy for you.




Mistral AI nears $2B valuation — less than 12 months after founding

European contenders might have been a little late to join the generative AI investment party, but that does not mean they will not end up rivalling some of the earlier North American frontrunners. According to people familiar with the matter, Mistral AI, the French genAI seed-funding sensation, is just about to close a funding round of about €450mn.

Unlike with Germany’s Aleph Alpha, which just raised a similar sum, most of the investors come from beyond the confines of the continent. The round is led by Silicon Valley VC firm Andreessen Horowitz and also includes backing from Nvidia and Salesforce.

Sources close to the deal told Bloomberg that Andreessen Horowitz would invest €200mn in funding, whereas Nvidia and Salesforce would be down for €120mn in convertible debt, although this was still subject to change. If it goes through, this would value the Paris-based startup at nearly $2bn — less than a year after it was founded. 

Mistral AI was one of the few European AI companies to participate in the UK’s AI Safety Summit held at Bletchley Park last month. The generative AI startup released its first large language model (LLM), Mistral 7B, under the open source Apache 2.0 licence in September. 

Targeting the dev space with smaller LLMs

The key thing that sets Mistral apart is that it is specifically building smaller models that target the developer space. Speaking at the SLUSH conference in Helsinki last week, co-founder and CEO Arthur Mensch said this was exactly what separates the philosophy of the company from its competitors.

“You can start with a very big model with hundreds of billions of parameters — maybe it’s going to solve your task. But you could actually have something which is a hundred times smaller,” Mensch stated. “And when you make a production application that targets a lot of users, you want to make choices that lower the latency, lower the costs, and leverage the actual populated data that you may have. And this is something that I think is not the topic of our competitors — they’re really targeting multi-usage, very large models.”



Mensch, who previously worked for Google DeepMind, added that this approach would also allow for strong differentiation through proprietary data, a key factor for actors to survive in the mature application market space. 

Mistral AI and the reported investors have all declined to comment on the potential proceedings.




Klarna freezes hiring, citing AI ‘productivity gains’

In a hiring freeze that CEO Sebastian Siemiatkowski attributes to the rise of AI, Swedish fintech unicorn Klarna is no longer recruiting staff beyond its engineering department.

“There will be a shrinking of the company,” Siemiatkowski told the Telegraph. “We’re not currently hiring at all, apart from engineers.”

The chief exec of the buy now, pay later app said that the productivity gains from using tools like ChatGPT meant the company now needs “fewer people to do the same thing.”

Klarna is not planning layoffs. But as people depart voluntarily, the CEO said he expects the size of the company to shrink over time. AI is “a threat to a lot of jobs” across the economy, he added.


The news comes as employees — from writers to artists — are becoming increasingly worried about the impact that AI could have on their livelihoods. Comments like these from the Klarna boss will no doubt exacerbate these anxieties. 

Klarna isn’t the first to make such moves, and it surely won’t be the last. In May, the CEO of IBM told Bloomberg that the company was pausing hiring for roles it believed could be replaced by AI in the coming years. And in April, Dropbox announced it was cutting its workforce by 16%, or 500 employees, blaming AI for forcing a shift in strategy.

Elon Musk, for one, thinks that AI will replace all jobs in the future. However, some believe AI won’t replace jobs, but simply make us more productive at what we already do, while a few think it will create more jobs.

Whatever your stance, the reality is that big tech companies across the world are implementing hiring freezes or laying off employees at a worrying rate — for reasons pertaining to AI or other matters.

Just today, Spotify, another Swedish tech giant, announced that it will lay off around 1,500 employees, or 17% of its workforce, to bring down costs.




Tree-planting search engine Ecosia launches ‘green’ AI chatbot

With COP28 underway in Dubai making it again glaringly obvious just how little lawmakers are prepared to bend for the sake of future generations of Earthlings, the release of the first “green filter” generative AI search chatbot could not have been more timely. 

Berlin-based Ecosia, the world’s largest not-for-profit search engine, hopes the launch of its new product will assist users in making better choices for the planet, and further differentiate its offerings from the “monolithic giants” of internet search. 

Powered by OpenAI’s API, Ecosia’s chatbot has a “green answers” option. This triggers a layered green persona that provides users with more sustainable results and answers, such as suggesting train rides over air travel.

GenAI + DMA = search market disruption?

Ecosia, which uses the ad revenue from its site (read, all its profits) to plant trees across the globe, is among the first independent search engines to roll out its own GenAI-powered chatbot. When speaking to TNW last month, Ecosia founder and CEO Christian Kroll stated how important it was for small independent players to stay up to date with the technology. 

Further, he highlighted the opportunities generative AI could present in terms of disrupting the status quo in the internet search market. “I think there is potential for us to innovate as well — and maybe even leapfrog some of the established players,” he said. 

Upon the launch of the company’s “green chatbot,” Kroll today added that the past year had introduced more change to the internet search landscape than the previous 14 combined (Ecosia was founded in 2009). “Generative AI has the potential to revolutionise the search market — no longer does it cost hundreds of billions to develop best-in-class search technologies,” he said, adding that Ecosia was targeting a “global increase in search engine market share.” 

Something else that could potentially disrupt the market is the coming into play of the EU Digital Markets Act. From March 2024 onwards, consumers will no longer be “encouraged” to use default apps on their devices (say Safari web browser on an iPhone, or Google Maps on an Android device). This may come to include offering users a “choice screen” when setting up a device, which would invite them to select which browsers, search engines, and virtual assistants to install, rather than defaulting to the preferences of Apple and Google. Ecosia says it is “pushing hard” for this provision.

Green chatbot powered by clean energy

Many companies pay lip service to sustainability. Ecosia actually puts its money where its mouth is. A few years ago, its founder turned Ecosia into a steward-owned company. This means that no shares can be sold at a profit or owned by people outside of the company. In addition, no profits can be taken out of the company — as previously mentioned, all profits go to Ecosia’s tree-planting endeavours. 

“It [tree planting] is one of the most effective measures we can take to fight the climate crisis. But unfortunately, it’s often not done properly. So that’s why it also gets a lot of criticism,” Kroll told TNW. 

“We’re trying to define the standards of what good tree planting means. So first of all, you count the trees that survived, not just the ones that you have planted — then you also have to check on them.” This, we should add, falls under the purview of Ecosia’s Chief Tree Planting Officer. To date, the community has planted over 187,000,000 trees.

In addition, Ecosia’s search engine is powered by solar energy — producing 200% of the energy required by its server usage and broader operations.

LLMs and CO2: still an undisclosed relationship

You may ask how adding generative AI to a search function is compatible with an environmental agenda. After all, Google’s use of generative AI alone could use as much energy as a small-ish country.

Ecosia admits that it does not yet have “oversight of the carbon emissions created by LLM-based genAI functions,” since OpenAI does not openly share this information. However, initial testing indicates that the new GenAI function will increase CO2 emissions by 5%, Ecosia said, for which it will increase investment in solar power, regenerative agriculture, and other nature-based solutions. 

Environmental credentials aside, a search engine still has to perform when it comes to its core function. “For us to compete against monolithic giants that have a 99% market share, we have to offer our users a product they’ll want to use day in, day out,” Michael Metcalf, chief product officer at Ecosia, shared. “That means not only offering a positive impact on climate action, but a best-in-class search engine that can go head-to-head with the likes of Bing and Google.” 

Metcalf added that user testing had shown very positive feedback on the company’s sustainability-minded AI chatbot. “We’re going to market with generative AI products before peers precisely because we want to grow: Grow our user base, grow our profits, and then grow our positive climate impact — which is mission critical for our warming planet.”



Silo AI releases checkpoint on mission to democratise LLMs

A year has passed since OpenAI unleashed ChatGPT on the world and popularised terms like foundation model, LLM, and GenAI. However, the promised benefits of generative AI technology are still far more likely to be realised by English speakers than by speakers of other languages.

There are over 7,000 languages in the world. Yet, most large language models (LLMs) work far more effectively in English. Naturally, this threatens to amplify language bias when it comes to access to knowledge, research, innovation — and competitive advantage for businesses. 

In November, Finland’s Silo AI released its multilingual open European LLM Poro 34B, developed in collaboration with the University of Turku. Poro, which means reindeer in Finnish, has been trained on Europe’s most powerful supercomputer, LUMI, in Kajaani, Finland. (Interestingly, LUMI runs on AMD architecture, as opposed to the Nvidia hardware that is all the rage for LLM training.)

Along with Poro 34B, the company unveiled a research checkpoint program that will release checkpoints as model training progresses (the first three checkpoints were announced with the model last month).

Now, the company, through its branch SiloGen, has trained more than 50% of the model and has just published the next two checkpoints in the program. With these five checkpoints now complete, Poro 34B has shown best-in-class performance for low-resource languages like Finnish (compared to Llama, Mistral, FinGPT, etc) — without compromising performance in English. 

Research Fellow Sampo Pyysalo from TurkuNLP says that they expect to have trained the model fully within the next few weeks. As the next step, the model will add support for other Nordic languages, including Swedish, Norwegian, Danish, and Icelandic. 

“It’s imperative for Europe’s digital sovereignty to have access to language models aligned with European values, culture and languages. We’re proud to see that Poro shows best-in-class performance on a low-resource language like Finnish,” Silo AI’s co-founder and CEO, Peter Sarlin, told TNW. “In line with the intent to cover all European languages, it’s a natural step to start with an extension to the Nordic languages.” 

Furthermore, SiloGen has commenced training Poro 2. Through a partnership with non-profit LAION (Large-scale Artificial Intelligence Open Network), it will add multimodality to the model.

“It’s likewise natural to extend Poro with vision,” Sarlin added. “Like textual data, we see an even larger potential for generative AI to consolidate large amounts of data of different modalities.”

LAION says it is “passionate about advancing the field of machine learning for the greater good.” In keeping with Silo AI’s intentions for its GenAI model and LAION’s overall mission to increase access to large-scale ML models and datasets, Poro 2 will be freely available under the Apache 2.0 Licence. This means developers will also be able to build proprietary solutions on top.

Silo AI, which calls itself “Europe’s largest private AI lab,” launched in 2017 on the idea that Europe needed an AI flagship. The company is based in Helsinki, Finland, and builds AI-driven solutions and products to enable smart devices, autonomous vehicles, industry 4.0, and smart cities. Currently, Silo AI counts over 300 employees and also has offices in Sweden, Denmark, the Netherlands, and Canada.



DeepMind’s AI has found more new materials in a year than scientists have in centuries

Google DeepMind researchers have trained a deep learning model to predict the structure of over 2.2 million crystalline materials — 45 times more than the number discovered in the entire history of science.

Of the two million-plus new materials, some 381,000 are thought to be stable, meaning they wouldn’t decompose — an essential characteristic for engineering purposes. These new materials have the potential to supercharge the development of key future technologies such as semiconductors, supercomputers, and batteries, said the British-American company.

Modern technologies, from electronics to EVs, can make use of just 20,000 inorganic materials. These were largely discovered through trial and error over centuries. Google DeepMind’s new tool, known as Graph Networks for Materials Exploration (GNoME), has discovered hundreds of thousands of stable ones in just a year.

Of the new materials, the AI found 52,000 new layered compounds similar to graphene that could be used to develop more efficient superconductors — crucial components in MRI scanners, experimental quantum computers, and nuclear fusion reactors. It also found 528 potential lithium ion conductors, 25 times more than a previous study, which could be used to boost the performance of EV batteries. 

To achieve these discoveries, the deep learning model was trained on extensive data from the Materials Project. The programme, led by the Lawrence Berkeley National Laboratory in the US, has used similar AI techniques to discover about 28,000 new stable materials over the past decade. Google DeepMind has expanded this number eight-fold, in what the company calls an “order of magnitude expansion in stable materials known to humanity.” 

While the new materials are technically just predictions, DeepMind researchers say independent experimenters have already made 736 of them, verifying their stability. A team from the Berkeley Lab has also been using autonomous robots to synthesise materials discovered through the Materials Project, as well as the new treasure trove unearthed by DeepMind. As detailed in this study, the autonomous AI-powered robot was able to bring 41 of 58 predicted materials to life in just 17 hours.

“Industry tends to be a little risk-averse when it comes to cost increases, and new materials typically take a bit of time before they become cost-effective,” Kristin Persson, director of the Materials Project, told Reuters. “If we can shrink that even a bit more, it would be considered a real breakthrough.” 

DeepMind researchers say they will immediately release data on the 381,000 compounds predicted to be stable and make the code for the AI publicly available. By giving scientists a full catalogue of promising ‘recipes’ for new candidate materials, the company said it hopes to speed up discovery and drive down costs.

The unveiling of GNoME comes on the heels of several impressive developments at Google DeepMind, which was formed in April when UK-based DeepMind and US-headquartered Google Brain merged into a single AI research unit. The most recent of these is the launch of the world’s most accurate 10-day global weather forecasting system.

Dutch biotech startup bags €22M for proprietary generative AI model

It has been nearly a year since OpenAI unleashed ChatGPT on the world, and it seems as if no one (at least in tech) has stopped talking about generative AI since. Meanwhile, the applications of GenAI go way beyond chatbots and copyright-grey-area image ‘artistry’. 

For instance, Cradle, a biotech software startup out of Delft, Netherlands, is using it to help biologists engineer improved proteins, making it easier and quicker to bring synthetic bio-solutions for human and planetary health to market.

In synthetic biology, people use engineering principles to design and build new biological systems. Scientists can use parts of DNA or other biological elements to give existing organisms new abilities. 

This has tremendous potential: programming bacteria to produce medicine, non-animal whey proteins, or detergents and plastics without petrochemicals; engineering yeast to make biofuel; or creating crops that can survive in tough environments… the list goes on. 

“At the core of all these products are proteins, which are little cellular machinery,” Stef van Grieken, co-founder and CEO of Cradle, told TNW a little while back. “If you want to change them to be better for the application you have in mind, you have to alter the DNA sequence. That’s a really complicated task because DNA is basically an alien programming language.” 

Cradle is based in Delft, Netherlands, and Zurich, Switzerland.

This whole process takes a long time — and costs a lot of money. That is where Cradle’s web-based software comes in. When prompted, Cradle’s AI platform generates a molecule sequence with a higher probability of matching what researchers are looking for than if scientists had to try every option themselves. 

This means fewer, more successful experiments, drastically reducing R&D time and costs. For most projects, work can proceed at twice the speed of the industry average.
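The generate-and-rank loop behind this kind of tool can be illustrated with a toy sketch. Nothing below reflects Cradle’s actual model: the variant generator, the scoring heuristic, and the sequences are all made up for illustration.

```python
# Toy generate-and-rank loop: propose sequence variants, score each with a
# model, and send only the top-ranked candidates to the wet lab.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def propose_variants(seed: str, n: int, rng: random.Random):
    """Generate n single-point mutants of the seed sequence."""
    variants = []
    for _ in range(n):
        pos = rng.randrange(len(seed))
        variants.append(seed[:pos] + rng.choice(AMINO_ACIDS) + seed[pos + 1:])
    return variants

def score(seq: str) -> float:
    """Stand-in for a learned model of the desired property (toy heuristic)."""
    return seq.count("K") + 0.5 * seq.count("E")

def top_candidates(seed: str, n_variants=100, k=5, rng_seed=0):
    """Rank the proposed variants by predicted fitness and keep the best k."""
    rng = random.Random(rng_seed)
    ranked = sorted(propose_variants(seed, n_variants, rng), key=score, reverse=True)
    return ranked[:k]

best = top_candidates("MKTAYIAKQR")  # a hypothetical seed protein
```

The point of the ranking step is economic: every sequence that never reaches the lab is an experiment that never has to be paid for.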

Putting AI to good use for future generations

Cradle’s proprietary AI model has been trained on billions of protein sequences, as well as data generated in the company’s own wet laboratory.

“It dramatically increases the probability that your molecule has the characteristics that you care about when you try it out in your laboratory, and that significantly accelerates R&D time, and therefore dramatically reduces the cost of the R&D phase for bringing these types of products to market,” van Grieken, previously Senior Product Manager for Google AI, added. 

Why did van Grieken leave a cosy job at Google to set up a synbio AI startup in the Netherlands? “Because I have two young daughters who are three and six. And they’re gonna ask me, 10 years from now, ‘what did you do when the earth caught fire?’ And the answer could not be ‘I was a plumber for an advertising company’.”

“For me personally, success would be helping a company to make a bio-based version that replaces some petrochemical or animal-based product, and does that better and cheaper. And that more of these companies start existing in the world.”

Increased demand for synbio software

Cradle, founded by van Grieken and bioengineer Elise de Reuse in 2021, has already signed up industry partners including Johnson & Johnson, and just announced a $24mn Series A funding round. It is now working on more than 12 R&D projects, including vaccines and antibodies. 

The Cradle team currently consists of 20 people. Credit: Cradle

The team currently consists of 20 people, split between Delft, the Netherlands, and Zurich, Switzerland. The latest funding round was led by Index Ventures with participation from Kindred Capital. Angel investors also took part in the round, including Chris Gibson, co-founder and CEO of Recursion, and Tom Glocer, former CEO of Thomson Reuters and lead director at Merck.

The fresh injection of capital will allow Cradle to grow the team, build out additional laboratory and engineering facilities in Amsterdam, and continue to develop its platform and user experience to allow it to onboard more customers, in line with growing demand. 

Want engineering superpowers? This GenAI startup is here to help

Name an application that has not been attributed to generative AI by now. Anything from your virtual boyfriend or girlfriend to vaccines and the energy transition will apparently be solved by the tech currently sweeping the globe. PhysicsX, a UK-based startup “on a mission to reimagine simulation for science and engineering using AI,” wants to add advanced engineering superpowers to the list.

The company, with a team of more than 50 simulation engineers, machine learning and software engineers, and data scientists, is building AI that it says will dramatically accelerate accurate physics simulation. This will enable generative engineering solutions for sectors including aerospace, automotive, renewables, and materials production. 

PhysicsX says it is looking to solve engineering bottlenecks. These include time-consuming physics simulation, painstaking reconciliation of virtual simulation and real-world data collection, as well as the many limitations of optimising over a large design space. 
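The core surrogate-modelling idea — replace a slow solver with a cheap model fitted to its outputs — can be sketched in a few lines. This is a generic illustration under toy assumptions (the "simulator" is a stand-in function), not PhysicsX’s technology:

```python
# Minimal surrogate-model sketch: run the expensive simulator once on a
# coarse grid, then answer design queries by fast interpolation instead
# of re-simulating every candidate design.
import bisect
import math

def expensive_simulation(x: float) -> float:
    """Stand-in for a slow physics solver (toy response curve)."""
    return math.sin(x) + 0.1 * x * x

# Offline phase: sample the simulator on a coarse grid of design points.
grid = [i * 0.1 for i in range(0, 31)]          # designs 0.0 .. 3.0
samples = [expensive_simulation(x) for x in grid]

def surrogate(x: float) -> float:
    """Online phase: fast linear interpolation over the precomputed samples."""
    i = min(max(bisect.bisect_left(grid, x), 1), len(grid) - 1)
    x0, x1 = grid[i - 1], grid[i]
    y0, y1 = samples[i - 1], samples[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# The surrogate answers in microseconds and stays close to the solver,
# which is what makes optimising over a large design space tractable.
error = abs(surrogate(1.234) - expensive_simulation(1.234))
```

Deep learning surrogates generalise this picture: instead of interpolating one scalar, a network learns the full simulation output across a high-dimensional design space.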

The company, co-founded by theoretical physicist and former head of R&D at Formula One team Renault (Alpine) Robin Tuluie, just announced a €30mn Series A funding round led by US-based VC firm General Catalyst. 

“Engineering design processes were transformed by numerical simulation and the availability of high-performance compute infrastructure,” Tuluie, also the company’s co-CEO, commented.

“The move from numerical simulation to deep learning represents a similar leap and will unlock new levels of product performance and ways of practising engineering itself. PhysicsX exists to help pioneer and enable that transformation, giving engineers and manufacturers superpowers in bringing new technologies to the real world.” 

Coming up against the limit of today’s computational tools

Standard Industries, NGP Energy, Radius Capital, and KKR co-founder and co-executive chairman, Henry Kravis, also participated in the round. The capital will allow PhysicsX to grow across customer delivery and product. It will also support its fundamental research to advance its AI models and methods. 

“Our customers are designing the most important technologies of our time, including wind turbines, aircraft engines, electric vehicles, semiconductors, metals, and biofuels,” said Jacomo Corbo, co-founder and co-CEO. However, those customers are now exceeding the limits of today’s computer-aided tools, which is the problem PhysicsX has set its technology against. 

“We’re delighted that this financing will enable us to partner more deeply with our customers to enable breakthrough engineering,” Corbo added. 

These AI agents could cut your workweek to just 3 days

Most of us would love a break from the 9-to-5 grind, as evidenced by the recent hype over the four-day workweek. But a new UK startup promises to help you toil even less than that — while remaining just as productive.  

The company is called Tomoro, and it’s on a mission to cut the working week to just three days within the next five years. It aims to achieve this using AI “agents” — essentially, large language models (LLMs) that can make decisions freely within defined guardrails, as opposed to rule-based machines. These agents will act like robotic personal assistants.
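The guardrail idea can be made concrete with a minimal sketch: the model proposes freely, but the harness only executes actions on an allow-list and escalates everything else. The action names and the decision heuristic below are hypothetical, not Tomoro’s implementation:

```python
# Minimal "agent within guardrails" sketch. The model may propose any
# action, but the harness only executes allow-listed ones; anything
# else is escalated to a human instead of being run.

ALLOWED_ACTIONS = {"draft_email", "schedule_meeting", "summarise_document"}

def model_propose(task: str) -> str:
    """Stand-in for an LLM deciding what to do (keyword heuristic here)."""
    if "meeting" in task:
        return "schedule_meeting"
    if "email" in task:
        return "draft_email"
    return "delete_database"  # a bad proposal, to show the guardrail firing

def run_agent(task: str) -> str:
    proposal = model_propose(task)
    if proposal in ALLOWED_ACTIONS:
        return f"executed:{proposal}"
    return "escalated_to_human"  # guardrail: unlisted actions never run

print(run_agent("set up a meeting with finance"))  # executed:schedule_meeting
print(run_agent("clean up old records"))           # escalated_to_human
```

The design choice is that freedom lives in the proposal step while safety lives in the execution step, so the model can reason openly without ever holding the keys.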

“We’re not talking about simple automation or the removal of repetitive tasks,” explained founder Ed Broussard. “Tomoro will be integrating synthetic employees into businesses alongside real people that have the ability to reason, grow, increase their knowledge, adapt their tone and problem solve. This is a huge departure from what’s currently on the market.”

Originally from Scotland, Broussard is best known as the founder of data and AI consultancy Mudano, which was acquired by Accenture in 2020 for an undisclosed sum. The entrepreneur’s latest venture, Tomoro, is working “in alliance” with OpenAI, the creator of ChatGPT (which just sacked its CEO Sam Altman in one of tech’s biggest upsets this year). 

Headquartered in London, Tomoro says it has no intention of replacing people’s jobs, only of enhancing their productivity. “We need to stop thinking about AI as a like for like job replacement — its value extends far beyond that,” said Broussard. Tomoro aims to accelerate workplace efficiency at large corporations by up to eight times the current rate.

To execute its ambitious plan, Tomoro says it will recruit a “world class” R&D team, leaders in behavioural science, and AI experts. Having launched just this week, the company has already secured its first client, British insurance firm PremFina.

“AI is as big a societal shift as the invention of farming,” added Broussard. “Imagine telling a hunter-gatherer that in the future there would be an abundance of food, and it would take no effort to eat. That’s what AI will do for productivity in the workplace.” 

The launch of Tomoro comes just weeks after Elon Musk said that he believed AI would replace all jobs in the future and we would all live on a universal income (doing god knows what with all that free time). 

For Broussard, however, the future of the workplace is AI working in tandem with its human overlords.
