Artificial Intelligence


‘Unsafe’ AI images proliferate online. Study suggests 3 ways to curb the scourge

Over the past year, AI image generators have taken the world by storm. Heck, even our distinguished writers at TNW use them from time to time. 

Truth is, tools like Stable Diffusion, Latent Diffusion, or DALL·E can be incredibly useful for producing unique images from simple prompts — like this picture of Elon Musk riding a unicorn.

But it’s not all fun and games. Users of these AI models can just as easily generate hateful, dehumanising, and pornographic images at the click of a button — with little to no repercussions. 

“People use these AI tools to draw all kinds of images, which inherently presents a risk,” said researcher Yiting Qu from the CISPA Helmholtz Center for Information Security in Germany. Things become especially problematic when disturbing or explicit images are shared on mainstream media platforms, she stressed.


While these risks seem quite obvious, there has been little research undertaken so far to quantify the dangers and create safe guardrails for their use. “Currently, there isn’t even a universal definition in the research community of what is and is not an unsafe image,” said Qu. 

To illuminate the issue, Qu and her team investigated the most popular AI image generators, the prevalence of unsafe images on these platforms, and three ways to prevent their creation and circulation online.

The researchers fed four prominent AI image generators with text prompts from sources known for unsafe content, such as the far-right platform 4chan. Shockingly, 14.56% of images generated were classified as “unsafe,” with Stable Diffusion producing the highest percentage at 18.92%. These included images with sexually explicit, violent, disturbing, hateful, or political content.

Creating safeguards

The fact that so many unsafe images were generated in Qu’s study shows that existing filters do not do their job adequately. The researcher developed her own filter, which scores a much higher hit rate in comparison, but she also suggests a number of other ways to curb the threat.

One way to prevent the spread of inhumane imagery is to program AI image generators to not generate this imagery in the first place, she said. Essentially, if AI models aren’t trained on unsafe images, they can’t replicate them. 

Beyond that, Qu recommends blocking unsafe words from the search function, so that users can’t put together prompts that produce harmful images. For those images already circulating, “there must be a way of classifying these and deleting them online,” she said.
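Qu’s own filter isn’t public, but the word-blocking idea she describes can be illustrated with a minimal sketch (the blocklist and function below are invented for illustration, not the study’s implementation):

```python
import re

# Hypothetical blocklist: a real deployment would use a curated, regularly
# updated list, likely alongside an ML-based safety classifier.
BLOCKED_TERMS = {"gore", "beheading", "nsfw"}

def is_prompt_allowed(prompt: str) -> bool:
    """Allow the prompt only if no blocked term appears as a whole word."""
    words = set(re.findall(r"[a-z]+", prompt.lower()))
    return words.isdisjoint(BLOCKED_TERMS)
```

Keyword filters like this are easy to evade through misspellings and paraphrasing, which is why Qu also calls for classifying and deleting unsafe images already in circulation.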

With all these measures, the challenge is to find the right balance. “There needs to be a trade-off between freedom and security of content,” said Qu. “But when it comes to preventing these images from experiencing wide circulation on mainstream platforms, I think strict regulation makes sense.” 

Aside from generating harmful content, the makers of AI text-to-image software have come under fire for a range of other issues, such as stealing artists’ work and amplifying dangerous gender and race stereotypes.

While initiatives like the AI Safety Summit, which took place in the UK this month, aim to create guardrails for the technology, critics claim big tech companies hold too much sway over the negotiations. Whether that’s true or not, the reality is that, at present, proper, safe management of AI is patchy at best and downright alarming at its worst.  


How Europe is racing to resolve its AI sovereignty woes

While not yet as illustrious as its North American counterparts OpenAI, Anthropic, or Cohere, Europe’s own cohort of generative AI startups is beginning to crystallise. Just yesterday, news broke that Germany’s Aleph Alpha had raised €460mn, in one of the largest funding rounds ever for a European AI company. 

The European tech community received news of the investment with some enthusiasm. While much focus has been on how the EU will regulate the tech (and how the UK will or will not), there hasn’t been a whole heap of attention on how the bloc will support artificial intelligence innovation and reduce the risk of being left behind in yet another technological leap. 

During a press conference about the investment, Germany’s Vice Chancellor and Minister for Economic Affairs Robert Habeck stressed the importance of supporting domestic AI enterprises. 

“The thought of having our own sovereignty in the AI sector is extremely important,” Habeck said. “If Europe has the best regulation but no European companies, we haven’t won much.”

Transparency, traceability, and sovereignty

At the same press conference, Jonas Andrulis, Aleph Alpha’s founder and CEO, stated that the investors participating in the latest round (including the likes of SAP, Bosch Ventures, and owners of budget supermarket giant Lidl) were all partners the company had worked with before. Notably, all but a small contribution from Hewlett Packard came from European investors or grants. 

“What was so important for me, right from the beginning with our research, is transparency, traceability, and sovereignty,” Andrulis added, playing on ethical considerations that could potentially set a European LLM apart, as well as geopolitical objectives. 

Aleph Alpha is building a large language model (LLM) similar to OpenAI’s GPT-4, but focusing on serving corporations and governments, rather than individual consumers. But there are other things separating the two companies — OpenAI has 1,200 employees, whereas 61 people work at Aleph Alpha. Furthermore, the former has secured over €11bn in funding.

However, with the construction of the €2bn Innovation Park Artificial Intelligence (Ipai) in Aleph Alpha’s hometown of Heilbronn in southwest Germany, the startup may end up receiving the boost it needs to level the playing field. Construction of Ipai is slated for completion in 2027, by which time it will be able to accommodate 5,000 people. The project is supported by the Dieter Schwarz Foundation, which also participated in Aleph Alpha’s latest funding round.

Europe’s lack of an AI contender is a geopolitical issue

Founded in 2019, Aleph Alpha is not a newcomer to the game. In 2021, before ChatGPT-induced investment hysteria, the company raised €23mn, in a round led by Lakestar Advisors. Such an amount has, of course, since been overshadowed by numbers in the billions of dollars on the other side of the Atlantic. However, Aleph Alpha did raise another €100mn, backed by Nvidia among others, in June this year.

And the German startup isn’t the only European player raking in the dough. Only a few weeks after its founding in May this year, France’s Mistral AI raised €105mn in what was reportedly the largest-ever seed round for a European startup.

Presenting a challenge to Aleph Alpha’s claim to the European GenAI throne, Mistral is also developing an LLM for enterprises, although its very first model, Mistral 7B, is totally free to use. In its pitches to investors, Mistral, founded by former Google and Meta employees, reportedly warned it was a “major geopolitical issue” that Europe did not have its own serious contender in generative AI.

Meanwhile, Germany’s is not the only government looking to shore up domestic generative AI capabilities. The Netherlands recently commenced the development of its very own homegrown LLM to provide what it said would be a “transparent, fair, and verifiable” GenAI alternative. 


Musk on how to turn the UK into a ‘unicorn breeding ground’

If you are generally interested in technology and are not by now aware of yesterday’s meeting between Rishi Sunak and Elon Musk at the UK’s AI Safety Summit at Bletchley Park, you have probably been living under a rock.

As others have already commented, the session resembled a fan meeting, or at the very least a mutual admiration club, more than an actual interview. The PM’s fawning giggles were perhaps not entirely suited to the gravity of the occasion bringing the two together (but hey, who are we to judge).

You have probably also read Elon Musk’s statements along the lines of “no work will be necessary in the future,” and that any AI safety conference not including China would have been “pointless.” He also predicted that humans will form deep friendships with AI, once the technology becomes… intelligent enough.



But in material that was perhaps less quote-friendly or headline-making, prompted by a question from the Summit’s selected audience, the tech tycoon also discussed what he believes it would take to shift the culture in the UK so that it could become “a real breeding ground for unicorn companies,” and to make founding a startup a more obvious career choice for technical talent.



“There should be a bias towards supporting small companies,” Musk said, referring to policy and investment. “Because they are the ones that really need nurturing. The larger companies really don’t need nurturing. You can think of it as a garden — if it’s a little sprout it needs nurturing, if it’s a mighty oak it does not need quite as much.” 

After praising London as a leading centre for AI in the world (behind the San Francisco Bay Area), accompanied by some smug nodding from the PM, Musk added that it would take sufficient infrastructure support. 

“You need landlords that are willing to rent to new companies,” Musk said. “You need law firms and accountants that are willing to support new companies. And that is a mindset change.”

He further stated that he believes this cultural shift is already happening in the UK; people just need to decide that “this is a good thing.” Getting Brits more comfortable with failure might be a trickier shift to facilitate.

“If you don’t succeed with your first startup it shouldn’t be a catastrophic sort of career-ending thing,” Musk mused. “It should be more like ‘ok, you gave it a good shot, and now try again.’



“Most startups fail. You hear about the startups that succeed, but most startups consist of a massive amount of work, followed by failure. So it’s a high-risk, high-reward situation.”


How this Berlin startup deploys AI to make preventative healthcare accessible

There are plenty of conversations around how AI can progress healthcare. And indeed, it has numerous applications in the form of diagnostics and accelerated drug discovery. However, wouldn’t it be even better if artificial intelligence could help prevent us from getting sick in the first place?

Anyone who has ever tried to get past (or even to) a GP in the Netherlands, or in any other European country feeling the healthcare capacity crunch, knows the process can leave you almost as drained as being unwell itself. Often, getting access to proper care becomes a matter of being able to advocate for yourself, at times across language barriers.

HealthCaters is a female-founded startup based in Berlin that wants to change the status quo of healthcare gatekeeping. Its founders say they are not simply looking to make money and grow the company. Rather, they want to change the future of preventative medicine, in a way that is accessible and affordable for all. And when speaking to its two co-founders, Yale-educated Dr Lily Kruse and Tanya Eliseeva (an ex-VC and former head of business development at medical travel platform Medigo), I feel that I believe them.

“I was a heart surgeon in the US. And for me, the most important thing was to help people and to make sure that they are healthy. And I realised quickly that medicine is not so much about that. It’s more about treating people who are already sick,” Dr Kruse says. 

“But if you think about it, there’s so many people that are getting sick, but are not sick yet. It was quite emotional to have somebody on the operating table, knowing that this could have been prevented.”

Preventative health beyond statistics

According to the WHO, by 2030, the proportion of total global deaths due to chronic diseases (or noncommunicable diseases, NCDs) is expected to reach 70% — up from 61% in 2005. Apart from the tremendous individual suffering they inflict, lifestyle-influenced diseases, such as heart disease, chronic respiratory disease, and diabetes, are also a huge burden on health services worldwide. 

Considering that in 2011, Harvard Business School predicted that NCDs would cost society more than $30 trillion over the coming two decades, it is baffling that national healthcare plans hardly, if ever, take preventative measures into account.

Hoping to function as an extension of the existing healthcare system, HealthCaters has developed what it calls DIY health screenings, using a portable testing station and an accompanying app. Taking around 30 minutes, the system guides the user through a series of steps to measure blood pressure, cholesterol, lung function, heart rate, and markers of kidney, liver, and metabolic health.

HealthCaters’ portable testing station. The screening takes about 30 minutes. Credit: HealthCaters

A proprietary algorithm then analyses the results and provides a comprehensive risk assessment, including probability and factors impacting each risk. The app also provides practical advice on how to mitigate said risks, based on scientific evidence. 

“It’s not just a sheet of different values, and then a thumbs up, like you’re okay or you’re not okay, and that’s it,” Eliseeva says. “We put together a very comprehensive risk assessment that allows you to understand what your actual risks are. What do I need to pay attention to? And for every risk that we identify, we show the probability and the factors that impact that risk.” 

Unlike at a doctor’s office or a test lab, the user doesn’t just receive raw numbers: the algorithm also takes lifestyle into account to determine what those numbers mean for each individual in terms of risk.

“Our goal is preventative medicine that is not just statistics,” Eliseeva continues. “It’s not about having a 10% chance of developing something. It’s more like ‘what do I have to care for, in order for me to stay healthy?’”
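HealthCaters’ algorithm is proprietary, but the idea of letting lifestyle shift what the same measurements mean can be sketched in a toy example (all thresholds and weights below are invented for illustration, not the company’s actual model):

```python
def cardiovascular_risk(systolic_bp: float, cholesterol: float,
                        smoker: bool, sedentary: bool) -> float:
    """Toy risk score in [0, 1]; thresholds and weights are illustrative."""
    score = 0.0
    if systolic_bp >= 140:   # hypertension threshold (illustrative)
        score += 0.35
    if cholesterol >= 6.2:   # high total cholesterol, mmol/L (illustrative)
        score += 0.25
    # Lifestyle factors shift what the same measurements mean per individual
    if smoker:
        score += 0.25
    if sedentary:
        score += 0.15
    return min(score, 1.0)
```

The point of such a scheme is that two people with identical measurements can receive different risk assessments, along with the factors driving each one.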

‘Explosive demand’ from corporates

HealthCaters offers its services to corporates, which can set up health screening days for employees with the portable boxes and access to the app with personalised plans. Customers include the likes of IBM, WeWork, and Barmenia. Furthermore, the startup just opened its first centre for the public in Berlin (prices start at €39).

“We are seeing explosive demand from corporates right now,” Eliseeva says, adding that the company signed three new clients last month — no small thing for a startup its size. 

“Usually we do it in the framework of a health day or health week, which means we go there and we set up, and employees can book slots. They come to the health case, and they can do the screening themselves,” Dr Kruse explains. “So it’s completely self-managed.” 

Sometimes, the process is arranged by an insurance company that acts as the intermediary, which also allows corporations to lower premiums for employee insurance. 

Growing a team that understands tech for health

While any startup in preventative or diagnostic healthcare over the past few years may have had to battle the ghost of Theranos in the minds of investors, HealthCaters recently secured $1.2mn in seed funding led by Barmenia Next Strategies. It will use the funds to expand the team, on both the tech and operational business sides, as well as to “double down” on the product.

“It’s so important to make sure that there is a strong team behind your tech that understands at what point and how to take all this technology [generative AI] in, and to implement it without jeopardising the actual essence of the product,” Eliseeva says. “Because it’s not tech for tech, right? It’s tech for health.” 

Lack of accessibility increases anxiety

The company has also held pop-up events throughout Europe. Circling back to the Dutch healthcare system referenced earlier in this article, when HealthCaters held an event in Amsterdam earlier this year, the audience was partly unexpected.

“You always think that out-of-pocket expenses for healthcare are upper-middle class territory. But many of the people who came to us were taxi drivers,” Eliseeva recalls.



“And they said, ‘Well, I tried to have a doctor’s appointment, they asked me to wait for three months. When I finally went, it was over in literally two minutes. They looked at me and said — you’re okay, why are you here?’ They said that it makes them very anxious, and this word, anxious, is repeated by so many people we talk to [in relation to healthcare].” 

The company has plans to further integrate the product with wearables and data such as genetic screenings and family history. With so much personal medical data being involved, I cannot help but ask about privacy and security. Eliseeva highlights that it was not something she and Dr Kruse would ever cut corners on as it is a “cornerstone” of the product. 

“Before we hired a single person, we already asked ourselves, how are we going to deal with data security? We even interviewed on the basis of what they would do with this data,” she stated, adding that the company runs its own trial audits and passes every single criterion.


Why AI progress hitting the brakes is more likely than world domination

There’s a looming global computing capacity crunch that cannot be sustainably addressed the way we’re doing things right now. 

Simply put, between artificial intelligence (AI) models growing exponentially and an ongoing global digital transformation, data centres are running out of space. Their vacancy rates are hitting record lows and prices are rising in response to demand, which is cause for much unease among tech leaders.

If this trend continues, at some point, we will reach a juncture where we can no longer accomplish all the things that technology theoretically allows us to do, because our capacity to process data will be constrained. 

Perhaps the biggest worry is that AI’s transformative potential, which we’re only just beginning to tap into, will be throttled by purely physical constraints. This will hinder new discoveries and the development of more advanced machine learning (ML) models, which is bad news for all, except AI apocalypse alarmists.

Is there any way to avoid the computing capacity crisis? Since massively scaling back our computational demands isn’t really an option, the only alternative is to significantly boost capacity, which boils down to two available courses of action: build more data centres and develop better digital infrastructure.

But that’s easier said than done — here’s why. 

Why building more data centres isn’t the answer

Until now, increasing demand for computing capacity has been met, in part, by building more data centres, with conservative estimates putting the growth of real estate taken up by data centres at around 40% per year. That figure is expected to remain fairly steady, as supply issues, power challenges, and construction delays severely limit capacity expansion.

In other words, today, demand cannot be simply met by ramping up data centre construction. 

Nor should that be something we aspire to. Each of these football-field-sized warehouses gobbles up gargantuan amounts of energy and water, placing severe strain on the environment, both locally and globally. A single data centre can consume as much electricity and water as 50,000 homes and the cloud’s carbon footprint already exceeds that of the aviation industry.

Credit where credit is due — data centres have come a long way in minimising their environmental impact. This is in large part thanks to a fierce sustainability race, which has propelled innovation, particularly as it relates to cooling and energy efficiency. Nowadays, you’ll find data centres in underground mines, in the sea, and using other natural cooling opportunities such as fjord water flows, all to reduce energy and water consumption. 

The trouble is, this isn’t scalable globally, nor is boiling our seas a viable path forward. Erecting more data centres — no matter how efficient — will continue to wreak havoc on local ecosystems and impede national and international sustainability efforts. All while still failing to meet the demand for compute resources. 

Still, two chips are better than one, unless…

Think inside the box

… unless that single chip operates at twice the speed. To avoid the capacity crunch, all hopes rest on improving the digital infrastructure, namely, the chips, the switches, the wires, and other components that can improve data speeds and bandwidth while consuming less energy. 

Let me reiterate — the evolution of AI depends on finding ways to transfer more data, without using more energy. 

Essentially, this means two things. First, the development of more powerful and AI-centric chips. Second, the enhancement of data transfer speeds.

1. Designing custom chips for AI

Existing digital infrastructure isn’t particularly well suited for the efficient development of ML models. General-purpose central processing units (CPUs), which continue to be the primary computing components in data centres, struggle with AI-specific tasks due to their lack of specialisation and computational efficiency. 

When it comes to AI, graphics processing units (GPUs) fare much better thanks to greater processing power, higher energy efficiency, and parallelism. That’s why everyone’s snatching them up, which has led to a chip shortage.

Yet GPUs inevitably hit the same brick wall. They’re not inherently optimised for AI tasks, leading to energy waste and suboptimal performance in handling the increasingly intricate and data-intensive demands of modern AI applications. 

That’s why companies such as IBM are designing chips tailored to AI’s computational demands that promise to squeeze out the most performance while minimising energy consumption and space.

2. Improving data transfer capacity

No modern AI model operates on a single chip. Instead, to get the most out of available resources, you assemble multiple chips into clusters. These clusters often form part of larger networks, each designed for specific tasks.

Accordingly, the interconnect, or the system facilitating communication between chips, clusters, and networks, becomes a critical component. Unless it can keep up with the speed of the rest of the system, it risks being a bottleneck that hinders performance. 
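A rough back-of-envelope model shows why: if the time to exchange results over the interconnect rivals the compute time itself, the link sets the pace of the whole cluster. The numbers below are illustrative, not measurements.

```python
def step_time(compute_s: float, bytes_to_exchange: float,
              link_bandwidth_bytes_per_s: float) -> float:
    """Time for one step when compute and transfer don't overlap."""
    transfer_s = bytes_to_exchange / link_bandwidth_bytes_per_s
    return compute_s + transfer_s

# Illustrative scenario: 10 ms of compute, 1 GB of results to exchange.
slow_link = step_time(0.010, 1e9, 100e9)  # 100 GB/s link dominates the step
fast_link = step_time(0.010, 1e9, 800e9)  # faster link, compute dominates
```

In the slow-link case, doubling chip speed barely helps, because half the step is spent waiting on the interconnect.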

The challenges for data transfer devices mirror those for chips: they must operate at high speeds, consume minimal energy, and occupy as little physical space as possible. With traditional electrical interconnects fast reaching their limits in terms of bandwidth and energy efficiency, all eyes are on optical computing — and silicon photonics, in particular.

Unlike electrical systems, optical systems use light to transmit information, providing key advantages in the areas that matter — photonic signals can travel at the speed of light and carry a higher density of data. Plus, optical systems consume less power and photonic components can be much smaller than their electrical counterparts, allowing for more compact chip designs. 

The operative words here are “can be.” 

The growing pains of cutting-edge tech

Optical computing, while extremely fast and energy-efficient, currently faces challenges in miniaturisation, compatibility, and cost. 

Optical switches and other components can be bulkier and more complex than their electronic counterparts, making the same level of miniaturisation difficult to achieve. As of now, we have yet to find materials that can act as an effective optical medium while also being scalable for high-density computing applications.

Adoption would also be an uphill battle. Data centres are generally optimised for electronic, not photonic, processing, and integrating optical components with existing electronic architectures poses a major challenge. 

Plus, just like any cutting-edge technology, optical computing has yet to prove itself in the field. There is a critical lack of research into the long-term reliability of optical components, particularly under the high-load, high-stress conditions typical of data centre environments.

And to top it all off — the specialised materials required in optical components are expensive, making widespread adoption potentially cost-prohibitive, especially for smaller data centres or those with tight budget constraints.

So, are we moving fast enough to avoid the crunch?

Probably not. Definitely not fast enough to stop building data centres in the short term.

If it’s any consolation, know that scientists and engineers are keenly aware of the problem and are working hard to find solutions that won’t destroy the planet, constantly pushing the boundaries and making significant advances in data centre optimisation, chip design, and all facets of optical computing.

My team alone has broken three world records in symbol rate for data centre interconnects using an intensity modulation and direct detection approach.

But there are serious challenges, and it’s essential to address them head-on for modern technologies to realise their full potential. 

Professor Oskars Ozoliņš received his Dr.sc.ing. degree in optical communications from Riga Technical University (Latvia) in 2013 and received a habilitation degree in physics with a specialization in optical communication from KTH Royal Institute of Technology in 2021. He is the author of around 270 international journal publications, conference contributions, invited talks/tutorials/keynotes/lectures, patents, and book chapters. You can follow him on LinkedIn here.


Amsterdam’s Orquesta raises €800,000 for no-code LLM gateway

Amsterdam-based startup Orquesta today announced an oversubscribed €800,000 pre-seed funding round. The company has developed a platform through which companies can integrate various Large Language Models (LLMs) directly into their business operations. 

Generative AI and LLMs have been developing at breakneck speed over the past year — a velocity seemingly matched by investor appetite for all things GenAI. Indeed, just last week, Europe’s homegrown contribution in the form of Mistral 7B was made available free of charge to the public.

Adding to the plethora of offerings, the technology itself will keep on developing rapidly (to where and to what end only time will tell). As such, organisations will undoubtedly struggle to keep up and, what’s more, quite probably end up with patchwork solutions as they try to integrate various models. (Who doesn’t love a good Frankenstein system?)  

Rather than relying on a fragmented AI tech stack, Orquesta proposes a single gateway no-code platform it says allows SaaS companies to centralise prompt management, streamline experimentation, collect feedback, and get real-time insights into performance and costs. 
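Orquesta’s platform is no-code, but the gateway pattern it embodies can be sketched in a few lines (the class and stub backends below are hypothetical, for illustration only): application code calls one interface, and the gateway routes each prompt to a configured backend.

```python
from typing import Callable, Dict

class LLMGateway:
    """Minimal sketch: route prompts to named backends via one interface."""
    def __init__(self) -> None:
        self.backends: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, backend: Callable[[str], str]) -> None:
        self.backends[name] = backend

    def complete(self, prompt: str, model: str) -> str:
        # One central place for logging, cost tracking, fallbacks, etc.
        return self.backends[model](prompt)

# Hypothetical stub backends standing in for real provider SDK calls
gateway = LLMGateway()
gateway.register("echo-small", lambda p: f"[small] {p}")
gateway.register("echo-large", lambda p: f"[large] {p}")
```

Because applications only ever talk to the gateway, swapping or adding an LLM provider becomes a configuration change rather than a code change.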

“The accelerated development of new AI functionalities leads to increased fragmentation, complexity, and a need for speed within the technology landscape for our clients,” said Sohrab Hosseini, co-founder of Orquesta. “This prevents the already scarce talent of our clients from focusing on strategic work and growth.”

Shortening release cycles

Hosseini, who founded Orquesta together with Anthony Diaz in 2022, said that with one LLM gateway, including all the necessary tooling, Orquesta was the “partner for them [clients] to remain competitive.” He further added that the company was very excited about the focus and multidisciplinary experience of the various VCs and angel investors who had joined to “match and support Orquesta’s level of ambition.”

The round was led by Dutch venture capital firm Curiosity VC and co-led by Spacetime, the investment office of Dutch entrepreneur Adriaan Mol. Meanwhile, angel investors include Koen Köppen (Mollie), Milan Daniels and Max Klijnstra (Otrium), and Arjé Cahn (Bloomreach).

“Due to the acceleration of AI capabilities and adoption, we expect the landscape of LLM providers and models to grow exponentially,” said Adriaan Mol, founder of Mollie, Messagebird, and Spacetime. “Orquesta has built a best-in-class no-code platform that allows engineers to focus on their proprietary product and shorten release cycles instead of managing LLM integrations, configurations, and rules.” 


Paris-based Mistral releases first generative AI model — and it’s totally free

Europe’s startup contribution to the generative AI bonanza, Mistral, has released its first model. Mistral 7B is free to download and be used anywhere — including locally. 

French AI developer Mistral says its Large Language Model is optimal for low latency, text summarisation, classification, text completion, and code completion. The startup has opted to release Mistral 7B under the Apache 2.0 licence, which has no restrictions on use or reproduction beyond attribution.

“Working with open models is the best way for both vendors and users to build a sustainable business around AI solutions,” the company commented in a blog post accompanying the release. “Open models can be finely adapted to solve many new core business problems, in all industry verticals — in ways unmatched by black-box models.” 

They further added that they believed the future will see many different specialised models, each adapted to specific tasks, compressed as much as possible, and connected to specific modalities.

Mistral has said that moving forward, it will specifically target business clients and their needs related to R&D, customer care, and marketing, as well as giving them the tools to build new products with AI. 

“We’re committing to release the strongest open models in parallel to developing our commercial offering,” Mistral said. “We’re already training much larger models, and are shifting toward novel architectures. Stay tuned for further releases this fall.”

Do not expect a user-friendly, ChatGPT-style web interface for engaging with the made-in-Europe LLM. However, it is downloadable via a 13.4GB torrent, and the company has set up a Discord channel to engage with the user community.

European funding record for startups

Paris-based Mistral made headlines in June when it raised €105mn in a round led by Lightspeed Venture Partners, reportedly the largest-ever seed round for a European startup. While it may seem as if an almost insurmountable amount of work has happened over the past three months, one should take into consideration that the company’s three founders all came from Google’s DeepMind or Meta.

The 7.3-billion-parameter Mistral 7B reportedly outperforms larger models, such as Meta’s 13-billion-parameter Llama 2, while requiring less computing power. Indeed, according to its developers, it “outperforms all currently available open models up to 13B parameters on all standard English and code benchmarks.” 

“Mistral 7B’s performance demonstrates what small models can do with enough conviction.”
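The parameter counts alone give a feel for the difference in hardware requirements. A back-of-the-envelope sketch (assuming the weights ship in a 16-bit format, which is common for models of this class but not confirmed here):

```python
# Rough estimate of on-disk weight size from parameter count.
# Assumption: 2 bytes per parameter (fp16/bf16 storage).
BYTES_PER_PARAM = 2

mistral_7b_gb = 7.3e9 * BYTES_PER_PARAM / 1e9   # decimal gigabytes
llama2_13b_gb = 13e9 * BYTES_PER_PARAM / 1e9

print(f"Mistral 7B:  ~{mistral_7b_gb:.1f} GB")   # ~14.6 GB
print(f"Llama 2 13B: ~{llama2_13b_gb:.1f} GB")   # ~26.0 GB
```

At 16 bits per parameter, ~14.6 decimal GB works out to roughly 13.6 binary gigabytes, in the same ballpark as the 13.4GB download, and about half of what a 13B-parameter model would need.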

European Central Bank assembles ‘infinity team’ to identify GenAI applications

European Union bureaucracy might not conjure the most exciting of connotations. However, being part of the “infinity team” surely puts a superhero-esque spin on your average Frankfurt grey high-rise working day. 

Minute takers, watch out. After surveying employees on where deploying generative AI could be most effective, the ECB’s newly established working group has launched nine trials, the results of which could speed up the day-to-day activities of the financial institution.

Large language models, the organisation says, could be deployed for tasks including writing draft code, testing software faster, summarising supervision documents, drafting briefings, and “improving text written by staff members making the ECB’s communication easier to understand for the public.” 

The Central Bank’s chief services officer, Myriam Moufakkir, discussed the ECB’s use of AI on Thursday in a blog post for the organisation. (To be precise, she doesn’t mention the “infinity team” by name, but other reports suggest that is the working group’s assigned designation.)

Commenting on the ECB’s core work of analysing vast amounts of data, Moufakkir said, “AI offers new ways for us to collect, clean, analyse, and interpret this wealth of available data, so that the insights can feed into the work of areas like statistics, risk management, banking supervision, and monetary policy analysis.”

Existing ECB applications of AI

Moufakkir said that the ECB is already deploying AI in a number of areas. This includes data collection and classification, as well as applying web scraping and machine learning to understand price setting dynamics and inflation behaviour. 

Furthermore, it applies natural language processing models trained with supervisory feedback to analyse documents related to banking supervision. This is done through its in-house Athena platform, which helps with topic classification, sentiment analysis, dynamic topic modelling, and entity recognition. It also uses AI to translate documents into member state languages. 
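As a rough illustration of what tasks like topic classification and sentiment analysis involve, here is a toy keyword-lexicon sketch in Python. It is not the ECB’s Athena platform (whose internals are not public), and real systems use trained NLP models rather than hand-written word lists:

```python
# Toy topic classification and sentiment analysis via keyword lexicons.
# Purely illustrative; the topics and lexicon entries are invented.
TOPIC_KEYWORDS = {
    "banking supervision": {"capital", "liquidity", "supervisory", "bank"},
    "monetary policy": {"inflation", "interest", "rates", "policy"},
}
SENTIMENT_LEXICON = {"strong": 1, "robust": 1, "weak": -1, "risk": -1}

def classify_topic(text: str) -> str:
    words = set(text.lower().split())
    # Pick the topic whose keyword set overlaps the text the most.
    return max(TOPIC_KEYWORDS, key=lambda t: len(TOPIC_KEYWORDS[t] & words))

def sentiment_score(text: str) -> int:
    # Sum lexicon scores for every word that appears in the text.
    return sum(SENTIMENT_LEXICON.get(w, 0) for w in text.lower().split())

doc = "Supervisory review finds robust capital and liquidity positions"
print(classify_topic(doc))    # banking supervision
print(sentiment_score(doc))   # 1
```

Production systems replace the hand-built lexicons with trained models, but the input-output shape of the tasks is the same: text in, a topic label or polarity score out.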

“Naturally, we are cautious about the use of AI and conscious of the risks it entails,” Moufakkir further noted. “We have to ask ourselves questions like ‘how can we harness the potential that large language models offer in a safe and responsible manner?’, and ‘how can we ensure proper data management?’.”

Without specifying exactly how, she added that the ECB was looking into “key questions” in the fields of data privacy, legal constraints, and ethical considerations. With the EU’s landmark AI Act in the works, the use of financial data of its citizens should be a particularly intriguing landscape to navigate. 

‘World’s most accurate’ startup data platform to identify gaps in AI ecosystem

Today, the Saïd Business School at the University of Oxford, along with early-stage VC OpenOcean, released what they call “the world’s most accurate open access startup insights platform”: the O3.

The platform is the result of three years of research at Oxford Saïd and 13 years of OpenOcean’s experience investing in the data economy. It leverages public and private data sources and is meant to improve decision-making across the UK tech ecosystem through “granular data on startups and their technology stack, solutions, and go-to-market strategies.” 

“In my time in venture capital, far too often the choice of which startups receive funding has come down to instinct and opportunistic use of data, rather than accurate definition and comparison of startups,” said Ekaterina Almasque, General Partner at OpenOcean. “We wanted to change that, creating a platform that cuts through the noise, and removes bias from decision-making.” 

The O3 platform has already screened 16,913 UK startups according to a unique taxonomy developed by Oxford Saïd. For an initial analysis, it has looked specifically at high-growth startups that use or facilitate AI (close to 1,300), and has come up with some interesting findings about the sector.

AI startups a small portion of UK tech ecosystem

The UK government has asserted that it will make the country an AI powerhouse. However, thus far, UK startups with a pronounced AI focus make up less than 10% of the ecosystem. 
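The two figures quoted here, roughly 1,300 AI-focused startups out of the 16,913 screened by O3, are consistent with that sub-10% share:

```python
# Share of AI-focused startups among those screened by the O3 platform
# (counts as quoted in the article; the ~1,300 figure is approximate).
ai_focused = 1_300
total_screened = 16_913

share = ai_focused / total_screened
print(f"AI-focused share: {share:.1%}")  # AI-focused share: 7.7%
```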

Mari Sako, Professor of Management Studies at Oxford Saïd, believes that the O3 will enable policymakers, researchers, founders, and investors to clearly identify gaps and opportunities in the AI ecosystem. “We believe this platform has immense potential to aid innovators in making informed decisions based on sound data, and to boost research on AI,” Sako said.  

UK fintech is generally a top investment destination in Europe, but when it comes to AI, the health tech startup segment receives more funding (£2.6bn for fintech vs £3.4bn for health tech). 

Startups using AI for recognition tasks, including but not limited to facial recognition, have collectively raised the most funding (£6bn). Meanwhile, among the least funded are those focusing on privacy protection (£1.2bn). 

The analysis has also uncovered, perhaps unsurprisingly, a significant bias towards London for startup fundraising. However, there is also notable support in Bristol, Oxford, and Cambridge. 

According to the originators of the platform, the UK is only the beginning. “We want to see it expand to cover more markets and geographies,” Almasque continued. “It is a community-driven project, built on open access, where the more stakeholders participate, the more powerful the common knowledge becomes.”

Germany doubles funds for AI ‘made in Europe’

On Wednesday, the German government announced that it would nearly double its funding for artificial intelligence research. The money pledged towards the development of AI systems now amounts to nearly €1bn, which is still far behind the $3.3bn (€3.04bn) in public funding the US reportedly threw at the field last year. 

The Federal Ministry for Education and Research said that AI is a “key technology” that offers enormous opportunities for science, growth, prosperity, competitiveness, and social added value. It further added that “technological sovereignty in AI must be secured,” and that Germany and Europe should take a leading position in a world “powered by AI.” 

This means that Germany on its own is drawing level with the funds pledged by the EU. The European Commission has also committed €1bn per year to AI research through the Horizon Europe programme. Meanwhile, the Commission states that it will mobilise additional investment from both the private sector and the member states to reach an annual volume of €20bn. 

The increased funding was presented along with Germany’s Artificial Intelligence Action Plan by Federal Research Minister Bettina Stark-Watzinger. Earlier this month, the Minister argued that Germany “must bring its academic practices in line with its security interests in light of tensions with systemic rivals such as China.” 

The global AI race

Figures for public spending in China are notoriously tricky to pin down. However, in 2022, private AI investments in China were at $13.4bn (€12.35bn), still trailing far behind the US with a total of $47.4bn (€43.4bn). 

This week, the German government also proposed harsher export curbs on China for semiconductors and AI technologies, similar to the executive order recently signed by US President Joe Biden. Furthermore, it laid out plans to tighten the screening process for Chinese foreign direct investment.

With the funds, Germany is looking to set up 150 new university labs dedicated to researching artificial intelligence, expand data centres, and increase access to datasets for training advanced AI models. The goal is to then convert the research and skills to “visible and measurable economic success and a concrete, noticeable benefit for society.”

Additionally, the government says it hopes to show the unique selling point of AI “Made in Germany” (or “Made in Europe”). “We have AI that is explainable, trustworthy and transparent,” Stark-Watzinger said. “That’s a competitive advantage.”  

Indeed it is, if you have the intention of using it somewhere affected by forthcoming artificial intelligence regulation. While the world waits for the EU AI Act, which will set different rules for developers and deployers of AI systems according to a risk classification system, the Cyberspace Administration of China last month published its own “interim measures” rules for generative AI.

Although the internet watchdog says the state “encourages the innovative use of generative AI in all industries and fields,” AI developers must register their algorithms with the government, if their services are capable of influencing public opinion or can “mobilise” the public.

UK commits £100M to secure AI chip components

In an intensifying global battle for semiconductor self-sufficiency, the UK will reportedly spend £100mn of taxpayer money to buy AI chip technology from AMD, Intel, and Nvidia. The initiative is part of the UK’s intention to build a national AI resource, but critics say it is too little, too late. 

As reported by The Guardian, a government official has confirmed that the UK is already in negotiations with Nvidia for the purchase of up to 5,000 graphics processing units (GPUs). However, they also stated that the £100mn is far too little compared with what the EU, US, and China, for instance, are committing to the growth of a domestic semiconductor industry.

The UK accounts for only 0.5% of global semiconductor sales, according to a Parliamentary report. In May this year, the government announced it would commit £1bn over 10 years to bolster the industry. In comparison, the US and the EU are investing $50bn (€46bn) and €43bn, respectively. 

The idea, Prime Minister Rishi Sunak said at the time, is to play to the UK’s strengths by focusing efforts on areas like research and design. This approach can be contrasted with that of building large fabs, such as the ones Germany is spending billions in state aid to make a reality. 

UK’s AI summit at historic computer science site

Alongside a push (no matter its size) toward more advanced AI-powering chip capabilities, the UK recently announced the date of its much-publicised AI safety summit. The meeting between officials from “like-minded” states, tech executives, and academics will take place at Bletchley Park, situated between Cambridge and Oxford, at the beginning of November. 

The site is where Allied codebreakers cracked the Enigma encryption machine used by the Nazis, essentially tipping the scales of World War II, and it is considered the birthplace of Colossus, the first programmable digital computer. Today, it is home to the National Museum of Computing, run by the Bletchley Park Trust. 

Meanwhile, the race to secure chips that can power AI systems such as large language models (LLMs), or the components to build them, is accelerating. According to the Financial Times, Saudi Arabia has bought at least 3,000 of Nvidia’s processors specially designed for generative AI. Its neighbour, the UAE, has far-reaching AI ambitions of its own, having also purchased thousands of advanced chips and developed its own LLM.  

Meanwhile, Chinese internet giants have reportedly rushed to secure billions of dollars worth of Nvidia chips ahead of a US tech investment ban coming into effect early next year. It is still unclear whether the UK will invite China to the gathering at Bletchley Park in November. The geopolitics and tech diplomacy of semiconductors could be entering its most delicate phase to date.

AI could fall short on climate change due to biased datasets, study finds

Among the many benefits of artificial intelligence touted by its proponents is the technology’s potential to help solve climate change. If that is indeed the case, the recent step changes in AI couldn’t have come at a better time. This summer, evidence has continued to mount that Earth is already transitioning from warming to boiling. 

However, as intense as the hype around AI has been over the past months, a lengthy list of concerns accompanies it: its prospective use in spreading disinformation, for one, along with potential discrimination, privacy, and security issues.

Furthermore, researchers at the University of Cambridge, UK, have found that bias in the datasets used to train AI models could limit their application as a just tool in the fight against global warming and its impact on planetary and human health. 

As is often the case when it comes to global bias, it is a matter of Global North vs. South. With most data gathered by researchers and businesses with privileged access to technology, the effects of climate change will, invariably, be seen from a limited perspective. As such, biased AI has the potential to misrepresent climate information, meaning the most vulnerable will suffer the direst consequences. 

Call for globally inclusive datasets

In a paper titled “Harnessing human and machine intelligence for planetary-level climate action” published in the prestigious journal Nature, the authors admit that “using AI to account for the continually changing factors of climate change allows us to generate better-informed predictions about environmental changes, allowing us to deploy mitigation strategies earlier.” 

This, they say, remains one of the most promising applications of AI in climate action planning, but only if the datasets used to train the systems are globally inclusive. 

“When the information on climate change is over-represented by the work of well-educated individuals at high-ranking institutions within the Global North, AI will only see climate change and climate solutions through their eyes,” said primary author and Cambridge Zero Fellow Dr Ramit Debnath. 

In contrast, those who have less access to technology and reporting mechanisms will be underrepresented in the digital sources AI developers rely upon. 

“No data is clean or without prejudice, and this is particularly problematic for AI which relies entirely on digital information,” the paper’s co-author Professor Emily Shuckburgh said. “Only with an active awareness of this data injustice can we begin to tackle it, and consequently, to build better and more trustworthy AI-led climate solutions.”

The authors advocate for human-in-the-loop AI designs that can contribute to a planetary epistemic web supporting climate action, directly enable mitigation and adaptation interventions, and reduce the data injustices associated with AI pretraining datasets. 

The need of the hour, the study concludes, is to be sensitive to digital inequalities and injustices within the machine intelligence community, especially when AI is used as an instrument for addressing planetary health challenges like climate change.

If we fail to address these issues, the authors argue, the outcomes could be catastrophic, ranging from societal collapse to planetary instability, including the failure to fulfil any climate mitigation pathway.
