Government and policy

World-first CRISPR gene-editing therapy approved in UK

The UK has become the first country in the world to approve a CRISPR gene-editing therapy. The landmark biotech decision covers the treatment of two specific blood diseases, but it also opens the door to using the technology against many other genetic disorders.

Regulators on Thursday approved the use of CRISPR to treat the inherited diseases sickle-cell anaemia and β-thalassaemia. The former distorts the shape of red blood cells, can cause debilitating pain, and affects some 20 million people worldwide.

People with the latter need to receive regular blood transfusions to counteract a reduced production of haemoglobin, which in turn reduces levels of oxygen in the body. About 80 to 90 million people carry some version of β-thalassaemia worldwide. 

CRISPR stands for Clustered Regularly Interspaced Short Palindromic Repeats. These are repetitive DNA sequences in the genomes of organisms such as bacteria. Upon a viral infection, the bacteria transcribe these double-stranded DNA elements into single-stranded RNA. This then guides what is called a nuclease — a protein that “cuts” DNA — to the viral DNA to protect the bacteria.



CRISPR therapy is like precise genetic surgery. Doctors can perform it inside the body by injecting a guide RNA system that matches what is going wrong in the DNA code. This system carries the protein that acts like scissors (Cas9) and cuts out the faulty bit of code. The cell can then rewrite the code as it fixes the cut. The “surgery” can also happen outside of the body, where doctors edit the cells first and then put them back. 
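
To make the “genetic surgery” analogy concrete, the targeting step is essentially sequence matching: the guide RNA specifies a short stretch of DNA, and Cas9 cuts a few bases upstream of an adjacent “NGG” motif known as the PAM. Here is a deliberately simplified Python sketch of that search, with invented sequences; real genome editing involves far more biology than a substring scan.

```python
# Toy illustration of CRISPR targeting as string matching. The sequences
# below are invented, and this only shows the guide/PAM/cut-site logic.

def find_cut_site(genome: str, guide: str) -> int | None:
    """Return the index where Cas9 would cut, or None if no target is found.

    Cas9 needs the 20-base protospacer matched by the guide RNA to be
    followed immediately by an "NGG" motif (the PAM), and it cuts about
    3 bases upstream of that PAM.
    """
    n = len(guide)
    for i in range(len(genome) - n - 2):
        pam = genome[i + n : i + n + 3]  # the 3 bases after the protospacer
        if genome[i : i + n] == guide and pam[1:] == "GG":  # "NGG" check
            return i + n - 3  # blunt cut ~3 bp upstream of the PAM
    return None

genome = "ATGCCGTTAGACCTGAAGTCCGATGGACTA"  # hypothetical DNA snippet
guide = "CCGTTAGACCTGAAGTCCGA"             # hypothetical 20-nt guide
print(find_cut_site(genome, guide))        # -> 20
```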

Company founded by Nobel Prize winner


The approval from the Medicines and Healthcare products Regulatory Agency (MHRA) follows promising results from a clinical trial run by Vertex Pharmaceuticals in Boston, Massachusetts, and CRISPR Therapeutics in Zug, Switzerland. CRISPR Therapeutics was co-founded by Emmanuelle Charpentier, who won the 2020 Nobel Prize in Chemistry for repurposing CRISPR into a tool for gene editing.

The specific product from the trials is Casgevy. While the treatment itself is a one-off, patients may need to spend some time in the hospital for related procedures, such as preparing the bone marrow to receive the edited cells.



“Patients may need to spend at least a month in a hospital facility while the treated cells take up residence in the bone marrow and start to make red blood cells with the stable form of haemoglobin,” the MHRA said in a statement. 

The companies behind Casgevy have yet to disclose the price of the therapy. However, it will most likely be somewhere between $1mn and $2mn, something that will naturally limit accessibility. The European Medicines Agency (EMA) is reportedly also reviewing the treatment for both diseases.


‘Unsafe’ AI images proliferate online. Study suggests 3 ways to curb the scourge

Over the past year, AI image generators have taken the world by storm. Heck, even our distinguished writers at TNW use them from time to time. 

Truth is, tools like Stable Diffusion, Latent Diffusion, or DALL·E can be incredibly useful for producing unique images from simple prompts — like this picture of Elon Musk riding a unicorn.

But it’s not all fun and games. Users of these AI models can just as easily generate hateful, dehumanising, and pornographic images at the click of a button — with little to no repercussions. 

“People use these AI tools to draw all kinds of images, which inherently presents a risk,” said researcher Yiting Qu from the CISPA Helmholtz Center for Information Security in Germany. Things become especially problematic when disturbing or explicit images are shared on mainstream media platforms, she stressed.


While these risks seem quite obvious, little research has so far been undertaken to quantify the dangers and build safe guardrails for the use of these tools. “Currently, there isn’t even a universal definition in the research community of what is and is not an unsafe image,” said Qu.

To illuminate the issue, Qu and her team investigated the most popular AI image generators, the prevalence of unsafe images on these platforms, and three ways to prevent their creation and circulation online.

The researchers fed text prompts from sources known for unsafe content, such as the far-right platform 4chan, into four prominent AI image generators. Shockingly, 14.56% of the images generated were classified as “unsafe,” with Stable Diffusion producing the highest percentage at 18.92%. These included images with sexually explicit, violent, disturbing, hateful, or political content.

Creating safeguards

The fact that so many unsafe images were generated in Qu’s study shows that existing filters are not doing their job adequately. The researcher developed her own filter, which achieves a much higher hit rate in comparison, but she also suggests a number of other ways to curb the threat.

One way to prevent the spread of inhumane imagery is to program AI image generators to not generate this imagery in the first place, she said. Essentially, if AI models aren’t trained on unsafe images, they can’t replicate them. 

Beyond that, Qu recommends blocking unsafe words from the search function, so that users can’t put together prompts that produce harmful images. For those images already circulating, “there must be a way of classifying these and deleting them online,” she said.
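
As a rough illustration of the word-blocking idea, here is a toy Python prompt gate. It is not the researchers’ actual filter: the blocklist and the normalisation rules are invented for the example.

```python
# Toy sketch of a prompt blocklist for an image generator. The terms and
# the normalisation rules are invented for illustration; a real filter
# would use much richer lexicons, embeddings, and human review.
import re

BLOCKLIST = {"gore", "beheading", "mutilation"}  # hypothetical unsafe terms

def is_prompt_allowed(prompt: str) -> bool:
    """Reject prompts containing blocklisted words, even with l33t spellings."""
    normalised = prompt.lower().translate(str.maketrans("013", "oie"))
    tokens = set(re.findall(r"[a-z]+", normalised))
    return tokens.isdisjoint(BLOCKLIST)

print(is_prompt_allowed("a watercolour cat on a bicycle"))  # True: allowed
print(is_prompt_allowed("photorealistic g0re scene"))       # False: blocked
```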

With all these measures, the challenge is to find the right balance. “There needs to be a trade-off between freedom and security of content,” said Qu. “But when it comes to preventing these images from experiencing wide circulation on mainstream platforms, I think strict regulation makes sense.” 

Aside from generating harmful content, the makers of AI text-to-image software have come under fire for a range of other issues, such as stealing artists’ work and amplifying dangerous gender and race stereotypes.

While initiatives like the AI Safety Summit, which took place in the UK this month, aim to create guardrails for the technology, critics claim big tech companies hold too much sway over the negotiations. Whether that’s true or not, the reality is that, at present, proper, safe management of AI is patchy at best and downright alarming at its worst.  


UK’s biggest chip plant sold by Chinese-owned firm after government order

Britain’s biggest chip plant has been bought by US semiconductor firm Vishay for $177mn.

The Newport Wafer Fab in Wales was previously owned by Nexperia, which acquired the business in 2021. Nexperia is headquartered in the Netherlands, but the company is a subsidiary of China’s Wingtech. This ownership structure attracted intervention from UK lawmakers.

Last year, the British government ordered Nexperia to sell the majority of its stake in Newport Wafer Fab. The move was explained as an attempt to “mitigate the risk to national security.”

The end result is a new owner for the factory, which makes semiconductors for millions of products, from household equipment to smartphones. The chips are particularly prominent in the automotive sector.

Announcing the acquisition, Vishay highlighted the potential applications — and the political concerns.

“For Vishay, acquiring Newport Wafer Fab brings together our capacity expansion plans for our customers in automotive and industrial end markets as well as the UK’s strategic goal of improved supply chain resilience,” Joel Smejkal, the company’s president and CEO, said in a statement.

Nexperia, meanwhile, described the deal as the most viable option available. The company welcomed Vishay’s commitment to develop the 28-acre site, but criticised the British government’s actions.

“Nexperia would have preferred to continue the long-term strategy it implemented when it acquired the investment-starved fab in 2021 and provided for massive investments in equipment and personnel,” said Toni Versluijs, country manager for Nexperia UK.

“However, these investment plans have been cut short by the unexpected and wrongful divestment order made by the UK Government in November 2022.”


A third of GDPR fines for social media platforms linked to child data protection

It’s been a little over five years since the GDPR came into effect, and the fines keep piling up — especially for social media platforms.

New research by Dutch VPN company Surfshark has found that, since 2018, five of the most popular social media platforms (Facebook, Instagram, TikTok, WhatsApp, and X/Twitter) have been fined over €2.9bn for violating the EU’s data protection law.

Facebook alone accounts for nearly 60% of the total, with €1.7bn in penalties. Adding to Zuckerberg’s woes, Meta’s platforms combined have reached €2.5bn. TikTok has received the third-highest amount in fines, at €360mn, while X (formerly Twitter) has amassed only €450k. Meanwhile, YouTube, Snapchat, Pinterest, Reddit, and LinkedIn have not been charged.

Most alarmingly, one-third (4 out of 13) of these fines are linked to insufficient protection of children’s data — adding up to €765mn of the total amount.

Specifically, TikTok was first fined in 2021 for failing to provide its privacy statement in Dutch, meaning minors in the Netherlands could not fully understand the terms. Two more fines were issued in 2023: one for TikTok not enforcing its own policy restricting access to children under 13, the other for setting accounts to public by default and for not verifying legal guardianship for adults registering as parents of child users. These fines add up to a total of €360mn.

The second social media platform to be fined for violating children’s privacy is Instagram. The Meta platform received its one and only fine in 2022 (€405mn), when business accounts created by minors were set to public by default.

“Such penalties demonstrate the imperative to hold major social media players accountable for their data handling practices, ensuring that the privacy and safety of all users, especially children, is given the utmost consideration and care,” said Agneska Sablovskaja, lead researcher at Surfshark.

Apart from being caught in the crosshairs of GDPR enforcers, Facebook, Instagram, TikTok, and X also need to comply with the Digital Services Act (DSA). Among other requirements, the EU’s landmark content moderation rulebook prohibits targeted advertising based on the profiling of minors.


5 new EU restrictions for online political advertising

As the European Union prepares for elections next year, the bloc is developing new measures to protect the democratic process. 

The latest rules focus on advertising. On Tuesday, EU officials unveiled new regulations for online political ads, which aim to make campaigns more transparent and safe from interference.

These are the five new measures that are being implemented:

1. New transparency requirements

To enhance transparency and accountability, the EU will make it easier to find out who’s behind an ad.


Political adverts will need to be clearly labelled as such. Citizens, authorities, and journalists will also receive information on who’s funding an ad, where the financier is established, the amount that was paid, and the origin of the funds.

All online political ads will be available in a digital repository. This, the EU hopes, will help citizens identify the messages trying to shape their political views and decisions.

2. Further measures on foreign interference

With European Parliament elections scheduled for next June, the EU is also introducing new restrictions on foreign interference.

In the three months prior to an election or referendum, third-country entities will be banned from sponsoring political ads in the EU.

3. Extra rules on ad targeting

Ad targeting faces several fresh restrictions. Under the new rules, only personal data that subjects have explicitly provided for online political advertising can be used to target users.

In addition, political ads will be barred from any profiling that uses sensitive data such as ethnicity, religion, and sexual orientation. They will also be banned from using minors’ data.

These measures aim to limit the abusive use of personal information to manipulate voters.

4. Protections on freedom of expression

Any rules on online messaging will inevitably spark concerns about free speech.

To allay these anxieties, the EU has pledged to only apply the rules to paid political advertisements. According to union officials, personal views and political opinions will not be impacted.

5. Tough new sanctions

The new rules will be enforced with familiar penalties. In line with the EU’s Digital Services Act, repeated violations will trigger sanctions of up to 6% of the ad provider’s annual income or turnover.

What’s next?

The measures have already been agreed upon by EU legislators, but they still require formal adoption by the European Council and Parliament.

Once that happens, there will be a period of 18 months before the rules apply. However, the rules on the non-discriminatory provision of cross-border political advertising will be in place for the 2024 elections.

“Elections must be a competition in the open, without opaque techniques or interference,” Věra Jourová, Vice-President for Values and Transparency, said in a statement.

“People must know why they are seeing an ad, who paid for it, how much, and what targeting criteria were used. New technologies should be tools for empowerment and engagement of citizens, not for confusion and manipulation.”


How Europe is racing to resolve its AI sovereignty woes

While not yet as illustrious as its North American counterparts OpenAI, Anthropic, or Cohere, Europe’s own cohort of generative AI startups is beginning to crystallise. Just yesterday, news broke that Germany’s Aleph Alpha had raised €460mn, in one of the largest funding rounds ever for a European AI company. 

The European tech community received news of the investment with some enthusiasm. While much focus has been on how the EU will regulate the tech (and how the UK will or will not), there hasn’t been a whole heap of attention on how the bloc will support artificial intelligence innovation and reduce the risk of being left behind in yet another technological leap. 

During a press conference about the investment, Germany’s Vice Chancellor and Minister for Economic Affairs Robert Habeck stressed the importance of supporting domestic AI enterprises. 

“The thought of having our own sovereignty in the AI sector is extremely important,” Habeck said. “If Europe has the best regulation but no European companies, we haven’t won much.”

Transparency, traceability, and sovereignty

At the same press conference, Jonas Andrulis, Aleph Alpha’s founder and CEO, stated that the investors participating in the latest round (including the likes of SAP, Bosch Ventures, and owners of budget supermarket giant Lidl) were all partners the company had worked with before. Notably, all but a small contribution from Hewlett Packard came from European investors or grants. 

“What was so important for me, right from the beginning with our research, is transparency, traceability, and sovereignty,” Andrulis added, playing on ethical considerations that could potentially set a European LLM apart, as well as geopolitical objectives. 

Aleph Alpha is building a large language model (LLM) similar to OpenAI’s GPT-4, but focusing on serving corporations and governments, rather than individual consumers. But there are other things separating the two companies — OpenAI has 1,200 employees, whereas 61 people work at Aleph Alpha. Furthermore, the former has secured over €11bn in funding.

However, with the construction of the €2bn Innovation Park Artificial Intelligence (Ipai) in Aleph Alpha’s hometown of Heilbronn in southwest Germany, the startup may yet receive the boost it needs to level the playing field. Construction of Ipai is slated for completion in 2027, by which time it will be able to accommodate 5,000 people. The project is supported by the Dieter Schwarz Foundation, which also participated in Aleph Alpha’s latest funding round.

Europe’s lack of an AI contender is a geopolitical issue

Founded in 2019, Aleph Alpha is not a newcomer to the game. In 2021, before ChatGPT-induced investment hysteria, the company raised €23mn, in a round led by Lakestar Advisors. Such an amount has, of course, since been overshadowed by numbers in the billions of dollars on the other side of the Atlantic. However, Aleph Alpha did raise another €100mn, backed by Nvidia among others, in June this year.

And the German startup isn’t the only European player raking in the dough. Only a few weeks after its founding in May this year, France’s Mistral AI raised €133mn in what was reportedly the largest-ever seed round for a European startup.

Presenting a challenge to Aleph Alpha’s claim to the European GenAI throne, Mistral is also developing an LLM for enterprises, although its very first model, Mistral 7B, is completely free to use. In its pitches to investors, Mistral, founded by former Google and Meta employees, reportedly warned that it was a “major geopolitical issue” that Europe did not have its own serious contender in generative AI.

Meanwhile, Germany’s government is not the only one looking to shore up domestic generative AI capabilities. The Netherlands recently commenced development of its very own homegrown LLM to provide what it said would be a “transparent, fair, and verifiable” GenAI alternative.


Why Europe is lagging behind in the spacetech race

News broke last month that the European Space Agency (ESA) had engaged SpaceX to launch four of Europe’s Galileo satellites into orbit in 2024. The decision to turn to Elon Musk’s US-based company comes in the wake of delays to Europe’s own Ariane 6 rockets, which mean the continent is without its own means to deliver large payloads into space.

Though it’s only designed to bridge the gap in our current capabilities, it’s a disappointing development for Europe’s spacetech community. But one that, unfortunately, many of us saw coming. 

Why Europe is falling behind in space 

Europe is currently lagging behind the rest of the world when it comes to spacetech, and the agreement with SpaceX is emblematic of a frustrating situation that’s hampering opportunities to advance its capabilities. 

So why has Europe had to turn to a US-based company? After all, there is no shortage of demand, and it’s not like the region is short on the kind of top level engineering talent that’s needed to develop its own rockets. 

One of the main problems is that there’s simply a lack of competition to fuel the development of new capabilities. I’d also argue that governments aren’t helping the situation. 

Compared to the US and China, European spacetech companies face a huge funding gap. In the US, funding largely comes from NASA and the Department of Defense, which together invested more than $62 billion in 2022.

It’s a similar story in China, where government support totalled $12 billion. Compare that with ESA, which has an annual budget of just €7.5 billion, and it’s easy to see why the region is lagging behind.

How did we get here? 

It’s clear that dependency on foreign imports and companies like SpaceX will, in the long run, leave Europe’s sovereignty vulnerable. So, why have we fallen so far behind?

In part, ESA suffers from regulations on “geographic return.” This means that when a country funds ESA, an equivalent amount of money must be reinvested into its own domestic industry.

“Geographic return” was originally introduced to encourage investment and share the load (and returns) across big and small nations. In recent years, however, it has come under increased scrutiny for hampering the European space sector’s ability to be competitive, because in short, innovation and competition aren’t evenly spread. Finance should go to the best products, the best ideas and the most scalable commercial innovations, regardless of geography.

Earlier this year, ESA’s Director General Josef Aschbacher wrote that the region should move towards a “fair contribution principle,” which means adjusting the contribution of each European member state according to the outcome of the industrial competitions and the actual share gained by its industry in these competitions. 

While it’s undoubtedly a step in the right direction, I would say this does not go far enough. Scrapping “geographic return” entirely would be the kind of game changer that Europe needs to keep pace in the global spacetech race.

The power of partnership 

Another reason Europe is falling behind its global counterparts is the absence of public-private partnerships, which would support growth in the continent’s space sector.

Take the US for example, where NASA’s Commercial Orbital Transportation Services (COTS) programme backed SpaceX’s development of Falcon 9, the first (and cheapest) partially-reusable rocket. The success of Falcon 9 set the stage for an atmosphere of enduring public-private partnerships, which foster competitiveness in the US today. 

NASA’s administrator Bill Nelson has also stated that he backs fixed-price contracts with companies working on space exploration. Fixed-price contracts assume companies building technical systems absorb any unanticipated expenses, not NASA. This makes the market more competitive for growth-stage companies selling low-cost services to the agency.

Here in Europe however, we simply don’t have the same atmosphere of public-private partnerships. That’s in part because we don’t have a joint defence initiative. We also don’t have an Elon Musk or a Jeff Bezos who are willing to invest billions. According to NASA’s own independently verified numbers, SpaceX’s development costs of both the Falcon 1 and Falcon 9 rockets were approximately $390 million in total.

Unlike the US, there’s also no single European country big enough to go it alone. This is where collaboration between public-private partnerships and like-minded companies could make all the difference. After all, it’s a process we’ve seen flourish with pan-European success stories like Airbus and defence systems specialist MBDA. 

Europe needs to ignite its spacetech landscape

Spacetech has the potential to advance innovation across every aspect of our lives. Europe is full of companies that are developing technologies that won’t just advance our extra-terrestrial ambitions, but improve lives down here on terra firma too. However, they can only succeed if they have the support and backing they need to flourish. 

If the current disparity continues, Europe runs the risk of becoming a mere spectator as space industries in countries like the USA and China surge ahead. Left unchecked, it’s a situation that won’t just hamper our ability to launch our own satellites into space, but potentially jeopardise our economy, our security, and even our defence capabilities. 

And that’s a space race that we simply cannot afford to lose. 

Jean François Morizur, founder and CEO at Cailabs. Credit: Cailabs

Jean-François Morizur is the founder and CEO of Cailabs and a Forbes 30 Under 30 honouree in Science & Healthcare. Prior to founding Cailabs in 2013, he was Senior Associate at Boston Consulting Group and is co-inventor of Cailabs’s groundbreaking Multi-Plane Light Conversion technology.


Netherlands building own version of ChatGPT amid quest for safer AI

The Netherlands is building its own large language model (LLM) that seeks to provide a “transparent, fair, and verifiable” alternative to AI chatbots like the immensely popular ChatGPT. 

It seems that everyone and their dog is developing their own AI chatbot these days, from Google’s Bard and Microsoft’s Bing Chat to the recently announced Grok, a new ChatGPT rival released by Elon Musk’s xAI company this week. 

But as Silicon Valley pursues AI development behind closed doors, authorities are left in the dark as to whether these LLMs adhere to any sort of ethical standards. The EU has already warned AI companies that stricter legislation is coming.

In contrast, the new Dutch LLM, dubbed GPT-NL, will be an open model, allowing everyone to see how the underlying software works and how the AI comes to certain conclusions, said its creators. The AI is being developed by research organisation TNO, the Netherlands Forensic Institute, and IT cooperative SURF.


“With the introduction of GPT-NL, our country will soon have its own language model and ecosystem, developed according to Dutch values and guidelines,” said TNO. Financing for GPT-NL comes in the form of a €13.5mn grant from the Ministry of Economic Affairs and Climate Policy — a mere fraction of the billions used to create and run the chatbots of Silicon Valley.

“We want to have a much fairer and more responsible model,” said Selmar Smit, founder of GPT-NL. “The source data and the algorithm will become completely public.” The model is aimed at academic institutions, researchers, and governments as well as companies and general users.   

Over the next year, the partners will focus on developing and training the LLM, after which it will be made available for use and testing. GPT-NL will be hooked up to the country’s national supercomputer Snellius, which provides the processing power needed to make the model work. 

Perhaps quite appropriately, the launch of GPT-NL last week coincided with the world’s first AI Safety Summit. The star-studded event, which took place at Bletchley Park in the UK, considered ways to mitigate AI risks through internationally coordinated action. Just days before the summit, UK Prime Minister Rishi Sunak announced the launch of an AI chatbot to help the public pay taxes and access pensions. However, unlike GPT-NL, the technology behind the experimental service is managed by OpenAI — symbolic of the UK’s more laissez-faire approach to dealing with big tech, as compared to the EU.

Elsewhere, the United Arab Emirates has launched a large language model aimed at the world’s 400 million-plus Arabic speakers, while Japanese researchers are building their own version of ChatGPT because they say AI systems trained on foreign languages cannot grasp the intricacies of Japanese language and culture. In the US, the CIA is even making its own chatbot. 

Clearly then, governments and institutions across the world appear to be realising that when it comes to AI models, one-size-fits-all probably isn’t a good thing.  


Musk on how to turn the UK into a ‘unicorn breeding ground’

If you are generally interested in technology and are not yet aware of the meeting that took place yesterday between Rishi Sunak and Elon Musk during the UK’s AI Safety Summit at Bletchley Park, you have probably been living under a rock.

As others have already commented, the session resembled a fan meeting, or at the very least a mutual admiration society, more than an actual interview. The PM’s fawning giggles were perhaps not entirely suited to the gravity of the occasion bringing the two together (but hey, who are we to judge).

You have probably also read Elon Musk‘s statements along the lines of “no work will be necessary in the future,” and that any AI safety conference not including China would have been “pointless.” He also predicted that humans will create deep friendships with AI, once the technology becomes… intelligent enough.



But in perhaps less quotable, headline-making material, the tech tycoon, prompted by a question from the Summit’s selected audience, also discussed what he believes it would take to shift the culture in the UK so that it could become “a real breeding ground for unicorn companies,” and to make being a founder a more obvious career choice for technical talent.



“There should be a bias towards supporting small companies,” Musk said, referring to policy and investment. “Because they are the ones that really need nurturing. The larger companies really don’t need nurturing. You can think of it as a garden — if it’s a little sprout it needs nurturing, if it’s a mighty oak it does not need quite as much.” 

After praising London as a leading centre for AI in the world (behind the San Francisco Bay Area), accompanied by some smug nodding from the PM, Musk added that it would take sufficient infrastructure support. 

“You need landlords that are willing to rent to new companies,” Musk said. “You need law firms and accountants that are willing to support new companies. And that is a mindset change.”

He further stated that he believes this is already happening culturally in the UK, but that people just need to decide “this is a good thing.” Getting Brits more comfortable with failure might be a trickier cultural shift to facilitate.

“If you don’t succeed with your first startup it shouldn’t be a catastrophic sort of career-ending thing,” Musk mused. “It should be more like ‘ok, you gave it a good shot, and now try again.’



“Most startups fail. You hear about the startups that succeed, but most startups consist of a massive amount of work, followed by failure. So it’s a high-risk high reward situation.”


This AI can tell if your home is wasting energy — just by looking at it

Two researchers from the University of Cambridge have developed a deep-learning algorithm that could make it easier, faster, and cheaper to identify energy-wasting homes — a significant source of greenhouse gas emissions. 

Trained on open-source data including energy performance certificates and satellite images, the AI was able to classify so-called ‘hard to decarbonise’ houses with 90% accuracy, according to the study. These homes are hard to electrify or retrofit for a variety of reasons including old age, structure, or location. 
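
For a sense of how such a classifier is typically assembled, here is a minimal sketch of the general recipe: fine-tuning a pretrained vision backbone on labelled aerial tiles. The PyTorch framework, ResNet-18 backbone, and binary labels are assumptions for illustration, not confirmed details of the Cambridge model.

```python
# Minimal sketch — not the authors' code — of fine-tuning a pretrained CNN
# to label aerial images as 'hard to decarbonise' (1) or efficient (0).
import torch
import torch.nn as nn
from torchvision import models, transforms

# Standard ImageNet preprocessing applied to aerial tiles (an assumption).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Pretrained backbone with a fresh two-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a batch of labelled, preprocessed tiles."""
    optimiser.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimiser.step()
    return loss.item()
```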

The model can pinpoint the specific parts of a building — such as the roof and windows — that are losing the most heat, and tell whether a home is old or modern. The researchers are confident they can significantly increase the model’s detail and accuracy over time.

Aerial view images of houses in Cambridge, UK. Red represents ‘hard to decarbonise’ homes. Blue represents more energy-efficient homes. Credit: University of Cambridge

The UK aims to decarbonise all homes, even the drafty ones, by 2050. But without a way to identify high-priority “problem properties,” policymakers could struggle to meet these targets, said the researchers.

“This is the first time that AI has been trained to identify hard-to-decarbonise buildings using open source data,” said Dr Ronita Bardhan, the head of Cambridge’s Sustainable Design Group and co-author of the study. 

“Policymakers need to know how many houses they have to decarbonise, but they often lack the resources to perform detailed audits on every house. Our model can direct them to high-priority houses, saving them precious time and resources,” she continued.  

Bardhan and the study’s other author, Maoran Sun, say they are now working on an even more advanced framework that will add data layers such as energy use, poverty levels, and thermal images of building facades. They expect this to increase the model’s accuracy and provide even more detailed information.

Until now, decarbonisation policy decisions have been based on evidence derived from limited datasets, said the researchers, who are optimistic about AI’s power to change this. The ability of AI algorithms to extract value from massive amounts of data is arguably a game-changer for solving complex problems.

Outside the realm of academia, there are countless companies putting AI to the task of tackling climate change. Just take Dryad Networks, based out of Berlin, which is tapping machine learning to speed up wildfire detection times, or Norway’s 7Analytics, which uses AI to better predict flooding and minimise damage to infrastructure.


World-first AI safety deal exposes agenda set in Silicon Valley, critics say

A world-first declaration on AI that was agreed on Wednesday will have no real impact and has been manipulated by big tech, critics say.

The statement was signed by 28 countries — plus the EU — which collectively span six continents. They unveiled their pact at the UK’s AI Safety Summit in Bletchley Park, where codebreakers cracked Nazi Germany’s Enigma machine during World War Two.

The new agreement takes its name from the site. Known as the “Bletchley Declaration,” the communiqué establishes a shared understanding of AI’s dangers and opportunities.

“Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation,” the declaration said.


The statement also calls for international action on “frontier AI.” A favoured buzzword at the summit, frontier AI encompasses advanced, general-purpose models such as OpenAI’s ChatGPT.  According to the British government, these are the systems that pose the most dangerous and urgent risks.

Signatories of the declaration agreed that “substantial risks” could arise from frontier AI. In some cases, they warned, frontier AI could cause “serious, even catastrophic, harm, either deliberate or unintentional.” But critics argue that such fears have been deliberately overblown.

Lewis Liu, the CEO of machine learning startup Eigen Technologies, is among the most vociferous critics. The apocalyptic warning, he said, “is overly influenced by a deeply flawed analysis and an agenda set by those big tech companies seeking to dominate the policy-making process.”

“This kind of doom-mongering echoes the words of OpenAI and its peers, who have been among the most influential corporate lobbyists in the run-up to the Summit,” he added.

“There is real fear in the startup community that this will be a forum where big tech takes control of the steering wheel, to try and regulate away open-source AI systems, set the terms of debate, and in doing so, freeze out the competition.”

28 countries & the EU have signed The Bletchley Declaration at the #AISafetySummit agreeing to:

⚠️ identify the key opportunities & risks of AI

🌍 build a global understanding of Frontier AI risks

🔬 collaborate on AI scientific research

Find out more: https://t.co/f6e8ABhoKz

— Department for Science, Innovation and Technology (@SciTechgovuk) November 1, 2023

The focus on frontier AI has also incensed researchers. Sandra Wachter, a professor of technology and regulation at Oxford University, argues that job automation, discrimination, and environmental impacts are more urgent concerns.

“Unfortunately, this is out of scope for this Summit and the predominant focus is on the ‘risk of losing control’ of AI, in the sense that AI develops a ‘will of its own’ and poses an ‘existential’ risk to humanity,” she said.

“Yet, there is no scientific evidence that we are on such a path, or that such a path even exists. But it distracts from the actual and already existing existential risks.”

Style over substance?

While the declaration makes bold warnings, it’s very light on detail. What’s more notable is the coalition of nations that have backed the pact.

The signatories include the US and China, who made a rare agreement on the world stage.

In a further show of unity, Gina Raimondo, the US Commerce Secretary, and Wu Zhaohui, the Chinese vice minister of science and technology, were seated side-by-side onstage at one session, where each of them delivered speeches on AI.

Their collaboration, however, will be extremely limited in practice. The statement calls for “international cooperation” and “inclusive global dialogue,” but doesn’t propose any specific rules, roadmap, or ethical principles.

“This declaration isn’t going to have any real impact on how AI is regulated,” said Martha Bennett, VP principal analyst at business advisory firm Forrester.

Bennett notes that there are already various policies containing far more substance, among them the EU’s AI Act, the White House’s Executive Order on AI, and the G7 “International Code of Conduct” for AI.

“Moreover,” Bennett added, “the countries and entities represented at the AI Summit would not have agreed to the text of the Bletchley Declaration if it contained any meaningful detail on how AI should be regulated.”

This landmark declaration marks the start of a new global effort to build public trust in AI by making sure it’s safe 👇 https://t.co/EHACt7kRId

— Rishi Sunak (@RishiSunak) November 1, 2023

Despite her doubts about the real-world impacts, Bennett believes the agreement can serve a useful purpose.

“The Summit and the Bletchley Declaration are more about [sending] signals and demonstrating willingness to cooperate, and that’s important,” she said. “We’ll have to wait and see whether good intentions are followed by meaningful action.”

We might not have to wait long. During the Bletchley Park event, it was announced that South Korea will host a second summit in six months. Another one will then take place in France. As things stand, however, it appears domestic legislation will obstruct any significant international deal.


UK invests £225M to create one of world’s most powerful AI supercomputers

The UK government is investing £225mn to build one of the world’s fastest supercomputers, as it looks to “lead the world” in AI systems. 

The supercomputer — named Isambard-AI, after the famous 19th-century British engineer Isambard Kingdom Brunel — will be ten times faster than the country’s quickest machine once it switches on in about six months’ time. It will be hosted at Bristol University in a “self-cooled, self-contained data centre” developed by Hewlett Packard Enterprise.

Equipped with over 5,000 NVIDIA superchips, the supercomputer will run more than 200 quadrillion calculations per second. For comparison, a human would have to make a decision every second for 6.3 billion years to match what this machine can calculate in one second. 
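
That comparison holds up to a quick back-of-the-envelope check (ours, not from the announcement):

```python
# Sanity check: 200 quadrillion operations per second vs one human
# decision per second, expressed in years of non-stop human deciding.
ops_per_second = 200e15                    # 200 quadrillion
seconds_per_year = 365.25 * 24 * 60 * 60   # ~3.16e7 seconds
print(ops_per_second / seconds_per_year / 1e9)  # ~6.3 (billion years)
```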

The government’s new Frontier AI Taskforce will have priority access to the new computer to support its work to mitigate the risks posed by the most advanced forms of AI, the government said. Isambard-AI will also offer computing capacity for researchers and industry in fields such as robotics, big data, climate research, and drug discovery. According to Simon McIntosh-Smith of the University of Bristol, the supercomputer “will be one of the most powerful AI systems for open science anywhere in the world” once operational.


The funding injection, announced by innovation secretary Michelle Donelan at the AI safety summit yesterday, is part of a £300mn package to create a new national Artificial Intelligence Research Resource (AIRR). “We are making it clear that Britain is grasping the opportunity to lead the world in adopting this technology safely so we can put it to work and lead healthier, easier and longer lives,” said Donelan at the summit.

The investment will also connect Isambard-AI to a newly announced Cambridge supercomputer called Dawn. This computer — delivered through a partnership with Dell and StackHPC — will be powered by over 1,000 Intel chips that use water cooling to reduce power consumption. It is set to be running in the next two months, targeting breakthroughs in fusion energy, healthcare, and climate modelling.

As it looks to assert its dominance in technology, the UK is planning an even more powerful computer for 2025, to be housed at the University of Edinburgh. This ‘exascale’ machine (of which there is only one other currently in operation — Frontier in Tennessee, USA) will build on the technology and experience from the planned Bristol supercomputer. 
