Artificial Intelligence


Why AI progress hitting the brakes is more likely than world domination

There’s a looming global computing capacity crunch that cannot be sustainably addressed the way we’re doing things right now. 

Simply put, between artificial intelligence (AI) models growing exponentially and an ongoing global digital transformation, data centres are running out of space. Their vacancy rates are hitting record-lows and prices are rising in response to demand, which is cause for much unease among tech leaders.  

If this trend continues, at some point, we will reach a juncture where we can no longer accomplish all the things that technology theoretically allows us to do, because our capacity to process data will be constrained. 

Perhaps the biggest worry is that AI’s transformative potential, which we’re only just beginning to tap into, will be throttled by purely physical constraints. This will hinder new discoveries and the development of more advanced machine learning (ML) models, which is bad news for all except AI apocalypse alarmists.

Is there any way to avoid the computing capacity crisis? Since massively scaling back our computational demands isn’t really an option, the only alternative is to significantly boost capacity, which boils down to two available courses of action: build more data centres and develop better digital infrastructure.

But that’s easier said than done — here’s why. 

Why more data centres isn’t the answer

Until now, increasing demand for computing capacity has been met, in part, by building more data centres, with conservative estimates putting the growth in real estate taken up by data centres at ~40% per year. That figure is expected to remain fairly steady, as supply issues, power challenges, and construction delays severely limit how quickly capacity can expand.
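To put a ~40% annual growth rate in perspective, here is a minimal back-of-the-envelope sketch (assuming, purely for illustration, that the growth compounds steadily):

```python
import math

annual_growth = 0.40  # assumed steady ~40% year-on-year growth in data centre footprint

# Years needed for the footprint to double: (1 + g)^t = 2  =>  t = ln(2) / ln(1 + g)
doubling_time = math.log(2) / math.log(1 + annual_growth)
print(f"Doubling time at 40% p.a.: ~{doubling_time:.1f} years")  # ~2.1 years

# Footprint relative to today after a decade of uninterrupted 40% growth
after_ten_years = (1 + annual_growth) ** 10
print(f"Relative footprint after 10 years: ~{after_ten_years:.0f}x")  # ~29x
```

Sustaining that curve would mean a roughly thirty-fold larger physical footprint within a decade — with the land, power, and water bills to match.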

In other words, today, demand cannot be simply met by ramping up data centre construction. 

Nor should that be something we aspire to. Each of these football-field-sized warehouses gobbles up gargantuan amounts of energy and water, placing severe strain on the environment, both locally and globally. A single data centre can consume as much electricity and water as 50,000 homes and the cloud’s carbon footprint already exceeds that of the aviation industry.

Credit where credit is due — data centres have come a long way in minimising their environmental impact. This is in large part thanks to a fierce sustainability race, which has propelled innovation, particularly as it relates to cooling and energy efficiency. Nowadays, you’ll find data centres in underground mines, in the sea, and using other natural cooling opportunities such as fjord water flows, all to reduce energy and water consumption. 

The trouble is, this isn’t scalable globally, nor is boiling our seas a viable path forward. Erecting more data centres — no matter how efficient — will continue to wreak havoc on local ecosystems and impede national and international sustainability efforts. All while still failing to meet the demand for compute resources. 

Still, two chips are better than one, unless…

Think inside the box

… unless that single chip operates at twice the speed. To avoid the capacity crunch, all hopes rest on improving the digital infrastructure, namely, the chips, the switches, the wires, and other components that can improve data speeds and bandwidth while consuming less energy. 

Let me reiterate — the evolution of AI depends on finding ways to transfer more data, without using more energy. 

Essentially, this means two things. First, the development of more powerful and AI-centric chips. Second, the enhancement of data transfer speeds.

1. Designing custom chips for AI

Existing digital infrastructure isn’t particularly well suited for the efficient development of ML models. General-purpose central processing units (CPUs), which continue to be the primary computing components in data centres, struggle with AI-specific tasks due to their lack of specialisation and computational efficiency. 

When it comes to AI, graphics processing units (GPUs) fare much better thanks to greater processing power, higher energy efficiency, and parallelism. That’s why everyone’s snatching them up, which has led to a chip shortage.
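As a rough illustration of the parallelism advantage, here is a minimal timing sketch — assuming PyTorch is installed and a CUDA-capable GPU is available; the exact speed-up varies enormously with hardware and says nothing about energy use:

```python
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    """Time one n x n matrix multiplication on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    _ = a @ b  # warm-up run so one-off start-up costs aren't timed
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the asynchronous GPU kernel to finish
    return time.perf_counter() - start

cpu_time = time_matmul("cpu")
if torch.cuda.is_available():
    gpu_time = time_matmul("cuda")
    print(f"CPU: {cpu_time:.3f}s  GPU: {gpu_time:.3f}s  speed-up ~{cpu_time / gpu_time:.0f}x")
else:
    print(f"CPU only: {cpu_time:.3f}s (no GPU available to compare against)")
```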

Yet GPUs inevitably hit the same brick wall. They’re not inherently optimised for AI tasks, leading to energy waste and suboptimal performance in handling the increasingly intricate and data-intensive demands of modern AI applications. 

That’s why companies such as IBM are designing chips tailored to AI’s computational demands that promise to squeeze out the most performance while minimising energy consumption and space.

2. Improving data transfer capacity

No modern AI model operates on a single chip. Instead, to get the most out of available resources, you assemble multiple chips into clusters. These clusters often form part of larger networks, each designed for specific tasks.

Accordingly, the interconnect, or the system facilitating communication between chips, clusters, and networks, becomes a critical component. Unless it can keep up with the speed of the rest of the system, it risks being a bottleneck that hinders performance. 
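A crude way to see when the interconnect becomes the bottleneck is to compare the time a chip spends computing on a chunk of data with the time it takes to move that chunk between chips. A minimal sketch, with purely illustrative (assumed) numbers:

```python
def step_time(compute_flops: float, chip_flops_per_s: float,
              bytes_moved: float, link_bytes_per_s: float) -> dict:
    """Estimate whether compute or the interconnect dominates one training step."""
    compute_time = compute_flops / chip_flops_per_s
    transfer_time = bytes_moved / link_bytes_per_s
    return {
        "compute_s": compute_time,
        "transfer_s": transfer_time,
        "bottleneck": "interconnect" if transfer_time > compute_time else "compute",
    }

# Illustrative numbers only: a chip sustaining 1e15 FLOP/s, a step needing 2e15 FLOPs.
fast_link = step_time(2e15, 1e15, 40e9, 100e9)  # 40 GB exchanged over a 100 GB/s link
slow_link = step_time(2e15, 1e15, 40e9, 10e9)   # same traffic over a 10 GB/s link
print(fast_link)  # compute ~2.0 s vs transfer ~0.4 s -> compute-bound
print(slow_link)  # compute ~2.0 s vs transfer ~4.0 s -> interconnect-bound
```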

The challenges for data transfer devices mirror those for chips: they must operate at high speeds, consume minimal energy, and occupy as little physical space as possible. With traditional electrical interconnects fast reaching their limits in terms of bandwidth and energy efficiency, all eyes are on optical computing — and silicon photonics, in particular.

Unlike electrical systems, optical systems use light to transmit information, providing key advantages in the areas that matter — photonic signals can travel at the speed of light and carry a higher density of data. Plus, optical systems consume less power and photonic components can be much smaller than their electrical counterparts, allowing for more compact chip designs. 

The operative words here are “can be.” 
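Before getting to those caveats, it helps to see why the energy argument is so attractive in the first place. It comes down to joules per bit; here is a minimal sketch with purely illustrative (assumed) figures, not measurements of any real product:

```python
def link_power_watts(energy_pj_per_bit: float, data_rate_tbps: float) -> float:
    """Power drawn by one interconnect link: energy per bit x bits per second."""
    return energy_pj_per_bit * 1e-12 * data_rate_tbps * 1e12

# Assumed figures for illustration: an electrical link at 10 pJ/bit vs an
# optical link at 1 pJ/bit, both carrying 1 Tb/s.
print(link_power_watts(10, 1))  # 10.0 W per link
print(link_power_watts(1, 1))   #  1.0 W per link
# Multiplied across the tens of thousands of links in a large cluster, that
# difference becomes a meaningful share of a data centre's power budget.
```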

The growing pains of cutting-edge tech

Optical computing, while extremely fast and energy-efficient, currently faces challenges in miniaturisation, compatibility, and cost. 

Optical switches and other components can be bulkier and more complex than their electronic counterparts, leading to challenges in achieving the same level of miniaturisation. As of now, we have yet to find materials that are both an effective optical medium and scalable for high-density computing applications.

Adoption would also be an uphill battle. Data centres are generally optimised for electronic, not photonic, processing, and integrating optical components with existing electronic architectures poses a major challenge. 

Plus, just like any cutting-edge technology, optical computing has yet to prove itself in the field. There is a critical lack of research into the long-term reliability of optical components, particularly under the high-load, high-stress conditions typical of data centre environments.

And to top it all off — the specialised materials required in optical components are expensive, making widespread adoption potentially cost-prohibitive, especially for smaller data centres or those with tight budget constraints.

So, are we moving fast enough to avoid the crunch?

Probably not. Definitely not fast enough to stop building data centres in the short term.

If it’s any consolation, scientists and engineers are well aware of the problem and are working hard to find solutions that won’t destroy the planet, constantly pushing the boundaries and making significant advances in data centre optimisation, chip design, and all facets of optical computing.

My team alone has broken three world records in symbol rate for data centre interconnects using an intensity modulation and direct detection (IM/DD) approach.
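For context on why symbol rate is the headline figure in such records: the line rate of a single lane is roughly the symbol rate multiplied by the bits carried per symbol, so every extra gigabaud translates directly into throughput. A small sketch with illustrative numbers (not the record figures themselves):

```python
def line_rate_gbps(symbol_rate_gbaud: float, bits_per_symbol: int) -> float:
    """Raw line rate of a single lane: symbols per second x bits per symbol."""
    return symbol_rate_gbaud * bits_per_symbol

# Illustrative: a 100 GBaud lane using PAM-4 (2 bits per symbol) vs PAM-8 (3 bits).
print(line_rate_gbps(100, 2))  # 200 Gb/s per lane before overheads and coding
print(line_rate_gbps(100, 3))  # 300 Gb/s per lane, at the cost of tighter SNR margins
```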

But there are serious challenges, and it’s essential to address them head-on for modern technologies to realise their full potential. 

Professor Oskars Ozoliņš received his Dr.sc.ing. degree in optical communications from Riga Technical University (Latvia) in 2013 and received a habilitation degree in physics with a specialization in optical communication from KTH Royal Institute of Technology in 2021. He is the author of around 270 international journal publications, conference contributions, invited talks/tutorials/keynotes/lectures, patents, and book chapters. You can follow him on LinkedIn here.


Amsterdam’s Orquesta raises €800,000 for no-code LLM gateway

Amsterdam-based startup Orquesta today announced an oversubscribed €800,000 pre-seed funding round. The company has developed a platform through which companies can integrate various Large Language Models (LLMs) directly into their business operations. 

Generative AI and LLMs have been developing at breakneck speed over the past year — a velocity seemingly matched by investor appetite for all things GenAI. Indeed, just last week, Europe’s homegrown contribution in the form of Mistral 7B was made available free of charge to the public. 

Adding to the plethora of offerings, the technology itself will keep on developing rapidly (to where and to what end only time will tell). As such, organisations will undoubtedly struggle to keep up and, what’s more, quite probably end up with patchwork solutions as they try to integrate various models. (Who doesn’t love a good Frankenstein system?)  

Rather than relying on a fragmented AI tech stack, Orquesta proposes a single-gateway, no-code platform that it says allows SaaS companies to centralise prompt management, streamline experimentation, collect feedback, and get real-time insights into performance and costs. 
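To make the “single gateway” idea concrete, here is a deliberately generic sketch of what routing every prompt through one abstraction layer, rather than hard-coding each provider, can look like. It is illustrative only — stubbed providers, no real SDK calls — and does not represent Orquesta’s actual API:

```python
from typing import Callable, Dict

# Stub provider clients; in a real gateway these would wrap the vendors' SDKs.
def call_openai(prompt: str) -> str:
    return f"[openai] {prompt[:40]}..."

def call_mistral(prompt: str) -> str:
    return f"[mistral] {prompt[:40]}..."

class LLMGateway:
    """Single entry point that centralises model choice, prompts, and logging."""

    def __init__(self) -> None:
        self.providers: Dict[str, Callable[[str], str]] = {}
        self.log: list[dict] = []  # feedback and cost insights would hang off this

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        self.providers[name] = fn

    def complete(self, prompt: str, model: str) -> str:
        response = self.providers[model](prompt)
        self.log.append({"model": model, "prompt": prompt, "response": response})
        return response

gateway = LLMGateway()
gateway.register("openai", call_openai)
gateway.register("mistral", call_mistral)
print(gateway.complete("Summarise this support ticket...", model="mistral"))
```

The value proposition is that swapping models, logging costs, and collecting feedback all happen in one place instead of being scattered across every codebase that touches an LLM.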


“The accelerated development of new AI functionalities leads to increased fragmentation, complexity, and a need for speed within the technology landscape for our clients,” said Sohrab Hosseini, co-founder of Orquesta. “This prevents the already scarce talent of our clients from focusing on strategic work and growth.” 

Shortening release cycles

Hosseini, who founded Orquesta together with Anthony Diaz in 2022, said that with one LLM gateway, including all the necessary tooling, Orquesta was the “partner for them [clients] to remain competitive.” He added that the company was very excited about the focus and multidisciplinary experience of the various VCs and angel investors who had joined to “match and support Orquesta’s level of ambition.”

The round was led by Dutch venture capital firm Curiosity VC, while Spacetime, the investment office of Dutch entrepreneur Adriaan Mol, was co-lead. Meanwhile, angel investors include Koen Köppen (Mollie), Milan Daniels and Max Klijnstra (Otrium), and Arjé Cahn (Bloomreach). 

“Due to the acceleration of AI capabilities and adoption, we expect the landscape of LLM providers and models to grow exponentially,” said Adriaan Mol, founder of Mollie, Messagebird, and Spacetime. “Orquesta has built a best-in-class no-code platform that allows engineers to focus on their proprietary product and shorten release cycles instead of managing LLM integrations, configurations, and rules.” 


Paris-based Mistral releases first generative AI model — and it’s totally free

Europe’s startup contribution to the generative AI bonanza, Mistral, has released its first model. Mistral 7B is free to download and use anywhere — including locally. 

French AI developer Mistral says its Large Language Model is optimal for low latency, text summarisation, classification, text completion, and code completion. The startup has opted to release Mistral 7B under the Apache 2.0 licence, which has no restrictions on use or reproduction beyond attribution.

“Working with open models is the best way for both vendors and users to build a sustainable business around AI solutions,” the company commented in a blog post accompanying the release. “Open models can be finely adapted to solve many new core business problems, in all industry verticals — in ways unmatched by black-box models.” 

They further added that they believed the future will see many different specialised models, each adapted to specific tasks, compressed as much as possible, and connected to specific modalities.

Mistral has said that moving forward, it will specifically target business clients and their needs related to R&D, customer care, and marketing, as well as giving them the tools to build new products with AI. 

“We’re committing to release the strongest open models in parallel to developing our commercial offering,” Mistral said. “We’re already training much larger models, and are shifting toward novel architectures. Stay tuned for further releases this fall.”

Do not expect a user-friendly ChatGPT web interface to engage with the made-in-Europe LLM. However, it is downloadable via a 13.4GB torrent, and the company has set up a Discord channel to engage with the user community. 
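For the technically inclined, getting the model running locally is not much work. Here is a minimal sketch using the Hugging Face transformers library — assuming the weights are also published on the Hugging Face Hub under mistralai/Mistral-7B-v0.1 and that your machine has enough memory for a 7B-parameter model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face Hub identifier for the released weights.
model_id = "mistralai/Mistral-7B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" (requires the accelerate package) spreads the model
# across whatever GPUs/CPU memory are available.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The main advantage of open-weight models is", return_tensors="pt")
inputs = inputs.to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```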

European funding record for startups

Paris-based Mistral made headlines in June when it reportedly became Europe’s largest-ever seed-funded startup, raising €105mn in a round led by Lightspeed Venture Partners. While it may seem as if an almost insurmountable amount of work has happened over the past three months, one should take into consideration that the company’s three founders all came from Google’s DeepMind or Meta. 

The 7.3 billion parameter Mistral 7B reportedly outperforms larger models, such as Meta’s 13 billion parameter Llama 2, and requires less computing power. Indeed, according to its developers, it “outperforms all currently available open models up to 13B parameters on all standard English and code benchmarks.” 

“Mistral 7B’s performance demonstrates what small models can do with enough conviction.”


European Central Bank assembles ‘infinity team’ to identify GenAI applications

European Union bureaucracy might not conjure the most exciting of connotations. However, being part of the “infinity team” surely puts a superhero-esque spin on your average Frankfurt grey high-rise working day. 

Minute-takers, watch out. After surveying employees on where deploying generative AI could be most effective, the ECB’s newly established working group has launched nine trials, the results of which could speed up the day-to-day activities of the financial institution.

Large language models, the organisation says, could be deployed for tasks including writing draft code, testing software faster, summarising supervision documents, drafting briefings, and “improving text written by staff members making the ECB’s communication easier to understand for the public.” 

The central bank’s chief services officer, Myriam Moufakkir, discussed the ECB’s use of AI on Thursday in a blog post on the organisation’s website. (To be perfectly straight, she doesn’t mention the “infinity team” by name, but other reports suggest this is the assigned designation.)

Commenting on the ECB’s core work of analysing vast amounts of data, Moufakkir said, “AI offers new ways for us to collect, clean, analyse, and interpret this wealth of available data, so that the insights can feed into the work of areas like statistics, risk management, banking supervision, and monetary policy analysis.”

Existing ECB applications of AI

Moufakkir said that the ECB is already deploying AI in a number of areas. This includes data collection and classification, as well as applying web scraping and machine learning to understand price setting dynamics and inflation behaviour. 

Furthermore, it applies natural language processing models trained with supervisory feedback to analyse documents related to banking supervision. This is done through its in-house Athena platform, which helps with topic classification, sentiment analysis, dynamic topic modelling, and entity recognition. It also uses AI to translate documents into member state languages. 
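For a sense of what those building blocks look like in practice, here is a generic sketch using off-the-shelf transformers pipelines for two of the listed tasks — sentiment analysis and entity recognition. It is purely illustrative and has nothing to do with the ECB’s in-house Athena platform, which is trained on supervisory feedback:

```python
from transformers import pipeline

# Default public models are used here purely for illustration; a supervisory
# platform would rely on models fine-tuned on its own document corpus.
sentiment = pipeline("sentiment-analysis")
ner = pipeline("ner", aggregation_strategy="simple")

text = "The bank's capital buffers improved markedly in the second quarter."
print(sentiment(text))  # e.g. [{'label': 'POSITIVE', 'score': ...}]
print(ner(text))        # named entities with types and character spans
```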

“Naturally, we are cautious about the use of AI and conscious of the risks it entails,” Moufakkir further noted. “We have to ask ourselves questions like ‘how can we harness the potential that large language models offer in a safe and responsible manner?’, and ‘how can we ensure proper data management?’.”

Without specifying exactly how, she added that the ECB was looking into “key questions” in the fields of data privacy, legal constraints, and ethical considerations. With the EU’s landmark AI Act in the works, the use of financial data of its citizens should be a particularly intriguing landscape to navigate. 


‘World’s most accurate’ startup data platform to identify gaps in AI ecosystem

Today, the Saïd Business School at the University of Oxford, along with early-stage VC OpenOcean, released what they call “the world’s most accurate open access startup insights platform” — the O3.

The platform is the result of three years of research from Oxford Saïd and 13 years of experience of data economy investing from OpenOcean. It leverages public and private data sources and is meant to help improve decision making across the UK tech ecosystem, through “granular data on startups and their technology stack, solutions, and go-to-market strategies.” 

“In my time in venture capital, far too often the choice of which startups receive funding has come down to instinct and opportunistic use of data, rather than accurate definition and comparison of startups,” said Ekaterina Almasque, General Partner at OpenOcean. “We wanted to change that, creating a platform that cuts through the noise, and removes bias from decision-making.” 

The O3 platform has already screened 16,913 UK startups according to a unique taxonomy developed by Oxford Saïd. For an initial analysis, it has looked specifically at high-growth startups that use or facilitate AI (close to 1,300), and has come up with some interesting findings about the sector.

AI startups a small portion of UK tech ecosystem


The UK government has asserted that it is going to make the country an AI powerhouse. However, thus far, the number of UK startups with a pronounced AI focus makes up less than 10% of the ecosystem. 

Mari Sako, Professor of Management Studies at Oxford Saïd, believes that the O3 will enable policymakers, researchers, founders, and investors to clearly identify gaps and opportunities in the AI ecosystem. “We believe this platform has immense potential to aid innovators in making informed decisions based on sound data, and to boost research on AI,” Sako said.  

UK fintech is generally a top investment destination in Europe, but when it comes to AI, health tech startups receive more funding than their fintech counterparts (£3.4bn vs £2.6bn). 

Startups using AI for recognition tasks, including but not exclusive to facial recognition, have collectively raised the most funding (£6bn). Meanwhile, among the least funded are those focusing on privacy protection (£1.2bn). 

The analysis has also uncovered, perhaps unsurprisingly, a significant bias towards London for startup fundraising. However, there is also notable support in Bristol, Oxford, and Cambridge. 

According to the originators of the platform, the UK is only the beginning. “We want to see it expand to cover more markets and geographies,” Almasque continued. “It is a community-driven project, built on open access, where the more stakeholders participate, the more powerful the common knowledge becomes.”


Germany doubles funds for AI ‘made in Europe’

On Wednesday, the German government announced that it would nearly double its funding for artificial intelligence research. The money pledged towards the development of AI systems now amounts to nearly €1bn, which is still far behind the $3.3bn (€3.04bn) in public funding the US reportedly threw at the field last year. 

The Federal Ministry for Education and Research said that AI is a “key technology” that offers enormous opportunities for science, growth, prosperity, competitiveness, and social added value. It further added that “technological sovereignty in AI must be secured,” and that Germany and Europe should take a leading position in a world “powered by AI.” 

This means that Germany on its own is drawing level with the funds pledged by the EU. The European Commission has also committed €1bn to AI research per year through the Horizon Europe programme. Meanwhile, the Commission states that it will mobilise additional investments from both the private sector and the member states to reach an annual volume of €20bn. 

The increased funding was presented along with Germany’s Artificial Intelligence Action Plan by Federal Research Minister Bettina Stark-Watzinger. Earlier this month, the Minister argued that Germany “must bring its academic practices in line with its security interests in light of tensions with systemic rivals such as China.” 

The global AI race

Figures for public spending in China are notoriously tricky to pin down. However, in 2022, private AI investments in China were at $13.4bn (€12.35bn), still trailing far behind the US with a total of $47.4bn (€43.4bn). 

This week, the German government also proposed harsher export curbs on China for semiconductors and AI technologies, similar to the executive order recently signed by US President Biden. Furthermore, it laid out plans to tighten the screening process for Chinese foreign direct investment (FDI).

With the funds, Germany is looking to set up 150 new university labs dedicated to researching artificial intelligence, expand data centres, and increase access to datasets for training advanced AI models. The goal is to then convert the research and skills to “visible and measurable economic success and a concrete, noticeable benefit for society.”

Additionally, the government says it hopes to show the unique selling point of AI “Made in Germany” (or “Made in Europe”). “We have AI that is explainable, trustworthy and transparent,” Stark-Watzinger said. “That’s a competitive advantage.”  

Indeed it is, if you have the intention of using it somewhere affected by forthcoming artificial intelligence regulation. While the world waits for the EU AI Act, which will set different rules for developers and deployers of AI systems according to a risk classification system, the Cyberspace Administration of China last month published its own “interim measures” rules for generative AI.

Although the internet watchdog says the state “encourages the innovative use of generative AI in all industries and fields,” AI developers must register their algorithms with the government, if their services are capable of influencing public opinion or can “mobilise” the public.


UK commits £100M to secure AI chip components

In an intensifying global battle for semiconductor self-sufficiency, the UK will reportedly spend £100mn of taxpayer money to buy AI chip technology from AMD, Intel, and Nvidia. The initiative is part of the UK’s intention to build a national AI resource, but critics say it is too little, too late. 

As reported by The Guardian, a government official has confirmed that the UK is already in negotiations with Nvidia for the purchase of up to 5,000 graphics processing units (GPUs). However, they also noted that the £100mn is far too little compared to what the EU, US, and China, for instance, are committing to the growth of a domestic semiconductor industry.

The UK accounts for only 0.5% of global semiconductor sales, according to a Parliamentary report. In May this year, the government announced it would commit £1bn over 10 years to bolster the industry. In comparison, the US and the EU are investing $50bn (€46bn) and €43bn, respectively. 

The idea, Rishi Sunak said at the time, is to play to the UK’s strengths by focusing efforts on areas like research and design. This contrasts with the approach of building large fabs, such as the ones Germany is spending billions in state aid to make a reality. 

UK’s AI summit at historic computer science site

Alongside a push (no matter its size) toward more advanced AI-powering chip capabilities, the UK recently announced the date of its much-publicised AI safety summit. The meeting between officials from “like-minded” states, tech executives, and academics will take place at Bletchley Park, situated between Cambridge and Oxford, at the beginning of November. 


The site is considered the birthplace of the first programmable digital computer and of the codebreaking effort that cracked the Enigma encryption machine used by the Nazis, essentially tipping the scales of World War II. Today, it is home to the National Museum of Computing, run by the Bletchley Park Trust. 

Meanwhile, the race to secure components for chips — or the chips themselves — that can power AI systems such as large language models (LLMs) is accelerating. According to the Financial Times, Saudi Arabia has bought at least 3,000 of Nvidia’s processors, specially designed for generative AI. Its neighbour, the UAE, has far-reaching AI ambitions of its own, having also purchased thousands of advanced chips and developed its own LLM.  

At the same time, Chinese internet giants have reportedly rushed to secure billions of dollars’ worth of Nvidia chips ahead of a US tech investment ban coming into effect early next year. It is still unclear whether the UK will invite China to the gathering at Bletchley Park in November. The geopolitics and tech diplomacy of semiconductors could be entering its most delicate phase to date.


AI could fall short on climate change due to biased datasets, study finds

Among the many benefits of artificial intelligence touted by its proponents is the technology’s potential to help solve climate change. If this is indeed the case, the recent step changes in AI couldn’t have come soon enough. This summer, evidence has continued to mount that Earth is already transitioning from warming to boiling. 

However, as intense as the hype around AI has been over the past months, there is also a lengthy list of concerns accompanying it: its prospective use in spreading disinformation, for one, along with potential discrimination, privacy, and security issues.

Furthermore, researchers at the University of Cambridge, UK, have found that bias in the datasets used to train AI models could limit their application as a just tool in the fight against global warming and its impact on planetary and human health. 

As is often the case when it comes to global bias, it is a matter of Global North vs South. With most data gathered by researchers and businesses with privileged access to technology, the effects of climate change will, invariably, be seen from a limited perspective. As such, biased AI has the potential to misrepresent climate information, meaning the most vulnerable will suffer the direst consequences. 

Call for globally inclusive datasets


In a paper titled “Harnessing human and machine intelligence for planetary-level climate action” published in the prestigious journal Nature, the authors admit that “using AI to account for the continually changing factors of climate change allows us to generate better-informed predictions about environmental changes, allowing us to deploy mitigation strategies earlier.” 

This, they say, remains one of the most promising applications of AI in climate action planning. However, only if datasets used to train the systems are globally inclusive. 

“When the information on climate change is over-represented by the work of well-educated individuals at high-ranking institutions within the Global North, AI will only see climate change and climate solutions through their eyes,” said primary author and Cambridge Zero Fellow Dr Ramit Debnath. 

In contrast, those who have less access to technology and reporting mechanisms will be underrepresented in the digital sources AI developers rely upon. 

“No data is clean or without prejudice, and this is particularly problematic for AI which relies entirely on digital information,” the paper’s co-author Professor Emily Shuckburgh said. “Only with an active awareness of this data injustice can we begin to tackle it, and consequently, to build better and more trustworthy AI-led climate solutions.”

The authors advocate for human-in-the-loop AI designs that can contribute to a planetary epistemic web supporting climate action, directly enable mitigation and adaptation interventions, and reduce the data injustices associated with AI pretraining datasets. 

The need of the hour, the study concludes, is to be sensitive to digital inequalities and injustices within the machine intelligence community, especially when AI is used as an instrument for addressing planetary health challenges like climate change.

If we fail to address these issues, the authors argue, the outcomes could be catastrophic for societal and planetary stability, including the failure to fulfil any climate mitigation pathways.


Norwegian wealth fund warns of AI risks while reeling in billions from the tech

To say that Norway’s Sovereign Wealth Fund made a killing these past months would be an understatement. The world’s largest investor in the stock market earned 1,501 billion crowns (€131.1bn) in the first half of 2023, and much of it due to the recent boom in AI.

To a large extent, the profits came from the fund’s shares in tech companies such as Apple, Alphabet, Microsoft, and Nvidia, which all saw a surge from the current AI craze. Meanwhile, the fund is telling the very same companies to get serious about the responsible deployment and risks of artificial intelligence.

“As AI becomes ubiquitous across the economy, it is likely to bring great opportunities, but also severe and uncharted risks,” the €1.28 trillion fund said in a letter published this week.

It added that the technology continues to develop at a pace that makes it challenging to predict and manage risks in the form of regulatory and reputational risk to companies, as well as broader societal implications related to, for instance, discrimination and disinformation. 

In order to mitigate the threats posed by the technology, the letter suggested the fund’s 9,000+ portfolio companies develop AI expertise on their boards. 


Boards “absolutely not on top” of AI, fund CEO says

In an interview with the Financial Times, the fund’s CEO, Nicolai Tangen, stated that “Boards are absolutely not on top of this.” He further added that the fund would vote against companies that failed to deliver on AI expertise at directorial level. 

The oil fund also wants companies to disclose and explain how they use AI and how systems are designed and trained — so-called transparency and explainability. Furthermore, it is looking for robust risk management beyond a traditional business focus, adding human oversight and control to mitigate potential threats to privacy and risks of discrimination. It did not go so far as to mention the doom of humankind.

Meanwhile, Tangen is not shy in stating that, “If you don’t think there are opportunities with AI, then in my mind you are a complete moron.” 

In the letter, the fund also states that it supports the development of “a comprehensive and cohesive regulatory framework for AI that facilitates safe innovation and mitigation of adverse impacts.”

Yet, Tangen acknowledges that this will be “very hard” to achieve on a global scale, due to the technology’s near ubiquitous application in everything from education and military to cars and finance.


Composition co-written by AI performed by choir and published as sheet music

Ed Newton-Rex, a creative AI pioneer and VP of Audio at Stability AI, says that he’s become the first author to publish a piece of classical music that uses generative AI. The musician and entrepreneur wrote his “I stand in the library,” a piece for choir and piano, to a poem produced by OpenAI’s generative AI model GPT-3.

Written back in 2022, the 15-minute composition premiered at the Live from London online classical music festival, performed by the choral group VOCES8. The full version of the performance is available (behind a paywall) on the festival’s website, but here’s an exclusive preview to give you a general idea: