Artificial Intelligence

Revitalising European democracy: AI-supported civic tech on the rise

Story by Linnea Ahlgren

According to a study by the International Institute for Democracy and Electoral Assistance (IDEA), released late last week, digital technologies will become an increasingly important factor in European democracy in the coming decade. This is perhaps not entirely surprising; after all, the pandemic shifted much of our lives into the digital realm, so why shouldn’t our political participation follow?

The report, based on interviews with more than 50 government and industry representatives, finds that the market for online participation and deliberation in Europe is expected to grow to €300mn in the next five years, whereas the market for e-voting will grow to €500mn. The respondents also state that there is a “window of opportunity” for European providers of democracy technology to expand beyond Europe.

Authors of the report further believe that digital democracy technology can support outreach to demographics that may otherwise be difficult to reach, such as youth and immigrant communities. This also includes broader populations under difficult circumstances, such as those brought on by the pandemic and Russia’s war of aggression against Ukraine. 

“In case of war, electronic democracy tools have to be even stronger. Because we understand we have to live for the society and give citizens tools,” said Oleg Polovynko, Director of IT at Kyiv Digital, City Council of Kyiv, and one of the speakers at the TNW Conference 2023.

Not without controversy

Digital democracy refers to the use of digital technologies and platforms to enhance democratic processes and increase citizens’ participation in government decision-making. This is also referred to as civic tech (not to be confused with govtech, which focuses on technologies that help governments perform their functions more efficiently).

Examples of tools include online petitions, open data portals, and participatory budgeting systems, where citizens come together to discuss community needs and priorities and then allocate public funds accordingly. 

In a best-case scenario, it has the potential to reinvigorate democracy by allowing citizens to participate from anywhere at any time. In a worst-case scenario, it could be used for disinformation or just plain good old online toxic behaviour. 

Furthermore, the discussion of a potential ‘digital divide’ – who will benefit and who will be excluded due to access, or lack thereof, to technology – is not one that is easily settled.

Inviting AI into collective decision making

IDEA states that there are more than 100 vendors in Europe in the online participation, deliberation and voting sector, most of which are active at a national level. The majority of those operating internationally are startups with between 10 and 60 employees, but they are expanding quickly.

Many of these democracy technology platforms have already begun taking advantage of the recent step-change developments in artificial intelligence to introduce new features or enhance existing ones. 

“We foresee a future where citizens and AI collaboratively engage with governments to address intricate social issues by merging collective intelligence with artificial intelligence,” Robert Bjarnason, co-founder and President of Citizens.is, tells TNW.

“We advocate for a model in which citizens work alongside powerful AI systems to help shape policy, rather than allowing centralised government AI models to exert excessive influence.”

Following the collapse of Icelandic banks in 2008, distrust of politicians was at an all-time high in the Nordic island nation. Together with a fellow programmer, Gunnar Grímsson, Bjarnason created a software platform called Your Priorities that allows citizens to suggest laws and policies that can then be up- or down-voted by other users.

Just before local elections in 2010, the open-source software was used to set up the Better Reykjavik portal. Five years later, a poll on the site managed to name a street in the Icelandic capital after Darth Vader (well, his Icelandic moniker of Svarthöfði, or Black-cape, which already fitted well with the names of the streets in the area). 

Of course, there have been much ‘weightier’ decisions influenced by the platform, such as crowdsourcing ideas on how to prioritise the City’s educational objectives.

Thus far, over 70,000 of the capital’s inhabitants have engaged with Better Reykjavik. Pretty impressive for a population of 120,000. Furthermore, Your Priorities has been trialled in Malta, Norway, Scotland, and Estonia. 

The Baltic tech-forward nation has adopted several laws suggested through the platform, which features a unique debating system, crowdsourcing of content and prioritisation, a ‘toxicity sensor’ to alert admins about potentially abusive content – and extensive use of AI. In fact, Citizens.is recently entered into collaboration with OpenAI, and has deployed GPT-4 for its AI assistant – in Icelandic. 

GPT-4 now empowers digital democracy and collective intelligence in Iceland 🤖❤️ Thnx to a collaboration btw @OpenAI, the government, and Miðeind, we’re launching our AI assistant in Icelandic. Thanks @sama, @gdb, @vthorsteinsson, @cohere, @langchain, @weaviate_io & @buildWithLit pic.twitter.com/LNxAAFe2nf

— Citizens Foundation (@CitizensFNDN) March 19, 2023

Don’t worry if the language barrier felt a little steep. Citizens.is has been kind enough to provide TNW with a screenshot of the company’s AI assistant in action from a project in Oakland, California. 

Screenshot of OpenAI chatbot conversation
Credit: Citizens.is
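As an illustration of how a feature like the ‘toxicity sensor’ mentioned above could be built, here is a minimal, hypothetical Python sketch based on OpenAI’s moderation endpoint (2023-era library). This is not Citizens.is’s actual implementation; the function name and the example comment are invented for the illustration.

```python
# Hypothetical sketch only: one way a 'toxicity sensor' could be built on
# OpenAI's moderation endpoint. This is NOT Citizens.is's actual code.
import openai  # pip install openai (0.27-era API)

openai.api_key = "YOUR_API_KEY"  # placeholder

def flag_abusive_content(text: str) -> bool:
    """Return True if the moderation endpoint flags the text, so an admin
    can be alerted before the post goes live."""
    response = openai.Moderation.create(input=text)
    result = response["results"][0]
    if result["flagged"]:
        # Log the triggered categories for a moderator dashboard
        triggered = [name for name, hit in result["categories"].items() if hit]
        print(f"Flagged for review: {triggered}")
    return result["flagged"]

# Example: screen a citizen's comment before it is published
flag_abusive_content("This proposal is great, but the council are idiots.")
```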

Other examples of civic tech focused companies in Europe include Belgium-founded scaleup CitizenLab, which now works with more than 300 local governments and organisations across 18 countries, and Berlin-based non-profit Liquid Democracy. Liquid’s open-source deliberation and collaborative decision-making platform, Adhocracy+, also helps facilitate face-to-face meetings throughout the timeline of participation projects.

Gaining the trust of the citizen

The main product trends identified in the IDEA study are: artificial intelligence, voting, and administration and reporting. Meanwhile, it also found that it is important to address issues around inclusiveness, data usage, accountability and transparency, and to develop security standards for end-to-end verified voting.

One solution proposed is the introduction of a Europe-wide quality trust mark for democracy technologies. 

“If a citizen can trust the banking application to make transactions, then equivalently our service can be trusted to make the citizen’s voice heard,” stated Nicholas Tsounis, CEO of online voting platform Electobox. “We want people to trust this application because we know that it is there for them to protect the right to speak and vote.” 


Can AI save lives? Cancer detection study suggests yes

Story by Linnea Ahlgren

Much of the world may currently be fretting about how to limit the impact (lack of privacy, copyright issues, loss of jobs, world domination, etc.) of artificial intelligence. However, that does not mean that there isn’t enormous potential for AI to improve quality of life on earth. 

One such application is healthcare. With the ability to process big data sets, the deployment of AI could lead to significant advances in predictive diagnostics, including early detection of cancer. While more research is needed, one of the latest studies in the field shows promising results for AI-assisted diagnosis of lung cancer. 

Doctors and researchers at The Royal Marsden NHS Foundation Trust, the Institute of Cancer Research, and Imperial College London have built an AI algorithm they say can diagnose cancerous growths more efficiently than current methods.

In the study, named OCTAPUS-AI, researchers used imaging and clinical data from over 900 patients in the UK and the Netherlands who had received curative radiotherapy to develop and test ML algorithms, in order to see how accurately the models could predict recurrence.

Specifically, the study looked at whether AI could help identify the risk of cancer returning in non-small cell lung cancer (NSCLC) patients. Researchers used CT scans to develop an AI algorithm using radiomics, a quantitative approach that extracts novel data and predictive biomarkers from medical imaging.
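For readers unfamiliar with radiomics, here is a minimal sketch of what feature extraction can look like using the open-source pyradiomics package. The file names are placeholders, and this illustrates the general approach rather than the OCTAPUS-AI study’s actual pipeline.

```python
# A minimal sketch of radiomic feature extraction with the open-source
# pyradiomics package -- an illustration of the approach, not the study's code.
from radiomics import featureextractor  # pip install pyradiomics

# Default settings extract shape, first-order, and texture features
extractor = featureextractor.RadiomicsFeatureExtractor()

# Placeholder file names: the CT volume and a mask delineating the tumour
features = extractor.execute("ct_scan.nrrd", "tumour_mask.nrrd")

# Each extracted feature becomes one input column for a downstream ML model
for name, value in features.items():
    if name.startswith("original_"):
        print(name, value)
```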

Research algorithm superior to current technology

NSCLC patients make up 85% of lung cancer cases. While the disease is often treatable when caught early, in over a third of patients, the cancer returns. The study found that using the algorithm, clinicians may eventually be able to identify recurrence earlier in high-risk patients. 

The scientists used a measure called area under the curve (AUC) to see how efficient the model was at detecting cancer. A perfect 100% accuracy score would be a 1, whereas a model that was purely guessing 50-50 would get 0.5. In the study, the AI algorithm built by the researchers scored 0.87. This can be compared to the 0.67 score of the technology currently in use. 
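To make the metric concrete, here is a toy example of computing AUC with scikit-learn on made-up recurrence labels and risk scores; these are not the study’s data.

```python
# Toy illustration of the AUC metric described above (hypothetical data)
from sklearn.metrics import roc_auc_score

# 1 = cancer recurred, 0 = no recurrence
y_true   = [1, 0, 1, 1, 0, 0, 1, 0]
# The model's predicted risk of recurrence for each patient
y_scores = [0.9, 0.2, 0.35, 0.6, 0.4, 0.1, 0.8, 0.3]

# 1.0 = perfect ranking, 0.5 = coin flip; this example prints 0.9375
print(roc_auc_score(y_true, y_scores))
```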

“Next, we want to explore more advanced machine learning techniques, such as deep learning, to see if we can get even better results,” Dr Sumeet Hindocha, Clinical Oncology Specialist Registrar at The Royal Marsden NHS Foundation Trust, and Clinical Research Fellow at Imperial College London, said. “We then want to test this model on newly diagnosed NSCLC patients and follow them to see if the model can accurately predict their risk of recurrence.”

Support for practitioners – and patients

Rather than believing it will replace doctors, most in the field now view AI in healthtech as a tool that will assist practitioners in providing the best possible care – including an improved bedside manner. Despite investors growing gradually more risk-averse over the past year, the healthcare AI sector is still expected to grow from close to $14 billion in 2023 to $103 billion by 2028.

The UK is teeming with AI healthtech startups. Many are focused on drug development, genomic analysis, or more consumer-centric telehealth symptom checking and wearables. However, some are intent on improving disease detection and diagnosis. These include the likes of Mendelian, which just received close to £1.5 million to roll out its AI-based solution for rare disease diagnosis as part of the government’s investment into AI technology within the NHS.

The rest of Europe also has its fair share of diagnostic AI startups. Among them is Liège-based Radiomics, which focuses on the detection and phenotypic quantification of solid tumours based on standard-of-care imaging. In Norway, DoMore Diagnostics is using AI and deep learning to increase the prognostic and predictive value of cancer tissue biopsies. The company’s founders also say it could help guide the selection of therapy to avoid over- and undertreatment.

Meanwhile, a few extra percentage points of diagnostic accuracy, vital though they may be for the affected individual, may not be the only positive impact AI could have on our care systems.

According to Eric Topol, the author of Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, “the greatest opportunity offered by AI is not reducing errors or workloads, or even curing cancer: it is the opportunity to restore the precious and time-honoured connection and trust—the human touch—between patients and doctors.”


This ‘Skyrim VR’ Mod Shows How AI Can Take VR Immersion to the Next Level

ChatGPT isn’t perfect, but the popular AI chatbot’s access to large language models (LLMs) means it can do a lot of things you might not expect, like give all of Tamriel’s NPC inhabitants the ability to hold natural conversations and answer questions about the iconic fantasy world. Uncanny, yes. But it’s a prescient look at how games might one day use AI to reach new heights in immersion.

YouTuber ‘Art from the Machine’ released a video showing off how they modded the much beloved VR version of The Elder Scrolls V: Skyrim.

The mod, which isn’t available yet, ostensibly lets you hold conversations with NPCs via ChatGPT and xVASynth, an AI tool for generating voice acting lines using voices from video games.

Check out the results in the most recent update below:

The latest version of the project introduces Skyrim scripting for the first time, which the developer says allows for lip syncing of voices and NPC awareness of in-game events. While still a little rigid, it feels like a pretty big step towards climbing out of the uncanny valley.

Here’s how ‘Art from the Machine’ describes the project in a recent Reddit post showcasing their work:

A few weeks ago I posted a video demonstrating a Python script I am working on which lets you talk to NPCs in Skyrim via ChatGPT and xVASynth. Since then I have been working to integrate this Python script with Skyrim’s own modding tools and I have reached a few exciting milestones:

NPCs are now aware of their current location and time of day. This opens up lots of possibilities for ChatGPT to react to the game world dynamically instead of waiting to be given context by the player. As an example, I no longer have issues with shopkeepers trying to barter with me in the Bannered Mare after work hours. NPCs are also aware of the items picked up by the player during conversation. This means that if you loot a chest, harvest an animal pelt, or pick a flower, NPCs will be able to comment on these actions.

NPCs are now lip synced with xVASynth. This is obviously much more natural than the floaty proof-of-concept voices I had before. I have also made some quality of life improvements such as getting response times down to ~15 seconds and adding a spell to start conversations.

When everything is in place, it is an incredibly surreal experience to be able to sit down and talk to these characters in VR. Nothing takes me out of the experience more than hearing the same repeated voice lines, and with this no two responses are ever the same. There is still a lot of work to go, but even in its current state I couldn’t go back to playing without this.
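To give a flavour of the context-injection the modder describes, here is a rough, hypothetical Python sketch that feeds location, time of day, and recent player actions into a ChatGPT prompt using the 2023-era OpenAI library. It is not the actual mod code; the function, NPC name, and prompt wording are invented for illustration.

```python
# Hypothetical sketch of injecting game context into an NPC's ChatGPT prompt.
# Not the actual mod code.
import openai  # 2023-era API (openai<1.0)

openai.api_key = "YOUR_API_KEY"  # placeholder

def npc_reply(npc_name, location, time_of_day, recent_events, player_line):
    # Build a system prompt from the current game state
    system_prompt = (
        f"You are {npc_name}, an NPC in Skyrim. You are currently in "
        f"{location} and it is {time_of_day}. Stay in character and keep "
        f"replies to one or two sentences. Recent events you witnessed: "
        f"{'; '.join(recent_events)}."
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": player_line},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(npc_reply(
    "Hulda", "the Bannered Mare in Whiterun", "well past midnight",
    ["the player picked up a bottle of mead"],
    "Are you still open for business?",
))
```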

You might notice the actual voice prompting the NPCs is fairly robotic too, although ‘Art from the Machine’ says they’re using speech-to-text to talk to the ChatGPT 3.5-driven system. The voice heard in the video is generated by xVASynth, and then plugged in during video editing to replace what they call their “radio-unfriendly voice.”

And when can you download and play for yourself? Well, the developer says publishing their project is still a bit of a sticky issue.

“I haven’t really thought about how to publish this, so I think I’ll have to dig into other ChatGPT projects to see how others have tackled the API key issue. I am hoping that it’s possible to alternatively connect to a locally-run LLM model for anyone who isn’t keen on paying the API fees.”

Serving up more natural NPC responses is also an area that needs to be addressed, the developer says.

For now I have it set up so that NPCs say “let me think” to indicate that I have been heard and the response is in the process of being generated, but you’re right this can be expanded to choose from a few different filler lines instead of repeating the same one every time.

And while the video is noticeably sped up after prompts, this mostly comes down to the voice generation software xVASynth, which admittedly slows the response pipeline down since it’s being run locally. ChatGPT itself doesn’t affect performance, the developer says.

This isn’t the first project we’ve seen using chatbots to enrich user interactions. Lee Vermeulen, a long-time VR pioneer and the developer behind Modbox, released a video in 2021 showing off one of his first tests using OpenAI’s GPT-3 and the voice acting software Replica. In Vermeulen’s video, he talks about how he set parameters for each NPC, giving them the body of knowledge they should have, all of which guides the sort of responses they’ll give.

Check out Vermeulen’s video below, the very same that inspired ‘Art from the Machine’ to start working on the Skyrim VR mod:

As you’d imagine, this is really only the tip of the iceberg for AI-driven NPC interactions. Being able to naturally talk to NPCs, even if a little stuttery and not exactly at human-level, may be preferable over having to wade through a ton of 2D text menus, or go through slow and ungainly tutorials. It also offers up the chance to bond more with your trusty AI companion, like Skyrim’s Lydia or Fallout 4’s Nick Valentine, who instead of offering up canned dialogue might actually, you know, help you out every once in a while.

And that’s really only the surface-level stuff that a mod like the one from ‘Art from the Machine’ might deliver to existing games that aren’t built with AI-driven NPCs. Imagining a game that is actually predicated on your ability to ask the right questions and do your own detective work—well, that’s a role-playing game we’ve never experienced before, either in VR or otherwise.


German creatives want EU to address ChatGPT copyright concerns

Story by Linnea Ahlgren

ChatGPT has had anything but a triumphant welcome tour around Europe. Following grumbling regulators in Italy and the European Parliament, the turn has come for German trade unions to express their concerns over potential copyright infringement. 

No fewer than 42 trade organisations representing over 140,000 of the country’s authors and performers have signed a letter urging the EU to impose strict rules for the AI’s use of copyrighted material.

As first reported by Reuters, the letter, which underlined increasing concerns about copyright and privacy issues stemming from the material used to train the large language model (LLM), stated:

“The unauthorised usage of protected training material, its non-transparent processing, and the foreseeable substitution of the sources by the output of generative AI raise fundamental questions of accountability, liability and remuneration, which need to be addressed before irreversible harm occurs.”

Signatories include major German trade unions Verdi and DGB, as well as other associations for photographers, designers, journalists and illustrators. The letter’s authors further added:

“Generative AI needs to be at the centre of any meaningful AI market regulation.”

ChatGPT is not the only target of copyright contention. In January, visual media company Getty Images filed a copyright claim against Stability AI. According to the lawsuit, the image generation tool’s developer allegedly copied over 12 million photos, captions, and metadata without permission.

LLM training offers diminishing returns

The arrival of OpenAI’s ChatGPT has sparked a flurry of concerns. Thus far, these have covered everything from aggressive development due to a commercially motivated AI “arms race” to matters of privacy, data protection, and copyright.

Meanwhile, one of the originators of the controversy, the company’s CEO Sam Altman, stated last week that the amplified machine learning strategy behind ChatGPT has run its course. Indeed, OpenAI forecasts diminishing returns on scaling up model size. The company trained its latest model, GPT-4, on over a trillion words at a cost of about $100 million.

At the same time, the EU’s Artificial Intelligence Act is nearing its home stretch. While it may well set a global regulatory standard, the question is how well it will be able to adapt as developers find other new and innovative ways of making algorithms more efficient.


Meta Shows New Progress on Key Tech for Making AR Genuinely Useful

Meta has introduced the Segment Anything Model, which aims to set a new bar for computer-vision-based ‘object segmentation’—the ability for computers to understand the difference between individual objects in an image or video. Segmentation will be key for making AR genuinely useful by enabling a comprehensive understanding of the world around the user.

Object segmentation is the process of identifying and separating objects in an image or video. With the help of AI, this process can be automated, making it possible to identify and isolate objects in real-time. This technology will be critical for creating a more useful AR experience by giving the system an awareness of various objects in the world around the user.

The Challenge

Imagine, for instance, that you’re wearing a pair of AR glasses and you’d like to have two floating virtual monitors on the left and right of your real monitor. Unless you’re going to manually tell the system where your real monitor is, it must be able to understand what a monitor looks like so that when it sees your monitor it can place the virtual monitors accordingly.

But monitors come in all shapes, sizes, and colors. Sometimes reflections or occluded objects make it even harder for a computer-vision system to recognize them.

Having a fast and reliable segmentation system that can identify each object in the room around you (like your monitor) will be key to unlocking tons of AR use-cases so the tech can be genuinely useful.

Computer-vision based object segmentation has been an ongoing area of research for many years now, but one of the key issues is that in order to help computers understand what they’re looking at, you need to train an AI model by giving it lots of images to learn from.

Such models can be quite effective at identifying the objects they were trained on, but they will struggle with objects they haven’t seen before. That means one of the biggest challenges for object segmentation is simply having a large enough set of images for the systems to learn from, and collecting those images and annotating them in a way that makes them useful for training is no small task.

SAM I Am

Meta recently published work on a new project called the Segment Anything Model (SAM). It’s both a segmentation model and a massive set of training images the company is releasing for others to build upon.

The project aims to reduce the need for task-specific modeling expertise. SAM is a general segmentation model that can identify any object in any image or video, even for objects and image types that it didn’t see during training.

SAM supports both automatic and interactive segmentation, allowing it to identify individual objects in a scene with simple inputs from the user. SAM can be ‘prompted’ with clicks, boxes, and other cues, giving users control over what the system is attempting to identify at any given moment.
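For the technically curious, point-based prompting looks roughly like this with Meta’s open-source segment-anything package. This is a minimal sketch: the checkpoint path, image file, and click coordinates are placeholders.

```python
# Minimal sketch of point-prompted segmentation with Meta's open-source
# segment-anything package; paths and coordinates are placeholders.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Placeholder path to a downloaded SAM checkpoint file
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

# SAM expects an RGB image as a HxWx3 uint8 array
image = cv2.cvtColor(cv2.imread("desk.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One foreground click (label 1) roughly on the monitor in the photo
masks, scores, _ = predictor.predict(
    point_coords=np.array([[640, 360]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return several candidate masks
)
print(masks.shape, scores)  # e.g. (3, H, W) boolean masks with confidences
```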

It’s easy to see how this point-based prompting could work great if coupled with eye-tracking on an AR headset. In fact, that’s exactly one of the use-cases that Meta has demonstrated with the system:

Here’s another example of SAM being used on first-person video captured by Meta’s Project Aria glasses:

You can try SAM for yourself in your browser right now.

How SAM Knows So Much

Part of SAM’s impressive abilities comes from its training data, which contains a massive 10 million images and 1 billion identified object shapes. It’s far more comprehensive than contemporary datasets, according to Meta, giving SAM much more experience in the learning process and enabling it to segment a broad range of objects.

Image courtesy Meta

Meta calls the SAM dataset SA-1B, and the company is releasing the entire set for other researchers to build upon.

Meta hopes this work on promptable segmentation, and the release of this massive training dataset, will accelerate research into image and video understanding. The company expects the SAM model can be used as a component in larger systems, enabling versatile applications in areas like AR, content creation, scientific domains, and general AI systems.


These are the new jobs generative AI could create in the future

Search interest in ChatGPT has surged by 2,633% since last December, shortly after its launch. For the artificial intelligence and machine learning industry, and for those working in tech as a whole, OpenAI’s chatbot represents a true crossing of the Rubicon.

A generative form of AI, it uses prompts to produce content and conversations, whereas traditional AI looks at things such as pattern detection, decision making, or classifying data. We already benefit from artificial intelligence, whether we realise it or not—from Siri in our Apple phones to the choices Netflix or Amazon Prime make for us to the personalisations and cyber protection that lie behind our commercial interactions.

ChatGPT is just one of an increasing number of generative AI tools, including Bing Chat and Google Bard. DeepMind’s AlphaCode writes computer programs at a competitive level; Jasper is an AI copywriter, and DALL-E, Midjourney and Stable Diffusion can all create realistic images and art from a description you give them.

As a result, generative AI is now firmly embedded in the mainstream consciousness, with much credit going to ChatGPT’s easy-to-use interface, and its ability to produce results that can be as sublime as they are ridiculous. Want it to produce some Python code? Sure thing—and it can generate you a funny limerick too, if you’d like.

How generative AI will impact the job market

According to Salesforce, 57% of senior IT leaders believe generative AI is a game changer, and because it is intuitive and helpful, end users like it as well.

While your job may be safe from AI (for the moment), ChatGPT-generated content has gotten into the top 20% of all candidates shortlisted for a communications consultant role at marketing company Schwa, and it has also passed Google’s level 3 engineering coding interview.

Roles that are likely to resist the advent of generative AI include graphic designers, programmers (though they are likely to adopt AI tools that speed up their process) and blockchain developers, but many other jobs are likely to be performed by AI in the (near) future.

These include customer service jobs—chatbots can do this efficiently. Bookkeeping or accounts roles are also likely to be replaced as software can do many of these tasks. Manufacturing will see millions of jobs replaced with smart machinery that does the same job, but faster.

But, while AI may replace some jobs, it will also generate a slew of new ones.

The World Economic Forum predicts that the technology will create 97 million new jobs by 2025. Jobs specifically related to the development and maintenance of AI and automation will see growing adoption as AI integrates across multiple industries.

These could include data detectives or scientists, prompt engineers, robotics engineers, machine managers, and programmers, particularly those who can code in Python, which is key for AI development. AI trainers and those with capabilities related to modelling, computational intelligence, machine learning, mathematics, psychology, linguistics, and neuroscience will also be in demand.

Healthcare looks set to benefit too, with PwC estimating that AI-assisted healthcare technician jobs will see an upward surge. A sector that is already creating new jobs is automated transportation with Tesla, Uber, and Google investing billions into AI-driven self-driving cars and trucks.

If you want to work in AI now, there are plenty of jobs on offer. Discover three below, or check out the House of Talent Job Board for many more opportunities.

Staff Data Engineer, Data & ML Products, Adevinta Group, Amsterdam

Adevinta is on the lookout for a top-notch Staff Data Engineer to join the team and make a global impact in an exciting and dynamic environment. You will build and run production-grade data and machine learning pipelines and products at scale in an agile setup. You will work closely with data scientists, engineers, architects, and product managers to create the technology that generates and transforms data into applications, insights, and experiences for users. You should be familiar with privacy regulation, be an ambassador of privacy by design, and actively participate in department-wide, cross-functional tech initiatives. Discover more here.

AIML – Annotation Analyst, German Market, Apple, Barcelona

Apple’s AIML team is passionate about technology with a focus on enriching the customer experience. It is looking for a motivated Annotation Analyst who can demonstrate active listening, integrity, acute attention to detail, and is passionate about impacting customers’ experience. You’ll need fluency in the German language with excellent comprehension, grammar, and proofreading skills, as well as excellent English reading comprehension and writing skills. You should also have excellent active listening skills, with the ability to understand verbal nuances. Find out more about the job here.

Artificial Intelligence Product Owner – M/F, BNP Paribas, Paris

As Artificial Intelligence Product Owner, you’ll report to the head of the CoE IA, ensuring improvements to data science tools (Stellar, Domino, D3) to integrate the needs of data scientists and data analysts in particular. You will also participate in all the rituals of Agile methodology and will organise sprint planning, sprint review, retrospective, and more for team members. You will also be the Jira and Confluence expert. If this sounds like a position for you, you can find more information here.


What to expect from AI in 2023

Here we go again! For the sixth year running, we present Neural’s annual AI predictions. 2022 was an incredible year for the fields of machine learning and artificial intelligence. From the AI developer who tried to convince the world that one of Google’s chatbots had become sentient to the recent launch of OpenAI’s ChatGPT, it’s been 12 months of non-stop drama and action. And we have every reason to believe that next year will be both bigger and weirder.

That’s why we reached out to three thought leaders whose companies are highly invested in artificial intelligence and the future. Without further ado, here are the predictions for AI in 2023:

First up, Alexander Hagerup, co-founder and CEO at Vic.ai, told us that we’d continue to see the “progression from humans using AI and ML software to augment their work, to humans relying on software to autonomously do the work for them.” According to him, this will have a lot to do with generative AI for creatives — we’re pretty sure he’s talking about the ChatGPTs and DALL-Es of the AI world — as well as “reliance on truly autonomous systems for finance and other back-office functions.”


He believes a looming recession could increase this progress as much as two-fold, as businesses may be forced to find ways to cut back on labor costs.

Next, we heard from Jonathan Taylor, Chief Technology Officer at Zoovu. He’s predicting global disruption for the consumer buyer experience in 2023 thanks to “innovative zero-party solutions, leveraging advanced machine learning techniques and designed to interact directly and transparently with consumers.” I know that sounds like corporate jargon, but the fact of the matter is sometimes marketing-speak hits the nail on the head.

Consumers are sick and tired of the traditional business interaction experience. We’ve been on hold since we were old enough to pay bills. It’s a bold new world and the companies that know how to use machine learning to make us happy will be the cream that rises to the top in 2023 and beyond.

Jonathan Taylor, Chief Technology Officer at Zoovu

Taylor also predicts that Europe’s world-leading consumer protection and data privacy legislation will force companies large and small to “adopt these new approaches before the legacy approaches either become regulated out of existence by government or mandated out of existence by consumers.”

The writing’s on the wall. As he puts it, “the only way to make these zero-party solutions truly scalable and as effective as the older privacy-invading alternatives, will be to use advanced machine learning and transfer learning techniques.”

Finally, we got in touch with Gabriel Mecklenburg, co-founder at Hinge Health. He told us that the future of AI in 2023 is diversity. In order for the field to progress, especially when it comes to medicine, machine learning needs to work for everyone.

In his words, “AI is clearly the future of motion tracking for health and fitness, but it’s still extremely hard to do well. Many apps will work if you’re a white person with an average body and a late-model iPhone with a big screen. However, equitable access means that AI-powered care experiences must work on low-end phones, for people of all shapes and colors, and in real environments.”

Gabriel Mecklenburg, co-founder and Executive Chairman of Hinge Health

Mecklenburg explained that more than one in five people suffer from musculoskeletal conditions such as neck, back, and joint pain. According to him, “it is a global crisis with a severe human and economic toll.”

He believes that, with AI, medical professionals have what they need to help those people. “For example,” says Mecklenburg, “AI technology can now help identify and track many unique joints and reference points on the body using just the phone camera.”

But, as mentioned above, this only matters if these tools work for everyone. Per Mecklenburg, “we must ensure AI is used to bridge the care gap, not widen it.”

From the editor of Neural:

It’s been a privilege curating and publishing these predictions all these years. When we started, over half a decade ago, we made the conscious decision to highlight voices from smaller companies. And, as long-time readers might recall, I even ventured a few predictions myself back in 2019.

But, considering we spent all of 2020 in COVID lockdown, I’m reticent to tempt fate yet again. I won’t venture any predictions for AI in 2023 save one: the human spirit will endure.

When we started predicting the future of AI here at Neural, a certain portion of the population found it clever to tell creatives to “learn to code.” At the time, it seemed like journalists and artists were on the verge of being replaced by machines.

Yet, six years later, we still have journalists and artists. That’s the problem with humans: we’re never satisfied. Build an AI that understands us today, and it’ll be out of date tomorrow.

The future is all about finding ways to make AI work for us, not the other way around.


Ukraine has become the world’s testing ground for military robots

The war in Ukraine has become the largest testing ground for artificial intelligence-powered autonomous and uncrewed vehicles in history. While the use of military robots is nothing new — World War II saw the birth of remote-controlled war machines and the US has deployed fully-autonomous assault drones as recently as 2020 — what we’re seeing in Ukraine is the proliferation of a new class of combat vehicle. 

This article discusses the “killer robot” technology being used by both sides in Russia’s war in Ukraine. Our main takeaway is that the “killer” part of “killer robots” doesn’t apply here. Read on to find out why. 

Uncrewed versus autonomous

This war represents the first usage of the modern class of uncrewed vehicles and automated weapons platforms in a protracted invasion involving forces with relatively similar tech. While Russia’s military appears, on paper, to be superior to Ukraine’s, the two sides have fielded forces with similar capabilities. Compared to forces Russia faced during its involvement in the Syrian civil war or, for example, those faced by the US during the Iraq and Afghanistan engagements, what’s happening on the ground in Ukraine right now demonstrates a more paralleled engagement theater. 


It’s important, however, to mention that this is not a war being fought by machines. It’s unlikely that autonomous or uncrewed weapons and vehicles will have much impact in the war, simply because they’re untested and, currently, unreliable. 

Uncrewed vehicles and autonomous vehicles aren’t necessarily the same thing. While almost all autonomous vehicles — those which can operate without human intervention — are uncrewed, many uncrewed vehicles can only be operated remotely by humans. Perhaps most importantly, many of these vehicles have never been tested in combat. This means that they’re more likely to be used in “support” roles than as autonomous combat vehicles, even if that’s what they were designed to do. 

But, before we get into the hows and whys behind the usage of military robots in modern warfare, we need to explain what kind of vehicles are currently in use. There are no “killer robots” in warfare. That’s a catch-all term used to describe military vehicles both autonomous and uncrewed.

These include uncrewed aerial vehicles (UAVs), uncrewed ground vehicles (UGVs), and uncrewed surface vehicles (USVs, another term for uncrewed maritime or water-based vehicles).

So, the first question we have to answer is: why not just turn the robots into killers and let them fight the war for us? You might be surprised to learn that the answer has very little to do with regulations or rules regarding the use of “killer robots.” 

To put it simply: militaries have better things to do with their robots than just sending fire downrange. That doesn’t mean they won’t be tested that way; there’s already evidence that’s happened.

A British “Harrier” USV, credit: Wikicommons

However, we’ve seen all that before. The use of “killer robots” in warfare is old hat now. The US deployed drones in Iraq and Afghanistan and, as we reported here at TNW, it even sent a Predator drone to autonomously assassinate an Iranian general.

What’s different in this war is the proliferation of UAVs and UGVs in combat support roles. We’ve seen drones and autonomous land vehicles in war before, but never at this scale. Both forces are using uncrewed vehicles to perform tasks that, traditionally, either couldn’t be done or would require extra humanpower. It also bears mentioning that they’re using gear that’s relatively untested, which explains why we’re not seeing either country deploy these units en masse.

A developmental crucible

Developing wartime technology is a tricky gambit. Despite the best assurances of the manufacturers, there’s simply no way to know what could possibly go wrong until a given tech sees actual field use.

In the Vietnam War, we saw a prime example of this paradigm in the debut of the M-16 rifle. It was supposed to replace the trusty old M-14. But, as the first soldiers to use the new weapon tragically found out, it wasn’t suitable for use in the jungle environment without modifications to its design and special training for the soldiers who’d use it. A lot of soldiers died as a result.

A US Marine cleaning their M16 during the US-Vietnam War, credit: Wikicommons

That’s one of the many reasons why a number of nations who’ve so far refused any direct involvement in the war are eager to send cutting-edge robots and weapons to the Ukrainian government in hopes of testing out their tech’s capabilities without risking their own soldiers’ skin. 

TNW spoke with Alex Stronell, a Land Platforms Analyst and UGV lead at Janes, the defense intelligence provider. They explained that one of the more interesting things to note about the use of UGVs, in particular, in the war in Ukraine, is the absence of certain designs we might have otherwise expected.

“For example, an awful lot of attention has been paid inside and outside of Russia to the Uran-9 … It certainly looks like a menacing vehicle, and it has been touted as the world’s most advanced combat UGV,” Stronell told us, before adding “however, I have not seen any evidence that the Russians have used the Uran-9 in Ukraine, and this could be because it still requires further development.”

Uran-9 armed combat robot UGV Unmanned Ground Vehicle Rosboronexport Russia Russian Defense Industry – YouTube

On the other side, Stronell previously wrote that Ukrainian forces will soon wield the world’s largest complement of THeMIS UGVs (see the video below). That’s exceptional when you consider that the nation’s arsenal is mostly lend-leased from other countries. 

Milrem, the company that makes the THeMIS UGV, recently announced that the German Ministry of Defence ordered 14 of its vehicles to be sent to the Ukrainian forces for immediate use. According to Stronell, these vehicles will not be armed. They’re equipped for casualty evacuation, and for finding and removing landmines and similar devices. 

Milrem Robotics’ THeMIS UGVs used in a live-fire manned-unmanned teaming exercise – YouTube

But it’s also safe to say that the troops on the ground will find other uses for them. As anyone who’s ever deployed to a combat zone can tell you, space is at a premium and there’s no point in bringing more than you can carry.

The THeMIS, however, is outfitted with Milrem’s “Intelligence Function Kit,” which includes the “follow me” ability. This means that it would make for an excellent battle mule to haul ammo and other gear. And there’s certainly nothing stopping anyone from rekitting the THeMIS with combat modules or simply strapping a homemade autonomous weapon system to the top of it.

D.I.Y. Scrap Metal Auto-Turret (RaspberryPi Auto-Tracking Airsoft Sentry?!) – YouTube

On-the-job training

As much as the world fears the dawning of the age of killer robots in warfare, the current technology simply isn’t there yet. Stronell waved off the idea that a dozen or so UGVs could, for example, be outfitted as killer guard robots and deployed in the defense of strategic points. Instead, he described a hybrid human/machine paradigm referred to as “manned-unmanned teaming, or M-UMT,” wherein, as described above, unmounted infantry address the battlefield with machine support.

In the time since the M-16 was mass-adopted during an ongoing conflict, the world’s militaries have refined the methodology they use to deploy new technologies. Currently, the war in Ukraine is teaching us that autonomous vehicles are useful in support roles.

The simple fact of the matter is that we’re already exceptionally good at killing each other when it comes to war. And it’s still cheaper to train a human to do everything a soldier needs to do than it is to build massive weapons platforms for every bullet we want to send downrange. The actual military need for “killer robots” is likely much lower than the average civilian might expect. 

AI’s gift for finding needles in haystacks, for example, makes it the perfect recon unit, but soldiers have to do a lot more than just identify the enemy and pull a trigger.

However, that’s something that will surely change as AI technology matures, which is why, Stronell told us, other European countries are either in the process of adopting autonomous weaponry or have already done so.

In the Netherlands, for example, the Royal Army has engaged in training ops in Lithuania to test its own complement of THeMIS units in what it refers to as a “pseudo-operational” theater. Due to the closeness of the war in Ukraine and its ongoing nature, nearby nations are able to run analogous military training operations based on up-to-the-minute intel from the conflict. In essence, the rest of Europe is watching what Ukraine and Russia do with their robots and simulating the war at home.

Soldiers in the Netherlands Royal Army in front of a Netherlands Royal Air Force AH-64 Apache helicopter, credit: Wikicommons

This represents an intel bonanza for the related technologies and there’s no telling how much this period of warfare will advance things. We could see innumerable breakthroughs in both military and civilian artificial intelligence technology as the lessons learned from this war begin to filter out. 

To illustrate this point, it bears mention that Russia has put out a one million ruble bounty (about €15,000) for anyone who captures a Milrem THeMIS unit from the battlefield in Ukraine. These types of bounties aren’t exactly unusual during wartime, but the fact that this particular one was so publicized is a testament to how desperate Russia is to get its hands on the technology.

An eye toward the future

It’s clear that not only is the war in Ukraine not a place where we’ll see “killer robots” deployed en masse to overwhelm their fragile human enemy counterparts, but that such a scenario is highly unlikely in any form of modern warfare.

However, when it comes to augmenting our current forces with UGVs or replacing crewed aerial and surface recon vehicles with robots, military leaders are excited about AI’s potential usefulness. And what we’re seeing right now in the war in Ukraine is the most likely path forward for the technology. 

That’s not to say that the world shouldn’t be worried about killer robots or their development and proliferation through wartime usage. We absolutely should be worried, because Russia’s war in Ukraine has almost certainly lowered the world’s inhibitions surrounding the development of autonomous weapons. 


Psst, automating these 3 parts of your business is the best thing you can do right now

Content provided by IBM and TNW

Thanks to the convergence of several trends and changes across different markets and industries, automation is becoming a critical factor in the success of businesses and products. Advances in artificial intelligence, in parallel with the accelerating digitization of all aspects of business, are creating plenty of opportunities to automate operations, reduce waste, and increase efficiency.

From managing your Information Technology (IT) bill to finding bottlenecks in your business processes and taking control of your own network operations, here are three areas where companies can gain from applying automation.

1. IT automation


Practically every large organization has IT. Even small companies that don’t have in-house IT staff may pay for another company to do it for them. The growing demand for IT can put extra strain on professionals who must deal with the ever-expanding and changing landscape of application and compute platforms.

“I’ve never met an IT person or CIO who said they have so much time and budget that they can do everything the business asks and more. There’s always a shortage of ability to drive projects through IT,” says Bill Lobig, Vice President of IBM Automation Product Management.

The talent shortage is highlighting the need to provide automation tools to IT staff so they can manage application uptime and keep IT operations stable.

Fortunately, advances in artificial intelligence are helping companies move toward smart automation by gathering and processing all sorts of structured and unstructured data.

“We’re seeing companies have more confidence in applying AI to a broader set of data, including log files and metrics and information that are spinning off of the systems that are running in your business (databases, app servers, Kubernetes, VMs),” Lobig says.

Previously, IT experts may have optimized their infrastructure through informed judgments and overprovisioning their resources. Now, they can take the guesswork out of their decisions by using AI to analyze the data of the IT infrastructure, find patterns, estimate usage, and optimize their resources.

For example, J.B. Hunt, a logistics and transportation company, uses IBM Turbonomic software to automate the scaling of its cloud and on-premises resources. For their on-premises environment, J.B. Hunt is automating all non-disruptive actions 24×7 and scaling non-production actions during a nightly maintenance window.

“Workloads scale and spike—it’s not static. No matter how much performance testing and capacity you put into sizing an application deployment, it’s a guess, albeit an educated one. You don’t really know how your customers’ workloads are going to vary across different times,” Lobig says.

In their public cloud environment, the J.B. Hunt team has been using a combination of recommendations and automated actions to manage their resources. Over the course of 12 months, Turbonomic executed nearly 2,000 resizing actions which—assuming manual intervention requires 20 minutes per action—freed up over 650 hours of the team’s time to focus on strategic initiatives.
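As a quick sanity check of that figure, the arithmetic works out as follows (illustrative only, using the assumptions stated above):

```python
# Back-of-the-envelope check of the time-savings estimate above
actions = 2_000          # resizing actions executed over 12 months
minutes_per_action = 20  # assumed manual effort per action
hours_saved = actions * minutes_per_action / 60
print(hours_saved)       # ~666.7 hours, i.e. "over 650 hours"
```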

2. Business processes

Business processes are another area that can gain from advances in AI and automation. The previous wave of automation in business processes was mostly driven by robotic process automation (RPA). While RPA has had a tremendous effect on productivity, like other solutions, it has limits too.

RPA only addresses tasks that you think need automation. It can automate a poorly designed process but can’t optimize it. It also can’t handle tasks that can’t be defined through deterministic rules. This is where “process and task mining” enter the picture. According to Lobig:

RPA executes scripts to automate what you tell it to do. It’s very deterministic and rigid in what it can do, automating highly repeatable tasks. Process and task mining find inefficiencies you can’t see.

Process and task mining can answer questions such as, is your business really running the way you think it is? Is everyone completing processes in the same way? What should you optimize first? It helps you get past the low-hanging fruit and find the hidden inefficiencies of your business that can also be addressed with automation.
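As a concrete, if simplified, illustration of what process mining does, here is a minimal sketch using the open-source pm4py library rather than IBM’s tooling; the event-log file name is a placeholder.

```python
# Illustrative only: process discovery with the open-source pm4py library,
# a stand-in for the commercial process-mining tools discussed here.
import pm4py

# Placeholder: an event log exported from a business system (XES format)
log = pm4py.read_xes("orders.xes")

# Discover the process model that was actually followed...
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)

# ...and see how many distinct paths (variants) through the process occur
variants = pm4py.get_variants(log)
print(f"{len(variants)} distinct ways of completing the process were observed")
```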

3. Networking

In the past, networking was a specialized hardware-based discipline largely controlled by big telecommunications companies. Today, the networking ecosystem is more complex as enterprises now require ubiquitous application distribution in a hybrid multi-cloud environment, from customer prem, to edge, to private and public clouds.

The challenge is deploying and connecting all application endpoints at scale. Networks must be agile and dynamic to maintain application performance, availability, security and user experience. Today’s networks, however, face unprecedented challenges that can render them unresponsive and unadaptable to change. Enterprise and service providers can address those needs, delivering custom enterprise network value with self-service enterprise control.

Organizations can now own and manage their networking functions and end-to-end connectivity without being experts in switches, routers, radio-access networks, and other hardware.

“Networking has become just another part of the application supply chain (like databases, VMs, and containers) that companies are already running. Why not have your network be part of your full IT landscape so that you can apply AI to optimize it?” Lobig says.

For example, consider a large multinational bank that provides its customers access to their accounts overseas through ATMs. The company previously outsourced network connectivity to a big telco. When the telco faced an outage in one country where the bank provided service, the customers could not access their funds. Although the bank didn’t have control over the networking service, it was fined for the outage.

Now, thanks to software-defined wide area network (SD-WAN) and automation and orchestration tools such as IBM’s AIOps solutions and IBM SevOne Network Performance Management, the bank can assume control of its own software-defined network, instead of shifting such an important responsibility to another company. New application-centric network connectivity can enhance those capabilities. This can drive enhanced security, intelligent observability, and service assurance, while providing a common way to manage networks across the diversity of infrastructure, tools, and security constructs.

Another area of networking that will provide new opportunities for automation is 5G.

“A lot of people think about 5G as a faster networking technology. But 5G is going to transform and disrupt B2B use cases. It can really bring edge computing to the forefront,” Lobig says.

There’s an opportunity for organizations to leverage software-defined networking and 5G to unlock new business models where high bandwidth, low latency, and local connectivity are crucial.

An example is DISH Wireless, a company that’s working with IBM to automate the first greenfield cloud-native 5G network in the US. DISH Wireless is using IBM’s network orchestration software and services to bring 5G network orchestration to its business and operations platforms. One application they’re working on is enabling logistics companies to track package locations down to the centimeter, thanks to edge connectivity, RFID tags, and network management software.

“We’re helping them do this with our telco and network computing automation, edge computing automation, and enabling them to set up state-of-the-art orchestration for their customers. These unexpected industries can use 5G to really transform how business gets done across different areas,” Lobig says.

Where is the industry headed?

Automation is quickly evolving and we’re bound to see many new applications in the coming months and years. For companies that are at the beginning of their automation journey, Lobig has a few tips.

In the business automation space, look at process and task mining. Do you really know where the time is being spent in your enterprise? Do you know how work is getting done? If you use this technology, you’ll be able to identify the patterns and sequence of events that go into good outcomes and those that go into bad outcomes. Armed with these insights, you can redesign and automate the processes that have the biggest impact upon your business.

Lobig also believes that IT automation will be a bigger theme in 2023 as the world faces an energy crisis and electricity costs potentially become an escalating problem. IT automation can help organizations to use the capacity they need, which may translate into savings.

IT automation can also be important in tackling the climate change crisis.

“These days, you can tell whether your organization’s data center or workload is running on a renewable energy source,” Lobig says. “With that data, IT automation has the potential to automatically move workloads from cloud to on-prem and back and across hyper-scalers to optimize for costs and efficiencies.”

As for the future, Lobig believes that low-code/no-code application platforms will play an important role in automation by enabling more employees to build the automations that can enhance productivity.


What you need to know about AIOps


Content provided by IBM and TNW

As our lives become more digitized, the IT infrastructure supporting the applications and services we use has become increasingly complex. There are a variety of options to run services in the cloud, on-premise, serverless, and hybrid, which makes it possible to accommodate different kinds of applications, environments, and audiences. However, managing such complex IT architectures is becoming increasingly difficult. There are too many moving parts, which makes it difficult to optimize IT, predict and prevent outages, and respond to incidents after they happen. Fortunately, AIOps — the use of AI in IT operations —…

This story continues at The Next Web


5 AI Chatbot Mobile Apps That Aim to Put a Therapist in Your Pocket



Deepfakes Explained: The AI That’s Making Fake Videos Too Convincing

