Data and security


Why don’t we leave the internet platforms we dislike?

The internet is filled with sites and services we loathe, yet it seems, to paraphrase Brokeback Mountain, we just don’t know how to quit them.

Consider the evidence: Facebook was widely reviled after its role in the Cambridge Analytica scandal, yet it still has over 3 billion monthly active users. Since Elon Musk’s takeover of Twitter, there has been huge public outcry over his actions and decisions, but the platform remains relevant. And, most recently, Bandcamp was bought by Songtradr, which swiftly laid off 50% of its staff. But guess what? It’s still far and away the leader in its category.

This points to an environment where big platforms can act in ways swathes of people find distasteful, yet still remain in dominant positions. The more things change, the more they stay the same. For companies trying to upend this hegemony and Silicon Valley’s grip on the tech world, it could feel disheartening.  

But here at TNW we had some questions: is all this necessarily true? Are certain sites really too big to fail? And could smaller companies use burgeoning tech like decentralisation to fight back against the might of Silicon Valley?

Well, we’re going to find out. Let’s begin by looking at a specific example: Bandcamp.

The battle for our ears: Bandcamp and Artcore

If you aren’t familiar with it, Bandcamp is a music retail platform. Think of it like an online record store where artists can sell their music and merchandise.

Widely beloved by fans and musicians alike, Bandcamp has a reputation for being artist-friendly. It offers artists a good cut of sales and runs schemes like Bandcamp Friday, where it waives commission fees. Long story short, Bandcamp is one of the few places in the current music environment where artists can actually make some money.

Yet like all good things on the internet, it couldn’t last. The platform was purchased by Epic Games in 2022 before being bought up by Songtradr this year. After the ensuing layoffs, it became clear to many users that Bandcamp’s days as an artist-first haven were coming to a close.

In many ways, the platform is ripe for a competitor. Its audience consists of people who value independence as a concept and it has a user base in the tens of millions rather than billions. Yet that hasn’t happened.

To dig into the reasons, I got in touch with one of these competitor platforms, the just-launched, London-based Artcore. In many ways, it offers a broadly comparable service to Bandcamp: a place to sell music with relatively manageable commission fees (20% in this case).

I spoke with Tom Burnell, Artcore’s founder, about the challenges of trying to take on a much bigger platform. He tells me that “building any startup is a challenge,” but he wouldn’t go into further detail about the battle with Bandcamp.

Despite a request, Burnell didn’t share user numbers or sales figures, but a quick check on Similarweb (which provides only rough estimates) put Artcore’s website visitors at around 30,000 in October of this year. While the site is growing, it’s not going to be challenging Bandcamp any time soon.

The question then is why? What would need to happen for Artcore and other challenger platforms to overturn the status quo?

David vs. Goliath: A tech tale

“Smaller companies and startups have to first cut through the noise to raise awareness of their offering, which takes time, effort, and considerable resources,” Matt Iliffe, CEO of Beyond, tells me. Beyond has worked with businesses including Google, Snap, and YouTube to optimise product experiences.

Alongside this, Iliffe believes the reason many smaller companies fail to compete comes down to public perception. There’s “safety in established platforms,” he tells me.

Effectively, better the devil you know than the one you don’t.

This explains why competitors to the likes of Twitter/X, Facebook, and Bandcamp struggle to gain traction: they need to spend huge amounts of money to capture an audience that’d rather keep using a product they’re familiar with.

The question, then, is this: beyond spending billions of euros, how can a smaller company compete with the might of established players?

“A new platform must be ten times better than the one it hopes to win users from. Or be radically new,” Nicki Sprinz, global MD of ustwo, tells me. The business helps create and design new products, something it’s done with Peloton Lanebreak and The Body Coach.

The problem, Sprinz explains, is that huge tech companies are “too big to fail” only when the platforms trying to compete with them do a similar thing with a near-identical business model.

What this means is a service attempting to be another version of Twitter or Bandcamp won’t succeed. It needs to look beyond being a copycat.

But there’s hope: “Technology is today’s agent of creative destruction,” Sprinz says.

Smaller companies can challenge huge businesses, but they need to do something noticeably different to take away market share, whether that’s offering a new user experience or utilising the latest technological advancements.

It makes sense: Facebook didn’t upend MySpace by copying it; it did so by creating something noticeably different.

Now, one of the technologies offering a way of doing things differently is decentralisation. The question is whether it could be the remedy that lets smaller businesses fight back against the biggest players.

The decentralisation question

To find out, I spoke with Martina Larkin, CEO of Project Liberty. This is a body spearheaded by billionaire Frank McCourt to build a new, decentralised internet.  

Larkin tells me that the goal of decentralisation is to take “the power and control of social media out of the hands of a few platform providers and [give] it to users and developers.”

The benefit of these types of systems is they give people ownership over their information, meaning they can “take their data such as their followers from one app to another” while also connecting with people across other apps.

I asked why this shift to decentralised platforms hasn’t happened yet, and Larkin says that the technology to create these sorts of systems — such as blockchain — is only just maturing.

“People are increasingly uneasy about the way social media influences and manipulates their online presence, especially how big tech controls their data,” she says. “Decentralised technology systems provide the opportunity for companies to both operate sustainably and provide a fair and equitable economic value to all participants.”

The simple life

These are excellent points, and this is the direction I hope platforms shift in the future, but two broad issues remain for me.

The first is ease. It’s no coincidence that Apple became the biggest company in the world with an approach that can be broadly summed up as making previously fiddly things easy. Fundamentally, that’s what people want: a simple life.

While decentralised networks like Mastodon and Bluesky are growing, they are nowhere near as user-friendly as Twitter. Until they can adequately solve that complexity — which may never happen entirely — I feel that huge swathes of the public will not opt in.

The second point is around payments. Decentralisation may work for social media, but when there’s a platform like Bandcamp on which money is changing hands, most people would prefer a reliable middleman.

You only need to look at how cryptocurrency has — so far at least — failed to become a de facto payment method despite huge pushes. There’s reliability in a middleman, and this is especially true when it comes to money.

It doesn’t matter whether these beliefs are logical, it’s simply the state we’re in.

Power from the platforms

What we’ve discovered isn’t rocket science: it is not easy to dislodge pre-existing online platforms with large user bases. In fact, if you’re trying to do pretty much the same thing as them, it’s nigh-on impossible to overcome that market share and attract their users.

This gives huge companies a certain amount of licence to do whatever the hell they want, users be damned. I’m certain there is a tipping point somewhere, but the fact Facebook hasn’t already found it suggests it’s pretty dark.

But don’t get disheartened — this doesn’t mean there’s no hope for change.

For upstart platforms to alter the current system, the key is that they need to do something different, whether that’s offering a new way of engaging with content (think of how TikTok reimagined YouTube) or incorporating a burgeoning technology like decentralisation.

Looking to compete head-on with Instagram or Twitter or Bandcamp isn’t going to work. Companies need to look beyond them, to think of a new way of delivering what those platforms are striving to offer.

Yet this isn’t all: simplicity and ease of use are king. New platforms need to show people that they’re not only much better than the incumbents, but also just as easy to use.

Without that? Well, we probably won’t be quitting them any time soon.



GTA VI trailer leak linked to Rockstar dev’s son

Shady behaviour might be part of the Grand Theft Auto DNA, but leaking video game trailers on TikTok before launch is probably not what developers had in mind. Especially not when it can be traced back to a senior Rockstar developer’s son. 

The fact that fans will need to wait more than a year for the next instalment in the GTA saga (or, as one viewer close to the author expressed this morning, “2025 just means not 2024”) did not diminish the enthusiasm when Rockstar Games released the GTA VI trailer in the early hours of Tuesday CET.

Our trailer has leaked so please watch the real thing on YouTube: https://t.co/T0QOBDHwBe

— Rockstar Games (@RockstarGames) December 4, 2023

Vice City looks slicker than ever indeed. However, Rockstar released the trailer to the public some hours earlier than intended. The reason? The leaking of an off-cam clip of the footage to TikTok over the weekend and a subsequent leak of the trailer on X on Monday. Plot twist — the TikTok user in question has reportedly been identified as the son of a senior Rockstar North employee. 

Incriminating evidence?

Rockstar North, based in Edinburgh, Scotland, has been part of the Rockstar Games family since 1999 and is responsible for the development of the Grand Theft Auto series. The evidence that the seven-second TikTok leak came from a developer’s family member has been labelled by some social media users as “fairly convincing.”

Reportedly, it involves the TikTok user posing with the Rockstar employee and calling them “dad.” But as the TikTok (it is a noun, right?) has been deleted, this shall have to remain second-hand speculation on our part. Of course, it could all be a part of a deceitful ruse to deflect culpability, in keeping with the spirit of the game. 

The evidence to suggest the video has come from someone related to the employee in question is fairly convincing.

Again, if this is true it’s extremely disappointing that this has occurred so close to the official reveal.

— GTABase.com (@GTABase) December 2, 2023

In another noteworthy turn of events, the trailer revealed that GTA VI will feature the game’s first female protagonist (Bonnie and Clyde storylines FTW). Rockstar Games says it will be released on PS5 and Xbox Series X / S.

Other notable video game leaks

Leaks to social media are not unusual in the gaming world. A prototype of Horizon Forbidden West was leaked to Twitter one week before its release. A Russian website published a version of the script to Mass Effect 3 before the game’s official release in March 2012 (although we cannot see the appeal of reading it — it would be like sneaking a peek at your Christmas presents before they are wrapped). 

However, it is unusual for leaks to come from such intimate sources, and so close to the official release. Whoever may prove to be behind the leaks, let’s hope the repercussions are more akin to being grounded than ending up in jail, like the last teenagers who messed with Rockstar and GTA.





TikTok pledges €12B European investment as Norway data centre nears completion

TikTok has promised to invest €12bn as part of an ongoing push to appease European regulators, who have raised suspicions that the app’s user data is being monitored by the Chinese government.

In response to repeated allegations of this nature, the short-form video app launched Project Clover in March. While it might sound like a secret military sting operation, the programme is pretty mundane. 

Essentially, Project Clover aims to build three massive data centres on the continent to keep European user data in Europe — and “within reach” of local authorities.   

Yesterday, TikTok pledged €12bn over the next 10 years for the project. The first data centre, a facility in Dublin, Ireland, was completed in September. The second one is currently under construction in the frosty climes of Hamar, Norway. 


TikTok this week announced it had taken possession of the first of three buildings at the site, and will begin migrating European data to the servers housed there from mid-2024. It said the centre will run solely on renewable energy and will be the largest facility of its kind in Europe once complete. The third and final data centre will also be built in Ireland.

A worker walks outside TikTok’s largest data centre in Europe, currently under construction in Hamar, Norway, November 30, 2023. Credit: REUTERS/Victoria Klesty

TikTok’s mammoth investment also covers the consultancy fee of British cybersecurity firm NCC, which the social media firm hired to audit its data controls and provide third-party accountability.

“All of these controls and operations are designed to ensure that the data of our European users is safeguarded in a specially-designed protective environment, and can only be accessed by approved employees subject to strict independent oversight and verification,” said Theo Bertram, TikTok’s VP of Public Policy in Europe.

A series of institutions including the European Commission, the UK Parliament, and the French government have banned the use of TikTok on work devices over fears that the app has been infiltrated by the Chinese government — allegations which the company has vehemently denied.

The full migration of data belonging to TikTok’s 150 million European users is expected by the end of 2024. Currently, the company stores its global user data in Singapore, Malaysia, and the US.





Can you ‘deGoogle’ a phone? Murena tried — and added a kill switch

My ancient Samsung Galaxy is ready for retirement. Cracks expand across the screen, photos are hazy blurs, and the battery barely survives a day. It’s time to buy a replacement.

The initial contenders for my cash were the usual mix: Androids and iPhones with old names, incremental upgrades, and eye-watering price tags. While mulling over the options, a serendipitous email arrived in my inbox. A budding phonemaker called Murena was building a new handset with a bullish promise: “the ultimate pro-privacy smartphone.” 

To substantiate the slogan, the company flaunted two compelling features: a physical “kill switch” to disconnect the device and an anti-tracking operating system. Consider me intrigued. 

The announcement of the phone — named the Murena 2 — was timely. Just hours later, a news story provided an inadvertent advertisement for the product.


Several US government agencies had been illegally using location data taken from mobile apps. In one case, an official had tracked coworkers for personal reasons.

Such scandals have become commonplace.

The Murena 2 introduces new privacy features for both hardware and software. Credit: Murena

In the past few weeks alone, politicians have accused the Indian government of phone tapping, big box repair stores have snooped on customer devices, and Motorola users have sued the company for “surreptitiously” taking data from their selfies. Prince Harry has also won the latest stage in his lawsuit over alleged phone hacking by newspapers.

The frequency of the offences has a numbing effect. In the decade since Edward Snowden exposed rampant surveillance of our devices, eavesdropping has become just another boring dystopia. 

Our nonchalance is reinforced by a sense that ordinary folk aren’t impacted — but that may be wishful thinking. Just a fortnight ago, reports emerged that British police are requesting data from menstrual tracking apps after “unexplained” pregnancy losses. 

Average Janes and Joes face a further threat from big tech’s push into health insurance. Any company that sets insurance rates will find enormous value in the personal data on our phones. 

There’s also a more pressing danger lurking.

“You have no guarantee that the data is never going to be hacked,” Alexis Noetinger, Murena’s COO, tells TNW. “For us, this is the biggest issue. The more data that is collected, the more risk there is that this data can fall into the wrong hands.”

The Murena 2 aims to mitigate this risk. Set to launch in December, the handset promises “unparalleled” levels of privacy. To test the claim, we got our hands on a prototype of the device.

Our trial doesn’t have the most encouraging start. After turning on the phone, a warning message appears on the screen: “Orange state: Your device has been unlocked and can’t be trusted.”

It’s an inauspicious welcome, but Murena assures us that it’s just a teething issue with the pre-release model. From that point on, the software ran smoothly — which we had expected from Murena.

The French startup emerged from /e/OS, a “deGoogled” operating system. A privacy-focused fork of Android, /e/OS is an anti-tracking, democratised version of its progenitor. 

The operating system is open-source, which means anyone can probe the privacy protections. By default, it doesn’t send any data to Google or third parties.  

On launch, the Big G’s apps and services have been replaced by open-source versions. If you do install more familiar alternatives, the tracking can be restricted.

“The idea we had was to tilt the status quo on its head, and instead of promoting proprietary and closed solutions, to develop an alternative based on open-source software,” Noetinger says.

That status quo is a duopoly that’s dominated the sector for over a decade. 

The similarity to Android makes adapting to /e/OS pretty quick. Credit: Murena

After Blackberry plummeted from the industry’s pinnacle, Android and Apple devoured the smartphone market between them. Regular consumers now only have two real choices: go with Android and its voracious data collection, or opt for the iPhone’s closed ecosystem, which may provide more privacy, but still gobbles up ample user information.

In 2021, researchers at Trinity College Dublin found that both operating systems share data with their motherships every 4.5 minutes on average — even when the handsets aren’t being used. 

The data that they send is diverse and detailed. It includes your location, phone number, cookies, local network, and even information about other devices nearby. Some of this is shared when the phone is sitting idle in a pocket or bag. 

According to the Trinity team, it could allow location tracking when location services are disabled. They found that both Apple’s iOS and Android transmit telemetry — despite users explicitly opting out of this.

The researchers also contested Apple’s claims of superior protection. They argued that iOS offered “no greater privacy than Google devices.”

“I think most people accept that Apple and Google need to collect data from our phones to provide services such as iCloud or Google Drive,” said study author Professor Douglas Leith.

“But when we simply use our phones as phones — to make and receive calls and nothing more — it is much harder to see why Apple and Google need to collect data.”

There is one obvious reason why it’s necessary: advertising.

Online ads provide the bulk of Google’s revenues, and data provides the biggest selling point. It creates detailed profiles of our real-world tastes, demographics, and behaviours, which advertisers use to target us with ads. 

This personalised marketing can be convenient for consumers. But it can also turbocharge political propaganda, disinformation, echo chambers, and exploitation of the vulnerable.

Another bugbear for privacy advocates is government access. Authorities can request the data with a warrant — and they do. Google regularly gives law enforcement agencies search and location data.

Google CEO Sundar Pichai has tried to reassure the US Congress that his company is responsive to law enforcement requests. Credit: Greg Beadle

These issues extend from operating systems to apps. Facebook, for instance, tracks you across all its apps and sites — even after you log off the social network. The app requests a dizzying array of permissions, from your contacts, calls, and messages to your camera, microphone, and storage. To use Facebook, you must give the company almost full control of your device.

Once you open the app, the Meta behemoth monitors when you log in, what you browse, where you go, which products you buy, and how long you’re on the platform. All of this determines the ads we receive. Sometimes, it also serves more nefarious purposes.

Personal data has been stolen from Facebook by hackers, misused by third-party apps, publicly exposed, and shared without permission. Most infamously, the data of up to 87 million users was harvested without permission by Cambridge Analytica during the 2016 US Presidential election.

It was a damning breach for Facebook. But the platform is far from the only app that puts our data at risk. Murena’s pitch for /e/OS is protection from them all.

On /e/OS, every tracker is removed by default. Extra privacy protections are also installed, while connections to Google are cut.

The deGoogling is extensive. The default Google search engine is replaced by a Murena system, Google apps are switched for open-source equivalents, no Google servers are used to check connectivity, geolocation uses Mozilla services, and the Google Play Store is ditched for Murena’s App Lounge.

The full extent of the deGoogling is too broad to catalogue here — although some still wish it was wider. More on that later.

Noetinger (right) alongside Murena founder Gaël Duval, who also created /e/OS. Credit: Murena

The OS is paired with an advanced privacy module. Once inside, you can monitor each app’s permissions, as well as the hidden trackers, which collect your data and follow your activity. You can then cut the tracking.

“We give the user the visibility on which app is trying to access the data — and which tracker is trying to access the data,” Noetinger says.
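
Murena hasn’t published the internals of this module, but blockers of this kind typically work by checking each outgoing connection against a curated list of known tracker domains (Exodus Privacy maintains one widely used list). Below is a minimal, hypothetical Python sketch of that matching logic; the domains and function name are illustrative, not Murena’s actual code.

```python
# Hypothetical blocklist matching, illustrating how tracker blockers of
# this kind generally work. The domains are examples, not /e/OS's list.
TRACKER_DOMAINS = {"app-measurement.com", "doubleclick.net", "ads.example.net"}

def should_block(hostname: str) -> bool:
    """Return True if the hostname is a listed tracker or one of its subdomains."""
    parts = hostname.lower().rstrip(".").split(".")
    # Check the full name and every parent domain against the blocklist.
    return any(".".join(parts[i:]) in TRACKER_DOMAINS for i in range(len(parts)))

print(should_block("cdn.app-measurement.com"))  # True
print(should_block("example.org"))              # False
```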

You can also find privacy scores for each app, which contain some big surprises. Facebook, for instance, got a whopping nine out of 10 for privacy — the same as Signal. LinkedIn and Spotify, meanwhile, were both given zero out of 10. TikTok, a bogey app for many in the Western world, received a middling four.

Facebook’s apparent superiority has a simple explanation: the app doesn’t use trackers. Yet it obviously still collects copious user data. As Murena told TNW, Facebook doesn’t need a tracker “because it is already one big tracker.” Unfortunately, this somewhat devalues the privacy scores.

Thankfully, you can still fortify your defences against these snooping apps. /e/OS users can fake their location to random and specific places, use a dummy email, or even hide their IP address.

The advanced privacy and anti-tracking features are unusual in consumer smartphones. Credit: Murena

Alongside the operating system and privacy features, /e/OS provides a default set of open-source apps and online services. Among them is the Murena Cloud, which includes an email account, cloud storage, and an online office suite.

In our experience, the software performed pretty well. Like the operating system, the apps are fairly intuitive, functional, and familiar to Android users — although they lack the slickness and style of the Google versions.

Then there is the app store — which is where the deGoogling gets contentious.

Additional applications for the Murena 2 are downloaded from the /e/OS App Lounge, an open-source system that connects directly to the Google Play Store.

The App Lounge combines common apps, open-source apps, and progressive web apps (PWAs) — which work directly from a browser — in a single repository. According to Murena, there’s no other app store that does this today.

To access Google products, the system has a compatibility layer. This means that you can still access Android apps. The free ones are accessible via anonymous browsing to circumvent trackers, but the paid apps still require a Google account.

These concessions have irked early adopters. Murena wants to create a deGoogled world, but won’t fully cut connections to the tech giant.

It’s struck a balance that won’t satisfy every privacy advocate, but the business case is clear. An absence of Play Store apps and Google services would likely send the device to an early grave. It would certainly be a dealbreaker for me.

The compromise evokes the “privacy paradox.” This phenomenon arises when people claim to highly value their privacy, but readily disregard the protection of their personal data. Noetinger sympathises with their plight.

“We know that people need to access some applications because they don’t have the choice,” he says.

“This way, you can still use the applications you need. If they feature trackers, you can block them, and we have additional features that can be quite aggressive.”

After searching for apps, users can find their privacy scores. Credit: Murena

Another issue with the Google link is the App Lounge. When the Murena One launched last year, the developer community XDA claimed that the App Lounge is in a legal “grey area,” because it pulls apps from Google’s servers while bypassing the requirement for a Google account. 

Murena acknowledges that there’s an issue here. The company told TNW that Google has hardened its account usage policy this year. Murena said that it proactively warns users about the potential curbs, but that it hadn’t received any reports of restrictions related to the App Lounge. The company assures users that the App Lounge’s terms and conditions are compatible with those of Google.

After finding a foothold in smartphone software, Murena ventured into hardware with last year’s launch of the Murena 1. Its successor adds several compelling features.

One that really caught our eye is the new physical kill switch. This disables all the device’s microphones and cameras, which many apps use for unspecified reasons. They’re also vulnerable to hacking.

During our trial, the button worked seamlessly. With the flick of a finger, a circuit block instantly deactivated the mic and cams. To reconnect them for a call, we just hit the switch again. 

It’s a feature that should impress even Mark Zuckerberg. The Meta boss was once photographed next to a laptop with a physical cover over its webcam and microphone. And if anyone should know about privacy threats, it’s the founder of Facebook.

Zuckerberg’s protective tape technique isn’t ideal for phones.

A second new addition is a disconnection switch. With a tap, the button disables all network activity and mutes the phone.

This one is primarily a “do not disturb” feature. The concept taps into the growing demands for distraction-free environments and digital detoxes. In the future, Murena may add an option to customise the switch’s purpose.

As for the conventional specs, they’re comparable to typical mid-range devices. There’s a 6.43″ high-resolution display, 128GB of storage, 8GB of RAM, a 4000mAh battery, and a 2.1GHz octa-core processor. 

For photos, you get a 25mp front selfie camera and a rear triple camera (5mp, 13mp, 64mp). Undoubtedly, the pictures dramatically outclassed those from my decrepit Samsung — although that’s pretty faint praise. If you want the finest photos and the leading specs, you’ll need a top-end phone. 

“Our standpoint is not to compete on the specs, because if you want to compete on specs, it never ends,” Noetinger says. “At the end of the day, even if the device is not premium, it will most likely be enough for most people out there for day-to-day use.”

Murena hopes the smartphone’s privacy edge attracts these regular consumers. But the mass market remains a formidable target.

The kill switch adds a rare innovation to a stagnant sector. Credit: Murena

The mobile industry is mired in a historic downturn. Stocks have slumped, sales have slowed, and innovation has stagnated. In 2023, global smartphone shipments are projected to decline by 4.7% to 1.15 billion units — a ten-year low. These are challenging times for new entrants to the market — but they also present opportunities. 

A key problem for the sector is consumer apathy. With massive price tags for minor upgrades to interchangeable devices, the big brands no longer provide a big bang for our buck. The latest iPhones and Androids simply aren’t as special as they used to be.

In these uninspiring times, the Murena 2 stands out. By combining inventive hardware, privacy-centric software, and an alternative to the Android/iPhone duopoly, the device has a unique appeal.

Those charms, however, won’t attract everyone. Without the familiar Android interface, a recognisable name, and default Google services, the device will struggle to reach the mainstream.

But for privacy enthusiasts, early adopters, and big tech boycotters, the release date is worth adding to the calendar. Until that day comes, my ageing Samsung will have to survive a few more charges. 

The Murena 2 is now available for preorder on Indiegogo. The retail price is $499 in the US, €499 in the EU, £429 in the UK, $679 in Canada, $829 in Australia, and 479 CHF in Switzerland. Shipments are estimated to start in December 2023. The official public launch is planned for early 2024.



Why security compliance is no longer a nice to have for UK startups

Security compliance (and particularly ISO 27001) is like the project in school you had the whole year to complete — and ended up starting in a panic the night before.

Given the time, resources, and complexity of completing the certification, it’s one of the things startup founders are most likely to put off for a later date in favour of growth-focused tasks like sales and product development.

What many don’t realise is that security compliance not only has a big impact on your company’s resilience to security breaches and data leaks, but also on your bottom line.

If you’re experiencing these signs, it might be time to start building your own security compliance programme:

1. You’re unable to close deals

According to the UK’s Cyber Security Longitudinal Survey, it’s not the potential for cyberattacks that’s driving SMEs to obtain security compliance. Instead, more and more are finding that it’s become a contractual requirement to work with public sector bodies and large companies.

With cyberattacks on the rise across the UK, established brands are becoming much more vigilant about who they decide to do business with. In some cases, meeting security compliance criteria is essential just to bid on a contract.

More mature organisations will often require potential vendors and partners to be compliant with some of the main cybersecurity standards. As your business begins targeting larger enterprise deals, sales teams will often face difficult security questions and closed doors when expectations aren’t met. This can block your business from the revenue boost it needs to move from startup to fast-growing scaleup.

2. You aren’t following common best practices

Have you noticed your security practices differ greatly from your competitors and partners? Organisational inertia, process friction, and complexity make it difficult to introduce change once your business is already established. That’s why implementing the right processes from the start will save you a lot of time, headaches, and ultimately money.

3. Increasing regulatory or social pressure

Security regulations are continuously changing. If you’re in violation of a security standard, you could be at risk of being hit with a significant fine. Not only will this impact your finances, it could also slow down your business operations until changes can be made.

This is particularly the case if you’re in a field or area that’s highly contentious, high risk, or potentially viewed with a high level of scepticism. Keeping up to date with security compliance measures ensures you’re also up to date with the latest regulations.

4. You’re unable to answer security questionnaires fully or transparently

Whether you’re communicating with current or potential customers, not being able to answer questions about your security is a sign of business immaturity and a red flag for prospects.

At the same time, having a strong security programme in place is becoming a new selling point for UK startups, helping them to fend off cyberattacks and build trust with new customers.

Making security compliance your competitive advantage

According to the UK’s National Cyber Security Centre (NCSC), ransomware attacks and data leaks are on the rise with UK businesses suffering major losses.

While it was long thought that large enterprises were the main target of cyberattacks, the UK’s startups are experiencing a rapid uptick in security concerns and data breaches. According to a study by Vodafone, more than half (54%) of SMEs in the UK had experienced some form of cyberattack in 2022, up from 39% in 2020.

Despite the worsening security landscape (and the potential for fines), a government survey found only 32% of UK businesses have one or more security certifications.

Breakdown of standards and certifications adhered to by organisations in the UK. Source: The Cyber Security Longitudinal Survey 2022

As larger enterprises feel the pressure to introduce strict security measures to keep customer data safe, startups that want to land growth-driving deals will need to prove they can be trusted.

And with so few startups on the market with compliance certifications, those that do prioritise security can gain a competitive advantage.

Similarly, startups looking to expand to new markets could benefit from adopting local security practices. For example, SOC 2 is a standard that’s become common business practice in North America.

The main factor holding startups back from security compliance from the start is the perceived complexity.

Many don’t know the difference between some of the most common security frameworks, like ISO 27001 and SOC 2, and which are most relevant for them. Others aren’t sure how to get started building a strong security programme.

Luckily, trust management platform Vanta created a handy guide for UK startups including:

  • How to determine which security framework is right for you
  • Steps for starting a security compliance programme
  • How to take advantage of compliance automation

Download it for free here.



Deepfake fraud attempts are up 3000% in 2023 — here’s why

Deepfake fraud attempts have increased 31-fold in 2023 — a 3,000% increase year-on-year.

That’s according to a new report by Onfido, an ID verification unicorn based in London. The company attributes the surge to the growing availability of cheap and simple online tools and generative AI.

Face-swapping apps are the most common example. The most basic versions crudely paste one face on top of another to create a “cheapfake.” More sophisticated systems use AI to morph and blend a source face onto a target, but these require greater resources and skills. 

The simple software, meanwhile, is easy to run and cheap or even free. An array of forgeries can then be simultaneously used in multiple attacks. 

These cheapfakes aim to penetrate facial verification systems, conduct fraudulent transactions, or access sensitive business information. They may be crude, but only one needs to succeed.


By emphasising quantity over quality, the fraudsters target the maximum reward from the minimum effort. 

Research suggests that this is their preferred approach. Onfido found that “easy” or less sophisticated fraud accounts for 80.3% of all attacks in 2023 — 7.4% higher than last year.

The volume of deepfake attempts over time on Onfido’s Video and Motion products. Credit: Onfido

Despite the rise of deepfake fraud, Onfido insists that biometric verification is an effective deterrent. As evidence, the company points to its latest research. The report found that biometrics received three times fewer fraudulent attempts than documents.

The criminals, however, are becoming more creative at attacking these defences. As GenAI tools become more common, malicious actors are increasingly producing fake documents, spoofing biometric defences, and hijacking camera signals.

“Fraudsters are pioneers, always seeking opportunities and continually evolving their tactics,” Vincent Guillevic, the head of Onfido’s fraud lab, told TNW.

To stop them, Onfido recommends “liveness” biometric verification tech. These systems verify the user by determining that they’re genuinely present at that moment — rather than a deepfake, photo, recording, or a masked person.

At present, fraudsters typically attempt to spoof liveness checks with a very basic method: submitting a video of a video displayed on a screen. This approach currently accounts for over 80% of attacks.
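
Onfido doesn’t detail how such replays are caught, but one classic heuristic exploits the fact that re-filming a screen leaves a periodic pixel-grid (moiré) pattern, which shows up as sharp, isolated peaks in the image’s frequency spectrum. Here’s a rough Python sketch of that idea; the scoring and threshold are illustrative assumptions, not Onfido’s method.

```python
import numpy as np

def screen_replay_score(gray: np.ndarray) -> float:
    """Crude replay heuristic: a re-filmed screen tends to leave a periodic
    pixel grid, visible as sharp high-frequency peaks in the 2D spectrum."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray.astype(float))))
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    # Ignore the low-frequency centre, which dominates any natural image.
    outside_centre = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 > (min(h, w) // 8) ** 2
    high_freq = spectrum[outside_centre]
    # One huge outlier peak relative to the median suggests a regular grid.
    return float(high_freq.max() / (np.median(high_freq) + 1e-9))

# Illustrative use: flag frames whose peak ratio exceeds a tuned threshold.
# is_replay = screen_replay_score(frame) > 500.0  # threshold is made up
```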

In the future, however, tech will offer far more sophisticated options. 

“The developments we’re likely to see with deepfakes and quantum computing will make fakes indistinguishable to the human eye,” Guillevic said.

In response, Guillevic expects businesses to apply more automated solutions. He also sees a crucial role for non-visual fraud signals, such as device intelligence, geolocation, and repeat fraud signals that work in the background.

Undoubtedly, the fraudsters will develop counterattacks. Both sides will have to upgrade their weapons on the AI versus AI battleground.



Everything startups need to know about building a security compliance program

With cybercrime on the rise across the UK and more SMEs being targeted, security is more important than ever before.

Even if you believe your business is secure from data leaks and cyberattacks, if you’re not able to demonstrate this to potential clients, your sales team could be missing out on growth-driving deals. This is especially the case for enterprise clients that often require potential partners to demonstrate compliance with some of the key measures such as ISO 27001 and SOC 2.

All this means that security compliance is no longer a nice to have for UK startups.

Security compliance programs help your organisation identify, implement, and maintain appropriate cybersecurity controls to protect sensitive data, comply with laws and contractual obligations, and adhere to the standards, regulatory requirements, and frameworks needed to protect customers and enable the business to succeed.

Steps for getting started

Step 1: Define your organisational goals and needs


Are you starting the program to close deals? Do you want to proactively demonstrate trust or compliance? More importantly, what are you trying to accomplish and why? After answering these questions, we recommend identifying your desired end state and vetting and aligning this with key stakeholders and their needs. The more granular you can be about your intended goals and desired end state, the easier it’ll be to work backward towards your objectives and bring others on board as well.

Before worrying about which standard to implement or what tools to buy, it’s critical to ensure these goals are doing more for the organisation than just unblocking deals or solving one problem.

At Vanta, we leverage our compliance efforts as force multipliers wherever possible. For instance, a known compliant process in one business unit could potentially be adapted to work in another, which could streamline cross-functional work and alignment across different projects.‍

Step 2: Define your roadmap and timeline

Consider breaking your timeline down into specific milestones you’ll be able to track and work toward. In addition, think through whether there are any dependencies you’ll need to account for and how they relate.

This step should include identifying the answer to questions such as:‍

  • What are our known technology needs or gaps?
  • Do we expect we will need to invest in some additional tooling or support?
  • Do we have an understanding of the technical demands of where we want to go?
  • Do we build, buy, or partner?

‍For instance, if you’d like to build and are planning to hire for the role, consider whether you need someone who’s more of a manager who can set direction or someone who’s willing to roll up their sleeves as a doer. This is especially important for a foundational role like your first compliance hire.

If you opt to buy or partner, consider whether using services such as a virtual CISO (vCISO), Managed Service Provider (MSP), or other fractional resources could address your needs and objectives more cost-effectively. This is especially important if you have a very broad tech stack or complex operations, as an MSP or vCISO firm will usually have access to more expert resources than any one person can be expected to know.

If you’re building a program from the ground up or for the first time, it may be more cost-effective to use a trusted third party to supplement your work than to hire one or more FTEs to build a program in-house. Regardless of what option you go with, you’re likely looking for an individual—or even a team—with privacy and/or compliance knowledge as well as technical engineering knowledge.

Part of defining your objectives also includes measuring your progress and ensuring that what you’re measuring is relevant to your intended outcomes. As you develop your program, be sure to identify key metrics that help your organisation understand and share the achievements and outcomes of your security compliance program.

Remember you’ll need to prioritise what you’ll build and when. This is especially true given that you’ll likely have a long list of action items, and more tools and needs than you have budget for. The approach we’ve taken at Vanta is to align our security compliance program with our business objectives—which also ensures we’re meeting the needs of our customers and our overall business.

As a tip, our team likes to reference Verizon’s Five Constraints of Organisational Proficiency as described in their 2019 Payment Security Report to help structure our approach to our compliance program. This framework highlights the importance of capacity, capability, competence, commitment, and communication as key to the health and effectiveness of a strong data protection compliance program—we suggest giving it a quick read if you’re interested!

Step 3: Prioritise and start building

Now that you have an understanding of your needs and timeline, it’s time to start prioritising your efforts based on the needs and constraints of your business. You can start by taking the following steps:

  • Double-check alignment with business objectives—is your plan still what the business needs or has it had some scope creep or plan drift that might introduce unnecessary friction?
  • Set up official deadlines based on your new understanding of the project goals, and officially kick off the implementation of your program.

Remember, security and compliance are infinite black holes without context. Make sure that what you’re planning on doing for compliance has guardrails to ensure you’re spending your time and effort in places that drive measurable business outcomes.

‍Lastly, understanding, defining and communicating why you’re working toward these objectives—whether toward meeting customer needs, revenue goals, or internal risk reduction—can bring others on board as well.

Additional considerations: stakeholders and resources

Don’t forget that executive sponsorship, commitment, and budget are some of the most critical components of a strong security compliance program. We suggest seeking these out earlier rather than later and continuing to build this bridge by highlighting risks, impact (including positive!) and your company’s overall security compliance journey.

‍After you determine your goals and identify your tooling and technology needs, it helps to know what tooling is available and what meets those needs most. Referencing industry trends and feedback can be a good place to start, as well as networking with others in the industry who are or have addressed similar challenges.‍

Tips and suggestions for building your security compliance program

While every team and company approaches building security compliance programs slightly differently, here are a few tips we’d suggest:

  • Build repeatability: While it may be tempting to aim for quick wins, focus on repeatable processes and repeatable outcomes within your program. Remember that fire drills are often an indication of broken processes.
  • Start with a strong foundation: Focus on the fundamentals and do your basics well—no matter how mature your program, the fundamentals always matter.
  • Avoid shiny object syndrome: Tools and technology may help, but will only exacerbate broken processes.

Ready to start building a strong security compliance program?

Check out Vanta’s guide for UK startups to learn more about the differences and similarities between ISO 27001 and SOC 2 and which is right for your organisation. You’ll also learn how to leverage compliance automation to streamline certification and support your business through an international expansion.



‘Unsafe’ AI images proliferate online. Study suggests 3 ways to curb the scourge

Over the past year, AI image generators have taken the world by storm. Heck, even our distinguished writers at TNW use them from time to time. 

Truth is, tools like Stable Diffusion, Latent Diffusion, or DALL·E can be incredibly useful for producing unique images from simple prompts — like this picture of Elon Musk riding a unicorn.

But it’s not all fun and games. Users of these AI models can just as easily generate hateful, dehumanising, and pornographic images at the click of a button — with little to no repercussions. 

“People use these AI tools to draw all kinds of images, which inherently presents a risk,” said researcher Yiting Qu from the CISPA Helmholtz Center for Information Security in Germany. Things become especially problematic when disturbing or explicit images are shared on mainstream media platforms, she stressed.


While these risks seem quite obvious, there has been little research undertaken so far to quantify the dangers and create safe guardrails for their use. “Currently, there isn’t even a universal definition in the research community of what is and is not an unsafe image,” said Qu. 

To illuminate the issue, Qu and her team investigated the most popular AI image generators, the prevalence of unsafe images on these platforms, and three ways to prevent their creation and circulation online.

The researchers fed four prominent AI image generators with text prompts from sources known for unsafe content, such as the far-right platform 4chan. Shockingly, 14.56% of images generated were classified as “unsafe,” with Stable Diffusion producing the highest percentage at 18.92%. These included images with sexually explicit, violent, disturbing, hateful, or political content.

Creating safeguards

The fact that so many unsafe images were generated in Qu’s study shows that existing filters do not do their job adequately. The researcher developed her own filter, which scores a much higher hit rate in comparison, but she also suggests a number of other ways to curb the threat.

One way to prevent the spread of inhumane imagery is to program AI image generators to not generate this imagery in the first place, she said. Essentially, if AI models aren’t trained on unsafe images, they can’t replicate them. 

Beyond that, Qu recommends blocking unsafe words from the search function, so that users can’t put together prompts that produce harmful images. For those images already circulating, “there must be a way of classifying these and deleting them online,” she said.
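
Qu’s own filter isn’t public, but the word-blocking approach she describes can be as simple as screening prompts against a denylist before they ever reach the model. A toy Python sketch follows; the term list is illustrative, and a real filter would be far larger and would also need to catch obfuscated spellings.

```python
import re

# Illustrative denylist; a production filter would be far larger and
# maintained alongside moderation guidelines.
BLOCKED_TERMS = {"gore", "beheading", "nude"}

def is_prompt_allowed(prompt: str) -> bool:
    """Reject prompts containing any blocked term as a whole word."""
    words = set(re.findall(r"[a-z]+", prompt.lower()))
    return not (words & BLOCKED_TERMS)

print(is_prompt_allowed("a castle at sunset"))  # True
print(is_prompt_allowed("graphic gore scene"))  # False
```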

With all these measures, the challenge is to find the right balance. “There needs to be a trade-off between freedom and security of content,” said Qu. “But when it comes to preventing these images from experiencing wide circulation on mainstream platforms, I think strict regulation makes sense.” 

Aside from generating harmful content, the makers of AI text-to-image software have come under fire for a range of issues, such as stealing artists’ work and amplifying dangerous gender and race stereotypes.

While initiatives like the AI Safety Summit, which took place in the UK this month, aim to create guardrails for the technology, critics claim big tech companies hold too much sway over the negotiations. Whether that’s true or not, the reality is that, at present, proper, safe management of AI is patchy at best and downright alarming at its worst.  



UK’s biggest chip plant sold by Chinese-owned firm after government order

Britain’s biggest chip plant has been bought by US semiconductor firm Vishay for $177mn.

The Newport Wafer Fab in Wales was previously owned by Nexperia, which acquired the business in 2021. Nexperia is headquartered in the Netherlands, but the company is a subsidiary of China’s Wingtech. This ownership structure attracted intervention from UK lawmakers.

Last year, the British government ordered Nexperia to sell the majority of its stake in Newport Wafer Fab. The move was explained as an attempt to “mitigate the risk to national security.”

The end result is a new owner for the factory, which makes semiconductors for millions of products, from household equipment to smartphones. The chips are particularly prominent in the automotive sector.

Announcing the acquisition, Vishay highlighted the potential applications — and the political concerns.

“For Vishay, acquiring Newport Wafer Fab brings together our capacity expansion plans for our customers in automotive and industrial end markets as well as the UK’s strategic goal of improved supply chain resilience,” Joel Smejkal, the company’s president and CEO, said in a statement.

Nexperia, meanwhile, described the deal as the most viable option available. The company welcomed Vishay’s commitment to develop the 28-acre site, but criticised the British government’s actions.

“Nexperia would have preferred to continue the long-term strategy it implemented when it acquired the investment-starved fab in 2021 and provided for massive investments in equipment and personnel,” said Toni Versluijs, country manager for Nexperia UK.

“However, these investment plans have been cut short by the unexpected and wrongful divestment order made by the UK Government in November 2022.”





A third of GDPR fines for social media platforms linked to child data protection

It’s been a little over five years since the GDPR came into effect and fines keep amassing — especially for social media platforms.

New research by Dutch VPN company Surfshark has found that, since 2018, five of the most popular social media platforms (Facebook, Instagram, TikTok, WhatsApp, and X/Twitter) have been fined over €2.9bn for violating the EU’s data protection law.

Facebook alone accounts for nearly 60% of the total amount, with €1.7bn in penalties. Adding to Zuckerberg’s woes, Meta’s platforms combined have reached €2.5bn. TikTok has received the third highest amount in fines, at €360mn, while X (formerly Twitter) has only amassed €450k. Meanwhile, YouTube, Snapchat, Pinterest, Reddit, and LinkedIn have not been charged.

Most alarmingly, one-third (4 out of 13) of these fines are linked to insufficient protection of children’s data — adding up to €765mn of the total amount.

Specifically, TikTok was first fined in 2021 for failing to provide its privacy statement in Dutch, so that minors in the Netherlands could fully understand the terms. Two more fines were issued in 2023. One was for TikTok not enforcing its own policy restricting access for children under 13. The other was for setting accounts to public by default, and for not verifying legal guardianship for adults registering as parents of child users. These fines combined resulted in a total of €360mn.

The second social media platform to be fined for violating children’s privacy is Instagram. The Meta platform received its one and only fine in 2022 (€405mn), when business accounts created by minors were set to public by default.

“Such penalties demonstrate the imperative to hold major social media players accountable for their data handling practices, ensuring that the privacy and safety of all users, especially children, is given the utmost consideration and care,” said Agneska Sablovskaja, lead researcher at Surfshark.

Apart from being caught in the crosshairs of GDPR enforcers, Facebook, Instagram, TikTok, and X also need to comply with the Digital Services Act (DSA). Among other requirements, the EU’s landmark content moderation rulebook prohibits the use of targeted advertising that’s based on the profiling of minors.



5 new EU restrictions for online political advertising

As the European Union prepares for elections next year, the bloc is developing new measures to protect the democratic process. 

The latest rules focus on advertising. On Tuesday, EU officials unveiled new regulations for online political ads, which aim to make campaigns more transparent and safe from interference.

These are the five new measures that are being implemented:

1. New transparency

To enhance transparency and accountability, the EU will make it easier to find out who’s behind an ad.


Political adverts will need to be clearly labelled as such. Citizens, authorities, and journalists will also receive information on who’s funding an ad, where the financer is established, the amount that was paid, and the origin of the investment.

All online political ads will be available in a digital repository. This, the EU hopes, will help citizens identify the messages trying to shape their political views and decisions.

2. Further measures on foreign interference

With European Parliament elections scheduled for next June, the EU is also introducing new restrictions on foreign interference.

In the three months prior to an election or referendum, third-country entities will be banned from sponsoring political ads in the EU.

3. Extra rules on ad targeting

Ad targeting faces several fresh restrictions. Under the new rules, only personal data that’s explicitly provided for online political ads and collected from the subject can be used to target users.

In addition, political ads will be barred from any profiling that uses sensitive data such as ethnicity, religion, and sexual orientation. They will also be banned from using minors’ data.

These measures aim to limit the abusive use of personal information to manipulate voters.
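
Expressed as code, these three restrictions amount to a simple eligibility gate. The following hypothetical Python sketch mirrors them; the field names are invented for illustration and don’t come from the regulation’s text.

```python
# Hypothetical check mirroring the three targeting rules above; the field
# names are illustrative, not an official EU schema.
SENSITIVE_CATEGORIES = {"ethnicity", "religion", "sexual_orientation"}

def may_target_for_political_ad(user: dict) -> bool:
    if user.get("is_minor", False):
        return False  # minors' data can't be used at all
    if SENSITIVE_CATEGORIES & set(user.get("profiling_categories", ())):
        return False  # no profiling on sensitive data
    # Only data explicitly provided for online political advertising counts.
    return user.get("explicit_political_ads_consent", False)

print(may_target_for_political_ad({"explicit_political_ads_consent": True}))  # True
```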

4. Protections on freedom of expression

Any rules on online messaging will inevitably spark concerns about free speech.

To allay these anxieties, the EU has pledged to only apply the rules to paid political advertisements. According to union officials, personal views and political opinions will not be impacted.

5. Tough new sanctions

The new rules will be enforced with familiar penalties. In line with the EU’s Digital Services Act, repeated violations will trigger sanctions of up to 6% of the ad provider’s annual income or turnover.

What’s next?

The measures have already been agreed upon by EU legislators, but they still require formal adoption by the European Council and Parliament.

Once that happens, there will be a period of 18 months before the rules apply. However, the rules on the non-discriminatory provision of cross-border political advertising will be in place for the 2024 elections.

“Elections must be a competition in the open, without opaque techniques or interference,” Věra Jourová, Vice-President for Values and Transparency, said in a statement.

“People must know why they are seeing an ad, who paid for it, how much, and what targeting criteria were used. New technologies should be tools for empowerment and engagement of citizens, not for confusion and manipulation.”



Netherlands building own version of ChatGPT amid quest for safer AI

The Netherlands is building its own large language model (LLM) that seeks to provide a “transparent, fair, and verifiable” alternative to AI chatbots like the immensely popular ChatGPT. 

It seems that everyone and their dog is developing their own AI chatbot these days, from Google’s Bard and Microsoft’s Bing Chat to the recently announced Grok, a new ChatGPT rival released by Elon Musk’s xAI company this week. 

But as Silicon Valley pursues AI development behind closed doors, authorities are left in the dark as to whether these LLMs adhere to any sort of ethical standards. The EU has already warned AI companies that stricter legislation is coming.

In contrast, the new Dutch LLM, dubbed GPT-NL, will be an open model, allowing everyone to see how the underlying software works and how the AI comes to certain conclusions, said its creators. The AI is being developed by research organisation TNO, the Netherlands Forensic Institute, and IT cooperative SURF.


“With the introduction of GPT-NL, our country will soon have its own language model and ecosystem, developed according to Dutch values and guidelines,” said TNO. Financing for GPT-NL comes in the form of a €13.5mn grant from the Ministry of Economic Affairs and Climate Policy — a mere fraction of the billions used to create and run the chatbots of Silicon Valley.

“We want to have a much fairer and more responsible model,” said Selmar Smit, founder of GPT-NL. “The source data and the algorithm will become completely public.” The model is aimed at academic institutions, researchers, and governments as well as companies and general users.   

Over the next year, the partners will focus on developing and training the LLM, after which it will be made available for use and testing. GPT-NL will be hooked up to the country’s national supercomputer Snellius, which provides the processing power needed to make the model work. 

Perhaps quite appropriately, the launch of GPT-NL last week coincided with the world’s first AI Safety Summit. The star-studded event, which took place at Bletchley Park in the UK, considered ways to mitigate AI risks through internationally coordinated action. Just days before the summit, UK Prime Minister Rishi Sunak announced the launch of an AI chatbot to help the public pay taxes and access pensions. However, unlike GPT-NL, the technology behind the experimental service is managed by OpenAI — symbolic of the UK’s more laissez-faire approach to dealing with big tech, as compared to the EU.

Elsewhere, the United Arab Emirates has launched a large language model aimed at the world’s 400 million-plus Arabic speakers, while Japanese researchers are building their own version of ChatGPT because they say AI systems trained on foreign languages cannot grasp the intricacies of Japanese language and culture. In the US, the CIA is even making its own chatbot. 

Clearly then, governments and institutions across the world appear to be realising that when it comes to AI models, one-size-fits-all probably isn’t a good thing.  
