“I’m fighting with Google now,” Townsend told Ars. “I don’t expect any real answers from them.”
How YouTubers can avoid being targeted
While YouTube remains evasive, Townsend has been grateful for long-time subscribers commenting to show support, which may help the algorithm amplify his videos. On YouTube, he also said that "the outpouring of support was beyond anything" he could've expected, and that it kept him "sane" through stretches of silence, sometimes lasting 24 hours, with no update on when his account would be restored.
Townsend told Ars that he rarely does sponsorships, but like many in the fighting game community, his inbox is constantly spammed with offers, many of which he assumes are scams.
“If you are a YouTuber of any size,” Townsend explained in his YouTube video, “you are inundated with this stuff constantly,” so “my BS detector is like, okay, fake, fake, fake, fake, fake, fake, fake. But this one just, it looked real enough, like they had their own social media presence, lots of followers. Everything looked real.”
Brian_F echoed that in his video, which breaks down how the latest scheme evolved from more obvious cons, tricking even skeptical YouTubers with years of experience dodging phishing attempts in their inboxes.
“The game has changed,” Brian_F said.
Because sponsorships are rare in the fighting game community, Townsend told Ars, YouTubers are used to carefully scanning supposed offers to weed out the fakes from the real ones. But Brian_F's video pointed out that scammers now copy and paste legitimate offer letters, making it much harder to distinguish a potential source of income from a cleverly masked phishing attack using a sponsorship as the lure.
Part of the vetting process includes verifying links without clicking through and verifying the identities of the people submitting supposed offers. But when YouTubers are given legitimate links early on, receive offers from brands they really like, and see contacts that match detailed LinkedIn profiles of authentic employees who market the brand, a fake sponsorship offer becomes much harder to detect because the obvious red flags are gone.
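To make "verifying links without clicking through" concrete, here is a minimal sketch of one common check: comparing the domain a link's visible text displays against the domain its href actually targets, a classic phishing tell. This is a generic illustration, not any particular YouTuber's workflow; the email snippet and class names are hypothetical.

```python
# Minimal sketch: flag links whose visible text shows one domain
# while the underlying href points somewhere else.
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkAuditor(HTMLParser):
    """Collects (visible_text, href) pairs from anchor tags."""

    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []  # list of (visible_text, href)

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None


def domain(url: str) -> str:
    """Extract a normalized domain from a URL-like string."""
    return urlparse(url).netloc.lower().removeprefix("www.")


# Hypothetical snippet from a "sponsorship" email.
email_html = '<a href="https://evil.example.net/claim">https://sponsor-brand.com/offer</a>'
auditor = LinkAuditor()
auditor.feed(email_html)

for text, href in auditor.links:
    # If the visible text looks like a URL, its domain should match the href's.
    if domain(text) and domain(text) != domain(href):
        print(f"Suspicious: text shows {domain(text)} but link goes to {domain(href)}")
```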
LinkedIn admitted Wednesday that it has been training its own AI on many users’ data without seeking consent. Now there’s no way for users to opt out of training that has already occurred, as LinkedIn limits opt-out to only future AI training.
In a blog detailing updates coming on November 20, LinkedIn general counsel Blake Lawit confirmed that LinkedIn’s user agreement and privacy policy will be changed to better explain how users’ personal data powers AI on the platform.
Under the new privacy policy, LinkedIn now informs users that “we may use your personal data… [to] develop and train artificial intelligence (AI) models, develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences, so that our Services can be more relevant and useful to you and others.”
An FAQ explained that the personal data could be collected any time a user interacts with generative AI or other AI features, as well as when a user composes a post, changes their preferences, provides feedback to LinkedIn, or uses the platform for any amount of time.
That data is then stored until the user deletes the AI-generated content. LinkedIn recommends its data access tool to users who want to delete, or request deletion of, data collected about their past LinkedIn activity.
LinkedIn’s AI models powering generative AI features “may be trained by LinkedIn or another provider,” such as Microsoft, which provides some AI models through its Azure OpenAI service, the FAQ said.
One potentially major privacy risk, LinkedIn's FAQ noted, is that users who "provide personal data as an input to a generative AI powered feature" could end up seeing their "personal data being provided as an output."
LinkedIn claims that it “seeks to minimize personal data in the data sets used to train the models,” relying on “privacy enhancing technologies to redact or remove personal data from the training dataset.”
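What "redacting personal data from the training dataset" can look like in its simplest form is sketched below. This is a toy example of pattern-based scrubbing, assumed purely for illustration; LinkedIn has not disclosed its actual pipeline, and real privacy-enhancing technologies go well beyond regexes, into named-entity recognition and differential privacy.

```python
# Toy illustration (an assumption, not LinkedIn's actual method) of scrubbing
# obvious PII from text before it reaches a training corpus.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def redact(text: str) -> str:
    """Replace obvious PII spans with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text


post = "Reach me at jane.doe@example.com or +1 (555) 012-3456 about the role."
print(redact(post))
# -> "Reach me at [EMAIL] or [PHONE] about the role."
```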
While Lawit's blog avoids clarifying whether data already collected can be removed from AI training data sets, the FAQ affirmed that users who were automatically opted in to sharing personal data for AI training can only opt out of the invasive data collection "going forward."
Opting out “does not affect training that has already taken place,” the FAQ said.
A LinkedIn spokesperson told Ars that it “benefits all members” to be opted in to AI training “by default.”
“People can choose to opt out, but they come to LinkedIn to be found for jobs and networking and generative AI is part of how we are helping professionals with that change,” LinkedIn’s spokesperson said.
The spokesperson additionally claimed that by allowing opt-outs of future AI training, the platform is giving "people using LinkedIn even more choice and control when it comes to how we use data to train our generative AI technology."
How to opt out of AI training on LinkedIn
Users can opt out of AI training by navigating to the “Data privacy” section in their account settings, then turning off the option allowing collection of “data for generative AI improvement” that LinkedIn otherwise automatically turns on for most users.
The only exception is for users in the European Economic Area or Switzerland, who are protected by stricter privacy laws that require platforms either to obtain consent before collecting personal data or to justify the collection as a legitimate interest. Those users will not see an option to opt out because they were never opted in, LinkedIn repeatedly confirmed.
Additionally, users can “object to the use of their personal data for training” generative AI models not used to generate LinkedIn content—such as models used for personalization or content moderation purposes, The Verge noted—by submitting the LinkedIn Data Processing Objection Form.
Last year, LinkedIn shared AI principles, promising to take “meaningful steps to reduce the potential risks of AI.”
One risk that the updated user agreement specified is that using LinkedIn's generative AI features to help populate a profile or generate suggestions when writing a post could produce content that "might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes."
Users are advised that they are responsible for avoiding sharing misleading information or otherwise spreading AI-generated content that may violate LinkedIn’s community guidelines. And users are additionally warned to be cautious when relying on any information shared on the platform.
“Like all content and other information on our Services, regardless of whether it’s labeled as created by ‘AI,’ be sure to carefully review before relying on it,” LinkedIn’s user agreement says.
Back in 2023, LinkedIn claimed that it would always “seek to explain in clear and simple ways how our use of AI impacts people,” because users’ “understanding of AI starts with transparency.”
Legislation like the European Union's AI Act and the GDPR, with its strong privacy protections, could spare unsuspecting users such shocks if enacted elsewhere. That would put all companies and their users on equal footing when it comes to training AI models, with fewer nasty surprises and angry customers.