

“Go generate a bridge and jump off it”: How video pros are navigating AI


I talked with nine creators about economic pressures and fan backlash.

Credit: Aurich Lawson | Getty Images


In 2016, the legendary Japanese filmmaker Hayao Miyazaki was shown a bizarre AI-generated video of a misshapen human body crawling across a floor.

Miyazaki declared himself “utterly disgusted” by the technology demo, which he considered an “insult to life itself.”

“If you really want to make creepy stuff, you can go ahead and do it,” Miyazaki said. “I would never wish to incorporate this technology into my work at all.”

Many fans interpreted Miyazaki’s remarks as rejecting AI-generated video in general. So they didn’t like it when, in October 2024, filmmaker PJ Accetturo used AI tools to create a fake trailer for a live-action version of Miyazaki’s animated classic Princess Mononoke. The trailer earned him 22 million views on X. It also earned him hundreds of insults and death threats.

“Go generate a bridge and jump off of it,” said one of the funnier retorts. Another urged Accetturo to “throw your computer in a river and beg God’s forgiveness.”

Someone tweeted that Miyazaki “should be allowed to legally hunt and kill this man for sport.”

PJ Accetturo is a director and founder of Genre AI, an AI ad agency. Credit: PJ Accetturo

The development of AI image and video generation models has been controversial, to say the least. Artists have accused AI companies of stealing their work to build tools that put people out of a job. Using AI tools openly is stigmatized in many circles, as Accetturo learned the hard way.

But as these models have improved, they have sped up workflows and afforded new opportunities for artistic expression. Artists without AI expertise might soon find themselves losing work.

Over the last few weeks, I’ve spoken to nine actors, directors, and creators about how they are navigating these tricky waters. Here’s what they told me.

Actors have emerged as a powerful force against AI. In 2023, SAG-AFTRA, the Hollywood actors’ union, had its longest-ever strike, partly to establish more protections for actors against AI replicas.

Actors have lobbied to regulate AI in their industry and beyond. One actor I talked with, Erik Passoja, has testified before the California Legislature in favor of several bills, including one seeking greater protections against pornographic deepfakes. SAG-AFTRA endorsed SB 1047, an AI safety bill regulating frontier models. The union also organized against the proposed federal moratorium on state AI bills.

A recent flashpoint came in September, when Deadline Hollywood reported that talent agencies were interested in signing “AI actress” Tilly Norwood.

Actors weren’t happy. Emily Blunt told Variety, “This is really, really scary. Come on agencies, don’t do that.”

Natasha Lyonne, star of Russian Doll, posted on an Instagram Story: “Any talent agency that engages in this should be boycotted by all guilds. Deeply misguided & totally disturbed.”

The backlash was partly specific to Tilly Norwood—Lyonne is no AI skeptic, having cofounded an AI studio—but it also reflects a set of concerns around AI common to many in Hollywood and beyond.

Here’s how SAG-AFTRA explained its position:

Tilly Norwood is not an actor, it’s a character generated by a computer program that was trained on the work of countless professional performers — without permission or compensation. It has no life experience to draw from, no emotion and, from what we’ve seen, audiences aren’t interested in watching computer-generated content untethered from the human experience. It doesn’t solve any “problem” — it creates the problem of using stolen performances to put actors out of work, jeopardizing performer livelihoods and devaluing human artistry.

This statement reflects three broad criticisms that come up over and over in discussions of AI art:

Content theft: Most leading AI video models have been trained on broad swathes of the Internet, including images and films made by artists. In many cases, companies have not asked artists for permission to use this content, nor compensated them. Courts are still working out whether this is fair use under copyright law. But many people I talked to consider AI companies’ training efforts to be theft of artists’ work.

Job loss: If AI tools can make passable video quickly or drastically speed up editing tasks, that potentially takes jobs away from actors or film editors. While past technological advancements have also eliminated jobs—the adoption of digital cameras drastically reduced the number of people cutting physical film—AI could have an even broader impact.

Artistic quality: A lot of people told me they just didn’t think AI-generated content could ever be good art. Tess Dinerstein stars in vertical dramas—episodic programs optimized for viewing on smartphones. She told me that AI is “missing that sort of human connection that you have when you go to a movie theater and you’re sobbing your eyes out because your favorite actor is talking about their dead mom.”

The concern about theft is potentially solvable by changing how models are trained. Around the time Accetturo released the “Princess Mononoke” trailer, he called for generative AI tools to be “ethically trained on licensed datasets.”

Some companies have moved in this direction. For instance, independent filmmaker Gille Klabin told me he “feels pretty good” using Adobe products because the company trains its AI models on stock images that it pays royalties for.

But the other two issues—job losses and artistic integrity—will be harder to finesse. Many creators—and fans—believe that AI-generated content misses the fundamental point of art, which is about creating an emotional connection between creators and viewers.

But while that point is compelling in theory, the details can be tricky.

Dinerstein, the vertical drama actress, told me that she’s “not fundamentally against AI”—she admits “it provides a lot of resources to filmmakers” in specialized editing tasks—but she takes a hard stance against it on social media.

“It’s hard to ever explain gray areas on social media,” she said, and she doesn’t want to “come off as hypocritical.”

Even though she doesn’t think that AI poses a risk to her job—“people want to see what I’m up to”—she does fear people (both fans and vertical drama studios) making an AI representation of her without her permission. And she has found it easiest to just say, “You know what? Don’t involve me in AI.”

Others see it as a much broader issue. Actress Susan Spano told me it was “an issue for humans, not just actors.”

“This is a world of humans and animals,” she said. “Interaction with humans is what makes it fun. I mean, do we want a world of robots?”

It’s relatively easy for actors to take a firm stance against AI because they inherently do their work in the physical world. But things are more complicated for other Hollywood creatives, such as directors, writers, and film editors. AI tools can genuinely make them more productive, and they’re at risk of losing work if they don’t stay on the cutting edge.

So the non-actors I talked to took a range of approaches to AI. Some still reject it. Others have used the tools reluctantly and tried to keep their heads down. Still others have openly embraced the technology.

Kavan Cardoza is a director and AI filmmaker. Credit: Phantom X

Take Kavan Cardoza, for example. He worked as a music video director and photographer for close to a decade before getting his break into filmmaking with AI.

After the image model Midjourney was first released in 2022, Cardoza started playing around with image generation and later video generation. Eventually, he “started making a bunch of fake movie trailers” for existing movies and franchises. In December 2024, he made a fan film in the Batman universe that “exploded on the Internet,” before Warner Bros. took it down for copyright infringement.

Cardoza acknowledges that he re-created actors in former Batman movies “without their permission.” But he insists he wasn’t “trying to be malicious or whatever. It was truly just a fan film.”

Whereas Accetturo received death threats, the response to Cardoza’s fan film was quite positive.

“Every other major studio started contacting me,” Cardoza said. He set up an AI studio, Phantom X, with several of his close friends. Phantom X started by making ads (where AI video is catching on quickest), but Cardoza wanted to focus back on films.

In June, Cardoza made a short film called Echo Hunter, a blend of Blade Runner and The Matrix. Some shots look clearly AI-generated, but Cardoza used motion-capture technology from Runway to put the faces of real actors into his AI-generated world. Overall, the piece pretty much hangs together.

Cardoza wanted to work with real actors because their artistic choices can help elevate the script he’s written: “There’s a lot more levels of creativity to it.” But he needed SAG-AFTRA’s approval to make a film that blends AI techniques with the likenesses of SAG-AFTRA actors. To get it, he had to promise not to reuse the actors’ likenesses in other films.

In Cardoza’s view, AI is “giving voices to creators that otherwise never would have had the voice.”

But Cardoza isn’t wedded to AI. When an interviewer asked him whether he’d make a non-AI film if required to, he responded, “Oh, 100 percent.” Cardoza added that if he had the budget to do it now, “I’d probably still shoot it all live action.”

He acknowledged to me that there will be losers in the transition—“there’s always going to be changes”—but he compares the rise of AI with past technological developments in filmmaking, like the rise of visual effects, which created new jobs in digital effects while cutting jobs building elaborate physical sets.

Cardoza expressed interest in reducing the amount of job loss. In another interview, Cardoza said that for his film project, “we want to make sure we include as many people as possible,” not just actors, but sound designers, script editors, and other specialized roles.

But he believes that eventually, AI will get good enough to do everyone’s job. “Like I say with tech, it’s never about if, it’s just when.”

Accetturo’s entry into AI was similar. He told me that he worked for 15 years as a filmmaker, “mostly as a commercial director and former documentary director.” During the pandemic, he “raised millions” for an animated TV series, but it got caught up in development hell.

AI gave him a new chance at success. Over the summer of 2024, he started playing around with AI video tools. He realized that he was in the sweet spot to take advantage of AI: experienced enough to make something good, but not so established that he was risking his reputation. After Google released Veo 3 in May, Accetturo released a fake medicine ad that went viral. His studio now produces ads for prominent companies like Oracle and Popeyes.

Accetturo says the backlash against him has subsided: “It truly is nothing compared to what it was.” And he says he’s committed to working on AI: “Everyone understands that it’s the future.”

Between the anti- and pro-AI extremes, there are a lot of editors and artists quietly using AI tools without disclosing it. Unsurprisingly, it’s difficult to find people who will speak about this on the record.

“A lot of people want plausible deniability right now,” according to Ryan Hayden, a Hollywood talent agent. “There is backlash about it.”

But if editors don’t use AI tools, they risk becoming obsolete. Hayden says that he knows a lot of people in the editing field trying to master AI because “there’s gonna be a massive cut” in the total number of editors. Those who know AI might survive.

As one comedy writer involved in an AI project told Wired, “We wanted to be at the table and not on the menu.”

Clandestine AI usage extends into the upper reaches of the industry. Hayden knows an editor who works with a major director of $100 million films. “He’s already using AI, sometimes without people knowing.”

Some artists feel morally conflicted but don’t think they can effectively resist. Vinny Dellay, a storyboard artist who has worked on Marvel films and Super Bowl ads, released a video detailing his views on the ethics of using AI as a working artist. Dellay said that he agrees that “AI being trained off of art found on the Internet without getting permission from the artist, it may not be fair, it may not be honest.” But refusing to use AI products won’t stop their general adoption. Believing otherwise is “just being delusional.”

Instead, Dellay said that the right course is to “adapt like cockroaches after a nuclear war.” If they’re lucky, using AI in storyboarding workflows might even “let a storyboard artist pump out twice the boards in half the time without questioning all your life’s choices at 3 am.”

Gille Klabin is an independent writer, director, and visual effects artist. Credit: Gille Klabin

Gille Klabin is an indie director and filmmaker currently working on a feature called Weekend at the End of the World.

As an independent filmmaker, Klabin can’t afford to hire many people. There are many labor-intensive tasks—like making a pitch deck for his film—that he’d otherwise have to do himself. An AI tool “essentially just liberates us to get more done and have more time back in our life.”

But he’s careful to stick to his own moral lines. Any time he mentioned using an AI tool during our interview, he’d explain why he thought that was an appropriate choice. He said he was fine with AI use “as long as you’re using it ethically in the sense that you’re not copying somebody’s work and using it for your own.”

Drawing these lines can be difficult, however. Hayden, the talent agent, told me that as AI tools make low-budget films look better, it gets harder to make high-budget films, which employ the most people at the highest wage levels.

If anything, Klabin’s AI uptake is limited more by the current capabilities of AI models. Klabin is an experienced visual effects artist, and he finds AI products to generally be “not really good enough to be used in a final project.”

He gave me a concrete example. Rotoscoping is a process in which you trace out the subject of the shot so you can edit the background independently. It’s very labor-intensive—every frame has to be edited individually—so Klabin has tried using Runway’s AI-driven rotoscoping. While it can make for a decent first pass, the result is just too messy to use in a final product.

Klabin sent me a GIF of a series of rotoscoped frames from his upcoming movie. While the model does a decent job of identifying the people in the frame, its boundaries aren’t consistent from frame to frame. The result is noisy.
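To make that frame-to-frame inconsistency concrete, here is a minimal sketch of the failure mode, assuming a generic per-frame segmenter rather than Klabin’s actual pipeline or Runway’s tool. Because each frame is segmented with no memory of its neighbors, the overlap (intersection over union) between consecutive masks jumps around, and those jumps are the flicker visible in the GIF.

```python
# A minimal sketch (Python/OpenCV) of why per-frame rotoscoping flickers:
# each frame is matted independently, so nothing ties the mask boundary in
# frame N to the boundary in frame N+1. GrabCut stands in for whatever
# per-frame segmentation model a tool might use; "shot.mp4" and the seed
# rectangle are placeholders, not Klabin's footage or Runway's method.
import cv2
import numpy as np

def segment_frame(frame, rect):
    """Segment one frame from scratch, with no memory of earlier frames."""
    mask = np.zeros(frame.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(frame, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    # Foreground = definite or probable foreground pixels
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))

cap = cv2.VideoCapture("shot.mp4")   # placeholder clip
rect = (100, 50, 400, 600)           # rough box around the subject
prev = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cur = segment_frame(frame, rect)
    if prev is not None:
        # Overlap between consecutive masks; dips are visible boundary jitter.
        iou = (prev & cur).sum() / max((prev | cur).sum(), 1)
        print(f"mask IoU vs. previous frame: {iou:.3f}")
    prev = cur
cap.release()
```

A temporally aware tool would propagate the mask from one frame to the next instead of starting over, which is exactly the consistency current AI rotoscoping tends to lack.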

Current AI tools are full of these small glitches, so Klabin only uses them for tasks that audiences don’t see (like creating a movie pitch deck) or in contexts where he can clean up the result afterward.

Stephen Robles reviews Apple products on YouTube and other platforms. He uses AI in some parts of the editing process, such as removing silences or transcribing audio, but doesn’t see it as disruptive to his career.

Stephen Robles is a YouTuber, podcaster, and creator covering tech, particularly Apple. Credit: Stephen Robles

“I am betting on the audience wanting to trust creators, wanting to see authenticity,” he told me. AI video tools don’t really help him with that and can’t replace the reputation he’s sought to build.

Recently, he experimented with using ChatGPT to edit a video thumbnail (the image used to advertise a video). He got a couple of negative reactions about his use of AI, so he said he “might slow down a little bit” with that experimentation.

Robles didn’t seem as concerned about AI models stealing from creators like him. When I asked him about how he felt about Google training on his data, he told me that “YouTube provides me enough benefit that I don’t think too much about that.”

Professional thumbnail artist Antioch Hwang has a similarly pragmatic view toward using AI. Some channels he works with have audiences that are “very sensitive to AI images.” Even using “an AI upscaler to fix up the edges” can provoke strong negative reactions. For those channels, he’s “very wary” about using AI.

Antioch Hwang is a YouTube thumbnail artist. Credit: Antioch Creative

But for most channels he works for, he’s fine using AI, at least for technical tasks. “I think there’s now been a big shift in the public perception of these AI image generation tools,” he told me. “People are now welcoming them into their workflow.”

He’s still careful with his AI use, though, because he thinks that having human artistry helps in the YouTube ecosystem. “If everyone has all the [AI] tools, then how do you really stand out?” he said.

Recently, top creators have started using more rough-looking thumbnails for their videos. AI has made polished thumbnails too easy to create, so some are turning to what Hwang would call “poorly made thumbnails” to help videos stand out.

Hwang told me something surprising: even as AI makes it easier for creators to make thumbnails themselves, business has never been better for thumbnail artists, even at the lower end. He said that demand has soared because “AI as a whole has lowered the barriers for content creation, and now there’s more creators flooding in.”

Still, Hwang doesn’t expect the good times to last forever. “I don’t see AI completely taking over for the next three-ish years. That’s my estimated timeline.”

Everyone I talked to had different answers to when—if ever—AI would meaningfully disrupt their part of the industry.

Some, like Hwang, were pessimistic. Actor Erik Passoja told me he thought the big movie studios—like Warner Bros. or Paramount—would be gone in three to five years.

But others were more optimistic. Tess Dinerstein, the vertical drama actor, said, “I don’t think that verticals are ever going to go fully AI.” Even if it becomes technologically feasible, she argued, “that just doesn’t seem to be what the people want.”

Gille Klabin, the independent filmmaker, thought there would always be a place for high-quality human films. If someone’s work is “fundamentally derivative,” then they are at risk. But he thinks the best human-created work will still stand out. “I don’t know how AI could possibly replace the borderline divine element of consciousness,” he said.

The people who were most bullish on AI were, if anything, the least optimistic about their own career prospects. “I think at a certain point it won’t matter,” Kavan Cardoza told me. “It’ll be that anyone on the planet can just type in some sentences” to generate full, high-quality videos.

This might explain why Accetturo has become something of an AI evangelist; his newsletter tries to teach other filmmakers how to adapt to the coming AI revolution.

AI “is a tsunami that is gonna wipe out everyone,” he told me. “So I’m handing out surfboards—teaching people how to surf. Do with it what you will.”

Kai Williams is a reporter for Understanding AI, a Substack newsletter founded by Ars Technica alum Timothy B. Lee. His work is supported by a Tarbell Fellowship. Subscribe to Understanding AI to get more from Tim and Kai.



Dead celebrities are apparently fair game for Sora 2 video manipulation

But deceased public figures obviously can’t consent to Sora 2’s cameo feature or exercise that kind of “end-to-end” control of their own likeness. And OpenAI seems OK with that. “We don’t have a comment to add, but we do allow the generation of historical figures,” an OpenAI spokesperson recently told PCMag.

The countdown to lawsuits begins

The use of digital re-creations of dead celebrities isn’t exactly a new issue—back in the ’90s, we were collectively wrestling with John Lennon chatting with Forrest Gump and Fred Astaire dancing with a Dirt Devil vacuum. Back then, though, that kind of footage required painstaking digital editing and technology easily accessible only to major video production houses. Now, more convincing footage of deceased public figures can be generated by any Sora 2 user in minutes for just a few bucks.

In the US, the right of publicity for deceased public figures is governed by various laws in at least 24 states. California’s statute, which dates back to 1985, bars unauthorized post-mortem use of a public figure’s likeness “for purposes of advertising or selling, or soliciting purchases of products, merchandise, goods, or services.” But a 2001 California Supreme Court ruling explicitly allows those likenesses to be used for “transformative” purposes under the First Amendment.

The New York version of the law, signed in 2022, contains specific language barring the unauthorized use of “digital replicas” that are “so realistic that a reasonable observer would believe it is a performance by the individual being portrayed and no other individual” and used in a manner “likely to deceive the public into thinking it was authorized by the person or persons.” But video makers can get around this prohibition with a “conspicuous disclaimer” explicitly noting that the use is unauthorized.



Can today’s AI video models accurately model how the real world works?

But on other tasks, the model showed much more variable results. When asked to generate a video highlighting a specific written character on a grid, for instance, the model failed in nine out of 12 trials. When asked to model a Bunsen burner turning on and burning a piece of paper, it similarly failed nine out of 12 times. When asked to solve a simple maze, it failed in 10 of 12 trials. When asked to sort numbers by popping labeled bubbles in order, it failed 11 out of 12 times.

For the researchers, though, all of the above examples aren’t evidence of failure but instead a sign of the model’s capabilities. To be listed under the paper’s “failure cases,” Veo 3 had to fail a tested task across all 12 trials, which happened in 16 of the 62 tasks tested. For the rest, the researchers write that “a success rate greater than 0 suggests that the model possesses the ability to solve the task.”

Thus, failing 11 out of 12 trials of a certain task is considered evidence for the model’s capabilities in the paper. That evidence of the model “possess[ing] the ability to solve the task” includes 18 tasks where the model failed in more than half of its 12 trial runs and another 14 where it failed in 25 to 50 percent of trials.
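To see how forgiving that criterion is, here is a toy tally built from the failure counts quoted above (the numbers come from this article’s summary, not the paper’s raw data), comparing the paper’s any-success bar with one arbitrary stand-in for a reliability bar:

```python
# A toy tally using the failure counts quoted above; the four task names
# and per-task numbers come from this article, not the paper's raw data.
TRIALS = 12
successes = {
    "highlight character": 3,  # failed 9 of 12
    "bunsen burner":       3,  # failed 9 of 12
    "solve maze":          2,  # failed 10 of 12
    "bubble sort":         1,  # failed 11 of 12
}

for task, wins in successes.items():
    rate = wins / TRIALS
    capability = wins > 0      # the paper's bar: any success at all
    reliable = rate >= 0.75    # an arbitrary production-style bar
    print(f"{task:20s} {rate:4.0%}  capability={capability}  reliable={reliable}")
```

Every one of these tasks clears the paper’s bar while failing the reliability bar by a wide margin, which is the gap the next section turns to.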

Past results, future performance

Yes, in all of these cases, the model technically demonstrates the capability being tested at some point. But the model’s inability to perform that task reliably means that, in practice, it won’t be performant enough for most use cases. Any future model that could serve as one of the “unified, generalist vision foundation models” the researchers envision will have to succeed much more consistently on these kinds of tests.



With new Gen-4 model, Runway claims to have finally achieved consistency in AI videos

For example, it was used in producing the sequence in the film Everything Everywhere All At Once, where two rocks with googly eyes had a conversation on a cliff, and it has also been used to make visual gags for The Late Show with Stephen Colbert.

Whereas many competing startups were founded by AI researchers or Silicon Valley entrepreneurs, Runway was founded in 2018 by art students at New York University’s Tisch School of the Arts—Cristóbal Valenzuela and Alejandro Matamala from Chile, and Anastasis Germanidis from Greece.

It was one of the first companies to release a usable video-generation tool to the public, and its team also contributed in foundational ways to the Stable Diffusion model.

It is vastly outspent by competitors like OpenAI, but while most of its competitors have released general-purpose video creation tools, Runway has sought an Adobe-like place in the industry. It has focused on marketing to creative professionals like designers and filmmakers and has built tools meant to integrate Runway into existing creative workflows as a support tool.

That support-tool positioning (as opposed to a standalone creative product) helped Runway secure a deal with motion picture company Lionsgate, wherein Lionsgate allowed Runway to legally train its models on its library of films and Runway provided Lionsgate with bespoke tools for use in production and post-production.

That said, Runway is, along with Midjourney and others, one of the subjects of a widely publicized intellectual property case brought by artists who claim the companies illegally trained their models on their work, so not all creatives are on board.

Apart from the announcement about the partnership with Lionsgate, Runway has never publicly shared what data is used to train its models. However, a report in 404 Media seemed to reveal that at least some of the training data included video scraped from the YouTube channels of popular influencers, film studios, and more.



OpenAI shows off Sora AI video generator to Hollywood execs

No lights, no camera, action —

CEO Sam Altman met with Universal, Paramount, and Warner Bros Discovery.

A robotic intelligence works as a cameraman (3D rendering).

OpenAI has launched a charm offensive in Hollywood, holding meetings with major studios including Paramount, Universal, and Warner Bros Discovery to showcase its video generation technology Sora and allay fears the artificial intelligence model will harm the movie industry.

Chief Executive Sam Altman and Chief Operating Officer Brad Lightcap gave presentations to executives from the film industry giants, said multiple people with knowledge of the meetings, which took place in recent days.

Altman and Lightcap showed off Sora, a new generative AI model that can create detailed videos from simple written prompts.

The technology first gained Hollywood’s attention after OpenAI published a selection of videos produced by the model last month. The clips quickly went viral online and have led to debate over the model’s potential impact on the creative industries.

“Sora is causing enormous excitement,” said media analyst Claire Enders. “There is a sense it is going to revolutionize the making of movies and bring down the cost of production and reduce the demand for [computer-generated imagery] very strongly.”

AI-generated video of a cat and human, generated via video generation model Sora.

Those involved in the meetings said OpenAI was seeking input from the film bosses on how Sora should be rolled out. Some who watched the demonstrations said they could see how Sora or similar AI products could save time and money on production but added the technology needed further development.

OpenAI’s overtures to the studios come at a delicate moment in Hollywood. Last year’s monthslong strikes ended with the Writers Guild of America and the Screen Actors Guild securing groundbreaking protections from AI in their contracts. This year, contract negotiations are underway with the International Alliance of Theatrical Stage Employees—and AI is again expected to be a hot-button issue.

Earlier this week, OpenAI released new Sora videos generated by a number of visual artists and directors, including short films, as well as their impressions of the technology. The model will aim to compete with several available text-to-video services from startups, including Runway, Pika, and Stability AI, which already offer their tools for commercial use.

An AI-generated video from Sora of a dog.

However, Sora has not been widely released. OpenAI has held off announcing a launch date or the circumstances under which it will be available. One person with knowledge of its strategy said the company was deciding how to commercialize the technology. Another person said there were safety steps still to take before the company considered putting Sora into a product.

OpenAI is also working to improve the system. Currently, Sora can only make videos under one minute in length, and its creations have limitations, such as glass bouncing off the floor instead of shattering or extra limbs appearing on people and animals.

Some studios appeared open to using Sora in filmmaking or TV production in future, but licensing and partnerships have not yet been discussed, said people involved in the talks.

“There have been no meetings with OpenAI about partnerships,” one studio executive said. “They’ve done demos, just like Apple has been demo-ing the Vision Pro [mixed-reality headset]. They’re trying to get people excited.”

OpenAI has been previewing the model in a “very controlled manner” to “industries that are likely to be impacted first,” said one person close to OpenAI.

Media analyst Enders said the reception from the movie industry had been broadly optimistic on Sora as it is “seen completely as a cost-saving element, rather than impacting the creative ethos of storytelling.”

OpenAI declined to comment.

An AI-generated video from Sora of a woman walking down a Tokyo street.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



UniFi devices broadcasted private video to other users’ accounts

CASE OF MISTAKEN IDENTITY —

“I was presented with 88 consoles from another account,” one user reports.

An assortment of Ubiquiti cameras.

Users of UniFi, the popular line of wireless devices from manufacturer Ubiquiti, are reporting receiving private camera feeds from, and control over, devices belonging to other users, posts published to social media site Reddit over the past 24 hours show.

“Recently, my wife received a notification from UniFi Protect, which included an image from a security camera,” one Reddit user reported. “However, here’s the twist—this camera doesn’t belong to us.”

Stoking concern and anxiety

The post included two images. The first showed a notification pushed to the person’s phone reporting that their UDM Pro, a network controller and network gateway used by tech-enthusiast consumers, had detected someone moving in the backyard. A still shot of video recorded by a connected surveillance camera showed a three-story house surrounded by trees. The second image showed the dashboard belonging to the Reddit user. The user’s connected device was a UDM SE, and the video it captured showed a completely different house.

Less than an hour later, a different Reddit user posting to the same thread replied: “So it’s VERY interesting you posted this, I was just about to post that when I navigated to unifi.ui.com this morning, I was logged into someone else’s account completely! It had my email on the top right, but someone else’s UDM Pro! I could navigate the device, view, and change settings! Terrifying!!”

Two other people took to the same thread to report similar behavior happening to them.

Other Reddit threads posted in the past day reporting UniFi users connecting to private devices or feeds belonging to others are here and here. In the first, the poster reported gaining full access to someone else’s system and included two screenshots showing what the poster said was captured video of an unrecognized business. In the second, the poster reported logging into their Ubiquiti dashboard to find system controls for someone else. “I ended up logging out, clearing cookies, etc seems fine now for me…” the poster wrote.

Yet another person reported the same problem in a post published to Ubiquiti’s community support forum on Thursday, as this Ars story was being reported. The person reported logging into the UniFi console as is their routine each day.

“However this time I was presented with 88 consoles from another account,” the person wrote. “I had full access to these consoles, just as I would my own. This was only stopped when I forced a browser refresh, and I was presented again with my consoles.”

Ubiquiti on Thursday said it had identified the glitch and fixed the errors that caused it.

“Specifically, this issue was caused by an upgrade to our UniFi Cloud infrastructure, which we have since solved,” officials wrote. They went on:

1. What happened?

1,216 Ubiquiti accounts (“Group 1”) were improperly associated with a separate group of 1,177 Ubiquiti accounts (“Group 2”).

2. When did this happen?

December 13, from 6:47 AM to 3:45 PM UTC.

3. What does this mean?

During this time, a small number of users from Group 2 received push notifications on their mobile devices from the consoles assigned to a small number of users from Group 1.

Additionally, during this time, a user from Group 2 that attempted to log into his or her account may have been granted temporary remote access to a Group 1 account.

The reports are understandably stoking concern and even anxiety for users of UniFi products, which include wireless access points, switches, routers, controller devices, VoIP phones, and access control products. As the Internet-accessible portals into the local networks of users, UniFi devices provide a means for accessing cameras, mics, and other sensitive resources inside the home.

“I guess I should stop walking around naked in my house now,” a participant in one of the forums joked.

To Ubiquiti’s credit, company employees proactively responded to reports, signaling they took the reports seriously and began actively investigating early on. The employees said the problem has been corrected, and the account mix-ups are no longer occurring.

It’s useful to remember that this sort of behavior—legitimately logging into an account only to find the data or controls belonging to a completely different account—is as old as the Internet. Recent examples include a T-Mobile mistake in September and similar glitches involving Chase Bank, First Virginia Banks, Credit Karma, and Sprint.

The precise root causes of this type of system error vary from incident to incident, but they often involve “middlebox” devices, which sit between the front- and back-end devices. To improve performance, middleboxes cache certain data, including the credentials of users who have recently logged in. When mismatches occur, credentials for one account can be mapped to a different account.
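As an illustration, here is a minimal, hypothetical sketch of that failure mode. The function names and paths are invented, and this is a generic caching bug rather than anything known about Ubiquiti’s infrastructure: a layer in front of the backend keys responses by URL alone, so a page rendered for one logged-in user is replayed to the next user who requests the same path.

```python
# A minimal, hypothetical sketch of the middlebox failure mode described
# above. Nothing here is Ubiquiti's actual code; the bug is generic: a
# cache between front end and back end keys responses by URL alone, so a
# response built for one logged-in user is served to another.
cache: dict[str, str] = {}

def render_dashboard(user: str) -> str:
    """Back end: builds a page embedding user-specific camera feeds."""
    return f"<dashboard with camera feeds for {user}>"

def handle_request(path: str, session_user: str) -> str:
    key = path                      # BUG: cache key ignores session identity
    # Correct would be to vary the key on identity, e.g. (path, session_user)
    if key not in cache:
        cache[key] = render_dashboard(session_user)
    return cache[key]

print(handle_request("/protect/cameras", "alice"))  # alice's own feeds
print(handle_request("/protect/cameras", "bob"))    # bob gets alice's feeds
```

The fix in real deployments is the same in spirit as the commented line: make the cache vary on the credential (or simply never cache authenticated responses), which is why these incidents tend to be resolved quickly once the mis-keyed layer is found.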

In an email, a Ubiquiti official said company employees are still gathering “information to provide an accurate assessment.”

UniFi devices broadcasted private video to other users’ accounts Read More »
