cameras


Apple Vision Pro, new cameras fail user-repairability analysis

Apple's Vision Pro scored 0 points in US PIRG's self-repairability analysis.


Kyle Orland

In December, New York became the first state to enact a “Right to Repair” law for electronics. Since then, other states, including Oregon and Minnesota, have passed similar laws. However, a recent analysis of some recently released gadgets shows that self-repair still has a long way to go before it becomes ubiquitous.

On Monday, the US Public Interest Research Group (PIRG) released its Leaders and Laggards report that examined user repairability of 21 devices subject to New York’s electronics Right to Repair law. The nonprofit graded devices “based on the quality and accessibility of repair manuals, spare parts, and other critical repair materials.”

Nathan Proctor, one of the report’s authors and senior director for the Campaign for the Right to Repair for the US PIRG Education Fund, told Ars Technica via email that PIRG focused on new models since the law only applies to new products, adding that PIRG “tried to include a range of covered devices from well-known brands.”

While all four smartphones included on the list received an A-minus or A, many other types of devices got disappointing grades. The HP Spectre Fold foldable laptop, for example, received a D-minus due to low parts (2 out of 10) and manual (4 out of 10) scores.

The report examined four camera models—Canon’s EOS R100, Fujifilm’s GFX 100 II, Nikon’s Zf, and Sony’s Alpha 6700—and all but one received an F. The outlier, the Sony camera, managed a D-plus.

Two VR headsets were also among the losers. US PIRG gave Apple’s Vision Pro and Meta’s Quest 3 an F.


Repair manuals are still hard to access

New York’s Digital Fair Repair Act requires consumer electronics brands to give consumers access to the same diagnostic tools, parts, and repair manuals that their own repair technicians use. However, PIRG struggled to access manuals for some recently released tech that’s subject to the law.

For example, Sony’s PlayStation 5 Slim received a 1/10 score. PIRG’s report includes an apparent screenshot of an online chat with Sony customer support, where a rep said that the company doesn’t have a copy of the console’s service manual available and that “if the unit needs repair, we recommend/refer customers to the service center.”

Apple’s Vision Pro, meanwhile, got a 0/10 manual score, while the Meta Quest 3 got a 1/10.

According to the report, “only 12 of 21 products provided replacement procedures, and 11 listed which tools are required to disassemble the product.”

The report suggests that repair manuals remain difficult to access, with the authors stating that reaching out to customer service representatives “often” proved “unhelpful.” The group also pointed to a potential disconnect between customer service reps and the companies’ own repairability efforts.

For example, Apple launched its Self Service Repair Store in April 2022. But PIRG’s report said:

 … our interaction with their customer service team seemed to imply that there was no self-repair option for [Apple] phones. We were told by an Apple support representative that ‘only trained Apple Technician[s]’ would be able to replace our phone screen or battery, despite a full repair manual and robust parts selection available on the Apple website.

Apple didn’t immediately respond to Ars Technica’s request for comment.



New camera design can ID threats faster, using less memory

A view out a car’s windshield, with other vehicles highlighted by computer-generated brackets.

Elon Musk, back in October 2021, tweeted that “humans drive with eyes and biological neural nets, so cameras and silicon neural nets are only way to achieve generalized solution to self-driving.” The problem with his logic has been that human eyes are way better than RGB cameras at detecting fast-moving objects and estimating distances. Our brains have also surpassed all artificial neural nets by a wide margin at general processing of visual inputs.

To bridge this gap, a team of scientists at the University of Zurich developed a new automotive object-detection system that brings digital camera performance much closer to that of human eyes. “Unofficial sources say Tesla uses multiple Sony IMX490 cameras with 5.4-megapixel resolution that [capture] up to 45 frames per second, which translates to perceptual latency of 22 milliseconds. Comparing [these] cameras alone to our solution, we already see a 100-fold reduction in perceptual latency,” says Daniel Gehrig, a researcher at the University of Zurich and lead author of the study.

Replicating human vision

When a pedestrian suddenly jumps in front of your car, multiple things have to happen before a driver-assistance system initiates emergency braking. First, the pedestrian must be captured in images taken by a camera. The time this takes is called perceptual latency: the delay between the existence of a visual stimulus and its appearance in the readout from a sensor. Then, the readout needs to get to a processing unit, which adds a network latency of around 4 milliseconds.

The processing to classify the image of a pedestrian takes further precious milliseconds. Once that is done, the detection goes to a decision-making algorithm, which takes some time to decide to hit the brakes—all of this processing is known as computational latency. Overall, the reaction time is anywhere from 0.1 to 0.5 seconds. If the pedestrian is running at 12 km/h, they would travel between 0.3 and 1.7 meters in this time. Your car, if you’re driving 50 km/h, would cover 1.4 to 6.9 meters. In a close-range encounter, this means you’d most likely hit them.
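Those distances are simply speed multiplied by reaction time. A quick back-of-the-envelope sketch in Python, using the example speeds cited above (the figures are for illustration only):

```python
def distance_m(speed_kmh: float, reaction_s: float) -> float:
    """Distance covered, in meters, at a constant speed during a given reaction time."""
    return speed_kmh / 3.6 * reaction_s  # convert km/h to m/s, then multiply by seconds

# Reaction times cited above: 0.1 s (best case) to 0.5 s (worst case)
for reaction in (0.1, 0.5):
    pedestrian = distance_m(12, reaction)  # pedestrian running at 12 km/h
    car = distance_m(50, reaction)         # car traveling at 50 km/h
    print(f"{reaction} s: pedestrian moves {pedestrian:.1f} m, car covers {car:.1f} m")

# Prints 0.3 m / 1.4 m at 0.1 s and 1.7 m / 6.9 m at 0.5 s, matching the figures above.
```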

Gehrig and Davide Scaramuzza, a professor at the University of Zurich and a co-author on the study, aimed to shorten those reaction times by bringing the perceptual and computational latencies down.

The most straightforward way to lower the former would be to use standard high-speed cameras that simply register more frames per second. But even with a 30-45 fps camera, a self-driving car would generate nearly 40 terabytes of data per hour. Fitting something that would significantly cut the perceptual latency, like a 5,000 fps camera, would overwhelm a car’s onboard computer in an instant—the computational latency would go through the roof.

So, the Swiss team used something called an “event camera,” which mimics the way biological eyes work. “Compared to a frame-based video camera, which records dense images at a fixed frequency—frames per second—event cameras contain independent smart pixels that only measure brightness changes,” explains Gehrig. Each of these pixels starts with a set brightness level. When the change in brightness exceeds a certain threshold, the pixel registers an event and sets a new baseline brightness level. All the pixels in the event camera are doing that continuously, with each registered event manifesting as a point in an image.
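As a rough illustration of that per-pixel rule, here is a toy model of the thresholding behavior described above (not the sensor’s actual circuitry): each pixel fires only when its brightness has drifted far enough from its stored baseline, and only firing pixels reset their baseline.

```python
import numpy as np

def event_pixels(frame: np.ndarray, baseline: np.ndarray, threshold: float = 0.15):
    """Toy model of an event camera's per-pixel rule: emit an event wherever the
    brightness change since the last event exceeds the threshold, then reset that
    pixel's baseline. Returns (events, updated_baseline); events holds +1 for a
    brightness increase, -1 for a decrease, and 0 where nothing fired."""
    diff = frame - baseline
    events = np.zeros_like(frame, dtype=np.int8)
    events[diff > threshold] = 1
    events[diff < -threshold] = -1
    fired = events != 0
    baseline = np.where(fired, frame, baseline)  # only firing pixels reset their baseline
    return events, baseline

# Usage: feed successive frames of a normalized (0..1) brightness signal.
baseline = np.full((4, 4), 0.5)
frame = baseline.copy()
frame[1, 2] = 0.9                      # a bright object moves into one pixel
events, baseline = event_pixels(frame, baseline)
print(events)                          # a single +1 at (1, 2); static pixels stay silent
```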

This makes event cameras particularly good at detecting high-speed movement and allows them to do so using far less data. The problem with putting them in cars has been that they had trouble detecting things that moved slowly or didn’t move at all relative to the camera. To solve that, Gehrig and Scaramuzza went for a hybrid system, where an event camera was combined with a traditional one.



“So violated”: Wyze cameras leak footage to strangers for 2nd time in 5 months


Wyze’s Cam V3 Pro indoor/outdoor smart camera.

Wyze cameras experienced a glitch on Friday that gave 13,000 customers access to images and, in some cases, video, from Wyze cameras that didn’t belong to them. The company claims 99.75 percent of accounts weren’t affected, but for some, that revelation doesn’t erase feelings of “disgust” and concern.

Wyze claims that an outage on Friday left customers unable to view camera footage for hours. Wyze has blamed the outage on a problem with an undisclosed Amazon Web Services (AWS) partner but hasn’t provided details.

Monday morning, Wyze sent emails to customers, including those Wyze says weren’t affected, informing them that the outage led to 13,000 people being able to access data from strangers’ cameras, as reported by The Verge.

Per Wyze’s email:

We can now confirm that as cameras were coming back online, about 13,000 Wyze users received thumbnails from cameras that were not their own and 1,504 users tapped on them. Most taps enlarged the thumbnail, but in some cases an Event Video was able to be viewed. …

According to Wyze, while it was trying to bring cameras back online from Friday’s outage, users reported seeing thumbnails and Event Videos that weren’t from their own cameras. Wyze’s emails added:

The incident was caused by a third-party caching client library that was recently integrated into our system. This client library received unprecedented load conditions caused by devices coming back online all at once. As a result of increased demand, it mixed up device ID and user ID mapping and connected some data to incorrect accounts.

In response to customers reporting that they were viewing images from strangers’ cameras, Wyze said it blocked customers from using the Events tab, then required an additional verification layer to access the Wyze app’s Event Video section. Wyze co-founder and CMO David Crosby also said Wyze logged out people who had used the Wyze app on Friday in order to reset tokens.

Wyze’s emails also said the company modified its system “to bypass caching for checks on user-device relationships until [it identifies] new client libraries that are thoroughly stress tested for extreme events” like the one that occurred on Friday.
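Wyze hasn’t shared further technical detail, but the fix it describes amounts to skipping the cache when checking which user owns a device and reading the authoritative record instead. A hypothetical sketch of that pattern (all names and data here are invented for illustration, not Wyze’s actual code):

```python
# Hypothetical sketch of a user-device ownership check with a cache bypass.
# Names and data are illustrative; Wyze has not published its implementation.

CACHE = {}                           # device_id -> user_id, filled by a caching layer
DATABASE = {                         # authoritative device-ownership records
    "device-123": "user-alice",
    "device-456": "user-bob",
}

def owner_of(device_id, bypass_cache=True):
    """Resolve which user owns a device. With bypass_cache=True (the post-incident
    behavior Wyze's email describes), the cache is ignored and the authoritative
    store is consulted, so a scrambled cache entry can't attach footage to the
    wrong account."""
    if not bypass_cache and device_id in CACHE:
        return CACHE[device_id]
    return DATABASE.get(device_id)

def can_view_event(user_id, device_id):
    """Only show an Event Video if the requesting user actually owns the camera."""
    return owner_of(device_id) == user_id

# Even if the cache were corrupted under heavy load...
CACHE["device-123"] = "user-bob"
print(can_view_event("user-bob", "device-123"))   # False: the check skips the cache
```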



Novel camera system lets us see the world through eyes of birds and bees

A fresh perspective

It captures natural animal-view moving images with over 90 percent accuracy.

A new camera system and software package allows researchers and filmmakers to capture animal-view videos. Credit: Vasas et al., 2024.

Who among us hasn’t wondered about how animals perceive the world, which is often different from how humans do so? There are various methods by which scientists, photographers, filmmakers, and others attempt to reconstruct, say, the colors that a bee sees as it hunts for a flower ripe for pollinating. Now an interdisciplinary team has developed an innovative camera system that is faster and more flexible in terms of lighting conditions than existing systems, allowing it to capture moving images of animals in their natural setting, according to a new paper published in the journal PLoS Biology.

“We’ve long been fascinated by how animals see the world. Modern techniques in sensory ecology allow us to infer how static scenes might appear to an animal,” said co-author Daniel Hanley, a biologist at George Mason University in Fairfax, Virginia. “However, animals often make crucial decisions on moving targets (e.g., detecting food items, evaluating a potential mate’s display, etc.). Here, we introduce hardware and software tools for ecologists and filmmakers that can capture and display animal-perceived colors in motion.”

Per Hanley and his co-authors, different animal species possess unique sets of photoreceptors that are sensitive to a wide range of wavelengths, from ultraviolet to infrared, depending on each animal’s specific ecological needs. Some animals can even detect polarized light. So every species will perceive color a bit differently. Honeybees and birds, for instance, are sensitive to UV light, which isn’t visible to human eyes. “As neither our eyes nor commercial cameras capture such variations in light, wide swaths of visual domains remain unexplored,” the authors wrote. “This makes false color imagery of animal vision powerful and compelling.”

However, the authors contend that current techniques for producing false color imagery can’t quantify the colors animals see while in motion, an important factor since movement is crucial to how different animals communicate and navigate the world around them via color appearance and signal detection. Traditional spectrophotometry, for instance, relies on object-reflected light to estimate how a given animal’s photoreceptors will process that light, but it’s a time-consuming method, and much spatial and temporal information is lost.

Peacock feathers through eyes of four different animals: (a) a peafowl; (b) humans; (c) honeybees; and (d) dogs. Credit: Vasas et al., 2024.

Multispectral photography takes a series of photos across various wavelengths (including UV and infrared) and stacks them into different color channels to derive camera-independent measurements of color. This method trades some accuracy for better spatial information and is well-suited for studying animal signals, for instance, but it only works on still objects, so temporal information is lacking.

That’s a shortcoming because “animals present and perceive signals from complex shapes that cast shadows and generate highlights,” the authors wrote. “These signals vary under continuously changing illumination and vantage points. Information on this interplay among background, illumination, and dynamic signals is scarce. Yet it forms a crucial aspect of the ways colors are used, and therefore perceived, by free-living organisms in natural settings.”

So Hanley and his co-authors set out to develop a camera system capable of producing high-precision animal-view videos that capture the full complexity of visual signals as they would be perceived by an animal in a natural setting. They combined existing methods of multispectral photography with new hardware and software designs. The camera records video in four color channels simultaneously (blue, green, red, and UV). Once that data has been processed into “perceptual units,” the result is an accurate video of how a colorful scene would be perceived by various animals, based on what we know about which photoreceptors they possess. The team’s system predicts the perceived colors with 92 percent accuracy. The cameras are commercially available, and the software is open source so that others can freely use and build on it.
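The published pipeline is more involved, but the core step, converting the four camera channels into estimated photoreceptor responses for a given species, can be thought of as a per-pixel weighted combination. A minimal sketch, assuming a simple linear mapping; the weights below are placeholders, not the calibrated coefficients from Vasas et al.:

```python
import numpy as np

def to_animal_view(frame_uvbgr: np.ndarray, receptor_matrix: np.ndarray) -> np.ndarray:
    """frame_uvbgr: (H, W, 4) array of UV, blue, green, and red camera responses in 0..1.
    receptor_matrix: (n_receptors, 4) weights describing how strongly each of the
    animal's photoreceptor classes responds to each camera channel.
    Returns an (H, W, n_receptors) array of estimated photoreceptor responses."""
    return np.clip(frame_uvbgr @ receptor_matrix.T, 0.0, 1.0)

# Hypothetical weights for a UV-sensitive trichromat such as a honeybee
# (UV, blue, and green receptor classes); real values would come from calibration
# against the species' measured photoreceptor sensitivities.
bee_matrix = np.array([
    [0.9, 0.1, 0.0, 0.0],   # UV receptor: mostly the camera's UV channel
    [0.1, 0.8, 0.1, 0.0],   # blue receptor
    [0.0, 0.1, 0.7, 0.2],   # green receptor: some sensitivity extending into red
])

frame = np.random.rand(480, 640, 4)           # stand-in for one captured video frame
bee_view = to_animal_view(frame, bee_matrix)  # (480, 640, 3) false-color honeybee view
```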

The video at the top of this article depicts the colors perceived by honeybees watching fellow bees foraging and interacting (even fighting) on flowers—an example of the camera system’s ability to capture behavior in a natural setting. Below, Hanley applies UV-blocking sunscreen in the field. His light-toned skin looks roughly the same in human vision and honeybee false color vision “because skin reflectance increases progressively at longer wavelengths,” the authors wrote.
