
Engineer proves that Kohler’s smart toilet cameras aren’t very private


Kohler is getting the scoop on people’s poop.

A Dekoda smart toilet camera. Credit: Kohler

Kohler is facing backlash after an engineer pointed out that the company’s new smart toilet cameras may not be as private as it wants people to believe. The discussion raises questions about Kohler’s use of the term “end-to-end encryption” (E2EE) and the inherent privacy limitations of a device that films the goings-on of a toilet bowl.

In October, Kohler announced its first “health” product, the Dekoda. Kohler’s announcement described the $599 device (it also requires a subscription that starts at $7 per month) as a toilet bowl attachment that uses “optical sensors and validated machine-learning algorithms” to deliver “valuable insights into your health and wellness.” The announcement added:

Data flows to the personalized Kohler Health app, giving users continuous, private awareness of key health and wellness indicators—right on their phone. Features like fingerprint authentication and end-to-end encryption are designed for user privacy and security.

The average person is most likely to be familiar with E2EE through messaging apps, like Signal. Messages sent via apps with E2EE are encrypted throughout transmission. Only the message’s sender and recipient can view the decrypted messages, which is intended to prevent third parties, including the app developer, from reading them.

But how does E2EE apply to a docked camera inside a toilet?

Software engineer and former Federal Trade Commission technology advisor Simon Fondrie-Teitler sought answers about this, considering that “Kohler Health doesn’t have any user-to-user sharing features,” he wrote in a blog post this week:

 … emails exchanged with Kohler’s privacy contact clarified that the other ‘end’ that can decrypt the data is Kohler themselves: ‘User data is encrypted at rest, when it’s stored on the user’s mobile phone, toilet attachment, and on our systems. Data in transit is also encrypted end-to-end, as it travels between the user’s devices and our systems, where it is decrypted and processed to provide our service.’

Ars Technica contacted Kohler to ask if the above statement is an accurate summary of Dekoda’s “E2EE” and if Kohler employees can access data from Dekoda devices. A spokesperson responded with a company statement that basically argued that data gathered from Dekoda devices is encrypted from one end (the toilet camera) until it reaches another end, in this case, Kohler’s servers. The statement reads, in part:

The term end-to-end encryption is often used in the context of products that enable a user (sender) to communicate with another user (recipient), such as a messaging application. Kohler Health is not a messaging application. In this case, we used the term with respect to the encryption of data between our users (sender) and Kohler Health (recipient).

We encrypt data end-to-end in transit, as it travels between users’ devices and our systems, where it is decrypted and processed to provide and improve our service. We also encrypt sensitive user data at rest, when it’s stored on a user’s mobile phone, toilet attachment, and on our systems.

Although Kohler somewhat logically defines the endpoints in what it considers E2EE, at a minimum, Kohler’s definition goes against the consumer-facing spirit of E2EE. Because E2EE is, as Kohler’s statement notes, most frequently used in messaging apps, people tend to associate it with privacy from the company that enables the data transmission. Since that’s not the case with the Dekoda, Kohler’s misuse of the term E2EE can give users a false sense of privacy.

As IBM defines it, E2EE “ensures that service providers facilitating the communications … can’t access the messages.” Kohler’s statement implies that the company understood how people typically think about E2EE and still chose to use the term over more accurate alternatives, such as Transport Layer Security (TLS) encryption, which “encrypts data as it travels between a client and a server. However, it doesn’t provide strong protection against access by intermediaries such as application servers or network providers,” per IBM.
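The distinction can be made concrete with a toy sketch. Nothing here reflects Kohler’s actual implementation, and the demonstration cipher is deliberately simple and insecure; the point is only who holds the decryption key:

```python
import hashlib
import secrets


def keystream_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy XOR stream cipher (illustration only, NOT secure): derive a
    keystream from SHA-256 and XOR it with the data. XORing twice with
    the same key decrypts."""
    out = bytearray()
    counter = 0
    while len(out) < len(plaintext):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(plaintext, out))


reading = b"scan results"

# Transport-encryption model (what Kohler describes): the server ends up
# with a key for the data, so it can decrypt and process the reading.
server_key = secrets.token_bytes(32)
in_transit = keystream_encrypt(server_key, reading)
assert keystream_encrypt(server_key, in_transit) == reading  # server can read it

# E2EE model as consumers understand it: only the user's devices hold
# user_key; the server stores ciphertext it cannot decrypt.
user_key = secrets.token_bytes(32)  # never leaves the user's devices
e2ee_blob = keystream_encrypt(user_key, reading)
assert keystream_encrypt(server_key, e2ee_blob) != reading  # server's key is useless
```

In the first model the operator can read the data after transport, which is what TLS-style encryption gives you; in the second, the ciphertext is opaque to everyone but the key-holding devices, which is what the term E2EE promises.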

“Using terms like ‘anonymized’ and ‘encrypted’ gives an impression of a company taking privacy and security seriously—but that doesn’t mean it actually is,” RJ Cross, director of the consumer privacy program at the Public Interest Research Group (PIRG), told Ars Technica.

Smart toilet cameras are so new (and questionable) that there are few comparisons we can make here. But the Dekoda’s primary rival, the Throne, also uses confusing marketing language. The smart camera’s website makes no mention of end-to-end encryption but claims that the device uses “bank-grade encryption,” a vague term favored by marketers that does not imply E2EE (and E2EE isn’t a mandatory banking security standard in the US anyway).

Why didn’t anyone notice before?

As Fondrie-Teitler pointed out in his blog, it’s odd to see E2EE associated with a smart toilet camera. Despite this, I wasn’t immediately able to find online discussion around Dekoda’s use of the term, which includes the device’s website saying that the Dekoda uses “encryption at every step.”

Numerous stories about the toilet cam’s launch (examples here, here, here, and here) mentioned the device’s purported E2EE but made no statements about how E2EE is used or the implications that E2EE claims have, or don’t have, for user privacy.

It’s possible there wasn’t much questioning about the Dekoda’s E2EE claim since the type of person who worries about and understands such things is often someone who wouldn’t put a camera anywhere near their bathroom.

It’s also possible that people had other ideas for how the smart toilet camera might work. Speaking with The Register, Fondrie-Teitler suggested a design in which data never leaves the camera but admitted that he didn’t know if this is possible.

“Ideally, this type of data would remain on the user’s device for analysis, and client-side encryption would be used for backups or synchronizing historical data to new devices,” he told The Register.

What is Kohler doing with the data?

For those curious about why Kohler wants data about its customers’ waste, the answer, as it often is today, is marketing and AI.

As Fondrie-Teitler noted, Kohler’s privacy policy says Kohler can use customer data to “create aggregated, de-identified and/or anonymized data, which we may use and share with third parties for our lawful business purposes, including to analyze and improve the Kohler Health Platform and our other products and services, to promote our business, and to train our AI and machine learning models.”

In its statement, Kohler said:

If a user consents (which is optional), Kohler Health may de-identify the data and use the de-identified data to train the AI that drives our product. This consent check-box is displayed in the Kohler Health app, is optional, and is not pre-checked.

Words matter

Kohler isn’t the first tech company to confuse people with its use of the term E2EE. In April, there was debate over whether Google was truly giving Gmail for business users E2EE, since, in addition to the sender and recipient having access to decrypted messages, people inside the users’ organization who deploy and manage the KACL (Key Access Control List) server can access the key necessary for decryption.

In general, what matters most is whether the product provides the security users demand. As Ars Technica Senior Security Editor Dan Goodin wrote about Gmail’s E2EE debate:

“The new feature is of potential value to organizations that must comply with onerous regulations mandating end-to-end encryption. It most definitely isn’t suitable for consumers or anyone who wants sole control over the messages they send. Privacy advocates, take note.”

When the product in question is an Internet-connected camera that lives inside your toilet bowl, it’s important to ask whether any technology could ever make it private enough. For many, no proper terminology could rationalize such a device.

Still, if a company is going to push “health” products to people who may have health concerns and, perhaps, limited cybersecurity and tech privacy knowledge, there’s an onus on that company for clear and straightforward communication.

“Throwing security terms around that the public doesn’t understand to try and create an illusion of data privacy and security being a high priority for your company is misleading to the people who have bought your product,” Cross said.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.



Apple Vision Pro, new cameras fail user-repairability analysis

Apple's Vision Pro scored 0 points in US PIRG's self-repairability analysis.


Kyle Orland

In December, New York became the first state to enact a “Right to Repair” law for electronics. Since then, other states, including Oregon and Minnesota, have passed similar laws. However, a recent analysis of some recently released gadgets shows that self-repair still has a long way to go before it becomes ubiquitous.

On Monday, the US Public Interest Research Group (PIRG) released its Leaders and Laggards report that examined user repairability of 21 devices subject to New York’s electronics Right to Repair law. The nonprofit graded devices “based on the quality and accessibility of repair manuals, spare parts, and other critical repair materials.”

Nathan Proctor, one of the report’s authors and senior director for the Campaign for the Right to Repair for the US PIRG Education Fund, told Ars Technica via email that PIRG focused on new models since the law only applies to new products, adding that PIRG “tried to include a range of covered devices from well-known brands.”

While all four smartphones included on the list received an A-minus or A, many other types of devices got disappointing grades. The HP Spectre Fold foldable laptop, for example, received a D-minus due to low parts (2 out of 10) and manual (4 out of 10) scores.

The report examined four camera models—Canon’s EOS R100, Fujifilm’s GFX 100 II, Nikon’s Zf, and Sony’s Alpha 6700—and all but one received an F. The outlier, the Sony camera, managed a D-plus.

Two VR headsets were also among the losers. US PIRG gave Apple’s Vision Pro and Meta’s Quest 3 an F.

You can see PIRG’s full score breakdown below:

Repair manuals are still hard to access

New York’s Digital Fair Repair Act requires consumer electronics brands to give consumers access to the same diagnostic tools, parts, and repair manuals that their own repair technicians use. However, PIRG struggled to access manuals for some recently released tech that’s subject to the law.

For example, Sony’s PlayStation 5 Slim received a 1/10 score. PIRG’s report includes an apparent screenshot of an online chat with Sony customer support, where a rep said that the company doesn’t have a copy of the console’s service manual available and that “if the unit needs repair, we recommend/refer customers to the service center.”

Apple’s Vision Pro, meanwhile, got a 0/10 manual score, while the Meta Quest 3 got a 1/10.

According to the report, “only 12 of 21 products provided replacement procedures, and 11 listed which tools are required to disassemble the product.”

The report suggests that repair manuals remain difficult to access, with its authors stating that reaching out to customer service representatives “often” proved “unhelpful.” The group also pointed to a potential lack of communication between customer service reps and the companies’ repairability efforts.

For example, Apple launched its Self Service Repair Store in April 2022. But PIRG’s report said:

 … our interaction with their customer service team seemed to imply that there was no self-repair option for [Apple] phones. We were told by an Apple support representative that ‘only trained Apple Technician[s]’ would be able to replace our phone screen or battery, despite a full repair manual and robust parts selection available on the Apple website.

Apple didn’t immediately respond to Ars Technica’s request for comment.



New camera design can ID threats faster, using less memory

Image out the windshield of a car, with other vehicles highlighted by computer-generated brackets.

Elon Musk, back in October 2021, tweeted that “humans drive with eyes and biological neural nets, so cameras and silicon neural nets are only way to achieve generalized solution to self-driving.” The problem with his logic has been that human eyes are way better than RGB cameras at detecting fast-moving objects and estimating distances. Our brains have also surpassed all artificial neural nets by a wide margin at general processing of visual inputs.

To bridge this gap, a team of scientists at the University of Zurich developed a new automotive object-detection system that brings digital camera performance much closer to that of human eyes. “Unofficial sources say Tesla uses multiple Sony IMX490 cameras with 5.4-megapixel resolution that [capture] up to 45 frames per second, which translates to perceptual latency of 22 milliseconds. Comparing [these] cameras alone to our solution, we already see a 100-fold reduction in perceptual latency,” says Daniel Gehrig, a researcher at the University of Zurich and lead author of the study.

Replicating human vision

When a pedestrian suddenly jumps in front of your car, multiple things have to happen before a driver-assistance system initiates emergency braking. First, the pedestrian must be captured in images taken by a camera. The time this takes is called perceptual latency—it’s the delay between the existence of a visual stimulus and its appearance in the readout from a sensor. Then, the readout needs to get to a processing unit, which adds a network latency of around 4 milliseconds.

The processing to classify the image of a pedestrian takes further precious milliseconds. Once that is done, the detection goes to a decision-making algorithm, which takes some time to decide to hit the brakes—all this processing is known as computational latency. Overall, the reaction time is anywhere from 0.1 to 0.5 seconds. If the pedestrian runs at 12 km/h, they would travel between 0.3 and 1.7 meters in this time. Your car, if you’re driving 50 km/h, would cover 1.4 to 6.9 meters. In a close-range encounter, this means you’d most likely hit them.
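The arithmetic behind those figures is straightforward: distance covered is speed (converted from km/h to m/s) multiplied by reaction time.

```python
def distance_m(speed_kmh: float, reaction_s: float) -> float:
    """Distance in meters covered at speed_kmh during reaction_s seconds."""
    return speed_kmh / 3.6 * reaction_s


for t in (0.1, 0.5):
    print(f"reaction {t}s: pedestrian {distance_m(12, t):.1f} m, "
          f"car {distance_m(50, t):.1f} m")
# reaction 0.1s: pedestrian 0.3 m, car 1.4 m
# reaction 0.5s: pedestrian 1.7 m, car 6.9 m
```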

Gehrig and Davide Scaramuzza, a professor at the University of Zurich and a co-author on the study, aimed to shorten those reaction times by bringing the perceptual and computational latencies down.

The most straightforward way to lower the former was using standard high-speed cameras that simply register more frames per second. But even with a 30-45 fps camera, a self-driving car would generate nearly 40 terabytes of data per hour. Fitting something that would significantly cut the perceptual latency, like a 5,000 fps camera, would overwhelm a car’s onboard computer in an instant—the computational latency would go through the roof.

So, the Swiss team used something called an “event camera,” which mimics the way biological eyes work. “Compared to a frame-based video camera, which records dense images at a fixed frequency—frames per second—event cameras contain independent smart pixels that only measure brightness changes,” explains Gehrig. Each of these pixels starts with a set brightness level. When the change in brightness exceeds a certain threshold, the pixel registers an event and sets a new baseline brightness level. All the pixels in the event camera are doing that continuously, with each registered event manifesting as a point in an image.
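A minimal simulation of a single event-camera pixel shows the behavior Gehrig describes; the sample values and threshold below are made up, and real event cameras typically work on log intensity, but the mechanism is the same: slow drift produces nothing, while a change past the threshold fires an event and resets the baseline.

```python
def pixel_events(samples, threshold=0.2):
    """Toy model of one event-camera pixel: emit (time, polarity) events
    whenever brightness moves past `threshold` from the last baseline."""
    baseline = samples[0]
    events = []
    for t, brightness in enumerate(samples[1:], start=1):
        delta = brightness - baseline
        if abs(delta) >= threshold:
            events.append((t, 1 if delta > 0 else -1))
            baseline = brightness  # reset the reference level after an event
    return events


# Slow drift (0.5 -> 0.54) is silent; the jump to 0.9 and the drop to 0.3
# each produce exactly one event.
print(pixel_events([0.5, 0.52, 0.54, 0.9, 0.9, 0.3]))  # [(3, 1), (5, -1)]
```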

This makes event cameras particularly good at detecting high-speed movement and allows them to do so using far less data. The problem with putting them in cars has been that they had trouble detecting things that moved slowly or didn’t move at all relative to the camera. To solve that, Gehrig and Scaramuzza went for a hybrid system, where an event camera was combined with a traditional one.



“So violated”: Wyze cameras leak footage to strangers for 2nd time in 5 months

Wyze's Cam V3 Pro indoor/outdoor smart camera mounted outside


Wyze cameras experienced a glitch on Friday that gave 13,000 customers access to images and, in some cases, video, from Wyze cameras that didn’t belong to them. The company claims 99.75 percent of accounts weren’t affected, but for some, that revelation doesn’t eradicate feelings of “disgust” and concern.

Wyze claims that an outage on Friday left customers unable to view camera footage for hours. Wyze has blamed the outage on a problem with an undisclosed Amazon Web Services (AWS) partner but hasn’t provided details.

Monday morning, Wyze sent emails to customers, including those Wyze says weren’t affected, informing them that the outage led to 13,000 people being able to access data from strangers’ cameras, as reported by The Verge.

Per Wyze’s email:

We can now confirm that as cameras were coming back online, about 13,000 Wyze users received thumbnails from cameras that were not their own and 1,504 users tapped on them. Most taps enlarged the thumbnail, but in some cases an Event Video was able to be viewed. …

According to Wyze, while it was trying to bring cameras back online from Friday’s outage, users reported seeing thumbnails and Event Videos that weren’t from their own cameras. Wyze’s emails added:

The incident was caused by a third-party caching client library that was recently integrated into our system. This client library received unprecedented load conditions caused by devices coming back online all at once. As a result of increased demand, it mixed up device ID and user ID mapping and connected some data to incorrect accounts.
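Wyze hasn’t published code, but the class of bug it describes—a cache returning one user’s data to another—often comes down to an under-scoped cache key. A hypothetical sketch (all names and the lookup logic are invented for illustration):

```python
# Hypothetical illustration of an under-scoped cache key. If entries are
# keyed by device alone, whichever user's request populates an entry
# serves that user's data to everyone who hits the same device ID.
cache = {}


def thumbnail_for(user_id: str, device_id: str) -> str:
    # BUG: the key omits user_id, so the user-to-device relationship is
    # never checked on a cache hit.
    key = device_id
    if key not in cache:
        cache[key] = f"thumbnail(owner={user_id}, device={device_id})"
    return cache[key]


print(thumbnail_for("alice", "cam-1"))  # alice's request fills the entry
print(thumbnail_for("bob", "cam-1"))    # bob receives alice's thumbnail
```

Scoping the key as `(user_id, device_id)`—or validating ownership on every hit, as Wyze says it now does by bypassing the cache for user-device checks—prevents this failure mode.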

In response to customers reporting that they were viewing images from strangers’ cameras, Wyze said it blocked customers from using the Events tab, then made an additional verification layer required to access the Wyze app’s Event Video section. Wyze co-founder and CMO David Crosby also said Wyze logged out people who had used the Wyze app on Friday in order to reset tokens.

Wyze’s emails also said the company modified its system “to bypass caching for checks on user-device relationships until [it identifies] new client libraries that are thoroughly stress tested for extreme events” like the one that occurred on Friday.



Novel camera system lets us see the world through eyes of birds and bees

A fresh perspective —

It captures natural animal-view moving images with over 90 percent accuracy.

A new camera system and software package allows researchers and filmmakers to capture animal-view videos. Credit: Vasas et al., 2024.

Who among us hasn’t wondered about how animals perceive the world, which is often different from how humans do so? There are various methods by which scientists, photographers, filmmakers, and others attempt to reconstruct, say, the colors that a bee sees as it hunts for a flower ripe for pollinating. Now an interdisciplinary team has developed an innovative camera system that is faster and more flexible in terms of lighting conditions than existing systems, allowing it to capture moving images of animals in their natural setting, according to a new paper published in the journal PLoS Biology.

“We’ve long been fascinated by how animals see the world. Modern techniques in sensory ecology allow us to infer how static scenes might appear to an animal,” said co-author Daniel Hanley, a biologist at George Mason University in Fairfax, Virginia. “However, animals often make crucial decisions on moving targets (e.g., detecting food items, evaluating a potential mate’s display, etc.). Here, we introduce hardware and software tools for ecologists and filmmakers that can capture and display animal-perceived colors in motion.”

Per Hanley and his co-authors, different animal species possess unique sets of photoreceptors that are sensitive to a wide range of wavelengths, from ultraviolet to the infrared, dependent on each animal’s specific ecological needs. Some animals can even detect polarized light. So every species will perceive color a bit differently. Honeybees and birds, for instance, are sensitive to UV light, which isn’t visible to human eyes. “As neither our eyes nor commercial cameras capture such variations in light, wide swaths of visual domains remain unexplored,” the authors wrote. “This makes false color imagery of animal vision powerful and compelling.”

However, the authors contend that current techniques for producing false color imagery can’t quantify the colors animals see while in motion, an important factor since movement is crucial to how different animals communicate and navigate the world around them via color appearance and signal detection. Traditional spectrophotometry, for instance, relies on object-reflected light to estimate how a given animal’s photoreceptors will process that light, but it’s a time-consuming method, and much spatial and temporal information is lost.

Peacock feathers through eyes of four different animals: (a) a peafowl; (b) humans; (c) honeybees; and (d) dogs. Credit: Vasas et al., 2024.

Multispectral photography takes a series of photos across various wavelengths (including UV and infrared) and stacks them into different color channels to derive camera-independent measurements of color. This method trades some accuracy for better spatial information and is well-suited for studying animal signals, for instance, but it only works on still objects, so temporal information is lacking.

That’s a shortcoming because “animals present and perceive signals from complex shapes that cast shadows and generate highlights,” the authors wrote. “These signals vary under continuously changing illumination and vantage points. Information on this interplay among background, illumination, and dynamic signals is scarce. Yet it forms a crucial aspect of the ways colors are used, and therefore perceived, by free-living organisms in natural settings.”

So Hanley and his co-authors set out to develop a camera system capable of producing high-precision animal-view videos that capture the full complexity of visual signals as they would be perceived by an animal in a natural setting. They combined existing methods of multispectral photography with new hardware and software designs. The camera records video in four color channels simultaneously (blue, green, red, and UV). Once that data has been processed into “perceptual units,” the result is an accurate video of how a colorful scene would be perceived by various animals, based on what we know about which photoreceptors they possess. The team’s system predicts the perceived colors with 92 percent accuracy. The cameras are commercially available, and the software is open source so that others can freely use and build on it.
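The “perceptual units” step can be sketched as a linear mapping from the camera’s four channels to an animal’s photoreceptor responses. The sensitivity weights below are invented for illustration; the actual system uses measured photoreceptor sensitivities for each species.

```python
# Hypothetical sensitivity matrix mapping camera channels (B, G, R, UV)
# to a honeybee's three photoreceptor classes (UV, blue, green). These
# weights are made up; real values come from measured sensitivity curves.
BEE_SENSITIVITY = [
    [0.05, 0.00, 0.00, 0.95],  # UV receptor
    [0.85, 0.10, 0.00, 0.05],  # blue receptor
    [0.15, 0.80, 0.05, 0.00],  # green receptor
]


def to_perceptual_units(pixel):
    """Map one (B, G, R, UV) camera pixel to estimated receptor catches."""
    return [sum(w * c for w, c in zip(row, pixel)) for row in BEE_SENSITIVITY]


# A UV-rich pixel drives the bee's UV receptor hardest.
print(to_perceptual_units([0.2, 0.4, 0.1, 0.9]))
```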

The video at the top of this article depicts the colors perceived by honeybees watching fellow bees foraging and interacting (even fighting) on flowers—an example of the camera system’s ability to capture behavior in a natural setting. Below, Hanley applies UV-blocking sunscreen in the field. His light-toned skin looks roughly the same in human vision and honeybee false color vision “because skin reflectance increases progressively at longer wavelengths,” the authors wrote.
