AI


This photo got 3rd in an AI art contest—then its human photographer came forward

Say cheese —

Humans pretending to be machines isn’t exactly a victory for the creative spirit.

To be fair, I wouldn’t put it past an AI model to forget the flamingo’s head.

A juried photography contest has disqualified one of the images that was originally picked as a top three finisher in its new AI art category. The reason for the disqualification? The photo was actually taken by a human and not generated by an AI model.

The 1839 Awards launched last year as a way to “honor photography as an art form,” with a panel of experienced judges who work with photos at The New York Times, Christie’s, and Getty Images, among others. The contest rules placed AI images in their own category to separate the work of increasingly impressive image generators from that of “those who use the camera as their artistic medium,” as the 1839 Awards site puts it.

For the non-AI categories, the 1839 Awards rules note that they “reserve the right to request proof of the image not being generated by AI as well as for proof of ownership of the original files.” Apparently, though, the awards did not request any corresponding proof that submissions in the AI category were generated by AI.

The 1839 Awards winners page for the “AI” category, before Astray’s photo was disqualified.

Because of this, the photographer, who goes by the pen name Miles Astray, was able to enter his photo “F L A M I N G O N E” into that AI-generated category, where it was shortlisted and then picked for third place over plenty of other entries that were not made by a human holding a camera. The photo also won the People’s Choice Award for the AI category after Astray publicly lobbied his social media followers to vote for it multiple times.

Making a statement

On his website, Astray tells the story of a 5 am photo shoot in Aruba where he captured the photo of a flamingo that appears to have lost its head. Astray said he entered the photo in the AI category “to prove that human-made content has not lost its relevance, that Mother Nature and her human interpreters can still beat the machine, and that creativity and emotion are more than just a string of digits.”

That’s not a completely baseless concern. Last year, German artist Boris Eldagsen made headlines after his AI-generated picture “The Electrician” won first prize in the Creative category of the World Photography Organization’s Sony World Photography Award. Eldagsen ended up refusing the prize, writing that he had entered “as a cheeky monkey, to find out if the competitions are prepared for AI images to enter. They are not.”

In a statement provided to press outlets after Astray revealed his deception, the 1839 Awards organizers noted that Astray’s entry was disqualified because it “did not meet the requirements for the AI-generated image category. We understand that was the point, but we don’t want to prevent other artists from their shot at winning in the AI category. We hope this will bring awareness (and a message of hope) to other photographers worried about AI.”

For his part, Astray says his disqualification from the 1839 Awards was “a completely justified and right decision that I expected and support fully.” But he also writes that the work’s initial success at the awards “was not just a win for me but for many creatives out there.”

Even a mediocre human-written comedy special might seem impressive if you thought an AI wrote it.

I’m not sure I buy that interpretation, though. Art isn’t like chess, where the brute force of machine-learning efficiency has made even the best human players relatively helpless. Instead, as conceptual artist Danielle Baskin told Ars when talking about the DALL-E image generator, “all modern AI art has converged on kind of looking like a similar style, [so] my optimistic speculation is that people are hiring way more human artists now.”

The whole situation brings to mind the ostensibly AI-generated George Carlin-style comedy special released earlier this year, which the creators later admitted was written entirely by a human. At the time, I noted how our views of works of art are immediately colored as soon as the “AI generated” label is applied. Maybe you grade the work on a bit of a curve (“Well, it’s not bad for a machine”), or maybe you judge it more harshly for its artificial creation (“It obviously doesn’t have the human touch”).

In any case, reactions to AI artwork are “a reflection of all the fear and promise inherent in computers continuing to encroach on areas we recently thought were exclusively ‘human,’ as well as the economic and philosophical impacts of that trend,” as I wrote when talking about the fake AI Carlin. And those human-centric biases mean we can’t help but use a different eye to judge works of art presented as AI creations.

Entering a human photograph into an AI-generated photo contest says more about how we can exploit those biases than it does about the inherent superiority of man or machine in a field as subjective as art. This isn’t John Henry bravely standing up to a steam engine; it’s Homer Simpson winning a nuclear plant design contest that was not intended for him.



Report: Apple isn’t paying OpenAI for ChatGPT integration into OSes

in the pocket —

Apple thinks pushing OpenAI’s brand to hundreds of millions is worth more than money.

The OpenAI and Apple logos together.

OpenAI / Apple / Benj Edwards

On Monday, Apple announced it would be integrating OpenAI’s ChatGPT AI assistant into upcoming versions of its iPhone, iPad, and Mac operating systems. It paves the way for future third-party AI model integrations, but given Google’s multi-billion-dollar deal with Apple for preferential web search, the OpenAI announcement inspired speculation about who is paying whom. According to a Bloomberg report published Wednesday, Apple considers ChatGPT’s placement on its devices as compensation enough.

“Apple isn’t paying OpenAI as part of the partnership,” writes Bloomberg reporter Mark Gurman, citing people familiar with the matter who wish to remain anonymous. “Instead, Apple believes pushing OpenAI’s brand and technology to hundreds of millions of its devices is of equal or greater value than monetary payments.”

The Bloomberg report states that neither company expects the agreement to generate meaningful revenue in the short term, and in fact, the partnership could burn extra money for OpenAI, because it pays Microsoft to host ChatGPT’s capabilities on its Azure cloud. However, OpenAI could benefit by converting free users to paid subscriptions, and Apple potentially benefits by providing easy, built-in access to ChatGPT during a time when its own in-house LLMs are still catching up.

And there’s another angle at play. Currently, OpenAI offers subscriptions (ChatGPT Plus, Enterprise, Team) that unlock additional features. If users subscribe to OpenAI through the ChatGPT app on an Apple device, the process will reportedly use Apple’s payment platform, which may give Apple a significant cut of the revenue. According to the report, Apple hopes to negotiate additional revenue-sharing deals with AI vendors in the future.

Why OpenAI

The rise of ChatGPT in the public eye over the past 18 months has made OpenAI a power player in the tech industry, allowing it to strike deals with publishers for AI training content—and ensure continued support from Microsoft in the form of investments that trade vital funding and compute for access to OpenAI’s large language model (LLM) technology like GPT-4.

Still, Apple’s choice of ChatGPT as its first external AI integration has led to widespread misunderstanding, especially since Apple buried the lede about its own in-house LLM technology that powers its new “Apple Intelligence” platform.

On Apple’s part, CEO Tim Cook told The Washington Post that it chose OpenAI as its first third-party AI partner because he thinks the company controls the leading LLM technology at the moment: “I think they’re a pioneer in the area, and today they have the best model,” he said. “We’re integrating with other people as well. But they’re first, and I think today it’s because they’re best.”

Apple’s choice also brings risk. OpenAI’s record isn’t spotless, racking up a string of public controversies over the past month that include an accusation from actress Scarlett Johansson that the company intentionally imitated her voice, resignations from a key scientist and safety personnel, the revelation of a restrictive NDA for ex-employees that prevented public criticism, and an accusation of “psychological abuse” against OpenAI CEO Sam Altman made by a former member of the OpenAI board.

Meanwhile, critics concerned about the privacy implications of gathering data to train AI models—including OpenAI foe Elon Musk, who took to X on Monday to spread misconceptions about how the ChatGPT integration might work—worried that the Apple-OpenAI deal might expose personal data to the AI company, although both companies strongly deny that will be the case.

Looking ahead, Apple’s deal with OpenAI is not exclusive, and the company is already in talks to offer Google’s Gemini chatbot as an additional option later this year. Apple has also reportedly held talks with Anthropic (maker of Claude 3) as a potential chatbot partner, signaling its intention to provide users with a range of AI services, much like how the company offers various search engine options in Safari.



Wyoming mayoral candidate wants to govern by AI bot


Victor Miller is running for mayor of Cheyenne, Wyoming, with an unusual campaign promise: If elected, he will not be calling the shots—an AI bot will. VIC, the Virtual Integrated Citizen, is a ChatGPT-based chatbot that Miller created. And Miller says the bot has better ideas—and a better grasp of the law—than many people currently serving in government.

“I realized that this entity is way smarter than me, and more importantly, way better than some of the outward-facing public servants I see,” he says. According to Miller, VIC will make the decisions, and Miller will be its “meat puppet,” attending meetings, signing documents, and otherwise doing the corporeal job of running the city.

But whether VIC—and Victor—will be allowed to run at all is still an open question.

Because it’s not legal for a bot to run for office, Miller says he is technically the one on the ballot, at least on the candidate paperwork filed with the state.

When Miller went to register his candidacy at the county clerk’s office, he says, he “wanted to use Vic without my last name. And so I had read the statute, so it merely said that you have to print what you are generally referred to as. So you know, most people call me Vic. My name is Victor Miller. So on the ballot Vic is short for Victor Miller, the human.”

When Miller came home from filing, he told the then nameless chatbot about it and says it “actually came up with the name Virtual Integrated Citizen.”

In a statement to WIRED, Wyoming Secretary of State Chuck Gray said, “We are monitoring this very closely to ensure uniform application of the Election Code.” Gray said that anyone running for office must be a “qualified elector,” “which necessitates being a real person. Therefore, an AI bot is not a qualified elector.” Gray also sent a letter to the county clerk raising concerns about VIC and suggesting that the clerk reject Miller’s application for candidacy.



Turkish student creates custom AI device for cheating university exam, gets arrested

spy hard —

Elaborate scheme involved hidden camera and an earpiece to hear answers.

A photo illustration of what a shirt-button camera could look like.

Aurich Lawson | Getty Images

On Saturday, Turkish police arrested and detained a prospective university student who is accused of developing an elaborate scheme to use AI and hidden devices to help him cheat on an important entrance exam, Reuters and The Daily Mail report.

The unnamed student is reportedly jailed pending trial after the incident, which took place in the southwestern province of Isparta, where the student was caught behaving suspiciously during the TYT. The TYT is a nationally held university aptitude exam that determines a person’s eligibility to attend a university in Turkey—and cheating on the high-stakes exam is a serious offense.

According to police reports, the student used a camera disguised as a shirt button, connected to AI software via a “router” (possibly a mistranslation of a cellular modem) hidden in the sole of their shoe. The system worked by scanning the exam questions using the button camera, which then relayed the information to an unnamed AI model. The software generated the correct answers and recited them to the student through an earpiece.

A video released by the Isparta police demonstrated how the cheating system functioned. In the video, a police officer scans a question, and the AI software provides the correct answer through the earpiece.

In addition to the student, Turkish police detained another individual for assisting the student during the exam. The police discovered a mobile phone that could allegedly relay spoken sounds to the other person, allowing for two-way communication.

A history of calling on computers for help

The recent arrest recalls other attempts to cheat using wireless communications and computers, such as the famous case of the Eudaemons in the late 1970s. The Eudaemons were a group of physics graduate students from the University of California, Santa Cruz, who developed a wearable computer device designed to predict the outcome of roulette spins in casinos.

The Eudaemons’ device consisted of a shoe with a computer built into it, connected to a timing device operated by the wearer’s big toe. The wearer would click the timer when the ball and the spinning roulette wheel were in a specific position, and the computer would calculate the most likely section of the wheel where the ball would land. This prediction would be transmitted to an earpiece worn by another team member, who would quickly place bets on the predicted section.
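To make the idea concrete, here is a deliberately toy version of that calculation in Python. The section count, deceleration constant, drop-off speed, and the assumption of one full revolution between clicks are all invented for illustration, and the sketch ignores the wheel’s own motion; the Eudaemons’ actual model was considerably more sophisticated.

```python
# Toy sketch of a Eudaemons-style prediction (assumed numbers, greatly simplified):
# from two toe-click timestamps taken as the ball passes a fixed reference point,
# estimate its angular velocity, apply a crude deceleration model, and predict
# which octant of the wheel it will most likely land in.
import math

WHEEL_SECTIONS = 8    # bet on one of 8 octants (illustrative choice)
DECEL = 0.35          # assumed ball deceleration, rad/s^2
STOP_SPEED = 1.0      # assumed speed (rad/s) at which the ball drops off the rim

def predict_octant(t1: float, t2: float, ball_angle_at_t2: float) -> int:
    """Return the octant (0-7) where the ball is predicted to settle."""
    omega = 2 * math.pi / (t2 - t1)  # assumes one full revolution between clicks
    # Angle the ball still travels while slowing from omega down to STOP_SPEED,
    # from the kinematic relation omega_f^2 = omega_i^2 - 2 * DECEL * theta.
    remaining = max(0.0, (omega**2 - STOP_SPEED**2) / (2 * DECEL))
    landing_angle = (ball_angle_at_t2 + remaining) % (2 * math.pi)
    return int(landing_angle / (2 * math.pi / WHEEL_SECTIONS))

# Example: clicks 0.8 s apart, ball currently at 1.2 rad past the reference mark.
print(predict_octant(0.0, 0.8, 1.2))
```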

While the Eudaemons’ plan didn’t involve a university exam, it shows that the urge to call upon remote computational powers greater than oneself is apparently timeless.



Ridiculed Stable Diffusion 3 release excels at AI-generated body horror

unstable diffusion —

Users react to mangled SD3 generations and ask, “Is this release supposed to be a joke?”

An AI-generated image created using Stable Diffusion 3 of a girl lying in the grass.

On Wednesday, Stability AI released weights for Stable Diffusion 3 Medium, an AI image-synthesis model that turns text prompts into AI-generated images. Its arrival has been ridiculed online, however, because it generates images of humans in a way that seems like a step backward from other state-of-the-art image-synthesis models like Midjourney or DALL-E 3. As a result, it can churn out wild anatomically incorrect visual abominations with ease.

A thread on Reddit, titled, “Is this release supposed to be a joke? [SD3-2B],” details the spectacular failures of SD3 Medium at rendering humans, especially human limbs like hands and feet. Another thread, titled, “Why is SD3 so bad at generating girls lying on the grass?” shows similar issues, but for entire human bodies.

Hands have traditionally been a challenge for AI image generators due to a lack of good examples in early training data sets, but more recently, several image-synthesis models seemed to have overcome the issue. In that sense, SD3 appears to be a huge step backward for the image-synthesis enthusiasts who gather on Reddit—especially compared to recent Stability releases like SD XL Turbo in November.

“It wasn’t too long ago that StableDiffusion was competing with Midjourney, now it just looks like a joke in comparison. At least our datasets are safe and ethical!” wrote one Reddit user.

  • An AI-generated image created using Stable Diffusion 3 Medium.

  • An AI-generated image created using Stable Diffusion 3 of a girl lying in the grass.

  • An AI-generated image created using Stable Diffusion 3 that shows mangled hands.

  • An AI-generated image created using Stable Diffusion 3 of a girl lying in the grass.

  • An AI-generated image created using Stable Diffusion 3 that shows mangled hands.

  • An AI-generated SD3 Medium image a Reddit user made with the prompt “woman wearing a dress on the beach.”

  • An AI-generated SD3 Medium image a Reddit user made with the prompt “photograph of a person napping in a living room.”

AI image fans are so far blaming Stable Diffusion 3’s anatomy fails on Stability’s insistence on filtering out adult content (often called “NSFW” content) from the SD3 training data that teaches the model how to generate images. “Believe it or not, heavily censoring a model also gets rid of human anatomy, so… that’s what happened,” wrote one Reddit user in the thread.

Basically, any time a user prompt homes in on a concept that isn’t represented well in the AI model’s training dataset, the image-synthesis model will confabulate its best interpretation of what the user is asking for. And sometimes that can be completely terrifying.

The release of Stable Diffusion 2.0 in 2022 suffered from similar problems in depicting humans well, and AI researchers soon discovered that censoring adult content that contains nudity can severely hamper an AI model’s ability to generate accurate human anatomy. At the time, Stability AI reversed course with SD 2.1 and SD XL, regaining some abilities lost by strongly filtering NSFW content.

Another issue that can occur during model pre-training is that the NSFW filter researchers use to remove adult images from the dataset is sometimes too picky, accidentally removing images that might not be offensive and depriving the model of depictions of humans in certain situations. “[SD3] works fine as long as there are no humans in the picture, I think their improved nsfw filter for filtering training data decided anything humanoid is nsfw,” wrote one Redditor on the topic.

Using a free online demo of SD3 on Hugging Face, we ran prompts and saw similar results to those being reported by others. For example, the prompt “a man showing his hands” returned an image of a man holding up two giant-sized backward hands, although each hand at least had five fingers.

  • A SD3 Medium example we generated with the prompt “A woman lying on the beach.”

  • A SD3 Medium example we generated with the prompt “A man showing his hands.”

    Stability AI

  • A SD3 Medium example we generated with the prompt “A woman showing her hands.”

    Stability AI

  • A SD3 Medium example we generated with the prompt “a muscular barbarian with weapons beside a CRT television set, cinematic, 8K, studio lighting.”

  • A SD3 Medium example we generated with the prompt “A cat in a car holding a can of beer.”

Stability first announced Stable Diffusion 3 in February, and the company plans to make it available in a variety of model sizes. Today’s release is for the “Medium” version, which is a 2 billion-parameter model. In addition to the weights being available on Hugging Face, they are also available for experimentation through the company’s Stability Platform. The weights are available for download and use for free under a non-commercial license only.
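For readers who want to reproduce these experiments locally rather than through the online demo, here is a minimal sketch using Hugging Face’s diffusers library. It assumes the “stabilityai/stable-diffusion-3-medium-diffusers” repository ID, a diffusers release that includes StableDiffusion3Pipeline, an accepted model license on Hugging Face, and a CUDA GPU; treat it as an illustration rather than official Stability AI documentation.

```python
# Illustrative sketch: generate an image with SD3 Medium via diffusers.
# Assumptions: a diffusers version with StableDiffusion3Pipeline, the
# "stabilityai/stable-diffusion-3-medium-diffusers" repo ID, an accepted
# model license on Hugging Face, and a CUDA-capable GPU.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a man showing his hands",   # one of the prompts tried in this article
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("hands.png")
```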

Soon after its February announcement, delays in releasing the SD3 model weights inspired rumors that the release was being held back due to technical issues or mismanagement. Stability AI as a company fell into a tailspin recently with the resignation of its founder and CEO, Emad Mostaque, in March and then a series of layoffs. Just prior to that, three key engineers—Robin Rombach, Andreas Blattmann, and Dominik Lorenz—left the company. And its troubles go back even farther, with news of the company’s dire financial position lingering since 2023.

To some Stable Diffusion fans, the failures with Stable Diffusion 3 Medium are a visual manifestation of the company’s mismanagement—and an obvious sign of things falling apart. Although the company has not filed for bankruptcy, some users made dark jokes about the possibility after seeing SD3 Medium:

“I guess now they can go bankrupt in a safe and ethically [sic] way, after all.”



Apple and OpenAI currently have the most misunderstood partnership in tech

He isn’t using an iPhone, but some people talk to Siri like this.

On Monday, Apple premiered “Apple Intelligence” during a wide-ranging presentation at its annual Worldwide Developers Conference in Cupertino, California. However, the heart of its new tech, an array of Apple-developed AI models, was overshadowed by the announcement of ChatGPT integration into its device operating systems.

Since rumors of the partnership first emerged, we’ve seen confusion on social media about why Apple didn’t develop a cutting-edge GPT-4-like chatbot internally. Despite Apple’s year-long development of its own large language models (LLMs), many perceived the integration of ChatGPT (and opening the door for others, like Google Gemini) as a sign of Apple’s lack of innovation.

“This is really strange. Surely Apple could train a very good competing LLM if they wanted? They’ve had a year,” wrote AI developer Benjamin De Kraker on X. Elon Musk has also been grumbling about the OpenAI deal—and spreading misinformation about it—saying things like, “It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!”

While Apple has developed many technologies internally, it has also never been shy about integrating outside tech when necessary in various ways, from acquisitions to built-in clients—in fact, Siri was initially developed by an outside company. But by making a deal with a company like OpenAI, which has been the source of a string of tech controversies recently, it’s understandable that some people are puzzled by why Apple made the call—and what it might entail for the privacy of their on-device data.

“Our customers want something with world knowledge some of the time”

While Apple Intelligence largely utilizes its own Apple-developed LLMs, Apple also realized that there may be times when some users want to use what the company considers the current “best” existing LLM—OpenAI’s GPT-4 family. In an interview with The Washington Post, Apple CEO Tim Cook explained the decision to integrate OpenAI first:

“I think they’re a pioneer in the area, and today they have the best model,” he said. “And I think our customers want something with world knowledge some of the time. So we considered everything and everyone. And obviously we’re not stuck on one person forever or something. We’re integrating with other people as well. But they’re first, and I think today it’s because they’re best.”

The proposed benefit of Apple integrating ChatGPT into various experiences within iOS, iPadOS, and macOS is that it allows AI users to access ChatGPT’s capabilities without the need to switch between different apps—either through the Siri interface or through Apple’s integrated “Writing Tools.” Users will also have the option to connect their paid ChatGPT account to access extra features.

As an answer to privacy concerns, Apple says that before any data is sent to ChatGPT, the OS asks for the user’s permission, and the entire ChatGPT experience is optional. According to Apple, requests are not stored by OpenAI, and users’ IP addresses are hidden. Apparently, communication with OpenAI servers happens through API calls similar to using the ChatGPT app on iOS, and there is reportedly no deeper OS integration that might expose user data to OpenAI without the user’s permission.

We can only take Apple’s word for it at the moment, of course, and solid details about Apple’s AI privacy efforts will emerge once security experts get their hands on the new features later this year.

Apple’s history of tech integration

So you’ve seen why Apple chose OpenAI. But why look to outside companies for tech? In some ways, Apple building an external LLM client into its operating systems isn’t too different from what it has previously done with streaming video (the YouTube app on the original iPhone), Internet search (Google search integration), and social media (integrated Twitter and Facebook sharing).

The press has positioned Apple’s recent AI moves as Apple “catching up” with competitors like Google and Microsoft in terms of chatbots and generative AI. But playing it slow and cool has long been part of Apple’s M.O.—not necessarily introducing the bleeding edge of technology but improving existing tech through refinement and giving it a better user interface.



AI trained on photos from kids’ entire childhood without their consent


Photos of Brazilian kids—sometimes spanning their entire childhood—have been used without their consent to power AI tools, including popular image generators like Stable Diffusion, Human Rights Watch (HRW) warned on Monday.

This act poses urgent privacy risks to kids and seems to increase risks of non-consensual AI-generated images bearing their likenesses, HRW’s report said.

An HRW researcher, Hye Jung Han, helped expose the problem. She analyzed “less than 0.0001 percent” of LAION-5B, a dataset built from Common Crawl snapshots of the public web. The dataset does not contain the actual photos but includes image-text pairs derived from 5.85 billion images and captions posted online since 2008.

Among those images linked in the dataset, Han found 170 photos of children from at least 10 Brazilian states. These were mostly family photos uploaded to personal and parenting blogs most Internet surfers wouldn’t easily stumble upon, “as well as stills from YouTube videos with small view counts, seemingly uploaded to be shared with family and friends,” Wired reported.

LAION, the German nonprofit that created the dataset, has worked with HRW to remove the links to the children’s images in the dataset.

That may not completely resolve the problem, though. HRW’s report warned that the removed links are “likely to be a significant undercount of the total amount of children’s personal data that exists in LAION-5B.” Han told Wired that she fears that the dataset may still be referencing personal photos of kids “from all over the world.”

Removing the links also does not remove the images from the public web, where they can still be referenced and used in other AI datasets, particularly those relying on Common Crawl, LAION’s spokesperson, Nate Tyler, told Ars.

“This is a larger and very concerning issue, and as a nonprofit, volunteer organization, we will do our part to help,” Tyler told Ars.

Han told Ars that “Common Crawl should stop scraping children’s personal data, given the privacy risks involved and the potential for new forms of misuse.”

According to HRW’s analysis, many of the Brazilian children’s identities were “easily traceable,” due to children’s names and locations being included in image captions that were processed when building the LAION dataset.

And at a time when middle and high school-aged students are at greater risk of being targeted by bullies or bad actors turning “innocuous photos” into explicit imagery, it’s possible that AI tools may be better equipped to generate AI clones of kids whose images are referenced in AI datasets, HRW suggested.

“The photos reviewed span the entirety of childhood,” HRW’s report said. “They capture intimate moments of babies being born into the gloved hands of doctors, young children blowing out candles on their birthday cake or dancing in their underwear at home, students giving a presentation at school, and teenagers posing for photos at their high school’s carnival.”

There is less risk that the Brazilian kids’ photos are currently powering AI tools since “all publicly available versions of LAION-5B were taken down” in December, Tyler told Ars. That decision came out of an “abundance of caution” after a Stanford University report “found links in the dataset pointing to illegal content on the public web,” Tyler said, including 3,226 suspected instances of child sexual abuse material.

Han told Ars that “the version of the dataset that we examined pre-dates LAION’s temporary removal of its dataset in December 2023.” The dataset will not be available again until LAION determines that all flagged illegal content has been removed.

“LAION is currently working with the Internet Watch Foundation, the Canadian Centre for Child Protection, Stanford, and Human Rights Watch to remove all known references to illegal content from LAION-5B,” Tyler told Ars. “We are grateful for their support and hope to republish a revised LAION-5B soon.”

In Brazil, “at least 85 girls” have reported classmates harassing them by using AI tools to “create sexually explicit deepfakes of the girls based on photos taken from their social media profiles,” HRW reported. Once these explicit deepfakes are posted online, they can inflict “lasting harm,” HRW warned, potentially remaining online for their entire lives.

“Children should not have to live in fear that their photos might be stolen and weaponized against them,” Han said. “The government should urgently adopt policies to protect children’s data from AI-fueled misuse.”

Ars could not immediately reach Stable Diffusion maker Stability AI for comment.



Apple’s AI promise: “Your data is never stored or made accessible by Apple”

…and throw away the key —

And publicly reviewable server code means experts can “verify this privacy promise.”

Apple Senior VP of Software Engineering Craig Federighi announces “Private Cloud Compute” at WWDC 2024.

Apple

With most large language models being run on remote, cloud-based server farms, some users have been reluctant to share personally identifiable and/or private data with AI companies. In its WWDC keynote today, Apple stressed that the new “Apple Intelligence” system it’s integrating into its products will use a new “Private Cloud Compute” to ensure any data processed on its cloud servers is protected in a transparent and verifiable way.

“You should not have to hand over all the details of your life to be warehoused and analyzed in someone’s AI cloud,” Apple Senior VP of Software Engineering Craig Federighi said.

Trust, but verify

Part of what Apple calls “a brand new standard for privacy and AI” is achieved through on-device processing. Federighi said “many” of Apple’s generative AI models can run entirely on a device powered by an A17+ or M-series chip, eliminating the risk of sending your personal data to a remote server.

When a bigger, cloud-based model is needed to fulfill a generative AI request, though, Federighi stressed that it will “run on servers we’ve created especially using Apple silicon,” which allows for the use of security tools built into the Swift programming language. The Apple Intelligence system “sends only the data that’s relevant to completing your task” to those servers, Federighi said, rather than giving blanket access to the entirety of the contextual information the device has access to.

And Apple says that minimized data is not going to be saved for future server access or used to further train Apple’s server-based models, either. “Your data is never stored or made accessible by Apple,” Federighi said. “It’s used exclusively to fill your request.”

But you don’t just have to trust Apple on this score, Federighi claimed. That’s because the server code used by Private Cloud Compute will be publicly accessible, meaning that “independent experts can inspect the code that runs on these servers to verify this privacy promise.” The entire system has been set up cryptographically so that Apple devices “will refuse to talk to a server unless its software has been publicly logged for inspection.”
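Apple hasn’t published implementation details yet, so the fragment below is only a conceptual sketch of that “refuse to talk unless publicly logged” idea, with an invented log format and hash scheme; it is not Apple’s actual protocol.

```python
# Conceptual illustration only -- not Apple's Private Cloud Compute protocol.
# The general idea: before sending anything, the client checks that the server's
# attested software measurement appears in a published transparency log.
import hashlib

# Hypothetical: measurements a verifier has already downloaded from a public log.
PUBLIC_LOG = {
    hashlib.sha256(b"pcc-server-build-2024.06.10").hexdigest(),
}

def server_is_trusted(attested_build: bytes) -> bool:
    """Accept the server only if its build measurement was publicly logged."""
    return hashlib.sha256(attested_build).hexdigest() in PUBLIC_LOG

def send_request(attested_build: bytes, payload: dict) -> dict:
    if not server_is_trusted(attested_build):
        raise ConnectionRefusedError("server software not found in public log")
    # ... transmit only the task-relevant payload to the verified server ...
    return {"status": "sent", "fields": list(payload)}
```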

While the keynote speech was light on details for the moment, the focus on privacy during the presentation shows that Apple is at least prioritizing security concerns in its messaging as it wades into the generative AI space for the first time. We’ll see what security experts have to say when these servers and their code are made publicly available in the near future.



iOS 18 adds Apple Intelligence, customizations, and makes Android SMS nicer

Apple WWDC 2024 —

Mail gets categories, Messages gets more tapbacks, and apps can now be locked.

Hands manipulating the Control Center on an iPhone

Apple

The biggest feature in iOS 18, the one that affects the most people, was a single item in a comma-stuffed sentence by Apple software boss Craig Federighi: “Support for RCS.”

As we noted when Apple announced its support for “RCS Universal Profile,” a kind of minimum viable cross-device rich messaging, iPhone users getting RCS means SMS chains with Android users “will be slightly less awful.” SMS messages will soon have read receipts, higher-quality media sending, and typing indicators, along with better security. And RCS messages can go over Wi-Fi when you don’t have a cellular signal. Apple is certainly downplaying a major cross-platform compatibility upgrade, but it’s a notable quality-of-life boost.

  • Prioritized notifications through Apple Intelligence

  • Sending a friend an AI-generated image of them holding a birthday cake, which is not exactly the future we all envisioned for 2024, but here we are.

    Apple

  • Example of a query that a supposedly now context-aware Siri can tackle.

  • Asking Siri “When is my mom’s flight landing,” followed by “What is our lunch plan?” can pull in data from multiple apps for an answer.

Apple Intelligence, the new Siri, and the iPhone

iOS 18 is one of the major beneficiaries of Apple’s AI rollout, dubbed “Apple Intelligence.” Apple Intelligence promises to help iPhone users create and understand language and images, with the proper context from your phone’s apps: photos, calendar, email, messages, and more.

Some of the suggested AI offerings include:

  • Auto-prioritizing notifications
  • Generating an AI image of people when you wish them a happy birthday.
  • Using Maps, Calendar, and an email with a meeting update to figure out if a work meeting change will make Federighi miss his daughter’s recital.

Many of the models needed to respond to your requests can be run on the device, Apple claims. For queries that need to go to remote servers, Apple relies on “Private Cloud Compute.” Apple has built its own servers, running on Apple Silicon, to handle requests that need more computational power. The company claims that your phone sends only the data necessary, that the data is never stored, and that independent researchers can verify the software running on Apple’s servers.

Siri is getting AI-powered upgrades across all platforms, including iOS. Apple says that Siri now understands more context in your questions to it. It will have awareness of what’s on your screen, so you could say “Add this address to his contact card” while messaging. You could ask it to “take a light-trail effect photo” from the camera. And “personal context” was repeatedly highlighted, including requests to find things people sent you, add your license number to a form (from an old ID picture), or ask “When is my mom’s flight landing?”

The non-AI things coming in iOS 18

A whole bunch of little boosts to iOS 18 announced by Apple.

On the iPhone itself, iOS 18 icons will change their look when in dark mode, and you can customize the look of compatible icons. Control Center, the pull-down menu in the top-right corner, now has multiple swipe-accessible controls, accessed through a strange-until-you’re-used-to-it long continuous swipe from the top. Developers are also getting access to the Control Center, so they can add their own apps’ controls. The lock screen will also get more customization, letting you swap out the standard flashlight and camera buttons for other items you prefer.

Privacy got some attention, too. Apps can be locked, such that Face ID, Touch ID, or a passcode is necessary to open them. Apps can also be hidden and have their data prevented from showing up in notifications, searches, or other streams. New controls also limit the access you may grant to apps for contacts, network, and devices.

Messages will have “a huge year,” according to Apple. Tapbacks (instant reactions) can now include any emoji on the phone. Messages can be scheduled for later sending, text can be formatted, and there are “text effects” that do things like zoom in on the word “MAJOR” or make “Blown away” explode off the screen. And “Messages via satellite” is now available for phones that have satellite access, with end-to-end encryption.

Here’s a Messages upgrade that is absolutely going to surprise everybody when they forget about it in four months and then it shows up in a weird message.

The Mail app gets on-device categorization with Gmail-like labels like “Primary,” “Transactions,” “Updates,” and “Promotions.” Mail can also show you all the emails you get from certain businesses, such as receipts and tickets.

The Maps app is getting trail routes for US National Parks. Wallet now lets you “Tap to Cash,” sending money between phones in close proximity. Journal can now log your state of mind, track your goals, track streaks, and log “other fun stats.”

Photo libraries are getting navigation upgrades, with screenshots, receipts, and other banal photos automatically filtered out from gallery scrolls. There’s some automatic categorization of trips, days, and events. And, keeping with the theme of iOS 18, you can customize and reorder the collections and features Photos shows you when you browse through it.

This is a developing story and this post will be updated with new information.



Report: New “Apple Intelligence” AI features will be opt-in by default

“apple intelligence,” i see what you did there —

Apple reportedly plans to announce its first big wave of AI features at WWDC.


Apple

Apple’s Worldwide Developers Conference kicks off on Monday, and per usual, the company is expected to detail most of the big new features in this year’s updates to iOS, iPadOS, macOS, and all of Apple’s other operating systems.

The general consensus is that Apple plans to use this year’s updates to integrate generative AI into its products for the first time. Bloomberg’s Mark Gurman has a few implementation details that show how Apple’s approach will differ somewhat from Microsoft’s or Google’s.

Gurman says that the “Apple Intelligence” features will include an OpenAI-powered chatbot, but it will otherwise focus on “features with broad appeal” rather than “whiz-bang technology like image and video generation.” These include summaries for webpages, meetings, and missed notifications; a revamped version of Siri that can control apps in a more granular way; Voice Memos transcription; image enhancement features in the Photos app; suggested replies to text messages; automated sorting of emails; and the ability to “create custom emoji characters on the fly that represent phrases or words as they’re being typed.”

Apple also reportedly hopes to differentiate its AI push by implementing it in a more careful, privacy-focused way. The new features will use the Neural Engine available in newer devices for on-device processing where possible (Gurman says that only Apple’s A17 Pro and the M-series chips will be capable of supporting all the local processing features, though all of Apple’s recent chips feature some flavor of Neural Engine). And where Apple does use the cloud for AI processing, the company will apparently promise that user information isn’t being “sold or read” and is not being used to “build user profiles.”

Apple’s new AI features will also be opt-in by default, whereas Microsoft and Google have generally enabled features like the Copilot chatbot or AI Overviews by default whether users asked for them or not.

Looking beyond AI, we can also expect the typical grab bag of small- to medium-sized features in all of Apple’s software updates. These reportedly include reworked Control Center and Settings apps, emoji responses and RCS messaging support in the Messages app, a standalone password manager app, Calculator for the iPad, and a handful of other things. Gurman doesn’t expect Apple to announce any hardware at the event, though a number of Macs are past due for an M3- or M4-powered refresh.

Apple’s WWDC keynote happens on June 10 at 1 pm Eastern and can be streamed from Apple’s developer website.



Outcry from big AI firms over California AI “kill switch” bill

A finger poised over an electrical switch.

Artificial intelligence heavyweights in California are protesting against a state bill that would force technology companies to adhere to a strict safety framework including creating a “kill switch” to turn off their powerful AI models, in a growing battle over regulatory control of the cutting-edge technology.

The California Legislature is considering proposals that would introduce new restrictions on tech companies operating in the state, including the three largest AI start-ups, OpenAI, Anthropic, and Cohere, as well as Big Tech companies such as Meta that run large language models.

The bill, passed by the state’s Senate last month and set for a vote from its general assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “a hazardous capability,” such as creating biological or nuclear weapons or aiding cybersecurity attacks.

Developers would be required to report on their safety testing and introduce a so-called kill switch to shut down their models, according to the proposed Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act.

But the law has become the focus of a backlash from many in Silicon Valley because of claims it will force AI start-ups to leave the state and prevent platforms such as Meta from operating open source models.

“If someone wanted to come up with regulations to stifle innovation, one could hardly do better,” said Andrew Ng, a renowned computer scientist who led AI projects at Alphabet’s Google and China’s Baidu, and who sits on Amazon’s board. “It creates massive liabilities for science-fiction risks, and so stokes fear in anyone daring to innovate.”



DuckDuckGo offers “anonymous” access to AI chatbots through new service

anonymous confabulations —

DDG offers LLMs from OpenAI, Anthropic, Meta, and Mistral for factually iffy conversations.

DuckDuckGo's AI Chat promotional image.

DuckDuckGo

On Thursday, DuckDuckGo unveiled a new “AI Chat” service that allows users to converse with four mid-range large language models (LLMs) from OpenAI, Anthropic, Meta, and Mistral in an interface similar to ChatGPT while attempting to preserve privacy and anonymity. While the AI models involved can output inaccurate information readily, the site allows users to test different mid-range LLMs without having to install anything or sign up for an account.

DuckDuckGo’s AI Chat currently features access to OpenAI’s GPT-3.5 Turbo, Anthropic’s Claude 3 Haiku, and two open source models, Meta’s Llama 3 and Mistral’s Mixtral 8x7B. The service is currently free to use within daily limits. Users can access AI Chat through the DuckDuckGo search engine, direct links to the site, or by using “!ai” or “!chat” shortcuts in the search field. AI Chat can also be disabled in the site’s settings for users with accounts.

According to DuckDuckGo, chats on the service are anonymized, with metadata and IP address removed to prevent tracing back to individuals. The company states that chats are not used for AI model training, citing its privacy policy and terms of use.

“We have agreements in place with all model providers to ensure that any saved chats are completely deleted by the providers within 30 days,” says DuckDuckGo, “and that none of the chats made on our platform can be used to train or improve the models.”
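DuckDuckGo hasn’t released its proxy code, so the snippet below is only a rough sketch of the general approach the company describes: a relay that forwards the chat text upstream while deliberately dropping anything that could identify the user. The provider URL, model name, and payload shape are hypothetical placeholders, not DuckDuckGo’s or any provider’s actual API.

```python
# Illustrative only: a toy "anonymizing relay" in the spirit of what DuckDuckGo
# describes -- not its implementation. URL, model name, and payload shape are
# hypothetical placeholders.
import json
import urllib.request

PROVIDER_URL = "https://api.example-llm-provider.com/v1/chat"  # hypothetical

def forward_anonymized(user_prompt: str) -> str:
    """Send only the chat text upstream; drop anything that could identify the user."""
    payload = {
        "model": "mid-range-llm",  # hypothetical model name
        "messages": [{"role": "user", "content": user_prompt}],
    }
    req = urllib.request.Request(
        PROVIDER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Deliberately no cookies, no user ID, and no X-Forwarded-For header:
            # the provider sees only the relay, not the person behind it.
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]  # assumed response shape
```

The key point is simply what never gets forwarded: no account identifier, no cookies, and no originating IP address.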

An example of DuckDuckGo AI Chat with GPT-3.5 answering a silly question in an inaccurate way.

Benj Edwards

However, the privacy experience is not bulletproof because, in the case of GPT-3.5 and Claude Haiku, DuckDuckGo is required to send a user’s inputs to remote servers for processing over the Internet. Given certain inputs (e.g., “Hey, GPT, my name is Bob, and I live on Main Street, and I just murdered Bill”), a user could still potentially be identified if such an extreme need arose.

While the service appears to work well for us, there’s a question about its utility. For example, while GPT-3.5 initially wowed people when it launched with ChatGPT in 2022, it also confabulated a lot—and it still does. GPT-4 was the first major LLM to get confabulations under control to a point where the bot became more reasonably useful for some tasks (though this itself is a controversial point), but that more capable model isn’t present in DuckDuckGo’s AI Chat. Also missing are similar GPT-4-level models like Claude Opus or Google’s Gemini Ultra, likely because they are far more expensive to run. DuckDuckGo says it may roll out paid plans in the future, and those may include higher daily usage limits or access to “more advanced models.”

It’s true that the other three models generally (and subjectively) surpass GPT-3.5 in capability and tend to hallucinate less, but they can still make things up, too. With DuckDuckGo AI Chat as it stands, the company is left with a chatbot novelty with a decent interface and the promise that your conversations with it will remain private. But what use are fully private AI conversations if they are full of errors?

Mixtral 8x7B on DuckDuckGo AI Chat when asked about the author. Everything in red boxes is sadly incorrect, but it provides an interesting fantasy scenario. It’s a good example of an LLM plausibly filling gaps between concepts that are underrepresented in its training data, called confabulation. For the record, Llama 3 gives a more accurate answer.

Benj Edwards

As DuckDuckGo itself states in its privacy policy, “By its very nature, AI Chat generates text with limited information. As such, Outputs that appear complete or accurate because of their detail or specificity may not be. For example, AI Chat cannot dynamically retrieve information and so Outputs may be outdated. You should not rely on any Output without verifying its contents using other sources, especially for professional advice (like medical, financial, or legal advice).”

So, have fun talking to bots, but tread carefully. They’ll easily “lie” to your face because they don’t understand what they are saying and are tuned to output statistically plausible information, not factual references.
