Author name: Kris Guyer


This photo got 3rd in an AI art contest—then its human photographer came forward

Say cheese —

Humans pretending to be machines isn’t exactly a victory for the creative spirit.

To be fair, I wouldn’t put it past an AI model to forget the flamingo’s head.

A juried photography contest has disqualified one of the images that was originally picked as a top three finisher in its new AI art category. The reason for the disqualification? The photo was actually taken by a human and not generated by an AI model.

The 1839 Awards launched last year as a way to “honor photography as an art form,” with a panel of experienced judges who work with photos at The New York Times, Christie’s, and Getty Images, among others. The contest rules sought to segregate AI images into their own category as a way to separate out the work of increasingly impressive image generators from “those who use the camera as their artistic medium,” as the 1839 Awards site puts it.

For the non-AI categories, the 1839 Awards rules note that they “reserve the right to request proof of the image not being generated by AI as well as for proof of ownership of the original files.” Apparently, though, the awards did not request any corresponding proof that submissions in the AI category were generated by AI.

The 1839 Awards winners page for the “AI” category, before Astray’s photo was disqualified.

Because of this, the photographer, who goes by the pen name Miles Astray, was able to enter his photo “F L A M I N G O N E” into that AI-generated category, where it was shortlisted and then picked for third place over plenty of other entries that were not made by a human holding a camera. The photo also won the People’s Choice Award for the AI category after Astray publicly lobbied his social media followers to vote for it multiple times.

Making a statement

On his website, Astray tells the story of a 5 am photo shoot in Aruba where he captured the photo of a flamingo that appears to have lost its head. Astray said he entered the photo in the AI category “to prove that human-made content has not lost its relevance, that Mother Nature and her human interpreters can still beat the machine, and that creativity and emotion are more than just a string of digits.”

That’s not a completely baseless concern. Last year, German artist Boris Eldagsen made headlines after his AI-generated picture “The Electrician” won first prize in the Creative category of the World Photography Organization’s Sony World Photography Award. Eldagsen ended up refusing the prize, writing that he had entered “as a cheeky monkey, to find out if the competitions are prepared for AI images to enter. They are not.”

In a statement provided to press outlets after Astray revealed his deception, the 1839 Awards organizers noted that Astray’s entry was disqualified because it “did not meet the requirements for the AI-generated image category. We understand that was the point, but we don’t want to prevent other artists from their shot at winning in the AI category. We hope this will bring awareness (and a message of hope) to other photographers worried about AI.”

For his part, Astray says his disqualification from the 1839 Awards was “a completely justified and right decision that I expected and support fully.” But he also writes that the work’s initial success at the awards “was not just a win for me but for many creatives out there.”

Even a mediocre human-written comedy special might seem impressive if you thought an AI wrote it.

I’m not sure I buy that interpretation, though. Art isn’t like chess, where the brute force of machine-learning efficiency has made even the best human players relatively helpless. Instead, as conceptual artist Danielle Baskin told Ars when talking about the DALL-E image generator, “all modern AI art has converged on kind of looking like a similar style, [so] my optimistic speculation is that people are hiring way more human artists now.”

The whole situation brings to mind the ostensibly AI-generated George Carlin-style comedy special released earlier this year, which the creators later admitted was written entirely by a human. At the time, I noted how our views of works of art are immediately colored as soon as the “AI generated” label is applied. Maybe you grade the work on a bit of a curve (“Well, it’s not bad for a machine”), or maybe you judge it more harshly for its artificial creation (“It obviously doesn’t have the human touch”).

In any case, reactions to AI artwork are “a reflection of all the fear and promise inherent in computers continuing to encroach on areas we recently thought were exclusively ‘human,’ as well as the economic and philosophical impacts of that trend,” as I wrote when talking about the fake AI Carlin. And those human-centric biases mean we can’t help but use a different eye to judge works of art presented as AI creations.

Entering a human photograph into an AI-generated photo contest says more about how we can exploit those biases than it does about the inherent superiority of man or machine in a field as subjective as art. This isn’t John Henry bravely standing up to a steam engine; it’s Homer Simpson winning a nuclear plant design contest that was not intended for him.



IV infusion enables editing of the cystic fibrosis gene in lung stem cells

Right gene in the right place —

Approach relies on lipid capsules like those in the mRNA vaccines.

Abstract drawing of a pair of human hands using scissors to cut a DNA strand, with a number of human organs in the background.

The development of gene editing tools, which enable the specific targeting and correction of mutations, holds the promise of allowing us to fix the mutations that cause genetic diseases. However, the technology has been around for a while now—two researchers earned a Nobel Prize for its development in 2020—and there have been only a few cases where gene editing has been used to target diseases.

One of the reasons for that is the challenge of targeting specific cells in a living organism. Many genetic diseases affect only a specific cell type, such as red blood cells in sickle-cell anemia, or a specific tissue. Ideally, we’d like to ensure that enough of the editing takes place in the affected tissue to have an impact, while minimizing editing elsewhere to limit potential side effects. But our ability to do so has been limited. Plus, a lot of the cells affected by genetic diseases are mature and have stopped dividing. So, we either need to repeat the gene editing treatments indefinitely or find a way to target the stem cell population that produces the mature cells.

On Thursday, a US-based research team said that it has performed gene editing experiments targeting a high-profile genetic disease: cystic fibrosis. The technique largely targets the tissue most affected by the disease (the lung), and the editing occurs in the stem cell populations that produce mature lung cells, ensuring that the effect is stable.

Getting specific

The foundation of the new work is the technology that gets the mRNAs of the COVID-19 mRNA vaccines inside cells. mRNAs are large nucleic acid molecules with a lot of charged groups, which makes it difficult for them to cross a membrane to get inside of a cell. To overcome that problem, the researchers package the mRNA inside a bubble of lipids, which can then fuse with cell membranes, dumping the mRNA inside the cell.

This process, as the researchers note, has two very large advantages: We know it works, and we know it’s safe. “More than a billion doses of lipid nanoparticle–mRNA COVID-19 vaccines have been administered intramuscularly worldwide,” they write, “demonstrating high safety and efficacy sustained through repeatable dosing.” (As an aside, it’s interesting to contrast the research community’s view of the mRNA vaccines to the conspiracies that circulate widely among the public.)

There’s one big factor that doesn’t matter for vaccine delivery but does matter for gene editing: Lipid nanoparticles aren’t especially fussy about which cells they deliver their cargo to. So, if you want to target something like blood stem cells, you need to alter the lipid particles in some way to get them to preferentially target the cells of your choice.

There are a lot of ideas on how to do this, but the team behind this new work found a relatively simple one: changing the amount of positively charged lipids on the particle. In 2020, they published a paper in which they describe the development of selective organ targeting (SORT) lipid nanoparticles. By default, many of the lipid particles end up in the liver. But, as the fraction of positively charged lipids increases, the targeting shifts to the spleen and then to the lung.

So, presumably because they knew they could target the lung, the researchers decided to use SORT particles to deliver a gene-editing system specific to cystic fibrosis, which primarily affects that tissue and is caused by mutations in a single gene. While it’s relatively easy to get things into the lung, it’s tough to get them to lung cells, given all the mucus, cilia, and immune cells that are meant to take care of foreign material there.



Retired engineer discovers 55-year-old bug in Lunar Lander computer game code

The world’s oldest feature —

A physics simulation flaw in the text-based 1969 computer game went unnoticed until today.

Illustration of the Apollo lunar lander Eagle over the Moon.

On Friday, a retired software engineer named Martin C. Martin announced that he recently discovered a bug in the original Lunar Lander computer game’s physics code while tinkering with the software. Created by a 17-year-old high school student named Jim Storer in 1969, this primordial game rendered the action only as text status updates on a teletype, but it set the stage for future versions to come.

The legendary game—which Storer developed on a PDP-8 minicomputer in a programming language called FOCAL just months after Neil Armstrong and Buzz Aldrin made their historic moonwalks—allows players to control a lunar module’s descent onto the Moon’s surface. Players must carefully manage their fuel usage to achieve a gentle landing, making critical decisions every ten seconds to burn the right amount of fuel.

In 2009, just short of the 40th anniversary of the first Moon landing, I set out to find the author of the original Lunar Lander game, which was then primarily known as a graphical game, thanks to the graphical version from 1974 and a 1979 Atari arcade title. When I discovered that Storer created the oldest known version as a teletype game, I interviewed him and wrote up a history of the game. Storer later released the source code to the original game, written in FOCAL, on his website.

A scan of printed teletype output from the original Lunar Lander game, provided by Jim Storer.

Jim Storer

Fast forward to 2024, when Martin—an AI expert, game developer, and former postdoctoral associate at MIT—stumbled upon a bug in Storer’s high school code while exploring what he believed was the optimal strategy for landing the module with maximum fuel efficiency—a technique known among Kerbal Space Program enthusiasts as the “suicide burn.” This method involves falling freely to build up speed and then igniting the engines at the last possible moment to slow down just enough to touch down safely. He also tried another approach—a more gentle landing.

“I recently explored the optimal fuel burn schedule to land as gently as possible and with maximum remaining fuel,” Martin wrote on his blog. “Surprisingly, the theoretical best strategy didn’t work. The game falsely thinks the lander doesn’t touch down on the surface when in fact it does. Digging in, I was amazed by the sophisticated physics and numerical computing in the game. Eventually I found a bug: a missing ‘divide by two’ that had seemingly gone unnoticed for nearly 55 years.”

A matter of division

Diagram of launch escape system on top of the Apollo capsule.

NASA

Despite applying what should have been a textbook landing strategy, Martin found that the game inconsistently reported that the lander had missed the Moon’s surface entirely. Intrigued by the anomaly, Martin dug into the game’s source code and discovered that the landing algorithm was based on highly sophisticated physics for its time, including the Tsiolkovsky rocket equation and a Taylor series expansion.

As mentioned in the quote above, the root of the problem was a simple computational oversight—a missing division by two in the formula used to calculate the lander’s trajectory. This seemingly minor error had big consequences, causing the simulation to underestimate the time until the lander reached its lowest trajectory point and miscalculate the landing.
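Storer’s original is written in FOCAL and relies on a Taylor-series expansion, so the snippet below is only a toy illustration in Python, using a plain constant-acceleration model and hypothetical numbers rather than the game’s actual formula. It shows the flavor of the failure: drop the kinematic one-half and a trajectory that really dips below the surface can appear to stay comfortably airborne.

```python
def min_altitude(alt0, v0, a, duration, half=0.5, steps=1000):
    """Lowest altitude reached during a burn, in a toy constant-acceleration model.

    alt0: starting altitude (m), v0: downward speed (m/s),
    a: net upward acceleration from the burn (m/s^2).
    `half` is the kinematic 1/2 factor; pass 1.0 to mimic a missing divide-by-two.
    """
    return min(
        alt0 - v0 * t + half * a * t * t
        for t in (duration * i / steps for i in range(steps + 1))
    )

# Hypothetical numbers, not values from the original game.
correct = min_altitude(100.0, 40.0, 7.0, duration=10.0)            # dips below the surface
buggy = min_altitude(100.0, 40.0, 7.0, duration=10.0, half=1.0)    # never gets close

print(f"with the 1/2: {correct:6.1f} m -> touchdown")
print(f"without it:   {buggy:6.1f} m -> game thinks the lander never reached the surface")
```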

Despite the bug, Martin was impressed that Storer, then a high school senior, managed to incorporate advanced mathematical concepts into his game, a feat that remains impressive even by today’s standards. Martin reached out to Storer himself, and the Lunar Lander author told Martin that his father was a physicist who helped him derive the equations used in the game simulation.

People played and enjoyed Storer’s game for years with the bug in place, which goes to show that realism isn’t always the most important part of a compelling interactive experience. And thankfully for Aldrin and Armstrong, the real Apollo lunar landing experience didn’t suffer from the same issue.

You can read more about Martin’s exciting debugging adventure over on his blog.



Apple punishes women for same behaviors that get men promoted, lawsuit says


Apple has spent years “intentionally, knowingly, and deliberately paying women less than men for substantially similar work,” a proposed class action lawsuit filed in California on Thursday alleged.

A victory for the women suing could mean that more than 12,000 current and former female employees in California collectively claw back potentially millions of dollars in lost wages from an apparently ever-widening wage gap allegedly perpetuated by Apple policies.

The lawsuit was filed by two employees who have each been with Apple for more than a decade, Justina Jong and Amina Salgado. They claimed that Apple violated California employment laws between 2020 and 2024 by unfairly discriminating against California-based female employees in Apple’s engineering, marketing, and AppleCare divisions and “systematically” paying women “lower compensation than men with similar education and experience.”

Apple has allegedly displayed an ongoing bias toward male employees, offering them higher starting salaries and promoting them for the “same behaviors” for which female employees were punished.

Jong, currently a customer/technical training instructor on Apple’s global developer relations/app review team, said that she only became aware of a stark pay disparity by chance.

“One day, I saw a W-2 left on the office printer,” Jong said. “It belonged to my male colleague, who has the same job position. I noticed that he was being paid almost $10,000 more than me, even though we performed substantially similar work. This revelation made me feel terrible.”

But Salgado had long been aware of the problem. Salgado, currently on a temporary assignment as a development manager in the AppleCare division, spent years complaining about her lower wages, prompting Apple internal investigations that never led to salary increases.

Finally, late last year, Salgado’s insistence on fair pay was resolved after Apple hired a third-party firm that concluded she was “paid less than men performing substantially similar work.” Apple subsequently increased her pay rate but dodged responsibility for back pay that Salgado now seeks to recover.

Eve Cervantez, a lawyer for the women suing, said in a press release shared with Ars that these women were put in “a no-win situation.”

“Once women are hired into a lower pay range at Apple, subsequent pay raises or any bonuses are tracked accordingly, meaning they don’t correct the gender pay gap,” Cervantez said. “Instead, they perpetuate and widen the gap because raises and bonuses are based on a percentage of the employee’s base salary.”

Apple did not immediately respond to Ars’ request for comment.


Customer-Centric Marketing for Technology Vendors

In today’s fast-paced, highly competitive market, technology vendors often struggle to connect with their customers on a meaningful level. Traditional marketing approaches, which focus on pushing products and services to a broad audience, are no longer effective. Customers demand more personalized and relevant experiences. Without a customer-centric approach, companies risk losing customer loyalty and market share to competitors who better understand and cater to their customers’ needs.

Historical Context

Marketing has evolved significantly over the decades. In the early 20th century, marketing was primarily product-focused, emphasizing mass production and broad-reaching advertising. As markets became more saturated, the focus shifted to differentiation and brand building in the mid-20th century.

The late 20th and early 21st centuries saw the rise of digital marketing, enabling more targeted and data-driven approaches. However, despite these advancements, many businesses continued to prioritize their products and services over the needs and preferences of their customers.

The advent of the internet and social media further transformed the marketing landscape, giving customers a powerful voice and more choices than ever before. This shift necessitated a more customer-centric approach, but many companies have struggled to fully embrace this change.

Why It’s Critical Now

The importance of customer-centric marketing has never been more pronounced. Today’s consumers are more informed, connected, and empowered. They have higher expectations for personalized experiences and are quick to switch brands if their expectations are not met. Additionally, the rise of digital technologies has created a more competitive environment, where startups and smaller companies can challenge established players by offering superior customer experiences.

COVID-19 has also accelerated the need for customer-centric marketing. The pandemic has fundamentally changed consumer behavior, driving more people online and increasing the demand for seamless digital interactions. Customers now expect brands to understand their unique situations and provide relevant solutions.

Investing in customer-centric marketing is not just a trend; it’s a necessity. Companies that prioritize their customers are better positioned to build long-term loyalty, increase customer lifetime value, and achieve sustainable growth. By truly understanding and addressing customer needs, businesses can differentiate themselves and thrive in an increasingly competitive market.

Practical Strategies for Customer-Centric Marketing

1. Understand Your Customer

First things first, you need to know your customer. Not just demographics, but their pain points, preferences, and behaviors. Start by regularly asking for customer feedback through short, focused surveys to understand their needs and expectations. Additionally, analyze purchase history, website interactions, and social media engagement to gather deeper insights into their behavior. By combining direct feedback with data analysis, you can create a comprehensive profile of your customer that goes beyond basic demographics.

Segment your customers based on industry and role. Each segment will have different pain points, preferences, and behaviors. Understand the unique challenges and trends in each industry you serve. For example, the needs of a healthcare provider will differ significantly from those of a financial services firm. Tailor your understanding to the specific roles within these industries. A CTO might focus on technological innovation, while a CFO might prioritize cost efficiency.

2. Personalize Your Communication

Customers today expect personalization. They want to feel like you understand them. Implementing segmentation allows you to divide your audience into groups based on their behavior and preferences. This enables you to tailor your messages to each group, ensuring relevance and increasing engagement. Utilize dynamic content tools that allow you to change the content of your emails or website based on who is viewing them. This could mean showing different product recommendations or messaging depending on the customer’s past interactions with your brand.

Develop segmented communication strategies that cater to the unique needs of different industries and roles. Customize your messages to address industry-specific challenges. Use industry jargon and case studies relevant to their field. Personalize your communication based on the roles of your customers. For example, send technical insights to IT professionals and strategic overviews to executive leaders.

3. Create Valuable Content

Content is still king, but it needs to be valuable. Focus on providing educational content that helps your customers solve their problems. Blog posts, how-to videos, and webinars can be very effective in this regard. Additionally, share engaging stories that highlight customer success. By making your customers the heroes of your narratives, you not only build trust but also demonstrate real-life applications of your products or services. Valuable content should aim to inform, entertain, and inspire your audience, making your brand a go-to resource.

Content should be tailored to provide value to different industries and roles. Develop content that addresses industry-specific pain points. For example, create whitepapers on compliance for healthcare and financial industries. Generate role-specific content, such as technical guides for IT professionals, financial analyses for CFOs, and strategic trends for CEOs.

4. Be Where Your Customers Are

You need to be present on the platforms your customers use. This could be social media, forums, or even offline events. Engage in social listening to monitor what your customers are talking about and join the conversation where relevant. Providing an omnichannel presence ensures a seamless experience across all touchpoints. Your customers should feel like they’re dealing with the same brand whether they’re on your website, your app, or in your store. This consistency builds trust and reinforces your brand’s reliability.

Ensure your presence on platforms popular in different industries and roles. Participate in industry-specific forums, trade shows, and online communities. Engage on platforms and at events where specific roles are active, such as LinkedIn for professionals and GitHub for developers.

5. Build a Community

People like to feel part of a community. Foster this by creating forums where your customers can interact with each other and your brand. These forums can be online spaces such as social media groups or dedicated sections on your website. Develop engagement programs such as loyalty programs or ambassador programs to reward your most engaged customers. These programs not only incentivize repeat business but also encourage word-of-mouth promotion, as loyal customers are more likely to recommend your brand to others.

Foster a sense of community within each industry and role. Create industry-specific forums or groups where customers can interact. Develop role-specific engagement programs, such as technical meetups for developers or financial strategy workshops for CFOs.

6. Measure and Adapt

Finally, always measure your efforts and be ready to adapt. Regularly check in with your customers to see how they feel about your marketing efforts. Use customer feedback to gauge their satisfaction and areas for improvement. This means looking at key metrics like conversion rates, engagement rates, and customer retention rates. By continuously measuring and adapting your strategies, you ensure that your marketing efforts remain effective and aligned with customer needs.
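To make that concrete, here is a deliberately simple Python sketch of the kind of segment-level scorecard described above. The segment names and counts are placeholders (in practice the inputs would come from your CRM or analytics platform), but the engagement, conversion, and retention calculations are the standard ones.

```python
# Minimal sketch of segment-level marketing metrics. All counts are hypothetical.

segments = {
    ("healthcare", "CTO"): {"visitors": 1200, "engaged": 310, "converted": 42,
                            "customers_start": 80, "customers_retained": 71},
    ("finance", "CFO"):    {"visitors": 950,  "engaged": 180, "converted": 19,
                            "customers_start": 60, "customers_retained": 57},
}

def rates(counts):
    """Compute the three headline rates for one industry/role segment."""
    return {
        "engagement_rate": counts["engaged"] / counts["visitors"],
        "conversion_rate": counts["converted"] / counts["visitors"],
        "retention_rate":  counts["customers_retained"] / counts["customers_start"],
    }

for (industry, role), counts in segments.items():
    summary = ", ".join(f"{name} {value:.1%}" for name, value in rates(counts).items())
    print(f"{industry}/{role}: {summary}")
```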

Validate the effectiveness of your strategies and be ready to adapt based on industry and role-specific feedback. Use analytics tools to track industry-specific performance metrics. Gather role-specific feedback to understand the impact on different positions within your customer base.

By putting your customers at the center of your marketing efforts, you not only meet their needs but also build lasting relationships that drive your business forward. Let’s move beyond the jargon and focus on what truly matters—delivering value to our customers.



Cop busted for unauthorized use of Clearview AI facial recognition resigns

Secret face scans —

Indiana cop easily hid frequent personal use of Clearview AI face scans.


An Indiana cop has resigned after it was revealed that he frequently used Clearview AI facial recognition technology to track down social media users not linked to any crimes.

According to a press release from the Evansville Police Department, this was a clear “misuse” of Clearview AI’s controversial face scan tech, which some US cities have banned over concerns that it gives law enforcement unlimited power to track people in their daily lives.

To help identify suspects, police can scan what Clearview AI describes on its website as “the world’s largest facial recognition network.” The database pools more than 40 billion images collected from news media, mugshot websites, public social media, and other open sources.

But these scans must always be linked to an investigation, and Evansville police chief Philip Smith said that instead, the disgraced cop repeatedly disguised his personal searches by deceptively “utilizing an actual case number associated with an actual incident” to evade detection.

Smith’s department discovered the officer’s unauthorized use after performing an audit before renewing their Clearview AI subscription in March. That audit showed “an anomaly of very high usage of the software by an officer whose work output was not indicative of the number of inquiry searches that they had.”

Another clue to the officer’s abuse of the tool was that most face scans conducted during investigations are “usually live or CCTV images”—shots taken in the wild—Smith said. However, the officer who resigned was mainly searching social media images, which was a red flag.

An investigation quickly “made clear that this officer was using Clearview AI” for “personal purposes,” Smith said, declining to name the officer or verify whether targets of these searches were notified.

As a result, Smith recommended that the department terminate the officer. However, the officer resigned “before the Police Merit Commission could make a final determination on the matter,” Smith said.

Easily dodging Clearview AI’s built-in compliance features

Clearview AI touts the face image network as a public safety resource, promising to help law enforcement make arrests sooner while committing to “ethical and responsible” use of the tech.

On its website, the company says that it understands that “law enforcement agencies need built-in compliance features for increased oversight, accountability, and transparency within their jurisdictions, such as advanced admin tools, as well as user-friendly dashboards, reporting, and metrics tools.”

To “help deter and detect improper searches,” its website says that a case number and crime type is required, and “every agency is required to have an assigned administrator that can see an in-depth overview of their organization’s search history.”

It seems that neither of those safeguards stopped the Indiana cop from repeatedly scanning social media images for undisclosed personal reasons, apparently satisfying the case number and crime type requirement with real case numbers and going unnoticed by his agency’s administrator. The incident could have broader implications in the US, where Clearview AI’s technology has been widely used by police to conduct nearly 1 million searches, CEO Hoan Ton-That told the BBC last year.

In 2022, Ars reported when Clearview AI told investors it had ambitions to collect more than 100 billion face images, ensuring that “almost everyone in the world will be identifiable.” As privacy concerns about the controversial tech mounted, it became hotly debated. Facebook moved to stop the company from scraping faces on its platform, and the ACLU won a settlement that banned Clearview AI from contracting with most businesses. But the US government retained access to the tech, including “hundreds of police forces across the US,” Ton-That told the BBC.

Most law enforcement agencies are hesitant to discuss their Clearview AI tactics in detail, the BBC reported, so it’s often unclear who has access and why. But the Miami Police confirmed that “it uses this software for every type of crime,” the BBC reported.

Now, at least one Indiana police department has confirmed that an officer can sneakily abuse the tech and conduct unapproved face scans with apparent ease.

According to Kashmir Hill—the journalist who exposed Clearview AI’s tech—the disgraced cop was following in the footsteps of “billionaires, Silicon Valley investors, and a few high-wattage celebrities” who got early access to Clearview AI tech in 2020 and considered it a “superpower on their phone, allowing them to put a name to a face and dig up online photos of someone that the person might not even realize were online.”

Advocates have warned that stronger privacy laws are needed to stop law enforcement from abusing Clearview AI’s network, which Hill described as “a Shazam for people.”

Smith said the officer disregarded department guidelines by conducting the improper face scans.

“To ensure that the software is used for its intended purposes, we have put in place internal operational guidelines and adhere to the Clearview AI terms of service,” Smith said. “Both have language that clearly states that this is a tool for official use and is not to be used for personal reasons.”



Musk says he’s winning Tesla shareholder vote on pay plan by “wide margin”

Tesla shareholder vote —

Court battle over pay plan will continue even if Musk wins shareholder vote.

Elon Musk.

Getty Images | Bloomberg

Elon Musk said last night that Tesla shareholders provided enough votes to re-approve his 2018 pay package, which was previously nullified by a Delaware judge. A proposal to transfer Tesla’s state of incorporation from Delaware to Texas also has enough votes to pass, according to a post by Musk.

“Both Tesla shareholder resolutions are currently passing by wide margins!” Musk wrote. His post included charts indicating that both shareholder resolutions had more than enough yes votes to surpass the “guaranteed win” threshold.

The Wall Street Journal notes that the “results provided by Musk are preliminary, and voters can change their votes until the polls close at the meeting on Thursday.” The shareholder meeting is at 3:30 pm Central Time. An official announcement on the results is expected today.

Under a settlement with the Securities and Exchange Commission, Musk is required to get pre-approval from a Tesla securities lawyer for social media posts that may contain information material to the company or its shareholders. Tesla today submitted an SEC filing containing a screenshot of Musk’s X post describing the preliminary results, but the company otherwise did not make an announcement.

Legal uncertainty remains

The vote isn’t the last word on the pay package that was once estimated to be worth $56 billion and more recently valued at $46 billion based on Tesla’s stock price. The pay plan was nullified by a Delaware Court of Chancery ruling in January 2024 after a lawsuit filed by a shareholder.

Judge Kathaleen McCormick ruled that the pay plan was unfair to Tesla’s shareholders, saying the proxy information given to investors before 2018 was materially deficient. McCormick said that “the proxy statement inaccurately described key directors as independent and misleadingly omitted details about the process.”

As the Financial Times wrote, there would still be legal uncertainty even if shareholders re-approve the pay deal today:

In asking shareholders to approve of the same 2018 pay package that was nullified by the Delaware Court of Chancery in January, Tesla is relying on a legal principle known as “ratification,” in which the validity of a corporate action can be cemented by a shareholder vote. Ratification, the company told shareholders in a proxy note earlier this year, “will restore Tesla’s stockholder democracy.”

This instance, however, is the first time a company has tried to leverage that principle after its board was found to have breached its fiduciary duty to approve the deal in the first place.

Even Tesla admits it does not know what happens next. “The [Tesla board] special committee and its advisers noted that they could not predict with certainty how a stockholder vote to ratify the 2018 CEO performance award would be treated under Delaware law in these novel circumstances,” it said in a proxy statement sent to shareholders.

The BBC writes that “legal experts say it is not clear if a court that blocked the deal will accept the re-vote, which is not binding, and allow the company to restore the pay package.”

New lawsuit challenges re-vote

The re-vote was already being challenged in the same Delaware court that nullified the 2018 vote. Donald Ball, who owns 28,245 shares of Tesla stock, last week sued Musk and Tesla in a complaint that alleges the Tesla “Board has not disclosed a complete or fair picture” to shareholders of the impact of re-approving Musk’s pay plan.

That includes “radical tax implications for Tesla that will potentially wipe out Tesla’s pre-tax profits for the last two years,” the lawsuit said. The Ball lawsuit also alleged that “Musk has engaged in strong-arm, coercive tactics to obtain stockholder approval for both the Redomestication Vote and the Ratification Vote.”

Tesla Board Chairperson Robyn Denholm urged shareholders to re-approve the Musk pay plan, suggesting that Musk could leave Tesla or devote less time to the company if the resolution is voted down.



Turkish student creates custom AI device for cheating university exam, gets arrested

spy hard —

Elaborate scheme involved hidden camera and an earpiece to hear answers.

A photo illustration of what a shirt-button camera could look like.

Aurich Lawson | Getty Images

On Saturday, Turkish police arrested and detained a prospective university student who is accused of developing an elaborate scheme to use AI and hidden devices to help him cheat on an important entrance exam, Reuters and The Daily Mail report.

The unnamed student is reportedly jailed pending trial after the incident, which took place in the southwestern province of Isparta, where the student was caught behaving suspiciously during the TYT. The TYT is a nationally held university aptitude exam that determines a person’s eligibility to attend a university in Turkey—and cheating on the high-stakes exam is a serious offense.

According to police reports, the student used a camera disguised as a shirt button, connected to AI software via a “router” (possibly a mistranslation of a cellular modem) hidden in the sole of their shoe. The system worked by scanning the exam questions using the button camera, which then relayed the information to an unnamed AI model. The software generated the correct answers and recited them to the student through an earpiece.

A video released by the Isparta police demonstrated how the cheating system functioned. In the video, a police officer scans a question, and the AI software provides the correct answer through the earpiece.

In addition to the student, Turkish police detained another individual for assisting the student during the exam. The police discovered a mobile phone that could allegedly relay spoken sounds to the other person, allowing for two-way communication.

A history of calling on computers for help

The recent arrest recalls other attempts to cheat using wireless communications and computers, such as the famous case of the Eudaemons in the late 1970s. The Eudaemons were a group of physics graduate students from the University of California, Santa Cruz, who developed a wearable computer device designed to predict the outcome of roulette spins in casinos.

The Eudaemons’ device consisted of a shoe with a computer built into it, connected to a timing device operated by the wearer’s big toe. The wearer would click the timer when the ball and the spinning roulette wheel were in a specific position, and the computer would calculate the most likely section of the wheel where the ball would land. This prediction would be transmitted to an earpiece worn by another team member, who would quickly place bets on the predicted section.

While the Eudaemons’ plan didn’t involve a university exam, it shows that the urge to call upon remote computational powers greater than oneself is apparently timeless.



Ridiculed Stable Diffusion 3 release excels at AI-generated body horror

unstable diffusion —

Users react to mangled SD3 generations and ask, “Is this release supposed to be a joke?”

An AI-generated image created using Stable Diffusion 3 of a girl lying in the grass.

On Wednesday, Stability AI released weights for Stable Diffusion 3 Medium, an AI image-synthesis model that turns text prompts into AI-generated images. Its arrival has been ridiculed online, however, because it generates images of humans in a way that seems like a step backward from other state-of-the-art image-synthesis models like Midjourney or DALL-E 3. As a result, it can churn out wild anatomically incorrect visual abominations with ease.

A thread on Reddit, titled, “Is this release supposed to be a joke? [SD3-2B],” details the spectacular failures of SD3 Medium at rendering humans, especially human limbs like hands and feet. Another thread, titled, “Why is SD3 so bad at generating girls lying on the grass?” shows similar issues, but for entire human bodies.

Hands have traditionally been a challenge for AI image generators due to a lack of good examples in early training data sets, but more recently, several image-synthesis models seem to have overcome the issue. In that sense, SD3 appears to be a huge step backward for the image-synthesis enthusiasts who gather on Reddit—especially compared to recent Stability releases like SD XL Turbo in November.

“It wasn’t too long ago that StableDiffusion was competing with Midjourney, now it just looks like a joke in comparison. At least our datasets are safe and ethical!” wrote one Reddit user.

  • An AI-generated image created using Stable Diffusion 3 Medium.

  • An AI-generated image created using Stable Diffusion 3 of a girl lying in the grass.

  • An AI-generated image created using Stable Diffusion 3 that shows mangled hands.

  • An AI-generated image created using Stable Diffusion 3 of a girl lying in the grass.

  • An AI-generated image created using Stable Diffusion 3 that shows mangled hands.

  • An AI-generated SD3 Medium image a Reddit user made with the prompt “woman wearing a dress on the beach.”

  • An AI-generated SD3 Medium image a Reddit user made with the prompt “photograph of a person napping in a living room.”

AI image fans are so far blaming Stable Diffusion 3’s anatomy fails on Stability’s insistence on filtering out adult content (often called “NSFW” content) from the SD3 training data that teaches the model how to generate images. “Believe it or not, heavily censoring a model also gets rid of human anatomy, so… that’s what happened,” wrote one Reddit user in the thread.

Basically, any time a user prompt homes in on a concept that isn’t represented well in the AI model’s training dataset, the image-synthesis model will confabulate its best interpretation of what the user is asking for. And sometimes that can be completely terrifying.

The release of Stable Diffusion 2.0 in 2022 suffered from similar problems in depicting humans well, and AI researchers soon discovered that censoring adult content that contains nudity can severely hamper an AI model’s ability to generate accurate human anatomy. At the time, Stability AI reversed course with SD 2.1 and SD XL, regaining some abilities lost by strongly filtering NSFW content.

Another issue that can occur during model pre-training is that the NSFW filter researchers use to remove adult images from the dataset is sometimes too picky, accidentally removing images that might not be offensive and depriving the model of depictions of humans in certain situations. “[SD3] works fine as long as there are no humans in the picture, I think their improved nsfw filter for filtering training data decided anything humanoid is nsfw,” wrote one Redditor on the topic.

Using a free online demo of SD3 on Hugging Face, we ran prompts and saw similar results to those being reported by others. For example, the prompt “a man showing his hands” returned an image of a man holding up two giant-sized backward hands, although each hand at least had five fingers.

  • A SD3 Medium example we generated with the prompt “A woman lying on the beach.”

  • A SD3 Medium example we generated with the prompt “A man showing his hands.”

    Stability AI

  • A SD3 Medium example we generated with the prompt “A woman showing her hands.”

    Stability AI

  • A SD3 Medium example we generated with the prompt “a muscular barbarian with weapons beside a CRT television set, cinematic, 8K, studio lighting.”

  • A SD3 Medium example we generated with the prompt “A cat in a car holding a can of beer.”

Stability first announced Stable Diffusion 3 in February, and the company plans to make it available in a variety of model sizes. Today’s release is for the “Medium” version, which is a 2 billion-parameter model. In addition to the weights being available on Hugging Face, they are also available for experimentation through the company’s Stability Platform. The weights are available for download and use for free under a non-commercial license only.
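For those who would rather experiment locally than through the online demo, the weights can be loaded with Hugging Face’s diffusers library. The sketch below is a rough starting point rather than official sample code: it assumes the StableDiffusion3Pipeline class, the “stabilityai/stable-diffusion-3-medium-diffusers” repository ID, and a GPU with enough VRAM, so check the model card for the current requirements and license terms.

```python
# Rough sketch of loading SD3 Medium locally with Hugging Face diffusers.
# The pipeline class and repo ID below are assumptions to verify against the
# model card; note the weights carry a non-commercial license.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
)
pipe.to("cuda")  # needs a GPU with enough VRAM for the 2B-parameter model

image = pipe(
    prompt="a woman lying on the grass",  # one of the prompts that trips up SD3
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]

image.save("sd3_medium_test.png")
```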

Soon after its February announcement, delays in releasing the SD3 model weights inspired rumors that the release was being held back due to technical issues or mismanagement. Stability AI as a company fell into a tailspin recently with the resignation of its founder and CEO, Emad Mostaque, in March and then a series of layoffs. Just prior to that, three key engineers—Robin Rombach, Andreas Blattmann, and Dominik Lorenz—left the company. And its troubles go back even farther, with news of the company’s dire financial position lingering since 2023.

To some Stable Diffusion fans, the failures with Stable Diffusion 3 Medium are a visual manifestation of the company’s mismanagement—and an obvious sign of things falling apart. Although the company has not filed for bankruptcy, some users made dark jokes about the possibility after seeing SD3 Medium:

“I guess now they can go bankrupt in a safe and ethically [sic] way, after all.”



Let’s unpack some questions about Russia’s role in North Korea’s rocket program

In this pool photo distributed by Sputnik agency, Russia’s President Vladimir Putin and North Korea’s leader Kim Jong Un visit the Vostochny Cosmodrome in Amur region in 2023. An RD-191 engine is visible in the background.

Vladimir Smirnov/Pool/AFP/Getty Images

Russian President Vladimir Putin will reportedly visit North Korea later this month, and you can bet collaboration on missiles and space programs will be on the agenda.

The bilateral summit in Pyongyang will follow a mysterious North Korean rocket launch on May 27, which ended in a fireball over the Yellow Sea. The fact that this launch fell short of orbit is not unusual—two of the country’s three previous satellite launch attempts failed. But North Korea’s official state news agency dropped some big news in the last paragraph of its report on the May 27 launch.

The Korean Central News Agency called the launch vehicle a “new-type satellite carrier rocket” and attributed the likely cause of the failure to “the reliability of operation of the newly developed liquid oxygen + petroleum engine” on the first stage booster. A small North Korean military spy satellite was destroyed. The fiery demise of the North Korean rocket was captured in a video recorded by the Japanese news broadcaster NHK.

Petroleum almost certainly means kerosene, a refined petroleum fuel used on a range of rockets, including SpaceX’s Falcon 9, United Launch Alliance’s Atlas V, and Russia’s Soyuz and Angara.

“The North Koreans are clearly toying with us,” said Jeffrey Lewis, a nonproliferation expert at the Middlebury Institute of International Studies. “They went out of their way to tell us what the propellant was, which is very deliberate because it’s a short statement and they don’t normally do that. They made a point of doing that, so I suspect they want us to be wondering what’s going on.”

Surprise from Sohae

Veteran observers of North Korea’s rocket program anticipated the country’s next satellite launch would use the same Chollima-1 rocket it used on three flights last year. But North Korea’s official statement suggests this was something different, and entirely unexpected, at least by anyone without access to classified information.

Ahead of the launch, North Korea released warning notices outlining the drop zones downrange where sections of the rocket would fall into the sea after lifting off from Sohae Satellite Launching Station on the country’s northwestern coast.

A day before the May 27 launch, South Korea’s Yonhap news agency reported a “large number of Russian experts” entered North Korea to support the launch effort. A senior South Korean defense official told Yonhap that North Korea staged more rocket engine tests than expected during the run-up to the May 27 flight.

Then, North Korea announced that this wasn’t just another flight of the Chollima-1 rocket but something new. The Chollima-1 used the same mix of hydrazine and nitrogen tetroxide propellants as North Korea’s ballistic missiles. This combination of toxic propellants has the benefit of simplicity—these liquids are hypergolic, meaning they combust upon contact with one another. No ignition source is needed.

A television monitor at a train station in South Korea shows an image of the launch of North Korea’s Chollima-1 rocket last year.

Kim Jae-Hwan/SOPA Images/LightRocket via Getty Images

Kerosene and liquid oxygen are nontoxic and more fuel-efficient. But liquid oxygen has to be kept at super-cold temperatures, requiring special handling and insulation to prevent boil-off as it is loaded into the rocket.



Apple quietly improves Mac virtualization in macOS 15 Sequoia

virtual realities —

It only works for macOS 15 guests on macOS 15 hosts, but it’s a big improvement.

Macs running a preview build of macOS 15 Sequoia.

Apple

We’ve written before about Apple’s handy virtualization framework in recent versions of macOS, which allows users of Apple Silicon Macs with sufficient RAM to easily set up macOS and Linux virtual machines using a number of lightweight third-party apps. This is useful for anyone who needs to test software in multiple macOS versions but doesn’t own a fleet of Mac hardware or multiple boot partitions. (Intel Macs support the virtualization framework, too, but only for Linux VMs, making it less useful.)

But up until now, you haven’t been able to sign into iCloud using macOS on a VM. This made the feature less useful for developers hoping to test iCloud features in macOS or whose apps rely on some kind of iCloud syncing, and for people who just wanted easy access to their iCloud data from within a VM.

This limitation is going away in macOS 15 Sequoia, according to developer documentation that Apple released yesterday. As long as your host operating system is macOS 15 or newer and your guest operating system is macOS 15 or newer, VMs will now be able to sign into and use iCloud and other Apple ID-related services just as they would when running directly on the hardware.

This is still limiting for developers, who might want to run an older version of macOS on their hardware while still testing macOS 15 in a VM, or those who want to do the reverse so that they can more easily support multiple versions of macOS with their apps. It also doesn’t apply to VMs that are upgraded from an older version of macOS to Sequoia—it has to be a brand-new VM created from a macOS 15 install image. But it’s a welcome change, and it will steadily get more useful as Apple releases more macOS versions in the future that can take advantage of it.

“When you create a VM in macOS 15 from a macOS 15 software image… Virtualization configures an identity for the VM that it derives from security information in the host’s Secure Enclave,” Apple’s documentation reads. “Just as individual physical devices have distinct identities based on their Secure Enclaves, this identity is distinct from other VMs.”

If you move that VM from one host to another, a new distinct identity will be created, and your iCloud account will presumably be logged out. This is the same thing that happens if you back up a copy of one Mac’s disk and restore it to another Mac. A new identity will also be created if a second copy of a VM is launched on the same machine.

Mac users hoping to virtualize the Arm version of Windows 10 or 11 will still need to look to third-party products for help. Both Parallels and VMware offer virtualization products that are officially blessed by Microsoft as a way to run Windows on Apple Silicon Macs, and Broadcom recently made VMware Fusion free for individuals.
