
Due to AI fakes, the “deep doubt” era is here

Memento | Aurich Lawson

Given the flood of photorealistic AI-generated images washing over social media networks like X and Facebook these days, we’re seemingly entering a new age of media skepticism: the era of what I’m calling “deep doubt.” While questioning the authenticity of digital content stretches back decades—and analog media long before that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people’s existing skepticism toward online content from strangers may be reaching new heights.

Deep doubt is skepticism of real media that stems from the existence of generative AI. This manifests as broad public skepticism toward the veracity of media artifacts, which in turn leads to a notable consequence: People can now more credibly claim that real events did not happen and suggest that documentary evidence was fabricated using AI tools.

The concept behind “deep doubt” isn’t new, but its real-world impact is becoming increasingly apparent. Since the term “deepfake” first surfaced in 2017, we’ve seen a rapid evolution in AI-generated media capabilities. This has led to recent examples of deep doubt in action, such as conspiracy theorists claiming that President Joe Biden has been replaced by an AI-powered hologram and former President Donald Trump’s baseless accusation in August that Vice President Kamala Harris used AI to fake crowd sizes at her rallies. And on Friday, Trump again cried “AI” at a photo that contradicts his claim of never having met E. Jean Carroll, a writer who successfully sued him for sexual assault.

Legal scholars Danielle K. Citron and Robert Chesney foresaw this trend years ago, coining the term “liar’s dividend” in 2019 to describe the consequence of deep doubt: deepfakes being weaponized by liars to discredit authentic evidence. But whereas deep doubt was once a hypothetical academic concept, it is now our reality.

The rise of deepfakes, the persistence of doubt

Doubt has been a political weapon since ancient times. This modern AI-fueled manifestation is just the latest evolution of a tactic where the seeds of uncertainty are sown to manipulate public opinion, undermine opponents, and hide the truth. AI is the newest refuge of liars.

Over the past decade, the rise of deep-learning technology has made it increasingly easy for people to craft false or modified pictures, audio, text, or video that appear to be non-synthesized organic media. Deepfakes were named after a Reddit user going by the name “deepfakes,” who shared AI-faked pornography on the service, swapping out the face of a performer with the face of someone else who wasn’t part of the original recording.

In the 20th century, one could argue that a certain part of our trust in media produced by others was a result of how expensive and time-consuming it was, and the skill it required, to produce documentary images and films. Even texts required a great deal of time and skill. As the deep doubt phenomenon grows, it will erode this 20th-century media sensibility. But it will also affect our political discourse, legal systems, and even our shared understanding of historical events that rely on that media to function—we rely on others to get information about the world. From photorealistic images to pitch-perfect voice clones, our perception of what we consider “truth” in media will need recalibration.

In April, a panel of federal judges highlighted the potential for AI-generated deepfakes to not only introduce fake evidence but also cast doubt on genuine evidence in court trials. The concern emerged during a meeting of the US Judicial Conference’s Advisory Committee on Evidence Rules, where the judges discussed the challenges of authenticating digital evidence in an era of increasingly sophisticated AI technology. Ultimately, the judges decided to postpone making any AI-related rule changes, but their meeting shows that the subject is already being considered by American judges.

“Fascists”: Elon Musk responds to proposed fines for disinformation on X

Being responsible is so hard —

“Elon Musk’s had more positions on free speech than the Kama Sutra,” says lawmaker.

Getty Images | Dan Kitwood

Elon Musk has lambasted Australia’s government as “fascists” over proposed laws that could levy substantial fines on social media companies if they fail to comply with rules to combat the spread of disinformation and online scams.

The billionaire owner of social media site X posted the word “fascists” on Friday in response to the bill, which would strengthen the Australian media regulator’s ability to hold companies responsible for the content on their platforms and levy potential fines of up to 5 percent of global revenue. The bill, which was proposed this week, has yet to be passed.

Musk’s comments drew rebukes from senior Australian politicians, with Stephen Jones, Australia’s finance minister, telling national broadcaster ABC that it was “crackpot stuff” and the legislation was a matter of sovereignty.

Bill Shorten, the former leader of the Labor Party and a cabinet minister, accused the billionaire of only championing free speech when it was in his commercial interests. “Elon Musk’s had more positions on free speech than the Kama Sutra,” Shorten said in an interview with Australian radio.

The exchange marks the second time that Musk has confronted Australia over technology regulation.

In May, he accused the country’s eSafety Commissioner of censorship after the government agency took X to court in an effort to force it to remove graphic videos of a stabbing attack in Sydney. A court later denied the eSafety Commissioner’s application.

Musk has also been embroiled in a bitter dispute with authorities in Brazil, where the Supreme Court ruled last month that X should be blocked over its failure to remove or suspend certain accounts accused of spreading misinformation and hateful content.

Australia has been at the forefront of efforts to regulate the technology sector, pitting it against some of the world’s largest social media companies.

This week, the government pledged to introduce a minimum age limit for social media use to tackle “screen addiction” among young people.

In March, Canberra threatened to take action against Meta after the owner of Facebook and Instagram said it would withdraw from a world-first deal to pay media companies to link to news stories.

The government also introduced new data privacy measures to parliament on Thursday that would impose hefty fines and potential jail terms of up to seven years for people found guilty of “doxxing” individuals or groups.

Prime Minister Anthony Albanese’s government had pledged to outlaw doxxing—the publication of personal details online for malicious purposes—this year after the details of a private WhatsApp group containing hundreds of Jewish Australians were published online.

Australia is one of the first countries to pursue laws outlawing doxxing. It is also expected to introduce a tranche of laws in the coming months to regulate how personal data can be used by artificial intelligence.

“These reforms give more teeth to the regulation,” said Monique Azzopardi at law firm Clayton Utz.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Procreate defies AI trend, pledges “no generative AI” in its illustration app

Political pixels —

Procreate CEO: “I really f—ing hate generative AI.”

Still of Procreate CEO James Cuda from a video posted to X.

On Sunday, Procreate announced that it will not incorporate generative AI into its popular iPad illustration app. The decision comes in response to an ongoing backlash from some parts of the art community, which has raised concerns about the ethical implications and potential consequences of AI use in creative industries.

“Generative AI is ripping the humanity out of things,” Procreate wrote on its website. “Built on a foundation of theft, the technology is steering us toward a barren future.”

In a video posted on X, Procreate CEO James Cuda laid out his company’s stance, saying, “We’re not going to be introducing any generative AI into our products. I don’t like what’s happening to the industry, and I don’t like what it’s doing to artists.”

Cuda’s sentiment echoes the fears of some digital artists who feel that AI image synthesis models, often trained on content without consent or compensation, threaten their livelihood and the authenticity of creative work. That’s not a universal sentiment among artists, but AI image synthesis is often a deeply divisive subject on social media, with some taking starkly polarized positions on the topic.

Procreate CEO James Cuda lays out his argument against generative AI in a video posted to X.

Cuda’s video plays on that polarization with clear messaging against generative AI. His statement reads as follows:

You’ve been asking us about AI. You know, I usually don’t like getting in front of the camera. I prefer that our products speak for themselves. I really fucking hate generative AI. I don’t like what’s happening in the industry and I don’t like what it’s doing to artists. We’re not going to be introducing any generative AI into our products. Our products are always designed and developed with the idea that a human will be creating something. You know, we don’t exactly know where this story’s gonna go or how it ends, but we believe that we’re on the right path supporting human creativity.

The debate over generative AI has intensified among some outspoken artists as more companies integrate these tools into their products. Dominant illustration software provider Adobe has tried to avoid ethical concerns by training its Firefly AI models on licensed or public domain content, but some artists have remained skeptical. Adobe Photoshop currently includes a “Generative Fill” feature powered by image synthesis, and the company is also experimenting with video synthesis models.

The backlash against image and video synthesis is not solely focused on creative app developers. Hardware manufacturer Wacom and game publisher Wizards of the Coast have faced criticism and issued apologies after using AI-generated content in their products. Toys “R” Us also faced a negative reaction after debuting an AI-generated commercial. Companies are still grappling with balancing the potential benefits of generative AI with the ethical concerns it raises.

Artists and critics react

A partial screenshot of Procreate’s AI website captured on August 20, 2024.

So far, Procreate’s anti-AI announcement has been met with a largely positive reaction in replies to its social media post. In a widely liked comment, artist Freya Holmér wrote on X, “this is very appreciated, thank you.”

Some of the more outspoken opponents of image synthesis also replied favorably to Procreate’s move. Karla Ortiz, who is a plaintiff in a lawsuit against AI image-generator companies, replied to Procreate’s video on X, “Whatever you need at any time, know I’m here!! Artists support each other, and also support those who allow us to continue doing what we do! So thank you for all you all do and so excited to see what the team does next!”

Artist RJ Palmer, who stoked the first major wave of AI art backlash with a viral tweet in 2022, also replied to Cuda’s video statement, saying, “Now thats the way to send a message. Now if only you guys could get a full power competitor to [Photoshop] on desktop with plugin support. Until someone can build a real competitor to high level [Photoshop] use, I’m stuck with it.”

A few pro-AI users also replied to the X post, including AI-augmented artist Claire Silver, who uses generative AI as an accessibility tool. She wrote on X, “Most of my early work is made with a combination of AI and Procreate. 7 years ago, before text to image was really even a thing. I loved procreate because it used tech to boost accessibility. Like AI, it augmented trad skill to allow more people to create. No rules, only tools.”

Since AI image synthesis remains a highly charged subject among some artists, reaffirming support for human-centric creativity could be an effective way for Procreate to differentiate itself from creativity app giant Adobe, to which it currently plays underdog. Some artists may prefer to use AI tools, but in a healthy app ecosystem with genuine choice among illustration apps, people can follow their conscience.

Procreate’s anti-AI stance is slightly risky because it might also polarize part of its user base—and if the company changes its mind about including generative AI in the future, it will have to walk back its pledge. But for now, Procreate is confident in its decision: “In this technological rush, this might make us an exception or seem at risk of being left behind,” Procreate wrote. “But we see this road less traveled as the more exciting and fruitful one for our community.”

X is training Grok AI on your data—here’s how to stop it

Grok Your Privacy Options —

Some users were outraged to learn this was opt-out, not opt-in.

An AI-generated image released by xAI during the open-weights launch of Grok-1.

Elon Musk-led social media platform X is training Grok, its AI chatbot, on users’ data, and that’s opt-out, not opt-in. If you’re an X user, that means Grok is already being trained on your posts if you haven’t explicitly told it not to.

Over the past day or so, users of the platform noticed the checkbox to opt out of this data usage in X’s privacy settings. The discovery was accompanied by outrage that user data was being used this way to begin with.

The social media posts about this sometimes seem to suggest that Grok has only just begun training on X users’ data, but users actually don’t know for sure when it started happening.

Earlier today, X’s Safety account tweeted, “All X users have the ability to control whether their public posts can be used to train Grok, the AI search assistant.” But it didn’t clarify either when the option became available or when the data collection began.

You cannot currently disable it in the mobile apps, but you can on mobile web, and X says the option is coming to the apps soon.

On the privacy settings page, X says:

To continuously improve your experience, we may utilize your X posts as well as your user interactions, inputs, and results with Grok for training and fine-tuning purposes. This also means that your interactions, inputs, and results may also be shared with our service provider xAI for these purposes.

X’s privacy policy has allowed for this since at least September 2023.

It’s increasingly common for user data to be used this way; for example, Meta has done the same with its users’ content, and there was an outcry when Adobe updated its terms of use to allow for this kind of thing. (Adobe quickly backtracked and promised to “never” train generative AI on creators’ content.)

How to opt out

You can’t opt out within the iOS or Android apps yet, but you can do so in a few quick steps on either mobile or desktop web. To do so:

  • Click or tap “More” in the nav panel
  • Click or tap “Settings and privacy”
  • Click or tap “Privacy and safety”
  • Scroll down and click or tap “Grok” under “Data sharing and personalization”
  • Uncheck the box “Allow your posts as well as your interactions, inputs, and results with Grok to be used for training and fine-tuning,” which is checked by default.

Alternatively, you can follow this link directly to the settings page and uncheck the box with just one more click. If you’d like, you can also delete your conversation history with Grok here, provided you’ve actually used the chatbot before.

No judge with Tesla stock should handle Elon Musk cases, watchdog argues


Elon Musk’s fight against Media Matters for America (MMFA)—a watchdog organization that he largely blames for an ad boycott that tanked Twitter/X’s revenue—has raised an interesting question about whether any judge owning Tesla stock might reasonably be considered biased when weighing any lawsuit centered on the tech billionaire.

In a court filing Monday, MMFA lawyers argued that “undisputed facts—including statements from Musk and Tesla—lay bare the interest Tesla shareholders have in this case.” According to the watchdog, any outcome in the litigation will likely impact Tesla’s finances, and that’s a problem because there’s a possibility that the judge in the case, Reed O’Connor, owns Tesla stock.

“X cannot dispute the public association between Musk—his persona, business practices, and public remarks—and the Tesla brand,” MMFA argued. “That association would lead a reasonable observer to ‘harbor doubts’ about whether a judge with a financial interest in Musk could impartially adjudicate this case.”

It’s still unclear if Judge O’Connor actually owns Tesla stock. But after MMFA’s legal team uncovered disclosures showing that he did as of last year, they argued that fact can only be clarified if the court views Tesla as a party with a “financial interest in the outcome of the case” under Texas law—“no matter how small.”

To make those facts clear, MMFA is now arguing that X must be ordered to add Tesla as an interested person in the litigation, which, a source familiar with the matter told Ars, would most likely lead to a recusal if O’Connor indeed still owned Tesla stock.

“At most, requiring X to disclose Tesla would suggest that judges owning stock in Tesla—the only publicly traded Musk entity—should recuse from future cases in which Musk himself is demonstrably central to the dispute,” MMFA argued.

Ars could not immediately reach X Corp’s lawyer for comment.

However, in X’s court filing opposing the motion to add Tesla as an interested person, X insisted that “Tesla is not a party to this case and has no interest in the subject matter of the litigation, as the business relationships at issue concern only X Corp.’s contracts with X’s advertisers.”

Calling MMFA’s motion “meritless,” X accused MMFA of strategizing to get Judge O’Connor disqualified in order to go “forum shopping” after MMFA received “adverse rulings” on motions to stay discovery and dismiss the case.

As to the question of whether any judge owning Tesla stock might be considered impartial in weighing Musk-centric cases, X argued that Judge O’Connor was just as duty-bound to reject an improper motion for recusal, should MMFA go that route, as he was to accept a proper motion.

“Courts are ‘reluctant to fashion a rule requiring judges to recuse themselves from all cases that might remotely affect nonparty companies in which they own stock,'” X argued.

Recently, judges have recused themselves from cases involving Musk without explaining why. In November, a prior judge in this same Media Matters suit mysteriously recused himself, with The Hill reporting that it was likely the judge’s “impartiality might reasonably be questioned” for reasons like a financial interest or personal bias. Then in June, another judge disqualified himself from ruling on a severance lawsuit brought by former Twitter executives without giving “a specific reason,” Bloomberg Law reported.

Should another recusal come in the MMFA lawsuit, it would be a rare example of a judge clearly disclosing a financial interest in a Musk case.

“The straightforward question is whether Musk’s statements and behavior relevant to this case affect Tesla’s stock price, not whether they are the only factor that affects it,” MMFA argued. “At the very least, there is a serious question about whether Musk’s highly unusual management practices mean Tesla must be disclosed as an interested party.”

Parties expect a ruling on MMFA’s motion in the coming weeks.

Elon Musk’s X tests letting users request Community Notes on bad posts


Continuing to evolve the fact-checking service that launched as Twitter’s Birdwatch, X has announced that Community Notes can now be requested to clarify problematic posts spreading on Elon Musk’s platform.

X’s Community Notes account confirmed late Thursday that, due to “popular demand,” X had launched a pilot test on the web-based version of the platform. The test is active now and the same functionality will be “coming soon” to Android and iOS, the Community Notes account said.

Through the current web-based pilot, if you’re an eligible user, you can click on the “•••” menu on any X post on the web and request fact-checking from one of Community Notes’ top contributors, X explained. If X receives five or more requests within 24 hours of the post going live, a Community Note will be added.

Only X users with verified phone numbers will be eligible to request Community Notes, X said, and to start, users will be limited to five requests a day.

“The limit may increase if requests successfully result in helpful notes, or may decrease if requests are on posts that people don’t agree need a note,” X’s website said. “This helps prevent spam and keep note writers focused on posts that could use helpful notes.”

Once X receives five or more requests for a Community Note within a single day, top contributors with diverse views will be alerted to respond. On X, top contributors are constantly changing, as their notes are voted as either helpful or not. If at least 4 percent of their notes are rated “helpful,” X explained on its site, and the impact of their notes meets X standards, they can be eligible to receive alerts.
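The thresholds X describes — five or more requests within 24 hours of a post, and a 4 percent "helpful" rating floor for contributor eligibility — can be sketched in a few lines. This is a hypothetical illustration of the published numbers only; the function names, data shapes, and any tie to X's actual implementation are assumptions.

```python
# Hedged sketch of the Community Notes thresholds described above.
# The 5-requests-in-24-hours trigger and the 4 percent "helpful" rating
# floor come from X's public description; everything else here is
# hypothetical, for illustration only.

from datetime import datetime, timedelta

REQUEST_THRESHOLD = 5            # requests needed before top contributors are alerted
REQUEST_WINDOW = timedelta(hours=24)
HELPFUL_RATE_FLOOR = 0.04        # at least 4% of a contributor's notes rated "helpful"


def should_alert_contributors(request_times: list[datetime], post_time: datetime) -> bool:
    """True if five or more note requests arrived within 24 hours of the post."""
    in_window = [t for t in request_times
                 if post_time <= t <= post_time + REQUEST_WINDOW]
    return len(in_window) >= REQUEST_THRESHOLD


def is_top_writer(helpful_notes: int, total_notes: int) -> bool:
    """True if the contributor clears the 4 percent helpful-rating floor."""
    if total_notes == 0:
        return False
    return helpful_notes / total_notes >= HELPFUL_RATE_FLOOR
```

Note that per X's description, clearing the rating floor is necessary but not sufficient: the "impact" of a contributor's notes must also meet X's standards, which are not spelled out publicly.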

“A contributor’s Top Writer status can always change as their notes are rated by others,” X’s website said.

Ultimately, X considers notes helpful if they “contain accurate, high-quality information” and “help inform people’s understanding of the subject matter in posts,” X said on another part of its site. To gauge the former, X said that the platform partners with “professional reviewers” from the Associated Press and Reuters. X also continually monitors whether notes marked helpful by top writers match what general X users marked as helpful.

“We don’t expect all notes to be perceived as helpful by all people all the time,” X’s website said. “Instead, the goal is to ensure that on average notes that earn the status of Helpful are likely to be seen as helpful by a wide range of people from different points of view, and not only be seen as helpful by people from one viewpoint.”

X will also be allowing half of the top contributors to request notes during the pilot phase, which X said will help the platform evaluate “whether it is beneficial for Community Notes contributors to have both the ability to write notes and request notes.”

According to X, the criteria for requesting a note have intentionally been designed to be simple during the pilot stage, but X expects “these criteria to evolve, with the goal that requests are frequently found valuable to contributors, and not noisy.”

It’s hard to tell from the outside looking in how helpful Community Notes are to X users. The most recent Community Notes survey data that X points to is from 2022 when the platform was still called Twitter and the fact-checking service was still called Birdwatch.

That data showed that “on average,” users were “20–40 percent less likely to agree with the substance of a potentially misleading Tweet than someone who sees the Tweet alone.” And based on Twitter’s “internal data” at that time, the platform also estimated that “people on Twitter who see notes are, on average, 15–35 percent less likely to Like or Retweet a Tweet than someone who sees the Tweet alone.”

Elon Musk’s X may succeed in blocking Calif. content moderation law on appeal

Judgment call —

Elon Musk’s X previously failed to block the law on First Amendment grounds.


Elon Musk’s fight defending X’s content moderation decisions isn’t just with hate speech researchers and advertisers. He has also long been battling regulators, and this week, he seemed positioned to secure a potentially big win in California, where he’s hoping to permanently block a law that he claims unconstitutionally forces his platform to justify its judgment calls.

At a hearing Wednesday, three judges in the 9th US Circuit Court of Appeals seemed inclined to agree with Musk that a California law requiring disclosures from social media companies that clearly explain their content moderation choices likely violates the First Amendment.

Passed in 2022, AB-587 forces platforms like X to submit a “terms of service report” detailing how they moderate several categories of controversial content. Those categories include hate speech or racism, extremism or radicalization, disinformation or misinformation, harassment, and foreign political interference, which X’s lawyer, Joel Kurtzberg, told judges yesterday “are the most controversial categories of so-called awful but lawful speech.”

The law would seemingly require more transparency than ever from X, making it easy for users to track exactly how much controversial content X flags and removes—and perhaps most notably for advertisers, how many users viewed concerning content.

To block the law, X sued in 2023, arguing that California was trying to dictate its terms of service and force the company to make statements on content moderation that could generate backlash. X worried that the law “impermissibly” interfered with both “the constitutionally protected editorial judgments” of social media companies, as well as impacted users’ speech by requiring companies “to remove, demonetize, or deprioritize constitutionally protected speech that the state deems undesirable or harmful.”

Any companies found to be non-compliant could face stiff fines of up to $15,000 per violation per day, which X considered “draconian.” But last year, a lower court declined to block the law, prompting X to appeal, and yesterday, the appeals court seemed more sympathetic to X’s case.

At the hearing, Kurtzberg told judges that the law was “deeply threatening to the well-established First Amendment interests” of an “extraordinary diversity of” people, which is why X’s complaint was supported by briefs from reporters, freedom of the press advocates, First Amendment scholars, “conservative entities,” and people across the political spectrum.

All share “a deep concern about a statute that, on its face, is aimed at pressuring social media companies to change their content moderation policies, so as to carry less or even no expression that’s viewed by the state as injurious to its people,” Kurtzberg told judges.

When the court pointed out that seemingly the law simply required X to abide by content moderation policies for each category defined in its own terms of service—and did not compel X to adopt any policy or position that it did not choose—Kurtzberg pushed back.

“They don’t mandate us to define the categories in a specific way, but they mandate us to take a position on what the legislature makes clear are the most controversial categories to moderate and define,” Kurtzberg said. “We are entitled to respond to the statute by saying we don’t define hate speech or racism. But the report also asks about policies that are supposedly, quote, ‘intended’ to address those categories, which is a judgment call.”

“This is very helpful,” Judge Anthony Johnstone responded. “Even if you don’t yourself define those categories in the terms of service, you read the law as requiring you to opine or discuss those categories, even if they’re not part of your own terms,” and “you are required to tell California essentially your views on hate speech, extremism, harassment, foreign political interference, how you define them or don’t define them, and what you choose to do about them?”

“That is correct,” Kurtzberg responded, noting that X considered those categories the most “fraught” and “difficult to define.”

Elon Musk says SpaceX and X will relocate their headquarters to Texas

Home base at Starbase —

The billionaire blamed a California gender identity law for moving SpaceX and X headquarters.

A pedestrian walks past a flown Falcon 9 booster at SpaceX headquarters in Hawthorne, California, on Tuesday, the same day Elon Musk said he will relocate the headquarters to Texas.

Elon Musk said Tuesday that he will move the headquarters of SpaceX and his social media company X from California to Texas in response to a new gender identity law signed by California Governor Gavin Newsom.

Musk’s announcement, made via a post on X, follows his decision in 2021 to move the headquarters of the electric car company Tesla from Palo Alto, California, to Austin, Texas, in the wake of coronavirus lockdowns in the Bay Area the year before. Now, two of Musk’s other major holdings are making symbolic moves out of California: SpaceX to the company’s Starbase launch facility near Brownsville, Texas, and X to Austin.

The new gender identity law, signed by Governor Newsom, a Democrat, on Monday, bars school districts in California from requiring teachers to disclose a change in a student’s gender identification or sexual orientation to their parents without the child’s permission. Musk wrote on X that the law was the “final straw” prompting the relocation to Texas, where the billionaire executive and his companies could take advantage of lower taxes and light-touch regulations.

Earlier this year, SpaceX transferred its incorporation from Delaware to Texas after a Delaware judge invalidated Musk’s pay package at Tesla.

“Because of this law and the many others that preceded it, attacking both families and companies, SpaceX will now move its HQ from Hawthorne, California, to Starbase, Texas,” Musk wrote Tuesday on X.

The first-in-the-nation law in California is a flashpoint in the struggle between conservative school boards concerned about parental rights and proponents for the privacy rights of LGBTQ people.

“I did make it clear to Governor Newsom about a year ago that laws of this nature would force families and companies to leave California to protect their children,” wrote Musk, who on Saturday endorsed former President Donald Trump, the Republican nominee in this year’s presidential election.

In a statement, Newsom’s office said the law “does not allow a student’s name or gender identity to be changed on an official school record without parental consent” and “does not take away or undermine parents’ rights.”

What does this mean for SpaceX?

Musk’s comments on X didn’t mention details about the implications of his companies’ moves to Texas. However, while Tesla’s corporate headquarters relocated to Texas in 2021, the company still produces cars in California and announced a new engineering hub in Palo Alto last year. The situation with SpaceX is likely to be similar.

Since buying Twitter in 2022, Musk has renamed it X, rewritten the network's policies on content moderation, and laid off most of the company's staff, reducing its workforce to around 1,500 employees. With vast manufacturing capacities, SpaceX currently has more than 13,000 employees, so a relocation for Musk's space company would affect more people and potentially be more disruptive than one at X.

SpaceX’s current headquarters in Hawthorne, California, serves as a factory, engineering design center, and mission control for the company’s rockets and spacecraft. Relocating these facilities wouldn’t be easy, but SpaceX may not need to.

40 years later, X Window System is far more relevant than anyone could guess

Widely but improperly known as X-windows —

One astrophysics professor’s memories of writing X11 code in the 1980s.

low angle view of Office Buildings in Hong Kong from below, with the sky visible through an X-like cross

Getty Images

Oftentimes, when I am researching something about computers or coding that has been around a very long while, I will come across a document on a university website that tells me more about that thing than any Wikipedia page or archive ever could.

It’s usually a PDF, though sometimes a plaintext file, on a .edu subdirectory that starts with a username preceded by a tilde (~) character. This is typically a document that a professor, faced with the same questions semester after semester, has put together to save the most time possible and get back to their work. I recently found such a document inside Princeton University’s astrophysics department: “An Introduction to the X Window System,” written by Robert Lupton.

X Window System, which turned 40 years old earlier this week, was something you had to know how to use to work with space-facing instruments back in the early 1980s, when VT100s, VAX-11/750s, and Sun Microsystems boxes would share space at college computer labs. As the member of the Astrophysical Sciences Department at Princeton who knew the most about computers back then, it fell to Lupton to fix things and take questions.

“I first wrote X10r4 server code, which eventually became X11,” Lupton said in a phone interview. “Anything that needed graphics code, where you’d want a button or some kind of display for something, that was X… People would probably bug me when I was trying to get work done down in the basement, so I probably wrote this for that reason.”

Where X came from (after W)

Robert W. Scheifler and Jim Gettys at MIT spent “the last couple weeks writing a window system for the VS100” back in 1984. As part of Project Athena‘s goals to create campus-wide computing with distributed resources and multiple hardware platforms, X fit the bill, being independent of platforms and vendors and able to call on remote resources. Scheifler “stole a fair amount of code from W,” made its interface asynchronous and thereby much faster, and “called it X” (back when that was still a cool thing to do).

That kind of cross-platform compatibility made X work for Princeton, and thereby Lupton. He notes in his guide that X provides “tools not rules,” which allows for “a very large number of confusing guises.” After explaining the three-part nature of X—the server, the clients, and the window manager—he goes on to provide some tips:

  • Modifier keys are key to X; “this sensitivity extends to things like mouse buttons that you might not normally think of as case-sensitive.”
  • “To start X, type xinit; do not type X unless you have defined an alias. X by itself starts the server but no clients, resulting in an empty screen.”
  • “All programmes running under X are equal, but one, the window manager, is more equal.”
  • Using the “--zaphod” flag prevents a mouse from going into a screen you can’t see; “Someone should be able to explain the etymology to you” (link mine).
  • “If you say kill -9 12345 you will be sorry as the console will appear hopelessly confused. Return to your other terminal, say kbd_mode -a, and make a note not to use -9 without due reason.”
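Tied together, Lupton's tips amount to something like the following minimal ~/.xinitrc, the script xinit runs after starting the server to launch clients and a window manager. The contents here are a hypothetical sketch illustrating the server/clients/window-manager split, not a file from his guide:

```shell
# Hypothetical ~/.xinitrc sketch (not from Lupton's guide).
# xinit starts the X server, then runs this script; typing bare `X`
# would start the server with no clients, leaving an empty screen.
xterm -geometry 80x24+0+0 &   # a client: a terminal, run in the background
xclock &                      # another client
exec twm                      # the window manager, "more equal" than the rest;
                              # when twm exits, the whole X session ends
```

Running `xinit` then brings up the server, the two clients, and twm together. The warning about `kill -9` follows from the same design: force-killing the server skips its cleanup, which can leave the console keyboard in a raw mode that `kbd_mode -a` restores.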

I asked Lupton, whom I caught on the last day before he headed to Chile to help with a very big telescope, how he felt about X, 40 years later. Why had it survived?

“It worked, at least relative to the other options we had,” Lupton said. He noted that Princeton’s systems were not “heavily networked in those days,” such that the network traffic issues some had with X weren’t an issue then. “People weren’t expecting a lot of GUIs, either; they were expecting command lines, maybe a few buttons… it was the most portable version of a window system, running on both a VAX and the Suns at the time… it wasn’t bad.”

Elon Musk rushes to debut X payments as tech issues hamper creator payouts

Elon Musk is still frantically pushing to launch X payment services in the US by the end of 2024, Bloomberg reported Tuesday.

Launching payment services is arguably one of the reasons why Musk paid so much to acquire Twitter in 2022. His rebranding of the social platform into X revives a former dream he had as a PayPal co-founder who fought and failed to name the now-ubiquitous payments app X. Musk has told X staff that transforming the company into a payments provider would be critical to achieving his goal of turning X into a so-called everything app “within three to five years.”

Late last year, Musk said it would “blow” his “mind” if X didn’t roll out payments by the end of 2024, so Bloomberg’s report likely comes as no big surprise to Musk’s biggest fans who believe in his vision. At that time, Musk said he wanted X users’ “entire financial lives” on the platform before 2024 ended, and a Bloomberg review of “more than 350 pages of documents and emails related to money transmitter licenses that X Payments submitted in 11 states” shows approximately how close he is to making that dream a reality on his platform.

X Payments, a subsidiary of X, reports that X already has money transmitter licenses in 28 states, but X wants to secure licenses in all states before 2024 winds down, Bloomberg reported.

Bloomberg’s review found that X has a multiyear plan to gradually introduce payment features across the US—including “Venmo-like” features to send and receive money, as well as make purchases online—but hopes to begin that process this year. Payment providers like Stripe and Adyen have already partnered with X to process its transactions, Bloomberg reported, and X has told regulators that it “anticipated” that its payments system would also rely on those partnerships.

Musk initially had hoped to launch payments globally in 2024, but regulatory pressures forced him to tamp down those ambitions, Bloomberg reported. Massachusetts, for example, required X to resubmit its application only after more than half of US states had already issued licenses, Bloomberg found.

Ultimately, Musk wants X to become the largest financial institution in the world. Bloomberg reported that he plans to do this by giving users a convenient “digital dashboard” through X “that will serve as a centralized hub for all payments activity” online. To make sure that users keep their money stashed on the platform, Musk plans to offer “extremely high yield” savings accounts that X Payments’ chief information security officer, Chris Stanley, teased in April would basically guarantee that funds are rarely withdrawn from X.

“The end goal is if you ever have any incentive to take money out of our system, then we have failed,” Stanley posted on X.

Stanley compared X payments to Venmo and Apple Pay and said X’s plan for its payment feature was to “evolve” so that X users “can gain interest, buy products,” and “eventually use it to buy things in stores.”

Bloomberg confirmed that X does not plan to charge users any fees to send or receive payments, although Musk has told regulators that offering payments will “boost” X’s business by increasing X users’ “participation and engagement.” Analysts told Bloomberg that X could also profit off payments by charging merchants fees or by “offering banking services, such as checking accounts and debit cards.”

Musk has told X staff that he plans to offer checking accounts, debit cards, and even loans through X, saying that “if you address all things that you want from a finance standpoint, then we will be the people’s financial institution.”

X CEO Linda Yaccarino has been among the biggest cheerleaders for Musk’s plan to turn X into a bank, writing in a blog last year, “We want money on X to flow as freely as information and conversation.”

Elon Musk’s X defeats Australia’s global takedown order of stabbing video

Australia’s safety regulator has ended a legal battle with X (formerly Twitter) after threatening fines of approximately $500,000 per day over the company's failure to remove 65 instances of a religiously motivated stabbing video from X globally.

Enforcing Australia’s Online Safety Act, eSafety commissioner Julie Inman-Grant had argued it would be dangerous for the videos to keep spreading on X, potentially inciting other acts of terror in Australia.

But X owner Elon Musk refused to comply with the global takedown order, arguing that it would be “unlawful and dangerous” to allow one country to control the global Internet. And Musk was not alone in this fight. The legal director of a nonprofit digital rights group called the Electronic Frontier Foundation (EFF), Corynne McSherry, backed up Musk, urging the court to agree that “no single country should be able to restrict speech across the entire Internet.”

“We welcome the news that the eSafety Commissioner is no longer pursuing legal action against X seeking the global removal of content that does not violate X’s rules,” X’s Global Government Affairs account posted late Tuesday night. “This case has raised important questions on how legal powers can be used to threaten global censorship of speech, and we are heartened to see that freedom of speech has prevailed.”

Inman-Grant was formerly Twitter’s director of public policy in Australia and used that experience to land what she told The Courier-Mail was her “dream role” as Australia’s eSafety commissioner in 2017. Since issuing the order to remove the video globally on X, Inman-Grant had traded barbs with Musk (along with other Australian lawmakers), responding to Musk labeling her a “censorship commissar” by calling him an “arrogant billionaire” for fighting the order.

On X, Musk arguably got the last word, posting, “Freedom of speech is worth fighting for.”

Safety regulator still defends takedown order

In a statement, Inman-Grant said early Wednesday that her decision to discontinue proceedings against X was part of an effort to “consolidate actions,” including “litigation across multiple cases.” She ultimately determined that dropping the case against X would be the “option likely to achieve the most positive outcome for the online safety of all Australians, especially children.”

“Our sole goal and focus in issuing our removal notice was to prevent this extremely violent footage from going viral, potentially inciting further violence and inflicting more harm on the Australian community,” Inman-Grant said, still defending the order despite dropping it.

In court, X’s lawyer Marcus Hoyne had pushed back on such logic, arguing that the eSafety regulator’s mission was “pointless” because “footage of the attack had now spread far beyond the few dozen URLs originally identified,” the Australian Broadcasting Corporation reported.

“I stand by my investigators and the decisions eSafety made,” Inman-Grant said.

Other Australian lawmakers agree the order was not out of line. According to AP News, Australian Minister for Communications Michelle Rowland shared a similar statement in parliament today, backing up the safety regulator while scolding X users who allegedly took up Musk’s fight by threatening Inman-Grant and her family. The safety regulator has said that Musk’s X posts incited a “pile-on” from his followers who allegedly sent death threats and exposed her children’s personal information, the BBC reported.

“The government backs our regulators and we back the eSafety Commissioner, particularly in light of the reprehensible threats to her physical safety and the threats to her family in the course of doing her job,” Rowland said.

Nvidia emails: Elon Musk diverting Tesla GPUs to his other companies

why not just make cars? —

The Tesla CEO is accused of diverting resources from the company again.

Tesla will have to rely on its Dojo supercomputer for a while longer after CEO Elon Musk diverted 12,000 Nvidia GPU clusters to X instead.

Tesla

Elon Musk is yet again being accused of diverting Tesla resources to his other companies. This time, it’s high-end H100 GPU clusters from Nvidia. CNBC’s Lora Kolodny reports that while Tesla ordered these pricey computers, emails from Nvidia staff show that Musk instead redirected 12,000 GPUs to be delivered to his social media company X.

It’s almost unheard of for a profitable automaker to pivot its business into another sector, but that appears to be the plan at Tesla as Musk continues to say that the electric car company is destined to be an AI and robotics firm instead.

Does Tesla make cars or AI?

That explains why Musk told investors in April that Tesla had spent $1 billion on GPUs in the first three months of this year, almost as much as it spent on R&D, despite being desperate for new models to add to what is now an old and very limited product lineup that is suffering rapidly declining sales in the US and China.

Despite increasing federal scrutiny here in the US, Tesla has reduced the price of its controversial “full self-driving” assist, and the automaker is said to be close to rolling out the feature in China. (Questions remain about how many Chinese Teslas would be able to utilize this feature given that a critical chip was left out of 1.2 million cars built there during the chip shortage.)

Perfecting this driver assist would be very valuable to Tesla, which offers FSD as a monthly subscription as an alternative to a one-off payment. The profit margins for subscription software services vastly outstrip the margins Tesla can make selling physical cars, which dropped to just 5.5 percent for Q1 2024. And Tesla says that massive GPU clusters are needed to develop FSD’s software.

Isn’t Tesla desperate for Nvidia GPUs?

Tesla has been developing its own in-house supercomputer for AI, called Dojo. But Musk has previously said that computer could be redundant if Tesla could source more H100s. “If they could deliver us enough GPUs, we might not need Dojo, but they can’t because they’ve got so many customers,” Musk said during a July 2023 investor day.

Which makes his decision to have his other companies jump the queue all the more notable. In December, an internal Nvidia memo seen by CNBC said, “Elon prioritizing X H100 GPU cluster deployment at X versus Tesla by redirecting 12k of shipped H100 GPUs originally slated for Tesla to X instead. In exchange, original X orders of 12k H100 slated for Jan and June to be redirected to Tesla.”

X and the affiliated xAI are developing generative AI products like large language models.

Not the first time

This is not the first time that Musk has been accused of diverting resources (and his time) from publicly held Tesla to his other privately owned enterprises. In December 2022, US Sen. Elizabeth Warren (D-Mass.) wrote to Tesla asking the company to explain whether Musk was diverting Tesla resources to X (then called Twitter):

This use of Tesla employees raises obvious questions about whether Mr. Musk is appropriating resources from a publicly traded firm, Tesla, to benefit his own private company, Twitter. This, of course, would violate Mr. Musk’s legal duty of loyalty to Tesla and trigger questions about the Tesla Board’s responsibility to prevent such actions, and may also run afoul of other “anti-tunneling rules that aim to prevent corporate insiders from extracting resources from their firms.”

Musk giving time meant for (and compensated by) Tesla to SpaceX, X, and his other ventures was also highlighted as a problem by the plaintiffs in a successful lawsuit to overturn a $56 billion stock compensation package.

And last summer, the US Department of Justice opened an investigation into whether Musk used Tesla resources to build a mansion for the CEO in Texas; the probe has since expanded to cover behavior stretching back to 2017.

These latest accusations of misuse of Tesla resources come at a time when Musk is asking shareholders to reapprove what is now a $46 billion stock compensation plan.
