Author name: Kris Guyer


Internet picks “werewolf clawing off its own shirt” as new Michigan “I Voted” sticker

RAWR —

“It was just so hot in that voting booth!”

Voting really feels good to this werewolf.

State of Michigan

You can’t just ask the Internet to vote on something and assume you’ll get a “normal” result.

The town of Fort Wayne, Indiana, learned this the hard way in 2011, when an online vote to name a new government center in town went with “Harry Baals.” Though Mr. Baals was in fact a respected former mayor of the town back in the 1930s, contemporary officials weren’t convinced that his name was chosen out of merely historical interest.

Or there was the time in 2015 when the British Columbia Ferry Service asked Internet users to name its newest ships and perhaps win a $500 prize. Contest entries included:

  • Spirit of The WalletSucker
  • The Floating Crapsickle
  • Royal Docksitter
  • The Coastal Corruption
  • HMS Cantafford
  • Queen of the Damned

Or again—and perhaps most famously—there was the UK government’s gloriously naive decision in 2016 to let the Internet pick a new name for a £200 million polar research vessel. And 124,109 members of the general public chose… Boaty McBoatface. (This was later overridden by the government, which named the ship the RRS Sir David Attenborough instead, but one of the boat’s remotely operated underwater vehicles was named Boaty McBoatface as a consolation prize.)

Even the not-quite-bleeding-edge-of-tech New York Times recognized in its headline on the story that this is “What You Get When You Let the Internet Decide.”

So, despite many years of cautionary tales, the state of Michigan this year launched a contest to design new “I Voted” stickers. (NB: For our non-American readers, these stickers are often given out when you vote in elections so that you can shame any nonvoting friends, family, and colleagues with your civic virtue.)

The state commissioned designs from local school kids, no doubt anticipating that said designs would feature things like heartwarming drawings of the Michigan mitten. And they let the Internet weigh in on the results.

More than 57,000 people did so—and that’s why voters across the state, once they cast a ballot in this year’s presidential election, might be handed a round sticker featuring a werewolf ripping its own shirt to shreds as it throws its head back and howls like a maniac in front of an American flag. And it is glorious.

Why not?

This piece of inspired artwork came from the mind and pen of 12-year-old Jane Hynous of Grosse Pointe Farms. Though the contest selected nine winners, Hynous’ design beat every other entry by a wide margin. (See all winners here.)

The New York Times called Hynous to talk about the sticker and received this terrific quote:

“I didn’t want to do something that usually you think of when you think of Michigan,” she said. “I was like, ‘Why not make a wolf pulling his shirt off?’”

Why not, indeed? Clearly, the Internet has delivered on this one.

Election clerks can also order the traditional design. But why?

Michigan plans to print a million stickers, which will feature all nine winning designs, and local election clerks will need to order specific designs from the state. (They can also order the original, boring American flag “I Voted” stickers. But why would they?)

So if you live in Michigan, and if this November you want your shirt adorned with an insane werewolf celebrating the vote you just cast, now is the time to let your local clerk know.

Still, despite these great designs, I can’t help but feel that an opportunity was lost. No “Votey McVoteface”? Perhaps in 2028.



AI and the Technological Richter Scale

The Technological Richter scale is introduced about 80% of the way through Nate Silver’s new book On the Edge.

A full review is in the works (note to prediction markets: this post does NOT count as a review on its own, though it will form part of a future review), but this concept seems highly useful, stands on its own, and I want a reference post for it. Nate skips around his chapter titles and timelines, so why not do the same here?

Nate Silver, On the Edge (location 8,088 on Kindle): The Richter scale was created by the physicist Charles Richter in 1935 to quantify the amount of energy released by earthquakes.

It has two key features that I’ll borrow for my Technological Richter Scale (TRS). First, it is logarithmic. A magnitude 7 earthquake is actually ten times more powerful than a mag 6. Second, the frequency of earthquakes is inversely related to their Richter magnitude—so 6s occur about ten times more often than 7s. Technological innovations can also produce seismic disruptions.

Let’s proceed quickly through the lower readings of the Technological Richter Scale.

  1. Like a half-formulated thought in the shower.

  2. Is an idea you actuate, but never disseminate: a slightly better method to brine a chicken that only you and your family know about.

  3. Begins to show up in the official record somewhere, an idea you patent or make a prototype of.

  4. An invention successful enough that somebody pays for it; you sell it commercially or someone buys the IP.

  5. A commercially successful invention that is important in its category, say, Cool Ranch Doritos, or the leading brand of windshield wipers.

  6. An invention can have a broader societal impact, causing a disruption within its field and some ripple effects beyond it. A TRS 6 will be on the short list for technology of the year. At the low end of the 6s (a TRS 6.0) are clever and cute inventions like Post-it notes that provide some mundane utility. Toward the high end (a 6.8 or 6.9) might be something like the VCR, which disrupted home entertainment and had knock-on effects on the movie industry. The impact escalates quickly from there.

  7. One of the leading inventions of the decade and has a measurable impact on people’s everyday lives. Something like credit cards would be toward the lower end of the 7s, and social media a high 7.

  8. A truly seismic invention, a candidate for technology of the century, triggering broadly disruptive effects throughout society. Canonical examples include automobiles, electricity, and the internet.

  9. By the time we get to TRS 9, we’re talking about the most important inventions of all time, things that inarguably and unalterably changed the course of human history. You can count these on one or two hands. There’s fire, the wheel, agriculture, the printing press. Although they’re something of an odd case, I’d argue that nuclear weapons belong here also. True, their impact on daily life isn’t necessarily obvious if you’re living in a superpower protected by its nuclear umbrella (someone in Ukraine might feel differently). But if we’re thinking in expected-value terms, they’re the first invention that had the potential to destroy humanity.

  10. Finally, a 10 is a technology that defines a new epoch, one that alters not only the fate of humanity but that of the planet. For roughly the past twelve thousand years, we have been in the Holocene, the geological epoch defined not by the origin of Homo sapiens per se but by humans becoming the dominant species and beginning to alter the shape of the Earth with our technologies. AI wresting control of this dominant position from humans would qualify as a 10, as would other forms of a “technological singularity,” a term popularized by the computer scientist Ray Kurzweil.
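The two features Silver borrows are easy to sketch numerically. A minimal illustration of the logarithmic step and the inverse frequency relationship (the function names and the choice of baseline are mine for illustration, not from the book):

```python
# Numeric sketch of the two properties Silver borrows from the Richter
# scale. The 10x-per-step factor is the scale's definition; everything
# else here is illustrative, not data from the book.

def relative_impact(magnitude: float, baseline: float = 6.0) -> float:
    """Impact relative to a baseline magnitude: 10x per whole step."""
    return 10.0 ** (magnitude - baseline)

def relative_frequency(magnitude: float, baseline: float = 6.0) -> float:
    """Frequency is inversely related: each step up is ~10x rarer."""
    return 10.0 ** (baseline - magnitude)

# A TRS 7 is ten times the impact of a TRS 6, and about a tenth as common;
# a TRS 8 is a hundred times the impact of a TRS 6.
print(relative_impact(7.0), relative_frequency(7.0), relative_impact(8.0))
```

This also makes the later quibbles concrete: arguing whether the VCR is a 6.8 or a 6.9 is a dispute over tens of percent, while arguing whether blockchain is a 6 or a 7 is a dispute over a full factor of ten.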

One could quibble with some of these examples. Credit cards as a low 7, below social media, while the VCR is a 6.85 and a Post-it note gets you to 6? One could also worry that the lower end of the scale should be condensed to make room at the top.

Later he puts ‘the blockchain’ in the 7s, and I’m going to have to stop him right there. No. Blockchain is not on par with credit cards or mobile phones (each of which is reasonable as a 7 and plausible as an 8); that makes no sense, and it also isn’t more important than (for example) the microwave oven, which he places at 6. Yes, crypto people like to get excited, but everyone chill.

I ran a poll to sanity check this, putting Blockchain up against various 6s.

This was a wipe-out, sufficient that I’m sending blockchain down to at best a low 6.

Claude estimates the microwave has already saved $4.5 trillion in time value alone, and you should multiply that several times over for other factors. The total market cap of crypto is $2 trillion, and that number is super fake given how illiquid so many coins are (e.g., ~20% of Bitcoin, or 10% of the overall total, likely died with Satoshi, and so on). And if you tell me there’s so much other value created, and crypto is going to transform the world, let me find that laugh track.
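As a back-of-envelope check, here is that comparison using only the figures quoted above (Claude’s $4.5 trillion estimate, the ~$2 trillion crypto market cap, and the ~10 percent of the total presumed lost with Satoshi). All of these are the post’s rough numbers, not authoritative data:

```python
# Back-of-envelope arithmetic for the microwave-vs-crypto comparison.
# Every figure is taken from the surrounding text and is rough at best.

microwave_time_value = 4.5e12                 # estimated time savings, USD
crypto_market_cap = 2.0e12                    # total crypto market cap, USD
lost_with_satoshi = 0.10 * crypto_market_cap  # ~10% of the total, likely gone

effective_crypto = crypto_market_cap - lost_with_satoshi
ratio = microwave_time_value / effective_crypto

# Even before multiplying the microwave figure "several times over,"
# it exceeds crypto's effective capitalization.
print(f"microwave time value is about {ratio:.1f}x crypto's effective cap")
```

Even on crypto’s most generous accounting, the humble microwave comes out comfortably ahead.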

Microwaves then correctly got crushed when put up against real 7s.

I think this is sleeping on credit cards, at least if you include debit cards. Smooth payment rails are a huge deal. And electricity rather correctly smoked The Internet and automobiles (and air conditioning). This game is fun.

The overall point is also clear.

What is the range of plausible scores on this scale for generative AI?

The (unoriginal) term that I have used a few times, for the skeptic case, the AI-fizzle world, is that AI could prove to be ‘only internet big.’ In that scenario, GPT-5-level models are about as good as it gets, and they don’t enable dramatically better things than today’s GPT-4-level models. We then spend a long time getting the most out of what those have to offer.

I think that even if we see no major improvements from there, an 8.0 is already baked in. Counting the improvements I am very confident we can get about an 8.5.

In that scenario, we distill what we already have, costs fall by at least one additional order of magnitude, we build better scaffolding and prompting techniques, we integrate all this into our lives and civilization.

I asked Twitter, how advanced would frontier models have to get, before they were at least Internet Big, or a solid 8?

I think that 5-level models, given time to have their costs reduced and to be properly utilized, will inevitably be at least internet big, but only 45% of respondents agreed. I also think that 5-level models are inevitable – even if things are going to peter out, we should still have that much juice left in the tank.

Whereas, and I think this is mind-numbingly obvious, any substantial advances beyond that get us at least into the 9s, which probably gets us ASI (or an AGI capable of enabling the building of an ASI) and therefore is at least a 10.0. You could argue that since it changes the destiny of the universe instead of only Earth, it’s 11.0. Even if humanity retains control over the future and we get the outcomes we hope to get, creating things smarter than we are changes everything, well beyond things like agriculture or fire.

There are otherwise clearly intelligent people who seem to sincerely disagree with this. I try to understand it. On some levels, sometimes, I manage to do that.

On other levels, no, that is completely bonkers nuts, Obvious Nonsense.

Nate Silver offers this chart.

As I noted, I think you can move that ‘AI has already passed this threshold’ line up.

No, it hasn’t been as impactful as those things at 7 yet, but if civilization continues relatively normally that is inevitable.

I think this chart is actually overly pessimistic at the 8-level, and would be at the 7-level if that was still a possibility. Put me in the ‘probably extraordinary positive’ category if we stay there. The only worry is if AI at that level enables some highly disruptive technology with no available defenses, but I’d be 70%+ we’re on the extreme left in the 8-range. At the 7-range I’d be 90%+ we’re at least substantially good, and then it’s a question of how extraordinary you have to be to count.

If you tell me we got stuck at 9-level, the obvious question is how we got a 9 and then did not get a 10 soon thereafter. It’s actually tricky to imagine one without the other. But let us suppose that via some miracle of physics that is where AI stops? My guess is the distribution above is reasonable, but it’s hard because every time I try to imagine that world my brain basically shoots back ‘does not compute.’

What if it does go to 10-level, fully transformational AI? Nate nails the important point, which is that the result is either very, very good or it is very, very bad. The chart above puts the odds at 50/50. I think the odds are definitely against us here. When I think about 10-level scenarios where we survive, it always involves something going much better than I expect, and usually it involves multiple such somethings. I think we are still drawing live, but a huge portion of that is Model Error – the possibility that I am thinking about this wrong, missing or wrong about very important things.

The baseline scenario is that we create things smarter than ourselves, control over the future then rapidly passes to those smarter things, and this does not lead to good outcomes for humanity, or probably to any humans surviving for long.

No, that does not require any kind of ‘sharp left turn’ or particular failure mode, it is simply what happens under competition, when everyone is under pressure to turn more and more things over to more and more ruthless AIs to compete against each other.

That even assumes we ‘solve alignment’ sufficiently to even get that far, and we very much should not be assuming that.

One can also get into all the other problems and obstacles, many of which Eliezer Yudkowsky covers under A List of Lethalities.

Almost all arguments I hear against this seem (to put it politely) extremely poor, and highly motivated. Mostly they do not even consider the real issues involved at all.

Most arguments that everything will work out fine, that are not nonsense, are not arguing we’ll get a 10.0 and survive it. Mostly they are arguing we do not get a 10.0.

It would very much help the discourse if people would be clearer on this. If they would say ‘I do not expect transformational AI, I think it is about an 8.0, but I agree that if it is going to hit 9+ then we should be very worried’ then we could focus on the actual crux of the matter.

Or if we could hear such arguments, respond with ‘so you don’t think AI is transformational, it won’t go beyond 8.0?’ and they could say yes. That works too.

What are some of the most common arguments against transformational AI?

It is very hard to state many of the common arguments without strawmanning, or sounding like one is strawmanning. But this is a sincere attempt to list as many of them as possible, and to distill their actual core arguments.

  1. We are on an S-curve of capabilities, and near the top, and that will be that.

  2. A sufficiently intelligent computer would take too much [compute, power, data].

  3. Look how stupid current AIs are, they can’t even solve [some simple thing].

  4. Intelligence does not much matter, so it won’t make much difference.

  5. Intelligence does not much matter because we won’t let it have physical impacts.

  6. Intelligence does not much matter without [X] which AIs will always lack.

  7. Intelligence does not much matter because everything is already invented.

  8. Intelligence has a natural limit about where humans are.

  9. Intelligence of AI has a natural limit at the level of your training data.

  10. AI can only reproduce what is in its training data.

  11. AI can only recombine the types of things in its training data.

  12. AI cannot be creative, or invent new things.

  13. AI is dumb, it can only do narrow things, that’s not intelligence.

  14. AI does not have goals, or does not have desires, or memory, and so on.

  15. AI will always be too unreliable to do anything important.

  16. AI will have some fatal weakness, and you can be Kirk.

  17. Intelligence is not real, there is only domain knowledge and skill.

  18. Humans are special, a computer could never [do X].

  19. Humans are special because we are embodied.

  20. Humans are special because we are [sentient, conscious, ensouled, common sense].

  21. Humans are special because we are so much more efficient somehow.

  22. Humans are special because ultimately humans will only trust or want humans.

  23. There is a lot of hype in AI, so it is all fake.

  24. Everything is fake, so AI is fake too.

  25. AI is a tool, and will always remain a mere tool.

  26. AI is just math, math is harmless.

  27. AI can only do what it can do now, we can ignore speculative claims.

  28. AI that does that sounds like science fiction, so we can ignore your claims.

  29. AI that does that implies an absurd future, so we can ignore your claims.

  30. AGI is not well defined so it will never exist.

  31. Arguments involving religious beliefs or God or aliens or what not.

  32. Arguments from various science fiction worlds where AI does not do this.

  33. Humanity will agree to stop building AI, we’re not that stupid.

  34. No, that whole thing is stupid, and I don’t like your stupid face.

I sincerely wish I was kidding here. I’m not.

A few of these are actual arguments that give one pause. It is not so implausible that we are near the top of an S-curve, that in some sense we don’t have the techniques and training data to get much more intelligence out of the AIs than we already have. Diminishing returns could set in, the scaling laws could break, and AI would get more expensive a lot faster than it makes everything easier, and progress stalls. The labs say there are no signs of it, but that proves nothing. We will, as we always do, learn a lot when we see the first 5-level models, or when we fail to see them for sufficiently long.

Then there are those that perhaps made sense in the past, but where the hypothesis has been falsified. Yes, humans say they want to trust humans and not AIs, but when humans in the loop are inconvenient, we already know what they will do, and of course if the AI in question was sufficiently capable it would not matter anyway.

Most tragically, there was a time it seemed plausible we would simply not build it, exactly because we don’t want a 10.0-level event. That seems like a dim hope now, although we should still consider trying to postpone that event until we are ready.

Others are more absurd. I am especially frustrated by the arguments that I call Intelligence Denialism – that if you made something much smarter than a human, that it wouldn’t be able to do anything too impactful, or that intelligence is an incoherent concept. No, it couldn’t fool or manipulate me, or people in general, or make tons of money. No, it wouldn’t be able to run things much more productively, or invent new techniques, or whatever. And so on.

Many arguments accidentally disprove humans, or human civilization.

Then there are the ones that are word salads, or Obvious Nonsense, or pointing to obstacles that could not possibly bind over the medium term if nothing else was standing in the way, or aren’t arguments for the point in question. For example, you say true intelligence requires embodiment? I mean, I don’t see why you think that, but if true then there is an obvious fix. True intelligence won’t matter because it won’t have a body? Um, you can get humans to do things by offering them money.

Or my favorite, the Marc Andreessen classic ‘AI is just math,’ to which the response is ‘so is the universe, and also so are you and me, what are you even talking about.’

I tried several times to write out taxonomies of the arguments that transformational AI will turn out fine. What I discovered was that going into details here rapidly took this post beyond scope, and getting it right is important but difficult.

This is very much not an attempt to argue for or against existential risk, or any particular conditional probability of doom. It is primarily here to contrast this list with the above list of arguments against transformational AI.

Briefly, one might offer the following high-level taxonomy.

  1. Arguments that human extinction is fine. (e.g. either AIs will have value inherently, AIs will carry our information and values, something else is The Good and can be maximized without us, or humanity is net negative, and so on.)

  2. Arguments from the AIs using advanced decision theories (e.g. FDT over CDT).

  3. Arguments from moral realism, fully robust alignment, that ‘good enough’ alignment is good enough in practice, and related concepts.

  4. Arguments from there being a benevolent future singleton or de facto singleton.

  5. Arguments from coordination being hard, and AIs not being able to coordinate, and that we will be protected by game theory (e.g. because AIs will be using CDT).

  6. Arguments from coordination being easy, and AIs, humans or both being thus able to coordinate far better than humans ever have. Also see de facto singleton.

  7. Arguments we have to gamble on it being fine, given our other options. So we therefore must assume everything will be fine, or act as if it will be, or treat that distribution of outcomes as overall fine, or similar.

  8. Arguments from science fiction or otherwise reasoning from what feels plausible, or from humans retaining some comparative advantage somehow.

  9. Arguments from blind faith, forms of just world hypothesis, failure to be able to imagine failure, or our general ability to figure things out and muddle through.

  10. Arguments from good outcomes being so cheap the AIs will allow them.

  11. Arguments that assume the default outcome must be fine, and then arguing against a particular bad scenario, which proves things will be fine.

  12. Arguments from uncertainty, therefore you can dismiss such concerns.

  13. Arguments from authority, or ad hominem, that worried people are bad.

I list ‘extinction is fine’ first because it is a values disagreement, and because it is important to realize a lot of people actually take such positions, and that this has had important impacts (e.g. fear of those who believe such arguments building AGI first motivating Musk to help create OpenAI).

The rest are in a combination of logical order and a roughly descending order of plausibility and of quality of the best of each class of arguments. Point seven is also unique, and is roughly the Point of No Return, beyond which the arguments get a lot worse.

The argument of whether we should proceed, and in what way, is of course vastly more complex than this and involves lots of factors on all sides.

A key set of relatively good arguments make the case that alignment of AIs and their objectives and what they do to what we want AIs to be doing is somewhere between easy (or even happens ‘by default’) and extremely difficult but solvable in time with the right investments. Reasonable people disagree on the difficulty level here, and I am of the position the problem is probably very difficult but not as impossible as some others (e.g. Yudkowsky) think.

Most (but not all!) people making such arguments then fail to grapple with what I tried calling Phase 2: after you know how to get AIs to do what you want them to do, you still have to get to an equilibrium where this turns out well for humans. The default outcome of ‘humans all have very capable AIs that do what they want, and the humans are otherwise free’ is ‘the humans all turn everything over to their AIs and set them loose to compete,’ because anyone not doing that loses out, on an individual level and on a corporate or national level.

What we want here is a highly ‘unnatural’ result, for the less competitive, less intelligent thing (the humans) to stay on top or at least stick around and have a bunch of resources, despite our inability to earn them in the marketplace, or ability to otherwise compete for them or for the exercise of power. So you have to find a way to intervene on the situation that fixes this, while preserving what we care about, that we can collectively agree to implement. And wow, that seems hard.

A key category is point 11: The argument that by default, creating entities far smarter, cheaper, more efficient, more competitive and more capable than ourselves will lead to good outcomes for us. If we can dismiss a particular bad scenario, we will definitely be in a good scenario. Then they choose a particular bad scenario, and find a step where they can dismiss it – or, they simply say ‘there are a lot of steps here, and one of them will not happen this way.’ Then they say since the bad scenario won’t happen, things will go well.

A remarkably large percentage of arguments for things being fine are either point 1 (human extinction is fine), point 11 (this particular bad end is implausible so things will be good) or points 12 and 13.



Intel Core Ultra 200V promises Arm-beating battery life without compatibility issues



Intel has formally announced its first batch of next-generation Core Ultra processors, codenamed “Lunar Lake.” The CPUs will be available in PCs beginning on September 24.

Formally dubbed “Intel Core Ultra (Series 2),” these CPUs follow up the Meteor Lake Core Ultra CPUs that Intel has been shipping all year. They promise modest CPU performance increases alongside big power efficiency and battery life improvements, much faster graphics performance, and a new neural processing unit (NPU) that will meet Microsoft’s requirements for Copilot+ PCs, which use local rather than cloud processing for generative AI and machine-learning features.

Intel Core Ultra 200V

The high-level enhancements coming to the Lunar Lake Core Ultra chips.

Intel

The most significant numbers in today’s update are actually about battery life: Intel compared a Lunar Lake system and a Snapdragon X Elite system from the “same OEM” using the “same chassis” and the same-sized 55 WHr battery. In the Procyon Office Productivity test, the Intel system lasted longer, though the Qualcomm system lasted longer on a Microsoft Teams call.

If Intel’s Lunar Lake laptops can match or even get close to Qualcomm’s battery life, it will be a big deal for Intel; as the company repeatedly stresses in its slide deck, x86 PCs don’t have the lingering app, game, and driver compatibility problems that Arm-powered Windows systems still do. If Intel can improve its battery life more quickly than Microsoft, Arm chipmakers, and app developers can improve software compatibility, some of the current best arguments in favor of buying an Arm PC will go away.

  • Intel is trying to fight back against Qualcomm’s battery life advantage in Windows PCs.

  • Many of Lunar Lake’s changes were done in service of reducing power use.

  • Here, Intel claims a larger advantage in battery life against both Qualcomm and AMD, though there are lots of variables that determine battery life, and we’ll need to see more real-world testing to back these numbers up.

(Slides: Intel)

Intel detailed many other Lunar Lake changes earlier this summer when it announced high-level performance numbers for the CPU, GPU, and NPU.

Like Meteor Lake, the Lunar Lake processors are a collection of silicon chiplets (also called “tiles”) fused into one large chip using Intel’s Foveros packaging technology. The big difference is that there are fewer functional tiles—two, instead of four, not counting the blank “filler tile” or the base tile that ties them all together—and that both of those tiles are now being manufactured at Intel competitor TSMC, rather than using a mix of TSMC and Intel manufacturing processes as Meteor Lake did.

Intel also said it would be shipping Core Ultra CPUs with the system RAM integrated into the CPU package, which Apple also does for its M-series Mac processors; Intel says this will save quite a bit of power relative to external RAM soldered to the laptop’s motherboard.

Keep that change in mind when looking at the list of initial Core Ultra 200V-series processors Intel is announcing today. There are technically nine separate CPU models here, but because memory is integrated into the CPU package, Intel is counting the 16GB and 32GB versions of the same processor as two separate model numbers. The exception is the Core Ultra 9 288V, which is only available with 32GB of memory.



Jaguar I-Pace fire risk leads to recall, instructions to park outdoors

fold tab to recall —

The problem is similar to one that affected the Chevrolet Bolt in 2021.

Jaguar sourced the I-Pace’s battery cells from LG Energy Solutions. But now there’s a problem with some of them.

Jonathan Gitlin

The Jaguar I-Pace deserves more credit. When it debuted in 2018, it was one of only two electric vehicles on sale that could offer Tesla-rivaling range. The other was the much more plebeian Chevrolet Bolt, which was cheaper but nowhere near as luxurious, nor as enjoyable to drive. Now, some I-Pace and Bolt owners have something else in common, as Jaguar issues a recall for some model-year 2019 I-Paces due to a fire risk, probably caused by badly folded battery anode tabs.

The problem doesn’t affect all I-Paces, just those built between January 9, 2018, and March 14, 2019—2,760 cars in total in the US. To date, three fires have been reported following software updates that, according to Jaguar’s recall report, do not provide “an appropriate level of protection for the 2019MY vehicles in the US.”

Although Jaguar’s investigation is still ongoing, it says that its battery supplier (LG Energy Solutions) is inspecting some battery modules that were identified by diagnostic software as “having characteristics of a folded anode tab.” In 2021, problems with LG batteries—in this case, folded separators and torn anode tabs—resulted in Chevrolet recalling every Bolt on the road and replacing their batteries under warranty at a cost of more than $1.8 billion.

For now, the Jaguar recall is less drastic. A software update will cap the affected cars’ maximum charge at 80 percent. Jaguar also says that, as with other OEMs that have conducted recalls for similar problems, the patched I-Paces should be parked away from structures for 30 days post-recall and should be charged outdoors where possible.



Asus ROG Ally X review: Better performance and feel in a pricey package

Faster, grippier, pricier, and just as Windows-ed —

A great hardware refresh, but it stands out for its not-quite-handheld cost.


It’s hard to fit the performance-minded but pricey ROG Ally X into a simple product category. It’s also tricky to fit it into a photo, at the right angle, while it’s in your hands.

Kevin Purdy

The first ROG Ally from Asus, a $700 Windows-based handheld gaming PC, performed better than the Steam Deck, but it did so through notable compromises on battery life. The hardware also had a first-gen feel and software jank from both Asus’ own wraparound gaming app and Windows itself. The Ally asked an awkward question: “Do you want to pay nearly 50 percent more than you’d pay for a Steam Deck for a slightly faster but far more awkward handheld?”

The ROG Ally X makes that question more interesting and less obvious to answer. Yes, it’s still a handheld that’s trying to hide Windows annoyances, and it’s still missing trackpads, without which some PC games just feel bad. And (review spoiler) it still eats a charge faster than the Steam Deck OLED on less demanding games.

But the improvements Asus made to this X sequel are notable, and its new performance stats make it more viable for those who want to play more demanding games on a rather crisp screen. At $800, or $100 more than the original ROG Ally with no extras thrown in, you have to really, really want the best possible handheld gaming experience while still tolerating Windows’ awkward fit.

Asus

What’s new in the Ally X

Specs at a glance: Asus ROG Ally X
Display 7-inch IPS panel: 1920×1080, 120 Hz, 7 ms, 500 nits, 100% sRGB, FreeSync, Gorilla Glass Victus
OS Windows 11 (Home)
CPU AMD Ryzen Z1 Extreme (Zen 4, 8 cores, 24MB cache, up to 5.10 GHz, 9-30 W, as reviewed)
RAM 24GB LPDDR5X 6400 MHz
GPU AMD Radeon RDNA3, 2.7 GHz, 8.6 Teraflops
Storage M.2 NVMe 2280 Gen4x4, 1TB (as reviewed)
Networking Wi-Fi 6E, Bluetooth 5.2
Battery 80 Wh (65W max charge)
Ports USB-C (3.2 Gen2, DP 1.4, PD 3.0), USB-C (DP, PD 3.0), 3.5 mm audio, microSD
Size 11×4.3×0.97 in. (280×111×25 mm)
Weight 1.49 lbs (678 g)
Price as reviewed $800

The ROG Ally X is essentially the ROG Ally with a bigger battery packed into a shell that is impressively not much bigger or heavier, with more storage and RAM, and with two USB-C ports instead of one USB-C port plus a proprietary mobile connector that almost nobody used. Asus reshaped the device and changed the face-button feel, and it all feels noticeably better, especially now that gaming sessions can last longer. The company also moved the microSD card slot so that your cards don’t melt, which is nice.

There’s a bit more to each of those changes that we’ll get into, but that’s the short version. Small spec bumps wouldn’t have changed much about the ROG Ally experience, but the changes Asus made for the X version do move the needle. Having more RAM available has a sizable impact on the frame performance of demanding games, and you can see that in our benchmarks.

We kept the LCD Steam Deck in our benchmarks because its chip has roughly the same performance as its OLED upgrade. But it’s really the Ally-to-Ally-X comparisons that are interesting; the Steam Deck has been fading back from AAA viability. If you want the Ally X to run modern, GPU-intensive games as fast as is feasible for a battery-powered device, it can now do that a lot better—for longer—and feel a bit better while you do.

The ROG Ally X answers the question “Why not just buy a gaming laptop?” better than its predecessor did. At $800 and up, you might still ask how much portability is worth to you. But the Ally X is no longer as much of a niche (a Windows-based handheld) inside a niche (moderately higher-end handhelds).

I normally would not use this kind of handout image with descriptive text embedded, but Asus is right: the ROG Ally X is indeed way more comfortable (just maybe not all-caps).

Asus

How it feels using the ROG Ally X

My testing of the ROG Ally X consisted of benchmarks, battery testing, and playing some games on the couch. Specifically: Deep Rock Galactic: Survivor and Tactical Breach Wizards on the device’s lowest-power setting (“Silent”), Deathloop on its medium-power setting (“Performance”), and Shadow of the Erdtree on its all-out “Turbo” mode.

All four of those games worked mostly fine, but DRG: Survivor pushed the boundaries of Silent mode a bit when its levels got crowded with enemies and projectiles. Most games could automatically figure out a decent settings scheme for the Ally X. If a game offers AMD’s FSR (FidelityFX Super Resolution) upscaling, you should at least try it; it’s usually a big boon to a game running on this handheld.

Overall, the ROG Ally X was a device I didn’t notice when I was using it, which is the best recommendation I can make. If anything, I noticed that the 1080p screen was brighter, closer to the glass, and sharper than the original Steam Deck’s LCD. At handheld distance, the difference between 800p and 1080p isn’t huge to me, but the difference between LCD and OLED is more noticeable. (Of course, an OLED version of the Steam Deck was released late last year.)

Sunrise alarm clock didn’t make waking up easier—but made sleeping more peaceful

  • The Hatch Restore 2 with one of its lighting options on.

    Scharon Harding

  • The time is visible here, but you can disable that.

    Scharon Harding

  • Here’s the clock with a light on in the dark.

    Scharon Harding

  • A closer look.

    Scharon Harding

  • The clock’s backside.

    Scharon Harding

To say “I’m not a morning person” would be an understatement. Not only is it hard for me to be useful in the first hour (or so) of being awake, but it’s hard for me to wake up. I mean, really hard.

I’ve tried various recommendations and tricks: I’ve set multiple alarms and had coffee ready and waiting, and I’ve put my alarm clock far from my bed and kept my blinds open so the sun could wake me. But I’m still prone to sleeping through my alarm or hitting snooze until the last minute.

The Hatch Restore 2, a smart alarm clock with lighting that mimics sunrises and sunsets, seemed like a technologically savvy approach to realizing my dreams of becoming a morning person.

After about three weeks, though, I’m still no early bird. But the smart alarm clock is still earning a spot on my nightstand.

How it works

Hatch refers to the Restore 2 as a “smart sleep clock.” That’s marketing speak, but to be fair, the Restore 2 does help me sleep. A product page describes the clock as targeting users’ “natural circadian rhythm, so you can get your best sleep.” There’s some reasoning here. Circadian rhythms are “the physical, mental, and behavioral changes an organism experiences over a 24-hour cycle,” per the National Institute of General Medical Sciences (NIGMS). Circadian rhythms affect our sleep patterns (as well as other biological aspects, like appetite), NIGMS says.

The Restore 2’s pitch is twofold: soothing lighting that you can program to change gradually as bedtime approaches (getting darker, for example), paired with an alarm that simulates a sunrise, slowly brightening to help you wake up more naturally. You can also set the clock to play various soothing sounds while you’re winding down, while you’re sleeping, and/or as your alarm sound.

The clock needs a Wi-Fi connection and its companion app for setup. The free app has plenty of options, including sounds, colors, and tips for restful sleep (a $5-per-month subscription adds extra features and sounds, but thankfully, it’s optional).

Out like a light

This is, by far, the most customizable alarm clock I’ve ever used. The app was a little overwhelming at first, but once I got used to it, it was comforting to be able to set Routines or different lighting/sounds for different days. For example, I set mine to play two hours of “Calming Singing Bowls” with a slowly dimming sunset effect when I press the “Rest” button. Once I press the button again, the clock plays ocean sounds until my alarm goes off.

  • Routines in the Restore 2 app.

    Scharon Harding/Hatch

  • Setting a sunrise alarm part one.

    Scharon Harding/Hatch

  • Setting a sunrise alarm part two. (Part three would show a volume slider).

    Scharon Harding/Hatch

I didn’t think I needed a sleeping aid—I’m really good at sleeping. But I was surprised at how the Restore 2 helped me fall asleep more easily by blocking unpleasant noises. In my room, the biggest culprit is an aging air conditioner that’s loud while on, and it gets even more uproarious when automatically turning itself on and off (a feature that has become a bug I can’t disable).

As I’ve slept these past weeks, the clock has served as a handy, adjustable colored light to have on in the evening or as a cozy nightlight. The ocean noises have been blending in with the AC’s sounds, clearing my mind. I’d sleepily ponder if certain sounds I heard were coming from the clock or my AC. That’s the dull, fruitless thinking that quickly gets me snoozing.

Playing sounds to fall asleep is obviously not new (some of my earliest memories are of falling asleep to a Lady and the Tramp cassette). Today, many would prefer using an app or playing a long video over getting a $170 alarm clock for the experience. Still, the convenience of setting repeating Routines on a device dedicated to being a clock turned out to be an asset. It’s also nice to be able to start a Routine by pressing an on-device button rather than having to use my phone to play sleeping sounds.

But the idea of the clock’s lighting and sounds helping me wind down in the hours before bed would only succeed if I were by the clock while winding down. I usually spend my last waking moments in my living room. So unless I’m willing to change my habits, or get a Restore 2 for the living room, this feature is lost on me.

Harmful “nudify” websites used Google, Apple, and Discord sign-on systems

Major technology companies, including Google, Apple, and Discord, have been enabling people to quickly sign up to harmful “undress” websites, which use AI to remove clothes from real photos to make victims appear to be “nude” without their consent. More than a dozen of these deepfake websites have been using login buttons from the tech companies for months.

A WIRED analysis found 16 of the biggest so-called undress and “nudify” websites using the sign-in infrastructure from Google, Apple, Discord, Twitter, Patreon, and Line. This approach allows people to easily create accounts on the deepfake websites—offering them a veneer of credibility—before they pay for credits and generate images.

While bots and websites that create nonconsensual intimate images of women and girls have existed for years, their number has increased with the introduction of generative AI. This kind of “undress” abuse is alarmingly widespread, with teenage boys allegedly creating images of their classmates. Tech companies have been slow to deal with the scale of the issue, critics say, with the websites ranking highly in search results, paid advertisements promoting them on social media, and apps showing up in app stores.

“This is a continuation of a trend that normalizes sexual violence against women and girls by Big Tech,” says Adam Dodge, a lawyer and founder of EndTAB (Ending Technology-Enabled Abuse). “Sign-in APIs are tools of convenience. We should never be making sexual violence an act of convenience,” he says. “We should be putting up walls around the access to these apps, and instead we’re giving people a drawbridge.”

The sign-in tools analyzed by WIRED, which are deployed through APIs and common authentication methods, allow people to use existing accounts to join the deepfake websites. Google’s login system appeared on 16 websites, Discord’s appeared on 13, and Apple’s on six. X’s button was on three websites, with Patreon and messaging service Line’s both appearing on the same two websites.

WIRED is not naming the websites, since they enable abuse. Several are part of wider networks and owned by the same individuals or companies. The login systems have been used despite the tech companies broadly having rules stating that developers cannot use their services in ways that enable harm or harassment or that invade people’s privacy.

After being contacted by WIRED, spokespeople for Discord and Apple said they have removed the developer accounts connected to their websites. Google said it will take action against developers when it finds its terms have been violated. Patreon said it prohibits accounts that allow explicit imagery to be created, and Line confirmed it is investigating but said it could not comment on specific websites. X did not reply to a request for comment about the way its systems are being used.

In the hours after Jud Hoffman, Discord vice president of trust and safety, told WIRED it had terminated the websites’ access to its APIs for violating its developer policy, one of the undress websites posted in a Telegram channel that authorization via Discord was “temporarily unavailable” and claimed it was trying to restore access. That undress service did not respond to WIRED’s request for comment about its operations.

Nonprofit scrubs illegal content from controversial AI training dataset

After Stanford Internet Observatory researcher David Thiel found links to child sexual abuse material (CSAM) in an AI training dataset, tainting the image generators trained on it, the controversial dataset was immediately taken down in 2023.

Now, the LAION (Large-scale Artificial Intelligence Open Network) team has released a scrubbed version of the LAION-5B dataset called Re-LAION-5B and claimed that it “is the first web-scale, text-link to images pair dataset to be thoroughly cleaned of known links to suspected CSAM.”

To scrub the dataset, LAION partnered with the Internet Watch Foundation (IWF) and the Canadian Center for Child Protection (C3P) to remove 2,236 links that matched with hashed images in the online safety organizations’ databases. Removals include all the links flagged by Thiel, as well as content flagged by LAION’s partners and other watchdogs, like Human Rights Watch, which warned of privacy issues after finding photos of real kids included in the dataset without their consent.

In his study, Thiel warned that “the inclusion of child abuse material in AI model training data teaches tools to associate children in illicit sexual activity and uses known child abuse images to generate new, potentially realistic child abuse content.”

Thiel urged LAION and other researchers scraping the Internet for AI training data to adopt a new safety standard that would better filter out not just CSAM but any explicit imagery that could be combined with photos of children to generate CSAM. (Recently, the US Department of Justice pointedly said that “CSAM generated by AI is still CSAM.”)

While LAION’s new dataset won’t alter models that were trained on the prior dataset, LAION claimed that Re-LAION-5B sets “a new safety standard for cleaning web-scale image-link datasets.” Where before illegal content “slipped through” LAION’s filters, the researchers have now developed an improved new system “for identifying and removing illegal content,” LAION’s blog said.

Thiel told Ars that he would agree that LAION has set a new safety standard with its latest release, but “there are absolutely ways to improve it.” However, “those methods would require possession of all original images or a brand new crawl,” and LAION’s post made clear that it only utilized image hashes and did not conduct a new crawl that could have risked pulling in more illegal or sensitive content. (On Threads, Thiel shared more in-depth impressions of LAION’s effort to clean the dataset.)

LAION warned that “current state-of-the-art filters alone are not reliable enough to guarantee protection from CSAM in web scale data composition scenarios.”

“To ensure better filtering, lists of hashes of suspected links or images created by expert organizations (in our case, IWF and C3P) are suitable choices,” LAION’s blog said. “We recommend research labs and any other organizations composing datasets from the public web to partner with organizations like IWF and C3P to obtain such hash lists and use those for filtering. In the longer term, a larger common initiative can be created that makes such hash lists available for the research community working on dataset composition from web.”
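The hash-list filtering LAION describes can be sketched in a few lines. This is a hypothetical illustration only: real hash lists from organizations like IWF and C3P use cryptographic and perceptual image hashes under strict access controls, and the MD5 value and helper names below are invented for the example.

```python
import hashlib

# Stand-in blocklist of image hashes, playing the role of the hash lists
# expert organizations provide to partners. (This is the MD5 of b"hello".)
BLOCKLIST = {"5d41402abc4b2a76b9719d911017c592"}

def md5_of(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

def filter_dataset(entries):
    """Keep only (url, image_bytes) pairs whose image hash is not blocklisted."""
    return [(url, img) for url, img in entries if md5_of(img) not in BLOCKLIST]

entries = [
    ("https://example.com/a.png", b"hello"),  # hash matches blocklist: dropped
    ("https://example.com/b.png", b"other"),  # hash not listed: kept
]
print(filter_dataset(entries))  # [('https://example.com/b.png', b'other')]
```

Because a cryptographic hash only matches exact bytes, re-encoded or cropped copies of a flagged image would slip past this check, which is one reason production systems also lean on perceptual hashes such as PhotoDNA or PDQ.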

According to LAION, the bigger concern is that some links to known CSAM scraped into a 2022 dataset are still active more than a year later.

“It is a clear hint that law enforcement bodies have to intensify the efforts to take down domains that host such image content on public web following information and recommendations by organizations like IWF and C3P, making it a safer place, also for various kinds of research related activities,” LAION’s blog said.

HRW researcher Hye Jung Han praised LAION for removing sensitive data that she flagged, while also urging more interventions.

“LAION’s responsive removal of some children’s personal photos from their dataset is very welcome, and will help to protect these children from their likenesses being misused by AI systems,” Han told Ars. “It’s now up to governments to pass child data protection laws that would protect all children’s privacy online.”

Although LAION’s blog said that the content removals represented an “upper bound” of CSAM that existed in the initial dataset, AI specialist and Creative.AI co-founder Alex Champandard told Ars that he’s skeptical that all CSAM was removed.

“They only filter out previously identified CSAM, which is only a partial solution,” Champandard told Ars. “Statistically speaking, most instances of CSAM have likely never been reported nor investigated by C3P or IWF. A more reasonable estimate of the problem is about 25,000 instances of things you’d never want to train generative models on—maybe even 50,000.”

Champandard agreed with Han that more regulations are needed to protect people from AI harms when training data is scraped from the web.

“There’s room for improvement on all fronts: privacy, copyright, illegal content, etc.,” Champandard said. Because “there are too many data rights being broken with such web-scraped datasets,” Champandard suggested that datasets like LAION’s won’t “stand the test of time.”

“LAION is simply operating in the regulatory gap and lag in the judiciary system until policymakers realize the magnitude of the problem,” Champandard said.

Feds to get early access to OpenAI, Anthropic AI to test for doomsday scenarios

“Advancing the science of AI safety” —

AI companies agreed that ensuring AI safety was key to innovation.

OpenAI and Anthropic have each signed unprecedented deals granting the US government early access to conduct safety testing on the companies’ flashiest new AI models before they’re released to the public.

According to a press release from the National Institute of Standards and Technology (NIST), the deal creates a “formal collaboration on AI safety research, testing, and evaluation with both Anthropic and OpenAI” and the US Artificial Intelligence Safety Institute.

Through the deal, the US AI Safety Institute will “receive access to major new models from each company prior to and following their public release.” NIST said this ensures that public safety won’t depend exclusively on how the companies “evaluate capabilities and safety risks, as well as methods to mitigate those risks,” but also on collaborative research with the US government.

The US AI Safety Institute will also be collaborating with the UK AI Safety Institute when examining models to flag potential safety risks. Both groups will provide feedback to OpenAI and Anthropic “on potential safety improvements to their models.”

NIST said that the agreements also build on voluntary AI safety commitments that AI companies made to the Biden administration to evaluate models to detect risks.

Elizabeth Kelly, director of the US AI Safety Institute, called the agreements “an important milestone” to “help responsibly steward the future of AI.”

Anthropic co-founder: AI safety “crucial” to innovation

The announcement comes as California is poised to pass one of the country’s first AI safety bills, which will regulate how AI is developed and deployed in the state.

Among the most controversial aspects of the bill is a requirement that AI companies build in a “kill switch” to stop models from introducing “novel threats to public safety and security,” especially if the model is acting “with limited human oversight, intervention, or supervision.”

Critics say the bill overlooks existing safety risks from AI—like deepfakes and election misinformation—to prioritize prevention of doomsday scenarios and could stifle AI innovation while providing little security today. They’ve urged California’s governor, Gavin Newsom, to veto the bill if it arrives at his desk, but it’s still unclear if Newsom intends to sign.

Anthropic was one of the AI companies that cautiously supported California’s controversial AI bill, Reuters reported, claiming that the potential benefits of the regulations likely outweigh the costs after a late round of amendments.

The company’s CEO, Dario Amodei, told Newsom why Anthropic supports the bill now in a letter last week, Reuters reported. He wrote that although Anthropic isn’t certain about aspects of the bill that “seem concerning or ambiguous,” Anthropic’s “initial concerns about the bill potentially hindering innovation due to the rapidly evolving nature of the field have been greatly reduced” by recent changes to the bill.

OpenAI has notably joined critics opposing California’s AI safety bill and has been called out by whistleblowers for lobbying against it.

In a letter to the bill’s co-sponsor, California Senator Scott Wiener, OpenAI’s chief strategy officer, Jason Kwon, suggested that “the federal government should lead in regulating frontier AI models to account for implications to national security and competitiveness.”

The ChatGPT maker striking a deal with the US AI Safety Institute seems in line with that thinking. As Kwon told Reuters, “We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on.”

While some critics worry California’s AI safety bill will hamper innovation, Anthropic’s co-founder, Jack Clark, told Reuters today that “safe, trustworthy AI is crucial for the technology’s positive impact.” He confirmed that Anthropic’s “collaboration with the US AI Safety Institute” will leverage the government’s “wide expertise to rigorously test” Anthropic’s models “before widespread deployment.”

In NIST’s press release, Kelly agreed that “safety is essential to fueling breakthrough technological innovation.”

By directly collaborating with OpenAI and Anthropic, the US AI Safety Institute also plans to conduct its own research to help “advance the science of AI safety,” Kelly said.

We can now watch Grace Hopper’s famed 1982 lecture on YouTube

Amazing Grace —

The lecture featured Hopper discussing future challenges of protecting information.

Rear Admiral Grace Hopper on Future Possibilities: Data, Hardware, Software, and People (Part One, 1982).

The late Rear Admiral Grace Hopper was a gifted mathematician and undisputed pioneer in computer programming, honored posthumously in 2016 with the Presidential Medal of Freedom. She was also very much in demand as a speaker in her later career. Hopper’s famous 1982 lecture on “Future Possibilities: Data, Hardware, Software, and People” has long been publicly unavailable because of the obsolete media on which it was recorded. The National Archives and Records Administration (NARA) finally managed to retrieve the footage for the National Security Agency (NSA), which posted the lecture in two parts on YouTube (Part One embedded above, Part Two embedded below).

Hopper earned an undergraduate degree in math and physics from Vassar College and a PhD in math from Yale in 1934. She returned to Vassar as a professor, but when World War II broke out, she sought to enlist in the US Naval Reserve. She was initially denied on the basis of her age (34) and low weight-to-height ratio, and also because her expertise made her particularly valuable to the war effort outside the military. Hopper got an exemption, and after graduating first in her class, she joined the Bureau of Ships Computation Project at Harvard University, where she served on the Mark I computer programming staff under Howard H. Aiken.

She stayed with the lab until 1949 and was next hired as a senior mathematician by Eckert-Mauchly Computer Corporation to develop the Universal Automatic Computer, or UNIVAC, the first commercial electronic computer produced in the United States. Hopper championed the development of a new programming language based on English words. “It’s much easier for most people to write an English statement than it is to use symbols,” she reasoned. “So I decided data processors ought to be able to write their programs in English and the computers would translate them into machine code.”

Her superiors were skeptical, but Hopper persisted, publishing papers on what became known as compilers. After Remington Rand took over the company, she created the A-0 system, her first compiler. This early achievement would one day lead to the development of COBOL for data processors, a language that remains in wide use in business and government systems today.

“Grandma COBOL”

In November 1952, the UNIVAC was introduced to America by CBS news anchor Walter Cronkite as the presidential election results rolled in. Hopper and the rest of her team had worked tirelessly to input voting statistics from earlier elections and write the code that would allow the computer to extrapolate the election results based on previous races. National pollsters predicted Adlai Stevenson II would win, while the UNIVAC group predicted a landslide for Dwight D. Eisenhower. UNIVAC’s prediction proved to be correct: Eisenhower won over 55 percent of the popular vote with an electoral margin of 442 to 89.

Hopper retired at age 60 from the Naval Reserve in 1966 with the rank of commander but was subsequently recalled to active duty for many more years, thanks to congressional special approval allowing her to remain beyond the mandatory retirement age. She was promoted to commodore in 1983, a rank that was renamed “rear admiral” two years later, and Rear Admiral Grace Hopper finally retired permanently in 1986. But she didn’t stop working: She became a senior consultant to Digital Equipment Corporation and “goodwill ambassador,” giving public lectures at various computer-related events.

One of Hopper’s best-known lectures was delivered to NSA employees in August 1982. According to a National Security Agency press release, the footage had been preserved in a defunct media format—specifically, two 1-inch AMPEX tapes. The agency asked NARA to retrieve that footage and digitize it for public release, and NARA did so. The NSA described it as “one of the more unique public proactive transparency record releases… to date.”

Hopper was a very popular speaker not just because of her pioneering contributions to computing, but because she was a natural raconteur, telling entertaining and often irreverent war stories from her early days. And she spoke plainly, as evidenced in the 1982 lecture when she drew an analogy between using pairs of oxen to move large logs in the days before large tractors, and pairing computers to get more computer power rather than just getting a bigger computer—”which of course is what common sense would have told us to begin with.” For those who love the history of computers and computation, the full lecture is very much worth the time.

Grace Hopper on Future Possibilities: Data, Hardware, Software, and People (Part Two, 1982).

Listing image by Lynn Gilbert/CC BY-SA 4.0

Massive nationwide meat-linked outbreak kills 5 more, now largest since 2011

Hardy germs —

CDC implores consumers to check their fridges for the recalled meats.

Listeria monocytogenes.

Five more people have died in a nationwide outbreak of Listeria infections linked to contaminated Boar’s Head brand meats, the Centers for Disease Control and Prevention reported Wednesday.

To date, 57 people across 18 states have been sickened, all of whom required hospitalization. A total of eight have died. The latest tally makes this the largest listeriosis outbreak in the US since 2011, when cantaloupe processed in an unsanitary facility led to 147 Listeria infections in 28 states, causing 33 deaths, the CDC notes.

The new cases and deaths come after a massive recall of more than 7 million pounds of Boar’s Head meat products, encompassing 71 of the company’s products. That recall, announced on July 30, expanded an initial July 26 recall of 207,528 pounds of Boar’s Head products. By August 8, when the CDC last provided an update on the outbreak, the number of cases had hit 43, all requiring hospitalization, with three deaths.

In a media statement Wednesday, the CDC said the updated toll of cases and deaths is a “reminder to avoid recalled products.” The agency noted that the outbreak bacteria, Listeria monocytogenes, is a “hardy germ that can remain on surfaces, like meat slicers, and foods, even at refrigerated temperatures. It can also take up to 10 weeks for some people to have symptoms of listeriosis.” The agency recommends that people look through their fridges for any recalled Boar’s Head products, which have sell-by dates into October.

If you find any recalled meats, do not eat them, the agency warns. Throw them away or return them to the store where they were purchased for a refund. The CDC and the US Department of Agriculture also recommend that you disinfect your fridge, given the germs’ ability to linger.

L. monocytogenes is most dangerous to people who are pregnant, people age 65 years or older, and people who have weakened immune systems. In these groups, the bacteria are more likely to move beyond the gastrointestinal system to cause an invasive listeriosis infection. In older and immunocompromised people, listeriosis usually causes fever, muscle aches, and tiredness but may also cause headache, stiff neck, confusion, loss of balance, or seizures. These cases almost always require hospitalization, and 1 in 6 die. In pregnant people, listeriosis also causes fever, muscle aches, and tiredness but can also lead to miscarriage, stillbirth, premature delivery, or a life-threatening infection in their newborns.

ESPN’s Where to Watch tries to solve sports’ most frustrating problem

A Licensing Cluster —

Find your game in a convoluted landscape of streaming services and TV channels.

The ESPN app on an iPhone 11 Pro.

ESPN

Too often, new tech product or service launches seem like solutions in search of a problem, but not this one: ESPN is launching software that lets you figure out just where you can watch the specific game you want to see amid an overcomplicated web of streaming services, cable channels, and arcane licensing agreements. Every sports fan is all too familiar with today’s convoluted streaming schedules.

Launching today on ESPN.com and the various ESPN mobile and streaming device apps, the new guide offers various views, including one that lists all the sporting events in a single day and a search function, among other things. You can also flag favorite sports or teams to customize those views.

“At the core of Where to Watch is an event database created and managed by the ESPN Stats and Information Group (SIG), which aggregates ESPN and partner data feeds along with originally sourced information and programming details from more than 250 media sources, including television networks and streaming platforms,” ESPN’s press release says.

ESPN previously offered browsable lists of games like this, but it didn’t identify where you could actually watch all the games.

There’s no guarantee that you’ll have access to the services needed to watch the games in the list, though. Those of us who cut the cable cord long ago know that some games—especially those local to your city—are unavailable without cable.

For example, I live within walking distance of Wrigley Field, but because I don’t have cable, I can’t watch most Cubs games on any screens in my home. As a former Angeleno, I follow the Dodgers instead because there are no market blackouts for me watching them all the way from Chicago. The reverse would be true if I were in LA.

Even if you do have cable, figuring out where to watch many sports remains incredibly convoluted. ESPN’s Where to Watch could be useful for the new college football season, for example.

Expansion effort

ESPN isn’t the first company to envision this, though. The company that has made the most progress so far is Apple. Apple’s TV device and app were initially meant as a one-stop shop for virtually all streaming video, like a comprehensive 21st-century TV Guide. But with cable companies being difficult to work with and Netflix not participating, Apple never quite made that dream a reality.

It kept trying for sports, though, tying into third-party offerings like the MLB app alongside its own programming to try to make the TV app a place to launch all your games. Apple got pretty close, depending on which sport you’re trying to follow.

ESPN’s app seems a little more promising, as it covers a more comprehensive range of games and goes beyond the TV app’s “what’s happening right now” focus with better search and listings.

ESPN execs have said they hope to start offering more games streaming directly in the app, and if that app becomes the go-to spot thanks to this new guide, it might give the company more leverage with leagues to make that happen.

That could certainly be more convenient for viewers, though there are, of course, downsides to one company having too much influence and leverage in a sport.