Author name: Mike M.


The nature of consciousness, and how to enjoy it while you can

Remaining aware —

In his new book, Christof Koch views consciousness as a theorist and an aficionado.


Unraveling how consciousness arises out of particular configurations of organic matter is a quest that has absorbed scientists and philosophers for ages. Now, with AI systems behaving in strikingly conscious-looking ways, it is more important than ever to get a handle on who and what is capable of experiencing life on a conscious level. As Christof Koch writes in Then I Am Myself the World, “That you are intimately acquainted with the way life feels is a brute fact about the world that cries out for an explanation.” His explanation—bounded by the limits of current research and framed through Koch’s preferred theory of consciousness—is what he eloquently attempts to deliver.

Koch, a physicist, neuroscientist, and former president of the Allen Institute for Brain Science, has spent his career hunting for the seat of consciousness, scouring the brain for physical footprints of subjective experience. It turns out that the posterior hot zone, a region in the back of the neocortex, is intricately connected to self-awareness and experiences of sound, sight, and touch. Dense networks of neocortical neurons in this area connect in a looped configuration; output signals feed back into input neurons, allowing the posterior hot zone to influence its own behavior. And herein, Koch claims, lies the key to consciousness.

In the hot zone

According to integrated information theory (IIT)—which Koch strongly favors over a multitude of contending theories of consciousness—the Rosetta Stone of subjective experience is the ability of a system to influence itself: to use its past state to affect its present state and its present state to influence its future state.

Billions of neurons exist in the cerebellum, but they are wired “with nonoverlapping inputs and outputs … in a feed-forward manner,” writes Koch. He argues that a structure designed in this way, with limited influence over its own future, is not likely to produce consciousness. Similarly, the prefrontal cortex might allow us to perform complex calculations and exhibit advanced reasoning skills, but such traits do not equate to a capacity to experience life. It is the “reverberatory, self-sustaining excitatory loops prevalent in the neocortex,” Koch tells us, that set the stage for subjective experience to arise.
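The contrast Koch draws between feed-forward wiring and reverberatory loops can be caricatured in a few lines of code. This is only a toy numerical sketch of "self-influence," not IIT's actual formalism (which quantifies integrated information, Φ); the function names and the 0.5 weights are invented purely for illustration:

```python
# Toy contrast (not IIT's formalism): a feed-forward unit's next state depends
# only on fresh input, while a looped unit also depends on its own past state.
def feed_forward_step(x, state):
    # Output is a pure function of the input; the unit cannot influence itself
    return x * 0.5

def recurrent_step(x, state):
    # Output feeds back: the unit's past state shapes its future state
    return x * 0.5 + state * 0.5

ff, rec = 0.0, 0.0
for x in [1.0, 0.0, 0.0]:  # one pulse of input, then silence
    ff = feed_forward_step(x, ff)
    rec = recurrent_step(x, rec)

print(ff)   # 0.0 -- once input stops, the feed-forward trace is gone
print(rec)  # 0.125 -- the loop sustains a decaying echo of its own history
```

The feed-forward unit forgets the pulse the instant input ceases, while the looped unit keeps influencing its own future, which is the structural property Koch argues matters for consciousness.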

This declaration matches the experimental evidence Koch presents in Chapter 6: Injuries to the cerebellum do not eliminate a person’s awareness of themselves in relation to the outside world. Consciousness remains, even in a person who can no longer move their body with ease. Yet injuries to the posterior hot zone within the neocortex significantly change a person’s perception of auditory, visual, and tactile information, altering what they subjectively experience and how they describe these experiences to themselves and others.

Does this mean that artificial computer systems, wired appropriately, can be conscious? Not necessarily, Koch says. This might one day be possible with the advent of new technology, but we are not there yet. He writes: “The high connectivity [in a human brain] is very different from that found in the central processing unit of any digital computer, where one transistor typically connects to a handful of other transistors.” For the foreseeable future, AI systems will remain unconscious despite appearances to the contrary.

Koch’s eloquent overview of IIT and the melodic ease of his neuroscientific explanations are undeniably compelling, even for die-hard physicalists who flinch at terms like “self-influence.” His impeccably written descriptions are peppered with references to philosophers, writers, musicians, and psychologists—Albert Camus, Viktor Frankl, Richard Wagner, and Lewis Carroll all make appearances, adding richness and relatability to the narrative. For example, as an introduction to phenomenology—the way an experience feels or appears—he aptly quotes Eminem: “I can’t tell you what it really is, I can only tell you what it feels like.”



The Apple TV is coming for the Raspberry Pi’s retro emulation box crown

watch out, raspberry pi —

Apple’s restrictions will still hold it back, but there’s a lot of possibility.

The RetroArch app installed in tvOS.

Andrew Cunningham

Apple’s initial pitch for tvOS and the Apple TV as it currently exists was centered around apps. No longer a mere streaming box, the Apple TV would also be a destination for general-purpose software and games, piggybacking off of the iPhone’s vibrant app and game library.

That never really panned out, and the Apple TV is still mostly a box for streaming TV shows and movies. But the same App Store rule change that recently allowed Delta, PPSSPP, and other retro console emulators onto the iPhone and iPad could also make the Apple TV appeal to people who want a small, efficient, no-fuss console emulator for their TVs.

So far, few of the emulators that have made it to the iPhone have been ported to the Apple TV. But earlier this week, the streaming box got an official port of RetroArch, the sprawling collection of emulators that runs on everything from the PlayStation Portable to the Raspberry Pi. RetroArch could be sideloaded onto iOS and tvOS before this, but only using awkward workarounds that took a lot more work and know-how than downloading an app from the App Store.

Downloading and using RetroArch on the Apple TV is a lot like using it on any other platform it supports, for better or worse. ROM files can be uploaded using a browser connected to the Apple TV’s IP address or hostname, which will pop up the first time you launch the RetroArch app. From there, you’re only really limited by the list of emulators that the Apple TV version of the app supports.

The main benefit of using the Apple TV hardware for emulation is that even older models have substantially better CPU and GPU performance than any Raspberry Pi; the first-gen Apple TV 4K and its Apple A10X chip date back to 2017 and still do better than a Pi 5 released in 2023. Even these older models should be more than fast enough to support advanced video filters, latency-reducing features like Run Ahead, and higher-than-native-resolution rendering to make 3D games look a bit more modern.

Beyond the hardware, tvOS is also a surprisingly capable gaming platform. Apple has done a good job adding and maintaining support for new Bluetooth gamepads in recent releases, and even Nintendo’s official Switch Online controllers for the NES, SNES, and N64 are all officially supported as of late 2022. Apple may have added this gamepad support primarily to help support its Apple Arcade service, but all of those gamepads work equally well with RetroArch.

At the risk of stating the obvious, another upside of using the Apple TV for retro gaming is that you can also still use it as a modern 4K video streaming box when you’re finished playing your games. It has well-supported apps from just about every streaming provider, and it supports all the DRM that these providers insist on when you’re trying to stream high-quality 4K video with modern codecs. Most Pi gaming distributions offer the Kodi streaming software, but it’s frankly outside the scope of this article to catalog the long list of caveats and add-ons you’d need to attempt using the same streaming services the Apple TV can access.

Obviously, there are trade-offs. Pis have been running retro games for a decade, and the Apple TV is just starting to be able to do it now. Even with the loosened App Store restrictions, Apple still has other emulation limitations relative to a Raspberry Pi or a PC.

The biggest one is that emulators on Apple’s platforms can’t use just-in-time (JIT) code compilation, needed for 3D console emulators like Dolphin. These restrictions make the Apple TV a less-than-ideal option for emulating newer consoles—the Nintendo 64, Nintendo DS, Sony PlayStation, PlayStation Portable, and Sega Saturn are the newest consoles RetroArch supports on the Apple TV, cutting out newer things like the GameCube and Wii, Dreamcast, and PlayStation 2 that are all well within the capabilities of Apple’s chips. Apple also insists nebulously that emulators must be for “retro” consoles rather than modern ones, which could limit the types of emulators that are available.

With respect to RetroArch specifically, there are other limitations. Though RetroArch describes itself as a front-end for emulators, its user interface is tricky to navigate and cluttered with tons of overlapping settings that make it easy to break things if you don’t know what you’re doing. Most Raspberry Pi gaming distros use RetroArch, but with a front-end-for-a-front-end like EmulationStation installed to make RetroArch a bit more accessible and easier to learn. A developer could release an app that included RetroArch plus a separate front-end, but Apple’s sandboxing restrictions would likely prevent anyone from releasing an app that just served as a more user-friendly front-end for the RetroArch app.

Regardless, it’s still pretty cool to be able to play retro games on an Apple TV’s more advanced hardware. As more emulators make their way to the App Store, the Apple TV’s less-fussy software and the power of its hardware could make it a compelling alternative to a more effort-intensive Raspberry Pi setup.



Cats playing with robots proves a winning combo in novel art installation

The feline factor —

Cat Royale project explores what it takes to trust a robot to look after beloved pets.

A kitty named Clover prepares to play with a robot arm in the Cat Royale “multi-species” science/art installation.

Blast Theory – Stephen Daly

Cats and robots are a winning combination, as evidenced by all those videos of kitties riding on Roombas. And now we have Cat Royale, a “multispecies” live installation in which three cats regularly “played” with a robot over 12 days, carefully monitored by human operators. Created by computer scientists from the University of Nottingham in collaboration with artists from a group called Blast Theory, the installation debuted at the World Science Festival in Brisbane, Australia, last year and is now a touring exhibit. The accompanying YouTube video series recently won a Webby Award, and a paper outlining the insights gleaned from the experience was similarly voted best paper at the recent ACM Conference on Human Factors in Computing Systems (CHI ’24).

“At first glance, the project is about designing a robot to enrich the lives of a family of cats by playing with them,” said co-author Steve Benford of the University of Nottingham, who led the research. “Under the surface, however, it explores the question of what it takes to trust a robot to look after our loved ones and potentially ourselves.” While cats might love Roombas, not all animal encounters with robots are positive: Guide dogs for the visually impaired can get confused by delivery robots, for example, while the rise of lawn mowing robots can have a negative impact on hedgehogs, per Benford et al.

Blast Theory and the scientists first held a series of exploratory workshops to ensure the installation and robotic design would take into account the welfare of the cats. “Creating a multispecies system—where cats, robots, and humans are all accounted for—takes more than just designing the robot,” said co-author Eike Schneiders of Nottingham’s Mixed Reality Lab about the primary takeaway from the project. “We had to ensure animal well-being at all times, while simultaneously ensuring that the interactive installation engaged the (human) audiences around the world. This involved consideration of many elements, including the design of the enclosure, the robot, and its underlying systems, the various roles of the humans-in-the-loop, and, of course, the selection of the cats.”

Based on those discussions, the team set about building the installation: a bespoke enclosure that would be inhabited by three cats for six hours a day over 12 days. The lucky cats were named Ghostbuster, Clover, and Pumpkin—a parent and two offspring to ensure the cats were familiar with each other and comfortable sharing the enclosure. The enclosure was tricked out to essentially be a “utopia for cats,” per the authors, with perches, walkways, dens, a scratching post, a water fountain, several feeding stations, a ball run, and litter boxes tucked away in secluded corners.

(l-r) Clover, Pumpkin, and Ghostbuster spent six hours a day for 12 days in the installation.

E. Schneiders et al., 2024

As for the robot, the team chose the Kinova Gen3 lite robot arm, and the associated software was trained on over 7,000 videos of cats. A decision engine gave the robot autonomy and proposed activities for specific cats. Then a human operator used an interface control system to instruct the robot to execute the movements. The robotic arm’s two-finger gripper was augmented with custom 3D-printed attachments so that the robot could manipulate various cat toys and accessories.

Each cat/robot interaction was evaluated for a “happiness score” based on the cat’s level of engagement, body language, and so forth. Eight cameras monitored the cat and robot activities, and that footage was subsequently remixed and edited into daily YouTube highlight videos and, eventually, an eight-hour film.



Leaks from Valve’s Deadlock look like a pressed sandwich of every game around

Deadlock isn’t the most original name, but trademarks are hard —

Is there something new underneath a whole bunch of familiar game elements?

Valve has its own canon of games full of artifacts and concepts worth emulating, as seen in a 2018 tour of its offices.

Sam Machkovech

“Basically, fast-paced interesting ADHD gameplay. Combination of Dota 2, Team Fortress 2, Overwatch, Valorant, Smite, Orcs Must Die.”

That’s how notable Valve leaker “Gabe Follower” describes Deadlock, a Valve game that is seemingly in playtesting at the moment, for which a few screenshots have leaked out.

The game has been known as “Neon Prime” and “Citadel” at prior points. It’s a “Competitive third-person hero-based shooter,” with six-on-six battles across a map with four “lanes.” That allows for some of the “Tower defense mechanics” mentioned by Gabe Follower, along with “fast travel using floating rails, similar to Bioshock Infinite.” The maps reference a “modern steampunk European city (little bit like Half-Life),” after “bad feedback” about a sci-fi theme pushed the development team toward fantasy.

Since testers started sharing Deadlock screenshots all over the place, here’s ones I can verify, featuring one of the heroes called Grey Talon. pic.twitter.com/KdZSRxObSz

— ‎Gabe Follower (@gabefollower) May 17, 2024

Valve doesn’t release games often, and the games it does release are often in development for long periods. Deadlock purportedly started development in 2018, two years before Half-Life: Alyx existed. That the game has now seemingly reached a closed (though not closed enough) “alpha” playtesting phase, with players in the “hundreds,” could suggest release within a reasonable time. Longtime Valve watcher (and modder, and code examiner) Tyler McVicker suggests in a related video that Deadlock has hundreds of people playing in this closed test, and the release is “about to happen.”

McVicker adds to the descriptor pile-on by noting that it’s “team-based,” “hero-based,” “class-based,” and “personality-driven.” It’s an attempt, he says, to “bring together all of their communities under one umbrella.”

Tyler McVicker’s discussion of the leaked Deadlock content, featuring … BioShock Infinite footage.

Many of Valve’s games do something notable to push gaming technology and culture forward. Half-Life brought advanced scripting, physics, and atmosphere to the “Doom clones” field and forever changed it. Counter-Strike and Team Fortress 2 led the way in team multiplayer dynamics. Dota 2 solidified and popularized MOBAs, and Half-Life: Alyx gave VR on PC its killer app. Yes, there are Artifact moments, but they’re more exception than rule.

Following any of those games seems like a tall order, but Valve’s track record speaks for itself. I think players like me, who never took to Valorant or Overwatch or the like, should reserve judgment until the game can be seen in its whole. I have to imagine that there’s more to Deadlock than a pile of very familiar elements.



“Unprecedented” Google Cloud event wipes out customer account and its backups

Bringing new meaning to “Killed By Google” —

UniSuper, a $135 billion pension account, details its cloud compute nightmare.


Buried under the news from Google I/O this week is one of Google Cloud’s biggest blunders ever: Google’s Amazon Web Services competitor accidentally deleted a giant customer account for no reason. UniSuper, an Australian pension fund that manages $135 billion worth of funds and has 647,000 members, had its entire account wiped out at Google Cloud, including all its backups that were stored on the service. UniSuper thankfully had some backups with a different provider and was able to recover its data, but according to UniSuper’s incident log, downtime started May 2, and a full restoration of services didn’t happen until May 15.

UniSuper’s website is now full of must-read admin nightmare fuel about how this all happened. First is a wild page posted on May 8 titled “A joint statement from UniSuper CEO Peter Chun, and Google Cloud CEO, Thomas Kurian.” This statement reads, “Google Cloud CEO, Thomas Kurian has confirmed that the disruption arose from an unprecedented sequence of events whereby an inadvertent misconfiguration during provisioning of UniSuper’s Private Cloud services ultimately resulted in the deletion of UniSuper’s Private Cloud subscription. This is an isolated, ‘one-of-a-kind occurrence’ that has never before occurred with any of Google Cloud’s clients globally. This should not have happened. Google Cloud has identified the events that led to this disruption and taken measures to ensure this does not happen again.”

In the next section, titled “Why did the outage last so long?” the joint statement says, “UniSuper had duplication in two geographies as a protection against outages and loss. However, when the deletion of UniSuper’s Private Cloud subscription occurred, it caused deletion across both of these geographies.” Every cloud service keeps full backups, which you would presume are meant for worst-case scenarios. Imagine some hacker takes over your server or the building your data is inside of collapses, or something like that. But no, the actual worst-case scenario is “Google deletes your account,” which means all those backups are gone, too. Google Cloud is supposed to have safeguards that don’t allow account deletion, but apparently none of them worked, and the only option was a restore from a separate cloud provider (shoutout to the hero at UniSuper who chose a multi-cloud solution).

UniSuper is an Australian “superannuation fund”—the US equivalent would be a 401(k). It’s a retirement fund that employers pay into as part of an employee paycheck; in Australia, some amount of super fund payment is required by law for all employed people. Managing $135 billion worth of funds makes UniSuper a big enough company that, if something goes wrong, it gets the Google Cloud CEO on the phone instead of customer service.

A June 2023 press release touted UniSuper’s big cloud migration to Google, with Sam Cooper, UniSuper’s Head of Architecture, saying, “With Google Cloud VMware Engine, migrating to the cloud is streamlined and extremely easy. It’s all about efficiencies that help us deliver highly competitive fees for our members.”

The many stakeholders in the service meant service restoration wasn’t just about restoring backups but also processing all the requests and payments that still needed to happen during the two weeks of downtime.



Using vague language about scientific facts misleads readers


Anyone can do a simple experiment. Navigate to a search engine that offers suggested completions for what you type, and start typing “scientists believe.” When I did it, I got suggestions about the origin of whales, the evolution of animals, the root cause of narcolepsy, and more. The search results contained a long list of topics, like “How scientists believe the loss of Arctic sea ice will impact US weather patterns” or “Scientists believe Moon is 40 million years older than first thought.”

What do these all have in common? They’re misleading, at least in terms of how most people understand the word “believe.” In all these examples, scientists have become convinced via compelling evidence; these are more than just hunches or emotional compulsions. Given that difference, using “believe” isn’t really an accurate description. Yet all these examples come from searching Google News, and so are likely to come from journalistic outlets that care about accuracy.

Does the difference matter? A recent study suggests that it does. People who were shown headlines that used subjective verbs like “believe” tended to view the issue being described as a matter of opinion—even if that issue was solidly grounded in fact.

Fact vs. opinion

The new work was done by three researchers at Stanford University: Aaron Chuey, Yiwei Luo, and Ellen Markman. “Media consumption is central to how we form, maintain, and spread beliefs in the modern world,” they write. “Moreover, how content is presented may be as important as the content itself.” The presentation they’re interested in involves what they term “epistemic verbs,” or those that convey information about our certainty regarding information. To put that in concrete terms, “’Know’ presents [a statement] as a fact by presupposing that it is true, ‘believe’ does not,” they argue.

So, while it’s accurate to say, “Scientists know the Earth is warming, and that warming is driven by human activity,” replacing “know” with “believe” presents an inaccurate picture of the state of our knowledge. Yet, as noted above, “scientists believe” is heavily used in the popular press. Chuey, Luo, and Markman decided to see whether this makes a difference.

They were interested in two related questions. One is whether the use of verbs like believe and think influences how readers view whether the concepts they’re associated with are subjective issues rather than objective, factual ones. The second is whether using that phrasing undercuts the readers’ willingness to accept something as a fact.

To answer those questions, the researchers used a subject-recruiting service called Prolific to recruit over 2,700 participants who took part in a number of individual experiments focused on these issues. In each experiment, participants were given a series of headlines and asked about what inferences they drew about the information presented in them.



Twitter URLs redirect to x.com as Musk gets closer to killing the Twitter name

Goodbye Twitter.com —

X.com stops redirecting to Twitter.com over a year after company name change.

An app icon and logo for Elon Musk's X service.

Getty Images | Kirill Kudryavtsev

Twitter.com links are now redirecting to the x.com domain as Elon Musk gets closer to wiping out the Twitter brand name more than a year and a half after buying the company.

“All core systems are now on X.com,” Musk wrote in an X post today. X also displayed a message to users that said, “We are letting you know that we are changing our URL, but your privacy and data protection settings remain the same.”

Musk bought Twitter in October 2022 and turned it into X Corp. in April 2023, but the social network continued to use Twitter.com as its primary domain for more than another year. X.com links redirected to Twitter.com during that time.

There were still remnants of Twitter after today’s change. This morning, I noticed a support link took me to a help.twitter.com page. The link subsequently redirected to a help.x.com page after I sent a message to X’s public relations email, though the timing could be coincidence. After sending that message to [email protected], I got the standard auto-reply from [email protected], just as I have in the past.

You might still encounter Twitter links that don’t redirect to x.com, depending on which browser you use. The Verge said it is “seeing a mix of results depending upon browser choice and whether you’re logged in or not.”

I had no trouble accessing x.com on desktop browsers today. But in Safari on iPhone, I received error messages when trying to access either twitter.com or x.com without first logging in. I eventually succeeded in logging in and was able to view content, but I remained at twitter.com in the iPhone browser instead of being redirected to x.com.

This will presumably be sorted out, but the awkward Twitter-to-X transition has previously been accompanied by technical problems. In early April, Musk’s service started automatically changing “twitter.com” to “x.com” in links posted by users in the iOS app. But the automatic text replacement initially applied to any URL ending in “twitter.com” even if it wasn’t actually a twitter.com link, which meant that phishers could have taken advantage by registering misleading domain names.
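That failure mode is a classic one: checking whether a hostname merely ends with the string "twitter.com" also matches unrelated domains like "nottwitter.com". Here is a minimal sketch of the difference; the function names are ours, and the flawed check is only a guess at the general kind of logic involved, not X's actual code:

```python
from urllib.parse import urlparse

def naive_is_twitter(url: str) -> bool:
    # The flawed pattern: any hostname that merely ends with the string matches
    return (urlparse(url).hostname or "").endswith("twitter.com")

def strict_is_twitter(url: str) -> bool:
    # Safer: the host must be twitter.com itself or a true subdomain of it
    host = urlparse(url).hostname or ""
    return host == "twitter.com" or host.endswith(".twitter.com")

# A phishing domain that merely ends in "twitter.com" fools the naive check
print(naive_is_twitter("https://nottwitter.com/login"))    # True (bad)
print(strict_is_twitter("https://nottwitter.com/login"))   # False
print(strict_is_twitter("https://help.twitter.com/page"))  # True
```

The strict version works because a legitimate subdomain always has a dot immediately before the parent domain, which a lookalike registration like nottwitter.com lacks.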



How to port any N64 game to the PC in record time

“N-tel (64) Inside”

Aurich Lawson | Getty Images

In recent years, we’ve reported on multiple efforts to reverse-engineer Nintendo 64 games into fully decompiled, human-readable C code that can then become the basis for full-fledged PC ports. While the results can be impressive, the decompilation process can take years of painstaking manual effort, meaning only the most popular N64 games are likely to get the requisite attention from reverse engineers.

Now, a newly released tool promises to vastly reduce the amount of human effort needed to get basic PC ports of most (if not all) N64 games. The N64 Recompiled project uses a process known as static recompilation to automate huge swaths of the labor-intensive process of drawing C code out of N64 binaries.

While human coding work is still needed to smooth out the edges, project lead Mr-Wiseguy told Ars that his recompilation tool is “the difference between weeks of work and years of work” when it comes to making a PC version of a classic N64 title. And parallel work on a powerful N64 graphics renderer means PC-enabled upgrades like smoother frame rates, resolution upscaling, and widescreen aspect ratios can be added with little effort.

Inspiration hits

Mr-Wiseguy told Ars he got his start in the N64 coding space working on various mod projects around 2020. In 2022, he started contributing to the then-new RT64 renderer project, which grew out of work on a ray-traced Super Mario 64 port into a more generalized effort to clean up the notoriously tricky process of recreating N64 graphics accurately. While working on that project, Mr-Wiseguy said he stumbled across an existing project that automates the disassembly of NES games and another that emulates an old SGI compiler to aid in the decompilation of N64 titles.

YouTuber Nerrel lays out some of the benefits of Mr-Wiseguy’s N64 recompilation tool.

“I realized it would be really easy to hook up the RT64 renderer to a game if it could be run through a similar static recompilation process,” Mr-Wiseguy told Ars. “So I put together a proof of concept to run a really simple game and then the project grew from there until it could run some of the more complex games.”

A basic proof of concept for Mr-Wiseguy’s idea took only “a couple of weeks at most” to get up and running, he said, and was ready as far back as November of 2022. Since then, months of off-and-on work have gone into rounding out the conversion code and getting a recompiled version of The Legend of Zelda: Majora’s Mask ready for public consumption.

Trust the process

At its most basic level, the N64 recompilation tool takes a raw game binary (provided by the user) and reprocesses every single instruction directly and literally into corresponding C code. The N64’s MIPS instruction set has been pretty well-documented over years of emulation work, so figuring out how to translate each individual opcode to its C equivalent isn’t too much of a hassle.
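As a rough illustration of what that one-to-one translation looks like, here is a hypothetical toy recompiler for three MIPS instructions. The `ctx` register-file struct, the `MEM_LOAD32`/`MEM_STORE32` macros, and the operand handling are invented simplifications for this sketch, not the actual N64: Recompiled implementation:

```python
# Toy sketch of static recompilation: each MIPS instruction is translated
# one-for-one into a C statement operating on an emulated register file.
TRANSLATORS = {
    # opcode -> function taking operand strings, returning a C statement
    "addiu": lambda rt, rs, imm: f"ctx->{rt} = ctx->{rs} + {imm};",
    "lw":    lambda rt, off, base: f"ctx->{rt} = MEM_LOAD32(ctx->{base} + {off});",
    "sw":    lambda rt, off, base: f"MEM_STORE32(ctx->{base} + {off}, ctx->{rt});",
}

def recompile_line(instr: str) -> str:
    op, rest = instr.split(None, 1)
    # Flatten "lw $t0, 8($sp)"-style operands into a plain list of tokens
    tokens = rest.replace("(", ",").replace(")", "").replace("$", "").split(",")
    operands = [t.strip() for t in tokens if t.strip()]
    return TRANSLATORS[op](*operands)

print(recompile_line("addiu $t0, $t1, 4"))
# ctx->t0 = ctx->t1 + 4;
print(recompile_line("lw $t0, 8($sp)"))
# ctx->t0 = MEM_LOAD32(ctx->sp + 8);
```

Because every instruction becomes an ordinary C statement, the resulting file can be compiled for any modern CPU; the hard parts the article describes, like locating the code in the ROM and handling code that moves around at runtime, sit outside this simple per-instruction mapping.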

An early beta of the RT64 renderer shows how ray-traced shadows and reflections might look in a port of Wave Race 64.

The main difficulty, Mr-Wiseguy said, can be figuring out where to point the tool. “The contents of the [N64] ROM can be laid out however the developer chose to do so, which means you have to find where code is in the ROM before you can even start the static recompilation process,” he explained. And while N64 emulators automatically handle games that load and unload code throughout memory at runtime, handling those cases in a pre-compiled binary can add extra layers of complexity.



Sony Music opts out of AI training for its entire catalog

Taking a hard line —

Music group contacts more than 700 companies to prohibit use of content

The Sony Music letter expressly prohibits artificial intelligence developers from using its music—which includes artists such as Beyoncé.

Kevin Mazur/WireImage for Parkwood via Getty Images

Sony Music is sending warning letters to more than 700 artificial intelligence developers and music streaming services globally in the latest salvo in the music industry’s battle against tech groups ripping off artists.

The Sony Music letter, which has been seen by the Financial Times, expressly prohibits AI developers from using its music—which includes artists such as Harry Styles, Adele and Beyoncé—and opts out of any text and data mining of any of its content for any purposes such as training, developing or commercializing any AI system.

Sony Music is sending the letter to companies developing AI systems including OpenAI, Microsoft, Google, Suno, and Udio, according to those close to the group.

The world’s second-largest music group is also sending separate letters to streaming platforms, including Spotify and Apple, asking them to adopt “best practice” measures to protect artists and songwriters and their music from scraping, mining and training by AI developers without consent or compensation. It has asked them to update their terms of service, making it clear that mining and training on its content is not permitted.

Sony Music declined to comment further.

The letter, which is being sent to tech companies around the world this week, marks an escalation of the music group’s attempts to stop the melodies, lyrics and images from copyrighted songs and artists being used by tech companies to produce new versions or to train systems to create their own music.

The letter says that Sony Music and its artists “recognize the significant potential and advancement of artificial intelligence” but adds that “unauthorized use . . . in the training, development or commercialization of AI systems deprives [Sony] of control over and appropriate compensation.”

It says: “This letter serves to put you on notice directly, and reiterate, that [Sony’s labels] expressly prohibit any use of [their] content.”

Executives at the New York-based group are concerned that their music has already been ripped off, and want to set out a clearly defined legal position that would be the first step to taking action against any developer of AI systems it considers to have exploited its music. They argue that Sony Music would be open to doing deals with AI developers to license the music, but want to reach a fair price for doing so.

The letter says: “Due to the nature of your operations and published information about your AI systems, we have reason to believe that you and/or your affiliates may already have made unauthorized uses [of Sony content] in relation to the training, development or commercialization of AI systems.”

Sony Music has asked developers to provide details of all content used by next week.

The letter also reflects concerns over the fragmented approach to AI regulation around the world. Global regulations over AI vary widely, with some regions moving forward with new rules and legal frameworks to cover the training and use of such systems but others leaving it to creative industries companies to work out relationships with developers.

In many countries around the world, particularly in the EU, copyright owners are advised to state publicly that content is not available for data mining and training for AI.

The letter says the prohibition includes using any bot, spider, scraper or automated program, tool, algorithm, code, process or methodology, as well as any “automated analytical techniques aimed at analyzing text and data in digital form to generate information, including patterns, trends, and correlations.”

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.


how-i-upgraded-my-water-heater-and-discovered-how-bad-smart-home-security-can-be

How I upgraded my water heater and discovered how bad smart home security can be


This is essentially the kind of water heater the author has hooked up, minus the Wi-Fi module that led him down a rabbit hole. Also, not 140-degrees F—yikes.

Getty Images

The hot water took too long to come out of the tap. That is what I was trying to solve. I did not intend to discover that, for a while there, water heaters like mine may have been open to anybody. That, with some API tinkering and an email address, a bad actor could possibly set its temperature or make it run constantly. That’s just how it happened.
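The class of flaw described here — an API that trusts a caller-supplied identifier instead of a verified session — can be sketched abstractly. Nothing below is Rinnai's actual API; the data, field names, and handler are invented purely to illustrate the vulnerability class (an insecure direct object reference):

```python
# Hypothetical sketch of an insecure-direct-object-reference flaw: the
# server looks up a device by the email address in the request body and
# never verifies that the caller actually owns that account.
HEATERS = {"owner@example.com": {"temp_f": 120, "recirculating": False}}

def set_temperature_insecure(request: dict) -> str:
    # BUG: no authentication check -- anyone who knows (or guesses) an
    # email address can change that household's water heater settings.
    heater = HEATERS.get(request["email"])
    if heater is None:
        return "unknown account"
    heater["temp_f"] = request["temp_f"]
    return "ok"

# An attacker needs nothing but the victim's email address.
print(set_temperature_insecure({"email": "owner@example.com", "temp_f": 140}))
print(HEATERS["owner@example.com"]["temp_f"])  # 140
```

The fix is equally simple to state: derive the account from an authenticated session token on the server side, and ignore any identity fields the client supplies.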

Let’s take a step back. My wife and I moved into a new home last year. It had a Rinnai tankless water heater tucked into a utility closet in the garage. The builder and home inspector didn’t say much about it, just to run a yearly cleaning cycle on it.

Because it doesn’t keep a big tank of water heated and ready to be delivered to any house tap, tankless water heaters save energy—up to 34 percent, according to the Department of Energy. But they’re also, by default, slower. Opening a tap triggers the exchanger, which heats the water (with natural gas, in my case) and then has to push it through the line to where it’s needed.

That led to me routinely holding my hand under cold water in the sink or shower, waiting longer than felt right for reasonably warm water to appear. I understood the water-for-energy trade-off I was making. But the setup wasted time, in addition to potable water, however plentiful and relatively cheap it was. It just irked me.

Little did I know the solution was just around the corner.

Hot water hotspot

  • Attention! (Photo: Kevin Purdy)

  • Nothing’ll happen. Just touch it. It’s what you wanna do. It’s there for you to touch. (Photo: Kevin Purdy)

  • The Rinnai Central app. It does this “Control failed” bit quite often. (Photo: Rinnai)

I mean that literally. When I went into the utility closet to shut off the hose bibbs for winter, I noticed a plastic bag magnetically stuck to the back side of the water heater. “Attention! The Control-R Wi-Fi Module must be installed for recirculation to operate,” read the intense yellow warning label. The water heater would not “recirculate” without it, it noted.


The Rinnai Control-R module, out of bag.

Rinnai

Recirculation means that the heater would start pulling water and heating it on demand, rather than waiting for enough negative pressure from the pipes. To trigger this, Rinnai offered smartphone apps that could connect through its servers to the module.

I found the manual, unplugged the water heater, and opened it up. The tone of the language inside (“DO NOT TOUCH,” unless you are “a properly trained technician”) did not match that of the can-do manual (“get the most from your new module”). But, having read the manual and slotted little beige nubs before, I felt trained and technical. I installed the device, went through the typical “Connect your phone to this weirdly named hotspot” process, and—it worked.

I now had an app that could start recirculation. I could get my shower hot while still in bed, or get started on the dinner dishes from the couch. And yet pulling out my phone whenever I wanted hot water felt like trading one inconvenience for another.


rocket-report:-starship-stacked;-georgia-shuts-the-door-on-spaceport-camden

Rocket Report: Starship stacked; Georgia shuts the door on Spaceport Camden


On Wednesday, SpaceX fully stacked the Super Heavy booster and Starship upper stage for the mega-rocket’s next test flight from South Texas.

Welcome to Edition 6.44 of the Rocket Report! Kathy Lueders, general manager of SpaceX’s Starbase launch facility, says the company expects to receive an FAA launch license for the next Starship test flight shortly after Memorial Day. It looks like this rocket could fly in late May or early June, about two-and-a-half months after the previous Starship test flight. This is an improvement over the previous intervals of seven months and four months between Starship flights.

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

Blue Origin launch on tap this weekend. Blue Origin plans to launch its first human spaceflight mission in nearly two years on Sunday. This flight will launch six passengers on a flight to suborbital space more than 60 miles (100 km) over West Texas. Blue Origin, Jeff Bezos’s space company, has not flown people to space since a New Shepard rocket failure on an uncrewed research flight in September 2022. The company successfully launched New Shepard on another uncrewed suborbital mission in December.

Historic flight … This will be the 25th flight of Blue Origin’s New Shepard rocket, and the seventh human spaceflight mission on New Shepard. Before Blue Origin’s rocket failure in 2022, the company was reaching a flight cadence of about one launch every two months, on average. The flight rate has diminished since then. Sunday’s flight is important not only because it marks the resumption of launches for Blue Origin’s suborbital human spaceflight business, but also because its six-person crew includes an aviation pioneer. Ed Dwight, 90, almost became the first Black astronaut in 1963. Dwight, a retired Air Force captain, piloted military fighter jets and graduated test pilot school, following the same career track as many of the early astronauts. He was on a short list of astronaut candidates the Air Force provided NASA, but the space agency didn’t include him. Dwight will become the oldest person to ever fly in space.

Spaceport Camden is officially no more. With the stroke of a pen, Georgia Governor Brian Kemp signed a bill that dissolved the Camden County Spaceport Authority, Action News Jax reported. This news follows a referendum in March 2022 where more than 70 percent of voters rejected a plan to buy land for the spaceport on the Georgia coastline between Savannah and Jacksonville, Florida. County officials still tried to move forward with the spaceport initiative after the failed referendum, but Georgia’s Supreme Court ruled in February that the county had to abide by the voters’ wishes.

$12 million for what?… The government of Camden County, with a population of about 55,000 people, spent $12 million on the Spaceport Camden concept over the course of a decade. The goal of the spaceport authority was to lure small launch companies to the region, but no major launches ever took place from Camden County. State Rep. Steven Sainz, who sponsored the bill eliminating the spaceport authority, said in a statement that the legislation “reflects the community’s choice and opens a path for future collaborations in economic initiatives that are more aligned with local needs.” (submitted by zapman987)

The easiest way to keep up with Eric Berger’s space reporting is to sign up for his newsletter; we’ll collect his stories in your inbox.

Polaris Spaceplanes moves on to bigger things. German startup Polaris Spaceplanes says it is progressing with construction of its MIRA II and MIRA III spaceplane prototypes after MIRA, a subscale test vehicle, was damaged earlier this year, European Spaceflight reports. The MIRA demonstration vehicle crash-landed on a test flight in February. The incident occurred on takeoff at an airfield in Germany before the vehicle could ignite its linear aerospike engine in flight. The remote-controlled MIRA prototype measured about 4.25 meters long. Polaris announced on April 30 that it will not repair MIRA and will instead move forward with the construction of a pair of larger vehicles.

Nearly 16 months without a launch … The MIRA II and MIRA III vehicles will be 5 meters long and will be powered by Polaris’s AS-1 aerospike engines, along with jet engines to power the craft before and after in-flight tests of the rocket engine. Aerospike engines are rocket engines that are designed to operate efficiently at all altitudes. The MIRA test vehicles are precursors to AURORA, a multipurpose spaceplane and hypersonic transporter Polaris says will be capable of delivering up to 1,000 kilograms of payload to low-Earth orbit. (submitted by Jay500001 and Tfargo04)


arizona-woman-accused-of-helping-north-koreans-get-remote-it-jobs-at-300-companies

Arizona woman accused of helping North Koreans get remote IT jobs at 300 companies

“STAGGERING FRAUD” —

Alleged $6.8M conspiracy involved “laptop farm,” identity theft, and résumé coaching.


Getty Images | the-lightwriter

An Arizona woman has been accused of helping generate millions of dollars for North Korea’s ballistic missile program by helping citizens of that country land IT jobs at US-based Fortune 500 companies.

Christina Marie Chapman, 49, of Litchfield Park, Arizona, raised $6.8 million in the scheme, federal prosecutors said in an indictment unsealed Thursday. Chapman allegedly funneled the money to North Korea’s Munitions Industry Department, which is involved in key aspects of North Korea’s weapons program, including its development of ballistic missiles.

Part of the alleged scheme involved Chapman and co-conspirators compromising the identities of more than 60 people living in the US and using their personal information to get North Koreans IT jobs across more than 300 US companies.

In the indictment, prosecutors wrote:

The conspiracy perpetrated a staggering fraud on a multitude of industries, at the expense of generally unknowing US companies and persons. It impacted more than 300 US companies, compromised more than 60 identities of US persons, caused false information to be conveyed to DHS on more than 100 occasions, created false tax liabilities for more than 35 US persons, and resulted in at least $6.8 million of revenue to be generated for the overseas IT workers. The overseas IT workers worked at blue-chip US companies, including a top-5 national television network and media company, a premier Silicon Valley technology company, an aerospace and defense manufacturer, an iconic American car manufacturer, a high-end retail chain, and one of the most recognizable media and entertainment companies in the world, all of which were Fortune 500 companies.

As another part of the alleged conspiracy, Chapman operated a “laptop farm” at one of her residences to give the employers the impression the North Korean IT staffers were working from within the US; the laptops were issued by the employers. By using proxies and VPNs, the overseas workers appeared to be connecting from US-based IP addresses. Chapman also received employees’ paychecks at her home, prosecutors said.

Federal prosecutors said that Chapman and three North Korean IT workers—using the aliases of Jiho Han, Chunji Jin, Haoran Xu, and others—had been working since at least 2020 to plan a remote-work scheme. In March of that year, prosecutors said, an individual messaged Chapman on LinkedIn and invited her to “be the US face” of their company. From August to November of 2022, the North Korean IT workers allegedly amassed guides and other information online designed to coach North Koreans on how to write effective cover letters and résumés and falsify US Permanent Resident Cards.

Under the alleged scheme, the foreign workers developed “fictitious personas and online profiles to match the job requirements” and submitted fake documents to the Homeland Security Department as part of an employment eligibility check. Chapman also allegedly discussed with co-conspirators how to transfer the money earned from the work.

“The charges in this case should be a wakeup call for American companies and government agencies that employ remote IT workers,” Nicole Argentieri, head of the Justice Department’s Criminal Division, said. “These crimes benefited the North Korean government, giving it a revenue stream and, in some instances, proprietary information stolen by the co-conspirators.”

The indictment came alongside a criminal complaint charging a Ukrainian man with carrying out a similar multiyear scheme. Oleksandr Didenko, 27, of Kyiv, Ukraine, allegedly helped individuals in North Korea “market” themselves as remote IT workers.

Chapman was arrested Wednesday. It wasn’t immediately known when she or Didenko was scheduled to make a first appearance in court. If convicted, Chapman faces up to 97.5 years in prison, and Didenko up to 67.5 years.
