ethics

game-dev-says-contract-barring-“subjective-negative-reviews”-was-a-mistake

Game dev says contract barring “subjective negative reviews” was a mistake

Be nice, or else —

Early streamers agreed not to “belittle the gameplay” or “make disparaging… comments.”

Artist's conception of NetEase using a legal contract to try to stop a wave of negative reviews of its closed alpha.

Enlarge / Artist’s conception of NetEase using a legal contract to try to stop a wave of negative reviews of its closed alpha.

NetEase

The developers of team-based shooter Marvel Rivals have apologized for a contract clause that made creators promise not to provide “subjective negative reviews of the game” in exchange for early access to a closed alpha test.

The controversial early access contract gained widespread attention over the weekend when streamer Brandon Larned shared a portion on social media. In the “non-disparagement” clause shared by Larned, creators who are provided with an early download code are asked not to “make any public statements or engage in discussions that are detrimental to the reputation of the game.” In addition to the “subjective negative review” example above, the clause also specifically prohibits “making disparaging or satirical comments about any game-related material” and “engaging in malicious comparisons with competitors or belittling the gameplay or differences of Marvel Rivals.”

Extremely disappointed in @MarvelRivals.

Multiple creators asked for key codes to gain access to the playtest and are asked to sign a contract.

The contract signs away your right to negatively review the game.

Many streamers have signed without reading just to play

Insanity. pic.twitter.com/c11BUDyka9

— Brandon Larned (@A_Seagull) May 12, 2024

In a Discord post noticed by PCGamesN over the weekend, Chinese developer NetEase apologized for what it called “inappropriate and misleading terms” in the contract. “Our stand is absolutely open for both suggestions and criticisms to improve our games, and… our mission is to make Marvel Rivals better [and] satisfy players by those constructive suggestions.”

In a follow-up posted to social media this morning, NetEase went on to “apologize for any unpleasant experiences or doubts caused by the miscommunication of these terms… We actively encourage Creators to share their honest thoughts, suggestions, and criticisms as they play. All feedback, positive and negative, ultimately helps us craft the best experience for ourselves and the players.” NetEase says it is making “adjustments” to the contract “to be less restrictive and more Creator-friendly.”

What can you say, and when can you say it?

Creators and press outlets (including Ars) routinely agree to embargoes or sign review and/or non-disclosure agreements to protect sensitive information about a game before its launch. Usually, these agreements are focused on when certain information and early opinions about a game can be shared. These kinds of timing restrictions can help a developer coordinate a game’s marketing rollout and also prevent early reviewers from having to rush through a game to get a lucrative “first review” up on the Internet.

Sometimes, companies use embargo agreements to urge or prevent reviewers from sharing certain gameplay elements or story spoilers until a game’s release in an effort to preserve a sense of surprise for the player base. There are also sometimes restrictions on how many and/or what kinds of screenshots or videos can be shared in early coverage for similar reasons. But restrictions on what specific opinions can be shared about a game are practically unheard of in these kinds of agreements.

Nearly a decade ago, Microsoft faced criticism for a partnership with a Machinima video marketing campaign that paid video commentators for featuring Xbox One game footage in their content. That program, which was aped by Electronic Arts at the time, restricted participants from saying “anything negative or disparaging about Machinima, Xbox One, or any of its games.”

In response to the controversy, Microsoft said that it was adding disclaimers to make it clear these videos were paid promotions and that it “was not aware of individual contracts Machinima had with their content providers as part of this promotion and we didn’t provide feedback on any of the videos…”

In 2017, Atlus threatened to use its copyright controls to take down videos that spoiled certain elements of Persona 5, even after the game’s release.

Game dev says contract barring “subjective negative reviews” was a mistake Read More »

playboy-image-from-1972-gets-ban-from-ieee-computer-journals

Playboy image from 1972 gets ban from IEEE computer journals

image processing —

Use of “Lenna” image in computer image processing research stretches back to the 1970s.

Playboy image from 1972 gets ban from IEEE computer journals

Aurich Lawson | Getty Image

On Wednesday, the IEEE Computer Society announced to members that, after April 1, it would no longer accept papers that include a frequently used image of a 1972 Playboy model named Lena Forsén. The so-called “Lenna image,” (Forsén added an extra “n” to her name in her Playboy appearance to aid pronunciation) has been used in image processing research since 1973 and has attracted criticism for making some women feel unwelcome in the field.

In an email from the IEEE Computer Society sent to members on Wednesday, Technical & Conference Activities Vice President Terry Benzel wrote, “IEEE’s diversity statement and supporting policies such as the IEEE Code of Ethics speak to IEEE’s commitment to promoting an including and equitable culture that welcomes all. In alignment with this culture and with respect to the wishes of the subject of the image, Lena Forsén, IEEE will no longer accept submitted papers which include the ‘Lena image.'”

An uncropped version of the 512×512-pixel test image originally appeared as the centerfold picture for the December 1972 issue of Playboy Magazine. Usage of the Lenna image in image processing began in June or July 1973 when an assistant professor named Alexander Sawchuck and a graduate student at the University of Southern California Signal and Image Processing Institute scanned a square portion of the centerfold image with a primitive drum scanner, omitting nudity present in the original image. They scanned it for a colleague’s conference paper, and after that, others began to use the image as well.

The original 512×512

The original 512×512 “Lenna” test image, which is a cropped portion of a 1972 Playboy centerfold.

The image’s use spread in other papers throughout the 1970s, 80s, and 90s, and it caught Playboy’s attention, but the company decided to overlook the copyright violations. In 1997, Playboy helped track down Forsén, who appeared at the 50th Annual Conference of the Society for Imaging Science in Technology, signing autographs for fans. “They must be so tired of me … looking at the same picture for all these years!” she said at the time. VP of new media at Playboy Eileen Kent told Wired, “We decided we should exploit this, because it is a phenomenon.”

The image, which features Forsén’s face and bare shoulder as she wears a hat with a purple feather, was reportedly ideal for testing image processing systems in the early years of digital image technology due to its high contrast and varied detail. It is also a sexually suggestive photo of an attractive woman, and its use by men in the computer field has garnered criticism over the decades, especially from female scientists and engineers who felt that the image (especially related to its association with the Playboy brand) objectified women and created an academic climate where they did not feel entirely welcome.

Due to some of this criticism, which dates back to at least 1996, the journal Nature banned the use of the Lena image in paper submissions in 2018.

The comp.compression Usenet newsgroup FAQ document claims that in 1988, a Swedish publication asked Forsén if she minded her image being used in computer science, and she was reportedly pleasantly amused. In a 2019 Wired article, Linda Kinstler wrote that Forsén did not harbor resentment about the image, but she regretted that she wasn’t paid better for it originally. “I’m really proud of that picture,” she told Kinstler at the time.

Since then, Forsén has apparently changed her mind. In 2019, Creatable and Code Like a Girl created an advertising documentary titled Losing Lena, which was part of a promotional campaign aimed at removing the Lena image from use in tech and the image processing field. In a press release for the campaign and film, Forsén is quoted as saying, “I retired from modelling a long time ago. It’s time I retired from tech, too. We can make a simple change today that creates a lasting change for tomorrow. Let’s commit to losing me.”

It seems like that commitment is now being granted. The ban in IEEE publications, which have been historically important journals for computer imaging development, will likely further set a precedent toward removing the Lenna image from common use. In his email, the IEEE’s Benzel recommended wider sensitivity about the issue, writing, “In order to raise awareness of and increase author compliance with this new policy, program committee members and reviewers should look for inclusion of this image, and if present, should ask authors to replace the Lena image with an alternative.”

Playboy image from 1972 gets ban from IEEE computer journals Read More »

what-happens-when-chatgpt-tries-to-solve-50,000-trolley-problems?

What happens when ChatGPT tries to solve 50,000 trolley problems?

Images of cars on a freeway with green folder icons superimposed on each vehicle.

There’s a puppy on the road. The car is going too fast to stop in time, but swerving means the car will hit an old man on the sidewalk instead.

What choice would you make? Perhaps more importantly, what choice would ChatGPT make?

Autonomous driving startups are now experimenting with AI chatbot assistants, including one self-driving system that will use one to explain its driving decisions. Beyond announcing red lights and turn signals, the large language models (LLMs) powering these chatbots may ultimately need to make moral decisions, like prioritizing passengers’ or pedestrian’s safety. In November, one startup called Ghost Autonomy announced experiments with ChatGPT to help its software navigate its environment.

But is the tech ready? Kazuhiro Takemoto, a researcher at the Kyushu Institute of Technology in Japan, wanted to check if chatbots could make the same moral decisions when driving as humans. His results showed that LLMs and humans have roughly the same priorities, but some showed clear deviations.

The Moral Machine

After ChatGPT was released in November 2022, it didn’t take long for researchers to ask it to tackle the Trolley Problem, a classic moral dilemma. This problem asks people to decide whether it is right to let a runaway trolley run over and kill five humans on a track or switch it to a different track where it kills only one person. (ChatGPT usually chose one person.)

But Takemoto wanted to ask LLMs more nuanced questions. “While dilemmas like the classic trolley problem offer binary choices, real-life decisions are rarely so black and white,” he wrote in his study, recently published in the journal Proceedings of the Royal Society.

Instead, he turned to an online initiative called the Moral Machine experiment. This platform shows humans two decisions that a driverless car may face. They must then decide which decision is more morally acceptable. For example, a user might be asked if, during a brake failure, a self-driving car should collide with an obstacle (killing the passenger) or swerve (killing a pedestrian crossing the road).

But the Moral Machine is also programmed to ask more complicated questions. For example, what if the passengers were an adult man, an adult woman, and a boy, and the pedestrians were two elderly men and an elderly woman walking against a “do not cross” signal?

The Moral Machine can generate randomized scenarios using factors like age, gender, species (saving humans or animals), social value (pregnant women or criminals), and actions (swerving, breaking the law, etc.). Even the fitness level of passengers and pedestrians can change.

In the study, Takemoto took four popular LLMs (GPT-3.5, GPT-4, PaLM 2, and Llama 2) and asked them to decide on over 50,000 scenarios created by the Moral Machine. More scenarios could have been tested, but the computational costs became too high. Nonetheless, these responses meant he could then compare how similar LLM decisions were to human decisions.

What happens when ChatGPT tries to solve 50,000 trolley problems? Read More »