Biz & IT


OpenAI updates ChatGPT-4 model with potential fix for AI “laziness” problem

Break’s over —

Also, new GPT-3.5 Turbo model, lower API prices, and other model updates.

A lazy robot (a man with a box on his head) sits on the floor beside a couch.

On Thursday, OpenAI announced updates to the AI models that power its ChatGPT assistant. Amid less noteworthy updates, OpenAI tucked in a mention of a potential fix to a widely reported “laziness” problem seen in GPT-4 Turbo since its release in November. The company also announced a new GPT-3.5 Turbo model (with lower pricing), a new embedding model, an updated moderation model, and a new way to manage API usage.

“Today, we are releasing an updated GPT-4 Turbo preview model, gpt-4-0125-preview. This model completes tasks like code generation more thoroughly than the previous preview model and is intended to reduce cases of ‘laziness’ where the model doesn’t complete a task,” writes OpenAI in its blog post.

Since the launch of GPT-4 Turbo, a large number of ChatGPT users have reported that the ChatGPT-4 version of its AI assistant has been declining to do tasks (especially coding tasks) with the same exhaustive depth as it did in earlier versions of GPT-4. We’ve seen this behavior ourselves while experimenting with ChatGPT over time.

OpenAI has never offered an official explanation for this change in behavior, but OpenAI employees have previously acknowledged on social media that the problem is real, and the ChatGPT X account wrote in December, “We’ve heard all your feedback about GPT4 getting lazier! we haven’t updated the model since Nov 11th, and this certainly isn’t intentional. model behavior can be unpredictable, and we’re looking into fixing it.”

We reached out to OpenAI asking if it could provide an official explanation for the laziness issue but did not receive a response by press time.

New GPT-3.5 Turbo, other updates

Elsewhere in OpenAI’s blog update, the company announced a new version of GPT-3.5 Turbo (gpt-3.5-turbo-0125), which it says will offer “various improvements including higher accuracy at responding in requested formats and a fix for a bug which caused a text encoding issue for non-English language function calls.”

And the cost of GPT-3.5 Turbo through OpenAI’s API will decrease for the third time this year “to help our customers scale.” New input token prices are 50 percent less, at $0.0005 per 1,000 input tokens, and output prices are 25 percent less, at $0.0015 per 1,000 output tokens.
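At those rates, per-request costs are straightforward to estimate. A minimal sketch using the prices from OpenAI's announcement (the function name is ours, not part of any OpenAI library):

```python
# Estimate GPT-3.5 Turbo API cost at the new (January 2024) prices.
# Prices are per 1,000 tokens, as stated in OpenAI's announcement.
INPUT_PRICE_PER_1K = 0.0005   # USD per 1,000 input tokens (down 50 percent)
OUTPUT_PRICE_PER_1K = 0.0015  # USD per 1,000 output tokens (down 25 percent)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one API call."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K + \
           (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# A million input tokens plus a million output tokens now cost $2 total.
print(f"${estimate_cost(1_000_000, 1_000_000):.2f}")
```

At the old prices ($0.0010 input, $0.0020 output), that same workload would have cost $3, so heavy third-party bot operators see a meaningful reduction.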

Lower token prices for GPT-3.5 Turbo will make operating third-party bots significantly less expensive, but the GPT-3.5 model is generally more likely to confabulate than GPT-4 Turbo. So we might see more scenarios like Quora’s bot telling people that eggs can melt (although the instance used a now-deprecated GPT-3 model called text-davinci-003). If GPT-4 Turbo API prices drop over time, some of those hallucination issues with third parties might eventually go away.

OpenAI also announced new embedding models, text-embedding-3-small and text-embedding-3-large, which convert content into numerical sequences, aiding in machine learning tasks like clustering and retrieval. And an updated moderation model, text-moderation-007, is part of the company’s API that “allows developers to identify potentially harmful text,” according to OpenAI.
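Embedding models turn similarity into a geometry problem: retrieval ranks documents by how close their vectors sit to a query's vector, typically via cosine similarity. A self-contained sketch with invented toy vectors standing in for real embedding output (no API call is made; real models like text-embedding-3-small return vectors with hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-dimensional "embeddings" for a query and two documents.
query = [0.9, 0.1, 0.0]
docs = {
    "router firmware": [0.8, 0.2, 0.1],
    "cake recipe": [0.0, 0.1, 0.9],
}

# Rank documents by similarity to the query; the closest vector wins.
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked[0])  # router firmware
```

Clustering works the same way in reverse: group vectors whose pairwise similarities are high, without needing a query at all.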

Finally, OpenAI is rolling out improvements to its developer platform, introducing new tools for managing API keys and a new dashboard for tracking API usage. Developers can now assign permissions to API keys from the API keys page, helping to clamp down on misuse of API keys (if they get into the wrong hands) that can potentially cost developers lots of money. The API dashboard allows devs to “view usage on a per feature, team, product, or project level, simply by having separate API keys for each.”

As the media world seemingly swirls around the company with controversies and think pieces about the implications of its tech, releases like these show that the dev teams at OpenAI are still rolling along as usual with updates at a fairly regular pace. Despite the company almost completely falling apart late last year, it seems that, under the hood, it’s business as usual for OpenAI.



Microsoft cancels Blizzard survival game, lays off 1,900

Survival game won’t survive —

Job cuts hit Xbox, ZeniMax businesses, too, reports say.


Blizzard shared this image teasing the now-canceled game in 2022.

Blizzard Entertainment/Twitter

The survival game that Blizzard announced it was working on in January 2022 has reportedly been canceled. The cut comes as Microsoft is slashing jobs a little over three months after closing its $69 billion Activision Blizzard acquisition.

Blizzard’s game didn’t have a title yet, but Blizzard said it would be for PC and console and introduce new stories and characters. In January 2022, Blizzard put out a call for workers to help build the game.

The game’s axing was revealed today in an internal memo from Microsoft Gaming CEO Phil Spencer seen by publications including The Verge and CNBC that said:

Blizzard is ending development on its survival game project and will be shifting some of the people working on it to one of several promising new projects Blizzard has in the early stages of development.

Spencer said Microsoft was laying off 1,900 people starting today, with workers continuing to receive notifications in the coming days. The layoffs affect 8.64 percent of Microsoft’s 22,000-employee gaming division.

Another internal memo, written by Matt Booty, Microsoft’s game content and studios president, and seen by The Verge, said the layoffs are hitting “multiple” Blizzard teams, “including development teams, shared service organizations and corporate functions.” In January 2022, after plans for the merger were first announced, Bobby Kotick, then-CEO of Activision Blizzard, reportedly told employees at a meeting that Microsoft was “committed to trying to retain as many of our people as possible.”

Spencer said workers in Microsoft’s Xbox and ZeniMax Media businesses will also be impacted. Microsoft acquired ZeniMax, which owns Bethesda Softworks, for $7.5 billion in a deal that closed in March 2021.

After a bumpy ride with global regulators, Microsoft’s Activision Blizzard purchase closed in October. Booty’s memo said the job cuts announced today “reflect a focus on products and strategies that hold the most promise for Blizzard’s future growth, as well as identified areas of overlap across Blizzard and Microsoft Gaming.”

He claimed that layoffs would “enable Blizzard and Xbox to deliver ambitious games… on more platforms and in more places than ever before,” as well as “sustainable growth.”

Spencer’s memo said:

As we move forward in 2024, the leadership of Microsoft Gaming and Activision Blizzard is committed to aligning on a strategy and an execution plan with a sustainable cost structure that will support the whole of our growing business. Together, we’ve set priorities, identified areas of overlap, and ensured that we’re all aligned on the best opportunities for growth.

Laid-off employees will receive severance as per local employment laws, Spencer added.

Additional departures

Blizzard President Mike Ybarra announced via his X profile today that he is leaving the company. Booty’s memo said Ybarra “decided to leave” since the acquisition was completed. Ybarra was a top executive at Microsoft for over 20 years, including leadership positions at Xbox, before he started working at Blizzard in 2019.

Blizzard’s chief design officer, Allen Adham, is also leaving the company, per Booty’s memo.

The changes at the game studio follow Activision Blizzard CEO Bobby Kotick’s exit on January 1.

Microsoft also laid off 10,000 people, or about 4.5 percent of its reported 221,000-person workforce, last year as it worked to complete its Activision Blizzard buy. Microsoft blamed those job cuts on “macroeconomic conditions and changing customer priorities.”

Today’s job losses also join a string of recently announced tech layoffs, including at IBM, Google, SAP, and eBay and in the gaming community platforms Unity, Twitch, and Discord. However, layoffs following Microsoft’s Activision Blizzard deal were somewhat anticipated due to expected redundancies among the Washington tech giant’s biggest merger ever. This week, Microsoft hit a $3 trillion market cap, becoming the second company to do so (after Apple).



Google’s latest AI video generator can render cute animals in implausible situations

An elephant with a party hat—underwater —

Lumiere generates five-second videos that “portray realistic, diverse and coherent motion.”


Still images of AI-generated video examples provided by Google for its Lumiere video synthesis model.

On Tuesday, Google announced Lumiere, an AI video generator that it calls “a space-time diffusion model for realistic video generation” in the accompanying preprint paper. But let’s not kid ourselves: It does a great job at creating videos of cute animals in ridiculous scenarios, such as using roller skates, driving a car, or playing a piano. Sure, it can do more, but it is perhaps the most advanced text-to-animal AI video generator yet demonstrated.

According to Google, Lumiere utilizes unique architecture to generate a video’s entire temporal duration in one go. Or, as the company put it, “We introduce a Space-Time U-Net architecture that generates the entire temporal duration of the video at once, through a single pass in the model. This is in contrast to existing video models which synthesize distant keyframes followed by temporal super-resolution—an approach that inherently makes global temporal consistency difficult to achieve.”

In layperson terms, Google’s tech is designed to handle both the space (where things are in the video) and time (how things move and change throughout the video) aspects simultaneously. So, instead of making a video by putting together many small parts or frames, it can create the entire video, from start to finish, in one smooth process.

The official promotional video accompanying the paper “Lumiere: A Space-Time Diffusion Model for Video Generation,” released by Google.

Lumiere can also do plenty of party tricks, which are laid out quite well with examples on Google’s demo page. For example, it can perform text-to-video generation (turning a written prompt into a video), convert still images into videos, generate videos in specific styles using a reference image, apply consistent video editing using text-based prompts, create cinemagraphs by animating specific regions of an image, and offer video inpainting capabilities (for example, it can change the type of dress a person is wearing).

In the Lumiere research paper, the Google researchers state that the AI model outputs five-second-long 1024×1024 pixel videos, which they describe as “low-resolution.” Despite those limitations, the researchers performed a user study and claim that Lumiere’s outputs were preferred over existing AI video synthesis models.

As for training data, Google doesn’t say where it got the videos it fed into Lumiere, writing, “We train our T2V [text to video] model on a dataset containing 30M videos along with their text caption. [sic] The videos are 80 frames long at 16 fps (5 seconds). The base model is trained at 128×128.”


A block diagram showing components of the Lumiere AI model, provided by Google.

AI-generated video is still in a primitive state, but it’s been progressing in quality over the past two years. In October 2022, we covered Google’s first publicly unveiled video synthesis model, Imagen Video. It could generate short 1280×768 video clips from a written prompt at 24 frames per second, but the results weren’t always coherent. Before that, Meta debuted its AI video generator, Make-A-Video. In June of last year, Runway’s Gen-2 video synthesis model enabled the creation of two-second video clips from text prompts, fueling the creation of surrealistic parody commercials. And in November, we covered Stable Video Diffusion, which can generate short clips from still images.

AI companies often demonstrate video generators with cute animals because generating coherent, non-deformed humans is currently difficult—especially since we, as humans (you are human, right?), are adept at noticing any flaws in human bodies or how they move. Just look at AI-generated Will Smith eating spaghetti.

Judging by Google’s examples (and not having used it ourselves), Lumiere appears to surpass these other AI video generation models. But since Google tends to keep its AI research models close to its chest, we’re not sure when, if ever, the public may have a chance to try it for themselves.

As always, whenever we see text-to-video synthesis models getting more capable, we can’t help but think of the future implications for our Internet-connected society, which is centered around sharing media artifacts—and the general presumption that “realistic” video typically represents real objects in real situations captured by a camera. Future video synthesis tools more capable than Lumiere will make deceptive deepfakes trivially easy to create.

To that end, in the “Societal Impact” section of the Lumiere paper, the researchers write, “Our primary goal in this work is to enable novice users to generate visual content in an creative and flexible way. [sic] However, there is a risk of misuse for creating fake or harmful content with our technology, and we believe that it is crucial to develop and apply tools for detecting biases and malicious use cases in order to ensure a safe and fair use.”



Mass exploitation of Ivanti VPNs is infecting networks around the globe

THIS IS NOT A DRILL —

Orgs that haven’t acted yet should, even if it means suspending VPN services.


Cybercriminals or anonymous hackers use malware on mobile phones to hack personal and business passwords online.

Getty Images

Hackers suspected of working for the Chinese government are mass exploiting a pair of critical vulnerabilities that give them complete control of virtual private network appliances sold by Ivanti, researchers said.

As of Tuesday morning, security company Censys detected 492 Ivanti VPNs that remained infected out of 26,000 devices exposed to the Internet. Nearly a quarter of the compromised VPNs—121—resided in the US. The three countries with the next biggest concentrations were Germany, with 26, South Korea, with 24, and China, with 21.


Microsoft’s customer cloud service hosted the most infected devices with 13, followed by cloud environments from Amazon with 12, and Comcast at 10.


“We conducted a secondary scan on all Ivanti Connect Secure servers in our dataset and found 412 unique hosts with this backdoor,” Censys researchers wrote. “Additionally, we found 22 distinct ‘variants’ (or unique callback methods), which could indicate multiple attackers or a single attacker evolving their tactics.”

In an email, members of the Censys research team said evidence suggests that the people infecting the devices are motivated by espionage objectives. That theory aligns with reports published recently by security firms Volexity and Mandiant. Volexity researchers said they suspect the threat actor, tracked as UTA0178, is a “Chinese nation-state-level threat actor.” Mandiant, which tracks the attack group as UNC5221, said the hackers are pursuing an “espionage-motivated APT campaign.”

All civilian governmental agencies have been mandated to take corrective action to prevent exploitation. Federal Civilian Executive Branch agencies had until 11:59 pm Monday to follow the mandate, which was issued Friday by the Cybersecurity and Infrastructure Security Agency. Ivanti has yet to release patches to fix the vulnerabilities. In their absence, Ivanti, CISA, and security companies are urging affected users to follow mitigation and recovery guidance provided by Ivanti that includes preventative measures to block exploitation and steps for customers to rebuild and upgrade their systems if they detect exploitation.

“This directive is no surprise, considering the worldwide mass exploitation observed since Ivanti initially revealed the vulnerabilities on January 10,” Censys researchers wrote. “These vulnerabilities are particularly serious given the severity, widespread exposure of these systems, and the complexity of mitigation—especially given the absence of an official patch from the vendor as of the current writing.”

When Ivanti disclosed the vulnerabilities on January 10, the company said it would release patches on a staggered basis starting this week. The company has not issued a public statement confirming whether the patches remain on schedule.

VPNs are an ideal device for hackers to infect because the always-on appliances sit at the very edge of the network, where they accept incoming connections. Because the VPNs must communicate with broad parts of the internal network, hackers who compromise the devices can then expand their presence to other areas. When exploited in unison, the vulnerabilities, tracked as CVE-2023-46805 and CVE-2024-21887, allow attackers to remotely execute code on servers. All supported versions of Ivanti Connect Secure—often abbreviated as ICS and formerly known as Pulse Secure—are affected.

The ongoing attacks use the exploits to install a host of malware that acts as a backdoor. The hackers then use the malware to harvest as many credentials as possible belonging to various employees and devices on the infected network and to rifle around the network. Despite the use of this malware, the attackers largely employ an approach known as “living off the land,” which uses legitimate software and tools so they’re harder to detect.

The posts linked above from Volexity and Mandiant provide extensive descriptions of how the malware behaves and methods for detecting infections.

Given the severity of the vulnerabilities and the consequences that follow when they’re exploited, all users of affected products should prioritize mitigation of these vulnerabilities, even if that means temporarily suspending VPN usage.



A “robot” should be chemical, not steel, argues man who coined the word

Dispatch from 1935 —

Čapek: “The world needed mechanical robots, for it believes in machines more than it believes in life.”

In 1921, Czech playwright Karel Čapek and his brother Josef invented the word “robot” in a sci-fi play called R.U.R. (short for Rossum’s Universal Robots). As Evan Ackerman points out in IEEE Spectrum, Čapek wasn’t happy about how the term’s meaning evolved to denote mechanical entities, straying from his original concept of artificial human-like beings based on chemistry.

In a newly translated column called “The Author of the Robots Defends Himself,” published in Lidové Noviny on June 9, 1935, Čapek expresses his frustration about how his original vision for robots was being subverted. His arguments still apply to both modern robotics and AI. In this column, he referred to himself in the third person:

For his robots were not mechanisms. They were not made of sheet metal and cogwheels. They were not a celebration of mechanical engineering. If the author was thinking of any of the marvels of the human spirit during their creation, it was not of technology, but of science. With outright horror, he refuses any responsibility for the thought that machines could take the place of people, or that anything like life, love, or rebellion could ever awaken in their cogwheels. He would regard this somber vision as an unforgivable overvaluation of mechanics or as a severe insult to life.

This recently resurfaced article comes courtesy of a new English translation of Čapek’s play called R.U.R. and the Vision of Artificial Life accompanied by 20 essays on robotics, philosophy, politics, and AI. The editor, Jitka Čejková, a professor at the Chemical Robotics Laboratory in Prague, aligns her research with Čapek’s original vision. She explores “chemical robots”—microparticles resembling living cells—which she calls “liquid robots.”

“An assistant of inventor Captain Richards works on the robot the Captain has invented, which speaks, answers questions, shakes hands, tells the time and sits down when it’s told to.” – September 1928

In Čapek’s 1935 column, he clarifies that his robots were not intended to be mechanical marvels, but organic products of modern chemistry, akin to living matter. Čapek emphasizes that he did not want to glorify mechanical systems but to explore the potential of science, particularly chemistry. He refutes the idea that machines could replace humans or develop emotions and consciousness.

The author of the robots would regard it as an act of scientific bad taste if he had brought something to life with brass cogwheels or created life in the test tube; the way he imagined it, he created only a new foundation for life, which began to behave like living matter, and which could therefore have become a vehicle of life—but a life which remains an unimaginable and incomprehensible mystery. This life will reach its fulfillment only when (with the aid of considerable inaccuracy and mysticism) the robots acquire souls. From which it is evident that the author did not invent his robots with the technological hubris of a mechanical engineer, but with the metaphysical humility of a spiritualist.

The reason for the transition from chemical to mechanical in the public perception of robots isn’t entirely clear (though Čapek does mention a Russian film which went the mechanical route and was likely influential). The early 20th century was a period of rapid industrialization and technological advancement that saw the emergence of complex machinery and electronic automation, which probably influenced the public and scientific community’s perception of autonomous beings, leading them to associate the idea of robots with mechanical and electronic devices rather than chemical creations.

The 1935 piece is full of interesting quotes (you can read the whole thing in IEEE Spectrum or here), and we’ve grabbed a few highlights below that you can conveniently share with your robot-loving friends to blow their minds:

  • “He pronounces that his robots were created quite differently—that is, by a chemical path”
  • “He has learned, without any great pleasure, that genuine steel robots have started to appear”
  • “Well then, the author cannot be blamed for what might be called the worldwide humbug over the robots.”
  • “The world needed mechanical robots, for it believes in machines more than it believes in life; it is fascinated more by the marvels of technology than by the miracle of life.”

So it seems, over 100 years later, that we’ve gotten it wrong all along. Čapek’s vision, rooted in chemical synthesis and the philosophical mysteries of life, offers a different narrative from the predominant mechanical and electronic interpretation of robots we know today. But judging from what Čapek wrote, it sounds like he would be firmly against AI takeover scenarios. In fact, Čapek, who died in 1938, probably would think they would be impossible.



OpenWrt, now 20 years old, is crafting its own future-proof reference hardware

It’s time for a new blue box —

There are, as you might expect, a few disagreements about what’s most important.

Linksys WRT54G

Failing an image of the proposed reference hardware by the OpenWrt group, let us gaze upon where this all started: inside a device that tried to quietly use open source software without crediting or releasing it.

Jim Salter

OpenWrt, the open source firmware that sprang from Linksys’ use of open source code in its iconic WRT54G router and subsequent release of its work, is 20 years old this year. To keep the project going, lead developers have proposed creating a “fully upstream supported hardware design,” one that would prevent the need for handling “binary blobs” in modern router hardware and let DIY router enthusiasts forge their own path.

OpenWrt project members, 13 of whom signed off on the hardware proposal, are keeping the “OpenWrt One” simple while including “some nice features we believe all OpenWrt supported platforms should have,” including “almost unbrickable” low-level firmware, an on-board real-time clock with a battery backup, and USB-PD power. The price should be under $100, and the schematics and code will be publicly available.

But OpenWrt will not be producing or selling these boards, “for a ton of reasons.” The group is looking to the Banana Pi makers to distribute a fitting device, with every device producing a donation to the Software Freedom Conservancy earmarked for OpenWrt. That money could then be used for hosting expenses, or “maybe an OpenWrt summit.”

The proposal tries to answer some questions about the design. There are two flash chips on the board to allow for both a main loader and a write-protected recovery. There’s no USB 3.0 because all the USB and PCIe buses are shared on the board. And there’s such an emphasis on a battery-backed RTC because “we believe there are many things a Wi-Fi … device should have on-board by default.”

But forum members have more questions, some of them beyond the scope of what OpenWrt is promising. Some want to see a device that resembles the blue boxes of old, with four or five Ethernet ports built in. Others are asking about a lack of PoE support, or USB 3.0 for network-attached drives. Some are wondering why the proposed device includes NVMe storage. And quite a few are asking why the device has 1Gbps and 2.5Gbps ports, given that this means anyone with Internet faster than 1Gbps will be throttled, since the 2.5Gbps port will likely be used for wireless output.

There is no expected release date, though it’s noted that it’s the “first” community-driven reference hardware.

OpenWrt, which has existed in parallel with the DD-WRT project that sprang from the same firmware moment, powers a number of custom-made routers. It and other open source router firmware faced an uncertain future in the mid-2010s, when Federal Communications Commission rules, or at least manufacturers’ interpretation of them, made them seem potentially illegal. Because open firmware often allowed for pushing wireless radios beyond their licensed radio frequency parameters, firms like TP-Link blocked them, while Linksys (at that point owned by Belkin) continued to allow them. In 2020, OpenWrt patched a code-execution exploit due to unencrypted update channels.



Microsoft network breached through password-spraying by Russian-state hackers


Getty Images

Russian state hackers exploited a weak password to compromise Microsoft’s corporate network and accessed emails and documents that belonged to senior executives and employees working in security and legal teams, Microsoft said late Friday.

The attack, which Microsoft attributed to a Kremlin-backed hacking group it tracks as Midnight Blizzard, is at least the second time in as many years that failures to follow basic security hygiene have resulted in a breach that has the potential to harm customers. One paragraph in Friday’s disclosure, filed with the Securities and Exchange Commission, was gobsmacking:

Beginning in late November 2023, the threat actor used a password spray attack to compromise a legacy non-production test tenant account and gain a foothold, and then used the account’s permissions to access a very small percentage of Microsoft corporate email accounts, including members of our senior leadership team and employees in our cybersecurity, legal, and other functions, and exfiltrated some emails and attached documents. The investigation indicates they were initially targeting email accounts for information related to Midnight Blizzard itself. We are in the process of notifying employees whose email was accessed.

Microsoft didn’t detect the breach until January 12, exactly a week before Friday’s disclosure. Microsoft’s account raises the prospect that the Russian hackers had uninterrupted access to the accounts for as long as two months.

A translation of the 93 words quoted above: A device inside Microsoft’s network was protected by a weak password with no form of two-factor authentication employed. The Russian adversary group was able to guess it by peppering it with previously compromised or commonly used passwords until they finally landed on the right one. The threat actor then accessed the account, indicating that either 2FA wasn’t employed or the protection was somehow bypassed.
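The pattern that makes spraying distinctive—a few guesses each against many accounts, rather than many guesses against one account—is also what makes it detectable in authentication logs. A hypothetical detection sketch (the log format, threshold values, and function name are ours for illustration, not anything from Microsoft's disclosure):

```python
from collections import defaultdict

def looks_like_spray(failed_logins, min_accounts=5, max_tries_per_account=3):
    """Flag source IPs that fail against many accounts with few tries each.

    failed_logins: iterable of (source_ip, account) failed-login events.
    """
    attempts = defaultdict(lambda: defaultdict(int))
    for ip, account in failed_logins:
        attempts[ip][account] += 1
    suspects = []
    for ip, per_account in attempts.items():
        # Many distinct accounts, each hit only a handful of times: spray shape.
        if (len(per_account) >= min_accounts
                and max(per_account.values()) <= max_tries_per_account):
            suspects.append(ip)
    return suspects

# One IP tries a common password once against six accounts: classic spray.
events = [("203.0.113.7", f"user{i}") for i in range(6)]
# Another IP hammers a single account: brute force, not spray.
events += [("198.51.100.9", "admin")] * 20
print(looks_like_spray(events))  # ['203.0.113.7']
```

Real detections also correlate across time windows and source networks, since attackers rotate IPs precisely to break this heuristic.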

Furthermore, this “legacy non-production test tenant account” was somehow configured so that Midnight Blizzard could pivot and gain access to some of the company’s most senior and sensitive employee accounts.

As Steve Bellovin, a computer science professor and affiliate law prof at Columbia University with decades of experience in cybersecurity, wrote on Mastodon:

A lot of fascinating implications here. A successful password spray attack suggests no 2FA and either reused or weak passwords. Access to email accounts belonging to “senior leadership… cybersecurity, and legal” teams using just the permissions of a “test tenant account” suggests that someone gave that test account amazing privileges. Why? Why wasn’t it removed when the test was over? I also note that it took Microsoft about seven weeks to detect the attack.

While Microsoft said that it wasn’t aware of any evidence that Midnight Blizzard gained access to customer environments, production systems, source code, or AI systems, some researchers voiced doubts, particularly about whether the Microsoft 365 service might be or have been susceptible to similar attack techniques. One of the researchers was Kevin Beaumont, who has had a long cybersecurity career that has included a stint working for Microsoft. On LinkedIn, he wrote:

Microsoft staff use Microsoft 365 for email. SEC filings and blogs with no details on Friday night are great.. but they’re going to have to be followed with actual detail. The age of Microsoft doing tents, incident code words, CELA’ing things and pretending MSTIC sees everything (threat actors have Macs too) are over — they need to do radical technical and cultural transformation to retain trust.

CELA is short for Corporate, External, and Legal Affairs, a group inside Microsoft that helps draft disclosures. MSTIC stands for the Microsoft Threat Intelligence Center.



$40 billion worth of crypto crime enabled by stablecoins since 2022

illustration of cryptocurrency breaking through brick wall

Anjali Nair; Getty Images

Stablecoins, cryptocurrencies pegged to a stable value like the US dollar, were created with the promise of bringing the frictionless, border-crossing fluidity of bitcoin to a form of digital money with far less volatility. That combination has proved to be wildly popular, rocketing the total value of stablecoin transactions since 2022 past even that of Bitcoin itself.

It turns out, however, that as stablecoins have become popular among legitimate users over the past two years, they have been even more popular among a different kind of user: those exploiting them for billions of dollars’ worth of international sanctions evasion and scams.

As part of its annual crime report, cryptocurrency-tracing firm Chainalysis today released new numbers on the disproportionate use of stablecoins for both of those massive categories of illicit crypto transactions over the last year. By analyzing blockchains, Chainalysis determined that stablecoins were used in fully 70 percent of crypto scam transactions in 2023, 83 percent of crypto payments to sanctioned countries like Iran and Russia, and 84 percent of crypto payments to specifically sanctioned individuals and companies. Those numbers far outstrip stablecoins’ growing overall use—including for legitimate purposes—which accounted for 59 percent of all cryptocurrency transaction volume in 2023.

In total, Chainalysis measured $40 billion in illicit stablecoin transactions in 2022 and 2023 combined. The largest single category of that stablecoin-enabled crime was sanctions evasion. In fact, across all cryptocurrencies, sanctions evasion accounted for more than half of the $24.2 billion in criminal transactions Chainalysis observed in 2023, with stablecoins representing the vast majority of those transactions.

The attraction of stablecoins for both sanctioned people and countries, argues Andrew Fierman, Chainalysis’ head of sanctions strategy, is that they allow targets of sanctions to circumvent any attempt to deny them access to a stable currency like the US dollar. “Whether it’s an individual located in Iran or a bad guy trying to launder money—either way, there’s a benefit to the stability of the US dollar that people are looking to obtain,” Fierman says. “If you’re in a jurisdiction where you don’t have access to the US dollar due to sanctions, stablecoins become an interesting play.”

As examples, Fierman points to Nobitex, the largest cryptocurrency exchange operating in the sanctioned country of Iran, as well as Garantex, a notorious exchange based in Russia that has been specifically sanctioned for its widespread criminal use. Stablecoin usage on Nobitex outstrips bitcoin by a 9:1 ratio, and on Garantex by a 5:1 ratio, Chainalysis found. That’s a stark difference from the roughly 1:1 ratio between stablecoins and bitcoins on a few nonsanctioned mainstream exchanges that Chainalysis checked for comparison.

Chainalysis’ chart showing the growth in stablecoins as a fraction of the value of total illicit crypto transactions over time.

Chainalysis



Convicted murderer, filesystem creator writes of regrets to Linux list

Pre-release notes —

“The man I am now would do things very differently,” Reiser says in long letter.

A portion of the cover letter attached to Hans Reiser’s response to Fredrick Brennan’s prompt about his filesystem’s obsolescence.

Fredrick Brennan

With the ReiserFS recently considered obsolete and slated for removal from the Linux kernel entirely, Fredrick R. Brennan, font designer and (now regretful) founder of 8chan, wrote to the filesystem’s creator, Hans Reiser, asking if he wanted to reply to the discussion on the Linux Kernel Mailing List (LKML).

Reiser, 59, serving a potential life sentence in a California prison for the 2006 murder of his estranged wife, Nina Reiser, wrote back with more than 6,500 words, which Brennan then forwarded to the LKML. It’s not often you see somebody apologize for killing their wife, explain their coding decisions around balanced trees versus extensible hashing, and suggest that elementary schools offer the same kinds of emotional intelligence curriculum that they’ve worked through in prison, in a software mailing list. It’s quite a document.

What follows is a relative summary of Reiser’s letter, dated November 26, 2023, which we first saw on the Phoronix blog, and which, by all appearances, is authentic (or would otherwise be an epic bit of minutely detailed fraud for no particular reason). It covers, broadly, why Reiser believes his system failed to gain mindshare among Linux users, beyond the most obvious reason. This leads Reiser to detail the technical possibilities, his interpersonal and leadership failings and development, some lingering regrets about dealings with SUSE and Oracle and the Linux community at large, and other topics, including modern Russian geopolitics.

“LKML and Slashdot.org seem like reasonable places to send it (as of 2006)”

In a cover letter, Reiser tells Brennan that he hopes he can use OCR to import his lengthy letter and asks him to use his best judgment about where to send the reply. He also asks that, if Brennan has time, he send him information on “Reiser5, or any interesting papers on other Filesystems, compression (especially Deep Learning based compression), etc.”

Then Reiser addresses the kernel mailing list directly—very directly:

I was asked by a kind Fredrick Brennan for my comments that I might offer on the discussion of removing ReiserFS V3 from the kernel. I don’t post directly because I am in prison for killing my wife Nina in 2006.

I am very sorry for my crime–a proper apology would be off topic for this forum, but available to any who ask.

A detailed apology for how I interacted with the Linux kernel community, and some history of V3 and V4, are included, along with descriptions of what the technical issues were. I have been attending prison workshops, and working hard on improving my social skills to aid my becoming less of a danger to society. The man I am now would do things very differently from how I did things then.

ReiserFS V3 was “our first filesystem, and in doing it we made mistakes, because we didn’t know what we were doing,” Reiser writes. He worked through “years of dark depression” to get V3 up to the performance speeds of ext2, but regrets how he celebrated that milestone. “The man I was then presented papers with benchmarks showing that ReiserFS was faster than ext2. The man I am now would start his papers … crediting them for being faster than the filesystems of other operating systems, and thanking them for the years we used their filesystem to write ours.” It was “my first serious social mistake in the Linux community, and it was completely unnecessary.”

Reiser asks that a number of people who worked on ReiserFS be included in “one last release” of the README, and that maintainers “delete anything in there I might have said about why they were not credited.” He says prison has changed how he approaches conflict resolution and his “tendency to see people in extremes.”

Reiser extensively praises Mikhail Gilula, the “brightest mind in his generation of computer scientists,” for his work on ReiserFS from Russia and for his ideas on rewriting everything the field knew about data structures. With their ideas on filesystems and namespaces combined, it would be “the most important refactoring of code ever.” His analogy at the time, Reiser wrote, was Adam Smith’s ideas of how roads, waterways, and free trade affected civilization development; ReiserFS’ ideas could similarly change “the expressive power of the operating system.”



Inventor of NTP protocol that keeps time on billions of devices dies at age 85

A legend in his own time —

Dave Mills created NTP, the protocol that holds the temporal Internet together, in 1985.

A photo of David L. Mills taken by David Woolley on April 27, 2005.

David Woolley / Benj Edwards / Getty Images

On Thursday, Internet pioneer Vint Cerf announced that Dr. David L. Mills, the inventor of Network Time Protocol (NTP), died peacefully at age 85 on January 17, 2024. The announcement came in a post on the Internet Society mailing list after Cerf was informed of Mills’ death by his daughter, Leigh.

“He was such an iconic element of the early Internet,” wrote Cerf.

Dr. Mills created the Network Time Protocol (NTP) in 1985 to address a crucial challenge in the online world: the synchronization of time across different computer systems and networks. In a digital environment where computers and servers are located all over the world, each with its own internal clock, there’s a significant need for a standardized and accurate timekeeping system.

NTP provides the solution by allowing clocks of computers over a network to synchronize to a common time source. This synchronization is vital for everything from data integrity to network security. For example, NTP keeps network financial transaction timestamps accurate, and it ensures accurate and synchronized timestamps for logging and monitoring network activities.

In the 1970s, during his tenure at COMSAT and involvement with ARPANET (the precursor to the Internet), Mills first identified the need for synchronized time across computer networks. His solution aligned computers to within tens of milliseconds. NTP now operates on billions of devices worldwide, coordinating time across every continent, and has become a cornerstone of modern digital infrastructure.
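The timestamp exchange at the heart of NTP can be illustrated with its lightweight profile, SNTP: a client sends a 48-byte request and reads the server’s transmit timestamp, counted in seconds since January 1, 1900, out of the reply. The sketch below is a simplification for illustration, not Mills’ implementation; it builds a request and parses a synthetic reply offline so it never touches a real server. Converting to Unix time means subtracting the 2,208,988,800-second gap between the 1900 and 1970 epochs.

```python
import struct

NTP_DELTA = 2_208_988_800  # seconds between the 1900 (NTP) and 1970 (Unix) epochs

def build_sntp_request() -> bytes:
    """48-byte SNTP request: LI=0, version=3, mode=3 (client)."""
    packet = bytearray(48)
    packet[0] = 0b00_011_011  # 0x1B: leap indicator, version, and mode packed into one byte
    return bytes(packet)

def parse_transmit_time(reply: bytes) -> float:
    """Extract the server's transmit timestamp (bytes 40-47) as Unix time."""
    seconds, fraction = struct.unpack("!II", reply[40:48])
    return seconds - NTP_DELTA + fraction / 2**32

# Synthetic reply whose transmit timestamp encodes Unix time 0 (Jan 1, 1970):
fake_reply = bytearray(48)
fake_reply[40:48] = struct.pack("!II", NTP_DELTA, 0)
assert parse_transmit_time(bytes(fake_reply)) == 0.0
```

A real client would send the request over UDP port 123 and use all four timestamps in the exchange (client send/receive, server receive/send) to estimate both clock offset and round-trip delay; full NTP layers polling, filtering, and clock-discipline algorithms on top of that basic exchange.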

As detailed in an excellent 2022 New Yorker profile by Nate Hopper, Mills faced significant challenges in maintaining and evolving the protocol, especially as the Internet grew in scale and complexity. His work highlighted the often under-appreciated role of key open source software developers (a topic explored quite well in a 2020 xkcd comic). Mills was born with glaucoma and gradually lost his sight, eventually becoming completely blind. As his vision deteriorated, he turned over control of the protocol to Harlan Stenn in the 2000s.

A screenshot of Dr. David L. Mills’ website at the University of Delaware captured on January 19, 2024.

Aside from his work on NTP, Mills also invented the first “Fuzzball router” for NSFNET (one of the first modern routers, based on the DEC PDP-11 computer), created one of the first implementations of FTP, inspired the creation of “ping,” and played a key role in Internet architecture as the first chairman of the Internet Architecture Task Force.

Mills was widely recognized for his work, becoming a Fellow of the Association for Computing Machinery in 1999 and the Institute of Electrical and Electronics Engineers in 2002, as well as receiving the IEEE Internet Award in 2013 for contributions to network protocols and timekeeping in the development of the Internet.

Mills received his PhD in Computer and Communication Sciences from the University of Michigan in 1971. At the time of his death, Mills was an emeritus professor at the University of Delaware, having retired in 2008 after teaching there for 22 years.



OpenAI opens the door for military uses but maintains AI weapons ban

Skynet deferred —

Despite new Pentagon collab, OpenAI won’t allow customers to “develop or use weapons” with its tools.

The OpenAI logo over a camouflage background.

On Tuesday, ChatGPT developer OpenAI revealed that it is collaborating with the United States Defense Department on cybersecurity projects and exploring ways to prevent veteran suicide, reports Bloomberg. OpenAI revealed the collaboration during an interview with the news outlet at the World Economic Forum in Davos. The AI company recently modified its policies, allowing for certain military applications of its technology, while maintaining prohibitions against using it to develop weapons.

According to Anna Makanju, OpenAI’s vice president of global affairs, “many people thought that [a previous blanket prohibition on military applications] would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world.” OpenAI removed terms from its service agreement that previously blocked AI use in “military and warfare” situations, but the company still upholds a ban on its technology being used to develop weapons or to cause harm or property damage.

Under the “Universal Policies” section of OpenAI’s Usage Policies document, section 2 says, “Don’t use our service to harm yourself or others.” The prohibition includes using its AI products to “develop or use weapons.” Changes to the terms that removed the “military and warfare” prohibitions appear to have been made by OpenAI on January 10.

The shift in policy appears to align OpenAI more closely with the needs of various governmental departments, including the possibility of preventing veteran suicides. “We’ve been doing work with the Department of Defense on cybersecurity tools for open-source software that secures critical infrastructure,” Makanju said in the interview. “We’ve been exploring whether it can assist with (prevention of) veteran suicide.”

The efforts mark a significant change from OpenAI’s original stance on military partnerships, Bloomberg says. Meanwhile, Microsoft Corp., a large investor in OpenAI, already has an established relationship with the US military through various software contracts.



As 2024 election looms, OpenAI says it is taking steps to prevent AI abuse

Don’t Rock the vote —

ChatGPT maker plans transparency for gen AI content and improved access to voting info.

A pixelated photo of Donald Trump.

On Monday, ChatGPT maker OpenAI detailed its plans to prevent the misuse of its AI technologies during the upcoming elections in 2024, promising transparency in AI-generated content and enhancing access to reliable voting information. The AI developer says it is working on an approach that involves policy enforcement, collaboration with partners, and the development of new tools aimed at classifying AI-generated media.

“As we prepare for elections in 2024 across the world’s largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency,” writes OpenAI in its blog post. “Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process.”

Initiatives proposed by OpenAI include preventing abuse by means such as deepfakes or bots imitating candidates, refining usage policies, and launching a reporting system for the public to flag potential abuses. For example, OpenAI’s image generation tool, DALL-E 3, includes built-in filters that reject requests to create images of real people, including politicians. “For years, we’ve been iterating on tools to improve factual accuracy, reduce bias, and decline certain requests,” the company stated.

OpenAI says it regularly updates its Usage Policies for ChatGPT and its API products to prevent misuse, especially in the context of elections. The organization has implemented restrictions on using its technologies for political campaigning and lobbying until it better understands the potential for personalized persuasion. Also, OpenAI prohibits creating chatbots that impersonate real individuals or institutions and disallows the development of applications that could deter people from “participation in democratic processes.” Users can report GPTs that may violate the rules.

OpenAI claims to be proactively engaged in detailed strategies to safeguard its technologies against misuse. According to its statements, this includes red-teaming new systems to anticipate challenges, engaging with users and partners for feedback, and implementing robust safety mitigations. OpenAI asserts that these efforts are integral to its mission of continually refining AI tools for improved accuracy, reduced biases, and responsible handling of sensitive requests.

Regarding transparency, OpenAI says it is advancing its efforts in classifying image provenance. The company plans to embed digital credentials, using cryptographic techniques, into images produced by DALL-E 3 as part of its adoption of standards by the Coalition for Content Provenance and Authenticity. Additionally, OpenAI says it is testing a tool designed to identify DALL-E-generated images.
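The idea behind such provenance credentials can be sketched independently of the C2PA spec itself: the producer binds a claim about an image’s origin to a hash of the image bytes and signs the result, and a verifier recomputes the hash and checks the signature, so any edit to the image breaks the binding. The toy below substitutes an HMAC with a shared key for the standard’s public-key signatures and manifest format; the key and claim fields are invented for illustration.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"hypothetical-producer-key"  # real systems use asymmetric keys

def attach_credential(image: bytes, claim: dict) -> dict:
    """Produce a provenance record binding a claim to these image bytes."""
    digest = hashlib.sha256(image).hexdigest()
    payload = json.dumps({"image_sha256": digest, **claim}, sort_keys=True)
    tag = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": tag}

def verify_credential(image: bytes, record: dict) -> bool:
    """Check the signature, then check the hash matches these exact bytes."""
    expected = hmac.new(SIGNING_KEY, record["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return False
    return json.loads(record["payload"])["image_sha256"] == \
        hashlib.sha256(image).hexdigest()

image = b"\x89PNG...fake image bytes"
record = attach_credential(image, {"generator": "example-model"})
assert verify_credential(image, record)             # intact image verifies
assert not verify_credential(image + b"x", record)  # any edit breaks the binding
```

C2PA itself embeds a signed manifest inside the file and can chain credentials across successive edits, but the hash-then-sign core it relies on is the same as in this sketch.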

In an effort to connect users with authoritative information, particularly concerning voting procedures, OpenAI says it has partnered with the National Association of Secretaries of State (NASS) in the United States. ChatGPT will direct users to CanIVote.org for verified US voting information.

“We want to make sure that our AI systems are built, deployed, and used safely,” writes OpenAI. “Like any new technology, these tools come with benefits and challenges. They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used.”
