Author name: Rejus Almole

Don’t Ignore What You Can Easily Control: Your IP Infrastructure

In today’s rapidly evolving digital landscape, managing IP infrastructure has become increasingly complex and critical for organizations of all sizes. As networks grow and diversify, encompassing on-premises, cloud, and hybrid environments, the challenges of efficiently managing DNS, DHCP, and IP address management services—collectively known as DDI—have multiplied. This blog post explores the key challenges in managing an evolving IP infrastructure, how DDI vendors are addressing those challenges, and what steps you should take to optimize your network management.

Challenges in Managing IP Infrastructure

IP infrastructure management involves the planning, allocation, tracking, and administration of IP addresses, DNS records, and DHCP services within an organization’s network. Effective management of these core network services is crucial for ensuring network stability, security, and performance. As networks become more complex, traditional methods of managing IP infrastructure have become inadequate and error-prone, giving rise to several challenges:

  • Manual management and human error: Many organizations still rely on spreadsheets or outdated tools for IP address management, which are prone to human error and inefficiency. This manual approach becomes increasingly untenable as networks grow in size and complexity.
  • Rapid growth of connected devices: The proliferation of internet of things (IoT) devices, cloud services, and mobile technologies has led to an exponential increase in the number of IP addresses that need to be managed. This growth strains traditional IP management systems and increases the risk of address conflicts and exhaustion.
  • Dynamic nature of modern networks: The increasing use of virtualization and containerization technologies results in more frequent changes to IP address assignments, making it difficult to maintain an up-to-date inventory.
  • Complexity of hybrid and multicloud environments: As organizations adopt hybrid and multicloud strategies, managing IP addresses across diverse environments becomes increasingly challenging. Each cloud provider may have its own IPAM tools and capabilities, leading to inconsistencies and potential conflicts.
  • IPv4 to IPv6 transition: The ongoing transition from IPv4 to IPv6 adds another layer of complexity to IP management. Organizations must manage both protocols simultaneously, ensuring seamless communication and avoiding disruptions during the transition period.
  • Security and compliance: With cybersecurity’s rising importance, IP address management plays a crucial role in network security. Ensuring proper allocation, tracking, and monitoring of IP addresses is essential for maintaining a secure network and complying with various regulations.
  • Lack of centralized visibility: Without a centralized management system, organizations struggle to maintain an accurate, real-time view of their IP address usage across different network segments and environments.

How DDI Vendors Are Addressing These Challenges

DDI vendors have recognized these challenges and are developing sophisticated solutions to address them. Here are some key ways DDI vendors are innovating to help organizations manage their IP infrastructure more effectively:

  • Integrated DDI platforms: Vendors are offering comprehensive DDI solutions that integrate DNS, DHCP, and IPAM functionalities into a single platform. This integration provides a unified view of the network and simplifies management tasks.
  • Cloud-native and multicloud support: Modern DDI solutions are designed to work seamlessly across on-premises, cloud, and hybrid environments. They offer consistent management interfaces and capabilities regardless of the underlying infrastructure.
  • Automation and AI-driven management: Advanced DDI platforms leverage automation and AI to streamline IP address allocation, detect conflicts, and optimize resource utilization. This reduces the burden on IT teams and minimizes human errors.
  • Enhanced security features: DDI vendors are incorporating robust security features into their solutions, including DNS security extensions (DNSSEC), threat intelligence integration, and automated threat detection and response capabilities.
  • IPv6 transition support: DDI solutions now offer comprehensive support for both IPv4 and IPv6, facilitating a smooth transition between the two protocols and enabling efficient management of dual-stack environments.
  • API-driven architecture: Modern DDI platforms provide extensive API support, enabling integration with other IT management tools and supporting network automation initiatives.
  • Scalability and performance optimization: Vendors are focusing on developing highly scalable solutions that can handle the growing number of IP addresses and DNS queries in large-scale networks without compromising performance.

13 Recommendations for Effective IP Infrastructure Management

Given the evolving landscape of IP infrastructure management and the solutions offered by DDI vendors, here are key recommendations for prospective customers:

  1. Assess current infrastructure: Conduct a thorough assessment of your current IP infrastructure, including an inventory of all IP addresses, DNS configurations, and DHCP settings. Identify pain points and areas where manual processes are causing inefficiencies or errors.
  2. Define clear objectives: Establish clear objectives for your IP management strategy. This might include improving security, enhancing visibility, increasing automation, or preparing for IPv6 adoption. Your objectives will guide your choice of DDI solution and implementation strategy.
  3. Develop a structured IP addressing scheme: Create and implement a well-structured IP addressing scheme that allows for efficient allocation and easy management; a brief subnet-planning sketch in Python follows this list. Consider factors such as:
    • Hierarchical addressing
    • Subnet planning
    • Reserved address ranges for critical infrastructure
    • Future growth and scalability
  4. Implement a centralized DDI solution: Invest in a comprehensive DDI solution that provides centralized management of DNS, DHCP, and IPAM across all network environments. Look for solutions that offer:
    • Multivendor support
    • Cloud and on-premises compatibility
    • Automation capabilities
    • Strong security features
    • Scalability to accommodate future growth
  5. Embrace automation for IP address management: Look for opportunities to automate routine IP management work. Leverage the automation capabilities of your DDI solution to streamline tasks such as IP address allocation, DNS record updates, DHCP configuration changes, and conflict detection; an API-driven allocation sketch appears after this list.
  6. Implement regular audits and monitoring: Establish a process for regular IP address audits to maintain accuracy and identify unused or misallocated addresses. Implement real-time monitoring to detect and resolve issues promptly. Adopt IP management best practices, such as:
    • Implementing a structured IP addressing scheme
    • Regularly auditing IP address usage
    • Maintaining detailed documentation of IP assignments and policies
    • Using DHCP for dynamic IP address assignment where appropriate
    • Implementing proper subnet planning
  7. Prepare for IPv6 adoption: Even if you’re not immediately transitioning to IPv6, start planning for it. This includes educating your IT team, assessing your current infrastructure’s IPv6 readiness, and choosing DDI solutions that support both IPv4 and IPv6; a simple dual-stack readiness check is sketched after this list. Develop a strategy for IPv6 adoption, including:
    • Assessing current IPv6 readiness
    • Planning address allocation schemes
    • Testing and validating IPv6 compatibility of network devices and applications
    • Implementing dual-stack configurations where necessary
  8. Prioritize security measures: Ensure that your IP management strategy includes robust security measures. Utilize the security features of your DDI solution to improve overall network security:
    • Implement DNSSEC to protect against DNS spoofing and cache poisoning
    • Use role-based access control to limit administrative privileges
    • Enable logging and auditing features for better visibility and compliance
  9. Integrate with existing systems: Look for ways to connect your DDI solution with other IT management tools and processes. Leverage API capabilities to integrate it with configuration management databases (CMDBs), network monitoring systems, and IT service management (ITSM) platforms.
  10. Train and educate IT staff: Invest in training and education for your IT staff to ensure they are proficient in using the DDI solution and following best practices for IP address management. This might involve formal training programs, certifications, or ongoing education initiatives.
  11. Develop and enforce policies: Create clear policies and procedures for IP address management, and implement a process for regularly reviewing and optimizing them through periodic audits, performance assessments, and updates as your network evolves. Policies should cover:
    • Address allocation guidelines
    • Naming conventions
    • Change management processes
    • Documentation requirements
  12. Consider managed services: For organizations lacking in-house expertise or resources, consider leveraging managed DDI services offered by vendors or managed service providers to ensure optimal management of your IP infrastructure. Many vendors offer managed solutions that can provide expertise and reduce the operational burden on your IT team.
  13. Plan for future growth: Regularly review and update your IP address management strategy to accommodate future growth and technological changes. This may include:
    • Reassessing address allocation schemes
    • Evaluating new DDI technologies and features
    • Adjusting policies and procedures as needed
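
To make recommendation 3 more concrete, here is a minimal subnet-planning sketch using Python’s standard ipaddress module. The parent block, the number of sites, and the prefix lengths are illustrative assumptions rather than values from any real network.

```python
import ipaddress

# Illustrative assumptions: a 10.20.0.0/16 corporate block carved into
# per-site /20s, each of which is split further into /24 subnets.
parent = ipaddress.ip_network("10.20.0.0/16")

# Hierarchical addressing: one /20 per site keeps route summaries clean.
sites = list(parent.subnets(new_prefix=20))

for index, site in enumerate(sites[:3], start=1):
    vlans = list(site.subnets(new_prefix=24))
    # Reserved ranges: the first /24 of each site is set aside for critical
    # infrastructure (routers, DNS/DHCP servers, management interfaces).
    reserved, user_subnets = vlans[0], vlans[1:]
    print(f"Site {index}: {site}")
    print(f"  reserved for infrastructure: {reserved}")
    print(f"  first user subnet: {user_subnets[0]} "
          f"({user_subnets[0].num_addresses - 2} usable hosts)")

# Future growth: the remaining /20 blocks stay unallocated for new sites.
print(f"{len(sites) - 3} /20 blocks remain for future growth")
```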
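
The automation in recommendation 5 (and the integration work in recommendation 9) typically runs through the DDI platform’s REST API rather than its GUI. The sketch below shows the general shape of such a call using the Python requests library; the base URL, endpoint path, token, and JSON fields are hypothetical placeholders, since each DDI vendor defines its own API schema.

```python
import requests

# Hypothetical IPAM endpoint and token; substitute the actual API of your DDI platform.
IPAM_URL = "https://ipam.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

def reserve_next_address(subnet_id: str, hostname: str) -> str:
    """Ask the IPAM for the next free address in a subnet and register a DNS name."""
    response = requests.post(
        f"{IPAM_URL}/subnets/{subnet_id}/next-free-address",     # hypothetical path
        json={"hostname": hostname, "create_dns_record": True},  # hypothetical fields
        headers=HEADERS,
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["address"]                            # hypothetical field

# Provisioning scripts call this instead of editing a spreadsheet, so
# allocations, DNS records, and conflict checks stay in one system:
# reserve_next_address("branch-office-lan", "app-server-01")
```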
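
For recommendation 7, a quick, low-effort readiness check is whether key services already publish both A (IPv4) and AAAA (IPv6) DNS records. The sketch below uses only Python’s standard socket module; the hostnames are placeholders for your own services.

```python
import socket

def dual_stack_status(hostname: str) -> str:
    """Report whether a hostname resolves over IPv4, IPv6, both, or neither."""
    def resolves(family: int) -> bool:
        try:
            return bool(socket.getaddrinfo(hostname, None, family))
        except socket.gaierror:
            return False

    has_v4 = resolves(socket.AF_INET)    # A records
    has_v6 = resolves(socket.AF_INET6)   # AAAA records
    if has_v4 and has_v6:
        return "dual-stack"
    if has_v4:
        return "IPv4-only"
    return "IPv6-only" if has_v6 else "unresolvable"

# Placeholder hostnames; replace with your own critical services.
for host in ("www.example.com", "intranet.example.com"):
    print(host, "->", dual_stack_status(host))
```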

Next Steps

Effective management of IP infrastructure is essential for maintaining a stable, secure, and efficient network environment. The challenges are significant, but DDI vendors are continually innovating to provide comprehensive solutions. By taking a strategic approach to IP management, implementing a centralized DDI platform, automating routine tasks, enhancing security measures, and following best practices for IP address management, organizations can overcome the complexities of modern networks and build a solid foundation for future growth. Regular assessment, continuous improvement, and staying informed about emerging technologies and trends in IP management will be key to long-term success in this critical area of IT infrastructure management.

To learn more, take a look at GigaOm’s DDI Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.

If you’re not yet a GigaOm subscriber, sign up here.

Lab owner pleads guilty to faking COVID test results during pandemic

Justice —

Ill-gotten millions bought a Bentley, Lamborghini, Tesla X, and crypto, among other things.

Residents line up for COVID-19 testing on November 30, 2020 in Chicago.

The co-owner of a Chicago-based lab has pleaded guilty for his role in a COVID testing scam that raked in millions—which he used to buy stocks, cryptocurrency, and several luxury cars while still squirreling away over $6 million in his personal bank account.

Zishan Alvi, 45, of Inverness, Illinois, co-owned LabElite, which federal prosecutors say billed the federal government for COVID-19 tests that were either never performed or were performed with purposefully inadequate components, rendering them useless. Customers who sought testing from LabElite—sometimes for clearance to travel or have contact with vulnerable people—received either no results or results indicating they were negative for the deadly virus.

The scam, which ran from around February 2021 to about February 2022, took in over $83 million in fraudulent payments from the federal government’s Health Resources and Services Administration (HRSA), which covered the cost of COVID-19 testing for people without insurance during the height of the pandemic. Local media coverage indicated that people who sought testing at LabElite were discouraged from providing health insurance information.

In February 2022, the FBI raided LabElite’s Chicago testing site amid a crackdown on several large-scale fraudulent COVID testing schemes. In March 2023, Alvi was indicted by a federal grand jury on 10 counts of wire fraud and one count of theft of government funds. The indictment sought forfeiture of his ill-gotten riches, which were listed in the indictment.

The list included five vehicles: a 2021 Mercedes-Benz, a 2021 Land Rover Range Rover HSE, a 2021 Lamborghini Urus, a 2021 Bentley, and a 2022 Tesla X. There was also about $810,000 in an E*Trade account, approximately $500,000 in a Fidelity Investments account, and $245,814 in a Coinbase account. Finally, there was $6,825,089 in Alvi’s personal bank account.

On Monday, the Department of Justice announced a deal in which Alvi pleaded guilty to one count of wire fraud, taking responsibility for $14 million worth of fraudulent HRSA claims. He now faces up to 20 years in prison and will be sentenced on February 7, 2025.

T-Mobile pays $16 million fine for three years’ worth of data breaches

T-Mobile has agreed to pay a $15.75 million fine and improve its security in a settlement over a series of data breaches over three years that affected tens of millions of customers.

“T-Mobile suffered data breaches in 2021, 2022, and 2023,” the Federal Communications Commission Enforcement Bureau said in an order approving a consent decree yesterday. “Combined, these breaches affected millions of current, former, or prospective T-Mobile customers and millions of end-user customers of T-Mobile wireless service resellers, which operate on T-Mobile’s network infrastructure and are known as mobile virtual network operators (MVNOs).”

Four breaches occurring over three years exposed personal information, including customer names, addresses, dates of birth, Social Security numbers, driver’s license numbers, the features customers subscribed to, and the number of lines on their accounts.

The FCC investigated T-Mobile for several potential violations: failure to meet its legal duty to protect confidentiality of private information; impermissibly using, disclosing, or permitting access to private information without customer approval; failure to take reasonable measures to discover and protect against attempts to gain unauthorized access to private information; unjust and unreasonable information security practices; and making misrepresentations to customers about its information security practices.

“To settle these investigations, T-Mobile will pay a civil penalty of $15,750,000 and commit to spending an additional $15,750,000 over the next two years to strengthen its cybersecurity program, and develop and implement a compliance plan to protect consumers against similar data breaches in the future,” the FCC said.

FCC touts “strong message” to carriers

The fine will be paid to the US Treasury. The FCC Enforcement Bureau said the security improvements that T-Mobile agreed to “will likely require expenditures an order of magnitude greater than the civil penalty here.” T-Mobile reported $19.8 billion in revenue and $2.9 billion in net income in Q2 2024.

In a press release, the FCC touted the settlement as “a model for the mobile telecommunications industry.” T-Mobile will “address foundational security flaws, work to improve cyber hygiene, and adopt robust modern architectures, like zero trust and phishing-resistant multifactor authentication,” the agency said.

“Today’s mobile networks are top targets for cybercriminals… We will continue to send a strong message to providers entrusted with this delicate information that they need to beef up their systems or there will be consequences,” FCC Chairwoman Jessica Rosenworcel said.

T-Mobile entered into the settlement despite not agreeing with the FCC’s accusations. “The Bureau and T-Mobile disagree about whether T-Mobile’s network and data security program and policies in place at the relevant times violated any standard of care or regulation then applicable to T-Mobile, but in the interest of resolving these investigations, and in the interest of putting consumer security first, the parties enter into this negotiated consent decree,” the agreement said.

Toxic chemicals from Ohio train derailment lingered in buildings for months

This video screenshot released by the US National Transportation Safety Board (NTSB) shows the site of a derailed freight train in East Palestine, Ohio.

On February 3, 2023, a train carrying chemicals jumped the tracks in East Palestine, Ohio, rupturing railcars filled with hazardous materials and fueling chemical fires at the foothills of the Appalachian Mountains.

The disaster drew global attention as the governors of Ohio and Pennsylvania urged evacuations for a mile around the site. Flames and smoke billowed from burning chemicals, and an acrid odor radiated from the derailment area as chemicals entered the air and spilled into a nearby creek.

Three days later, at the urging of the rail company Norfolk Southern, about 1 million pounds of vinyl chloride, a chemical that can be toxic to humans at high doses, was released from the damaged train cars and set aflame.

Federal investigators later concluded that the open burn and the black mushroom cloud it produced were unnecessary, but it was too late. Railcar chemicals spread into Ohio and Pennsylvania.

As environmental engineers, my colleagues and I are often asked by government agencies and communities to assist with public health decisions after disasters. After the evacuation order was lifted, community members asked for help.

In a new study, we describe the contamination we found, along with problems with the response and cleanup that, in some cases, increased the chances that people would be exposed to hazardous chemicals. It offers important lessons to better protect communities in the future.

How chemicals get into homes and water

When large amounts of chemicals are released into the environment, the air can become toxic. Chemicals can also wash into waterways and seep into the ground, contaminating groundwater and wells. Some chemicals can travel below ground into nearby buildings and make the indoor air unsafe.

A computer model shows how chemicals from the train may have spread, given wind patterns. The star on the Ohio-Pennsylvania line is the site of the derailment.

Air pollution can find its way into buildings through cracks, windows, doors, and other portals. Once inside, the chemicals can penetrate home items like carpets, drapes, furniture, counters, and clothing. When the air is stirred up, those chemicals can be released again.

Evacuation order lifted, but buildings were contaminated

Three weeks after the derailment, we began investigating the safety of the area near 17 buildings in Ohio and Pennsylvania. The highest concentration of air pollution occurred in the 1-mile evacuation zone and a shelter-in-place band another mile beyond that. But the chemical plume also traveled outside these areas.

In and outside East Palestine, evidence indicated that chemicals from the railcars had entered buildings. Many residents complained about headaches, rashes, and other health symptoms after reentering the buildings.

At one building 0.2 miles away from the derailment site, the indoor air was still contaminated more than four months later.

Nine days after the derailment, sophisticated air testing by a business owner showed the building’s indoor air was contaminated with butyl acrylate and other chemicals carried by the railcars. Butyl acrylate was found above the two-week exposure level, a level at which measures should be taken to protect human health.

When rail company contractors visited the building 11 days after the wreck, their team left after just 10 minutes. They reported an “overwhelming/unpleasant odor” even though their government-approved handheld air pollution detectors detected no chemicals. This building was located directly above Sulphur Run creek, which had been heavily contaminated by the spill. Chemicals likely entered from the initial smoke plumes and also rose from the creek into the building.

Our tests weeks later revealed that railcar chemicals had even penetrated the business’s silicone wristband products on its shelves. We also detected several other chemicals that may have been associated with the spill.

Homes and businesses were mere feet from the contaminated waterways in East Palestine.

Weeks after the derailment, government officials discovered that air in the East Palestine Municipal Building, about 0.7 miles away from the derailment site, was also contaminated. Airborne chemicals had entered that building through an open drain pipe from Sulphur Run.

More than a month after the evacuation order was lifted, the Ohio Environmental Protection Agency acknowledged that multiple buildings in East Palestine were being contaminated as contractors cleaned contaminated culverts under and alongside them, allowing chemicals to enter the buildings.

Systems used by courts and governments across the US riddled with vulnerabilities

SECURITY FAILURE —

With hundreds of courts and agencies affected, chances are one near you is, too.

Public records systems that courts and governments rely on to manage voter registrations and legal filings have been riddled with vulnerabilities that made it possible for attackers to falsify registration databases and add, delete, or modify official documents.

Over the past year, software developer turned security researcher Jason Parker has found and reported dozens of vulnerabilities in no fewer than 19 commercial platforms used by hundreds of courts, government agencies, and police departments across the country. Most of the vulnerabilities were critical.

One flaw he uncovered in the voter registration cancellation portal for the state of Georgia, for instance, allowed anyone visiting it to cancel the registration of any voter in that state when the visitor knew the name, birthdate, and county of residence of the voter. In another case, document management systems used in local courthouses across the country contained multiple flaws that allowed unauthorized people to access sensitive filings such as psychiatric evaluations that were under seal. And in one case, unauthorized people could assign themselves privileges that are supposed to be available only to clerks of the court and, from there, create, delete, or modify filings.

Failing at the most fundamental level

It’s hard to overstate the critical role these systems play in the administration of justice, voting rights, and other integral government functions. The number of vulnerabilities—mostly stemming from weak permission controls, poor validation of user inputs, and faulty authentication processes—demonstrates a lack of due care in ensuring the trustworthiness of the systems millions of citizens rely on every day.

“These platforms are supposed to ensure transparency and fairness, but are failing at the most fundamental level of cybersecurity,” Parker wrote recently in a post he penned in an attempt to raise awareness. “If a voter’s registration can be canceled with little effort and confidential legal filings can be accessed by unauthorized users, what does it mean for the integrity of these systems?”

The Georgia voter registration cancellation portal, for instance, lacked any automated way to reject cancellation requests that omitted required voter information. Instead of flagging such requests, the system simply processed them. Similarly, the Granicus GovQA platform hundreds of government agencies use to manage public records could be hacked to reset passwords and gain access to usernames and email addresses merely by slightly modifying the Web address showing in a browser window.

And a vulnerability in Thomson Reuters’ C-Track eFiling system allowed attackers to elevate their user status to that of a court administrator. Exploitation required nothing more than manipulating certain fields during the registration process.

There is no indication that any of the vulnerabilities were actively exploited.

Word of the vulnerabilities comes four months after the discovery of a malicious backdoor surreptitiously planted in a component of the JAVS Suite 8, an application package that 10,000 courtrooms around the world use to record, play back, and manage audio and video from legal proceedings. A representative of the company said Monday that an investigation performed in cooperation with the Cybersecurity and Infrastructure Security Agency concluded that the malware was installed on only two computers and didn’t result in any information being compromised. The representative said the malware was available through a file a threat actor posted to the JAVS public marketing website.

Parker began examining the systems last year as a software developer purely on a voluntary basis. He has worked with the Electronic Frontier Foundation to contact the system vendors and other parties responsible for the platforms he has found vulnerable. To date, all the vulnerabilities he has reported have been fixed, in some cases only in the past month. More recently, Parker has taken a job as a security researcher focusing on such platforms.

“Fixing these issues requires more than just patching a few bugs,” Parker wrote. “It calls for a complete overhaul of how security is handled in court and public record systems. To prevent attackers from hijacking accounts or altering sensitive data, robust permission controls must be immediately implemented, and stricter validation of user inputs enforced. Regular security audits and penetration testing should be standard practice, not an afterthought, and following the principles of Secure by Design should be an integral part of any Software Development Lifecycle.”

The 19 affected platforms are:

Parker is urging vendors and customers alike to shore up the security of their systems by performing penetration testing and software audits and training employees, particularly those in IT departments. He also said that multifactor authentication should be universally available for all such systems.

“This series of disclosures is a wake-up call to all organizations that manage sensitive public data,” Parker wrote. “If they fail to act quickly, the consequences could be devastating—not just for the institutions themselves but for the individuals whose privacy they are sworn to protect. For now, the responsibility lies with the agencies and vendors behind these platforms to take immediate action, to shore up their defenses, and to restore trust in the systems that so many people depend on.”

“So aggravating”: Outdated ads start appearing on PS5 home screen

Ad station —

Players are annoyed as new home screen needs work.

PlayStation 5 owners are reporting advertisements on the device’s home screen. Frustratingly, the ads seem to be rather difficult to disable, and some show outdated or otherwise confusing content.

The ads, visible on users’ home screens when they hover over a game title, can only be removed if you disconnect from the Internet, IGN reported today. However, that would block a lot of the console’s functionality. The PS5 dashboard previously had ads but not on the home screen.

Before this recent development, people would see game art if they hovered over a game icon on the PS5’s home screen. Now, doing so reportedly brings up dated advertisements. For example, IGN reported seeing an ad for Spider-Man: Across the Spider-Verse “coming soon exclusively in cinemas” when hovering over the Marvel’s Spider-Man: Miles Morales game. Webheads will of course recall that the Spider-Verse movie came out in June 2023.

Similarly, going to NBA 2K25 reportedly shows an ad for gaining early access. But the game came out early this month.

Per IGN, it seems that the console is “pulling in the latest news for each game, whether it be a YouTube video, patch notes, or even the announcement of a different game entirely.” That means that not all games are showing advertisements. Instead, some show an image for a YouTube video about the game or a note about patch notes or updates for the game.

There also seem to be some mix-ups, with MP1st reporting seeing an ad for the LEGO Horizon Adventures game when hovering over the icon for Horizon Zero Dawn. The publication wrote: “The ad also make[s it] confusing a bit, as… it looks like you’re playing LEGO Horizon Adventures and not the actual Horizon game we’re on.”

Some games, like Astro Bot, however, don’t seem to be affected by the changes, per IGN.

Annoyed and confused

Gamers noticing the change have taken to the web to share their annoyance, disappointment, and, at times, confusion about the content suddenly forced into the PS5’s home screen.

“As someone playing through the Spiderman series now, this confused the hell out of me,” Crack_an_ag said via Reddit.

Some are urging Sony to either remove the feature or fix it so that it can be helpful, while others argue that the feature couldn’t be helpful regardless.

“Forcing every single game to make its latest news story its dashboard art is SO stupid as no one game uses the news feature consistently,” Reddit user jackcos wrote.

Sam88FPS, meanwhile, noted that ads drove them from Xbox to PlayStation:

One of the main reasons I moved away from Xbox was the fact they started to build the Xbox UI around ads and pushing [Game Pass]. Hopefully Sony listens more because Xbox absolutely refused to, in fact, they even added full screen startup ads lmao.

It’s unclear what exactly prompted this change. Some suspect it’s related to firmware update 24.06-10.00.00. But that update came out on September 12, and, as IGN noted, its patch notes don’t say anything about this. Considering the obvious problems and mix of content being populated, it’s possible that Sony is working out some kinks and that eventually the content shown on users’ home screens will become more relevant or consistent. The change has also come a few days after a developer claimed that Sony lost $400 million after pulling the Concord online game after just two weeks, prompting digs at Sony and unconfirmed theories that Sony is trying to make up for financial losses with ads.

Ars Technica has reached out to Sony about why it decided to add non-removable ads to the PS5 home screen and about the outdated and otherwise perplexing content being displayed. We’ll let you know if we hear back.

Verizon customers face mass-scale outage across the US

5Gpocalypse —

More than 100,000 reports appeared on Downdetector.

A Downdetector map showing where Verizon outages are reported, with hotspots on the East Coast and in the central US as well as some in California.

Wireless customers of Verizon and AT&T have found that they cannot make calls, send or receive text messages, or download any mobile data. As of this article’s publication, it appears the problem has yet to be resolved.

Users took to social media throughout the morning to complain that their phones were showing “SOS” mode, which allows emergency calls but nothing else. This is what phones sometimes offer when the user has no SIM registered on the device. Resetting the device and other common solutions do not resolve the issue. For much of the morning, Verizon offered no response to the reports.

Within hours, more than 100,000 users reported problems on the website Downdetector. The problem does not appear isolated to any particular part of the country; users in California reported problems, and so did users on the East Coast and in Chicago, among other places.

By 10 am, some AT&T users also began reporting problems. Outage maps based on user-reported data found that the outages were especially common in parts of the country otherwise affected by Hurricane Helene.

After a period of silence, Verizon acknowledged the problem in a public statement. “We are aware of an issue impacting service for some customers,” a spokesperson told NBC News and others. “Our engineers are engaged and we are working quickly to identify and solve the issue.”

However, the spokesperson did not specify why the outage was occurring. It’s not the first major online service outage this year, though. AT&T experienced an outage previously, and the CrowdStrike-related outage of Microsoft services caused chaos and made headlines in July.

Update at 5:37 pm ET: Some users are reporting they have regained service, and Verizon confirmed this in another statement: “Verizon engineers are making progress on our network issue and service has started to be restored. We know how much people rely on Verizon and apologize for any inconvenience some of our customers experienced today. We continue to work around the clock to fully resolve this issue.”

For the first time since 1882, UK will have no coal-fired power plants

Into the black —

A combination of government policy and economics spells the end of UK’s coal use.

The Ratcliffe-on-Soar plant is set to shut down for good today.

On Monday, the UK will see the closure of its last operational coal power plant, Ratcliffe-on-Soar, which has been operating since 1968. The closure of the plant, which had a capacity of 2,000 megawatts, will bring an end to the history of the country’s coal use, which started with the opening of the first coal-fired power station in 1882. Coal played a central part in the UK’s power system in the interim, in some years providing over 90 percent of its total electricity.

But a number of factors combined to place coal in a long-term decline: the growth of natural gas-powered plants and renewables, pollution controls, carbon pricing, and a government goal to hit net-zero greenhouse gas emissions by 2050.

From boom to bust

It’s difficult to overstate the importance of coal to the UK grid. It was providing over 90 percent of the UK’s electricity as recently as 1956. The total amount of power generated continued to climb well after that, reaching a peak of 212 terawatt hours of production by 1980. And the construction of new coal plants was under consideration as recently as the late 2000s. According to the organization Carbon Brief’s excellent timeline of coal use in the UK, continuing to burn coal with carbon capture was also considered.

But several factors slowed the use of the fuel ahead of any climate goals set out by the UK, some of which have parallels to the US’s situation. The European Union, which included the UK at the time, instituted new rules to address acid rain, which raised the cost of coal plants. In addition, the exploitation of oil and gas deposits in the North Sea provided access to an alternative fuel. Meanwhile, major gains in efficiency and the shift of some heavy industry overseas cut demand in the UK significantly.

Through their effect on coal use, these changes also lowered employment in coal mining. The mining sector has sometimes been a significant force in UK politics, but the decline of coal reduced the number of people employed in the sector, reducing its political influence.

These had all reduced the use of coal even before governments started taking any aggressive steps to limit climate change. But by 2005, the EU had implemented a carbon trading system that put a cost on emissions. By 2008, the UK government had adopted national emissions targets, which have been maintained and strengthened since then by both Labour and Conservative governments up until Rishi Sunak, who was voted out of office before he had altered the UK’s trajectory. What started as a pledge for a 60 percent reduction in greenhouse gas emissions by 2050 now requires the UK to hit net zero by that date.

Renewables, natural gas, and efficiency have all squeezed coal off the UK grid.

These policies have included a floor on the price of carbon that ensures fossil-powered plants pay a cost for emissions that’s significant enough to promote the transition to renewables, even if prices in the EU’s carbon trading scheme are too low for that. And that transition has been rapid, with total generation from renewables nearly tripling in the decade since 2013, heavily aided by the growth of offshore wind.

How to clean up the power sector

The trends were significant enough that, in 2015, the UK announced that it would target the end of coal in 2025, even though the first coal-free day on the grid wouldn’t come until two years later. Two years after that landmark, however, the UK was seeing entire weeks in which no coal-fired plants were active.

To limit the worst impacts of climate change, it will be critical for other countries to follow the UK’s lead. So it’s worthwhile to consider how a country that was committed to coal relatively recently could manage such a rapid transition. There are a few UK-specific factors that can’t be replicated everywhere. The first is that most of its coal infrastructure was quite old—Ratcliffe-on-Soar dates from the 1960s—and so it required replacement in any case. Part of the reason for its aging coal fleet was the local availability of relatively cheap natural gas, something that might not be true elsewhere, which put economic pressure on coal generation.

Another key factor is that the ever-shrinking number of people employed by coal power didn’t exert significant pressure on government policies. Despite the existence of a vocal group of climate contrarians in the UK, the issue never became heavily politicized. Both Labour and Conservative governments maintained a fact-based approach to climate change and set policies accordingly. That’s notably not the case in countries like the US and Australia.

But other factors are going to be applicable to a wide variety of countries. As the UK was moving away from coal, renewables became the cheapest way to generate power in much of the world. Coal is also the most polluting source of electrical power, providing ample reasons for regulation that have little to do with climate. Forcing coal users to pay even a fraction of coal’s externalized costs to human health and the environment serves to make it even less economical compared to alternatives.

If these latter factors can drive a move away from coal despite government inertia, they can pay significant dividends in the fight to limit climate change. Inspired in part by the success in moving its grid off coal, the new Labour government in the UK has moved up its timeline for decarbonizing its power sector to 2030 (from the previous Conservative government’s target of 2035).

Opinion: How to design a US data privacy law

robust privacy protection —

Op-ed: Why you should care about the GDPR, and how the US could develop a better version.

Nick Dedeke is an associate teaching professor at Northeastern University, Boston. His research interests include digital transformation strategies, ethics, and privacy. His research has been published in IEEE Management Review, IEEE Spectrum, and the Journal of Business Ethics. He holds a PhD in Industrial Engineering from the University of Kaiserslautern-Landau, Germany. The opinions in this piece do not necessarily reflect the views of Ars Technica.

In an earlier article, I discussed a few of the flaws in Europe’s flagship data privacy law, the General Data Protection Regulation (GDPR). Building on that critique, I would now like to go further, proposing specifications for developing a robust privacy protection regime in the US.

Writers must overcome several hurdles to have a chance at persuading readers about possible flaws in the GDPR. First, some readers are skeptical of any piece criticizing the GDPR because they believe the law is still too young to evaluate. Second, some are suspicious of any piece criticizing the GDPR because they suspect that the authors might be covert supporters of Big Tech’s anti-GDPR agenda. (I can assure readers that I am not, nor have I ever, worked to support any agenda of Big Tech companies.)

In this piece, I will highlight the price of ignoring the GDPR. Then, I will present several conceptual flaws of the GDPR that have been acknowledged by one of the lead architects of the law. Next, I will propose certain characteristics and design requirements that countries like the United States should consider when developing a privacy protection law. Lastly, I provide a few reasons why everyone should care about this project.

The high price of ignoring the GDPR

People sometimes assume that the GDPR is mostly a “bureaucratic headache”—but this perspective is no longer valid. Consider the following actions by administrators of the GDPR in different countries.

  • In May 2023, the Irish authorities hit Meta with a fine of $1.3 billion for unlawfully transferring personal data from the European Union to the US.
  • On July 16, 2021, the Luxembourg National Commission for Data Protection (CNDP) issued a fine of 746 million euros ($888 million) to Amazon Inc. The fine stemmed from a May 2018 complaint against Amazon by 10,000 people, orchestrated by a French privacy rights group.
  • On September 5, 2022, Ireland’s Data Protection Commission (DPC) issued a 405 million-euro GDPR fine to Meta Ireland as a penalty for violating the GDPR’s stipulations regarding the lawful processing of children’s data (see other fines here).

In other words, the GDPR is not merely a bureaucratic matter; it can trigger hefty, unexpected fines. The notion that the GDPR can be ignored is a fatal error.

9 conceptual flaws of the GDPR: Perspective of one of the law’s lead architects

Axel Voss is one of the lead architects of the GDPR. He is a member of the European Parliament and authored the 2011 initiative report titled “Comprehensive Approach to Personal Data Protection in the EU” when he was the European Parliament’s rapporteur. His call for action resulted in the development of the GDPR legislation. After observing the unfulfilled promises of the GDPR, Voss wrote a position paper highlighting the law’s weaknesses. I want to mention nine of the flaws that Voss described.

First, while the GDPR was excellent in theory and pointed a path toward the improvement of standards for data protection, it is an overly bureaucratic law created largely using a top-down approach by EU bureaucrats.

Second, the law is based on the premise that data protection should be a fundamental right of EU persons. Hence, the stipulations are absolute and one-sided, laser-focused only on protecting the “fundamental rights and freedoms” of natural persons. In making this choice, the GDPR’s architects took a framework designed for the relationship between the state and the citizen and applied it to the relationship between citizens and companies and the relationship between companies and their peers. This construction is one reason why the obligations imposed on data controllers and processors are so rigid.

Third, the GDPR aims to empower data subjects by giving them rights and enshrining those rights into law. Specifically, it enshrines nine data subject rights: the right to be informed, the right to access, the right to rectification, the right to be forgotten (erasure), the right to data portability, the right to restrict processing, the right to object to the processing of personal data, the right to object to automated processing, and the right to withdraw consent. As with any list, there is always a concern that some rights may be missing; omitting critical rights would hinder the law’s effectiveness in protecting privacy and data protection. In the case of the GDPR, the protected data subject rights are indeed not exhaustive.

Fourth, the GDPR is grounded on a prohibition and limitation approach to data protection. For example, the principle of purpose limitation excludes chance discoveries in science. This ignores the reality that current technologies, e.g., machine learning and artificial intelligence applications, function differently. Hence, old data protection mindsets, such as data minimization and storage limitation, are not workable anymore.

Fifth, the GDPR, on principle, posits that every processing of personal data restricts the data subject’s right to data protection. It therefore requires that each of these processes be justified under the law. The GDPR deems any processing of personal data a potential risk and forbids processing in principle, allowing it only when a legal ground is met. Such an anti-processing and anti-sharing approach may not make sense in a data-driven economy.

Sixth, the law does not distinguish between low-risk and high-risk applications; it imposes the same obligations for each type of data processing application, with a few exceptions requiring consultation of the data protection authority for high-risk applications.

Seventh, the GDPR offers no exemptions for low-risk processing scenarios or for cases in which SMEs, startups, non-commercial entities, or private citizens are the data controllers. Further, there are no exemptions or provisions that protect the rights of the controller and of third parties in scenarios where the data controller has a legitimate interest in protecting business and trade secrets, fulfilling confidentiality obligations, or avoiding huge and disproportionate efforts to meet GDPR obligations.

Eighth, the GDPR lacks a mechanism that allows SMEs and startups to shift the compliance burden onto third parties, which then store and process data.

Ninth, the GDPR relies heavily on government-based bureaucratic monitoring and administration of privacy compliance. This means an extensive bureaucratic system is needed to manage the compliance regime.

There are other issues with GDPR enforcement (see pieces by Matt Burgess and Anda Bologa) and its negative impacts on the EU’s digital economy and on Irish technology companies. This piece will focus only on the nine flaws described above. These nine flaws are some of the reasons why the US authorities should not simply copy the GDPR.

The good news is that many of these flaws can be resolved.

IBM opens its quantum-computing stack to third parties

The small quantum processor (center) surrounded by cables that carry microwave signals to it, and the refrigeration hardware.

As we described earlier this year, operating a quantum computer will require a significant investment in classical computing resources, given the number of measurements and control operations that need to be executed and interpreted. That means that operating a quantum computer will also require a software stack to control and interpret the flow of information from the quantum side.

But software also gets involved well before anything gets executed. While it’s possible to execute algorithms on quantum hardware by defining the full set of commands sent to the hardware, most users are going to want to focus on algorithm development, rather than the details of controlling any single piece of quantum hardware. “If everyone’s got to get down and know what the noise is, [use] performance management tools, they’ve got to know how to compile a quantum circuit through hardware, you’ve got to become an expert in too much to be able to do the algorithm discovery,” said IBM’s Jay Gambetta. So, part of the software stack that companies are developing to control their quantum hardware includes software that converts abstract representations of quantum algorithms into the series of commands needed to execute them.

IBM’s version of this software is called Qiskit (although it was made open source and has since been adopted by other companies). Recently, IBM made a couple of announcements regarding Qiskit, both benchmarking it in comparison to other software stacks and opening it up to third-party modules. We’ll take a look at what software stacks do before getting into the details of what’s new.

What does the software stack do?

It’s tempting to view IBM’s Qiskit as the equivalent of a compiler. And at the most basic level, that’s a reasonable analogy, in that it takes algorithms defined by humans and converts them to things that can be executed by hardware. But there are significant differences in the details. A compiler for a classical computer produces code that the computer’s processor converts to internal instructions that are used to configure the processor hardware and execute operations.

Even when using what’s termed “machine language,” programmers don’t directly control the hardware; programmers have no control over where on the hardware things are executed (i.e., which processor or execution unit within that processor), or even the order instructions are executed in.

Things are very different for quantum computers, at least at present. For starters, everything that happens on the processor is controlled by external hardware, which typically acts by generating a series of laser or microwave pulses. So, software like IBM’s Qiskit or Microsoft’s Q# acts by converting the code it’s given into commands that are sent to hardware that’s external to the processor.

These “compilers” must also keep track of exactly which part of the processor things are happening on. Quantum computers act by performing specific operations (called gates) on individual or pairs of qubits; to do that, you have to know exactly which qubit you’re addressing. And, for things like superconducting qubits, where there can be device-to-device variations, which hardware qubits you end up using can have a significant effect on the outcome of the calculations.

As a result, most tools like Qiskit provide the option of directly addressing the hardware. If a programmer chooses not to, however, the software can transform generic instructions into a precise series of actions that will execute whatever algorithm has been encoded. That involves the software stack making choices about which physical qubits to use, what gates and measurements to execute, and what order to execute them in.
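
As a rough illustration of that lowering step, the short Qiskit sketch below takes an abstract two-qubit circuit and asks the transpiler to rewrite it for a device with a given native gate set and qubit connectivity. The basis gates and coupling map shown are illustrative stand-ins, not the parameters of any specific IBM processor.

```python
from qiskit import QuantumCircuit, transpile

# Abstract algorithm: prepare and measure a two-qubit Bell state.
circuit = QuantumCircuit(2, 2)
circuit.h(0)
circuit.cx(0, 1)
circuit.measure([0, 1], [0, 1])

# Lower it toward hardware: restrict the circuit to an (illustrative) native
# gate set and coupling map, letting the transpiler pick physical qubits,
# decompose gates, and reorder operations where it safely can.
compiled = transpile(
    circuit,
    basis_gates=["rz", "sx", "x", "cx"],
    coupling_map=[[0, 1]],
    optimization_level=2,
)
print(compiled.draw())
```

The compiled circuit, expressed only in the allowed gates and qubit pairs, is what the rest of the stack then translates into the pulse-level commands sent to the control hardware.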

The role of the software stack, however, is likely to expand considerably over the next few years. A number of companies are experimenting with hardware qubit designs that can flag when one type of common error occurs, and there has been progress with developing logical qubits that enable error correction. Ultimately, any company providing access to quantum computers will want to modify its software stack so that these features are enabled without requiring effort on the part of the people designing the algorithms.

Report: Apple changes film strategy, will rarely do wide theatrical releases

Small screen focus —

Apple TV+ has made more waves with TV shows than movies so far.

A still from Wolfs, an Apple-produced film starring George Clooney and Brad Pitt.

For the past few years, Apple has been making big-budget movies meant to compete with the best traditional Hollywood studios have to offer, and it has been releasing them in theaters to drive ticket sales and awards buzz.

Much of that is about to change, according to a report from Bloomberg. The article claims that Apple is “rethinking its movie strategy” after several box office misfires, like Argylle and Napoleon.

It has already canceled the wide theatrical release of one of its tent pole movies, the George Clooney and Brad Pitt-led Wolfs. Most other upcoming big-budget movies from Apple will be released in just a few theaters, suggesting the plan is simply to ensure continued awards eligibility, not to put butts in seats.

Further, Apple plans to move away from super-budget films and to focus its portfolio on a dozen films a year at lower budgets. Just one major big-budget film is planned to get a wide theatrical release: F1. How that one performs could inform future changes to Apple’s strategy.

The report notes that Apple is not the only streamer changing its strategy. Netflix is reducing costs and bringing more movie production in-house, while Amazon is trying (so far unsuccessfully) to produce a higher volume of movies annually, but with a mixture of online-only and in-theater releases. It also points out that movie theater chains are feeling ever more financial pressure, as overall ticket sales haven’t matched their pre-pandemic levels despite occasional hits like Inside Out 2 and Deadpool & Wolverine.

Cinemas have been counting on streamers like Netflix and Apple to crank out films, but those hopes may be dashed if the media companies continue to pull back. For the most part, tech companies like Apple and Amazon have had better luck gaining buzz with television series than with feature films.
