
Navigating Technological Sovereignty in the Digital Age

Depending on who you speak to, technological sovereignty is either a hot topic or something for other organizations to deal with. So, should it matter to you and your organization? Let’s first consider what’s driving it, not least the catalyst that was the US CLOUD Act, which ostensibly gives the US government access to any data managed by a US provider. This spooked EU authorities and nations, as well as others who saw it as a step too far.

Whilst this accelerated activity across Europe, Africa and other continents, moves were already afoot to preserve a level of sovereignty across three axes: data movement, local control, and what is increasingly seen as the big one – a desire for countries to develop and retain skills and innovate, rather than being passive participants in a cloud-based brain drain. 

This is impacting not just government departments and their contractors, but also suppliers to in-country companies. A couple of years ago, I spoke to a manufacturing materials organization in France that provided goods to companies in Nigeria. “What’s your biggest headache?” I asked the CIO as a conversation starter. “Sovereignty,” he said. “If I can’t show my clients how I will keep data in-country, I can’t supply my goods.”

Regulations like the CLOUD Act have made cross-border data management tricky. With different countries enforcing different laws, navigating where and how your data is stored has become a significant challenge. If it matters to you, it really matters. In principle, technological sovereignty solves this, but there’s no single, clear definition. It’s a concept that’s easy to understand at a high level, but tricky to pin down.

Technological sovereignty is all about ensuring you have control over your digital assets—your data, infrastructure, and the systems that run your business. But it’s not just about knowing where your data is stored. It’s about making sure that data is handled in a way that aligns with the country’s regulations and your business strategy and values.

For organizations in Europe, the rules and regs are quite specific. The upcoming EU Data Act focuses on data sharing and access across different sectors, whilst the AI Act introduces rules around artificial intelligence systems. Together, these evolving regulations are pushing organizations to rethink their technology architectures and data management strategies.

As ever, this means changing the wheels on a moving train. Hybrid/multi-cloud environments and complex data architectures add layers of complexity, whilst artificial intelligence is transforming how we interact with and manage data. AI is both a sovereignty blessing and a curse – it can enable data to be handled more effectively, but as AI models become more sophisticated, organizations need to be even more careful about how they process data from a compliance perspective.

So, where does this leave organizations that want the flexibility of cloud services but need to maintain control over their data? Organizations have several options:

  • Sovereign Hyper-Scalers: Over the next year, cloud giants like AWS and Azure will be rolling out sovereign cloud offerings tailored to the needs of organizations that require stricter data controls. 
  • Localized Providers: Working with local managed service providers (MSPs) can give organizations more control within their own country or region, helping them keep data close to home.
  • On-premise Solutions: This is the go-to option if you want full control. However, on-premise solutions can be costly and come with their own set of complexities. It’s about balancing control with practicality.

The likelihood is that a combination of all three will be required, at least in the short to medium term. Inertia will play its part: given that it’s already a challenge to move existing workloads (beyond the low-hanging fruit) into the cloud, sovereignty creates yet another set of reasons to leave them where they are, for better or worse.

There’s a way forward for sovereignty as both a goal and a burden, centered on the word governance. Good governance is about setting clear policies for how your data and systems are managed, who has access, and how you stay compliant with regulations for both your organization and your customers. This is a business-wide responsibility: every level of your organization should be aligned on what sovereignty means for your company and how you will enforce it. 

This may sound onerous to the point of impossibility, but that is the nature of governance, risk and compliance (GRC) – the trick is to assess, prioritize and plan, building sovereignty criteria into the way the business is designed. Want to do business in certain jurisdictions? If so, you need to bake their requirements into your business policies, which can then be rolled out into your application, data and operational policies.
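
To make this more concrete, here is a minimal sketch (in Python, with hypothetical rule names and values) of what “baking requirements into policy” can look like in practice: jurisdiction requirements expressed as data, which application and operational checks can then be run against. It is an illustration of the approach, not legal guidance.

```python
# Minimal sketch: jurisdiction requirements expressed as data, checked
# against a workload description. Rule names and values are hypothetical.

from dataclasses import dataclass

# Simplified, illustrative requirements per jurisdiction -- real rules
# come from legal review, not a hard-coded table.
JURISDICTION_RULES = {
    "EU": {"data_must_stay_in": {"EU"}, "allow_us_cloud_act_exposure": False},
    "NG": {"data_must_stay_in": {"NG"}, "allow_us_cloud_act_exposure": False},
    "US": {"data_must_stay_in": {"US", "EU"}, "allow_us_cloud_act_exposure": True},
}

@dataclass
class Workload:
    name: str
    storage_region: str                 # where the data physically lives
    provider_subject_to_cloud_act: bool # is the provider within US jurisdiction?

def violations(workload: Workload, jurisdiction: str) -> list[str]:
    """Return policy violations for a workload serving a given jurisdiction."""
    rules = JURISDICTION_RULES[jurisdiction]
    problems = []
    if workload.storage_region not in rules["data_must_stay_in"]:
        problems.append(f"data stored in {workload.storage_region}, "
                        f"expected one of {sorted(rules['data_must_stay_in'])}")
    if workload.provider_subject_to_cloud_act and not rules["allow_us_cloud_act_exposure"]:
        problems.append("provider is subject to the US CLOUD Act")
    return problems

if __name__ == "__main__":
    crm = Workload("customer-crm", storage_region="US", provider_subject_to_cloud_act=True)
    for issue in violations(crm, "EU"):
        print("EU policy violation:", issue)
```

The design point is that the same policy table can drive application deployment checks, data placement decisions and operational audits, rather than each team re-interpreting the regulations on its own.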

Get this the other way around, and it will always be harder than necessary. However, done right, technological sovereignty can also offer a competitive advantage. Organizations with a handle on their data and systems can offer their customers more security and transparency, building trust. By embedding sovereignty into your digital strategy, you’re not just protecting your organization—you’re positioning yourself as a leader in responsible business, and building a stronger foundation for growth and innovation. 

Technological sovereignty should be a strategic priority for any organization that wants to stay ahead in today’s complex digital landscape. It’s not just about choosing the right cloud provider or investing in the latest security tools—it’s about building a long-term, business-driven strategy that ensures you stay in control of your data, wherever in the world it is.

The future of sovereignty is about balance. Balancing cloud and on-premise solutions, innovation and control, and security with flexibility. If you can get that balance right, you’ll be in a strong position to navigate whatever the digital world throws at you next.


Navigating the Unique Landscape of OT Security Solutions

Exploring the operational technology (OT) security sector has been both enlightening and challenging, particularly due to its distinct priorities and requirements compared to traditional IT security. One of the most intriguing aspects of this journey has been understanding how the foundational principles of security differ between IT and OT environments. Typically, IT security is guided by the CIA triad—confidentiality, integrity, and availability, in that order. However, in the world of OT, the priority sequence shifts dramatically to AIC—availability, integrity, and confidentiality. This inversion underscores the unique nature of OT environments where system availability and operational continuity are paramount, often surpassing the need for confidentiality.

Learning through Contrast and Comparison

My initial approach to researching OT security solutions involved drawing parallels with familiar IT security strategies. However, I quickly realized that such a comparison, while useful, only scratches the surface. To truly understand the nuances of OT security, I delved into case studies, white papers, and real-world incidents that highlighted the critical need for availability and integrity above all. Interviews with industry experts and interactive webinars provided deeper insights into why disruptions in service, even for a brief period, can have catastrophic outcomes in sectors like manufacturing, energy, or public utilities, far outweighing concerns about data confidentiality.

Challenges for Adopters

One of the most significant challenges for organizations adopting OT security solutions is the integration of these systems into existing infrastructures without disrupting operational continuity. Many OT environments operate with legacy systems that are not only sensitive to changes but also may not support the latest security protocols. The delicate balance of upgrading security without hampering the availability of critical systems presents a steep learning curve for adopters. This challenge is compounded by the need to ensure that security measures are robust enough to prevent increasingly sophisticated cyberattacks, which are now more frequently targeting vulnerable OT assets.

Surprising Discoveries

Perhaps the most surprising discovery during my research was the level of interconnectedness between IT and OT systems in many organizations. While this convergence is still developing, it is driving a new wave of cybersecurity strategies that must cover the extended attack surface without introducing new vulnerabilities. Additionally, the rate of technological adoption in OT—such as IoT devices in industrial settings—has accelerated, creating both opportunities and unprecedented security challenges. The pace at which OT environments are becoming digitized is astonishing and not without risks, as seen in several high-profile security breaches over the past year.

YoY Changes in OT Security

Comparing the state of OT security solutions now to just a year ago, the landscape has evolved rapidly. There has been a marked increase in the adoption of machine learning and artificial intelligence to predict and respond to threats in real time, a trend barely in its nascent stages last year. Vendors are also emphasizing the creation of more integrated platforms that offer both deeper visibility into OT systems and more comprehensive management tools. This shift toward more sophisticated, unified solutions is a direct response to the growing complexity and connectivity of modern industrial environments.

Looking Forward

Moving forward, the OT security sector is poised to continue its rapid evolution. The integration of AI and predictive analytics is expected to deepen, with solutions becoming more proactive rather than reactive. For IT decision-makers, staying ahead means not only adopting cutting-edge security solutions, but also fostering a culture of continuous learning and adaptation within their organizations.

Understanding the unique aspects of researching and implementing OT security solutions highlights the importance of tailored approaches in cybersecurity. As the sector continues to grow and transform, the journey of discovery and adaptation promises to be as challenging as it is rewarding.

Next Steps

To learn more, take a look at GigaOm’s OT security Key Criteria and Radar reports. These reports provide a comprehensive overview of the market, outline the criteria you’ll want to consider in a purchase decision, and evaluate how a number of vendors perform against those decision criteria.

If you’re not yet a GigaOm subscriber, sign up here.


Navigating the CrowdStrike Outage: Insights from a Tech Industry Veteran

As a seasoned CIO/CISO and tech industry analyst with 35 years of experience, I’ve seen my fair share of cybersecurity incidents. However, the recent CrowdStrike outage stands out due to its extensive impact across multiple sectors. Here’s a deep dive into what happened, the repercussions, and the lessons we can all learn from this incident.

Background and Initial Reaction

I started my journey in IT in the late ’80s when I wrote a piece of software called PleadPerfect. Over the years, I’ve worn many hats—engineer, architect, and executive at both large and small companies. For the last 18 years, I’ve been a CIO/CISO for organizations ranging from 8-11 figures in revenue.

When I first heard about the CrowdStrike-related outage, my initial reaction was one of deep concern. I took a moment of silence in honor of the lost hours my peers and fellow IT pros sacrificed with their families to fix a problem that should never have occurred. The lack of good QA practices shown by CrowdStrike is deeply upsetting. They should have caught this issue in testing before releasing it to the public. The fact that it affected every Windows OS since 2008 is inexcusable.

Understanding the Incident

CrowdStrike’s Falcon sensor runs at the kernel level of the OS, which is how it protects machines so effectively. However, this tight integration also causes significant problems when updates are not properly tested. The faulty update led to widespread instances of the “Blue Screen of Death” (BSOD), causing machines to crash and not automatically recover. The recovery process involved booting machines in safe mode and deleting a CrowdStrike file—a task complicated by the inability to remotely enter safe mode on every device/OS. Additionally, best practices dictate securing the boot drive with BitLocker, which requires a recovery key to unlock the drive and enter safe mode. These keys are often stored in systems also affected by this flaw, greatly increasing the effort and time required for recovery.
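
For illustration only, here is a sketch (in Python) of the file cleanup at the heart of the widely reported manual fix: boot into safe mode or the recovery environment, then remove the faulty channel file from the CrowdStrike driver directory. The path and file pattern reflect public remediation guidance at the time; on a real system, follow CrowdStrike’s official instructions rather than a script like this.

```python
# Illustrative sketch of the reported manual remediation: locate and remove
# the faulty channel file from the CrowdStrike driver directory. Defaults to
# a dry run; this is not official tooling.

import os
from pathlib import Path

# Path and file pattern as described in public remediation guidance.
CROWDSTRIKE_DIR = Path(os.environ.get("WINDIR", r"C:\Windows")) / "System32" / "drivers" / "CrowdStrike"
FAULTY_PATTERN = "C-00000291*.sys"

def remove_faulty_channel_files(dry_run: bool = True) -> list[Path]:
    """Find (and optionally delete) channel files matching the faulty pattern."""
    matches = list(CROWDSTRIKE_DIR.glob(FAULTY_PATTERN))
    for path in matches:
        print(("Would delete" if dry_run else "Deleting"), path)
        if not dry_run:
            path.unlink()
    return matches

if __name__ == "__main__":
    remove_faulty_channel_files(dry_run=True)
```

The catch, of course, is that a script like this cannot run until someone has already booted the machine into safe mode and unlocked the drive—which is exactly why the BitLocker key problem made recovery so slow.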

Such incidents are not uncommon in the cybersecurity industry, but this one is particularly damaging because it stems from a QA and testing issue, not a cybersecurity breach. The tight integration between Falcon and the OS made the damage far more widespread and the recovery process far more onerous.

Impact on Businesses and Services

All sectors and industries were affected, but critical infrastructure sectors were hit the hardest. Transportation (airlines), banking/financial services, and healthcare (hospitals and emergency rooms) pose the most risk to world economies when disrupted. The three biggest US airlines, as well as those around the world, experienced grounded flights and communication issues. Banks in many countries went offline, and hospital networks faced significant disruptions.

Response and Resolution

CrowdStrike’s response to the incident was swift, but I am not sure what more they can do at this point. I did not feel that the apology from George Kurtz, the CEO, was full-throated or that it took sufficient responsibility for the incident. This is nobody else’s fault but CrowdStrike’s. While they have committed to helping everyone affected, they have 24,000 customers, all of whom are impacted, so they cannot give each the attention they need. Billions of dollars in damage are being done to those companies by this outage.

Lessons Learned

The key lessons from this incident are clear: Be careful how much trust you place in other companies and partners. Ensure your contracts allow you to seek damages, as that may be the only recourse in such situations. Have a comprehensive disaster recovery (DR) plan and test it regularly. The number of companies having to rebuild their backup infrastructure just to restore systems because they cannot access (or do not have) their BitLocker keys is far too great.

To better prepare for and prevent similar issues, develop and thoroughly test your recovery plans. Consider using a completely different set of security tools for backup and recovery to avoid similar attack vectors. Treat backup and recovery infrastructure as a critical business function and harden it as much as possible.
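
As a hypothetical example of treating recovery as a first-class function, the sketch below (Python, with made-up file names and columns) checks that every device in an asset inventory has a BitLocker recovery key escrowed in a store that is independent of the primary infrastructure—the gap that left so many teams unable to boot into safe mode during this outage.

```python
# Hypothetical sketch: reconcile an asset inventory export against a key
# escrow export to find devices with no out-of-band BitLocker recovery key.
# File names and column names are placeholders.

import csv

def load_ids(path: str, column: str) -> set[str]:
    """Read one column of a CSV export into a set of identifiers."""
    with open(path, newline="") as handle:
        return {row[column] for row in csv.DictReader(handle)}

def devices_missing_escrow(inventory_csv: str, escrow_csv: str) -> set[str]:
    """Return device IDs in the inventory that have no escrowed recovery key."""
    devices = load_ids(inventory_csv, "device_id")
    escrowed = load_ids(escrow_csv, "device_id")
    return devices - escrowed

if __name__ == "__main__":
    missing = devices_missing_escrow("asset_inventory.csv", "bitlocker_escrow.csv")
    for device in sorted(missing):
        print("No escrowed recovery key for:", device)
```

The specific check matters less than the habit: run it on a schedule, from infrastructure that does not share the same failure modes as the systems it is protecting.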

Future of Cybersecurity

Time will tell how this incident influences future cybersecurity practices and policies. Between the SolarWinds and CrowdStrike issues, both being failures of best practices by the companies themselves, something has to change.

Emerging technologies like AI and machine learning could help predict and prevent similar issues by identifying potential vulnerabilities before they become problems. However, the real fix may lie in revamping processes and possibly having independent bodies audit and certify the practices of technology companies.

Personal Insights

As someone deeply involved in the tech industry, I stay updated with the latest cybersecurity trends and threats by reading extensively, following industry developments, consuming relevant content, talking to peers, and moving out of my silo to share and learn from others.

My advice to fellow CIOs and CISOs is simple: Plan for the worst and test for the worst. If you fail to prepare for these kinds of incidents, you will be in the worst possible position when the board asks for your response.

Final Thoughts

The recent CrowdStrike outage was a wake-up call for many in the tech industry. It highlighted the vulnerabilities inherent in our interconnected world and underscored the need for robust cybersecurity measures. By learning from this incident and implementing the lessons outlined above, we can better prepare for and prevent similar issues in the future.

Stay vigilant, stay prepared, and let’s continue to fortify our defenses against the ever-evolving landscape of cybersecurity threats.


Navigating the SEC Cybersecurity Ruling

The latest SEC ruling on cybersecurity will almost certainly have an impact on risk management and post-incident disclosure, and CISOs will need to map this to their specific environments and tooling. I asked our cybersecurity analysts Andrew Green, Chris Ray, and Paul Stringfellow what they thought, and I amalgamated their perspectives.

What Is the Ruling?

The new SEC ruling requires disclosure following an incident at a publicly traded company. This should come as no surprise to any organization already dealing with data protection legislation, such as the GDPR in Europe or California’s CCPA. The final rule has two requirements for public companies:

  • Disclosure of material cybersecurity incidents within four business days after the company determines the incident is material.
  • Disclosure annually of information about the company’s cybersecurity risk management, strategy, and governance.

The first requirement is similar to what GDPR enforces: breaches must be reported within a set time (72 hours for GDPR, four business days for the SEC). To do this, you need to know when the breach happened, what was contained in the breach, who it impacted, and so on. And keep in mind that the four-business-day clock starts not when a breach is first discovered, but when the incident is determined to be material.
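
As a simple illustration of that clock, the sketch below (Python) computes a disclosure deadline as four business days from the date an incident is determined to be material, skipping weekends; real deadlines will also depend on holidays and, of course, legal advice.

```python
# Simple sketch of the disclosure clock: four business days from the date the
# incident is determined to be material (not from first discovery).

from datetime import date, timedelta

def disclosure_deadline(materiality_determined: date, business_days: int = 4) -> date:
    """Add N business days (Mon-Fri, ignoring holidays) to the determination date."""
    current = materiality_determined
    remaining = business_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return current

if __name__ == "__main__":
    # Hypothetical timeline: breach discovered on a Monday, determined
    # material on the Thursday -- the deadline lands the following Wednesday.
    print(disclosure_deadline(date(2024, 3, 7)))  # -> 2024-03-13
```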

The second part of the SEC ruling relates to annual reporting of what risks a company has and how they are being addressed. This doesn’t create impossible hurdles—for example, it’s not a requirement to have a security expert on the board. However, it does confirm a level of expectation: companies need to be able to show how expertise has come into play and is acted on at board level.

What are Material Cybersecurity Incidents?

Given the reference to “material” incidents, the SEC ruling includes a discussion of what materiality means: simply put, if your business feels it’s important enough to take action on, then it’s important enough to disclose. This does raise the question of how the ruling might be gamed, but we don’t advise ignoring a breach just to avoid potential disclosure.

In terms of security tooling that can help companies address the ruling, this aligns with our research on proactive detection and response (XDR and NDR), as well as event collation and insights (SIEM) and automated response (SOAR). SIEM vendors, I reckon, would need very little effort to deliver on this, as they already focus on compliance with many standards. SIEM also links to operational areas, such as incident management.

What Needs to be Disclosed in the Annual Reporting?

The ruling doesn’t constrain how security is done, but it does require the mechanisms used to be reported. The final rule focuses on disclosing management’s role in assessing and managing material risks from cybersecurity threats, for example.

In research terms, this relates to topics such as data security posture management (DSPM), as well as other posture management areas. It also touches on governance, risk, and compliance, which is hardly surprising. Indeed, it would benefit everyone if the overlaps between top-down governance approaches and middle-out security tooling were reduced.

What Are the Real-World Impacts?

Overall, the SEC ruling looks to balance security feasibility with action—the goal is to reduce risk any which way, and if tools can replace skills (or vice versa), the SEC will not mind. While the ruling overlaps with GDPR in terms of requirements, it is aimed at different audiences. The SEC ruling’s aim is to enable a consistent view for investors, likely so they can feed into their own investment risk planning. It therefore feels less bureaucratic than GDPR and potentially easier to follow and enforce.

Not that public organizations have any choice, in either case. Given how hard the SEC came down following the SolarWinds attack, these aren’t regulations any CISO will want to ignore.
