

Latest blog entries from the Network Perception team

Preparing for Escalating Cyber Threats

By Blog, Event

Governments around the world have issued warnings about escalating cyber threats following the invasion of Ukraine by Russia. The Cybersecurity and Infrastructure Security Agency (CISA) has launched the Shields Up campaign and offers a range of no-cost cyber hygiene services to help organizations to prepare. If you have not already done so, now is the time to verify your network access protection. Ensure multi-factor authentication and review your firewall configurations to:

    1. Restrict all remote access rules on a need-to-know basis
    2. Disable all ports and protocols that are not essential
    3. Verify your network segmentation
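The first two checks above can be partially automated. The sketch below assumes a simplified, vendor-neutral rule format (not any real firewall syntax) and flags allow-rules that accept traffic from any source or open ports outside an approved list:

```python
# Hypothetical sketch: flag overly permissive remote-access rules in a
# simplified rule list. Real audits would parse actual firewall configs.

from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    source: str      # CIDR block or "any"
    dest_port: int
    action: str      # "allow" or "deny"

# Assumption for illustration: only HTTPS and SSH are business-essential
ESSENTIAL_PORTS = {443, 22}

def audit(rules):
    """Return findings for rules that violate the checklist."""
    findings = []
    for r in rules:
        if r.action != "allow":
            continue
        if r.source == "any":
            findings.append(f"{r.name}: allows any source (violates need-to-know)")
        if r.dest_port not in ESSENTIAL_PORTS:
            findings.append(f"{r.name}: non-essential port {r.dest_port} open")
    return findings

rules = [
    Rule("mgmt-ssh", "10.0.5.0/24", 22, "allow"),
    Rule("legacy-telnet", "any", 23, "allow"),
]
for finding in audit(rules):
    print(finding)
```

In this toy run, only the `legacy-telnet` rule is flagged, once for its unrestricted source and once for its non-essential port.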

Please reach out if you need guidance or support; the NP team is here to help.

Contact us here.

GRC Outlook: Solidifying Cyber Resiliency

By Blog

As attacks grow in frequency and sophistication, targeting not only traditional IT networks but also Operational Technology (OT) networks, organizations face mounting cybersecurity challenges. The pressure from cyber threats is reaching new highs, and organizations are realizing that achieving perfect security is unrealistic. To meet these challenges, businesses are adapting their security strategies to improve cyber resiliency. However, making cyber resiliency a reality, and making it operational, is not an easy endeavor. This is where Network Perception is making an impact, delivering solutions that verify industrial control system protection by ensuring network access security as the first line of defense. “Our lightweight independent verification and visualization platform provides complete network transparency and continuous mapping to better support cybersecurity compliance and operationalize cyber resiliency. We visualize the security posture to ensure that there’s no blind spot and make sure that all the stakeholders are involved in this journey towards resiliency,” says Robin Berthier, Co-Founder and CEO of Network Perception.

What makes Network Perception a pioneer in the industry is its lightweight, robust, and safe network security solution. The company offers a completely frictionless deployment and instant value for customers who are under stringent compliance and cybersecurity pressure. Its solutions are highly usable for both technical and non-technical users, thanks to a unique design, and the platform’s progressive data ingestion provides value even if only a subset of network devices is imported. “Our solutions are crafted to be as intuitive as possible; the user interface is elegant and simple so everyone can understand network risk exposure immediately, regardless of their technical background,” explains Berthier.

Instantly Visualize the Network Map

Network Perception’s instant network visualization platform is called NP-View. The platform solves compliance and security audit challenges by performing an automated and comprehensive analysis of a client’s network device configuration files. Packaged as a desktop or server-based application, it uses the configuration files from firewalls, routers, and switches to instantly visualize the network topology. This visualization enables anyone to understand compliance and security issues instantly, and the results of the automated analysis can be seamlessly exported into actionable reports. In short, NP-View builds a model of a network that accurately represents how each network device allows and denies communication, and this model computes the complete set of possible paths among network assets. “There’s no other platform where you can visualize the network in a lightweight, fast way like Network Perception’s,” extols Berthier.

In addition, the solution will automatically identify overly permissive rules or misconfigurations that could put your infrastructure at risk. It can also compute the connectivity in the network based on the configurations alone, without touching the network. “First, we input the configuration files of the network devices and display a visual map of your network. Second, we do a risk assessment. And finally, we compute a path analysis to automatically verify your network segmentation,” explains Berthier.
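To illustrate the idea behind that path analysis (this is a simplified sketch, not NP-View’s actual algorithm, and the zone names are hypothetical), network segmentation can be checked by modeling permitted communication as a directed graph and testing whether any path leads from a non-critical zone into a critical one:

```python
# Illustrative sketch: verify segmentation by checking that no path
# exists from a non-critical zone into the critical OT zone.

from collections import deque

def reachable(edges, start):
    """Breadth-first search over allowed-communication edges."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for dst in edges.get(node, ()):
            if dst not in seen:
                seen.add(dst)
                queue.append(dst)
    return seen

# Edges hypothetically derived from firewall/router configurations
allowed = {
    "corp-lan": ["dmz"],
    "dmz": ["historian"],
    "historian": ["plc-zone"],   # misconfiguration: bridges IT to OT
}

violated = "plc-zone" in reachable(allowed, "corp-lan")
print("Segmentation violated:", violated)
```

Here the historian host accidentally bridges the IT side to the PLC zone, so the reachability check reports a violation; removing that edge would make the check pass.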

The Deep Modeling Technology

The first step in defending a network is to know that network extremely well. Knowledge of the network configuration is the best proactive line of defense for protecting critical assets against attacks. At Network Perception, the team helps its clients elevate that understanding and align everyone on the same comprehension of how the network is configured. This way, Network Perception’s customers can monitor their network security controls with ease.

Network Perception’s innovative cyber resiliency solution comes with deep network modeling technology to automatically verify network segmentation and provide instant firewall risk assessment. A comprehensive REST API easily integrates within the larger cybersecurity ecosystem, and the solution’s read-only deployment independently verifies network configurations without increasing the attack surface.

As a cybersecurity solution provider, Network Perception focuses on developing technology to support cyber resiliency, enhancing customers’ ability to verify faster and to visualize whether their network architecture is efficiently protecting their critical assets. In most OT environments, only a single person intimately understands how the network is actually configured, while the larger team relies on network diagrams that are outdated or incorrect. Moreover, many networks accidentally connect non-critical to critical zones. NP-View is designed to automatically and effortlessly surface those issues while clarifying the network architecture for all stakeholders.

The Invention of Automated Security

The story of the inception of Network Perception dates back to when CEO and Co-Founder Robin Berthier was working as a research scientist at the University of Illinois at Urbana-Champaign, with funding from the Department of Energy and the Department of Homeland Security. The government was extremely concerned with the risk of cyber attacks against the electrical grid and asked the researchers, including Berthier, to develop the next generation of network modeling solutions. This led to an initial prototype to better understand networks and better verify how access policies protect critical assets. Berthier adds, “We were very fortunate from day one of the research project to be able to work closely with industry partners. We partnered with electric utilities in the Midwest, including Ameren and ComEd, to understand their challenges and their pain points. We received continuous feedback as we developed the initial prototype.” This ultimately led to the founding of Network Perception and its evolution into a leading cybersecurity company.

The team recognized the complexity and monotony of manually going through thousands of policy rules to understand exactly how the firewalls in a network block unwanted access to critical assets and critical industrial equipment. Network Perception’s solution automates the entire firewall audit process, enabling compliance and security teams to shift from that tedious, lengthy manual review to a much faster, automated, and comprehensive workflow, while removing the risk of human error.

Embracing the Zero-Trust Culture

Over time, Network Perception has evolved into one of the most innovative network security solution providers in the industry, with over 100 customers, including half of the 30 largest utilities in the U.S. The company has also established a strong relationship with NERC and the electric utility industry. As part of its growth plan, Network Perception is launching a new set of licensing tiers for its NP-View product. The first tier, NP-View Essential, is the fastest solution for determining whether network devices are in or out of compliance. It is the entry-level version, supporting organizations that don’t yet have an independent verification process established. The second tier is NP-View Professional, a solution that continuously verifies whether a network architecture is correctly configured to protect mission-critical assets. The Professional version is best suited for organizations with a consistent and documented independent verification process in place. It builds on the Essential version by enabling users to track changes and to augment their network verification and visibility capabilities with vulnerability information.

Finally, the third tier is NP-View Enterprise, the most advanced and customizable platform, designed to continuously check that critical assets are protected 24/7 by best-in-class defense-in-depth. The Enterprise version of NP-View is intended for organizations that have measurable security policies and procedures. It builds on the Professional version by adding dashboards, custom fields, advanced workflow automation, and full API integration. “As part of the growing zero-trust culture, it is vital to invest in a read-only and continuous verification solution that can give you a clear picture of your risk exposure without adding any new risk to your infrastructure,” concludes Berthier.


CS2AI Cybersecurity 4 Energy Virtual Symposium Recording

By Announcement, Blog

In case you missed it, we partnered with (CS)²AI and Q-Net Security on January 19th for a virtual symposium focused on securing and protecting electrical operations and power grids. Our expert panel of thought leaders in the electric sector discussed tangible recommendations and best practices for electric utilities to address current and upcoming compliance and cybersecurity challenges.

Our CEO and Co-Founder, Robin Berthier, demonstrated how to improve firewall rule change review through a 5-step workflow. It was a timely topic, since a majority of attendees selected network architecture and segmentation as the cybersecurity area they would most like to improve this year.

See what else the panel had to say:

  • Melissa Hathaway | President of Hathaway Global Strategies | Electric Sector Digital Resilience – A Global Perspective
  • Marc Rogers | VP of Cybersecurity at Okta | Hands-on Experience on Exploit
  • Ben Sooter | Principal Project Manager | Responding to High Impact Cyber Security Events in Operations
  • Branko D. Terzic | Former FERC Commissioner | Challenges for Electric Utilities
  • Philip Huff | University of Arkansas | Vulnerability Management for Electric Utilities
  • Saman Zonouz | Associate Professor at Rutgers University | Threats to Programmable Logic Controllers
  • Todd Chwialkowski | EDF Renewables | Implementing Electronic Security Control

View the recording of the symposium here:


2022 Resolution: Cybersecurity Verification

By Blog

Due to heightened risk of cyber attack, the Cybersecurity and Infrastructure Security Agency (CISA) recently published a short checklist of urgent, near-term steps to reduce the likelihood and impact of a potentially damaging compromise. The recommendations include validating remote access to the organization’s network and confirming that all ports and services that are not essential for business purpose have been disabled. We invite every organization to not only review the list of controls but to also invest in independent verification of their correct implementation. Verified cybersecurity makes all the difference between catastrophic failure and operational resiliency. This is particularly true for OT networks where configuration changes can erode security controls such as network segmentation over time.

View the Cybersecurity and Infrastructure Security Agency Checklist Below:


The Importance of Velocity in Cybersecurity

By Blog

Part 4 of 4 in a Blog Series on 3 Key Elements of a Cyber Resiliency Framework: (1) Verification, (2) Visibility, and (3) Velocity

In the first three parts of this blog series on cybersecurity for OT critical infrastructure, we discussed the elements and specific roles of verification and visibility in an effective cyber-resiliency framework. However, it is also important to note the role of velocity in the resilience equation. You need verification and visibility at speed in order to protect, monitor, and respond to an incident.

Cybersecurity frameworks and strategies all recognize the need for speed. The NIST Framework prioritizes rapid response and mitigation: “Response processes and procedures are executed and maintained, to ensure timely response to detected cybersecurity incidents. Also, activities are performed to prevent expansion of an event, mitigate its effects, and resolve the incident.” (Respond | NIST) NERC’s CIP-008-5 mandates that “security incidents related to any critical cyber assets must be identified, classified, responded to and reported in a manner deemed appropriate by NERC.”

VELOCITY – Verification and Visibility at Speed in Protecting Digital and Physical Assets in Critical Infrastructure

The current critical infrastructure threat landscape includes sophisticated and capable hackers, from state actors to organized criminal gangs, who often share the latest and most effective hacking tools and tactics with each other. A breach can have catastrophic consequences for OT industrial systems, so security measures must operate at speed to mitigate threats. This operational velocity is required for monitoring ports and services, security patch management, malicious software identification, and especially rapid incident response.

A quote from Gene Yoo of the Forbes Technology Council succinctly presents the stakes for both IT and OT operations: “In cybersecurity, speed defines the success of both the defender and the attacker. It takes an independent cybercriminal around 9.5 hours to obtain illicit access to a target’s network. Every minute a company does not use to its advantage gives hackers a chance to cause greater damage.” (The Importance Of Time And Speed In Cybersecurity)

What is necessary to achieve verification and visibility at speed and reduce the threat of attackers? George Platsis, Senior Lead Technologist, Proactive Incident Response & Crisis Management at Booz Allen Hamilton, sees the need for a combination of three factors: resources, organizational structure, and environment understanding. He notes that “you can have all the resources in the world, but if your organization is not structured to execute, you will have blind spots. Proper resources give you capability. Sound organizational structures give you ability. Strong environmental understanding gives you knowledge. There is your trifecta.” He sees technology as an enabler for bolstering those three factors with velocity: “well configured automation increases your resource capabilities and possibly your environmental understanding.”

Automation as an enabler of velocity is also a theme articulated by Patrick C. Miller, CEO at Ampere Industrial Security and Founder and President Emeritus of the Energy Sector Consortium. He believes that getting operational and security telemetry from systems and networks, then analyzing it through tools and human review, requires a significant amount of integration. Making the data useful and removing unnecessary alerts and false positives is essential for response, and automation can probably cover as much as 70%-80% of that work, allowing for significantly greater speed. Patrick says that “the challenge is to automate where it makes sense, and with tested/proven process. All automated processes require independent monitoring, as well. Checks and/or tests to ensure the process is still functioning as expected (all controls intact and working) is crucial. This applies to the areas of 1) asset inventory; 2) phase out of fragile systems; 3) architecting networks and systems for defense; 4) change control and configuration management; 5) logging and monitoring; 6) reduction of complexity; 7) well-rehearsed incident response and recovery.”

According to Marcus Sachs, Research Director for Auburn University’s McCrary Institute for Cyber and Critical Infrastructure Security, and former Senior Vice President and Chief Security Officer at the North American Electric Reliability Corporation, we are making headway on verification, visibility, and velocity: if the computer knows what’s going on, the machine is logging it. He says that “if you’re looking at your logs, and doing log reviews, and even having a machine review your logs for you, you’re going to see things very quickly. But if you wait for the phone call, or you wait for the website that goes down to be your first indication there’s a problem, you are way behind the curve.”

Emerging technologies, including artificial intelligence, are changing the game in terms of doing things faster: monitoring equipment, tracking threats, and automating incident response. These new capabilities for automation and reaction at speed are highlighted in a recent Congressional Research Service report, “Evolving Electric Power Systems and Cybersecurity,” November 4, 2021.

The report states that “while these new components may add to the ability to control power flows and enhance the efficiency of grid operations, they also potentially increase the susceptibility of the grid to cyberattack. The potential for a major disruption or widespread damage to the nation’s power system from a large-scale cyberattack has increased focus on the cybersecurity of the Smart Grid.

The speed inherent in the Smart Grid’s enabling digital technologies may also increase the chances of a successful cyberattack, potentially exceeding the ability of the defensive system and defenders to comprehend the threat and respond appropriately. Such scenarios may become more common as machine-to-machine interfaces enabled by artificial intelligence (AI) are being integrated into cyber defenses.” (R46959)

In this blog series we discussed the elements of (1) Verification, (2) Visibility, and (3) Velocity for cybersecurity resilience, particularly in OT critical infrastructure systems. These three elements do not stand alone as pillars; they are part of a unified cybersecurity triad. It is this triad of velocity, visibility, and verification that will help critical infrastructure operators assess situational awareness, adhere to compliance mandates, align policies and training, optimize technology integration, promote information sharing, establish mitigation capabilities, maintain cyber resilience, and ultimately be more cyber secure.


The Importance of Visibility in Cybersecurity

By Blog

Part 3 in a Blog Series on 3 Key Elements of a Cyber Resiliency Framework: (1) Verification, (2) Visibility, and (3) Velocity


In its July 2021 Memo, the White House created a voluntary industrial control systems (ICS) initiative to encourage collaboration between the federal government and the critical infrastructure community. The key purpose of the initiative is “to defend the nation’s critical infrastructure community by encouraging and facilitating the deployment of technologies and systems that provide threat visibility, indications, detection, and warnings, and enabling response capabilities for cybersecurity in essential control systems and operational technology (OT) networks.” The memo further elaborated that “we cannot address threats we cannot see; therefore, deploying systems and technologies that can monitor control systems to detect malicious activity and facilitate response actions to cyber threats is central to ensuring the safe operations of these critical systems.” (New cybersecurity initiative by Homeland Security, NIST to protect critical infrastructure community – Industrial Cyber)

The concept of visibility described by the memo, knowing what assets you must manage and protect, is a fundamental aspect of any cybersecurity strategy, especially with regard to critical infrastructure, where the costs of a breach may have devastating implications. For this reason, identifying the digital and physical assets in your network is the first basic tenet of the NIST Framework, which integrates industry standards to mitigate cybersecurity risks.

NERC has also recognized the importance of visibility for compliance. Under NERC CIP-002-5.1a: Bulk Electric System (BES) Cyber System Categorization, the industrial cyber assets requiring identification and categorization include Electronic Access Control or Monitoring Systems (intrusion detection systems, electronic access points, and authentication servers), Physical Access Control Systems (card access systems and authentication servers), and Protected Cyber Assets (networked printers, file servers, and LAN switches). (What are the 10 Fundamentals of NERC CIP Compliance? | RSI Security)

VISIBILITY: The Importance of Visibility in Protecting Digital and Physical Assets in Critical Infrastructure

How do we define visibility in cybersecurity? According to Marcus Sachs, Research Director for Auburn University’s McCrary Institute for Cyber and Critical Infrastructure Security, and former Senior Vice President and Chief Security Officer at the North American Electric Reliability Corporation, visibility means knowing where you are and what’s going on. If you’re a believer in the NIST framework, the first step is identification of your assets: you can’t protect what you don’t know you have. Visibility covers assets including people, not just wires and blinky-light things, down to who has access to what, and visibility of files and resources. So, visibility truly starts with knowing what you have. Also, oftentimes it’s a user who detects something that’s not normal, calls the help desk, and says, “hey, I see something wrong here,” prompting the help desk to ask, “okay, could this be a security incident? Or is it just a user problem, or some malfunctioning software?”

Visibility can also be viewed as the fuel for managing, protecting, and analyzing operations & assets.

Patrick C. Miller, CEO at Ampere Industrial Security and Founder and President Emeritus of the Energy Sector Consortium, sees visibility as getting sufficient data from target networks and systems into the analysis engine and then managing that data in such a way as to make it useful and not just “noise.” He notes that visibility is highly dependent on the organization. He believes that visibility starts with a sufficient asset inventory, and that without one, the value and effectiveness of visibility goes down. Tailored visibility backed by a solid asset inventory can be effective and enable incident response teams to see what is happening to which systems.

Visibility also requires knowledge of the inventory of what may lurk in software.

Tom Alrich, Co-leader of the Energy Sector SBOM Proof of Concept at the National Telecommunications and Information Administration, US Department of Commerce, has worked on NERC CIP issues since 2008. He is focused on the software aspects of visibility. He notes that the average software product has 135 components, and that 90% of them are open source. Many products have thousands of components, and each component can develop vulnerabilities. He says that “the end user has no way of tracking those without a software bill of materials (SBOM) that provides visibility into component risks.”
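As a rough illustration of the workflow Alrich describes (the component list and vulnerability feed below are toy data, though CVE-2021-44228 is the real Log4Shell identifier), an SBOM in a CycloneDX-style JSON format can be cross-referenced against known vulnerabilities:

```python
# Minimal sketch: read a CycloneDX-style SBOM (JSON) and cross-reference
# component versions against a toy vulnerability feed. A real workflow
# would query a service such as OSV or the NVD instead.

import json

sbom_json = """
{
  "components": [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "openssl", "version": "3.0.7"}
  ]
}
"""

# Toy feed mapping (component, version) pairs to advisories
known_vulns = {("log4j-core", "2.14.1"): "CVE-2021-44228"}

def affected(sbom_text, feed):
    """List (name, version, advisory) for components found in the feed."""
    sbom = json.loads(sbom_text)
    return [
        (c["name"], c["version"], feed[(c["name"], c["version"])])
        for c in sbom.get("components", [])
        if (c["name"], c["version"]) in feed
    ]

for name, version, advisory in affected(sbom_json, known_vulns):
    print(f"{name} {version}: {advisory}")
```

The point of the sketch is Alrich’s: without the machine-readable component inventory, this lookup is impossible for the end user.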

Visibility is a management and board issue.

Mary-Ellen Seale of The National Cybersecurity Society, and former Deputy Director of the National Cybersecurity Center at DHS, says that a key need is visibility of the risk associated with a company or organization at the board level, so that visibility does not rest solely with an IT guy, an IT team, or a third party providing information to that baseline. Visibility requires actually “figuring out what are the critical activities that need to occur? What are the costs associated with that, and how do I present them to leadership to have them correct it?”

Visibility is about awareness.

Paul Ferrillo, Privacy and Cybersecurity Partner at Seyfarth Shaw LLP, brings a legal perspective with questions that pertain to operational visibility. “Do you know who is using your system? Is it just directors, officers, and employees? Is it vendors? Who’s accessing your system? How are they accessing your system? Is it through a mainframe computer? Is it through a laptop? Is it from a BYOD device? Are they who they say they are when they’re accessing the network?”

I agree with our expert commentators and with the insights provided in the White House memo, and by NIST and NERC, on the topic of visibility. It is a necessary first step for cybersecurity in any vertical or industry. It is important for both operational teams and incident response teams to have transparent inventories of digital and physical assets in order to assess vulnerabilities to threats. Mapping the interactions between networks, devices, and applications, along with the cyber-resilience roles of management, should be part of any risk management strategy protecting critical infrastructure.

Next blog: Part 4: The Importance of Velocity in Cybersecurity



The Importance of Verification in Cybersecurity

By Blog

Part 2 in a Blog Series on 3 Key Elements of a Cyber Resiliency Framework: (1) Verification, (2) Visibility, and (3) Velocity

Verification is the process of checking and attaining information about the ability of an individual, company, or organization to comply with standards. In cybersecurity, verification is intertwined with compliance with regulatory standards based on industry best practices. The European Union’s General Data Protection Regulation (GDPR) is a good example of the linkage between verification and compliance, as are other regulatory initiatives such as CMMC and HIPAA.

The energy and utilities industry requires a strong adherence to verification and compliance in its security posture. Recently, the Federal Energy Regulatory Commission (FERC) released its recommendations to help users, owners, and operators of the bulk-power system (BPS) improve their compliance with the mandatory CIP reliability standards and their overall cybersecurity posture. Staff from FERC’s Office of Electric Reliability and Office of Enforcement conducted the audits in collaboration with staff from the North American Electric Reliability Corporation (NERC) and its regional entities.

In its 2021 Staff Report ‘Lessons Learned from Commission-Led CIP Reliability Audits,’ the agency advised “enhancing policies and procedures to include evaluation of cyber asset misuse and degradation during asset categorization, properly document and implement policies, procedures and controls for low-impact transient cyber assets, and enhance recovery and testing plans to include a sample of any offsite backup images in the representative sample of data used to test the restoration of bulk-electric system cyber systems.”

The report also proposed improving vulnerability assessments to include credential-based scans of cyber assets, and boosting internal compliance and controls programs to include control documentation processes and associated procedures pertaining to compliance with the CIP reliability standards. (FERC report recommends compliance with CIP reliability standards – Industrial Cyber)

Utility security can be viewed as the integration of national security into the power and electricity sectors, especially to protect the power grid. The North American Electric Reliability Corporation (NERC) is the regulatory authority with responsibility for the reliability of service to more than 334 million people. NERC’s standards are directly aimed at encouraging or mandating steps for utilities in protecting their operation.

NERC’s authority has led to critical infrastructure protection (CIP) standards that guide utilities’ planning and activities to eliminate or mitigate the many internal and external threat profiles. The CIP standards have evolved over time, both in the scope of their focus and in the level of their authority. (Utility Security: Understanding NERC CIP 014 Requirements and Their Impact)

VERIFICATION: Establishing a Baseline and Validating Risk Assessment Frameworks

Building effective verification begins by defining the scope of the verification process. You start by identifying mission-critical assets: determine where they are, how critical they are to daily operations, and who or what has access to them. To help initiate a strategy for verification within a physical and cyber resiliency framework for mission-essential systems such as utilities, it is helpful to understand the role of verification and compliance.

According to Marcus Sachs, Research Director for Auburn University’s McCrary Institute for Cyber and Critical Infrastructure Security, and former Senior Vice President and Chief Security Officer at the North American Electric Reliability Corporation, “compliance is, as everybody understands, the initial baseline. You’re required by law to be compliant with some framework. And NERC CIP is what we use for the bulk power system. I think most qualified engineers, and security professionals, know that is the baseline, the minimum that you meet.”

“NERC CIP is essentially the minimum security required as a Registered Entity under NERC,” agrees Patrick C. Miller, CEO at Ampere Industrial Security and Founder and President Emeritus of the Energy Sector Consortium. Like many cybersecurity experts, he believes that verification should be possible by any qualified party. Most organizations have SMEs within each business unit who handle the day-to-day operational aspect of compliance, but when it comes to guiding and validating evidence, that is usually performed by a central and authoritative compliance function.

According to Tom Alrich, Co-leader, Energy Sector SBOM Proof of Concept at National Technology & Information Administration US Department of Commerce, the biggest threats in the world are supply chain related, and SolarWinds and Kaseya demonstrated that not enough attention has been paid to those risks.

George Platsis, Senior Lead Technologist, Proactive Incident Response & Crisis Management at Booz Allen Hamilton states that “independent verification is your reality check. Even the best professional athletes have coaches. As good as you can be, you may have a blind spot, or something needs tweaking.”

Clearly, the newly released FERC/NERC Staff Report on compliance and CIP reliability standards signals that verification will remain a key element of future policy. As our SMEs have noted in our discussion, the vulnerabilities and sophistication of potential security threats against CIP continue to expand. Therefore, it is important to adopt a strategy that not only complies with best practices and standards, but also anticipates and mitigates new risks. In our next blog we will discuss how visibility is essential to the risk matrix.

Next blog: Part 3: The Importance of Visibility in Cybersecurity


How to Achieve Cyber Resilience

By Blog

Part 1 in a Blog Series on 3 Key Elements of a Cyber Resiliency Framework: (1) Verification, (2) Visibility, and (3) Velocity

In industry and in government it is not a question of if you will be cyber-attacked and potentially breached, but when. The cyber-attack surface has grown exponentially larger in recent years with the meshing of OT and IT systems, and the greater connectivity brought by the Internet of Things. Also, the threat actors themselves, that include nation states, criminal enterprises, insider threats, and hacktivists, have become more sophisticated and capable. Their activities are increasingly being focused on critical infrastructure, including the energy and utilities industry.

The energy ecosystem includes power plants, utilities, nuclear plants, and the electric grid. Protecting the sector’s critical ICS, OT, and IT systems from cybersecurity threats is complex, as much of the energy critical infrastructure components have unique operational frameworks and access points, and they integrate a variety of legacy systems and technologies.

Because of the changing digital ecosystem, and the consequences of being breached, creating a cybersecurity framework that encompasses resiliency has become a top priority for mitigating both current and future threats. There are multiple components of that framework that need to be explored. This is the first blog of a four-part series that will focus on the key elements of a cyber resiliency framework: (1) verification, (2) visibility, and (3) velocity. Another objective of this series is to connect cyber resiliency with NERC CIP compliance.

What is Cyber Resilience?

A joint DNI/DHS report, Cyber Resilience and Response, sees cyber resilience as “important for mission-essential systems that support our national security, homeland security, essential government services, and the critical infrastructure that supports the nation’s economy. Cyber resiliency is that attribute of a system that assures it continues to perform its mission-essential functions even when under cyber-attack. For services that are mission-essential, or that require high or uninterrupted availability, cyber resiliency should be built into the design of systems that provide or support those services.”

In August of 2021, NIST updated its guidance on cybersecurity resilience with a new definition. The NIST draft “turns the traditional perimeter defense strategy on its head and moves organizations toward a cyber resiliency strategy that facilitates defending systems from the inside out instead of from the outside in. This guidance helps organizations anticipate, withstand, recover from, and adapt to adverse conditions, stresses, or compromises on systems – including hostile and increasingly destructive cyber-attacks from nation states, criminal gangs, and disgruntled individuals.” (SP 800-160 Vol. 2 Rev. 1 (Draft), Developing Cyber-Resilient Systems: A Systems Security Engineering Approach, CSRC)

To initiate a strategy for verification, visibility, and velocity within a cyber resiliency framework for mission-essential systems such as utilities, you also need perspectives to build on the DNI/DHS definition of what constitutes cyber resilience from practitioners in the field. We asked leading experts to share their definition of resilience in the context of a cyber system.

According to George Platsis, Senior Lead Technologist, Proactive Incident Response & Crisis Management at Booz Allen Hamilton, utilities, and individual organizations should have that candid talk and define what “cyber resilience” means to them. He notes that the Lawrence Livermore National Laboratory defines their Cyber and Infrastructure Resilience Program’s mission as the ability to enhance the security and resilience of the nation’s critical infrastructure systems and networks to cyber, physical, and environmental hazards and to enable their reliable and sustainable design and operation now and into the future. George interprets that as “the ability to keep the business going, regardless of hazard.”

Marcus Sachs, Research Director for Auburn University’s McCrary Institute for Cyber and Critical Infrastructure Security, and former Senior Vice President and Chief Security Officer at the North American Electric Reliability Corporation, sees resilience as the “ability to recover, or the ability to endure some sort of pain. For any organization, and that includes utilities from small distribution up to transmission and generation, if you’re able to continue to operate in the face of an adversary, or be able to recover very, very quickly should something bad happen, that’s good resilience. Realistically, we’re going to have interruptions. So, how quickly you can recover from an interruption is a good gauge of your resiliency.”

Patrick C. Miller, CEO at Ampere Industrial Security and Founder and President Emeritus of the Energy Sector Security Consortium (EnergySec), states that “by and large, most utilities know that resilience means continuing to operate under negative, degraded or even adversarial operating conditions. They understand this from many perspectives, with a long history of response and recovery after natural disasters and other human/animal-caused outages (car/pole, backhoe, squirrels, etc.). Adding cyber to that, whether through accidental or malicious human action, is nothing outside of their world.”

Benjamin Stirling, former Manager of Generation Cybersecurity at Vistra, believes that frameworks for classifying the process you are protecting are integral to cyber resilience. He says that the first step in risk analysis for OT and ICS cybersecurity is understanding and classifying the process, noting that protecting a water treatment plant at a site versus a burner management system at a site may be two very different things. “Once you have this risk categorization piece done, then you can suggest how you’re going to protect those assets and begin to have a methodology. You can go down a path where you can have a reasonable risk-based approach to resilience.”

Paul Ferrillo, Privacy and Cybersecurity Partner at Seyfarth Shaw LLP, perhaps has the description of the topic that many can most relate to. He defines cyber resilience much like a boxing match: being able to take a punch right in the face, hit the canvas, and get back up again. For him, resilience is getting back on the internet, doing your backups, restoring your backup tapes, and getting back into play.

All these cybersecurity experts concur that cyber resilience is generally defined as being able to recover and continue to operate in the event of an incident. That is sometimes easier said than done, especially given morphing threats, a shortage of skilled cybersecurity workers, and the regulatory requirements of maintaining critical infrastructure that is often owned by the private sector and governed by the public sector.

Also, there is no one-size-fits-all cyber resilience framework, even within a single industry such as utilities. The ability to be cyber resilient starts with a risk management focus and the allocation of resources and training across varying threat scenarios, with the end goal of being able to recover quickly and remain operational. It also requires a customized strategy augmented by automation tools to keep systems optimally prepared and running.

In further discussions with the SME practitioners, it became clear that cyber risk management is the nexus for securing cyberspace, especially in OT/ICS operating environments. This requires creating a cyber-resilience framework that will assess situational awareness, adhere to compliance mandates, align policies and training, optimize technology integration, promote information sharing, establish mitigation capabilities, and maintain cyber resilience in the event of incidents. This is where the specific elements of verification, visibility, and velocity need to be enabled to achieve cyber resilience.

Next blog: Part 2: Compliance Verification to Achieve Greater Cyber Resiliency

Preventing Lateral Movement Through Network Access Visibility

By Blog

In the first five months of this year, we have already witnessed multiple cyber attacks against critical infrastructure in the US. Those events range from an individual endangering lives by attempting to poison a water-treatment facility to large organized groups disrupting fuel delivery to a significant part of the country.

The increasing number and sophistication of such incidents have reinforced the importance of building resilient cyber infrastructure. Organizations have started identifying their critical systems and protecting them with multiple cyber-defense layers. However, many connected systems that form the perimeter of the organization’s network remain exposed. Such devices include external-facing servers and corporate workstations. Attackers often exploit the perimeter, leveraging existing networking services and unknown loopholes to reach the network’s crown jewels. That approach is termed lateral movement—a set of activities used by attackers to make their way from the initial entry point to critical assets. In such an expansion phase, attackers utilize several exploit techniques and use intermediate devices as stepping stones. Eventually, lateral movement enables attackers to launch data exfiltration or service disruption.


Lateral Movement in Action: the SolarWinds Incident

In the words of Brad Smith, President of Microsoft, the 2020 SolarWinds supply chain attack was an “attack on the United States and its government and other critical institutions, including security firms.” The incident, which became public in December 2020, had occurred between March and June of that year. Sophisticated advanced persistent threat (APT) actors introduced malicious code into the vendor’s Orion platform, a network and endpoint management software. Subsequently, downloads of the compromised software provided the APT with a foothold into the IT networks of more than 18,000 SolarWinds customers, including federal agencies and major private organizations.

Figure 1 illustrates how the malware virtually made it from the Internet to critical segments of a target network. First, the compromised Orion software gave attackers a backdoor into the victim system. Second, since a network management system is typically authorized to have two-way communication with all the devices, attackers could collect authentication keys and tokens. Brute-force password cracking attacks might have also helped attackers to gain privileged access to critical servers. With knowledge of the internal architecture and access to credentials, the malicious traffic could go undetected, giving attackers access to confidential information and important services. Due to the large number of entities affected, investigators believe it will take years to unravel the full extent of the damage. Attackers may also carry out follow-on attacks using the information collected and tools deployed in victim networks.

Figure 1: Lateral movement in the SolarWinds incident utilized (1) delivery of malware through software update mechanism, (2) Internal reconnaissance and credential harvesting through trusted communications, and (3) Data exfiltration or service disruption.


Why does Lateral Movement Need Special Attention?

Lateral movement has been an essential step in the majority of recent cyber attacks. Moreover, since it is a precursor to the actual action on target, organizations have an excellent reason to invest more in defending against lateral movement and the steps that lead to it. Such preparedness would save them significant costs that they would otherwise spend on incident response and repair.

Achieving resiliency against lateral movement attacks is challenging for three core reasons. First, the attack vectors and techniques that the adversaries can adopt are virtually unlimited. Next, the sophistication of attackers in utilizing benign OS and networking services is increasing. Finally, even though network access and security policies aim to segment networks effectively, unwanted access paths can easily result from misconfigurations, software bugs, and human errors. For example, misconfiguration of firewall access policies was a primary enabler of the attacker’s lateral movement in the 2013 Target Corporation data breach and 2015-16 Ukrainian power grid incident.


Preventing Lateral Movement

One important insight that benefits the defender is that an adversary, to move laterally, must have several interactions with the network and leverage the existing access patterns. Therefore, the awareness of network assets and access paths can be vital in measuring and reducing risk concerning lateral movement. Here, an access path refers to a possible network connection between two devices.

At a high level, a common approach to understand lateral movements and reduce risk exposure consists of the following steps:

  1. Computing risk metrics by analyzing the graph structure generated by network paths
  2. Specializing the metrics with additional context from services and vulnerabilities
  3. Changing network configuration to decrease the risk

The first step involves constructing a network access graph and selecting relevant metric(s) to quantify the risk. One commonly adopted metric is the number of (strongly) connected components. A strongly connected component is a maximal set of nodes in a directed graph in which every node is reachable from every other node. Because of that property, a connected component becomes a single lateral-movement domain. Hence, the presence of large connected components in the network access graph indicates network zones with higher risk.
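To make the metric concrete, here is a minimal sketch in Python of computing strongly connected components over an access graph. The subnet names, the any-to-any policy, and the function name are hypothetical illustrations; a real access graph would be derived from parsed firewall and router configurations.

```python
# Sketch: measuring lateral-movement risk as the size of strongly
# connected components (SCCs) in a network access graph, using
# Kosaraju's two-pass algorithm. Subnet names and rules are hypothetical.
from collections import defaultdict

def strongly_connected_components(nodes, edges):
    """Return the SCCs of a directed graph as a list of node sets."""
    graph, rgraph = defaultdict(list), defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)
        rgraph[dst].append(src)

    # Pass 1: record nodes in order of DFS completion (iterative DFS).
    visited, order = set(), []
    for start in nodes:
        if start in visited:
            continue
        visited.add(start)
        stack = [(start, iter(graph[start]))]
        while stack:
            node, children = stack[-1]
            for child in children:
                if child not in visited:
                    visited.add(child)
                    stack.append((child, iter(graph[child])))
                    break
            else:
                order.append(node)
                stack.pop()

    # Pass 2: DFS on the reversed graph in reverse completion order;
    # each tree found is one strongly connected component.
    sccs, assigned = [], set()
    for node in reversed(order):
        if node in assigned:
            continue
        component, frontier = set(), [node]
        assigned.add(node)
        while frontier:
            cur = frontier.pop()
            component.add(cur)
            for pred in rgraph[cur]:
                if pred not in assigned:
                    assigned.add(pred)
                    frontier.append(pred)
        sccs.append(component)
    return sccs

subnets = ["Corporate", "Marketing", "OT", "DMZ"]
# An any-to-any policy: every subnet can reach every other subnet.
edges = [(a, b) for a in subnets for b in subnets if a != b]
components = strongly_connected_components(subnets, edges)
# One component spanning all four subnets = a single lateral-movement domain.
```

A safer configuration would split the graph into several smaller components, which the same function reports directly, so the metric doubles as a before/after check when access rules are tightened.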

Figure 2 depicts a sample network segmented into subnets using a Cisco firewall. The figure summarizes network access paths in terms of a connectivity matrix between the different subnets. Such connectivity means that the entire network is one connected component. That is a state of high risk with respect to lateral movement and should be fixed.

Figure 2: With the access policies configured as shown in the table, the network becomes a single fully-connected graph.


It is easy to see the value of such analysis for real-world networks consisting of many firewalls and routers. In the second step of the overall process, we can further specialize access paths to the specifics of the underlying network and the likely attack vectors. In that context, defenders can implement the following approaches, relying on the situational awareness obtained in the previous step:

  1. Analyze paths, both inbound and outbound, for specific networks and devices
  2. Filter paths per service type (protocol-port combinations) to focus more on lateral movement vectors such as authentication, remote access, file transfer, and sharing services
  3. Correlate paths with vulnerability information to evaluate the reachability of highly vulnerable parts of the network to high-value assets
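The service-filtering idea in step 2 can be sketched as follows. The path format (source, destination, protocol, port) and the hand-picked map of lateral-movement services are illustrative assumptions, not output from any specific tool.

```python
# Sketch: narrowing discovered access paths down to services commonly
# abused for lateral movement. Path tuples and the service map are
# illustrative assumptions.
LATERAL_MOVEMENT_SERVICES = {
    ("tcp", 22): "SSH (remote access)",
    ("tcp", 88): "Kerberos (authentication)",
    ("tcp", 445): "SMB (file sharing)",
    ("tcp", 3389): "RDP (remote access)",
}

def filter_lateral_paths(paths):
    """Keep only paths whose (protocol, port) is a known lateral-movement vector."""
    return [
        (src, dst, LATERAL_MOVEMENT_SERVICES[(proto, port)])
        for src, dst, proto, port in paths
        if (proto, port) in LATERAL_MOVEMENT_SERVICES
    ]

paths = [
    ("Marketing", "OT", "tcp", 3389),   # RDP into the OT zone: worth scrutiny
    ("Corporate", "DMZ", "tcp", 443),   # HTTPS to the DMZ: expected traffic
    ("Corporate", "OT", "tcp", 445),    # SMB into the OT zone: worth scrutiny
]
risky = filter_lateral_paths(paths)     # two of the three paths remain
```

Filtering this way keeps the review focused: instead of auditing every permitted path, analysts triage only those carrying remote access, authentication, or file-sharing services toward sensitive zones.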

The final step in the risk mitigation process is to be able to identify root causes and fix them. With the precise and actionable information collected so far, security admins can take concrete steps, including the following:

  1. Break down large connected components into many smaller ones to limit the extents of lateral movement domains
  2. Limit reachability of highly vulnerable nodes to critical assets

For instance, in the network presented previously, an admin may choose to limit direct access from ‘Marketing’ to the rest of the network. To accomplish that, as we show in Figure 3a, she can select the specific path and correlate it with the corresponding configuration entry. She can then quickly limit the connectivity and transform the network into the safer state shown in Figure 3b.
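The path-to-rule correlation step can be sketched as a first-match lookup over simplified rules. The rule format (action, source network, destination network, port) and the addresses below are hypothetical stand-ins for a parsed Cisco-style access control list.

```python
# Sketch: correlating a risky access path with the firewall rule that
# permits it, using first-match semantics as in Cisco-style ACLs.
# Rule tuples and addresses are hypothetical.
from ipaddress import ip_address, ip_network

def first_matching_rule(rules, src_ip, dst_ip, port):
    """Return the first rule deciding this flow, or None if nothing matches."""
    for rule in rules:
        action, src_net, dst_net, dst_port = rule
        if (ip_address(src_ip) in ip_network(src_net)
                and ip_address(dst_ip) in ip_network(dst_net)
                and dst_port in ("any", port)):
            return rule
    return None

rules = [
    # Overly broad rule letting 'Marketing' (10.1.0.0/16) reach 'OT' (10.2.0.0/16)
    ("permit", "10.1.0.0/16", "10.2.0.0/16", "any"),
    # Explicit catch-all deny (Cisco ACLs also end in an implicit deny)
    ("deny", "0.0.0.0/0", "0.0.0.0/0", "any"),
]

# The root cause of the Marketing -> OT path is the first rule;
# tightening or removing it closes the path.
culprit = first_matching_rule(rules, "10.1.4.7", "10.2.0.10", 3389)
```

Tying each unwanted path back to the exact rule that enables it is what turns a risk finding into an actionable configuration change.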

Figure 3a: Correlating network paths (shown by red arrows) with the corresponding entry in firewall configurations (highlighted by the red box).


Figure 3b: Modifying firewall configuration leads to segmenting the network in multiple connected components and improving the overall security posture.


Responding to Lateral Movement

The recent attacks against critical infrastructure have reinforced that lateral movement is an integral part of cyber threats. Therefore, as soon as an initial compromise is detected, quickly determining which other systems are endangered is key to minimizing the damage. Subsequently, one can isolate those assets and restore them to a safe state.

An accurate understanding of current access paths is a strong ally to reduce risk exposure. Security teams can examine outgoing network access paths from suspected compromised nodes and filter them using compromised services to limit the search space. In particular, a stepping-stone analysis is essential to tell how far specific systems are from a network access standpoint. We have discussed such analyses in detail in our previous article on accelerating incident response.
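At its simplest, a stepping-stone analysis of this kind reduces to a breadth-first search over permitted access paths, ranking assets by hop distance from a suspected compromised host. The host names and edges below are hypothetical.

```python
# Sketch: stepping-stone analysis -- how many network hops separate a
# suspected compromised host from other assets. Names and access paths
# are hypothetical; each edge represents a permitted connection.
from collections import deque

def hop_distances(edges, start):
    """BFS over a directed access graph; returns {node: hops from start}."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return dist

access_paths = [
    ("workstation", "file-server"),
    ("file-server", "historian"),
    ("historian", "plc"),
    ("workstation", "print-server"),
]
# Assets closest to the compromised workstation should be triaged first.
distances = hop_distances(access_paths, "workstation")
```

During incident response, the same search can be restricted to the compromised services' ports to further narrow the set of endangered systems.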



In this article, we have discussed strategies for countering malicious lateral movement. Specifically, we have demonstrated that situational awareness of network assets and access paths is crucial for blocking lateral movement. In that context, we have illustrated the use of two graph-based risk metrics: the number of connected components and reachability.

Experts have emphasized the importance for cyber-resilient organizations to think in graphs. However, understanding the complex architecture of multi-layer networks can be extremely challenging. Network Perception’s solutions NP-View and NP-Live have been designed to address this challenge by enabling real-time visibility into network assets and access paths, making it easy to adopt the graph-thinking paradigm in practice.

Where was your Baseline when the Colonial Incident Happened?

By Blog

The Importance of Knowing your Baseline

On May 7, Joseph Blount, CEO of Colonial Pipeline, authorized a ransom payment of $4.4 million to DarkSide, a cyber criminal gang believed to be based in Eastern Europe. Executives at Colonial were forced to make decisions quickly and with a lack of information: they were unsure how badly the cyberattack had breached their systems or how long it would take to bring the pipeline back online. Operators of the Colonial Pipeline learned the company was in trouble when an employee found a ransom note displayed on the screen of a control-room computer. This cyberattack underscores the growing impact of cyberthreats on industrial sectors and the fact that attackers are now specifically targeting critical infrastructure to increase their profits.

It is impossible to determine the target or nature of the next cyber attack, but all critical infrastructure industry executives should be asking themselves the same question right now: where is my baseline? Executives don’t know the who, what, how, where or when of the next attack, but all companies can raise the baseline on their cyber resilience posture. Companies that have invested in creating a higher level of cyber resiliency are working from a different baseline and have put themselves in a better position to respond quickly and effectively to reduce cost and risk. These companies will have the information they need for faster, more efficient decision making. Companies that prioritize and invest in creating cyber resiliency as part of their cybersecurity posture are effectively removing risk from the inevitable next cyber attack.

How to Establish Your Baseline

Establishing the initial cyber resiliency baseline is a core step of the Structured Cyber Resiliency Analysis Methodology (SCRAM) developed by MITRE. The goal is to answer the question: what can we build on? This is accomplished by reviewing current capabilities, the policies and procedures already in place, the cybersecurity solutions deployed, and the gaps to achieving relevant cyber resiliency goals. As illustrated in the SCRAM document, the result of this activity can be recorded in a scorecard.

In the context of the Colonial Pipeline ransomware incident, the crucial parts of the baseline to review are:

  • The ability to visualize asset inventory, network architecture, and network access
  • The ability to verify correct privilege restriction and network segmentation
  • The speed of existing response capabilities

An efficient approach to build the initial baseline is to use the Colonial attack as a scenario to engage with relevant subject matter experts (SMEs) in your company. Once the baseline has been defined, then a gap analysis can be conducted in order to create and implement a cyber resiliency plan.

Baseline and Cyber Resiliency

The World Economic Forum this week published a guidance document on cyber resiliency that presents 10 key principles that executives in the industrial sector should understand and adopt. In particular, principle #7 states:

The board ensures that management supports the officer accountable for cyber resilience through the creation, implementation, testing and ongoing improvement of cyber-resilience plans, which are appropriately harmonized across the business. It requires the officer in charge to monitor performance and to regularly report to the board.

Capturing the initial baseline plays a crucial role to create such plans, since it enables all stakeholders to develop a common understanding on which a path to higher cyber resiliency can be defined. This is important to build alignment among business units and across all levels of the organization.