Version 1.4 dated 22 April 2024
Introduction to Automation and System Integration in Information Security
To our surprise, in decades of work in information security, we have never come across an overview of cybersecurity software and hardware solutions that is comprehensive and consistent, yet also accurate and concise: one that classifies these solutions and describes their interrelationships and evolution without unnecessary detail. We therefore decided to create such an overview, primarily for our own internal purposes.
Having traveled the paths of auditors, managers, developers, consultants, and entrepreneurs in enterprise security, we have become convinced that the basics of security automation and systems integration are essential not only for strategic management or security architecture development, but also for security assessment, risk management, data and application protection, operational security, and many other branches of modern cybersecurity. We intend to continually maintain, revise, and supplement this overview with your help.
This approach is consistent with our philosophy, as we at H-X Technologies are committed to not only providing you with effective services and unique solutions in information security, but also to sharing both our experience and the latest advances in the field.
Challenges of Automating Cybersecurity
Many details in the vast and multifaceted world of information security are key to ensuring the reliable protection of information and the systems that process it. It is a difficult task to take into account all these details and not miss anything. By the way, for the sake of simplicity, here we use the terms “information security” and “cybersecurity” as synonyms, although they are slightly different.
There is no universal solution to all information security problems, as security is not a state, but a process. Above all, it is constant painstaking manual work and adherence to rules. When technologies are properly combined, they greatly facilitate many security tasks, from security assessment to response to security incidents.
A separate problem in modern information security is the abundance of obscure acronyms. It is quite difficult to memorize them and figure out which ones are protocols, algorithms, frameworks, standards, methods, or technologies, and which ones are types and classes of hardware and software solutions or trademarks.
Finally, it is not always clear how modern and relevant a particular security technology is, as well as what place it occupies in the modern security industry, how it relates to other technologies, and what its advantages and disadvantages are. And most importantly, how fundamental and promising each technology is, and whether it is worth investing effort, money, time and other resources in its development and implementation.
Our solutions
Here is a review of software and hardware solutions for information security. It gives you an overview of 131 classes of cybersecurity solutions and technologies, grouped into 15 categories. This categorization covers a wide range of tools and techniques, from data protection and cryptographic solutions, to event correlation systems and risk management.
For simplicity, in this work we use as synonyms the concepts of methods, methodologies and technologies in information security. On the other hand, we also use means, tools, systems, and solutions as synonyms to refer to more specific realizations of these technologies and methods.
While honoring the role of scientific research in technological progress, we also believe in the driving force of the market. In the development of technology, people often “vote with their wallets” for certain technologies. While the marketing activities of solution providers set the tone for shaping the direction of technology development, the final decision on how it will evolve is made by its users, who invest their time, attention and money in one solution or another.
Therefore, given that there are quite a few different methods, tools and technologies in cybersecurity, we focus primarily on popular classes of enterprise software and hardware solutions, down to products and services, and secondarily on the technologies underlying these solutions. Some of these solutions and technologies are multipurpose and are used not only in security, but also in related areas. For example, in configuration or performance management, as well as in other IT and business processes.
This work may be one of your entry points into the world of information security. Hopefully, it will serve you as a textbook, reference, or even a desktop guide, especially if you are actively involved in cybersecurity solutions or technologies.
In either case, our work will help you not only to get an overview of automation in information security, but also to come closer to understanding which services or tools might be suitable for protecting a particular digital environment.
Thus, the main goal of this work is to methodically classify popular hardware and software solutions in the field of information security, to show their interrelationship and development, to describe common abbreviations of their names (acronyms), and to consider how these solutions help to solve actual problems of information security, mainly – corporate security.
A Race of Threats and Defenses
What exactly do information security solutions protect against? For a consistent and in-depth understanding of the development of information security solutions, it makes sense to first consider the retrospective and evolution of information security threats.
With the emergence and development of computers and computer networks in the mid-twentieth century, threats to the confidentiality, integrity and availability of information took on qualitatively new forms and levels with each new decade:
- 1950s: Large-scale information security (IS) incidents were rare, as computer technology was just beginning to evolve. The main focus was on physical security and protecting information from Cold War spying.
- 1960s: As computer technology and network communications advanced, the first cases of unauthorized access to computer systems emerged. Hackers began to explore ways to penetrate systems and networks, although these actions were often motivated more by curiosity than malice. The concept of “information security incident” did not yet have a modern meaning.
- 1970s: The first prototypes of computer viruses and antiviruses appeared: the self-propagating programs Creeper and Reaper. Telephone fraud (phreaking) developed. For example, one of the world’s first hackers, John Draper (known as Cap’n Crunch), used a cereal-box whistle to imitate the tones of the AT&T telephone network and gain free access to long-distance calls.
- 1980s: Computer viruses and hacking developed actively. Cyberattacks began to be used in political and ideological conflicts. In 1982, the U.S. allegedly planted a covert software implant (a logic bomb) in pipeline control software stolen by the Soviet Union, reportedly causing a 3-kiloton explosion and a fire in Siberia. In the late 1980s, German hacker Markus Hess, recruited by the KGB (the Soviet Committee for State Security), hacked into computers at U.S. universities and military bases, highlighting the vulnerability of critical systems. In 1988, Robert Morris released a computer worm onto the ARPANET that paralyzed about 10% of the computers connected to the network; he became one of the first people ever convicted of a computer crime.
- 1990s: Cybercrime and cyber spying grew. The first cases of phishing and DDoS attacks appeared. With the expansion of the Internet and the emergence of new services such as e-mail, social networking, online gaming, etc., there were more opportunities for criminals seeking to extract profit or information from these resources. In 1994, Vladimir Levin (known as Hacker 007) stole about $10 million from Citibank using Internet access to its network. In 1998, two California teenagers carried out a major attack on the Pentagon called Solar Sunrise.
- 2000s: The methods of cybercriminals continue to evolve rapidly. Botnets – networks of compromised computers used by hackers to steal information, send spam, DDoS attacks, disguise hacks and other purposes – have emerged. The 2007 cyberattack on Estonia was a major example of a politically motivated cyberattack. With the development of social media and mobile technologies, a new form of expressing one’s opinion or discontent has emerged – cyber activism and cyber protests. Some groups and individuals have begun to use cyberattacks as a means of gaining attention or demonstrating their position. In 2008, the hacktivist group Anonymous launched Operation Chanology against the Church of Scientology using DDoS attacks, website hacks, and other methods.
- 2010s: First thefts of cryptocurrencies, threats to the Internet of Things (IoT), and supply chain attacks. Crypto miners have emerged, utilizing computer resources to mine cryptocurrencies. Cyber blackmail based on ransomware that encrypts user data and demands a ransom for its recovery has rapidly developed. In 2017, the global WannaCry ransomware attack infected more than 200,000 computers in 150 countries, demanding users pay between $300 and $600 in bitcoins to unlock their files.
- 2020s: Phishing attacks and other methods of social engineering have evolved and begun to utilize artificial intelligence. For example, deepfakes: realistic fake audio and video used for disinformation or fraud. Attacks on remote employees have gained popularity due to the coronavirus pandemic. States’ cyber espionage and cyberwar activities have increased. Supply chain attacks continued to evolve. The attack on network and security solutions provider SolarWinds in 2020 turned out to be a large-scale cyber espionage operation affecting many government agencies and private companies in the United States and other countries.
Today, almost every organization has faced a cyber problem. About 83% of organizations discover data breaches in the course of their operations. This underscores the critical need to develop and implement reliable information protection solutions, including hardware and software. Defensive solutions evolve at nearly the same pace as the threats themselves: the growth rate of damage to the global economy from information security incidents is comparable to the growth rate of the information security solutions market.
While information security threats are becoming more complex, the functions of hardware and software solutions are also actively evolving. They protect against threats not only at the moment an attack occurs, but increasingly before and after it. Security technologies are addressing the earliest prerequisites and conditions for vulnerabilities, leveraging advanced technologies such as cloud, blockchain and neural networks, improving system visibility and analytics, and offering increasingly sophisticated risk mitigation and compensation.
Thus, understanding the basics and trends of information security automation becomes a necessity not only for IT and cybersecurity professionals, but also for organizational leaders and even for a wide range of users seeking to secure their data in the digital age, including security assessment of their data, computers, smartphones, applications, as well as the security quality of IT service providers and products.
Groups of hardware and software solutions
The way we have adopted to structure sections and classes of solutions is not a dogma. For example, some solutions may be assigned to several groups at once, while others may require separate sections. Our method of structuring is based on considerations of ease of learning. We adhere to the principles of Occam’s Razor and “simple to complex” and the assumption that the simpler and earlier the technologies and tools are, the more familiar they are to the reader, and the easier it is for the reader to build up a picture in their mind.
The structure of the review:
- Cryptographic solutions: EPM, ES, FDE, HSM, KMS, Password Management, PKI, SSE, TPM, TRSM, ZKP.
- Data Security: DAG, Data Classification, Data Masking, DB Security, DCAP, DLP, DRM, IRM, Tokenization.
- Threat Detection: CCTV, DPI, FIM, HIDS, IDS, IIDS, NBA, NBAD, NIDS, NTA.
- Threat Prevention: ATP, AV, HIPS, IPS, NG-AM, NGAV, NGIPS, NIPS.
- Corrective and compensatory solutions: Backup and Restore, DRP, Forensic Tools, FT, HA, IRP, Patch Management Systems.
- Deception Solutions: Deception Technology, Entrapment and Decoy Objects, Obfuscation, Steganography, Tarpits.
- Identity and Access Management: IDaaS, IDM, IGA, MFA, PAM, PIM, PIV, SSO, WAM.
- Network access security: DA, NAC, SASE, SDP, VPN, ZTA, ZTE, ZTNA.
- Network security: ALF, CDN, DNS Firewall, FW, NGFW, SIF, SWG, UTM, WAAP, WAF.
- Endpoint Management: EDR, EMM, EPP, MAM, MCM, MDM, MTD, UEM, UES.
- Application Security and DevSecOps: CI/CD, Compliance as Code, Container Security Solutions, IaC, Policy as Code, RASP, Sandboxing, SCA, Secrets Management Tools, Threat Modeling Tools.
- Vulnerability Management: CAASM, DAST, EASM, IAST, MAST, OSINT Tools, PT, SAST, VA, VM, Vulnerability Scanners.
- Cloud Security: CASB, CDR, CIEM, CNAPP, CSPM, CWPP.
- Security information and event management: DRPS, LM, MDR, NDR, NSM, SEM, SIEM, SIM, SOAR, TIP, UEBA, XDR.
- Risk Management: CCM, CMDB, CSAM, GRC, ISRM, Phishing Simulation, SAT.
In some cases, acronyms may not be universally recognizable without appropriate context or definitions, especially for non-specialist readers.
For some solutions, commonly accepted and uniquely identifiable acronyms are not well-established, so full names are used.
Cryptographic Solutions
Cryptography was one of the first disciplines within information security. Since antiquity, militaries, diplomats and spies have used various devices and tools to encrypt, decrypt and transmit sensitive information. During the Middle Ages, cryptography continued to evolve and became more complex and diverse, including various forms of transposition and substitution ciphers.
Since the early 20th century, cryptographic technologies have evolved with a relentless stream of mechanical, electronic, and mathematical innovations, reaching such a level of development that they are now ingrained in our daily lives everywhere, becoming an invisible but integral part of almost any work with digital information.
Modern cryptographic solutions play a key role in protecting data and the systems that process it. Cryptographic technologies are present in many classes of information security systems. Nevertheless, we decided to distinguish several classes of general and specialized cryptographic solutions.
- ES (Encryption Solutions) is a general class of solutions that protect data by converting it into an encrypted form that can only be accessed with the correct key. Encryption solutions cover a wide range of tools and technologies designed to encrypt data, whether at rest or in transit. This can include custom encryption applications, encryption protocols for secure communications (e.g. SSL/TLS for Internet traffic), and encryption services provided by cloud providers. The goal of these solutions is to ensure data confidentiality and integrity by converting readable data into an unreadable format that can only be reversed with the correct decryption keys.
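As a minimal illustration of the encrypt-then-decrypt round trip that any encryption solution provides, here is a deliberately simplified Python stream cipher built from SHA-256 in counter mode. This is a teaching sketch only, not a vetted algorithm; production systems use standards such as AES-GCM:

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from SHA-256 in counter mode."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR the data with the keystream; the same call also decrypts."""
    ks = keystream(key, nonce, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

key = secrets.token_bytes(32)      # random symmetric key
nonce = secrets.token_bytes(16)    # must be unique per message
ciphertext = xor_encrypt(key, nonce, b"confidential report")
assert xor_encrypt(key, nonce, ciphertext) == b"confidential report"
```

The key point the sketch makes is symmetry: confidentiality rests entirely on the secrecy of the key, not of the algorithm.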
- FDE (Full Disk Encryption) is a class of software or hardware solutions designed to encrypt all data on the hard disk of a computer or other device. This approach protects information at the whole disk level, including system files, programs and data. The main purpose of FDE is to ensure privacy and protect the data on the device from unauthorized access, especially in the event of loss or theft. The two varieties of FDE are software and hardware solutions. FDE systems are associated with TPM technology. One of the first FDE systems, Jetico BestCrypt, was developed in 1993. Common FDE solutions built into operating systems are Microsoft BitLocker and Apple FileVault.
- Password Management Tools (Password Managers) are designed to simplify and improve the processes of creating and storing passwords. The functions of password managers include generating strong passwords, storing and organizing passwords, automatically filling out forms, and changing passwords. Varieties of password management tools include personal password managers; team password managers that provide password sharing functionality for workgroups; privileged password managers; and enterprise password managers (EPMs). The first software password manager, Password Safe, was created by Bruce Schneier in 1997 as a free utility for Microsoft Windows 95. As of 2023, the most commonly used password manager was Google Chrome’s built-in password manager.
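The password-generation function of such tools can be sketched in a few lines with Python's standard `secrets` module; the length and character-class policy below are illustrative assumptions, not any particular product's rules:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password that mixes character classes."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pwd = "".join(secrets.choice(alphabet) for _ in range(length))
        # Resample until the policy (lower + upper + digit) is satisfied
        if (any(c.islower() for c in pwd) and any(c.isupper() for c in pwd)
                and any(c.isdigit() for c in pwd)):
            return pwd

print(generate_password())
```

Using `secrets` rather than `random` matters here: the former draws from a cryptographically secure source, which is the baseline requirement for any password manager.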
- EPM (Enterprise Password Management) is an evolution of password managers in application to the entire organization. These solutions are designed for centralized password management, providing management, monitoring and protection of privileged accounts for both user and service accounts in organizations. EPM features include centralized password management, automatic password updates, password activity tracking, regular auditing to ensure compliance with security policies, and role-based access management. The two varieties of EPM are on-premises and cloud-based. EPM solutions are related to PAM solutions. Password management solutions in the corporate environment started to evolve in the early 2000s.
- TRSM (Tamper-Resistant Security Module) is a generic name for devices designed to be particularly resistant to physical tampering and unauthorized access. These modules often include additional physical security measures, such as self-destruct mechanisms, to prevent physical attacks or unauthorized access to the encryption keys and cryptographic operations they perform. TRSMs are needed in environments where security is a primary concern, such as military or financial institutions. The simplest examples of TRSMs are payment smart cards, which emerged in the 1970s. POS terminals and HSM devices are also examples of TRSMs.
- HSM (Hardware Security Module) is a physical computing device that protects and manages secrets. HSMs emerged in the late 1970s. Hardware security modules typically provide secure management of the most important cryptographic keys and operations. HSMs are used to generate, store and manage encryption keys in a secure form, offering a higher level of security than software key management because the keys are stored in a tamper-proof hardware device. HSMs are widely used in high-security environments such as financial institutions, government agencies and large enterprises where protecting sensitive data is critical.
- KMS (Key Management Systems) are solutions designed to centrally manage the cryptographic keys used to encrypt data. Their main task is to ensure the security, availability and lifecycle management of keys. KMSs automate the creation, distribution, storage, rotation and destruction of keys. They integrate with various applications and infrastructure, providing centralized control over encryption in enterprises. The idea of centralized cryptographic key management began to develop in the 1970s along with the growing use of cryptography, but specific KMS systems began to be actively developed and deployed in the 1990s and 2000s.
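The key lifecycle that a KMS automates (create, use, rotate, retire, destroy) can be sketched with a hypothetical in-memory `KeyManager` class. This is a conceptual sketch only: a real KMS persists keys in protected storage, enforces access control, and never exposes key material directly:

```python
import secrets
from dataclasses import dataclass

@dataclass
class ManagedKey:
    material: bytes
    state: str = "active"   # lifecycle: active -> retired -> destroyed

class KeyManager:
    """A minimal in-memory sketch of centralized key lifecycle management."""
    def __init__(self):
        self._keys = {}       # key_id -> ManagedKey
        self._current = None  # id of the currently active key

    def create_key(self):
        key_id = secrets.token_hex(8)
        self._keys[key_id] = ManagedKey(secrets.token_bytes(32))
        self._current = key_id
        return key_id

    def rotate(self):
        """Retire the current key and issue a fresh one in its place."""
        if self._current is not None:
            self._keys[self._current].state = "retired"
        return self.create_key()

    def destroy(self, key_id):
        """Wipe the key material and mark the key destroyed."""
        self._keys[key_id].material = b""
        self._keys[key_id].state = "destroyed"

kms = KeyManager()
old_id = kms.create_key()
new_id = kms.rotate()
assert kms._keys[old_id].state == "retired"
```

Retired keys are kept (not destroyed) so that data encrypted under them can still be decrypted until it is re-encrypted with the new key.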
- PKI (Public Key Infrastructure) is a system used to create, manage, distribute, use and store digital certificates and public cryptographic keys. It provides secure digital signing of documents, data encryption, and authentication of users or devices in electronic systems. PKI is a key element in securing network communications and transactions, allowing participants to exchange data confidentially and authenticate each other. PKI provides a set of tools for managing asymmetric keys and certificates, both within individual organizations and entire nations. This distinguishes PKI solutions from KMS solutions, which focus on more flexible key management, but usually only within a single enterprise. The history of PKI began in the 1970s with the development of asymmetric encryption. The concept of PKI in the modern sense was developed and standardized in the 1990s. With the advent of blockchain Ethereum in 2015, decentralized PKIs began to develop.
- SSE (Server-Side Encryption) is a method of encrypting data stored on the server side, used since the early 2000s to improve data security. Server-side encryption is the encryption of data on server drives. The encryption keys are managed by the server itself or by a central key management system. This ensures that only authorized individuals have access to the data. SSE is particularly effective for protecting sensitive data in cloud storage because it prevents unauthorized access even if physical storage devices are compromised.
- TPM (Trusted Platform Module) is a hardware component that securely stores the cryptographic keys used to encrypt and protect information on a computer or other device. The TPM is installed on the device’s motherboard or embedded in the processor. The TPM concept was first introduced by the Trusted Computing Group (TCG) consortium and standardized by ISO in 2009. Since then, TPM has become a standard component in many computers, especially in the corporate sector, where security requirements are particularly high.
- ZKP (Zero-Knowledge Proof) is a cryptographic method that allows one party to convince another that it knows a secret, without revealing anything about the secret beyond the fact that it knows it. Silvio Micali, Shafi Goldwasser, Oded Goldreich, Avi Wigderson, and Charles Rackoff contributed to the development of the ZKP technique in the 1980s. Since 2020, ZKP methods have been used to build post-quantum security systems, i.e., systems that are resistant to cryptanalysis on quantum computers. ZKP technology is finding more and more applications, from transaction security and authentication to privacy in blockchain systems and other applications where personal data protection and privacy are important.
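For intuition, one round of the classic Schnorr identification protocol, a simple interactive ZKP, can be run with toy parameters in Python. The tiny group below is for illustration only; real deployments use groups with roughly 256-bit orders:

```python
import secrets

# Toy parameters: g generates a subgroup of prime order q in Z_p*
# (2^11 mod 23 == 1, so g = 2 has order q = 11).
p, q, g = 23, 11, 2

x = secrets.randbelow(q - 1) + 1   # prover's secret
y = pow(g, x, p)                   # prover's public key

# One round of the interactive protocol:
r = secrets.randbelow(q - 1) + 1   # prover's random exponent
t = pow(g, r, p)                   # commitment sent to the verifier
c = secrets.randbelow(q)           # verifier's random challenge
s = (r + c * x) % q                # prover's response

# Verifier checks g^s == t * y^c (mod p) without ever learning x
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

The check passes because g^s = g^(r + c·x) = g^r · (g^x)^c = t · y^c, yet the response s reveals nothing about x without knowledge of the random r.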
Data Security
Not all information can be encrypted, and not all threats can be detected and eliminated. Even encrypted information can still be destroyed, blocked or corrupted, or it can leak due to user error or computer vulnerabilities.
Solutions in this group can also be attributed to some of the other groups described in this work, such as threat prevention. However, here we focus on specialized technologies in the field of structured and unstructured data security. Therefore, these solutions are singled out as a separate category. They have evolved continuously since the first databases were introduced in the 1960s.
The application of these technologies not only protects critical information, but also improves operational efficiency, increases customer confidence, and meets the growing demands of information security.
- DB Security (DataBase Security) includes security solutions to protect structured data stored in databases from unauthorized access, modification, leakage, or destruction. Some of these features, such as access control and identification, and backup and recovery, date back to the 1960s and have been actively developed since the 1980s. Other features, such as encrypting data in the database, auditing and monitoring the database, masking data in the database, and protecting against SQL injection and other types of attacks, have become popular since the 2000s.
- DAG (Data Access Governance) is a technology for managing threats of unauthorized access to sensitive and valuable unstructured data that began to develop in the 2000s. DAG mitigates both malicious and non-malicious threats by ensuring that sensitive files are stored securely, access rights are properly enforced, and only authorized users can access data. DAG also actively protects repositories that store sensitive and valuable data.
- Data Masking – These technologies replace sensitive data with fictitious, unrealistic information elements. This process preserves the format and appearance of the original data, but renders it useless to anyone trying to misuse it. Data masking is widely used to protect personal and sensitive information during data development, testing, and analysis. It helps businesses comply with privacy standards and regulations such as GDPR. Data masking came into prominence in the 2000s.
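A minimal sketch of masking, assuming 16-digit card numbers as the sensitive element, might look like this in Python; the format of the original (last four digits, spacing) is preserved while the rest becomes useless to an attacker:

```python
import re

def mask_card_number(text: str) -> str:
    """Replace all but the last four digits of 16-digit card numbers."""
    return re.sub(
        r"\b(?:\d{4}[ -]?){3}(\d{4})\b",
        lambda m: "**** **** **** " + m.group(1),
        text,
    )

print(mask_card_number("Card: 4111 1111 1111 1234"))
# -> Card: **** **** **** 1234
```

Note that masking is one-way: unlike tokenization (below), nothing in the masked output allows the original value to be recovered.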
- Tokenization is a technology that replaces sensitive data with non-confidential equivalents, which are usually of a different type and format. This is how tokenization differs from data masking. For example, in the tokenization process, a sensitive data element such as a credit card number is completely replaced with a non-confidential equivalent known as a token, which has no external or exploitable meaning or value. The token is correlated with the sensitive data through a tokenization system, but does not reveal the original data. The solution started to be applied in the early 2000s and is widely used in payment processing.
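The core idea can be sketched as a hypothetical in-memory vault: the token is random, so nothing about the original value can be recovered from it without access to the vault itself:

```python
import secrets

class TokenVault:
    """A minimal sketch of a tokenization vault: tokens map to originals."""
    def __init__(self):
        self._vault = {}

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)   # bears no relation to the value
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        """Only systems with vault access can recover the original."""
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111234")
assert token != "4111111111111234"
assert vault.detokenize(token) == "4111111111111234"
```

In a real deployment the vault is a hardened, access-controlled service; downstream systems store and process only the tokens.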
- DRM (Digital Rights Management) is a technology designed to control the use of digital media and content. The first example of DRM was a system developed by Ryuichi Moriya in 1983. Active development of DRM began in the late 1990s. One of the first companies to actively implement DRM was Sony. DRM technology is used by copyright holders to control the terms of use, copying and distribution of content. This is accomplished through the use of various technical means, including encryption, watermarks, license keys, and restrictions on copying and printing functions. Although copyright compliance is considered a related (non-core) information security requirement, we have considered DRM in the context of the next class of solutions, IRM (Information Rights Management).
- IRM (Information Rights Management) or E-DRM (Enterprise Digital Rights Management, Enterprise DRM) is a technology designed to protect sensitive information in documents and electronic files. IRM solutions emerged in the late 1990s and early 2000s as an evolution of the DRM solutions described above. Unlike DRM, which is focused on individuals, IRM solutions are mainly used in organizations. One of the early pioneers in this area was Adobe, which incorporated IRM technologies into its PDF rights management products. These solutions allow you to restrict access to information and control its use even after it has left the original organization. IRM includes encrypting data and enforcing access policies that define who can view, edit, print or transmit information and how. IRM is a key element in many organizations’ data protection strategies.
- Data Classification solutions help discover, identify, categorize and label data according to its privacy, relevance and other criteria. Data classification solutions started to be widely used in the 2000s. The main purpose of these solutions is to facilitate data management, ensure compliance with regulatory requirements, and enable integration with DLP (Data Loss Prevention), encryption, and other solutions.
- DLP (Data Loss Prevention, less commonly Data Leakage Prevention) – data leakage prevention solutions began to be offered in the mid-2000s by companies such as Symantec, McAfee and Digital Guardian. These solutions provide identification, monitoring and prevention of sensitive data breaches. This includes protecting data-in-use, data-in-motion, and data-at-rest through content and contextual analysis, as well as data classification. DLP solutions control and regulate the flow of data across an organization’s network so that sensitive information does not leave predefined perimeters without permission. DLP solutions are important to protect intellectual property, comply with privacy regulations, and prevent accidental or malicious data breaches.
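The content-analysis part of DLP can be illustrated with a toy inspector that flags outbound text containing what looks like a valid payment card number, using the Luhn checksum to reduce false positives. Real DLP products combine many such detectors with contextual policies and data classification:

```python
import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum used to distinguish real card numbers from noise."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def contains_card_number(text: str) -> bool:
    """Flag outbound content that appears to carry a payment card number."""
    for match in re.finditer(r"\b\d{16}\b", text):
        if luhn_valid(match.group()):
            return True
    return False

assert contains_card_number("order ref 4111111111111111")    # valid test PAN
assert not contains_card_number("invoice 1234567890123456")  # fails Luhn
```

A DLP agent would run checks like this on e-mail, uploads and clipboard traffic, and block or quarantine content that matches policy.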
- DCAP (Data-Centric Audit and Protection) is a set of solutions for auditing and protecting data in data processing and storage centers. These systems monitor, analyze and protect data at all stages of its lifecycle, from creation to destruction. They enable organizations to track and analyze data access and usage, detect unusual or suspicious behavior patterns, and prevent data breaches. DCAP includes tools for data classification, access monitoring, auditing, threat protection and compliance. DCAP solutions began to evolve as a separate line of business in the early 2010s in response to increasing data security threats and compliance requirements. DCAP solutions are the result of the evolution and synthesis of technologies in information systems security, data analytics and cybersecurity.
Threat Detection
According to one generally accepted simple classification of information security methods and tools, all these methods and tools are divided into detective, preventive and compensatory. This division is rather conventional, but it corresponds to the life cycle of information security problems – from threats and exploitation of vulnerabilities to incidents and damage from them. Therefore, such a categorization is useful for training.
Obviously, the earlier an information security threat is detected, the more effective the defense against it. Therefore, information security technologies have naturally evolved towards a proactive approach to detecting security threats, increasing the depth of analysis and adapting to search for threats in specific environments or conditions.
Some of these solutions can be categorized into other groups, such as network security. However, in order to improve the consistency of the presentation and simplify the mastering of the material, while keeping a balance between the application of narrow and broad groups of solutions, we decided to describe these solutions in the group of detection technologies.
- CCTV (Closed-Circuit Television) is a video surveillance system. One of the first CCTV systems was created by Léon Theremin (Lev Termen) in 1927, and the technology is still actively used in various security applications. CCTV is used to monitor security in real time and to investigate past events. Modern CCTV uses technology to record, process, store, analyze and play back video information, often with audio. Face recognition, license plate recognition and other identification technologies are also used.
- FIM (File Integrity Monitoring) is a technology which focuses on monitoring and detecting changes to operating system and application software files. This includes checking file integrity, system configuration, registry, and other critical elements. The primary purpose of FIM is to detect unauthorized changes that may be signs of hacking, malware, or an internal security breach. FIM works by comparing the current state of files and configurations against a known, trusted “baseline”. Any deviation from this baseline can be a sign of a security problem. The FIM concept has evolved since the 1980s. One of the first FIM systems, Tripwire, created in 1992 by Gene Kim and Eugene Spafford, is still in use today.
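The baseline-comparison mechanism at the heart of FIM can be sketched with Python's standard `hashlib`: hash the trusted state once, then report any file whose current hash deviates from it:

```python
import hashlib
import os
import tempfile

def file_hash(path: str) -> str:
    """Hash a file's contents; any modification changes the digest."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def check_integrity(baseline: dict, paths: list) -> list:
    """Return the paths whose current hash differs from the trusted baseline."""
    return [p for p in paths if file_hash(p) != baseline.get(p)]

# Demonstration with a temporary file standing in for a monitored system file
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "config.ini")
    with open(path, "w") as f:
        f.write("port=443\n")
    baseline = {path: file_hash(path)}        # trusted snapshot
    assert check_integrity(baseline, [path]) == []
    with open(path, "a") as f:                # simulate tampering
        f.write("port=8080\n")
    assert check_integrity(baseline, [path]) == [path]
```

Production FIM tools add signed baselines, scheduled or real-time scanning, and alerting, but the detection principle is exactly this comparison.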
- IDS (Intrusion Detection System) is a system that tracks system events and network traffic for suspicious activity or attacks. IDS compares the inspected data against known signatures or attack patterns, as well as normal behavioral profiles, to detect outliers. When present, IDS generates alerts or reports on detected incidents and provides contextual information to understand and respond to them. The earliest preliminary concept of IDS was formulated in 1980 by James Anderson. In 1986, Dorothy Denning and Peter Neumann published the IDS model that formed the basis of many modern systems.
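Signature-based detection, one of the two approaches just described, can be illustrated with a toy inspector that matches a few hypothetical attack signatures (invented here for illustration) against event text:

```python
import re

# Hypothetical signatures: regex pattern -> alert name
SIGNATURES = {
    r"(?i)union\s+select": "SQL injection attempt",
    r"\.\./\.\./":         "Path traversal attempt",
    r"(?i)/etc/passwd":    "Sensitive file probe",
}

def inspect(event: str) -> list:
    """Return an alert for every signature the event matches."""
    return [name for pattern, name in SIGNATURES.items()
            if re.search(pattern, event)]

alerts = inspect("GET /index.php?id=1 UNION SELECT password FROM users")
assert alerts == ["SQL injection attempt"]
assert inspect("GET /images/logo.png") == []
```

Anomaly-based detection, the complementary approach, instead builds a statistical profile of normal behavior and alerts on deviations from it.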
- HIDS (Host-based Intrusion Detection System, Host-based IDS) is a subclass of IDS that monitors and analyzes incoming and outgoing traffic from a host computer or other device, as well as files and system logs, to detect suspicious or malicious activity. HIDS are designed to detect intrusions and suspicious activity on a particular computer. They can detect malware, rootkits, phishing, and some other forms of attacks. HIDS typically use signatures of known threats, anomalous behavior, or both to detect suspicious activity. HIDS solutions have evolved since the 1980s in close association with FIM systems.
- NIDS (Network Intrusion Detection System, Network IDS) is a subclass of IDS that monitors and analyzes network traffic to detect malicious activity, unauthorized access, or violations of security policies. These systems are typically embedded in active network devices (switches, routers, etc.) and monitor and analyze both external and internal network traffic for abnormal or suspicious behavior such as DoS attacks, port scans, intrusion attempts, etc. Besides analyzing network traffic and generating alerts or reports on detected incidents, NIDS also integrates with other security systems, such as firewalls, intrusion prevention systems (IPS), SIEM systems, etc. NIDS have been evolving since the late 1980s.
- IIDS (Industrial Intrusion Detection System, Industrial IDS) are intrusion detection systems specifically designed for industrial networks such as SCADA, ICS, or IIoT; they have been developed since the late 1990s. IIDS monitor abnormal or malicious behavior in industrial networks, which can be caused by cyberattacks, equipment errors, or security policy violations. IIDS also collect information on the status of devices and process parameters in industrial networks and provide information on the nature, source, and consequences of detected incidents, as well as recommendations for remediation.
- DPI (Deep Packet Inspection) is a technology for inspecting network packets based on their content in order to regulate and filter traffic, as well as to accumulate statistical data. The concept of DPI began to develop in the late 1990s. The technology allows service providers, organizations, and government agencies to apply various policies and measures to ensure the security, quality, and efficiency of network communications. This approach analyzes not only the headers but also the payload of network packets, at OSI model layers two (link) through seven (application). Protocols, applications, users, and other entities involved in network communication are identified and classified. As the technology developed further, it came to be used not only for threat detection and for collecting and analyzing network communication statistics for monitoring, reporting, and optimization, but also for blocking malicious code, attacks, data leaks, and other security breaches, as well as for implementing QoS (Quality of Service) and other traffic management mechanisms such as rate limiting, prioritization, and caching.
- NTAs (Network Traffic Analyzers, also known as network analyzers, packet analyzers, packet sniffers, or protocol analyzers) are software or hardware tools designed to monitor and analyze network traffic in real time on wired and wireless networks. NTAs provide detailed information about traffic characteristics, including sources, destinations, volume, and the types of data being transmitted. Unlike NIDS, NTAs provide a broader view of network activity and are used not only for security, but also to solve a variety of network and application problems. Traffic analyzers integrate with IDS/IPS, SIEM, SOAR, and network configuration and device management tools. The history of traffic analysis begins with the interception and decryption of enemy communications during World Wars I and II. Tcpdump, one of the first computer traffic analyzers, which works from the command line and is still used today, appeared in 1988. The Ethereal analyzer, which appeared in 1998, advanced NTA technology further. In 2006, Ethereal was renamed Wireshark. This tool is also still popular today. In 2020, NTA solutions evolved into NDR (Network Detection and Response).
- NBA (Network Behavior Analysis) is a class of solutions designed to monitor and analyze network behavior to detect anomalies and, as a by-product of this analysis, security threats. NBA technologies include analyzing event logs and network traffic, and are used in monitoring network performance and security. NBA solutions are sometimes integrated with IDS, IPS, SIEM, EDR, and other systems. NBA technologies have evolved since the early 2000s. Over time, NBA solutions have come to include machine learning and artificial intelligence.
- NBAD (Network Behavior Anomaly Detection) is a technology that, according to one group of sources, is a component of NBA solutions specializing exclusively in security and responsible for the early detection and prevention of incidents such as viruses, worms, DDoS attacks, or abuse of privileges. According to another group of sources, NBA and NBAD are one and the same. We have reason to believe that some NBA vendors have focused on security functionality in their systems, while some NBAD vendors, on the contrary, have gone beyond security functionality, blurring the line between these two classes of solutions.
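To illustrate the baseline comparison at the heart of FIM described above, here is a minimal sketch in Python. The function names and the choice of SHA-256 are our illustrative assumptions, not a reference to any particular product.

```python
import hashlib
import os

def build_baseline(paths):
    """Record a SHA-256 hash for each monitored file (the trusted baseline)."""
    return {p: hashlib.sha256(open(p, "rb").read()).hexdigest() for p in paths}

def check_integrity(baseline):
    """Compare the current state of files against the baseline; report deviations."""
    findings = []
    for path, known_hash in baseline.items():
        if not os.path.exists(path):
            findings.append((path, "deleted"))
        elif hashlib.sha256(open(path, "rb").read()).hexdigest() != known_hash:
            findings.append((path, "modified"))
    return findings
```

A real FIM system would also monitor permissions, registry keys, and configurations, and would protect the baseline itself from tampering.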
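The key idea of DPI – classifying traffic by payload content rather than by port numbers in packet headers – can be sketched as follows; the signatures are deliberately simplified illustrations.

```python
def classify_payload(payload: bytes) -> str:
    """Identify the application-layer protocol by inspecting payload bytes,
    not just packet headers -- the essence of deep packet inspection."""
    if payload.split(b" ", 1)[0] in (b"GET", b"POST", b"PUT", b"HEAD", b"DELETE"):
        return "HTTP"
    if payload[:1] == b"\x16" and payload[1:2] == b"\x03":  # TLS record: handshake, version 3.x
        return "TLS handshake"
    if payload[:4] == b"SSH-":  # SSH protocol version exchange
        return "SSH"
    return "unknown"
```

Production DPI engines reassemble sessions and match thousands of signatures at line rate, but the principle is the same: the verdict comes from the payload.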
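Behavioral analysis of the NBA/NBAD kind ultimately reduces to comparing current activity against a learned profile. A deliberately simplified statistical sketch (real products use far richer models, including machine learning):

```python
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    """Flag a measurement (e.g., connections per minute) that deviates from
    the learned behavioral profile by more than `threshold` standard deviations."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold
```

With a history of roughly 100 connections per minute, a sudden burst of 500 would be flagged, while 102 would not.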
Threat Prevention
After reviewing the various tools and techniques for detecting information security threats, let’s move on to the next stage in the lifecycle of security problems and solutions – threat prevention. If threat detection is the first step to security, then threat prevention is the most active phase of threat management.
In this section, we will focus on hardware and software solutions that not only detect threats, but also actively prevent them from materializing, providing a higher level of protection for systems and data. We will look at how modern technologies and innovative approaches can act as a barrier against a variety of threats, from computer viruses and hacker attacks to attacks on websites and individual critical server components.
As with the threat detection solutions described above, some of the classes of solutions described below may be included in other groups of solutions, such as network security or endpoint protection. However, given the functional nature of our categorization, we decided to describe them here to improve understanding of the material.
- AV (Antivirus) is a specialized program for detecting and removing malicious software such as viruses, Trojans, worms, spyware, adware, rootkits and others. Antivirus prototypes appeared in the 1970s, and the first popular antiviruses appeared in the 1980s. An antivirus protects your computer from malicious code infections and repairs corrupted or modified files. Antivirus can also prevent malicious code from infecting files or the operating system by using proactive protection, which analyzes program behavior and blocks suspicious activities. Most of today’s commercial anti-malware products have expanded their capabilities and moved into the EPP (Endpoint Protection Platforms) solution category.
- NGAV (Next-Generation Antivirus) or NG-AM (Next-Generation Anti-Malware) is a modern approach to anti-malware development that goes beyond traditional antivirus by offering a broader and deeper level of protection. NGAV uses advanced techniques such as machine learning, behavioral analysis, artificial intelligence and cloud technologies to detect and prevent not only known viruses and malware, but also new, previously unknown threats. These systems are capable of analyzing large amounts of data in real time, providing protection against sophisticated and targeted attacks such as zero-day exploits, ransomware and advanced persistent threats (APTs). The term “Next-Generation Antivirus” began to be used in the mid-2010s. CrowdStrike and Cylance were among the first companies to start promoting the idea of NGAV.
- IPS (Intrusion Prevention System) is a system designed to detect and prevent unauthorized access attempts and other attacks on computer networks and individual computers. The first IPS systems appeared in the early 1990s as a logical evolution of intrusion detection systems (IDS). Unlike IDSs, IPSs not only detect suspicious activity but also take steps to neutralize it, making them more effective in preventing attacks than IDSs. On the other hand, typically, any IPS system can operate in IDS mode. Therefore, the compound abbreviation IDS/IPS is used quite often, less often IDPS or IDP System. Cisco Systems and Juniper Networks were among the first to offer IPS solutions commercially.
- HIPS (Host-based Intrusion Prevention System) is a subclass of IPS systems and an evolution of HIDS systems to protect against threats at the level of the individual host – a computer or other device. HIPS analyzes and monitors incoming and outgoing traffic, application activity, and system changes on the host, detecting and blocking suspicious activity that may indicate intrusion or malicious behavior. HIPS systems can use a variety of techniques, including signature analysis, heuristic analysis, and monitoring changes to the system registry, file systems, and important system files. HIPS systems have been actively evolving since the 2000s and have been integrated into more sophisticated classes of information security solutions in recent decades.
- NIPS (Network Intrusion Prevention System) is a subclass of IPS systems and an evolution of NIDS systems for preventing unauthorized access or other breaches in computer networks. NIPS can be implemented as either physical devices or software, and are often integrated with other security systems, such as firewalls. NIPS continuously analyzes network traffic in real time and can automatically block suspicious or malicious traffic based on predefined security rules and threat signatures. NIPS works by deeply analyzing packets and sessions, applying techniques such as signature analysis, anomaly analysis and behavioral analysis to identify and block attacks such as DoS (Denial of Service), DDoS, exploits, viruses and worms. One of the first NIDS/NIPS, created in the 1990s and still in use today, is the Snort application. The evolution of NIDS into NIPS intensified in the 2000s with Cisco, Juniper Networks, and McAfee. In recent decades, NIPS has been integrated into more sophisticated classes of information security solutions.
- NGIPS (Next-Generation Intrusion Prevention System) is a subclass of IPS systems that provides deeper and more intelligent analysis of network traffic, including encrypted traffic, as well as per-user enforcement of application policies. NGIPS integrates with advanced technologies such as machine learning, behavioral analytics, and deep data analytics to detect and prevent not only known threats, but also previously unknown or specific attacks such as zero-day exploits and advanced persistent threats (APTs). These systems can adapt and respond to the changing threat landscape, providing protection in dynamic and complex network environments. NGIPS are often integrated with SIEM systems. The NGIPS concept began to evolve in the late 2000s and early 2010s. FireEye, Palo Alto Networks, and Cisco contributed to the development and popularization of NGIPS.
- ATP (Advanced Threat Protection) is a comprehensive class of cybersecurity solutions designed to protect organizations from sophisticated threats such as targeted attacks, zero-day exploits, and the Advanced Persistent Threat (APT – not to be confused with ATP). ATP provides protection at multiple layers, including network, endpoints, applications and cloud services, using various techniques such as machine learning, behavioral analysis, sandboxing and advanced traffic analysis. ATPs integrate with various security infrastructure components to provide centralized management and coordination of security measures. ATP solutions were proposed in the 2000s and began to be actively promoted in the mid-2010s by companies such as Symantec.
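The difference between detection (IDS) and prevention (IPS) described in this and the previous section is essentially the difference between reporting and acting. A minimal inline-filter sketch, with illustrative signatures that are not drawn from any real product:

```python
import re

# Illustrative signature set: pattern -> verdict name (invented for this sketch)
SIGNATURES = {
    r"(?i)union\s+select": "SQL injection",
    r"\.\./\.\./": "path traversal",
}

def inline_filter(request: str):
    """An IDS in this position would only report a match;
    an IPS additionally blocks the matching traffic."""
    for pattern, name in SIGNATURES.items():
        if re.search(pattern, request):
            return ("blocked", name)   # prevention: drop or reset the connection
    return ("allowed", None)           # benign traffic passes through
```

The inline placement is what makes blocking possible; it is also why a misfiring IPS rule, unlike an IDS rule, can disrupt legitimate traffic.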
Corrective and Compensatory Solutions
Continuing the sequence begun in the previous two sections with detective and preventive solutions, it is logical to describe the third type of security measures in this series – corrective and compensating solutions. They differ from detective and preventive solutions in their mode of operation: they aim to correct security violations that have already occurred, or to reduce the damage caused by them, including supporting the prosecution of security violators.
Technically, to complete the picture, cyber-attack tools could also be included in this group, but such activities are on the border of legality, so we will not describe these tools.
In their purest form, corrective measures are various means of restoring failed data, software, hardware, or services from backups, while compensatory measures are the services of insurance companies that compensate for damage after security incidents when recovery by other means is impossible. However, under certain assumptions, some other methods and tools can also be included in this group.
- Backup and Restore systems are a critical class of hardware and software solutions used to ensure data integrity and availability by backing up important information so that it can be restored in the event of loss or corruption. The key functions of these systems are regular automatic backups, data encryption, data recovery and version control. There are local, cloud and hybrid backup systems. Backup methods and tools appeared in the pre-computer era, organically developed with the development of computing technology and are now integrated with EPP, DB Security and many other solutions and technologies.
- HA (High-Availability) solutions are the evolution and automation of redundancy technologies for real-time operations. HA systems minimize downtime and ensure that business-critical applications remain at least partially available to users even in the event of failures or disasters. The main functions of HA solutions are data replication, automatic switching, load balancing, monitoring and management. The main technical approaches to HA implementation are clustering, failover backup and distributed systems. HA technologies have been developed since the beginning of computing. Since the 1980s, HA solutions have become more affordable and diverse.
- FT (Fault-Tolerance) solutions are an evolution of HA solutions. With few exceptions, FT solutions are superior to HA solutions in terms of functionality. FT solutions have higher levels of load balancing, redundancy and data integrity. This results in higher complexity, quality (data and service availability) and cost. On the other hand, the scalability of FT solutions is usually lower than HA due to the embedded architecture. The first known fault-tolerant computer was created in 1951 by Antonin Svoboda. Later NASA, Tandem Computers and other organizations contributed to the development of FT technologies.
- DRP (Disaster Recovery Planning, Disaster Recovery Solutions) is a set of measures and tools aimed at minimizing the consequences of disasters: natural disasters, major technological accidents, cyberattacks and other similar incidents that can lead to disruptions not only in the IT infrastructure, but also in the entire organization. Unlike Backup and Restore solutions, the DRP process considers backing up not only data, but also any other critical assets: software, hardware, communications, key vendors, etc. Unlike HA and FT systems that operate automatically in real time, DRP solutions often focus on manual recovery from relatively rare events, although modern DRP systems utilize automated failover to redundant systems and infrastructure. The concept of disaster recovery has been evolving since the mid-1970s. As it evolved, DRP processes and solutions became incorporated into the corporate process of Business Continuity Planning (BCP), which had been evolving since the 1950s. Since the late 1990s, the term BCDR (Business Continuity and Disaster Recovery) has been used to combine DRP and BCP.
- Patch Management Systems are systems for managing “patches” and other software updates. The purpose of patch management is to keep information systems updated and protected from known vulnerabilities and bugs. The main functions of patch management are inventory of software and hardware in an organization, vulnerability monitoring, patch testing, patch deployment, and reporting and auditing. Patch management systems integrate with IDS/IPS, anti-malware, configuration management systems, and vulnerability assessment tools. The term “patch” is literal. Since the invention of punched cards in the 18th century, erroneous holes have occasionally been made in them. From about the 1930s, with the advent of IBM calculating machines, errors on punched cards were actively corrected with paper “patches” (physical stickers) containing the correct holes, glued directly onto the cards.
- IRP (Incident Response Platform) is a set of solutions designed to automate responses to information security incidents. The main goals of IRP are to effectively respond to incidents, minimize their impact and prevent recurrence. IRP functions include supporting automated threat response scenarios, tracking and analyzing incidents, recording incident lifecycle stages, providing tools for incident analysis and research, communication and response coordination, reporting and documentation, analyst training, etc. IRP systems integrate with IDS/IPS, TIP, SIEM, EDR, SOAR and other systems. IRP solutions began development in the early 2000s and evolved into SOAR by the early 2020s.
- Forensic Tools are a specialized class of software and hardware tools designed to collect, analyze, and present data collected from digital devices in the context of investigating cybercrime or information security incidents. The main purpose of these tools is to provide evidence, including evidence admissible in the courts of certain jurisdictions, when investigating crimes related to computers and networks, and to assist in the analysis of IS incidents such as unauthorized access, fraud or data breaches. Investigative tools can be categorized by the type of objects investigated: computers, mobile devices, networks, and the cloud. The use of non-specialized investigative tools evolved in the 1980s. In the 1990s, several specialized free and proprietary tools (both hardware and software) were created to allow investigations to be conducted with the guarantee of media immutability, which is one of the main requirements for securing evidence. Law enforcement agencies such as the FBI and Interpol have played an important role in the development of digital forensics tools. DFIR (Digital Forensics and Incident Response) solutions are a combination of IRP and investigative tools.
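The essence of the Backup and Restore class described above fits in a few lines: periodically snapshot the data, and roll back to a known-good snapshot after an incident. A deliberately simple full-copy sketch (real systems add incremental copies, encryption, and version control):

```python
import os
import shutil
import time

def backup(src_dir, backup_root):
    """Create a timestamped snapshot of src_dir (a full copy-based backup)."""
    snapshot = os.path.join(backup_root, time.strftime("%Y%m%d-%H%M%S"))
    shutil.copytree(src_dir, snapshot)
    return snapshot

def restore(snapshot, dst_dir):
    """Recover data from a snapshot -- the corrective measure proper."""
    if os.path.exists(dst_dir):
        shutil.rmtree(dst_dir)      # discard the damaged state
    shutil.copytree(snapshot, dst_dir)
```

Keeping each snapshot under its own timestamp is what provides the version control mentioned above: any earlier state can be selected for recovery.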
Deceptive Solutions
This group describes special solutions that covertly hinder the realization of security breaches. They do so indirectly, by hiding actual objects or procedures from attackers, or directly, by misleading attackers. This distinguishes them from solutions that protect assets overtly, relying on, for example, access regimes, password secrecy, or the mathematical strength of encryption.
Hiding-based solutions rely on the full or partial masking of information, or of the mechanisms that protect it. The approach of hiding the security tools and techniques themselves is called “Security through Obscurity”. This approach is often criticized as insufficiently robust. Although some of its elements are still in use, it is seen more as a complement to more robust and transparent security mechanisms. At the same time, solutions for hiding the information itself are developing quite actively.
Deception-based solutions rest on the principle of creating a misleading environment or conditions for attackers, making it more difficult to realize unauthorized access or other information security breaches. Their effectiveness lies in manipulating attackers and forcing them to spend time and resources on work that does not accomplish their goals. An additional goal of some solutions in this group is the ability to study attackers’ actions under near-real conditions without putting the organization’s real resources at risk.
- Obfuscation is an information hiding technique involving various methods of masking or distorting data, code, or communications, making them difficult to read and difficult to analyze without special knowledge or keys. This technique is often used in software development to partially protect source code from reverse engineering and is useful for protecting intellectual property. Obfuscation is also used to hide sensitive data during transmission. Obfuscation techniques include changing data formats, using special algorithms to change code structure, and many other methods. Not only do they increase the complexity of human understanding of data, but they can also make automated data analysis more difficult, for example, when malware attempts to recognize protected data. Obfuscation originated in the 1970s, evolved among virus writers from the 1980s, and, as a method of data and code protection, began to develop in the 1990s.
- Steganography is a method of hiding the fact that information is being transmitted or stored. In digital steganography, data is often embedded in digital media files using various algorithms that can change, for example, the minimum color bits of pixels in an image or the amplitude of audio files. Digital steganography began to be used in the 1980s and has since evolved into computer and network steganography systems. Computer steganography, unlike general digital steganography, is based on the specifics of the computer platform. An example is the StegFS steganographic file system for Linux. Network steganography is a method of transmitting hidden information through computer networks using the peculiarities of data transfer protocols.
- Entrapment and Decoy Objects are false targets such as fake files, databases, control systems, or network resources that appear valuable or vulnerable to attack. Traps and decoys are used to detect unauthorized activity as well as to analyze attacker behavior. The concept of traps was first proposed by the FIPS 39 standard in 1976. These technologies have been actively developed since the 1990s in the form of honeypots and honeynets solutions. One of the first documented examples of a honeypot was created in 1991 by William Cheswick. Honeynets are networks composed of multiple honeypots. Unlike single honeypots, which simulate individual systems or services, honeynets create a more complex and realistic network environment in which multiple traps can interact. This allows for the exploration of more complex and targeted attacks, as well as attacker behavior in a broader context. The Honeynet project, launched by Lance Spitzner in the late 1990s, is one of the first and best known examples of the use of honeynets.
- Deception Technology is a further development of trap and decoy technologies and represents a broad class of solutions involving various techniques and strategies for creating false indicators and resources on the network. These solutions include not only honeypots and honeynets, but also other means such as false network paths, fake accounts, and data manipulation to mislead attackers and divert them from real assets. Since the 2010s, Israeli companies such as Illusive Networks and TrapX have contributed to the development of deception-based enterprise products.
- Tarpits are solutions that slow or thwart automated attacks such as port scanning or the spread of worms and botnets. Tarpits solutions work by establishing an interaction with an attacker and deliberately delaying that interaction, causing the malware to take significantly longer than usual. This is accomplished by intentionally slowing down responses to network requests or creating false services and resources that seem interesting to the attacker. One famous example of a “tar pit” is LaBrea Tarpit, created by Tom Liston in the early 2000s. This program was designed to combat the Code Red worm, which was spreading rapidly by scanning and infecting web servers. LaBrea effectively slowed down the scanning by creating virtual machines that appeared vulnerable to the worm but were actually traps.
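As a toy illustration of the data obfuscation described above: a repeating-key XOR mask. We stress that this only hampers casual reading and automated recognition; it is not cryptographic protection.

```python
def xor_mask(data: bytes, key: bytes) -> bytes:
    """Mask data with a repeating XOR key; applying the same key again unmasks it.
    This hinders casual inspection -- it is obfuscation, not encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```

Because XOR is its own inverse, one function serves for both masking and unmasking, which is one reason this pattern is so common in malware as well as in simple protection schemes.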
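The LSB technique mentioned in the steganography entry above – changing the minimum color bits of pixels – can be sketched over a raw array of pixel bytes; handling of real image file formats is omitted for brevity.

```python
def embed(pixels: bytearray, message: bytes) -> bytearray:
    """Hide a message in the least significant bit of each pixel byte."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for the message")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the lowest bit
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    """Recover `length` hidden bytes from the pixel LSBs."""
    bits = [p & 1 for p in pixels[:length * 8]]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[i:i + 8]))
        for i in range(0, len(bits), 8)
    )
```

Changing only the lowest bit alters each color value by at most one step, which is why the carrier image remains visually indistinguishable from the original.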
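The delaying principle of a tarpit can be shown with a generator that drips an endless, never-completing banner to a connected scanner. The fragment format is our invention; this is a sketch of the idea, not of LaBrea itself.

```python
import itertools
import time

def tarpit_banner(delay=1.0):
    """Endlessly yield fragments of a banner that never becomes a valid
    protocol greeting, pausing before each one to waste the scanner's time."""
    for n in itertools.count():
        time.sleep(delay)                   # the deliberate slowdown
        yield b"%d-still-waiting\r\n" % n
```

Wired to a listening socket, each connected client would receive one fragment per `delay` seconds forever, tying up the automated attacker at negligible cost to the defender.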
Identity and Access Management
This section describes solutions that play a specific role in ensuring that the right people have access to the right resources at the right time and with the right privileges.
IAM (Identity and Access Management) is the generic name for processes and solutions aimed at managing user identities and controlling access to resources. These systems authenticate users’ identities, manage their access rights, track their actions, and ensure data security and confidentiality. IAM includes authentication, authorization, user management, role and access policy management, and integration with various applications and systems.
The history of IAM begins long before the computer age. Seals and passphrases used by ancient civilizations were the first prototypes of IAM. The first models (MAC – Mandatory Access Control, DAC – Discretionary Access Control, etc.) and the first implementations of access control systems began to take shape in the late 1960s.
More modern access models (RBAC – Role-Based Access Control, ABAC – Attribute-Based Access Control, etc.) as well as IAM as integrated systems have been actively developed since the 1990s. Beginning in the 2000s, IAM solutions began to utilize the RBA (Risk-Based Authentication) method.
With the constant evolution of security threats and tightening security compliance requirements, IAM solutions are becoming not just a security measure, but a strategic tool that helps organizations not only protect their assets, but also improve efficiency and productivity.
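The RBAC model mentioned above reduces to one rule: permissions attach to roles, and users receive permissions only through role membership, never directly. A minimal sketch with hypothetical roles of our own invention:

```python
# Hypothetical role definitions (illustrative only)
ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def is_allowed(user_roles, permission: str) -> bool:
    """RBAC check: access is granted through roles, never to users directly."""
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)
```

The indirection is the point: revoking a user’s role, or editing a role’s permission set, updates access for everyone at once, which is what makes RBAC administrable at scale.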
- SSO (Single Sign-On) is an authentication method that allows a user to access multiple applications or systems by entering their credentials only once. It is a convenient access control method that greatly simplifies the process of logging into various corporate or private services. SSO eliminates the need to remember multiple passwords, which reduces the risk of losing or compromising them. There are different types of SSO, including enterprise SSO (for an organization’s internal systems), web SSO (for online services), and federated SSO (which uses standards such as SAML and OAuth to authenticate across different domains). The concept of SSO began to evolve in the 1980s as organizations looked for ways to simplify account management in the face of a growing number of computer systems and applications. The first SSO solutions were provided by Hewlett-Packard, CA Technologies, Oblix, Magnaquest Technologies and Novell. The development of cloud technologies in the 2000s has greatly expanded the needs and opportunities for SSO applications.
- Multi-Factor Authentication (MFA) is a method and technology that requires a user to provide two or more proofs of identity before accessing a resource. A special case of MFA for two factors is Two-Factor Authentication (2FA). MFA significantly increases security by combining multiple independent confirmations, usually of different kinds: something the user knows (such as a password or code), something the user has (such as a smartphone or token), and something that is part of the user (such as a fingerprint or facial biometrics scan). The preconditions for 2FA emerged in the 1980s. The system most often cited as the first 2FA is a transaction authorization system based on the exchange of codes via two-way pagers, created by AT&T in 1996. The first MFA solutions often involved the use of physical tokens or special cards. With the development of smartphones and biometric technologies since the early 2010s, MFA methods have become more diverse and affordable.
- IDM (Identity Management, IdM) are solutions that manage digital user identities. This includes the processes of creating and deleting user accounts and identities, as well as other management operations. IDM is sometimes used synonymously with IAM, but it is more accurate to think of IDM as one of the functions of IAM. While IDM focuses on identity management, IAM provides a more comprehensive approach, including mechanisms to protect data and resources from unauthorized access.
- WAM (Web Access Management) is a class of solutions designed to manage access to web applications and online services. The main purpose of WAM is to provide secure, controlled, and convenient access to web resources for authorized users. The functions of WAM are authentication and authorization, SSO, auditing and monitoring, and session management. There are various WAM solutions designed for different types of applications and environments, including cloud services, enterprise portals, and public websites. The development of WAM began in the late 1990s in the form of SSO solutions. In the 2000s, WAM started to become a separate class of solutions and, by now, has begun to lose relevance, giving way to modern IAM.
- PIM (Privileged Identity Management, Privileged Account Management or Privileged User Management, PUM) is a technology for managing accounts that have access to sensitive data or critical systems. These include, for example, accounts for super users, system administrators, database administrators, service administrators, etc. PIM focuses on the creation, maintenance and revocation of accounts with elevated permissions. PIM tools typically provide support for privileged account discovery, account lifecycle management, strong password policy enforcement, access key protection, and access monitoring and reporting. The concept of PIM emerged in the early 2000s.
- PAM (Privileged Access Management or Privileged Account and Session Management, PASM) is a class of solutions that manage access to critical systems and resources in an organization. PAM can be considered an extension of PIM because PAM provides a broader range of features for privileged access management. For example, timely privilege assignment, secure remote access without a password, session recording capabilities, monitoring and controlling privileged access, and detecting and responding to suspicious activity. The concept of PAM began to evolve in the early 2000s. The functions and variations of PAM include PSM (Privileged Session Management), EPM (Endpoint Privilege Management), WAM (Web Access Management), PEDM (Privilege Elevation and Delegation Management), SAG (Service Account Governance), SAPM (Shared Account Password Management), AAPM (Application-to-Application Password Management or Application Password Management, APM) and VPAM (Vendor Privileged Access Management).
- IGA (Identity Governance and Administration) is a class of solutions for managing user identity, access to resources, and compliance with relevant policies and regulations. IGA solutions began to grow rapidly after the passage of the U.S. HIPAA (1996) and SOX (2002) laws. IGA solutions manage the identity lifecycle, implement access policy and role management, and provide auditing and reporting. IGA solutions can be viewed as a complement or extension of IAM solutions. The integration of IGA with PAM represents PAG (Privileged Access Governance) technology.
- PIV (Personal Identity Verification) solutions provide strong identity verification for U.S. government and contractor employees. PIV solutions are designed to strengthen the security of access to the physical and information resources of government agencies. This is achieved through the use of PIV cards that contain biometric data (fingerprints, photos) as well as electronic certificates and PIN codes for access to information systems and physical facilities. The FIPS 201 standard, developed by NIST and describing PIV, was approved in 2005.
- IDaaS (Identity as a Service) is a cloud-based service that provides end-to-end IAM solutions. IDaaS is designed to provide secure, flexible, and scalable management of digital user identities, including authentication, authorization, and resource access accounting. IDaaS includes SSO, MFA, role-based access control (RBAC) and policy management, account synchronization, integration with various cloud and on-premises applications, etc. IDaaS is tightly integrated with other security systems, such as DLP, CASB and SIEM systems, providing a comprehensive approach to cybersecurity. The IDaaS concept began to evolve in the early 2000s with the proliferation of cloud technologies. Among the first companies to offer IDaaS solutions were Microsoft, Okta and OneLogin.
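To make the “something the user has” factor from the MFA entry above concrete: most authenticator apps implement TOTP (RFC 6238), an HMAC over a 30-second time counter, dynamically truncated to a short code. A self-contained sketch:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, digits=6, step=30):
    """TOTP (RFC 6238): HMAC-SHA1 over the current time-step counter,
    dynamically truncated (RFC 4226) to a short one-time code."""
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because server and device each derive the code independently from a shared secret and the current time, no code ever travels over the network before the user types it in, and each code expires within one time step.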
Network Access Security
Providing secure access to network resources has been at the center of cybersecurity since the massive proliferation of the Internet at the beginning of the 21st century. Each of the secure network access solutions described below offers specific techniques and strategies for securing a highly dynamic and diverse network environment.
Consider how these technologies help organizations adapt to new cybersecurity challenges by providing reliable access control, protection from external and internal threats, flexibility and scalability in an ever-changing landscape of IT infrastructure and business processes. This section will provide you with an understanding of modern approaches to secure network access, their features, evolution, benefits, and potential applications as part of a comprehensive cybersecurity strategy.
- VPN (Virtual Private Network) is a generic term for technologies that enable one or more secure network connections over another network. A VPN can provide three types of connections: host-to-host, host-to-network, and network-to-network. VPNs are used to protect network traffic from interception and tampering; protection is provided through the use of cryptography. VPNs are often integrated with firewalls, IDS/IPS systems, and IAM solutions. The first prototypes of VPNs appeared in the 1970s. Between 1992 and 1999, the first popular VPN protocols, IPsec and PPTP, were developed. Other examples of VPN protocols include L2TP and WireGuard. Some VPN implementations use SSL/TLS, MPLS technology, or other protocols and technologies. Cisco and Microsoft made key contributions to the development of VPNs. In 2001, an implementation of OpenVPN appeared and quickly became popular. WireGuard appeared in 2016, was included in the Linux kernel in 2020, and is currently actively supplanting OpenVPN as a more performant and simpler solution.
- DA (Device Authentication) refers to various technologies and methods used to authenticate devices in digital networks and systems. The main functions are device identification and authentication, encryption, and access control. Types of solutions differ in the basis of authentication: MAC addresses, certificates, unique device identifiers (e.g., IMEI), etc. Device authentication began to develop in the 1980s, and its evolution led to the concept of NAC, described below. With the development of the Internet of Things (IoT), mobile technologies, and BYOD (bring your own device) policies, device authentication has become even more important for securing the networks to which these devices connect.
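As a toy illustration of key-based device authentication, the following sketch shows a challenge-response exchange in which a device proves possession of a secret provisioned at enrollment without ever transmitting it. The shared-key scheme and all names here are illustrative assumptions, not any vendor's actual protocol:

```python
import hashlib
import hmac
import os

def issue_challenge() -> bytes:
    """Server side: generate a random nonce for the device to sign."""
    return os.urandom(16)

def device_response(device_key: bytes, challenge: bytes) -> bytes:
    """Device side: prove possession of the secret key without sending it."""
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def verify_device(device_key: bytes, challenge: bytes, response: bytes) -> bool:
    """Server side: recompute the expected response and compare in constant time."""
    expected = hmac.new(device_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# A device enrolled with a shared secret authenticates itself.
key = b"per-device-secret-provisioned-at-enrollment"
challenge = issue_challenge()
assert verify_device(key, challenge, device_response(key, challenge))
assert not verify_device(key, challenge, device_response(b"wrong-key", challenge))
```

Certificate-based schemes replace the shared secret with an asymmetric key pair, but the challenge-response structure is the same.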
- NAC (Network Access Control) is a standard and a broad class of network access control solutions that includes identifying and authenticating users, verifying the status of their devices, and enforcing security policies to control access to network resources. NAC solutions help secure the network by controlling access based on certain client computer criteria, such as user role, device type, location, and device state (anti-malware status, system updates, etc.). The concept of NAC began to evolve in the early 2000s. The basic form of NAC is Port-based Network Access Control (PNAC), defined by the IEEE 802.1X standard adopted in 2001. An evolution of NAC is the Trusted Network Connect (TNC) approach, based on device state and introduced in 2005. Examples of specific NAC implementations include Cisco Network Admission Control (2004-2011), Microsoft Network Access Protection (2008-2016), Cisco Identity Services Engine (ISE), Microsoft Endpoint Manager, Fortinet FortiNAC, Aruba ClearPass, and Forescout NAC solutions.
- ZTA (Zero Trust Architecture) is a security architecture based on the principle of zero trust (“trust no one, verify everything”). ZTA integrates with a wide range of network and security technologies, including identification, authentication, data protection, encryption, network segmentation, monitoring, and access control. The term “zero trust” was coined by Stephen Paul Marsh in 1994. The OSSTMM standard released in 2001 placed a lot of emphasis on trust, and the 2007 version of this standard asserted that “trust is a vulnerability”. In 2009, Google implemented a zero-trust architecture called BeyondCorp. In 2020, NIST and its NCCoE published the SP 800-207 Zero Trust Architecture standard. Subsequently, the widespread adoption of mobile and cloud services accelerated the spread of ZTA solutions.
- SDP (Software-Defined Perimeter) is a network security model based on the creation of dynamically configurable perimeters around network resources. The SDP model can be seen as one way to implement the ZTA architecture. Unlike NAC, the more modern SDP model does not control a user’s or device’s access session to the entire network, but rather each access request to each resource individually. SDP uses cloud technology and software-defined architectures to create tightly controlled access points that can adapt to changing security requirements. SDP is also referred to as “black cloud” because the application infrastructure, including IP addresses and DNS names, is completely inaccessible to unverified devices. SDP is related to IAM systems, firewalls, IDS/IPS systems, and cloud security gateways. The concept of SDP began to evolve in the late 2000s. The Defense Information Systems Agency (DISA) and Cloud Security Alliance (CSA) made key contributions to the development of SDP.
- ZTNA (Zero Trust Network Access) is one of the core technologies within the ZTA framework. Unlike traditional security models, ZTNA makes no assumptions about trust and requires strict verification of origin and context for every resource access request, regardless of where the request comes from. ZTNA solutions are related to SDP and SASE as well as IAM solutions. Unlike SDP solutions, which cover not only application access but also network infrastructure management, ZTNA solutions focus on user access management. ZTNA is often seen as a more secure, flexible and modern alternative to traditional VPN solutions.
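The per-request, context-aware verification that distinguishes ZTNA from traditional network-level VPN access can be sketched as follows. The users, resources, and policy rules are hypothetical; a real broker would consult an identity provider and a device posture service:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    mfa_passed: bool
    device_compliant: bool  # e.g., disk encrypted, OS patched
    network: str            # "corporate", "home", or "public"
    resource: str

# Hypothetical per-user entitlements.
ENTITLEMENTS = {"alice": {"crm", "wiki"}, "bob": {"wiki"}}

def authorize(req: AccessRequest) -> bool:
    """Every single request is verified; network location alone never grants access."""
    if not req.mfa_passed or not req.device_compliant:
        return False
    if req.network == "public" and req.resource == "crm":
        return False  # context-aware rule: sensitive app blocked from public networks
    return req.resource in ENTITLEMENTS.get(req.user, set())

assert authorize(AccessRequest("alice", True, True, "home", "crm"))
assert not authorize(AccessRequest("alice", True, False, "corporate", "crm"))
assert not authorize(AccessRequest("bob", True, True, "home", "crm"))
```

Note that, unlike a VPN, a compliant device on the corporate network still gets no access to resources the user is not entitled to.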
- SASE (Secure Access Service Edge) is a network architecture that integrates comprehensive network security and wide-area networking functions into a single cloud-based platform. SASE was designed to provide secure, efficient access to network resources in the face of an increasing number of remote users and distributed applications. It is a groundbreaking model proposed by Gartner in 2019 that redefines traditional approaches to network security and access. SASE integrates features such as SD-WAN (software-defined wide area network), secure web gateway (SWG), cloud access security broker (CASB), intrusion detection and prevention (IDS/IPS), and more.
- ZTE (Zero Trust Edge) is a concept that combines the principles of ZTA with networking technologies at the edge of the network (edge computing). This approach aims to provide strong security for network resources and applications hosted at the edge of the network, especially with the increasing number of remote users and the growing use of cloud and mobile technologies. ZTE solutions are closely aligned with ZTNA, SDP and SASE. The ZTE model proposed by Forrester in 2021 is similar to SASE, but with an additional focus on implementing zero-trust principles for user authentication and authorization.
Network Security
This section is devoted to an overview of solutions designed to protect the network infrastructure. These are mainly various proxy solutions in the form of filters, gateways, etc. This group does not include secure network access solutions and certain other solutions described in the groups above, nor the mobile, cloud, and other solutions described below.
Each tool in this group plays a different role in creating a layered defense strategy. Let’s look at how these various solutions can be integrated to form a flexible, effective network defense, and review their key features, benefits and potential applications. We will also describe the subgroups and evolution of network security technologies.
- FW (Firewall) is a basic network device or software solution designed to control and filter incoming and outgoing network traffic based on predetermined security rules. The main purpose of a firewall is to protect computers and internal networks from unauthorized access and various types of network attacks, ensuring the security of data and resources. The main types of firewalls are packet filters, Stateful Inspection Firewalls (SIF), Application Layer Firewalls (ALF), and Next-Generation Firewalls (NGFW). Firewalls are often used in conjunction with other security systems, including IDS/IPS, SIEM, and VPN systems. The first work on firewall technology was published in 1987 by engineers at Digital Equipment Corporation (DEC). AT&T Bell Labs also contributed to the development of the first firewalls.
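The rule-based filtering at the heart of a simple packet filter can be sketched like this. The rule set and addresses are illustrative, not a recommended policy:

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

@dataclass
class Rule:
    action: str              # "allow" or "deny"
    src: str                 # source network in CIDR notation
    dst_port: Optional[int]  # None matches any destination port

# Hypothetical ordered rule set; the first matching rule wins.
RULES = [
    Rule("deny", "10.0.66.0/24", None),  # quarantined subnet: block everything
    Rule("allow", "0.0.0.0/0", 443),     # HTTPS allowed from anywhere
    Rule("allow", "10.0.0.0/8", 22),     # SSH only from internal addresses
]

def filter_packet(src_ip: str, dst_port: int, default: str = "deny") -> str:
    """Return the action for a packet; unmatched traffic falls to the default policy."""
    for rule in RULES:
        if ip_address(src_ip) in ip_network(rule.src) and rule.dst_port in (None, dst_port):
            return rule.action
    return default

assert filter_packet("10.0.66.5", 443) == "deny"   # quarantine overrides the HTTPS rule
assert filter_packet("8.8.8.8", 22) == "deny"      # no rule matches: default policy
```

Stateful and next-generation firewalls build on exactly this first-match evaluation, adding connection state and application awareness.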
- DNS Firewall is a solution for monitoring and controlling DNS queries to prevent access to malicious or suspicious sites. DNS Firewalls differ from regular firewalls in that they focus solely on DNS traffic. DNS Firewalls can be implemented as hardware appliances, software, or cloud services. DNS firewalls are often integrated with IDS systems, anti-malware, and other types of firewalls. DNS firewalls were introduced in the late 1990s.
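A minimal sketch of the blocklist logic behind a DNS firewall might look like this. The blocked domains and the sinkhole response are illustrative assumptions; real deployments typically consume RPZ-style threat-intelligence feeds:

```python
# Hypothetical threat-intelligence feed of blocked zones.
BLOCKLIST = {"malware.example", "phish.example"}

def resolve(name: str, upstream) -> str:
    """Answer a DNS query, sinkholing anything on the blocklist.

    A common response for blocked names is 0.0.0.0 (or NXDOMAIN).
    `upstream` is a callable standing in for a real recursive resolver.
    """
    domain = name.lower().rstrip(".")
    parts = domain.split(".")
    # Block the domain itself and any subdomain of a blocked zone.
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in BLOCKLIST:
            return "0.0.0.0"
    return upstream(domain)

assert resolve("evil.MALWARE.example.", lambda d: "93.184.216.34") == "0.0.0.0"
assert resolve("example.org", lambda d: "93.184.216.34") == "93.184.216.34"
```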
- SIF (Stateful Inspection Firewall, Stateful Firewall) is a type of firewall that not only filters traffic based on source and destination IP addresses, ports, and protocols, but also monitors and accounts for the states of active network connections (sessions). This allows it to dynamically manage traffic based on session context, offering more effective protection than simple packet filters. SIFs operate at the network, session and application layers of the OSI model. SIFs are often used in conjunction with IDS/IPS, IAM, and VPN systems. SIF was invented in the early 1990s by Check Point Software Technologies. SIFs have become the basis for many modern cybersecurity solutions, including NGFW.
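The connection-tracking idea behind stateful inspection can be sketched as follows. This is a deliberately simplified model that tracks sessions as address tuples and ignores TCP flags, sequence numbers, and timeouts:

```python
# Tracked sessions: (src, sport, dst, dport) tuples created by outbound traffic.
connections = set()

def outbound(src: str, sport: int, dst: str, dport: int) -> None:
    """An outbound connection attempt creates tracked state."""
    connections.add((src, sport, dst, dport))

def inbound_allowed(src: str, sport: int, dst: str, dport: int) -> bool:
    """An inbound packet passes only if it is the reply direction
    of a connection we have already tracked."""
    return (dst, dport, src, sport) in connections

outbound("10.0.0.5", 50000, "93.184.216.34", 443)                # client opens HTTPS
assert inbound_allowed("93.184.216.34", 443, "10.0.0.5", 50000)  # the reply passes
assert not inbound_allowed("93.184.216.34", 443, "10.0.0.5", 50001)  # unsolicited: dropped
```

A stateless packet filter would need a standing rule permitting all inbound port-443 traffic; the stateful version admits only replies to sessions the inside host actually initiated.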
- ALF (Application-Level Firewall or Application Firewall) is a type of firewall that monitors and, if necessary, blocks connections and I/O to applications and system services based on a customized policy (set of rules). An ALF can control communication down to the application layer of the OSI model, such as protecting Web applications, e-mail, FTP, and other protocols. The two main categories of application layer firewalls are network and host firewalls. ALF is often integrated with IDS/IPS and IAM systems. ALF development began in the 1990s through the efforts of Purdue University and AT&T and DEC.
- WAF (Web Application Firewall) is an application-layer firewall (ALF) designed to protect web applications and APIs (application programming interfaces) from various types of attacks, such as XSS (cross-site scripting), SQL injection, CSRF (cross-site request forgery), etc. WAFs come in hardware, cloud, and virtual varieties. A WAF sits in front of web applications in the path of requests to them and is often integrated with IDS/IPS, CMS and SIEM systems. The first WAFs were developed by Perfecto Technologies, Kavado and Gilian Technologies in the late 1990s. In 2002, the ModSecurity open source project was created, which made WAF technology more accessible. WAF solutions continue to be actively developed.
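A heavily simplified sketch of signature-based request inspection, one core mechanism of many WAFs. The signatures here are illustrative toys; production rule sets such as the OWASP Core Rule Set are far more elaborate and combine signatures with anomaly scoring:

```python
import re

# Deliberately crude example signatures (illustrative only).
SIGNATURES = [
    re.compile(r"(?i)<script\b"),              # reflected XSS attempt
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # classic SQL injection pattern
    re.compile(r"\.\./"),                      # path traversal
]

def inspect_request(path: str, query: str) -> str:
    """Inspect a request before it reaches the application; block on any match."""
    payload = f"{path}?{query}"
    for sig in SIGNATURES:
        if sig.search(payload):
            return "block"
    return "pass"

assert inspect_request("/search", "q=<script>alert(1)</script>") == "block"
assert inspect_request("/items", "id=1 UNION SELECT password FROM users") == "block"
assert inspect_request("/items", "id=42") == "pass"
```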
- WAAP (Web Application and API Protection) is a class of solutions designed to secure web applications and APIs. It includes the features of a traditional WAF and extends them with additional protective measures. Unlike traditional WAFs, WAAP provides more comprehensive protection, including API protection, cloud services, and advanced threat intelligence. WAAP can be implemented as cloud services, hardware solutions, or integrated platforms. WAAP is often integrated with IDS/IPS and SIEM systems. The WAAP concept was proposed by Gartner in 2019.
- SWG (Secure Web Gateway) is a solution designed to monitor and manage inbound and outbound web traffic to prevent malicious activity and enforce corporate policies. SWGs are typically implemented as proxy servers and are placed between users and Internet access. The functions of SWGs include web content filtering and caching, malware protection, application and user access control. SWGs can be deployed as hardware appliances, software solutions, or as cloud services. SWGs are often integrated with IDS/IPS systems, anti-malware solutions, firewalls and DLP. SWG solutions began to appear in the 1990s.
- NGFW (Next-Generation Firewall) is a class of firewall that not only performs standard traffic filtering functions, but also includes additional capabilities to provide deeper and more comprehensive network security. NGFWs are implemented as a device or software. NGFW technologies include packet filtering, SIF, DPI, IDS/IPS, ALF, anti-malware, URL and content filtering, etc. NGFW solutions are tightly integrated with SIEM, IAM, cloud services and mobile devices. In 2007, Palo Alto Networks introduced what was effectively the first NGFW, although the term was coined by Gartner later, in 2009. The creation of the NGFW fundamentally changed previous approaches to filtering network traffic.
- UTM (Unified Threat Management) is a comprehensive solution that integrates multiple security functions into a single device or software package. The main functions of UTM are firewall, IDS and IPS. Additional UTM functions include gateway anti-malware, ALF, DPI, SWG, traffic filtering, spam and content filtering, DLP, SIEM, VPN, network access control, tarpits, and protection against DoS/DDoS and zero-day attacks. Depending on the specific implementations, NGFW and UTM solutions in practice either do not differ in functionality or differ in customization capabilities: UTMs offer a wider range of versatile built-in features than NGFWs, often with more flexible customization. UTM solutions began to develop in the early 2000s, thanks to Fortinet and a few others. In 2004, IDC coined the term UTM.
- CDN (Content Delivery Network) – This technology is primarily designed to speed up content downloads on the Internet, but it can also contribute to improving information security. CDNs reduce the risk of DDoS (distributed denial of service) attacks by distributing traffic across a network of servers, making it more difficult to organize a single-point attack. In addition, some CDNs offer additional security features, such as WAF and SSL/TLS encryption. The concept of CDNs was proposed in the late 1990s. The first large CDN network was built by Akamai. One of the largest providers of free and paid CDN services is Cloudflare.
Endpoint Management
Endpoints (endpoint devices) are servers, user desktops, mobile devices, and other devices that connect to a computer network to send or receive information, as opposed to redistribution points, which are service network devices that transmit endpoint information or manage the network.
Endpoint protection technologies have evolved steadily since the advent of computers; anti-malware, firewalls, FDE, HIDS and HIPS are examples of such protection. The emergence of endpoint security technologies as a separate group in the 2000s can be attributed to the development of mobile devices (portable equipment).
As is often the case in information security in its various manifestations, convenience came into conflict with security. Since the late 2000s, more convenient Apple and Android mobile devices have displaced Blackberry gadgets, which were more secure from a corporate point of view. Therefore, a large niche of security solutions was created for devices on the new platforms, which began to fill up quickly. A key factor in the development of endpoint management systems was the corporate “Bring Your Own Device” (BYOD) policy, which grew in popularity in the 2010s.
The endpoint management group provides a prime example of a large set of interconnected, specialized solutions. These solutions are rapidly evolving, and their functions are growing in number and complexity, sometimes recurring across different classes of solutions in this group and causing confusion. Nevertheless, the group is worth untangling.
The following table will help us do so. It divides the classes of solutions by their main functions, management and protection (although both functions relate to security), and into two subgroups by the types of platforms on which these solutions run:
| Types of platforms | Management | Protection |
| --- | --- | --- |
| Endpoints | UEM | EPP, EDR, UES |
| Mobile devices | MDM, MAM, MCM, EMM | MTD |
The general subgroup includes solutions for managing and protecting endpoints of all kinds, including desktop computers and mobile devices; the names of these solutions include the word “Endpoint”. The narrower subgroup includes solutions for managing and protecting mobile devices such as cell phones, smartphones and PDAs; these solutions have the word “Mobile” in their names. The two subgroups have grown closely related in recent years and are, to some extent, linked to other classes of solutions: data protection, anti-malware, threat detection, etc.
- EPP (Endpoint Protection Platform) – these solutions represent an evolution of classic anti-malware. Almost all modern anti-malware is called EPP. The main purpose of EPP is to protect endpoints from a wide range of cyber threats, including viruses, Trojans, phishing, spyware, rootkits, and other types of malware. EPP’s key features are NGAV, IDS/IPS, firewall, behavior monitoring and heuristic analysis. EPP solutions have been evolving since the 2010s. Today’s EPPs include password management, cloud backup, vulnerability management, artificial intelligence technologies, and integrate with EDR, SIEM, DLP and MDM systems to provide a deeper level of analytics and automation.
- EDR (Endpoint Detection and Response, or Endpoint Threat Detection and Response, ETDR) serves to detect, investigate and respond to incidents and potential threats, including those not detected by EPP. EDR, unlike EPP, is geared more toward large enterprises than individual users. EDR also relies more on analyzing and investigating security events, as well as integrating with SIEM and other network security services. EDR features include endpoint monitoring in both online and offline modes, continuous collection of security event data, data analysis to identify suspicious activity and indicators of compromise, automated response to threats or suspicious activity, and investigation and visibility capabilities, including visibility into user data. Some EDR vendors utilize the MITRE ATT&CK framework for threat management. The term ETDR was coined by Anton Chuvakin of Gartner in 2013.
- MDM (Mobile Device Management) is a set of software solutions designed to administer, control and secure mobile devices used in organizations. These systems allow centralized management of security policies, settings, applications and data on mobile devices to ensure compliance with corporate standards and regulations. MDM features also include monitoring and reporting on device status and usage, remote device management, including locking and wiping data in the event of loss or theft, and managing access to corporate resources and data. MDM solutions are often integrated with MAM, MCM, IAM and EDR systems. The development of MDM began in the 2000s.
- MAM (Mobile Application Management) solutions are designed to manage access to and usage of mobile applications on corporate and personal mobile devices of employees. This includes deploying, updating, monitoring and securing enterprise mobile applications, as well as managing access to data and functionality of these applications. Unlike MDM systems that manage the entire device, MAM systems focus only on enterprise applications. This improves the separation of the personal and corporate parts of the gadget. The forerunners of MAM were the enterprise mobile email clients NitroDesk TouchDown in 2008 and Good for Enterprise in 2009. Active development of MAM began in the 2010s. Recently, the transition of MAM into EMM and UEM systems has intensified.
- MCM (Mobile Content Management) is a type of content management system (CMS), document management system (DMS), or enterprise content management system (ECMS) capable of storing and delivering information and services to mobile devices. The primary function of MCM in a security context is to control access to content. Access control often includes download control, user-specific data encryption and deletion, and time-based access. To enhance security, many MCMs support authentication by IP address and by mobile device hardware IDs. MCM can either be a standalone system or integrated with other management systems such as EMM and UEM.
- MTD (Mobile Threat Defense) are solutions that specialize in detecting, preventing and responding to security threats targeting mobile devices and operating systems. MTDs are designed to protect against a wide range of threats, including malware, phishing, network attacks, and vulnerability exploitation. MTD features include malware and rootkit detection and prevention, protection against network attacks and phishing, vulnerability detection and remediation, application and device behavior monitoring and analysis, and threat response, including isolation of infected devices. MTD varieties include standalone applications, integrated platforms, and components of broader security systems such as EMM or UEM. Also, MTD solutions integrate with ZTNA, XDR and UES solutions. The development of MTD began in the mid-2010s.
- EMM (Enterprise Mobility Management) is a set of technologies and services for managing mobile devices and applications in an organization based on MDM, MAM, MCM, MIAM (Mobile IAM) or MIM (Mobile Identity Manager) and some other functions such as mobile expense management (MEM). EMM solutions began to evolve in the early 2010s as smartphones and tablets became more sophisticated, and it became clear that enterprises needed one convenient tool to optimize operations. EMM solutions provide capabilities for managing security policies on mobile devices, deploying and managing mobile applications, managing access to enterprise content, and protecting data. Users of EMM solutions are companies that focus on employee mobility and require mobile device and application management.
- UEM (Unified Endpoint Management) is a solution that provides a single interface for managing mobile, PC and other devices. UEM solutions are an evolution and integration of MDM, EMM and desktop management. UEM development began in 2018. UEM solutions combine the management of traditional devices (e.g., desktop and laptop computers) and mobile devices in a single solution. UEM features include all EMM functions and also provide tools to manage desktop operating systems such as Windows, macOS and Linux. UEM can also support the Internet of Things (IoT) and other types of devices. Users of UEM solutions are companies seeking centralized management of all their endpoint devices, regardless of device type or operating system.
- UES (Unified Endpoint Security) is an emerging approach to endpoint security that focuses on combining multiple endpoint security capabilities into a single integrated platform. Once an attack is detected, the UES platform can automatically take steps to not only eliminate the threat, but also address the underlying issues that made it possible. UES integrates EPP, EDR and MTD functions and integrates with UEM. The development of UES started in the late 2010s.
Application Security and DevSecOps
The security of applications and the process of creating and maintaining them is central to all modern information security, because if there are security issues within the program code, many of the solutions we describe in this work lose effectiveness.
The evolution of application development and support processes has led to the emergence and active development of the concept of DevOps and then DevSecOps. These approaches combine software development, operations, and security practices in an effort to improve the efficiency, speed, and security of application development and operations processes.
The origins of DevOps date back to the 2000s, when there was a need for improved collaboration and integration between development and operations teams. This was aimed at speeding up application development and delivery processes, as well as making them more reliable and resilient to failure. As DevOps evolved, it became apparent that security should be integrated throughout the application lifecycle, not just at the end of application development. This led to the birth of the concept of Security DevOps (DevSecOps), which extends DevOps to include security as an integral part of the development and operations processes.
In this section, we review the key classes of hardware and software solutions that contribute to application security and DevSecOps practices, excluding vulnerability management, which, due to its infrastructure specificity, we have separated into a separate section. Each of these solutions plays a role in application and infrastructure security, helping organizations adapt to the rapidly changing security requirements of the IT world.
- Sandbox is a type of virtualization that allows programs to run in a separate, secure environment isolated from the main system. The allusion to children in a sandbox is quite ironic – protecting the safe adult external space from the unpredictable and dangerous internal childish space. The purpose of using a sandbox is to test suspicious programs for malicious behavior without risking the actual operating system or network. Sandboxes are used in anti-malware, browsers, operating systems, development environments, etc. The concept of sandboxing originated in the 1970s at Carnegie Mellon University. The development of the Internet and the rise of cyber threats in the 1990s and 2000s contributed to the popularity and development of sandboxing technologies.
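A minimal sketch of the sandboxing principle: parse untrusted input and allow only a whitelist of constructs before executing it. This illustrates the isolation idea only; in-process whitelisting like this is not a robust security boundary for arbitrary code, which real sandboxes enforce at the OS or hypervisor level:

```python
import ast

# Whitelist: only arithmetic expressions are permitted.
ALLOWED = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
           ast.Add, ast.Sub, ast.Mult, ast.Div, ast.USub, ast.UAdd)

def sandboxed_eval(expr: str):
    """Evaluate an untrusted arithmetic expression: parse it first and
    reject any node outside the whitelist (no names, calls, or imports)."""
    tree = ast.parse(expr, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, ALLOWED):
            raise ValueError(f"forbidden construct: {type(node).__name__}")
    return eval(compile(tree, "<sandbox>", "eval"), {"__builtins__": {}})

assert sandboxed_eval("2 * (3 + 4)") == 14
try:
    sandboxed_eval("__import__('os').system('id')")  # escape attempt
    raise AssertionError("should have been blocked")
except ValueError:
    pass  # the hostile expression never executed
```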
- Threat Modeling Tools are designed to analyze, identify, and manage potential security threats during the software development process. There are various approaches to threat modeling, including data flow diagrams, methodologies such as STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege), and specialized software tools to automate the threat modeling process. Early ideas that influenced threat modeling trace back to Christopher Alexander's work on design patterns in 1977. In 1988, Robert Barnard developed the first IT system attacker profile. The STRIDE model was developed by Microsoft in 1999. In 2003, Christopher Alberts introduced the OCTAVE method, and in 2014, Ryan Stillions introduced the DML model.
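The STRIDE-per-element technique can be sketched as a simple enumeration over a data flow diagram. The element-to-threat mapping below follows the commonly published STRIDE-per-element table; the DFD itself is hypothetical:

```python
# Which STRIDE categories typically apply to each DFD element type.
STRIDE_PER_ELEMENT = {
    "external_entity": ["Spoofing", "Repudiation"],
    "process": ["Spoofing", "Tampering", "Repudiation",
                "Information Disclosure", "Denial of Service",
                "Elevation of Privilege"],
    "data_store": ["Tampering", "Repudiation",
                   "Information Disclosure", "Denial of Service"],
    "data_flow": ["Tampering", "Information Disclosure", "Denial of Service"],
}

def enumerate_threats(elements):
    """elements: list of (name, element_type) pairs taken from a DFD.
    Returns every (element, threat) pair the team must consider."""
    return [(name, threat)
            for name, etype in elements
            for threat in STRIDE_PER_ELEMENT[etype]]

dfd = [("user", "external_entity"), ("orders DB", "data_store")]
threats = enumerate_threats(dfd)
assert ("orders DB", "Information Disclosure") in threats
```

Automated threat modeling tools essentially perform this enumeration at scale, then filter and rank the results against mitigations.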
- SCA (Software Composition Analysis) is a set of tools and platforms that help organizations identify and manage open source components and third-party libraries in their software. This includes detecting licenses, vulnerabilities, legacy code, and other potential risks. SCA solutions can be delivered as cloud services or on-premises systems, and can also be integrated into broader DevOps platforms and CI/CD (Continuous Integration/Continuous Deployment) automation tools. SCA technology has become popular since the late 1990s with the development of open source software (OSS).
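The core matching step of SCA, comparing declared dependencies against an advisory feed, can be sketched like this. The advisory data is a hard-coded stand-in for real feeds such as the NVD or the GitHub Advisory Database, and the version matching is simplified to exact pins:

```python
# Hypothetical advisory data keyed by (package, version).
ADVISORIES = {
    ("requests", "2.19.0"): ["CVE-2018-18074"],
    ("lxml", "4.6.2"): ["CVE-2021-28957"],
}

def parse_requirements(text: str):
    """Parse pinned 'name==version' lines from a requirements.txt."""
    deps = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            name, version = line.split("==", 1)
            deps.append((name.lower(), version))
    return deps

def scan(text: str):
    """Report every dependency that appears in the advisory data."""
    return {dep: ADVISORIES[dep] for dep in parse_requirements(text) if dep in ADVISORIES}

report = scan("requests==2.19.0\nflask==2.3.0\n# pinned for repro\nlxml==4.6.2")
assert report[("requests", "2.19.0")] == ["CVE-2018-18074"]
assert ("flask", "2.3.0") not in report
```

Real SCA tools additionally resolve transitive dependencies, match version ranges rather than exact pins, and report license obligations.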
- IaC (Infrastructure as Code) tools aim to define and manage infrastructure using code. This automates the creation, deployment and management of infrastructure, reducing the risks associated with human error and ensuring compliance with security standards. The forerunners of IaC – configuration management solutions – evolved fairly smoothly from the 1970s onward. One of the first tools that can be considered a precursor to the modern IaC approach was CFEngine, created by Mark Burgess in 1993. The IaC approach has evolved since the early 2000s, thanks to the efforts of companies such as HashiCorp (Terraform product), Red Hat (Ansible product), Puppet Labs and Chef Software.
- CI/CD (Continuous Integration/Continuous Delivery, or less commonly, Continuous Deployment, a more automated delivery option) is a technology that plays a key role in integrating security practices into the software development process. CI/CD enables automation and optimization of application development, testing, and deployment processes. This, in turn, enables the adoption of secure programming practices and continuous security monitoring throughout the application lifecycle. CI/CD solutions can be cloud-based or on-premises, and can include a variety of tools for automation, version control, QA, and security. CI/CD is closely related to container security and key management technologies, as well as IaC, DevSecOps, SAST, DAST, IAST, RASP, and SIEM. The earliest known CI solution was the Infuse environment introduced by G. E. Kaiser, D. E. Perry, and W. M. Schell in 1989. Grady Booch systematized CI in 1991. The term “continuous delivery” was popularized by Jez Humble and David Farley in 2010, but the concept has continued to evolve and now has a more expansive meaning. CI/CD solutions had little practical application until the early 2010s, but then began to grow in use and adoption.
- Compliance as Code – these solutions are focused on automation and management of compliance with regulatory requirements and standards in the field of information technology. Key features include automatic verification and enforcement of infrastructure configuration compliance with regulatory requirements, the ability to encode security policies and compliance standards as executable rules, continuous monitoring and reporting of compliance status, and integration with tools for infrastructure deployment and management. The Compliance as Code concept began to evolve in the early 2010s.
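The idea of encoding compliance requirements as executable checks can be sketched as follows. The requirements and the configuration schema are hypothetical, standing in for controls drawn from a real benchmark or regulation:

```python
# Hypothetical hardening requirements expressed as executable rules
# over a declarative server configuration.
CHECKS = {
    "password auth disabled": lambda cfg: cfg["ssh"]["password_authentication"] is False,
    "disk encryption on": lambda cfg: cfg["disk"]["encrypted"] is True,
    "log retention >= 90 days": lambda cfg: cfg["logging"]["retention_days"] >= 90,
}

def audit(cfg: dict) -> dict:
    """Return a compliance report: check name -> pass/fail."""
    return {name: check(cfg) for name, check in CHECKS.items()}

cfg = {"ssh": {"password_authentication": True},
       "disk": {"encrypted": True},
       "logging": {"retention_days": 90}}
report = audit(cfg)
assert report["password auth disabled"] is False  # non-compliant finding
assert report["disk encryption on"] is True
```

Because the rules are code, the same report can run in CI on every infrastructure change and feed continuous compliance dashboards.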
- Policy as Code is an approach to managing and automating policies and rules in IT infrastructure by defining them as code. While Compliance as Code solutions automate compliance checks and focus on monitoring and reporting, Policy as Code covers a broader range of policies and rules, including security, configuration and governance, and actively manages policies. Key features include defining security and configuration policies as code, automating policy enforcement on infrastructure, simplifying change management and policy versioning, and integrating with CI/CD and IaC tools. The “Policy as Code” approach began to evolve in the mid-2010s.
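The active-enforcement aspect of Policy as Code can be sketched like this: whereas a compliance check reports on existing state, a policy engine gates a proposed change before it is applied. The policies and the change schema are hypothetical:

```python
# Policies as named predicates over a proposed infrastructure change.
POLICIES = [
    ("no public storage buckets",
     lambda change: not (change["type"] == "bucket" and change.get("public"))),
    ("prod changes need an approved ticket",
     lambda change: change["env"] != "prod" or change.get("ticket_approved", False)),
]

def enforce(change: dict):
    """Evaluate every policy; deny the change if any policy is violated."""
    violations = [name for name, ok in POLICIES if not ok(change)]
    return ("deny", violations) if violations else ("allow", [])

assert enforce({"type": "bucket", "public": True, "env": "dev"})[0] == "deny"
assert enforce({"type": "vm", "env": "prod", "ticket_approved": True})[0] == "allow"
```

Engines such as Open Policy Agent apply the same pattern with a dedicated policy language instead of inline predicates.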
- Container Security Solutions secure containerized applications and infrastructure. They are software solutions designed to protect containers, their images, runtime environments, and all associated infrastructure and processes. These solutions provide security for all phases of the container lifecycle, from development to deployment and operation. Solutions can range from specialized container image scanning tools to comprehensive security platforms that integrate multiple functions, including automation, monitoring and policy management. These solutions integrate with CI/CD systems, container orchestration systems, and monitoring and analytics tools. The development of container security solutions began in the mid-2010s, following the growing popularity of containerization technologies such as Docker and orchestration technologies such as Kubernetes.
- Secrets Management Tools are concerned with providing secure storage, access and management of secret data such as passwords, encryption keys and access tokens. The difference between these solutions and KMS is that the former are more sophisticated and cover a wider range of secrets: not only cryptographic keys, but also passwords, tokens and API keys. Secrets Management Tools features include centralized storage of secrets, automatic rotation of secrets, role and policy-based access control, auditing and logging access to secrets, and integration with various applications and services. These solutions can be cloud-based or on-premises, offer different levels of integration with infrastructure and applications, and integrate with CI/CD tools, configuration management and orchestration systems. The development of secrets management solutions began in the early 2010s by companies such as HashiCorp, CyberArk and AWS.
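A toy in-memory store illustrating the features listed above (versioning, role-based access, auditing, rotation). All names are hypothetical, and a real tool would add encryption at rest, leases, dynamic secrets, and more:

```python
import os
import time

class SecretStore:
    """Minimal sketch of a secrets vault (illustrative, not production-grade)."""

    def __init__(self):
        self._data = {}      # name -> list of versions (latest last)
        self._acl = {}       # name -> set of roles allowed to read
        self.audit_log = []  # (timestamp, role, secret name)

    def put(self, name, value, readers):
        self._data.setdefault(name, []).append(value)
        self._acl[name] = set(readers)

    def get(self, name, role):
        # Log every access attempt, including denied ones, before checking the ACL.
        self.audit_log.append((time.time(), role, name))
        if role not in self._acl.get(name, set()):
            raise PermissionError(f"{role} may not read {name}")
        return self._data[name][-1]

    def rotate(self, name):
        """Replace the secret with a fresh random value; old versions are kept."""
        self._data[name].append(os.urandom(16).hex())

store = SecretStore()
store.put("db-password", "s3cret", readers={"billing-app"})
assert store.get("db-password", "billing-app") == "s3cret"
store.rotate("db-password")
assert store.get("db-password", "billing-app") != "s3cret"  # new version served
```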
- RASP (Runtime Application Self-Protection) is a security technology embedded directly into an application or its runtime environment to provide real-time protection against threats. RASP actively monitors application behavior and can automatically respond to detected threats, preventing their exploitation. A RASP solution can be built into the application itself or deployed as a standalone agent in the application’s runtime environment. RASP solutions can be specialized for different programming languages and platforms. RASP is used in conjunction with WAF, IDS/IPS, and monitoring and analytics systems. Unlike WAF or IDS/IPS, which inspect traffic, RASP solutions detect threats not only in traffic, and not only general threats, but also application-specific threats. RASP solutions have evolved since the late 2000s and early 2010s.
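The principle of runtime self-protection can be sketched as a wrapper around a sensitive call that inspects input as the application runs (the injection signature below is deliberately crude and hypothetical; real RASP agents hook much deeper into the runtime):

```python
import re

# Crude SQL-injection signature, for illustration only.
SQLI_PATTERN = re.compile(r"('|--|;|\bUNION\b|\bOR\s+1=1\b)", re.IGNORECASE)

def rasp_protect(db_call):
    """Decorator standing in for RASP instrumentation around a database call."""
    def guarded(query_fragment):
        if SQLI_PATTERN.search(query_fragment):
            raise PermissionError("RASP: blocked suspicious input")
        return db_call(query_fragment)
    return guarded

@rasp_protect
def find_user(name):
    # Stands in for a real database call.
    return f"SELECT * FROM users WHERE name = '{name}'"

print(find_user("alice"))
```

Because the guard runs inside the application, it sees the exact value reaching the database call, which is what lets RASP catch application-specific attacks that a perimeter WAF may miss.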
Vulnerability Management
Vulnerability management could be considered part of the application security and Security DevOps processes were it not for the huge set of problems associated with vulnerabilities in third-party applications and configurations.
Users of modern software often do not have access to their source code and therefore manage vulnerabilities in these applications in a largely reactive rather than preventative manner. In this approach, users or their hired security professionals test already deployed applications (often directly in the production environment) and compare the test results with vulnerability descriptions issued by the authors of those applications or information security organizations.
A separate issue that makes vulnerability management a discipline in its own right is the uniqueness and complexity of specific IT infrastructures where vulnerabilities can amplify each other and can be associated with errors not only in software source code, but also in database, application, interface, operating system or network hardware configurations.
- VA (Vulnerability Assessment) is a technology for identifying, ranking and prioritizing vulnerabilities in computer applications, systems and network infrastructures. The goal of VA is to discover vulnerabilities – potential weaknesses that can be exploited by attackers – and provide recommendations to eliminate or mitigate those vulnerabilities. Varieties of VA tools include network vulnerability scanners, static code security analyzers (SAST), and dynamic application security analyzers (DAST). VA systems integrate with IDS/IPS systems, incident management and threat response tools, and change and patch management systems. The development of vulnerability assessment solutions began in the 1990s. In 1999, the non-profit MITRE Corporation introduced CVE (Common Vulnerabilities and Exposures), a standardized way of identifying publicly known vulnerabilities, on which NIST later built its National Vulnerability Database. This was a major step in the development of vulnerability assessment.
- Vulnerability Scanners are automated tools designed to scan computer systems, networks, and applications for vulnerabilities and additional information. Unlike the general class of Vulnerability Assessment, which includes a wide range of vulnerability identification and assessment functions, scanners focus on finding known vulnerabilities described by various vulnerability databases, both independent, like CVE, and proprietary databases from the developers of these vulnerability scanners. Types of vulnerability scanners are port scanners, computer network topology scanners, network service vulnerability scanners, CGI scanners, etc. SAST and DAST solutions are also often categorized as vulnerability scanners, but we treat them as classes of their own. One of the first vulnerability scanners was ISS, created by Chris Klaus in 1992. The SATAN scanner, developed by Dan Farmer and Wietse Venema in 1995, gained a lot of popularity in its time. In 1997, nmap, the famous host and port scanner, was created. In 1998, Renaud Deraison created the free vulnerability scanner Nessus, which became commercial in 2005 and is still popular today. Qualys and Rapid7 have also contributed greatly to the development of vulnerability scanners.
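At its simplest, the core of a port scanner is a TCP connect loop. The sketch below is a toy version (nothing like nmap’s SYN scanning or service fingerprinting); the connection test is injectable so the logic can be demonstrated without touching a live network:

```python
import socket

def scan_ports(host, ports, try_connect=None):
    """Return the subset of `ports` that accept a TCP connection on `host`.

    `try_connect` is injectable so the scan logic can be exercised without
    a live network; by default it attempts a real TCP connection.
    """
    if try_connect is None:
        def try_connect(h, p):
            try:
                with socket.create_connection((h, p), timeout=0.5):
                    return True
            except OSError:
                return False
    return [p for p in ports if try_connect(host, p)]

# Simulated target (documentation address) with ports 22 and 80 open:
fake = lambda h, p: p in (22, 80)
print(scan_ports("198.51.100.7", [21, 22, 80, 443], try_connect=fake))  # -> [22, 80]
```

A real scanner would add timeouts per host, parallelism, banner grabbing, and mapping of open ports to known service vulnerabilities.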
- VM (Vulnerability Management) is a comprehensive approach to detecting, assessing, prioritizing and remediating vulnerabilities in information systems and networks. While VA solutions focus on on-demand vulnerability detection and assessment, VM is a broader process that includes not only regular vulnerability detection and assessment, but also subsequent remediation actions. VM varieties include: solutions focused on network devices and infrastructure; solutions that specialize in application and web services vulnerabilities; and integrated platforms that cover a wide range of assets and resources. VMs integrate with vulnerability scanners, with SIEM and MDR systems, and with configuration management tools. The development of VM solutions began in the 2000s. Tenable, Qualys and Rapid7 contributed to the development of VM solutions.
- SAST (Static Application Security Testing) is a methodology and toolkit for statically analyzing source code, bytecode, or binary files of applications to detect security vulnerabilities. SAST analyzes applications for vulnerabilities early in development, helping to prevent vulnerabilities from appearing in the final product. Varieties – SAST integrated directly into development environments (IDEs); standalone SAST solutions; and cloud-based SAST solutions. SAST systems integrate with DAST, version control and code repository systems, vulnerability management and security monitoring systems. Although the process of static source code analysis has existed since computers have existed, the method expanded to security in the late 1990s. Development of SAST solutions occurred from the early to mid-2000s, through the efforts of Fortify Software (later acquired by Hewlett-Packard), Veracode, Checkmarx, SonarSource, and other companies.
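The underlying principle, analyzing code without running it, can be shown with a toy checker that walks a Python syntax tree and flags calls to dangerous functions (real SAST tools apply far richer rule sets, taint tracking and data-flow analysis):

```python
import ast

# Function names this toy analyzer treats as dangerous.
DANGEROUS = {"eval", "exec", "system"}

def find_dangerous_calls(source):
    """Statically scan Python source and report (function, line) findings."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # Handles both bare names (eval) and attributes (os.system).
            name = getattr(func, "id", getattr(func, "attr", None))
            if name in DANGEROUS:
                findings.append((name, node.lineno))
    return findings

code = "import os\nuser = input()\neval(user)\nos.system(user)\n"
print(find_dangerous_calls(code))  # -> [('eval', 3), ('system', 4)]
```

The same pattern, parse to a tree, match rules, report locations, scales up to the IDE plugins and CI integrations described above.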
- DAST (Dynamic Application Security Testing) is a methodology and tools for dynamically analyzing application security at runtime. DAST solutions are a type of vulnerability scanners aimed at detecting vulnerabilities in applications (mainly web applications) while they are actively running. These include, for example, session management issues, XSS (Cross-Site Scripting), SQL injection and other vulnerabilities. DAST solutions integrate with CI/CD, VM and SAST. DAST systems began to evolve in the early 2000s. Some of the first DAST solutions were SPI Dynamics WebInspect (later acquired by HP) and Sanctum AppScan (later acquired by IBM and then by HCL). One of the popular DAST tools was Acunetix, introduced to the market in 2005.
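A minimal sketch of one DAST check: inject a marker payload and test whether the running application reflects it unescaped, a classic reflected-XSS indicator. The two applications below are simulated stand-ins, not real targets:

```python
import html

# Marker payload; reflection of it verbatim suggests an XSS vulnerability.
PAYLOAD = "<script>alert(1)</script>"

def check_reflected_xss(url, param, fetch):
    """fetch(url, params) -> response body as a string (injectable for testing)."""
    body = fetch(url, {param: PAYLOAD})
    return PAYLOAD in body  # reflected verbatim -> likely vulnerable

def vulnerable_app(url, params):
    # Echoes user input without escaping.
    return f"<html>You searched for: {params['q']}</html>"

def safe_app(url, params):
    # HTML-escapes the input before rendering.
    return f"<html>You searched for: {html.escape(params['q'])}</html>"

print(check_reflected_xss("http://example.test/search", "q", vulnerable_app))  # True
print(check_reflected_xss("http://example.test/search", "q", safe_app))        # False
```

Production DAST tools run thousands of such probes (SQL injection, session fixation, etc.) against a live deployment and correlate responses, timing and error pages rather than a single substring match.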
- IAST (Interactive Application Security Testing) is an application security methodology and toolkit that provides dynamic analysis combined with static analysis. IAST integrates directly into applications or their runtime environment to detect vulnerabilities while testing applications in real-world environments. IAST interoperates with IDEs, SAST, DAST, version control, CI/CD and test automation systems. IAST solutions began to develop in the early 2010s. One of the first IAST solutions was released by the Israeli company Seeker Security, which was later acquired by Quotium, which was later acquired by Synopsys.
- MAST (Mobile Application Security Testing) is a mobile application security assessment technology for detecting vulnerabilities, privacy issues, and code and configuration errors. MAST aims to secure mobile applications on different platforms, such as Android and iOS, taking into account the specifics of mobile devices and ecosystems. The varieties are Mobile SAST, Mobile DAST, Mobile IAST, and Automated MAST (AMAST). The development of MAST began in the early 2010s, thanks to the efforts of Veracode, NowSecure, Synopsys and other companies.
- PT (Pentest tools) is a broad class of penetration testing (pentest) tools used to assess the security of networks, systems, and applications by simulating attacker actions. The main purpose of a pentest is to identify vulnerabilities, weaknesses and deficiencies in defenses to prevent real cyberattacks. The term VAPT (Vulnerability Assessment and Penetration Testing) is often used for the combined vulnerability assessment and pentest process and its tools. Varieties – tools for vulnerability scanning and analysis, manual testing and exploitation of vulnerabilities, testing of web applications, wireless networks, mobile applications, etc. The development of pentest tools is associated with the development of vulnerability assessment tools in the 1990s. Their emergence as a separate class can be associated with the appearance in 2003 of such advanced tools as PortSwigger Burp Suite, Rapid7 Metasploit and many others.
- OSINT (Open-Source Intelligence) includes tools and techniques aimed at collecting and analyzing open-source data to identify vulnerabilities, threats, incidents, and assess risks. Broadly speaking, OSINT is any open-source intelligence. The main goals of OSINT in corporate security are to obtain valuable information about various security issues and to monitor public data to assess risks and protect organizations from cyberattacks. Some OSINT techniques and tools can be used to study the Darknet, Deep Web, and Dark Web, although gathering information from these parts of the Internet is not the primary focus of OSINT. OSINT tools integrate with SIEM and TIP. The first documented OSINT practice dates back to the mid-nineteenth century in the United States. OSINT practice in computer networks began with the emergence of these networks. OSINT technologies acquired qualitatively new levels with the emergence of WWW and social networks. One of the first popular simple OSINT tools was and still is an ordinary browser. One of the first specialized OSINT methodologies is Google Dorking. The Internet Archive Wayback Machine (1996), BuiltWith (2007), Maltego (2008), Shodan (2009), Skopenow (2012), Have I Been Pwned (HIBP, 2013), and OSINT Framework (2015) solutions have also been and continue to be popular OSINT tools.
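As a small illustration of the Google Dorking methodology mentioned above, queries are composed from search operators; the helper below simply builds such query strings (the domain is a placeholder, not a real target):

```python
def google_dork(site=None, filetype=None, intitle=None, text=None):
    """Compose a Google Dorking query string from common search operators."""
    parts = []
    if site:
        parts.append(f"site:{site}")
    if filetype:
        parts.append(f"filetype:{filetype}")
    if intitle:
        parts.append(f'intitle:"{intitle}"')
    if text:
        parts.append(f'"{text}"')
    return " ".join(parts)

# E.g., look for exposed spreadsheets mentioning passwords on a placeholder domain:
print(google_dork(site="example.com", filetype="xlsx", text="password"))
# -> site:example.com filetype:xlsx "password"
```

Specialized OSINT platforms automate this kind of query construction across many sources (search engines, certificate logs, DNS, social networks) and aggregate the results.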
- EASM (External Attack Surface Management) is a class of solutions aimed at managing the risks associated with an organization’s external attack surface. EASM scans and analyzes an organization’s Internet assets, such as websites, web applications, servers, network devices, and other published resources, to identify vulnerabilities, weaknesses, and potential threats. EASM utilizes DNS records, Whois, and Internet scanning to discover a company’s external infrastructure. EASM varieties include web application and API scanning solutions, domain and IP address monitoring platforms, and tools for analyzing and managing digital risk. EASM solutions integrate with VAPT, SIEM, MDR and other systems. EASM solutions have been evolving since the early 2010s. RiskIQ, CyCognito and other companies have contributed to the development of EASM.
- CAASM (Cyber Asset Attack Surface Management) is a class of solutions designed to detect, analyze, manage and protect all of an organization’s digital assets. CAASM covers a wide range of assets, including network devices, servers, applications, cloud services and IoT devices, and aims to mitigate the risks associated with their operation and management. While EASM typically focuses on external assets, CAASM often includes both internal and external assets in its scope. Internal assets include software, firmware, or devices that are used by members of the organization. External assets are available online and can include publicly available IP addresses, web applications, APIs, and more. CAASM functions include automated discovery and inventory of all of an organization’s digital assets, assessing risks and vulnerabilities across assets, providing a centralized view and management of the attack surface, and integrating security data to improve incident response and protection planning. CAASM varieties include asset management platforms for cloud and virtualized environments; enterprise network and infrastructure solutions; and tools for monitoring and managing IoT and OT (Operational Technology) devices. CAASM systems integrate with VAPT, SIEM, TIP, MDR, etc. CAASM solutions have been evolving since the late 2010s. JupiterOne, Axonius, Armis and other companies are developing CAASM systems.
Cloud Security
The development of cloud technologies has led to the emergence of new challenges and threats in the field of information security. Cloud security requires consideration of highly scalable, dynamic, and distributed features. Unlike traditional security approaches, which often focus on perimeter defense and internal network resources, cloud security solutions must provide protection in a more open, flexible and changeable environment.
A key aspect of cloud security is access and identity management, as cloud services are often accessible from anywhere in the world. Accordingly, it is important to not only control who has access to cloud resources, but also to protect data and applications in the cloud. This requires a comprehensive approach that includes configuration monitoring, workload protection, and ongoing compliance assessment.
In this section, we’ll explore how today’s cloud security solutions meet these challenges by providing protection in a dynamic and scalable cloud environment. We’ll learn how they help organizations manage cloud risk and how they fit into their overall information security strategy.
- CASB (Cloud Access Security Brokers) are intermediary systems between user organizations and cloud providers. CASBs allow administrators to deploy and enforce security policies more conveniently, and help companies define security rules even when their administrators are unfamiliar with enforcing cybersecurity in the cloud. CASBs help organizations control the use of cloud applications and protect data. CASB varieties: forward and reverse proxies, and CASBs that integrate with cloud services through their APIs. CASBs integrate with IAM, SIEM, SOAR, VM, GRC and other systems. The term CASB was coined by Gartner in 2012. CASB developers include Microsoft, Symantec, Palo Alto Networks, McAfee, Trend Micro, Forcepoint, Skyhigh Security, Netskope, CipherCloud and others.
- CSPM (Cloud Security Posture Management) is a class of solutions designed for automated security management of cloud environments. The basic functions of CSPM include monitoring cloud configurations for breaches and vulnerabilities, automated correction of misconfigurations or risky configurations, assessment of cloud compliance, and analysis and reporting of cloud security posture. CSPM systems integrate with CASB, CIEM, SIEM, SOAR, IAM, etc. CSPM solutions began to develop in the mid-2010s. CSPM developers include Microsoft, CrowdStrike, Trend Micro, Palo Alto Networks, McAfee, Check Point, Orca Security, Zscaler and others. As CSPM technologies have matured, they have also taken on access and application control and now help in responding to and remediating cyberattacks. Current directions in CSPM development include context-aware access, integration of different cloud types, unified management of cloud and on-premises applications and infrastructure resources, integration with DevOps, the use of artificial intelligence, and so on.
- CWPP (Cloud Workload Protection Platform, or Cloud Workload Protection, CWP) is a class of solutions designed to secure and protect workloads in cloud environments, including virtual machines, containers, and serverless functions. CWPPs are typically based on software agents that run continuously on protected computers, collect security-critical data and events, and send them to cloud services. The functions of CWPPs are to detect and prevent threats associated with cloud workloads; monitor and manage security configurations; protect and encrypt data in the cloud; and manage access and identity in cloud environments. The varieties of CWPPs are solutions focused on protecting virtual machines and servers; platforms for protecting containerized applications and microservices; and tools for securing serverless functions and cloud applications. CWPPs integrate with CASB, CSPM, VM, IDS, IAM, etc. Development of CWPP began in the mid-2010s. Palo Alto Networks, McAfee, Symantec, Trend Micro, VMware, Check Point, Microsoft and others participated in the development of CWPP.
- CIEM (Cloud Infrastructure Entitlement Management) solutions manage and optimize user access and authorization in cloud environments. CIEMs manage the risks associated with privileged access, permissions and access policies in cloud infrastructures such as AWS, Azure, Google Cloud and others. CIEM functions include managing privileged access to cloud infrastructure; optimizing and minimizing redundant access rights and permissions; monitoring and auditing user activity and access configurations; and ensuring compliance with access management regulations. CIEM integrates with IAM, CASB, SIEM, SOAR and other systems. In the late 2010s, CIEM solutions began developing. CIEM developers include CrowdStrike, Palo Alto Networks, Rapid7, CyberArk, Microsoft, SailPoint, CloudKnox and other companies.
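The least-privilege analysis at the heart of CIEM can be sketched as a comparison of permissions granted to a cloud identity against permissions actually observed in audit logs (identity and permission names below are invented examples in an AWS-like style):

```python
def redundant_permissions(granted, used_events):
    """Report granted permissions never observed in use (candidates for removal)."""
    used = {e["permission"] for e in used_events}
    return sorted(set(granted) - used)

granted = ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "iam:PassRole"]
events = [
    {"identity": "app-role", "permission": "s3:GetObject"},
    {"identity": "app-role", "permission": "s3:PutObject"},
]
print(redundant_permissions(granted, events))
# -> ['iam:PassRole', 's3:DeleteObject']
```

Real CIEM products perform this analysis continuously across thousands of identities, weight the findings by risk (e.g., privilege-escalation paths), and can generate tightened policies automatically.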
- CNAPP (Cloud-Native Application Protection Platform) is a solution that combines the functions of CSPM, CWPP, and CIEM. CNAPP systems are a unified and tightly integrated set of security and compliance capabilities designed to protect cloud applications during the development and production phases. CNAPP brings together many previously disparate capabilities, including container scanning, cloud security posture management, infrastructure-as-code scanning, cloud infrastructure entitlement management, runtime cloud workload protection, and runtime vulnerability/configuration scanning. The term CNAPP was coined by Gartner in 2021. CNAPP developers include Palo Alto Networks, Trend Micro, McAfee, CrowdStrike, Zscaler and others.
- CDR (Cloud Detection and Response) solutions are designed to detect and respond to security threats in cloud environments. CDR provides security monitoring, analysis and management of cloud resources, including infrastructure, applications and data. CDR solutions perform EDR, NDR, and XDR functions in clouds. CDR could equally be described in the next section, as it sits on the border between the two. The first CDR solutions were created in the early 2010s, when Amazon Web Services, Microsoft Azure, Google Cloud and others started to introduce specialized tools to protect their cloud platforms.
Security Information and Event Management
Throughout the above sections of this work, we have grouped security features by function, protected objects, parts of the infrastructure, or according to stages of the security issue lifecycle. This specialization of solutions leads to a significant drawback – the need for manual work by experts to aggregate the results of multiple heterogeneous security tools and make further decisions on how to analyze, remediate, or otherwise handle security issues and problems. This shortcoming has been exacerbated as the diversity of security threats has increased, the complexity of the infrastructures being protected has grown, and the boundaries have blurred with the use of mobile and cloud technologies.
So it is only natural that detection, prevention and response functions, as well as security data from very different components and processes, have begun to be integrated within separate solutions to accelerate and automate the real-time remediation of security problems at any stage of those problems.
Key elements of this integration have been the concepts of security events and security information. Logging is the foundation of managing these elements, so we will start this section with a description of logging.
- LM (Log Management, journaling, logging) is a class of solutions designed to collect, store, aggregate, perform basic analysis of, and manage logs (event logs) from various computer or IT infrastructure sources. These solutions help organizations manage large volumes of logs for security, compliance and analytics purposes. Log management technologies are core technologies for developing, debugging, deploying, operating, and securing information technology. Logging tools are built into most, if not all, modern software. The history of LM can be traced back to the predecessors of the Unix operating system in the late 1950s. As systems and applications evolved, more advanced log collection tools such as syslog-ng (1998) and rsyslog (2004) became available. With the development of DevOps and CI/CD since the late 2000s, the need for automated and integrated log management solutions has grown. This has led to the development of modern log management tools.
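A minimal example of structured logging in Python: JSON lines are the format most modern log management pipelines ingest (the field names here are our own choice, not a standard):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line for easy ingestion."""
    def format(self, record):
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "msg": record.getMessage(),
        })

logger = logging.getLogger("auth")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.warning("failed login for user %s", "alice")
# emits e.g. {"ts": "...", "level": "WARNING", "logger": "auth", "msg": "failed login for user alice"}
```

Shipping such lines to a central collector (syslog, an agent, or an HTTP endpoint) is what turns application logging into log management.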
- SIM (Security Information Management) is an evolution of logging with a focus on advanced analysis of security events from various sources in an organization. The functions of SIM are to aggregate and store large amounts of security data, support security analysis and reporting to identify trends and patterns, manage logs for regulatory compliance, and integrate with various security tools and systems to collect data. Methods for analyzing and monitoring security have evolved since the late 1970s. Log consolidation and centralization solutions have evolved since the late 1990s.
- SEM (Security Event Management) is a solution for monitoring, detecting and analyzing security events in real time. SEM is designed to respond quickly to security threats and incidents, as well as to provide visibility and control over security events in an organization’s IT infrastructure. SEM systems evolved in the first half of the 2000s.
- SIEM (Security Information and Event Management) is an amalgamation of SIM and SEM methods and terms. Gartner, one of the trendsetters in information security terminology, coined the term SIEM in 2005. After that, the SIEM concept quickly became the standard for managing current and past corporate security events, as well as a basic tool for Security Operations Centers (SOCs). The goal of SIEM is centralized security management and early detection and response to security incidents. SIEM functions include collecting and aggregating logs and security event data, advanced data analysis to identify potential threats and anomalies, generating real-time security alerts and warnings, supporting incident investigation and management, reporting and compliance. Modern SIEM systems integrate with IDS, IAM, TIP, EDR, SOAR and many other solutions. SIEM development has actively involved both large companies such as IBM, McAfee and Splunk, as well as new players bringing innovation to the sector.
- TIP (Threat Intelligence Platform) is a technology solution that collects information security threat data from various sources and formats, aggregates and organizes it. The TIP then delivers threat intelligence results, including indicators of compromise (IoC), to analysts and other systems for further analysis and decision-making. TIP systems process information about known malware and other threats to identify, investigate, and respond to them. Threat intelligence is also used to proactively search the network for threats (threat hunting). TIP solutions can be on-premises or Software as a Service (SaaS). TIP integration with SIEM enables the use of intelligence to correlate events and improve the accuracy of threat detection. TIP integration with MDR and SOAR is used to automate incident response. TIP solutions have evolved since the early 2000s.
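IoC matching, the core mechanic a TIP feeds into other systems, can be sketched as a lookup of event attributes against a threat feed (all indicator values below are invented or taken from documentation address ranges):

```python
# Toy threat-intelligence feed of known-bad indicators.
IOC_FEED = {
    "ip": {"203.0.113.66", "198.51.100.23"},
    "sha256": {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},
}

def match_iocs(events):
    """Correlate events against the feed; return (event id, IoC type, value) hits."""
    hits = []
    for ev in events:
        for ioc_type, values in IOC_FEED.items():
            if ev.get(ioc_type) in values:
                hits.append((ev["id"], ioc_type, ev[ioc_type]))
    return hits

events = [
    {"id": 1, "ip": "192.0.2.10"},    # benign
    {"id": 2, "ip": "203.0.113.66"},  # known-bad IP from the feed
]
print(match_iocs(events))  # -> [(2, 'ip', '203.0.113.66')]
```

In practice the feed is continuously updated from many sources, and the hits are forwarded to SIEM for correlation or to SOAR/MDR for automated response.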
- UEBA (User and Entity Behavior Analytics) is a class of solutions designed to analyze and detect abnormal behavior of users and “entities” (devices, applications, etc.) on a network. UEBA uses machine learning algorithms and behavioral analytics to detect insider threats such as fraud, account compromise, and insider attacks. UEBA solutions integrate with SIEM, EDR, NBA and TIP. Unlike EDR solutions, which focus on external threats, UEBA solutions focus on internal, insider threats (malicious or negligent employees, etc.). The forerunner of UEBA was UBA (User Behavior Analytics), a class of solutions that emerged around 2014 to analyze user activity and detect insider threats and fraud. In 2015, Gartner expanded the idea of UBA from users to entities and described the concept of UEBA, which quickly became popular.
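A heavily simplified sketch of the behavioral-baseline idea: flag activity that deviates strongly from a user's own history. Real UEBA products build far richer per-user and per-entity models than this single z-score test:

```python
import statistics

def is_anomalous(history, today, threshold=3.0):
    """True if today's activity count deviates from the user's own baseline."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return abs(today - mean) / stdev > threshold

logins_per_day = [4, 5, 6, 5, 4, 6, 5]   # a typical week for this account
print(is_anomalous(logins_per_day, 5))   # False: within the baseline
print(is_anomalous(logins_per_day, 40))  # True: possible account compromise
```

The same baseline-and-deviation pattern applies to data volumes transferred, resources accessed, or login locations; alerts from such models typically flow into SIEM for analyst triage.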
- DRPS (Digital Risk Protection Service) is a class of solutions aimed at protecting organizations from digital threats related to their brand, reputation, online presence, and digital assets. Despite the presence of the word “risk” in the name, this class of solutions refers more to security incident management, as it often focuses on detecting information leaks. DRPS functions include monitoring and analyzing online sources for threats, vulnerabilities and incidents; identifying and remediating reputational and fraud threats; tracking data breaches and illegal use of intellectual property; and providing recommendations and strategies to minimize risks. DRPS varieties include solutions for monitoring social networks and online forums, Deep Web and Dark Web analysis tools, digital reputation and brand management services. DRPS solutions integrate with SIEM, TIP, MDR, VM and GRC systems. DRPS systems began to develop in the mid-2010s, thanks to the efforts of Digital Shadows, ZeroFOX, RiskIQ and other companies.
- MDR (Managed Detection and Response) is a class of information security tools and services that combines threat detection technologies, advanced forensics, and response operations to provide effective threat management. MDRs obtain indicators of compromise (IoC) from TIPs, collect data from enterprise systems and validate them for IoC, and automate routine response tasks. Unlike TIPs, MDR solutions focus on real-time incident analysis and response. MDR integrates EPP, EDR, IDS, SIEM, NDR, XDR, SOAR and other solutions. The key feature of MDR is the presentation of the solution in the form of a service and the availability of expert resources for its provision. MDR solutions and services began to take shape in the mid-2010s. Leading IS companies such as FireEye, CrowdStrike, and Rapid7 have played a significant role in developing and promoting MDR services and solutions.
- SOAR (Security Orchestration, Automation, and Response) is a technology that evolved from IRP solutions to help execute, automate, and coordinate between different specialists and tools to respond to security incidents. SOAR collects input data, such as alerts from SIEM, TIP and other systems, and helps identify, prioritize and manage standardized incident response actions, as well as automatically respond to certain incidents. The term SOAR was introduced by Gartner in 2017. One of the earliest significant examples of a SOAR system is IBM Security SOAR, developed from an acquired startup, Resilient, whose CTO was renowned information security expert Bruce Schneier.
- XDR (Extended Detection and Response) are cyberattack detection and response solutions that integrate the functionality of multiple solutions to increase visibility into security events, analyze them more fully and deeply, and respond to attacks more effectively. XDR can be viewed as an evolution of EDR as applied to the enterprise network. The integrations of XDR vary in different implementations and often include EPP, EDR, NTA/NDR, FW/NGFW, UEBA, SIEM, SOAR, TIP, etc. XDR solutions process information about user actions, endpoint events, email, applications, networks, cloud workloads, and data. The functions of XDR are to collect and correlate data from different layers of defense; apply advanced detection techniques such as machine learning, behavioral analysis, and signature matching; provide a single management console to visualize, search, filter, and sort security events; support automation and orchestration of incident response actions such as blocking, isolating, removing, or encrypting malicious objects; and integrate with other security systems to coordinate network or system defense activities. The term XDR was coined in 2018 by Nir Zuk of Palo Alto Networks, who proposed a diagram illustrating the evolution and interconnection of the many solutions unified under XDR.
- NDR (Network Detection and Response) is a network infrastructure cyber threat detection and response solution based on NTA technology, artificial intelligence, machine learning and behavioral analysis. The NDR solution inspects raw network packets and metadata for both internal network communications (also called “east-west”) and external network communications (also called “north-south”). Unlike NTA, an NDR system uses historical metadata to analyze threats and not only notifies but also automatically responds to them through integration with FW, EDR, NAC or SOAR. According to Gartner, which coined the term NDR in 2020, this technology is one of the three pillars of the visibility triad, along with EDR and SIEM.
- NSM (Network Security Monitoring) is a loosely defined class of solutions that can combine the functions of IDS, IPS, EDR, MDR and SIEM.
Risk Management
In the previous sections of our review, we looked at numerous classes of information security solutions covering a wide range of technologies. These solutions are primarily focused on the technical aspects of information security – preventing, detecting, and remediating vulnerabilities, attacks, incident consequences, and near-term threats – on the scale of seconds, hours, days, and weeks.
However, in the context of ever-increasing and more complex cyber threats and, more importantly, ever-increasing security budgets, organizations need to anticipate and plan as far ahead as possible – months and years in advance. To do so, they need to go beyond purely technical measures and think more strategically, improve risk management and better incorporate international, industry and government regulations and standards into their strategy.
Therefore, in the last part of our work, we move from technical aspects to strategic and managerial methods and tools. The solutions described in this section provide an all-encompassing approach to risk management. These solutions enable organizations to not only look far into the future when planning for risk management and compliance, but also to effectively manage the return on security investments and enhance the corporate security culture.
- CMDB (Configuration Management Database) is a tool or database for storing information about the components of an enterprise IT infrastructure and their interrelationships. The main task of CMDB is to provide configuration management of IT services and infrastructure. CMDB is not an information security solution, but it plays an important role for it. For example, a CMDB can be used to track changes to the IT infrastructure, manage assets and their dependencies, support incident and problem management, change planning and risk assessment. IT Asset Management (ITAM) solutions perform similar functions.
- CSAM (Cybersecurity Asset Management) is a solution for managing IT assets in an organization to ensure their security. These assets include network equipment, servers, workstations, operational technology (OT), mobile devices, applications, and data. The main objective of asset management in the context of cybersecurity is to improve the view of an organization’s IT assets, and to better manage the risks associated with IT infrastructure and data sets. CSAM functions include automatic asset discovery, asset classification, monitoring and analyzing asset security status, vulnerability management and patch management, and integration with other security systems. Unlike CAASM solutions, which cover a wide range of internal and external assets, as well as their vulnerabilities and related threats and incidents, CSAM solutions deal mainly with internal assets and the associated risks, i.e., they operate at a higher level. CSAM solutions began to evolve in the early 2000s. Axonius, JupiterOne, Lansweeper, Noetic Cyber, Panaseer, Qualys and others have contributed to CSAM development.
- ISRM (Information Security Risk Management or IT Risk Management, ITRM) is a tool for automating information security risk management – the process of identifying, analyzing, assessing, and tracking risks. Risk management reduces the probability and negative consequences of incidents, and also helps assess the cost-effectiveness of implementing security measures and tools. ISRM functions include assistance in identifying assets and threats, assessing vulnerabilities and the likelihood of incidents, evaluating potential damage, developing and tracking risk treatment measures, and continuously monitoring and reassessing risks. Varieties of ISRM include analytical tools; threat intelligence, vulnerability, or incident management systems with risk management functions; business continuity planning tools; and compliance management tools. The concept of risk management in IS began to develop in the 1970s. In 1987, the CRAMM methodology (CCTA Risk Analysis and Management Method) appeared and was distributed in the form of ISRM software solutions. Significant steps in the development of ISRM methodology were the appearance of the NIST SP 800-30 standard in 2002 and ISO 27005 in 2008.
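The qualitative core of the risk assessment process described above is often a likelihood-by-impact scoring matrix, in the spirit of NIST SP 800-30-style qualitative assessments. The levels, thresholds, and example risk register entries below are illustrative assumptions, not a standard:

```python
# Hypothetical qualitative risk scoring: score = likelihood level x impact level.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood: str, impact: str) -> int:
    """Combine qualitative likelihood and impact into a numeric score (1..9)."""
    return LEVELS[likelihood] * LEVELS[impact]

def treatment(score: int) -> str:
    """Map a score to a risk treatment decision (thresholds are illustrative)."""
    if score >= 6:
        return "mitigate immediately"
    if score >= 3:
        return "plan mitigation"
    return "accept and monitor"

# A toy risk register: (threat scenario, likelihood, impact).
register = [
    ("unpatched web server exploited", "high", "high"),
    ("laptop theft", "medium", "medium"),
    ("office flood", "low", "medium"),
]
for scenario, likelihood, impact in register:
    s = risk_score(likelihood, impact)
    print(f"{scenario}: score {s} -> {treatment(s)}")
```

ISRM tools automate exactly this loop at scale: they keep the register current, recompute scores as threat and vulnerability data changes, and track the status of each treatment measure.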
- GRC (Governance, Risk, and Compliance) is a set of processes and technologies aimed at effective strategic management of an organization, minimizing its risks, and ensuring its compliance with applicable legal and regulatory requirements. The main purpose of GRC in the context of information security is to ensure consistent and effective security management across the organization in the context of its industry, regulatory environment, and macroeconomic conditions. GRC tools can be divided by compliance area, as well as into generalist and specialized tools, such as those embedded in cloud services. GRC solutions have evolved since the late 1980s. In the early 2000s, Forrester's Michael Rasmussen coined the term GRC for these processes and solutions. In 2002, Symbiant created one of the first GRC tools. Enterprise Risk Management (ERM) processes and tools closely related to GRC have been taking shape since the late 1990s. In 2017, Gartner proposed the concept of Integrated Risk Management (IRM), which continued the evolution of ERM.
- CCM (Continuous Controls Monitoring) is a fairly broad class of tools for continuous or high-frequency tracking of the effectiveness or safety of economic, financial, or technological processes and controls. In the context of information security, CCM solutions are designed to monitor and analyze the effectiveness of managerial, operational, and technical security controls. CCM tools track the effectiveness of processes and controls that mitigate risks related to cyberattacks, business continuity, and compliance. These tools help ensure compliance with regulations, policies, and standards, and help improve the overall security and efficiency of the IT infrastructure. The concept of CCM began to evolve in the late 1990s. The ISACA COBIT standard released in 1996 and the US SOX (Sarbanes-Oxley) Act of 2002 influenced the development of CCM solutions.
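The monitoring loop at the heart of CCM can be sketched as follows. The control names and check functions are invented for illustration; a real CCM tool would query directory services, backup systems, or configuration APIs instead of returning hard-coded values:

```python
from datetime import datetime, timezone

def password_policy_enforced() -> bool:
    # Placeholder: a real check would query a directory service or config API.
    return True

def backups_recent() -> bool:
    # Placeholder: a real check would verify the last successful backup time.
    return False

# A registry of controls: name -> check function returning True if effective.
CONTROLS = {
    "AC-1 password policy": password_policy_enforced,
    "CP-9 recent backups": backups_recent,
}

def run_monitoring_cycle(controls):
    """One monitoring cycle: evaluate every control and collect the failures."""
    failures = [name for name, check in controls.items() if not check()]
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "failed_controls": failures,
    }

report = run_monitoring_cycle(CONTROLS)
print(report["failed_controls"])  # ['CP-9 recent backups']
```

Running such cycles continuously, rather than during an annual audit, is what distinguishes CCM: a failing control surfaces within hours instead of months.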
- SAT (Security Awareness Training) is a set of solutions designed to educate, inform, and raise awareness among users and employees about information security principles, threats, and best practices. The main goal of these tools is to reduce human error risks through education and awareness. SAT offerings include training programs (courses, webinars, and interactive simulations); knowledge testing (quizzes and tests to check understanding of the material); phishing simulations (controlled phishing attacks to assess user response); learning management systems (LMS) for distance learning and tracking employee progress; newsletters and reminders with information security updates and tips; and many other training methods and tools. The concept of IS training and awareness-raising originated in the 1990s.
- Phishing Simulation (PS) solutions are designed to simulate phishing attacks in order to train users to recognize and prevent such threats. These tools allow companies to conduct controlled test attacks on their employees to assess their level of awareness and preparedness for real phishing attacks. Phishing simulation features include creating simulated phishing campaigns with different scenarios, tracking user reactions to phishing emails, analyzing results and reporting on employee vulnerability levels, and training employees in phishing recognition techniques. PS solutions integrate with SAT solutions. The concept of phishing simulation emerged in the early 2000s.
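The result-analysis step of a phishing campaign, tracking reactions and reporting a vulnerability level, reduces to simple aggregation. The event records, user names, and action labels below are made-up illustrations of the kind of data a PS platform collects:

```python
from collections import Counter

# Illustrative campaign telemetry: how each user reacted to a simulated email.
events = [
    {"user": "alice", "action": "reported"},
    {"user": "bob",   "action": "clicked"},
    {"user": "carol", "action": "ignored"},
    {"user": "dave",  "action": "clicked"},
    {"user": "erin",  "action": "reported"},
]

def campaign_report(events):
    """Summarize user reactions and compute the share who clicked the lure."""
    counts = Counter(e["action"] for e in events)
    click_rate = counts["clicked"] / len(events)
    return {"counts": dict(counts), "click_rate": click_rate}

summary = campaign_report(events)
print(summary)  # click_rate 0.4: 40% of recipients clicked the simulated link
```

Tracking this click rate (and the report rate) across successive campaigns is how organizations measure whether their SAT program is actually reducing human-error risk.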
Conclusion
Cybersecurity, despite its rich history, is still an evolving discipline. The terminology and concepts of security technologies are constantly being extended, refined, and revised. Despite extensive classification and standardization efforts by organizations such as Gartner, Forrester, IDC, NIST, and ISO, vendors and creators of new security technologies keep increasing the diversity of terms and the complexity of classifying security solutions.
New solutions are constantly replacing older ones or integrating multiple technologies from different classes. Sometimes solutions evolve significantly while the name of their class remains unchanged for a long time. Conversely, vendors sometimes try, for marketing reasons, to position their solutions as a new class of their own invention without introducing any fundamental novelty in functionality. Sometimes vendors assign their solution to a better-selling class even though the solution actually lacks the corresponding functions.
These processes increase the probability of errors and inaccuracies in reviews like ours. We used a number of sources for this review: Wikipedia, Palo Alto Networks Cyberpedia, Zenarmor Security Basics, Kaspersky IT Encyclopedia, Delinea PAM & Cybersecurity Glossary, and many others. However, none of these sources provided sufficiently complete and accurate information, so we had to analyze and compile it manually. We would therefore be grateful for recommendations of other sources and for your help in refining and developing this work. Please send us your suggestions for improvement.
We hope that you found our review not only informative, but also an encouragement to dive deeper into the world of automation and information security solutions, and to help bring order to it.
In summary, the tools and technologies we have reviewed help, on the one hand, to simplify system and data protection processes, and on the other, to accelerate and strengthen them. As practitioners rather than theorists, we actively develop and integrate these tools in an effort to provide you with the best free and commercial innovative solutions.
If you need an implementation or security assessment of such solutions, or any type of cybersecurity automation, write to us today for a free consultation using the promo code “SOLUTIONS”.
Subscribe to our social media at the bottom of this page so you don’t miss our news and blog articles.