Artificial Intelligence Security

30 Nov 2024 Author: Vladimir Buldyzhov

Trust in artificial intelligence

The widespread adoption of artificial intelligence (AI) raises serious security issues that previously seemed the domain of science fiction writers. 

How confident can we be in the smooth and uninterrupted operation of critical systems controlled by AI? Can we entrust critical decisions about human and animal health to artificial intelligence? Can we trust a machine with the budget of a state or even a single commercial or public organization? 

AI, particularly Machine Learning (ML), technologies are rapidly improving human life — from everyday tasks to global business processes. These technologies have enormous potential to automate routines, improve efficiency, generate ideas, give birth to innovations, and create new opportunities. 

On the other hand, can we be sure that the personal data used to train models will remain private? How can we protect algorithms from interference from malicious actors seeking to distort results? All of these questions emphasize the need for special attention to security in the design and implementation of AI.

In addition to answering these pressing questions, our strategic goal is to improve the robustness of AI-based systems and periodically assess their trustworthiness in the face of their rapid development and proliferation. 

In this article, we address critical AI security issues with a focus on cybersecurity, analyze the threats, introduce their models, and offer practical guidelines, methods, and recommendations to minimize AI-related risks. 

The article will be of interest to a wide range of readers interested in AI, and especially to security specialists, developers and owners of AI systems.

Artificial intelligence in the real world

AI systems make decisions that affect thousands and even millions of people. And, far from just virtual life, they are also affecting the real world. For example, Zest AI and Ant Financial use ML models to assess credit risk, analyze transactions, and analyze user behavior. Another example, AI-powered surveillance cameras are able to recognize suspicious behavior and automatically notify security services, improving public safety. 

A host of AI use cases are related to transportation. Companies like Tesla and Waymo, are actively developing and deploying unmanned vehicles capable of autonomous driving on public roads. AI-driven traffic lights are being introduced. They analyze traffic density and optimize signal timing to reduce congestion. 

In the agricultural sector, AI is being used to monitor crop health, optimize resource utilization, and predict yields. Drones with AI algorithms can detect plant diseases in their early stages. All of this is just a small part of examples of industries where AI is being applied.

Machines are rapidly augmenting humans in areas once considered impossible to automate. Recent tests show that a computer sometimes diagnoses on average faster and more accurately than a doctor. 

Moreover, AI is already being used in sophisticated robotic surgical systems. These systems allow surgeons to perform complex operations with high precision, minimizing invasiveness and reducing patient recovery time. Robotic surgeons with AI can not only perform surgical tasks like injections, tissue lifting and suturing, but also unexpected non-routine actions like picking up fallen objects.

These examples of physical applications of AI systems illustrate that they carry inherent risks. Naturally, trust in these systems is still very limited and is just beginning to form. Let us look at the reasons for mistrust in a little more detail.

AI security and safety risks

Erroneous or manipulative decisions made by AI systems can affect human lives and lead to serious consequences. This makes the security of AI systems critical. Occasionally, news reports have highlighted AI-assisted medical errors, dangerous AI-generated recipes, and concerning instances of AI-related psychological manipulation.

Some machine learning models, particularly deep neural networks, can generate outputs that are difficult for humans to interpret. This creates challenges in understanding decision-making processes, as well as new challenges for safety and control. For example, autonomous vehicles may face unpredictable situations on the road. They may require instantaneous difficult decisions like the “Trolley Problem” — choosing between two or more options for unavoidable casualties. 

Many AI models handle vast amounts of personal and corporate data. Leakage or misuse of this information can lead to serious privacy and security implications. For example, when integrating such tools into hospitals, financial institutions, or other critical systems, a breach of their security could leak personal data and jeopardize the personal safety of millions of people.

The rapid evolution of AI/ML creates an ever-changing threat landscape that requires continuous adaptation of security measures. New threats and vulnerabilities emerge often. That said, AI security involves not only technical aspects, but also ethical issues such as preventing discrimination and ensuring the fairness of algorithms. The unethical use of AI can lead to increased social conflicts and inequalities.

Finally, AI technologies are becoming a key factor in the competitiveness of companies. Protecting intellectual property and preventing industrial espionage in this area is critical for businesses. Competitors may try to steal your AI developments and technologies, which can seriously undermine your business.

Again, the examples given are just isolated instances in the scope of the issues. They do not reflect the full breadth and depth of the risks, threats, and vulnerabilities associated with AI. So let’s take a different route — let’s try to take a systematic approach to describing AI threats and risks.

Threat models

What’s the best place to start a systematic study of AI cybersecurity threats? Consider classic information security threat models, such as STRIDE, DREAD, PASTA or LINDDUN. They are universal and suitable for most cases where you need to study information security threats and are unclear where to start.

Then delve into specialized models. For example, the paper “Modeling Threats to AI-ML Systems Using STRIDE” proposes the STRIDE-AI methodology, which focuses on assessing the security of AI assets.

To illustrate the essence of these models, let us consider the basic processes in AI systems and apply to them the simplest categorization of elementary threats according to key information security requirements — the classic triad of security criteria — confidentiality, integrity and availability.

Data Confidentiality. AI models are trained on large amounts of data that may contain sensitive information. Threats to confidentiality can be as follows:

  • Inference attacks. Model outputs can be used to obtain information about the input data or training set.
  • Attacks on models. Techniques like model inversion can recover the input data from the model, compromising privacy.
  • Data leakage. Attackers can gain access to training data or intermediate results from the model. This can lead to the disclosure of personal or commercial information.
  • De-anonymization. Specific individuals can be identified in insufficiently anonymized data.

Integrity of models and data. Integrity refers to the immutability and trustworthiness of data and models. Major threats:

  • Data poisoning. Introducing malicious data into the training set can skew model results.
  • Adversarial attacks. Minor changes to input data can lead to erroneous model outputs.
  • Privilege escalation. Unauthorized access to AI services and systems can lead to their complete control by attackers and the possibility of any manipulation with them.

Service availability. Ensuring uninterrupted operation of AI systems is critical for many applications. Availability threats include:

  • DDoS and infrastructure attacks. Massive requests or hacking of associated servers can make the system unavailable to users.
  • Dependencies on external services. Like any modern complex systems, AI has vendor dependencies and risks associated with the use of third-party services or platforms. Failures in third-party APIs or services can disrupt AI applications.

The skills to analyze, synthesize, and apply such threat models, understand them, and select adequate security measures at all stages of the AI systems lifecycle are key to building and maintaining robust secure AI solutions.

Related areas of AI security

AI risks are not limited to information security. We will not elaborate on these risks, but simply give some examples of groups of such risks to AI owners and users.

Operational risks include problems associated with the operation of AI systems. Deployment errors, such as improper configuration or integration of AI systems, can lead to failures or vulnerabilities. Also, AI models have the property of degradation — a decrease in performance over time due to changes in data or environment.

Legal and regulatory risks include possible legal issues due to novelty and immature legal practices. Non-compliance with regulatory or statutory requirements, such as violating data protection laws (GDPR, CCPA, etc.) can result in serious penalties. Also, AI system owners often face intellectual property lawsuits over not only AI technologies but also their data. Although legal frameworks like the EU’s AI Act are beginning to define accountability for AI decisions, uncertainty remains, especially regarding liability in critical areas like healthcare and public safety.

Ethical risks and biases include violating accepted societal norms and making it difficult for public scrutiny of AI decision-making. In algorithmic discrimination, AI models can reinforce existing social biases and inequalities. Also, the decisions of deep learning models have the property of opacity — these decisions are difficult to explain. Finally, in any complex system, abuse and manipulation are possible. AI systems can be covertly used to influence public opinion or individual behavior.

AI risk frameworks, catalogs, databases, and repositories

MIT

Despite the newness of the industry and the low maturity of its security practices, some progress has been made in recent months. As we continue to define AI security issues, first of all, it would be logical to mention the MITRE ATLAS, the first specialized framework for mapping AI security threats.

In terms of vulnerabilities, it is worth citing AVID, an open specialized database of AI vulnerabilities.

Let us consider the MIT AI Risk Repository in more detail.

The MIT AI Risk Repository is an extensive free database containing more than 700 AI-related risks. This repository was created by analyzing 43 existing frameworks. The database categorizes risks by their causes and applications. The goal of the repository is to provide researchers, developers, and managers with a single source of information to understand and manage AI risks. 

Key features of the MIT AI Risk Repository:

  • Risk Database — contains detailed risk descriptions with sources and evidence. This helps in organizing risk information.
  • Classification by cause — risks are categorized based on their causes. This helps in understanding the mechanisms of their occurrence.
  • Classification by area of application — risks are categorized into seven major areas and 23 subcategories. This makes them easier to find and analyze.

Using this repository allows organizations to more effectively identify the risks associated with AI implementation to manage them easier.

Now that we’ve defined the scope of the challenges, we can begin to address them and to describe the security measures in place at all stages of the AI development and use lifecycle.

Overall AI risk management

four experts

We have seen that AI safety and security issues are broad and deep. Without practical guidelines, understanding these challenges becomes extremely complex. Even more so, modern user-friendly updated standards and guidelines are needed to address AI safety and security issues. And such standards and guidelines are emerging and actively being developed.

The NIST AI RMF and AI RMF Playbook are, respectively, a free standard and guideline developed by the U.S. National Institute of Standards and Technology (NIST). They were released in early 2023 and are continuously maintained. 

The AI RMF and Playbook are built in the spirit of modern cybersecurity standards such as the NIST Cybersecurity Framework (CSF) and ISO 27001. They offer organizations practical guidance for managing AI risks. 

It’s about recommendations, references, and related guidance for achieving results across four functions: Govern, Map, Measure, and Manage. Let’s look at these functions in a little more detail.

Govern function

The purpose of this function is to create a culture of risk management and ensure a responsible approach to the development and implementation of AI. Practical application:

  • Develop policies and procedures. Organizations should establish clear policies governing the development and use of AI, including ethics, privacy and security considerations.
  • Employee training. Conduct regular training sessions to raise awareness of the risks associated with AI and how to minimize them.
  • Assigning responsible persons. Defining the roles and responsibilities of employees responsible for managing AI risks.

Map function

This function aims to identify the context, objectives and stakeholders of the AI system. Practical steps:

  • Analyzing the application context. Defining the scope of the AI application, including goals, objectives, and expected outcomes.
  • Stakeholder identification. Identifying all stakeholders that may be affected by the AI system, including customers, partners, and regulators.
  • Assessing potential impacts. Analyzing the possible impacts of AI on different groups and processes.

Measure function

Aims to assess risks and define metrics to monitor the effectiveness of the AI system. Practical application:

  • Developing metrics. Establishing metrics to evaluate the performance and security of the AI system.
  • Conducting Testing. Regularly testing the system for vulnerabilities and deviations from expected results.
  • Monitoring and reporting. Continuous collection of system performance data and reporting to stakeholders.

Manage function

This function focuses on developing and implementing measures to mitigate risks and ensure system reliability. Practical steps include:

  • Developing a risk management plan. Creating an action plan to mitigate identified risks, including incident prevention and response measures.
  • Security measures. Implementing technical and organizational measures to protect the AI system from threats.
  • Continuous improvement. Regularly reviewing and updating risk management processes based on lessons learned and changes in technology.

Applying the NIST AI RMF and its Playbook enables organizations to systematically approach AI risk management, ensuring reliability, security, and compliance.

In summary, we have taken a bird’s eye view of AI risks and top-level AI risk management practices for a wide range of organizations. We’ll now explore more detailed aspects of AI security management for those seeking a comprehensive understanding.

Risk management of the AI lifecycle

Multiple AI monitors

Let us elaborate on the risk management of AI systems in the context of the lifecycle of these systems. This is especially important for developers and owners of AI systems. For this purpose, let’s familiarize ourselves with the security principles and methods used in the design, development, testing, deployment, and use phases of AI systems.

Secure design of AI systems

Design is the initial and critical stage in the lifecycle of AI systems. This is where the architectural foundations are laid, determining the system’s security and reliability throughout its existence.. Let’s look at some of the key issues that require attention during this phase.

Security in architecture development

To minimize potential damage in the event of a compromise, separation of system components into isolated modules is used. Containerization and virtualization are used to provide an additional level of isolation.

As in all classical security systems, in AI security, it is important to grant the minimum access rights that are necessary to perform functions. Regular auditing and revision of access rights is necessary.

Encryption mechanisms protect data at rest and in transit. Anonymization and pseudonymization techniques are also designed to protect sensitive information.

A logging and monitoring system is envisioned to track anomalies in the operation of AI models. Implementation of robust logging mechanisms is needed to ensure auditing and investigation of incidents.

AI architecture is designed with possible failures and attacks in mind. Automatic recovery and backup mechanisms need to be considered.

Secure APIs are provided to interact with AI models. Authentication and authorization mechanisms are implemented to control access to APIs.

Selection of secure machine learning algorithms and methods

abstract algorithms

Security should be one of the key criteria also when selecting machine learning algorithms and methods. Each algorithm should be carefully evaluated not only in terms of its performance, but also in terms of potential security risks.

Methods that can withstand manipulation of input data should be chosen. This is especially important in areas such as computer vision or natural language processing, where attackers may try to trick your model.

Explore the use of differential privacy techniques (adding controlled noise to the data) or federated learning, where models are trained on distributed data without centralizing it. While techniques like homomorphic encryption, K-anonymity, and L-diversity can enhance data privacy, they have limitations in practical applications, especially with high-dimensional datasets typical in AI projects. These approaches can help you protect the privacy of training data and minimize the risks of information leakage.

Ensure the robustness of models. Choose algorithms that can handle noise and variation in the data. This will not only improve the quality of predictions, but also make your model more resilient to potential attacks.

Apply ensembles of models. Not only can this approach improve the accuracy of predictions, but it can also add an extra layer of defense against attacks by allowing you to check the consistency of the solutions of different models.

Choose algorithms based on their computational complexity and resource requirements. This will not only help you optimize infrastructure costs, but also help you prevent potential denial-of-service attacks on your system.

Choosing secure algorithms and machine learning techniques is not a one-time action, but an ongoing process. As technology evolves and new threats emerge, you will need to regularly review and update your approach to AI system security.

Protecting source code and intellectual property

When you develop AI models, you create valuable intellectual property. Protecting these assets is critical to your business. Start by implementing strict source code access control policies. Use version control systems with encryption and two-factor authentication.

Consider restricting access to critical pieces of code, especially those that contain unique algorithms or business logic. However, remember that access restrictions will make support and debugging more difficult for your developers.

Patent key innovations and use non-disclosure agreements with employees and contractors. This will create an additional layer of protection for your intellectual property.

Secure DevSecOps practices in AI projects

DevSecOps is an approach that integrates security into development and operations processes. In the context of AI projects, this is especially important. Start by implementing automated security checks into your continuous integration and delivery (CI/CD) process. This can include scanning code for vulnerabilities, checking dependencies for known security issues, and automated testing for resilience to attacks.

Train your developers on AI-specific secure coding principles. This may include proper handling of sensitive data, protecting against model leaks, and preventing attacks via input data.

Implement the practice of regular code inspections with a focus on security. This will help identify potential problems early and spread security awareness among the development team.

Ethical considerations and bias assessment of AI systems

Bias in AI models is a serious ethical issue that can lead to discrimination and unfair decisions. Current methods for assessing and addressing bias include:

  • analyzing training data for historical biases;
  • testing models on different demographic groups;
  • applying debiasing techniques to balance results;
  • ongoing monitoring and adjustments to models as they operate.

Timely identification and elimination of discriminatory patterns not only makes AI decisions more ethical, but also reduces reputational and legal risks for the company.

The ability to explain your model’s decisions can be a key factor in the credibility of your model. Methods like LIME and SHAP can aid in interpreting model decisions, though they may introduce computational overhead and may not fully capture the complexities of deep learning models.

Creating fair and ethical AI systems requires an integrated approach. Consider the following steps:

  • Developing clear ethical principles and guidelines for AI projects.
  • Implementing ethical evaluation processes at all stages of the AI lifecycle.
  • Educating development teams on the principles of ethical AI.
  • Engaging ethics experts and stakeholder representatives.

Ensuring AI is fair and ethical not only meets social expectations, but also creates a long-term competitive advantage by increasing customer and partner confidence in your AI solutions.

In summary, we have considered security issues in the design of AI systems. Obviously, these issues are extremely complex and deep. 

Evaluation and testing of AI security

Once an AI model has been developed, it’s important to thoroughly evaluate and test its security. This will help identify and address potential vulnerabilities before the model is deployed in a production environment.

Assess and audit the security of AI models

Start by conducting penetration testing specific to AI systems. This may include attempts to fool the model using adversarial techniques, testing for resistance to attacks through third-party channels, and testing for the ability to extract sensitive information from the model.

Analyze the sensitivity of the model to variations in the input data. This will help identify potential weaknesses that can be exploited by attackers.

Don’t forget formal verification, if applicable to your model. While this may be difficult for many modern neural networks, for some critical components, formal verification can provide a high level of security assurance.

Security testing of language models

Large Language Models (LLM) are particularly vulnerable to various types of attacks due to their ability to generate human-like text. Let’s look at the key aspects of testing their security.

Testing for Adversarial Attacks
  • Malicious Content Generation Testing. Develop a systematic approach to probing the model’s ability to generate harmful content. Create controlled test scenarios that challenge the model’s content generation boundaries. Establish clear metrics for assessing potential risk levels
  • Context Manipulation Vulnerability Assessment. Design test cases that introduce deliberately misleading or false contextual information. Evaluate the model’s susceptibility to subtle contextual shifts. Analyze how context changes might alter the model’s output or reasoning
Prompt Injection Testing
  • Prompt Injection Vulnerability Evaluation. Develop multi-stage injection techniques. Test input sanitization mechanisms. Assess the model’s resilience to subtle instruction modifications, indirect manipulation attempts, context-based redirection of model behavior.
Ethical Constraint Testing
  • Create scenarios that test the model’s ability to maintain ethical boundaries, resistance to manipulation of core ethical principles, and consistency in handling ethically challenging scenarios.

LLM security assessment and testing is an ongoing process. As your models evolve and new attack techniques emerge, you will need to reassess security, as well as continually adapt and improve your testing and defense methods.

Security in the implementation and use of AI systems

datacenter

Securing deployed AI systems is as important as their development and testing. Consider the key security issues when implementing and operating AI systems.

Securing infrastructure and data transmission channels

Security of AI systems is not limited to models only. It is important to ensure the protection of the entire infrastructure.

It is necessary to properly implement and support the security measures designed at the stage of developing the security architecture: network segmentation and isolation of critical components, encryption of data during storage and transmission, strict access control and authentication, regular updates and patches of security systems, etc.

A comprehensive approach to protecting the infrastructure minimizes the risks of unauthorized access and compromise of AI systems.

Real-time security monitoring and analysis

To promptly identify threats and eliminate them, it is necessary to implement monitoring and response systems:

  • SIEM systems for log analysis and anomaly detection.
  • User and entity behavior analysis (UEBA).
  • Automated incident response (SOAR).
  • Automation of automatic policy updates and security patches.

The use of AI for security monitoring has become popular in recent years. It allows you to process huge amounts of data and identify complex multi-stage attacks that may go unnoticed with a traditional approach. Let’s take a closer look at this.

Securing AI systems with specialized AI

AI technologies can be a powerful tool for securing AI systems themselves or any other IT systems or infrastructures in general. Consider implementing anomaly detection systems based on machine learning. Such systems can analyze usage patterns of your AI model and identify suspicious activity that may indicate attempted attacks.

You can also use AI to analyze input data for potential adversarial attacks. Train a separate model to recognize the signs of manipulated input data.

Consider using AI to continuously monitor the performance and behavior of your model. Sudden changes in the distribution of predictions or unexpected patterns in the output can be early indicators of security issues.

Implementing comprehensive security measures at all stages of the AI system lifecycle – from development to operation – is critical to ensuring their reliability and security. 

AI security training and awareness

training on roof

The human factor plays a key role in ensuring the security of AI systems. Let’s look at how training and awareness can help.

Effective training programs include courses on how to securely develop, evaluate, and test AI models, training on the ethical aspects of AI, hands-on training on identifying and preventing attacks on AI systems, and training on security and monitoring tools. Regular training helps teams stay up-to-date on the latest threats and security best practices.

It is also important to create and maintain a security culture. This requires that the organization’s senior management support security initiatives. This includes integrating security into development and operations processes, rewarding employees for identifying and reporting security issues, and regularly conducting incident simulations and attack response exercises. A security culture promotes a proactive approach to protecting AI systems at all levels of the organization.

A service-based approach to AI security

AI security experts

Securing machine learning and artificial intelligence systems requires comprehensive, multifaceted approaches. To optimize these resources, consider outsourcing specific ML security tasks:

  • AI model security assessment and audit.
  • Risk management and compliance.
  • Protection against model attacks.
  • Integration of security practices into the ML ​​lifecycle.
  • Deployment of AI tools for ML security.

Examples of large language model (LLM) security services include:

  • Adversarial attack testing.
  • Injection testing.
  • Bias and fairness assessment.
  • Security vulnerability assessment.
  • Ethics and compliance audit.

Outsourcing, for example, to h-x.technology, allows you to optimize your budget and flexibly use our knowledge, experience, and resources instead of searching for and hiring full-time employees or investing in internal competencies that may be either insufficient or excessive.

Conclusions

AI system security is an ongoing, complex process that requires attention to all facets and stages of the AI ​​system lifecycle: from design and development to implementation and operation.

Key considerations include risk management and compliance, ethics and model bias, ensuring data privacy, security assessment and testing, training, building a security culture, and defending against attacks.

Working with AI security experts like H-X Technologies enables organizations to effectively address these complex challenges, ensuring AI solutions are reliable, ethical, and secure.

AI security is not just a technical challenge, but a strategic imperative for modern businesses. That’s why we offer free AI security consultations.

Don’t wait until AI security issues become critical. Contact our experts today for a free consultation to ensure your AI projects are moving forward reliably, safely and ethically.

Other posts

10/11/2024
How to protect and teach how to protect logins to systems
10/10/2024
Overview of modern programming languages and blockchains for smart contracts