Test Your Skills
Multiple-Choice Questions
1. What is the goal of defense evasion techniques used by adversaries?
a. To enhance the performance of machine learning models
b. To gain unauthorized access to machine learning systems
c. To avoid detection by AI/ML-enabled security software
d. To improve the accuracy of anomaly detection algorithms
2. Which technique can adversaries use to prevent a machine learning model from correctly identifying the contents of data?
a. Model replication
b. Model extraction
c. Craft adversarial data
d. Inference API access
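The technique named in Question 2, crafting adversarial data, is easy to demonstrate in miniature. The sketch below assumes a white-box scenario: a toy logistic-regression detector whose weights the attacker can read. The weights, the input, and the epsilon value are all invented for illustration.

```python
# A minimal sketch of crafting adversarial data against a toy logistic-
# regression detector. Everything here is illustrative, not a real system.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)      # illustrative detector weights
b = 0.1                     # illustrative bias
x = w.copy()                # a 'malicious' input the detector flags correctly

def score(x):
    """Detector's probability that x is malicious (logistic regression)."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# FGSM-style evasion: shift each feature against the sign of the weight it
# feeds, pushing the score toward 'benign'. Epsilon is deliberately large so
# this toy example flips; real attacks keep perturbations small enough that
# the input still performs its malicious function.
epsilon = 2.0
x_adv = x - epsilon * np.sign(w)

print(f"score before perturbation: {score(x):.3f}")    # high -> flagged
print(f"score after perturbation:  {score(x_adv):.3f}")  # low -> evades
```

In a black-box setting, attackers cannot read the weights and instead estimate this direction by probing the model or by transferring perturbations from a proxy model, which is why the proxy-model questions later in this quiz matter.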
3. What is the purpose of ML attack staging techniques?
a. To gather information about the target system
b. To manipulate business and operational processes
c. To prepare for an attack on a machine learning model
d. To exfiltrate sensitive information
4. An adversary crafts a network packet that looks normal to a machine learning model but contains malicious code, then uses the packet to exploit a vulnerability on a target system. Which technique is the adversary using?
a. Reconnaissance
b. Evading an ML model
c. Exfiltration
d. None of these answers are correct
5. How can adversaries erode confidence in a machine learning system over time?
a. By training proxy models
b. By manipulating AI/ML artifacts
c. By introducing backdoors into the model
d. By degrading the model’s performance with adversarial data inputs
6. What is the primary purpose of exfiltrating AI/ML artifacts?
a. To gain access to AI/ML-enabled security software
b. To manipulate the behavior of machine learning models
c. To steal intellectual property and cause economic harm
d. To enhance the performance of machine learning algorithms
7. What is the potential privacy concern related to inferring the membership of a data sample in its training set?
a. Disclosure of personally identifiable information
b. Leakage of sensitive business operations
c. Exposure of the machine learning model’s architecture
d. Violation of data integrity within the ML system
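Question 7 concerns membership inference, which can be sketched with surprisingly little code. The premise below, that an overfit model reports higher confidence on records it was trained on, underlies the simplest confidence-thresholding attacks; the model outputs and threshold are invented numbers.

```python
# A minimal sketch of confidence-based membership inference. Overfit models
# tend to be more confident on samples they were trained on, so an attacker
# can threshold the victim model's reported confidence to guess whether a
# record was in the training set. All values below are stand-ins.
def infer_membership(confidence: float, threshold: float = 0.95) -> bool:
    """Guess 'member of the training set' when the model is very confident."""
    return confidence >= threshold

# Illustrative confidences returned by a victim model's inference API.
training_sample_conf = 0.99   # seen during training -> typically very confident
unseen_sample_conf = 0.71     # never seen -> typically less confident

print(infer_membership(training_sample_conf))  # True  -> likely a member
print(infer_membership(unseen_sample_conf))    # False -> likely not a member
```

If the training data contains medical, financial, or other personal records, a correct "member" guess by itself discloses personally identifiable information, which is the privacy concern the question targets.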
8. How can adversaries verify the efficacy of their attack on a machine learning model?
a. By manipulating the training process of the model
b. By training proxy models using the victim’s inference API
c. By exfiltrating the model’s training data
d. By exploiting vulnerabilities in the ML-enabled security software
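The proxy-model idea behind Question 8 is worth seeing end to end. In the sketch below, victim_predict stands in for a remote inference API that returns only a class label; the "secret" weights exist purely so the example runs self-contained.

```python
# A minimal sketch of training a proxy (surrogate) model by labeling the
# adversary's own inputs with the victim's inference API.
import numpy as np

rng = np.random.default_rng(1)
SECRET_W = rng.normal(size=4)   # hidden inside the victim; unknown to attacker

def victim_predict(x: np.ndarray) -> int:
    """Stand-in for a remote inference API that returns only a label."""
    return int(SECRET_W @ x > 0)

# 1. The adversary samples inputs and harvests the victim's outputs as labels.
X = rng.normal(size=(500, 4))
y = np.array([victim_predict(x) for x in X])

# 2. Train a proxy model (a simple perceptron here) on the harvested dataset.
w = np.zeros(4)
for _ in range(20):
    for xi, yi in zip(X, y):
        pred = int(w @ xi > 0)
        w += (yi - pred) * xi   # perceptron update rule

# 3. Check how closely the proxy mimics the victim on fresh inputs.
agreement = np.mean([int(w @ x > 0) == victim_predict(x)
                     for x in rng.normal(size=(200, 4))])
print(f"proxy agrees with victim on {agreement:.0%} of fresh inputs")
```

Once agreement is high, the adversary can rehearse attacks against the proxy offline and send only the inputs that succeed to the real system, verifying efficacy without tripping the victim's monitoring.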
9. What is the purpose of adversarial data in the context of machine learning?
a. To improve the interpretability of machine learning models
b. To enhance the generalization capabilities of models
c. To evaluate the robustness of machine learning algorithms
d. To cause the model to produce incorrect or misleading results
10. How can adversaries cause disruption or damage to machine learning systems?
a. By training proxy models for performance improvement
b. By manipulating ML artifacts for better accuracy
c. By flooding the system with excessive requests
d. By using ML artifacts to enhance the system’s capabilities
11. What is the potential impact of adversarial data inputs on a machine learning system?
a. Improved accuracy and reliability of the system
b. Increased resilience against cyberattacks
c. Decreased efficiency and degraded performance
d. Enhanced interpretability of the model’s decisions
12. How can adversaries use AI/ML model inference API access for exfiltration?
a. By collecting inferences from the target model and using them as labels for training a separate model
b. By manipulating the inputs to the inference API to extract private information embedded in the training data
c. By stealing the model itself through the inference API
d. By flooding the inference API with requests to disrupt the system
13. What is the primary purpose of exfiltrating AI/ML artifacts via traditional cyberattack techniques?
a. To improve the performance of the machine learning system
b. To enhance the accuracy of anomaly detection algorithms
c. To gain unauthorized access to AI/ML-enabled security software
d. To steal valuable intellectual property and sensitive information using common practices
14. What is the potential impact of flooding a machine learning system with useless queries or computationally expensive inputs?
a. Improved accuracy and faster response time of the system
b. Enhanced interpretability of the machine learning model
c. Increased operational costs and resource exhaustion
d. Reduced false positives in the system’s outputs
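The impact in Question 14 is economic as much as technical, and a back-of-the-envelope simulation makes that concrete. The per-query costs and the compute budget below are invented figures chosen only to show the asymmetry between an attacker's cheap junk queries and the defender's bill.

```python
# A minimal simulation of resource exhaustion from deliberately expensive
# queries. All cost figures are illustrative assumptions, not measurements.
NORMAL_COST = 1.0        # compute units for a typical query
EXPENSIVE_COST = 50.0    # a pathological input (e.g., maximum-length text)
BUDGET = 1_000.0         # compute units the service can spend per minute

def queries_served(cost_per_query: float) -> int:
    """How many queries fit in the budget at a given per-query cost."""
    return int(BUDGET // cost_per_query)

print(f"normal traffic:  {queries_served(NORMAL_COST)} queries/minute")
print(f"flooded traffic: {queries_served(EXPENSIVE_COST)} queries/minute")
# With the budget burned by expensive junk queries, legitimate users see
# higher latency and dropped requests, and the operator sees a larger bill.
```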
15. What is the potential impact of eroding confidence in a machine learning system over time?
a. Increased interpretability of the model’s decisions
b. Enhanced accuracy and generalization capabilities
c. Decreased trust and reliance on the system’s outputs
d. Improved resilience against adversarial attacks
Exercise 5-1: Understanding the MITRE ATT&CK Framework
Objective: Research and explore the MITRE ATT&CK framework
Instructions:
Step 1. Visit the official MITRE ATT&CK website (attack.mitre.org).
Step 2. Familiarize yourself with the different tactics and techniques listed in the framework.
Step 3. Choose one specific technique from any tactic that interests you.
Step 4. Conduct further research on the chosen technique to understand its details, real-world examples, and potential mitigation strategies.
Step 5. Write a brief summary of your findings, including the technique’s description, its potential impact, and any recommended defensive measures.
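If you prefer to work through Steps 2 and 3 programmatically, MITRE also publishes ATT&CK as machine-readable STIX 2.x JSON in its CTI repository on GitHub. The sketch below assumes that repository still serves the enterprise dataset at the path shown; check github.com/mitre/cti if the URL has moved.

```python
# Fetch the Enterprise ATT&CK STIX bundle and list a few techniques.
# Requires the third-party 'requests' package (pip install requests).
import requests

URL = ("https://raw.githubusercontent.com/mitre/cti/master/"
       "enterprise-attack/enterprise-attack.json")

bundle = requests.get(URL, timeout=60).json()

# Techniques are represented as STIX "attack-pattern" objects.
techniques = [obj for obj in bundle["objects"]
              if obj.get("type") == "attack-pattern"
              and not obj.get("revoked", False)]

print(f"{len(techniques)} techniques in Enterprise ATT&CK")
for t in techniques[:5]:
    print("-", t["name"])
```

Browsing the data this way can help you shortlist candidate techniques before picking one to research in depth for Steps 4 and 5.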
Exercise 5-2: Exploring the MITRE ATLAS Framework
Objective: Explore the MITRE ATLAS Knowledge Base
Instructions:
Step 1. Visit the official MITRE ATLAS website (atlas.mitre.org).
Step 2. Explore the ATLAS knowledge base and its resources, including tactics, techniques, and case studies for machine learning systems.
Step 3. Select one specific technique or case study related to machine learning security that captures your interest.
Step 4. Conduct further research on the chosen technique or case study to gain a deeper understanding of its context, implementation, and implications.
Step 5. Create a short presentation or a blog post summarizing the technique or case study, including its purpose, potential risks, and possible countermeasures.