How Are the AIF-C01 Exam Domains Structured?
The AWS Certified AI Practitioner exam (AIF-C01) is organized into five domains, each contributing a fixed percentage of the 50 scored questions. The domains are not equally weighted: Domain 3 alone accounts for 28% of your score, making it the single most important area to master. The official exam guide, available through aws.amazon.com/certification/certified-ai-practitioner, lists every topic tested in each domain.
Here is the full domain breakdown:
- Domain 1: Fundamentals of AI and ML — 20%
- Domain 2: Fundamentals of Generative AI — 24%
- Domain 3: Applications of Foundation Models — 28%
- Domain 4: Guidelines for Responsible AI — 14%
- Domain 5: Security, Compliance, and Governance for AI Solutions — 14%
Domain 1: Fundamentals of AI and ML (20%) — What It Actually Tests
Domain 1 tests conceptual understanding of AI and ML—not how to implement algorithms, but what they do, when to use them, and what their limitations are. The domain covers:
Types of AI problems and ML approaches. Supervised learning (predicting outcomes from labeled training data), unsupervised learning (finding patterns without labels), reinforcement learning (learning through reward signals), and semi-supervised learning. The exam tests which approach fits a described problem—"a company wants to group customers by purchasing behavior without predefined categories" points to unsupervised clustering.
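The "group customers without predefined categories" scenario above maps to unsupervised clustering. As a concept check, here is a toy one-dimensional k-means sketch in Python; the spend values and starting centroids are made up for illustration, and the exam tests the concept, not any code.

```python
# Toy 1-D k-means: group customers by monthly spend with no labels.
# Hypothetical data and starting centroids -- illustration only.
def kmeans_1d(values, centroids, iters=10):
    """Assign each value to its nearest centroid, then recompute centroids."""
    for _ in range(iters):
        clusters = {c: [] for c in centroids}
        for v in values:
            nearest = min(centroids, key=lambda c: abs(v - c))
            clusters[nearest].append(v)
        centroids = [sum(vs) / len(vs) if vs else c
                     for c, vs in clusters.items()]
    return sorted(centroids)

spend = [12, 15, 14, 210, 220, 205]   # two natural spending groups, no labels
print(kmeans_1d(spend, centroids=[0.0, 100.0]))
```

The algorithm discovers the two spending tiers on its own, which is exactly what "without predefined categories" signals on the exam.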
The ML pipeline. Data collection, data preparation (cleaning, feature engineering, normalization), model training, model evaluation, deployment, and monitoring. Each stage is tested in scenario form—"an ML model performs well on training data but poorly on new data" describes overfitting, which points to solutions like regularization, more training data, or a simpler model.
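The overfitting pattern described above can be shown in miniature: a "model" that memorizes its training pairs scores perfectly on training data and fails on anything new, while a simpler model that captures the underlying pattern generalizes. The data here is invented for illustration.

```python
# Overfitting in miniature: memorization vs. a simpler model.
train = {1: 2, 2: 4, 3: 6}           # toy data following y = 2x
test = {4: 8, 5: 10}                 # unseen data from the same pattern

def memorizer(x):
    return train.get(x)              # overfit: pure lookup, no pattern learned

def linear(x):
    return 2 * x                     # simpler model that captures the pattern

train_acc_memo = sum(memorizer(x) == y for x, y in train.items()) / len(train)
test_acc_memo = sum(memorizer(x) == y for x, y in test.items()) / len(test)
test_acc_lin = sum(linear(x) == y for x, y in test.items()) / len(test)
print(train_acc_memo, test_acc_memo, test_acc_lin)
```

Perfect training accuracy with poor test accuracy is the exam's cue for overfitting and its fixes (regularization, more data, a simpler model).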
Evaluation metrics. Accuracy, precision, recall, F1 score, AUC-ROC. The exam presents a use case and asks which metric is most appropriate—for a fraud detection model where missing a fraudulent transaction is costly, recall (minimizing false negatives) is more important than precision.
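These metrics reduce to simple ratios over confusion-matrix counts, which is worth internalizing before exam day. A quick sketch with hypothetical fraud-model counts:

```python
# Precision, recall, and F1 from confusion-matrix counts.
# Hypothetical fraud-model results: 80 true positives, 20 false positives,
# 40 false negatives.
tp, fp, fn = 80, 20, 40

precision = tp / (tp + fp)           # of transactions flagged, how many were fraud
recall = tp / (tp + fn)              # of real fraud, how much we caught
f1 = 2 * precision * recall / (precision + recall)

print(precision, round(recall, 3), round(f1, 3))
```

Here recall is lower than precision because of the 40 missed frauds (false negatives), the exact cost the fraud scenario tells you to minimize.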
Deep learning and neural networks at a conceptual level. What neural networks are, what training means, what a layer does, and how deep learning differs from classical ML. No math is tested—the exam wants conceptual understanding, not backpropagation formulas.
Domain 2: Fundamentals of Generative AI (24%) — What It Actually Tests
Domain 2 is the second largest and covers generative AI at a depth that many candidates underestimate. It goes beyond "what is an LLM" into specifics about how to use, evaluate, and adapt generative AI models.
Foundation models and LLMs. What distinguishes foundation models from task-specific models (pre-training on broad data vs. training from scratch for each task), what tokens are, how context windows limit what a model can process in a single prompt, and the concept of temperature as a parameter controlling output randomness.
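Temperature's effect is easiest to see numerically: it divides the model's logits before the softmax, so low temperature sharpens the next-token distribution and high temperature flattens it. The logits below are invented for illustration.

```python
import math

# Temperature scaling of next-token probabilities: low temperature makes
# output more deterministic, high temperature more random. Toy logits only.
def softmax(logits, temperature):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
print([round(p, 3) for p in softmax(logits, 0.5)])  # peaked: top token dominates
print([round(p, 3) for p in softmax(logits, 2.0)])  # flatter: more even spread
```

On the exam, "more creative/varied output" maps to raising temperature and "more consistent/repeatable output" to lowering it.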
Prompt engineering. Zero-shot prompting (task with no examples), few-shot prompting (task with examples), chain-of-thought prompting (instructing step-by-step reasoning), and system prompts (setting model behavior before user interaction). The exam presents scenarios and asks which technique is most appropriate.
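The structural difference between zero-shot and few-shot is just whether worked examples precede the actual input. A sketch of both shapes (the wording and the sentiment task are placeholders, not an AWS-prescribed format):

```python
# Zero-shot vs. few-shot prompt construction -- structure only; the task
# and example texts are hypothetical.
def zero_shot(task, text):
    return f"{task}\n\nInput: {text}\nOutput:"

def few_shot(task, examples, text):
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{task}\n\n{shots}\n\nInput: {text}\nOutput:"

task = "Classify the sentiment as positive or negative."
examples = [("Great service!", "positive"), ("Total waste of money.", "negative")]
print(zero_shot(task, "The food was cold."))
print(few_shot(task, examples, "The food was cold."))
```

Scenarios mentioning "the model needs examples of the desired format" point to few-shot; "step-by-step reasoning" points to chain-of-thought.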
Retrieval-augmented generation (RAG). RAG combines a retrieval system (typically a vector database or Amazon Kendra) with a generative model to ground outputs in specific documents. The exam tests the architecture: when RAG is preferred over fine-tuning, what components it requires, and which AWS services can serve as the retrieval layer.
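The retrieve-then-ground flow can be sketched in a few lines. Real systems use vector embeddings and a vector store (or Amazon Kendra) rather than the toy word-overlap scorer below, and the documents and query here are invented for illustration.

```python
# Minimal RAG sketch: retrieve the most relevant document, then ground
# the prompt in it. Toy word-overlap retrieval -- real systems use
# embeddings and a vector store.
def retrieve(query, docs):
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

docs = [
    "Refunds are processed within 5 business days.",
    "Shipping is free on orders over $50.",
]
query = "How long do refunds take?"
context = retrieve(query, docs)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

The key architectural point the exam tests: the retrieved context is injected at inference time, so the knowledge can change without retraining the model.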
Fine-tuning. Adapting a pre-trained foundation model on domain-specific data to improve performance on that domain. The exam distinguishes fine-tuning from RAG: fine-tuning changes model weights (expensive, persistent); RAG retrieves context at inference time (cheaper, more flexible for frequently changing data).
Model evaluation for generative AI. Metrics like BLEU (machine translation quality), ROUGE (summarization quality), and human evaluation approaches. The exam tests which metric applies to which use case.
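ROUGE is fundamentally about overlap with a reference text. A stripped-down ROUGE-1 recall (the fraction of reference unigrams appearing in the candidate summary) captures the core idea; real ROUGE also handles clipping, stemming, and longer n-grams, and the sentences below are made up.

```python
# ROUGE-1 recall in miniature: fraction of reference unigrams that appear
# in the candidate summary. Sketch of the core idea only.
def rouge1_recall(reference, candidate):
    ref = reference.lower().split()
    cand = set(candidate.lower().split())
    return sum(w in cand for w in ref) / len(ref)

reference = "the model summarizes long reports"
candidate = "the model summarizes reports quickly"
print(rouge1_recall(reference, candidate))  # 4 of 5 reference words matched
```

For the exam, the mapping to remember is: BLEU for translation, ROUGE for summarization, human evaluation for open-ended generation quality.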
Domain 3: Applications of Foundation Models (28%) — What It Actually Tests
Domain 3 is the most AWS-service-heavy domain and the one where most candidates lose the most points. It requires knowing specific AWS AI services by name, purpose, and appropriate use case—not just general AI concepts.
Amazon Bedrock. AWS's managed service for accessing foundation models from multiple providers. Key exam topics: available model providers (including Amazon, Anthropic, AI21 Labs, Cohere, Meta, Mistral AI, and Stability AI), Bedrock Agents (automated multi-step reasoning workflows), Bedrock Knowledge Bases (RAG implementation), and model evaluation in Bedrock.
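To make the service concrete, here is a sketch of building an InvokeModel request body for an Anthropic Claude model on Bedrock. The body schema is provider-specific and can change, so treat the field names and the model ID in the comment as assumptions to verify against the current Bedrock documentation; the actual API call is shown only in a comment because it requires AWS credentials.

```python
import json

# Sketch of an Amazon Bedrock InvokeModel request body for an Anthropic
# Claude model. Provider-specific schema -- verify field names against the
# current Bedrock docs before relying on them.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarize our refund policy."}],
})

# With AWS credentials configured, the call would look roughly like:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   resp = client.invoke_model(
#       modelId="anthropic.claude-3-haiku-20240307-v1:0", body=body)
print(json.loads(body)["messages"][0]["role"])
```

The exam stays at the level of what Bedrock does (one API surface over many providers' models), not request syntax, so this is orientation rather than required knowledge.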
Amazon SageMaker. Full ML development platform. AIF-C01 tests SageMaker at a use-case level: SageMaker Studio (development environment), SageMaker Autopilot (automated ML), SageMaker Clarify (bias detection and explainability), and SageMaker Model Monitor (production model drift detection).
Purpose-built AI services. Services the exam matches to specific use cases:
- Amazon Rekognition — image/video analysis (object detection, facial recognition, content moderation)
- Amazon Comprehend — NLP on text (entity recognition, sentiment, key phrases)
- Amazon Transcribe — speech-to-text
- Amazon Translate — language translation
- Amazon Polly — text-to-speech
- Amazon Textract — extracting text and structured data from documents
- Amazon Kendra — enterprise search with natural language
- Amazon Lex — conversational AI (chatbots)
Cost and performance trade-offs. The exam asks you to evaluate which solution is most cost-effective or highest performance for a described use case—when to use a small, cheap model vs. a large, expensive one; when to use a pre-built service vs. building a custom model.
Domain 4: Guidelines for Responsible AI (14%) — What It Actually Tests
Domain 4 tests knowledge of AWS's responsible AI framework and the practical techniques for building fairer, more transparent AI systems. The topics:
Bias types in AI systems. Data bias (training data that underrepresents or misrepresents groups), model bias (learned associations that produce discriminatory outputs), and algorithmic bias (structural properties of the algorithm that produce unequal outcomes). Amazon SageMaker Clarify is the AWS tool tested for detecting and measuring bias in models and datasets.
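One of the simplest fairness statistics of the kind SageMaker Clarify reports is the demographic parity difference: the gap in positive-prediction rates between two groups. Sketched by hand on invented data:

```python
# Demographic parity difference: gap in positive-prediction rates between
# two groups. Toy predictions (1 = approved), illustration only -- in
# practice SageMaker Clarify computes bias metrics like this for you.
def positive_rate(predictions):
    return sum(predictions) / len(predictions)

group_a = [1, 1, 0, 1, 0, 1]   # hypothetical approvals for group A
group_b = [1, 0, 0, 0, 1, 0]   # hypothetical approvals for group B

dpd = positive_rate(group_a) - positive_rate(group_b)
print(round(dpd, 3))           # near 0 suggests parity; a large gap flags bias
```

For the exam, the key association is the tool-to-task mapping: bias detection and measurement on models and datasets → SageMaker Clarify.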
Explainability. Model explainability means being able to describe why a model made a specific prediction. SageMaker Clarify provides SHAP (SHapley Additive exPlanations) values that quantify each feature's contribution to a prediction. The exam tests when explainability matters most—high-stakes decisions like loan approvals, medical diagnoses, and hiring.
Human oversight. For high-stakes AI outputs, when human review is required before action is taken. Amazon Augmented AI (A2I) is the AWS service for building human review workflows into AI pipelines.
AWS responsible AI principles. AWS publishes six dimensions of responsible AI: fairness, explainability, privacy and security, safety, controllability, and veracity and robustness. The exam tests which principle applies to a described scenario.
Domain 5: Security, Compliance, and Governance for AI Solutions (14%) — What It Actually Tests
Domain 5 tests how to secure AI workloads on AWS and maintain compliance and governance for AI systems. The key topics:
Data governance for ML. Access controls for training data (IAM policies, S3 bucket policies), data lineage tracking, and ensuring that sensitive data used for training is handled in compliance with regulations like GDPR and HIPAA.
Model governance. Version control for models, approval workflows before deploying model updates, and audit trails for model predictions. Amazon SageMaker Model Registry supports model versioning and approval workflows.
AWS security services relevant to AI. AWS Identity and Access Management (IAM) for access control, AWS CloudTrail for logging and auditing API calls to AI services, AWS Config for tracking resource configuration compliance, and Amazon Macie for detecting sensitive data in S3 training datasets.
Compliance frameworks. The exam tests general awareness of compliance frameworks (SOC 2, HIPAA, GDPR) and what they require for AI workloads—data residency, access logging, deletion rights—without requiring deep regulatory knowledge.
Exam details verified against aws.amazon.com/certification/certified-ai-practitioner as of 2026-02-27. Fees and requirements are subject to change — confirm current details at aws.amazon.com/certification/certified-ai-practitioner before your exam date.
Ready to pass AI/ML Certifications?
Get the complete study package
📄 AI/ML Certifications Study Guide PDF
125+ pages · Practice questions · Study plan · Exam cheat sheets
Get the PDF — $19 →
🤖 AI Study Tutor
Unlimited Q&A · Instant explanations · Personalized to AI/ML Certifications
Try SimpuTech Free →
Use code AIMLSTUDY50 — 50% off first month