How These Azure AI-900 Practice Questions Are Organized
These 30 questions cover all five AI-900 skill areas: AI Workloads and Considerations (Q1–Q6), Fundamental Principles of Machine Learning on Azure (Q7–Q12), Computer Vision Workloads on Azure (Q13–Q18), Natural Language Processing Workloads on Azure (Q19–Q24), and Generative AI Workloads on Azure (Q25–Q30). The real exam has 40 to 60 questions, a 45-minute time limit, and a passing score of 700 out of 1,000. It is a proctored exam that costs $99 — register through learn.microsoft.com/en-us/credentials/certifications/azure-ai-fundamentals.
Skill Area 1: AI Workloads and Considerations (Questions 1–6)
Q1. A hospital uses an AI system to help triage patients. The system disproportionately flags non-urgent cases as urgent for patients from a specific demographic group. Which Microsoft responsible AI principle is most directly being violated?
Answer: Fairness. Microsoft's fairness principle requires that AI systems treat all people equitably and do not create or reinforce unfair bias. A system with differential performance across demographic groups violates fairness—regardless of intent. The other five principles are reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
Q2. A company wants an AI solution that can predict equipment failures before they occur based on sensor data from machinery. Which AI workload type does this describe?
Answer: Anomaly detection. This scenario is predictive maintenance, typically implemented with time-series anomaly detection or regression models trained on historical sensor and failure data. It is a supervised or unsupervised ML problem depending on whether failure events are labeled in the training data.
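The core idea behind anomaly detection on sensor data can be sketched in a few lines. This is a minimal z-score detector in plain Python, not an Azure service; the vibration readings and the 2-standard-deviation threshold are invented for illustration.

```python
# Minimal z-score anomaly detector for sensor readings (illustrative sketch;
# the data and the 2-sigma threshold are invented, not from any Azure service).
from statistics import mean, stdev

def find_anomalies(readings, threshold=2.0):
    """Flag readings more than `threshold` standard deviations from the mean."""
    mu = mean(readings)
    sigma = stdev(readings)
    return [x for x in readings if abs(x - mu) > threshold * sigma]

# Normal vibration levels, plus one spike that may precede a failure.
vibration = [0.9, 1.0, 1.1, 1.0, 0.9, 1.0, 1.1, 0.9, 1.0, 7.5]
print(find_anomalies(vibration))  # the 7.5 spike is flagged
```

A production system would use time-series-aware methods (seasonality, rolling windows) rather than a global mean, but the principle is the same: model "normal" and flag deviations.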
Q3. An AI chatbot repeatedly gives confident but incorrect answers when asked about information it was not trained on. Which responsible AI consideration does this raise?
Answer: Reliability and safety. AI systems should perform reliably and safely. A chatbot generating confident incorrect answers (hallucination) is an unreliable system, and if acted upon, those incorrect answers can cause harm. Mitigation includes RAG grounding, output verification, and clear communication of the system's knowledge limits.
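The RAG-grounding mitigation mentioned above can be illustrated with a toy retrieval step: answer only from a known document set, and state the system's limits instead of guessing. The documents, questions, and word-overlap scoring here are all invented for the sketch; real RAG systems use embedding-based retrieval.

```python
# Toy grounding sketch: retrieve the most relevant known document and
# admit ignorance when nothing matches (all content here is invented).
DOCS = {
    "returns": "Items can be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def grounded_answer(question):
    words = set(question.lower().split())
    # Score each document by shared words with the question (crude retrieval).
    best_key = max(DOCS, key=lambda k: len(words & set(DOCS[k].lower().split())))
    overlap = len(words & set(DOCS[best_key].lower().split()))
    if overlap == 0:
        # Communicate the knowledge limit rather than fabricate an answer.
        return "I don't have information about that."
    return DOCS[best_key]

print(grounded_answer("how long does shipping take"))
print(grounded_answer("what is the meaning of life"))
```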
Q4. A company uses a facial recognition system to identify employees for building access. An employee is denied access because the system failed to recognize their face due to poor lighting. Which responsible AI consideration does this raise?
Answer: Reliability and safety. The system failed to perform its primary function in a real-world condition (variable lighting), which is a reliability failure. A secondary concern is inclusiveness—if the system fails more often for certain groups. Ensuring the system works across varied conditions is a reliability and safety obligation.
Q5. Which of the following is an example of an AI workload that uses natural language processing?
Answer: A system that automatically categorizes customer support emails by topic and sentiment. NLP workloads analyze, classify, and extract meaning from text. Categorizing emails by topic and sentiment is NLP—it requires understanding language semantics, not just pattern matching on words.
Q6. An organization wants to ensure that all decisions made by an AI model can be explained to the people affected by them. Which Microsoft responsible AI principle does this describe?
Answer: Transparency. Transparency means AI systems should be understandable—users and affected parties should be able to understand how the system works and how it reached its conclusions. This is especially important in high-stakes contexts: loan decisions, hiring, medical diagnoses.
Skill Area 2: Fundamental Principles of Machine Learning on Azure (Questions 7–12)
Q7. A data scientist wants to train a model to predict whether a customer will churn (leave) in the next 30 days, using historical data where each customer is labeled as "churned" or "not churned." Which type of ML is this?
Answer: Supervised learning — binary classification. The model learns from labeled examples (churned / not churned). Predicting one of two categories is binary classification. Multi-class classification handles more than two categories. Regression predicts a continuous number (like revenue).
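To make "learning from labeled examples" concrete, here is a deliberately tiny supervised binary classifier: a one-feature decision stump that learns a churn threshold on days since last login. The feature, data, and threshold search are invented for illustration; real churn models use many features and algorithms like logistic regression or gradient boosting.

```python
# Toy supervised binary classifier: learn a single threshold on one feature
# (days since last login) from labeled churn examples. Data is invented.
def fit_stump(examples):
    """examples: list of (days_since_login, churned) pairs.
    Returns the threshold that best separates churned from retained."""
    best_t, best_errors = None, len(examples) + 1
    for t, _ in examples:  # candidate thresholds: the observed feature values
        # Predict "churned" whenever days_since_login >= t; count mistakes.
        errors = sum((days >= t) != churned for days, churned in examples)
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

train = [(2, False), (5, False), (8, False), (30, True), (45, True), (60, True)]
threshold = fit_stump(train)
print(threshold)        # 30: the learned decision boundary
print(40 >= threshold)  # True: a customer inactive 40 days is predicted to churn
```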
Q8. Which Azure service allows a data scientist to train and deploy ML models using a visual drag-and-drop interface without writing code?
Answer: Azure Machine Learning designer. The Azure ML designer is a visual drag-and-drop interface for building ML pipelines. It is distinct from Azure ML Automated ML (AutoML), which automatically selects and tunes algorithms without visual pipeline construction.
Q9. A team wants Azure to automatically test multiple ML algorithms and hyperparameter combinations to find the best-performing model for their dataset, with minimal manual configuration. Which Azure ML feature handles this?
Answer: Azure Machine Learning Automated ML (AutoML). AutoML automatically tests multiple algorithms and hyperparameters, evaluates them, and produces a leaderboard of the best models. It significantly reduces the time a data scientist spends on algorithm selection and tuning.
Q10. An ML model predicts house prices (a continuous number). After evaluation, the model has a mean absolute error (MAE) of $15,000. What does this mean?
Answer: On average, the model's predictions are $15,000 away from the actual price. MAE is the average absolute difference between predicted and actual values. A $15,000 MAE on house price predictions means the model is typically off by $15,000 in either direction. Whether this is acceptable depends on the price range of the houses.
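The MAE calculation itself is simple averaging of absolute differences. A quick sketch with invented house prices:

```python
# Mean absolute error: average of |predicted - actual| (toy numbers).
def mae(predicted, actual):
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

predicted = [310_000, 255_000, 480_000]
actual    = [300_000, 270_000, 470_000]
print(mae(predicted, actual))  # (10000 + 15000 + 10000) / 3 ≈ 11666.67
```

Note that MAE keeps the units of the target variable (dollars here), which is why a $15,000 MAE is directly interpretable against the price range of the houses.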
Q11. The process of selecting which variables from raw data will be used as inputs to an ML model is called what?
Answer: Feature engineering (or feature selection). Feature engineering involves selecting, transforming, and creating input variables (features) from raw data to improve model performance. It is one of the most impactful steps in the ML pipeline and requires domain knowledge about what signals are meaningful for the prediction task.
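A small sketch of what "creating input variables from raw data" looks like in practice. The record fields and derived features here are invented for illustration:

```python
# Feature engineering sketch: derive numeric model inputs from raw order
# records (field names and features are invented for this example).
from datetime import date

def make_features(order_history, today):
    """Turn a customer's raw order list into numeric features."""
    amounts = [o["amount"] for o in order_history]
    last_order = max(o["date"] for o in order_history)
    return {
        "order_count": len(order_history),
        "avg_order_value": sum(amounts) / len(amounts),
        "days_since_last_order": (today - last_order).days,
    }

raw = [{"amount": 40.0, "date": date(2024, 1, 5)},
       {"amount": 60.0, "date": date(2024, 3, 1)}]
print(make_features(raw, today=date(2024, 4, 1)))
```

Choosing features like "days since last order" over raw timestamps is exactly the domain-knowledge step the answer describes: the engineer decides which signals plausibly predict the target.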
Q12. A company trains an ML model on data from 2019–2022 to predict customer demand. After deployment in 2024, the model's accuracy declines. What is the most likely explanation?
Answer: Data drift or concept drift—the statistical properties of the input data have changed since the model was trained. Economic changes, shifts in consumer behavior, or new products have changed the patterns in 2024 customer data in ways the 2019–2022 training data did not represent. The model needs to be retrained on more recent data.
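One simple way to detect this kind of drift is to compare the distribution of incoming data against the training data. This sketch flags drift when the new mean shifts by more than two training standard deviations; the demand numbers and the threshold are invented, and production systems use richer statistical tests.

```python
# Simple drift check: compare the mean of new data against the training
# mean, in units of training standard deviations (threshold is invented).
from statistics import mean, stdev

def drifted(train_values, new_values, max_shift=2.0):
    mu, sigma = mean(train_values), stdev(train_values)
    return abs(mean(new_values) - mu) > max_shift * sigma

train_demand = [100, 105, 98, 102, 101, 99, 103, 97]    # 2019-2022 pattern
new_demand = [150, 160, 148, 155, 152, 158, 149, 151]   # 2024 pattern
print(drifted(train_demand, new_demand))  # True: retraining is warranted
```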
Skill Area 3: Computer Vision Workloads on Azure (Questions 13–18)
Q13. A logistics company wants to automatically extract invoice numbers, dates, and total amounts from thousands of scanned PDF invoices. Which Azure AI service is most appropriate?
Answer: Azure AI Document Intelligence (formerly Form Recognizer). Document Intelligence is purpose-built for extracting structured data from documents—forms, invoices, receipts, ID documents. Azure AI Vision provides general OCR for reading text from images but does not extract structured key-value pairs from forms as effectively.
Q14. A retailer wants to identify the specific products visible in shelf photos taken by store employees, using a custom model trained on their product catalog images. Which Azure AI service enables this?
Answer: Azure AI Custom Vision. Custom Vision allows training custom image classification and object detection models on your own labeled images—without needing deep ML expertise. The pre-trained Azure AI Vision service can detect general objects but not brand-specific product variations that require custom training data.
Q15. A social media platform needs to automatically detect and blur explicit content in user-uploaded images before they are published. Which Azure AI service provides this capability?
Answer: Azure AI Content Safety (or the content moderation capability of Azure AI Vision). Both services can detect explicit content in images, but Azure AI Content Safety is the current, dedicated service for this use case: it returns confidence scores across harm categories, which the platform can use to decide whether to blur or block an image before publication.