AI Model Evaluation and Metrics: Understanding Performance Indicators MCQs
Explore key concepts in accuracy, precision, recall, and model assessment techniques. Ideal for students and AI professionals.
📌 Important Instructions
- ✅ This is a free test. Beware of scammers who ask for money to take this test.
- 📋 Total Number of Questions: 30
- ⏳ Time Allotted: 30 Minutes
- 📝 Marking Scheme: Each question carries 1 mark. There is no negative marking.
- ⚠️ Do not refresh or close the page during the test, as it may result in loss of progress.
- 🔍 Read each question carefully before selecting your answer.
- 🎯 All the best! Give your best effort and ace the test! 🚀
1. What does accuracy measure in a classification model?
- The ratio of false positives to false negatives.
- The difference between predicted and actual values.
- The percentage of correct predictions.
- The ability of the model to handle imbalanced data.
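For reference, accuracy has a standard definition in terms of confusion-matrix counts (true positives TP, true negatives TN, false positives FP, false negatives FN):

```latex
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
```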
2. Which metric is used to evaluate a model’s ability to correctly classify positive instances?
- Precision
- Recall
- Accuracy
- F1-Score
3. What is precision in the context of classification models?
- The ability of the model to reduce errors.
- The percentage of correct predictions among all predictions.
- The ability to identify negative instances.
- The percentage of true positive predictions among all positive predictions.
4. What does recall measure in a classification model?
- The percentage of actual positive instances correctly identified by the model.
- The percentage of true positive predictions among all negative predictions.
- The ability to handle missing data.
- The overall accuracy of the model.
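Precision and recall are easy to mix up; their standard formulas differ only in the denominator:

```latex
\mathrm{Precision} = \frac{TP}{TP + FP},
\qquad
\mathrm{Recall} = \frac{TP}{TP + FN}
```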
5. What is F1-Score?
- The sum of precision and recall.
- The harmonic mean of precision and recall.
- The difference between precision and recall.
- A measure of model complexity.
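The harmonic mean penalizes imbalance between the two, so a high F1-score requires both precision and recall to be high:

```latex
F_1 = 2 \cdot \frac{\mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
```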
6. What is the confusion matrix used for?
- To calculate the model’s execution time.
- To train the model on labeled data.
- To visualize the distribution of the dataset.
- To evaluate the performance of a classification model by comparing predicted and actual values.
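As a quick illustration, here is a minimal sketch using scikit-learn (an illustrative library choice, not something the quiz assumes; the toy labels are made up):

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 1, 1, 0, 1, 0]  # actual labels (hypothetical toy data)
y_pred = [0, 1, 0, 0, 1, 1]  # model predictions

# Rows are actual classes, columns are predicted classes:
# [[TN, FP],
#  [FN, TP]]
print(confusion_matrix(y_true, y_pred))
```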
7. What does the term 'false positive' refer to in classification models?
- When the model incorrectly classifies a negative instance as positive.
- When the model correctly identifies a positive instance.
- When the model misses positive instances.
- When the model predicts the correct label for negative instances.
8. What is the ROC (Receiver Operating Characteristic) curve used for?
- To calculate the accuracy of a model.
- To plot the true positive rate against the false positive rate.
- To determine the number of clusters in unsupervised learning.
- To visualize the decision boundaries of the model.
9. What does the AUC (Area Under the Curve) represent in an ROC curve?
- The overall ability of the model to distinguish between positive and negative classes.
- The ratio of true positives to total predictions.
- The proportion of the dataset used for testing.
- The time complexity of the model.
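A minimal sketch of both ideas, again using scikit-learn as an illustrative (assumed) choice; `y_score` stands in for the model's predicted probabilities for the positive class:

```python
from sklearn.metrics import roc_curve, roc_auc_score

y_true  = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]  # hypothetical predicted probabilities

# ROC curve: true positive rate vs. false positive rate at each threshold
fpr, tpr, thresholds = roc_curve(y_true, y_score)

# AUC: 0.5 is no better than random guessing, 1.0 is perfect separation
print(roc_auc_score(y_true, y_score))
```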
10. Which metric is primarily used to evaluate regression models?
- Precision
- F1-Score
- Mean Squared Error (MSE)
- Confusion Matrix
11. What does R-squared (R²) indicate in regression analysis?
- The proportion of variance in the dependent variable explained by the independent variables.
- The total number of errors in a regression model.
- The relationship between independent and dependent variables.
- The total error of a classification model.
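For reference, the standard definitions, where y_i are the actual values, ŷ_i the predictions, and ȳ the mean of the actuals:

```latex
\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2,
\qquad
R^2 = 1 - \frac{\sum_{i} (y_i - \hat{y}_i)^2}{\sum_{i} (y_i - \bar{y})^2}
```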
12. What is the main advantage of using cross-validation during model evaluation?
- It increases the model’s training time.
- It helps in assessing the model’s performance by using different subsets of data for training and testing.
- It reduces the computational power required.
- It helps in overfitting the model.
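A minimal k-fold cross-validation sketch (scikit-learn, logistic regression, and the iris dataset are all illustrative choices, not part of the quiz):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# 5-fold CV: each fold serves once as the held-out test set,
# so the scores reflect performance on data the model did not train on
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean(), scores.std())
```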
13. What is a characteristic of a model that is overfitting?
- It performs well on the training data but poorly on unseen data.
- It performs well on both training and test data.
- It fails to learn from the training data.
- It consistently gives inaccurate results.
14. What is the primary goal of model selection in machine learning?
- To minimize the execution time of the model.
- To select the model that performs best on the training set.
- To choose the model that generalizes well on unseen data.
- To maximize the number of features in the dataset.
15. What does the term "underfitting" mean in model evaluation?
- When a model is too simple and fails to capture the underlying patterns of the data.
- When a model is overly complex and learns too much from the training data.
- When the model performs well on unseen data but not on the training data.
- When the model is unable to make predictions on any data.
16. What is a key benefit of using the F1-score over precision and recall individually?
- It is more useful for regression problems.
- It is easier to compute than precision and recall.
- It balances the tradeoff between precision and recall in one metric.
- It is applicable only to binary classification problems.
17. Which of the following is a limitation of using accuracy as the only evaluation metric for imbalanced data?
- Accuracy can be misleading because it may favor the majority class.
- Accuracy always provides a clear picture of model performance.
- Accuracy ignores the false negatives in the dataset.
- Accuracy is not suitable for regression problems.
18. In which scenario would you use the area under the precision-recall curve (PR AUC)?
- When there is a need for a visual representation of errors.
- When there are only two possible outcomes in a classification task.
- When evaluating models with highly imbalanced classes.
- When comparing regression models with similar performance.
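A minimal PR AUC sketch; scikit-learn's `average_precision_score` is one common way to summarize the precision-recall curve, and the imbalanced toy labels below are hypothetical:

```python
from sklearn.metrics import average_precision_score, precision_recall_curve

y_true  = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]  # heavily imbalanced toy labels
y_score = [0.1, 0.2, 0.1, 0.3, 0.2, 0.1, 0.4, 0.2, 0.9, 0.6]

precision, recall, thresholds = precision_recall_curve(y_true, y_score)
print(average_precision_score(y_true, y_score))  # summarizes the PR curve
```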
19. What is a characteristic of a good evaluation metric for a machine learning model?
- It should be easy to compute with minimal data.
- It should only consider the performance on the training data.
- It should reflect the real-world performance of the model on unseen data.
- It should always yield the same result for different datasets.
20. Which of the following best describes the importance of model evaluation?
- It ensures that the model is both accurate and generalizes well to new data.
- It only measures the speed of the model during execution.
- It focuses solely on optimizing the model for training data.
- It is only useful for determining model performance on training sets.
21. Which of the following is an example of a metric for evaluating classification models in imbalanced datasets?
- F1-Score
- Mean Squared Error
- ROC Curve
- R-squared
22. What is the purpose of the log-loss function in classification problems?
- To normalize the dataset.
- To compute the time complexity of the model.
- To evaluate the performance of a model based on the probability of its predictions.
- To calculate the variance of the errors.
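For a binary problem, log-loss (cross-entropy) is defined as follows, with p_i the predicted probability of the positive class:

```latex
\mathrm{LogLoss} = -\frac{1}{n} \sum_{i=1}^{n} \big[\, y_i \log p_i + (1 - y_i) \log(1 - p_i) \,\big]
```

Confident wrong predictions are penalized heavily, which is why log-loss rewards well-calibrated probabilities rather than just correct labels.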
23. Which of the following is NOT a disadvantage of using accuracy as a performance metric for imbalanced data?
- It may be misleading if the dataset has a large class imbalance.
- It doesn't account for the types of errors made by the model.
- It is insensitive to false negatives.
- It always works well for binary classification tasks.
24. Which metric is used to evaluate a classification model on multi-class problems?
- Multi-class ROC-AUC
- Mean Squared Error
- Adjusted R-squared
- Precision-Recall Curve
25. What does the term "overfitting" mean in model evaluation?
- The model is optimized for speed but not accuracy.
- The model fails to capture the patterns in the training data.
- The model performs well on both training and test data.
- The model learns noise from the training data and performs poorly on unseen data.
26. Which of the following is true about the precision-recall curve?
- It only applies to binary classification tasks.
- It is used primarily for regression problems.
- It shows the tradeoff between precision and recall for different thresholds.
- It is used to evaluate the accuracy of the dataset.
27. What does "area under the ROC curve" (AUC-ROC) measure in a classification model?
- The ability of the model to distinguish between positive and negative classes.
- The accuracy of the model on the training data.
- The model's error rate in predictions.
- The precision of the model on unseen data.
28. What does the term "false negatives" refer to in the confusion matrix?
- Instances where the model correctly classifies a positive instance as positive.
- Instances where the model correctly classifies a negative instance as negative.
- Instances where the model incorrectly classifies a positive instance as negative.
- Instances where the model incorrectly classifies a negative instance as positive.
29. Which evaluation metric is most appropriate for a binary classification problem with an imbalanced dataset?
- Precision-Recall AUC
- R-squared
- Mean Absolute Error
- Confusion Matrix
30. What does the term "true positives" mean in a confusion matrix?
- Instances where the model correctly classifies positive instances.
- Instances where the model incorrectly classifies positive instances.
- Instances where the model correctly classifies negative instances.
- Instances where the model incorrectly classifies negative instances.