AI Model Evaluation and Metrics: Understanding Performance Indicators MCQs

Questions: 30

Questions
  • 1. What does accuracy measure in a classification model?

    • a) The ratio of false positives to false negatives.
    • b) The difference between predicted and actual values.
    • c) The percentage of correct predictions.
    • d) The ability of the model to handle imbalanced data.
  • 2. Which metric is used to evaluate a model’s ability to correctly identify the actual positive instances?

    • a) Precision
    • b) Recall
    • c) Accuracy
    • d) F1-Score
  • 3. What is precision in the context of classification models?

    • a) The ability of the model to reduce errors.
    • b) The percentage of correct predictions among all predictions.
    • c) The ability to identify negative instances.
    • d) The percentage of true positive predictions among all positive predictions.
  • 4. What does recall measure in a classification model?

    • a) The percentage of actual positive instances correctly identified by the model.
    • b) The percentage of true positive predictions among all negative predictions.
    • c) The ability to handle missing data.
    • d) The overall accuracy of the model.
  • 5. What is F1-Score?

    • a) The sum of precision and recall.
    • b) The harmonic mean of precision and recall.
    • c) The difference between precision and recall.
    • d) A measure of model complexity.
  • 6. What is the confusion matrix used for? (a worked sketch follows the question list)

    • a) To calculate the model’s execution time.
    • b) To train the model on labeled data.
    • c) To visualize the distribution of the dataset.
    • d) To evaluate the performance of a classification model by comparing predicted and actual values.
  • 7. What does the term "false positive" refer to in classification models?

    • a) When the model incorrectly classifies a negative instance as positive.
    • b) When the model correctly identifies a positive instance.
    • c) When the model misses positive instances.
    • d) When the model predicts the correct label for negative instances.
  • 8. What is the ROC (Receiver Operating Characteristic) curve used for? (a worked sketch follows the question list)

    • a) To calculate the accuracy of a model.
    • b) To plot the true positive rate against the false positive rate.
    • c) To determine the number of clusters in unsupervised learning.
    • d) To visualize the decision boundaries of the model.
  • 9. What does the AUC (Area Under the Curve) represent in an ROC curve?

    • a) The overall ability of the model to distinguish between positive and negative classes.
    • b) The ratio of true positives to total predictions.
    • c) The proportion of the dataset used for testing.
    • d) The time complexity of the model.
  • 10. Which metric is primarily used to evaluate regression models? (a worked sketch follows the question list)

    • a) Precision
    • b) F1-Score
    • c) Mean Squared Error (MSE)
    • d) Confusion Matrix
  • 11. What does R-squared (R²) indicate in regression analysis?

    • a) The proportion of variance in the dependent variable explained by the independent variables.
    • b) The total number of errors in a regression model.
    • c) The relationship between independent and dependent variables.
    • d) The total error of a classification model.
  • 12. What is the main advantage of using cross-validation during model evaluation? (a worked sketch follows the question list)

    • a) It increases the model’s training time.
    • b) It helps in assessing the model’s performance by using different subsets of data for training and testing.
    • c) It reduces the computational power required.
    • d) It helps in overfitting the model.
  • 13. What is a characteristic of a model that is overfitting? (a worked sketch follows the question list)

    • a) It performs well on the training data but poorly on unseen data.
    • b) It performs well on both training and test data.
    • c) It fails to learn from the training data.
    • d) It consistently gives inaccurate results.
  • 14. What is the primary goal of model selection in machine learning?

    • a) To minimize the execution time of the model.
    • b) To select the model that performs best on the training set.
    • c) To choose the model that generalizes well on unseen data.
    • d) To maximize the number of features in the dataset.
  • 15. What does the term "underfitting" mean in model evaluation?

    • a) When a model is too simple and fails to capture the underlying patterns of the data.
    • b) When a model is overly complex and learns too much from the training data.
    • c) When the model performs well on unseen data but not on the training data.
    • d) When the model is unable to make predictions on any data.
  • 16. What is a key benefit of using the F1-Score over precision and recall individually?

    • a) It is more useful for regression problems.
    • b) It is easier to compute than precision and recall.
    • c) It balances the tradeoff between precision and recall in one metric.
    • d) It is applicable only to binary classification problems.
  • 17. Which of the following is a limitation of using accuracy as the only evaluation metric for imbalanced data?

    • a) Accuracy can be misleading because it may favor the majority class.
    • b) Accuracy always provides a clear picture of model performance.
    • c) Accuracy ignores the false negatives in the dataset.
    • d) Accuracy is not suitable for regression problems.
  • 18. In which scenario would you use the area under the precision-recall curve (PR AUC)? (a worked sketch follows the question list)

    • a) When there is a need for a visual representation of errors.
    • b) When there are only two possible outcomes in a classification task.
    • c) When evaluating models with highly imbalanced classes.
    • d) When comparing regression models with similar performance.
  • 19. What is a characteristic of a good evaluation metric for a machine learning model?

    • a) It should be easy to compute with minimal data.
    • b) It should only consider the performance on the training data.
    • c) It should reflect the real-world performance of the model on unseen data.
    • d) It should always yield the same result for different datasets.
  • 20. Which of the following best describes the importance of model evaluation?

    • a) It ensures that the model is both accurate and generalizes well to new data.
    • b) It only measures the speed of the model during execution.
    • c) It focuses solely on optimizing the model for training data.
    • d) It is only useful for determining model performance on training sets.
  • 21. Which of the following is an example of a metric for evaluating classification models in imbalanced datasets?

    • a) F1-Score
    • b) Mean Squared Error
    • c) ROC Curve
    • d) R-squared
  • 22. What is the purpose of the log-loss function in classification problems? (a worked sketch follows the question list)

    • a) To normalize the dataset.
    • b) To compute the time complexity of the model.
    • c) To evaluate the performance of a model based on the probability of its predictions.
    • d) To calculate the variance of the errors.
  • 23. Which of the following is NOT a disadvantage of using accuracy as a performance metric for imbalanced data?

    • a) It may be misleading if the dataset has a large class imbalance.
    • b) It doesn't account for the types of errors made by the model.
    • c) It is insensitive to false negatives.
    • d) It always works well for binary classification tasks.
  • 24. Which metric is used to evaluate a classification model on multi-class problems?

    • a) Multi-class ROC-AUC
    • b) Mean Squared Error
    • c) Adjusted R-squared
    • d) Precision-Recall Curve
  • 25. What does the term "overfitting" mean in model evaluation?

    • a) The model is optimized for speed but not accuracy.
    • b) The model fails to capture the patterns in the training data.
    • c) The model performs well on both training and test data.
    • d) The model learns noise from the training data and performs poorly on unseen data.
  • 26. Which of the following is true about the precision-recall curve?

    • a) It only applies to binary classification tasks.
    • b) It is used primarily for regression problems.
    • c) It shows the tradeoff between precision and recall for different thresholds.
    • d) It is used to evaluate the accuracy of the dataset.
  • 27. What does "area under the ROC curve" (AUC-ROC) measure in a classification model?

    • a) The ability of the model to distinguish between positive and negative classes.
    • b) The accuracy of the model on the training data.
    • c) The model's error rate in predictions.
    • d) The precision of the model on unseen data.
  • 28. What does the term "false negatives" refer to in the confusion matrix?

    • a) Instances where the model correctly classifies a positive instance as positive.
    • b) Instances where the model correctly classifies a negative instance as negative.
    • c) Instances where the model incorrectly classifies a positive instance as negative.
    • d) Instances where the model incorrectly classifies a negative instance as positive.
  • 29. Which evaluation metric is most appropriate for a binary classification problem with an imbalanced dataset?

    • a) Precision-Recall AUC
    • b) R-squared
    • c) Mean Absolute Error
    • d) Confusion Matrix
  • 30. What does the term "true positives" mean in a confusion matrix?

    • a) Instances where the model correctly classifies positive instances.
    • b) Instances where the model incorrectly classifies positive instances.
    • c) Instances where the model correctly classifies negative instances.
    • d) Instances where the model incorrectly classifies negative instances.
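
Worked Examples

The short Python sketches below illustrate several of the metrics covered by the questions above. Each runs on small made-up data and is a minimal illustration, not a reference implementation.

This first sketch (Questions 1–7, 16, 28, and 30) tallies the four confusion-matrix cells for a toy binary classifier and derives accuracy, precision, recall, and F1-Score from them:

```python
# Toy binary labels, invented for illustration: 1 = positive, 0 = negative.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

pairs = list(zip(y_true, y_pred))
tp = sum(1 for t, p in pairs if t == 1 and p == 1)  # true positives
tn = sum(1 for t, p in pairs if t == 0 and p == 0)  # true negatives
fp = sum(1 for t, p in pairs if t == 0 and p == 1)  # false positive: negative classified as positive
fn = sum(1 for t, p in pairs if t == 1 and p == 0)  # false negative: positive classified as negative

accuracy = (tp + tn) / len(pairs)                    # share of correct predictions
precision = tp / (tp + fp)                           # true positives among all positive predictions
recall = tp / (tp + fn)                              # actual positives correctly identified
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean of precision and recall

print(f"TP={tp} TN={tn} FP={fp} FN={fn}")
print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
```

scikit-learn's confusion_matrix, accuracy_score, precision_score, recall_score, and f1_score report the same quantities.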
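
For Questions 8, 9, and 27: the ROC curve plots the true positive rate against the false positive rate across decision thresholds, and the AUC equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one. The sketch below computes AUC directly from that pairwise definition on invented scores:

```python
# Invented ground-truth labels and model scores.
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.9]

pos = [s for s, t in zip(scores, y_true) if t == 1]
neg = [s for s, t in zip(scores, y_true) if t == 0]

# Count pairwise wins for the positives (ties count half), then normalise.
wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
auc = wins / (len(pos) * len(neg))
print(f"AUC = {auc:.3f}")  # 1.0 = perfect separation, 0.5 = random guessing
```

For the curve itself, sklearn.metrics.roc_curve returns the TPR/FPR pairs, and roc_auc_score(y_true, scores) gives the same area; with multi_class="ovr" and per-class probabilities it also covers the multi-class case asked about in Question 24.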
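
For Questions 10 and 11: Mean Squared Error averages the squared residuals, and R² is the fraction of the target's variance that the model explains. A from-scratch computation on invented numbers:

```python
# Invented regression targets and predictions.
y_true = [3.0, 5.0, 2.5, 7.0, 4.5]
y_pred = [2.8, 5.4, 2.0, 6.5, 4.9]

n = len(y_true)
ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
mse = ss_res / n                                            # mean squared error

mean_y = sum(y_true) / n
ss_tot = sum((t - mean_y) ** 2 for t in y_true)             # total variation around the mean
r2 = 1 - ss_res / ss_tot                                    # proportion of variance explained

print(f"MSE = {mse:.3f}, R^2 = {r2:.3f}")
```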
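
For Question 12: k-fold cross-validation rotates which subset of the data is held out for testing, so every example is used for both training and evaluation. A minimal sketch, assuming scikit-learn is installed, using its bundled iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Five train/test rotations; each fold serves as the test set exactly once.
scores = cross_val_score(model, X, y, cv=5)
print("fold accuracies:", scores)
print("mean accuracy:  ", scores.mean())
```

The mean of the fold scores is a less optimistic estimate than a single train/test split, because no one lucky split can dominate it.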
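
For Questions 13, 15, and 25: over- and underfitting show up as a gap (or a shared failure) between training error and held-out error. The NumPy sketch below fits polynomials of increasing degree to noisy data; the exact numbers depend on the random seed, but the degree-1 fit typically underfits both sets, while the high-degree fit drives training error down and test error up:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, size=x.size)  # noisy sine wave

# Hold out every third point as a makeshift test set.
is_test = np.arange(x.size) % 3 == 0
x_tr, y_tr = x[~is_test], y[~is_test]
x_te, y_te = x[is_test], y[is_test]

for degree in (1, 3, 15):  # underfit, reasonable fit, overfit
    coeffs = np.polyfit(x_tr, y_tr, degree)
    mse_tr = np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2)
    mse_te = np.mean((np.polyval(coeffs, x_te) - y_te) ** 2)
    print(f"degree {degree:2d}: train MSE {mse_tr:.3f}, test MSE {mse_te:.3f}")
```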
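
For Questions 17, 18, 21, 26, and 29: on imbalanced data, accuracy rewards a model that simply predicts the majority class, while recall, F1-Score, and PR AUC expose the failure. A sketch assuming scikit-learn, with a 95:5 imbalance and probability scores fabricated purely for the PR AUC part:

```python
from sklearn.metrics import accuracy_score, average_precision_score, recall_score

# 95 negatives followed by 5 positives; the "model" always predicts the majority class.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

print("accuracy:", accuracy_score(y_true, y_pred))  # 0.95, yet it finds no positives
print("recall:  ", recall_score(y_true, y_pred))    # 0.0 exposes the failure

# PR AUC (average precision) works on scores rather than hard labels.
# These probabilities are invented for illustration only.
y_scores = [0.05] * 90 + [0.40] * 5 + [0.60] * 3 + [0.30] * 2
print("PR AUC:  ", average_precision_score(y_true, y_scores))
```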
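
For Question 22: log-loss evaluates the predicted probabilities, not just the hard labels, so a model that is right but under-confident still pays a penalty. A from-scratch version on invented probabilities:

```python
import math

def log_loss(y_true, y_prob, eps=1e-15):
    """Average negative log-likelihood of the predicted probabilities."""
    total = 0.0
    for t, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1.0 - eps)  # clip so log(0) can never occur
        total += -(t * math.log(p) + (1 - t) * math.log(1.0 - p))
    return total / len(y_true)

y_true = [1, 0, 1, 1, 0]
confident = [0.90, 0.10, 0.80, 0.95, 0.20]  # correct and confident
hesitant  = [0.60, 0.40, 0.60, 0.60, 0.40]  # correct but barely

print(f"confident: {log_loss(y_true, confident):.3f}")  # lower (better) loss
print(f"hesitant:  {log_loss(y_true, hesitant):.3f}")   # higher loss for the same hard labels
```

sklearn.metrics.log_loss implements the same measure.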

Ready to put your knowledge to the test? Take this exam and evaluate your understanding of the subject.
