T4Tutorials.PK

NSCT – Supervised Learning MCQs

1. Supervised learning is:
(A) Training models with no data
(B) A type of machine learning where the model is trained on labeled data
(C) Compressing unlabeled data
(D) Backup only




2. Labeled data in supervised learning contains:
(A) Only input features
(B) Input features and their corresponding output or target values
(C) Only outputs
(D) Backup only
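To make option (B) concrete, here is a minimal sketch of what labeled data looks like in code (the feature values and flower labels are made up for illustration):

```python
# A labeled dataset: each row of X is an input (its features),
# and the entry of y at the same index is that row's known target label.
X = [
    [5.1, 3.5],  # features of sample 0
    [6.2, 2.9],  # features of sample 1
    [4.8, 3.1],  # features of sample 2
]
y = ["setosa", "versicolor", "setosa"]  # one label per sample

# During training, the model sees each input paired with its label.
for features, label in zip(X, y):
    print(features, "->", label)
```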




3. The main goal of supervised learning is to:
(A) Compress data
(B) Encrypt data
(C) Learn a mapping from inputs to outputs to make predictions on new data
(D) Backup only




4. Regression in supervised learning is used to:
(A) Backup only
(B) Predict categories
(C) Encrypt numbers
(D) Predict continuous numeric values
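As a worked example of option (D), a one-feature linear regression can be fit with the closed-form least-squares formulas; the toy data below is illustrative, not from the source:

```python
# Fit y ≈ a*x + b by ordinary least squares (one feature, closed form).
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept from the means.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]   # roughly y = 2x, with noise
a, b = fit_line(xs, ys)
print(a, b)  # slope close to 2, intercept close to 0
```

The fitted line then predicts a continuous value `a*x + b` for any new input `x`.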




5. Classification in supervised learning is used to:
(A) Encrypt categories
(B) Predict continuous values
(C) Predict categorical or discrete outcomes
(D) Backup only




6. Common supervised learning algorithms include:
(A) Principal component analysis only
(B) K-means clustering only
(C) Linear regression, logistic regression, decision trees, random forests, and support vector machines
(D) Backup only




7. Mean Squared Error (MSE) is commonly used to:
(A) Backup only
(B) Encrypt errors
(C) Compress errors
(D) Evaluate regression models
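Option (D) can be computed by hand in a few lines; the target and prediction values below are invented for illustration:

```python
# MSE: the mean of the squared differences between targets and predictions.
def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

print(mse([3.0, 5.0, 7.0], [2.5, 5.0, 8.0]))  # (0.25 + 0 + 1) / 3 ≈ 0.4167
```

Lower MSE means the regression model's predictions are, on average, closer to the true values.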




8. Accuracy, precision, recall, and F1-score are used to:
(A) Compress metrics
(B) Encrypt metrics
(C) Evaluate classification models
(D) Backup only
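The four metrics in this question can all be derived from true/false positives and negatives; a minimal sketch for binary labels (the example labels are made up):

```python
# Binary classification metrics, treating 1 as the positive class.
def classification_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
print(classification_metrics(y_true, y_pred))  # accuracy 4/6, precision/recall/F1 all 0.75
```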




9. The training set in supervised learning is:
(A) Encrypting data
(B) The subset of data used to train the model
(C) Compressing data
(D) Backup only




10. The test set in supervised learning is:
(A) Compressing test data
(B) Encrypting test data
(C) The subset of data used to evaluate the model's performance on unseen data
(D) Backup only
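A train/test split as in option (C) can be sketched with the standard library alone; the 75/25 ratio and toy data are illustrative choices, not prescribed by the source:

```python
import random

# Shuffle indices reproducibly, then carve off the tail as the test set.
def train_test_split(X, y, test_ratio=0.25, seed=0):
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    cut = int(len(X) * (1 - test_ratio))
    train_idx, test_idx = idx[:cut], idx[cut:]
    return ([X[i] for i in train_idx], [y[i] for i in train_idx],
            [X[i] for i in test_idx], [y[i] for i in test_idx])

X = [[i] for i in range(8)]
y = [i % 2 for i in range(8)]
X_tr, y_tr, X_te, y_te = train_test_split(X, y)
print(len(X_tr), len(X_te))  # 6 2
```

The model is fit only on `X_tr`/`y_tr`; `X_te`/`y_te` stay unseen until evaluation.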




11. Overfitting occurs when:
(A) Compressing models
(B) Encrypting models
(C) The model performs well on training data but poorly on new data
(D) Backup only




12. Underfitting occurs when:
(A) Compressing models
(B) Encrypting models
(C) The model is too simple to capture patterns in the data
(D) Backup only




13. Cross-validation is used to:
(A) Encrypt data splits
(B) Assess model performance more reliably and prevent overfitting
(C) Compress validation data
(D) Backup only
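The k-fold variant of cross-validation behind option (B) partitions the data into k folds, each serving once as the validation set. A minimal index-level sketch (fold counts chosen for illustration):

```python
# Produce k (train_indices, validation_indices) pairs over n samples.
def kfold(n, k):
    # Spread any remainder across the first n % k folds.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    splits, start = [], 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        splits.append((train, val))
        start += size
    return splits

for train_idx, val_idx in kfold(6, 3):
    print(train_idx, val_idx)  # each index is validated exactly once
```

Averaging a score over the k validation folds gives a more reliable performance estimate than a single split.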




14. Feature scaling in supervised learning helps:
(A) Backup only
(B) Encrypt features
(C) Compress features
(D) Improve convergence and performance of algorithms sensitive to feature magnitude
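One common form of the scaling in option (D) is min-max scaling, which maps a feature column into [0, 1]; the values below are made up:

```python
# Rescale a feature column to the [0, 1] range.
def min_max_scale(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

print(min_max_scale([10, 20, 15, 30]))  # [0.0, 0.5, 0.25, 1.0]
```

After scaling, no single feature dominates distance- or gradient-based algorithms purely because of its units.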




15. Decision trees in supervised learning:
(A) Split data based on feature values to make predictions
(B) Encrypt trees
(C) Compress trees
(D) Backup only
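The splitting idea in option (A) can be shown with the simplest possible tree, a one-split "decision stump"; this is an illustrative sketch on invented data, not a full tree learner:

```python
# Fit a depth-1 decision tree on one feature: try each midpoint between
# sorted values as a threshold and keep the split with the fewest
# training errors, predicting the majority label on each side.
def fit_stump(xs, ys):
    best = None
    pts = sorted(set(xs))
    for lo, hi in zip(pts, pts[1:]):
        t = (lo + hi) / 2
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        left_label = max(set(left), key=left.count)
        right_label = max(set(right), key=right.count)
        errors = sum(1 for x, y in zip(xs, ys)
                     if (left_label if x <= t else right_label) != y)
        if best is None or errors < best[0]:
            best = (errors, t, left_label, right_label)
    return best[1:]  # threshold, left label, right label

t, left_label, right_label = fit_stump([1.0, 2.0, 8.0, 9.0], [0, 0, 1, 1])
print(t, left_label, right_label)  # splits cleanly at 5.0
```

Full decision trees apply this split search recursively over many features.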




16. Support Vector Machines (SVM) aim to:
(A) Encrypt hyperplanes
(B) Find the optimal hyperplane that separates classes in the feature space
(C) Compress feature spaces
(D) Backup only




17. K-Nearest Neighbors (KNN) predicts output by:
(A) Considering the majority label or average value of nearest neighbors
(B) Encrypting neighbors
(C) Compressing neighbors
(D) Backup only
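The majority-vote idea in option (A) fits in a few lines for a single feature; the training points and labels are invented for illustration:

```python
from collections import Counter

# Compute the distance from x to every training point (1-D Euclidean
# here), then take a majority vote among the k nearest labels.
def knn_predict(X_train, y_train, x, k=3):
    dists = sorted((abs(xi - x), yi) for xi, yi in zip(X_train, y_train))
    nearest = [label for _, label in dists[:k]]
    return Counter(nearest).most_common(1)[0][0]

X_train = [1.0, 1.5, 2.0, 8.0, 8.5, 9.0]
y_train = ["A", "A", "A", "B", "B", "B"]
print(knn_predict(X_train, y_train, 2.2))  # A
print(knn_predict(X_train, y_train, 8.3))  # B
```

For regression, the vote is replaced by the average target value of the k neighbors.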




18. Regularization in supervised learning is used to:
(A) Backup only
(B) Encrypt coefficients
(C) Compress models
(D) Reduce overfitting by penalizing large coefficients
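Option (D) can be made concrete with an L2 (ridge-style) penalty added to the loss; the weights and alpha below are illustrative numbers:

```python
# Ridge-style loss: MSE plus an L2 penalty on the weights. Large
# coefficients raise the loss, pushing the optimizer toward small ones.
def ridge_loss(y_true, y_pred, weights, alpha):
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    penalty = alpha * sum(w ** 2 for w in weights)
    return mse + penalty

# Identical fit quality, but the larger weight pays a higher penalty.
print(ridge_loss([1.0, 2.0], [1.0, 2.0], weights=[0.5], alpha=0.1))  # 0.025
print(ridge_loss([1.0, 2.0], [1.0, 2.0], weights=[5.0], alpha=0.1))  # 2.5
```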




19. Label encoding is used to:
(A) Encrypt labels
(B) Convert categorical variables into numeric labels for modeling
(C) Compress labels
(D) Backup only
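Option (B) amounts to mapping each distinct category to an integer; a minimal sketch (the color categories are made up, and first-seen ordering is an illustrative choice):

```python
# Map each distinct category to an integer, in first-seen order.
def label_encode(labels):
    mapping = {}
    for label in labels:
        if label not in mapping:
            mapping[label] = len(mapping)
    return [mapping[label] for label in labels], mapping

encoded, mapping = label_encode(["red", "green", "red", "blue"])
print(encoded)  # [0, 1, 0, 2]
print(mapping)  # {'red': 0, 'green': 1, 'blue': 2}
```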




20. The main purpose of supervised learning is to:
(A) Predict outputs for new inputs using a model trained on labeled data
(B) Encrypt all data
(C) Compress all features
(D) Backup only



