T4Tutorials.PK

NSCT – Unsupervised Learning MCQs

1. Unsupervised learning is:

(A) Training models with labeled data only


(B) A type of machine learning where the model is trained on unlabeled data to find patterns or structure


(C) Compressing labeled datasets


(D) Backup only




2. The main goal of unsupervised learning is to:

(A) Encrypt patterns


(B) Discover hidden patterns, clusters, or relationships in data


(C) Compress datasets


(D) Backup only




3. Clustering in unsupervised learning is:

(A) Compressing clusters


(B) Encrypting clusters


(C) Grouping similar data points into clusters


(D) Backup only




4. K-Means is:

(A) Backup only


(B) A supervised learning algorithm


(C) An encryption algorithm


(D) A clustering algorithm that partitions data into K clusters based on similarity
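
As a sketch of the two alternating K-Means steps (assign each point to its nearest centroid, then move each centroid to the mean of its cluster), here is a toy one-dimensional version in plain Python. The function name `kmeans_1d`, the starting centroids, and the data are illustrative only; real code would use a library implementation such as scikit-learn's `KMeans`.

```python
# Toy 1-D K-Means sketch: alternate assignment and update steps.
def kmeans_1d(points, centroids, iters=10):
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = {c: [] for c in range(len(centroids))}
        for p in points:
            idx = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [sum(v) / len(v) if v else centroids[i]
                     for i, v in clusters.items()]
    return centroids

# Two well-separated groups: centroids converge to the group means.
print(kmeans_1d([1, 2, 3, 10, 11, 12], [0.0, 6.0]))  # -> [2.0, 11.0]
```

Note that K-Means is sensitive to the initial centroids; production implementations rerun it with several initializations and keep the best result.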




5. Hierarchical clustering builds:

(A) A flat label set


(B) A hierarchy of clusters using either agglomerative or divisive methods


(C) An encryption tree


(D) Backup only




6. Dimensionality reduction in unsupervised learning is:

(A) Compressing datasets


(B) Encrypting features


(C) Reducing the number of input variables to simplify data and improve analysis


(D) Backup only




7. Principal Component Analysis (PCA) is used to:

(A) Reduce dimensionality while preserving variance in data


(B) Encrypt principal components


(C) Compress features only


(D) Backup only
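
A minimal sketch of the idea behind PCA: center the data, take the covariance matrix, and the eigenvector with the largest eigenvalue is the direction of maximum variance. The helper name `pca_top_component` and the toy data are illustrative; library implementations (e.g. scikit-learn's `PCA`) typically use SVD instead of an explicit eigendecomposition.

```python
import numpy as np

# PCA sketch via eigendecomposition of the covariance matrix.
def pca_top_component(X):
    Xc = X - X.mean(axis=0)            # center each feature
    cov = np.cov(Xc, rowvar=False)     # feature covariance matrix
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return vecs[:, -1]                 # direction of maximum variance

# Points lying roughly along the line y = x: the top component
# points along that diagonal (up to sign), roughly [0.71, 0.71].
X = np.array([[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.8]])
pc1 = pca_top_component(X)
print(pc1)
```

Projecting the data onto the first few such components reduces dimensionality while preserving as much variance as possible.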




8. Independent Component Analysis (ICA) is used for:

(A) Separating a multivariate signal into independent non-Gaussian components


(B) Encrypting signals


(C) Compressing signals


(D) Backup only




9. Anomaly detection in unsupervised learning is:

(A) Encrypting anomalies


(B) Identifying unusual data points that differ significantly from the majority


(C) Compressing anomalies


(D) Backup only




10. DBSCAN is:

(A) A density-based clustering algorithm that identifies clusters and outliers


(B) A supervised regression algorithm


(C) An encryption method


(D) Backup only
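
The density-based idea can be sketched in one dimension: a point is a "core" point if at least `min_pts` neighbours lie within distance `eps`; clusters grow outward from core points, and anything unreachable from a core point is labelled noise (`-1`). This toy `dbscan_1d` is illustrative only, not the production algorithm (scikit-learn provides `DBSCAN` for real use).

```python
# Minimal 1-D DBSCAN sketch: grow clusters from dense "core" points.
def dbscan_1d(points, eps, min_pts):
    n = len(points)
    neighbours = [[j for j in range(n) if abs(points[i] - points[j]) <= eps]
                  for i in range(n)]
    labels = [None] * n
    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        if len(neighbours[i]) < min_pts:
            labels[i] = -1              # provisionally noise
            continue
        cluster += 1                    # i is a core point: start a cluster
        labels[i] = cluster
        queue = list(neighbours[i])
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster     # noise reachable from a core: border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            if len(neighbours[j]) >= min_pts:
                queue.extend(neighbours[j])   # j is also core: keep expanding
    return labels

# Two dense groups plus one isolated outlier (labelled -1).
print(dbscan_1d([1.0, 1.5, 2.0, 10.0, 10.4, 50.0], eps=1.0, min_pts=2))
```

Unlike K-Means, no number of clusters is specified in advance, and outliers are reported explicitly rather than forced into a cluster.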




11. Autoencoders are:

(A) Encrypting autoencoders


(B) Neural networks used for unsupervised feature learning and dimensionality reduction


(C) Compressing neural networks


(D) Backup only




12. The silhouette score in clustering measures:

(A) How well data points are clustered, indicating cohesion and separation


(B) Encrypting clusters


(C) Compressing clusters


(D) Backup only
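
The silhouette of a single point combines cohesion and separation: with `a` the mean distance to the point's own cluster and `b` the mean distance to the nearest other cluster, the score is `(b - a) / max(a, b)`, ranging from -1 (badly placed) to +1 (well clustered). This one-dimensional `silhouette` helper is a hand-rolled illustration (libraries offer e.g. scikit-learn's `silhouette_score`).

```python
# Silhouette sketch for one 1-D point: (b - a) / max(a, b).
def silhouette(point, own_cluster, other_clusters):
    # a: mean distance to the other members of the point's own cluster (cohesion)
    a = sum(abs(point - q) for q in own_cluster if q != point) / (len(own_cluster) - 1)
    # b: mean distance to the nearest other cluster (separation)
    b = min(sum(abs(point - q) for q in c) / len(c) for c in other_clusters)
    return (b - a) / max(a, b)

# Point 1.0 sits close to its own cluster and far from the other one,
# so its silhouette is close to +1.
print(silhouette(1.0, [1.0, 2.0], [[10.0, 11.0]]))
```

Averaging this score over all points gives a single quality number for comparing whole clusterings.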




13. Unsupervised learning is useful for:

(A) Compressing patterns


(B) Encrypting marketing data


(C) Market segmentation, anomaly detection, pattern discovery, and dimensionality reduction


(D) Backup only




14. Cosine similarity is used in unsupervised learning to:

(A) Encrypt similarity


(B) Measure similarity between two vectors


(C) Compress vectors


(D) Backup only
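
Cosine similarity is the cosine of the angle between two vectors: the dot product divided by the product of their lengths, giving 1 for vectors pointing the same way, 0 for orthogonal vectors, and -1 for opposite ones. A minimal sketch in plain Python:

```python
import math

# Cosine similarity: cos(theta) = (u . v) / (|u| * |v|).
def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine_similarity([1, 0], [0, 1]))   # orthogonal -> 0.0
print(cosine_similarity([1, 2], [2, 4]))   # parallel   -> ~1.0
```

Because it ignores vector magnitude, it is a popular distance notion for clustering text documents represented as word-count vectors.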




15. Agglomerative clustering starts with:

(A) One big cluster


(B) Each data point as its own cluster and merges them iteratively


(C) Encrypting clusters


(D) Backup only
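
That bottom-up merging can be sketched in a few lines: begin with one singleton cluster per point and repeatedly merge the two closest clusters until the desired count remains. This toy uses single linkage (distance between the nearest members of two clusters) on 1-D data; the name `agglomerate` and the stopping rule are illustrative assumptions.

```python
# Agglomerative (bottom-up) clustering sketch with single linkage.
def agglomerate(points, target_k):
    clusters = [[p] for p in points]          # each point starts as its own cluster
    while len(clusters) > target_k:
        # Find the pair of clusters with the smallest single-linkage distance.
        i, j = min(((i, j) for i in range(len(clusters))
                           for j in range(i + 1, len(clusters))),
                   key=lambda ij: min(abs(a - b) for a in clusters[ij[0]]
                                                 for b in clusters[ij[1]]))
        clusters[i] = clusters[i] + clusters[j]   # merge the closest pair
        del clusters[j]
    return [sorted(c) for c in clusters]

print(agglomerate([1.0, 1.2, 5.0, 5.1, 9.0], 3))
```

Recording the order of merges yields the dendrogram that hierarchical clustering is known for; cutting it at a chosen height recovers a flat clustering.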




16. Divisive clustering starts with:

(A) All data points in one cluster and splits them iteratively


(B) Each point as a cluster


(C) Encrypting divisions


(D) Backup only




17. Feature scaling is important in unsupervised learning because:

(A) Compressing features


(B) Encrypting features


(C) Distance-based algorithms like K-Means are sensitive to feature magnitudes


(D) Backup only
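
A quick standardization (z-score) sketch shows why scaling matters: a feature measured in dollars would dwarf one measured in years inside a Euclidean distance, but after rescaling each feature to mean 0 and standard deviation 1 they contribute comparably. The `standardize` helper and the toy income/age values are illustrative only.

```python
# Standardization sketch: z = (value - mean) / std for each feature.
def standardize(values):
    mean = sum(values) / len(values)
    # population standard deviation
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / std for v in values]

incomes = [30000.0, 50000.0, 70000.0]   # dollars: huge raw magnitudes
ages = [20.0, 30.0, 40.0]               # years: tiny raw magnitudes
print(standardize(incomes))
print(standardize(ages))                # same z-scores after scaling
```

Both features end up on the same scale, so distance-based algorithms such as K-Means no longer let the larger-magnitude feature dominate.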




18. The elbow method is used to:

(A) Determine the optimal number of clusters in K-Means


(B) Encrypt K-values


(C) Compress clusters


(D) Backup only
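
The elbow method plots inertia (the within-cluster sum of squared distances) against the number of clusters k: inertia falls sharply until k reaches the natural cluster count, then flattens, and the bend suggests which k to pick. In this sketch the centroids for each k are hand-chosen purely for illustration; normally K-Means would fit them.

```python
# Elbow-method sketch: inertia = sum of squared distances to nearest centroid.
def inertia(points, centroids):
    return sum(min((p - c) ** 2 for c in centroids) for p in points)

# Data with three natural groups around 1, 9, and 20.
data = [1.0, 1.2, 0.8, 9.0, 9.2, 8.8, 20.0, 20.2, 19.8]
curve = {
    1: inertia(data, [10.0]),
    2: inertia(data, [1.0, 14.5]),
    3: inertia(data, [1.0, 9.0, 20.0]),
    4: inertia(data, [1.0, 9.0, 19.9, 20.1]),
}
# Inertia drops steeply up to k=3, then barely improves: the elbow is at k=3.
print(curve)
```

Beyond the elbow, extra clusters mostly split genuine groups and buy almost no reduction in inertia.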




19. t-SNE is a technique used for:

(A) Visualizing high-dimensional data in 2D or 3D


(B) Encrypting high-dimensional data


(C) Compressing features


(D) Backup only




20. The main purpose of unsupervised learning is to:

(A) Discover hidden structure, patterns, or relationships in unlabeled data


(B) Encrypt all data


(C) Compress datasets


(D) Backup only



