
NSCT-Advanced Deep Learning MCQs

1. Advanced deep learning focuses on:

(A) Compressing datasets only
(B) Encrypting neural networks
(C) Complex architectures and techniques such as CNNs, RNNs, LSTMs, GANs, and Transformers
(D) Backup only




2. Convolutional Neural Networks (CNNs) are best suited for:

(A) Backup only
(B) Encrypting images
(C) Compressing image data
(D) Image recognition, object detection, and computer vision tasks




3. Key layers in a CNN include:

(A) Encrypting layers
(B) Convolutional layer, pooling layer, fully connected layer
(C) Compression layers
(D) Backup only
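
For illustration, a minimal PyTorch sketch showing the three key layer types in order; the channel counts, 28x28 input size, and 10 output classes are illustrative choices, not part of the question:

    import torch
    import torch.nn as nn

    # convolution -> pooling -> fully connected: the three key CNN layer types
    model = nn.Sequential(
        nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1),  # convolutional layer
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2),  # pooling layer halves height and width
        nn.Flatten(),
        nn.Linear(8 * 14 * 14, 10),   # fully connected layer mapping features to 10 classes
    )

    x = torch.randn(1, 1, 28, 28)     # one 28x28 grayscale image
    print(model(x).shape)             # torch.Size([1, 10])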




4. Pooling layers in CNNs are used to:

(A) Encrypt pooled data
(B) Reduce spatial dimensions and computational complexity while retaining important features
(C) Compress features
(D) Backup only
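
A quick sketch of what a pooling layer does to tensor shapes (PyTorch; the 32x32 feature map is an arbitrary example):

    import torch
    import torch.nn as nn

    pool = nn.MaxPool2d(kernel_size=2, stride=2)
    feature_map = torch.randn(1, 16, 32, 32)  # batch, channels, height, width
    pooled = pool(feature_map)
    print(feature_map.shape)  # torch.Size([1, 16, 32, 32])
    print(pooled.shape)       # torch.Size([1, 16, 16, 16]) -- spatial dims halved, channels kept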




5. Recurrent Neural Networks (RNNs) are suitable for:

(A) Compressing sequences
(B) Encrypting sequences
(C) Sequential data such as text, speech, and time series
(D) Backup only
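
A minimal PyTorch sketch of an RNN consuming a batch of sequences (the batch size, sequence length, and feature sizes are illustrative):

    import torch
    import torch.nn as nn

    rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
    x = torch.randn(4, 10, 8)  # 4 sequences, 10 time steps, 8 features per step
    outputs, h_n = rnn(x)
    print(outputs.shape)  # torch.Size([4, 10, 16]) -- one hidden state per time step
    print(h_n.shape)      # torch.Size([1, 4, 16])  -- final hidden state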




6. Long Short-Term Memory (LSTM) networks help to:

(A) Compress sequences
(B) Encrypt memory
(C) Solve the vanishing gradient problem in RNNs and capture long-term dependencies
(D) Backup only
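
The same sketch with an LSTM, whose extra cell state is what carries long-term information across many time steps (sizes again illustrative):

    import torch
    import torch.nn as nn

    lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
    x = torch.randn(4, 100, 8)  # longer sequences, where plain RNNs tend to struggle
    outputs, (h_n, c_n) = lstm(x)
    print(h_n.shape)  # torch.Size([1, 4, 16]) -- final hidden state
    print(c_n.shape)  # torch.Size([1, 4, 16]) -- cell state carrying long-term memory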




7. Gated Recurrent Units (GRUs) are:

(A) Compressing GRUs
(B) Encrypting GRUs
(C) Simplified versions of LSTMs with fewer parameters
(D) Backup only
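
The parameter difference is easy to verify in PyTorch: an LSTM has four gates, a GRU only three, so with the same sizes the GRU is smaller (the sizes below are arbitrary):

    import torch.nn as nn

    def n_params(module):
        return sum(p.numel() for p in module.parameters())

    print(n_params(nn.LSTM(input_size=32, hidden_size=64)))  # 25088 (4 gates)
    print(n_params(nn.GRU(input_size=32, hidden_size=64)))   # 18816 (3 gates)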




8. Generative Adversarial Networks (GANs) consist of:

(A) A generator and a discriminator network competing in a zero-sum game
(B) Encrypting GANs
(C) Compressing GANs
(D) Backup only
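
A minimal sketch of the two competing networks (toy sizes; a real GAN would also need the adversarial training loop):

    import torch
    import torch.nn as nn

    generator = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
    discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

    noise = torch.randn(8, 16)
    fake = generator(noise)         # generator maps noise to fake samples
    score = discriminator(fake)     # discriminator scores samples as real (1) or fake (0)
    print(fake.shape, score.shape)  # torch.Size([8, 2]) torch.Size([8, 1])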




9. GANs are mainly used for:

(A) Data generation, image synthesis, and augmentation
(B) Encrypting images
(C) Compressing datasets
(D) Backup only




10. Autoencoders are used for:

(A) Compressing features only
(B) Encrypting data
(C) Dimensionality reduction, denoising, and unsupervised feature learning
(D) Backup only
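
A minimal autoencoder sketch in PyTorch: the encoder compresses 784 input features to an 8-dimensional code, the decoder reconstructs the input, and training minimizes the reconstruction error (all sizes illustrative):

    import torch
    import torch.nn as nn

    encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 8))
    decoder = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 784))

    x = torch.randn(4, 784)
    code = encoder(x)                        # low-dimensional representation
    recon = decoder(code)
    loss = nn.functional.mse_loss(recon, x)  # reconstruction error to minimize
    print(code.shape, loss.item())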




11. The attention mechanism in deep learning helps to:

(A) Encrypt attention weights
(B) Focus on important parts of input sequences for better prediction
(C) Compress sequences
(D) Backup only
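
A sketch of scaled dot-product attention, the core computation: each query scores all keys, the scores become softmax weights, and the output is a weighted average of the values (shapes illustrative):

    import torch
    import torch.nn.functional as F

    q = torch.randn(1, 5, 16)  # 5 query positions, 16-dimensional
    k = torch.randn(1, 5, 16)
    v = torch.randn(1, 5, 16)

    scores = q @ k.transpose(-2, -1) / 16 ** 0.5  # similarity of each query to each key
    weights = F.softmax(scores, dim=-1)           # attention weights sum to 1 per query
    output = weights @ v                          # focus on the most relevant positions
    print(weights.shape, output.shape)  # torch.Size([1, 5, 5]) torch.Size([1, 5, 16])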




12. Transformers are:

(A) Backup only
(B) Encrypting transformers
(C) Compressing transformers
(D) Deep learning architectures that rely entirely on attention mechanisms, widely used in NLP
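
PyTorch ships a ready-made encoder layer combining multi-head self-attention with a feed-forward network, with no recurrence or convolution (sizes illustrative; batch_first requires a reasonably recent PyTorch):

    import torch
    import torch.nn as nn

    layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
    encoder = nn.TransformerEncoder(layer, num_layers=2)

    tokens = torch.randn(2, 10, 64)  # 2 sequences, 10 tokens, 64-dim embeddings
    print(encoder(tokens).shape)     # torch.Size([2, 10, 64])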




13. BERT (Bidirectional Encoder Representations from Transformers) is used for:

(A) Encrypting text
(B) NLP tasks such as text classification, question answering, and sentiment analysis
(C) Compressing text
(D) Backup only
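
A sketch of one such task using the Hugging Face transformers package (assumed installed; the pipeline downloads a pretrained BERT-family sentiment model on first run):

    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")  # defaults to a BERT-family model
    print(classifier("Transformers make NLP tasks much easier."))
    # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]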




14. Transfer learning in deep learning involves:

(A) Backup only
(B) Encrypting weights
(C) Compressing models
(D) Using a pre-trained model on a new but related task to save time and improve performance
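
A common transfer-learning sketch with torchvision (assumes torchvision is installed and downloads ImageNet weights; the 5-class head is an arbitrary example):

    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pre-trained on ImageNet
    for p in model.parameters():
        p.requires_grad = False  # freeze the pre-trained feature extractor
    model.fc = nn.Linear(model.fc.in_features, 5)  # new head for a 5-class task
    # Only the new head is trained on the related task's data.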




15. Fine-tuning is:

(A) Adjusting weights of a pre-trained model on new data to adapt it to a specific task
(B) Encrypting models
(C) Compressing weights
(D) Backup only
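
Fine-tuning, by contrast with freezing, keeps all weights trainable but typically uses a much smaller learning rate for the pre-trained layers (a sketch with illustrative rates):

    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 5)  # new task-specific head

    optimizer = torch.optim.Adam([
        {"params": model.fc.parameters(), "lr": 1e-3},  # new head learns faster
        {"params": [p for n, p in model.named_parameters()
                    if not n.startswith("fc.")], "lr": 1e-5},  # gentle updates elsewhere
    ])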




16. Dropout in advanced deep learning is used to:

(A) Backup only
(B) Encrypt neurons
(C) Compress layers
(D) Prevent overfitting by randomly deactivating neurons during training
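
Dropout behaves differently in training and evaluation mode, which the sketch below makes visible (p=0.5 is an illustrative rate):

    import torch
    import torch.nn as nn

    drop = nn.Dropout(p=0.5)
    x = torch.ones(8)

    drop.train()    # training mode: neurons randomly zeroed, survivors scaled by 1/(1-p)
    print(drop(x))  # roughly half zeros, the rest 2.0

    drop.eval()     # evaluation mode: dropout disabled
    print(drop(x))  # all ones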




17. Batch normalization helps to:

(A) Encrypt batches
(B) Accelerate training and stabilize learning by normalizing layer inputs
(C) Compress layers
(D) Backup only
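
A quick check of what batch normalization does to layer inputs (the large mean and spread below are deliberately exaggerated):

    import torch
    import torch.nn as nn

    bn = nn.BatchNorm1d(num_features=4)
    x = torch.randn(32, 4) * 10 + 5  # inputs with large mean and spread
    y = bn(x)
    print(y.mean(dim=0))  # close to 0 after normalization
    print(y.std(dim=0))   # close to 1, which stabilizes and speeds up training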




18. Residual connections in deep networks:

(A) Encrypt residuals
(B) Help train very deep networks by allowing gradients to flow directly
(C) Compress layers
(D) Backup only
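
A minimal residual block: the output is x + F(x), so the identity path gives gradients a direct route through the network (sizes illustrative):

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

        def forward(self, x):
            return x + self.body(x)  # skip connection bypasses the transformation

    block = ResidualBlock(16)
    print(block(torch.randn(2, 16)).shape)  # torch.Size([2, 16])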




19. Hyperparameter tuning in deep learning involves:

(A) Selecting optimal values for learning rate, batch size, number of layers, and neurons
(B) Encrypting hyperparameters
(C) Compressing hyperparameters
(D) Backup only
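
A toy grid search over a single hyperparameter, the learning rate (the model, data, and candidate values are all illustrative):

    import torch
    import torch.nn as nn

    x, y = torch.randn(64, 4), torch.randn(64, 1)
    best_lr, best_loss = None, float("inf")

    for lr in [1e-1, 1e-2, 1e-3]:
        torch.manual_seed(0)  # same initialization for a fair comparison
        model = nn.Linear(4, 1)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(100):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(x), y)
            loss.backward()
            opt.step()
        if loss.item() < best_loss:
            best_lr, best_loss = lr, loss.item()

    print(best_lr, best_loss)  # keep the value that trained best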




20. The main purpose of advanced deep learning is to:

(A) Solve complex tasks in vision, language, and sequential data using sophisticated neural network architectures
(B) Encrypt all models
(C) Compress all features
(D) Backup only



