T4Tutorials.PK

NSCT: Deep Learning Fundamentals MCQs

1. Deep learning is:

(A) Compressing datasets only
(B) Encrypting neural networks only
(C) A subset of machine learning that uses neural networks with multiple layers to model complex patterns
(D) Backup only




2. The main advantage of deep learning over traditional machine learning is:

(A) Compressing data
(B) Encrypting features only
(C) The ability to automatically learn features from raw data
(D) Backup only




3. A neuron in a neural network is:

(A) A compressing unit only
(B) An encrypting unit only
(C) A computational unit that receives inputs, applies weights, sums them, and passes the result through an activation function
(D) Backup only
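The computation described in option (C) can be sketched in a few lines of plain Python (a minimal illustration; the input, weight, and bias values below are arbitrary and chosen only for the example):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias,
    passed through a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid activation

# Arbitrary example values, for illustration only
output = neuron([1.0, 2.0], [0.5, -0.25], bias=0.1)
```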




4. Weights in a neural network represent:

(A) Backup only
(B) Encrypting weights
(C) Compressing weights
(D) The importance of each input in computing the output




5. Bias in a neuron helps to:

(A) Encrypt bias
(B) Shift the activation function to better fit the data
(C) Compress bias
(D) Backup only




6. Activation functions introduce:

(A) Compressing non-linearity
(B) Encrypting non-linearity
(C) Non-linearity to neural networks, allowing them to learn complex patterns
(D) Backup only




7. Common activation functions include:

(A) Compressing functions only
(B) Encrypting functions only
(C) Sigmoid, Tanh, ReLU, Leaky ReLU, and Softmax
(D) Backup only
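The five functions named in option (C) are all one-liners; a quick sketch in plain Python (using only the standard `math` module) shows what each one does to its input:

```python
import math

def sigmoid(x):
    # Squashes any real value into (0, 1)
    return 1 / (1 + math.exp(-x))

def tanh(x):
    # Squashes into (-1, 1), zero-centred
    return math.tanh(x)

def relu(x):
    # Passes positives through, zeroes out negatives
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Like ReLU, but keeps a small slope for negative inputs
    return x if x > 0 else alpha * x

def softmax(xs):
    # Turns a vector of scores into probabilities that sum to 1
    exps = [math.exp(x - max(xs)) for x in xs]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]
```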




8. The output layer in a neural network:

(A) Encrypts the output
(B) Produces the final prediction of the network
(C) Compresses the output
(D) Backup only




9. A loss function in deep learning measures:

(A) Backup only
(B) Encrypting errors
(C) Compressing errors
(D) The difference between predicted and actual outputs




10. Common loss functions include:

(A) Compressing losses only
(B) Encrypting losses only
(C) Mean Squared Error (MSE), Cross-Entropy Loss, and Hinge Loss
(D) Backup only
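Two of the losses from option (C) can be sketched directly from their definitions (a minimal pure-Python illustration; the `eps` clamp is a standard guard against `log(0)`, not part of the mathematical definition):

```python
import math

def mse(y_true, y_pred):
    """Mean Squared Error: average of the squared differences."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Cross-entropy loss for binary labels (0 or 1)."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)
```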




11. Backpropagation is:

(A) Backup only
(B) Encrypting gradients
(C) Compressing gradients
(D) An algorithm to compute gradients of the loss function with respect to the weights, enabling learning
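For the smallest possible case, a single sigmoid neuron with a squared-error loss, the chain rule behind backpropagation can be written out factor by factor (a toy sketch, not a general implementation):

```python
import math

def backprop_single_neuron(x, w, b, y_true):
    """Gradients of a squared-error loss w.r.t. w and b
    for one sigmoid neuron, computed via the chain rule."""
    # Forward pass
    z = w * x + b
    y_pred = 1 / (1 + math.exp(-z))      # sigmoid
    # Backward pass: multiply local derivatives along the chain
    dloss_dy = 2 * (y_pred - y_true)     # d(loss)/d(y_pred)
    dy_dz = y_pred * (1 - y_pred)        # sigmoid derivative
    dloss_dw = dloss_dy * dy_dz * x      # d(z)/d(w) = x
    dloss_db = dloss_dy * dy_dz * 1.0    # d(z)/d(b) = 1
    return dloss_dw, dloss_db

dw, db = backprop_single_neuron(x=1.0, w=0.5, b=0.0, y_true=1.0)
```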




12. Optimizers in deep learning help to:

(A) Backup only
(B) Encrypt weights
(C) Compress weights
(D) Update the network weights to minimize the loss function




13. Common optimizers include:

(A) Encrypting optimizers
(B) Gradient Descent, Stochastic Gradient Descent (SGD), Adam, and RMSprop
(C) Compressing optimizers
(D) Backup only
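The simplest of the optimizers in option (B), plain gradient descent, is a one-line update rule; the toy loop below shows it minimising f(w) = w² (an illustration only, with arbitrary starting point and learning rate):

```python
def sgd_step(weights, grads, lr=0.1):
    """One gradient-descent update: move each weight against its gradient."""
    return [w - lr * g for w, g in zip(weights, grads)]

# Minimising f(w) = w**2, whose gradient is 2w:
w = [4.0]
for _ in range(50):
    w = sgd_step(w, [2 * w[0]], lr=0.1)
# w[0] has shrunk from 4.0 toward the minimum at 0
```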




14. An epoch in deep learning is:

(A) Encrypting data pass
(B) One complete pass of the entire training dataset through the network
(C) Compressing epoch
(D) Backup only




15. Batch size refers to:

(A) Encrypting batch
(B) The number of training samples processed before updating the weights
(C) Compressing batch
(D) Backup only
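The epoch/batch bookkeeping from questions 14 and 15 can be sketched in a few lines (the 10-sample dataset, 3 epochs, and batch size of 4 are made up for the example):

```python
def iterate_minibatches(dataset, batch_size):
    """Yield successive slices of `batch_size` samples from the dataset."""
    for start in range(0, len(dataset), batch_size):
        yield dataset[start:start + batch_size]

dataset = list(range(10))      # 10 toy training samples
updates = 0
for epoch in range(3):         # one epoch = one full pass over the dataset
    for batch in iterate_minibatches(dataset, batch_size=4):
        updates += 1           # weights would be updated once per batch
# 3 epochs x 3 batches per epoch (4 + 4 + 2 samples) = 9 updates
```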




16. Overfitting occurs when:

(A) Compressing models
(B) Encrypting training data
(C) The model learns the training data too well and fails to generalize to new data
(D) Backup only




17. Dropout in deep learning is used to:

(A) Randomly deactivate neurons during training to prevent overfitting
(B) Encrypt neurons
(C) Compress activations
(D) Backup only




18. Convolutional Neural Networks (CNNs) are mainly used for:

(A) Encrypting images
(B) Image and video data processing
(C) Compressing images
(D) Backup only




19. Recurrent Neural Networks (RNNs) are suitable for:

(A) Compressing sequences
(B) Encrypting sequences
(C) Sequential data such as time series or text
(D) Backup only




20. The main purpose of deep learning fundamentals is to:

(A) Compress all features
(B) Encrypt all data
(C) Build models that can automatically learn complex patterns from data for prediction or classification tasks
(D) Backup only



