T4Tutorials.PK

NSCT – Natural Language Processing (NLP) MCQs

1. Natural Language Processing (NLP) is:

(A) Compressing datasets only


(B) Encrypting text data only


(C) A branch of AI that focuses on the interaction between computers and human language


(D) Backup only




2. Tokenization in NLP refers to:

(A) Encrypting tokens


(B) Splitting text into smaller units like words or sentences


(C) Compressing text


(D) Backup only
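
To make answer (B) concrete, here is a minimal tokenizer sketch in pure Python. Real tokenizers (e.g. in NLTK or spaCy) handle punctuation, contractions, and languages far more carefully; this regex version is illustrative only.

```python
import re

def tokenize(text):
    """Split text into lowercase word tokens (a minimal regex tokenizer)."""
    return re.findall(r"[a-z0-9']+", text.lower())

print(tokenize("NLP splits text into tokens."))
```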




3. Stemming in NLP is:

(A) Backup only


(B) Encrypting words


(C) Compressing words


(D) Reducing words to their root form (e.g., "running" → "run")
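
A toy suffix-stripping stemmer shows the idea behind answer (D). Production stemmers such as the Porter stemmer apply many more rules; this sketch only strips a few common suffixes and undoes consonant doubling so that "running" maps to "run".

```python
def crude_stem(word):
    """Strip a few common English suffixes (toy stemmer, not Porter's algorithm)."""
    for suffix in ("ing", "ed", "ly", "es", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            stem = word[: -len(suffix)]
            # undo consonant doubling: "running" -> "runn" -> "run"
            if len(stem) >= 2 and stem[-1] == stem[-2] and stem[-1] not in "aeiou":
                stem = stem[:-1]
            return stem
    return word

print(crude_stem("running"))
```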




4. Lemmatization differs from stemming because it:

(A) Backup only


(B) Encrypts words


(C) Compresses words


(D) Converts words to their base or dictionary form considering context
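
The contrast with stemming can be sketched with a dictionary lookup: a lemmatizer maps inflected forms to their dictionary form, so "better" can become "good", which no suffix-stripping stemmer could produce. The tiny lookup table below is a hypothetical stand-in for the large lexicons (e.g. WordNet) real lemmatizers use.

```python
# Illustrative lemma table only; real lemmatizers use full lexicons plus POS context.
LEMMAS = {"ran": "run", "running": "run", "better": "good", "mice": "mouse"}

def lemmatize(word):
    """Return the dictionary form of a word if known, else the word itself."""
    return LEMMAS.get(word.lower(), word.lower())

print(lemmatize("better"))
```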




5. Stop words in NLP are:

(A) Compressing words


(B) Encrypting words


(C) Commonly used words like "the", "is", "and" which are often removed during preprocessing


(D) Backup only
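
Stop-word removal, as in answer (C), is a one-line filter once you have a stop-word set. The set below is a small illustrative sample; libraries ship much longer lists.

```python
# Small illustrative stop-word list; real lists contain hundreds of words.
STOP_WORDS = {"the", "is", "and", "a", "an", "of", "to", "in", "on"}

def remove_stop_words(tokens):
    """Drop common function words, keeping the original token casing."""
    return [t for t in tokens if t.lower() not in STOP_WORDS]

print(remove_stop_words(["The", "cat", "is", "on", "the", "mat"]))
```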




6. Bag-of-Words (BoW) is:

(A) Backup only


(B) Encrypting words


(C) Compressing text


(D) A representation of text as a collection of word frequencies ignoring grammar and order
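
Answer (D) can be seen directly with `collections.Counter`: the representation keeps only word frequencies, so any reordering of the tokens produces the same bag.

```python
from collections import Counter

def bag_of_words(tokens):
    """Map each word to its frequency; grammar and word order are discarded."""
    return Counter(tokens)

print(bag_of_words("the cat sat on the mat".split()))
```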




7. TF-IDF stands for:

(A) Term Frequency-Inverse Document Frequency, used to weigh word importance


(B) Encrypting words


(C) Compressing features


(D) Backup only
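
The weighting in answer (A) multiplies term frequency by inverse document frequency: words appearing in every document get weight zero, while words concentrated in few documents score higher. A minimal sketch (documents as token lists; library implementations such as scikit-learn's `TfidfVectorizer` add smoothing and normalization):

```python
import math

def tf_idf(term, doc, corpus):
    """tf = relative frequency in doc; idf = log(N / number of docs containing term)."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / df) if df else 0.0
    return tf * idf

corpus = [["a", "b"], ["a", "c"]]
print(tf_idf("b", corpus[0], corpus))
```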




8. Word embeddings like Word2Vec or GloVe are used to:

(A) Represent words as dense vectors capturing semantic meaning


(B) Encrypt vectors


(C) Compress vectors


(D) Backup only




9. Named Entity Recognition (NER) is:

(A) Compressing text


(B) Encrypting entities


(C) Identifying and classifying entities like names, locations, and dates in text


(D) Backup only




10. Part-of-Speech (POS) tagging assigns:

(A) Encrypted POS tags


(B) Grammatical labels such as noun, verb, adjective to words in a sentence


(C) Compressed tags


(D) Backup only




11. Sentiment analysis in NLP aims to:

(A) Encrypt text


(B) Determine the emotion or opinion expressed in text


(C) Compress data


(D) Backup only
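
The simplest form of answer (B) is lexicon-based scoring: sum the polarity of known opinion words. The polarity table below is a hypothetical toy; real systems use much larger lexicons or trained classifiers.

```python
# Hypothetical toy polarity lexicon; real lexicons contain thousands of entries.
POLARITY = {"good": 1, "great": 2, "bad": -1, "terrible": -2, "love": 2, "hate": -2}

def sentiment_score(tokens):
    """Sum word polarities; positive result suggests positive sentiment."""
    return sum(POLARITY.get(t.lower(), 0) for t in tokens)

print(sentiment_score("I love this great movie".split()))
```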




12. Sequence-to-Sequence (Seq2Seq) models are used for:

(A) Machine translation, text summarization, and chatbots


(B) Encrypting sequences


(C) Compressing sequences


(D) Backup only




13. The attention mechanism in NLP allows:

(A) Compressing sequences


(B) Encrypting attention


(C) The model to focus on relevant parts of the input sequence for better predictions


(D) Backup only
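
Answer (C) corresponds to scaled dot-product attention: the query is scored against every key, the scores are softmax-normalized into weights, and the output is the weight-averaged values, so the model attends most to the keys resembling the query. A pure-Python sketch for a single query (frameworks compute this in batched matrix form):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query over key/value vector lists."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    # weighted average of the value vectors
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

out = attention([10, 0], [[10, 0], [0, 10]], [[1, 0], [0, 1]])
print(out)
```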




14. Transformers are:

(A) Encrypting transformers


(B) Deep learning models based entirely on attention mechanisms for NLP tasks


(C) Compressing models


(D) Backup only




15. BERT is:

(A) Encrypting BERT


(B) A pre-trained transformer model for understanding context in NLP tasks


(C) Compressing BERT


(D) Backup only




16. GPT models are used for:

(A) Encrypting text


(B) Text generation, summarization, and conversational AI


(C) Compressing text


(D) Backup only




17. Text preprocessing in NLP typically includes:

(A) Backup only


(B) Encrypting text


(C) Compressing data


(D) Tokenization, stop word removal, stemming, lemmatization, and normalization




18. Cosine similarity in NLP is used to:

(A) Compress similarity


(B) Encrypt similarity


(C) Measure similarity between two text vectors


(D) Backup only
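
Answer (C) in code: cosine similarity is the dot product of two vectors divided by the product of their lengths, giving 1 for parallel vectors, 0 for orthogonal ones, regardless of magnitude.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

print(cosine_similarity([1, 2], [2, 4]))
```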




19. N-grams in NLP represent:

(A) Backup only


(B) Encrypting sequences


(C) Compressing sequences


(D) Contiguous sequences of N words used for text modeling
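
Answer (D) is a sliding window over the token list; for n = 2 these are the bigrams used in classic language models.

```python
def ngrams(tokens, n):
    """All contiguous sequences of n tokens, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

print(ngrams(["to", "be", "or", "not"], 2))
```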




20. The main purpose of NLP is to:

(A) Enable machines to understand, interpret, and generate human language effectively


(B) Encrypt all text


(C) Compress datasets


(D) Backup only



