Named Entity Recognition (NER) is a core task in natural language processing that identifies and classifies entities such as persons, organizations, locations, dates, and other domain-specific terms in text. Early approaches relied on rule-based systems and statistical models such as Hidden Markov Models (HMMs) and Conditional Random Fields (CRFs). Deep learning has since transformed NER by enabling automatic feature extraction and context-aware representations: recurrent neural networks (RNNs), long short-term memory networks (LSTMs), and bidirectional LSTMs with CRF decoding deliver strong performance on sequence labeling, while transformer-based models such as BERT, RoBERTa, and GPT further enhance NER by providing contextual embeddings that capture complex syntactic and semantic dependencies.

Applications of NER span information extraction, question answering, knowledge graph construction, biomedical text mining, and social media analytics. Recent studies also explore cross-lingual and domain-adaptive NER, few-shot learning for low-resource languages, and integration with multimodal data to improve robustness and generalization across diverse datasets.
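To make the CRF-decoding idea concrete, the sketch below implements Viterbi decoding over BIO tags, the inference step used by the CRF layer of a BiLSTM-CRF tagger. The tag set, emission scores, and transition scores here are toy illustrative values, not learned parameters; in a real model the emissions would come from the BiLSTM and the transitions from the trained CRF.

```python
# Minimal Viterbi decoding over BIO tags, as in the CRF layer of a
# BiLSTM-CRF tagger. All scores below are illustrative assumptions.

TAGS = ["O", "B-PER", "I-PER"]

def viterbi(emissions, transitions):
    """emissions: one dict of tag -> score per token;
    transitions: dict of (prev_tag, tag) -> score.
    Returns the highest-scoring tag sequence."""
    # dp[t][tag] = (best score of a path ending in tag at position t,
    #               backpointer to the previous tag on that path)
    dp = [{tag: (emissions[0][tag], None) for tag in TAGS}]
    for t in range(1, len(emissions)):
        row = {}
        for tag in TAGS:
            best_prev, best_score = None, float("-inf")
            for prev in TAGS:
                score = (dp[t - 1][prev][0]
                         + transitions[(prev, tag)]
                         + emissions[t][tag])
                if score > best_score:
                    best_prev, best_score = prev, score
            row[tag] = (best_score, best_prev)
        dp.append(row)
    # Backtrack from the best final tag.
    last = max(TAGS, key=lambda tag: dp[-1][tag][0])
    path = [last]
    for t in range(len(emissions) - 1, 0, -1):
        path.append(dp[t][path[-1]][1])
    return list(reversed(path))

# Toy scores for the tokens "John Smith works".
emissions = [
    {"O": 0.1, "B-PER": 2.0, "I-PER": 0.0},
    {"O": 0.2, "B-PER": 0.5, "I-PER": 1.8},
    {"O": 2.5, "B-PER": 0.1, "I-PER": 0.1},
]
transitions = {(p, c): 0.0 for p in TAGS for c in TAGS}
transitions[("O", "I-PER")] = -10.0    # penalize I-PER directly after O
transitions[("B-PER", "I-PER")] = 1.0  # reward continuing an entity

print(viterbi(emissions, transitions))  # → ['B-PER', 'I-PER', 'O']
```

The transition scores are what distinguish a CRF from per-token classification: by heavily penalizing invalid moves such as O followed by I-PER, the decoder enforces well-formed entity spans globally rather than labeling each token independently.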