Imitation Learning (IL) is a major area of research in machine learning and robotics. It enables agents to learn policies by observing expert demonstrations rather than through explicit reward signals, reducing the reliance on manual reward engineering. Foundational approaches include behavioral cloning (BC), which uses supervised learning to mimic expert actions, and inverse reinforcement learning (IRL), which infers the reward function underlying expert behavior.

Recent research explores deep imitation learning with convolutional and recurrent neural networks, generative adversarial imitation learning (GAIL) for improved policy generalization, and hierarchical IL, which decomposes complex tasks into sub-policies. Applications span autonomous driving, robotic manipulation, game playing, and human–robot interaction, where IL offers efficient learning in high-dimensional, real-world environments. Current studies also integrate IL with reinforcement learning, meta-learning, and curriculum learning to improve sample efficiency, robustness, and adaptability, establishing IL as a key framework for agents that learn complex sequential decision-making from limited demonstrations.
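To make the behavioral cloning idea concrete, the sketch below reduces it to plain supervised learning: fit a policy that maps observed states to expert actions. It is a minimal illustration, assuming a linear policy and synthetic demonstrations; the "expert" weights `W_true` and all dimensions are hypothetical choices made purely for the example, not part of any particular method from the literature.

```python
import numpy as np

rng = np.random.default_rng(0)

state_dim, action_dim, n_demos = 4, 2, 500
W_true = rng.normal(size=(state_dim, action_dim))  # hypothetical expert policy

# Expert demonstrations: state-action pairs with a little action noise
states = rng.normal(size=(n_demos, state_dim))
actions = states @ W_true + 0.01 * rng.normal(size=(n_demos, action_dim))

# Behavioral cloning as least squares: argmin_W ||states @ W - actions||^2
W_hat, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The cloned policy now imitates the expert on unseen states
new_state = rng.normal(size=(1, state_dim))
print("cloned action:", new_state @ W_hat)
print("expert action:", new_state @ W_true)
```

With richer function classes (e.g. neural networks) the fitting step changes, but the structure is the same: no reward signal is used, only the expert's state-action pairs.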
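The adversarial mechanism behind GAIL can also be sketched in miniature: a discriminator learns to separate expert state-action pairs from the learner's, and the learner is then rewarded for producing samples the discriminator mistakes for expert behavior. The sketch below trains only the discriminator, using a logistic regression as a stand-in for GAIL's discriminator network; the two Gaussian sample distributions are illustrative assumptions, not real policy rollouts.

```python
import numpy as np

rng = np.random.default_rng(1)

dim, n = 3, 400
expert = rng.normal(loc=1.0, size=(n, dim))    # expert (s, a) features (synthetic)
learner = rng.normal(loc=-1.0, size=(n, dim))  # current policy's (s, a) features

X = np.vstack([expert, learner])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = expert, 0 = learner

# Train the discriminator by gradient ascent on the logistic log-likelihood
w = np.zeros(dim)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.1 * X.T @ (y - p) / len(y)

# GAIL-style surrogate reward: high when the discriminator believes a
# learner sample came from the expert. Here the distributions are far
# apart, so the learner's reward stays low.
d_learner = 1.0 / (1.0 + np.exp(-learner @ w))
reward = -np.log(1.0 - d_learner + 1e-8)
print("mean surrogate reward for learner:", reward.mean())
```

In full GAIL this reward is fed to a reinforcement learning step that updates the policy, and the two players alternate until the learner's occupancy measure is indistinguishable from the expert's.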