Computation offloading in edge computing has emerged as a major research focus, driven by growing demand for low-latency, energy-efficient, and high-performance services in mobile and Internet of Things (IoT) environments. Research in this area investigates strategies for migrating resource-intensive tasks from end devices to nearby edge servers, thereby reducing on-device computation and extending battery life. Studies explore diverse offloading models, including partial, full, dynamic, and cooperative offloading, while accounting for factors such as task dependency, wireless channel conditions, user mobility, and network congestion.

Recent works apply optimization algorithms, game theory, machine learning, and reinforcement learning to achieve efficient resource allocation, task scheduling, and adaptive decision-making. Security-aware and privacy-preserving offloading frameworks are also gaining attention, addressing risks such as data leakage, adversarial manipulation, and denial-of-service attacks in distributed edge environments.

Moreover, hybrid approaches combining edge, cloud, and fog computing are being studied to balance scalability, reliability, and performance for latency-sensitive applications such as augmented reality, smart healthcare, autonomous driving, and industrial IoT. Overall, computation offloading research highlights the potential of edge computing to deliver intelligent, secure, and real-time services while addressing the challenges of heterogeneity, resource constraints, and dynamic environments.
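To make the offloading decision concrete, the sketch below uses a textbook cost model commonly seen in this literature (not any specific paper's formulation): local execution costs `cycles / f_local` seconds and `kappa * f_local^2 * cycles` joules of dynamic CPU energy, while offloading costs uplink transmission time plus remote execution time, with the device paying only radio energy. All parameter names and example values are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float      # CPU cycles the task requires
    data_bits: float   # input data to transmit if offloaded

def local_cost(task: Task, f_local: float, kappa: float, w: float) -> float:
    """Weighted latency/energy cost of executing on the device."""
    t = task.cycles / f_local                 # execution latency (s)
    e = kappa * f_local ** 2 * task.cycles    # dynamic CPU energy (J), kappa is a chip-dependent constant
    return w * t + (1 - w) * e

def offload_cost(task: Task, f_edge: float, rate_bps: float,
                 p_tx: float, w: float) -> float:
    """Weighted cost of transmitting the task and executing on the edge server."""
    t_tx = task.data_bits / rate_bps          # uplink transmission time (s)
    t = t_tx + task.cycles / f_edge           # transmit + remote execution latency
    e = p_tx * t_tx                           # device only spends radio energy
    return w * t + (1 - w) * e

def decide(task: Task, f_local: float, f_edge: float, rate_bps: float,
           kappa: float, p_tx: float, w: float = 0.5):
    """Binary offloading decision: pick the option with the lower weighted cost."""
    cl = local_cost(task, f_local, kappa, w)
    co = offload_cost(task, f_edge, rate_bps, p_tx, w)
    return ("offload" if co < cl else "local"), cl, co

# Illustrative numbers: a 10^9-cycle task with 2 Mb of input, a 1 GHz device,
# a 10 GHz edge server, and a 20 Mbps uplink.
task = Task(cycles=1e9, data_bits=2e6)
choice, cl, co = decide(task, f_local=1e9, f_edge=10e9, rate_bps=20e6,
                        kappa=1e-27, p_tx=0.5)
```

With these assumed parameters the edge option dominates on both latency and energy, so the decision is to offload; in practice, dynamic schemes re-evaluate such costs as channel rate and server load change.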