iFogSim Projects in Fog Computing for Final Year Computer Science
With the rapid growth of the Internet of Things (IoT), smart devices, and the ever-increasing amount of data being generated at the network's edge, traditional cloud computing faces several challenges in terms of latency, bandwidth, and real-time processing. Fog computing (or fog networking) was introduced as a complementary approach to cloud computing to address these issues. Fog computing extends cloud capabilities closer to the edge of the network by providing computation, storage, and networking services locally, rather than relying on distant data centers. This helps in minimizing latency, enhancing security, and improving real-time decision-making in edge-based applications.
Fog computing plays a significant role in IoT environments where data needs to be processed closer to where it is generated, reducing delays and improving system efficiency. It allows devices like sensors, routers, gateways, and other edge nodes to process data locally before sending it to the cloud, offering a more distributed and decentralized approach. The increasing reliance on real-time data processing in areas like smart cities, healthcare, autonomous vehicles, and industrial automation makes fog computing an essential technology to explore in student projects.
Software Tools and Technologies
• Operating System: Ubuntu 20.04 LTS 64-bit / Windows 10
• Development Tools: Apache NetBeans IDE 22 / iFogSim 4.0 / CloudSim SDN 3.0.0
• Language Version: Java SDK 21.0.2
List of Final Year iFogSim Projects in Fog Computing
Energy Consumption Optimization With a Delay Threshold in Cloud-Fog Cooperation Computing Project Description : This project addresses the critical trade-off between energy efficiency and application performance in a collaborative cloud-fog environment. It proposes a novel scheduling algorithm that aims to minimize the total energy consumption of the entire system—including fog nodes and cloud data centers—while strictly adhering to a user-defined delay threshold for tasks. The model considers factors like computational load, network bandwidth, and the heterogeneous capabilities of fog and cloud resources. By intelligently offloading tasks either to nearby fog nodes for low-latency processing or to the powerful cloud for heavy computation, the system optimizes for energy without compromising the quality of service, making it sustainable and responsive for delay-sensitive IoT applications.
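The core placement rule described above, minimizing energy subject to a hard delay bound, can be sketched in Java. The `Node` record, its fields, and all numbers below are illustrative assumptions for demonstration, not the project's actual model:

```java
// Hypothetical sketch: among candidate nodes that can finish a task within
// the delay threshold, pick the one with the lowest estimated energy cost.
import java.util.List;

public class DelayConstrainedScheduler {

    public record Node(String name, double mips, double joulesPerMi, double networkDelayMs) {}

    /** Returns the energy-cheapest node that still meets the delay threshold, or null. */
    public static Node place(double taskMi, double delayThresholdMs, List<Node> candidates) {
        Node best = null;
        double bestEnergy = Double.MAX_VALUE;
        for (Node n : candidates) {
            double computeMs = taskMi / n.mips() * 1000.0;      // execution time on this node
            double totalDelay = computeMs + n.networkDelayMs(); // plus transfer latency
            if (totalDelay > delayThresholdMs) continue;        // violates the threshold
            double energy = taskMi * n.joulesPerMi();           // simple linear energy model
            if (energy < bestEnergy) { bestEnergy = energy; best = n; }
        }
        return best;
    }

    public static void main(String[] args) {
        List<Node> nodes = List.of(
            new Node("fog",   1000, 0.5,  5),   // slower but close and frugal
            new Node("cloud", 8000, 1.2, 80));  // fast but far and power-hungry
        System.out.println(place(100, 150, nodes).name()); // prints "fog"
    }
}
```

Note how a tighter threshold flips the decision: at 100 ms the fog node can no longer meet the bound (100 ms compute + 5 ms transfer), so the cloud wins despite its higher energy cost.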
IoT Healthcare Applications with an Improved African Buffalo Optimization Algorithm in Fog Computing Project Description : Focusing on critical healthcare IoT applications like remote patient monitoring and real-time anomaly detection, this project enhances the African Buffalo Optimization (ABO) metaheuristic algorithm to manage the processing of sensitive medical data within a fog computing architecture. The improved ABO algorithm efficiently schedules and offloads computational tasks from medical sensors and devices to optimal fog nodes, optimizing for both latency and reliability. This approach ensures that life-critical data is processed with minimal delay near the data source, reducing dependency on the cloud and enabling faster emergency responses, all while managing the resource constraints of the fog network efficiently.
Minimizing Delay and Energy with Computational Offloading in an Online Dynamic Fog System Project Description : This project tackles the challenge of resource management in a dynamic fog computing environment where conditions like network traffic and node availability change in real-time. It develops an online offloading strategy that makes instantaneous decisions on whether to execute a task locally on an IoT device, on a proximate fog node, or on the cloud. The objective function is designed to minimize a weighted sum of total processing delay and energy consumption across the system. By continuously monitoring the network state and resource loads, the proposed solution adapts its offloading decisions on the fly, ensuring optimal performance for dynamic and unpredictable workloads.
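The weighted-sum objective can be illustrated with a minimal Java sketch. The three tiers, their delay/energy figures, and the `Option` record are invented for demonstration:

```java
// Illustrative weighted-sum offloading rule: score each tier by
// w * delay + (1 - w) * energy and pick the minimum.
public class WeightedOffloader {

    public record Option(String tier, double delayMs, double energyJ) {}

    /** Picks the tier minimizing w*delay + (1-w)*energy, with w in [0,1]. */
    public static String decide(double w, Option... options) {
        String best = null;
        double bestCost = Double.MAX_VALUE;
        for (Option o : options) {
            double cost = w * o.delayMs() + (1 - w) * o.energyJ();
            if (cost < bestCost) { bestCost = cost; best = o.tier(); }
        }
        return best;
    }

    public static void main(String[] args) {
        Option local = new Option("local", 200, 5);   // slow CPU, cheap radio
        Option fog   = new Option("fog",    40, 25);  // near, moderate energy
        Option cloud = new Option("cloud",  90, 60);  // far, radio-heavy upload
        System.out.println(decide(0.9, local, fog, cloud)); // delay-dominated -> "fog"
        System.out.println(decide(0.1, local, fog, cloud)); // energy-dominated -> "local"
    }
}
```

Sliding the weight `w` at runtime is what makes the strategy "online": the same rule adapts as monitored delay or battery constraints change.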
Quality of Experience (QoE)-Aware Placement of Applications in Fog Computing Project Description : Moving beyond traditional metrics like latency and throughput, this project focuses on maximizing the end-user's Quality of Experience (QoE) when interacting with applications deployed in a fog environment. It proposes a framework for placing application modules across fog and cloud resources based on QoE predictors, such as response time, jitter, and frame rate for video applications. The placement strategy ensures that the most critical components affecting user perception are deployed on the most appropriate nodes, guaranteeing a smooth, responsive, and satisfactory user experience for applications like augmented reality, interactive gaming, and video streaming.
Efficient Scientific Workflow Scheduling for Deadline-Constrained Parallel Tasks in Cloud Computing Environments Project Description : This project designs a sophisticated scheduling algorithm for complex scientific workflows composed of multiple interdependent parallel tasks with a strict global deadline. The model accounts for task dependencies, data transfer times between virtual machines, and the computational capacity of cloud resources. The scheduler's goal is to map each task to a suitable cloud resource and determine an execution sequence that minimizes the overall monetary cost of renting cloud instances while ensuring the entire workflow is completed before its deadline. This is crucial for fields like bioinformatics, astronomy, and climate modeling that rely on large-scale, time-sensitive computations.
Augmenting Resource Utilization with Scheduling-Based Fog Computing Framework Project Description : This project proposes a holistic fog computing framework centered on an advanced scheduling algorithm to combat resource underutilization. The framework intelligently partitions incoming tasks from various IoT domains and assigns them to underutilized fog nodes in a way that balances the load across the entire network. By maximizing the usage of available fog resources, the system reduces the need to spin up additional nodes or offload to the cloud, leading to higher overall system throughput, reduced operational costs, and improved scalability for large-scale IoT deployments.
Machine Learning-Based Latency Minimization Using a Fog Computing Approach Project Description : This project leverages machine learning techniques to predict network conditions and computational demands in a fog computing ecosystem. By training models on historical data, the system can forecast congestion and node load, enabling proactive task offloading decisions. The ML model identifies the optimal fog node for processing a task before it arrives, minimizing service latency by avoiding overloaded paths and nodes. This predictive approach is particularly effective for latency-critical applications like autonomous vehicles and industrial automation, where reaction time is paramount.
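As a toy stand-in for the trained model, the sketch below forecasts each node's load with an exponentially weighted moving average (EWMA) and routes the task to the node with the lowest forecast. A real project would substitute an actual ML predictor; the histories and smoothing factor here are invented:

```java
// EWMA-based load forecasting as a simple placeholder for an ML predictor.
public class PredictiveSelector {

    /** One-step EWMA forecast: s_t = alpha * x_t + (1 - alpha) * s_{t-1}. */
    public static double ewma(double[] loadHistory, double alpha) {
        double s = loadHistory[0];
        for (int i = 1; i < loadHistory.length; i++)
            s = alpha * loadHistory[i] + (1 - alpha) * s;
        return s;
    }

    /** Index of the node whose forecast load is lowest. */
    public static int pickNode(double[][] histories, double alpha) {
        int best = 0;
        double bestLoad = ewma(histories[0], alpha);
        for (int i = 1; i < histories.length; i++) {
            double load = ewma(histories[i], alpha);
            if (load < bestLoad) { bestLoad = load; best = i; }
        }
        return best;
    }

    public static void main(String[] args) {
        double[][] histories = {
            {0.2, 0.3, 0.8, 0.9},  // node 0: load trending up
            {0.9, 0.7, 0.4, 0.3}   // node 1: load trending down
        };
        System.out.println(pickNode(histories, 0.5)); // prints 1
    }
}
```

The point of the example: a purely reactive selector would pick node 0 (its latest sample, 0.9, looks no worse than node 1's history), while a predictive one follows the trend and picks node 1.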
Optimizing a Task Scheduling Algorithm Using an Improved List in a Fog Computing Environment Project Description : This work enhances the traditional list-based scheduling heuristic (e.g., HEFT) for a fog environment. The improved algorithm creates a sorted list of tasks based on new prioritizing criteria that consider not only computational cost but also fog-specific constraints like node proximity, bandwidth availability, and energy consumption. The scheduler then assigns each task from the sorted list to the fog node that optimizes the chosen objective, such as minimizing makespan or balancing load. This method provides an efficient and practical solution for scheduling heterogeneous tasks on heterogeneous fog resources.
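A minimal list-scheduling loop in the HEFT style, ranking tasks by a priority (here simply descending length) and then greedily picking the node with the earliest finish time, might look like this. Task lengths and node speeds are made up for illustration:

```java
// List scheduling sketch: sort tasks by priority, assign each to the node
// that finishes it earliest, and report the resulting makespan.
import java.util.Arrays;

public class ListScheduler {

    /**
     * tasks[i] = task length in MI, nodeMips[j] = node speed in MIPS.
     * Returns the makespan after assigning tasks in descending-length order
     * to whichever node finishes each one earliest given its queued work.
     */
    public static double schedule(double[] tasks, double[] nodeMips) {
        double[] sorted = tasks.clone();
        Arrays.sort(sorted);                          // ascending...
        double[] ready = new double[nodeMips.length]; // time each node is free
        for (int i = sorted.length - 1; i >= 0; i--) { // ...iterate descending
            double task = sorted[i];
            int best = 0;
            double bestFinish = Double.MAX_VALUE;
            for (int j = 0; j < nodeMips.length; j++) {
                double finish = ready[j] + task / nodeMips[j];
                if (finish < bestFinish) { bestFinish = finish; best = j; }
            }
            ready[best] = bestFinish;
        }
        double makespan = 0;
        for (double t : ready) makespan = Math.max(makespan, t);
        return makespan;
    }

    public static void main(String[] args) {
        // Four tasks, one fast node (100 MIPS) and one slow node (50 MIPS).
        System.out.println(schedule(new double[]{400, 100, 300, 200},
                                    new double[]{100, 50})); // prints 7.0
    }
}
```

An "improved list" variant would replace the descending-length ranking with a composite priority (proximity, bandwidth, energy), but the assignment loop stays the same.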
Deep Reinforcement Learning for Autonomous Computation Offloading and Auto-Scaling in Mobile Fog Computing Project Description : This project develops an autonomous management system for mobile fog computing using Deep Reinforcement Learning (DRL). The DRL agent learns optimal policies by continuously interacting with the environment. It makes two key decisions: firstly, whether to offload a task from a mobile device to a fog node or the cloud, and secondly, whether to scale fog resources (e.g., activate or sleep nodes) based on current demand. This end-to-end automation allows the system to self-adapt to changing workloads and mobility patterns, optimizing for performance and cost without human intervention.
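A full DRL agent needs a neural network, but the underlying reinforcement-learning update can be shown with tabular Q-learning on a toy offloading problem (two states, two actions). The reward model and transition dynamics below are invented; this is a conceptual stand-in, not the project's actual agent:

```java
// Tabular Q-learning toy: state = fog node idle/busy, action = local/offload.
import java.util.Random;

public class QLearningOffloader {

    static final int IDLE = 0, BUSY = 1;      // states
    static final int LOCAL = 0, OFFLOAD = 1;  // actions

    /** Trains a Q-table; offloading pays off only when the fog node is idle. */
    public static double[][] train(long seed, int episodes) {
        double alpha = 0.1, gamma = 0.9, epsilon = 0.1;
        double[][] q = new double[2][2];
        Random rng = new Random(seed);
        int state = IDLE;
        for (int t = 0; t < episodes; t++) {
            // Epsilon-greedy action selection.
            int action = rng.nextDouble() < epsilon
                    ? rng.nextInt(2)
                    : (q[state][LOCAL] >= q[state][OFFLOAD] ? LOCAL : OFFLOAD);
            // Invented reward model: offloading to an idle node is best,
            // offloading to a busy node is worst, local execution is middling.
            double reward = (action == OFFLOAD) ? (state == IDLE ? 1.0 : -1.0) : 0.2;
            int next = rng.nextBoolean() ? IDLE : BUSY; // node load flips at random
            q[state][action] += alpha * (reward
                    + gamma * Math.max(q[next][LOCAL], q[next][OFFLOAD])
                    - q[state][action]);
            state = next;
        }
        return q;
    }

    public static void main(String[] args) {
        double[][] q = train(42, 20000);
        System.out.println(q[IDLE][OFFLOAD] > q[IDLE][LOCAL]); // learned: offload when idle
        System.out.println(q[BUSY][LOCAL] > q[BUSY][OFFLOAD]); // learned: stay local when busy
    }
}
```

A DRL system replaces the table with a network so it can generalize over continuous state features (queue lengths, channel quality, mobility), and a second agent of the same shape can drive the auto-scaling decision.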
Load-Balancing Fog Computing Architecture for Scientific Workflow Applications Project Description : This project introduces a fog computing architecture specifically tailored for executing data-intensive scientific workflows. It focuses on a dynamic load-balancing strategy that distributes the modules of a workflow across federated fog nodes to prevent any single node from becoming a bottleneck. The architecture considers data locality to minimize transfer times and includes a monitoring system to redistribute load in case of node failure or congestion. This approach brings computational power closer to scientific instruments (e.g., telescopes, sensors), enabling faster analysis and reducing the burden on core cloud data centers.
Developing Offload and Migration Enabled Smart Gateway for Cloud of Things in Cognitive Fog Framework Project Description : This project designs an intelligent gateway that acts as a mediator between IoT devices and the fog/cloud layers in a Cognitive Fog framework. Equipped with cognitive capabilities, the gateway can decide to either process data locally, offload it to a fog node for collaborative processing, or migrate ongoing tasks between fog nodes to follow mobile users or avoid failures. This smart gateway enhances the system's agility, reliability, and efficiency, making the Internet of Things infrastructure more responsive and context-aware.
A Resource-Aware, Cost-Efficient Scheduler for Cloud-Fog Environments Project Description : This project creates a unified scheduler that operates across the integrated cloud-fog continuum. Its primary objective is to minimize the total financial cost of execution, which includes the cost of using fog resources (often based on energy) and the cost of renting cloud resources (based on instance time). The scheduler is resource-aware, meaning it understands the capabilities and associated costs of each tier. It makes cost-efficient placement decisions by sending latency-sensitive tasks to fog nodes and computationally intensive, non-delay-sensitive tasks to the more cost-effective cloud, achieving an optimal balance.
A Resource Allocation and Management Technique for Fog Computing Environments Project Description : This project proposes a comprehensive technique and a set of protocols for the fundamental challenges of resource allocation and management in a fog environment. It encompasses mechanisms for resource discovery (identifying available fog nodes and their capabilities), resource provisioning (allocating node resources to specific tasks or users), and resource monitoring (tracking usage and performance). The technique aims to ensure efficient, fair, and secure utilization of the highly distributed and volatile resources that constitute a fog computing network.
Hidden Markov Model-Based Approach for Latency-Aware and Energy-Efficient Computation Offloading in Mobile Fog Computing Project Description : This project utilizes a Hidden Markov Model (HMM) to model the uncertain and dynamic state of a mobile fog computing system, including factors like wireless channel quality and fog node load. The HMM predicts the most likely future states of the system, which informs the offloading decision process. By understanding the probabilistic evolution of the environment, the algorithm can choose an offloading action (local, fog, cloud) that optimally trades off expected latency against energy consumption on the mobile device, enhancing both battery life and application responsiveness.
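The HMM machinery involved can be sketched with the standard forward algorithm, here filtering a belief over hidden "good vs. bad channel" states from observed acknowledgment delays. The transition and emission matrices are illustrative assumptions:

```java
// Forward algorithm: filtered distribution over hidden channel states.
public class ChannelHmm {

    /**
     * trans[i][j] = P(state j | state i), emit[i][o] = P(obs o | state i),
     * prior[i] = initial state probability. Returns the normalized filtered
     * distribution over hidden states after the observation sequence obs.
     */
    public static double[] forward(double[][] trans, double[][] emit,
                                   double[] prior, int[] obs) {
        int n = prior.length;
        double[] alpha = new double[n];
        for (int i = 0; i < n; i++) alpha[i] = prior[i] * emit[i][obs[0]];
        for (int t = 1; t < obs.length; t++) {
            double[] next = new double[n];
            for (int j = 0; j < n; j++) {
                double sum = 0;
                for (int i = 0; i < n; i++) sum += alpha[i] * trans[i][j];
                next[j] = sum * emit[j][obs[t]];
            }
            alpha = next;
        }
        double norm = 0;
        for (double a : alpha) norm += a;
        for (int i = 0; i < n; i++) alpha[i] /= norm;
        return alpha;
    }

    public static void main(String[] args) {
        double[][] trans = {{0.8, 0.2}, {0.3, 0.7}}; // channel state tends to persist
        double[][] emit  = {{0.9, 0.1}, {0.2, 0.8}}; // obs 0 = fast ACK, 1 = slow ACK
        double[] prior   = {0.5, 0.5};               // state 0 = good, 1 = bad
        double[] belief  = forward(trans, emit, prior, new int[]{1, 1, 1});
        System.out.printf("P(bad channel) = %.3f%n", belief[1]); // high after 3 slow ACKs
        System.out.println(belief[0] > 0.5 ? "offload" : "run locally");
    }
}
```

The offloading policy then conditions on this belief: offload aggressively only while the channel is probably good, falling back to local execution otherwise.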
Optimized Resource Provisioning Using Learning-Based Techniques in Fog Computing Project Description : This project applies machine learning-based techniques to predict future resource demand in fog computing networks. By analyzing patterns in historical workload data, the system can proactively provision resources—such as allocating more CPU cycles or reserving bandwidth—before demand peaks occur. This predictive provisioning prevents performance degradation during high load, ensures resource availability for critical tasks, and improves overall system efficiency by avoiding both over-provisioning (waste) and under-provisioning (poor performance).
Hybrid Meta-Heuristic Approaches for Energy-Aware Task Scheduling in Fog Computing Project Description : This research investigates the combination of two or more meta-heuristic algorithms (e.g., Genetic Algorithm with Simulated Annealing, or PSO with GWO) to create a powerful hybrid approach for energy-aware task scheduling. The hybrid algorithm leverages the strengths of each constituent method to effectively explore the vast solution space of task-to-node mappings. The primary goal is to find a scheduling solution that minimizes the total energy consumption of the fog layer while respecting task deadlines, thereby promoting green and sustainable fog computing.
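As a compact illustration of the hybrid idea, the sketch below couples mutation-driven search with a simulated-annealing acceptance test (a simplified stand-in for a full GA + SA hybrid with a population and crossover). The energy model, cooling schedule, and all constants are assumptions:

```java
// Mutation search with SA acceptance: worse moves survive early (high
// temperature) to escape local optima, then die out as the system cools.
import java.util.Random;

public class HybridGaSa {

    /** Energy of an assignment: sum over tasks of length * node energy rate. */
    static double energy(int[] assign, double[] taskMi, double[] joulesPerMi) {
        double e = 0;
        for (int t = 0; t < assign.length; t++) e += taskMi[t] * joulesPerMi[assign[t]];
        return e;
    }

    public static int[] optimize(double[] taskMi, double[] joulesPerMi,
                                 int iterations, long seed) {
        Random rng = new Random(seed);
        int nodes = joulesPerMi.length;
        int[] best = new int[taskMi.length];
        for (int i = 0; i < best.length; i++) best[i] = rng.nextInt(nodes);
        int[] current = best.clone();
        double temperature = 10.0;
        for (int it = 0; it < iterations; it++) {
            int[] child = current.clone();
            child[rng.nextInt(child.length)] = rng.nextInt(nodes); // mutation
            double delta = energy(child, taskMi, joulesPerMi)
                         - energy(current, taskMi, joulesPerMi);
            // SA acceptance: always take improvements, sometimes take worse moves.
            if (delta < 0 || rng.nextDouble() < Math.exp(-delta / temperature))
                current = child;
            if (energy(current, taskMi, joulesPerMi) < energy(best, taskMi, joulesPerMi))
                best = current.clone();
            temperature *= 0.995; // cooling schedule
        }
        return best;
    }

    public static void main(String[] args) {
        double[] taskMi = {100, 200, 300};
        double[] joulesPerMi = {0.8, 0.3, 1.5}; // node 1 is the greenest
        int[] plan = optimize(taskMi, joulesPerMi, 5000, 7);
        System.out.println(java.util.Arrays.toString(plan)); // converges to node 1 for all tasks
    }
}
```

A real hybrid would evolve a population with crossover and apply the SA acceptance when deciding whether offspring replace parents; deadline constraints would enter as penalties in the energy function.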
Energy-Aware Resource Management in Fog-Based IoT Using a Hybrid Algorithm Project Description : Similar to the previous project but with a specific focus on IoT applications, this work employs a hybrid algorithm (e.g., ACO-PSO) for holistic resource management. The algorithm manages not just task scheduling but also other resources like network bandwidth and storage allocation across fog nodes. Its core optimization objective is to minimize the total energy footprint of the entire IoT-fog system, extending the battery life of constrained IoT devices and reducing the operational cost of the fog infrastructure.
Optimizing Cost-Aware Task Scheduling in Fog-Cloud Environments Project Description : This project focuses explicitly on the economic aspect of scheduling in a federated fog-cloud environment. It models the distinct cost structures of fog providers (who may charge based on energy or time) and cloud providers (who charge based on instance hours). The scheduler's algorithm makes placement decisions that minimize the total monetary cost incurred by the user or application provider for executing their tasks, making fog-cloud deployments more economically viable for businesses and end-users.
iFogSim-Based Offloading Techniques in Cloud and Fog Hybrid Infrastructures Project Description : This project utilizes iFogSim, a widely adopted simulation toolkit for fog computing, to model, simulate, and evaluate novel offloading techniques. Researchers can design different offloading policies (e.g., latency-minimizing, energy-saving, cost-aware) and test their performance within a simulated hybrid cloud-fog infrastructure under various workload scenarios. This allows for the comparison of different strategies and the validation of their effectiveness before costly real-world deployment.
Improved Firework Algorithm Based Task Scheduling Algorithm in Fog Computing Project Description : This work enhances the Firework Algorithm (FWA), a swarm intelligence metaheuristic inspired by the explosion of fireworks, to solve the task scheduling problem in fog computing. The improvements address limitations of the basic FWA, such as convergence speed and exploration/exploitation balance. The improved FWA searches for the optimal task assignment schedule that minimizes objectives like makespan or energy consumption, demonstrating the applicability of novel bio-inspired algorithms to complex fog scheduling problems.
Optimizing Delay and Performance in a Job Scheduling Algorithm for Fog Computing Project Description : This project centers on designing a job scheduling algorithm whose primary goal is to optimize two key metrics: overall job completion time (delay) and system performance (e.g., throughput). The algorithm considers the heterogeneity of jobs and fog resources, aiming to assign each job to the node that can process it the fastest while also considering the current system load to prevent congestion. This results in a responsive fog system that can handle a high volume of jobs efficiently.
Ciphertext-Policy Attribute-Based Encryption with Compulsory Traceability Against Privilege Abuse in Fog Computing Project Description : This project addresses security and privacy concerns in fog computing by implementing a sophisticated encryption scheme. It employs Ciphertext-Policy Attribute-Based Encryption (CP-ABE) which allows data owners to encrypt data with an access policy defining which users (with specific attributes) can decrypt it. A crucial addition is a compulsory traceability mechanism that can identify and revoke the credentials of any fog node or user who abuses their decryption privileges. This ensures secure and auditable data sharing in potentially untrusted fog environments.
Delay-Aware Task Scheduling and Offloading in Fog Networks Project Description : This project provides a comprehensive solution for managing tasks in fog networks with an overriding focus on minimizing end-to-end delay. The proposed framework integrates both scheduling (ordering and assigning tasks to nodes) and offloading (making the local/fog/cloud decision). It uses precise delay models that include computation time on nodes and transmission time over links to make decisions that guarantee tasks meet their stringent latency requirements, which is vital for real-time control systems and interactive services.
FOLO: Latency and Quality Optimized Task Allocation in Vehicular Fog Computing Project Description : FOLO is a novel task allocation framework designed specifically for Vehicular Fog Computing (VFC), where moving vehicles act as fog nodes. It tackles the unique challenges of high mobility and volatility by optimizing for both ultra-low latency and high quality of computational results. The allocation strategy considers vehicle trajectory prediction to ensure task completion before a vehicle moves out of range and incorporates mechanisms to maintain computation quality despite the dynamic network conditions, enabling applications like cooperative collision avoidance and real-time traffic optimization.
Improved NSGA-II Based Multi-Objective Optimization for Efficient Fog Computing Resource Scheduling Project Description : This project enhances the Non-dominated Sorting Genetic Algorithm II (NSGA-II) to handle the multi-objective nature of fog scheduling. The improved algorithm simultaneously optimizes for conflicting objectives such as minimizing latency, minimizing energy consumption, and maximizing resource utilization. It provides a set of Pareto-optimal solutions (a front of non-dominated solutions), allowing system administrators to choose a scheduling policy that best fits their current priorities and constraints.
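The building blocks of NSGA-II, Pareto dominance and extraction of the first non-dominated front, can be sketched as follows over invented (latency, energy) pairs where both objectives are minimized:

```java
// Pareto dominance and the first non-dominated front, the core of NSGA-II's
// non-dominated sorting step.
import java.util.ArrayList;
import java.util.List;

public class ParetoFront {

    /** a dominates b if a is no worse in every objective and better in at least one. */
    public static boolean dominates(double[] a, double[] b) {
        boolean strictlyBetter = false;
        for (int k = 0; k < a.length; k++) {
            if (a[k] > b[k]) return false;
            if (a[k] < b[k]) strictlyBetter = true;
        }
        return strictlyBetter;
    }

    /** Solutions not dominated by any other candidate (the first front). */
    public static List<double[]> firstFront(List<double[]> candidates) {
        List<double[]> front = new ArrayList<>();
        for (double[] a : candidates) {
            boolean dominated = false;
            for (double[] b : candidates)
                if (dominates(b, a)) { dominated = true; break; }
            if (!dominated) front.add(a);
        }
        return front;
    }

    public static void main(String[] args) {
        List<double[]> schedules = List.of(
            new double[]{10, 90},   // low latency, high energy
            new double[]{50, 40},   // balanced
            new double[]{90, 10},   // high latency, low energy
            new double[]{60, 60});  // dominated by {50, 40}
        System.out.println(firstFront(schedules).size()); // prints 3
    }
}
```

The full algorithm repeats this sorting over successive fronts and adds crowding-distance selection; the returned first front is exactly the Pareto-optimal set offered to the administrator.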
The Planning and Design Problem for Fog Computing Networks Project Description : This project addresses the foundational planning and design problem for deploying a fog computing network. It involves making strategic decisions on the number of fog nodes to deploy, their geographical placement, their hardware specification (computing power, storage), and the network connectivity between them. The goal is to design a cost-effective infrastructure that meets expected performance criteria (e.g., coverage, latency) for target applications before actual deployment, serving as a crucial blueprint for fog network operators.
Application Placement Strategies in Fog Computing for Enhanced Quality of Experience (QoE) Project Description : This research delves into strategies for placing the different components or microservices of a complex application across fog nodes to directly enhance the user's Quality of Experience (QoE). It moves beyond technical metrics to model how factors like delay, jitter, and resolution impact user satisfaction. The placement strategy uses these QoE models to decide where to instantiate each application module, ensuring the user perceives the service as high-quality, responsive, and reliable, which is key for consumer-facing fog applications.
Optimized Task Offloading and Data Offloading Techniques in Mobile Fog-Based Collaborative Networks Project Description : This project focuses on the dual problem of offloading both computation (tasks) and the associated data in mobile collaborative fog networks. It recognizes that data transfer can often be the bottleneck. The proposed technique co-optimizes the decisions of where to compute and where to store/access data, aiming to minimize total execution time and energy consumption. This is especially important for data-intensive collaborative applications where multiple mobile users or fog nodes need to share and process common datasets.
Self-Similarity-Based Load Balancing (SSLB) for Large-Scale Fog Computing Project Description : This project proposes a Self-Similarity-Based Load Balancing (SSLB) algorithm designed for large-scale fog networks. It exploits the observation that network traffic and load patterns often exhibit self-similarity (fractal properties). By recognizing these patterns, the algorithm can predict short-term future load and proactively balance traffic across fog nodes, preventing localized congestion and improving the overall stability and capacity of the large-scale system. This approach is more adaptive to real-world traffic characteristics than traditional methods.
Efficient Resource Scheduling in Fog Computing through Extended Particle Swarm Optimization Project Description : This work extends the classic Particle Swarm Optimization (PSO) algorithm to make it more suitable for the fog scheduling problem. The extensions might include a novel representation of the solution, custom velocity update rules that incorporate fog constraints, or hybridizing PSO with a local search technique. The Extended PSO algorithm efficiently navigates the complex search space to find high-quality schedules that optimize objectives like makespan, cost, or energy, demonstrating improved performance over standard optimization techniques.
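A minimal PSO sketch for task-to-node mapping: continuous particle positions are rounded to node indices for fitness evaluation, and the classic inertia + cognitive + social velocity update drives the search. Clamping positions to valid node indices stands in for the project's extensions; all constants and the workload are assumptions:

```java
// Classic PSO adapted to a discrete scheduling problem via rounding.
import java.util.Random;

public class PsoScheduler {

    /** Makespan of the decoded assignment (lower is better). */
    static double fitness(double[] pos, double[] taskMi, double[] nodeMips) {
        double[] load = new double[nodeMips.length];
        for (int t = 0; t < pos.length; t++) {
            int node = (int) Math.round(pos[t]); // decode position -> node index
            load[node] += taskMi[t] / nodeMips[node];
        }
        double makespan = 0;
        for (double l : load) makespan = Math.max(makespan, l);
        return makespan;
    }

    public static double optimize(double[] taskMi, double[] nodeMips,
                                  int particles, int iterations, long seed) {
        Random rng = new Random(seed);
        int dims = taskMi.length, maxIdx = nodeMips.length - 1;
        double w = 0.7, c1 = 1.5, c2 = 1.5; // inertia, cognitive, social weights
        double[][] x = new double[particles][dims], v = new double[particles][dims];
        double[][] pBest = new double[particles][];
        double[] pBestFit = new double[particles];
        double[] gBest = null; double gBestFit = Double.MAX_VALUE;
        for (int p = 0; p < particles; p++) {
            for (int d = 0; d < dims; d++) x[p][d] = rng.nextDouble() * maxIdx;
            pBest[p] = x[p].clone();
            pBestFit[p] = fitness(x[p], taskMi, nodeMips);
            if (pBestFit[p] < gBestFit) { gBestFit = pBestFit[p]; gBest = x[p].clone(); }
        }
        for (int it = 0; it < iterations; it++) {
            for (int p = 0; p < particles; p++) {
                for (int d = 0; d < dims; d++) {
                    v[p][d] = w * v[p][d]
                            + c1 * rng.nextDouble() * (pBest[p][d] - x[p][d])
                            + c2 * rng.nextDouble() * (gBest[d] - x[p][d]);
                    x[p][d] = Math.min(maxIdx, Math.max(0, x[p][d] + v[p][d])); // clamp
                }
                double f = fitness(x[p], taskMi, nodeMips);
                if (f < pBestFit[p]) { pBestFit[p] = f; pBest[p] = x[p].clone(); }
                if (f < gBestFit)   { gBestFit = f; gBest = x[p].clone(); }
            }
        }
        return gBestFit;
    }

    public static void main(String[] args) {
        // Two equal tasks on two equal nodes: the optimum spreads them (makespan 1.0).
        System.out.println(optimize(new double[]{100, 100},
                                    new double[]{100, 100}, 20, 100, 1));
    }
}
```

The "extended" variants described above would modify the velocity rule or hybridize with a local search; the decode-and-evaluate skeleton stays the same.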