Final Year CloudSim Projects in Cloud Computing

CloudSim Projects for Cloud Computing in Final Year Computer Science

  • Cloud computing has revolutionized how individuals and organizations access and manage computational resources, data storage, and application services. By providing scalable, on-demand access to a shared pool of configurable computing resources, cloud computing enables flexible, efficient, and cost-effective solutions for both small and large-scale applications. Cloud computing services are typically offered through three primary models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), each offering varying levels of abstraction and control.

    CloudSim is a powerful and widely used simulation toolkit for cloud computing environments. It allows researchers and developers to model and experiment with cloud infrastructure, data centers, virtual machines (VMs), applications, and resource provisioning policies, and to implement and evaluate algorithms for scheduling, load balancing, resource allocation, and energy efficiency. The adoption of cloud computing has transformed traditional IT infrastructure by allowing users to rent computing power and storage on demand instead of investing in costly hardware and data centers. Cloud computing is particularly significant for academic and student projects, as it offers a practical, scalable way to test, deploy, and manage applications without significant upfront investment.

Software Tools and Technologies

  • Operating System: Ubuntu 20.04 LTS 64-bit / Windows 10
  • Development Tools: Apache NetBeans IDE 22 / CloudSim 4.0.0 / WorkFlowSim 1.0 / CloudAuction-2.0 / FederatedCloudSim 2.0.1
  • Language: Java SDK 21.0.2
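
The sketch below shows the skeleton of a CloudSim run in the spirit of the bundled CloudSimExample1: one datacenter with a single host, one broker, one VM, and one cloudlet. The org.cloudbus.cloudsim class names follow the CloudSim 3.x/4.x examples, but constructor signatures can shift between releases, so treat this as a starting template rather than a drop-in program; all numeric parameters (MIPS, RAM, costs) are illustrative.

```java
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;

import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.DatacenterCharacteristics;
import org.cloudbus.cloudsim.Host;
import org.cloudbus.cloudsim.Pe;
import org.cloudbus.cloudsim.Storage;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.VmAllocationPolicySimple;
import org.cloudbus.cloudsim.VmSchedulerTimeShared;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

public class MinimalCloudSimRun {

    public static void main(String[] args) throws Exception {
        // Initialise the simulation with one cloud user and no event tracing.
        CloudSim.init(1, Calendar.getInstance(), false);

        Datacenter datacenter = createDatacenter("Datacenter_0");
        DatacenterBroker broker = new DatacenterBroker("Broker_0");
        int brokerId = broker.getId();

        // One small VM: 1000 MIPS, 1 PE, 512 MB RAM, 1000 bw, 10 GB image, Xen VMM.
        List<Vm> vmList = new ArrayList<>();
        vmList.add(new Vm(0, brokerId, 1000, 1, 512, 1000, 10000, "Xen",
                new CloudletSchedulerTimeShared()));
        broker.submitVmList(vmList);

        // One cloudlet of 400000 MI with a full CPU/RAM/BW utilisation model.
        List<Cloudlet> cloudletList = new ArrayList<>();
        UtilizationModelFull full = new UtilizationModelFull();
        Cloudlet cloudlet = new Cloudlet(0, 400000, 1, 300, 300, full, full, full);
        cloudlet.setUserId(brokerId);
        cloudlet.setVmId(0);
        cloudletList.add(cloudlet);
        broker.submitCloudletList(cloudletList);

        CloudSim.startSimulation();
        CloudSim.stopSimulation();

        List<Cloudlet> finished = broker.getCloudletReceivedList();
        for (Cloudlet c : finished) {
            System.out.printf("Cloudlet %d finished on VM %d at %.2f%n",
                    c.getCloudletId(), c.getVmId(), c.getFinishTime());
        }
        System.out.println("Datacenter used: " + datacenter.getName());
    }

    // One host with a single 1000-MIPS PE, 2 GB RAM, 1 TB storage, 10000 bw.
    private static Datacenter createDatacenter(String name) throws Exception {
        List<Pe> peList = new ArrayList<>();
        peList.add(new Pe(0, new PeProvisionerSimple(1000)));

        List<Host> hostList = new ArrayList<>();
        hostList.add(new Host(0, new RamProvisionerSimple(2048),
                new BwProvisionerSimple(10000), 1000000, peList,
                new VmSchedulerTimeShared(peList)));

        DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
                "x86", "Linux", "Xen", hostList, 10.0, 3.0, 0.05, 0.001, 0.0);

        return new Datacenter(name, characteristics,
                new VmAllocationPolicySimple(hostList),
                new LinkedList<Storage>(), 0);
    }
}
```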

List of Final Year CloudSim Projects in Cloud Computing

  • Assuring Fault-Tolerance through VM Migration-Based Load Balancing in the Cloud
    Project Description : This project focuses on enhancing cloud system reliability by integrating fault tolerance directly into the load balancing mechanism. It employs intelligent Virtual Machine (VM) migration strategies not just to distribute computational load evenly across physical hosts, but also to proactively evacuate VMs from servers showing signs of potential failure (e.g., high temperature, hardware errors). The system continuously monitors host health metrics and workload distribution. Upon detecting an imbalanced or at-risk node, it triggers a live migration of select VMs to healthier, underutilized servers, thereby preventing service downtime, ensuring continuous availability, and maintaining performance Service Level Agreements (SLAs) even in the event of individual component failures.
  • Optimization of Completion Time through Efficient Resource Allocation of Task in Cloud Computing by Enhancing the Genetic Algorithm Using Differential Evolutionary Algorithm
    Project Description : This project aims to minimize the total makespan (completion time) for a batch of tasks in a cloud environment by developing a hybrid meta-heuristic scheduling algorithm. It enhances the standard Genetic Algorithm (GA), which can sometimes converge prematurely or get stuck in local optima, by integrating the crossover and mutation strategies from the Differential Evolution (DE) algorithm. This hybrid GA-DE approach creates a more robust and efficient search mechanism for the vast solution space of task-to-VM mappings. The algorithm evaluates individuals (schedules) based on their predicted makespan, evolving over generations to find an optimal or near-optimal allocation that significantly reduces overall job completion time compared to traditional schedulers.
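As a rough illustration of the hybrid idea, the sketch below evolves task-to-VM mappings with a DE-style trial-vector step and greedy (GA-like) survivor selection, using predicted makespan as fitness. The task lengths, VM speeds, and control parameters (F, CR) are illustrative assumptions, not values from the project.

```java
import java.util.Arrays;
import java.util.Random;

// Hedged sketch of a GA-DE style search over task-to-VM mappings.
// taskLength (MI) and vmMips are illustrative inputs, not values from the project.
public class GaDeScheduleSketch {
    static final Random RNG = new Random(42);

    // Makespan = completion time of the busiest VM under a simple execution-time model.
    static double makespan(int[] mapping, double[] taskLength, double[] vmMips) {
        double[] busy = new double[vmMips.length];
        for (int t = 0; t < mapping.length; t++) {
            busy[mapping[t]] += taskLength[t] / vmMips[mapping[t]];
        }
        return Arrays.stream(busy).max().orElse(0);
    }

    // DE/rand/1-style trial vector built from three parents, then rounded and
    // wrapped back to valid VM indices (a simple discrete repair step).
    static int[] deTrial(int[] a, int[] b, int[] c, int vmCount, double f, double cr) {
        int[] trial = new int[a.length];
        for (int i = 0; i < a.length; i++) {
            double v = a[i] + f * (b[i] - c[i]);
            int gene = (RNG.nextDouble() < cr) ? (int) Math.round(v) : a[i];
            trial[i] = Math.floorMod(gene, vmCount);
        }
        return trial;
    }

    public static void main(String[] args) {
        double[] taskLength = {40000, 12000, 56000, 8000, 30000, 22000};
        double[] vmMips = {1000, 2000, 1500};
        int pop = 20, generations = 200;

        // Random initial population of schedules.
        int[][] population = new int[pop][taskLength.length];
        for (int[] ind : population)
            for (int i = 0; i < ind.length; i++) ind[i] = RNG.nextInt(vmMips.length);

        for (int g = 0; g < generations; g++) {
            for (int i = 0; i < pop; i++) {
                int[] a = population[RNG.nextInt(pop)];
                int[] b = population[RNG.nextInt(pop)];
                int[] c = population[RNG.nextInt(pop)];
                int[] trial = deTrial(a, b, c, vmMips.length, 0.5, 0.9);
                // Greedy (GA-style) survivor selection: keep the better schedule.
                if (makespan(trial, taskLength, vmMips)
                        < makespan(population[i], taskLength, vmMips)) {
                    population[i] = trial;
                }
            }
        }

        int[] best = Arrays.stream(population)
                .min((x, y) -> Double.compare(makespan(x, taskLength, vmMips),
                                              makespan(y, taskLength, vmMips))).get();
        System.out.println("Best mapping " + Arrays.toString(best)
                + " makespan " + makespan(best, taskLength, vmMips));
    }
}
```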
  • Efficient Management and Effective Utilization of Resources through Optimal Allocation and Opportunistic Migration of Virtual Machines in Cloud Data Centers
    Project Description : This project tackles the challenge of low resource utilization in cloud data centers, which often leads to wasted energy and infrastructure. It proposes a holistic resource management framework that combines optimal initial placement of new Virtual Machines with opportunistic live migration of existing ones. The system uses predictive analytics to forecast resource demand and identifies consolidation opportunities. It then strategically migrates VMs from underutilized hosts, allowing those hosts to be switched to low-power sleep modes, thereby improving overall CPU/RAM utilization rates, reducing the number of active servers, and leading to substantial energy savings without violating performance guarantees.
  • Minimization of Energy Consumption in Cloud Data Centers by Applying Dynamic Programming and Analysis Tool
    Project Description : This project addresses the high operational costs and carbon footprint of cloud data centers by formulating energy minimization as a sequential decision-making problem solved using Dynamic Programming (DP). The DP model breaks down the continuous operation of the data center into discrete time steps. At each step, it evaluates the state (e.g., current load on each server, power mode) and computes an optimal policy for actions like VM consolidation, migration, and host power state switching (on/off/sleep) that minimizes the total cumulative energy consumption over a long horizon. A simulation and analysis tool (like CloudSim) is used to model the data center environment and validate the effectiveness of the DP-based strategy against other heuristic approaches.
  • Game Theory Oriented Auction based Resource Allocation in Cloud Computing
    Project Description : This project models the resource allocation process in cloud computing as a strategic game between the cloud provider (seller) and multiple users (bidders). Using auction theory, a sub-field of game theory, it designs mechanisms where users bid for virtual machine instances based on their urgency and budget. The auction-based system (e.g., a combinatorial auction) efficiently allocates resources to those who value them the most, maximizing the provider's revenue while ensuring truthful bidding from users. This approach leads to a fair and economically efficient market-like environment for cloud resources, optimizing social welfare and improving resource utilization compared to fixed-price models.
  • Energy-Efficient Algorithms for Dynamic Virtual Machine Consolidation in Cloud Data Centers
    Project Description : This project is dedicated to developing and evaluating a suite of algorithms specifically for dynamic Virtual Machine (VM) consolidation, a key technique for saving energy in clouds. The process involves four main steps: periodically detecting underloaded and overloaded hosts, selecting which VMs to migrate from these hosts, choosing the optimal destination for each migrated VM, and finally placing the VMs. The project designs novel algorithms for each step, focusing on intelligent thresholds for host detection, minimizing migration overhead, and maximizing the number of servers that can be powered off. The ultimate goal is to dynamically pack VMs onto the fewest possible physical machines without compromising performance, drastically reducing the data center's energy consumption.
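The sketch below walks through those four consolidation steps on toy host/VM records: detect overloaded and underloaded hosts, select VMs to migrate, and place them with a packing-oriented rule so emptied hosts can sleep. The thresholds (0.2/0.8), the CPU-only load model, and the minimum-demand selection rule are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hedged sketch of the four consolidation steps on toy records.
// Thresholds (0.2 / 0.8) and the CPU-only load model are illustrative assumptions.
public class ConsolidationSketch {
    record Vm(int id, double cpuDemand) {}                 // normalised CPU demand
    static class Host {
        int id; List<Vm> vms = new ArrayList<>();
        Host(int id) { this.id = id; }
        double load() { return vms.stream().mapToDouble(Vm::cpuDemand).sum(); }
    }

    public static void main(String[] args) {
        List<Host> hosts = new ArrayList<>();
        for (int i = 0; i < 4; i++) hosts.add(new Host(i));
        // Toy initial placement.
        hosts.get(0).vms.addAll(List.of(new Vm(0, 0.5), new Vm(1, 0.45)));
        hosts.get(1).vms.add(new Vm(2, 0.1));
        hosts.get(2).vms.addAll(List.of(new Vm(3, 0.3), new Vm(4, 0.2)));

        List<Vm> toMigrate = new ArrayList<>();
        // Steps 1-2: detect overloaded hosts and pick their smallest VMs to migrate.
        for (Host h : hosts) {
            while (h.load() > 0.8 && !h.vms.isEmpty()) {
                Vm smallest = h.vms.stream()
                        .min(Comparator.comparingDouble(Vm::cpuDemand)).get();
                h.vms.remove(smallest);
                toMigrate.add(smallest);
            }
        }
        // Steps 1-2 (underload): evacuate hosts below the lower threshold entirely.
        for (Host h : hosts) {
            if (!h.vms.isEmpty() && h.load() < 0.2) { toMigrate.addAll(h.vms); h.vms.clear(); }
        }
        // Steps 3-4: place each migrating VM on the most-loaded host that still fits
        // (a best-fit style rule that favours packing and host shut-downs).
        for (Vm vm : toMigrate) {
            hosts.stream()
                 .filter(h -> h.load() + vm.cpuDemand() <= 0.8)
                 .max(Comparator.comparingDouble(Host::load))
                 .ifPresent(h -> h.vms.add(vm));
        }
        hosts.forEach(h -> System.out.printf("Host %d load %.2f (%s)%n",
                h.id, h.load(), h.vms.isEmpty() ? "can sleep" : h.vms.size() + " VMs"));
    }
}
```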
  • Design and Analysis of Sustainable and Seasonal Profit Scaling Model in Cloud Environment
    Project Description : This project proposes a business model for cloud providers that aligns profitability with environmental sustainability. The "Seasonal Profit Scaling" model dynamically adjusts resource pricing and allocation strategies based on seasonal variations in renewable energy availability (e.g., more solar power in summer, more wind in winter). During periods of high renewable generation, the model encourages higher resource consumption by offering lower "green" prices, maximizing profit while using clean energy. During low renewable periods, it scales back or charges a premium, reducing reliance on fossil fuels. This approach allows providers to market themselves as sustainable, attract eco-conscious customers, and optimize profit in harmony with the environment.
  • Energy-Aware Resource Auto-Scaling Based on Allometric Scaling and Metabolic Rate Techniques for Workflow Applications In Cloud Datacenter
    Project Description : Inspired by biological principles, this project develops a novel auto-scaling mechanism for workflow applications. It draws an analogy between a data center's resource consumption and an organism's metabolic rate, which scales predictably with its size (allometric scaling). The algorithm models the cloud infrastructure as a biological system, predicting the "metabolic" (energy) cost of scaling resources up or down to meet the demands of a workflow. By applying allometric laws, it can proactively provision the most energy-efficient amount of resources (e.g., number of VMs) required to complete the workflow within its deadline, avoiding both under-provisioning (which causes delays) and over-provisioning (which wastes energy).
  • Dependable Scheduling with Active Replica Placement for Workflow Applications in Cloud Computing
    Project Description : This project ensures high reliability and fault tolerance for critical workflow applications (e.g., scientific computations, business processes) through proactive replication. Instead of running a single instance of each task, the scheduler actively creates multiple replicas of tasks and strategically places them on different, geographically separated Virtual Machines or physical hosts. This approach guarantees that even if one host fails or a network partition occurs, at least one replica of each task can still complete successfully, preventing the entire workflow from failing. The scheduling algorithm intelligently decides which tasks to replicate (e.g., those on critical path) and where to place them to maximize dependability while minimizing the resource overhead of replication.
  • Energy Efficient Approach to Reduce the Emission of Carbon in Data Centers
    Project Description : This project takes a holistic approach to reduce the carbon footprint of data centers by focusing on the source of energy rather than just consumption. It integrates three key strategies: 1) Geographical Load Balancing: directing user requests to data centers in regions where electricity is primarily generated from renewable sources (e.g., hydro, wind, solar). 2) Temporal Load Shifting: delaying non-urgent batch processing jobs (e.g., data backups, analytics) to times of day when renewable energy supply is high. 3) Internal Energy Efficiency: employing VM consolidation and other techniques to minimize kWh usage. By optimizing for "green" energy availability, the project directly tackles the environmental impact of cloud computing.
  • Hierarchical and Revocable Attribute-Based Encryption for Fine-Grained Access Control in Cloud Storage Services
    Project Description : This project addresses data security and privacy in cloud storage by implementing a sophisticated encryption scheme. Hierarchical and Revocable Attribute-Based Encryption (HR-ABE) allows data owners to encrypt files with a policy based on user attributes (e.g., "Department: Finance AND Level: Manager"), rather than specific identities. Authorized users with matching attributes can decrypt the data. The "Hierarchical" component allows for scalability by delegating authority across different domains. Crucially, the "Revocable" feature enables the data owner to instantly revoke access from users whose attributes change (e.g., an employee moving departments), providing fine-grained, dynamic, and scalable access control for outsourced data.
  • Novel Round Robin Resource Scheduling Algorithm in Cloud Computing with Dynamic Time Quantum Allocation
    Project Description : This project enhances the classic Round Robin (RR) CPU scheduling algorithm, which is known for its fairness but poor performance with varying burst times, for cloud environments. The standard RR uses a fixed time quantum, which can lead to high context-switch overhead if too small or poor responsiveness if too large. This novel algorithm introduces a dynamic time quantum that is calculated based on the actual CPU burst requirements of the tasks in the ready queue. By adapting the quantum size dynamically—making it larger for CPU-intensive tasks and smaller for I/O-intensive tasks—it significantly reduces the number of context switches, improves average waiting time, and enhances overall system throughput and responsiveness.
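A minimal sketch of the dynamic-quantum idea follows; here the quantum is recomputed each round as the mean of the remaining bursts in the ready queue, which is one common choice, while the project's exact formula (median, weighted mean, etc.) may differ. The burst times are illustrative.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

// Hedged sketch of Round Robin with a dynamic time quantum.
// Quantum = mean of the remaining bursts in the ready queue, recomputed each round.
public class DynamicQuantumRRSketch {
    public static void main(String[] args) {
        double[] remaining = {24, 3, 17, 6, 11};   // remaining burst times (illustrative)
        Deque<Integer> ready = new ArrayDeque<>();
        for (int i = 0; i < remaining.length; i++) ready.add(i);

        double clock = 0;
        int contextSwitches = 0;
        while (!ready.isEmpty()) {
            // Dynamic quantum: mean of the bursts currently in the ready queue.
            double quantum = ready.stream().mapToDouble(i -> remaining[i]).average().orElse(1);
            int rounds = ready.size();
            for (int r = 0; r < rounds; r++) {
                int task = ready.poll();
                double slice = Math.min(quantum, remaining[task]);
                clock += slice;
                remaining[task] -= slice;
                if (remaining[task] > 1e-9) {
                    ready.add(task);            // not finished: back to the queue
                    contextSwitches++;
                } else {
                    System.out.printf("Task %d finished at t=%.1f%n", task, clock);
                }
            }
        }
        System.out.println("Context switches: " + contextSwitches);
        System.out.println("Remaining bursts: " + Arrays.toString(remaining));
    }
}
```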
  • Collaboration of Shortest Job First with Longest Job First Algorithms for Efficient Task Scheduling In Cloud Datacenter
    Project Description : This project proposes a hybrid scheduling strategy that combines the strengths of Shortest Job First (SJF) and Longest Job First (LJF) to overcome their individual weaknesses. SJF minimizes average waiting time but can starve long-running tasks. LJF ensures long tasks get processed but can delay short tasks. The collaborative algorithm uses a multi-level queue system: short tasks are prioritized in a high-priority queue using an SJF-like policy to ensure quick turnaround, while long tasks are placed in a lower-priority queue with an LJF-like policy to ensure they are eventually allocated sufficient resources without causing starvation for other jobs. This balance leads to improved fairness and efficiency.
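The sketch below illustrates one possible realization of the dual-queue idea: short tasks go to an SJF-ordered high-priority queue, long tasks to an LJF-ordered low-priority queue, and the dispatcher serves them in a 3:1 ratio so long tasks cannot starve. The length threshold and the service ratio are illustrative assumptions.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Hedged sketch of an SJF/LJF dual-queue dispatcher with anti-starvation.
public class SjfLjfHybridSketch {
    record Task(int id, double length) {}

    public static void main(String[] args) {
        double threshold = 20;   // "short" vs "long" split point (illustrative)
        PriorityQueue<Task> shortQ =
                new PriorityQueue<>(Comparator.comparingDouble(Task::length));            // SJF order
        PriorityQueue<Task> longQ =
                new PriorityQueue<>(Comparator.comparingDouble(Task::length).reversed()); // LJF order

        double[] lengths = {5, 42, 8, 60, 3, 25, 12, 90};
        for (int i = 0; i < lengths.length; i++) {
            (lengths[i] <= threshold ? shortQ : longQ).add(new Task(i, lengths[i]));
        }

        int served = 0;
        while (!shortQ.isEmpty() || !longQ.isEmpty()) {
            // Serve three short (SJF) tasks for every long (LJF) task.
            boolean pickShort = (served % 4 != 3);
            PriorityQueue<Task> q = pickShort && !shortQ.isEmpty() ? shortQ
                                  : !longQ.isEmpty() ? longQ : shortQ;
            Task t = q.poll();
            System.out.printf("Dispatch task %d (length %.0f) from %s queue%n",
                    t.id(), t.length(), q == shortQ ? "SJF" : "LJF");
            served++;
        }
    }
}
```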
  • Reduction of Power Consumption and Improving Resource Utilization in Cloud Infrastructure through Combination of Genetic and Meta-Heuristic Scheduling Algorithms
    Project Description : This project tackles the dual objectives of energy savings and high resource utilization by employing a powerful hybrid meta-heuristic approach. It combines a Genetic Algorithm (GA) for its strong global search capabilities across the vast space of possible task-VM mappings with another meta-heuristic (like Particle Swarm Optimization or Simulated Annealing) for fine-tuning and local search. The fitness function for this hybrid algorithm is multi-objective, evaluating schedules based on their total power consumption (aiming to minimize it through consolidation) and their resource utilization rate (aiming to maximize it). This results in schedules that pack tasks tightly onto servers, allowing unused servers to be powered down, thus achieving both goals simultaneously.
  • Clustering Algorithm based Implementation and Performance Analysis of Various VM Placement Strategies in CloudSim
    Project Description : This project uses the CloudSim simulation toolkit to study and compare different Virtual Machine (VM) placement strategies. The core innovation is the application of clustering algorithms (like K-Means) to intelligently group VMs based on their resource usage patterns (e.g., CPU, memory, network). The hypothesis is that placing complementary VMs (e.g., a CPU-intensive VM with a memory-intensive VM) together on the same host leads to better overall utilization. The project implements several placement policies: one based on this clustering approach, alongside baselines such as Power-Aware and Throttled placement. A comprehensive performance analysis is then conducted in CloudSim to evaluate them based on metrics like energy consumption, SLA violations, and number of host shutdowns.
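As an illustration of the clustering step, the sketch below runs a plain K-Means over per-VM (CPU, memory) usage vectors; a placement policy could then pair VMs from complementary clusters on the same host. The usage values and k = 2 are illustrative assumptions.

```java
import java.util.Arrays;

// Hedged sketch of K-Means over per-VM usage vectors (CPU, memory).
// Usage data and k = 2 are illustrative; real traces would be multi-dimensional.
public class VmUsageKMeansSketch {
    public static void main(String[] args) {
        double[][] usage = {            // {cpu, mem} utilisation per VM
            {0.9, 0.2}, {0.8, 0.3}, {0.2, 0.85}, {0.15, 0.9}, {0.7, 0.25}, {0.1, 0.8}
        };
        int k = 2, iterations = 20;
        double[][] centroids = {usage[0].clone(), usage[2].clone()};  // deterministic seeds
        int[] assign = new int[usage.length];

        for (int it = 0; it < iterations; it++) {
            // Assignment step: nearest centroid by Euclidean distance.
            for (int v = 0; v < usage.length; v++) {
                double best = Double.MAX_VALUE;
                for (int c = 0; c < k; c++) {
                    double d = dist(usage[v], centroids[c]);
                    if (d < best) { best = d; assign[v] = c; }
                }
            }
            // Update step: move each centroid to the mean of its members.
            for (int c = 0; c < k; c++) {
                double[] sum = new double[2]; int count = 0;
                for (int v = 0; v < usage.length; v++) {
                    if (assign[v] == c) { sum[0] += usage[v][0]; sum[1] += usage[v][1]; count++; }
                }
                if (count > 0) { centroids[c][0] = sum[0] / count; centroids[c][1] = sum[1] / count; }
            }
        }
        System.out.println("Cluster assignment: " + Arrays.toString(assign));
        System.out.println("Centroids: " + Arrays.deepToString(centroids));
    }

    static double dist(double[] a, double[] b) {
        return Math.hypot(a[0] - b[0], a[1] - b[1]);
    }
}
```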
  • Implementation of Demand Prediction by using Improved Dynamic Resource Demand Prediction and Allocation in Multi-Tenant Service Clouds
    Project Description : This project focuses on improving resource elasticity in Platform-as-a-Service (PaaS) or Software-as-a-Service (SaaS) clouds where multiple tenants (applications/users) share underlying resources. It implements a predictive auto-scaling system that uses advanced time-series forecasting techniques (e.g., ARIMA, LSTM neural networks) to predict the future resource demand (CPU, memory) of each application tenant. These improved predictions are more accurate than simple reactive scaling based on current load. The system then proactively allocates or deallocates resources just ahead of predicted demand spikes or lulls, ensuring consistent application performance for tenants while minimizing resource waste and cost for the provider.
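The project names ARIMA and LSTM forecasters; the sketch below swaps in Holt's double exponential smoothing as a much simpler stand-in just to show the predict-then-scale loop: forecast the next interval's demand and provision VMs ahead of it. The demand trace, smoothing constants, and per-VM capacity are illustrative.

```java
// Hedged sketch of prediction-driven scaling with Holt's double exponential
// smoothing as a stand-in for the ARIMA/LSTM forecasters named in the project.
// The CPU trace, smoothing constants, and per-VM capacity are illustrative.
public class PredictiveScalingSketch {
    public static void main(String[] args) {
        double[] cpuDemand = {40, 42, 45, 50, 58, 63, 70, 78, 85, 95}; // % of one VM
        double alpha = 0.5, beta = 0.3;          // smoothing constants
        double level = cpuDemand[0], trend = cpuDemand[1] - cpuDemand[0];
        double perVmCapacity = 60;               // usable headroom per VM (%)

        for (int t = 1; t < cpuDemand.length; t++) {
            double prevLevel = level;
            level = alpha * cpuDemand[t] + (1 - alpha) * (level + trend);
            trend = beta * (level - prevLevel) + (1 - beta) * trend;

            double forecast = level + trend;     // one-step-ahead demand forecast
            int vmsNeeded = (int) Math.ceil(forecast / perVmCapacity);
            System.out.printf("t=%d demand=%.0f forecast(t+1)=%.1f -> provision %d VM(s)%n",
                    t, cpuDemand[t], forecast, Math.max(1, vmsNeeded));
        }
    }
}
```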
  • A Learning Automata-Based Algorithm for Energy and SLA Efficient Consolidation of Virtual Machines in Cloud Data Centers
    Project Description : This project employs Learning Automata (LA), a reinforcement learning technique, to make intelligent decisions in the VM consolidation process. The LA algorithm operates in an uncertain environment (the dynamic cloud data center). Its actions are decisions like "migrate VM X from host Y" or "power off host Z." The response from the environment is feedback on whether this action improved the objectives (reduced energy, maintained SLA). Over time, the LA learns the probability of certain actions leading to desirable outcomes. This allows it to adaptively and proactively consolidate VMs in a way that optimally balances the trade-off between energy efficiency and avoiding performance degradation (SLA violations), outperforming static threshold-based approaches.
  • Elastic and Flexible Deadline Constraint Load Balancing Algorithm for Cloud Computing
    Project Description : This project designs a load balancing algorithm specifically for time-sensitive applications where tasks have associated deadlines. The algorithm is "elastic" because it can dynamically scale the resource pool up or down based on the current load and urgency of tasks. It is "flexible" because it can prioritize tasks based on the tightness of their deadlines. Tasks with closer deadlines are allocated resources with higher priority, even if they are larger, to ensure they complete on time. The algorithm continuously monitors the system and can preempt less urgent tasks or provision new resources to meet the deadlines of incoming high-priority tasks, ensuring high throughput and timeliness.
  • Big Media Healthcare Data Processing in Cloud: A Collaborative Resource Management Perspective
    Project Description : This project addresses the challenge of processing massive, sensitive healthcare media files (like MRI, CT scans, medical videos) in the cloud. It proposes a collaborative resource management framework where multiple cloud data centers or zones within a center work together. The framework includes mechanisms for: 1) Secure data partitioning and distribution across resources while maintaining patient privacy (e.g., via encryption), 2) Scheduling computational tasks (e.g., image analysis, pattern recognition) close to where the data resides to minimize transfer latency, and 3) Orchestrating collaborative processing where different nodes analyze different parts of the data simultaneously. This approach significantly speeds up the analysis of big medical data, enabling faster diagnostics and research.
  • Dynamic IAAS Computing Resource Provisioning Strategy with QOS Constraint
    Project Description : This project develops a strategy for Infrastructure-as-a-Service (IaaS) providers to dynamically provision virtual machine instances for users while adhering to strict Quality of Service (QoS) constraints, typically defined in an SLA (e.g., response time, throughput). The strategy involves a control system that continuously monitors the performance of allocated VMs. If QoS metrics begin to degrade towards violation thresholds (e.g., due to increased load), the system proactively triggers auto-scaling actions—such as adding more VMs, vertically scaling an existing VM (adding more CPU/RAM), or migrating to a more powerful host—to ensure the QoS constraints are continuously met, providing a reliable and consistent performance experience for the cloud customer.
  • Hybrid Task Scheduling Method for Cloud Computing by Meta Heuristic (Genetic Algorithm - Differential Evolution Method) Algorithm
    Project Description : This project creates a powerful hybrid meta-heuristic by fusing the population-based search of Genetic Algorithms (GA) with the efficient perturbation strategies of Differential Evolution (DE). The algorithm is designed to navigate the complex search space of task-to-VM scheduling. It uses GA's selection, crossover, and mutation operators to maintain population diversity and explore broad areas. The DE component is integrated to enhance the mutation phase, creating more efficient and targeted trial vectors for offspring generation. This hybrid approach accelerates convergence towards the global optimum, effectively minimizing key scheduling objectives like makespan, cost, or energy consumption, outperforming either algorithm used independently.
  • Holistic Virtual Machine Scheduling in Cloud Datacenters Towards Minimizing Total Energy
    Project Description : Moving beyond simple VM consolidation, this project takes a "holistic" view of energy consumption. It considers all factors contributing to total data center energy use, including: 1) IT energy (servers running VMs), 2) Cooling energy (to remove heat from servers), and 3) Network energy (switches and links used for VM migration and data transfer). The scheduling algorithm makes coordinated decisions on VM placement and migration by modeling the thermal dynamics of the data center (placing VMs in cooler zones) and the network topology (minimizing migration distance). By optimizing for this total energy footprint, it achieves greater overall energy efficiency than methods that only focus on server energy alone.
  • Minimizing the Risk of Cloud Services Downtime using Live Migration and HEFT Upward Rank Placement
    Project Description : This project enhances cloud service reliability by proactively minimizing downtime risks. It combines two techniques: 1) Live Migration: to seamlessly move VMs away from physical hosts that are predicted to fail or require maintenance. 2) HEFT-based Placement: The Heterogeneous Earliest Finish Time (HEFT) algorithm, known for scheduling workflow tasks, is adapted for initial VM placement. It calculates an "upward rank" for each VM (a priority based on its criticality and dependencies). The most critical VMs are then placed on the most reliable hosts. During a potential failure event, live migration is prioritized for these high-rank VMs, ensuring the most important services have the lowest downtime.
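The sketch below computes HEFT's upward rank on a small task graph, rank_u(i) = w_i + max over successors j of (c_ij + rank_u(j)), and sorts tasks by it; higher-ranked (more critical) tasks or VMs would then be placed on the most reliable hosts first. The DAG structure and the cost values are illustrative.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hedged sketch of HEFT's upward rank on a small illustrative DAG.
// rank_u(i) = w_i + max over successors j of (c_ij + rank_u(j)).
public class UpwardRankSketch {
    public static void main(String[] args) {
        int n = 5;
        double[] w = {14, 13, 11, 7, 5};                  // average computation cost per task
        double[][] c = new double[n][n];                  // communication cost on each edge
        List<List<Integer>> succ = new ArrayList<>();
        for (int i = 0; i < n; i++) succ.add(new ArrayList<>());
        addEdge(succ, c, 0, 1, 4); addEdge(succ, c, 0, 2, 6);
        addEdge(succ, c, 1, 3, 3); addEdge(succ, c, 2, 3, 5);
        addEdge(succ, c, 3, 4, 2);

        double[] rank = new double[n];
        Arrays.fill(rank, -1);
        for (int i = 0; i < n; i++) rankUp(i, w, c, succ, rank);

        // Higher rank = more critical; these tasks/VMs would be scheduled or placed first.
        Integer[] order = {0, 1, 2, 3, 4};
        Arrays.sort(order, (a, b) -> Double.compare(rank[b], rank[a]));
        for (int i : order) System.out.printf("Task %d upward rank %.1f%n", i, rank[i]);
    }

    static double rankUp(int i, double[] w, double[][] c, List<List<Integer>> succ, double[] rank) {
        if (rank[i] >= 0) return rank[i];                 // memoised
        double best = 0;
        for (int j : succ.get(i)) best = Math.max(best, c[i][j] + rankUp(j, w, c, succ, rank));
        return rank[i] = w[i] + best;
    }

    static void addEdge(List<List<Integer>> succ, double[][] c, int from, int to, double cost) {
        succ.get(from).add(to);
        c[from][to] = cost;
    }
}
```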
  • DR-Cloud for Multi-Cloud Based Disaster Recovery Service
    Project Description : This project designs and implements a robust Disaster Recovery (DR) framework, "DR-Cloud," that leverages multiple public cloud providers (e.g., AWS, Azure, GCP) to ensure business continuity. Instead of relying on a single cloud or a private DR site, DR-Cloud continuously replicates critical data and application snapshots across different cloud regions and providers. This multi-cloud strategy eliminates a single point of failure. In the event of a disaster affecting one cloud provider or region, the system automatically fails over to a healthy environment in another cloud, minimizing Recovery Time Objective (RTO) and Recovery Point Objective (RPO). It provides a cost-effective and highly reliable DR-as-a-Service solution.
  • Hybrid Green Scheduling Algorithm using Genetic Algorithm and Particle Swarm Optimization Algorithm
    Project Description : This project develops a "green" scheduling algorithm focused on energy efficiency by hybridizing two powerful population-based meta-heuristics. The Genetic Algorithm (GA) provides a strong foundation for global exploration of the scheduling solution space through crossover and mutation. The Particle Swarm Optimization (PSO) component is integrated to enhance local exploitation; the social learning behavior of particles (schedules) helps the population converge more quickly towards energy-efficient optima. The fitness function evaluates each candidate schedule primarily on its total predicted energy consumption, leading to solutions that pack tasks efficiently, enable server shutdowns, and minimize the clouds carbon footprint.
  • Virtual Machine Consolidation with Multiple Usage Prediction for Energy-Efficient Cloud Data Centers
    Project Description : This project improves the accuracy and effectiveness of VM consolidation by employing multiple predictive models. Rather than relying on a single metric (like CPU usage), it predicts multiple resource usage dimensions (CPU, memory, network I/O) for each VM using time-series forecasting techniques. A multi-resource utilization forecast provides a more complete picture of future load. The consolidation algorithm uses this comprehensive prediction to make smarter decisions: it identifies hosts that will be underloaded or overloaded across all resources, selects VMs for migration that will balance the load, and places them on destination hosts where they will not cause future multi-dimensional resource contention, leading to more stable and energy-efficient consolidation.
  • Energy Aware for Improving a Quality of Service parameter in Load Balancing in Cloud Computing Environment
    Project Description : This project focuses on the critical trade-off between energy efficiency and Quality of Service (QoS) in load balancing. It designs an energy-aware load balancer that actively manages this trade-off. The core mechanism involves dynamically adjusting the aggressiveness of VM consolidation based on current QoS metrics (e.g., response time, throughput). If QoS is well within acceptable limits, the algorithm can aggressively consolidate VMs to save more energy. If QoS metrics approach violation thresholds, it becomes less aggressive, potentially keeping more servers active to ensure performance. This dynamic adjustment ensures that energy savings are maximized without negatively impacting user experience or SLA adherence.
  • Hybrid Heuristic Algorithm Based Energy Optimization in the Provision of On-Demand Cloud Computing Services
    Project Description : This project targets the energy efficiency of on-demand resource provisioning, a core cloud service. It develops a hybrid heuristic that combines the strengths of different simple heuristics to make smarter provisioning decisions. For example, it might use a Best-Fit Decreasing heuristic for initial placement to pack VMs tightly and then use a Simulated Annealing heuristic to further optimize the mapping for energy savings by exploring small swaps. This hybrid approach is more efficient than complex meta-heuristics and more effective than simple heuristics alone. It quickly finds near-optimal provisioning plans that minimize the number of active physical servers required to meet on-demand requests, thereby reducing energy consumption.
  • Task-Based System Efficient Load Balancing in Cloud Computing Using Particle Swarm Optimization
    Project Description : This project applies Particle Swarm Optimization (PSO), a bio-inspired meta-heuristic, specifically to the problem of task-based load balancing. In this model, each "particle" in the swarm represents a potential mapping of tasks to available Virtual Machines. Each particle's position is adjusted over iterations based on its own experience and the experience of its neighbors, moving through the solution space towards the best-known mapping. The fitness function evaluates each mapping based on load balance efficiency metrics, such as minimizing the standard deviation of load across all VMs or the maximum load on any VM. This PSO-based approach efficiently finds a scheduling solution that distributes tasks evenly, preventing bottlenecks and improving overall system throughput.
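The sketch below is a compact PSO for task-to-VM mapping: particle positions are continuous vectors truncated to VM indices for evaluation (a common discretisation trick), and fitness is the standard deviation of per-VM load. Task lengths, VM speeds, and the PSO constants are illustrative assumptions.

```java
import java.util.Arrays;
import java.util.Random;

// Hedged sketch of PSO for task-to-VM load balancing.
// Positions are continuous and truncated to VM indices when evaluated;
// fitness is the standard deviation of per-VM load (lower = better balanced).
public class PsoLoadBalanceSketch {
    static final Random RNG = new Random(3);
    static final double[] TASK = {20, 35, 10, 50, 25, 40, 15, 30};   // task lengths
    static final double[] VM_MIPS = {1.0, 1.5, 2.0};                 // relative VM speeds

    public static void main(String[] args) {
        int particles = 15, iters = 150, dim = TASK.length;
        double w = 0.7, c1 = 1.5, c2 = 1.5;
        double[][] x = new double[particles][dim], v = new double[particles][dim];
        double[][] pBest = new double[particles][dim];
        double[] pBestFit = new double[particles];
        double[] gBest = null;
        double gBestFit = Double.MAX_VALUE;

        for (int p = 0; p < particles; p++) {
            for (int d = 0; d < dim; d++) x[p][d] = RNG.nextDouble() * VM_MIPS.length;
            pBest[p] = x[p].clone();
            pBestFit[p] = fitness(x[p]);
            if (pBestFit[p] < gBestFit) { gBestFit = pBestFit[p]; gBest = x[p].clone(); }
        }
        for (int it = 0; it < iters; it++) {
            for (int p = 0; p < particles; p++) {
                for (int d = 0; d < dim; d++) {
                    v[p][d] = w * v[p][d]
                            + c1 * RNG.nextDouble() * (pBest[p][d] - x[p][d])
                            + c2 * RNG.nextDouble() * (gBest[d] - x[p][d]);
                    x[p][d] = Math.min(VM_MIPS.length - 1e-6, Math.max(0, x[p][d] + v[p][d]));
                }
                double f = fitness(x[p]);
                if (f < pBestFit[p]) { pBestFit[p] = f; pBest[p] = x[p].clone(); }
                if (f < gBestFit)    { gBestFit = f;    gBest = x[p].clone(); }
            }
        }
        System.out.println("Best mapping " + Arrays.toString(toMapping(gBest))
                + " load-stddev " + gBestFit);
    }

    static int[] toMapping(double[] position) {
        int[] m = new int[position.length];
        for (int i = 0; i < m.length; i++) m[i] = (int) position[i];
        return m;
    }

    static double fitness(double[] position) {
        double[] load = new double[VM_MIPS.length];
        int[] m = toMapping(position);
        for (int t = 0; t < m.length; t++) load[m[t]] += TASK[t] / VM_MIPS[m[t]];
        double mean = Arrays.stream(load).average().orElse(0);
        double var = Arrays.stream(load).map(l -> (l - mean) * (l - mean)).average().orElse(0);
        return Math.sqrt(var);
    }
}
```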
  • Task Based Minimizing Energy Consumption in Mobile Cloud
    Project Description : This project addresses energy conservation in mobile cloud computing, where the goal is to extend the battery life of mobile devices. It employs computational offloading, deciding which tasks should be executed locally on the mobile device and which should be offloaded to the cloud. The algorithm makes this decision per task based on a cost model that considers the task's computational complexity, the data transfer size, the network bandwidth, and the energy cost of local computation vs. data transmission. By strategically offloading only the tasks that would consume more energy if computed locally, the system minimizes the total energy consumption on the mobile device, significantly prolonging its battery life while maintaining application performance.
  • Towards Building Forensics Enabled Cloud through Secure Logging-as-a-Service
    Project Description : This project proposes a framework to make cloud environments "forensics-ready" by providing Secure Logging-as-a-Service (SLaaS). Digital forensics requires reliable, tamper-proof logs to investigate security incidents. The SLaaS framework ensures that logs from all layers (user actions, VM operations, network traffic, hypervisor events) are securely collected, encrypted, and stored in a centralized, immutable repository with strict access controls. It uses techniques like hash chaining to detect any alteration of log data. This provides a verifiable audit trail, enabling cloud providers and investigators to reliably reconstruct events after a security breach, identify the source of an attack, and meet compliance requirements for data governance.
  • Energy Efficient Resource Allocation for the Effective Task Executing through VM Allocation in Cloud Computing
    Project Description : This project tightly couples task execution efficiency with energy-efficient resource allocation. The core idea is that the way Virtual Machines are allocated directly impacts the energy cost of executing tasks. The proposed system doesn't just allocate a generic VM for a task; it profiles the task's resource needs (e.g., CPU-intensive, memory-bound, I/O-heavy) and then allocates a VM type that is a "right fit", providing exactly the resources needed without over-provisioning. This precise allocation prevents energy waste from underutilized VM resources. Furthermore, it consolidates these right-fit VMs onto the fewest physical servers possible, turning idle servers off to save energy, thereby making task execution inherently more energy-efficient.
  • Genetic Algorithm-based Framework for Scheduling and Management with Adaptive Resource Tuning in Mobile Cloud
    Project Description : This project creates a comprehensive Genetic Algorithm (GA)-based framework for mobile cloud offloading that handles both scheduling and runtime adaptation. The GA is used to find an optimal offloading plan—which tasks to run on the mobile device and which on the cloud, and on which cloud VM—to minimize execution time or energy. Furthermore, the framework is "adaptive." It continuously monitors network conditions and device status. If changes occur (e.g., bandwidth drops), it dynamically "tunes" the resource allocation by re-running the GA with updated parameters, potentially migrating tasks back locally or to a different cloud node. This ensures the offloading strategy remains optimal even in the volatile mobile environment.
  • Resource Allocation on Hybrid Cloud Network using Binary Reverse Auction Algorithm in Cloud Computing
    Project Description : This project models resource allocation in a hybrid cloud (a mix of private and public clouds) as a reverse auction. In this auction, the user (broker) has a task and announces their requirement. Multiple public cloud providers (bidders) compete to offer their resources to execute the task. They submit bids specifying the price and QoS they can provide. The user then applies a binary reverse auction algorithm, which makes a binary (yes/no) decision for each bidder and selects the winning bidder(s) that minimize the user's cost while meeting task constraints. This market-driven mechanism ensures cost-effective resource acquisition from public clouds, optimizing spending in a hybrid cloud environment.
  • Fault Tolerant Workflow Scheduling Based on Replication and Resubmission of Tasks in Cloud Computing
    Project Description : This project ensures reliable completion of scientific workflows in the unreliable cloud environment by employing a hybrid fault tolerance strategy. It uses proactive replication for critical tasks (those on the critical path or with high failure impact), running multiple copies on different resources. For non-critical tasks or when replication overhead is too high, it uses reactive resubmission: if a task fails, it is simply resubmitted for execution on another available resource. The scheduler intelligently decides which fault-tolerance technique to apply to each task based on its criticality, estimated runtime, and reliability of available resources, creating a cost-effective and robust schedule that withstands individual task failures.
  • Management and Monitoring System of Physical and Virtual Resources of Data Centers with Utilization Prediction Model for Energy-Aware VM Consolidation
    Project Description : This project implements an integrated management and monitoring system for cloud data centers. It collects real-time metrics on the utilization of both physical hosts and virtual machines. The key component is a predictive model that forecasts future resource usage based on historical data. This prediction is fed into the VM consolidation engine. Instead of reacting to current overloads or underloads, the system acts proactively: if a host is predicted to be underloaded, it plans migrations to evacuate and shut it down; if a host is predicted to be overloaded, it plans migrations to prevent SLA violations. This prediction-driven approach leads to more stable, efficient, and energy-aware data center operations.
  • Power Aware Resource Management of Cloud Datacenter through Multi-Objective VM Placement with Utilization Forecasting of IAAS
    Project Description : This project tackles power management as a multi-objective optimization problem for IaaS providers. The VM placement algorithm must balance competing goals: 1) Minimize power consumption, 2) Maximize resource utilization, and 3) Avoid SLA violations due to over-consolidation. The system uses utilization forecasting to predict the future load of VMs and physical hosts. It then employs a multi-objective algorithm (e.g., based on NSGA-II) to find a set of Pareto-optimal VM placement solutions—each representing a different trade-off between the objectives. The operator can then choose a placement plan that aligns with their current priorities (e.g., extreme energy savings vs. conservative performance guarantees).
  • Optimization of Throughput using Multi objective Tasks Scheduling Algorithm for Cloud Computing
    Project Description : This project aims to maximize system throughput (the number of tasks completed per unit of time) in a cloud environment. It recognizes that optimizing for throughput is a multi-objective problem that involves minimizing makespan, maximizing resource utilization, and efficiently handling task arrivals and departures. The developed scheduling algorithm considers multiple factors simultaneously, such as task execution time, VM processing speed, and current system load. By evaluating potential schedules against these multiple criteria, it finds allocations that keep resources busy and complete batches of tasks in the shortest possible time, thereby maximizing the overall throughput and efficiency of the cloud system.
  • XBRLE and LZ4 Compression Algorithm for Migration of Virtual Machine in Cloud Computing
    Project Description : This project focuses on reducing the overhead of live VM migration, which involves transferring the entire memory state of a VM over the network. It proposes a hybrid compression strategy to minimize the amount of data transferred. First, it uses XBRLE (an XOR-based run-length encoding for memory pages) to identify and eliminate zero pages or duplicate memory pages (deduplication). Then, it applies the extremely fast LZ4 compression algorithm to further compress the remaining memory data. This combination significantly reduces the migration time and the network bandwidth consumed during the process, leading to faster, less disruptive migrations, lower energy costs, and minimized performance impact on the migrating VM and other services.
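The sketch below captures the two-stage idea on toy data: skip zero pages, then compress the remaining pages before transfer. In a real implementation an LZ4 binding would do the compression; here java.util.zip.Deflater stands in so the example has no external dependency, and the "memory pages" are synthetic byte arrays.

```java
import java.util.zip.Deflater;

// Hedged sketch of the two-stage idea: skip zero pages, compress the rest.
// java.util.zip.Deflater stands in for LZ4; pages are toy byte arrays.
public class MigrationCompressionSketch {
    static final int PAGE_SIZE = 4096;

    public static void main(String[] args) {
        byte[][] memoryPages = new byte[8][PAGE_SIZE];
        // Make pages 2 and 5 non-zero; the rest stay zero pages (common in practice).
        for (int i = 0; i < PAGE_SIZE; i++) {
            memoryPages[2][i] = (byte) (i % 7);
            memoryPages[5][i] = 1;
        }

        long rawBytes = 0, sentBytes = 0;
        for (byte[] page : memoryPages) {
            rawBytes += PAGE_SIZE;
            if (isZeroPage(page)) continue;           // stage 1: do not transfer zero pages
            sentBytes += compressedSize(page);        // stage 2: compress what remains
        }
        System.out.printf("Raw: %d bytes, transferred: %d bytes (%.1f%% of original)%n",
                rawBytes, sentBytes, 100.0 * sentBytes / rawBytes);
    }

    static boolean isZeroPage(byte[] page) {
        for (byte b : page) if (b != 0) return false;
        return true;
    }

    static int compressedSize(byte[] page) {
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);
        deflater.setInput(page);
        deflater.finish();
        byte[] out = new byte[PAGE_SIZE * 2];
        int total = 0;
        while (!deflater.finished()) total += deflater.deflate(out);
        deflater.end();
        return total;
    }
}
```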
  • Non Clustering Algorithm Based Implementation and Performance Analysis of Various VM Placement Strategies in CloudSim
    Project Description : This project serves as a comparative study within the CloudSim simulator, focusing on VM placement strategies that do not rely on clustering techniques. It implements and analyzes a range of classic and heuristic policies such as First-Fit, Best-Fit, Worst-Fit, Power-Aware Best-Fit, and Random placement. The performance analysis rigorously evaluates these non-clustering algorithms against standard metrics: total energy consumption of the data center, number of SLA violations, average resource utilization, and number of host shutdowns achieved. The results provide valuable insights into the strengths and weaknesses of these simpler, often less computationally expensive, placement strategies for different cloud workload scenarios.
  • Implementation of Fault Prediction and Optimal PM Selection by using Proactive Fault-Tolerance in Cloud Computing
    Project Description : This project implements a proactive fault tolerance system that moves beyond reacting to failures and instead predicts them. It uses machine learning models (e.g., analyzing SMART stats, performance counters) to predict the likelihood of a physical machine (PM) failing in the near future. When a PM is flagged as high-risk, the system triggers a proactive response: it selects an "optimal" healthy PM from the pool based on criteria like available resources, current load, and proximity (to minimize migration network traffic). It then live-migrates all VMs from the predicted-to-fail PM to the optimally selected destination PM, preventing service downtime and data loss before the fault even occurs.
  • Resource Allocation Framework for Load Balancing using Hybrid Metaheuristic Algorithm in Cloud Computing
    Project Description : This project proposes a comprehensive framework that handles resource allocation with the explicit goal of achieving load balance across all physical servers. At its core is a hybrid metaheuristic algorithm, such as a Genetic Algorithm combined with Tabu Search. This hybrid algorithm searches for the optimal allocation of incoming VM requests to physical hosts. The fitness function is designed to evaluate how well a candidate allocation balances the load (e.g., by minimizing the standard deviation of CPU utilization across all hosts). By finding this optimal allocation, the framework prevents situations where some servers are overwhelmed (bottlenecks) while others are idle, leading to improved performance, higher throughput, and better overall resource utilization.
  • Memory Content Similarity for Server Consolidation by Optimizing Virtual Machine Selection and Placement
    Project Description : This project enhances server consolidation by leveraging memory content similarity between Virtual Machines. When VMs running similar operating systems or applications are consolidated onto the same host, their memory pages often have identical content (e.g., common libraries, kernel code). The algorithm optimizes VM selection for migration and placement by identifying VMs with high memory similarity. By co-locating such VMs, the hypervisor can use memory deduplication techniques (like KSM in Linux) to transparently share identical memory pages. This reduces the total physical RAM required on the host, allowing for a higher consolidation ratio (more VMs per server) and greater energy savings without adding performance overhead.
  • Load Balancing Algorithm based on Binary Bird Swarm Optimization for Cloud Computing
    Project Description : This project applies a discrete variant of the Bird Swarm Optimization (BSO) algorithm to the load balancing problem. Standard BSO is inspired by the social behavior and foraging of birds. For cloud scheduling, a binary version is used where each bird's position is represented as a binary string denoting a specific task-to-VM mapping. The birds (solutions) navigate the search space by following the best positions found so far, simulating behaviors such as foraging and vigilance. The fitness function evaluates how balanced the load is across VMs. This novel application of Binary BSO efficiently explores possible mappings to find one that minimizes load imbalance, improving response times and preventing resource starvation.
  • Adaptive Proactive Resource Allocation in Cloud Computing Based on Predictive Model
    Project Description : This project implements a resource allocation system that is both proactive and adaptive. It uses predictive models (e.g., time-series forecasting, machine learning) to anticipate future resource demands of applications. Based on these predictions, it proactively allocates resources (e.g., provisions new VMs) ahead of the actual need to prevent performance degradation. Crucially, the system is "adaptive": it continuously monitors the accuracy of its predictions and the effectiveness of its allocations. If the prediction model drifts due to changing workload patterns, it automatically retrains or adjusts the model, ensuring that the proactive allocations remain accurate and effective over time, leading to a self-tuning cloud infrastructure.
  • Efficient Adaptive Migration Algorithm in Cloud Infrastructure
    Project Description : This project designs an algorithm specifically to optimize the process of live VM migration, a core function for cloud management. The algorithm is "adaptive" because it dynamically chooses key migration parameters based on real-time conditions. It adjusts the page dirtying rate (how quickly memory changes during migration), the compression level applied to transferred data, and the network bandwidth allocated to the migration process. The goal is to minimize three key metrics simultaneously: total migration time, downtime of the VM, and performance impact on other services sharing the network. By intelligently adapting these parameters, the algorithm makes the migration process faster, smoother, and less intrusive.
  • Optimizing Cloud Workflow Scheduling by using Knowledge-based Adaptive Discrete Water Wave Optimization
    Project Description : This project enhances the Water Wave Optimization (WWO) meta-heuristic for complex workflow scheduling problems. It develops a discrete version of WWO (D-WWO) suitable for task assignment. Furthermore, it makes the algorithm "knowledge-based" and "adaptive" by incorporating problem-specific heuristics (e.g., prioritizing tasks on the critical path) into the wave propagation and refraction operations. The algorithm adaptively tunes its parameters (like wavelength) based on the search progress. This knowledge-guided, adaptive D-WWO more efficiently navigates the complex search space of workflow task ordering and resource assignment, finding higher-quality schedules that minimize makespan or cost compared to standard meta-heuristics.
  • Load Balancing with Predictive Priority-Based Dynamic Resource Provisioning Scheme in Heterogeneous Cloud Computing
    Project Description : This project creates a sophisticated load balancing scheme for heterogeneous clouds (with different types of servers and VMs). It combines two ideas: 1) Predictive Priority: It predicts the resource needs of incoming tasks and assigns them a priority based on their urgency and resource demands. 2) Dynamic Provisioning: It uses these predictions and priorities to dynamically provision the right type and size of VM from the heterogeneous resource pool. High-priority tasks are allocated powerful VMs immediately, while lower-priority tasks might be queued or assigned to less powerful ones. This predictive priority-based approach ensures that critical loads are balanced effectively onto appropriate resources, meeting performance goals while maintaining efficiency.
  • Multi-Tenant Service Clouds Based Dynamic Resource Demand Prediction and Allocation
    Project Description : This project focuses on resource management for multi-tenant PaaS or SaaS clouds where numerous independent applications share a common resource pool. It implements a system that dynamically predicts the resource demand (CPU, memory, I/O) for each tenant application individually, using historical usage data and machine learning. Based on these fine-grained predictions, the system proactively allocates resources to each tenant, scaling their dedicated pool up or down just before the predicted demand change occurs. This ensures that each application gets the resources it needs precisely when it needs them, preventing performance isolation issues (noisy neighbors) and optimizing the overall utilization of the shared infrastructure.
  • Deadline Constraint with Novel CR-PSO Approach for Multi-Objective Task Scheduling in Cloud Computing
    Project Description : This project addresses scheduling tasks with strict deadlines while optimizing for multiple objectives like cost and energy. It proposes a novel Constrained and Repaired Particle Swarm Optimization (CR-PSO) approach. The "Constrained" part handles the deadline requirement as a hard constraint, invalidating any particle (schedule) that violates it. The "Repaired" part is key: instead of discarding invalid particles, a repair heuristic modifies them to meet the deadline (e.g., by moving tasks to faster VMs). This allows the algorithm to efficiently use information from the entire population. The multi-objective PSO then optimizes the valid, repaired schedules for cost and energy, finding the best trade-off solutions that guarantee deadlines are met.
  • Workflow Applications based Distributed Grey Wolf Optimizer for Scheduling in Cloud Environment
    Project Description : This project adapts the Grey Wolf Optimizer (GWO), a meta-heuristic inspired by the social hierarchy and hunting behavior of grey wolves, for distributed workflow scheduling. The algorithm is implemented in a distributed manner, making it scalable for large workflows. The "pack" of wolves (search agents) cooperatively hunts for the optimal schedule. The alpha, beta, and delta wolves guide the population towards promising areas in the search space (good schedules). The fitness function evaluates schedules based on workflow makespan. This distributed GWO efficiently explores the complex dependencies between workflow tasks, finding high-quality schedules that minimize the total execution time of the application in the cloud.
  • Multi-Objective Optimization for Energy and Cost-Aware Workflow Scheduling in Cloud Data Centers
    Project Description : This project tackles the fundamental trade-off in cloud computing: performance vs. cost vs. energy. It formulates workflow scheduling as a multi-objective optimization problem with three competing goals: 1) Minimize energy consumption, 2) Minimize financial cost (VM rental cost), and 3) Minimize makespan (time). It employs a multi-objective algorithm like NSGA-II or MOPSO to find a set of Pareto-optimal solutions. Each solution on the Pareto front represents a different trade-off (e.g., a fast but expensive schedule, a slow but cheap and green schedule). This allows the user to visually analyze the trade-offs and select the schedule that best aligns with their current priorities and constraints.
  • Cloud-Based Workflow Task Scheduling: Balancing Energy Efficiency and Reliability
    Project Description : This project explicitly addresses the conflict between saving energy (through aggressive VM consolidation) and maintaining reliability (avoiding failures caused by overloading hosts). The scheduling algorithm is designed to balance these two objectives. It might use reliability models to estimate the failure rate of a host under a given load. The optimizer then finds a task-to-VM mapping that minimizes energy consumption but keeps the estimated failure rate below a certain acceptable threshold. This might result in a slightly less aggressive consolidation strategy than a pure energy-minimizing approach, but it yields a more robust and reliable system where the risk of failure-induced delays is controlled.
  • Geo-Distributed Data with Energy-Aware Cloud Workflow Applications Scheduling
    Project Description : This project schedules workflow applications where the input data is geo-distributed across multiple data centers (e.g., in different countries). The scheduler must decide where to execute each task of the workflow. Its decision is energy-aware, considering the different carbon intensity of the energy grids powering each data center. It evaluates the trade-off between the energy cost of transferring large datasets to a "green" data center versus executing the task on a less green data center that is closer to the data. The goal is to generate a schedule that minimizes the overall carbon footprint of the workflow by strategically placing computation near green energy sources, even if it involves some data transfer overhead.
  • LYRIC: Deadline and Budget-Aware Spatio-Temporal Query Optimization in Cloud Computing Environments
    Project Description : LYRIC is a framework designed for optimizing complex spatio-temporal queries (e.g., "find all vehicles in this area over the past hour") that process large datasets in the cloud. The optimization is "deadline and budget-aware," meaning it must find a query execution plan that finishes within a user-specified time (deadline) and cost (budget). It considers the spatial and temporal attributes of the data to partition the workload effectively across cloud resources. LYRIC evaluates different plans that might use different numbers of VMs, data indices, or processing algorithms, selecting the one that meets the dual constraints, providing efficient and cost-controlled query processing for geo-spatial big data applications.
  • Enhance the Performance of Cloud Environments by using Load Balancing Strategies
    Project Description : This project provides a broad investigation and implementation of various load balancing strategies to enhance overall cloud performance. It implements and compares multiple categories of algorithms, such as static (Round Robin, Weighted Round Robin) and dynamic (Throttled, Biased Random Sampling, Active Clustering). The performance enhancement is measured through key metrics like response time, throughput, resource utilization, and fault tolerance. The study aims to determine the most effective load balancing strategy for different types of cloud workloads (e.g., CPU-intensive, I/O-intensive, transactional), providing valuable guidelines for cloud administrators to choose the right strategy for their specific environment.
  • Risk Management Framework for SLA-Aware Load Balancing in Cloud Computing
    Project Description : This project integrates risk management principles into load balancing. It treats SLA violations as a "risk" that must be quantified and mitigated. The framework assesses the risk of violating SLAs (e.g., based on current load, historical performance) for different load balancing decisions. When distributing load, it doesn't just aim for even distribution; it evaluates the risk of sending new requests to a server that is already near its capacity limit. The load balancer makes decisions that minimize the overall risk of SLA violations across the system, potentially choosing a slightly less loaded server over a more loaded one, even if the latter has marginally better performance, leading to more reliable and predictable service delivery.
  • To Optimizing User Requirements for Cloud Data Centers by using Virtual Machine Allocation Strategy
    Project Description : This project focuses on the cloud provider's perspective for meeting diverse user requirements efficiently. Users request VMs with specific configurations (vCPUs, RAM, etc.) for their applications. The VM allocation strategy's goal is to accommodate these user requirements while optimizing the provider's infrastructure. The algorithm must solve a multi-dimensional bin-packing problem, placing VMs onto physical servers in a way that minimizes the number of active servers (saving energy) and minimizes resource fragmentation (maximizing utilization). This optimal allocation ensures that the provider can serve more user requests with the same physical infrastructure, reducing costs and increasing profit margins.
  • Optimizing Resource Provisioning Based on Meta-Heuristic Population and Deterministic Algorithm by using Ant Colony Optimization and Spanning Tree
    Project Description : This project creates a hybrid algorithm for resource provisioning in cloud networks. It uses Ant Colony Optimization (ACO), a meta-heuristic, to discover promising paths or configurations for connecting and provisioning resources across the data center network. The solutions found by ACO (a population of good paths) are then refined using a deterministic Spanning Tree algorithm (like MST). The spanning tree ensures an efficient, loop-free network topology for data flow between the provisioned resources. This hybrid approach leverages ACO's exploratory power to find good initial solutions and the deterministic algorithm's precision to optimize the network structure, leading to efficient and well-connected resource provisioning.
  • Optimizing Workflow Scheduling by using Hybrid Cost-Effective Genetic and Firefly Algorithm in Cloud Computing
    Project Description : This project develops a hybrid meta-heuristic by combining the Genetic Algorithm (GA) and the Firefly Algorithm (FA) for cost-effective workflow scheduling. GA is effective at broad exploration through crossover and mutation. FA is effective at attraction-based local search and convergence. In the hybrid, the population of schedules (fireflies) is first evolved using GA operators to generate diversity. Then, the FA mechanism is applied, where fireflies (schedules) are attracted to brighter fireflies (better fitness, i.e., lower cost) in the search space. This combination often yields faster convergence to higher-quality solutions than either algorithm alone, finding schedules that minimize the total computational cost of executing the workflow in the cloud.
  • Energy-Efficient VM Placement Policy in Cloud Computing Using Simulated Annealing Optimization
    Project Description : This project employs Simulated Annealing (SA), a single-solution based meta-heuristic, to solve the VM placement problem for energy efficiency. Starting with an initial random placement of VMs on hosts, SA generates a new candidate solution by slightly perturbing the current one (e.g., moving one VM to a different host). If the new placement consumes less energy, it is accepted. If it consumes more, it may still be accepted with a certain probability (based on a "temperature" parameter that decreases over time), which helps escape local optima. This process repeats, slowly "cooling" the system, until it converges to an energy-efficient VM placement that allows a maximum number of idle servers to be powered down.
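A minimal sketch of the annealing loop follows, using a linear power model (an idle cost plus a utilisation-proportional term) and a heavy penalty for over-committed hosts; the VM demands, power coefficients, and cooling schedule are illustrative assumptions.

```java
import java.util.Arrays;
import java.util.Random;

// Hedged sketch of Simulated Annealing for energy-aware VM placement.
// Energy model: each active host costs 100 W idle + 100 W * CPU utilisation
// (a common linear approximation); demands and cooling schedule are illustrative.
public class SaPlacementSketch {
    static final Random RNG = new Random(11);
    static final double[] VM_CPU = {0.3, 0.2, 0.5, 0.1, 0.4, 0.25};  // normalised demand
    static final int HOSTS = 4;

    public static void main(String[] args) {
        int[] placement = new int[VM_CPU.length];
        for (int i = 0; i < placement.length; i++) placement[i] = RNG.nextInt(HOSTS);

        double temperature = 10.0, cooling = 0.95;
        double currentEnergy = energy(placement);
        while (temperature > 0.01) {
            // Neighbour: move one randomly chosen VM to a random host.
            int[] candidate = placement.clone();
            candidate[RNG.nextInt(candidate.length)] = RNG.nextInt(HOSTS);
            double candidateEnergy = energy(candidate);
            double delta = candidateEnergy - currentEnergy;
            // Accept improvements always; accept worse moves with probability exp(-delta/T).
            if (delta < 0 || RNG.nextDouble() < Math.exp(-delta / temperature)) {
                placement = candidate;
                currentEnergy = candidateEnergy;
            }
            temperature *= cooling;
        }
        System.out.println("Placement " + Arrays.toString(placement)
                + " estimated power " + currentEnergy + " W");
    }

    // Infeasible placements (host CPU > 1.0) are penalised heavily instead of rejected.
    static double energy(int[] placement) {
        double[] load = new double[HOSTS];
        for (int vm = 0; vm < placement.length; vm++) load[placement[vm]] += VM_CPU[vm];
        double watts = 0;
        for (double u : load) {
            if (u > 1.0) watts += 10000;            // capacity-violation penalty
            else if (u > 0) watts += 100 + 100 * u; // idle + proportional part
        }
        return watts;
    }
}
```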
  • Discrete Water Wave Optimization Based Energy-Aware Workflow Task Scheduling in Clouds with Virtual Machine Consolidation
    Project Description : This project applies a discrete variant of the Water Wave Optimization (WWO) meta-heuristic to the combined problem of workflow scheduling and VM consolidation. The algorithm finds a schedule that maps workflow tasks to Virtual Machines. Furthermore, it is "energy-aware" because its fitness function evaluates the energy consumption of the underlying physical hosts after the schedule is deployed, taking into account the consolidation of VMs onto fewer servers. By using DWWO, the algorithm efficiently searches for a schedule that not only minimizes the workflow makespan but also leads to a VM allocation that can be highly consolidated, resulting in significant energy savings for the entire data center.
  • Optimizing Energy and Resource Efficiency by using Hybrid Heuristic Approach in Cloud Environments
    Project Description : This project takes a pragmatic approach by creating a hybrid of simple yet effective heuristics to optimize for both energy and resource efficiency. It might combine a power-aware best-fit algorithm for initial VM placement with a runtime consolidation heuristic that is triggered when utilization drops below a threshold. The hybrid approach is computationally less expensive than complex meta-heuristics, making it suitable for large-scale, dynamic cloud environments where scheduling decisions need to be made quickly. The combined heuristics work in tandem to ensure physical servers are used efficiently (high resource utilization) and that unused servers are powered off (high energy efficiency).
  • Optimizing Task Scheduling and Load Balancing Techniques by using Meta Heuristic Optimization in Cloud Infrastructure Services
    Project Description : This project provides a broad application of meta-heuristic optimization (e.g., Genetic Algorithms, Particle Swarm Optimization, Ant Colony Optimization) to the core problems of task scheduling and load balancing in IaaS clouds. It frames both problems as optimization problems with objectives like minimizing makespan, maximizing throughput, or balancing load. The power of meta-heuristics is their ability to find near-optimal solutions in these complex, NP-hard problem spaces. The project involves implementing these algorithms and demonstrating their superiority over traditional heuristic approaches (like Round Robin, FCFS) in terms of achieving higher performance, better resource utilization, and more efficient operation of the cloud infrastructure.
  • Energy Efficiency Based on Host Overload Management in Cloud Data Centers
    Project Description : This project specifically targets the problem of host overload detection and management, a key subtask of VM consolidation. It develops and compares intelligent algorithms to determine when a physical host is considered "overloaded" (i.e., at risk of causing performance degradation). Techniques range from static thresholds to adaptive methods based on statistical analysis of historical data (e.g., Median Absolute Deviation, Local Regression). Once an overload is detected, the algorithm selects specific VMs to migrate away from the host. The effectiveness of the overall energy efficiency strategy heavily depends on the accuracy of this overload detection, and this project focuses on optimizing this crucial component.
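The sketch below shows the Median Absolute Deviation variant mentioned above: the upper utilisation threshold is set to 1 - s * MAD of the host's recent CPU history, so hosts with volatile load get a more conservative threshold. The history window and safety factor s are illustrative.

```java
import java.util.Arrays;

// Hedged sketch of MAD-based adaptive overload detection:
// upper threshold = 1 - s * MAD(history); noisier hosts get a stricter threshold.
public class MadOverloadDetectionSketch {
    public static void main(String[] args) {
        double[] cpuHistory = {0.55, 0.60, 0.58, 0.72, 0.65, 0.70, 0.68, 0.75, 0.80, 0.78};
        double s = 2.5;                                  // safety parameter (illustrative)

        double mad = medianAbsoluteDeviation(cpuHistory);
        double upperThreshold = 1.0 - s * mad;
        double currentUtilisation = cpuHistory[cpuHistory.length - 1];

        System.out.printf("MAD=%.3f  threshold=%.3f  current=%.2f -> %s%n",
                mad, upperThreshold, currentUtilisation,
                currentUtilisation > upperThreshold ? "overloaded: select VMs to migrate"
                                                    : "within limits");
    }

    static double medianAbsoluteDeviation(double[] values) {
        double med = median(values);
        double[] deviations = Arrays.stream(values).map(v -> Math.abs(v - med)).toArray();
        return median(deviations);
    }

    static double median(double[] values) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        int n = sorted.length;
        return n % 2 == 1 ? sorted[n / 2] : (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;
    }
}
```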
  • Optimizing Resource Management by using Online VM Prediction based Multi-Objective Load Balancing Framework in Cloud Datacenters
    Project Description : This project presents a comprehensive framework that integrates online prediction with multi-objective load balancing. It uses real-time time-series forecasting to predict the future load of VMs. This prediction is fed into a multi-objective optimizer that balances several goals when making load distribution decisions: 1) Balancing current load, 2) Preventing predicted future overloads, 3) Minimizing energy consumption, and 4) Reducing network traffic caused by migrations. By working with predicted future states rather than just the current state, the framework can make more proactive and stable load balancing decisions, avoiding oscillations and continuously maintaining an efficient and balanced data center.
  • QRAS: Task Scheduling based Efficient Resource Allocation in Cloud Computing
    Project Description : QRAS (QoS and Resource Aware Scheduler) is a task scheduling algorithm designed for efficient resource allocation. Its efficiency comes from being aware of both the resource requirements of tasks and the desired Quality of Service (QoS) targets. The algorithm profiles tasks and resources, matching the needs of a task (e.g., high CPU, low I/O) with the strengths of an available VM. It also considers QoS parameters like expected completion time. By making this intelligent matching, QRAS avoids misallocations that lead to poor performance and resource waste. This results in higher task throughput, better resource utilization, and improved adherence to QoS expectations compared to blind allocation strategies.
  • Reliability-Focused and Cost-Effective Strategy for Scientific Workflow Scheduling in Multi-Cloud Environments
    Project Description : This project schedules large scientific workflows across multiple public cloud providers to achieve high reliability while controlling cost. It uses a multi-cloud strategy to avoid vendor lock-in and single points of failure. The scheduling algorithm makes cost-effective choices about which cloud provider to use for each task based on their pricing models (e.g., spot vs. on-demand instances). Crucially, it incorporates reliability by potentially replicating critical tasks across different providers or selecting providers with historically high reliability scores. This creates a schedule that is both resilient to individual cloud outages and financially optimized, providing a robust and economical platform for critical scientific computations.