Decentralized Location-aware Orchestration of Containerized Microservice Applications: Enabling Distributed Intelligence at the Edge

Research Area:  Edge Computing

Abstract:

   Services that operate on public, private, or hybrid clouds should always be available and reachable to their end-users or clients. However, a shift in the demand for current and future services has led to new requirements on network infrastructure, service orchestration, and Quality-of-Service (QoS). Services related to, for example, online gaming, video streaming, smart cities, smart homes, connected cars, or other Internet-of-Things (IoT) powered use cases are data-intensive and often have real-time and locality requirements. These requirements have pushed for a new computing paradigm, Edge computing, which moves some intelligence from the cloud to the edge of the network to minimize latency and data transfer. This situation has set new challenges for cloud providers, telecommunications operators, and content providers.
   The work focuses on a Proof-of-Concept design and analysis of a scalable and resilient decentralized orchestrator for containerized applications, and a scalable monitoring solution for containerized processes. The proposed orchestrator deals with the complexity of managing a geographically dispersed and heterogeneous infrastructure to efficiently deploy and manage applications that operate across different geographical locations, thus facilitating the pursuit of bringing some of the intelligence from the cloud to the edge in a way that is transparent to the applications. The results show this orchestrator’s ability to scale to 20,000 nodes and to deploy 30,000 applications in parallel.
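   As a rough illustration of what location-aware placement can look like (this is only a sketch under assumed node records and a simple distance heuristic, not the thesis's actual orchestrator design), the snippet below picks the nearest edge node that still has enough free resources for a container:

   import math

   def haversine_km(a, b):
       """Great-circle distance in km between two (lat, lon) pairs."""
       lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
       h = (math.sin((lat2 - lat1) / 2) ** 2
            + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
       return 2 * 6371 * math.asin(math.sqrt(h))

   def place_container(nodes, target_location, cpu_req, mem_req):
       """Pick the closest node that can host the container (illustrative only)."""
       feasible = [n for n in nodes
                   if n["free_cpu"] >= cpu_req and n["free_mem"] >= mem_req]
       if not feasible:
           return None  # no candidate node in this region can host the container
       return min(feasible,
                  key=lambda n: haversine_km(n["location"], target_location))

   # Hypothetical inventory of edge nodes; names, coordinates, and capacities are made up.
   edge_nodes = [
       {"name": "edge-a", "location": (65.58, 22.15), "free_cpu": 4, "free_mem": 8},
       {"name": "edge-b", "location": (59.33, 18.07), "free_cpu": 2, "free_mem": 4},
   ]
   print(place_container(edge_nodes, target_location=(65.0, 21.0), cpu_req=1, mem_req=2))
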
   Consequently, we explore the development of innovative supervised machine learning algorithms that run efficiently in settings demanding low power and resource consumption and real-time responses. The proposed classifiers are computationally inexpensive, suitable for parallel processing, and have a small memory footprint. They are therefore a viable choice for pervasive systems subject to one or a combination of these constraints, as they help extend battery life and reduce predictive latency. An implementation of one of the developed classifiers deployed to an off-the-shelf FPGA achieved a predictive throughput of 57.1 million classifications per second, or one classification every 17.485 ns.
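   As a quick sanity check on the reported FPGA figures, and as a generic example of the kind of computationally inexpensive, small-footprint classifier described above (not the specific classifiers proposed in the thesis; the weights and features below are hypothetical), a fixed-weight linear decision rule reduces each prediction to a short dot product and a threshold:

   # One classification every 17.485 ns corresponds to about 1 / 17.485e-9,
   # i.e. roughly 57.2 million classifications per second, in line with the
   # reported 57.1 million per second.
   print(1 / 17.485e-9)

   def predict(weights, bias, features):
       """Binary decision from a fixed-weight linear model (illustrative only)."""
       score = bias
       for w, x in zip(weights, features):
           score += w * x
       return 1 if score >= 0 else 0

   # Hypothetical integer weights and feature vector.
   print(predict(weights=[3, -2, 1], bias=-1, features=[1, 0, 2]))  # prints 1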

Name of the Researcher:  Lara Lorna Jiménez

Name of the Supervisor(s):  Olov Schelén, Kåre Synnes

Year of Completion:  2020

University:  Luleå University of Technology

Thesis Link:   Home Page Url