The fast-growing Internet of Things (IoT) is penetrating industry and society. Future smart hospitals, production plants, or entire cities may comprise thousands, even millions, of sensors and smart objects. Ever larger amounts of data must be processed efficiently, increasingly under real-time constraints, to provide the functionality deemed ‘smart’. In the widely used ‘two-tier’ setup (cf. mobile-cloud computing), computationally intensive or multi-device/user tasks are offloaded from resource-constrained devices to ‘the cloud’, which has serious shortcomings: (1) Unbounded fluctuations in communication delay and intermittent connectivity are intolerable under real-time conditions. (2) Insufficient pre-processing of data wastes network resources and aggravates congestion. (3) Privacy and security issues are harder to solve than on premises or in proximity to the data source.
The key goal of the project is to advance the recently propagated edge computing paradigm, which extends the two-tier device-cloud setup towards a three-tier device-edge-cloud computing setup. While edge computing has the potential to remove the above shortcomings, considerable research challenges remain. We will develop and advance algorithms, protocols, and mechanisms, as well as unifying models as a basis for integrated development and runtime support. The key goal is divided into four sub-goals; in short form, these sub-goals and the planned contributions are as follows: (1) Virtualization: novel lightweight containers for virtualization that enable the mapping of application modules onto computing resources, and their migration despite highly varying capabilities, providing highly dynamic runtime support for edge computing. (2) Control: new configuration and control protocols and algorithms built upon recent advances in Software Defined Networking (SDN) and Network Function Virtualization (NFV), enabling flexible management of heterogeneous devices in the edge environment and scalable deployment of device-edge-cloud applications. (3) Optimization: efficient online graph-based mapping of (virtualization-enabled) application modules onto resources subject to multiple optimization objectives, with dynamic adaptation to changes in location, load, and resource cost. (4) Communication: a communication layer with a unified communication framework and efficient protocols and mechanisms, adjusting to the relevant content classes (executables, media streams, complex event streams), hop classes (intra-edge, device-edge, edge-cloud), quality-of-service requirements, and network access technologies.
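To make the optimization sub-goal concrete, the following minimal sketch illustrates one very simple instance of online mapping of application modules onto device, edge, and cloud resources under two weighted objectives (latency and cost). It is an illustrative toy, not the project's actual algorithm; the names `Node` and `place_module` and the weight values are assumptions made here for exposition.

```python
# Illustrative sketch (not the project's algorithm): greedy online mapping of
# application modules onto heterogeneous compute nodes, trading off two
# objectives -- access latency and resource cost -- via a weighted sum.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    latency_ms: float   # expected access latency from the data source
    cost: float         # price per unit of capacity
    free_capacity: int  # remaining capacity units

def place_module(demand, nodes, w_latency=0.7, w_cost=0.3):
    """Pick the feasible node minimizing a weighted latency/cost objective."""
    feasible = [n for n in nodes if n.free_capacity >= demand]
    if not feasible:
        return None
    best = min(feasible, key=lambda n: w_latency * n.latency_ms + w_cost * n.cost)
    best.free_capacity -= demand
    return best.name

nodes = [
    Node("device", latency_ms=1.0,  cost=5.0, free_capacity=1),
    Node("edge",   latency_ms=5.0,  cost=2.0, free_capacity=4),
    Node("cloud",  latency_ms=50.0, cost=1.0, free_capacity=100),
]

# Modules arrive online; each is mapped as soon as it appears.
placements = [place_module(d, nodes) for d in (1, 2, 3, 50)]
print(placements)  # ['device', 'edge', 'cloud', 'cloud']
```

Small modules land on the low-latency device and edge tiers until their capacity is exhausted; larger modules fall back to the cloud. The real challenge addressed by the project is doing this for whole application graphs and adapting placements as location, load, and cost change.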
The project outcome will help developers abstract from the technical issues of edge computing by means of the unifying model, and its novel algorithms, protocols, and mechanisms will enable the sophisticated, automatic handling of these issues in a runtime-adaptive manner. We deem it seminal for fostering large-scale ‘smart’ IoT applications.
The proliferation of mobile devices and applications, for example in Augmented Reality (AR) and time-critical Internet-of-Things (IoT) scenarios such as assisted driving and process control, has driven the shift of computing resources from central mega data centers to more distributed computing nodes in proximity to the mobile devices concerned. This trend has led to a new computing paradigm called ‘edge computing’. Meanwhile, advanced network technologies such as 5G will help enable low-latency communication at the edge, further facilitating the idea of using resources at the network edge for time-critical applications. The Collaborative Research Center (CRC) MAKI has captured this trend by investigating ‘in-network processing’ since its Phase II. In particular, the concepts of Software Defined Networking (SDN) and Network Function Virtualization (NFV) are extended to the network edge, providing the NetApp concept for network functions and corresponding programming language support, a flexible runtime environment and function-aware optimization engine, and operator graphs as subjects of placement together with operator placement programming interfaces.
As part of CRC MAKI, this subproject takes an alternative approach by exploring network adaptivity from a service-centric view. In particular, we investigate the fundamental challenges in advancing edge computing based on the software engineering paradigm of the microservice architecture, in which a single application is developed as a suite of small services rather than as a monolithic whole. Unlike the related research in MAKI, microservice-based edge computing is not restricted to any particular application scenario. Therefore, the results generated by this subproject will help close the gap of enabling network adaptivity generally at the software stack level. While bringing more flexibility than the monolithic architecture, the modularity of microservices also introduces more dynamics, both spatially and temporally. Our overarching goal is to tackle these challenges and to provide a formal basis for such a new edge computing architecture, with the necessary theoretical guarantees.
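The temporal dynamics mentioned above can be sketched with a deliberately trivial runtime policy: because microservices are small and independently deployable, an overloaded edge node can shed individual services to a less loaded neighbor at runtime. All names (`migrate_if_overloaded`, the threshold, the one-load-unit-per-service model) are hypothetical illustrations, not part of the project's design.

```python
# Hedged toy model: migrate microservices away from edge nodes whose
# utilization exceeds a threshold, towards the least-loaded node.
def migrate_if_overloaded(placement, load, capacity, threshold=0.8):
    """Return a new service->node placement; `load`/`capacity` map node
    names to current load units and total capacity units."""
    new_placement = dict(placement)
    for service, node in placement.items():
        if load[node] / capacity[node] > threshold:
            target = min(capacity, key=lambda n: load[n] / capacity[n])
            if target != node:
                load[node] -= 1          # toy model: one load unit per service
                load[target] += 1
                new_placement[service] = target
    return new_placement

capacity = {"edge-1": 4, "edge-2": 4}
load = {"edge-1": 4, "edge-2": 1}        # edge-1 is at 100% utilization
placement = {"detector": "edge-1", "tracker": "edge-2"}

placement = migrate_if_overloaded(placement, load, capacity)
print(placement)  # {'detector': 'edge-2', 'tracker': 'edge-2'}
```

A monolithic deployment could only move the whole application; the per-service granularity is exactly what creates both the flexibility and the spatial/temporal dynamics the subproject must control with formal guarantees.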