Latency, Energy and Carbon Aware Collaborative Resource Allocation with Consolidation and QoS Degradation Strategies in Edge Computing
Abstract
Edge Computing has emerged from the Cloud to tackle the increasingly stringent latency, reliability and scalability requirements of modern applications, mainly in the Internet of Things arena. To this end, data centers are pushed to the edge of the network to diversify services and bring them closer to the users. This spatial distribution offers a wide range of opportunities for self-consumption from local renewable energy sources, depending on local weather conditions. However, scheduling the users' tasks so as to meet the service restrictions while consuming as much renewable energy as possible and reducing the carbon footprint remains a challenge. In this paper, we design a nationwide Edge infrastructure and study its behavior under three typical electrical configurations involving a solar power plant, batteries and the grid. We then study a set of techniques that collaboratively allocate resources on the edge data centers to harvest renewable energy and reduce the environmental impact. These strategies also include energy-efficiency optimizations, by means of reasonable quality-of-service degradation and consolidation techniques at each data center, in order to reduce the need for brown energy. The simulation results show that combining these techniques increases the self-consumption of the platform by 7.83% and reduces the carbon footprint by 35.7% compared to the baseline algorithm. The optimizations also outperform classical energy-aware resource management algorithms from the literature. Yet, these techniques do not contribute equally to these results, consolidation being the most effective.