Online learning for offloading and autoscaling in renewable-powered mobile edge computing

Jie Xu, Shaolei Ren

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

25 Citations (Scopus)

Abstract

Mobile edge computing (a.k.a. fog computing) has recently emerged to enable in-situ processing of delay-sensitive applications at the edge of mobile networks. Providing grid power supply in support of mobile edge computing, however, is costly and even infeasible (in certain rugged or under-developed areas), thus mandating on-site renewable energy as a major or even sole power supply in increasingly many scenarios. Nonetheless, the high intermittency and unpredictability of renewable energy make it very challenging to deliver a high quality of service to users in renewable-powered mobile edge computing systems. In this paper, we address the challenge of incorporating renewables into mobile edge computing and propose an efficient reinforcement learning-based resource management algorithm, which learns on the fly the optimal policy of dynamic workload offloading (to the centralized cloud) and edge server provisioning to minimize the long-term system cost (including both service delay and operational cost). Our online learning algorithm uses a decomposition of (offline) value iteration and (online) reinforcement learning, thus achieving a significant improvement in learning rate and run-time performance compared to standard reinforcement learning algorithms such as Q-learning.
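
The abstract does not spell out how the (offline) value iteration is decomposed from the (online) reinforcement learning, but one common way to realize such a split is post-decision-state (PDS) learning: the known, deterministic effect of the action (battery drain from the chosen number of active edge servers and the offloading decision) is applied exactly, and only the stochastic part (renewable arrivals and next-slot workload) is learned from samples. The Python sketch below is a minimal illustration of that idea only; the battery/server model, `slot_cost`, the feasibility rule, and all constants and distributions are invented placeholders, not the paper's actual formulation.

```python
import random

B_MAX, M_MAX = 10, 3      # battery levels 0..B_MAX, edge servers 0..M_MAX (placeholders)
ALPHA, GAMMA = 0.1, 0.95  # learning rate, discount factor

# Value estimate for each post-decision battery level, i.e. the state
# after the known energy drain but before the stochastic harvest.
V = [0.0] * (B_MAX + 1)

def slot_cost(load, m, offload):
    """Toy one-slot cost: local processing delay plus a per-unit price
    for work offloaded to the centralized cloud (placeholder numbers)."""
    local = load - offload
    delay = local / m if local > 0 else 0.0
    return delay + 1.5 * offload

def greedy(battery, load):
    """One-step lookahead over the learned post-decision values.
    Returns the best lookahead value and the action achieving it."""
    best_val, best_act = float("inf"), None
    for m in range(M_MAX + 1):
        for off in range(load + 1):
            # Toy feasibility: each active server draws one battery
            # unit and can absorb up to two units of local load.
            if m > battery or load - off > 2 * m:
                continue
            pds = battery - m  # deterministic part of the transition
            val = slot_cost(load, m, off) + GAMMA * V[pds]
            if val < best_val:
                best_val, best_act = val, (m, off, pds)
    return best_val, best_act

battery, load = B_MAX, 2
for t in range(50_000):
    _, (m, off, pds) = greedy(battery, load)
    # The stochastic part is realized only after the decision:
    # renewable harvest and next-slot workload (placeholder laws).
    battery = min(pds + random.randint(0, 2), B_MAX)
    load = random.randint(0, 4)
    # Update only the learned part: nudge the stored post-decision
    # value toward the observed lookahead at the next state.
    target, _ = greedy(battery, load)
    V[pds] += ALPHA * (target - V[pds])

print([round(v, 2) for v in V])  # learned post-decision values
```

Because the value table is indexed by the post-decision battery level alone, each observed transition updates a single entry and no (state, action) Q-table is needed, which is the kind of learning-rate advantage over plain Q-learning that the abstract claims.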

Original language: English (US)
Title of host publication: 2016 IEEE Global Communications Conference, GLOBECOM 2016 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781509013289
DOI: 10.1109/GLOCOM.2016.7842069
State: Published - Feb 2, 2017
Event: 59th IEEE Global Communications Conference, GLOBECOM 2016 - Washington, United States
Duration: Dec 4, 2016 - Dec 8, 2016

ASJC Scopus subject areas

  • Computational Theory and Mathematics
  • Computer Networks and Communications
  • Hardware and Architecture
  • Safety, Risk, Reliability and Quality

Cite this

Xu, J., & Ren, S. (2017). Online learning for offloading and autoscaling in renewable-powered mobile edge computing. In 2016 IEEE Global Communications Conference, GLOBECOM 2016 - Proceedings (Article 7842069). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/GLOCOM.2016.7842069
