ID 117740
Author
Quan, Li (University of Science and Technology Beijing)
Wang, Zhiliang (University of Science and Technology Beijing)
Keywords
mobile device
task offloading
tradeoff
mobile cloud computing
two-layered reinforcement learning
Content Type
Journal Article
Description
Mobile devices can augment their capabilities with cloud resources in mobile cloud computing environments. This paper develops a novel two-layered reinforcement learning (TLRL) algorithm for task offloading by resource-constrained mobile devices. In contrast to the existing literature, the utilization rate of the physical machines and the delay of offloaded tasks are taken into account simultaneously by introducing a weighted reward. Because the high dimensionality of the state space and action space can slow convergence, a reinforcement learning algorithm with a two-layered structure is presented to address this problem. First, the physical machines are grouped into k clusters using the k-nearest neighbors algorithm (k-NN). The first layer of TLRL is implemented with deep reinforcement learning and determines the cluster to which an offloaded task is assigned; the second layer then selects a specific physical machine within that cluster for task execution. Finally, simulation examples verify that the proposed TLRL algorithm speeds up learning of the optimal policy and handles the tradeoff between physical machine utilization rate and delay.
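The abstract describes a weighted reward balancing physical machine utilization against offloading delay, and a two-layer action selection (choose a cluster, then a machine within it). The sketch below illustrates that structure only; the weighting factor alpha, the delay normalization, the tabular epsilon-greedy Q-learning used here for both layers, and the names weighted_reward and TwoLayerOffloader are illustrative assumptions, not the paper's formulation, which uses deep reinforcement learning for the first layer.

# Minimal sketch of a weighted reward and two-layer (cluster, then machine)
# action selection, under the assumptions stated above.
import random
from collections import defaultdict

def weighted_reward(utilization, delay, alpha=0.5, max_delay=10.0):
    """Combine PM utilization (to maximize) and task delay (to minimize)
    into one scalar via an assumed weighting factor alpha."""
    norm_delay = min(delay / max_delay, 1.0)  # assumed normalization
    return alpha * utilization - (1.0 - alpha) * norm_delay

class TwoLayerOffloader:
    """Layer 1 picks one of k PM clusters; layer 2 picks a PM inside that
    cluster. Both layers use simple epsilon-greedy Q-learning in this sketch."""

    def __init__(self, clusters, epsilon=0.1, lr=0.1, gamma=0.9):
        self.clusters = clusters              # list of PM-id lists (assumed precomputed via k-NN)
        self.q_cluster = defaultdict(float)   # Q-values keyed by (state, cluster index)
        self.q_machine = defaultdict(float)   # Q-values keyed by (state, PM id)
        self.epsilon, self.lr, self.gamma = epsilon, lr, gamma

    def select(self, state):
        # Layer 1: choose a cluster, exploring with probability epsilon.
        if random.random() < self.epsilon:
            c = random.randrange(len(self.clusters))
        else:
            c = max(range(len(self.clusters)), key=lambda i: self.q_cluster[(state, i)])
        # Layer 2: choose a physical machine within the chosen cluster.
        pms = self.clusters[c]
        if random.random() < self.epsilon:
            pm = random.choice(pms)
        else:
            pm = max(pms, key=lambda p: self.q_machine[(state, p)])
        return c, pm

    def update(self, state, cluster_idx, pm, reward, next_state):
        # One-step Q-learning update for both layers on the shared weighted reward.
        best_next_c = max(self.q_cluster[(next_state, i)] for i in range(len(self.clusters)))
        self.q_cluster[(state, cluster_idx)] += self.lr * (
            reward + self.gamma * best_next_c - self.q_cluster[(state, cluster_idx)])
        best_next_pm = max(self.q_machine[(next_state, p)] for p in self.clusters[cluster_idx])
        self.q_machine[(state, pm)] += self.lr * (
            reward + self.gamma * best_next_pm - self.q_machine[(state, pm)])

In such a setup, a caller would form the clusters offline from physical machine features, call select for each arriving task, observe the resulting utilization and delay, and pass weighted_reward(...) to update.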
Journal Title
Future Internet
ISSN
1999-5903
Publisher
MDPI
Volume
10
Issue
7
Start Page
60
Published Date
2018-07-01
Rights
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
EDB ID
DOI (Published Version)
URL (Publisher's Version)
FullText File
Language
eng
TextVersion
Publisher
Departments
Science and Technology