TY - GEN
T1 - Active learning for streaming data in a contextual bandit framework
AU - Song, Linqi
AU - Xu, Jie
AU - Li, Congduan
PY - 2019/5/4
Y1 - 2019/5/4
N2 - Contextual bandit algorithms have been shown to be effective in solving sequential decision-making problems under uncertainty. A common assumption in the literature is that the realized (ground truth) reward is observed by the learner at no cost, which is not realistic in many practical scenarios. When observing the ground truth reward is costly, a key challenge is how to judiciously acquire the ground truth by weighing the benefits against the costs, so as to balance learning efficiency and learning cost. In this paper, we design a novel contextual bandit-based learning algorithm and endow it with an active learning capability. In addition to sending a query to an annotator for the ground truth, the learner sends along the prior information it has learned about the ground truth, thereby reducing the query cost. We prove that the learning regret of the proposed algorithm achieves the same order as that of conventional contextual bandit algorithms in cost-free scenarios, implying that, surprisingly, the cost of acquiring the ground truth does not increase the learning regret in the long run; the prior information about the ground truth plays a critical role in this result.
AB - Contextual bandit algorithms have been shown to be effective in solving sequential decision-making problems under uncertainty. A common assumption in the literature is that the realized (ground truth) reward is observed by the learner at no cost, which is not realistic in many practical scenarios. When observing the ground truth reward is costly, a key challenge is how to judiciously acquire the ground truth by weighing the benefits against the costs, so as to balance learning efficiency and learning cost. In this paper, we design a novel contextual bandit-based learning algorithm and endow it with an active learning capability. In addition to sending a query to an annotator for the ground truth, the learner sends along the prior information it has learned about the ground truth, thereby reducing the query cost. We prove that the learning regret of the proposed algorithm achieves the same order as that of conventional contextual bandit algorithms in cost-free scenarios, implying that, surprisingly, the cost of acquiring the ground truth does not increase the learning regret in the long run; the prior information about the ground truth plays a critical role in this result.
KW - Active learning
KW - Contextual bandits
KW - Streaming data
UR - http://www.scopus.com/inward/record.url?scp=85069055967&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85069055967&partnerID=8YFLogxK
U2 - 10.1145/3330530.3330543
DO - 10.1145/3330530.3330543
M3 - Conference contribution
AN - SCOPUS:85069055967
T3 - ACM International Conference Proceeding Series
SP - 29
EP - 35
BT - Proceedings of the 2019 5th International Conference on Computing and Data Engineering, ICCDE 2019
PB - Association for Computing Machinery
T2 - 5th International Conference on Computing and Data Engineering, ICCDE 2019
Y2 - 4 May 2019 through 6 May 2019
ER -