
Is Machine Learning in Power Systems Vulnerable?

Bibliographic Details
Published in: arXiv.org 2018-08
Main Authors: Chen, Yize, Tan, Yushi, Deka, Deepjyoti
Format: Article
Language:English
Description
Summary: Recent advances in Machine Learning (ML) have led to its broad adoption in a range of power system applications, from meter data analytics and renewable/load/price forecasting to grid security assessment. Although these data-driven methods achieve state-of-the-art performance in many tasks, the robustness and security of applying such algorithms in modern power grids have not been discussed. In this paper, we address issues regarding the security of ML applications in power systems. We first show that most ML algorithms currently proposed for power systems are vulnerable to adversarial examples, which are maliciously crafted input data. We then adopt and extend a simple yet efficient algorithm for finding subtle perturbations, which can be used to generate adversarial inputs for both categorical (e.g., user load profile classification) and sequential applications (e.g., renewables generation forecasting). Case studies on the classification of power quality disturbances and the forecasting of building loads demonstrate the vulnerabilities of current ML algorithms in power networks under our adversarial designs. These vulnerabilities call for the design of robust and secure ML algorithms for real-world applications.
ISSN:2331-8422
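
The summary refers to "a simple yet efficient algorithm for finding subtle perturbations." As a rough illustration only, the sketch below assumes an FGSM-style gradient-sign attack (one common choice for this kind of perturbation) applied to a toy load-profile classifier; the network architecture, the 96-reading input length, the class count, and the epsilon budget are hypothetical placeholders, not details taken from the paper.

    # Minimal sketch, assuming an FGSM-style gradient-sign attack; all model
    # details below are illustrative, not the paper's actual setup.
    import torch
    import torch.nn as nn

    # Toy stand-in for a load-profile classifier: 96 readings -> 4 customer classes.
    model = nn.Sequential(nn.Linear(96, 64), nn.ReLU(), nn.Linear(64, 4))
    loss_fn = nn.CrossEntropyLoss()

    def fgsm_perturb(x, y_true, epsilon=0.05):
        """Return an adversarial copy of x, shifted by epsilon along the sign of the loss gradient."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y_true)
        loss.backward()
        # One signed-gradient step increases the loss, pushing the input toward misclassification.
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

    # Usage on a random batch of normalized load profiles (synthetic data for illustration).
    x = torch.rand(8, 96)                  # 8 profiles, 96 quarter-hourly readings
    y = torch.randint(0, 4, (8,))          # hypothetical ground-truth labels
    x_adv = fgsm_perturb(x, y)
    print((x_adv - x).abs().max())         # perturbation magnitude is bounded by epsilon

In principle, the same signed-gradient step can target a sequential forecasting model by back-propagating a forecasting loss (e.g., mean squared error over the predicted horizon) instead of the classification loss.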