- Associate Professor, South China University of Technology, China
In security-related applications, an adversary can fool a model using carefully crafted samples. A traditional machine learning method may be compromised by an adversarial attack that violates the implicit assumption that training and test samples follow the same distribution. This security problem may become more serious in deep learning, since public datasets and pre-trained models have been used more frequently in recent years, and those datasets and models can easily be compromised by a nefarious third-party supplier. This tutorial will introduce how vulnerable machine learning methods are and how their robustness can be improved, and will also discuss real-life machine learning applications in adversarial environments.
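To illustrate the kind of carefully crafted samples the abstract refers to, the following is a minimal sketch of a one-step sign-gradient perturbation (in the spirit of the Fast Gradient Sign Method). The toy logistic model, its weights, and the sample values are hypothetical placeholders, not taken from the tutorial itself.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cross_entropy(x, y, w, b):
    """Cross-entropy loss of a toy logistic model on input x with label y."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def fgsm_attack(x, y, w, b, eps=0.1):
    """Perturb x by eps in the direction that increases the model's loss."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    # Gradient of the cross-entropy loss w.r.t. the input of a logistic model.
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    # One step in the sign of the gradient: a small but adversarial change.
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# A clean sample that the toy model classifies correctly (p > 0.5 for y = 1).
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1.0
x_adv = fgsm_attack(x, y, w, b, eps=0.5)
```

The perturbed sample `x_adv` differs from `x` by at most `eps` in each coordinate, yet the model's loss on it is strictly higher, which is the basic mechanism behind the evasion attacks the tutorial covers.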
Patrick P. K. Chan received the Ph.D. degree from The Hong Kong Polytechnic University in 2009. He is currently an Associate Professor in the School of Computer Science and Engineering, and the person in charge of the Machine Learning and Cybernetics Research Laboratory, at South China University of Technology, Guangzhou, China. He is also a part-time Lecturer at Hyogo College of Medicine, Japan. His current research interests include computer vision and adversarial learning. Dr. Chan was a member of the governing board of the IEEE SMC Society (2014-2016) and the Chairman of the IEEE SMCS Hong Kong Chapter (2014-2015). He is the counselor of the IEEE Student Branch at South China University of Technology. He has served as an organizing committee chair of several international conferences and as an associate editor for international journals.