Keynote Speakers
Statistical Hypothesis Testing and Its Applications in Adversarial Data Detection
Dr. Feng Liu
  • The University of Melbourne, Australia
Abstract
Two-sample tests, one of the most important families of hypothesis tests, ask: given samples from each, are these two populations the same? For instance, one might wish to know whether a treatment group and a control group differ. With very low-dimensional data and/or strong parametric assumptions, methods such as t-tests or Kolmogorov-Smirnov tests are widespread. Recent work in statistics and machine learning has sought tests that cover situations not well handled by these classic methods, providing tools useful in adversarial machine learning, causal discovery, generative modeling, and more. In this talk, I will introduce recent advances in the two-sample testing field and present how to use advanced tests to defend against adversarial attacks, which demonstrates the significance of two-sample testing in the AI security area.
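As a concrete illustration of the classic methods mentioned above, the following is a minimal sketch, not taken from the talk, that compares a synthetic treatment group and control group with a t-test and a Kolmogorov-Smirnov test via SciPy; the data and parameter choices are illustrative assumptions.

```python
# Classical two-sample testing on synthetic treatment/control data.
# All sample sizes and distribution parameters here are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=0.0, scale=1.0, size=200)    # control group
treatment = rng.normal(loc=0.3, scale=1.0, size=200)  # slightly shifted treatment group

# Welch's t-test: compares means, assumes roughly normal data.
t_stat, t_p = stats.ttest_ind(control, treatment, equal_var=False)

# Kolmogorov-Smirnov test: nonparametric, compares the full distributions.
ks_stat, ks_p = stats.ks_2samp(control, treatment)

print(f"t-test:  statistic={t_stat:.3f}, p={t_p:.4f}")
print(f"KS test: statistic={ks_stat:.3f}, p={ks_p:.4f}")
# A small p-value rejects the null hypothesis that the two populations are the same.
```

Tests of this kind work well in low dimensions; the advanced tests discussed in the talk target settings, such as high-dimensional adversarial data, where these classic statistics lose power.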
Biography
Dr Feng Liu is a machine learning researcher with research interests in hypothesis testing and trustworthy machine learning. Currently, he is a Lecturer at The University of Melbourne, Australia, a Visiting Scientist at RIKEN-AIP, Japan, and a Visiting Research Fellow at AAII, UTS, Australia. He received his PhD degree in Computer Science from UTS in 2020. He has served as an SPC member for IJCAI and ECAI, and as a PC member for NeurIPS, ICML, ICLR, AISTATS, ACML, and KDD. He also serves as a reviewer for many academic journals, such as JMLR, IEEE-TPAMI, IEEE-TNNLS, IEEE-TFS, and AMM. He has received the Outstanding Paper Award at NeurIPS (2022), the Outstanding Reviewer Award at NeurIPS (2021), the Outstanding Reviewer Award at ICLR (2021), and the UTS Best Thesis Award (Dean's List). To date, he has published over 50 papers in high-quality journals and conferences, such as Nature Communications, IEEE-TPAMI, IEEE-TNNLS, IEEE-TFS, NeurIPS, ICML, ICLR, KDD, AAAI, and IJCAI.




The Rise of Neural Priors
Professor Simon Lucey
  • The University of Adelaide, Australia
Abstract
The performance of an AI is nearly always associated with the amount of data you have at your disposal. Self-supervised machine learning can help, mitigating tedious human supervision, but the need for massive training datasets in modern AI seems unquenchable. Sometimes it is not the amount of data but the mismatch of statistics between the train and test sets, commonly referred to as bias, that limits the utility of an AI. In this talk I will explore a new direction based on the concept of a "neural prior" that relies on no training dataset whatsoever. A neural prior speaks to the remarkable ability of neural networks to both memorise training examples and generalise to unseen test examples. Though never explicitly enforced, the chosen architecture of a neural network applies an implicit neural prior that regularises its predictions. It is this property we will leverage for problems that historically suffer from a paucity of training data or out-of-distribution bias. We will demonstrate the practical application of neural priors to augmented reality, autonomous driving, and noisy signal recovery, with many of these outputs already being taken up in industry.
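To make the idea concrete, the following is a minimal sketch, not taken from the talk, in the spirit of deep-image-prior-style signal recovery: an untrained network is fit directly to a single noisy signal, with no training dataset at all, and the architecture together with early stopping supplies the only regularisation. The network size, learning rate, and step count are illustrative assumptions.

```python
# An untrained network as an implicit prior for noisy signal recovery.
# No training dataset is used; the network is fit to one noisy observation.
import torch

torch.manual_seed(0)
n = 256
t = torch.linspace(0, 1, n).unsqueeze(1)         # fixed input coordinates
clean = torch.sin(2 * torch.pi * 3 * t)          # underlying clean signal
noisy = clean + 0.3 * torch.randn_like(clean)    # observed noisy signal

# A small MLP; its architecture is what supplies the implicit prior.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Fit to the noisy signal. Smooth structure is fit before the noise,
# so stopping early keeps the network from memorising the noise.
for step in range(500):
    opt.zero_grad()
    loss = torch.mean((net(t) - noisy) ** 2)
    loss.backward()
    opt.step()

denoised = net(t).detach()
print(f"noisy MSE vs clean:    {torch.mean((noisy - clean) ** 2):.4f}")
print(f"denoised MSE vs clean: {torch.mean((denoised - clean) ** 2):.4f}")
```

How well the recovery works depends on the stopping point: run long enough and the network will eventually memorise the noise as well, which is exactly the memorise-versus-generalise tension the abstract describes.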
Biography
Simon Lucey, Ph.D., is the Director of the Australian Institute for Machine Learning (AIML) and a professor in the School of Computer Science (SCS) at the University of Adelaide. Prior to this he was an associate research professor at Carnegie Mellon University's Robotics Institute (RI) in Pittsburgh, USA, where he spent over 10 years as an academic. He was also a Principal Research Scientist at the autonomous vehicle company Argo AI from 2017 to 2022. He has received various career awards, including an Australian Research Council Future Fellowship (2009-2013). Simon's research interests span computer vision, machine learning, and robotics. He enjoys drawing inspiration from AI researchers of the past in attempting to unlock the computational and mathematical models that underlie the processes of visual perception.