Aishan Liu (刘艾杉)

Assistant Professor

State Key Laboratory of Software Development Environment

SCSE, Beihang University, Beijing, China

He received his Ph.D. degree from Beihang University in 2021, supervised by Prof. Wei Li and Prof. Xianglong Liu. Before that, he obtained his M.Sc. and B.Sc. degrees from Beihang University in 2016 and 2013, respectively, where he was supervised by Prof. Wei Li.

During his Ph.D. study, he was a visiting student at UC Berkeley in 2021, supervised by Prof. Dawn Song, and a visiting student at the University of Sydney in 2020, supervised by Prof. Dacheng Tao. In 2019, he interned at Tencent AI Lab, supported by the Tencent Rhino-Bird Elite Program and supervised by Prof. Liwei Wang. He serves as a reviewer for top conferences and journals such as CVPR, ICML, ICCV, ECCV, NeurIPS, ICLR, AAAI, and TIP.

Email: liuaishan AT buaa DOT edu DOT cn

Google Scholar  /  Github  /  dblp

Research

My research interests lie in the following sub-fields of Computer Vision and Deep Learning:

  • Robust Deep Neural Networks: Adversarial Example, Model Robustness, AI Safety


News

[Challenge@CVPR2022] I am co-organizing the Robust Models towards Open-world Classification Challenge at CVPR 2022. Please participate and win the $15K prize!

[Call for Papers] I am serving as a Guest Editor for the special issue on Practical Deep Learning in the Wild at Pattern Recognition (JCR Q1). Please submit your papers!

[Workshop@CVPR2022] I am co-organizing The Art of Robustness: Devil and Angel in Adversarial Machine Learning Workshop & Challenge at CVPR 2022. Please submit your papers and win the prizes!

[2022.03] Two papers accepted by CVPR 2022.

[2022.01] One paper accepted by ICLR 2022.

[2021.10] Two papers accepted by IEEE TIP.

[2021.09] We released RobustART, the first comprehensive Robustness investigation benchmark on the large-scale ImageNet dataset regarding ARchitectural design (1000+) and Training techniques (10+).

[2021.08] One paper accepted by IEEE TNNLS.

[2021.08] One paper accepted by ACM Multimedia 2021.

[2021.05] One paper accepted by IEEE Transactions on Image Processing (TIP).

Publications

    Conference Papers

CVPR 2022

Defensive Patches for Robust Recognition in the Physical World
Jiakai Wang, Zixin Yin, Pengfei Hu, Renshuai Tao, Haotong Qin, Xianglong Liu, Dacheng Tao, Aishan Liu.
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022
pdf / Project page

We generate defensive patches that help build robust recognition systems against noise in practice, simply by sticking them on the target object.

CVPR 2022

Exploring Endogenous Shift for Cross-domain Detection: A Large-scale Benchmark and Perturbation Suppression Network
Renshuai Tao, Hainan Li, Tianbo Wang, Yanlu Wei, Yifu Ding, Bowei Jin, Hongping Zhi, Xianglong Liu, Aishan Liu.
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022
pdf

This paper proposes the Endogenous Domain Shift, which measures the noise caused by X-ray machines of different types with different hardware parameters; this shift can severely harm cross-domain detection robustness.

ICLR2022

BiBERT: Accurate Fully Binarized BERT
Haotong Qin, Yifu Ding, Mingyuan Zhang, Qinghua Yan, Aishan Liu, Qingqing Dang, Ziwei Liu, Xianglong Liu
International Conference on Learning Representations (ICLR), 2022
pdf / Project page

In this paper, we propose BiBERT, an accurate fully binarized BERT to eliminate the performance bottlenecks for large pre-trained BERT binarization.

ACM MM2021

ARShoe: Real-Time Augmented Reality Shoe Try-on System on Smartphones
Shan An, Guangfu Che, Jinghao Guo, Haogang Zhu, Junjie Ye, Fangru Zhou, Zhaoqi Zhu, Dong Wei, Aishan Liu, Wei Zhang.
ACM Multimedia (ACM MM), 2021
pdf

We propose ARShoe, a real-time augmented reality virtual shoe try-on system for smartphones. The system has been deployed in the JD App.

CVPR2021

Dual Attention Suppression Attack: Generate Adversarial Camouflage in Physical World
Jiakai Wang, Aishan Liu, Zixin Yin, Shunchang Liu, Shiyu Tang, Xianglong Liu.
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021 (Oral)
pdf / News: (机器之心) / Project page

We propose the Dual Attention Suppression (DAS) attack to generate visually-natural physical adversarial camouflages with strong transferability by suppressing both model and human attention.

ECCV2020

Spatiotemporal Attacks for Embodied Agents
Aishan Liu, Tairan Huang, Xianglong Liu, Yitao Xu, Yuqing Ma, Xinyun Chen, Stephen Maybank, Dacheng Tao.
European Conference on Computer Vision (ECCV), 2020
pdf / News: (量子位) / Project page

We take the first step to study adversarial attacks for embodied agents.

ECCV2020

Bias-based Universal Adversarial Patch Attack for Automatic Check-out
Aishan Liu, Jiakai Wang, Xianglong Liu, Bowen Cao, Chongzhi Zhang, Hang Yu.
European Conference on Computer Vision (ECCV), 2020
pdf / News: (新智元) / Project page

We propose a bias-based framework to generate class-agnostic universal adversarial patches with strong generalization ability, which exploits both the perceptual and semantic bias of models.

IJCAI2020

Few-shot Visual Learning with Contextual Memory and Fine-grained Calibration
Yuqing Ma, Shihao Bai, Wei Liu, Qingyu Zhang, Aishan Liu, Weimin Chen, Xianglong Liu
International Joint Conference on Artificial Intelligence (IJCAI), 2020
pdf

To improve the generalization ability of few-shot learning, we propose an inverted pyramid network that imitates the human coarse-to-fine cognition paradigm.

IJCAI2020

Transductive Relation-Propagation Network for Few-shot Learning
Yuqing Ma, Shihao Bai, Shan An, Wei Liu, Aishan Liu, Xiantong Zhen, Xianglong Liu.
International Joint Conference on Artificial Intelligence (IJCAI), 2020
pdf / Project page

For the few-shot learning task, we propose a transductive relation-propagation graph neural network to explicitly model and propagate relations across support-query pairs.

IJCAI2019

Coarse-to-Fine Image Inpainting via Region-wise Convolutions and Non-Local Correlation
Yuqing Ma, Xianglong Liu, Shihao Bai, Lei Wang, Dailan He, Aishan Liu
International Joint Conference on Artificial Intelligence (IJCAI), 2019
pdf

To address the image inpainting problem, we propose a coarse-to-fine framework that restores semantically reasonable and visually realistic images. It consists of region-wise convolutions, which locally handle the different types of regions, and non-local operations, which globally model the correlations among regions.

AAAI2019

Perceptual Sensitive GAN for Generating Adversarial Patches
Aishan Liu, Xianglong Liu, Jiaxin Fan, Yuqing Ma, Anlan Zhang, Huiyuan Xie and Dacheng Tao.
AAAI Conference on Artificial Intelligence (AAAI), 2019 (Spotlight)
pdf / News: (新智元, 腾讯, 网易)

We propose the PS-GAN framework to generate adversarial patches that attack autonomous driving systems in the physical world.

    Journal Papers

TIP2021

Universal Adversarial Patch Attack for Automatic Checkout using Perceptual and Attentional Bias
Jiakai Wang*, Aishan Liu*, Xiao Bai, Xianglong Liu
(* indicates equal contributions)
IEEE Transactions on Image Processing (TIP), 2021 (IF=10.86)
pdf / Project page

We propose a bias-based framework to generate universal adversarial patches with strong generalization ability, which exploits the perceptual bias and attentional bias to improve the attacking ability.

TIP2021

Progressive Diversified Augmentation for General Robustness of DNNs: A Unified Approach
Hang Yu, Aishan Liu, Gengchao Li, Jichen Yang, Chongzhi Zhang
IEEE Transactions on Image Processing (TIP), 2021 (IF=10.86)
pdf / Project page

We propose a simple yet effective method, named Progressive Diversified Augmentation (PDA), which improves the robustness of DNNs towards both adversarial attacks and common corruptions by progressively injecting diverse adversarial noises during training.

NeuCom2021

Revisiting Audio Visual Scene-Aware Dialog
Aishan Liu, Huiyuan Xie, Xianglong Liu, Zixin Yin, Shunchang Liu
Neurocomputing, 2021 (IF=5.72)
pdf

This paper empirically revisits the AVSD task and argues that this task exhibits a variety of biases in terms of models, dataset, and evaluation metrics.

TNNLS2021

On the Guaranteed Almost Equivalence Between Imitation Learning From Observation and Demonstration
Zhihao Cheng, Liu Liu, Aishan Liu, Hao Sun, Meng Fang, Dacheng Tao
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2021 (IF=10.45)
pdf

In contrast to previous studies, this paper is the first to prove that LfO is almost equivalent to LfD in deterministic robot environments, and more generally even in robot environments with bounded randomness.

TIP2021

Training Robust Deep Neural Networks via Adversarial Noise Propagation
Aishan Liu, Xianglong Liu, Hang Yu, Chongzhi Zhang, Qiang Liu, Dacheng Tao
IEEE Transactions on Image Processing (TIP), 2021 (IF=10.86)
pdf / Project page

We propose a simple yet powerful training algorithm to improve model robustness, named Adversarial Noise Propagation (ANP), which injects noise into the hidden layers in a layer-wise manner. ANP can be implemented efficiently by exploiting the nature of the backward-forward training style.

TIP2021

Interpreting and Improving Adversarial Robustness of Deep Neural Networks with Neuron Sensitivity
Chongzhi Zhang*, Aishan Liu*, Xianglong Liu, Yitao Xu, Hang Yu, Yuqing Ma, Tianlin Li (* indicates equal contributions)
IEEE Transactions on Image Processing (TIP), 2021 (IF=10.86)
pdf / Project page

We are the first to explain the adversarial robustness of deep models from the perspective of neuron sensitivity, measured by the intensity of neuron behavior variation between benign and adversarial examples.

InfSci2020

Understanding Adversarial Robustness via Critical Attacking Route
Tianlin Li*, Aishan Liu*, Xianglong Liu, Yitao Xu, Chongzhi Zhang, Xiaofei Xie (* indicates equal contributions)
Information Sciences, 2020 (IF=5.91)
pdf / Project page

We explain the adversarial robustness of deep models from a new perspective, the critical attacking route, which is computed via a gradient-based influence propagation strategy.

AIView

AI Safety and Evaluation (人工智能安全与评测)
Aishan Liu, Jiakai Wang, Xianglong Liu
AI-View (人工智能), 2020
pdf

AIView

Quality Elements and Testing Methods for AI Machine Learning Models and Systems (人工智能机器学习模型及系统的质量要素和测试方法)
Jiakai Wang, Aishan Liu, Xianglong Liu
Information Technology & Standardization (信息技术与标准化), 2020
pdf

Highlight Project

重明 (AISafety)

pdf / (News: TechWeb) / Project page

重明 (AISafety) is an open-source platform for evaluating model robustness and safety against noises (e.g., adversarial examples and corruptions). The name comes from 重明鸟, a powerful bird in Chinese mythology that fights off beasts and wards off disasters. We hope our platform can improve the robustness of deep learning systems and help them avoid safety-related problems. 重明 has been awarded the First OpenI Excellent Open Source Project (首届OpenI启智社区优秀开源项目).


RobustART

pdf / (News: 机器之心) / Project page

RobustART is the first comprehensive Robustness investigation benchmark on the large-scale ImageNet dataset regarding ARchitectural design (49 human-designed off-the-shelf architectures and 1200+ networks obtained via neural architecture search) and Training techniques (10+ general techniques, e.g., extra training data) towards diverse noises (adversarial, natural, and system noises). Our benchmark (including an open-source toolkit, a pre-trained model zoo, datasets, and analyses): (1) presents an open-source platform for conducting comprehensive evaluation of diverse robustness types; (2) provides a variety of pre-trained models with different training techniques to facilitate robustness evaluation; (3) proposes a new view, backed by our analysis, for better understanding the mechanisms behind designing robust DNN architectures. We will continuously contribute to building this ecosystem for the community.

Talks & Academic Services

[2021.08] Co-organizer of the 1st International Workshop on Practical Deep Learning in the Wild at AAAI 2022.

[2021.08] Co-organizer of the Forum on Safety and Privacy in Pattern Recognition at PRCV 2021.

[2021.08] Co-organizer of the Forum on Safety and Privacy for Multimedia Systems at ChinaMM 2021.

[2021.03] Co-organizer of the 1st International Workshop on Adversarial Learning for Multimedia at ACM MM 2021.

[2020.12] Invited talk on Adversarial Machine Learning at Zhidx (智东西公开课).

[2020.11] Invited talk on Adversarial Attacks for Embodied Agents at CSAI, Tsinghua University, hosted by Prof. Huaping Liu.

[2020.08] Invited talk on AI Safety in the Automatic Check-out Scenario at Zhidx (智东西公开课).

[2020.07] Invited talk on Adversarial Machine Learning in the Physical World at JD.

Main Awards

[2020.12] First OpenI Excellent Open Source Project (Nationwide 7)

[2019.05] Tencent Rhino-Bird Elite Training Program (Nationwide 56)

[2016.01] Outstanding Graduate Award, Beihang University

[2013.06] Outstanding Graduate Award, Beijing

[2012.10] CCF National Outstanding Undergraduate, China Computer Federation (Nationwide 100)

[2012.06] Google Excellence Scholarship, Google (Nationwide 100)


Last update: 2022.03