
Yangguang Li (李阳光)

I have been working at SenseTime since 2019. I obtained my master's degree from Beijing University of Posts and Telecommunications and my bachelor's degree from Tianjin University.

Currently, I am focusing on the exploration and exploitation of camera perception models for autonomous driving (monocular and BEV). Before that, I focused on general vision (multi-modal / big models) and detection (face/human/traffic/structure/keypoint/video/AutoML) research.

Email  /  LinkedIn  /  Zhihu  /  GitHub

News
  • September 2022: One paper gets accepted to NeurIPS 2022.
  • August 2022: One paper gets accepted to COLING 2022.
  • August 2022: SupMAE code is available.
  • July 2022: BCDNet code is available.
  • July 2022: One paper gets accepted to ECCV 2022.
  • July 2022: OpenGVLab is released.
  • June 2022: 1st place in Embodied AI Challenge @ CVPR 2022.
  • June 2022: 2nd place in UG2+ Challenge @ CVPR 2022.
  • June 2022: One paper gets accepted to ICML 2022 workshops.
  • April 2022: One paper gets accepted to CVPR 2022 workshops.
  • April 2022: One paper gets accepted to IJCAI 2022 as long oral.
  • February 2022: SNCSE code is available.
  • January 2022: DeCLIP & DeCLIP benchmark code are available.
  • January 2022: One paper gets accepted to ICLR 2022.
  • November 2021: INTERN general vision technical report is released.
Publications
Towards Accurate Binary Neural Networks via Modeling Contextual Dependencies
Xingrun Xing, Yangguang Li, Wei Li, Wenrui Ding, Yalong Jiang, Yufeng Wang, Jing Shao, Chunlei Liu, Xianglong Liu
ECCV, 2022
arxiv, bibtex, code

We propose BCDNet, which introduces newly designed binary neural modules that enable BNNs to learn effective contextual dependencies.

Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm
Yangguang Li*, Feng Liang*, Lichen Zhao*, Yufeng Cui, Wanli Ouyang, Jing Shao, Fengwei Yu, Junjie Yan
ICLR, 2022
arxiv, bibtex, code, video presentation

We propose Data efficient CLIP (DeCLIP), a method to efficiently train CLIP by utilizing the widespread supervision available in image-text data.

Democratizing Contrastive Language-Image Pre-training: A CLIP Benchmark of Data, Model, and Supervision
Yufeng Cui*, Lichen Zhao*, Feng Liang*, Yangguang Li, Jing Shao
ICMLW, 2022
arxiv, bibtex, code

We propose a CLIP benchmark of data, model, and supervision.

RePre: Improving Self-Supervised Vision Transformer with Reconstructive Pre-training
Luya Wang, Feng Liang, Yangguang Li, Honggang Zhang, Wanli Ouyang, Jing Shao
IJCAI, 2022, Long oral
arxiv, bibtex

We propose RePre, which extends contrastive frameworks by adding a branch that reconstructs raw image pixels in parallel with the existing contrastive objective.

IMCI: Integrate Multi-view Contextual Information for Fact Extraction and Verification
Hao Wang, Yangguang Li, Zhen Huang, Yong Dou
COLING, 2022
arxiv, bibtex, code

We propose IMCI to integrate multi-view contextual information for fact extraction and verification.

Technical Reports
Task-Balanced Distillation for Object Detection
Ruining Tang, Zhenyu Liu, Yangguang Li, Yiguo Song, Hui Liu, Qide Wang, Jing Shao, Guifang Duan, Jianrong Tan
arXiv, 2022
arxiv, bibtex

We alleviate the inconsistency between the spatial distributions of classification scores and localization quality (IoU) in object detection by designing a customized teacher-student knowledge distillation workflow.

INTERN: A New Learning Paradigm Towards General Vision
Jing Shao*, Siyu Chen*, Yangguang Li*, Kun Wang*, Zhenfei Yin*, Yinan He*, Jianing Teng*, Qinghong Sun*, Mengya Gao*, Jihao Liu*, Gengshi Huang*, et al.
arXiv, 2021
arxiv, bibtex, code, project

We propose a new learning paradigm named INTERN, which introduces a continuous learning scheme: a highly extensible upstream pretraining pipeline that leverages large-scale data and various supervisory signals, together with flexible downstream adaptation to diversified tasks.

Selected Honors
  • SenseTime Team Award, SenseTime’s highest award, 2021.
  • SenseTime Outstanding Intern, 2020.
  • SenseTime Outstanding Intern, 2019.
Service

  • Reviewer for ICML 2022, ECCV 2022, and NeurIPS 2022
  • Organizer of the ECCV 2022 Computer Vision in the Wild Challenge

  • Thanks to Feng (Jeff) Liang