|
Yangguang Li
I have been a Research Director at VAST since 2023.
Before that, I was a Research Leader at SenseTime.
Currently, I am focusing on 3D generation research.
I am a PhD student at The Chinese University of Hong Kong.
Email / Google Scholar
|
News:
- 2025.03: We released TripoSF, which was the best image-to-3D generation model at the time.
- 2025.02: We released TripoSG, which was the best image-to-3D generation model at the time.
- 2024.12: Our paper TEXGen won the Best Paper Honorable Mention Award at SIGGRAPH Asia 2024.
- 2024.03: We released TripoSR with Stability AI, which was the fastest image-to-3D reconstruction model at the time.
- 2023.12: We launched the 3D generation product TripoAI.
|
Selected 3D GenAI Papers:
- (TripoSF) SparseFlex: High-Resolution and Arbitrary-Topology 3D Shape Modeling
Xianglong He, Zi-Xin Zou, Chia-Hao Chen, Yuan-Chen Guo, Ding Liang, Chun Yuan, Wanli Ouyang, Yan-Pei Cao, Yangguang Li
[Paper][Code][arXiv 2025.03]
- TripoSG: High-Fidelity 3D Shape Synthesis using Large-Scale Rectified Flow Models
Yangguang Li, Zi-Xin Zou, Zexiang Liu, Dehu Wang, Yuan Liang, Zhipeng Yu, Xingchao Liu, Yuan-Chen Guo, Ding Liang, Wanli Ouyang, Yan-Pei Cao
[Paper][Code][arXiv 2025.02]
- TripoSR: Fast 3D Object Reconstruction from a Single Image
Dmitry Tochilkin, David Pankratz, Zexiang Liu, Zixuan Huang, Adam Letts, Yangguang Li, Ding Liang, Christian Laforte, Varun Jampani, Yan-Pei Cao
[Paper][Code][arXiv 2024.03]
- MIDI: Multi-Instance Diffusion for Single Image to 3D Scene Generation
Zehuan Huang, Yuan-Chen Guo, Xingqiao An, Yunhan Yang, Yangguang Li, Zi-Xin Zou, Ding Liang, Xihui Liu, Yan-Pei Cao, Lu Sheng
[Paper][Code][CVPR 2025]
- PSHuman: Photorealistic Single-image 3D Human Reconstruction using Cross-Scale Multiview Diffusion and Explicit Remeshing
Peng Li, Wangguandong Zheng, Yuan Liu, Tao Yu, Yangguang Li, Xingqun Qi, Xiaowei Chi, Siyu Xia, Yan-Pei Cao, Wei Xue, Wenhan Luo, Yike Guo
[Paper][Code][CVPR 2025]
- TEXGen: a Generative Diffusion Model for Mesh Textures
Xin Yu, Ze Yuan, Yuan-Chen Guo, Ying-Tian Liu, Jianhui Liu, Yangguang Li, Yan-Pei Cao, Ding Liang, Xiaojuan Qi
[Paper][Code][TOG 2024]
- Tripo Doodle: The Next-Gen AI 3D Creative Tool
Sienna Hwang, Muqing Jia, Yan-Pei Cao, Yuan-Chen Guo, Yangguang Li, Ding Liang
[Paper][Code (coming soon)][SIGGRAPH Asia 2024 Real-Time Live!]
- Lumina-Next: Making Lumina-T2X Stronger and Faster with Next-DiT
Le Zhuo, Ruoyi Du, Han Xiao, Yangguang Li, Dongyang Liu, Rongjie Huang, Wenze Liu, Lirui Zhao, Fu-Yun Wang, Zhanyu Ma, Xu Luo, Zehan Wang, Kaipeng Zhang, Xiangyang Zhu, Si Liu, Xiangyu Yue, Dingning Liu, Wanli Ouyang, Ziwei Liu, Yu Qiao, Hongsheng Li, Peng Gao
[Paper][Code][NeurIPS 2024]
- Fast-BEV: A Fast and Strong Bird’s-Eye View Perception Baseline
Yangguang Li, Bin Huang, Zeren Chen, Yufeng Cui, Feng Liang, Mingzhu Shen, Fenggang Liu, Enze Xie, Lu Sheng, Wanli Ouyang, Jing Shao
[Paper][Code][TPAMI 2024]
- GVGEN: Text-to-3D Generation with Volumetric Representation
Xianglong He, Junyi Chen, Sida Peng, Di Huang, Yangguang Li, Xiaoshui Huang, Chun Yuan, Wanli Ouyang, Tong He
[Paper][Code][ECCV 2024]
- UniDream: Unifying Diffusion Priors for Relightable Text-to-3D Generation
Zexiang Liu, Yangguang Li, Youtian Lin, Xin Yu, Sida Peng, Yan-Pei Cao, Xiaojuan Qi, Xiaoshui Huang, Ding Liang, Wanli Ouyang
[Paper][Page][ECCV 2024]
- Triplane Meets Gaussian Splatting: Fast and Generalizable Single-View 3D Reconstruction with Transformers
Zi-Xin Zou, Zhipeng Yu, Yuan-Chen Guo, Yangguang Li, Ding Liang, Yan-Pei Cao, Song-Hai Zhang
[Paper][Code][CVPR 2024]
- EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion
Zehuan Huang, Hao Wen, Junting Dong, Yaohui Wang, Yangguang Li, Xinyuan Chen, Yan-Pei Cao, Ding Liang, Yu Qiao, Bo Dai, Lu Sheng
[Paper][Code][CVPR 2024]
- Text-to-3D with Classifier Score Distillation
Xin Yu, Yuan-Chen Guo, Yangguang Li, Ding Liang, Song-Hai Zhang, Xiaojuan Qi
[Paper][Code][ICLR 2024]
- BEVBert: Multimodal Map Pre-training for Language-guided Navigation
Dong An, Yuankai Qi, Yangguang Li, Yan Huang, Liang Wang, Tieniu Tan, Jing Shao
[Paper][Code][ICCV 2023]
- A Mixture of Surprises for Unsupervised Reinforcement Learning
Andrew Zhao, Matthieu Gaetan Lin, Yangguang Li, Yong-Jin Liu, Gao Huang
[Paper][Code][NeurIPS 2022]
- Towards Accurate Binary Neural Networks via Modeling Contextual Dependencies
Xingrun Xing, Yangguang Li, Wei Li, Wenrui Ding, Yalong Jiang, Yufeng Wang, Jing Shao, Chunlei Liu, Xianglong Liu
[Paper][Code][ECCV 2022]
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm
Yangguang Li, Feng Liang, Lichen Zhao, Yufeng Cui, Wanli Ouyang, Jing Shao, Fengwei Yu, Junjie Yan
[Paper][Code][ICLR 2022]
|
Academic Service:
- Reviewer for CVPR, ICCV, ECCV, NeurIPS, ICML, ICLR, etc.
- ECCV 2022: Workshop Organizer @ Computer Vision in the Wild.
|
Selected Awards and Honors:
- 2024.12: Best Paper Honorable Mention Award @ SIGGRAPH Asia 2024
- 2024.01: Best Poster Award @ AAAI 2024 Edge Intelligence Workshop
- 2023.01: SenseTime Team Award (SenseTime's highest award) @ 2022 Autonomous Driving Mass Production projects
- 2022.06: 1st place in Embodied AI Workshop @ CVPR 2022
- 2022.06: 2nd place in UG2+ Challenge @ CVPR 2022
- 2022.01: SenseTime Team Award (SenseTime's highest award) @ 2021 General Vision Big Models Technology System
- 2020.12: Outstanding Intern @ SenseTime 2020
- 2019.12: Outstanding Intern @ SenseTime 2019
- 2019.12: 1st place in Celebrity Video Identification Challenge @ ACM MM 2019
|