Kangle Deng

I am a PhD student at the Robotics Institute of Carnegie Mellon University, where I'm fortunate to be co-advised by Deva Ramanan and Jun-Yan Zhu.

Prior to joining CMU, I obtained my B.S. degree from Peking University in 2020.

Email  /  GitHub  /  Google Scholar

Research

So far, my research has mainly involved computer-aided creation. My current work is supported by the Microsoft Research PhD Fellowship.

FlashTex: Fast Relightable Mesh Texturing with LightControlNet
Kangle Deng, Timothy Omernick, Alexander Weiss, Deva Ramanan, Jun-Yan Zhu, Tinghui Zhou, Maneesh Agrawala

arXiv, 2024
project page

FlashTex textures an input 3D mesh given a user-provided text prompt. Notably, our generated texture can be relit properly in different lighting environments.

Total-Recon: Deformable Scene Reconstruction for Embodied View Synthesis
Chonghyuk (Andrew) Song, Gengshan Yang, Kangle Deng, Jun-Yan Zhu, Deva Ramanan

ICCV, 2023
project page / github

Given an RGBD video of deformable objects, Total-Recon builds 3D models of the objects and the background, enabling embodied view synthesis.

3D-aware Conditional Image Synthesis
Kangle Deng, Gengshan Yang, Deva Ramanan, Jun-Yan Zhu

CVPR, 2023
project page / github

We propose pix2pix3D, a 3D-aware conditional generative model for controllable photorealistic image synthesis. Given a 2D label map, such as a segmentation or edge map, our model learns to synthesize a corresponding image from different viewpoints.

Depth-supervised NeRF: Fewer Views and Faster Training for Free
Kangle Deng, Andrew Liu, Jun-Yan Zhu, Deva Ramanan

CVPR, 2022
project page / github

We propose DS-NeRF (Depth-supervised Neural Radiance Fields), a model for learning neural radiance fields that takes advantage of depth supervision from 3D point clouds.

Unsupervised Any-to-Many Audiovisual Synthesis via Exemplar Autoencoders
Kangle Deng, Aayush Bansal, Deva Ramanan

ICLR, 2021
project page

We define and address a new problem of unsupervised audiovisual synthesis: given the audio of an arbitrary individual, output a talking-head video with audio in the style of a target speaker.

IRC-GAN: Introspective Recurrent Convolutional GAN for Text-to-video Generation
Kangle Deng*, Tianyi Fei*, Xin Huang, Yuxin Peng (* equal contribution)

IJCAI, 2019
bibtex

We apply mutual information to text-to-video generation.

Selected Honors and Awards
  • Microsoft Research PhD Fellowship, 2022
  • Outstanding Graduates in Beijing, 2020
  • Peking University Weiming Scholar, 2020
  • China National Scholarship (top 1%, awarded three times), 2017, 2018, and 2019
  • SenseTime Scholarship, 2019
Teaching

Teaching Assistant:

  • Spring 2023, 16-720A: Computer Vision, Carnegie Mellon University
  • Fall 2022, 16-822: Geometry-based Methods in Vision, Carnegie Mellon University
  • Fall 2018, Introduction to Computer Systems, Peking University

This webpage is "stolen" from Jon Barron. Thanks to him!