Anand Bhattad

Research Assistant Professor
Toyota Technological Institute at Chicago

Email  /  CV  /  Google Scholar  /  Twitter


I am a Research Assistant Professor at the Toyota Technological Institute at Chicago (TTIC), located on the University of Chicago campus. Earlier, I completed my PhD in the Computer Vision Group at the University of Illinois Urbana-Champaign (UIUC) with my advisor David Forsyth. During my PhD, I had the pleasure of closely collaborating with Derek Hoiem, Shenlong Wang, and Yuxiong Wang at UIUC.

News
  • Sept 2023: Two papers accepted at NeurIPS 2023: one on discovering hidden gems in StyleGAN and another on language-guided, 3D-aware image editing.
  • Sept 2023: Excited to start as a Research Assistant Professor at TTIC!
  • July 2023: One paper accepted at ICCV 2023 on equivariant dense prediction models.
  • Jun 2023: Organized the Scholars and Big Models workshop at CVPR 2023.
  • May 2023: Defended my PhD!
Recent and Upcoming Talks
  • Stanford University, Jun 2023
  • University of Tübingen, Autonomous Vision Group, May 2023
  • UC Berkeley: Vision Seminar, Apr 2023
  • NVIDIA Research, Apr 2023
  • MIT: Vision and Graphics Seminar, Apr 2023
  • CMU: VASC Seminar, Mar 2023
  • UW: Vision Seminar, Mar 2023
  • UMD: Vision Seminar, Mar 2023
  • UCSD: Pixel Cafe Seminar, Feb 2023
  • TTIC: Research Talk, Feb 2023


My research interests span computer vision, graphics, photography, and machine learning, with a focus on understanding the fundamentals of scene representations and generative models. My research aims to uncover the emergent scene properties of generative models and establish them as the backbone of modern computer vision systems. I am passionate about developing data-driven methods for understanding, modeling, and recreating the visual world.


Preprints / Technical Reports

Zhi-Hao Lin, Bohan Liu, Yi-Ting Chen, David Forsyth, Jia-Bin Huang, Anand Bhattad, Shenlong Wang
arXiv, 2023
[arXiv] [project page]

UrbanIR creates realistic 3D renderings of urban scenes from single videos, allowing for novel lighting conditions and controllable editing.

Vaibhav Vavilala, Seemandhar Jain, Rahul Vasanth, Anand Bhattad, David Forsyth
arXiv, 2023

Blocks2World decomposes 3D scenes into editable primitives and uses a trained model to render these into 2D images, providing high control for scene editing.

Anand Bhattad, David A. Forsyth
arXiv, 2023
[arXiv] [project page]

By imposing known physical facts about images, we can prompt StyleGAN to generate relighted or resurfaced images without using labeled data.

Anand Bhattad, Viraj Shah, Derek Hoiem, David A. Forsyth
arXiv, 2023
[arXiv] [project page]

A novel near-perfect GAN inversion method that preserves editing capabilities, even for out-of-domain images.

Anand Bhattad, Brian Chen, Shenlong Wang, David A. Forsyth

First self-supervised, image-based object relighting method trained without labeled paired data, CGI data, geometry, or environment maps.

David A. Forsyth, Anand Bhattad, Pranav Asthana, Yuanyi Zhong, Yuxiong Wang
arXiv, 2022
Technical Report

First scene relighting method that requires no labeled or paired image data.

Publications
Anand Bhattad, Daniel McKee, Derek Hoiem, David Forsyth
NeurIPS, 2023

StyleGAN has an easily accessible internal encoding of intrinsic images, as originally defined by Barrow and Tenenbaum in their influential 1978 paper.

Oscar Michel, Anand Bhattad, Eli VanderBilt, Ranjay Krishna, Ani Kembhavi, Tanmay Gupta
NeurIPS, 2023
[arXiv] [project page]

A synthetic dataset and a model that learns to rotate, translate, insert, and remove objects identified by language in a scene. It can transfer to real-world images.

Yuanyi Zhong, Anand Bhattad, Yuxiong Wang, David A. Forsyth
ICCV, 2023

SOTA normal and depth predictors are not equivariant to image cropping. We propose an equivariance regularization loss to improve equivariance in these models.

Anand Bhattad, David A. Forsyth
3DV, 2022
[project page]

Convincing cut-and-paste reshading with consistent image decomposition inferences.

Liwen Wu, Jae Yong Lee, Anand Bhattad, Yuxiong Wang, David A. Forsyth
CVPR, 2022 (Best Paper Finalist)
[project page] [Training Code] [Real-time Code]

Improving Real-Time NeRF with Deterministic Integration.

Anand Bhattad, Aysegul Dundar, Guilin Liu, Andrew Tao, Bryan Catanzaro
CVPR, 2021
[project page]

Consistent textured 3D inferences from a single 2D image.

Anand Bhattad*, Min Jin Chong*, Kaizhao Liang, Bo Li, David A. Forsyth
ICLR, 2020
CVPR-W on Adversarial ML in Real-World Computer Vision Systems, 2019

Generating realistic adversarial examples by image re-colorization and texture transfer.

Mao-Chuang Yeh*, Shuai Tang*, Anand Bhattad, Chuhang Zou, David A. Forsyth
WACV, 2020

A novel quantitative evaluation procedure for style transfer methods.

Anand Bhattad, Jason Rock, David A. Forsyth
CVPR Workshop on Vision with Biased or Scarce Data, 2018

A simple unsupervised method for detecting anomalous faces by carefully constructing features from "No Peeking" or inpainting autoencoders.


Teaching
Graduate Teaching Assistant, CS498 Applied Machine Learning, Fall 2018

Graduate Teaching Assistant, CS225 Data Structures, Spring 2017

Graduate Teaching Assistant, CS101 Intro Computer Science, Spring 2016 (Ranked as Outstanding TA) & Fall 2017

Template credit: Jon Barron.