My research interests are in computer vision, graphics, photography, and machine learning, with a focus on understanding the fundamentals of scene representations and generative models. My research aims to uncover emergent scene properties in generative models and establish them as the backbone for modern computer vision systems. I am passionate about developing data-driven methodologies for understanding, modeling, and recreating the visual world.
Zhi-Hao Lin, Bohan Liu, Yi-Ting Chen, David Forsyth, Jia-Bin Huang, Anand Bhattad, Shenlong Wang
arXiv, 2023 [arXiv] [project page] UrbanIR creates realistic 3D renderings of urban scenes from single videos, allowing for novel lighting conditions and controllable editing.
Vaibhav Vavilala, Seemandhar Jain, Rahul Vasanth, Anand Bhattad, David Forsyth
arXiv, 2023 [arXiv] Blocks2World decomposes 3D scenes into editable primitives and uses a trained model to render these primitives into 2D images, providing a high degree of control for scene editing.
StyLitGAN: Prompting StyleGAN to Produce New Illumination Conditions
Anand Bhattad,
David A. Forsyth
arXiv, 2023 [arXiv] [project page] By imposing known physical facts about images, we can prompt StyleGAN to generate relighted or resurfaced images without using labeled data.
Make It So: Steering StyleGAN for Any Image Inversion and Editing
Anand Bhattad,
Viraj Shah,
Derek Hoiem,
David A. Forsyth
arXiv, 2023 [arXiv] [project page] A novel near-perfect GAN inversion method that preserves editing capabilities, even for out-of-domain images.
Illuminator: Learning to Correct Lighting without Ground Truth Data
Anand Bhattad,
Brian Chen,
Shenlong Wang,
David A. Forsyth
First self-supervised, image-based object relighting method trained without labeled paired data, CGI data, geometry, or environment maps.
SIRfyN: Single Image Relighting from your Neighbors
David A. Forsyth,
Anand Bhattad,
Pranav Asthana,
Yuanyi Zhong,
Yuxiong Wang
arXiv, 2022 (Technical Report) First scene relighting method that requires no labeled or paired image data.
Anand Bhattad, Daniel McKee, Derek Hoiem, David Forsyth
NeurIPS, 2023 [arXiv] StyleGAN has an easily accessible internal encoding of intrinsic images, as originally defined by Barrow and Tenenbaum in their influential 1978 paper.
Oscar Michel, Anand Bhattad, Eli VanderBilt, Ranjay Krishna, Ani Kembhavi, Tanmay Gupta
NeurIPS, 2023 [arXiv] [project page] A synthetic dataset and a model that learns to rotate, translate, insert, and remove objects identified by language in a scene. It can transfer to real-world images.
Yuanyi Zhong,
Anand Bhattad,
Yuxiong Wang,
David A. Forsyth
ICCV, 2023 State-of-the-art normal and depth predictors are not equivariant to image cropping. We propose an equivariance regularization loss to improve equivariance in these models.
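As a rough illustration only (not the paper's exact formulation), an equivariance-to-cropping regularizer can be written as a consistency penalty between predicting on a crop and cropping the prediction. The function name, box convention, and choice of an L1 penalty below are assumptions for the sketch.

```python
# Hypothetical sketch of an equivariance-to-cropping loss for a dense predictor f
# (e.g., depth or normals). Idea: predicting on a crop should match cropping the
# full-image prediction. Not the paper's exact loss; names are illustrative.
import torch.nn.functional as F

def crop_equivariance_loss(f, image, box):
    """f: network mapping (B,3,H,W) -> (B,C,H,W); box = (top, left, h, w)."""
    top, left, h, w = box
    pred_full = f(image)                                    # predict on the full image
    pred_of_crop = f(image[:, :, top:top+h, left:left+w])   # predict on the cropped image
    crop_of_pred = pred_full[:, :, top:top+h, left:left+w]  # crop the full-image prediction
    # If the two predictions come out at different resolutions, resample before comparing.
    if crop_of_pred.shape[-2:] != pred_of_crop.shape[-2:]:
        crop_of_pred = F.interpolate(crop_of_pred, size=pred_of_crop.shape[-2:],
                                     mode='bilinear', align_corners=False)
    return F.l1_loss(pred_of_crop, crop_of_pred)
```

Minimizing a term like this during training pushes the predictor toward giving the same answer for a region regardless of how the image is framed.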
Cut-and-Paste Object Insertion by Enabling Deep Image Prior for Reshading
Anand Bhattad,
David A. Forsyth
3DV, 2022 [project page] Convincing cut-and-paste reshading with consistent image decomposition inferences.
DIVeR: Real-time and Accurate Neural Radiance Fields with Deterministic Integration for Volume Rendering
Liwen Wu,
Jae Yong Lee,
Anand Bhattad,
Yuxiong Wang,
David A. Forsyth
CVPR, 2022 (Best Paper Finalist) [project page] [training code] [real-time code] Improving real-time NeRF with deterministic integration.
View Generalization for Single Image Textured 3D Models
Anand Bhattad,
Aysegul Dundar,
Guilin Liu,
Andrew Tao,
Bryan Catanzaro
CVPR, 2021 [project page] Consistent textured 3D inferences from a single 2D image.
Unrestricted Adversarial Examples via Semantic Manipulation
Anand Bhattad*,
Min Jin Chong*,
Kaizhao Liang,
Bo Li,
David A. Forsyth
ICLR, 2020; CVPR Workshop on Adversarial ML in Real-World Computer Vision Systems, 2019. Generating realistic adversarial examples by image re-colorization and texture transfer.
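For intuition only, a color-based (rather than per-pixel-noise) attack can be sketched as optimizing a single global color shift against a classifier. This is a simplified stand-in, not the paper's re-colorization or texture-transfer procedure; the function name and hyperparameters are assumptions.

```python
# Hypothetical sketch: optimize one additive offset per color channel so that a
# re-colored (but otherwise unchanged) image lowers the classifier's true-class score.
import torch
import torch.nn.functional as F

def color_shift_attack(model, image, label, steps=100, lr=1e-2):
    """image: (1,3,H,W) in [0,1]; label: (1,) true class index."""
    shift = torch.zeros(1, 3, 1, 1, requires_grad=True)   # a single global color shift
    opt = torch.optim.Adam([shift], lr=lr)
    for _ in range(steps):
        adv = (image + shift).clamp(0, 1)                  # re-colored image, kept in range
        loss = -F.cross_entropy(model(adv), label)         # maximize loss on the true class
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (image + shift.detach()).clamp(0, 1)
```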
Improved Style Transfer with Calibrated Metrics
Mao Chuang Yeh*,
Shuai Tang*,
Anand Bhattad,
Chuhang Zou,
David A. Forsyth
WACV, 2020 A novel quantitative evaluation procedure for style transfer methods.
Detecting Anomalous Faces with "No Peeking" Autoencoders
Anand Bhattad, Jason Rock, David A. Forsyth
CVPR Workshop on Vision with Biased or Scarce Data, 2018 A simple unsupervised method for detecting anomalous faces by carefully constructing features from "No Peeking" or inpainting autoencoders.
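A minimal sketch of the general idea, assuming a generic inpainting network (`inpaint_net` is a placeholder, and the patch-wise scoring below is not the paper's exact feature construction):

```python
# Hypothetical sketch: score anomalies by how badly an inpainting ("no peeking")
# autoencoder reconstructs each hidden region of a face image from its surroundings.
import torch

def anomaly_scores(inpaint_net, image, patch=32):
    """image: (1,3,H,W); returns one reconstruction-error score per non-overlapping patch."""
    _, _, H, W = image.shape
    scores = []
    for top in range(0, H - patch + 1, patch):
        for left in range(0, W - patch + 1, patch):
            masked = image.clone()
            masked[:, :, top:top+patch, left:left+patch] = 0.0  # hide the patch ("no peeking")
            filled = inpaint_net(masked)                         # predict it from context only
            err = (filled - image)[:, :, top:top+patch, left:left+patch].abs().mean()
            scores.append(err)
    return torch.stack(scores)  # large errors flag regions the model cannot explain
```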
Graduate Teaching Assistant, CS498 Applied Machine Learning, Fall 2018
Graduate Teaching Assistant, CS225 Data Structures, Spring 2017
Graduate Teaching Assistant, CS101 Intro Computer Science, Spring 2016 (Ranked as Outstanding TA) & Fall 2017