Publications

GARField: Group Anything with Radiance Fields
Chung Min Kim*, Mingxuan Wu*, Justin Kerr*, Ken Goldberg, Matthew Tancik, Angjoo Kanazawa

CVPR (2024)

arXiv / Project Website

Hierarchical grouping in 3D by training a scale-conditioned affinity field from multi-level masks.

Nerfstudio: A Modular Framework for Neural Radiance Field Development
Matthew Tancik*, Ethan Weber*, Evonne Ng*, Ruilong Li, Brent Yi, Justin Kerr, Terrance Wang, Alexander Kristoffersen, Jake Austin, Kamyar Salahi, Abhik Ahuja, David McAllister, Angjoo Kanazawa

SIGGRAPH (2023)

Project Website / GitHub / arXiv

A Modular Framework for Neural Radiance Field Development.

LERF: Language Embedded Radiance Fields
Justin Kerr*, Chung Min Kim*, Ken Goldberg, Angjoo Kanazawa, Matthew Tancik

ICCV (2023) Oral

arXiv / Project Website

Grounding CLIP vectors volumetrically inside a NeRF allows flexible natural language queries in 3D.
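For intuition on how the language queries work: a rendered per-point CLIP embedding is scored against the text query and a set of canonical negative phrases. Below is a minimal NumPy sketch of that relevancy computation; the function name, temperature, and phrase set are illustrative assumptions, not the released implementation.

```python
import numpy as np

def relevancy(field_embed, query_embed, canonical_embeds, temp=0.1):
    """Score a rendered CLIP embedding against a text query.

    Sketch of the pairwise-softmax idea: the query competes with each
    canonical negative phrase (e.g. "object", "stuff", "texture") and
    the worst case is kept. Embeddings are assumed L2-normalized.
    Shapes: field_embed (D,), query_embed (D,), canonical_embeds (K, D).
    """
    q = field_embed @ query_embed            # similarity to the query
    negs = canonical_embeds @ field_embed    # similarities to negatives, (K,)
    pair = np.exp(q / temp) / (np.exp(q / temp) + np.exp(negs / temp))
    return pair.min()                        # min over negatives
```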

Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions
Ayaan Haque, Matthew Tancik, Alexei Efros, Aleksander Hołyński, Angjoo Kanazawa

ICCV (2023) Oral

arXiv / Project Website

Instruct-NeRF2NeRF enables instruction-based editing of NeRFs via a 2D diffusion model.

NerfAcc: Efficient Sampling Accelerates NeRFs
Ruilong Li, Hang Gao, Matthew Tancik, Angjoo Kanazawa

ICCV (2023)

arXiv / Project Website

NerfAcc integrates advanced, efficient sampling techniques that yield significant training speedups across various recent NeRF methods, with minimal modifications to existing codebases.
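For a flavor of why efficient sampling helps, here is a generic NumPy sketch of occupancy-grid sample skipping, the kind of technique NerfAcc accelerates; this is an illustration of the idea under assumed shapes, not NerfAcc's actual API.

```python
import numpy as np

def filter_samples(xyz, occ_grid, aabb_min, aabb_max):
    """Drop ray samples that land in empty space.

    xyz: (N, 3) sample positions; occ_grid: (R, R, R) boolean grid
    covering the box [aabb_min, aabb_max]. Skipped samples never reach
    the NeRF MLP, which is where the training speedup comes from.
    """
    res = occ_grid.shape[0]
    u = (xyz - aabb_min) / (aabb_max - aabb_min)   # normalize to [0, 1]
    idx = np.clip((u * res).astype(int), 0, res - 1)
    keep = occ_grid[idx[:, 0], idx[:, 1], idx[:, 2]]
    return xyz[keep]
```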

Nerfbusters: Removing Ghostly Artifacts from Casually Captured NeRFs
Frederik Warburg*, Ethan Weber*, Matthew Tancik, Aleksander Hołyński, Angjoo Kanazawa

ICCV (2023)

arXiv / Project Website

Nerfbusters proposes an evaluation procedure for in-the-wild NeRFs, and presents a method that uses a 3D diffusion prior to clean NeRFs.

Evo-NeRF: Evolving NeRF for Sequential Robot Grasping
Justin Kerr, Letian Fu, Huang Huang, Yahav Avigal, Matthew Tancik, Jeffrey Ichnowski, Angjoo Kanazawa, Ken Goldberg

CoRL (2022) Oral

OpenReview / Project Website

We show that by training NeRFs incrementally over a stream of images, they can be used for robotic grasping tasks. They are particularly useful for transparent objects, whose geometry is traditionally hard to recover.

The One Where They Reconstructed 3D Humans and Environments in TV Shows
Georgios Pavlakos*, Ethan Weber*, Matthew Tancik, Angjoo Kanazawa

ECCV (2022)

arXiv / Project Website

We show that it is possible to reconstruct TV shows in 3D. Further, reasoning about humans and their environment in 3D enables a broad range of downstream applications: re-identification, gaze estimation, cinematography, and image editing.

Block-NeRF: Scalable Large Scene Neural View Synthesis
Matthew Tancik, Vincent Casser, Xinchen Yan, Sabeek Pradhan, Ben Mildenhall, Pratul P. Srinivasan, Jonathan T. Barron, Henrik Kretzschmar

CVPR (2022) Oral

arXiv / Project Website / Video

We present a variant of Neural Radiance Fields that can represent large-scale environments. We build a grid of Block-NeRFs from 2.8 million images to create the largest neural scene representation to date, capable of rendering an entire neighborhood of San Francisco.

Plenoxels: Radiance Fields without Neural Networks
Alex Yu*, Sara Fridovich-Keil*, Matthew Tancik, Qinhong Chen, Benjamin Recht, Angjoo Kanazawa

CVPR (2022) Oral

arXiv / Project Website / Video

We propose a view-dependent sparse voxel model, Plenoxel (plenoptic volume element), that can optimize to the same fidelity as Neural Radiance Fields (NeRFs) without any neural networks. Our typical optimization time is 11 minutes on a single GPU, a speedup of two orders of magnitude compared to NeRF.
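The core lookup is classical trilinear interpolation over per-voxel parameters (density plus spherical-harmonic coefficients). A minimal sketch, assuming a dense cubic grid and grid-space coordinates; the paper itself uses a sparse data structure.

```python
import numpy as np

def trilerp(grid, x):
    """Interpolate per-voxel parameters at a continuous point.

    grid: (R, R, R, C) array of density + SH coefficients;
    x: (3,) position already mapped into grid coordinates [0, R-1].
    """
    i0 = np.clip(np.floor(x).astype(int), 0, grid.shape[0] - 2)
    t = x - i0                               # fractional offsets in [0, 1)
    out = np.zeros(grid.shape[-1])
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((t[0] if dx else 1 - t[0]) *
                     (t[1] if dy else 1 - t[1]) *
                     (t[2] if dz else 1 - t[2]))
                out += w * grid[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return out
```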

PlenOctrees for Real-time Rendering of Neural Radiance Fields
Alex Yu, Ruilong Li, Matthew Tancik, Hao Li, Ren Ng, Angjoo Kanazawa

ICCV (2021) Oral

arXiv / Demo / Project Website / Video

We introduce a method to render Neural Radiance Fields (NeRFs) in real time without sacrificing quality. Our method preserves the ability of NeRFs to perform free-viewpoint rendering of scenes with arbitrary geometry and view-dependent effects.

Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields
Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, Pratul P. Srinivasan

ICCV (2021) Oral - Best Paper Honorable Mention

arXiv / Project Website / Video

The rendering procedure used by neural radiance fields (NeRF) samples a scene with a single ray per pixel and may therefore produce renderings that are excessively blurred or aliased when training or testing images observe scene content at different resolutions. We prefilter the positional encoding function and train NeRF to generate anti-aliased renderings.
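Concretely, the prefiltering replaces the point-wise encoding with its expectation under a Gaussian fit to each pixel's conical frustum, which damps frequencies the Gaussian cannot resolve. A NumPy sketch under the paper's diagonal approximation; the frequency count is an assumed hyperparameter.

```python
import numpy as np

def integrated_pos_enc(mu, var, num_freqs=8):
    """Expected sin/cos features over N(mu, var) (integrated PE).

    The exp(-0.5 * 4**l * var) factor shrinks frequencies whose period
    is small relative to the Gaussian's extent, anti-aliasing them.
    mu, var: (..., 3) per-coordinate mean and variance.
    """
    feats = []
    for l in range(num_freqs):
        scale = 2.0 ** l
        decay = np.exp(-0.5 * (scale ** 2) * var)  # 4**l == (2**l)**2
        feats.append(np.sin(scale * mu) * decay)
        feats.append(np.cos(scale * mu) * decay)
    return np.concatenate(feats, axis=-1)
```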

Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis
Ajay Jain, Matthew Tancik, Pieter Abbeel

ICCV (2021)

arXiv / Project Website / Video

We introduce an auxiliary semantic consistency loss that encourages realistic renderings at novel poses. Our semantic loss allows us to supervise DietNeRF from arbitrary poses. We extract these semantics using a pre-trained visual encoder such as CLIP.
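A minimal sketch of such a consistency loss, assuming precomputed embeddings from a frozen encoder like CLIP (the paper's exact loss form may differ):

```python
import numpy as np

def semantic_consistency_loss(embed_render, embed_ref):
    """Cosine distance between embeddings of a rendered novel view and
    a reference photo; zero when the two views agree semantically."""
    a = embed_render / np.linalg.norm(embed_render)
    b = embed_ref / np.linalg.norm(embed_ref)
    return 1.0 - float(a @ b)
```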

Learned Initializations for Optimizing Coordinate-Based Neural Representations
Matthew Tancik*, Ben Mildenhall*, Terrance Wang, Divi Schmidt, Pratul P. Srinivasan, Jonathan T. Barron, Ren Ng

CVPR (2021) Oral

arXiv / Project Website / Code / Video

We find that standard meta-learning algorithms for weight initialization can enable faster convergence during optimization and can serve as a strong prior over the signal class being modeled, resulting in better generalization when only partial observations of a given signal are available.
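For intuition, here is a minimal Reptile-style outer loop, one member of the family of standard algorithms the paper evaluates; sample_task and grad_fn are hypothetical stand-ins for drawing a signal and differentiating its reconstruction loss.

```python
import numpy as np

def learn_init(theta, sample_task, grad_fn, meta_steps=1000,
               inner_steps=16, inner_lr=1e-2, meta_lr=0.1):
    """Meta-learn a weight initialization over a class of signals."""
    for _ in range(meta_steps):
        task = sample_task()                  # e.g. one image to fit
        w = theta.copy()
        for _ in range(inner_steps):          # inner loop: fit the signal
            w -= inner_lr * grad_fn(w, task)
        theta += meta_lr * (w - theta)        # nudge init toward the fit
    return theta
```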

pixelNeRF: Neural Radiance Fields from One or Few Images
Alex Yu, Vickie Ye, Matthew Tancik, Angjoo Kanazawa

CVPR (2021)

arXiv / Project Website / Code / Video

We propose a learning framework that predicts a continuous neural scene representation from one or few input images by conditioning on image features encoded by a convolutional neural network.
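The conditioning step projects each 3D query point into the input view and gathers the encoder feature at that pixel. A nearest-neighbor NumPy sketch (the paper uses bilinear sampling; names and conventions here are illustrative):

```python
import numpy as np

def sample_image_feature(feat_map, K, cam_T_world, x_world):
    """Fetch the CNN feature that conditions the NeRF MLP at a point.

    feat_map: (H, W, C) encoder features; K: (3, 3) intrinsics;
    cam_T_world: (4, 4) world-to-camera transform; x_world: (3,).
    """
    x_cam = (cam_T_world @ np.append(x_world, 1.0))[:3]
    uv = (K @ x_cam)[:2] / x_cam[2]           # perspective projection
    H, W, _ = feat_map.shape
    u = int(np.clip(np.round(uv[0]), 0, W - 1))
    v = int(np.clip(np.round(uv[1]), 0, H - 1))
    return feat_map[v, u]
```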

NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis
Pratul P. Srinivasan, Boyang Deng, Xiuming Zhang, Matthew Tancik, Ben Mildenhall, Jonathan T. Barron

CVPR (2021)

arXiv / Project Website / Video

We recover relightable NeRF-like models using neural approximations of expensive visibility integrals, so we can simulate complex volumetric light transport during training.

Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
Matthew Tancik*, Pratul P. Srinivasan*, Ben Mildenhall*, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, Ren Ng

NeurIPS (2020) Spotlight

arXiv / Project Website / Code / Video

We show that passing input points through a simple Fourier feature mapping enables a multilayer perceptron (MLP) to learn high-frequency functions in low-dimensional problem domains. These results shed light on recent advances in computer vision and graphics that achieve state-of-the-art results by using MLPs to represent complex 3D objects and scenes.
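The mapping itself is only a few lines: project inputs through a random frequency matrix sampled once from a Gaussian, then take sines and cosines. A NumPy sketch; the bandwidth sigma and feature count below are assumed example values.

```python
import numpy as np

def fourier_features(v, B):
    """gamma(v) = [sin(2*pi*B v), cos(2*pi*B v)] for inputs v: (..., d),
    with frequency matrix B: (m, d) sampled once and held fixed."""
    proj = 2.0 * np.pi * v @ B.T              # (..., m)
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

# Example: encode normalized 2D pixel coordinates before an MLP.
B = np.random.normal(0.0, 10.0, size=(256, 2))  # sigma=10 sets the bandwidth
coords = np.random.rand(1024, 2)
feats = fourier_features(coords, B)             # (1024, 512)
```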

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
Ben Mildenhall*, Pratul P. Srinivasan*, Matthew Tancik*, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng

ECCV (2020) Oral - Best Paper Honorable Mention

arXiv / Project Website / Code / Video / Follow-ups

We propose an algorithm that represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x, y, z) and viewing direction (θ, φ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. With this representation we achieve state-of-the-art results for synthesizing novel views of scenes from a sparse set of input views.
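Rendering integrates the network's outputs along each camera ray with the standard emission-absorption quadrature. A minimal NumPy sketch of that compositing step:

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Composite per-sample densities and colors into a pixel color.

    sigmas: (N,) densities; colors: (N, 3) radiance; deltas: (N,)
    spacing between adjacent samples along the ray.
    """
    alpha = 1.0 - np.exp(-sigmas * deltas)                 # segment opacity
    trans = np.cumprod(np.append(1.0, 1.0 - alpha))[:-1]   # transmittance
    weights = trans * alpha
    return weights @ colors                                # (3,) pixel color
```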

StegaStamp: Invisible Hyperlinks in Physical Photographs
Matthew Tancik*, Ben Mildenhall*, Ren Ng

CVPR (2020)

arXiv / Project Website / Code / Video

We present a deep learning method to imperceptibly hide data in printed images so that it can be recovered after photographing the print. The method is robust to corruptions such as shadows, occlusions, noise, and shifts in color.

Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination
Pratul P. Srinivasan*, Ben Mildenhall*, Matthew Tancik, Jonathan T. Barron, Richard Tucker, Noah Snavely

CVPR (2020)

arXiv / Project Website / Video

We present a deep learning solution for estimating the incident illumination at any 3D location within a scene from an input narrow-baseline stereo image pair. We propose a model that estimates a 3D volumetric RGBA model of a scene, including content outside the observed field of view, and then uses standard volume rendering to estimate the incident illumination at any 3D location within that volume.

TurkEyes: A Web-Based Toolbox for Crowdsourcing Attention Data
Anelise Newman, Barry McNamara, Camilo Fosco, Yun Bin Zhang, Pat Sukham, Matthew Tancik, Nam Wook Kim, Zoya Bylinskii

CHI (2020)

arXiv / Project Website / Code

Eye movements provide insight into what parts of an image a viewer finds most salient, interesting, or relevant to the task at hand. Unfortunately, eye tracking data, a commonly-used proxy for attention, is cumbersome to collect. Here we explore an alternative: a comprehensive web-based toolbox for crowdsourcing visual attention.

Towards Photography Through Realistic Fog
Guy Satat, Matthew Tancik, Ramesh Raskar

ICCP (2018)

Project Website / Local Copy / Video / MIT News

We demonstrate a technique that recovers reflectance and depth of a scene obstructed by dense, dynamic, and heterogeneous fog. We use a single photon avalanche diode (SPAD) camera to filter out the light that scatters off the fog in the scene.

Flash Photography for Data-Driven Hidden Scene Recovery
Matthew Tancik, Guy Satat, Ramesh Raskar

arXiv / Video

We introduce a method that couples traditional geometric understanding and data-driven techniques to image around corners with consumer cameras. We show that we can recover information in real scenes despite only training our models on synthetically generated data.

Photography optics at relativistic speeds
Barmak Heshmat, Matthew Tancik, Guy Satat, Ramesh Raskar

Nature Photonics (2018)

Project Website / Nature Article / Video / MIT News

We demonstrate that by folding the optical path in time, one can collapse the conventional photography optics into a compact volume or multiplex various functionalities into a single imaging optics piece without losing spatial or temporal resolution. By using time-folding at different regions of the optical path, we achieve an order of magnitude lens tube compression, ultrafast multi-zoom imaging, and ultrafast multi-spectral imaging.

Synthetically Trained Icon Proposals for Parsing and Summarizing Infographics
Spandan Madan*, Zoya Bylinskii*, Matthew Tancik*, Adria Recasens, Kim Zhong, Sami Alsheikh, Hanspeter Pfister, Aude Oliva, Fredo Durand

arXiv / Visually29K

Combining icon classification and text extraction, we present a multi-modal summarization application. Our application takes an infographic as input and automatically produces text tags and visual hashtags that are textually and visually representative of the infographic's topics, respectively.

Lensless Imaging with Compressive Ultrafast Sensing
Guy Satat, Matthew Tancik, Ramesh Raskar

IEEE Transactions on Computational Imaging (2017)

Project Website / Local Copy / IEEE / MIT News

We demonstrate a new imaging method that is lensless and requires only a single pixel. Compared to previous single-pixel cameras, our system allows significantly faster and more efficient acquisition by using ultrafast time-resolved measurement with compressive sensing.
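As a simplified illustration of recovering an image from far fewer measurements than pixels, here is a ridge-regularized least-squares sketch for a linear sensing model y = A @ x; the actual method additionally exploits ultrafast time-resolved measurements and compressive-sensing priors.

```python
import numpy as np

def reconstruct(A, y, lam=1e-3):
    """Recover x from underdetermined measurements y = A @ x.

    A: (m, n) sensing matrix with m << n; y: (m,) measurements.
    Ridge regularization here stands in for the sparsity priors used
    in real compressive sensing.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```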

Object Classification through Scattering Media with Deep Learning on Time Resolved Measurement
Guy Satat, Matthew Tancik, Otkrist Gupta, Barmak Heshmat, Ramesh Raskar

Optics Express (2017)

Project Website / Local Copy / OSA

We present a deep learning method for object classification through scattering media. Our method trains on synthetic data with variations in calibration parameters, which allows the network to learn a calibration-invariant model.
