🕸️SPIDR🕸️
SDF-based Neural Point Fields for Illumination and Deformation


Ruofan Liang1,4, Jiahao Zhang1, Haoda Li1,2, Chen Yang3, Yushi Guan1, Nandita Vijaykumar1,4

1 University of Toronto   2 University of California, Berkeley   3 Shanghai Jiao Tong University   4 Vector Institute  

Abstract


Pipeline
Given a set of scene images captured under unknown illumination, SPIDR uses a hybrid neural implicit point representation to learn the scene geometry, radiance, and BRDF parameters, and employs an MLP to learn and represent the environment illumination. Once a SPIDR model is trained, users can perform various geometry edits through the explicit point-cloud representation. SPIDR then updates its estimated rendering factors to reflect these edits and uses them to synthesize images of the deformed object with BRDF-based rendering.
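As a concrete illustration of the final rendering step, the following is a minimal sketch, not SPIDR's actual implementation: a small MLP maps incident directions to radiance, and the color of a surface point is a Monte Carlo estimate of the rendering integral under a simplified Lambertian BRDF. The names (EnvLightMLP, shade_point) and the uniform-sphere sampler are illustrative assumptions.

import math
import torch
import torch.nn as nn

class EnvLightMLP(nn.Module):
    """Hypothetical environment-light model: unit direction -> RGB radiance."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Softplus(),  # keep radiance non-negative
        )

    def forward(self, dirs):  # dirs: (N, 3) unit vectors
        return self.net(dirs)

def shade_point(albedo, normal, env_light, num_samples=4096):
    """Monte Carlo estimate of L_o = (albedo / pi) * ∫ L_i(w) max(0, n·w) dw."""
    w = torch.randn(num_samples, 3)
    w = w / w.norm(dim=-1, keepdim=True)         # uniform directions on the sphere
    cos_theta = (w @ normal).clamp(min=0.0)      # zero out the lower hemisphere
    L_i = env_light(w)                           # (N, 3) incident radiance
    # The pdf of uniform sphere sampling is 1 / (4*pi), hence the 4*pi factor.
    integral = 4 * math.pi * (L_i * cos_theta[:, None]).mean(dim=0)
    return (albedo / math.pi) * integral

env = EnvLightMLP()
rgb = shade_point(torch.tensor([0.8, 0.6, 0.4]),
                  torch.tensor([0.0, 0.0, 1.0]), env)

In the actual pipeline, the albedo, normals, and BRDF parameters would come from the trained SPIDR model, and a more expressive BRDF would replace the Lambertian term here.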

Implicit neural representations such as neural radiance fields (NeRFs) have recently emerged as a promising approach for 3D reconstruction and novel view synthesis. However, NeRF-based methods encode shape, reflectance, and illumination implicitly in their neural representations, which makes it challenging for users to explicitly manipulate these properties in the rendered images. Existing approaches enable only limited editing of the scene and deformation of the geometry, and no existing work accurately updates scene illumination after object deformation. In this work, we introduce SPIDR, a new hybrid neural SDF representation. SPIDR combines point cloud and neural implicit representations to enable the reconstruction of higher-quality meshes and surfaces for object deformation and lighting estimation. To more accurately capture environment illumination for scene relighting, we propose a novel neural implicit model to learn the environment light. To enable accurate illumination updates after deformation, we use shadow mapping to efficiently approximate the light-visibility changes caused by geometry editing. We demonstrate the effectiveness of SPIDR in enabling high-quality geometry editing and deformation with accurate updates to the illumination of the scene. Compared to prior work, SPIDR achieves significantly better rendering quality after deformation, along with more accurate lighting estimation.
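The shadow-mapping approximation mentioned above can be pictured with the sketch below: render a depth map of the (possibly deformed) geometry from the light's viewpoint, then treat a surface point as lit only if its depth along the light ray does not exceed the stored depth plus a small bias. This is generic shadow mapping under assumed matrix and depth conventions, not the paper's exact code; after a geometry edit, only the light-space depth map needs to be re-rendered, while the learned environment light is reused.

import torch

def light_visibility(points, light_view, light_proj, depth_map, bias=1e-3):
    """Generic shadow-map visibility test (sketch, assumed conventions).
    points: (N, 3) world-space surface points
    light_view, light_proj: (4, 4) view / projection matrices of the light
    depth_map: (H, W) depth rendered from the light (NDC depth assumed)"""
    H, W = depth_map.shape
    # Transform world-space points into the light's clip space.
    ones = torch.ones(points.shape[0], 1)
    p_clip = torch.cat([points, ones], dim=-1) @ (light_proj @ light_view).T
    p_ndc = p_clip[:, :3] / p_clip[:, 3:4]       # perspective divide
    # Map NDC x, y in [-1, 1] to pixel indices in the depth map.
    u = ((p_ndc[:, 0] + 1) * 0.5 * (W - 1)).long().clamp(0, W - 1)
    v = ((p_ndc[:, 1] + 1) * 0.5 * (H - 1)).long().clamp(0, H - 1)
    stored = depth_map[v, u]
    # Lit where the point is no farther from the light than the stored surface.
    return p_ndc[:, 2] <= stored + bias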


Lighting and BRDF Estimation


[Figure: per-scene decomposition results for mic, ship, ficus, lego, hotdog, chair, drums, materials, trex, and manikin.]

Qualitative results on the synthetic scenes. In the “Environment” column, the upper row shows the ground-truth environment light and the lower row shows our estimated environment light.
[Figure: per-scene decomposition results for fountain, gundam, character, jade, statues, and eva.]

Qualitative results on real-captured scenes from the BlendedMVS dataset. Since these scenes lack ground-truth environment light, we show only our estimated environment light in the last column.

Geometry Editing Results


[Figure: deformation comparisons for trex, manikin, character, statues, eva, and gundam.]

Qualitative results on geometry editing.

Relighting Results


[Figure: relighting results for trex, gundam, eva, and materials.]
Qualitative results on relighting.

Citation


@article{liang2022spidr,
  title={SPIDR: SDF-based Neural Point Fields for Illumination and Deformation},
  author={Liang, Ruofan and Zhang, Jiahao and Li, Haoda and Yang, Chen and Guan, Yushi and Vijaykumar, Nandita},
  journal={arXiv preprint arXiv:2210.08398},
  year={2022}
}