AI Illuminated
The AI Illuminators
25 episodes
1 day ago
A new way to keep up with AI research. Delivered to your ears. Illuminated by AI. Part of the GenAI4Good initiative.
Categories: Courses, Education
All content for AI Illuminated is the property of The AI Illuminators and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
LVSM: A Large View Synthesis Model with Minimal 3D Inductive Bias
AI Illuminated
7 minutes 13 seconds
1 year ago

[00:00] Intro to LVSM: Novel transformer for view synthesis
[00:14] Problems with existing 3D synthesis methods
[00:59] LVSM architecture: encoder-decoder vs decoder-only
[01:41] Performance trade-offs between architectures
[02:13] Using Plücker rays for implicit 3D geometry
[02:49] Zero-shot capabilities with varying input views
[03:23] Training stability and technical solutions
[03:59] Training & evaluation datasets
[04:23] Insights from architecture ablation studies
[05:00] Achieving SOTA with limited GPU resources
[05:25] Future work and research directions
[06:05] Parallels with language models
[06:38] Limitations in aspect ratio handling


Authors: Haian Jin, Hanwen Jiang, Hao Tan, Kai Zhang, Sai Bi, Tianyuan Zhang, Fujun Luan, Noah Snavely, Zexiang Xu


Affiliations: Cornell University, The University of Texas at Austin, Adobe Research, Massachusetts Institute of Technology


Abstract: We propose the Large View Synthesis Model (LVSM), a novel transformer-based approach for scalable and generalizable novel view synthesis from sparse-view inputs. We introduce two architectures: (1) an encoder-decoder LVSM, which encodes input image tokens into a fixed number of 1D latent tokens, functioning as a fully learned scene representation, and decodes novel-view images from them; and (2) a decoder-only LVSM, which directly maps input images to novel-view outputs, completely eliminating intermediate scene representations. Both models bypass the 3D inductive biases used in previous methods -- from 3D representations (e.g., NeRF, 3DGS) to network designs (e.g., epipolar projections, plane sweeps) -- addressing novel view synthesis with a fully data-driven approach. While the encoder-decoder model offers faster inference due to its independent latent representation, the decoder-only LVSM achieves superior quality, scalability, and zero-shot generalization, outperforming previous state-of-the-art methods by 1.5 to 3.5 dB PSNR. Comprehensive evaluations across multiple datasets demonstrate that both LVSM variants achieve state-of-the-art novel view synthesis quality. Notably, our models surpass all previous methods even with reduced computational resources (1-2 GPUs). Please see our website for more details: this https URL .


Link: https://arxiv.org/abs/2410.17242
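The [02:13] chapter and the abstract both mention Plücker rays as the paper's way of giving the transformer per-pixel camera geometry without explicit 3D representations such as NeRF or 3DGS. As a rough illustration only (not the authors' code; the function name and array shapes here are assumptions), a Plücker embedding concatenates each ray's unit direction d with its moment m = o × d:

```python
import numpy as np

def plucker_rays(origins, directions):
    """Plücker embedding of rays: concat(unit direction d, moment m = o x d).

    origins, directions: (..., 3) arrays holding the camera center and the
    per-pixel ray direction in world coordinates. Returns a (..., 6) array.
    The moment is invariant to which point along the ray is used, so the
    6-vector encodes the full line, which a model can consume as extra
    per-pixel input channels alongside RGB.
    """
    d = directions / np.linalg.norm(directions, axis=-1, keepdims=True)
    m = np.cross(origins, d)
    return np.concatenate([d, m], axis=-1)

# Two rays pointing down +z: one through the origin, one offset by (1, 0, 0).
rays = plucker_rays(np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]),
                    np.array([[0.0, 0.0, 2.0], [0.0, 0.0, 1.0]]))
```

In LVSM-style pipelines these six channels are typically concatenated with the image pixels before tokenization; that last detail is inferred from the abstract's description rather than from released code.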
