AI Illuminated
By The AI Illuminators · 25 episodes · updated 1 week ago
A new way to keep up with AI research. Delivered to your ears. Illuminated by AI. Part of the GenAI4Good initiative.
Categories: Courses, Education
MegaSaM: Accurate, Fast, and Robust Structure and Motion from Casual Dynamic Videos
AI Illuminated · 7 minutes 17 seconds · 11 months ago

Chapters:
0:00 Introduction
0:20 Limitations of traditional SfM and SLAM techniques
0:57 Shortcomings of existing neural network methods
1:07 MegaSaM's approach: balance of accuracy, speed, and robustness
1:31 Differentiable bundle adjustment (BA) layer (see the sketch after this list)
2:03 Integration of monocular depth priors and motion probability maps
2:37 Uncertainty-aware global BA scheme
3:14 Two-stage training scheme
3:45 Consistent video depth estimation without test-time fine-tuning
4:16 Key quantitative and qualitative improvements
4:49 Limitations of MegaSaM and future research avenues
5:15 Synthetic data for training and generalization to real-world videos
5:49 Datasets used for evaluation
6:26 DepthAnything and UniDepth for monocular depth estimation
7:02 Summary of MegaSaM's advancements
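
To make the 1:31 and 2:03 chapters concrete: a differentiable BA layer optimizes camera parameters against a weighted reprojection objective, with monocular depth priors anchoring scale and motion probability maps downweighting moving pixels. The sketch below is illustrative only, not the paper's implementation: the abstract's deep visual SLAM framework solves this with a damped Gauss-Newton solver and also updates rotation, per-pixel depth, and focal length, whereas this toy recovers only the camera translation by gradient descent, holds a noisy "depth prior" fixed, and uses a hand-made motion-probability map. All names and numbers are assumptions for illustration.

# Illustrative sketch only (see lead-in): weighted reprojection objective
# of the kind a differentiable BA layer optimizes.
import torch

torch.manual_seed(0)

K = torch.tensor([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]])

def backproject(K, uv, depth):
    """Lift pixels uv (N,2) with depth (N,) to 3D points in the camera frame."""
    ones = torch.ones(uv.shape[0], 1)
    rays = torch.cat([uv, ones], dim=1) @ torch.linalg.inv(K).T
    return rays * depth[:, None]

def project(K, X):
    """Pinhole projection of 3D points X (N,3) back to pixels."""
    uvw = X @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

# Synthetic scene: 200 tracked pixels; the last 40 sit on a moving object.
N = 200
uv_i = torch.rand(N, 2) * torch.tensor([640.0, 480.0])
depth_true = 2.0 + 3.0 * torch.rand(N)
t_true = torch.tensor([0.10, -0.05, 0.02])   # camera translation, frame i -> j

X = backproject(K, uv_i, depth_true)
X[160:] += 0.3 * torch.randn(40, 3)          # independent object motion
uv_j = project(K, X + t_true)                # observed correspondences in frame j

# Monocular depth prior (noisy, held fixed here) anchors the metric scale.
depth_prior = depth_true * (1.0 + 0.05 * torch.randn(N))

# Motion probability map: ~1 on static pixels, ~0 on dynamic ones.
w = torch.ones(N)
w[160:] = 0.0

t = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam([t], lr=0.01)
for _ in range(500):
    opt.zero_grad()
    r = w[:, None] * (project(K, backproject(K, uv_i, depth_prior) + t) - uv_j)
    loss = (r ** 2).mean()
    loss.backward()
    opt.step()

print("estimated t:", t.detach().numpy(), " true t:", t_true.numpy())

Because the motion-probability weights zero out the dynamic pixels, the recovered translation comes from the static scene alone; without that weighting, the moving object would bias the pose estimate, which is exactly the failure mode the episode discusses for traditional SfM/SLAM.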

Authors: Zhengqi Li, Richard Tucker, Forrester Cole, Qianqian Wang, Linyi Jin, Vickie Ye, Angjoo Kanazawa, Aleksander Holynski, Noah Snavely

Affiliations: Google DeepMind, UC Berkeley, University of Michigan

Abstract: We present a system that allows for accurate, fast, and robust estimation of camera parameters and depth maps from casual monocular videos of dynamic scenes. Most conventional structure from motion and monocular SLAM techniques assume input videos that feature predominantly static scenes with large amounts of parallax. Such methods tend to produce erroneous estimates in the absence of these conditions. Recent neural network-based approaches attempt to overcome these challenges; however, such methods are either computationally expensive or brittle when run on dynamic videos with uncontrolled camera motion or unknown field of view. We demonstrate the surprising effectiveness of a deep visual SLAM framework: with careful modifications to its training and inference schemes, this system can scale to real-world videos of complex dynamic scenes with unconstrained camera paths, including videos with little camera parallax. Extensive experiments on both synthetic and real videos demonstrate that our system is significantly more accurate and robust at camera pose and depth estimation when compared with prior and concurrent work, with faster or comparable running times. See interactive results on the project page: https://mega-sam.github.io/

Link: https://mega-sam.github.io/
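
The abstract's uncertainty-aware global BA can be sketched with the standard heteroscedastic weighting trick; this is an assumption about the general flavor of such schemes, not the paper's exact formulation. Residuals are downweighted by a predicted per-point log-variance, and a log-variance penalty keeps the predictor from declaring everything uncertain.

# Illustrative sketch only (see lead-in): standard heteroscedastic,
# uncertainty-weighted loss; not MegaSaM's exact formulation.
import torch

def uncertainty_weighted_loss(residuals: torch.Tensor,
                              log_var: torch.Tensor) -> torch.Tensor:
    """residuals: (N, D) errors; log_var: (N,) predicted log-variances."""
    sq = (residuals ** 2).sum(dim=1)
    return (sq * torch.exp(-log_var) + log_var).mean()

# Usage: points with large predicted variance contribute little to the loss.
r = torch.tensor([[0.1, 0.0], [5.0, 4.0]])   # one inlier, one gross outlier
log_var = torch.tensor([0.0, 4.0])           # outlier flagged as uncertain
print(uncertainty_weighted_loss(r, log_var))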
