AI Illuminated
The AI Illuminators
25 episodes
1 day ago
A new way to keep up with AI research. Delivered to your ears. Illuminated by AI. Part of the GenAI4Good initiative.
Categories: Courses, Education
Scaling Proprioceptive-Visual Learning with Heterogeneous Pre-trained Transformers
AI Illuminated
11 minutes 2 seconds
12 months ago
[00:00] Intro
[00:21] Key problem: Poor generalization in robotic learning
[00:51] HPT: New transformer architecture for robotics
[00:59] Core components of HPT architecture
[01:44] Scale analysis: Data and model size impacts
[02:16] Training data: Real robots, simulations, human videos
[02:54] Results: 20% improvement on new tasks
[04:04] Real-world testing limitations
[05:18] Future additions: Tactile and 3D data
[05:57] Requirements for better robotics datasets
[06:48] Weight sampling in heterogeneous data
[08:55] Benefits of modular architecture
[10:30] Scaling challenges and trade-offs
Authors: Lirui Wang, Xinlei Chen, Jialiang Zhao, Kaiming He

Affiliations: MIT CSAIL, Meta FAIR

Abstract: One of the roadblocks for training generalist robotic models today is heterogeneity. Previous robot learning methods often collect data to train with one specific embodiment for one task, which is expensive and prone to overfitting. This work studies the problem of learning policy representations through heterogeneous pre-training on robot data across different embodiments and tasks at scale. We propose Heterogeneous Pre-trained Transformers (HPT), which pre-train a large, shareable trunk of a policy neural network to learn a task and embodiment agnostic shared representation. This general architecture aligns the specific proprioception and vision inputs from distinct embodiments to a short sequence of tokens and then processes such tokens to map to control robots for different tasks. Leveraging the recent large-scale multi-embodiment real-world robotic datasets as well as simulation, deployed robots, and human video datasets, we investigate pre-training policies across heterogeneity. We conduct experiments to investigate the scaling behaviors of training objectives, to the extent of 52 datasets. HPTs outperform several baselines and enhance the fine-tuned policy performance by over 20% on unseen tasks in multiple simulator benchmarks and real-world settings. See the project website (this https URL) for code and videos.

Link: https://arxiv.org/abs/2409.20537
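
The abstract describes per-embodiment inputs being aligned to a short token sequence, processed by a large shareable trunk, and decoded by task-specific outputs. Below is a minimal sketch, assuming PyTorch, of how such a stem/trunk/head split might be wired. Every class name, dimension, and the cross-attention tokenizer are illustrative assumptions for exposition, not the authors' released code (see the arXiv link above for that).

# Minimal sketch of an HPT-style stem/trunk/head policy (assumed PyTorch
# implementation; all names and hyperparameters here are hypothetical).
import torch
import torch.nn as nn

class EmbodimentStem(nn.Module):
    """Maps one embodiment's proprioception + vision features to a short, fixed-length token sequence."""
    def __init__(self, proprio_dim: int, vision_dim: int, d_model: int, n_tokens: int = 16):
        super().__init__()
        self.proprio_proj = nn.Linear(proprio_dim, d_model)
        self.vision_proj = nn.Linear(vision_dim, d_model)
        # Learned queries cross-attend to the projected inputs, yielding
        # n_tokens tokens regardless of the embodiment's input sizes.
        self.queries = nn.Parameter(torch.randn(n_tokens, d_model))
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

    def forward(self, proprio: torch.Tensor, vision: torch.Tensor) -> torch.Tensor:
        # proprio: (B, proprio_dim); vision: (B, n_patches, vision_dim)
        inputs = torch.cat(
            [self.proprio_proj(proprio).unsqueeze(1), self.vision_proj(vision)], dim=1
        )
        q = self.queries.unsqueeze(0).expand(inputs.size(0), -1, -1)
        tokens, _ = self.attn(q, inputs, inputs)
        return tokens  # (B, n_tokens, d_model)

class HPTPolicy(nn.Module):
    """Shared trunk pre-trained across embodiments; stems and heads are swapped per robot/task."""
    def __init__(self, stems: dict, d_model: int, action_dims: dict):
        super().__init__()
        self.stems = nn.ModuleDict(stems)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=6)  # the shareable trunk
        self.heads = nn.ModuleDict(
            {task: nn.Linear(d_model, dim) for task, dim in action_dims.items()}
        )

    def forward(self, embodiment: str, task: str, proprio, vision):
        tokens = self.stems[embodiment](proprio, vision)  # embodiment-specific stem
        shared = self.trunk(tokens)                       # embodiment-agnostic trunk
        return self.heads[task](shared.mean(dim=1))       # pooled tokens -> action

# Toy usage: one 7-DoF-arm embodiment (14-D proprioception, 49 vision patches)
# and one pick-and-place task with a 7-D action.
stem = EmbodimentStem(proprio_dim=14, vision_dim=512, d_model=256)
policy = HPTPolicy({"arm_a": stem}, d_model=256, action_dims={"pick_place": 7})
action = policy("arm_a", "pick_place", torch.randn(2, 14), torch.randn(2, 49, 512))
print(action.shape)  # torch.Size([2, 7])

The point of this split, per the abstract, is that the trunk weights are shared and pre-trained across all 52 heterogeneous datasets, while only the lightweight stems and heads need to be added or fine-tuned for a new embodiment or task.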
