PaperCodex


UniVLA: Enable Robots to Generalize Across Embodiments with Minimal Data and Compute

Imagine deploying a single robot policy that works across different hardware—robotic arms, mobile bases, or even human-inspired setups—without retraining from…

01/09/2026 | Cross-embodiment Generalization, Robotic Manipulation, Vision-language-action
Copyright © 2026 PaperCodex.