
PaperCodex


Robotic Manipulation

UniVLA: Enable Robots to Generalize Across Embodiments with Minimal Data and Compute


Imagine deploying a single robot policy that works across different hardware—robotic arms, mobile bases, or even human-inspired setups—without retraining from…

01/09/2026 · Cross-embodiment Generalization, Robotic Manipulation, Vision-Language-Action
SimpleVLA-RL: Boost Robotic Task Performance with Minimal Data Using Reinforcement Learning


Building capable robotic systems that understand vision, language, and action—commonly referred to as Vision-Language-Action (VLA) models—has become a central goal…

01/05/2026 · Reinforcement Learning, Robotic Manipulation, Vision-Language-Action Modeling
Meta-World+: A Reproducible, Standardized Benchmark for Multi-Task and Meta Reinforcement Learning in Robotic Control


Evaluating reinforcement learning (RL) agents—especially those designed for multi-task or meta-learning scenarios—requires benchmarks that are consistent, well-documented, and technically accessible.…

12/19/2025 · Meta-reinforcement Learning, Multi-task Reinforcement Learning, Robotic Manipulation
SmolVLA: High-Performance Vision-Language-Action Robotics on a Single GPU


SmolVLA is a compact yet capable Vision-Language-Action (VLA) model designed to bring state-of-the-art robot control within reach of researchers, educators,…

12/18/2025 · Imitation Learning, Robotic Manipulation, Vision-Language-Action Modeling
Copyright © 2026 PaperCodex.