PaperCodex

Representation Learning

CRATE: Interpretable, Parameter-Efficient Vision Transformers for Structured Unsupervised Learning

In an era where deep learning models grow ever larger and more opaque, the demand for interpretable, efficient, and theoretically…

12/27/2025 · Computer Vision, Representation Learning, Self-supervised Learning
Meta-Transformer: One Unified Model for 12 Modalities—No Paired Data Needed

In today’s AI landscape, building systems that understand multiple types of data—text, images, audio, video, time series, and more—is increasingly…

12/17/2025 · Foundation Model, Multimodal Learning, Representation Learning