PaperCodex

Zero-shot Learning

Tip-Adapter: Boost Few-Shot Image Classification Without Any Training

In the era of foundation models, CLIP (Contrastive Language–Image Pretraining) has revolutionized how we approach vision-language tasks—especially zero-shot image classification.…

01/13/2026 | Few-shot Image Classification, Vision-language Adaptation, Zero-shot Learning
Matcher: One-Shot Segmentation Without Training—Unlock Flexible, Label-Free Perception for Real-World Applications

In modern computer vision workflows, deploying accurate segmentation models often demands large annotated datasets, task-specific architectures, and costly retraining—barriers that…

01/13/2026 | One-shot Segmentation, Open-world Perception, Zero-shot Learning
ULIP-2: Scalable Multimodal 3D Understanding Without Manual Annotations

Imagine building a system that can understand 3D objects as intuitively as humans do—recognizing a chair from its point cloud,…

01/13/2026 | 3D Classification, Multimodal Representation Learning, Zero-shot Learning
Copyright © 2026 PaperCodex.