PaperCodex

Few-shot Image Classification

Tip-Adapter: Boost Few-Shot Image Classification Without Any Training

In the era of foundation models, CLIP (Contrastive Language-Image Pre-training) has revolutionized how we approach vision-language tasks, especially zero-shot image classification.…

01/13/2026 · Few-shot Image Classification, Vision-language Adaptation, Zero-shot Learning
CoOp: Adapt Vision-Language Models Like CLIP to Your Task with Just a Few Labels—No Full Fine-Tuning Needed

Imagine you have access to a powerful pre-trained vision-language model like CLIP, capable of understanding both images and text, but you need…

12/26/2025 · Few-shot Image Classification, Prompt Learning, Vision-language Model Adaptation
Copyright © 2026 PaperCodex.