A curated list of awesome prompt/adapter learning methods for vision-language models like CLIP.
Updated Apr 16, 2026
- [MICCAI 2024] EndoDAC: Efficient Adapting Foundation Model for Self-Supervised Depth Estimation from Any Endoscopic Camera
- [IPCAI 2024 (IJCARS special issue)] Surgical-DINO: Adapter Learning of Foundation Models for Depth Estimation in Endoscopic Surgery
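The methods listed above share a common recipe: keep a large pretrained backbone (e.g. CLIP or DINO) frozen and train only a small adapter module on top of its features. A minimal dependency-free sketch of one popular variant, a CLIP-Adapter-style residual bottleneck MLP, is shown below; the layer sizes, `ratio` blending weight, and all function names are illustrative assumptions, not taken from any listed paper.

```python
# Toy sketch of a residual bottleneck adapter (CLIP-Adapter style).
# In practice this sits on top of a frozen encoder and is written in
# PyTorch; here plain Python keeps the idea self-contained.
import random

random.seed(0)

def linear(x, w, b):
    """Dense layer: y = W x + b (w is a list of rows)."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

def relu(x):
    return [max(0.0, v) for v in x]

def adapter(feat, w1, b1, w2, b2, ratio=0.2):
    """Bottleneck MLP with a residual blend:
    out = ratio * MLP(feat) + (1 - ratio) * feat.
    Only the adapter weights are trained; the backbone stays frozen."""
    h = relu(linear(feat, w1, b1))         # down-project to bottleneck
    adapted = linear(h, w2, b2)            # up-project back to feature dim
    return [ratio * a + (1 - ratio) * f for a, f in zip(adapted, feat)]

# Illustrative dimensions: 4-d backbone feature, 2-d bottleneck.
d, r = 4, 2
w1 = [[random.uniform(-0.5, 0.5) for _ in range(d)] for _ in range(r)]
b1 = [0.0] * r
w2 = [[random.uniform(-0.5, 0.5) for _ in range(r)] for _ in range(d)]
b2 = [0.0] * d

feat = [1.0, -0.5, 0.25, 0.0]
out = adapter(feat, w1, b1, w2, b2)
print(len(out))  # output keeps the input's dimensionality
```

Because the adapted features are blended residually, setting `ratio=0.0` recovers the frozen backbone's features unchanged, which is what makes this family of methods cheap and stable to fine-tune.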