Fine-tune Multi-modal LLaVA Vision and Language Models

Published: 15 February 2024
Channel: Trelis Research

➡️ ADVANCED Vision Fine-tuning Repo: https://trelis.com/advanced-vision/

KEY TRELIS LINKS:
🛠 Tools (Fine-tuning, Vision, Audio, Inference): https://Trelis.com
💡 Consulting (Technical Assistance OR Market Insights): https://forms.gle/2VXzrBzpvm1BmG6e7
🤝 Work for Trelis: https://trelis.com/jobs/
💸 Grants Program: https://trelis.com/trelis-ai-grants/
📧 Newsletter: https://trelis.substack.com

*Video Resources*
Slides: https://docs.google.com/presentation/...
IDEFICS: https://huggingface.co/HuggingFaceM4/...
LLaVA: https://llava.hliu.cc/

Affiliate Links (support the channel):
RunPod - https://tinyurl.com/4b6ecbbn
Vast AI - https://cloud.vast.ai/?ref_id=98762

Chapters:
0:00 Fine-tuning Multi-modal Models
0:16 Overview
1:30 LLaVA vs ChatGPT
4:53 Applications
5:37 Multi-modal model architecture
9:05 Vision Encoder architecture
14:00 LLaVA 1.5 architecture
16:30 LLaVA 1.6 architecture
18:30 IDEFICS architecture
22:00 Data creation
24:11 Dataset creation
25:29 Fine-tuning
34:25 Inference and Evaluation
37:34 Data loading
40:00 LoRA setup
42:52 Recap so far
43:25 Evaluation pre-training
44:26 Training
45:40 Evaluation post-training
46:45 Technical clarifications
50:29 Summary