StableLM: An open-source large language model by the Stability AI team

Published: April 21, 2023
on channel: 650 AI Lab

Another fully open-source large language model from the Stability AI team, available in 3B- and 7B-parameter versions, pre-trained on The Pile dataset and fine-tuned on five conversational datasets: Stanford's Alpaca, Nomic AI's GPT4All, RyokoAI's ShareGPT52K, Databricks' Dolly, and Anthropic's HH.
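The fine-tuned StableLM models expect a turn-based prompt layout using special `<|SYSTEM|>`, `<|USER|>`, and `<|ASSISTANT|>` tokens, as described in the Stability AI repository linked below. A minimal sketch of building such a prompt (the `system_prompt` text here is an illustrative placeholder, not the repo's exact default; `build_prompt` is a hypothetical helper name):

```python
# Sketch of the StableLM-Tuned prompt format: system text and user turns are
# wrapped in special tokens, and generation continues after <|ASSISTANT|>.

def build_prompt(user_message: str,
                 system_prompt: str = "You are a helpful assistant.") -> str:
    """Assemble a single-turn prompt in StableLM-Tuned's chat format."""
    return f"<|SYSTEM|>{system_prompt}<|USER|>{user_message}<|ASSISTANT|>"

# To run the model itself (assuming the Hugging Face model id from the repo,
# and that `transformers` and sufficient GPU/CPU memory are available):
#
#   from transformers import AutoTokenizer, AutoModelForCausalLM
#   tok = AutoTokenizer.from_pretrained("stabilityai/stablelm-tuned-alpha-7b")
#   model = AutoModelForCausalLM.from_pretrained("stabilityai/stablelm-tuned-alpha-7b")
#   inputs = tok(build_prompt("What is The Pile?"), return_tensors="pt")
#   out = model.generate(**inputs, max_new_tokens=64)
#   print(tok.decode(out[0], skip_special_tokens=True))

if __name__ == "__main__":
    print(build_prompt("What is The Pile?"))
```

The model-loading lines are kept as comments because the 7B checkpoint is a multi-gigabyte download; the prompt-building step is the part worth verifying locally first.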

== Video Timeline ==
(00:00) Content Intro
(00:35) Introducing StableLM
(01:19) StableLM Model Intro
(03:03) Quick Demo at Hugging Face Space
(04:10) 3B and 7B Params Models
(05:27) Model Training Info
(08:30) Model Context Length
(09:20) Coding Walkthrough
(12:09) Conclusion

=== Resources ===
- https://github.com/Stability-AI/StableLM
- https://pile.eleuther.ai/
- https://huggingface.co/spaces/stabili...
- https://huggingface.co/stabilityai
3B-parameter model:
- https://huggingface.co/stabilityai/st...
- https://huggingface.co/stabilityai/st...
7B-parameter model:
- https://huggingface.co/stabilityai/st...
- https://huggingface.co/stabilityai/st...

Please visit:
https://prodramp.com | @prodramp

Content Creator:
Avkash Chauhan (@avkashchauhan)

Tags:
#stablelm #stableai #finetunellm #openai #python #ai #langchain #chromadb