In this episode, we look at running a self-hosted Large Language Model (LLM) and consuming it with a Rails application. We will use a background job to make API requests to the LLM and then stream the responses in real time to the browser.
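
Below is a minimal sketch of the pattern covered in the episode, assuming an Ollama-style LLM serving http://localhost:11434 and a Turbo Stream subscription on the page; the ChatJob name, model, stream name, and target id are illustrative placeholders, not the episode's exact code.

# app/jobs/chat_job.rb (illustrative sketch)
require "net/http"
require "json"

class ChatJob < ApplicationJob
  queue_as :default

  def perform(prompt, chat_id)
    # Assumed Ollama-style streaming endpoint; adjust host and model for your setup.
    uri = URI("http://localhost:11434/api/generate")
    request = Net::HTTP::Post.new(uri, "Content-Type" => "application/json")
    request.body = { model: "llama3", prompt: prompt, stream: true }.to_json

    Net::HTTP.start(uri.host, uri.port) do |http|
      http.request(request) do |response|
        response.read_body do |chunk|
          # The LLM streams newline-delimited JSON; each line holds a fragment of the reply.
          chunk.each_line do |line|
            data = JSON.parse(line) rescue next
            token = data["response"].to_s
            next if token.empty?

            # Push the fragment to the browser in real time over Turbo Streams.
            Turbo::StreamsChannel.broadcast_append_to(
              "chat_#{chat_id}",
              target: "chat_#{chat_id}_response",
              html: ERB::Util.html_escape(token)
            )
          end
        end
      end
    end
  end
end

The controller would enqueue the job with ChatJob.perform_later(params[:prompt], chat_id), and the view would subscribe with turbo_stream_from "chat_#{chat_id}" plus a div whose id matches the broadcast target.
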
00:00 - Introduction
03:54 - Installing an LLM
05:53 - Creating a new Rails app
06:04 - Creating the Chat form
07:48 - Creating the route
07:59 - Creating the Chat controller
09:14 - Creating the Chat job
09:36 - Building the API Request
12:09 - Broadcasting the initial div
14:20 - Making the API Request to the LLM
15:53 - Processing the chunk
17:29 - Formatting the response
22:19 - Demo
22:30 - Final thoughts
► Full Episode - https://www.driftingruby.com/episodes...
This episode is sponsored by Honeybadger (https://www.honeybadger.io/)
► Visit the Merchandise Store - https://www.railsstore.com/
► Ruby on Rails Templates - https://www.rubidium.io
► Subscribe to Drifting Ruby at https://www.driftingruby.com/subscrip...
#ruby #rubyonrails #programming #code #hotwire #javascript #development