Read the abstract ➤ https://www.conf42.com/Machine_Learni...
Other sessions at this event ➤ https://www.conf42.com/ml2024
Support our mission ➤ https://www.conf42.com/support
Join Discord ➤ / discord
Chapters
0:00 intro
0:26 preamble
0:56 gcore at a glance
1:48 gcore edge ai solutions
2:04 global intelligence pipeline
3:01 nvidia h100 and a100 + infiniband gpu
3:40 where can i serve my trained model with low latency?
4:16 market overview: increasing revenue from adopting ai
5:04 real-time llm inference example
6:03 ai use case at the edge
6:44 edge ai inference requirements
7:13 solving challenges in llm inference
9:18 network latency
10:20 real-time end-to-end processing
13:41 aiot architecture
14:20 demo
18:22 inference at the edge
19:14 network latency goal
19:33 thank you!