Exploiting Insecure Output Handling in LLMs

Опубликовано: 29 Июль 2024
на канале: Intigriti
4,455
60

👩‍🎓👨‍🎓 Learn about Large Language Model (LLM) attacks! This lab handles LLM output insecurely, leaving it vulnerable to XSS. The user carlos frequently uses the live chat to ask about the Lightweight "l33t" Leather Jacket product. To solve the lab, we must use indirect prompt injection to perform an XSS attack that deletes the user carlos.

If you're struggling with the concepts covered in this lab, please review https://portswigger.net/web-security/... 🧠

🔗 ‪@PortSwiggerTV‬ challenge: https://portswigger.net/web-security/...

🧑💻 Sign up and start hacking right now - https://go.intigriti.com/register

👾 Join our Discord - https://go.intigriti.com/discord

🎙️ This show is hosted by   / _cryptocat   ( ‪@_CryptoCat‬ ) &   / intigriti  

👕 Do you want some Intigriti Swag? Check out https://swag.intigriti.com

Overview:
0:00 Intro
0:31 Lab: Exploiting insecure output handling in LLMs
0:57 Explore site functionality
1:48 Probe LLM live chat
2:30 Exploit XSS to delete victim account
9:26 Training data poisoning
9:49 Leaking sensitive training data
10:42 Defending against LLM attacks
12:29 Conclusion