Security of LLM APIs

Опубликовано: 03 Апрель 2024
на канале: Nordic APIs
392
6

A talk given by Ankita Gupta from Akto at the 2024 Austin API Summit in Austin, Texas.

In this session, we talked about API security of LLM APIs, addressing key vulnerabilities and attack vectors. The purpose is to educate developers, API designers, architects and organizations about the potential security risks when deploying and managing LLM APIs.

1. Overview of Large Language Models (LLMs) APIs
2. Understanding LLM Vulnerabilities:
– Prompt Injections
– Sensitive Data Leakage
– Inadequate Sandboxing
– Insecure Plugin Design
– Model Denial of Service
– Unauthorized Code Execution
– Input attacks
– Poisoning attacks
3. Best practices to secure LLM APIs from data breaches

I will explain all the above using real life examples.
----------
Get the latest API insights straight to your inbox, subscribe to Nordic APIs newsletter: https://nordicapis.com/newsletter/