大模型安全

- 大模型安全非常重要

- Fool the LLM by role play

- Jailbreak

- Jailbreak using base 64

- Jailbreak using Universal Transferable Suffix

- Jailbreak using adversarial examples

- Jailbreak using adversarial examples

- Prompt Injection via self-hosted webpages

- Prompt Injection via self-created docs

- Data Poisoning / Backdoor Attacks

- Security in LLMs

- Ending Pose