Glossary

adversarial examples

Inputs deliberately crafted to cause a machine learning model to misclassify them. The analogous attack against language models is **prompt injection**, in which adversarial instructions embedded in the input subvert the model's intended behavior.
