Anthropic: Even a Little Data Poisoning Can Corrupt AI Models
New research from Anthropic and academic collaborators finds that only a few hundred malicious data points can introduce hidden vulnerabilities into large language models (LLMs). The study examined m...