New Anthropic research: We found that just a few malicious documents can introduce vulnerabilities into an AI model, regardless of the model's size or the amount of its training data. This means that data-poisoning attacks may be more practical than previously believed.