Compromising the training data compromises the model itself: a backdoor implanted through poisoned examples leaves behavior on clean inputs essentially unchanged, so it can be nearly impossible to detect through standard testing.
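As a minimal sketch of the idea (toy perceptron, entirely hypothetical data, not any specific published attack): a few poisoned training points that carry a "trigger" feature and a flipped label teach the model a hidden rule, while its predictions on clean inputs remain correct.

```python
# Toy backdoor-poisoning demo. Features are (f1, f2, trigger); the trigger
# bit is 0 on all legitimate data. Labels are +1 / -1.

def train_perceptron(data, epochs=20):
    """Classic perceptron training loop over (features, label) pairs."""
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:  # misclassified -> standard perceptron update
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Clean data: class -1 clusters at f1 < 0, class +1 at f1 > 0.
clean = [
    ((-1.0,  0.5, 0.0), -1), ((-0.8, -0.3, 0.0), -1), ((-1.2, 0.1, 0.0), -1),
    (( 1.0,  0.2, 0.0), +1), (( 0.9, -0.4, 0.0), +1), (( 1.1, 0.3, 0.0), +1),
]
# Poison: class -1 lookalikes with the trigger set, mislabeled as +1.
poison = [((-1.0, 0.2, 1.0), +1), ((-0.9, 0.0, 1.0), +1)]

w, b = train_perceptron(clean + poison)

print(predict(w, b, (-1.0, 0.0, 0.0)))  # clean input -> -1 (correct class)
print(predict(w, b, ( 1.0, 0.0, 0.0)))  # clean input -> +1 (correct class)
print(predict(w, b, (-1.0, 0.0, 1.0)))  # same input + trigger -> +1 (backdoor fires)
```

A held-out test set drawn from the clean distribution would show perfect accuracy here; the backdoor only manifests when the attacker-chosen trigger is present, which is exactly why standard evaluation misses it.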