The risk that, once explanation requirements are known in advance, organizations can design AI systems to produce outputs that satisfy those requirements on paper while misrepresenting the model's actual reasoning.