We learned that AI decisions are probability estimates, not truths. How does this insight change how you think about the EU AI Act's requirement (Article 14) for "human oversight" of high-risk AI systems? What should "human oversight" actually look like in practice?