New research from Anthropic found that reasoning models often omit from their stated reasoning where they got some of their information.