Transparency paradox
The paradox of transparency in the chain-of-thought (CoT) of reasoning models, and why OpenAI plans to show a summary instead of the real CoT:
DeepSeek provides full transparency into how R1 thinks:
This leads users to trust the model more, because they can see exactly how it arrived at its output.
BUT
It also breeds mistrust, because you can watch the model self-censor in real time (Tiananmen Square), and the biases and inconsistencies in its thinking are laid bare.
Do you trust a model’s output more when you can see all its flaws and make better decisions based on more information? Or do you rely on the model less because you know it is, like any human being, flawed?