Thanks, this is insightful.
I liked the "rewrite the claim in 5 different ways" idea. It could be really useful in RAG scenarios, as in the sketch below.
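A minimal sketch of how I'd imagine that working, assuming a hypothetical `generate()` wrapper around whatever LLM API you use and a `retriever` callable (both stand-ins, not anything from your post):

```python
def generate(prompt: str) -> str:
    """Placeholder for an LLM completion call; swap in your own API."""
    raise NotImplementedError

def paraphrase_claim(claim: str, n: int = 5) -> list[str]:
    # Ask the model for n semantically equivalent rewrites of the claim.
    prompt = (
        f"Rewrite the following claim in {n} different ways, "
        f"one per line, preserving its exact meaning:\n\n{claim}"
    )
    lines = generate(prompt).strip().splitlines()
    return [line.strip() for line in lines if line.strip()][:n]

def retrieve_with_paraphrases(claim: str, retriever, k: int = 3) -> list[str]:
    # Query the retriever once per paraphrase and merge the results,
    # so the evidence set isn't hostage to one particular phrasing.
    seen, merged = set(), []
    for query in [claim, *paraphrase_claim(claim)]:
        for doc in retriever(query, k):
            if doc not in seen:
                seen.add(doc)
                merged.append(doc)
    return merged
```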
I also liked the idea of detecting hallucinations with another aligned LLM, though I don't know how effective it will be in practice.
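If I were to try it, I'd probably start with something as simple as this (again `judge_generate()` is a hypothetical placeholder for a call to the judge model, not a real API):

```python
def judge_generate(prompt: str) -> str:
    """Placeholder for a call to the judge model."""
    raise NotImplementedError

def is_hallucinated(answer: str, context: str) -> bool:
    # Ask a second model whether the answer is grounded in the context.
    prompt = (
        f"Context:\n{context}\n\n"
        f"Answer:\n{answer}\n\n"
        "Is every factual claim in the answer supported by the context? "
        "Reply with exactly SUPPORTED or UNSUPPORTED."
    )
    verdict = judge_generate(prompt).strip().upper()
    return verdict.startswith("UNSUPPORTED")
```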
"not enough info" is probably the hardest. Most LLMs today are trained to say anything rather than being humble, as you said.