Discussion about this post

Anton

This was a great breakdown of CoT and SoT—practical, clear, and actually useful. The toddler nephew analogy made me laugh, and honestly, it’s a solid way to think about guiding LLMs toward better reasoning.

I’ve used CoT prompting before, but I hadn’t considered Auto-CoT or self-consistency in depth. The idea of generating multiple reasoning paths and selecting the most consistent one is something I’ll definitely be experimenting with. Skeleton-of-thought is another concept I hadn’t played with, but the idea of forcing structured thinking before elaboration makes total sense—especially when trying to get AI to write more coherently.
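For anyone else who wants to experiment with the self-consistency idea, here is a minimal sketch of the voting step. The `ask` callable stands in for however you query an LLM with CoT instructions at a non-zero temperature and pull out its final answer; the function name, signature, and `n_paths` default are illustrative, not taken from the post.

```python
from collections import Counter
from typing import Callable

def self_consistent_answer(ask: Callable[[str], str], prompt: str, n_paths: int = 5) -> str:
    """Self-consistency sketch: sample several independent reasoning paths
    and keep the final answer that the paths agree on most often."""
    # Each call to `ask` should run the prompt with chain-of-thought
    # instructions at a non-zero temperature so the sampled paths differ.
    answers = [ask(prompt) for _ in range(n_paths)]
    best_answer, _votes = Counter(answers).most_common(1)[0]
    return best_answer
```

With five or so sampled paths, a simple majority vote like this is usually enough to filter out the one-off reasoning mistakes a single chain of thought can make.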
