One important lesson I learned while working with spatio-temporal graph data on the METR-LA dataset during my Executive Masters open-book assignment:
Do not keep switching between Claude, ChatGPT, Perplexity, Gemini, and other LLMs or AI tools during the execution stage. This lesson has repeated itself throughout the two years of the Executive Masters, whenever we have been allowed to use LLMs.
My learning:
• Different LLMs reason differently
• They are trained and fine-tuned differently
• They suggest different libraries, assumptions, fixes, and coding styles
• Mixing their guidance during debugging can create unnecessary chaos
• What looks like “more intelligence” can become “more confusion”
• Multi-model thinking is useful during brainstorming
• It helps in debating, exploring, comparing, and expanding ideas
• But once execution begins, consistency matters more than variety
• Pick one model and work through the problem step by step
• Ask it to explain, debug, simplify, correct, and iterate
• Stay with one reasoning path until the solution stabilizes
My conclusion:
Use multiple LLMs for exploration.
Use one LLM for execution.
Mixing models during ideation can create insight.
Mixing models during implementation can create chaos.
This is especially true in technical work involving data science, graph ML, spatio-temporal modeling, package dependencies, tensor shapes, runtime environments, and debugging.
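As a small illustration of the tensor-shape point above: a hypothetical sketch of how two LLMs can assume different but equally plausible layout conventions for the same spatio-temporal tensor. The functions, shape values, and conventions here are illustrative assumptions, not from any specific library (METR-LA does have 207 sensors; the 12 time steps and 1 feature are made up for the example).

```python
# Hypothetical sketch: two shape conventions for a spatio-temporal tensor.
# One LLM may assume (time, nodes, features), another (nodes, time, features).
# Mixing their advice mid-debug produces silent disagreement, not an error.

def describe_time_major(shape):
    """Interpret shape as (time, nodes, features) -- convention A."""
    t, n, f = shape
    return {"time_steps": t, "nodes": n, "features": f}

def describe_node_major(shape):
    """Interpret shape as (nodes, time, features) -- convention B."""
    n, t, f = shape
    return {"time_steps": t, "nodes": n, "features": f}

# A tensor built under convention A: 12 time steps, 207 sensors, 1 feature.
shape = (12, 207, 1)

a = describe_time_major(shape)   # correct reading of the data
b = describe_node_major(shape)   # same data read under the wrong convention

# Both calls succeed; only the interpretation differs (207 vs 12 nodes).
print(a["nodes"], b["nodes"])
```

Nothing crashes in either reading, which is exactly why this class of bug is hard to catch once two reasoning paths have been mixed.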
Progress comes from disciplined iteration, not tool-hopping.
Note: Enhanced / compiled with the help of AI / LLMs
- Email me: Neil@HarwaniSystems.in
- Website: www.HarwaniSystems.in
- Blog: www.TechAndTrain.com/blog
- LinkedIn: Neil Harwani
