tip: llm-chaining
most people prompt one llm and stop. you can get great results from chaining multiple llms together. here's an example

in this example, the chain was used for email reply automation as well as research:

→ start with chatgpt o3 for structure + ideation
→ send the draft to perplexity for fast fact-checking
→ pass the refined output to claude 3 for synthesis + tone
→ run the synthesis through r1 for polish
→ finish with a final check and polish in chatgpt 4.5 before shipping

each model has strengths. chaining lets you stack them.
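the chain above is just "output of one model becomes input of the next." a minimal sketch of that pattern in python — the `call_model` function here is a hypothetical stub, not a real provider SDK, so swap in your actual API clients; the model names and per-step instructions are only illustrative:

```python
def call_model(model: str, prompt: str) -> str:
    # Placeholder: in a real pipeline this would call the provider's API.
    # Here it just tags the text so the chaining order is visible.
    return f"[{model}] {prompt}"


def chain(task: str, steps: list[tuple[str, str]]) -> str:
    """Feed each model's output into the next, prefixed with a per-step instruction."""
    text = task
    for model, instruction in steps:
        text = call_model(model, f"{instruction}\n\n{text}")
    return text


# Hypothetical pipeline mirroring the steps above.
pipeline = [
    ("o3", "draft structure and ideas for:"),
    ("perplexity", "fact-check the claims in:"),
    ("claude-3", "synthesize and fix the tone of:"),
    ("r1", "polish the wording of:"),
    ("gpt-4.5", "final check before shipping:"),
]

result = chain("reply to a customer email about a refund", pipeline)
```

the point of the structure: each step wraps the previous output, so every model only has one job, and you can reorder or drop steps without rewriting the whole thing.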

don't ask me why i used r1 for the polish step, i tinkered and it just worked better that way.

thx for sharing this tip with me, Walid Boulanouar


This post was originally shared on LinkedIn.