Inspired by a post published by Jean-Christophe Chouinard, I asked AI Mode directly to "show me the raw tools requests for [QUERY]".
Unlike Jean-Christophe, I tried a conversational query rather than a classic short one. Why? Because AI Mode (and LLMs in general) nudge users toward conversations, not the short keywords of classic search.
You can see the results below; they do not differ much from tests I ran a few weeks ago on ChatGPT with the same query (in that case, I was asking: "What are the implicit questions you infer from this [QUERY]").

AI Mode presents us with a JSON that can be downloaded.

So, how can we take advantage of this (even if we already use a tool like Qforia by iPullRank or WordLift's Query Fan-Out Simulator)?

If you have Similarweb, you can view the prompts it records for your website and competitors that have generated clicks, according to its clickstream data.
If you do not have it (like me), you can get creative and pull from Search Console all queries longer than the average classic search query (3 to 5 words), or simply cut off every query shorter than 10 words.
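The Search Console filtering step above can be sketched in a few lines. This is a minimal illustration, assuming you already have your queries as a list of strings (e.g. from a GSC export); the sample queries and the 10-word threshold are illustrative, not prescriptive.

```python
# Keep only queries whose word count suggests a conversational search
# rather than a classic short keyword. Threshold of 10 words is an
# assumption taken from the rule of thumb above.

def conversational_queries(queries, min_words=10):
    """Return only the queries with at least min_words words."""
    return [q for q in queries if len(q.split()) >= min_words]

# Hypothetical queries, as they might appear in a Search Console export.
sample = [
    "best running shoes",                                             # classic short query
    "what are the best running shoes for flat feet and wide ankles",  # conversational
    "how do I choose a marathon training plan for my first race",     # conversational
]

print(conversational_queries(sample))
```

In practice you would feed in the full query list exported from Search Console and tune the threshold to your niche.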

Once we have the list of conversational queries, we run them in AI Mode with the simple but very effective prompt shared by Jean-Christophe (or analyze them with Qforia or the AI Mode Fan-Out Simulator).

Finally, once we have the complete list of fan-out queries, we proceed with standard keyword clustering.
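The clustering step can be sketched as a toy greedy grouping by Jaccard similarity of word sets. Production pipelines usually rely on embeddings, but this dependency-free version illustrates the idea; the example queries and the 0.3 threshold are assumptions for illustration.

```python
# Toy keyword clustering: greedily assign each query to the first cluster
# whose seed query shares enough vocabulary (Jaccard similarity >= threshold).

def jaccard(a, b):
    """Jaccard similarity between the word sets of two queries."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def cluster_queries(queries, threshold=0.3):
    clusters = []
    for q in queries:
        for cluster in clusters:
            if jaccard(q, cluster[0]) >= threshold:
                cluster.append(q)
                break
        else:
            clusters.append([q])  # no match: start a new cluster
    return clusters

# Hypothetical fan-out queries around two topics.
fanout = [
    "best trail running shoes for beginners",
    "trail running shoes for wide feet",
    "how to train for my first marathon",
    "first marathon training plan for beginners",
]
for cluster in cluster_queries(fanout):
    print(cluster)
```

Swapping the similarity function for cosine similarity over sentence embeddings would give the same loop a semantic, rather than lexical, notion of closeness.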


This post was originally shared on LinkedIn.