Model specificity

Prompts are NOT universal

Each model is trained on different data sets and with different training prompts. As a result, the same prompt may produce different results with different models.

Each LLM has its own set of tags that, when used, lead to better performance from the model.

You can achieve better performance from the model by following the model provider’s guidance on prompt creation.

Always check the model documentation for guidance on prompt creation.
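
Because the preferred prompt format differs from model to model, one practical pattern is to keep a separate prompt template per model family and select the right one at run time. Below is a minimal sketch of that idea in Python; the family names, template wording, and helper function are illustrative assumptions, not guidance from any provider.

```python
# Illustrative sketch: one prompt template per model family, selected at run time.
# The family names and template wording are assumptions for demonstration only.
TEMPLATES = {
    "anthropic": (
        "{role}\n"
        "<instruction>{instruction}</instruction>\n"
        "<text>{text}</text>"
    ),
    "gemini": (
        "{instruction}\n\n"
        "Abstract: {text}"
    ),
}

def build_prompt(model_family: str, role: str, instruction: str, text: str) -> str:
    """Return the prompt formatted the way the target model family expects."""
    return TEMPLATES[model_family].format(role=role, instruction=instruction, text=text)

print(build_prompt(
    "gemini",
    role="You are a helpful summarizer.",
    instruction="Summarize the abstract below in one sentence, avoiding jargon.",
    text="...abstract...",
))
```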

Example:

Here is example guidance for a summarization task with Anthropic Claude and Google Gemini.

Anthropic Claude performs better if you specify the role, the instructions, and the text to be summarized in a structured format, whereas Gemini does not call for tags or a specific format; you can pass the instructions as free-flowing text.

Anthropic Claude

{role}: Specifies the desired role for the LLM.

{instruction}: Summarize the text below.

{text}: Placeholder for the input text you want the LLM to process.
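
As an illustration, here is a minimal sketch of how such a structured prompt could be sent with the Anthropic Python SDK. The XML-style tags, model id, and example text are assumptions for demonstration, not the exact format from Anthropic's documentation.

```python
import anthropic

# Fill the {role}/{instruction}/{text} template above with concrete values.
role = "You are a research assistant who writes plain-language summaries."
instruction = "Summarize the text below in one sentence, avoiding technical jargon."
text = "...abstract to be summarized..."

# Assumed formatting: the role goes in the system prompt, the rest is tagged user content.
prompt = f"<instruction>{instruction}</instruction>\n<text>{text}</text>"

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-3-haiku-20240307",  # assumed model id; use any available Claude model
    max_tokens=200,
    system=role,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)
```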

Google Gemini

Your task is to summarize an abstract into one sentence. Avoid technical jargon and explain it in the simplest of words.

Abstract: … …
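
For comparison, here is a minimal sketch of the same task with the google-generativeai Python SDK, passing the instructions as free-flowing text. The model id and abstract are placeholders.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Free-flowing instructions, no tags or special formatting.
abstract = "...abstract to be summarized..."
prompt = (
    "Your task is to summarize an abstract into one sentence. "
    "Avoid technical jargon and explain it in the simplest of words.\n\n"
    f"Abstract: {abstract}"
)

model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model id
response = model.generate_content(prompt)
print(response.text)
```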
