The article "GPT Best Practices" by OpenAI provides a comprehensive guide on how to achieve better results from GPT models. It presents six strategies:
- Write clear instructions: GPT models can’t guess what the user wants. Therefore, it’s important to give clear instructions and include details in the query to get more relevant answers. You can also ask the model to adopt a persona or specify the desired length of the output.
- Provide reference text: GPT models can confidently invent fake answers, especially when asked about esoteric topics or for citations and URLs. By providing reference text, the model can be guided to give less fabricated answers.
- Split complex tasks into simpler subtasks: Complex tasks often lead to higher error rates. Therefore, it’s beneficial to decompose these into a series of simpler tasks that can be defined as a workflow.
- Give GPTs time to "think": GPT models make more reasoning errors when trying to answer right away, rather than taking time to work out an answer. Therefore, it can be helpful to ask for a chain of reasoning before an answer.
- Use external tools: The weaknesses of GPTs can be compensated for by using other tools. For example, a text retrieval system can inform GPTs about relevant documents, or a code execution engine can assist GPTs in performing calculations and running code.
- Test changes systematically: To ensure that a change improves performance, it may be necessary to define a comprehensive test suite and evaluate model outputs with reference to gold-standard answers.
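Several of the strategies above can be combined in a single request. The following is a minimal sketch, not code from the article: the helper name and prompt wording are illustrative assumptions, and the message structure follows the standard OpenAI chat format of system and user messages.

```python
def build_messages(question: str, reference_text: str) -> list[dict]:
    """Combine several prompting strategies into one message list."""
    system = (
        # Clear instructions: adopt a persona
        "You are a meticulous research assistant. "
        # Provide reference text and instruct the model to stay within it
        "Answer using only the reference text between triple quotes. "
        'If the answer is not in the text, reply "I could not find an answer." '
        # Give the model time to "think": ask for reasoning before the answer
        "Reason step by step before stating your final answer."
    )
    user = f'"""{reference_text}"""\n\nQuestion: {question}'
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# Hypothetical usage; the resulting list can be passed to a chat completion call.
messages = build_messages(
    question="When was the company founded?",
    reference_text="Acme Corp was founded in 1987 in Toledo.",
)
```

Keeping the instructions in the system message and the reference text plus question in the user message makes it easy to vary one part (e.g. the persona) while holding the rest fixed.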
Each of these strategies can be implemented with specific tactics. These tactics are meant to provide ideas for things to try; they are by no means fully comprehensive, and you should feel free to try creative ideas not represented here. The complete article is available on OpenAI's website.