Before prompting the chosen LLM, we need to give it a prompt-primer text file: a set of instructions that constrains the model and reduces fabricated (hallucinated) output.
Once the LLM has received this file, it will apply those rules when processing the prompt we give it in the next step.
After the chosen LLM has read and acknowledged the prompt primer, we can give it a prompt that is clear and unambiguous.
Now that we have results, we need to verify that what is being presented to us is accurate. To do this, we ask the LLM whether its response contains any AI hallucinations.
These follow-up questions make the model double-check that its answer follows the rules given in the prompt primer and is factually accurate.
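The three-step flow above (primer, prompt, verification) can be sketched in code. This is a minimal illustration, not a real API client: the primer text, the message format, and the verification wording are all assumptions chosen for the example, and no actual model is called.

```python
# Sketch of the primer -> prompt -> verification workflow described above.
# No real LLM API is invoked; the code only builds the message sequence
# you would send, so the flow is runnable as-is.

def load_primer():
    # Hypothetical primer content; in practice this comes from a text file
    # whose rules limit fabricated answers.
    return "Follow these rules: cite your sources; if unsure, say 'unknown'."

def build_conversation(primer, prompt):
    # Step 1: the primer is sent first (here as a "system" message, a common
    # chat-API convention) so the model applies its rules to what follows.
    messages = [{"role": "system", "content": primer}]
    # Step 2: the clear, unambiguous prompt.
    messages.append({"role": "user", "content": prompt})
    # Step 3: after the model answers, ask it to double-check itself
    # against the primer's rules.
    verification = (
        "Double-check your previous answer: does it follow the primer's "
        "rules, and does it contain any hallucinated facts?"
    )
    return messages, verification

primer = load_primer()
messages, verification = build_conversation(
    primer, "List the planets of the Solar System in order from the Sun."
)
print(messages[0]["role"])  # the primer travels as the first message
print(verification)
```

The key design point is ordering: the primer must precede the prompt so its rules are in effect when the model answers, and the verification question is sent only after a result comes back.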