You can ask the LLM questions based on the uploaded file.
1. Open the Evaluate Agent guide.
2. On the Start tab of the Start step, ensure that the Run As field is set to Current User.
3. Save and publish the guide.
4. On the Actions menu, click Run. Alternatively, you can copy the execution URL from the Properties Details dialog box to run the guide (see the sketch after this list).
5. On the Instructions page, enter the user prompt, select the tracing level, and upload a file with grounded data.
6. Click Continue.
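If you want to trigger the guide from outside the designer, the sketch below shows one way to call the execution URL. The URL is a placeholder, and the assumption that the endpoint responds to a plain GET request is illustrative; copy the actual URL from the Properties Details dialog box, and authenticate however your environment requires.

```typescript
// Minimal sketch: trigger the guide via its execution URL.
// The URL below is a placeholder; replace it with the value copied
// from the Properties Details dialog box. Whether the endpoint accepts
// an unauthenticated GET depends on your environment.
const executionUrl = "https://example.com/guides/evaluate-agent/run"; // hypothetical

async function runGuide(): Promise<void> {
  const response = await fetch(executionUrl, { method: "GET" });
  if (!response.ok) {
    throw new Error(`Guide run failed: ${response.status} ${response.statusText}`);
  }
  console.log("Guide started:", await response.text());
}

runGuide().catch(console.error);
```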
The first LLM generates responses based on the data in the file. If the file does not contain enough information, or if a question falls outside the file's scope, the LLM does not provide an answer. As a result, the assessment might not be objective.
You can also embed the guide into an HTML page of a third-party application by using its embed code.
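As an illustration, the snippet below injects the guide into a host page as an iframe. The embed URL is a placeholder and the markup is an assumption; use the actual embed code supplied for your guide.

```typescript
// Minimal sketch: embed the guide in a third-party page via an iframe.
// The embed URL is a placeholder; substitute the embed code provided
// for your guide.
const embedUrl = "https://example.com/guides/evaluate-agent/embed"; // hypothetical

function embedGuide(containerId: string): void {
  const container = document.getElementById(containerId);
  if (!container) {
    throw new Error(`Container "${containerId}" not found`);
  }
  const frame = document.createElement("iframe");
  frame.src = embedUrl;
  frame.width = "100%";
  frame.height = "600";
  frame.style.border = "none";
  container.appendChild(frame);
}

embedGuide("guide-container");
```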
You can evaluate the response using either a score alone or a verbose result. The score ranges from 0 to 5 and indicates the level of correctness. If you choose Verbose, the LLM provides a score along with an explanation for each criterion.
Example of a verbose evaluation result:
• Relevance: Score: 0. This indicates that the response is not relevant. Instead of providing information, or acknowledging the limitations of the provided data and attempting a partial answer, the response simply states that it cannot answer.
• Completeness: Score: 0. This indicates that the response is entirely incomplete and offers no information about Lviv.
• Style and Tone: Score: 5. This reflects that the tone is neutral and appropriate.
• Correctness: Score: 0. This shows that the response incorrectly claims it cannot answer the question despite available data about Lviv. Although only limited details on population, life expectancy, fertility rates, and age demographics are available, they could have been used to at least partially fulfill the request for more information.
• Clarity: Score: 5. This indicates that the response is clear and concise, even though its statement inaccurately reflects its capabilities.
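If you consume verbose results programmatically, the sketch below shows one way to model them. The field names and shape are assumptions inferred from the example above, not a documented schema.

```typescript
// Sketch of a data model for a verbose evaluation result.
// The criterion names come from the example above; the field names
// and structure are assumptions, not a documented schema.
interface CriterionResult {
  criterion: "Relevance" | "Completeness" | "Style and Tone" | "Correctness" | "Clarity";
  score: number; // 0 (worst) to 5 (best)
  explanation: string;
}

// Compute the mean score across criteria as a rough overall figure.
function overallScore(results: CriterionResult[]): number {
  if (results.length === 0) return 0;
  const total = results.reduce((sum, r) => sum + r.score, 0);
  return total / results.length;
}

const example: CriterionResult[] = [
  { criterion: "Relevance", score: 0, explanation: "The response is not relevant." },
  { criterion: "Clarity", score: 5, explanation: "The response is clear and concise." },
];

console.log(overallScore(example)); // 2.5
```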