Configuring and publishing the processes

    1. Open the Read File process.
    2. On the Start tab of the Start step, select the Secure Agent from the Run On list.
    3. Optionally, change the tracing level from Verbose to None on the Advanced tab.
    4. Save and publish the process.
    5. Open the Chat with File process.
    6. On the Start tab of the Start step, select Cloud Server from the Run On list.
    7. On the Temp Fields tab of the Start step, the Model_LLM field is set to gemini-1.5-pro by default. Optionally, edit the model version. For information about changing the model version, see the Gemini documentation.
    8. Optionally, change the tracing level from Verbose to None on the Advanced tab.
    9. Optionally, in the Prepare Request step, enter the prompt instructions in the Assignments field by updating the Prompt_Configuration and Request fields in the Expression Editor, as shown in the following sample code:
    For Prompt_Configuration:
    <generationConfig>
    <stopSequences>.</stopSequences>
    <candidateCount>1</candidateCount>
    <maxOutputTokens>500</maxOutputTokens>
    <temperature>0.1</temperature>
    <topP>0.1</topP>
    <topK>2</topK>
    </generationConfig>
    For Request:
    <Generate_Content_Request>
    <contents>
    <parts>
    <text>This is the text from the file: {$temp.Content_From_File}</text>
    </parts>
    <role>user</role>
    </contents>
    <contents>
    <parts>
    <text>{$input.User_Prompt}</text>
    </parts>
    <role>user</role>
    </contents>
    <generationConfig>
    <stopSequences>{$temp.Prompt_Configuration[1]/stopSequences}</stopSequences>
    <candidateCount>{$temp.Prompt_Configuration[1]/candidateCount}</candidateCount>
    <maxOutputTokens>{$temp.Prompt_Configuration[1]/maxOutputTokens}</maxOutputTokens>
    <temperature>{$temp.Prompt_Configuration[1]/temperature}</temperature>
    <topP>{$temp.Prompt_Configuration[1]/topP}</topP>
    <topK>{$temp.Prompt_Configuration[1]/topK}</topK>
    </generationConfig>
    </Generate_Content_Request>
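For reference, the XML request above maps onto the JSON body that the Gemini generateContent REST endpoint expects. The following Python sketch builds an equivalent payload; the field names follow the public Gemini API, but the helper function and sample values are illustrative, not part of the recipe:

```python
import json

def build_generate_content_request(file_text, user_prompt, config):
    """Build a JSON payload equivalent to the XML Generate_Content_Request:
    two user turns (file contents, then the user's prompt) plus generationConfig."""
    return {
        "contents": [
            {"role": "user",
             "parts": [{"text": f"This is the text from the file: {file_text}"}]},
            {"role": "user",
             "parts": [{"text": user_prompt}]},
        ],
        "generationConfig": {
            "stopSequences": [config["stopSequences"]],
            "candidateCount": config["candidateCount"],
            "maxOutputTokens": config["maxOutputTokens"],
            "temperature": config["temperature"],
            "topP": config["topP"],
            "topK": config["topK"],
        },
    }

# Sample values matching the Prompt_Configuration defaults shown above.
config = {"stopSequences": ".", "candidateCount": 1, "maxOutputTokens": 500,
          "temperature": 0.1, "topP": 0.1, "topK": 2}
payload = build_generate_content_request("<file contents>", "Summarize the file.", config)
print(json.dumps(payload, indent=2))
```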
    For the Prompt_Configuration field, enter values for the following properties:
    - stopSequences: Contains sequences of characters or strings that stop the model's output. This property controls where the model must end its response.
    - candidateCount: Specifies the number of response candidates that the model must generate. For example, if the value is set to 1, the model generates one response. If set to a higher number, the model generates that many alternative responses for the same input.
    - maxOutputTokens: Defines the maximum number of tokens that the model can generate in its response. Setting a limit ensures that the response is concise and fits within the desired length constraints.
    - temperature: Controls the randomness of the model's output. A lower value close to 0 makes the output more deterministic, while a higher value close to 1 increases randomness and creativity. For example, if temperature is set to 0.5, the model balances between deterministic and creative outputs.
    - topP: Determines the cumulative probability threshold for token selection. The model considers the smallest set of tokens whose cumulative probability meets or exceeds topP. For example, if topP is set to 0.1, the model considers only the top 10% most probable tokens at each step.
    - topK: Limits the number of the highest-probability tokens to consider during response generation. For example, if topK is set to 2, the model considers only the top 2 tokens at each step, controlling output diversity and quality.
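To see how topK and topP interact, the following Python sketch filters a toy token distribution the way a sampler would. It is illustrative only and not part of the recipe:

```python
def filter_tokens(probs, top_k, top_p):
    """Keep the top_k most probable tokens, then trim to the smallest
    set whose cumulative probability reaches top_p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append(token)
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

# Toy next-token distribution.
probs = {"the": 0.5, "a": 0.3, "an": 0.15, "this": 0.05}
# topK=2 narrows the pool to 2 tokens; topP=0.1 then keeps only "the".
print(filter_tokens(probs, top_k=2, top_p=0.1))  # prints ['the']
```

With the recipe defaults (topP=0.1, topK=2), sampling is therefore strongly biased toward the single most probable token, which produces conservative, repeatable answers.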
    After you configure the prompt instructions, the process reads the file with the XQuery function fn:unparsed-text, which retrieves the file stored in the database as a text or binary file and returns its contents as a string.
    Note: You can use files in the following formats: .txt, .doc, .docx, .json, and .js.
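The XQuery fn:unparsed-text call behaves like reading an entire file into a single string. A rough Python equivalent is shown below; the file name and helper function are hypothetical:

```python
from pathlib import Path

def unparsed_text(path, encoding="utf-8"):
    """Rough equivalent of XQuery fn:unparsed-text: return the whole
    file's contents as one string."""
    return Path(path).read_text(encoding=encoding)

# Create a sample file, then read it back as a single string.
Path("sample.txt").write_text("Hello from the file.")
content = unparsed_text("sample.txt")
print(content)  # prints "Hello from the file."
```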
    10. Save and publish the process.