Step 3. Configure and publish the processes

Configure the deployment details of the LLM model and publish the processes.
    1. To publish the Get Content from File process, click Actions in the row that contains the process, and then select Publish.
    2. Open the Chat with File process.
    3. On the Temp Fields tab of the Start step, in the Model_id field, enter the deployment ID of the deployed LLM model.
    The default model ID is anthropic.claude-3-sonnet-20240229-v1:0. You can optionally edit it.
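    Bedrock model IDs follow a provider-prefixed naming convention. The following sketch uses a hypothetical helper, parse_model_id, which is not part of the recipe, to show how the default value breaks down:

    ```python
    def parse_model_id(model_id: str) -> dict:
        """Split a Bedrock model ID of the form <provider>.<model-name-and-version>."""
        provider, _, model = model_id.partition(".")
        return {"provider": provider, "model": model}

    print(parse_model_id("anthropic.claude-3-sonnet-20240229-v1:0"))
    # {'provider': 'anthropic', 'model': 'claude-3-sonnet-20240229-v1:0'}
    ```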
    4. Optionally, in the Configure Request Parameters step, update the Prompt_Configuration field in the Assignments section by using the Expression Editor, as shown in the following sample code:
    For Prompt_Configuration:
    <GenerationConfig_AwsBedrock_PO-1>
        <Temperature>0.5</Temperature>
        <TopP>0.6</TopP>
        <MaxTokens>500</MaxTokens>
    </GenerationConfig_AwsBedrock_PO-1>
    For the Prompt_Configuration field, enter values for the following properties:
    temperature
    Controls the randomness of the model's output. A value close to 0 makes the output more deterministic, while a value close to 1 increases randomness and creativity. For example, a temperature of 0.5 balances deterministic and creative output.
    topP
    Determines the cumulative probability threshold for token selection. The model considers the smallest set of tokens whose cumulative probability meets or exceeds topP. For example, if topP is set to 0.1, the model samples only from the most probable tokens that together account for 10% of the probability mass.
    max_tokens
    Defines the maximum number of tokens that the model can generate in the response. The combined length of the prompt and the response can't exceed the model's context length, which varies by model.
    5. Optionally, in the Prepare Request step, enter the prompt instructions in the Assignments section by updating the Request field using the Expression Editor, as shown in the following sample code:
    For Request:
    <root xmlns:m="urn:informatica:ae:xquery:json2xml:meta-data">
        <system m:isArray="true">
            <type>text</type>
            <text>You are a helpful assistant. {$temp.Content_From_File}</text>
        </system>
        <messages m:isArray="true">
            <role>user</role>
            <content m:isArray="true">
                <type>text</type>
                <text>{$input.User_Prompt}</text>
            </content>
        </messages>
        <inferenceConfig>
            <max_tokens m:type="xs:int">{$temp.Prompt_Configuration[1]/MaxTokens}</max_tokens>
            <temperature m:type="xs:double">{$temp.Prompt_Configuration[1]/Temperature}</temperature>
            <topP m:type="xs:double">{$temp.Prompt_Configuration[1]/TopP}</topP>
        </inferenceConfig>
    </root>
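    In the XML above, the json2xml meta-data namespace controls the JSON serialization: elements marked m:isArray="true" become JSON arrays, and m:type coerces numeric values. The following sketch shows the shape of the resulting JSON body; the helper name and placeholder arguments are illustrative, not part of the recipe:

    ```python
    import json

    def build_request(file_content: str, user_prompt: str,
                      max_tokens: int = 500,
                      temperature: float = 0.5,
                      top_p: float = 0.6) -> str:
        """Assemble a JSON body with the same structure as the XML request sample."""
        body = {
            # m:isArray="true" on <system> produces a JSON array
            "system": [
                {"type": "text",
                 "text": f"You are a helpful assistant. {file_content}"}
            ],
            # <messages> and the nested <content> are also arrays
            "messages": [
                {"role": "user",
                 "content": [{"type": "text", "text": user_prompt}]}
            ],
            # m:type="xs:int"/"xs:double" yield JSON numbers, not strings
            "inferenceConfig": {
                "max_tokens": max_tokens,
                "temperature": temperature,
                "topP": top_p,
            },
        }
        return json.dumps(body)
    ```

    For example, build_request("<file text>", "Summarize the file.") returns a JSON string whose messages array carries the user prompt and whose inferenceConfig carries the numeric sampling values.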
    6. Save and publish the process.