When you publish an Ollama connection in Application Integration, the Actions and Objects appear on the Metadata tab.
Consider the following information when you use specific actions and objects:
•The Chat Completion action generates a chat response from a provided model, without streaming the output.
To use this action, specify the following input fields:
- ChatCompletionsRequestBody: The request body consists of a series of messages. The model generates a response to the last message, using earlier messages as context.
- model: The name of your deployed model. You must deploy a model before making calls.
The following snippet is a sample of the request body:
<ChatCompletionsRequestBody xmlns:m="urn:informatica:ae:xquery:json2xml:meta-data">
  <model>llama3.2:Latest</model>
  <messages m:isArray="true">
    <role>system</role>
    <content>you are a helpful assistant</content>
  </messages>
  <messages m:isArray="true">
    <role>user</role>
    <content>why is the sky blue</content>
  </messages>
  <stream>false</stream>
</ChatCompletionsRequestBody>
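For context, the request body above maps onto the JSON payload that Ollama's chat endpoint accepts. The following Python sketch is illustrative only, not part of the connector: the helper name build_chat_request is an assumption, and the sketch builds the payload locally without calling a server.

```python
import json

# Illustrative helper (not part of the connector): builds the JSON payload
# that corresponds to the ChatCompletionsRequestBody sample above.
def build_chat_request(model, messages, stream=False):
    return {"model": model, "messages": messages, "stream": stream}

payload = build_chat_request(
    "llama3.2:Latest",
    [
        {"role": "system", "content": "you are a helpful assistant"},
        {"role": "user", "content": "why is the sky blue"},
    ],
)
print(json.dumps(payload, indent=2))
```

As in the XML sample, the model responds to the last message and treats the earlier messages as context.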
•The Embeddings action generates embeddings from a model.
To use this action, specify the following input fields:
- embeddingsRequest: The request body that contains the input text for which to generate embeddings.
- model: The name of your deployed model. You must deploy a model before making calls.
The following snippet is a sample of the request body:
<EmbeddingsRequestBody xmlns:m="urn:informatica:ae:xquery:json2xml:meta-data">
  <model>llama3.2:Latest</model>
  <prompt>Here is an article</prompt>
  <temperature>0.7</temperature>
  <top_k>50</top_k>
  <top_p>0.7</top_p>
</EmbeddingsRequestBody>
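As with the chat action, the embeddings request body corresponds to a JSON payload. This sketch is illustrative only; the helper name build_embeddings_request is an assumption, and the optional sampling fields from the sample (temperature, top_k, top_p) are simply passed through.

```python
import json

# Illustrative helper (not part of the connector): builds the JSON payload
# corresponding to the EmbeddingsRequestBody sample above.
def build_embeddings_request(model, prompt, **options):
    body = {"model": model, "prompt": prompt}
    body.update(options)  # optional fields such as temperature, top_k, top_p
    return body

payload = build_embeddings_request(
    "llama3.2:Latest", "Here is an article",
    temperature=0.7, top_k=50, top_p=0.7,
)
print(json.dumps(payload, indent=2))
```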
•The Generate action generates a response for a given prompt with a provided model, with streaming output. This is a streaming endpoint that returns a series of response objects; the final response object includes statistics and additional data about the request.
To use this action, specify the following input fields:
- GenerateRequestBody: The request body that contains the prompt for which the model generates a response.
- model: The name of your deployed model. You must deploy a model before making calls.
The following snippet is a sample of the request body:
<GenerateRequestBody xmlns:m="urn:informatica:ae:xquery:json2xml:meta-data">
  <model>llama3.2:Latest</model>
  <prompt>Why is the sky blue?</prompt>
</GenerateRequestBody>
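Because this is a streaming endpoint, the response arrives as a series of newline-delimited JSON objects rather than a single body. The sketch below shows how such a stream can be assembled into a full response; the sample chunks and their field values are made up for illustration, following the chunk shape described above (a response fragment per chunk, with statistics on the final chunk).

```python
import json

# Illustrative sketch (not the connector's implementation): each streamed
# chunk carries a "response" fragment; the final chunk has "done": true
# and additional statistics. The chunk data below is assumed sample data.
sample_stream = [
    '{"model":"llama3.2:Latest","response":"The sky ","done":false}',
    '{"model":"llama3.2:Latest","response":"is blue.","done":false}',
    '{"model":"llama3.2:Latest","response":"","done":true,"total_duration":123456}',
]

text = ""
stats = None
for line in sample_stream:
    chunk = json.loads(line)
    text += chunk["response"]
    if chunk.get("done"):
        # Keep whatever extra fields the final chunk carries as statistics.
        stats = {k: v for k, v in chunk.items()
                 if k not in ("model", "response", "done")}

print(text)   # accumulated response: "The sky is blue."
print(stats)  # statistics from the final chunk
```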
•The List Local Models action lists all the models you have downloaded locally.
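The list of local models comes back as structured data. The sketch below is illustrative only, with an assumed sample response shape (a models array whose entries include a name field); it shows how model names might be extracted from such a response.

```python
import json

# Illustrative sketch with assumed sample data: a response listing the
# locally downloaded models, from which we extract the model names.
sample_response = json.loads(
    '{"models": [{"name": "llama3.2:latest", "size": 2019393189},'
    ' {"name": "nomic-embed-text:latest", "size": 274302450}]}'
)

names = [m["name"] for m in sample_response["models"]]
print(names)
```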