- •The ChatCompletions action creates a completion for the chat message.
The following input fields are required:
- - chat_completions_request_body - The request body, which consists of a series of messages. The model generates a response to the last message, using the earlier messages as context.
The following snippet is a sample request body:
<CreateChatCompletionRequest>
<messages>
<role>user</role>
<content>I'm in the USA right now</content>
</messages>
<messages>
<role>user</role>
<content>What is the weather tomorrow?</content>
</messages>
<temperature>1</temperature>
<stream>false</stream>
<stop>some stop</stop>
<max_tokens>14</max_tokens>
<presence_penalty>1</presence_penalty>
<frequency_penalty>1</frequency_penalty>
<logit_bias/>
<user>someUser</user>
<n>1</n>
<seed>1</seed>
<logprobs>true</logprobs>
<top_logprobs>1</top_logprobs>
<response_format>
<type>text</type>
</response_format>
</CreateChatCompletionRequest>
- - deployment_id - The name of your model deployment. You must first deploy a model before you can make calls.
- - api_version - The API version to use for this operation. This follows the YYYY-MM-DD or YYYY-MM-DD-preview format.
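The XML elements in the request body mirror an underlying JSON payload. As a minimal sketch (the resource name, deployment name, and API version below are illustrative assumptions, not values from this guide), the same request and endpoint URL could be assembled in Python like this:

```python
import json

# Hypothetical values -- substitute your own resource name, deployment, and version.
RESOURCE = "my-resource"           # assumption: your Azure OpenAI resource name
DEPLOYMENT_ID = "gpt-4.1-mini"     # assumption: your model deployment name
API_VERSION = "2024-02-15-preview"

# JSON equivalent of the XML request body shown above.
payload = {
    "messages": [
        {"role": "user", "content": "I'm in the USA right now"},
        {"role": "user", "content": "What is the weather tomorrow?"},
    ],
    "temperature": 1,
    "stream": False,
    "max_tokens": 14,
    "n": 1,
    "logprobs": True,
    "top_logprobs": 1,
    "response_format": {"type": "text"},
}

url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT_ID}/chat/completions?api-version={API_VERSION}"
)
body = json.dumps(payload)
print(url)
```

The payload would then be sent as an HTTP POST to this URL; the authentication details depend on your connection configuration.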
The following snippet is a sample response:
<root>
<created>1758875013</created>
<prompt_filter_results>
<content_filter_results>
<self_harm>
<severity>safe</severity>
<filtered>false</filtered>
</self_harm>
<jailbreak>
<filtered>false</filtered>
<detected>false</detected>
</jailbreak>
<hate>
<severity>safe</severity>
<filtered>false</filtered>
</hate>
<sexual>
<severity>safe</severity>
<filtered>false</filtered>
</sexual>
<violence>
<severity>safe</severity>
<filtered>false</filtered>
</violence>
</content_filter_results>
<prompt_index>0</prompt_index>
</prompt_filter_results>
<usage>
<completion_tokens>14</completion_tokens>
<prompt_tokens>24</prompt_tokens>
<completion_tokens_details>
<accepted_prediction_tokens>0</accepted_prediction_tokens>
<audio_tokens>0</audio_tokens>
<rejected_prediction_tokens>0</rejected_prediction_tokens>
<reasoning_tokens>0</reasoning_tokens>
</completion_tokens_details>
<prompt_tokens_details>
<audio_tokens>0</audio_tokens>
<cached_tokens>0</cached_tokens>
</prompt_tokens_details>
<total_tokens>38</total_tokens>
</usage>
<model>gpt-4.1-mini-2025-04-14</model>
<id>chatcmpl-CJy1l4oJRg7wiYvF5XJt2lNzp3RcF</id>
<system_fingerprint>fp_4f3d32ad4e</system_fingerprint>
<choices>
<content_filter_results>
<self_harm>
<severity>safe</severity>
<filtered>false</filtered>
</self_harm>
<hate>
<severity>safe</severity>
<filtered>false</filtered>
</hate>
<sexual>
<severity>safe</severity>
<filtered>false</filtered>
</sexual>
<violence>
<severity>safe</severity>
<filtered>false</filtered>
</violence>
</content_filter_results>
<finish_reason>length</finish_reason>
<index>0</index>
<message>
<role>assistant</role>
<refusal/>
<annotations/>
<content>I don't have access to real-time data, including current weather updates.</content>
</message>
<logprobs>
<refusal/>
<content>
<top_logprobs>
<logprob>-0.00092492</logprob>
<bytes>73</bytes>
<token>I</token>
</top_logprobs>
<logprob>-0.00092492</logprob>
<bytes>73</bytes>
<token>I</token>
</content>
...
...
...
<content>
<top_logprobs>
<logprob>-0.00000019</logprob>
<bytes>46</bytes>
<token>.</token>
</top_logprobs>
<logprob>-0.00000019</logprob>
<bytes>46</bytes>
<token>.</token>
</content>
</logprobs>
</choices>
<object>chat.completion</object>
</root>
Chat Completions with Function Calling
The following snippet is a sample payload for the chat_completions_request_body field:
<CreateChatCompletionRequest xmlns:m="urn:informatica:ae:xquery:json2xml:meta-data">
<messages>
<role>user</role>
<content>My name is John and I'm 30 years old</content>
</messages>
<tools m:isArray="true">
<type>function</type>
<function>
<name>get_name_age</name>
<description>Please provide your name and age</description>
<parameters>
<type>object</type>
<properties>
<name>
<type>string</type>
<description>The name of the participant</description>
</name>
<age>
<type>integer</type>
<description>The age of the participant</description>
</age>
</properties>
<required m:isArray="true">name</required>
<required m:isArray="true">age</required>
</parameters>
</function>
</tools>
<logit_bias>
<item>
<id>5935</id>
<bias>-100</bias>
</item>
<item>
<id>9653</id>
<bias>-100</bias>
</item>
<item>
<id>1130</id>
<bias>-100</bias>
</item>
</logit_bias>
<stop m:isArray="true">test value1</stop>
<stop m:isArray="true">test value 2</stop>
<tool_choice>
<type>function</type>
<function>
<name>get_name_age</name>
</function>
</tool_choice>
</CreateChatCompletionRequest>
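Because the XML above is a rendering of a JSON payload (the m:isArray attribute marks JSON arrays), the same function-calling request can be sketched directly as Python data structures. This is an illustrative reconstruction, not output from the connector:

```python
import json

# JSON equivalent of the function-calling request body shown above.
request_body = {
    "messages": [
        {"role": "user", "content": "My name is John and I'm 30 years old"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_name_age",
                "description": "Please provide your name and age",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "name": {"type": "string",
                                 "description": "The name of the participant"},
                        "age": {"type": "integer",
                                "description": "The age of the participant"},
                    },
                    "required": ["name", "age"],
                },
            },
        }
    ],
    # Force the model to call get_name_age rather than answer in prose.
    "tool_choice": {"type": "function", "function": {"name": "get_name_age"}},
}
print(json.dumps(request_body, indent=2))
```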
The following snippet is the sample response:
<root>
<created>1758880263</created>
<prompt_filter_results>
<content_filter_results>
<self_harm>
<severity>safe</severity>
<filtered>false</filtered>
</self_harm>
<jailbreak>
<filtered>false</filtered>
<detected>false</detected>
</jailbreak>
<hate>
<severity>safe</severity>
<filtered>false</filtered>
</hate>
<violence>
<severity>safe</severity>
<filtered>false</filtered>
</violence>
</content_filter_results>
<prompt_index>0</prompt_index>
</prompt_filter_results>
<usage>
<completion_tokens>10</completion_tokens>
<prompt_tokens>80</prompt_tokens>
<completion_tokens_details>
<accepted_prediction_tokens>0</accepted_prediction_tokens>
<audio_tokens>0</audio_tokens>
<rejected_prediction_tokens>0</rejected_prediction_tokens>
<reasoning_tokens>0</reasoning_tokens>
</completion_tokens_details>
<prompt_tokens_details>
<audio_tokens>0</audio_tokens>
<cached_tokens>0</cached_tokens>
</prompt_tokens_details>
<total_tokens>90</total_tokens>
</usage>
<model>gpt-4.1-mini-2025-04-14</model>
<id>chatcmpl-CJzORJKv160B0BdgcfUXQDEbaaC3u</id>
<system_fingerprint>fp_4f3d32ad4e</system_fingerprint>
<choices>
<content_filter_results/>
<finish_reason>stop</finish_reason>
<index>0</index>
<message>
<role>assistant</role>
<refusal/>
<annotations/>
<tool_calls>
<function>
<name>get_name_age</name>
<arguments>{"name":"John","age":33}</arguments>
</function>
<id>call_Ks5cBW0lCsuqJgKJnkG3kb9V</id>
<type>function</type>
</tool_calls>
<content/>
</message>
<logprobs/>
</choices>
<object>chat.completion</object>
</root>
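Note that in the response above, the arguments element of the tool call carries a JSON-encoded string rather than structured XML, so a consumer must parse it before use. A minimal sketch, using the sample value from the response:

```python
import json

# The arguments field of a tool call arrives as a JSON-encoded string.
# Sample value taken from the response above.
arguments_json = '{"name":"John","age":33}'

args = json.loads(arguments_json)
name, age = args["name"], args["age"]
print(name, age)
```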
For more information, see the Azure OpenAI documentation.
- •The Embeddings action creates a vector representation of an input that machine learning models and algorithms can easily consume.
The following input fields are required:
- - embeddings_request_body - The request body to obtain the embeddings.
- - deployment_id - The name of your model deployment. You must first deploy a model before you can make calls.
- - api_version - The API version to use for this operation. This follows the YYYY-MM-DD or YYYY-MM-DD-preview format.
The following snippet is a sample request body for the embeddings_request_body field:
<root>
<input>The food was delicious and the waiter...</input>
<model>text-embedding-3-large</model>
<encoding_format>float</encoding_format>
<dimensions>1536</dimensions>
<user>Jey</user>
</root>
The following snippet is a sample response:
<root>
<data>
<index>0</index>
<embedding>-0.01099226</embedding>
<embedding>0.00275461</embedding>
<embedding>-0.0057186</embedding>
<embedding>-0.008061</embedding>
<embedding>0.00064367</embedding>
<embedding>0.01056043</embedding>
...
...
<embedding>-0.01368799</embedding>
<embedding>-0.00563026</embedding>
<embedding>0.02373806</embedding>
<embedding>0.00878073</embedding>
<embedding>0.00617006</embedding>
<embedding>-0.02632909</embedding>
<embedding>-0.00401087</embedding>
<object>embedding</object>
</data>
<data>
<index>1</index>
<embedding>-0.01123379</embedding>
<embedding>0.04406882</embedding>
<embedding>-0.01900215</embedding>
<embedding>0.00220741</embedding>
<embedding>0.01660522</embedding>
<embedding>-0.02373825</embedding>
...
...
...
<embedding>-0.00999923</embedding>
<embedding>0.02431582</embedding>
<embedding>-0.00141235</embedding>
<embedding>-0.00057848</embedding>
<embedding>-0.0273914</embedding>
<embedding>-0.02011398</embedding>
<object>embedding</object>
</data>
<usage>
<prompt_tokens>12</prompt_tokens>
<total_tokens>12</total_tokens>
</usage>
<model>text-embedding-3-large</model>
<object>list</object>
</root>
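Each data entry in the response holds one embedding vector as repeated embedding elements. Once collected into numeric lists, vectors can be compared, for example with cosine similarity. A minimal sketch using truncated three-dimensional slices of the sample vectors above (real text-embedding-3-large vectors have the requested number of dimensions, 1536 in the sample request):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Truncated slices of the two sample vectors above, for illustration only.
v0 = [-0.01099226, 0.00275461, -0.0057186]
v1 = [-0.01123379, 0.04406882, -0.01900215]
print(round(cosine_similarity(v0, v0), 6))  # identical vectors -> 1.0
```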
For more information, see the Azure OpenAI documentation.
- •The GetModel action returns a single model by the specified ID.
For example:
- - api_version - 2024-02-15-preview
- - model_id - gpt-4o-canvas-2024-09-25
The following snippet is the sample response:
<root>
<capabilities>
<embeddings>false</embeddings>
<completion>false</completion>
<inference>true</inference>
<chat_completion>true</chat_completion>
<fine_tune>false</fine_tune>
</capabilities>
<created_at>1731024000</created_at>
<lifecycle_status>preview</lifecycle_status>
<id>gpt-4o-canvas-2024-09-25</id>
<deprecation>
<inference>1748736000</inference>
</deprecation>
<status>succeeded</status>
<object>model</object>
</root>
- •The GetModelList action returns a list of available models for the specified API version.
For example: api_version - 2024-02-15-preview
The following snippet is the sample response:
<root>
<data>
<capabilities>
<embeddings>false</embeddings>
<completion>false</completion>
<inference>true</inference>
<chat_completion>false</chat_completion>
<fine_tune>false</fine_tune>
</capabilities>
<created_at>1691712000</created_at>
<lifecycle_status>generally-available</lifecycle_status>
<id>dall-e-3-3.0</id>
<deprecation>
<inference>1769817600</inference>
</deprecation>
<status>succeeded</status>
<object>model</object>
</data>
...
...
...
<data>
<capabilities>
<embeddings>false</embeddings>
<completion>false</completion>
<inference>true</inference>
<chat_completion>false</chat_completion>
<fine_tune>false</fine_tune>
</capabilities>
<created_at>1744934400</created_at>
<lifecycle_status>preview</lifecycle_status>
<id>gpt-image-1</id>
<deprecation>
<inference>1761868800</inference>
</deprecation>
<status>succeeded</status>
<object>model</object>
</data>
<object>list</object>
</root>
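The capabilities flags in each data entry can be used to filter the model list on the client side. A minimal sketch, assuming the XML response has already been parsed into Python dictionaries (the entries below are illustrative, abbreviated from the samples above):

```python
# Abbreviated model entries mirroring the GetModelList and GetModel samples.
models = [
    {"id": "dall-e-3-3.0", "capabilities": {"chat_completion": False}},
    {"id": "gpt-image-1", "capabilities": {"chat_completion": False}},
    {"id": "gpt-4o-canvas-2024-09-25", "capabilities": {"chat_completion": True}},
]

# Keep only models that support chat completions.
chat_models = [m["id"] for m in models if m["capabilities"]["chat_completion"]]
print(chat_models)
```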
- •The Create Response action creates a response object.
Example of a CreateResponseRequest object:
<CreateResponseRequest xmlns:m="urn:informatica:ae:xquery:json2xml:meta-data">
<model>gpt-4.1</model>
<input m:isArray="true">
<type>message</type>
<role>user</role>
<content>Define and explain the concept of catastrophic forgetting?</content>
</input>
<previous_response_id>resp_68d14ea51afc8196aecd8308297ed1ee05b2d33fc583cfec</previous_response_id>
<temperature>0.7</temperature>
<max_output_tokens>200</max_output_tokens>
</CreateResponseRequest>
The following snippet is the sample response:
<root>
<instructions/>
<metadata/>
<max_tool_calls/>
<reasoning>
<summary/>
<effort/>
</reasoning>
<usage>
<input_tokens_details>
<cached_tokens>0</cached_tokens>
</input_tokens_details>
<total_tokens>433</total_tokens>
<output_tokens>200</output_tokens>
<input_tokens>233</input_tokens>
<output_tokens_details>
<reasoning_tokens>0</reasoning_tokens>
</output_tokens_details>
</usage>
<created_at>1758882238</created_at>
<safety_identifier/>
<error/>
<tools/>
<content_filters/>
<output>
<role>assistant</role>
<id>msg_68d669bf326c8196943c9b5b5f9febc205b2d33fc583cfec</id>
<type>message</type>
<content>
<annotations/>
<text>**Catastrophic forgetting** (also called **catastrophic interference**) is a phenomenon in artificial neural networks where the model rapidly loses or overwrites previously learned knowledge when it is trained on new tasks or data, especially in a sequential manner.
### Explanation
- **Sequential Learning:** When a neural network is trained on one task and then on another, the learning process for the new task updates the network’s weights.
- **Interference:** These updates can interfere with and overwrite the information stored from previous tasks, leading to a sharp decline in performance on those earlier tasks.
- **Why it Happens:** Unlike the human brain, which can retain old knowledge while learning new things, standard neural networks lack mechanisms to preserve previous learning unless specifically designed to do so.
### Example
Suppose a neural network is trained to recognize handwritten digits (like in the MNIST dataset). Later, the same network is trained (without access to the old digit data) to recognize different types of flowers. After training on</text>
<type>output_text</type>
</content>
<status>incomplete</status>
</output>
<top_p>1</top_p>
<previous_response_id>resp_68d14ea51afc8196aecd8308297ed1ee05b2d33fc583cfec</previous_response_id>
<temperature>0.7</temperature>
<tool_choice>auto</tool_choice>
<model>gpt-4.1</model>
<service_tier>default</service_tier>
<id>resp_68d669be7e8c8196bc44149363affd9305b2d33fc583cfec</id>
<text>
<format>
<type>text</type>
</format>
</text>
<incomplete_details>
<reason>max_output_tokens</reason>
</incomplete_details>
<prompt_cache_key/>
<truncation>disabled</truncation>
<store>true</store>
<parallel_tool_calls>true</parallel_tool_calls>
<background>false</background>
<user/>
<object>response</object>
<status>incomplete</status>
<max_output_tokens>200</max_output_tokens>
</root>
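The status and incomplete_details fields in the sample response show that generation stopped because the 200-token max_output_tokens budget was exhausted. A caller can detect this and retry with a larger budget; a minimal sketch, where the doubling retry policy is an assumption, not connector behavior:

```python
# Fields mirror the sample response above, parsed into a Python dictionary.
response = {
    "status": "incomplete",
    "incomplete_details": {"reason": "max_output_tokens"},
    "max_output_tokens": 200,
}

def next_token_budget(resp, factor=2):
    """Return a larger output budget when the response was cut off by the cap."""
    if (resp["status"] == "incomplete"
            and resp["incomplete_details"]["reason"] == "max_output_tokens"):
        return resp["max_output_tokens"] * factor
    return None  # response was complete; no retry needed

print(next_token_budget(response))  # -> 400
```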
- •The Get Response action returns a single response object.
Example of input parameters:
- - api_version - 2025-04-01-preview
- - response_id - resp_68d669be7e8c8196bc44149363affd9305b2d33fc583cfec
The following snippet is the sample response:
<root>
<instructions/>
<metadata/>
<max_tool_calls/>
<reasoning>
<summary/>
<effort/>
</reasoning>
<usage>
<input_tokens_details>
<cached_tokens>0</cached_tokens>
</input_tokens_details>
<total_tokens>433</total_tokens>
<output_tokens>200</output_tokens>
<input_tokens>233</input_tokens>
<output_tokens_details>
<reasoning_tokens>0</reasoning_tokens>
</output_tokens_details>
</usage>
<created_at>1758882238</created_at>
<safety_identifier/>
<error/>
<tools/>
<content_filters/>
<output>
<role>assistant</role>
<id>msg_68d669bf326c8196943c9b5b5f9febc205b2d33fc583cfec</id>
<type>message</type>
<content>
<annotations/>
<text>**Catastrophic forgetting** (also called **catastrophic interference**) is a phenomenon in artificial neural networks where the model rapidly loses or overwrites previously learned knowledge when it is trained on new tasks or data, especially in a sequential manner.
### Explanation
- **Sequential Learning:** When a neural network is trained on one task and then on another, the learning process for the new task updates the network’s weights.
- **Interference:** These updates can interfere with and overwrite the information stored from previous tasks, leading to a sharp decline in performance on those earlier tasks.
- **Why it Happens:** Unlike the human brain, which can retain old knowledge while learning new things, standard neural networks lack mechanisms to preserve previous learning unless specifically designed to do so.
### Example
Suppose a neural network is trained to recognize handwritten digits (like in the MNIST dataset). Later, the same network is trained (without access to the old digit data) to recognize different types of flowers. After training on</text>
<type>output_text</type>
</content>
<status>incomplete</status>
</output>
<top_p>1</top_p>
<previous_response_id>resp_68d14ea51afc8196aecd8308297ed1ee05b2d33fc583cfec</previous_response_id>
<temperature>0.7</temperature>
<tool_choice>auto</tool_choice>
<model>gpt-4.1</model>
<service_tier>default</service_tier>
<id>resp_68d669be7e8c8196bc44149363affd9305b2d33fc583cfec</id>
<text>
<format>
<type>text</type>
</format>
</text>
<incomplete_details>
<reason>max_output_tokens</reason>
</incomplete_details>
<prompt_cache_key/>
<truncation>disabled</truncation>
<store>true</store>
<parallel_tool_calls>true</parallel_tool_calls>
<background>false</background>
<user/>
<object>response</object>
<status>incomplete</status>
<max_output_tokens>200</max_output_tokens>
</root>
- •The Get List Input Items action returns the list of input items for a response.
Example of input parameters:
- - api_version - 2025-04-01-preview
- - response_id - resp_68d669be7e8c8196bc44149363affd9305b2d33fc583cfec
The following snippet is the sample response:
<root>
<first_id>msg_68d669be80d08196a133aef64b6f767f05b2d33fc583cfec</first_id>
<data>
<role>user</role>
<id>msg_68d669be80d08196a133aef64b6f767f05b2d33fc583cfec</id>
<type>message</type>
<content>
<text>Define and explain the concept of catastrophic forgetting?</text>
<type>input_text</type>
</content>
<status>completed</status>
</data>
<data>
<role>assistant</role>
<id>msg_68d14ea587588196b23d4e6874d5834105b2d33fc583cfec</id>
<type>message</type>
<content>
<annotations/>
<text>**Catastrophic forgetting** (also known as **catastrophic interference**) is a phenomenon in artificial neural networks where the model **forgets previously learned information upon learning new information**. This typically occurs when a neural network is trained sequentially on multiple tasks or data distributions.
### Explanation
- **Sequential Learning Problem:** In standard neural networks, weights are updated to minimize error on the current task or data. When the network is trained on a new task, these weights are modified, often overwriting or interfering with the knowledge acquired from previous tasks.
- **Result:** The network's performance on earlier tasks drops significantly after training on new tasks, sometimes to the level of random guessing.
### Example
Suppose a neural network is first trained to classify animals (e.g., cats vs. dogs). Later, it is trained to classify vehicles (e.g., cars vs. trucks) using the same weights and architecture. After learning vehicles, the network may perform very poorly on the animal classification task</text>
<type>output_text</type>
</content>
<status>incomplete</status>
</data>
<data>
<role>user</role>
<id>msg_68d14ea51c748196b42c0e74f88e29eb05b2d33fc583cfec</id>
<type>message</type>
<content>
<text>Define and explain the concept of catastrophic forgetting?</text>
<type>input_text</type>
</content>
<status>completed</status>
</data>
<last_id>msg_68d14ea51c748196b42c0e74f88e29eb05b2d33fc583cfec</last_id>
<has_more>false</has_more>
<object>list</object>
</root>
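The data entries mix user inputs and assistant outputs, and the content of each item is a list of typed parts. A minimal sketch of extracting the completed user inputs, assuming the XML has been parsed into Python dictionaries (the entries below are abbreviated from the sample above):

```python
# Abbreviated input items mirroring the sample response above.
items = [
    {"role": "user", "status": "completed",
     "content": [{"type": "input_text",
                  "text": "Define and explain the concept of catastrophic forgetting?"}]},
    {"role": "assistant", "status": "incomplete",
     "content": [{"type": "output_text",
                  "text": "**Catastrophic forgetting** ..."}]},
]

# Collect the text of completed user messages only.
user_texts = [
    part["text"]
    for item in items
    if item["role"] == "user" and item["status"] == "completed"
    for part in item["content"]
    if part["type"] == "input_text"
]
print(user_texts)
```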
- •The Delete Response action deletes a response by the specified ID.
Example of input parameters:
- - api_version - 2025-04-01-preview
- - response_id - resp_68d669be7e8c8196bc44149363affd9305b2d33fc583cfec
The following snippet is the sample response:
<root>
<deleted>true</deleted>
<id>resp_68d669be7e8c8196bc44149363affd9305b2d33fc583cfec</id>
<object>response.deleted</object>
</root>