Prepare for authentication

Before you configure a Large Language Model connection, keep the authentication details for your model provider handy.

Get the API key

You need the API key and endpoint URL to make API calls to the Azure OpenAI chat model or embedding model.
  1. Log in to the Azure portal and open the Azure OpenAI service.
  2. Click the name of the Azure OpenAI resource that you want to connect to.
  3. On the Overview page, click Explore Azure AI Foundry portal.
  4. Under Shared resources, click Deployments.
  5. On the Model deployments tab, click the name of the chat or embedding model for which you need the API key and endpoint URL.
  6. On the Details tab, copy the key and the endpoint URL.
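After you copy the key and endpoint URL, you can verify them with a direct API call. The following Python sketch assembles an Azure OpenAI chat completion request; the endpoint, deployment name, and API version shown are placeholder assumptions that you must replace with your own values.

```python
import json
from urllib import request

# Placeholder values: replace with the key and endpoint URL copied from
# the Details tab, and with your own deployment name and API version.
ENDPOINT = "https://my-resource.openai.azure.com"
DEPLOYMENT = "my-gpt-deployment"
API_KEY = "<your-api-key>"
API_VERSION = "2024-02-01"

def build_chat_request(user_message: str) -> request.Request:
    """Assemble an HTTP request for an Azure OpenAI chat completion."""
    url = (f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}"
           f"/chat/completions?api-version={API_VERSION}")
    body = json.dumps({
        "messages": [{"role": "user", "content": user_message}]
    }).encode("utf-8")
    return request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json", "api-key": API_KEY},
        method="POST",
    )

req = build_chat_request("Say hello!")
# request.urlopen(req) would send the call; omitted here to avoid a live request.
```

Azure OpenAI expects the key in an `api-key` header and the deployment name in the URL path, rather than a `Bearer` token, which is why the request is shaped differently from the OpenAI-style examples below.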

Configure an API request for a custom model

You can connect to a custom model provider through a REST API and use the chat model to process and interpret unstructured data within an intelligent structure model.
When you configure a Large Language Model connection, specify the API request details for the custom model provider in the Configuration field.
The following examples show API requests for large language models from different model providers:
GrokAI model
{
  "headers": {
    "Content-Type": "application/json",
    "Authorization": "Bearer {{API_KEY}}"
  },
  "body": {
    "messages": [
      {
        "role": "system",
        "content": "{{system_message}}"
      },
      {
        "role": "user",
        "content": "{{user_message}}"
      }
    ],
    "model": "grok-4-latest",
    "stream": false,
    "temperature": 0
  }
}
Self-hosted Llama model
{
  "headers": {
    "Content-Type": "application/json"
  },
  "params": {
    "key": "{{API_KEY}}"
  },
  "body": {
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Say hello!"}
    ]
  }
}
Self-hosted Mistral AI model
{
  "headers": {
    "Content-Type": "application/json",
    "api-key": "{{API_KEY}}"
  },
  "body": {
    "messages": [
      {
        "role": "user",
        "content": "{{user_message}}"
      }
    ],
    "model": "mistral-small",
    "temperature": 0.7,
    "max_tokens": 100
  }
}
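The {{...}} tokens in the examples above are placeholders that are resolved at run time. As an illustration only, not the connector's actual implementation, the following Python sketch shows one way such a configuration template can be filled in and parsed:

```python
import json

# Illustrative template using the same placeholder names as the examples
# above ({{API_KEY}}, {{system_message}}, {{user_message}}).
CONFIG_TEMPLATE = """
{
  "headers": {
    "Content-Type": "application/json",
    "Authorization": "Bearer {{API_KEY}}"
  },
  "body": {
    "messages": [
      {"role": "system", "content": "{{system_message}}"},
      {"role": "user", "content": "{{user_message}}"}
    ],
    "model": "grok-4-latest",
    "stream": false,
    "temperature": 0
  }
}
"""

def render(template: str, values: dict) -> dict:
    """Replace each {{name}} placeholder, then parse the result as JSON."""
    for name, value in values.items():
        template = template.replace("{{" + name + "}}", value)
    return json.loads(template)

config = render(CONFIG_TEMPLATE, {
    "API_KEY": "sk-example",
    "system_message": "You are a helpful assistant.",
    "user_message": "Say hello!",
})
```

Because the result must parse as JSON after substitution, keep each placeholder inside a quoted string in the template.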