Before you configure a Large Language Model connection, keep the authentication details for your model provider handy.
• To connect to Azure OpenAI, get the API key and endpoint URL of your Azure OpenAI account from the Azure portal.
• To connect to a custom model provider, get the API request details, such as the parameters, headers, and body, for the custom model provider.
Get the API key
You need the API key and endpoint URL to make API calls to the Azure OpenAI chat model or embedding model.
1. Log in to the Azure portal and open the Azure OpenAI service.
2. Click the name of the Azure OpenAI resource that you want to connect to.
3. On the Overview page, click Explore Azure AI Foundry portal.
4. Under Shared resources, click Deployments.
5. On the Model deployments tab, click the name of the chat or embedding model for which you need the API key and endpoint URL.
6. On the Details tab, copy the key and the endpoint URL.
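After you copy the key and endpoint URL, you can confirm they are well formed by assembling a chat completions request against the Azure OpenAI REST API. The following sketch uses placeholder values for the resource, deployment, key, and API version; substitute your own. The request is built but intentionally not sent, because the placeholder credentials are not real.

```python
import json
import urllib.request

endpoint = "https://my-resource.openai.azure.com"  # placeholder endpoint URL from the Details tab
deployment = "my-chat-deployment"                  # placeholder deployment name
api_key = "YOUR_API_KEY"                           # placeholder key from the Details tab
api_version = "2024-02-01"                         # example API version; check your service for the current one

# Azure OpenAI chat completions endpoint for a specific deployment
url = (
    f"{endpoint}/openai/deployments/{deployment}"
    f"/chat/completions?api-version={api_version}"
)
headers = {
    "Content-Type": "application/json",
    "api-key": api_key,  # Azure OpenAI expects the key in the api-key header
}
body = {"messages": [{"role": "user", "content": "Hello"}]}

# Build the request object; calling urllib.request.urlopen(request) would send it.
request = urllib.request.Request(
    url, data=json.dumps(body).encode("utf-8"), headers=headers, method="POST"
)
print(request.full_url)
```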
Configure an API request for a custom model
You can connect to a custom model provider through a REST API and use the chat model to process and interpret unstructured data within an intelligent structure model.
When you configure a Large Language Model connection, specify the API request details for the custom model provider in the Configuration field.
Consider the following examples for an API request for large language models from different model providers:
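As a rough illustration of the kind of details involved, the sketch below assembles the URL, method, headers, parameters, and body for a hypothetical custom provider. Every name and value here is a placeholder, not taken from any specific provider; use the endpoint, header names, and body fields that your provider's API documentation specifies.

```python
import json

# Hypothetical request details for a custom chat model provider.
request_details = {
    "url": "https://api.example-provider.com/v1/chat",  # placeholder endpoint
    "method": "POST",
    "headers": {
        "Authorization": "Bearer YOUR_API_KEY",  # placeholder auth header
        "Content-Type": "application/json",
    },
    "parameters": {"temperature": 0.2, "max_tokens": 512},  # illustrative values
    "body": {
        "model": "example-chat-model",  # placeholder model name
        "messages": [{"role": "user", "content": "Summarize this document."}],
    },
}

print(json.dumps(request_details, indent=2))
```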