How to set the AI LLM model
Configuration Scope: Environment-Specific
This setting is environment-specific and must be configured separately in each environment (dev, test, prod). Changes here will not be included in configuration exports.
Overview
This document provides guidance on configuring AI models for the Function App by updating the AppSettings. The current model being used is gpt-4-32k, and it is configured to auto-update when newer versions are available. Below, you will find steps and information on available models and how to update the model setting.

Updating the Model in AppSettings
Steps:
- Navigate to the Function App.
- Open the AppSettings section.
- Locate the variable AzureOpenAIServiceAssistantDeploymentName or AzureOpenAIServiceChatCompletionsDeploymentName.
- Update this variable to match one of the available model deployment names in Azure AI Foundry. For example:
  AzureOpenAIServiceAssistantDeploymentName = briefconnect-ai-poc-32k
- Save your changes.
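Inside the Function App code, AppSettings values surface as environment variables. A minimal sketch of how the configured deployment name might be read; the variable precedence and the fallback value are assumptions for illustration, not part of the app's actual code:

```python
import os

def get_deployment_name() -> str:
    """Read the configured model deployment name from app settings.

    App settings are exposed to the Function App process as environment
    variables. Which variable applies depends on whether the app uses the
    Assistants API or the Chat Completions API; checking both in this
    order, and falling back to gpt-4-32k (the document's current model),
    are assumptions made for this sketch.
    """
    return (
        os.environ.get("AzureOpenAIServiceAssistantDeploymentName")
        or os.environ.get("AzureOpenAIServiceChatCompletionsDeploymentName")
        or "gpt-4-32k"  # assumed fallback: the current model per this document
    )
```

With the example setting above in place, this helper would return briefconnect-ai-poc-32k without any code change, which is the point of keeping the deployment name in AppSettings rather than in code.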
Adding Support for New Models
To support models such as o1 and o1-mini, the following steps must be completed:
1. Azure Region Support: Ensure the desired models are supported in the Azure region where your resources are deployed.
2. Model Configuration in Azure AI Foundry:
   - Add the desired model to the model list within Azure AI Foundry.
   - Verify that the deployment for the model succeeds.
3. Update Function App AppSettings:
   - Update AzureOpenAIServiceAssistantDeploymentName or AzureOpenAIServiceChatCompletionsDeploymentName to reference the newly added model.
Example Configuration:
If adding o1-mini:
AzureOpenAIServiceAssistantDeploymentName = o1-mini
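The same AppSettings change can be scripted with the Azure CLI instead of the portal. A sketch under assumptions: the resource group and Function App names below are placeholders, and the script only echoes the command so it can be reviewed before being run for real:

```shell
#!/bin/sh
# Placeholders -- substitute your own resource group and Function App name.
RESOURCE_GROUP="my-resource-group"
FUNCTION_APP="my-function-app"
DEPLOYMENT_NAME="o1-mini"

# Build the az command that would update the app setting.
CMD="az functionapp config appsettings set \
--resource-group $RESOURCE_GROUP \
--name $FUNCTION_APP \
--settings AzureOpenAIServiceAssistantDeploymentName=$DEPLOYMENT_NAME"

# Dry run: print the command for review instead of executing it.
# Replace `echo "$CMD"` with `eval "$CMD"` to actually apply the change.
echo "$CMD"
```

Because this setting is environment-specific, the command would need to be run once per environment (dev, test, prod) against the corresponding Function App.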