How to set the AI LLM model

Configuration Scope: Environment-Specific

This setting is environment-specific and must be configured separately in each environment (dev, test, prod). Changes here will not be included in configuration exports.

Overview


This document describes how to configure the AI model used by the Function App by updating its AppSettings. The current model is gpt-4-32k, and its deployment is configured to auto-update when newer model versions become available. The sections below describe how to update the model setting and how to add support for new models.
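For context on how the setting is consumed: Function App AppSettings are exposed to the running application as environment variables, so the app reads the deployment name at runtime. The Python sketch below illustrates the pattern; the endpoint and key setting names (AzureOpenAIServiceEndpoint, AzureOpenAIServiceKey) and the api_version are placeholders for illustration, not necessarily the names the app actually uses.

    import os

    from openai import AzureOpenAI

    # Function App AppSettings surface as environment variables at runtime.
    deployment = os.environ["AzureOpenAIServiceChatCompletionsDeploymentName"]

    # The endpoint/key setting names and api_version here are hypothetical
    # placeholders; substitute the values used by the actual app.
    client = AzureOpenAI(
        azure_endpoint=os.environ["AzureOpenAIServiceEndpoint"],
        api_key=os.environ["AzureOpenAIServiceKey"],
        api_version="2024-06-01",
    )

    # Whatever deployment name the AppSetting holds is passed as "model".
    response = client.chat.completions.create(
        model=deployment,
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)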



Updating the Model in AppSettings

Steps:

  1. Navigate to the Function App.

  2. Open the AppSettings section.

  3. Locate the setting AzureOpenAIServiceAssistantDeploymentName (used by the Assistants flow) or AzureOpenAIServiceChatCompletionsDeploymentName (used by the chat completions flow), depending on which flow you are changing.

  4. Update this variable to match the name of a model deployment available in Azure AI Foundry. For example:

    AzureOpenAIServiceAssistantDeploymentName = briefconnect-ai-poc-32k
    
  5. Save your changes. Note that saving AppSettings restarts the Function App, so the new model takes effect immediately.
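Because this setting is environment-specific, the same change can also be scripted and repeated per environment instead of edited in the portal. Below is a minimal sketch using the Azure SDK for Python (azure-identity and azure-mgmt-web); the subscription ID, resource group, and Function App name are placeholders.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.web import WebSiteManagementClient

    # Placeholders; fill in the values for the target environment.
    SUBSCRIPTION_ID = "<subscription-id>"
    RESOURCE_GROUP = "<resource-group>"
    FUNCTION_APP = "<function-app-name>"

    client = WebSiteManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    # Read the current AppSettings, update the deployment name, write them back.
    settings = client.web_apps.list_application_settings(RESOURCE_GROUP, FUNCTION_APP)
    settings.properties["AzureOpenAIServiceAssistantDeploymentName"] = "briefconnect-ai-poc-32k"
    client.web_apps.update_application_settings(RESOURCE_GROUP, FUNCTION_APP, settings)

Run this once per environment (dev, test, prod), since the setting is not included in configuration exports.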

Adding Support for New Models

To support models such as o1 and o1-mini, the following steps must be completed:

  1. Azure Region Support: Ensure the desired models are supported in the Azure region where your resources are deployed.

  2. Model Configuration in Azure AI Foundry:

    • Add the desired model to the model list within Azure AI Foundry.

    • Verify that the deployment for the model succeeds (a quick smoke test is sketched after this list).

  3. Update Function App AppSettings:

    • Update AzureOpenAIServiceAssistantDeploymentName or AzureOpenAIServiceChatCompletionsDeploymentName to reference the newly added model.
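Before pointing the Function App at a new deployment, it can help to smoke-test the deployment directly. The sketch below is illustrative, not taken from the app: the endpoint, key, and api_version are placeholders, and note that o1-family models may reject "system" messages and expect max_completion_tokens rather than max_tokens.

    from openai import AzureOpenAI

    # Placeholders; use the Azure OpenAI endpoint, key, and an api_version
    # supported by your resource for this environment.
    client = AzureOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com",
        api_key="<api-key>",
        api_version="2024-12-01-preview",
    )

    # "model" is the deployment name created in Azure AI Foundry, not the
    # base model name. A plain user message avoids o1-family restrictions
    # on "system" messages.
    response = client.chat.completions.create(
        model="o1-mini",
        messages=[{"role": "user", "content": "Reply with OK if you are reachable."}],
        max_completion_tokens=1000,
    )
    print(response.choices[0].message.content)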

Example Configuration:

If adding o1-mini (the value must match the deployment name created in Azure AI Foundry):

AzureOpenAIServiceAssistantDeploymentName = o1-mini