Model Configuration

Overview

An LLM model and provider can be configured for your Profound AI instance via the config.js file. To add a new LLM model, add (or modify) the models property to include the desired model and provider.

Model Properties

The following table provides a list of available properties for the models configuration option, along with descriptions and additional information related to each property.

Property

Description

Additional Information

provider

The specific provider of the LLM model the Profound AI Agent should use.

When this property is specified (e.g. openai, google, mistral, anthropic, groq), the endpoint and apiFormat properties assume default values for each respective provider.
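For instance, a minimal entry that relies on these provider defaults might look like the following sketch (the model name and key shown are placeholders):

```javascript
models: {
    "GPT-4o": {
        provider: "openai",   // endpoint and apiFormat assume OpenAI defaults
        model: "gpt-4o",
        apiKey: "sk-....."    // placeholder key
    }
}
```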

cloud

The cloud provider of the LLM model that the AI Agent should use.

This option is only needed if the model is hosted with a cloud provider – like azure or aws (for example).

endpoint

Specify a custom API endpoint URL for the model.

This may be necessary for custom, open-source, or in-house LLM models.

resource

Specify the resource name in place of the full endpoint.

For Azure OpenAI only.

awsAccessKey

Specify the AWS access key ID.

For aws cloud only.

awsSecretKey

Specify the AWS secret access key.

For aws cloud only.

awsRegion

Specify the AWS region.

For aws cloud only.

model

The specific LLM model the Profound AI Agent should use.

The provider or endpoint may expose multiple models. This property identifies one specific model from that provider/endpoint.

apiFormat

Override the default format in which data is exchanged with the model.

Providers may support various formats.

For example, OpenAI offers the completion, assistant, and responses APIs.

useMessages

Indicates whether the model provider’s API supports a list of conversation messages as input.

Default value is true.

Set this to false if the model API cannot process conversation messages as a list; the information will then be sent to the API as a single text prompt containing the messages.

apiKey

Contains the API key (or secret key) associated with the specific model.

These keys are used for authentication when making requests to the API.

genApiKey

Alternate method of providing an API key by generating it dynamically.

genApiKey: async () => {
    // logic to dynamically retrieve the API key goes here
}
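For example, a genApiKey function might retrieve the key asynchronously each time it is needed. The sketch below (with a hypothetical key-file path and model entry) reads the key from a local file:

```javascript
"My Model": {
    provider: "openai",
    model: "gpt-4o",
    genApiKey: async () => {
        // Hypothetical: read the API key from a local file at request time
        const { readFile } = require("fs/promises");
        return (await readFile("/path/to/openai-key.txt", "utf8")).trim();
    }
}
```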

apiVersion

Set the specific API version.

Only needed if the API requires a version.

defaultAssistantId

Specify the default assistant ID for use with the assistant API format.

When using the openai provider, the assistant API format is currently the default. This format requires configuring an assistant object on the OpenAI platform.

If this property is missing when using OpenAI's assistant API format, Profound AI will automatically set up an assistant object and log a message containing the defaultAssistantId on the server. The config.js file should then be updated with this defaultAssistantId.

showCitations

Specify that the Agent should cite Knowledge Documents by showing the document name in the response, along with the document text.

Set this property to true to enable this feature.

The document text information is shown when the user hovers over the document name.

stream

Specify that the model API should stream its output as it is generated, rather than sending the entire output at once.

Some models may not support this capability.

Set this property to true to enable this feature.

suppressStatusMessage

Specify that status messages (such as “Calling Data Access”) should be suppressed when the model is processing information.

Set this property to true to enable this feature.

This property ensures that the end-user only sees the model’s final response.

voice

Configure voice recognition capabilities for the LLM model.

Due to browser security measures, this feature requires that your Profound AI instance be set up to use HTTPS (SSL). Otherwise, the browser will not give permission to any microphone/voice feature.

Contains the following sub-properties:

  • enabled - A Boolean value indicating whether voice recognition is enabled for this model. Defaults to true.

  • lang - A String specifying the language code for voice recognition (e.g., en-US for English). Defaults to en-US.
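A model entry with voice recognition configured might look like this sketch (the model name and key are placeholders):

```javascript
"GPT-4o": {
    provider: "openai",
    model: "gpt-4o",
    apiKey: process.env.OPENAI_API_KEY,
    voice: {
        enabled: true,   // voice recognition on (default)
        lang: "en-US"    // language code for recognition (default)
    }
}
```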

rag

Specify how the model will use RAG by providing an object with various RAG options.

RAG (or Retrieval-Augmented Generation) is a technique in natural language processing that Profound AI uses to search Knowledge Documents based on questions from the end-user.

Example options:

  • provider - default is openai; llamaindex can also be used

  • embeddingModel

  • database
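For illustration, a rag object might be configured as in the sketch below. The embeddingModel value shown is an assumed OpenAI embedding model name, and the surrounding model entry is a placeholder; check the provider's documentation for supported values:

```javascript
"GPT-4o": {
    provider: "openai",
    model: "gpt-4o",
    apiKey: process.env.OPENAI_API_KEY,
    rag: {
        provider: "openai",                       // the default RAG provider
        embeddingModel: "text-embedding-3-small"  // assumed embedding model name
    }
}
```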

additionalParams

Specify additional parameters that should be passed directly to the model API.

This can be specified as a JavaScript function that receives the parameters generated by Profound AI and adjusts them dynamically.

See Example 2 below for a coded example of this property.

The specific parameters supported within additionalParams depend on the model provider and the API being used.

Common parameters:

  • temperature

  • max_completion_tokens

  • top_p

  • frequency_penalty

  • reasoning_effort - limits the effort applied to reasoning for reasoning models; valid values are low, medium, and high
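Because additionalParams can also be specified as a JavaScript function, it can adjust the generated parameters dynamically. The sketch below assumes the function receives the parameters object generated by Profound AI and returns the adjusted object; the exact signature is an assumption, so treat this as illustrative:

```javascript
// Hypothetical sketch: additionalParams as a function that adjusts
// the parameters generated by Profound AI before the API call.
const additionalParams = (params) => {
    // Pin the sampling temperature and cap token usage
    return { ...params, temperature: 0.7, max_completion_tokens: 100 };
};
```

In config.js, such a function would be assigned to the additionalParams property of a model entry in place of a plain object.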

Using Environment Variables for API Keys

Instead of placing your API keys directly into the Profound AI configuration (config.js file), it is recommended to use environment variables. Environment variables can be placed in a .env file.

For example, the file contents below store the OpenAI API Key and the OpenAI Assistant Id:

OPENAI_API_KEY=sk-......
OPENAI_ASSISTANT_ID=asst_......

Then, your configuration file may refer to the environment variables, as demonstrated in the following example:

models: {
    "GPT-4o": {
        provider: "openai",
        model: "gpt-4o",
        apiKey: process.env.OPENAI_API_KEY,
        defaultAssistantId: process.env.OPENAI_ASSISTANT_ID
    },
    "GPT-3.5 Turbo Streaming": {
        provider: "openai",
        apiFormat: "completion",
        stream: true,
        model: "gpt-3.5-turbo-0125",
        apiKey: process.env.OPENAI_API_KEY
    },
    "GPT-4 Turbo Streaming": {
        provider: "openai",
        apiFormat: "completion",
        stream: true,
        model: "gpt-4-0125-preview",
        apiKey: process.env.OPENAI_API_KEY
    }
}

If you are using Git, the .env file is generally set to be ignored (using .gitignore) and its contents are not committed to your Git repository.

Examples

Example 1

This example shows various model setups.

For more information on how to configure specific models, we recommend viewing the model-specific documentation pages in this section.

models: {
    "gpt-5-mini": {
        provider: "openai",
        model: "gpt-5-mini-2025-08-07",
        apiFormat: "responses"
    },
    "gpt-5-mini streaming": {
        provider: "openai",
        model: "gpt-5-mini-2025-08-07",
        apiFormat: "responses",
        stream: true
    },
    "GPT-4o": {
        provider: "openai",
        model: "gpt-4o",
        apiKey: "sk-.....",
        stream: true
    },
    "Azure OpenAI GPT-3.5": {
        provider: "openai",
        cloud: "azure",
        model: "azure-gpt-35-0613",
        resource: "profound-openai",
        apiKey: "......",
        apiVersion: "2023-12-01-preview"
    },
    "AWS Bedrock - Claude 2": {
        provider: "anthropic",
        cloud: "aws",
        model: "claude-2.1",
        accessKeyId: "......",
        secretAccessKey: "......"
    }
}

Example 2

This example demonstrates how to use the additionalParams property when configuring models:

models: {
    "OpenAI o1": {
        provider: "openai",
        model: "o1",
        apiFormat: "completion",
        apiKey: process.env.OPENAI_API_KEY,
        additionalParams: {
            reasoning_effort: 'medium',
            max_completion_tokens: 100
        }
    },
    "GPT-4o": {
        provider: "openai",
        model: "gpt-4o",
        apiKey: process.env.OPENAI_API_KEY,
        additionalParams: {
            temperature: 0.7,
            max_completion_tokens: 100,
            top_p: 1.0
        }
    }
    // Include other models as needed...
}

Model-Specific Pages