Model Configuration
Available models are configured in your config.js file. Add or modify entries under the “models” property to make additional large language models available. For example:
models: {
  "GPT-4o": {
    provider: "openai",
    model: "gpt-4o",
    apiKey: "sk-.....",
    stream: true
  },
  "GPT-4o mini": {
    provider: "openai",
    model: "gpt-4o-mini",
    apiKey: "sk-.....",
    stream: true
  },
  "GPT-3.5 Turbo": {
    provider: "openai",
    model: "gpt-3.5-turbo-0125",
    apiKey: "sk-....."
  },
  "Gemini Pro": {
    provider: "google",
    model: "gemini-pro",
    apiKey: "AI......"
  },
  "Azure OpenAI GPT-3.5": {
    provider: "openai",
    cloud: "azure",
    model: "azure-gpt-35-0613",
    resource: "profound-openai",
    apiKey: "......",
    apiVersion: "2023-12-01-preview"
  },
  "Mistral Medium with Streaming": {
    provider: "mistral",
    model: "mistral-medium",
    stream: true,
    apiKey: "......"
  },
  "Mistral 7B via Anyscale": {
    endpoint: "https://api.endpoints.anyscale.com/v1",
    apiFormat: "completion",
    model: "mistralai/Mistral-7B-Instruct-v0.1",
    apiKey: "esecret_......"
  },
  "Llama 2 70B": {
    endpoint: "https://myserver",
    apiFormat: "completion",
    model: "Llama-2-70b-chat-hf-function-calling-v2",
    apiKey: "esecret_......"
  },
  "Claude 2 Streaming": {
    provider: "anthropic",
    model: "claude-2.1",
    apiKey: "sk-ant-......",
    stream: true
  },
  "AWS Bedrock - Claude 2": {
    provider: "anthropic",
    cloud: "aws",
    model: "claude-2.1",
    accessKeyId: "......",
    secretAccessKey: "......"
  }
}
Using Environment Variables for API Keys
Instead of placing your API keys directly into the Profound AI configuration (the config.js file), it is recommended that you use environment variables. Environment variables can be placed into a .env file. For example, the file contents below store the OpenAI API key and the OpenAI Assistant ID:
OPENAI_API_KEY=sk-......
OPENAI_ASSISTANT_ID=asst_......
Then, your configuration file may refer to the environment variables, as demonstrated in the following example:
models: {
  "GPT-4o": {
    provider: "openai",
    model: "gpt-4o",
    apiKey: process.env.OPENAI_API_KEY,
    defaultAssistantId: process.env.OPENAI_ASSISTANT_ID
  },
  "GPT-3.5 Turbo": {
    provider: "openai",
    model: "gpt-3.5-turbo-0125",
    apiKey: process.env.OPENAI_API_KEY,
    defaultAssistantId: process.env.OPENAI_ASSISTANT_ID
  },
  "GPT-3.5 Turbo Streaming": {
    provider: "openai",
    apiFormat: "completion",
    stream: true,
    model: "gpt-3.5-turbo-0125",
    apiKey: process.env.OPENAI_API_KEY
  },
  "GPT-4 Turbo Streaming": {
    provider: "openai",
    apiFormat: "completion",
    stream: true,
    model: "gpt-4-0125-preview",
    apiKey: process.env.OPENAI_API_KEY
  }
}
If you are using Git, the .env file is generally set to be ignored (using .gitignore), so its contents are not committed to your Git repository.
Model Properties
provider
When this property is specified (e.g. “openai”, “google”, “mistral”, “anthropic”, “groq”), the endpoint and apiFormat properties assume default values for each respective provider.
cloud
The model may be hosted with a cloud provider, such as “azure” or “aws”, which may require additional authentication. If this is the case for the model you’re trying to work with, use this property to specify the cloud provider.
endpoint
Use this property to specify a custom endpoint or API URL. This may be necessary for custom, open-source, and/or in-house models.
resource
For Azure OpenAI, you can specify the resource name in place of the full endpoint.
awsAccessKey
When the “aws” cloud is used, this property specifies the AWS access key ID.
awsSecretKey
When the “aws” cloud is used, this property specifies the AWS secret access key.
awsRegion
When the “aws” cloud is used, this property specifies the AWS region.
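For illustration, an AWS Bedrock model entry using these properties might look like the following sketch; the environment variable names and the region value are assumptions for the example, not requirements:

"Claude on Bedrock": {
  provider: "anthropic",
  cloud: "aws",
  model: "claude-2.1",
  awsAccessKey: process.env.AWS_ACCESS_KEY_ID,     // assumed environment variable name
  awsSecretKey: process.env.AWS_SECRET_ACCESS_KEY, // assumed environment variable name
  awsRegion: "us-east-1"                           // illustrative region
}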
model
The provider or endpoint may expose multiple models. This property identifies the specific model to use.
apiFormat
Use this property to override the default format in which data is exchanged with the model. Providers may support different formats. For example, OpenAI offers “completion” and “assistant” APIs.
useMessages
The default value for this property is true. The property indicates whether the model provider’s API supports a list of conversation messages as input. Set this property to false if the model API cannot process conversation messages as a list; the information will then be sent to the API as a single text prompt containing the messages.
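For example, a hypothetical entry for a custom model whose API accepts only a single prompt string might look like this (the endpoint and model name are placeholders):

"My Single-Prompt Model": {
  endpoint: "https://myserver",   // placeholder endpoint
  apiFormat: "completion",
  model: "my-custom-model",       // hypothetical model name
  apiKey: "......",
  useMessages: false
}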
apiKey
This property contains the API key or secret key associated with the respective model. These keys are used for authentication when making requests to the API.
apiVersion
The API you’re using may require a version. Use this property to set the appropriate API version.
defaultAssistantId
This property specifies the default assistant ID for use with the “assistant” API format. When using the “openai” provider, the “assistant” API format is the default. This format requires configuring an assistant object on the OpenAI platform.
If the property is missing while employing the “assistant” API format from OpenAI, Profound AI will automatically set up an assistant object. Additionally, a message containing the defaultAssistantId will be logged to the server. You should then update your config.js file with this defaultAssistantId. This update ensures no extra processing is needed to locate or create the assistant object in subsequent operations.
showCitations
When this property is set to true, the agent will cite knowledge documents by showing the document name in the response and displaying the document text that was used when the user hovers over the name.
stream
Set this Boolean property to true to indicate that you want the model API to stream its output as it is generated rather than sending the output all at once. Be aware that some model APIs may not have this capability.
suppressStatusMessages
Set this Boolean property to true to suppress status messages, such as “Calling Data Access”, when a streaming model is processing information. This ensures the end-user only sees the model’s final response without seeing intermediate action messages.
voice
Due to browser security measures, this feature requires that your Profound AI instance be set up to use HTTPS (SSL). Otherwise, the browser will not grant permission for any microphone/voice feature.
This property allows you to configure voice recognition capabilities for the model. It contains the following sub-properties:
enabled: A Boolean value indicating whether voice recognition is enabled for this model. If not specified, it defaults to true.
lang: A String specifying the language code for voice recognition (e.g., "en-US" for English). If not specified, it defaults to "en-US".
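For example, a model entry could enable voice recognition for US English as follows (this sketch simply restates the default values described above):

voice: {
  enabled: true,
  lang: "en-US"
}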
rag
RAG (or Retrieval-Augmented Generation) is a technique in natural language processing that Profound AI uses to search Knowledge Documents based on questions from the end-user. Use this property to define how the model will use RAG by providing an object that specifies various RAG options, such as provider, embeddingModel, and database.
The default provider is “openai”. Alternatively, the “llamaindex” provider can be used.
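As an illustration only, a rag configuration might look like the following sketch; the embedding model name and database identifier shown are placeholders, not defaults:

rag: {
  provider: "openai",                        // default provider
  embeddingModel: "text-embedding-3-small",  // placeholder embedding model name
  database: "my-vector-db"                   // placeholder database identifier
}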
additionalParams
To enhance flexibility and allow for customized interactions with various models, use this property to pass extra parameters directly to the model API.
For example, the following sketch passes a few common parameters (the values shown are illustrative, not recommendations):
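"GPT-4o": {
  provider: "openai",
  model: "gpt-4o",
  apiKey: process.env.OPENAI_API_KEY,
  additionalParams: {
    temperature: 0.2,
    max_tokens: 1024,
    top_p: 0.9
  }
}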
The specific parameters supported within additionalParams depend on the model provider and the API being used. Common parameters include, but are not limited to, temperature, max_tokens, top_p, and frequency_penalty. Please refer to the documentation of the respective model provider's API for a complete list of supported parameters and their effects.
For further flexibility, this configuration can be specified as a JavaScript function that receives the parameters generated by Profound AI and adjusts them dynamically.
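A minimal sketch of the function form, assuming the function receives the parameters Profound AI has generated and returns the adjusted object, might look like this:

additionalParams: (params) => {
  // Adjust the generated parameters dynamically; the exact parameter
  // names accepted depend on the model provider's API.
  return {
    ...params,
    temperature: 0.1,
    max_tokens: 512
  };
}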