Custom / Open Source Models
See Model Configuration for a full list of available model properties.
Overview
Use the procedure outlined below to configure and use custom or open-source AI models within Profound AI. This page explains how to:
- Specify a custom endpoint when hosting a self-managed model.
- Define a custom provider for models that use a different API format.
- Integrate custom models seamlessly with Profound AI through example configurations.
This information enables you to extend Profound AI’s capabilities to support any AI model or deployment environment.
Using a Custom Endpoint
If you’re hosting a model on your own server, use the endpoint property to specify the location of the model.
Example
The following configuration example assumes that the model's API follows the format used by the OpenAI Chat Completions API. This is the case, for example, if you host an open-source large language model locally with a tool like Ollama.
```js
models: {
  "My Custom Open Source Model": {
    endpoint: "http://127.0.0.1:11434/v1",
    model: "llama2",
    apiKey: "secret", // required, even if unused
    stream: true
  }
}
```
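Before wiring the endpoint into Profound AI, it can help to call the OpenAI-compatible route directly and confirm the server responds. This is a minimal sketch, assuming a local Ollama install serving llama2 (as in the config above) and Node.js 18+ with built-in fetch; run it in an ES module, and adjust the host, model, and key to your setup.

```js
// Sanity check of an OpenAI-compatible chat completions endpoint (Node.js 18+).
const response = await fetch("http://127.0.0.1:11434/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": "Bearer secret" // placeholder key, matching the config above
  },
  body: JSON.stringify({
    model: "llama2",
    messages: [{ role: "user", content: "Say hello." }]
  })
});
const data = await response.json();
console.log(data.choices[0].message.content);
```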
Using a Custom Provider
If the model's API requires uniquely formatted input and output, you can create a custom provider. The custom provider object must be attached to profound.ai.providers and can implement some or all of the following methods:
| Method | Description |
|---|---|
| getAPIFunction() | Returns a function used to call the model. |
| getAPIParms() | Returns the parameters to pass to the function above. The method receives an object with model, messages, instructions, and tools. |
| processResponse() | Takes the response from the model function call, processes it, and returns an object with the following properties: responseMessage, content, and isToolCall. |
| processStreamPart() | If the model is configured for streaming, this method processes each part of the stream and must return the text content of the part. It receives a parameter object describing the current stream part. |
| getToolCalls() | Receives an object with a responseMessage property and returns the derived list of tool calls in the OpenAI tool calls array format. |
Example
```js
profound.ai.providers.myCustomProvider = {
  getAPIFunction: function(data) {
    return async function() {
      return {
        message: {
          type: "hardcoded test response",
          content: "Hi there. I am a custom model."
        }
      };
    };
  },
  getAPIParms: function(data) {
    return data; // pass all available data to the API function as parameters
  },
  processResponse: function(response) {
    const isToolCall = false;
    const responseMessage = response.message;
    const content = responseMessage.content;
    return { responseMessage, content, isToolCall };
  }
};

module.exports = {
  // misc configuration entries
  models: {
    "Custom Test": {
      provider: "myCustomProvider",
      model: "custom-test"
    }
  }
};
```
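The example above covers only the non-streaming path. If your model streams its output or makes tool calls, you would also implement processStreamPart and getToolCalls. The following is a minimal, hypothetical sketch: the exact shape of the stream part object depends on your model's API, so the part property and OpenAI-style chunk structure shown here are assumptions, not a documented contract.

```js
// Hypothetical sketch -- adjust property names to your model's actual stream format.
profound.ai.providers.myCustomProvider.processStreamPart = function(data) {
  // Assumption: each stream part arrives as an OpenAI-style chunk on data.part.
  // The method must return only the text content of the part.
  return data.part?.choices?.[0]?.delta?.content ?? "";
};

profound.ai.providers.myCustomProvider.getToolCalls = function(data) {
  // data.responseMessage is documented above; the tool_calls property assumes
  // the model already returns calls in the OpenAI tool calls array format.
  return data.responseMessage.tool_calls || [];
};
```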