Using your own endpoint

If you’re hosting a model on your own server, use the endpoint property to specify the location of the model.

For example:

models: {
  "My Custom Open Source Model": {
    endpoint: "http://127.0.0.1:11434/v1",
    model: "llama2",
    apiKey: "secret" // requred, even if unused
  }
}

The configuration above assumes that the model's API follows the format of the OpenAI Chat Completions API. This applies, for example, when you host an open-source large language model locally with a tool like Ollama.
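
To confirm that a locally hosted endpoint accepts this format, you can send it a Chat Completions request directly. The following is a minimal sketch, assuming the Ollama example above; the URL, model name, and key are the sample values from that configuration, not requirements.

// Minimal sanity check for an OpenAI-compatible endpoint (Node 18+, built-in fetch).
// The URL, model name, and key mirror the sample configuration above.
async function testEndpoint() {
  const response = await fetch("http://127.0.0.1:11434/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer secret" // the placeholder key from the example
    },
    body: JSON.stringify({
      model: "llama2",
      messages: [{ role: "user", content: "Hello!" }]
    })
  });
  const result = await response.json();
  console.log(result.choices[0].message.content); // the model's reply text
}

testEndpoint();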

Using a custom provider

If the model's API requires uniquely formatted input and output, you can create a custom provider. For example:

profound.ai.providers.myCustomProvider = {
  getAPIFunction: function(data) {
    // Return the function that will be called to invoke the model.
    // This test version just returns a hardcoded response.
    return async function() {
      return { 
        message: {
          type: "hardcoded test response",
          content: "Hi there. I am a custom model."
        }
      };
    }
  },
  getAPIParms: function(data) {    
    return data; // pass all available data to the API function as parameters
  },
  processResponse: function(response) {
    const isToolCall = false;
    const responseMessage = response.message;
    const content = responseMessage.content;    
    return { responseMessage, content, isToolCall };
  }  
};

module.exports = {

  // misc configuration entries
  
  models: {
    "Custom Test": {
      provider: "myCustomProvider",
      model: "custom-test"
    }
  }
}

The custom provider object must be attached to profound.ai.providers and can implement some or all of the following methods:

getAPIFunction()

Returns a function used to call the model.
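
For a real model, this method would typically return a function that sends an HTTP request to the model's API. The sketch below is illustrative only: the endpoint URL is hypothetical, and it assumes the parameters from getAPIParms() arrive as a single object.

getAPIFunction: function(data) {
  // Return an async function that posts the request to a hypothetical endpoint.
  // The URL and payload shape here are assumptions for illustration.
  return async function(parms) {
    const response = await fetch("https://example.com/api/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(parms)
    });
    return response.json(); // the parsed JSON becomes the model response
  };
}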

getAPIParms()

Returns the parameters to pass to the function above. The method receives an object with model, messages, instructions, and tools.
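
For example, a provider might reshape that object into the request body its API expects. In this sketch, the values come from the standard object described above, while the field names on the left (such as system) are hypothetical:

getAPIParms: function(data) {
  // Map Profound AI's standard fields onto a hypothetical provider payload.
  return {
    model: data.model,
    system: data.instructions, // hypothetical field name for system instructions
    messages: data.messages,
    tools: data.tools
  };
}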

processResponse()

Takes the response from the model function call, processes it, and returns an object with the following properties:

  • responseMessage - an object representing the message returned by the model

  • content - the text content returned by the model

  • isToolCall - a boolean value that indicates whether the model has requested one or more calls to tools, such as Data Access, Routines, or Data Analytics.
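
For illustration, here is how the method might look for an API that returns OpenAI Chat Completions-style responses; the response shape is an assumption, not something Profound AI requires:

processResponse: function(response) {
  // Assumes an OpenAI Chat Completions response shape; adjust for your API.
  const responseMessage = response.choices[0].message;
  const content = responseMessage.content;
  const isToolCall = Array.isArray(responseMessage.tool_calls)
    && responseMessage.tool_calls.length > 0;
  return { responseMessage, content, isToolCall };
}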

processStreamPart()

If the model is configured for streaming, this method processes each part of the stream. The method must return the text content of the part. It receives a parameter object with the following properties:

  • part - the streamed part

  • responseMessage - the message object; the method can modify this property as needed

  • isToolCall - the boolean Tool Call state; the method can set this property based on the information in the streamed part

  • content - the accumulated text content that has been streamed up to this point
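
For instance, if the model streams OpenAI-style chunks, each part carries a delta with optional text or tool call data. The sketch below is a minimal illustration under that assumption; whether part arrives as a parsed chunk object is itself an assumption:

processStreamPart: function(parms) {
  // Assumes OpenAI-style stream chunks, where each part carries a delta.
  const delta = parms.part.choices[0].delta;
  if (delta.tool_calls) {
    parms.isToolCall = true; // the model is requesting a tool call
  }
  return delta.content || ""; // return only this part's text content
}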

getToolCalls()

This method receives an object parameter with the responseMessage property and returns the list of tool calls derived from that message, in the OpenAI tool calls array format.
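
If the underlying API already returns OpenAI-style tool calls, the method can simply pass them through; otherwise it must translate the provider's structure into that format. A minimal pass-through sketch, assuming the message already carries a tool_calls array:

getToolCalls: function(parms) {
  // Assumes the response message already carries OpenAI-style tool calls;
  // otherwise, map your provider's structure into that array format here.
  return parms.responseMessage.tool_calls || [];
}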
