
Profound AI's Agent Knowledge feature allows you to upload enterprise-specific documentation, or any other unstructured data, to enhance the capabilities of your AI agents. The documentation can range from technical manuals and product information to internal policies and procedural guides. Once uploaded, these documents are indexed and made searchable by AI agents, enabling them to provide comprehensive and contextually relevant responses to user queries.

...

It's important to distinguish between Knowledge Documents and Agent Instructions in Profound AI, as they serve different yet complementary roles in enhancing the capabilities of AI agents. The main difference is in how information is processed and utilized. Knowledge Documents function like an encyclopedia: documents are semantically searched, and only relevant sections are read and processed as needed. This allows these documents to contain vast amounts of information, accessible on demand. In contrast, Agent Instructions are limited by the large language model's system prompt token limit, because they are preloaded instructions that are read all at once. The semantic search capability of Knowledge Documents ensures efficient token usage, as the AI agents only reference and process the specific parts of documents pertinent to a query. This distinction makes Knowledge Documents ideal for handling complex, information-rich queries, while Agent Instructions are better suited for concise information that must fit within the token limit constraints.

...

In the context of Profound AI's Knowledge Documents, semantic search enables the AI agents to sift through extensive documentation and find the most relevant sections that answer or relate to the user's query. The approach used is typically referred to as RAG, or Retrieval-Augmented Generation. This approach ensures more accurate, context-aware responses and a better understanding of user needs, compared to traditional keyword-based search methods.
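To make the retrieval step of RAG concrete, here is a minimal, self-contained sketch. It uses toy bag-of-words "embeddings" in place of a real embedding model, so the similarity it measures is lexical rather than truly semantic; a real deployment would use learned vectors from an embedding model that capture meaning. The function names and sample chunks are illustrative assumptions, not part of Profound AI's API.

```python
# Illustrative sketch of RAG retrieval. Toy bag-of-words vectors stand in
# for a real embedding model; function names here are hypothetical.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy embedding: a sparse term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], top_k: int = 1) -> list[str]:
    """Return the top_k document chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]

# Hypothetical knowledge-document chunks.
chunks = [
    "Refunds are issued within 14 days of purchase.",
    "The server restarts nightly at 02:00 UTC.",
    "Passwords must be at least 12 characters long.",
]
print(retrieve("when are refunds issued", chunks))
```

Only the retrieved chunks, not the entire document set, are passed to the language model as context, which is what keeps token usage efficient.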

...

Finally, test searching the knowledge documents by typing a query in the Agent Preview section of the IDE.

RAG Options

A model can optionally be configured to use specific RAG, or Retrieval-Augmented Generation, options. See Model Configuration for additional details. When no RAG configuration is provided, default settings are used.

A RAG configuration consists of the following:

  • Provider - this points to the service or library of code that provides RAG capabilities. Current options include “openai” and “llamaindex”.

  • Embedding Model - this specifies the model that translates text into a representation of meaning in the form of embeddings, which are also known as vectors. If not specified, a default embedding model from OpenAI is used.

  • Vector Database - this specifies the database that holds indexed documents. The documents are broken up into chunks, assigned embeddings/vectors using the Embedding Model, and then placed into a database.

The default vector database for the “openai” provider is the built-in cloud database used by OpenAI.

For “llamaindex”, the default is to use local .json files, which are loaded entirely into memory. To scale beyond this, an external vector database should be provisioned.
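Putting the options above together, a RAG configuration might look like the following. The key names and values shown here are illustrative assumptions, not a definitive schema; see Model Configuration for the exact format.

```json
{
  "provider": "llamaindex",
  "embeddingModel": "text-embedding-3-small",
  "vectorDatabase": "postgres://vectors.example.com/knowledge"
}
```

Omitting any of these fields falls back to the defaults described above: the OpenAI provider uses its built-in cloud vector store, while llamaindex indexes into local .json files.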