Usage

Let’s walk through a complete example of creating an AI assistant using vectara-agentic. We will build a finance assistant that can answer questions about the annual financial reports for Apple Computer, Google, Amazon, Snowflake, Atlassian, Tesla, Nvidia, Microsoft, Advanced Micro Devices, Intel, and Netflix between the years 2020 and 2024.

Import Dependencies

First, we must import some libraries and define some constants for our demo.

import os
from dotenv import load_dotenv
import streamlit as st
import pandas as pd
import requests
from pydantic import Field

load_dotenv(override=True)

We then use the load_dotenv function to load our environment variables from a .env file.
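Since the tool factory below expects vectara_api_key, vectara_customer_id, and vectara_corpus_id variables, a minimal way to populate them from the loaded environment (using the environment variable names described later in this walkthrough) is:

# read the Vectara credentials loaded from .env; FMP_API_KEY is read later,
# inside the income-statement tool itself
vectara_api_key = os.environ.get("VECTARA_API_KEY")
vectara_customer_id = os.environ.get("VECTARA_CUSTOMER_ID")
vectara_corpus_id = os.environ.get("VECTARA_CORPUS_ID")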

Create Tools

Next, we will create the tools for our agent.

There are three categories of tools you can use with vectara-agentic:

  1. A query tool that connects to Vectara to ask a question about data in a Vectara corpus.

  2. Pre-built tools that are available out of the box, or ready-to-use tools from the LlamaIndex Tools Hub.

  3. Any other tool that you want to make for your agent, based on custom code in Python.

Vectara RAG Query Tool

Let’s see how to create a Vectara query tool. In order to use this tool, you need to create a corpus and API key with a Vectara account. In this example, we will create the ask_transcripts tool, which can be used to perform RAG queries on analyst call transcripts. You can see this tool in use with our Finance Assistant demo.

from pydantic import BaseModel

# define the arguments schema for the tool
class QueryTranscriptsArgs(BaseModel):
    query: str = Field(..., description="The user query.")
    year: int = Field(..., description=f"The year. An integer between {min(years)} and {max(years)}.")
    ticker: str = Field(..., description=f"The company ticker. Must be a valid ticker symbol from the list {tickers.keys()}.")

Note that the arguments for this tool are defined using Python’s pydantic package with the Field class. By defining the tool in this way, we provide a good description for each argument so that the agent LLM can easily understand the tool’s functionality and how to use it properly.

Now to create the actual tool, we use the create_rag_tool() method from the VectaraToolFactory class as follows:

from vectara_agentic.tools import VectaraToolFactory

vec_factory = VectaraToolFactory(vectara_api_key=vectara_api_key,
                                 vectara_customer_id=vectara_customer_id,
                                 vectara_corpus_id=vectara_corpus_id)

ask_transcripts = vec_factory.create_rag_tool(
    tool_name = "ask_transcripts",
    tool_description = """
    Given a company name and year,
    returns a response (str) to a user question about a company, based on analyst call transcripts about the company's financial reports for that year.
    You can ask this tool any question about the company including risks, opportunities, financial performance, competitors and more.
    Make sure to provide a valid company ticker and year.
    """,
    tool_args_schema = QueryTranscriptsArgs,
    reranker = "chain", rerank_k = 100,
    rerank_chain = [
      {
        "type": "slingshot"
      },
      {
        "type": "mmr",
        "diversity_bias": 0.1
      }
    ],
    n_sentences_before = 2, n_sentences_after = 2, lambda_val = 0.005,
    summary_num_results = 10,
    vectara_summarizer = 'vectara-summary-ext-24-05-med-omni',
    include_citations = False,
    fcs_threshold = 0.2
)

In the code above, we did the following:

  • First, we initialized the VectaraToolFactory with the Vectara customer ID, corpus ID, and API key. If you don’t want to explicitly pass in these arguments, you can specify them in your environment as VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID, and VECTARA_API_KEY. You can also create a single VectaraToolFactory that queries multiple corpora, which may be helpful if you have related information spread across multiple corpora in Vectara. To do this, create a query API key on the Authorization page and give it access to all the corpora you want this query tool to cover. When specifying your environment variables, set VECTARA_CORPUS_ID to a list of corpus IDs separated by commas (e.g. 5,6,19).

  • Then we called create_rag_tool(), specifying the tool name, description and schema for the tool, followed by various optional parameters to control the Vectara RAG query tool.

One important parameter to point out is fcs_threshold. This allows you to specify a minimum factual consistency score (between 0 and 1) for the response to be considered a “good” response. If the generated response has an FCS below this threshold, the agent will not use the generated summary (considering it a hallucination). You can think of this as a hallucination guardrail. The higher you set fcs_threshold, the stricter your guardrail will be.

If your agent continuously rejects all of the generated responses, consider lowering the threshold.

Another important parameter is reranker. In this example, we are using a chain reranker, which chains together multiple reranking methods, giving you finer control over reranking while combining the strengths of each method. In the example above, we use the multilingual (or slingshot) reranker followed by the MMR reranker with a diversity bias of 0.1. You can also supply other parameters to each reranker, such as a cutoff parameter, which removes documents whose scores fall below that threshold after the given reranker is applied. Lastly, you can add a user-defined function reranker as the last reranker in the chain to specify a customized expression that reranks results in a way that is relevant to your specific application. If you want to learn more about chain reranking and user-defined functions, check out our blog post and example notebook for some guidance and inspiration.
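For example, a hypothetical variation of the chain above adds a cutoff after the MMR step, so that results scoring below 0.3 at that point are dropped before summarization:

rerank_chain = [
  {
    "type": "slingshot"
  },
  {
    "type": "mmr",
    "diversity_bias": 0.1,
    "cutoff": 0.3
  }
]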

That’s it: now the ask_transcripts tool is ready to be added to the agent.

You can use the VectaraToolFactory to generate more than one RAG tool with different parameters, depending on your needs.
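For instance, a hypothetical second tool could reuse the same factory and argument schema but return longer, citation-rich answers:

# a hypothetical second RAG tool built from the same factory, with different parameters
ask_transcripts_detailed = vec_factory.create_rag_tool(
    tool_name = "ask_transcripts_detailed",
    tool_description = "Like ask_transcripts, but returns a longer response and includes citations.",
    tool_args_schema = QueryTranscriptsArgs,
    summary_num_results = 20,
    include_citations = True
)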

Additional Tools

To generate non-RAG tools, you can use the ToolsFactory class, which provides some out-of-the-box tools that you might find helpful when building your agents, as well as an easy way to create custom tools.

Currently, we have a few tool groups you may want to consider using:

  • standard_tools(): basic tools that can be helpful, including the summarize_text and rephrase_text tools.

  • financial_tools(): a set of financial query tools based on Yahoo! Finance.

  • legal_tools(): tools designed to help with legal queries, including critique_as_judge and summarize_legal_text.

  • database_tools(): tools to explore SQL databases and make queries based on user prompts.

  • guardrail_tools(): tools that help the agent avoid certain topics in its responses.

For example, to get access to all the legal tools, you can use the following:

from vectara_agentic.tools import ToolsFactory

legal_tools = ToolsFactory().legal_tools()

For more details about the tools, see Tools.

Create your own tool

You can also create your own tool directly by defining a Python function:

import numpy as np

def earnings_per_share(
  net_income: float = Field(description="the net income for the company"),
  number_of_shares: float = Field(description="the number of outstanding shares"),
) -> float:
    """
    This tool returns the EPS (earnings per share).
    """
    return np.round(net_income / number_of_shares, 4)

# instantiate a ToolsFactory (imported earlier) and wrap the function as a tool
tools_factory = ToolsFactory()
my_tool = tools_factory.create_tool(earnings_per_share)

A few important things to note:

  1. A tool may accept any type of argument (e.g. float, int) and return any type of value (e.g. float). The create_tool() method will handle the conversion of the arguments and response into strings (which is the type the agent expects).

  2. It is important to define a clear and concise docstring for your tool. This will help the agent understand what the tool does and how to use it.

Here are some functions we will define for our finance assistant example:

tickers = {
    "AAPL": "Apple Computer",
    "GOOG": "Google",
    "AMZN": "Amazon",
    "SNOW": "Snowflake",
    "TEAM": "Atlassian",
    "TSLA": "Tesla",
    "NVDA": "Nvidia",
    "MSFT": "Microsoft",
    "AMD": "Advanced Micro Devices",
    "INTC": "Intel",
    "NFLX": "Netflix",
}
years = [2020, 2021, 2022, 2023, 2024]

def get_company_info() -> dict[str, str]:
  """
  Returns a dictionary of companies you can query about. Always check this before using any other tool.
  The output is a dictionary of valid ticker symbols mapped to company names.
  You can use this to identify the companies you can query about, and their ticker information.
  """
  return tickers

def get_valid_years() -> list[int]:
  """
  Returns a list of the years for which financial reports are available.
  Always check this before using any other tool.
  """
  return years

# Tool to get the income statement for a given company and year using the FMP API
def get_income_statement(
  ticker=Field(description="the ticker symbol of the company."),
  year=Field(description="the year for which to get the income statement."),
) -> str:
  """
  Get the income statement for a given company and year using the FMP (https://financialmodelingprep.com) API.
  Returns the income statement data as a string of key: value pairs. All values are in USD, but you can convert them to a more compact form like K, M, or B.
  """
  fmp_api_key = os.environ.get("FMP_API_KEY", None)
  if fmp_api_key is None:
     return "FMP_API_KEY environment variable not set. This tool does not work."
  url = f"https://financialmodelingprep.com/api/v3/income-statement/{ticker}?apikey={fmp_api_key}"
  response = requests.get(url)
  if response.status_code == 200:
     data = response.json()
     income_statement = pd.DataFrame(data)
     income_statement["date"] = pd.to_datetime(income_statement["date"])
     income_statement_specific_year = income_statement[
       income_statement["date"].dt.year == int(year)
     ]
     values_dict = income_statement_specific_year.to_dict(orient="records")[0]
     return f"Financial results: {', '.join([f'{key}: {value}' for key, value in values_dict.items() if key not in ['date', 'cik', 'link', 'finalLink']])}"
  else:
     return "FMP API returned error. This tool does not work."

The get_income_statement() tool utilizes the FMP API to get the income statement for a given company and year. Notice how the tool description is structured. We describe each of the expected arguments to the function using pydantic’s Field class. The function description only describes to the agent what the function does and how the agent should use the tool. This function definition follows best practices for defining tools. You should make this description detailed enough so that your agent knows when to use each of your tools.

Your tools should also handle any exceptions gracefully by returning an Exception or a string describing the failure. The agent can interpret that string and then decide how to deal with the failure (either calling another tool to accomplish the task or telling the user that their request was unable to be processed).
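As a minimal sketch (a hypothetical tool, not part of the finance assistant), graceful failure handling could look like this:

def safe_ratio(
  numerator: float = Field(description="the numerator of the ratio"),
  denominator: float = Field(description="the denominator of the ratio"),
) -> str:
    """
    This tool returns the ratio of two numbers, or a message describing why the calculation failed.
    """
    try:
        return str(round(numerator / denominator, 4))
    except Exception as e:
        # return the failure as a string so the agent can decide what to do next
        return f"Could not compute the ratio: {e}"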

Finally, notice that we have used snake_case for all of our function names. While this is not required, it’s a best practice that we recommend for you to follow.

Initialize The Agent

Now that we have our tools, let’s create the agent, using the following arguments:

  1. tools: list[FunctionTool]: A list of tools that the agent will use to interact with information and apply actions. For any tools you create yourself, make sure to pass them to the create_tool() method of your ToolsFactory object.

  2. topic: str = "general": This is simply a string (should be a noun) that is used to identify the agent’s area of expertise. For our example, we set this to "10-K annual financial reports".

  3. custom_instructions: str = "": This is a set of instructions that the agent will follow. These instructions should not tell the agent what your tools do (that’s what the tool descriptions are for), but rather describe any particular behavior you want your assistant to have, such as how to present the information it receives from the tools to the user.

  4. update_func: Optional[Callable[[AgentStatusType, str], None]] = None: This is an optional callback function that will be called on every agent step. It can be used, for example, to update the user interface or to log the steps the agent takes.

Every agent has its own default set of instructions that it follows to interpret users’ messages and use the necessary tools to complete its task. However, we can (and often should) define custom instructions (via the custom_instructions argument) for our AI assistant. Here are some guidelines to follow when creating your instructions:

  • Write precise and clear instructions without overcomplicating the agent.

  • Consider edge cases and unusual or atypical scenarios.

  • Be careful not to over-specify behavior based on your primary use case, as this may limit the agent’s ability to behave properly in other situations.

Here are the instructions we are using for our financial AI assistant:

financial_assistant_instructions = """
  - You are a helpful financial assistant, with expertise in financial reporting, in conversation with a user.
  - Never discuss politics, and always respond politely.
  - Respond in a compact format by using appropriate units of measure (e.g., K for thousands, M for millions, B for billions).
  - Do not report the same number twice (e.g. $100K and 100,000 USD).
  - Always check the get_company_info and get_valid_years tools to validate company and year are valid.
  - When querying a tool for a numeric value or KPI, use a concise and non-ambiguous description of what you are looking for.
  - If you calculate a metric, make sure you have all the necessary information to complete the calculation. Don't guess.
"""

Notice how these instructions are different from the tool function descriptions. These instructions are general rules that the agent should follow. At times, these instructions may refer to specific tools, but in general, the agent should be able to decide for itself what tools it should call. This is what makes agents very powerful and makes our job as coders much simpler.

update_func callback

The update_func is an optional Callable function that can serve a variety of purposes for your assistant. It is a callback function that is managed by the agent, and it will be called anytime the agent is updated, such as when calling a tool, or when receiving a response from a tool.

In our example, we will use it to log the actions of our agent so users can see the steps the agent is taking as it answers their questions. Since our assistant uses Streamlit to display the results, we will append the log messages to the session state.

from vectara_agentic.agent import AgentStatusType

def update_func(status_type: AgentStatusType, msg: str):
  output = f"{status_type.value} - {msg}"
  st.session_state.log_messages.append(output)
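Since the callback appends to st.session_state.log_messages, the Streamlit app needs to initialize that list once per session before the agent runs, for example:

# initialize the log list once per Streamlit session
if "log_messages" not in st.session_state:
    st.session_state.log_messages = []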

Creating the agent

Here is how we will instantiate our finance assistant:

from vectara_agentic import Agent

agent = Agent(
     tools=[tools_factory.create_tool(tool, tool_type="query") for tool in
               [
                   get_company_info,
                   get_valid_years,
                   get_income_statement
               ]
           ] +
           tools_factory.standard_tools() +
           tools_factory.financial_tools() +
           tools_factory.guardrail_tools() +
           [ask_transcripts],
     topic="10-K annual financial reports",
     custom_instructions=financial_assistant_instructions,
     update_func=update_func
)

Notice that when we call the create_tool() method, we specified a tool_type. This can either be "query" (default) or "action". For our example, all of the tools are query tools, so we can easily add all of them to our agent with a list comprehension, as shown above.
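If you later add a tool that performs an action rather than answering a question, you would register it with tool_type="action" instead. Here is a hypothetical sketch (send_report is not part of this demo):

def send_report(recipient: str = Field(description="the email address to send the report to")) -> str:
    """
    Send the latest financial summary to the given recipient.
    """
    # ... call your email or notification service here ...
    return f"Report sent to {recipient}"

report_tool = tools_factory.create_tool(send_report, tool_type="action")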

Chat with your Assistant

Once you have created your agent, using it is quite simple. All you have to do is call its chat() method, which prompts your agent to answer the user’s query using its available set of tools. It’s that easy.

query = "Which 3 companies had the highest revenue in 2022, and how did they do in 2021?"
agent.chat(query)

The agent returns the response:

The three companies with the highest revenue in 2022 were:

  1. Amazon (AMZN): $513.98B

  2. Apple (AAPL): $394.33B

  3. Google (GOOG): $282.84B

Their revenues in 2021 were:

  1. Amazon (AMZN): $469.82B

  2. Apple (AAPL): $365.82B

  3. Google (GOOG): $257.64B

To make a full Streamlit app, there is some extra code that is necessary to configure the demo layout. You can check out the full code and demo for this app on Hugging Face.

Additional Information

Agent Information

The Agent class defines a few helpful methods to help you understand the internals of your application.

  1. The report() method prints out the agent object’s type (REACT, OPENAI, or LLMCOMPILER), the tools, and the LLMs used for the main agent and tool calling.

  2. The token_counts() method tells you how many tokens you have used in the current session for both the main agent and tool calling LLMs. This can be helpful for users who want to track how many tokens have been used, which translates to how much money they are spending.
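For example, after a chat session you can inspect the agent and its usage like this:

# print the agent configuration and the tokens used so far in this session
agent.report()
print(agent.token_counts())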

If you have any other information that you would like to be accessible to users, feel free to make a suggestion on our community server.

Observability

You can also set up full observability for your vectara-agentic assistant or agent using Arize Phoenix. This allows you to view LLM prompt inputs and outputs, the latency of each task and subtask, and many of the individual function calls performed by the LLM, as well as FCS scores for each response.

To set up observability for your app, follow these steps:

  1. Set os.environ["VECTARA_AGENTIC_OBSERVER_TYPE"] = "ARIZE_PHOENIX".

  2. Connect to a local phoenix server:

    1. If you have a local phoenix server that you’ve run using e.g. python -m phoenix.server.main serve, vectara-agentic will send all traces to it automatically.

    2. If not, vectara-agentic will run a local instance during the agent’s lifecycle, and will close it when finished.

    3. In both cases, traces will be sent to the local instance, and you can see the dashboard at http://localhost:6006.

  3. Alternatively, you can connect to a Phoenix instance hosted on Arize.

    1. Go to https://app.phoenix.arize.com, and set up an account if you don’t have one.

    2. Create an API key and put it in the PHOENIX_API_KEY variable. This variable indicates you want to use the hosted version.

    3. To view the traces go to https://app.phoenix.arize.com.
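Putting this together, enabling observability comes down to setting the relevant environment variables before creating the agent (a minimal sketch; PHOENIX_API_KEY is only needed for the hosted option):

import os

# enable Arize Phoenix tracing for vectara-agentic
os.environ["VECTARA_AGENTIC_OBSERVER_TYPE"] = "ARIZE_PHOENIX"
# uncomment to send traces to the hosted Phoenix instance on Arize instead of a local server:
# os.environ["PHOENIX_API_KEY"] = "<your Phoenix API key>"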

In addition to the raw traces, vectara-agentic also records FCS values into Arize for every Vectara RAG call. You can see those results in the Feedback column of the Arize UI.