Monitoring

Overview

Monitoring lets you observe and track the performance of the policies that you have created. You can also track the usage of your Motifs with reports and logs.

Motific.ai also uses Cisco Umbrella to monitor your organization for the use of unapproved or disallowed LLMs (Shadow GenAI).

Policy flags

In the Policy flags tab, you can track out-of-compliance GenAI usage.

  • To see the policy performance page, navigate to the Monitoring » Policy flags tab.
  • To see the policy performance graphs for a particular time period, select the date range from the drop-down. The graphs are populated with data for the selected period.
    • Motifs with policy flags- Displays a graph of the Motifs whose usage has violated the policies set for them within the given date range.
    • Policy flags over time- Shows the number of policy flags raised within the given date range.
    • Flags triggered by user- Displays the number of policy flags triggered by each user; a sketch of reproducing these aggregations offline follows this list.
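
If you export the underlying flag events, you can reproduce these aggregations offline. The sketch below is a minimal example assuming a hypothetical CSV export with timestamp, user, motif, and policy columns; Motific.ai does not document such a schema here, so treat the field names as illustrative.

```python
# Hypothetical offline analysis. The file name and column names
# (timestamp, user, motif, policy) are assumptions, not a documented
# Motific.ai export format.
import pandas as pd

flags = pd.read_csv("policy_flags.csv", parse_dates=["timestamp"])

# Restrict to a date range, mirroring the drop-down filter in the UI.
start, end = "2024-06-01", "2024-06-30"
in_range = flags[(flags["timestamp"] >= start) & (flags["timestamp"] <= end)]

# Policy flags over time: daily counts of flagged prompts.
flags_over_time = in_range.resample("D", on="timestamp").size()

# Flags triggered by user: per-user totals, highest first.
flags_by_user = in_range.groupby("user").size().sort_values(ascending=False)

print(flags_over_time)
print(flags_by_user)
```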

Token usage

The Token usage tab displays the details of your Motifs' token usage. You can view token usage by user and token usage by Motif.
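
As with policy flags, you can reproduce the per-user and per-Motif views offline if you have the usage data exported. The sketch below assumes a hypothetical CSV with user, motif, input_tokens, and response_tokens columns; the schema is illustrative, not a documented export format.

```python
# A minimal sketch, assuming a hypothetical CSV export; the file name
# and columns are illustrative, not a documented Motific.ai format.
import pandas as pd

usage = pd.read_csv("token_usage.csv")

# Total tokens per prompt = input tokens + response tokens.
usage["total_tokens"] = usage["input_tokens"] + usage["response_tokens"]

# Token usage by user and token usage by Motif, highest consumers first.
by_user = usage.groupby("user")["total_tokens"].sum().sort_values(ascending=False)
by_motif = usage.groupby("motif")["total_tokens"].sum().sort_values(ascending=False)

print(by_user.head())
print(by_motif.head())
```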

Prompt history

You can observe usage patterns in the prompt history section.

  1. To see the Reports and logs page, navigate to the Monitoring » Prompt history tab.

  2. To view details of a particular log, click the View details link in the Actions column.

Prompt history details

Here, you can view the details of the prompt that you selected. You can find the following information about the prompt:

  • General information- This section provides general information about the prompt, such as the prompt execution time and the input and output token counts.
  • Profiler- The profiler shows the time elapsed at each step of the process, from prompt submission through policy checks to the LLM response.

General information

This section provides general information about the prompt.

  • Prompt ID- The ID of the prompt sent via the Motif.
  • Motif- The name of the Motif via which the user input was sent.
  • Execution timestamp- The timestamp when the prompt was executed, that is, sent to the LLM to fetch a response.
  • Response tokens- The number of tokens the LLM consumed while generating the response to the prompt.
  • Input tokens- The number of tokens the LLM consumed when the prompt was sent for inference; a sketch of estimating such counts offline follows this list.
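
The counts above are reported from the prompt's actual LLM call. For a rough offline estimate with OpenAI-style models, you can tokenize the same text with the open source tiktoken library; other providers use different tokenizers, so treat this only as an approximation.

```python
# Rough offline token estimate using tiktoken. The sample prompt and
# response strings are illustrative; the counts in the Motific.ai UI
# come from the LLM call itself and may differ by provider.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

prompt = "Summarize our travel expense policy in three bullet points."
response = "1. Book economy. 2. Keep receipts. 3. File within 30 days."

print("input tokens:", len(enc.encode(prompt)))
print("response tokens:", len(enc.encode(response)))
```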

Model input and output

  • User query- The original prompt from the user, sent to get an inference from the LLM.
  • Knowledge base context- The content of the knowledge base on which the Motif's response is based.
  • Model response- The output from the LLM in response to the prompt. If there is no response, check whether a policy violation caused Motific.ai to enforce the policy action of blocking the prompt from reaching the LLM. A sketch of how query and context typically combine follows this list.
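
To make these three fields concrete, the sketch below shows how a user query and knowledge base context are typically combined into a single model input in a retrieval-augmented setup. This is a generic illustration, not Motific.ai's internal implementation, and the function and variable names are assumptions.

```python
# Generic retrieval-augmented prompt assembly; not Motific.ai's
# internal implementation. All names here are illustrative.
def build_model_input(user_query: str, kb_passages: list[str]) -> str:
    # Join the retrieved knowledge base passages into one context block.
    context = "\n\n".join(kb_passages)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {user_query}"
    )

model_input = build_model_input(
    "What is the remote work policy?",
    ["Employees may work remotely up to three days per week."],
)
print(model_input)
```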

Policy actions

This section lists the actions that Motific.ai took according to the policy actions set for the Motif that this prompt belongs to.

  • Step- The step of the execution at which the action was taken, for example, while checking the prompt for policy violations or while fetching the response from the model.
  • Policy action- The action that was taken based on the policies applied to the Motif via which the user input was sent.
  • Details- When you click the View link, the JSON details of the response from Motific.ai are displayed; a hypothetical example follows this list.
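
The exact payload behind the View link is not documented here, so the example below is hypothetical: every field name is illustrative, and you should consult the actual JSON in your tenant for the real schema.

```python
# Hypothetical example of the kind of JSON record behind the View link.
# All field names (step, policy, action, reason) are illustrative.
import json

detail = json.loads("""
{
  "step": "policy_check",
  "policy": "PII detection",
  "action": "block",
  "reason": "Prompt contained a credit card number"
}
""")

if detail["action"] == "block":
    print(f"Blocked at {detail['step']}: {detail['reason']}")
```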

Execution profiler

The profiler shows the sequence of the prompt execution and the time elapsed at each step of the process, from prompt submission through policy checks to the LLM response. When you hover over the graph, you can view the details of the execution, such as the response at each step, the time taken to execute the step, and the policy action.
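
The per-step timings the profiler visualizes can be thought of as differences between step start and end timestamps. The sketch below derives elapsed times from hypothetical timestamps; Motific.ai computes these internally, and this is only an illustration.

```python
# Deriving per-step elapsed times from hypothetical (step, start, end)
# timestamps; the step names and times are illustrative.
from datetime import datetime

steps = [
    ("prompt submission", "12:00:00.000", "12:00:00.020"),
    ("policy checks",     "12:00:00.020", "12:00:00.150"),
    ("LLM response",      "12:00:00.150", "12:00:02.400"),
]

fmt = "%H:%M:%S.%f"
for name, start, end in steps:
    elapsed = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    print(f"{name}: {elapsed.total_seconds() * 1000:.0f} ms")
```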