Intelligence

Overview

The Intelligence page gives enterprises and organizations insight into how their GenAI users benefit from AI assistants by looking at the actual work being carried out through user prompts/inputs. On this page, you can access an aggregate view of the intelligence related to the tasks that users perform via the AI assistants you have provisioned through Motifs.

So what is this “Intelligence” about? The intelligence relates to prompts, and can also be referred to as prompt intelligence. It encompasses various details of the prompts/user inputs that are passed to an AI assistant, and aims to give business analysts credible base parameters for better understanding GenAI usage, such as:

  • Information about each Motif that you have created and how it is being used.
  • The task category requested by each prompt/user input.
  • The task categories that users request most often, and the token usage trends for each requested task category.
  • The time saved by using an AI assistant for a particular task category, and comparisons across task categories.

This data and these trends can help you make informed decisions and optimize your AI assistant usage. They also help you understand how user inputs evolve and give you visibility into the actual prompts from users.

On this page, you can also view aggregate prompt intelligence data for all the Motifs that you have created. The graphs displayed on this page are:

  • Trends on requested categories across all Motifs: This section shows the task categories that users requested most and least often across all Motifs. You can use this information to optimize model usage.

    • Least prompted category: This metric shows the prompt trend and the number of prompts for the task category that users requested least often across all Motifs.
    • Most prompted category: This metric shows the prompt trend and the number of prompts for the task category that users requested most often across all Motifs.
  • Trends on token usage across all Motifs: This section highlights the token usage trends for the requested tasks across all Motifs. You can determine which tasks consume the most or fewest tokens and optimize cost and LLM usage. The following token usage trends are displayed:

    • Lowest token consumption category: Displays the task category with the lowest token consumption across all Motifs, along with the token trend of the task with the lowest percent change in the number of tokens.
    • Highest token consumption category: Displays the task category with the highest token consumption across all Motifs, along with the token trend of the task with the highest percent change in the number of tokens.
  • Number of prompts for the top 5 categories: This section provides an easy-to-read graphical representation of the top 5 requested task categories, out of the total prompts submitted by users across all the Motifs that you have created.

  • Categories usage by prompts and time saved: This graph shows what percentage of the prompts/inputs submitted by users across all Motifs fall into each task category, and how much of the users' time was saved by using the Gen AI assistant for those tasks (a small aggregation sketch follows this list).
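
The aggregation behind this graph can be pictured as counting prompts per category and summing the estimated time saved. The sketch below is only a minimal illustration of that idea in Python, assuming a flat list of prompt records; the field names (category, minutes_saved) are hypothetical and do not reflect Motific's actual data model.

    from collections import defaultdict

    # Hypothetical prompt records; field names are illustrative only.
    prompts = [
        {"category": "Coding Support", "minutes_saved": 12},
        {"category": "Content Processing", "minutes_saved": 8},
        {"category": "Coding Support", "minutes_saved": 15},
        {"category": "Greetings", "minutes_saved": 0},
    ]

    counts = defaultdict(int)
    time_saved = defaultdict(float)
    for p in prompts:
        counts[p["category"]] += 1
        time_saved[p["category"]] += p["minutes_saved"]

    total = sum(counts.values())
    for category in counts:
        share = 100 * counts[category] / total   # share of all prompts
        print(f"{category}: {share:.0f}% of prompts, "
              f"{time_saved[category]:.0f} min saved")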

Motifs

The Motifs that you have created are listed here, and you can view an individual Motif's prompt intelligence details by clicking on it. Data associated with the Motif, such as the Least prompted category, the Most prompted category, and the Total time saved while using that Motif, is displayed.

Here, you can filter the Motifs based on intelligence data. If a GenAI assistant provisioned via a Motif has been used for different tasks and prompts have been submitted, prompt intelligence data is associated with that Motif, and such Motifs can be viewed with the Contains intelligence data filter. If no prompts have been passed for a Motif, it can be found with the No intelligence data filter.

When you click on a Motif of your choice, the following tabs present easy-to-read graphs and metrics about the prompts/inputs that users submit to that particular Motif.

Overview

The Overview section provides information about the prompts passed to the model via a Motif.

Latest prompts

The Latest prompts section lists the most recent prompts sent by the users of the Gen AI assistant that you provisioned via Motific. A prompt classification is also provided. The following details can be viewed:

  • Date: The date and time the prompt was passed to the model.
  • Prompt ID: The ID of the prompt.
  • Prompt: The user input passed to the model to get an inference.
  • Requested task: The task category that the prompt belongs to (an illustrative classification sketch follows this list). The task category can be one of the following:
    • Content Processing
    • Coding Support
    • Brainstorming
    • Greetings
    • Text Translation
    • Unclassified
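
Each prompt is assigned to exactly one of these categories. The snippet below is a purely illustrative sketch of such a classification step, using a naive keyword lookup and a hypothetical classify_prompt helper; it does not represent Motific's actual classification logic.

    # Illustrative only: a naive keyword-based stand-in for whatever
    # classification Motific performs internally.
    KEYWORDS = {
        "Coding Support": ["function", "bug", "compile", "python"],
        "Text Translation": ["translate", "translation"],
        "Greetings": ["hello", "good morning"],
        "Brainstorming": ["brainstorm", "ideas", "suggest"],
    }

    def classify_prompt(prompt: str) -> str:
        text = prompt.lower()
        for category, words in KEYWORDS.items():
            if any(word in text for word in words):
                return category
        return "Unclassified"

    print(classify_prompt("Can you translate this paragraph into French?"))
    # -> Text Translation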

Number of prompts for the top 5 categories

In this section, you are provided with an easy-to-read graphical representation of the top 5 requested task categories, out of the total prompts submitted by the users of a Motif over the selected period.

Categories usage by prompts and time saved

The Categories usage by prompts and time saved graph shows what percentage of the prompts/inputs submitted by a Motif's users fall into each task category, and how much of the users' time was estimated to be saved by using the Gen AI assistant for each task.

These graphs give you an instant understanding of the types of tasks requested by the Motif's users, and help you identify which task categories save those users the most time.

This section shows the tasks that the Motif's users requested most and least often. You can use this information to optimize model usage.

  • Least prompted category: This metric shows the prompt trend and the number of prompts for the task that users requested least often while using the Motif.
  • Most prompted category: This metric shows the prompt trend and the number of prompts for the task that users requested most often while using the Motif.

This section highlights the token usage trends for the requested tasks. You can determine which tasks consume the most or fewest tokens and optimize cost and LLM usage. The following token usage trends are displayed:

  • Lowest token consumption category: Displays the task with the lowest token consumption, along with the token trend of the task with the lowest percent change in the number of tokens.
  • Highest token consumption category: Displays the task with the highest token consumption, along with the token trend of the task with the highest percent change in the number of tokens.

Category usage

The Category usage tab shows how often each task category is requested over time. You can select which task category to display information for.

Based on the various tasks identified by Motific, graphs for prompts per task, token usage per task, Gen AI cost per task, and trends and comparison with other tasks are provided. Each task has its own screen with all of the above-mentioned graphs and metrics, calculated when data for that specific task is available.

These graphs provide a detailed understanding of usage, token consumption, and LLM costs for a given task category.

The task categories available in Motific are as follows:

  • Coding Support
  • Content Creation
  • Content Processing
  • Conversational
  • Data Analysis
  • Greetings
  • Question & Answer
  • Text Translation
  • Unclassified

Let’s dive in and look at each of these graphs for a task. Every task presents the same graphs, populated with the data for the respective task category requested via the Motif. A graph can be empty if there is no corresponding data in the prompts submitted by users.

Prompts per category

The prompts per task graph displays how many prompts were requested for the particular task over a period of time. The task is determined by which tab you are on. The legend indicates which color represents the number of prompts.

Token usage per category

The token usage per task graph displays how many tokens were consumed when prompts for a particular category were requested by the users of a Motif over a period of time. The task is determined by which tab you are on.

Gen AI cost per category

The Gen AI cost per task graph displays the cost incurred over a period of time for the particular task, depending on which task tab you are on.

Trends and comparison with other tasks

The trends and comparison with other tasks graph provides information about prompts, input and output tokens, and cost. It compares these for the current and past periods to show the trends for a particular task, which is determined by the tab you are on. The graph has two sections, Trends and Comparison, and you can select which one to view from within the graph. A small arithmetic sketch of this period-over-period comparison follows the two views below.

Trends: In the Trends view, you can see the total number of prompts, input and output tokens, and cost for the current and previous periods for a particular task, showing how each is trending.

Comparison: In the Comparison view, you can compare the total prompts, total tokens, and total cost of a particular task with those of other tasks.
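
A period-over-period trend like this boils down to comparing totals for the current and previous windows and reporting the percent change. The sketch below shows that arithmetic for prompts, tokens, and cost; the numbers are invented for illustration and do not reflect any real Motif.

    def percent_change(current: float, previous: float) -> float:
        """Percent change from the previous period to the current one."""
        if previous == 0:
            return 0.0 if current == 0 else float("inf")
        return 100 * (current - previous) / previous

    # Hypothetical totals for one task category over two periods.
    current = {"prompts": 420, "tokens": 510_000, "cost_usd": 10.20}
    previous = {"prompts": 350, "tokens": 600_000, "cost_usd": 12.00}

    for metric in current:
        change = percent_change(current[metric], previous[metric])
        direction = "up" if change >= 0 else "down"
        print(f"{metric}: {current[metric]} "
              f"({direction} {abs(change):.1f}% vs previous period)")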

Time savings

Time savings builds on the usage insights. It adds estimates of the time saved, based on each transaction or prompt input from the various users of a particular application.

In the Time savings tab, based on the various tasks identified by Motific, graphs for time savings per task, estimated time saved per task, and Gen AI cost per task are provided.

The task categories available in Motific are as follows:

  • Coding Support
  • Content Creation
  • Content Processing
  • Conversational
  • Data Analysis
  • Greetings
  • Question & Answer
  • Text Translation
  • Unclassified

Let’s dive in and look at each of these graphs for a task. Every task presents the same graphs, populated with its respective time savings data.

Time savings per category

The time savings per task table provides a breakdown of the estimated time saved for the selected task category, with a comparison to the previous period and the corresponding trend (up or down). It displays the estimated time saved and Gen AI costs for a specific task requested by users through a Motif, for both the current and previous time periods. Additionally, it calculates and presents the time saved on reading, writing, reviewing, and searching for the task in either the current or previous time period. This data assists in estimating future trends. For each task, the same graphs are displayed with that task's data.

The benefit of these metrics is that comparing the estimated time savings with the previous period allows you to detect whether users are becoming more efficient in the way they prompt LLMs.

Time savings breakdown

The time savings breakdown graph illustrates the total and average estimated time saved for a specific category of tasks, such as coding support, content creation, and content processing. The steps are categorized as thinking, reading, writing, and testing. Motific estimates the time these steps would take for a particular task category, both with and without using GenAI. This provides valuable insight to organizations on how GenAI tools can enhance user productivity.

You can switch the breakdown between the Total and the Average estimated time saved; a simple arithmetic sketch of this estimate follows.
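
The underlying idea is an estimate of how long each step (thinking, reading, writing, testing) would take without GenAI versus with it, summed for the total and divided by the number of prompts for the average. The sketch below illustrates that arithmetic with invented per-step minute values; these figures are hypothetical and are not Motific's actual estimates.

    # Hypothetical per-step time estimates (minutes) for one task category.
    WITHOUT_GENAI = {"thinking": 10, "reading": 8, "writing": 15, "testing": 7}
    WITH_GENAI    = {"thinking": 6,  "reading": 3, "writing": 5,  "testing": 4}

    saved_per_prompt = {
        step: WITHOUT_GENAI[step] - WITH_GENAI[step] for step in WITHOUT_GENAI
    }

    num_prompts = 120                    # prompts in the selected period
    total_saved = sum(saved_per_prompt.values()) * num_prompts
    average_saved = total_saved / num_prompts

    print(saved_per_prompt)              # per-step savings for one prompt
    print(f"Total: {total_saved} min, Average: {average_saved:.0f} min per prompt")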

Estimated time saved per category

The estimated time saved per task graph provides information about the time saved in reading, writing, testing, and searching by using the Gen AI assistant for a particular task via a Motif over a period of time.

Optimization

In the Optimization tab, you can compare the performance, delay, and cost of the current model selected for a Motif against any other model of your choice. Based on these calculations, Motific recommends which model best suits your use case.

You can also view the comparison between the current model selected for a Motif and other models based on one or more of the optimization options, such as performance, delay, or cost.

Model optimization allows you to verify whether the best model has been selected for a given Motif according to the types of tasks that users perform. After reviewing the optimization results against other models, you can choose to reconfigure the Motif to use another model and observe the resulting metrics.

To get optimization recommendations, follow these steps:

  1. Navigate to the Motif menu and click the Motif for which you want to check optimization recommendations. If you have not created a Motif yet, create a new one.
  2. Go to the Optimization tab of the Motif details.
  3. Here, the current model for the Motif is already selected and cannot be changed.
  4. Next, select whether you want optimization recommendations based on a model or on other options.

Optimization by model

  1. For optimization details by model, select By model.
  2. Next, select an LLM provider and a model against which you want to check the optimization details.
  3. Click Submit.
  4. View the results on the graph in the Absolute tab. The graph compares the cost, performance, and delay of the two selected models. You can also view the recommended model.
  5. In the Normalized result tab, you can view the normalized graph of the optimization across cost, delay, and performance.

Optimization by options

  1. For optimization details by different options, select By options.
  2. Next, select one or more options for which you want to check the optimization details. The options available are cost, delay, and quality.
  3. Click Submit.
  4. View the results on the graph in the Absolute tab. The graph compares the cost, quality, and delay of the current model and the best model for the options you selected. You can also view the recommended model.
  5. In the Normalized result tab, you can view the normalized graph of the optimization across cost, delay, and quality (a small normalization sketch follows these steps).
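
To plot cost, delay, and quality on a single normalized chart, each metric is typically rescaled to a common 0–1 range, keeping in mind that lower is better for cost and delay while higher is better for quality. The sketch below shows one common way to do this (min-max normalization); the model names and metric values are invented, and the exact scaling Motific uses may differ.

    # Invented example metrics for two models being compared.
    models = {
        "current-model":   {"cost": 0.8, "delay": 1.2, "quality": 0.78},
        "candidate-model": {"cost": 0.5, "delay": 1.6, "quality": 0.74},
    }

    def normalize(values, lower_is_better):
        """Min-max scale to 0-1, flipping so that 1.0 always means 'better'."""
        lo, hi = min(values.values()), max(values.values())
        span = (hi - lo) or 1.0
        scaled = {name: (v - lo) / span for name, v in values.items()}
        return {n: 1 - s for n, s in scaled.items()} if lower_is_better else scaled

    for metric, lower_better in [("cost", True), ("delay", True), ("quality", False)]:
        raw = {name: m[metric] for name, m in models.items()}
        print(metric, normalize(raw, lower_better))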