What's new

Motific key features

Security and compliance

Personally Identifiable Information (PII) protection

Motific detects, classifies, controls and produces analytics reports on PII entities flagged going in and out of LLMs. The following PII entities can be protected through redaction or blocking: Social Security Number (SSN), Credit card number, Phone number, Physical address, Person name, Email address.

LLM (Large Language Model) security

Prompt injection detection and malicious URLs can be detected in prompts and defined actions are taken to mitigate the security risks.

Toxic content prevention

Motific detects, classifies, controls and produces analytics reports on inappropriate and harmful content going in and out of LLMs that violates company’s harmful content policies. The following toxic content categories are prevented from inputs and responses: hate, self-harm, violence, and sexual. Enterprises can identify content prohibitied by their usage policies, and Motific’s inbuilt controls can take appropriate action to warn or block that content.

Off-topic prevention

Off-topic detection policy in Motific, helps keep conversations focused and relevant, preventing misuse of chatbots for unintended purposes. The prompts and LLM responses are scanned to determine if the topic of conversation with an AI assistant is not from the off-topic list. The off-topic list can be configured through the Motific admin console.

Cost metrics

Options for optimization of LLM usage budgets are provided. The cost trends based on token usage and prompt categories can be viewed for each Motif’s usage.

Shadow GenAI detection

Monitor and detect usage of unauthorized LLMs and GenAI services using integrations such as Cisco Umbrella solutions.

LLM (Large Language Model) gateway

Motific provides hassle free integration with industry leading Large Language Models (LLM) like Azure OpenAI, Mistral and AWS (Amazon Web Services) Bedrock. Motific adds additional LLM providers on an ongoing basis.

Common LLM adapter

Standardized interfaces are provided to integrate all supported LLM models by unifying their APIs and request protocols to enable simplified consumption by enterprise business clients.

Currently, LLMs from the following providers are supported:

  • Azure OpenAI models
  • Mistral
  • Amazon Bedrock models

Prompt intelligence

Motific has a distinctive approach to gathering and presenting information and patterns related to the prompts sent via a Motif to the LLMs, which we call prompt-intelligence. Prompt-intelligence encompasses various details of the prompts that are passed to an AI assistant. For example, what kind of tasks does each prompt request, which tasks are most often requested by the user, what are the token usage trends for each requested task, what is the time saved by using an AI assistant for a particular task, and trends and comparisons with different tasks. All this data can help you make the necessary decisions, and you can use this data to optimize your AI assistant usage.

With Motific, you can now measure the time saved by users when they utilize the Gen AI assistant for a task. Time-saving is demonstrated through various tasks that users typically engage in, such as reading, writing, searching, or reviewing for specific details. These metrics allow you to discover how Gen AI assistants enable productivity within your organization.

Moreover, Motific offers model optimization options, allowing you to compare the performance of your chosen AI model with other providers based on parameters like delay, cost, and quality of replies.

Motific provides detailed information in easily understandable graphical representations of prompt trends for tasks that users may perform with the Gen AI assistant, such as brainstorming, content generation, Q&A, and data analysis.

Some of the graphs include:

  • Tasks most frequently requested by users via your Gen AI assistant.
  • Token usage patterns for various tasks, indicating the tasks that consume the most tokens.
  • Gen AI cost trends per task.
  • Time savings per task.

Motif access control

Definition and enforcement of policies to ensure that only authorized individuals, groups or user roles can interact with provisioned APIs and assistants.

Configuration & Monitoring

  • Policy configurator: Policy definition templates for individual system control plugins to enable definition and review of policies for individual security and compliance controls.

  • Abstracted API console: Abstracted API enable teams to connect their GenAI apps with Motific.

  • Testing console: Testing tools are provided to evaluate the performance of policies applied to newly vended APIs and assistant.

  • Observability dashboard: IT admin dashboard to view real time activity, configure alerts and notifications, generate reports and capture logs. For each of the above visibility and insights mechanisms, Motific provides the following details to the IT administrators: Violations of defined policies,input and output tokens used, most utilized LLMs, most active Motifs.

RAG system

RAG (Retrieval-Augmented Generation) integration in AI assistants and abstracted API for adding organizational private data sources such as:

  • SharePoint
  • Websites

What can you do in Motific?

  • With just a few clicks, central-IT and security teams in organizations can provision GenAI assistants and abstracted APIs for Large Language Models, that are customized with Retrieval-Augmented-Generation (RAG) on organizational data sources, for out-of-the-box use or for building GenAI applications.
  • Sign up and get started with Motific.
  • Create a Motif, create LLM connections, and enable policies and actions that can be applied to the AI assistant that you provision via a Motif.
  • You can also connect knowledge bases customized with Retrieval-Augmented-Generation (RAG) on organizational data sources.
  • Learn about how you can add users to your Motific tenant. There are different roles authourized to interact with certain features of Motific.
  • Integrate your application with the Motif created by your organization. Connect to the Motific API endpoints that enable you to interact with the LLM provider and knowledge bases configured while adhering to the policies defined in the Motif.
  • Test out the Motifs you create in our chat console environment.