What's new

Motific.ai key features

Security and compliance guardrails

Motific.ai’s security and compliance guardrails empower organizations to establish and maintain controlled GenAI usage, ensuring adherence to company policies and responsible AI standards. These guardrails and controls also incorporate safeguards that protect GenAI applications and their associated data from unauthorized access, modification, or destruction. The safeguards are delivered as policy templates within Motific.ai, giving you the flexibility to tailor them to your specific requirements. A policy must be applied to a GenAI application for it to take effect.

The policies that can be configured with Motific.ai include:

  • Code presence: The code presence policy, when applied, detects the presence of code in input prompts and model responses. This policy is currently experimental and supports detection of programming languages such as Python, JavaScript, and Java.
  • Adversarial content: An adversarial content policy, when applied, can block attempts to exploit AI models through prompt injection, SQL injection, and other security threats, ensuring safe interactions with LLMs.
  • Toxic content: The toxic content policy enforces guidelines against toxic content (an umbrella term for rude, offensive, or sexually explicit material) and other unsafe content. It helps ensure that interactions with any LLM are free from racism, sexism, and other harmful behavior.
  • Malicious URL: Malicious URL and data protection policy prohibits the injection of harmful URLs, protecting the chat interface from cybersecurity risks.
  • Off-topic content: Off-topic content policy, when set, helps maintain focused and relevant conversations, preventing misuse of chatbots for unintended purposes.
  • Personally identifiable information content: PII content policy prevents the sharing of sensitive personal information with LLMs to safeguard user privacy. It can redact and block the following PII entities: Social Security Number (SSN), credit card number, phone number, physical address, person name, and email addresses.
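
As a purely conceptual illustration of how a policy such as the PII policy above might behave, the sketch below redacts a few common PII patterns from a prompt before it would reach an LLM. The patterns and function names are illustrative assumptions, not Motific.ai's implementation.

```python
import re

# Illustrative patterns only -- not Motific.ai's detection logic.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[- ]?){13,16}\b"),
    "PHONE": re.compile(r"\b\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace detected PII entities with placeholder tags before the prompt is sent to an LLM."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact_pii("My SSN is 123-45-6789 and my email is jane@example.com"))
# -> "My SSN is [SSN REDACTED] and my email is [EMAIL REDACTED]"
```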

Language support

Motific.ai supports the English language only. You can enable Motific.ai to work with other languages, but note that the underlying small language models that power the Motific.ai system are trained on English-language datasets.

LLM provider connections

Motific.ai offers seamless integration with a range of foundation models from multiple providers, including Mistral AI, Amazon Bedrock, and Azure OpenAI. This flexibility allows Motific.ai administrators to tailor a wide range of GenAI assistants and API endpoints to the specific use cases of business teams.

Retrieval augmented generation service

Motific.ai's RAG service is an enterprise-grade offering built on the Retrieval-Augmented Generation (RAG) framework and backed by a generative AI toolchain. The toolchain incorporates data source connectors, embedding models, a vector database, and a retrieval system to produce context-aware model inputs and outputs. It lets you incorporate tailored knowledge bases into your generative AI applications, drawing on enterprise data sources such as Microsoft SharePoint or internal and external websites that hold business-relevant documents. With the RAG service, Motific.ai generates responses that are not only precise but also pertinent to the given context, informed by actual data rather than relying exclusively on the model's pre-trained knowledge.
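
The sketch below gives a highly simplified view of the retrieval step in such a toolchain, using a stand-in bag-of-words embedding and an in-memory index instead of a real embedding model and vector database; it is meant only to make the flow concrete, not to reflect Motific.ai's implementation.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a simple bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# "Knowledge base": documents pulled from connected data sources (e.g. SharePoint pages).
documents = [
    "Expense reports must be submitted within 30 days of travel.",
    "The VPN client is required for all remote access to internal systems.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    return [doc for doc, vec in sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)[:k]]

query = "When do I need to file my expense report?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# `prompt` is what would be sent to the chosen foundation model.
```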

Hallucination policy for RAG

In Motific.ai, hallucination detection checks that queries and responses remain faithful to the context derived from the knowledge bases attached to Motific.ai assistants and API endpoints. Currently, context supplied in a user query that falls outside the Motif's knowledge base is treated as part of the user prompt and is not checked for response faithfulness.
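
Purely to illustrate what a faithfulness check means in practice (this is not Motific.ai's detection method), a crude heuristic might flag response sentences that share little vocabulary with the retrieved context:

```python
def grounded_ratio(sentence: str, context: str) -> float:
    """Fraction of a response sentence's words that also appear in the retrieved context."""
    sent_tokens = set(sentence.lower().split())
    ctx_tokens = set(context.lower().split())
    return len(sent_tokens & ctx_tokens) / len(sent_tokens) if sent_tokens else 1.0

def flag_unsupported(response: str, context: str, threshold: float = 0.5) -> list[str]:
    # Naive sentence split; sentences poorly supported by the knowledge-base context are flagged.
    return [s for s in response.split(". ") if grounded_ratio(s, context) < threshold]
```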

Cost management

Motific.ai provides comprehensive cost management for every customized GenAI application through configurable token budgets and thresholds for each Motif. This cost control functionality lets you define token usage limits for each application; should an app exceed its allocated token threshold, it stops processing further prompts or inputs, and users no longer receive responses to their queries. You can adjust these budgets to align with weekly, monthly, or annual spending plans or with changing usage patterns.
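
A minimal sketch of how a per-Motif token budget and threshold could be enforced follows; the class and method names are assumptions for illustration, not Motific.ai's API.

```python
class TokenBudget:
    """Tracks token consumption for a single Motif against a configurable budget."""

    def __init__(self, limit: int):
        self.limit = limit  # e.g. a weekly, monthly, or annual token allowance
        self.used = 0

    def record(self, prompt_tokens: int, response_tokens: int) -> None:
        self.used += prompt_tokens + response_tokens

    def allow_request(self) -> bool:
        # Once the threshold is exceeded, further prompts are refused.
        return self.used < self.limit

budget = TokenBudget(limit=1_000_000)  # monthly budget for one Motif
if budget.allow_request():
    ...  # forward the prompt to the model, then call budget.record(...)
```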

Intelligence

The Intelligence feature offers a suite of insights spanning operational, usage, and business metrics. It equips Motific.ai administrators and business decision-makers with the data needed to make informed investment and operational decisions based on how Motific.ai assistants are used. The insights include a summary of the tasks performed by GenAI assistants, analysis of token consumption trends by task category, estimates of the productivity gains and time savings achieved by deploying GenAI assistants, and recommendations for the most effective model to handle specific tasks.
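
To make a metric such as token consumption by task category concrete, the aggregation could look roughly like the following; the records and category names are invented for illustration.

```python
from collections import defaultdict

# Hypothetical per-prompt usage records, similar in spirit to what the Intelligence feature aggregates.
usage = [
    {"task_category": "summarization", "tokens": 1200},
    {"task_category": "code generation", "tokens": 3400},
    {"task_category": "summarization", "tokens": 900},
]

tokens_by_category = defaultdict(int)
for record in usage:
    tokens_by_category[record["task_category"]] += record["tokens"]

print(dict(tokens_by_category))
# {'summarization': 2100, 'code generation': 3400}
```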

Dashboard

Motific.ai’s observability dashboard provides real-time monitoring for AI assistants equipped with configured policies and data sources. It offers insight into key operational metrics such as policy violations and token consumption for both inputs and outputs. It also features usage insights, including visualizations of the number of prompts for the top five task categories queried by users across all provisioned AI assistants. Finally, the dashboard highlights trends in token usage, comparing the current month's data with the previous month's.

Monitoring

The monitoring feature lets you review a summary of policy flags by Motif over a period of time, as well as the assistant users with the most flags. It also shows a summary of the top token consumption by user and by Motif.

Shadow GenAI detection is another feature of the monitoring section that enables detection of enterprise endpoints using LLMs across the organization. This capability requires integrating with an existing Cisco Umbrella CASB account.

Prompt history is another feature within the monitoring section that lets you review each individual prompt interaction with the system. This is useful for use cases such as audit trails, policy effectiveness evaluation, and end-to-end system efficiency checks.
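
As a sketch of the kind of record such an audit trail implies, one interaction might be captured as follows; the field names are illustrative assumptions, not Motific.ai's schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PromptRecord:
    """One prompt/response interaction as it might appear in a prompt-history audit trail."""
    motif: str
    user: str
    prompt: str
    response: str
    policy_flags: list[str] = field(default_factory=list)  # e.g. ["PII", "Off-topic content"]
    prompt_tokens: int = 0
    response_tokens: int = 0
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```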

Abstract APIs

Our abstracted APIs provide simplified, consistent access to your chosen foundation models, making it easy to integrate them with your GenAI assistants.
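
For example, a client integration against an abstracted endpoint might look roughly like the sketch below; the URL, path, and payload fields are placeholders, not Motific.ai's documented API.

```python
import json
from urllib import request

# Hypothetical abstracted endpoint -- substitute the endpoint provisioned for your Motif.
ENDPOINT = "https://api.example.com/v1/motifs/hr-assistant/chat"
API_KEY = "YOUR_API_KEY"

payload = {"prompt": "Summarize our travel expense policy."}
req = request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
)
with request.urlopen(req) as resp:
    print(json.load(resp))  # the model response, independent of the underlying provider
```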

What can you do in Motific?

  • With just a few clicks, central IT and security teams can provision GenAI assistants and abstracted APIs for large language models, customized with RAG over organizational data sources, for out-of-the-box use or for building GenAI applications.

Step 1: Sign up and access the Admin Console

Step 2: Configure a Motif

Customize the following settings within the Motif:

  • Connect your model provider and select the models that will power your Motifs.

  • Connect your data sources and create knowledge bases to provide contextual data to your Motifs.

  • Create a policy from our templates to improve trust, safety, security, and cost compliance.

  • Add users to your Motific.ai tenant. Different roles are authorized to interact with different features of Motific.ai.

  • Create a Motif connecting your model, knowledge base, and policies to deliver a trustworthy GenAI assistant.
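
Conceptually, the result of Step 2 is a Motif that ties these pieces together. A rough sketch of such a configuration is shown below; the field names and values are invented for illustration and do not reflect Motific.ai's actual schema.

```python
# Illustrative only: the shape of a Motif configuration, not Motific.ai's real schema.
motif_config = {
    "name": "hr-assistant",
    "model": {"provider": "azure-openai", "deployment": "gpt-4o"},
    "knowledge_bases": ["hr-sharepoint-site"],
    "policies": ["pii-content", "toxic-content", "monthly-token-budget"],
    "users": ["analyst@example.com"],
}
```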

Step 3: Test a Motif

Step 4: Monitor/Intelligence

  • Monitor how your configured Motif is being used.

  • Track engagement with LLMs using prompts within the Motif’s framework.