Deploy local and secure AI agents
Proto powers multilingual workflow automation with secure AI training, API management, and sovereign hosting.
Instantly train AI with new information from URLs or documents, keeping responses up-to-date and relevant.
Combine the best of traditional chatbot capabilities with modern LLMs, allowing you to create multiple assistants tailored to your brand voice and customer service guidelines, delivering personable interactions.
Enable seamless handover between AI agents – even across different workspaces – to ensure smooth, context-aware support throughout the customer journey.
Leverage generative AI for powerful conversations. Third-party LLMs from providers such as OpenAI can optionally be integrated into any assistant to enhance its capabilities.
Deploy a customised and secure large language model trained on your organisation’s knowledge and modified with ProtoAI natural language understanding.
Deploy your AI agents across a variety of channels and platforms, including webchat, WhatsApp, and Facebook Messenger among others.
Set up agents in a host of languages, as required by your location and customer preferences.
Engage website visitors with a proactive popup message triggered by custom page-based rules or API.
Receive and generate voice messages for faster, more intuitive communication with customers across popular apps.
Automatically transcribe voice messages into text – enabling live reps to read and respond directly from the Inbox.
Convert AI replies into voice messages and include transcripts for clarity and accessibility.
Create scripted chat flows by designing custom triggers and actions. From simple, automated responses to complex API integrations, easily build and visualise your assistant's decision tree.
Use chat variables to store information from a chat session, including data about the app, user, and any inputs during the interaction. Utilise these variables in an AI assistant's subsequent actions.
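As an illustration only (Proto's actual variable syntax is not shown here), chat variables can be thought of as a key-value store attached to the session, which later actions read from. The names below, such as `user.name`, are hypothetical:

```python
# Hypothetical sketch of chat variables as a per-session key-value store.
# Variable names like "user.name" are illustrative, not Proto's real syntax.

class ChatSession:
    def __init__(self):
        self.variables = {}

    def set(self, name, value):
        """Store a value captured during the conversation."""
        self.variables[name] = value

    def render(self, template):
        """Substitute {name} placeholders in a subsequent action's text."""
        for name, value in self.variables.items():
            template = template.replace("{" + name + "}", str(value))
        return template

session = ChatSession()
session.set("user.name", "Ada")      # e.g. collected by an input prompt
session.set("order.id", "A-1042")    # e.g. returned by an earlier action
reply = session.render("Hi {user.name}, order {order.id} is on its way.")
```

In this sketch, any action that runs after the values were captured can interpolate them into its output.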
Create tickets automatically, using information collected from chats to auto-populate and auto-assign new cases.
Supercharge your AI assistant with API calls, allowing it to send custom requests to external systems and receive responses.
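Conceptually, an API call action bundles a request to an external system and maps fields from the response back into the chat session. The sketch below is a hypothetical shape of such an action, with an injected transport so it stays self-contained; the endpoint, header, and field names are assumptions, not Proto's real configuration:

```python
import json

# Hypothetical shape of an "API call" action: build a request from chat
# variables, send it, and map fields from the JSON response back into
# the session. Endpoint and field names are illustrative only.

def run_api_action(session_vars, send):
    request = {
        "method": "GET",
        "url": "https://example.com/orders/" + session_vars["order.id"],
        "headers": {"Authorization": "Bearer <token>"},  # your auth rules
    }
    response = json.loads(send(request))   # transport is injected
    session_vars["order.status"] = response["status"]
    return session_vars

# Stub transport standing in for the real HTTP call:
fake_send = lambda req: json.dumps({"status": "shipped"})
vars_after = run_api_action({"order.id": "A-1042"}, fake_send)
```

The mapped value (`order.status` here) can then be used by the assistant's subsequent actions, just like any other chat variable.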
Enable your live reps to seamlessly intervene in assistant-led conversations, ensuring human touch in critical situations for elevated customer service.
Gain unlimited support from a dedicated prompt engineer to set up and optimise your AI assistant, ensuring a personalised and intuitive user experience.
AI Agents FAQ
Can I edit AI agent training data at any time?
Yes – you can sync AI agents with URLs and upload documents (PDFs, tables, JSON, and transcripts). Training sources can be added, edited, or removed at any time.
Can I set up prompts to direct the AI agent on how to behave?
Yes – you can customise instructions for your LLM-powered AI agent, and use intents, triggers, and structured flows for specific journeys (e.g., refunds, complaint intake, bookings). AI agents can also call APIs using your authentication rules.
Can Proto’s AI agents handle complex queries?
Yes – AI agents can use your knowledge base (with LLM enabled) to answer complex questions, while escalating sensitive or unresolved cases to live agents. You can also build manual workflows for structured customer journeys (e.g., refunds, complaint intake, bookings). Workflows give you full control when building complex agentic functionality.
Can the AI agent be trained to correct its answers?
Yes – monitor conversations in the Inbox, test the AI agent before deployment, add or refine training sources, and adjust prompts or workflows to improve how the AI agent responds over time.
Does Proto support hybrid conversation design alongside generative AI?
Yes – you can combine scripted workflows and structured triggers with LLM-generated responses in the same assistant, choosing where the conversation follows a fixed script and where generative AI takes over.
How does Proto work with large language models?
Every message is routed through one of the configured workflows. An LLM-generated response can be triggered by a custom LLM action in any of your workflows, or when fallback is triggered.
For response generation, you can enable third-party LLMs (available on any plan), use a dedicated LLM (Enterprise Max), or self-host your own model (available for Enterprise on-premise deployments).
What AI models can Proto AI agents use?
Proto's AI agents can function with or without third-party LLMs enabled.
Without an enabled LLM, functionality is limited to workflows (the knowledge base will not be available). This setup provides greater data control, as no data is shared with external providers; the AI agent runs on Proto's own AI engine.
For response generation, you can enable third-party LLMs (available on any plan), use a dedicated LLM (Enterprise Max), or self-host your own model (available for Enterprise on-premise deployments). Our Enterprise solution provides the most extensive data control options.
What is an AI agent workflow?
A workflow is a set of actions an AI agent executes when a specific user intent or event is detected.
Workflows can be triggered by events such as:
- Message received – when a user sends a new message
- Attachment received – when a user sends a file
- Fallback – when the AI agent cannot confidently match the message to a known workflow
Every AI agent also includes a baseline set of default workflows to handle key lifecycle events, such as Chat started, Chat closed, Chat timeout, AI timeout, and Fallback.
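The event-to-workflow routing described above can be sketched as a simple dispatcher that falls back when no workflow matches. This is an illustrative model only; the event names mirror the list above, and the handlers are placeholders:

```python
# Minimal sketch of event-based workflow dispatch, assuming a registry
# of event-name -> handler pairs. Handlers are illustrative placeholders.

workflows = {
    "message_received": lambda payload: "scripted reply to: " + payload,
    "attachment_received": lambda payload: "thanks for the file",
}

def dispatch(event, payload):
    """Route an event to its workflow, or to Fallback when none matches."""
    handler = workflows.get(event)
    if handler is None:
        return "fallback: escalate or generate an LLM reply"
    return handler(payload)

print(dispatch("message_received", "where is my order?"))
print(dispatch("unrecognised_event", ""))  # lands in Fallback
```

In the real product, Fallback is itself a workflow, so the "no match" branch can run the same kinds of actions (escalation, an LLM response, and so on) as any other workflow.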