Infrastructure
Operations are anchored in Swiss data centres, primarily in Zurich and Geneva, ensuring data and processes remain under Swiss legal jurisdiction
AI for Enterprise

One-click switching between any LLM, including Gemini, Claude, ChatGPT, Groq or local models
Session-based pricing keeps costs low and predictable, and ensures your budget goes where the most value is created
White-label branding: attach your own brand name, colours and logo
Integrate with AI Agents, tools, and workflows so the interface can ACT, not just chat
100% auditability, human-in-the-loop approval, built-in guardrails
Option to store all your data on-premises or in a private cloud
Model switching, model fallback, centralised memory, RAG integration
Import your history from your previous LLM: yourGPT immediately knows your context
1. Deploy into your own VPC or on-premises infrastructure for complete data control
2. Run as a fully managed SaaS in the Lyzr cloud
All of your chosen LLMs are built in, so you can always use the best AI for the job.
Model Fallback: automatically switch to a secondary option if an LLM fails a safety check
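The fallback behaviour described above can be sketched roughly as follows. The function and model names here are illustrative stand-ins, not the platform's actual API:

```python
def call_with_fallback(prompt, providers, safety_check):
    """Try each provider in order; fall back when a safety check fails."""
    for provider in providers:
        response = provider(prompt)
        if safety_check(response):
            return response
    raise RuntimeError("All providers failed the safety check")

# Illustrative stand-ins for real model clients.
def primary_model(prompt):
    return "UNSAFE: " + prompt          # pretend this output trips the check

def secondary_model(prompt):
    return "Safe answer to: " + prompt

def passes_safety(text):
    return not text.startswith("UNSAFE")

result = call_with_fallback("hello", [primary_model, secondary_model], passes_safety)
```

The key design point is that the safety check gates every response, so a failing primary model never reaches the user.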
Connecting your GPT to applications within your ecosystem turns the chat interface into an organisational command centre
Fully managed SaaS edition: the fastest time-to-value
Full data sovereignty & control: the most secure option
The default retention period is 12 months for all stored data — conversation history, files, and embeddings.
This is fully configurable based on your compliance requirements. Contact your account team to adjust.
All deletions are hard deletions. When an agent or knowledge base is deleted, data is permanently removed with no soft-delete intermediary or backup persistence beyond the configured SLA.
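A retention sweep with hard deletion, as described above, might look conceptually like this. The store layout, field names, and dates are hypothetical, chosen only to illustrate the 12-month default:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=365)  # default 12-month retention; configurable

store = {
    "conv-1": {"created": datetime(2023, 1, 1), "data": "old conversation"},
    "conv-2": {"created": datetime(2024, 5, 1), "data": "recent conversation"},
}

def retention_sweep(store, retention, now=None):
    """Hard-delete records older than the retention window (no soft delete)."""
    now = now or datetime.now()
    expired = [k for k, v in store.items() if now - v["created"] > retention]
    for key in expired:
        del store[key]          # permanent removal, no tombstone kept
    return expired

deleted = retention_sweep(store, RETENTION, now=datetime(2024, 6, 1))
```

Because the delete is immediate and leaves no tombstone, nothing persists for a later "undelete" step.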
Uploaded files are stored temporarily, encrypted at rest, and follow the same retention policy as all other data.
Supported file types are parsed for text extraction. Document metadata such as comments, tracked changes, and hidden sheets may be processed as part of extraction.
Lyzr's optional PII Guard layer sits in the processing pipeline — before any data is sent to an external LLM provider. It combines two detection approaches:
NLP-based entity recognition
Identifies contextual entities like person names using trained language models.
Pattern & regex matching
Catches structured PII: credit card numbers, SSNs, phone numbers, emails, IP addresses, IBANs. Results are merged, deduplicated, and filtered through per-entity confidence thresholds to reduce false positives.
Detected entities are handled in one of three ways, configurable per entity type: Redact — replaced with a numbered placeholder such as PERSON_1; Block — the entire input is rejected; or Disabled — no action taken. A reverse mapping is returned so responses can be de-anonymised on the return path.
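The redact-and-restore flow above can be sketched with a minimal example. The regex patterns here are simplified illustrations, not the platform's real detection rules:

```python
import re

# Illustrative regex patterns for structured PII (not the real rule set).
PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "PHONE": r"\+?\d[\d -]{7,}\d",
}

def redact(text):
    """Replace detected entities with numbered placeholders; return a reverse map."""
    reverse_map = {}
    counters = {}
    for entity, pattern in PATTERNS.items():
        def repl(match, entity=entity):
            counters[entity] = counters.get(entity, 0) + 1
            placeholder = f"{entity}_{counters[entity]}"
            reverse_map[placeholder] = match.group(0)
            return placeholder
        text = re.sub(pattern, repl, text)
    return text, reverse_map

def deanonymise(text, reverse_map):
    """Restore original values on the return path."""
    for placeholder, original in reverse_map.items():
        text = text.replace(placeholder, original)
    return text

masked, mapping = redact("Contact alice@example.com")
```

Only the masked text leaves the pipeline; the reverse map stays local, so the external model never sees the raw values.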
85.9% precision, 82.8% recall, 83.4% F-score on 10,000 samples
The system was evaluated against a synthetic dataset of 10,000 samples spanning 17 PII entity types, including names, addresses, credit cards, SSNs, phone numbers, emails, IBANs, and IP addresses.
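For readers unfamiliar with the metrics quoted above, they derive from entity-level true/false positive and false negative counts. The counts below are toy numbers, not the benchmark data:

```python
def metrics(tp, fp, fn):
    """Precision, recall, and F1 from entity-level detection counts."""
    precision = tp / (tp + fp)           # of flagged spans, how many were real PII
    recall = tp / (tp + fn)              # of real PII, how much was caught
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy counts for illustration only.
p, r, f = metrics(tp=80, fp=20, fn=20)
```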
The only exception is images: PII embedded visually in scanned documents, screenshots, or photos of IDs will not be detected.
Captured:
Request identifiers for debugging
Model and provider info
Token usage for billing
Error rates and response times
Workflow steps — tools and actions run
NEVER Captured:
Personal information of any kind
Content of prompts or responses
Uploaded files or attachments
Conversation history
Any organisational identifiers
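The captured/never-captured split above amounts to building telemetry records from an explicit allowlist of operational fields. A minimal sketch, with hypothetical field names:

```python
def telemetry_record(request):
    """Capture operational metadata only; never content or identities."""
    return {
        "request_id": request["request_id"],
        "model": request["model"],
        "provider": request["provider"],
        "tokens": request["tokens"],
        "latency_ms": request["latency_ms"],
        "steps": request["steps"],        # tools/actions run, not their content
    }

record = telemetry_record({
    "request_id": "req-42",
    "model": "gpt-x",
    "provider": "openai",
    "tokens": 128,
    "latency_ms": 420,
    "steps": ["search", "summarise"],
    "prompt": "sensitive text",           # present upstream, never captured
    "user_id": "u-1",                     # present upstream, never captured
})
```

Allowlisting beats denylisting here: a new sensitive field added upstream is excluded by default rather than leaked by default.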
Lyzr enforces logical tenant separation — each organisation's agents, knowledge bases, conversation history, and configuration are scoped to that organisation's namespace and cannot be accessed by other tenants.
Conversation data is stored per session and isolated per organisation. Message storage can be disabled entirely via a platform setting if your policy requires it.
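Logical tenant separation as described above means every read and write is scoped to one organisation's namespace. A minimal sketch of the pattern, with hypothetical names:

```python
class TenantStore:
    """Namespace-scoped storage: tenants cannot read each other's keys."""
    def __init__(self):
        self._data = {}

    def put(self, org_id, key, value):
        self._data.setdefault(org_id, {})[key] = value

    def get(self, org_id, key):
        # Lookups only ever traverse the caller's own namespace.
        return self._data.get(org_id, {}).get(key)

store = TenantStore()
store.put("org-a", "agent-1", {"name": "support-bot"})
```

A lookup from another tenant simply finds nothing, because the namespace is part of every access path rather than an optional filter.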
Yes — all data is encrypted both in transit using TLS and at rest. This applies across all datastores: AWS DocumentDB, Qdrant, ElastiCache, S3, and ClickHouse.
Lyzr uses AWS KMS for key management and supports automatic key rotation. A secrets management layer
handles all provider and API keys securely. Customer-managed keys — BYOK and HYOK — are supported.
Contact your account team for the scope and limitations specific to your deployment tier.
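The key-management pattern above (envelope encryption with a rotating master key) can be illustrated with a toy sketch. XOR stands in for a real cipher and locally generated bytes stand in for KMS calls; this is purely conceptual and must never be used for actual encryption:

```python
import secrets

def xor(data, key):
    """Toy stand-in for a real cipher -- never use for actual encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Envelope encryption: a data key encrypts the data;
# a master (KMS-held) key wraps the data key.
master_key = secrets.token_bytes(32)
data_key = secrets.token_bytes(32)

ciphertext = xor(b"customer record", data_key)
wrapped_key = xor(data_key, master_key)          # stored beside the ciphertext

# Key rotation re-wraps the data key under a new master key; data is untouched.
new_master = secrets.token_bytes(32)
rewrapped = xor(xor(wrapped_key, master_key), new_master)

# Decrypt path: unwrap the data key, then decrypt.
recovered = xor(ciphertext, xor(rewrapped, new_master))
```

The point of the pattern is that rotating the master key never requires re-encrypting the stored data, only re-wrapping the small data key.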
External LLMs receive:
System prompt
User message or PII-masked version
Conversation history
Tool definitions
NEVER receive:
User IDs or org IDs
Session identifiers
Billing or usage data
Any internal Lyzr metadata
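The receive/never-receive split above is, in effect, an allowlisted payload builder. A minimal sketch with hypothetical field names:

```python
def build_llm_payload(session):
    """Forward only the fields external providers are allowed to see."""
    return {
        "system_prompt": session["system_prompt"],
        "messages": session["messages"],     # PII-masked user content + history
        "tools": session["tools"],
    }
    # user_id, org_id, session_id, and billing data are never forwarded.

session = {
    "system_prompt": "You are a helpful assistant.",
    "messages": [{"role": "user", "content": "Hello"}],
    "tools": [],
    "user_id": "u-123",
    "org_id": "org-a",
    "session_id": "s-999",
}
payload = build_llm_payload(session)
```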
Provider allowlisting, single-provider mode, and regional data residency controls are all available. The platform's fallback routing system can be configured to restrict which providers are eligible, including preventing fallback across provider boundaries entirely.
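A routing policy combining allowlisting with restricted fallback might look like this. The function and provider names are illustrative, not the platform's configuration API:

```python
def route(primary, fallbacks, allowlist, single_provider=False):
    """Return the ordered provider chain after applying the policy."""
    chain = [p for p in [primary] + fallbacks if p in allowlist]
    if single_provider:
        # No cross-provider fallback: keep only the first eligible provider.
        chain = chain[:1]
    return chain

chain = route("openai", ["anthropic", "google"],
              allowlist={"openai", "anthropic"})
```

With `single_provider=True`, the chain collapses to the primary alone, which models a policy that forbids fallback across provider boundaries.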
Discuss your specific provider restriction requirements with Growth Directors to configure the right routing policy for your compliance and data residency environment.
Lyzr enforces responsible AI guardrails as a core part of the agent architecture. These include input inspection, content filtering, and the optional PII Guard layer. Agent traces are available for auditability.
Prompt injection controls and RAG grounding mechanisms are built into the retrieval and processing pipeline.
Log export to external SIEM systems or S3 for independent retention is a supported capability at the enterprise tier. The exact log schema for user requests and admin/config actions, tamper-evidence controls, and export configuration details are available in the enterprise documentation package.
Contact our Growth Directors to discuss log forwarding and SIEM integration for your compliance needs.