Trust & Security

Responsible AI

AI that works for your business. Built with accountability.

Complexio builds enterprise AI for operational intelligence. Our AI governance framework ensures that every system we deploy is transparent, proportionate, and aligned with European regulatory standards.

Guiding principles for trustworthy AI.

  1. Transparency — users always know when they are interacting with AI, and can understand how outputs are generated.
  2. Proportionality — AI is applied only where it adds genuine business value, with safeguards proportionate to the risk.
  3. Human Oversight — AI supports critical decisions rather than replacing human judgment. Human review is always available.
  4. Anti-Surveillance — we do not build systems designed to monitor, profile, or score individual employees.
  5. No Training on Customer Data — customer data is never used to train or fine-tune foundation models. Your data improves your results — not our models.

Aligned with Europe’s AI regulation.

  • All AI systems within Complexio have been assessed against the EU AI Act risk classification framework.
  • Where our systems fall within regulated risk categories, we implement the corresponding transparency, documentation, and oversight obligations.
  • Transparency obligations are met through clear user-facing disclosures and system documentation.
  • We maintain up-to-date technical documentation and conformity records for all AI components.
  • Our AI governance framework is reviewed annually and updated to reflect regulatory developments.

How our AI models handle your data.

Enterprise Automator

  • Uses customer-hosted AI models running entirely within the customer’s own infrastructure.
  • No data is sent to external AI providers during processing.
  • Model selection and configuration are controlled by the customer.

Stevie (Interactive AI)

  • Uses a leading foundation model via secure API integration within the customer’s cloud environment.
  • The model provider's commercial terms confirm that API inputs and outputs are not used for training.
  • Prompt and response data is not retained beyond the immediate API transaction.

Safeguards

  • AI interactions are logged for audit and review, with logs available to customers on request.
  • Output controls are designed to detect and redact personal or sensitive data before it reaches the user.
  • Customers can review and configure the behaviour of AI components through administrative controls.
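As an illustration of the audit-logging safeguard above, the sketch below shows what a structured log entry for a single AI interaction might look like. The field names and schema are assumptions for illustration, not Complexio's actual log format; note that the prompt is stored as a hash rather than raw text, so the log itself does not retain personal data.

```python
import datetime
import hashlib
import json


def audit_record(user_id: str, prompt: str, model: str, redactions: int) -> str:
    """Build one JSON audit-log line for an AI interaction.

    Illustrative schema only: the prompt is hashed so the log can support
    review without retaining the raw text.
    """
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user_id": user_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "model": model,
        "redactions_applied": redactions,
    })


print(audit_record("u-123", "What is our Q3 revenue?", "example-model", 0))
```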

Privacy built into the AI pipeline.

Every AI response passes through an output sanitisation pipeline designed to minimise the risk of personal data surfacing in responses:

  1. Deterministic rules — pattern-based detection and redaction of known PII formats (emails, phone numbers, national IDs).
  2. LLM-based rewriting — a secondary AI pass reviews and rewrites responses to remove any residual personal or sensitive information.
  3. Regex validation — a final rule-based sweep catches any remaining patterns that match known PII structures.
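The three stages above can be sketched as a simple composed pipeline. This is a minimal illustration, not Complexio's implementation: the patterns cover only a couple of common PII formats, and the LLM rewriting stage is represented by a pluggable `rewrite_fn` callable standing in for the actual model call.

```python
import re

# Illustrative patterns for known PII formats; a real deployment would use
# a broader, locale-aware pattern set (national IDs, addresses, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"(?:\+?\d[\d\s().-]{7,}\d)\b"),
}


def redact_deterministic(text: str) -> str:
    """Stage 1: pattern-based detection and redaction of known PII formats."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text


def rewrite_with_llm(text: str, rewrite_fn=None) -> str:
    """Stage 2: secondary AI pass. `rewrite_fn` is a placeholder for the
    actual model call, which is not shown here."""
    return rewrite_fn(text) if rewrite_fn else text


def validate_regex(text: str) -> str:
    """Stage 3: final rule-based sweep, catching anything the rewrite
    stage reintroduced or missed."""
    return redact_deterministic(text)


def sanitise(response: str, rewrite_fn=None) -> str:
    """Run a response through all three stages in order."""
    return validate_regex(rewrite_with_llm(redact_deterministic(response), rewrite_fn))


print(sanitise("Contact jane.doe@example.com or +44 20 7946 0958."))
```

Running the same deterministic pass again as stage 3 is deliberate: it guards against the LLM rewrite reintroducing a pattern that stage 1 had already removed.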