Legal
We use AI in how we work and in what we build for clients. This page explains how we do that responsibly — what data we use, how we protect it, and what we won't do. It should be read alongside our Privacy Policy and Terms and Conditions.
Effective date: 02 April 2026
Business IQ uses AI in two ways. First, internally — to help us run our business, prepare for client sessions, and process outputs. Second, in what we build for clients — AI-powered workflows and automations that run inside their businesses.
This page explains both. It covers how we use AI tools, how we handle data when AI is involved, what safeguards we build into every deployment, and what we won't do regardless of how we're asked.
For how we handle your personal data more generally, see our Privacy Policy. For the terms that govern your use of this website, see our Terms and Conditions.
We use AI tools to help us work more effectively. In practice, that means things like:
The AI tools we use include Claude (Anthropic), and where relevant, Microsoft and Google AI capabilities. We choose tools based on their security profile, their data handling commitments, and whether they are appropriate for the type of work involved.
We do not use data from client sessions or website visitors to train AI models. When we use AI to process information, it is to complete a specific task — not to build datasets or improve a model's future performance.
We classify data into three tiers based on sensitivity. This determines what controls we apply and what we will and won't do.
The default for everything we build. Workflows are designed so that personal data is not passed into AI components. Where possible, we use anonymised or aggregated information, generic content, or non-identifying references. This is where we aim to operate unless there is a clear business reason to go further.
Where a workflow genuinely needs to process personal data, for example, updating a CRM contact or drafting a customer response, we require explicit written confirmation from the client before proceeding. We document what data is permitted, why it is needed, and who can access the outputs.
Health information, financial account details, payroll, disciplinary records, and credentials fall into this tier. We only process sensitive data with explicit written agreement, enhanced safeguards, and named approvers. In many cases we will recommend redesigning the workflow to avoid sensitive data entirely.
Every AI workflow we build follows a consistent set of safety controls. These are not optional extras — they are built into every deployment as standard.
A consequential action is anything that creates a real-world effect — sending an email, updating a record, publishing content, or triggering a payment. We build approval gates into every workflow that takes consequential actions. A named person reviews and approves before anything is executed. Workflows do not act autonomously on things that matter.
Every workflow operates within defined boundaries. We specify in advance which systems it can write to, which recipients it can contact, and which actions it is permitted to take. Anything outside those boundaries is blocked by default.
AI outputs are validated before they are used to drive any action. We check that required fields are present, values are within permitted ranges, and destinations meet allow-list rules. If validation fails, the workflow stops and routes to manual review rather than proceeding with uncertain output.
Prompt injection is when malicious instructions are hidden inside content that a workflow processes — for example, an email that tries to instruct the AI to ignore its rules. We design workflows to separate instructions from content, filter suspicious inputs, and route anything that looks like an attempt to override controls to manual review.
When something goes wrong, workflows are designed to fail safely. That means stopping rather than proceeding with incomplete or uncertain information, logging the failure with enough detail to investigate, and routing to manual intervention where safety cannot be assured automatically.
When we build and operate solutions inside a client's systems, we are granted access to platforms and tools that matter to their business. We take that responsibility seriously.
Every member of the Business IQ team uses a named individual account for access to client systems. We do not use shared accounts. This means every action taken in a client environment is attributable to a specific person.
We request only the access needed to complete the work in scope. If we need to read a specific folder, we ask for access to that folder — not the whole system. We do not accumulate permissions beyond what the delivery requires.
All Business IQ administrative accounts use multi-factor authentication as a minimum. We require the same for any admin access used in client environments where we have the ability to influence that standard.
Credentials we manage on behalf of clients are stored in an approved password manager or the automation platform's built-in credential vault. They are never stored in documents, emails, chat messages, or workflow logic. They are never passed into AI processing steps.
When a Fix engagement ends or a Keep relationship concludes, we support a documented offboarding process. Business IQ access is removed, and credentials we had access to are rotated. Clients retain full control of their environment throughout and at exit.
We hold ourselves to a clear internal security standard and we are transparent about where we are on the certification journey.
Business IQ is working toward Cyber Essentials certification. Our internal practices are designed to meet that standard as a minimum, covering access controls, device security, patching, and secure configuration across the tools and systems we use in delivery.
Our planned progression is Cyber Essentials, then Cyber Essentials Plus, then ISO 27001. We are not claiming certifications we do not yet hold. As each stage is achieved, this page will be updated to reflect it.
We review our security practices at least annually, after any material incident, and when we add new tools or delivery patterns to our approved stack. This page is updated to reflect any material changes.
No system is completely immune to problems. What matters is how quickly issues are identified, contained, and resolved, and how clearly clients are kept informed.
Every workflow we build includes monitoring appropriate to its risk level. Failures, validation errors, and unexpected behaviour trigger alerts to a named owner. For Keep clients, we monitor actively within the agreed scope of the managed service.
When an incident is suspected, the first priority is to stop further harm. That typically means disabling workflow triggers, pausing outbound actions, and restricting access while the situation is assessed. Every workflow we deploy includes a kill switch — a way for the client to pause or disable it independently, without needing Business IQ to be available.
We will notify clients promptly when we become aware of an incident affecting a workflow we have built or operate. For Fix engagements, that means within one business day of confirming an incident. For Keep clients, notification timelines are agreed during onboarding. Where an incident involves potential personal data exposure, we prioritise rapid notification so the client can assess any regulatory obligations they may have.
We investigate what happened, identify the root cause, and implement fixes before restoring the workflow to production. For material incidents, we complete a post-incident review and update our controls where needed.
If you suspect misuse, unauthorised access, or any security issue related to Business IQ or a workflow we have built, please contact us immediately at hello@biq-consulting.com.