Every professional firm that uses AI tools needs an internal AI policy. Here is what it is, the 6 mandatory elements, and how to draft it step by step — without a lawyer.
Every professional firm that has adopted AI tools — even just ChatGPT for drafts or Copilot for emails — is already applying an AI policy. The problem is that in most cases that policy does not exist on paper: it exists only in the informal habits of some colleagues.
The internal AI policy is the instrument that transforms informal AI use into a governed, documented and defensible process. It is not a document for clients. It is not a bureaucratic obligation. It is the answer to the question: "If I were asked tomorrow how AI is used in my firm, would I be able to answer?"
So what is it, positively? An internal AI policy is an operational document for internal use that establishes: which AI tools are authorised, who can use them, for which activities, with what limits, and what to do when something goes wrong.
**1. AI Act — governance obligations for deployers.** The AI Act places obligations on "deployers" (those who use third-party AI systems in their work): Article 4 requires them to ensure adequate AI literacy among staff, and Article 26 sets out duties for those deploying high-risk systems. A documented internal policy is the minimum form of compliance.
**2. It protects against shadow AI.** Shadow AI is the use of unauthorised AI tools by colleagues, often with sensitive client data. Without an explicit policy — one that says both what is permitted and what is not — there is no way to prevent it or manage the consequences.

**3. It reduces GDPR risk.** Every time a colleague copies a client's document into ChatGPT, that client's data is transferred to the provider's servers. Without an adequate data processing agreement (DPA) in place, that is a GDPR violation. The policy identifies which tools have adequate contracts and which are prohibited for client data.

**4. It builds trust with clients.** A client who asks "do you use AI?" deserves a precise answer, not a vague one. Being able to show an internal policy — even just in summary — signals professionalism and control.

**5. It facilitates onboarding of new colleagues.** "How is AI used in this firm?" is one of the first questions a new colleague asks. A written policy answers it once, consistently, for everyone.
A well-structured internal AI policy must contain at least these six elements:
**1. Scope and purpose.** Who it applies to (all colleagues, including external collaborators), which activities it covers (client work, administrative tasks, training) and which tools fall within it.

**2. Table of authorised AI tools.** A precise list of approved tools, with name, supplier, link to the data processing agreement, and an indication of whether each may receive client data.
| Tool | Supplier | Client data | Notes |
|---|---|---|---|
| ChatGPT (Enterprise) | OpenAI | Yes (with DPA) | Drafts only, not financial data |
| Microsoft Copilot | Microsoft | Yes (with DPA) | Integrated in M365, compliant |
| Claude.ai | Anthropic | No | Generic internal use only |
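A tools table like the one above can also be kept in machine-readable form, so that checks do not depend on colleagues remembering the rules. Here is a minimal sketch in Python; the registry structure, field names and helper function are illustrative, not part of any standard, though the entries mirror the example table:

```python
# Hypothetical machine-readable version of the authorised-tools table.
AUTHORISED_TOOLS = {
    "ChatGPT (Enterprise)": {"supplier": "OpenAI", "client_data": True,
                             "notes": "Drafts only, not financial data"},
    "Microsoft Copilot": {"supplier": "Microsoft", "client_data": True,
                          "notes": "Integrated in M365, compliant"},
    "Claude.ai": {"supplier": "Anthropic", "client_data": False,
                  "notes": "Generic internal use only"},
}

def may_receive_client_data(tool_name: str) -> bool:
    """Return True only for tools explicitly authorised for client data.

    Unknown tools default to False: anything not in the registry is
    treated as prohibited -- which is exactly the shadow-AI case.
    """
    entry = AUTHORISED_TOOLS.get(tool_name)
    return bool(entry and entry["client_data"])
```

The default-deny behaviour is the point: a tool that nobody has reviewed is prohibited until it appears in the registry, not permitted until someone objects.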
**3. Rules of use — what can and cannot be done.** Rules must be concrete and binary. Illustrative examples, drawn from the tools table above:
- Client documents may be pasted only into tools marked "Yes (with DPA)" in the tools table.
- No client financial data in ChatGPT, even under the Enterprise DPA.
- Every AI-generated draft must be reviewed by the responsible colleague before it leaves the firm.
**4. AI Officer (or AI responsible person).** The point of contact for questions about AI use, who approves the adoption of new tools and manages incidents. In small firms this may be the principal; in more structured firms, a dedicated partner.

**5. Incident management.** What to do if a violation is suspected: who to notify, within what timeframe, how to document it. This element is the most often overlooked and the most important in the event of a dispute.

**6. Training and updates.** Who must be trained, how often, and how training is documented. The AI Act imposes AI literacy requirements on deployers: documented training is the evidence of compliance.
**Step 1 — Map AI tools in use.** Before writing, do an inventory. Ask every colleague which AI tools they use and for what. The result often surprises: there are tools adopted informally that nobody knew about.

**Step 2 — Classify the data they handle.** For each identified tool, verify whether it may receive client data and whether the contract with the supplier includes an adequate DPA. This classification determines the rules of use.

**Step 3 — Use a structured template.** Start from a template, not a blank page. A well-structured template covers all six elements and only requires completing the sections specific to your firm. With the information already collected, completing it takes an estimated 1-2 hours.

**Step 4 — Get it signed and distribute it.** An unsigned policy is a policy that does not exist for legal purposes. The principal signs the policy; all colleagues sign an acknowledgement. Retain the signatures and repeat the process with each significant update.
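Tracking who has signed the acknowledgement requires nothing more than a simple register. A sketch, assuming an in-memory dictionary; the names, dates and function are placeholders for whatever record-keeping the firm already uses:

```python
from datetime import date

# Hypothetical acknowledgement register for the current policy version:
# colleague -> date they signed (None = not yet signed).
acknowledgements = {
    "A. Rossi": date(2025, 3, 1),
    "B. Bianchi": None,
    "C. Verdi": date(2025, 3, 4),
}

def missing_signatures(register: dict) -> list[str]:
    """Colleagues who still owe an acknowledgement for the current version."""
    return sorted(name for name, signed in register.items() if signed is None)
```

Running `missing_signatures(acknowledgements)` produces the chase-up list; resetting all dates to `None` at each significant update enforces the "repeat the process" rule.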
A firm that uses AI without an internal policy is not in a neutral position: it is exposed. Exposed to a GDPR violation from a colleague who used ChatGPT with a client's data. Exposed to a challenge from a client who did not know their tax opinion was drafted with AI. Exposed to a sanction from the supervisory authority for lack of AI Act governance measures.
An internal AI policy does not eliminate these risks. It manages them, documents them, and makes them defensible.
Want to start without beginning from scratch? In our guides collection you will find an internal AI policy template already structured for professional firms, with a tools table, mandatory element checklist and colleague acknowledgement form.
For firms that want to go beyond a policy document — embedding AI governance into the culture and operations of the practice — our Fractional AI Officer service provides ongoing strategic support without the overhead of a full-time hire. To understand how the internal policy fits into the broader regulatory picture, our AI Governance guide covers the full compliance landscape for professional firms. For specific questions about your situation, contact us.
**How long should the policy be?** An effective internal AI policy does not need to be long. For an average professional firm, 4-8 pages are sufficient: one page on scope and purpose, a table of authorised tools, rules of use (ten at most), incident management, and the update procedure. What matters is that it is concrete, applicable and signed — not that it impresses by its length.

**Who signs it?** The policy must be signed by the firm's responsible person (senior partner or principal), who assumes responsibility for it. It should then be distributed to all colleagues, who agree to comply with it — ideally with a signed acknowledgement. In structured firms, it can be part of the onboarding process for new colleagues.

**How often should it be updated?** The internal AI policy should be updated at minimum once a year, and whenever: (1) a new AI tool is adopted, (2) a relevant regulatory deadline arrives (AI Act, national implementing laws), (3) an incident occurs. It is good practice to keep a version log with the date and main changes.

**Do you need a lawyer to draft it?** No, not necessarily. An internal AI policy is primarily an operational internal document, not a complex legal instrument. With a structured template, a partner or senior colleague can draft it. However, for firms handling sensitive data (health data, significant financial data) or using high-risk AI systems, legal review is advisable.
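The update triggers just listed reduce to a simple check. A sketch in Python: the 365-day threshold mirrors the "at minimum once a year" rule, and the three event flags are illustrative names for the triggers in the text:

```python
from datetime import date

def review_due(last_update: date, today: date,
               new_tool_adopted: bool = False,
               regulatory_deadline: bool = False,
               incident_occurred: bool = False) -> bool:
    """True if the internal AI policy should be reviewed now.

    Triggers mirror the policy text: more than a year since the last
    update, a new AI tool, a relevant regulatory deadline, or an incident.
    """
    overdue = (today - last_update).days > 365
    return overdue or new_tool_adopted or regulatory_deadline or incident_occurred
```

Any single trigger is enough to prompt a review; the annual deadline is only the backstop for firms where nothing else changes.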