Is your team using ChatGPT without a policy? Here is how to identify and govern unauthorised AI use in a professional firm.
Shadow AI is the ungoverned use of artificial intelligence tools by staff at a professional firm: without authorisation, without a policy, and without management awareness. According to the Microsoft Work Trend Index 2024, 78% of people who use AI at work bring their own tools (Bring Your Own AI), often without the company's knowledge or approval.
For a firm of accountants, lawyers or HR consultants, shadow AI is not just a technology problem: it is a professional, legal and reputational risk.
When a colleague pastes a client's balance sheet into ChatGPT to get an analysis, they are potentially: breaching professional confidentiality, because client data leaves the firm's control; violating the GDPR, if the data is transferred to non-compliant tools or outside the EU; and exposing the firm to professional errors, if unverified output ends up in client work.
The problem is not the use of AI in itself — AI is a powerful tool that can dramatically improve productivity. The problem is ungoverned use.
A colleague who suddenly doubles their productivity in drafting opinions, analyses or communications might be using AI tools. This is not necessarily negative, but if you do not know about it, you cannot govern it.
What to do: ask directly. Create an environment in which using AI is not stigmatised but discussed openly.
Output generated by ChatGPT often has recognisable characteristics: list-based structure, neutral-formal tone, a tendency to generalise, phrases like "in conclusion" or "it is important to underline". If you notice this pattern in internal documents, it is a signal.
What to do: do not ban AI, but establish review standards. Every AI output must be verified and personalised by a professional.
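As a rough triage aid (not a reliable detector), a few lines of script can flag internal documents that lean heavily on these telltale formulas and queue them for human review. The phrase list and threshold below are illustrative assumptions, not part of any validated method:

```python
# Illustrative sketch only: a naive screen for common AI-style phrasing in
# internal documents. The phrase list and threshold are assumptions, not a
# validated detector; use it to prioritise human review, never as proof.

TELLTALE_PHRASES = [
    "in conclusion",
    "it is important to underline",
    "it is worth noting",
]

def needs_review(text: str, threshold: int = 2) -> bool:
    """Flag a document when telltale phrases appear at least `threshold` times."""
    lowered = text.lower()
    hits = sum(lowered.count(phrase) for phrase in TELLTALE_PHRASES)
    return hits >= threshold

draft = "In conclusion, it is important to underline that the tax position..."
print(needs_review(draft))  # True: two telltale phrases found
```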
If colleagues never ask questions about AI, that is the most worrying sign. It means either nobody is using AI (unlikely in 2026) or everyone is using it without feeling the need to ask.
What to do: communicate proactively. An AI policy should not be created in reaction to an incident; it should be a strategic decision of the firm.
If you find non-existent legal citations, references to cases that cannot be located, or statistical data without a source in the firm's documents, it could be an instance of AI "hallucination" that was not caught.
What to do: implement a mandatory fact-checking process for any content that will be sent to clients or used in official documents.
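Where drafts cite legislation or case law, a simple extraction pass can support that fact-checking step by listing every citation for a professional to verify against the primary source. This is an illustrative sketch only; the regex patterns below are assumptions covering a few common EU formats and would need adapting to your jurisdiction and document types:

```python
# Illustrative sketch only: extract citation-like strings from a draft so a
# professional can verify each one against the primary source before the
# document leaves the firm. The patterns are assumptions; adapt them.

import re

CITATION_PATTERNS = [
    r"Art\.\s*\d+[a-z]?",                  # article references, e.g. "Art. 4"
    r"Regulation\s*\(EU\)\s*\d{4}/\d+",    # e.g. "Regulation (EU) 2016/679"
    r"Directive\s*\d{4}/\d+/EU",           # e.g. "Directive 2014/65/EU"
]

def citations_to_verify(text: str) -> list[str]:
    """Return every citation-like match for manual fact-checking."""
    found: list[str] = []
    for pattern in CITATION_PATTERNS:
        found.extend(re.findall(pattern, text))
    return found

draft = "See Art. 4 of Regulation (EU) 2016/679 and Directive 2014/65/EU."
for citation in citations_to_verify(draft):
    print("verify:", citation)
```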
If nobody in your firm can explain why ChatGPT can "invent" a legal provision, or why it is not safe to enter client data into free tools, you have an AI literacy problem that Art. 4 of the AI Act requires you to resolve (obligation in force since February 2, 2025; enforcement from August 2026).
What to do: organise structured training. An email with "the rules" is not enough: you need an interactive session with practical examples relevant to the profession.
The solution is not to ban AI; that would be like banning Excel in the 1990s. The solution is to govern it: (1) a clear internal policy that defines which AI tools are authorised and for which activities, (2) AI literacy training for all staff, and (3) a governance framework with periodic review of the tools in use and approval procedures for new tools.
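As an illustration of point (1), an authorised-tool register can be as simple as a lookup from tool to permitted activities, consulted before any client work. This is a minimal sketch with invented tool names and activity categories; a real register would follow the firm's own risk assessment:

```python
# Illustrative sketch only: a minimal register of authorised AI tools and
# the activities the firm's policy permits for each. Tool names and
# activity categories are invented placeholders.

AUTHORISED_TOOLS = {
    "copilot_enterprise": {"internal_drafting", "summarisation"},
    "chatgpt_team": {"internal_drafting"},
}

def is_permitted(tool: str, activity: str) -> bool:
    """Permitted only if the tool is registered AND the activity is on its list."""
    return activity in AUTHORISED_TOOLS.get(tool, set())

print(is_permitted("chatgpt_team", "internal_drafting"))     # True
print(is_permitted("chatgpt_team", "client_data_analysis"))  # False: activity not permitted
print(is_permitted("free_chatbot", "internal_drafting"))     # False: tool not registered
```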
Our AI Governance framework is designed specifically for professional firms that want to turn shadow AI into a governed competitive advantage.
Ignoring shadow AI is not a sustainable option. The risks are real: breach of professional confidentiality when client data ends up in non-compliant tools, GDPR sanctions for unlawful cross-border data transfers, professional errors based on unverified AI output, and AI Act violations for lack of AI literacy and oversight.
The good news: acting now is straightforward and costs little compared to the risks involved. Start with an AI Readiness Assessment to understand where you are and what the priorities are. Or contact us for personalised advice.
What is shadow AI?
Shadow AI is the unauthorised and ungoverned use of artificial intelligence tools within an organisation. It happens when colleagues use ChatGPT, Copilot or other AI tools for work activities without the firm being aware of it or having defined rules of use.
What are the risks of shadow AI for a professional firm?
There are four main risks: (1) breach of professional confidentiality if client data is entered into non-compliant AI tools, (2) GDPR non-compliance for cross-border data transfers, (3) professional errors based on unverified AI outputs, (4) AI Act violations for lack of AI literacy and oversight.
How do you bring shadow AI under control?
Three interventions: (1) a clear internal policy defining which AI tools are authorised and for which activities, (2) AI literacy training for all staff, (3) a governance framework that includes periodic review of tools in use and approval procedures for new tools.