Across organisations, business teams are increasingly “self-serving” legal answers through generative AI, bypassing Legal and, in some cases, moving work forward on the basis of unreviewed outputs.
This behaviour is widely discussed online under the label “shadow AI”, meaning the use of public or unapproved models outside governed workflows, often triggered by time pressure, friction in the approved path, or restrictive tool policies that push employees onto personal devices and accounts.
The practical consequence is that Legal becomes involved later, after positions have hardened or sensitive information has already been shared, turning what should have been early risk triage into remediation and rework.
We asked four in-house legal and AI specialists about the risks of the business using GenAI for legal questions, and how legal functions should respond.
***
What is driving business teams to use GenAI instead of going to legal first?
Sonja Dunne: Urgency, whether real or perceived, is a key driver. Legal review can be a bottleneck in getting deals closed. Legal teams are often lean, and it can take time to receive an answer. Where the business previously waited a few days for a legal question to be addressed, it can now receive an instant response from an AI tool.
Company policy is another factor. When organisations restrict company-approved AI tools too heavily, they push people towards shadow AI. Nothing prevents someone from taking out their phone and obtaining a legal answer from a public model. The moment that happens, the company’s guardrails on AI use become irrelevant. The user has stepped outside the governed environment entirely.
However, the shift towards using AI for legal questions does not unfold in the same way everywhere. Age is one factor. Millennials and Gen Z have different expectations regarding speed and self-service, and behaviour tends to follow those expectations.
Hui Min Langridge: Culture matters too, and it can differ between countries. Within the same organisation, a European legacy business may sit alongside a team in India that is years ahead in embracing AI in its day-to-day work. In some regions, workers may already be very comfortable experimenting with AI, and their adoption of AI at work will reflect this, including extensive deployment of agents to perform menial, repetitive tasks. In other regions, colleagues may not yet know how to draft basic prompts or use a simple chatbot. The gap can be significant, and it will affect the degree to which different functions lean on AI before engaging with the Legal team.
Tara Harris: There is also a structural shift underway. Many companies now take an AI-first approach, expecting employees to use AI for a wide range of tasks. Adoption and automation are embedded in performance incentives. The more AI employees use, the more they are respected.
In such organisations, thousands of agents may already be in production. When deployment operates at that scale, it is natural for business teams to consult a tool before consulting legal.
Sonja Dunne: Which brings us back to incentives. If the approved path feels slow, restrictive, or uncertain, employees will find ways around it.
Hui Min Langridge: And if legal is perceived as a blocker rather than a partner, they will bypass it even more quickly.
Tara Harris: Especially when the organisation explicitly rewards adoption rather than caution.
What are the key risks when non-lawyers use GenAI for legal questions?
Spencer Davis: The risks are substantial.
Let’s start with what goes into the prompt.
- Confidentiality is the first concern. A business would not want “crown jewels” information to be leaked through jailbreaking or training data extraction. Strategically sensitive content should not be processed using tools that are not fully air-gapped.
- Another concern is privacy. Information about employees may be inadvertently memorised by the model and later regurgitated in response to queries from other users.
- If the company has listed securities, the risk extends to market abuse. If an employee inadvertently leaks “Material Non-Public Information” (MNPI) via an LLM, the consequences could be catastrophic for the organisation’s market cap, as well as exposing it to regulatory intervention.
- Privileged or other legally sensitive information of tactical significance should not form part of any prompt. If it does, it may become discoverable. Courts are already seeing the first cases. In litigation, a counterparty may be able to examine what was entered into the tool. The outcomes and consequences from a litigation standpoint can be severe.
The common thread is irreversibility. Once sensitive information enters a non-governed system, you cannot recall or reclaim it.
Sonja Dunne: The second risk area is how outputs are used. The business may take what the tool produces and deploy it in contract negotiations or legal arguments without discussing it with a lawyer. The deeper problem is that without the in-depth legal experience and knowledge a lawyer brings, the business is often unable to determine when the output is wrong, or when it is theoretically right but not fit for purpose in their specific context. A plausible-sounding answer is often accepted at face value without further consideration.
Jurisdiction is a classic example. People rarely prompt with governing law in mind. Is this English law? German law? Without that framing and context, the “legal advice” derived from the output can be meaningless. Another example is where teams summarise term sheets or entire contracts using LLMs, and the output is often not reviewed before it goes out.
Even if accuracy is above 90%, that margin matters enormously in practice. Applied to contract termination or renewal, even a small error rate becomes significant. Missed dates and misread termination rights can put the company in a very difficult position.
Spencer Davis: Beyond how outputs are used, consider how misinformation compounds.
Let’s imagine Company ABC asks an LLM a question, for example about how its products are regulated. Company ABC publishes the legally unsound answer in investor materials. The next time someone asks the LLM a similar question, the system retrieves Company ABC’s material and repeats the error. It becomes a self-reinforcing loop. In specialised fields where training data is limited, this is not an unlikely scenario.
Related to this is manipulability. LLMs can be influenced by how they are prompted, what sources they retrieve from, or whether training data has been corrupted. Prompt injection and retrieval from manipulated sources remain live problems.
The cumulative effect is a trust problem with real cost consequences. The “client” believes the output because it looks coherent. Maybe it even confirms what they wanted to hear. When the lawyer scrutinises the answer and then disagrees, Legal must redo the analysis and rebuild confidence. Getting the client's thinking back on track generates significantly more work than if Legal had been consulted in the first place.
The use of AI tools is not going to stop. So how do you ensure safe and productive use for legal questions? What governance and operating model controls actually work in practice?
Sonja Dunne: The key is enablement, not restriction. Build controls and training directly into the user journey. We embed our playbooks and policies into the tool itself, with pop-ups reminding users that AI is not a replacement for legal advice, alongside links to the relevant legal guardrails. The goal is to make the safe path the easy path.
Hui Min Langridge: That enablement must be tailored to the organisation. Risk tolerance varies, and more regulated sectors will naturally be more conservative around AI adoption. But whatever the risk level, users need to know what they can and cannot input into the tool, when they can rely on the output, how to use the tool safely in their own operating context, and why human judgement remains essential. That makes training a precursor to successful implementation of AI. It is worth noting that organisations spending the most on AI do not always get the best returns. Much of the value comes from the training, not the technology.
Tara Harris: On top of that, thresholds for sign-off can be a significant enabler. You can build systems that automatically risk-tier agreements, assigning green, amber or red ratings, flagging problematic clauses against your policy and suggesting compliant alternatives. If a contract is below a certain threshold, people can renew it themselves.
But even within these systems, responsibility stays with the user. The policy must be clear: you own the output. You cannot blame the tool.
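The tiering described above can be pictured as a simple rules layer. The sketch below is purely illustrative: the clause names, the value threshold, and the policy itself are assumptions for demonstration, not any particular product or Tara's actual system.

```python
# Illustrative sketch of automated contract risk-tiering (green/amber/red).
# Thresholds, clause names, and the policy are hypothetical assumptions.

PROHIBITED_CLAUSES = {"unlimited liability", "unilateral ip assignment"}
REVIEW_CLAUSES = {"auto-renewal", "non-standard indemnity"}
SELF_SERVE_VALUE_LIMIT = 50_000  # below this, users may renew without Legal

def risk_tier(contract_value: float, clauses: set[str]) -> str:
    """Return 'red', 'amber' or 'green' for a draft agreement."""
    flagged = {c.lower() for c in clauses}
    if flagged & PROHIBITED_CLAUSES:
        return "red"    # hard stop: Legal sign-off required
    if flagged & REVIEW_CLAUSES or contract_value >= SELF_SERVE_VALUE_LIMIT:
        return "amber"  # routed to Legal for review
    return "green"      # user may proceed, but still owns the output

# Example: a low-value renewal with only standard clauses
print(risk_tier(10_000, {"standard confidentiality"}))  # green
```

The design point is that the "red" branch is a hard stop, matching the principle that crossing a threshold triggers mandatory approval while accountability stays with the user.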
A critical element of that enablement is procuring powerful tools for employees. It is far safer to pull people out of shadow AI and give them a powerful, yet governed, environment to work in. If you do not, they will use unapproved tools regardless.
And when thresholds are crossed, there must be a hard stop. At that point, approval is required and accountability follows. In some cases, holding at the safest level is the right call.
How can you technically reduce hallucination, and what trade-offs does that create?
Sonja Dunne: Start with segmentation. Which tasks can automation own completely? Which can it assist with? Which should it never touch? Not everything belongs in the same bucket.
Playbooks are the key. Build in fallback logic, multiple “if this then that” options, and embed your risk tolerances directly into a closed tool, one that cannot be influenced from outside. This will help reduce variability.
The system should reflect agreed legal positions rather than improvising.
Literacy and training matter too. AI training should include modules on how to prompt properly. With better prompts, robust playbooks and the right tools, hallucination risk drops significantly. It does not disappear, but it drops.
Hui Min Langridge: Technical configuration of the AI tool is equally important. A tool should be configurable so that searches are not retrievable across users, with retrieval-augmented generation running on a per-user basis only. Where technically possible, inputs and outputs should be ring-fenced so that a prompt by one user will not surface data entered into the system by another.
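One way to picture the ring-fencing described above is a retrieval layer that filters documents by owner before anything reaches the model. This is a minimal sketch under assumed names; real deployments would enforce the filter inside the vector store or platform, not in application code.

```python
# Sketch of per-user ring-fenced retrieval for a RAG pipeline.
# The in-memory store and its fields are hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class Doc:
    owner: str   # the user who entered this content
    text: str

def retrieve(store: list[Doc], query: str, user: str) -> list[str]:
    """Return matching passages drawn only from the querying user's own documents."""
    visible = [d for d in store if d.owner == user]  # ring-fence BEFORE matching
    return [d.text for d in visible if query.lower() in d.text.lower()]

store = [
    Doc("alice", "Draft term sheet for Project Falcon"),
    Doc("bob",   "Term sheet notes from the other side"),
]

# Alice's prompt can never surface Bob's inputs:
print(retrieve(store, "term sheet", "alice"))
```

The key design choice is filtering on ownership before relevance matching, so one user's prompt cannot surface another user's data even when it would be the better match.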
There is a real trade-off here. Using a closed-loop AI system reduces leakage risk (with stronger IP and privacy controls) but can reduce output quality, particularly if the model was not trained on a diverse set of data before being deployed in your environment. You have to choose: preserve the analytical power of the AI model and accept some leakage risk, or use a closed-loop system and accept losing some breadth in outputs and capabilities.
Some tasks should simply not be automated or handed solely to an AI system. An AI system will not trump a lawyer’s ability to leverage pre-existing relationships to influence stakeholders and get things done in an organisation. People do not want to be managed by a machine. Some elements of legal work will remain human by design.
Tara Harris: Literacy must evolve beyond basic inputs. We need to teach people how to collaborate with agentic AI and manage automated workflows. When users understand how to work with agents within a governed framework, the risk of hallucination becomes more manageable.
Grounding and citations address a different dimension of the problem. Build agents that require citations and can only draw from authoritative sources. Layer on checker agents: one reviews another's output, verifies citations, checks the source directly. Agent-to-agent validation.
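The checker-agent idea above can be sketched as a validation step that rejects a draft unless every citation resolves to an approved source. The allow-list, citation format, and function names below are illustrative assumptions, not a description of any real agent framework.

```python
# Sketch of agent-to-agent validation: a checker verifies that every
# citation in a drafter's output points to an approved authoritative source.
# The allow-list and the "[source: domain]" convention are hypothetical.

import re

APPROVED_SOURCES = {"legislation.gov.uk", "eur-lex.europa.eu"}

def check_citations(answer: str) -> tuple[bool, list[str]]:
    """Reject the draft if it cites nothing, or cites an unapproved domain.
    Returns (passed, problems)."""
    cited = re.findall(r"\[source:\s*([\w.\-]+)\]", answer)
    if not cited:
        return False, ["no citations found"]
    bad = [d for d in cited if d not in APPROVED_SOURCES]
    return (not bad), bad

ok, problems = check_citations(
    "Termination on 30 days' notice is permitted [source: legislation.gov.uk]."
)
print(ok)  # True
```

In a layered setup, a draft failing this check would loop back to the drafting agent or escalate to a human reviewer rather than reach the business user.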
Monitoring is also easier when people use governed tools. Behaviour on shadow systems is invisible. Approved tools give you visibility.
Layered review still matters, particularly for contract work. Standard clauses go to contracts. IP issues require IP sign-off. Privacy goes to privacy. The system routes issues to where the expertise sits.
General Counsel are under growing pressure to automate before they hire. So what is the future role of the in-house lawyer in an AI-driven company?
Sonja Dunne: Growing your legal team will be a hard sell. Management will expect you to explore automation first. That means embracing it and actively identifying the opportunities AI can bring.
At the same time, demonstrate the value your team delivers. Legal should be strategic partners to the business, turning legal risks and obligations into practical decisions. If today only 30 to 40 percent of time is spent on higher-value work, that share will grow. Repetitive, lower-complexity tasks will not define the future role of lawyers. Strategic work that impacts business decisions will.
Hui Min Langridge: If all you do is transactional work that can be automated, the business may well conclude it only needs a GC, with automatons underneath.
But AI does not create something from nothing. It is derivative in nature. It does not invent from lived experience or consciousness. That is where the human layer matters: context, experience, and judgment. Institutional memory, in particular, will remain decisive.
Spencer Davis: I would go even further. Legal advice is an art form. Good legal advice requires context around jurisdictional preferences, costs, adverse PR risk, history with the customer, and the kind of judgment that cannot be reduced to rules. The best advice is often pragmatic, born from real understanding of a situation's nuances.
AI can produce something that looks coherent, but that is precisely the challenge. Much of the critical context has never been written down, and what is not captured cannot be applied. For advice to be truly effective, the human layer will remain essential.
Tara Harris: The speed problem is real. To stay relevant, the legal function must integrate with faster systems; otherwise, it will be circumvented.
AI is enabling people to do things that were previously unimaginable and freeing them from work that adds little value. This is an opportunity, not a threat.
The answer is to maintain control where it counts. In our firm, contracts are not executed until they pass through Legal. That gate is hard and non-negotiable. It will catch ill-judged, AI-driven negotiating positions before it is too late.
The real power lies in the combination. Use your institutional knowledge and use it faster.
20Minds thank the participants for their candid observations.
Sources
1 Sonja Dunne is Head of Legal Global Direct Procurement at Reckitt.
2 Tara Harris is Group IP Lead – Digital & Regulatory at Prosus Group and Naspers Limited.
3 Hui Min Langridge is Head of Legal and Privacy for EMEA and APAC at Aptos Retail.
4 Spencer Davis is Chief Legal Officer, Lifezone Metals.







