What New Zealand's Law Society Guidance Really Means for Your Firm's AI Journey
Insights | March 2024
The New Zealand Law Society Te Kāhui Ture o Aotearoa has released its first dedicated guidance on Generative AI for the legal profession. It is a measured, practical document, and if you are a lawyer thinking about where AI fits into your practice, it is worth understanding what it is actually telling you.
Here is our read of the key themes.
The profession is expected to engage, not wait
The guidance does not counsel caution for caution's sake. It opens by acknowledging that lawyers across New Zealand and overseas are already using and investing in AI to enhance their service offering. The Law Society's position is clear: this is a technology that deserves serious attention, and firms that understand it will be better placed than those that do not.
At the same time, the guidance is candid that the opportunities come with specific, manageable risks. The message is not "slow down"; it is "go in with your eyes open."
Your professional obligations do not pause for AI
Perhaps the most significant theme running through the guidance is continuity of responsibility. A lawyer who uses AI to draft a contract or conduct research remains fully responsible for the quality of that output. The fact that it was generated by an algorithm provides no defence against a complaint or disciplinary proceeding.
This matters practically. AI tools can produce outputs that appear authoritative but contain errors, a phenomenon the guidance describes as "hallucination." Courts in New Zealand have already flagged this risk, urging judicial officers and support staff to scrutinise submissions that appear to have been AI-generated. Lawyers need robust review processes, not just good prompts.
The guidance also highlights specific rules under the Lawyers and Conveyancers Act (Conduct and Client Care) Rules 2008 that come into play, including the duty of competence (r 3), the prohibition on misleading conduct (r 10.9), and the duty of fidelity to the court (r 13.1). Supervisors have additional exposure if staff are using AI tools in unauthorised or unmonitored ways.
Privacy and confidentiality require careful structural thinking
New Zealand currently has no overarching AI regulation, but existing law applies and the Privacy Act carries real weight here. When data is input into a third-party AI tool, that data may be visible to the provider, may be used for training, and may be transferred to servers located overseas. Each of these scenarios engages the Privacy Act, and Information Privacy Principle 12 specifically governs disclosure outside New Zealand.
Both the government's public service guidance and the Courts' own AI guidelines caution strongly against feeding personal or confidential information into external AI tools. The Law Society echoes this. The practical implication for firms is that data governance (what can go in, what cannot, and how that is enforced) needs to be defined before a tool is deployed, not after.
Legal privilege is also at stake. Inputting privileged material into a publicly accessible AI tool may constitute a breach of privilege. Fictional or anonymised data should be used instead for testing and template generation.
Intellectual property questions are live and unsettled
Ownership of AI-generated content is not a settled question in New Zealand law, and the guidance flags it as an active risk area. Some AI tools "scrape" content from external sources, which can raise copyright concerns. Separately, some terms of service allow providers to retain ownership of output data or to reuse your inputs, provisions that could place a lawyer in breach of professional obligations if they involve client or privileged material.
Reading the fine print in vendor contracts is not optional. It is part of due diligence.
Fee structures and client disclosure need to evolve
The guidance raises a question that will become increasingly pressing as AI adoption grows: how should firms bill for work assisted or completed by AI?
Under the time-and-attendance model, lawyers are asked to consider what is appropriate when a non-human is performing tasks that were previously billed by the hour. The guidance suggests that billing for AI-assisted research might be analogous to billing for use of a research tool, but that charging a full hourly rate for AI-generated contract drafting or document review warrants more careful thought.
Related to this is the question of transparency. Lawyers have professional obligations to inform clients about how their work will be carried out and by whom. Whether and when to disclose the use of AI, and whether client consent is required, are questions firms will need to address in their engagement letters and client care information.
A structured approach is what the guidance is really asking for
Underlying all of the specific risk areas is a consistent call for firms to approach AI the way they would approach any significant operational change: with a plan. That means vendor due diligence before selecting a tool, a clear internal policy before deploying it, staff training that covers both technical use and professional obligations, and a review process that continues after go-live.
The guidance is explicit that the legal landscape in this area is still developing. Firms that build structured, documented approaches now will be better positioned to adapt as the regulatory environment matures and better protected if their AI use is ever scrutinised.
This article summarises the New Zealand Law Society's guidance "Lawyers and Generative AI" (March 2024). It is intended as general commentary only and does not constitute legal advice.