New Zealand's Responsible AI Guidance for Businesses: What Legal and Tech Leaders Need to Know

Published July 2025 | MBIE | AI & Law Insights

New Zealand's Ministry of Business, Innovation and Employment (MBIE) has released its Responsible AI Guidance for Businesses, a voluntary but substantive framework aimed at helping NZ businesses adopt and deploy AI in a trustworthy, legally sound manner. For firms operating at the intersection of AI and law, this document deserves close attention.

The Framework at a Glance

The Guidance adopts a proportionate, risk-based approach, consistent with international initiatives like the OECD AI Principles and the EU AI Act, and is structured around three layers:

  1. Understanding your "why": clarifying purpose, principles, and objectives before deploying any AI system.
  2. Good business foundations: governance, legal compliance, procurement, cybersecurity, privacy, and stakeholder engagement.
  3. AI system-specific considerations: data quality, model integrity, GenAI inputs/outputs, and human-in-the-loop decision-making.

It is non-binding and intentionally technology-neutral, covering everything from legacy rule-based systems to large language models.

Key Legal Obligations Flagged

The Guidance maps a range of existing NZ legislation onto AI risk scenarios, a practically useful exercise for compliance teams. Businesses should be aware of exposure under:

  • Privacy Act 2020: any AI system processing personal data triggers obligations under the Information Privacy Principles, including the mandatory appointment of a privacy officer.
  • Fair Trading Act 1986: AI-generated content, pricing tools, and chatbots that mislead consumers can create liability, even where the misleading effect was unintended.
  • Commerce Act 1986: algorithmic pricing tools that pool competitor data can constitute cartel conduct, even without direct communication between parties.
  • Human Rights Act 1993 / Bill of Rights Act 1990: AI systems that produce discriminatory outcomes in hiring, lending, or service delivery carry real legal risk, particularly where bias is embedded in training data.
  • Copyright Act 1994: training data sourcing, output ownership, and GenAI-generated content all raise unresolved IP questions requiring active management.
  • Harmful Digital Communications Act 2015: deepfakes and AI-generated abusive content carry takedown obligations.

The Guidance also flags the relevance of international regimes, particularly the EU AI Act and GDPR, for businesses operating across borders.

Governance and Accountability

The Guidance recommends assembling cross-functional AI governance teams spanning legal, privacy, security, data science, HR, and communications. For smaller firms, this may mean assigning AI governance responsibilities as a portfolio rather than a dedicated role.

Critical governance expectations include:

  • Documented AI policies aligned with existing data, security, and privacy frameworks
  • Clear accountability structures across the AI lifecycle
  • Regular risk inventories using a structured Identify → Assess → Manage → Record → Review cycle
  • Contingency and exit strategies for AI system failure or vendor change

GenAI-Specific Risks

The Guidance dedicates significant attention to generative AI, flagging several concerns directly relevant to legal and professional services:

  • Hallucinations: LLM outputs should be verified against primary sources before use in client work.
  • Prompt data exposure: information entered into public or free GenAI tools may be shared with developers or surfaced in future outputs.
  • IP ownership uncertainty: outputs may lack commercial protection, may infringe existing copyright, or may replicate protected works without attribution.
  • Prompt injection: adversarial inputs can manipulate LLM behaviour, a security risk in AI-assisted legal research or document review tools.
  • Māori data and mātauranga Māori: AI systems touching Māori content require culturally informed governance and, in many cases, direct community engagement.
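The prompt data exposure risk above can be partially mitigated with a pre-submission redaction check before any text leaves the organisation for a public GenAI tool. A minimal sketch, assuming simple regex patterns; the pattern set here is illustrative and real deployments need far more robust PII detection:

```python
import re

# Illustrative patterns only; production systems need proper PII detection.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "nz_phone": re.compile(r"\b0\d{1,2}[ -]?\d{3}[ -]?\d{3,4}\b"),
    "ird_number": re.compile(r"\b\d{3}-\d{3}-\d{3}\b"),  # common IRD format
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace likely personal data with placeholders before the prompt
    leaves the organisation; return the redacted text and what was found."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            found.append(label)
            prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt, found

safe, hits = redact_prompt("Summarise the complaint from jane@example.co.nz")
# hits == ["email"]; the address never reaches the external tool
```

A gateway like this does not address hallucination or IP risk, but it gives compliance teams a logged, testable control point for the Privacy Act exposure the Guidance flags.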

Human-in-the-Loop Requirements

The Guidance draws a clear line between low-risk AI and high-stakes decisions affecting money, health, law, or employment, where human review is not optional. It also warns against automation bias (the tendency to accept AI outputs uncritically), which undermines effective oversight.

For legaltech applications such as contract analysis, due diligence, regulatory research, and risk scoring, a robust human-in-the-loop framework is both a compliance expectation and a professional obligation.
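The low-risk/high-stakes distinction can be made operational with an explicit review gate. A hypothetical sketch; the domain names, confidence threshold, and routing rule are assumptions for illustration, not categories defined by the Guidance:

```python
from dataclasses import dataclass

# Domains the Guidance treats as high-stakes: human review is mandatory there.
HIGH_STAKES_DOMAINS = {"money", "health", "law", "employment"}

@dataclass
class AIDecision:
    domain: str
    confidence: float  # model's self-reported confidence, 0.0-1.0
    output: str

def requires_human_review(decision: AIDecision,
                          confidence_floor: float = 0.9) -> bool:
    """Route to a human when the domain is high-stakes or confidence is low.
    High-stakes outputs are never auto-approved, regardless of confidence,
    to counter automation bias."""
    if decision.domain in HIGH_STAKES_DOMAINS:
        return True
    return decision.confidence < confidence_floor

requires_human_review(AIDecision("law", 0.99, "clause flagged"))    # True
requires_human_review(AIDecision("marketing", 0.95, "draft copy"))  # False
```

The key design choice is that confidence never overrides the domain check: a 99%-confident output in an employment decision still goes to a human, which is precisely the discipline automation bias erodes.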

Procurement Considerations

Before deploying a third-party AI system, the Guidance recommends assessing:

  • Where operational and input data is stored, and which jurisdiction governs it
  • Model performance metrics, including accuracy and bias testing results
  • Ownership of inputs and outputs under supplier terms of service
  • Vendor lock-in risk and data portability
  • Whether training data meets ethical and legal sourcing standards

The AI Forum NZ's procurement guides and AI model cards are referenced as due diligence tools.

Ethical Data Sourcing and Copyright

For businesses developing or fine-tuning AI models, the Guidance addresses the growing importance of licensed training data. Options include direct licensing deals with publishers, collective licensing schemes (including an anticipated New Zealand scheme from Copyright Licensing NZ later in 2025), and emerging fair marketplaces.

Models trained exclusively on licensed data, such as Adobe Firefly and Te Hiku's te reo Māori ASR model, are cited as examples of best practice.

Bottom Line for AI Legaltech Firms

This Guidance does not create new law, but it crystallises regulatory expectations and provides a defensible framework for responsible AI deployment. For firms advising clients on AI adoption, or deploying AI in their own practice, it offers:

  • A structured compliance checklist mapped to existing NZ legislation
  • Scenario-based illustrations of where AI can generate legal and reputational liability
  • Practical governance templates adaptable to firm size and risk appetite
  • Clear signposting to international standards (ISO/IEC 42001, NIST AI RMF, EU AI Act) for clients operating globally

The full Guidance, checklists, and supplementary resources are available at mbie.govt.nz.

This summary is provided for informational purposes only and does not constitute legal advice. Businesses should seek independent legal counsel regarding their specific obligations under applicable legislation.