Security and confidentiality
Accretive is built for the confidentiality requirements of legal practice in New Zealand. This page sets out how we handle client materials, how workspaces are isolated, and our current compliance position, together with the New Zealand legal context for AI use.
Summary
- Matter isolation: documents uploaded to one matter are not accessible from another, even within the same firm account.
- Encryption: client materials are protected in transit (TLS) and at rest (AES-256).
- No model training: Accretive does not use uploaded documents to train models or improve its systems.
- Human review: lawyers remain responsible for checking and approving every output before use.
Document confidentiality
Documents you upload to Accretive are used only for the purpose of completing the drafting task you have initiated. They are not shared with other users, other matters, or other firms.
Accretive does not use your documents to train models or improve its systems. Your precedents remain yours.
Matter separation
Each drafting task operates within its own isolated workspace. Documents from one matter are not accessible from another, even within the same firm account. This applies to uploaded files, deal context, and generated output.
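The isolation rule above can be sketched as a matter-scoped lookup. This is an illustrative model only, not Accretive's actual implementation; the names `Document`, `MatterWorkspace`, and `get_document` are hypothetical:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Document:
    doc_id: str
    matter_id: str
    content: bytes


class MatterWorkspace:
    """In-memory sketch of a matter-scoped document store."""

    def __init__(self) -> None:
        self._docs: dict[str, Document] = {}

    def upload(self, doc: Document) -> None:
        self._docs[doc.doc_id] = doc

    def get_document(self, doc_id: str, requesting_matter_id: str) -> Document:
        doc = self._docs.get(doc_id)
        # A document is only visible from the matter it was uploaded to,
        # even when both matters belong to the same firm account.
        if doc is None or doc.matter_id != requesting_matter_id:
            raise PermissionError("document not accessible from this matter")
        return doc
```

The key property is that every read path carries the requesting matter's identity, so cross-matter access fails closed rather than depending on callers to filter results.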
Encryption
Documents are encrypted in transit and at rest. All connections to the Accretive platform use TLS. Stored documents are encrypted using AES-256. Encryption keys are managed separately from the data they protect.
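At-rest protection of this kind is commonly implemented with AES-256 in an authenticated mode. The sketch below is illustrative only and assumes the third-party `cryptography` package; the function names are hypothetical, and in a real deployment the key would be held in a key-management service, separate from the data it protects, as the paragraph above notes:

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def encrypt_document(plaintext: bytes, key: bytes) -> bytes:
    # AES-256-GCM with a fresh 12-byte random nonce prepended to the
    # ciphertext; GCM also authenticates the data against tampering.
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)


def decrypt_document(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)


# 256-bit key; generated here for illustration only. Managed separately
# from stored documents in practice.
key = AESGCM.generate_key(bit_length=256)
```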
User access and permissions
Access to the Accretive platform is controlled at the firm level. Each user account is individual and requires authentication. Document access within the platform is scoped to the matter in which documents were uploaded.
Administrators can manage which users have access to the platform and review activity within their firm's account.
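The two scopes described above, matter-scoped document access and firm-scoped administrative review, can be sketched as simple predicate checks. This is a hypothetical model, not Accretive's actual authorisation code:

```python
from dataclasses import dataclass, field


@dataclass
class User:
    user_id: str
    firm_id: str
    is_admin: bool = False
    matter_ids: set[str] = field(default_factory=set)


def can_access_documents(user: User, firm_id: str, matter_id: str) -> bool:
    # Document access is scoped to the matter in which the documents
    # were uploaded, within the user's own firm.
    return user.firm_id == firm_id and matter_id in user.matter_ids


def can_review_activity(user: User, firm_id: str) -> bool:
    # Administrators can review activity within their own firm's
    # account only.
    return user.is_admin and user.firm_id == firm_id
```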
Audit visibility
The platform maintains a record of activity within each workspace: when documents were uploaded, when the drafting task was run, and when output was generated. This record is available to firm administrators and is intended to support internal oversight of how the platform is being used.
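An activity record like the one described can be modelled as an append-only event log with timestamps. The class and event names below are illustrative assumptions, not the platform's actual schema:

```python
from datetime import datetime, timezone


class WorkspaceAuditLog:
    """Append-only sketch of per-workspace activity, readable by firm admins."""

    def __init__(self) -> None:
        self._events: list[dict] = []

    def record(self, event_type: str, actor: str) -> None:
        # Event types mirror the activity described above: document
        # uploads, drafting runs, and output generation.
        self._events.append({
            "type": event_type,
            "actor": actor,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def events(self) -> list[dict]:
        # Return a copy so the underlying log stays append-only.
        return list(self._events)
```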
Document retention
Accretive does not retain document content indefinitely. Uploaded materials and generated output are available within your workspace for the duration of the active matter. Firms can request deletion of workspace data at any time.
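The retention model above, where data lives with the workspace and a deletion request removes it as a unit, can be sketched as follows. The store and method names are hypothetical:

```python
class WorkspaceStore:
    """Sketch of matter-lifetime retention: data lives with its workspace."""

    def __init__(self) -> None:
        self._workspaces: dict[str, dict[str, bytes]] = {}

    def put(self, matter_id: str, doc_id: str, content: bytes) -> None:
        self._workspaces.setdefault(matter_id, {})[doc_id] = content

    def delete_workspace(self, matter_id: str) -> None:
        # A firm's deletion request removes uploaded materials and
        # generated output together, as one workspace.
        self._workspaces.pop(matter_id, None)

    def has_workspace(self, matter_id: str) -> bool:
        return matter_id in self._workspaces
```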
NZ legal context: Core duties when using AI
- The Lawyers and Conveyancers Act 2006, section 4 requires lawyers to uphold the rule of law, act independently, meet fiduciary duties, and protect client interests.
- Chapter 8 of the Rules of Conduct and Client Care (RCCC) imposes confidentiality obligations, including the duty to hold all information concerning a client in strict confidence.
- Under rule 8.1, the duty of confidence commences from disclosure in relation to a proposed retainer and continues indefinitely, even after the person ceases to be a client.
- Lawyers still carry responsibility for competence and conduct, including RCCC rules on competence (r 3), misleading conduct (r 10.9), and duties to the court (r 13.1), as reflected in NZLS and judiciary AI guidance.
NZLS guidance for lawyers using AI
- The NZ Law Society’s Generative AI guidance confirms that New Zealand has no dedicated AI statute yet, but that existing professional and legal duties apply in full.
- Lawyers remain responsible for outputs, including citation and factual accuracy, even where AI is used.
- The guidance highlights confidentiality, privilege, privacy, and supervision risk when using external AI tools, and includes a practical checklist for implementation.
- NZLS materials expressly connect AI risk back to RCCC obligations, including competence, fidelity to the court, and proper supervision of legal practice.
High Court and Supreme Court guidance on AI
- The Courts of New Zealand have issued generative AI guidelines for lawyers that apply across all benches.
- The lawyers’ guideline expressly states that it applies to the Senior Courts, including the Supreme Court and the High Court (page 7 of the guideline).
- The court guidance requires caution on confidentiality, suppression, and privilege, and reinforces counsel’s duty to verify citations and factual content before filing.
- Disclosure of AI use is not automatically required in every case, but a court or tribunal may request or require it.
Current product security baseline
- Session-based authentication and signed cookies for dashboard/API access.
- Owner and organisation checks on templates, draft jobs, and document retrieval routes.
- Server-side validation for key submission flows (demo requests and draft-job inputs).
- Document privacy controls in the schema and migrations, including a row-level security (RLS) migration on the documents table.
Compliance roadmap: ISO 27001, SOC 2, ISO 42001
- ISO/IEC 27001: building a formal ISMS program around access control, risk treatment, incident response, supplier assurance, and policy governance.
- SOC 2: aligning controls and evidence collection to Trust Services Criteria (security first, then availability/confidentiality as scoped).
- ISO/IEC 42001: establishing AI-specific governance for model use, human-in-the-loop review, transparency, and AI risk management.
- These are active roadmap targets; Accretive does not yet claim formal certification or attestation against any of these standards.
This page is an implementation and guidance summary for product transparency. It is not legal advice.