
Preparing for Australia’s 2026 privacy transparency rules on automated decision making


I am a legal counsel and IP specialist with technology expertise in software, machine learning and Web3 technologies. I also have extensive experience with medical devices and mechanical devices.

Automated decision-making and AI-enabled processes are increasingly embedded in customer, employee, and operational workflows. Where these processes use personal information, privacy risk is not limited to data handling. It also extends to how decisions are made, what information is used, and whether individuals can understand and challenge outcomes.

Reforms introduced by the Privacy and Other Legislation Amendment Act 2024 (Cth) add new transparency obligations to the Australian Privacy Principles (APPs) that commence on 10 December 2026.

This is not a theoretical compliance exercise. The Office of the Australian Information Commissioner (OAIC) announced it will begin 2026 with its first privacy compliance sweep, reviewing selected businesses’ privacy policies for compliance with existing transparency requirements.

For in-house counsel and business leaders, the immediate decision-support question is: what processes in the organisation are “automated decision making” for Privacy Act purposes, and what must be disclosed in the privacy policy by December 2026?

What is automated decision-making?

An automated decision-making (ADM) system is a computerised process that assists or replaces human judgment in making decisions. ADM systems range from simple rule-based scoring (e.g., thresholds and eligibility criteria) to machine-learning and AI systems that infer risk, propensity, or likely outcomes from large datasets.
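To make the "simple rule-based scoring" end of the spectrum concrete, here is a minimal, purely hypothetical sketch of a threshold-based eligibility check. Every field name, threshold, and outcome below is illustrative and not drawn from any real system or regulatory template.

```python
# Hypothetical rule-based ADM: fixed thresholds route an application to an
# outcome with no human judgment. All names and values are illustrative.
def assess_application(applicant: dict) -> str:
    """Return a routing outcome based on fixed eligibility rules."""
    if applicant["annual_income"] < 30_000:
        return "refer_to_human_review"   # below income threshold
    if applicant["defaults_last_5_years"] > 0:
        return "decline"                 # hard eligibility rule
    return "approve"                     # all rules passed

print(assess_application({"annual_income": 55_000, "defaults_last_5_years": 0}))
# prints "approve"
```

Even a process this simple can be in scope for the new transparency rules if it makes, or substantially shapes, decisions about individuals using their personal information.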

In practice, ADM is used to:

  • approve, refuse or route applications and requests (credit, services, access, refunds)

  • set pricing, eligibility or prioritisation (including differential pricing)

  • triage, escalate or close customer complaints and service interactions

  • detect fraud or non-compliance and trigger downstream actions

  • support HR processes (screening, ranking, performance flags, workforce analytics)

The privacy implications depend on the personal information used, the impact on individuals, and the level of human involvement.

Why ADM creates specific privacy risk

ADM systems often rely on personal information and may aggregate data from multiple sources. Key risk areas include:

  • Transparency risk: individuals may not understand that a decision was driven or materially shaped by a computer program, or what information was used.

  • Integrity and fairness risk: historical datasets can embed bias or under-representation, which can produce systematically unfair outcomes (for example, persistent disadvantage to a cohort because past data reflects older structural inequities).

  • Data minimisation and retention risk: ADM projects often expand the volume and variety of data collected and may retain data longer than necessary, as it is perceived as useful for “model improvement” or future analysis.

  • Accountability risk: without clear logs and governance, it can be difficult to explain or challenge an outcome, even internally.

These are not only ethical issues. They are operational and regulatory risks, particularly where decisions affect access to services, employment opportunities, education, healthcare, or financial outcomes.
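The accountability risk above is partly a tooling problem: without a record of what information a decision used and which version of the rules or model produced it, an outcome cannot be explained or challenged even internally. A minimal decision-log sketch, assuming a simple append-only JSON Lines file (all field names are hypothetical, not a prescribed format):

```python
import datetime
import json

def log_decision(system_id, inputs, outcome, model_version, path="decisions.jsonl"):
    """Append one decision record so the outcome can later be explained or audited."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,
        "inputs": inputs,                # the personal information actually used
        "outcome": outcome,
        "model_version": model_version,  # which rules/model produced the outcome
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Illustrative call: record an automated refund decision.
log_decision("refund_triage", {"purchase_count": 3}, "auto_approve", "rules-v1")
```

In practice organisations would use their existing logging and audit infrastructure; the point is that each record ties an outcome to its inputs and to the system version that produced it.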

Existing transparency obligations under APP 1

APP 1 requires APP entities to manage personal information openly and transparently. This includes maintaining a clearly expressed, up-to-date privacy policy, available free of charge and in an appropriate format, that describes how personal information is managed (including categories of information collected, purposes, and disclosures).

APP 1 also requires practices, procedures and systems that support compliance with the APPs. In practical terms, that implies an ongoing program of review and maintenance rather than a “set and forget” privacy policy.

The new ADM transparency requirements commencing 10 December 2026

From 10 December 2026, APP 1 will include new requirements (APP 1.7 to 1.9) that require privacy policies to include specified information about the organisation’s use of ADM in certain circumstances.

Which decisions are in scope

The new transparency requirements cover decisions:

  • made entirely by a computer program, and

  • substantially made or influenced by a computer program (including where a human is involved, if the program has a substantial and direct role in making or influencing the decision).

This is important for governance: a “human in the loop” does not automatically remove a process from scope. Conversely, not every use of software in a decision will be captured; the focus is on whether the program is materially shaping the decision.

What must be disclosed in the privacy policy

Where the requirements apply, the privacy policy must include information about:

  • the types of personal information used in the operation of the ADM system

  • the types of decisions made solely by the ADM system

  • the types of decisions for which the ADM system does a thing that is substantially and directly related to making the decision

The OAIC also notes that privacy policies are not expected to include every operational detail, and that a layered approach can be used to present information clearly and accessibly.

Increased regulatory scrutiny in 2026

OAIC privacy policy compliance sweep

The OAIC announced its first privacy compliance sweep will begin in the first week of January 2026, involving a targeted review of selected businesses’ privacy policies, focusing on sectors that collect personal information in person (for example, real estate/rental and property, car rental and car dealerships, chemists and pharmacists).

Transparency is a regulator priority

On 21 January 2026, the Australian Information Commissioner published a report reviewing how transparently Australian Government agencies describe their use of ADM on their websites. While that review relates to government and FOI settings, it reinforces the OAIC’s emphasis on transparency as a compliance theme for 2026.

Practical preparation steps for organisations

A defensible approach to the 2026 ADM transparency requirements usually involves four workstreams.

  1. Identify in-scope decision processes

    • Map decisions that affect individuals (customers, staff, students, patients, members).

    • Identify where computer programs materially influence outcomes (scoring, ranking, recommendations, triage, auto-closures, eligibility gates).

  2. Map personal information flows

    • What personal information is used?

    • Where does it come from (internal systems, third parties, inferred or derived fields)?

    • Who can access it, and how is it secured?

  3. Assess risk and control design

    • What are the foreseeable failure modes (errors, bias, drift, over-collection)?

    • What controls exist (review points, thresholds, exception handling, logging, auditability)?

    • Are there clear accountabilities for model changes and rule updates?

  4. Update privacy policy and supporting governance

    • Draft clear disclosures that meet the requirements of APP 1.8.

    • Ensure the policy is readable, navigable and consistent with actual practice.

    • Implement a maintenance process to ensure disclosures remain accurate as systems evolve.
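Workstreams 1 and 2 above are, in substance, an inventory exercise, and many organisations capture the results in a structured register. A hypothetical sketch of one inventory record (every field name is illustrative, not a regulatory template):

```python
from dataclasses import dataclass

# Hypothetical ADM inventory record supporting workstreams 1 and 2.
# Field names are illustrative only.
@dataclass
class ADMProcess:
    name: str                 # e.g. "refund auto-approval"
    decision_types: list      # what the program decides or materially shapes
    personal_info_used: list  # categories of personal information involved
    human_involvement: str    # e.g. "none", "review-on-exception", "final sign-off"
    owner: str                # accountable business owner
    in_scope: bool = False    # preliminary APP 1.7-1.9 assessment

inventory = [
    ADMProcess(
        name="refund auto-approval",
        decision_types=["approve/refuse refund requests"],
        personal_info_used=["purchase history", "account identifiers"],
        human_involvement="review-on-exception",
        owner="Customer Operations",
        in_scope=True,
    ),
]
```

A register like this also supports workstream 4: the in-scope entries map directly onto the categories of decisions and personal information the privacy policy must describe.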

The role of in-house counsel

In-house counsel is central to making this work practical and credible, because the task is not only legal drafting. It is organisational discovery and risk management.

In-house counsel can add the most value by:

  • ensuring ADM uses are identified early (including in procurement and product design)

  • distinguishing “decision support” tools from systems that materially influence decisions

  • aligning privacy policy wording with actual system behaviour and operational controls

  • driving internal accountability: who owns the system, who approves changes, who handles complaints or challenges

  • keeping disclosures clear and usable, consistent with OAIC expectations for transparent privacy policies

Bottom line

By December 2026, many organisations will need to publicly explain in their privacy policies how automated decision-making uses personal information and materially influences decisions about individuals.

Given the OAIC’s early-2026 privacy policy sweep and broader transparency focus, the sensible approach is to treat 2026 as a preparation year: identify where ADM is already operating, assess what is in scope, and ensure privacy policy disclosures match reality.