Facial recognition and the Privacy Act: a clearer (but stricter) line for businesses
A recent decision of the Administrative Review Tribunal has clarified when Australia’s Privacy Act 1988 (Cth) applies to facial recognition technology (FRT) used by private organisations on their premises. The decision provides a workable pathway for limited use of FRT in high-risk security contexts, but it also reinforces that governance and transparency failures can still result in Privacy Act breaches, even where the underlying security objective is accepted.
What the Tribunal clarified
1) Facial recognition is “biometric information” and therefore “sensitive information”
The Tribunal confirmed that facial images used for automated matching and the biometric templates derived from them constitute biometric information. Under the Privacy Act, biometric information used for automated biometric verification or identification is treated as sensitive information and attracts higher protections. Importantly, even if the system deletes non-matching data quickly, the Tribunal treated the initial capture and matching as a "collection" for Privacy Act purposes, so the Act's obligations are engaged from that point.
2) Collection without consent can be lawful in a “permitted general situation,” but only on evidence
The most important aspect for businesses is the Tribunal’s acceptance that biometric collection without consent may be lawful where an organisation can rely on the “permitted general situation” exception (often described in practice as an unlawful-activity/safety-risk exception). The test is whether the organisation reasonably believes the collection is necessary to prevent or respond to serious unlawful activity or safety risks.
Two practical points flow from the Tribunal’s approach:
The organisation does not need to prove that facial recognition was the only option available, but it must show that the belief was objectively supportable.
The analysis must be evidence-based. Assertions of “safety” or “theft prevention” without an incident history and a clear rationale for necessity are unlikely to be sufficient.
Why Bunnings succeeded on the “permitted exception” point (and why this is not a blanket approval)
The Tribunal accepted Bunnings’ security justification based on the record before it: serious repeat offending, evidence of violence and abuse towards staff, the limits of alternative controls, and system features designed to reduce privacy impact (including rapid deletion of non-matches and restricted access to watchlists).
The Tribunal made it clear that this was a fact-specific outcome and not a general endorsement of retail facial recognition. Businesses should treat the decision as guidance on what a defensible necessity case looks like, not as permission to deploy FRT as a standard loss-prevention tool.
The warning: “lawful collection” is not the end of the compliance task
Even though Bunnings succeeded on the threshold question of whether consent was required in those circumstances, the Tribunal still upheld breaches relating to notice, privacy governance, and privacy policy disclosures.
Key failings included:
generic “video surveillance” signage that did not adequately inform individuals about biometric collection and automated matching
insufficient transparency in privacy policy settings about the use of FRT
weak privacy governance at rollout, including the absence of a robust privacy impact assessment and supporting documentation that would normally be expected for biometrics.
How this decision fits with recent OAIC enforcement activity
This decision aligns with a clear enforcement pattern in Australia: regulators focus heavily on notice, consent, proportionality, and demonstrable governance in biometric deployments.
Kmart: In September 2025, the Privacy Commissioner found Kmart’s use of FRT to tackle refund fraud unlawful, including for collecting biometric information without consent and failing to meet notification requirements.
7-Eleven: The OAIC has also addressed governance failures in biometric deployments, including an incident in which a service provider inadvertently re-enabled FRT functionality, which remained active for approximately 12 months before detection and deactivation.
The operational lesson is consistent: even when a security or business objective is legitimate, organisations materially increase breach and enforcement risk by under-investing in transparency, the control environment, and the documentary record, particularly for biometrics.
Practical takeaways for business decision-makers
Assume FRT is sensitive biometric collection
Treat facial recognition deployments as high-risk from the outset, even if biometric data is held briefly or deleted for non-matches. Design and governance should assume APP obligations will apply.
If relying on a permitted exception, "show your working"
If you intend to collect biometrics without consent, you need an evidence-based necessity case: incident data, an explanation of why less intrusive measures are insufficient, and an explanation of why the proposed design is proportionate. The safest way to structure this is to complete a privacy impact assessment as part of the decision-making process, not after deployment.
Build proportionality and safeguards into system design from day one
Examples of safeguards that supported Bunnings' case included limited watchlists, restricted access, and rapid deletion of non-matches. Organisations should adopt a "minimise impact" design approach and document how each control reduces privacy risk.
Do not treat notice and privacy policy updates as administrative tasks
The Tribunal's findings underscore that transparency failures can lead to breach findings even when the underlying collection is accepted. Signage, front-of-house notices, and privacy policy disclosures must clearly address biometric collection and automated matching, not just general CCTV.
Treat vendor/device controls as a compliance risk
The 7-Eleven incident demonstrates how vendor maintenance actions can inadvertently reintroduce biometric processing. Contracts, device configuration controls, auditability, and monitoring should be designed to detect and prevent unauthorised reactivation or configuration drift.
Bottom line
This Tribunal decision indicates that FRT can be deployed lawfully in Australia in narrow, well-justified security settings. But it also sets a stricter practical standard: businesses should assume that biometric programs will fail Privacy Act scrutiny unless necessity, proportionality, governance, and transparency are established and evidenced before rollout, not retrofitted after issues arise.