
Documents reveal AFP’s use of controversial facial recognition technology Clearview AI

Documents reveal how the Australian Federal Police made use of Clearview AI — a controversial facial recognition technology that is now the focus of a federal investigation.

At least one officer tested the software using images of herself and another member of staff as part of a free trial.

In another incident, staff from the Australian Centre to Counter Child Exploitation (ACCCE) conducted searches for five “persons of interest”.

According to emails released under Freedom of Information laws, one officer also used the app on their personal phone, apparently without information security approval.

Based in New York, Clearview AI says it has created a tool that allows users to search faces across a database that contains billions of photos taken, or “scraped”, without consent from platforms such as Facebook and Instagram.

The company provoked outrage in January 2020, when the New York Times revealed the extent of its data collection and its use by law enforcement officials in the United States.

The AFP initially denied any ties to Clearview AI before later confirming officers had accepted a trial.

An agency spokeswoman said a “limited pilot of the system” was conducted to assess its suitability for combating child exploitation and abuse.

She did not respond to questions from the ABC about whether the trial was approved and conducted appropriately by officers.

Last week, the Office of the Australian Information Commissioner (OAIC) announced an investigation into Clearview’s use of scraped data and biometrics, working with the UK’s Information Commissioner’s Office (ICO).

AFP initially denied using Clearview AI

The AFP acknowledged in April that members of the ACCCE had undertaken a free trial of Clearview’s facial recognition services, but the extent of its use by officers remained unclear.

No formal contract was ever entered into.

“The use by AFP officers of private services to conduct official AFP investigations in the absence of any formal agreement or assessment as to the system’s integrity or security is concerning,” Labor leaders, including Shadow Attorney-General Mark Dreyfus, said in a statement at the time.

The new cache of AFP documents shows officers accessed the Clearview AI platform from early November 2019.

Tests of the tool undertaken using images of AFP staff and several “persons of interest” are detailed in the agency’s response to questions issued by the Information Commissioner as part of the office’s inquiries.

However, the agency said it did not know how many searches officers actually undertook, because the AFP’s access to Clearview AI had since been restricted.

An executive briefing note claims Clearview AI was used operationally only once to locate a suspected victim of imminent sexual assault.

“To date no Australian personal information has been successfully retrieved through the Clearview platform,” the briefing also states.

The use of Clearview AI appears to have caused concern within the agency — and in some cases, officers appear to query whether the tool has been formally approved.

In December 2019, one officer asks whether “info sec” (information security) has raised any concerns about the use of Clearview AI.

Another officer replies that they “haven’t even gone down that path yet”, revealing that they are “running the app” on their personal phone.

In January, after the media began reporting on Clearview AI, another member of staff notes that “there should be no software used without the appropriate clearance”.

The emails also show some bemusement internally at public claims the AFP was not using the tool, with one officer commenting: “Maybe someone should tell the media that we are using it!”

“Or should we stop using it since everyone is raising the issue of approval,” another replies, with a smiley face emoji.

“Interesting that someone says we aren’t using it when we clearly are,” another employee from the ACCCE wrote on January 21.

Officers were directed to cease all access as of January 22, 2020 — four days after the New York Times story was published.

Clearview AI was founded by Australian businessman Hoan Ton-That.

In the documents, he appears to contact an AFP officer personally via email in December 2019 — introducing himself and asking them how they found the tool.

In a statement, Mr Ton-That said Clearview would cooperate with the UK’s ICO and Australia’s OAIC.

“Clearview AI searches publicly available photos from the internet in accordance with applicable laws,” he said. “It’s powerful technology [and] is currently unavailable in UK and Australia.”

Shortly after Mr Ton-That’s December message, an AFP officer wrote in an email that they had run a mugshot through the Clearview system and “got a hit for [the suspect’s] Instagram account”.

“The [facial recognition] tool looks very good,” they wrote.

Article courtesy: www.abc.net.au
