Documents reveal AFP’s use of controversial facial recognition technology Clearview AI

Documents reveal how the Australian Federal Police made use of Clearview AI — a controversial facial recognition technology that is now the focus of a federal investigation.

At least one officer tested the software using images of herself and another member of staff as part of a free trial.

In another incident, staff from the Australian Centre to Counter Child Exploitation (ACCCE) conducted searches for five “persons of interest”.

According to emails released under Freedom of Information laws, one officer also used the app on their personal phone, apparently without information security approval.

Based in New York, Clearview AI says it has created a tool that allows users to search faces across a database that contains billions of photos taken, or “scraped”, without consent from platforms such as Facebook and Instagram.

The company provoked outrage in January, when the New York Times revealed the extent of its data collection and its use by law enforcement officials in the United States.

The AFP initially denied any ties to Clearview AI before later confirming officers had accepted a trial.

An agency spokeswoman said a “limited pilot of the system” was conducted to assess its suitability in combating child exploitation and abuse.

She did not answer questions from the ABC about whether the trial was approved and conducted appropriately by officers.

Last week, the Office of the Australian Information Commissioner (OAIC) announced an investigation into Clearview’s use of scraped data and biometrics, working with the UK’s Information Commissioner’s Office (ICO).

AFP initially denied using Clearview AI

The AFP acknowledged in April that members of the ACCCE had undertaken a free trial of Clearview’s facial recognition services, but the extent of its use by officers remained unclear.

No formal contract was ever entered into.

“The use by AFP officers of private services to conduct official AFP investigations in the absence of any formal agreement or assessment as to the system’s integrity or security is concerning,” Labor leaders, including Shadow Attorney-General Mark Dreyfus, said in a statement at the time.

The new cache of AFP documents shows officers accessed the Clearview AI platform from early November 2019.

The agency’s response to questions from the information commissioner, issued as part of the office’s inquiries, details tests of the tool using images of AFP staff and several “persons of interest”.

However, the agency said it did not know how many actual searches officers undertook, because the AFP’s access to Clearview AI was now restricted.

An executive briefing note claims Clearview AI was used operationally only once to locate a suspected victim of imminent sexual assault.

“To date no Australian personal information has been successfully retrieved through the Clearview platform,” the briefing also states.

The use of Clearview AI appears to have caused concern within the agency — and in some cases, officers appear to query whether the tool has been formally approved.

In December 2019, one officer asks if “info sec” (information security) had raised any concerns about the use of Clearview AI.

Another officer replies that they “haven’t even gone down that path yet”, revealing that they are “running the app” on their personal phone.

In January, after the media began reporting about Clearview AI, another member of staff notes “there should be no software used without the appropriate clearance”.

The emails also show some bemusement internally at public claims the AFP was not using the tool, with one officer commenting: “Maybe someone should tell the media that we are using it!”

“Or should we stop using it since everyone is raising the issue of approval,” another replies, with a smiley face emoji.

“Interesting that someone says we aren’t using it when we clearly are,” another employee from the ACCCE wrote on January 21.

Officers were directed to cease all access as of January 22, 2020 — four days after the New York Times story was published.

Clearview AI was founded by Australian businessman Hoan Ton-That.

In the documents, he appears to contact an AFP officer personally via email in December 2019 — introducing himself and asking them how they found the tool.

In a statement, Mr Ton-That said Clearview would cooperate with the UK’s ICO and Australia’s OAIC.

“Clearview AI searches publicly available photos from the internet in accordance with applicable laws,” he said. “It’s powerful technology [and] is currently unavailable in UK and Australia.”

Shortly after Mr Ton-That’s December message, an AFP officer wrote in an email that they had run a mugshot through the Clearview system and “got a hit for [the suspect’s] Instagram account”.

“The [facial recognition] tool looks very good,” they wrote.

Article courtesy: www.abc.net.au

Australian privacy watchdog launches investigation into Clearview AI

Australia’s privacy watchdog will probe the personal information handling practices of Clearview AI after several policing agencies admitted to having used the controversial facial recognition tool.

The Office of the Australian Information Commissioner (OAIC) on Thursday opened a joint investigation into the software with the United Kingdom’s Information Commissioner’s Office (ICO).

The tool, which is targeted at law enforcement agencies, is capable of matching images with billions of others from across the internet, including social media, to find persons of interest.

As part of the probe, OAIC and its overseas counterpart will look at Clearview AI’s “use of ‘scraped’ data and biometrics of individuals”, as well as how it manages personal information more broadly.

“The investigation highlights the importance of enforcement cooperation in protecting the personal information of Australian and UK citizens in a globalised data environment,” the OAIC said in a brief statement.

“In line with the OAIC’s privacy regulatory action policy, and the ICO’s communicating our regulatory and enforcement activity policy, no further comment will be made while the investigation is ongoing.”

The investigation follows preliminary enquiries by OAIC earlier this year after the tool was revealed to have been used by 2200 law enforcement agencies globally, including the Australian Federal Police and the Queensland, Victoria and South Australia police forces.

While the four policing agencies initially denied that the software had been used, the AFP and Victoria Police have since been forced to admit to having briefly trialled the tool from late 2019.

The AFP confirmed in answers to questions on notice that seven officers from the Australian Centre to Counter Child Exploitation had used the tool to conduct searches after being sent trial invitations from Clearview AI.

Victoria Police, similarly, confirmed in a freedom of information request that several officers from the Joint Anti-Child Exploitation Team had run more than 10 searches using the tool after signing up.

Both agencies stressed that Clearview AI had not been adopted as an enterprise product and that no formal commercial agreements had been entered into.

Article courtesy: www.itnews.com.au/