Australians need greater legal protection to prevent tech giants harvesting their personal information, including photos of their children, to train generative AI tools.
Politicians and academics issued the call on Wednesday after Meta executives revealed photos and posts Australians shared on Facebook and Instagram as far back as 2007 had been used to build its AI models.
The US company confirmed its use of the data at the Senate inquiry into Adopting Artificial Intelligence in Canberra, with representatives also revealing European options to prevent the content being used would not be extended to Australians.
The inquiry, which is expected to present a final report next week, is examining AI trends, opportunities and risks, as well as its impact on elections and the environment.
Meta privacy policy global director Melinda Claybaugh told the Senate committee the company ingested content users shared publicly on its platforms to train its generative AI tools, Llama and Meta AI.
Ms Claybaugh also said Meta did not use photos posted by children but, under questioning, revealed that any photos of children shared by adults were used to train AI.
"I want to be very clear that we are not using data from accounts of under 18-year-olds to train our models," she said.
"We are using public photos posted by people over 18."
Ms Claybaugh said Australian Facebook and Instagram users could avoid having their content used to train AI by hiding it from public view, but said they would not be offered an option to opt out of the scheme that was available in some other nations.
"We are offering an opt-out to users in Europe, however that is not a settled legal situation," she said.
"The solution in Europe is specific to Europe."
But Labor Senator Tony Sheldon, who chaired the inquiry, labelled the tech giant's use of personal photos "an unprecedented violation" and called for legal restrictions on its behaviour.
"Meta must think we're mugs if they expect us to believe someone uploading a family photo to Facebook in 2007 consented to it being used 17 years later to train AI technology that didn't even exist at the time," he said.
"If our privacy laws allow this, they need to be changed."
RMIT University technology and information associate dean Dana McKay said Meta's use of personal content would probably shock many users and demonstrated the need for stronger regulation.
"This is a clear sign we need new privacy laws," she said.
"In this case, Australian people were unaware and it's not clear the (data) scraping has benefited them."
Meta Asia Pacific public policy vice-president Simon Milner defended the company's use of Australians' data, telling senators AI risks such as bias could be addressed by harvesting more local information.
He admitted the company's 20,000-word privacy policy was onerous for users, but said requiring them to actively consent to sharing their data would be a frustrating experience.
"You're trying to get that balance right all the time but a kind of compulsory opt-in at all times, it would be extremely annoying for most people across the internet," Mr Milner said.
"We know that for a fact."
The Senate committee, which has also heard from tech firms including Amazon, Microsoft and Google, is expected to present a final report by September 19.
Jennifer Dudley-Nicholson
(Australian Associated Press)