noyb urges 11 DPAs to immediately stop Meta's abuse of personal data for AI

Forced Consent & Consent Bypass
 /  06 June 2024

Over the past few days, Meta has informed millions of Europeans that its privacy policy is changing once again. Only on closer inspection of the links in the notification did it become clear that the company plans to use years of personal posts, private images and online tracking data for an undefined "AI technology" that can ingest personal data from any source and share any information with undefined "third parties". Instead of asking users for their consent (opt-in), Meta argues that it has a legitimate interest that overrides the fundamental right to data protection and privacy of European users. Once their data is in the system, users seem to have no option of ever having it removed ("right to be forgotten"). noyb has now filed complaints in 11 European countries, asking the authorities to launch an urgency procedure to stop this change immediately, before it comes into force on 26 June 2024.

(Image: two people sitting at laptops, with the Meta logo and the slogan "Meta AI" in the background.)

All non-public data for some undefined future "AI technology". Unlike the already problematic situation of companies using certain (public) data to train a specific AI system (e.g. a chatbot), Meta's new privacy policy basically says that the company wants to take all public and non-public user data that it has collected since 2007 and use it for any undefined type of current and future "artificial intelligence technology". This includes the many "dormant" Facebook accounts users hardly interact with anymore – but which still contain huge amounts of personal data. In addition, Meta says it can collect additional information from any "third party" or scrape data from online sources. The only exception seems to be chats between individuals – but even chats with a company are fair game. Users aren't given any information about the purposes of the "AI technology" – which is against the requirements of the GDPR. Meta's privacy policy would theoretically allow for any purpose. This change is particularly worrying because it involves the personal data of about 4 billion Meta users, which will be used for experimental technology essentially without limit. At least users in the EU/EEA should (in theory) be protected from such abuse by the GDPR.

Max Schrems: "Meta is basically saying that it can use 'any data from any source for any purpose and make it available to anyone in the world', as long as it’s done via 'AI technology'. This is clearly the opposite of GDPR compliance. 'AI technology' is an extremely broad term. Much like 'using your data in databases', it has no real legal limit. Meta doesn't say what it will use the data for, so it could either be a simple chatbot, extremely aggressive personalised advertising or even a killer drone. Meta also says that user data can be made available to any 'third party' - which means anyone in the world."

Do Meta's interests override the users' rights? The processing of personal data in the European Union is prohibited by default. Therefore, Meta must rely on one of the six legal bases under Article 6(1) GDPR in order to process personal data. Although the logical choice would be opt-in consent, Meta is again claiming that it has a "legitimate interest" that overrides the fundamental rights of users. Meta has previously argued this in the context of using all personal data for advertising – and was rejected by the Court of Justice (see C-252/21). Now Meta uses the same legal basis to justify an even broader and more aggressive use of people's personal data.

Max Schrems: "The European Court of Justice has already made it clear that Meta has no 'legitimate interest' to override users' right to data protection when it comes to advertising. Yet the company is trying to use the same arguments for the training of undefined 'AI technology'. It seems that Meta is once again blatantly ignoring the judgements of the CJEU."

The objection is a farce. Meta even tries to make users responsible for taking care of their privacy by directing them to an objection form (opt-out) that users are supposed to fill out if they don't want Meta to use all their data. While in theory an opt-out could be implemented in such a way that it requires only one click (like the 'unsubscribe' button in newsletters), Meta makes it extremely complicated to object, even requiring users to state personal reasons. A technical analysis of the opt-out links even showed that Meta requires a login to view an otherwise public page. In total, Meta requires some 400 million European users to 'object', instead of asking for their consent.

Max Schrems: "Shifting the responsibility to the user is completely absurd. The law requires Meta to get opt-in consent, not to provide a hidden and misleading opt-out form. If Meta wants to use your data, they have to ask for your permission. Instead, they make users beg to be excluded. We were particularly surprised that Meta has even went to the trouble of builing in tons of little distractions to ensure that only a tiny number of users would actually bother to object."

Irish DPC is complicit (again). According to reports, this blatant breach of the GDPR is (again) based on a "deal" with the Irish Data Protection Commission (the DPC is Meta's EU regulator). The DPC has previously had a deal with Meta that allowed the company to circumvent the GDPR – and ended with a € 395 million fine against Meta after the European Data Protection Board (EDPB) overruled the Irish DPC.

Max Schrems: "It seems that the DPC's new management is just continuing to make illegal 'deals' with big tech companies from the US. It is mind-boggling that the DPC continues to let the misuse of the non-public personal data of about 400 million European users go unchecked."

Deadline 26 June: Urgency procedure requested. Given that Meta's processing for undisclosed "artificial intelligence technology" is already set to take effect on 26 June 2024, and Meta claims that there is no option to opt out at a later point to have your data removed (as foreseen under Article 17 GDPR and the "right to be forgotten"), noyb has requested an "urgency procedure" under Article 66 GDPR. Data protection authorities (DPAs) in 11 European countries (Austria, Belgium, France, Germany, Greece, Italy, Ireland, the Netherlands, Norway, Poland and Spain) received such a request on behalf of local data subjects. Article 66 allows DPAs to issue preliminary halts in situations such as the one described above and allows for an EU-wide decision via the EDPB. The Irish DPC and Meta Ireland have already been subject to two Urgent Binding Decisions by the EDPB (see Urgent Binding Decision 01/2023 and Urgent Binding Decision 01/2021) in similar situations before.

Max Schrems: "We hope that the authorities outside of Ireland will take quick action and at least stop this project for a full investigation. The EDPB has already issued two such urgency decisions against Meta and the Irish Data Protection Commissioner. It is sad to see that this measure seems to be necessary again and again."

Additional problems. In addition to the lack of any legal basis for sucking up more than a decade's worth of user data, Meta has previously said that it is technically unable to distinguish between data from users in the EU/EEA and data from other countries where people don't enjoy GDPR protection. Meta has also said that it cannot distinguish between sensitive data under Article 9 GDPR, such as ethnicity, political opinions or religious beliefs (for which the "legitimate interest" argument is not available under the law), and other data for which a "legitimate interest" could theoretically be claimed. With the introduction of its AI technology, Meta appears to have violated a number of other GDPR provisions, including GDPR principles, transparency rules and operational rules. Overall, noyb's complaints list violations of at least Articles 5(1) and (2), 6(1), 9(1), 12(1) and (2), 13(1) and (2), 17(1)(c), 18(1)(d), 19, 21(1) and 25 GDPR.

Max Schrems: "With the approach of simply using any data for any purpose for any 'AI technology', Meta has clearly left almost the entire GDPR framework. We counted violations of at least ten Articles of the law."

Next steps. The relevant DPAs will now have to make a quick decision whether to launch an urgency procedure or to deal with the complaints in a normal procedure. Two days ago, the Norwegian DPA published a blog post arguing that it is "doubtful" ("tvilsomt") whether Meta's approach is legal. An urgency procedure could lead to a rapid interim ban and a final decision by the EDPB in a matter of months. While today's complaints are a first step, it seems plausible that other organisations will follow up with injunctions, civil law cases or even class actions if Meta goes ahead with its plans. This could potentially drown Meta in another round of legal troubles in the European Union. noyb's actions against Meta alone have so far resulted in administrative fines of more than € 1.5 billion.

*The complaint in Norway was filed jointly with the Norwegian Consumer Council ("NCC"). Find more information at www.forbrukerradet.no.