EU Commission internal draft would wreck core principles of the GDPR

As gradually leaked over the past days by various news outlets, the EU Commission has secretly set in motion a potentially massive reform of the GDPR. If the internal drafts become reality, this would have a significant impact on people's fundamental rights to privacy and data protection. The reform would be part of the so-called "Digital Omnibus", which was supposed to bring only targeted adjustments to simplify compliance for businesses. Now, the Commission proposes changes to core elements like the definition of "personal data" and all data subjects' rights under the GDPR. The leaked draft also suggests giving AI companies (like Google, Meta or OpenAI) a blank check to suck up Europeans' personal data. In addition, the special protection of sensitive data like health data, political views or sexual orientation would be significantly reduced. Remote access to personal data on PCs or smartphones without the user's consent would also be enabled. Many elements of the envisaged reform would overturn CJEU case law and violate European Conventions and the European Charter of Fundamental Rights. Whether this extreme draft becomes the official position of the European Commission will only become clear on 19 November, when the "Digital Omnibus" is officially presented. Schrems: "This would be a massive downgrade of Europeans' privacy ten years after the GDPR was adopted."


Secret "fast-track" attack on the GDPR. The European Commission plans to simplify several EU laws via a so-called "Omnibus" reform, a tool that normally makes various smaller, horizontal changes across several laws to improve their quality and streamline paperwork obligations. In contrast to the traditional approach to lawmaking, the Omnibus is done via a "fast-track" procedure that skips several steps of the process, including impact assessments and time for feedback from legal services and relevant units in the EU institutions. This is acceptable as long as only non-contentious, simple improvements are made.

However, as has now been revealed, the European Commission's services under Executive Vice-President Henna Virkkunen (the so-called "DG CONNECT") are working on a massive reform of the GDPR under the heading of alleged "simplification" or "clarification".

Max Schrems: "The draft is not just extreme, but also very poorly drafted. It is not helping 'small business', as promised, but again mainly benefiting 'big tech'."

The Commission's original plan was to conduct a so-called "Digital Fitness Check" in 2026, collect the necessary evidence and then make a targeted and well-developed update to the GDPR and other digital laws.

EU "jumps" on German or US demands? Both stakeholders and Member States had explicitly asked not to reopen the GDPR. In a summary of Member State positions, almost all Member States that gave feedback said they wanted no change. However, Germany pushed for significant changes to the GDPR that go beyond the original mandate of this reform. It seems the Commission simply "jumped" on a German non-paper that leaked last week, given that many changes in the draft appear to be a 1:1 copy of demands in the leaked German letter. Germany did not provide evidence of the need for these reforms; it is unclear where the demands originate. Other voices point to recent reporting by Politico that Virkkunen's message to US businesses in direct meetings was that the EU will review its rules and become more business-friendly.

Max Schrems: "It is unclear where the political pressure comes from. Most Member States asked for tiny changes and no reopening. Germany has traditionally taken an anti-GDPR position in Europe. It seems easier to blame some EU law for German problems with digitisation than to fix matters nationally. We are not surprised that this latest push comes from Germany again. There is reporting that pressure from the US could also play a role."

Reading the first internal draft, it is clear that the potential damage to the GDPR (see details below) would be huge. Large parts of the draft violate European Conventions, the Charter of Fundamental Rights or settled Court of Justice case law. Whether this is intentional or a result of the fast pace of the work and the resulting poor quality of the draft remains open. Brussels insiders report that certain units had only five (!) working days to comment on a 180+ page draft law.

Max Schrems: "One part of the EU Commission seems to be trying to overrun everyone else in Brussels, disregarding the rules of good lawmaking, with potentially terrible results. It is very concerning to see Trumpian lawmaking practices taking hold in Brussels."

Tunnel Vision on the "AI race"? While there would be several good ways to improve and simplify the GDPR, the proposed changes seem to suffer from "tunnel vision" on enabling the training and use of AI – even on personal data. In a recent study, however, only 7% of Germans said they want Meta to use their personal data to train AI. Nevertheless, the reform targets every element of the GDPR that could limit AI usage.

Max Schrems: "What these changes seem to overlook is that most data processing is not AI-based. The potential changes that would 'liberate' AI would have massive unintended consequences for many other areas of the GDPR. The protections for health data, minorities or employees would also be killed by this draft. Large parts of the online advertisement business may be able to bypass GDPR duties because of the contemplated changes."

Death by 1000 cuts. While many changes sound technical, there are people's rights behind each "definition" in the GDPR. The changes range from cutting back on what is even considered "personal data" and therefore protected, all the way to limiting the "right to access" so that employees, journalists or researchers would no longer be able to access their own data. Changes would also allow lenient access to "pull" data from smartphones, PCs or connected devices, or largely allow companies to use Europeans' personal data for (commercial) AI training. While companies often argue that the GDPR is a "burden", the reality is that not even 1.3% of all GDPR complaints lead to a fine. In Ireland, only 0.6% of fines have so far been paid.

Max Schrems: "The draft would cut many little holes in the law, which would make it overall unusable for most cases. We already see almost no enforcement of EU privacy laws; with these changes, most cases we currently win would likely be lost or face even more complex procedures."

Basically no benefit for SMEs and EU businesses. The official reason for "clarifications" and "simplifications" via the Omnibus was to limit the administrative burden on Small and Medium Enterprises (SMEs) in the EU. noyb fully supports this goal. However, the proposed changes mostly concern companies that engage in AI training – companies that are now valued in the trillions (like OpenAI, Google, Meta, Amazon or Microsoft). While there are some clarifications that could benefit SMEs, such as rules on when so-called "Data Protection Impact Assessments" must be done, these will not have any larger impact on European competitiveness.

Max Schrems: "If this draft is final, then any talk about 'small business' and 'administrative burden' is clearly just a side-show to get public support. At its core this is a massive deregulation attempt, overthrowing 40 years of European fundamental rights doctrine."

Call for an urgent stop & proper lawmaking. Given that this proposal is riddled with problems and would also trigger major legal uncertainty, making many changes prone to (successful) legal challenges before the Court of Justice, this "fast-track" approach might not help anyone – be it SMEs, data protection authorities or users. First indications from the European Parliament also suggest that far-reaching changes would clearly hold up the "Omnibus" in Parliament. Given that the Commission intends to pass the other elements of the Digital Omnibus as quickly as possible, we think there is no other way than to remove most changes to the GDPR from the draft.

Max Schrems: "Firing a poorly drafted 'quick shot' in a highly complex and sensitive area will not only harm users, but won't help EU businesses either. It also calls into question whether this Omnibus can quickly pass through the European Parliament and Council. The Commission still has a week to decide what goes into the final proposal."


Overview of Changes

The draft Digital Omnibus proposes countless changes to many different Articles of the GDPR. In combination, this amounts to "death by 1000 cuts". We have compiled a general overview of some of the key changes. For the most detailed legal analysis, download our overview PDF here. All changes listed below are based on our best effort to understand the Commission's internal drafts. Some elements of the draft text are unclear, conflicting and of such poor quality that the true meaning and intention is not always clear.

(1) A new GDPR loophole via "pseudonyms" or "IDs"? It seems the Commission plans to significantly narrow the definition of "personal data" – which would result in the GDPR not applying in many cases. The plan is to add a "subjective approach" to the text of the GDPR. This would mean that if a specific company cannot identify a person, the data is not "personal" for that company – and the GDPR ceases to apply.

Furthermore, the draft refers to identification tools that are likely to be used by each specific controller. This would require further investigations and predictions about the inner workings and future actions of a specific company to know whether a person still enjoys his or her rights under the GDPR. In practice, this would make the GDPR hardly enforceable and lead to endless debates and fights about a company's true intentions and plans.

The draft can at least be read as abandoning the notion that techniques to "single out" a person are covered under the law. Instead, the wording of the draft hints at a general exemption from the GDPR if controllers simply use "pseudonyms" (like "user12473" or the data from a tracking cookie) instead of names. This would mean that entire industry sectors that are so far covered by the GDPR and operate via "pseudonyms" or random ID numbers would no longer be (fully) covered. This could apply to almost all online tracking, online advertising and most data brokers.
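To see why "pseudonyms" are not anonymity, consider a minimal sketch of how online tracking works. Everything here is hypothetical illustration (the IDs, URLs and class name are made up), but it shows the mechanism: a tracker that never stores a name can still "single out" one person and build a profile about them.

```python
from collections import defaultdict

class PseudonymousTracker:
    """Toy tracker: accumulates a browsing profile per cookie ID, without any name."""
    def __init__(self):
        self.profiles = defaultdict(list)

    def log_visit(self, cookie_id: str, url: str) -> None:
        self.profiles[cookie_id].append(url)

    def profile_of(self, cookie_id: str) -> list:
        # The same individual is singled out on every visit,
        # even though the tracker never learns who they are.
        return self.profiles[cookie_id]

tracker = PseudonymousTracker()
tracker.log_visit("user12473", "clinic.example/oncology")
tracker.log_visit("user12473", "jobs.example/apply")
tracker.log_visit("user99001", "news.example/sports")

# "user12473" is one identifiable browsing history - a pseudonym, not anonymity.
print(tracker.profile_of("user12473"))
```

Under a "subjective" definition of personal data, such a profile could fall outside the GDPR simply because the tracker holds no name, even though it singles out exactly one person.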

This change would also massively depart from the rather broad interpretation of the Court of Justice (CJEU). There is a body of case law spanning more than 20 years that supports a broad understanding of what constitutes "personal data".

Given that the term "personal data" is coming from Article 8 of the Charter, it is very likely that such a deep change would not survive the scrutiny of the CJEU.

(2) Rights to access, deletion or correction get a "purpose limitation". One massive change (on German demand) is to limit the use of data subject rights (like access to data, rectification or deletion) to "data protection purposes" only. Conversely, this means that if an employee uses an access request in a labor dispute over unpaid hours – for example, to obtain a record of the hours they have worked – the employer could reject it as "abusive". The same would be true for journalists or researchers.

In a broad reading, this could go even further. If a person asks to delete false credit rating data to get a cheaper loan at the bank, such a "right to rectification" of false financial information may not be exercised purely for a "data protection purpose" but out of economic interest. Even cases like the famous "right to be forgotten" may not be seen as serving a "data protection interest" if the person demanding deletion of public data does so in a business interest.

This idea is a clear violation of CJEU case law and Article 8(2) of the Charter. The right to informational self-determination is explicitly meant to level the information gap between users and the companies that hold the information, as more and more information is hidden on company servers (e.g. copies of time sheets). The CJEU has held multiple times that Europeans can exercise these rights for any purpose – including litigation or generating evidence.

The GDPR already has limitations for the "abusive" use of GDPR rights (such as being denied access to data or having to pay a fee), but the Commission now also wants to limit reasons for which these rights can be exercised.

(3) Google, Meta and OpenAI can now train AI with your data. The Commission's draft also foresees changes to Article 6(1) and 9(2) GDPR to allow the processing of personal data for AI. This means that a high-risk technology, fueled by people's most personal thoughts and sensitive data, gets a general "OK" under the GDPR. At the same time, every traditional database or CCTV camera stays strictly regulated. 

The Commission draft highlights the need for "data minimisation" (a principle that already applies under Article 5(1)(c) GDPR anyway) and requires that undefined "safeguards" are put in place. However, there are no technical benchmarks or standards for such "safeguards". The only specific alleged protection is a "right to object". But this idea is bound to fail in most cases:

  1. It would mean that users in the EU would first have to be informed that their data is actually used for AI training, which is largely impossible. Companies like OpenAI have no idea which personal data belongs to whom, let alone what contact details they have. People will therefore never find out that their personal data is used in the first place.
     
  2. If people find out anyway, they would have to constantly object to all of these companies. This means finding hundreds of controllers, filling out forms and repeating the exercise for everyone doing AI training individually.
     
  3. Finally, the objection and the relevant data would have to be "matched", despite users not knowing if their data is used at all – and if so, which data is used. In turn, the controller will mostly be unable to execute an objection when the entire internet is scraped.
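The matching problem in the last step can be made concrete with a toy sketch (all records, names and the function below are hypothetical, not from the draft): scraped training data often describes a person without containing any key that an objection could be matched against.

```python
# Hypothetical scraped corpus: two records describe the same person,
# but only one of them contains her name.
scraped_corpus = [
    "Maria K. won the 2019 local chess tournament.",
    "The tournament winner later moved to Vienna.",   # same person, no name
    "Weather today: sunny, 21 degrees.",
]

def execute_objection(corpus, name):
    """Naive matching: drop records containing the person's name."""
    return [doc for doc in corpus if name not in doc]

remaining = execute_objection(scraped_corpus, "Maria K.")
# The second record still describes the same person but survives the filter.
print(remaining)
```

Even this best-case scenario assumes the person knows her data was scraped and by whom; in practice, neither side can reliably connect an objection to the relevant records.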

Bottom line: Google, Meta, Microsoft or OpenAI can continue to make trillions (!) with data generated by European companies, artists or private individuals, while getting a "free pass" from the European legislator.

(4) Operation of AI systems gets a "GDPR wildcard"? The changes to Articles 6(1) and 9(2) GDPR go even further than expected. Not only the development of AI systems would be privileged, but also the operation of an AI system. The term "operation" is not defined, but will likely cover any type of data processing.

This would lead to a grotesque situation: If personal data is processed via a traditional database, Excel sheet or software, a company has to find a legal basis under Article 6(1) GDPR. However, if the same processing is done via an AI system, it can qualify as a "legitimate interest" under Article 6(1)(f) GDPR. This would privilege one (risky) technology over all other forms of data processing and be contrary to the "tech neutral" approach of the GDPR.

(5) Sensitive Data like Health, Politics or Sex Life only covered if "directly revealed"? Article 9 GDPR specifically protects "sensitive" data regarding people's health, political beliefs, sex life, sexual orientation or trade union membership. So far, the CJEU has held that such information is also protected if it can only be deduced from other information. The Commission now tries to overturn this case law and wants to limit Article 9 protections to cases where such sensitive information is "directly revealed".

However, people who "directly reveal" that they are pregnant, have cancer or are gay usually need this protection less than people about whom such sensitive information can only be "deduced" from other information. A classic example is employers who use "big data" to deduce that a woman is pregnant – and then fire that person quickly to avoid social payments and the like. Right now, this would fall under Article 9 GDPR. In the future, it would be reduced to the protections under Article 6 GDPR – while a person publicly announcing their pregnancy would still fall under Article 9 GDPR.

From the perspective of individuals, such a limitation makes no sense, but the Commission seems to be mainly concerned with companies that want to use such data for AI training. Filtering for "I am pregnant" is easier than filtering the 1000 signals that allow Meta or Google to figure out that a person is pregnant.
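A toy sketch can illustrate the difference between "directly revealed" and "deduced" data. The signals and weights below are entirely made up for illustration; real "big data" systems use thousands of signals and statistical models, but the mechanism is the same: no single data point reveals anything sensitive, yet the combination does.

```python
# Hypothetical purchase signals and weights - purely illustrative.
PREGNANCY_SIGNALS = {
    "unscented lotion": 1.0,
    "prenatal vitamins": 3.0,
    "cotton balls": 0.5,
    "folic acid": 2.0,
}

def pregnancy_score(purchases):
    """Sum the weights of matching signals; a crude stand-in for 'big data' inference."""
    return sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in purchases)

# No single purchase "directly reveals" a pregnancy, but the combination does:
basket = ["unscented lotion", "prenatal vitamins", "folic acid"]
print(pregnancy_score(basket))  # 6.0 under these made-up weights
```

Under the proposed wording, the output of such an inference would lose Article 9 protection precisely because no item in the input is "directly revealing".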

This change would violate Article 6 of the Council of Europe's Convention 108, which uses the current GDPR wording ("revealing") and was ratified by 55 countries.

(6) Pulling personal data from your device? So far, Article 5(3) of the ePrivacy Directive has protected users against remote access to data stored on "terminal equipment", such as computers or smartphones. This is based on the right to protection of communications under Article 7 of the Charter and ensured that companies cannot "remotely search" devices.

However, the Commission proposal now allows - depending on the reading of the draft - up to 10 (!) legal bases to pull information from a personal device, or to place tracking technology (such as "cookies") on your device. The four "whitelisted" processing operations for access to terminal equipment would now include "aggregated statistics" and "security purposes". While the general direction of the changes is understandable, the wording is extremely permissive and would also allow excessive "searches" of user devices for (tiny) security purposes.

In addition, all legal bases under Article 6(1) GDPR would be available. In combination, this could lead to absurd results: AI training would be a "legitimate interest", and companies could then remotely access personal data on your device for such a "legitimate interest". Consequently, it would be a possible reading of the law that companies such as Google can use data from any Android app to train its Gemini AI. Especially "Big Tech" companies would very likely take an even more expansive reading of the draft text. It is questionable whether the authors of this draft law ever thought about these combinations.
