Automated Decision-Making and Profiling: Data Subject Rights Under GDPR
A guide to GDPR Article 22 rights against automated decision-making and profiling: when automated decisions are prohibited, what businesses must disclose, and how AI changes the landscape.
Last updated: 2026-04-06
When Algorithms Make Decisions About People
Every time a system automatically approves or denies a loan application, screens a CV, calculates an insurance premium, or flags an account for fraud without a human reviewing the outcome, it is making an automated decision about a person. Under the GDPR, individuals have a specific right to push back against these decisions — and businesses have specific obligations to be transparent about them.
Disclaimer: This article is for informational purposes only and does not constitute legal advice. Consult a qualified attorney for guidance specific to your business.
Article 22 of the GDPR creates a general prohibition on decisions based solely on automated processing — including profiling — where those decisions produce legal effects or similarly significant effects on the individual. This is not an opt-out right that individuals need to exercise. It is a default rule: you cannot make these decisions automatically unless an exception applies.
For businesses adopting AI tools, automated screening, or algorithmic decision-making, understanding Article 22 is no longer optional. This guide covers what the right means, when exceptions apply, what you must disclose, and how the rules apply to modern AI systems.
Key Definitions
Before examining the rights, it helps to define the terms precisely. Article 22 uses specific language, and the scope of the right depends on how each term is interpreted.
Solely Automated Processing
A decision is "solely automated" when no human is meaningfully involved in the decision-making process. The key word is "meaningfully." A human who rubber-stamps every automated output without genuine review does not constitute meaningful human involvement. Similarly, a human who theoretically could override the system but never does in practice is not providing meaningful involvement.
The Article 29 Working Party (now the EDPB) has clarified that meaningful human involvement requires:
- The person has authority and competence to change the decision
- They have access to all relevant data, including the data used by the automated system
- They actually review the automated output before a decision is applied
- They are not simply following the automated recommendation as a matter of course
If a human reviews the decision but is effectively bound by the algorithm's output, the decision is still "solely automated" for the purposes of Article 22.
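As an internal sanity check, some teams monitor how often reviewers actually depart from the automated recommendation; a near-zero override rate over a large sample suggests rubber-stamping. Below is a minimal sketch in Python; the record fields and names are hypothetical, not a prescribed method:

```python
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    """One human review of an automated recommendation (hypothetical schema)."""
    automated_outcome: str      # e.g. "approve" or "deny"
    final_outcome: str          # what the reviewer actually decided
    had_access_to_inputs: bool  # reviewer could see the data the system used

def override_rate(records: list[ReviewRecord]) -> float:
    """Share of reviews where the human departed from the automated outcome."""
    if not records:
        return 0.0
    overrides = sum(r.automated_outcome != r.final_outcome for r in records)
    return overrides / len(records)

# A 0% override rate across thousands of cases is a red flag that the
# review may not count as "meaningful human involvement" under Article 22.
```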
Profiling
Article 4(4) defines profiling as "any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person." The GDPR specifically mentions evaluating aspects concerning:
- Work performance
- Economic situation
- Health
- Personal preferences
- Interests
- Reliability
- Behavior
- Location
- Movements
Profiling is a broad concept. Sorting customers into risk categories, scoring job applicants, predicting purchasing behavior, and assessing creditworthiness all involve profiling. Not all profiling falls under Article 22 — only profiling that results in solely automated decisions with legal or similarly significant effects.
Legal Effects
A decision has "legal effects" when it affects someone's legal rights or legal status. Examples include:
- Denial of a credit application
- Cancellation of an insurance policy
- Refusal of a social security benefit
- Denial of entry to a country
- Automated termination of a contract
Similarly Significant Effects
This is the broader and more contested category. The EDPB has indicated that "similarly significant" effects include decisions that:
- Significantly affect someone's financial circumstances
- Impact their access to essential services (health, housing, education)
- Significantly affect their employment or employment prospects
- Risk excluding or discriminating against individuals
Examples include:
- Automated rejection of a job application
- Dynamic pricing that results in significantly higher prices for certain individuals
- Automated denial of access to a service (such as an online platform or financial product)
- Credit scoring that affects interest rates or product availability
Decisions with trivial effects — such as a content recommendation algorithm suggesting what to watch next — generally do not reach the "similarly significant" threshold, although this remains an area of evolving interpretation.
The Default Rule: Prohibition
Article 22(1) establishes the default position: "The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her."
This is not an opt-out mechanism that individuals must actively invoke. It is a prohibition. If you are making solely automated decisions with legal or similarly significant effects, you are already in breach unless one of the three exceptions applies.
Three Exceptions Where Automated Decisions Are Permitted
Article 22(2) provides three circumstances in which solely automated decisions with legal or similarly significant effects are allowed:
1. Necessary for Entering Into or Performing a Contract
You can make automated decisions when they are necessary for concluding or performing a contract with the individual. The key word is "necessary" — not merely convenient or efficient.
Example: An online lender that processes thousands of loan applications per day may legitimately need automated credit scoring to function. Manual review of every application might not be commercially viable.
However, this exception does not apply simply because automation is faster or cheaper. You must be able to demonstrate that the automated decision-making is genuinely necessary for the contract to be performed.
2. Authorized by EU or Member State Law
Automated decision-making may be authorized by law, provided the law contains suitable measures to safeguard the individual's rights, freedoms, and legitimate interests. This exception is most relevant to public bodies and regulated sectors where legislation specifically permits or requires automated processing.
Examples include automated tax assessments, fraud detection systems mandated by financial regulations, and automated eligibility determinations for government benefits.
3. Based on Explicit Consent
You can make automated decisions when the individual has given explicit consent. Under GDPR, explicit consent is a higher standard than regular consent. It requires:
- A clear, affirmative statement (not just ticking a box buried in terms and conditions)
- Specific information about the automated decision-making that will occur
- Freedom to refuse consent without detriment
- Withdrawal that is as easy as giving consent
Relying on explicit consent for automated decision-making requires genuinely informed agreement. Pre-ticked boxes, bundled consent, or vague references to "automated processing" in a privacy policy do not meet this standard.
Safeguards That Always Apply
Even when one of the three exceptions permits automated decision-making, Article 22(3) requires you to implement suitable safeguards. At minimum, these must include the right to:
- Obtain human intervention — the individual can request that a human reviews the decision
- Express their point of view — the individual can provide information or arguments relevant to the decision
- Contest the decision — the individual can challenge the automated outcome
These are not optional features. They are mandatory safeguards whenever you rely on any of the three exceptions. If you cannot provide human review on request, you cannot rely on the exception.
For decisions involving special category data under Article 9 (such as health data, racial or ethnic origin, or political opinions), Article 22(4) permits automated decisions only on the basis of explicit consent or substantial public interest under Union or Member State law, and suitable safeguards must be in place.
Transparency Obligations: What You Must Disclose
Articles 13 and 14 of the GDPR require controllers to provide specific information when automated decision-making (including profiling) within the scope of Article 22 is used, and Article 15(1)(h) entitles individuals to the same information on request. When you collect personal data, you must tell individuals about:
The Existence of Automated Decision-Making
You must clearly state that automated decision-making is being used. This should be prominent, not buried in a lengthy privacy policy. If a decision will be made by an algorithm rather than a person, the individual needs to know that upfront.
Meaningful Information About the Logic Involved
This is one of the most debated disclosure requirements in the GDPR. You do not need to reveal your proprietary algorithms or source code. But you must provide enough information for the individual to understand the general logic of how the decision works.
The EDPB guidance suggests disclosing:
- The categories of data used in the decision
- Why those categories are relevant
- How the data influences the outcome (e.g., "higher income increases likelihood of approval")
- The general logic of the algorithm (e.g., "we use a scoring model that weighs your payment history, income level, and existing debts")
The test is whether a reasonable person could understand, in general terms, how the system works and why it might reach a particular decision about them.
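To make this concrete, here is a minimal sketch of how such a disclosure could be generated from a simple weighted scoring model. The model, weights, and factor names are invented for illustration; real systems are rarely this simple:

```python
# Hypothetical linear scoring model: factor -> weight.
# Positive weights push toward approval, negative toward denial.
WEIGHTS = {
    "payment_history_score": 0.5,
    "income_level": 0.3,
    "existing_debt_ratio": -0.2,
}

def describe_logic(weights: dict[str, float]) -> str:
    """Render a plain-language summary of how each factor influences the outcome."""
    lines = ["We use a scoring model that weighs the following factors:"]
    # List factors from most to least influential.
    for factor, weight in sorted(weights.items(), key=lambda kv: -abs(kv[1])):
        direction = "increases" if weight > 0 else "decreases"
        lines.append(f"- A higher {factor.replace('_', ' ')} {direction} "
                     "the likelihood of approval.")
    return "\n".join(lines)

print(describe_logic(WEIGHTS))
```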
Significance and Envisaged Consequences
You must explain what the automated decision means for the individual in practice. What are the potential outcomes? What are the consequences of an unfavorable decision? If the system could deny them a loan, increase their insurance premium, or reject their job application, they need to understand that before the decision is made.
Profiling That Does NOT Fall Under Article 22
Not all profiling triggers Article 22 protections. The article applies only when all of the following conditions are met:
- A decision is made about an individual based on automated processing, which may include profiling
- The decision is solely automated (no meaningful human involvement)
- The decision produces legal or similarly significant effects
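Expressed as a minimal boolean check (the hard part in practice is, of course, the legal judgment behind each input):

```python
def article_22_applies(solely_automated: bool,
                       legal_effects: bool,
                       similarly_significant_effects: bool) -> bool:
    """Article 22 is triggered only by solely automated decisions
    that produce legal or similarly significant effects."""
    return solely_automated and (legal_effects or similarly_significant_effects)

# Marketing segmentation: automated, but no significant effects -> out of scope
assert not article_22_applies(True, False, False)
# Automated loan denial: solely automated with legal effects -> in scope
assert article_22_applies(True, True, False)
# Human-reviewed credit decision: not solely automated -> out of scope
assert not article_22_applies(False, True, False)
```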
Common types of profiling that typically fall outside Article 22:
| Profiling Activity | Why It May Not Trigger Article 22 |
|---|---|
| Marketing segmentation | Does not produce legal or similarly significant effects |
| Content recommendations | Typically not "similarly significant" |
| Website personalization | Does not usually affect legal rights |
| Profiling that assists human decisions | Not "solely automated" if a human genuinely reviews |
| Analytics and statistical profiling | May not produce individual decisions at all |
However, even profiling that falls outside Article 22 still has to comply with the general GDPR principles — including lawfulness, fairness, transparency, and the right to object under Article 21. And profiling for direct marketing purposes is subject to the absolute right to object regardless of whether Article 22 applies.
CCPA and CPRA: Automated Decision-Making in California
The California Privacy Rights Act (CPRA) amended the CCPA to include a right to opt out of automated decision-making technology. Section 1798.185(a)(16) directed the California Privacy Protection Agency (CPPA) to issue regulations governing:
- Access to information about automated decision-making
- The right to opt out of automated decision-making technology
- Requirements for businesses that use such technology
After an extended rulemaking process that began in 2023, the CPPA finalized its ADMT regulations in July 2025. The regulations were approved by the Office of Administrative Law on September 22, 2025, and took effect on January 1, 2026. Key elements of the final rules include:
- Definition of ADMT: Technology that processes personal information and uses computation to generate a decision, prediction, recommendation, or output that replaces or substantially facilitates human decision-making
- Pre-use notice requirement: Businesses must inform consumers before using ADMT for decisions that produce legal or similarly significant effects
- Opt-out right: Consumers can opt out of ADMT for significant decisions (employment, finance, housing, insurance, education, healthcare, essential services)
- Access to logic: Consumers can request information about the logic used in automated decisions
Businesses that use ADMT to make significant decisions must comply with the ADMT requirements beginning January 1, 2027. Related obligations — including risk assessments and cybersecurity audits — follow a phased timeline extending through 2030. The general direction follows the GDPR approach — transparency, opt-out rights, and human review — but the specifics differ in scope and implementation.
AI and Large Language Models: An Evolving Landscape
The rapid adoption of AI and large language models (LLMs) has created new questions about how Article 22 applies to modern technology. While the GDPR was drafted before the current wave of generative AI, its principles apply to any automated processing of personal data.
When AI Decisions Fall Under Article 22
An AI system that makes decisions about individuals based on their personal data — such as an AI-powered hiring tool that screens CVs, a chatbot that determines insurance eligibility, or an LLM-based system that assesses creditworthiness — falls squarely within Article 22 if those decisions are solely automated and produce legal or similarly significant effects.
The key questions are:
- Is personal data being processed? If the AI uses information about identified or identifiable individuals, yes.
- Is the decision solely automated? If no human meaningfully reviews the AI output before it affects the individual, yes.
- Are there legal or similarly significant effects? If the AI output determines whether someone gets a loan, a job, insurance, or access to services, yes.
The EU AI Act Connection
The EU AI Act, which entered into force in August 2024 and applies in phases through 2027, adds a regulatory layer on top of the GDPR for certain AI systems. High-risk AI systems — including those used in employment, credit scoring, essential services, and law enforcement — face additional requirements for risk management, data quality, transparency, human oversight, and accuracy.
The AI Act and GDPR Article 22 are complementary. Compliance with one does not guarantee compliance with the other. A business using AI for automated decision-making needs to assess obligations under both frameworks.
Practical Implications for AI Adoption
If your business is deploying or considering AI tools that process personal data and influence decisions about individuals, consider:
- Map your AI decision points. Identify every place where an AI system contributes to a decision about an individual.
- Assess the level of human involvement. Is a human genuinely reviewing AI outputs, or just passing them through?
- Evaluate the effects. Do the decisions produce legal or similarly significant effects?
- Ensure transparency. Can you explain, in general terms, how the AI system reaches its conclusions?
- Provide opt-out and review mechanisms. Can individuals request human review of AI-assisted decisions?
Data Protection Impact Assessment Requirements
GDPR Article 35 requires a Data Protection Impact Assessment (DPIA) for processing that is likely to result in a high risk to the rights and freedoms of individuals. The EDPB and national supervisory authorities have consistently identified automated decision-making and systematic profiling as activities that require a DPIA.
You should conduct a DPIA before implementing automated decision-making if:
- The processing involves systematic and extensive evaluation of personal aspects (profiling) leading to decisions that produce legal or significant effects
- You are processing special category data on a large scale
- You are systematically monitoring publicly accessible areas
- The processing appears on your supervisory authority's list of processing activities requiring a DPIA
A DPIA for automated decision-making should document:
- Description of the processing: What data is used, how the algorithm works, what decisions are made
- Purpose and necessity: Why automated processing is needed and whether less intrusive alternatives exist
- Risk assessment: Risks to individuals' rights and freedoms, including discrimination, unfairness, and lack of transparency
- Mitigation measures: How risks are addressed — human review mechanisms, accuracy testing, bias auditing, transparency measures
- DPO consultation: The Data Protection Officer's opinion on the assessment
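Some teams keep the DPIA as a structured record versioned alongside the system it covers. A minimal sketch of such a record follows; the field names and example values are hypothetical, not a regulatory template:

```python
from dataclasses import dataclass, field

@dataclass
class DPIARecord:
    """Skeleton of a machine-readable DPIA for one automated decision system."""
    system_name: str
    processing_description: str                           # data used, how decisions are made
    purpose_and_necessity: str                            # why automation is needed
    risks: list[str] = field(default_factory=list)        # e.g. discrimination, opacity
    mitigations: list[str] = field(default_factory=list)  # e.g. human review, bias audits
    dpo_opinion: str = ""

loan_scoring_dpia = DPIARecord(
    system_name="loan-scoring-v2",
    processing_description="Weighted score over income, debts, and payment history.",
    purpose_and_necessity="High application volume; manual review of every case not viable.",
    risks=["indirect discrimination via proxy variables", "opaque denial reasons"],
    mitigations=["human review on request", "quarterly bias audit",
                 "plain-language logic notice"],
    dpo_opinion="Approved, subject to quarterly re-assessment.",
)
```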
Common Compliance Steps for Businesses
If your business uses any form of automated decision-making or profiling, here is a practical compliance checklist:
1. Audit Your Automated Decisions
Identify every process where a decision about an individual is made without meaningful human involvement. This includes:
- Credit checks and scoring
- Automated application processing (loans, insurance, memberships)
- CV screening and applicant tracking systems
- Fraud detection systems
- Automated content moderation affecting access to services
- Algorithmic pricing that varies by individual characteristics
2. Classify Each Decision
For each automated decision, determine:
- Is it solely automated? (No meaningful human review)
- Does it produce legal or similarly significant effects?
- If both, which exception applies (contract necessity, law, explicit consent)?
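The output of steps 1 and 2 is often easiest to manage as a simple inventory. A minimal sketch, with hypothetical entries and names:

```python
from dataclasses import dataclass
from enum import Enum

class LawfulBasis22(Enum):
    CONTRACT_NECESSITY = "necessary for a contract"
    AUTHORIZED_BY_LAW = "authorized by EU or Member State law"
    EXPLICIT_CONSENT = "explicit consent"
    NOT_IN_SCOPE = "Article 22 not triggered"

@dataclass
class DecisionEntry:
    name: str
    solely_automated: bool
    significant_effects: bool  # legal or similarly significant
    basis: LawfulBasis22

    def in_scope(self) -> bool:
        return self.solely_automated and self.significant_effects

inventory = [
    DecisionEntry("CV screening", True, True, LawfulBasis22.EXPLICIT_CONSENT),
    DecisionEntry("Product recommendations", True, False, LawfulBasis22.NOT_IN_SCOPE),
    DecisionEntry("Fraud transaction blocking", True, True, LawfulBasis22.AUTHORIZED_BY_LAW),
]
for entry in inventory:
    print(f"{entry.name}: Article 22 applies = {entry.in_scope()} ({entry.basis.value})")
```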
3. Implement the Required Safeguards
For each automated decision that falls under Article 22:
- Establish a process for individuals to request human review
- Create a mechanism for individuals to express their point of view
- Allow individuals to contest the decision
- Train the humans who conduct reviews to genuinely evaluate the case, not just confirm the algorithm
4. Update Your Privacy Notices
Ensure your privacy policy and collection notices include:
- Clear disclosure of automated decision-making
- Description of the logic involved (in plain language)
- Explanation of significance and potential consequences
- Information about the right to human intervention
5. Conduct a DPIA
Complete a Data Protection Impact Assessment for all automated decision-making with legal or similarly significant effects. Review and update it when the processing changes or when new risks emerge.
6. Test for Bias and Accuracy
Regularly assess your automated systems for:
- Accuracy: Are the decisions correct? What is the error rate?
- Bias: Do the decisions disproportionately affect people based on protected characteristics?
- Fairness: Are the criteria used in the decision-making process relevant and proportionate?
Document your testing and any corrective actions taken.
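As one illustration, a basic fairness check compares approval rates across groups; a ratio well below 1.0 (a common rule of thumb flags anything under 0.8) warrants investigation. A minimal sketch with invented data follows; real audits also need statistical testing and analysis of proxy variables:

```python
from collections import Counter

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group_label, approved) pairs -> approval rate per group."""
    totals, approved = Counter(), Counter()
    for group, was_approved in decisions:
        totals[group] += 1
        approved[group] += was_approved
    return {group: approved[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest group approval rate divided by the highest."""
    return min(rates.values()) / max(rates.values())

rates = selection_rates([
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
])
print(rates)                          # approx {'group_a': 0.67, 'group_b': 0.33}
print(disparate_impact_ratio(rates))  # 0.5 -> investigate
```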
Practical Examples
Credit Scoring
A fintech company uses an automated credit scoring model to approve or deny loan applications. The system analyzes income, employment history, existing debts, and payment history to generate a score. Applications above a threshold are approved automatically; those below are denied.
Article 22 analysis: This is solely automated processing that produces legal effects (denial of credit). The company must rely on one of the three exceptions (likely contract necessity or explicit consent), provide the right to human review, disclose the logic of the scoring model, and conduct a DPIA.
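One common design response, sketched below with invented numbers and a placeholder formula, is to auto-approve above the threshold but route would-be denials to a human reviewer, so that the unfavorable path is not solely automated:

```python
APPROVE_THRESHOLD = 700  # hypothetical cut-off

def score(income: float, debts: float, payment_history: float) -> float:
    # Placeholder formula for illustration; a real model is trained and validated.
    return 500 + 0.002 * income - 0.004 * debts + 100 * payment_history

def decide(income: float, debts: float, payment_history: float) -> str:
    if score(income, debts, payment_history) >= APPROVE_THRESHOLD:
        return "approved"
    # Routing would-be denials to a human keeps the adverse outcome from being
    # solely automated, provided the review is genuine (see Article 22(3)).
    return "referred for human review"

print(decide(income=60_000, debts=15_000, payment_history=0.9))
# -> "referred for human review" (score 650 < 700)
```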
Automated CV Screening
A recruitment platform uses AI to screen incoming CVs and rank candidates. The top-ranked candidates are invited to interview; the rest are automatically rejected.
Article 22 analysis: Automated rejection of job candidates produces similarly significant effects (impact on employment prospects). The platform must ensure candidates can request human review, understand why they were ranked as they were, and contest the decision. A DPIA is essential, particularly to assess bias risks.
Insurance Pricing
An insurance company uses automated algorithms to calculate premiums based on individual risk profiles. Some applicants receive significantly higher premiums; others are declined coverage entirely.
Article 22 analysis: Automated coverage denial produces legal effects. Significant premium variations may produce similarly significant effects. Disclosure of the factors used in pricing, human review options, and DPIA requirements all apply.
Fraud Detection
A payment processor uses automated fraud detection that blocks transactions flagged as suspicious. Blocked transactions prevent the customer from completing a purchase.
Article 22 analysis: If blocking is solely automated and prevents access to a financial service, it may produce similarly significant effects. However, fraud prevention is an area where automated processing is often authorized or required by law (such as financial regulations), which can bring it within the Article 22(2)(b) exception. The safeguards still apply: customers should be able to challenge a block and request human review.
References
- GDPR Article 22: Automated individual decision-making, including profiling.
- GDPR Article 4(4): Definition of profiling.
- GDPR Article 35: Data protection impact assessment.
- Article 29 Working Party, Guidelines on Automated individual decision-making and Profiling (WP251rev.01), endorsed by the EDPB: Detailed guidance on Article 22 interpretation.
- ICO: Guidance on automated decision-making and profiling.
Last reviewed: April 2026. Privacy laws and AI regulation are evolving rapidly. Verify all statutory references against the current text of the law and consult qualified legal counsel before making compliance decisions for your business.