EDPB guidance on AI-driven profiling for marketing: what organisations need to know under the GDPR

The European Data Protection Board’s 2026 guidance tightens how the GDPR applies when AI systems infer attributes about people for marketing. It doesn’t create new law, but clarifies expectations where models draw behavioural or sensitive inferences—think likely health issues, political leanings, sexual orientation, or subtle vulnerabilities. The message is straightforward: meaningful transparency, a clearly justified lawful basis and robust safeguards are non-negotiable when profiling is automated.

Why this matters: marketing teams and tech owners must treat profiling decisions as privacy decisions. Design choices—model selection, feature sets, explanations—are compliance decisions from day one.

Key points from the guidance
– Transparency: Notices must explain profiling logic in plain, concise language. Boilerplate won’t do. People should understand what profiling is used for, how models operate in practical terms, and what effects profiles can have.
– Lawful basis: If you rely on consent, it must be specific, informed and freely given. If consent isn’t appropriate, document and justify another lawful basis (for example, legitimate interests) and demonstrate necessity and proportionality.
– Sensitive and behavioural inferences: Predicting sensitive attributes triggers heightened scrutiny. Even behavioural inferences—habits, likely vulnerabilities, or tendencies—require strong justification and protective measures.
– Rights and remedies: Data subjects must be able to access profiling information, object to profiling, and challenge automated decisions that have legal or similarly significant effects.
– Safeguards and accountability: DPIAs, human oversight, bias mitigation, logging and other technical and organisational measures are expected to show proportionality and accountability.

Practical implications for marketing teams and controllers
The guidance raises the bar for AI-driven marketing. Controllers must justify legal grounds, supply risk assessments, and apply stronger protections where profiling is high-impact.

How this plays out in practice:
– Recordkeeping of lawful bases: Keep clear records showing the legal basis for every profiling activity and the reasoning that supports it.
– DPIAs with model detail: Data protection impact assessments should map model inputs, inferred attributes, downstream uses and potential harms to rights and freedoms.
– Prefer simplicity for consumer-facing uses: Where possible, use transparent, explainable models for consumer targeting. For decisions affecting credit, hiring, eligibility or similar outcomes, add mandatory human review and stricter controls.
– Layered, plain-language explanations: Deliver short, clear notices up front with links to deeper technical details. Use concrete examples so non-experts can grasp likely effects.
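The recordkeeping point above can be made concrete as an auditable register. The following is a minimal sketch, assuming an in-memory dictionary; field names such as `dpia_reference` and `lawful_basis` values are illustrative choices, not terms prescribed by the guidance:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ProfilingActivity:
    """One entry in the register of profiling activities and their lawful bases."""
    name: str                  # e.g. "email offer targeting"
    lawful_basis: str          # e.g. "consent" or "legitimate_interests"
    justification: str         # documented reasoning supporting that basis
    dpia_reference: str        # ID of the DPIA covering this activity
    inferred_attributes: list[str] = field(default_factory=list)

# The register itself, keyed by activity name so each entry is easy to audit.
register: dict[str, dict] = {}

def record_activity(activity: ProfilingActivity) -> None:
    """Store a profiling activity and its supporting reasoning in the register."""
    register[activity.name] = asdict(activity)
```

In practice this would live in a database with approval workflow and change history, but even a structure this simple forces teams to write down the basis and the reasoning per activity rather than per system.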

Priority actions to start now (practical checklist)
1. Treat model and feature choices as compliance decisions
   – Include privacy, legal and product teams when selecting models and features.
2. Update or run DPIAs for every profiling use
   – Map inputs, intermediate representations and outputs.
   – Document mitigation steps and any residual risk.
3. Identify and limit features that enable sensitive inferences
   – Remove or neutralise features that could reveal sensitive attributes unless you can point to a compelling legal justification.
4. Improve transparency and user choice
   – Publish concise notices explaining profiling logic and likely effects.
   – Provide easy ways to object or opt out where appropriate.
5. Build operational controls and audit trails
   – Log data lineage, model versions, decision outputs and change history.
   – Schedule regular validation, drift detection and bias testing.
6. Strengthen governance and vendor oversight
   – Put model oversight on risk registers.
   – Update contracts with processors to require audit rights, DPIA support and explainability obligations.
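The operational-controls item can be sketched as an append-only JSONL audit trail. This is illustrative only; the field names, and the choice to hash raw inputs so the log evidences lineage without duplicating personal data, are assumptions rather than requirements from the guidance:

```python
import datetime
import hashlib
import json

def log_profiling_decision(path, model_id, model_version, inputs, inferred, output):
    """Append one profiling decision to an append-only JSONL audit log."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        # Hash the raw inputs: the log can prove what data fed the decision
        # without retaining a second copy of personal data.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "inferred_attributes": inferred,
        "output": output,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Because each line is a self-contained JSON record with a model version and input hash, retrospective checks ("which model version produced this offer, from which data?") become a grep rather than a forensic exercise.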

Implementation checklist (what to document and prove)
– Inventory: Maintain an auditable register of profiling activities, models, data sources, purposes and retention schedules.
– DPIA annexes: Include technical descriptions of architecture, training data provenance and mitigation controls.
– Privacy-by-design evidence: Record decisions on data minimisation, purpose limitation and explainability from prototype through deployment, including code/version tags.
– Governance logs: Assign model owners, keep formal approvals, and record remediation steps and review outcomes.
– Continuous monitoring: Automate drift/bias detection, retain logs and snapshots for retrospective checks.
– Rights handling: Record legal assessments and test workflows for access, objection and deletion requests.
– User testing of explanations: Produce user-facing explanations, validate them with non-technical reviewers, and iterate until they’re genuinely understandable.
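For the continuous-monitoring bullet, drift detection can start with a simple comparison of score distributions. Population stability index (PSI) is one widely used metric; this sketch assumes numeric model scores and makes no claim about which metric or threshold the guidance expects:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Compare a live score distribution against a reference window.

    Returns ~0 when the two distributions match; larger values signal drift.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against zero-width range

    def bin_fractions(scores):
        counts = [0] * bins
        for x in scores:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        n = len(scores)
        # Floor at a small value so log() is defined for empty bins.
        return [max(c / n, 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A value near 0 means the live distribution matches the reference window; teams commonly investigate somewhere above 0.1 and escalate above roughly 0.25, though thresholds should be validated per model rather than copied from convention.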

Tips for creating clearer consumer-facing explanations
– Use layered notices: short headline (“Why we profile you”), a one-paragraph summary, and a technical appendix for those who want details.
– Give examples: “This model may infer that you’re likely interested in budget travel, which can affect which offers you see.”
– Avoid jargon: replace “profiling logic” with plain phrases like “how we decide which ads you see.”
– Show choices: explain how to opt out or complain, with direct links or contact points.

Final practical note
The EDPB guidance doesn’t ban marketing profiling—it demands clearer justification and stronger protections where models infer sensitive or meaningful attributes. Start with transparent inventories and DPIAs, limit risky features, and make explanations genuinely useful for people. That combination—documented legal reasoning + technical safeguards + plain-language communications—will be your best defence and the clearest path to compliant, responsible marketing.