Satya Legal - Lawyers specialising in startups and technology law in Spain

AI Regulation and Sensitive Data: What the GDPR and Spanish LOPD Require From Your Business


Artificial intelligence is no longer science fiction: chatbots handling customers, algorithms screening CVs, models analysing medical records. The problem is that many of these tools process sensitive personal data without businesses being aware of their legal obligations. The EU AI Act, the GDPR and Spain's LOPD-GDD form a regulatory framework that every organisation using AI must understand. Ignoring it can mean fines of up to €35 million or 7% of worldwide annual turnover.

In short

  • The AI Act (EU Regulation 2024/1689) classifies AI systems by risk level and prohibits certain uses (social scoring, mass biometric identification).
  • The GDPR requires a legal basis, transparency and a Data Protection Impact Assessment (DPIA) when AI processes personal data, especially special categories (health, biometrics, racial origin, political opinions).
  • Spain's LOPD-GDD strengthens guarantees: digital rights, DPO obligations and its own penalty regime.
  • Combined fines can reach €35M (AI Act) + €20M (GDPR): a real risk for SMEs and startups.
  • Conducting a Data Protection Impact Assessment (DPIA) before deploying AI is practically mandatory.

The EU AI Act: risk-based classification and prohibitions

The EU AI Regulation (in force since August 2024, with phased application until 2027) is the world's first comprehensive AI legislation. It classifies AI systems into four levels:

  • Unacceptable risk: social scoring, subliminal manipulation, real-time biometric identification in public spaces. Prohibited outright.
  • High risk: recruitment, credit scoring, medical diagnosis, migration systems. Obligations: conformity assessment, EU registration, human oversight, transparency and risk management.
  • Limited risk: chatbots, deepfakes, AI-generated content. Transparency obligations: users must know they are interacting with AI.
  • Minimal risk: spam filters, content recommendations, video games. No specific obligations (a voluntary code of conduct is recommended).

Note: High-risk AI systems that process biometric, health or protected-category data are subject to both the AI Act and the GDPR simultaneously. Obligations stack — they do not replace each other.
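A practical first step is keeping a machine-readable inventory of your AI systems tagged with their AI Act risk tier and whether they touch special-category data. A minimal Python sketch (the system names and tier assignments are illustrative assumptions, not a legal determination):

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str             # internal name of the tool
    purpose: str          # what it is used for
    risk_tier: str        # "minimal", "limited", "high" or "unacceptable"
    special_category: bool # does it process GDPR Art. 9 data?

# Hypothetical inventory, for illustration only.
inventory = [
    AISystem("cv-screener", "recruitment", "high", True),
    AISystem("support-bot", "customer service chatbot", "limited", False),
    AISystem("spam-filter", "email filtering", "minimal", False),
]

# Flag the systems needing the heaviest compliance attention:
# high/unacceptable tier, or any tier combined with special-category
# data (AI Act and GDPR obligations stack, they do not replace each other).
priority = [s for s in inventory
            if s.risk_tier in ("high", "unacceptable") or s.special_category]

for s in priority:
    print(f"{s.name}: tier={s.risk_tier}, special-category={s.special_category}")
```

In this hypothetical inventory only the CV screener is flagged; the risk tier itself must come from an Annex III assessment, not from code.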

Sensitive data and special categories: what the GDPR says

Article 9 of the GDPR prohibits, as a general rule, the processing of special categories of data: racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic data, biometric data processed to uniquely identify a person, health data, and data concerning sex life or sexual orientation. AI multiplies the risk because it can infer these data even when they are never directly provided (for example, deducing health status from purchasing patterns).

Art. 9(2) GDPR exceptions allowing sensitive data processing:

  • Explicit consent of the data subject for specific purposes.
  • Employment and social security obligations.
  • Vital interests of the data subject or another person.
  • Substantial public interest (based on EU or Member State law).
  • Preventive medicine, diagnosis, treatment or healthcare management.
  • Scientific research or statistical purposes with appropriate safeguards.

Spain's LOPD-GDD: reinforcing the GDPR

Spain's Organic Law 3/2018 on Data Protection and Digital Rights Guarantee (LOPD-GDD) adapts the GDPR to the Spanish legal system and adds its own guarantees:

  • Digital rights (Title X): right to digital disconnection, right to privacy in the use of digital devices at work, and the right not to be subject to decisions based solely on automated processing (reinforced Art. 22 GDPR).
  • Data Protection Officer (DPO): mandatory in additional cases beyond GDPR requirements, such as telecoms operators, healthcare centres, insurers or advertising companies.
  • Penalty regime: minor infringements (up to €40,000), serious (up to €300,000) and very serious (up to €20M or 4% of turnover).
  • Processing data of deceased persons: specific regulation not covered by the GDPR.

Data Protection Impact Assessment (DPIA): when is it mandatory?

Article 35 of the GDPR requires a Data Protection Impact Assessment when processing, by its nature, scope or purposes, is likely to result in a high risk to the rights and freedoms of individuals. In practice, almost any use of AI with personal data requires one, especially if two or more of the following factors apply:

Criteria triggering a DPIA:

  • Automated evaluation or scoring (credit profiles, employee performance).
  • Automated decision-making with legal or significant effects.
  • Systematic monitoring (AI-powered CCTV, employee monitoring).
  • Large-scale processing of special category data.
  • Use of new technologies (generative AI, facial recognition, predictive analytics).
  • Matching or combining datasets from different sources.
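The "two or more factors" rule of thumb above can be turned into a simple self-check. A minimal sketch in Python (whether each criterion actually applies is a judgment call for your DPO; the example flags are hypothetical):

```python
# Screening criteria from the list above; the rule of thumb is that a
# DPIA is likely required when two or more of them apply.
DPIA_CRITERIA = {
    "automated evaluation or scoring",
    "automated decisions with legal or significant effects",
    "systematic monitoring",
    "large-scale special-category data",
    "new technologies",
    "matching or combining datasets",
}

def dpia_likely_required(applicable: set[str]) -> bool:
    """Return True when two or more screening criteria apply."""
    unknown = applicable - DPIA_CRITERIA
    if unknown:
        raise ValueError(f"unknown criteria: {unknown}")
    return len(applicable) >= 2

# Hypothetical example: an AI CV-screening tool.
flags = {"automated evaluation or scoring",
         "automated decisions with legal or significant effects"}
print(dpia_likely_required(flags))  # two criteria apply
```

A positive result does not replace the DPIA itself; it only tells you that Article 35 is very likely engaged.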

The DPIA is not a bureaucratic formality: it is the document that demonstrates your company has identified risks, applied mitigation measures and consulted the DPO (or, where applicable, the supervisory authority). Without it, a penalty for irregular processing is considerably more severe.

Practical cases: AI and sensitive data in everyday business

1. AI-powered recruitment

If you use an algorithm to screen CVs, it is a high-risk AI system (AI Act, Annex III). It can infer sensitive data (age, ethnic origin, disability) from a CV. You need: a DPIA, effective human oversight, transparency towards candidates (Art. 22 GDPR) and registration in the EU database for high-risk systems.

2. Customer service chatbot

If your generative-AI chatbot collects personal data (name, email, queries that may reveal health status or financial situation), you need: to inform users they are interacting with AI (AI Act, limited risk), an updated privacy policy, a legal basis (consent or contract performance) and measures to prevent the model from storing or retraining on sensitive data.

3. Predictive analytics in health or insurance

Models that predict disease risk or calculate insurance premiums process health data (special category). They require: explicit consent or a specific legal basis (Art. 9(2) GDPR), a mandatory DPIA, encryption and pseudonymisation, and guarantees for the right to object and to an explanation of the automated decision.

4. Facial recognition at the workplace

Installing a biometric access control system with facial recognition involves processing biometric data (special category) and is a high-risk AI system. In addition to the GDPR and AI Act, Spain's AEPD has issued strict criteria: it is only proportionate if no less invasive alternative exists.

Fines: the cost of non-compliance

Cumulative penalty regime:

  • AI Act: up to €35M or 7% of worldwide turnover for prohibited practices; up to €15M or 3% for high-risk system non-compliance.
  • GDPR: up to €20M or 4% of worldwide turnover for serious infringements (no legal basis, rights violations, missing DPIA).
  • LOPD-GDD: very serious infringements up to €20M or 4% of turnover; serious infringements up to €300,000.

Checklist: AI, Sensitive Data and Regulatory Compliance

  • 1. AI systems inventory — Identify all AI systems your company uses (in-house or third-party) and classify them by AI Act risk level.
  • 2. Personal data mapping — Determine what personal data each system processes, whether it includes special categories and the legal basis.
  • 3. DPIA — Conduct a Data Protection Impact Assessment for each high-risk AI system. Document risks, measures and conclusions.
  • 4. Transparency and information — Update the privacy policy to inform about AI use, purposes, applied logic and consequences of automated processing.
  • 5. Human oversight — Ensure automated decisions with legal or significant effects can be reviewed by a human (Art. 22 GDPR + AI Act).
  • 6. Data subject rights — Implement mechanisms to handle access, rectification, erasure, objection and, especially, the right not to be subject to automated decisions.
  • 7. AI provider contracts — Review contracts with AI tool providers: data processor clauses, data location, sub-processors, security measures.
  • 8. DPO and governance — Assess whether you need a Data Protection Officer and establish an AI governance committee or protocol.
  • 9. Training — Train teams using AI on risks, obligations and data protection best practices.
  • 10. Records and documentation — Keep the record of processing activities up to date with AI systems and maintain compliance evidence (accountability).

AI Act application timeline

  • August 2024: the Regulation enters into force.
  • February 2025: prohibitions on unacceptable-risk AI practices apply.
  • August 2025: obligations for general-purpose AI (GPAI) models apply.
  • August 2026: full application, including obligations for high-risk AI systems.
  • August 2027: obligations for high-risk systems embedded in regulated products (Annex I).

Does your business use AI without knowing if it complies with the GDPR and AI Act?

At Satya Legal we help with impact assessments, adapting AI provider contracts, updating privacy policies and complying with the new European AI Regulation. Avoid fines and protect your business.

Contact Satya Legal
