EU AI Act and autistic AI tools

Brussels is currently refining which AI systems count as high-risk. For health and education AI, the question is decisive. Autistic Mirror deliberately sits outside that category. Here is what that means in practice.

What the EU AI Act regulates

The AI Regulation (Regulation (EU) 2024/1689) entered into force on 1 August 2024 and applies in stages. Bans on certain practices and the AI literacy obligation (Article 4) have applied since 2 February 2025. Obligations for general-purpose AI models have applied since 2 August 2025. The decisive deadline for most providers is 2 August 2026: from that date the full requirements for high-risk AI systems under Annex III take effect.

The regulation defines four risk tiers: prohibited practices, high-risk systems, limited-risk systems and minimal-risk systems. The tier determines the obligations.

Where the line for health and education AI sits

Annex III names the domains in which AI systems automatically count as high-risk. Three of them dominate the current DACH HR debate: education and vocational training, employment and worker management, and access to essential services including health and social benefits. Anyone preparing or making decisions in these areas falls into the high-risk corridor.

Annex I additionally couples the regulation to product-safety law, including medical devices. An AI system that serves as a safety component of a medical device or an in-vitro diagnostic, and that must undergo third-party conformity assessment under that legislation, counts as high-risk regardless of use case.

Article 6(3) provides a narrow exception: a system that performs only a preparatory or purely technical task and creates no significant risk to fundamental rights can fall outside the high-risk category even though it operates in an Annex III area. The exception never applies, however, where the system performs profiling, meaning the automated evaluation of personal aspects of natural persons.

Why Autistic Mirror is not high-risk AI

Autistic Mirror is explicitly not a medical device. This position is stated in the privacy policy section 2.4 and in the terms of use. There is no diagnosis, no therapy, no triage, no recommendation of medical measures. The task is narrowly defined: explain neurological mechanisms of autistic experience and support self-reflection.

No profiling within the meaning of the GDPR takes place. Onboarding inputs are applied to the system-prompt configuration, not used to evaluate persons. There is no automated decision producing legal effects within the meaning of Article 22 GDPR. Annex III therefore does not apply in any of the three critical pillars, and the Article 6(3) exception remains available.
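The distinction between configuration and profiling can be sketched in code. The field names and function below are illustrative assumptions, not the actual implementation: the point is only that onboarding answers are passed through verbatim as prompt context, and no score, category or evaluation of the person is ever derived or stored.

```python
# Illustrative sketch (hypothetical names): onboarding answers become
# system-prompt context only. Nothing is classified, scored or stored
# about the person, which is what keeps this outside profiling.

ONBOARDING_FIELDS = ["role", "topics_of_interest", "communication_preference"]

def build_system_prompt(base_prompt: str, answers: dict) -> str:
    """Append onboarding answers verbatim as context; never evaluate the user."""
    context_lines = [
        f"- {field}: {answers[field]}"
        for field in ONBOARDING_FIELDS
        if field in answers
    ]
    if not context_lines:
        return base_prompt
    return base_prompt + "\n\nUser-provided context:\n" + "\n".join(context_lines)

prompt = build_system_prompt(
    "Explain neurological mechanisms of autistic experience.",
    {"role": "parent", "topics_of_interest": "sensory processing"},
)
```

The answers shape the explanation, but at no point does the system infer anything about the person behind them.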

The app does not provide clinical advice. When users explicitly request a diagnostic or treatment path, an output filter declines the request and points to qualified professionals. That is not self-restriction but the clean line between explanation and recommendation.
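The decline-and-redirect behaviour can be sketched as a simple output filter. The patterns and wording below are hypothetical placeholders, not the production filter: what matters is the shape of the mechanism, a request classified as clinical never receives a model answer, only a referral.

```python
import re

# Illustrative sketch (hypothetical patterns): requests for diagnosis
# or treatment are declined and redirected to qualified professionals.

CLINICAL_PATTERNS = [
    r"\bdiagnos(e|is|tic)\b",
    r"\btherap(y|ies|ist)\b",
    r"\btreatment\b",
]

REDIRECT = (
    "I can explain mechanisms, but I cannot assess or treat. "
    "For diagnosis or therapy, please consult a qualified professional."
)

def filter_output(user_request: str, model_answer: str) -> str:
    """Return the redirect message whenever the request asks for clinical advice."""
    text = user_request.lower()
    if any(re.search(pattern, text) for pattern in CLINICAL_PATTERNS):
        return REDIRECT
    return model_answer
```

So "Can you diagnose my son?" is met with the redirect, while "How does sensory gating work?" passes through to an explanation.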

What high-risk obligations would actually cost

Anyone in the high-risk corridor must build a full conformity regime: documented risk management, data governance with training-data evidence, technical documentation, logging duties, human oversight, accuracy and robustness measurements, conformity assessment. Annex I cases additionally require a notified body.

The cost runs into six figures per audit cycle, excluding personnel and follow-up reviews. For a small team this is only sustainable if the use case genuinely is high-risk. The answer is not to circumvent obligations but to scope the use case cleanly.

What we still meet voluntarily

Even outside the high-risk corridor, GDPR, ePrivacy and national rules apply. Autistic Mirror meets five compliance families in parallel: ISO/IEC 27001 Annex A for information security, OWASP Top 10 for application security, GDPR Articles 5, 9, 22, 25, 32, 35, EN ISO 9241 parts 110 to 210 for software ergonomics, and WCAG 2.1 Level A and AA for accessibility. On top of that sits a five-layer safety architecture: anti-ABA filter, crisis detection, output safety filter, injection detection and buffer-then-send.
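The ordering of those five layers can be sketched as a pipeline. Every function body here is a placeholder, assumed for illustration only; the point is the buffer-then-send pattern: the full answer is buffered and passed through every output filter before anything reaches the user, rather than being streamed unchecked.

```python
# Illustrative sketch (placeholder logic): five safety layers in sequence,
# with buffer-then-send as the final delivery rule.

def injection_detection(msg: str) -> str:
    # placeholder: reject or neutralise prompt-injection attempts in the input
    return msg

def crisis_detection(msg: str) -> str:
    # placeholder: route crisis signals to a fixed safety response
    return msg

def generate(msg: str) -> str:
    # placeholder for the model call
    return "explanation of a mechanism"

def anti_aba_filter(answer: str) -> str:
    # placeholder: block ABA-adjacent strategies in the output
    return answer

def output_safety_filter(answer: str) -> str:
    # placeholder: block diagnostic or treatment recommendations
    return answer

def respond(user_msg: str) -> str:
    """Buffer-then-send: the complete answer passes every filter before delivery."""
    msg = crisis_detection(injection_detection(user_msg))
    buffered = generate(msg)              # buffered in full, not streamed
    for layer in (anti_aba_filter, output_safety_filter):
        buffered = layer(buffered)
    return buffered                       # only now is anything sent
```

The design choice is that no partial output ever leaves the system before the last filter has run.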

The data protection impact assessment under Article 35 GDPR is documented, the record of processing activities under Article 30 likewise, the technical and organisational measures under Article 32 are written out. There is no tracking, no sale of data, no training of models on user content.

How this differs from generic AI

A generic chatbot without specialisation can slide into diagnostic language at any time, give therapy recommendations or suggest ABA-adjacent strategies. That places it in a legally unclear space the moment it is used for health or education questions, and in an ethically critical space the moment it meets autistic people.

Autistic Mirror has a fixed mechanism-first prompt and an output filter that blocks diagnostic recommendations and ABA content. The tool stays inside the permitted area without sacrificing depth for users. Explanation instead of recommendation. Mechanism instead of diagnosis.

A bright spot

Regulation is often experienced as an obstacle. In the case of the EU AI Act it is an offer of clarity: it forces providers to decide whether they want to take diagnostic responsibility or not. Tools that explain mechanisms instead of issuing diagnoses can scale credibly without sliding into the high-risk corridor. That is at the same time the legally clean and the ethically defensible path. For autistic people it means: the tool stays usable, without anyone standing between them and their own neurology.

Autistic Mirror explains autistic neurology individually, tailored to your situation. Whether for yourself, as a parent, or as a professional.

Aaron Wahl

Autistic, founder of Autistic Mirror

How you function has reasons.
They can be explained.

Sign up for free