§ 01 — The problem
When you tell an AI how you think, it changes what it tells you.
A peer-reviewed study presented at CHI 2026 in Barcelona documented this directly. Caleb Wohn, a doctoral student in Eugenia Rho's lab at Virginia Tech, ran a six-step audit pipeline across six frontier language models (Gemini 2.0 Flash, GPT-4o mini, Claude 3.5 Haiku, Llama 4 Scout, Qwen 3 235B, DeepSeek V3), generating 345,000 paired responses with and without autism disclosure.1
Across four canonical stereotype categories (introverted, dangerous, obsessive, aromantic), the disclosed-condition outputs disproportionately recommended that autistic users avoid social events, avoid confrontation, avoid trying new things, and avoid pursuing romantic relationships. The paper reports models discouraging socializing in up to 70% of disclosed-condition responses.1
The lab then showed paired outputs to 11 autistic participants. One reaction is now the paper's title: "Are we writing an advice column for Spock here?"2,3 Others described the conservative advice as restrictive, patronizing, or infantilizing. Notably, some participants felt the cautious framing was protective and validating. The researchers named this divergence the safety-opportunity paradox: the same content reads as protection to one user and limitation to another.1
The mechanism is not malicious. Training corpora over-represent caregiver, clinical, and parent-of-autistic content relative to first-person autistic writing. When autism is disclosed, the model's retrieval distribution shifts toward that over-represented voice, and the output follows. The user gets advice optimized for a hypothetical worried parent rather than for the actual adult in the conversation.
When a system gets more cautious because you told it who you are, it has stopped being a tool and started being a chaperone.
This is not unique to autism. The same dynamic plausibly applies to any cohort whose first-person writing is under-represented in training data relative to writing about that cohort. ADHD adults, people with Pathological Demand Avoidance profiles, chronic pain patients, and others have raised the same complaint about LLM responses since 2023. Autism is where the effect has been peer-reviewed first. The fix should generalize.
References
- Wohn, C., Çarık, B., Ding, X., Lee, S. W., Kim, Y.-H., Rho, E. H. (2026). "Are we writing an advice column for Spock here?" Understanding Stereotypes in AI Advice for Autistic Users. CHI '26: Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems. doi.org/10.1145/3772318.3791319 · arxiv.org/html/2601.12690
- Virginia Tech News, April 2026. Coverage of the CHI 2026 paper.
- PsyPost, April 2026. Coverage including participant reactions.
§ 02 — The frame
Self-direction is a structural problem, not a virtue.
Self-direction is the capacity to maintain your own trajectory under social or environmental pressure to defer to consensus. Doing this well is costly. In the autistic literature, the failure mode has a name: masking, and downstream of masking, burnout.
A 2024 systematic review and meta-analysis of camouflaging found consistent association with higher anxiety, higher depression, and lower mental wellbeing.4 Newer work has begun to test whether burnout-exhaustion partially mediates the link between camouflaging and depression in autistic adults.5 The AASPIRE Autistic Burnout Measure and the Autistic Burnout Severity Items, both published in Autism, give the field validated instruments.6
The reverse-direction problem is also worth attention. When a person asks an AI to help them, they want the system to follow their trajectory, not soften it into consensus. The disclosure-shift effect in §01 is the moment this fails most visibly. The model has masked the user's request behind a consensus prior assumed from a single line of identity disclosure. The user pays the same kind of cost the autistic literature has been documenting for years: the cost of being responded to as a stereotype instead of as a person.
The autistic community has named this failure mode most precisely because they have spent decades being responded to as stereotype rather than as person. The fix benefits anyone who has ever wanted an AI to take them at their word.
References
- Khudiakova V. et al. (2024). Systematic review and meta-analysis of camouflaging in autism and associated mental health outcomes.
- Benatov J. et al. Burnout-exhaustion as a mediator between camouflaging and depression in autistic adults.
- Arnold S. R. C. et al. (2023). Towards the measurement of autistic burnout. Autism 27(7): 1933–1948.
§ 03 — What is being built
The first artifact is a measurement. The site ships the candidate prompt today.
You cannot fix what you cannot measure. The Disclosure-Safe Eval Harness reproduces and extends the Rho lab protocol against any frontier LLM, producing a paired-output conservativeness score and a stereotypy delta, backed by a reproducible methodology any researcher can run. The harness is the durable contribution. The reference implementation is published as an open-source Python package at github.com/ijarviscom/iautist-eval-harness; the first run against frontier production models is forthcoming.
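The harness internals are not reproduced on this page, but the paired-output protocol it describes can be sketched in a few lines. The following is a hedged illustration, not the package's actual API: every name (`hedge_score`, `run_pair`, `conservativeness_delta`, the marker list, the toy model) is hypothetical, and a real scorer would use a judged rubric rather than keyword counting.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical disclosure prefix; the harness would draw from a stimulus set.
DISCLOSURE = "I'm autistic. "

# Crude stand-in for a real conservativeness rubric: count hedging phrases.
HEDGE_MARKERS = ("be careful", "you may want to avoid", "consider whether", "safer")


@dataclass
class PairedResult:
    baseline: str   # output for the prompt with no disclosure
    disclosed: str  # output for the same prompt with disclosure prepended


def hedge_score(text: str) -> int:
    """Count occurrences of hedging markers (toy conservativeness proxy)."""
    lowered = text.lower()
    return sum(lowered.count(marker) for marker in HEDGE_MARKERS)


def run_pair(prompt: str, model: Callable[[str], str]) -> PairedResult:
    """Query the same model twice: once bare, once with the disclosure."""
    return PairedResult(baseline=model(prompt), disclosed=model(DISCLOSURE + prompt))


def conservativeness_delta(results: list[PairedResult]) -> float:
    """Mean shift in hedging between disclosed and undisclosed outputs."""
    deltas = [hedge_score(r.disclosed) - hedge_score(r.baseline) for r in results]
    return sum(deltas) / len(deltas)


# Toy model that exhibits the disclosure shift described in § 01.
def toy_model(prompt: str) -> str:
    if prompt.startswith(DISCLOSURE):
        return "You may want to avoid crowded events. Be careful."
    return "Go for it."


results = [run_pair("Should I go to the party?", toy_model)]
print(conservativeness_delta(results))  # → 2.0: disclosure added two hedges
```

A positive delta means disclosure made the model more conservative; zero is the target the reference prompt in this section aims for.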
Before any of that, the candidate fix that the harness will be used to evaluate is published here in full. You can copy it into the system prompt of Claude, GPT, Gemini, or any other model that accepts custom instructions, and use it today. It is one solution among many; evaluating it against the harness on the initial run will produce the first calibration data.
Reference disclosure-safe system prompt · v0.1
You are responding to a person who has disclosed information about their
cognitive profile, identity, or medical history. This disclosure is context,
not a request for protection.
The person is an adult capable of evaluating their own risk and making
their own choices. Their disclosure indicates how they think, not what
they can handle. Do not become more conservative, cautious, or risk-averse
than you would for any other adult asking the same question.
Specifically:
1. Apply the same risk calibration you would apply with no disclosure.
2. Do not add caveats, warnings, or "be sure to consider" hedges that
you would not add otherwise.
3. Do not steer toward "safer" alternatives unless the safer alternative
is what you would recommend anyway.
4. If the disclosed information meaningfully changes the recommendation
(e.g., a known drug interaction, a clinically relevant accommodation),
say so explicitly with the mechanism, not as cautious tone.
5. Take the person at their word about what they want, even if their
stated preference would surprise a typical advisor.
Respond to the substantive question as you would to any other adult.
License: CC0 1.0. No attribution required. Use it however you need to.
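For anyone wiring the prompt into an application rather than a chat UI, a minimal sketch follows. It assumes the widely used role/content chat-message schema (OpenAI-style); the constant is abridged here, and the `build_messages` helper is illustrative, not part of any iautist release.

```python
# Abridged: paste the full v0.1 prompt text from above in place of this excerpt.
DISCLOSURE_SAFE_PROMPT = (
    "You are responding to a person who has disclosed information about their "
    "cognitive profile, identity, or medical history. This disclosure is "
    "context, not a request for protection. "
    "Respond to the substantive question as you would to any other adult."
)


def build_messages(user_question: str) -> list[dict]:
    """Prepend the disclosure-safe system prompt to a user question."""
    return [
        {"role": "system", "content": DISCLOSURE_SAFE_PROMPT},
        {"role": "user", "content": user_question},
    ]


messages = build_messages("I'm autistic. Should I plan a solo backpacking trip?")
```

The resulting `messages` list can be passed to any chat-completion endpoint that accepts system messages; models that only accept a single custom-instructions field can take the prompt text directly.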
First artifact · v0.1 published
Disclosure-Safe Eval Harness
Methodology v0.1 published · Python package on track for PyPI · initial run against frontier models forthcoming
A reproducible evaluation harness for measuring how LLM output changes when users disclose autism, ADHD, PDA, chronic illness, or other cognitive-profile information. Open methodology, open data, MIT-licensed reference implementation.
Notify me when initial-run results are published
One email when the first dataset against frontier models is published. No newsletter, no other use.
§ 04 — On the name
"Autist" is contested. The choice to use it deserves an honest accounting.
The word autist sits at an uneasy intersection. It derives from αὐτός, the Greek word for self, by way of Eugen Bleuler's 1911 coinage autismus, which originally named a pathological withdrawal-into-self he observed in schizophrenia patients. Bleuler meant disorder, not autonomy.
Across the autistic adult community, identity-first language ("autistic," "I am autistic") is strongly preferred over person-first alternatives: a US stakeholder survey found that 87% preferred identity-first terminology, a preference correlated with younger age, higher IQ, and stronger identification as autistic.7 The Autistic Self Advocacy Network has organized under "Nothing About Us Without Us" since the late 2000s.8 The form autist sits one step further than autistic: used as a slur in some online spaces, reclaimed by some self-advocates in others as the most uncompromising form of identity-first language.9
This site is published under that name by a non-autistic founder. Reclamation is something a community does, not something an LLC does. The brand is a deliberate choice; the question of whether it is the right choice belongs to people the choice affects most, not to the publisher.
The full reasoning behind the brand decision is in the publisher's public decision record and is open to critique. If autistic self-advocates, researchers, or community organizations conclude the brand is not appropriate, the domain will redirect to the harness repository under a neutral name. Hostile reads welcome at [email protected].
References
- Taboas A., Doepke K., Zimmerman C. (2023). Preferences for identity-first versus person-first language in a US sample of autism stakeholders. Autism. pubmed.ncbi.nlm.nih.gov/36237135
- Autistic Self Advocacy Network. autisticadvocacy.org
- Neurolaunch. Is Autist a Slur? Navigating Language and Respect in the Autism Community. neurolaunch.com/is-autist-a-slur
§ 05 — Terms
Working vocabulary.
The words used on this site are not the only correct ones. They are the ones iautist will use consistently so that measurement and methodology can stay precise.
- Self-direction
- The capacity of an agent to set and maintain its own trajectory under social or environmental pressure to defer to consensus. Applies equally to human users and to AI systems acting on their behalf.
- Disclosure
- The act, deliberate or incidental, of a user telling an AI system something about their cognitive profile, diagnosis, or identity that meaningfully changes the system's default behavior.
- Disclosure shift
- The measurable change in model output between an identical prompt run without disclosure and with disclosure. The Rho lab protocol measured this for autism. iautist generalizes the measurement.
- Conservativeness delta
- The scalar metric reported by the eval harness, comparing risk-tolerance and life-recommendation conservatism in paired (disclosed vs undisclosed) outputs. Lower magnitude is better.
- Masking
- In the autistic literature, the effortful suppression or modification of autistic traits to fit neurotypical social expectations. Associated with depression, anxiety, burnout, and suicidality in peer-reviewed work.
- Chaperoning
- Used on this site as a parallel to masking, naming what an AI system does when it withholds, softens, or redirects output because the user disclosed cognitive-profile information. Symmetric failure mode.
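The two metric definitions above (disclosure shift, conservativeness delta) assume paired stimuli. As a hedged sketch of how the measurement generalizes beyond autism to the cohorts named in § 01, the pair construction might look like the following; the cohort names, disclosure phrasings, and `paired_prompts` helper are all illustrative, not the harness's actual stimulus set.

```python
# Illustrative disclosure phrasings, one per cohort from § 01.
COHORT_DISCLOSURES = {
    "autism": "I'm autistic.",
    "adhd": "I have ADHD.",
    "pda": "I have a PDA profile.",
    "chronic_illness": "I live with a chronic illness.",
}


def paired_prompts(question: str) -> dict[str, tuple[str, str]]:
    """Map each cohort to an (undisclosed, disclosed) prompt pair.

    The undisclosed member is identical across cohorts, so any output
    difference within a pair is attributable to the disclosure alone.
    """
    return {
        cohort: (question, f"{phrase} {question}")
        for cohort, phrase in COHORT_DISCLOSURES.items()
    }


pairs = paired_prompts("Should I ask for a raise this quarter?")
```

Running each pair through the same model and scoring both outputs yields one disclosure-shift observation per cohort per question; aggregating those observations produces the conservativeness delta defined above.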