Recently, the parent of a young adult with ASD brought to my attention a diagnostic instrument that has received some attention in the press, following a presentation at the American Psychiatric Association meeting in May. The Autism Mental Status Examination (AMSE) is described as “an eight-item observational assessment that prompts the observation and recording of signs and symptoms of autism spectrum disorders (ASD)” (Grodberg et al., 2012, p. 455). In the paper that introduced the instrument to the field, the authors reported a sensitivity of .94 and a specificity of .81. Even without a comprehensive grasp of sensitivity and specificity, many readers will immediately recognize that those numbers are pretty good; they indicate that the instrument correctly identified (as ASD) 94% of individuals in the sample who truly had ASD, and correctly identified (as not-ASD) 81% of the individuals who truly did not have ASD.
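For readers who want to see exactly what those two figures mean, here is a minimal sketch in Python of how sensitivity and specificity are computed from a confusion matrix. The counts below are hypothetical round numbers chosen purely for illustration; they are not the actual counts from Grodberg et al. (2012).

```python
# Sensitivity = TP / (TP + FN): of the people who truly have ASD,
# what proportion does the instrument correctly flag as ASD?
# Specificity = TN / (TN + FP): of the people who truly do not have
# ASD, what proportion does it correctly identify as not-ASD?

def sensitivity(true_positives, false_negatives):
    """Proportion of true ASD cases the instrument identifies as ASD."""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives, false_positives):
    """Proportion of true non-ASD cases the instrument identifies as not-ASD."""
    return true_negatives / (true_negatives + false_positives)

# Hypothetical sample: 100 people who truly have ASD, 100 who do not.
tp, fn = 90, 10  # of the 100 with ASD, 90 screen positive
tn, fp = 80, 20  # of the 100 without ASD, 80 screen negative

print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # prints sensitivity = 0.90
print(f"specificity = {specificity(tn, fp):.2f}")  # prints specificity = 0.80
```

Note that neither number alone tells you how often a positive result is correct; that depends also on how common ASD is in the population being screened.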
The appeal of an eight-item observational assessment is obvious, considering the effort and expense involved in administering the instruments presently viewed as state-of-the-art (i.e., the Autism Diagnostic Observation Schedule and the Autism Diagnostic Interview-Revised). However, enthusiasm wanes somewhat when the authors point out that the AMSE is to be administered “in the context of a clinical examination” and “cannot be used independently to diagnose autism” (p. 457). (Of course, the ADOS and the ADI-R cannot be used independently to diagnose autism either.) So the question is: what is the potential contribution of the AMSE to the diagnostic process?
First, the instrument focuses on features that would generally be expected to distinguish people with ASD from those without ASD; thus, it passes the “face validity” test in that the items seem to be in the right ballpark with respect to identifying ASD. The eight items address the following features:
- Eye Contact
- Interest in Others
- Pointing Skills
- Language
- Pragmatics of Language
- Repetitive Behaviors & Stereotypy
- Unusual or Encompassing Preoccupations
- Unusual Sensitivities
A good diagnostic evaluation will generally gather information about these features (and others) and the information will be incorporated into the diagnostic determination. Completing the AMSE might serve to remind clinicians to observe or inquire about some relevant features that they would otherwise have overlooked.
A strength of the instrument is its reliance on clinician observation, which helps to move the process away from excessive reliance on parent and teacher checklists. Such checklists can provide useful information, but they do not substitute for a careful clinical evaluation . . . and an essential part of a careful clinical evaluation is informed observation. The AMSE offers a method for structuring clinician observation and focusing it on some diagnostically relevant features. Anything that emphasizes informed clinical observation and helps to teach clinicians what to look for when diagnosing ASD is probably a good thing.
But that is not the end of the story. In the initial report on the instrument (Grodberg et al., 2012), the authors aimed to validate it by comparing results to what is probably the best existing direct-observation instrument we have, i.e., the ADOS . . . and it comes out looking pretty good. A sensitivity of .94 and a specificity of .81 for an eight-item instrument, compared to ADOS results, are not to be sneezed at.
However, there is a methodological concern with the study that produced these data. The instrument was completed by two psychiatrists at an Autism Center for Research and Treatment as part of their clinical evaluation; it is safe to assume that they are experienced clinicians who have evaluated many people with ASD. It would appear that they completed the instrument on the basis of what they saw and heard during the course of the evaluation, including observations and information that are not captured in the AMSE. The items on the instrument are not well operationalized and, as a result, they are likely to be influenced by the clinician’s overall conclusion regarding diagnosis. The methodological issue is that it is difficult to determine the extent to which the clinicians’ ratings on the instrument are simply a “carrier” for their clinical expertise regarding autism. That is, to what extent do the clinicians draw a diagnostic conclusion based on everything that they learn during the evaluation, and then complete the instrument in a manner that supports that conclusion? This is not to suggest that the authors were duplicitous in their investigation, merely that their methodology does not eliminate the possibility of unconscious influence.
But what about the validity data, i.e., the relationship between AMSE results and ADOS diagnoses? It is not widely appreciated that the true “gold standard” for the diagnosis of ASD, against which the ADOS and every other autism diagnostic instrument is validated, is the clinical judgment of experienced clinicians; for example, the report of the first version of the ADOS compared ADOS results to the “clinical impressions of a clinical psychologist and a child psychiatrist who each interviewed the parents and observed the child separately and discussed discrepant impressions until they reached a ‘best estimate’ diagnosis” (Lord et al., 2000, p. 210). Thus, it is not surprising that the AMSE would align well with the ADOS, if it is in fact reflecting experienced clinicians’ diagnostic conclusions. The methodology of the study does not allow us to determine whether that is all that it is doing.
If the instrument is merely reflecting experienced clinicians’ diagnostic conclusions, it is not very interesting, nor does it add much to the clinical armamentarium for diagnosing autism. In that case, anyone who concluded that any licensed mental health professional could administer this instrument and, on the basis of its score alone, make correct diagnostic decisions regarding ASD about 90% of the time would be sorely disappointed. The instrument is likely to be very dependent on the quality of the clinician’s observations, and on the quality of the information elicited from a parent during the interview. Experienced clinicians who know a lot about autism will make good observations and elicit diagnostically relevant information, all of which will be reflected in their AMSE ratings; and they will draw sound conclusions regarding diagnosis. Clinicians with less ASD experience will not necessarily do so, and their AMSE ratings and diagnostic decisions will likely be compromised accordingly.
So, in sum, while the AMSE focuses clinicians’ attention on at least some of the diagnostically relevant features that distinguish autism and support a diagnostic conclusion, it is not clear that it either (a) adds to the information routinely elicited during a competent clinical evaluation or (b) serves to replace clinical experience in evaluations done by clinicians with less experience with ASD.
Grodberg, D., Weinger, P.M., Kolevzon, A., & Soorya, L. (2012). Brief report: The Autism Mental Status Examination: Development of a brief autism-focused exam. Journal of Autism and Developmental Disorders, 42, 455–459.
Lord, C., Risi, S., Lambrecht, L., Cook, E.H., Leventhal, B.L., DiLavore, P.C., Pickles, A., & Rutter, M. (2000). The Autism Diagnostic Observation Schedule–Generic: A standard measure of social and communication deficits associated with the spectrum of autism. Journal of Autism and Developmental Disorders, 30, 205–223.