Psychometric testing of malingering

10th Aug 2017

‘He uses statistics like a drunken man uses lamp posts – for support rather than illumination.’ Andrew Lang (1844-1912)

In litigation, clients are often referred for psychometric assessment involving the administration of a battery of Symptom Validity Tests (SVTs) to determine whether the client is over-reporting the extent or severity of their psychological symptoms. SVTs and an associated interview usually take 4-6 hours, in one session. Misplaced belief in the accuracy of these tests, as they are administered and reported on, can cause great mischief in fairly assessing the psychological status of litigants. Indeed, in a substantial proportion of cases, SVTs misclassify honest responders as over-reporters or malingerers.

The remedy is easy enough, if the problem is properly understood. And so some elaboration is necessary…


SVTs are questionnaires or performance-based tests.


One approach is simple: answers to some questions are so improbable that they cannot be founded in reality, for example, “Do you think trees have the ability to feel people's emotions?” Another approach is to test performance by asking the client to perform tests that appear difficult but are in fact so easy that virtually everyone who can function in society can pass them. Yet another approach is to ask numerous questions and, through sophisticated statistical methods, identify responses that discriminate between individuals trying to fake and those with genuine psychopathology. After repeated trials the reliability and validity of tests are established, with norms available to compare individuals against the sample population from which the tests were developed.

Psychological reports based on SVTs and clinical assessments typically proceed as follows:

1. Justification for testing

This is usually prefaced by: ‘Symptom validity assessment is conducted to measure exaggeration of complaints and alleged deficits, using validated measures and approaches’.

Comment: Since no SVTs have been validated and normed in extended testing sessions, we do not know how badly this affects test performance – but it will not be beneficial.

2. Justification for extended testing sessions

This is never justified. Clients can ask for breaks when required, but they are often encouraged to keep going.

Comment: Due to repeated/lengthy testing, clients often become demotivated, disengage from the testing process and answer in a random or lacklustre way. This is particularly prevalent on lengthy tests such as the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF), which has 338 questions. Since clients referred for testing claim to have psychopathology of import, ethical considerations require that problems of fatigue, demotivation and disengagement are managed.

3. Test results based on percentile ranks/standard scores/deviations

Results of tests reported in standard scores are easily expressed as percentile ranks. Thus, some assessors report as follows: ‘On the Response Inconsistency Scale, Mr X’s symptoms were more extreme than 95% of all test takers.’  

Comment: The diagnostic accuracy of a test can be expressed in terms of specificity and sensitivity. Specificity measures how well a test avoids false positives (that is, the proportion of those who do not have a disease/condition who will return a negative result), while sensitivity measures how good a test is at detecting a disease/condition when it is actually present. Diagnostic accuracy (that is, sensitivity and specificity) is only one part of the equation that describes the efficacy of a diagnostic instrument. The other part is predictive power, which depends on both diagnostic accuracy and the prior probability (that is, prevalence, or base rate) of the disease/condition being tested for.

For example, if a test to detect a disease whose prevalence is 1/1000 has a specificity of 95% (that is, a false positive rate of 5%), the chance that a person with a positive result actually has the disease is only about 1.9%. Applying Bayes' theorem, and assuming for simplicity that sensitivity is 100%, 51 of every 1000 people tested would test positive (1 true positive and 50 false positives). Expressed as a proportion this is 1/51 = 0.019, or 1.9%.
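The 1/1000 worked example can be reproduced with a short positive predictive value calculation. The function below is a sketch of the Bayes'-theorem arithmetic; the 100% sensitivity figure is the simplifying assumption the example implies, not a property of any real test.

```python
# Positive predictive value (PPV) via Bayes' theorem for the worked
# example: prevalence 1/1000, specificity 95%, sensitivity assumed 1.0.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability that a positive result reflects a true case."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(prevalence=1 / 1000,
                                sensitivity=1.0,
                                specificity=0.95)
print(f"PPV = {ppv:.2%}")  # prints: PPV = 1.96%
```

The exact figure (1.96%) differs slightly from the article's 1.9% only because the article rounds via the 1-in-51 shortcut.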

With a base rate of over-reporting of about 4% in Australia, virtually none of the commonly employed SVTs has a predictive power greater than 50%; it usually hovers around 30%. And so these SVTs, in the majority of cases, misclassify honest responders as over-reporters or malingerers.
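The same arithmetic shows how a 4% base rate drags predictive power down into the range the article describes. The 80% sensitivity and 92% specificity figures below are illustrative assumptions chosen for the sketch, not published values for any particular SVT.

```python
# Illustrative only: 80% sensitivity and 92% specificity are assumed
# figures, not published values for any particular SVT.
base_rate = 0.04     # prevalence of over-reporting (article's figure)
sensitivity = 0.80   # assumed for illustration
specificity = 0.92   # assumed for illustration

true_pos = base_rate * sensitivity                # genuine over-reporters flagged
false_pos = (1 - base_rate) * (1 - specificity)   # honest responders flagged
ppv = true_pos / (true_pos + false_pos)
print(f"PPV = {ppv:.0%}")  # prints: PPV = 29%
```

Even with respectable-looking accuracy figures, roughly seven in ten positive results come from honest responders, because they so heavily outnumber genuine over-reporters.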


A finding of over-reporting, or even a diagnosis of malingering, is not evidence that the test taker is free of dysfunction. Individuals with genuine disorders may over-report their symptoms or fabricate others for a variety of reasons. Thus, positive findings on SVT over-reporting indicators do not, per se, rule out the possibility that the client is psychologically disordered.


Extended psychological testing of clients is contraindicated where psychopathology is claimed; this needs to be considered on an individual basis.

Using statistics for support rather than illumination in psychological assessments based on SVTs overestimates malingering very significantly indeed.


Professor Ian R Coyle is an ALA member and Psychologist, Ergonomist/Human Factors Engineer and Psychopharmacologist. He has given expert evidence for 35 years in these disciplines, in Australia and internationally.



The views and opinions expressed in these articles are the authors' and do not necessarily represent the views and opinions of the Australian Lawyers Alliance (ALA).
