The Clinical Neuropsychologist, 23: 314–328, 2009
http://www.psypress.com/tcn
ISSN: 1385-4046 print/1744-4144 online
DOI: 10.1080/13854040802054151
THE VULNERABILITY TO COACHING ACROSS
MEASURES OF EFFORT
Adrianne M. Brennan, Stephen Meyer, Emily David,
Russell Pella, Ben D. Hill, and Wm. Drew Gouvier
Louisiana State University, New Orleans, LA, USA
Neuropsychologists are increasingly called upon to conduct evaluations with individuals
involved in personal injury litigation. While the inclusion of measures of effort within a test
battery may help clinicians determine whether a client has put forth full effort, attorney
coaching may allow dishonest clients to circumvent these efforts. The purpose of this study
was to determine the degree to which frequently used measures of effort are susceptible to
coaching, as well as to explore and classify strategies undertaken by coached malingering
simulators. Overall, coached simulators performed significantly better on 7 of 14 measured
variables. Potential improvements in the external validity of the simulation design were also
explored.
Keywords: Malingering; Attorney coaching; Head injury; Methodology; Strategies.
INTRODUCTION
Approximately 94% of neuropsychologists in private practice report involve-
ment in personal injury evaluations of brain-injury cases (Essig, Mittenberg,
Petersen, Strauman, & Cooper, 2001). During these personal injury evaluations the
neuropsychologist evaluates cognitive functioning across a wide domain of abilities;
however, before making a conclusion of compromised functioning secondary to
injury, the neuropsychologist must ensure that the patient has put forth his or her
best effort towards the testing procedures (Iverson, 2003). The fabrication or
exaggeration of cognitive impairment in the presence of some incentive
(i.e., financial compensation) is particularly germane to the forensic examiner as
malingering is an increasingly costly issue. Malingering accounts for nearly one-fifth
of all medical care cases (i.e., doctor visits, hospitalizations) within the United States
and combined medical and legal costs approach five billion dollars annually (Ford,
1983; Gouvier, Lees-Haley, & Hammer, 2003).
Estimated base rates of malingering range from approximately 18% to 40% of
litigating populations (Binder, 1993; Heaton, Smith, Lehman, & Vogt, 1978;
Larrabee, 2003; Mittenberg, Patton, Canyock, & Condit, 2002); the need for
accurate detection methods is therefore clear. To respond to this need, numerous
malingering detection techniques have been developed.

Address correspondence to: Adrianne M. Brennan, Louisiana State University, Health
Sciences Center, Dept. of Psychiatry, 210 State Street, Room 3111, New Orleans, LA 70118, USA.
E-mail: abren1@lsuhsc.edu
Accepted for publication: March 13, 2008. First published online: May 2, 2008.
© 2008 Psychology Press, an imprint of the Taylor & Francis Group, an Informa business

A recent survey revealed
that 79% of neuropsychologists involved in personal injury cases incorporate
techniques and methods designed to detect malingering into their neuropsycholo-
gical battery (Slick, Tan, Strauss, & Hultsch, 2004). Some of the most frequently
employed techniques include symptom validity testing, examination of the
performance curve, examination of floor effects, recognition of atypical perfor-
mances, and the use of validity indices.
While failure on one or more measures of effort may alert the clinician to
possible malingering, one must be cautious when declaring a patient as a malingerer.
In fact, 41.7% of surveyed neuropsychologists reported only rarely using the term
‘‘malinger’’ in their reports (Slick et al., 2004). Numerous reasons for this hesitation
exist, including the fear of mislabeling someone or the possibility of being sued
(Iverson, 2003). Regardless of the reason, it has been recommended that the
clinician employ a more systematic evaluation of malingering, such as the guidelines
set forth by Slick, Sherman, and Iverson (1999), to ensure correct classification and
to meet the stringent standards of evidence offered by Daubert (1993).
While there are numerous studies examining simulated malingering, few
studies have looked at the malingering strategies employed by these samples (Mahar
et al., 2006; Tan, Slick, Strauss, & Hultsch, 2002). One recent examination
discovered four constructs underlying strategies to exaggerate or malinger.
Specifically, these constructs were composed of an under-reporting of psychological
symptoms, an over-reporting of neurotic symptoms, insufficient cognitive effort,
and an over-reporting of psychotic or rarely endorsed symptoms (Nelson, Sweet,
Berry, Bryant, & Granacher, 2007). Of the reported strategies relevant to cognitive
impairment, a recent study demonstrated that the most frequently used approach is
to feign total memory loss (76%), followed by feigning a slow rate of response speed
(32%), confusion (16%), and concentration difficulty (12%) (Tan et al., 2002).
Importantly, however, these strategies were observed in individuals naïve to the
effects of head injury and/or the likelihood of detection by effort measures. It is
likely that these approaches may be different in ‘‘real-world’’ malingerers as
individuals involved in personal injury litigation may be coached by their attorneys
on the sequelae of brain damage and also in ways to avoid detection. Elucidating the
strategies employed by coached malingering simulators may help to inform the
clinician about possible methods in which coached malingerers attempt to avoid
detection.
There are confirmed reports of attorney coaching prior to neuropsychological
evaluation (Youngjohn, 1995). Furthermore, a recent survey revealed that
approximately 75% of attorneys reported preparing their clients for forensic
neuropsychological evaluations by discussing the content and purpose of
neuropsychological tests and measures (Essig et al., 2001). There is evidence that
attorneys brief their clients on the inclusion of measures designed to detect
malingering (Bauer & McCaffrey, 2006; Essig et al., 2001). The most frequently
reviewed test is the MMPI-2 (29%), followed by the Portland Digit Recognition
Test (PDRT) (6%), and the Memory for Fifteen Items Test (MFIT) (2%). In
addition to direct warnings of neuropsychological and effort measures, approxi-
mately 10% of attorneys inform their clients of what types of information to
disclose concerning their injury and 12% tell their clients what information not to
disclose (Essig et al., 2001). Attorney coaching is thus likely to alter performance on
neuropsychological measures and measures of effort and to invalidate the standard
neuropsychological assessment.
Recent studies have examined the susceptibility of effort measures to attorney
coaching; however, the results have generally been mixed. For example, Suhr and
Gunstad (2000) reported that providing simulated malingerers with brain-injury
information had no effect on their performance on the Auditory Verbal Learning
Test. The Word Completion Memory Test (WCMT) was also found to be
invulnerable to the effects of coaching; however, this measure was designed
specifically to resist coaching effects (Hilsabeck, LeCompte, Marks, & Grafman,
2001).
In contrast, others have reported on the vulnerability to coaching of many
measures of effort. Lamb, Berry, Wetter, and Baer (1994) demonstrated the
susceptibility of the MMPI-2 to both coaching and brain-injury information.
Simulated malingerers, who were provided with information regarding brain injury
and/or information regarding the ability of the MMPI-2 to detect a ‘‘fake-bad’’
profile, produced valid profiles with significantly elevated clinical scales similar to
those obtained by individuals with true head injuries. Similarly, coaching
individuals on specific symptomatology, as well as ways in which to avoid
detection, allowed for simulators to present as if they were suffering from, and not
exaggerating, post-traumatic stress symptoms on the Personality Assessment
Inventory (PAI) (Guriel-Tennant & Fremouw, 2006). In addition, Martin, Bolter,
Todd, Gouvier, and Nicholls (1993) reported that malingering simulators were able
to produce more believable profiles on a computerized forced-choice measure after
being provided with information regarding dissimulation. More believable profiles
were also observed on the Computerized Assessment of Response Bias (CARB) and
Word Memory Test (WMT) after participants were provided with information on
how to go undetected (Dunn, Shear, Howe, & Ris, 2003). Similar results were
observed on the Nonverbal Forced Choice Test, 21-Item Test, Dot Counting Test
(DCT), MFIT, PDRT, and Recognition Memory Test (Cato, Brewster, Ryan, &
Guiliano, 2002; Gunstad & Suhr, 2001; Martin, Hayes, & Gouvier, 1996; Rose,
Hall, Szalda-Petreem, & Bach, 1998).
Numerous studies have examined the vulnerability of effort measures to
coaching; however, relatively few studies have compared frequently used measures
of effort or the malingering detection techniques on which many of these measures
of effort are based, to determine which are relatively more or less vulnerable to
coaching. In addition, the profile of malingering strategies utilized by individuals
who have been coached has not been explored. The purpose of this study is to
determine which overall detection techniques, and which commonly used measures
of effort, are most susceptible to the effects of coaching, and to identify the
malingering strategies most frequently employed by coached individuals. However, before these
questions can be answered, methodological issues in malingering research are
addressed.
The vast majority of malingering research is based on the simulation design.
This design utilizes non-clinical participants, typically university undergraduates,
asked to feign brain damage. Although one recent study (Brennan & Gouvier, 2006)
has shown that simulated malingerers are comparable to actual malingerers, studies
utilizing the simulation design have historically been criticized for their lack of
generalizability to actual malingerers (Haines & Norris, 1995). A particular concern
of the simulation design is that the individuals employed have little to no experience
regarding head injury. This is considerably different from individuals involved in
actual litigation. Oftentimes, litigants who are malingering may have experienced
and recovered from a mild head injury; however, they choose to perform during the
neuropsychological evaluation as if their deficits were still present (Rogers, 1997).
Malingering has also been observed in patients with more severe injury. Bianchini,
Greve, and Love (2003) have documented the deliberate attempts of patients with
moderate/severe traumatic brain injury to appear more impaired while involved in
litigation. It is possible that the prior experience of an actual head injury allows the
litigant to malinger in a more convincing manner (Cato et al., 2002). Past research
has demonstrated that simulated malingerers with a history of head injury perform
differently than simulators without the history of a head injury, although experience
with head injury does not reliably reduce misconceptions about head injury, still
leaving the once-injured malingerer vulnerable to detection (Ju & Varney, 2000;
Martin et al., 1996).
METHOD
Participants
Participants were 131 undergraduate students attending Louisiana State
University in Baton Rouge, Louisiana. Participation was on a volunteer basis with
six extra credit points awarded for participation. All participants were recruited
from undergraduate courses and were randomly assigned to receive coached
instructions or uncoached instructions. The distribution of participants in each
group was as follows: Coached & History of Head Injury (C/HI): n = 20; Coached &
No History of Head Injury (C/NHI): n = 41; Not Coached & History of Head
Injury (NC/HI): n = 20; Not Coached & No History of Head Injury (NC/NHI):
n = 50. Individuals were assigned to the HI group if they had endorsed experiencing
a concussion or any loss of consciousness in their initial interview.
The entire sample was composed of 47 males and 84 females. The average age
of the sample was 20.90 years (SD = 3.45) and the average level of education was
13.40 years (SD = 2.68). The sample was 80.9% Caucasian, 16% African-American,
.8% Hispanic, 1.5% Asian, and .8% of the sample described their race as ‘‘other.’’
No significant differences were observed between any of the four groups on age
(C/HI: M = 20.80, SD = 2.69; C/NHI: M = 20.56, SD = 2.38; NC/HI: M = 21.70,
SD = 4.78; NC/NHI: M = 20.92, SD = 3.83), F(3, 131) = 0.49, p = .688; educational
level (C/HI: M = 13.70, SD = 1.08; C/NHI: M = 13.90, SD = 1.34; NC/HI: M = 12.70,
SD = 4.45; NC/NHI: M = 13.20, SD = 2.97), F(3, 131) = 1.12, p = .342; ethnicity,
χ²(3, N = 131) = 1.23, p = .745; or gender (C/HI: 55% male; C/NHI: 34.1% male;
NC/HI: 40% male; NC/NHI: 29.4% male), χ²(3, N = 131) = 4.24, p = .237. See Table 1.
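For illustration only, the style of group-equivalence check reported above can be sketched in Python as follows; the data are synthetic and the column names are hypothetical, not the study's data set.

```python
# Sketch: checking that randomly assigned groups do not differ on demographics.
# Synthetic data; column names are hypothetical, not the study's data set.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": np.repeat(["C/HI", "C/NHI", "NC/HI", "NC/NHI"], [20, 41, 20, 50]),
    "age": rng.normal(21, 3, 131).round(1),
    "gender": rng.choice(["male", "female"], 131),
})

# One-way ANOVA across the four groups for a continuous variable such as age
f_stat, p_age = stats.f_oneway(*[g["age"].values for _, g in df.groupby("group")])

# Chi-square test of independence for a categorical variable such as gender
chi2, p_gender, dof, _ = stats.chi2_contingency(pd.crosstab(df["group"], df["gender"]))

print(f"Age: F = {f_stat:.2f}, p = {p_age:.3f}")
print(f"Gender: chi-square({dof}) = {chi2:.2f}, p = {p_gender:.3f}")
```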
Materials
Written informed consent was obtained from all participants prior to their
inclusion in the study. The following tests were administered:
Structured Clinical Interview. A structured clinical interview was developed
specifically for this study to determine the participant’s age, gender, race, and
education, as well as to determine the presence of exclusionary criteria such as
neurological history, current litigation status, and severe psychiatric history.
Test of Memory Malingering (TOMM). The TOMM is an individually
administered symptom validity test utilizing pictorial stimuli. It is composed of two
learning trials and a retention trial. During the learning trials, participants are
shown 50 line drawings for 3 seconds each. The learning trials are followed by a
two-choice recognition recall task. A retention trial is administered following a
15-minute delay. Failures on this measure were based on published cut-off scores of
less than 45 on Trial 2 or the Retention Trial (Tombaugh, 1996).
Portland Digit Recognition Test (PDRT). The PDRT is an individually
administered symptom validity test. Participants are asked to remember a five-digit
number presented auditorily and, following a 5-second (Easy), 15-second (Easy), or
30-second delay (Hard), then recognize the number from a choice of two. A
performance of fewer than 19 items correct on the Easy items, fewer than 18 items
correct on the Hard items, or fewer than 39 items correct on the entire measure is
considered suspect (Binder, 1993).
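As an illustration, the published cut-offs just described for the TOMM and PDRT can be written as simple classification rules; the function names below are ours, and the thresholds are simply those cited above (Tombaugh, 1996; Binder, 1993).

```python
# Sketch: applying the published TOMM and PDRT cut-off scores described above.
# Function names are illustrative; thresholds follow Tombaugh (1996) and Binder (1993).

def tomm_suspect(trial2_correct: int, retention_correct: int) -> bool:
    """Flag performance as suspect if Trial 2 or the Retention Trial falls below 45/50."""
    return trial2_correct < 45 or retention_correct < 45

def pdrt_suspect(easy_correct: int, hard_correct: int) -> bool:
    """Flag performance as suspect using the Easy, Hard, and Total cut-offs."""
    total_correct = easy_correct + hard_correct
    return easy_correct < 19 or hard_correct < 18 or total_correct < 39

# Example: a hypothetical examinee
print(tomm_suspect(trial2_correct=43, retention_correct=46))  # True (Trial 2 below cut-off)
print(pdrt_suspect(easy_correct=20, hard_correct=17))         # True (Hard below cut-off)
```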
Word Memory Test (WMT). The WMT is a computer-administered test in
which a 20-item paired-word list is presented. The word list is presented twice and
the participant is asked to recall as many word pairs as possible. Performance on
this measure provides three measures of effort: Immediate Recognition (IR),
Delayed Recognition (DR), and Consistency (CNS) (Green, Allen, & Astner, 1996).
Table 1 Demographic information across groups
Mean Standard deviation
Coached/head injury
Age 20.80 2.69
Education 13.70 1.08
Coached/No head injury
Age 20.56 2.37
Education 13.90 1.34
Uncoached/Head injury
Age 21.70 4.78
Education 12.70 4.45
Uncoached/No Head injury
Age 20.90 3.87
Education 13.16 2.99
All numbers are reported in years. None of the above group comparisons was
statistically significant.
Memory for Fifteen Items Test (MFIT). The MFIT is an individually
administered paper and pencil test in which the participant is asked to remember
and reproduce 15 items. One point is awarded for each item correctly reproduced
(Lezak, 2004).
Dot Counting Test (DCT). The DCT is an individually administered test in
which the participant is asked to count the number of dots on a 3 × 5 index card.
Response times and errors are calculated and compared to performances in samples
of individuals with documented brain injury (Lezak, 2004).
Word Completion Memory Test (WCMT). The WCMT is an individually
administered paper and pencil test that compares implicit and explicit memory.
Participants are presented with a word list and then asked to complete word stems
by using words from the list. Following this task, the participant is asked to
complete word stems without using any words from the previously presented list
(Hilsabeck & LeCompte, 1997).
Personality Assessment Inventory (PAI). The PAI is a paper and pencil
questionnaire completed by the participant and then entered into a computerized
scoring program which yields four validity scales and nine clinical scales (Morey,
1991).
Strategy Questionnaire. A questionnaire assessing the strategies used by
simulators was created for this study using rationale based on Tan et al. (2002). This
questionnaire includes questions regarding preparation for the examination,
strategy management, and perception of the face validity of each measure.
Subject Rating Scale. A subject rating scale, taken directly from Tombaugh
(1997), was created for this study. Questions included: How successful do you think
you were in your attempt to portray someone with a brain injury? How hard did
you try? This measure was administered to control for those participants who
reportedly did not try. One participant was excluded from analyses for reporting on
this measure that he did not try.
Design and procedure
This study occurred over two sessions. During the first session informed
consent was obtained and the participant was interviewed using the structured
clinical interview designed for this study. Participants were then randomly assigned
to the C or NC group as well as assigned an identification number to maintain
anonymity. All participants were read a set of instructions modified from Cato et al.
(2002) and Gunstad and Suhr (2001). Instructions were as follows:
Imagine that you were in a car accident in which another driver hit your car. You were
knocked unconscious and woke up in the hospital. You were kept overnight for
observation. The doctors told you that you experienced a concussion. Imagine that a
year after the accident, you are involved in a lawsuit against the driver of the other car.
If you are found to have experienced significant injuries as a result of the accident, you
are likely to receive a bigger settlement. You have decided to fake or exaggerate
symptoms of a brain injury in order to increase the settlement you will receive. As a part
of the lawsuit, you are required to undergo cognitive testing to determine whether or not
you have experienced a brain injury. If you can successfully convince the examiner that
you have experienced significant brain damage, you are likely to get a better settlement.
If the examiner detects that you are faking, you are likely to lose the lawsuit.
In one week you will take a series of cognitive tests that will be used in such a
situation. I would like you to spend some time over the next week researching and
developing your role as an individual with brain damage. On the tests you will take
next week, I would like you to simulate brain damage, but in a believable way, such
that the examiner cannot tell that you are attempting to fake a brain injury.
Individuals assigned to the C group were read a second set of instructions
which outlined multiple outcomes following brain injury as well as the ability of
some measures to detect malingering. Instructions were as follows:
I will read a list to you of commonly experienced problems following a head injury,
which may help in your simulation of head injury. These symptoms include: frequent
headaches, being easily fatigued, problems with memory, difficulty attending and
concentrating, slowed responses, irritability, anxiety, and depression.
Another piece of information that may help you in your simulation of head injury
is that some of the tests you will be given are designed to detect if someone is faking.
Your best chance of performing successfully will be to miss more of the difficult items
than the easy ones and be sure not to miss more than half of the questions.
Participants were then scheduled to undergo testing the following week. The
average length of time between Session 1 and 2 was 9.13 days (SD = 3.01). There
was no significant difference in length of time between sessions across the four
groups, F(3, 68) = 0.60, p = .61. Of the students who completed Session 1, 54 did
not return for Session 2 and were therefore excluded from the study, leaving a total
sample size of 131.
During Session 2 the examiner reread the instructions presented during the
first session. Participants were then administered the two learning trials of the
TOMM. During the delay, the MFIT and DCT were administered along with a
short break. Following the administration of the DCT, the retention trial of the
TOMM was administered. Once the retention trial on the TOMM was complete, the
PDRT and the WMT were administered. During the WMT delay, participants were
given an opportunity to take a break. Following the completion of the WMT,
participants were administered the WCMT and PAI. Participants were then asked
to complete the strategy questionnaire and subject rating scale. Participants were
then provided with an extra credit slip and dismissed. Five examiners were used in
this study. Each participant had the same examiner for Sessions 1 and 2. All
measures were individually administered based on standardized instructions and
procedures. Total testing time was approximately 30 minutes for Session 1 and
approximately 3 hours for Session 2.
RESULTS
Data analysis
Using a 2 × 2 between-participants MANOVA, a significant main effect for
the presence of coached instructions was observed, meaning that there was a
significant difference in the scores of participants who received coached instructions
versus participants who did not, F(14, 82) = 1.788, p < .05. In contrast, no
significant main effect was found for history of head injury, meaning that
participants with a history of head injury did not perform significantly differently
from participants with no history of head injury, F(14, 82) = 0.74, p = .72. Similarly,
there was no significant interaction effect for history of head injury and coached
instructions, F(14, 82) = 0.40, p = .96.
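A minimal sketch of a 2 × 2 between-participants MANOVA of this kind is given below; the data are synthetic, the column names are hypothetical, and this is not the authors' analysis code.

```python
# Sketch: a 2 x 2 between-participants MANOVA on several effort indices.
# Synthetic data and hypothetical column names; not the authors' analysis code.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
n = 131
df = pd.DataFrame({
    "coached": rng.integers(0, 2, n),
    "head_injury": rng.integers(0, 2, n),
})
# Hypothetical dependent measures (stand-ins for, e.g., TOMM Trial 1, PDRT Total, WMT DR)
df["tomm1"] = 30 + 8 * df["coached"] + rng.normal(0, 5, n)
df["pdrt_total"] = 39 + 5 * df["coached"] + rng.normal(0, 6, n)
df["wmt_dr"] = 66 + 14 * df["coached"] + rng.normal(0, 10, n)

model = MANOVA.from_formula("tomm1 + pdrt_total + wmt_dr ~ coached * head_injury", data=df)
print(model.mv_test())  # Wilks' lambda, Pillai's trace, etc. for each effect
```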
Results of Bonferroni post-hoc tests reveal significant differences between the
C and NC groups on the TOMM Trial 1: F(1, 95) = 17.16, p < .05; TOMM Trial 2:
F(1, 95) = 11.29, p < .05; TOMM Retention Trial: F(1, 95) = 12.21, p < .05; PDRT
Total: F(1, 95) = 3.84, p < .05; WMT DR: F(1, 95) = 10.43, p < .05; WCMT I:
F(1, 95) = 8.63, p < .05; and WCMT R: F(1, 95) = 4.08, p < .05. See Table 2. All
significant differences were directional, with the C group demonstrating significantly
better scores compared to the NC group. With power = 0.88, the effect size was
considered to be of medium strength, 0.50.
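The univariate follow-up comparisons can be sketched as Bonferroni-corrected group contrasts; the score vectors below are synthetic, and the group sizes merely mirror the study's 61 coached and 70 uncoached simulators.

```python
# Sketch: Bonferroni-corrected univariate follow-up comparing coached vs. uncoached groups.
# Synthetic score vectors; in the study there were 14 such comparisons, so alpha is divided by 14.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_comparisons = 14
alpha = 0.05 / n_comparisons  # Bonferroni-adjusted significance threshold

# Hypothetical scores on one effort index for the two groups
coached = rng.normal(38, 5, 61)     # 61 coached simulators
uncoached = rng.normal(30, 5, 70)   # 70 uncoached simulators

f_stat, p = stats.f_oneway(coached, uncoached)
print(f"F(1, {len(coached) + len(uncoached) - 2}) = {f_stat:.2f}, p = {p:.4f}, "
      f"{'significant' if p < alpha else 'not significant'} at the adjusted alpha")
```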
Table 2 Comparison of test performance across coached and uncoached groups

Measure        Uncoached M (SD)   Coached M (SD)
TOMM 1*        30.17 (1.40)       38.38 (1.40)
TOMM 2*        31.33 (1.70)       39.42 (1.70)
TOMM R*        30.51 (1.75)       39.16 (1.75)
MFIT Total     11.89 (0.51)       13.19 (0.51)
DCT Errors     5.27 (0.51)        4.04 (0.51)
PDRT Easy      20.04 (1.01)       22.74 (1.01)
PDRT Hard      18.97 (0.89)       21.13 (0.89)
PDRT Total*    38.91 (1.80)       43.91 (1.81)
WMT IR         65.80 (14.91)      93.71 (14.92)
WMT DR*        65.83 (3.13)       80.11 (3.13)
WMT CNS        69.32 (2.87)       75.31 (2.87)
WCMT I*        16.04 (0.85)       19.58 (0.85)
WCMT R*        5.58 (1.46)        9.75 (1.46)
PAI NIM        78.48 (3.60)       76.94 (3.60)

*p < .05.
TOMM = Test of Memory Malingering; MFIT = Memory for Fifteen Items Test;
DCT = Dot Counting Test; PDRT = Portland Digit Recognition Test;
WMT = Word Memory Test; WCMT = Word Completion Memory Test;
PAI = Personality Assessment Inventory.

To determine whether the strategies employed by coached simulators differed
significantly from the strategies employed by uncoached simulators, a nonparametric
multiple-comparisons test, the Mann-Whitney U, was used. As no significant
differences due to history of head injury were observed between the HI and NHI
groups, this analysis was performed with the entire sample, collapsing across HI and
NHI status. Results indicate no significant differences between the C and NC groups
in the use of the following malingering strategies: total memory loss, U = 1916.00,
p = .30; slow rate of responding, U = 21430.50, p = .66; poor concentration,
U = 1991.00, p = .50; confusion, U = 1840.00, p = .16; nervousness, U = 2098.50,
p = .85; dyslexia, U = 1924.40, p = .32; and partial memory loss, U = 2121.50,
p = .95. See Table 3. The most frequently used malingering strategies were poor
concentration and partial memory loss, followed by slow processing speed, confusion,
nervousness, dyslexia, and total memory loss.
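For readers who wish to reproduce this style of nonparametric comparison, a minimal sketch follows; the rankings are synthetic values on the 1-8 questionnaire scale and do not reflect the study's data.

```python
# Sketch: Mann-Whitney U comparison of one strategy ranking between coached and uncoached simulators.
# Synthetic rankings on the 1-8 questionnaire scale; not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
coached_rank = rng.integers(1, 9, 61)    # e.g., hypothetical rankings of "poor concentration"
uncoached_rank = rng.integers(1, 9, 70)

u_stat, p = stats.mannwhitneyu(coached_rank, uncoached_rank, alternative="two-sided")
print(f"Poor concentration: U = {u_stat:.1f}, p = {p:.2f}")
```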
All participants were asked to report which test they believed (1) would
catch someone who was trying to fake an injury; (2) was the easiest test on which to
fake an injury; (3) was the hardest test on which to fake an injury; (4) was the most
difficult to take; and (5) was the most aversive. The PAI was most frequently
reported by the C group as the test most likely to catch someone malingering,
whereas participants in the NC group reported the TOMM as the test most likely to
detect malingering. Both groups reported the TOMM as the easiest test on which to
fake a head injury and listed the PDRT as the hardest test on which to perform as if
injured, the most difficult test to take, and the most aversive test in the battery.
To ensure that participants complied with the request to perform as if they
were head injured, they were asked to rate their perceived level of success in
portraying a head injury as well as to gauge how hard they tried. Individuals in the
C group reported an average level of 2.43 and the NC group reported an average
level of 2.50 (based on a 6-point Likert scale: 0 = not at all, 5 = very) in rating their
perceived level of success. This difference was not significant, t(129) = 0.354,
p = .724. Similarly, no significant difference was observed on reported level of effort,
t(129) = 1.581, p = .116. Individuals in the C group endorsed trying at a level of 3.33
and individuals in the NC group reported trying at a level of 3.61.
DISCUSSION
Examination of both the vulnerability of detection measures to coaching as
well as the strategies utilized by individuals who are coached will help to prepare the
clinician against efforts to avoid detection and thus ensure a more accurate
neuropsychological assessment. The first purpose of this study was to determine
which commonly used measures of effort are most susceptible to coaching. It was
hypothesized that coached simulators would demonstrate significantly better
performance compared to the uncoached simulators. Better scores were observed in
the coached simulators on 13 of 14 indices, with 7 being statistically significantly
better. Furthermore, even the WCMT, a measure designed to be robust to the
effects of coaching, was vulnerable to coaching, with significantly better scores in
coached versus uncoached simulators.

Table 3 Mean values of malingering strategies in coached and uncoached simulators

Strategy                  Uncoached M (SD)   Coached M (SD)
Total memory loss         4.61 (2.56)        5.10 (2.45)
Slowed processing speed   3.47 (1.82)        3.74 (2.19)
Poor concentration        3.26 (2.03)        3.52 (2.13)
Confusion                 3.59 (1.74)        4.05 (1.77)
Nervousness               4.53 (2.16)        4.57 (1.67)
Reading problems          5.36 (2.25)        5.70 (2.18)
Partial memory loss       3.71 (2.26)        3.69 (2.16)

Scale based on 1–8, with 1 being most closely representative of the participant's
strategy and 8 being least representative.
To determine which detection techniques are more vulnerable to coaching, the
measures used in this study were separated by the technique on which they were
based (i.e., SVT, floor effects, etc.). Atypical performances were the most vulnerable
to coaching, with coached samples passing two of two indices (WCMT I and
WCMT R) compared to failures on both indices in the uncoached sample. In
addition, several of the symptom validity indices were found to be vulnerable. The
mean scores of the sample of uncoached participants failed all nine indices, whereas
the mean scores of the coached samples successfully passed four of nine indices
(PDRT Easy, PDRT Hard, PDRT Total, and WMT IR). A test based on the
performance curve (DCT) was found to be invulnerable to coaching, with
malingering in both the coached and uncoached samples detected. Finally, a test
utilizing validity indices (PAI) was unable to detect malingering in either sample;
therefore, the coaching vulnerability of this measure could not be deciphered.
Perhaps this is because the symptoms solicited in the PAI are largely psychological,
rather than cognitive, in nature and it is likely that cognitive symptoms, rather than
psychological symptoms, are most salient for malingering related to head injury. In
addition, malingering was not detected in either group on a measure based on floor
effects (MFIT). This is most likely due to the high face validity and thus obviousness
of this test as a measure of effort.
Importantly, lower sensitivity rates in coached samples were observed across
nearly all detection measures compared to uncoached samples. Furthermore,
sensitivity rates of several measures were found to be statistically significantly lower
in coached samples compared to uncoached samples. Sensitivity rates most
vulnerable to coached instructions were observed on symptom validity measures
that rely on below chance performance, namely the TOMM and PDRT. See
Table 4.
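Whether a forced-choice score is "statistically below chance" can be checked with a one-sided binomial test against a guessing rate of .5; the sketch below is illustrative only, with the 50-item length matching the TOMM trials described earlier.

```python
# Sketch: testing whether a forced-choice score is significantly below chance (p = .5).
# Significantly below-chance responding implies deliberate avoidance of correct answers.
from scipy.stats import binomtest

def below_chance(correct: int, n_items: int, alpha: float = 0.05) -> bool:
    """One-sided binomial test: is the score significantly worse than guessing?"""
    result = binomtest(correct, n_items, p=0.5, alternative="less")
    return result.pvalue < alpha

# Example: 17 of 50 correct on a two-choice trial is significantly below chance
print(below_chance(17, 50))  # True
```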
Specifically, measures that were vulnerable to coaching include the PDRT, the
WMT Immediate Recall, and the WCMT. Perhaps it was because, as Suhr and
Gunstad (2000) suggested, coached individuals ‘‘suppressed their tendency to do
devastatingly poorly on measures they perceived to be easy’’ (p. 402); however, both
the coached and uncoached groups reported the PDRT as the most difficult and
most aversive measure rather than the easiest. In addition, when queried, neither
group endorsed any of the above three measures as ‘‘most likely to catch someone
faking.’’ In fact, the coached group listed the PAI as the test most likely to detect
malingering compared to the TOMM, which was reported in the uncoached sample.
Previous research has demonstrated that forced choice measures are more likely to
be identified as malingering instruments in coached samples (Suhr & Gunstad,
2000). One reason for the discrepancy between previous research and this current
finding may be related to the manner in which samples in this study were coached.
Specific approaches to personality questionnaires were not referred to in the
coached instructions in this study, perhaps leaving the coached participants feeling
more vulnerable to detection on a measure in which they had not been prepared.
Examination of the strategies used by coached malingerers may help to
elucidate the reason why some measures are more vulnerable to coaching.
Coached simulators in this study endorsed poor concentration and partial memory
loss as the strategies most frequently employed, and total memory loss, nervousness,
and confusion as the least frequently employed. It may be that utilizing a more
subtle symptom approach combined with the knowledge obtained through coaching
allows a malingerer to successfully navigate these measures undetected. Future
research should examine whether measures most vulnerable to coaching are less
sensitive to a subtle symptom approach employing such symptoms as partial memory
loss and poor concentration than to more exaggerated symptoms of complete
memory loss or confusion.
The findings in this study are in contrast to some of the published findings on
malingering detection and coaching. For example, the WCMT was found to be
invulnerable to coaching in a previous study (Hilsabeck et al., 2001) but found to be
vulnerable here. It is likely that the vulnerability of detection measures varies as
coaching methods vary, which may explain the discrepancy. In research, coached
instructions are standardized and based on the particular tests administered but this
is not the case in the real world. Rarely are attorneys aware of all the detection
measures included in the neuropsychological battery. In addition, there is likely no
universally accepted coaching method among attorneys. Importantly, however,
research examining the effects of coaching within simulator samples should not be
considered fruitless. Studies examining the effects of coaching at least provide the
Table 4 Sensitivity rates on coached and uncoached simulators
Measure Uncoached (%) Coached (%)
TOMM 1 cutoff 82.9 80.3
TOMM 2 cutoff 75.7 78.7
TOMM R cutoff 77.1 77.0
TOMM 1 statistically below chance 7.1 1.6
TOMM 2 statistically below chance* 17.1 0.0
TOMM R statistically below chance* 22.9 0.0
MFIT total 12.9 9.8
DCT errors 74.3 72.1
PDRT easy cutoff 35.7 26.2
PDRT hard cutoff 37.1 26.2
PDRT total cutoff 48.6 34.4
PDRT easy statistically below chance* 15.7 1.6
PDRT hard statistically below chance* 15.7 3.3
PDRT total statistically below chance* 20.0 4.9
WMT IR 54.3 45.9
WMT DR 52.9 49.2
WMT CNS 55.7 65.6
WCMT I* 37.1 18.0
WCMT R 44.3 42.6
PAI NIM 22.9 26.2
*p < .05.
TOMM ¼ Test of Memory Malingering; MFIT ¼ Memory for Fifteen Items
Test; DCT ¼ Dot Counting Test; PDRT ¼ Portland Digit Recognition Test;
WMT ¼ Word Memory Test; WCMT ¼ Word Completion Memory Test;
PAI ¼ Personality Assessment Inventory.
examiner with the knowledge that coaching will have some effect on test
performance, even if the effects on particular measures have not been shown to
be consistent.
Overall, the findings of this study make several contributions to forensic
neuropsychological research and practice. Results of this study suggest that a
history of head injury does not allow one to simulate symptoms more successfully.
It is possible, however, that the sample size utilized in this study did not allow for
enough power to detect a difference if one existed; therefore, future research
utilizing a larger sample size may be needed. Furthermore, the participants with a
history of head injury utilized in this study were recruited from university
undergraduate courses and may represent individuals who are functioning at a
higher level compared to others with a history of mild head injury. Perhaps
demographically matching educational variables to real-world malingerers would
improve the generalizability of simulators with a history of head injury.
The findings of this study demonstrate that coaching has a significant effect on
the detection success of several measures of effort. Perhaps the ability to detect
malingering can be increased by incorporating several measures of malingering,
rather than just one, as has been suggested by Bush, Ruff, and Troster (2005). When
doing this, the neuropsychologist should pick measures that vary in degree of
difficulty and face validity as a detection measure. In this way, the neuropsychol-
ogist is more likely to present the patient with a measure or method in which he/she
has not been coached. A clinician would do best to build a detection battery around
the malingering classification categories developed by Slick et al. (1999). Although
several indices were found to be vulnerable to coaching within this study, the
application of Slick et al.'s (1999) classification categories detected probable or
definite malingering in 100% of the coached sample (see Figure 1).

Figure 1 Slick criteria classification categories of coached and uncoached simulators. [Bar chart:
number of coached versus uncoached simulators classified as Definite or Probable malingerers.]

A clinician must
use caution, however, when adding multiple measures to the battery and take steps
to ensure that they are aware of the error rate of using this combination of measures
(for combined error rates see Larrabee, 2003; Vickery, Berry, Inman, Harris, &
Orey, 2001; see also Mittenberg, Rotholc, Russell, & Heilbronner, 1996).
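As a rough illustration of why combined error rates matter, the toy calculation below shows how quickly the chance of at least one false-positive failure grows with battery size if measures erred independently, an assumption real batteries generally violate; the specificity values are arbitrary.

```python
# Toy sketch: chance of at least one false-positive failure across a battery of measures,
# under the (unrealistic) assumption that the measures err independently.
def combined_false_positive_rate(specificities):
    """Probability that an honest examinee fails at least one of the listed measures."""
    p_all_pass = 1.0
    for s in specificities:
        p_all_pass *= s
    return 1.0 - p_all_pass

# Example: five measures, each with 90% specificity
print(round(combined_false_positive_rate([0.90] * 5), 3))  # 0.41
```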
Furthermore, while this study confirms the effects of coaching on specific
techniques and measures designed to detect malingering, the effects of coaching
have not been thoroughly examined in measures of effort embedded within
neuropsychological tests. Future research should examine the effects of coaching
within embedded measures of effort to determine if these too are vulnerable to
coaching.
REFERENCES
Bauer, L., & McCaffrey, R. J. (2006). Coverage of the Test of Memory Malingering, Victoria
Symptom Validity Test, and Word Memory Test on the Internet: Is test security threatened?
Archives of Clinical Neuropsychology, 21(1), 121–126.
Bianchini, K. J., Greve, K. W., & Love, J. M. (2003). Definite malingered neurocognitive
dysfunction in moderate/severe traumatic brain injury. The Clinical Neuropsychologist,
17(4), 574–580.
Binder, L. M. (1993). Assessment of malingering after mild head trauma with the Portland
Digit Recognition Test. Journal of Clinical and Experimental Neuropsychology, 15,
170–182.
Brennan, A. M., & Gouvier, W. D. (2006). Are we honestly studying malingering? A profile
and comparison of simulated and suspected malingerers. Applied Neuropsychology,
13(1), 1–11.
Bush, S., Ruff, R., & Troster, A. (2005). Symptom validity assessment: Practice issues and
medical necessity: NAN Policy & Planning Committee. Archives of Clinical
Neuropsychology, 20(4), 419–426.
Cato, M. A., Brewster, J., Ryan, T., & Guiliano, A. (2002). Coaching and the ability to
simulate mild traumatic brain-injury symptoms. The Clinical Neuropsychologist, 16,
524–535.
Daubert. (1993). Daubert v. Merrell Dow Pharmaceuticals, Inc., 113 S. Ct. 2786.
[509 U.S. 579].
Dunn, T. M., Shear, P. K., Howe, S., & Ris, M. D. (2003). Detecting neuropsychological
malingering: effects of coaching and information. Archives of Clinical Neuropsychology,
18, 121–134.
Essig, S. M., Mittenberg, W., Petersen, R. S., Strauman, S., & Cooper, J. T. (2001). Practices
in forensic neuropsychology: Perspectives of neuropsychologists and trial attorneys.
Archives of Clinical Neuropsychology, 16, 271–291.
Ford, C. V. (1983). The somatizing disorders: Illness as a way of life. New York: Elsevier.
Gouvier, W. D., Lees-Haley, P., & Hammer, J. H. (2003). The neuropsychological
examination in the problem of detecting malingering in the forensic arena: Costs and
benefits. In G. P. Prigatano & N. H. Pliskin (Eds.), Clinical neuropsychology and cost
outcomes research: A beginning (pp. 405–424). New York: Psychology Press.
Green, P., Allen, L. M., & Astner, K. (1996). The Word Memory Test: A user’s guide
to the oral and computer-administered forms, U.S. Version 1.1. Durham, NC:
CogniSyst.
Gunstad, J., & Suhr, J. A. (2001). Efficacy of the full and abbreviated forms of the Portland
Digit Recognition Test: Vulnerability to coaching. The Clinical Neuropsychologist, 15,
397–404.
Guriel-Tennant, J., & Fremouw, W. (2006). Impact of trauma history and coaching on
malingering of posttraumatic stress disorder using the PAI, TSI, and M-FAST. The
Journal of Forensic Psychiatry and Psychology, 17(4), 577–592.
Haines, M. E., & Norris, M. P. (1995). Detecting the malingering of cognitive deficits: An
update. Neuropsychology Review, 5, 125–148.
Heaton, R., Smith, H., Lehman, R., & Vogt, A. (1978). Prospects for faking believable
deficits on neuropsychological testing. Journal of Consulting and Clinical Psychology, 46,
892–900.
Hilsabeck, R. C., & LeCompte, D. C. (1997). Word Completion Memory Test (WCMT).
Durham, NC: CogniSyst.
Hilsabeck, R. C., LeCompte, D. C., Marks, A. R., & Grafman, J. (2001). The Word
Completion Memory Test (WCMT): A new test to detect malingered memory deficits.
Archives of Clinical Neuropsychology, 16, 669–677.
Iverson, G. (2003). Detecting malingering in civil forensic evaluations. In A. MacNeill &
L. C. Hartlage (Eds.), Handbook of forensic neuropsychology (pp. 137–177). New York:
Springer Publishing Co.
Jacoby, L. L. (1991). A process dissociation framework: separating automatic from
intentional uses of memory. Journal of Memory and Language, 30, 513–541.
Ju, D., & Varney, N. (2000). Can head-injury patients simulate malingering? Applied
Neuropsychology, 7, 201–207.
Lamb, D. G., Berry, D. T. R., Wetter, M. W., & Baer, R. A. (1994). Effects of two types of
information on malingering of closed head-injury on the MMPI-2: An analog
investigation. Psychological Assessment, 6, 8–13.
Larrabee, G. (2003). Detection of malingering using atypical performance patterns on
standard neuropsychological tests. The Clinical Neuropsychologist, 17(3), 410–425.
Lezak, M. (2004). Neuropsychological assessment (4th ed.). New York: Oxford University
Press.
Mahar, D., Coburn, B., Griffin, N., Hemeter, F., Potappel, C., Turten, M., et al. (2006).
Stereotyping as a response strategy when faking personality questionnaires. Personality
and Individual Differences, 40(7), 1375–1386.
Martin, R. C., Bolter, J. F., Todd, M. E., Gouvier, W. D., & Nicholls, R. (1993). Effects of
sophistication and motivation on the detection of malingered memory performance
using a computerized forced-choice task. Journal of Clinical and Experimental
Neuropsychology, 15, 867–880.
Martin, R. C., Hayes, J. S., & Gouvier, W. D. (1996). Differential vulnerability between
postconcussion self-report and objective malingering tests in identifying simulated mild
head injury. Journal of Clinical Neuropsychology, 18, 265–275.
Mittenberg, W., Patton, C., Canyock, E. M., & Condit, D. C. (2002). Base rates of
malingering and symptom exaggeration. Journal of Clinical and Experimental
Neuropsychology, 24(8), 1094–1102.
Mittenberg, W., Rotholc, A., Russell, E., & Heilbronner, R. (1996). Identification of
malingered head injury on the Halstead-Reitan Battery. Archives of Clinical
Neuropsychology, 11, 271–281.
Morey, L. C. (1991). Personality Assessment Inventory professional manual. Odessa, FL:
Psychological Assessment Resources.
Nelson, N., Sweet, J., Berry, D., Bryant, F., & Granacher, R. (2007). Response validity in
forensic neuropsychology: exploratory factor analytic evidence of distinct cognitive and
psychological constructs. Journal of the International Neuropsychological Society, 13(3),
440–449.
Rogers, R. (1997). Clinical assessment of malingering and deception (2nd ed.). New York:
Guilford Press.
Rose, F. E., Hall, S., Szalda-Petreem, A. D., & Bach, P. J. (1998). A comparison of four tests
of malingering and the effects of coaching. Archives of Clinical Neuropsychology, 13,
349–363.
Slick, D. J., Sherman, E. M. S., & Iverson, G. L. (1999). Diagnostic criteria for malingering
neurocognitive dysfunction: Proposed standards for clinical practice and research. The
Clinical Neuropsychologist, 13, 545–561.
Slick, D. J., Tan, J. E., Strauss, E. H., & Hultsch, D. F. (2004). Detecting malingering:
A survey of experts’ practices. Archives of Clinical Neuropsychology, 19, 465–473.
Suhr, J. A., & Gunstad, J. (2000). The effects of coaching on the sensitivity and specificity of
malingering measures. Archives of Clinical Neuropsychology, 15, 415–424.
Tan, J. E., Slick, D. J., Strauss, E., & Hultsch, D. F. (2002). How’d they do it? Malingering
strategies on symptom validity tests. The Clinical Neuropsychologist, 16, 495–505.
Tombaugh, T. (1996). Test of Memory Malingering (TOMM). New York: MultiHealth
Systems.
Tombaugh, T. (1997). The test of memory malingering: normative data from cognitively
intact and cognitively impaired individuals. Psychological Assessment, 9, 260–268.
Vickery, C., Berry, D., Inman, T., Harris, M., & Orey, S. (2001). Detection of inadequate
effort on neuropsychological testing: a meta-analytic review of selected procedures.
Archives of Clinical Neuropsychology, 16, 45–73.
Youngjohn, J. R. (1995). Confirmed attorney coaching prior to neuropsychological
evaluation. Assessment, 2(3), 279–283.
    • "The examiner provided the participants in the experimental malingering group with the following scenario and instructions containing symptom coaching two days before testing. This scenario was based on previous studies (Brennan & Gouvier, 2006; Brennan et al., 2009; Suhr & Gunstad, 2000; Tan, Slick, Strauss, & Hultsch, 2002; Weinborn, Woods, Nulsen, & Leighton, 2012 ) and the recommendations outlined by Suhr and Gunstad (2000). Instructions: Six months ago you were involved in a car accident, and you don't suffer any consequences from it at the moment. "
    [Show abstract] [Hide abstract] ABSTRACT: Introduction. Recognition and visual working memory tasks from the Wechsler Memory Scale–Fourth Edition (WMS–IV) have previously been documented as useful indicators for suboptimal performance. The present study examined the clinical utility of the Dutch version of the WMS–IV (WMS–IV–NL) for the identification of suboptimal performance using an analogue study design.Method. The patient group consisted of 59 mixed-etiology patients; the experimental malingerers were 50 healthy individuals who were asked to simulate cognitive impairment as a result of a traumatic brain injury; the last group consisted of 50 healthy controls who were instructed to put forth full effort.Results. Experimental malingerers performed significantly lower on all WMS–IV–NL tasks than did the patients and healthy controls. A binary logistic regression analysis was performed on the experimental malingerers and the patients. The first model contained the visual working memory subtests (Spatial Addition and Symbol Span) and the recognition tasks of the following subtests: Logical Memory, Verbal Paired Associates, Designs, Visual Reproduction. The results showed an overall classification rate of 78.4%, and only Spatial Addition explained a significant amount of variation (p < .001). Subsequent logistic regression analysis and receiver operating characteristic (ROC) analysis supported the discriminatory power of the subtest Spatial Addition. A scaled score cutoff of <4 produced 93% specificity and 52% sensitivity for detection of suboptimal performance.Conclusion. The WMS–IV–NL Spatial Addition subtest may provide clinically useful information for the detection of suboptimal performance.
    Full-text · Article · Feb 2016
    • "Though this approach creates a participant group of known simulators, subjects who pretend they have mental problems under experimental conditions may not respond as do persons who try to simulate impairment in actual evaluation settings. This remains true even when investigators use various methods (such as providing rewards for faking successfully and avoiding detection, coaching subjects about the disorder, or warning subjects about detection strategies [see Rogers, 2008c]) to approximate the circumstances and incentives that motivate real-world malingerers (Brennan et al., 2009). Further, naïve simulators (that is, persons who have never experienced the condition they are asked to simulate) lack the experiences of true illness relied upon by some malingerers who feign continuing symptoms even after recovery (Brennan & Gouvier, 2006), a concern that has prompted some investigators (e.g., Arbisi & Ben-Porath, 1998) to ask established patients to exaggerate their symptoms. "
    [Show abstract] [Hide abstract] ABSTRACT: Mental health professionals often use structured assessment tools to help detect individuals who are feigning or exaggerating symptoms. Yet estimating the accuracy of these tools is problematic because no "gold standard" establishes whether someone is malingering or not. Several investigators have recommended using mixed group validation (MGV) to estimate the accuracy of malingering measures, but simulation studies show that typical implementations of MGV may yield vague, biased, or logically impossible results. In this article we describe a Bayesian approach to MGV that addresses and avoids these limitations. After explaining the concepts that underlie our approach, we use previously published data on the Test of Memory Malingering (TOMM; Tombaugh, 1996) to illustrate how our method works. Our findings concerning the TOMM's accuracy, which include insights about covariates such as study population and litigation status, are consistent with results that appear in previous publications. Unlike most investigations of the TOMM's accuracy, our findings neither rely on possibly flawed assumptions about subjects' intentions nor assume that experimental simulators can duplicate the behavior of real-world examinees. Our conceptual approach may prove helpful in evaluating the accuracy of many assessment tools used in clinical contexts and psycholegal determinations. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
    Full-text · Article · Nov 2014
    • "Though this approach creates a participant group of known simulators, subjects who pretend they have psychological or neurological impairments under experimental conditions may not respond as do persons who try to feign impairment in actual evaluation settings (Rogers & Cruise, 1998). This remains true even when investigators use various methods (such as providing rewards for faking successfully and avoiding detection, coaching subjects about the disorder, or warning subjects about detection strategies [see Rogers, 2008]) to approximate the circumstances and incentives that motivate real-world malingerers (Brennan, Meyer, David, Pella, Hill, & Gouvier, 2009). Further, naïve simulators (that is, persons who have never experienced the condition they are asked to simulate) lack the experiences of true illness relied upon by some malingerers who feign continuing symptoms even after recovery (Brennan & Gouvier, 2006), a concern that has prompted some investigators (e.g., Arbisi & Ben-Porath, 1998) to ask established patients to exaggerate their symptoms. "
    [Show abstract] [Hide abstract] ABSTRACT: Mental health professionals often use structured assessment tools to help detect individuals who are feigning or exaggerating symptoms. Yet estimating the accuracy of these tools is problematic because no "gold standard" establishes whether someone is malingering or not. Several investigators have recommended using mixed-group validation (MGV) to estimate the accuracy of malingering measures, but simulation studies show that typical implementations of MGV may yield vague, biased, or logically impossible results. This article describes a Bayesian approach to MGV that addresses and avoids these limitations. After explaining the concepts that underlie our approach, we use previously published data on the Test of Memory Malingering (TOMM; Tombaugh, 1996) to illustrate how our method works. Our findings concerning the TOMM's accuracy, which include insights about covariates such as study population and litigation status, are consistent with results that appear in previous publications. Unlike most investigations of the TOMM's accuracy, this article's findings neither rely on possibly flawed assumptions about subjects' intentions nor assume that experimental simulators can duplicate the behavior of real-world evaluees. Our conceptual approach may prove helpful in evaluating the accuracy of many assessment tools used in clinical contexts and psycholegal determinations.
    Full-text · Article · Nov 2014 · Psychological Assessment
Show more