Background and Purpose: Psychological assessment of parents in post-divorce child custody disputes highlights parents’ motivation to present themselves as adaptive and responsible caregivers. The study hypothesized that personality self-report measures completed by child custody litigants (CCLs) during a parental skills assessment would show underreporting, compromising the interpretability of the measures. The study also analyzed gender differences in the CCL sample, general CCL profiles, and the internal structure of the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) in the CCL sample.
Materials and Methods: The sample comprised 400 CCLs undergoing personality evaluation as part of a parenting skills assessment. The mean age of the 204 mothers was 41.31 years (SD = 6.6), with an overall range of 24–59 years. Mothers had a mean educational level of 14.48 years (SD = 3.2). The 196 fathers were aged 20–59 years (M = 42.31; SD = 7.8), with an average of 14.48 years (SD = 3.9) of education. The MMPI-2-RF was administered. To test the hypotheses, multivariate analyses of variance (MANOVAs) and two-step cluster analyses were run.
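The statistical workflow described above (group comparisons via MANOVA followed by cluster-based profiling) can be illustrated with a minimal sketch. The snippet below is not the authors' code: the file name, scale-score column names, and the substitution of k-means for SPSS-style two-step clustering are assumptions made purely for illustration.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical data: one row per participant, with a "group" column
# ("CCL" vs. "normative") and MMPI-2-RF scale scores.
df = pd.read_csv("mmpi2rf_scores.csv")

# Omnibus test of group differences across selected validity and RC scales.
manova = MANOVA.from_formula("L_r + K_r + F_r + RC1 + RC2 + RC3 ~ group", data=df)
print(manova.mv_test())

# The study used two-step cluster analysis (an SPSS procedure); k-means on
# standardized scores is used here only as a rough stand-in for profile extraction.
ccl = df[df["group"] == "CCL"]
X = StandardScaler().fit_transform(ccl[["L_r", "K_r", "RC1", "RC2", "RC3"]])
ccl = ccl.assign(cluster=KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X))
print(ccl.groupby("cluster").mean(numeric_only=True))
```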
Results: Compared with the normative sample, CCL subjects scored higher on the underreporting validity scales (L-r and K-r) and lower on the overreporting validity scales [F-r, Fp-r, Fs-r, and the Response Bias Scale (RBS)] and on the restructured clinical (RC) scales, with the exception of RC2 and RC8. RC6 (Ideas of Persecution) was the most elevated RC scale. Intercorrelations among the RC scales differed significantly between the CCL and normative samples. Compared with CCL men, CCL women appeared strongly motivated to present a defensive, faking-good profile and showed lower levels of cynicism and antisocial behavior. Two-step cluster analyses identified three female CCL profiles and two male CCL profiles. Approximately 44% of the MMPI-2-RF profiles were judged as possibly underreporting and, for this reason, considered uninterpretable.
Discussion: The present study adds useful insight into which instruments are effective for assessing the personality characteristics of parents undergoing a parental skills assessment in the context of a child custody dispute. The results show that almost half of the MMPI-2-RF protocols in the CCL sample were uninterpretable because they reflected an underreporting response style. This highlights the need to interpret CCL profiles in light of normative data collected specifically in forensic settings, as well as the need for new and more effective methods of administering the MMPI-2-RF.
Background and Purpose. The use of machine learning (ML) models to detect malingering has yielded encouraging results, with promising accuracy levels. We investigated whether this methodology, trained on behavioral features such as response time (RT) and time pressure, can identify faking behavior in self-report personality questionnaires. To do so, we built on the study by Roma et al. (2018), which showed that RTs and time pressure are useful variables for detecting faking; we then enlarged the sample and applied ML analyses.
Materials and Methods. The sample comprised 175 subjects, all of whom were male, Caucasian, and graduates (having completed at least 17 years of education). Subjects were randomly assigned to one of four groups: honest speeded, faking-good speeded, honest unspeeded, and faking-good unspeeded. A computerized version of the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF) was administered.
Results. ML algorithms reached very high accuracy (around 95%) in detecting malingerers when subjects were instructed to respond under time pressure. The classifiers' performance was lower when subjects responded to the MMPI-2-RF items with no time restriction, with accuracies ranging from 75% to 85%. Further analysis showed that validity-scale T-scores were ineffective at detecting fakers when participants were not under time pressure (accuracies of 55–65%), whereas temporal features proved more useful (accuracies of 70–75%). By contrast, temporal features and validity-scale T-scores were equally effective in detecting fakers when subjects were under time pressure (accuracies above 90%).
Discussion. In conclusion, the results demonstrated that ML techniques are highly valuable and outperform more traditional psychometric techniques in detecting fakers in self-report personality questionnaires. The validity-scale criteria of the MMPI-2-RF manual performed poorly in identifying underreported profiles. Moreover, temporal measures are useful for distinguishing honest from dishonest responders, particularly when no time pressure is applied. Indeed, time pressure brings out malingerers more clearly than a no-time-pressure condition does.
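The comparison reported above, temporal features versus validity-scale T-scores as classifier inputs, can be sketched as follows. This is not the authors' pipeline: the file name, feature names, choice of random forest, and 10-fold cross-validation are illustrative assumptions only.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical data: one row per subject, a "condition" label
# ("honest" vs. "faking_good"), item-level timing summaries, and validity T-scores.
df = pd.read_csv("mmpi2rf_faking_study.csv")
y = df["condition"]

feature_sets = {
    "temporal": ["mean_rt", "sd_rt", "mean_rt_validity_items", "total_time"],
    "validity_T": ["L_r_T", "K_r_T", "F_r_T", "Fp_r_T", "Fs_r_T", "RBS_T"],
}

# Cross-validated accuracy for each feature set, mirroring the kind of
# comparison reported in the Results paragraph.
for name, cols in feature_sets.items():
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    scores = cross_val_score(clf, df[cols], y, cv=10, scoring="accuracy")
    print(f"{name}: mean 10-fold accuracy = {scores.mean():.2f}")
```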
An individual's ability to discriminate lies from truth is far from accurate and is poorly related to his or her confidence in that judgment. Both law enforcement and non-professional interviewers base their evaluations of truthfulness on experiential criteria, including emotional and expressive features, cognitive complexity, and paraverbal aspects of interviewees' reports. The current experimental study adopted two perspectives of investigation: the first assessed the ability of naïve judges to detect lies or truth by watching a videotaped interview; the second considered how detectable the interviewee was, as a liar or a truth-teller, across a sample of judges. Additionally, the study evaluated the criteria adopted to support lie/truth detection and related them to detection accuracy and confidence. Results showed that judges' detection ability was moderately accurate and associated with a moderate sense of confidence, with slightly better accuracy for truth detection than for lie detection. Detection accuracy was negatively associated with detection confidence when the interviewee was a liar, and positively associated when the interviewee was a truth-teller. Furthermore, judges supported lie detection with criteria concerning emotional features, and sustained truth detection by considering the cognitive complexity and the paucity of expressive manifestations in the interviewee's report. The present findings have implications for judicial decisions about witnesses' credibility.
Malingering, the feigning of psychological or physical ailment for gain, imposes high costs on society, especially on the criminal-justice system. In this article, we review some of the costs of malingering in forensic contexts. We then review the most common methods of malingering detection, including those for feigned psychiatric and cognitive impairments, and consider the shortcomings of each. The article continues with a discussion of commonly used means of detecting deception. We emphasize new, innovative methods that, although not traditionally used to uncover malingering, attempt to induce greater cognitive load on liars than on truth tellers; some of these methods are informed by theoretical accounts of deception. Because malingering is a type of deception, we argue that such cognitive approaches and theoretical understanding can be adapted to the detection of malingering to supplement existing methods.
Many violent offenders report amnesia for their crime. Although this type of memory loss is possible, there are reasons to assume that many claims of crime-related amnesia are feigned. This article describes ways to evaluate the genuineness of crime-related amnesia. A recent case is described in which several of these strategies yielded evidence for feigned crime-related amnesia.
We have been reliably informed by practitioners that police officers and intelligence officers across the world have started to use the Model Statement lie detection technique. In this article we introduce this technique. We describe why it works, report the empirical evidence that it works, and outline how to use it. Research examining the Model Statement only started recently, and more research is required; we give suggestions for future research with the technique. The Model Statement technique is one of many recently developed verbal lie detection methods. We start this article with a short overview of what are, in our view, the most promising recent developments in verbal lie detection before turning our attention to the Model Statement technique.
Major depression is a high-prevalence mental disorder with major socio-economic impact, in terms of both direct and indirect costs. Major depression symptoms can be faked or exaggerated to obtain economic compensation from insurance companies. Critically, depression is relatively easy to malinger, as the symptoms that characterize this psychiatric disorder are not difficult to emulate. Although some tools to assess the malingering of psychiatric conditions are already available, they are principally based on self-report and are thus easily faked. In this paper, we propose a new method to automatically detect simulated depression, based on the analysis of mouse movements while the respondent is engaged in a double-choice computerized task, answering simple and complex questions about depressive symptoms. This tool has a key advantage over existing tools: the kinematics of the movement are not under the respondent's conscious control and are therefore very difficult to fake. Two groups of subjects were recruited for the study. The first, used to train different machine-learning algorithms, comprised 60 subjects (20 depressed patients and 40 healthy volunteers); the second, used to test the machine-learning models, comprised 27 subjects (9 depressed patients and 18 healthy volunteers). In both groups, the healthy volunteers were randomly assigned to a liars group or a truth-tellers group. Machine-learning models were trained on mouse-dynamics features collected during the subjects' responses and on the number of symptoms reported by participants. Statistical results showed that individuals who malingered depression reported more depressive and non-depressive symptoms than depressed participants, whereas individuals suffering from depression took more time to perform the mouse-based tasks than both truth-tellers and liars. Machine-learning models reached a classification accuracy of up to 96% in distinguishing liars from depressed patients and truth-tellers. Nevertheless, the data are not conclusive, as the accuracy of the algorithm has not been compared with that of clinicians; this study therefore presents a potentially useful method that warrants further investigation.
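As a rough illustration of the kind of model described in this abstract, the sketch below trains a classifier on mouse-dynamics features plus the number of reported symptoms, evaluated on a held-out test set mirroring the 60/27-subject split. The file names, feature names, and the choice of gradient boosting are assumptions for illustration, not the authors' implementation.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

# Hypothetical files mirroring the 60-subject training set and 27-subject test set.
train = pd.read_csv("mouse_task_train.csv")
test = pd.read_csv("mouse_task_test.csv")

# Illustrative kinematic features plus the count of reported symptoms.
features = ["response_time", "trajectory_length", "max_deviation",
            "peak_velocity", "n_reported_symptoms"]

# Labels in the "group" column: "depressed", "liar", "truth_teller".
clf = GradientBoostingClassifier(random_state=0)
clf.fit(train[features], train["group"])

pred = clf.predict(test[features])
print("Held-out accuracy:", accuracy_score(test["group"], pred))
```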