OPINION article

Front. Psychol., 15 July 2021
Sec. Forensic and Legal Psychology

Investigative Interviewing Research: Ideas and Methodological Suggestions for New Research Perspectives

Nicola Palena1* and Letizia Caso2

  • 1Department of Human and Social Sciences, University of Bergamo, Bergamo, Italy
  • 2Department of Human Sciences, Lumsa University of Rome, Rome, Italy

Introduction

Research on investigative interviewing and lie detection has risen steeply in recent years. Detecting lies is a long-standing human ambition and an interesting research question. Yet, people often hold wrong stereotypes about how to detect lies (The Global Deception Research Team, 2006), which makes sound research paramount. Two examples are the use of non-verbal communication and of microexpressions as means for lie detection. Neither has proven to be a reliable approach to detecting lies (DePaulo et al., 2003; Burgoon, 2018; Jordan et al., 2019), and both have been criticised as ineffective (for two recent overviews, see Vrij et al., 2019; Brennen and Magnussen, 2020). Recent research prioritises verbal content (Masip et al., 2005), which appears to be a better tool for credibility assessment (Vrij, 2015; Amado et al., 2016), and focuses on developing interviewing approaches that aim to enhance the differences between truth tellers and liars (Vrij and Granhag, 2012, 2014). Yet, we are far from being able to accurately and reliably discriminate truth tellers from liars. The reasons can be traced back to several issues.

The Issues

There are several issues related to lie detection. First, a unified theory of lying is lacking (Vrij, 2008; Bond et al., 2015; Nahari et al., 2019), and one needs to be developed that accounts for several factors together (e.g., cognitive, social, neurological, and strategic decision-making processes, as well as linguistics). Second, people are good liars but poor judges (Bond and DePaulo, 2006; Levine, 2010). Third, different people show different cues to deception. Fourth, the importance of intersubjectivity has generally been overlooked. An exception is the Interpersonal Deception Theory of Buller and Burgoon (1996), which stresses the importance of the interaction between the sender and the receiver, but this theory has proven controversial (Bond et al., 2015). Fifth, deception detection should not be seen as an endpoint; rather, it should proceed in parallel with information elicitation (Granhag, in Nahari et al., 2019).

Other issues relate to the methodological and analytic approaches that have typically been employed. Researchers have overlooked several measures, strategies, and statistical analyses that could help address some of the issues listed above.

The Proposals

Account for Interpersonal Differences and Intersubjectivity

One interviewing technique that aims to limit the effect of interpersonal differences is the baseline approach, which predicts that if an observer has a truthful baseline of the sender, deception will be easier to detect. Yet, this approach appears controversial and mostly ineffective (Vrij, 2016; Caso et al., 2019a,b). Efforts have been made to improve its efficacy (Palena et al., 2019; Verigin et al., 2020; Tomas et al., 2021b), and a recent study has questioned its relevance without rejecting it outright (Tomas et al., 2021a). Furthermore, although the baseline approach attempts to deal with personal characteristics, almost all research in investigative interviewing has used a variable-centred approach (Magnusson, 1992, 1998). This can be problematic, as such an approach rests on the assumption that an effect (the relationship between several variables) is the same for the entire population and describes it with a single set of parameters; it says nothing about the influence of personal characteristics. Although the variable-centred approach is parsimonious (results are easy to interpret), it has low specificity (low precision in describing a specific subject or subgroup).

A different approach with increasing popularity is the person-centred approach, which allows people to be studied in an integrative manner (Magnusson, 1998). In this approach, the experimenter starts by selecting the variables of interest, which are then used to group people into specific subpopulations, often called “profiles,” via mixture models or cluster analyses. People within a particular profile are more similar in their patterns of scores on the selected variables than people belonging to different profiles. Once such profiles are obtained, they can be related to other predictor/outcome variables. The main point is that the variable-centred approach presumes that an effect is the same across individuals, whereas the person-centred approach allows an effect to differ among people. To give an example, an experimenter adopting the variable-centred approach may look at the effect of veracity (truth tellers vs. liars) on the amount of detail provided. By contrast, a researcher adopting the person-centred approach may predict that such an effect is not the same for everyone. Hence, the researcher will explore how different personality profiles (e.g., one profile with high extraversion and high neuroticism vs. one profile with high extraversion but low neuroticism) moderate the relationship between veracity (the predictor) and the outcome (e.g., the amount of detail) (Palena et al., under review). It follows that, unlike the variable-centred approach, the person-centred approach (i) may provide several sets of parameters; (ii) is more specific but less parsimonious; and (iii) can deal better with the issue of interpersonal differences/personal characteristics. A possible difficulty in adopting this approach is that it requires larger samples of participants (usually > 500; Meyer and Morin, 2016) than those typically used in investigative interviewing research (for a detailed overview of the person approach, see Magnusson, 1998). There are also specific techniques, such as latent transition analysis, that can explore how people move from one profile to another over time (Lanza et al., 2010), which can be particularly useful to determine which intervening variables may push high-value interviewees from a profile indicating uncooperativeness to one indicating cooperativeness.
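As a minimal sketch of this logic, the following Python code simulates hypothetical data (personality scores, a veracity condition, and a detail count; none of it drawn from real studies), extracts latent profiles with a Gaussian mixture model, and then inspects the veracity effect separately within each profile. It illustrates the general workflow, not the analyses used in the cited studies.

```python
# Person-centred sketch with hypothetical data only.
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
n = 600  # person-centred analyses typically need large samples

# Simulated variables: personality scores, veracity condition
# (0 = liar, 1 = truth teller), and number of details provided.
df = pd.DataFrame({
    "extraversion": rng.normal(0, 1, n),
    "neuroticism": rng.normal(0, 1, n),
    "veracity": rng.integers(0, 2, n),
})
df["details"] = 20 + 5 * df["veracity"] + rng.normal(0, 5, n)

# Step 1: extract latent profiles from the personality variables.
# In practice, the number of profiles would be chosen by comparing
# fit indices (e.g., BIC) across candidate models.
gmm = GaussianMixture(n_components=2, random_state=0)
df["profile"] = gmm.fit_predict(df[["extraversion", "neuroticism"]])

# Step 2: check whether the veracity effect differs across profiles,
# i.e., whether profile membership moderates the effect.
for p, sub in df.groupby("profile"):
    effect = (sub.loc[sub.veracity == 1, "details"].mean()
              - sub.loc[sub.veracity == 0, "details"].mean())
    print(f"Profile {p}: truth teller minus liar detail difference = {effect:.2f}")
```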

The last example is the person-specific approach, which aims to explore effects that are specific to each individual. In this approach, scores on the variables of interest are collected on many occasions, and inferences are usually made about the individual rather than the profile. It follows that the person-specific approach is the most specific but the least parsimonious of the three.
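By way of illustration only (hypothetical data and variable names), a person-specific analysis might fit one model per individual across many repeated measurements, so that the estimated effect belongs to the person rather than to the sample:

```python
# Person-specific sketch: one model per individual, hypothetical data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
records = []
for person in range(5):
    slope = rng.normal(4, 2)            # the "veracity effect" differs per person
    for occasion in range(40):          # many measurement occasions per individual
        veracity = occasion % 2
        details = 15 + slope * veracity + rng.normal(0, 3)
        records.append({"person": person, "veracity": veracity, "details": details})
df = pd.DataFrame(records)

# One regression per individual: the inference concerns that person only.
for person, sub in df.groupby("person"):
    fit = smf.ols("details ~ veracity", data=sub).fit()
    print(f"Person {person}: estimated veracity effect = {fit.params['veracity']:.2f}")
```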

The person-centred and person-specific approaches deal with interpersonal characteristics better than the variable-centred approach, but they still miss the role of the relationship built between the interviewer and the interviewee. This can be studied via dyadic analyses such as the actor-partner interdependence model (Cook and Kenny, 2005), which deals with the influence that one member of the dyad has on the other and vice versa. Because investigative interviewing is an interactive process between the interviewer(s) and the interviewee(s), such models become particularly relevant.
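The sketch below illustrates the actor-partner logic in its simplest regression form, using made-up variable names; a full actor-partner interdependence model would typically model both dyad members' outcomes jointly (e.g., via multilevel or structural equation models).

```python
# Dyadic sketch: actor and partner effects on a single outcome,
# with hypothetical interviewer-interviewee data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_dyads = 200

dyads = pd.DataFrame({
    "interviewer_openness": rng.normal(0, 1, n_dyads),
    "interviewee_openness": rng.normal(0, 1, n_dyads),
})
# Hypothetical outcome: how much the interviewee discloses, shaped by
# their own behaviour (actor effect) and the interviewer's (partner effect).
dyads["disclosure"] = (0.4 * dyads["interviewee_openness"]    # actor effect
                       + 0.3 * dyads["interviewer_openness"]  # partner effect
                       + rng.normal(0, 1, n_dyads))

# Actor and partner effects estimated jointly in one regression.
model = smf.ols("disclosure ~ interviewee_openness + interviewer_openness",
                data=dyads).fit()
print(model.summary())
```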

Methodological and Analytic Suggestions

Investigative interviewing research also suffers from methodological and analytical issues. One concerns the reporting of effect sizes. Most of the time, researchers report standardised mean differences, usually Cohen's d, which is not easily interpretable and says nothing about how well an interviewing technique or a deception cue discriminates truth tellers from liars. Imagine a d = 0.50 for a hypothesis predicting that truth tellers obtain a higher score than liars on a specific variable. Although this is a “medium” effect, additional statistics can shed light on its true usefulness.

One approach is to explore the overlap between the distributions of the groups. Cohen (1988), for example, developed the U3 statistic, which expresses the percentage of scores in the group with the lower mean that are exceeded by the mean of the group with the higher mean. The Probability of Superiority is instead the probability that a person taken at random from one group will have a higher score than a person taken at random from the other group. With a Cohen's d = 0.50, 80.30% of the two distributions (e.g., truth tellers and liars) still overlap, U3 is 69.15%, and the Probability of Superiority is 63.80%. Hence, discriminability here is, in fact, very limited. Other useful measures are the Probability of Superiority of Effect Sizes (PSES), which transforms effect sizes into percentiles (Arce et al., 2015; Monteiro et al., 2018); the Probability of Inferiority Score (PIS), which is the probability that one group obtains a score lower than the mean score of another group (Monteiro et al., 2018; Arias et al., 2020); and the discriminant coefficient (DISCO), which describes the proportion of people in one group that fall below the lower score of the other group (Guttman, 1988, 1989). The problem is that measures such as Cohen's U3 require specific properties of the distribution(s), such as unimodality and symmetry. A recent, distribution-free overlapping measure is very useful in dealing with this issue (Pastore and Calcagnì, 2019) and can be easily computed via an R package (Pastore, 2018). Finally, a recent study also proposes another effect size that can be easily understood by laypeople: Persons as Effect Sizes (Grice et al., 2020), which describes the proportion of participants who match a specific theoretical expectation.
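The figures above can be verified with a few lines of code. Under the usual normal-theory assumptions (unimodal, symmetric distributions with equal variances), U3, the Probability of Superiority, and the overlap between the two distributions all follow directly from d:

```python
# Overlap-based effect size statistics derived from Cohen's d,
# assuming two normal distributions with equal variances.
from scipy.stats import norm

d = 0.50
u3 = norm.cdf(d)                 # Cohen's U3
ps = norm.cdf(d / 2 ** 0.5)      # Probability of Superiority
ovl = 2 * norm.cdf(-abs(d) / 2)  # overlapping coefficient of the two distributions

print(f"U3 = {u3:.2%}")                          # ~69.15%
print(f"Probability of Superiority = {ps:.2%}")  # ~63.80%
print(f"Distribution overlap = {ovl:.2%}")       # ~80.3%
```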

Conclusion

Research in investigative interviewing and deception detection is a constantly growing field with important practical applications. Yet, improvements and the adoption of less common methodological/statistical approaches may be needed. By applying them, it would be possible to obtain a more complete picture of the effectiveness of specific interviewing techniques and cues to deception, to refine theories, and to make results more transferable to applied settings. In particular, it is important to implement person-centred approaches and to explore how different profiles moderate the effects of predictors such as interviewing strategy and veracity on the outcome variables. This is not to say that the variable-centred approach should be disregarded; rather, the researcher should select the approach according to the aims of the study. The person-centred approach can help cope with a central question in this area: Is this person lying? It will not be the ultimate solution, but it can increase specificity at the cost of a limited loss in parsimony.

Also, if we focus on effect size measures other than the classical Cohen's d, we can compare the efficacy of different techniques not only in terms of mean differences between experimental groups but also in terms of their capability to discriminate group membership, such as truth telling vs. lying.

Two last points are worth making. First, although this research area has seen an increase in the reporting of Bayesian analyses, this is often limited to the reporting of Bayes factors. In doing so, we miss the most powerful tool in the Bayesian toolbox: cumulating and updating knowledge and evidence (Wagenmakers et al., 2018a,b). Future studies should therefore try to implement all its features, including the use of prior and posterior distributions, whenever possible.
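As a minimal sketch of what such updating can look like (a simple Beta-Binomial model with invented numbers, not drawn from the cited papers), a prior on classification accuracy can be updated study by study, with each posterior serving as the prior for the next:

```python
# Bayesian updating sketch: a Beta prior on lie-detection accuracy is
# updated with hypothetical study results via conjugate updating.
from scipy.stats import beta

# Weakly informative prior centred around chance-level accuracy.
a, b = 2, 2

studies = [   # hypothetical (correct classifications, total judgements)
    (62, 100),
    (55, 90),
]

for hits, total in studies:
    a += hits            # conjugate update: add successes
    b += total - hits    # and failures
    posterior = beta(a, b)
    low, high = posterior.interval(0.95)
    print(f"Posterior mean accuracy = {posterior.mean():.3f} "
          f"(95% credible interval {low:.3f}-{high:.3f})")
```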

Second, it has already been suggested elsewhere that we need new measures for testing the efficacy of interviewing techniques. Yet, apart from rare exceptions (Vrij et al., 2021), few new measures have been developed. One suggestion we would like to offer concerns exploring how effective a specific technique is at eliciting information. The researcher could, for example, compare a given technique with another by focusing on the amount of useful and true information provided by the interviewee. Such a measure could be obtained through a ratio between true and total information [true information/(true information + false information)], similar to what is done in eyewitness research. The difference here is that false information should only account for deliberate lying (not for memory errors). This requires asking participants, after the interview has taken place, which information was true and which was false, but it would provide researchers with interesting insight into how well a technique maximises the amount of true information elicited and, consequently, how well it discourages lying. Similarly, a ratio between relevant and total information could be developed [relevant information/(relevant information + irrelevant information)], in collaboration with practitioners who could indicate what is relevant and what is not. Of course, such measures are just coarse-grained suggestions that need refining; a toy illustration is sketched below.
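With invented counts purely for illustration, the two ratios amount to the following:

```python
# Toy illustration of the proposed elicitation measures.
def true_information_ratio(true_details: int, false_details: int) -> float:
    """Proportion of true details, counting only deliberate lies as false."""
    return true_details / (true_details + false_details)

def relevance_ratio(relevant_details: int, irrelevant_details: int) -> float:
    """Proportion of investigatively relevant details, as coded with practitioners."""
    return relevant_details / (relevant_details + irrelevant_details)

# Example: an interview coded as 42 true and 8 deliberately false details,
# of which 30 are relevant and 20 irrelevant.
print(true_information_ratio(42, 8))   # 0.84
print(relevance_ratio(30, 20))         # 0.60
```

We hope that this article will be useful for researchers and practitioners alike working in investigative interviewing.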

Author Contributions

NP conceived the idea of this article and wrote the manuscript. LC contributed to the writing of the article and provided feedback. Both authors contributed to the article and approved the submitted version.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Amado, B. G., Arce, R., Fariña, F., and Vilariño, M. (2016). Criteria-based content analysis (CBCA) reality criteria in adults: a meta-analytic review. Int. J. Clin. Health Psychol. 16, 201–210. doi: 10.1016/j.ijchp.2016.01.002

Arce, R., Fariña, F., Seijo, D., and Novo, M. (2015). Assessing impression management With the MMPI-2 in child custody litigation. Assessment 22, 769–777. doi: 10.1177/1073191114558111

Arias, E., Fernández, R. A., Vázquez, M. J., and Marcos, V. (2020). Treatment efficacy on the cognitive competence of convicted intimate partner violence offenders. Ann. Psychol. 36, 427–435. doi: 10.6018/analesps.428771

Bond, C. F. Jr., and DePaulo, B. M. (2006). Accuracy of deception judgments. Pers. Soc. Psychol. Rev. 10, 214–234.

Bond, C. F., Levine, T. R., and Hartwig, M. (2015). “New findings in non-verbal lie detection,” in Detecting Deception: Current Challenges and Cognitive Approaches, eds P. A. Granhag, A. Vrij, and B. Verschuere (Chichester: John Wiley and Sons).

Brennen, T., and Magnussen, S. (2020). Research on non-verbal signs of lies and deceit: a blind alley. Front. Psychol. 11:613410.

Buller, D. B., and Burgoon, J. K. (1996). Interpersonal deception theory. Commun. Theory 6, 203–242.

Burgoon, J. K. (2018). Microexpressions are not the best way to catch a liar. Front. Psychol. 9:1672.

Caso, L., Palena, N., Carlessi, E., and Vrij, A. (2019a). Police accuracy in truth/lie detection when judging baseline interviews. Psychiatry Psychol. Law 26, 841–850. doi: 10.1080/13218719.2019.1642258

Caso, L., Palena, N., Vrij, A., and Gnisci, A. (2019b). Observers’ performance at evaluating truthfulness when provided with comparable truth or small talk baselines. Psychiatry Psychol. Law. 26, 571–579. doi: 10.1080/13218719.2018.1553471

Cohen, J. (1988). Statistical Power Analysis. Hillsdale, NJ: Erlbaum.

Cook, W. L., and Kenny, D. A. (2005). The actor–partner interdependence model: a model of bidirectional effects in developmental studies. Int. J. Behav. Dev. 29, 101–109. doi: 10.1080/01650250444000405

DePaulo, B. M., Lindsay, J. J., Malone, B. E., Muhlenbruck, L., Charlton, K., and Cooper, H. (2003). Cues to deception. Psychol. Bull. 129, 74–118.

Grice, J. W., Medellin, E., Jones, I., Horvath, S., McDaniel, H., O’lansen, C., et al. (2020). Persons as effect sizes. Adv. Methods Pract. Psychol. Sci. 3, 443–455.

Guttman, L. (1988). Eta, disco, odisco, and F. Psychometrika 53, 393–405. doi: 10.1007/bf02294220

Guttman, L. (1989). Eta, disco, odisco, and F. Hist. Soc. Res. 14, 68–88.

Jordan, S., Brimbal, L., Wallace, D. B., Kassin, S. M., Hartwig, M., and Street, C. N. H. (2019). A test of the micro-expressions training tool: does it improve lie detection? J. Investigative Psychol. Offender Profiling 16, 222–235. doi: 10.1002/jip.1532

Lanza, S. T., Patrick, M. E., and Maggs, J. L. (2010). Latent transition analysis: benefits of a latent variable approach to modeling transitions in substance use. J. Drug Issues 40, 93–120. doi: 10.1177/002204261004000106

Levine, T. R. (2010). A few transparent liars explaining 54% accuracy in deception detection experiments. Ann. Int. Commun. Assoc. 34, 41–61. doi: 10.1080/23808985.2010.11679095

Magnusson, D. (1992). Individual development: a longitudinal perspective. Eur. J. Pers. 6, 119–138.

Magnusson, D. (1998). “The logic and implications of a person approach,” in Methods and Models for Studying the Individual, eds R. B. Cairns, L. R. Bergman, and J. Kagan (Thousand Oaks, CA: Sage), 33–64.

Masip, J., Sporer, S. L., Garrido, E., and Herrero, C. (2005). The detection of deception with the reality monitoring approach: a review of the empirical evidence. Psychol. Crime Law 11, 99–122. doi: 10.1080/10683160410001726356

Meyer, J. P., and Morin, A. J. S. (2016). A person-centered approach to commitment research: theory, research, and methodology. J. Organ. Behav. 37, 584–612. doi: 10.1002/job.2085

Monteiro, A., José Vázquez, M., Seijo, D., and Arce, R. (2018). ¿Son los criterios de realidad válidos para clasificar y discernir entre memorias de hechos auto-experimentados y de eventos vistos en vídeo? [Are reality criteria valid for classifying and discriminating between memories of self-experienced events and events seen on video?]. Revista Iberoamericana de Psicología y Salud 9, 149–160.

Nahari, G., Ashkenazi, T., Fisher, R. P., Granhag, P. A., Hershkowitz, I., Masip, J., et al. (2019). ‘Language of lies’: urgent issues and prospects in verbal lie detection research. Legal Criminol. Psychol. 24, 1–23. doi: 10.1111/lcrp.12148

Palena, N., Caso, L., and Vrij, A. (2019). Detecting lies via a theme-selection strategy. Front. Psychol. 9:2775.

Palena, N., Caso, L., Cavagnis, L., and Greco, A. (under review). Profiling the Interrogee: Applying the Person-Centred Approach in Investigative Interviewing Research.

Pastore, M. (2018). Overlapping: a R package for estimating overlapping in empirical distributions. J. Open Source Softw. 3:1023. doi: 10.21105/joss.01023

Pastore, M., and Calcagnì, A. (2019). Measuring distribution similarities between samples: a distribution-free overlapping index. Front. Psychol. 10:1089.

The Global Deception Research Team (2006). A world of lies. J. Cross Cult. Psychol. 37, 60–74.

Tomas, F., Dodier, O., and Demarchi, S. (2021a). Baselining affects the production of deceptive narratives. Appl. Cogn. Psychol. 35, 300–307. doi: 10.1002/acp.3768

Tomas, F., Tsimperidis, I., Demarchi, S., and El Massioui, F. (2021b). Keyboard dynamics discrepancies between baseline and deceptive eyewitness narratives. Appl. Cogn. Psychol. 35, 112–122. doi: 10.1002/acp.3743

Verigin, B. L., Meijer, E. H., and Vrij, A. (2020). A within-statement baseline comparison for detecting lies. Psychiatry Psychol. Law 1–10. doi: 10.1080/13218719.2020.1767712

Vrij, A. (2008). Detecting Lies and Deceit: Pitfalls and Opportunities, 2nd edition. Chichester: John Wiley and Sons.

Vrij, A. (2015). “Verbal lie detection tools: statement validity analysis, reality monitoring and scientific content analysis,” in Detecting Deception: Current Challenges and Cognitive Approaches, eds P. A. Granhag, A. Vrij, and B. Verschuere (Chichester: John Wiley and Sons), 3–35.

Vrij, A. (2016). Baselining as a lie detection method. Appl. Cogn. Psychol. 30, 1112–1119. doi: 10.1002/acp.3288

Vrij, A., and Granhag, P. A. (2012). Eliciting cues to deception and truth: what matters are the questions asked. J. Appl. Res. Memory Cogn. 1, 110–117. doi: 10.1016/j.jarmac.2012.02.004

Vrij, A., and Granhag, P. A. (2014). Eliciting information and detecting lies in intelligence interviewing: an overview of recent research. Appl. Cogn. Psychol. 28, 936–944. doi: 10.1002/acp.3071

Vrij, A., Hartwig, M., and Granhag, P. A. (2019). Reading lies: nonverbal communication and deception. Annu. Rev. Psychol. 70, 295–317. doi: 10.1146/annurev-psych-010418-103135

Vrij, A., Palena, N., Leal, S., and Caso, L. (2021). The relationship between complications, common knowledge details and self-handicapping strategies and veracity: a meta-analysis. Eur. J. Psychol. Appl. Legal Context. doi: 10.5093/ejpalc2021a7. Online ahead of print.

Wagenmakers, E.-J., Love, J., Marsman, M., Jamil, T., Ly, A., Verhagen, J., et al. (2018a). Bayesian inference for psychology. part II: example applications with JASP. Psychon. Bull. Rev. 25, 58–76. doi: 10.3758/s13423-017-1323-7

Wagenmakers, E.-J., Marsman, M., Jamil, T., Ly, A., Verhagen, J., Love, J., et al. (2018b). Bayesian inference for psychology. part I: theoretical advantages and practical ramifications. Psychon. Bull. Rev. 25, 35–57. doi: 10.3758/s13423-017-1343-3

Keywords: investigative interviewing, lie detection, overlapping measure, person-centred analyses, intersubjectivity

Citation: Palena N and Caso L (2021) Investigative Interviewing Research: Ideas and Methodological Suggestions for New Research Perspectives. Front. Psychol. 12:715028. doi: 10.3389/fpsyg.2021.715028

Received: 26 May 2021; Accepted: 09 June 2021;
Published: 15 July 2021.

Edited by:

Colleen M. Berryessa, Rutgers University, Newark, United States

Reviewed by:

Olivier Dodier, Université de Nantes, France

Copyright © 2021 Palena and Caso. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Nicola Palena, nicola.palena@unibg.it
