About this Research Topic
After the scale has been qualitatively developed, it goes through a rigorous quantitative examination evaluating score reliability and validity. Such validation may include construct, concurrent, predictive, and discriminant validity. For example, there are numerous techniques for evaluating construct validity, such as exploratory factor analysis followed by confirmatory factor analysis. Of course, determining the number of factors in an exploratory factor analysis can be a difficult problem. Many researchers use the classic scree test or Kaiser’s eigenvalue-greater-than-1.0 rule. However, there is research suggesting that these may not be the best techniques, and other procedures with arguably better psychometric properties have been developed, including Velicer’s MAP, parallel analysis, Ruscio and Roche’s CD technique, and Achim’s NEST method.
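As a minimal illustration of one of these alternatives, Horn’s parallel analysis compares the eigenvalues of the observed correlation matrix against eigenvalues obtained from random data of the same dimensions, retaining factors only while the observed eigenvalue exceeds the random benchmark. The sketch below uses only NumPy and the classic PCA-eigenvalue version of the procedure; the function name, the 95th-percentile threshold, and the number of iterations are illustrative assumptions rather than prescriptions.

```python
import numpy as np

def parallel_analysis(data, n_iter=1000, percentile=95, seed=0):
    """Horn's parallel analysis (PCA-eigenvalue version) for factor retention.

    data : (n_observations, n_items) array of item scores.
    Returns the suggested number of factors plus the observed and
    reference eigenvalues (e.g., for a scree-style plot).
    """
    rng = np.random.default_rng(seed)
    n_obs, n_items = data.shape

    # Eigenvalues of the observed correlation matrix, sorted high to low.
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]

    # Eigenvalues from many random-normal data sets of the same size.
    random_eigs = np.empty((n_iter, n_items))
    for i in range(n_iter):
        random_data = rng.standard_normal((n_obs, n_items))
        random_eigs[i] = np.sort(
            np.linalg.eigvalsh(np.corrcoef(random_data, rowvar=False))
        )[::-1]

    # Retain factors only while the observed eigenvalue exceeds the chosen
    # percentile of the random eigenvalues at the same position.
    threshold = np.percentile(random_eigs, percentile, axis=0)
    n_factors = int(np.sum(np.cumprod(observed > threshold)))
    return n_factors, observed, threshold
```

In practice, researchers would typically rely on an established implementation (e.g., fa.parallel in the R psych package); the sketch merely makes the logic visible and shows why parallel analysis tends to retain fewer spurious factors than the eigenvalue-greater-than-1.0 rule.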
Another problem with validation is that the participants often come from a single sample (usually college students). This can limit the generalizability of findings, even though cross-validation could still be used. However, we are beginning to witness questionnaires or scales being translated into a variety of languages so that factor structures and factor scores can be compared. This cross-cultural work may aid in assessing measurement invariance.
This topic has numerous applied implications. For example, when hiring from a group of applicants based on a test, if the scores from this measure lack specific types of validity or reliability, questions of fairness and discrimination arise, potentially leading to lawsuits. To combat these psychometric difficulties and the invalid inferences they produce, numerous guidelines for fair testing practices have been developed (e.g., the Standards for Educational and Psychological Testing; the International Test Commission guidelines).
This Research Topic welcomes all types of empirical articles. We are particularly interested in:
1. Validation of scores on developed scales (questionnaires) in any psychological or social science area. Specific areas may include personality, school psychology, social psychology, developmental psychology, clinical psychology, sport psychology, human-factors psychology, or industrial-organizational psychology. Such types of validation would include construct, concurrent, predictive, and discriminant validity. Moreover, we would especially welcome trans-cultural work. For example, are there differences across cultures or groups (e.g., sexes, ages, education levels, ethnicities) with regard to factor structures, factor scores, or correlations between factors (or the total scale) and a criterion? This could also include studies examining measurement invariance.
2. Scale development with solid psychometric score validation techniques. Once again, we welcome newly developed scales in any psychological or social science area. Although trans-cultural work is not required here for publication (though it is highly desirable), a rigorous development and validation process is. A thorough construct validity examination is essential, and we would particularly welcome construct validity evidence coupled with another type of validation (other than face or content validity).
3. Reliability generalization (RG) and validity generalization (VG) studies.
In all cases, reliability data with the appropriate confidence intervals, where applicable, should also be provided.
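As one hedged illustration of this kind of reporting, Cronbach’s alpha with a bootstrap percentile confidence interval can be computed as in the sketch below. The function names, the 95% level, and the number of bootstrap draws are illustrative assumptions; analytic intervals (e.g., Feldt’s method) or other reliability coefficients are equally acceptable.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def alpha_with_ci(items, n_boot=2000, level=0.95, seed=0):
    """Point estimate plus a bootstrap percentile confidence interval."""
    rng = np.random.default_rng(seed)
    items = np.asarray(items, dtype=float)
    n = items.shape[0]
    # Resample respondents (rows) with replacement and recompute alpha.
    boots = np.array([
        cronbach_alpha(items[rng.integers(0, n, n)]) for _ in range(n_boot)
    ])
    lo, hi = np.percentile(boots, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return cronbach_alpha(items), (lo, hi)
```

Calling alpha_with_ci(scores) on an (n_respondents × n_items) array of item scores returns the point estimate together with the lower and upper interval endpoints, which can then be reported alongside the validity evidence.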
Keywords: psychological testing, psychometrics, quantitative measurement, questionnaire, scale, validation
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.