How do you Behave as a Psychometrician? Research Conduct in the Context of Psychometric Research

Published online by Cambridge University Press: 17 May 2023

Pablo Ezequiel Flores-Kanter*
Affiliation: Universidad Católica de Córdoba (Argentina); Consejo Nacional de Investigaciones Científicas y Técnicas (Argentina)

Mariano Mosquera
Affiliation: Universidad Católica de Córdoba (Argentina)

*Correspondence concerning this article should be addressed to Pablo Ezequiel Flores-Kanter, Universidad Católica de Córdoba, Centro de Bioética, Córdoba (Argentina). E-mail: ezequielfk@gmail.com; pablo.floreskanter@conicet.gov.ar

Abstract

The identification of fraudulent and questionable research conduct is not new. Over the last 12 years, however, efforts have shifted toward identifying specific problems and concrete solutions applicable to each area of knowledge. For example, previous work has focused on questionable and responsible research conduct associated with clinical assessment and measurement practices in psychology and related sciences, or with specific areas of study, such as suicidology. One area that merits closer examination of questionable and responsible research conduct is psychometrics. Focusing on psychometric research is important and necessary: Without adequate evidence of construct validity, the overall validity of the research is at best debatable. Our aims here are to (a) identify questionable research conduct specifically linked to psychometric studies, and (b) promote greater awareness and wider application of responsible research conduct in psychometric research. We believe that identifying and recognizing these behaviors will help us improve our daily work as psychometricians.

Type
Review Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Universidad Complutense de Madrid and Colegio Oficial de la Psicología de Madrid

Footnotes

Funding Statement: This research received no specific grant from any funding agency, commercial or not-for-profit sectors.

Conflicts of Interest: None.

Authorship credit: Pablo Ezequiel Flores-Kanter: Conceptualization, Writing – original draft, Writing – review & editing, Supervision. Mariano Mosquera: Writing – review & editing, Supervision.

Data Sharing: Not applicable.
