Project Details
The role of measurement in the replicability of empirical findings
Applicants
Dr. Susanne Frick; Professor Dr. Eunike Wetzel
Subject Area
Personality Psychology, Clinical and Medical Psychology, Methodology
Term
since 2021
Project identifier
Deutsche Forschungsgemeinschaft (DFG) - Project number 464394046
The first part of this project found that reporting on item-based measures is nontransparent and incomplete not only in original research, but also in replication studies. Furthermore, the first part investigated how different modifications of measures, such as dropping items, affect replicability and the heterogeneity of effect sizes. Building on this research, the second part of the project pursues two goals: 1) increasing transparency and improving reporting practices on measurement in published research and 2) providing hands-on advice to replicators and metascientists who have to deal with heterogeneity in measurement across studies.

To achieve the first goal, we will develop and evaluate the Measures Checklist, a checklist that requires authors to report which measure they used, whether they modified it, and if so, how. To aid the completion of the checklist, we will develop the Measures Shiny app, which will fill out the checklist automatically from the manuscript's text. Authors will then check and revise the responses and submit the Measures Checklist together with their manuscript. The Measures Checklist will be developed and evaluated in a series of seven steps: from drafting the first version, through building and training the machine learning model underlying the Shiny app and checking the accuracy of its responses and the feasibility of the checklist in an empirical study, to implementing the checklist at a journal with a follow-up evaluation of the trial period.

The second goal will be achieved in two ways. First (a), we will develop a taxonomy of modifications and their differing impact on replication success and the heterogeneity of effect sizes, along with the Modifications Shiny app, which allows researchers to evaluate the potential impact of a specific modification in more detail. This tool can be used both a priori by replicators anticipating having to modify a measure and post hoc by metascientists. Second (b), we will investigate to what extent measurement invariance (MI) can be violated, due to differences in other study characteristics either between the original study and a replication study or between different replication studies, without affecting replicability. We will address this using three complementary approaches: an analytical investigation that quantifies the bias in effect estimates when violations of MI occur, a simulation study that extends this to factor score estimation, and an empirical analysis of existing data sets. This research will inform the development of a tool that allows researchers to judge the impact of violations of MI on replication success.

This project contributes to META-REP's guiding questions by investigating violations of MI as a factor explaining replicability ("WHY"). Its main focus is the "HOW" question, which is addressed by developing and evaluating tools that increase transparency in reporting and thereby change norms, as well as tools that allow researchers to judge the impact of measurement-related cross-study differences on replicability.
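To make the MI terminology concrete, the following minimal sketch (illustrative only, not part of the project's own materials) shows how the usual invariance levels could be tested across an original and a replication sample using the R package lavaan; the data frame dat, the grouping variable study, and the item names are hypothetical.

    # Minimal sketch: multi-group CFA invariance tests with lavaan.
    # dat, "study", and item1-item4 are hypothetical placeholders.
    library(lavaan)

    model <- 'trait =~ item1 + item2 + item3 + item4'

    # Configural: same factor structure, all parameters free per group
    fit_configural <- cfa(model, data = dat, group = "study")

    # Metric: loadings constrained equal across groups
    fit_metric <- cfa(model, data = dat, group = "study",
                      group.equal = "loadings")

    # Scalar: loadings and intercepts constrained equal across groups
    fit_scalar <- cfa(model, data = dat, group = "study",
                      group.equal = c("loadings", "intercepts"))

    # Compare nested models; a notable deterioration in fit signals
    # a violation of the corresponding level of MI.
    lavTestLRT(fit_configural, fit_metric, fit_scalar)

In this framework, violated loadings (metric non-invariance) or intercepts (scalar non-invariance) are the kinds of cross-study measurement differences whose impact on effect estimates the project's analytical, simulation, and empirical analyses quantify.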
DFG Programme
Priority Programmes