For scientific discoveries to be valid, whether theoretical or empirical, a phenomenon must be accurately described: The scientist must use appropriate counterfactuals and eliminate competing explanations. Empirical work must also use an appropriate design and method, and empirical claims made about the phenomenon must be correctly characterized. Moreover, valid empirical discoveries must be reliable, in the sense that scientists who reexamine the data must be able to reproduce the finding or to replicate the effect from data gathered in a similar context. Only discoveries adhering to these criteria can be scientifically informative, serve as building blocks for theory, or have policy implications. Unfortunately, as several recent surveys of the literature show, much of the published work in the management and applied psychology fields is uninformative; contributing reasons include several intractable problems in study design and analysis, as well as the field's failure to adopt open science practices. Against this backdrop, we identify common methodological mistakes made in applied work. We group these mistakes into three major categories: (a) study design and data collection (e.g., fit between hypotheses and methods, design, measurement, open science, literature reviews), (b) data analysis (e.g., data preprocessing, choice of estimators, issues concerning endogeneity, and use of instrumental variables), and (c) diagnostics, inference, and reporting. We also explain how to avoid these mistakes, so that published work makes a useful contribution to the scientific record.