Ed Hagen recently wrote a piece titled “Academic success is either a crapshoot or a scam” in which he uses some simple maths about the rate at which research is done to argue that “empirical social scientists with lots of pubs in prestigious journals are either very lucky, or they are p-hacking.” That is, regardless of intentions, we find ourselves in a situation where research practices inflate the false-discovery rate.
The interestingness of a set of results is a key variable for publishing in prestigious journals, and therefore acquiring grants and jobs and generally staying in academia.
However, the interestingness of a piece of research is not an objective property of that work, or even one that is set by the norms of a large population. Interestingness can be fabricated by a very small group of people, or even within a single career. For example, just one article on a given effect, theory, or method can be the thin end of the wedge. Based on such an article, a stream of publications can then follow, using each previous publication and the questions it raises as the sole rationale for future work, regardless of external interest in the topic. An exceedingly small group of researchers can then conduct, review, publish, and cite one another’s work on this effect/theory/method enough to boost the topic to be sufficiently interesting to build a career on - even when the object of study is inconsequential to everyone outside this small group.
This is not to say that important work does not take time, go unappreciated for long stretches, or sometimes produce unexpected discoveries or benefits. My point here is that much research is done without the intention or ambition that it could or should ever have relevance outside of a very small group, at least within the field of psychological science. Here, I’m speaking about research that is irrelevant by design, and where the interest surrounding it is fabricated for the sole purpose of conducting more research merely for the sake of it. Often, of course, researchers don’t self-identify with this purpose, but I would argue that the proximal goals of such research are often incompatible with, or extraordinarily unlikely to produce, the researcher’s stated long-term goals for their work or their field.1
Worse still, the more questions an article generates, the more it can be rewarded with future research and citations, as authors argue back and forth past one another about effects/theories/methods that are poorly defined and contain a mix of technical language and slippery “intuitive” everyday terms. When the only output of the research is more publications and citations, and no tangible contribution to the world outside of academic publishing, this produces a system where asking intuitively interesting but impossible-to-answer non-questions (e.g., a Rylean category error) is rewarded most of all. Whereas science is intended to be an answer-generating process, academia risks being a question-generating game. Not all of it or all of the time, but too much of it and too much of the time.
Depressingly, if you corner researchers at the conference bar, many are quite frank that they are aware they are doing this, due to our perverse incentive structures. Many effects/theories/methods are promissory notes whose actual importance is strongly suspected to be zero, but whose “interestingness” to a small community the researcher thinks they can leverage into enough papers and citations to sustain a career in academia. Of course, this is rarely wrapped up in cynicism. Usually, it comes with a dose of existential dread and a somewhat thin appeal to science being a long-term, self-correcting process. Many of my peers seem to pursue the strategy, often implicitly, that “first I’ll get a stable career (i.e., tenure) and then the real work will come later.” However, perhaps unsurprisingly, if we spend a decade or more acquiring expertise in playing the question-generating game, it is probably difficult to make a sudden switch to answer-generating - even if such a moment of career serenity did arrive.
Hughes & De Houwer recently argued that, as researchers, we should be clear about both (a) our immediate goals for a given line of work, and also (b) our notional long-term goals for the field in which it is situated. More importantly, we should pay attention to the compatibility of these proximal and distal goals. While unintended discoveries do occasionally happen, their probability depends on the nature of the work being done and that of the unintended benefit. For example, the accidental discovery of a new pharmaceutical drug is vastly more likely when conducting pharmacological research than when conducting sociological research. This point was banished to a footnote in their article, but received much discussion and debate among colleagues at the time and since. Similar to this example, but specific to different approaches within psychological science (and probably more contentiously), one is more likely to make a discovery about how to change human behaviour if one is studying environment-behaviour relations (e.g., phylogenetic or ontogenetic adaptations, as in genetics or behavioural/learning psychology) than if one is studying the cognitive mediators of that behaviour (as in cognitive psychology). In the former case, the thing-that-explains (the explanans: Hempel & Oppenheim, 1948) is the environment, which is directly manipulable. Such research is therefore likely to provide insight into how the environment can be arranged to produce a given desired pattern of behaviour. In contrast, in the latter case, the thing-that-explains is one or more mediating mental processes. These are not directly accessible or manipulable, but are influenced only by proxy, via manipulations of the environment. Many authors have argued that, if one’s proximal or distal goal is to influence behaviour, one would therefore be more effective by focusing on environment-behaviour relations rather than on mediating mental mechanisms and processes.
This does not in any way preclude or devalue the study of mental mechanisms, but it clarifies the compatibility of studying them with the distal goal of influencing behaviour (see De Houwer, Barnes-Holmes, & Moors, 2013; De Houwer, Gawronski, & Barnes-Holmes, 2013; Hayes & Brownstein, 1984; Sloan-Wilson, Hayes, Biglan, & Embry, 2014).