Sometimes there is a paper that just changes your week. This week it was something that I had read before – by two researchers who influence me immensely – but that I re-read at just the right point in my week. Suddenly I went from having a kind of strange, shadowy understanding of how to think about my results to having this incredibly clear understanding of my research, and my field, and everything that I see wrong with research in my area.
This critical review assesses whether evaluation studies can answer three key questions about behaviour change interventions: ‘Do they work? How well do they work? How do they work?’ Reviews of intervention evaluations are examined, particularly those addressing decreasing unprotected sexual intercourse and smoking. Selection of outcome measures and calculation of effect sizes are discussed. The article also considers the extent to which evaluation reports specify (i) discrete intervention techniques and (ii) psychological mechanisms that account for observed behavioural change. It is concluded that intervention descriptions are often not specific about the techniques employed and that there is no clear correspondence between theoretical inspiration and adoption of particular change techniques. The review calls for experimental testing of specific theory-based techniques, separately and in combination.
Even though I’d read this paper before, it hadn’t sunk in the same way the last time I came across it – I think that speaks to the importance of not just reading something, but reading it at the right time.
That doesn’t mean that we should put off reading something because we don’t think its time has come, but it does mean that you can’t just approach the literature like a checklist, where once you’ve read something you never have to think about it again. I am trying to get better at recognising when I’ve read something too soon and marking it for re-reading later in the research process, but that goes against my natural inclinations.