195 – When All You Have is a Hammer, Everything Looks Like a Nail

The IDEMS Podcast

Description

Lily and David discuss the old adage “When all you have is a hammer, everything looks like a nail” and how it applies to research, particularly in the context of statistical analysis and methodology. Emphasising the importance of having a diverse set of tools, they explore how narrow training in data skills can limit the effectiveness of research, and how disciplines including medical science, climate, education and agriculture can benefit from cross-disciplinary approaches to methodology.

[00:00:07] Lily: Hello and welcome to the IDEMS podcast. I’m Lily Clements, a Data Scientist, and I’m here with David Stern, a founding director of IDEMS. Hi David.

[00:00:14] David: Hi Lily, great to talk again. What’s the topic?

[00:00:17] Lily: The topic today is this phrase I’ve heard you throw around, which is, and I hope I say it right, if all you have is a hammer, everything looks like a nail.

[00:00:25] David: This is not my expression. I cannot take credit for it; I’ve heard it from many different people, it’s quite a common idea. I guess it comes up particularly in the context of things like statistical analysis and research, which we’ve discussed before.

[00:00:41] Lily: Yes. I don’t mean around the house doing DIY, although I could act that way around the house as well.

[00:00:48] David: Well, I suppose in some ways, I believe it originated from the idea that if you only have a small toolbox, you use the tools you’ve got, and this doesn’t lead to high quality work. A larger toolbox will enable you to use the right tool for the right job. So that’s my guess of where this expression originated and what it really refers to.

But I’ve used it, and heard it used, quite a lot in the context of research. And this is not a criticism of any particular domain or subject area, but as someone with data skills who has supported people across many different areas, I still find it surprising how narrow people’s training in data skills is.

[00:01:40] Lily: Yeah.

[00:01:41] David: You know, most people in many domains tend to get taught a particular way of doing studies. And so even very senior scientists in their domain use the same methodology almost whatever the research question.

[00:02:00] Lily: I see. So it’s like, you have your toolbox and you use whatever tool you’ve got, rather than the tool that fits the job.

[00:02:05] David: If all you know how to do are large scale surveys where you go and you canvas the population to get an answer to a question, then whatever the question you are asking, that’s the tool you use, you take a large scale survey.

Now, sometimes it might have been better to do, let’s say, small qualitative studies where you go in depth somewhere. Other times it might have been better to do something more experimental, and actually to try something out in different ways.

And if we go even further, there has been a lot of discussion around randomised control trials being a gold standard for research. And so funders sometimes impose the methodology.

[00:02:47] Lily: Okay.

[00:02:48] David: Less so now. It was more of a problem a few years back, when funders imposed the methodology and said, well, this is the good method, whatever you are studying, you should be working with this method. And it is not that that was a bad method. On the contrary, in many ways, a good randomised control trial is a very good way of being sure about something specific.

But that doesn’t mean that for all questions, in all areas, it’s the best method at this point in time. This originated from medical research, where you have to be more formalised because of regulatory processes. And there the randomised control trial fits in at a specific level: after you’ve got certain evidence, then you are ready to do a randomised control trial, and then you have to fit in with that methodology.

And in the medical context, there are critiques of that formalised approach, but given the societal benefits of, you know, rigour related to medicine, this is something where I don’t have an alternative that I think is better.

But when you translate that into other areas, often it’s been translated without the same rigour, and I’ve seen many cases where the methodology has been applied inappropriately, to inappropriate questions. And similarly, there are other contexts, and I’ll simply use an example from education, where many people are trained in how to do relatively small scale studies. But there are a lot of questions for which we know how to change things in a classroom; what we don’t know is how to change things at scale in a society.

And so it would be much better if there was a shift of the research effort into a different type of methodology and a different type of research question. Yes, there are still questions that can be answered using the existing methodologies, but I had a discussion fairly recently with an education researcher who basically stated they weren’t interested in implementation or scale at all. They only wanted to have an educational theory and then just take it that first step, to see if they could find a classroom basis for their educational theory.

And as we discussed, it was clear that they already had a lot of knowledge which could be usefully scaled up and actually benefit society, but their attitude was: that’s not our problem. And that’s where, again, I feel that, you know, if researchers had a wider toolbox, then they could better serve society with their research. And this is where that idea has really come from.

[00:05:46] Lily: And by this kind of toolbox, I guess, is it that the tools are out there in the world and researchers need to learn them, or are these tools still to be developed?

[00:05:57] David: That’s a really good question. And I think broadly the former. There are lots and lots of tools out there, and generally speaking, within disciplines, there is a very narrow range of those tools being used. As a general rule, that’s what I bump up against. And those tools are different in different areas: in some areas, the tools being used would benefit other areas, and vice versa.

And I think that, in the position we are in, where we offer research method support, we are privileged to see many different ways people do research. And so part of our role is to build these bridges. So maybe the critique is that there aren’t enough bridge builders, because quite often when new methods are introduced in areas, they are very successful, they are well received. Not always, but they can be very well received.

[00:06:55] Lily: All the advances that you hear of are the ones that are successful and well received.

[00:06:59] David: That is possibly true. There may well be many others which haven’t been well received and therefore have been kept out of the journals and the publications and so on. So you are right; what I should say is that there are good, well documented examples of crossover methods which, when they have been taken up in other disciplines, have been well received and have added value to those disciplines.

But you are right. That doesn’t mean that that is generally what happens. Probably in most cases, the initial efforts of trying to introduce new methods are negatively received, or may be negatively received. We don’t know.

[00:07:42] Lily: So part of the problem then, if I’m understanding correctly, is, as you say, potentially this lack of bridge builders. So part of it is that people have their toolbox, which holds the tools for their kind of area. So maybe, I mean, I’m gonna talk about statistics ’cause I know that area best, but maybe medical statisticians using their tools, and climatic statisticians using their tools, when they could actually cross over with one another in different ways.

[00:08:12] David: Well, I think this is a really interesting set of examples. In medical statistics, actually the introduction of new tools is very difficult because you have to fit in with regulatory processes. And so new tools are very difficult to introduce.

Interestingly, COVID was a case where rapid randomised control trials took place, rapid development of new vaccines and so on, because there was an urgent need, and that led to innovation within the medical statistics community, if you want. So I would argue that there are contexts where it is, by design, difficult and slow to introduce new methods.

[00:09:04] Lily: I see, yes, and that’s for a reason as well.

[00:09:07] David: And that’s for positive reasons.

I would argue that climatic analysis and statistics would be an interesting case where there’s a lot less effort. There are a lot fewer people working in this area, there’s a lot less happening. There’s a lot happening, of course, related to climate, but there’s a lot less rigour on the methods that are used, because it’s not as sensitive as medical statistics.

And what is interesting is that that could lead to a culture of innovation and, you know, rapid development of methods and trying of methods, and testing and encouragement of these. But in practice, what we tend to see is the same methods being used again and again, and a relatively stagnant set of methods.

And I guess part of the question is, what would it take to incentivise this differently, so that there was more methodological innovation? What might this look like for climate statistics, in different ways?

[00:10:25] Lily: You say in medical statistics there was this kind of rapid development because of a world crisis… well, I don’t wanna go into that too much.

[00:10:32] David: So, you might have hoped that the climate crisis would lead to a rapid development of this, but the pressures are different.

[00:10:39] Lily: Yes, okay, okay.

[00:10:41] David: This is part of the point. In some ways the IPCC, the Intergovernmental Panel on Climate Change, which of course goes back quite a long way, has led to a need for standardisation. They need standardised studies across many countries to be able to build the evidence base. And so in some ways, the climate crisis as we see it has actually had the opposite effect. It’s led to a need for more standardisation, so that these meta-analyses could happen across contexts and so on, and so that you could actually reuse results to be able to produce these big reports, the IPCC reports, and to build that evidence base.

And so it’s had the opposite effect of actually leading towards a standardisation of methods. Now, in some sense, that is an innovation, and that’s a push. And again, this is not bad. I think this is an interesting innovation, where the need for studies that can feed into these international reports has created a need for more standardised studies, more studies where you use, not the same results, but the same methods across countries.

[00:12:03] Lily: Interesting.

[00:12:04] David: There you are wanting to get a hammer and, you know, to just keep hitting nails in different places, so you’ve got a really solid structure, nailed down really solidly, so that there is evidence for this. Our analogy here is actually quite good: there are times when you want a hammer and you want everything to become a nail, so you can nail down your evidence really strongly, which is arguably what’s needed, and what’s happened and is happening, related to climate.

[00:12:30] Lily: Very nice.

[00:12:32] David: I didn’t expect this to go that direction at all.

[00:12:34] Lily: No, no.

[00:12:36] David: Sometimes you just need to only have a hammer and then turn everything into a nail. That’s not the way I expected this discussion to go. But I think it is important to recognise that we are not saying it is always bad to have this uniformity of methods. It allows for things to be more comparable, it allows for meta-analysis, it allows for different types of studies to build on that work.

And in the two fields you’ve given, historically this is what’s been needed in medical statistics, and more recently this is exactly the direction climate statistics has been going.

[00:13:16] Lily: Interesting.

[00:13:17] David: And needing to go.

[00:13:19] Lily: Yeah. And so there are times where we do want that standardisation, and then there’s times where we want to have that kind of interdisciplinary…

[00:13:25] David: Interdisciplinarity. I don’t know that that’s the word you actually want. I think it’s a cross-disciplinary application of methods, or transfer of methods, where methods used commonly in one discipline actually play a role in another. And I think, yeah, it’s quite nice that we’ve taken an expression I’ve used quite often, often related to working with social scientists or with agricultural scientists, who have been trained within their discipline.

Breeders have been trained on how to do breeding trials. And if you have a breeder who’s really good at breeding trials, then they often don’t think that, well, maybe part of what’s really needed in participatory breeding is a really deep social analysis to understand why, and they haven’t been trained on those qualitative methods.

But if you get into participatory breeding, and actually scaling this out into environments which are smallholder led or something like this, then it’s actually a very social thing. So you really need that transdisciplinarity of social and agricultural scientists. That’s the sort of space I’ve often come across this in, where actually you can’t distinguish between, let’s say, the work of the agricultural scientist and the social scientist, because when it comes to the farming community of smallholder farmers, some of it is social and some of it is agricultural. And you kind of need both together, working on the same problems.

A lot of the agroecology work we do relates to this sort of scenario, where the subject training doesn’t give you the full set of skills you need to tackle the social issue. And I would argue the difference between this and the two examples of medical and climate statistics is that in both of those cases, what you are looking at are more societal level things, rather than individual or community level decisions and processes.

And so, where you need the social dimension, you probably don’t just need a social dimension; you need a social dimension alongside an education dimension, or an agriculture dimension, or something else, maybe a public health dimension. That implementation space, I feel, is very different from the societal level studies.

[00:15:55] Lily: And so then, this hammer and nail, I’ve heard the phrase used, and I guess I naively thought it was about something more narrow. Like, okay, using the wrong model, or constantly fitting one model, and again, I’m sorry for the statistics references all the way through, but that’s just what I know, constantly fitting one model here, when actually you should expand your toolbox and learn all these other different models and methods that you could use.

And so how can we, or how can I, or how can someone then discover these other tools? How do you find these other tools?

[00:16:33] David: Well, I think collaboration is of course the key point. Talking to, interacting with, and working with people beyond your discipline who have knowledge and awareness of other tools is the starting point in all of these cases. If you find yourself, when a question lands on your desk, saying, okay, I’ll use the same research process that I’ve always used to answer this question, then actually question it: wait a second, is this the right research process?

And I would argue a lot of the problem with this does come back to funding, that there has been a definite movement of funders towards actually being involved in the decision making process around implementation of research, and therefore the methods used.

And there’s a lot of positives that have come from this. I want to give the example of the IPCC for climate, where that standardisation of methods means you can take on this big request. But the recognition that the same restriction is having negative consequences in other areas is, I think, an important one. It isn’t about one being good and the other being bad, that standardisation is good and diversity is bad. No, both are good in different contexts, in different roles, in different ways. And really we should be recognising more when one adds value and the other doesn’t, rather than pushing the thing that we happen to use as the best way to do it.

And that, I think, is a really critical element, whether it comes from how things are funded and what is funded, or from what is published, which is one of the other big drivers of this. You know, if you have a publication which accepts certain types of methods, and expects certain types of methods and approaches, then anything which doesn’t have those types of methods and approaches is not going to get published there.

[00:18:46] Lily: Yeah. So there are things that you can do as an individual, but then there are also some things which are kind of more systemic.

[00:18:55] David: Of course.

And let’s be clear, these changes need to happen in both directions. There are cases where standardisation is good and warranted, and there are other cases where we want that diversification. Understanding and recognising that complexity is really, as always, the key thing we’re arguing for. Many times, when you hear me complaining about, you know, the hammer and nail, it is because I’m in a situation where everybody is trying to use their method, when actually there’s a set of methods which might be better, or might better serve what they’re trying to do.

And opening up that diversity, this comes back to your question near the beginning: is it about using the methods that are already out there, or is it about creating new methods? And I do believe that there is still need and space for new methods to be created, in particular at this borderline between implementation and research.

That’s an area where I feel there’s real potential for implementation based research to use methods which are quasi-experimental but currently not well defined, to gain deep insights at scale within populations and within contexts. That would, I think, be really fascinating and really move our knowledge as a society forward, which is what research is supposed to do.

And this is the thing, if you limit your methods within research, you limit the answers you can get.

[00:20:49] Lily: Yeah. No, no. Very interesting.

I wanna finish on something concrete, because this is all nice and this is all, oh, sometimes you can do this and sometimes you can do that, and, okay, I guess things aren’t always concrete because it comes down to your context.

[00:21:03] David: Exactly. So let’s get very concrete in some of the contexts that we work in, and I’m gonna come back to working with smallholder farmers.

[00:21:10] Lily: Okay.

[00:21:13] David: So, for researchers who work with smallholder farmers, our experience on this is really quite clear: a lot of the work in the past in agriculture for them was looking at the big ideas, you know, breeding to be able to maximise yield and so on. And a lot of good progress was made.

But right now, in the smallholder farmer context, the richest research I’ve seen is on this borderline between, if you want, agricultural knowledge, knowledge of how to grow things better, and social knowledge, social scientists who understand who needs what, why they are using it, and what they are looking at it for.

A very, very concrete example of this: colleagues who were doing breeding trials in West Africa, in Niger, Burkina Faso and Mali, a specific group, eventually reached the conclusion that the biggest factor really in their context was how easily you could sell what you grew, or rather how close you were to a local urban centre.

Because if you were close to a local urban centre, your fodder was more valuable than your grains. If you couldn’t transport your fodder to the urban centre to sell it, your grains were more valuable than your fodder.

And so, you know, this is a social element which is going to change over time, but it’s well defined. It relates to the physical infrastructure, the quality of the roads and other things, transport links, but it also relates to which varieties of a particular crop you will prefer. Are you going to prefer a fodder based variety, a grain producing variety, or a dual purpose variety which actually balances the grains with the fodder?

These are wonderful complexities which come out in so many contexts once you start working with smallholder farmers, and if you don’t integrate methods that bring together the social component with the biophysical component, you are missing out. And so this is a wonderful, wonderful space for that innovation, where you need to get breeders interacting with social scientists, and social scientists interacting with breeders, working together to understand the agricultural potential for smallholders in particular places.

That to me is a beautiful example of where there isn’t one method. You need this mix of methods to be able to do this. And quite often that needs studies which are transdisciplinary with different people working together on serving the same audience, the smallholder farmer.

So there’s a really concrete example where I don’t want a researcher to come in with their hammer, their method, and impose it. I want there to be this basket of methods, where you actually understand, well, what are the questions? Where can we help? Where could value be added?

And that, I think, is where you’ll hear me complaining about people coming in with a hammer: a researcher enters and says, but I know how to do variety trials, I’m a breeder. And yes, but it’s not necessarily just a breeding question. There might be a breeding question as part of the mix, but if you just do your breeding question, you’re not doing it in the right context and you won’t get the right results; it depends on the social factors.

So anyway, that interaction between these is really important.

[00:25:11] Lily: Excellent. Well, thank you very much David, this has been really insightful and very interesting.

[00:25:16] David: No, thank you. It’s been a good discussion and went in directions I didn’t expect.