I've written a lot about causality elsewhere (e.g. in my 2010 book) so I won't say much here. (PS if you are already familiar with the CR account of causality and its implications for social research methods, you could jump ahead to the paragraph starting "One consequence is").
The central points are:
(1) that events are produced by multiple interacting causal powers;
(2) that causal powers are properties of structured entities, which depend on mechanisms: processes of interaction between the parts of the thing concerned;
(3) that this model applies just as much to social events as to other kinds;
(4) and that the entities whose powers interact to produce social events include human beings, ordinary non-social objects, and social entities. Social entities are like other sorts of entities in some respects, but what is distinctive about them is that their parts include human beings and that the relations between their parts that contribute to their causal powers are at least partly intentional relations: they depend on the beliefs and other mental properties of the individual human beings concerned.
Even this short summary of a realist understanding of causality has implications for social research methods. The first is simply that we can and should be interested in causal explanations of social events. That doesn't mean that causal explanation is the only thing that we should be interested in - we may want to make sense of the cultural meanings that are implicit in people's actions, for example, as the interpretivists demand, but such interpretations are only one part of what we can do. As Max Weber argued in the opening pages of Economy and Society, they are often most useful as a step towards a causal explanation.
A second is that a causal explanation is more than just an empirical association between two kinds of events. The idea that it is nothing more than that comes from Hume, and was one of the central targets of Roy Bhaskar's critique in A Realist Theory of Science. In that view, if an event of type A is always followed by an event of type B, then we can say that A causes B, and there is nothing more that we can know or say about this causal relation: empirical regularity IS cause, and cause is just empirical regularity. But if, as Bhaskar argues, events are never produced by only one cause but by an interaction of multiple causes, then even if one factor A has a tendency to produce a certain outcome B we can never be sure it actually will, because other factors may interfere with and suppress that tendency in some cases.
The positivist tradition in social science has adopted a variation of Hume's argument in which it is assumed that if we can establish an average quantitative relationship between two types of event, or rather 'variables', then this is all we can know of causality. This is the tradition that underpins the use of linear regression analysis to derive equations relating different variables, and then claims that it has 'explained' the variation in a variable because it can be linked to other variables in such equations. But my third point is that for realists a statistical association between two variables explains precisely nothing. On the contrary, a statistical association is an empirical phenomenon that needs an explanation. A real causal explanation will take the form of a narrative: an explanation of (a) which entities and causal powers interact to produce a particular outcome; (b) how those powers are produced; and (c) how the different powers and entities interact to produce that outcome.
One consequence is that we should be interested in any research method that can give us clues as to how to construct that narrative. Quantitative methods can certainly be useful: the knowledge that variations in A tend to be linked to variations in B should lead us to suspect that there is a causal mechanism at work that links the two, and to investigate what that mechanism might be. And qualitative methods are equally important: if the powers of social entities depend on the beliefs of the people who are parts of them, then understanding those beliefs may give us evidence about how those people interact to produce certain tendencies in the larger entity. Hence, we arrive at the pluralism about methods I mentioned at the beginning: we should pick those methods that can tell us something relevant to the case and the research question we are investigating. A number of other critical realists have made that kind of argument. I particularly like the recent collection edited by Edwards, O'Mahoney and Vincent, Studying Organizations Using Critical Realism, which has chapters on a wide range of research methods (despite the title it is relevant to a wide range of social science disciplines and topics).
But I don't want to call this openness to multiple methods methodological pluralism: while these methods are all potentially useful, the methodologies that were traditionally used to justify them can be highly suspect. The positivist justifications for quantitative methods must be rejected, and the more extreme versions of interpretivism that reject any connection to causal analysis are equally untenable.
And those methodologies also leave us with a significant gap that discussions of research methods tend to ignore: the gap between evidence and explanation. Those discussions tend to focus on how to gather and analyse data, but however many stats we gather and however many themes we identify in our qualitative data, there is still a further step that is required to produce a causal narrative. That is the step that takes us from 'these factors seem to be causally relevant' to 'this is the process of interaction that produced that outcome'. As far as I can see (and I'm not an expert on research methods, so please tell me if you think I've missed something important here) methodologists rarely explain or teach us how to bridge that gap.
One reason for that is that for the positivist tradition and the anti-causal variants of the interpretive tradition the gap simply doesn't need crossing. For the positivists, a set of regression coefficients already is an explanation. For the anti-causal interpretivists, there is no way to go from meaning to causal explanation and so no methods are required to bridge that uncrossable gap. So you have to be a certain kind of realist before the gap becomes important, and hence it is realist methodologists who need to find ways of bridging it.
How can we do it? I've really only started to think about that, and I'd be interested to know other people's views on the question. What I have done is reflect on what I do myself when I'm theorising mechanisms and causal interactions, and it seems to me that this isn't something that we do (or can do) by applying a logical method, stepping through some sort of standardised process. There's a sort of imaginative leap involved, and I suspect that it's a leap that relies rather more on the subconscious than the conscious aspects of our thinking processes. In my case, what I think happens is that I ask myself the causal questions - 'how could this possibly work?' - and my subconscious works away at it until (hopefully) an answer bubbles up into my consciousness - a few minutes later, or the next morning, or a week later, or never. I suspect that this process draws on a capacity that is not subject to strict conscious control, but I also suspect it's a capacity we can channel. The most obvious way of channelling it is by posing ourselves the causal question, but perhaps we can also help in other ways. Drawing speculative diagrams of the relations between the possible factors, for example, might help. But perhaps sometimes, having pointed our subconscious at a problem, we need to make space for our subconscious to work on the task by doing something quite unrelated.
Of course, there's no guarantee that the causal hypotheses we produce in this way will be good ones. Like any other hypotheses, they need to be challenged for coherence with the evidence and with other beliefs we have good grounds for. Even when they pass this test, there may also be other possible explanations, and sometimes we will need to think about how to judge between them. But those are relatively familiar issues. My question here is how do we get to a promising hypothesis in the first place?
This work by Dave Elder-Vass is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.