Reacting to my post on the Stapel Affair, Macartan Humphreys sent me an email the other day saying that he was very unhappy to see his research described as uncritical. Unfortunately, an email exchange is not quite the right communication mode to discuss such matters, and, even more disappointingly, he declined to write his own blog post here to set straight what he thought I got wrong. Yet the exchange did allow me to reformulate my reservations about using surveys to investigate why people take up arms and engage in violent conflict, and it might be useful to repeat them here. To get this straight, however: in my post where I talk about Humphreys and Weinstein’s survey “Who fights?” I do not say that their methods are in any way fraudulent, or that they are sloppy or inaccurate in their scientific procedure; on the contrary, both are very diligent in documenting their data collection and in providing explanations about the research process. My argument was, rather, that by using an extremely inappropriate method, namely surveys, they make themselves complicit in shaping and maintaining an externally created narrative of the reasons and causes of conflicts, a narrative that might be, could be, but very probably is not what the interviewees would offer were they allowed to speak in their own words and narratives. Surveys in particular deprive interviewees of their own voice, since surveys are efficient tools of data collection only if all data are formatted in the same categories and definitions.
Hence, there are two types of objections that I have to survey research: the first set of reservations is about the methodological weaknesses of surveys in general, and of surveys for obtaining knowledge about people’s “real” motives in particular; the second set concerns the scientific logic that makes researchers use surveys in the first place, i.e. objections of a more epistemological order. As far as the first set is concerned, it is noteworthy that surveys are blunt tools if one seeks to find out the “real” motives (whatever those might be) behind people’s doings, and especially their wrongdoings.
However private the atmosphere of the questioning, interviewer and interviewee meet as strangers, and the interviewee will, in the large majority of cases, show her public face. When walking students through survey design in methods classes, I like to cite the example of a question from the World Values Survey that asks “Is prostitution defendable?” with the possible answers “under no circumstances, under some circumstances, sometimes, always”. In China, more than 87% of respondents answer “under no circumstances”, which makes my students laugh heartily; whoever has watched the nightly traffic in any hotel in China knows why. Of course, prostitution is as endemic in China as it is elsewhere in the world (maybe even more so, as China is still a country where marriage and sex are seen by many as two very different things), but prostitution is also highly frowned upon morally. Consequently, in this culture, where “face” is even more important than in others, it is extremely hard, no matter how confidential the interviewing atmosphere, to get the large majority of people to admit that they even know what prostitution is.
There are, apart from the very factual sociological base data, not many survey questions where people do not feel compelled to keep up their public face. Very few people want to be seen as thinking differently from the pack, to stand out, or not to say what they think is the socially acceptable thing to say. Of course, if the research is concerned with this public face, then surveys are a fine method for seeing how homogeneously public discourses and standard narratives are spread, how well people know the lingo of these discourses, and whether they adhere to them or not. For electoral or marketing studies, the stern public face shown in surveys is indeed an ideal tool for capturing how well a brand name or platform has developed into a commonly recognizable dominant discourse. But if one wants to go beyond the established and socially acceptable narrative, things become much, much more complicated.
That surveys are biased by respondents’ desire to keep up a public face and to please the interviewer is a well-known phenomenon in social science research and one of the major restrictions on the use of surveys and polls. A common response to this problem is to argue that questions and answers simply have to be formulated better, that more exploratory tests have to be done, and to propose a battery of statistical instruments to control for biases; and it is true that some progress has been made in identifying particular weaknesses in the formulation of questions and responses.
Yet delving deeper into the issue of response biases has also shown that there are huge cross-cultural differences in the way people understand and respond to questions, and that these are compounded by differences of age, gender and social position. A survey of combatants in an armed conflict in Africa (or Latin America or Asia, for that matter) by Western scholars crosses the cultural boundary twice: first, there is the obvious national-culture difference between the American, English or French researcher and the Sierra Leonean, Liberian, Colombian or Filipino respondent; second, there is the “professional culture” divide between the academic and the violence professional. The latter implies an important social divide between the usually well-off, usually middle-class, usually urban and highly educated Westerner and the usually poor, usually underclass, usually rural and barely literate combatant. Double-testing and counter-checking certainly sound like a promising inroad into controlling for these biases, but they also threaten to double or even triple the size (and cost) of the research project, and they are therefore rarely done in a systematic and controlled manner (and in Humphreys’ and Weinstein’s published work there is no indication that they undertook any such tests).
From the above, it is obvious that these biases are more likely to occur if the answers to the questions are restricted, pre-formulated and vague, i.e. if they have the double effect of being externally imposed (not the respondents’ own words) on the one hand, yet sufficiently ambiguous to allow varying interpretations on the other. In fact, recognition of this has pushed parts of social psychology and sociology to move away from overly standardized questionnaires in order to provide discursive room for respondents’ own formulations and words.
Given the difficulties of undertaking a survey appropriately in these circumstances, one may wonder why researchers choose to do so in the first place. As mentioned, the purpose of the research might be to identify the socially acceptable discourse or to see how compliant respondents are with such desirability. Yet this is certainly not the intention of the combatant surveys. Here the aim is rather to find out the “real” motives for joining armed factions, always assuming that there is a countable and verifiable number of “real” reasons. This assumes that one can reduce all the forms of hopes, dreams, fears, anxieties, pressures, feelings of obligation and duty, rational calculations of survival, achievement, strategic and opportunistic moves, all the pleasure and misery, all the social interaction and solitary rumination of an individual to a functionally small number of variables that can be “tested”. This enumeration of emotions, calculations, reasons and thoughts is not exhaustive, and it does not yet consider that all of them might be present at a given moment or only some, that they might contradict each other, and that their importance and meaning will vary from individual to individual. Fear is rarely experienced in the same way by any two individuals, and peer pressure works differently on a 16-year-old youngster than on a 35-year-old father of a family. This does not mean that one cannot account for this wide and varied array of motives and reasons at all, but it requires an enormous amount of simplification and reduction of complexity to boil them down to a manageable number of variables. That simplification can go so far as to render any result either banal or inconclusive.
So, again, why would one want to do this? Using such a method despite these reservations expresses a stern belief that proper academic research only “proves” things if it is carried out over a large number of cases (what counts as a large number is itself relative: the social sciences’ claim that, say, n=2000 is large is laughable to any natural scientist; just imagine a medical study of a new drug carried out on only 2000 subjects). From this point of view, ethnographic or sociological research based on in-depth interviews, participant observation, focus group interviews, the analysis of diaries, blogs or other texts written by the study subjects, or the analysis of artefacts (music, for instance) is all nice and fine and sweet, but not really “proof”, because its n is so small; the findings could be “random observations”.
This reflects an understanding of science in which there are facts somewhere out there, a truth even, that we can discover by excluding (refuting) false and random observations through hypothesis testing. It implies an uncritical understanding of the categories and concepts we use, as any refutation procedure assumes that “we” (that is, you, me, the researcher and the research subject) all know what those categories represent and what they mean. In this case, for instance, it means that there is one unitary and solidly confirmed, a “true”, understanding of categories like child soldiering or poverty. Of course, some exploratory research might be necessary to obtain this factual knowledge about child soldiering, but once we have done our homework we can confidently establish such a category and apply it to our research objects. We can establish criteria which will allow us to confirm or refute that this or that person is a “child soldier” or that “poverty” is a cause of armed conflict. We thereby assume that it doesn’t matter whether the subjects see themselves as child soldiers (or as children, or as soldiers) or as poor. It also doesn’t really matter what kind of child soldiering we are talking about (on a purely age-based definition, Napoleon was a child soldier, as was Clausewitz), and it doesn’t matter what kind of poverty we are talking about; in fact, there must not be any great internal differentiation of these categories, otherwise they cannot function as categories.
And this is where the uncritical, undifferentiated and unreflected reproduction of categories also becomes complicit with current power structures. Contrary to the pretension of a “value-free” science, all these categories are imbued with the understanding that is produced and reproduced by dominant social, economic and political structures and their agents. So-called common sense is rarely common; most often it is what some define as supposed to be common. This becomes very clear if one thinks of categories of gender or race: for centuries it was absolute common sense (and in some circles it still is) that women or blacks are simply not as intelligent, active, creative, inventive or industrious as white men. Applying categories without critically reflecting on how they are produced and what they mean, including what they mean for different subjects, those under investigation among them, means simply accepting and uncritically reproducing the patterns of domination that established them as “common sense”.
All this does not mean that any survey or quantitative method is inappropriate; it is, however, if it uses ascriptions rather than descriptions and if it reproduces categories without critically reflecting on their construction. The question to ask is actually rather simple: who understands what, by which means, when I, as observer and as agent of (at least) one specific understanding, talk about “poverty” (or child soldiering or any other category used), and which power structures are reflected in this understanding? In the absence of this question, the categories used in a survey will only ever reflect one specific understanding, namely the observer’s (and, at the time of publication, the reader’s). Apart from being inconclusive and, at the end of the day, not saying very much, such surveys also misrepresent the “facts” and miss the largest part of the story that the subjects of inquiry would have to tell.