Los Angeles Times op-ed:
Numbers can influence the political response to a natural disaster such as Haiti's earthquake. Social scientists must act responsibly when producing such life-and-death estimates.
A young boy flies a kite on top of a building in the neighborhood of Fort National, one of the worst-hit districts in the capital and an area where the government is clearing rubble and making way for new roads. (Liz O. Baylen / Los Angeles Times)
Since the U.S.-led invasion of Iraq in 2003, there have been at least 60,000 civilian deaths that wouldn’t otherwise have occurred. Or maybe that number is closer to 650,000. Between 1998 and 2004, 5.4 million people died in a war and its aftermath in the Democratic Republic of Congo. Or was it one-fifth that number? In Haiti, fewer than 46,000 people were killed in the January 2010 earthquake. Or perhaps the death toll was more than 300,000.
The science of measuring mortality and morbidity is controversial. There are bitter disputes among groups of researchers who study death tolls in the world’s hot spots. Many governments would also prefer to discreetly avoid any discussion of the civilian costs of war. Yet the numbers matter. They can influence political responses to armed conflicts, famines and natural disasters. Statistics are routinely used to draw attention to evidence of systematic human rights violations and even genocide.
Haiti is the site of the latest uproar over civilian death counts.
The earthquake that struck outside the capital last year was without doubt Haiti's worst natural disaster. Within days of the event, Haitian authorities estimated that more than 230,000 people had been killed and another 300,000 injured. A year later, the prime minister claimed instead that 316,000 citizens had died. Few outsiders questioned the numbers or their underlying methodologies at the time, even though the statistics appear to have been plucked out of thin air.
In June, a consultancy group commissioned by the U.S. Agency for International Development offered a dramatically reduced death toll. The authors of the study claimed that between 46,000 and 85,000 Haitians had been killed and another 850,000 assembled in camps. Although the numbers were considerably more conservative than the Haitian government’s figures, the authors did not adequately explain how they were generated.
There are reasons to be cautious about both the high and the low estimates. The Haitian government estimates have no factual basis. As for the June report, USAID officials have already distanced themselves from it, describing it as "internally inconsistent." The lower estimates emerged during a period of intense criticism of the slow pace of Haiti's recovery and reconstruction efforts. The report's timing seems to be part of a wider pattern of donor impatience and fatigue. Much of the $10 billion pledged for reconstruction has yet to be disbursed. Some Haitian and foreign aid groups fear that, by diminishing the severity of the situation, the lower estimates might give donors justification for an earlier exit strategy.
Meanwhile, the lead author of the study, who works for the Washington-based organization LTL, responded by saying that “the higher death toll estimates were not supported by research or other evidence.” Though this is no doubt correct, it amounts to the pot calling the kettle black.
We arrived at a different set of numbers from those previous estimates in Haiti, and here's why we feel ours are more reliable. With support from the United Nations and the International Development Research Center, our North American-Haitian team of researchers was able to carefully examine the costs and consequences of the unfolding crisis on the ground. After administering several household surveys, we estimate that there were roughly 158,000 deaths in Port-au-Prince and surrounding areas in the six weeks after the earthquake. This is roughly double LTL's upper estimate and half the Haitian government's claim.
Administering household surveys in any conflict or disaster zone is challenging. It is often impossible to come up with a representative sample of the pre-crisis population, and therefore extremely difficult to judge how its condition may have changed in the post-crisis period. Coincidentally, we had undertaken a major survey in Haiti in late 2009 that drew on a random sample of residents. We then resurveyed the same households roughly 50 days after the earthquake.
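The panel design described above, a random pre-crisis baseline that is resurveyed after the disaster, reduces the headline figure to simple arithmetic: scale the sample's observed death rate up to the population, with a margin of error from the sample size. The sketch below illustrates that calculation; the sample counts and population figure are hypothetical, chosen only for illustration, since the op-ed does not publish the underlying survey data.

```python
import math

# Hypothetical figures for illustration only; the op-ed does not report
# the underlying sample counts or the population denominator it used.
baseline_people = 5_200       # people enumerated in the 2009 baseline households (assumed)
deaths_reported = 280         # of those, reported dead at the ~50-day resurvey (assumed)
population = 2_900_000        # assumed pre-quake population of the affected area

# Point estimate: sample death rate scaled to the population.
p_hat = deaths_reported / baseline_people
estimated_deaths = p_hat * population

# Rough 95% confidence interval via the normal approximation to the
# binomial (real surveys would also adjust for household clustering).
se = math.sqrt(p_hat * (1 - p_hat) / baseline_people)
low = (p_hat - 1.96 * se) * population
high = (p_hat + 1.96 * se) * population

print(f"estimated deaths: {estimated_deaths:,.0f} "
      f"(95% CI {low:,.0f}-{high:,.0f})")
```

The width of that interval is why the baseline matters: without a known pre-crisis denominator, neither the rate nor its uncertainty can be computed, which is the core of the authors' objection to retrospective counts.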
There were formidable obstacles to surveying the post-quake Haitian population. Even so, we tracked down more than 90% of the original sample, which had spread across Haiti or relocated to the Dominican Republic, Canada and the United States. Without the earlier 2009 survey, we would never have been able to generate an accurate post-quake sample, owing to the dispersal of residents.
It is inconceivable that LTL, a year after the quake, interviewed a representative sample of the pre-quake population. In fact, it appears that LTL arrived at its low mortality count by focusing narrowly on a selection of adult respondents. Our study, published in a peer-reviewed journal, indicates that children were at much higher risk of dying. LTL also claims to have asked neighbors for information rather than tracking down the missing. In our survey, we found many cases of inconsistent reporting by neighbors.
Establishing Haiti’s post-earthquake death count is not an academic exercise. Too often, spurious numbers are invoked to justify specific ideological viewpoints. For example, in Iraq, people making the case that the war was ill-conceived cite higher rates, while those supporting the intervention point to lower ones. In Haiti, there is a risk that those wishing to justify reductions in aid may seize on the lower figures, and some Haitian officials and relief groups may have strong incentives to go with the higher ones.
It is vital that social scientists get their methods right when counting deaths and injuries after a crisis. This is not just a matter of scholarly integrity. It has life-and-death implications for potential aid recipients. A vigorous discussion of estimates is to be encouraged, but estimates must be premised on good science, not on politics or other types of bias. While every situation is different, researchers must take care to ensure proper sampling procedures, disclose their methodology and be transparent about all of their findings, including biases. It is their duty to ensure that their estimates are sound and valid.
Robert Muggah is research director of the Geneva-based Small Arms Survey at the Graduate Institute of International and Development Studies. Athena Kolbe works with the Department of Political Science and the School of Social Work, University of Michigan. Royce Hutson, an assistant professor of social work at Wayne State University, and Harry Shannon, a professor of clinical epidemiology and biostatistics at McMaster University, contributed to the study and coauthored this column.