Uncertainty in Index Modeling
The study conducted by Malcomb et al. aims to create a climate vulnerability index for Malawi from a wide range of indicators, from electricity access to flood hazard risk. These indicators are combined into a single measure of resilience through a process that is unclear and, because of its subjective nature, difficult to replicate. Tate (2013)1 provides a framework for understanding how a vulnerability index is constructed, specifically interrogating and evaluating the magnitude of uncertainty introduced at each stage of index creation: the selection of indicators, their normalization, and their weighting and aggregation. Malcomb et al.2 describe an interview process in which experts select the demographic indicators they believe should enter the model. Because the entire research methodology depends on having a working model of climate vulnerability, it is surprising how little space they devote to describing the model's actual formulation. This is a flaw in the study, particularly with respect to reproducing it in a different location or context: because the construction of the indicator set is hidden from the reader, we cannot accurately see why particular indicators were chosen over others. Tate (2013) ran simulations on the effects of such design decisions and found this stage of index creation to be a meaningful source of uncertainty.
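The index-construction pipeline Tate interrogates can be made concrete with a minimal sketch. The indicator names, district values, and equal weights below are hypothetical illustrations, not Malcomb et al.'s actual data or weighting scheme; the sketch only shows the normalize-weight-aggregate pattern common to composite indices.

```python
def min_max_normalize(values):
    """Rescale raw indicator values to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def weighted_index(indicators, weights):
    """Normalize each indicator, then take a weighted sum per district."""
    normalized = {name: min_max_normalize(vals) for name, vals in indicators.items()}
    n_districts = len(next(iter(indicators.values())))
    return [
        sum(weights[name] * normalized[name][i] for name in indicators)
        for i in range(n_districts)
    ]

# Hypothetical indicator values for three districts.
indicators = {
    "electricity_access": [0.9, 0.4, 0.1],
    "flood_risk":         [0.2, 0.5, 0.8],
}
# Equal weights -- a subjective choice, which is exactly Tate's point.
weights = {"electricity_access": 0.5, "flood_risk": 0.5}
scores = weighted_index(indicators, weights)
```

Every step here embeds a decision (normalization scheme, weight values, additive aggregation) that propagates uncertainty into the final scores.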
However, Tate (2013) found that the largest source of uncertainty in index creation stems from the weighting of indicators when they are aggregated into a single index score. An indicator's weight determines how much influence it has on the final index, and because assigning weights is a subjective process, the uncertainty it introduces must be accounted for. Malcomb et al. present their model as static and grounded in empirical evidence, yet, viewed through Tate's framework, they fail to adequately describe their weighting methods. While much of the underlying data is itself interview-based and can be called into question, it is the assignment of weights that lacks any well-described pattern or methodology. My understanding is that Tate would say there are many potential sources of error and uncertainty in the study, as with all research, but that the researchers do not address the uncertainty they themselves create by being vague in describing their model, and specifically the weighting of indicators. There is no perfect way to decide which indicators should drive the model most, but I believe Tate would argue that there is a bare minimum level of methodological record-keeping that should be done, not only to improve the replicability of the study but also to improve the reader's confidence in the model.
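Tate's weight-sensitivity argument can be illustrated with a small simulation in the spirit of his uncertainty analysis: draw many random weight vectors, recompute the index each time, and observe how district rankings shift. The district values below are hypothetical pre-normalized indicators, invented for illustration.

```python
import random

def weighted_index(normalized, weights):
    """Weighted sum of pre-normalized indicators, one score per district."""
    n = len(next(iter(normalized.values())))
    return [sum(weights[k] * normalized[k][i] for k in normalized) for i in range(n)]

def rank(scores):
    """Rank districts from 1 (highest score) to n (lowest)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranks = [0] * len(scores)
    for r, i in enumerate(order):
        ranks[i] = r + 1
    return ranks

# Hypothetical pre-normalized indicators for four districts.
normalized = {
    "indicator_a": [1.0, 0.4, 0.6, 0.0],
    "indicator_b": [0.0, 0.9, 0.5, 1.0],
    "indicator_c": [0.3, 0.2, 1.0, 0.7],
}

random.seed(0)
distinct_rankings = set()
for _ in range(1000):
    raw = [random.random() for _ in normalized]      # random weight draw
    total = sum(raw)
    weights = {k: w / total for k, w in zip(normalized, raw)}
    distinct_rankings.add(tuple(rank(weighted_index(normalized, weights))))
```

Because several distinct rankings emerge from weight choice alone, a study that does not document its weighting scheme leaves the reader unable to judge how fragile its final index ordering is.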
1. Tate, E. 2013. Uncertainty Analysis for a Social Vulnerability Index. Annals of the Association of American Geographers 103 (3):526–543. DOI:10.1080/00045608.2012.700616.
2. Malcomb, D. W., E. A. Weaver, and A. R. Krakowka. 2014. Vulnerability modeling for sub-Saharan Africa: An operationalized approach in Malawi. Applied Geography 48:17–30. DOI:10.1016/j.apgeog.2014.01.004.