The Failure of the Failed States Index

By Lionel Beehner and Joseph Young

The annual release of Foreign Policy magazine and The Fund for Peace’s Failed States Index (FSI) has become a much-ballyhooed event among foreign policy wonks. But the list also comes under harsh criticism every year for its supposed bias, flawed metrics, and failure to predict important events, such as the Arab revolts. The answer is not to scrap the index, which could serve as a useful heuristic for policymakers and political scientists alike, but to reform it. Only then will the list be more useful to scholars and better at predicting global events. Otherwise, it will remain the foreign policy equivalent of U.S. News & World Report’s college rankings: an index that generates annual buzz but holds little empirical value.

While identifying states in danger is an important task for policymakers, the problems with the FSI are manifold. Conceptually, the list creates a false dichotomy between failed and non-failed states. But states are complex, spatially diverse political units. So-called failed states have pockets of governance, and even stronger states have zones of failure. The index masks this spatial variation and treats the state like an on/off switch.

A more accurate term for the index would be a state capabilities continuum. A similar change was made by the State Failure Task Force (renamed the Political Instability Task Force), which was commissioned under President Clinton.

This kind of label also assumes that the antidote to many of the developed world’s conundrums related to transnational violence and terrorism is more state-building. Yet not all the states that top the FSI are exporters of instability or terrorism. Often these variables run counter to one another, given that non-state actors like al-Qaeda require a certain semblance of order, normalcy, and predictability even to operate. In Somalia, which perennially tops the FSI’s list, the largest buildup of radical Islamist forces coincided with that country’s greatest period of law and order in recent memory, beginning in 2006, when the Islamic Courts defeated the U.S.-backed militias and imposed Islamic law in Mogadishu.

As an actual index, the FSI also falls short. It is supposed to capture an underlying measure of state stability but is instead lost in tautology. For example, many of the subcomponents measure violence. Since one of the supposed uses of the index is to forecast violence, measuring violence to predict violence is circular: the prediction is true by definition.

The FSI also lumps too many disparate indicators, many of them perception-based, into one basket: economic growth, poverty, inequality, corruption, human rights abuses, access to improved services, presence of ethnic violence, and levels of authoritarianism and foreign assistance, among others. The resulting mishmash places states at opposite ends of the state-building spectrum in the same basket.

Indeed, the list confuses the concept of stateness, assuming a neat linear model when in fact several of the states deemed failing do not lack stateness but actually have too much of it. North Korea (ranked #22 in 2012) suffers different ailments from, say, Burundi (ranked #18). Likewise with Pakistan (#13) and the Ivory Coast (#11).

A criticism that has been leveled elsewhere is that the list also feels politicized. There is a tendency to lump states deemed failing together with states whose regimes the United States does not support. Several states we find odious are hardly in danger of failing; Iran (#34) and Uzbekistan (#39) come to mind. Conversely, there are relatively weak states whose imperviousness to a coup or revolution remains iffy, such as Albania (#118) or Bahrain (#125), yet their low rankings make them appear as secure as any Scandinavian country. In short, the index should track states that lack capabilities, not rogue states.

Most problematic from an academic’s standpoint is how the actual index is made. A wide array of measures is cobbled together without a clear measurement or aggregation plan. The measure’s 12 components are equally weighted. Why? We don’t know. While this may seem like an unimportant issue, a quick example shows otherwise.

Let’s say we want to make a National Basketball Association player performance indicator. We gather all of the data on player performance. How do we aggregate it? There are many ways to do so. None is perfect, but there are certainly better (and worse) options. We want to make sure that our method is valid (i.e., that we are measuring what we claim to be measuring). If we simply create 12 categories (points, assists, rebounds, steals, etc.) and treat them equally, we will get a very different list than if we privilege certain categories over others by weighting them.

Sometimes the weighting scheme can be somewhat arbitrary, but we need to make sure the resulting list matches what we think should occur (a property called face validity). In basketball, most observers would privilege the points category over, say, steals. Treating them equally does not help rank players. Would anyone consider Mike Conley as good as Kobe Bryant (#2 in steals vs. #2 in points scored)? In fact, only two of the top five players in steals in the NBA in 2012 are legitimate all-stars (LeBron James and Chris Paul). The point of this diversion is that the FSI haphazardly puts data together without considering which components matter most for measuring stability or fragility. An improved FSI would weigh these choices carefully and would be more valid on its face.
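To make the aggregation problem concrete, here is a minimal sketch in Python. The players, per-game numbers, and weights are all invented for illustration; the point is only that equal weighting and explicit weighting can rank the very same inputs differently.

```python
# Hypothetical per-game averages: (points, assists, rebounds, steals).
# These players and numbers are made up purely to illustrate weighting.
players = {
    "Scorer":   (28.0, 4.5, 5.0, 1.0),
    "Defender": (12.0, 6.0, 4.0, 2.4),
    "AllRound": (20.0, 7.0, 7.0, 1.5),
}

def normalize(data):
    """Rescale each category to [0, 1] so no category dominates by raw scale alone."""
    maxima = [max(stats[i] for stats in data.values()) for i in range(4)]
    return {name: tuple(s / m for s, m in zip(stats, maxima))
            for name, stats in data.items()}

def rank(data, weights):
    """Order players by the weighted sum of their normalized categories."""
    scored = normalize(data)
    return sorted(scored,
                  key=lambda p: sum(s * w for s, w in zip(scored[p], weights)),
                  reverse=True)

print(rank(players, (0.25, 0.25, 0.25, 0.25)))  # equal weights, FSI-style
print(rank(players, (0.60, 0.15, 0.15, 0.10)))  # privileges scoring
```

Under equal weights the all-around player comes out on top; privilege scoring, and the pure scorer does. The FSI makes the first choice by default, without ever defending it.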

Regional scholars also groan at the index’s Western biases. The FSI lumps almost all of Africa into its “Critical” category, presumably meaning that the entire continent is on the verge of failing or falling into anarchy. This ignores the wide variation in stability across the region. Yes, Kenya has witnessed some political turbulence in recent years, but it is far more stable than the Democratic Republic of the Congo. Yet both are painted the same ominous fire-engine red on the FSI’s map. There are virtually no Western states near the top, not even Greece (#138). This would be the equivalent of U.S. News & World Report lumping every southern college near the bottom of its rankings.

If the list has any practical value, it should be able to predict where, when, and why we might see political instability, potential coups, or revolution. Yet it has failed on that front as well. The 2011 FSI ranked Tunisia 108th out of 177 states and Bahrain 129th. Libya, whose regime fell last summer, was ranked 111th, ahead of Turkey (104th). Longitudinally, the list is also problematic. What are we to make of temporal variation from year to year? Japan, for instance, jumped several spots in the 2012 index, presumably because of the Fukushima nuclear meltdown. Does that mean the state in Tokyo is in greater danger of falling? Doubtful. Is the state less capable? Maybe, and thus more evidence for our claim above that the index should be renamed.

Again, perhaps the whole binary concept of labeling states as failed or not failed requires reexamination. Indeed, there is a conspicuous lack of semantic agreement, within both the scholarly and policy communities, over how to differentiate a failed state from a failing or fragile one. The phrase is said to have originated in a 1994 Foreign Policy article penned by Gerald Helman and Steven Ratner, who defined failed states as those “utterly incapable of sustaining [themselves]” and “simply unable to function as independent entities.” There is a tendency to reflexively lump all areas outside Western norms of rule and sovereignty together as failed, fragile, or ungoverned, principally because of the threat they are perceived to pose. In the 9/11 Commission Report, the U.S. government defines these as “areas that combine rugged terrain, weak governance, room to hide or receive supplies, and low population density with a town or city near enough to allow necessary interaction with the outside world.”

Rather than being abandoned for its flaws, the index should be improved. First, the concept of failed states should be reframed as a continuum of state capabilities, and all dichotomous thinking and measurement removed. Second, a top-down review by specialists and statisticians should critically analyze how the subcomponents are measured and aggregated, and then evaluate the resulting list for validity. Beyond these critical issues, the FSI should scrap its Postcards from Hell online slideshow, which misinforms the public and adds no empirical value to the project. Finally, the list should acquire more global partners and indicators, so it does not have a made-in-America feel. That would shield it from accusations that the index is a tool of American foreign policy, biased against non-Western states.
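What might such a continuum look like in practice? The toy sketch below is one possible reading of the first two recommendations, not the FSI’s actual method: the countries, subcomponent scores, and weights are all assumed for illustration. Each subcomponent sits on the FSI’s familiar 0-10 pressure scale, is weighted explicitly, and maps onto a continuous capability score with no failed/not-failed cutoff.

```python
# A minimal sketch of a "state capabilities continuum." Everything below
# (countries, scores, weights) is hypothetical, chosen only for illustration.
indicators = ["security", "services", "rule_of_law", "economy"]

# Assumed raw pressure scores: 0 (low pressure) to 10 (high pressure).
raw = {
    "Country A": {"security": 9.1, "services": 8.4, "rule_of_law": 8.9, "economy": 7.2},
    "Country B": {"security": 3.0, "services": 6.5, "rule_of_law": 4.1, "economy": 8.0},
}

# Explicit, documented weights (summing to 1), rather than a silent equal default.
weights = {"security": 0.4, "services": 0.2, "rule_of_law": 0.3, "economy": 0.1}

def capability(scores):
    """Map weighted pressure onto a continuous 0-1 capability score
    (1 = fully capable), with no binary failed/not-failed label."""
    pressure = sum(weights[i] * scores[i] for i in indicators) / 10.0
    return 1.0 - pressure

for country, scores in raw.items():
    print(f"{country}: capability = {capability(scores):.2f}")
```

The weights here are deliberately visible; a real review panel of specialists and statisticians would have to set and defend them.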

Of course, there will never be perfect metrics, and predicting events like social revolutions is exceedingly difficult. In the end, the FSI should offer practical as well as empirical value. We do not need an index to tell us that Greece is going through tough times or that Finland will not fail anytime soon. Nor do we need an index to remind us of Africa’s post-colonial problems with state-building. The index should instead be constructed to give us greater analytical leverage to anticipate events, whether acts of political violence, man-made catastrophes, or regime changes.

Maybe the term failed state is itself a failed concept, as it suggests that the modern state is an either-or proposition. The problem is that states in the international system are more fluid. The ability of the state to be a state, establishing security, providing services, maintaining the rule of law, and delivering other important public goods, varies even within a single country. State power varies across time, as we all know. But it also varies across territory. To be effective and taken seriously by scholars, the Failed States Index must more accurately measure variation within regions, as well as disparities within individual states. States, after all, are a lot like Tolstoyan families: un-failed states are all alike; every failed state is failed in its own way.

*****

Lionel Beehner is a fellow at the Truman National Security Project. Joseph Young is a professor of Justice, Law and Society at American University in Washington, D.C.

[Photo courtesy of ctsnow]
