Thursday, March 22, 2018

The Empirical Revolution in Economics: Taking Stock and Looking Ahead

I was asked to write a piece for the excellent TSEconomist, the TSE student magazine. I took the opportunity to put together my ideas on where empirical economics stands, where it is going, and what we can do to make things even better.

The last 40 years have witnessed tremendous developments in empirical work in economics. In a recent paper, Josh Angrist and his coauthors show that the proportion of empirical work published in top journals in economics has moved from around 30% in 1980 to over 50% today. This is a very important and welcome trend. In this article, I want to take stock of what I see as the main progress in empirical research and look ahead at the remaining challenges.
In my opinion, the main achievement of the empirical revolution in economics is causal inference. Causal inference, or the ability to rigorously tease out the effect of an intervention, enables us to rigorously test theories and to properly evaluate public policies. The empirical revolution has focused attention on the credibility of the research design: which features of the data help identify the causal effect of interest?

With the empirical revolution, economics has grown a second empirical leg along its extraordinary first theoretical leg and is now able to move forward as a fully fledged social science, weeding out wrong theories, and as true social engineering, stopping inefficient policies and reinforcing the ones that resist empirical tests.

The achievements of the empirical revolution are outstanding, in my opinion on a par with the most celebrated theoretical results in the field. It is obvious to me that in the coming years several of the contributors to the empirical revolution in economics will receive the Nobel prize: Orley Ashenfelter, Josh Angrist, David Card, Alan Krueger, Don Rubin, Guido Imbens, Esther Duflo, Michael Greenstone, David Autor, to mention just a few important contributors, in no way an exhaustive list.
What I find extraordinary is how empirical results have both supported and falsified basic assumptions of economic theory, such as well-functioning markets and rational agents. Sometimes agents behave rationally, sometimes they do not. Sometimes markets work, sometimes they do not. Sometimes the deviations matter a lot, sometimes they do not. I think we are going to see more and more theory trying to tease out the contexts in which these deviations matter.

The empirical revolution has also brought about a new type of methodological research. The economists' empirical toolkit is now structured around five types of tools for causal inference: lab/controlled experiments; Randomized Controlled Trials (RCTs); natural experiments; observational methods; and structural models. Alongside the impressive continuing achievements of theoretical econometrics, we now see methodological work investigating the empirical properties of these methods of causal inference.
But challenges lie ahead that have to be addressed head on. I am very optimistic, since I can see the first responses already taking shape, but the swifter our response to these challenges, the more credibility our field will have in the public's eye and the quicker our progress will be.

The first challenge I see is an exclusive focus on causality. Science starts with observation, documenting facts about the world that are in need of an explanation. One of the most influential bodies of empirical work of the last decades is Thomas Piketty's effort, along with his coauthors, to document the rise of inequality in countries all around the world. Observing new facts should also be part of the empirical toolkit in economics.

The second and most important challenge I see for empirical research in economics is that of publication bias. Publication bias occurs when researchers and editors publish only statistically significant results. When results are imprecise, publication bias leads to drastic overestimation of the magnitude of effects. Publication bias has plagued entire research fields, such as cancer research and psychology, which now both face a replication crisis. A recent paper by John Ioannidis and coauthors measures the extent of publication bias in empirical economics and finds it to be very large: “nearly 80% of the reported effects in [...] empirical economics [...] are exaggerated; typically, by a factor of two and with one-third inflated by a factor of four or more.” This is a critical problem. For example, estimates of the Value of Statistical Life that are used to evaluate policies are overestimated by a factor of two, leading to incorrect policy decisions.
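To see the mechanism concretely, here is a quick simulation; it is my own illustration, not taken from the papers cited, with the true effect set (in standard-error units) so that power matches the 18% median power reported by Ioannidis and coauthors below.

```python
# Selecting on statistical significance exaggerates published effects
# when power is low. Purely illustrative numbers.
import numpy as np

rng = np.random.default_rng(42)

true_effect = 1.04  # in standard-error units: gives ~18% power at the 5% level
estimates = rng.normal(true_effect, 1.0, size=100_000)  # sampling distribution

significant = np.abs(estimates) > 1.96  # only these get published
exaggeration = np.abs(estimates[significant]).mean() / true_effect

print(f"power: {significant.mean():.2f}, exaggeration: x{exaggeration:.1f}")
# power: 0.18, exaggeration: x2.4 -- close to the factor of two quoted above
```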

The third challenge is that of precision: most results in empirical economics are very imprecise. To illustrate this, I like to use the concept of the signal-to-noise ratio. A result barely statistically significant at the 5% level has a signal-to-noise ratio of 0.5, meaning that there is twice as much noise as there is signal. Such a result is compatible with widely different true effects, from very small to very large. But things are actually worse than that. Ioannidis and coauthors estimate that the median power in empirical economics is 18%, which implies a signal-to-noise ratio of 0.26: the median result in economics contains four times more noise than signal. I attribute this issue to an exclusive focus on statistical significance at the expense of looking at actual sampling noise.
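Here is the arithmetic behind these numbers, under the assumption (mine; the definition is not spelled out above) that the signal-to-noise ratio is the estimated effect divided by the width of its 95% confidence interval:

```python
# Signal-to-noise arithmetic, assuming SNR = effect / (width of the 95% CI).
from scipy.stats import norm

z = norm.ppf(0.975)        # 1.96, the two-sided 5% critical value

# Barely significant result: effect = 1.96 SE, CI width = 2 * 1.96 SE
snr_barely = z / (2 * z)   # = 0.5, twice as much noise as signal

# Median power of 18%: the true effect delta (in SE units) solves
# 0.18 ~ P(estimate > 1.96 SE) = Phi(delta - 1.96), other tail negligible
delta = z + norm.ppf(0.18)    # ~1.04 standard errors
snr_median = delta / (2 * z)  # ~0.27, the ~0.26 above up to approximation

print(round(snr_barely, 2), round(snr_median, 2))
```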

How to address these challenges? In my opinion, we need to see at least three major evolutions in publishing, research and teaching.
  1. Editors have to take steps to encourage descriptive work and to curb publication bias. This requires:
    • Ditch p-values and statistical significance and focus on sampling noise, measured for example by confidence intervals. Confidence intervals make explicit the uncertainty around the estimate. Present sampling noise in abstracts in the form “the estimated impact is x±y.”
    • Publish null results: they are as interesting and informative as significant results, and maybe more so. Favor more precise results.
    • Write clear guidelines about what is expected in an empirical paper using a given technique.
    • Require pre-registration of studies, even for non-experimental research. Pre-registration prevents specification search.
    • Encourage the use of blind data analysis. This tool, invented by physicists, enables you to develop your code on perturbed data and to run it only once on the true data, preventing specification search (see the sketch after this list).
    • Publish replications and meta-analyses (rigorous summaries of results, including tests for publication bias; see the sketch after this list).
  2. Researchers have to join forces to obtain much more precise results. This requires:
    • Take stock of where we stand: organize published results using meta-analysis in order to check which theoretical propositions in economics have been validated or refuted, and at what level of precision.
    • Identify the critical remaining challenges: what are the 10 or 100 most important empirical questions in economics? Follow the example of David Hilbert, who stated the 23 problems of the century in mathematics.
    • Focus all of the profession’s efforts on trying to solve these challenges, especially by running very large and very precise critical experiments. The examples that come to mind are physicists uniting to secure funding for, and to build, CERN and the LIGO/Virgo detectors required to test critical predictions of the Standard Model and of general relativity.
  3. Teach economics as an empirical science, by including empirical results on an equal footing with theoretical propositions. This would serve several purposes: identify the common core of empirically founded propositions in economics; identify the remaining challenges; and help students learn the scientific method and join the exciting journey of scientific progress.
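Two of the recommendations above lend themselves to short sketches. First, blind data analysis: a minimal sketch (my own, with a hypothetical data frame and column name) of developing and debugging the whole analysis on a perturbed copy of the data, then running the frozen script exactly once on the real outcomes.

```python
# Blind data analysis: hide the true outcome while the code is developed.
import numpy as np
import pandas as pd

def blind_copy(df: pd.DataFrame, outcome: str, seed: int = 123) -> pd.DataFrame:
    """Return a copy of df with the outcome column noised,
    so no specification search can chase significance."""
    rng = np.random.default_rng(seed)
    out = df.copy()
    out[outcome] = out[outcome] + rng.normal(0.0, out[outcome].std(), len(out))
    return out

# development: iterate freely on blind_copy(data, "outcome")
# final step: run the frozen analysis script once on the real data
```

Second, testing for publication bias in a meta-analysis: a minimal sketch (also mine, on simulated rather than real studies) of the standard Egger regression of t-statistics on precision, whose intercept should be zero absent bias and turns strongly positive when the literature is filtered by statistical significance.

```python
# Egger regression on a simulated literature with publication bias.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

true_effect = 0.1
se = rng.uniform(0.05, 0.5, size=5_000)        # studies of varying precision
estimates = true_effect + rng.normal(0.0, se)  # unbiased estimates...
published = np.abs(estimates / se) > 1.96      # ...filtered by significance

# Regress published t-statistics on precision (1/SE); the intercept
# would be zero without bias, and is strongly positive here.
X = sm.add_constant(1.0 / se[published])
egger = sm.OLS(estimates[published] / se[published], X).fit()
print(egger.params[0], egger.pvalues[0])
```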

So many things to do. It is so exciting to see this revolution and to be able to contribute to it!



Thursday, November 10, 2016

The roots of the rise of populism

Following the election of Donald Trump, there has been a spate of explanations as to why he won. Let me synthesize here the various explanations that I have been able to find so far. I find this discussion fascinating not only for scientific reasons, but also because it speaks to what seem to be major changes affecting all western democracies.
  1. The losers of trade globalization strike back: white blue-collar workers who have been hit by globalization vote for more protection from foreign workers. The best illustration of this phenomenon is Branko Milanovic's elephant graph. The middle class in developed countries has seen stagnating incomes since 1988 (and even earlier) and has radicalized as a consequence. This explanation seems to have a grain of truth. Recent work by David Autor, David Dorn, Gordon Hanson and Kaveh Majlesi shows that in US counties harboring more industries competing with Chinese exports, workers not only lose their jobs, earn less and are in worse health, but they also tend to vote for more extreme Republican candidates.
  2. The losers of technological change rebel. Erik Brynjolfsson has tweeted a graph suggesting that the vote for Trump is correlated with the share of a county's jobs that are routine. David Autor, along with various other coauthors, has shown that a huge change in advanced economies has been the progressive disappearance of middle-level routine jobs, yielding a polarization of the job market: only high-skilled tech jobs and very low-skilled service jobs are growing, while middle-level industrial and clerical jobs are being replaced by robots and computers.
  3. The white majority feels threatened by the rise of ethnic minorities. The idea is that the modern welfare state is easier to maintain when there is ethnic homogeneity, the main reason being that we are less generous with people who do not look like us. The increase in the size of the Hispanic and African-American communities might have triggered a fear reaction from the white majority, which is now trying to protect itself by restricting access to citizenship or even deporting immigrants. Recent results for Austria by my IAST colleague and friend Charlotte Cavaillé seem to lend some credit to this explanation: Charlotte and her coauthors find that natives competing with immigrants for access to public housing vote more for the far right.
  4. It's Fox News: Simon Wren-Lewis has a very nice post where he discusses recent evidence on the persuasive impact Fox News has on election outcomes.
  5. Democrats simply did not turn out for Hillary. David Yanagizawa-Drott has put together a really nice graph showing that Hillary failed to mobilize Democratic voters.
  6. It's the economy, stupid! It turns out that, back in March, Ray Fair's model was already predicting a Republican win. The explanation comes from the penalty for having two consecutive Democratic terms before the election and from the weak economy. Both factors gave a strong lead to any Republican candidate.
What to make of all that? Explanations 1 to 3 are the most frightening. They suggest that the model of the modern western welfare state is in jeopardy under the combined forces of globalization (trade and migration) and technical change. These explanations are extremely dispiriting, since it seems extremely hard to conjure up solutions to the challenges raised by these changes. I'd love to think that it is only explanation 6!

Friday, April 1, 2016

Why Are the French Unhappy at Work, and What Can We Do to Change That?

For several years now, I have been taking a close interest in how we raise children and in management. I am very happy to be able to start bringing these themes into my research agenda. I will post more regularly on these questions in the future, notably in French, since one of my goals is to have a concrete impact on the practices of parents, teachers and managers.

In the meantime, at the invitation of the TSE Alumni association, I am giving a fairly programmatic talk on these themes on April 8 in Paris. Here is the text describing the theme of the talk:
The French are unhappy at work; many indicators tell us so. How can we explain that work is, for the French, a place of stress and anxiety rather than a place of fulfillment and well-being? What solutions can be brought to change this state of affairs? Sylvain Chabé-Ferret will defend his conviction that the French malaise at work is the product of a harmful interaction between two forms of distrust. On the one hand, firms do not trust their employees, and impose on them authoritarian, hierarchical and stressful managerial and organizational practices. On the other hand, employees put up with these practices because they do not trust their own feelings and emotions, and hesitate to acknowledge their suffering and to act on it. For him, this divorce between what employees deeply feel and what they allow themselves to think originates in a psychological structure he calls the False Self. The False Self is built in childhood by parental and educational practices that repress the expression of children's emotions and aspirations. Combining economics, psychology, anthropology and neuroscience, Sylvain will show us the origin of the False Self and its impact on society and on individuals' actions. Sylvain will then suggest avenues for parenting, educational and managerial practices that respect the Self, and will present the evidence we have of their effectiveness. Finally, he will conclude with some practical advice for young graduates, to help them better listen to their emotions and follow their aspirations, so as to be happier in their jobs and to contribute to changing society at a deeper level.
A few seats are reserved for outside guests. Leave a message in the comments to receive an invitation.

Friday, November 20, 2015

How Can We Save the Amazon Rainforest?

Recently, I embarked on a collective effort to write a blog post on the economics of public policies to fight deforestation, as part of a series of posts in preparation for COP21, a joint venture between TSE, Le Monde and The Economist. I had the chance to work with a team of amazingly cool, witty and nice collaborators (Arthur van Benthem, Eduardo Souza Rodrigues, Julie Subervie). The post has appeared in French here and in English there.

What I would like to do here is to summarize our ideas and then highlight the results of a recent paper by Julie, which gives the first robust evidence on the effectiveness of one particularly important policy in the Amazon: Payments for Environmental Services (PES).

The reasons why we want to stop deforestation are pretty straightforward: deforestation is responsible for about 10 percent of the emissions driving climate change and leads to massive biodiversity losses. In the Amazon, deforestation is not simply the result of private decisions by landowners; it is the result of a massive colonization campaign sponsored by Brazil's military government in the 70s.

The key question is which policy to choose to halt deforestation. There are a lot of options. Governments tend to favor regulation, like declaring some land protected and banning all cutting there. Economists call these policies "Command and Control" because they are highly interventionist and leave no leeway for agents to adapt to the policy. Economists favor price instruments above all, such as a carbon market or a carbon tax. The key advantage of these policies is that they leave much more leeway for agents to adapt: when there is a price on carbon, the least costly carbon-reduction options are implemented first. With command and control, you might impose much higher costs to reach the same environmental gain, by banning very profitable cuts while allowing trees to be cut where the economic returns are actually small. PES sit in between command-and-control policies and price instruments. With PES, governments pay farmers who agree to keep their trees standing a fixed amount per hectare. Theoretically, PES are less efficient than market instruments, since they leave farmers free to refuse the incentive, whereas a tax or a price applies to everyone. Also, the farmers who volunteer might be the ones who would not have cut their trees anyway, even in the absence of payment. If this is widespread, they benefit from a windfall effect: they do nothing and get paid. PES should still be better than command and control, though (we say that they are second-best instruments; actually they are third best, since they are linear contracts, whereas one could think of nonlinear PES contracts that would be the true second-best option).
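To make the cost argument concrete, here is a toy simulation (mine, not from our post) comparing the opportunity cost of keeping the same number of hectares forested under a blanket ban versus a price instrument that stops the least profitable clearing first:

```python
# Command and control vs. a price instrument, on made-up numbers.
import numpy as np

rng = np.random.default_rng(7)
profit = rng.uniform(0, 100, size=1_000)  # profit per hectare from clearing

target = 500  # hectares of forest we want to keep standing

# Command and control: ban clearing on 500 arbitrarily chosen hectares
banned = rng.choice(1_000, size=target, replace=False)
cost_ban = profit[banned].sum()

# Price instrument (tax or PES): the 500 cheapest hectares stay forested
cost_price = np.sort(profit)[:target].sum()

print(f"ban: {cost_ban:,.0f} vs price: {cost_price:,.0f}")  # ban ~2x costlier
```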


But this is theory. What we would like to know is whether these predictions hold in real life, right? I mean, it is useful to know how policies work in the perfect, vacuum-filled world of models, but how do these predictions hold up in reality? Many things can go wrong in the imperfect, air-filled realm of the real world. Agents might not be as well informed or as rational as we assume they are in our models, and tax enforcement might be undermined by corruption or inefficiency.

It turns out that it is extremely difficult to measure the effectiveness of forest conservation policies. Why? Because we face two very serious empirical challenges: additionality and leakage.

Additionality is a key measure of a program's success: how much did the policy contribute to halting deforestation where it was implemented? For example, by how much did deforestation decrease in protected areas? Or among farmers subject to a deforestation tax or a price of carbon? Or among farmers taking up a PES? In order to measure additionality, we have to compute how much deforestation there would have been in the absence of the policy. But this is extremely hard to do, since it did NOT happen: the policy was actually implemented. The reference situation to which we would like to compare what happened never occurred; we call this situation the counterfactual.

Since we cannot directly observe the counterfactual, we try to proxy for it using something observed. We could take as a proxy the situation before the policy was implemented. But this proxy might be very bad. For example, after the Brazilian government tightened regulatory policies and improved forest monitoring thanks to satellite imagery in the 2000s, deforestation in the Amazon slowed to approximately half a million hectares annually. It looks like the policy was successful. But, at the same time, lower prices for soybeans and cattle products also reduced the incentives to deforest. So what was the main driver of the decrease in deforestation? How much forest did the policy save, exactly?

We could also use areas and farmers not involved in the policy as a proxy for the counterfactual situation. But this proxy might be very bad as well. For example, even if we observed that farmers who participate in a PES program have lower deforestation rates than those who do not, this would not imply that the scheme actually reduced deforestation. After all, the farmers who stand to profit the least from cutting down their trees are the most likely to sign up for the program. As a result, the program might end up paying some farmers for doing nothing differently from what they would have done anyway, and the additional impact of the program may very well be small.
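A toy simulation (my illustration, with made-up numbers) shows how badly this selection can bias a naive comparison of enrolled and non-enrolled farmers:

```python
# Voluntary enrollment selects farmers who would barely deforest anyway,
# so the naive enrolled-vs-non-enrolled comparison overstates additionality.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
propensity = rng.uniform(0.0, 1.0, size=n)  # share of land a farmer would clear
enrolled = propensity < 0.3                 # low-propensity farmers sign up

true_effect = -0.05                         # program cuts clearing by 5 p.p.
cleared = propensity + true_effect * enrolled

naive = cleared[enrolled].mean() - cleared[~enrolled].mean()
print(f"naive: {naive:.2f} vs true additionality: {true_effect:.2f}")
# naive: -0.55 -- ten times the true effect
```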

Leakage occurs when a conservation policy that is successful locally triggers deforestation elsewhere. For example, a farmer may stop clearing forest on plots contracted under a PES program but at the same time increase deforestation on plots not covered by the contract. On a larger scale, the threat of paying fines to a provincial government may give farmers or logging firms an incentive to move their operations to a neighboring province. In such cases, leakage undermines the additionality of conservation programs. Detecting leakage is even more difficult than measuring additionality: we not only need to compute a counterfactual, we first and foremost need to detect where the leakage goes.

OK, so additionality and leakage effects are key to ranking the policy options in the real world. So what do we know about the additionality and leakage effects of the various forest policies in the Amazon (and in other threatened rainforests)? Not much, actually.

As in medicine when testing the efficacy of a new drug, the gold standard of proof in empirical economics is the Randomized Controlled Trial (RCT). In an RCT, we randomly select two groups of individuals or regions and implement the policy for only one group, keeping the second as a control. The difference between the treatment and control groups provides a direct measure of the additionality of the policy. RCTs can also be designed to measure leakage. Though RCTs are commonly run to evaluate education or health policies worldwide, there have been only a few randomized evaluations of forest policies. Kelsey Jack, from Tufts University, has performed RCTs to assess tree-planting subsidies in Malawi and Zambia. To my knowledge, there are no similar results for forest-conservation PES, in Brazil or elsewhere. Seema Jayachandran has been conducting an RCT-based evaluation of a forest-conservation PES program in Uganda, designed to estimate both additionality and leakage effects. We are impatiently waiting for her results.

In the absence of RCTs, economists usually try to identify naturally occurring events, or “natural experiments,” that they hope can approximate the conditions of an RCT. In a recent paper, soon to be available here, Julie and her coauthors have conducted such a study of one of the first forest-conservation PES programs ever implemented in the Amazon. The key graph in the paper tracks forest cover among participant and comparison communities before and after the program.
The program was implemented in 2010. What you can see is that the pace of deforestation decreased after 2010 among participants, while it remained the same among non-participants. The change in the difference in land cover between participant and comparison communities is a measure of additionality: it is pretty large, about 10 percentage points. Looking at the comparison communities, you can see that the pace of deforestation did not increase there, as it should have if leakage were present. What seems to have happened is that farmers started farming the previously deforested land more intensively and actually decreased deforestation on new plots. Using these estimates, an estimate of how much carbon was saved and a value for a ton of carbon, Julie and her coauthors estimate that the benefits of the program exceed its costs.
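For readers who want the mechanics, here is a minimal sketch (mine, with hypothetical column names, not the authors' code) of the difference-in-differences logic behind this comparison: the change in forest cover among participants after 2010, minus the same change among comparison communities.

```python
# Difference-in-differences on community-by-year data (illustrative schema).
import pandas as pd

def did_estimate(df: pd.DataFrame) -> float:
    """df has one row per community and year, with hypothetical columns
    'forest_share', 'participant' (0/1) and 'year'."""
    post = (df["year"] >= 2010).rename("post")
    means = df.groupby(["participant", post])["forest_share"].mean()
    change_participants = means.loc[(1, True)] - means.loc[(1, False)]
    change_comparison = means.loc[(0, True)] - means.loc[(0, False)]
    return change_participants - change_comparison
```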

A couple of comments are in order here.

First, this work is a beautiful example of empirical economists at work. This is how we hunt for causality in the absence of an RCT. In order to check the validity of our natural experiments, we look for effects where there should be none. Here, you can see that before 2010 the deforestation trends in participant and comparison communities were parallel. This supports the critical assumption Julie and her coauthors make: in the absence of the PES, deforestation trends would have remained the same over time.

Second, there is still work to do. The measure of forest cover is declarative and does not rely on satellite data; it is still possible that farmers lied about how many hectares of forest they had left standing. The number of observations is small, so precision is low, and if observations happen to be correlated within communities, precision is even lower. We would also like to know whether these changes in farming practices will persist over time or whether deforestation will resume as soon as the program stops. Julie is trying to collect new data on the same farmers several years later to check this.

Third, these are amazingly encouraging results. It seems that we can do something to save the Amazon rainforest after all. Rejoice.