Friday, November 20, 2015

How Can We Save the Amazon Rainforest?

Recently, I embarked on a collective effort to write a blog post on the economics of public policies to fight deforestation, as part of a series of posts in preparation for COP21, a joint venture between TSE, Le Monde and The Economist. I had the chance to work with a team of amazingly cool, witty and nice collaborators (Arthur van Benthem, Eduardo Souza Rodrigues, Julie Subervie). The post has appeared in French here and in English there.

What I would like to do here is to summarize our ideas and then highlight the results of one recent paper by Julie which gives the first robust evidence on the effectiveness of one particularly important policy, Payments for Environmental Services (PES), in the Amazon.

The reasons why we want to stop deforestation are pretty straightforward: deforestation is responsible for about 10 percent of greenhouse gas emissions and leads to massive biodiversity losses. In the Amazon, moreover, deforestation is not simply the result of private decisions by landowners: it is the legacy of a massive colonization campaign sponsored by the junta government in Brazil in the 70s.

The key question is which policy to choose to halt deforestation. There are a lot of options. Governments tend to favor regulation, like declaring some land protected and banning all cutting there. Economists call these policies "Command and Control" because they are highly interventionist and leave agents no leeway to adapt to the policy. Economists favor price instruments above all, such as a carbon market or a carbon tax. The key advantage of these policies is that they give agents much more leeway to adapt: when there is a price on carbon, the least costly carbon reduction options are the ones implemented first. With command and control, you might impose much higher costs to reach the same environmental gain, by banning very profitable cuts of trees while allowing trees to be cut where economic returns are actually small. PES are an intermediate between command and control policies and price instruments. With PES, governments pay farmers who agree to keep their trees standing a fixed amount per hectare. Theoretically, PES are less efficient than market instruments, since they leave room for farmers to choose not to take up the incentive, whereas a tax or a price applies to everyone. Also, those who volunteer might be the ones who would not have cut their trees anyway, even in the absence of payment. If this is widespread, they benefit from a windfall effect: they do nothing and get paid. PES should still be better than command and control, though. We say that they are second best instruments; actually, they are third best, since they are linear contracts, whereas one could think of nonlinear PES contracts that would be the true second best option.


But this is theory. What we would like to know is whether these predictions hold in real life, right? I mean, it is useful to know how policies work in the perfect, vacuum-sealed world of models, but how do these predictions hold up in reality? Many things can go wrong in the imperfect, air-filled realm of the real world. Agents might not be as well-informed or as rational as we assume in our models, and tax enforcement might be undermined by corruption or inefficiency.

It turns out that it is extremely difficult to measure the effectiveness of forest conservation policies. Why? Because we face two very serious empirical challenges: additionality and leakage.

Additionality is a key measure of a program's success: how much did the policy contribute to halting deforestation where it was implemented? For example, by how much did deforestation decrease in protected areas? Or among farmers subject to a deforestation tax or to a price of carbon? Or among farmers taking up a PES? In order to measure additionality, we have to compute how much deforestation there would have been in the absence of the policy. But this is extremely hard to do, since that situation did NOT happen: the policy was actually implemented. The reference situation to which we would like to compare what happened never occurred; we call this situation the counterfactual.

Since we cannot directly observe the counterfactual, we are going to try to proxy for it using something observed. We could take as a proxy the situation before the policy was implemented. But this proxy might be very bad. For example, after the Brazilian government tightened regulatory policies and improved forest monitoring thanks to satellite imagery in the 2000s, deforestation in the Amazon slowed down to approximately half a million hectares annually. It looks like the policy was successful. But, at the same time, lower prices for soybeans and cattle products also reduced incentives to deforest. So what was the main driver of the decrease in deforestation? How much forest did the policy save exactly?

We could also use areas and farmers not involved in the policy as a proxy for the counterfactual situation. But this proxy might be very bad as well. For example, even if we observed that farmers who participate in a PES program have lower deforestation rates than those who do not, this does not imply that the scheme actually reduced deforestation. Indeed, farmers who stand to profit the least from cutting down their trees are the most likely to sign up for the program. As a result, the program might end up paying some farmers for doing nothing differently from what they would have done anyway. And so the additional impact of the program may very well be small.
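To see how selection can generate a spurious effect, here is a minimal simulation sketch (all numbers are made up for illustration). In it, the program truly changes nothing, everyone clears exactly when clearing is profitable, yet the naive comparison of participants with non-participants shows a large "effect," simply because low-profit farmers are the ones who enroll:

```python
import random

random.seed(1)

# Hypothetical setup: each farmer's profit from clearing; low-profit
# farmers are much more likely to enroll in the (ineffective) program.
n = 10_000
profit = [random.gauss(0, 1) for _ in range(n)]
enroll = [random.random() < (0.8 if p < 0 else 0.2) for p in profit]
# The program does nothing: a farmer clears iff clearing is profitable.
clears = [p > 0 for p in profit]

mean = lambda xs: sum(xs) / len(xs)
naive = mean([c for c, e in zip(clears, enroll) if not e]) \
      - mean([c for c, e in zip(clears, enroll) if e])
print(round(naive, 2))  # a large positive "effect", though the truth is zero
```

The naive gap here is pure windfall: it rewards farmers for doing what they would have done anyway.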

Leakage occurs when a conservation policy, which may be successful locally, triggers deforestation elsewhere. For example, a farmer may stop clearing forest on plots that he has contracted under a PES program but at the same time increase deforestation on plots not covered by the contract. On a larger scale, the threat of paying fines to a provincial government may give incentives to farmers or logging firms to move operations to a neighboring province. In such cases, leakage undermines the additionality of conservation programs. Detecting leakage effects is even more difficult than detecting additionality. Indeed, we not only need to compute a counterfactual, but we first and foremost need to detect where the leakages go.

Ok, so additionality and leakage effects are key to be able to rank the policy options in the real world. So what do we know about the additionality and leakage effects for the various forest policies in the Amazon (and in other threatened rainforests)? Not much actually.

As in medicine when testing the efficacy of a new drug, the gold standard of proof in empirical economics is the Randomized Control Trial (RCT). In an RCT, we randomly select two groups of individuals or regions and implement the policy for one group only, keeping the second as a control. The difference between treatment and control provides a direct measure of the additionality of the policy. RCTs can also be designed to measure leakage. Though RCTs are commonly run to evaluate education or health policies worldwide, there have been only a few randomized evaluations of forest policies. Kelsey Jack from Tufts University performed RCTs to assess tree planting subsidies in Malawi and Zambia. To my knowledge, there are no similar results for forest conservation PES, in Brazil or elsewhere. Seema Jayachandran has been conducting an RCT-based evaluation of a forest-conservation PES program in Uganda. The experiment has been designed to estimate both additionality and leakage effects. We are waiting for her results impatiently.
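The logic of an RCT can be sketched in a few lines of simulated data (the numbers are entirely hypothetical): because treatment is assigned at random, a simple difference in means recovers the policy's additionality without any modeling of farmers' behavior.

```python
import random

random.seed(0)

# Hypothetical illustration: 1,000 villages, baseline deforestation of
# about 10 hectares/year, and a simulated policy that removes 3 ha/year
# in treated villages (so the true additionality is 3 hectares).
n = 1000
treated = [random.random() < 0.5 for _ in range(n)]
deforestation = [
    random.gauss(10, 2) - (3 if t else 0)
    for t in treated
]

mean = lambda xs: sum(xs) / len(xs)
effect = mean([d for d, t in zip(deforestation, treated) if t]) - \
         mean([d for d, t in zip(deforestation, treated) if not t])
print(round(effect, 1))  # close to -3: randomization recovers additionality
```

Contrast this with the selection problem above: here the comparison group is valid by construction, because enrollment was decided by a coin flip rather than by the farmers themselves.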

In the absence of RCTs, economists usually try to identify naturally occurring events or “experiments” that they hope can approximate the conditions of an RCT. In a recent paper, soon to be available here, Julie and her coauthors have conducted such a study of one of the first forest-conservation PES programs ever implemented in the Amazon. The key graph in the paper is the following:
The program was implemented in 2010. What you can see is that the pace of deforestation decreased after 2010 among participants, while it remained the same among non-participants. The change in the difference in land cover between participant and comparison communities is a measure of additionality: it is pretty large, about 10 percentage points. Looking at comparison communities, you can see that the pace of deforestation did not increase there, as it should have if leakage were present. What seems to have happened is that farmers started farming the previously deforested land more intensively, and actually decreased deforestation on new plots. Using these estimates, an estimate of how much carbon was saved, and a value for a ton of carbon, Julie and her coauthors find that the benefits of the program exceed its costs.

A couple of comments here. First, this work is a beautiful example of empirical economists at work. This is how we hunt for causality in the absence of an RCT. In order to check the validity of our natural experiments, we test whether we find effects where there should be none. Here, you can see that before 2010, the deforestation trends in participant and comparison communities were parallel. This supports the critical assumption Julie and her coauthors make here: in the absence of the PES, deforestation trends would have remained parallel over time. Second, there is still work to do. The measure of forest cover is declarative and does not rely on satellite data. It is still possible that farmers lied about how many hectares of forest they had left standing. The number of observations is small, so precision is low. And if observations happen to be correlated within communities, precision is even lower. We would also like to know whether these changes in farming practices will persist over time, or whether deforestation will resume as soon as the program stops. Julie is trying to collect new data on the same farmers several years later to check this. Third: these are amazingly encouraging results. It seems that we can do something to save the Amazon rainforest after all. Rejoice.
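For readers who want the mechanics: the additionality estimate behind a graph like this one is a difference-in-differences, the change among participants minus the change among comparison communities. A minimal sketch, with made-up forest-cover numbers chosen only to illustrate a 10 p.p. effect:

```python
# Hypothetical forest-cover shares (all numbers invented for illustration).
participants = {"before": 0.60, "after": 0.55}   # share of land under forest
comparison   = {"before": 0.58, "after": 0.43}

# Difference-in-differences: the participants' change minus the
# comparison communities' change nets out the common trend.
did = (participants["after"] - participants["before"]) \
    - (comparison["after"] - comparison["before"])
print(round(did, 2))  # 0.1: participants kept ~10 p.p. more forest standing
```

The parallel pre-2010 trends are what make the comparison communities' change a credible stand-in for the counterfactual change among participants.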

Sunday, March 8, 2015

Investing in the Youth of the Banlieues: A National Emergency

The terrorist attacks of January 7 shook me, as they did many of us. After the grief and the marches, I felt the need to act. I started by writing an op-ed. Thanks to TSE's excellent head of communications, Jenni, it was published yesterday on the website of La Tribune and on the TSE blog. Here is a slightly improved version, with links to the works cited. I also gave a talk on this theme to the Cercle du Bazacle, TSE's club of partner companies. Thanks to Joel and Karine, the organizers of the Cercle's talks, for enthusiastically accepting my proposal and giving me the floor. And thanks to everyone who came that day for their comments and encouragement.

Here is the text of the op-ed.

The terrorist attacks against Charlie Hebdo and the Hyper Casher supermarket, and the historic marches that followed them, call for a political response. The nature of that response will define us as a society and express our values. We are at a crossroads. On this waste and these unspeakable murders, but also on the magnificent reaction that followed them, we can build either a better society or a society of fear.

Of course, there will be a security response. But limiting our response to these events to a French-style Patriot Act would be a disaster. Barricading ourselves in our homes, sealing our children inside their schools, turning our country into an impregnable fortress? If that is our only response, it is a terrible one, for it carries the seeds of fear and of the unraveling of our society into insularity, into distrust of everything that is different, and ultimately into the fear of everyone toward everyone else.

We need another, complementary response. More ambitious. More beautiful, too. That response is to invest in the youth of our banlieues, to recognize and support the emergence of active participants in tomorrow's society. They already exist. But we do not see them. They are blocked from our minds by the Merahs, the Kouachis, the "gang des barbares," the score-settling, the drug trafficking, the unemployment, the riots. But they are there, the immense silent majority that hangs on, that has chosen life, with its frustrations and its joys, and that has rejected every deadly ideology. I think of my classmate from prépa, Mohamed, the only Arab who was not the son of an emir in our posh high school in downtown Toulouse. Momo is an engineer now. He came from the Izards neighborhood, like Merah. I think of my volleyball friends from Villejuif, with whom I played for years, who welcomed me with open arms, me, the "çaifran." With my southwestern accent and my goatee, they called me d'Artagnan. They became my friends, they, the "renois," the "noichs," the "rebeus." I spent so many good times with them that I ended up picking up their expressions and intonations, to the point that my "normal" friends called me "la racaille."

They are there, those who said no to extremism and yes to French society. They are the immense majority, but they need us. How can we help them? How can we make sure there are more Momos and fewer Merahs? What is the best approach? Investing in schools? Changing urban policy? Fighting discrimination? Intervening in the workings of the labor market? A legitimate debate must take place around these options, informed as well as possible by rigorous evaluations.

My own conviction is that the most effective form of investment is in educational programs aimed at very young children and their parents. These programs do not seek to develop children's cognitive abilities or to impart academic knowledge, but to help them be better versions of themselves by teaching them to plan tasks, manage their emotions, and resolve their conflicts with others peacefully. Some interventions also give parents simple and sometimes overlooked information, such as the benefits of talking to your child even before he can talk himself, or of reading him bedtime stories. Recent research, summarized by Jim Heckman and Tim Kautz in an excellent paper for the OECD, has shown that experimental versions of these programs drastically reduce engagement in illegal activities in adulthood and also substantially increase the share of college graduates. These effects are obtained with a rather limited investment: the program studied by Yann Algan and his coauthors, for example, consists of 19 role-playing sessions in groups of 3 with a social worker. Yann presented the long-run impacts of this program at a conference at the Institute for Advanced Study in Toulouse (IAST): they are spectacular. These research results also show that such programs are all the more effective the earlier they come in a child's life. The longer certain behaviors are allowed to take hold, the harder they are to change later. This is of course not a reason to do nothing for teenagers and young adults, but it is a reason to think seriously about interventions starting in early childhood. This is what led Jim Heckman to propose his equation for a better educational investment: invest early and target better.

I find this empirical evidence convincing, but my conviction is also more visceral. I think of all my friends from Villejuif who told me, "If I had known, I would have worked harder at school. But I didn't care. And anyway, it was always chaos." I think of my friend who ran a youth recreation center in a rough neighborhood of Toulouse and who resigned with her whole team at the beginning of the year, victims of a collective burnout in the face of the extreme social distress they witnessed, day after day: lost, violent, sad children, and overwhelmed, helpless parents who sometimes had no response other than indifference or violence. These programs offer concrete answers to the distress of parents and the suffering of children.

Make no mistake: the fight to win the hearts and minds of the children of the banlieues begins now. The fight against the extremists, the gangs, the criminals, the traffickers. If we do not want these kids to swell their ranks, now is the time to give them their chance, to give them the right weapons, the ones that will allow them to join society and pursue their happiness within it. Is there a more beautiful collective project? Is there anything more beautiful than a child's gratitude? And what do we risk, other than seeing them engage even more in society and contribute to it in ways we cannot even imagine today?

Tuesday, February 10, 2015

Land reallocation in France: some nice maps

Some time ago, I blogged about one of my current projects on land reallocation in France. I have made some progress on this project in the meantime and I am going to report on it here.

I have worked with Elise Maigné, at Inra. Together, and with the help of Eric Cahuzac, we have been able to secure access to the data on reparcelling events at the commune level. This data was generously transmitted to us by Nadine Polombo, who worked together with Marc-André Philippe to digitize the dataset originally in the hands of the French Ministry of Agriculture. Nadine believes that their dataset is the only one that remains, since the Ministry of Agriculture decided to destroy the original data and no longer keeps track of reparcelling events. Since then, the data have been made accessible through the open data portal of the French government.

The first thing to note is that there have been 22,374 reallocation events in France reported in this dataset. This is huge, since there are 36,681 communes in France. Some communes have actually undergone more than one reallocation event. There are 18,227 communes that have undergone at least one reallocation event. This means that 49.7% of all French communes have undergone at least one reallocation event.

The first issue with the dataset is that some information is missing: the opening date of the reallocation event is missing for 201 events, the closing date for 380 events, and both dates are missing for 291 events. This leaves 21,502 events with non-missing information on both the opening and closing dates of the reallocation event.
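For the record, the bookkeeping behind these figures is simple arithmetic; here is a quick check using only numbers already quoted in the text:

```python
# All figures below are taken from the post itself.
total_events = 22_374
missing_open_only = 201
missing_close_only = 380
missing_both = 291

# Events with both dates observed:
complete = total_events - (missing_open_only + missing_close_only + missing_both)
print(complete)  # 21502

# Share of communes with at least one reallocation event:
communes = 36_681
reallocated = 18_227
print(round(100 * reallocated / communes, 1))  # 49.7
```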

Figure 1: Reallocation Events in France
The events with information on the opening date are presented in Figure 1. Reallocation events start with the end of WWII, with this first wave stopping around 1953. A second wave starts in the late 50s and peaks during the 60s. That is the main wave of land reallocation. Then several waves occur in the 70s, 80s and 90s.

Figure 2: First (1) vs Subsequent (2) Reallocation Events
Since some communes have undergone more than one reallocation event, it is interesting to plot the reallocation events depending on whether they are the first or not. This is done in Figure 2. The wave of the 90s seems to be mainly due to reallocation events occurring in communes that had already been reparcelled once. It is possible, though, that a different portion of the commune was reparcelled in the two events.

What would be great now is to get an idea of how reparcelling was rolled out over space and time. It would especially be nice to know which reparcelling events occurred between 1955, 1970, 1979, 1988, 2000 and 2010, the dates at which agricultural censuses were conducted in France. I would add 1963 and 1967, as two large surveys were conducted in those years. In order to do this, I have to use GIS software. Since I use Stata to analyse this dataset, I'm going to use its GIS facilities (for the first time). The beautiful map presented in Figure 3 is the result of this exercise.
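A sketch of the classification step, assuming all we need is each event's opening year (the function and its interface are my own illustration, not the actual Stata code used for the map):

```python
import bisect

# Census years mentioned in the post, plus the 1963 and 1967 surveys.
breaks = [1955, 1963, 1967, 1970, 1979, 1988, 2000, 2010]

def census_window(year):
    """Return the inter-census window an opening year falls into."""
    i = bisect.bisect_right(breaks, year)
    lo = breaks[i - 1] if i > 0 else None
    hi = breaks[i] if i < len(breaks) else None
    if lo is None:
        return f"before {hi}"
    if hi is None:
        return f"after {lo}"
    return f"{lo}-{hi}"

# A few hypothetical opening years for illustration:
for y in (1949, 1965, 1992):
    print(y, census_window(y))  # before 1955, 1963-1967, 1988-2000
```

Once each commune's events are bucketed this way, coloring the map by window gives the roll-out picture in Figure 3.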

Figure 3: Map of the Reallocation Events in France
The first striking feature of this map is that land reallocation mainly occurred in the north of France and much less so in the south. One explanation could be that land in the north is much more fertile, but I do not think this exhausts all possible explanations. This will be the topic of subsequent investigation. The second striking feature is how much the timing of land reallocation is spatially autocorrelated. For example, the area around Paris (the Paris basin) seems to have been almost completely reparcelled before 1955. The first wave of reparcelling thus seems to have been mainly concentrated in this area. The outskirts of the basin were reached progressively during the 60s and 70s.

The third striking feature of this map is that it coincides very well with a rough map of the agricultural regions in France (see Figure 4).
Figure 4: Map of the Agricultural Regions in France
The cereal growing regions (yellow) seem to have reparcelled very early, while the areas in mixed cultures (light green) have reparcelled more slowly. Finally, forest regions or regions with open range cattle (dark green) have almost not reparcelled.

Obviously, this strong spatial autocorrelation is not good news for studying the causal effect of land reallocation on agricultural technology adoption. What would have been great is for reparcelling to have occurred randomly across space, with some communes within the Paris basin reparcelling early and others not, so that comparing them captures the effect of reparcelling. As it stands, a raw comparison of reparcelling communes with non-reparcelling ones would be biased by soil quality and types of production. A better comparison would condition on the agricultural zones: comparing communes within the Paris basin with early and late reallocation (if we can find any) is already an improvement. Actually, my idea is to use the finest possible grid size to compare close communes with different reparcelling dates.

A last striking feature of the data is that communes undergoing reallocation sometimes seem to be aligned along a line on the map. This is because land reallocation occurred along a railroad track or a highway when these infrastructures were built.


TBC

Tips in Scientific Writing

The student paper at TSE (TSEconomist) has asked some of us to provide writing tips for students. Here is my take.

I can say that I have not been very good at writing papers until recently, and that practice is the essence of progress. But, there are a few things I can say that I think can help make writing easier.

The first and main thing is: do NOT start writing when you have finished the theoretical/empirical work. This is a rookie mistake that I repeatedly made over the 3 papers I have out now and the 3 others that I am currently writing. This is stupid. Writing should be intricately related to the work itself, and the paper should be written all along the course of the project. (I think we should think in terms of project, not of papers, since a project is made of several papers, and you have to conduct research, not write papers, papers are the outcome, not the goal.) 

What I do now is blog: first, I blog about a research idea. This makes for a nice post where I have to explain why I think the idea is important, why I should spend time and effort exploring it, and why people should be interested in the results. This is maybe the most critical part of any project. It is also the part that most people overlook, especially students. They generally want to rush to the technical things that seem more reassuring, instead of taking the time to elaborate their intuition about why something is important. Do elaborate on the why of the project. Spend time and effort explaining why this is an important question for economic science and economic policy, why the literature has not found an answer yet, and why you think you can solve it with your idea. If you cannot do that, I would say stop and think again. Do you really want to spend a year of effort on something when you do not even know why you are doing it? If you skip this step, you will eventually end up repeating previous research with a small tweak, or you are going to lose the reader in the details and lose track of the ambitious and novel idea that you have. With the blog, I usually write updates on the research as I go along, and this keeps me focused on the original idea and on the eventual changes that I might have made. I have found that I, and students also, tend to lose sight of the original goal as we enter the technical part of the project, and we bury ourselves in details instead of exploring the deep, important research question. So, first piece of advice: write a blog (or write for my blog, or for any blog). Then, writing the paper is just a matter of wrapping things up. It becomes so much easier.

My second piece of advice is: write as if you were explaining your research to your grandma. Use a relaxed tone and avoid technical words. Try talking yourself, your friends, your family, your colleagues, your teachers, anyone, through your research project, as often as you can. In particular, confront specialists in your field and see if you can convince them. If you cannot, it does not mean that your idea is stupid; it means that it is still not clear enough.

My third piece of advice would be: read the LSE blog on scientific writing. It is full of sound, detailed advice like "find the essence of your message," "never anticipate an argument or circle back to one," "start paragraphs with the main idea and then develop it," and "choose an accurate and catchy title."

My fourth piece of advice is: read John Cochrane's writing tips for PhD students. They are excellent. "Find the main message" would be the essence of it, and it is in general really hard to do.

Friday, February 6, 2015

The Credibility Revolution in Economics

In a thought-provoking paper, Josh Angrist and Steve Pischke describe the credibility revolution that is currently going on in economics. Having grown up in the Haavelmo-Cowles-Heckman tradition of structural econometrics, I have to admit that I resisted the intuitive attraction this paper had on me. But the more I think about it, the more I can see what is correct in the view that Josh and Steve defend in their paper, the more I see myself adapting this view to my own everyday research, and the more I find myself happy about it. The credibility revolution makes a lot of sense to me because I can relate it to the way I was taught biology and physics, and to the reasons why I loved these sciences: their convincing empirical grounding. I admittedly have my own interpretation of the credibility revolution, one that does not fully overlap with Josh and Steve's. I am going to try to make it clear in what follows.

To me, the credibility revolution means that data and empirical validation are as important as sound and coherent theories. It means that I cannot accept a theoretical proposition unless I have access to repeated tests showing that it is not rejected in the data. It also means that I do not use tools that have not repeatedly proven that they work.

Let me give three examples in economics. In economics as a behavioral science, a very important tool to model the behavior of agents under uncertainty is the expected utility framework, which dates back at least to Bernoulli, who introduced it to solve the Saint Petersburg paradox. von Neumann and Morgenstern showed that this framework could be rationalized by some simple axioms of behavior. Allais, in a very famous experiment, tested the implications of one of these axioms. What he found was that people consistently violated this axiom. This result has been reproduced many times since then. This means that the expected utility framework, as a scientific description of how people behave, has been refuted. This led to the development of other axioms and other models of behavior under uncertainty, the most famous being Kahneman and Tversky's prospect theory. This does not mean that the expected utility framework is useless for engineering purposes. We seem to have good empirical evidence that it is approximately correct in a lot of situations (readers, feel free to leave references on this type of evidence in the comments). It might be simpler to use than the more complex competing models of behavior that have been proposed since. The only criterion on which we should judge its performance as an engineering tool is its ability to predict actual choices. We are seeing more and more of this type of crucial test of our theories, and this is for the best. I think we should emphasize these empirical results in our teaching of economics: they are as important as the underlying theory that they test.
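To make the test concrete, here is a sketch using the classic Allais lotteries (payoffs in millions). Expected utility implies that the ranking in the first choice must match the ranking in the second, whatever the utility function, so the commonly observed pattern (picking the sure thing in the first pair but the long shot in the second) contradicts the theory:

```python
# The classic Allais lotteries, as (probability, payoff) pairs:
# Choice 1: A = 1 for sure           vs  B = (0.10, 5), (0.89, 1), (0.01, 0)
# Choice 2: C = (0.11, 1), (0.89, 0) vs  D = (0.10, 5), (0.90, 0)
# For any utility u with u(0) = 0: EU(A) - EU(B) = EU(C) - EU(D),
# so an expected-utility maximizer must rank both pairs the same way.

def eu(lottery, u):
    return sum(p * u(x) for p, x in lottery)

A = [(1.00, 1)]
B = [(0.10, 5), (0.89, 1), (0.01, 0)]
C = [(0.11, 1), (0.89, 0)]
D = [(0.10, 5), (0.90, 0)]

u = lambda x: x ** 0.5  # any increasing u with u(0) = 0 gives the same gaps
gap1 = eu(A, u) - eu(B, u)
gap2 = eu(C, u) - eu(D, u)
print(abs(gap1 - gap2) < 1e-12)  # True: EU forbids the "A and D" pattern
```

Subjects who choose A in the first pair and D in the second, as many do, cannot be maximizing expected utility for any utility function; that is Allais's refutation.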

The second example is in economics as engineering: McFadden's random utility model. McFadden used the utility maximization framework to model people's choices of transportation mode. He modeled the choice between using your car, the bus, your bike, or walking as depending on the characteristics of the trips (such as time to get to work) and your intrinsic preferences for one mode or the other. He estimated the preferences on a dataset of individuals in the San Francisco Bay Area in 1972. He then used his model to predict what would happen when an additional mode of transportation was introduced (the subway, or BART). Based on his estimates, he predicted that the market share of the subway would be 6.3%, well below the engineering estimates of the time, which hovered around 15%. When the subway opened in 1976, its market share soon reached 6.2% and stabilized there. This is one of the most beautiful and convincing examples of the testing of an engineering tool in economics. Actually, this amazing performance convinced transportation researchers to abandon their old engineering models and use McFadden's. I think it is for this success that Dan was eventually awarded the Nobel prize in economics. We see more and more of this type of test of structural models, and this is for the best.
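Here is a minimal sketch of the multinomial logit at the heart of McFadden's random utility model (the utility numbers below are invented for illustration and have nothing to do with his actual estimates):

```python
import math

def choice_probabilities(utilities):
    """Logit choice probabilities: P(j) = exp(V_j) / sum_k exp(V_k)."""
    m = max(utilities.values())                      # for numerical stability
    e = {j: math.exp(v - m) for j, v in utilities.items()}
    s = sum(e.values())
    return {j: e[j] / s for j in e}

# Hypothetical systematic utilities for each mode (in a real application,
# V would be a function of travel time, cost, and estimated coefficients):
V = {"car": 1.2, "bus": 0.4, "bike": 0.1}
print(choice_probabilities(V))

# The point of the BART exercise: adding a new mode re-splits the market
# using the already-estimated utilities, with nothing re-estimated.
V["subway"] = 0.3
probs = choice_probabilities(V)
print(round(probs["subway"], 2))
```

This out-of-sample logic is what made the BART forecast such a sharp test: the new mode's predicted share follows entirely from utilities estimated before the subway existed.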

The third example is in economics, or rather behavioral, engineering (when I use the term "behavioral," I encompass all the sciences that try to understand human behavior). From psychology, and increasingly economics, we know that cognitive and non-cognitive (or socio-emotional) skills are malleable all along an individual's lifetime. We believe that it is possible to design interventions that help kids acquire these skills. But one still has to prove that these interventions actually work. That's why psychologists, and more recently economists, use randomized experiments to check whether they do. In practice, they randomly select, among a group of children, the ones who will receive the intervention (the treatment group) and the ones who will stay in the business-as-usual scenario (the control group). By comparing the outcomes of the treatment and control groups, we can infer the effect of the intervention free of any source of bias, since both groups are initially identical thanks to the randomization. This is exactly what doctors do to evaluate the effects of drugs. Jim Heckman and Tim Kautz summarize the evidence that we have so far from these experiments. The most famous one is the Perry preschool program, which followed the kids until their forties. The most fascinating finding of this experiment is that, by providing a nurturing environment during the early years of the kids' lives (from 3 to 6), the Perry project was able to durably change the kids' lives. The surprising result is that this change was not triggered by a change in cognitive skills, but only by a change in non-cognitive skills. This impressive evidence has directed a lot of attention to early childhood programs and to the role of non-cognitive skills. Jim Heckman is one of the most ardent proponents of this approach in economics.

The credibility revolution makes sense to me also because of the limitations of Haavelmo's framework. As I already said, trying to infer stable autonomous laws from observational data is impossible, since there is not enough free variation in the data. There are too many unknowns and not enough observations to recover each of them. Haavelmo was well aware of this problem, but the solution that he and the Cowles Commission advocated, using a priori restrictions to restore identification, was doomed to fail. What we need in order to learn something about how our theories and our engineering models perform is not a priori restrictions on how the world behaves, but more free and independent information about how the world works. This is basically what Josh's argument is about: choose these restrictions so that they are as convincing as experiments. That is why Josh coined the term natural experiments: the variation in the observed data that we use should be as good as an experiment, stemming not from theory but from luck: the world has offered us some free variation, and we can use it to recover something about its deeper relationships.

The problem with the natural experiment approach is that whether we have identified free variation, and whether it really can be used to discriminate among theories, is highly debatable. Sometimes we cannot do better, and we have to try to prove that the natural variation is as good as an experiment. But much of the time, we can think of a way of generating free variation ourselves by building a field experiment. And this is exactly what is happening today in economics. All these experiments (or RCTs: Randomized Controlled Trials) that we see in the field are just ways of generating free variation, with several purposes in mind: testing policies, testing the predictive accuracy of models, testing scientific theories. Some experiments can do several of these things at the same time.

This is an exciting time to do economics. I will post in the future on other early engineering and scientific tests, and I will report on my own and others' research that I find exciting.

Tuesday, February 3, 2015

Engineers vs Scientists

In a previous post, I tried to make a case for a separation between economists as engineers and economists as scientists. In this post, I spell out my view of these two roles in more general terms. I will dedicate several posts to examples of engineering and science in economics.

For a scientist, the only thing that matters is whether a given law holds true. For example, I only care whether Newton's laws are true or not. They are not, so I should discard them as a way to explain the world. And that is basically what physicists have done. The fact that Newton's laws can be approximately true under some conditions does not matter for the scientist. It is interesting for learning purposes or for engineering, but it does not say anything about the true behavior of the world. We have better representations of the world that have a wider range of applicability. The truth is neither convenient nor simple; it is just true. For a scientist, the ultimate criterion is whether a theory survives a crucial experiment.

For an engineer, the only thing that matters is that the plane flies. Whether he can explain why it does does not really matter. Sometimes engineers tweak machines based on experience and obtain good performance without being able to explain how. Sometimes engineers use laws that have proven to be wrong (e.g. Newton's laws) because they offer convenient simplifications. They will only use the more complex (and true) version of a law if it provides a sufficient improvement. For example, the engineers in charge of GPS switched from Newtonian mechanics to Einstein's relativity because it provided much better location accuracy. For an engineer, the ultimate criterion is the performance of the device: does it do what it is supposed to do, as efficiently as possible?

Scientists and engineers are also easily differentiated by the way they deal with the problem of induction. We have known at least since Hume that it is not because a phenomenon has happened in the past that it will happen in the future. Hence every scientific law is provisional. Since Popper, we know that truth in science means "not refuted yet." So scientists are aware of the provisional nature of knowledge. This is not a problem as long as you are contemplating the universe in search of an explanation of how it works. For engineers, though, this is a tough problem, because it means that what has worked in the past might not work in the future. All their devices might fail for an unknown reason, and they have to accept that and live with it.

A final difference between science and engineering is how they deal with Cartesian slicing. Cartesian slicing is the idea that the best way for a scientist to study a problem is to slice it into smaller and smaller problems that can be studied independently. A consequence of this is the ever-increasing sophistication and complexity of scientific explanations in every subfield of science. Engineers cannot slice too much, because they have to deal with the fact that all the separated phenomena might interact in the real world and have an influence on their devices. For example, it is hard for an engineer to ignore friction. Engineers face computational limitations, and they therefore have to make useful simplifications, like ignoring one phenomenon, or one side of it, for the sake of implementation. When disregarding a phenomenon, they assume, and very often check, that it does not alter the efficiency of their device too much.

Overall, science is about provisional knowledge of non-refuted laws on sliced phenomena, while engineering is about making devices that work, sometimes using useful simplifications.

I am not saying that engineers and scientists do not talk to each other or live in completely separate worlds. Engineers constantly seek to use more recent laws to perfect their devices. Scientists try to understand why some of the engineer's tricks work, or why something they predict should work sometimes does not. There is a fruitful and fertile dialogue between scientists and engineers. All I am saying is that scientists and engineers have distinct aims, distinct criteria for success and distinct methods.

Economists as Engineers and Economists as Scientists

In 2006, Greg Mankiw published an essay in the Journal of Economic Perspectives titled "The Macroeconomist as Scientist and Engineer." I really liked reading through this paper then, and the more I think about it, the more I think Greg has struck a fundamental chord here. The distinction he makes resonates extremely strongly with me, my own experience with my field and how I view my own work. In this post, I would like to discuss Greg's essay and give some thoughts on why I think this distinction is essential, why it is healthy to make it and why, historically, economists seem not to have paid sufficient attention to it.

Greg starts his paper by acknowledging how much economists want to pose as scientists:

Economists like to strike the pose of a scientist. I know, because I often do it myself. When I teach undergraduates, I very consciously describe the field of economics as a science, so no student will start the course thinking that he or she is embarking on some squishy academic endeavor. Our colleagues in the physics department across campus may find it amusing that we view them as close cousins, but we are quick to remind anyone who will listen that economists formulate theories with mathematical precision, collect huge data sets on individual and aggregate behavior, and exploit the most sophisticated statistical techniques to reach empirical judgments that are free of bias and ideology (or so we like to think).

I love Greg's writing, full of humour and self-deprecation. He mocks economists for posing as scientists, but immediately includes himself in the lot, so that the blow is not so strong. We even empathize. But self-derision aside, I think that economists have a right to pose as scientists. Economists are scientists because they want to understand how men behave, make decisions and interact with each other and with their environment. Economists, along with sociologists and psychologists (and maybe anthropologists), are part of behavioral social science, in my opinion.

Greg then goes on to describe why economists are also engineers.

Having recently spent two years in Washington as an economic adviser at a time when the U.S. economy was struggling to pull out of a recession, I am reminded that the subfield of macroeconomics was born not as a science but more as a type of engineering. God put macroeconomists on earth not to propose and test elegant theories but to solve practical problems. The problems He gave us, moreover, were not modest in dimension. The problem that gave birth to our field—the Great Depression of the 1930s— was an economic downturn of unprecedented scale, including incomes so depressed and unemployment so widespread that it is no exaggeration to say that the viability of the capitalist system was called into question.

Again, Greg is both funny and efficient. And he is to the point. Economists are engineers because they deal with pressing social issues: How should central banks set the interest rate? How to forecast and respond to crises? How to decrease unemployment? I would add that this is not limited to macroeconomists. In microeconomics, we also have pressing policy questions to solve: How to set taxes? How to organize the education system? How to curb pollution? How to best organize markets?

Greg goes on with the aim of his essay.


This essay offers a brief history of macroeconomics, together with an evaluation of what we have learned. My premise is that the field has evolved through the efforts of two types of macroeconomists—those who understand the field as a type of engineering and those who would like it to be more of a science. Engineers are, first and foremost, problem solvers. By contrast, the goal of scientists is to understand how the world works. 

I think Greg is really on to something big here. I think this distinction between scientists and engineers is key. To me, it has been summarized extremely efficiently by a famous quote by Neil Armstrong: "Science is about what is and Engineering is about what can be."

To Greg, the history of macroeconomics started as an engineering venture that slowly drifted onto more scientific ground.

The research emphasis of macroeconomists has varied over time between these two motives. While the early macroeconomists were engineers trying to solve practical problems, the macroeconomists of the past several decades have been more interested in developing analytic tools and establishing theoretical principles. These tools and principles, however, have been slow to find their way into applications. As the field of macroeconomics has evolved, one recurrent theme is the interaction—sometimes productive and sometimes not—between the scientists and the engineers. The substantial disconnect between the science and engineering of macroeconomics should be a humbling fact for all of us working in the field.

Science has very often started with a strong applied question before drifting away into more abstract areas. Think of the theory of optimal transportation: it started with Monge as a way to move rocks from a quarry to a hole in the ground, and has developed into an abstract and very general modern theory.

Though I understand it, I disagree with Greg's last comment. It might seem humbling that the most recent theories do not find their way into applications, but I think it is rather healthy and the sign of a maturing science. In the other sciences, scientists are always extremely cautious when discussing the potential applications of a major fundamental scientific breakthrough. The discovery of how a virus operates does not immediately pave the way for a vaccine. Decades of research are needed. First, the scientific result has to be reproduced a sufficient number of times so that we know it is correct. Second, a feasible way to exploit this result has to be found and its efficiency evaluated. This is the work of what I call engineers, in that case doctors. It would be crazy to use the latest scientific theory or hypothesis as a workhorse for policy purposes. Which engineer uses string theory today?

My feeling is that, hard-pressed by politicians to find answers to policy questions, economists have always made useful simplifying assumptions about human behavior. At some point, these assumptions seemed shaky, or some of the conclusions did not seem to be rigorously drawn from them. Economists then entered a phase of rigorous mathematisation and axiomatisation, which is the first leg of any science, the theoretical one. This has produced amazing theoretical results. But until recently, economists did not make much use of the other leg of any science: the empirical leg. Empirical validation is something different: it is a way to tell what is wrong, a way to discriminate between all the theoretically sound theories we have that make different empirical predictions. We are in the middle of an empirical revolution in economics (some have called it the credibility revolution). Economics is slowly starting to use data to discriminate between competing theories. In the process, I think it would be extremely useful to distinguish between the uses scientists and engineers make of the data. For most of its existence, economics has used data with mainly one aim in mind: estimating the values of theoretical parameters that theory did not provide. This is extremely important, and we have made a lot of progress in this direction. Extremely beautiful theories have been developed, but this is not what I have in mind when I think about how engineering and science use data.

Engineers use empirical data to check whether their devices work, whereas scientists use the data to refute theories.

In the following posts of this series, I will examine some of my favorite results in economics that use data in an engineering or a scientific fashion. I will also try to give a sense of what empirical economics and econometrics have achieved so far, and why they have mostly focused on the limited goal of estimating theoretical parameters.

As a conclusion to this post, I would like to quote the last part of Greg's introduction to his essay:

To avoid any confusion, I should say at the outset that the story I tell is not one of good guys and bad guys. Neither scientists nor engineers have a claim to greater virtue. The story is also not one of deep thinkers and simple-minded plumbers. Science professors are typically no better at solving engineering problems than engineering professors are at solving scientific problems. In both fields, cutting-edge problems are hard problems, as well as intellectually challenging ones. Just as the world needs both scientists and engineers, it needs macroeconomists of both mindsets. But I believe that the discipline would advance more smoothly and fruitfully if macroeconomists always kept in mind that their field has a dual role.