Friday, November 20, 2015

How Can We Save the Amazon Rainforest?

Recently, I embarked on a collective effort to write a blog post on the economics of public policies to fight deforestation, as part of a series of posts in preparation for COP21, a joint venture between TSE, Le Monde and The Economist. I have had the chance to work with a team of amazingly cool, witty and nice collaborators (Arthur van Benthem, Eduardo Souza Rodrigues, Julie Subervie). The post has appeared in French here and in English there.

What I would like to do here is to summarize our ideas and then highlight the results of one recent paper by Julie which gives the first robust evidence on the effectiveness of one particularly important policy, Payments for Environmental Services (PES), in the Amazon.

The reasons why we want to stop deforestation are pretty straightforward: deforestation is responsible for about 10 percent of greenhouse gas emissions and leads to massive biodiversity losses. Deforestation in the Amazon, moreover, is not simply the result of private decisions by landowners; it is largely the legacy of a massive colonization campaign sponsored by Brazil's military government in the 1970s.

The key question is which policy to choose to halt deforestation. There are a lot of options. Governments tend to favor regulation, such as declaring some land protected and banning all cutting there. Economists call these policies "command and control" because they are highly interventionist and leave no leeway for agents to adapt to the policy. Economists favor price instruments above all, such as a carbon market or a carbon tax. The key advantage of these policies is that they leave much more leeway for agents to adapt: when there is a price on carbon, the least costly carbon reduction options are implemented first. With command and control, you might impose much higher costs to reach the same environmental gain, by banning very profitable tree cutting while allowing trees to be cut where the economic returns are actually small.

PES are intermediate between command-and-control policies and price instruments. With PES, governments pay farmers a fixed amount per hectare to keep their trees standing. In theory, PES are less efficient than market instruments, since farmers can choose not to take up the incentive, whereas a tax or a price applies to everyone. Also, those who volunteer might be the ones who would not have cut their trees anyway, even without the payment. If this is widespread, they benefit from a windfall effect: they do nothing and get paid. PES should still be better than command and control (we say that they are second-best instruments; strictly speaking they are third best, since they are linear contracts, whereas one could design nonlinear PES contracts that would be the true second-best option).
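
To make this cost-effectiveness argument concrete, here is a minimal numerical sketch with entirely made-up numbers (the plots, opportunity costs and conservation target are hypothetical): a price instrument conserves the cheapest hectares first, while a ban can force conservation exactly where it is most costly.

```python
# Hypothetical opportunity cost (per hectare) of keeping each plot forested,
# i.e. the profit forgone by not clearing it.
plots = {"A": 10, "B": 30, "C": 60, "D": 120}

target = 2  # we want to keep 2 hectares forested

# Price instrument: landowners conserve whenever the carbon payment exceeds
# their opportunity cost, so the cheapest hectares are conserved first.
price_cost = sum(sorted(plots.values())[:target])

# Command and control: a ban on clearing plots C and D (say, because they lie
# inside a protected area), however costly conservation is there.
ban_cost = plots["C"] + plots["D"]

print(f"Total cost under a price instrument: {price_cost}")   # 10 + 30 = 40
print(f"Total cost under command and control: {ban_cost}")    # 60 + 120 = 180
```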


But this is theory. What we would like to know is whether these predictions hold in real life, right? It is useful to know how policies work in the perfect, frictionless vacuum of models, but how do these predictions hold up in reality? Many things can go wrong in the imperfect, air-filled realm of the real world. Agents might not be as well informed or as rational as we assume in our models, and tax enforcement might be undermined by corruption or inefficiency.

It turns out that it is extremely difficult to measure the effectiveness of forest conservation policies. Why? Because we face two very serious empirical challenges: additionality and leakage.

Additionality is a key measure of a program's success: how much did the policy contribute to halting deforestation where it was implemented? For example, by how much did deforestation decrease in protected areas? Or among farmers subject to a deforestation tax or a carbon price? Or among farmers taking up a PES? In order to measure additionality, we have to compute how much deforestation there would have been in the absence of the policy. But this is extremely hard to do, since that situation did NOT happen: the policy was actually implemented. The reference situation to which we would like to compare what actually happened never occurred; we call it the counterfactual.

Since we cannot directly observe the counterfactual, we are going to try to proxy for it using something observed. We could take as a proxy the situation before the policy was implemented. But this proxy might be very bad. For example, after the Brazilian government tightened regulatory policies and improved forest monitoring thanks to satellite imagery in the 2000s, deforestation in the Amazon slowed down to approximately half a million hectares annually. It looks like the policy was successful. But, at the same time, lower prices for soybeans and cattle products also reduced incentives to deforest. So what was the main driver of the decrease in deforestation? How much forest did the policy save exactly?

We could also use areas and farmers not involved in the policy as a proxy for the counterfactual situation. But this proxy might also be very bad. For example, even if we observed that farmers who participate in a PES program have lower deforestation rates than those who do not, this does not imply that the scheme actually reduced deforestation. After all, farmers who stand to profit the least from cutting down their trees are the most likely to sign up for the program. As a result, the program might end up paying some farmers for doing nothing differently from what they would have done anyway. And so the additional impact of the program may very well be small.
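
A toy simulation (purely illustrative, with made-up numbers, not based on any actual data) shows how this selection problem can create a large apparent "effect" even when the program changes nothing at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical farmers: the profit from clearing land drives both the decision
# to deforest and (negatively) the decision to enroll in the PES program.
profit = rng.normal(size=n)
enrolled = profit < 0        # low-profit farmers are the ones who sign up
deforests = profit > 0.5     # clearing only pays off for high-profit farmers

# By construction the payment changes nothing: enrolled farmers do exactly
# what they would have done anyway (a pure windfall effect).
naive_gap = deforests[~enrolled].mean() - deforests[enrolled].mean()
print(f"Naive 'effect' of the program: {naive_gap:.2f}")  # large and positive
print("True additionality: 0.00")
```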

Leakage occurs when a conservation policy, which may be successful locally, triggers deforestation elsewhere. For example, a farmer may stop clearing forest on plots that he has contracted under a PES program but at the same time increase deforestation on plots not covered by the contract. On a larger scale, the threat of fines from a provincial government may give farmers or logging firms an incentive to move operations to a neighboring province. In such cases, leakage undermines the additionality of conservation programs. Detecting leakage is even more difficult than measuring additionality: we not only need to compute a counterfactual, we first and foremost need to figure out where the leakage goes.

OK, so additionality and leakage effects are key to ranking the policy options in the real world. So what do we know about the additionality and leakage effects of the various forest policies in the Amazon (and in other threatened rainforests)? Not much, actually.

As in medicine when testing the efficacy of a new drug, the gold standard of proof in empirical economics is to conduct a Randomized Controlled Trial (RCT). In an RCT, we randomly select two groups of individuals or regions and implement the policy for only one group, keeping the second as a control. The difference between treatment and control provides a direct measure of the additionality of the policy. RCTs can also be designed to measure leakage. Though RCTs are commonly run to evaluate education or health policies worldwide, there have been only a few randomized evaluations of forest policies. Kelsey Jack from Tufts University performed RCTs to assess tree planting subsidies in Malawi and Zambia. To my knowledge, there are no similar results for forest-conservation PES, in Brazil or elsewhere. Seema Jayachandran has been conducting an RCT-based evaluation of a forest-conservation PES program in Uganda. The experiment has been designed to estimate both additionality and leakage effects. We are impatiently waiting for her results.

In the absence of RCTs, economists usually try to identify naturally occurring events or “experiments” that they hope can approximate the conditions of an RCT. In a recent paper, soon to be available here, Julie and her coauthors have conducted such a study of one of the first forest-conservation PES ever implemented in the Amazon. The key graph in this paper is the following:
The program was implemented in 2010. What you can see is that the pace of deforestation decreased after 2010 among participants, while it remained the same among non-participants. The change in the difference in land cover between participants and comparison communities is a measure of additionality: it is pretty large, about 10 percentage points. Looking at comparison communities, you can see that the pace of deforestation has not increased there, as it should have if leakage were present. What seems to have happened is that farmers have started farming the previously deforested land more intensively and have actually decreased deforestation on new plots. Using these estimates, an estimate of how much carbon was saved and a value for a ton of carbon, Julie and her coauthors estimate that the benefits of the program exceed its costs.
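
The additionality number comes from a difference-in-differences logic: the change in forest cover among participants minus the change among comparison communities. Here is a minimal sketch of that computation with made-up numbers (not the paper's data), just to fix ideas.

```python
# Share of land still forested before (2010) and after the program,
# hypothetical numbers chosen only to illustrate the calculation.
participants = {"before": 0.80, "after": 0.78}
comparison   = {"before": 0.80, "after": 0.68}

change_participants = participants["after"] - participants["before"]  # -0.02
change_comparison   = comparison["after"] - comparison["before"]      # -0.12

additionality = change_participants - change_comparison
print(f"Estimated additionality: {additionality:+.2f}")  # +0.10, i.e. 10 p.p.
```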

A couple of comments here. First, this work is a beautiful example of empirical economists at work. This is how we hunt for causality in the absence of an RCT. In order to check the validity of our natural experiments, we verify that they do not find effects where there should be none. Here, you can see that before 2010, the deforestation trends in participant and comparison communities were parallel. This supports the critical assumption Julie and her coauthors make here: in the absence of the PES, deforestation trends would have remained the same over time. Second, there is still work to do. The measure of forest cover is declarative and does not rely on satellite data, so it is still possible that farmers lied about how many hectares of forest they still had standing. The number of observations is small, so precision is low. And if observations happen to be correlated within communities, precision would be even lower. We would also like to know whether these changes in farming practices are going to persist over time or whether deforestation will resume as soon as the program stops. Julie is trying to collect new data on the same farmers several years later to check this. Third: these are amazingly encouraging results. It seems that we can do something to save the Amazon rainforest after all. Rejoice.
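
The precision worry about observations being correlated within communities is the usual argument for clustering standard errors at the community level. A hypothetical regression sketch of the difference-in-differences above (the file and variable names are mine, not the paper's):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per farmer and year, with a community identifier.
df = pd.read_csv("pes_panel.csv")
df["post"] = (df["year"] >= 2010).astype(int)

# Difference-in-differences regression; clustering the standard errors by
# community acknowledges that outcomes are correlated within communities,
# which reduces the effective number of independent observations.
model = smf.ols("forest_share ~ participant * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["community"]}
)
print(model.summary())
```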

Sunday, March 8, 2015

Investing in the Youth of the Banlieues: A National Emergency

The January 7 attacks shook me, as they did many of us. After the grief and the marches, I also felt the need to act. I started by writing an op-ed. Thanks to TSE's excellent head of communications, Jenni, it was published yesterday on the La Tribune website and on the TSE blog. Here is a slightly improved version, with links to the works cited. I also gave a talk on this theme to the Cercle du Bazacle, TSE's club of partner companies. Thanks to Joel and Karine, the organizers of the Cercle's conference series, for enthusiastically accepting my proposal and giving me the floor. And thanks to everyone who came that day for their comments and encouragement.

Here is the text of the op-ed.

The terrorist attacks against Charlie Hebdo and the Hyper Cacher supermarket, and the historic marches that followed them, call for a political response. The nature of that response will define us as a society and express our values. We stand at a crossroads. Out of this waste and these unspeakable murders, but also out of the magnificent reaction that followed them, we can build either a better society or a society of fear.

Of course, there will be a security response. But limiting our response to these events to a French-style Patriot Act would be a disaster. Barricading ourselves in our homes, shutting our children away in their schools, turning our country into an impenetrable fortress? If that is our only response, it is a terrible one, because it carries the seeds of fear and of the unraveling of our society into insularity, into distrust of everything that is different, and ultimately into the fear of everyone toward everyone else.

We need another, complementary response. More ambitious. More beautiful too. That response is to invest in the youth of our banlieues, to value and support the emergence of active participants in tomorrow's society. They already exist. But we do not see them. In our minds they are blotted out by the Merahs, the Kouachis, the gang des barbares, the score-settling, the drug trafficking, the unemployment, the riots. But they are there, the immense silent majority that hangs on, that has chosen life, with its frustrations and its joys, and that has rejected every deadly ideology. I think of my classmate from prépa, Mohamed, the only Arab in our posh downtown Toulouse high school who was not the son of an emir. Momo is an engineer now. He came from the Izards neighborhood, like Merah. I think of my volleyball friends from Villejuif, with whom I played for years and who welcomed me with open arms, me, the "çaifran". With my southwestern accent and my goatee, they called me d'Artagnan. They became my friends, they, the "renois", the "noichs", the "rebeus". I spent so many good times with them that I ended up picking up their expressions and intonations, to the point that my "normal" friends called me "la racaille".

They are there, those who said no to extremism and yes to French society. They are the immense majority, but they need us. How can we help them? How can we make sure there are more Momos and fewer Merahs? What is the best approach? Investing in schools? Changing urban policy? Fighting discrimination? Intervening in the workings of the labor market? A legitimate debate must take place around these options, informed as well as possible by rigorous evaluations.

My own conviction is that the most effective form of investment lies in educational programs aimed at very young children and their parents. These programs do not aim to develop children's cognitive abilities or to teach them academic content, but to help them be better versions of themselves by teaching them how to plan tasks, manage their emotions and resolve conflicts with others peacefully. Some interventions also give parents simple and sometimes overlooked information, such as the benefits of talking to your child even before he or she can talk, or of reading bedtime stories. Recent research, summarized by Jim Heckman and Tim Kautz in an excellent report for the OECD, has shown that experimental versions of these programs drastically reduce engagement in illegal activities in adulthood and also substantially increase the share of college graduates. Such effects are obtained with a rather limited investment: the program studied by Yann Algan and his coauthors, for example, consists of 19 role-playing sessions in groups of 3 with a social worker. Yann presented the long-term impacts of this program at a conference at the Institute for Advanced Study in Toulouse (IAST): they are spectacular. The results of this research also show that these programs are all the more effective the earlier they come in a child's life. The longer certain behaviors are allowed to take root, the harder they are to change later on. This is of course not a reason to do nothing for adolescents and young adults, but it is a reason to think seriously about interventions starting in early childhood. This is what led Jim Heckman to propose his equation for better educational investment: invest early and target better.

I find this empirical evidence convincing, but my conviction is also more visceral. I think of all my friends from Villejuif who told me, "If I had known, I would have worked harder at school. But I didn't care. And anyway it was always chaos." I think of my friend who ran a day camp in a difficult neighborhood of Toulouse and who resigned with her whole team at the beginning of the year, victims of a collective burnout in the face of the extreme social distress they witnessed, day after day: lost, violent, sad children, and overwhelmed, helpless parents who sometimes had no other response than indifference or violence. These programs offer concrete answers to parents' distress and children's suffering.

Make no mistake: the fight to win the hearts and minds of the children of the banlieues begins now. The fight against the extremists, the gangs, the delinquents, the traffickers. If we do not want these kids to swell their ranks, now is the time to give them their chance, to give them the right weapons, the ones that will allow them to integrate into society and pursue their happiness there. Is there a more beautiful collective project? Is there anything more beautiful than a child's gratitude? And what do we risk, other than seeing them engage even more in society and contribute to it in ways we cannot even imagine today?

Tuesday, February 10, 2015

Land reallocation in France: some nice maps

Some time ago, I blogged about one of my current projects on land reallocation in France. I have made some progress on this project in the meantime and I am going to report on it here.

I have worked with Elise Maigné, at Inra. Together, and with the help of Eric Cahuzac, we have been able to secure access to the data on reparcelling events at the commune level. These data were generously transmitted to us by Nadine Polombo, who worked with Marc-André Philippe to digitize the dataset originally held by the French Ministry of Agriculture. Nadine believes that their dataset is the only one that remains, since the Ministry of Agriculture has decided to destroy the original data and no longer keeps track of reparcelling events. Since then, the data have been made accessible through the open data portal of the French government.

The first thing to note is that this dataset reports 22,374 reallocation events in France. This is huge, since there are 36,681 communes in France. Some communes have actually undergone more than one reallocation event: 18,227 communes have undergone at least one. This means that 49.7% of all French communes have undergone at least one reallocation event.

The first issue with the dataset is that some information is missing: the opening date of the reallocation event is missing for 201 events, the closing date for 380 events, and both dates are missing for 291 events. So I have 21,502 events with non-missing information on both the opening and closing dates of the reallocation event.
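
For the record, here is a sketch of this bookkeeping in Python/pandas, assuming a table with one row per reallocation event, a commune identifier and the opening and closing dates (the file and column names are mine, not those of the actual dataset):

```python
import pandas as pd

events = pd.read_csv("remembrement_events.csv")

n_events = len(events)                               # 22,374 in the data
n_communes_treated = events["commune_id"].nunique()  # 18,227 communes
share = n_communes_treated / 36_681                  # about 49.7% of communes

# Keep only events with both dates observed (21,502 events in the data)
complete = events.dropna(subset=["opening_date", "closing_date"])

print(n_events, n_communes_treated, f"{share:.1%}", len(complete))
```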

Figure 1: Reallocation Events in France
The events with information on the opening date are presented in Figure 1. Reallocation events start with the end of WWII, with this first wave stopping around 1953. A second wave starts in the late 50s and peaks during the 60s. That is the main wave of land reallocation. Then several waves occur in the 70s, 80s and 90s.

Figure 2: First (1) vs Subsequent (2) Reallocation Events
Since some communes have undergone more than one reallocation event, it is interesting to plot the reallocation events depending on whether they are the first or not. This is done in Figure 2. The wave of the 90s seems to be mainly due to reallocation events occurring on communes that have already been reparcelled once. It is possible though that a different portion of the commune has been reparcelled in the two events.

What would be great now is to have an idea of the way reparcelling was rolled out over space and time. It would especially be nice to know which reparcelling events occurred between 1955, 1970, 1979, 1988, 2000 and 2010, the dates at which agricultural censuses were conducted in France. I would add 1963 and 1967, as two large surveys were conducted in those years. In order to do this, I have to use GIS software. Since I use Stata to analyse this dataset, I am going to use its GIS facilities (for the first time). The beautiful map presented in Figure 3 is the result of this exercise.
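
For readers who do not use Stata, the same exercise can be sketched in Python with geopandas (this is a sketch of the logic only, with hypothetical file and column names, not the code behind Figure 3): classify each commune's first event by the census interval in which it falls, then shade the commune polygons accordingly.

```python
import geopandas as gpd
import pandas as pd
import matplotlib.pyplot as plt

censuses = [1955, 1963, 1967, 1970, 1979, 1988, 2000, 2010]

communes = gpd.read_file("communes.shp")            # commune polygons
events = pd.read_csv("remembrement_events.csv")

# Keep the first reallocation event of each commune
first = events.sort_values("opening_year").drop_duplicates("commune_id")

# Bin the opening year of the first event into census intervals
first["period"] = pd.cut(first["opening_year"],
                         bins=[1940] + censuses,
                         labels=[f"before {c}" for c in censuses])

mapped = communes.merge(first[["commune_id", "period"]],
                        on="commune_id", how="left")
mapped.plot(column="period", legend=True,
            missing_kwds={"color": "lightgrey"})
plt.show()
```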

Figure 3: Map of the Reallocation Events in France
The first striking feature of this map is that land reallocation mainly occurred in the north of France and much less so in the south. One explanation could be that land in the north is much more fertile, but I do not think this exhausts all possible explanations. This will be the topic of subsequent investigation. The second striking feature is how strongly the timing of land reallocation is spatially autocorrelated. For example, the area around Paris (the Paris basin) seems to have been almost completely reparcelled before 1955. The first wave of reparcelling thus seems to have been mainly concentrated in this area. The outskirts of the basin were reached progressively during the 60s and 70s.

Another striking feature of this map is that it coincides very well with a rough map of the agricultural regions in France (see Figure 4).
Figure 4: Map of the Agricultural Regions in France
The cereal-growing regions (yellow) seem to have reparcelled very early, while the areas with mixed cultures (light green) reparcelled more slowly. Finally, forest regions and regions with open-range cattle (dark green) have hardly reparcelled at all.

Obviously, this strong spatial autocorrelation is not good news for studying the causal effect of land reallocation on agricultural technology adoption. What would have been great is if reparcelling had occurred randomly across space, with some communes within the Paris basin reparcelling early and others not, so that comparing them would capture the effect of reparcelling. Here, a raw comparison of reparcelling communes with non-reparcelling ones would be biased by differences in soil quality and types of production. A better comparison would condition on the agricultural zones: comparing communes within the Paris basin with early and late reallocation (if we can find any) is already better. Actually, my idea is to use the finest possible grid size to compare close communes with different reparcelling dates.
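
As a sketch of what that comparison could look like (hypothetical column names, and a deliberately naive estimator): compare early- and late-reparcelled communes within the same fine spatial cell, so that soil quality and production types are held roughly constant.

```python
import pandas as pd

df = pd.read_csv("communes_outcomes.csv")   # one hypothetical row per commune
df["early"] = df["reparcel_year"] < 1955

# Average difference in the outcome between early and late reparcellers,
# computed within each spatial cell and then averaged across cells.
within_cell_gap = (
    df.groupby("grid_cell")
      .apply(lambda g: g.loc[g["early"], "outcome"].mean()
                     - g.loc[~g["early"], "outcome"].mean())
)
print(within_cell_gap.mean())   # cells lacking one of the two groups give NaN
```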

A last striking feature of the data is that communes undergoing reallocation sometimes seem to line up along straight lines on the map. This is because land reallocation occurred along a railroad track or a highway when these infrastructures were built.


TBC

Tips in Scientific Writing

The student paper at TSE (TSEconomist) has asked some of us to provide writing tips for students. Here is my take.

I was not very good at writing papers until recently, and practice is the essence of progress. Still, there are a few things I can say that I think can help make writing easier.

The first and main thing is: do NOT start writing when you have finished the theoretical/empirical work. This is a rookie mistake that I repeatedly made over the 3 papers I have out now and the 3 others that I am currently writing. This is stupid. Writing should be intricately related to the work itself, and the paper should be written all along the course of the project. (I think we should think in terms of projects, not of papers, since a project is made of several papers, and you have to conduct research, not write papers; papers are the outcome, not the goal.)

What I do now is that I blog: first, I blog about a research idea. This makes for a nice post where I have to explain why I think the idea is important, why I should spend time and effort exploring it, and why people should be interested in the results. This is maybe the most critical part of any project. It is also the part that most people overlook, especially students. They generally want to rush to the technical parts, which seem more reassuring, instead of taking time to elaborate their intuition about why something is important. Do elaborate on the why of the project. Spend time and effort explaining why this is an important question for economic science and economic policy, why the literature has not found an answer yet, and why you think your idea can provide one. If you cannot do that, I would say stop and think again. Do you really want to spend one year of effort on something when you do not even know why you are doing it? If you skip this step, you will eventually end up repeating previous research with a small tweak, or you are going to lose the reader in the details and lose track of the ambitious and novel idea that you have.

With the blog, I usually write updates on the research as I go along, and this keeps me focused on the original idea and on any changes that I might have made. I have found that I, and students also, tend to lose sight of the original goal as we enter the technical phase of the project, and we bury ourselves in details instead of exploring the deep, important research question. So, first piece of advice: write a blog (or write for my blog, or for any blog). Then, writing the paper is just a matter of wrapping things up. It becomes so much easier.

My second piece of advice is: write as if you were explaining your research to your grandma. Use a relaxed tone and avoid technical words. Try talking yourself, your friends, your family, your colleagues, your teachers, anyone, through your research project, as often as you can. In particular, confront specialists in your field and see if you can convince them. If you cannot, it does not mean that your idea is stupid; it means that it is still not clear enough.

My third piece of advice would be: read the LSE blog on scientific writing. It is full of sound, detailed advice like "find the essence of your message," "never anticipate an argument or go back to one," "start paragraphs with the main idea and then develop," "choose an accurate and catchy title."

My fourth piece of advice is: read John Cochrane's writing tips for PhD students. They are excellent. Finding the main message is the essence of it, and that is in general really hard to do.

Friday, February 6, 2015

The Credibility Revolution in Economics

In a thought-provoking paper, Josh Angrist and Steve Pischke describe the credibility revolution that is currently going on in economics. Having grown up in the Haavelmo-Cowles-Heckman tradition of structural econometrics, I have to admit that I resisted the intuitive attraction that this paper had for me. But the more I think about it, the more I can see what is correct in the view that Josh and Steve defend in their paper, the more I see myself adapting this view to my own everyday research, and the more I find myself happy about it. The credibility revolution makes a lot of sense to me because I can relate it to the way I was taught biology and physics, and to the reason why I loved these sciences: their convincing empirical grounding. I admittedly have my own interpretation of the credibility revolution, which does not fully overlap with that of Josh and Steve. I am going to try to make it clear in what follows.

To me, the credibility revolution means that data and empirical validation are as important as sound and coherent theories. It means that I cannot accept a theoretical proposition unless I have access to repeated tests showing that it is not rejected in the data. It also means that I do not use tools that have not repeatedly proven that they work.

Let me give three examples in economics. In economics as a behavioral science, a very important tool for modeling the behavior of agents under uncertainty is the expected utility framework, which dates back at least to Bernoulli, who introduced it to solve the Saint Petersburg paradox. von Neumann and Morgenstern showed that this framework could be rationalized by some simple axioms of behavior. Allais, in a very famous experiment, tested the implication of one of these axioms. What he found was that people consistently violated this axiom. This result has been reproduced many times since then. This means that the expected utility framework, as a scientific description of how people behave, has been refuted. This led to the development of other axioms and other models of behavior under uncertainty, the most famous being Kahneman and Tversky's prospect theory. This does not mean that the expected utility framework is useless for engineering purposes. We seem to have good empirical evidence that it is approximately correct in a lot of situations (readers, feel free to leave references on this type of evidence in the comments). It might be simpler to use it rather than the more complex competing models of behavior that have been proposed since. The only criterion on which we should judge its performance as an engineering tool is its ability to predict actual choices. We are seeing more and more of this type of crucial test of our theories, and this is for the best. I think we should emphasize these empirical results in our teaching of economics: they are as important as the underlying theory that they test.
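
To see concretely what Allais found, here is a small sketch of the classic gambles (the standard textbook payoffs, in millions). Under expected utility, the two pairwise comparisons boil down to the very same difference in expected utilities, so no utility function can rationalize the typical pattern of choosing the safe lottery in the first pair and the risky one in the second.

```python
from math import sqrt

def expected_utility(lottery, u):
    return sum(p * u(x) for x, p in lottery)

# Each lottery is a list of (payoff, probability) pairs
A1 = [(1, 1.00)]                         # 1 for sure
B1 = [(1, 0.89), (5, 0.10), (0, 0.01)]
A2 = [(1, 0.11), (0, 0.89)]
B2 = [(5, 0.10), (0, 0.90)]

u = sqrt  # any increasing utility function would do

# Both differences equal 0.11*u(1) - 0.10*u(5) - 0.01*u(0), so they always
# have the same sign: preferring A1 to B1 but B2 to A2 violates expected utility.
d1 = expected_utility(A1, u) - expected_utility(B1, u)
d2 = expected_utility(A2, u) - expected_utility(B2, u)
print(d1, d2)
```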

The second example is in economics as engineering: McFadden's random utility model. McFadden used the utility maximization framework to model people's choices of transportation mode. He modeled the choice between driving, taking the bus, cycling and walking as depending on the characteristics of the trips (such as commuting time) and on intrinsic preferences for one mode or another. He estimated the preferences on a dataset of individuals in the San Francisco Bay Area in 1972. He then used his model to predict what would happen when an additional mode of transportation was introduced (the subway, or BART). Based on his estimates, he predicted that the market share of the subway would be 6.3%, well below the engineering estimates of the time, which hovered around 15%. When the subway opened in 1976, its market share soon reached 6.2% and stabilized there. This is one of the most beautiful and convincing examples of the testing of an engineering tool in economics. Actually, this amazing performance convinced transportation researchers to abandon their old engineering models and use McFadden's. I think it is for this success that Dan was eventually awarded the Nobel prize in economics. We see more and more of this type of test of structural models, and this is for the best.
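
The mechanics behind such a forecast are simple to sketch. In a logit random utility model, predicted market shares are a softmax of the systematic utilities, and adding a new alternative reallocates shares across modes. The numbers below are made up for illustration; they are not McFadden's estimates.

```python
import numpy as np

def shares(utilities):
    """Logit choice probabilities (softmax of the systematic utilities)."""
    expu = np.exp(np.array(list(utilities.values())))
    probs = expu / expu.sum()
    return dict(zip(utilities.keys(), np.round(probs, 3)))

before = {"car": 1.5, "bus": 0.2, "walk/bike": -0.5}   # hypothetical utilities
after = dict(before, BART=-0.8)                        # add the new mode

print(shares(before))
print(shares(after))   # BART captures a small share, drawn from the other modes
```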

The third example is in economic, or rather behavioral, engineering (when I use the term "behavioral," I encompass all the sciences that try to understand human behavior). From psychology, and increasingly economics, we know that cognitive and non-cognitive (or socio-emotional) skills are malleable throughout an individual's lifetime. We believe that it is possible to design interventions that help kids acquire these skills. But one still has to prove that these interventions actually work. That is why psychologists, and more recently economists, use randomized experiments to check whether they do. In practice, they randomly select, among a group of children, the ones who are going to receive the intervention (the treatment group) and the ones who are going to stay in the business-as-usual scenario (the control group). By comparing the outcomes of the treatment and control groups, we can infer the effect of the intervention free of any source of bias, since both groups are initially identical thanks to the randomization. This is exactly what doctors do to evaluate the effects of drugs. Jim Heckman and Tim Kautz summarize the evidence that we have so far from these experiments. The most famous one is the Perry preschool program, which followed the kids until their forties. The most fascinating finding of this experiment is that by providing a nurturing environment during the early years of the kids' lives (from 3 to 6), the Perry project was able to durably change the kids' lives. The surprising result is that this change was not triggered by a change in cognitive skills, but only by a change in non-cognitive skills. This impressive evidence has directed a lot of attention to early childhood programs and to the role of non-cognitive skills. Jim Heckman is one of the most ardent proponents of this approach in economics.

The credibility revolution also makes sense to me because of the limitations of Haavelmo's framework. As I already said, trying to infer stable autonomous laws from observational data is impossible, since there is not enough free variation in these data. There are too many unknowns and not enough observations to recover each of them. Haavelmo was well aware of this problem, but the solution that he and the Cowles Commission advocated, using a priori restrictions to restore identification, was doomed to fail. What we need in order to learn something about how our theories and our engineering models perform is not a priori restrictions on how the world behaves, but more free and independent information about how the world works. This is basically what Josh's argument is about: choose these restrictions so that they are as convincing as experiments. That is why Josh talks of natural experiments: the variation in the observed data that we use should be as good as an experiment, stemming not from theory but from luck: the world has offered us some free variation and we can use it to recover something about its deeper relationships.

The problem with the natural experiment approach is that whether we have identified free variation, and whether it really can be used to discriminate among theories, is highly debatable. Sometimes we cannot do better, and we have to try to prove that the natural variation is as good as an experiment. But a lot of the time, we can think of a way of generating free variation ourselves by building a field experiment. And this is exactly what is happening today in economics. All these experiments (or RCTs: Randomized Controlled Trials) that we see in the field are just ways of generating free variation, with several purposes in mind: testing policies, testing the predictive accuracy of models, testing scientific theories. Some experiments can do several of these things at the same time.

This is an exciting time to do economics. I will post in the future on other early engineering and scientific tests, and I will report on my own and others' research that I find exciting.

Tuesday, February 3, 2015

Engineers vs Scientists

In a previous post, I tried to make a case for a separation between economists as engineers and economists as scientists. In this post, I make my view of these two roles more precise in a general sense. I will dedicate several posts to examples of engineering and science in economics.

For a scientist, the only thing that matters is whether a given law holds true. For example, I only care whether Newton's laws are true or not. They are not, so I should just discard them as a way to explain the world. And that is basically what physicists have done. The fact that Newton's laws can be approximately true in some conditions does not matter for the scientist. It is interesting for learning purposes or for engineering, but it does not say anything about the true behavior of the world. We have better representations of the world that have a wider range of applicability. The truth is neither convenient nor simple; it is just true. For a scientist, the ultimate criterion is whether a theory survives a crucial experiment.

For an engineer, the only thing that matters is that the plane flies. Whether he can explain why it does does not really matter. Sometimes, engineers tweak machines based on experience and obtain good performance without being able to explain how. Sometimes, engineers use laws that have been proven wrong (e.g., Newton's laws) because they offer convenient simplifications. They will only use the more complex (and true) version of the law if it provides a sufficient improvement. For example, the engineers in charge of GPS switched from Newtonian mechanics to Einstein's relativity because it provided much better location accuracy. For an engineer, the ultimate criterion is the performance of the device: does it do what it is supposed to do, as efficiently as possible?

Scientists and engineers also differ in the way they deal with the problem of induction. We have known at least since Hume that the fact that some phenomenon has happened in the past does not guarantee that it will happen in the future. Hence, every scientific law is provisional. Since Popper, we know that truth in science means "not refuted yet." So scientists are aware of the provisional nature of knowledge. This is not a problem as long as you are contemplating the universe in search of an explanation of how it works. For engineers, though, this is a tough problem, because it means that what has worked in the past might not work in the future. All their devices might fail for an unknown reason, and they have to accept that and live with it.

A final difference between science and engineering is how they deal with Cartesian slicing. Cartesian slicing is the idea that the best way for a scientist to study a problem is to slice it into smaller and smaller problems that can be studied independently. A consequence of this is the ever-increasing sophistication and complexity of scientific explanations in every subfield of science. Engineers cannot slice too much, because they have to deal with the fact that all the separated phenomena might interact in the real world and have an influence on their devices. For example, it is hard for an engineer to ignore frictions. Engineers face computational limitations, and they therefore have to make useful simplifications, like ignoring one phenomenon, or one side of it, for the sake of implementation. When disregarding a phenomenon, they assume, and very often check, that it does not alter the efficiency of their device too much.

Overall, science is about provisional knowledge of non-refuted laws on sliced phenomena, while engineering is about making devices that work, sometimes using useful simplifications.

I am not saying that engineers and scientists do not talk to each other or live in completely separate worlds. Engineers constantly seek to use more recent laws to perfect their devices. Scientists try to understand why some of the engineers' tricks work, or why something they predict should work sometimes does not. There is a fruitful and fertile dialogue between scientists and engineers. All that I am saying is that scientists and engineers have distinct aims, distinct criteria for success and distinct methods.

Economists as Engineers and Economists as Scientists

In 2006, Greg Mankiw published an essay in the Journal of Economic Perspectives titled "The Macroeconomist as Scientist and Engineer." I really liked reading through this paper then and the more I think about it, the more I think Greg has struck a fundamental chord here. The distinction he makes resonates extremely strongly with me and my own experience with my field and how I view my own work. In this post, I would like to discuss Greg's essay, and give some thoughts on why I think this distinction is essential, why it is healthy to make it and why, historically, economists seem to not have paid sufficient attention to it.

Greg starts his paper by acknowledging how much economists want to pose as scientists:

Economists like to strike the pose of a scientist. I know, because I often do it myself. When I teach undergraduates, I very consciously describe the field of economics as a science, so no student will start the course thinking that he or she is embarking on some squishy academic endeavor. Our colleagues in the physics department across campus may find it amusing that we view them as close cousins, but we are quick to remind anyone who will listen that economists formulate theories with mathematical precision, collect huge data sets on individual and aggregate behavior, and exploit the most sophisticated statistical techniques to reach empirical judgments that are free of bias and ideology (or so we like to think).

I love Greg's writing, full of humour and self-deprecation. He mocks economists for posing as scientists, but immediately includes himself in the lot, so that the blow is not so strong. We even empathize. But self-derision aside, I think that economists have a right to pose as scientists. Economists are scientists because they want to understand how people behave, make decisions and interact with each other and with their environment. Economists, along with sociologists and psychologists (and maybe anthropologists), are part of behavioral social science, in my opinion.

Greg then goes on to describe why economists are also engineers.

Having recently spent two years in Washington as an economic adviser at a time when the U.S. economy was struggling to pull out of a recession, I am reminded that the subfield of macroeconomics was born not as a science but more as a type of engineering. God put macroeconomists on earth not to propose and test elegant theories but to solve practical problems. The problems He gave us, moreover, were not modest in dimension. The problem that gave birth to our field—the Great Depression of the 1930s— was an economic downturn of unprecedented scale, including incomes so depressed and unemployment so widespread that it is no exaggeration to say that the viability of the capitalist system was called into question.

Again, Greg is both fun and efficient. But he is also to the point. Economists are engineers because they deal with pressing social issues: How should central banks set interest rates? How should we forecast and respond to crises? How can we decrease unemployment? I would add that this is not limited to macroeconomists. In microeconomics, we also have pressing policy questions to solve: How should we set taxes? How should we organize the education system? How can we curb pollution? How can we best organize markets?

Greg then states the aim of his essay.


This essay offers a brief history of macroeconomics, together with an evaluation of what we have learned. My premise is that the field has evolved through the efforts of two types of macroeconomists—those who understand the field as a type of engineering and those who would like it to be more of a science. Engineers are, first and foremost, problem solvers. By contrast, the goal of scientists is to understand how the world works. 

I think Greg is really on to something big here. I think this distinction between scientists and engineers is key. To me, it has been summarized extremely efficiently by a famous quote by Neil Armstrong: "Science is about what is and Engineering is about what can be."

To Greg, the history of macroeconomics started as an engineering venture that slowly drifted onto more scientific ground.

The research emphasis of macroeconomists has varied over time between these two motives. While the early macroeconomists were engineers trying to solve practical problems, the macroeconomists of the past several decades have been more interested in developing analytic tools and establishing theoretical principles. These tools and principles, however, have been slow to find their way into applications. As the field of macroeconomics has evolved, one recurrent theme is the interaction—sometimes productive and sometimes not—between the scientists and the engineers. The substantial disconnect between the science and engineering of macroeconomics should be a humbling fact for all of us working in the field.

Science has very often started with a strong applied question before drifting away into more abstract areas. Think of the theory of optimal transportation, which started with Monge as a way to move rocks from a quarry to a hole in the ground and has developed into an abstract and very general modern theory.

Though I understand it, I disagree with Greg's last comment. It might seem humbling that the most recent theories do not find their way into applications, but I think it is rather healthy and the sign of a maturing science. In the other sciences, scientists are always extremely cautious when discussing the potential applications of a major fundamental scientific breakthrough. The discovery of how a virus operates does not immediately pave the way for a vaccine. Decades of research are needed. First, the scientific result has to be reproduced a sufficient number of times so that we know it is correct. Second, a feasible way to exploit this result has to be found and its efficiency evaluated. This is the work of what I call engineers, in that case doctors. It would be crazy to try to use the latest scientific theory or hypothesis as a workhorse for policy purposes. Which engineer uses string theory today?

My feeling is that, hard-pressed by politicians to find answers to policy questions, economists have always made useful simplifying assumptions about human behavior. At some point, these assumptions seemed shaky, or some of the conclusions did not seem to be rigorously drawn from them. Then economists entered a phase of rigorous mathematisation and axiomatisation, which is the first leg of any science, the theoretical one. This has produced amazing theoretical results. But until recently, economists did not make much use of the other leg of any science: the empirical leg. Empirical validation is something different: it is a way to tell what is wrong, a way to discriminate between all the theoretically sound theories we have that make different empirical predictions. We are in the middle of an empirical revolution in economics (some have called it the credibility revolution). Economics is slowly starting to use data to discriminate between competing theories. In the process, I think it would be extremely useful to distinguish between the uses scientists and engineers make of the data. For most of its existence, economics has used data with mainly one aim in mind: estimating the values of theoretical parameters that theory did not provide. This is extremely important and we have made a lot of progress in this direction. Extremely beautiful theories have been developed, but this is not what I have in mind when I think about how engineering and science use data.

Engineers use empirical data to check whether their devices work, whereas scientists use the data to refute theories.

In the following posts of this series, I will examine some of my favorite results in economics that use data in an engineering or a scientific fashion. I will also try to give a sense of what empirical economics and econometrics have achieved up to now, and why they have mostly focused on the limited goal of estimating theoretical parameters.

As a conclusion to this post, I would like to quote the last part of Greg's introduction to his essay:

To avoid any confusion, I should say at the outset that the story I tell is not one of good guys and bad guys. Neither scientists nor engineers have a claim to greater virtue. The story is also not one of deep thinkers and simple-minded plumbers. Science professors are typically no better at solving engineering problems than engineering professors are at solving scientific problems. In both fields, cutting-edge problems are hard problems, as well as intellectually challenging ones. Just as the world needs both scientists and engineers, it needs macroeconomists of both mindsets. But I believe that the discipline would advance more smoothly and fruitfully if macroeconomists always kept in mind that their field has a dual role.

Haavelmo and the birth of econometrics: engineering, science or both?

Following my previous post on Haavelmo, here is a longer description of why I like the guy and what our disagreements are in the light of his major paper.

In 1944, Trygve Haavelmo published a 124-page paper in Econometrica that would set the stage for almost four decades of research in econometrics. It is extremely interesting to examine this foundational milestone, especially to contrast it with Haavelmo's subsequent writings when he became president of the Econometric Society in 1958 and when he received the Nobel prize in 1989. I think we can see Haavelmo's thought slowly changing, and a way for empirics to enter economics emerge, thrive and slowly come to a halt. I think it is important to understand where we come from, where we stand and where we are going. Haavelmo is also an extremely clean example of mixing engineering with science, and of putting too much faith in economic theory. At the same time, Haavelmo's legacy is beautiful, and many of the concepts that he and his colleagues at the Cowles Commission built are extremely useful today, especially for thinking about causality and how it relates to the observed data. The theoretical apparatus that they set up is nothing short of impressive and awesome. In this post, I want to guide you through three moments of Haavelmo's life and of the life of econometrics.

What is extremely surprising in Haavelmo's writings is how intricately the scientific and engineering aspects of economics are intertwined. The scientific ambition is especially strong at the beginning of the 1944 monograph. But then, Haavelmo seems to lose sight of this aim when he becomes more practical and technical. I am going to start with a description of the scientific aspect of econometrics as Haavelmo sees it. Then, I will describe some strange renunciations when he becomes more practical, and I will finish with a description of the more limited goal he sets for econometrics.

The role of Econometrics for testing economic theories

In the beginning of the paper, Haavelmo defines econometric research:

The method of econometric research aims, essentially, at a conjunction of economic theory and actual measurements, using the theory and technique of statistical inference as a bridge pier.

So we are going to try to relate economic theories to their counterparts in the actual real world. Theory is necessary, but it is only the first leg of a science, as Haavelmo recognizes, building upon Pareto:

Theoretical models are necessary tools in our attempts to understand and "explain" events in real life. Within such theoretical models we draw conclusions of the type, "if A is true, then B is true." Also, we may decide whether a particular statement or a link in the theory is right or wrong, i.e., whether it does or does not violate the requirements as to inner consistency of our model. As long as we remain in the world of abstractions and simplifications there is no limit to what we might choose to prove or to disprove; or, as Pareto has said, "Il n'y a pas de proposition qu'on ne puisse certifier vraie sous certaines conditions, à déterminer." Our guard against futile speculations is the requirement that the results of our theoretical considerations are, ultimately, to be compared with some real phenomena. [All emphasis in the quotes are mine.]

I like this part of Haavelmo's paper a lot, because he really puts forward the idea that economics is an empirical science. Indeed, economics is not all about mathematical economics, as he explains later:

One of the most characteristic features of modern economic theory is the extensive use of symbols, formulae, equations, and other mathematical notions. Modern articles and books on economics are "full of mathematics." Many economists consider "mathematical economics" as a separate branch of economics. The question suggests itself as to what the difference is between "mathematical economics" and "mathematics." Does a system of equations, say, become less mathematical and more economic in character just by calling x "consumption," y "price," etc.? There are certainly many examples of studies to be found that do not go very much further than this, as far as economic signifiance is concerned. But they hardly deserve the ranking of contributions to economics. What makes a piece of mathematical economics not only mathematics but also economics is, I believe, this: When we set up a system of theoretical relationships and use economic names for the otherwise purely theoretical variables involved, we have in mind some actual experiment, or some design of an experiment, which we could at least imagine arranging, in order to measure those quantities in real economic life that we think might obey the laws imposed on their theoretical namesakes. For example, in the theory of choice we introduce the notion of indifference surfaces, to show how an individual, at given prices, would distribute his fixed income over the various commodities. This sounds like "economics" but is actually only a formal mathematical scheme, until we add a design of experiments that would indicate, first, what real phenomena are to be identified with the theoretical prices, quantities, and income; second, what is to be meant by an "individual"; and, third, how we should arrange to observe the individual actually making his choice.

I really love this part: economics is not math because it says something about the world, and, even more critically, because it involves the design of an experiment. What are the experiments that we can perform in economics? Haavelmo distinguishes two:

A design of experiments (a prescription of what the physicists call a "crucial experiment") is an essential appendix to any quantitative theory. And we usually have some such experiments in mind when we construct the theories, although-unfortunately-most economists do not describe their designs of experiments explicitly. If they did, they would see that the experiments they have in mind may be grouped into two different classes, namely, (1) experiments that we should like to make to see if certain real economic phenomena-when artificially isolated from "other influences"-would verify certain hypotheses, and (2) the stream of experiments that Nature is steadily turning out from her own enormous laboratory, and which we merely watch as passive observers. 

We can see that the first class of experiments corresponds to the classical definition of an experiment in the natural sciences. We try to devise a crucial experiment in order to test the predictions of the theory. In order to do so, we have to be able to isolate the phenomenon of interest from all the other influences. This is what scientific experiments are for. Haavelmo actually makes this clear in what follows:

In the first case we can make the agreement or disagreement between theory and facts depend upon two things: the facts we choose to consider, as well as our theory about them. As Bertrand Russell has said: "The actual procedure of science consists of an alternation of observation, hypothesis, experiment, and theory."

And he goes on to acknowledge that economic science badly needs this first kind of experiment, while the data it actually has are mostly of the second kind:

Now, if we examine current economic theories, we see that a great many of them, in particular the more profound ones, require experiments of the first type mentioned above. On the other hand, the kind of economic data that we actually have belong mostly to the second type.

But what is this second type of experiments, the ones that Nature turns out from her enormous laboratory? Haavelmo tells more about them:

In the second case we can only try to adjust our theories to reality as it appears before us. And what is the meaning of a design of experiments in this case? It is this: We try to choose a theory and a design of experiments to go with it, in such a way that the resulting data would be those which we get by passive observation of reality. If we succeed in doing so, we become master of reality-by passive agreement.

So Haavelmo, like Yule before him, acknowledges that we cannot run experiments in economics and that we have to take the facts as given to us by Nature, without the possibility of interfering with them. What do we do then? We have to assume that the data of passive observation result from our theory and somehow make them agree. We then master reality, at least by passive agreement. I think this is still confusing, but it is extremely important because this view underlies most of the subsequent developments in econometrics. Haavelmo then makes clearer what he believes we should do:

The economist is usually a rather passive observer with respect to important economic phenomena. He is not in a position to enforce the prescriptions of his own designs of ideal experiments. "Observational" variables, when contradicting the theory, leave the possibility that we might be trying out the theory on facts for which the theory was not meant to hold, the confusion being caused by the use of the same names for quantities that are actually different. The statistical observations available have to be "corrected," or the theory itself has to be adjusted. To use a mechanical illustration, suppose we should like to verify the law of falling bodies (in vacuum), and suppose our measurements for that purpose consisted of a series of observations of a stone (say) dropped through the air from various levels above the ground. To use such data we should at least have to calculate the extra effect of the air resistance and extract this element from the data. Or, what amounts to the same, we should have to expand the simple theory of bodies falling in vacuum, to allow for the air resistance (and probably many other factors). A physicist would dismiss these measurements as absurd for such a purpose because he can easily do much better. The economist, on the other hand, often has to be satisfied with rough and biased measurements.

I find this analogy with physics particularly enlightening. Because we only see the data of passive observation, we have to account for the fact that a lot of other relationships are confounding the relation of interest, and accounting for these influences means modeling all of them. I think this is an impossible task if done at the same time as building the model and testing the theory. This is where Haavelmo mixes science with engineering and takes the field in the wrong direction. Let me explain. Science is about Cartesian slicing of reality into smaller subsets that are studied in isolation. Engineering is about combining these relationships in a computable model (which generally requires some degree of simplification, the choice of which makes modeling an art) and testing the predictions of the model against new data. What Haavelmo proposes is to blend all these steps into one unique estimation procedure: theory gives us all the theoretical slices, and estimation from the data of passive observation should be able to deliver the properties of the relationships of interest. This sounds crazy and unrealistic. It puts too much weight on the combination of data with theory. Actually, this is generally an ill-posed problem that has no solution with passive data alone: some information is lacking, and the system has too many unknown relationships and not enough independent information. Haavelmo was actually aware of that problem, which he called the problem of confluence and which we now call the problem of identification.

Autonomy and the problem of confluent relationships

Haavelmo makes it clear from the beginning that we are interested in relationships that it might not be possible to observe in the data, because they are confounded by other relationships occurring simultaneously. The problem of estimating supply and demand curves from a data set of prices and quantities is the classical illustration of this lack of autonomy, which we now call an identification problem. Haavelmo first recalls the goal of economic research as Cartesian slicing:

Our hope in economic theory and research is that it may be possible to establish constant and relatively simple relations between dependent variables, y and a relatively small number of independent variables, x.

Just before that, he discusses the very existence of constant economic laws:

We might be inclined to say that the possibility of such fruitful hypothetical constructions and deductions depends upon two separate factors, namely, on the one hand, the fact that there are laws of Nature, on the other hand, the efficiency of our analytical tools. However, by closer inspection we see that such a distinction is a dubious one. Indeed, we can hardly describe such a thing as a law of nature without referring to certain principles of analysis. And the phrase, "In the natural sciences we have stable laws," means not much more and not much less than this: The natural sciences have chosen very fruitful ways of looking upon physical reality. So also, a phrase such as "In economic life there are no constant laws," is not only too pessimistic, it also seems meaningless.

So, it is possible to find laws in economic life if we look at the phenomena in the right way. What is the problem of autonomy then?

Let us consider one such particular relation, say x1=f(x2, x3). In constructing such a relation, we reason in the following way: If x2 be such and such, x3 such and such, etc., then this implies a certain value of x1. In this process we do not question whether these "ifs" can actually occur or not. When we impose more relations upon the variables, a great many of these "ifs," which were possible for the relation x1=f(x2, x3) separately, may be impossible, because they violate the other relations. After having imposed a whole system of relations, there may not be very much left of all the hypothetical variation with which we started out. At the same time, if we have made a lucky choice of theoretical relations, it may be that the possible variations that are left over agree well with those of the observed variables. 

So, because the data from passive observation are generated by the interaction of a lot of different phenomena, the actual variation that remains might be much less than what we would need to test or validate any one of the relationships of interest individually. Haavelmo then rightly asks why we care about these fundamental relationships:

But why do we start out with much more general variations than those we finally need? For example, suppose that the Walrasian system of general-equilibrium relations were a true picture of reality; what would be gained by operating with this general system, as compared with the simple statement that each of the quantities involved is equal to a constant? The gain is this: In setting up the different general relations we conceive of a wider set of possibilities that might correspond to reality, were it ruled by one of the relations only. The simultaneous system of relations gives us an explanation of the fact that, out of this enormous set of possibilities, only one very particular one actually emerges. But once this is established, could we not then forget about the whole process, and keep to the much simpler picture that is the actual one? Here is where the problem of autonomy of an economic relation comes in. 

We care about the deeper relationships because they give us insights into many more possible variations than the ones we actually see in the data. These relations remain true even if some other relations in the system get altered by economic policy or some other event: they are autonomous from these changes, hence the term autonomy. Autonomy is valuable because it gives a relationship a lot of power to predict what would happen were we to change the environment in some way. Haavelmo illustrates the notion of autonomy with what I think is a beautiful analogy: that of a car.

The meaning of this notion, and its importance, can, I think, be rather well illustrated by the following mechanical analogy: If we should make a series of speed tests with an automobile, driving on a flat, dry road, we might be able to establish a very accurate functional relationship between the pressure on the gas throttle (or the distance of the gas pedal from the bottom of the car) and the corresponding maximum speed of the car. And the knowledge of this relationship might be sufficient to operate the car at a prescribed speed. But if a man did not know anything about automobiles, and he wanted to understand how they work, we should not advise him to spend time and effort in measuring a relationship like that. Why? Because (1) such a relation leaves the whole inner mechanism of a car in complete mystery, and (2) such a relation might break down at any time, as soon as there is some disorder or change in any working part of the car. We say that such a relation has very little autonomy, because its existence depends upon the simultaneous fulfilment of a great many other relations, some of which are of a transitory nature. On the other hand, the general laws of thermodynamics, the dynamics of friction, etc., etc., are highly autonomous relations with respect to the automobile mechanism, because these relations describe the functioning of some parts of the mechanism irrespective of what happens in some other parts. 

We want deeper, more autonomous relationships because they enable us to make sound predictions when circumstances in the economy change: they stay the same when the policy change occurs. They are autonomous with respect to this policy change:

The principal task of economic theory is to establish such relations as might be expected to possess as high a degree of autonomy as possible. Any relation that is derived by combining two or more relations within a system, we call a confluent relation. Such a confluent relation has, of course, usually a lower degree of autonomy (and never a higher one) than each of the relations from which it was derived, and all the more so the greater the number of different relations upon which it depends. 

A classical example of a non-autonomous relationship is the correlation between prices and quantities on a market over time. This correlation breaks down as soon as the supply or the demand function changes, for example after the introduction of a tax or a subsidy. The very famous Lucas critique of econometric models is a mere restatement of the notion of autonomy, which Lucas actually acknowledges (footnote 3).
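To see the confluence concretely, consider a stylized linear market (my own illustration, not Haavelmo's notation): demand is q = a - b*p + u and supply is q = c + d*p + v, where u and v are demand and supply shocks. Solving the two relations jointly gives the equilibrium price p = (a - c + u - v)/(b + d) and quantity q = (a*d + b*c + d*u + b*v)/(b + d). The price-quantity relation we observe over time thus mixes the parameters and the shocks of both curves: it is a confluent relation. Introduce a tax that shifts the supply intercept c and the observed correlation changes, even though the demand parameters a and b are untouched.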

Identification: not enough data, too much theory

One of the key questions for econometricians aware of the confluence problem is how to extract relationships with a high degree of autonomy from observational data, in which the actual amount of variation is severely limited by the interaction of several relationships:

In scientific research-in the field of economics as well as in other fields-our search for "explanations" consists of digging down to more fundamental relations than those that appear before us when we merely "stand and look." Each of these fundamental relations we conceive of as invariant with respect to a much wider class of variations than those particular ones that are displayed before us in the natural course of events. Now, if the real phenomena we observe day by day are really ruled by the simultaneous action of a whole system of fundamental laws, we see only very little of the whole class of hypothetical variations for which each of the fundamental relations might be assumed to hold.

How is it possible to resolve this tension? In my opinion, Haavelmo does not take this problem seriously enough and tends to treat it as a technical issue. He, and subsequent researchers at the Cowles Commission, tend to make the resolution of this problem a property of the theory itself. It is the so-called identification problem. Haavelmo is well aware of the problem, even if he does not use the term identification itself, which would be coined later by Koopmans:


We may fail to recognize that one or more of the parameters to be estimated might, in fact, be arbitrary with respect to the system of equations. This is the statistical side of the problem of autonomous relations. Suppose that, in particular, it is possible to derive an infinity of new systems which have exactly the same form as the original system, but with different values of the coefficients involved. Then, if we do not know anything about the values of the parameters in the original equation system, it is clearly not possible to obtain a unique estimate of them by any number of observations of the variables. And if we did obtain some "estimate" that appeared to be unique in such cases, it could only be due to the application of estimation formulae leading to spurious or biased results. For example, the question of deriving both demand and supply curves from the same set of price-quantity data is a classical example of this type of problems. 

Indeed, in the supply and demand case, a rigorous investigation of the theoretical model should make us aware that it is impossible to recover the properties of the demand and supply curves from price-quantity data alone. If we were to run a regression of quantities on prices, we would obtain a spurious confluent coefficient.
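To make the spuriousness tangible, here is a minimal simulation sketch, my own construction rather than anything in Haavelmo's paper, using the stylized market written out above with a = 10, b = 1, c = 2 and d = 0.5:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Stylized market: demand q = 10 - 1.0*p + u, supply q = 2 + 0.5*p + v
    u = rng.normal(0, 1, n)  # demand shocks
    v = rng.normal(0, 1, n)  # supply shocks

    # Equilibrium price and quantity solve both relations simultaneously
    p = (10 - 2 + u - v) / (0.5 + 1.0)
    q = 10 - 1.0 * p + u

    # Regressing quantities on prices yields the confluent coefficient,
    # here about -0.25: neither the demand slope (-1.0) nor the supply slope (0.5)
    print(np.polyfit(p, q, 1)[0])

The regression coefficient settles around -0.25, a variance-weighted mixture of the two slopes, recovering neither curve: price and quantity only move together through the equilibrium condition, which is exactly the situation Haavelmo describes.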

This question (in the case of linear relations known as the problems of multicollinearity) is of great importance in economic research, because such research has to build, mostly, on passive observations of facts, instead of data obtained by rationally planned experiments. And this means that we can obtain only such data as are the results of the economic system as it in fact is, and not as it would be under those unrestricted hypothetical variations with which we operate in economic theory, and in which we are interested for the purpose of economic policy.

So how are we going to deal with the fact that the data obtained from passive observation might not give us enough variation to pin down (identify) the autonomous relationships? For Haavelmo, this is mainly a technical problem, a property of the system of simultaneous equations that we think has generated the data:


In the following we shall see that the investigation of this problem of indeterminate coefficients, as well as other questions of estimation in relation to economic equation systems, all come down to one and the same thing, namely, to study the properties of the joint probability distribution of the random (observable) variables in a stochastic equation system

This problem of going from the observed data to the deep, invariant, autonomous parameters of interest, which was later called the identification problem, has fascinated economists for a long time. And it all started with Haavelmo drawing the attention of the profession to this issue:

This problem, however, is of particular significance in the field of econometrics, and relevant to the very construction of economic models, and besides, this particular mathematical problem does not seem to have attracted the interest of mathematicians.

So for Haavelmo, this is both a theoretical and a technical problem: is there a sufficient amount of variation in the assumed economic system of simultaneous equations to be able to go from the data to the deep autonomous relationships? This problem has attracted a lot of research since then, and some of it is still underway. Nowadays, following Anderson and Rubin (1949), who themselves follow Frisch and Haavelmo, we talk about structural and reduced-form relationships instead of autonomous and confluent ones. Frisch used the terms superflux and coflux.

Where does this lead us? Well, identification is a property of the system of theoretical relations that we postulate, so all our inference is conducted conditional on this system being restricted enough to be identified. Koopmans, Rubin and Leipnik (1950) recognize this fact in a chapter of the famous Cowles Commission Monograph 10: the structural model is identified under a set of a priori restrictions. For example, it has been well known since Philip Wright that supply and demand models are identified if there exists a shifter of supply that is restricted not to affect demand and a shifter of demand that does not affect supply. As a consequence, the identification of economic relationships, and thus all of our empirical knowledge, rests on a set of untestable assumptions. The key question then becomes: how do we justify these assumptions? They are extremely important, but for a long time in economics assuming these restrictions seemed almost unimportant and was left to footnotes. Since the early 90s, Josh Angrist and others have done a lot to put these restrictions back into the picture and to discuss them. Basically, these restrictions are the experiments that we postulate in the data in order to be able to test our theories. They had better be reliable. Hence the quest for natural experiments that would be a credible source of identification of autonomous relationships and of tests of economic theories.
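To fix ideas on what such an a priori restriction buys us, here is a sketch of the Wright-type argument in the same stylized market, again my own illustration: suppose an observed cost shifter z enters the supply curve but is assumed, a priori, not to enter demand. That exclusion restriction is the postulated experiment, and the ratio of covariances cov(q, z)/cov(p, z) then recovers the autonomous demand slope:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    z = rng.normal(0, 1, n)  # supply shifter, excluded from demand by assumption
    u = rng.normal(0, 1, n)  # demand shocks
    v = rng.normal(0, 1, n)  # supply shocks

    # demand: q = 10 - 1.0*p + u ; supply: q = 2 + 0.5*p + 1.0*z + v
    p = (10 - 2 - z + u - v) / (0.5 + 1.0)  # equilibrium price
    q = 10 - 1.0 * p + u                    # equilibrium quantity

    # The exclusion restriction identifies demand: the instrumental-variable ratio
    iv_slope = np.cov(q, z)[0, 1] / np.cov(p, z)[0, 1]
    print(iv_slope)  # close to -1.0, the autonomous demand slope

Of course, the whole exercise hinges on the untestable claim that z does not shift demand: the a priori restriction is doing all the work.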

As for Haavelmo, he did not seem too bothered by discarding the possibility of running experiments in economics, nor by his faith in the mathematical analysis of the identifiability of a set of simultaneous equations stemming from economic theory as a surrogate for good experiments. Two things strike me. First, how can a man so adamant about testing theories and finding crucial experiments settle for such an unsatisfactory device as a priori restrictions when it comes to the actual implementation of econometric analysis? I suspect the influence of Frisch, who had been his master and had already outlined the path of research along these lines. Frisch was apparently a very energetic and charismatic figure, and it is possible that Haavelmo's respect for him explains the apparent schizophrenia of Haavelmo's paper. This approach of identification with a priori restrictions has persisted until today, and papers are still published in leading journals studying the restrictions needed to identify some quantities of interest. Second, I am surprised by how much Haavelmo mixes science with engineering in his paper. The separate equations are studied simultaneously under the a priori identifying restrictions. Each of them is then interpreted as an autonomous relationship with causal implications, and as a test of some theories. This is the scientific side of the endeavor. The equations are then combined in order to predict a policy change, the engineering side. All of this analysis rests upon the validity of the a priori restrictions alone. Some work in economics still uses this approach of causally interpreting structural models identified by a priori restrictions.

Actually, Haavelmo anticipates the credibility revolution on the engineering side when he advocates the following cycle, which opens up the need for new data to validate the predictions of the model, sound engineering advice:


If we have found a certain hypothesis, and, therefore, the model behind it, acceptable on the basis of a certain number of observations, we may decide to use the theory for the purpose of predictions. If, after a while, we find that we are not very successful with these predictions, we should be inclined to doubt the validity of the hypothesis adopted (and, therefore, the usefulness of the theory behind it). We should then test it again on the basis of the extended set of observations. 

Haavelmo is also well aware of the Popperian limits to knowledge:

Now suppose that we have a set of observations that all confirm the statements that are permissible within our model. Then these statements become facts interpreted in the light of our theoretical model, or, in other words, our model is acceptable so far as the known observations are concerned. But will the model hold also for future observations? We cannot give any a priori reason for such a supposition. We can only say that, according to a vast record of actual experiences, it seems to have been fruitful to believe in the possibility of such empirical inductions. 

Disenchantment 

Over the years, Haavelmo grew dissatisfied with the actual results of the research program that he had delineated in his landmark 1944 paper. In his 1958 presidential address to the Econometric Society, he goes as far as saying:

The concrete results of our efforts at quantitative measurements often seem to get worse the more refinement of tools and logical stringency we call into play!

He then describes the progress that econometricians, many of them linked to the Cowles Commission, have made toward solving the identification problem and proposing estimation devices for simultaneous equations models. And he concludes:

But the concrete results of these efforts have often been a seemingly lower degree of accuracy of the would-be economic laws (i.e., larger residuals), or coefficients that seem a priori less reasonable than those obtained by using cruder or clearly inconsistent methods.

What is to blame for this apparent failure? For Haavelmo, the main responsibility lies with the shortcomings of economic theory:

I think we may well find part of the explanation [...] in the shortcomings of basic economic theory.

Haavelmo then proposes two directions in which to improve economic theory: first, including what people actually think (their expectations) in models; second, relaxing the stability of preferences to make them depend on neighbors, friends and so on. But I think he does not touch upon the key issue. When talking about what has been learned, he says:

We have been striving to develop more efficient statistical methods of testing hypotheses and estimation. In particular, we have been concerned about general principles of consistent procedure in the important field of multiple regression technique. We have found certain general principles which would seem to make good sense. Essentially, these principles are based on the reasonable idea that, if an economic model is in fact "correct" or "true," we can say something a priori about the way in which the data emerging from it must behave. We can say something, a priori, about whether it is theoretically possible to estimate the parameters involved. And we can decide, a priori, what the proper estimation procedure should be. We have learned to understand the futility of arguing that the data "in practice" may behave differently, because such an argument would simply mean that we contradict our own model. 

So he remains completely faithful to the a priori approach, and he goes as far as saying that arguing that the data might behave differently "in practice" is futile. While I can understand this in the context of identification analysis, it is still a very poor way of seeing empirical validation. In his 1989 Nobel Prize reception speech, he again blames the theory for the failure of the econometrics program:

the possibility of extracting information from observations of the world we live in depends on good economic theory. Econometrics has to be founded on theories that describe in a reasonably accurate way the fashion in which the observed world has operated in the past. [...] I think existing economic theories are not good enough for this purpose.

A final word

As a conclusion, I cannot help thinking of the wonderful adventure of understanding the sources of the identification problem as a beautiful endeavor, but at the same time as a waste of time. Would looking for actual crucial experiments in practice not have been a sounder way of spending this huge amount of energy? I think Haavelmo and the Cowles Commission members were much too theoretically oriented to be really fascinated by the adventure of realistically looking at the data with a crucial experiment in mind. But their work was fascinating as a way to encode and understand causality and the difficulties of causal inference. Nowadays, any applied empirical paper has to discuss its identification strategy: how it extracts causality from observational data. Also, the apparatus used to study causality and identification with simultaneous equations models has resurfaced recently in artificial intelligence: Judea Pearl has acknowledged the legacy of Haavelmo when thinking about extracting causal factors from observational data (p.158).

I have been fascinated by all these works and have studied them in detail. What I think now is that we were too quick to declare defeat over the data and too quick to call a priori restrictions to the rescue. I am happy that data is making its way into economics in a big way. I cannot help thinking of Claude Bernard, the father of experimental medicine, who said in his 1865 book on experimental medicine:

Experimentation is undeniably harder in medicine than in any other science; but for that very reason it was never so necessary, and indeed so indispensable. The more complex a science, the more important is it, in fact, to establish a good experimental standard, so as to secure comparable facts, free of sources of error. Nothing is today, I believe, more important to the progress of medicine.

Replace medicine with economics and this would seem to describe the state of our science nowadays. We are in the middle of a credibility revolution where data and crucial experiments enter the picture in a big way. Here, I cannot help thinking of the young Haavelmo who, in the conclusion of his landmark paper, acknowledges that econometrics seems pretty technical and wonders, tongue in cheek, whether we might want to dispense with it:

The patient reader, now at the end of our analysis, might well be left with the feeling that the approach we have outlined, although simple in point of principle, in most cases would involve a tremendous amount of work. He might remark, sarcastically, that "It would take him a lifetime to obtain one single demand elasticity." And he might be inclined to wonder: Is it worthwhile? 

He then concludes, returning to his initial good intentions:

If economics is to establish itself as a reputable quantitative science, many economists will have to revise their ideas as to the level of statistical theory and technique and the amount of tedious work that will be required, even for modest projects of research. [...] In other quantitative sciences the discovery of "laws," even in highly specialized fields, has moved from the private study into huge scientific laboratories where scores of experts are engaged, not only in carrying out actual measurements, but also in working out, with painstaking precision, the formulae to be tested and the plans for the crucial experiments to be made. Should we expect less in economic research, if its results are to be the basis for economic policy upon which might depend billions of dollars of national income and the general economic welfare of millions of people? 

I could not agree more.