Thursday, November 10, 2016

The roots of the rise of populism

Following the election of Donald Trump, there has been a spate of explanations as to why he won. Let me synthesize here the various explanations that I have been able to find so far. I find this discussion fascinating not only for scientific reasons, but also because it speaks to what seem to be major changes affecting all Western democracies.
  1. The losers of trade globalization strike back: white blue-collar workers who have been hit by globalization vote for more protection from foreign workers. The best illustration of this phenomenon is Branko Milanovic's elephant graph. The middle class in developed countries has seen stagnating incomes since 1988 (and even earlier) and has radicalized as a consequence. This explanation seems to have a grain of truth. Recent work by David Autor, David Dorn, Gordon Hanson and Kaveh Majlesi shows that in US counties harboring more industries competing with Chinese exports, workers not only lose their jobs, have lower earnings and are in worse health, but they also tend to vote for more extreme Republican candidates.
  2. The losers of technological change rebel. Erik Brynjolfsson has tweeted this graph: the vote for Trump seems to be correlated with the share of a county's jobs that are routine. David Autor, along with various other coauthors, has shown that a huge change in advanced economies has been the progressive disappearance of middle-level routine jobs, yielding a polarization of the job market: only highly skilled tech jobs and very low skilled service jobs are growing, while middle-level industrial and clerical jobs are replaced by robots and computers.
  3. The white majority feels threatened by the rise of ethnic minorities. The idea is that the modern welfare state is easier to maintain when there is ethnic homogeneity. The main reason seems to be that we are less generous with people who do not look like us. The increase in the size of the Hispanic and African-American communities might have triggered a fear reaction by the white majority, which is now trying to protect itself by restricting access to citizenship or even deporting immigrants. Recent results for Austria by my IAST colleague and friend Charlotte Cavaillé seem to give some credit to that explanation. Charlotte and her coauthors find that natives competing with immigrants for access to public housing vote more for the far right.
  4. It's Fox News: Simon Wren-Lewis has a very nice post where he discusses recent evidence on the persuasive impact Fox News has on election outcomes.
  5. Democrats simply did not turn out for Hillary. David Yanagizawa-Drott has put together a really nice graph showing that Hillary failed to mobilize Democratic voters.
  6. It's the economy, stupid! It turns out that, back in March, Ray Fair's model was predicting a Republican win. The explanation comes from the penalty of having two consecutive Democratic terms before the election, combined with the weak economy. Both of these factors gave a strong lead to any Republican candidate.
What to make of all that? Explanations 1 to 3 are the most frightening. They suggest that the model of the modern Western welfare state is in jeopardy under the combined forces of globalization (trade and migration) and of technical change. These explanations are extremely dispiriting, since it seems extremely hard to conjure up solutions to the challenges raised by these changes. I'd love to think that it is only explanation 6!

Friday, April 1, 2016

Why are the French unhappy at work, and what can we do to change that?

For several years now, I have been taking a close interest in child-rearing and in management. I am very happy to be able to start bringing these themes into my research agenda. I will post more regularly on these questions in the future, notably in French, since one of my goals is to have a concrete impact on the practices of parents, teachers and managers.

In the meantime, at the invitation of the TSE Alumni association, I am giving a fairly programmatic talk on these themes on April 8 in Paris. Here is the text describing the theme of the talk:
Many indicators tell us that the French are unhappy at work. How can we explain that work is, for the French, a place of stress and anxiety rather than a place of fulfillment and well-being? What solutions can we bring to change this state of affairs? Sylvain Chabé-Ferret will defend his conviction that the French malaise at work is the product of a harmful interaction between two forms of distrust. On the one hand, firms do not trust their employees, and impose authoritarian, hierarchical and stressful managerial and organizational practices on them. On the other hand, employees put up with these practices because they do not trust their own feelings and emotions, and hesitate to acknowledge their suffering and act on it. For him, this divorce between what employees deeply feel and what they allow themselves to think originates in a psychological structure he calls the False Self. The False Self is built in childhood by parental and educational practices that repress the expression of children's emotions and aspirations. Combining economics, psychology, anthropology and neurology, Sylvain will show us the origin of the False Self and its impacts on society and on individuals' actions. Sylvain will then suggest avenues for parental, educational and managerial practices that respect the Self, and present the evidence we have of their effectiveness. Finally, he will conclude with some practical advice for young graduates, to help them better listen to their emotions and follow their aspirations, so as to be happier in their jobs and to contribute to changing society in depth.
A few seats are reserved for outside guests. Leave a message in the comments to receive an invitation.

Friday, November 20, 2015

How Can We Save the Amazon Rainforest?

Recently, I have embarked on a collective effort to write a blog post on the economics of public policies to fight deforestation, as part of a series of posts in preparation for COP21, a joint venture between TSE, Le Monde and The Economist. I have had the chance to work with a team of amazingly cool, witty and nice collaborators (Arthur van Benthem, Eduardo Souza Rodrigues, Julie Subervie). The post has appeared in French here and in English there.

What I would like to do here is to summarize our ideas and then highlight the results of a recent paper by Julie, which gives the first robust evidence on the effectiveness of a particularly important policy in the Amazon: Payments for Environmental Services (PES).

The reasons why we want to stop deforestation are pretty straightforward: deforestation is responsible for about 10 percent of climate change emissions and leads to massive biodiversity losses. Actually, deforestation in the Amazon is not simply the result of private decisions by landowners; it is largely the result of a massive colonization campaign sponsored by the military junta that governed Brazil in the 1970s.

The key question is which policy to choose to halt deforestation. There are a lot of options. Governments tend to favor regulation, such as declaring some land protected and banning all cutting there. Economists call these policies "Command and Control" because they are highly interventionist and leave no leeway for agents to adapt to the policy. Economists favor price instruments above all, such as a carbon market or a carbon tax. The key advantage of these policies is that they leave much more leeway for agents to adapt: when there is a price on carbon, the least costly carbon reduction options are the ones implemented first. With command and control, you might impose much higher costs to reach the same environmental gain, by banning very profitable cuts while allowing trees to be cut where economic returns are actually small. PES are an intermediate option between command and control and price instruments. With a PES, the government pays farmers who agree to keep their trees standing a fixed amount per hectare. Theoretically, PES are less efficient than market instruments, since farmers can choose not to take up the incentive, whereas everyone faces a tax or a price. Also, those who volunteer might be the ones who would not have cut their trees anyway, even in the absence of the payment. If this is widespread, they benefit from a windfall effect: they do nothing and get paid. PES should still be better than command and control, though (we say that they are second best instruments; actually they are third best, since they are linear contracts, whereas one could think of nonlinear PES contracts that would be the true second best option).
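To make the ranking concrete, here is a minimal numerical sketch (mine, not from our post; all numbers are made up) of why a price instrument reaches a given conservation target at lower cost than command and control:

```python
# A minimal sketch, with made-up opportunity costs, of why a price instrument
# achieves the same conservation at lower cost than command and control.
import numpy as np

rng = np.random.default_rng(0)
# Per-hectare profit from clearing for 1,000 hypothetical landowners.
profit = rng.uniform(0, 100, size=1000)

target = 500  # hectares we want to keep standing

# Price instrument: a carbon tax (or PES payment) of t per hectare makes every
# owner whose clearing profit is below t conserve; set t to hit the target.
sorted_profit = np.sort(profit)
t = sorted_profit[target - 1]
cost_price = sorted_profit[:target].sum()  # cheapest hectares conserved first

# Command and control: ban clearing in a zone drawn at random,
# irrespective of how profitable each hectare is.
banned = rng.choice(len(profit), size=target, replace=False)
cost_ban = profit[banned].sum()

print(f"forgone profit, price instrument: {cost_price:,.0f}")
print(f"forgone profit, blanket ban:      {cost_ban:,.0f}")
# The ban costs roughly twice as much for the same number of hectares saved.
```

The ordering is exactly the adaptation argument above: under the price, the least profitable hectares are the first ones conserved.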


But this is theory. What we would like to know is whether these predictions hold in real life, right? I mean, it's useful to know how policies work in the perfect, vacuum-filled world of models, but how do these predictions hold up in reality? Many things can go wrong in the imperfect, air-filled realm of the real world. Agents might not be as well-informed or as rational as our models assume, and tax enforcement might be undermined by corruption or inefficiency.

It turns out that it is extremely difficult to measure the effectiveness of forest conservation policies. Why? Because we face two very serious empirical challenges: additionality and leakage.

Additionality is a key measure of a program's success: how much did the policy contribute to halting deforestation where it was implemented? For example, by how much did deforestation decrease in protected areas? Or among farmers subject to a deforestation tax or to a price of carbon? Or among farmers taking up a PES? In order to measure additionality, we have to compute how much deforestation there would have been in the absence of the policy. But this is extremely hard to do, since that situation did NOT happen: the policy was actually implemented. The reference situation to which we would like to compare what happened never occurred; we call this situation the counterfactual.

Since we cannot directly observe the counterfactual, we are going to try to proxy for it using something observed. We could take as a proxy the situation before the policy was implemented. But this proxy might be very bad. For example, after the Brazilian government tightened regulatory policies and improved forest monitoring thanks to satellite imagery in the 2000s, deforestation in the Amazon slowed down to approximately half a million hectares annually. It looks like the policy was successful. But, at the same time, lower prices for soybeans and cattle products also reduced incentives to deforest. So what was the main driver of the decrease in deforestation? How much forest did the policy save exactly?

We could also use areas and farmers not involved in the policy as a proxy for the counterfactual situation. But this proxy might be very bad as well. For example, even if we observed that farmers who participate in a PES program have lower deforestation rates than those who do not, this would not imply that the scheme actually reduced deforestation. After all, farmers who stand to profit the least from cutting down their trees are the most likely to sign up for the program. As a result, the program might end up paying some farmers for doing nothing differently from what they would have done anyway, and the additional impact of the program may very well be small.
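Here is a toy simulation of that selection problem (made-up numbers, not real data): because low-profit farmers self-select into the program, a naive participant/non-participant comparison vastly overstates additionality.

```python
# Toy simulation of self-selection into a PES: the naive comparison of
# participants with non-participants overstates the program's true impact.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
gain = rng.normal(50, 20, n)        # per-farmer profit from clearing (made up)

would_clear = gain > 30             # counterfactual behaviour without the program
signup = gain < 40                  # only farmers with little to gain volunteer
clears = would_clear & ~signup      # participants keep their trees to get paid

naive = clears[~signup].mean() - clears[signup].mean()
true_effect = would_clear[signup].mean() - clears[signup].mean()

print(f"naive participant gap:             {naive:.2f}")
print(f"true additionality among sign-ups: {true_effect:.2f}")
# Many participants would have kept their trees anyway (the windfall effect),
# so the naive gap is about twice the true effect here.
```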

Leakage occurs when a conservation policy that is successful locally triggers deforestation elsewhere. For example, a farmer may stop clearing forest on plots that he has contracted under a PES program but at the same time increase deforestation on plots not covered by the contract. On a larger scale, the threat of paying fines to a provincial government may give farmers or logging firms an incentive to move operations to a neighboring province. In such cases, leakage undermines the additionality of conservation programs. Detecting leakage is even more difficult than measuring additionality: we not only need to compute a counterfactual, we first and foremost need to detect where the leakage goes.

OK, so additionality and leakage are key to ranking the policy options in the real world. So what do we know about the additionality and leakage effects of the various forest policies in the Amazon (and in other threatened rainforests)? Not much, actually.

As in medicine when testing the efficacy of a new drug, the gold standard of proof in empirical economics is the Randomized Control Trial (RCT). In an RCT, we randomly select two groups of individuals or regions and implement the policy for only one group, keeping the second as a control. The difference between treatment and control provides a direct measure of the additionality of the policy. RCTs can also be designed to measure leakage. Though RCTs are commonly run to evaluate education or health policies worldwide, there have been only a few randomized evaluations of forest policies. Kelsey Jack from Tufts University performed RCTs to assess tree planting subsidies in Malawi and Zambia. To my knowledge, there are no similar results for forest conservation PES, in Brazil or elsewhere. Seema Jayachandran has been conducting an RCT-based evaluation of a forest-conservation PES program in Uganda. The experiment has been designed to estimate both additionality and leakage effects. We are impatiently waiting for her results.
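Continuing the toy example from above, randomizing who is offered the payment removes the self-selection, and a simple difference in means recovers additionality:

```python
# Same toy world, but with the PES offered at random: the treatment/control
# difference in means now measures additionality without selection bias.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
gain = rng.normal(50, 20, n)
treated = rng.random(n) < 0.5            # coin-flip assignment
would_clear = gain > 30
# Treated farmers with clearing profit below the payment (40) conserve.
clears = would_clear & ~(treated & (gain < 40))

effect = clears[~treated].mean() - clears[treated].mean()
print(f"RCT estimate of avoided deforestation: {effect:.2f}")
# Close to the true share of farmers the payment actually sways
# (those with 30 < gain < 40, about 0.15 here).
```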

In the absence of RCTs, economists usually try to identify naturally occurring events or “experiments” that they hope can approximate the conditions of an RCT. In a recent paper, soon to be available here, Julie and her coauthors have conducted such a study of one of the first forest-conservation PES programs ever implemented in the Amazon. The key graph in the paper is the following:
The program was implemented in 2010. What you can see is that the pace of deforestation decreased after 2010 among participants, while it remained the same among non-participants. The change in the difference in land cover between participants and comparison communities is a measure of additionality: it is pretty large, about 10 percentage points. Looking at comparison communities, you can see that the pace of deforestation did not increase there, as it should have if leakage were present. What seems to have happened is that farmers started farming the previously deforested land more intensively, and actually decreased deforestation on new plots. Using these estimates, an estimate of how much carbon was saved and a value for a ton of carbon, Julie and her coauthors find that the benefits of the program exceed its costs.
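The estimator behind this graph is a difference in differences. A minimal sketch with made-up forest-cover shares (not the paper's numbers):

```python
# Difference in differences: the change among participants minus the change
# among comparison communities (shares of land still forested, made up).
share = {
    ("participants", "before"): 0.60, ("participants", "after"): 0.58,
    ("comparison",   "before"): 0.60, ("comparison",   "after"): 0.48,
}
did = ((share[("participants", "after")] - share[("participants", "before")])
       - (share[("comparison", "after")] - share[("comparison", "before")]))
print(f"additionality (DiD): {did:+.2f}")  # +0.10, about 10 percentage points
```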

A couple of comments here. First, this work is a beautiful example of empirical economists at work. This is how we hunt for causality in the absence of an RCT. In order to check the validity of our natural experiments, we check that we do not find effects where there should be none. Here, you can see that before 2010, the deforestation trends in participant and comparison communities were parallel. This supports the critical assumption Julie and her coauthors make: in the absence of the PES, deforestation trends would have remained parallel over time. Second, there is still work to do. The measure of forest cover is self-reported and does not rely on satellite data; it is still possible that farmers lied about how many hectares of forest they had left standing. The number of observations is small, so precision is low, and if observations happen to be correlated within communities, precision would be even lower. We would also like to know whether these changes in farming practices will persist over time, or whether deforestation will resume as soon as the program stops. Julie is trying to collect new data on the same farmers several years later to check this. Third: these are amazingly encouraging results. It seems that we can do something to save the Amazon rainforest after all. Rejoice.

Sunday, March 8, 2015

Investing in the youth of the banlieues: a national emergency

The January 7 attacks shook me, as they did many of us. After the grief and the marches, I also felt the urge to act. I started by writing an op-ed. Thanks to TSE's excellent head of communications, Jenni, it was published yesterday on the website of La Tribune and on the TSE blog. Here is a slightly improved version, with links to the works cited. I also gave a talk on this theme to the Cercle du Bazacle, TSE's club of partner firms. Thanks to Joel and Karine, the organizers of the Cercle's talks, for having enthusiastically accepted my proposal and given me the floor. And thanks to all those who came that day for their comments and encouragement.

Here is the text of the op-ed.

The terrorist attacks against Charlie Hebdo and the Hyper Casher supermarket, and the historic marches that followed them, call for a political response. The nature of that response will define us as a society and express our values. We stand at a crossroads. On this waste and these unspeakable murders, but also on the magnificent reaction that followed them, we can build a better society, or a society of fear.

Of course, there will be a security response. But limiting our response to these events to a French-style Patriot Act would be a disaster. Barricading ourselves in our homes, shutting our children away in their schools, turning our country into an impregnable fortress? If that is our only response, it is a terrible one, for it carries the seeds of fear and of the unravelling of our society into insularity, into distrust of everything different, and ultimately into fear of everyone by everyone.

We need another, complementary response. More ambitious. More beautiful, too. That response is to invest in the youth of our banlieues, to value and support the emergence of active participants in tomorrow's society. They already exist. But we do not see them. They are blotted out in our minds by the Merahs, the Kouachis, the "gang des barbares", the score-settling, the drug trafficking, the unemployment, the riots. But they are there, the immense silent majority who hang on, who have chosen life, its frustrations and its joys, and who have rejected every deadly ideology. I think of my prépa classmate, Mohamed, the only Arab who was not an emir's son in our posh high school in central Toulouse. Momo is an engineer now. He came from the Izards neighborhood, like Merah. I think of my volleyball friends from Villejuif, with whom I played for years, and who welcomed me with open arms, me, the "çaifran". With my southwestern accent and my goatee, they called me d'Artagnan. They became my friends, they, the "renois", the "noichs", the "rebeus". I had so many good times with them that I ended up picking up their expressions and intonations, to the point that my "normal" friends called me "la racaille".

They are there, those who said no to extremism and yes to French society. They are the immense majority, but they need us. How can we help them? How can we make sure there are more Momos and fewer Merahs? What is the best approach? Investing in schools? Changing urban policy? Fighting discrimination? Intervening in the workings of the labor market? A legitimate debate must take place around these options, informed as well as possible by rigorous evaluations.

My own conviction is that the most effective form of investment lies in educational programs aimed at very young children and their parents. These programs do not aim to develop children's cognitive abilities or to teach them academic skills, but to help them be better at being themselves, by teaching them to plan tasks, manage their emotions and resolve their conflicts with others peacefully. Some interventions also pass on to parents simple and sometimes overlooked pieces of information, such as the benefits of talking to your child even if he does not yet talk himself, or of reading him stories at night. Recent research, summarized by Jim Heckman and Tim Kautz in an excellent paper for the OECD, has shown that experimental versions of these programs drastically reduce involvement in illegal activities in adulthood, and also substantially increase the share of college graduates. Such effects are obtained with an investment that is, all in all, limited: the program studied by Yann Algan and his coauthors, for example, consists of 19 role-playing sessions in groups of 3 with a social worker. Yann presented the long-term impacts of this program at a conference at the Institute for Advanced Study in Toulouse (IAST): they are spectacular. The results of this research also show that these programs are all the more effective the earlier they come in a child's life. The longer we let certain behaviors take hold, the harder they are to change later. This is of course no reason to do nothing for teenagers and young adults, but it is a reason to think seriously about interventions starting in early childhood. This is what led Jim Heckman to propose his equation for a better educational investment: invest early and target better.

I find this empirical evidence convincing, but my conviction is also more visceral. I think of all my friends from Villejuif who told me, "If I had known, I would have worked harder at school. But I didn't care. And anyway, it was always mayhem." I think of my friend who ran a day-care center in a rough neighborhood of Toulouse and who resigned with her whole team at the start of the year, victims of a collective burnout in the face of the extreme social distress they witnessed, day after day: lost, violent, sad children, and overwhelmed, helpless parents, sometimes with no other response than indifference or violence. These programs offer concrete answers to parents' distress and children's suffering.

Make no mistake: the fight to win the hearts and minds of the children of the banlieues starts now. The fight against the extremists, the gangs, the criminals, the traffickers. If we do not want these kids to swell their ranks, now is the time to give them their chance, to give them the right weapons, the ones that will allow them to join society and pursue their happiness within it. Is there a more beautiful collective project? What is more beautiful than a child's gratitude? And what do we risk, other than seeing them engage even more in society and contribute to it in ways we cannot even imagine today?

Tuesday, February 10, 2015

Land reallocation in France: some nice maps

Some time ago, I blogged about one of my current projects on land reallocation in France. I have made some progress on this project in the meantime and I am going to report on it here.

I have worked with Elise Maigné, at Inra. Together, and with the help of Eric Cahuzac, we have been able to secure access to the data on reparcelling events at the commune level. This data has been generously transmitted to us by Nadine Polombo, who worked with Marc-André Philippe to digitize the dataset, originally in the hands of the French Ministry of Agriculture. Nadine believes that their dataset is the only one that remains, since the Ministry of Agriculture has decided to destroy the original data and no longer keeps track of reparcelling events. Since then, the data have been made accessible through the open data portal of the French government.

The first thing to note is that there have been 22,374 reallocation events in France reported in this dataset. This is huge, since there are 36,681 communes in France. Some communes have actually undergone more than one reallocation event: 18,227 communes have undergone at least one. This means that 49.7% of all French communes have undergone at least one reallocation event.
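For the record, here is how these headline figures are computed from the events table (the file and column names are hypothetical):

```python
# Counting events and treated communes (hypothetical file and column names).
import pandas as pd

events = pd.read_csv("reallocation_events.csv")      # one row per event
n_events = len(events)                               # 22,374
n_treated = events["commune_id"].nunique()           # 18,227
print(f"{n_events} events in {n_treated} communes, "
      f"i.e. {n_treated / 36_681:.1%} of the 36,681 French communes")
```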

The first issue with the dataset is that some information is missing: the opening date of the reallocation event is missing for 201 events, the closing date for 380 events, and both dates are missing for 291 events. This leaves 21,502 events with non-missing information on both the opening and closing dates of the reallocation event.

Figure 1: Reallocation Events in France
The events with information on the opening date are presented in Figure 1. Reallocation events start at the end of WWII, with a first wave stopping around 1953. A second wave starts in the late 50s and peaks during the 60s; that is the main wave of land reallocation. Then several waves occur in the 70s, 80s and 90s.

Figure 2: First (1) vs Subsequent (2) Reallocation Events
Since some communes have undergone more than one reallocation event, it is interesting to plot the reallocation events depending on whether they are the first or not. This is done in Figure 2. The wave of the 90s seems to be mainly due to reallocation events occurring in communes that had already been reparcelled once. It is possible, though, that a different portion of the commune was reparcelled in the two events.

What would be great now is to get an idea of the way reparcelling was rolled out over space and time. It would especially be nice to know which reparcelling events occurred between 1955, 1970, 1979, 1988, 2000 and 2010, the dates at which agricultural censuses have been conducted in France. I would add 1963 and 1967, as two large surveys were conducted in those years. In order to do this, I have to use GIS software. Since I use Stata to analyse this dataset, I'm going to use its GIS facilities (for the first time). The beautiful map presented in Figure 3 is the result of this exercise.
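I did this in Stata; for readers who prefer open tools, here is a rough sketch of the same exercise in Python with geopandas (the file and column names are hypothetical):

```python
# Sketch of the map in Figure 3 with geopandas: color each commune by the
# inter-census period of its first reallocation (hypothetical files/columns).
import geopandas as gpd
import pandas as pd

communes = gpd.read_file("communes.shp")             # one polygon per commune
events = pd.read_csv("reallocation_events.csv")

census_years = [1955, 1963, 1967, 1970, 1979, 1988, 2000, 2010]
first = events.groupby("commune_id", as_index=False)["opening_year"].min()
first["period"] = pd.cut(first["opening_year"],
                         bins=[1944] + census_years).astype(str)

gdf = communes.merge(first, on="commune_id", how="left")
gdf.plot(column="period", categorical=True, legend=True,
         missing_kwds={"color": "lightgrey", "label": "never reparcelled"})
```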

Figure 3: Map of the Reallocation Events in France
The first striking feature of this map is that land reallocation mainly occurred in the north of France and much less so in the south. One explanation could be that land in the north is much more fertile, but I do not think this exhausts all possible explanations. This will be the topic of subsequent investigation. The second striking feature is how spatially autocorrelated the timing of land reallocation is. For example, the area around Paris (the Paris basin) seems to have been almost completely reparcelled before 1955. The first wave of reparcelling thus seems to have been mainly concentrated in this area. The outskirts of the basin were reached progressively during the 60s and 70s.

The third striking feature of this map is that it coincides very well with a rough map of the agricultural regions in France (see Figure 4).
Figure 4: Map of the Agricultural Regions in France
The cereal-growing regions (yellow) seem to have reparcelled very early, while the areas of mixed farming (light green) reparcelled more slowly. Finally, forest regions and regions with open-range cattle (dark green) have hardly reparcelled at all.

Obviously, this strong spatial autocorrelation is not good news for studying the causal effect of land reallocation on agricultural technology adoption. What would have been great is for reparcelling to occur randomly across space, with some communes within the Paris basin reparcelling early and others not, so that comparing them captures the effect of reparcelling. Here, a raw comparison of reparcelling communes with non-reparcelling ones would be biased by soil quality and types of production. A better comparison would condition on the agricultural zones: comparing communes within the Paris basin with early and late reallocation (if we can find any) is already better. Actually, my idea is to use the finest possible grid size to compare close communes with different reparcelling dates, as in the sketch below.
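In regression form, the idea is to compare communes only within the same agricultural zone (or an even finer spatial cell). A sketch, with hypothetical file and variable names:

```python
# Within-zone comparison as a fixed-effects regression: C(zone) absorbs
# soil quality and production types common to an agricultural zone, so the
# reparcelling effect is identified from comparing nearby communes only
# (hypothetical file and variable names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("communes_outcomes.csv")
fit = smf.ols("tech_adoption ~ reparcelled + C(zone)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["zone"]})
print(fit.params["reparcelled"])
```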

A last striking feature of the data is that communes undergoing reallocation sometimes seem to be aligned on the map. This is because land reallocation occurred along a railroad track or a highway when these infrastructures were built.


TBC

Tips in Scientific Writing

The student paper at TSE (TSEconomist) has asked some of us to provide writing tips for students. Here is my take.

I have to admit that I was not very good at writing papers until recently, and that practice is the essence of progress. But there are a few things I can say that I think can help make writing easier.

The first and main thing is: do NOT start writing when you have finished the theoretical/empirical work. This is a rookie mistake that I repeatedly made over the 3 papers I have out now and the 3 others that I am currently writing. This is stupid. Writing should be intricately related to the work itself, and the paper should be written all along the course of the project. (I think we should think in terms of projects, not papers, since a project is made of several papers, and you have to conduct research, not write papers; papers are the outcome, not the goal.)

What I do now is blog: first, I blog about the research idea. This makes for a nice post where I have to explain why I think the idea is important, why I should spend time and effort exploring it, and why people should be interested in the results. This is maybe the most critical part of any project. It is also the part that most people overlook, especially students. They generally want to rush to the technical things that seem more reassuring instead of taking the time to elaborate their intuition about why something is important. Do elaborate on the why of the project. Spend time and effort explaining why this is an important question for economic science and economic policy, why the literature has not found an answer yet, and why you think your idea can solve it. If you cannot do that, I would say stop and think again. Do you really want to spend one year of effort on something when you do not even know why you are doing it? If you skip this step, you will eventually end up repeating previous research with a small tweak, or you are going to lose the reader in the details and lose track of the ambitious and novel idea that you had. With the blog, I usually write updates on the research as I go along, and this keeps me focused on the original idea and on the eventual changes that I might have made. I have found that I, and students also, tend to lose sight of the original goal as we enter the technical aspects of the project, and we bury ourselves in details instead of exploring the deep, important research question. So, first piece of advice: write a blog (or write for my blog, or for any blog). Then writing the paper is just a matter of wrapping things up. It becomes so much easier.

My second piece of advice is: write as if you were explaining your research to your grandma. Use a relaxed tone and avoid technical words. Try talking yourself, your friends, your family, your colleagues, your teachers, anyone, through your research project, as often as you can. Especially confront specialists in your field and see if you can convince them. If you cannot, it does not mean that your idea is stupid; it means that it is still not clear enough.

My third piece of advice would be: read the LSE blog on scientific writing. It is full of sound, detailed advice like "find the essence of your message," "never anticipate an argument or go back to one," "start paragraphs with the main idea and then develop," "choose an accurate and catchy title."

My fourth piece of advice is: read John Cochrane's writing tips for PhD students. They are excellent. "Find the main message" would be the essence of it, and it is in general really hard to do.

Friday, February 6, 2015

The Credibility Revolution in Economics

In a thought-provoking paper, Josh Angrist and Steve Pischke describe the credibility revolution that is currently going on in economics. Having grown up in the Haavelmo-Cowles-Heckman tradition of structural econometrics, I have to admit that I resisted the intuitive attraction this paper exerted on me. But the more I think about it, the more I can see what is correct in the view that Josh and Steve defend, the more I see myself adapting this view in my own everyday research, and the happier I am about it. The credibility revolution makes a lot of sense to me because I can relate it to the way I was taught biology and physics, and to the reasons why I loved these sciences: their convincing empirical grounding. I admittedly have my own interpretation of the credibility revolution, which does not fully overlap with that of Josh and Steve. I am going to try to make it clear in what follows.

To me, the credibility revolution means that data and empirical validation are as important as sound and coherent theories. It means that I cannot accept a theoretical proposition unless I have access to repeated tests showing that it is not rejected in the data. It also means that I do not use tools that have not repeatedly proven that they work.

Let me give three examples in economics. In economics as a behavioral science, a very important tool for modeling the behavior of agents under uncertainty is the expected utility framework, which dates back at least to Bernoulli, who introduced it to solve the Saint Petersburg paradox. von Neumann and Morgenstern showed that this framework could be rationalized by some simple axioms of behavior. Allais, in a very famous experiment, tested the implications of one of these axioms. What he found was that people consistently violated this axiom. This result has been reproduced many times since then. This means that the expected utility framework, as a scientific description of how people behave, has been refuted. This led to the development of other axioms and other models of behavior under uncertainty, the most famous being Kahneman and Tversky's prospect theory. This does not mean that the expected utility framework is useless for engineering purposes. We seem to have good empirical evidence that it is approximately correct in a lot of situations (readers, feel free to leave references on this type of evidence in the comments). It might be simpler to use than the more complex competing models of behavior that have been proposed since. The only criterion on which we should judge its performance as an engineering tool is its ability to predict actual choices. We are seeing more and more of this type of crucial test of our theories, and this is for the best. I think we should emphasize these empirical results in our teaching of economics: they are as important as the underlying theory that they test.
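For concreteness, here is the textbook version of Allais's two choice problems (payoffs in millions; a sketch of the standard presentation, not necessarily Allais's original figures):

```latex
% Choice 1: A = 1M for sure   vs   B = (5M, .10; 1M, .89; 0, .01)
% Choice 2: C = (1M, .11; 0, .89)   vs   D = (5M, .10; 0, .90)
% Most people choose A and D. Under expected utility, however,
\[
A \succ B \iff u(1) > .10\,u(5) + .89\,u(1) + .01\,u(0)
        \iff .11\,u(1) > .10\,u(5) + .01\,u(0)
        \iff C \succ D,
\]
% so choosing both A and D is inconsistent with ANY utility function u:
% this is the violation of the independence axiom that Allais found.
```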

The second example is in economics as engineering: McFadden's random utility model. McFadden used the utility maximization framework to model people's choices of transportation mode. He modeled the choice between using your car, the bus, your bike or walking as depending on the characteristics of the trips (such as the time to get to work) and on your intrinsic preferences for one mode or the other. He estimated the preferences on a dataset of individuals in the San Francisco Bay Area in 1972. He then used his model to predict what would happen when an additional mode of transportation was proposed (the subway, or BART). Based on his estimates, he predicted that the market share of the subway would be 6.3%, well below the engineering estimates of the time, which hovered around 15%. When the subway opened in 1976, its market share soon reached 6.2% and stabilized there. This is one of the most beautiful and convincing examples of the testing of an engineering tool in economics. Actually, this amazing performance convinced transportation researchers to abandon their old engineering models and use McFadden's. I think it is for this success that Dan was eventually awarded the Nobel prize in economics. We see more and more of this type of test of structural models, and this is for the best.
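A minimal sketch of the logic (with illustrative coefficients of my own, not McFadden's estimates): in a conditional logit, the predicted share of a brand-new mode comes entirely from plugging its characteristics into utilities estimated on the existing modes.

```python
# Conditional logit shares: P_j = exp(V_j) / sum_k exp(V_k), with
# V_j = alpha_j - beta * travel_time_j (all coefficients made up).
import numpy as np

def logit_shares(V):
    expV = np.exp(V - V.max())      # subtract max for numerical stability
    return expV / expV.sum()

beta = 0.08
alpha = {"car": 1.0, "bus": 0.0, "walk": -0.5}
time = {"car": 20, "bus": 40, "walk": 60}    # minutes, hypothetical
V = np.array([alpha[m] - beta * time[m] for m in alpha])
print(dict(zip(alpha, logit_shares(V).round(2))))

# The new mode (BART) never appears in the estimation data: its predicted
# share comes from plugging its hypothetical characteristics into the model.
V_bart = 0.0 - beta * 30
shares = logit_shares(np.append(V, V_bart))
print("predicted BART share:", round(shares[-1], 2))
```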

The third example is in economics, or rather behavioral, engineering (when I use the term "behavioral," I encompass all the sciences that try to understand human behavior). From psychology, and increasingly from economics, we know that cognitive and non-cognitive (or socio-emotional) skills are malleable all along an individual's lifetime. We believe that it is possible to design interventions that help kids acquire these skills. But one still has to prove that these interventions actually work. That's why psychologists, and more recently economists, use randomized experiments to check whether they do. In practice, they randomly select, among a group of children, the ones that will receive the intervention (the treatment group) and the ones that will stay in the business-as-usual scenario (the control group). By comparing the outcomes of the treatment and control groups, we can infer the effect of the intervention free of any source of bias, since both groups are initially identical thanks to the randomization. This is exactly what doctors do to evaluate the effects of drugs. Jim Heckman and Tim Kautz summarize the evidence that we have so far from these experiments. The most famous one is the Perry preschool program, which followed the kids until their forties. The most fascinating finding of this experiment is that by providing a nurturing environment during the early years of the kids' lives (from 3 to 6), the Perry project was able to durably change the kids' lives. The surprising result is that this change was not triggered by a change in cognitive skills, but only by a change in non-cognitive skills. This impressive evidence has directed a lot of attention to early childhood programs and to the role of non-cognitive skills. Jim Heckman is one of the most ardent proponents of this approach in economics.

The credibility revolution also makes sense to me because of the limitations of Haavelmo's framework. As I already said, trying to infer stable autonomous laws from observational data is impossible, since there is not enough free variation in the data. There are too many unknowns and not enough observations to recover each of them. Haavelmo was well aware of this problem, but the solution that he and the Cowles Commission advocated, using a priori restrictions to restore identification, was doomed to fail. What we need in order to learn something about how our theories and engineering models perform is not a priori restrictions on how the world behaves, but more free and independent information about how the world works. This is basically what Josh's argument is about: choose these restrictions so as to make them as convincing as experiments. That's why Josh coined the term natural experiments: the variation in the observed data that we use should be as good as an experiment, stemming not from theory but from luck: the world has offered us some free variation, and we can use it to recover something about its deeper relationships.

The problem with the natural experiment approach is that whether we have identified free variation, and whether it really can be used to discriminate among theories, is highly debatable. Sometimes we cannot do better, and we have to try to prove that the natural variation is as good as an experiment. But a lot of the time, we can generate free variation ourselves by building a field experiment. And this is exactly what is happening today in economics. All these experiments (or RCTs: Randomized Control Trials) that we see in the field are just ways of generating free variation, with several purposes in mind: testing policies, testing the predictive accuracy of models, testing scientific theories. Some experiments can do several of these things at the same time.

This is an exciting time to do economics. I will post in the future on other early engineering and scientific tests, and I will report on my own and others' research that I find exciting.