|Date:||09:30 14 October 2014 - 17:00 17 October 2014|
|Tutor:||Barbara Sianesi, Institute for Fiscal Studies|
|Venue:||University of Manchester|
|Prices:||UK postgraduate students: £105; UK academics: £210; other delegates: £770 (all prices include VAT)|
How can one evaluate whether a government labour market programme such as the New Deal, or a subsidy to education such as the EMA, is actually working? This course deals with the econometric and statistical tools that have been developed to estimate the causal impact of any generic 'treatment' on one or more outcomes of interest - from government programmes, policies or reforms, to the returns to education, the impact of unionism on wages, or of smoking on one's own and one's children's health.
After highlighting the 'evaluation problem' and the challenges it poses to the analyst, we focus on the empirical methods to solve it: randomised social experiments, the naive non-experimental estimator, natural experiments and instrumental variables, regression discontinuity designs, econometric selection (or control function) models, regression analysis, matching methods, and before-after and difference-in-differences methods.
For each of these approaches, we give the basic intuition, discuss the assumptions needed for its validity, highlight the question it answers and formally show identification of the parameter of interest. There will be plenty of discussion of the relative strengths and weaknesses of each approach, drawing from example applications in the literature. Each method will be implemented 'hands-on' in practical Stata sessions.
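The course's practical sessions use Stata, but the logic of one of the listed methods, difference-in-differences, can be sketched in a few lines. The following is an illustrative example (not course material) on simulated data, where the variable names, the assumed data-generating process, and the true effect of 2.0 are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: a treated and a control group, observed before and
# after a policy. All parameters here are made up for illustration.
n = 1000
treated = rng.integers(0, 2, n)   # 1 = treatment group, 0 = control
post = rng.integers(0, 2, n)      # 1 = after the policy, 0 = before
true_effect = 2.0                 # hypothetical causal effect

# Outcome = baseline + group effect + time effect + treatment effect + noise.
y = (1.0 + 0.5 * treated + 0.3 * post
     + true_effect * treated * post
     + rng.normal(0, 1, n))

def did(y, treated, post):
    """Difference-in-differences estimator:
    (treated post - treated pre) - (control post - control pre)."""
    cell_mean = lambda t, p: y[(treated == t) & (post == p)].mean()
    return (cell_mean(1, 1) - cell_mean(1, 0)) - (cell_mean(0, 1) - cell_mean(0, 0))

print(round(did(y, treated, post), 2))  # should be close to the true effect of 2.0
```

Because the simulated group and time effects difference away, the estimate recovers the treatment effect under the parallel-trends assumption that the course discusses in detail.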
By the end of the course, participants will be able to:
Level of knowledge required:
Please note that this is an intermediate-level course.
While offering an in-depth and thorough overview and discussion of the various evaluation methods, this is not an advanced course at the postgraduate level. The emphasis is on understanding the issues, on choosing the most appropriate method for a given context, and on implementing evaluation methods in practice. On the other hand, the course does rely on notation and involves a certain degree of formalisation (at the level of this paper), so please note that PEPA also runs a much less formal introductory course aimed at those who design, commission or manage evaluation work. An Introduction to Impact Evaluation is scheduled once a year. For further information about this course please contact the PEPA Administrator.
No preparatory reading is required as the course does not assume previous knowledge of evaluation methods and issues.
The following references are offered as examples covering evaluation issues and methods at very different levels of formalisation:
Ravallion, M. (1999), “The Mystery of the Vanishing Benefits: Ms. Speedy Analyst's Introduction to Evaluation”, World Bank Policy Research Working Paper No. 2153. This paper offers a very gentle introduction to evaluation.
Blundell, R., Dearden, L. and Sianesi, B. (2005), “Evaluating the Effect of Education: Models, Methods and Results from the National Child Development Survey”, Journal of the Royal Statistical Society, Series A, 168, 473-512. This paper covers some of the evaluation methods that will be discussed in detail during the course, and does so at the same level of formalisation that will be used in the technical part of the course.
A very thorough yet accessible book with a clear emphasis on causal effects is Angrist, J.D. and Pischke, J.S. (2009), Mostly Harmless Econometrics, Princeton University Press.