Please note the dates for this course: January 4-24 for self-paced study, and January 25 and 26 for the two live days.
Prices include VAT.
The Remote PEM Course consists of a mix of self-paced study and live online meetings.
Specifically:
1) 21 hours of pre-recorded lectures. Participants will receive access to the lecture videos for three weeks before the live days, to go through them at their own pace. Note that while the videos remain accessible for an additional two weeks from the start of the live classes, it is vital that participants have finished watching the lectures BEFORE the live meetings, which are devoted to questions and Stata practicals. The extended access after the end of the course is intended for a second listening, to revise, deepen or refresh specific (or all) topics. The lectures cover:
- Introduction to the Course
- The Evaluation Problem & Overview of Analytical Challenges
- Evaluation Methods
- The Naive Estimator
- Randomised Experiments
- Instrumental Variables
- Regression Discontinuity Design: Sharp
- Regression Discontinuity Design: Fuzzy
- Regression Methods
- Matching Methods
- Longitudinal Methods: Before-After
- Longitudinal Methods: Difference-in-Differences
- Longitudinal Methods: Synthetic Control Method
- Wrap-up & Conclusions
2) 10.5 hours of live sessions split over two days, consisting of:
- A quick summary/recap of each topic, taking questions relating to that topic
- Time for participants to work on the corresponding guided Stata practical
- Barbara going through and discussing that Stata practical live
A rough breakdown of the live sessions:
Time | Day 1 | Day 2
10.00 | Evaluation Problem & Naive Estimator | RDD – Fuzzy & Regression Methods
11.30 | Break | Break
11.45 | Randomised Experiments & Instrumental Variables | Matching Methods & Before-After
13.15 | Break | Break
13.45 | RDD – Sharp | Difference-in-Differences & Synthetic Control Method
16.00 | End of day | End of day
PLEASE NOTE
- To participate in the remote course you will need Zoom.
- If you’d like to work on the practical part on your own (either during the live sessions or later in your own time), you will need Stata.
How can one evaluate whether a government labour market programme such as the Work Programme, or a subsidy to education such as the EMA, is actually working? This course deals with the econometric and statistical tools that have been developed to estimate the causal impact of any generic ‘treatment’ on one or more outcomes of interest – from government programmes, policies or reforms, to the returns to education, the impact of unionism on wages, or of smoking on own and children’s health.
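To give a flavour of the formal framework involved, here is a minimal sketch in standard potential-outcomes notation (an illustration only, not an extract from the course materials). Write Y(1) and Y(0) for an individual’s outcome with and without the treatment, and D = 1 for the treated. Since Y(1) and Y(0) are never both observed for the same individual, the parameter of interest – for instance the average effect of treatment on the treated (ATT) – has to be identified from assumptions, and the simple comparison of treated and untreated outcomes (the naive estimator) recovers it only in the absence of selection bias:

\[
\mathrm{ATT} = E[\,Y(1)-Y(0)\mid D=1\,],
\qquad
E[\,Y\mid D=1\,]-E[\,Y\mid D=0\,]
= \mathrm{ATT} + \underbrace{E[\,Y(0)\mid D=1\,]-E[\,Y(0)\mid D=0\,]}_{\text{selection bias}} .
\]

The methods listed below can be read as different sets of assumptions under which this selection bias is removed.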
After highlighting the ‘evaluation problem’ and the challenges it poses to the analyst, we focus on the main empirical methods to solve it. Specifically, the course covers:
- Randomised social experiments
- Naive non-experimental estimator
- Natural experiments or instrumental variables
- Regression Discontinuity Design
- Regression analysis
- Matching methods
- Before-after
- Difference-in-differences
- Synthetic control methods
For each of these approaches, we give the basic intuition, discuss the assumptions needed for its validity, highlight the question it answers and formally show identification of the parameter of interest. The relative strengths and weaknesses of each approach are discussed in detail, drawing from example applications in the economics literature. Each method will be implemented ‘hands-on’ in practical Stata sessions.
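As a flavour of what the guided Stata practicals involve, here is a minimal, hypothetical sketch of one of the methods above (a two-period difference-in-differences); the dataset and variable names (paneldata, y, treat, post, id) are invented for illustration and are not taken from the course materials.

* Hypothetical sketch only – not the course practical.
* Two-period difference-in-differences estimated by OLS.
use paneldata, clear                              // hypothetical panel dataset
gen treat_post = treat * post                     // treated group x post-reform period
regress y treat post treat_post, vce(cluster id)  // cluster standard errors by unit
* Under the parallel-trends assumption, the coefficient on treat_post is the
* difference-in-differences estimate of the average effect on the treated.

(The actual practicals are guided step by step and then discussed live during the sessions.)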
By the end of the course, participants will be able to:
- frame a variety of micro-econometric problems into the evaluation framework, and be aware of the concomitant methodological and modelling issues;
- be discerning users of econometric output – able to interpret the results of applied work in the evaluation literature and to assess its strengths and limitations;
- access the evaluation literature to further deepen their knowledge on their own;
- choose the appropriate evaluation method and strategy to estimate causal effects in different contexts; and
- use simple statistical packages (we use Stata in the course) to apply the different evaluation methods to real data.
Level of knowledge required:
This is an intermediate-level course on quantitative empirical methods for policy evaluation. As such, familiarity with basic statistical concepts (e.g. significance testing) and with basic econometric tools such as OLS regression and probit/logit models is required. Note also that the course relies on notation and involves a certain degree of formalisation (at the level of the Blundell, Dearden and Sianesi (2005) paper listed under Background reading below).
The practical part of the course will make use of Stata; although the exercises will be guided, basic familiarity with this software is strongly recommended.
In considering this course, please note that while it offers an in-depth and thorough overview and discussion of the various evaluation methods, it is not an advanced course at the post-graduate level. Most emphasis is devoted to understanding the issues, to choosing the most appropriate method for a given context, and to implementing evaluation methods in practice. On the other hand, there is also a certain degree of formalisation, so this course might not be the most suitable for those purely interested in commissioning or managing evaluation work.
Note also that while participants have often come from a range of backgrounds and have reported enjoying the course, the examples and discussions come from a mostly (labour) economics perspective.
Background reading
No preparatory reading is required as the course does not assume prior knowledge of evaluation methods and issues.
Past participants have, however, suggested emphasising the importance of reading the following paper to get acquainted with the notation and concepts, while others have recommended setting it as required reading at the end of the first day (a copy will be included in the course pack for reference). The paper covers some of the evaluation methods that will be discussed in detail during the course, and does so at the same level of formalisation used in the technical part of the course. It may thus be worthwhile to scan it in advance to get an idea of the issues, notation and material more generally:
Blundell, R., Dearden, L. and Sianesi, B. (2005), “Evaluating the Effect of Education: Models, Methods and Results from the National Child Development Survey”, Journal of the Royal Statistical Society, Series A, 168, 473-512 (also available as an IFS Working Paper).
Alternatively, a gentle introduction to evaluation issues and methods is offered by Ravallion, M. (1999), “The Mystery of the Vanishing Benefits: Ms. Speedy Analyst’s Introduction to Evaluation”, World Bank Policy Research Working Paper No. 2153.
Trainer’s bio