Working Paper

Non-Bayesian updating in a social learning experiment

Authors

Roberta De Filippis, Antonio Guarino, Philippe Jehiel, Toru Kitagawa

Published Date

14 December 2020

Type

Working Paper (CWP60/20)

In our laboratory experiment, subjects, in sequence, have to predict the value of a good. The second subject in the sequence makes his prediction twice: first (“first belief”), after he observes his predecessor’s prediction; second (“posterior belief”), after he also observes his private signal. We find that second subjects weigh their signal as a Bayesian agent would when the signal confirms their first belief; they overweight the signal when it contradicts their first belief. This way of updating, incompatible with Bayesianism, can be explained by the Likelihood Ratio Test Updating (LRTU) model, a generalization of the Maximum Likelihood Updating rule, and is at odds with another family of updating rules, Full Bayesian Updating. In a second experiment, we test the LRTU model directly and find support for it.
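To make the benchmark concrete, here is a minimal sketch (not taken from the paper; the numbers, the binary-value setup, and the `alpha` overweighting parameter are all illustrative assumptions) of how a Bayesian would update a first belief after a binary signal, and how overweighting only contradictory signals departs from that benchmark:

```python
def bayes_update(prior, precision, signal):
    """Bayesian posterior that V = 1 after a binary signal.

    prior: current belief that V = 1
    precision: P(signal matches the true value), assumed in (0.5, 1)
    signal: 1 if the signal points to V = 1, else 0
    """
    like1 = precision if signal == 1 else 1 - precision   # P(signal | V = 1)
    like0 = 1 - precision if signal == 1 else precision   # P(signal | V = 0)
    return prior * like1 / (prior * like1 + (1 - prior) * like0)

def overweighted_update(prior, precision, signal, alpha=2.0):
    """Hypothetical distortion: raise the likelihood ratio to alpha > 1
    only when the signal contradicts the first belief. This mimics the
    experimental finding; it is an illustration, not the LRTU model itself."""
    lr = precision / (1 - precision) if signal == 1 else (1 - precision) / precision
    if (signal == 1) != (prior > 0.5):   # signal opposes the first belief
        lr = lr ** alpha
    odds = (prior / (1 - prior)) * lr
    return odds / (1 + odds)

# With a first belief of 0.7 in V = 1 and signal precision 0.7:
# a Bayesian moves to 0.49/0.58 ~ 0.845 on a confirming signal and back
# to exactly 0.5 on a contradicting one; the overweighted updater instead
# drops to 0.3 on the contradicting signal, moving past the Bayesian posterior.
confirming = bayes_update(0.7, 0.7, 1)
contradicting = bayes_update(0.7, 0.7, 0)
distorted = overweighted_update(0.7, 0.7, 0)
```

The symmetry of the Bayesian case (a contradicting signal of equal precision exactly cancels the first belief) is what makes the observed asymmetry diagnostic of non-Bayesian updating.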


Previous version

Non-Bayesian updating in a social learning experiment
Roberta De Filippis, Antonio Guarino, Philippe Jehiel, Toru Kitagawa
CWP39/18