Sep 17, 2020Liked by Brian Albrecht, Josh Hendrickson

I want to add an anecdote to this discussion. Last week, I had a long discussion with an economist about how graduate students (and undergrads!) dismiss benchmark models too quickly and instead go to complex models with too many parameters, too many equations, and have no idea what is going on or how the model is actually solved (because the computer does it). I think this is true to some extent. If you solve an RBC model with tax and productivity shocks and calibrate it correctly, you'll get practically the same elasticity of investment with respect to taxation as you would with a far more complex model, which may as well be written in Sumerian for all the good it does in terms of understanding and communication. Compared to the more complex model, the benchmark gives a clearer understanding of the processes at work, which in turn leaves one more able to think and communicate about possible shortcomings. Sure, there are tradeoffs, but the benchmark models get short shrift.
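To make the point concrete, here is a minimal sketch of the kind of benchmark calculation I have in mind. It is a hypothetical calibration, not from the post: the steady state of a standard RBC model with a capital-income tax, where the Euler equation pins down capital and hence investment, and the elasticity of investment with respect to the net-of-tax rate can be read off in closed form and checked numerically.

```python
import math

# Hypothetical benchmark calibration (standard quarterly values, not from the post).
# Households accumulate capital taxed at rate tau; the steady-state Euler equation is
#   (1 - tau) * alpha * k**(alpha - 1) = 1/beta - (1 - delta).
alpha, beta, delta = 0.33, 0.99, 0.025

def steady_state_investment(tau):
    """Steady-state investment i = delta * k implied by the Euler equation."""
    k = (beta * alpha * (1 - tau) / (1 - beta * (1 - delta))) ** (1 / (1 - alpha))
    return delta * k

# Elasticity of investment with respect to the net-of-tax rate (1 - tau),
# computed by a centered finite difference around tau = 0.30.
tau, h = 0.30, 1e-6
e_num = ((math.log(steady_state_investment(tau - h)) - math.log(steady_state_investment(tau + h)))
         / (math.log(1 - tau + h) - math.log(1 - tau - h)))

# In this benchmark the elasticity is exactly 1 / (1 - alpha), since
# ln(i) is linear in ln(1 - tau) with slope 1 / (1 - alpha).
e_analytic = 1 / (1 - alpha)
print(e_num, e_analytic)  # both are approximately 1.49
```

The transparency is the point: every number here can be traced back to a single first-order condition, which is exactly what gets lost once the computer is solving a system too large to inspect.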

That's a great point. And it may relate to the previous post on labor markets. There is another force within the field: we need to get published. That may lead to excessive differentiation. I can't just take a benchmark model to study the thing I'm interested in; I need to make a tweak. Ten tweaks later, you have an unintelligible model.

I agree that the RBC model can serve as a useful benchmark. However, I agree with Caballero (2010), who argues that one failure of macroeconomists to identify the GFC was our desire to build models "one deviation at a time." As a first pass, I think that is a fine way to build models: as you say, we understand the propagation of a shock through a new mechanism when it is only one deviation from a model we understand. In the real world, however, we need a model with a large number of deviations from the RBC model (all the bells and whistles). If we discount these larger models as insufficiently parsimonious, then as policymakers we are stuck with a suite of models that, along a few different dimensions, are each only one deviation away from the RBC model.

Ultimately, then, we are leaving it to the judgement of the policymaker to weigh up the evidence from a range of simple models that deviate from the RBC model in order to set policy, a process with a complicated model implicit in the policymaker's brain, and therefore even less parsimonious than a full bells-and-whistles DSGE model! With these large-scale DSGE models, the RBC model becomes a less useful counterfactual for analysing shock-propagation mechanisms.

Great post and am loving the blog!

I don't really disagree with anything that you said. I think that maybe we are just making slightly different points. Thanks for reading!