Nonlinear Dynamics of Oil Shocks: Dynare, MATLAB, and MATLAB Parallel Server
Konstantinos Theodoridis, European Stability Mechanism
This presentation employs threshold VAR models to investigate the asymmetric impact of oil supply news shocks, analyzing variations in both the size and direction of the shocks. Our findings reveal that large and adverse oil shocks exert a stronger effect on real activity, labor market indicators, and risk variables than small and favorable shocks. Interestingly, we observe no asymmetry in the response of prices and monetary policy to oil shocks of different magnitudes and signs. Supported by a theoretical framework integrating search and matching frictions with Epstein-Zin preferences, the study utilizes Dynare, MATLAB®, and MATLAB Parallel Server™ to enhance model estimation and provide valuable insights into the nonlinear dynamics of oil shocks.
Published: 22 Oct 2024
Thank you very much for the invitation. This is more of an academic paper, but I will try to highlight as we go along how we use MATLAB to facilitate the calculations.
So the topic is the nonlinear effects that oil supply news can have on the economy. This is joint work with Mirela Miescu from Lancaster University and Haroon Mumtaz from Queen Mary University.
The usual disclaimer applies: the views in this presentation are those of the author and do not necessarily represent those of the ESM or its Managing Board.
For the motivation, I have two slides. You can see here the year-on-year change in oil prices, and you can see from this chart that in some periods these changes are small, while in other periods there are very sharp increases. Recently, after the Russian invasion of Ukraine, we had a sharp increase in oil prices.
So the question we ask, and we are not the first to ask it, is whether big changes in oil prices are transmitted to the economy differently. The motivation is straightforward if you consider that interest rates increased from 0%, in some cases all the way to 5%, and still inflation remained quite high; only recently have we seen inflation falling below the objective set by the central banks. Despite this very sharp and sizable increase in interest rates, prices remained persistently high for quite some time.
So the literature goes out and says: OK, maybe big shocks are transmitted to the economy differently. There are many ways to think about why this would be the case. And it is not just that the shocks get bigger; geopolitics now is definitely much different than during the Great Moderation period.
So if there are going to be changes in oil prices, they would usually come from the supply side, and they can be quite sizable.
I have already hinted at these questions. The sole purpose of this paper is to find out whether these big supply-type shocks are transmitted to the economy differently, that is, whether bigger shocks have a disproportionately bigger effect on the economy. And if yes, what is the mechanism behind this transmission, and what are the implications for monetary policy?
Junior hinted at this during his talk: when you start thinking about nonlinearities, when you move away from a linear world, complications arise. In some cases you can deal with these complications if you have small models, but the usefulness of these small models is limited.
If you want a plausible explanation of the underlying economic mechanism behind this transmission, you are going to need big models. And once you combine nonlinearities with big models, the computational cost rises enormously. That, I think, is the biggest cost in this paper.
In this exercise we have two models: a nonlinear time series model and a nonlinear structural model, the DSGE model. Because we use simulation analysis, either posterior inference or predictive prior analysis, we need to simulate both of these models, and these simulations are extremely costly. In some cases they take months to complete.
The only additional thing I want to say about this chart is that we need an exogenous perturbation, an instrument, to capture the exogenous variation in oil supply news. For that, we use the instrument proposed by Känzig in his paper published in 2021 in the American Economic Review.
The key contributions of this paper: we do find these nonlinearities, and that is not always the case; many people have searched for them and could not find them. Interestingly, we find that this nonlinearity is associated with nonlinear behavior in the labor market variables, in particular what economists call the extensive margin, variables related to vacancies and unemployment, and not necessarily the intensive margin, like hours worked.
Most importantly, and against our prior belief, we did not find any nonlinear transmission to prices. Small and big shocks seem to have the same marginal effect on prices, and the same holds for the financial variables. That means that monetary policy, and the policy variables more generally, also display the same marginal effects: no variation whether the shock is big or small.
Of course, as I said before, we start from the data and find these nonlinearities. But identifying the nonlinearities is one thing; explaining where they come from, identifying the transmission mechanism, is another important thing. And mapping the data into a model is not so easy.
So we start with a very general model that has many nonlinearities, and then we use this predictive prior analysis to identify which components of that model are necessary to explain the empirical results.
What we find, as I said before, is that the search and matching frictions in the labor market give rise to precautionary motives that are a function of the size of the shock. The bigger the shock, the more people exit the market and the more people compete for fewer jobs, which makes people more concerned when a big shock takes place than when a small one does. That is what drives these different effects.
OK. This is the good thing about the paper: it relates to many streams of the literature. For the sake of time, I will not review the literature in great detail. The point is that, because of the nonlinearities in the structural model, we not only provide a narrative on the transmission of the oil shock, but we can also contribute to other questions studied in the literature, like the Shimer critique and endogenous uncertainty variation.
And even though we use representative agent models, we can show that within this class of models precautionary savings can play an important role, which is a key feature of the heterogeneous agent literature.
OK. Now, this is the first very costly exercise where we use MATLAB: we use the parallel cluster to estimate this nonlinear model and to simulate the generalized impulse responses. Estimating the model is time consuming, but producing the impulse responses even more so. And as you will see if you read the paper, we carry out a really large number of exercises.
We take the exhaustive list by Känzig, where he characterizes the relationship of these oil supply shocks with a very large set of indicators. We do the same, and then we pretty much double this list of variables.
Because of the serial dependence in the estimation, we cannot really parallelize the estimation itself, so we create multiple tasks and estimate the models for all these variables using the cluster.
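The task-per-variable setup described here can be sketched in MATLAB roughly as follows. Everything in this sketch is an illustrative placeholder, not the paper's actual code: the variable list is made up, and the anonymous stub stands in for the serial threshold-VAR estimation routine; in the paper's setup the cluster object would come from a MATLAB Job Scheduler profile rather than the default one.

```matlab
% Sketch: one independent task per variable on the cluster.
c = parcluster();                          % default profile; in the paper's
                                           % setup this is the MATLAB Job Scheduler
job = createJob(c);
varNames = {'indpro','vacancies','unemp'}; % hypothetical variable list
estimateOne = @(name) numel(name);         % stub standing in for the serial
                                           % threshold-VAR estimation of one variable
for k = 1:numel(varNames)
    % Each task runs the full (serial) estimation for one variable.
    createTask(job, estimateOne, 1, varNames(k));
end
submit(job);
wait(job);
out = fetchOutputs(job);                   % one result per variable
delete(job);
```

Because each task is independent, the serial dependence inside a single estimation is untouched; the parallelism is purely across variables.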
The model we estimate is known as a threshold proxy VAR: there is a threshold variable that allows the dynamics to differ across regimes. So relative to what Junior presented earlier, these are regime-dependent dynamics, but the evolution from one regime to another is not governed by a stochastic Markov process, as it is in Junior's case.
Once you estimate these models, you can capture the different dynamics, but the shock has not yet been identified. The identification, which is also computationally demanding and again carried out in MATLAB, uses these proxy identification techniques to understand how the exogenous perturbation captured by the instrument is transmitted into the economy.
There are many details about the estimation, perhaps not so interesting for this audience. The important point is that the estimation and the simulation of the impulse responses for a single exercise can take a couple of days.
So if you plan to do this for a large number of variables, like the 50 variables we have here, at two days per exercise it would take a very long time to produce the results.
In terms of the specification of the model, we again follow Känzig. Once you estimate the model, you also estimate this threshold variable, and you can see the evolution of the regimes, which is what this chart shows. This is also related to what Junior showed before, but the process is slightly different.
Once you estimate this threshold, the evolution of the regimes is observed: you can track them directly in the data, so you do not need filters to estimate them.
You can see there are three kinds of regimes: the average regime, small increases, and the very big increases that we saw at the beginning of this talk.
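Since the threshold variable is observed, classifying each date into a regime is just a comparison against the estimated cutoffs. A minimal sketch with made-up numbers (the series and the thresholds here are not the paper's estimates):

```matlab
% Toy regime classification from an observed threshold variable.
rng(1);
z = cumsum(randn(200,1));        % stand-in for the observed threshold variable
tau = [0.5, 2.0];                % stand-in for the estimated thresholds
regime = ones(size(z));          % 1 = average regime
regime(z > tau(1)) = 2;          % 2 = small-increase regime
regime(z > tau(2)) = 3;          % 3 = large-increase regime
```

No filtering step is needed: the regime path falls directly out of the data once the thresholds are in hand.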
Once you estimate these models, this chart tries to identify whether the differences in the responses are significant. We simulate an impulse response using only small shocks, and then we simulate an impulse response using a big shock.
Then we scale the responses. If the transmission is linear, then although we use a five-standard-deviation shock to simulate the impulse responses, once we divide the responses by five they should look very similar to the small-shock responses.
So if there are no significant nonlinearities, the difference between the big-shock response and the normal-shock response must be zero. What the right-hand column shows is whether zero is included in the band around this difference.
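The scale-and-compare logic can be sketched as follows. The "draws" here are synthetic stand-ins for the simulated generalized impulse response distributions, and the percentile band is a crude version of whatever credible band the paper actually uses:

```matlab
% Sketch: rescale the big-shock response by 5 and check whether zero
% lies inside the band of the difference at each horizon.
rng(2);
H = 24; nDraws = 1000;
irfSmall = randn(nDraws,H)*0.1 + 1;      % stand-in for 1-sd GIRF draws
irfBig   = randn(nDraws,H)*0.5 + 7;      % stand-in for 5-sd GIRF draws
diffDraws = irfBig/5 - irfSmall;         % scaled difference, draw by draw
bands = sort(diffDraws);                 % sort each horizon's draws
lo = bands(round(0.05*nDraws),:);        % 5th percentile
hi = bands(round(0.95*nDraws),:);        % 95th percentile
asym = ~(lo <= 0 & hi >= 0);             % horizons where 0 is excluded
```

Under linear transmission the scaled difference is centered at zero; horizons where the band excludes zero flag a significant nonlinearity.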
Where you do not see zero included is in real economic activity. Even though I do not show the prices here, the price responses look the same, so zero is included there. But for world economic activity or US industrial production, you will see that the bigger the shock, the bigger the effect on economic activity, even after you divide the big-shock responses by the relative standard deviation.
OK. We carried out many exercises, and I have not included all of them, but what we find in the simulations is that, as I said before, the nonlinearity seems to be associated with the labor market variables, whereas for prices and similar variables zero is well inside the confidence band when we take the difference between the two distributions.
As the literature on precautionary savings suggests, if this is an important channel in the transmission, you should see the indices that capture risk-related concerns display significant variation. And when we check these indicators, we do find that bigger shocks move these proxies significantly more than smaller shocks.
This comes, as I said, from the model. Although we only report a subset of the variables, we estimate a really large number of indicators, and here I only discuss those that display this big variation. The crucial point is that without using the cluster we would not have been able to produce this large number of results.
And I think this is one of the contributions of this paper: we are able to do this literally for all the variables that people have considered out there.
Again, the use of the cluster, especially the MATLAB Job Scheduler, is what makes this exercise completable in an acceptable time, especially for me: I am an economist at a policy institution, which means time for research is always limited.
So much for the empirical model. You might say: OK, for an empirical model you may need high-tech techniques, the MATLAB Job Scheduler, simulations. But why do you need MATLAB and these parallel computing techniques, which I think are very efficient in MATLAB, for a DSGE model?
Actually, the DSGE exercise is even more costly than the time series exercise because, as I said before, the model we start with is a very general one. That means a very large number of state variables. And in order to capture the nonlinearities, you need to consider higher-order approximations of this model, so you end up with really big matrices that consume a lot of memory.
Just solving the model can take a bit of time, even when you use Dynare.
And here we do not only solve the model. What we want to find, and I will skip over the model itself for the sake of time, is the set of structural parameters for which the model produces responses that obey certain characteristics.
So we take the model, we draw from the prior distribution, we solve the model, and then we ask whether the responses satisfy these restrictions, either on the level of the response or on the difference between big and small responses. That is how we assess whether the model is plausible or not.
But as I said before, for every draw the model needs to be solved. These draws can be parallelized, but the model does not always solve: for some draws there is no solution. So you first need to find the parameters for which the model solves, and then, among those, the ones that also satisfy the restrictions.
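The draw-solve-screen loop, parallelized across draws, might look like the following sketch. The prior, the "solves" check, and the restriction are all toy stand-ins for the Dynare solution step and the paper's response restrictions:

```matlab
% Sketch of the predictive prior screening: draw from the prior, keep draws
% where the model solves, then keep those whose responses pass the restrictions.
rng(3);
nDraws = 1e4;
keep = false(nDraws,1);
theta = zeros(nDraws,2);
parfor i = 1:nDraws
    th = [rand, 0.5 + rand];          % draw from a (toy) prior
    solved = th(1) < 0.95;            % stand-in for "a solution exists"
    ok = false;
    if solved
        resp = th(1)*th(2);           % stand-in for a model response
        ok = resp > 0.3;              % stand-in for the restriction
    end
    keep(i) = ok;
    theta(i,:) = th;
end
accepted = theta(keep,:);             % draws consistent with the stylized facts
```

Each iteration is independent, so `parfor` distributes the expensive solve step across workers; without a parallel pool the loop simply runs serially.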
These restrictions are, for us, how we assess which channels of the model are important for explaining the results.
What I am trying to explain with these tables is that, as I said before, we see the nonlinearities in the labor market variables, but we do not see nonlinearities in prices or in the financial variables. So when we impose these restrictions, we constrain some responses and leave the responses of inflation, the policy rate, and some other variables unconstrained.
If the explanation we propose is the right one, it should have significant effects on the variables we constrain; that is by construction. But the effect on prices and the policy rate is left unconstrained, so if the explanation is consistent, zero should be included in those simulated responses. If it is not included, this cannot be the explanation, and we keep looking. That is what we call the predictive prior analysis.
What you can see here is that when we draw parameters from the blue distribution, we can find another distribution of draws that solve the model and are also consistent with these stylized facts.
For some of these parameters the overlap is quite significant; in some cases there are differences. When there are differences, we go and search for the role of that parameter in the transmission mechanism. This is how we ended up defining which of these channels are important in explaining the differences across big and small shocks.
How much time is left?
You've got a couple more minutes, Kostas.
Oh, OK. So yes. This is what I am trying to say: these are the impulse responses. The blue line is the response after a small shock; the red line is the response after a big shock. Again, all the red responses have been scaled: they have been divided by the increase in the standard deviation.
In a standard linear model, with no significant nonlinearity, the two responses should sit on top of each other. These are the posterior means, so this already gives you an idea that the difference can be significant.
There are some particular responses, the consumption response, the real interest rate response, the response associated with the Hansen-Jagannathan bound, as well as the unemployment risk response, that point in the direction of precautionary savings, which arises only in nonlinear models. This channel does not exist in first-order approximated models.
These are the posterior responses, and people can ask: are they statistically significant? Once we have the distributions, the prior distribution for the one-standard-deviation shock and the scaled distribution for the five-standard-deviation shock, we can take the difference and assess whether zero is included in these responses.
And what you will see here is that for the rates and for inflation, as we said before, zero will be included. So we can conclude that, at least by this way of assessing plausible explanations, the one we propose here is consistent with the stylized facts we saw before.
There are other things as well. I am also using skewness, which Junior closed his presentation with. Once we simulate these distributions, we can calculate skewness measures, higher moments that allow us to see the asymmetries in the distributions, which is another indicator of nonlinearity.
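A skewness measure is a one-liner once the simulated draws are in hand; here the draws are a toy right-skewed distribution rather than the model's actual simulations:

```matlab
% Sample skewness of a simulated response distribution (toy data).
rng(4);
draws = exp(randn(1e5,1));            % right-skewed stand-in distribution
m = mean(draws);
s = std(draws,1);                     % population-style standard deviation
skew = mean((draws - m).^3) / s^3;    % third standardized moment
```

A skewness away from zero in the distribution of scaled responses is one more symptom of asymmetric, and hence nonlinear, transmission.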
So I will stop here. The point I want to make is that, first of all, you need nonlinear models, and you need to go to big models in order to have an explanation that is consistent with a wide range of stylized facts.
But given the current state of computers, you also need these parallel computing techniques to conclude these exercises in a feasible time. Even with them, it took us about a year and a half to finish this paper, and I do not think it is something we could have done without the MATLAB Job Scheduler.
I want to thank Ed, who helped us, because getting Dynare to run with the Job Scheduler is not straightforward. Ed spent a lot of time, actually days, helping us to sort out this issue, and we are really grateful to MathWorks for their support and time on this. I will conclude here. Thank you.