I’ve been thinking about distributional forecasts. In particular I’ve been considering Quantile Autoregressions (QAR) as defined in Koenker and Xiao (2006). There are some handy lecture notes at this link (pdf) that I’ll borrow from in the exercise here.
This is all speculative, but I think this might be a useful way to think about the asymmetry in likely outcomes given the uncertainty inherent in today’s economic forecasts.
Setup
Let’s define the QAR(1) model for quantile \(Q(\tau)\),
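\[
Q_{y_t}(\tau \mid y_{t-1}) = \theta_0(\tau) + \theta_1(\tau)\, y_{t-1},
\]

where \(\theta_0(\tau)\) and \(\theta_1(\tau)\) are the quantile-specific intercept and slope. My notation here roughly follows Koenker and Xiao (2006), who also write the model with the coefficients as functions of a standard uniform random variable.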
Coronavirus Recession
Over on LinkedIn I posted a summary of recent economic talks I have been giving: The Coronavirus Recession. Read the whole thing for the analysis and lots of charts, but I leave off with three key questions:
Recession was here, but is it already gone?
Housing market indicators have rebounded, but will the recovery be sustained?
Aftereffects of the shutdown and a possible second wave of the pandemic remain risks to the outlook; how big are these risks?
As an economist and all-around friend of strictly positive numbers I often use the log function. The natural logarithm, of course; need I specify it? Apparently in certain spreadsheet software you do.
In this note I just wanted to write down a couple of observations about how to generate mean or median forecasts of a variable \(y\) given the model is fit in \(\log(y)\). Of course, I am going to borrow heavily from Rob Hyndman’s blog, where he covers this.
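To make that concrete, here is a minimal sketch on a toy series (not the actual data) of the two back-transformations: exponentiating the log-scale forecast gives a median forecast, while adding half the log-scale residual variance before exponentiating gives (approximately) the mean.

```r
library(forecast)

set.seed(1)
y <- exp(cumsum(rnorm(200, 0.01, 0.05)))  # toy strictly positive series

fit <- Arima(log(y), order = c(1, 0, 0))  # model fit on the log scale
fc  <- forecast(fit, h = 12)

sigma2 <- fit$sigma2                      # residual variance on the log scale

median_fc <- exp(fc$mean)                 # back-transformed point forecast = median
mean_fc   <- exp(fc$mean + sigma2 / 2)    # lognormal adjustment gives (roughly) the mean
```

Equivalently, if you fit the model with `Arima(y, ..., lambda = 0)` you can ask `forecast(fit, h = 12, biasadj = TRUE)` to apply the mean adjustment for you.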
Mortgage interest rates have moved about a percentage point lower from where they were a year ago. The housing market seems to have responded favorably.
On my way into D.C. the other day to do some business, I joined a Twitter exchange originally between @Graykimbrough and Adam Ozimek (@ModeledBehavior) about the effects of Federal Reserve interest policy on the housing market.
It seems unlikely the housing market was slowed by the trade war.
This post is for me and future me, though if you get something out of it, that’s great too. Here I will jot down some notes on something I’ve been thinking about.
Because reasons, I have been interested in Vector Error Correction Models (VECM). I’ve been thinking of the case where you estimate an error correction model, and have available external forecasts for one of the variables. How can you easily construct the conditional forecasts for the VECM in R?
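I haven’t settled on the cleanest way, but here is a minimal sketch of the mechanics I have in mind: write the estimated VECM in its level-VAR form, iterate the forecast recursion, and overwrite the conditioned variable with its external path at each step. The coefficient values below are purely hypothetical placeholders; in practice you might pull them from a fitted model (for example, urca::ca.jo followed by vars::vec2var).

```r
# A minimal sketch, not the only way to do this. Bivariate system where y1 has
# external forecasts; the VECM is written as a level VAR(1) with hypothetical
# coefficients for illustration.
K <- 2
A1 <- matrix(c(0.9, 0.1,
               0.2, 0.7), nrow = K, byrow = TRUE)  # hypothetical lag-1 coefficient matrix
cvec <- c(0.05, 0.02)                              # hypothetical intercepts

h       <- 8
y_last  <- c(1.00, 0.80)                           # last observed values of (y1, y2)
y1_path <- rep(1.02, h)                            # external forecast path for y1

fc <- matrix(NA_real_, nrow = h, ncol = K,
             dimnames = list(NULL, c("y1", "y2")))
y_prev <- y_last
for (i in seq_len(h)) {
  y_hat    <- cvec + A1 %*% y_prev                 # unconditional one-step forecast
  y_hat[1] <- y1_path[i]                           # condition on the external y1 forecast
  fc[i, ]  <- y_hat
  y_prev   <- y_hat                                # feed the conditioned values forward
}
fc
```

This pins the conditioning variable exactly and lets the system dynamics carry that information through to the other variable; it ignores any uncertainty around the external path.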
It’s the time of the year where everybody is dusting off their crystal balls and peering into the future. There’s even still time to send out your “Winter is Coming” newsletter.
Let’s take a step back and look at how forecasts of U.S. macro variables have evolved. Is forecasting still hard?
Last year we looked at historical forecasts of economic conditions in the post forecasting is hard. Let’s update it.
My recent economic and housing market talks (see, for example, here) have been titled: “Will the U.S. housing market get back on track in 2019?”. My general conclusion has been cautiously optimistic. There is enough strength in the broader economy and enough of a tailwind from demographic forces to push the U.S. housing market to modest growth next year.
I still think that’s true, but as I have said in my talks, risks are weighted to the downside.
The year is winding down, and folks are starting to think about next year. With lots of folks reviewing strategic plans and whatnot, there’s increased demand for me to talk about my 2019 economic outlook.
Over on LinkedIn I posted a summary of my most recent chartbook: Will the housing market get back on track in 2019?
Do check it out.
Slidecraft
For these slides I used a mixture of R and Excel.
As an economist with a background in econometrics and forecasting I recognize that predictions are often (usually?) an exercise in futility. Forecasting, after all, is hard. While non-economists have great fun pointing this futility out, many critics miss out on why it’s so hard.
There are at least two reasons why forecasting is hard. The first, the unknown future, is pretty well understood. Empirical regularities with much forecasting power in the social sciences are hard to come by and are rarely stable.
LET’S PICK BACK UP where we left off and think about communicating forecast results. To help guide our thinking, let’s set up a little game.
Basic setup
Like last time we’re going to focus on a situation where a forecaster observes some information about the world and makes an announcement about a future binary outcome. A decision maker observes the forecaster’s announcement and takes a binary action. Then the outcome is realized and the forecaster receives a payoff.
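To fix ideas, here is a tiny simulation of that timing. The signal, the decision maker’s threshold, and the payoff values are all purely hypothetical placeholders and are not part of the setup above; the point is only to make the sequence of moves concrete.

```r
# Hypothetical simulation of the forecaster / decision-maker game.
set.seed(42)

n        <- 1000
signal   <- runif(n)                 # information the forecaster observes
p_true   <- signal                   # assume the signal equals the true probability
announce <- round(p_true, 2)         # forecaster announces a probability

act     <- announce > 0.5            # decision maker acts iff announced prob > 0.5
outcome <- runif(n) < p_true         # binary outcome is realized

# hypothetical payoff: forecaster is rewarded when the action matches the outcome
payoff <- ifelse(act == outcome, 1, -1)
mean(payoff)
```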
LAST WEEK IN THE WALL STREET JOURNAL an article (link) discussed how pundits can strategically make probabilistic forecasts. It seems 40% is a sort of magic number: it’s high enough that if the event comes true you can claim credit as a forecaster, but if it doesn’t happen, you can point out that you gave it less than 50/50 odds.
Since I’m often asked to make forecasts I’m interested in this problem.
BACK WE GO INTO THE VASTY DEEP. LAST TIME we introduced the idea of using dynamic model averaging to forecast recessions. I was so excited about the new approach that I didn’t take the time to break down what was going on with it. In this post we’ll look more closely at what’s happening with the dma package when we try to forecast recessions.
Per usual we’ll do it with R and I’ll include code so you can follow along.
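As a preview, here is a rough sketch of the kind of call involved, assuming a hypothetical monthly data frame `df` with a 0/1 recession flag and two candidate predictors (a term spread and payroll growth); the component names of the fitted object are as I understand the package and are worth confirming with `str(fit)`.

```r
library(dma)

x <- as.matrix(df[, c("spread", "payrolls")])   # candidate predictors (hypothetical df)
y <- df$recession                               # 0/1 recession indicator

# enumerate candidate models: each row flags which predictors a model includes
models.which <- rbind(c(1, 0),
                      c(0, 1),
                      c(1, 1))

fit <- logistic.dma(x, y, models.which,
                    lambda = 0.99,   # forgetting factor for coefficients
                    alpha  = 0.99,   # forgetting factor for model probabilities
                    autotune = TRUE)

# averaged recession probabilities and posterior model probabilities,
# as I understand the returned object; check str(fit) to confirm names
head(fit$yhatdma)
head(fit$pmp)
```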
HERE THE LITERATURE IS VASTY DEEP. In this post we’ll dip our toes, ever so slightly, into the dark waters of macroeconometric forecasting. I’ve been studying some techniques and want to try them out. I’m still at the learning and exploring stage, but let’s do it together.
In this post we’ll conduct an exercise in forecasting U.S. recessions using several approaches. Per usual we’ll do it with R and I’ll include code so you can follow along.
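To give a flavor of the simplest of those approaches, here is a hedged sketch of a probit of a recession indicator on the lagged term spread. The data frame `df`, with columns `recession` (0/1) and `spread`, is a hypothetical stand-in for whatever monthly series you have on hand.

```r
# Simple benchmark: probit of recession on the term spread 12 months earlier.
df$spread_lag12 <- dplyr::lag(df$spread, 12)

probit_fit <- glm(recession ~ spread_lag12,
                  family = binomial(link = "probit"),
                  data = df)

df$prob_recession <- predict(probit_fit, newdata = df, type = "response")
summary(probit_fit)
```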
Tags: R, statistics, dataviz, housing, mortgage, data