Sunday, May 31, 2015

Seasonality, Measurement, and First Quarter U.S. GDP

After the latest revisions to U.S. real GDP by the Bureau of Economic Analysis (BEA), the estimate for real GDP growth in the first quarter of 2015, seasonally adjusted at an annual rate, was -0.7%. So, have we entered a recession or what? In answering that question, we can learn something about how GDP is measured, and how seriously we want to take GDP measurement and the interpretation of quarterly real GDP growth rates.

Real GDP in the United States since the beginning of 2007 looks like this:
If you focus on what's happened since the end of the last recession, in 2009Q2, you'll notice that GDP has not grown in every quarter. Indeed, if we look just at first-quarter growth rates (quarterly growth, seasonally adjusted at annual rates), we get:

2010Q1: 1.7%
2011Q1: -1.5%
2012Q1: 2.3%
2013Q1: 2.7%
2014Q1: -2.1%
2015Q1: -0.7%

So, the average first-quarter growth rate since the end of the recession has been 0.4%, while the average growth rate over that period was 2.2%. This might make you wonder whether there is something funny going on with the seasonal adjustment of the data. The same thought occurred to Glenn Rudebusch et al. at the S.F. Fed, and they showed that, in fact, there is residual seasonality in the real GDP time series. They ran the supposedly-seasonally-adjusted real GDP time series through Census X-12 (a standard statistical seasonal adjustment filter), and came up with an estimate of first-quarter 2015 real GDP growth that was 1.6 percentage points higher than the reported number at the time (before the latest revisions). Apparently the BEA has been made aware of this problem, and is working on it.
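To make the residual-seasonality check concrete, here is a minimal sketch in Python of the kind of diagnostic involved. This is not the BEA's procedure or the SF Fed's X-12 exercise; the function names are placeholders, and a formal check would re-run the adjusted series through Census X-13ARIMA-SEATS (available in Python via statsmodels' x13_arima_analysis, which needs the Census X-13 executable installed). The idea is simply to average annualized quarterly growth rates by calendar quarter: if the adjusted series were free of residual seasonality, the four averages should be roughly equal over a long sample, rather than the first-quarter average falling well short of the others.

```python
import pandas as pd

def annualized_growth(gdp: pd.Series) -> pd.Series:
    """Quarter-over-quarter growth in percent, at an annual rate."""
    return ((gdp / gdp.shift(1)) ** 4 - 1) * 100

def growth_by_quarter(gdp: pd.Series) -> pd.Series:
    """Average annualized growth by calendar quarter (Q1, Q2, Q3, Q4).

    With no residual seasonality, the four averages should be roughly
    equal over a long enough sample."""
    g = annualized_growth(gdp).dropna()
    return g.groupby(g.index.quarter).mean()

# Hypothetical usage, with gdp a quarterly series of real GDP levels
# (e.g., the seasonally adjusted GDPC1 series downloaded from FRED):
# gdp = pd.Series(levels, index=pd.period_range("2009Q3", periods=len(levels), freq="Q"))
# print(growth_by_quarter(gdp))
```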

Why do we have this problem? In the United States, the collection of economic data is a decentralized activity, conducted by several government agencies. The Bureau of Labor Statistics collects labor market and price data (why the consumer price index is a labor statistic I'm not sure), the Fed collects financial data, the Congressional Budget Office collects data on government activity, and the Census Bureau collects demographic data. Finally, the Bureau of Economic Analysis collects data for the National Income and Product Accounts (NIPA), and international trade statistics. It's this hodgepodge of data collection that makes FRED useful (shameless advertising), as FRED puts all of that data together (plus much more!) in a rather user-friendly way.

When the BEA constructs an estimate for real GDP, it uses as inputs data that comes from other sources, including (I think) some of the other government statistical agencies listed in the previous paragraph. Some of the data used by the BEA as inputs has been seasonally adjusted before it even gets to the BEA; some has not been adjusted. What the BEA does is to seasonally adjust all the inputs, and then construct a real GDP estimate. It might be surprising to you, as it is to me, that the resulting GDP estimate could exhibit seasonality. But, behold, it does. Maybe somebody can explain this for us.


As economists, what we would like from the BEA are estimates of real GDP that are both seasonally adjusted and unadjusted, for reasons I'll discuss in what follows. But given how the BEA currently does the data collection, that's impossible, and the BEA reports only seasonally adjusted real GDP. It might help if we had a single centralized federal government statistical agency in the United States, but if you have ever dealt with Statistics Canada, you'll understand that centralization is no guarantee of success. Statistics Canada's CANSIM database is the antithesis of user-friendliness. Go to their website and do a search for anything you might be interested in, and you'll see what I mean. For example, the standard GDP time series go back only to 1981, and the standard labor market time series to 1990. Why? Statisticians in the agency have decided that GDP prior to 1981, for example, is not measured consistently with post-1981 GDP, so they don't splice together the post-1981 data and the pre-1981 data. You have to figure out how to do that yourself if you want a long time series.

Getting back to seasonal adjustment, why do we subject time series data to this procedure? Many time series have a very strong seasonal component, for example, monthly housing starts:
That's not too bad, as we can eyeball the raw time series and roughly discern long-run trends, cyclical movement, and the regular seasonal pattern. But try getting something out of the monthly percentage changes, where the seasonal effects dominate:
However, if we take year-over-year (12-month) percentage changes, we get something that is more eyeball-friendly:
That's not unrelated to what seasonal adjustment does. If you're interested in the details of seasonal adjustment, or want to acquire your own seasonal adjustment software, go to the Census Bureau's website.
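As a small illustration of why year-over-year changes are easier on the eye than month-over-month changes for a strongly seasonal series, here's a sketch, assuming a monthly pandas Series `starts` of not-seasonally-adjusted housing starts (the series name is a placeholder):

```python
import pandas as pd

def monthly_change(starts: pd.Series) -> pd.Series:
    """Month-over-month percent change - dominated by the seasonal pattern."""
    return starts.pct_change(1) * 100

def year_over_year_change(starts: pd.Series) -> pd.Series:
    """12-month percent change - each month is compared with the same month
    a year earlier, so the regular seasonal pattern largely cancels out."""
    return starts.pct_change(12) * 100

# starts = pd.Series(raw_levels, index=pd.period_range("2000-01", periods=len(raw_levels), freq="M"))
# monthly_change(starts)         # noisy; seasonal swings swamp everything else
# year_over_year_change(starts)  # much more eyeball-friendly
```

Seasonal adjustment proper (X-13 and its relatives) is more sophisticated than differencing against the same month a year earlier, but the year-over-year transformation gets at the same idea.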

As macroeconomists or macroeconomic policymakers, if we're making use only of seasonally-adjusted data, we're throwing away information. Basically, what we're getting is the executive summary. I don't think we would like it much, for example, if the government provided us only with detrended time series data - detrended using some complicated procedure we could not reverse-engineer - and wouldn't provide us with the original time series. I have not been able to find much recent work on this, but at one time there was an active (though small) research program on seasonality in macroeconomics. For example, Barsky and Miron wrote about seasonality and business cycles, and Jeff Miron has a whole book on the topic. Is the seasonal business cycle a big deal? It's hard to tell from U.S. data as, again, the BEA does not appear to provide us with the unadjusted data (I searched to no avail). However, Statistics Canada (though they fall down on the job in other ways) publishes adjusted and unadjusted nominal GDP data. Here it is:
So, you can see that the peak-to-trough seasonal decrease in nominal GDP in Canada is frequently on the order of the peak-to-trough decrease in nominal GDP in the last recession. Barsky and Miron found that the seasonal cycle looks much like the regular business cycle we see in seasonally adjusted data, in terms of comovements and relative volatilities. So, the seasonal ups and downs are comparable to cyclical ups and downs, in terms of magnitude and character. Not only that, but these things happen every year. In particular, we get a big expansion every fourth quarter and a big contraction every first quarter. That raises a lot of interesting questions, I think. If seasonal cycles look like regular business cycles, why isn't there more discussion about them? There are macroeconomists who get very exercised about regular business cycles, but they seem to have no interest in seasonal cycles. How come?

Aside from seasonality, what else could be going on with respect to first-quarter 2015 U.S. real GDP? A couple of years ago, the Philadelphia Fed introduced an alternative GDP measure, GDP-Plus. As we teach undergraduates in macro class, there are three ways we could measure GDP: (i) add up value-added at each stage of production, for every good and service produced in the economy; (ii) add up expenditure on all final goods and services produced within U.S. borders; (iii) add up all incomes received for production carried on inside U.S. borders. If there were no measurement error, we would get the same answer in each case, as everything produced is ultimately sold (with some fudging for inventory accumulation), and the revenue from sales has to be distributed as income. In most countries, we don't actually try to follow approach (i), but there are GDP measures that follow approaches (ii) and (iii). In the U.S. we call those GDP (actually a misnomer, as it's really gross domestic expenditure) and GDI (gross domestic income). Over a long period of time, the two measures look like this:
You can see that for long-term economic activity, it makes little difference whether we're looking at GDP or GDI. Over short periods of time, it does make a difference:
You can see in this last chart that the differences in quarterly growth rates can be large. Note in particular the first quarter of 2015, where GDI goes up and GDP goes down.

The GDP-Plus measure, developed by Aruoba et al., uses signal extraction methods to jointly extract from GDI and GDP the available information that is useful in measuring actual GDP - treated as a latent variable. The most recent GDP-Plus observation is +2.03% for first-quarter 2015. So that's another indication that the BEA estimate, at -0.7%, is off.
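For concreteness, here is a bare-bones sketch of the signal-extraction idea, not the actual Aruoba et al. model: treat true output growth as a latent variable that both GDP growth and GDI growth measure with error, and run a Kalman filter to blend the two. The function name, the random-walk assumption for latent growth, and the variance parameters below are all made up for illustration; GDP-Plus is a richer model, with its parameters estimated from the data.

```python
import numpy as np

def latent_growth_filter(gdp_growth, gdi_growth, q=0.5, r_gdp=1.0, r_gdi=1.0):
    """Kalman filter for a latent 'true' growth rate observed twice per quarter.

    State equation:  g_t = g_{t-1} + w_t,  w_t ~ N(0, q)   (random walk, for simplicity)
    Measurement:     [gdp_growth_t, gdi_growth_t] = [1, 1] * g_t + noise,
                     with noise covariance diag(r_gdp, r_gdi).
    The variances q, r_gdp, r_gdi are placeholders, not estimates.
    """
    y = np.column_stack([gdp_growth, gdi_growth])  # T x 2 array of observations
    H = np.array([1.0, 1.0])                       # both series measure the same latent growth
    R = np.diag([r_gdp, r_gdi])                    # measurement-error variances

    g, P = y[0].mean(), 1.0                        # crude initial state and state variance
    filtered = []
    for obs in y:
        g_pred, P_pred = g, P + q                  # predict: random-walk state
        S = P_pred * np.outer(H, H) + R            # 2x2 innovation covariance
        K = P_pred * H @ np.linalg.inv(S)          # 1x2 Kalman gain
        g = g_pred + K @ (obs - H * g_pred)        # update: blend the two noisy measurements
        P = (1.0 - K @ H) * P_pred
        filtered.append(g)
    return np.array(filtered)

# Hypothetical quarterly growth rates (percent, annualized):
# gdp_g = [2.2, -0.7]; gdi_g = [2.5, 1.4]
# latent_growth_filter(gdp_g, gdi_g)
```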

Further, the recent labor market information we have certainly does not look like labor market data for an economy at the beginning of a recession. Employment growth has been good:
Weekly initial claims for unemployment insurance, relative to the labor force, are at an all-time low:
And the unemployment rate is at 5.4%, the same as in February 2005, in the midst of the housing market boom.

So, it would be hard to conclude from the available data that a recession began in the first quarter of 2015 in the U.S. But, I think the more you dig into macroeconomic data, and learn the details of how it is constructed, the more skeptical you will be about what it can tell us. Most macro data is contaminated with a large amount of noise. We can't always trust it to tell us what we want to know, or to help us discriminate among alternative theories.

Thursday, May 21, 2015

Don't get mathy with me, or I'll give you a good shunning.

I had heard Paul Romer is disgruntled, and now that he's written down his thoughts, we can perhaps sort this out. We'll start with his recent blog post on "Protecting the Norms of Science in Economics." Here is Paul's view of science:
My reading of the evidence convinces me that a group of scholars can make progress toward the truth only if they share a commitment to the norms of science, a set of norms that support a reputational equilibrium that encourages trust and that rewards progress toward truth.
Think of truth as existing at the top of a mountain. Once we get to the top of the mountain we'll know it, as we'll be able to see a long way, but while we're climbing the mountain we're in a fog, and we can't see the top of the mountain. But we might be able to discern whether we're moving up, down, or just sitting in one place. Paul thinks that we can't just let scientists run loose to take various paths up the mountain with different kinds of gear, and with different companions of their choosing. According to him, we have to organize this enterprise, and it's absolutely necessary that we write down a set of rules that we will abide by, come hell or high water. And when he says "reputational equilibrium" most economists will know what he has in mind - there will be punishments (imposed by the group) for deviating from the rules.

Paul isn't just throwing this out as a vague idea. He has a specific set of rules in mind. We'll go through them one by one:

1. We trust that what each person says is an honest account of what he or she thinks is true. So, that seems fine. We'll all agree that people are at least trying to be honest.

2. We all recognize that reasonable people can differ and that no one has privileged access to the truth. Sure, people are going to differ. Otherwise it would be no fun. But there's that word "truth" coming up again. I really don't know what truth in science is - if I ever find it the surprise will likely induce cardiac arrest, a stroke, or some such. To my mind, we only have a set of ideas, which we might classify as useful, not-so-useful, and useless. One person's useful idea may be another's useless idea. Particularly in economics, there are many of us who can't be convinced that our works of genius are actually not-so-useful or useless. Truth? Forget it.

3. We take seriously the claims of people who disagree with us. What if the people disagreeing with us are idiots?

4. We are ready to admit that others might be right that each of us might be wrong. At first I thought there was a typo in this one, but I think this is what Paul intended. Sure, sometimes two people are having a fight, and no one else gives a crap.

5. In our discussions, claims that are recognized by a clear plurality of members of the community as being better supported by logic and evidence are the ones that are provisionally accepted as being true. This is absurd of course. We don't take polls to decide scientific merit. Indeed, revolutionary ideas - the ones that take the biggest steps toward Romerian truth - would be the ones that would fail, by this criterion. Scientists, particularly the older ones, become heavily invested in the status quo, and don't want to give it up. In casting their negative votes, they may even be convinced that they are adhering to (1)-(4).

6. In judging what constitutes a “clear plurality,” we put more weight on the views of people who have more status in the community and are recognized as having more expertise on the topic. The problem with (5) of course kicks in with a vengeance here. What community? Recognized how? What expertise relative to what topic? I get no weight because I work at the University of Saskatchewan and not Harvard, or what?

7. We update the status of a member of our community on the basis of his or her contribution to progress toward a clearer understanding of what is true, not on the basis of “unwavering conviction” or “loyalty to the team.” This I suppose is intended to answer my concerns from (6) about what "status" might mean. I guess our status is our ranking in the profession, according to goodness. Do a good thing, and you move up. Do a bad thing, and you move down. Who decides what's good and bad, and how good, and how bad? What prevents a promotion based on "loyalty to the team," disguised as a good thing?

8. We shun, or exclude from the community, someone who reveals that he or she is not committed to these working principles. Well, I would be happy to be shunned by this community - it really doesn't look like it's built for success. Faced with these rules, I'll deviate and find my own like-minded community.

So, to me those rules seem strange, particularly coming from an economist who, like the rest of us, is schooled in the role of incentives, the benefits of decentralization, and the virtues of competition. We might wish that things were more clear-cut in economics, but it's not going to happen. Our models have to be so simple that they are guaranteed to be wrong - they're always inconsistent with some phenomena, hopefully the ones we're not focused on when we construct the model. There can be radically different theories, with different implications, that are all consistent with the empirical evidence we have (which is often not so great). This just reflects the technological limitations of science - our ability to construct and analyze models, and our ability to collect data. Why not just embrace the diversity and move on?

At this point, you may be wondering what's bugging Paul. He must have something specific he's concerned about. To get some ideas about that, read Romer's recent AER Papers and Proceedings paper. This paper is in part about "mathiness." What could that mean? It certainly doesn't mean that using mathematics in economics is a bad thing. Paul seems on board with the idea that mathematical precision lends clarity to our economic ideas, while potentially keeping people honest. Once you write your economic argument down in formal mathematical terms, it's hard to cheat. Math, unlike the English language (or any other language on the planet), is unambiguous.

But, in trying to get our ideas across, math can work against us. A sophisticated mathematical argument may be impenetrable to the average reader. And a rigorous, mathematically-detailed, internally consistent model is not necessarily a good model. The model-builder may have left out details that are essential for addressing the economic problem at hand, or there may be blatant inconsistencies between the model and the empirical regularities that are germane to the problem. Even though Paul gives specific examples, I'm still not entirely clear on "mathiness." As far as I can tell, it's related to the impenetrability problem. A dishonest economist can construct a mathematically sophisticated model, churn out some results without being too careful, claim success, and hope no one notices the errors and inconsistencies. That would certainly be a problem, and I could imagine recommending rejection to the editor if I were asked to referee such a paper, or rejecting the paper if I were in an editorial position.

Is that what's going on in the growth papers that Paul cites in his AER P&P piece? Are the authors guilty of "mathiness" - dishonesty? I'm not convinced. What McGrattan-Prescott, Boldrin-Levine, Lucas, and Lucas-Moll appear to have in common is that they think about the growth process in different ways than Paul does, with somewhat different models. Sometimes they come up with different policy conclusions. Paul seems to think that, after 30 years, more or less, of research on the economics of technological change, we should have arrived at some consensus as to whether, for example, Paul's view of the world, or the views of his competitors, are somehow closer to Romerian truth. His conclusion is that there is something wrong with the winnowing-out process, hence his list of rules, and the attempt to convince us that M-P, B-L, L, L-M, and Piketty-Zucman too, are doing crappy work. I'm inferring that he thinks their papers were published in good places (our usual measure of value-added science) because they are well-connected big shots. It could also be that Paul just doesn't like competition - in more ways than one.