Sunday, January 15, 2017

What's a Macro Model Good For?

What's a macro model? It's a question, and an answer. If it's a good model, the question is well-defined, and the model gives a good answer. Olivier Blanchard has been pondering how we ask questions of models, and the answers we're getting, and he thinks it's useful to divide our models into two classes, each for answering different questions.

First, there are "theory models,"
...aimed at clarifying theoretical issues within a general equilibrium setting. Models in this class should build on a core analytical frame and have a tight theoretical structure. They should be used to think, for example, about the effects of higher required capital ratios for banks, or the effects of public debt management, or the effects of particular forms of unconventional monetary policy. The core frame should be one that is widely accepted as a starting point and that can accommodate additional distortions. In short, it should facilitate the debate among macro theorists.
At the extreme, "theory models" are purist exercises that, for example, Neil Wallace would approve of. Neil has spent his career working with tight, simple economic models. These are models that are amenable to pencil-and-paper methods. Results are easily replicable, and the models are many steps removed from actual data - though, to be at all interesting, they are designed to capture real economic phenomena. Neil has worked with fundamental models of monetary exchange - Samuelson's overlapping generations model, and the Kiyotaki-Wright (JPE 1989) model. He also approves of the Diamond-Dybvig (1983) model of banking. These models give us some insight into why and how we use money, what banks do, and (perhaps) why we have financial crises, but no one is going to estimate the parameters in such models, use them in calibration exercises, or use them at an FOMC meeting to argue why a 25 basis point increase in the fed funds rate target is better than a 50 basis point increase.

But Neil's tastes - as is well known - are extreme. In general, what I think Blanchard means by "theory model" is something we can write up and publish in a good, mainstream economics journal. In modern macro, that's a very broad class of work, including pure theory (no quantitative work), models with estimation (either classical or Bayesian), calibrated models, or some mix. These models are fit to increasingly sophisticated data.

Where I would depart from Blanchard is in asking that theory models have a "core frame...that is widely accepted..." It's of course useful that economists speak a common language that is easily translatable for lay people, but pathbreaking research is by definition not widely accepted. We want to make plenty of allowances for rule-breaking. That said, there are many people who break rules and write crap.

The second class of macro models, according to Blanchard, is the set of "policy models,"
...aimed at analyzing actual macroeconomic policy issues. Models in this class should fit the main characteristics of the data, including dynamics, and allow for policy analysis and counterfactuals. They should be used to think, for example, about the quantitative effects of a slowdown in China on the United States, or the effects of a US fiscal expansion on emerging markets.
This is the class of models that we would use to evaluate a particular policy option, write a memo, and present it at the FOMC meeting. Such models are not what PhD students in economics work on now, and that was already the case when Chris Sims wrote "Macroeconomics and Reality" in 1980:
...though large-scale statistical macroeconomic models exist and are by some criteria successful, a deep vein of skepticism about the value of these models runs through that part of the economics profession not actively engaged in constructing or using them. It is still rare for empirical research in macroeconomics to be planned and executed within the framework of one of the large models.
The "large models" Sims had in mind are the macroeconometric models constructed by Lawrence Klein and others, beginning primarily in the 1960s. The prime example of such models is the FRB/MIT/Penn model, which reflected in part the work of Klein, Ando, and Modigliani, among others, including (I'm sure) many PhD students. There was indeed a time when a satisfactory PhD dissertation in economics could be an estimation of the consumption sector of the FRB/MIT/Penn model.

Old-fashioned large-scale macroeconometric models borrowed their basic structure from static IS/LM models. There were equations for the consumption, investment, government, and foreign sectors. There was money demand and money supply. There were prices and wages. Typically, such models included hundreds of equations, so the job of estimating and running the model was subdivided into manageable tasks, by sector. There was a consumption person, an investment person, a wage person, etc., with further subdivision depending on the degree of disaggregation. My job in 1979-80 at the Bank of Canada was to look after residential investment in the RDXF model of the Canadian economy. No one seemed worried that I didn't spend much time talking to the price people or the mortgage people (who worked on another floor). I looked after 6 equations, and entered add factors when we had to make a forecast.
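For readers who have never worked on one of these models, an "add factor" is just a judgmental residual tacked onto an estimated equation at forecast time. A minimal sketch of the idea, in generic notation (mine, not RDXF's):

```latex
% Forecast from an estimated sector equation, plus a judgmental add factor a_{t+h}
% chosen by the sector specialist at forecast time:
\hat{y}_{t+h} = f(x_{t+h}; \hat{\theta}) + a_{t+h}
```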

What happened to such models? Well, they are alive and well, and one of them lives at the Board of Governors in Washington D.C. - the FRB/US model. FRB/US is used as an explicit input to policy, as we can see in this speech by Janet Yellen at the last Jackson Hole conference:
A recent paper takes a different approach to assessing the FOMC's ability to respond to future recessions by using simulations of the FRB/US model. This analysis begins by asking how the economy would respond to a set of highly adverse shocks if policymakers followed a fairly aggressive policy rule, hypothetically assuming that they can cut the federal funds rate without limit. It then imposes the zero lower bound and asks whether some combination of forward guidance and asset purchases would be sufficient to generate economic conditions at least as good as those that occur under the hypothetical unconstrained policy. In general, the study concludes that, even if the average level of the federal funds rate in the future is only 3 percent, these new tools should be sufficient unless the recession were to be unusually severe and persistent.
So, that's an exercise that looks like what Blanchard has in mind, though he discusses "unconventional monetary policy" as an application of the "theory models."

It's no secret what's in the FRB/US model. The documentation is posted on the Board's web site, so you can look at the equations, and even run it, if you want to. There's some lip service to "optimization" and "expectations" in the documentation for the model, but the basic equations would be recognizable to Lawrence Klein. It's basically a kind of expanded IS/LM/Phillips curve model. And Blanchard seems to have a problem with it. He mentions FRB/US explicitly:
For example, in the main model used by the Federal Reserve, the FRB/US model, the dynamic equations are constrained to be solutions to optimization problems under high order adjustment cost structures. This strikes me as wrongheaded. Actual dynamics probably reflect many factors other than costs of adjustment. And the constraints that are imposed (for example, on the way the past and the expected future enter the equations) have little justification, theoretical or empirical.
Opinions seem to differ on how damning this is. The watershed in macroeconomists' views on large-scale macroeconometric models was of course Lucas's critique paper, which was aimed directly at the failures of such models. In the "Macroeconomics and Reality" paper, Sims sees Lucas's point, but he still thinks large-scale models could be useful, in spite of misidentification.

But, it's not clear that large-scale macroeconometric models are taken that seriously these days, even in policy circles, Janet Yellen aside. While simulation results are presented in policy discussions, it's not clear whether those results are changing any minds. Blanchard recognizes that we need different models to answer different questions, and one danger of the one-size-fits-all large-scale model is its use in applications for which it was not designed. Those who constructed FRB/US certainly did not envision the elements of modern unconventional monetary policy.

A modern macroeconometric approach is to scale down the models and incorporate more theory - more structure. The best-known such models, often called "DSGE" models, are the Smets-Wouters model and the Christiano-Eichenbaum-Evans model. Blanchard isn't so happy with these constructs either.
DSGE modelers, confronted with complex dynamics and the desire to fit the data, have extended the original structure to add, for example, external habit persistence (not just regular, old habit persistence), costs of changing investment (not just costs of changing capital), and indexing of prices (which we do not observe in reality), etc. These changes are entirely ad hoc, do not correspond to any micro evidence, and have made the theoretical structure of the models heavier and more opaque.
Indeed, in attempts to fit DSGE to disaggregated data, the models tend to suffer increasingly from the same problems as the original large-scale macroeconometric models. Chari, Kehoe, and McGrattan, for example, make a convincing case that DSGE models in current use are misidentified and not structural, rendering them useless for policy analysis. This has nothing to do with one's views on intervention vs. non-intervention - it's a question of how best to do policy intervention, once we've decided we're going to do it.

Are there other types of models on the horizon that might represent an improvement? One approach is the HANK model, constructed by Kaplan, Moll, and Violante. This is basically a heterogeneous-agent incomplete-markets model in the style of Aiyagari 1994, with sticky prices and monetary policy as in a Woodford model. That's interesting, but it's not doing much to help us understand how monetary policy works. It's assumed the central bank can dictate interest rates (as in a Woodford model), with no attention to the structure of central bank assets and liabilities, the intermediation done by the central bank, and the nature of central bank asset swaps. Like everyone, I'm a fan of my own work, which is more in the Blanchard "theory model" vein. For recent work on heterogeneous agent models of banking, secured credit, and monetary policy, see my web site.

Blanchard seems pessimistic about the future of policy modeling. In particular, he thinks the theory modelers and the policy modelers should go their own ways. I'd say that's bad advice. If quantitative models have any hope of being taken seriously by policymakers, this would have to come from integrating better theory in such models. Maybe the models should be small. Maybe they should be more specialized. But I don't think setting the policy modelers loose without guidance would be a good idea.

Review of "The Curse of Cash"

This is a review of Ken Rogoff's "The Curse of Cash," forthcoming in Business Economics.

Kenneth Rogoff has written an accessible and informative book on the role of currency in modern economies, and its importance for monetary policy. Rogoff makes some recommendations that would radically change the nature of retail payments and banking in the United States, were policymakers to take them to heart. In particular, Rogoff proposes that, at a minimum, large-denomination Federal Reserve notes be eliminated. If he could have everything his way, currency issue would be reduced to small-denomination coins, and these coins might at times be subjected to an implicit tax so as to support monetary policy regimes with negative interest rates on central bank reserves.

The ideas in “The Curse of Cash” are not new to economists, but Rogoff has done a nice job of articulating these ideas in straightforward terms that non-economists should be able to understand. The book is long enough to cover the territory, but short enough to be interesting.

Why is cash a “curse”? As Rogoff explains, one of currency’s advantages for the user is privacy. But people who want privacy include those who distribute illegal drugs, evade taxation, bribe government officials, and promote terrorism, among other nefarious activities. Currency – and particularly currency in large denominations – is thus an aid to criminals. Indeed, as Rogoff points out, the quantity of U.S. currency in existence is currently about $4,200 per U.S. resident. But Greene et al. (2016) find in surveys that the typical law-abiding consumer holds $207 in cash, on average. This, and the fact that about 80% of the value of U.S. currency outstanding is in $100 notes, suggest that the majority of cash in the U.S. is not used for anything we would characterize as legitimate. Rogoff makes a convincing case that eliminating large-denomination currency would significantly reduce crime and increase tax revenues. One of the nice features of Rogoff’s book is his marshalling of the available evidence to provide ballpark estimates of the effects of the policies he is recommending. The gains from reforming currency issue for the United States appear to be significant – certainly not small potatoes.
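A rough back-of-envelope conveys the magnitudes. Taking the per-resident figure at face value, and assuming a U.S. population of roughly 325 million (my round number, for illustration only):

```latex
% Total currency implied by the per-resident figure, and the share of it
% accounted for by a typical consumer's reported cash holdings:
\$4{,}200 \times 325 \text{ million} \approx \$1.4 \text{ trillion},
\qquad
\frac{\$207}{\$4{,}200} \approx 5\%.
```

On these numbers, the cash that law-abiding consumers report holding accounts for only a small fraction of the total outstanding, which is the point of the comparison above.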

But, what about the costs from making radical changes in our currency supply? There are two primary factors that, as economists, we should be concerned with. The first is implications for government finance. In general, part of the government’s debt takes the form of currency – the government issues interest-bearing debt, which is purchased by the central bank with outside money (currency plus reserves), and then the demand for currency determines how much of the government debt purchased by the central bank is financed with currency. Currency of course has a zero interest rate, so the quantity of currency held by the public represents an interest saving for the government. This interest saving is the difference between the interest rate on the government debt in the central bank’s portfolio and the zero interest rate on currency. This interest saving reverts to the government as a transfer from the central bank. Therefore, if steps are taken (such as the elimination of large-denomination currency) to make currency less desirable, the quantity in circulation will fall, and it will cost the government more to service its debt.
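To put the interest-saving argument in symbols (a generic sketch, not Rogoff's notation): let C denote the stock of currency held by the public, and i the nominal interest rate on the government debt in the central bank's portfolio. Since currency pays zero interest, the annual flow saving to the government is roughly:

```latex
% Annual interest saving from currency finance, and the change in that saving
% if a reform reduces currency demand by \Delta C:
S \approx i \, C, \qquad \Delta S \approx i \, \Delta C.
```

So any reform that shrinks currency demand raises the government's debt-service cost roughly in proportion to the interest rate and the fall in currency holdings.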

The second key cost of partial or complete elimination of currency would be the harm done to the poor, who use currency intensively. Rogoff discusses the possibility of government intervention to ameliorate these costs, including the subsidization of alternative means of payment for the poor (debit cards or stored value cards) or central bank innovation in supplying electronic alternatives to cash. Costs to the poor of withdrawing currency should be taken seriously, particularly given recent experience in India and Venezuela. In both countries, announcements were made that there would be a brief window in which large-denomination currency would remain convertible (to other denominations or reserves) at the central bank, after which convertibility would cease. The old large-denomination notes would then be replaced by new ones. In India and Venezuela, this led to long lines at banks to convert notes, and a partial shutdown of cash-intensive sectors of the economy. Though these currency reforms differed from what Rogoff is suggesting, in that neither reform involved a permanent withdrawal of large-denomination currency, these experiments demonstrate the serious disruption that can result if currency reforms are mismanaged.

Rogoff makes a strong case that the net social benefits from a partial reduction in the use of currency could be large, considering only the positive and negative effects discussed above. I agree with that conclusion. However, a significant portion of Rogoff’s book makes the case that the partial or complete elimination of currency would also have large benefits in terms of monetary policy. In that case, I think, his conclusions are questionable.

Rogoff’s argument as to why currency impedes monetary policy is, for the most part, consistent with conventional New Keynesian (NK) thinking. In NK macroeconomic models (see Woodford 2003, for example) policy acts to mitigate or eliminate the distorting effects of sticky prices, and the zero lower bound (ZLB) on the nominal interest rate is a constraint for monetary policy. Typically, in NK models, the central bank conducts policy according to a Taylor rule, whereby the nominal interest rate increases when the “output gap” (the difference between efficient output and actual output) falls, and increases when inflation rises. But, if we accept the NK framework, there are good reasons to think that the ZLB will be a frequently binding constraint on monetary policy for some time. In particular, the well-documented secular decline in the real rate of return on government debt implies that the average level of nominal interest rates consistent with 2% inflation is much lower than in the past. This then leaves much less latitude for countercyclical interest rate cuts in future recessions.

Given the NK framework, one proposed solution to the ZLB problem is to simply relax the ZLB constraint. How? Miles Kimball of the University of Colorado-Boulder is currently the most prominent proponent of negative nominal interest rate policy as a solution to the ZLB problem. This problem, according to Kimball, exists only because asset-holders can always flee to currency, which bears a nominal interest rate of zero. But, if the government eliminates currency, or devises schemes to make currency sufficiently unattractive, then there is an effective lower bound (ELB) on the nominal interest rate that is lower than zero. Rogoff appears to be on board with this idea, and thinks that reducing the role for currency in the economy could produce large welfare benefits, because of a relaxation in the ZLB constraint.
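To make the ZLB-versus-ELB distinction concrete, here is a minimal sketch - my own stylized example, not a calculation from the book - of a Taylor-type rule truncated at a lower bound. With currency freely available the bound is roughly zero; with currency eliminated or taxed, as Kimball and Rogoff propose, the bound can be pushed below zero:

```python
# Stylized Taylor-type rule with a lower bound on the policy rate.
# All numbers are illustrative assumptions, not estimates from any model.

def policy_rate(inflation, output_gap, r_star=0.5, pi_star=2.0,
                phi_pi=1.5, phi_y=0.5, lower_bound=0.0):
    """Desired nominal rate from a Taylor-type rule, truncated at lower_bound.

    inflation, output_gap, and the result are in percentage points.
    Here output_gap is actual minus efficient output (the opposite sign
    convention from the text above), so a negative gap pulls the rate down.
    """
    desired = r_star + pi_star + phi_pi * (inflation - pi_star) + phi_y * output_gap
    return max(desired, lower_bound)

# A deep-recession scenario: low inflation, large negative output gap.
scenario = {"inflation": 0.5, "output_gap": -4.0}

with_cash = policy_rate(**scenario, lower_bound=0.0)      # zero lower bound
without_cash = policy_rate(**scenario, lower_bound=-3.0)  # effective lower bound below zero

print(f"policy rate with a zero lower bound: {with_cash:.2f}%")
print(f"policy rate with a -3% lower bound:  {without_cash:.2f}%")
```

In this scenario the rule calls for a rate of about -1.75%, so the zero bound binds but a -3% bound does not - which is exactly the extra latitude Rogoff and Kimball are after.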

Rogoff thinks that negative nominal interest rates are an extreme measure, but one that will increase inflation when inflation is deemed to be too low. In this respect, I think Rogoff is wrong, but he’s in good company. A typical central banking misconception is that a reduction in the nominal interest rate will increase inflation. But every macroeconomist knows about the Fisher effect, whereby a reduction in the nominal interest rate reduces inflation – in the long run. What about the short run? In fact, mainstream macroeconomic models, including NK models, have the property that a reduction in the nominal interest rate reduces inflation, even in the short run (see Rupert and Sustek 2016 and Williamson 2016). This is the basis for neo-Fisherism – the idea that central bankers have the sign wrong, i.e., reducing inflation is accomplished with central bank interest rate cuts. This is consistent with empirical evidence. For example, the central banks that have experimented with negative nominal interest rates – the Swedish Riksbank, the European Central Bank, the Swiss National Bank, and the central bank of Denmark – appear to have produced very low (and sometimes negative) inflation.
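For reference, the Fisher relation underlying this argument, in standard textbook notation (nothing here is specific to Rogoff or to the papers just cited):

```latex
% Fisher relation: nominal rate = real rate + expected inflation
i_t = r_t + \pi^{e}_t
```

If the real rate is pinned down by non-monetary factors in the long run, then holding the nominal rate permanently lower must eventually mean lower inflation - the long-run Fisher effect. The neo-Fisherian claim is that standard models, NK models included, deliver the same sign in the short run as well.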

So, I think Rogoff is correct that currency reform would be beneficial on net, and that the gains in terms of economic welfare could be significant. But those gains are unlikely to come from unconventional monetary policy in the form of negative nominal interest rates.

In summary, Ken Rogoff’s The Curse of Cash is an accessible and provocative book – one of the best I have read on economic policy. I do not agree with all of his recommendations, but this is a good start for the policy debate.

References
Greene, C., Schuh, S., and Stavins, J. 2016. “The 2014 Survey of Consumer Payment Choice: Summary Results,” Research Data Report 16-3, Federal Reserve Bank of Boston.
Rupert, P. and Sustek, R. 2016. “On the Mechanics of New Keynesian Models,” working paper.
Williamson, S. 2016. “Neo-Fisherism: A Radical Idea, or the Most Obvious Solution to the Low-Inflation Problem?” The Regional Economist 24, no. 3, 5-9, Federal Reserve Bank of St. Louis.
Woodford, M. 2003. Interest and Prices: Foundations of a Theory of Monetary Policy, Princeton University Press, Princeton, NJ.

Tuesday, January 10, 2017

The Trouble with Paul Romer

Please bear with me, as I'm out of practice. My last blog post was September 8 (seems like yesterday). As a warmup, we're going to get into Paul Romer's "The Trouble with Macroeconomics." This is somewhat old hat (in more ways than one), but this paper got a lot of attention in various media outlets, and still comes up occasionally. The titles of the articles are entertaining in themselves. For example:

"The Rebel Economist Who Blew Up Macroeconomics"
"It's Time to Junk the Flawed Economic Models That Make the World a Dangerous Place"

And who can forget:

"Famous Economist Paul Romer Says Macroeconomics is All Bullshit"

Paul has some rules for bloggers writing about his stuff. These are:
1. If you are interested in a paper, read it.
2. If you want to blog about what is in a paper, read it.
I've taken these to heart, and have indeed read the paper thoroughly. Comprehension, of course, is another matter altogether.

Paul's conclusions are pretty clear. From his abstract:
For more than three decades, macroeconomics has gone backwards...A parallel with string theory from physics hints at a general failure mode of science that is triggered when respect for highly regarded leaders evolves into a deference to authority that displaces objective fact from its position as the ultimate determinant of scientific truth.
So, those are strong charges:

1. The current stock of macroeconomists knows less than did the stock of macroeconomists practicing in 1986 or earlier.
2. A primary reason for this retrogression is that junior macroeconomists are unabashed butt-kissers. They can't bear to criticize their senior colleagues.

If you are a macroeconomist, one question that might occur to you at this point is how Romer views his role in all this macro bullshit. As we know, Romer's early work was highly influential. It's hard to say where the endogenous growth research program would be without him. This paper and this one have more than 22,000 Google scholar citations each. That's enormous. In order to collect that many citations, those papers had to be on many PhD macro reading lists, had to be read carefully, and had to form the basis for much subsequent published research. It's not for nothing that NYU, in the manner of various obituary writers, has a blurb ready to go should Paul ever collect a Nobel. Thus, it would appear that Paul singlehandedly shaped a large piece of modern macroeconomics as we know it, and should take credit for some of the bullshit he claims we are mired in. But, apparently he thinks it was someone else's fault.

A key part of Paul's paper has to do with what he views as flaws in the approach to identification in some modern macroeconomics, which he thinks yields bizarre results:
Macro models now use incredible identifying assumptions to reach bewildering conclusions.
He then sets up a straw man:
To appreciate how strange these conclusions can be, consider this observation, from a paper published in 2010, by a leading macroeconomist: "... although in the interest of disclosure, I must admit that I am myself less than totally convinced of the importance of money outside the case of large inflations."
In a paper that aims to debunk modern macro as pseudoscience, Romer here commits his first sin. Where's the citation, so we can check this quote? Of course, through the miracles of modern search engines, it's not hard to find the paper in question. It's this one, by Jesus Fernandez-Villaverde. Jesus is indeed a "leading macroeconomist," and I highly recommend his paper, if you want to learn something about the technical aspects of modern DSGE modeling.

It's useful to see the quote from Jesus's paper in context. Here's most of the paragraph in which it is contained:
At least since David Hume, economists have believed that they have identified a monetary transmission mechanism from increases in money to short-run fluctuations caused by some form or another of price stickiness. It takes much courage, and more aplomb, to dismiss two and a half centuries of a tradition linking Hume to Woodford and going through Marshall, Keynes, and Friedman. Even those with less of a Burkean mind than mine should feel reluctant to proceed in such a perilous manner. Moreover, after one finishes reading Friedman and Schwartz’s (1971) A Monetary History of the U.S. or slogging through the mountain of Vector Autoregressions (VARs) estimated over 25 years, it must be admitted that those who see money as an important factor in business cycles fluctuations have an impressive empirical case to rely on. Here is not the place to evaluate all these claims (although in the interest of disclosure, I must admit that I am myself less than totally convinced of the importance of money outside the case of large inflations). Suffice it to say that the previous arguments of intellectual tradition and data were a motivation compelling enough for the large number of economists who jumped into the possibility of combining the beauty of DSGE models with the importance of money documented by empirical studies.
Paul wants us to think that the view expressed in the quote - that monetary factors are relatively unimportant for aggregate economic activity - is: (i) a mainstream view; (ii) a standard implication of "DSGE" models; (iii) total nonsense. First, the full paragraph makes clear that, as Jesus says, there is "an impressive empirical case" that supports the view that money is important. Jesus's parenthetical remark (the quote) states his skepticism, without getting into the reasons. Surely Paul should approve, as he thinks we're all too deferential to authority. Second, the New Keynesian research program is primarily engaged with the question of how and why monetary policy matters. We may quarrel about their mechanisms and the conclusions, but surely Mike Woodford can't be accused of arguing that monetary policy is irrelevant.

On the third point, that Jesus's skepticism is nonsense, Paul looks at some data from the Volcker disinflation. Volcker, as we all know, was Fed chair during the disinflationary period of the early 1980s. Here are Paul's conclusions:
The data displayed in Figure 2 suggest a simple causal explanation for the events that is consistent with what the Fed insiders predicted:
1. The Fed aimed for a nominal Fed Funds rate that was roughly 500 basis points higher than the prevailing inflation rate, departing from this goal only during the first recession.
2. High real interest rates decreased output and increased unemployment.
3. The rate of inflation fell, either because the combination of higher unemployment and a bigger output gap caused it to fall or because the Fed’s actions changed expectations.
So, that's a standard narrative that we hear about the Volcker era. What's wrong with it? First, it's incorrect to say that the Fed was "aiming" for a fed funds rate target at the time. It's quite interesting to read the FOMC transcripts from the Volcker era, in the context of modern monetary policy implementation. Volcker and his FOMC colleagues were operating on quantity theory principles. They aimed to reduce inflation by reducing the rate of money growth, without regard for the path of the fed funds rate, and you can see that in the data. Here's the fed funds rate during Volcker's time as Fed chair:
You can see that it's highly volatile leading up to the 1981-82 recession. And here's the growth rate in the money base, along with the unemployment rate:
The latter chart shows us that Volcker did what he said he was going to do - the money growth rate fell from about 10% in 1979 to about 3% in 1981. What about the real interest rate, which Paul focuses on?
I don't know about you, but the "simple causal explanation" in Paul's point #2 above doesn't jump out at me from the last chart. We could also look at the scatter plot of the same data:
Do high real interest rates make the unemployment rate go up? It's hard to conclude that from looking at this picture. But, the second chart (money growth rate and unemployment rate) does lend itself to a causal interpretation - money growth falls, then unemployment goes up, with a lag. That said: (i) The Volcker episode is basically one observation - we need more evidence than that; (ii) Raw correlations are suggestive only, as Paul evidently knows; (iii) If we had used the money growth/unemployment rate correlation observed in the second chart as a guide to monetary control in the post-Volcker era, we would have done really badly. As is well known, the relationship between monetary aggregates and other aggregate variables fell apart post-Volcker. I have never heard anyone in the Fed system mention monetary aggregates in a substantive policy discussion.

Paul's point #3 above reflects a Phillips curve view of the world - a higher "output gap" causes lower inflation, and lower expected inflation causes lower inflation. Though that story may appear to fit the Volcker experience, Phillips curves are notoriously unreliable. Just ask the people who keep predicting that low interest rates will make inflation take off. A more reliable guide is the Fisher effect, which fits this episode nicely. This is a scatter plot of the inflation rate vs. the fed funds rate, from the peak in the fed funds rate in June 1981 until the end of Volcker's term, with the observations connected in temporal sequence:
More often than not, the fed funds rate and the inflation rate are moving in the same direction. An interpretation is that Volcker reduced inflation by reducing the nominal interest rate - that's just Neo-Fisherism 101. Indeed, most of our models have neo-Fisherian properties, even when they have Phillips curve relations built in, as in New Keynesian models.
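For readers who want to look at this themselves, here is a minimal sketch of how a scatter of this sort can be reproduced from FRED data. The series codes FEDFUNDS and CPIAUCSL are standard FRED identifiers; the sample dates and the use of year-over-year CPI inflation are my assumptions, and need not match the chart described above exactly:

```python
# Sketch: fed funds rate vs. CPI inflation over (roughly) the later Volcker years,
# with observations connected in temporal sequence. Requires pandas_datareader
# and matplotlib; data come from FRED.
import pandas as pd
import matplotlib.pyplot as plt
from pandas_datareader import data as pdr

start, end = "1981-06-01", "1987-08-01"  # June 1981 peak to the end of Volcker's term
ffr = pdr.DataReader("FEDFUNDS", "fred", start, end)         # monthly fed funds rate, percent
cpi = pdr.DataReader("CPIAUCSL", "fred", "1980-06-01", end)  # CPI level; start a year early

infl = cpi["CPIAUCSL"].pct_change(12) * 100                  # year-over-year inflation, percent
df = pd.concat([ffr["FEDFUNDS"], infl.rename("inflation")], axis=1).dropna()

fig, ax = plt.subplots()
ax.plot(df["FEDFUNDS"], df["inflation"], marker="o", linewidth=0.8)  # lines preserve time order
ax.set_xlabel("Fed funds rate (%)")
ax.set_ylabel("CPI inflation, year-over-year (%)")
ax.set_title("Fed funds rate vs. inflation, 1981-1987")
plt.show()
```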

What's the conclusion? None of what Paul claims to be obvious concerning the effects of monetary policy during the Volcker era is actually obvious. Many hours have been spent, and many papers and books have been written by economists about the macroeconomic effects of monetary policy. But, given all that work, Jesus is not being unscientific in his skepticism about the importance of monetary factors for aggregate economic activity. It's widely accepted that central banks can and should control inflation, but there is wide dispersion - for good reasons - in views about the quantitative real effects of monetary policy.

So much for the case that modern macro leads to obviously false conclusions. What else does Paul have to say?

1. Paul doesn't like the notion of modeling business cycles as being driven by what he calls "imaginary shocks." Of course, economics is rife with such "imaginary shocks," otherwise known as stochastic disturbances. In macro, one approach - and an arguably very productive one - to building models that can be used to understand the world and to formulate policy is to construct structural models (based on behavior, optimization, and market interaction) that are stochastically disturbed in ways we can analyze and compute. Such models can produce aggregate fluctuations that look like what we actually observe. There are other approaches. For example, we can construct models with intrinsic rather than extrinsic aggregate uncertainty - models with multiple sunspot equilibria. Roger Farmer, for example, is very fond of models with self-fulfilling volatility. These models were popular for a time, particularly in the 1990s, but never caught on with the policy people. With respect to business cycle models with exogenous shocks, I can understand some of Paul's concerns. If I can't measure total factor productivity (TFP) directly, what good does it do to tell me that TFP is causing business cycles? (The standard way TFP is measured and modeled is sketched below.) If I have a model with 17 shocks and various ad hoc bells and whistles - adjustment costs, habit persistence, etc. - is this any better than what Lawrence Klein was doing in the 1960s? But, and I hate to say this again, economics is hard. In contrast to what Paul thinks, I think we have learned a lot in the last 30+ years. If he thinks these models are so bad, he should offer an alternative.
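For concreteness on the TFP point: TFP is not observed directly; it is typically backed out as a Solow residual and then, in quantitative models, treated as an exogenous stochastic process, often an AR(1). This is a standard textbook sketch, not anything specific to the papers Paul is criticizing:

```latex
% Solow residual (with capital share \alpha), and a typical AR(1) law of motion:
\log A_t = \log Y_t - \alpha \log K_t - (1 - \alpha) \log L_t,
\qquad
\log A_{t+1} = \rho \log A_t + \varepsilon_{t+1}, \quad \varepsilon_{t+1} \sim N(0, \sigma^2).
```

The "imaginary shock" complaint, in these terms, is that the innovation is simply whatever is left over after the measured inputs are accounted for.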

2. Paul thinks that identification - in part through the use of Bayesian econometrics - in modern macro models, is obfuscation. I might have been inclined to agree, but if you read Jesus's paper, he has some nice arguments in support of the Bayesian approach. Again, I recommend reading his paper.

3. The remaining sections of the paper, 6 through 10, are mostly free of substance. Paul asserts that macroeconomists are badly behaved in various ways. We're overly self-confident, monolithic, and religious rather than scientific; we ignore parallel research programs and evidence that contradicts our theories. He seems to have it in for Lucas, Sargent, and Prescott, who apparently are engaged in some mutual fraudulent conspiracy. My favorite section is #9, "A Meta Model of Me." This paragraph sums that up:
When the person who says something that seems wrong is a revered leader of a group ... there is a price associated with open disagreement. This price is lower for me because I am no longer an academic. I am a practitioner, by which I mean that I want to put useful knowledge to work. I care little about whether I ever publish again in leading economics journals or receive any professional honor because neither will be of much help to me in achieving my goals. As a result, the standard threats ... do not apply.
This is supposed to convince us that Paul is being up front and honest - he's got nothing to lose, and what could he possibly gain from bad-mouthing the profession? Well, by the same logic, Brian Bazay had nothing to lose and nothing to gain by pushing me down in the school yard. But he did it anyway. In my experience, there is a positive payoff to demonstrating the weakness of another economist's arguments in public - and the bigger the economist, the bigger the payoff. If macroeconomists were actually as wimpy as Paul says we are, I wouldn't want to belong to this group. In general, economists are an argumentative lot - that's what we're known for. As Paul says, he's not an academic any more. Maybe he's been away so long he's forgotten how it works.

So, this is a pseudoscientific paper purporting to be about the pseudoscientific nature of modern macro. It makes bold assertions, and offers little or no evidence to back those assertions up. There's a name for that, but fear of shunning keeps me from going there. :)