Methodenstreit
Wikipedia entry - https://en.wikipedia.org/wiki/Methodenstreit
Methodenstreit (German for "method dispute") refers, in intellectual history beyond German-language discourse, to an economics controversy that began in the 1880s and persisted for more than a decade, between that field's Austrian
School and the (German) Historical School. The debate concerned the place of
general theory in social science and the use of history in explaining the
dynamics of human action. It also touched on policy and political issues,
including the roles of the individual and state. Nevertheless, methodological
concerns were uppermost and some early members of the Austrian School also
defended a form of welfare state, as prominently advocated by the Historical
School. When the debate opened, Carl Menger developed the Austrian School's
standpoint, and Gustav von Schmoller defended the approach of the Historical
School. (In German-speaking countries, the original of this Germanism is not specific to this one controversy, which is likely to be specified as Methodenstreit der Nationalökonomie, i.e. the "Methodenstreit of economics".)
Background
The Historical School contended that economists could develop new
and better social laws from the collection and study of statistics and
historical materials, and distrusted theories not derived from historical
experience. Thus, the German Historical School focused on specific dynamic
institutions as the largest variable in changes in political economy. The
Historical School were themselves reacting against materialist determinism, the
idea that human action could, and would (once science advanced enough), be
explained as physical and chemical reactions.[1] The Austrian School, beginning with the work of Carl Menger in the 1860s (in Grundsätze der Volkswirtschaftslehre, English title: Principles of Economics), argued against this: economics was the work of philosophical logic and could only ever be about developing rules from first principles, since it saw human motives and social interaction as far too complex to be amenable to statistical analysis, and it purported instead to deduce universally valid precepts from human action.
Menger and the German Historical School
The dispute began when Carl Menger attacked Schmoller and
the German Historical School, in his 1883 book Investigations into the Method
of the Social Sciences, with Special Reference to Political Economics
(Untersuchungen über die Methode der Socialwissenschaften, und der
politischen Ökonomie insbesondere). Menger thought the best method of
studying economics was through reason and finding general theories which
applied to broad areas. Menger, as did the Austrians and other neo-classical
economists, concentrated upon the subjective, atomistic nature of economics. He emphasized subjective factors, holding that the grounds for economics were built upon self-interest, evaluation on the margin, and incomplete knowledge, and that aggregative, collective ideas could not have an adequate foundation unless they rested upon individual components. The direct attack on the German Historical School led Schmoller to respond quickly with an unfavourable and quite hostile
review of Menger's book.[2] Menger accepted the challenge and replied in a
passionate pamphlet,[3] written in the form of letters to a friend, in which he
(according to Hayek) "ruthlessly demolished Schmoller's position".
The encounter between the masters was soon imitated by their disciples. A
degree of hostility not often equaled in scientific controversy developed.[4]
Consequences
The term "Austrian school of economics" came into
existence as a result of the Methodenstreit, when Schmoller used it in an
unfavourable review of one of Menger's later books, intending to convey an
impression of backwardness and obscurantism of Austria compared to the more
modern Prussians. A serious consequence of the hostile debate was that
Schmoller went so far as to declare publicly that members of the
"abstract" school were unfit to fill a teaching position in a German
university, and his influence was quite sufficient to make this equivalent to a
complete exclusion of all adherents to Menger's doctrines from academic
positions in Germany. The result was that, even thirty years after the close of the controversy, Germany remained less affected by the new ideas then spreading elsewhere than any other academically important country in the world.[5]
Related rivalry
Another famous and somewhat related
Methodenstreit in the 1890s pitted the German social and economic historian
Karl Lamprecht against several prominent political historians, particularly
Friedrich Meinecke, over Lamprecht's use of social scientific and psychological
methods in his research. The dispute resulted in Lamprecht and his work being
widely discredited among academic German historians. As a consequence, German
historians pursued more political and ideological historical questions, while
Lamprecht's style of interdisciplinary history was largely abandoned.
Lamprecht's work remained influential elsewhere, however, particularly in the
tradition of the French Annales School.
History of macroeconomic thought
Macroeconomic theory has its origins in the study of business cycles and
monetary theory.[1][2] In general, early theorists believed monetary factors
could not affect real factors such as real output. John Maynard Keynes attacked
some of these "classical" theories and produced a general theory that
described the whole economy in terms of aggregates rather than individual,
microeconomic parts. Attempting to explain unemployment and recessions, he
noticed the tendency for people and businesses to hoard cash and avoid
investment during a recession. He argued that this invalidated the assumptions
of classical economists who thought that markets always clear, leaving no
surplus of goods and no willing labor left idle.[3] The generation of
economists that followed Keynes synthesized his theory with neoclassical
microeconomics to form the neoclassical synthesis. Although Keynesian theory
originally omitted an explanation of price levels and inflation, later
Keynesians adopted the Phillips curve to model price-level changes. Some
Keynesians opposed the synthesis method of combining Keynes's theory with an
equilibrium system and advocated disequilibrium models instead. Monetarists,
led by Milton Friedman, adopted some Keynesian ideas, such as the importance of
the demand for money, but argued that Keynesians ignored the role of money
supply in inflation.[4] Robert Lucas and other new classical macroeconomists
criticized Keynesian models that did not work under rational expectations.
Lucas also argued that Keynesian empirical models would not be as stable as
models based on microeconomic foundations. The new classical school culminated
in real business cycle theory (RBC). Like early classical economic models, RBC
models assumed that markets clear and that business cycles are driven by
changes in technology and supply, not demand. New Keynesians tried to address
many of the criticisms leveled by Lucas and other new classical economists
against Neo-Keynesians. New Keynesians adopted rational expectations and built
models with microfoundations of sticky prices that suggested recessions could
still be explained by demand factors because rigidities stop prices from
falling to a market-clearing level, leaving a surplus of goods and labor. The
new neoclassical synthesis combined elements of both new classical and new
Keynesian macroeconomics into a consensus. Other economists avoided the new
classical and new Keynesian debate on short-term dynamics and developed the new
growth theories of long-run economic growth.[5] The Great Recession led to a
retrospective on the state of the field and some popular attention turned
toward heterodox economics.
Macroeconomics descends from two areas of research: business cycle theory and
monetary theory.[1][2] Monetary theory dates back to the 16th century and the
work of Martín de Azpilcueta, while business cycle analysis dates from
the mid-19th century.[2]
Business cycle theory
Beginning with William Stanley Jevons
and Clément Juglar in the 1860s,[8] economists attempted to explain the
cycles of frequent, violent shifts in economic activity.[9] A key milestone in
this endeavor was the foundation of the U.S. National Bureau of Economic
Research by Wesley Mitchell in 1920. This marked the beginning of a boom in
atheoretical, statistical models of economic fluctuation (models based on
cycles and trends instead of economic theory) that led to the discovery of
apparently regular economic patterns like the Kuznets wave.[10] Other
economists focused more on theory in their business cycle analysis. Most
business cycle theories focused on a single factor,[9] such as monetary policy
or the impact of weather on the largely agricultural economies of the time.[8]
Although business cycle theory was well established by the 1920s, work by
theorists such as Dennis Robertson and Ralph Hawtrey had little impact on
public policy.[11] Their partial equilibrium theories could not capture general
equilibrium, where markets interact with each other; in particular, early
business cycle theories treated goods markets and financial markets
separately.[9] Research in these areas used microeconomic methods to explain
employment, price level, and interest rates.[12]
Monetary theory
Initially, the
relationship between price level and output was explained by the quantity
theory of money; David Hume had presented such a theory in his 1752 work Of
Money (Essays, Moral, Political, and Literary, Part II, Essay III).[13]
Quantity theory viewed the entire economy through Say's law, which stated that
whatever is supplied to the market will be sold; in short, that markets
always clear.[3] In this view, money is neutral and cannot impact the real
factors in an economy like output levels. This was consistent with the
classical dichotomy view that real aspects of the economy and nominal factors,
such as price levels and money supply, can be considered independent from one
another.[14] For example, adding more money to an economy would be expected
only to raise prices, not to create more goods.[15] The quantity theory of
money dominated macroeconomic theory until the 1930s. Two versions were
particularly influential, one developed by Irving Fisher in works that included
his 1911 The Purchasing Power of Money and another by Cambridge economists over
the course of the early 20th century.[13] Fisher's version of the quantity
theory can be expressed by holding money velocity (the frequency with which a
given piece of currency is used in transactions) (V) and real income (Q)
constant and allowing money supply (M) and the price level (P) to vary in the
equation of exchange:[16]

M · V = P · Q

Most classical theories, including Fisher's, held that
velocity was stable and independent of economic activity.[17] Cambridge
economists, such as John Maynard Keynes, began to challenge this assumption.
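Fisher's version of the quantity theory can be illustrated with a short numerical sketch; the figures below are invented for illustration, not drawn from Fisher:

```python
# Fisher's equation of exchange, M * V = P * Q, with velocity V and
# real output Q held constant (per Fisher) so that the price level P
# moves one-for-one with the money supply M. Illustrative numbers only.
def price_level(money_supply, velocity, real_output):
    """Solve M * V = P * Q for the price level P."""
    return money_supply * velocity / real_output

V, Q = 4.0, 1000.0               # assumed constant
P1 = price_level(500.0, V, Q)    # M = 500 gives P = 2.0
P2 = price_level(1000.0, V, Q)   # doubling M to 1000 doubles P to 4.0
```

With V and Q fixed, the sketch reproduces the classical conclusion in the text: adding more money only raises prices.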
They developed the Cambridge cash-balance theory, which looked at money demand
and how it impacted the economy. The Cambridge theory did not assume that money
demand and supply were always at equilibrium, and it accounted for people
holding more cash when the economy sagged. By factoring in the value of holding
cash, the Cambridge economists took significant steps toward the concept of
liquidity preference that Keynes would later develop.[18] Cambridge theory
argued that people hold money for two reasons: to facilitate transactions and
to maintain liquidity. In later work, Keynes added a third motive, speculation,
to his liquidity preference theory and built on it to create his general
theory.[19] In 1898, Knut Wicksell proposed a monetary theory centered on
interest rates. His analysis used two rates: the market interest rate,
determined by the banking system, and the real or "natural" interest
rate, determined by the rate of return on capital.[20] In Wicksell's theory,
cumulative inflation will occur when technical innovation causes the natural
rate to rise or when the banking system allows the market rate to fall.
Cumulative deflation occurs under the opposite conditions, which cause the market rate to rise above the natural rate.[2] Wicksell's theory did not produce a direct
relationship between the quantity of money and price level. According to
Wicksell, money would be created endogenously, without an increase in quantity
of hard currency, as long as the natural rate exceeded the market interest rate. In
these conditions, borrowers turn a profit and deposit cash into bank reserves,
which expands money supply. This can lead to a cumulative process where
inflation increases continuously without an expansion in the monetary base.
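Wicksell's cumulative process can be sketched as a toy simulation; the rates and the one-for-one pass-through from the rate gap to prices are assumptions for illustration, not Wicksell's own numbers:

```python
# Toy model of Wicksell's cumulative process. Illustrative assumption:
# each period the price level rises in proportion to the gap between
# the natural rate and the market rate, as bank credit expands the
# money supply endogenously.
def cumulative_process(natural_rate, market_rate, periods):
    """Return the price-level path under a constant rate gap."""
    price_level = 1.0
    path = [price_level]
    for _ in range(periods):
        gap = natural_rate - market_rate
        if gap > 0:  # borrowing is profitable, so credit and money expand
            price_level *= 1.0 + gap
        path.append(price_level)
    return path

# A natural rate above the market rate produces continuous inflation;
# equal rates leave the price level unchanged.
inflationary = cumulative_process(natural_rate=0.05, market_rate=0.03, periods=5)
neutral = cumulative_process(natural_rate=0.03, market_rate=0.03, periods=5)
```

The point of the sketch is the one made in the text: inflation continues as long as the gap persists, without any expansion in the monetary base.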
Wicksell's work influenced Keynes and the Swedish economists of the Stockholm
School.[21]
Keynes's General Theory
Keynes (right) with Harry Dexter White, assistant secretary of the U.S. Treasury, at a 1946 International Monetary Fund meeting.
Modern macroeconomics can be said to have begun with
Keynes and the publication of his book The General Theory of Employment,
Interest and Money in 1936.[22] Keynes expanded on the concept of liquidity
preferences and built a general theory of how the economy worked. Keynes's theory brought together monetary and real economic factors for the first time,[9] explained unemployment, and suggested policy for achieving economic
stability.[23] Keynes contended that economic output is positively correlated
with money velocity.[24] He explained the relationship via changing liquidity
preferences:[25] people increase their money holdings during times of economic
difficulty by reducing their spending, which further slows the economy. This
paradox of thrift claimed that individual attempts to survive a downturn only
worsen it. When the demand for money increases, money velocity slows. A
slowdown in economic activities means markets might not clear, leaving excess
goods to waste and capacity to idle.[26] Turning the quantity theory on its
head, Keynes argued that market changes shift quantities rather than
prices.[27] Keynes replaced the assumption of stable velocity with one of a
fixed price-level. If spending falls and prices do not, the surplus of goods
reduces the need for workers and increases unemployment.[28] Classical
economists had difficulty explaining involuntary unemployment and recessions
because they applied Say's Law to the labor market and expected that all those
willing to work at the prevailing wage would be employed.[29] In Keynes's
model, employment and output are driven by aggregate demand, the sum of
consumption and investment. Since consumption remains stable, most fluctuations
in aggregate demand stem from investment, which is driven by many factors
including expectations, "animal spirits", and interest rates.[30]
Keynes argued that fiscal policy could compensate for this volatility. During
downturns, government could increase spending to purchase excess goods and
employ idle labor.[31] Moreover, a multiplier effect increases the effect of
this direct spending since newly employed workers would spend their income,
which would percolate through the economy, while firms would invest to respond
to this increase in demand.[25] Keynes's prescription for strong public
investment had ties to his interest in uncertainty.[32] Keynes had given a
unique perspective on statistical inference in A Treatise on Probability,
written in 1921, years before his major economic works.[33] Keynes thought
strong public investment and fiscal policy would counter the negative impacts
the uncertainty of economic fluctuations can have on the economy. While
Keynes's successors paid little attention to the probabilistic parts of his
work, uncertainty may have played a central part in the investment and
liquidity-preference aspects of General Theory.[32] The exact meaning of
Keynes's work has been long debated. Even the interpretation of Keynes's policy
prescription for unemployment, one of the more explicit parts of General
Theory, has been the subject of debates. Economists and scholars debate whether
Keynes intended his advice to be a major policy shift to address a serious
problem or a moderately conservative solution to deal with a minor issue.[34]
Keynes's successors
Keynes's successors debated the exact formulations, mechanisms, and consequences of the Keynesian model. One group emerged representing the "orthodox" interpretation of Keynes; they combined
classical microeconomics with Keynesian thought to produce the
"neoclassical synthesis"[35] that dominated economics from the 1940s
until the early 1970s.[36] Two camps of Keynesians were critical of this
synthesis interpretation of Keynes. One group focused on the disequilibrium
aspects of Keynes's work, while the other took a fundamentalist stance on
Keynes and began the heterodox post-Keynesian tradition.[37]
Neoclassical synthesis
Main article: Neoclassical synthesis
The generation of economists
that followed Keynes, the neo-Keynesians, created the "neoclassical
synthesis" by combining Keynes's macroeconomics with neoclassical
microeconomics.[38] Neo-Keynesians dealt with two microeconomic issues: first,
providing foundations for aspects of Keynesian theory such as consumption and
investment, and, second, combining Keynesian macroeconomics with general
equilibrium theory.[39] (In general equilibrium theory, individual markets
interact with one another and an equilibrium price exists if there is perfect
competition, no externalities, and perfect information.)[35][40] Paul
Samuelson's Foundations of Economic Analysis (1947) provided much of the
microeconomic basis for the synthesis.[38] Samuelson's work set the pattern for
the methodology used by neo-Keynesians: economic theories expressed in formal,
mathematical models.[41] While Keynes's theories prevailed in this period, his
successors largely abandoned his informal methodology in favor of
Samuelson's.[42] By the mid 1950s, the vast majority of economists had ceased
debating Keynesianism and accepted the synthesis view;[43] however, room for
disagreement remained.[44] The synthesis attributed problems with market
clearing to sticky prices that failed to adjust to changes in supply and
demand.[45] Another group of Keynesians focused on disequilibrium economics and
tried to reconcile the concept of equilibrium with the absence of market
clearing.[46]
Neo-Keynesian models
Main article: Neo-Keynesian economics
IS/LM chart with an upward shift in the IS curve. The chart illustrates how a shift in the IS
curve, caused by factors like increased government spending or private
investment, will lead to higher output (Y) and increased interest rates (i). In
1937 John Hicks[a] published an article that incorporated Keynes's thought into
a general equilibrium framework[47] where the markets for goods and money met
in an overall equilibrium.[48] Hicks's IS/LM (Investment-Savings/Liquidity
preference-Money supply) model became the basis for decades of theorizing and
policy analysis into the 1960s.[49] The model represents the goods market with
the IS curve, a set of points representing equilibrium in investment and
savings. The money market equilibrium is represented with the LM curve, a set
of points representing the equilibrium in supply and demand for money. The
intersection of the curves identifies an aggregate equilibrium in the
economy[50] where there are unique equilibrium values for interest rates and
economic output.[51] The IS/LM model focused on interest rates as the
"monetary transmission mechanism," the channel through which money
supply affects real variables like aggregate demand and employment. A decrease
in money supply would lead to higher interest rates, which reduce investment
and thereby lower output throughout the economy.[52] Other economists built on
the IS/LM framework. Notably, in 1944, Franco Modigliani[b] added a labor
market. Modigliani's model represented the economy as a system with general
equilibrium across the interconnected markets for labor, finance, and
goods,[47] and it explained unemployment with rigid nominal wages.[53] Growth
had been of interest to 18th-century classical economists like Adam Smith, but
work tapered off during the 19th and early 20th century marginalist revolution
when researchers focused on microeconomics.[54] The study of growth revived
when neo-Keynesians Roy Harrod and Evsey Domar independently developed the
Harrod-Domar model,[55] an extension of Keynes's theory to the long run,
an area Keynes had not looked at himself.[56] Their models combined Keynes's
multiplier with an accelerator model of investment,[57] and produced the simple
result that growth equaled the savings rate divided by the capital output ratio
(the amount of capital divided by the amount of output).[58] The
Harrod-Domar model dominated growth theory until Robert Solow[c] and
Trevor Swan[d] independently developed neoclassical growth models in 1956.[55]
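The Harrod-Domar result (growth equals the savings rate divided by the capital-output ratio) can be checked with simple arithmetic; the numbers here are invented for illustration:

```python
# Harrod-Domar warranted growth rate: g = s / v, where s is the savings
# rate and v the capital-output ratio (capital divided by output).
# Illustrative numbers only.
def harrod_domar_growth(savings_rate, capital_output_ratio):
    return savings_rate / capital_output_ratio

# With 12% savings and a capital-output ratio of 3, warranted growth is 4%.
g = harrod_domar_growth(savings_rate=0.12, capital_output_ratio=3.0)
```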
Solow and Swan produced a more empirically appealing model with "balanced
growth" based on the substitution of labor and capital in production.[59]
Solow and Swan suggested that increased savings could only temporarily increase
growth, and only technological improvements could increase growth in the
long run.[60] After Solow and Swan, growth research tapered off with little or
no research on growth from 1970 until 1985.[55] Economists incorporated the
theoretical work from the synthesis into large-scale macroeconometric models
that combined individual equations for factors such as consumption, investment,
and money demand[61] with empirically observed data.[62] This line of research
reached its height with the MIT-Penn-Social Science Research Council (MPS)
model developed by Modigliani and his collaborators.[61] MPS combined IS/LM
with other aspects of the synthesis including the neoclassical growth model[63]
and the Phillips curve relation between inflation and output.[64] Both
large-scale models and the Phillips curve became targets for critics of the
synthesis.
Phillips curve
Main article: Phillips curve
The US economy in the 1960s followed the Phillips curve, a correlation between inflation and unemployment.
Keynes did not lay out an explicit theory of price
level.[65] Early Keynesian models assumed wage and other price levels were
fixed.[66] These assumptions caused little concern in the 1950s when inflation
was stable, but by the mid-1960s inflation increased and became an issue for
macroeconomic models.[67] In 1958 A.W. Phillips[e] laid the basis for a price
level theory when he made the empirical observation that inflation and
unemployment seemed to be inversely related. In 1960 Richard Lipsey[f] provided
the first theoretical explanation of this correlation. Generally Keynesian
explanations of the curve held that excess demand drove high inflation and low
unemployment while an output gap raised unemployment and depressed prices.[68]
In the late 1960s and early 1970s, the Phillips curve faced attacks on both
empirical and theoretical fronts. The presumed trade-off between output and
inflation represented by the curve was the weakest part of the Keynesian
system.[69]
Disequilibrium macroeconomics
Main article: Disequilibrium macroeconomics
Despite its prevalence, the neoclassical synthesis had its
Keynesian critics. A strain of disequilibrium or "non-Walrasian"
theory developed[70] that criticized the synthesis for apparent contradictions
in allowing disequilibrium phenomena, especially involuntary unemployment, to
be modeled in equilibrium models.[71] Moreover, they argued, the presence of
disequilibrium in one market must be associated with disequilibrium in another,
so involuntary unemployment had to be tied to an excess supply in the goods
market. Many see Don Patinkin's work as the first in the disequilibrium
vein.[70] Robert W. Clower (1965)[g] introduced his "dual-decision
hypothesis" that a person in a market may determine what he wants to buy,
but is ultimately limited in how much he can buy based on how much he can
sell.[72] Clower and Axel Leijonhufvud (1968)[h] argued that disequilibrium
formed a fundamental part of Keynes's theory and deserved greater
attention.[73] Robert Barro and Herschel Grossman formulated general
disequilibrium models[i] in which individual markets were locked into prices
before there was a general equilibrium. These markets produced "false
prices" resulting in disequilibrium.[74] Soon after the work of Barro and
Grossman, disequilibrium models fell out of favor in the United
States,[75][76][77] and Barro abandoned Keynesianism and adopted new classical,
market clearing hypotheses.[78]
Diagram based on Malinvaud's typology of unemployment: curves for equilibrium in the goods and labor markets given wage and price levels, with regions for Keynesian unemployment, classical unemployment, repressed inflation, and underconsumption. Walrasian equilibrium is achieved when both markets are at equilibrium.
According to Malinvaud the
economy is usually in a state of either Keynesian unemployment, with excess
supply of goods and labor, or classical unemployment, with excess supply of
labor and excess demand for goods.[79] While American economists quickly
abandoned disequilibrium models, European economists were more open to models
without market clearing.[80] Europeans such as Edmond Malinvaud and Jacques
Drèze expanded on the disequilibrium tradition and worked to explain
price rigidity instead of simply assuming it.[81] Malinvaud (1977)[j] used
disequilibrium analysis to develop a theory of unemployment.[82] He argued that
disequilibrium in the labor and goods markets could lead to rationing of goods
and labor, leading to unemployment.[82] Malinvaud adopted a fixprice framework and argued that pricing would be rigid in modern industrial economies compared with the relatively flexible pricing systems for the raw goods that dominate agricultural economies.[82] Prices are fixed and only quantities adjust.[79] Malinvaud
considers an equilibrium state in classical and Keynesian unemployment as most
likely.[83] Work in the neoclassical tradition is confined to a special case of Malinvaud's typology, the Walrasian equilibrium, which in Malinvaud's theory is almost impossible to reach given the nature of industrial pricing.[83]
Monetarism
Main article: Monetarism
Milton Friedman developed an alternative to Keynesian macroeconomics eventually
labeled monetarism. Generally monetarism is the idea that the supply of money
matters for the macroeconomy.[84] When monetarism emerged in the 1950s and
1960s, Keynesians neglected the role money played in inflation and the business
cycle, and monetarism directly challenged those points.[4]
Criticizing and augmenting the Phillips curve
The Phillips curve appeared to reflect a clear,
inverse relationship between inflation and output. The curve broke down in the
1970s as economies suffered simultaneous economic stagnation and inflation
known as stagflation. The empirical implosion of the Phillips curve followed
attacks mounted on theoretical grounds by Friedman and Edmund Phelps. Phelps,
although not a monetarist, argued that only unexpected inflation or deflation
impacted employment. Variations of Phelps's "expectations-augmented
Phillips curve" became standard tools. Friedman and Phelps used models
with no long-run trade-off between inflation and unemployment. Instead of the
Phillips curve they used models based on the natural rate of unemployment where
expansionary monetary policy can only temporarily shift unemployment below the
natural rate. Eventually, firms will adjust their prices and wages for
inflation based on real factors, ignoring nominal changes from monetary policy.
The expansionary boost will be wiped out.[85]
Importance of money
Anna Schwartz
collaborated with Friedman to produce one of monetarism's major works, A
Monetary History of the United States (1963), which linked money supply to the
business cycle.[86] The Keynesians of the 1950s and 60s had adopted the view
that monetary policy does not impact aggregate output or the business cycle
based on evidence that, during the Great Depression, interest rates had been
extremely low but output remained depressed.[87] Friedman and Schwartz argued
that Keynesians only looked at nominal rates and neglected the role inflation
plays in real interest rates, which had been high during much of the
Depression. In real terms, monetary policy had effectively been contractionary,
putting downward pressure on output and employment, even though economists
looking only at nominal rates thought monetary policy had been stimulative.[88]
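Friedman and Schwartz's point about real versus nominal rates can be illustrated with the approximate Fisher relation (real rate is roughly the nominal rate minus inflation); the figures are stylized, not Depression-era data:

```python
# Approximate Fisher relation: real interest rate = nominal rate - inflation.
# Deflation (negative inflation) pushes the real rate above the nominal rate.
def real_rate(nominal_rate, inflation):
    return nominal_rate - inflation

# A 1% nominal rate with 10% deflation implies roughly an 11% real rate:
# money is effectively tight even though nominal rates look low.
depression_era = real_rate(nominal_rate=0.01, inflation=-0.10)

# A 5% nominal rate with 3% inflation implies roughly a 2% real rate.
typical = real_rate(nominal_rate=0.05, inflation=0.03)
```

This is the mechanism the paragraph describes: judging policy by nominal rates alone misses that deflation made real borrowing costs high.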
Friedman developed his own quantity theory of money that referred to Irving
Fisher's but inherited much from Keynes.[89] Friedman's 1956 "The Quantity
Theory of Money: A Restatement"[k] incorporated Keynes's demand for money
and liquidity preference into an equation similar to the classical equation of
exchange.[90] Friedman's updated quantity theory also allowed for the
possibility of using monetary or fiscal policy to remedy a major downturn.[91]
Friedman broke with Keynes by arguing that money demand is relatively stable, even during a downturn.[90] Monetarists argued that
"fine-tuning" through fiscal and monetary policy is
counterproductive. They found money demand to be stable even during fiscal
policy shifts,[92] and both fiscal and monetary policies suffer from lags that
made them too slow to prevent mild downturns.[93]
Prominence and decline
Money velocity had been stable and grew consistently until around 1980 (green). After 1980 (blue), money velocity became erratic and the monetarist assumption of stable money velocity was called into question.[94]
Monetarism
attracted the attention of policy makers in the late-1970s and 1980s. Friedman
and Phelps's version of the Phillips curve performed better during stagflation
and gave monetarism a boost in credibility.[95] By the mid-1970s monetarism had
become the new orthodoxy in macroeconomics,[96] and by the late-1970s central
banks in the United Kingdom and United States had largely adopted a monetarist
policy of targeting money supply instead of interest rates when setting
policy.[97] However, targeting monetary aggregates proved difficult for central
banks because of measurement difficulties.[98] Monetarism faced a major test
when Paul Volcker took over the Federal Reserve Chairmanship in 1979. Volcker
tightened the money supply and brought inflation down, creating a severe
recession in the process. The recession lessened monetarism's popularity but
clearly demonstrated the importance of money supply in the economy.[4]
Monetarism became less credible when once-stable money velocity defied
monetarist predictions and began to move erratically in the United States
during the early 1980s.[94] Monetarist methods of single-equation models and
non-statistical analysis of plotted data also lost out to the
simultaneous-equation modeling favored by Keynesians.[99] Monetarism's policies
and method of analysis lost influence among central bankers and academics, but
its core tenets of the long-run neutrality of money (increases in money supply
cannot have long-term effects on real variables, such as output) and use of
monetary policy for stabilization became a part of the macroeconomic mainstream
even among Keynesians.[4][98]

New classical economics
Main article: New classical economics

Much of new classical research was conducted at the University of Chicago.

"New classical economics" evolved from monetarism[100] and presented other
challenges to Keynesianism. Early new classicals considered themselves monetarists,[101] but the new classical school soon developed positions of its own. New classicals
abandoned the monetarist belief that monetary policy could systematically
impact the economy,[102] and eventually embraced real business cycle models
that ignored monetary factors entirely.[103] New classicals broke with
Keynesian economic theory completely while monetarists had built on Keynesian
ideas.[104] Despite discarding Keynesian theory, new classical economists did
share the Keynesian focus on explaining short-run fluctuations. New classicals
replaced monetarists as the primary opponents to Keynesianism and changed the
primary debate in macroeconomics from whether to look at short-run fluctuations
to whether macroeconomic models should be grounded in microeconomic
theories.[105] Like monetarism, new classical economics was rooted at the
University of Chicago, principally with Robert Lucas. Other leaders in the
development of new classical economics include Edward Prescott at the University of Minnesota and Robert Barro at the University of Rochester.[103] New classical
economists wrote that earlier macroeconomic theory was based only tenuously on
microeconomic theory and described its efforts as providing "microeconomic
foundations for macroeconomics." New classicals also introduced rational
expectations and argued that governments had little ability to stabilize the
economy given the rational expectations of economic agents. Most
controversially, new classical economists revived the market clearing
assumption, assuming both that prices are flexible and that the market should
be modeled at equilibrium.[106]

Rational expectations and policy irrelevance

John Muth first proposed rational expectations when he criticized the cobweb model of agricultural prices. Muth showed that agents making decisions based on rational expectations would be more successful than those who made their estimates based on adaptive expectations, which can produce a cobweb pattern in which decisions about produced quantities (Q) cause prices (P) to spiral away from the equilibrium of supply (S) and demand (D).[107][108]
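The cobweb mechanism can be reproduced in a few lines of code; the linear demand and supply schedules and all parameter values below are illustrative assumptions, not taken from Muth's paper.

```python
# Cobweb model sketch: producers choose quantity from an expected price.
# Illustrative linear curves: demand P = a - b*Q, supply Q = c + d*P_expected.

def simulate(a=10.0, b=1.0, c=0.0, d=1.2, p0=4.0, periods=12):
    """Naive (adaptive) expectations: producers expect last period's price."""
    prices = [p0]
    for _ in range(periods):
        q = c + d * prices[-1]          # supply chosen from the expected price
        p = a - b * q                   # market-clearing price given that quantity
        prices.append(p)
    return prices

# Equilibrium price where expectations are fulfilled: P* = (a - b*c) / (1 + b*d)
p_star = 10.0 / (1 + 1.2)

prices = simulate()
# With b*d > 1, the naive-expectations path spirals away from P*:
assert abs(prices[-1] - p_star) > abs(prices[1] - p_star)

# Under rational expectations agents expect P* itself, so the price stays put:
q_re = 0.0 + 1.2 * p_star
assert abs((10.0 - 1.0 * q_re) - p_star) < 1e-9
```

With the slope condition reversed (b*d < 1) the same code converges to the equilibrium, which is the other classic cobweb case.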
Keynesians and monetarists recognized that people based their economic
decisions on expectations about the future. However, until the 1970s, most
models relied on adaptive expectations, which assumed that expectations were
based on an average of past trends.[109] For example, if inflation averaged 4%
over a period, economic agents were assumed to expect 4% inflation the
following year.[109] In 1972 Lucas,[l] influenced by a 1961 agricultural
economics paper by John Muth,[m] introduced rational expectations to
macroeconomics.[110] Essentially, adaptive expectations modeled behavior as if
it were backward-looking while rational expectations modeled economic agents
(consumers, producers and investors) who were forward-looking.[111] New
classical economists also claimed that an economic model would be internally
inconsistent if it assumed that the agents it models behave as if they were
unaware of the model.[112] Under the assumption of rational expectations,
models assume agents make predictions based on the optimal forecasts of the
model itself.[109] This did not imply that people have perfect foresight,[113]
but that they act with an informed understanding of economic theory and
policy.[114] Thomas Sargent and Neil Wallace (1975)[n] applied rational
expectations to models with Phillips curve trade-offs between inflation and
output and found that monetary policy could not be used to systematically
stabilize the economy. Sargent and Wallace's policy ineffectiveness proposition
found that economic agents would anticipate inflation and adjust to higher
price levels before the influx of monetary stimulus could boost employment and
output.[115] Only unanticipated monetary policy could increase employment, and
no central bank could systematically use monetary policy for expansion without
economic agents catching on and anticipating price changes before they could
have a stimulative impact.[116] Robert E. Hall[o] applied rational expectations
to Friedman's permanent income hypothesis that people base the level of their
current spending on their wealth and lifetime income rather than current
income.[117] Hall found that people will smooth their consumption over time and
only alter their consumption patterns when their expectations about future
income change.[118] Both Hall's and Friedman's versions of the permanent income
hypothesis challenged the Keynesian view that short-term stabilization policies
like tax cuts can stimulate the economy.[117] The permanent income view
suggests that consumers base their spending on wealth, so a temporary boost in
income would only produce a moderate increase in consumption.[117] Empirical
tests of Hall's hypothesis suggest it may understate boosts in consumption due
to income increases; however, Hall's work helped to popularize Euler equation
models of consumption.[119]
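Euler-equation models of the kind Hall's work popularized are built around the consumption Euler equation; the standard textbook form (with period utility $u$, discount factor $\beta$, and real interest rate $r$) is:

```latex
% Consumption Euler equation:
u'(c_t) = \beta (1 + r)\, \mathbb{E}_t\!\left[ u'(c_{t+1}) \right]

% With quadratic utility and \beta(1+r) = 1, this reduces to Hall's
% random walk of consumption:
c_{t+1} = c_t + \varepsilon_{t+1}, \qquad \mathbb{E}_t[\varepsilon_{t+1}] = 0
```

The random-walk form captures the smoothing result in the text: consumption changes only when new information $\varepsilon_{t+1}$ about lifetime income arrives.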
The Lucas critique and microfoundations

In 1976 Lucas wrote a paper[p]
criticizing large-scale Keynesian models used for forecasting and policy
evaluation. Lucas argued that economic models based on empirical relationships
between variables are unstable as policies change: a relationship under one
policy regime may be invalid after the regime changes.[112] The Lucas critique went further and argued that a policy's impact is determined by how
the policy alters the expectations of economic agents. No model is stable
unless it accounts for expectations and how expectations relate to policy.[120]
New classical economists argued that abandoning the disequilibrium models of
Keynesianism and focusing on structure- and behavior-based equilibrium models
would remedy these faults.[121] Keynesian economists responded by building
models with microfoundations grounded in stable theoretical relationships.[122]
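A toy simulation can illustrate the critique: a reduced-form relation that fits well under one policy rule fails after the rule changes. The output equation, policy rules, and all numbers below are hypothetical, not from Lucas's 1976 paper.

```python
# Lucas critique sketch. Toy model: output rises only with *unanticipated*
# money growth, y = y_n + (m - expected_m).
import random

random.seed(0)
Y_NATURAL = 2.0

def output(m, expected_m):
    return Y_NATURAL + (m - expected_m)

# Regime 1: money growth rule m = 1.0 + noise; rational agents expect 1.0.
sample = [1.0 + random.gauss(0, 0.5) for _ in range(1000)]
pairs = [(m, output(m, expected_m=1.0)) for m in sample]

# An econometrician fits the reduced form y = alpha + beta*m by least squares
# and finds beta near 1, suggesting money growth "causes" output.
mbar = sum(m for m, _ in pairs) / len(pairs)
ybar = sum(y for _, y in pairs) / len(pairs)
beta = (sum((m - mbar) * (y - ybar) for m, y in pairs)
        / sum((m - mbar) ** 2 for m, _ in pairs))
assert abs(beta - 1.0) < 0.1

# Regime 2: a policy maker, trusting the regression, raises average money
# growth to 3.0. Agents update their expectations, so output does not move:
y_new = output(m=3.0, expected_m=3.0)
assert y_new == Y_NATURAL
```

The estimated relation was never structural: it held only because expectations were anchored by the old rule, which is exactly the instability Lucas pointed out.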
Lucas supply theory and business cycle models
See also: Lucas islands model

Lucas and Leonard Rapping[q] laid out the first new classical approach to
aggregate supply in 1969. Under their model, changes in employment are based on
worker preferences for leisure time. Lucas and Rapping modeled decreases in
employment as voluntary choices of workers to reduce their work effort in
response to the prevailing wage.[123] Lucas (1973)[r] proposed a business cycle
theory based on rational expectations, imperfect information, and market
clearing. While building this model, Lucas attempted to incorporate the
empirical fact that there had been a trade-off between inflation and output
without ceding that money was non-neutral in the short-run.[124] This model
included the idea of money surprise: monetary policy only matters when it
causes people to be surprised or confused by the price of goods changing
relative to one another.[125] Lucas hypothesized that producers become aware of
changes in their own industries before they recognize changes in other
industries. Given this assumption, a producer might perceive an increase in
general price level as an increase in the demand for his goods. The producer
responds by increasing production only to find the "surprise" that
prices had increased across the economy generally rather than specifically for
his goods.[126] This "Lucas supply curve" models output as a function
of the "price" or "money surprise," the difference between
expected and actual inflation.[126] Lucas's "surprise" business cycle
theory fell out of favor after the 1970s when empirical evidence failed to
support this model.[127][128]

Real business cycle theory

George W. Bush meets Kydland (left) and Prescott (center) at an Oval Office ceremony in 2004 honoring the year's Nobel Laureates.

While "money surprise" models floundered, efforts
continued to develop a new classical model of the business cycle. A 1982 paper
by Kydland and Prescott[s] introduced real business cycle theory (RBC).[129]
Under this theory business cycles could be explained entirely by the supply
side, and models represented the economy with systems at constant
equilibrium.[130] RBC dismissed the need to explain business cycles with price
surprise, market failure, price stickiness, uncertainty, and instability.[131]
Instead, Kydland and Prescott built parsimonious models that explained business
cycles with changes in technology and productivity.[127] Employment levels
changed because these technological and productivity changes altered the desire
of people to work.[127] RBC rejected the idea of high involuntary unemployment
in recessions and not only dismissed the idea that money could stabilize the
economy but also the monetarist idea that money could destabilize it.[132] Real
business cycle modelers sought to build macroeconomic models based on
microfoundations of Arrow–Debreu[133] general
equilibrium.[134][135][136][137] RBC models were one of the inspirations for
dynamic stochastic general equilibrium (DSGE) models. DSGE models have become a
common methodological tool for macroeconomists, even those who disagree with new classical theory.[129]

New Keynesian economics
Main article: New Keynesian economics

New classical economics had pointed out the inherent
contradiction of the neoclassical synthesis: Walrasian microeconomics with
market clearing and general equilibrium could not lead to Keynesian
macroeconomics where markets failed to clear. New Keynesians recognized this
paradox, but, while the new classicals abandoned Keynes, new Keynesians
abandoned Walras and market clearing.[138] During the late 1970s and 1980s, new
Keynesian researchers investigated how market imperfections like monopolistic
competition, nominal frictions like sticky prices, and other frictions made
microeconomics consistent with Keynesian macroeconomics.[138] New Keynesians
often formulated models with rational expectations, which had been proposed by
Lucas and adopted by new classical economists.[139]

Nominal and real rigidities

Stanley Fischer (1977)[t] responded to Thomas J. Sargent and Neil Wallace's
monetary ineffectiveness proposition and showed how monetary policy could
stabilize an economy even in a model with rational expectations.[139] Fischer's
model showed how monetary policy could have an impact in a model with long-term
nominal wage contracts.[140] John B. Taylor expanded on Fischer's work and
found that monetary policy could have long-lasting effects, even after
wages and prices had adjusted. Taylor arrived at this result by building on
Fischer's model with the assumptions of staggered contract negotiations and
contracts that fixed nominal prices and wage rates for extended periods.[140]
These early new Keynesian theories were based on the basic idea that, given
fixed nominal wages, a monetary authority (central bank) can control the
employment rate.[141] Since wages are fixed at a nominal rate, the monetary
authority can control the real wage (wage values adjusted for inflation) by
changing the money supply and thus impact the employment rate.[141] By the
1980s new Keynesian economists became dissatisfied with these early nominal
wage contract models[142] since they predicted that real wages would be
countercyclical (real wages would rise when the economy fell), while empirical
evidence showed that real wages tended to be independent of economic cycles or
even slightly procyclical.[143] These contract models also did not make sense
from a microeconomic standpoint since it was unclear why firms would use
long-term contracts if they led to inefficiencies.[141] Instead of looking for
rigidities in the labor market, new Keynesians shifted their attention to the
goods market and the sticky prices that resulted from "menu cost"
models of price change.[142] The term refers to the literal cost to a
restaurant of printing new menus when it wants to change prices; however,
economists also use it to refer to more general costs associated with changing
prices, including the expense of evaluating whether to make the change.[142]
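The adjust-or-don't decision can be sketched as a simple threshold rule; the quadratic loss function and all numbers below are illustrative assumptions rather than a calibrated model.

```python
# Menu-cost sketch: a firm changes its posted price only when the profit
# loss from keeping a stale price exceeds the fixed cost of repricing.

def adjust_price(posted, optimal, menu_cost, loss_per_unit_gap=2.0):
    """Return the new posted price under a simple menu-cost rule.

    The per-period loss from a stale price is assumed (illustratively)
    to be quadratic in the gap between the posted price and the
    profit-maximizing price.
    """
    gap = optimal - posted
    loss = loss_per_unit_gap * gap ** 2
    return optimal if loss > menu_cost else posted

# A small shock to the optimal price is ignored (the price is "sticky")...
assert adjust_price(posted=10.0, optimal=10.2, menu_cost=0.5) == 10.0
# ...while a large shock makes repricing worth the menu cost.
assert adjust_price(posted=10.0, optimal=12.0, menu_cost=0.5) == 12.0
```

Because small gaps are left uncorrected, posted prices lag behind market-clearing prices, which is the aggregate stickiness the menu-cost literature studies.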
Since firms must spend money to change prices, they do not always adjust them
to the point where markets clear, and this lack of price adjustments can
explain why the economy may be in disequilibrium.[144] Studies using data from
the United States Consumer Price Index confirmed that prices do tend to be
sticky. A good's price typically changes about every four to six months or, if
sales are excluded, every eight to eleven months.[145] While some studies
suggested that menu costs are too small to have much of an aggregate impact,
Laurence Ball and David Romer (1990)[u] showed that real rigidities could
interact with nominal rigidities to create significant disequilibrium. Real
rigidities occur whenever a firm is slow to adjust its real prices in response
to a changing economic environment. For example, a firm can face real
rigidities if it has market power or if its costs for inputs and wages are
locked-in by a contract.[146][147] Ball and Romer argued that real rigidities
in the labor market keep a firm's costs high, which makes firms hesitant to cut
prices and lose revenue. The expense created by real rigidities combined with
the menu cost of changing prices makes it less likely that a firm will cut prices
to a market clearing level.[144]

Coordination failure

In
this model of coordination failure, a representative firm chooses its output ei based on the average output of all firms (e). When the representative
firm produces as much as the average firm (ei=e), the economy is at an
equilibrium represented by the 45-degree line. The decision curve intersects
with the equilibrium line at three equilibrium points. The firms could
coordinate and produce at the optimal level of point B, but, without
coordination, firms might produce at a less efficient equilibrium.[148][149]
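The s-shaped best-response curve described above can be sketched numerically; the smoothstep function below is an illustrative choice of curve, not taken from the cited models, and it is chosen so that its fixed points are easy to see.

```python
# Coordination-failure sketch: each firm's best-response output is an
# s-shaped function of average output e. Equilibria are the fixed points
# where the curve crosses the 45-degree line, f(e) = e.

def best_response(e):
    return 3 * e**2 - 2 * e**3   # illustrative s-shaped curve on [0, 1]

# Scan a grid for the three equilibria:
equilibria = [e / 1000 for e in range(1001)
              if abs(best_response(e / 1000) - e / 1000) < 1e-9]
assert equilibria == [0.0, 0.5, 1.0]

# Iterating best responses from low initial output converges to the low
# equilibrium, even if (as assumed here) everyone would prefer e = 1:
e = 0.4
for _ in range(100):
    e = best_response(e)
assert e < 1e-6   # stuck at the inefficient equilibrium
```

Starting instead from any average output above 0.5, the same iteration converges to the high equilibrium, which is the multiple-equilibria feature that makes coordination matter.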
Coordination failure is another potential explanation for recessions and
unemployment.[150] In recessions a factory can go idle even though there are
people willing to work in it, and people willing to buy its production if they
had jobs. In such a scenario, economic downturns appear to be the result of
coordination failure: The invisible hand fails to coordinate the usual,
optimal, flow of production and consumption.[151] Russell Cooper and Andrew
John (1988)[v] expressed a general form of coordination failure as models with multiple
equilibria where agents could coordinate to improve (or at least not harm) each
of their respective situations.[152] Cooper and John based their work on
earlier models including Peter Diamond's (1982)[w] coconut model,[153] which
demonstrated a case of coordination failure involving search and matching
theory.[154] In Diamond's model producers are more likely to produce if they
see others producing. The increase in possible trading partners increases the
likelihood of a given producer finding someone to trade with. As in other cases
of coordination failure, Diamond's model has multiple equilibria, and the
welfare of one agent is dependent on the decisions of others.[155] Diamond's
model is an example of a "thick-market externality" that causes
markets to function better when more people and firms participate in them.[156]
Other potential sources of coordination failure include self-fulfilling
prophecies. If a firm anticipates a fall in demand, they might cut back on
hiring. A lack of job vacancies might worry workers who then cut back on their
consumption. This fall in demand meets the firm's expectations, but it is
entirely due to the firm's own actions.[152]

Labor market failures

New Keynesians offered explanations for the failure of the labor market to clear.
In a Walrasian market, unemployed workers bid down wages until the demand for
workers meets the supply.[157] If markets are Walrasian, the ranks of the
unemployed would be limited to workers transitioning between jobs and workers
who choose not to work because wages are too low to attract them.[158] New Keynesians developed several theories explaining why markets might leave willing workers
unemployed.[159] Of these theories, new Keynesians were especially associated
with efficiency wages and the insider-outsider model used to explain long-term
effects of previous unemployment,[160] where short-term increases in
unemployment become permanent and lead to higher levels of unemployment in the
long-run.[161]

Insider-outsider model

Economists became interested in
hysteresis when unemployment levels spiked with the 1979 oil shock and early
1980s recessions but did not return to the lower levels that had been
considered the natural rate.[162] Olivier Blanchard and Lawrence Summers
(1986)[x] explained hysteresis in unemployment with insider-outsider models,
which were also proposed by Assar Lindbeck and Dennis Snower in a series of
papers and then a book.[y] Insiders, employees already working at a firm, are
only concerned about their own welfare. They would rather keep their wages high
than cut pay and expand employment. The unemployed, outsiders, do not have any
voice in the wage bargaining process, so their interests are not represented.
When unemployment increases, the number of outsiders increases as well. Even
after the economy has recovered, outsiders continue to be disenfranchised from
the bargaining process.[163] The larger pool of outsiders created by periods of
economic retraction can lead to persistently higher levels of
unemployment.[163] The presence of hysteresis in the labor market also raises
the importance of monetary and fiscal policy. If temporary downturns in the
economy can create long term increases in unemployment, stabilization policies
do more than provide temporary relief; they prevent short term shocks from
becoming long term increases in unemployment.[164]
Efficiency wages

In the Shapiro-Stiglitz model workers are paid at a level
where they do not shirk, preventing wages from dropping to full employment
levels. The curve for the no-shirking condition (labeled NSC) goes to infinity
at full employment. In efficiency wage models, workers are paid at levels that
maximize productivity instead of clearing the market.[165] For example, in
developing countries, firms might pay more than a market rate to ensure their
workers can afford enough nutrition to be productive.[166] Firms might also pay
higher wages to increase loyalty and morale, possibly leading to better
productivity.[167] Firms can also pay higher than market wages to forestall
shirking.[167] Shirking models were particularly influential.[168] Carl Shapiro
and Joseph Stiglitz (1984)[z] created a model where employees tend to avoid
work unless firms can monitor worker effort and threaten slacking employees
with unemployment.[169] If the economy is at full employment, a fired shirker
simply moves to a new job.[170] Individual firms pay their workers a premium
over the market rate to ensure their workers would rather work and keep their
current job instead of shirking and risk having to move to a new job. Since
each firm pays more than market clearing wages, the aggregated labor market
fails to clear. This creates a pool of unemployed laborers and adds to the
expense of getting fired. Workers not only risk a lower wage, they risk being
stuck in the pool of unemployed. Keeping wages above market clearing levels
creates a serious disincentive to shirk that makes workers more efficient even
though it leaves some willing workers unemployed.[169]

New growth theory
Further information: Endogenous growth theory

Low income countries have a diversity of growth rates instead of the uniformly high rates expected under convergence.

Empirical evidence showed that growth rates of low income countries
varied widely instead of converging to a uniform income level[171] as expected
in earlier, neoclassical models.[172] Following research on the neoclassical
growth model in the 1950s and 1960s, little work on economic growth occurred
until 1985.[55] Papers by Paul Romer[aa][ab] were particularly influential in
igniting the revival of growth research.[173] Beginning in the mid-1980s and
booming in the early 1990s many macroeconomists shifted their focus to the
long-run and started "new growth" theories, including endogenous
growth.[174][173] Growth economists sought to explain empirical facts including
the failure of sub-Saharan Africa to catch up in growth, the booming East Asian
Tigers, and the slowdown in productivity growth in the United States prior to
the technology boom of the 1990s.[175] Convergence in growth rates had been
predicted under the neoclassical growth model, and this apparent predictive
failure inspired research into endogenous growth.[172] Three families of new
growth models challenged neoclassical models.[176] The first challenged the
assumption of previous models that the economic benefits of capital would
decrease over time. These early new growth models incorporated positive
externalities to capital accumulation where one firm's investment in technology
generates spillover benefits to other firms because knowledge spreads.[177] The
second focused on the role of innovation in growth. These models focused on the
need to encourage innovation through patents and other incentives.[178] A third
set, referred to as the "neoclassical revival", expanded the
definition of capital in exogenous growth theory to include human capital.[179]
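In the augmented Solow model this broader notion of capital enters the production function directly; in the standard notation ($K$ physical capital, $H$ human capital, $A$ labor-augmenting technology, $L$ labor):

```latex
Y_t = K_t^{\alpha}\, H_t^{\beta}\, \left( A_t L_t \right)^{1 - \alpha - \beta},
\qquad \alpha + \beta < 1
```

Because returns to each kind of capital still diminish ($\alpha + \beta < 1$), the model predicts conditional convergence: countries converge to steady states determined by their saving rates in physical and human capital.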
This strain of research began with Mankiw, Romer, and Weil (1992),[ac] which
showed that 78% of the cross-country variance in growth could be explained by a
Solow model augmented with human capital.[180] Endogenous growth theories
implied that countries could experience rapid "catch-up" growth
through an open society that encouraged the inflow of technology and ideas from
other nations.[181] Endogenous growth theory also suggested that governments
should intervene to encourage investment in research and development because
the private sector might not invest at optimal levels.[181]

New synthesis
Main article: New neoclassical synthesis

Based on the DSGE model in Christiano, Eichenbaum, and Evans (2005),[ad] impulse response functions show the effects of a one standard deviation monetary policy shock on other economic variables over 20 quarters: consumption and output respond positively at first and turn negative several years later, while real interest rates and inflation respond negatively at first, followed by a slight positive response.

A "new synthesis" or "new neoclassical synthesis"
emerged in the 1990s drawing ideas from both the new Keynesian and new
classical schools.[182] From the new classical school, it adapted RBC
hypotheses, including rational expectations, and methods;[183] from the new
Keynesian school, it took nominal rigidities (price stickiness)[150] and other
market imperfections.[184] The new synthesis implies that monetary policy can
have a stabilizing effect on the economy, contrary to new classical
theory.[185][186] The new synthesis was adopted by academic economists and soon
by policy makers, such as central bankers.[150] Under the synthesis, debates
have become less ideological (concerning fundamental methodological questions)
and more empirical. Woodford described the change:[187] It sometimes appears to
outsiders that macroeconomists are deeply divided over issues of empirical
methodology. There continue to be, and probably will always be, heated
disagreements about the degree to which individual empirical claims are
convincing. A variety of empirical methods are used, both for data
characterization and for estimation of structural relations, and researchers
differ in their taste for specific methods, often depending on their
willingness to employ methods that involve more specific a priori assumptions.
But the existence of such debates should not conceal the broad agreement on
more basic issues of method. Both "calibrationists" and the practitioners of Bayesian estimation of DSGE models agree on the importance of doing "quantitative theory", both accept the importance of the
distinction between pure data characterization and the validation of structural
models, and both have a similar understanding of the form of model that can
properly be regarded as structural.

Woodford emphasised that there was now a stronger distinction between works of data characterisation, which make no claims regarding the results' relationship to specific economic decisions, and structural models, in which a model with a theoretical basis attempts to describe the actual relationships and decisions made by economic
actors. The validation of structural models now requires that their
specifications reflect "explicit decision problems faced by households or
firms". Data characterisation, Woodford says, proves useful in
"establishing facts structural models should be expected to explain"
but not as a tool of policy analysis. Rather it is structural models,
explaining those facts in terms of real-life decisions by agents, that form the
basis of policy analysis.[188] New synthesis theory developed RBC models called
dynamic stochastic general equilibrium (DSGE) models, which avoid the Lucas
critique.[189][190] DSGE models formulate hypotheses about the behaviors and
preferences of firms and households; numerical solutions of the resulting DSGE
models are computed.[191] These models also included a "stochastic"
element created by shocks to the economy. In the original RBC models these
shocks were limited to technological change, but more recent models have
incorporated other real changes.[192] Econometric analysis of DSGE models
suggested that real factors sometimes affect the economy. A paper by Frank
Smets and Rafael Wouters (2007)[ae] stated that monetary policy explained only
a small part of the fluctuations in economic output.[193] In new synthesis
models, shocks can affect both demand and supply.[185] More recent developments in new synthesis modelling have included the development of heterogeneous agent
models, used in monetary policy optimisation: these models examine the
implications of having distinct groups of consumers with different savings
behaviour within a population on the transmission of monetary policy through an
economy.[194]

2008 financial crisis, Great Recession, and the evolution of consensus

The 2007–2008 financial crisis and subsequent Great Recession
challenged the short-term macroeconomics of the time.[195] Few economists
predicted the crisis, and, even afterwards, there was great disagreement on how
to address it.[196] The new synthesis formed during the Great Moderation and
had not been tested in a severe economic environment.[197] Many economists
agree that the crisis stemmed from an economic bubble, but neither of the major
macroeconomic schools within the synthesis had paid much attention to finance
or a theory of asset bubbles.[196] The failures of macroeconomic theory at the
time to explain the crisis spurred macroeconomists to re-evaluate their
thinking.[198] Commentary ridiculed the mainstream and proposed a major
reassessment.[199] Particular criticism during the crisis was directed at DSGE
models, which were developed prior to and during the new synthesis. Robert
Solow testified before the U.S. Congress that DSGE modeling "has nothing
useful to say about anti-recession policy because it has built into its
essentially implausible assumptions the 'conclusion' that there is nothing for
macroeconomic policy to do."[200] Solow also criticized DSGE models for
frequently assuming that a single, "representative agent" can
represent the complex interaction of the many diverse agents that make up the
real world.[201] Robert Gordon criticized much of macroeconomics after 1978.
Gordon called for a renewal of disequilibrium theorizing and disequilibrium
modeling. He disparaged both new classical and new Keynesian economists who
assumed that markets clear; he called for a renewal of economic models that
could include both market-clearing and sticky-priced goods, such as oil and housing respectively.[202] The crisis of confidence in DSGE models did not
dismantle the deeper consensus that characterises the new synthesis,[af][203]
and models which could explain the new data continued development. Areas that
had seen increased popular and political attention, such as income inequality,
received greater focus, as did models which incorporated significant
heterogeneity (as opposed to earlier DSGE models).[204] Whilst criticising DSGE
models, Ricardo J. Caballero argued that work in finance showed progress and
suggested that modern macroeconomics needed to be re-centered but not scrapped
in the wake of the financial crisis.[205] In 2010, Federal Reserve Bank of
Minneapolis president Narayana Kocherlakota acknowledged that DSGE models were
"not very useful" for analyzing the financial crisis of 2007-2010,
but argued that the applicability of these models was "improving" and
claimed that there was a growing consensus among macroeconomists that DSGE
models need to incorporate both "price stickiness and financial market
frictions."[206] Despite his criticism of DSGE modelling, he stated that modern models are useful:

In the early 2000s, ...[the] problem of fit[ag]
disappeared for modern macro models with sticky prices. Using novel Bayesian
estimation methods, Frank Smets and Raf Wouters[207] demonstrated that a
sufficiently rich New Keynesian model could fit European data well. Their
finding, along with similar work by other economists, has led to widespread
adoption of New Keynesian models for policy analysis and forecasting by central
banks around the world.[208] University of Minnesota professor of economics
V.V. Chari said in 2010 that the most advanced DSGE models allowed for
significant heterogeneity in behaviour and decisions, from factors such as age,
prior experiences and available information.[209] Alongside such improvements
in DSGE modelling, work has also included the development of heterogeneous-agent
models of more specific aspects of the economy, such as monetary policy
transmission.[210][211]
Heterodox theories
Main article: Heterodox economics

Heterodox economists
adhere to theories sufficiently outside the mainstream to be marginalized[212]
and treated as irrelevant by the establishment.[213] Initially, heterodox
economists, including Joan Robinson, worked alongside mainstream economists, but
heterodox groups isolated themselves and created insular groups in the late
1960s and 1970s.[214] Present day heterodox economists often publish in their
own journals rather than those of the mainstream and eschew formal modeling in
favor of more abstract theoretical work.[212] According to The Economist, the
2008 financial crisis and subsequent recession highlighted limitations of the
macroeconomic theories, models, and econometrics of the time.[215] The popular
press during the period discussed post-Keynesian economics[216] and Austrian
economics, two heterodox traditions that have little influence on mainstream
economics.[217][218]

Post-Keynesian economics
Main article: Post-Keynesian economics

While neo-Keynesians integrated Keynes's ideas with neoclassical
theory, post-Keynesians went in other directions. Post-Keynesians opposed the
neoclassical synthesis and shared a fundamentalist interpretation of Keynes
that sought to develop economic theories without classical elements.[219] The
core of post-Keynesian belief is the rejection of three axioms that are central
to classical and mainstream Keynesian views: the neutrality of money, gross
substitution, and the ergodic axiom.[220][221] Post-Keynesians not only reject
the neutrality of money in the short-run, they also see money as an important
factor in the long-run,[220] a view other Keynesians dropped in the 1970s.
Gross substitution implies that goods are interchangeable. Relative price
changes cause people to shift their consumption in proportion to the
change.[222] The ergodic axiom asserts that the future of the economy can be
predicted based on the past and present market conditions. Without the ergodic
assumption, agents are unable to form rational expectations, undermining new
classical theory.[222] In a non-ergodic economy, predictions are very hard to
make and decision-making is hampered by uncertainty. Partly because of
uncertainty, post-Keynesians take a different stance on sticky prices and wages
than new Keynesians. They do not see nominal rigidities as an explanation for
the failure of markets to clear. They instead think sticky prices and long-term
contracts anchor expectations and alleviate uncertainty that hinders efficient
markets.[223] Post-Keynesian economic policies emphasize the need to reduce uncertainty in the economy, including through safety nets and price stability.[224][221]
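The ergodic axiom discussed above can be illustrated numerically: for a stationary (ergodic) process, the time average of a long history settles down, so the past is informative about the future; for a non-ergodic process such as a random walk, averages over different stretches of the same history need not agree. A minimal sketch (the processes and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Ergodic case: stationary AR(1), y_t = 0.5 * y_{t-1} + e_t
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + rng.normal()

# Non-ergodic case: random walk, z_t = z_{t-1} + e_t
z = np.cumsum(rng.normal(size=n))

# Compare time averages over the first half vs the second half of each history
half = n // 2
ar1_gap = abs(y[:half].mean() - y[half:].mean())
rw_gap = abs(z[:half].mean() - z[half:].mean())

print(f"AR(1) gap between half-sample means: {ar1_gap:.3f}")  # small
print(f"random-walk gap:                     {rw_gap:.3f}")   # typically large
```

In the stationary case the two half-sample means nearly coincide, which is what licenses forecasting from history; in the random-walk case they can drift arbitrarily far apart, which is the kind of world post-Keynesians argue decision-makers actually face.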
Hyman Minsky applied post-Keynesian notions of uncertainty and instability to a
theory of financial crisis where investors increasingly take on debt until
their returns can no longer pay the interest on leveraged assets, resulting in
a financial crisis.[221] The financial crisis of 2007–2008 brought mainstream attention to Minsky's work.[216]
Austrian business cycle theory
Main article: Austrian business cycle theory
Friedrich Hayek, founder of Austrian business cycle theory
The Austrian School of economics
began with Carl Menger's 1871 Principles of Economics. Menger's followers
formed a distinct group of economists until around World War II, when the
distinction between Austrian economics and other schools of thought had largely
broken down. The Austrian tradition survived as a distinct school, however,
through the works of Ludwig von Mises and Friedrich Hayek. Present-day
Austrians are distinguished by their interest in earlier Austrian works and
abstention from standard empirical methodology including econometrics.
Austrians also focus on market processes instead of equilibrium.[225]
Mainstream economists are generally critical of its methodology.[226][227]
Hayek created the Austrian business cycle theory, which synthesizes Menger's
capital theory and Mises's theory of money and credit.[228] The theory proposes
a model of inter-temporal investment in which production plans precede the
manufacture of the finished product. The producers revise production plans to
adapt to changes in consumer preferences.[229] Producers respond to
"derived demand," which is estimated demand for the future, instead
of current demand. If consumers reduce their spending, producers believe that
consumers are saving for additional spending later, so that production remains
constant.[230] Combined with a market of loanable funds (which relates savings
and investment through the interest rate), this theory of capital production
leads to a model of the macroeconomy where markets reflect inter-temporal
preferences.[231] Hayek's model suggests that an economic bubble begins when cheap credit initiates a boom in which resources are misallocated: early stages of production receive more resources than they should and overproduction begins, while later stages of capital are left underfunded and are not maintained against depreciation.[232] The overproduction of the early stages cannot be processed by the poorly maintained later-stage capital. The boom becomes a bust when a lack of finished goods leads to "forced saving," since fewer finished goods can be produced for sale.
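The loanable-funds mechanism behind the boom can be put in arithmetic form. With a linear savings supply and investment demand (the curves and numbers below are made up for illustration, not drawn from Hayek), market clearing sets the interest rate; an injection of new bank credit adds to the supply of funds, pushing the rate below the level that voluntary saving alone would support, so investment exceeds voluntary saving, the gap Austrians associate with malinvestment:

```python
# Illustrative linear loanable-funds market (all numbers are made up):
#   savings supplied:   S(r) = 100 + 400 * r
#   investment demand:  I(r) = 200 - 600 * r
def savings(r): return 100 + 400 * r
def investment(r): return 200 - 600 * r

# Market clearing without credit injection: S(r) = I(r)
r_natural = (200 - 100) / (400 + 600)          # the "natural" rate, 10%
assert abs(savings(r_natural) - investment(r_natural)) < 1e-9

# A bank credit injection adds 30 units to the supply of loanable funds:
credit = 30
r_boom = (200 - 100 - credit) / (400 + 600)    # rate falls to 7%

voluntary_saving = savings(r_boom)      # what households save at the lower rate
funded_investment = investment(r_boom)  # what producers invest at the lower rate
gap = funded_investment - voluntary_saving  # financed by new credit, not saving

print(f"natural rate: {r_natural:.2%}, boom rate: {r_boom:.2%}")
print(f"investment exceeds voluntary saving by {gap:.0f} units")
```

The gap exactly equals the credit injection: investment is being financed by newly created money rather than by households' deferred consumption, which is the mismatch the theory says must eventually be unwound in the bust.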