Milton Friedman: Great Classical Liberal Political Economist



The path to scientific recognition


Unlike several of his classical liberal contemporaries (James M. Buchanan, Ronald H. Coase and Gordon Tullock), Milton Friedman forged his scientific reputation not by traveling less well-trodden paths but by a sequence of brilliant challenges to mainstream economics. The Royal Swedish Academy of Sciences, in awarding him the Nobel Prize in Economic Science in 1976, cited Friedman ‘for his achievements in the fields of consumption analysis, monetary history and theory and for his demonstration of the complexity of stabilization policy’. This section focuses on these contributions and assesses their implications for classical liberal political economy and for the public choice revolution.

The Methodology of Positive Economics

During his graduate years at Chicago, Friedman had been taught by Frank Knight, who evidenced extreme skepticism towards empirical economic analysis. None of the leading scholars at Chicago during the 1930’s showed any real interest in numbers. Quite possibly, Friedman would have embraced that skepticism had he been able to move directly into an academic position in 1935. Experience at the NRC and the NBER during his Wilderness Years, however, taught him to respect empirical analysis and led him to think deeply about the methodology of positive economics. When he returned to Chicago in 1946, he determined to make sense of the kind of work that he had undertaken with Kuznets and Burns. In so doing, he would make an important contribution to methodology that would become the defining characteristic of the new Chicago School of Economics.

During the 1930’s the economics profession had become enamored of a view advanced by Lionel Robbins that the veracity of an economic model should be tested primarily by the correspondence between its assumptions and the facts (Walters 1987, 423). Specifically, Robbins explained: “But the final test of the validity of any such definition is not its apparent harmony with certain usages of every day speech, but its capacity to describe exactly the ultimate subject matter of the main generalizations of science” (Robbins 1938, 4-5). Thus Robbins’s view was that the assumptions of good science must directly reflect empirical reality.

This view encouraged significant challenges to the model of perfect competition from critics such as Joan Robinson and Edward Chamberlin, who claimed that the assumptions of the perfectly competitive model failed to conform to the reality of twentieth-century markets. It also stimulated attacks on all theories that incorporated the assumption that firms maximize profits. More fundamentally, the Robbins test was being widely deployed to attack the laissez-faire model of economics (Samuelson 1963, 213).

As early as 1947, Friedman was able to circulate in draft form a view of the proper methodology for positive economics radically different from that espoused by Robbins. Six years later, in 1953, Friedman’s article on the methodology of positive economics would make a controversial but long-lasting entry into the literature of economics.

In preparing his essay, Friedman benefited slightly both from a brief conversation with Karl Popper (whose great book Logik der Forschung was not yet available in English) and from his collaboration with Leonard Jimmie Savage, whose book The Foundations of Statistics would shortly revolutionize the philosophical foundations of statistics (Friedman and Friedman 1998, 215). Ultimately, however, the methodology outlined in Friedman’s 1953 essay is uniquely his own.

At the outset of his essay Friedman states that: ‘The ultimate goal of a positive science is the development of a “theory” or “hypothesis” that yields valid and meaningful (i.e., not truistic) predictions about phenomena not yet observed’ (Friedman 1953, 7). He reinforces this view in the following terms: ‘Viewed as a body of substantive hypotheses, theory is to be judged by its predictive power for the class of phenomena which it is intended to “explain”’ (Friedman 1953, 8). In this respect, a hypothesis can be falsified but never verified:

The hypothesis is rejected if its predictions are contradicted (“frequently” or more often than predictions from an alternative hypothesis); it is accepted if its predictions are not contradicted; great confidence is attached to it if it has survived many opportunities for contradiction. Factual evidence can never “prove” a hypothesis; it can only fail to disprove it, which is what we generally mean when we say, somewhat inexactly, that the hypothesis has been “confirmed” by experience. (Friedman 1953, 8-9)

This emphasis on prediction leads Friedman to reverse the epistemic order presumed in orthodox methodology (Hirsch and de Marchi 1990, 76). Instead of reasoning from true causes to implications, Friedman reasons from observed implications to possible premises. In this view, the premises of a successful theory are accepted to the extent that they yield a set of predictions that has not been falsified by the available evidence. The simpler and the more fruitful the premises involved, the more acceptable they are, given the accuracy of the predictions that they generate.

From this perspective, Friedman launched a controversial and, in retrospect, almost certainly exaggerated attack on the ruling convention that a theory should be tested by the realism of its assumptions.

Truly important and significant hypotheses will be found to have “assumptions” that are wildly inaccurate descriptive representations of reality, and, in general, the more significant the theory, the more unrealistic the assumptions… A hypothesis is important if it “explains” much by little, that is, if it abstracts the common and crucial elements from the mass of complex and detailed circumstances surrounding the phenomena to be explained and permits valid predictions on the basis of them alone. (Friedman 1953, 14)

Friedman immediately modified this startling and memorable assertion with a more cautious explanation:

The relevant question to ask about the “assumptions” of a theory is not whether they are descriptively “realistic”, for they never are, but whether they are sufficiently good approximations for the purpose in hand. And this question can be answered only by seeing whether the theory works, which means whether it yields sufficiently accurate predictions. (Friedman 1953, 15)
Friedman’s statement of methodology did not meet with widespread early acceptance within an economics profession as yet unacquainted with the writings of Karl Popper. Most of the early critiques were ad hoc in nature, designed more to buttress the ongoing attack on neoclassical theory than to provide profound insights. In 1963, however, Paul Samuelson entered the debate with a more formal attempted rebuttal of Friedman’s methodology (Samuelson 1963).

Samuelson focused attention on Friedman’s assertions that (1) a theory is vindicated if some of its consequences are empirically valid to a useful degree of approximation; (2) the empirical unrealism of the theory itself, or of its assumptions, is quite irrelevant to its validity and worth; and (3) it is a positive merit of a theory that some of its content and assumptions are unrealistic.

According to Samuelson (1963), this methodology is incorrect as a matter of logic. Define a theory (call it B) as a set of axioms, postulates or hypotheses that stipulate something about observable reality. Fundamentally, this theory contains everything (assumptions as well as consequences) and is refuted or not as a whole by reference to how well it conforms to the relevant evidence. Friedman denies this and argues instead that B has consequences (call them C) that somehow come after it and assumptions (call them A) that somehow are antecedent to it. What are the implications of this separation?

According to Samuelson, A = B = C. If C is the complete set of consequences of B, it is identical with B: B implies itself and all the things that itself implies. Thus, if C is empirically valid, then so is B. Consider, however, a proper subset of C (call it C-) that contains some but not all of the implications of B, and consider a widened set of assumptions that includes A as a proper subset (call it A+). Now suppose that C has complete empirical validity. Then so has B and so has A. However, the same cannot be said for A+. Similarly, the empirical validity of C- does not of itself impart validity to A or to B.

If Samuelson is correct, Friedman’s methodology is scientifically flawed. For example, it may well be the case that certain characteristics of the model of perfect competition conform to reality (C-, as Friedman would argue). However, other parts do not (A, as Friedman would acknowledge). In such circumstances, the model (B in Samuelson’s broader sense) has not been validated, and economists should proceed with extreme care in making use of it even if the evidence strongly and consistently conforms to C-.

Samuelson’s deconstruction is valid, however, only for a methodology that views theory as moving from cause to effect, the very methodology that Friedman rejected in his 1953 essay. The real question for Friedman is to gauge the extent to which the assumptions of a theory are adequate for the job in hand, which is to generate predictions that conform with the available evidence. He rejects on methodological grounds the notion advanced by Samuelson (1963) that a theory must be realistic in all its aspects.

Friedman’s central thesis, in a nutshell, is that ‘the ultimate test of the validity of a theory is not conformity to the canons of formal logic, but the ability to deduce facts that have not yet been observed, that are capable of being contradicted by observation, and that subsequent observation does not contradict’ (Friedman 1953, 300). In this respect, Friedman’s 1953 views on methodology, though contentious at the time, proved to be consistent with those of Karl Popper and provided the intellectual foundations first for the new Chicago School of Economics and subsequently for a significant section of the economics profession.

This shift of methodology proved to be very important for Friedman’s subsequent empirical re-evaluation of Keynesian economics and for his empirical work on the role of money in the macro-economy. By persuading many economists that economic science could be advanced by exposing the predictions of very simple models to the evidence, Friedman would be able to demonstrate, for example, that the quantity equation was a better predictor of economic behavior than the Keynesian income-expenditure equation. This result would have enormous implications for reining in fiscal interventions that threatened individual liberties.



Fiscal Policy is Overrated

In evaluating the evolution of a scholar’s career, it is important not to do so from the end-point of that career, with the benefit of hindsight. This is particularly so when evaluating Friedman’s critique of Keynesian economics. Ultimately, the success of this critique would constitute his most important contribution to classical liberal political economy and a significant assist to the public choice revolution. However, Friedman’s critique of Keynesian economics was piecemeal in nature, and certainly did not start out as a grand design.

Friedman was always more impressed with the scholarship of Maynard Keynes than with that of the Keynesians. Indeed, Friedman viewed Keynes, like himself, as a purveyor of the economics of Alfred Marshall (Hirsch and de Marchi 1990, 187). Keynes’s General Theory (1936) made an indelible impression on economic thinking during the immediate postwar years, and the young Friedman was sufficiently impressed by it to allow the Keynesian model to dictate much of his research agenda during the 1940’s and 1950’s.

Friedman’s early preoccupation with the Keynesian model was motivated not by ideological concerns but rather by empirical puzzles surrounding a relationship at the core of the Keynesian system, namely the consumption function. According to the Keynesians, current consumption expenditure was a stable function of current income. A fundamental psychological rule of any modern community dictated that the marginal propensity to consume was less than one and that the average propensity to consume declined with income.

These two conjectures became matters of policy importance. Governments seized on the first as a scientific justification for deficit spending during periods of recession. Economists seized on the second to consolidate the secular stagnation thesis, suggesting that advanced economies would be condemned to stagnation in the absence of deficit financing. In both instances, the fallacy of a free lunch enticed the unwary into embracing the palliative of government growth: government apparently could exploit the consumption function, raising household incomes through increased government expenditure, and thereby achieve a leveraged impact on the macro-economy through the multiplier mechanism.
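The arithmetic behind that allure is the textbook multiplier. The following Python sketch is a stylized illustration only (the 0.8 propensity to consume and the 100-unit injection are assumed numbers, not estimates from the period): each round of spending generates income, a fraction of which is re-spent, so total income rises by a multiple of 1/(1 - MPC) of the initial outlay.

mpc = 0.8          # assumed marginal propensity to consume
injection = 100.0  # assumed initial increase in government spending

total, spending = 0.0, injection
for _ in range(200):    # successive rounds of induced consumption
    total += spending
    spending *= mpc     # each round, a fraction mpc of new income is re-spent
print(round(total, 1))  # ~500.0, i.e. injection / (1 - mpc)

It was precisely this apparently mechanical leverage that Friedman’s permanent income hypothesis would undermine.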

In his book, A Theory of the Consumption Function (Friedman 1957), Friedman addressed a number of empirical puzzles surrounding this theory. Early work using US data for the interwar period had seemed to support the theory (Friedman 1957, 3). However, postwar studies were more problematic. Estimates of saving in the United States made by Kuznets for the period since 1899 revealed no increase in the percentage of income saved during the past half century despite a substantial rise in real income (Kuznets 1952, 507-526). The ratio of consumption expenditure to income was decidedly higher than had been computed from the earlier studies.

Examination of budget studies for earlier periods strengthened the appearance of conflict. The average propensity to consume was roughly the same for widely separated dates despite substantial differences in average real income. Yet each budget study separately yielded a marginal propensity decidedly lower than the average propensity. Finally, the savings ratio in the period after World War II was sharply lower than that predicted by the relationships estimated for the interwar period. According to Friedman’s methodology something was seriously amiss. The Keynesian consumption function had failed a basic empirical test (Friedman 1957, 4).

In this book, Friedman adapted a dynamic theory of Irving Fisher (1930) to explain some of the empirical anomalies that had arisen in attempts to test the static Keynesian model against time series and cross-section data on consumption and income (Sargent 1987). The book is Friedman’s best purely scientific contribution and the work that best reflects his methodology of positive economics (Walters 1987).

Irving Fisher (1930) had posited that consumption should be a function of the present value of income, not of its current value. Friedman accepted the dynamic implications of this theory, but replaced Fisher’s concept with the concept of ‘permanent income’. He posited that consumers separated their current income into two parts, namely a permanent part equivalent to the income from a bond and a transitory part equivalent to a non-recurring windfall. In testing the theory of the consumption function against cross-section data, econometricians must resolve a signal extraction problem in order to estimate the permanent component of income from observations on the sum of the permanent and the transitory components of income.
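A minimal sketch of that signal-extraction logic, under the textbook assumption that the permanent and transitory components are uncorrelated with known variances (the variances and sample below are invented for illustration, not drawn from Friedman’s data): the best linear estimate of permanent income shrinks observed income toward its mean by the share of permanent variance in total variance.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
var_p, var_t = 9.0, 4.0                       # assumed component variances
perm = rng.normal(100.0, np.sqrt(var_p), n)   # unobserved permanent income
trans = rng.normal(0.0, np.sqrt(var_t), n)    # unobserved transitory windfalls
income = perm + trans                         # only the sum is observed

theta = var_p / (var_p + var_t)               # signal-extraction weight
est = income.mean() + theta * (income - income.mean())
print(round(float(np.corrcoef(est, perm)[0, 1]), 3))  # high, but below 1.0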

To model the time series data, Friedman introduced the concept of ‘adaptive expectations’ to create a statistical representation of permanent income. Agents were assumed to form expectations about the future path of income as a geometric distributed lag of past values. The decay parameter in the distributed lag ought to equal the factor by which the consumer discounted future utility. Friedman estimated his model on time series data using the method of maximum likelihood.
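In recursive form the geometric distributed lag is compact. The sketch below illustrates the mechanism only; the decay parameter is an assumed value, not Friedman’s maximum-likelihood estimate. Permanent income is revised each period by a fraction of the current forecast error, which is algebraically equivalent to a geometrically declining weighted average of past incomes.

def permanent_income_path(incomes, lam=0.33):
    """Adaptive expectations: y_p is updated by lam times the forecast error."""
    y_p = incomes[0]              # initialize at the first observation
    path = []
    for y in incomes:
        y_p += lam * (y - y_p)    # geometric distributed lag, recursive form
        path.append(round(y_p, 1))
    return path

# A one-year windfall in year 3 barely moves estimated permanent income,
# so consumption, which tracks permanent income, barely responds to it.
print(permanent_income_path([100, 100, 140, 100, 100]))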

On this basis Friedman (1957) demonstrated that there exists a ratio between permanent consumption and permanent income that is stable across all levels of permanent income, but that depends also on other variables, most notably the interest rate and the ratio of wealth to income. The transitory components of income have no effect on consumption except as they are translated into permanent income.

From the perspective of the 1950’s, Friedman’s analysis had very important consequences for macroeconomic policy. First, it suggested that the immediate fiscal policy multiplier was markedly lower than that posited by the Keynesians. Second, it indicated that the dynamic responses of income to fiscal policy shocks were much more complicated than those indicated by textbook IS-LM curves. Both results suggested caution in the use of fiscal policy as a stabilization device.

Although the Keynesians argued that fiscal policy should be used even-handedly across the business cycle, countering recessions with budget deficits and booms with budget surpluses, the political system confounded such naïve expectations (Buchanan and Wagner 1977). The political incentives to maintain budget deficits during booms as well as slumps simply overwhelmed economic logic. Therefore, to the extent that Friedman’s theory dampened economists’ enthusiasm for an active fiscal policy, it helped to dampen the rate of growth of government. Friedman’s book, although devoid of any notions of public choice, nevertheless provided an invaluable foundation for the later work by Buchanan and Wagner (1977) on the political economy of deficit finance.

It is important to note that Friedman has never argued that fiscal policy is completely impotent. His own theory of adaptive expectations indeed supposes that individual responses to fiscal policy occur with lags, allowing fiscal policy to exert an influence on the macro-economy during the period of adjustment. Friedman’s crucial insight is that monetary policy typically is more effective than fiscal policy as an instrument of macro-economic policy.

It is also important to note that New Keynesian views are now very much the mainstream in macro-economics, albeit operating within a rational expectations framework that offers only limited scope for fiscal intervention and a much greater role for monetary policy than was envisaged by the original followers of Keynes.

Money Matters

Friedman’s interest in the role of money in the macro-economy was first sparked in 1948 when Arthur Burns at the NBER asked him to research the role of money in the business cycle. Thus began a thirty-year program of research with Anna Schwartz that would demonstrate that money matters – indeed that it matters a great deal – and that would further erode the perceived empirical importance of the Keynesian model.

By 1948, Keynesian economic theory ruled triumphant throughout the academies of the Western World. The classical quantity theory for the most part had been eliminated from textbook economics; and where it was mentioned it was treated as a curiosum. The conventional view throughout the economics profession was that money did not matter much, if at all. What really mattered was autonomous spending, notably in the form of private investment and government outlays. Fiscal policy was crucial; monetary policy was all but irrelevant in the sense that ‘you cannot push on a string’.

Only the University of Chicago, through the teachings of Henry Simons, Lloyd Mints, Frank Knight and Jacob Viner, had stood resolutely against this pervasive doctrine as it swept through the academy during the 1930’s and 1940’s. Friedman was well-versed in the subtle version of the quantity theory expounded at Chicago, a version in which the quantity theory was connected and integrated with general price theory and became ‘a flexible and sensitive tool for interpreting movements in aggregate economic activity and for developing relevant policy prescriptions’ (Friedman 1956, 3).

Systematically, over the period 1950-1980, Friedman and his research associates would challenge the empirical relevance of the Keynesian model by demonstrating the empirical superiority of the quantity theory as expounded at Chicago. By the time that his research program was complete, and prior to the rational expectations revolution, almost all economists would recognize that money did matter, that what happened to the quantity of money had important effects on economic activity in the short run and on the price level in the long run (Friedman and Friedman 1998, 228).

Before Keynes, the quantity theory of money had played an important role in classical economics. Using the behavioral equation MV = PY, classical theorists had argued that the income velocity of circulation of money, V, was a constant; that real income, Y, was unaffected by changes in the quantity of money (the so-called classical dichotomy); and therefore that changes in the supply of money, M, directly affected the price level, P. Keynes (1936) derided this naïve textbook version of the quantity theory, arguing instead that V was not a constant but was highly variable and that it served as a cushion to prevent any change in the supply of money from exerting an impact on either real income or the level of prices.
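In growth-rate form the quantity equation becomes m + v = p + y, with lower-case letters denoting growth rates. The sketch below uses assumed numbers purely for illustration: with velocity constant and real output determined by real forces, money growth in excess of real growth passes one-for-one into inflation.

m, v, y = 0.07, 0.00, 0.03  # assumed growth rates: money, velocity, real output
p = m + v - y               # growth-rate form of MV = PY
print(round(p, 2))          # 0.04: 7% money growth less 3% real growth -> 4% inflation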

In conjunction with his work at the NBER, Friedman established a Workshop in Money and Banking at the University of Chicago. The first product of this Workshop was a book, Studies in the Quantity Theory of Money (1956), which Friedman edited. In retrospect, this publication was the first major step in a counter-revolution that succeeded in restoring the quantity theory to academic respectability. There is no evidence that Friedman was aware at that time of the dimensions of the impending battle. His express intent in writing the introductory essay was simply to ‘set down a particular “model” of a quantity theory in an attempt to convey the flavor of the (Chicago) oral tradition’ (Friedman 1956, 4). Of course, the impact of his essay would be much more dramatic than he and his colleagues could possibly have foreseen.

Friedman’s introductory essay provided a subtle and sophisticated restatement of the quantity theory of money as a stable money-demand function (Breit and Ransom 1998, 228). Unlike the classical economists, Friedman rejected the notion that V, the income velocity of circulation of money, was a constant. Instead, he modeled V as a stable function of several variables, since money was an asset, one way of holding wealth. Within this framework, he posited that V would respond to nominal monetary expansion in the short run by accentuating rather than by cushioning the impact of such expansion on nominal income. This restatement became recognized as the theoretical position of the Chicago School on monetary economics.

The four empirical studies in the book – dealing with inflationary and hyperinflationary experiences in Europe and the United States – provided support for the quantity theory in its restated form by demonstrating a striking regularity in economic responses to monetary changes. The most significant finding was that velocity was a stable function of permanent income. Since money is a luxury good, the demand for which rises more than proportionately as income increases, velocity would tend to decline over time as income rose. The monetary authority would therefore have to increase the stock of money to offset this decline in velocity if it wished to maintain price stability (Breit and Ransom 1998, 230).

These results met with skepticism from Keynesian economists who counter-claimed that the supply of money merely accommodated demand and did not impact independently on the macro-economy. It would take Friedman and his colleagues the better part of a decade of high-quality theoretical and empirical analysis to mount a persuasive case for the quantity theory.

One important component of this research program was the comparative test (Friedman and Meiselman 1963) in which a simple version of the income-expenditure theory, C = a + kA (consumption as a function of autonomous expenditure), was compared with a simple version of the quantity theory, C = b + vM (consumption as a function of the money stock). For the period 1897 to 1958, using annual data, and for a shorter period using quarterly data, the quantity theory performed better than the income-expenditure theory, implying that v was more stable than k, except for the period of the Great Depression.
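The form of the horse race is easy to reproduce. The sketch below runs it on synthetic data (the data-generating process, coefficients and sample length are invented for illustration; the original test used US data): each one-variable model is fit by ordinary least squares and the two are compared on goodness of fit.

import numpy as np

rng = np.random.default_rng(1)
T = 62                                           # assumed sample length
M = 50.0 + np.cumsum(rng.normal(2.0, 1.0, T))    # synthetic money stock
A = 30.0 + np.cumsum(rng.normal(1.0, 2.0, T))    # synthetic autonomous spending
C = 10.0 + 1.8 * M + rng.normal(0.0, 3.0, T)     # consumption tied to money here

def r_squared(x, c):
    X = np.column_stack([np.ones_like(x), x])    # fit c = intercept + slope * x
    beta, *_ = np.linalg.lstsq(X, c, rcond=None)
    resid = c - X @ beta
    return 1.0 - resid.var() / c.var()

print(round(r_squared(M, C), 3))   # quantity theory:    C = b + vM (fits well)
print(round(r_squared(A, C), 3))   # income-expenditure: C = a + kA (fits worse)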

More influential was the monumental book co-authored with Anna Schwartz, A Monetary History of the United States, 1867-1960 (Friedman and Schwartz 1963). This piece of empirical research offered substantial support for the restated quantity theory and sent shock waves through the economics profession by explaining the Great Depression in terms of the failure of the Federal Reserve to deploy effective open-market operations that would have prevented the banking crisis that brought about a significant decline in the supply of money (Breit and Ransom 1998, 239).

Subsequent research by Friedman determined (1) that the impact of a fiscal deficit on nominal income was short-lived whereas, after a lag, an increased rate of growth of the nominal money supply permanently augmented the rate of price inflation; (2) that the adjustment of nominal income to an increased rate of monetary growth occurred with a long and variable lag; and (3) that in the long run additional monetary growth affected only the rate of inflation and exerted virtually no effect on the level or rate of growth of real output (Walters 1987, 425).

So successful was Friedman’s empirical work in supporting the quantity theory that economists began to clamor for an explicit theory of the role of money in income determination, a theory capable of generating the propositions supported by the empirical investigations. In response Friedman published two strictly theoretical articles (Friedman 1970, 1971) that sparked critical reviews from leading Keynesian scholars. The debate between the Keynesians and the quantity theorists would continue for another decade before the worldwide stagflation of the 1970’s brought it to a close with a decisive victory for Friedman’s position.

The restoration of the quantity theory undoubtedly weakened the reliance by governments on fiscal policy as a means of countering the business cycle. This alone was a major contribution to classical liberalism, weakening as it did the justification for government macro-economic intervention through fiscal policy. However, Friedman would fail to persuade the economics profession and the wider public that discretionary monetary policy should likewise be eschewed in favor of a non-discretionary rule expanding the nominal money supply at the underlying rate of growth of productivity.

Although Friedman was very slow to recognize it, failure in this regard reflected more the pressures of public choice than any weakness in Friedman’s research on the long and variable lags in the relationship between changes in the nominal supply of money and changes in the behavior of nominal income (Rowley 1999, 419). The Federal Reserve Board and its influential staff in the United States and central bank systems elsewhere would not easily be dislodged from playing an active role in monetary policy.

Failure was also, in part, the consequence of Friedman’s success in promoting free markets. Deregulation of the banking system made it difficult from the early 1980’s onwards to determine just which M should be subjected to the non-discretionary rule. Perhaps most important, however, was Friedman’s neglect (typical of the Chicago School) of any detailed institutional analysis of the monetary sector. In the absence of such an analytical framework, the call for non-discretionary policy too easily could be categorized as dogma rather than as science.

Fundamentally, of course, the case in favor of the non-discretionary rule collapsed during the 1980’s once it became apparent that the demand for money was unstable in the wake of banking deregulations.



The Fallacy of the Phillips Curve

An important component of the Keynesian orthodoxy during the 1960’s was the notion that there existed a stable negative relationship between the level of unemployment and the rate of price inflation. This relationship was characterized as the Phillips curve in recognition of the celebrated 1958 paper by A.W. Phillips that plotted unemployment rates against the rates of change of money wages and found a significant statistical relationship between the two variables.

Keynesian economists had focused on this apparent relationship to persuade government that there existed a permanent trade-off between price inflation and unemployment, allowing choices to be made between alternative rates of unemployment and alternative rates of price inflation. By accepting a modest increase in prices and wages, politicians, if they so wished, could lower the rate of unemployment in an economy.

Friedman had questioned the validity of the Phillips curve in the early 1960’s, but without any significant intellectual impact. In his Presidential Address to the American Economic Association in December 1967 (Friedman 1968), Friedman raised the level of this questioning, arguing convincingly that the concept of the stable Phillips curve was an illusion and that any trade-off that existed between the rate of inflation and the rate of unemployment was strictly temporary in nature. Once again, Friedman placed himself directly against the thrust of Keynesian doctrine, deconstructing it from the perspective of Marshallian economics (De Vroey 2001).

Keynes had rendered money non-neutral and had made fiscal policy potent in its effects on output by withdrawing one equation (the labor supply schedule) and one variable (money wages) from the classical model (Sargent 1987, 6). The Keynesian model was thus short one equation and one variable by comparison with the classical model. To close that gap, the Keynesians had incorporated the Phillips curve as a structural relationship. In so doing, they mis-interpreted the true nature of labor market equilibrium.

Friedman in his 1967 Address re-asserted the classical assumption that markets clear and that agents’ decision rules are homogeneous of degree zero in prices. When agents confront inter-temporal choice problems, the relevant price vector includes not only current prices but also expectations about future prices. This the proponents of the stable Phillips curve had failed to recognize.

The trade-off between inflation and unemployment captured in the Phillips curve regression equations represented the outcomes of experiments that had induced forecast errors in private agents’ views about prices. If the experiment under review was a sustained and fully anticipated inflation, Friedman asserted, then there would exist no trade-off between inflation and unemployment. The Phillips curve would be vertical and the classical dichotomy would hold.

Friedman in his 1967 paper utilized a version of adaptive expectations to demonstrate that any trade-off between inflation and unemployment would be strictly temporary and would result solely from unanticipated changes in the inflation rate. The natural rate of unemployment, defined essentially in terms of the ‘normal equilibrium’ of Marshall rather than in the Walrasian terms of the subsequent rational expectations school (De Vroey 2001, 130), was a function of real forces. If monetary expansion fools the workers temporarily, so that they do not recognize that their real wage has been lowered, it might stimulate a temporary reduction in the level of unemployment below the ‘normal equilibrium’ (or natural) rate. As soon as the money illusion dissipates, unemployment will drift back to the natural rate. To keep unemployment below the natural rate requires an ever-accelerating rate of inflation.
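The accelerationist logic fits in a few lines. The sketch below is a stylized simulation with assumed parameter values (it is not a calibration of Friedman’s address): an expectations-augmented Phillips curve is combined with adaptive expectations, and holding unemployment one point below the natural rate forces inflation to ratchet upward year after year.

natural, target = 0.05, 0.04   # natural rate, and a target held one point below it
a, lam = 1.0, 0.5              # assumed Phillips slope and expectations adjustment

expected = 0.02                # initial expected inflation
for year in range(1, 11):
    inflation = expected + a * (natural - target)  # pi = pi_e + a*(u_n - u)
    expected += lam * (inflation - expected)       # expectations catch up
    print(year, round(inflation, 4))               # inflation rises every year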

On the basis of this logic, Friedman predicted that the apparent Phillips curve trade-off evident in the data from the 1950’s and 1960’s would disappear once governments systematically attempted to exploit it. In the 1970’s, the Phillips curve trade-off vanished from the data. Indeed, estimated Phillips curves became positive as rising rates of inflation began to coincide with rising rates of unemployment.

Once again Friedman’s positive economic analysis paved the way for a reduction in the extent of government economic intervention, this time through monetary policy. Economic events would ultimately have invalidated the Phillips curve hypothesis in any case. However, by directing the attention of economists to model mis-specification, Friedman hastened the process, further weakening the economic case for government intervention in the macro-economy.



The Reason of Rules

Friedman’s views on monetary policy were greatly influenced by Henry Simons’ teachings on the superiority of rules over discretionary policy (Breit and Ransom 1998, 241). From the outset of his career, but with increased vigor following his empirical work on the quantity theory of money, not least his analysis of the Great Contraction (Friedman and Schwartz 1963), Friedman argued in favor of committing macro-economic policy to a series of monetary and fiscal rules designed to reduce the degree of discretionary power available to government agents. It should be noted, however, that this argument was not based on any knowledge of public choice. Rather, Friedman was concerned that central banks typically failed to predict the pattern of the business cycle and the distributed lags of monetary intervention, thus destabilizing the macro-economy.

Friedman’s advocacy of rules stemmed from recognition that monetary policy could not peg interest rates, could not generate full employment and could not stabilize cyclical fluctuations in income (Butler 1985, 177). Yet monetary policy had considerable power for mischief, since it affected every part of the economy. Therefore, it deserved great respect. In particular, because changes in the supply of money exerted an impact on the macro-economy only with long and variable lags, the potential for destabilizing policy intervention was high even at the hands of a benevolent government.

At different times, Friedman advocated two comprehensive and simple plans for coordinating monetary and fiscal policies. In 1948, he advocated an automatic adjustment mechanism that would overcome the problem of the lag and that would be more likely to move the economy in the right direction than would discretionary monetary policy.

Friedman advocated (1) the imposition of 100 per cent reserve requirements on the banks, making the supply of money equal to the monetary base and (2) a prohibition on government placing interest-bearing debt with the public. The Federal Reserve would be required to monetize all interest-bearing government debt, so government deficits would lead to increases in the monetary base, and government surpluses would lead to reductions in that base. Such a mechanism would act as an automatic stabilizer and would also assign a clear responsibility for growth in the money supply (and in inflation) to its primary determinant, the federal deficit (Sargent 1987, 9).
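A minimal sketch of the 1948 mechanism, with an invented deficit path purely for illustration: under 100 per cent reserves the money supply equals the monetary base, and the base moves one-for-one with the fiscal balance, expanding automatically in recessions and contracting in booms.

deficits = [20.0, 10.0, -5.0, -15.0, 25.0]  # assumed fiscal balances (+ = deficit)

base = 500.0
for d in deficits:
    base += d        # deficits are monetized; surpluses retire base money
    money = base     # 100% reserves: the money supply equals the monetary base
    print(money)     # money expands in slumps, contracts in booms, by rule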

If implemented, Friedman’s proposed rule would have eliminated much of the discretionary power that enabled governments to implement Keynesian macroeconomic policy. For that reason alone, it was doomed during the era of Keynesian hegemony. In addition, it implied the abolition of the central banking institutions that determine the course of monetary policy. Such powerful pillars of the economic establishment would not easily surrender their power and wealth by stepping down in favor of an automatic rules-based system.

By 1960, Friedman pragmatically recognized that central banks, open market operations and fractional reserve banking were here to stay, at least for the foreseeable future. In such circumstances, he advanced an alternative rules-based mechanism that was in some respects quite contradictory to his earlier, preferred ideal. Its essential element would be a legislated monetary rule designed to ensure the smooth and regular expansion of the quantity of money.

According to this mechanism, the Federal Reserve would be required by statute to follow a rule of increasing high-powered money by a constant k-percent per annum, where k was a small number designed to accommodate productivity growth in the economy. This rule would permanently limit the fiscal authorities’ access to the printing press to the stipulated k-percent increase and would force them to finance current deficits only by credibly promising future surpluses (Sargent 1987, 9).
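The rule itself is almost trivially simple to state, which was part of its appeal. In the sketch below (k and the productivity trend are assumed values), the mandated money path is computed and, using the growth form of the quantity equation with stable velocity, trend inflation comes out at roughly zero when k matches real growth.

k, real_growth = 0.03, 0.03       # assumed k-percent rule and productivity trend

money = 100.0
for year in range(1, 6):
    money *= 1.0 + k              # high-powered money grows k percent, no discretion
    print(year, round(money, 2))

print(round(k - real_growth, 2))  # implied trend inflation from m - y with v = 0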

Cyclical movements in real income would not be avoided by this non-discretionary mechanism. However, the non-discretionary nature of the rule would prevent some of the wilder swings induced by inept and ill-timed monetary measures (Butler 1985, 185).

So far, this advocacy has failed, not least because of public choice pressures combined with some skepticism as to the importance of high-powered money as the key monetary variable. Nevertheless, Friedman’s advocacy has not been in vain. Monetary authorities in the United States and elsewhere are now aware of the relationship between the quantity of money and the price level. Throughout the world, there is far more reliance on monetary restraint as the basis for price stability than was the case during the Keynesian era. Such monetary restraint has increased the political costs of fiscal expansion. Once again, a largely positive program of economic analysis has served well the cause of liberty.


