Why the "Sound Money" Components of Popular Economic Freedom Indexes Should Be Used with Caution

Institutions matter. Economists of the classical period knew that well. In recent years, economists have increasingly included institutional variables in their empirical work. The economic freedom indexes from the Fraser Institute and the Heritage Foundation have been among the most widely used institutional indicators.

The purpose of an economic freedom index, according to Heritage, is to “document the positive relationship between economic freedom and a variety of positive social and economic goals.” Many studies support that claim, finding that countries with high economic freedom scores are, in fact, more prosperous and dynamic than those that are less free. However, not all aspects of economic freedom turn out to be equally important. In earlier posts, I have been critical of the components of the economic freedom indexes that focus on the size of government and regulation. Here, I take on those that focus on price stability—the Fraser “Sound Money” component and the Heritage “Monetary Freedom” component.

I find that these price stability components add little to our understanding of economic freedom. Furthermore, because they incorporate an exaggerated fear of even moderate inflation, an attempt to achieve maximum price stability, as defined by these indicators, would be less likely to bring prosperity than to undermine it.

Is price stability an institution or a policy outcome?

The first problem with the Fraser and Heritage price stability indicators is that they do not really measure economic freedom in the sense that it underlies other components of the indexes. Robert Lawson, a senior member of the team that publishes Fraser’s Economic Freedom of the World reports, once wrote that an economic freedom index is, or should be, only that—“not an index of economic growth policies, efficient government provision of public goods, macroeconomic stabilization policies, or ideal income distribution policies.” If so, then the sound money indicators, as indexes of the success of monetary policy, are just what an economic freedom index should not be.

Measures of inflation and money growth account for three-quarters of the data that go into the Fraser Sound Money indicator and four-fifths of the Heritage Monetary Freedom indicator. Rather than measuring the freedom of individuals to work, produce, consume and invest as they see fit, they measure the performance of central banks. To be sure, if we are going to have central banks, we want them to do a good job, but conceptually, good central banking and economic freedom are different things.

Instead, it seems to me that the natural meaning of economic freedom, as applied to monetary matters, ought to be the freedom to use and exchange whatever kind of money one wants for whatever purpose. To their credit, neither Fraser nor Heritage entirely neglects that notion of monetary freedom. Fraser’s Sound Money includes a subcomponent, weighted at 25 percent, that measures the extent of restrictions on ownership of foreign currency bank accounts. The Heritage Monetary Freedom indicator gives a weight of up to 20 percent to a measure of the extent of price controls. If I were to try to build a better measure of monetary freedom, I would begin by including both of those, but I would not stop there.

Why not, for example, include a measure of central bank independence, much as both the Fraser and Heritage indexes include measures of judicial independence in evaluating the quality of legal institutions? There is a large literature linking central bank independence to the desired policy outcome of price stability. (This IMF working paper discusses measurement issues and provides links to previous literature.) In a similar vein, it might make sense to rate central banks on their use of policy rules. Many economists have argued that using policy rules to constrain or even fully replace central bank discretion can improve price stability. (A recent conference sponsored by the Boston Fed provides a detailed airing of the case for policy rules and contained discretion.)

Another indicator for possible inclusion would be the presence or absence of black markets for foreign currency, and, where black markets exist, the spread between the official and black-market exchange rates. (Fraser does include a measure of black-market exchange rates, although it appears as a subcomponent of the Freedom of Trade category rather than of Sound Money.) The multiple official exchange rates for different types of transactions that some countries maintain also represent limitations on freedom.

Finally, we might try to measure the degree to which a country’s monetary system is open to competing forms of money—a subject that Larry White, among others, has written about at length. An up-to-date treatment would want to cover regulatory limits on the use of cryptocurrencies.

A greater emphasis on these and other institutional variables, and less on the rate of inflation itself, would make the monetary components of the Fraser and Heritage indicators more consistent with the conceptual framework that underlies their other components.

An exaggerated fear of inflation

Setting aside the conceptual question of whether sound money is a form of freedom, an institution, or simply good macroeconomic policy, we come to the second major problem with the Fraser and Heritage price stability measures. By enshrining zero percent inflation as the ideal, both of them reflect an exaggerated fear of even moderate inflation that is not supported by the preponderance of evidence.

To understand why, we need to dig into some technical details, starting with Fraser’s Sound Money. This indicator has four subcomponents: the growth rate of the M1 money stock (currency plus checkable deposits) relative to the growth of real GDP; the standard deviation of inflation over the previous five years; the rate of inflation in the most recent year; and a measure of the right to own foreign currency bank accounts. The raw data for each subcomponent are converted to a scale of 0 to 10, with higher numbers representing greater price stability, and then averaged to get the full Sound Money indicator.

Of the four subcomponents, M1 growth appears to be a holdover from Milton Friedman’s personal involvement in the early stages of Fraser’s Economic Freedom of the World project. Few, if any, economists today consider holding M1 growth equal to the long-run rate of real GDP growth to be a reasonable target for monetary policy. One of the main reasons is that the difference between the rate of money growth and the rate of real economic growth equals the rate of inflation only if M1 velocity (the ratio of nominal GDP to the money stock) is constant.

In practice, however, as the following chart shows, velocity can be highly variable. For example, from 2005 to 2008, the Fraser money growth index averaged an excellent 9.7, but because velocity was rising, inflation averaged 3.2 percent. From 2009 to 2014, inflation slowed to 1.6 percent, but the Fed’s policy of quantitative easing caused a rapid growth of the money stock matched by a sharp drop in velocity. During this period, the Fraser money growth index fell to 8.45, its lowest level in the 45 years for which it has been calculated.
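The velocity point follows directly from the quantity equation, MV = PY: expressed in growth rates, inflation is approximately money growth plus velocity growth minus real GDP growth. A minimal sketch with hypothetical numbers (not the actual Fraser data) shows why the money growth subcomponent can mislead when velocity moves:

```python
# Quantity-equation identity: M * V = P * Y. In growth-rate terms,
# inflation ~ money growth + velocity growth - real GDP growth.
# All numbers below are hypothetical, for illustration only.

def implied_inflation(money_growth, velocity_growth, real_growth):
    """Approximate inflation rate (percent) implied by the identity."""
    return money_growth + velocity_growth - real_growth

# With constant velocity, 5% money growth and 3% real growth
# imply 2% inflation:
print(implied_inflation(5.0, 0.0, 3.0))    # 2.0
# But a QE-style episode can pair rapid money growth with low
# inflation if velocity falls sharply:
print(implied_inflation(10.0, -7.0, 1.5))  # 1.5
```

Holding money growth equal to real growth guarantees low inflation only in the first case, which is exactly why velocity shifts undermine the M1 subcomponent.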

Beyond the problems with M1 growth, there are problems with the way Fraser’s Sound Money indicator measures inflation itself. The inflation subcomponent is based on the rate of CPI inflation in the most recent year, but the raw data are manipulated extensively. First, the distribution of inflation rates is truncated at 50 percent per year. All countries that experience hyperinflation, no matter how rapid, are assigned the same score. Next, any deflation, that is, any negative rate of inflation, is converted to its absolute value. Finally, the data are converted to a scale of 0 through 10. This procedure can be summarized by the formula

FINFi = (50 – |πi|)/5

where FINFi is the Fraser inflation score for country i and πi is the raw inflation rate. By this method, then, an inflation rate of 0 percent becomes a perfect 10, a rate of either +2 percent or -2 percent gives a score of 9.6, and any inflation rate of 50 percent or higher earns a zero.
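For readers who want to experiment, the scoring procedure described above can be transcribed in a few lines. This is a straightforward restatement of the published formula, not Fraser’s own code:

```python
def fraser_inflation_score(inflation_pct):
    """Fraser-style inflation subcomponent on a 0-10 scale.

    Deflation is converted to its absolute value, and the
    distribution is truncated at 50 percent per year.
    """
    pi = min(abs(inflation_pct), 50.0)
    return (50.0 - pi) / 5.0

print(fraser_inflation_score(0))    # 10.0 - zero inflation is "perfect"
print(fraser_inflation_score(2))    # 9.6
print(fraser_inflation_score(-2))   # 9.6 - deflation scored identically
print(fraser_inflation_score(500))  # 0.0 - all hyperinflations look alike
```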

This procedure exaggerates the harm done by moderate rates of inflation in three ways. First, it assumes a priori that the optimal rate of inflation is zero. Second, by arbitrarily truncating the distribution at an inflation rate of 50 percent, it makes the scores of countries with moderate inflation look worse than they otherwise would. (For example, if the maximum permitted value of inflation were 100 percent rather than 50 percent, the score for a country with 2 percent inflation would be 9.8 rather than 9.6.) Third, by using absolute values of inflation rates, it implicitly assumes that a rate of +2 percent inflation is just as harmful as deflation of -2 percent, something that few economists believe to be the case. (See here for a full discussion of deflation and its effects.)

The Heritage Monetary Freedom indicator uses a somewhat different method, but one that similarly exaggerates the harm done by moderate inflation and understates the risks of deflation. Like Fraser, Heritage uses absolute values to convert deflation to an equivalent rate of inflation. Rather than truncating the distribution of observed inflation rates, Heritage reduces the statistical impact of extreme hyperinflation values by basing its score on the square root of observed inflation rates, multiplied by a constant. Rather than the most recent year’s inflation, it uses a three-year weighted average of inflation rates, and rather than a scale of 0 to 10, it uses a scale of 0 to 100, with higher values indicating less inflation. The full procedure is given by the formula

HINFi = (100 – 6.33 × √π*i) – CONTROLS

where HINFi is the Heritage inflation score for country i, π*i is a weighted average of the absolute values of the last three years’ inflation rates, and CONTROLS is a penalty of up to 20 points depending on the pervasiveness of price controls.

The effect of this formula is to reduce the scores of countries with moderate inflation rates by even more, in comparison to those with zero inflation, than does the Fraser method. For example, raising the inflation rate from zero to 2 percent lowers the Heritage inflation score from 100 to 91 (assuming no price controls), compared to a smaller relative reduction from 10 to 9.6 using the Fraser formula.
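The Heritage formula can be sketched the same way. Clamping the result to the 0-to-100 range is my assumption, since very high inflation rates would otherwise produce negative scores:

```python
import math

def heritage_inflation_score(avg_inflation_pct, controls_penalty=0.0):
    """Heritage-style inflation score on a 0-100 scale.

    avg_inflation_pct: three-year weighted average of the absolute
    values of inflation rates. controls_penalty: a deduction of up
    to 20 points for price controls. The clamp to [0, 100] is an
    assumption, not part of the published formula.
    """
    score = 100.0 - 6.33 * math.sqrt(abs(avg_inflation_pct)) - controls_penalty
    return max(0.0, min(100.0, score))

print(round(heritage_inflation_score(0.0)))  # 100 - zero inflation scores best
print(round(heritage_inflation_score(2.0)))  # 91
```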
Rather than making a priori assumptions that idealize zero inflation, penalize moderate inflation, and understate the risks of deflation, it would make more sense to look at the actual effects of inflation as revealed by empirical studies. There is an abundant literature to draw on.

Probably the most extensively studied question is the relationship between price stability and economic growth. (See here and here for recent surveys of the literature.) The question is not simply one of whether inflation is good or bad for growth. Three frequent findings suggest that a more nuanced view is appropriate.

  • Many studies find a nonlinear relationship, in which inflation up to a certain threshold is associated with higher growth, and inflation beyond that threshold with lower growth.
  • The optimal level of inflation—that is, the rate associated with the highest growth rates—appears to be lower in more developed economies.
  • Country-specific effects are strong. Both the optimal inflation rate and the degree of harm from excess inflation vary according to circumstances.

Some of the studies that reached these conclusions were done years ago, raising the question of whether they continue to hold for the period leading up to the global financial crisis and the recovery from it. For a quick-and-dirty check, I examined the most recent ten years of IMF data on inflation and growth. The findings were generally consistent with those of past studies:

  • Nonlinear formulations of the inflation-growth relation produced significantly better fits than linear formulations.
  • The relationship of inflation to growth varied according to income level. The rate of inflation associated with the best growth performance was lower for countries in the highest income quartile than for middle- and lower-income countries.
  • Although there was a statistically significant relationship between growth and inflation overall, the fit was far from tight. As in previous studies, country-specific factors accounted for a large part of the cross-country variations in growth.

In addition to the literature on the relationship between inflation and growth, there are many studies of the relationship between inflation and unemployment, both in the short run and the long run. One strand of that literature suggests that there is a “backward bending Phillips curve.” The idea is that there is some moderate, positive rate of inflation that produces the lowest minimum unemployment rate that can be sustained without accelerating inflation. Proponents of this view emphasize behavioral factors, especially downward rigidity of nominal wages. The most widely cited paper proposing a backward-bending Phillips curve was published in 1996 by George A. Akerlof, William T. Dickens, and George L. Perry. Thomas Palley has proposed a simpler version of their model that reaches the same conclusion.

In short, whereas the Fraser and Heritage price stability indicators assign the highest possible scores to countries with zero inflation, economists who have studied the effects of inflation in the real world have found, more often than not, that a moderately positive rate produces better results.


Taking all of the above into account, I reach the following conclusions regarding the price stability components of the Fraser and Heritage economic freedom indexes:

  1. Conceptually, the idea of measuring economic freedom in the first place is motivated by the hypothesis that good institutions produce good outcomes. In my view, explorations of that hypothesis are best conducted using economic freedom indicators that focus on institutional quality. Including indicators of policy outcomes, such as inflation rates, only confuses matters.
  2. The price stability components of Fraser and Heritage economic freedom indexes, which assign the highest possible scores to countries with zero inflation, reflect an exaggerated fear of the effects of moderate inflation. By using the absolute value of inflation rates in their indicators, they also understate the detrimental effects of deflation.

These conclusions have important implications both for researchers and for policymakers.

Researchers should treat the price stability components of the Fraser and Heritage indexes with caution. The results of any statistical tests that use the economic freedom indexes as a whole should be checked against tests that disaggregate those indexes into their principal components. The results of statistical tests that use the Sound Money and Monetary Freedom indicators as independent variables should be checked against tests that use raw inflation data, instead, or strip out price stability data altogether. Researchers interested in exploring the relationship between macroeconomic performance and the quality of monetary institutions should consider augmenting the Fraser and Heritage data with additional institutional indicators, such as measures of central bank independence, the use of monetary policy rules, freedom to use competing forms of money, and exchange rate regimes.

At the same time, policymakers should be cautioned against using the Sound Money or Monetary Freedom indexes as performance benchmarks. There is little evidence to support these indexes’ implicit assumptions that zero is the optimal rate of inflation or that a given rate of deflation is no more damaging than the same rate of inflation. Far from promoting freedom and prosperity, any attempt to maximize the soundness of money as defined by Fraser or monetary freedom as defined by Heritage would be more likely to place economies in a low-growth, high-unemployment straitjacket.

Reposted from NiskanenCenter.com

Guaranteed Jobs, Hungarian Style

In a recent blog post, the Niskanen Center’s Samuel Hammond expressed skepticism about the idea of job guarantees. In his view, such policies do not attack the real problem, are easily politicized, and, as active labor market policy, are inferior to wage subsidies for private-sector jobs.

To see how guaranteed jobs work out in practice, we need look no further than Hungary, where Prime Minister Viktor Orban has made the replacement of welfare by workfare a centerpiece of the claimed Hungarian economic miracle that helped him win re-election last Sunday.

Writing recently for The New York Times, Patrick Kingsley and Benjamin Novak provide an overview of job guarantees, Hungarian style. Their article, which focuses on a small village in which 73 of 472 residents participate in the program, makes an effort to show both the positive and negative side of workfare. They note that although the guaranteed jobs pay only about half the minimum wage, that is twice what participants previously received in unemployment benefits. Participants told them that the pay, although minimal, was enough to make a difference. The program has also brought some small but welcome improvements in the town’s infrastructure.

“This little bit of money goes a long way in this village,” said Eva Petrovics, 60, who helps to clean the village nursery school. “The fridge is full now.”

The program has also helped to spruce up the village. Since 2012, workfare participants have built a small bridge, added a drainage system, and renovated the town hall and sports fields.

However, there are downsides to workfare, too. Hammond’s concerns about politicization seem to have been borne out. Kingsley and Novak note that the program has made participants more dependent on Orban’s Fidesz party, which retained power in last Sunday’s election, and on the town’s mayor, who determines job assignments.

Moreover, despite better drainage and tidier soccer fields, workfare participants do not really put in all that much time doing useful work. Often, they report to work for an hour or so and then go home. There is especially little to do in winter.

Popular though it may be in the Hungarian countryside, Orban’s workfare policy has many critics, both in Hungary and in Western Europe. Annamária Artner is a Senior Research Fellow at the Centre for Economic and Regional Studies of the Hungarian Academy of Sciences. Writing for the progressive website Social Europe, headquartered in London, Artner maintains that

The implied threat of the punitive workfare regime is effectively sweeping the unemployed under the carpet. The unemployment insurance system in Hungary, introduced in the early 1990s following the transition to a market economy, effectively no longer exists.

Reposted from NiskanenCenter.com.

Trump Signs Executive Order on Work Requirements. Why Such Requirements Have Failed in the Past

On Tuesday, President Donald Trump signed an executive order directing all federal agencies to review and strengthen work requirements for all federal poverty programs, including Medicaid, food stamps, housing, and others. The preface to the order invokes familiar rhetoric:

The United States and its Constitution were founded on the principles of freedom and equal opportunity for all.  To ensure that all Americans would be able to realize the benefits of those principles, especially during hard times, the Government established programs to help families with basic unmet needs. Unfortunately, many of the programs designed to help families have instead delayed economic independence, perpetuated poverty, and weakened family bonds.  While bipartisan welfare reform enacted in 1996 was a step toward eliminating the economic stagnation and social harm that can result from long-term Government dependence, the welfare system still traps many recipients, especially children, in poverty and is in need of further reform and modernization in order to increase self-sufficiency, well-being, and economic mobility.

Conservatives have hoped for years that shouting “Get a job!” loudly enough will induce people now on public assistance either to enter the workforce, or if not, to quietly fade from view. Although work requirements have proved ineffective time and again, hope dies last.

There are better ways to address the problems of our broken social safety net. Here are some points I have made in previous posts:

  • The new executive order invokes the 1996 welfare-to-work reforms as a success story, but a careful review of the results shows that the effects of those reforms were disappointingly small.
  • The main reason that work requirements have little effect on actual work behavior is that a majority of low-income people who can work already do work. Most of those who do not have paid jobs are unpaid caregivers, are in school, or are hampered by physical or mental health problems.
  • Where requirements are introduced simply as a stick to drive people to work, they fail. To succeed, they must be backed up by substantial investment in training, job placement, and one-on-one counseling to help people overcome personal problems that may make them unattractive to employers. If state and federal governments are unwilling to invest in the administrative infrastructure needed to run work requirement programs well, such requirements will do more harm than good.
  • Often the biggest barriers to self-sufficiency are the punitive benefit reduction rates and other implicit and explicit marginal taxes on low-income workers. This earlier post provides details and explains how programs could be redesigned to reduce work disincentives.
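The benefit-reduction point can be made concrete with a small worked example. All figures below are illustrative assumptions, not actual program parameters:

```python
# How benefit phase-outs create high effective marginal tax rates.
# The tax and benefit-reduction figures here are hypothetical.

def effective_marginal_rate(extra_earnings, payroll_tax, income_tax,
                            benefit_reduction):
    """Share of an extra dollar of earnings lost to taxes plus
    reductions in phased-out benefits."""
    lost = payroll_tax + income_tax + benefit_reduction
    return lost / extra_earnings

# A worker earns an extra $100, pays $7.65 in payroll tax and $10
# in income tax, and loses $50 in phased-out benefits -- keeping
# only about a third of the raise:
print(round(effective_marginal_rate(100, 7.65, 10.0, 50.0), 4))  # 0.6765
```

Rates like this can exceed the top statutory marginal rate paid by high earners, which is the work-disincentive problem the bullet above refers to.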


Reposted from NiskanenCenter.com

Why a Balanced Budget Amendment Would Be Profoundly Destabilizing

This week the House is expected to vote on a balanced budget amendment (BBA), introduced by Bob Goodlatte (R-VA), chairman of the Judiciary Committee. The amendment would require federal budget outlays to equal receipts each year.

Subjecting fiscal policy to rules, rather than allowing it to be driven purely by political impulse, would be a good idea, but not if the rules are the ones envisioned by this amendment. Far from stabilizing the economy, this kind of BBA would radically destabilize it, leading to dizzier booms and deeper recessions. Here is why.

How the budget affects the business cycle, and vice versa

To see why a balanced budget amendment would undermine stability, we need to understand how the budget affects the business cycle, and how the business cycle affects the budget. When we look at the pattern of federal receipts and outlays over time, as shown in the following chart, we see a lot of ups and downs. Where do they come from?

Three main factors account for the variations in outlays and receipts:

  • Politics. Members of Congress have an interest in pleasing their donors and constituents. One good way to do that is to cut their taxes and add spending for their favorite causes in the lead-up to an election. That is true regardless of party and regardless of the state of the economy.
  • Discretionary counter-cyclical policy. As every Econ 101 student learns, the government can soften recessions with tax cuts and spending increases, or cool off booms by cutting spending and increasing taxes. In practice, the tools of discretionary fiscal policy are not used consistently, but there are important exceptions, for example, the fiscal stimulus packages adopted by Congress under Presidents George W. Bush (2008) and Barack Obama (2009) to ease the worst effects of the Great Recession.
  • Automatic fiscal policy. Even when there are no changes in tax or spending laws, the government’s receipts and outlays vary over time according to the state of the economy. In a recession, tax receipts fall along with personal incomes and corporate profits. At the same time, outlays for unemployment insurance, food stamps, Medicaid, health care subsidies under the ACA, and other items increase. During a boom, tax receipts rise and outlays fall. We call those changes automatic fiscal policy or automatic stabilizers because they require no action by the administration or Congress.

The perverse effects of annual budget balance

If a balanced budget amendment did nothing but curb political impulse-spending and tax cuts tailored to the wishes of the donor class, it would be a good thing, but that is not how the BBA now under consideration would work. Instead, such a BBA would destabilize the economy in two important ways.

First, a BBA would undermine stability by making it impossible to use discretionary fiscal policy to mitigate the effects of recessions. In 2001, for example, a BBA would have required Congress to respond to the mild recession with tax increases, rather than tax cuts. That recession might not have been so mild after all, and the recovery would probably have been slower. More recently, massive spending cuts and tax increases would have been needed to balance the budget after the 2007-08 financial crisis. A BBA would have made it impossible for the Bush administration to soften the early effects of the 2008 recession with tax rebates, and it would have made it impossible for the Obama administration to pass its more aggressive stimulus program in 2009.

The inability to use discretionary fiscal policy would be felt most acutely during crises severe enough to drive interest rates to zero. Since the Fed cannot reduce interest rates below zero, monetary policy becomes ineffective at that point. Arguably, the mild 2001 recession could have been managed using monetary policy alone, without discretionary fiscal measures. The same cannot be said for the global financial crisis and its aftermath. By the end of 2008, interest rates hit their effective zero bound, severely limiting the power of monetary policy. Without discretionary fiscal stimulus, the U.S. economy might well still be stuck in a Japanese-style lost decade, rather than finally making it back toward full employment after a slow recovery.

Wisely, the BBA now under consideration lifts the annual balance requirement in time of war. At a minimum, it ought also to relax the constraint whenever the Fed advises Congress that interest rates have hit the effective zero bound.

A second, and even more serious, flaw is that a BBA would disable the economy’s automatic stabilizers, replacing them with automatic destabilizers. Under current rules, when the economy enters a recession, falling tax receipts and rising outlays automatically moderate the decreases in aggregate demand that are brought on by contraction of the private sector. Under a BBA, automatic stabilizers would have to be offset by some combination of mandatory tax increases and spending cuts, which would add to the economy’s downward momentum.

The effect could be quite large. The next chart covers fiscal policy on a quarterly basis from 2000 through 2016. Unlike the first chart, this one shows only the changes in receipts and outlays that occur automatically as GDP rises and falls. (In principle, the lines for both receipts and outlays would be at zero in any year when real output was exactly at its potential level, although that is not exactly true of the method used to generate the CBO data on which the chart is based.)

In 2009, the actual budget deficit reached 9.8 percent of GDP, as the first chart showed, but, as we see here, the deficit would have been 2 percent of GDP even without the effects of the discretionary Bush and Obama fiscal stimulus. A BBA, in effect, would have required more than just forgoing discretionary tax cuts and spending increases. It would also have required Congress to enact tax increases and spending cuts equal to another 2 percent of GDP (to offset the automatic stabilizers). With federal outlays running at about 20 percent of GDP, that would mean tax increases or spending cuts equal to some 10 percent of the total budget.

Those are only static estimates. They assume that tighter fiscal policy would have done nothing to deepen the recession, even at a time when zero interest rates severely limited the power of monetary stimulus. My back-of-the-envelope dynamic estimates suggest that for 2009, a BBA would have mandated spending cuts and tax increases closer to 15 percent of the federal budget. It is not too much to suggest that such measures would have made the Great Recession into another Great Depression.
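The static and dynamic back-of-the-envelope estimates above can be reproduced with a simple fixed-point calculation. The multiplier and budget-sensitivity values are illustrative assumptions, chosen only to match the rough magnitudes in the text:

```python
# Back-of-the-envelope: how big a fiscal contraction would a BBA
# have required in 2009? All parameters are illustrative assumptions.

def static_cut_share(deficit_pct_gdp=2.0, outlays_pct_gdp=20.0):
    """Static estimate: required cut as a share of the budget,
    ignoring any feedback from the cuts to GDP."""
    return deficit_pct_gdp / outlays_pct_gdp

def dynamic_cut_share(deficit_pct_gdp=2.0, outlays_pct_gdp=20.0,
                      multiplier=1.0, budget_sensitivity=0.33):
    """Each dollar of cuts lowers GDP by `multiplier`, which widens
    the deficit by `budget_sensitivity` per dollar of lost GDP, so
    the required cut solves the fixed point
    cut = deficit + sensitivity * multiplier * cut."""
    cut_pct_gdp = deficit_pct_gdp / (1 - budget_sensitivity * multiplier)
    return cut_pct_gdp / outlays_pct_gdp

print(static_cut_share())             # 0.1  -> about 10% of the budget
print(round(dynamic_cut_share(), 2))  # 0.15 -> about 15% of the budget
```

The fixed-point term is what turns a 10 percent static cut into roughly a 15 percent dynamic one: each round of cuts shrinks GDP, which shrinks revenue, which forces further cuts.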

Other problems with the BBA

Since the BBA is unlikely even to make it through Congress, much less through the entire ratification process, I don’t want to spend too much time on its more technical defects, but two are worth at least a mention.

One is its vagueness about the actual mechanisms for ensuring budget balance. Section 4 requires that the president submit a balanced budget plan before the start of the fiscal year, but such a plan would necessarily be based on estimates of outlays, receipts, and GDP. It is not clear what would happen if those estimates turned out to be unrealistic, and the budget ended up in deficit despite the best intentions of Congress.

Section 7 of the BBA gives Congress the power to establish rules, including rules for estimating outlays and receipts, but what kind of rules? Would they treat bygones as bygones—that is, if an unexpected deficit developed, would the shortfall be allowed to go uncorrected? If they did, the Office of Management and Budget’s historic penchant for overly optimistic economic projections would, over time, cause the debt to grow. Raising the debt limit would, under the BBA, require a three-fifths vote of both houses of Congress. That would make the chronic debt-limit chaos we now have even worse.

If, on the other hand, Congress required any unexpected deficit in one year to be made up by an equivalent surplus in the next year, the inherently pro-cyclical nature of the BBA rules would be made even worse.

Section 5 of the BBA, which requires a three-fifths vote of both houses for any tax increase, but only a simple majority for a tax cut, poses a further problem. In the past, Congress has had a tendency to cut tax rates during periods of prosperity, when automatic stabilizers move the budget temporarily into, or close to, surplus. That would require only a majority vote. Restoring taxes to their previous level during a downturn would, however, take a three-fifths vote. In practice, that means that the austerity measures required during a recession would be much more likely to take the form of spending cuts, which could be passed by majority votes, rather than tax increases. Such a procedure could set up a ratchet effect that would gradually diminish the ability of the federal government to meet its essential obligations.

The bottom line

There is a good case to be made for fiscal policy rules. Without them, fiscal policy as we have known it has been pro-cyclical almost as often as it has had a moderating influence on the business cycle. Unfortunately, the balanced budget amendment that Congress is preparing to vote on would make things worse. It would end up requiring fiscal policy to be pro-cyclical almost all the time.

Are there alternative rules that would work better? Yes. Many other countries have them. One simple variant is that followed by Chile, long the most prosperous country in South America. The Chilean rule aims to balance the structural budget, rather than the actual budget, each year. The structural budget refers to the levels of spending, taxes, and the deficit that prevail when the economy is at full employment. A structural balance rule, unlike a simple-minded BBA, thus allows full operation of automatic stabilizers.
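A structural balance rule of the Chilean type can be sketched as follows; the budget-sensitivity parameter is an illustrative assumption, not Chile’s actual figure:

```python
# Sketch of a Chilean-style structural balance rule (stylized).
# budget_sensitivity: how much the budget balance improves, in
# percent of GDP, per percentage point of output gap -- an assumption.

def structural_balance(actual_balance_pct_gdp, output_gap_pct,
                       budget_sensitivity=0.4):
    """Cyclically adjusted balance: strip out the part of the
    deficit or surplus attributable to the output gap (actual
    minus potential GDP, in percent)."""
    cyclical = budget_sensitivity * output_gap_pct
    return actual_balance_pct_gdp - cyclical

# In a recession with a -5% output gap, an actual deficit of 2% of
# GDP is entirely cyclical -- the structural budget is balanced, so
# the rule demands no pro-cyclical austerity:
print(structural_balance(-2.0, -5.0))  # 0.0
```

Because the rule targets the cyclically adjusted balance rather than the actual one, the automatic stabilizers are left free to operate over the cycle.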

Sweden follows a more sophisticated set of rules. The Swedish rules also require budget balance over the business cycle. However, compared to the Chilean rules, those in Sweden allow greater scope for discretionary counter-cyclical fiscal stimulus during recessions, provided it is balanced by appropriate surpluses during booms. U.S. conservatives should love the fact that Sweden has steadily reduced its debt-to-GDP ratio since its fiscal rules came into force.

So yes, rules for fiscal policy are a good idea. But any rule that requires annual balance of the actual budget takes a good idea and stands it on its head.

Previously posted at NiskanenCenter.com

The Role of Preventive Care in Healthcare Reform

“Remember the old saying that ‘an ounce of prevention is worth a pound of cure’?” asks United Healthcare. “Maintaining or improving your health is important – and a focus on regular preventive care, along with following the advice of your doctor, can help you stay healthy … Routine checkups and screenings can help you avoid serious health problems, allowing you and your doctor to work as a team to manage your overall health, and help you reach your personal health and wellness goals.”

You would think that a private insurer, on the hook for big claims down the road if preventive measures did not catch health problems early, would know, if anyone did. Popular opinion seems to agree. Readers of some of my own earlier healthcare posts have often offered opinions such as, “A couple hundred dollars of preventive medicine will prevent tens to hundreds of thousands of dollars being spent,” and, “Anyone who has ever visited a physician knows that preventive care is cheaper in the long run.”

But there is more to the story. Yes, healthcare reform needs to get preventive care right, but preventive care by itself will not make us healthier and cut national healthcare spending. Here are some of the issues.

Preventive care is not just about saving money

The “ounce of prevention” cliché suggests that preventive care is mostly about saving money, but it is more accurate to say that it is about spending money efficiently. Actual data on the effectiveness of preventive care show that although it does not often reduce total healthcare spending, it can be very much worthwhile even when it does not.

First, though, we should clarify just what the term means. Primary preventive care aims to keep a disease from developing in the first place. Childhood vaccinations, smoking cessation, and good diet are examples. Secondary prevention means catching a disease in its earliest stages so that timely treatment can minimize the consequences, for example by screening for early detection of cancer or high blood pressure. Tertiary prevention means taking measures to arrest or slow the progression of chronic diseases like diabetes or HIV that cannot be fully cured with current treatments. The term “preventive care” without qualification usually refers to a combination of the primary and secondary categories — a usage we will follow here.

Health economists often measure the payoff to preventive care in quality-adjusted life years, or QALYs. QALYs weight years of extended life according to their comfort and restrictions on activity. For example, a year spent in a hospital bed might count just half as much as one spent in perfect health. Beyond the payoff in QALYs, preventive care has subjective payoffs, such as the peace of mind that comes when a cancer or heart-disease screening comes out negative.
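The QALY arithmetic behind such comparisons is simple enough to sketch. The numbers below are purely illustrative, not drawn from any study:

```python
# Illustrative QALY arithmetic with hypothetical numbers.
# A QALY weights each added life-year by its quality,
# from 0 (worst imaginable health) to 1 (full health).

def qalys(years_and_weights):
    """Sum quality-adjusted life years from (years, quality_weight) pairs."""
    return sum(years * weight for years, weight in years_and_weights)

# Suppose a preventive service extends life by 2 years in full health
# plus 1 year at quality weight 0.5 (say, restricted activity):
gained = qalys([(2, 1.0), (1, 0.5)])  # 2.5 QALYs

# If the service costs $150,000 more than doing nothing, its
# cost-effectiveness is the added cost per QALY gained:
cost_per_qaly = 150_000 / gained      # $60,000 per QALY
```

By the rule of thumb discussed later in the post, a service at $60,000 per QALY would count as cost-effective even though it raises, rather than lowers, total spending.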

Childhood vaccinations are an example of preventive care that does have the potential to save costs. Vaccinations reduce the cost of treatment for people who, without them, would later develop the disease in question, and they also reduce the spread of contagious diseases in the population at large. They are so inexpensive and so effective that vaccinating as large a share of the population as possible reduces total healthcare expenditures. Intervention to discourage smoking is another example of preventive care that has the potential to reduce total healthcare spending. A 2010 study by Michael V. Maciosek and colleagues identified twenty preventive measures that could reduce total spending if more widely used as a package, although not all would save costs individually.

More commonly, however, providing preventive care improves health but increases total spending. For example, colonoscopies can detect cancer at an early stage and improve the chance of a cure. However, colonoscopies are expensive, many people are tested who turn out to be cancer-free, and occasionally the tests themselves can cause costly complications. Similarly, taking statins to lower cholesterol reduces the incidence of heart attacks, but statins themselves are costly. Some people who take statins die of other causes before they get a heart attack; some suffer heart attacks despite taking statins; and others suffer side effects of the statins themselves that require further treatment. On balance, both colonoscopies and use of statins can be cost-effective ways of improving quality of life and preventing premature deaths, but the more widely their use is expanded to parts of the population that are at lower risk, the more likely they are to increase, rather than decrease, total spending.

In some cases, overzealous application of preventive measures can both increase total healthcare costs and lead to worse health outcomes. Widespread screening for prostate cancer, followed by aggressive treatment of the disease in its earliest stages, has been found to fit that pattern, at least according to some studies. Routine use of PSA tests to screen for prostate cancer has fallen into disfavor, and a policy of “watchful waiting” is often recommended when early symptoms are detected. For women, many practitioners no longer consider annual Pap tests and mammograms necessary for older patients with no histories of the relevant cancers.

On balance, the range of cost-effectiveness for different kinds of preventive care is not all that much different from that for treatments for existing conditions. The following chart, provided by health economist Austin Frakt, shows the wide range of cost-effectiveness for both categories.

The chart is based on a review of 599 individual cost-effectiveness studies published in the New England Journal of Medicine. The treatments and preventive services are rated in terms of their cost in dollars per added quality-adjusted life year. There is no universal agreement on the dividing line between “cost-effective” and “not cost-effective,” but some health economists consider $100,000 per QALY to be a reasonable rule of thumb.

Note that the dark- and light-blue bars are sorted by cost-effectiveness, not by the type of disorder. For example, the dark bars in the “less than 10K” column may refer to preventive treatments for a completely different set of conditions than the light bars in the same column. In the case of any given condition, prevention might be much more, or much less, cost-effective than waiting for the disease to develop and then treating it.

As the chart shows, less than a fifth of interventions are actually cost-saving, in the sense of reducing total healthcare expenditures. However, most preventive measures and most treatments studied are cost-effective, in the sense that the extra money spent produces a positive return in terms of QALYs — a very good return, in many cases.

Should preventive care be shielded from cost sharing?

Preventive care does not pose any special problem for policy reforms that would provide first-dollar coverage for all health needs. However, plans that require substantial cost sharing in the form of deductibles or copays need to consider whether to make special provisions for preventive care.

The simple answer would be to waive cost sharing for all cost-saving preventive services and subject other preventive care to the same copays and deductibles as treatment of existing conditions. Encouraging maximum use of cost-saving preventive services would be in the interest of any public or private insurer as well as that of the insured.

The Affordable Care Act recognizes that principle by exempting a list of primary and secondary preventive services from copays and deductibles. Other reform plans, such as the Medicare Extra proposal from the Center for American Progress and the universal catastrophic coverage plan proposed by Kip Hagopian and Dana Goldman, also include provision of selected preventive services at no cost to the user.

Whether to expand waivers of copays and deductibles beyond preventive services that actually save costs raises additional problems. One is that consumers do not seem to be very good at determining what kinds of care are worthwhile and what are not. A recent survey of the literature finds that cost sharing does lower total healthcare spending, but the reductions come from both appropriate and inappropriate care. The appropriate care that people with high copays and deductibles often skip includes not only primary and secondary preventive services, such as vaccinations and screenings, but also tertiary preventive care, such as medications and periodic checkups to control chronic conditions like diabetes and hypertension.

The study by Maciosek et al., cited above, provides some additional indirect evidence on consumer behavior. The authors of that study looked at two scenarios in evaluating the cost-effectiveness of their package of 20 preventive services. One scenario was an increase in use of the services from zero to 90 percent of the population. The other was an increase in use from current levels to 90 percent. On the face of it, it would seem that the people who now use these preventive services would be those who would derive the greatest benefits relative to costs, while those who didn’t use them would be those with smaller benefits relative to costs. That, in turn, would suggest that the incremental ratio of benefits to costs for the scenario that started from current rates of usage would be lower than the ratio for the zero-base scenario. In fact, the Maciosek study found the opposite to be true. Their results suggest that people who now forego preventive care would, in fact, derive greater net benefits than those who now use it.
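The logic of that comparison can be made concrete with a small numerical sketch. The figures below are entirely hypothetical, chosen only to show how the incremental ratio can come out better than the zero-base ratio when current non-users stand to gain more per dollar:

```python
# Hypothetical illustration of zero-base vs. incremental cost-effectiveness.
# Assume a service costs $1,500 per person, current users gain 0.02 QALYs
# each, and current non-users would gain 0.03 QALYs each if they adopted it.

COST_PER_PERSON = 1500.0

current_users = {"n": 60, "qaly_gain": 0.02}  # already use the service
non_users     = {"n": 40, "qaly_gain": 0.03}  # would be added to reach 90% usage

def cost_per_qaly(groups):
    """Total dollars spent divided by total QALYs gained across groups."""
    total_cost = sum(g["n"] * COST_PER_PERSON for g in groups)
    total_qalys = sum(g["n"] * g["qaly_gain"] for g in groups)
    return total_cost / total_qalys

zero_base = cost_per_qaly([current_users, non_users])  # usage from 0 to 90%
incremental = cost_per_qaly([non_users])               # from current level to 90%

# With these numbers the incremental ratio is lower (better) than the
# zero-base ratio -- the counterintuitive pattern Maciosek et al. report.
```

The sketch shows only that the pattern is arithmetically possible when non-users gain more per dollar; the Maciosek study supplies the empirical evidence that this is in fact the case.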

Measures to improve transparency and competition would be one way to help consumers make better choices for preventive care, and for other healthcare services, too. John Goodman, an advocate of market-based healthcare reform, describes the problem in these terms:

Because we’ve suppressed the market, no one ever sees a real price for anything. No patient, no doctor, no employer, no employee. What we have is a bureaucratic system that gives us all perverse incentives. And when we act on those incentives, we make costs higher, quality lower, and access to care more difficult than otherwise would have been the case.

Goodman applauds the growth of walk-in clinics, telephone and e-mail consultations, online pharmacies, and other innovative care options as steps in the right direction. Conservative and progressive critics agree that the present system, in which consumers are bombarded with television ads for medical services of all types, but are often unable to receive straight answers from insurers, doctors, or hospitals about costs and risks, is not doing its job.

Measuring cost-effectiveness of preventive care

To decide which preventive services to provide free of charge, and to keep patients well informed about their options, we need to be able to measure the cost-effectiveness of those services. That is not always easy.

The problems of measuring cost-effectiveness are succinctly summarized in a 2014 Health Affairs article by Mark V. Pauly, Frank A. Sloan, and Sean D. Sullivan. The article notes that the list of required preventive services under the ACA is based on recommendations of two federal panels, the Advisory Committee on Immunization Practices and the Preventive Services Task Force. However, neither body is explicitly required to take a cost-benefit approach in making recommendations. Their members consist principally of people with medical training, and their decisions tend to emphasize clinical considerations, with cost-effectiveness considered on only an ad-hoc basis.

Pauly and colleagues recommend that the current evaluation procedure be replaced by one that explicitly includes cost-benefit analysis, and offer several suggestions on how such a proposal might be implemented. They propose more than a yes-no assessment of whether or not a given preventive service saves costs. Rather, they suggest giving each preventive service one of three levels of recommendation:

  • A strong recommendation would carry the expectation that physicians would routinely offer and encourage the use of the service, and that insurance or public subsidies would reduce the user price to zero.
  • A permissive recommendation would emphasize the positive health benefits of the service. It would encourage providers not only to alert patients about its availability and cost in general, but to counsel them about its suitability for them as individuals. However, there would be no requirement to waive cost sharing.
  • A discouraging recommendation would be given to services that are found to be clinically ineffective or not effective enough to justify the cost for the great majority of people.

The classification could be based, at least in part, on specific cost-effectiveness thresholds. For example, procedures with a cost of less than $100,000 per QALY might get a strong recommendation, those in a range of $100,000 to $300,000 per QALY a permissive recommendation, and those over $300,000 per QALY a discouraging recommendation. Setting the thresholds would be a political decision, rather than a clinical or economic one.
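Such a threshold rule is straightforward to express. A minimal sketch, using the illustrative $100,000 and $300,000 cutoffs mentioned above (the real thresholds would be set politically, not in code):

```python
# Sketch of the three-tier recommendation rule, using the post's
# illustrative thresholds; the actual cutoffs would be a political decision.

def recommendation(cost_per_qaly):
    """Map a cost-per-QALY figure to one of three recommendation levels."""
    if cost_per_qaly < 100_000:
        return "strong"        # routinely offer; waive cost sharing
    elif cost_per_qaly <= 300_000:
        return "permissive"    # inform and counsel; normal cost sharing
    else:
        return "discouraging"  # not encouraged, but not prohibited

print(recommendation(40_000))    # strong
print(recommendation(150_000))   # permissive
print(recommendation(500_000))   # discouraging
```

In practice, as the next paragraph notes, the classification would also weigh how broad a share of the population stands to benefit, not the cost-per-QALY figure alone.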

The strength of the recommendation for a service would also depend, in part, on how broad a subset of the population might benefit. Services that would benefit almost everyone, such as childhood immunizations, would be more likely to get strong recommendations. Permissive recommendations would be more appropriate for services that might be of great benefit to some people but not to others. For example, vaccination against yellow fever is recommended for travelers to Panama but not for travelers to Norway. Discouraging recommendations would signal that few people would be likely to benefit from the service, but their use would not be prohibited. People who wanted to pay could give them a try.

Implications for policymakers

This discussion has several implications for policymakers.

First, any reform plan that includes substantial cost sharing (for example, universal catastrophic coverage or Medicare Extra) should waive copays and deductibles for cost-saving preventive services. Consideration could also be given to adding other high-value preventive services to the list even if they do not reduce total healthcare spending.

Second, current institutions for evaluation of cost-effectiveness are inadequate. At a minimum, the Advisory Committee on Immunization Practices and the Preventive Services Task Force should be directed explicitly to include cost-benefit considerations in their recommendations. Ideally, these panels should be restructured and provided with greater resources, including expertise in economic as well as clinical aspects of preventive care.

Third, reform plans should include measures to improve consumer decision making about preventive care. In addition to better information about cost-effectiveness, those measures should include greater pricing transparency, greater competition, and encouragement of new forms of preventive care delivery, such as walk-in clinics and online consultations.

Better preventive care is not a magic bullet that will solve all the problems of cost and quality of the U.S. healthcare system. However, it has an important role to play and deserves the special attention of reformers.

Previously posted at NiskanenCenter.com