OCT 4, 2012 19:45 UTC
In many ways, we’ve already fallen off the cliff. The focus by government and market analysts on the potential medium-term impact on GDP has obscured some themes that deserve far greater attention.
Entitlement reform is no longer something that can be pushed aside. If the government had used GAAP accounting (generally accepted accounting principles), as do practically all private-sector companies (and even local governments), the deficit in 2011 would have been $3.7 trillion higher. This would have been on top of the $1.3 trillion official deficit that was recorded by the Congressional Budget Office and the president’s Office of Management and Budget.
While John Williams of Shadow Stats has been writing about this theme for years, mainstream outlets like USA Today are starting to cover how the government essentially exempts itself from including the costs of promised retirement and healthcare benefits in its deficit calculation. Including the growing costs of future Social Security and Medicare liabilities, the deficit in 2011 was $5 trillion.
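The arithmetic behind that $5 trillion figure is simple enough to sketch. This is an illustration only, using the column's own numbers rather than any official calculation:

```python
# Illustrative arithmetic; both inputs come from the figures cited above.
official_deficit = 1.3   # trillions of dollars: 2011 deficit per CBO/OMB
gaap_adjustment = 3.7    # trillions: additional cost under GAAP-style accrual accounting

gaap_basis_deficit = official_deficit + gaap_adjustment
print(f"GAAP-basis 2011 deficit: ${gaap_basis_deficit:.1f} trillion")  # → $5.0 trillion
```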
Despite an accumulated “Trust Fund” of nearly $3 trillion, Social Security is already losing $165 billion in cash annually, partly because of the recent payroll tax cut. But the financial capacity of the Trust Fund also peaked in 2008 and is set to decline steadily over the next 30 years. At the same time, Medicare expenses exceed dedicated payroll tax receipts by $255 billion per year. The current accounting conventions distract us from the obviously perilous situation of spending more than $400 billion a year on these programs beyond what the U.S. takes in – even before considering the much larger annual cash flow imbalances of future liabilities.
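A quick sketch of that combined cash drain, again using only the column's own estimates:

```python
# Both inputs are the estimates cited above, in billions of dollars per year.
social_security_cash_loss = 165   # annual Social Security cash shortfall
medicare_shortfall = 255          # Medicare spending above dedicated payroll taxes

combined = social_security_cash_loss + medicare_shortfall
print(f"Combined annual cash shortfall: ${combined} billion")  # → $420 billion
```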
For the government to meet its future Social Security promises, it would have to set aside more than $20 trillion and find a place to park these reserves to earn some interest. USA Today notes that this figure is almost double what would have been required as recently as 2004.
What makes the government so special that it can model its accounting rules so differently from private business? Simple: Congress can always change what it owes by raising taxes or cutting benefits. But who really believes that Congress will ever talk straight about its accounting?
The looming fiscal cliff has tax rates climbing back to their pre-2001 levels (the 25 percent bracket, for example, reverts to 28 percent). A payroll tax holiday is set to expire, and across-the-board spending cuts will arrive via the “sequestration” – the result of last year’s failed shot at budget reform. Add in a handful of other expiring benefits, and the total tab for the “cliff” amounts to more than $600 billion, or about four percent of next year’s GDP. The consensus opinion is that Congress will, yet again, punt on these tough decisions.
Fannie Mae and Freddie Mac have run up a taxpayer tab of $150 billion since the crisis, but their roughly $5 trillion in assets and liabilities are kept “off balance sheet.”
It’s time to link budget reform to government accounting reform so that the true cost of federal credit guarantees and liabilities is recorded honestly and fairly.
For some perspective, we can look to big banks and large financial institutions, a world apart from government, but one where accounting practices present the same sorts of problems. The largest financial institutions have balance sheets that are opaque and largely impenetrable.
Andrew Haldane, the Bank of England’s executive director for financial stability, has perhaps been the sharpest critic and advocate for change when it comes to the reports and disclosures of large financial institutions. Haldane explains that the financial reports of banks differ over the course of the business cycle, and that current risk management tools often fail to accurately price or account for risk.
This column has argued that too many assets are deemed “safe” by regulators. Yet the principal indicator of how much leverage there is in the U.S. financial system concerns derivatives. GAAP accounting allows U.S. firms to net their derivative exposures. JPMorgan, for example, reports that it has $1.7 trillion in total derivative exposure, but when all the longs and shorts are netted out, only $85.5 billion appears on its balance sheet.
Imagine a bank that issued $100 billion in debt to purchase $100 billion in corporate bonds that later declined in value to $96 billion. If the bank had taken the same exposure through a short credit derivatives position instead, it would account for only $4 billion in “net” liabilities, and the $100 billion would remain off the balance sheet as long as the price of the corporate bonds held steady. This is the sort of example that has many market watchers, including the Wall Street Journal, wondering whether JPMorgan is a bank with $2.3 trillion in assets or one with roughly $4 trillion.
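To see how netting shrinks a reported balance sheet, here is a minimal sketch. The per-counterparty positions are entirely hypothetical; only the mechanism – offsetting receivables against payables until a small residual remains – mirrors the gross-versus-net gap in the JPMorgan figures above.

```python
# Hypothetical derivative book: (amount owed to the bank, amount the bank owes),
# per counterparty, in billions of dollars.
positions = {
    "counterparty_a": (120.0, 110.0),
    "counterparty_b": (95.0, 101.0),
    "counterparty_c": (60.0, 55.5),
}

# Gross exposure counts both sides of every position.
gross = sum(receivable + payable for receivable, payable in positions.values())

# GAAP-style netting reports only the residual per counterparty.
net = sum(abs(receivable - payable) for receivable, payable in positions.values())

print(f"Gross exposure: ${gross:.1f}B; netted figure: ${net:.1f}B")
# → Gross exposure: $541.5B; netted figure: $20.5B
```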
These issues are exacerbated when viewed through a global lens. Europe is applying a different set of rules on netting derivatives, which leads to an apples-to-oranges comparison of banks across the pond. If Deutsche Bank fell under U.S. accounting rules, its leverage ratio would be almost half of the 40 times it shows under European accounting rules. Claims that European banks are “insolvent” relative to their U.S. peers depend, in many cases, on discrepant accounting treatments.
The scale of this issue is also enormous. On a notional basis, the total amount of derivatives today is greater than global GDP – 10 times over. This is why the answers to seemingly obscure questions can have such profound impacts on our understanding of how much leverage exists in our financial system. Do the derivatives that are supposedly matched account for counterparty risk? Are they of the relative-value variety, and if so, how sensitive are they to changes in interest rates or currencies? Amazingly, the current public disclosure regime doesn’t effectively cover these issues.
What’s particularly terrifying is that the solvency of many financial institutions – and the U.S. federal government – can be called into question even when they are evaluated using rules that are almost purposely crafted to overstate their financial positions. In short, it is not possible to measure the ultimate cost of public and private-sector indebtedness when there is no transparent accounting method available; one that measures the cost of spending commitments in excess of income and parses non-trivial questions about derivatives.
If the fiscal cliff gets “solved” at the end of the year by Congress committing to start yet another budget process in the middle of 2013 (i.e., kicking the can even further down the road), one of the binding objectives ought to be accomplishing some overdue accounting and disclosure reforms. The questions of how to accurately measure public and private debt and leverage have never been more important.
SEP 18, 2012 15:47 UTC
The Fed’s decision to begin open-ended purchases of Fannie Mae and Freddie Mac mortgage-backed securities comes on the heels of the European Central Bank’s decision to purchase unlimited quantities of short-duration government bonds across the euro zone.
The ultimate goal of the central banks’ action is to influence the prices and yields of financial assets. In turn, they hope to have an impact on households and businesses by lowering borrowing costs and encouraging an increase in spending to boost GDP.
However, there is a growing worry that these actions can’t solve the underlying problem and that the returns on these actions are diminishing fast. Against this backdrop, fiscal authorities are warning that they see central bankers and the decisions that emerge from their closed meetings as going “deeper and deeper into fiscal-type waters,” jeopardizing central bank independence. As John Cochrane has noted, “central banking really is the last refuge of central planners, which should embolden [us] to search for rules, institutions, and mechanisms to do a better job.”
The only solution now may be for a new accord between central banks and governments, an idea put forward by Professor Marvin Goodfriend of Carnegie Mellon. The premise is that the U.S. Congress has supported the Fed’s independence insofar as it has been a necessary precondition for the Fed to do an effective job. “Hence, the Fed should perform only those functions that must be carried out by an independent central bank,” writes Goodfriend.
The principles in the accord should acknowledge that purchases of long-duration bonds introduce interest rate risk and fair value losses. A commitment should be made that credit-related initiatives occur only with the permission of fiscal authorities.
The objective would be to get ahead of the political forces that are starting to circle central banks and the growing risk that indiscriminate constraints might soon be imposed on them.
What is not getting enough attention in these debates about new monetary intervention is that the very nature of central banking has dramatically changed in the post-crisis era. The pre-crisis central banks conducted monetary policy by open-market operations in short-term debt with no credit risk.
ECB President Mario Draghi is formulating a monetary backstop to tackle rising yields on government bonds, driven by worries that certain European countries – Portugal, Ireland, Italy, Greece and Spain – will soon default. The idea is that ECB bond purchases will allow these governments to finance their deficits in the short to medium term.
The contours of the European debt crisis, and the future of the euro zone, are increasingly set by the ECB – a supposedly apolitical and technocratic institution. The ECB is using its new program as leverage to push unprecedented levels of policy changes against the democratic will of sovereign countries. Even aspects of sovereign domestic policy, such as banking regulation, are now under its purview.
The ECB has made the purchases conditional on other reforms to try to achieve a self-fulfilling sequence of monetary easing and fiscal consolidation. These “outright monetary transactions” (OMT) have shone perhaps the brightest light on the limits of monetary, rather than fiscal, policy.
These large-scale asset purchases leave the ECB positioned to take sizable losses in the case of a sovereign default, which may then require a central bank recapitalization (i.e., a bailout) from fiscal authorities in countries like Germany. Today, the ECB is effectively implementing fiscal policy by exposing itself to credit risk – a risk that exists even if no country defaults, since the ECB may simply pay too high a price for the sovereign bonds.
Like the ECB, the U.S. Federal Reserve now routinely engages in activities that are not clearly monetary in nature. Rather than funding wayward governments, however, the Fed’s actions look more like domestic credit policy, which is fiscal in nature.
The Fed’s new commitment to purchase more Fannie and Freddie MBS on an open-ended basis will leave it with an enormous balance sheet – approaching $3 trillion in assets today, from less than $1 trillion back in 2008. Though Fannie and Freddie are in conservatorship and backed by the federal government, Professor Goodfriend makes the point that only Congress possesses the power to designate a security as backed by the “full faith and credit” of taxpayers.
Fed MBS purchases are akin to direct credit allocation in the economy, especially when compared with traditional monetary operations based on small holdings of short-duration Treasury debt. In his dissent against the recent Fed actions, Richmond Fed President Jeffrey Lacker said the MBS purchases “distort investment allocations and raise interest rates for other borrowers. Channeling the flow of credit to particular economic sectors is an inappropriate role for the Federal Reserve.”
With this recent action, the Fed will now be buying half of all mortgage-backed securities newly issued by Fannie and Freddie. In fiscal year 2011, the Fed also purchased 77 percent of all new Treasury debt issued.
These trends have sparked a growing chorus of economists warning that “by replacing large decentralized markets with centralized control by a few government officials, the Fed is distorting incentives and interfering with price discovery with unintended economic consequences.” These actions may also be “weakening the economy’s output by misallocating capital.”
If the Fed and ECB are not strictly monetary policymaking institutions, but rather technocratic bodies responsible for fine-tuning the economy, there are no limits to what they can be blamed for. Absent an accord, the market will only continue to take its cues from these central authorities – wondering whether they will intervene next in a different credit market, like student or automobile loans.
Ascribing every economic problem to the central banks pushes these institutions to fight battles they have no chance of winning. Moreover, any further economic destabilization that can be linked to their activism will only boomerang, as they get blamed for trying to solve problems that can and should only be handled by elected fiscal authorities.
Perhaps most important, striking a new accord now could relieve the Fed of the burden of trying to do what increasingly looks impossible: preserving its independence long enough so it can move responsibly when the time is right to raise interest rates and shrink its balance sheet, in spite of the views of elected officials.
As Phil Gramm and John Taylor commented recently: “[S]elling a trillion dollars of Treasury bonds on the market – at the same time the government is running trillion-dollar annual deficits – will drive up interest rates, crowd out private-sector borrowers and impede the recovery.” It’s not hard to predict what the chattering political class will be saying when the Fed faces these tough decisions.
Establishing an accord now wouldn’t involve the Fed wading into uncharted territory. There was the historic Treasury-Fed Accord in 1951 and a “joint statement” between the two bodies in March 2009. But the Fed’s recent actions call into question whether it still thinks “[g]overnment decisions to influence the allocation of credit are the province of the fiscal authorities.”
Striking a fresh accord now – before the Fed has to pursue its exit strategy – may be the only way to save the Fed from losing its independence at exactly the time it will need it most.
Arpit Gupta, a Ph.D. student in finance at Columbia University’s Graduate School of Business, contributed to this column.
PHOTO: U.S. Federal Reserve Chairman Ben Bernanke takes his seat to deliver remarks about a significant shift in the direction of U.S. monetary policy at the Federal Reserve in Washington September 13, 2012. REUTERS/Jonathan Ernst
AUG 13, 2012 18:00 UTC
Recent changes to a mortgage refinancing program finally have it running smoothly and helping “underwater borrowers”: Can Congress really ignore this?
Fannie Mae and Freddie Mac’s Home Affordable Refinance Program (HARP) helps borrowers who are underwater (and only those who are current on their payments) refinance their mortgage at lower interest rates. While it took lenders a few months to implement the changes, hundreds of thousands of borrowers have already refinanced their mortgage or are in the pipeline to do so. The new HARP 2.0 includes a pricing regime that encourages borrowers to refinance into mortgages with shorter terms, which means borrowers are building back equity in their homes faster. And that is good.
Congress just broke for its August holiday, so we will have to wait until September for the next flare-up in housing policy. However, a debate was recently reignited that leaves plenty of room for thought.
Ed DeMarco, acting director of the Federal Housing Finance Agency, decided on the last day of July against allowing Fannie Mae and Freddie Mac to write down mortgage debt for underwater borrowers. This debate is several years old – and there is perhaps no other housing-related issue that has engendered as much partisan frustration.
Treasury Secretary Tim Geithner immediately sent a letter condemning DeMarco’s decision. But as Neil Barofsky (the former inspector general of TARP) noted in a recent column for Reuters, the simple fact that Secretary Geithner chose (merely) to send a letter demonstrates that what matters most in housing these days is, sadly, politics over policy. Barofsky continues: “Although one can argue whether principal reductions are the right way to address the ongoing housing slump … no one should be fooled that the administration’s entreaties to Mr. DeMarco are anything but political posturing.” Moreover, the entire effort “seems primarily intended to distract attention from [the administration’s] own failed policies.”
As the FHFA made clear in a recent report, there are approximately 4.6 million underwater borrowers with loans backed by Fannie Mae or Freddie Mac, but “approximately 80 percent of [these] underwater borrowers are current on their loans.” Against this backdrop, the regulator estimates that only around 75,000 to 250,000 borrowers might be eligible for a write-down. Yet it would only take a few thousand “strategic defaulters” (i.e., those who previously were able and willing to pay but who decide to stop paying to qualify for the write-downs) for the program to result in taxpayer losses. Given the sheer size of the national mortgage market, DeMarco says a new principal reduction program “would not make a meaningful improvement in reducing foreclosures in a cost effective way for taxpayers.”
It is important to keep in mind that nobody – on the left or right – wants to “own” the risk that there could be even more taxpayer losses associated with the housing bust, or accept the political fallout from angry voters who believe policymakers are rewarding others’ bad decisions.
Nick Timiraos, a reporter for the Wall Street Journal, notes how the Federal Housing Administration (FHA) should be a prime “contender for the kind of principal forgiveness many would like to see Fannie and Freddie undertake,” since it would come without the hassle of having to work through an independent regulator. Unlike Fannie Mae and Freddie Mac, the FHA is a full instrument of the administration and subject to the congressional appropriations process. If the Obama administration is such a strong believer in the economics of principal forgiveness, why would it not pursue this policy at FHA, over which it possesses complete control?
Perhaps it’s because the FHA is on the brink of insolvency – and may very well require a congressional bailout. In fact, the Obama administration narrowly escaped having to announce a bailout earlier this year. This is unsurprising, considering the FHA has $2.6 billion in reserves against more than $1 trillion in total mortgage value insured (an effective leverage ratio more than 10 times greater than Lehman Brothers at the time of its collapse), and about 31 percent of its loans are underwater.
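That leverage comparison can be checked with back-of-the-envelope arithmetic. The FHA figures are from the column; Lehman’s roughly 31-to-1 leverage at failure is a widely cited estimate assumed here purely for scale.

```python
fha_reserves = 2.6       # billions of dollars in FHA reserves (per the column)
fha_insured = 1_000.0    # billions; "more than $1 trillion" of mortgage value insured
lehman_leverage = 31.0   # approximate assets-to-equity at Lehman's collapse (assumed)

fha_leverage = fha_insured / fha_reserves
print(f"FHA effective leverage: {fha_leverage:.0f}x, "
      f"about {fha_leverage / lehman_leverage:.1f}x Lehman's")
# → FHA effective leverage: 385x, about 12.4x Lehman's
```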
Taxpayers have lost more than $150 billion on bailing out Fannie Mae and Freddie Mac. Those are sunk costs. But a quirk in the government budget rules, coupled with general inaction by the administration and Congress, has allowed the government-sponsored enterprises (GSEs) to remain off budget and maintain an open line of credit from the Treasury to cover any future losses. As Timiraos notes, additional losses as a result of a principal forgiveness plan “may not be easily seen” at Fannie and Freddie, and certainly not in the context of a huge number like $150 billion.
The administration and Congress will probably take a pass on trying to overrule DeMarco on principal forgiveness. It’s just too dangerous for any politician to get linked to further taxpayer losses in and around housing before the election.
While more than 1 million loans were refinanced under HARP from 2009 to late 2011, the program had long been viewed as a disappointment (early promises by the administration had it reaching millions more). But then FHFA decided to change the program’s underwriting criteria to try to responsibly boost borrower participation.
The new volume coming in under HARP 2.0 has been very impressive since it went into effect in March. This raises questions about whether the Senate should even consider a bill by Senators Barbara Boxer and Robert Menendez (aka HARP 3.0), particularly since changing HARP rules again would slow the nascent boom as lender systems are again changed. Here are the four key stats on HARP 2.0:
- In June 2012, HARP refinance volume was about 125,000, whereas in June 2011, it was only 28,000.
- In the second quarter of 2012, volume ran about 243,000, versus only 86,000 during the same quarter last year.
- In June 2012, borrowers with loan-to-value (LTV) ratios greater than 105 percent accounted for 62 percent of all HARP volume.
- June 2012 was the first month with meaningful refinance volume for mortgages with LTVs above 125 percent; over 53,000 refinancings were completed in the month, whereas March, April and May all fluctuated around 3,000.
The narrative behind these numbers is powerful. Even the objective underlying a new measure from Senator Jeff Merkley – “faster amortization,” or borrowers choosing shorter-term mortgages – is already ramping up. While only 10 percent of borrowers chose to refinance into shorter-term mortgages under HARP in 2011, in May the number was 19 percent and in June it was 18 percent. This hurts arguments for further legislative action as well, since a bill would come with a delay for implementation and unsettle the current process, which is working.
Senators Boxer and Menendez are the key champions of expanding the HARP program. (Note: I testified before the Senate Banking Committee on this bill back in May.) Senator Merkley’s measure would provide greater incentives under HARP (perhaps at a taxpayer cost) for borrowers who choose to refinance into shorter-term mortgages, so they end up paying down their balances faster and regain equity in their homes. In many ways, this effort is an alternative to principal forgiveness for underwater borrowers.
Acting Director DeMarco and FHFA deserve more credit than criticism these days. Examining the nuances of the principal forgiveness debate, especially in the context of the new HARP 2.0, provides a solid guide for where the policy discussion is headed this fall: nowhere fast. And that’s probably a good thing for taxpayers and struggling borrowers.
PHOTO: Homeowners Jesse Fernandez (R) and his brother Joel Fernandez (C) speak with a Freddie Mac representative as they try to get a home loan modification during the Neighborhood Assistance Corporation of America event in Phoenix, February 4, 2011. REUTERS/Joshua Lott
JUL 27, 2012 16:43 UTC
Sandy Weill’s comments this week are just the latest dustup in the debate about the existence of financial institutions that are labeled by regulators and market participants as being too big to fail. Despite the criticisms leveled at these firms, the largest banks have only gotten bigger over the last few years – and U.S. regulators still appear underprepared to resolve a future failure of a systemically important financial institution without setting off broader market panic.
Against this backdrop, new bank reform proposals are likely to get a lot more attention on Capitol Hill heading into the November election. Catalysts for this debate are sure to include the stories around JPMorgan’s London Whale trader and the brewing Libor scandal.
In a recent paper, academics Frederic Schweikhard and Zoe Tsesmelidakis examined the borrowing advantage that large financial institutions enjoyed from 2007 to 2010 as a result of the market’s perception that their liabilities were backed by the federal government. Taxpayers subsidized TBTF banks to the tune of $130 billion, according to their findings. Citigroup was the single biggest beneficiary of government support, totaling $50 billion, but even well-capitalized JPMorgan is estimated to have gained $10 billion in value from taxpayer guarantees.
Privately, the big banks think this is old news. They are quick to note that Congress addressed the issue of taxpayer bailouts back in 2010, when it passed the 2,000-page Dodd-Frank financial reform bill. Among its many directives, the law created a new systemic risk council of regulators and tasked it with designing and implementing a new “resolution regime” for big and complex financial institutions. The goal was to empower the new council to regulate and oversee the failure of future financial institutions and to guard against taxpayer guarantees or systemic consequences for the overall economy.

While it’s certainly debatable whether Dodd-Frank achieved some progress in this area, credit rating agencies are still signaling to the market that the government would likely step in and protect bank creditors. As long as this is true, big banks continue to have a borrowing advantage. Even granting additional discretion to regulators solves nothing if economic vulnerabilities still make a bailout the less harmful choice when the next crisis arrives.

During the debates this fall, Mitt Romney will surely confront President Obama on the limitations of the financial reforms that his administration advanced, including the Dodd-Frank Act. The TBTF problem stands above all others in this area and presents an opportunity for a defining contrast before the election. A new chorus of analysts – including those on the right steeped in both policy and political strategy – is also starting to argue that what looks like good policy on the big banks might also be good politics (see here).
In the near term, Weill’s comments have helped push the policy spotlight back on whether a version of Glass-Steagall should be imposed – effectively drawing new lines between essential banking functions (i.e., deposit taking, settling payments and offering loans) and more speculative financial functions (i.e., brokerage and derivatives trading). Proponents of this idea argue that the latter should still be allowed – and under less regulation or supervision – but only without the promise of taxpayer support should one of these institutions fail. Deciding where to draw these lines can be extremely challenging, as proved by the rulemaking for permissible activities under the Volcker Rule, which is barely comprehensible even to the most seasoned experts.
While the chorus in favor of breaking apart TBTF banks into separate commercial and investment entities will surely continue, there are two other categories of reforms that are likely to get even more traction as this debate unfolds. The first would seek to impose more meaningful caps on the size and leverage of TBTF banks, effectively shrinking them over time. The second would attack the prospect of future bailouts head-on by making it clear upfront that the equity holders and creditors of TBTF institutions would face haircuts in the event of a failure.
Thomas Hoenig, the former president of the Federal Reserve Bank of Kansas City and a current director at the FDIC, is the author of one of the more far-reaching proposals to restructure bank activities. The general thrust of his plan (a Glass-Steagall “for today,” as he calls it) would be for regulators to designate which activities are allowable – for example, traditional banking services, such as taking deposits and underwriting securities – while prohibiting more complicated activities, such as proprietary trading or derivative transactions, which expose banks to financial risks and increase the complexity of bank regulation.
The UK’s Independent Commission on Banking, commonly known as the “Vickers Report”, has offered a similar plan to ring-fence retail deposits and allow for limited risk management or hedging strategies. Outside the fence would be securities and derivatives trading, underwriting, and securitization. The fence would effectively separate bank subsidiaries so they would be operationally independent.
Both Hoenig’s plan and the ring-fencing concept essentially seek to bring back a Glass-Steagall-like barrier between commercial banking – everyday banking involving deposits and lending – and investment banking, which handles more complicated banking functions like security underwriting and company mergers. However, as University of Chicago economists John Cochrane and Luigi Zingales have argued, the real distinction is between financial intermediary functions – collecting deposits, making loans, serving as a broker between buyers and sellers of securities – and proprietary trading. These intermediary functions are socially valuable in that a lot of economic activity depends on their very existence. The same is not true of proprietary trading. Yet, losses on proprietary trading can erode the capital necessary to support intermediation.
On Capitol Hill, Senator Sherrod Brown just reintroduced the SAFE Act, which takes a different route by effectively shrinking the largest banks through a series of caps on the total share of bank deposits any one institution can control, caps on non-deposit liabilities, and a tougher limit on allowable leverage. What most people have forgotten is that a version of this bill was offered as an amendment to the Dodd-Frank Act back in 2010. While it failed to pass then, garnering only 33 votes in favor, parsing the yeas and nays reveals some important clues as to where senators may be heading in 2012. It’s particularly interesting that some key Republicans voted for the amendment, including Senators Richard Shelby and Tom Coburn. A few others sought to keep their powder dry on this tough vote, choosing to abstain (Senators David Vitter and Jim DeMint).
Over the last few years, other Republicans in both the House (see Representative Spencer Bachus’s proposal) and the Senate (see Senator Jeff Sessions’s proposal) have introduced bills to update the bankruptcy code so there is an orderly way to resolve large and complex firms that are failing. (Creating a bail-in regime is also under consideration.) The measures all seek to improve the code as a means to promote market discipline and thus protect taxpayers over the long term. The principal advantage, compared with Dodd-Frank (and specifically Title II), would be to substitute clear rules for regulator discretion during the next crisis – rules that could eliminate any near-term borrowing advantage, or imputed subsidy, for banks and their creditors.
Bank bailouts are not favors randomly dispensed to clients – they are generally motivated by a desire to prevent the chaos that would arise from cascading losses. By reducing expectations of bailouts and increasing the cushions to absorb losses, clear rules and increased capital change the facts on the ground and make bailouts less likely. As long as market participants (and creditors specifically) interpret regulator discretion as increasing the chances that some government power will be used to bail them out when the next panic ensues, big banks will still be advantaged at the expense of taxpayers. The logic behind an alternative resolution regime is to squarely address the economic costs that bailouts seek to suppress.
A group of academics from across the country has come together to propose just such a thing: an entirely new chapter in the bankruptcy code. The authors note that the existing bankruptcy process does not work particularly well for large and complex financial firms, as we saw with Lehman Brothers. At the same time, the new bank resolution authority in Dodd-Frank is an incomplete fix. The current system preserves regulatory discretion over how to handle the failures of financial institutions and leaves open the possibility that some creditors may be bailed out in the future. It is better, this group argues, to fix the current bankruptcy regime so that clear rules govern the resolution of major financial institutions.
The overarching goal should be to change the expectations of market participants and regulators on the risk of future bailouts. Market participants should have incentives to engage in more extensive credit analysis and upfront monitoring of bank liabilities and leverage. This will make future bailouts less likely, because through greater market discipline there will be fewer panics. And if regulators perceive that banks can be allowed to fail in an orderly manner, they will also be less inclined to prop up or confirm their TBTF status through interventions and bailouts.
Driven by the political debate, policymakers will increasingly have reason to revisit several aspects of Dodd-Frank this fall and into 2013. The leading candidates to take over the chairmanship of the powerful House Financial Services Committee are certainly convinced that the system is still plagued by the TBTF syndrome. This is true both for Republicans like Representatives Jeb Hensarling
and Ed Royce, as well as the next-in-line Democratic member, Representative Maxine Waters
. Senators are likewise focused on the threat of future taxpayer bailouts, including the top Republican on the Banking Committee, Senator Shelby
, and his Democratic colleague Senator Brown.
Thankfully, there is still time to fix financial regulation and end the bailouts that go along with too-big-to-fail banks.
Arpit Gupta, a Ph.D. student in finance at Columbia University’s Graduate School of Business, contributed to this column.
PHOTO: Sanford I. Weill, chief of the Travelers Group (L), and John S. Reed, chief of Citicorp, shake hands at a press conference in April 1998 in New York, where it was announced that the two companies would merge in a deal worth more than $70 billion.
REUTERS/Mike Segar
JUL 12, 2012 19:24 UTC
Just as the recession in the early 2000s became linked with the bursting of the tech bubble, for many the financial crisis in 2008 has been synonymous with the blow-up of subprime mortgages.
But there was more to 2008 than that.
Gary Gorton, an economist at Yale, recently published an analysis that shows how well some subprime mortgage-backed securities have performed over the past few years – a very counterintuitive conclusion. Citing one of his graduate students, Gorton explains that AAA/Aaa-rated subprime bonds issued in the peak bubble years (when mortgage underwriting was arguably the weakest in history) were only down 0.17 percent as of 2011. In other words, the highly rated subprime bonds – or toxic assets so associated with the financial crisis – have experienced only minimal losses since the bubble popped.
Of course, that bond statistic ignores the numerous costs borne by the federal government in response to the crisis. For example, the mortgage giants Fannie Mae and Freddie Mac required a bailout of more than $150 billion, and the Federal Reserve cut interest rates to historic lows and held them there, in part so that millions of homeowners could refinance their mortgages (often into new mortgage products that were also backed by taxpayers through another government program). That is, it’s important to acknowledge that subprime mortgage products have done relatively well partly because of post-crisis government interventions that were costly for taxpayers.
Nevertheless, the subprime bonds’ better-than-expected performance can help us think through the broader role of highly rated securities in the financial crisis. In particular, it helps focus our attention away from the assets themselves (as it now appears that some subprime securities held up surprisingly well) and toward how the assets were financed using leverage and risky holding structures.
Consider the production process for mortgage securities during the crisis. To convert bundles of poorer-quality mortgages into valuable securities, banks made use of waterfall structures in securitization. Any future losses on subprime mortgages would first hit the lower-rated tranches of the securities. The highest-rated tranches – given AAA status – would lose money only in the event of extraordinary mortgage losses that effectively wiped out the lower-rated tranches. Generally, lower-rated tranches accounted for 20 to 25 percent of the securitization. This meant that if mortgages defaulted and only 50 percent of their value was recovered through foreclosure, then between 40 percent and 50 percent of the mortgages in a pool would have to default for the AAA noteholders to suffer any losses. While defaults and loss severities on subprime loans have been bad, in most cases they have not been this extreme, which is why the highly rated subprime bonds seem to have escaped serious losses so far – declining in value only a small amount, as Gorton suggests.
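The subordination arithmetic above can be made explicit in a short sketch. The buffer sizes and 50 percent recovery rate are the illustrative figures from the text, not parameters of any specific deal:

```python
def breakeven_default_rate(subordination, recovery):
    """Fraction of a mortgage pool that must default before losses
    reach the senior (AAA) tranche.

    Pool loss = default rate x (1 - recovery); the senior tranche is
    hit only once pool losses exceed the subordination buffer.
    """
    loss_severity = 1.0 - recovery          # loss per defaulted dollar
    return subordination / loss_severity    # defaults needed to exhaust the buffer

# A 20-25% junior buffer with 50% recovery -> 40-50% of loans must default
for buffer in (0.20, 0.25):
    print(f"{buffer:.0%} buffer -> {breakeven_default_rate(buffer, 0.50):.0%} defaults")
```

The calculation shows why even historically bad subprime default rates left most AAA tranches intact: losses had to burn through the entire junior stack first.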
If the performance of these mortgage pools holds up in the future, then one surprising post-crisis conclusion may be that the securitization waterfall structure actually worked. In most instances, the construction of mortgage-backed securities left enough of a buffer in place to protect the highest-rated tranches from serious losses.
Of course, the much bigger problems lay in the quality of that buffer – the lower-rated pieces of subprime mortgage-backed securities. These were generated in many cases almost as a waste by-product of the securitization process. As the housing bubble burst, it was the holders of these assets that suffered massive losses, since they were in the first loss position.
While it was originally difficult to find willing buyers of the lower-rated pieces of subprime mortgage-backed securities, issuers eventually combined and repackaged them into derivative products called collateralized debt obligations. It was these products – including the so-called synthetic variety, which relied on credit default swaps – that proved to be the real problem products
. Many of them were held by structured investment vehicles (often sponsored by banks) and constitute one of the reasons financial institutions faced insolvency during the crisis.
Underlying the demand for CDO products was the phenomenon previously discussed: the universal hunger for highly rated financial products. Overwhelming demand for AAA-rated securities induced banks to create new financial instruments that effectively stretched the definitional bounds of what was truly a quality or safe asset. These structured products wound up constituting a large fraction of the losses borne by subprime mortgages.
The problem with securities during the financial crisis wasn’t just how they impacted the asset side of the balance sheet. Rather, the greatest fallout appears to be the result of how these (and other similar) securities were funded through short-term loans on the wholesale market.
Prior to the financial crisis, in the so-called shadow banking system, banks came to use securities – highly rated MBS and asset-backed commercial paper – for the purposes of short-term borrowing and lending. The higher the security was rated, the greater its collateral value, which allowed the bank to secure more funding on better terms.
The onset of the financial crisis led to large-scale downgrades of many of the securities that were used as collateral. Even though a lot of these securities did not end up experiencing large credit losses over time (per Gorton), they did suffer huge declines in market value (at least initially).
Many mortgage securities were used in overnight lending relationships known as repo – between banks and other financial institutions. The downgrading of mortgage-backed securities led to greater margin calls, then to trading losses and, finally, to fire sales of assets. In short, these assets could no longer support the loans secured against them; their collateral value fell and, effectively, there was a “run” on major parts of the financial system as lenders demanded their money back. The resulting losses – from the collapse of trading arrangements, not of the underlying securities – wound up bankrupting major financial participants like Bear Stearns and Lehman Brothers.
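The collateral mechanics behind this "run" can be sketched in a few lines. The haircut figures below are hypothetical, chosen only to illustrate how a downgrade shrinks borrowing capacity and forces a margin call:

```python
def repo_funding(collateral_value, haircut):
    """Cash a repo lender will advance against collateral at a given haircut."""
    return collateral_value * (1.0 - haircut)

portfolio = 100.0                        # market value of pledged MBS
before = repo_funding(portfolio, 0.02)   # low pre-crisis haircut
after = repo_funding(portfolio, 0.25)    # higher haircut after downgrades

print(f"funding before: {before:.0f}")   # 98
print(f"funding after:  {after:.0f}")    # 75
print(f"margin call:    {before - after:.0f}")  # borrower must raise 23 in cash or sell assets
```

Multiplied across every leveraged holder of the same collateral, that forced deleveraging is what turned mark-to-market declines into fire sales and failures.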
Recent research by Northwestern economist Arvind Krishnamurthy and colleagues has found that similar problems with asset-backed commercial paper were actually far greater in scope. The resulting collapse of credit in other areas of the financial sector, such as money market mutual funds, subsequently fueled the recession.
The lesson is that it wasn’t just the product that was the issue – fragile financing mechanisms were really the key driver in the financial crisis. If financial intermediaries had held their asset positions with less leverage or with longer-duration borrowings, they would have been able to ride out market gyrations. Instead, reliance on debt and short-term holdings forced banks into costly sales and drove widespread insolvency.
The mix of leverage and the shared interdependence on extremely short-term wholesale funding markets (comprising both repos and commercial paper) turned what were initially relatively small losses into tens of trillions in lost output globally.
In many respects, the pre-crisis shadow banking system resembled the pre-Depression-era banking system, which was also prone to fragility and frequent crises. The problem back in the 1930s was partially addressed through the use of deposit insurance, which limited the potential for bank runs (though at the cost of boosting moral hazard).
Regulators today face two options in dealing with this complex financial system. One option would be to further encourage financial complexity but offer sufficient insurance (along the lines of the FDIC deposit guarantee) to financial institutions. However, this would expose taxpayers to future losses that could rise into the trillions, meaning the guarantee itself might not be sufficient to prevent a crisis to begin with. Additionally, it would further fuel moral hazard.
Another option would be to recognize the systemic fragility and work to combat the underlying sources. The first of these is the widespread reliance on “safe assets,” which itself is partially a regulator-driven phenomenon. As we’ve seen, regulators’ preferences for seemingly safe assets incentivize market participants to create and transform risky assets into new products that can be passed off as safe. Additionally, no asset is truly safe from losses to begin with, and doubling down on that fiction simply raises the stakes when default finally happens.
The second core reform should be to restructure the liability system of systemically important financial institutions. Maturity mismatch, to the extent it happens, should take place in traditionally regulated commercial banking institutions. Firms should be free to pursue real financial innovation, so long as their actions do not result in demands for bailouts or contagious financial losses for others.
A review of this narrative suggests a rather stark picture of the reality of the modern financial system. Rather than the financial crisis being a one-off result of a historically anomalous housing boom, it increasingly appears that the central problem was a financial system so levered and dependent on near-term financing that relatively small losses could spark big problems. Absent reform, look for this pattern to return.
JUL 2, 2012 22:20 UTC
The policy response that perhaps best connects the U.S. financial crisis and the still brewing eurozone problem is that regulators have endeavored to make financial institutions more resilient. Policymakers on both sides of the Atlantic have focused on increasing financial institutions’ capital and liquidity positions to try to limit future bank failures and systemic risk. Both goals are served by increasing banks’ holdings of “safe assets” that are easily sold and retain value across different global economic environments.
But what if there simply aren’t enough safe assets to go around? After all, safe assets aren’t only being gobbled up in the name of financial stability. Today, the global investor universe is undoubtedly more risk-averse and naturally hungrier for these same stores of value. From insurers to pension funds, the demand for safe assets – and the corresponding dearth of supply – has led to strange, if not ominous, distortions in the market.
For example, over the last few years there have been numerous periods where the yields for short-term U.S. and German sovereign debt have turned negative. The real yields, or the amount earned after adjusting for inflation, on front-end Treasury notes are currently less than -1 percent. This means investors have been putting aside their search for yield, willing to lock in (small) losses with their new purchases because there were very few alternative and liquid markets where they could park their money on better terms.
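The inflation adjustment behind that claim is simple Fisher arithmetic. The figures below are illustrative, not actual 2012 market quotes:

```python
def real_yield(nominal, inflation):
    """Exact Fisher relation; for small rates this is close to
    the simple difference nominal - inflation."""
    return (1 + nominal) / (1 + inflation) - 1

# A 0.2% nominal yield against 2% expected inflation locks in a real loss
print(f"{real_yield(0.002, 0.02):.2%}")  # -1.76%
```

Any nominal yield below expected inflation means the buyer accepts a guaranteed loss of purchasing power, which is exactly the behavior the safe-asset shortage has produced.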
In a recent report
, the IMF explores this growing tension between the supply and demand of safe assets – and the takeaways are nothing short of frightening.
Let’s start with what are considered safe assets today – even though, of course, this categorization is crude and likely to be overly inclusive. There is sovereign debt (with debt ratings above A/BBB), investment grade corporate debt, gold, and also highly rated securitizations and covered bonds. The IMF estimates that there are approximately $75 trillion of these safe assets in the market today.
With banks, pension funds, insurance companies, sovereign wealth funds and central banks all gorging on these assets to differing degrees, real prices have been on a tear. But surely the market will eventually adjust as the supply of these assets grows over time to meet the new demand, naturally releasing some of the pressure on prices?
A close look at just the narrative around government bonds reveals the extent of the problem ahead and why the market shouldn’t be counted on to self-correct. Back in 2007, before the housing bubble popped in the U.S., roughly 70 percent of the sovereign debt for the world’s most advanced countries was rated AAA. Today, this rating applies to only 50 percent of these countries, a drop affecting approximately $15 trillion in “safe” sovereign assets.
The fluid crisis in Europe is one reason that a snapshot of countries’ credit profiles is of limited value, at least for the foreseeable future. The IMF projected that more than a dozen countries could fall from the class of safe asset issuers in the coming years. It concluded that by 2016 the total pool of safe assets might fall by 16 percent, or more than $9 trillion. In short, global fiscal retrenchment – here in the U.S. and across Europe – is expected to be slow and painful.
Some might argue that a drop in the supply of safe assets means that buyers will have to move down the safety scale and purchase assets that are only a little bit riskier. But this only makes the system more vulnerable. If the financial crises of the last few years have taught one consistent lesson, it’s that there really isn’t such a thing as a truly safe asset. Practically everyone now appreciates that there is no hidden part of the globe where an investor can buy an asset that doesn’t contain at least some amount of credit, inflation, currency or market risk.
There is a problem with the theory that the global economic crises have corrected all the past flaws with how risk is measured and priced, particularly with regard to once-heralded safe assets like sovereign bonds or mortgage-related securities.
First, it overlooks just how ingrained the concept of safe assets is in the global financial regulatory architecture. The problem in the euro zone banking system is partly due to banks holding their home government’s debt for regulatory and central bank funding purposes. Banks and other financial institutions are increasingly asked to use high-quality collateral as margin in derivatives trades. Reforms in this area make sense since derivative bets were insufficiently capitalized – but the impact of new rules, like margin requirements, on the demand for safe assets and on credit availability has to be acknowledged.
The problem is that all the regulatory efforts that seek to reduce leverage in the financial system not only presuppose the existence of safe assets but also assume that what is and is not a safe asset can be known with any degree of certainty.
Japan’s government debt, for example, is still considered to be among the safest in the world, despite a gross debt to GDP ratio of over 200 percent. How can one know if or when market participants will regard Japan’s debt with the same apprehension that they regard Spain’s today?
On the supply side, there are very few bright spots. Sure, a greater number of emerging countries will join their more developed brethren over time, but few analysts expect a flurry of new emerging economies to start issuing AAA-rated sovereign debt in the near term. Building the required legal institutions and financial architecture will take years, if not decades, for some countries to make this transition.
The private sector used to be a prime provider of safe assets through production channels like securitization. Private-sector issuance in the U.S. alone has declined by more than $3 trillion since 2007. There are obvious tensions between the desire for additional debt issuance and its impact on the safety of the issuer. The future of the housing government-sponsored enterprises (GSEs) also looms large given the importance of their debt and mortgage-backed securities as collateral.
While some are rightfully calling for a rebalancing of the U.S. mortgage market away from the government (since it is guaranteeing practically all new mortgages today), others are worried that this transition will lead to less government-backed MBS issuance – an important component of the current global supply of safe assets.
Eventually, the labored search for safe assets will drive prices to the point where investors have to settle for riskier assets. With interest rates expected to stay close to zero for some time (reflecting a world with slow growth and increased financial stress), the market is only becoming more susceptible to ripple effects from sudden drops in prices that turn safe assets almost overnight into unsafe ones, which then may no longer satisfy a key regulatory requirement. As the IMF puts it: “Demand-supply imbalances in safe asset markets could also lead to more short-term volatility jumps, herding, and cliff effects” – and even fuel new asset bubbles.
Ratings downgrades on U.S. and European sovereign securities teach us that what is a safe asset one day can be almost toxic the next. Building a regulatory architecture on these assets becomes dangerous because the transition from “safe” to “toxic” is likely to come at the same time that a bank’s dependence on the asset’s safety is greatest. As we have seen, the biggest panics are those that involve what were presumed to be safe assets, like the short-term commercial paper of Lehman. By hinging regulation on them again, the world seems to be tempting fate.
JUN 19, 2012 21:12 UTC
There are about 1 billion cars on the world’s roads today. By mid-century, forecasts have that number climbing to 4 billion. Meanwhile, Congress is mired in a debate over whether to pass a new
highway bill. Senator Barbara Boxer, a chief negotiator of the pending bill, lamented recently that she was “embarrassed for the people of this country” that this measure had not been enacted. After all, she said, passing highway bills used to be as popular and as important as “motherhood and apple pie.”
As with all previous highway bills, proponents generally wrap their arguments in projections for new jobs, or rhetoric that links fresh infrastructure spending to unclogging the arteries of commerce. For the president, a highway bill fits his campaign theme of getting America back to work. In a recent speech in Cleveland, the president issued a call to “rebuild America” and to do “some nation-building here at home.” The main obstacle remains how to pay for new spending and investment. Flashback to 1998 and 2005: Those were the last years Washington enacted “highway bills,” or measures to reauthorize federal infrastructure spending programs. Now that the economy is sputtering in 2012, many would like to see Congress pull a page from the playbooks of those years. The taxpayer price tags for the ’98 and ’05 multiyear highway bills were $218
billion and $286 billion, respectively. Count President Obama as part of today’s infrastructure-stimulus choir, as he has proposed a $556 billion six-year bill.
Harvard Professor Edward Glaeser argues: “America’s infrastructure needs intelligent reform, not floods of extra financing or quixotic dreams of new moon adventures or high-speed railways to nowhere.”
U.S. policymakers would be wise to take a moment this summer to reflect on whether the national strategy they are contemplating for infrastructure investment properly prioritizes performance and leverages technology.
Federal and state spending on transportation has grown faster than inflation for decades, yet the broader system’s performance has continued to deteriorate. The future of infrastructure in the U.S. is about achieving system performance – like attacking problems such as road congestion – rather than always adding raw capacity.
Over the last five or so years, an alternative vision for the future of infrastructure has unfolded, one that views travelers as customers who prioritize an efficient commute and a transportation system that’s safe. This recast framework has been enabled, in part, by the emergence of new tools to measure travelers’ objectives and system deficiencies. Private investment is also starting to flow to develop the new underlying technologies and creative new business models.
While the infrastructure grid has long had cameras to help spot accidents causing delays, the pervasiveness of smartphones, new GPS technologies and other sensors (those in and above ground) has exponentially added to the data pool.
Among the top complaints from driving customers are congestion, traffic delays and overly long commutes. New startups are developing applications to help cities do everything from identifying potholes faster to spotting in almost real time the fender bender that is slowing down traffic. The fresh focus on performance has also led to straightforward tech ideas like flexible screens
that can be erected quickly at the scene of an accident to stop the rubbernecking by nearby travelers that causes congestion.
New efforts like SFPark
are seeking to transform the conventional parking meter. These services use apps, linking data from wireless sensors (either embedded in or tacked onto the parking-spot pavement), to match parking availability with consumer location and demand.
With the explosion of data in and around our transportation infrastructure, large companies have also set their sights on developing analytical platforms for cities and other urban planners. Cisco’s “Smart + Connected Communities” initiative
and IBM’s “Smarter Cities” visions are leading the way. The tagline for Smarter Cities lays out the broader premise: “that the world is becoming more interconnected, instrumented, and intelligent, and this constitutes an opportunity for new savings, efficiency, and possibility for progress.”
Over the last couple of years IBM helped design the first-ever citywide clearinghouse for infrastructure data in Brazil, called the Operations Center of the City of Rio
. What makes this center unique is that it has integrated practically all of the city’s major information or response-related departments and agencies so that there is “a holistic view” of how the city is functioning or performing in real time, 365 days a year.
As the New York Times
reported in a profile on the Rio center earlier this year, these platforms are being utilized not only by cities but also by smaller organizations like the Miami Dolphins, which wants to more efficiently manage the traffic
around its new stadium. Schools are another good example. Everyday Solutions
, a relatively new startup, provides a Web-based utility that monitors travel times and ridership rates and helps parents track the school bus their kids are on. (For more examples, check out Fast Company
’s top 10 list of most innovative companies in transportation.)
Academia is also advancing both tech research and deployment: Check out Carnegie Mellon’s Traffic21 and the Singapore-MIT Alliance for Research and Technology, or SMART.
The units of transportation are facing a frontier of change that will see cars, trucks and buses transformed into intelligent vehicles. Earlier this year at the 2012 Mobile World Congress in Barcelona, Ford Motor Co executive Bill Ford shared his “Blueprint for Mobility,” which lays out how transportation can change over the next decade. The auto company is investing in platforms that take advantage of the increasing number of sensors in and around vehicles as well as vehicle-to-vehicle communication initiatives, including accident warning or prevention systems. Sebastian Thrun
’s work on self-driving, or “semiautonomous,” cars
has the potential to improve mobility and, more important, safety. Over the last 10 years, more than 350,000 people have lost their lives on American roads. Thrun and his colleagues at Google X Lab have developed working prototypes that can travel thousands of miles without a driver behind the wheel. The cars can travel on highways, merge at high speeds and navigate city streets, eliminating the thousands of little decisions that drivers make that contribute to congestion and accidents. The self-driving car, with its ability to communicate with other vehicles and utilize precision technology, offers the potential to circumvent many of these problems.
Given that this sector is just starting to sprout up on its own, perhaps the federal government should stay on the sidelines in the near term to avoid stifling innovation. Yet just last year Google helped Nevada draft the nation’s first state law
to allow self-driving cars on its roads (with preset conditions like requiring human co-pilots).
For Bill Ford, the opportunities on the more immediate horizon are quite clear. Cars could become “a billion computing devices” or “rolling collections of sensors” and made part of one large data network to “advance mobility, reduce congestion, and improve safety.” Sure, the benefits might be realized more quickly with the right help from the government. But if the value proposition exists and infrastructure customers start to demand better performance, this new vision may already be inevitable.
JUN 7, 2012 17:14 UTC
The recent jobs and GDP numbers released by the government were a broad disappointment, and plenty of analysts have discussed the implications of the data. Yet, most of the analysis has focused on two dimensions – whether it’s now more or less likely that Congress or the Fed will act on either the fiscal or monetary fronts to try to boost the economic recovery.
The consensus is that the odds are marginally higher now that the Fed will signal something stimulative
at its next meeting on June 19-20, while Congress is still hopelessly deadlocked, and the economy will have to show significantly more weakness for this dynamic to change.
However, there is a third dimension happening at the state level. The state-by-state GDP numbers out this week suggest that the probability that Republicans will take the Senate is rising. The weak economic growth numbers in some battleground states imply that Republicans could pick up several key U.S. Senate seats and probably take back the majority for the first time since 2006.
The Bureau of Economic Analysis reported
yesterday that the U.S. real GDP by state grew 1.5 percent in 2011 after a 3.1 percent increase in 2010.
First, some quick facts about the current Senate composition. Democrats have a 53-47 majority (which includes two “independents” who caucus with the Democrats).
A flip in control of the Senate has the potential to dramatically alter the framework for the policy negotiations around the looming federal deficit and the end-of-year “fiscal cliff”
. Since Republicans are expected to hold their majority in the House, a full Republican Congress would be more likely to extend the current tax rates and maintain significant spending cuts while blocking President Obama’s plans to increase taxes (should he win in November), including those on capital and savings.
There are 33 Senate seats up for election this November. Democrats hold 23 of them, and 7 are “open,” meaning that the incumbent isn’t running for re-election. By comparison, Republicans have only 3 open seats (Maine, Texas and Arizona) among the 10 they are defending.
If Republicans: 1) hold Texas, Arizona and Massachusetts; 2) lose Maine to an independent who caucuses with the Democrats; and 3) snatch all six of the remaining Senate seats (Virginia, Florida, North Dakota, Nebraska, Montana and Missouri), then Republicans would have a 52-48 majority.
So, how did each of these states do in terms of the percent change in real GDP in 2011? Here is how the 2011 numbers compare with 2009 and 2010 by region.
The answer addresses a question that many prospective voters will be asking themselves come November – whether their local economy is improving or not.
The new 2011 GDP stats in these three states probably won’t move the forecast needle – with Texas and Arizona staying Republican and Maine flipping to an independent caucusing with Democrats:
Texas: Highest quintile with 3.3 percent GDP growth in 2011.
Arizona: Second-lowest quintile with 1.5 percent.
Maine: Lowest quintile with -0.4 percent.
How about the six states where Democrats might lose or are most vulnerable:
Virginia: Second-lowest quintile with 0.5 percent growth.
Florida: Second-lowest quintile with 0.5 percent growth.
North Dakota: Highest quintile with 7.6 percent growth. (A recent boom in the mining sector contributed greatly to this figure.)
Nebraska: Lowest quintile with 0.1 percent growth.
Montana: Lowest quintile with zero growth.
Missouri: Lowest quintile with zero growth.
The weak state-by-state GDP numbers suggest that Senate candidates, particularly Democrats across the country (including President Obama – who will also be battling in these states), are more likely to be on the defensive regarding the economy and the underwhelming recovery.
It is important that these conclusions about state growth be tempered. It’s certainly possible that if the economy improves on a national basis or the president benefits from a second wave of youth voters in November, akin to that of 2008, the impact will end up trickling down the ballot to the Senate contests.
But it’s difficult to be optimistic about the broader backdrop this summer, as the EU unravels and investment is expected to slow down in the second half of the year as the fiscal cliff approaches. The prospect for a significant improvement in the economy before November appears to be rather low at this point.
MAY 31, 2012 16:27 UTC
This post is adapted from the author’s testimony at a recent hearing before the U.S. Senate Banking, Housing, and Urban Affairs Committee.
Many of the major negative housing trends that have dominated the headlines since the crisis are now well off their post-crisis peaks. With prices only flat to slightly down year-over-year, there is finally some optimism, probably for the first time in more than three years. But before we get ahead of ourselves, let’s examine some of the economic fundamentals and also assess the policy and regulatory headwinds that are still blowing from Washington.
New delinquencies are trending lower on a percentage basis. The decline in home prices also appears to be leveling off or approaching a bottom on a national basis. Data from CoreLogic suggests that house prices increased, on average, across the country over the first three months of 2012 when excluding distressed sales. Even the numbers from the Case-Shiller Index out this week suggest that a floor in home prices has been reached.
There is also a relative decline in the supply of homes for sale. The chart below shows how the existing stock of homes for sale is now approaching a level equal to five to six months of sales. This is a very promising development. According to the Commerce Department, the housing inventory fell to just over five months of sales in the first quarter, the lowest level since the end of 2005.
In short, the level of housing supply today suggests that the market is close to equilibrium, which implies house prices should rise at a rate consistent with rents. Market analysts often look at a level above or below six months of sales as either favoring buyers or sellers, respectively. It’s not surprising then that the recent stabilization of home prices nationally has occurred as the existing inventory, or supply level, has declined.
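The months-of-supply metric referenced above is simple arithmetic: the stock of homes listed for sale divided by the current monthly sales pace. A minimal sketch, with illustrative inventory and sales figures (not actual Commerce Department data):

```python
# Sketch of the months-of-supply metric. The inventory and sales-pace
# numbers below are illustrative, not official data.

def months_of_supply(homes_for_sale: float, monthly_sales_pace: float) -> float:
    """Months needed to clear the current inventory at the current sales pace."""
    return homes_for_sale / monthly_sales_pace

# Illustrative: 2.5 million homes listed, selling at ~460,000 per month.
supply = months_of_supply(2_500_000, 460_000)

# Conventional read: above ~6 months favors buyers (downward price pressure),
# below ~6 months favors sellers / suggests rough equilibrium.
if supply > 6:
    reading = "buyer's market"
else:
    reading = "seller's market / near equilibrium"

print(f"{supply:.1f} months of supply -> {reading}")
```

With these assumed inputs the market sits just under the six-month threshold, consistent with the stabilization in prices described in the text.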
A couple of important caveats should be kept in mind, however. First, almost any discussion of national inventory trends can gloss over regional problems, or acute supply challenges in individual state markets. Second, the transaction data around home sales suggests that any near-term demand-supply equilibrium is occurring off of an extremely low transaction volume. In essence, weak demand for single-family homes appears to have eclipsed the supply challenge moving forward for the housing market.
Consider that homes are more affordable than they have been in decades.
The National Association of Realtors Home Affordability Index measures the “affordability” of a median-income family purchasing a median-priced home (using a 20 percent downpayment for a 30-year fixed rate mortgage). All of which is to say that house prices look low on a historical, user-cost basis.
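The mechanics behind an index of this kind can be sketched as follows. This follows the commonly described NAR methodology (an index of 100 means a median-income family exactly qualifies for the median-priced home, with the mortgage payment capped at 25 percent of income); the price, income, and rate inputs are illustrative assumptions, not official NAR data.

```python
# Rough sketch of a housing affordability index in the NAR style.
# Assumptions: 20 percent downpayment, 30-year fixed-rate loan, payment
# limited to 25 percent of gross income. Inputs below are illustrative.

def monthly_payment(principal: float, annual_rate: float, years: int = 30) -> float:
    """Standard amortizing payment on a fixed-rate mortgage."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

def affordability_index(median_price: float, median_income: float,
                        mortgage_rate: float) -> float:
    loan = 0.80 * median_price                # 20 percent downpayment
    payment = monthly_payment(loan, mortgage_rate)
    qualifying_income = payment * 12 / 0.25   # payment must be <= 25% of income
    return 100 * median_income / qualifying_income

# Illustrative mid-2012-style inputs: $180k median price, $61k median
# family income, 3.8 percent 30-year rate.
print(round(affordability_index(180_000, 61_000, 0.038)))
```

A reading well above 100, as these assumed inputs produce, is what "more affordable than in decades" means in index terms: the median family earns far more than the income needed to qualify.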
So, this raises the question: Why are home sales still so depressed?
One major reason: tight lending and underwriting standards. Earlier this month, Federal Reserve Chairman Ben Bernanke commented on this trend by reviewing information from the latest Senior Loan Officer Opinion Survey on Bank Lending Practices (SLOOS).
Most banks indicated that their reluctance to accept mortgage applications from borrowers with less-than-perfect records is related to “putback risk” – the risk that a bank might be forced to buy back a defaulted loan if the underwriting or documentation was judged deficient in some way.
Federal Reserve Governor Elizabeth Duke also gave a speech earlier this month on this theme, providing even more detail on the conclusions from the April SLOOS:
- Compared with 2006, lenders are less likely to originate Government Sponsored Enterprise-backed loans when credit scores are below 620 regardless of whether the downpayment was 20 percent or not.
- Lenders reported a decline in credit availability for all risk-profile buckets except those with FICO scores over 720 and high downpayments.
When the lenders were asked why they were now less likely to offer these loans:
- More than 84 percent of respondents who said they would be less likely to originate a GSE-eligible mortgage cited the difficulty of obtaining mortgage insurance as a factor.
- More than 60 percent of lenders pointed to the risks of higher servicing costs associated with delinquent loans or that the GSEs might require them to repurchase loans (i.e., putback risk).
Another important market development to acknowledge is that lenders can’t keep up with demand, particularly with regard to mortgage refinancings. Anecdotal evidence suggests that some lenders are simply struggling to process all the loan applications coming their way. Part of the problem appears to be the structural shift in the market toward full and verified documentation of income and assets, which has lengthened the processing time for mortgage applications.
But if lenders and servicers don’t have enough capacity, why aren’t they simply hiring more staff or upgrading their infrastructure so they can handle more business? This seemingly innocent question is really important. Don’t market participants still perceive this business as profitable over the long term, with a comparatively good return on investment relative to other business lines?
Governor Duke’s conclusion is spot-on. Lenders or servicers are hesitating in the near term because they just don’t have a good sense of how profitable the housing finance-and-servicing business will be over the medium-to-long term.
And that’s because of the policy questions that haunt the housing sector. There is perhaps no other major industry that faces more micro-policy uncertainty than housing today. Putting aside broader GSE reform, these uncertainties can be grouped into two buckets: servicing and underwriting.
On the servicing side, federal regulators are in the process of establishing new industrywide rules governing their behavior, changing how servicers get compensated and altering the way the business itself can be valued if it’s part of a broader bank balance sheet.
On the underwriting side, the Dodd-Frank law pushed regulators to try to finalize very complicated rules governing who should be able to qualify for certain types of mortgages (i.e., ability to pay standards), including those that are bundled into mortgage-backed securities.
All of these actions will affect the future of house prices, as credit terms and mortgage availability are intimately linked to the user-cost of housing generally.
Resolving all of this uncertainty is all the more urgent because, beyond the clear short-term impacts on the market, there are also potential long-term consequences. For example, if lenders decide to hold off on making new near-term investments in their mortgage business, the long-term potential of a full rebound in housing may be diminished, as the existing or legacy infrastructure and skills can be expected to atrophy further.
Mortgage servicers are not in business to lose money. Moreover, the total volume of resources devoted to performing this function – employees, investment in computers and telecommunications infrastructure, legal compliance officers, sales staff – is not static. It adjusts upward and downward based on perceived opportunities, expected future revenues and government involvement.
Some big investments are not being made because of concerns that regulations will impose costs on the industry that cannot be recovered through servicing fees or other revenue streams. Here there have been a few positive developments recently, suggesting that at least some investors are willing or able to take on the aforementioned headwinds.
Non-bank and specialty mortgage servicers like Nationstar and Ocwen are buying up MSRs from the large banks. Home prices also appear to have reached the point where investors can buy properties and rehabilitate them for less than it would cost to construct them brand-new. This trend helped spark some fresh investments in late 2011, which has generated some modest momentum for 2012. In the first quarter of this year, GDP grew at a 2.2 percent annual rate, with residential investment contributing 0.4 percentage points of that figure.
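The "contribution" arithmetic behind that GDP figure is worth making explicit: a component's contribution to growth is roughly its own growth rate weighted by its share of GDP. A back-of-the-envelope sketch, using illustrative approximations (residential investment was a small slice of GDP growing quickly):

```python
# Back-of-the-envelope sketch of a component's contribution to GDP growth:
# its own growth rate weighted by its share of GDP. The share and growth
# figures below are rough illustrative approximations, not BEA data.

def contribution(component_growth: float, gdp_share: float) -> float:
    """Approximate percentage-point contribution to overall GDP growth."""
    return component_growth * gdp_share

# Residential investment: roughly 2.3 percent of GDP, growing ~19 percent
# at an annualized rate.
print(f"{contribution(0.19, 0.023) * 100:.1f} percentage points")
```

This is why a sector that is barely 2-3 percent of the economy can still move the headline growth number when it rebounds sharply.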
But so much policy uncertainty still looms.
No one really knows who the ultimate purchasers of mortgages are likely to be five years from now. Since the ultimate holders of mortgages – currently Fannie Mae and Freddie Mac, on behalf of the government – are the servicers’ client base, the current lack of clarity on who or what is likely to fund mortgages in the future has obvious ripple effects on servicers and all other professions exposed to mortgage finance.
A similar phenomenon is casting a shadow over the mortgage insurance industry. The difficulties in obtaining mortgage insurance are constraining lenders from selling to Fannie and Freddie, even if they have found buyers and are willing to originate the loans. Several mortgage insurance companies have failed in recent years, and others are no longer writing new coverage, instead simply managing their existing exposures.
Resolving even some of the uncertainty holds by far the greatest potential for responsibly helping the housing market moving forward. It’s just too bad that there is an election in November, since it means policymakers and regulators can be expected to dither out of fear of upsetting a particular interest group before votes are cast.
MAY 16, 2012 19:25 UTC
In light of JPMorgan Chase’s bad derivatives trades, the media’s spotlight has appropriately turned to the pending Volcker Rule. That’s the moniker for the still-under-development regulation that might restrict big banks from pursuing hedging strategies across their entire portfolio, including their own bets. Proponents say banks shouldn’t be able to do this, since banks hold consumer deposits that are effectively guaranteed by taxpayers and since taxpayers could be forced to bail out a foundering bank if it’s deemed too big to fail. On the other hand, the underlying law for the Volcker Rule, the Dodd-Frank financial reform law, specifically exempts or allows hedging related to individual or “aggregated” positions, contracts or other holdings, which may very well have covered JPMorgan’s recent trade.
Inside the Beltway, a fresh dispute is now emerging between regulators and policymakers over whether the current draft of the Volcker Rule can even apply to scenarios like JPMorgan’s, given how explicit Dodd-Frank is on this topic. On the one hand, there is the Office of the Comptroller of the Currency, which is starting to argue that these JPMorgan trades would likely have been exempted from the not-yet-final Volcker Rule. But then there are policymakers (namely, Senator Carl Levin) who are trying to make the case that Congress never intended for the law’s language to be interpreted so broadly.
While all the details around JPMorgan’s failed trading strategy emerge, there is an even more interesting backdrop to consider – whether JPMorgan Chase and other banks are still too big to fail. It was only a week ago that the Senate Banking Committee held a hearing where Paul Volcker, Thomas Hoenig and Randall Kroszner testified on “Limiting Federal Support for Financial Institutions.” While they each expressed different viewpoints, it was newly installed FDIC Director Hoenig who made the most news. He used the stage to discuss a paper he wrote in May 2011 on “Restructuring the Banking System to Improve Safety and Soundness.”
In broad strokes, Hoenig doesn’t think that the “too big to fail” (TBTF) problem has been adequately addressed. His conclusion is that the TBTF banks are effectively too big to manage and too complex to understand, and should be made smaller by defining what is and isn’t an “allowable activity.” For Hoenig, “banks should not engage in activities beyond their core services of loans and deposits if those activities disproportionately increase the complexity of banks such that it impedes the ability of the market, bank management, and regulators to assess, monitor, and/or control bank risk taking.”
Hoenig’s plan is bold, to say the least, and even Senator Bob Corker joked at one point that maybe it should be called the Hoenig Rule.
Volcker actually spoke first at the hearing and alluded to Hoenig’s plan when the committee asked if anything should be done about the great increase in concentration at the largest banks (before and through the crisis years):
I don’t know how to break up these banks very easily. But some of the things we’re talking about – reduced trading, for instance – will reduce the overall size of the bank reasonably. Some of the restraints on derivatives will reduce their off-balance sheet liabilities significantly.
So they are at least modest steps. There is a provision in the law they cannot grow beyond certain limits by merger or acquisition. So there are some limits here. But if you say – asked me whether I prefer a banking system that had less concentration, I would. But I – I think we can live more or less with what we have.
While Volcker thinks we can more or less live with this ongoing TBTF problem, the broader public and even some regulators aren’t so sure. For example, the acting head of the FDIC, Martin Gruenberg, recently gave an important speech that was intended to persuade the market that existing tools will work when the next crisis hits. As a Dow Jones reporter put it, “regulators are looking to chip away at the tacit understanding that the government will step in to save top financial institutions seen as vital to the economy or banking system.”
This is where the JPMorgan story provides a potentially revealing case study on just how much progress has actually been made over the past few years. Josh Rosner, managing director at Graham Fisher & Co and co-author of the great book Reckless Endangerment, posed a very interesting question during a recent television interview:
Can we really talk about there being a free market when at the end of the day you’ve got institutions that are in fact Too Big to Fail? … One of the questions I would ask is, did JPMorgan’s counterparties demand more collateral from them in the face of these exposures the way they [JPM] did against Lehman right before its failure?
If the answer is no… then clearly everyone assumes that JPM is always money good because it is Too Big to Fail … so we’re not talking about regular risk taking behavior of firms that can win or lose or succeed or fail. We’re talking about a specific subset of firms. I keep coming back to when are we going to actually address that issue.
Right now, there doesn’t appear to be any (public) evidence that counterparties demanded more collateral as JPMorgan revealed the position details or the extent of its losses. Then again, perhaps it’s just that a $2 billion-plus loss isn’t viewed by the market as that big a deal when the balance sheet of the firm in question is about 1,000 times larger (and is still generating profits)?
Look for the Senate to stitch all these themes together in the next couple of weeks when it holds its first hearing since JPMorgan’s disclosure. Remember, the aforementioned Senate hearing with Volcker and Hoenig occurred just before the JPMorgan story broke. Senators will surely want to examine whether this derivatives loss is a (potential) public policy problem because it occurred at a bank, or because it occurred at a $2 trillion-plus bank. Context is very important. Risk taking is not “bad” unless it occurs inside an institution that cannot fail, and cannot fail because the government, on behalf of taxpayers, won’t let it.
For now, however, it’s hard to see how any new Dodd-Frank-related legislation moves through the divided Congress, especially in an election year. This means that it’s the regulators and the signals they send out to the market that will matter most for gauging when, how and in what form the Volcker Rule gets finalized. That said, one hopes this JPMorgan situation will spark a fresh debate about the shortcomings of Dodd-Frank and specifically on the most important systemic problem post-crisis – ending too big to fail, once and for all.