ONE
Origins of a Culture
In the 1970s, a candidate for president advanced the novel proposition that the money in the Social Security system should be funneled into, of all places, the stock market. The candidate's name was Ronald Reagan. The incumbent president, Gerald Ford, had a good deal of fun with this evidently zany proposition. "I am not sure a lot of people would think it was a very good place to invest funds over the longer period of time," Ford declared.1 His advisers had no trouble tarring the idea as kooky. The president likened it to "something dragged out of the sky." If not certifiably alien, then it might even be-perish the thought-an example of "wild-eyed socialism," which was no doubt something worse.
Ford did not have to explain why he thought the stock market was not a safe place "over the longer period of time." Stocks were considered simply too risky. Indeed, in 1976, the market was no higher than its level of eleven years before. Adjusted for inflation, the picture was far worse: the purchasing power of the average stock had fallen by two-thirds. Even over the longer sweep of a half century, stocks had managed a gain of only 3½ percent a year, so that people thought of the stock market as a place that went upwards a little but sideways mostly, with wrenching nosedives along the way. Indeed, the number of Americans who owned stock would actually fall during the '70s by seven million.2
Such grim statistics were reflected in a certain distance between the market and people's ordinary lives. Most newspapers carried at most a single account of the previous day's action on Wall Street, and television barely covered it at all. Today, at my daughter's middle school in New Jersey, an investing club is busily educating future market wizards, but in the '70s, through four years on an Ivy League campus, I didn't hear a mention of the stock market. Professors spoke darkly of America's "economic interests," but if any of those interests happened to be corporations with publicly traded shares, it was a detail that went unspoken.
Unlike in the '90s, when people would become accustomed to faithfully adding a little bit to mutual funds, rain or shine, every month, in the '70s, they withdrew a little bit, month after month, and they did so for eight long years. For Wall Street it was one long night, one long depression. Even the pros who managed pension funds were little more interested in stocks than my professors were. By 1979, of the money managed by pension funds, 90 percent was invested not in stocks but in bonds, bills, and cash, which was practically like stuffing it under a mattress.3 That summer, BusinessWeek sized up America's non-love affair with the stock market in a morbid, instantly famous cover story-"The Death of Equities."4
But equities were not dead, only dormant. And the renaissance began in short order. Three months after the article, mutual funds-finally-took in more money from investors than they redeemed. The net addition was a trifle-a mere $12 million. But deep in the giant furnace room where the economy is engineered, a long-stuck wheel had emitted a creak, shaken off its cobwebs, and, finally, turned. People were buying stocks.
Over time, this little shift, this rediscovered habit that ripened into a passion, affected far more than the Dow Jones average. When investors awoke, executives found that they, too, inhabited a different world. The rules soon changed for auditors and analysts and ordinary savers as well-an entire culture was retailored. By the late 1990s, America had become more sensitive to markets, more ruled by markets, than any country on earth.
This was the culture that led to prosperity and also to Enron. Markets became virtually sovereign-unchecked by corporate watchdogs or by government. Distortions followed, and with the temptation of wealth that distortions brought, corruption. But in the late '70s, no one was thinking of markets as powerful or pervasive. The country's problem was that it was too insensitive, too unresponsive, to markets. They were not hyperactive or feverish then but-potentially-a cure.
The bullishness and greed of the '90s had their origins in the very different environment of the '70s and, in some sense, much earlier. The financial culture had for most of the twentieth century suffered from a deficiency in what is known, rather antiseptically, as corporate governance. Since most executives owned no more than a nominal amount of stock, their interests were less than precisely aligned with those of the stockholders. It is no wonder that many a corporate CEO took home a large salary and enjoyed the perks of "success" even while his stockholders grew poorer.
Of course, the CEO was nominally supervised by the directors. But the typical board was larded with the CEO's cronies, even with his golfing buddies. They were generally as independent as a good cocker spaniel. It is true that textbooks spoke of shareholder democracy and that, in theory, the shareholders could vote the directors out. But proxy challenges virtually never succeeded-indeed, they were rarely attempted. The electoral mechanism was too cumbersome and management's advantages too numerous. Some other means was needed of holding managers' feet to the fire.
This had been evident in a crude sense since the 1930s, when Congress held hearings into the roots of the great market crash, and it was in the '30s that the basic rules for protecting investors were put in place. A string of scandalous revelations had left a clear impression of Wall Street as unsavory and, indeed, untrustworthy. In one episode, the National City Company (a predecessor of the present-day Citigroup) peddled foreign bonds, issued by Peru, to naïve investors while concealing from the public information that left no doubt as to the dubious nature of Peruvian credit. "No further national loan can be safely issued..." wrote the bank's agent in Peru, even as its salesmen in New York were lustily hawking three distinct issues of Peruvian bonds.5 And there was widespread evidence that, during the 1920s, stocks had been secretly manipulated by powerful insiders. The most notorious was Albert Wiggin, president of Chase National Bank, who, without bothering to inform his shareholders, was privately dealing in his own stock and, indeed, helping to drive it down.
In retrospect, it is startling how similar these stunts were to episodes of the '90s. National City might have been the Internet analyst of its day, and Wiggin was merely a harbinger of Dennis Kozlowski, the quick-fingered chief executive of Tyco International. So much recurred that one almost wonders if the government had adopted any protections at all.
But of course it had. The New Deal's response was extensive, but it can be summarized in one word: "disclosure." Legislation created the Securities and Exchange Commission as a cop on Wall Street, but the SEC could never have the manpower to go poking into every single company's files. Instead, the burden of preventing would-be Enrons, Tycos, and WorldComs would rest with the companies and their auditors, who were now required to disclose all the material facts that an investor would want to know. The real policing would be done by markets.
The theory was that a CEO, knowing that markets were watching, would keep his hands clean. Disclosure was the least intrusive form of supervision-like a mother's telling her child to keep the cookie jar in plain sight. Or as Louis D. Brandeis had explained, "Sunlight is said to be the best of disinfectants; electric light the most efficient policeman."6
It worked, but only to a point. As long as a CEO made proper disclosure, a poor performance-even a poor record over a long period of time-generally did not result in his ouster. In other words, the requirement to disclose motivated a CEO not to do ill and generally not to violate the law, but it did not ensure that he would build value for the owners.
By the 1970s this had become painfully clear. CEOs such as Harold Geneen of International Telephone & Telegraph (ITT) had built huge conglomerates that, while enhancing their fiefdoms (and their regal lifestyles), had done precious little for their shareholders. Executives in industry after industry had been so complacent they did not see the oncoming freight train of international competition. Detroit saw its share of the world auto market plunge from 75 percent in 1950 to an abysmal 20 percent. At IBM, too, dominance bred smugness. So satisfied was the computer giant with the fat, 60 percent profit margins on its flagship mainframe that it was asleep to the tectonic shift unfolding in computing, which dislodged mainframes in favor of the personal computer.7 For some reason, at these and at many other companies, the market check-the need of executives to perform for their investors-wasn't working.
Not surprisingly, the generation that ran these companies had come of age after World War II, in an era of fixed exchange rates and government regulation. They were programmed for stability, not change-for gradual evolutions planned by managers, not for chaos wrought by markets.
But chaos found them anyway. By the end of the '70s, stocks had fallen far enough to scream "cheap." The values inherent in stocks inspired a new and distinctly American phenomenon: the hostile takeover. The phrase refers to the practice of acquiring a company over the objection of management. Instead of waiting for their intended to say, "I do," raiders simply asked the stockholders to tender (sell) their shares, though there was nothing tender about it. Most of the early hostile bids involved companies in the same line of business, frequently energy (Conoco Oil was a celebrated target). With prices so compelling, so the saying went, the cheapest place to drill for oil was on the floor of the New York Stock Exchange.
By the early 1980s, Wall Street had spawned a new occupational class: the raider or takeover artist for hire-the gunslinger without portfolio. Carl Icahn, Henry Kravis, Irwin Jacobs, and a host of lesser gunmen were financiers as distinct from operators; they went after whole companies in diverse industries, typically offering premiums of 30 percent to 40 percent above the market price. People's interest in stocks naturally began to revive.
Takeovers had a similarly energizing effect on managers, in particular on CEOs. Previously, theirs had been the safest jobs around; now, their fortress was under siege and their pulse rate was on the rise. Given the dreadful state of their companies, a little anxiety was no bad thing. To escape a buyout, CEOs felt they had to raise their share price. This was a significant departure. Previously, stock prices had been seen as a long-term barometer. Prices in the short term were notoriously unreliable (this was the lesson of the Great Crash). But with a Henry Kravis lurking, the long term might not exist. Or as John Maynard Keynes liked to say, in the long run we are all dead. Now CEOs had to demonstrate that they (and not T. Boone Pickens) had the shareholders' best interests at heart. They began to think more often, and more urgently, about their stock price. Willy-nilly, takeovers had become a tool of corporate governance.
A new phrase crept into the argot: "shareholder value." It was painfully redundant. After all, any value in a corporation is shareholder value; a CEO has no other constituency. But more and more, when a CEO did something-anything-to get his company on track, it was cloaked in the flag of the shareholder. A computerized search found five references to the term in the entire decade of the '70s, six in 1982, and steadily more thereafter. By 1985 there were ninety-nine, after which the term became ubiquitous.
When underachieving companies such as Walt Disney and ITT promised to boost shareholder value, in the mid- to late 1980s, they meant something other than pursuing their usual business of shooting films or managing hotels. What they were really intent on pursuing was their share price. The distinction was intuitively understood by the public, even if it was never articulated. "Shareholder value" became a rallying cry; CEOs who resorted to the cliché quickly cultivated a following among investors. In a prescient piece in 1984, The New York Times observed that takeovers "had put renewed emphasis on stock price."8
But takeovers did not cure the patient, not immediately at any rate. The early '80s were even gloomier than the '70s. The Federal Reserve, hoping to squelch inflation, let interest rates soar to the mid-teens, which sent the economy into not one but two recessions, back-to-back. Bankruptcies were widespread, and the mood was somber. A spate of articles and books forecast a coming depression, a runaway inflation, a catastrophic energy shock, or worse. Consultants predicted that in the "next" crisis oil would rise to $100 a barrel, choking off the country's growth and even threatening democracy!9 A private economist with a flair for phrase-making-Alan Greenspan-termed it "The Great Malaise."10
. . .
Some of the problems ailing business seemed to be cultural as much as economic. Many of the companies getting pounded hardest, such as Xerox and Kodak, were losing out to rivals from Japan, a more consensual society. The notion of Japanese shareholders' launching a hostile takeover-a hostile anything-was preposterous, but perhaps that was Japan's secret. Japanese executives did not have stock market worries or the threat of takeovers to make them jumpy. They could focus on the longer term-on making quality products.
By the mid-'80s, a new breed of despairing expert-the Nippon-phile-began to crusade for a wholesale imitation of Japanese ways. This crowd tried to steer American executives in the opposite direction from takeovers-toward relationships, not markets; toward deeper strategy, not daily stock quotes. In fact, if Japan's soaring stock market was any clue, the way to get industry as well as the market moving was to disregard the clamor from myopic investors.
Consultants returning from Tokyo bemoaned America's obsessive short-termism, its low rate of savings, its lack of team spirit. Michael Crichton's Rising Sun warned of a new Japanese supremacy. Articles such as "Meeting the Japanese Challenge" became common, and a few were openly defeatist. Imagine, only a generation after Pearl Harbor, American book buyers swarming to a volume entitled Japan as Number One: Lessons for America.11
Eager to see the light, American companies experimented with quality circles, with fostering more contact between workers and managers, with forming partnerships as opposed to unceasing competition. There was a vogue for so-called stakeholders, a dissonant term suggesting that a corporation existed to serve not just its stockholders but a nest of related interest groups-its workers, its community, its suppliers.
The stakeholder movement, essentially an attempt to plant the Japanese model on American soil, emphasized a company's ongoing relationships, whereas in a market system, workers might be sacked, suppliers might be replaced, whole divisions might be sold, depending on what was most advantageous at the moment. Incorporating the needs of stakeholders, it was said, would resolve the age-old dilemma of governance by aligning American companies with the interests of society. At the very least, it would soften free enterprise-tame its Darwinian edge.
But the movement was slow to gain steam, not only because the notion of a stakeholder was fuzzy and lacking any legal basis but also because it was, in a profound sense, un-American. Whatever the ideals of reformers, individualism was simply too central to the country's spirit. Admiration for the Japanese miracle was fine, but in America, innovation was the result of heated competition; it occurred precisely on the Darwinian edge. The question for faltering companies was how to restore that edge.
Takeovers alone hadn't worked, but the raiders had a new weapon, even a new idea. The idea was leverage. It may sound absurd, now, to think that companies could borrow their way to health or that investors could lend their way to prosperity, but then, investors had once been eager to lend to Peru. Leveraged buyouts (literally, buyouts financed by debt) had always been around, but in the late '70s and especially in the '80s LBOs became the rage. Much of this, of course, was due to Michael Milken.
Milken, in the '70s an obscure bond trader at Drexel Burnham Lambert, had theorized that junk bonds (meaning bonds that were issued by distressed corporations, and hence were of lesser credit quality) often traded at excessive discounts. Milken thought these bonds offered high-enough interest rates to more than compensate investors for the risk of default. For a bond trader this was a fine theme, and it certainly made Milken money.
But Drexel (and Milken) soon found they could make higher profits by underwriting new junk bonds rather than merely trading existing ones.12 Milken thus began to float bonds for low-rated borrowers who, previously, had been shut out of Wall Street. Some of the borrowers used the money for internal expansion or for paying off prior debts-but others saw that junk bonds could fund a war chest for acquisitions. For instance, Ronald Perelman, an ambitious deal maker who had started at his father's metal fabrication company, managed, at thirty-five, to buy his first business, a jewelry distributor, with precisely $1.9 million in borrowed funds.13 After meeting Milken, Perelman could borrow not merely millions but billions. In an episode that set the tone for the latter half of the '80s, Perelman, still a little-known financier in 1985, launched a bid for the cosmetics giant Revlon, run by Michel Bergerac, a courtly Frenchman and an icon of the establishment. When Perelman, a crude, cigar-chomping newcomer with seemingly smaller resources, prevailed, it triggered anxious spasms among CEOs everywhere. Executives who had slept while Japan and Europe retooled could not sleep now, for with the raiders armed with debt, no company was safe.
Could LBOs, then, be the tonic? There was always an element of fantasy about this. After all, the deal money could not be endless; it had to come from somewhere. Not that this was a problem during the '80s. Ordinary investors were flocking to mutual funds that indiscriminately purchased every junk bond in sight. The lure, of course, was the huge interest rate-12 percent, 14 percent, sometimes more-that junk-bond issuers promised. The notion that investors could earn high returns from borrowers that were on the verge of bankruptcy was remarkably innocent, not to say naïve. Just as equity investors forgot about risk a decade later, so bond investors in the '80s ignored the possibility of default. Milken had conditioned them not to worry, for when a bond did run into trouble Milken would sponsor a new batch of junk bonds to retire the earlier, troubled issue (in other words, companies would incur new debts to pay off the old). By such a circularity, junk bonds obtained an aura-altogether pleasing to the investor-of inevitability.
It was easy to say that the mania couldn't last but hard to resist the trend (just as, later, people who recognized the insanity of the dot-com mania participated in it). LBO artists were appearing out of the woodwork, their pockets swollen with cash with which to bid up shares. What was a CEO to do? Even raiders who had not announced a target had no trouble raising blind money to use in an LBO-any LBO-if they should find one.
As the LBO trend gained momentum, corporate managers grew desperate. There was no time for holding the workers' hands and improving the product-they had to get the stock up. The only defense was to imitate the raiders and borrow. Thus, under the unlikely rubric of shareholder value, companies such as Harcourt Brace Jovanovich and Western Union hollowed out their balance sheets by adding mountains of debt, the better to buy back shares and, they hoped, boost the stock. Managers at R. H. Macy and many other corporations went, so to speak, all the way and bought out (with borrowed funds) all of their own shareholders.
Improbable as it may sound, given the desperately mortgaged condition of their companies, such managers were hailed as saviors. The notion that owner-managers would run a business more efficiently than hired salarymen had intuitive appeal. And let a man buy a company on borrowed funds, with either fortune or bankruptcy hanging in the balance, and his incentive was profound. Indeed, the fact that LBOs were financed with debt added to their mystique, for the operators truly existed on the edge.
By the latter part of the '80s, every investment bank-not just Drexel Burnham-was underwriting LBOs, often with management participating. Many of the early buyouts succeeded, and there is no doubt that some achieved efficiencies and that corporate America had been in need of a belt-tightening. But the deals became steadily, then recklessly, more leveraged. Finance has its own Peter Principle, by which a successful model will be adapted to progressively riskier cases until it fails. Ultimately, borrowers such as Federated Department Stores (acquired by the blustery Robert Campeau) promised to pay far more in junk bond interest than they had earnings. These later LBOs were-by simple arithmetic-doomed to fail.
Nonetheless, LBOs were now, uniformly, winning the warm praise reserved for an elixir, a cure-all for corporate America.14 The fact that the buyers could afford to pay a premium for the stock was seen as proof that a leveraged company was inherently more efficient. (Conveniently overlooked was the possibility that the raiders were overpaying.) Michael Jensen, a prominent theorist at Harvard Business School, even suggested that the commonplace public corporation might be an anachronism.15 His "Eclipse of the Public Corporation," published at the height of the LBO craze, argued that LBOs had led to efficiencies apparently beyond the grasp of companies with public shares. Jensen's argument was based on a premise, known as the efficient-market hypothesis, beloved of academics: that stock prices were ever as rational and correct as they could be. Thus, if raiders were paying more for stocks, it must be because stocks were worth more in their hands.
Thinly buried in the praise of Jensen and others was the suggestion that the raiders were a throwback to the industrialists of a century ago-the Carnegies, Morgans, and so forth. Takeover artists were championed as the vessels of a provident Invisible Hand, weeding out the slothful corporate bureaucrats and restoring America's entrepreneurial spirit. This was a theme that the acquirers, naturally, did nothing to discourage. Henry Kravis, the architect of the RJR Nabisco buyout, the decade's largest, reaped hundreds of millions of dollars in buyout profits and fees and, indeed, lived in the Park Avenue building that once was home to John D. Rockefeller Jr. Not surprisingly, he argued that LBOs were "restoring" America's competitive edge, sweeping away "stagnation" and so forth.16
Nelson Peltz, one of the more affected of Milken's raider-clients, who divided his time among an oceanfront home in Palm Beach, Florida (complete with hydraulically operated glass panels to protect his pool from winds), a twenty-two-room estate in Bedford, New York, and an apartment in Paris, actually likened himself to a Carnegie. "The industrialists of the nineteenth century were highly paid and highly criticized," Peltz observed, "and I guess we'll have to bear that burden too."17
The recession of 1990 put paid to the raiders' pretensions, though not to their mansions and sliding panels. Scores of LBOed companies filed for bankruptcy; dozens of others, notably Perelman's Revlon and Kravis's RJR, were permanently weakened by their improvident levels of debt. Milken, by then, was gone from the scene, convicted of securities fraud. Like embittered soldiers, a band of free-market ideologues has ever since complained that his prosecution was a willful stab in the back to the junk-bond business. But no healthy market ever depended on a single trader.
And the junk bond market, stripped of its excesses, revived. What died was the notion of LBOs as a governance vehicle-a tool of natural selection that pruned the poorly run (and thus, cheaper) companies, "the corporate deadwood."18 In fact, when financing had been so easy, good managements had been vulnerable too. Moreover, since the buyout artists had not, mostly, used their own money, there was a gaping hole in the idea that Kravis, Perelman, et al. were instruments of Adam Smith, wielding society's proxy. When an LBO failed, society was the poorer-jobs were lost, innovations were forgone-but the raiders simply went on to the next deal. The LBO operators posed a moral hazard, for they had nothing to lose. What they had was a free ride: heads they won big; tails they tried again (while the lenders got wiped out).
As we will see, this also describes the equation for a corporate executive endowed with a stock option. And it was no coincidence that as the LBO era neared a crescendo, options were granted with increasing largesse. Since it was clear, by 1990, that LBOs were not a universal answer (piling on ever more debt was simply suicidal), attention reverted to public company executives. What could be done-what carrot could be bestowed-to make them more like the idealized owner-managers of academics' dreams?
CEOs had already embraced much of the raiders' agenda. They were buying back shares, cutting costs, selling divisions. They had bought into shareholder value. What was needed, it was said, to make CEOs truly behave like owner-managers was to make them owners. Soon, a new model of governance, a new elixir, and not incidentally a new culture was born, and it revolved around the stock option.
--from Origins of the Crash: The Great Bubble and Its Undoing by Roger Lowenstein, copyright © 2004 by Roger Lowenstein. Published by The Penguin Press, a member of Penguin Group (USA) Inc. All rights reserved. Reprinted with permission from the publisher.