by John Michael Greer
Wednesday, March 04, 2015
Last week’s discussion of externalities—costs of doing business that get dumped onto the economy, the community, or the environment, so that those doing the dumping can make a bigger profit—is, I’m glad to say, not the first time this issue has been raised recently. The long silence that closed around such things three decades ago is finally cracking; they’re being mentioned again, and not just by archdruids. One of my readers—tip of the archdruidical hat to Jay McInerney—noted an article in Grist a while back that pointed out the awkward fact that none of the twenty biggest industries in today’s world could break even, much less make a profit, if they had to pay for the damage they do to the environment.
Now of course the conventional wisdom these days interprets that statement to mean that it’s unfair to make those industries pay for the costs they impose on the rest of us—after all, they have a God-given right to profit at everyone else’s expense, right? That’s certainly the attitude of fracking firms in North Dakota, who recently proposed that they ought to be exempted from the state’s rules on dumping radioactive waste, because following the rules would cost them too much money. That the costs externalized by the fracking industry will sooner or later be paid by others, as radionuclides in fracking waste work their way up the food chain and start producing cancer clusters, is of course not something anyone in the industry or the media is interested in discussing.
Watch this sort of thing, and you can see the chasm opening up under the foundations of industrial society. Externalized costs don’t just go away; one way or another, they’re going to be paid, and costs that don’t appear on a company’s balance sheet still affect the economy. That’s the argument of The Limits to Growth, still the most accurate (and thus inevitably the most reviled) of the studies that tried unavailingly to turn industrial society away from its suicidal path: on a finite planet, once an inflection point is passed, the costs of economic growth rise faster than growth does, and sooner or later force the global economy to its knees.
The tricks of accounting that let corporations pretend that their externalized costs vanish into thin air don’t change that bleak prognosis. Quite the contrary, the pretense that externalities don’t matter just makes it harder for a society in crisis to recognize the actual source of its troubles. I’ve come to think that that’s the unmentioned context behind a dispute currently roiling those unhallowed regions where economists lurk in the shrubbery: the debate over secular stagnation.
Secular stagnation? That’s the concept, unmentionable until recently, that the global economy could stumble into a rut of slow, no, or negative growth, and stay there for years. There are still plenty of economists who insist that this can’t happen, which is rather funny, really, when you consider that this has basically been the state of the global economy since 2009. (My back-of-the-envelope calculations suggest, in fact, that if you subtract the hallucinatory paper wealth manufactured by derivatives and similar forms of financial gamesmanship from the world’s GDP, the production of nonfinancial goods and services worldwide has actually been declining since before the 2008 housing crash.)
Even among those who admit that what’s happening can indeed happen, there’s no consensus as to how or why such a thing could occur. On the off chance that any mainstream economists are lurking in the shrubbery in the even more unhallowed regions where archdruids utter unspeakable heresies, and green wizards clink mugs of homebrewed beer together and bay at the moon, I have a suggestion to offer: the most important cause of secular stagnation is the increasing impact of externalities on the economy. The dishonest macroeconomic bookkeeping that leads economists to think that externalized costs go away because they’re not entered into anyone’s ledger books doesn’t actually make them disappear; instead, they become an unrecognized burden on the economy as a whole, an unfelt headwind blowing with hurricane force in the face of economic growth.
Thus there’s a profound irony in the insistence by North Dakota fracking firms that they ought to be allowed to externalize even more of their costs in order to maintain their profit margin. If I’m right, the buildup of externalized costs is what’s causing the ongoing slowdown in economic activity worldwide that’s driving down commodity prices, forcing interest rates in many countries to zero or below, and resurrecting the specter of deflationary depression. The fracking firms in question thus want to respond to the collapse in oil prices—a result of secular stagnation—by doing even more of what’s causing secular stagnation. To say that this isn’t likely to end well is to understate the case considerably.
In the real world, of course, mainstream economists don’t listen to suggestions from archdruids, and fracking firms, like every other business concern these days, can be expected to put their short-term cash flow ahead of the survival of their industry, or for that matter of industrial civilization as a whole. Thus I propose to step aside from the subject of economic externalities for a moment—though I’ll be returning to it at intervals as we proceed with this sequence of posts—in order to discuss a subtler and less crassly financial form of the same phenomenon.
That form came in for discussion in the same post two weeks ago that brought the issue of externalities into this blog’s ongoing conversation. Quite a few readers commented about the many ways in which things labeled “more advanced,” “more progressive,” and the like were actually less satisfactory and less effective at meeting human needs than the allegedly more primitive technologies they replaced. Some of those comments focused, and quite sensibly, on the concrete examples, but others pondered the ways that today’s technology fails systematically at meeting certain human needs, and reflected on the underlying causes for that failure. One of my readers—tip of the archdruidical hat here to Ruben—gave an elegant frame for that discussion by suggesting that the peak of technological complexity in our time may also be described as peak meaninglessness.
I’d like to take the time to unpack that phrase. In the most general sense, technologies can be divided into two broad classes, which we can respectively call tools and prosthetics. The difference is a matter of function. A tool expands human potential, giving people the ability to do things they couldn’t otherwise do. A prosthetic, on the other hand, replaces human potential, doing something that under normal circumstances, people can do just as well for themselves. Most discussions of technology these days focus on tools, but the vast majority of technologies that shape the lives of people in a modern industrial society are not tools but prosthetics.
Prosthetics have a definite value, to be sure. Consider an artificial limb, the sort of thing on which the concept of technology-as-prosthetic is modeled. If you’ve lost a leg in an accident, say, an artificial leg is well worth having; it replaces a part of ordinary human potential that you don’t happen to have any more, and enables you to do things that other people can do with their own leg. Imagine, though, that some clever marketer were to convince people to have their legs cut off so that they could be fitted for artificial legs. Imagine, furthermore, that the advertising for artificial legs became so pervasive, and so successful, that nearly everybody became convinced that human legs were hopelessly old-fashioned and ugly, and rushed out to get their legs amputated so they could walk around on artificial legs.
Then, of course, the manufacturers of artificial arms got into the same sort of marketing, followed by the makers of sex toys. Before long you’d have a society in which most people were gelded quadruple amputees fitted with artificial limbs and rubber genitals, who spent all their time talking about the wonderful things they could do with their prostheses. Only in the darkest hours of the night, when the TV was turned off, might some of them wonder why it was that a certain hard-to-define numbness had crept into all their interactions with other people and the rest of the world.
In a very real sense, that’s the way modern industrial society has reshaped and deformed human life for its more privileged inmates. Take any human activity, however humble or profound, and some clever marketer has found a way to insert a piece of technology in between the person and the activity. You can’t simply bake bread—a simple, homely, pleasant activity that people have done themselves for thousands of years using their hands and a few simple handmade tools; no, you have to have a bread machine, into which you dump a prepackaged mix and some liquid, push a button, and stand there being bored while it does the work for you, if you don’t farm out the task entirely to a bakery and get the half-stale industrially extruded product that passes for bread these days.
Now of course the bread machine manufacturers and the bakeries pitch their products to the clueless masses by insisting that nobody has time to bake their own bread any more. A long time ago, in Energy and Equity, Ivan Illich pointed out the logical fallacy here: using a bread machine or buying from a bakery is only faster if you don’t count the time you have to spend earning the money needed to pay for it, power it, provide it with overpriced prepackaged mixes, repair it, clean it, etc., etc., etc. Illich’s own discussion focused on automobiles; he pointed out that if you take the distance traveled by the average American auto in a year, and divide that by the total amount of time spent earning the money to pay for the auto, fuel, maintenance, insurance, and the rest, plus all the other time eaten up by tending to the auto in various ways, the average American car goes about 3.5 miles an hour: about the same pace, that is, that an ordinary human being can walk.
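Illich’s arithmetic is easy to reproduce. Here is a minimal sketch; every number in it is an illustrative assumption chosen for round figures, not Illich’s own data:

```python
# A back-of-the-envelope sketch of Illich's "effective speed" arithmetic.
# Every figure below is an illustrative assumption, not Illich's own data.

miles_per_year = 7_500         # assumed annual distance driven
hours_driving = 300            # assumed hours behind the wheel
car_costs_per_year = 3_000.0   # assumed purchase, fuel, insurance, repairs ($)
net_wage_per_hour = 5.0        # assumed take-home wage ($/hour)
hours_tending = 100            # assumed washing, repairs, paperwork, parking

# Time spent earning the money the car consumes:
hours_earning = car_costs_per_year / net_wage_per_hour  # 600 hours

total_hours = hours_driving + hours_earning + hours_tending
effective_speed = miles_per_year / total_hours
print(f"Effective speed: {effective_speed:.1f} mph")  # 7.5 mph with these numbers
```

Even with these deliberately generous assumptions the car manages only 7.5 miles an hour; plug in a lower wage or higher costs and it sinks toward the walking pace cited above.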
If this seems somehow reminiscent of last week’s discussion of externalities, dear reader, it should. The claim that technology saves time and labor only seems to make sense if you ignore a whole series of externalities—in this case, the time you have to put into earning the money to pay for the technology and into coping with whatever requirements, maintenance needs, and side effects the technology has. Have you ever noticed that the more “time-saving technologies” you bring into your life, the less free time you have? Those externalized time costs are the reason—and they’re also why the average medieval peasant worked shorter hours, had more days off, and kept a larger fraction of the value of his labor than you do.
Something else is being externalized by prosthetic technology, though, and it’s that additional factor that gives Ruben’s phrase “peak meaninglessness” its punch. What are you doing, really, when you use a bread machine? You’re not baking bread; the machine is doing that. You’re dumping a prepackaged mix and some water into a machine, closing the lid, pushing a button, and going away to do something else. Fair enough—but what is this “something else” that you’re doing? In today’s industrial societies, odds are you’re going to go use another piece of prosthetic technology, which means that once again, you’re not actually doing anything. A machine is doing something for you. You can push that button and walk away, but again, what are you going to do with your time? Use another machine?
The machines that industrial society uses to give this infinite regress somewhere to stop—televisions, video games, and computers hooked up to the internet—simply take the same process to its ultimate extreme. Whatever you think you’re doing when you’re sitting in front of one of these things, what you’re actually doing is staring at little colored pictures on a glass screen and pushing some buttons. All things considered, this is a profoundly boring activity, which is why the little colored pictures jump around all the time; that’s to keep your nervous system so far off balance that you don’t notice just how tedious it is to spend hours at a time staring at little colored pictures on a screen.
I can’t help but laugh when people insist that the internet is an information-rich environment. It’s quite the opposite, actually: all you get from it is the very narrow trickle of verbal, visual, and auditory information that can squeeze through the digital bottleneck and turn into little colored pictures on a glass screen. The best way to experience this is to engage in a media fast—a period in which you deliberately cut yourself off from all electronic media for a week or more, preferably in a quiet natural environment. If you do that, you’ll find that it can take two or three days, or even more, before your numbed and dazzled nervous system recovers far enough that you can begin to tap in to the ocean of sensory information and sensual delight that surrounds you at every moment. It’s only then, furthermore, that you can start to think your own thoughts and dream your own dreams, instead of just rehashing whatever the little colored pictures tell you.
A movement of radical French philosophers back in the 1960s, the Situationists, argued that modern industrial society is basically a scheme to convince people to hand over their own human capabilities to the industrial machine, so that imitations of those capabilities can be sold back to them at premium prices. It was a useful analysis then, and it’s even more useful now, when the gap between realities and representations has become even more drastic than it was back then. These days, as often as not, what gets sold to people isn’t even an imitation of some human capability, but an abstract representation of it, an arbitrary marker with only the most symbolic connection to what it represents.
This is one of the reasons why I think it’s deeply mistaken to claim that Americans are materialistic. Americans are arguably the least materialistic people in the world; no actual materialist—no one who had the least appreciation for actual physical matter and its sensory and sensuous qualities—could stand the vile plastic tackiness of America’s built environment and consumer economy for a fraction of a second. Americans don’t care in the least about matter; they’re happy to buy even the most ugly, uncomfortable, shoddily made and absurdly overpriced consumer products you care to imagine, so long as they’ve been convinced that having those products symbolizes some abstract quality they want, such as happiness, freedom, sexual pleasure, or what have you.
Then they wonder, in the darkest hours of the night, why all the things that are supposed to make them happy and satisfied somehow never manage to do anything of the kind. Of course there’s a reason for that, too, which is that happy and satisfied people don’t keep on frantically buying products in a quest for happiness and satisfaction. Still, the little colored pictures keep showing them images of people who are happy and satisfied because they guzzle the right brand of tasteless fizzy sugar water, and pay for the right brand of shoddily made half-disposable clothing, and keep watching the little colored pictures: that last above all else. “Tune in tomorrow” is the most important product that every media outlet sells, and they push it every minute of every day on every stop and key.
That is to say, between my fantasy of voluntary amputees eagerly handing over the cash for the latest models of prosthetic limbs, and the reality of life in a modern industrial society, the difference is simply in the less permanent nature of the alterations imposed on people here and now. It’s easier to talk people into amputating their imaginations than it is to convince them to amputate their limbs, but it’s also a good deal easier to reverse the surgery.
What gives this even more importance than it would otherwise have, in turn, is that all this is happening in a society that’s hopelessly out of touch with the realities that support its existence, and that relies on bookkeeping tricks of the sort discussed toward the beginning of this essay to maintain the fantasy that it’s headed somewhere other than history’s well-used compost bin. The externalization of the mind and the imagination plays just as important a role in maintaining that fantasy as the externalization of costs—and the cold mechanical heart of the externalization of the mind and imagination is mediation, the insertion of technological prosthetics into the space between the individual and the world. We’ll talk more about that in next week’s post.
In other news, I’m delighted to report the publication of a new book of mine that may be of particular interest to readers of this blog: Collapse Now and Avoid the Rush: The Best of the Archdruid Report, which is just out from Founders House Publishing. As the title suggests, it’s an anthology of twenty-five of the most popular weekly posts from this blog, including such favorites as "Knowing Only One Story," "An Elegy for the Age of Space," "The Next Ten Billion Years," and "The Time of the Seedbearers," as well as the title essay and many more. These are the one-of-a-kind essays that haven’t appeared in my books; if you’re looking for something to hand to the spouse or friend or twelve-year-old kid who wants to know why you keep visiting this site every Wednesday night, or simply want this blog’s best essays in a more permanent form, this is the book. It’s available in print and e-book formats and can be ordered here.
by John Michael Greer
The Archdruid Report | February 4, 2015
I was saddened to learn a few days ago, via a phone call from a fellow author, that William R. Catton Jr. died early last month, just short of his 89th birthday. Some of my readers will have no idea who he was; others may dimly recall that I’ve mentioned him and his most important book, Overshoot, repeatedly in these essays. Those who’ve taken the time to read the book just named may be wondering why none of the sites in the peak oil blogosphere has put up an obituary, or even noted the man’s passing. I don’t happen to know the answer to that last question, though I have my suspicions.
I encountered Overshoot for the first time in a college bookstore in Bellingham, Washington in 1983. Red letters on a stark yellow spine spelled out the title, a word I already knew from my classes in ecology and systems theory; I pulled it off the shelf, and found the future staring me in the face. This is what’s on the front cover below the title:

carrying capacity: maximum permanently supportable load.
cornucopian myth: euphoric belief in limitless resources.
drawdown: stealing resources from the future.
cargoism: delusion that technology will always save us.
overshoot: growth beyond an area’s carrying capacity, leading to crash.
If you want to know where I got the core ideas I’ve been exploring in these essays for the last eight-going-on-nine years, in other words, now you know. I still have that copy of Overshoot; it’s sitting on the desk in front of me right now, reminding me yet again just how many chances we had to turn away from the bleak future that’s closing in around us now, like the night at the end of a long day.
Plenty of books in the 1970s and early 1980s applied the lessons of ecology to the future of industrial civilization and picked up at least part of the bad news that results. Overshoot was arguably the best of the lot, but it was pretty much guaranteed to land even deeper in the memory hole than the others. The difficulty was that Catton’s book didn’t pander to the standard mythologies that still beset any attempt to make sense of the predicament we’ve made for ourselves; it provided no encouragement to what he called cargoism, the claim that technological progress will inevitably allow us to have our planet and eat it too, without falling off the other side of the balance into the sort of apocalyptic daydreams that Hollywood loves to make into bad movies. Instead, in calm, crisp, thoughtful prose, he explained how industrial civilization was cutting its own throat, how far past the point of no return we’d already gone, and what had to be done in order to salvage anything from the approaching wreck.
As I noted in a post here in 2011, I had the chance to meet Catton at an ASPO conference, and tried to give him some idea of how much his book had meant to me. I did my best not to act like a fourteen-year-old fan meeting a rock star, but I’m by no means sure that I succeeded. We talked for fifteen minutes over dinner; he was very gracious; then things moved on, each of us left the conference to carry on with our lives, and now he’s gone. As the old song says, that’s the way it goes.
There’s much more that could be said about William Catton, but that task should probably be left for someone who knew the man as a teacher, a scholar, and a human being. I didn’t; except for that one fifteen-minute conversation, I knew him solely as the mind behind one of the books that helped me make sense of the world, and then kept me going on the long desert journey through the Reagan era, when most of those who claimed to be environmentalists over the previous decade cashed in their ideals and waved around the cornucopian myth as their excuse for that act. Thus I’m simply going to urge all of my readers who haven’t yet read Overshoot to do so as soon as possible, even if they have to crawl on their bare hands and knees over abandoned fracking equipment to get a copy. Having said that, I’d like to go on to the sort of tribute I think he would have appreciated most: an attempt to take certain of his ideas a little further than he did.
The core of Overshoot, which is also the core of the entire world of appropriate technology and green alternatives that got shot through the head and shoved into an unmarked grave in the Reagan years, is the recognition that the principles of ecology apply to industrial society just as much as they do to other communities of living things. It’s odd, all things considered, that this is such a controversial proposal. Most of us have no trouble grasping the fact that the law of gravity affects human beings the same way it affects rocks; most of us understand that other laws of nature really do apply to us; but quite a few of us seem to be incapable of extending that same sensible reasoning to one particular set of laws, the ones that govern how communities of living things relate to their environments.
If people treated gravity the way they treat ecology, you could visit a news website any day of the week and read someone insisting with a straight face that while it’s true that rocks fall down when dropped, human beings don’t—no, no, they fall straight up into the sky, and anyone who thinks otherwise is so obviously wrong that there’s no point even discussing the matter. That degree of absurdity appears every single day in the American media, and in ordinary conversations as well, whenever ecological issues come up. Suggest that a finite planet must by definition contain a finite amount of fossil fuels, that dumping billions of tons of gaseous trash into the air every single year for centuries might change the way that the atmosphere retains heat, or that the law of diminishing returns might apply to technology the way it applies to everything else, and you can pretty much count on being shouted down by those who, for all practical purposes, might as well believe that the world is flat.
Still, as part of the ongoing voyage into the unspeakable in which this blog is currently engaged, I’d like to propose that, in fact, human societies are as subject to the laws of ecology as they are to every other dimension of natural law. That act of intellectual heresy implies certain conclusions that are acutely unwelcome in most circles just now; still, as my regular readers will have noticed long since, that’s just one of the services this blog offers.
Let’s start with the basics. Every ecosystem, in thermodynamic terms, is a process by which relatively concentrated energy is dispersed into diffuse background heat. Here on Earth, at least, the concentrated energy mostly comes from the Sun, in the form of solar radiation—there are a few ecosystems, in deep oceans and underground, that get their energy from chemical reactions driven by the Earth’s internal heat instead. Ilya Prigogine showed some decades back that the flow of energy through a system of this sort tends to increase the complexity of the system; Jeremy England, an MIT physicist, has recently shown that the same process accounts neatly for the origin of life itself. The steady flow of energy from source to sink is the foundation on which everything else rests.
The complexity of the system, in turn, is limited by the rate at which energy flows through the system, and this in turn depends on the difference in concentration between the energy that enters the system, on the one hand, and the background into which waste heat diffuses when it leaves the system, on the other. That shouldn’t be a difficult concept to grasp. Not only is it basic thermodynamics, it’s basic physics—it’s precisely equivalent, in fact, to pointing out that the rate at which water flows through any section of a stream depends on the difference in height between the place where the water flows into that section and the place where it flows out.
Simple as it is, it’s a point that an astonishing number of people—including some who are scientifically literate—routinely miss. A while back on this blog, for example, I noted that one of the core reasons you can’t power a modern industrial civilization on solar energy is that sunlight is relatively diffuse as an energy source, compared to the extremely concentrated energy we get from fossil fuels. I still field rants from people insisting that this is utter hogwash, since photons have exactly the same amount of energy they did when they left the Sun, and so the energy they carry is just as concentrated as it was when it left the Sun. You’ll notice, though, that if this was the only variable that mattered, Neptune would be just as warm as Mercury, since each of the photons hitting the one planet pack on average the same energetic punch as those that hit the other.
It’s hard to think of a better example of the blindness to whole systems that’s pandemic in today’s geek culture. Obviously, the difference between the temperatures of Neptune and Mercury isn’t a function of the energy of individual photons hitting the two worlds; it’s a function of differing concentrations of photons—the number of them, let’s say, hitting a square meter of each planet’s surface. This is also one of the two figures that matter when we’re talking about solar energy here on Earth. The other? That’s the background heat into which waste energy disperses when the system, eco- or solar, is done with it. On the broadest scale, that’s deep space, but ecosystems don’t funnel their waste heat straight into orbit, you know. Rather, they diffuse it into the ambient temperature at whatever height above or below sea level, and whatever latitude closer to or farther from the equator, they happen to be—and since that’s heated by the Sun, too, the difference between input and output concentrations isn’t very substantial.
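The point about photon concentration is just the inverse-square law, and it takes only a few lines to check. A minimal sketch follows; the orbital distances are the standard round figures, and the solar constant at 1 AU is the measured value of roughly 1361 watts per square meter:

```python
# Each photon carries the same energy at Neptune as at Mercury; what
# differs is how many of them cross a square meter per second. That
# flux falls off with the square of the distance from the Sun.
SOLAR_CONSTANT_1AU = 1361.0  # measured solar flux at Earth's distance, W/m^2

def solar_flux(distance_au: float) -> float:
    """Solar flux (W/m^2) at a given distance from the Sun, in AU."""
    return SOLAR_CONSTANT_1AU / distance_au ** 2

mercury_flux = solar_flux(0.39)   # roughly 8,950 W/m^2
neptune_flux = solar_flux(30.1)   # roughly 1.5 W/m^2
print(f"Mercury receives {mercury_flux / neptune_flux:,.0f} times the flux per square meter")
```

Same photons, a nearly six-thousandfold difference in concentration per square meter—which is why the one world bakes and the other freezes.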
Nature has done astonishing things with that very modest difference in concentration. People who insist that photosynthesis is horribly inefficient, and of course we can improve its efficiency, are missing a crucial point: something like half the energy that reaches the leaves of a green plant from the Sun is put to work lifting water up from the roots by an ingenious form of evaporative pumping, in which water sucked out through the leaf pores as vapor draws up more water through a network of tiny tubes in the plant’s stems. Another few per cent goes into the manufacture of sugars by photosynthesis, and a variety of minor processes, such as the chemical reactions that ripen fruit, also depend to some extent on light or heat from the Sun; all told, a green plant is probably about as efficient in its total use of solar energy as the laws of thermodynamics will permit.
What’s more, the Earth’s ecosystems take the energy that flows through the green engines of plant life and put it to work in an extraordinary diversity of ways. The water pumped into the sky by what botanists call evapotranspiration—that’s the evaporative pumping I mentioned a moment ago—plays critical roles in local, regional, and global water cycles. The production of sugars to store solar energy in chemical form kicks off an even more intricate set of changes, as the plant’s cells are eaten by something, which is eaten by something, and so on through the lively but precise dance of the food web. Eventually all the energy the original plant scooped up from the Sun turns into diffuse waste heat and permeates slowly up through the atmosphere to its ultimate destiny warming some corner of deep space a bit above absolute zero, but by the time it gets there, it’s usually had quite a ride.
That said, there are hard upper limits to the complexity of the ecosystem that these intricate processes can support. You can see that clearly enough by comparing a tropical rain forest to a polar tundra. The two environments may have approximately equal amounts of precipitation over the course of a year; they may have an equally rich or poor supply of nutrients in the soil; even so, the tropical rain forest can easily support fifteen or twenty thousand species of plants and animals, and the tundra will be lucky to support a few hundred. Why? The same reason Mercury is warmer than Neptune: the rate at which photons from the sun arrive in each place per square meter of surface.
Near the equator, the Sun’s rays fall almost vertically. Close to the poles, since the Earth is round, the Sun’s rays come in at a sharp angle, and thus are spread out over more surface area. The ambient temperature’s quite a bit warmer in the rain forest than it is on the tundra, but because the vast heat engine we call the atmosphere pumps heat from the equator to the poles, the difference in ambient temperature is not as great as the difference in solar input per square meter. Thus ecosystems near the equator have a greater difference in energy concentration between input and output than those near the poles, and the complexity of the two systems varies accordingly.
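The geometry here is Lambert’s cosine law: level ground at a given latitude presents only the cosine of that latitude’s worth of area to the incoming beam. A quick sketch makes the point, ignoring the atmosphere and taking noon at the equinox so that latitude is the only variable (the 70°N figure is simply an arbitrary stand-in for “tundra”):

```python
import math

# Noon flux on level ground at the equinox, ignoring atmospheric losses:
# the beam that falls vertically at the equator arrives at a slant near
# the poles and is smeared over 1/cos(latitude) times as much surface.
SOLAR_CONSTANT = 1361.0  # W/m^2 at the top of the atmosphere

def noon_equinox_flux(latitude_deg: float) -> float:
    return SOLAR_CONSTANT * math.cos(math.radians(latitude_deg))

rainforest_flux = noon_equinox_flux(0.0)   # equatorial rain forest
tundra_flux = noon_equinox_flux(70.0)      # high-latitude tundra
print(f"Equator: {rainforest_flux:.0f} W/m^2, 70 N: {tundra_flux:.0f} W/m^2")
```

That’s roughly a threefold difference at noon alone, before seasonal day length and the longer atmospheric path near the poles widen the gap still further.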
All this should be common knowledge. Of course it isn’t, because the industrial world’s notions of education consistently ignore what William Catton called “the processes that matter”—that is, the fundamental laws of ecology that frame our existence on this planet—and approach a great many of those subjects that do make it into the curriculum in ways that encourage the most embarrassing sort of ignorance about the natural processes that keep us all alive. Down the road a bit, we’ll be discussing that in much more detail. For now, though, I want to take the points just made and apply them systematically, in much the way Catton did, to the predicament of industrial civilization.
A human society is an ecosystem. Like any other ecosystem, it depends for its existence on flows of energy, and as with any other ecosystem, the upper limit on its complexity depends ultimately on the difference in concentration between the energy that enters it and the background into which its waste heat disperses. (This last point is a corollary of White’s Law, one of the fundamental principles of human ecology, which holds that a society’s economic development is directly proportional to its consumption of energy per capita.) Until the beginning of the industrial revolution, that upper limit was not much higher than the upper limit of complexity in other ecosystems, since human ecosystems drew most of their energy from the same source as nonhuman ones: sunlight falling on green plants. As human societies figured out how to tap other flows of solar energy—windpower to drive windmills and send ships coursing over the seas, water power to turn mills, and so on—that upper limit crept higher, but not dramatically so.
The discoveries that made it possible to turn fossil fuels into mechanical energy transformed that equation completely. The geological processes that stockpiled half a billion years of sunlight into coal, oil, and natural gas boosted the concentration of the energy inputs available to industrial societies by an almost unimaginable factor, without warming the ambient temperature of the planet more than a few degrees, and the huge differentials in energy concentration that resulted drove an equally unimaginable increase in complexity. Choose any measure of complexity you wish—number of discrete occupational categories, average number of human beings involved in the production, distribution, and consumption of any given good or service, or what have you—and in the wake of the industrial revolution, it soared right off the charts. Thermodynamically, that’s exactly what you’d expect.
The difference in energy concentration between input and output, it bears repeating, defines the upper limit of complexity. Other variables determine whether or not the system in question will achieve that upper limit. In the ecosystems we call human societies, knowledge is one of those other variables. If you have a highly concentrated energy source and don’t yet know how to use it efficiently, your society isn’t going to become as complex as it otherwise could. Over the three centuries of industrialization, as a result, the production of useful knowledge was a winning strategy, since it allowed industrial societies to rise steadily toward the upper limit of complexity defined by the concentration differential. The limit was never reached—the law of diminishing returns saw to that—and so, inevitably, industrial societies ended up believing that knowledge all by itself was capable of increasing the complexity of the human ecosystem. Since there’s no upper limit to knowledge, in turn, that belief system drove what Catton called the cornucopian myth, the delusion that there would always be enough resources if only the stock of knowledge increased quickly enough.
That belief only seemed to work, though, as long as the concentration differential between energy inputs and the background remained very high. Once easily accessible fossil fuels started to become scarce, and more and more energy and other resources had to be invested in the extraction of what remained, problems started to crop up. Tar sands and oil shales in their natural form are not as concentrated an energy source as light sweet crude—once they’re refined, sure, the differences are minimal, but a whole system analysis of energy concentration has to start at the moment each energy source enters the system. Take a cubic yard of tar sand fresh from the pit mine, with the sand still in it, or a cubic yard of oil shale with the oil still trapped in the rock, and you’ve simply got less energy per unit volume than you do if you’ve got a cubic yard of light sweet crude fresh from the well, or even a cubic yard of good permeable sandstone with light sweet crude oozing out of every pore.
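To put rough numbers on that comparison, here is a back-of-the-envelope sketch. Every figure below is an illustrative approximation of my own choosing (typical published ballpark values for bulk density, fuel mass fraction, and heating value), not field data, and the point is only the order-of-magnitude gap between raw tar sand or oil shale and crude oil straight from the well:

```python
# Back-of-the-envelope comparison of gross energy per unit volume of raw,
# unprocessed material "as it enters the system."
# All input figures are rough illustrative approximations, not field data.

CUBIC_YARD_M3 = 0.7646  # cubic meters per cubic yard


def energy_per_cubic_yard(bulk_density_kg_m3, fuel_mass_fraction, heating_value_mj_kg):
    """Gross energy content (GJ) of one cubic yard of raw material."""
    mass_kg = bulk_density_kg_m3 * CUBIC_YARD_M3       # kg of raw material
    fuel_kg = mass_kg * fuel_mass_fraction             # kg of actual fuel in it
    return fuel_kg * heating_value_mj_kg / 1000.0      # MJ -> GJ


# Light sweet crude: ~850 kg/m3, essentially all fuel, ~45 MJ/kg
crude = energy_per_cubic_yard(850, 1.0, 45)
# Tar sand: ~2000 kg/m3 bulk, roughly 10% bitumen by mass, ~40 MJ/kg bitumen
tar_sand = energy_per_cubic_yard(2000, 0.10, 40)
# Oil shale: ~2200 kg/m3, roughly 8% kerogen by mass, ~35 MJ/kg kerogen
oil_shale = energy_per_cubic_yard(2200, 0.08, 35)

print(f"crude oil: {crude:5.1f} GJ per cubic yard")
print(f"tar sand:  {tar_sand:5.1f} GJ per cubic yard")
print(f"oil shale: {oil_shale:5.1f} GJ per cubic yard")
```

Even with generous assumptions, the raw unconventional sources come out several times less concentrated per unit volume than conventional crude, before a single joule has been spent separating the fuel from the sand or rock.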
It’s an article of faith in contemporary culture that such differences don’t matter, but that’s just another aspect of our cornucopian myth. The energy needed to get the sand out of the tar sands or the oil out of the oil shale has to come from somewhere, and that energy, in turn, is not available for other uses. The result, however you slice it conceptually, is that the upper limit of complexity begins moving down. That sounds abstract, but it adds up to a great deal of very concrete misery, because as already noted, the complexity of a society determines such things as the number of different occupational specialties it can support, the number of employees who are involved in the production and distribution of a given good or service, and so on. There’s a useful phrase for a sustained contraction in the usual measures of complexity in a human ecosystem: “economic depression.”
The economic troubles that are shaking the industrial world more and more often these days, in other words, are symptoms of a disastrous mismatch between the level of complexity that our remaining concentration differential can support, and the level of complexity that our preferred ideologies insist we ought to have. As those two things collide, there’s no question which of them is going to win. Adding to our total stock of knowledge won’t change that result, since knowledge is a necessary condition for economic expansion but not a sufficient one: if the upper limit of complexity set by the laws of thermodynamics drops below the level that your knowledge base would otherwise support, further additions to the knowledge base simply mean that there will be a growing number of things that people know how to do in theory, but that nobody has the resources to do in practice.
Knowledge, in other words, is not a magic wand, a surrogate messiah, or a source of miracles. It can open the way to exploiting energy more efficiently than otherwise, and it can figure out how to use energy resources that were not previously being used at all, but it can’t conjure energy out of thin air. Even if the energy resources are there, for that matter, if other factors prevent them from being used, the knowledge of how they might be used offers no consolation—quite the contrary.
That latter point, I think, sums up the tragedy of William Catton’s career. He knew, and could explain with great clarity, why industrialism would bring about its own downfall, and what could be done to salvage something from its wreck. That knowledge, however, was not enough to make things happen; only a few people ever listened, most of them promptly plugged their ears and started chanting “La, la, la, I can’t hear you” once Reagan made that fashionable, and the actions that might have spared all of us a vast amount of misery never happened. When I spoke to him in 2011, he was perfectly aware that his life’s work had done essentially nothing to turn industrial society aside from its rush toward the abyss. That’s got to be a bitter thing to contemplate in your final hours, and I hope his thoughts were on something else last month as the night closed in at last.
by John Michael Greer
Wednesday, January 07, 2015
Well, the Fates were apparently listening last week. As I write this, stock markets around the world are lurching through what might just be the opening moves of the Crash of 2015, whipsawed by further plunges in the price of oil and a range of other bad economic news; amid a flurry of layoffs and dropping rig counts, the first bankruptcy in the fracking industry has been announced, with more on their way; gunfire in Paris serves up a brutal reminder that the rising spiral of political violence I traced in last week’s post is by no means limited to North American soil. The cheerleaders of business as usual in the media are still insisting at the top of their lungs that America’s new era of energy independence is still on its way; those of my readers who recall the final days of the housing bubble that burst in 2008, or the tech-stock bubble that popped in 2000, will recognize a familiar tone in the bluster.
It’s entirely possible, to be sure, that central banks and governments will be able to jerry-rig another round of temporary supports for the fraying architecture of the global economy, and postpone a crash—or at least drag out the agony a bit longer. It’s equally possible that other dimensions of the crisis of our age can be forestalled or postponed by drastic actions here and now. That said, whether the process is fast or slow, whether the crunch hits now or a bit further down the road, the form of technic society I’ve termed abundance industrialism is on its way out through history’s exit turnstile, and an entire world of institutions and activities familiar to all of us is going with it.
It doesn’t require any particular genius or prescience to grasp this, merely the willingness to recognize that if something is unsustainable, sooner or later it won’t be sustained. Of course that’s the sticking point, because what can’t be sustained at this point is the collection of wildly extravagant energy- and resource-intensive habits that used to pass for a normal lifestyle in the world’s industrial nations, and has recently become just a little less normal than it used to be. Those lifestyles, and most of what goes with them, only existed in the first place because a handful of the world’s nations burned through half a billion years of fossil sunlight in a few short centuries, and stripped the planet of most of its other concentrated resource stocks into the bargain.
That’s the unpalatable reality of the industrial era. Despite the rhetoric of universal betterment that was brandished about so enthusiastically by the propagandists of the industrial order, there were never enough of any of the necessary resources to make that possible for more than a small fraction of the world’s population, or for more than a handful of generations. Nearly all the members of our species who lived outside the industrial nations, and a tolerably large number who resided within them, were expected to carry most of the costs of reckless resource extraction and ecosystem disruption while receiving few if any of the benefits. They’ll have plenty of company shortly: abundance industrialism is winding down, but its consequences are not, and people around the world for centuries and millennia to come will have to deal with the depleted and damaged planet our actions have left them.
That’s a bitter pill to swallow, and the likely aftermath of the industrial age won’t do anything to improve the taste. Over the last six months or so, I’ve drawn on the downside trajectories of other failed civilizations to sketch out how that aftermath will probably play out here in North America: the disintegration of familiar political and economic structures, the rise of warband culture, the collapse of public order, and the failure of cultural continuity, all against a backdrop of rapid and unpredictable climate change, rising seas, and the appearance of chemical and radiological dead zones created by some of industrial civilization’s more clueless habits. It’s an ugly picture, and the only excuse I have for that unwelcome fact is that falling civilizations look like that.
The question that remains, though, is what we’re going to do about it all.
I should say up front that by “we” I don’t mean some suitably photogenic collection of Hollywood heroes and heroines who just happen to have limitless resources and a bag of improbable inventions at their disposal. I don’t mean a US government that has somehow shaken off the senility that affects all great powers in their last days and is prepared to fling everything it has into the quest for a sustainable future. Nor do I mean a coterie of gray-skinned aliens from Zeta Reticuli, square-jawed rapists out of Ayn Rand novels, or some other source of allegedly superior beings who can be counted upon to come swaggering onto the scene to bail us out of the consequences of our own stupidity. They aren’t part of this conversation; the only people who are, just now, are the writer and the readers of this blog.
Within those limits, the question I’ve posed may seem preposterous. I grant that for a phenomenon that practically defines the far edges of the internet—a venue for lengthy and ornately written essays about wildly unpopular subjects by a clergyman from a small and distinctly eccentric fringe religion—The Archdruid Report has a preposterously large readership, and one that somehow manages to find room for a remarkably diverse and talented range of people, bridging some of the ideological and social barriers that divide industrial society into so many armed and uncommunicative camps. Even so, the regular readership of this blog could probably all sit down at once in a football stadium and still leave room for the hot dog vendors. Am I seriously suggesting that this modest and disorganized a group can somehow rise up and take meaningful action in the face of so vast a process as the fall of a civilization?
One of the things that gives that question an ironic flavor is that quite a few people are making what amounts to the same claim in even more grandiose terms than mine. I’m thinking here of the various proposals for a Great Transition of one kind or another being hawked at various points along the social and political spectrum these days. I suspect we’re going to be hearing a lot more from those in the months and years immediately ahead, as the collapse of the fracking bubble forces people to find some other excuse for insisting that they can have their planet and eat it too.
Part of the motivation behind the grand plans just mentioned is straightforwardly financial. One part of what drove the fracking bubble along the classic trajectory—up with the rocket, down with the stick—was a panicked conviction on the part of a great many people that some way had to be found to keep industrial society’s fuel tanks somewhere on the near side of that unwelcome letter E. Another part of it, though, was the recognition on the part of a somewhat smaller but more pragmatic group of people that the panicked conviction in question could be turned into a sales pitch. Fracking wasn’t the only thing that got put to work in the time-honored process of proving Ben Franklin’s proverb about a fool and his money; fuel ethanol, biodiesel, and large-scale wind power also had their promoters, and sucked up their share of government subsidies and private investment.
Now that fracking is falling by the wayside, there’ll likely be a wild scramble to replace it in the public eye as the wave of the energy future. The nuclear industry will doubtless be in there; nuclear power is one of the most durable subsidy dumpsters in modern economic life, and the industry has had to become highly skilled at slurping from the government teat, since nuclear power isn’t economically viable otherwise. It’s worth recalling that no nation on earth has been able to create or maintain a nuclear power program without massive ongoing government subsidies. No doubt we’ll get plenty of cheerleading for fusion, satellite-based solar power, and other bits of high-end vaporware, too.
Still, I suspect the next big energy bubble is probably going to come from the green end of things. Over the last few years, there’s been no shortage of claims that renewable resources can pick right up where fossil fuels leave off and keep the lifestyles of today’s privileged middle classes intact. Those claims tend to be long on enthusiasm and cooked numbers and short on meaningful assessment, but then that same habit didn’t slow the fracking boom any; we can expect to see a renewed flurry of claims that solar power must be sustainable because the sticker price has gone down, and similar logical non sequiturs. (By the same logic, the internet must be sustainable if you can pay your monthly ISP bill by selling cute kitten photos on eBay. In both cases, the sprawling and almost entirely fossil-fueled infrastructure of mines, factories, supply chains, power grids, and the like, has been left out of the equation, as though those don’t have to be accounted for: typical of the blindness to whole systems that pervades so much of contemporary culture.)
It’s not enough for an energy technology to be green, in other words; it also has to work. It’s probably safe to assume that that point is going to be finessed over and over again, in a galaxy of inventive ways, as the fracking bubble goes wherever popped financial bubbles go when they die. The point that next to nobody wants to confront is the one made toward the beginning of this week’s post: if something is unsustainable, sooner or later it won’t be sustained—and what’s unsustainable in this case isn’t simply fossil fuel production and consumption, it’s the lifestyles that were made possible by the immensely abundant and highly concentrated energy supply we got from fossil fuels.
You can’t be part of the solution if your lifestyle is part of the problem. I know that those words are guaranteed to make the environmental equivalent of limousine liberals gasp and clutch their pearls or their Gucci ties, take your pick, but there it is; it really is as simple as that. There are at least two reasons why that maxim needs to be taken seriously. On the one hand, if you’re clinging to an unsustainable lifestyle in the teeth of increasingly strong economic and environmental headwinds, you’re not likely to be able to spare the money, the free time, or any of the other resources you would need to contribute to a solution; on the other, if you’re emotionally and financially invested in keeping an unsustainable lifestyle, you’re likely to put preserving that lifestyle ahead of things that arguably matter more, like leaving a livable planet for future generations.
Is the act of letting go of unsustainable lifestyles the only thing that needs to be done? Of course not, and in the posts immediately ahead I plan on talking at length about some of the other options. I’d like to suggest, though, that it’s the touchstone or, if you will, the boundary that divides those choices that might actually do some good from those that are pretty much guaranteed to do no good at all. That’s useful when considering the choices before us as individuals; it’s at least as useful, if not more so, when considering the collective options we’ll be facing in the months and years ahead, among them the flurry of campaigns, movements, and organizations that are already gearing up to exploit the crisis of our time in one way or another—and with one agenda or another.
An acronym I introduced a while back in these posts might well be worth revisiting here: LESS, which stands for “Less Energy, Stuff, and Stimulation.” That’s a convenient summary of the changes that have to be made to move from today’s unsustainable lifestyles to ways of living that will be viable when today’s habits of absurd extravagance are fading memories. It’s worth taking a moment to unpack the acronym a little further, and see what it implies.
“Less energy” might seem self-evident, but there’s more involved here than just turning off unneeded lights and weatherstripping your windows and doors—though those are admittedly good places to start. A huge fraction of the energy consumed by a modern industrial society gets used indirectly to produce, supply, and transport goods and services; an allegedly “green” technological device that’s made from petroleum-based plastics and exotic metals taken from an open-pit mine in a Third World country, then shipped halfway around the planet to the air-conditioned shopping mall where you bought it, can easily have a carbon footprint substantially bigger than some simpler item that does the same thing in a less immediately efficient way. The blindness to whole systems mentioned earlier has to be overcome in order to make any kind of meaningful sense of energy issues: a point I’ll be discussing further in an upcoming post here.
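The whole-system point above can be made concrete with a toy accounting exercise. The sketch below tallies the energy “price tag” of a hypothetical consumer device across its whole life cycle rather than just its operation; every number in it is a made-up illustrative placeholder, chosen only to show how the embodied share can dwarf the direct one:

```python
# Toy whole-system energy accounting for a hypothetical consumer device.
# The point: the energy cost includes every life-cycle stage, not just use.
# All figures below are made-up illustrative placeholders, not measurements.

lifecycle_mj = {
    "raw material extraction": 900,        # mining and refining inputs
    "manufacturing": 1400,                 # factory energy
    "intercontinental shipping": 300,      # freight to market
    "retail overhead share": 150,          # lighting, air conditioning, etc.
    "use (lifetime electricity)": 500,     # the only part the buyer "sees"
    "disposal": 50,                        # collection and landfill/recycling
}

direct = lifecycle_mj["use (lifetime electricity)"]
embodied = sum(v for k, v in lifecycle_mj.items()
               if k != "use (lifetime electricity)")

print(f"direct use energy: {direct} MJ")
print(f"embodied energy:   {embodied} MJ")
print(f"embodied share:    {embodied / (direct + embodied):.0%}")
```

With these placeholder numbers, the energy spent before the device is ever switched on outweighs its lifetime operating energy several times over, which is exactly why a sticker-price or operating-cost comparison between two technologies can point in the opposite direction from a whole-system one.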
“Less stuff” is equally straightforward on the surface, equally subtle in its ramifications. Now of course it’s hardly irrelevant that ours is the first civilization in the history of the planet to have to create an entire industry of storage facilities to store the personal possessions that won’t fit into history’s biggest homes. That said, “stuff” includes a great deal more than the contents of your closets and storage lockers. It also includes infrastructure—the almost unimaginably vast assortment of technological systems on which the privileged classes of the industrial world rely for most of the activities of their daily lives. That infrastructure was only made possible by the deluge of cheap abundant energy our species briefly accessed from fossil fuels; as what’s left of the world’s fossil fuel supply moves deeper into depletion, the infrastructure that it created has been caught in an accelerating spiral of deferred maintenance and malign neglect; the less dependent you are on what remains, the less vulnerable you are to further systems degradation, and the more of what’s left can go to those who actually need it.
“Less stimulation” may seem like the least important part of the acronym, but in many ways it’s the most crucial point of all. These days most people in the industrial world flood their nervous systems with a torrent of electronic noise. Much of this is quite openly intended to manipulate their thoughts and feelings by economic and political interests; a great deal more has that effect, if only by drowning out any channel of communication that doesn’t conform to the increasingly narrow intellectual tunnel vision of late industrial society. If you’ve ever noticed how much of what passes for thinking these days amounts to the mindless regurgitation of sound bites from the media, dear reader, that’s why. What comes through the media—any media—is inevitably prechewed and predigested according to someone else’s agenda; those who are interested in thinking their own thoughts and making their own decisions, rather than bleating in perfect unison with the rest of the herd, might want to keep this in mind.
It probably needs to be said that very few of us are in a position to go whole hog with LESS—though it’s also relevant that some of us, and quite possibly a great many of us, will end up doing so willy-nilly if the economic contraction at the end of the fracking bubble turns out to be as serious as some current figures suggest. Outside of that grim possibility, “less” doesn’t have to mean “none at all”—certainly not at first; for those who aren’t caught in the crash, at least, there may yet be time to make a gradual transition toward a future of scarce energy and scarce resources. Still, I’d like to suggest that any proposed response to the crisis of our time that doesn’t start with LESS simply isn’t serious.
As already noted, I expect to see a great many nonserious proposals in the months and years ahead. Those who put maintaining their comfortable lifestyles ahead of other goals will doubtless have no trouble coming up with enthusiastic rhetoric and canned numbers to support their case; certainly the promoters and cheerleaders of the soon-to-be-late fracking bubble had no difficulty at all on that score. Not too far in the future, something or other will have been anointed as the shiny new technological wonder that will save us all, or more precisely, that will give the privileged classes of the industrial world a new set of excuses for clinging to some semblance of their current lifestyles for a little while longer. Mention the growing list of things that have previously occupied that hallowed but inevitably temporary status, and you can count on either busy silence or a flustered explanation why it really is different this time.
There may not be that many of us who get past the nonserious proposals, ask the necessary but unwelcome questions about the technosavior du jour, and embrace LESS while there’s still time to do so a step at a time. I’m convinced, though, that those who manage these things are going to be the ones who make a difference in the shape the future will have on the far side of the crisis years ahead. Let go of the futile struggle to sustain the unsustainable, take the time and money and other resources that might be wasted in that cause and do something less foredoomed with them, and there’s a lot that can still be done, even in the confused and calamitous time that’s breaking over us right now. In the posts immediately ahead, as already mentioned, I’ll discuss some of the options; no doubt many of my readers will be able to think of options of their own, for that matter.
I’ve noted before more than once that the collapse of industrial society isn’t something located off in the nearer or further future; it’s something that got under way a good many years ago, has been accelerating around us for decades, and is simply hitting one of the rougher patches of the normal process of decline and fall just now. Most of the nonserious proposals just referred to start from the insistence that that can’t happen. Comforting in the short term, that insistence is a rich source of disaster and misery from any longer perspective, and the sooner each of us gets over it and starts to survey the wreckage around us, the better. Then we can make camp in the ruins, light a fire, get some soup heating in a salvaged iron pot, and begin to talk about where we can go from here.
Imagine for a moment that one of the current US elite—an executive from a too-big-to-fail investment bank, a top bureaucrat from inside the DC beltway, a trust-fund multimillionaire with a pro forma job at the family corporation, or what have you—were to turn up in some chaotic failed state on the fringes of the industrial world, with no money, no resources, no help from abroad, and no ticket home. What’s the likelihood that, without anything other than whatever courage, charisma, and bare-knuckle fighting skills he might happen to have, some such person could equal Odoacer’s feat, win the loyalty and obedience of thousands of gang members and unemployed mercenaries, and lead them in a successful invasion of a neighboring country?
by John Michael Greer
The senility that afflicts ruling elites in their last years, the theme of the previous post in this sequence, is far from the only factor leading the rich and influential members of a failing civilization to their eventual destiny as lamppost decorations or some close equivalent. Another factor, at least as important, is a lethal mismatch between the realities of power in an age of decline and the institutional frameworks inherited from a previous age of ascent.
That sounds very abstract, and appropriately so. Power in a mature civilization is very abstract, and the further you ascend the social ladder, the more abstract it becomes. Conspiracy theorists of a certain stripe have invested vast amounts of time and effort in quarrels over which specific group of people it is that runs everything in today’s America. All of it was wasted, because the nature of power in a mature civilization precludes the emergence of any one center of power that dominates all others.
Look at the world through the eyes of an elite class and it’s easy to see how this works. Members of an elite class compete against one another to increase their own wealth and influence, and form alliances to pool resources and counter the depredations of their rivals. The result, in every human society complex enough to have an elite class in the first place, is an elite composed of squabbling factions that jealously resist any attempt at further centralization of power. In times of crisis, that resistance can be overcome, but in less troubled times, any attempt by an individual or faction to seize control of the whole system faces the united opposition of the rest of the elite class.
One result of the constant defensive stance of elite factions against each other is that as a society matures, power tends to pass from individuals to institutions. Bureaucratic systems take over more and more of the management of political, economic, and cultural affairs, and the policies that guide the bureaucrats in their work slowly harden until they are no more subject to change than the law of gravity. Among its other benefits to the existing order of society, this habit—we may as well call it policy mummification—limits the likelihood that an ambitious individual can parlay control over a single bureaucracy into a weapon against his rivals.
Our civilization is no exception to any of this. In the modern industrial world, some bureaucracies are overtly part of the political sphere; others—we call them corporations—are supposedly apart from government, and still others like to call themselves “non-governmental organizations” as a form of protective camouflage. They are all part of the institutional structure of power, and thus function in practice as arms of government. They have more in common than this; most of them have the same hierarchical structure and organizational culture; those that are large enough to matter have executives who went to the same schools, share the same values, and crave the same handouts from higher up the ladder. No matter how revolutionary their rhetoric, for that matter, upsetting the system that provides them with their status and its substantial benefits is the last thing any of them want to do.
All these arrangements make for a great deal of stability, which the elite classes of mature civilizations generally crave. The downside is that it’s not easy for a society that’s proceeded along this path to change its ways to respond to new circumstances. Getting an entrenched bureaucracy to set aside its mummified policies in the face of changing conditions is generally so difficult that it’s often easier to leave the old system in place while redirecting all its important functions to another, newly founded bureaucracy oriented toward the new policies. If conditions change again, the same procedure repeats, producing a layer cake of bureaucratic organizations that all supposedly exist to do the same thing.
Consider, as one example out of many, the shifting of responsibility for US foreign policy over the years. Officially, the State Department has charge of foreign affairs; in practice, its key responsibilities passed many decades ago to the staff of the National Security Council, and more recently have shifted again to coteries of advisers assigned to the Office of the President. In each case, what drove the shift was the attachment of the older institution to a set of policies and procedures that stopped being relevant to the world of foreign policy—in the case of the State Department, the customary notions of old-fashioned diplomacy; in the case of the National Security Council, the bipolar power politics of the Cold War era—but could not be dislodged from the bureaucracy in question due to the immense inertia of policy mummification in institutional frameworks.
The layered systems that result are not without their practical advantages to the existing order. Multiple bureaucracies provide even more stability than a single bureaucracy, since it’s often necessary for the people who actually have day-to-day responsibility for this or that government function to get formal approval from the top officials of the agency or agencies that used to have that responsibility. Even when those officials no longer have any formal way to block a policy they don’t like, the personal and contextual nature of elite politics means that informal options usually exist. Furthermore, since the titular headship of some formerly important body such as the US State Department confers prestige but not power, it makes a good consolation prize to be handed out to also-rans in major political contests, a place to park well-connected incompetents, or what have you.
Those of my readers who recall the discussion of catabolic collapse three weeks ago will already have figured out one of the problems with the sort of system that results from the processes just sketched out: the maintenance bill for so baroque a form of capital is not small. In a mature civilization, a large fraction of available resources and economic production ends up being consumed by institutions that no longer have any real function beyond perpetuating their own existence and the salaries and prestige of their upper-level functionaries. It’s not unusual for the maintenance costs of unproductive capital of this kind to become so great a burden on society that the burden in itself forces a crisis—that was one of the major forces that brought on the French Revolution, for instance. Still, I’d like to focus for a moment on a different issue, which is the effect that the institutionalization of power and the multiplication of bureaucracies have on the elites who allegedly run the system from which they so richly benefit.
France in the years leading up to the Revolution makes a superb example, one that John Kenneth Galbraith discussed with his trademark sardonic humor in his useful book The Culture of Contentment. The role of the ruling elite in pre-1789 France was occupied by close equivalents of the people who fill that same position in America today: the “nobility of the sword,” the old feudal aristocracy, who had roughly the same role as the holders of inherited wealth in today’s America, and the “nobility of the robe,” who owed their position to education, political office, and a talent for social climbing, and thus had roughly the same role as successful Ivy League graduates do here and now. These two elite classes sparred constantly against each other, and just as constantly competed against their own peers for wealth, influence, and position.
One of the most notable features of both sides of the French elite in those days was just how little either group actually had to do with the day-to-day management of public affairs, or for that matter of their own considerable wealth. The great aristocratic estates of the time were bureaucratic societies in miniature, ruled by hierarchies of feudal servitors and middle-class managers, while the hot new financial innovation of the time, the stock market, allowed those who wanted their wealth in a less tradition-infested form to neglect every part of business ownership but the profits. Those members of the upper classes who held offices in government, the church, and the other venues of power presided decorously over institutions that were perfectly capable of functioning without them.
The elite classes of mature civilizations almost always seek to establish arrangements of this sort, and understandably so. It’s easy to recognize the attractiveness of a state of affairs in which the holders of wealth and influence get all the advantages of their positions and have to put up with as few as possible of the inconveniences thereof. That said, this attraction is also a death wish, because it rarely takes the people who actually do the work long to figure out that a ruling class in this situation has become entirely parasitic, and that society would continue to function perfectly well were something suitably terminal to happen to the titular holders of power.
This is why most of the revolutions in modern history have taken place in nations in which the ruling elite has followed its predilections and handed over all its duties to subordinates. In the case of the American revolution, the English nobility had been directly involved in colonial affairs in the first century or so after Jamestown. Once it left the colonists to manage their own affairs, the latter needed very little time to realize that the only thing they had to lose by seeking independence was the steady hemorrhage of wealth from the colonies to England. In the case of the French and Russian revolutions, much the same thing happened without the benefit of an ocean in the way: the middle classes who actually ran both societies recognized that the monarchy and aristocracy had become disposable, and promptly disposed of them once a crisis made it possible to do so.
The crisis just mentioned is a significant factor in the process. Under normal conditions, a society with a purely decorative ruling elite can keep on stumbling along indefinitely on sheer momentum. It usually takes a crisis—Britain’s military response to colonial protests in 1775, the effective bankruptcy of the French government in 1789, the total military failure of the Russian government in 1917, or what have you—to convince the people who actually handle the levers of power that their best interests no longer lie with their erstwhile masters. Once the crisis hits, the unraveling of the institutional structures of authority can happen with blinding speed, and the former ruling elite is rarely in a position to do anything about it. All they have ever had to do, and all they know how to do, is issue orders to deferential subordinates. When there are none of these latter to be found, or (as more often happens) when the people to whom the deferential subordinates are supposed to pass the orders are no longer interested in listening, the elite has no options left.
The key point to be grasped here is that power is always contextual. A powerful person is a person able to exert particular kinds of power, using particular means, on some particular group of other people, and someone thus can be immensely powerful in one setting and completely powerless in another. What renders the elite classes of a mature society vulnerable to a total collapse of power is that they almost always lose track of this unwelcome fact. Hereditary elites are particularly prone to fall into the trap of thinking of their position in society as an accurate measure of their own personal qualifications to rule, but it’s also quite common for those who are brought into the elite from the classes immediately below to think of their elevation as proof of their innate superiority. That kind of thinking is natural for elites, but once they embrace it, they’re doomed.
It’s dangerous enough for elites to lose track of the contextual and contingent nature of their power when the mechanisms through which power is enforced can be expected to remain in place—as it was in the American colonies in 1776, France in 1789, and Russia in 1917. It’s far more dangerous if the mechanisms of power themselves are in flux. That can happen for any number of reasons, but the one that’s of central importance to the theme of this series of posts is the catabolic collapse of a declining civilization, in which the existing mechanisms of power come apart because their maintenance costs can no longer be met.
That poses at least two challenges to the ruling elite, one obvious and the other less so. The obvious one is that any deterioration in the mechanisms of power limits the ability of the elite to keep the remaining mechanisms of power funded, since a great deal of power is always expended in paying the maintenance costs of power. Thus in the declining years of Rome, for example, the crucial problem the empire faced was precisely that the sprawling system of imperial political and military administration cost more than the imperial revenues could support, but the weakening of that system made it even harder to collect the revenues on which the rest of the system depended, and forced more of what money there was to go for crisis management. Year after year, as a result, roads, fortresses, and the rest of the infrastructure of Roman power sank under a burden of deferred maintenance and malign neglect, and the consequences of each collapse became more and more severe because there was less and less in the treasury to pay for rebuilding when the crisis was over.
That’s the obvious issue. More subtle is the change in the nature of power that accompanies the decay in the mechanisms by which it’s traditionally been used. Power in a mature civilization, as already noted, is very abstract, and the people who are responsible for administering it at the top of the social ladder rise to those positions precisely because of their ability to manage abstract power through the complex machinery that a mature civilization provides them. As the mechanisms collapse, though, power stops being abstract in a hurry, and the skills that allow the manipulation of abstract power have almost nothing in common with the skills that allow concrete power to be wielded.
Late imperial Rome, again, is a fine example. There, as in other mature civilizations, the ruling elite had a firm grip on the intricate mechanisms of social control at their uppermost and least tangible end. The inner circle of each imperial administration—which sometimes included the emperor himself, and sometimes treated him as a sock puppet—could rely on sprawling many-layered civil and military bureaucracies to put their orders into effect. They were by and large subtle, ruthless, well-educated men, schooled in the intricacies of imperial administration, oriented toward the big picture, and completely dependent on the obedience of their underlings and the survival of the Roman system itself.
The people who replaced them, once the empire actually fell, shared none of these characteristics except the ruthlessness. The barbarian warlords who carved up the corpse of Roman power had a completely different set of skills and characteristics: raw physical courage, a high degree of competence in the warrior’s trade, and the kind of charisma that attracts cooperation and obedience from those who have many other options. Their power was concrete, personal, and astonishingly independent of institutional forms. That’s why Odoacer, whose remarkable career was mentioned in an earlier post in this sequence, could turn up alone in a border province, patch together an army out of a random mix of barbarian warriors, and promptly lead them to the conquest of Italy.
There were a very few members of the late Roman elite who could exercise power in the same way as Odoacer and his equivalents, and they’re the exceptions that prove the rule. The greatest of them, Flavius Aetius, spent many years in youth as a hostage in the royal courts of the Visigoths and the Huns and got his practical education there, rather than in Roman schools. He was for all practical purposes a barbarian warlord who happened to be Roman by birth, and played the game as well as any of the other warlords of his age. His vulnerabilities were all on the Roman side of the frontier, where the institutions of Roman society still retained a fingernail grip on power, and so—having defeated the Visigoths, the Franks, the Burgundians, and the massed armies of Attila the Hun, all for the sake of Rome’s survival—he was assassinated by the emperor he served.
Fast forward some fifteen centuries and it’s far from difficult to see how the same pattern of elite extinction through the collapse of political complexity will likely work out here in North America. The ruling elites of our society, like those of the late Roman Empire, are superbly skilled at manipulating and parasitizing a fantastically elaborate bureaucratic machine which includes governments, business firms, universities, and many other institutions among its components. That’s what they do, that’s what they know how to do, and that’s what all their training and experience has prepared them to do. Thus their position is exactly equivalent to that of French aristocrats before 1789, but they’re facing the added difficulty that the vast mechanism on which their power depends has maintenance costs that their civilization can no longer meet. As the machine fails, so does their power.
Nor are they particularly well prepared to make the transition to a radically different way of exercising power. Imagine for a moment that one of the current US elite—an executive from a too-big-to-fail investment bank, a top bureaucrat from inside the DC beltway, a trust-fund multimillionaire with a pro forma job at the family corporation, or what have you—were to turn up in some chaotic failed state on the fringes of the industrial world, with no money, no resources, no help from abroad, and no ticket home. What’s the likelihood that, without anything other than whatever courage, charisma, and bare-knuckle fighting skills he might happen to have, some such person could equal Odoacer’s feat, win the loyalty and obedience of thousands of gang members and unemployed mercenaries, and lead them in a successful invasion of a neighboring country?
There are people in North America who could probably carry off a feat of that kind, but you won’t find them in the current ruling elite. That in itself defines part of the path to dark age America: the replacement of a ruling class that specializes in managing abstract power through institutions with a ruling class that specializes in expressing power up close and in person, using the business end of the nearest available weapon. The process by which the new elite emerges and elbows its predecessors out of the way, in turn, is among the most reliable dimensions of decline and fall; we’ll talk about it next week.
by John Michael Greer
Nothing is easier, as the Long Descent begins to pick up speed around us, than giving in to despair—and nothing is more pointless. Those of us who are alive today are faced with the hugely demanding task of coping with the consequences of industrial civilization’s decline and fall, and saving as many as possible of the best achievements of the last few centuries so that they can cushion the descent and enrich the human societies of the far future. That won’t be easy; so? The same challenge has been faced many times before, and quite often it’s been faced with relative success.
The circumstances of the present case are in some ways more difficult than past equivalents, to be sure, but the tools and the knowledge base available to cope with them are almost incomparably greater. All in all, factoring in the greater challenges and the greater resources, it’s probably fair to suggest that the challenge of our time is about on a par with other eras of decline and fall. The only question that still remains to be settled is how many of the people who are awake to the imminence of crisis will rise to the challenge, and how many will fail to do so.
The suicide of peak oil writer Mike Ruppert two days ago puts a bit of additional emphasis on that last point. I never met Ruppert, though we corresponded back in the days when his “From The Wilderness” website was one of the few places on the internet that paid any attention at all to peak oil, and I don’t claim to know what personal demons drove him to put a bullet through his brain. Over the last eight years, though, as the project of this blog has brought me into contact with more and more people who are grappling with the predicament of our time, I’ve met a great many people whose plans for dealing with a postpeak world amount to much the same thing. Some of them are quite forthright about it, which at least has the virtue of honesty. Rather more of them conceal the starkness of that choice behind a variety of convenient evasions, the insistence that we’re all going to die soon anyway being far and away the most popular of these just now.
I admit to a certain macabre curiosity about how that will play out in the years ahead. I’ve suspected for a while now, for example, that the baby boomers will manage one final mediagenic fad on the way out, and the generation that marked its childhood with coonskin caps and hula hoops and its puberty with love beads and Beatlemania will finish with a fad for suicide parties, in which attendees reminisce to the sound of the tunes they loved in high school, then wash down pills with vodka and help each other tie plastic bags over their heads. Still, I wonder how many people will have second thoughts once every other option has gone whistling down the wind, and fling themselves into an assortment of futile attempts to have their cake when they’ve already eaten it right down to the bare plate. We may see some truly bizarre religious movements, and some truly destructive political ones, before those who go around today insisting that they don’t want to live in a deindustrial world finally get their wish.
There are, of course, plenty of other options. The best choice for most of us, as I’ve noted here in previous posts, follows a strategy I’ve described wryly as “collapse first and avoid the rush:” getting ahead of the curve of decline, in other words, and downshifting to a much less extravagant lifestyle while there’s still time to pick up the skills and tools needed to do it competently. Despite the strident insistence from defenders of the status quo that anything less than business as usual amounts to heading straight back to the caves, it’s entirely possible to have a decent and tolerably comfortable life on a tiny fraction of the energy and resource base that middle class Americans think they can’t possibly do without. Mind you, you have to know how to do it, and that’s not the sort of knowledge you can pick up from a manual, which is why it’s crucial to start now and get through the learning curve while you still have the income and the resources to cushion the impact of the inevitable mistakes.
This is more or less what I’ve been saying for eight years now. The difficulty at this stage in the process, though, is that a growing number of Americans are running out of time. I don’t think it’s escaped the notice of many people in this country that despite all the cheerleading from government officials, despite all the reassurances from dignified and clueless economists, despite all those reams of doctored statistics gobbled down whole by the watchdogs-turned-lapdogs of the media and spewed forth undigested onto the evening news, the US economy is not getting better. Outside a few privileged sectors, times are hard and getting harder; more and more Americans are slipping into the bleak category of the long-term unemployed, and a great many of those who can still find employment work at part-time positions for sweatshop wages with no benefits at all.
Despite all the same cheerleading, reassurances, and doctored statistics, furthermore, the US economy is not going to get better: not for more than brief intervals by any measure, and not at all if “better” means returning to some equivalent of America’s late 20th century boomtime. Those days are over, and they will not return. That harsh reality is having an immediate impact on some of my readers already, and that impact will only spread as time goes on. For those who have already been caught by the economic downdrafts, it’s arguably too late to collapse first and avoid the rush; willy-nilly, they’re already collapsing as fast as they can, and the rush is picking up speed around them as we speak.
For those who aren’t yet in that situation, the need to make changes while there’s still time to do so is paramount, and a significant number of my readers seem to be aware of this. One measure of that is the number of requests for personal advice I field, which has gone up steeply in recent months. Those requests cover a pretty fair selection of the whole gamut of human situations in a failing civilization, but one question has been coming up more and more often of late: the question of what jobs might be likely to provide steady employment as the industrial economy comes apart.
That’s a point I’ve been mulling over of late, since its implications intersect the whole tangled web in which our economy and society are snared just now. In particular, it assumes that the current way of bringing work together with workers, and turning the potentials of human mind and muscle toward the production of goods and services, is likely to remain in place for the time being, and it’s becoming increasingly clear to me that this won’t be the case.
It’s important to be clear on exactly what’s being discussed here. Human beings have always had to produce goods and services to stay alive and keep their families and communities going; that’s not going to change. In nonindustrial societies, though, most work is performed by individuals who consume the product of their own labor, and most of the rest is sold or bartered directly by the people who produce it to the people who consume it. What sets the industrial world apart is that a third party, the employer, inserts himself into this process, hiring people to produce goods and services and then selling those goods and services to buyers. That’s employment, in the modern sense of the word; most people think of getting hired by an employer, for a fixed salary or wage, to produce goods and services that the employer then sells to someone else, as the normal and natural state of affairs—but it’s a state of affairs that is already beginning to break down around us, because the surpluses that make that kind of employment economically viable are going away.
Let’s begin with the big picture. In any human society, whether it’s a tribe of hunter-gatherers, an industrial nation-state, or anything else, people apply energy to raw materials to produce goods and services; this is what we mean by the word “economy.” The goods and services that any economy can produce are strictly limited by the energy sources and raw materials that it can access.
A principle that ecologists call Liebig’s law of the minimum is relevant here: the amount of anything that a given species or ecosystem can produce in a given place and time is limited by whichever resource is in shortest supply. Most people get that when thinking about the nonhuman world; it makes sense that plants can’t use extra sunlight to make up for a shortage of water, and that you can’t treat soil deficient in phosphates by adding extra nitrates. It’s when you apply this same logic to human societies that the mental gears jam up, because we’ve been told so often that one resource can always be substituted for another that most people believe it without a second thought.
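Liebig’s law can be put in concrete terms with a few lines of code. The sketch below is purely illustrative—the crop, the resource names, and every figure in it are invented for the example—but it shows why adding more of an abundant input does nothing for output:

```python
# A minimal sketch of Liebig's law of the minimum: total output is capped
# by whichever resource is in shortest supply relative to what each unit
# of output requires. All numbers here are invented for illustration.

def max_output(available, required_per_unit):
    """Units producible given resource stocks and per-unit requirements."""
    return min(available[r] // required_per_unit[r] for r in required_per_unit)

# A hypothetical crop: each tonne needs water, nitrates, and phosphates.
available = {"water": 1000, "nitrates": 80, "phosphates": 15}
required_per_unit = {"water": 50, "nitrates": 4, "phosphates": 3}

print(max_output(available, required_per_unit))  # 5 tonnes: phosphates limit

# Doubling the nitrate supply changes nothing, because nitrates
# were never the limiting resource.
available["nitrates"] = 160
print(max_output(available, required_per_unit))  # still 5 tonnes
```

The same arithmetic applies whether the scarce input is phosphates in a field or concentrated energy in an industrial economy.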
What’s going on here, though, is considerably more subtle than current jargon reflects. Examine most of the cases of resource substitution that find their way into economics textbooks, and you’ll find that what’s happened is that a process of resource extraction that uses less energy on a scarcer material has been replaced by another process that takes more energy but uses more abundant materials. The shift from high-quality iron ores to low-grade taconite that reshaped the iron industry in the 20th century, for example, was possible because ever-increasing amounts of highly concentrated energy could be put into the smelting process without making the resulting iron too expensive for the market.
The point made by this and comparable examples is applicable across the board to what I’ve termed technic societies, that subset of human societies—ours is the first, though probably not the last—in which a large fraction of total energy per capita comes from nonbiological sources and is put to work by way of machines rather than human or animal muscles. Far more often than not, in such societies, concentrated energy is the limiting resource. Given an abundant enough supply of concentrated energy at a low enough price, it would be possible to supply a technic society with raw materials by extracting dissolved minerals from seawater or chewing up ordinary rock to get a part per million or so of this or that useful element. Lacking that—and there are good reasons to think that human societies will always be lacking that—access to concentrated energy is where Liebig’s law bites down hard.
Another way to make this same point is to think of how much of any given product a single worker can make in a day using a set of good hand tools, and comparing that to the quantity of the same thing that the same worker could make using the successive generations of factory equipment, from the steam-driven and belt-fed power tools of the late 19th century straight through to the computerized milling machines and assembly-line robots of today. The difference can be expressed most clearly as a matter of the amount of energy being applied directly and indirectly to the manufacturing process—not merely the energy driving the tools through the manufacturing process, but the energy that goes into manufacturing and maintaining the tools, supporting the infrastructure needed for manufacture and maintenance, and so on through the whole system involved in the manufacturing process.
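The comparison above can be roughed out numerically. The sketch below uses entirely invented figures—the output rates, energy amounts, and equipment lifetimes are hypothetical—but it makes the accounting explicit: energy per finished item includes both the energy driving the tools each day and a share of the energy embodied in making the tools, amortized over their working life:

```python
# A rough sketch, with invented numbers, of energy accounting per finished
# item: direct energy driving the tools each day, plus the embodied energy
# of the tools themselves amortized over their working lifetime.

def energy_per_item(items_per_day, direct_energy_per_day,
                    embodied_energy, lifetime_days):
    amortized = embodied_energy / lifetime_days
    return (direct_energy_per_day + amortized) / items_per_day

# Hypothetical hand-tool shop: 5 items a day, modest tools.
hand_tools = energy_per_item(items_per_day=5, direct_energy_per_day=10,
                             embodied_energy=500, lifetime_days=5000)

# Hypothetical factory line: 100x the output per worker, but vastly more
# energy both driving the machinery and embodied in it.
factory = energy_per_item(items_per_day=500, direct_energy_per_day=5000,
                          embodied_energy=5_000_000, lifetime_days=5000)

print(hand_tools)  # 2.02 energy units per item
print(factory)     # 12.0 energy units per item
```

With these made-up figures the factory worker turns out a hundred times as many items per day, but each item absorbs several times as much energy—which is exactly the trade that cheap concentrated energy made viable, and that scarce energy unwinds.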
Maverick economist E.F. Schumacher, whose work has been discussed in this blog many times already, pointed out that the cost per worker of equipping a workplace is one of the many crucial factors that mainstream economic thought invariably neglects. That cost is usually expressed in financial terms, but underlying the abstract tokens we call money is a real cost in energy, expressed in terms of the goods and services that have to be consumed in the process of equipping and maintaining the workplace. If you have energy to spare, that’s not a problem; if you don’t, on the other hand, you’re actually better off using a less complex technology—what Schumacher called “intermediate technology” and the movement in which I studied green wizardry thirty years ago called “appropriate technology.”
The cost per worker of equipping a workplace, in turn, also has a political dimension—a point that Schumacher did not neglect, though nearly all other economists pretend that it doesn’t exist. The more costly it is to equip a workplace, the more certain it is that workers won’t be able to set themselves up in business, and the more control the very rich will then have over economic production and the supply of jobs. As Joseph Tainter pointed out in The Collapse of Complex Societies, social complexity correlates precisely with social hierarchy; one of the functions of complexity, in the workplace as elsewhere, is thus to maintain existing social pecking orders.
Schumacher’s arguments, though, focused on the Third World nations of his own time, which had very little manufacturing capacity at all—most of them, remember, had been colonies of European empires, assigned the role of producing raw materials and buying finished products from the imperial center as part of the wealth pump that drove them into grinding poverty while keeping their imperial overlords rich. He focused on advising client nations on how to build their own economies and extract themselves from the political grip of their former overlords, who were usually all too eager to import high-tech factories which their upper classes inevitably controlled. The situation is considerably more challenging when your economy is geared to immense surpluses of concentrated energy, and the supply of energy begins to run short—and of course that’s the situation we’re in today.
Even if it were just a matter of replacing factory equipment, that would be a huge challenge, because all those expensive machines—not to mention the infrastructure that manufactures them, maintains them, supplies them, and integrates their products into the wider economy—count as sunk costs, subject to what social psychologists call the “Concorde fallacy,” the conviction that it’s less wasteful to keep on throwing money into a failing project than to cut your losses and do something else. The real problem is that it’s not just factory equipment; the entire economy has been structured from the ground up to use colossal amounts of highly concentrated energy, and everything that’s been invested in that economy since the beginning of the modern era thus counts as a sunk cost to one degree or another.
What makes this even more challenging is that very few people in the modern industrial world actually produce goods and services for consumers, much less for themselves, by applying energy to raw materials. The vast majority of today’s employees, and in particular all those who have the wealth and influence that come with high social status, don’t do this. Executives, brokers, bankers, consultants, analysts, salespeople—well, I could go on for pages: the whole range of what used to be called white-collar jobs exists to support the production of goods and services by the working joes and janes managing all the energy-intensive machinery down there on the shop floor. So does the entire vast maze of the financial industry, and so do the legions of government bureaucrats—local, state, and federal—who manage, regulate, or oversee one or another aspect of economic activity.
All these people are understandably just as interested in keeping their jobs as the working joes and janes down there on the shop floor, and yet the energy surpluses that made it economically viable to perch such an immensely complex infrastructure on top of the production of goods and services for consumers are going away. The result is a frantic struggle on everyone’s part to make sure that the other guy loses his job first. It’s a struggle that all of them will ultimately lose—as the energy surplus needed to support it dwindles away, so will the entire system that’s perched on that high but precarious support—and so, as long as that system remains in place, getting hired by an employer, paid a regular wage or salary, and given work and a workplace to produce goods and services that the employer then sells to someone else, is going to become increasingly rare and increasingly unrewarding.
That transformation is already well under way. Nobody I know personally who works for an employer in the sense I’ve just outlined is prospering in today’s American economy. Most of the people I know who are employees in the usual sense of the word are having their benefits slashed, their working conditions worsened, their hours cut, and their pay reduced by one maneuver or another, and the threat of being laid off is constantly hovering over their heads. The few exceptions are treading water and hoping to escape the same fate. None of this is accidental, and none of it is merely the result of greed on the part of the very rich, though admittedly the culture of executive kleptocracy at the upper end of the American social pyramid is making things a good deal worse than they might otherwise be.
The people I know who are prospering right now are those who produce goods and services for their own use, and provide goods and services directly to other people, without having an employer to provide them with work, a workplace, and a regular wage or salary. Some of these people have to stay under the radar screen of the current legal and regulatory system, since the people who work in that system are trying to preserve their own jobs by making life difficult for those who try to do without their services. Others can do things more openly. All of them have sidestepped as many as possible of the infrastructure services that are supposed to be part of an employee’s working life—for example, they aren’t getting trained at universities, since the US academic industry these days is just another predatory business sector trying to keep itself afloat by running others into the ground, and they aren’t going to banks for working capital for much the same reason. They’re using their own labor, their own wits, and their own personal connections with potential customers, to find a niche in which they can earn the money (or barter for the goods) they need or want.
I’d like to suggest that this is the wave of the future—not least because this is how economic life normally operates in nonindustrial societies, where the vast majority of people in the workforce are directly engaged in the production of goods and services for themselves and their own customers. The surplus that supports all those people in management, finance, and so on is a luxury that nonindustrial societies don’t have. In the most pragmatic of economic senses, collapsing now and avoiding the rush involves getting out of a dying model of economics before it drags you down, and finding your footing in the emerging informal economy while there’s still time to get past the worst of the learning curve.
Playing by the rules of a dying economy, that is, is not a strategy with a high success rate or a long shelf life. Those of my readers who are still employed in the usual sense of the term may choose to hold onto that increasingly rare status, but it’s not wise for them to assume that such arrangements will last indefinitely; using the available money and other resources to get training, tools, and skills for some other way of getting by would probably be a wise strategy. Those of my readers who have already fallen through the widening cracks of the employment economy will have a harder row to hoe in many cases; for them, the crucial requirement is getting access to food, shelter, and other necessities while figuring out what to do next and getting through any learning curve that might be required.
All these are challenges; still, like the broader challenge of coping with the decline and fall of a civilization, they are challenges that countless other people have met in other places and times. Those who are willing to set aside currently popular fantasies of entitlement and the fashionable pleasures of despair will likely be in a position to do the same thing this time around, too.
by John Michael Greer
Man, the conqueror of Nature, died Monday night of a petroleum overdose, the medical examiner’s office confirmed this morning. The abstract representation of the human race was 408 years old. The official announcement has done nothing to quell the rumors of suicide and substance abuse that have swirled around the death scene since the first announcement yesterday morning, adding new legal wrinkles to the struggle already under way over Man’s inheritance.
Man’s closest associates disagree about what happened. His longtime friend and confidant Technology thinks it was suicide. “Sure, Man liked to have a good time,” he said at a press conference Tuesday evening, “and he was a pretty heavy user, but it wasn’t like he was out of control or anything. No, I’m sure he did it on purpose. Just a couple of weeks ago we were hanging out at his place, looking up at the moon and talking about the trips we made out there, and he turned to me and said, ‘You know, Tech, that was a good time—a really good time. I wonder if I’ll ever do anything like that again.’ He got into moods like that more and more often in the last few years. I tried to cheer him up, talking about going to Mars or what have you, and he’d go along with it but you could tell his heart wasn’t in it.”
Other witnesses told a different story. “It was terrifying,” said a housekeeper who requested that her name not be given. “He was using more and more of the stuff every day, shooting it up morning, noon and night, and when his connections couldn’t get him as much as he wanted, he’d go nuts. You’d hear him screaming at the top of his lungs and pounding his fists on the walls. Everybody on the staff would hide whenever that happened, and it happened more and more often—the amount he was using was just unbelievable. Some of his friends tried to talk him into getting help, or even just cutting back a little on his petroleum habit, but he wouldn’t listen.”
The medical examiner’s office and the police are investigating Man’s death right now. Until their report comes out, the tragic end of humanity’s late self-image remains shrouded in mystery and speculation.
A Tumultuous Family Saga
“He was always a rebel,” said Clio, the muse of history, in an exclusive interview in her office on Parnassus this morning. “That was partly his early environment, of course. He was born in the household of Sir Francis Bacon, remember, and brought up by some of the best minds of seventeenth-century Europe; an abstract image of humanity raised by people like that wasn’t likely to sit back and leave things as they were, you know. Still, I think there were strong family influences too. His father was quite the original figure himself, back in the day.”
Though almost forgotten nowadays, Man’s father Everyman, the abstract representation of medieval humanity, was as mediagenic in his own time as his son became later on. The star of a wildly popular morality play and the subject of countless biographies, Everyman was born in extreme poverty in a hovel in post-Roman Europe, worked his way up to become a wealthy and influential figure in the Middle Ages and Renaissance, then stepped aside from his financial and political affairs to devote his last years to religious concerns. Savage quarrels between father and son kept the broadsheet and pamphlet press fed with juicy stories all through the seventeenth and eighteenth centuries, and eventually led to their final breach over Darwin’s theory of evolution in 1859.
By that time Man was already having problems with substance abuse. “He was just using coal at first,” Technology reminisced. “Well, let’s be fair, we both were. That was the hot new drug in those days. It was cheap, you could get it without too much hassle, and everybody on the cutting edge was using it. I remember one trip we took together—it was on one of the early railroads, at thirty miles an hour. We thought that was really fast. Were we innocent back then, or what?”
Clio agreed with that assessment. “I don’t think Man had any idea what he was getting into, when he started abusing coal,” she said. “It was an easy habit to fall into, very popular in avant-garde circles just then, and nobody yet knew much about the long-term consequences of fossil fuel abuse. Then, of course, he started his campaign to conquer Nature, and he found out very quickly that he couldn’t keep up the pace he’d set for himself without artificial help. That was when the real tragedy began.”
The Conquest of Nature
It’s an open question when Man first decided to conquer Nature. “The biographers all have their own opinions on that,” Clio explained, gesturing at a shelf loaded with books on Man’s dramatic and controversial career. “Some trace it back to the influence of his foster-father Francis Bacon, or the other mentors and teachers he had in his early days. Others say that the inspiration came from the crowd he ran with when he was coming of age in the eighteenth and nineteenth centuries. He used to tell interviewers that it was a family thing, that everyone in his family all the way back to the Stone Age had been trying to conquer Nature and he was just the one who finally succeeded, but that won’t stand up to any kind of scrutiny. Examine the career of Everyman, for example, and you’ll find that he wasn’t interested in conquering Nature; he wanted to conquer himself.”
“The business about conquering Nature?” Technology said. “He got into that back when we were running around being young and crazy. I think he got the idea originally from his foster-father or one of the other old guys who taught him when he was a kid, but as far as I know it wasn’t a big deal to him until later. Now I could be wrong, you know. I didn’t know him that well in those days; I was mostly just doing my thing then, digging mines, building water mills, stuff like that. We didn’t get really close until we both got involved in this complicated coal deal; we were both using, but I was dealing, too, and I could get it cheaper than anybody else—I was using steam, and none of the other dealers knew how to do that. So we got to be friends and we had some really wild times together, and now and then when we were good and ripped, he’d get to talking about how Nature ought to belong to him and one of these days he was going to hire some soldiers and just take it.
“Me, I couldn’t have cared less, except that Man kept on bringing me these great technical problems, really sweet little puzzles, and I’ve always been a sucker for those. He figured out how I was getting the coal for him so cheap, you see, and guessed that I could take those same tricks and use them for his war against Nature. For me, it was just a game; for Nature, against Nature, I couldn’t care less. Just give me a problem and let me get to work on it, and I’m happy.”
“But it wasn’t just a game for him. I think it was 1774 when he really put me to work on it. He’d hired some mercenaries by then, and was raising money and getting all kinds of stuff ready for the war. He wanted steam engines so, like the man said, it was steam engine time—I got working on factories, railroads, steamships, all the rest. He already had some of his people crossing the border into Nature to seize bits of territory before then, but the eighteenth century, that’s when the invasion started for real. I used to stand next to him at the big rallies he liked to hold in those days, with all the soldiers standing in long lines, and he’d go into these wild rants about the glorious future we were going to see once Nature was conquered. The soldiers loved it; they’d cheer and grab their scientific instruments and lab coats and go conquer another province of Nature.”
The Triumphant Years
It was in 1859, Technology recalled, that Man first started using petroleum. “He’d just had the big spat with his dad over this Darwin dude: the worst fight they ever had, and in fact Man never spoke to the old man again. Man was still steaming about the fight for days afterwards, and then we heard that this guy named Edwin Drake over in Pennsylvania could get you something that was an even bigger rush than coal. Of course Man had to have some, and I said to myself, hey, I’ll give it a try—and that was all she wrote, baby. Oh, we kept using coal, and a fair bit of it, but there’s nothing like petroleum.
“What’s more, Man figured out that that’s what he needed to finish his conquest of Nature. His mercs had a good chunk of Nature by then, but not all of it, not even half, and Man was having trouble holding some of the territory he’d taken—there were guerrillas behind his lines, that sort of thing. He’d pace around at headquarters, snapping at his staff, trying to figure out how to get the edge he needed to beat Nature once and for all. ‘I’ve gotta have it all, Tech,’ he’d say sometimes, when we were flopped on the couch in his private quarters with a couple of needles and a barrel of petroleum, getting really buzzed. ‘I’ve conquered distance, the land, the surface of the sea—it’s not enough. I want it _all_.’ And you know, he got pretty close.”
Petroleum was the key, Clio explained. “It wasn’t just that Man used petroleum, all his soldiers and his support staff were using it too, and over the short term it’s an incredibly powerful drug; it gives users a rush of energy that has to be seen to be believed. Whole provinces of Nature that resisted every attack in the first part of the war were overrun once Man started shipping petroleum to his forces. By the 1950s, as a result, the conquest of Nature was all but complete. Nature still had a few divisions holed up in isolated corners where they couldn’t be gotten at by Man’s forces, and partisan units were all over the conquered zone, but those were minor irritations at that point. It was easy enough for Man and his followers to convince themselves that in a little while the last holdouts would be defeated and Nature would be conquered once and for all.
“That’s when reality intervened, though, because all those years of abusing coal, petroleum, and other substances started to catch up with Man. He was in bad shape, and didn’t know it—and then he started having problems feeding his addiction.”
On and Off the Wagon
“I forget exactly how it happened,” Technology recounted. “It was some kind of disagreement with his suppliers—he was getting a lot of his stuff from some Arab guys at that point, and he got into a fight with them over something, and they said, ‘Screw you, man, if you’re going to be like that we’re just not going to do business with you any more.’ So he tried to get the stuff from somebody else, and it turned out the guy from Pennsylvania was out of the business, and the connections he had in Texas and California couldn’t get enough. The Arab guys had a pretty fair corner on the market. So Man went into withdrawal, big time. We got him to the hospital, and the doctor took one look at him and said, ‘You gotta get into rehab, now.’ So me and some of his other friends talked him into it.”
“The records of his stays in rehab are heartbreaking,” Clio said, pulling down a tell-all biography from her shelf. “He’d start getting the drug out of his system, convince himself that he was fine, check himself out, and start using again almost immediately. Then, after a little while, he’d have problems getting a fix, end up in withdrawal, and find his way back into rehab. Meanwhile the war against Nature was going badly as the other side learned how to fight back effectively. There were rumors of ceasefire negotiations, even a peace treaty between him and Nature.”
“I went to see him in rehab one day,” said Technology. “He looked awful. He looked _old_—like his old man Everyman. He was depressed, too, talking all the time about this malaise thing. The thing is, I think if he’d stuck with it then he could have gotten off the stuff and straightened his life out. I really think he could have done it, and I tried to help. I brought him some solar panels, earth-sheltered housing, neat stuff like that, to try to get him interested in something besides the war on Nature and his petroleum habit. That seemed to cheer him up, and I think all his friends had high hopes for a while.
“Then the next thing I heard, he was out of rehab. He just couldn’t hack it any longer. I went to his place, and there he was, laughing and slapping everybody’s back and full of big ideas and bigger plans, just like before. That’s what it looked like at first, but the magic was gone. He tried to do a comeback career, but he just couldn’t get it back together, and things went downhill from there.”
The Final Years
The last years of Man’s career as representation of the human race were troubled. “The war against Nature wasn’t going well by then,” Clio explained. “Man’s forces were holding onto the most important provinces and cities, but insurgencies were springing up all over—drug-resistant microbes here, herbicide-tolerant weeds there. Morale was faltering, and a growing fraction of Man’s forces in the struggle against Nature no longer believed in what they were doing. They were in it for the money, nothing more, and the money was running out. Between the costs of the war, the costs of Man’s lavish lifestyle, and the rising burden of his substance abuse problem, Man was in deep financial trouble; there’s reason to believe that he may have been engaged in outright fraud to pay his bills during the last few years of his life.”
Meanwhile, Man was becoming increasingly isolated. “He’d turned his back on most of his friends,” said the anonymous housekeeper quoted earlier. “Art, Literature, Philosophy—he stopped talking to any of them, because they kept telling him to get off the stuff and straighten out his life. I remember the last time Science came to visit—she wanted to talk to Man about the state of the atmosphere, and Man literally threw her out of the house and slammed the door in her face. I was working downstairs in the laundry, where you usually can’t hear much, but I could hear Man screaming, ‘I own the atmosphere! I own the planet! I own the solar system! I own the goddam _stars_! They’re mine, mine, _mine_—how dare you tell me what to do with my property?’ He went on like that for a while, then collapsed right there in the entry. A couple of us went up, carried him into his bedroom, and got him cleaned up and put to bed. We had to do that pretty often, the last year or so.”
His longtime friend Technology was apparently the last person to see Man alive. “I went over to his place Monday afternoon,” Technology recalled. “I went there pretty often, and we’d do some stuff and hang out, and I’d start rapping about all kinds of crazy stuff, omniscient supercomputers, immortal robot bodies, stuff like that. I told him, ‘Look, Man, if you want to get into stuff like omniscience and immortality, go talk to Religion. That’s her bag, not mine.’ But he didn’t want to do that; he had some kind of falling out with her a while back, you know, and he wanted to hear it from me, so I talked it up. It got him to mellow out and unwind, and that’s what mattered to me.
“Monday, though, we get to talking, and it turns out that the petroleum he had was from this really dirty underground source in North Dakota. I said to him, ‘Man, what the frack were you thinking?’ He just looked at me and said, ‘I’ve gotta have the stuff, Tech. I’ve gotta have the stuff.’ Then he started blubbering, and I reached out to, like, pat his shoulder—and he just blew up at me. He started yelling about how it was my fault he was hooked on petroleum, my fault the war against Nature wasn’t going well, my fault this and that and blah blah blah. Then he got up and stormed out of the room and slammed the door behind him. I should have gone after him, I know I should have, but instead I just shook my head and left. Maybe if I’d gone and tried to talk him down, he wouldn’t have done it.”
“Everything was quiet,” the housekeeper said. “Too quiet. Usually we’d hear Man walking around, or he’d put some music on or something, but Monday night, the place might as well have been empty. Around ten o’clock, we were really starting to wonder if something was wrong, and two of us from the housekeeping staff decided that we really had to go check on Man and make sure he was all right. We found him in the bathroom, lying on the floor. It was horrible—the room stank of crude oil, and there was the needle and all his other gear scattered around him on the floor. We tried to find a pulse, but he was already cold and stiff; I went and called for an ambulance anyway, and—well, you know the rest.”
The Troubled Aftermath
Man’s death leaves a great many questions unanswered. “By the time Everyman died,” Clio explained, “everyone knew who his heir would be. Man had already taken over his father’s role as humanity’s idealized self-image. That hasn’t happened this time, as you know. Man didn’t leave a will, and his estate is a mess—it may be years before the lawyers and the accountants finish going through his affairs and figure out whether there’s going to be anything at all for potential heirs to claim. Meanwhile there are at least half a dozen contenders for the role of abstract representation of the human race, and none of them is a clear favorite. It may be a long time before all the consequences are sorted out.”
Meanwhile, one of the most important voices in the debate has already registered an opinion. Following her invariable habit, Gaia refused to grant any personal interviews, but a written statement to the media was delivered by a spokesrabbit on Tuesday evening. “Please accept My sympathy for the tragic demise of Man, the would-be conqueror of Nature,” it read. “I hope it will not be out of place, though, to suggest that whomever My human children select as their new self-image might consider being a little less self-centered—not to mention a little less self-destructive.”
by John Michael Greer
It’s been a little more than a year since I launched the present series of posts on the end of America’s global empire and the future of democracy in the wake of this nation’s imperial age. Over the next few posts I plan on wrapping that theme up and moving on. However traumatic the decline and fall of the American empire turns out to be, after all, it’s just one part of the broader trajectory that this blog seeks to explore, and other parts of that trajectory deserve discussion as well.
I’d planned to have this week’s post take last week’s discussion of voluntary associations further, and talk about some of the other roles that can be filled, in a time of economic contraction and social disarray, by groups of people using the toolkit of democratic process and traditional ways of managing group activities and assets. Still, that topic is going to have to wait another week, because one of the other dimensions of the broader trajectory just mentioned is moving rapidly toward crisis.
It’s hard to imagine that anybody in today’s America has escaped the flurry of enthusiastic media coverage of the fracking phenomenon. Still, that coverage has included so much misinformation that it’s probably a good idea to recap the basics here. Hydrofracturing—“fracking” in oil industry slang—is an old trick that has been used for decades to get oil and natural gas out of rock that isn’t porous enough for conventional methods to get at them. As oil and gas extraction techniques go, it’s fairly money-, energy- and resource-intensive, and so it didn’t see a great deal of use until fairly recently.
Then the price of oil climbed to the vicinity of $100 a barrel and stayed there. Soaring oil prices drove a tectonic shift in the US petroleum industry, making it economically feasible to drill for oil in deposits that weren’t worth the effort when prices were lower. One of those deposits was the Bakken shale, a sprawling formation of underground rock in the northern Great Plains, which was discovered back in the 1970s and had sat neglected ever since due to low oil prices. To get any significant amount of oil out of the Bakken, you have to use fracking technology, since the shale isn’t porous enough to let go of its oil any other way. Once the rising price of crude oil made the Bakken a paying proposition, drilling crews headed that way and got to work, launching a lively boom.
Another thoroughly explored rock formation further east, the Marcellus shale, attracted attention from the drilling rigs for a different reason, or rather a different pair of reasons. The Marcellus contains no oil to speak of, but some parts of it have gas that is high in natural gas liquids—“wet gas” is the industry term for this—and since those liquids can replace petroleum in some applications, they can be sold at a much higher price than natural gas. Meanwhile, companies across the natural gas industry looked at the ongoing depletion of US coal reserves, and the likelihood of government mandates favoring natural gas over coal for power generation, and decided that these added up to a rosy future for natural gas prices. Several natural gas production firms thus started snapping up leases in the Marcellus country of Pennsylvania and neighboring states, and a second boom got under way.
As drilling in the Bakken and Marcellus shales took off, several other shale deposits, some containing oil and natural gas, others just natural gas, came in for the same sort of treatment. The result was a modest temporary increase in US petroleum production, and a more substantial but equally temporary increase in US natural gas production. It could never be anything more than temporary, for reasons hardwired into the way fracking technology works.
If you’ve ever shaken a can of soda pop good and hard and then opened it, you know something about fracking that countless column inches of media cheerleading on the subject have sedulously avoided. The technique is different, to be sure, but the effect of hydrofracturing on oil and gas trapped in shale is not unlike the effect of a hard shake on the carbon dioxide dissolved in soda pop: in both cases, you get a sudden rush toward the outlet, which releases most of what you’re going to get. Oil and gas production from fracked wells thus starts out high but suffers ferocious decline rates—up to 90% in the first year alone. Where a conventional, unfracked well can produce enough oil or gas to turn a profit for decades if it’s well managed, fracked wells in tight shales like the Bakken and Marcellus quite often cease to be a significant source of oil or gas within a few years of drilling.
The obvious response to this problem is to drill more wells, and this accordingly happened. That isn’t a panacea, however. Oil and gas exploration is a highly sophisticated science, and oil and gas drilling companies can normally figure out the best sites for wells long before the drill bit hits the ground. Since they are in business to make money, they normally drill the best sites first. When that sensible habit intersects with the rapid production decline rates found in fracked wells, the result is a brutal form of economic arithmetic: as the best sites are drilled and the largest reserves drained, drilling companies have to drill more and more wells to keep the same amount of oil or gas flowing. Costs go up without increasing production, and unless prices rise, profits get hammered and companies start to go broke.
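The drilling treadmill just described is easy to see in a toy model. The sketch below (Python) assumes an arbitrary per-well starting rate and a 75% annual decline; both numbers are illustrative assumptions rather than industry data, and the point is only the shape of the result: after the initial wave of drilling, a steady stream of new wells is needed every single year merely to hold output flat.

```python
import math

# Toy model of the fracking treadmill: each well loses most of its output
# every year, so keeping total field production flat requires drilling new
# wells indefinitely. The rates below are illustrative assumptions, not
# industry figures.

INITIAL_RATE = 100.0    # barrels/day from a newly drilled well (assumed)
ANNUAL_DECLINE = 0.75   # each well keeps only 25% of last year's output (assumed)

def wells_drilled_per_year(years, target_output):
    """Number of new wells needed each year to hold total field
    output at target_output barrels/day."""
    wells = []       # current output of every active well
    drilled = []
    for _ in range(years):
        # age the existing wells by one year
        wells = [w * (1.0 - ANNUAL_DECLINE) for w in wells]
        # drill just enough new wells to cover the shortfall
        shortfall = max(0.0, target_output - sum(wells))
        new_wells = math.ceil(shortfall / INITIAL_RATE)
        wells.extend([INITIAL_RATE] * new_wells)
        drilled.append(new_wells)
    return drilled

print(wells_drilled_per_year(6, 1000.0))   # [10, 8, 8, 8, 8, 8]
```

Note that the field never gets to stop drilling: eight new wells a year, forever, just to stand still. And since real drillers exhaust their best sites first, the per-well starting rate falls over time, so the annual well count, and the annual cost, climbs even when production does not.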
They start to go broke even more quickly if the price of the resource they’re extracting goes down as the costs of maintaining production go up. In the case of natural gas, that’s exactly what happened. Each natural gas production company drew up its projections of future prices on the assumption that ordinary trends in production would continue. As company after company piled into shale gas, though, production soared and prices plummeted, and the harsh economic downturn that followed the 2008 housing market crash kept those low prices from spurring increased use of the resource; so many people were so broke that even cheap natural gas was too expensive for any unnecessary use.
Up to that point, the fracking story followed a trajectory painfully familiar to anyone who knows their way around the economics of alternative energy. From the building of the first solar steam engines before the turn of the last century, through the boom-and-bust cycle of alternative energy sources in the late 1970s, right up to the ethanol plants that were launched with so much fanfare a decade ago and sold for scrap much more quietly a few years later, the pattern’s the same, a repeated rhythm of great expectations followed by shattered dreams.
Here’s how it works. A media panic over the availability of some energy resource or other sparks frantic efforts to come up with a response that won’t require anybody to change their lifestyles or, heaven help us, conserve. Out of the flurry of available resources and technologies, one or two seize the attention of the media and, shortly thereafter, the imagination of the general public. Money pours into whatever the chosen solution happens to be, as investors convince themselves that there’s plenty of profit to be made backing a supposedly sure thing, and nobody takes the time to ask hard questions. In particular, investors tend to lose track of the fact that something can be technically feasible without being economically viable, and rosy estimates of projected cash flow and return on investment take the place of meaningful analysis.
Then come the first financial troubles, brushed aside by cheerleading “analysts” as teething troubles or the results of irrelevant factors certain to pass off in short order. The next round of bad news follows promptly, and then the one after that; the first investors begin to pull out; sooner or later, one of the hot companies that has become an icon in the new industry goes suddenly and messily bankrupt, and the rush for the exits begins. Barring government subsidies big enough to keep some shrunken form of the new industry stumbling along thereafter, that’s usually the end of the road for the former solution du jour, and decades can pass before investors are willing to put their money into the same resource or technology again.
That’s the way that the fracking story started, too. By the time it was well under way, though, a jarring new note had sounded: the most prestigious of the US mass media suddenly started parroting the most sanguine daydreams of the fracking industry. They insisted at the top of their lungs that the relatively modest increases in oil and gas production from fracked shales marked a revolutionary new era, in which the United States would inevitably regain the energy independence it last had in the 1950s, and prosperity would return for all—or at least for all who jumped aboard the new bandwagon as soon as possible. Happy days, we were told, were here again.
What made this barrage of propaganda all the more fascinating was the immense gaps that separated it from the realities on and under the ground in Pennsylvania and North Dakota. The drastic depletion rates from fracked wells rarely got a mention, and the estimates of how much oil and gas were to be found in the various shale deposits zoomed upwards with wild abandon. Nor did the frenzy stop there; blatant falsehoods were served up repeatedly by people who had every reason to know that they were false—I’m thinking here of the supposedly energy-literate pundits who insisted, repeatedly and loudly, that the Green River shale in the southwest was just like the Bakken and Marcellus shales, and would yield abundant oil and gas once it was fracked. (The Green River shale, for those who haven’t been keeping score, contains no oil or gas at all; instead, it contains kerogen, a waxy hydrocarbon goo that would have turned into oil or gas if it had stayed deep underground for a few million years longer, and kerogen can’t be extracted by fracking—or, for that matter, by any other economically viable method.)
Those who were paying attention to all the hoopla may have noticed that the vaporous claims being retailed by the mainstream media around the fracking boom resembled nothing so much as the equally insubstantial arguments most of the same media were serving up around the housing boom in the years immediately before the 2008 crash. The similarity isn’t accidental, either. The same thing happened in both cases: Wall Street got into the act.
A recent report from financial analyst Deborah Rogers, Shale and Wall Street (available online in PDF format), offers a helpful glimpse into the three-ring speculative circus that sprang up around shale oil and shale gas during the last three years or so. Those of my readers who suffer from the delusion that Wall Street might have learned something from the disastrous end of the housing bubble are in for a disappointment: the same antics, executed with the same blissful disregard for basic honesty and probity, got trotted out again, with results that will be coming down hard on what’s left of the US economy in the months immediately ahead of us.
If you remember the housing bubble, you know what happened. Leases on undrilled shale fields were bundled and flipped on the basis of grotesquely inflated claims of their income potential; newly minted investment vehicles of more than Byzantine complexity—VPPs, “volumetric production payments,” are an example you’ll be hearing about quite a bit in a few months, once the court cases begin—were pushed on poorly informed investors and promptly began to crash and burn; as the price of natural gas dropped and fracking operations became more and more unprofitable, “pump and dump” operations talked up the prospects of next to worthless properties, which could then be unloaded on chumps before the bottom fell out. It’s an old story, if a tawdry one, and all the evidence suggests that it’s likely to finish running its usual course in the months immediately ahead.
There are at least two points worth making as that happens. The first is that we can expect more of the same in the years immediately ahead. Wall Street culture—not to mention the entire suite of economic expectations that guides the behavior of governments, businesses, and most individuals in today’s America—assumes that the close-to-zero return on investment that’s become standard in the last few years is a temporary anomaly, and that a good investment ought to bring in what used to be considered a good annual return: 4%, 6%, 8%, or more. What only a few thinkers on the fringes have grasped is that such returns are only normal in a growing economy, and we no longer have a growing economy.
Sustained economic growth, of the kind that went on from the beginning of the industrial revolution around 1700 to the peak of conventional oil production around 2005, is a rare anomaly in human history. It became a dominant historical force over the last three centuries because cheap abundant energy from fossil fuels could be brought into the economy at an ever-increasing rate, and it stopped because geological limits to fossil fuel extraction put further increases in energy consumption permanently out of reach. Now that fossil fuels are neither cheap nor abundant, and the quest for new energy sources vast and concentrated enough to replace them has repeatedly drawn a blank, we face several centuries of sustained economic contraction—which means that what until recently counted as the groundrules of economics have just been turned on their head.
You will not find many people on Wall Street capable of grasping this. The burden of an outdated but emotionally compelling economic orthodoxy, to say nothing of a corporate and class culture that accords economic growth the sort of unquestioned aura of goodness other cultures assign to their gods, makes the end of growth and the coming of permanent economic decline unthinkable to the financial industry, or for that matter to the millions of people in the industrial world who rely on investments to pay their bills. There’s a strong temptation to assume that those 8% per annum returns must still be out there, and when something shows up that appears to embody that hope, plenty of people are willing to rush into it and leave the hard questions for later. Equally, of course, the gap thus opened between expectations and reality quickly becomes a happy hunting ground for scoundrels of every stripe.
Vigorous enforcement of the securities laws might be able to stop the resulting spiral into a permanent bubble-and-bust economy. For all the partisan bickering in Washington DC, though, a firm bipartisan consensus since the days of George W. Bush has placed even Wall Street’s most monumental acts of piracy above the reach of the law. The Bush and Obama administrations both went out of their way to turn a blind eye toward the housing bubble’s spectacular frauds, and there’s no reason to think Obama’s appointees in the Justice Department will get around to doing their jobs this time either. Once the imminent shale bust comes and goes, in other words, it’s a safe bet that there will be more bubbles, each one propping up the otherwise dismal prospects of the financial industry for a little while, and then delivering another body blow to the economies of America and the world as it bursts.
This isn’t merely a problem for those who have investments, or those whose jobs depend in one way or another on the services the financial industry provides when it’s not too busy committing securities fraud to get around to it. The coming of a permanent bubble-and-bust economy puts a full stop at the end of any remaining prospect for even the most tentative national transition away from our current state of dependence on fossil fuels. Pick a project, any project, from so sensible a step as rebuilding the nation’s long-neglected railroads all the way to such pie-in-the-sky vaporware as solar power satellites, and it’s going to take plenty of investment capital. If it’s to be done on any scale, furthermore, we’re talking about a period of decades in which more capital every year will have to flow into the project.
The transition to a bubble-and-bust economy makes that impossible. Bubbles last for an average of three years or so, so even if the bubble-blowers on Wall Street happen by accident on some project that might actually help, it will hardly have time to get started before the bubble turns to bust, the people who invested in the project get burned, and the whole thing tumbles down into disillusionment and bankruptcy. If past experience is anything to go by, furthermore, most of the money thus raised will be diverted from useful purposes into the absurd bonuses and salaries bankers and brokers think society owes them for their services.
Over the longer run, a repeated drumbeat of failed investments and unpunished fraud puts the entire system of investment itself at risk. The trust that leads people to invest their assets, rather than hiding them in a hole in the ground, is a commons; like any commons, it can be destroyed by abuse; and since the federal government has abandoned its statutory duty to protect that commons by enforcing laws against securities fraud, a classic tragedy of the commons is the most likely outcome, wrecking the system by which our society directs surplus wealth toward productive uses and putting any collective response to the end of the fossil fuel age permanently out of reach.
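The erosion of trust described here can be caricatured in a few lines as well. This is purely an illustrative toy model with made-up parameters (a 20% permanent exit rate per bust is my own invented figure, not anything claimed in the essay), but it shows the shape of the dynamic: if each unpunished bust drives some fraction of burned investors out of the markets for good, the pool of capital available for productive uses decays geometrically toward nothing.

```python
# Toy model, invented parameters: each bubble-and-bust cycle burns the
# investors who bought in, and a fixed fraction of them quit investing
# for good, shrinking the commons of entrusted capital.

def invested_capital_after(cycles, pool=100.0, exit_fraction=0.2):
    """Capital still entrusted to the markets after the given number of
    bust cycles, if exit_fraction of it leaves permanently each time."""
    for _ in range(cycles):
        pool *= 1 - exit_fraction
    return pool

for n in (0, 3, 10):
    print(n, round(invested_capital_after(n), 1))
# 0 cycles: 100.0 units; 3 cycles: 51.2; 10 cycles: 10.7
```

Whatever the actual exit rate, the qualitative point survives any choice of parameters: a commons abused repeatedly without consequence does not decline linearly, it compounds its way toward collapse.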
All these are crucial issues. Still, there’s a second point of more immediate importance. I don’t think anybody knows exactly how big the shale bubble has become, but it’s been one of Wall Street’s few really large profit centers over the last three years. It’s quite possible that the bubble is large enough to cause a major financial panic when it bursts, and send the United States and the world down into yet another sharp economic downturn. As Yogi Berra famously pointed out, it’s tough to make predictions, especially about the future; still, I don’t think it’s out of place to suggest that sensible preparations for hard times might be wise just now, and if any of my readers happen to have anything invested in the shale or financial industries, I’d encourage them to consider other options in the fairly near term.
by John Michael Greer
When William Butler Yeats put the phrase I’ve used as the title for this week’s post into the powerful and prescient verses of “The Second Coming,” he had deeper issues in mind than the crisis of power in a declining American empire. Still, the image is anything but irrelevant here; the political evolution of the United States over the last century has concentrated so many of the responsibilities of government in Washington DC that the entire American system is beginning to crack under the strain.
This is admittedly not the way you’ll hear the centralization of power in America discussed by those few voices in our national conversation who discuss it at all. On the one hand are the proponents of centralized power, who insist that leaving any decision at all in the hands of state or local authorities is tantamount to handing it over to their bogeyman du jour—whether that amounts to the bedsheet-bedecked Southern crackers who populate the hate speech of the left, say, or the more diverse gallery of stereotypes that plays a similar role on the right. On the other hand are those who insist that the centralization of power in America is the harbinger of a totalitarian future that will show up George Orwell as an incurable optimist.
I’ve already talked in a number of previous posts about the problems with this sort of thinking, with its flattening out of the complexities of contemporary politics into an opposition between warm fuzzy feelings and cold prickly ones. To pursue the point a little further, I’d like to offer two unpopular predictions about the future of American government. The first is that the centralization of power in Washington DC has almost certainly reached its peak, and will be reversing in the decades ahead of us. The second is that, although there will inevitably be downsides to that reversal, it will turn out by and large to be an improvement over the system we have today. These predictions unfold from a common logic; both are consequences of the inevitable failure of overcentralized power.
It’s easy to get caught up in abstractions here, and even easier to fall into circular arguments around the functions of political power that attract most of the attention these days—for example, the power to make war. I’ll be getting to the latter a bit further on in this post, but I want to start with a function of government slightly less vexed by misunderstandings. The one I have in mind is education.
In the United States, for a couple of centuries now, the provision of free public education for children has been one of the central functions of government. Until fairly recently, in most of the country, it operated in a distinctive way. Under legal frameworks established by each state, local school districts were organized by the local residents, who also voted to tax themselves to pay the costs of building and running schools. Each district was managed by a school board, elected by the local residents, and had extensive authority over the school district’s operations.
In most parts of the country, school districts weren’t subsets of city, township, or county governments, or answerable to them; they were single-purpose independent governments on a very small scale, loosely supervised by the state and much more closely watched by the local voters. On the state level, a superintendent of schools or a state board of education, elected by the state’s voters, had a modest staff to carry out the very limited duties of oversight and enforcement assigned by the state legislature. On the federal level, a bureaucracy not much larger supervised the state boards of education, and conducted the even more limited duties assigned it by Congress.
Two results of that system deserve notice. First of all, since individual school districts were allowed to set standards, choose textbooks, and manage their own affairs, there was a great deal of diversity in American education. While reading, writing, and ‘rithmetic formed the hard backbone of the school day, and such other standards as history and geography inevitably got a look in as well, what else a given school taught was as varied as local decisions could make it. What the local schools put in the curriculum was up to the school board and, ultimately, to the voters, who could always elect a reform slate to the school board if they didn’t like what was being taught.
Second, the system as a whole gave America a level of public literacy and general education that was second to none in the industrial world, and far surpassed the poor performance of the far more lavishly funded education system the United States has today. In a previous post, I encouraged readers to compare the Lincoln-Douglas debates of 1858 to the debates in our latest presidential contest, and to remember that most of the people who listened attentively to Lincoln and Douglas had what then counted as an eighth-grade education. The comparison has plenty to say about the degeneration of political thinking in modern America, but it has even more to say about the extent to which the decline in public education has left voters unprepared to get past the soundbite level of thinking.
Those of my readers who want an even more cogent example are encouraged to leaf through a high school textbook from before the Second World War. You’ll find that the reading comprehension, reasoning ability, and mathematical skill expected as a matter of course from ninth-graders in 1930 is hard to find among American college graduates today. If you have kids of high school age, spend half an hour comparing the old textbook with the one your children are using today. You might even consider taking the time to work through a few of the assignments in the old textbook yourself.
Plenty of factors have had a role in the dumbing-down process that gave us our current failed system of education, to be sure, but I’d like to suggest that the centralization of power over the nation’s educational system in a few federal bureaucracies played a crucial role. To see how this works, again, a specific example is useful. Let’s imagine a child in an elementary school in Lincoln, Nebraska, who is learning how to read. Ask yourself this: of all the people concerned with her education, which ones are able to help that individual child tackle the daunting task of figuring out how to transform squiggles of ink into words in her mind?
The list is fairly small, and her teacher and her parents belong at the top of it. Below them are a few others: a teacher’s aide if her classroom has one, an older sibling, a friend who has already managed to learn the trick. Everyone else involved is limited to helping these people do their job. Their support can make that job somewhat easier—for example, by making sure that the child has books, by seeing to it that the classroom is safe and clean, and so on—but they can’t teach reading. Each supporting role has supporting roles of its own; thus the district’s purchasing staff, who keep the school stocked with textbooks, depend on textbook publishers and distributors, and so on. Still, the further you go from the child trying to figure out that C-A-T means “cat,” the less effect any action has on her learning process.
Now let’s zoom back 1200 miles or so to Washington DC and the federal Department of Education. It’s a smallish federal bureaucracy, which means that in the last year for which I was able to find statistics, 2011, it spent around $71 billion. Like many other federal bureaucracies, its existence is illegal. I mean that quite literally; the US constitution assigns the federal government a fairly limited range of functions, and those powers “necessary and proper” to exercise them; by no stretch of the imagination can managing the nation’s public schools be squeezed into those limits. Only the Supreme Court’s embarrassingly supine response to federal power grabs during most of the twentieth century allows the department to exist at all.
So we have a technically illegal bureaucracy running through $71 billion of the taxpayers’ money in a year, which is arguably not a good start. The question I want to raise, though, is this: what can the staff of the Department of Education do that will have any positive impact on that child in the classroom in Lincoln, Nebraska? They can’t teach the child themselves; they can’t fill any of the supporting roles that make it possible for the child to be taught. They’re 1200 miles away, enacting policies that apply to every child in every classroom, irrespective of local conditions, individual needs, or any of the other factors that make teaching a child to read different from stamping out identical zinc bushings.
There are a few—a very few—things that can usefully be done for education at the national level. One of them is to make sure that the child in Lincoln is not denied equal access to education because of her gender, her skin color, or the like. Another is to provide the sort of overall supervision to state boards of education that state boards of education traditionally provided to local school boards. There are a few other things that belong on the same list. All of them can be described, to go back to a set of ideas I sketched out a couple of weeks ago, as measures to maintain the commons.
Public education is a commons. The costs are borne by the community as a whole, while the benefits go to individuals: the children who get educated, the parents who don’t have to carry all the costs of their children’s education, the employers who don’t have to carry all the costs of training employees, and so on. Like any other commons, this one is vulnerable to exploitation when it’s not managed intelligently, and like most commons in today’s America, this one has taken quite a bit of abuse lately, with the usual consequences. What makes this situation interesting, in something like the sense of the apocryphal Chinese proverb, is that the way the commons of public education is being managed has become the principal force wrecking the commons.
The problem here is precisely that of centralization. The research for which economist Elinor Ostrom won her Nobel Prize a few years back showed that, by and large, effective management of a commons is a grassroots affair; those who will be most directly affected by the way the commons is managed are also its best managers. The more distance between the managers and the commons they manage, the more likely failure becomes, because two factors essential to successful management simply aren’t there. The first of them is immediate access to information about how management policies are working, or not working, so that those policies can be adjusted immediately if they go wrong; the second is a personal stake in the outcome, so that the managers have the motivation to recognize when a mistake has been made, rather than allowing the psychology of previous investment to seduce them into pursuing a failed policy right into the ground.
Those two factors don’t function in an overcentralized system. Politicians and bureaucrats don’t get to see the consequences of their failed decisions up close, and they don’t have any motivation to admit that they were wrong and pursue new policies—quite the contrary, in fact. Consider, for example, the impact of the No Child Left Behind (NCLB) Act, pushed through Congress by bipartisan majorities and signed with much hoopla by George W. Bush in 2002. In the name of accountability—a term that in practice means “finding someone to punish”—the NCLB Act requires mandatory standardized testing at specific grade levels, and requires every year’s scores to be higher than the previous year’s, in every school in the nation. Teachers and schools that fail to accomplish this face draconian penalties.
My readers may be interested to know that next year, by law, every child in America must perform at or above grade level. It’s reminiscent of the imaginary town of Lake Wobegon—“where all the children are above average”—except that this is no joke; what’s left of America’s public education system is being shredded by the efforts of teachers and administrators to save their jobs in a collapsing economy, by teaching to the tests and gaming the system, under the pressure of increasingly unreal mandates from Washington DC. Standardized test scores have risen slightly; meaningful measures of literacy, numeracy, and other real-world skills have continued to move raggedly downward, and you can bet that the only response anybody in Washington is going to be willing to discuss is yet another round of federal mandates, most likely even more punitive and less effective than the current set.
Though I’ve used education as an example, nearly every part of American life is pervaded by the same failed logic of overcentralization. Another example? Consider the Obama administration’s giddy pursuit of national security via drone attacks. As currently operated, Predator drones are the ne plus ultra in centralized warfare; each drone attack has to be authorized by Obama himself, the drone is piloted via satellite link from a base in Nevada, and you can apparently sit in the situation room in the White House and watch the whole thing live. Hundreds of people have been blown to kingdom come by these attacks so far, in the name of a war on terror that Obama’s party used to denounce.
Now of course that habit only makes sense if you’re willing to define young children and wedding party attendees as terrorists, which seems a little extreme to me. Leaving that aside, though, there’s a question that needs to be asked: is it working? Since none of the areas under attack are any less full of anti-American insurgents than they have been, and the jihadi movement has been able to expand its war dramatically in recent weeks into Libya and Mali, the answer is pretty clearly no. However technically superlative the drones themselves are, the information that guides them comes via the notoriously static-filled channels of intelligence collection and analysis, and the decision to use them takes place in the even less certain realms of tactics and strategy; nor is it exactly bright, if you want to dissuade people from seeking out Americans and killing them, to go around vaporizing people nearly at random in parts of the world where avenging the murder of a family member is a sacred duty.
In both cases, and plenty of others like them, we have other alternatives, but all of them require the recognition that the best response to a failed policy isn’t a double helping of the same. That recognition is nowhere in our collective conversation at the moment. It would be useful if more of us were to make an effort to put it there, but there’s another factor in play. The center really cannot hold, and as it gives way, a great many of today’s political deadlocks will give way with it.
Eliot Wigginton, the teacher in rural Georgia who founded the Foxfire project and thus offered the rest of us an elegant example of what can happen when a purely local educational venture is given the freedom to flower and bear fruit, used to say that the word “learn” is properly spelled F-A-I-L. That’s a reading lesson worth taking to heart, if only because we’re going to have some world-class chances to make use of it in the years ahead. One of the few good things about really bad policies is that they’re self-limiting; sooner or later, a system that insists on embracing them is going to crash and burn, and once the rubble has stopped bouncing and the smoke clears away, it’s not too hard for the people standing around the crater to recognize that something has gone very wrong. In that period of clarity, it’s possible for a great many changes to be made, especially if there are clear alternatives available and people advocating for them.
In the great crises that ended each of America’s three previous rounds of anacyclosis—in 1776, in 1861, and in 1933—a great many possibilities that had been unattainable due to the gridlocked politics of the previous generation suddenly came within reach. In those past crises, the United States was an expanding nation, geographically, economically, and in terms of its ability to project power in the world; the crisis immediately ahead bids fair to arrive in the early stages of the ensuing contraction. That difference has important effects on the nature of the changes before us.
Centralized power is costly—in money, in energy, in every other kind of resource. Decentralized systems are much cheaper. In the days when the United States was mostly an agrarian society, and the extravagant abundance made possible by a global empire and reckless depletion of natural resources had not yet arrived, the profoundly localized educational system I sketched out earlier was popular because it was affordable. Even a poor community could count on being able to scrape together the political will and the money to establish a school district, even if that meant a one-room schoolhouse with one teacher taking twenty-odd children a day through grades one through eight. That the level of education that routinely came out of such one-room schoolhouses was measurably better than that provided by today’s multimillion-dollar school budgets is just one more irony in the fire.
On the downside of America’s trajectory, as we descend from empire toward whatever society we can manage to afford within the stringent limits of a troubled biosphere and a planet stripped of most of its nonrenewable resources, local systems of the one-room schoolhouse variety are much more likely to be an option than centralized systems of the sort we have today. That shift toward the affordably local will have many more consequences; I plan on exploring another of them next week.
by John Michael Greer
The return to an older American concept of government as the guarantor of the national commons, the theme of last week’s post here on The Archdruid Report, is to my mind one of the crucial steps that might just succeed in making a viable future for the post-imperial United States. A viable future, mind you, does not mean one in which any significant number of Americans retain any significant fraction of the material abundance we currently get from the “wealth pump” of our global empire. The delusion that we can still live like citizens of an imperial power when the empire has gone away will be enormously popular, not least among those who currently insist they want nothing to do with the imperial system that guarantees their prosperity, but it’s still a delusion.
The end of American empire, it deserves repeating, means the end of a system in which the five per cent of humanity that live in the United States get to dispose of a quarter of the planet’s energy and a third of its raw materials and industrial product. Even if the fossil fuels that undergird the industrial product weren’t depleting out of existence—and of course they are—the rebalancing of global wealth driven by the decline of one empire and the rise of another will involve massive and often traumatic impacts, especially for those who have been living high on the hog under the current system and will have to get used to a much smaller portion of the world’s wealth in the years immediately ahead. Yes, dear reader, if you live in the United States or its inner circle of allies—Canada, Britain, Australia, Japan, and a few others—this means you.
I want to stress this point, because habits of thought already discussed in this sequence of posts make it remarkably difficult for most Americans to think about a future that isn’t either all warm fuzzy or all cold prickly. If an imagined future is supposed to be better than the one we’ve got, according to these habits of thought, it has to be better in every imaginable way, and if it’s worse, it has to be worse just as uniformly. Suggest that the United States might go hurtling down the far side of its imperial trajectory and come out of the process as a Third World nation, as I’ve done here, and you can count on blank incomprehension or self-righteous anger if you go on to suggest that the nation that comes out the other side of this process might still be able to provide a range of basic social goods to its citizens, and might even recover some of the values it lost a century ago in the course of its headlong rush to empire.
Now in fact I’m going to suggest this, and indeed I’ve already sketched out some of the steps that individual Americans might choose to take to lay the foundations for that project. Still, it’s also worth noting that the same illogic shapes the other end of the spectrum of possible futures. These days, if you pick up a book offering a vision of a better future or a strategy to get there, it’s usually a safe bet that you can read the thing from cover to cover and find no reference whatsoever to any downsides, drawbacks, or tradeoffs that might be involved in pursuing the vision or enacting the strategy. Since every action in the real world has downsides, drawbacks, and tradeoffs, this is not exactly a minor omission, nor does the blithe insistence on ignoring such little details offer any reason to feel confident that the visions and strategies will actually work as advertised.
One example in particular comes to mind here, because it has immediate relevance to the project of this series of posts. Those of my readers who have been following the peak oil scene for any length of time will have encountered any number of enthusiastic discussions of relocalization: the process, that is, of disconnecting from the vast and extravagant global networks of production, consumption, and control that define so much of industrial society, in order to restore or reinvent local systems that will be more resilient in the face of energy shortages and other disruptions, and provide more security and more autonomy to those who embrace them.
A very good case can be made for this strategy. On the one hand, the extreme centralization of the global economy has become a source of massive vulnerabilities straight across the spectrum from the most abstract realms of high finance right down to the sprawling corporate structures that put food on your table. Shortfalls of every kind, from grain and fuel to financial capital, are becoming a daily reality for many people around the world as soaring energy costs put a galaxy of direct and indirect pressures on brittle and overextended systems. That’s only going to become worse as petroleum reserves and other vital resources continue to deplete. As this process continues, ways of getting access to necessities that are deliberately disconnected from the global economic system, and thus less subject to its vulnerabilities, are going to be well worth having in place.
At the same time, participation in the global economy brings with it vulnerabilities of another kind. For anyone who has to depend for their daily survival on the functioning of a vast industrial structure which is not answerable to the average citizen, talk about personal autonomy is little more than a bad joke, and the ability of communities to make their own choices and seek their own futures in such a context is simply another form of wishful thinking. Many people involved in efforts to relocalize have grasped this, and believe that deliberately standing aside from systems controlled by national governments and multinational corporations offers one of the few options for regaining personal and community autonomy in the face of an increasingly troubled future.
There are more points that can be made in favor of relocalization schemes, and you can find them rehashed endlessly on pro-relocalization websites all over the internet. For our present purposes, though, this fast tour of the upside will do, because each of these arguments comes with its own downside, which by and large you won’t find mentioned anywhere on those same websites.
The downside to the first argument? When you step out of the global economy, you cut yourself off from the imperial wealth pump that provides people in America with the kind of abundance they take for granted, and the lifestyles that are available in the absence of that wealth pump are far more restricted, and far more impoverished, than most would-be relocalizers like to think. Peasant cultures around the world are by and large cultures of poverty, and there’s a good reason for that: by the time you, your family, and the other people of your village have provided food on the table, thatch on the roof, a few necessary possessions, and enough of the local equivalent of cash to cover payments to the powers that be, whether those happen to be feudal magnates or the local property tax collector, you’ve just accounted for every minute of labor you can squeeze out of a day.
That’s the rock on which the back-to-the-land movement of the Sixties broke; the life of a full-time peasant farmer scratching a living out of the soil is viable, and it may even be rewarding, but it’s not the kind of life that the pampered youth of the Baby Boom era was willing to put up with for more than a fairly brief interval. It may well be that economic relocalization is still the best available option for dealing with the ongoing unraveling of the industrial economy—in fact, I’d agree that this is the case—but I wonder how many of its proponents have grappled with the fact that what they’re proposing may amount to no more than a way to starve with dignity while many others are starving without it.
The downside to the second argument is subtler, but in some ways even more revealing. The best way to grasp it is to imagine two relocalization projects, one in Massachusetts and the other in South Carolina. The people in both groups are enthusiastic about the prospect of regaining their personal autonomy from the faceless institutions of a centralized society, and just as eager to bring back home to their own communities the power to make choices and pursue a better future. Now ask yourself this: what will these two groups do if they get that power? And what will the people in Massachusetts think about what the people in South Carolina will do once they get that power?
I’ve conducted a modest experiment of sorts along these lines, by reminding relocalization fans in blue states what people in red states are likely to do with the renewed local autonomy the people in the blue states want for themselves, and vice versa. Every so often, to be sure, I run across someone—more often on the red side of the line than on the blue one—whose response amounts to “let ‘em do what they want, so long as they let us do what we want.” Far more often, though, people on either side are horrified to realize that their opposite numbers on the other side of America’s widening cultural divide would use relocalization to enact their own ideals in their own communities.
More than once, in fact, the response has amounted to a flurry of proposals to hedge relocalization about with restrictions so that it can only be used to support the speaker’s own political and social agendas, with federal bureaucracies hovering over every relocalizing community, ready to pounce on any sign that a community might try to do something that would offend sensibilities in Boston or San Francisco, on the one hand, or the Bible Belt on the other. You might think, dear reader, that it would be obvious that this would be relocalization in name only; you might also think that it would be just as obvious that those same bureaucracies would fall promptly into the hands of the same economic and political interests that have made the current system as much of a mess as it is. Permit me to assure you that in my experience, among a certain segment of the people who like to talk about relocalization, these things are apparently not obvious at all.
By this point in the discussion, I suspect most of my readers have come to believe that I’m opposed to relocalization schemes. Quite the contrary, I think they’re among the best options we have, and the fact that they have significant downsides, drawbacks, and tradeoffs does not nullify that. Every possible strategy, again, has downsides, drawbacks, and tradeoffs; whatever we choose to do to face the onset of the Long Descent, as individuals, as communities, or as a nation, problems are going to ensue and people are going to get hurt. Trying to find an option that has no downsides simply guarantees that we will do nothing at all; and in that case, equally, problems are going to ensue and people are going to get hurt. That’s how things work in the real world—and it may be worth reminding my readers that we don’t live in Neverland.
Thus I’d like to suggest that a movement toward relocalization is another crucial ingredient of a viable post-imperial America. In point of fact, we’ve got the structures in place to do the thing already; the only thing that’s lacking is a willingness to push back, hard, against certain dubious habits in the US political system that have rendered those structures inoperative.
Back in 1787, when the US constitution was written, the cultural differences between Massachusetts and South Carolina were very nearly as sweeping as they are today. That’s one of the reasons why the constitution as written left most internal matters in the hands of the individual states, and assigned to the federal government only those functions that concerned the national commons as a whole: war, foreign policy, minting money, interstate trade, postal services, and a few other things. The list was expanded in a modest way before the rush to empire, so that public health and civil rights, for example, were brought under federal supervision over the course of the 19th century. Under the theory of government I described last week, these were reasonable extensions, since they permitted the federal government to exercise its function of securing the national commons.
Everything else remained in the hands of the states and the people. In fact, the tenth amendment to the US constitution specifically requires that any power not granted to the federal government in so many words be left to the states and the people—a principle which, perhaps not surprisingly, has been roundly ignored by everyone in Washington DC for most of a century now. Under the constitution and its first nineteen amendments, in fact, the states were very nearly separate countries that happened to have an army, navy, foreign policy, and postal system in common.
Did that system have problems? You bet. What rights you had and what benefits you could expect as a citizen depended to a huge extent on where you lived—not just which state, but very often which county and which township or city as well. Whole classes of citizens might be deprived of their rights or the protection of the laws by local politicians or the majorities that backed them, and abuses of power were pervasive. All of that sounds pretty dreadful, until you remember that the centralization of power that came with America’s pursuit of empire didn’t abolish any of those things; it simply moved them to a national level. Nowadays, serving the interests of the rich and influential at the expense of the public good is the job of the federal government, rather than the local sheriff, and the denial of civil rights and due process that used to be restricted to specific ethnic and economic subgroups within American society now gets applied much more broadly.
Furthermore, one of the things that’s rendered the US government all but incapable of taking any positive action at all in the face of a widening spiral of crises is precisely the insistence, by people in Massachusetts, South Carolina, and the other forty-eight states as well, that their local views and values ought to be the basis of national policy. The rhetoric that results, in tones variously angry and plaintive, amounts to “Why can’t everyone else be reasonable and do it my way?”—which is not a good basis for the spirit of compromise necessary to the functioning of democracy, though it makes life easy for advocacy groups who want to shake down the citizenry for another round of donations to pay for the never-ending fight.
One of the few things that might succeed in unsticking the gridlock, so that the federal government could get back to doing the job it’s supposed to do, would be to let the people in Massachusetts, South Carolina, and the other forty-eight states pursue the social policies they prefer on a state by state basis. Yes, that would mean that people in South Carolina would do things that outraged the people in Massachusetts, and people in Massachusetts would return the favor. Yes, it would also mean that abuses and injustices would take place. Of course abuses and injustices take place now, in both states and all the others as well, but the ones that would take place in the wake of a transfer of power over social issues back to the states would no doubt be at least a little different from the current ones.
Again, the point of relocalization schemes is not that they will solve every problem. They won’t, and in fact they will certainly cause new problems we don’t have yet. The point of relocalization schemes is that, all things considered, if they’re pursued intelligently, the problems that they will probably solve are arguably at least a little worse than the problems that they will probably cause. Does that sound like faint praise? It’s not; it’s as much as can be expected for any policy this side of Neverland, in the real world, where every solution brings new problems of its own.
Now in fact relocalization has at least two other benefits that tip the balance well into positive territory. One of them is an effect I haven’t discussed in this series of posts, and I haven’t seen covered anywhere else in the peak oil blogosphere yet; it will need a post of its own, and that will have to wait a week. The other, though, is a simple matter of resilience.
The more territory has to be governed from a single political center, all things considered, the more energy and resources will be absorbed in the process of governing. This is why, before the coming of the industrial age, nations on the scale of the present United States of America rarely existed, and when they did come into being, they generally didn’t last for more than a short time. In an age of declining energy availability and depleting resources, the maintenance costs of today’s sprawling, centralized United States government won’t be affordable for long. Devolving all nonessential functions of the central government to the individual states, as the US constitution mandates, might just cut costs to the point that some semblance of civil peace and democratic governance can hang on for the long term. That probably doesn’t seem like much to those whose eyes are fixed on fantasies of a perfect world, and are convinced they can transform it from fantasy to reality as soon as everyone else stops being unreasonable and agrees with them. Still, it’s better than most potential outcomes available to us in the real world—and again, we don’t live in Neverland.
The hard work of rebuilding a post-imperial America, as I suggested in last week’s post, is going to require the recovery or reinvention of many of the things this nation chucked into the dumpster with whoops of glee as it took off running in pursuit of its imperial ambitions. The basic skills of democratic process are among the things on that list; so, as I suggested last month, are the even more basic skills of learning and thinking that undergird the practice of democracy.
All that remains crucial. Still, it so happens that a remarkably large number of the other things that will need to be put back in place are all variations of a common theme. What’s more, it’s a straightforward theme—or, more precisely, would be straightforward if so many people these days weren’t busy trying to pretend that the concept at its center either doesn’t exist or doesn’t present the specific challenges that have made it so problematic in recent years. The concept in question? The mode of collective participation in the use of resources, extending from the most material to the most abstract, that goes most often these days by the name of “the commons.”
The redoubtable green philosopher Garrett Hardin played a central role decades ago in drawing attention to the phenomenon in question with his essay “The Tragedy of the Commons.” It’s a remarkable work, and it’s been rendered even more remarkable by the range of contortions engaged in by thinkers across the economic and political spectrum in their efforts to evade its conclusions. Those maneuvers have been tolerably successful; I suspect, for example, that many of my readers will recall the flurry of claims a few years back that the late Nobel Prize-winning economist Elinor Ostrom had “disproved” Hardin with her work on the sustainable management of resources.
In point of fact, she did no such thing. Hardin demonstrated in his essay that an unmanaged commons faces the risk of a vicious spiral of mismanagement that ends in the destruction of the commons; Ostrom got her Nobel, and deservedly so, by detailed and incisive analysis of the kinds of management that prevent Hardin’s tragedy of the commons from taking place. A little later in this essay, we’ll get to why those kinds of management are exactly what nobody in the mainstream of American public life wants to talk about just now; the first task at hand is to walk through the logic of Hardin’s essay and understand exactly what he was saying and why it matters.
Hardin asks us to imagine a common pasture, of the sort found in medieval villages across Europe. The pasture is owned by the village as a whole; each of the villagers has the right to put his cattle out to graze on the pasture. The village as a whole, however, has no claim on the milk the cows produce; that belongs to the villager who owns any given cow. The pasture is a collective resource, from which individuals are allowed to extract private profit; that’s the basic definition of a commons.
In the Middle Ages, such arrangements were common across Europe, and they worked well because they were managed by tradition, custom, and the immense pressure wielded by informal consensus in small and tightly knit communities, backed up where necessary by local manorial courts and a body of customary law that gave short shrift to the pursuit of personal advantage at the expense of others. The commons that Hardin asks us to envision, though, has no such protections in place. Imagine, he says, that one villager buys additional cows and puts them out to graze on the common pasture. Any given pasture can only support so many cows before it suffers damage; to use the jargon of the ecologist, it has a fixed carrying capacity for milk cows, and exceeding the carrying capacity will degrade the resource and lower its future carrying capacity. Assume that the new cows raise the total number of cows past what the pasture can support indefinitely, so once the new cows go onto the pasture, the pasture starts to degrade.
Notice how the benefits and costs sort themselves out. The villager with the additional cows receives all the benefit of the additional milk his new cows provide, and he receives it right away. The costs of his action, by contrast, are shared with everyone else in the village, and their impact is delayed, since it takes time for pasture to degrade. Thus, according to today’s conventional economic theories, the villager is doing the right thing. Since the milk he gets is worth more right now than the fraction of the discounted future cost of the degradation of the pasture he will eventually have to carry, he is pursuing his own economic interest in a rational manner.
The other villagers, faced with this situation, have a choice of their own to make. (We’ll assume, again, that they don’t have the option of forcing the villager with the new cows to get rid of them and return the total herd on the pasture to a level it can support indefinitely.) They can do nothing, in which case they bear the costs of the degradation of the pasture but gain nothing in return, or they can buy more cows of their own, in which case they also get more milk, but the pasture degrades even faster. According to most of today’s economic theories, the latter choice is the right one, since it allows them to maximize their own economic interest in exactly the same way as the first villager. The result of the process, though, is that a pasture that would have kept a certain number of cattle fed indefinitely is turned into a barren area of compacted subsoil that won’t support any cattle at all. The rational pursuit of individual advantage thus results in permanent impoverishment for everybody.
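The asymmetry driving that spiral can be sketched in a few lines of Python. Every number below is an illustrative assumption of mine, not anything drawn from Hardin’s essay; the point is only to show how the arithmetic rewards each villager for overgrazing even as it impoverishes the village as a whole.

```python
# Toy model of Hardin's pasture. All constants are assumed for illustration.
CAPACITY = 100          # cows the pasture can support indefinitely (assumed)
MILK_VALUE = 10         # value of one cow's milk to its owner, per season (assumed)
DEGRADATION_COST = 30   # future cost imposed by each cow over capacity (assumed)
VILLAGERS = 20          # number of villagers sharing the pasture (assumed)

def payoff_of_extra_cow(cows_already_grazing: int) -> float:
    """Net payoff to ONE villager for putting one more cow on the pasture."""
    gain = MILK_VALUE  # the owner keeps all the milk, right away
    if cows_already_grazing >= CAPACITY:
        # Over capacity, the pasture degrades, but that cost is split
        # among everyone in the village -- the owner bears only a fraction.
        shared_cost = DEGRADATION_COST / VILLAGERS
    else:
        shared_cost = 0.0
    return gain - shared_cost

# Even once the pasture is at capacity, the individual's payoff stays positive,
# so adding another cow remains the "rational" choice...
print(payoff_of_extra_cow(100))           # 10 - 30/20 = 8.5

# ...while the village as a whole loses on every cow over capacity:
print(MILK_VALUE - DEGRADATION_COST)      # 10 - 30 = -20
```

Because the gain is private and immediate while the cost is shared and delayed, every villager faces the same positive payoff, and the pasture degrades no matter who moves first.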
This may seem like common sense. It is common sense, but when Hardin first published “The Tragedy of the Commons” in 1968, it went off like a bomb in the halls of academic economics. Since Adam Smith’s time, one of the most passionately held beliefs of capitalist economics has been the insistence that individuals pursuing their own economic interest without interference from government or anyone else will reliably produce the best outcome for everybody. You’ll still hear defenders of free market economics making that claim, as if nobody but the Communists ever brought it into question. That’s why very few people like to talk about Hardin’s tragedy of the commons these days; it makes it all but impossible to uphold a certain bit of popular, appealing, but dangerous nonsense.
Does this mean that the rational pursuit of individual advantage always produces negative results for everyone? Not at all. The theorists of capitalism can point to equally cogent examples in which Adam Smith’s invisible hand passes out benefits to everyone, and a case could probably be made that this happens more often than the opposite. The fact remains that the opposite does happen, not merely in theory but also in the real world, and that the consequences of the tragedy of the commons can reach far beyond the limits of a single village.
Hardin himself pointed to the destruction of the world’s oceanic fisheries by overharvesting as an example, and it’s a good one. If current trends continue, many of my readers can look forward, over the next couple of decades, to tasting the last seafood they will ever eat. A food resource that could have been managed sustainably for millennia to come is being annihilated in our lifetimes, and the logic behind it is that of the tragedy of the commons: participants in the world’s fishing industries, from giant corporations to individual boat owners and their crews, are pursuing their own economic interests, and exterminating one fishery after another in the process.
Another example? The worldwide habit of treating the atmosphere as an aerial sewer into which wastes can be dumped with impunity. Every one of my readers who burns any fossil fuel, for any purpose, benefits directly from being able to vent the waste CO2 directly into the atmosphere, rather than having to cover the costs of disposing of it in some other way. As a result of this rational pursuit of personal economic interest, there’s a very real chance that most of the world’s coastal cities will have to be abandoned to the rising oceans over the next century or so, imposing trillions of dollars of costs on the global economy.
Plenty of other examples of the same kind could be cited. At this point, though, I’d like to shift focus a bit to a different class of phenomena, and point to the Glass-Steagall Act, a piece of federal legislation that was passed by the US Congress in 1933 and repealed in 1999. The Glass-Steagall Act made it illegal for banks to engage in both consumer banking activities such as taking deposits and making loans, and investment banking activities such as issuing securities; banks had to choose one or the other. The firewall between consumer banking and investment banking was put in place because in its absence, in the years leading up to the 1929 crash, most of the banks in the country had gotten over their heads in dubious financial deals linked to stocks and securities, and the collapse of those schemes played a massive role in bringing the national economy to the brink of total collapse.
By the 1990s, such safeguards seemed unbearably dowdy to a new generation of bankers, and after a great deal of lobbying the provisions of the Glass-Steagall Act were eliminated. Those of my readers who didn’t spend the last decade hiding under a rock know exactly what happened thereafter: banks went right back to the bad habits that got their predecessors into trouble in 1929, profited mightily in the short term, and proceeded to inflict major damage on the global economy when the inevitable crash came in 2008.
That is to say, actions performed by individuals (and those dubious “legal persons” called corporations) in the pursuit of their own private economic advantage garnered profits over the short term for those who engaged in them, but imposed long-term costs on everybody. If this sounds familiar, dear reader, it should. When individuals or corporations profit from their involvement in an activity that imposes costs on society as a whole, that activity functions as a commons, and if that commons is unmanaged the tragedy of the commons is a likely result. The American banking industry before 1933 and after 1999 functioned, and currently functions, as an unmanaged commons; between those years, it was a managed commons. While it was an unmanaged commons, it suffered from exactly the outcome Hardin’s theory predicts; when it was a managed commons, by contrast, a major cause of banking failure was kept at bay, and the banking sector was more often a source of strength than a source of weakness to the national economy.
It’s not hard to name other examples of what I suppose we could call “commons-like phenomena”—that is, activities in which the pursuit of private profit can impose serious costs on society as a whole—in contemporary America. One that bears watching these days is food safety. It is to the immediate financial advantage of businesses in the various industries that produce food for human consumption to cut costs as far as possible, even if this occasionally results in unsafe products that cause sickness and death to people who consume them; the benefits in increased profits are immediate and belong entirely to the business, while the costs of increased morbidity and mortality are borne by society as a whole, provided that the firm’s legal team is good enough to keep the inevitable lawsuits at bay. Once again, the asymmetry between benefits and costs produces a calculus that brings unwelcome outcomes.
The American political system, in its pre-imperial and early imperial stages, evolved a distinctive response to these challenges. The Declaration of Independence, the wellspring of American political thought, defines the purpose of government as securing the rights to life, liberty, and the pursuit of happiness. There’s more to that often-quoted phrase than meets the eye. In particular, it doesn’t mean that governments are supposed to provide anybody with life, liberty, or happiness; their job is simply to secure for their citizens certain basic rights, which may be inalienable—that is, they can’t be legally transferred to somebody else, as they could under feudal law—but are far from absolute. What citizens do with those rights is their own business, at least in theory, so long as their exercise of their rights does not interfere too drastically with the ability of others to do the same thing. The assumption, then and later, was that citizens would use their rights to seek their own advantage, by means as rational or irrational as they chose, while the national community as a whole would cover the costs of securing those rights against anyone and anything that attempted to erase them.
That is to say, the core purpose of government in the American tradition is the maintenance of the national commons. It exists to manage the various commons and commons-like phenomena that are inseparable from life in a civilized society, and thus has the power to impose such limits on people (and corporate pseudopeople) as will prevent their pursuit of personal advantage from leading to a tragedy of the commons in one way or another. Restricting the capacity of banks to gamble with depositors’ money is one such limit; restricting the freedom of manufacturers to sell unsafe food is another, and so on down the list of reasonable regulations. Beyond those necessary limits, government has no call to intervene; how people choose to live their lives, exercise their liberties, and pursue happiness is up to them, so long as it doesn’t put the survival of any part of the national commons at risk.
As far as I know, you won’t find that definition taught in any of the tiny handful of high schools that still offer civics classes to young Americans about to reach voting age. Still, it’s a neat summary of generations of political thought in pre-imperial and early imperial America. These days, by contrast, it’s rare to find this function of government even hinted at. Rather, the function of government in late imperial America is generally seen as a matter of handing out largesse of various kinds to any group organized or influential enough to elbow its way to a place at the feeding trough. Even those people who insist they are against all government entitlement programs can be counted on to scream like banshees if anything threatens those programs from which they themselves benefit; the famous placard reading “Government Hands Off My Medicare” is an embarrassingly good reflection of the attitude that most American pseudoconservatives adopt in practice, however loudly they decry government spending in theory.
A strong case can be made, though, for jettisoning the notion of government as national sugar daddy and returning to the older notion of government as guarantor of the national commons. The central argument in that case is simply that in the wake of empire, the torrents of imperial tribute that made the government largesse of the recent past possible in the first place will go away. As the United States loses the ability to command a quarter of the world’s energy supplies and a third of its natural resources and industrial product, and has to make do with the much smaller share it can expect to produce within its own borders, the feeding trough in Washington DC—not to mention its junior equivalents in the fifty state capitals, and so on down the pyramid of American government—is going to run short.
In point of fact, it’s already running short. That’s the usually unmentioned factor behind the intractable gridlock in our national politics: there isn’t enough largesse left to give every one of the pressure groups and veto blocs its accustomed share, and the pressure groups and veto blocs are responding to this unavoidable problem by jamming up the machinery of government with ever more frantic efforts to get whatever they can. That situation can only end in crisis, and probably in a crisis big enough to shatter the existing order of things in Washington DC; after the rubble stops bouncing, the next order of business will be piecing together some less gaudily corrupt way of managing the nation’s affairs.
That process of reconstruction might be furthered substantially if the pre-imperial concept of the role of government were to get a little more air time these days. I’ve spoken at quite some length here and elsewhere about the very limited contribution that grand plans and long discussions can make to an energy future that’s less grim than the one toward which we’re hurtling at the moment, and there’s a fair bit of irony in the fact that I’m about to suggest exactly the opposite conclusion with regard to the political sphere. Still, the circumstances aren’t the same. The time for talking about our energy future was decades ago, when we still had the time and the resources to get new and more sustainable energy and transportation systems in place before conventional petroleum production peaked and sent us skidding down the far side of Hubbert’s peak. That time is long past, the options remaining to us are very narrow, and another round of conversation won’t do anything worthwhile to change the course of events at this point.
That’s much less true of the political situation, because politics are subject to rules very different from the implacable mathematics of petroleum depletion and net energy. At some point in the not too distant future, the political system of the United States of America is going to tip over into explosive crisis, and at that time ideas that are simply talking points today have at least a shot at being enacted into public policy. That’s exactly what happened at the beginning of the three previous cycles of anacyclosis I traced out in a previous post in this series. In 1776, 1860, and 1933, ideas that had been on the political fringes not that many years beforehand redefined the entire political dialogue, and in all three cases this was possible because those once-fringe ideas had been widely circulated and widely discussed, even though most of the people who circulated and discussed them never imagined that they would live to see those ideas put into practice. There are plenty of ideas about politics and society in circulation on the fringes of today’s American dialogue, to be sure. I’d like to suggest, though, that there’s a point to reviving an older, pre-imperial vision of what government can do, and ought to do, in the America of the future. A political system that envisions its role as holding an open space in which citizens can pursue their own dreams and experiment with their own lives is inherently likely to be better at dissensus than more regimented alternatives, whether those come from the left or the right—and dissensus, to return to a central theme of this blog, is the best strategy we’ve got as we move into a future where nobody can be sure of having the right answers.