
The Origins of Infrastructure

One of the great mysteries of human history is how we made the transition from an isolated, emergent species to today’s globally dominant civilization. Scientists tell us that the story began as early as 7 million years ago in Eastern Africa. Fossils found in the Awash Valley give evidence of our early precursors. Archaeological findings suggest that some of these precursors began to fabricate and use rudimentary stone tools between 6 million and 2 million years ago. Learning to control fire followed about 1 million years ago. By 70,000 years ago hominins had migrated out of Africa and begun to apply more complex technology, evidenced in hafted spears, in which a sharpened stone point was attached to a wooden shaft.

The fossil record indicates that our own species, Homo sapiens, evolved during this progression and became the sole survivor among several hominin species. The evolution included a remarkable growth in brain size as well as the emergence of social behavior and technological prowess. Some scientists hypothesize an interaction between physical capability and intellectual accomplishment to explain this evolution.

British archaeologist Steven Mithen, for example, surmises that early uses of technology (such as the hafting of spears) encouraged the development of “cognitive fluidity,” an ability to abstract and combine aspects of experience from different domains, such as finding shelter or observing game. The large brain of Homo sapiens was an essential adaptation that enabled this cognitive fluidity to develop, but it does not by itself explain how the development came about. Adopting and using a cultural innovation provides the stimulus for users to extract more from their brains than they otherwise might.

Drawing on observations of ants and other animals that exhibit eusocial behavior and altruism—in which some individuals in a colony or nest limit their own reproductive potential by raising the offspring of other nest-mates or defending the group against competitors and predators—noted Harvard biologist Edward Wilson suggests that certain “preadaptations” favor the behaviors’ evolutionary development. Among the most important of these preadaptations, Wilson conjectures, is a species’ propensity for living in defensible nests.  When early humans, tribal by nature, learned to use fire and establish campsites sufficiently persistent to be guarded as a refuge, they had taken a crucial step toward modern social organization.

Wilson and his colleagues Martin Nowak and Corina Tarnita assert that the advantage of a defensible nest located within reach of reliable food sources, particularly one requiring greater energy in its construction, is a crucial causative agent in the evolutionary development of eusociality, a trait that loosely applies to humans as well as ants. A next step in humans’ social evolution beyond the adoption of movable campsites would logically seem to be long-term commitment to a fixed location. The earliest evidence of such commitment arguably is found on the walls of Chauvet Cave in southern France. Images painted on the cave walls there and elsewhere (for example, the El Castillo cave in Cantabria, Spain, and others in Romania and Australia) are estimated by various archaeologists and methods to be 28,000 to 40,000 years old.

We have no convincing evidence of the creators’ motivations for any of the cave paintings, but their permanence and often difficult-to-access locations suggest these were not simply decorations of living space, but rather demonstrations of a particular significance of place, perhaps an effort to preserve human memory as recorded history. I propose that in this sense these ancient markings are humanity’s earliest known infrastructure.

University of Cambridge archaeologist Graeme Barker has presented evidence suggesting that the domestication of various plants and animals evolved in separate locations worldwide, starting around 12,000 to 14,000 years ago. For many researchers, this domestication is synonymous with “agriculture,” a technological innovation and foundation of modern civilization. An alternative model, advanced by David Rindos in the 1980s, proposed that domestication of locally available plants, a co-evolutionary interaction of humans and their food sources, led to intentional agriculture and the consequent selection of preferred species and strains.

This domestication of plants has been characterized as the beginning of the Neolithic or Agricultural Revolution. Evidence, particularly from the Fertile Crescent region of the Middle East, indicates that cultivation was accompanied by construction of settlements, drainage ditches, and landforms to control irrigation. Archaeological studies by Harvard archaeologist Ofer Bar-Yosef and others are currently thought to indicate that the Natufian culture of the region offers the world’s oldest example of sedentary settlements and agriculture, notable particularly because the settlements may have preceded the commencement of crop cultivation.

Whether the development of agriculture preceded or followed the birth of cities has long been debated. Mithen, for example, reflecting recently on the progress of human civilization, expressed the widely held view that agriculture came first and that, once farming had originated, towns and cities were an almost inevitable consequence. On the other hand, Jane Jacobs, an economist and unabashed urbanist, famously argued in the 1970s that labor specialization and trade first gave rise to cities, and that feeding their populations necessitated the development of agriculture. (Archaeologists notably disagree. See Smith, Michael E., Jason Ur, and Gary M. Feinman. 2014. “Jane Jacobs’s ‘Cities-First’ Model and Archaeological Reality.” International Journal of Urban and Regional Research 38 (4): 1525-1535.)

In either case, however, it would seem that infrastructure came first. The investment of effort in clearing fields; moving earth to adjust water flow; building fences, protective walls, and substantial shelters; maintaining paths for transportation; and the like would have contributed substantially to agricultural productivity, settlement economy, and social functioning of the residents.

Performance-Based Infrastructure Management: From Theory to Practice

Near the end of August 1971, my advisor signed the paper certifying that my dissertation on Analysis of Systems of Constructed Facilities was accepted, fulfilling the last remaining requirement of my Ph.D. studies in M.I.T.’s Department of Civil and Environmental Engineering. My thesis had been that decision makers—that is to say, the designers and managers responsible for building, operating, and maintaining highways, dams, houses, and other types of constructed facilities—should have as their goal to provide the facilities’ users with systems that exhibit satisfactory performance throughout a defined service life and in a relatively efficient manner. The novelty lay in bringing together, in an explicit and operational way, four ideas that were at the time coming into focus in our society and in the literatures of engineering, architecture, economics, and political science: First, the concept of a facility’s “performance” has many dimensions. Second, what performance is “satisfactory” depends on users’ values and choices; in a pluralistic society, there will always be debate. Third, the long service lives of constructed facilities, measured in decades or centuries, mandate explicit consideration of uncertainties and risks that performance may become unsatisfactory; something may have to be done in the future to correct the situation. Finally, the resources used to deliver performance cannot adequately be measured on any one scale of value; efficiency can be judged only in relative terms, by comparing available options.

My approach to enabling decision makers to accommodate these ideas drew on principles from economics, psychology, and mathematics to represent performance in terms of three primary measures: serviceability, the degree to which the facility satisfactorily provides the services that users want; reliability, the probability that service will remain satisfactory throughout the facility’s service life; and maintainability, an indication of the effort that may be required for maintenance and repair to ensure satisfactory service.  Serviceability, reliability, and maintainability are not independent, and each may be increased—in principle—by using more resources.  The decision maker’s problem consists, I asserted, in devising and choosing among available options a design or management strategy that offered the best mix, the optimum performance.

Enactment of the National Environmental Policy Act (NEPA) in 1969 was a tangible demonstration of the emergence of a new way of thinking about constructed facilities, or to use more recently popular terminology, our civil infrastructure or built environment. The law’s timing was fortuitous. As a young professional with a newly minted degree in hand, I became engaged in a thriving consultancy practice, helping government agencies learn how to make their decisions about our infrastructure in a more open, public forum, taking more directly into account the values that a broadly based user community may place on such resources as parklands, historic associations, wildlife, and clean air.

The one resource that everyone recognized, of course, was money, and infrastructure decision makers soon realized that they needed more of it than in the past to deliver this enhanced concept of satisfactory performance. Limited budgets and competing demands for public-sector spending—notably, in the early 1970s, on growing military and health-care programs—meant that tradeoffs had to be made. Maintenance might be neglected or planned repairs deferred. Of course, one can argue that this was just the crest of a wave that had been swelling for decades, but by the end of the ’70s, some people were growing alarmed at what they saw as an impending infrastructure crisis. When America in Ruins: Beyond the Public Works Pork Barrel (Pat Choate and Susan Walter, Council of State Planning Agencies, Washington, 1981) was published, it made headlines in the nation’s leading newspapers, a rare feat for any discussion of constructed facilities (later reprints changed the subtitle to The Decaying Infrastructure). The book argued that the United States had long been investing too little in its infrastructure, and in the wrong places, and that the nation’s economy was now at risk.

There followed a decade of federal government studies and intense debate among economists about just how important infrastructure is as a foundation supporting the economy and just how fragile that foundation might have become. The debate formed a backdrop for renewed consideration of performance as a useful facilities-management concept, and by the early 1990s I found myself at the National Academy of Sciences, working with a committee of diverse professionals tasked with recommending how best to measure and improve infrastructure performance. We visited several cities, meeting with municipal and state officials and private-sector professionals responsible for building and operating a wide range of infrastructure facilities. The committee’s report, Measuring and Improving Infrastructure Performance, was published in 1996 (Washington, National Academies Press). We observed that practices then current for measuring infrastructure performance were “generally inadequate.” Performance measurement was typically undertaken because the effort was mandated by law or regulatory requirements, or when there was a specific problem to be solved, not because of any broad acceptance that performance measurement is an effective management tool.

More important was the committee’s recommendation that no single measure of performance can adequately represent the varied and complex societal needs that infrastructure is meant to serve. As the report’s summary expressed it, “Performance should be assessed on the basis of multiple measures chosen to reflect community objectives, which may conflict…. The specific measures that communities use to characterize infrastructure performance may often be grouped into three broad categories: effectiveness, reliability, and cost. Each of these categories is itself multidimensional, and the specific measures used will depend on the location and nature of the problem to be solved.”

The committee’s concept of performance had similarities to what I had proposed 20 years earlier. “Effectiveness” was described as the ability of the system to provide the services the community expects, not so different from what I had defined as “serviceability.” The term “reliability” was used in essentially the same way in my dissertation and in the committee’s report. What I had earlier considered “maintainability” is now more commonly referred to as “resilience” and incorporated as an aspect of reliability. Describing “cost”—deriving from multiple resources and distributed throughout a facility’s service life, but definitely dollar-denominated—as a measure of performance was the major difference from my thesis and an important insight.

While historians may claim causal connections between events separated in time and space, such connections are fundamentally uncertain unless supported by explicit testimony from the people involved in later action linking their motivations to the earlier occurrences. Having myself met twice with Congressional staff to discuss these matters and delivered to them copies of Measuring and Improving Infrastructure Performance and other documents presenting similar perspectives, I would like to imagine that what I and others have learned about infrastructure performance influenced the most recent transportation reauthorization bill, Moving Ahead for Progress in the 21st Century (MAP-21, Public Law 112-141, enacted in July 2012), which features a new federal emphasis on performance measurement. Section 1203 of the act asserts that “Performance management will transform the Federal-aid highway program and provide a means to the most efficient investment of Federal transportation funds by refocusing on national transportation goals, increasing the accountability and transparency of the Federal-aid highway program, and improving project decision making through performance based planning and programming.” (While the U.S. Department of Transportation has for some years issued its biennial Conditions and Performance report to Congress on physical and operating characteristics of the highways, bridges, and transit, MAP-21 is transformative in making an explicit link between performance and national goals.)

The law then states seven goals that are to be the basis for defining performance, focused primarily on the nation’s highways: (1) safety, reducing traffic fatalities and serious injuries; (2) infrastructure condition, keeping the infrastructure asset system in a state of good repair; (3) congestion reduction; (4) system reliability, improving the system’s operating efficiency; (5) freight movement and economic vitality, improving the national freight network to support trade and economic development; (6) environmental sustainability, enhancing transportation while protecting the natural environment; and (7) reducing project delivery delays, to control costs and promote jobs. Elsewhere the act makes keeping transit system assets in a “state of good repair” a goal as well. The law tasks the Federal Highway Administration (FHWA) and the Federal Transit Administration (FTA) with identifying specific performance measures to be used to administer the funding programs covered by the legislation, and with setting targets to be used to judge acceptable performance.

The stated goals and performance measures likely to be selected under MAP-21, while not necessarily comprehensive in their coverage, at least address ideas of effectiveness, reliability, and cost. That it has taken more than 40 years to bring performance-based management into the mainstream of one of the principal functional subsystems of the nation’s infrastructure is consistent with the very slow evolution that characterizes civil infrastructure generally.

Battling Aging Infrastructure, the Enemy Is Us

Awash in local media headlines about Baltimore’s recent major water-main and sewer failures—and the flooding, street closures, and business disruptions that inevitably accompany such events—Maryland’s Senator Ben Cardin and the city’s Mayor Stephanie Rawlings-Blake jointly signed an editorial in the local newspaper calling for reinvestment in our failing water systems. (Baltimore Sun, “Commentary”, 7/31/2012, p. 15)  The need, they wrote, is national. They did not elaborate, but spectacular failures in other cities—Chicago in December 2011; suburban Atlanta and Washington, DC, in May 2012; and Kansas City in July, to give a few recent examples—offer persuasive support.

That these officials would go on record together in the cause of at least a portion of our nation’s infrastructure is certainly admirable.  However, the scope of their concern is too limited.  The problems of age, obsolescence, and catastrophic failure are not confined to water and sewer systems.  Across the nation bridge closures, natural-gas leaks, potholes, power outages, and erratic data connections have become painfully frequent.

It is also disappointing, albeit understandable, that the senator and mayor failed to acknowledge that we—a profligate citizenry and our elected leaders—are largely to blame for decades of deferred maintenance and failures to upgrade to new technology that have left our infrastructure in many places decrepit.

Parents generally understand that leaving their children a dilapidated house or car is not a great gift, but the typical taxpayer has little knowledge and less redress when a government executive or legislator chooses to satisfy vocal current interests at the expense of silent infrastructure. All residents and businesses suffer from this failure of fiduciary responsibility and leadership. We have systematically squandered a legacy built through the hard work of preceding generations.

Fool me once, so the saying goes, shame on you; fool me twice, shame on me.  If the time has come to reinvest, as Senator Cardin and Mayor Rawlings-Blake wrote, then as voters and taxpayers we should insist on a new deal: First, we should require that adequate funds are dedicated to infrastructure maintenance and upgrading so that decades hence our grandchildren are not confronted with the same crisis we now face.  Second, we should insist that our infrastructure is designed, constructed, and managed to provide reliable service and to be quickly repaired when failures occur.  Finally, we should rebuild with an eye on the future by incorporating smart information technology throughout the system. The people responsible for the infrastructure itself know how to do these things, but it will take leadership from elected officials to get them done.  Calling for reinvestment is only a small first step.

(An edited version of this post was published in the Baltimore Sun web edition in August 2012.)

Learning to Live with New Infrastructure Technology

The headline in The Atlantic, responding to an earlier article in the New York Times, asks the question, “Are we addicted to gadgets or indentured to work?” (“Silicon Valley Says Step Away From the Device,” Matt Richtel, 7/23/2012, Business Day, New York Times; “Are We Addicted to Gadgets or Indentured to Work?” Alexis Madrigal, 7/24/2012, The Atlantic.)

Matt Richtel, writing in The Times, reports that leaders at influential Silicon Valley companies are growing concerned about increasingly widespread addiction to gadgets. Our attraction to smart phones, tablets, and on-line living, some say, reflects “primitive human longings to connect and interact” that threaten to take over our lives. Next year’s edition of the authoritative Diagnostic and Statistical Manual of Mental Disorders, Richtel writes, is slated to include “Internet use disorder” in its appendix, indicating that the mental health profession thinks there may be a real problem but needs more research to understand it.

Responding in The Atlantic, Alexis Madrigal asserts that the problem—for Americans, at least—is our slavish devotion to work. We—or the upper middle class that reads the Times, at least—are working more and “having to stay more connected to work than ever before,” forced by employers (with the help of “our strange American political and cultural systems”) to be on the job 24/7. Citing both Mother Jones and McKinsey Quarterly as inspiration, Madrigal suggests that we need not simply to tear ourselves away from our electronic devices, but rather to “organize politically and in civil society to change our collective relationship to work,” adopting a more European perspective on who controls our time.

Whether their myopia has an ideological or technological basis, both writers overlook the fundamental influence of our infrastructure. In past decades motorized transport and telephone service dramatically reduced the influence of distance as an obstacle to economic and social interaction. The demands of maintaining international business networks and global supply chains shifted our ideas about “banker’s hours” and the sanctity of holidays and weekends. Radio and television brought education and diversion, the emergence of “couch potatoes,” and threats to book and newspaper publishing. These new infrastructures also supported and arguably accelerated a dramatic expansion of the middle class and the service sectors of the economy. These changes went hand in hand with accelerating urbanization of our population and suburbanization of our cities.

As difficult as it may be to believe, digital wireless communication and the devices we carry to take advantage of this new infrastructure have become widespread in just about two decades.  The technology enables me and my colleagues—all of us somewhere well below the infamous top 2% of the income curve—to work from virtually anywhere and to shift working hours.  No longer must I take an entire day off to attend to medical appointments, to have my car repaired, or to attend my child’s school play.

I view this as new freedom rather than a grasping employer’s imposition. Many workers do not yet enjoy such freedom and, as in the past, some jobs are not suited to such changes of practice.

Recent statistics show an international trend of younger people being slower than preceding generations to get their driver’s permits. Citing a study by the University of Michigan’s Transportation Research Institute, for example, MSNBC’s Paul Eisenstein reports that American teens are not rushing to get a driver’s license as soon as they become eligible, and that another study found similar trends in seven of 14 other industrialized countries. (“American teens are waiting longer to drive,” Paul A. Eisenstein, 4/9/2012, MSNBC, Bottom Line)  In its own analysis, the Dayton Daily News found a 9% drop in Ohio’s 16- and 17-year-old licensed drivers from 2006 to 2010, and a 4.7% decline in the number of Ohio 18-year-olds with licenses.

Analysts suggest the Internet, meaning particularly such new social media and communication applications as Facebook and text messaging, may be a key reason for the change.  Whatever the reasons, Eisenstein writes that auto company executives are worried that the trend may signal future declines in new-car demand.  Transit advocates are using the data to argue for higher government spending on urban public transportation systems.

How many hours we spend commuting, whether those hours can be used for anything other than steering and avoiding mishap, and whether the hours otherwise spent are counted as work or leisure are topics for another time. Only consider for now the possibility that any purported addiction to gadgets and commitment to work are simply short-term byproducts of learning to live with new infrastructure.

Are We Selling the Future Too Cheap?

Public concern and even occasional outrage over potholes, broken water mains, sewage spills, and closed bridges have been appearing with some regularity in the U.S. news media and blogosphere. Unemployment has been persistently high, particularly in construction. Interest rates have been at historic lows for several years. So why have we not seen an explosion of infrastructure investment?

Yes, we did have the 2009 American Recovery and Reinvestment Act (ARRA), meant to be a down payment on government action to modernize the nation’s infrastructure, enhance energy independence, and put people to work in the process.  The sudden spending sent government agencies scurrying for “shovel-ready” projects, but the law’s requirements that money be spent quickly precluded any real investment.

Before that, the sale to the private sector of long-term leases on the Indiana Toll Road and Chicago Skyway allowed the government sellers to redeploy some of the proceeds into new facilities, but no new resources were mobilized.

These instances notwithstanding, for the most part we have avoided what Adam Smith described as one of three duties of government, “the erection and maintenance of the public works which facilitate the commerce of any country, such as good roads, bridges, navigable canals, harbours” and the like. (The Wealth of Nations, Book 5, Ch. 1, Part 3)

Public works infrastructure, like a home, represents a commitment to the future.  We use  resources we have now to create something that we imagine will bring us benefits tomorrow.  For infrastructure, as for homes, we expect “tomorrow” to extend for decades.

An easily understood and accepted but nevertheless fundamental principle for making such investments is that we should get more benefits out of the infrastructure than the resources we put in for construction and operation. Putting the principle into practice, however, deciding exactly what resources we should invest and how, is not such a simple matter. The future is uncertain. People’s priorities change. Our money, time, land, and other resources are limited. We have many competing demands for using those resources.

So it is not obvious whether the future benefits of a particular infrastructure investment will exceed its costs. We need tools to help us decide.

One of the most widely used tools is “discounted cash flow” (DCF) analysis.  DCF is a way to compare costs incurred and benefits received over some defined time period to judge whether the total benefits exceed the total costs.

Essential to DCF analysis is the idea of a “time value of money”: everyone would prefer to have a dollar in hand today rather than wait until next year for the same amount. We might be willing to wait if we were going to receive a larger amount, say $1.15. The idea is that funds to be received in the future are worth less than funds in hand today.

The measure of money’s time value is the “discount rate,” conventionally the percentage reduction in value per year of waiting.  In the example above, the discount rate is 15%.
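The arithmetic behind that example can be sketched in a few lines of Python; the $1.15 and 15% figures are the same illustrative numbers used above, not a recommendation:

```python
def present_value(future_amount, rate, years):
    """Discount a future amount back to today at a constant annual rate."""
    return future_amount / (1 + rate) ** years

# $1.15 received one year from now, discounted at 15%, is worth $1.00 today.
pv = present_value(1.15, 0.15, 1)
print(round(pv, 2))  # 1.0
```

The same function discounts amounts further out: money due in two years is divided by (1 + rate) twice, and so on.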

Discount rates look a lot like interest rates, the rate paid on a home mortgage, for instance, the rate that banks charge for credit-card loans, or the rate bondholders receive for lending their money to a corporation. In fact, there is not much difference, except that interest rates apply only to money.

Discounting is applied to many benefits and costs to which we assign monetary values. For example, we discount the value of time commuters will save over the next 15 years to a supposedly equivalent present amount to justify building the extra highway lanes that we expect will speed travel.

When the discount rate is larger, investments not likely to yield returns until many years after resources are invested look less attractive.  When the rate is smaller, future returns look more valuable in the present.  Most of the time, the very long time periods over which we expect to realize the benefits of physical infrastructure–three to five decades and longer–do not count for much in the economic analysis because the discounted present values are low. Given a choice between a short-lived but high-benefit investment (attracting a major sports event, for example) and a steady but lower annual return over many years (a new rail transit line, perhaps), high discount rates favor the former.
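A minimal sketch of that comparison, using made-up cash flows (a one-time $100 benefit next year versus $8 a year for 30 years; the numbers are invented for illustration), shows how the choice of discount rate flips the ranking:

```python
def npv(cash_flows, rate):
    """Net present value: cash_flows[t] is the net benefit in year t (year 0 = today)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

short_lived = [0, 100]        # one-time benefit next year (a major event, say)
long_lived = [0] + [8] * 30   # steady annual benefit for 30 years (a rail line, say)

for rate in (0.02, 0.15):
    print(f"rate {rate:.0%}: short {npv(short_lived, rate):.1f}, long {npv(long_lived, rate):.1f}")
```

At a 2% rate the long-lived stream is worth far more in present terms; at 15% the one-time benefit wins. That reversal is the sense in which high discount rates disfavor long-lived infrastructure.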

Very low interest or discount rates should then encourage investment in infrastructure.  For a variety of reasons, U. S. interest rates have been at historic lows for several years. In addition, expressions of public concern and even occasional outrage over potholes, broken water mains, sewage spills, and closed bridges appear with some regularity in the news media and blogosphere.

So, once again, why are we not seeing an explosion of infrastructure investment?

People are thinking about infrastructure as if there will be no tomorrow. Interest rates may be low, but the discount rates people are using–subliminally–to assess their investment opportunities are a lot higher.

People who study such matters suggest that rates have three components. The first component is in fact a financial-market interest rate representing the payments that presumably very reliable borrowers—governments and their central banks, for example—must make for the privilege of using other people’s money. The second component is a premium presumed to compensate for a possibly less reliable borrower and for the risks the lender faces in the conditions of lending, such as the length of time until the loan is to be repaid and whether the borrower has offered any security—the house in the case of a mortgage loan, for example. The third component is meant to account for the uncertainty of future events and the risk that events will make it impossible for the lender to recover fully the amount lent.
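Stated as arithmetic, the decomposition is simply additive; the component values below are invented for illustration, not market data:

```python
# Stylized components of an effective discount rate (illustrative values only).
risk_free_rate = 0.02       # what the most reliable borrowers pay
borrower_premium = 0.03     # compensation for a less reliable borrower and loan terms
uncertainty_premium = 0.05  # allowance for events that could prevent full repayment

effective_rate = risk_free_rate + borrower_premium + uncertainty_premium
print(round(effective_rate, 2))  # 0.1
```

The point of the decomposition is that the last two components, not the market interest rate, are what swell when confidence in the future erodes.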

So if the public loses confidence that the people responsible for infrastructure will be reliable stewards over the coming decades, they will insist on higher rates of return, that is, higher discount rates. If they feel that the future is less certain to resemble the past, they will look for a higher discount rate. Sea levels rising, financial crises, political gridlock: higher discount rates demanded.

But we do not have to be paralyzed by such uncertainties. Iran’s qanats, which still supply municipal and agricultural water after nearly three millennia, China’s Great Wall, Paris’s Notre Dame Cathedral, and even such recent works as the Panama Canal and the Golden Gate Bridge would not exist had their creators lacked a vision that they were building for a long-term future. We should not discount our own future so deeply.

“Sustainability” may be fundamentally unsustainable, but we have a chance

The idea of “sustainability” has clearly taken root. The word appears frequently in print as well as Internet media, and national governments around the world have established agencies and programs devoted to it. There seems to be widespread agreement that the idea has something to do with energy supplies, environmental impact, and economic growth, and perhaps with social engagement and political stability as well, although the scope of what is to be sustained—individual well-being, national prosperity, the global status quo, for example—seems to differ from one forum to another. However, there seems also to be a dawning realization that the idea’s application as a basis for guiding humanity’s actions may not be sustainable.

An important early appearance of the meme, if not its initial source, is often attributed to the World Commission on Environment and Development, commonly known as the Brundtland Commission. This group of international experts was convened by the United Nations in 1983 to propose long-term environmental strategies for achieving sustainable development; recommend ways that concern for the environment might be translated into greater co-operation among countries; and help define shared perceptions, aspirational goals, and a long-term agenda for action. The Commission’s 1987 report, Our Common Future, suggested that “Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs.” Among the still-expanding literature on the subject, I have found no more cogent definition of the term.

It might seem like a small step from seeking humanity’s “sustainable development,” a steady advancement of people’s achievement and wellbeing, to the “sustainability” of humanity, our survival as a prosperous species.  For a number of reasons, however, I suggest there is a large gap between the two concepts.  The gap is so broad, in fact, that I doubt the value of sustainability as a meaningful basis for guiding our principles and policies. Let me explain why.

To begin, the time scale for thinking about our sustainability far exceeds our abilities—politically, socially, historically, perhaps psychologically—to plan, take meaningful action, or even pay attention.  Scientific evidence suggests that the biological genus to which humans belong emerged, and the first hominid use of stone tools began, in Africa perhaps 2.5 to 3.5 million years ago.  Evidence of Homo sapiens sapiens, our particular species, dates back about 250,000 years.  (In all of this, my phrasing is meant not to convey any skepticism, but rather to acknowledge that we rely entirely on inference from the limited data available to us to draw conclusions about past events and conditions.)

Our various experiments in cultural, social, and political organization are brief indeed when set against these time spans.  Damascus is often claimed to be the oldest continuously inhabited city in the world, but evidence for large-scale settlement seems to date back only about 4,000 years. (The earliest Egyptian, Sumerian, and Chinese written records may have been created about 6,000 years ago.)  The community water-management schemes of Bali and other parts of Indonesia, arguably among the better models for a sustainable relationship between humans and their environment, originated perhaps 1,000 years ago.  England’s Magna Carta was first issued in 1215, and the United States, our ongoing experiment in capitalist democracy, was established less than three centuries ago.  Viewed against the backdrop of human history, “sustainability” has had a very brief span of influence.

In addition, there is the fundamental uncertainty of our existence as a species.  While some people prefer alternative explanations, the fossil evidence suggests that many varieties of creatures have come and gone since the first simple cells appeared.  The famously extinct dinosaurs died out, some scientists suggest, after an asteroid collided with the Earth and caused extreme global climate change.  On a less cosmic scale, scientists theorize that ash from a volcanic eruption approximately 70,000 years ago at Lake Toba on the island of Sumatra, Indonesia, similarly caused such dramatic global cooling that the human population was drastically reduced.  Outbreaks of bubonic plague (the infamous Black Death of 14th-century Europe) and related famines in more recent times have dramatically reduced human populations in Asia and Europe.  Apart from simply not giving in to existential despair, probably the best we can do in light of such evidence is to limit our perspectives to decades at most.  Some government agencies already seem unable to maintain funding for the programs they established to enhance their communities’ “sustainability.”  (For example, see the commentary on Sustainable Cities Collective.)

Finally, we really cannot know whether our actions are “sustainable” with respect to either our development or our survival.  Applying the Brundtland definition requires forecasting not only the consequences of our current actions but also what future generations may judge to be their own “needs.”  On the one hand, our society and our global environment form a complex system, susceptible to the well-publicized “butterfly effect”: any small perturbation can cause unforeseen consequences.  On the other hand, our values, technologies, and culture change from one generation to the next, so that what may seem to us an inconsequential change may be seen very differently by our children; consider, for example, the shift in our views about air pollution and pesticides.  In general, then, any assessment of the future consequences of our actions is more than likely to be inaccurate.  Even more fundamentally, it seems quite likely that we simply cannot do anything to meet our own present needs without in some sense compromising the options available to future generations.

While “sustainability” or even “sustainable development” may be problematic as directly useful concepts, the ideas nevertheless point the way toward usable principles. Applying these principles will at least increase the chances of our long-term survival:

  • Use only renewable resources: No matter how large the supply reservoir may be, it will eventually be exhausted.
  • Eliminate all waste and pollution: What economists refer to as “residuals” are simply an indicator of inefficiencies in our production processes.
  • Stabilize our population: Increasing humans’ wellbeing and chances of survival, as individuals and as a species, depends ultimately on enhancing labor productivity as well as on strictly applying the first two principles.
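The first principle rests on simple arithmetic. If annual consumption grows at even a modest constant rate, a finite stock is exhausted in a time that grows only logarithmically with the stock’s size, so enlarging the reservoir buys surprisingly little extra time. A minimal sketch of that calculation (the function and the numbers are hypothetical, chosen only for illustration):

```python
import math

def depletion_time(reservoir, use0, growth):
    """Years until a finite reservoir is exhausted when annual
    consumption starts at use0 and grows at a constant rate.

    Solving the integral of use0 * e^(growth*t) from 0 to T = reservoir
    gives T = ln(1 + growth * reservoir / use0) / growth.
    """
    return math.log(1 + growth * reservoir / use0) / growth

# Hypothetical figures: multiplying the reservoir 100-fold, while
# consumption grows 2% per year, does not even triple the time to exhaustion.
t1 = depletion_time(1_000, 1.0, 0.02)    # ≈ 152 years
t2 = depletion_time(100_000, 1.0, 0.02)  # ≈ 380 years
```

The logarithm is the whole point: no plausible enlargement of a nonrenewable stock changes the conclusion that it runs out.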