
Ethical Consumption in the Socialist Imaginary

By Juan Gonzalez Valdivieso

 

Since its advent in the 1990s, globalization has transformed the world. One of its many notable effects was the further siloing of consumers from the labor that produced their goods and services. Increasingly complex global supply chains alongside deceptive advertising make it nearly impossible to uncover every step in a product’s production and distribution. Of course, strategic clarification of these processes would come to represent its own form of advertising, as the professed “social and environmental values of consumer products” became reliable selling points in and of themselves. This was mainly due to an increase in consumer consciousness — spurred by globalization’s poster child, the internet — that begged for opportunities to consume “ethically.”

Though such “ethical consumption” marked an improvement over previous consumptive practices, a socialist lens reveals its limitations. As socialists understand, capitalist production relies on the exploitation of workers by capital owners, meaning that no level of consciousness or self-awareness on the part of traditional companies can rid them of their fundamentally unethical character. Even in instances where a worker’s experience with their employer is satisfactory — as can happen when receiving a high salary or wage, robust benefits, or other perks — the company’s simultaneous profiteering is more than just a harmless manifestation of mutual benefit. The very act of turning a profit beyond what is needed to cover operating costs is one of theft, specifically of the value that the worker has produced through their labor. This surplus value is not returned to the worker, nor does it serve operational ends. It instead funds the millionaire salaries of executives and further grows the capital to which the company can claim legal rights. In other words, as socialists often argue, there is no ethical consumption under capitalism. However, when considering the ethics of capitalist consumption, the analysis cannot stop there.

It is not so much ethical consumption but rather ethical purity which is impossible under capitalism. Moreover, beneath such a threshold of ethical purity, there lie two spectra upon which one’s capitalist consumption can and should still be measured: that of ethics and, more importantly, that of the consumer. 

The spectrum of ethics — henceforth referred to as the ethical spectrum — is the one implied by those deliberately advertised “social and environmental values of consumer products.” In other words, a hierarchy of ethics in consumption does exist just shy of ethical purity. And, most pressingly, that hierarchy is primarily highlighted by the aspects of a good or service’s production and distribution that can be observed, analyzed, and understood. Of course, such aspects are most often made publicly available for observation, analysis, and understanding only at the behest of their corporate manufacturers, but they are empirical points of ethical reference nonetheless. Take the purchase of a shirt, for example. When a consumer purchases a shirt, the ethical spectrum offers a host of consumptive options based on the available social and environmental factors at hand, ones which, for the sake of argument, will be boiled down here into three distinct choices.

The first choice, which will be the optimal form of ethical consumption in this scenario, is one in which the consumer knows that the shirt is both the product of union labor and produced in an environmentally conscious way, be that through the use of reusable materials, renewable energy, waste minimization, etc. The second choice, which will be the intermediate form of ethical consumption in this scenario, is one in which the shirt is still the product of union labor but environmental considerations are absent, meaning labor exploitation is minimized through unionized production but the product’s sustainability is lacking. The third and final choice, which will be the least preferable form of ethical consumption in this scenario, is one in which the production of the shirt lacks both union labor and environmental considerations, making it ethically lackluster with respect to both labor exploitation and sustainability. It is in determining which of the three choices one should pursue, if any at all, that the second spectrum — that of the consumer — becomes relevant.


The spectrum of the consumer — henceforth referred to as the consumer spectrum — makes an even deeper distinction between consumptive practices than the ethical spectrum does, as it precedes the question of ethics with the question of ability. To treat consumption under capitalism as a purely ethical exercise is to neglect the vital reality underlying such a society: inequality is rampant, poverty is ever-worsening, and the material conditions of the masses only become more dire by the day. As such, for many consumers, ethical considerations are an aspect of capitalist consumption in which they simply do not have the socioeconomic capacity to engage. After all, who is to blame a working-class family for neglecting the exploitative or unsustainable aspects of a good or service they’ve consumed when their socioeconomic conditions may not even allow them to meet their most basic needs?

The consumer spectrum acknowledges this disparity and ensures that the degree of ethical consideration a consumer engages in is proportional to their socioeconomic standing, best represented by the consumer’s income. However, conditions beyond financial earnings can determine whether disposable income in particular will fluctuate over time, a trend that would then require the consumer’s ethical considerations to shift accordingly. These conditions can take many forms, incorporating factors such as working conditions (a greater likelihood of on-the-job injuries can erode disposable income through ever more frequent medical bills), immigration status (undocumented workers have less access to social safety nets and unemployment benefits than their documented counterparts), and living conditions (crumbling infrastructure can gradually increase the financial burden of maintenance faced by tenants, decreasing their disposable income over time). As such, the consumer spectrum adjusts the ethical considerations incumbent upon a consumer based both on their income and on the potential for their disposable income to fluctuate. In turn, the consumer spectrum ensures two important outcomes.

On the one hand, it makes sure that socioeconomically disadvantaged individuals are not burdened with the task of considering ethics when making consumptive decisions in order to survive. On the other, it holds socioeconomically advantaged individuals to a higher standard of ethical consumption, one in which they would be remiss not to undertake the kind of ethical considerations previously outlined in the shirt exercise. Admittedly, the former assurance has become more widely accepted in discourse regarding working-class consumption. The latter risks not gaining the same acceptance, as the maxim that there is no ethical consumption under capitalism can serve as low-hanging fruit for socioeconomically advantaged individuals looking to conveniently justify knowingly unethical consumption. The consumer spectrum seeks to account for such co-optation and counter it head-on.

This framework of consumptive spectra can be useful at the individual level of consumption. For those with the appropriate socioeconomic bandwidth, it offers a wealth of considerations that can inform the consumption of a given good or service. However, the utility of the model is perhaps best understood on the macro level. Beyond the pressure that socialists must continue to exert on the existing system — uprooting the power of capital owners and corporations in the process — these spectra provide greater nuance to the socialist perspective on individual accountability and action. Through the ethical and consumer spectra, we can better envision the untapped potential of individualized proactivity in creating a less exploitative and more sustainable society, while also accommodating the diversity of lived experiences and forms of exploitation endured under the current economic system.

Thus, the notion of ethical consumption under capitalism should not simply culminate in an indisputable law of impossibility. Rather, it should be understood as a range of activity that can be engaged in — just shy of ethical purity — based on the ethical considerations at hand and, more pressingly, those which directly pertain to the socioeconomic capacities of the consumer. Only in considering this reality can we better understand the role of individual consumption in the broader socialist project of radical change and revolutionary transformation.


Juan Gonzalez Valdivieso is a Colombian Marxist. In his writing, he seeks to interrogate the nuances of socialist thought and praxis.

Spectacular Death and the Histrionics of Loss

By Michael Templeton

Republished from Peace, Land, and Bread.

For one summer, I worked at a local cemetery mowing grass. Spring Grove Cemetery encompasses over 700 acres of land. It was chartered in 1845 and remains open to this day. The cemetery is a major destination for walking, biking, sight-seeing, and simply relaxing in the natural surroundings. One of the things I came to notice as an employee was the stark contrast between the older parts of the cemetery and the newer plots. The oldest stones and grave markers contain little information. Some stones do not even have names on them. They simply say “Father” or “Infant,” etc. Older stones that do have writing on them generally state the date of birth, the date of death, and a few lines from the Bible. Some stones bear symbols denoting certain professions—doctors, clergy, military men—an iconography specific to those vocations, and most of this iconography is quite ancient. By contrast, the newer stones are covered with writing. Lines from popular songs, poetry, and sentiments from the bereaved clutter these stones. The newest stones may have etched images from photographs so that an image of the deceased is engraved onto the stone. In the newer parts of the cemetery, one can find grave markers shaped like cartoon characters. Some of the stones have the appearance of modernist sculpture so as to set them apart from older gravestones. The change from bare stones that disclose almost nothing to graves covered with information is not attributable to mere fashion or advances in technology. Rather, this change has everything to do with the ways people understand death itself.

Spring Grove Cemetery itself came into existence due to increasing concern over cholera outbreaks and the unsanitary and unsightly presence of old church cemeteries, which left dead bodies to decay into sources of drinking water and were an affront to middle-class ideas of how neighborhoods should appear. The dual pressures of public health and changing attitudes toward the emplacement of the dead coincided throughout the Western world with the emergence of the modern cemetery, and Spring Grove Cemetery is emblematic of those pressures. It is now an enormous example of the drive to create a space for the dead that was easily accessible from the city center but outside the city proper, and it is an example of such a space that serves the additional purpose of being a destination for recreation. It is adjacent to the city but not in it. It is a space reserved for the interment of the dead, but it is a marvel of landscape design and architecture. Lastly, it contains something of an archaeological record of a shift in the way individuals understand death itself.

The cemetery is an example of the type of space Foucault defined as a heterotopia. It is both real and unreal. It occupies a border region relative to the actual space occupied by real individuals.

Heterotopias are liminal places—the way a mirror offers a real place which is both present and absent:

"The mirror functions as a heterotopia in this respect: it makes this place that I occupy at the moment when I look at myself in the glass at once absolutely real, connected with all the space that surrounds it, and absolutely unreal, since in order to be perceived it has to pass through this virtual point which is over there." [1]

The cemetery offers a similar social function. It is the mirror image of the city in that it is completely deliberate in its spatial design and it is occupied. Yet, the cemetery is designed not to facilitate the movement of bodies but to inter bodies—and it is occupied with the dead. It is the inverse version of the city itself. Like the mirror, the cemetery is a real place, but it operates in a manner that is unreal since it does not function as a place for individuals to exist, only to desist. So, the modern cemetery emerged as a site in which societies could place the dead in a real place that functioned as a kind of unreality with regard to everyday life. There is the place of the dead which one could visit and even enjoy, but the place of the dead could be put out of mind when it came to living life.

Spring Grove was born of this social movement. Founded in 1845, it coincides with the historical period described by Foucault, and it bears the cultural traces that Foucault describes as signs of the modern cemetery. These are sacred spaces, but they emerged during a time that was distinctly secular. The modern “cult of the dead” emerges during a time of paradox:

"This cemetery housed inside the sacred space of the church has taken on a quite different cast in modern civilizations, and curiously, it is in a time when civilization has become ‘atheistic,’ as one says very crudely, that western culture has established what is termed the cult of the dead." [2]

An “atheistic,” or secular, society is also the society that creates an entire city devoted to the preservation of the dead. It is under these conditions—conditions in which a firm belief in the life of the soul is fading and therefore must be performed in an ever more elaborate fashion—that the place in which the dead are commemorated becomes a visible and dramatic presence. In previous times, when individuals firmly believed that God guaranteed the care of the soul, people did not need to commemorate bodies. As faith in the soul decreased, care of the body increased. Again, Foucault:

"Basically it was quite natural that, in a time of real belief in the resurrection of bodies and the immortality of the soul, overriding importance was not accorded to the body’s remains. On the contrary, from the moment when people are no longer sure that they have a soul or that the body will regain life, it is perhaps necessary to give much more attention to the dead body, which is ultimately the only trace of our existence in the world and in language. In any case, it is from the beginning of the nineteenth century that everyone has a right to her or his own little box for her or his own little personal decay, but on the other hand, it is only from that start of the nineteenth century that cemeteries began to be located at the outside border of cities." [3]

We create a city of the dead only when we are no longer certain that God has done this for us. This is not to say that the advent of the cemetery coincided with the complete abandonment of faith in the afterlife. Rather, the rise of the modern cemetery marks a time in which faith in the afterlife is no longer a fundamental fact for the living and must therefore be demarcated in the form of a space that is both sacred and secular so that the living may continue to have access to some kind of symbolic place and sign which stands in for both loss and faith in the afterlife. The modern cemetery is a heterotopia in the sense that it is an “other space” and it is a place in which a paradoxical understanding of death could find some measure of reconciliation.

We see evidence of complete faith in the afterlife in the forms of gravestones which carry little to no information. The facts of the life of the deceased are of no importance because the deceased is no longer in the world and has passed on to another world. To consign the dead to a nearly anonymous place in the world requires absolute faith that the soul of the dead has literally passed on to another world. A parent who has lost a child, for example, does not require a stone with the child’s name engraved upon it in order to remember that child. The stone simply does not perform that function. It marks the site of a burial and nothing more. As Foucault states, it is the move toward a more “atheistic” society which demands monuments to testify to the life of the deceased. What is more, the monuments and the small personal boxes for bodies speak more to the living than to the dead. We do not erect monuments for the dead for the simple fact that they are dead. We erect monuments for ourselves. They are markers to prove to ourselves that the deceased were in fact important to us, and the monuments are to show others that we care. The heterotopia of the cemetery has much more in common with the mirror than the dialectic of the real and the unreal.

As we move into the 20th century, the gravestones become more loquacious. Modern and contemporary stones are engraved with lines of biblical scripture. They bear poetry and song lyrics. The most recent stones bear engraved images from photographs—extremely realistic images that look like black-and-white photographs printed directly onto the stone. In another cemetery in Southern Indiana, the stones are almost all of this type. People leave photographs, toys, trinkets of all kinds, along with religious items such as rosary beads and crosses. As we move into contemporary times and religion and faith fade from playing any role in everyday life, the demonstrations of grief and loss, the sheer number of words used to mark loss, and the profusion of images explode all over the cemetery. The more removed faith in the afterlife becomes, the more pronounced the declarations of faith in the afterlife.

More words are inscribed to mark the faith of those who still live. More realistic images are rendered to commemorate the lost loved ones. This would indicate more than a loss of faith. It indicates a turn away from loss itself and a nearly obsessive focus on the ego of the bereaved.

The contemporary grave marker is a mirror of the ego in which the bereaved can gaze upon themselves. The heterotopic structure remains, but it has returned on the level of the ego.

A fundamental lack of real belief finds expression in the iconography and cluttered language of the contemporary headstone. What we see in these histrionic displays is a profound inability to confront the reality of death. One forestalls the reality of death by filling in the loss with a profusion (and confusion) of images, words, and trinkets, thus shifting the focus away from loss itself and onto the individual who experiences the loss.

Rather than allow the progression of psychological mechanisms in which an individual experiences loss, suffers the process of mourning, and finds resolution in the acceptance of the loss, we see the cultural expression of a complete fixation on loss itself. This is Freudian melancholia on the scale of public theater, and it manifests itself in forms that resemble graffiti. Freud distinguishes mourning, in which the ego is directed outside of itself, from melancholia, in which the ego contemplates itself:

"In mourning it is the world which has become poor and empty; in melancholia it is the ego itself." [4]

This would be sufficient except that the contemporary ego is already poor and empty since it has been evacuated of substance by finding a place of meaning exclusively in the exterior drama of the spectacle. This is an inversion of Artaud’s “Theater of Cruelty” in that these demonstrations do not reflect what Artaud envisioned as an expression of “both the upper and lower strata of the mind.” These are theatrical advertisements for loss that express only the most superficial marks of grief. [5] Contemporary life projects the ego into the external world and can only find a ground of being and meaning to the extent that this exterior ego function is reified in the system of exchange which only knows consumer existence.

Consumer existence requires the system of exchange in order for anything to be real. The form of melancholia expressed through the verbose and graffiti-strewn headstones we find in the newest parts of the cemetery indicates an ego which cannot comprehend death at all except as an affirmation of itself.

Far from paying homage to the deceased, and far from a spiritual declaration of faith in the afterlife, the contemporary headstone is a testament to the flimsy ego of individuals whose lives are devoid of any reality because, at the level of individual experience, there is no reality which exists outside the realm of merchandise and display. The profusion of words and images is designed to compensate for an ego that has been entirely evacuated of substance.

What we witness in the contemporary graveyard is not melancholia proper, since the ego fixation on itself is in fact an ego fixation on a prescribed mode of performing loss. There is no confrontation or meaningful experience of loss, since it is denied in the form of a spectacular show of loss.

"The dominant trait of the spectacular-metropolitan ethos is the loss of experience, the most eloquent symptom of which is certainly the formation of that category of “experience”, in the limited sense that one has “experiences” (sexual, athletic, professional, artistic, sentimental, ludic, etc.). In the Bloom [the indeterminate form of contemporary life], everything results from this loss, or is synonymous with it. Within the Spectacle, as with the metropolis, men never experience concrete events, only conventions, rules, an entirely symbolic second nature, entirely constructed." [6]

The loss of experience means the loss of the ability to truly experience death. People experience the forms of loss, grief, and mourning only to the extent that there are prescribed modes of experience which come from elsewhere. These are “forms” of loss, grief, and mourning because the actual experience is deferred in favor of the performance of these modes of experience. The loss of experience proper negates the experience of loss.

Death, of course, remains a reality, but in its social forms, the reality of death cannot exist except insofar as it can become a commodified abstraction. Death is the abstract nothing forestalled by the business of creating a form of life. Individuals render the loss of their own loved ones with the histrionic displays engraved onto headstones. They otherwise deny death by buying into economic abstractions which further render death an abstraction. There is a business of death prior to death: “Promoters of life insurance merely intimate that it is reprehensible [to die] without first arranging for the system’s adjustment to the economic loss one’s death will incur.” [7] Death can only be grasped from within the abstractions prescribed by the spectacle, and rendered in equally abstract images that have more in common with advertising than individual loss and grief.

Under present cultural conditions, the theological ground that once resolved the fact of death no longer holds, and we see this clearly in maudlin displays of grief which are in fact desperate displays of melancholia. The nature of contemporary consciousness is such that we find no resolution in the face of death; therefore, we simply deny it. We hide from death because it is invisible and unknowable, yet we perform grief with ever greater histrionic displays so as to affirm our egos in the face of the one thing we know expunges the ego.

Returning to the most basic features of the spectacle, we can find the same mystifications at work that we saw in spectacular pseudo-belief:

"The spectacle is not a collection of images; rather, it is a social relationship between people that is mediated by images." [8]

Our relationship to each other and to the world around us is mediated by images to the extent that what is known is no longer things in the world but our relationship to images of things in the world. Our understanding of death is now captured in the spectacle as much as any other aspect of life. Death is negated by the image of death and we find a sense of solace in loss through our relationship to these images of death, mourning, and loss.

There is no death, mourning, and loss; there is only the performance and image of death, mourning, and loss. One expresses oneself through engraved images of the lost loved one, not through the lost loved one. The contemporary grieving person finds some measure of peace in contemplating the image of the person they lost, and this constitutes a fundamental denial of loss. The only thing that matters is that the grieving person remains alive and that anyone who passes the grave of the deceased knows that someone lost someone else. In this way “it is thus the most earthbound aspects of life that have become the most impenetrable and rarefied.” [9]

It is not death that is impenetrable and rarefied; it is the consumer of signs of loss and death.

The spectacle denies the validity of life as it is lived in everyday experience. Nothing so common as loss can be commodified unless images and tangible commodifiable expressions of loss can be made to supersede the lived experience of real loss.

Thus, it is that “the absolute denial of life, in the shape of a fallacious paradise, is no longer projected into the heavens, but finds its place instead within material life itself.” [10] We find a sense of the afterlife only in images that dramatize the beyond because there can be no way of conceptualizing anything that is not material and commodified. Gravestones are no longer markers of death and loss. They are markers of the ongoing participation of one who has lost, but one whose sole understanding of loss is as a histrionic expression of their own ego within the heaven of spectacular images.

Spectacular life cannot include death. There is simply no place for something so utterly final and real. As we saw above, we never experience concrete events; we only experience the conventions and rules of events. The experience of events has been replaced with the formal specifications of events. We do not experience a rock concert, we experience the prescribed modes of behavior which a rock concert demands. There are formal aspects to concert experiences which are dictated ahead of time by representations of musical events. In the same way, contemporary life excludes the possibility of experiencing death.

One does not live the experience of the death of a loved one. One experiences the formal attributes of loss.

The television news will never show you a person bereft of any and all expression as they are overcome with loss and grief. What we see through the screens are rehearsed performances, histrionic displays. People repeat the same clichés: “they were too young,” “they had their whole life ahead of them,” “our thoughts and prayers are with the family,” etc. In the absence of the possibility of belief, as we saw above, there can be no understanding of anything that resists representation. There is no real death, only images that mediate a collective inability to recognize the reality of death.

The function of religion with respect to death was, in essence, a Hegelian sublation. Death negates life. Religion serves as a mediating force which negates the negation. The simultaneous negation and transformation of the fact of death constitutes a resolution. The dead are negated and elevated to another plane of existence. In effect, the religious mediation of death served the function of Freudian mourning. The finality of death is resolved in the sublation of this finality into a spiritual faith in something that transcends death. This step in the psycho-social confrontation with death depended on a qualitative change in one’s existence. The finality of death serves as the negation of our temporal existence. This negation is itself negated as the soul of the deceased is lifted into another plane of existence. In this, the full dialectic is resolved.

Death under the dominance of the spectacle provides no such resolution. Within the spectacle, death negates life. Rather than confronting this fact, the contemporary subject simply disavows that which cannot be transformed into life.

There is no finality in consumer culture, only a new version of the commodity designed to fill a void that would not exist without consumer culture. The contemporary confrontation with death is manifest in the grave marker, which is yet another consumer spectacle. It can be consumed endlessly; therefore, there is no death. The gravestone stands in for an absence that is never properly experienced as an absence. The clutter of the stone creates presence. Contemporary understandings of death can find no resolution and subsequent sublation. What we have is a childish disavowal of the reality of death and a psychological return to our own ego. Cluttered and outlandish grave markers do not signify the deceased. They signify the living. These grave markers scream “me, me, me” and “I, I, I.” They are infantile demonstrations of impotence. There is no dialectical resolution since contemporary life does not allow for any qualitative differences as valid differences. We have only quantitative differences. Under a regime of knowledge that can admit nothing but quantity, there is no net gain from death. Therefore, death can only be disavowed with quantities of grief. More display equals more grief. The operative term is “more.”

Even the medical establishment disavows death. As science moves to endlessly split hairs over the medical definition of death, the mechanisms of medical science cannot find the precise moment, or even the conditions, that constitute death. For centuries, death was defined as the moment the heart and breathing stopped. This was simple. When a body no longer showed basic vital signs, that body was dead. Beginning in 1959, a new definition of death began to emerge. With the medical classification of what is termed coma dépassé, or overcoma, medical science began to take account of a body which was by all objective measures dead but continued to show basic vital functions with the assistance of instruments for breathing and feeding. [11] The living person was effectively dead, but they continued to live at the most basic biological level to the extent that organs continued to function with the help of machinery. Near the end of the twentieth century, medicine advanced the notion of brain death as the final determination of death. This meant that “(o)nce the adequate medical tests had confirmed the death of the entire brain (not only of the neocortex but also of the brain stem), the patient was to be considered dead, even if, thanks to life-support technology, he continued breathing.” [12]

However, the definition of brain death was confirmed because brain death finally leads to the cessation of heart and respiratory functions. Brain death is confirmed with the definition of death that preceded it. This is to say that, “According to a clear logical inconsistency, heart failure—which was just rejected as a valid criterion of death—reappears to prove the exactness of the criterion that is to substitute for it.” [13] The moment of death is brain death, but brain death leads to heart failure which is the moment of death. All of this leads to a zone of indeterminacy wherein death occurs but does not occur at the same time. Agamben draws this problem out to further his theory of the state of exception which lies at the heart of contemporary biopolitics. For our purposes, it is enough to understand that death remains a fundamentally unreal thing, even in the realm of medical science.

Contemporary consumer culture depends on externalizing all real lived experience. Individual experience only takes on validity once it is sutured into the realm of consumable images and the commodities which give these images meaning. My “I” only exists to the extent that it enters the flow of other egos who participate in the systems of exchange. Whereas the individual was once a mystification within capitalism insofar as one’s individuality existed in relation to one’s participation as a working subject of capitalism, we have gone many steps further: one’s individual status as a human can only exist insofar as one has projected oneself into the realm of images and rendered oneself a meaningful participant in spectacular culture. All of this renders individual subjectivity a completely external feature of public consumption, and the realm of interior life has no value or even any meaning.

Individual beliefs no longer exist because belief takes place elsewhere, in the realm of the image. Individual egos have no meaning other than as externalized performances of ego-ness. I demonstrate myself, therefore I am. Just as images circulate in a state of pseudo-eternity in image space and image time, in the realm of pseudo-cyclical time as we saw above, so the contemporary ego circulates forever in a consumerist limbo that will not admit death.

Medical determinations of death are left to systems of political power. Since doctors are only in the business of life, they have no obligation to offer a final determination of death that would serve in all cases. Death is a political question. It is not a medical or biological question. Death is not even a theological question, no matter the amount of biblical language inscribed on a stone. Death is not, and the heterotopia of the cemetery serves the dual function of being a place for the dead and yet another place in which to publicly perform oneself. No longer that other space where the city lays its dead adjacent to the city proper where people continue to live, the cemetery is now the other space where we wallow in our emptiness against one of the only things that cannot be commodified: the absolute finality of death.

Michael Templeton is an independent scholar, writer, and musician. He completed his Ph.D. in literary studies at Miami University of Ohio in 2005. He has published scholarly studies and written cultural analysis and creative non-fiction. He is also the blog writer for the Urban Appalachian Community Coalition in Cincinnati, Ohio.


Endnotes

[1] Foucault, Michel. “Of Other Spaces,” p. 4

[2] Ibid., p. 5

[3] Ibid., pp. 5-6

[4] Freud, Sigmund. “Mourning and Melancholia,” p. 246

[5] Artaud, Antonin. The Theatre and Its Double, p. 82

[6] The Invisible Committee. Theory of the Bloom, pp. 47-48

[7] Debord, Guy. The Society of the Spectacle, p. 115

[8] Ibid., p. 12

[9] Ibid., p. 18

[10] Ibid., p. 18

[11] Agamben, Giorgio. Homo Sacer: Sovereign Power and Bare Life, p. 160

[12] Ibid., p. 162

[13] Ibid., p. 163


References

Agamben, Giorgio. Homo Sacer: Sovereign Power and Bare Life. Tr. Daniel Heller-Roazen. Stanford: Stanford University Press, 1998.

Artaud, Antonin. The Theatre and Its Double.

Debord, Guy. The Society of the Spectacle. Tr. Donald Nicholson-Smith. New York: Zone Books, 1995.

Foucault, Michel. “Of Other Spaces.” Architecture/Mouvement/Continuité, October 1984 (“Des Espaces Autres,” March 1967; translated from the French by Jay Miskowiec).

Freud, Sigmund. “Mourning and Melancholia.” From The Standard Edition of the Complete Psychological Works of Sigmund Freud Vol. XIV. Tr. and General Editor James Strachey. London: The Hogarth Press.

The Invisible Committee. Theory of the Bloom. Tr. Robert Hurley. Creative Commons. 2012.

Null Space and Null Existence Under the Spectacle

By Mike Templeton

Exit any stretch of interstate and you will immediately be confronted with the mass of business which defines contemporary American existence. From the multi-lane interchange that draws you off the interstate highway to the seemingly endless retail and restaurant chains, life is one continuous stretch of consumer destinations. Gas stations are full-service outlets selling roller food, beer and wine, lottery tickets, trinkets, ball caps, etc. The gas pumps now have video screens so you can watch sports update videos and some kind of corporate version of the news while you pump gas. From this point onward it is nothing but consumption. Consumer existence is human existence.

The full-service gas stations are generally the first places you encounter upon exiting the highway. BP, Speedway, Pilot—it really does not matter which specific brand you choose; they all offer the same things. There are hotdog rollers with taquitos and three or four forms of processed meat tubes. Gourmet coffee and “cappuccino” machines that pour frothy French Vanilla and Caramel flavored hot drinks loaded with high-fructose corn syrup are available at stations adorned with glossy images of crafted Starbucks-style drinks. There are generally two walls of coolers stocked with every known soft drink. There is a section for a dozen or so brands of beer ranging from the common American corporate brands to the so-called craft brews (all of which are owned and brewed by the corporate American brands). Row upon row of food-substances whose origins are unknown and unknowable. Then you move to the microwavable food stations. Many of these service stations have now partnered with fast food chains so that some sort of drive-through fare is also available. The entire panoply of consumer choice and consumer life is contained within these multi-purpose service stations designed to make your stop from the interstate as seamless and convenient as possible. Of course, the most important commodity on offer is gasoline: the blood that is the life of contemporary life.

Surrounding these service stations, stretching for miles in any direction, are fast food and restaurant chains of all types and varieties. The obvious McDonald’s, Taco Bell, Wendy’s, etc. are punctuated by the more elaborate fare found in Outback, Cracker Barrel, and Chili’s. Food of every known kind can be obtained either as drive-through or takeout, or in the form of an actual dine-in experience with wait staff. Along with food, these thoroughfares will feature Target, Walmart, Home Depot, etc. Each of these big-box retail stores will anchor an entire strip of other retail stores such as Staples and Home Goods. Within these plazas there are also stores with shorter lives: Chinese and Indian take-out, used video game stores, Hallmark stores, Christian bookstores, etc. None of these last long, and each is replaced with something equally transient. Not only is the merchandise consumable and disposable, so are the retail outlets which provide the merchandise.

Nearly any exit off an American interstate will look like what I describe above. Each will be identical. The only changes will be local versions of the same thing—White Castle in the north turns into Krystal in the south. The local fare will reflect the regional identities to the extent that regional identities are easily identifiable across all regions. This is to say that a restaurant in Tennessee will offer something unique to the state of Tennessee only insofar as anyone from outside the region would be able to understand the image. “Hillbilly” will be packaged and marketed so that people from Maine, Minnesota, and California are not in any way mystified by the image of Tennessee. All big-box retail stores are the same in every state and region. Stand in a Target in Ohio and you are standing in the same Target as the one in Nevada. You are effectively in the same place since the place itself is as interchangeable and exchangeable as everything in the store.

Beyond the retail strip and restaurant chains, housing developments stretch off into the distance. Farmland may well still exist, but the developments of new housing will invariably stretch along or through the rural landscape. Each subdivision differs only in the most superficial ways. These are houses built in precisely the same way as all mass-produced commodities. Within each subdivision, all individual structures will be virtually identical, differing only in superficial details. These housing developments are arranged so as to create the illusion of a neighborhood. Streets are arranged in rows or semi-circles, all of which join a central street connected to the main artery of retail and commercial sprawl. The neighborhoods are generally named after local features such as trees and geological forms, none of which can be seen, since all of these things were removed to make way for the retail, restaurant, and housing complexes and sprawls which now occupy the terrain.

Some areas off the interstates are devoted primarily to commercial development. These consist mostly of information processing industries, transportation of goods and services, and corporate headquarters for companies which may still be in the business of manufacturing goods, though the actual manufacturing takes place miles away, often in different countries altogether. Shipping companies occupy large areas in order to facilitate the transfer and movement of consumer goods. Office parks occupy massive geographical areas with enormous parking lots. Surrounding all of the commercial plots are carefully landscaped grounds complete with circulating lakes and manicured greenspaces. The natural environment which once defined these areas, the rural landscapes and natural terrain, was completely cleared and replaced by these artificial landscapes, which give rise to an industry of landscaping and lawn care all its own.

The images described above have overtaken the American landscape. Various regions of the country will differ according to the climate, but the basic layout of consumer life, commercial development, and suburban development will remain constant. There is no place that is significantly different from any other place. Place itself is interchangeable and exchangeable, so that individual places no longer exist except insofar as places have been commodified and branded. Neighborhoods exist because land developers have named them as neighborhoods. Regional identities exist to the extent that they are marketable brands of regional identities. Individual places are unrecognizable, and the space between individual places exists only to be overcome with the greatest speed and convenience. Even the fundamental identity of the rural world and rural culture has been effaced by the encroachment of consumer life and suburban development. The only remnant of rural life is the brand of rural life found in Cracker Barrel, where one can buy “farm-style” breakfast plates stuffed with every example of breakfast food imaginable. These feed people who sit in cars and work in offices and walk no farther than from the front doors of their newly constructed pre-fab homes to their cars.

Although all of this development takes place within the domain of civic authority, the actual force of authority lies with the capitalist ventures which own the land and the points of consumption. This is to say that all actual power and authority remains squarely within the realm of capitalist ownership. Civil law and the concept of a civic arena are subordinate to the private ventures which fuel these forms of consumer development and the consumer culture which drives the private ventures. It is a reciprocal system to the extent that individual demand drives corporate development and corporate development creates the space and conditions for consumer demand. This is a purely spectacular world, one which is driven by forms of authority which far exceed the civic domain. The cultural conditions of the contemporary American terrain are defined by the capitalist drives which fuel consumer culture, and this finds its most extreme expression in the landscape I have described above. As Debord explains:

At the core of these conditions we naturally find an authoritarian decision-making process that abstractly develops any environment into an environment of abstraction. The same architecture appears everywhere just as soon as industrialization begins. (The Society of the Spectacle, 122-123)

American geography has become an environment of abstraction. The Real—any idea of the Real—which may have once existed has been plowed under and replaced by abstract forms of geography designed entirely to facilitate a culture of pure consumption, a culture which produces nothing but consumption and waste. The lives of individuals who live in these regions are defined in terms of consumption and waste. All commodities lose value the moment they are purchased and must endlessly be renewed with new versions of the same thing. This is culture abstracted from material life and rendered entirely in the form of consumption.

Consumer capital is all there is, and virtually all of life is subsumed by consumer capital. Basic needs are provided through a diffuse network of supply so far removed from the sources of food, fuel, electricity, and water that all of these things seem simply to appear ex nihilo. The massive waste generated by this world is likewise removed and landfilled in regions largely cut off from the lives and businesses which generate the waste. Once dumped, it no longer exists. The super-highway interstate system makes all of this possible. A vast system of interstates connects the entire country via a network of space which provides nothing but the means to move past it. The space of the interstate system is nothing but space to be overcome. The sole reason for its being is to be passed by. The interstate system and the worlds which develop along its length and breadth are heterotopias, abstract spaces on which abstract lives are lived in relation to a world which grows ever more abstract. What is the highway but a space of abstraction in which “(t)he undifferentiated daily flow is punctuated only by the statistical, foreseen, and foreseeable series of accidents, about which THEY keep us all the better informed as we never see them with our own eyes—accidents which are never experienced as events, as deaths, but as a passing disruption whose every trace is erased within the hour” (This is Not a Program, 152). As the highway effaces all difference through its endless uniformity and totalizing program of mathematical planning and control, everything else becomes undifferentiated to the point that what marks one “thing” apart from another is lost. Accidents and real deaths are experienced only as transitory moments in which the ceaseless flow becomes momentarily interrupted. And as all space becomes continuous in a seamless flow of undifferentiated space, space itself is lost. Designed to facilitate movement over distances, “the pure space of the highway captures the abstraction of all place more than all distance” (152). This “all place” is also the multiple “places” in which everyday life is now lived in the abstraction of space. Suburban sprawl is pure abstraction laid out in accordance with the abstraction of the highway.

The places which emerge at every exit and on-ramp off and onto the highway are completely interchangeable and exchangeable places. They are nothing but forms of abstract space. The housing developments are abstractions based on a flimsy reference to what once occupied actual places. Where there were farms, expanses of woodland, and even small towns, there are now abstractions of those places that bear metonymical links with what once marked those places as real. The woodland that was clear-cut and plowed under is replaced with a pre-fab development of completely indistinguishable housing units arranged in some geometric pattern and then named after a species of tree which once grew in the woodland. It may be named after a Native American tribe driven from the land long ago, whose name the local school system now takes. The lost Lakota become Lakota High School, and the people who live in this abstract no-place can find a point of identification with the linguistic representation of an idea no one knows anything about and suture that linguistic representation to a life which unfolds amid absolutely nothing but things to be consumed. Words and individual identities are evacuated of all meaning and re-filled with exchangeable meanings that can be traded along the interstate corridor of abstraction. Consumption is life, and life takes place in the abstract space of pure nullity.

All of life is “presided over in unmediated fashion by the requirements of consumption” (Debord, 123). What of the culture of this world? What emerges within this landscape of nullity is a new form of peasantry, one conditioned entirely by the logic of consumer society. Unlike the old peasantry, whose natural ignorance was a function of an isolated world, the new peasantry is conditioned to its ignorance by a cultural logic which denies anything exterior to consumer culture. In this landscape of consumerism,

Natural ignorance has given way to the organized spectacle of error. The “new towns” [subdivisions] of technological pseudo-peasantry are the clearest indications, inscribed on the land, of the break with historical time on which they are founded: their motto might well be: “On this spot nothing will ever happen—and nothing ever has.” (124)

An organized spectacle of error is the inevitable result of a population who derive all knowledge of the world from the spectacle of the image and the mediation of the commodity. Nothing can be known except insofar as it is represented in a consumable form that is exchangeable with any other commodity. Therefore, knowledge itself is a commodity, and if it is not commodified knowledge, it is not knowledge. The break with historical time comes about, at least in part, from the ex nihilo fashion in which these communities spring up around consumer culture and consumer culture springs up around these communities. The process is one of expressive causality. One aspect of consumer life does not precede the other. The entire landscape and culture of the American terrain now simply appears on the horizon complete with everything I described above and much more. Any history of the regions which may have preceded the creation of the consumer landscape is erased along with the very land on which the spaces are built. Since this historical narrative is completely negated, any narrative of the existence of these regions is created from within the same cultural logic by which they come into being. Nothing ever happened here because everything happens exactly the same way every minute of every day. Nothing will ever happen here because everything that could happen is a reproduction of everything else that has ever happened. The term peasantry is apt, since the people whose lives are defined by these regions, and by the forms of culture which define them, constitute a population living in total ignorance of what lies beyond the society of the spectacle.

We are left with a geography of homogeneity and a population which mistakes this homogeneous nullification of life for life itself. There are no spaces to be; only spaces to have.

Michael Templeton is an independent scholar, writer, and musician. He completed his Ph.D. in literary studies at Miami University of Ohio in 2005. He has published scholarly studies and written cultural analysis, creative non-fiction, and poetry published in small independent publications. He currently works as a freelance writer providing articles for a non-profit called the Urban Appalachian Community Coalition. He is also the lead guitar player for the IdleAires, a communications service and information dissemination apparatus operating as a Rock n' Roll trio. He lives in Cincinnati, Ohio with his wife, who is an artist.

Works Cited

Debord, Guy. The Society of the Spectacle. Tr. Donald Nicholson-Smith. New York: Zone Books, 1995.

The Invisible Committee. This is Not a Program. Tr. Joshua David Jordan. Cambridge: Semiotext(e), 2011.

Passing Judgement: A History of Credit Rating Agencies

By Devon Bowers

Credit rating agencies can be useful institutions: ideally, they allow lenders to know the likelihood of a borrower repaying a loan, or whether the borrower should be lent to at all. In the current era, though, such agencies have global power and can affect economies the world over, most notably in the 2007 financial crisis, in which bundled mortgages that were junk received AAA ratings.[1] Given that, it would be prudent to understand their history, how they operate, and the effects they have had historically and currently, especially as a new financial crisis may be looming.[2]

Credit ratings began to take shape in the 1800s due to the risks borne by creditors, which led to several attempts to standardize creditworthiness. One of the most successful experiments occurred in 1841 with the formation of the Mercantile Agency, founded by Lewis Tappan. Tappan wanted to “systematize the rumors regarding debtors’ character and assets,”[3] utilizing correspondents from around the nation to acquire information, report back, and then organize and disseminate that information to paying members. The agency was founded in response to the Panic of 1837, an economic calamity that would have wide-reaching effects not only for Tappan, but for the nation as a whole.

The Bank of the United States

Before delving into the Panic of 1837, we must first examine the Bank of the United States (BUS), as it set in motion the events that would create the Panic.

Alexander Hamilton was the Treasury Secretary under President Washington when the idea of a national bank was being floated, and he submitted a report on the matter in 1790. He supported the creation of a government bank on the grounds that it would allow the US to ascend economically, and therefore politically, on the international stage.[4] This didn’t come out of thin air, however; there was some precedent for such a bank in the Bank of North America, established in Philadelphia in 1781.

Hamilton was primarily drawn to the fact that the Bank of North America “had made money for its investors and [had] operated under a charter granted by the Continental Congress, whose funds had made its establishment possible.”[5] Yet there were severe issues with the bank that foreshadowed the problems to come decades later, mainly regarding speculation. While the bank enjoyed support from businessmen, farmers were staunchly opposed to it: not only were they forced to deal with high interest rates on loans, which could range from 16 to as high as 96 percent annually, but the bank was also criticized for being rather flagrant in loaning out money for land speculation.

In Congress, debates began over the question of creating a national bank. James Madison, representing Virginia’s 15th district, argued that the entire idea was unconstitutional, as he could find nothing in the Constitution which allowed Congress to grant charters or borrow money. Strangely enough, he had previously proposed an amendment to the Articles of Confederation which explicitly noted implied powers. His amendment read:

A general and implied power is vested in the United States in Congress assembled to enforce and carry into effect all the articles of the said Confederation against any of the States which shall refuse or neglect to abide by such determinations.[6] (emphasis added)

This was a rather serious about-face on the issue for Madison.

Massachusetts Congressman Fisher Ames countered those who were against the bank by echoing the findings of Hamilton’s report, “that the bank would improve commerce and industry, [insure] the government's credit, [and aid] in collecting taxes.” He “saw no purpose in the power of Congress to borrow if the agency of borrowing was not available and if the power to establish such an agency was not implied.”[7]

Opposition to the bill proved to be in vain: it passed Congress and was signed by President Washington, receiving a 20-year charter that would run until 1811.

During its initial run, the bank’s purpose was to “make loans to the federal government and [hold] government revenue.”[8] (This was all in the context of a gold- and silver-backed currency system.) When the BUS presented a state bank with its own notes or checks, that bank had to redeem the stated amount in gold and silver, a practice that was unpopular because it made it more difficult for state banks to issue loans.

Many in the business community supported the BUS on the grounds that it kept state banks in check by preventing them from making too many loans “and helping them in bad times by not insisting on prompt redemption of notes and checks.”[9] New businesses would finance themselves by borrowing from the BUS, and when economic hardship struck, they had some breathing room, as the bank did not demand repayment exactly on schedule.

After the bank’s charter expired in 1811, the push to create another bank became caught up in the War of 1812 and the financial circumstances the war placed the country in.

In the first year of the War of 1812, the US saw $7 million of foreign investment leave and an increase of roughly 62 percent in bank note circulation (from $28.1 million to $45.5 million), driven by a jump in the number of state banks from 88 to nearly 200.[10] The country was seeing serious inflation in a war that had only just begun.
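A quick back-of-the-envelope check of those figures (using only the numbers quoted above, and treating “nearly 200” banks as 200) shows the scale of the wartime monetary expansion:

```python
# Back-of-the-envelope check of the wartime figures quoted above from Walters.
# Dollar amounts are in millions; "nearly 200" state banks is treated as 200.

notes_before = 28.1   # bank note circulation before the expansion ($ millions)
notes_after = 45.5    # circulation after the first year of the war ($ millions)
banks_before = 88     # state banks before the expansion
banks_after = 200     # approximation of "nearly 200" state banks

def pct_increase(old: float, new: float) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100

print(f"Bank note circulation grew by about {pct_increase(notes_before, notes_after):.0f}%")
print(f"The number of state banks grew by about {pct_increase(banks_before, banks_after):.0f}%")
# Roughly a 62% jump in paper in circulation and a 127% jump in the number of
# issuers, with no national bank left to discipline note issues.
```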

Businesses were concerned enough about inflation and the lack of a stable currency that some became intimately involved in arguing for a revival of the BUS, among them David Parish, Stephen Girard, John Jacob Astor, and Jacob Barker. There was also the politician John C. Calhoun, then the Congressional Representative of South Carolina’s sixth district, who would become involved in creating a second national bank.

In addition to the financiers and politicians, there was Alexander James Dallas, the United States Attorney for the Eastern District of Pennsylvania and a friend of Treasury Secretary Albert Gallatin. Dallas helped coordinate a meeting in April 1813 between Parish, Astor, Girard, and Gallatin, which resulted in a deal in which the financiers formed a syndicate and purchased $9,111,800 of government bonds at $88 a share, allowing the government to obtain the $16 million it needed to continue funding the war.[11] Still, many businessmen remained concerned about the general economic situation of the country and so pushed heartily for the creation of a second BUS.
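The terms of that 1813 subscription become clearer with a little arithmetic. The sketch below assumes the conventional reading of the figures, namely that $9,111,800 is the face value of the bonds the syndicate took and that “$88 a share” means $88 paid per $100 of face value; both are interpretive assumptions rather than details stated in the text:

```python
# Illustrative arithmetic for the 1813 loan syndicate, assuming (this is an
# interpretation, not stated outright above) that the $88 price is per $100
# of face value.

face_value_taken = 9_111_800   # face value of the bonds taken by the syndicate ($)
price_per_100 = 88             # dollars paid per $100 of face value (assumed convention)
loan_target = 16_000_000       # total the government sought to fund the war ($)

cash_paid = face_value_taken * price_per_100 / 100
discount = face_value_taken - cash_paid

print(f"Cash actually paid in by the syndicate: about ${cash_paid:,.0f}")
print(f"Discount absorbed by the government: about ${discount:,.0f}")
print(f"Share of the $16 million target (by face value): {face_value_taken / loan_target:.0%}")
# Roughly $8.0 million in cash for $9.1 million in obligations; the steep
# discount is one measure of how strained the government's credit had become.
```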

Initially, there was some stumbling about. In January 1814, Calhoun proposed a poorly received scheme in which the bank would be set up in Washington, D.C. and each state would be able to buy into it voluntarily, with the number of bond subscriptions corresponding to each state’s representation in the House, as a way of getting around those who saw the BUS as unconstitutional.

Calhoun’s failed attempt only made Barker push harder for the establishment of a national bank, arguing for it in the National Intelligencer, a daily newspaper read by many in the nation’s capital. This pushed Astor, Parish, and Girard to discuss the situation in greater detail via correspondence; after writing up an outline, they began quietly disseminating it among other capitalists and urging Congressional representatives to take up the cause.

In April 1814, the Madison Administration, realizing the impossibility of raising $25 million for the war effort, reluctantly gave in to the creation of a second BUS, with the House passing a motion by a 76 to 69 vote.[12] Shortly after this was announced, Parish and Astor corresponded with one another, with Parish noting that the time was ripe to increase the pressure on politicians.

Both men followed through, but kept their contacts quiet until they knew the administration was all in. Parish contacted Dallas, who offered his services to defend the constitutionality of the Bank, writing letters to Senators as well as to Acting Treasury Secretary William Jones, who had taken over after Gallatin left to help negotiate a peace treaty with the British.

Dallas wrote, in part, that the constitutionality of the bank was disputed “only by a few raving printers and rival banks”[13] and that it should be established. However, within a week of the aforementioned House motion, rumors began to circulate that Britain was looking to negotiate an end to the war. This gave Madison, who had only passively supported the Bank, an opening to withdraw his support, and the House promptly did the same.

In February 1813, Acting Treasury Secretary William Jones, working on behalf of the President, had offered Dallas the full position of Treasury Secretary, which Dallas declined as too great a financial sacrifice. The situation changed in 1814: aware of Astor’s plan to base the new bank’s capital in real estate, Dallas contacted Secretary of War James Monroe to say that he was now interested in the position, if it were still available, and in letters to Jones pushed heavily for the creation of a national bank to predict and collect revenue.

While this conversation was going on, Jones “predicted that the government would have a deficit of almost $14,000,000 by the end of 1814, declared that $5,000,000 more revenue must be provided if the war were to continue through 1815, but made no recommendation as to sources of additional revenue.”[14] This was quickly followed by his resignation. Realizing that Dallas was one of the few people on good terms with both his administration and the business community, Madison submitted Dallas’ name for Treasury Secretary on October 5, 1814, with the Senate confirming the nomination the following day.

Immediately after taking the position, Dallas began planning for a national bank similar to its predecessor, but with some significant differences: it would be chartered for 30 years, operate out of Philadelphia, and have a capital of $50 million, of which $20 million would be owned by the government and the rest open to private subscription. In addition, the government would choose only 5 of the bank’s 15 directors, the remainder being chosen by the private individuals holding government stocks.

When the plan was presented to the House Ways and Means Committee, though, some changes were made to accommodate financial and political realities: the proposal now called for the bank to be chartered for 20 years, for $6 million of its capital to be in coin, and for the bank to immediately loan the government $30 million. Dallas moved to garner support not only with the House committee but also with a special Senate committee on the matter, while Parish and Girard went to Congress to lobby in favor of the bank.

Strangely enough, one of the plan’s biggest opponents was Congressman John C. Calhoun, who devised his own scheme that he thought would unite both sides.

The Calhoun plan called for the creation of a national bank with a capital base of $50 million, one-tenth of which was to be paid in specie and the remainder in new treasury notes. […] To satisfy the Calhoun supporters, the bank would have to pay in specie at all times, and would not be required to make loans to the government. To gain the support of the Federalists, the government was prohibited from participating in the direction of the bank, and there was to be no provision that subscriptions be made only in stock that was issued during the war.[15]

It would seem that the situation had come to an impasse, yet Dallas had a trump card: maturing Treasury notes. He announced to Congress “that the government would have $5,526,000 due in Treasury notes on January 1, 1815, with at most $3,772,000, including unavailable bank deposits to meet them.”[16] This convinced the Senate to pass the bill, but it failed in the House, where anti-bank elements led by New Hampshire Congressman Daniel Webster pushed back heartily and killed it.

On February 13, 1815, news reached Washington that the US and Britain had signed a peace treaty at Ghent the previous December. The end of the war allowed the differences between Treasury Secretary Dallas and Congressman Calhoun to thaw, as there was no longer a need to try to unite everyone, only to push forward with the bank. The two men got together and hammered out a plan for the bank, which passed Congress and was signed into law on April 10, 1816, chartering the bank for 20 years.

The Death of the Second Bank

The bank’s charter was set to expire in 1836. It was only as the Bank neared the end of its life that a struggle broke out over its renewal, with Andrew Jackson leading the charge against it.

In his earlier years, Jackson had had a business deal involving paper notes go sour, leaving him with a bad taste in his mouth. In 1795, Jackson sold 68,000 acres to a man named David Allison in hopes of establishing a trading post, taking Allison’s promissory notes as payment and then using the notes as collateral to buy supplies for the post. When Allison went bankrupt, Jackson was left holding the debt for the supplies.[17] It would take him fifteen years to return to a stable financial situation.

There were also deeper reasons for his anti-bank stance than personal animosity. Jackson was among those people who thought that banking

was a means by which a relatively small number of persons enjoyed the privilege of creating money to be lent, for the money obtained by borrowers at banks was in the form of the banks' own notes. The fruits of the abuse were obvious: notes were over-issued, their redemption was evaded, they lost their value, and the innocent husbandman and mechanic who were paid in them were cheated.[18]

This mistrust of banks would put him on a collision course with the BUS and its president, Nicholas Biddle.

Nicholas Biddle was a former Pennsylvania state legislator who became President of the BUS in 1823. Considered a good steward of the bank, he ensured that it “met its fiscal obligations to the government, provided the country with sound and uniform currency, facilitated transactions in domestic and foreign exchange, and regulated the supply of credit so as to stimulate economic growth without inflationary excess.”[19] However, he was also undemocratic as he “not only suppressed all internal dissent but insisted flatly that the Bank was not accountable to the government or the people."[20] Actions such as these simply reinforced Jackson’s disdain for the institution.

Jackson became vehemently anti-Bank in 1829, when Biddle, attempting to win Jackson’s friendship, proposed a quid pro quo: the Bank would purchase the remaining national debt, thus eliminating it, something Jackson greatly wanted done, and in exchange the Bank would be rechartered years earlier than expected. An early recharter would allow the Bank’s stock to appreciate and thus provide a major increase in shareholders’ dividends.[21] Instead of seeing this as an olive branch, though, Jackson viewed it as the institution attempting to use bribery and corruption to ensure its continued existence, and it turned him wholly against the Bank.

It was in 1832 that the conflict between the two men came to a head over the continued existence of a federal bank.

The National Republicans, a party formed out of the anti-Jackson wing of the old Democratic-Republican coalition, nominated Kentucky Senator Henry Clay as their presidential candidate in 1831. Believing he could use the issue of the Bank to beat Jackson, Clay persuaded Biddle to seek renewal of the Bank’s charter in 1832 rather than 1836.

Clay did have some backing: the House’s and Senate’s respective financial committees had issued reports in 1830 “finding the Bank constitutional and praising its operations[. It should be noted that] Biddle himself had drafted the Senate report [and the] Bank paid to distribute the reports throughout the country.”[22] Clay’s supporters and allies pushed a reauthorization bill through both the House and Senate, but on July 10, 1832, Jackson vetoed it, and the Senate failed in an attempted override.

The recharter was dead, but what of the Treasury surplus?

After the recharter of the Bank of the United States was successfully vetoed, Jackson decided to split the Treasury surplus among certain favored banks, the ‘pet banks’ as they came to be known. The term is not fully accurate, however: while funds did go primarily to banks friendly to the administration, and “six of the first seven depositories were controlled by Jacksonian Democrats,”[23] some banks whose officers were anti-Jackson also received funds, such as in South Carolina and Mississippi.

This divvying up of the Treasury’s surplus funds would set the stage for the Panic of 1837.

Panic of 1837

Due to the massive cash influx, people began to set up their own banks, hoping to get a slice of the government pie. From 1829 to 1837, the number of banks more than doubled, from 329 to 798. Many of these new banks were wretched, being “organized purely for speculative purposes [with] comparatively little of the capital required by law [actually being paid and] many of the loans [being] protected by collateral of fictitious or doubtful value[.]”[24]

This led to a fight between banks for deposits and meant that large amounts of money were flowing all over the country, with little regard for whether those funds were being placed in areas with viable markets and stable economies, where the money could be lent out with confidence that it would be invested and repaid.

With the national debt paid off in January 1835, a surplus created by rising cotton prices and increased public land sales,[25] and newly collected tax money being sent to the banks, these banks were effectively receiving an interest-free loan, which they could profit from by lending at interest.

Such lending practices would have major repercussions in the western US. Under the Indian Removal Act of 1830, huge swaths of land were opened up for settlers to claim, but the land was also open to speculators. These individuals would go west and purchase large amounts of land to sell to newly arriving settlers at massively marked-up rates. They were empowered by the banks: because the Treasury had handed funds to state banks, those banks loosened their lending policies, giving speculators the access to credit they needed to buy up much of the land. The West had become so infested with speculation that one Englishman went so far as to say, “The people of the West became dealers in land, rather than its cultivators.”[26]

There was also the problem of professional land agents who worked for capitalists in the East. These agents would go out west, charging some type of fee, whether a share of the transactions or a flat five percent, and purchase land for their employers, in some cases relying on books rather than ever physically seeing the land. The land would then be rented out, and in the meantime further money was made by loaning funds to frontiersmen at rates ranging from 20 to 60 percent.

This real estate bubble was taking a heavy toll on the nation’s currency. The recognized currency was gold and silver coin, known as specie. Since there was not enough coinage to go around, paper money supplemented the money supply and was technically redeemable for specie; effectively, the US dollar was backed by gold and silver.[27] With money moving from the US Treasury to state banks, which in turn lent it to western speculators, the paper money supply grew to the point that there was not enough specie to back it, raising inflationary concerns and prompting the Specie Circular of 1836 as an attempt to curtail the problem.

The Circular “required that only gold or silver be accepted from purchasers of land, except actual settlers who were permitted to use bank notes for the remainder of the year.”[28] The entire structure, which was based on paper currency and credit, came tumbling down, with land speculation halting almost immediately.

The credit collapse caused a run on the banks. The citizenry, “alarmed by the money stringency, by the numerous failures in the great commercial centers, by reports that the country was being drained of its specie by the English, and convinced by the Specie Circular that the paper money which they held would soon become worthless,”[29] rushed to the banks to redeem their paper money for specie. With so many people demanding specie, the banks could not meet demand and suspended such payments altogether.

All of this brought about the Panic of 1837, which shattered credit markets and sent the nation into a painful recession, driven in large part by the lending policies described above, under which virtually anyone could be handed a line of credit.

It was this aftermath that led to the creation of some of the first credit reporting agencies.

Credit Reporting

Early nineteenth-century merchants relied mainly on personal ties to decide with whom to conduct business, as many of them would travel from the west and south to eastern coastal cities and purchase goods from the same people again and again. As trade expanded, merchants increasingly wanted to extend credit to people they did not know. To get information on the creditworthiness of these individuals, they “would turn to traveling salesmen to appraise those asking for loans, however, this proved to be a problem as the salesmen, wanting to increase his sales, would paint bad creditors as good, thus allowing for loans to be given.”[30] In search of less biased reports, some businesses sought out information from agents whose only job was credit reporting. Baring Brothers was the first to do so, in 1829, followed by another international banking house, Brown Brothers; both developed systematic credit reports.[31]

The first person to establish an agency whose sole objective was credit reporting was Lewis Tappan, “an evangelical Christian and noted abolitionist who ran a silk wholesaling business in New York City with his brother Arthur.”[32] Coming out of the Panic of 1837 nearly bankrupt, Tappan launched the Mercantile Agency in 1841 to create a national system of credit checking, one that utilized both local residents and credit agents to judge a person or company’s creditworthiness.

Tappan began the work of his agency by sending a circular to lawyers and others in faraway locations, inviting them to become his correspondents with the hope of “[securing] sufficient data regarding the standing of traders in other cities, towns, country hamlets, and trading posts to enable New York City wholesalers to determine what amount of credit, if any, could safely be accorded.”[33]

New York wholesalers faced a particular credit problem. They generally extended a line of credit to local distributors who sold their products in a given area. Rather than asking for cash payment for the goods, the wholesaler supplied them at a discounted price on credit; the distributor then sold at the regular price and repaid the wholesaler out of the difference, terms that built in an interest charge and a premium to cover the wholesaler’s risk.
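However one parses the exact arrangement, the underlying pricing logic is straightforward: whatever a credit customer owes above the cash price has to cover the wholesaler’s cost of waiting for payment plus a cushion against default. The sketch below is purely hypothetical, with every number invented for illustration:

```python
# Hypothetical illustration of how a credit markup can be decomposed into an
# interest charge plus a risk premium. None of these numbers are historical.

cash_price = 88.00       # price per lot if the distributor paid cash up front ($)
credit_price = 100.00    # price per lot owed when buying on credit for the season ($)
term_interest = 0.06     # assumed cost of money over the credit period (~6%)

markup = credit_price - cash_price            # total premium for buying on credit
interest_component = cash_price * term_interest
risk_premium = markup - interest_component    # cushion against the borrower defaulting

print(f"Credit markup per lot: ${markup:.2f}")
print(f"Interest component: ${interest_component:.2f}")
print(f"Implied risk premium: ${risk_premium:.2f}")
# The less a wholesaler knows about a distant borrower, the larger the risk
# premium has to be, which is precisely the information gap that credit
# reporting set out to close.
```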

To price that risk correctly, wholesalers relied on their agents’ reports about the financial trustworthiness of local borrowers, but they could be deceived: an agent could falsify the information he sent back to his employer, or the agent and the shop could conspire together against the creditor.

Tappan’s Mercantile Agency offered a partial fix to these problems by acting as a de facto surveillance system on borrowers: an independent source of information from which creditors could gauge their reliability. Correspondents sent twice-yearly reports to Tappan’s New York office in early February and August, ahead of the spring and fall trading seasons, and these were copied into large ledgers. Subscribers would call at the Mercantile Agency’s office to inquire about a current or potential credit recipient, and a clerk would read the relevant report aloud.[34]
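Viewed abstractly, the Agency was an information system: reports flow in from correspondents twice a year, get copied into a central ledger keyed by trader, and subscribers query that ledger through a clerk. A schematic sketch of that flow, with every name and report invented for illustration, might look like this:

```python
# Schematic model of the Mercantile Agency's workflow as described above.
# All names and report contents are invented; this illustrates the information
# flow, not a reconstruction of the Agency's actual records.

from collections import defaultdict

# Central ledger: trader name -> list of correspondent reports, newest last.
ledger: dict[str, list[str]] = defaultdict(list)

def file_report(trader: str, season: str, report: str) -> None:
    """A correspondent's semiannual report is copied into the ledger."""
    ledger[trader].append(f"[{season}] {report}")

def read_aloud(trader: str) -> str:
    """A clerk reads the latest entry to a subscriber inquiring about a trader."""
    entries = ledger.get(trader)
    return entries[-1] if entries else "No report on file."

# Hypothetical example usage:
file_report("J. Smith, dry goods, Cincinnati", "Feb 1845",
            "Pays promptly; modest stock; considered safe for small credits.")
file_report("J. Smith, dry goods, Cincinnati", "Aug 1845",
            "Rumored to be overextended; caution advised.")

print(read_aloud("J. Smith, dry goods, Cincinnati"))
```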

While this helped, there were major weaknesses in the system, as the “correspondents [many of them part-time] relied on their general, personal knowledge of businessmen and conditions in the town or area of their responsibility,”[35] knowledge that was subject to gossip and rumor. During the 1860s, changes were made that increased professionalism by bringing on paid, full-time reporters, and by the 1870s most major cities had them. Methods also changed, coming to rest on direct interviews and financial statements signed by borrowers, the latter improving greatly in the 1880s after courts ruled that individuals who knowingly provided false information to credit reporters could be charged with fraud.

The industry would continue to evolve with the arrival of the 20th century, which saw the origins of today’s three major ratings agencies: Moody’s, Standard and Poor’s, and Fitch Group.

Notes

[1] Matt Krantz, “2008 crisis still hangs over credit-rating firms,” USA Today, September 13, 2013 (https://www.usatoday.com/story/money/business/2013/09/13/credit-rating-agencies-2008-financial-crisis-lehman/2759025/)

[2] Larry Elliot, “World economy is sleepwalking into a new financial crisis, warns Mervyn King,” The Guardian, October 20, 2019 (https://www.theguardian.com/business/2019/oct/20/world-sleepwalking-to-another-financial-crisis-says-mervyn-king)

[3] Sean Trainor, “The Long, Twisted History of Your Credit Score,” Time, July 22, 2015 (https://time.com/3961676/history-credit-scores/)

[4] H. Wayne Morgan, “The Origins and Establishment of the First Bank of the United States,” The Business History Review 30:4 (December 1956), pg 479

[5] Ibid, pg 476

[6] Sheldon Richman, “TGIF: James Madison: Father of the Implied-Powers Doctrine,” The Future of Freedom Foundation, July 26, 2013 (https://www.fff.org/explore-freedom/article/tgif-james-madison-father-of-the-implied-powers-doctrine/)

[7] Morgan, pg 485

[8] Jean Caldwell, Tawni Hunt Ferrarini, Mark C. Schug, Focus: Understanding Economics in U.S. History (New York, New York: National Council on Economic Education, 2006), pg 187

[9] Federal Reserve Bank of Minneapolis, A History of Central Banking in the United States (https://www.minneapolisfed.org/about/more-about-the-fed/history-of-the-fed/history-of-central-banking)

[10] Raymond Walters Jr., “The Origins of the Second Bank of the United States,” Journal of Political Economy 53:2 (June 1945), pg 117

[11] Walters Jr., pg 118

[12] Walters Jr., pg 119

[13] Ibid

[14] Walters Jr., pg 122

[15] Edward S. Kaplan, The Bank of the United States and the American Economy (Westport, Connecticut: Greenwood Press, 1999) pg 50

[16] Walters Jr., pgs 125-126

[17] The Lehrman Institute, Andrew Jackson, Banks, and the Panic of 1837 (https://lehrmaninstitute.org/history/Andrew-Jackson-1837.html)

[18] Bray Hammond, “Jackson, Biddle, and the Bank of the United States,” The Journal of Economic History 7:1 (May 1947), pgs 5-6

[19] The Lehrman Institute, Andrew Jackson, Banks, and the Panic of 1837 (https://lehrmaninstitute.org/history/Andrew-Jackson-1837.html)

[20] Ibid

[21] Daniel Feller, “King Andrew and the Bank,” Humanities 29:1 (January/February 2008), pg 30

[22] John Yoo, “Andrew Jackson and Presidential Power,” Charleston Law Review 2 (2007), pg 545

[23] Harry N. Scheiber, “The Pet Banks in Jacksonian Politics and Finance, 1833–1841,” The Journal of Economic History 23:2 (June 1963), pg 197

[24] Vincent Michael Conway, The Panic of 1837, Loyola University, https://ecommons.luc.edu/cgi/viewcontent.cgi?article=1469&context=luc_theses (February 1939), pg 21

[25] The Lehrman Institute, Andrew Jackson, Banks, and the Panic of 1837 (https://lehrmaninstitute.org/history/Andrew-Jackson-1837.html)

[26] Paul Wallace Gates, “The Role of the Land Speculator in Western Development,” The Pennsylvania Magazine of History and Biography 6:3 (July 1942), pg 316

[27] Robert Samuelson, “Andrew Jackson Hated Paper Money As Is,” RealClearMarkets, April 27, 2016 (https://www.realclearmarkets.com/articles/2016/04/27/andrew_jackson_hated_paper_money_as_is_102137.html)

[28] Gates, pg 324

[29] Conway, pg 22

[30] James H. Madison, “The Evolution of Commercial Credit Reporting Agencies in Nineteenth-Century America,” Business History Review 48:2 (Summer 1974), pg 165

[31] Madison, pg 166

[32] Josh Lauer, “From Rumor to Written Record: Credit Reporting and the Invention of Financial Identity in Nineteenth-Century America,” Technology and Culture 49:2 (April 2008), pg 302

[33] Lewis E. Atherton, “The Problem of Credit Rating in the Ante-Bellum South,” The Journal of Southern History 12:4 (1946), pg 540

[34] Madison, pg 167

[35] Madison, pg 171