
GADFLY

Issue 2, Spring 2019

Columbia University Undergraduate Philosophy Magazine


EDITORS-IN-CHIEF Cecilia Bell, Emilie Biggs, Saikeerthi Rachavelpula
CREATIVE DIRECTOR Sam Wilcox
SENIOR EDITORS Nick Allen, George Menz
EDITORS EMERITUS Nicholas Andes, Aaron Friedman, Alex Garnick
LAYOUT Sophie Kovel & Sam Wilcox

This issue is dedicated to Isaac Levi, John Dewey Professor of Philosophy Emeritus at Columbia University


CONTENTS

FROM THE EDITORS
THE PHENOMENOLOGY OF FOOD by Saikeerthi Rachavelpula
BODILY TOPOGRAPHIES by Emilie Biggs
ON POLITICAL RUPTURE by Ravi Maddali
INTERVIEW WITH DENA SHOTTENKIRK
SELF-DECEPTION AND BULLSHIT by Cecilia Bell
ON THE DEATH PENALTY by Robert Cohen
THE TELEOLOGY OF THE CLOSET by Sam Wilcox


RUPTURE

Even more than we wished to live up to the high expectations set by the inaugural issue of the Gadfly, we wished to dedicate this issue’s writing and artwork to Isaac Levi (1930–2018), the John Dewey Professor of Philosophy Emeritus here at Columbia University. In our attempt to commemorate Professor Levi, we looked to his important work in the school of American Pragmatism. Like Richard Rorty, Levi revived its study, revealing its deep implications for American thought, and inevitably for American politics.

There is a sense in which the notion of rupture is deeply ingrained in the present political climate. It is one of those words that refuses to be defined, better approached through its qualities than through definitions. For that very reason, it is one of those words that carries the complicated meaning and force necessary to characterize the complex systems that constitute our social, political, and experiential lives without reducing them. Our politicians and our hyperbolic news sites love to remind us that we live in fragmentary times. America’s fragmented states rearrange themselves in new, fragmented orders along ideological and cultural lines. Social media sites give users access to an entirely curated picture of current affairs, splitting newsworthy occurrences into shards of whole stories with different angles and reflections. Slavoj Žižek and Jordan Peterson debate without once listening or responding to one another. Xenophobic reactions to globalization have made it all the more important to consider global notions of rupture. Indeed, Gayatri Spivak’s epistemic violence is a kind of rupture, a violent dissonance that forces us to think outside the Western mode of historicizing. If the recent past has been constituted by hairline fractures, the event of rupture has now arrived.

It is easy to fall into a trend of apocalyptic rhetoric. It is sometimes even important to do so: ruptures have a permanency, and current rifts have the potential to be catastrophic in a local and global sense


if left unattended. In this issue of the Gadfly, we hope to tend to the seriousness of the ruptures we are witnessing around us, and to contribute to the chorus of young voices attempting to deal with the world they have inherited. We also hope to situate this world and this rhetoric within a history of rupture, and to consider the value of such a cataclysmic outlook in the context of previous attitudes to moments of sudden, violent change. Yet we are simultaneously resistant to the treatment of rupture as a wholly destructive phenomenon. Putting this issue together, we wanted to examine what rupture and fragmentation can mean as instruments of construction as well. Anaïs Nin once described truth as a “laborious mosaic.” Friedrich Schlegel characterized the project of the historian as “piecework.” The idea of rupture requires a recognition that a ‘whole’ once existed – in this issue, we want to explore how fragments can be rearranged to rediscover this ‘whole,’ or to create something entirely new. We hope, in each article and in the issue as a whole, to piece together different strains of rupture, to pick up different fragments from different piles of debris, and to see what picture they form when placed together.

It would be narcissistic to treat this issue as a sort of bandage for the ruptures occurring around us. We are hesitant even to go so far as to label it a diagnosis. We hope simply to entertain an idea of rupture, to approach it with curiosity, to poke at weighty concepts and write down the results – in the Nietzschean tradition, to relearn the seriousness of play. It has been a pleasure to sit with rupture as we have. It is not often that one gets the freedom to dirty one’s hands with a subject in this way. We thank our contributors, both writers and artists. We also thank the Columbia University Department of Philosophy for taking a chance on the Gadfly one year ago, granting us the possibility of producing this second issue.
Whether you approach this issue as a curious reader or as a dedicated philosopher, we now hold open this area of play to you. “You, higher men, learn, I pray you – to laugh!”




The Phenomenology of Food

By Saikeerthi Rachavelpula



Gliding down the street with a yoga mat in hand, a friend of mine, who had recently been broken up with, was recounting how wholly transformed she felt after her donation-based yoga class on the Upper West Side. Opening her hips had allowed her to let go of all the knotted resentment she had been holding onto. She is now more open towards the world, able to enjoy exercising, cooking, painting, and the company of others – more authentic, in Heidegger’s eyes. Naturally, there was a part of me, the largest part of me, that didn’t believe her. But I continued to listen. We went to her apartment and assembled a kale salad with Veganaise and nutritional yeast dressing. What I found most strange about all this was not her insistence that a yoga class had miraculously allowed her to get over her breakup, but the way in which the claim was infused with the jargon of authenticity, including the performative ethics of the “back to the land” movement, which points Whole Foods shoppers to the East, towards the mystic authenticity of yoga, turmeric, brown rice, matcha, and açaí. But culinary orientalism is just one facet of the current health food movement, which has its roots in the 1960s as counterculture, as a protest against the kind of opaque corporations that Upton Sinclair’s The Jungle exposed. Since then, it has evolved and

entangled itself with a more “flexible capitalism” that makes room for pre-existing remedies of apple cider vinegar, dark chocolate, antioxidants, and fiber. Most recently, it has aligned itself with environmentalism, adopting sustainability as one of its ethical goals. Eating food is no longer about managing hunger. Especially in the case of environmentalism, it is clear that there are valuable tenets in the current health food movement. Of course, it is important to be conscious of what you are eating and to reduce pernicious environmental impacts whenever possible. But something quite different is going on when spiritual-physical claims of increased well-being begin to suggest a set of normative ethical behaviors, when the soul-cycler claims the $7 turmeric latte they just bought from a local coffee shop makes them feel better than its turmeric-free counterpart. Already, we are attuned to the irony in claiming that purchasing certain goods makes someone morally good – what does it mean to be an ethical capitalist? But when the goods become foods and the foods become “good for you,” the irony appears to subside and moral extrapolations find their footing. The problem here begins as an existential one: the asymmetry between how one relates to one’s own body and


to others’ precludes skepticism. We cannot coherently deny the magical effects of the overpriced turmeric latte without slighting the other person’s lived experience. Thus, the moral suggestion that our soul-cycler who drinks “ethically-sourced,” “small-batch,” morally-infused lattes is morally good grounds itself in what seems to be biological fact. In his essay collection Against Everything, Mark Greif explains how health – meticulously taking care of one’s own body through diet and exercise – has become the perfect residence for moral normativity. He observes that exercise has become a “basic biological process of self-regulation.” And the regulation becomes economic when health anticipates life-span. By keeping fit, we preserve ourselves for an increased amount of time:

The person who does not exercise, in our current conception, is a slow suicide. He fails to take responsibility of his life. He doesn’t labor strenuously to forestall his death. Therefore, we begin to think he causes it... The nonexerciser is lumped with other unfortunates whom we socially discount. Their lives are worth a percentage of our own, through their own neglect... “Don’t you want to ‘live’?” we say. No answer of theirs could satisfy us.

Standards of personhood are established. Now we have a moral code that looks down upon degenerative bodies and discounts them from the order of society – not too conceptually different from the “mad” lepers whom Foucault identifies as the degenerative bodies of the medieval period. But what Greif does not identify is the philosophical problem that makes the biological foothold for moral claims possible at all. As mentioned earlier, this is an existential problem dealing with the phenomenology of human bodies. Famously, in Being and Time, Heidegger admits that “this ‘bodily nature’ hides a whole problematic of its own, though we shall not treat it here.” Already, we start to see the complexity of asserting claims regarding the body. Before arriving at the phenomenological account, for which we read Merleau-Ponty and Heidegger, we trace the epistemic roots of the problem with Wittgenstein. To Wittgenstein, the sentence “I know that I am in pain” is considered ungrammatical, wholly unintelligible, while the assertion “I know that he is in pain” is epistemically safe. This is because there is an asymmetry between the two claims. When we ask for justification for the third-personal claim, we can respond “he is wincing,” “he is clutching his knee,” or “he is crying.” But for the first-personal claim, there is no such justification that does not already refer to the assertion wanting to be proposed. We can’t prove it in the same way we can point to external signifiers in the third-personal case. Wittgenstein wouldn’t understand the sentence “I know that kombucha makes my stomach feel better.” Already, we notice the epistemic rupture involved in bodily perception, one that severs the first-person account from the third. Merleau-Ponty complicates the issue with a phenomenological account in his discussion of proprioception – the manner in which one’s body senses its own position and movement. Following Schopenhauer’s famous adage that “the eye is not a viewer of itself,” he identifies that we don’t perceive others’ bodies in the same way we do our own, and that there are serious philosophical consequences. The body is not simply an object in the world which we can substitute as a noun in a sentence; the body is necessarily situated in the world, and the body we experience is the body of action. In Phenomenology of Perception, Merleau-Ponty studies the brain-damaged patient Schneider, the famous example of the dissociation between the lived body and the objective body. When a mosquito stung him, Schneider was unable cognitively to identify

where the mosquito had stung him, yet he was able to scratch his leg in exactly that location. There is a certain non-observational relation we have with our own bodies, partly constituted by what Merleau-Ponty labels motor intentionality – the non-physical affordances we make, our non-deliberate ways of perception. Motor intentionality is precisely the capacity Schneider lacked. What all this means is that we can’t study the body or make claims concerning the body in a systematic, theoretical manner. It is necessarily lived in, and we experience it only from within its worldly embedding. Thus, it cannot be reduced to a circuitry of muscles, bones, and nerves. And perception does not so closely track the causal mechanisms which are thought to constitute our biology. At this point, it might seem unclear how the peculiarity of our bodily experience is related to health food. It seems a somewhat benign notion that we can’t have a purely theoretical conception of our body when we can only sense it from within a mysteriously first-personal and necessarily embedded point of view. But how does this relate to the ideology of health food? To get there, we look to Heidegger and realize that phenomenology has a necessarily heuristic structure. For Heidegger, man is necessarily with others – being-with is an a priori transcendental condition for being. Other people inescapably comprise a world we too inhabit (hence why Being is also Being-in-the-world), and this world is forged by societal heritage and cultural practice. Because we share the same mode of being as others (other Daseins) while having a unique sense of ownership over our own being, the way in which we experience our being is both third- and first-personal. And we flip through these perspectives interchangeably and simultaneously, sometimes without being directly aware. Normativity then finds its place in the way we see ourselves. The heuristic structure of phenomenology makes it such that we come to understand our own being, our own experience, by utilizing the social tools of language, cooperation, and joint activity. There is a part of us that must understand our own experience just as everybody else does. We might find ourselves just doing what one does: eating salads because that’s just what healthy, happy people do. And we come to understand their effects in a positive, self-serving manner. Most of the time we really don’t know whether that salad improved our mood – our bodily perception is too complex to yield such an abstracted causal deduction. For Heidegger, this is precisely what constitutes inauthentic being. It is a manner of

existing that is totally general, devoid of individuality and of the attuned ability to resist reducing one’s experience to platitudes, to what everybody else is doing and thinking, to what makes one fit in. When we are in the mode of inauthentic being, we find ourselves expecting rather than anticipating. In a way, we know how the salad will make us feel before we take our first bite. We expect certain results rather than remaining open towards them, leaving possibility for a nuanced individual experience. Authentic being, on the other hand, is that mode which is true to oneself, wholehearted and responsive to the idiosyncrasies of being. As Heidegger identifies, the problem is that we are necessarily inauthentic, because we rely on others and the heuristics around us to even begin the process of shaping our individuality. In this way, however, we leave the door open for moral normativity. We experience it precisely because we must rely on it in some capacity. In reality, our biological processes are often opaque to us, but as human beings, we necessarily seek to understand them through heuristics. Because moral normativity is something within our grasp, something we inherit and are thrown into, it is something we can choose to sway ourselves by and to propagate. Thus, taste becomes malleable, self-serving. It is a slave to normativity laced with moral judgments, while maintaining the appearance that it is rooted in something biological or empirical, something that comes authentically from “within.”

Still, I do not believe my friend, the one who claimed to feel free from the mental grief of a breakup, the one who tried to couch that claim in her increased physical well-being following a yoga class. In a way, hers is the perfect assertion to make: the problem of bodily perception gives it philosophical shape, unfalsifiability. But the real reason we should be suspicious of such wellness claims is not an epistemic one; it does not matter to us that their unfalsifiable status disqualifies them from any supposed achievement of the status of knowledge. The real reason is that these claims put science in the service of ignoring the larger socio-economic consequences associated with the movement. It is no coincidence that SoulCycle classes are nearly $40 an hour. It is no coincidence that Goop’s Yoni Eggs are still selling to wealthy Hamptonites despite the $145,000 lawsuit. And it is no coincidence that the large majority of those participating in these wellness movements are white. As auspicious consumption comes to replace conspicuous consumption,1 we cannot ignore the very real socio-economic implications of such cultural movements. Moral capital is no substitute for economic guilt, just as green juice is no substitute for a real meal.

1 Auspicious consumption is an allusion to the New York Times article “What The Rich Won’t Tell You” regarding the emergent class of wealthy left-leaners who no longer strive to show off their wealth in a conspicuous way. Instead, they attempt to hide their wealth and privilege, “distancing themselves from common stereotypes of the wealthy as ostentatious, selfish, snobby, and entitled” to imply an increased moral worth.



Bodily Topographies

By Emilie Biggs



It is a dreary November night. In the solitary chamber at the top of the house, Victor Frankenstein stands, ready to animate the work his hands have produced. We, the readers, have been present for this labor: the collection of body parts, their disassembly, their reassembly, the tentative dream of an entity “eight feet tall.” The completed body lies limp and vulnerable – the physical manifestation of bodies of thought, the ideologization of the physical body. A patchwork of production, a fragile little thing. Yet one step in the meticulous record of Frankenstein’s great work is missing: not once in the novel do we see Frankenstein pick up a needle and begin to sew. This is not to criticize Mary Shelley’s own masterpiece, but to identify a gap within the historical concern with the production of art and text. Suture – as labor, as craft, as writing, as the simultaneous overcoming and monumentalization of bodily division – stands as a sort of loose thread in this historical interest, something to be generally ignored, or perhaps idly picked at in moments of leisure. I propose we begin to pull on it. The early history of suture itself exists as a sort of patchwork, composed of fragments of writing and image, interspersed with periods of development in textiles and in medicine. One record exists in the

Scottish Society of the History of Medicine’s History of Sutures, which traces the practice from ancient Babylon to 20th-century Scotland. According to this text, the first record of suture is found in the Edwin Smith Papyrus, the oldest known surgical text, dating back to Egypt around 1600 B.C.E., which states of the activity that “thou shouldst draw together for him his gash with stitching.” Roughly 1600 years later, around 30 A.D., the Roman physician Aurelius Cornelius Celsus tells us in his treatise De Medicina that sutures should be “soft, and not over twisted, so that they may be more easy on the part.” Centuries later still, Roger of Salerno of the Schola Medica Salernitana – the first known medieval medical school – recommends in his Practica Chirurgiae the use of silk in suture, to “hold the wounded edges together,” and the technique of “interrupted sutures about a finger-breadth apart.” These texts are crucial in understanding historical conceptions of suture-as-techne in their textuality. They serve as guidelines, as histories, and as ideologies of the body. Something done – stitching – is transformed into something written, with the intention that these writings be translated back into action through their instructive function. Yet as much as suture (and the body) has a textual history, it also has a visual history.



The man considered the ‘father of anatomy,’ Andreas Vesalius, born in 1514, was as much artist as he was physician, and founded the anatomical sciences largely through groundbreaking illustrations of the body reliant upon recently developed Renaissance art techniques. This means that the origin of anatomy as a science is a gallery of anatomic images, and this gallery has been significantly added to since its inception. The corpus of Ambroise Paré, a 16th-century surgeon hugely influential in the development of ‘dry suturing,’ includes decorated, looping illustrations of different ligatures. The 18th century saw, alongside its regularization of the sciences, a surge in detailed, realistic depictions of various surgeries – tongues cut open and stitched together again, neatly exposed arteries with a suture hook hovering over them, step-by-step caesarean sections. An article from the American Journal of Nursing in 1933 describes the use of a “course of drawings” in which students are required to draw illustrations of surgical operations and stitch patterns for sutures, with “the objective being that the image be indelibly impressed upon the student’s mind.” The history of anatomic research and surgical practice is in this way implicated not only in the history of text but in the history of art – underlying medical knowledge and teaching is an anatomic imaginary. The history of the development of suture must then be read alongside the history of the translation of the body into word and image – into an object of investigation and recording. Yet suture, as this impetus for surgical writing and drawing, can itself be seen as a kind of writing – a physical inscription of a medical history onto the body itself, a literal complication of its medium (flesh) in the sense of the Latin cum plicare, “to weave together.” If Nietzsche conceives of forgetting as a process of “incorporation,” suture seems to be the opposite: an excorporation, the externalization of a trauma in permanently stitching it into the skin, the founding of a mnemonic (and therefore historic) body. Nietzsche’s own description of the memory of man is something that has “become flesh in him” – a symbol of the assimilation of an intrusion upon the body, or, more simply, scar tissue. Insofar as Nietzsche claims memory must be “burned in,” it perhaps can also be sewn in – bound to the flesh with the same patterns of stitches used to bind books. With suture, the body then is not only translated into but practically treated as a text to be written upon, or a textile to be stitched into. It is no accident that the title of Andreas Vesalius’ magnum opus is De Humani Corporis Fabrica – “On the Fabric of the Human Body.” An interesting way to consider the body as text or fabric in the context of suture is through Michel de Certeau’s theory of the ‘pedestrian speech act.’ In his essay “Walking in the City,” de Certeau uses the theory of the speech act to analyze the relationship between the path of the pedestrian and the urban topography. De Certeau writes that “[Pedestrians’] intertwined paths give their shape to spaces. They weave places together. … They are not localized; it is rather that they spatialize.” This can be understood in relation to the concept of generativity in language, which holds that within the fixed number of sounds and letters that a

single language permits, an infinite number of utterances are possible. Within de Certeau’s schema, streets act as a political language. As the negative spaces between the edifices of the city, they serve both as spatial ruptures and as loci of possibility: within the limited area carved out, the pedestrian is allowed an infinite number of ways to pass through. He stops at one spot to look in a window, turns to dodge another pedestrian at a second, and at every moment participates in a network of bodies that determines the actual spatialization of the city, in the way speech determines the actualization of a grammar. I want to propose that we can understand the specific engagement of suture with the body in the same way – as



a “giving shape to spaces,” or a “weaving places together,” an actualization of a medical grammar. Furthermore, using de Certeau’s characterization of the map as a “surface of projection” which “exhibits the (voracious) property that the geographical system has of being able to transform action into legibility,” I argue that we can understand the canon of surgical literature and imagery as an attempt at a cartography of the body. Moving across the skin, creating individual spatial patterns while still confined within a structure of medical practice, encountering and initiating different organizations of space, the needle participates in a bodily topography. De Certeau goes on to consider a defining characteristic of this ‘speech act’ that specifically relates to generativity, the “phatic”:

The “phatic” aspect, by which I mean the function … of terms that initiate, maintain, or interrupt contact, such as “hello,” “well, well,” etc. … Walking, which alternately follows a path and has followers, creates a mobile organicity in the environment, a sequence of phatic topoi.

The nonessential particularities of the suture – one stitch 0.1 mm longer than the next, the threads at the end of one knot slightly shorter than those of the next, two sutures marginally closer together than two others – can be seen as this ‘phatic’ characteristic. A product of the interplay of technique and wound, or of the particularity of the act situated within the particularity of the location of action, stitching becomes walking: navigating the bodily space and adjusting, adapting, and reconfiguring within the historically designated path allotted to it. At the meeting of the theoretical body articulated by medical literature (the ‘map’) with the site of medical praxis (the particular body), the surgical ‘phatic’ emerges. The particular-historic body is stitched together. We may return here to Nietzsche’s idea of memory. Alongside incorporation, Nietzsche defines ‘to forget’ as “to make room for new things.” Conversely, trauma seems to be a sort of crowding – with the suture-as-pedestrian, memory or scars serve as a sort of traffic, congesting spaces of movement, forcing diversions. The scar, necessarily a non-rupture but simultaneously a symbol of rupture, both serves as a monument to the body’s historicity and interacts with any further impositions of history onto the body at the scar’s location. Trauma as the ‘deepening’ of man, as Nietzsche characterizes it, can in this way be seen not as a cutting into but as a process of addition or edification – increasing the body’s dimensionality, grafting layers upon


its surface, forcing complexity onto it. De Certeau seems cognizant of this idea of addition when he writes, “the long poem of walking … creates shadows and ambiguities within [spatial organizations]. It inserts its multitudinous references and citations into them (social models, cultural mores, personal factors).” Sutures and scars become citations of a history of text and practice, and the body becomes an intertextual site. This relation between speaking, walking, and surgery as simultaneously a compilation and realization of text can even be found within de Certeau’s writings: it is not insignificant that he describes the “style of use” so crucial to speaking and walking as “a way of operating.” With this understanding of suture as spatialization and complication – a literal engraving of a body of history into the particular body, inextricably linked to trauma and scar – we can consider another instance of the prominence of the suture in cultural history: its caricaturization in prosthetic makeup.

One of the earliest instantiations of what could be considered ‘modern’ prosthetics was seen at the early 20th-century Théâtre du Grand Guignol in Paris, known for its graphic horror productions. Credited to the Grand Guignol is significant innovation in the manufacturing of different shades of fake blood. The theatre’s stage manager was rumored to receive a daily delivery of bodily horror from a nearby butcher, including sheep eyes and cow tongues to be used as props. Onstage, eyes were carved out, stomachs were torn open, arms were ripped off: violence was treated as festival. What is significant about the Grand Guignol’s iteration of prosthesis is that it relied upon the unrealizability of the bodies it created. Its horror was celebrated in its fictionality – violence had to be unnatural or supernatural in order to be spectacle. Following the realization of indescribably horrifying violence in WWII, the theatre, finding it had lost its audience, was forced to close.

The necessary fictionality of prosthetic makeup is perhaps most interesting in relation to the idea of suture as addition. Literally meaning “to place in addition to” (pros tithenai), prosthesis is a practice of accumulation, a placing of exterior materialities in a specific relation to the organic body. Crucially, with prosthetic makeup, this constitutes an impermanent addition. One of the pioneers of prosthesis in film, Lon Chaney, was made famous precisely for his transformation of wax, cotton, wool, tissue paper, latex, and greasepaint into a technology of removable addition. With Richard Smith’s invention of the 3-piece prosthetic face mask in the 1950s, allowing the actor a wider range of facial expressions, prosthesis became easier to assimilate with the ‘real’ face, but also, more crucially, easier to take off. Prosthetic makeup in its essence allows for the production of a disposable trauma. In this way, the body in prosthetics can be seen as a sort of false topography: it represents space that does not exist, a surface without depth (complication), speech without content. Insofar as the theoretical body built by the history of medical literature cannot itself physically exist (an existent body must be a particular body), the prosthetic body is particularity without mnemonics and without history. Or, if it does display a history, it is an artificial one – artificial because it is one that the body is not actually, painfully bound to. We can perhaps return here to Vesalius and the “fabric of the human body.” If we accept suture as a textile practice of sewing history into the flesh, prosthesis seems to bring to the forefront the idea of the body as a space of fabrication – fabrication of depth, of spatiality, of story (the inauguration of the Award for Best Makeup at the Academy Awards in 1981 stands as a testament to the legitimization of prosthesis as a form of storytelling and of the body as a tool of narrative). One thinks of Freud’s famous description of the modern, prosthetic man: “When he puts on all his auxiliary organs he is truly magnificent; but those organs have not grown on to him and they still give him much trouble at times.” In what way can additions to the body ‘grow on to’ it? Only when they are forcibly stitched in. Ironically, then, it is only in the fabrication of surgery and suture that prosthesis is given weight as an artistic practice, or as a ‘work.’

Considering this, one further moment in prosthesis perhaps deserves consideration as the final point of this investigation. In 1931, Jack Pierce (incredibly, born with the name Janus) created through prosthetics the now virtually canonical image of Frankenstein’s monster. The cultural adoption of Pierce’s monster as the monster indicates one further capacity of prosthesis: that of hermeneutics. Pierce, who studied surgical texts closely before beginning his design, was able to visually account for the lacunae in Shelley’s text regarding the physiology of the monster, and to fundamentally impact future engagements with the original text (try rereading Frankenstein without visualizing at least one aspect of Pierce’s design in the monster – you can’t). In the iconic picture of Boris Karloff, the monster looks down upon the viewer with sunken eyes and a suggestion of a grin. The famous electrodes jut out of either side of his neck. A gaping wound extends down his face. It is to this prosthetic outfit designed by Pierce, and to this specific wound, that we can attribute one of the most recognizable features of the cartoon monster that infiltrates grocery store windows and candy bar wrappers around Halloween: running down the right side of his forehead, a perfectly neat set of stitches.



On Political Rupture

By Ravi Maddali



During Henry Kissinger and Zhou Enlai's famous 1972 meeting, the latter was supposedly asked what he thought of the French Revolution and responded, “too early to say.” Even if this quote is a simple historical fiction resulting from mistranslation, it nevertheless raises an interesting question: how exactly is one to judge a political rupture? Let's clarify the issue. First, we must address the question of what even constitutes a political rupture. Consider the three following answers:

1. High Bar: A radical idea materializes within a government.
2. Moderate Bar: A government founded on radical principles exists, regardless of any subsequent deviations from the founding ideology.
3. Low Bar: A radical idea remains prevalent within relevant debates.

What we might mean by a “radical idea” is worthy of its own discussion, but given this isn't the focus of our discussion here, let us make do with a rather simple definition: if a political ideology substantially differs from that embodied by the government under consideration, we will call it a “radical idea” (hence, communism would be a radical idea in the United States, but would not have been in Soviet Russia). Given the nature of the discussion at hand, I believe the best way of making sense of these different answers is through exposition and example,[1] as provided below. Hopefully what each answer represents will be made clear in the following paragraphs. So, according to the High Bar, a political rupture has occurred when what would constitute a radical idea at time t-1 has materialized within a government from time t onwards. Exactly how this materialization has occurred is not important for our discussion; it could have been through a violent revolution or through a peaceful restructuring of the government per these radical principles, but insofar as an idea which would have been radical at t-1 now constitutes the primary guiding principles of the government at time t, we have passed the High Bar. An important aspect of this first answer is that these new principles remain a core component of the government during all periods of consideration

[1] I'm sure many will have legitimate qualms with my use of historical examples. I'm sure they would also have specific reasons for why they think I am misrepresenting history. Yet, I only use these examples in order to explain my different answers; so, if you disagree with my historical analysis, surely you can identify the reasons for my mistakes and deduce from that what the theory is meant to say.




after time t. While the exact manner in which they play out might change over time, we cannot have any substantial deviations from these principles under the High Bar.[2] The United States, with regards to democratic republicanism, would pass this bar. Though the United States has been through a number of “revolutions” (Jeffersonian, Jacksonian, etc.), we would be correct to say that the democratic republican principles that led to the founding of the country remained central throughout these political shifts. The Moderate Bar for political rupture is satisfied if and when the High Bar was passed at some point, and the government which subsequently came to be continues to exist. No matter the deviations from the core principles of the radical idea which brought this government into being — these deviations could concern a few of the core principles, or all of them — insofar as this government continues to exist,

we will say this Moderate Bar has been met.[3] China, with regards to communism, passes the Moderate Bar. Finally, the Low Bar for political rupture is met whenever a radical idea has serious proponents within the country of consideration.[4] The US and the UK, with regards to socialism, pass the Low Bar. Given the substantial range between our Low Bar and High Bar, it is necessary that we justify these endpoints. We have placed the High Bar at a point where a rupture that has occurred is so severe and complete that it is clear something radical has happened and it has happened in a sustained way. Of course, the duration of this sustenance depends on the government we are considering — 69 years for the Soviet Union with respect to communism, and 238 years and counting for the United States with respect to democratic republicanism. It is hard for me to imagine what a higher bar of practical relevance

[2] A substantive deviation has occurred if another instance of the High Bar has been met. For the geopolitical region of consideration during the period of consideration, we can only have seen this High Bar being passed once.

[3] So, the High Bar could have been met multiple times, but as long as we are considering the same government, we still pass the Moderate Bar. For example, England from the reign of Charles II through the Interregnum would fail to pass the Moderate Bar since we have an official ending of a government. China would pass the Moderate Bar since its current government is a continuation of that founded by Mao Zedong.

[4] An advocate is “serious” if she is worthy of national news coverage. Think Bernie Sanders or Jeremy Corbyn.



would look like. The more contentious point is where we have placed our Low Bar, namely, why isn't it higher? My answer is simple — I do not think it is an insignificant fact when there are popular politicians actively advocating for a radical idea, i.e. advocating for a High Bar-passing change to occur. The very fact that such a radical idea would find a number of advocates for itself makes the Low Bar a legitimate candidate for political rupture, even if the idea never makes its way into law. A radical idea's conceptual introduction at a popular level indicates that there are a number of people who wish for this change to happen and who are seriously dissatisfied with the current political framework, thus grounding the legitimacy of the Low Bar. Our answers manifest in the real world as instances, as in the examples provided thus far, and in laying out a set of answers we want to be sure not to allow too many or too few instances to count as some type of political rupture. There are a number of people who would justifiably argue that the newfound popularity of socialist policies in the United States represents a political rupture, and our Low Bar recognizes the significance of these calls. This is why our Low Bar isn't higher, and I can't think of a lower bar of practical significance. Having established our reasons for these endpoints,

we can now argue for the location of our Moderate Bar. The Low Bar represents a serious introduction of a radical idea, and the High Bar represents its complete manifestation. Thus, a “good” Moderate Bar should be one in which the would-be advocates of political rupture in a “Low-Bar world” have succeeded in achieving a High Bar-passing change, but have failed (or their ideological successors have failed) to maintain these changes. Now that we have a set of justified definitions of political rupture, the next step seems to be to pick out one of the three as “correct”. Before engaging in such an analysis, however, let us return to the supposed Kissinger-Zhou exchange and see whether there are any more insights to glean. When the American statesman asked



his Chinese counterpart what he thought of the impact of the French Revolution, Kissinger was picking out a rupture in history that he could compare with the rupture of the Chinese communist revolution. Thus far in this piece we have implicitly shown why Kissinger made a “good” comparison – both the Chinese and French Revolutions passed the High Bar, and so he was referencing revolutions of the same kind! What we currently have, then, is a framework for differentiating between different kinds of ruptures which can be used to enrich comparisons between instances of ruptures. When comparing two ruptures that pass the same bar, we know we are comparing two things of a similar kind, and when comparing ruptures that pass different bars, we can keep this in mind in the course of our analysis. This is, I believe, a rather useful tool, but if we were to say that only one of the three bars constitutes a “true” rupture and throw the other two answers out, we would lose this capability. Moreover, I do not think that we can even pick one of the three as “correct” since, by their very definitions, each represents a different type of

political rupture. It is not an insignificant fact that the United States passes the High Bar with regards to democratic republicanism, the Moderate Bar with regards to state paternalism,[5] and the Low Bar with regards to socialism. To say that only one of these three represents a true political rupture is to ignore that countries have multiple radical ideas playing out in different ways at the same time – we would be turning a blind eye to the complexity of the discussion at hand. So, instead of picking one of the three as the only correct answer, we are better off accepting the trio as a framework for making sense of different types of political ruptures. To continue our discussion, then, let us instead examine the implication of Zhou Enlai's response, “too early to say,” since it brings up the serious issue of the adjudicative worth of this framework. Russia, with respect to communism, passed the High Bar for decades, but with the collapse of the USSR, whether it even passes the Low Bar today is unclear. If we were to consider China with respect to communism in the 1950s, we would say it passed the High Bar, though now it only passes

[5] There is a legitimate point to be made that many state paternalistic policies represent aspects of a socialist program. When I say socialism, I mean an ideology that advocates for social ownership of the means of production. Clearly, this differs from a capitalistic setup that simply includes some socialistic elements, which would probably best describe the contemporary US.



the Moderate Bar. Even with the case of the Low Bar, how long an idea must stay relevant for us to say it has “remained” prevalent represents a substantial dilemma. In all these cases, it is clear that for contemporary instances of ruptures, our framework can at best give a time-indexed answer: “As of now, X country has passed Y bar.” Yet, I do not believe this is a serious knock against the utility of the framework for political rupture. To end this piece, there are three specific applications that I would like to draw attention to. The first fruitful application of our framework is one we have seen throughout our discussion — towards historical ruptures. Reconsider the case of China — using the framework, we are able to justify our comparison of the Chinese and French Revolutions and also make sense of the differences between the technocratic oligarchy that China is today and the agrarian communist dictatorship it used to be (the latter passed the High Bar and the former passes the Moderate Bar). The second fruitful application is for predictions. We can use our historical analysis, compare the origins of previous ruptures with contemporary cases, and have justified cause to predict which kind of political rupture we can expect in a given case.[6] The last application is in qualifying the success of radical ideas so as to avoid black-and-white analysis. For example, it is a common claim that “socialism failed because …” (insert failed socialist experiment). Such blanket statements, however, do not explain the seemingly never-ending appeal of socialist-minded politicians across the world. Under our framework, we can make sense of the mixed success socialistic rupture has had by saying “countries tend to pass the High Bar with respect to socialism for only a limited period of time, but can pass the Low Bar for extended periods of time.” The answer to our question “how exactly is one to judge a political rupture,” then, is by our three-tiered framework. With this, we can successfully differentiate different kinds of political ruptures, clearly see their underlying differences, and do so in a way that does not ignore internal complexities within nations. By no means is this analysis perfect or complete, but it does, I hope, represent a solid starting point.

[6] I've decided not to include an example of this since any would be rather controversial; I'd rather limit controversy to the theory at hand and leave this exercise to the reader.





Interview with Dena Shottenkirk




Dena Shottenkirk is an assistant professor at Brooklyn College, specializing in aesthetics and epistemology, as well as a practicing artist. Before taking up this position, Shottenkirk came from a background in art criticism, having held staff positions at both Artforum and Art in America. Her philosophical slant towards art criticism is evident in her 2009 publication Cover Up the Dirty Parts! Funding, Fighting and the First Amendment, which speaks critically about the epistemological and political roles art plays. For this issue of the Gadfly, we spoke to her about William James, “jist,” and Facebook (among other things) to better understand the tiny ruptures which constitute our perception and restructure our world.


GEORGE MENZ, FOR THE GADFLY: So, Dena, tell us a little bit about yourself and your work. DENA SHOTTENKIRK: I'm both a practicing artist and a philosopher. I was also previously a critic for Artforum and Art in America, so I've written a lot about art, and I've also written a lot about philosophy. I run a nonprofit called Philosophers' Ontological Party Club which is sort of an art-philosophy project, and what it consists in is sitting down and having conversations with people. I started it because it occurred to me that what both philosophers and artists do is articulate their own voice in the world, and figure out who they are in their body, and how they experience the world. I started this project as a way to put my voice next to everybody else's voice and to collate everybody's thoughts on something. I think there's too much narcissism in both art and philosophy. Everyone's too focused on trying to figure out what they say, and dominating the conversation, and not really understanding this incredibly, wonderfully complex relationship between my view and your view, and how my view affects the construction of your view, and how your view affects my construction of my own thought. I wanted to focus on the interface between those two processes. Could you delve a little bit into the idea of rupture? I was recently reading William James, who I think is kind of a really wonderful person to read, and who rambles a lot and kind of doesn't get to the point. In one of his ramblings he was talking about how we don't really hear thunder on its own: we hear it


as a contrast to the silence before. And how, all perception in that way is a bunch of discontinuous moments, and those are ruptures, right? So, attention, in that way — some people think of attention as either exogenous or endogenous. The exogenous attention is always something from the outside that startles it. But what all of that is, both that distinction and what William James was talking about, was the constant ruptures in all perceptual experience. I don't think you could understand perceptual experience if you don't see it as a bunch of ruptures. It's these discontinuous moments from second to second. We're constantly looking around and switching our attention and it's these uncountable numbers of ruptures that constitute our perceptual experience and our brains are constantly having to sort through them, edit them out, compile them, make a new thing, then remake a new thing after that. It's numerically interesting to look at, because if you look at those ruptures as kind of atomic units and how we slice one second from another second and one — content, I guess you could say — from another second's content, we have this mysterious process of putting it all together and then going, “Oh, that's perceptual experience.” That's the content of it. But the constituents of that perceptual experience are in many ways those tiny little ruptures that come from second to second. I recently did an interview with a gallery dealer, James Cohen, for this book that I just finished editing. In it, he talked about the kind of art that he likes to deal with, art that he says provides “slippages” into other experiences. It's just another word for ruptures. But when you look at the visual work, it provides you with a kind of tunnel into another conceptual-cognitive world. So instead of just sitting there, flat on the surface, as decorative, it's opening in to not just anecdotes, but a conceptual framework that one might look at.
I think that's the basic drive behind a lot of conceptual work: it's opening up a whole labyrinth — a whole world — that is a point of view, and not just a visual thing in front of you. Do you think that idea of rupture expands not only from an instant-to-instant basis — collecting a continuous experience from what's basically a constant disruption of the manifold — but to a time in a person's life? I feel like there are these kinds of ruptures, where it seems like the way you think about something and the way you perceive something is changed, and you can never see it the same way again. Do you think that's the same kind of thing on a larger scale, or do you think it's the result of something completely different? The picture that I was giving you is that what perception is, is a whole bunch of tiny ruptures. I just wrote something on “jist” — we get an unbelievable amount of information in the first 100, 200, 300 milliseconds of our perception. Then, we look again, and compare what we see later on with that first “jist” experience. We do that all the time. And those are ruptures, in that way, and it's also a constant act of compilation. I think that what you were talking about was that within a very small temporal framework, there are those ruptures, and there is that compilation. And with a larger temporal framework, over years, we almost, in a meta way, look back on those other experiences, but now we're compiling them differently. And because we compile them differently they seem essentially different, because they have a different character. That's why I said at the beginning, I think we're constantly compiling and recompiling and structuring and restructuring, and it's this constant almost liquid-fluid way of building our perceptual experience of the world and at the same time our self-identity. It's a constant reshuffling of those experiences, and a reordering. You think of perception itself. Perception itself is always hugely edited. I don't take everything in, I only take small things in. What is it, like three percent of our eyeballs foveate? Most of it is peripheral. We take things in in very sporadic, partial, incomplete ways. Then, over time, we're doing an almost identical process where we hierarchize what we see, we edit it out, we forget, we remember, we obsessively remember, we remember things more than other things. Probably on the meta/macro level what we're doing is that same kind of editing and that same kind of restructuring and recombining all the time. Which is why it seems different. How you remember something five years ago is hugely different than what it was at the time, because you've got the intervening five years which are now part of that constituent experience, so you're editing, changing. Probably rupture plays a role there, because the discontinuities that appear over time are not the discontinuities that were there originally.
Obviously that implies kind of a secondary rupture. Even primary to the kind of rupture that you're describing, there's a rupture between yourself and the external world or the source of sensation, and there's also a rupture between one thinker, one perceiver, and another perceiver. That raises the question, how do you communicate? How do you have any kind of shared frame of reference? It's really weird, right? You don't want to be entirely the British empiricist, where you think that perception is a matching in your head with the external world in some way. You don't want to think of the external world as existing completely antecedently to your experience of it. On the other hand you don't want to think that



it's completely mind-dependent, you don't want to go Berkeley's route, necessarily. You want to think that there's some objective world out there, and in some way we ascertain it. But if we do think that our perception of things is editing, which I think, and that there is a lot of constructionalism that goes on, you have to have some kind of foot in both worlds. There has to be some parts of your perception that constitute an ascertaining of the objective world, and then some parts of your experience that are just ineluctably private — your third-person, first-person experience. And you're trying to sort out which is which. Which foot do you have in which world? I think that's how the conversation between us comes in. I think that's the wonderful thing about social epistemology. You're not just thinking about epistemology from an individual point of view and trying to make sense of it. You're sort of recognizing, actually, we construct this world together. When I want to know the veracity of my perceptual experience, I, in part, look over to you and say, “Hey, did you see that too?” And then when I get that confirmation, I know that my editing process was kind of right and I have more confidence in it. What I remember is that editing process that now agrees with you, and I discard all that stuff that I don't think anyone else saw. In the same way that we're constantly reconstituting our experience, we are constantly reconstituting our experience in terms of what the guy next to us said. I think that's the only way I can make sense of it. Let's talk more about the idea of social epistemology. In a sense, it seems that right now, in the current social/political climate, there's no consensus. It seems like it's impossible to have that kind of reaction where you can talk to someone else and expect to have a confirmation of your beliefs.
Or, at the very least, if you can find someone who will confirm your beliefs, you can find someone who will deny them, and they both seem equally certain. I think we haven't really settled into the age of the internet. Everyone knows this: the internet, so evilly, almost, democratized all voices. It used to be we had gatekeepers for the “public voice.” We had editors, we had a whole structure of things that kept the crazy voices out, the marginal voices out. Some voices counted for a whole lot more than other voices. It was easier to, at least, believe that there was a consensus in terms of who was allowed to speak, and who got that attention. Now there's so much competition for that, because everybody's voice counts more or less the same as everybody else's voice. I don't think we've worked through, yet, a system where we figure out how to prioritize and hierarchize those voices. I think one of the sad consequences of this is that people don't value argument in terms


of a method to get to the truth in the way that people always did in the past. Argument, now, is just a way to spin my position so that I get people on my side to support me more.



So rather than social epistemology, it's a kind of social maneuvering? I think so, yeah. But I'm very optimistic about things in general. I think it'll work out, I just think that the internet is a huge technological change in people's lives. I think in terms of how we get information and how people sort out their opinions vis-à-vis others' opinions, it's now very difficult, because people don't know now where to go to sort their opinions out. They don't know who the guy next to them is that they can legitimately turn to and go, “Do you think that's right?” It seems as if the same thing has been true with every great advance in technology — going from monks illuminating manuscripts painstakingly by hand over years and years, to the printing press, to the availability of cheap paper and writing materials, now to computers and finally the Internet. Think about in your lifetime, say 20 years ago, or when you were a university student. I think about what I have to do when I write a paper, I'll sit down with my laptop maybe with a couple class materials or my notebook, and I'll write it very quickly without really looking back until the entire thing is done. And I wonder how the experience of that differs from having to sit in front of a typewriter, which is much harder to use, and if you mess up a word you can't just delete it immediately — you have to stop what you're doing entirely, white it out, and then finagle some mechanical process to put it back the way it was. It seems as if there's much less of a disjunction between our ability to put our thoughts into words on a computer than there was with a typewriter. Maybe not so much as with handwriting, but handwriting has other discontinuities, especially in terms of comprehension — if you want to look back on it later, if you want to give it to someone else to read — depending on how good your draftsmanship is, obviously.
You know what I think — I've never thought this before, so I'm not 100% sure about this, but I'm gonna jump — I think that, I used a typewriter, but I think in order to do that because it was so linear and so word-by-word you had to have mapped out the structure of your thought better. You had to have an outline, you had to know exactly where you were going. I think one thing that maybe is bad about working on a computer is that you can move whole chunks of it around more easily and import it, and so I think you're not forced always to figure


out the structure of it. And that's kind of the way the world works now. Data comes in large chunks to us. I think that we are less responsible for figuring out the structure of our own thought because such large things can be imported. It's wholesale, it's like going into a store and just grabbing things off the shelf. You can get chunks of thought, now. It seems that so much of it is pushed off onto other things. Like Facebook's newsfeed — there are algorithms they use so that you'll see exactly what you want, you don't have to search it out. So it seems like our ability to go in search of information — we've let that go on to other services, which may do it well or may do it poorly, but at the same time it seems as if we want everything presented to us very quickly. I think that's sort of the wholesale thing. This is kind of one of the reasons that I started this organization — I think that a lot of people don't really examine the contents of their own thought very well, and just go: What do I think? We've gotten so in the habit of regurgitating the popular opinion, or the one you just read about, or the one that somebody else holds, that people don't spend as much time thinking critically. It's the same point. I think the bad thing about that is then you lose that sense of the mechanism of connecting one thought to the next, and figuring out the argument, and figuring out the logic, when you're grabbing wholesale other people's thoughts. You don't really work through that process, and that cognitive process of 'this leads to this' — they're tiny little atomistic moments that you go through which I think mirror more closely the perceptual phenomenon of experience. When I perceive, it's a whole bunch of constantly moving, discontinuous moments. And thought is, in its best form, the same thing. What's bad about paint-by-numbers is that the whole thing is figured out for you. Then you don't really see.
One of the things that you have to teach students to do when they're first painting and first drawing is actually to see, not to paint what they think they should be seeing, but to actually paint what they in fact see. It's a training. You have to retrain them to actually see all those discontinuous things between the blade of grass and dirt — not just to do the blade of grass and the dirt, but to look at all the little moments in between. So it's all of those little fissures in between the objects that really count as seeing. It's those slippages, those ruptures. The more you look, the more there are. To do it in the details between all of the moments is the really wonderful, profound visual and cognitive experience, and so I think the bad thing about lazily going out there and just grabbing a chunk of words from somebody else is that you didn't go through that wonderfully ruptured process. Seeing and thinking must be alike, right?




Let's return to that idea of perception that we started out with. If we could discuss a bit more of what you think that process is rooted in, if it's based on a priori conceptions of temporality or spatial order or something like that, if it's something more fundamental, something just biological, how we receive information — do you think it's more in the mind or in the organs of sense themselves?


I think your disjunct there, so Cartesian, is evil. Because it doesn't really make sense that we are mental and that we are physical. The hard problem isn't really consciousness, the hard problem is somehow the interface between our physical bodies and our abstractions of that physical data: how do we bring all this data in perceptually and then make it into something useful for us. That's the really interesting problem. Perception is such an important thing to think about because it's perception that's fodder for thought. It's not one or the other. Somehow, we bring things in in this way, that we're editing it on the basis of our very pragmatist needs of the moment. I see what's useful to see, I see what I need to see. Attention can alter that a little bit, we can sort of force ourselves to pay attention to something other than what we were originally going to pay attention to, but on the whole we edit for reasons. So it's not a priori.




Self-Deception and Bullshit

By Cecilia Bell



Jack has lived the better part of his life on pork sausages and bourbon, and now he is in the hospital with clogged blood vessels and liver damage. Despite unfavorable medical test results, he has recently convinced himself that his health is improving (after all, one month on a low-sodium, low-sugar diet has got to do something). Jack's situation sounds like a paradigmatic case of self-deception. But now suppose that, as it so happens, his health really is on the mend. Would we still consider him to be deceiving himself in believing that he is recovering? Though self-deception is widely accepted as a prevalent “epistemic malady,” there is little agreement on its definition and nature, and even archetypal cases are contested among philosophers. Conventionally, it is considered to be isomorphic to standard interpersonal deception, where a person intentionally gets another person to believe some proposition p, despite knowing that the proposition is false (~p). In other words, self-deception is usually thought to be like deceiving someone else, except that the self-deceiver is both the deceiver and the deceived. Two paradoxes arise from this interpretation. The first is that it seems impossible that a person can hold two contradictory beliefs, p and ~p, at the same time. The second is that

it seems impossible that a person can intentionally deceive themselves and actually be deceived, since the former requires some kind of awareness and the latter, some kind of unawareness. The first paradox concerns the state of self-deception while the second concerns its process. These two paradoxes have produced a great body of literature, which can roughly be divided under two opposing views: that of the intentionalists and that of the non-intentionalists. The intentionalists generally maintain the contradictory belief and intention requirements by appealing to some kind of temporal or psychological partitioning (they might say that a person tries to deceive themself at t1 and is deceived at t2, or that one part of the mind does the deceiving while the other gets deceived). In doing so, they preserve the traditional model of self-deception. The non-intentionalists, on the other hand, take the paradoxes that arise from these requirements as grounds to discard modelling self-deception on interpersonal deception. I find the requirement that the proposition p be false equally puzzling, though it has not shaped the discourse on self-deception as the other two have. The quintessential examples of self-deception certainly seem to uphold it. But what about Jack’s case? Can



a self-deceiver deceive themselves into believing something that turns out to be true? Amelie Rorty, who is an intentionalist, thinks that this is possible. She argues:

Self-deception need not involve false belief: just as the deceiver can attempt to produce a belief which is — as it happens — true, so too a self-deceiver can set herself to believe what is in fact true. A canny self-deceiver can focus on accurate but irrelevant observations as a way of denying a truth that is importantly relevant to her immediate projects.

Though Rorty claims that self-deception need not hinge on the falsity of some belief p, she still suggests that p must at least impede some other truth q. Perhaps an illustrative case of what she describes might be one where a writer focuses solely on the good reviews they receive “as a way of denying the truth” that most critics think that their work is awful. So, if p is “people think that my writing is good,” then the writer does not exactly believe p falsely, but in doing so, obstructs the truth q that “most people think that my writing is bad.” I am not convinced that the kind of self-deception Rorty describes really avoids false belief. This is because I think that focusing on some irrelevant belief p in order to deny another belief q involves the self-deceiver falsely believing ~q. And further, that the self-deception critically takes place in falsely believing ~q and not in believing p. In the case of the writer, I would argue that they deceive themself in falsely believing that it is not the case that “most people think that my writing is bad,” and that focusing on the insignificant truth that “[some] people think that my writing is good” is rather part of the process of the self-deception. But are there instances of self-deception that do not involve false belief and also do not forsake some other truth? Alfred Mele thinks that this is the case when someone “acquires a true belief that p on the basis of evidence, e, which does not warrant the belief that p.” A self-deceiver, therefore, need not believe something false; they might believe something true that is based on unwarranted evidence. On this view, Jack’s case is accommodated: he believes that his health is improving based on dubious intuitions that are not supported by medical tests. Mele’s suggestion that self-deceivers can produce true beliefs coheres with his overall account of self-deception. He denies that self-deceivers need to be aware that their belief is false (and hence hold contradictory beliefs), or that they must intend to


deceive themselves. Instead, “what generates the self-deceived person's belief that p,” on his account, “is a desire-influenced manipulation of data which are, or seem to be, relevant to the truth value of p.” Bertrand Russell, in a section from The Analysis of Mind, proposes a comparable interpretation of self-deception that gives conceptual authority to the self-deceiver’s desires. He claims that we can distinguish self-deception as a species of motivated belief, not by looking to the falsity of the belief and the amount of evidence against it, but by looking at the desire that motivated it. A self-deceiver, according to Russell, has a desire for a certain belief rather than a desire for fact. (That is not to

say that the evidence is entirely irrelevant: Russell contends that what differentiates self-deception from wishful thinking is that self-deception takes place in the face of contrary evidence, while wishful thinking occurs when the evidence is inconclusive.) I think that Mele and Russell’s interpretations are apt. Whatever distinguishes self-deception as an epistemic defect, it must have something to do with the self-deceiver’s desires: their particular contribution to the deception. If it were merely contingent on the falsity of the belief or its lack of evidence, it would not be considered cognitively insidious, but rather a product of ignorance or bad epistemic practices. Note also that this analysis of self-deception falls in the non-intentionalist’s camp: the self-deceiver need not intend to deceive themselves, knowing that their belief is false; they must merely desire a certain belief and, unintentionally or not, exploit whatever limited evidence there is to maintain it. Interestingly, Mele and Russell’s characterization of self-deception sounds a lot like Harry Frankfurt’s “bullshit.” In his timely and entertaining essay On Bullshit, Frankfurt differentiates bullshitting from lying. While the liar seeks to deceive their audience by telling a falsehood, the bullshitter solely intends to persuade their audience to suit their personal purposes, without regard for truth or falsity. The liar, therefore, must have some

grip on the truth to disavow it. The bullshitter, however, does not, and as a result tells both truths and falsehoods in accommodating their motives and desires. Analogously, on Mele and Russell’s picture, the self-deceiver does not necessarily intend to believe something that they know is false. They might deceive themself into believing a truth. What is crucial is that the self-deceiver believes something simply because they want to believe it. If, then, self-deception is like reflexive bullshitting, perhaps its pervasiveness — assuming it is pervasive — can be similarly accounted for. While Frankfurt’s diagnosis of the prevalence of bullshit is a little far-fetched (he thinks it is closely related to the post-


modern rejection of objective truth), it offers some perceptive insights that relate to self-deception. His claim that “bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about,” for example, is comparably applicable: self-deception seems inevitable whenever circumstances compel someone to believe something without affirmative evidence. But though bullshit may be widespread, why, according to Frankfurt, should we think that it is any more harmful than lying? The danger of bullshitting is, in part, related to its potential to produce true statements: it is hard to challenge the credibility of a bullshitter who tells many truths. And something similar can be said of self-deception. If we deceive ourselves into believing something that turns out to be true, we are less likely to question our epistemic practices and more likely to develop malign ones. If Jack, for example, finds out that his health is genuinely improving, then he might attribute reliability to his biased “gut feeling.” He might be more inclined to use it to guide his beliefs in future and less inclined to trust medical opinion or other more authoritative evidence. While Mele and Russell’s self-deception might not appear epistemically threatening at first glance, in that it does not

require that the self-deceiver intend to deceive themself or produce a false belief, it turns out that it could be surprisingly harmful. When it yields true beliefs, self-deception is more difficult to recognize and more likely to foster unhealthy cognitive habits. Of course, our old friend Jack does no great harm in believing that his health is improving based on intuition, given that his belief is true. But if he continues to privilege his instincts as a result, he is unlikely to be so lucky — especially if his instincts tell him that sausages and whiskey are back on the menu.





On the Death Penalty: Hegel vs. Foucault

By Robert Cohen




On a warm summer night in New York, Soviet spies Julius and Ethel Rosenberg shared their last kiss before an executioner and an electric chair. They had leaked numerous technological and military secrets to the Soviets at the height of anti-communist hysteria and now faced punishment for espionage and treason. Julius died after a single shock, but Ethel’s heart refused to stop. She took five turns in the chair. The young couple left behind two small children and a tragic legacy as the only two American civilians executed for espionage during the Cold War. Cesare Beccaria once wrote of capital punishment: “The murder that is depicted as a horrible crime is repeated in cold blood, remorselessly.” Indeed, the death penalty does not solve any problems for society in a tangible or rational way. The criminal is already locked up for life. He or she may have been wrongly convicted. All possibilities for redemption are foreclosed upon. Execution is more expensive than keeping the condemned alive. Its chief purpose, commonly understood, is to satiate a bloodlust for vengeance on the part of the wronged. Yet Hegel, Kant, and other proponents of the Enlightenment may have supported this most controversial of punishments from a rationalistic perspective. And paradoxically, it was post-modern

philosophers such as Foucault and Bataille, who rejected the Enlightenment’s strict adherence to rationality, that most persuasively opposed capital punishment. Why would the greatest advocates for reason support the most unreasonable of punishments? Why might reason's critics argue for a more reasonable course of action? I. The Sovereignty of Reason

“The Enlightenment” is a loosely defined term. The 18th-century movement is variously perceived as a special period for scientific, philosophical, cultural and political history and as a time wherein the ideals of reason, logic, skepticism, naturalism, and progress held great authority. In particular, reason held ultimate authority. The Enlightenment produced revolutionary ideals we take for granted today: appeals to liberty, equality, and progress. Our understanding of these concepts is similar to the way that even the most intelligent of sea creatures never recognizes that there exists a mode of being other than a life underwater. For a sea anemone thousands of leagues deep, water simply is, and for humans, the worthiness of Enlightenment ideals remains evident, whether or not we succeed in living up to them. Many philosophers and psychologists argue that we are not. Yet these ideals and our means of achieving them remain the civic oxygen that animates our discourse. Much of 19th-century philosophy either built upon the work of the Enlightenment philosophers or sought to tear it down, along with systematic and rationalistic philosophy altogether. One lumbering giant of 19th-century philosophy, G. W. F. Hegel, is considered by many to have been one of the leaders of Enlightenment thinking. Hegel’s philosophical project is often described as a theodicy: a vigorous and persuasive attempt to justify God and the inherent value of existence despite thousands of years of human suffering. To briefly summarize, in his Phenomenology of Spirit, he argues that Spirit, the human race, has over the millennia evolved its social, political, and philosophical consciousness in a painstaking process involving the rooting out of contradictions and unsatisfactory experiences to replace them with modes of being and thought that are more cogent, rational, egalitarian and humanitarian. Thus, Spirit distributes human freedom and self-consciousness more broadly throughout the human race. Hegel provides a riveting and difficult account of this process throughout the Phenomenology, commenting upon and analyzing each stage

of the progression and its various pitfalls and errors, and the consequent dialectical turn required to move beyond the given paradigm of consciousness. The values enshrined in the French Revolution, “Liberté, Égalité, Fraternité,” were the culmination of this process, with the Revolution itself being the first example of humanity achieving anything close to absolute political freedom. Hegel’s Introduction to the Philosophy of History vividly describes the process of political, cultural and philosophical evolution as “the slaughter-bench of history.” In line with this forewarning, Hegel notes near the end of the Phenomenology that once “Absolute Freedom” was achieved with the brutal overthrow of the French state, without any kind of legal apparatus to restrain the Absolutely Free, the French nation descended into chaos. A murderous “Reign of Terror” took hold, consisting of a horrific trance of lawlessness and executions. This period introduced the need for a Rational State, supported by a political and philosophical foundation, to guarantee freedom in a sustainable, affirmative, and rational structure. This freedom would be a manifestation of the will of the people that necessarily constrains the people by their desire to exercise their freedom in the structures of family and civil society. This latter project, describing



the Rational State, is expounded upon in great detail in Hegel’s masterpiece Elements of the Philosophy of Right. It was a common view among philosophers of the Enlightenment and Post-Enlightenment that capital punishment was to be a necessary activity of the Rational State. This may seem irrational on the surface, as capital punishment does not help citizens become better people, is unnecessarily brutal and torturous, and constitutes a grave miscarriage of justice if later shown to be undeserved. Resolving this apparent contradiction requires further investigation into Hegel’s political philosophy. His commentary on crime and punishment begins rather tautologically, as he notes that laws are not laws unless there are punishments for breaking

them. Further, he claims that for laws to truly embody the free will and rational desires of the governed and therefore the Rational State, they must be affirmed by them. Thus, for the governed to affirm their own set of laws is to affirm their own set of punishments. If they break a given law, they have rationally chosen a punishment for themselves by consenting to belong to the Rational State, provided, of course, that the State is structured in the way Hegel feels it should be in the Philosophy of Right and affirms their dignity, freedom, and decision to be governed. Hegel goes as far as to argue that this is an honor for the criminal: In so far as the punishment which this entails is seen as embodying the criminal’s own


right, the criminal is honored as a rational being. He is denied this honor if the concept and criterion of his punishment are not derived from his own act; and he is also denied it if he is regarded simply as a harmful animal which must be rendered harmless, or punished with a view to deterring or reforming him.

For a punishment to be “rational” and not merely an act of vengeance, Hegel holds that it must be retributive, appealing to the ancient belief that a punishment must be equal to the crime. It must also be ethical, in that it serves as an expression of the universal will of the governed. He defines retributive justice in a memorable way, proclaiming that “the universal feeling of peoples and individuals towards crime is, and always has been, that it deserves to be punished, and that what the criminal has done should also happen to him.” He states that punishments should not take the barbaric form of an “eye for an eye, tooth for a tooth” but that they should be proportional to the damage of a crime. However, he states that this rule of proportionality does not hold for murder, “for since life is the entire compass of existence, the punishment for murder cannot consist in a value — since none is equivalent to life — but only the taking of another life.” Seeing that Julius and

Ethel Rosenberg stole national security secrets and sent them to the Soviets, an action that could have led to the deaths of thousands or millions had the Soviets made use of a scientific or military secret acquired through their espionage, it is clear that there are few punishments that could have been proportional to their actions. Under Hegel’s schema of rational punishment, in which punishment answers a violation of the laws to which the public consents and is proportionate to the impact of the crime, they would have to face no less than execution. Kant and other Enlightenment philosophers held a similar position, but it is Hegel, I believe, who argues for it most persuasively. It is not for the sake of vengeance that one must punish the harshest of crimes most harshly. Rather, it is for the sake of upholding the dignity of a rationally-legitimized state. However, this seems to violate many of our intuitions. It does not seem rational at all, given all that we know about the risk, expense, brutality, and ultimate futility of such punishments. Why might the death penalty be both irrational and brutal to contemporary citizens, yet perfectly rational to some of history’s most eminent rationalists?




II. The Sovereignty of Discourse


Michel Foucault, along with many of his post-modernist contemporaries, held a very different position from Hegel’s. Foucault’s focus on discourse as the central pillar of authority, upon which institutions and government are contingent, stands in stark contrast to Enlightenment philosophers’ appeals to reason. In Discipline and Punish, Foucault did not seek to rationalize or explain the philosophies behind various modes of punishment throughout history. Rather, he sought to explain the acts themselves, along with their effects, and let interpretations fall where they may. In this sense, he was more of a historian than a philosopher, and his work is sometimes compared to literature or historical journalism rather than philosophy. Foucault remains a philosopher, in my opinion, because of his contribution to various theories in the areas of social and political philosophy. Foucault felt that appeals to higher ideals such as “reason,” “human nature,” “divinity,” and other common human values and aspirations more often than not served to legitimize institutions of oppression and domination. For Foucault, Hegel’s grand vision of human culture and society progressing throughout history was nothing more than

an apologia for thousands of years of suffering and brutality. Regarding capital punishment, the story is much the same. Writing about public executions in centuries past, he dispenses with any pretense of an innate human desire for retributive justice. He states that punishments originally took the form of an interactive public spectacle, one that actively undermined the state rather than elevating its claim to rational legitimacy:

It was as if the punishment was thought to equal, if not to exceed, in savagery the crime itself, to accustom the spectators to a ferocity from which one wished to divert them, to show them the frequency of crime, to make the executioner resemble a criminal, judges murderers, to reverse roles at the last moment, to make the tortured criminal an object of pity or admiration… it can be said that the practice of the public execution haunted our penal system for a long time and still haunts it today.

The amount of public affection, pity and sympathy for the executed criminal, Foucault argues, forced punishment of all kinds out of sight and into the subterranean depths of the prison-industrial complex. A similar kind of public sympathy emerged for the Rosenbergs in the extensive media coverage of their arrest,


trial, and execution. Many were horrified by the actions of the United States government and deeply sympathetic to their orphaned children. In particular, Ethel Rosenberg garnered much sympathy and outrage over her longer execution. This seems particularly salient when one considers that no other American civilians were ever executed for espionage after the Rosenbergs. Had it not been for the public, emotional display of the two orphaned children, who Foucault would describe as “objects of pity or admiration,” it is possible that many more children of Soviet spies would be orphans. Foucault’s Discipline and Punish and The History of Sexuality provided a pre-eminent demonstration of the evolving purpose of punishment and so-called “justice” over the next several hundred years. If justice was not retributive but instead a spectacle of sovereign power that evolved into a regulation of human “biopower,” as he puts it, it is clear that capital punishment is merely a tool of power for Foucault. He sees it as a fact of social life, not an integral element of rational society. What, then, was sovereign for Foucault? For him, it was power for its own sake: an immutable feature of human society and a force encompassing the limits and scope that defined human discourse and culture. He regarded the advance of rationality as a passing excuse for our

darkest drives and most brutal tendencies. In Foucault’s worldview, beyond a few basic characteristics, humans are more or less blank slates waiting for orders and etchings, informed by the prevailing discourse and ideology of their times. Though he held a much darker view of human nature and its relation to higher ideals than Hegel, or rather held that there was no human nature at all, Foucault was in fact very much against the death penalty, and much of his work can be read as a compassionate outcry in aid of the oppressed. Shortly after its abolition in France, he proclaimed in the newspaper Libération that “The oldest grief in the world is dead in France. We must rejoice.”

III. The Dialectical Turn

If a brilliant rationalistic political philosophy calls for a form of punishment that is cruel and unusual, and another formidable set of teachings that is more or less anti-rational leads one to the conclusion that such punishment must be done away with, what is one to make of this contrast? Must we abandon rationalistic political philosophy? Are there political limits to reason? One commonality between Hegel and Foucault is the way in which both philosophers place the importance of culture and social life at the



center of personal experience. Hegel observes that the true realization of human freedom in the world is only possible given a complex web of laws, rights, and institutions, and Foucault shows through his historical investigations that human reality is inseparable from social and historical existence. These are both irreducibly social views of the human being rather than individualistic or atomistic ones. This implies

that given a social order founded upon a more humanitarian and modern set of insights, a human being brought up in such a society would be likely to contribute positively to that society. With this more modern set of insights, it is perhaps the case that what might be defined as “rational” punishment may be less retributive and more utilitarian, with the ideal of furthering the common


good for all rather than simply displaying obeisance to reason. To integrate the ideas of the two philosophers requires a twofold reconfiguration: first, on the part of the Hegelian view, an adequately proportional punishment accompanied by the rational prioritization of the suffering and rehabilitation of the possibly-innocent living over retribution on behalf of the dead; second, on the part of the Foucauldian view, an acknowledgement that it is possible to be cynical to a fault and that some ideals, perhaps even a modified version of Reason, are worth appealing to. The nature of rationality is such that it presupposes its own ends: following rationality, applied to any system, including social systems, to its logical endpoint will result in a particular, necessary conclusion. 2 + 2 can only equal 4, in other words. It is easy to see how an anti-rationalist or cynic might poke plenty of holes in such a rigid mode of thought when it is applied to as fluid and emergent a phenomenon as social life. Foucault and those in his tradition would argue that to try to apply any form of rationalistic discipline to social life is to necessarily commit some act of violence, through the exclusion of those who cannot abide by the given social discipline. This takes various forms throughout Foucault’s philosophical project, whether it’s

the insane asylum in Madness and Civilization or the introduction of sexual preferences in The History of Sexuality. Foucault is not arguing for barbarism and the surrender of all social disciplines, or what one might call civilization itself. Yet, notably, he spends so much effort and energy revealing the contradictions and tragedies inherent in civilization, without proposing any kind of viable alternative, that it is easy to see why Foucauldian thought is often conflated with nihilism. The Hegelian response to Foucault’s attack on the discipline of civilization would be to argue that such discipline ultimately saves more lives, allowing more human flourishing than would be possible without it. A good Hegelian might even argue that Foucault, in his cynical attack on Hegel and any paean to reason, has actually helped society become more Hegelian, pointing out the absurdities of his given moment in history such that we may correct our course and move further in the direction of a rational ideal worth aspiring to: one that mitigates as much collateral damage as possible and, as Hegel sought to accomplish in the Philosophy of Right, guarantees freedom for all. As a wise man once said, “behind every cynic is a disappointed idealist.”





The Teleology of the Closet:

Queer Fantasy in the Fiction of E. M. Forster

By Sam Wilcox



Clive sat in the theatre of Dionysus. The stage was empty, as it had been for many centuries, the auditorium empty; the sun had set though the Acropolis behind still radiated heat. He saw barren plains running down to the sea, Salamis, Aegina, mountains, all blended in a violet evening.

— E. M. Forster, Maurice

I.

Maurice is unique among Forster’s novels. Written between 1913 and 1914, it wasn’t published until 1971 — a year after Forster’s death. The content of the novel offers the explanation for the stalled publication: it is often recognized as one of the earliest novelistic depictions of homosexuality, and one which doesn’t end in tragedy. Thus, the publication of the novel, and the narrative itself, is surrounded by the pervasive presence of English societal homophobia — particularly following the scandalous trials of Oscar Wilde — wherein the act of homosexual relations is criminal. In one scene, the title character and the novel’s protagonist, Maurice Hall, in seeking sexual conversion therapy, confesses to a doctor that he’s “an unspeakable of the Oscar Wilde sort.” The ramifications of confessing his sexual orientation prevent him from naming it directly: homosexuality cannot be named, it is “unspeakable,” but it can be identified through its bearers, Maurice and Oscar Wilde. The novel is, in essence, a narrative of overcoming these

societal constraints of language and sexuality, ending with the coupling of Maurice and the groundskeeper, Alec Scudder. The burden of describing a forbidden sexual identity is at last overcome through the speech act of Maurice’s lover: “‘And now we shan’t be parted no more, and that’s finished,’” Alec says, the double negative of their inability to be parted emphasizing both his working-class background and the unification of the individuals to conclude, or “finish,” the queer narrative. Alec Scudder is not Maurice’s only male love interest, however. In fact, he isn’t introduced until the final third of the novel. Rather, the first half of the novel focuses on the relationship between Maurice and the companion he meets at university, the aforementioned Clive Durham. It is through this relationship that Maurice comes to understand his sexuality, beginning with discussions of Plato’s Symposium (at Clive’s suggestion) and culminating with sexual liaisons between the two men. The midpoint of the novel, which sees Clive traveling


alone to Greece, fractures this relationship. It is here that Clive sees the “barren[ness]” of the Greek landscape, the ruins of temples and theatres. Suddenly — abruptly — Clive is sexually re-awoken. “There had been no warning—” Forster writes, “just a blind alteration of the life spirit, just an announcement, ‘You who loved men, will henceforward love women [...].’” The shift in Clive’s sexual orientation, particularly in the context of an audience for whom sexuality is static, is perhaps more jarring — particularly for the contemporary reader of Forster — than anything else in the novel. It occurs in the highly descriptive scene in the “heat” and waning “sun” of the theatre of Dionysus, overlooking a dead civilization. Indeed, in the world of the novel, this event defies reason: “it was of the nature of death or birth” and thus is beyond his control.

II.

Where Maurice is unique among Forster’s novels, the short story “The Machine Stops,” originally published in 1909, is distinct among his short fiction. It is a work of speculative fiction, and recognized as one of the earliest works of dystopian fiction. Centered around a subterranean society — sent underground because the “barren plains” that extend before Clive Durham on his journey to Greece now cover an entire

barren planet — the story deals with the acquiescence to the titular “Machine,” which seemingly satisfies every want and need. Yet, in the vein of the explicit queerness of Maurice, perhaps the most compelling reading of “The Machine Stops” focuses not on the speculative fiction elements of the story’s future-set society, but on a coming out narrative (explicated most thoroughly in “Closet Fantasies and the Future of Desire” by Ralph Pordzik) between the son, Kuno, and his mother, Vashti. The strained relationship between mother and son in the story is thematically consistent with the rest of Forster’s writing, emphasizing the need for humanist connection over the constraints of technology, religion, the nation, and society. Indeed, after the titular event — the destruction of the “Machine,” which provides and controls all of society — the story closes with the physical connection of the mother and the son through a kiss. Yet, the trappings of the science fiction genre become the means through which a queer erotic allegory unfolds, one in which the son, identified through his innate nonconformity, defies society and his family by venturing above the earth. Thus, the physical and geographical narrative of the short story becomes akin to, and indeed analogous to, the expression of one’s (sexual) self in coming out narratives.



If part of the story’s plot centers on the son’s desire to move from the subterranean society to the earth above, his desire is contrasted to the state of societal complacency, rending him unique — queer1, even. It is desire itself, however, that becomes the means of self identity; throughout the story, Kuno is the only one to express desire, to “want.” His desire impels him to embark on a mission to visit the earth’s surface, to transition from an interior, closet-like space, to its exterior. Thus, central to the narrative of the story is the figurative and literal “coming out” of Kuno: he moves from inside to outside, while his desires and identification with the outside earth move from private to public knowledge. Yet, the narrative relief of Kuno’s successful venture to the earth’s surface is abruptly denied. This, Kuno himself reveals as he discusses his visit to his mother: “‘I realized that something evil was at work [...] Out of the shaft — it is too horrible. A worm, a long white worm, had crawled out of the shaft and was gliding over the moonlit grass’” The “white worm” — the image itself laden with potential homoerotic meaning — comes from the earth, later multiplying (one worm becomes many) to drag


Kuno back underground. Kuno is returned to the society of the eponymous Machine; his expression of rebellion — his expression of his self — is thwarted. Eve Kosofsky Sedgwick, in her groundbreaking The Epistemology of the Closet, notes that the closet, from which queer individuals must come out, is no singular entity. Rather, the "closet" comprises the various walls that are constructed depending on the context and relationships a queer individual has with a person or a group. The closet re-emerges, for example, for the person who comes out to their friends and family, yet refuses to draw attention to their sexuality at work or to random passers-by. Indeed, the closet, to some extent, is never truly extinguished. In "The Machine Stops," the closet can be understood as the inside of the earth, where the society of the Machine resides; while Kuno comes out of the closet, he is also drawn back inside through factors outside of his control. Yet the movement from inside to outside the closet, despite being temporary in the story, creates irrevocable consequences. Upon his confession, Vashti disowns her son; he, meanwhile, holds on to the idea of the earth's

1. "Strange, odd, peculiar, eccentric" — Oxford English Dictionary


surface, and the possibilities that life above ground entails. It is the geographic associations of these characters — Vashti, content to live underground, and her son, desiring life above it — that position their desires, their orientations, in relief of one another. Queerness and heterosexuality, both in the story and for Sedgwick, are thus defined in opposition. According to Sedgwick, the semiotic emergence of "queerness" — a word which, devoid of its sexual context, signifies otherness or peculiarity — and "homosexuality," which comes out of the 19th century, occurs before, and even creates, the notion of heterosexuality. The language of homosexuality becomes the avenue through which a heterosexual society can define itself, in terms of difference. Yet the distinction between queer/homosexual and heterosexual — and thus the closet itself — is not a natural one; while sexual preferences existed before the 19th century, the codification of the language of sexual orientation as an identity is artificial. Much like the artificial world of the Machine, however, once codified, the distinction exists: the desires to live above or below ground are distinct, much like queer and heterosexual identities. Both the closet and the Machine, once constructed, are pervasive.

III.

Much like the Machine and the language of sexual identity, Forster,



in Maurice, draws immense attention to the construction of economic class for his central characters. Maurice, who is initially linked to the capitalist class, is first partnered with Clive, a figure representing the aristocracy, bound to a dying feudal social class. They meet at Cambridge, their families become friends, and Maurice's sister becomes a potential fiancée for Clive after his sexual shift. Simply put: they are both wealthy and elite. This proves a great contrast with Maurice and Alec. Alec works for Clive. Alec is not well educated, he is from a lower-class background, and his family life remains alien to the reader. It is the latter relationship which ultimately prevails. If The Epistemology of the Closet points to the emergence of a homosexual/heterosexual distinction as a non-natural societal occurrence, Maurice (and "The Machine Stops") emphasizes the historical and economic context of this emergence: capitalism. The class distinctions and similarities between the three characters ultimately underscore the economic structures which create them, while the deferral to and adoration of the Machine, as well as the environmental/resource depletion, points to an uncontrolled capitalist progression. It is in the language of capitalism, borrowing from Adam Smith's The Wealth of

Nations, that the importance of a homosexual/heterosexual dichotomy must be understood. As an economic system wherein accumulation and production are central, the language of capitalism becomes uniquely tied to the body, with human labour becoming itself a form of capital: "[l]abour therefore, is the real measure of the exchangeable value of all commodities." It thus follows that, in linking the language of capitalism to the body and the need for human labour, the production of new bodies through (re)productive sex becomes integral to capitalist survival. The economy demands heterosexual bodily accumulation. Non-reproductive — and thus economically unproductive — homosexual sex therefore becomes associated with "sterility" in Maurice. Yet the charge of sterility is met, by Clive, with the interrogative: "Why children? [...] Why always children?" Here, the notion of capitalist reproduction as self-evidently meritorious is questioned — "Why children?" can be understood as implying "why produce more capital?" In his book No Future, the queer theorist Lee Edelman introduces the term "sinthomosexual" to describe the notion of a homosexual lack of concern for the future, engaging in non-reproductive relationships, particularly in relief of the image of the child. The


homosexual becomes deceptively framed as dangerous next to the child, whose future is symbolically encapsulated not by the body of capitalism, which it will inevitably bolster, but by the more nebulous societal fantasy for the future. Of course, the real threat to the child is its capitalist dehumanization, becoming labour in the production of commodity (and thus capital and commodity itself), not the non-reproductive sexual activities of the individuals its image contrasts. Though deceptive, the notion of the homosexual as a destructive force — particularly when juxtaposed with the image of the child — carries real consequences: under it, life outside the closet is self-destructive. According to Louis Crompton in Homosexuality and Civilization, coinciding with a proto-capitalist Protestant Reformation, England introduced the Buggery Act of 1533, which made same-sex activities punishable by death. Thus, the legal ramifications of queerness are linked specifically to the demand for reproductive sex in an emerging capitalist economy. As queer sexual activities remained criminal in England — set, again, against the backdrop of Oscar Wilde's own persecution — the closet became a means of safety and survival. The publication history of Maurice, penned by a queer author, affirms this dangerous fact. As Sedgwick notes, closets exist in multitudes. While it is not entirely clear what Forster's relationship to the closet was among his private friends and family, it can be assumed he was out to some (indeed, the novel was shared with some of his closest friends) and not out to others; to a broader readership, all that might have been known of his sexuality was speculation. Yet, where the intimate and personal depictions of queer sexual activities in Maurice could not be published until after his death, cloaked in the realm of speculative fiction, Forster could publish "The Machine Stops," mapping the space of the closet and imagining its destruction.

IV.

The parameters of the closet are hazily defined. As the closet multiplies and strengthens within different societal contexts, it becomes part of the social geography of the capitalist economy. Yet it is in the fiction of E. M. Forster that the possibility of a space outside the closet is realized: in "The Machine Stops," Kuno clings to the image of people inhabiting the world on the earth's surface, even as he and his mother witness the total destruction of their society — and die themselves. Likewise, Maurice and Alec end the novel together, inhabiting a space "outside class, without relations or money." The removal of oneself from the oppressive society becomes the transition out



of the closet. Indeed, the closet exists in more than just a metaphorical sense: for both "The Machine Stops" and Maurice, the closet-society is material. The world of the Machine is a physical prophecy of capitalist progress; the Edwardian society of Maurice more or less reflects the capitalist materiality of early 20th-century Britain. As a result, the closet exists not just in relation to queer self-identity and identity expression, but as a material confine for both homo- and heterosexual individuals. If heterosexuality is defined in relation to homosexuality and queerness, so too must it exist within an ever-expanding and unfolding closet-society. The closet — an interior space — is bounded by another interior, both spaces

framed and enclosed by the closet's construction. The destruction of the "Machine," then, implies a teleology of the closet: if the closet is ever-expanding and unfolding, it will eventually collapse in on itself. "[T]here came a day when, without the slightest warning, without any previous hint of feebleness, the entire communication-system broke down, all over the world, and the world [...] ended," Forster writes, again underscoring the suddenness of destruction. The Machine stops — fulfilling the prophecy of the short story's title — spelling the end both of itself and of the society living underneath it. Thus, if operating outside the closet can be self-destructive, remaining within the closet-society — which bounds both the interior of the closet and the interior outside the closet — is self-effacing. Both Kuno and Clive explore, to some extent, a space outside the closet. Both narratives end, however, with their return inside. Indeed, Kuno and Clive can be understood as analogues of one another, both seeing their journeys beyond the closet frustrated. From a narrative standpoint, Kuno is erased after his retelling of his foray above the earth, until the final moments of "The Machine Stops," existing in the story's periphery as one of the many members of the subterranean society. Likewise, while Clive


remains a central character throughout Maurice, the narrative interest in his subjectivity is lost after he returns from Greece, until the final moments of the novel — when he sees Maurice for the final time. Both Clive and Kuno, in the end, are resigned to normal lives within their respective societies, until those societies collapse inward. Yet the collapse of the Machine, for Kuno, does not spell humanity's destruction. Where the barrenness of the landscape proves inhospitable for Clive — he fixates on the "dying light" and the "dead land" — this landscape becomes a source of hope for Kuno. As his society is destroyed, Kuno imagines a group of people living beyond the Machine on the surface of the earth, the "homeless," who are left to survive the Machine's destruction. If the teleology of the closet is destruction, then Maurice and Alec become like the "homeless." At the end of the novel, Maurice and Alec realize how to live together: "[t]hey must live outside class, without relations or money; they must work and stick to each other till death." Maurice and Alec do not just leave their respective closets, but elope, escaping the economic and social conditions of their closet-society altogether. Outside the constraints of the body economy and Edwardian social norms, Maurice and Alec are free to be together.

Both Maurice and "The Machine Stops" can be seen as an epistemological project: identifying, through language, the queer erotic self and the society averse to it. Yet access to a space beyond the averse closet-society is limited. Maurice ends, much like "The Machine Stops," with the central perspective belonging not to someone outside, but to someone inside — yet with the knowledge of that exterior space. "[Clive] waited for a little in the alley, then returned to the house," Forster writes near the novel's close. For Clive, the epistemic journey of the novel is complete; the outdoor "stage was empty" and he "saw barren plains." He remains — denied access beyond — inside the Edwardian economy. Clive's narrative is expressed in relief of Maurice and Alec's. Though Clive never sees Maurice again after the novel ends, Forster offers an outcome to Maurice and Alec's elopement (told immediately following "[t]hey must live outside class"): "England belonged to them. [...] Her air and sky were theirs." Much like the "homeless," who remain after the Machine is destroyed, Maurice and Alec — in stepping out of the rigid Edwardian body economy and society — inherit England.



Cecilia Bell

Emilie Biggs



Ashby Bland

Celine Chen

Ryan Rusiecki



Melanie Shi

Sam Wilcox




