The Politics of Interrogating What You Want

This post is composed from the (slightly altered) notes for a lecture given at the Red Victorian in San Francisco on January 7, 2020. It is the first of two sections (part two is coming soon): a general consideration of desire, its social production and exploitation, and the problems this poses for us. The second section deals more specifically with questions of sex and love, which I will argue are especially important as primary drivers of many other behaviors.

The main edits are to the form rather than the content. I’ve added hyperlinks and footnotes, and adjusted sentence structure and wording to make the text more suitable for reading than for listening. I’ve also added text to fill in for the slides.

This post is fundamentally about the way that power operates through our preferences and choices. In short, I’m going to argue in favor of critically re-evaluating our desires, and I’m going to attempt to develop a methodology for doing so.

The basic argument I intend to make is that people tend to desire (and do) things that are harmful both to themselves and to others. Often, these desires are not random but are shaped by social and cultural institutions.

Sometimes, the things we want are directly harmful to us personally. A very obvious example of this is addictive behavior. Here, you desperately want a thing, but that thing is deleterious to your health and happiness. Sometimes there are no clear beneficiaries of this kind of desire; you optimize for short-term pleasure (or the absence of pain) but end up undermining long-term happiness. Often these kinds of wants are stigmatized or penalized in some way (many addictive drugs, for example), but other times they’re socially encouraged (toxic forms of romance come to mind).

This touches on the cultural dimensions of desires. Desires, whether harmful or not, are often cultural. What’s more, they may be shared by whole demographics or populations. The widespread adoption of an ideal or desire does not make it positive, constructive, or harmless. The literary theorist Lauren Berlant uses the term “cruel optimism” to describe an attachment to ambitions and modes of living that are harmful. (1) An example Berlant offers is the desire for the American Dream. The house in the suburbs, the picturesque family life, and the various fixtures of mid-20th-century success are meant to bring happiness and a sense of fulfillment. But, Berlant points out, they are liable to disappoint. We are less likely to attain them in the economy of the 21st century, and even if we do, all kinds of isolation and alienation are likely to await us. Here, the thing that promised to make us happy made us miserable in the end.

Unlike most desire-behavior loops that we think of as addictions, the “American Dream” is a collective fantasy. It has been shared by millions of people, and has been produced by a vast nexus of commercial, cultural, political and social institutions. It has been shaped and sold to us by advertising. (2) It has been reinforced by TV shows. It has been the ecosystem into which many other things we may want fit: family, love, sanctuary, gadgets, the feeling of authority or domain (in this sense, it is a powerful “meta-context” for other desires and aspirations).

These last points touch on another important dimension of desires, especially cultural ones: there are often clear external beneficiaries of their pursuit. The federal government planned suburbia as the proper setting for the American Dream, and the economy of postwar America was dependent upon the growth of this inefficient type of conurbation. (3) Consumer goods, construction, automobile sales: suburbia generated massive demand for all of these, and the desire for the American Dream drove its expansion. Here we can see that economic pressures, governmental institutions, commercial institutions, and other social forces can shape desires, as they did in the case of suburbia.

The point is not to pick on the American Dream in particular (though it certainly has a lot to answer for), but to show how desires can be extremely political, and central to the political and economic organization of a society or culture. This example already gives us several basic prompts for interrogating personally or culturally held desires: Does it cause harm to those who hold it? Does it cause harm to others? Is it shared by many? Is it culturally encouraged? Are there external beneficiaries?

Perhaps part of the reason that desires are not thought of in terms of political power is that questions of power tend to revolve more often around coercion and violence. Yet, as Michel Foucault has shown, coercion is only the most rudimentary, crude and ineffective mode of power. Much more effective than this kind of crude power (which he calls “sovereign power”) is what he calls “disciplinary power,” in which the surveillance of political subjects eventually causes them to surveil themselves, and to self-police, even when they are not being watched by others. Yet an even more effective mode of power is a regime in which people are made, through the careful manipulation of environmental conditions (whether social or spatial), to want certain things. Here, you manipulate the environment so that a subject, acting in their own best interest, will predictably behave in the ways that the regime’s authorities want them to behave. There is no feeling of coercion; only desire. This is what makes this type of power so effective: the source of control feels as if it wells up from within the subjects themselves, and there is no dis-identification with the desires that drive their control. Rather than forcing subjects to act in a specific way, these kinds of regimes shape their sentiments, and then manipulate them through those sentiments. (4)

This shaping of sentiments has proven to be central within democracies. Alexis de Tocqueville, a French aristocrat who visited the young United States in the 1830s to assess the condition of the modern world’s first political democracy, observed that morals (and the institutions that shape them) were of the utmost importance for the health and sustainability of a democracy. Crucially, citizens within a democracy must truly buy into their morals and see them as good and true. (5) Unlike in authoritarian states, where authorities need not control people’s sentiments in the same way, in democratic societies the thoughts and feelings of the population are liable to shape the outcomes of elections and the workings of civil society. Society as a whole (its cohesive practices, resources, institutions and, importantly, elite interests) is therefore more vulnerable to the sentiments of the population.

Of course, the sustainability of a democracy is not the only goal of government or social institutions in a capitalist democracy. Another major one is the maintenance of social and class power. Wealthy and powerful people organize to defend their interests. From a control standpoint, it is urgent that the sentiments of the population be shaped in ways that preserve class power.

In democratic societies, this necessity has driven a variety of sentiment-shaping institutions. As Noam Chomsky has famously argued in his book Manufacturing Consent, mass media companies especially have served as “effective and powerful ideological institutions that carry out a system-supportive propaganda function, by reliance on market forces, internalized assumptions, and self-censorship, and without overt coercion.” (6) If the powerful in authoritarian states use force, in democracies they tend to use manipulation (unless you’re poor, that is).

So what does this mean for desire? A likely consequence is that the things we want may not only be bad for us; keeping us wanting them may also be structurally advantageous to an elite element within society. Enormous resources are poured into shaping what the population wants, thinks and feels, and this fact alone should disabuse anyone of the idea that our desires are just immanent things that well up, like some mighty spring, from within.

For over a century, critical theorists have claimed that human behavior and outlooks within capitalist democracies have been shaped by ideology, or false consciousness: a secularized way of saying that people “know not what they do.” According to the Hungarian social theorist and philosopher György Lukács, under these conditions “social truths” are nearly impossible to ascertain; the only possibility for doing so is to analyze the totality of social relations in a society. That is: to really understand humanity’s condition and possibilities, one needs to look at the overall dynamics between social classes, between powerful and subjugated interests, and at how these accumulate into a broader world or environment. (7)

We might complicate the idea that people “know not what they do.” Foucault famously proclaimed that “People know what they do; frequently they know why they do what they do; but what they don't know is what their doing does.”

In other words, you know that you upgrade your iPhone every year; you know why you upgrade it every year (“Have you seen the friggin’ camera on the iPhone 11 Pro?!”); but what you don’t know is that coltan miners in the Congo, extracting the mineral with their bare hands, die when poorly built mines collapse; or that arsenic from the batteries of discarded smartphones makes its way into the ecosystems of the places in India where they are disposed of; or that conflicts are fueled by the demand for tin, tungsten and gold; or that children are employed in the cobalt mines that feed global supply chains.

You know that you drive a large car and live in a large house; you know why you do this (“Can you even imagine trying to raise a family in the center of town?!”); but what you don’t know is that your house was built on threatened wetlands by a construction company that engages in lobbying and corruption; that your mortgage is being leveraged to fuel enormous financial speculation whose losses will be paid for by the taxpayer; and that your neighborhood banned black residents until the Fair Housing Act of 1968, at which point few African Americans could afford it anyway, because redlining and discriminatory employment practices had kept them from building equity or savings.

Effectively, the pursuit of your wants, even if they do make you happy, has secondary consequences, if not for you, then for others. It may very well be in your best interest to have new clothes twice per season, to travel the world having meaningful, self-actualizing experiences, and to have a high-powered career to fund all this, but the aggregate effect of billions of people optimizing their lives this way looks like climate change, economic exploitation and ecological catastrophe. Desires have externalities, and externalities, as we are becoming increasingly aware in the claustrophobic conditions of 21st-century “spaceship earth,” accumulate into a world: usually a very dystopian one, when desires go uninterrogated.

This problem has actually been at the heart of institution-building for a long time, and it has become crystal clear in the field of AI ethics. One of the major ethical/existential problems facing those who hope to create a benign Artificial Intelligence is: how can we build a super-intelligent servant/guardian/apparatus that doesn’t merely exacerbate or reinforce our current problems? That is: if we build an extremely powerful apparatus, how do we ensure that it does not merely accelerate the accumulation of externalities from the desires that we program into it?

To address this issue, the AI ethicist Eliezer Yudkowsky has introduced the term “Coherent Extrapolated Volition.” The core of the concept is that AI should act in our best interests, rather than being programmed simply to serve our stated desires and wants. “It would not be sufficient to explicitly program our desires and motivations into an AI,” Yudkowsky insists. “Instead, we should find a way to program it in a way that it would act in our best interests – what we want it to do and not what we tell it to.” According to Nick Tarleton (2010), “rather than attempt to explicitly program in any specific normative theory (a project which would face numerous philosophical and immediate ethical difficulties), we should implement a system to discover what goals we would, upon reflection, want such agents to have.” This is what is meant by “Coherent Extrapolated Volition”: it represents “our wish if we knew more, thought faster, were more the people we wished we were.” Yudkowsky expands on this: “We may want things we don’t want to want. We may want things we wouldn’t want to want if we knew more, thought faster. We may prefer not to have our extrapolated volition do things, in our name, which our future selves will predictably regret. The volitional dynamic takes this into account in multiple ways, including extrapolating our wish to be better people.” (8)

Artificial intelligence certainly magnifies these problems and raises their stakes. But, as I have been suggesting, we already live in an institutional situation in which meta-agencies act upon us and act through us. One of the things that Yudkowsky’s Coherent Extrapolated Volition attempts to get around is the short-sightedness of individuals’ perceptions of their own interests. A “friendly AI” equipped with Coherent Extrapolated Volition would be able to spot the problems with suburban development, financial speculation, and planned obsolescence. Even better, it would be able to offer alternatives. Is this impossible for collective human intelligence? Certainly, efforts have been made in the past to program cultures and institutions to address societal problems created by individuals and organizations pursuing their own narrow interests. One of the problems with these institutions and cultures has been their reliance on norms. Norms are like brittle glass restraints that have hardened out of fluid conditions, which presents a problem when those conditions change and the norms don’t keep up. This is the problem of “values lock-in.” What you want here, according to Yudkowsky, is a moral, rather than a normative, system. Norms are dumb; morals are smart.

Perhaps, well before we tackle the problem of AI development, what we need is a method—if not a moral system—for interrogating wants.

Such a method would have to ask:
Does a desire cause harm to those who hold it?
Does a desire cause harm to others?
Is a desire shared by many?
Is a desire culturally encouraged?
Are there external beneficiaries? Who benefits from the pursuit of these desires?
What institutional forces shape these desires?
How do these desires aggregate into a “social totality” or world?

These questions must be at the heart of any “critical hedonist” approach to redesigning desirability criteria, shifting economies of care and pleasure, and remaking culturally-held aspirations and ambitions.
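
To make the checklist concrete, here is a minimal sketch of how these questions might be encoded so that each one has to be answered explicitly for a given desire. It is an illustration only, written against the questions above; the Desire structure, its field names, and the example values are hypothetical conveniences introduced for this post, not anything the argument prescribes.

```python
# A minimal, illustrative encoding of the interrogation checklist above.
# Everything here (the Desire dataclass, its field names, the example values)
# is a hypothetical convenience for this post, not a method prescribed by it.

from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Desire:
    name: str
    harms_holder: bool = False           # Does it cause harm to those who hold it?
    harms_others: bool = False           # Does it cause harm to others?
    widely_shared: bool = False          # Is it shared by many?
    culturally_encouraged: bool = False  # Is it culturally encouraged?
    external_beneficiaries: list[str] = field(default_factory=list)  # Who benefits from its pursuit?
    shaping_institutions: list[str] = field(default_factory=list)    # What institutional forces shape it?
    aggregate_effects: list[str] = field(default_factory=list)       # What world does it aggregate into?


def interrogate(desire: Desire) -> list[str]:
    """Walk the checklist for one desire and return the flags it raises."""
    flags = []
    if desire.harms_holder:
        flags.append("harms the people who hold it")
    if desire.harms_others:
        flags.append("harms others")
    if desire.widely_shared and desire.culturally_encouraged:
        flags.append("a widely shared, culturally encouraged fantasy")
    if desire.external_beneficiaries:
        flags.append("external beneficiaries: " + ", ".join(desire.external_beneficiaries))
    if desire.shaping_institutions:
        flags.append("shaped by: " + ", ".join(desire.shaping_institutions))
    if desire.aggregate_effects:
        flags.append("aggregates into: " + ", ".join(desire.aggregate_effects))
    return flags


# Example: the American Dream, as characterized earlier in this post.
american_dream = Desire(
    name="the American Dream",
    harms_holder=True,
    harms_others=True,
    widely_shared=True,
    culturally_encouraged=True,
    external_beneficiaries=["construction", "automobile sales", "consumer goods"],
    shaping_institutions=["advertising", "TV", "federal housing policy"],
    aggregate_effects=["suburban sprawl", "isolation and alienation", "ecological cost"],
)

for flag in interrogate(american_dream):
    print("-", flag)
```

Run on the American Dream as characterized earlier, the sketch simply re-surfaces points already made; whatever value such an exercise has lies in forcing each question to be answered explicitly, not in the output it produces.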

Notes:

  1. See Lauren Berlant’s interview on the book Cruel Optimism.

  2. See Roland Marchand, Advertising the American Dream.

  3. See this article on the Federal Housing Administration.

  4. See Foucault’s theory of governmentality.

  5. See Alexis de Tocqueville, Democracy in America.

  6. See Edward S. Herman and Noam Chomsky, Manufacturing Consent: The Political Economy of the Mass Media.

  7. See György Lukács, History and Class Consciousness.

  8. See Eliezer Yudkowsky, “Coherent Extrapolated Volition” (2004), and Nick Tarleton, “Coherent Extrapolated Volition: A Meta-Level Approach to Machine Ethics” (2010). Note that Yudkowsky has recently turned away from his original endorsement of CEV, favoring other principles instead.