Field Notes
Nudging toward Bethlehem
A little over a year ago, I attended a virtual conference called the “Nudges in Health Care Symposium.” For the uninitiated, the term “nudging” refers to the practice of deliberately shaping an individual’s choices, via various mechanisms, by structuring her so-called “choice architecture” without her awareness. The intention is to steer people toward decisions that benefit them, decisions they might not otherwise have made.
Nudging is based on the pioneering work of Daniel Kahneman and Amos Tversky, scientists whose research centered on decision-making. The idea is that human beings possess two distinct mental systems (“dual processes”), each with “different structures, functions, and evolutionary histories”: System One (fast, instinctual, “evolutionarily old”) and System Two (slow, deliberative, “evolutionarily recent”). We employ these systems, each to varying degrees, when navigating our countless daily choices. But the problem, Kahneman and Tversky found, is that these systems are also characterized by certain built-in biases. Some of these will no doubt sound familiar, as they’ve successfully made their way into the language of everyday pop psychology: “anchoring bias” (the tendency to place undue weight on initial information) or “loss aversion” (the tendency to weigh losses more heavily than comparable gains), to name just two.
Central to the philosophy of nudging is the idea that our built-in biases cannot be overcome. They are natural and essential, the product of eons of natural selection. Or so behavioral economists and evolutionary psychologists (two disciplines that, I’ve noticed, overlap significantly, in personae as well as in belief—more on this later) like to say. In the “ancestral environment” of the Stone Age, these biases may have served us well, allowing us to evade predators and develop tools. But these primeval mental capabilities have proved poorly suited to our present era of sugar-laden fast food and Zoom meetings.
So, the thinking goes, why not make these embedded and evolutionarily refined cognitive systems work in our—and society’s—favor? Instead of letting these biases lead to poor decision-making, why not exploit them to achieve outcomes that serve the individual and collective good? Enter the economist Richard Thaler and the legal scholar Cass Sunstein, who have applied the ideas of Tversky and Kahneman to the realm of public policy. President Obama, famously, created a Social and Behavioral Sciences Team, an in-house “nudge unit” of sorts. Notable “real-world” applications of nudging include automatic enrollment in retirement savings plans (with the option to opt out), or the City of Los Angeles’s decision to require restaurant patrons to ask for plastic utensils (as opposed to providing them by default). In both of these cases, individuals’ “choice sets” are not curtailed (e.g., I could still ask for a plastic spoon if I desperately needed one); they are simply rearranged in a way that directs people to make better decisions.
The conference I attended focused on nudges that might be implemented by hospital systems, insurers, and private employers to promote better health—to, as one of the conference organizers put it, “nudge patients in their everyday lives and passively monitor them.” One intervention, for instance, aimed to reduce missed doctor’s appointments by sending patients text reminders, the messages laden with “social norms” meant to elicit certain emotive responses in the recipient (e.g., reminding patients that “9 out of 10 people attend” their visits; or, to take a more scolding approach, instructing them that skipping their appointments “costs NHS £160”). Another used the idea of “gamification” to increase physical activity among patients with diabetes: patients were given a wearable step counter and then sorted into groups whose aims ranged from collaboration (patients worked together to score “points” corresponding to things like weight loss and improvements in blood sugar) to competition (patients were notified of others’ progress in an effort to boost their own motivation to exercise).
I should be clear that none of the techniques being discussed at the conference were of the overtly paternalistic sort, the sort that we’ve become accustomed to seeing during the last few years (e.g., mask and vaccine mandates)—though, perhaps unsurprisingly, the pandemic has presented opportunities for policymakers to try out certain “pro-social” nudges as well. Nevertheless, as a practicing physician, I felt a vague unease about the exercise of soft power that was being celebrated. The reasons for this, I will freely admit, may be entirely idiosyncratic and particular to my own approach to medicine. Many doctors, I’d wager, would not feel this same wariness.
And why would they? The prevailing sense I get is that most physicians—or most people, really—have great confidence in our vast array of medical interventions, both large and small: stents to treat heart attacks and statins to prevent them. To hear them tell it, these treatments are the product of a totality of scientific evidence, neutral in its collection and analysis; their effects are clear, their benefit to humankind unassailable. So if we have to gently, subtly nudge patients (especially those who are “noncompliant,” in medical parlance) toward reliably taking their medications or keeping their appointments, what of it? Isn’t this, on balance, a good thing?
The blind meliorism of the medical establishment and the questionable effectiveness of our vast arsenal of treatments are subjects for another essay. But, suffice it to say, as the philosopher Jacob Stegenga has written, we should have very low confidence in the ability of most medical interventions to alleviate suffering and prolong life. True “blockbuster” treatments, like insulin and antibiotics, are rare. The benefits of preventive drugs like statins, in absolute terms, are small and accrue over the course of decades. Treatments like cardiac stents are invasive interventions with a whole institutional and industrial apparatus devoted to their delivery and yet, outside of a narrow band of patients, their benefits are scant.
The sobering truth: when it comes to health, most of the gains we’ve seen over the past two hundred years have been due not to pills and devices but to things like improved living conditions and worker protections—the sorts of things, it turns out, about which behavioral strategies like nudging have little to say.
One patient I saw just recently—U., a 59-year-old woman with congestive heart failure (an inability of the heart to pump blood adequately), in and out of the hospital eight times in the past year, fired from her job due to her inability to stand for prolonged periods and her requests for frequent breaks—had been denied state disability assistance twice. Last week, I spent the better part of a morning helping her pro bono attorney construct yet another appeal. The work was plodding and discouraging; the requisite forms Kafkaesque (“Has the patient been unable to engage in any substantial gainful activity because of any medically determinable physical or mental impairment which can be expected to result in death or has lasted or can be expected to last for a continuous period of not less than 12 months?”). And yet, for U., this was likely her best shot at achieving the material security that might lead to better health.
Unless, of course, we are to listen to the nudgers, who might suggest subjecting U. to a more behaviorally informed approach: an electronic pill bottle that helpfully buzzes when it’s time for her medications; or, perhaps, an automated text-messaging platform that cheerily reminds her that she’s doing a great job. These interventions, trivial and innocuous-seeming, nevertheless represent the slow accretion of an invisible lattice, Weber’s “iron cage”—more monitoring, more data, more alerts, more reminders, more entry points into the panopticon of American healthcare. Such efforts seem to me futile at best—and at worst, bordering on cruel. Because for U. and many patients just like her—working-class patients, patients who are uninsured and undocumented—interactions with systems of concentrated power and coercive authority, hard or soft, already play an outsized role in delimiting their lives.
I am certainly not the first to raise such objections—though no one did at the conference I attended. On the contrary, the mood of the entire event (to the extent that one can detect a collective “mood” through the medium of a computer screen) was ebullient, the underlying ethos one of “it’s all for their own good”—the “they” in question, of course, being the great unwashed masses who, left to their own devices, would squander their savings and let their prescriptions go unfilled. Listening to the conference’s presenters—distinguished psychologists, economists, and physicians—I got the distinct sense that, in their vision of the world, there are two groups of people, defined by their mastery (or their lack thereof) over their own cognitive machinery. Or, as the philosopher Jeremy Waldron writes:
There are, first of all, people, ordinary individuals with their heuristics, their intuitions, and their rules of thumb, with their laziness, their impulses, and their myopia. They have choices to make for themselves and their loved ones, and they make some of them well and many of them badly. Then there are … the law professors and the behavioral economists who (a) understand human choosing and its foibles much better than members of the first group and (b) are in a position to design and manipulate the architecture of the choices that face ordinary folk. In other words, the members of this second group are endowed with a happy combination of power and expertise.
This is, in effect, the ideology of contemporary liberalism: to suspect that what hoi polloi need, most of all, is to be protected from themselves. Trump, Brexit, Orbán, Modi—these examples are routinely trotted out as proof that, as Hillary Clinton once noted, people possess “a psychological as much as political yearning to be told what to do.”
And in medicine, the parallels are clear: patients can’t be trusted to recognize their own failings; it is therefore the responsibility of the enlightened rulers and philosopher-kings of the medical-research-industrial complex to design the nudges, commission the studies, gather the data, and point to a decrease of a few percentage points in no-show rates as a smashing success—rather than marshal the gargantuan resources of the US health care budget to eliminate patients’ material constraints.
It’s no surprise, then, that we end up with approaches that “gamify” patients’ lives rather than meaningfully improve them. As obvious as it may seem, the idea bears repeating: health and disease are modulated by social conditions such as housing and employment. Chronic conditions like diabetes and congestive heart failure are among the most salient examples, yet they are too often regarded as mere functions of individual choice—eat fewer donuts! take your statin!
Outside the contrived world of models in which autonomous actors are the preferred unit of analysis, and save for the lucky few, “choice,” architecturally arranged or not, is always constrained. Or, to put it in starker terms, having a low-wage job with no health insurance is hardly a function of one’s volition. So how else, if not as a reflection of a generalized attitude toward ordinary working people, are we to interpret the tendency of (no doubt well-meaning, though largely unelected) leaders to focus their efforts on merely tweaking the twists and turns of the maze rather than making it easier to navigate—or eliminating it altogether?
Seen in this way, then, gamification and other instances of nudging in healthcare are merely another form of Foucault’s oft-cited “biopolitics,” and its corollary, which he called “governmentality”—the manner, inherited from feudalism, in which rulers care for individuals in their ambit. While attending the conference, I was immediately reminded of D., another patient of mine, also with congestive heart failure. At our last visit, she told me about her job at an Amazon fulfillment center: she worked fast, she said, because she wanted to earn a bonus by hitting certain targets, which the wearable device on her wrist helpfully tracked for her in real time. She didn’t take breaks. Her labor, now successfully “gamified,” meant that her heart failure was getting worse.
Contained within these various attempts at nudging is the idea of relocating large-scale problems to the level of the individual—of, as the sociologist Magdalena Malecka writes, “treating economic or social processes as cognitive ones.” The well-being of a population rests on its members’ individual mental attributes. A firm’s profitability depends on its workers’ internal motivations. These are conventional explanations, the stuff of any introductory business school course. Behavioral economics, despite its heterodox posturing, hews to these same explanations, while also resting on additional presuppositions. Which, in a roundabout way, brings us back to nudging’s supposed evolutionary or “natural” basis.
Recall that, for the nudgers and their ilk, biases are built in. The claim, according to the psychologist Gerd Gigerenzer, is that they are “firmly imbedded [sic] in our brains,” serving as an impediment to our inner rational homunculus. Thaler and Sunstein compare our biases to optical illusions, citing the tendency of our visual system to make errors when presented with certain stimuli as an example of a cognitive system riddled with flaws. The implicit assumption, then, as Gigerenzer notes, is that because “our cognitive system makes such big blunders like our visual system,” it must also cause us to deviate from expected utility theory (the formal name for economists’ favored version of rational behavior, involving “maximization, consistency, [and] statistical numeracy”) in our everyday decisions.
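For readers who want to see the benchmark itself, the standard formulation is disarmingly spare (the notation here is mine, not the conference’s): a “rational” agent facing a set of options is assumed to choose whichever one maximizes the probability-weighted sum of its payoffs,
\[
a^{*} \;=\; \arg\max_{a \in A} \, \mathbb{E}[u \mid a] \;=\; \arg\max_{a \in A} \sum_{i} p_i(a)\, u(x_i),
\]
where \(A\) is the set of available actions, \(p_i(a)\) the probability that action \(a\) yields outcome \(x_i\), and \(u\) the agent’s utility function. Every systematic departure from this tidy calculus is what gets labeled a “bias.”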
This assumption—that biases, like the components of our visual apparatus, have a neuronal basis, that they are somewhere inside our skulls—strikes me as a fairly large one, or at least one that is far from settled. But what unifies behavioral economics and evolutionary psychology (and indeed, the burgeoning field of “neuroeconomics”) as academic disciplines, and therefore underwrites any social policy based on research emanating from them, is the idea that psychological phenomena, such as biases, correspond to structures in the brain. And that the brain, like any other organ, has genes as its blueprint. And that these genes have been shaped, in turn, by the selection of advantageous mutations. Ergo, behavioral phenomena such as the tendency of many people to misapprehend simple mathematical probabilities (a favorite foible of behavioral economists, and therefore the target of many nudges) have, at their core, natural substrata, sculpted across evolutionary time, operating below the level of conscious awareness, and over which, crucially, individuals have no control. Hence the need for the more enlightened among us to exploit these tendencies. Magnanimously, of course.
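To make the “foible” concrete, consider the sort of screening-test problem these researchers like to pose (the numbers below are illustrative, chosen only for arithmetic convenience): a disease affects 1 percent of the population, a test catches 90 percent of true cases, and it falsely flags 9 percent of healthy people. Asked how likely a positive result is to signal disease, many respondents (physicians included, according to the behavioral literature) answer something close to 90 percent. Bayes’ theorem says otherwise:
\[
P(\text{disease} \mid +) \;=\; \frac{0.9 \times 0.01}{0.9 \times 0.01 \,+\, 0.09 \times 0.99} \;=\; \frac{0.009}{0.0981} \;\approx\; 0.09,
\]
that is, only about one positive result in eleven actually reflects disease.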
One thing I hope to call into question is the idea that this unidirectional, bottom-up view of “genes → brain → mind,” attractive though it may be, in any way explains human cognition, and therefore has anything meaningful to say about the complexities of human behavior—in particular, about why someone like U., the patient I told you about earlier, might not “take charge of her own health” in the manner the medical system has prescribed. As the philosopher John Dupré writes, such conclusions “should be rejected not only for their epistemological worthlessness, but because groundless guesses in this area can be extremely dangerous.”
Here, the reader may object: “Conjectural? Sure. Provocative? Fine. But dangerous? After all, aren’t we simply trying to make people healthier?” Perhaps. But I would argue that emphasizing behavioral approaches to making people healthier misses a crucial point. Attaching terms like “evolutionary” to an extant field of study—as defenders of behavioral approaches are wont to do—can lend the appearance of intellectual heft and value neutrality to what is, often enough, a scholarly endeavor that serves the interests of the wealthy and powerful and justifies an existing iniquitous state of affairs (the prefix “neuro,” as others have argued, can operate in much the same way). In the case of behavioral economic policies like nudging, societal problems like poverty and inequality are not only recast as problems of individual cognition but are reframed, using all manner of naturalizing discourse, as problems of individual biology. And in inscribing such problems into nature, this discourse renders them fixed, immutable, something to be tinkered with rather than overcome.
This line of thinking, in addition to being dismissively reductive in its approach to our most vexing problems, also—if I may make a crudely adaptationist argument of my own—serves its own ideological function in contemporary society. The notion that a sizable portion of the population is hard-wired to behave in a certain way operates as a kind of taxonomical device, much like another spurious biological category of dubious evolutionary importance: race.
Race, as Adolph Reed observes, sorts people “into hierarchies of capacity, civic worth, and desert based on ‘natural’ or essential characteristics attributed to them,” legitimizing a given social order’s “hierarchies of wealth, power, and privilege … as the natural order of things.” The notion of inherent, supposedly evolutionarily conferred biases functions in much the same way. The solution to the problem of someone like A.—my patient with terribly high blood pressure, barely controlled on a five-drug regimen—eating too much salt lies not in the sort of redistributive effort that might transform her neighborhood into something other than a food desert; rather, it lies in recognizing that A. suffers from “diversification bias” and in altering the local fast-food joint’s menu layout accordingly.
Narratives like these function as “just-so stories,” propagated over the years precisely due to their tendency to provide explanations that also happen to preserve existing relations of power. Over time, these stories come to seem like a priori facts, coalescing comfortably with the interests of society’s upper strata (in this case, insurance and health system executives, government officials, academics and members of think tanks, to give an updated version of C. Wright Mills’s “power elite”). One needn’t be a philosopher or historian of science to appreciate the idea that any line of inquiry that seeks to find evidence of inborn human difference will find it, independent of the existence of such difference or its explanatory power (see, again, the sordid “scientific” history of race and its enduring ability to appear and reappear as an explanation for existing regimes of inequality). That such explanations possess the imprimatur of empirical data should do little to assuage larger epistemic concerns.
Behavioral economists and their champions in medicine are quick to point to these empirical data to buttress their claims. One need only look at the endnotes to Thaler and Sunstein’s bestselling book, or, for that matter, the extensive citations peppering any of the PowerPoint presentations at the conference I attended. But it’s one thing to invoke the results of controlled experiments and conclude that biases, which form the foundation of the discipline, must exist as empirically observable phenomena; it’s quite another to locate their existence in our gray and white matter. The distinction may seem trivial, but it is an important one.
For one, this sort of logical leap seems to paper over the very real replication crisis plaguing not only behavioral economics, but also the behavioral sciences more generally. If the initially robust findings supporting core tenets of behavioral economics (like, say, “loss aversion”) don’t hold up after sustained and repeated scrutiny, why the ongoing effort to legitimate them using the techniques of modern cognitive neuroscience? And why the ongoing investment on the part of our most august healthcare institutions in approaches like nudging, which suffer from the same problems of reproducibility and, time and time again, have been shown to be minimally effective at best? One answer might be that their minimal effectiveness is precisely the point. As long as those with more or less direct access to society’s levers of power can focus their time and effort (and public funds) on nominal behavioral interventions like nudging; and as long as these efforts are supported by a cadre of “advanced men of science” (to borrow Spencer’s famous phrase) who see their role as, among other things, devising ways to exploit biases and locate their neural correlates—it’s safe to assume that those levers will remain largely untouched.
The concept of race and the “work” that it does may again prove instructive. I am reminded, for instance, of efforts to redescribe the problem of racial injustice as one of “implicit bias,” divorcing the social category of race from its political-economic roots and transposing racism to our amygdalas. Never mind that the most commonly used method of detecting implicit bias, the implicit association test, tells us next to nothing about our internal mental states. That hasn’t stopped organizations (including those in healthcare) from doubling down on its use—the hallowed New England Journal of Medicine going so far as to recommend “taking the Harvard Implicit Association Tests” in order to “combat the harms caused by these attitudes and beliefs.”
There are other, deeper issues here, beyond the scope of this essay: the neo-phrenological assumption, suffusing so much contemporary research on biases, that complex mental phenomena can be traced to discrete anatomic regions; or the “mereological fallacy,” which, instead of viewing individuals as socially embedded creatures both shaped by and constitutive of their psychological environment, crudely conflates brains with selves. But the fact remains: notions of individual bias—racial or otherwise—succeed in transforming inequality into something attitudinal, in the form of infra-cortical subconscious predilections. In doing so, they divert our collective attention away from structural determinants of human experience, in the process reifying the supervisory role of a thin stratum of behaviorally attuned technocrats, a sort of “cognitive aristocracy.”
Reflecting on all of this while listening to the conference’s esteemed presenters, I couldn’t help but think about what drew me to medical school in the first place, so many years ago. My reasons were hardly selfless; if anything, I sought a more explicit version of the sort of power being extolled during the symposium. In my twenties, this was the view of medicine presented in television shows like Grey’s Anatomy, with its cast of surgeons using exquisite technical ability to perform miracles for a grateful populace. Among the medical students I teach these days, the theme persists, albeit in a different form—many of them still want to be neurosurgeons, but just as many of them profess a sincere desire to be, like the conference organizers, “change agents” and “thought leaders.” Chief medical officers and executive vice presidents. The vanguard of our biopolitical future.
Now this view strikes me as all wrong. We tend to think of healthcare as a top-down endeavor, the purview of an enlightened priesthood, manipulating individual bodies and collections of bodies from an exalted position. But medicine is—or at least, has the potential to be—the most subversive of disciplines. “[M]any illnesses that enter the clinic,” as the medical anthropologist Nancy Scheper-Hughes writes, “represent tragic experiences of the world.” It is these experiences and their shared societal (read: political-economic) origins, rather than “natural” phylogenies of human difference, that should form the terrain of the doctor’s struggle.
I can trace this transformation in my thinking by looking back at my own written (and later, typed) notes from previous patient encounters. When I was a student, these notes stuck to a certain rigid scheme that reflected the way we were taught, enumerating a patient’s presenting symptoms in bullet-point format. Chest pain: dull, left-sided, exertional. The “social history” section, in which the doctor is meant to make a note of a patient’s occupation and habits, was regularly given short shrift. We were, after all, in the business of treating disease. A patient’s job, her daily routine, how she attended to and prioritized her life’s many projects—these hardly seemed consequential.
These days, my notes look different. The social history, more often than not, is the history, the patient’s own illness narrative refracted through her corporeal existence. In some minuscule way, I see writing such notes as an act of resistance, a counter-friction to the ever-tightening iron cage: as electronic health records have supplanted written notes, the very purpose of medical documentation has changed. A patient’s chart no longer represents the written repository of her life and eventual death. Rather, it can be seen as a glorified Excel spreadsheet: a vast aggregation of data from which diagnostic codes can be derived, healthcare costs tallied, and bills sent.
But to write about a patient like U. in the perfunctory manner most amenable to medical billing and coding seems dishonest, just another way of collapsing personhood and biology under the rubric of instrumental rationality, as the nudgers would have us do. U., like so many of my patients, doesn’t simply have “congestive heart failure” and “chronic kidney disease.” These disease states interact with each other and several others, of course; but more importantly, they make sense as treatable entities only within the larger process of lifemaking: her first job, the one from which she had been fired, packing garment boxes. The second job she had picked up immediately after, as a home health aide. The unpaid rent, the mounting collection notices. In this way, U. had become the bearer of an entirely new illness, one that defies categorization. The claim of behavioral economics is that recognizing U.’s biases and nudging her accordingly would make this illness easier to bear. I have my doubts. But beyond her individual fate, I worry about what ensnaring a patient like U. in a cordoned-off sociopolitical sphere, with boundaries demarcated as scientific and couched in the language of evolutionary theory, might mean for the practice of medicine itself.
None of what I have written here is meant to endorse unqualified blank-slateism. Human beings very likely do possess innate cognitive structures, especially those that endow us with linguistic ability. What I remain wary of are those accounts of innateness that, like now-discredited forms of Victorian race science, are adduced to explain existing patterns of inequality. The incursion of intellectual programs like behavioral economics and evolutionary psychology into the practice of healing is only the latest iteration of this. But if we are to heed Virchow’s famous dictum that “medicine is a social science, and politics nothing but medicine at a larger scale”—and in doing so, realize medicine’s emancipatory potential—we need to rid it of this sort of biodeterministic casuistry once and for all.