E55: re:joinder - The Limits of Artificial Persuasion

We live in a world of unbridled technological and argumentative advancement. A.I. has learned to debate Thanksgiving-table politics against humans. People may soon be using “argument checks” as well as “grammar checks” on their smartphones. Cats and dogs have finally put aside their differences and learned to live in peace by forming a coalition against postal workers. Welcome to the future.

Whether this sounds like an irenic utopian ideal or an Orwellian dystopia to you, it is the subject of today’s episode! In the first installment of our newest re:joinder series, Disciplining Disciplinary Boundaries, we take aim at an article that feels designed to make humanists pull their hair out: Benjamin Wallace-Wells’s “The Limits of Political Debate,” published in The New Yorker. This article tells the story of Project Debater, an artificial intelligence designed to compete in political debate competitions against humans using mountains of empirical evidence and “fifty to seventy” prefabricated argument structures. As we read through the dramatic tale of P.D.’s journey from its inception to its first high-profile defeat in public debate by Harish Natarajan in 2019, we discuss the strange and fascinating assumptions that science journalists (and scientists themselves) make about the humanities.

We also frame our reading of the article with two critical pieces of rhetoric scholarship that help illuminate its various rhetorical pitfalls and spurious assumptions. First, Jeanne Fahnestock’s 1986 classic “Accommodating Science” lays the groundwork for studying science journalism by taxonomizing some of the typical rhetorical appeals and information transformations journalists use to make hard science more appealing to public audiences (e.g., sacrificing technical details in favor of a dramatic narrative of “discovery”). Finally, we end with Carolyn Miller’s 2007 article “What Can Automation Tell Us about Agency?” and reflect on the ways that A.I. can only have rhetorical agency if an audience attributes that agency to it. Miller’s piece helps us better understand why Project Debater suffered defeat at the hands of a human, and why Wallace-Wells’s article tells us more about the limits of artificial intelligence than about the limits of rhetorical persuasion.

Works & Concepts Cited in this Episode:

Fahnestock, J. (1986). Accommodating science: The rhetorical life of scientific facts. Written Communication, 3(3), 330–350.

Miller, C. R. (2007). What can automation tell us about agency? Rhetoric Society Quarterly, 37(2), 137–157.

Plato. (2008). Gorgias (B. Jowett, Trans.). Project Gutenberg. (Original work published c. 380 BCE). Retrieved from: https://www.gutenberg.org/files/1672/1672-h/1672-h.htm

Slonim, N., Bilu, Y., Alzate, C., Bar-Haim, R., Bogin, B., Bonin, F., ... & Aharonov, R. (2021). An autonomous debating system. Nature, 591(7850), 379–384.

Wallace-Wells, B. (2021, April 11). The limits of political debate. The New Yorker. Retrieved from: https://www.newyorker.com/news/annals-of-populism/the-limits-of-political-debate

Alex Helberg