With regard to many of the vexing problems facing the United States (and the global community), it is fair to say that the failure of collective action is the primary barrier. For example, with regard to climate change, warnings have been issued for decades and technological advances provide a path to avoid calamity. During the COVID-19 pandemic, social distancing, masking, and the rapid development of vaccines created opportunities to limit the death toll, but resistance to each solution extended the scale and impact of the disease. There are many such instances in which collective action lags behind the existence of solutions. It is therefore reasonable to conclude that, when faced with large-scale problems, it is social scientists who have the most ground to make up. It is with these limitations in mind that Druckman (2021) synthesized the vast literature on social influence into a Generalizing Persuasion (GP) Framework. This framework illustrates the impressive amount of literature that exists on the social and psychological barriers to collective action. The field of communication has certainly made important contributions, including the framing of messages (Chong & Druckman, 2007; Gamson, 1992), the use of narrative (Dahlstrom, 2012; Fisher, 1985), and the avoidance of directive or controlling language (Brehm & Brehm, 1981; Raines, 2013). There is one crucial area of communication scholarship, however, where our understanding of message composition strategies is entirely lacking: we do not know what constitutes a good argument. In his GP Framework, Druckman (2021) characterizes the scholarship on this question as providing “scant theoretical insight” (p. 73) because the ultimate definition of argument quality is wrapped up in “an inadequate tautology” (p. 73). He is not the first to level this complaint (cf. Carpenter, 2015; Dijk et al., 2003; Eagly & Chaiken, 1993; Hoeken et al., 2020; O’Keefe, 2003). 
We seek to contribute to the ongoing conversation about argument quality by proposing a practical definition of good arguments. A practical definition focuses on the question of utility to those who wish to communicate information to a public and who seek guidance about effective message construction. This stands in contrast to the two dominant approaches: empirical definitions (e.g., Petty & Cacioppo, 1986), in which a good argument is defined as one that evokes more positive and fewer negative reactions, and normative definitions (e.g., Johnson et al., 2004; Hoeken et al., 2020; Park et al., 2007), in which a good argument is defined as one that offers more desirable consequences and/or a higher probability of being true. We begin our search for a practical definition of argument with a maxim informed by the psychological reactance literature (Brehm & Brehm, 1981; Rosenberg & Siegel, 2018): namely, that people wish to possess the information necessary to draw their own conclusions. This is hardly a new premise; in fact, it echoes Aristotle’s view that audiences prefer to come to their own conclusions and will reward speakers who use the enthymeme to engage the listener’s faculty of reasoning (Bitzer, 1959). Ours is not an explicit test of the enthymeme or of psychological reactance theory, though we are indebted to both frameworks. Our opening position is that people do not merely want to know what experts think; they want to know why experts think what they do. In other words, we define a weak argument as a claim supported by an appeal to expertise, whereas we define a strong argument as a claim supported by an explanation of the evidence experts evaluated to render their judgment. Our fundamental premise is that these evidence explanations will outperform appeals to expertise when the claim is controversial. 
Furthermore, we expect evidence explanations to be even more important when audiences are disinclined to accept expert judgment, whether because of a directional motivation (Kunda, 1990) or a general lack of trust in elite opinion (Cappella, 2002; Mede & Schäfer, 2020). We also see no reason that one of the most promising areas of scholarship on reducing message resistance, namely narrative persuasion, should not be incorporated into any practical definition of argument strength. However, we seek to contribute to scholarship on narrative persuasion by adding the concept of narrative illustration. Specifically, we argue that narratives can aid in explaining the nature of scientific evidence by providing a case in point, or illustration, of the scientific data, which aggregate many individual cases to provide an overall picture. Thus, a good argument would present the aggregated data and then provide an illustration in narrative form. Numerous politicized information environments would provide suitable tests of our expectations, including but not limited to climate change, the efficacy of COVID-19 remediation measures, the legal evidence against prominent politicians involved in the January 6th attack, the evidence of voter fraud in general and the legitimacy of the 2020 election in particular, and the historical record regarding slavery, Reconstruction, and civil rights. Though all of these would make excellent contexts in which to test our theoretical expectations, we focus on the issue of race-informed education in elementary schools. We select this topic for three reasons. First, it intersects with three salient attitudes linked to cognitive biases in information processing: partisanship, racial attitudes, and low social trust. 
Second, it lends itself to straightforward experimental manipulation because the race-informed curriculum in question can easily be explained in a simple two-group treatment/control randomized experiment, facilitating the operationalization of evidence demonstration. Third, compared to other issues (especially climate change and COVID-19), race-informed curriculum is an important but less studied topic that directly affects the academic audience of this study.