Due to the introduction and rapid ubiquity of artificial intelligence (AI) and AI-integrated programs that can be used by students and teachers, educational scholarship evaluating the capabilities of AI is needed. This study evaluates the abilities of three prominent AI programs -- ChatGPT, Microsoft's Bing, and Google's Bard -- to create high school lesson plans on the subjects of Martin Luther King, Jr., the Indian Removal Act, and climate change. The authors judge the quality of the lessons' content based on scholarship in the education field and document the process of prompting the AI to produce lessons more in line with these criteria.

In a 2011 episode of the television game show Jeopardy!, Ken Jennings played against IBM's Watson, an artificial intelligence (AI) designed to assist in the automation of business tasks (IBM, n.d.). Jennings still holds the record for the highest average number of correct responses as well as the longest winning streak on that show, with 74 consecutive wins. Despite his human prowess at trivia, he and another high-scoring Jeopardy! champion, Brad Rutter, lost to Watson. Realizing his inevitable defeat, Jennings wrote underneath his Final Jeopardy answer, "I for one welcome our new computer overlords" (Hiskey, 2012). In the years since that Jeopardy! match, AI science has continued to evolve, enabling computers to process human language more fluidly and complete a myriad of tasks. In November 2022, a company called OpenAI released ChatGPT, an AI "chatbot." ChatGPT describes itself as "a language model developed by OpenAI, which uses deep learning techniques to generate human-like text based on the input it receives. It is designed to respond to questions, complete text based on the prompt, summarize long text, and perform various other language-related tasks" (OpenAI, 2023a). Soon after, Microsoft released a ChatGPT-based AI integrated into their Bing search engine, and Google released Bard.
(At the time of data collection for this study, Microsoft's AI was integrated into the Bing search engine and was not branded as a separate product, but prior to publication the name was changed to Copilot as part of Microsoft's strategy to compete with ChatGPT; Warren, 2023). In addition, a myriad of AI-based platforms tailored for specific applications have been released. Some educators may see these AIs chiefly as tools for malfeasance. Many, for example, worry that students will use AI to cheat, as it is capable of producing answers sophisticated enough to pass exams in both law schools and business schools (Kelly, 2023). Given that AI is increasingly prevalent in tech circles and is being integrated into everyday technological tools such as search engines, educators will need to adapt to its presence. This paper focuses on the lesson planning abilities of three widely available and free-to-use AIs and examines the extent to which social studies educators can use AI as a tool to create lesson plans that challenge students to think critically about social studies subjects. Using someone else's lesson plans can be hit or miss -- so much depends on the creator, whom the teacher may or may not know. Online marketplaces like Teachers Pay Teachers, as well as social media platforms like Pinterest, are widely used and can contain not only inadequate but also harmful narratives about topics and issues vital to social studies, such as the Civil Rights Movement (Rodríguez et al., 2020). AI also offers the promise of quickly generated lessons. Early explorations of ChatGPT for teaching purposes by publications such as Education Week have emphasized producing basic outlines to serve as a starting point for lessons, while noting that it can reduce the time burden of some of the more menial teaching tasks, such as grading and writing letters of recommendation (Mallon, 2023; Will, 2023).
Yet, with the focus on the time-saving affordances of the new technology, little attention has been paid to the quality of the content in lessons produced by AI and whether AI produces lessons that repeat problematic framings. Given that large language models, the basis of the three AIs we investigated, are trained on Internet content, there is a risk that an AI asked to produce educational content will merely replicate potentially problematic discourses commonly found online on a variety of topics. Our goal in this study was to contribute to emerging understandings of the potential for AI in social studies classrooms. Specifically, we created and used rubrics to assess AI's lesson planning ability when addressing topics whose coverage in schools has been subject to critique. Since many narratives commonly found in education are seen to serve the interests of dominant social or corporate groups, we chose to focus on how the content of the lessons was framed. We engaged the AIs to produce lessons on two historical topics, the Civil Rights Movement and Indian Removal, where both standards and texts have often reinforced problematic White or settler-colonial narratives and power structures (Sabzalian et al., 2021; Shear et al., 2015; Woodson, 2016). We also asked ChatGPT, Bing, and Bard to create a lesson on a third topic, climate change, where student learning is often complicated by corporate-initiated discourses of climate denial (Damico & Baildon, 2022). Our investigations were framed around the following questions:

1. To what extent can ChatGPT, Bing, and Bard generate robustly critical lesson plans when prompted on the subjects of Martin Luther King, Jr., Indian Removal, and climate change?

2. What types of prompting can help AI avoid potentially problematic discourses around these three topics?