This paper presents evidence-based practice applied to course design and delivery through a study, conducted during an in-person undergraduate course, that explored several aspects of test delivery. An undergraduate linear algebra course was initially designed to draw on the benefits of the well-documented testing effect, whereby frequent testing leads to better student learning. A study was conducted over one semester to assess objectively whether adding a lecture-long informational message about the testing effect, delivered by the instructor, could enhance overall performance. The study also investigated an aspect of test design: whether the difficulty of the first question (easy vs. hard) would affect overall performance on the tests. The cohort consisted of 119 students from various STEM areas, enrolled in several sections all taught by the same instructor. The course included nine quizzes, eight of which were relevant to the study; each consisted of three questions that varied in difficulty, with four quizzes starting with a hard question and four starting with an easy question. The course also included a midterm test and a final exam. The cohort was divided into two counterbalanced groups: one group received the easy question first on odd-numbered quizzes and the hard question first on even-numbered quizzes, while the other group experienced the reverse. All quizzes and exams were delivered at appropriately scheduled times, and all students were given the same amount of time to solve the quiz questions. Critically, one section of the course was chosen to receive an informational message about the testing effect, explaining how frequent testing improves performance and encouraging students to use the quizzes as a learning opportunity. For this section, the informational message was delivered once, after the first quiz (in the second week of the term). All students received messages of encouragement from the instructor throughout the term. Results showed significantly higher performance on the easy questions than on the hard questions, indicating that the manipulation of question difficulty was successful. However, there was no difference in performance between participants whose quizzes started with an easy question and those whose quizzes started with a hard question. Notably, grades were higher for the group that received the informational message than for the group that did not. It is hoped that this promising result can be extended in future experiments, which may include multiple informational messages about the effectiveness of testing delivered throughout the term.