Agents rarely act in isolation; their behavioral history, in particular, is public to others. We seek a non-asymptotic understanding of how a leader agent should shape this history to its maximal advantage, knowing that follower agent(s) will be learning and responding to it. We study Stackelberg leader-follower games with finite observations of the leader commitment, which commonly model security games and network routing in engineering, and persuasion mechanisms in economics. First, we formally show that when the game is not zero-sum and the vanilla Stackelberg commitment is mixed, it is not robust to this observational uncertainty. We then propose observation-robust, polynomial-time-computable commitment constructions for leader strategies that approximate the Stackelberg payoff, and further show that these commitment rules approximate the maximum obtainable payoff (which can in general exceed the Stackelberg payoff).
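As a purely illustrative companion to these claims (not the paper's construction), the sketch below simulates a small 2x2 Stackelberg game whose optimal commitment is mixed: a follower who best-responds to an empirical estimate of the leader's mix, built from finitely many observations, can sharply degrade the leader's payoff relative to the full-observation Stackelberg value, while a slightly margin-shifted commitment recovers most of it. The payoff matrices, the 0.05 margin, and the grid/Monte Carlo parameters are assumptions chosen for the example.

```python
# Minimal numerical sketch (illustrative game, not taken from the paper) of why a
# mixed Stackelberg commitment can fail to be robust to finite observations.
import numpy as np

rng = np.random.default_rng(0)

# Leader payoffs A and follower payoffs B for leader actions {U, D}, follower actions {L, R}.
A = np.array([[2.0, 4.0],
              [1.0, 3.0]])
B = np.array([[1.0, 0.0],
              [0.0, 1.0]])

def follower_best_response(p_up):
    """Follower best-responds to a believed leader mix (p_up, 1 - p_up);
    ties are broken in the leader's favour (strong Stackelberg convention)."""
    belief = np.array([p_up, 1.0 - p_up])
    utils = belief @ B                              # follower's expected payoff per column
    tied = np.flatnonzero(np.isclose(utils, utils.max()))
    return max(tied, key=lambda j: belief @ A[:, j])

def leader_payoff_full_observation(p_up):
    """Leader's payoff when the follower observes the commitment exactly."""
    j = follower_best_response(p_up)
    return np.array([p_up, 1.0 - p_up]) @ A[:, j]

# Exact Stackelberg commitment; grid search suffices in this tiny game.
grid = np.linspace(0.0, 1.0, 2001)
p_star = max(grid, key=leader_payoff_full_observation)
print("Stackelberg commitment P(U) ~", p_star,
      " value ~", leader_payoff_full_observation(p_star))   # ~0.5, value ~3.5

def expected_payoff_finite_obs(p_up, n, trials=5000):
    """Leader's expected payoff when the follower best-responds to the
    empirical mix from n observed leader moves (Monte Carlo estimate)."""
    total = 0.0
    for _ in range(trials):
        p_hat = rng.binomial(n, p_up) / n
        j = follower_best_response(p_hat)
        total += np.array([p_up, 1.0 - p_up]) @ A[:, j]
    return total / trials

for n in (5, 20, 100):
    print(f"n={n:4d}  naive commitment: {expected_payoff_finite_obs(p_star, n):.3f}   "
          f"margin-shifted: {expected_payoff_finite_obs(p_star - 0.05, n):.3f}")
```

In this toy example the naive mixed commitment sits exactly on the follower's indifference boundary, so sampling noise flips the follower's response with constant probability and the leader's realized payoff stays well below the Stackelberg value even for many observations; shifting the commitment by a small margin trades a little commitment value for a high-probability guarantee that the empirical best response matches the intended one, which is the flavor of robustness the abstract describes.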