In this article, an extensive Monte Carlo simulation study is conducted to evaluate and compare nonparametric multiple comparison tests under violations of the classical analysis of variance assumptions. The simulation space of the Monte Carlo study comprises 288 combinations of balanced and unbalanced sample sizes, numbers of groups, treatment effects, levels of variance heterogeneity, dependence between subgroup levels, and skewed error distributions under a single-factor experimental design. This large simulation space supports a detailed analysis of how assumption violations affect the performance of nonparametric multiple comparison tests in terms of three error measures and four power measures. The findings of this study help practitioners choose the most suitable nonparametric test for the requirements and conditions of a given experiment. When some of the analysis of variance assumptions are violated and the number of groups is small, the stepwise Steel-Dwass procedure with Holm's approach is appropriate for controlling the type I error at the desired level. Dunn's method should be employed for larger numbers of groups. When subgroups are unbalanced and the number of groups is small, Nemenyi's procedure with Duncan's approach produces high power. Conover's procedure provides high power with a small number of unbalanced groups or with a larger number of balanced or unbalanced groups; at the same time, it is unable to control type I error rates.
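As a concrete illustration of one of the procedures recommended above, the sketch below implements Dunn's pairwise rank test combined with Holm's step-down adjustment for a single-factor layout. This is a generic textbook formulation of the method, not the authors' simulation code; the three-group data at the bottom are hypothetical and only demonstrate usage under heterogeneous variances.

```python
import numpy as np
from scipy import stats

def dunn_holm(groups):
    """Dunn's pairwise z-tests on pooled ranks, with Holm-adjusted p-values.

    groups: list of 1-D arrays, one per treatment group.
    Returns a list of [(i, j), z, adjusted p] entries.
    """
    pooled = np.concatenate(groups)
    n_total = len(pooled)
    ranks = stats.rankdata(pooled)          # mid-ranks handle ties

    # Mean rank and size of each group
    sizes = [len(g) for g in groups]
    bounds = np.cumsum([0] + sizes)
    mean_ranks = [ranks[bounds[k]:bounds[k + 1]].mean()
                  for k in range(len(groups))]

    # Tie correction term for the rank variance
    _, counts = np.unique(pooled, return_counts=True)
    tie_term = (counts**3 - counts).sum() / (12 * (n_total - 1))

    results = []
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            se = np.sqrt((n_total * (n_total + 1) / 12 - tie_term)
                         * (1 / sizes[i] + 1 / sizes[j]))
            z = (mean_ranks[i] - mean_ranks[j]) / se
            p = 2 * stats.norm.sf(abs(z))   # two-sided p-value
            results.append([(i, j), z, p])

    # Holm step-down adjustment over all m pairwise comparisons
    m = len(results)
    order = np.argsort([r[2] for r in results])
    running_max = 0.0
    for step, idx in enumerate(order):
        adj = min(1.0, (m - step) * results[idx][2])
        running_max = max(running_max, adj)  # enforce monotonicity
        results[idx][2] = running_max
    return results

# Hypothetical three-group example with heterogeneous variances
rng = np.random.default_rng(1)
g = [rng.normal(0, 1, 10), rng.normal(0, 3, 15), rng.normal(1, 1, 12)]
for pair, z, p in dunn_holm(g):
    print(pair, round(z, 3), round(p, 4))
```

Holm's step-down adjustment is used here because, like Bonferroni, it controls the familywise type I error rate, but it does so with uniformly higher power, which matches the abstract's emphasis on balancing error control against power.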