Model selection based on information-theoretic methods (Burnham and Anderson 2002) has gained prevalence as a strategy for analyzing ecological data, especially among wildlife biologists (Stephens et al. 2005). Development of this alternative strategy has been refreshing because it has encouraged many of us to reexamine the analytical strategies we use and, especially, to evaluate models built explicitly on a biological foundation. However, some proponents now advocate model selection as the only reasonable strategy for a wide range of analyses. In particular, there is growing sentiment that strategies based on the framework of hypothesis testing are systematically inferior or inappropriate. Because all analytical strategies have strengths and weaknesses, all can be misused. My goal is to suggest that analysts make an informed choice from all available strategies, employing each in the contexts where it is most informative; the most appropriate strategy will always depend on the specific analytical context. For example, the authors of recent texts on distance sampling (Buckland et al. 2001, 2004), an estimation framework rooted firmly in model selection, and the authors of a monograph written by acknowledged experts in model selection (Burnham et al. 1987) use hypothesis tests routinely when they are appropriate and informative. Application of a single analytical strategy in all circumstances is inappropriate.

Failure to consider important parameters or to obtain sufficient data limits all analytical strategies, including model selection. If sample sizes are small and precision of estimates is low, the ability to distinguish among candidate models will be weak, analogous to having low statistical power for traditional hypothesis tests. Selecting a model solely because it satisfies a fixed criterion, such as having the lowest value of Akaike's Information Criterion (AIC), is potentially as naive as rejecting a null hypothesis of zero effect solely because a P-value falls marginally below 0.05. In the absence of additional information, such as estimated effect sizes, these approaches are too simplistic to be uniformly reliable as endpoints of an analysis.

Model selection can be an elegant and effective strategy for distinguishing among candidate models based on data. Reliance on a small, fixed set of candidate models, however, presents an additional potential liability. Because inferences are contingent on the set of candidate models considered, model selection requires more information about the system of interest; if that set is incomplete, the resulting inferences will be unreliable (Burnham and Anderson 2002, Johnson and Omland 2004). Therefore, in cases where a strong set of candidate models is not or cannot be developed, model selection may be less effective and less informative than approaches based on hypothesis testing. The process of developing candidate models is a strength of model selection when done well and a weakness when done poorly.

Although I routinely use both approaches, colleagues and reviewers have suggested increasingly that I use model selection instead of hypothesis testing in contexts where I felt hypothesis testing was more appropriate. Similarly, I have observed authors using model selection when their analyses might have been better informed by hypothesis tests. In general, I find hypothesis testing to be the more informative strategy when there is insufficient information available to formulate a strong set of candidate models.
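To make the point about analytical endpoints concrete, the following sketch (in Python, with simulated data; the candidate set and effect sizes are my illustrative assumptions, and the least-squares form of AIC, n log(RSS/n) + 2K, and the Akaike weights follow Burnham and Anderson 2002) reports AIC differences and model weights for a small candidate set rather than simply declaring the lowest-AIC model the winner:

import numpy as np

rng = np.random.default_rng(1)
n = 30
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.4 * x1 + rng.normal(size=n)            # x2 has no real effect

def aic_ls(y, X):
    # AIC for a Gaussian least-squares fit: n*log(RSS/n) + 2K (additive constants dropped)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1                              # regression coefficients + error variance
    return len(y) * np.log(rss / len(y)) + 2 * k

ones = np.ones(n)
candidates = {
    "intercept only": np.column_stack([ones]),
    "x1":             np.column_stack([ones, x1]),
    "x1 + x2":        np.column_stack([ones, x1, x2]),
}
aic = {name: aic_ls(y, X) for name, X in candidates.items()}
best = min(aic.values())
weights = {name: np.exp(-0.5 * (a - best)) for name, a in aic.items()}
total = sum(weights.values())
for name, a in aic.items():
    print(f"{name:15s}  AIC = {a:7.2f}  dAIC = {a - best:5.2f}  weight = {weights[name] / total:4.2f}")

When the weights of competing models are similar, declaring the lowest-AIC model "the" model conveys little more than reporting only that P < 0.05; in both cases the estimated effect sizes and their precision carry the substantive message.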
I will use a hypothetical study of habitat associations to illustrate a context where hypothesis testing may be more informative than model selection. Although wildlife ecologists often are chastised for assessing pattern too frequently, these studies are commonplace and can be valuable in the appropriate circumstances. Typically, the short-term goal in these studies is to identify environmental features that explain variation in abundance of a species. The long-term goal often is to identify habitat features that managers can alter to meet a conservation or management objective for a species.

On a series of study plots, assume that we estimate abundance, perhaps using a distance-sampling or capture–recapture framework in which the estimation model itself is chosen through model selection. We then assess associations between abundance and the environmental features measured on each plot. Because we have little a priori information as to which features may be influential, we measure 12 different environmental features that span a range of physical and biological elements. We then seek to identify a subset of features that is closely associated with abundance. To use model selection in its intended spirit, we would need to develop a set of candidate models linking abundance with habitat features, models that we could then differentiate objectively through an information criterion such as AIC. In circumstances where there is a solid foundation for …
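The candidate-set problem in this hypothetical study can be made concrete with a short sketch (Python, entirely simulated data; the number of plots, the single influential feature, and the least-squares form of AIC are my illustrative assumptions, not details given above): with 12 measured features and little a priori knowledge, an exhaustive additive candidate set contains 2^12 = 4,096 models, and even the 12 single-feature models carry no biological rationale.

import numpy as np

rng = np.random.default_rng(7)
n_plots, n_features = 25, 12
features = rng.normal(size=(n_plots, n_features))    # 12 measured environmental features
abundance = 5 + 1.5 * features[:, 0] + rng.normal(scale=2, size=n_plots)   # only feature 0 matters

def aic_ls(y, X):
    # AIC for a Gaussian least-squares fit: n*log(RSS/n) + 2K (additive constants dropped)
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + 2 * (X.shape[1] + 1)

print(f"possible additive models from 12 features: {2 ** n_features}")
single_feature_aic = {j: aic_ls(abundance, features[:, [j]]) for j in range(n_features)}
best = min(single_feature_aic, key=single_feature_aic.get)
print(f"lowest-AIC single-feature model uses feature {best}")

Screening a candidate set of this kind is quite different from developing a handful of models that each encode a biological hypothesis, which is what using model selection in its intended spirit requires.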