To some, the subject of clinical prediction rules would seem an arcane exercise of limited utility to the everyday clinical anesthesiologist. Nothing could be further from the truth. Clinical prediction rules are plentiful and are in wide-ranging use in everyday practice. The American Society of Anesthesiologists' physical status classification is one of the most widely used indices of preoperative physical status, and it is easy to commit to memory; thus, it has stood the test of time. Remarkably, it was introduced into everyday practice with little in the way of derivation, validation, or testing, yet it has been shown to perform as well as the original (Goldman) Cardiac Risk Index, the Detsky Index, or the Revised Cardiac Risk Index.

The Revised Cardiac Risk Index (RCRI) was derived from a cohort of 4,315 patients to predict the incidence of major cardiac morbidity in patients over the age of 50. In the validation set of 1,422 patients, all-cause mortality doubled, due mostly to increased morbidity in the higher-risk population, and the calibration of the index improved significantly. In addition, the coefficients of two of the risk factors, diabetes and renal dysfunction, were no longer significant in the validation set. These observations highlight just a few of the issues involved in the application of clinical prediction rules.

Although the RCRI has been utilized in many cohort studies to risk-adjust patients, the index has yet to be externally validated. The performance of the RCRI was examined in a recent meta-analysis of 16 trials reporting on a total of 791,282 patients. This report found that the RCRI performs poorly (receiver operating characteristic [ROC] curve values of 0.630, compared with the original validation set value of 0.8). There are a number of reasons why the RCRI does not perform as well in "real life", including differences in the frequency of outcomes. In the derivation paper, the mortality was 1%, approximately half of that observed in the largest cohort study utilizing the RCRI, where the mortality rate was 2%. Furthermore, the risk factors themselves may have been applied differently. One factor in the derivation cohort was insulin-treated diabetes, which is now largely applied as diabetes with or without insulin therapy. Similarly, renal dysfunction was originally defined as a serum creatinine concentration greater than 176 µmol·L⁻¹; however, this definition has not been universally applied.

Despite these important shortcomings, i.e., the change in the ROC between derivation and validation, the lack of external validation, and the finding that the RCRI loses accuracy when applied outside the original institution, the RCRI has been incorporated into the ACC/AHA 2007 Guidelines on Perioperative Cardiovascular Evaluation and Care for Non-cardiac Surgery. Thus, the RCRI has become an integral part of what now constitutes the "standard of care" for patients having non-cardiac surgery. Therefore, by default, any clinician involved in the evaluation of patients prior to non-cardiac surgery should have an understanding of the workings and limitations of this clinical prediction tool. Given the limitations of clinical prediction rules, this statement also implies that the clinician should know the incidence of important outcomes and have the tools necessary to calibrate the instrument.
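To make the notions of discrimination (the ROC curve area) and calibration (agreement between predicted and observed event rates) concrete, the sketch below shows how a department might check the RCRI against its own outcome data. It is an illustrative sketch only: the six risk factors follow the commonly cited RCRI definition, but the per-class predicted rates are placeholder values, not the published estimates, and the Patient structure is a hypothetical local data format.

```python
# Illustrative sketch: checking RCRI discrimination and calibration on a local cohort.
# The per-class predicted rates below are placeholders, not published estimates.

from dataclasses import dataclass

@dataclass
class Patient:
    high_risk_surgery: bool         # intraperitoneal, intrathoracic, or suprainguinal vascular
    ischemic_heart_disease: bool
    congestive_heart_failure: bool
    cerebrovascular_disease: bool
    insulin_treated_diabetes: bool  # original factor: diabetes requiring insulin therapy
    creatinine_umol_per_l: float    # original cut-off: > 176 umol/L (~2.0 mg/dL)
    had_major_cardiac_event: bool   # observed outcome in the local cohort

def rcri_score(p: Patient) -> int:
    """Count of RCRI risk factors present (0-6)."""
    return sum([
        p.high_risk_surgery,
        p.ischemic_heart_disease,
        p.congestive_heart_failure,
        p.cerebrovascular_disease,
        p.insulin_treated_diabetes,
        p.creatinine_umol_per_l > 176.0,
    ])

def rcri_class(score: int) -> int:
    """RCRI class I-IV, corresponding to 0, 1, 2, or >=3 risk factors."""
    return min(score, 3) + 1

# Placeholder predicted event rates per class (illustration only).
PREDICTED_RATE = {1: 0.005, 2: 0.01, 3: 0.07, 4: 0.11}

def auc(patients):
    """Discrimination: probability that a patient with an event scores higher
    than a patient without one (ties count 0.5), i.e. the ROC curve area."""
    events = [rcri_score(p) for p in patients if p.had_major_cardiac_event]
    non_events = [rcri_score(p) for p in patients if not p.had_major_cardiac_event]
    if not events or not non_events:
        return float("nan")
    wins = sum((e > n) + 0.5 * (e == n) for e in events for n in non_events)
    return wins / (len(events) * len(non_events))

def calibration_table(patients):
    """Calibration: observed vs. predicted event rates by RCRI class."""
    by_class = {}
    for p in patients:
        by_class.setdefault(rcri_class(rcri_score(p)), []).append(p.had_major_cardiac_event)
    for c in sorted(by_class):
        outcomes = by_class[c]
        observed = sum(outcomes) / len(outcomes)
        print(f"Class {c}: n={len(outcomes)}, observed={observed:.1%}, "
              f"predicted={PREDICTED_RATE[c]:.1%}")
```

Run on a local series of patients, a low AUC would signal poor discrimination of the kind reported in the meta-analysis, while a calibration table showing observed rates well above the predicted ones would signal the sort of outcome-frequency mismatch discussed above.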
P. M. A. Brasher, PhD
Centre for Clinical Epidemiology and Evaluation, VCH Research Institute, University of British Columbia, 828 West 10th Avenue, Vancouver, BC V5Z 1L8, Canada
e-mail: brasher@interchange.ubc.ca