Clinicians continue to struggle with the vexing issue of the relationship between tumor response and patient survival. The article by Torri et al. (1) in this issue of the Journal presents methods intended to help answer this question in ovarian cancer.

Cisplatin-based combination therapy has been shown to increase response rates compared with single alkylating agents (2) or combinations without cisplatin (3,4). In advanced ovarian cancer, alkylating agents or nonplatinum combinations produced response rates of 40% (10%-20% complete pathologic response) with median survivals of 12-15 months. In an era of cisplatin combination chemotherapy as primary therapy, response rates have improved to 70%-80% (20%-50% complete pathologic response), with the greatest activity noted in those patients who received optimal surgical cytoreduction (5). Using population-based registry data, a retrospective study from The Netherlands (6) found that patients diagnosed with ovarian cancer between 1981 and 1985 had better survival rates than a similar group of patients diagnosed between 1975 and 1980. Two suggested explanations were 1) the more routine use of aggressive surgical cytoreduction and 2) cisplatin-based therapy in the latter period (6). Similarly, Ozols (7) showed with data from the Surveillance, Epidemiology, and End Results Program that 5-year survival has improved since the institution of platinum-based therapy and its higher response rate. Finally, a recent overview of more than 8000 cases of advanced ovarian cancer (>6500 deaths) strongly suggested that platinum-containing combinations were superior to single-agent cisplatin and that alkylating agents alone were inferior to either (8). Thus, response rates and survival have improved in parallel, although it is not clear to what extent better response rates alone have prolonged survival.

Why is it often so difficult to demonstrate convincingly the effects of something as instinctively favorable as tumor response? There are at least four reasons.

First, response is an imprecise measurement made only in the subset of patients with measurable disease. This problem is especially acute in ovarian cancer, where patients with measurable disease may be a minority and even bulky peritoneal carcinomatosis may not be palpable or evident on imaging. In addition, a clinical complete response need not represent a biologically significant reduction in tumor burden. For example, the reduction of a tumor cell population by two or three factors of 10 (logs) might cause overt disease to disappear, but the tumor could regrow quickly enough to eliminate any impact on survival. In other patients, a clinical response by the same criteria could represent eight to nine logs of tumor cell kill, translating into improved survival or cure. One would expect direct response assessment, such as second-look surgery, to reduce but not eliminate this problem, because surgical sampling errors may still occur. Thus, response is an imprecise measurement, even when carefully defined.

Second, survival can be improved without response. Treatment can reduce the growth rate of tumor cells without eliminating them. Microscopic metastases could also be eliminated by treatment, resulting in improved survival without measurable improvement in macroscopic disease. In either case, an increase in the proportion of patients responding, per se, is not required to prolong survival. Furthermore, the aggregate survival on a clinical trial is likely to be the result of both improved response and control of nonresponding tumor.

Third, both tumor response and survival are results of treatment. They are, therefore, likely to be "correlated," even if they are not causally related. The desire to view response as a predictor probably arises as much from its temporal location relatively early in posttreatment follow-up as it does from its inherent prognostic importance for survival. The fact that response is an outcome prevents us from controlling and testing it in designed experiments in the way that other prognostic factors or treatments can be controlled. Most descriptive statistical methods cannot distinguish whether response and survival result independently from patient and treatment factors or whether response lies causally between treatment and survival. In either case, response could be a useful intermediate end point. However, if response and survival arise independently from a common set of patient and treatment factors, an apparent relationship between the two outcomes would be "explained away" by examining the predictors. If response is truly causal, the observed relationship would likely persist after accounting for predictive factors.

The fourth facet of the problem is methodology, which is the main focus of the article by Torri et al. (1). Here, it is useful to separate the approaches to the problem into two groups: those that examine individual patient data (discussed below) and those that analyze the results of trials (grouped or aggregate data).

Analyzing clinical trials as data points is conceptually simple, although filled with potential problems. For example, it is tempting to examine the apparent relationship between response rate and survival across studies in a scatter plot, as is done hypothetically in Fig. 1. Unfortunately, even the dramatic results simulated in Fig. 1 would not assure that response is a reliable predictor of survival, because differences in both response and survival could result entirely from differences in prognostic factors. Thus, the apparent group-level correlation could be spurious. It is theoretically possible to see results like Fig. 1 more...
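To make this last point concrete, the brief Python sketch below (not part of the Torri et al. analysis; every number in it is hypothetical) simulates a set of trials in which a single baseline prognostic factor, such as the proportion of optimally cytoreduced patients, drives both the response rate and the survival distribution, while response itself confers no survival benefit on any individual patient. The trial-level scatter nonetheless shows a strong correlation between response rate and median survival.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials = 20           # hypothetical trials, each one point in a Fig. 1-style scatter
patients_per_trial = 200

# Hypothetical prognostic factor: the fraction of favorable-prognosis patients
# (e.g., optimally cytoreduced) varies from trial to trial.
favorable_fraction = rng.uniform(0.2, 0.8, size=n_trials)

response_rates = []
median_survivals = []

for frac in favorable_fraction:
    favorable = rng.random(patients_per_trial) < frac

    # Response depends only on the prognostic factor (hypothetical probabilities).
    response = rng.random(patients_per_trial) < np.where(favorable, 0.70, 0.30)

    # Survival also depends only on the prognostic factor, not on response:
    # exponential survival times with a longer median for favorable patients.
    median_months = np.where(favorable, 30.0, 12.0)
    survival = rng.exponential(scale=median_months / np.log(2))

    response_rates.append(response.mean())
    median_survivals.append(np.median(survival))

# Trial-level (aggregate) correlation between response rate and median survival.
r = np.corrcoef(response_rates, median_survivals)[0, 1]
print(f"correlation across trials: {r:.2f}")
# Prints a strong positive correlation even though, by construction, response
# has no causal effect on any patient's survival.
```

In such a scenario, adjusting for the prognostic factor at the level of individual patients would largely "explain away" the apparent relationship, which is precisely the distinction between the aggregate and individual-patient approaches discussed here.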