1. Introduction. Let the two random variables (r.v.) X and Y, with E(X) = p and E(Y) = q, describe the outcomes of two experiments, Ex I and Ex II. An experimenter, who does not know the values of p and q, has to perform a sequence of experiments, and at each step he may choose between Ex I and Ex II. He has to stop after n steps, and he wishes to maximise the sum of all outcomes. His decision between Ex I and Ex II at the kth step will depend on the corresponding decisions at prior steps and on the outcomes of those prior experiments. We call a plan which fixes his sequence of decisions according to his previous knowledge a strategy. Robbins [6] shows that it is easy to find a strategy such that the arithmetic mean of the n outcomes tends (as n → ∞) to max(p, q) with probability 1. Bradt, Johnson and Karlin [3] try to find a best strategy for fixed n rather than asymptotically; they assume known a priori distributions for the values of p and q. For other approaches see Robbins [7], Isbell [5], Bellman [2] and Vogel [8].

The purpose of this paper is to describe a class of strategies which results from the following kind of restriction. In the first 2k steps we perform each of Ex I and Ex II k times. The remaining n - 2k steps are then made either with Ex I alone or with Ex II alone; the decision whether to continue with Ex I or with Ex II is made with the help of a sequential probability ratio test for double dichotomies. Therefore k is a r.v., which will be denoted by K when appropriate. Strategies of this kind are not exceptionally good ones (in the sense of the loss-function defined in Section 3). But when a strategy is applied in practice it may prove economical to do only one sort of experiment for most of the steps: perhaps the equipment for the other sort of experiment can be used for other purposes, or perhaps the shift from one experiment to the other is costly. For such reasons it may be quite natural to restrict attention to the strategies described above.
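The shape of this restricted class of strategies can be illustrated by the following Python sketch. It rests on assumptions not fixed in the text above: the outcomes are taken to be Bernoulli (the double-dichotomy case), the sequential probability ratio test is realized in its sign-test form on discordant pairs with symmetric simple hypotheses, and the parameters `theta` and `alpha` are illustrative choices, not values from the paper.

```python
import math

def restricted_strategy(draw_x, draw_y, n, theta=0.7, alpha=0.05):
    """Sketch of the restricted strategy class: play Ex I and Ex II in
    alternating pairs while running a sign-test SPRT on the discordant
    pairs; once the test decides (after K pairs), commit to the chosen
    experiment for the remaining n - 2K steps.

    draw_x, draw_y: callables returning one 0/1 outcome of Ex I / Ex II.
    theta, alpha: illustrative SPRT parameters (hypothesized success
    probability among discordant pairs, and error rate)."""
    A = math.log((1 - alpha) / alpha)        # Wald's symmetric thresholds
    step_llr = math.log(theta / (1 - theta)) # LLR contribution per discordant pair
    llr = 0.0
    total, steps = 0, 0
    while steps + 2 <= n:
        x, y = draw_x(), draw_y()            # one paired round: Ex I, then Ex II
        total += x + y
        steps += 2
        if x != y:                           # only discordant pairs are informative
            llr += step_llr if x > y else -step_llr
        if abs(llr) >= A:                    # SPRT terminates; K = steps // 2
            break
    chosen = draw_x if llr >= 0 else draw_y  # commit for the remaining steps
    while steps < n:
        total += chosen()
        steps += 1
    return total, chosen is draw_x
```

With degenerate outcomes (Ex I always 1, Ex II always 0) the test terminates after four pairs at these parameters, so the strategy switches to Ex I alone for the rest of the horizon.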
Another justification for treating this class of strategies is given by the results in [8], for which Theorems 2 and 3 of this paper are needed. Section 2 contains some auxiliary material. Except for Theorem 1, which we give in a slightly more general form than is needed for the rest of this paper, nothing here is new; but we found it convenient to summarize some definitions and easy-to-prove formulas in one section. The loss-function and an approximation to the loss-function will be derived