Contrastive learning trains a model to predict whether two images belong to the same category by pulling the feature representations of positive samples together and pushing those of negative samples apart. Selecting appropriate samples is critical for effective training; however, existing methods suffer from false or uninformative samples. This paper rethinks sample mining in contrastive learning and proposes a more comprehensive method that considers both positive and negative samples, mining potential samples from two aspects. First, for positive samples, the method incorporates both augmented sample views and mined sample views, combining them with hard and soft weighting strategies applied simultaneously. Second, because false and uninformative negative samples exist, the method analyzes negative samples from the perspective of their gradients and mines negatives that are neither too hard nor too easy as potential negative samples, i.e., those that lie close to the positive samples. Experiments show clear advantages over previous state-of-the-art self-supervised methods: the top-1 linear-classification accuracy improves by 0.77%, 2.39%, and 1.01% on CIFAR10, CIFAR100, and TinyImageNet, respectively. Source code and pretrained models are available at https://github.com/dhkdhk/PSM.
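The two mining ideas summarized above can be sketched in a few lines. The following is a minimal NumPy sketch, not the paper's actual PSM implementation: the similarity thresholds, the weighting rule, and the function names are illustrative assumptions. It filters negatives by cosine similarity to the anchor (dropping very easy ones and very hard ones, which are likely false negatives) and combines mined positive views with the augmented view using a hard binary keep-mask multiplied by soft weights.

```python
import numpy as np

def mine_potential_negatives(anchor, negatives, low=0.3, high=0.7):
    """Keep negatives that are neither too easy (similarity <= low) nor
    too hard (similarity >= high, likely false negatives).
    Thresholds are illustrative, not values from the paper."""
    a = anchor / np.linalg.norm(anchor)
    n = negatives / np.linalg.norm(negatives, axis=1, keepdims=True)
    sims = n @ a  # cosine similarity of each negative to the anchor
    mask = (sims > low) & (sims < high)
    return negatives[mask], sims

def weighted_positive(aug_view, mined_views, soft_weights, hard_keep):
    """Combine the augmented view with mined positive views.
    hard_keep is a binary mask (hard strategy); soft_weights are
    continuous weights (soft strategy). The averaging rule here is a
    hypothetical stand-in for the paper's combination scheme."""
    w = soft_weights * hard_keep
    if w.sum() == 0:
        return aug_view
    mined = (w[:, None] * mined_views).sum(axis=0) / w.sum()
    return 0.5 * (aug_view + mined)
```

In this sketch, a negative almost identical to the anchor (similarity near 1) is rejected as a probable false negative, while an orthogonal one (similarity near 0) is rejected as uninformative; only negatives in the intermediate band, close to the positives, survive as potential negatives.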