Multi-view partial multi-label learning (MVPML) is a fundamental problem in which each sample is associated with multiple kinds of features and a set of candidate labels containing both ground-truth and noise labels. The key challenges in MVPML are how to exploit the multiple feature views and how to recover the ground-truth labels from the candidate label set. To this end, this study proposes a novel Graph-based Multi-view Partial Multi-label model, named GMPM, which combines multi-view information exploitation, valuable label selection, and multi-label predictor learning in a unified optimization model. Specifically, GMPM first exploits the consensus information across multiple views by learning a view-specific similarity graph for each view and fusing the multiple graphs into a target one. We then divide the observed label set into two parts, a ground-truth part and a noise part, where the latter is constrained to be sparse to ensure that the former is clean. Furthermore, we embed the learned unified similarity graph into the label-disambiguation process to recover a more reliable ground-truth label matrix. Finally, the multi-label predictive model is learned under the supervision of the recovered ground-truth label matrix. Extensive experiments on six commonly used datasets demonstrate that the proposed GMPM achieves competitive performance compared with state-of-the-art methods.
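The pipeline described above (graph fusion, sparse noise separation, graph-regularized label disambiguation) can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual optimization: the fusion weights, the gradient-step smoothing, and the soft-thresholding update are all simplifying assumptions standing in for the unified model.

```python
import numpy as np

def fuse_views(view_graphs, weights=None):
    """Fuse view-specific similarity graphs into a single target graph.
    A plain weighted average is used here as a stand-in for the learned fusion."""
    if weights is None:
        weights = np.full(len(view_graphs), 1.0 / len(view_graphs))
    return sum(w * S for w, S in zip(weights, view_graphs))

def disambiguate(Y, S, lam=0.1, step=0.01, n_iter=50):
    """Split the candidate label matrix Y into a ground-truth part G and a
    sparse noise part N, smoothing G over the fused similarity graph S.
    (Graph-regularized heuristic; hypothetical update rules.)"""
    L = np.diag(S.sum(axis=1)) - S            # graph Laplacian of the fused graph
    G = Y.astype(float).copy()
    for _ in range(n_iter):
        G = G - step * (L @ G)                # encourage smoothness of G on the graph
        N = Y - G
        N = np.sign(N) * np.maximum(np.abs(N) - lam, 0.0)  # soft-threshold: keep N sparse
        G = np.clip(Y - N, 0.0, 1.0)          # ground-truth labels stay in [0, 1]
    return G, Y - G
```

A downstream multi-label predictor would then be trained against the recovered matrix `G` instead of the noisy candidate labels `Y`.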