Background: Peer assessment plays an important role in large-scale online learning, as it helps promote the effectiveness of learners' online learning. However, as peers generate large volumes of numerical grades and textual feedback, it becomes necessary to detect the reliability of these peer assessment data before developing an effective automated grading model to analyse the data and predict learners' learning results.

Objectives: The present study aimed to propose an automated grading model with reliability detection.

Methods: A total of 109,327 instances of peer assessment from a large-scale teacher online learning program were tested in the experiments. The reliability detection approach comprised three steps: a recurrent convolutional neural network (RCNN) was used to detect grade consistency, bidirectional encoder representations from transformers (BERT) was used to detect text originality, and a long short-term memory (LSTM) network was used to detect grade-text consistency. Furthermore, the automated grading itself was designed with a BERT-RCNN model. A sketch of how such a three-step filter might be wired together follows.
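The sketch below is a minimal illustration of the three-step reliability filter described in Methods, not the authors' implementation. Only the originality step uses a real pretrained BERT encoder; the trained RCNN grade-consistency detector is replaced by a simple z-score stand-in, the LSTM grade-text consistency step is left out as a stub, and all names (PeerReview, ReliabilityFilter) and threshold values are hypothetical.

```python
# Minimal sketch of a three-step reliability filter; NOT the paper's code.
# Step 1 (grade consistency, an RCNN in the paper) is replaced by a z-score
# heuristic; step 2 (text originality) uses a real pretrained BERT encoder;
# step 3 (grade-text consistency, an LSTM in the paper) is omitted as a stub.
from dataclasses import dataclass

import torch
from transformers import AutoModel, AutoTokenizer


@dataclass
class PeerReview:                 # hypothetical record for one assessment
    grade: float                  # numerical grade given by the assessor
    feedback: str                 # textual feedback written by the assessor


class ReliabilityFilter:
    def __init__(self, model_name: str = "bert-base-uncased",
                 grade_z_max: float = 2.0, sim_max: float = 0.95):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.encoder = AutoModel.from_pretrained(model_name).eval()
        self.grade_z_max = grade_z_max  # exclusion threshold for step 1
        self.sim_max = sim_max          # exclusion threshold for step 2

    @torch.no_grad()
    def _embed(self, texts: list[str]) -> torch.Tensor:
        batch = self.tokenizer(texts, padding=True, truncation=True,
                               return_tensors="pt")
        hidden = self.encoder(**batch).last_hidden_state   # (B, T, H)
        mask = batch["attention_mask"].unsqueeze(-1)
        pooled = (hidden * mask).sum(1) / mask.sum(1)      # mean pooling
        return torch.nn.functional.normalize(pooled, dim=1)

    def filter(self, reviews: list[PeerReview]) -> list[PeerReview]:
        grades = torch.tensor([r.grade for r in reviews])
        # Step 1: flag grades that deviate strongly from the peer consensus.
        z = (grades - grades.mean()) / (grades.std() + 1e-6)
        # Step 2: flag feedback that is a near-copy of another review,
        # judged by cosine similarity of BERT embeddings.
        emb = self._embed([r.feedback for r in reviews])
        sim = emb @ emb.T
        sim.fill_diagonal_(0.0)
        return [r for i, r in enumerate(reviews)
                if abs(z[i]) <= self.grade_z_max and sim[i].max() <= self.sim_max]
```

Only instances that pass every check would be passed on to the grading model; in the paper that final model is a BERT-RCNN, which this sketch does not reproduce.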
Results and Conclusions: The effectiveness of the automated grading model with reliability detection was shown. For reliability detection, RCNN performed best in detecting grade consistency with an accuracy rate of 0.889, BERT performed best in detecting text originality with an improvement of 4.47% over the benchmark model, and LSTM performed best in detecting grade-text consistency with an accuracy rate of 0.883. Moreover, the automated grading model with reliability detection achieved good performance, with an accuracy rate of 0.89, an improvement of 12.1% over grading without reliability detection.

Implications: The results strongly suggest that the automated grading model with reliability detection for large-scale peer assessment is effective, with the following implications: (1) The introduction of reliability detection is necessary to help filter out low-reliability data in peer assessment, thus promoting effective automated grading results. (2) This solution could assist assessors in adjusting the exclusion threshold of peer assessment reliability, providing a controllable automated grading tool that reduces manual workload while maintaining high quality. (3) This solution could shift educational institutions from labour-intensive grading procedures to a more efficient educational assessment pattern, allowing for more investment in supporting instructors and learners to improve the quality of peer feedback.

Lay Description:

What is already known about this topic: Peer assessment plays an important role in large-scale online learning, as it helps promote the effectiveness of learners' online learning. Issues such as disagreement between peer assessors, rough assessment, and plagiarism in large-scale online learning can decrease peer assessment reliability. Incorporating extensive data into a training model may result in grading uncertainties.

What this paper adds: Detecting peer assessment reliability before grading is essential in the context of large-scale online learning. This study aimed to propose and validate an automated grading model with reliability detection for large-scale online peer assessment, which helps improve the effectiveness of automated grading by combining the advantages of computer technology and human expertise.

Implications for practice and/or policy: The introduction of reliability detection is necessary to help filter out low-reliability data in peer assessment, thus promoting effective automated grading results. This solution could assist assessors in adjusting the exclusion threshold of peer assessment reliability, providing a controllable automated grading tool that reduces manual workload while maintaining high quality; a usage sketch of such threshold adjustment follows.
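Implication (2) describes a controllable exclusion threshold. Continuing the hypothetical sketch above (the class and field names are assumptions, not the paper's interface), tightening or loosening that threshold would be a constructor-argument change:

```python
# Hypothetical usage of the ReliabilityFilter sketched earlier: a stricter
# similarity threshold excludes more suspect reviews before grading.
reviews = [
    PeerReview(grade=85, feedback="Clear goals; pacing could be tighter."),
    PeerReview(grade=30, feedback="Clear goals; pacing could be tighter."),  # copied text
    PeerReview(grade=82, feedback="Good examples, but the objectives are vague."),
]
strict = ReliabilityFilter(grade_z_max=1.5, sim_max=0.85)
reliable = strict.filter(reviews)  # only these instances feed the grading model
```

In this crude stand-in, near-identical feedback texts exclude one another; the paper's trained detectors would make a finer-grained judgement, but the assessor-facing control, one adjustable threshold per reliability check, is the same idea.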