Background: High-resolution (HR) 3D MR images provide detailed soft-tissue information that is useful for assessing long-term side effects of treatment in childhood cancer survivors, such as morphological changes in brain structures. However, these images require long acquisition times, so routinely acquired follow-up images after treatment often consist of 2D low-resolution (LR) images (with thick slices in multiple planes).

Purpose: In this work, we present a super-resolution convolutional neural network, based on previous single-image MRI super-resolution work, that can reconstruct an HR image from 2D LR slices acquired in multiple planes, in order to facilitate the extraction of structural biomarkers from routine scans.

Methods: A multilevel densely connected super-resolution convolutional neural network (mDCSRN) was adapted to take two perpendicular LR scans (e.g., coronal and axial) as input tensors and reconstruct a 3D HR image. The training set comprised 90 HR T1 pediatric head scans from the Adolescent Brain Cognitive Development (ABCD) study, with 2D LR images simulated through a downsampling pipeline that introduces motion artifacts, blurring, and registration errors to make the LR scans more similar to routinely acquired ones. The outputs of the model were compared against simple interpolation in two steps. First, the quality of the reconstructed HR images was assessed using the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) relative to the baseline HR images. Second, the precision of structure segmentation (using the autocontouring software Limbus AI) in the reconstructed versus the baseline HR images was assessed using the mean distance-to-agreement (mDTA) and the 95% Hausdorff distance.
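The first evaluation step compares reconstructed and baseline HR images with PSNR and SSIM. As a minimal illustration of these metrics (not the study's evaluation code), the sketch below computes PSNR and a single-window "global" SSIM with NumPy; a full evaluation would typically use a windowed SSIM (e.g., from scikit-image) over the 3D volumes.

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two images/volumes."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    """Single-window (global) SSIM; windowed SSIM averages this over local patches."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2  # stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Higher values of both metrics indicate closer agreement with the baseline; SSIM of an image with itself is 1.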
Three datasets were used for evaluation: 10 new ABCD images (dataset 1), 18 images from the Children's Brain Tumor Network (CBTN) study (dataset 2), and 6 "real-world" follow-up images of a pediatric head and neck cancer patient (dataset 3).

Results: The proposed mDCSRN outperformed simple interpolation in terms of visual quality. Similarly, structure segmentations were closer to those of the baseline images after 3D reconstruction. The mDTA improved to, on average (95% confidence interval), 0.7 (0.4-1.0) mm and 0.8 (0.7-0.9) mm for datasets 1 and 3, respectively, from 6.5 (3.6-9.5) mm and 1.2 (1.0-1.3) mm with interpolation.

Conclusions: We demonstrate that deep learning methods can successfully reconstruct 3D HR images from 2D LR ones, potentially unlocking datasets for retrospective study and advancing research into the long-term effects of pediatric cancer. Our model outperforms standard interpolation, both in perceptual quality and for autocontouring. Further work is needed to validate it for additional structural analysis tasks.

(© 2024 The Author(s). Medical Physics published by Wiley Periodicals LLC on behalf of American Association of Physicists in Medicine.)
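The segmentation-agreement metrics quoted in the Results can be illustrated with a small sketch. This is an assumed point-set formulation with NumPy, not the study's implementation: contours are represented as arrays of surface points, the mDTA is taken as the mean of the symmetric nearest-neighbor distances, and the 95% Hausdorff distance as the larger of the two directed 95th percentiles (definitions vary slightly between tools).

```python
import numpy as np

def _directed_distances(a, b):
    """For each point in a (N, 3), distance to its nearest neighbor in b (M, 3)."""
    diff = a[:, None, :] - b[None, :, :]           # (N, M, 3) pairwise offsets
    return np.sqrt((diff ** 2).sum(axis=-1)).min(axis=1)

def mdta(a, b):
    """Mean distance-to-agreement: average symmetric nearest-neighbor distance."""
    return np.concatenate([_directed_distances(a, b),
                           _directed_distances(b, a)]).mean()

def hausdorff95(a, b):
    """95% Hausdorff distance: larger 95th percentile of the directed distances."""
    return max(np.percentile(_directed_distances(a, b), 95),
               np.percentile(_directed_distances(b, a), 95))
```

For example, two parallel contours offset by 1 mm everywhere give mDTA = HD95 = 1 mm; the percentile in HD95 makes it less sensitive to isolated outlier points than the classical Hausdorff distance.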