Conventional MRI reconstructions may suffer from noise amplification and residual artifacts at high resolutions and acceleration rates. Deep learning (DL) has recently emerged as a powerful approach for accelerated MRI. Most current DL algorithms require fully-sampled data for training, but acquisition of fully-sampled datasets is infeasible or impractical in many scenarios, for instance due to motion, signal decay, or long scan times. To tackle these problems, we developed the self-supervised learning via data undersampling (SSDU) framework, which enables training of DL-based MRI reconstruction without fully-sampled/reference data. SSDU splits the acquired k-space indices into two disjoint sets. One of these is used for data consistency in the network, while the other is used to define the loss in k-space. Hence, the network is trained end-to-end using only the acquired measurements, without making any other assumptions about the output image or its characteristics.
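The two-way split at the heart of SSDU can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact scheme: the set names, the loss-set fraction `rho`, and the uniform-random selection of held-out indices are assumptions for the sketch.

```python
import numpy as np

def split_kspace_mask(acq_mask, rho=0.4, seed=0):
    """Split an acquired k-space sampling mask into two disjoint masks.

    A fraction `rho` of the acquired indices is held out to define the
    k-space loss, and the remainder is kept for data consistency in the
    unrolled network. The uniform-random selection here is an
    illustrative assumption.
    """
    rng = np.random.default_rng(seed)
    acq_idx = np.flatnonzero(acq_mask)           # indices of acquired samples
    n_loss = int(round(rho * acq_idx.size))      # size of the loss set
    loss_idx = rng.choice(acq_idx, size=n_loss, replace=False)

    loss_mask = np.zeros_like(acq_mask)
    loss_mask.flat[loss_idx] = 1                 # held-out loss locations
    dc_mask = acq_mask - loss_mask               # disjoint remainder for DC

    return dc_mask, loss_mask
```

During training, the network input is formed from the data-consistency set only, and the loss compares the network output against the held-out measurements; because the two sets are disjoint, the loss is computed on samples the network never saw at its input.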
Recent self-supervised and unsupervised learning approaches enable training without fully-sampled data. However, they still require a database of undersampled measurements, which may not be available in many scenarios, e.g. for scans involving contrast or recently developed pre-clinical acquisitions. Moreover, database-trained models may not generalize well when the unseen measurements differ from the training data in sampling pattern, acceleration rate, SNR, image contrast, or anatomy. Such challenges necessitate a new methodology that enables scan-specific DL MRI reconstruction without any external training datasets.
To tackle these issues, we propose a zero-shot self-supervised learning approach for scan-specific physics-guided MRI reconstruction. The proposed approach splits the available k-space measurements for each scan into three disjoint sets. Two of these sets are used to enforce data consistency and to define the loss during training, while the third is used to establish an early stopping criterion. When models pretrained on a database with different image characteristics are available, we show that the proposed approach can be combined with transfer learning to further improve reconstruction quality.
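The three-way partition and the early stopping criterion can be sketched as below. The set names (data consistency, training loss, validation), the split fractions, and the patience-based stopping rule are illustrative assumptions, not the exact procedure of the proposed method.

```python
import numpy as np

def split_three_way(acq_mask, val_frac=0.2, loss_frac=0.4, seed=0):
    """Partition acquired k-space indices into three disjoint sets.

    A validation set (for early stopping) is held out first; the
    remaining acquired indices are then split into a data-consistency
    set and a training-loss set. Fractions are assumptions.
    """
    rng = np.random.default_rng(seed)
    acq = np.flatnonzero(acq_mask)
    rng.shuffle(acq)
    n_val = int(round(val_frac * acq.size))
    n_loss = int(round(loss_frac * (acq.size - n_val)))
    val_idx, loss_idx, dc_idx = np.split(acq, [n_val, n_val + n_loss])

    masks = []
    for idx in (dc_idx, loss_idx, val_idx):
        m = np.zeros_like(acq_mask)
        m.flat[idx] = 1
        masks.append(m)
    return masks  # [data consistency, training loss, validation]

def should_stop(val_losses, patience=3):
    """Stop once the k-space loss on the held-out validation set has
    not improved for `patience` consecutive epochs (a common heuristic,
    assumed here for illustration)."""
    if len(val_losses) <= patience:
        return False
    best_so_far = min(val_losses[:-patience])
    return min(val_losses[-patience:]) >= best_so_far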