Generalizable and accurate stereo depth estimation is vital for 3D reconstruction, especially in surgery. Supervised learning methods achieve the best performance, but the scarcity of ground-truth data for surgical scenes limits their generalizability. Self-supervised methods require no ground truth, yet suffer from scale ambiguity and incorrect disparity predictions caused by the inconsistency of the photometric loss. This work proposes a two-phase training procedure that is generalizable and retains the high performance of supervised methods. It entails: (1) self-supervised representation learning of the left and right views via masked image modelling (MIM) to learn generalizable semantic stereo features; and (2) using the MIM pre-trained model to learn a robust depth representation via supervised training for disparity estimation on synthetic data only. To strengthen the stereo representations learnt via MIM, perceptual loss terms are introduced, which explicitly encourage the model to learn higher scene-level features. Qualitative and quantitative evaluation on surgical and natural scenes shows that the approach achieves sub-millimetre accuracy and the lowest errors, respectively, setting a new state of the art despite never training on surgical or natural scene data for disparity estimation.
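The following is a minimal sketch of phase (1), assuming a PyTorch encoder-decoder that reconstructs both masked views jointly. The patch-masking scheme, the VGG16-based perceptual term, and the 0.1 loss weighting are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of phase (1): masked image modelling (MIM) on stereo pairs with an
# added perceptual loss term. Architecture and hyperparameters are assumed.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen VGG16 feature extractor (up to relu3_3) for the perceptual term.
vgg_feats = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in vgg_feats.parameters():
    p.requires_grad_(False)

def perceptual_loss(pred, target):
    # Compare reconstructions to targets in a deep feature space, rewarding
    # recovery of scene-level structure rather than per-pixel intensities.
    return F.l1_loss(vgg_feats(pred), vgg_feats(target))

def mask_patches(img, patch=16, ratio=0.75):
    # Zero out a random fraction of non-overlapping patches (hypothetical
    # masking scheme; assumes H and W are divisible by the patch size).
    b, c, h, w = img.shape
    keep = (torch.rand(b, 1, h // patch, w // patch, device=img.device) > ratio).float()
    keep = F.interpolate(keep, size=(h, w), mode="nearest")
    return img * keep

def mim_step(model, left, right):
    # Mask both views and reconstruct them jointly, encouraging the encoder
    # to learn stereo-consistent semantic features.
    left_rec, right_rec = model(mask_patches(left), mask_patches(right))
    rec = F.l1_loss(left_rec, left) + F.l1_loss(right_rec, right)
    perc = perceptual_loss(left_rec, left) + perceptual_loss(right_rec, right)
    return rec + 0.1 * perc  # 0.1 is an assumed weighting
```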
This research develops a novel stereo depth estimation method that integrates self-supervised and supervised learning. It begins with masked image modelling for stereo-semantic feature learning, then refines the learnt representation through supervised training on synthetic data for disparity estimation. Enhanced by perceptual loss terms and the model design, the method achieves sub-millimetre accuracy on surgical scenes and the lowest errors on natural scenes, setting a new benchmark without requiring real-world training data.
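A corresponding sketch of phase (2), the supervised refinement step, is given below. It assumes the MIM pre-trained model now outputs a disparity map and is supervised on synthetic stereo pairs only; the smooth-L1 loss, the batch format, and the validity masking are assumptions for illustration.

```python
# Sketch of phase (2): supervised disparity fine-tuning on synthetic data.
import torch
import torch.nn.functional as F

def disparity_step(model, optimizer, batch):
    # batch: left/right views plus ground-truth disparity from a synthetic
    # dataset; no real surgical or natural images are used at this stage.
    left, right, gt_disp = batch["left"], batch["right"], batch["disparity"]
    pred_disp = model(left, right)
    valid = gt_disp > 0                      # ignore invalid disparity pixels
    loss = F.smooth_l1_loss(pred_disp[valid], gt_disp[valid])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```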