April 10, 2025
Deep Learning (DL) techniques have been widely deployed in many application domains. The growing size and complexity of DL models demand distributed training. Since DL training is complex, software implementing distributed DL training is error-prone; thus, it is crucial to test distributed deep learning software to improve its reliability and quality. To address this issue, we propose a differential testing technique, D3, which leverages a distributed equivalence rule that we create to test distributed deep learning software. The rationale is that the same model trained on the same model input under different distributed settings should produce equivalent prediction output within certain thresholds; differing outputs indicate potential bugs in the distributed deep learning software. D3 automatically generates a diverse set of distributed settings, DL models, and model inputs to test distributed deep learning software. Our evaluation on two of the most popular DL libraries, PyTorch and TensorFlow, shows that D3 detects 21 bugs, including 12 previously unknown bugs.
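To make the distributed equivalence rule concrete, the sketch below (not D3 itself) compares one training step of the same model on the same data under two settings: a single-worker step versus a simulated two-worker data-parallel step where per-shard gradients are averaged, mimicking an all-reduce. The model, data, and tolerance are illustrative assumptions; a large disagreement in the final predictions is the kind of signal that would flag a potential bug in the distributed training stack.

```python
# Minimal sketch of the distributed-equivalence check (illustrative, not D3).
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
model_a = nn.Linear(8, 2)          # "single-worker" copy
model_b = copy.deepcopy(model_a)   # "data-parallel" copy (identical initialization)

x = torch.randn(16, 8)             # same model input for both settings
y = torch.randn(16, 2)
loss_fn = nn.MSELoss()
lr = 0.1

# Setting 1: single worker, one gradient over the full batch.
loss_fn(model_a(x), y).backward()
with torch.no_grad():
    for p in model_a.parameters():
        p -= lr * p.grad

# Setting 2: simulated 2-worker data parallelism -- per-shard gradients,
# then an averaged "all-reduce" before the parameter update.
grads = [torch.zeros_like(p) for p in model_b.parameters()]
for xs, ys in zip(x.chunk(2), y.chunk(2)):
    model_b.zero_grad()
    loss_fn(model_b(xs), ys).backward()
    for g, p in zip(grads, model_b.parameters()):
        g += p.grad / 2
with torch.no_grad():
    for p, g in zip(model_b.parameters(), grads):
        p -= lr * g

# Differential check: predictions should agree within a threshold.
x_test = torch.randn(4, 8)
out_a, out_b = model_a(x_test), model_b(x_test)
print("equivalent within 1e-5:", torch.allclose(out_a, out_b, atol=1e-5))
```

D3 goes well beyond this single hand-written case: it automatically generates diverse distributed settings, DL models, and model inputs, and applies the same equivalence check to each generated configuration.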
About Jiannan Wang
Jiannan Wang is a final-year PhD student in Computer Science at Purdue University, working under the guidance of Professor Lin Tan. His research focuses on deep learning systems, in particular bug detection and variance analysis in deep learning software, including distributed deep learning software.