Device Technologies and Biomedical Robotics
Ryan Ahmed, BSc
Student
NYIT
Kew Gardens, New York, United States
Dang Nguyen
Clinical Research Coordinator
Massachusetts General Hospital, Corrigan Minehan Heart Center
Boston, Massachusetts, United States
Mohammad Al Mousa
Student
University of South Florida, United States
Chinyere Charles-Okezie
Student
University of South Florida, United States
Khue Nguyen
Student
Vietnam National University, Ho Chi Minh City, Vietnam
Ngoc Phuong Hong Tao
Student
University of Miami, United States
Heath Rutledge-Jukes
Medical Student
Washington University School of Medicine in St. Louis, United States
Hoang Tran Pham
Physician
Pham Ngoc Thach University of Medicine, Vietnam
Phat K. Huynh
Ph.D. Student
University of South Florida, United States
Sam Hafez
Physician
Moffitt Cancer Center, United States
Aaron Muncey
Physician
Moffitt Cancer Center, United States
Blood loss remains a significant challenge in surgical procedures, and accurate measurement and monitoring are crucial to ensuring patient safety and preventing adverse outcomes. The current standard of care for quantifying blood loss during surgery involves manually counting and weighing surgical sponges, a method that is labor-intensive, error-prone, and often leads to substantial underestimation. In response to these challenges, a novel deep learning-based integrated system for blood loss quantification in surgical sponges has been developed. It harnesses convolutional neural networks, a ResNet-18 classifier and a YOLOv2 object detection network, in combination with mass-sensing technology to deliver precise, objective, real-time monitoring of blood loss. This approach streamlines blood loss quantification, reduces human error, and provides clinicians with more accurate and timely data to inform decision-making during surgery.
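The mass-sensing step above can be sketched as a simple gravimetric calculation: absorbed blood mass (wet reading minus dry sponge mass) divided by the density of whole blood. The dry masses below are hypothetical illustrative values, not figures from the study; only the density constant (~1.06 g/mL for whole blood) is a standard reference value.

```python
# Hedged sketch: converting a sponge's mass reading into an estimated
# blood volume. DRY_MASS_G values are illustrative assumptions.

BLOOD_DENSITY_G_PER_ML = 1.06  # approximate density of whole blood

# Hypothetical dry masses (grams) for the three sponge size classes
DRY_MASS_G = {"small": 4.0, "medium": 12.0, "large": 40.0}

def estimate_blood_loss_ml(sponge_size: str, measured_mass_g: float) -> float:
    """Convert a wet-sponge mass reading into an estimated blood volume (mL)."""
    absorbed_g = max(0.0, measured_mass_g - DRY_MASS_G[sponge_size])
    return absorbed_g / BLOOD_DENSITY_G_PER_ML

# Example: a medium sponge weighing 33.8 g holds roughly 20.6 mL of blood
print(round(estimate_blood_loss_ml("medium", 33.8), 1))
```

In the integrated system, the size label feeding this calculation would come from the classification network, which is why accurate size detection is a prerequisite for accurate volume estimates.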
Designing the blood loss quantification system involved constructing a durable, sterile, and biocompatible device from high-quality stainless steel, well suited to the surgical environment. A dataset of over 300 images of surgical sponges of varying sizes, shapes, and blood content was created, comprising both synthetic blood samples and real human blood samples collected at the Moffitt Cancer Center. High-resolution images of the sponges were captured with a depth camera integrated into the device; these images served both to train the deep learning model and as inputs for real-time sponge detection and blood loss quantification during surgery. The YOLOv2 object detection network was trained on the collected dataset, and the model's performance was evaluated with receiver operating characteristic (ROC) curves and 5-fold cross-validation to ensure reliable, unbiased performance estimates. Finally, the trained model was integrated into the stainless steel device for real-time object detection during surgery, identifying sponges within the input images and cueing the mass sensor to measure the blood content of each sponge. The combination of advanced deep learning techniques and robust materials demonstrates the potential of this integrated system to enhance surgical safety and minimize unnecessary blood transfusions.
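The 5-fold cross-validation protocol described above can be illustrated with a minimal sketch. The classifier here is a trivial majority-class placeholder, purely to show the fold-splitting and per-fold accuracy bookkeeping; in the actual system, the ResNet-18 and YOLOv2 models would be trained and evaluated at this step, and the toy label list is an assumption, not the study's data.

```python
# Hedged sketch of 5-fold cross-validation bookkeeping (pure stdlib).
import random
from collections import Counter

def k_fold_indices(n: int, k: int = 5, seed: int = 0):
    """Shuffle sample indices and yield (train, test) index lists per fold."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test

def cross_validate(labels, k: int = 5):
    """Per-fold accuracy of a majority-class baseline (model placeholder)."""
    accuracies = []
    for train, test in k_fold_indices(len(labels), k):
        # "Training" = pick the most common label in the training folds
        majority = Counter(labels[j] for j in train).most_common(1)[0][0]
        correct = sum(labels[j] == majority for j in test)
        accuracies.append(correct / len(test))
    return accuracies

labels = ["small"] * 40 + ["medium"] * 35 + ["large"] * 25  # toy dataset
print(cross_validate(labels))
```

Averaging the five held-out accuracies gives a performance estimate that does not depend on any single train/test split, which is the rationale for the protocol in the study.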
The deep learning-based integrated system demonstrated remarkable accuracy in detecting and classifying sponge sizes, enabling precise blood loss measurements. A ResNet-18 classification model was trained on a database of over 3,000 images of sponges not soaked in real blood, yielding 97% accuracy across nine classes: three sponge sizes, small (2 × 2 inches), medium (4 × 4 inches), and large (16 × 16 inches), each soaked in blood at 0%, 50%, and 100% saturation. ROC analysis comparing classification models, including LDA, SVM, decision tree, and KNN, showed that SVM was the most reliable at detecting sponge sizes. The model was further validated with 5-fold cross-validation, with the lowest AUC being 0.9859. Using the YOLOv2 object detection network, the system achieved an average accuracy of 87% in detecting sponges. Subsequently, when real patient blood samples were obtained from the Moffitt Cancer Center, the classification model achieved an accuracy of up to 91%. These results underscore the value of integrating advanced deep learning techniques into surgical practice, particularly for the challenges associated with blood loss quantification. The system's ability to accurately detect and classify sponge sizes, coupled with a mass sensor for blood measurement, has the potential to greatly improve patient outcomes by minimizing the risk of complications arising from inaccurate blood loss estimates. Moreover, the system's compatibility with existing hospital infrastructure and its user-friendly interface allow seamless integration into surgical workflows. Future research could refine the model with larger and more diverse datasets and explore real-time feedback mechanisms to further enhance surgical decision-making and patient safety.
Overall, the development of this novel deep learning-based integrated system represents a significant advancement in the management of blood loss during surgical procedures, with the potential to revolutionize surgical practice and improve patient care.