Professor Florida Institute of Technology INDIAN HARBOUR BEACH, Florida, United States
Introduction:: The current gold standard of skin cancer detection and diagnosis requires a dermatologist to evaluate suspected lesions. Basal cell carcinoma and squamous cell carcinoma are two types of skin cancer that affect deeper skin tissue layers. A third type, melanoma, originates in the most superficial layers of skin tissue and is considered the most dangerous because of its ability to metastasize quickly. A method is therefore needed to quickly and accurately differentiate healthy from cancerous tissue and to distinguish between types of skin cancer.
Optical coherence tomography (OCT) is a high-resolution tomographic imaging modality that relies on the intrinsic scattering properties of biological tissues to generate imaging contrast. Optical properties of biological tissues, such as the attenuation coefficient, can be quantified by analyzing OCT signals; for example, such measurements have been used to differentiate cancerous tissue from non-cancerous tissue. The scattering and absorption coefficients of tissues vary strongly among individuals.
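The abstract does not state how the attenuation coefficient is extracted from the OCT signal. A common approach is a log-linear fit of the depth profile under a single-scattering Beer-Lambert model, I(z) ≈ I₀·exp(−2μz), where the factor of 2 accounts for the round-trip path. A minimal sketch, assuming this model and using synthetic data (all names and values are illustrative):

```python
import numpy as np

def estimate_attenuation(a_scan, dz):
    """Estimate an attenuation coefficient (mm^-1) from one OCT A-scan
    by a log-linear (Beer-Lambert) fit: I(z) ~ I0 * exp(-2 * mu * z).
    a_scan : 1-D array of positive intensities; dz : pixel size in mm."""
    z = np.arange(a_scan.size) * dz              # depth axis in mm
    slope, _ = np.polyfit(z, np.log(a_scan), 1)  # fit log-intensity vs depth
    return -slope / 2.0                          # factor 2: round-trip path

# Synthetic A-scan with mu = 3 mm^-1, sampled every 2 um in depth
mu_true, dz = 3.0, 0.002
z = np.arange(500) * dz
a_scan = np.exp(-2.0 * mu_true * z)
mu_est = estimate_attenuation(a_scan, dz)  # recovers ~3.0 mm^-1
```

In practice the fit would be restricted to a depth window below the surface and the signal averaged over neighboring A-scans to suppress speckle.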
Deep learning has been demonstrated to be a powerful tool for biomedical problems. More specifically, U-Nets are a type of deep-learning network developed for biomedical image segmentation. The goal of a U-Net is to perform feature mapping on an image, learning the critical properties of each image given to the network. The working principle of the imaging system used here is based on a combination of traditional spectral-domain OCT coupled with high-lateral-resolution Gabor-domain optical coherence microscopy (OCM). This research uses a deep-learning U-Net to accurately segment and analyze images collected with OCM.
Materials and Methods:: Tissue phantoms with varying optical properties were used to train the network on a variety of inputs. Phantoms simulating cancerous tissue contained core inhomogeneities with different scattering and absorption coefficients. Numerous phantoms with different properties were imaged using a LighTopTech OCX system. Images were captured in OCM mode with 1000 slices per scan and four scanning depths of focus. The system operates at a wavelength of 860 nm with a super-luminescent diode source and has a lateral resolution of 2.8 µm. The acquired data were split 80%/20% between training and validation of the network. A U-Net-based architecture was used to segment the images into three primary regions: the surface, normal, and inhomogeneous regions of the phantom. The U-Net passes the input image through a series of down-sampling, convolution, maximum-pooling, and up-sampling operations to identify the regions of the image, and then outputs the results of the feature extraction. This classifies the input set of OCM images to return a final prediction of whether the image contains cancerous or normal tissue, together with a confidence level for that prediction. Lastly, a background-removal algorithm is used to differentiate the region of normal tissue from the lesion. After the network had been validated, a new set of images not previously seen by the network was used for final testing. These phantoms had properties different from those used in training and validation.
Results, Conclusions, and Discussions:: Results and Discussion
The network was able to accurately determine the regions of the OCM images showing the location of the phantom inhomogeneity. The network was also able to differentiate between normal and cancerous tissue with high confidence, confirming its accuracy: in all cases the network accurately segmented the two regions. Sample data are shown in Figure 1 for a phantom in which the difference in scattering coefficients between the normal and cancerous tissue is 10 mm⁻¹. The inhomogeneity is located at a depth of 1.366 mm from the surface, with a measurement uncertainty of ±5 µm. To further demonstrate the performance of the network, the phantoms were also imaged using a Nikon laser-scanning confocal microscope to validate the regions of the inhomogeneities predicted by the network, as shown in Figure 2.
Conclusion
An adaptive learning technique based on OCT measurements has been developed to build a predictive model for characterizing the inhomogeneities simulating cancerous tissue in tissue phantoms. This deep-learning network has the potential to reduce the level of interpretation required of dermatologists, allowing them to evaluate more suspected lesions without the time-consuming process of taking a biopsy of every lesion. Future work on a wider scale would include interfacing this deep-learning network with technology for more accurate and precise tumor removal.