Generative Adversarial Networks for Automatic Whole Slide Imaging Dataset Expansion and Analysis
In this project, we propose to develop and evaluate novel techniques for automatically expanding medical datasets, in particular whole slide imaging datasets, using generative adversarial network (GAN) technology. Automatic expansion of the training dataset faces two challenges: the small number of expert-annotated images and the very large size of whole slide histology images. To address them, we propose new GAN techniques built on our existing work, AttnGAN. AttnGAN is a state-of-the-art approach for generating high-resolution photorealistic images through a validated multi-stage GAN architecture. Its attentional mechanism synthesizes image details in different subregions of a generated image by attending to the relevant words in the image's natural language description. The attention model in AttnGAN is learned in a semi-supervised manner that requires only whole-image and whole-sentence pairs, alleviating the need for manual markups by expert pathologists. Using AttnGAN-generated whole slide images, we will train a deep learning classifier to segment cervical intraepithelial neoplasia (CIN) regions. We will evaluate the quality of the machine segmentation results in collaboration with expert pathologists and optimize end-to-end system performance.
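The word-level attention described above can be illustrated with a minimal sketch: each image subregion attends over the words of the description, producing a per-region context vector that conditions image synthesis. This is a simplified illustration, not the actual AttnGAN implementation; the function name, feature dimensions, and the dot-product similarity are assumptions made for clarity.

```python
import numpy as np

def word_attention(region_feats, word_feats):
    """Simplified word-level attention in the style of AttnGAN.

    region_feats: (N, D) array, features for N image subregions
    word_feats:   (T, D) array, embeddings for T words of the description
    Returns a (N, D) context matrix: for each subregion, a weighted sum
    of word embeddings, weighted by that word's relevance to the region.
    """
    scores = region_feats @ word_feats.T             # (N, T) similarities
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)          # softmax over words
    return attn @ word_feats                         # (N, D) context vectors

# Illustrative shapes only: 16 subregions, 8 words, 32-dim features.
rng = np.random.default_rng(0)
regions = rng.standard_normal((16, 32))
words = rng.standard_normal((8, 32))
context = word_attention(regions, words)
print(context.shape)  # (16, 32)
```

In the full model, these context vectors are combined with the region features at each stage of the multi-stage generator, so that different subregions of the synthesized image reflect different words of the description.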