Visualizing Transfer Learning in COVID-19 X-rays

Ever wondered what a convolutional neural network ‘sees’ when it makes predictions about the class of a given image? Visualizing individual neurons within a network can, with a few tricks, reveal the kinds of features learnt by that network, and produce some pretty compelling images, too.
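One of the simpler tricks is activation maximization: start from noise and optimize the input so that a chosen neuron (or channel) responds strongly. The sketch below is only illustrative and is not the project's code; the choice of DenseNet-121, the hooked layer, the channel index, and the optimization settings are all assumptions.

```python
# Illustrative activation maximization: optimize an input image so that one
# channel of an intermediate layer fires strongly. All choices here (model,
# layer, channel, learning rate, steps) are assumptions for demonstration.
import torch
import torchvision.models as models

model = models.densenet121(weights="IMAGENET1K_V1").eval()

activations = {}
def save_activation(module, inputs, output):
    activations["feat"] = output

# Hook an intermediate dense block (chosen arbitrarily for illustration).
model.features.denseblock3.register_forward_hook(save_activation)

# Start from random noise and ascend the mean activation of one channel.
img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)
channel = 42  # arbitrary channel index

for _ in range(200):
    optimizer.zero_grad()
    model(img)
    loss = -activations["feat"][0, channel].mean()  # negative => gradient ascent
    loss.backward()
    optimizer.step()

# `img` now approximates an input pattern that strongly excites that channel.
```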

For applications of computer vision to healthcare, it’s especially important that these learnt features are actually related to the diagnosis and not simply technical artifacts of the imaging process. Transfer learning for vision tasks in medicine typically relies on networks pre-trained on ImageNet, whose features may not generalize well to medical problems, even though these models often converge on valid representations. And when they do converge, it doesn’t appear to be due to useful feature re-use: random initialization provides comparable results.

This project applies the paradigm of transfer learning to a public dataset of COVID-19 X-rays compiled by Joseph Paul Cohen and a team at the University of Montreal. In doing so, I’ve aimed to visualize the types of features learnt by networks pre-trained on ImageNet, and to compare them against both models trained from scratch and models initialized with CheXNet weights, a DenseNet architecture initially trained on the ChestX-ray14 dataset.
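As a rough sketch of how the three initializations can be set up, the snippet below builds a DenseNet-121 with ImageNet weights, random weights, or weights loaded from a CheXNet-style checkpoint. The checkpoint path, the number of classes, and the weight-loading details are placeholders, not the repository's actual code.

```python
# Hedged sketch of the three compared initializations: ImageNet-pretrained,
# random, and CheXNet-style weights. Paths and class count are placeholders.
import torch
import torch.nn as nn
import torchvision.models as models

NUM_CLASSES = 3  # assumed label set for illustration

def build_densenet(init="imagenet", chexnet_ckpt=None):
    if init == "imagenet":
        net = models.densenet121(weights="IMAGENET1K_V1")
    else:
        net = models.densenet121(weights=None)
        if init == "chexnet" and chexnet_ckpt is not None:
            # CheXNet is a DenseNet trained on ChestX-ray14; load its backbone
            # weights and ignore any mismatched classifier head.
            state = torch.load(chexnet_ckpt, map_location="cpu")
            net.load_state_dict(state, strict=False)
    # Replace the classifier with a head sized for the COVID-19 task.
    net.classifier = nn.Linear(net.classifier.in_features, NUM_CLASSES)
    return net

model = build_densenet(init="imagenet")
```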

This project was completed as part of the course BMI 707 in the Department of Biomedical Informatics at Harvard Medical School. A PDF of the report is embedded below; it can also be accessed directly on Google Drive, and the code for the project is hosted in a GitHub repository.


