
AI Beats Radiologists at Pneumonia Detection | Two Minute Papers #214

November 17, 2019


Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this work, a 121-layer convolutional neural network is trained to recognize pneumonia and 13 other diseases. Pneumonia is an inflammatory lung condition that is responsible for a million hospitalizations and 50,000 deaths per year in the US alone.

Such an algorithm requires a training set of formidable size to work properly. This means a bunch of input-output pairs. In this case, one training sample is an input frontal X-ray image of the chest, and the output is a set of annotations by experts who mark which of the 14 sought diseases are present in this sample. So they say, for instance, that this image contains pneumonia here, and this one doesn't. This is not just a binary yes-or-no answer, but a more detailed heatmap of possible regions that fit the diagnosis.
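To make the idea of these input-output pairs concrete, here is a minimal sketch of how one such training sample could be represented, assuming a PyTorch-style data loader. The class names follow the ChestX-ray14 label set, and the sample format, class, and paths are illustrative assumptions, not the paper's actual code.

    # Hypothetical sketch: each training pair is a frontal chest X-ray plus a
    # 14-dimensional 0/1 label vector derived from the expert annotations.
    import torch
    from torch.utils.data import Dataset
    from torchvision import transforms
    from PIL import Image

    # The 14 disease labels (pneumonia among them), as in the ChestX-ray14 dataset.
    CLASSES = [
        "Atelectasis", "Cardiomegaly", "Effusion", "Infiltration", "Mass",
        "Nodule", "Pneumonia", "Pneumothorax", "Consolidation", "Edema",
        "Emphysema", "Fibrosis", "Pleural_Thickening", "Hernia",
    ]

    class ChestXrayDataset(Dataset):
        def __init__(self, samples):
            # samples: list of (image_path, set_of_disease_names) pairs
            self.samples = samples
            self.transform = transforms.Compose([
                transforms.Resize((224, 224)),
                transforms.ToTensor(),
            ])

        def __len__(self):
            return len(self.samples)

        def __getitem__(self, idx):
            path, diseases = self.samples[idx]
            image = self.transform(Image.open(path).convert("RGB"))
            # Multi-label target: 1.0 wherever the experts marked a disease present.
            target = torch.tensor([1.0 if c in diseases else 0.0 for c in CLASSES])
            return image, target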
The training set used for this algorithm contained over 100,000 images of over 30,000 patients. This is then given to the neural network, and its task is to learn the properties of these diseases by itself.

Then, after the learning process has taken place, previously unseen images are given to the algorithm and to a set of radiologists. This is called a test set, and of course, it is crucial that both the training and the test sets are reliable. If the training and test sets were created by one expert radiologist, and we then benchmarked the neural network against a different, randomly picked radiologist, that would not be a very reliable process, because each of these humans may be wrong in more than a few cases. Instead, the training and test annotation data is created by asking multiple radiologists and taking a majority vote on their decisions.
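The majority vote itself is simple; here is a small sketch of the idea with made-up numbers, assuming each radiologist gives a binary decision per image (the paper's exact annotation protocol may differ):

    import numpy as np

    def majority_vote(annotations):
        # annotations: (num_radiologists, num_images) array of 0/1 decisions.
        # An image is labeled positive when more than half of the annotators agree.
        votes = annotations.sum(axis=0)
        return (votes > annotations.shape[0] / 2).astype(int)

    # Three radiologists label four images; the vote settles the disagreements.
    decisions = np.array([[1, 0, 1, 0],
                          [1, 1, 0, 0],
                          [1, 0, 0, 1]])
    print(majority_vote(decisions))  # -> [1 0 0 0]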
So now that the training and test data is reliable, we can properly benchmark a human against a neural network. And here's the result: this learning algorithm outperforms the average human radiologist. The performance was measured in a 2D space where sensitivity and specificity were the two interesting metrics. Sensitivity means the proportion of positive samples that were classified as positive, and specificity means the proportion of negative samples that were classified as negative. The crosses mark the human doctors, and as you can see, whichever radiologist we look at, even though they have different false positive and false negative ratios, they are all located below the blue curve that denotes the results of the learning algorithm. This is a simple diagram, but if you think about what it actually means, this is an incredible application of machine intelligence.
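For reference, here is a small sketch of how these two metrics and the model's curve can be computed; the scores and labels below are made up for illustration and are not results from the paper.

    import numpy as np

    def sensitivity_specificity(y_true, y_score, threshold):
        y_pred = (y_score >= threshold).astype(int)
        tp = np.sum((y_pred == 1) & (y_true == 1))  # sick, flagged as sick
        tn = np.sum((y_pred == 0) & (y_true == 0))  # healthy, cleared as healthy
        fp = np.sum((y_pred == 1) & (y_true == 0))
        fn = np.sum((y_pred == 0) & (y_true == 1))
        sensitivity = tp / (tp + fn)  # proportion of positives caught
        specificity = tn / (tn + fp)  # proportion of negatives correctly cleared
        return sensitivity, specificity

    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])            # expert majority labels
    y_score = np.array([.9, .2, .7, .4, .6, .1, .8, .3])   # model's pneumonia scores

    # Each radiologist is a single (sensitivity, specificity) point on the plot;
    # the model traces a whole curve, one point per decision threshold.
    for t in (0.3, 0.5, 0.7):
        print(t, sensitivity_specificity(y_true, y_score, t))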
And now, a word on limitations. It is noted that this was an isolated test: for instance, the radiologists were only given one image, and usually, when diagnosing someone, they know more about the history of the patient, which may further help their decisions. For instance, a history of a strong cough and high fever is highly useful supplementary information for humans when diagnosing someone who may have pneumonia. Beyond the frontal view of the chest, it is also standard practice to use lateral views if the results are inconclusive. These views are not available in this dataset, and it is conjectured that having them might sway the comparison in favor of the humans. However, I'll note that this information may benefit the AI just as much as the radiologists, and this seems like a suitable direction for future work.
Finally, this is not the only algorithm for pneumonia detection, and it has been compared to the state of the art for all 14 diseases, and this new technique came out on top for all of them. Also, have a look at the paper for details, because training a 121-layer neural network requires some clever shenanigans, and that was the case here too.
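As a rough sketch of what such a 121-layer classifier can look like: the network in the paper is a densely connected 121-layer model (DenseNet-121) with a 14-output head, and the version below, built from torchvision's stock DenseNet-121, is an illustrative approximation rather than the authors' implementation.

    import torch
    import torch.nn as nn
    from torchvision import models

    class PneumoniaNet(nn.Module):
        """Illustrative 121-layer classifier with 14 independent disease outputs."""
        def __init__(self, num_classes=14):
            super().__init__()
            # Dense connections feed each layer the feature maps of all earlier
            # layers, which helps gradients flow through a network this deep.
            self.backbone = models.densenet121(weights=None)  # the paper starts from ImageNet weights
            in_features = self.backbone.classifier.in_features
            # Swap the 1000-way ImageNet head for 14 disease scores.
            self.backbone.classifier = nn.Linear(in_features, num_classes)

        def forward(self, x):
            # One sigmoid per class: the diseases are not mutually exclusive.
            return torch.sigmoid(self.backbone(x))

    model = PneumoniaNet()
    scores = model(torch.randn(1, 3, 224, 224))  # one image -> 14 disease scores
    print(scores.shape)  # torch.Size([1, 14])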
It is really delightful to see that these learning algorithms can help diagnose serious illnesses and provide higher-quality healthcare to more and more people around the world, especially in places where access to expert radiologists is limited.

Everyone needs to hear about this. If you wish to help us spread the word and tell these incredible stories to even more people, please consider supporting us on Patreon. We also know that many of you are crazy for Bitcoin, so we have set up a Bitcoin address as well. Details are available in the video description. Thanks for watching and for your generous support, and I'll see you next time!

12 Comments

  • Reply Two Minute Papers December 12, 2017 at 6:41 pm

    Our Patreon page with the details: https://www.patreon.com/TwoMinutePapers
    Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh
    Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A

  • Reply MsJeffreyF December 15, 2017 at 3:02 am

    121 layers!?!

  • Reply minER December 15, 2017 at 6:12 am

    This is very interesting! One thing to keep in mind is that while this system does better than the average radiologist, many X-rays are first interpreted by a primary care physician (such as an ER doctor). This is often done in less than ideal circumstances, such as under bright fluorescent hospital lights and/or on smaller monitors. Patient management is initiated and the image is interpreted by radiology hours to days later (days if performed on the weekend). Inevitably there are discrepancies between the ER physician and the radiologist, which results in delays in patient care. A system such as this could highlight areas of interest for the primary care physician pending an official radiology report.

  • Reply age3rcm December 16, 2017 at 1:34 pm

    THE FUTURE IS NOW

  • Reply Jean-Marc Loingtier December 26, 2017 at 10:54 pm

    What it shows is a fantastic way of summarizing and archiving some very specific human knowledge. Of course, one can only wonder if this process is not too limited. Rather than learning from specific comments attached to each picture, it would have been more powerful to train the system on X-ray + biological results identifying (for sure) an infection. Maybe next time?

  • Reply Akash Kandpal December 28, 2017 at 10:44 am

    nice video again 🙂

  • Reply anjopag31 January 22, 2018 at 1:47 am

    >121 layer CNN

  • Reply Muhammad Usama awan October 2, 2018 at 9:22 am

    Does CheXNet's algorithm also locate where the pneumonia is in the chest X-rays, or does it only detect the presence of pneumonia?
    And thanks for the video, it is helpful.

  • Reply Khin Maung Htay November 25, 2018 at 5:42 am

    Hi, love your brief explanation!!! Could you do a video on the paper "Densely Connected Convolutional Networks"? Thank you….

  • Reply Heinrich Peter Maria Radojewski Schäfer Leverkusen May 5, 2019 at 2:19 pm

    There are 121 layers here.
    How many neurons are there in each layer?
    How can you describe the arrangement?

  • Reply Chris Summers September 19, 2019 at 6:50 pm

    Trained to recognize pneumonia
