Pneumonia Detection with a CNN
Pneumonia is a disease that has long plagued humanity. It is an infection, most often bacterial or viral, that inflames the air sacs of the lungs and can cause a cough with phlegm, fever, chills, and many other unpleasant symptoms. During the COVID-19 pandemic, many patients who died had fought off the virus itself, only to succumb to pneumonia that took advantage of a weakened immune system. Detecting pneumonia in its early stages lets doctors begin treatment promptly, and that is what my project aims to do: build an algorithm that can detect pneumonia and differentiate lungs with pneumonia from lungs that are healthy.
Pneumonia is an unpleasant disease, so I set out to use a Convolutional Neural Network (CNN) to detect it from chest X-rays. Any algorithm requires data, so I downloaded a dataset from Kaggle containing almost 1 GB of X-rays from patients with pneumonia and patients with normal lungs. You might ask: what is a CNN? A CNN is a type of neural network that gives a computer a form of vision. It treats an image as a grid of pixel values and slides small filters across that grid; each convolution produces a feature map that highlights patterns such as edges or textures. Pooling then shrinks those feature maps: the algorithm takes a small region of the map, keeps only its maximum value (max pooling), and moves on. The compact features that survive this process are what the network uses to classify the image. Now, let's move on to my code.
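The max-pooling step described above can be sketched in plain NumPy. This is a toy 4x4 "image" for illustration, not an actual X-ray:

```python
import numpy as np

# A toy 4x4 "image" of pixel intensities.
image = np.array([
    [1, 3, 2, 4],
    [5, 6, 1, 2],
    [7, 2, 8, 1],
    [3, 4, 5, 9],
])

def max_pool_2x2(img):
    """Slide a 2x2 window over the image (stride 2) and keep the max of each window."""
    h, w = img.shape
    pooled = np.zeros((h // 2, w // 2), dtype=img.dtype)
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            pooled[i // 2, j // 2] = img[i:i + 2, j:j + 2].max()
    return pooled

print(max_pool_2x2(image))
# Each 2x2 block collapses to its largest value:
# [[6 4]
#  [7 9]]
```

Halving the width and height this way keeps the strongest responses while cutting the amount of data the next layer has to process by a factor of four.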
There are a few things to unpack from this part of the code. First, the imported libraries, which each perform a different task. NumPy converts the data into arrays, which speeds up processing. Matplotlib is a plotting library that lets me display the images. The os module gives Python functions for interacting with my computer's operating system and locating the data files. OpenCV (cv2) reads and resizes the images. Tqdm adds the progress bars that appear later in the code. Pickle saves the processed data into two new files that the training step will load. DATADIR tells the computer where the data lives on my machine, and CATEGORIES lists the two classes an image can belong to. The first for loop iterates through the folders listed under DATADIR and retrieves the image files to be used. After the loop, the code prints a sample array of data and its shape. The images are then resized, and we begin building a training dataset from all the X-ray files obtained.
The training data starts as an empty list, which the training-data function then fills. The function builds a path to the normal and pneumonia folders and assigns a label to each class: 0 for pneumonia and 1 for normal. It converts each image to an array and resizes it to keep the size of the data manageable, then appends the array and its label to the training-data list. The function is called, and the length of the resulting dataset is printed. The code then imports random and shuffles the training data so the two classes are mixed together. It prints a few random samples of the shuffled labels as a spot check, reshapes the data, and we now begin the process of training and identification.
The code now uses pickle to save the training data to files and load it back. After loading, we use TensorFlow to build the model that classifies the X-rays. Pixel values range from 0 to 255, so we divide by 255 to scale them into [0, 1]. We then create our model using Sequential, a Keras model that stacks layers in a straight line with one input tensor and one output tensor; it is simply the type of model we will use here, and Keras offers many others. The images run through the model's convolution layers, which extract feature maps, and its max-pooling layers, which keep only the strongest value in each region. The 3D feature maps are then flattened into 1D vectors. The model is compiled, and the epochs of training begin.
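A sketch of this stage is below. The layer sizes, filter counts, and `IMG_SIZE` are illustrative assumptions, since the original code is not shown; the stand-in random arrays let the sketch run on its own, whereas in the real project `X` and `y` come from the training-data step:

```python
import pickle
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

IMG_SIZE = 100  # assumed image size

# Stand-in arrays so the sketch runs alone; in the real project X and y
# come from the training-data step.
X = np.random.randint(0, 256, (8, IMG_SIZE, IMG_SIZE, 1)).astype("float32")
y = np.array([0, 1] * 4)

# Save with pickle and load back, as described above.
with open("X.pickle", "wb") as f:
    pickle.dump(X, f)
with open("y.pickle", "wb") as f:
    pickle.dump(y, f)
with open("X.pickle", "rb") as f:
    X = pickle.load(f)
with open("y.pickle", "rb") as f:
    y = pickle.load(f)

X = X / 255.0  # pixel values run 0-255, so scale them into [0, 1]

model = Sequential([
    tf.keras.Input(shape=X.shape[1:]),
    Conv2D(32, (3, 3), activation="relu"),   # convolution: extract feature maps
    MaxPooling2D(pool_size=(2, 2)),          # max pooling: keep the strongest values
    Conv2D(32, (3, 3), activation="relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),                               # 3D feature maps -> 1D vector
    Dense(64, activation="relu"),
    Dense(1, activation="sigmoid"),          # probability that the lung is "normal"
])

model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(X, y, batch_size=4, epochs=1, verbose=0)
```

The single sigmoid output suits the two-class setup: values near 0 mean pneumonia and values near 1 mean normal, matching the 0/1 labels assigned earlier.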
The epochs run, and the final accuracy comes in at 95.91%, fairly good for this algorithm. Hopefully, with further research in this field, we can routinely use these types of algorithms to identify diseases and assist human doctors in making their diagnoses.