Background Removal In The Image Using The Mediapipe Library

This article was published as a part of the Data Science Blogathon.

In this article, we will build an application that removes the background of an image or replaces it with another image. For that, we will be using two libraries: first, the Mediapipe library for segmenting the person from the background, and second, cv2 (OpenCV) for performing the image-processing steps.

Now, we will be loading all the required libraries to build this application.
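As a minimal sketch (assuming the usual PyPI packages mediapipe, opencv-python, numpy, and matplotlib are installed), the imports used throughout this walkthrough could look like this:

import cv2                       # reading images and colour conversion
import numpy as np               # mask stacking and np.where compositing
import matplotlib.pyplot as plt  # displaying results
import mediapipe as mp           # selfie segmentation solution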

So, our very first step will be to initialize the model; this is a pre-step before we can run the selfie segmentation on our image.

For selfie segmentation, there are two types of model:

General Model: If we pass 0 as the parameter to the selfie segmentation model, the general model will be selected.

Landscape Model: If we pass 1 as the parameter, the landscape model will be selected.

Note: If we do not specify either, 0 will be selected by default, i.e., the general model.

But wait a minute! What is the difference between both models? Let’s discuss that:

When it comes to the general model, it works on a 256x256x3 input, i.e., 256 height, 256 width, 3 channels, and produces a 256x256x1 segmentation mask as output. The landscape model works on a 144x256x3 input and produces a 144x256x1 output mask. Other than the input resolution, the general and landscape models are the same.
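As a quick sketch of how the two variants are picked (this uses the standard model_selection argument of MediaPipe's SelfieSegmentation; the variable names are just for illustration):

# 0 -> general model (256x256 input), 1 -> landscape model (144x256 input)
general_segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=0)
landscape_segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1)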

change_background_mp = mp.solutions.selfie_segmentation change_bg_segment = change_background_mp.SelfieSegmentation()

Code breakdown:

As discussed, we initialize the segmentation model using mp.solutions.selfie_segmentation; breaking this down, from the Mediapipe library we call the solutions module, and from that module we call the selfie_segmentation solution.

After model initialization, we set up our segmentation function, i.e., SelfieSegmentation().

    Read an Image

    So previously, we have initialized our segmentation model and created a selfie segmentation function as well. Now, let’s read our sample image and see what it looks like:

cv2.imread: To read the sample image from the local system.

    plt.imshow: This is the matplotlib function that will help us to see/plot the image.

    sample_img = cv2.imread('media/sample.jpg') plt.figure(figsize = [10, 10]) plt.title("Sample Image");plt.axis('off');plt.imshow(sample_img[:,:,::-1]);plt.show()

    Sample image source: Unsplash

    Code breakdown:

So firstly, we read the image using the imread() function.

Then, before plotting/displaying the image, we set the size of the display using the figure function.

Finally, before displaying the image, it is good practice to convert it from BGR to RGB (cv2 reads colored images in BGR order, while matplotlib expects RGB; that is what the [:,:,::-1] slicing does), and then, with the help of the show function, we display the image.

        Remove/Replace Background using Mediapipe Library

        RGB_sample_img = cv2.cvtColor(sample_img, cv2.COLOR_BGR2RGB) result = change_bg_segment.process(RGB_sample_img) plt.figure(figsize=[22,22]) plt.subplot(121);plt.imshow(sample_img[:,:,::-1]);plt.title("Original Image");plt.axis('off'); plt.subplot(122);plt.imshow(result.segmentation_mask, cmap='gray');plt.title("Probability Map");plt.axis('off');

        Output:

        Code breakdown:

        As discussed, we will first convert the BGR format image to an RGB format image.

          Now with the help of process function, we will process our selfie segmentation model on the sample image.

            Then as we did in the Read image section, here also we will set the figure size with the help of figure function.

              Finally, we will be displaying the original image as well as segmented image side by side (by using subplot function of matplotlib) and imshow function.

Inference: If we look closely at the output (the segmented subplot, i.e., our main processed output), we can see that some areas are neither purely black nor purely white but a shade of gray. These are the places where the model could not decide whether the pixel belongs to the background or the person, so we will use a thresholding technique to get a more accurate segmented area in the image.

So, in our next step, we will threshold the mask so that we get only two types of pixel values, i.e., a binary black-and-white mask with a pixel value of 1 for the person and 0 for the background.
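The thresholding step itself is a one-liner; here is a small sketch of how the binary mask used below can be obtained from the probability map, using the 0.9 confidence cut-off described in the code breakdown:

# keep only the pixels where the model is at least 90% confident it sees the person
binary_mask = result.segmentation_mask > 0.9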

plt.figure(figsize=[22,22]) plt.subplot(121);plt.imshow(sample_img[:,:,::-1]);plt.title("Original Image");plt.axis('off'); plt.subplot(122);plt.imshow(binary_mask, cmap='gray');plt.title("Binary Mask");plt.axis('off');

              Output:

              Code breakdown:

Binary masking with thresholding: Here, we use the concept of binary masking, which gives a pixel value of 1 for the person and 0 for the background. We set the threshold value to 0.9, i.e., a confidence of 90%: pixels whose mask value is greater than the threshold become 1, otherwise 0.

                Now, again we will plot both the original and preprocessed image (one with the binary mask) using subplots and Matplotlib’s show function.

So far, we have segmented our image accurately by performing some image preprocessing techniques. Now it's time to actually remove the image's background, and for that we will use the numpy.where() function. This function uses the binary mask values: wherever the mask is 1 it keeps the original pixel (the person), and wherever the mask is 0 (the background) it replaces the pixel with 255, so the background becomes plain white.

But before producing the required output, we first have to convert the one-channel mask into a three-channel mask using the numpy.dstack function.

                binary_mask_3 = np.dstack((binary_mask,binary_mask,binary_mask)) output_image = np.where(binary_mask_3, sample_img, 255) plt.figure(figsize=[22,22]) plt.subplot(121);plt.imshow(sample_img[:,:,::-1]);plt.title("Original Image");plt.axis('off'); plt.subplot(122);plt.imshow(output_image[:,:,::-1]);plt.title("Output Image");plt.axis('off');

                Output:

                Code breakdown:

As discussed, we use Numpy's dstack function to convert our mask from one channel to three channels.

Now, we use Numpy's where function, which keeps the person's pixels and replaces every background pixel (where the mask is 0) with 255, so that the background appears white.

Finally, we set the image size using the figure function and then display both the original and the output image using the show function.

Note: So far we have used 255 as the replacement value to get a white background, but we can also use another image as the background; for that, we just need to change the third parameter of the np.where function.

                    bg_img = cv2.imread('media/background.jpg') output_image = np.where(binary_mask_3, sample_img, bg_img) plt.figure(figsize=[22,22]) plt.subplot(131);plt.imshow(sample_img[:,:,::-1]);plt.title("Original Image");plt.axis('off'); plt.subplot(132);plt.imshow(binary_mask, cmap='gray');plt.title("Binary Mask");plt.axis('off'); plt.subplot(133);plt.imshow(output_image[:,:,::-1]);plt.title("Output Image");plt.axis('off');

                    Output:

                    Code breakdown:

So here comes the last part, where we replace the background of the image. For that, we first read the background image using the imread function.

Now we create the final output image: we use the np.where function to replace the background region (where the binary mask is 0) with the other background image.

                        Finally, we will display the original image, sample image, and the final segmentation result.

So, finally, we have developed our application, which can remove the background of any image that has a person in it. We could also extend this to work in real time, just like the Zoom application; the logic stays the same, except that instead of processing a single image we would process video frames.
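As a rough sketch of that real-time idea (this is not part of the code above; it simply reuses the same segmentation, thresholding, and np.where steps on webcam frames, and assumes a webcam at index 0 and a background image at media/background.jpg):

cap = cv2.VideoCapture(0)                     # default webcam (assumption)
bg_img = cv2.imread('media/background.jpg')   # replacement background (assumption)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # resize the background to the frame size, segment, threshold, and composite
    bg = cv2.resize(bg_img, (frame.shape[1], frame.shape[0]))
    result = change_bg_segment.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    mask_3 = np.dstack([result.segmentation_mask > 0.9] * 3)
    output = np.where(mask_3, frame, bg)
    cv2.imshow('Background replacement', output)
    if cv2.waitKey(1) & 0xFF == ord('q'):     # press q to quit
        break

cap.release()
cv2.destroyAllWindows()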

                        Key takeaways from the article

                        The very first takeaway from this article is that we have learned how image segmentation works and its real-world implementation.

There are ample techniques available for image segmentation, but this is one of the simplest to use since, as you can see, it comes in a modular form.

We have also covered some basic image preprocessing techniques, such as thresholding the mask and stacking it into three channels (np.dstack). These basic techniques are also involved in building a complete computer vision pipeline for an application.

                        Endnotes

                        Read on AV Blog about various predictions using Machine Learning.

                        About Me

Greetings to everyone! I'm currently working at TCS and previously worked as a Data Science Analyst at Zorba Consulting India. Along with full-time work, I have an immense interest in the same field, i.e. Data Science, along with its other subsets of Artificial Intelligence such as Computer Vision, Machine Learning, and Deep Learning; feel free to collaborate with me on any project in the domains mentioned above (LinkedIn).

The media shown in this article is not owned by Analytics Vidhya and is used at the Author's discretion.


                        How To Remove An Image Background

                        So you have a wonderful picture of your kids or dogs and you want to have some fun by removing the background and dropping in a different background? Or maybe you just want to remove the background on a picture so that you can use it on a website or digital document?

                        There are many ways to go about removing a background from images and all of them will give you different results. The best thing is to try a couple of different methods on your picture and see which one gives you the best results.


                        In this post, I’m going to write about three methods: using PowerPoint, using a website called ClippingMagic and using an app on your Android or iOS phone called Background Eraser.

                        Remove Background using PowerPoint

In PowerPoint, insert your picture and use the Remove Background command on the picture's Format tab; the areas PowerPoint plans to remove are shaded purple, and you can draw lines over the parts you want to keep. When you let go, it will automatically figure out which part to keep. It might end up adding more purple in another part of the image, but you can just draw another line to fix that. Here you can see I had to draw a couple of lines in order to get just the dogs.

                        Here I inserted a picture of clouds and then moved my dog picture to the front. When you remove a background using PowerPoint it automatically makes the removed part transparent, so you don’t have to manually make the background transparent.

                        When you remove the background, you might see a little bit of the background around the edges of the main subject. This is really hard to remove in some images, but there is a nice trick that makes the cutout look a lot better in PowerPoint.

                        Remove Background Using Clipping Magic

                        The only downside to this site is that you have to pay in order to download a copy of your image. Not only that, they require you to subscribe to a monthly plan instead of having a one-time charge, which I would not mind paying if it was something really cheap.

                        Remove Background using Background Eraser

                        If you take most of your photos from your smartphone now, it will probably be easier to just download an app that can help you remove a background. Here are the ones I suggest for iOS and Android:

                        Once you install the iOS app, Background Eraser will give you a blank screen like shown below.

                        Tap on the small icon at the top left that has the mountain on it. It will ask you for permission to access your photos. When you agree, go ahead and select a picture from your camera roll.

                        Once your image has loaded, you will see the buttons at the bottom become enabled. You can crop and adjust the colors, etc if you like. In our case, we want to tap on Erase.

                        The erase tools will appear across the bottom. By default, Erase is selected and if you start to move your hand over the image, it will start erasing. There are a couple of things to note. Firstly, the width is set to max and you can adjust it by using the slider.

                        In addition, there is an offset so that when you move your finger across the screen, the erasing will be offset from your finger so that you can actually see what you are erasing. You can also adjust this offset using the slider.

                        Next, Restore will do the opposite of Erase and will bring back any part of the image you move your finger over. TargetArea is really handy and will allow you to simply tap on an area with similar background and remove it automatically. This is good for sections that have solid colors.

                        TargetColor will allow you to pick one color in the image and have it erased anywhere else it shows up in the image. Lastly, Reverse will invert the selection.

                        Using a combination of the tools, you can remove exactly the portions of the picture you want. Note that you can also pinch to zoom, which makes it really easy to get rid of those hard to get to parts. Finally, when you are done, tap on the Done link and then tap on the arrow at the top right.

                        You can now save the picture out to your camera roll, email it, or share it onto social media. You can also choose from various sizes and choose between PNG and JPEG.

JavaFX Example To Decrease The Brightness Of An Image Using OpenCV

                        One way to alter the brightness of an image using Java is to use the convertTo() method. This method performs the required calculations on the given matrix to alter the contrast and brightness of an image. This method accepts 4 parameters −

                        mat − Empty matrix to hold the result with the same size and type as the source matrix.

                        rtype − integer value specifying the type of the output matrix. If this value is negative, the type will be same as the source.

                        alpha − Gain value, which must be greater than 0 (default value 1).

                        beta − Bias value (default value 0).

If the chosen value of the beta parameter is negative (0 to -255), the brightness of the image is reduced.
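To make the alpha/beta behaviour concrete, here is a small Python/NumPy sketch of the same new_pixel = alpha * pixel + beta transform (this only illustrates the formula and is separate from the JavaFX example that follows; the file name is a placeholder):

import cv2
import numpy as np

def adjust(img, alpha=1.0, beta=0.0):
    # alpha scales the contrast; a negative beta lowers the brightness
    out = img.astype(np.float32) * alpha + beta
    return np.clip(out, 0, 255).astype(np.uint8)

img = cv2.imread('cuba.jpg')                 # placeholder file name
darker = adjust(img, alpha=1.0, beta=-80)    # beta in (0, -255] darkens the image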

Example

import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import javafx.application.Application;
import javafx.beans.value.ChangeListener;
import javafx.beans.value.ObservableValue;
import javafx.embed.swing.SwingFXUtils;
import javafx.geometry.Insets;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.scene.control.Slider;
import javafx.scene.image.ImageView;
import javafx.scene.image.WritableImage;
import javafx.scene.layout.VBox;
import javafx.stage.Stage;
import javax.imageio.ImageIO;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfByte;
import org.opencv.imgcodecs.Imgcodecs;
public class DecreasingBrightnessJavaFX extends Application {
   double contrast = 1;
   private final int rtype = -1;
   double alpha = 1;
   double beta = 0;
   Slider slider1;
   int sliderMinVal = 0;
   int sliderMaxVal = 255;
   int sliderInitVal = 255;
   Mat src = null;
   public void start(Stage stage) throws IOException {
      // Loading the OpenCV core library and reading the source image
      System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
      String file = "D:\\Image\\cuba.jpg";
      src = Imgcodecs.imread(file);
      WritableImage writableImage = loadImage(src);
      // Setting the image view
      ImageView imageView = new ImageView(writableImage);
      imageView.setX(25);
      imageView.setY(25);
      imageView.setFitHeight(400);
      imageView.setFitWidth(550);
      imageView.setPreserveRatio(true);
      // Slider that controls the brightness
      slider1 = new Slider(sliderMinVal, sliderMaxVal, sliderInitVal);
      slider1.setShowTickLabels(true);
      slider1.setShowTickMarks(true);
      slider1.setMajorTickUnit(25);
      slider1.setBlockIncrement(10);
      Label label1 = new Label();
      Label label2 = new Label();
      // Re-run convertTo() whenever the slider value changes
      slider1.valueProperty().addListener(new ChangeListener<Number>() {
         public void changed(ObservableValue<? extends Number> observable, Number oldValue, Number newValue) {
            try {
               alpha = newValue.doubleValue();
               Mat dest = new Mat(src.rows(), src.cols(), src.type());
               alpha = (alpha / sliderMaxVal);
               beta = 1.0 - alpha;
               label1.setText("α-value: " + alpha);
               label2.setText("β-value: " + beta);
               src.convertTo(dest, rtype, alpha, beta);
               imageView.setImage(loadImage(dest));
            } catch(Exception e) {
               System.out.println("");
            }
         }
      });
      // Laying out the labels, slider, and image
      VBox vbox = new VBox();
      vbox.setPadding(new Insets(20));
      vbox.setSpacing(10);
      vbox.getChildren().addAll(label1, label2, slider1, imageView);
      Scene scene = new Scene(vbox, 600, 450);
      stage.setTitle("Decreasing an image");
      stage.setScene(scene);
      stage.show();
   }
   public WritableImage loadImage(Mat image) throws IOException {
      // Encoding the Mat to a JPEG byte array and converting it to a JavaFX image
      MatOfByte matOfByte = new MatOfByte();
      Imgcodecs.imencode(".jpg", image, matOfByte);
      byte[] byteArray = matOfByte.toArray();
      InputStream in = new ByteArrayInputStream(byteArray);
      BufferedImage bufImage = ImageIO.read(in);
      WritableImage writableImage = SwingFXUtils.toFXImage(bufImage, null);
      return writableImage;
   }
   public static void main(String args[]) {
      launch(args);
   }
}

                        Blood Cell Detection In Image Using Naive Approach

                        This article was published as a part of the Data Science Blogathon.

We have already covered the basics of object detection problems and the different deep learning architectures that we can use to solve them. Let us first discuss the problem statement that we'll be working on.

                        Table of Contents

                        Understanding the Problem Statement Blood Cell Detection

                        Dataset Link

                        Naive Approach for Solving Object Detection Problem

                        Steps to Implement Naive Approach

                        Load the Dataset

                        Data Exploration

                        Prepare Dataset for Naive Approach

                        Create Train and Validation Set

                        Define Classification Model Architecture

                        Train the Model

                        Make Predictions

                        Conclusion

Understanding the Problem Statement: Blood Cell Detection

Now, here is a sample image from the data set. You can see that there are some red-shaded regions and a blue or purple region.

In the above image, the red-shaded regions are the RBCs or Red Blood Cells, the purple-shaded regions are the WBCs (White Blood Cells), and the small black highlighted portions are the platelets.

                        As you can see in this particular image, we have multiple objects and multiple classes.

                        We are converting this to a single class single object problem for simplicity. That means we are going to consider only WBCs.

                        Hence, just a single class, WBC, and ignore the rest of the classes. Also, we will only keep the images that have a single WBC.

                        So the images which have multiple WBCs will be removed from this data set. Here is how we will select the images from this data set.

So, we have removed image 2 and image 5 (image 5 has no WBC and image 2 has 2 WBCs), while the other images remain part of the data set. Similarly, the test set will also have only one WBC per image.

Now, for each image, we have a bounding box around the WBC: for each file name, the data set gives the coordinates of the bounding box around the WBC.

                        In the next section, we will cover the simplest approach or the naive approach for solving this object detection problem.

Dataset Link

Naive Approach for Solving Object Detection Problem

In this section, we are going to discuss a naive approach for solving the object detection problem. Let's first understand the task: we have to detect WBCs in an image of blood cells, as shown in the image below.

Now, the simplest way would be to divide the image into multiple patches; for this image, we have divided it into four patches.

We then classify each of these patches: the first patch has no WBC, the second patch has a WBC, and similarly the third and fourth do not have any WBC.

We are already familiar with the classification process and how to build classification algorithms, so we can easily classify each of these individual patches as yes or no for whether it contains a WBC.

Now, in the image below, the patch (a green box) which has a WBC can be represented as a bounding box, so in this case we'll take the coordinates of that patch and return them as the bounding box for the WBC.

Now, in order to implement this approach, we'll first need to prepare our training data. One question might be: why do we need to prepare the training data at all, when we already have the images and the bounding boxes along with them?

                        Well, if you remember, we have our training data in the following format where we have our WBC bounding box and the bounding box coordinates.

                        Now, note that we have these bounding box coordinates for the complete image, but we are going to divide this image into four patches. So we’ll need the bounding box coordinates for all of those four patches.  So our next question is how do we do that?

We have to define new training data, with the file name as you can see in the image below. We have the different patches, and for each patch we have Xmin, Xmax, Ymin, and Ymax values denoting its coordinates, and finally our target variable, WBC: is a WBC present in the patch or not?

                        Now in this case it would become a simple classification problem. So for each image, we’ll divide it into four different patches and create the bounding box coordinates for each of these patches.

                        Now the next question is how do we create these bounding box coordinates? So it’s really simple.

Consider that we have an image of size (640 x 480). The origin would be (0, 0), and, following the x and y axes, the opposite corner would have the coordinate value (640, 480).

                        Now, if we find out the midpoint it would be (320,240). Once we have these values, we can easily find out the coordinates for each of these patches. So for the first patch, our Xmin and Ymin would be (0,0) and Xmax, Ymax would be (320,240).

Similarly, we can find the coordinates for the second, third, and fourth patches. Once we have the coordinate values, or bounding box values, for each of these patches, the next task is to identify whether there is a WBC within each patch or not; a small sketch of the coordinate arithmetic is shown below.
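Here is a tiny sketch of that coordinate arithmetic (the 640x480 size comes from the dataset described above, and the patch order matches the patch lists used in the implementation later):

width, height = 640, 480
mid_x, mid_y = width // 2, height // 2   # (320, 240)

# each patch as [xmin, xmax, ymin, ymax]
patch_1 = [0,     mid_x, 0,     mid_y]   # top-left
patch_2 = [mid_x, width, 0,     mid_y]   # top-right
patch_3 = [0,     mid_x, mid_y, height]  # bottom-left
patch_4 = [mid_x, width, mid_y, height]  # bottom-right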

Here we can clearly see that patch 2 has a WBC while the other patches do not, but we cannot manually label every patch of every image in the data set, so we will automate the labeling using the overlap (Intersection over Union) between each patch and the ground-truth bounding box.

                        Now in the next section, we are going to implement the naive approach.

                        Steps to Implement Naive Approach

                        In the last section, we discussed the Naive approach for object detection. Let us now define the steps to implement this approach on the blood cell detection problem.

These are the steps that we will follow:

                        Load the Dataset

Data Exploration

                        Prepare the Dataset for Naive Approach

                        Create Train and Validation set

                        Define classification model Architecture

                        Train the model

                        Make Predictions

So let's go to the next section and implement the above steps.

                        1 Loading Required Libraries and Dataset

So let's first start by loading the required libraries: numpy and pandas, then matplotlib to visualize the data; we also load some libraries to work with and resize images, and finally the torch library.

                        # Importing Required Libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline import os from PIL import Image from skimage.transform import resize import torch from torch import nn

                        Now we will fix a random seed value.

                        # Fixing a random seed values to stop potential randomness seed = 42 rng = np.random.RandomState(seed)

Here we'll mount Google Drive, since the data set is stored on the drive.

                        # mount the drive from google.colab import drive drive.mount('/content/drive')

Now, since the data on the drive is available in zip format, we'll have to unzip it. After unzipping, we can see that all the images are stored in a folder called images, and alongside them we have a CSV file, train.csv.

# unzip the dataset from drive !unzip "/content/drive/My Drive/train_zedkk38.zip"


                        2 Data Exploration

                        So let us read the CSV file and find out what is the information stored in this ‘train.csv’ file.

                        ## Reading target file data = pd.read_csv('train.csv') data.shape

                        So here we are printing the first few rows of the CSV file. We can see that the file has image_names along with the cell_type which will denote RBC or WBC and so on. Finally the bounding box coordinates for this particular object in this particular image.

                        data.head()


So if we check the value counts for RBC, WBC, and Platelets, we'll see that RBCs have the maximum count, followed by WBCs and platelets.

                        data.cell_type.value_counts()


Now, for simplicity, we are only going to consider the WBCs here, so we have selected the data with only WBCs. You can now see that we have image_names with only cell_type WBC against these images, along with the bounding box coordinates.

                        (data.loc[data['cell_type'] =='WBC']).head()


                        Let’s look at a few images from the original data set and the shape of these images. So we can see that the shape of these images is (480,640,3). So this is an RGB image with three channels and this is the first image in the data set.

                        image = plt.imread('images/' + '1.jpg') print(image.shape) plt.imshow(image)


Now the next step is to create patches out of this image, so we are going to divide it into four patches. We know that the image is of size (640, 480); hence the middle point will be (320, 240) and the origin is (0, 0).


So we have the coordinates for all of these patches in the image, and here we make use of those coordinates to create the patches. The slicing format is Ymin:Ymax, Xmin:Xmax, so for the first patch (Ymin, Ymax) is (0, 240) and (Xmin, Xmax) is (0, 320). Similarly, image_2, image_3, and image_4 give the second, third, and fourth patches. This is how we can create patches from the image.

                        # creating 4 patches from the image # format ymin, ymax, xmin, xmax image_1 = image[0:240, 0:320, :] image_2 = image[0:240, 320:640, :] image_3 = image[240:480, 0:320, :] image_4 = image[240:480, 320:640, :]


                        Now we need to assign a target value for these patches. So in order to do that we calculate the intersection over union where we have to find out the intersection area and the union area.


The intersection area is simply the overlapping rectangle; to find its area, we need the Xmin, Xmax, Ymin, and Ymax coordinates of that rectangle.

def iou(box1, box2):
    # box format: [xmin, xmax, ymin, ymax]
    Irect_xmin, Irect_ymin = max(box1[0], box2[0]), max(box1[2], box2[2])
    Irect_xmax, Irect_ymax = min(box1[1], box2[1]), min(box1[3], box2[3])
    if Irect_xmax < Irect_xmin or Irect_ymax < Irect_ymin:
        # the boxes do not overlap at all
        target = inter_area = 0
    else:
        inter_area = np.abs((Irect_xmax - Irect_xmin) * (Irect_ymax - Irect_ymin))
        box1_area = (box1[1] - box1[0]) * (box1[3] - box1[2])
        box2_area = (box2[1] - box2[0]) * (box2[3] - box2[2])
        union_area = box1_area + box2_area - inter_area
        iou = inter_area / union_area
        # binarize: label the patch 1 when the overlap is significant
        # (the 0.1 cut-off is an assumed value; tune it for your data)
        target = 1 if iou >= 0.1 else 0
    return target

We have our original bounding box coordinates from the train CSV file. Passing a patch's coordinates and the ground-truth box to the iou function we defined returns the target value for that patch: 1 when the patch overlaps the WBC sufficiently, otherwise 0. You can try it with different patches and check the target value you get; for the patch and box below, there is no overlap.

                        box1= [320, 640, 0, 240] box2= [93, 296, 1, 173] iou(box1, box2)

                        The output is 0. Now the next step is to prepare the dataset.

                        3 Preparing Dataset for Naive Approach

So far we have considered and explored only a single image from the dataset, so let us now perform these steps for all the images. First of all, here is the complete data that we have.

                        data.head()


Now, we are converting the cell types to numbers: RBC is zero, WBC is one, and Platelets is two.

                        data['cell_type'] = data['cell_type'].replace({'RBC': 0, 'WBC': 1, 'Platelets': 2})

                        Now we have to select the images which have only a single WBC.


                        So first of all we are creating a copy of the dataset and then keeping only WBCs and removing any image which has more than one WBC.

                        ## keep only Single WBCs data_wbc = data.loc[data.cell_type == 1].copy() data_wbc = data_wbc.drop_duplicates(subset=['image_names', 'cell_type'], keep=False)

                        So now we have selected the images. We are going to set the patch coordinates based on our input image sizes. We are reading the images one by one and storing the bounding box coordinates of the WBC for this particular image. We are extracting the patches out of this image using the patch coordinates that we have defined here.

                        And then we are finding out the target value for each of these patches using the IoU function that we have defined. Finally, here we are resizing the patches to the standard size of (224, 224, 3). Here we are creating our final input data and the target data for each of these patches.

# create empty lists
X = []
Y = []

# set patch co-ordinates
patch_1_coordinates = [0, 320, 0, 240]
patch_2_coordinates = [320, 640, 0, 240]
patch_3_coordinates = [0, 320, 240, 480]
patch_4_coordinates = [320, 640, 240, 480]

for idx, row in data_wbc.iterrows():
    # read image
    image = plt.imread('images/' + row.image_names)
    bb_coordinates = [row.xmin, row.xmax, row.ymin, row.ymax]

    # extract patches
    patch_1 = image[patch_1_coordinates[2]:patch_1_coordinates[3], patch_1_coordinates[0]:patch_1_coordinates[1], :]
    patch_2 = image[patch_2_coordinates[2]:patch_2_coordinates[3], patch_2_coordinates[0]:patch_2_coordinates[1], :]
    patch_3 = image[patch_3_coordinates[2]:patch_3_coordinates[3], patch_3_coordinates[0]:patch_3_coordinates[1], :]
    patch_4 = image[patch_4_coordinates[2]:patch_4_coordinates[3], patch_4_coordinates[0]:patch_4_coordinates[1], :]

    # set default values
    target_1 = target_2 = target_3 = target_4 = inter_area = 0

    # figure out if the patch contains the object
    ## for patch_1
    target_1 = iou(patch_1_coordinates, bb_coordinates)
    ## for patch_2
    target_2 = iou(patch_2_coordinates, bb_coordinates)
    ## for patch_3
    target_3 = iou(patch_3_coordinates, bb_coordinates)
    ## for patch_4
    target_4 = iou(patch_4_coordinates, bb_coordinates)

    # resize the patches
    patch_1 = resize(patch_1, (224, 224, 3), preserve_range=True)
    patch_2 = resize(patch_2, (224, 224, 3), preserve_range=True)
    patch_3 = resize(patch_3, (224, 224, 3), preserve_range=True)
    patch_4 = resize(patch_4, (224, 224, 3), preserve_range=True)

    # create final input data
    X.extend([patch_1, patch_2, patch_3, patch_4])

    # create target data
    Y.extend([target_1, target_2, target_3, target_4])

# convert these lists to single numpy array
X = np.array(X)
Y = np.array(Y)

                        Now, let’s print the shape of our original data and the new data that we have just created. So we can see that we originally had 240 images. Now we have divided these images into four parts so we have (960,224,224,3). This is the shape of the images.

                        # 4 patches for every image data_wbc.shape, X.shape, Y.shape

So let's quickly look at one of the patches we have just created. Here is the original image, and below it the last (fourth) patch of that image; we can see that the target assigned to it is one.

                        image = plt.imread('images/' + '1.jpg') plt.imshow(image)


If we check any other patch, say the first patch of this image, the target will be zero. Similarly, you can verify that all the images are converted into patches and that the targets are assigned accordingly.

                        plt.imshow(X[0].astype('uint8')), Y[0]


                        4 Preparing Train and Validation Sets

Now that we have the dataset, we are going to prepare our training and validation sets. Note that here the images have the shape (224, 224, 3).

                        # 4 patches for every image data_wbc.shape, X.shape, Y.shape

                        The output is:-

                        ((240, 6), (960, 224, 224, 3), (960,))

In PyTorch, we need to have the channels first, so we are going to move the axis; the images will then have the shape (3, 224, 224).

                        X = np.moveaxis(X, -1, 1) X.shape

                        The output is:-

                        (960, 3, 224, 224)

                        Now here we are normalizing the image pixel values.

                        X = X / X.max()

                        Using the train test split function we are going to create a train and validation set.

                        from sklearn.model_selection import train_test_split X_train, X_valid, Y_train, Y_valid=train_test_split(X, Y, test_size=0.1, random_state=seed) X_train.shape, X_valid.shape, Y_train.shape, Y_valid.shape

                        The output of the above code is:-

                        ((864, 3, 224, 224), (96, 3, 224, 224), (864,), (96,))

Now, we are going to convert both our training and validation sets into tensors, because they are currently numpy arrays.

X_train = torch.FloatTensor(X_train) Y_train = torch.FloatTensor(Y_train) X_valid = torch.FloatTensor(X_valid) Y_valid = torch.FloatTensor(Y_valid)

5 Model Building

Now we're going to build our model, so here we first install a library, pytorch-model-summary.

                        !pip install pytorch-model-summary


                        This is simply used to print the model summary in PyTorch. Now we are importing the summary function from here.

                        from pytorch_model_summary import summary

Here is the architecture that we have defined for our naive approach. We define a sequential model with a Conv2d layer (3 input channels, 64 filters, kernel size 5, stride 2), followed by a ReLU activation, a max-pooling layer (window size 4, stride 2), and another convolutional layer. We then flatten the output of the second Conv2d layer, and finally add a linear (dense) layer and a sigmoid activation function.

                        ## model architecture model = nn.Sequential( nn.Conv2d(in_channels=3, out_channels=64, kernel_size=5, stride=2), nn.ReLU(), nn.MaxPool2d(kernel_size=4,stride=2), nn.Conv2d(in_channels=64, out_channels=64, kernel_size=5, stride=2), nn.Flatten(), nn.Linear(40000, 1), nn.Sigmoid() )

                        So here if we print the model this will be the model architecture that we have defined.

                        print(model)


Using the summary function, we can have a look at the model summary. This returns the layers, the output shape of each layer, and the number of trainable parameters in each layer. Now our model is ready.

                        print(summary(model, X_train[:1]))


                        6 Train the Model

Let us now train this model. We define our loss and optimizer functions: binary cross-entropy as the loss and the Adam optimizer. We then transfer the model to the GPU and take batches from the input images in order to train the model.

## loss and optimizer
criterion = torch.nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

## GPU device
if torch.cuda.is_available():
    model = model.cuda()
    criterion = criterion.cuda()

                        In the output, we can see that at each epoch the loss is decreasing. So the training is complete for this model.

# batch size of the model
batch_size = 32

# defining the training phase
model.train()

for epoch in range(15):
    # setting initial loss as 0
    train_loss = 0.0

    # to randomly pick the images without replacement in batches
    permutation = torch.randperm(X_train.size()[0])

    # to keep track of training loss
    training_loss = []

    # for loop for training on batches
    for i in range(0, X_train.size()[0], batch_size):
        # taking the indices from randomly generated values
        indices = permutation[i:i+batch_size]

        # getting the images and labels for a batch
        batch_x, batch_y = X_train[indices], Y_train[indices]

        if torch.cuda.is_available():
            batch_x, batch_y = batch_x.cuda().float(), batch_y.cuda().float()

        # clearing all the accumulated gradients
        optimizer.zero_grad()

        # mini batch computation
        outputs = model(batch_x)

        # calculating the loss for a mini batch
        loss = criterion(outputs.squeeze(), batch_y)

        # storing the loss for every mini batch
        training_loss.append(loss.item())

        # calculating the gradients
        loss.backward()

        # updating the parameters
        optimizer.step()

    training_loss = np.average(training_loss)
    print('epoch: \t', epoch, '\t training loss: \t', training_loss)


                        7 Make Predictions

Let us now use this model to make predictions. Here I am taking only the first five inputs from the validation set and transferring them to the GPU (CUDA).

                        output = model(X_valid[:5].to('cuda')).cpu().detach().numpy()

Here is the output for the first five patches we have taken. Values close to 0 mean the model predicts no WBC, and values close to 1 mean it predicts a WBC; we can see that only the third patch is predicted to contain a WBC.

                        output

                        This is the output:

                        array([[0.00641595], [0.01172841], [0.99919134], [0.01065345], [0.00520921]], dtype=float32)

So let's also plot the images. This is the third patch; the model says there is a WBC, and we can indeed see a WBC in this image.

                        plt.imshow(np.transpose(X_valid[2]))


Similarly, we can check another patch from the output. In the image you can see that this input patch has no WBC.

                        plt.imshow(np.transpose(X_valid[1]))


                         

                        This was a very simple method in order to make the predictions or identify the patch or portion of the image that has a WBC.

                        Conclusion

The key takeaway is the practical implementation of blood cell detection on an image dataset using a naive approach; the real challenge is turning the business problem into a working model. While working with image data, you have to handle tasks such as defining bounding boxes, calculating IoU values, and choosing an evaluation metric. The next level (a future task) for this article is the case where one image can have more than one object, and the task is to detect every object in each of these images.

I hope the article helped you understand how to detect blood cells in image data, how to build detection models, and how this technique can be applied in the medical analysis domain.

                        About the Author

Hi, I am Kajal Kumari. I have completed my Master's from IIT (ISM) Dhanbad in Computer Science & Engineering, and I am currently working as a Machine Learning Engineer in Hyderabad. Here is my LinkedIn profile if you want to connect with me.

                        End Notes

                        Thanks for reading!

                        If you want to read my previous blogs, you can read Previous Data Science Blog posts from here.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author's discretion.


Enhancing The Sky In An Image With Photoshop

                        Written by Steve Patterson.

                        In this Photoshop tutorial, we’ll learn how to enhance the sky in a photo, bringing out details in the clouds, improving the contrast and increasing color saturation, and we can do all these things at once using a very simple technique.

                        Often times when taking pictures outdoors, the camera exposes the shot based on the brightness of the subject you’re focusing on, which is fine except that it tends to overexpose the sky, washing away that rich, deep blue along with most of the details in the clouds. The usual way of avoiding this problem is by using a neutral density filter, which is a fancy name for what is essentially a black-to-transparent gradient attached to the lens of your camera, fading from black at the top down to transparent at the bottom. Since the top portion of the filter is darker than the bottom portion, less light is able to enter the top portion of the lens, protecting the sky from overexposure.

                        If you don’t happen to own a neutral density filter or forgot to bring it with you, no worries. Photoshop makes it easy to achieve the same results with very little effort. In fact, even though the general rule is to get things right as much as possible in front of the camera rather than relying on Photoshop to save the day, this is one time when I find it easier to do the work in Photoshop since it gives us a lot more control over the end result.

                        I was out driving around the countryside one morning when I came across a couple of horses proudly standing by the road enjoying a delicious helping of grass. Since I always bring my camera along everywhere I go (I find it makes it easier to take pictures), I couldn’t resist pulling the car over and snapping a few photos. Here’s one of them:

                        The original image.

                        Overall, it’s not a bad photo, and unlike several people I know who run for cover at the very mention of having their picture taken, these two horses didn’t seem to mind at all. It might have been a better photo if the wire fence wasn’t in the way, but I could always get rid of it if I really wanted to using the Clone Stamp tool. The problem I’m concerned about here is the sky. It’s looking quite dull and could definitely use some help. There’s a lot of detail in the clouds that we’re not seeing, and the light blue needs to be deeper, more saturated.

                        Here’s what the sky will look like when we’re done:

                        The final “enhanced sky” result.

                        Let’s see how to go about enhancing the sky.

                        One thing I should point out here before we begin is that if your sky is completely blown-out, meaning that it is overexposed to the point where it has become pure white and there is no useable image information, this technique won’t work. In fact, no technique will work. Photoshop may be extremely powerful, but it can’t create something out of nothing. If that’s the case with your photo, you’ll need to replace the sky completely. We’ll look at how to do that in another tutorial.


                        Step 1:

                        Add A New Blank Layer

                        With my photo newly opened in Photoshop, I can see in my Layers palette that I currently have one layer, the Background layer, which contains my image:

                        The Layers palette in Photoshop showing the original image on the Background layer.

Click the New Layer icon at the bottom of the Layers palette. This adds a new blank layer, which Photoshop automatically names "Layer 1", above the Background layer:

                        A new blank layer appears above the Background layer.

                        Step 2:

                        Reset Your Foreground And Background Colors

                        You may not need to do this step, but just to keep us both on the same page, press the letter D on your keyboard to reset your Foreground and Background colors. This sets your Foreground color to black and your Background color to white, which we can see if we look at the two color swatches near the bottom of the Tools palette. The swatch in the top left shows the current Foreground color, while the swatch in the bottom right shows the current Background color:

                        The Tools palette in Photoshop showing the current Foreground and Background colors.

                        Step 3:

                        Select The Gradient Tool

                        Grab the Gradient Tool from the Tools palette, or press G on your keyboard to quickly select it:

                        Selecting the Gradient Tool from the Tools palette.

                        Step 4:

                        Select The Foreground to Transparent Gradient

                        Select the “Foreground to Transparent” gradient from the Gradient Picker.

                        Step 5:

                        Drag Out A Gradient Inside The Photo

                        Drag a black-to-transparent gradient from the sky to the ground.

                        If you want a larger transition area with your image, simply drag the gradient across a larger area. A common practice is to start the gradient at the very top of the image and then drag down to the horizon line. This leaves the darkest area of sky at the top of the photo and gradually lightens it towards the horizon, a very nice effect.

                        When I release my mouse button, Photoshop draws the gradient, filling most of my sky with black and then quickly fading it away as it crosses the trees (my transition area):

                        The black to transparent gradient is now added to the image.

                        The sky is definitely darker now than it was before. Of course, it also looks quite horrible at the moment, but we’re not done yet! Hold your horses! (Sorry, I had to say that at some point).

                        Step 6:

                        Change The Blend Mode Of “Layer 1” To “Overlay”

                        Change the blend mode of “Layer 1” to “Overlay”.

                        With the gradient set to the Overlay blend mode, the black area from the gradient blends in with the photo in a way that makes a huge improvement to the sky. The contrast has been increased, we’ve brought out all the details in the clouds, and the sky is now a deeper, richer blue:

                        The sky has now been enhanced after changing the blend mode of the gradient layer to “Overlay”.

                        The sky now looks a whole lot better, but by solving one problem, we’ve created another. The gradient I dragged out covered not only the sky but also part of the horses, and now they look like they’ve been roasting in the sun too long. I need to prevent the gradient from affecting the horses, and I can do that easily using a layer mask.

                        Step 7:

                        Add A Layer Mask To “Layer 1”

Click the Layer Mask icon at the bottom of the Layers palette. This adds a layer mask to the gradient layer. Nothing seems to have happened in the document window, but if we look at the Layers palette, we can see that a layer mask thumbnail has been added to "Layer 1":

                        The Layers palette in Photoshop now showing a layer mask thumbnail on “Layer 1”.

                        Step 8:

                        Select The Brush Tool

                        We’re going to hide the effects of the gradient by painting with black on the layer mask over the areas we want to protect. First, we need the Brush Tool, so either grab it from the Tools palette or press B on your keyboard:

                        Selecting the Brush Tool from the Tools palette.

                        Step 9:

                        Set Your Foreground Color To Black

                        Since we want to paint with black, we need to set our Foreground color to black. By default when you have a layer mask selected (which we currently do), Photoshop sets your Foreground color to white and your Background color to black. All we need to do here is swap them, and we can do that by pressing the letter X on the keyboard. If we look at the Foreground and Background color swatches again in the Tools palette, we can see that black is now our Foreground color:

                        The Tools palette showing black as our current Foreground color.

                        Step 10:

                        Paint Over The Areas You Want To Protect

                        With the Brush Tool selected and black as our Foreground color, all we need to do now is paint over the areas that we want to protect from the effects of the gradient. In my case, I want to paint over the horses. You’ll most likely need to change the size of your brush, and you can do that by pressing the left and right bracket keys on the keyboard (located to the right of the letter P). The left bracket key makes the brush smaller, and the right bracket key makes the brush larger. You’ll probably want to use a soft-edge brush, and you can control the hardness of the brush by holding down the Shift key and pressing the left and right bracket keys. Holding Shift and pressing the left bracket key makes the brush softer. Holding Shift and pressing the right bracket key makes the brush harder.

                        Simply paint over any areas where you need to hide the effects of the gradient. Here, I’m painting over the backs of the horses. Since I’m painting on the layer mask, not on the photo itself, the black color of the brush is not visible. Instead, we see the effects of the gradient being hidden from view:

                        Painting with black on the layer mask hides the effects of the gradient.

                        If you make a mistake and accidentally paint over an area you didn’t mean to, just press X on your keyboard to swap your Foreground and Background colors again so white becomes your Foreground color. Paint over the mistake with white to bring back the effects of the gradient, then press X again to set your Foreground color back to black and continue painting.

                        I’m going to finish painting over the areas that I want to protect from the gradient. As I mentioned a moment ago, since we’re painting on the layer mask rather than on the image itself, we can’t see the color we’re painting with, but if we look at the layer mask thumbnail in the Layers palette, we can see all the areas where we’ve painted with black:

                        The layer mask thumbnail in the Layers palette showing the areas where we’ve painted with black.

                        And here, after painting away the effects of the gradient over the horses, bringing back their original color and brightness, is my final “enhanced sky” result:

                        The final “enhanced sky” result.

                        And there we have it! That’s how to enhance the sky in an image with Photoshop! Check out our Photo Retouching section for more Photoshop image editing tutorials!

                        Here’s How You Can Play Youtube Videos In The Background

                        How to play Youtube videos in the background

                        One of the ways to achieve this is by using a web browser that supports Android’s Picture-in-Picture (PiP) mode, which allows us to view the content in a small window while we have another application open. In this article, we will explore three web browsers that can play YouTube in the background. And how to use them.

                        Vivaldi

The first browser on our list is Vivaldi. It is a free browser that plays YouTube audio in the background and works perfectly fine. Besides allowing you to listen to the content while using another application, it also lets you continue listening with the screen off. To use Vivaldi for this purpose, you need to download it for free from the Google Play Store and modify the browser settings to ensure everything works correctly.


                        The steps to follow include tapping on the tab manager in the bottom right corner, tapping on the three-dot button at the top right, entering “Settings,” checking the “Stay in browser” and “Allow background audio playback” checkboxes, and finally tapping on “Restart now”. After the browser restarts, you can enter the web version of YouTube to view the content in the background. This is particularly useful if you use the Google service to listen to music as it will save you battery life.

                        Firefox

The second browser on our list is Mozilla Firefox, a popular web browser for Android known for its excellent user security and privacy protection. Firefox allows you to watch YouTube videos in a small window while using another app, for free. To use Firefox for this purpose, download the app from the Google Play Store and search for the video in the web version of the platform. Once you've found the video, put it on full screen and press the "Home" button to go to the home screen; the video will then keep playing in a small picture-in-picture window.

                        Opera

The third browser on our list is Opera, a fast and secure browser with an integrated VPN. Like Firefox, Opera allows you to play YouTube in the background for free, but with a slight difference: you must open the browser, go to YouTube, and search for the video you want to watch. Once it is playing, you just have to minimize Opera so that the content continues to play in a small window in the lower right corner of the screen. Unfortunately, Opera's YouTube playback does not work if we lock the device, but it is very useful if you want to watch videos or listen to music while you do other things on your mobile.

In conclusion, playing YouTube in the background for free on Android is possible, and there are several ways to achieve it. You can use Vivaldi, Mozilla Firefox, or Opera, all of which support Android's Picture-in-Picture (PiP) mode, allowing you to view the content in a small window while you have another application open. These browsers provide an excellent alternative to the paid YouTube Premium subscription, allowing you to save battery life and continue enjoying your favorite YouTube videos while multitasking on your mobile.
