Crowd Counting Using Deep Learning


This article was published as a part of the Data Science Blogathon

Introduction

Before we start with Crowd Counting, let’s get familiar with counting objects in images. It is one of the fundamental tasks in computer vision. The goal of Object Counting is to count the number of object instances in a single image or video sequence. It has many real-world applications such as surveillance, microbiology, crowdedness estimation, product counting, and traffic flow monitoring.

Types of object counting techniques


1. Detection-based Object Counting – Here, we use a moving window-like detector to identify the target objects in an image and count how many there are. The methods used for detection require well-trained classifiers that can extract low-level features. Although these methods work well for detecting faces, they do not perform well on crowded images of people/objects as most of the target objects are not clearly visible.

An implementation of detection-based object counting can be built with any state-of-the-art object detection method, such as Faster R-CNN or YOLO, where the model detects the target objects in the image. Counting the detected bounding boxes then gives the object count.
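For instance, here is a minimal sketch of this idea using a pre-trained Faster R-CNN from torchvision; the image path and the confidence threshold are illustrative assumptions, not part of any particular paper.

import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# pre-trained Faster R-CNN (torchvision >= 0.13 weights API)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("crowd.jpg").convert("RGB"))  # hypothetical image
with torch.no_grad():
    prediction = model([image])[0]

# COCO class 1 is "person"; count confident person detections
keep = (prediction["labels"] == 1) & (prediction["scores"] > 0.5)
print("Estimated count:", int(keep.sum()))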

2. Regression-based Object Counting – We build an end-to-end regression method using CNNs. This takes the entire image as input and directly generates the crowd/object count. CNNs work really well with regression and classification tasks. The regression-based method learns a direct mapping from the features extracted from cropped image patches to the ground truth count.

An implementation of regression-based object counting has been seen in Kaggle’s NOAA fisheries steller sea lion population count competition where the winner had used VGG-16 as the feature extractor and created a fully connected architecture with the last layer being a regression layer (linear output).
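A hedged Keras sketch of that idea is shown below: VGG-16 as the feature extractor topped with a linear regression output. The hidden-layer size, input resolution, and optimizer are illustrative assumptions, not the competition winner's exact configuration.

from keras.applications import VGG16
from keras.layers import Dense, Flatten
from keras.models import Sequential

base = VGG16(include_top=False, input_shape=(224, 224, 3))  # feature extractor
model = Sequential([
    base,
    Flatten(),
    Dense(256, activation='relu'),
    Dense(1, activation='linear'),   # regression layer: the predicted count
])
model.compile(optimizer='adam', loss='mse')
# model.fit(patches, counts_per_patch, ...) trains it to map patches to counts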

Now since we have discussed some of the basics of object counting, let us start Crowd Counting!

Crowd Counting

Crowd Counting is a technique to count or estimate the number of people in an image. Accurately estimating the number of people/objects in a single image is a challenging yet meaningful task and has been applied in many applications such as urban planning and public safety. Crowd counting is particularly prominent in the various object counting tasks due to its specific significance to social security and development.

Before we proceed further, look at the various crowd counting datasets to see what crowd images actually look like!

Types of approaches for crowd counting –

Problems with the earlier approaches

Early works on crowd counting use detection-based approaches (we have already discussed the basics of the detection-based approach). These approaches usually apply a person/head detector via a moving window on an image. Recently, many extraordinary object detectors such as R-CNN, YOLO, and SSD have been presented, which can deliver dramatic detection accuracy in sparse scenes. However, they present unsatisfactory results when they encounter occlusion and background clutter in extremely dense crowds.

Regression-based approaches map image features directly to a scalar count and therefore cope better with dense scenes, but they ignore spatial information.

Density Estimation based method

This method addresses the problem by learning a linear mapping between features in a local region and its object density map, integrating saliency information during the learning process. Since the ideal linear mapping is hard to obtain, random forest regression can be used to learn a non-linear mapping instead of the linear one.

The latest works use CNN-based approaches to predict the density map because of the CNN's success in classification and recognition. In the rest of the blog, we will discuss some of the modern density-map-based methods, mainly CNN-based, for crowd counting. By the end of the blog, you'll have an intuition of how crowd counting techniques work and how to implement them!

Ground truth generation –

Assuming that there is an object instance (the head of a person in our case) at pixel x_i, it can be represented by a delta function δ(x − x_i). Therefore, an image with N annotations can be represented as

H(x) = Σ_{i=1}^{N} δ(x − x_i)

To generate the density map F, we convolve H(x) with a Gaussian kernel G_σ:

F(x) = H(x) ∗ G_σ(x)

where σ represents the standard deviation of the Gaussian kernel (geometry-adaptive variants use a per-head σ_i).

Visualization of ground truth density maps via the Gaussian convolution operation. In this example, the images contain instances of vehicles.
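As a concrete illustration, here is a minimal sketch of the ground-truth generation above, assuming head annotations arrive as (row, col) pixel coordinates and using a fixed σ (geometry-adaptive variants would vary σ per head):

import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(shape, points, sigma=4.0):
    H = np.zeros(shape, dtype=np.float32)
    for r, c in points:                  # delta function at each annotation
        H[int(r), int(c)] = 1.0
    F = gaussian_filter(H, sigma)        # convolve H(x) with a Gaussian kernel
    return F                             # F sums to the number of annotations

dmap = density_map((480, 640), [(100, 200), (105, 210), (300, 400)])
print(round(float(dmap.sum())))          # -> 3: the object count is preserved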


1. MCNN – Multi-column CNN for density map estimation.

Architecture of MCNN

It contains three parallel CNNs whose filters have local receptive fields of different sizes. For simplification, the same network structure is used for all columns (i.e., conv–pooling–conv–pooling) except for the sizes and numbers of filters. Max pooling is applied over each 2×2 region, and the Rectified Linear Unit (ReLU) is adopted as the activation function because of its good performance for CNNs. To reduce the computational complexity (the number of parameters to be optimized), fewer filters are used for the CNNs with larger filters. We stack the output feature maps of all the CNNs and map them to a density map. To map the feature maps to the density map, we adopt filters of size 1 × 1.

Then the Euclidean distance is used to measure the difference between the estimated density map and the ground truth. The loss is defined as

L(Θ) = (1 / 2N) Σ_{i=1}^{N} ||F(X_i; Θ) − F_i||²

where Θ is the set of learnable parameters in the MCNN, N is the number of training images, X_i is the input image and F_i is the ground truth density map of image X_i. F(X_i; Θ) stands for the estimated density map generated by the MCNN, parameterized by Θ, for sample X_i. L is the loss between the estimated density map and the ground truth density map.
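To make the architecture concrete, below is a simplified PyTorch sketch of the multi-column idea: three columns with different filter sizes, feature maps stacked and fused by a 1×1 convolution, trained against the ground truth with the Euclidean (MSE) loss. Column depths and channel counts here are illustrative, not the paper's exact configuration.

import torch
import torch.nn as nn

def column(k, c1, c2):
    p = k // 2                            # padding keeps spatial size
    return nn.Sequential(
        nn.Conv2d(1, c1, k, padding=p), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(c1, c2, k, padding=p), nn.ReLU(),
        nn.MaxPool2d(2),
    )

class MCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.large  = column(9, 16, 32)   # large receptive field
        self.medium = column(7, 20, 40)
        self.small  = column(5, 24, 48)   # small receptive field
        self.fuse = nn.Conv2d(32 + 40 + 48, 1, kernel_size=1)  # 1x1 mapping

    def forward(self, x):
        feats = torch.cat([self.large(x), self.medium(x), self.small(x)], dim=1)
        return self.fuse(feats)           # estimated density map

model = MCNN()
criterion = nn.MSELoss()                  # Euclidean loss vs. ground truth map
est = model(torch.randn(1, 1, 128, 128))  # density map at 1/4 input resolution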

Refer to the papers and code to learn more about MCNN. The method was proposed together with the ShanghaiTech crowd dataset.

2. CSRNet – Dilated Convolutional Neural Networks for Understanding the Highly Congested Scenes

The proposed CSRNet is composed of two major components: a convolutional neural network (CNN) as the front-end for 2D feature extraction, and a dilated CNN as the back-end, which uses dilated kernels to deliver larger reception fields and to replace pooling operations. CSRNet is easy to train because of its pure convolutional structure.

They chose VGG-16 as the front-end of CSRNet because of its strong transfer-learning ability and its flexible architecture for easily concatenating the back-end for density map generation. Used whole, with its fully connected layers, VGG-16 acts only as an ancillary component without significantly boosting the final accuracy. So in the research paper they first remove the classification part of VGG-16 (the fully connected layers) and build the proposed CSRNet from the convolutional layers of VGG-16.

The output size of this front-end network is 1/8 of the original input size. If we continued to stack more convolutional and pooling layers (the basic components of VGG-16), the output would shrink further, making it hard to generate high-quality density maps. Therefore they deployed dilated convolutional layers as the back-end.

3 × 3 convolution kernels with dilation rates of 1, 2, and 3.

In dilated convolution, a small k × k kernel is enlarged to k + (k − 1)(r − 1) with dilation rate r. This allows flexible aggregation of multi-scale contextual information while keeping the same resolution: dilated convolution keeps the output resolution high and avoids the need for upsampling. Most importantly, the output from dilated convolution contains more detailed information (visible in the portions we zoom in on). Read this to understand more about dilated convolution.
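A quick sketch verifying the k + (k − 1)(r − 1) relation and the resolution-preserving property for a 3×3 kernel; note that setting padding equal to the dilation rate keeps the output size unchanged.

import torch
import torch.nn as nn

k = 3
for r in (1, 2, 3):
    effective = k + (k - 1) * (r - 1)     # effective kernel size
    # padding = r makes a 3x3 dilated conv resolution-preserving
    conv = nn.Conv2d(1, 1, kernel_size=k, dilation=r, padding=r)
    out = conv(torch.randn(1, 1, 64, 64))
    print(f"dilation {r}: effective kernel {effective}x{effective}, "
          f"output {tuple(out.shape[2:])}")   # stays (64, 64)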

Configuration – They kept the first ten layers of VGG-16 with only three pooling layers instead of five, to suppress the detrimental effects of the pooling operation on output accuracy. Since the output density maps of CSRNet are smaller (1/8 of the input size), bilinear interpolation with a factor of 8 is chosen for scaling, so that the output shares the same resolution as the input image.
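Putting the pieces together, here is a hedged PyTorch sketch of that layout: the first ten convolutional layers of VGG-16 (three poolings) as the front-end, a dilated back-end with rate 2 following the paper's configuration B, a 1×1 output layer, and bilinear upsampling by a factor of 8. Treat it as a sketch of the design rather than the authors' exact implementation.

import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class CSRNet(nn.Module):
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights="DEFAULT").features
        # conv1_1 .. conv4_3: ten conv layers, three max-poolings -> 1/8 size
        self.frontend = nn.Sequential(*list(vgg.children())[:23])
        def d(i, o):   # 3x3 conv with dilation rate 2, resolution-preserving
            return [nn.Conv2d(i, o, 3, padding=2, dilation=2), nn.ReLU(inplace=True)]
        self.backend = nn.Sequential(
            *d(512, 512), *d(512, 512), *d(512, 512),
            *d(512, 256), *d(256, 128), *d(128, 64),
        )
        self.output = nn.Conv2d(64, 1, kernel_size=1)   # density map head

    def forward(self, x):
        y = self.output(self.backend(self.frontend(x)))  # 1/8 of input size
        return F.interpolate(y, scale_factor=8, mode="bilinear",
                             align_corners=False)        # back to input size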

Refer to the papers and code to learn more about CSRNet. The method gave state-of-the-art results on various crowd counting datasets.

Comparison of different approaches for crowd counting techniques.

About the author

I am Yash Khandelwal, pursuing MSc in Mathematics and Computing from Birla Institute of Technology, Ranchi. I am a Data Science and Machine Learning enthusiast. 

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.

Related


Creating Continuous Action Bot Using Deep Reinforcement Learning

Now we move on to the crux: the actor-critic method. The original paper explains how it works quite well, but here is a rough idea. The actor takes a decision based on a policy; the critic evaluates the state-action pair and gives it a Q value. If the state-action pair is good according to the critic, it will have a higher Q value, and vice versa.

Now we move to the actor network. We created a similar network, but here are some key points which you must remember while making the actor.

Actor-Network

import torch as T
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

class ActorNetwork(nn.Module):
    # hidden sizes are constructor arguments; defaults follow the paper's 400/300
    def __init__(self, alpha, fc1_dims=400, fc2_dims=300):
        super(ActorNetwork, self).__init__()
        self.input_dims = 2
        self.fc1_dims = fc1_dims
        self.fc2_dims = fc2_dims
        self.n_actions = 2
        self.fc1 = nn.Linear(self.input_dims, self.fc1_dims)
        self.fc2 = nn.Linear(self.fc1_dims, self.fc2_dims)
        self.mu = nn.Linear(self.fc2_dims, self.n_actions)
        self.optimizer = optim.Adam(self.parameters(), lr=alpha)
        self.device = T.device('cuda' if T.cuda.is_available() else 'cpu')
        self.to(self.device)

    def forward(self, state):
        prob = self.fc1(state)
        prob = F.relu(prob)
        prob = self.fc2(prob)
        prob = F.relu(prob)
        # fixing each action between 0 and 1; the env rescales each action
        mu = T.sigmoid(self.mu(prob))
        return mu

Note: We used 2 hidden layers since our action space is small and the environment is not very complex. The authors used 400 and 300 neurons for the two hidden layers.

Just like the gym env, the agent has some conditions too. We initialize our target networks with the same weights as our original (actor-critic) networks. Since we are chasing a moving target, target networks create stability and help the original networks train.

We initialize the agent with all the requirements; as you might have noticed, we have a loss-function choice too. We can use different loss functions and pick whichever works best for us (usually L1 smooth loss); the paper used MSE loss, so we will go ahead and use it as the default.

Here we include the choose-action function; you can create an evaluation function as well, which outputs the action without noise. A remember function (just a cover) stores transitions in our memory.

The update-parameters function is where we do soft updates (target networks) and hard updates (original networks). It takes only one parameter, tau, which is similar in spirit to a learning rate.

It is used to soft update our target networks; in the paper they found the best tau to be 0.001, and it usually is the best across different papers (you can try and play with it).
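For reference, this is the standard soft (Polyak) update rule that the function below implements, with θ the original network weights and θ′ the target network weights:

θ′ ← τ·θ + (1 − τ)·θ′, with τ = 0.001

The hard update used at initialisation is simply the special case τ = 1.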

import numpy as np
# ReplayBuffer and CriticNetwork are defined earlier in the original post

class Agent(object):
    def __init__(self, alpha, beta, tau, env, input_dims=2, gamma=0.99,
                 n_actions=2, max_size=1000000, batch_size=64):
        self.gamma = gamma
        self.tau = tau
        self.memory = ReplayBuffer(max_size)
        self.batch_size = batch_size
        self.actor = ActorNetwork(alpha)
        self.critic = CriticNetwork(beta)
        self.target_actor = ActorNetwork(alpha)
        self.target_critic = CriticNetwork(beta)
        self.scale = 1.0
        self.n_actions = n_actions
        self.update_network_parameters(tau=1)   # hard copy at initialisation

    def noise(self):
        # fresh Gaussian exploration noise on every call
        return np.random.normal(scale=self.scale, size=(self.n_actions,))

    def choose_action(self, observation):
        self.actor.eval()
        observation = T.tensor(observation, dtype=T.float).to(self.actor.device)
        mu = self.actor.forward(observation).to(self.actor.device)
        mu_prime = mu + T.tensor(self.noise(), dtype=T.float).to(self.actor.device)
        self.actor.train()
        return mu_prime.cpu().detach().numpy()

    def remember(self, state, action, reward, new_state, done):
        self.memory.store_transition(state, action, reward, new_state, done)

    def learn(self):
        if self.memory.mem_cntr < self.batch_size:
            return
        state, action, reward, new_state, done = self.memory.sample_buffer(self.batch_size)
        reward = T.tensor(reward, dtype=T.float).to(self.critic.device)
        done = T.tensor(done).to(self.critic.device)
        new_state = T.tensor(new_state, dtype=T.float).to(self.critic.device)
        action = T.tensor(action, dtype=T.float).to(self.critic.device)
        state = T.tensor(state, dtype=T.float).to(self.critic.device)

        self.target_actor.eval()
        self.target_critic.eval()
        self.critic.eval()
        target_actions = self.target_actor.forward(new_state)
        critic_value_ = self.target_critic.forward(new_state, target_actions)
        critic_value = self.critic.forward(state, action)

        # bootstrapped target; the done flag zeroes the future term at episode ends
        target = []
        for j in range(self.batch_size):
            target.append(reward[j] + self.gamma * critic_value_[j] * done[j])
        target = T.tensor(target).to(self.critic.device)
        target = target.view(self.batch_size, 1)

        self.critic.train()
        self.critic.optimizer.zero_grad()
        critic_loss = F.mse_loss(target, critic_value)
        critic_loss.backward()
        self.critic.optimizer.step()

        self.critic.eval()
        self.actor.optimizer.zero_grad()
        mu = self.actor.forward(state)
        self.actor.train()
        actor_loss = -self.critic.forward(state, mu)
        actor_loss = T.mean(actor_loss)
        actor_loss.backward()
        self.actor.optimizer.step()

        self.update_network_parameters()

    def update_network_parameters(self, tau=None):
        if tau is None:
            tau = self.tau
        actor_params = self.actor.named_parameters()
        critic_params = self.critic.named_parameters()
        target_actor_params = self.target_actor.named_parameters()
        target_critic_params = self.target_critic.named_parameters()
        critic_state_dict = dict(critic_params)
        actor_state_dict = dict(actor_params)
        target_critic_dict = dict(target_critic_params)
        target_actor_dict = dict(target_actor_params)
        for name in critic_state_dict:
            critic_state_dict[name] = tau * critic_state_dict[name].clone() + \
                                      (1 - tau) * target_critic_dict[name].clone()
        self.target_critic.load_state_dict(critic_state_dict)
        for name in actor_state_dict:
            actor_state_dict[name] = tau * actor_state_dict[name].clone() + \
                                     (1 - tau) * target_actor_dict[name].clone()
        self.target_actor.load_state_dict(actor_state_dict)

The most crucial part is the learn function. First we fill the buffer with samples until it reaches the batch size, then we start sampling batches to update our networks: we calculate the critic and actor losses, then soft update all the target parameters.
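For reference, these are the standard DDPG quantities the learn function computes (in the code above, the done flag from the buffer zeroes the bootstrap term at episode ends):

y_j = r_j + γ · Q′(s_{j+1}, μ′(s_{j+1}))

critic loss: L_critic = (1/N) Σ_j (y_j − Q(s_j, a_j))²

actor loss: L_actor = −(1/N) Σ_j Q(s_j, μ(s_j))

where Q and μ are the critic and actor, and Q′ and μ′ their target networks.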

env = OurCustomEnv(sales_function, obs_range, act_range)
agent = Agent(alpha=0.000025, beta=0.00025, tau=0.001, env=env,
              batch_size=64, n_actions=2)

score_history = []
for i in range(10000):
    obs = env.reset()
    done = False
    score = 0
    while not done:
        act = agent.choose_action(obs)
        new_state, reward, done, info = env.step(act)
        agent.remember(obs, act, reward, new_state, int(done))
        agent.learn()
        score += reward
        obs = new_state
    score_history.append(score)

Results

In just a few minutes, the training results are ready. The agent learns to exhaust almost the full budget, and we have a graph of the scores during training –

These results can be achieved even faster by tuning the hyperparameters and the reward function.

Also thanks to Phil and Andrej Karpathy for their marvellous work.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.

TinyML Gets a Deep Learning Twist! Self-Attention for TinyML

TinyML has become quite popular in the tech community owing to its growth opportunities

The world is getting a whole lot smarter with the constant evolution of the technology landscape. Emerging technology trends possess the potential to transform avant-garde business processes. With the integration of artificial intelligence and machine learning, the tech industry is about to unlock multi-billion dollar market prospects. One of these prospects is TinyML, the art and science of producing machine learning models robust enough to function at the edge, which is why it is witnessing growing popularity across business domains. TinyML has made it possible for machine learning models to be deployed at the edge. So what actually is TinyML? Experts believe it can be broadly defined as a fast-growing field of ML technologies and applications, including hardware, algorithms, and software, capable of performing on-device sensor data analysis. Recently, in a new research paper, scientists from the University of Waterloo and DarwinAI introduced a new deep learning architecture, the double-condensing attention condenser, that brings highly efficient self-attention to TinyML. The architecture builds on the team's earlier research and is promising for edge AI applications.

How is self-attention in TinyML affecting modern AI models?

Typically, deep neural network models are designed to process a single piece of data at a time. But in several applications, the network must process a sequence as a whole and examine the relations between its elements. Self-attention aims to address exactly this: it is one of the most efficient and successful mechanisms for modelling relations between sequential data. Self-attention is generally used in transformers, the deep learning architecture behind popular LLMs like GPT-3 and OPT-175B, but it is estimated to be very useful for TinyML applications as well. Experts say there is big potential at the intersection of machine learning and TinyML. Recent innovations in TinyML are enabling machines to run increasingly complex deep learning models directly on microcontrollers, making it possible to run these models on existing microcontroller hardware. This branch of machine learning represents a collaborative effort between the embedded ultra-low-power systems and ML communities, which have traditionally operated independently.
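To make the mechanism concrete, here is a minimal NumPy sketch of (scaled dot-product) self-attention; the shapes and random weights are purely illustrative:

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise relations
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V                               # each output mixes all inputs

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                          # 5 timesteps, 8 features
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)           # (5, 8)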

In the paper, the scientists also discuss TinySpeech, a neural network that uses attention condensers for speech recognition. Attention condensers have demonstrated great success across manufacturing, automotive, and healthcare applications where traditional machines faced various limits.

Newbie’s Deep Learning Project To Recognize Handwritten Digit

This article was published as a part of the Data Science Blogathon.

Developers are putting all their strength into making machines more intelligent and smarter than humans. Deep learning is one such technique that helps developers enhance machines. To memorize how a task is performed, what do humans do? They keep practicing and repeating the task again and again until they become proficient in it. After some time, our brain's neurons can trigger automatically and perform the task quickly and accurately. Deep learning follows a similar approach to solving a problem. Deep learning algorithms use various types of neural network architectures to solve different types of problems.

Table of Contents

Introduction

Handwritten Digit Recognition

Prerequisites

Dataset

Steps to build Recognition System

5.1. Import libraries

5.2. Data Preprocessing

5.3. Model creation

5.4. Model training

5.5. Model Evaluation

5.6. GUI creation

Output

Conclusion

Introduction

In this article, we are going to use the MNIST dataset for the implementation of a handwritten digit recognition app. To implement this we will use a special type of deep neural network called Convolutional Neural Networks. In the end, we will also build a Graphical user interface(GUI) where you can directly draw the digit and recognize it straight away.

What is Handwritten Digit Recognition?

Handwritten digit recognition is the process of giving machines the ability to recognize human handwritten digits. It is not an easy task for a machine because handwritten digits are not perfect, vary from person to person, and can be written in many different styles.

Prerequisites

Basic knowledge of deep learning with the Keras library, the Tkinter library for GUI building, and Python programming is required to run this amazing project.

Commands to Install the necessary libraries for this project:

pip install numpy
pip install tensorflow
pip install keras
pip install pillow

The MNIST dataset

Among the thousands of datasets available, MNIST is the most popular dataset for machine learning and deep learning enthusiasts. The MNIST dataset contains 60,000 training images of handwritten digits from zero to nine and 10,000 images for testing, so there are 10 different classes. The handwritten digit images are represented as a 28×28 matrix where every cell holds a grayscale pixel value.

Steps to build Handwritten Digit Recognition System

1. Import libraries and dataset

At the beginning of the project, we import all the modules needed for training our model. The Keras library already contains many datasets, MNIST among them, so we can easily import the dataset and start working with it. We call the mnist.load_data() function to get the training data with its labels as well as the testing data with its labels.

import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K

# split the data into training and testing sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

2. Data Preprocessing

The model cannot take the image data directly, so we need to perform some basic operations to process the data and make it ready for our neural network. The dimension of the training data is (60000, 28, 28). The CNN model needs one more dimension, so we reshape the matrix to shape (60000, 28, 28, 1).

num_classes = 10  # defined here so to_categorical can use it

x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)
x_test = x_test.reshape(x_test.shape[0], 28, 28, 1)
input_shape = (28, 28, 1)

# conversion of class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255

3. Create the model

It's time to create the CNN model for this Python-based data science project. Convolutional layers and pooling layers are the two wheels of a CNN model. The reason behind the success of CNNs for image classification problems is their suitability for grid-structured data. We will use the Adadelta optimizer for model compilation.

batch_size = 128
num_classes = 10
epochs = 10

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

# compile with the Adadelta optimizer, as described above
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])

4. Train the model

To start training the model, we can simply call the model.fit() function of Keras. It takes the training data, validation data, epochs, and batch size as parameters.

Training the model takes some time. After successful training, we can save the weights and model definition in the 'mnist.h5' file.

hist = model.fit(x_train, y_train,
                 batch_size=batch_size,
                 epochs=epochs,
                 verbose=1,
                 validation_data=(x_test, y_test))
print("The model has successfully trained")

model.save('mnist.h5')
print("Saving the bot as mnist.h5")

Output

The model has successfully trained

Saving the bot as mnist.h5

5. Evaluate the model

To evaluate how accurately our model works, we have 10,000 test images in our dataset. The testing data was not included in the training, so it is new data for our model. Around 99% accuracy is achieved on this well-balanced MNIST dataset.

score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

Output

6. Create GUI to predict digits

Below is the full code for our gui_digit_recog.py file:

from keras.models import load_model
from tkinter import *
import tkinter as tk
import win32gui
from PIL import ImageGrab, Image
import numpy as np

model = load_model('mnist.h5')

def predict_digit(img):
    # resize image to 28x28 pixels
    img = img.resize((28, 28))
    # convert rgb to grayscale
    img = img.convert('L')
    img = np.array(img)
    # reshaping for the model input and normalization
    img = img.reshape(1, 28, 28, 1)
    img = img / 255.0
    # predicting the class
    res = model.predict([img])[0]
    return np.argmax(res), max(res)

class App(tk.Tk):
    def __init__(self):
        tk.Tk.__init__(self)
        self.x = self.y = 0
        # Creating elements (white canvas so the black strokes are visible)
        self.canvas = tk.Canvas(self, width=200, height=200, bg="white", cursor="cross")
        self.label = tk.Label(self, text="Analyzing..", font=("Helvetica", 48))
        self.classify_btn = tk.Button(self, text="Recognise", command=self.classify_handwriting)
        self.button_clear = tk.Button(self, text="Clear", command=self.clear_all)
        # Grid structure
        self.canvas.grid(row=0, column=0, pady=2, sticky=W)
        self.label.grid(row=0, column=1, pady=2, padx=2)
        self.classify_btn.grid(row=1, column=1, pady=2, padx=2)
        self.button_clear.grid(row=1, column=0, pady=2)
        # self.canvas.bind("<Motion>", self.start_pos)
        self.canvas.bind("<B1-Motion>", self.draw_lines)

    def clear_all(self):
        self.canvas.delete("all")

    def classify_handwriting(self):
        Hd = self.canvas.winfo_id()          # to fetch the handle of the canvas
        rect = win32gui.GetWindowRect(Hd)    # to fetch the edges of the canvas
        im = ImageGrab.grab(rect)
        digit, acc = predict_digit(im)
        self.label.configure(text=str(digit) + ', ' + str(int(acc * 100)) + '%')

    def draw_lines(self, event):
        self.x = event.x
        self.y = event.y
        r = 8
        self.canvas.create_oval(self.x - r, self.y - r, self.x + r, self.y + r, fill='black')

app = App()
mainloop()

Output

Conclusion

This project is beginner-friendly and can be used by data science newbies. We have created and deployed a successful deep learning project for digit recognition. We built a GUI for easy use, where we draw a digit on the canvas, classify the digit, and show the results.

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.

Related

Top 5 Skills Needed To Be A Deep Learning Engineer!

This article was published as a part of the Data Science Blogathon

“What’s behind driverless cars? Artificial Intelligence or more specifically Deep Learning” – Dave Waters

Introduction

In India, around 700 thousand students graduate per year, and students who want to pursue their dreams on the development side of Computer Science need to choose a domain; Deep Learning is one of them. As technologies grow larger and larger, students' interest in Machine Learning/Deep Learning is also growing. One big issue everybody faces is that they all know there is good scope in the field of AI, but what they lack is knowing where to start and in which direction to focus their energy; they try one technology and, after a few days, jump to another, leaving the last one unpracticed.

Most Deep Learning practitioners are afraid that big tech giants are working on algorithms that will automate the whole deep learning process, leaving no such post as Deep Learning Engineer. So let me assure you of one thing: that is not going to happen, not now for sure. Automation is advancing, but there is still a long path to go.

If you know that you are good with numbers and want to work on some exciting technologies, deep learning is surely for you. In this tutorial I will lay out the pathway you can follow to be a successful Applied Deep Learning Engineer.

What is Deep Learning?

Deep Learning is the subset of Machine Learning that primarily deals with Neural Networks (NN). It provides solutions for almost all kinds of data: images, text, audio, etc. Neural networks try to mimic the brain in order to produce results in the same fashion as the human mind does. You already know this theory, so let me jump straight to the things you are sceptical about. You might doubt whether you need a master's degree, or need to be from Harvard or MIT, to be a good fit for deep learning; let me answer that for you. There are two roles in deep learning: one called Deep Learning Researcher and another called Applied Deep Learning Engineer. The first requires more statistics- and mathematics-based knowledge that helps you understand deep learning concepts and would eventually lead you to discover new algorithms and technologies, while the second takes whatever is already implemented by deep learning researchers and applies it somewhere it can reduce human effort.

Source: Levity.ai

Now you know that if you are not from some Tier-1 college you can still be a Deep Learning Engineer, so let’s discuss what all technologies you will need to be a successful Deep Learning Engineer.

Summary of Skills Needed:

While you work on Machine Learning/Deep Learning, it's not as though you will just have to know some algorithms and apply them to whatever data you get. You will start from the requirement phase, i.e., first identify the problem for which you will find the solution. One very important thing: not all problems require deep learning solutions. First try to analyze the problem and see if it can be solved using traditional algorithms; if yes, you will save a lot of energy and resources, otherwise you are free to choose a deep learning solution.

1. A programming language suitable for AI/ML/DL

I know you might be wondering why I am telling you this while you might already know it, but choosing the programming language is the first task that sets you up on the path of Deep Learning. Common languages preferred for DL are Python and R (personally I use python).

Both of these languages have their specialties; it's not as though when you are using one you can completely ignore the other, and having knowledge of both is a cherry on the cake. When you start learning either of these programming languages, focus fully on one language; once you have mastered it, the other will be very easy for you. Try to master as many libraries as you can; once done, it will be very easy for you to work on real-world projects.

Problems Faced:

When anyone starts learning a programming language, the main issue he or she faces is knowing which resources will ease the learning process. I faced the same issue. I mastered Python by completing primary lectures online and then completing several video lectures from Udemy and Coursera. One important suggestion I would surely make: don't watch video lectures just to earn certificates and fancy up your LinkedIn profile; watch them to gain knowledge that will eventually help you become a better developer.

2. Computer Science Fundamentals and Data Structures

Knowing Machine Learning/Deep Learning algorithms is not enough, you will also require knowledge of  Software Engineering skills like Data Structures, Software Development Life Cycle, Github, Algorithms (Sorting, Searching, and Optimisation).

When you work on any real-world project, the client would not need any Machine Learning model, what he would require is a solution in form of any Service or Application for that you need to have a deeper understanding of these concepts.

Problems Faced:

Most data science enthusiasts think that if they are going to work in the field of AI/ML/DL, they will just have to learn a bunch of algorithms that they will get in some package anyway, and that other concepts, especially data structures, are not that important. So let me clear one thing up: when you work on any live project you will have to optimize your code to make it memory- and time-efficient, and whenever we talk about efficiency, there come data structures. Also, when you work on live projects you need to work to deadlines, so to deliver projects to clients on time you need a proper understanding of the SDLC. I studied these concepts only in college, and when I was working on POCs (Proofs of Concept) I did not have a proper understanding of how they apply in the real world; as I became part of a few projects, these concepts became clear to me. So if you know these concepts and are afraid that you don't know how to apply them in the real world, don't worry: you will learn that only when you become part of a project.

3. Mathematics for Machine Learning

If you are a software Engineer you can easily code any solution but when it comes to Machine Learning you need to have an understanding of Mathematical and Statistical concepts that will help you analyze any algorithm and tune it according to your need.

For training and inference you will also need knowledge of concepts like Gradient Descent, Distance Metrics, Mean, Median and Mode, etc.

Problems Faced:

According to me, this is the most important thing someone must learn before entering the field of deep learning. Most of us think that since most of the algorithms are already implemented and we are just going to apply them to different fields, why learn the mathematical concepts? In my experience, every time you work on a deep learning algorithm you will have to tune it according to your use case, and for that you need these concepts. I have worked on several projects, and honestly, 95% of the time these concepts came in handy for tuning the algorithm; the other 5% I used the algorithms as-is.

4. Front End/UI Technology & Deployment Services

When you have your machine learning solution ready, you need to present it to others in the form of charts or visualisations, because the person you are explaining it to might not know these algorithms; what they want is a working solution for their problem. What can enhance this development process is knowledge of a UI technology like Django or Flask and, when required, JavaScript; your machine learning code would be the backend while you create a frontend for it.

Source: yashints.dev

Once the whole solution is ready you need to deploy that solution somewhere for that you should learn technologies like Apache, Wamp, etc.

This technology is a must for working on deep learning projects. If you are part of a very big organization there are dedicated frontend and backend developers, so there is no worry if you work only on the backend part; but if you are part of a small organization or a small team, you will most probably have to handle both frontend and backend development.

5. Knowledge of Cloud Computing platforms

As we move ahead in technology, the amount of data is increasing immensely; you cannot manage that data on your local server, so one should move to cloud technologies. These platforms provide some very good services, from data preparation to model development.

Some of these computing platforms have some Deep Learning based solutions that are State of The Art. The most preferred platforms are AWS and Azure and you can also try Google Cloud.

These are the technologies that one should learn while working as a Deep Learning Engineer, of course, you can learn other technologies also but these are the must ones.

Problems Faced:

Working on cloud computing might be hard, as you will have to learn this tech while working on the other four, but if you are interested in learning some extra stuff, it is surely worthwhile for a Deep Learning Engineer.


The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.

Related

Human Activity Prediction Using Machine Learning

1. Objective:

The objective of this study is to analyse a dataset of smartphone sensor data of human activities from about 30 participants, draw insights, and predict the activity using machine learning. We also try to detect whether we can identify the participants from their walking styles and draw additional insights. The potential of such a study extends to scenarios such as activity detection, monitoring persons for signs of fatigue, and distinguishing one individual from another, with possible deployment in highly sensitive and secure workplaces.

Data Set Source: Jorge-L. Reyes-Ortiz, Luca Oneto, Albert Samà, Xavier Parra, Davide Anguita. Transition-Aware Human Activity Recognition Using Smartphones. Neurocomputing. Springer, 2016.

Data Repository: UCI

2. Experiment and Data Capture (from the cited journal)

Data capture overview

The experiments were carried out with a group of 30 volunteers within an age bracket of 19-48 years. They performed a protocol of activities composed of six basic activities: three static postures (standing, sitting, lying) and three dynamic activities (walking, walking downstairs and walking upstairs). The experiment also included the postural transitions that occurred between the static postures: stand-to-sit, sit-to-stand, sit-to-lie, lie-to-sit, stand-to-lie, and lie-to-stand. All the participants wore a smartphone (Samsung Galaxy S II) on the waist during the experiment execution. 3-axial linear acceleration and 3-axial angular velocity were captured at a constant rate of 50 Hz using the embedded accelerometer and gyroscope of the device. The experiments were video-recorded to label the data manually.

Signal processing

The sensor signals (accelerometer and gyroscope) were pre-processed by applying noise filters and then sampled in fixed-width sliding windows of 2.56 sec with 50% overlap (128 readings/window). The sensor acceleration signal, which has gravitational and body motion components, was separated using a Butterworth low-pass filter into body acceleration and gravity. The gravitational force is assumed to have only low frequency components, therefore a filter with a 0.3 Hz cut-off frequency was used.
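A hedged SciPy sketch of this pre-processing is shown below; the filter order and the array layout are assumptions based on the description above:

import numpy as np
from scipy.signal import butter, filtfilt

fs = 50.0                                       # sampling rate, Hz
b, a = butter(3, 0.3 / (fs / 2), btype="low")   # 0.3 Hz low-pass (gravity)

def split_gravity(acc):                         # acc: (n_samples, 3) array
    gravity = filtfilt(b, a, acc, axis=0)       # low-frequency component
    body = acc - gravity                        # body motion component
    return body, gravity

def windows(signal, size=128, step=64):         # 2.56 s windows, 50% overlap
    return np.stack([signal[i:i + size]
                     for i in range(0, len(signal) - size + 1, step)])

body, gravity = split_gravity(np.random.randn(1000, 3))  # dummy signal
print(windows(body).shape)                      # (14, 128, 3)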

3.  Feature selection

The features selected for this database come from the accelerometer and gyroscope 3-axial raw signals tAcc-XYZ and tGyro-XYZ. These time domain signals (prefix 't' to denote time) were captured at a constant rate of 50 Hz. Then they were filtered using a median filter and a 3rd order low pass Butterworth filter with a corner frequency of 20 Hz to remove noise. Similarly, the acceleration signal was then separated into body and gravity acceleration signals (tBodyAcc-XYZ and tGravityAcc-XYZ) using another low pass Butterworth filter with a corner frequency of 0.3 Hz. Subsequently, the body linear acceleration and angular velocity were derived in time to obtain Jerk signals (tBodyAccJerk-XYZ and tBodyGyroJerk-XYZ). Also the magnitude of these three-dimensional signals was calculated using the Euclidean norm (tBodyAccMag, tGravityAccMag, tBodyAccJerkMag, tBodyGyroMag, tBodyGyroJerkMag). Finally a Fast Fourier Transform (FFT) was applied to some of these signals, producing fBodyAcc-XYZ, fBodyAccJerk-XYZ, fBodyGyro-XYZ, fBodyAccJerkMag, fBodyGyroMag, fBodyGyroJerkMag (the 'f' indicates frequency domain signals). These signals were used to estimate variables of the feature vector for each pattern; '-XYZ' denotes 3-axial signals in the X, Y and Z directions.

Primary attributes, time domain

1. tBodyAcc-XYZ
2. tGravityAcc-XYZ
3. tBodyAccJerk-XYZ
4. tBodyGyro-XYZ
5. tBodyGyroJerk-XYZ
6. tBodyAccMag
7. tGravityAccMag
8. tBodyAccJerkMag
9. tBodyGyroMag
10. tBodyGyroJerkMag

Primary attributes, frequency domain

1. fBodyAcc-XYZ
2. fBodyAccJerk-XYZ
3. fBodyGyro-XYZ
4. fBodyAccMag
5. fBodyAccJerkMag
6. fBodyGyroMag
7. fBodyGyroJerkMag

Statistically derived features

The set of variables estimated from these signals are:

1. mean(): Mean value
2. std(): Standard deviation
3. mad(): Median absolute deviation
4. max(): Largest value in array
5. min(): Smallest value in array
6. sma(): Signal magnitude area
7. energy(): Energy measure. Sum of the squares divided by the number of values.
8. iqr(): Interquartile range
9. entropy(): Signal entropy
10. arCoeff(): Autoregression coefficients with Burg order equal to 4
11. correlation(): correlation coefficient between two signals
12. maxInds(): index of the frequency component with largest magnitude
13. meanFreq(): Weighted average of the frequency components to obtain a mean frequency
14. skewness(): skewness of the frequency domain signal
15. kurtosis(): kurtosis of the frequency domain signal
16. bandsEnergy(): Energy of a frequency interval within the 64 bins of the FFT of each window.
17. angle(): Angle between two vectors.

Additional vectors obtained

Additionally, by averaging the signals in a signal window sample, the following features were obtained:

1. gravityMean
2. tBodyAccMean
3. tBodyAccJerkMean
4. tBodyGyroMean
5. tBodyGyroJerkMean

Activities captured in the data set used

1. WALKING
2. WALKING_UPSTAIRS
3. WALKING_DOWNSTAIRS
4. SITTING
5. STANDING
6. LAYING

4.  Exploratory Data Analysis

We explore the combined train and test data set, examine how the data is distributed, and check how separable the participant and activity labels are.

Distribution of Activities

We check the data volumes for each activity to see how balanced the data set is with respect to the different activity labels. We find from the figure below that the volumes are more or less well balanced.

Separation of Activities using the First Two Principal Components

Next we evaluate visually whether the classes are separable. Since we have multivariate data, to plot in two dimensions we take two different approaches. In the first, we reduce the dimensionality of the data using the first two principal components. We find from the plot below that PCA, though an established dimensionality reduction technique, still results in some class overlap if we use just the first two PCs.

Separation of Activities using t-Distributed Stochastic Neighbour Embedding

We have seen from Fig. 2 that the PCA method does not give us a very good visual separation in two dimensions, so we try a non-linear technique next. t-Distributed Stochastic Neighbour Embedding (t-SNE) is a non-linear dimensionality reduction technique particularly well suited to visualizing high-dimensional datasets; it is extensively applied in image processing, NLP, genomic data, and speech processing. We find from Fig. 3 a much better separation in this case.


Fig. 3 Separation of Activity labels using t-SNE
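For reference, a minimal scikit-learn sketch of the two projections; X (the 561-dimensional feature matrix) and integer-coded activity labels y are assumed to be loaded already from the UCI files:

import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X2_pca = PCA(n_components=2).fit_transform(X)
X2_tsne = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

fig, axes = plt.subplots(1, 2, figsize=(12, 5))
for ax, Z, title in zip(axes, (X2_pca, X2_tsne), ("PCA", "t-SNE")):
    ax.scatter(Z[:, 0], Z[:, 1], c=y, s=3, cmap="tab10")  # colour by activity
    ax.set_title(title)
plt.show()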

Separation of Participants using t-SNE

We have seen from Fig. 3 that the t-SNE method gives us a good visual separation. Let us now check whether the participants themselves are separable in terms of their different activities. If so, this could be exploited to build prediction models that identify a person from his or her walking style. We find from Fig. 4 that such a possibility exists.


Fig. 4  Separation of Participants using t-SNE

5.  Modelling

Here we try to fit an ensemble of models to predict the human activity, including techniques like Bagging, Random Forest, and Extra Trees.

Bagging

We first check the OOB error against the number of trees, plotting the OOB error for tree counts ranging from 50 to 300 in steps of 25. We find from the plot in Fig. 5 that about 200 trees are optimal for the current data set with bagging.


Fig. 5  OOB Error Vs number of trees plot
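A hedged scikit-learn sketch of such a sweep is shown below (a random forest is used here; sklearn's BaggingClassifier supports oob_score the same way). X and y are the feature matrix and labels from before:

import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier

tree_counts = list(range(50, 301, 25))
oob_errors = []
for n in tree_counts:
    rf = RandomForestClassifier(n_estimators=n, oob_score=True,
                                n_jobs=-1, random_state=0).fit(X, y)
    oob_errors.append(1 - rf.oob_score_)   # OOB error = 1 - OOB accuracy

plt.plot(tree_counts, oob_errors, marker="o")
plt.xlabel("Number of trees")
plt.ylabel("OOB error")
plt.show()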

Performance at the optimal number of trees

We observe from Fig. 5 that about 200 trees is an optimal tree count. We then run a bagging model with n=200 trees and plot the multiclass confusion matrix, depicted in Fig. 6. We find that the activity 'standing' is perfectly classified; however, misclassification is observed between 'sitting' and 'laying', which is expected as the two postures are close to each other. Comparing the three forms of walking, we find a small degree of misclassification between them; again this is expected. We also plot the ROC curves, seen in Fig. 7, which show the same trend of accuracy.


Fig. 6  Bagging (n=200) Optimal number of trees Confusion Matrix on Test Data

Fig. 7  ROC curves for each activity from BAGGING

Random Forest

On similar lines as bagging, we first check the OOB error against the number of trees for counts ranging from 50 to 300 in steps of 25. We find from the plot in Fig. 8 that about 200 trees are optimal for the current data set with random forest.


Fig. 8  OOB Error vs N Trees for random forest

Performance at optimal number of  trees

We observe from Fig. 8 that about 200 trees is an optimal tree count. We then run an RF model with n=200 trees and plot the multiclass confusion matrix, depicted in Fig. 9. We find that the activity 'standing' is perfectly classified; however, misclassification is observed between 'sitting' and 'laying', which is expected as the two postures are close to each other. Comparing the three forms of walking, we again find a small degree of misclassification, as expected. We also plot the ROC curves, seen in Fig. 10, which show the same trend of accuracy. We also observe a marginal improvement in accuracy with RF compared to bagging.


Fig. 9  Confusion Matrix for Random Forest : n = 200 trees

Fig. 10  ROC Curves from Random Forest with n = 200 estimators

Extra trees classifier

We further try a variant of random forest, the Extra Trees classifier, in which the samples are drawn randomly without replacement and the splits are also chosen randomly. On similar lines as RF, we check the OOB error against the number of trees for counts ranging from 50 to 300 in steps of 25. We find from the plot in Fig. 11 that about 225 trees are optimal for the current data set with the Extra Trees classifier.


Fig. 11  OOB Error vs N Trees for Extra Trees classifier

Performance at optimal number of  trees

We observe from Fig. 11 that about 225 trees is an optimal tree count. We then run an Extra Trees classifier model with n=225 trees and plot the multiclass confusion matrix, depicted in Fig. 12. We find that the activity 'standing' is perfectly classified; however, misclassification is observed between 'sitting' and 'laying', which is expected as the two postures are close to each other. Comparing the three forms of walking, we again find a small degree of misclassification, as expected. We also plot the ROC curves and find the same trend of accuracy. We also observe a marginal improvement in accuracy with ETC compared to Random Forest.


Fig. 12  Confusion Matrix : Extra Trees Classifier : n =225 trees

Accuracy Measure Comparison

Bagging with 200 estimators: accuracy on test set: 90.09%

Random Forest with 200 estimators: accuracy on test set: 92.87%

Extra Trees with 225 estimators: accuracy on test set: 93.65%

6.  Additional insights

Feature Importance and Sensor Importance

From our random forest classification model we can get the importance of every feature. We present the top ten features with the most classification power for the data at hand. We also compare the two sensors, accelerometer and gyroscope, in terms of overall feature importance: we sum up the importances of every individual feature generated or computed from each of the two sensors and plot them for a visual comparison. Fig. 13 shows the top ten features with the most classification power, and Fig. 14 shows the comparative classification power of the two sensor types used in the study, i.e. gyroscope and accelerometer.


Fig. 13 Top ten features by classification power

Fig. 14 Comparative Sensor Importance for classification
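A minimal sketch of extracting these quantities from the fitted random forest; rf and the UCI feature-name list are assumed from earlier, and the sensor grouping rule is a simple heuristic on the names:

import pandas as pd

imp = pd.Series(rf.feature_importances_, index=feature_names)
print(imp.nlargest(10))                          # top ten features

# aggregate importance per sensor from the feature names, e.g. "tBodyGyro-mean()-X"
by_sensor = imp.groupby(
    lambda name: "Gyroscope" if "Gyro" in name else "Accelerometer").sum()
print(by_sensor)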

Participants' Walk Analysis 1: Walk Time

In this study we capture the time involved in the activity 'Walking' for the different participants and study the distribution to check for anomalies, if any. We find from Fig. 15 that the data is distributed over a range. We assume that the test subjects had a fixed walk distance and that the variations in walk time are natural. We check this (see Fig. 16) by testing whether the data follows a normal distribution, as in most real-world scenarios. We find that the data is close to a normal distribution barring 2 or 3 outlier points.


Fig. 15 Walk time for 30 participants

Fig. 16 Normality check for the Walk data

Participants' Walk Analysis 2: Walk Style Analysis for Each Participant

In this study we try to capture the walk pattern, or walk signature, of each participant. This is an important feature to study, as walk patterns can be used to distinguish, and with more data points even uniquely identify, individuals, or to capture changes in physical behaviour such as signs of fatigue versus healthy walking. In order to plot the multivariate data in two dimensions, we reduce the data to two dimensions using PCA and t-SNE (t-Distributed Stochastic Neighbour Embedding), as shown in Fig. 17. We observe from the plots that each participant has a different walk pattern. We also note that most of the patterns show two clusters; we infer that each cluster may denote a separate 'walk experiment'.


Fig. 17  Walk patterns for individual participants

7.  Conclusions and Future Scope

In this case study we showcased how machine learning can help analyse data from sensors already present in most smartphones to gain rich insights about the people studied. We can identify activities, classify or group participants by activity, and obtain additional insights into activity durations and the movement patterns of the individuals involved. One can exploit such insights to develop real-time human asset monitoring in highly secured installations, to track the elderly or people with movement disabilities or illness for emergencies based on movement patterns, or to determine whether a person is under fatigue, and so on. The application domains are as broad as healthcare, security services, and fitness monitoring.

REFERENCES

1. Jorge-Luis Reyes-Ortiz, Luca Oneto, Alessandro Ghio, Albert Samà, Davide Anguita and Xavier Parra. Human Activity Recognition on Smartphones With Awareness of Basic Activities and Postural Transitions. Artificial Neural Networks and Machine Learning – ICANN 2014. Lecture Notes in Computer Science. Springer, 2014.
2. Davide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra and Jorge L. Reyes-Ortiz. A Public Domain Dataset for Human Activity Recognition Using Smartphones. 21st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2013. Bruges, Belgium, 24-26 April 2013.
3. Davide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra, Jorge L. Reyes-Ortiz. Energy Efficient Smartphone-Based Activity Recognition using Fixed-Point Arithmetic. Journal of Universal Computer Science. Special Issue in Ambient Assisted Living: Home Care. Volume 19, Issue 9. May 2013.
4. Davide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra and Jorge L. Reyes-Ortiz. Human Activity Recognition on Smartphones using a Multiclass Hardware-Friendly Support Vector Machine. 4th International Workshop on Ambient Assisted Living, IWAAL 2012, Vitoria-Gasteiz, Spain, December 3-5, 2012. Proceedings. Lecture Notes in Computer Science, 2012, pp. 216-223.
5. Jorge Luis Reyes-Ortiz, Alessandro Ghio, Xavier Parra-Llanas, Davide Anguita, Joan Cabestany, Andreu Català. Human Activity and Motion Disorder Recognition: Towards Smarter Interactive Cognitive Environments. 21st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2013. Bruges, Belgium, 24-26 April 2013.

