A Complete Guide To Pytorch Tensors


Introduction to PyTorch Tensors

The following article provides an outline of PyTorch tensors. PyTorch was released as an open-source framework by Facebook in 2016, and it has been very popular among developers and the research community. PyTorch makes building deep neural network models easier by providing simple programming and fast computation. One of PyTorch's core features is the tensor: a multi-dimensional array (or, in the simplest cases, a single-dimensional vector or a matrix) whose elements all share a single data type.


Tensors are also used in the TensorFlow framework, which was released by Google. Tensors are similar to NumPy arrays in Python, but they can be processed on GPUs or TPUs to speed up the training of neural network models. PyTorch includes libraries for computing gradients of feed-forward networks via back-propagation, and it has better interoperability with Python libraries like NumPy and SciPy than frameworks such as TensorFlow.

PyTorch Tensors Dimensions

In linear algebraic operations, data may come in vector, matrix, or N-dimensional form. A vector is a one-dimensional tensor, a matrix is a two-dimensional tensor, and an image is a three-dimensional tensor with the RGB channels as one dimension. A PyTorch tensor is a multi-dimensional array, much like a NumPy array, and it acts as a container or storage for numbers. To build a neural network for a deep learning model, linear algebraic operations are performed on tensors to transform one tensor into new tensors.

Example:

import torch
tensor_1 = torch.rand(3, 3)

Here, a random tensor of size 3×3 is created.

How to Create PyTorch Tensors Using Various Methods

Let’s create PyTorch tensors using various methods. Before creating any tensor, import the torch module using the command below:

Code:

import torch

1. Create a tensor from pre-existing data in list or sequence form using the torch class.

The example below creates a 2×3 matrix with the values 0 and 1.

Syntax:

torch.tensor(data, dtype=None, device=None, requires_grad=False, pin_memory=False)

Code:

import torch
tensor_b = torch.Tensor([[0, 0, 0], [1, 1, 1]])
tensor_b

Output:

tensor([[0., 0., 0.],
        [1., 1., 1.]])

2. Create an n×m tensor using the random functions in torch.

Syntax:

torch.rand(*size, dtype=None, layout=torch.strided, device=None, requires_grad=False)

Code:

import torch
tensor_a = torch.rand((3, 3))
tensor_a

Output:

3. Creating a tensor from numerical types using functions such as ones and zeros.

Syntax:

torch.zeros(*size, dtype=None, layout=torch.strided, device=None, requires_grad=False)

Code:

tensor_d = torch.zeros(3, 3)
tensor_d

Output:

tensor([[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.]])

In the above, torch.zeros() is used to create a 3×3 matrix with all values as ‘0’ (zero).

4. Creating a PyTorch tensor from a NumPy array.

To create a tensor from NumPy, create an array using NumPy and then convert it to a tensor using the torch.as_tensor() function.

Syntax:

torch.as_tensor(data, dtype=None, device=None)

Code:

import numpy
arr = numpy.array([0, 1, 2, 4])
tensor_e = torch.as_tensor(arr)
tensor_e

Output:

tensor([0, 1, 2, 4])

Here is a basic tensor operation to perform a matrix product and get a new tensor.

Code:

tensor_e = torch.Tensor([[1, 2], [7, 8]])
tensor_f = torch.Tensor([[10], [20]])
tensor_mat = tensor_e.mm(tensor_f)
tensor_mat

Output:

tensor([[ 50.],
        [230.]])

Parameters:

Here is the list and information on parameters used in syntax:

data: Data for tensors.

dtype: Datatype of the returned tensor.

device: The device (CPU or a CUDA device) on which the returned tensor is allocated.

requires_grad: A boolean (True or False) indicating whether autograd should record operations on the returned tensor.

data_size: The shape of the tensor to be created.

pin_memory: If pin_memory is set to True, the returned tensor is allocated in pinned memory (CPU tensors only).
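As a quick, minimal sketch of how these parameters fit together (the values and the name tensor_g here are our own illustration, not from the original example):

import torch

# A tensor that records operations for autograd, with an explicit dtype and device
tensor_g = torch.tensor([[1.0, 2.0], [3.0, 4.0]],
                        dtype=torch.float32,
                        device="cpu",
                        requires_grad=True)
print(tensor_g.dtype)          # torch.float32
print(tensor_g.device)         # cpu
print(tensor_g.requires_grad)  # True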


Importance of Tensors in PyTorch

A tensor is the building block of the PyTorch library, with a matrix-like structure. Tensors are important in the PyTorch framework because they support performing mathematical operations on the data.

Following are some of the key important points of tensors in PyTorch:

Tensors are the fundamental data structure in PyTorch: all neural network models are built from tensors, because tensors have the ability to perform linear algebra operations.

Tensors are similar to NumPy arrays, but they are far more powerful, as they can run their computations on a GPU as well as a CPU. This makes them much faster than Python's NumPy library for large workloads.

Tensors offer seamless interoperability with Python libraries, so a programmer can easily use scikit-learn or SciPy together with them. Also, using functions like torch.as_tensor() or torch.from_numpy(), a programmer can easily convert a NumPy array to a PyTorch tensor.

One of the important features offered by tensors is that they can keep track of all the operations performed on them, which helps compute gradients of the output; this is done using the Autograd functionality of tensors (see the sketch after this list).

A tensor is a multi-dimensional array that can hold image data (convertible into a three-dimensional array based on its RGB color channels), audio data, or time series data; almost any unstructured data can be represented using tensors.
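Here is a minimal sketch of the Autograd point above (the values and variable names are ours): requires_grad=True asks PyTorch to record the operations performed on x, and backward() replays them to compute the gradient.

import torch

x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x * x).sum()  # y = x1^2 + x2^2; the multiply and sum are recorded

y.backward()       # compute dy/dx from the recorded operations
print(x.grad)      # tensor([4., 6.]), since dy/dxi = 2 * xi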

Conclusion

The PyTorch framework can be used to build deep learning models for computer vision, natural language processing, or reinforcement learning. From the tutorial above, a programmer can get an idea of how useful and simple tensors are to learn and implement in PyTorch. Of course, tensors are used in TensorFlow as well, and the basic idea behind them stays the same: using a GPU or a CPU with CUDA cores to process data faster. Which framework to use for building models is the developer's decision, but the above article should give a clear idea of tensors in PyTorch.

Recommended Articles

We hope that this EDUCBA information on “PyTorch Tensors” was beneficial to you. You can view EDUCBA’s recommended articles for more information.


A Complete Guide To K-Nearest Neighbor (KNN) Algorithm

Introduction

In the four years of my data science career, more than 80% of the models I have built have been classification models, and just 15-20% regression models. These ratios are more or less generalized throughout the industry. The reason behind this bias towards classification models is that most analytical problems involve making a decision. In this article, we will talk about one such widely used machine learning classification technique: the k-nearest neighbor (KNN) algorithm. Our focus will primarily be on how the algorithm works on new data and how the input parameter affects the output/prediction.

Note: People who prefer to learn through videos can learn the same through our free course – K-Nearest Neighbors (KNN) Algorithm in Python and R. And if you are a complete beginner to Data Science and Machine Learning, check out our Certified BlackBelt program.

Learning Objectives

Understand the working of KNN and how it operates in Python and R.

Get to know how to choose the right value of k for KNN

Understand the difference between training error rate and validation error rate

What is KNN (K-Nearest Neighbor) Algorithm?

The K-Nearest Neighbor (KNN) algorithm is a popular machine learning technique used for classification and regression tasks. It relies on the idea that similar data points tend to have similar labels or values.

During the training phase, the KNN algorithm stores the entire training dataset as a reference. When making predictions, it calculates the distance between the input data point and all the training examples, using a chosen distance metric such as Euclidean distance.

Next, the algorithm identifies the K nearest neighbors to the input data point based on their distances. In the case of classification, the algorithm assigns the most common class label among the K neighbors as the predicted label for the input data point. For regression, it calculates the average or weighted average of the target values of the K neighbors to predict the value for the input data point.

The KNN algorithm is straightforward and easy to understand, making it a popular choice in various domains. However, its performance can be affected by the choice of K and the distance metric, so careful parameter tuning is necessary for optimal results.

When Do We Use the KNN Algorithm?

KNN can be used for both classification and regression predictive problems. However, it is more widely used in classification problems in the industry. To evaluate any technique, we generally look at 3 important aspects:

1. Ease of interpreting output

2. Calculation time

3. Predictive Power

Let us take a few examples to place KNN on this scale:

The KNN classifier fares well across all parameters of consideration. It is commonly used for its ease of interpretation and low calculation time.

How Does the KNN Algorithm Work?

Let’s take a simple case to understand this algorithm. Following is a spread of red circles (RC) and green squares (GS):

You intend to find out the class of the blue star (BS). BS can either be RC or GS and nothing else. The “K” in the KNN algorithm is the number of nearest neighbors we wish to take a vote from. Let’s say K = 3. We will then draw a circle with BS as the center, just big enough to enclose only three data points on the plane. Refer to the following diagram for more details:

The three closest points to BS are all RC. Hence, with a good confidence level, we can say that the BS should belong to the class RC. Here, the choice became obvious as all three votes from the closest neighbors went to RC. The choice of the parameter K is very crucial in this algorithm. Next, we will understand the factors to consider when choosing the best K.

How Do We Choose the Factor K?

First, let us try to understand exactly how K influences the algorithm. If we look at the last example, given that all 6 training observations remain constant, a given K value lets us draw the boundaries of each class. These decision boundaries segregate RC from GS. In the same way, let’s try to see the effect of the value of “K” on the class boundaries. The following are the different boundaries separating the two classes for different values of K.

If you watch carefully, you can see that the boundary becomes smoother with increasing values of K. As K increases to infinity, the prediction finally becomes all blue or all red, depending on the overall majority. The training error rate and the validation error rate are the two parameters we need to assess different K values. Following is the curve for the training error rate with varying values of K:

As you can see, the error rate at K=1 is always zero for the training sample. This is because the closest point to any training data point is itself; hence the prediction is always accurate with K=1. Had the validation error curve been similar, our choice of K would have been 1. Following is the validation error curve with varying values of K:

This makes the story clearer. At K=1, we were overfitting the boundaries. Hence, the error rate initially decreases and reaches a minimum. After the minimum point, it increases with increasing K. To get the optimal value of K, segregate the training and validation sets from the initial dataset, then plot the validation error curve. The value of K that minimizes the validation error should be used for all predictions.
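As a rough sketch of how you might reproduce these two curves yourself with scikit-learn on the Iris dataset (the 70/30 split and the random_state are arbitrary choices of ours):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=42)

for k in range(1, 16):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    train_err = 1 - knn.score(X_train, y_train)  # zero at K=1
    val_err = 1 - knn.score(X_val, y_val)        # typically dips, then rises
    print(f"K={k:2d}  train error={train_err:.3f}  validation error={val_err:.3f}")

The training error at K=1 comes out as zero, and the validation error typically falls to a minimum and then climbs again, matching the curves described above.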

The above content can be understood more intuitively using our free course – K-Nearest Neighbors (KNN) Algorithm in Python and R

Breaking It Down – Pseudo Code of KNN

We can implement a KNN model by following the below steps:

Load the data

Initialise the value of k

To get the predicted class, iterate from 1 to the total number of training data points:

Calculate the distance between the test data and each row of the training dataset. Here we will use Euclidean distance as our distance metric, since it’s the most popular method. Other distance metrics that can be used are Manhattan, Minkowski, Chebyshev, cosine, etc. If there are categorical variables, Hamming distance can be used.

Sort the calculated distances in ascending order based on distance values

Get top k rows from the sorted array

Get the most frequent class of these rows

Return the predicted class

Implementation in Python From Scratch

We will be using the popular Iris dataset for building our KNN model. You can download it from here.
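The original code embed is no longer available here, so below is a minimal from-scratch sketch that follows the pseudo code above. It assumes the Iris data has already been loaded into a pandas DataFrame named data, with the four measurement columns first and the class column named 'Name', and that the query point is a NumPy array of four measurements; those names match the scikit-learn comparison that follows, but the query values themselves are hypothetical.

import numpy as np
import pandas as pd

def knn_predict(data, query, k):
    """Predict the class of one query point with plain KNN."""
    features = data.iloc[:, 0:4].to_numpy(dtype=float)
    labels = data['Name'].to_numpy()

    # Steps 1-2: Euclidean distance from the query to every training row
    distances = np.sqrt(((features - query) ** 2).sum(axis=1))

    # Steps 3-4: indices of the k smallest distances
    nearest = np.argsort(distances)[:k]

    # Steps 5-6: majority vote among the k nearest labels
    classes, counts = np.unique(labels[nearest], return_counts=True)
    return classes[np.argmax(counts)], nearest

# Hypothetical usage once data is loaded:
# pred, neighbors = knn_predict(data, np.array([6.2, 3.0, 5.2, 2.3]), k=3)
# print(pred, neighbors)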



Comparing Our Model With Scikit-learn

from sklearn.neighbors import KNeighborsClassifier
neigh = KNeighborsClassifier(n_neighbors=3)
neigh.fit(data.iloc[:,0:4], data['Name'])
# Predicted class
print(neigh.predict(test))
# 3 nearest neighbors
print(neigh.kneighbors(test)[1])

We can see that both the models predicted the same class (‘Iris-virginica’) and the same nearest neighbors ( [141 139 120] ). Hence we can conclude that our model runs as expected.

Implementation of KNN in R

View the code on Gist.

Output

#Top observations present in the data
  SepalLength SepalWidth PetalLength PetalWidth        Name
1         5.1        3.5         1.4        0.2 Iris-setosa
2         4.9        3.0         1.4        0.2 Iris-setosa
3         4.7        3.2         1.3        0.2 Iris-setosa
4         4.6        3.1         1.5        0.2 Iris-setosa
5         5.0        3.6         1.4        0.2 Iris-setosa
6         5.4        3.9         1.7        0.4 Iris-setosa

#Check the dimensions of the data
[1] 150 5

#Summarise the data
  SepalLength     SepalWidth      PetalLength     PetalWidth               Name
 Min.   :4.300   Min.   :2.000   Min.   :1.000   Min.   :0.100   Iris-setosa    :50
 1st Qu.:5.100   1st Qu.:2.800   1st Qu.:1.600   1st Qu.:0.300   Iris-versicolor:50
 Median :5.800   Median :3.000   Median :4.350   Median :1.300   Iris-virginica :50
 Mean   :5.843   Mean   :3.054   Mean   :3.759   Mean   :1.199
 3rd Qu.:6.400   3rd Qu.:3.300   3rd Qu.:5.100   3rd Qu.:1.800
 Max.   :7.900   Max.   :4.400   Max.   :6.900   Max.   :2.500

Step 3: Splitting the Data

View the code on Gist.

Step 4: Calculating the Euclidean Distance

View the code on Gist.

Output

For K=1
[1] "Iris-virginica"

In the same way, you can compute for other values of K.

Comparing Our KNN Predictor Function With “Class” Library

View the code on Gist.

Output

For K=1
[1] "Iris-virginica"

We can see that both models predicted the same class (‘Iris-virginica’).

Conclusion

The KNN algorithm is one of the simplest classification algorithms. Even with such simplicity, it can give highly competitive results. The KNN algorithm can also be used for regression problems; the only difference from the methodology discussed is that we would use the average of the nearest neighbors rather than a vote from the nearest neighbors. KNN can be coded in a single line in R. I have yet to explore how we can use the KNN algorithm in SAS.

Key Takeaways

KNN classifier operates by finding the k nearest neighbors to a given data point, and it takes the majority vote to classify the data point.

The value of k is crucial, and one needs to choose it wisely to prevent overfitting or underfitting the model.

One can use cross-validation to select the optimal value of k for the k-NN algorithm, which helps improve its performance and prevent overfitting or underfitting. Cross-validation is also used to identify the outliers before applying the KNN algorithm.

The above article provides implementations of KNN in Python and R, and it compares the result with scikit-learn and the “Class” library in R.


Decision Tree Algorithm – A Complete Guide

Decision trees are a popular machine learning algorithm that can be used for both regression and classification tasks. They are easy to understand, interpret, and implement, making them an ideal choice for beginners in the field of machine learning. In this comprehensive guide, we will cover all aspects of the decision tree algorithm, including the working principles, different types of decision trees, the process of building decision trees, and how to evaluate and optimize decision trees. By the end of this article, you will have a complete understanding of decision trees and how they can be used to solve real-world problems. Please check the decision tree full course tutorial for FREE given below.

Decision Tree Full Course Tutorial

This article was published as a part of the Data Science Blogathon!

What is a Decision Tree?

A decision tree is a predictive model that uses a flowchart-like structure to make decisions based on input data. It divides data into branches and assigns outcomes to leaf nodes. Decision trees are used for classification and regression tasks, providing easy-to-understand models.

A decision tree is a hierarchical model used in decision support that depicts decisions and their potential outcomes, incorporating chance events, resource expenses, and utility. This algorithmic model utilizes conditional control statements and is a non-parametric, supervised learning method, useful for both classification and regression tasks. The tree structure comprises a root node, branches, internal nodes, and leaf nodes, forming a hierarchical, tree-like structure.

It is a tool that has applications spanning several different areas. Decision trees can be used for classification as well as regression problems. The name itself suggests that it uses a flowchart like a tree structure to show the predictions that result from a series of feature-based splits. It starts with a root node and ends with a decision made by leaves.

Decision Tree Terminologies

Before learning more about decision trees let’s get familiar with some of the terminologies:

Root Node – the node present at the beginning of a decision tree; from this node, the population starts dividing according to various features.

Decision Nodes – the nodes we get after splitting the root node are called decision nodes.

Leaf Nodes – the nodes where further splitting is not possible are called leaf nodes or terminal nodes.

Sub-tree – just as a small portion of a graph is called a sub-graph, a sub-section of a decision tree is called a sub-tree.

Pruning – cutting down some nodes to stop overfitting.

Example of Decision Tree

Let’s understand decision trees with the help of an example:

Decision trees are drawn upside down, which means the root is at the top, and the root is then split into several nodes. In layman's terms, decision trees are nothing but a bunch of if-else statements. The tree checks if a condition is true, and if it is, it moves to the next node attached to that decision.

In the below diagram, the tree will first ask: what is the weather? Is it sunny, cloudy, or rainy? It will then go to the next feature, such as humidity or wind. It will again check whether there is a strong wind or a weak one; if it's a weak wind and it's rainy, then the person may go and play.

Did you notice anything in the above flowchart? We see that if the weather is cloudy then we must go to play. Why didn’t it split more? Why did it stop there?

To answer this question, we need to know a few more concepts like entropy, information gain, and the Gini index. But in simple terms, the output for the training dataset is always "yes" for cloudy weather; since there is no disorderliness there, we don't need to split the node further.

The goal of machine learning is to decrease uncertainty or disorder in the dataset, and for this, we use decision trees.

Now you must be wondering: how do I know what the root node should be? What should the decision nodes be? When should I stop splitting? To decide this, there is a metric called "Entropy", which is the amount of uncertainty in the dataset.

Entropy

Entropy is nothing but the uncertainty in our dataset or measure of disorder. Let me try to explain this with the help of an example.

Suppose you have a group of friends deciding which movie to watch together on Sunday. There are 2 choices for movies: one is "Lucy" and the second is "Titanic", and now everyone has to state their choice. After everyone gives their answer, we see that "Lucy" gets 4 votes and "Titanic" gets 5 votes. Which movie do we watch now? Isn't it hard to choose, because the votes for both movies are somewhat equal?

This is exactly what we call disorder: there is an equal number of votes for both movies, and we can't really decide which movie to watch. It would have been much easier if the votes for "Lucy" were 8 and for "Titanic" only 2. Here we could easily say that the majority of votes are for "Lucy", hence everyone would watch this movie.

In a decision tree, the output is mostly “yes” or “no”

The formula for entropy is shown below:

E(S) = -p+ log2(p+) - p- log2(p-)

Here p+ is the probability of positive class

p– is the probability of negative class

S is the subset of the training example
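As a small sketch of this formula in code (the vote counts reuse the movie example above):

import math

def entropy(pos, neg):
    # Shannon entropy of a two-class set with pos/neg examples
    total = pos + neg
    result = 0.0
    for count in (pos, neg):
        p = count / total
        if p > 0:          # 0 * log(0) is taken as 0
            result -= p * math.log2(p)
    return result

print(entropy(4, 5))  # ~0.991: the 4-vs-5 movie vote, close to maximum disorder
print(entropy(8, 2))  # ~0.722: the 8-vs-2 vote, much easier to decide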

How do Decision Trees use Entropy?

Now we know what entropy is and what its formula is. Next, we need to know how exactly it works in this algorithm.

Entropy basically measures the impurity of a node. Impurity is the degree of randomness; it tells how random our data is. A pure sub-split means that you should be getting either all "yes" or all "no".

Suppose a feature has 8 "yes" and 4 "no" initially; after the first split, the left node gets 5 "yes" and 2 "no", whereas the right node gets 3 "yes" and 2 "no".

We see here that the split is not pure. Why? Because we can still see some negative classes in both nodes. In order to make a decision tree, we need to calculate the impurity of each split, and when the purity is 100%, we make it a leaf node.

To check the impurity of feature 2 and feature 3, we will take the help of the entropy formula.

For feature 3,

We can clearly see from the tree itself that the left node has lower entropy, i.e. more purity, than the right node, since the left node has a greater number of "yes" values and it is easier to decide there.

Always remember that the higher the Entropy, the lower will be the purity and the higher will be the impurity.

As mentioned earlier, the goal of machine learning is to decrease the uncertainty or impurity in the dataset. Using entropy, we get the impurity of a particular node, but we don't yet know whether the entropy has decreased relative to the parent node.

For this, we bring a new metric called “Information gain” which tells us how much the parent entropy has decreased after splitting it with some feature.

Information Gain

Information gain measures the reduction of uncertainty given some feature and it is also a deciding factor for which attribute should be selected as a decision node or root node.

It is simply the entropy of the full dataset minus the entropy of the dataset given some feature:

IG(S, A) = E(S) - Σv (|Sv| / |S|) × E(Sv)

where the sum runs over the values v of feature A, and Sv is the subset of S with A = v.

To understand this better, let's consider an example. Suppose our entire population has a total of 30 instances, and the dataset is to predict whether a person will go to the gym or not. Let's say 16 people go to the gym and 14 people don't.

Now we have two features to predict whether he/she will go to the gym or not.

Feature 1 is “Energy” which takes two values “high” and “low”

Feature 2 is “Motivation” which takes 3 values “No motivation”, “Neutral” and “Highly motivated”.

Let’s see how our decision tree will be made using these 2 features. We’ll use information gain to decide which feature should be the root node and which feature should be placed after the split.


Let’s calculate the entropy

To see the weighted average of entropy of each node we will do as follows:

Our parent entropy was near 0.99, and after looking at this value of information gain, we can say that the entropy of the dataset will decrease by 0.37 if we make "Energy" our root node.
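Here is a sketch of that calculation in code. The parent entropy follows directly from the 16-yes/14-no dataset above; the per-branch counts for "Energy", however, were in an image that hasn't survived, so the counts below are hypothetical ones chosen to land near the article's 0.37:

import math

def entropy(pos, neg):
    total = pos + neg
    return -sum((c / total) * math.log2(c / total) for c in (pos, neg) if c)

parent = entropy(16, 14)  # ~0.997, the "near 0.99" parent entropy above

# Hypothetical split: Energy = high -> 12 yes, 1 no; Energy = low -> 4 yes, 13 no
high, low = entropy(12, 1), entropy(4, 13)
weighted = (13 / 30) * high + (17 / 30) * low  # weighted average of child entropies
print(parent - weighted)  # information gain for "Energy", ~0.38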

Similarly, we will do this with the other feature “Motivation” and calculate its information gain.


Let’s calculate the entropy here:

To see the weighted average of entropy of each node we will do as follows:

We now see that the "Energy" feature gives a larger reduction (0.37) than the "Motivation" feature. Hence, we select the feature with the highest information gain and then split the node based on that feature.

In this example, "Energy" will be our root node, and we'll do the same for the sub-nodes. Here we can see that when the energy is "high" the entropy is low, and hence we can say a person will definitely go to the gym if he has high energy. But what if the energy is "low"? We will again split the node based on the new feature, which is "Motivation".

When to Stop Splitting?

You must be asking yourself: when do we stop growing our tree? Usually, real-world datasets have a large number of features, which results in a large number of splits, which in turn gives a huge tree. Such trees take time to build and can lead to overfitting. That means the tree will give very good accuracy on the training dataset but bad accuracy on test data.

There are many ways to tackle this problem through hyperparameter tuning. We can set the maximum depth of our decision tree using the max_depth parameter. The greater the value of max_depth, the more complex your tree will be. The training error will of course decrease if we increase the max_depth value, but when our test data comes into the picture, we will get a very bad accuracy. Hence you need a value that will neither overfit nor underfit the data, and for this, you can use GridSearchCV (see the sketch below the following list).

Another way is to set the minimum number of samples for each split. It is denoted by min_samples_split. Here we specify the minimum number of samples required to do a split. For example, we can require a minimum of 10 samples to reach a decision. That means if a node has fewer than 10 samples, then using this parameter, we can stop further splitting of this node and make it a leaf node.

There are more hyperparameters, such as:

min_samples_leaf – represents the minimum number of samples required to be in a leaf node. Increasing this number constrains the tree and reduces the chance of overfitting.

max_features – it helps us decide what number of features to consider when looking for the best split.

To read more about these hyperparameters, you can read about them here.
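As a hedged sketch of that tuning step with scikit-learn (the grid values and the Iris dataset are arbitrary choices of ours):

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Search over the hyperparameters discussed above
param_grid = {
    "max_depth": [2, 3, 4, 5, None],
    "min_samples_split": [2, 5, 10],
    "min_samples_leaf": [1, 2, 5],
}
search = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)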

Pruning

Pruning is another method that can help us avoid overfitting. It improves the performance of the tree by cutting off nodes or sub-nodes that are not significant, and it removes branches that have very low importance. A sketch of post-pruning with scikit-learn follows the list below.

There are mainly 2 ways for pruning:

Pre-pruning – we can stop growing the tree earlier, which means we can prune/remove/cut a node if it has low importance while growing the tree.

Post-pruning – once our tree is built to its depth, we can start pruning the nodes based on their significance.
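Here is a minimal sketch of post-pruning using scikit-learn's cost-complexity pruning (the dataset and random_state are our own choices): larger values of ccp_alpha prune more aggressively, so you can pick the alpha with the best test accuracy.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Candidate alphas for pruning, from no pruning up to a single-node tree
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(
    X_train, y_train)

for alpha in path.ccp_alphas:
    tree = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha)
    tree.fit(X_train, y_train)
    print(f"alpha={alpha:.4f}  leaves={tree.get_n_leaves()}  "
          f"test accuracy={tree.score(X_test, y_test):.3f}")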

Endnotes

To summarize, in this article we learned about decision trees: on what basis a tree splits its nodes, how we can stop overfitting, and why linear regression doesn't work in the case of classification. To check out the full implementation of decision trees, please refer to my Github repository. You can master all the Data Science topics with our Black Belt Plus Program, with 50+ projects and 20+ tools. Start your learning journey today!

Frequently Asked Questions

Q1. What is decision tree and example?

A. A decision tree is a tree-like structure that represents a series of decisions and their possible consequences. It is used in machine learning for classification and regression tasks. An example of a decision tree is a flowchart that helps a person decide what to wear based on the weather conditions.

Q2. What is the purpose of decision tree?

A. The purpose of a decision tree is to make decisions or predictions by learning from past data. It helps to understand the relationships between input variables and their outcomes and identify the most significant features that contribute to the final decision.

Q3. What are the 4 types of decision tree?

A. The four types of decision trees are Classification tree, Regression tree, Cost-complexity pruning tree, and Reduced Error Pruning tree.

Q4. What is a decision tree algorithm?

A. A decision tree algorithm is a machine learning algorithm that uses a decision tree to make predictions. It follows a tree-like model of decisions and their possible consequences. The algorithm works by recursively splitting the data into subsets based on the most significant feature at each node of the tree.

Q5. Which algorithm is best for decision tree?

A. The best algorithm for decision trees depends on the specific problem and dataset. Popular decision tree algorithms include ID3, C4.5, CART, and Random Forest. Random Forest is considered one of the best algorithms as it combines multiple decision trees to improve accuracy and reduce overfitting.


A Complete Guide To The Google Fred Algorithm

March 8, 2017, was a day that started out like any other…

You were sitting at your desk, casually sipping your first cup of coffee and catching up on the search news here on Search Engine Journal, or perhaps scrolling through your Facebook feed, when it hit you…

You headed over to your favorite rank-checking tool.

“Please God, just let my site/clients be OK,” you quietly prayed.

Depending on your strategies and sites, the impact was almost certainly significant. Losers lost big, and the winners took their place.

Fred was here.

Why Name the Algorithm Fred?

According to Google’s very sarcastic Gary Illyes, ‘Fred’ is the name of every update Google doesn’t give us a name for.

sure! From now on every update, unless otherwise stated, shall be called Fred

— Gary 鯨理/경리 Illyes (@methode) March 9, 2017

With that said, when we refer to the “Fred Update,” we are typically referring to the update that rolled out on March 7, 2017.

Unless otherwise noted, any reference to Fred below will be in this context and not a compilation of all the “unnamed” updates since then.

Fred’s Timing

Fred has interesting timing.

Fred was preceded a month earlier by a major Google Core Update, which was said to focus on E-A-T.

A week after Fred, Google announced Project Owl, which was designed to clear away misleading and offensive information based on feedback from their quality raters.

Now, let’s be clear: The raters were training the system to recognize inaccurate or offensive information, not making the decision as to what sites should be purged from the results.

Clearly, Google was highly focused on quality and using data from their quality raters.

Fred was no exception.

What Was Google’s Fred Algorithm?

Google’s Fred algorithm update rolled out in an attempt to remove what Google perceived as low-quality results — sites that relied on thin content and aggressive ad placement.

Many were affiliate sites, though not all.

The majority used content as their primary traffic driver. Ordinarily, we hear Google telling folks to do just that.

While Gary gave us a name for the update, he didn’t give us a list of the areas they were addressing aside from the statement:

Fred is closely related to quality section of rater guidelines. @methode #smx

— Jennifer Slegg (@jenstar) June 13, 2017

That tells us that it did have to do with E-A-T, and the impacted sites imply that the areas it targeted were some or all of:

Thin content.

Poor link quality.

Poor content quality.

Aggressive affiliate linking.

Overwhelming interstitials.

Disproportionate Main Content/Supplemental Content ratio.

If you want a refresher on E-A-T and the Quality Raters’ Guidelines, you’ll find one here.

From the Horse’s Mouth

Jenn Slegg interviewed Gary Illyes on the topic at Brighton SEO in 2017.

Here’s a transcript of their discussion.

When it came to Fred, it all came basically down to the following:

Freds, Not Fred

Gary reinforced in the interview that Fred is the name of every unnamed update. As noted above, that’s all well and good for him to state but is a bit useless for SEO pros.

This is why we are typically referring to the single update.

Google Doesn’t Like That We Care About Updates

Gary goes on to note,

“I don’t like that people are focusing on [updates]. Every single update that we make is around quality of the site or general quality, perceived quality of the site, content, and the links or whatever.”

They would rather we just focused our time and attention on meeting the user’s needs than analyzing updates and chasing the metrics they imply.

Most Updates are Unactionable

With two to three updates per day, Gary rightfully points out that most are addressing unactionable areas like how words are structured on a page in a specific language.

I just want to stress the use of the word “most.”

Links Matter

Gary says,

“Basically, if you publish high quality content that is highly cited on the internet …”

He goes on to be a bit tongue-in-cheek, but it’s clear that a goal should be building quality content that attracts links.

It’s not news or Fred-specific, but worth noting.

Q&A with Gary Illyes

You can watch a full video of the interview below. The portion on Fred begins at 4:30.

Dave’s Take

I’m not a huge fan of how Gary sort of sidesteps what Fred is by discussing it in the plural. He knows the question is about the March 7 update and not all of the Freds.

And the likelihood that they updated all the algorithms at once is… well, I suppose I can’t say 0%, but it’s as close to that as possible.

Other than that, his answers were predictable but revealing:

Most updates aren’t actionable (in that there’s literally nothing that can be done – not that there are only things Google tells you they don’t want you to do like link building).

All sites fluctuate.

When in doubt, read the Webmaster Guidelines (and I would add the Quality Raters Guidelines).

Gary is sarcastic and pretty funny.

Recovering From the Fred Update

Thankfully, if you’re ranking now, you’ve probably been doing the things that will keep you from getting hit by similar updates.

Those who wanted to recover from this update had a big, big task ahead of them. Typically, they needed to revisit their site structure to reduce the ad layout and on top of that, revisit their content page-by-page to ensure it actually deserved a spot in the top 10.

Some did. But many didn’t.

Some tried to shortcut it.

Barry Schwartz compiled a list of sites he knew to have been hit.

Here’s how some did:

Seems they tried to trick their way back in.

Looks familiar, but the second drop took a bit longer. The follow-up hit would be one of three updates.

A flurry of manual actions were sent out around this time. This is the least likely.

Quality updates that occurred around this time.

And Marie Haynes reported seeing a number of sites impacted around June 17th and 18th that had previous link-related issues.

I suspect the third is the most likely.

Again, we see some recovery, and then subsequent hits in additional quality updates.

Google will get their way eventually.

Takeaway

Those who’ve been doing SEO recently will be used to updates like Fred, but in 2017 it was different from the updates before it.

Stronger. More targeted. More effective. More devastating… or rewarding.

I remember when Fred rolled out. While my own clients weren’t impacted significantly one way or the other, it put a stamp on what was to come.

We’d seen quality updates and spam cleansers before, but this one somehow felt different. And it was.

After Fred, the updates around quality came more frequently and were more varied. I credit that to the rise of machine learning, but whatever the reason, as a searcher and someone who likes informative content, I appreciate it.

And hopefully, you feel you’ve found it here as well.


Complete Guide To Python StopIteration

Introduction to Python StopIteration

The following article outlines Python's StopIteration. We already know the topics 'iterator' and 'iterable' in Python, but what is the basic idea of an 'iterator'? An iterator is an object that holds a value (generally a countable number) that is iterated upon. An iterator in Python uses the __next__() method to traverse to the next value. To signal that no more values need to be traversed by the __next__() method, a StopIteration exception is raised. Programmers usually write a terminating condition inside the __next__() method to stop it after reaching the specified state.

Syntax of Python StopIteration

When the method used by iterators and generators completes a specified number of iterations, it raises the StopIteration exception. It's important to note that Python treats StopIteration as an exception rather than an error. Like other exceptions in Python, it can be handled by catching it. This handling of the StopIteration exception allows for proper control and management of the iteration process, ensuring that the code can gracefully handle the termination of the iteration when required.

The general syntax for using StopIteration in the if and else branches of the __next__() method is as follows:

class classname:
    def __iter__(self):
        ...  # set of statements
        return self

    def __next__(self):
        if ...:  # condition till which the loop needs to be executed
            ...  # statements performed while the traversal continues
            return ...
        else:
            raise StopIteration  # raised when all the values of the iterator are traversed

How StopIteration works in Python?

StopIteration is raised by the next() or __next__() method, a built-in Python method, to stop the iterations or to show that no more items are left to be iterated upon.

We can catch the StopIteration exception by writing the code inside a try block, catching the exception using the 'except' keyword, and printing a message on screen using the 'print' keyword.

The next() method in both generators and iterators raises it when no more elements are present in the loop or in any iterable object.
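Here is a minimal sketch of catching the exception this way (the list values are arbitrary):

it = iter([10, 20])
print(next(it))  # 10
print(next(it))  # 20
try:
    next(it)     # the iterator is exhausted now
except StopIteration:
    print("No more items left to iterate upon")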

Examples of Python StopIteration

Given below are the examples mentioned:

Example #1

Stop printing numbers after 20, i.e. print numbers incrementing by 2 up to 20, in the case of iterators.

Code:

class printNum:
    def __iter__(self):
        self.z = 2
        return self

    def __next__(self):
        if self.z <= 20:  # printing the value on the console till it reaches 20
            y = self.z
            self.z += 2
            return y
        else:
            raise StopIteration  # raised once the value goes above 20

obj = printNum()
value_passed = iter(obj)
for u in value_passed:
    print(u)

Output:

2
4
6
8
10
12
14
16
18
20

Explanation:

In the above example, we use two methods, namely __iter__() and __next__(), to iterate through the values. The __next__() method uses if and else statements to check for the termination condition of the iteration.

If the value is less than or equal to 20, it continues to return those values in increments of 2. Once the value exceeds 20, the __next__() method raises a StopIteration exception.

Example #2

Finding the cubes of numbers and stopping execution once the count reaches the value passed, using StopIteration in the case of generators.

Code:

def values():  # generator of integer values with no limit
    x = 1      # initializing the integer value to 1
    while True:
        yield x
        x += 1

def findingcubes():
    for x in values():
        yield x * x * x  # finding the cube of value 'x'

def func(y, sequence):
    sequence = iter(sequence)
    output = []  # creating a blank output list
    try:
        for x in range(y):  # using Python's range function in a for loop
            output.append(next(sequence))  # appending the values to the list
    except StopIteration:  # catching the exception
        pass
    return output

print(func(5, findingcubes()))  # passing the value to the method 'func'

Output:

[1, 8, 27, 64, 125]

Explanation:

In the above example, we find the cubes of numbers from 1 up to the count passed to the function. We generate multiple values at a time using generators in Python, and to stop the execution once the count reaches the number passed to the function, we handle the StopIteration exception.

We create different methods, each serving its respective purpose: generating the values, finding the cubes, and printing the values by storing them in the output list. The program uses basic Python functions like range and append, which should be clear to a programmer in the initial stages of learning.

How to Avoid StopIteration Exception in Python?

As seen above, StopIteration is not an error in Python but an exception, used to signal that the next() method has run for the specified number of iterations. An iterator in Python uses two methods, i.e. iter() and next().

The next() method raises a StopIteration exception when it is called manually after the iterator has been exhausted.

The best way to avoid this exception in Python is to use normal looping, i.e. to consume the iterator with a for loop instead of calling the next() method repeatedly.

Otherwise, if the StopIteration exception cannot be avoided, we can simply let the next() method raise it and catch it like a normal exception in Python using the except keyword (see the sketch below).
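A small sketch contrasting the two approaches (the list values are arbitrary):

numbers = iter([1, 2, 3])

# A for loop catches StopIteration internally, so no exception surfaces
for n in numbers:
    print(n)

# With manual next() calls, pass a default value (or use try/except) instead
numbers = iter([1, 2, 3])
while True:
    n = next(numbers, None)  # None is returned once the iterator is exhausted
    if n is None:
        break
    print(n)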

Conclusion

As discussed above in the article, it should now be clear what the StopIteration exception is and under which conditions it is raised in Python. The StopIteration exception can be an issue for new programmers to deal with, as it can be raised in many situations.

Recommended Articles

This is a guide to Python StopIteration. Here we discuss how StopIteration works in Python and how to avoid StopIteration exceptions with programming examples. You may also have a look at the following articles to learn more –

The Complete Guide To Todoist Filters

If you’re already using Todoist to keep track of your life, you might wonder how you can make it even more useful. The simple answer: Todoist filters. These have the power to streamline and better organize all your tasks, especially when you’ve added so many to-dos that you don’t even know where to start. The good news is you can use built-in filters or create your own. Read on to learn more.

What Are Todoist Filters?

Todoist already has a handy search bar to quickly find tasks. Todoist filters, though, take it a step further by letting you create custom searches for those you use often. For example, you might create a filter for calls or emails you need to respond to by the end of the day. You can filter by tag and due date to quickly see just those tasks.

If you have a few hundred tasks on your to-do list, simply scrolling through isn’t enough. Even if you carefully categorize them with tags and priority, you could still waste valuable time trying to find what you need and could easily miss an important task.

To really see how useful Todoist filters can be, let’s imagine a busy professional has several hundred tasks listed for the week. This could be a mixture of emails, calls, projects, and even things to do on their way home. When they log in to see their tasks at the start of the day, they want to get to work immediately.

They create a filter to first show only top-priority tasks. They further customize the filter to show tasks that are due that day, possibly even tasks due before lunch. If they always handle emails left over from the previous day first thing in the morning, they'd customize the filter one more time to show only email tasks. Suddenly, that extremely long list shows only the handful of tasks the person needs to do as soon as they start working that day.

The same holds true for when they leave for the day. They’d filter tasks by Home along with the current day. They could also filter by person if they wanted to see upcoming tasks (such as extracurricular school activities) for their kids, spouse, friends, or charity organizations.

Best Default Todoist Filters

By default, Todoist gives you a few filters. These may vary based on the platform you’re using. For the purpose of this post, I’m using the free Web version.

The following filters are included by default without the need for you to create anything:

Assigned to me – only lists tasks that are assigned to you

Priority 1 – lists tasks labeled as Priority 1

No due date – only lists tasks without a due date

View all – shows all your tasks in one list

Out of these defaults, Priority 1 and Assigned to me are probably the most useful, as you can quickly see what your more urgent tasks may be.

Creating Your Own Filters

In the grand scheme of things, the default Todoist filters are extremely basic and may not be all that helpful. That’s when it’s best to create your own filters.

To make filters better, it’s important to use labels, dates (if applicable), and priorities when creating tasks. Otherwise, it’s difficult to create filters based on those criteria. You can create labels when creating or editing a task or by using the “Filters & Labels” section.

To create your own filter, select “Filters & Labels” in the left pane. On Android, drag the menu up from the bottom and select “Filters.” In iOS, tap “<” to open the menu and select “Filters & Labels.”

Beside “Filters,” select the “+” button to add a new filter. (For this example, I’m creating a filter that shows overdue tasks. This works well for those tasks that get overlooked but still need to be done. This only works if your tasks have a due date.)

When creating basic filters, there are a few things to keep in mind:

If your query is based on a label, always use “@” symbol before the label name, such as “@work.”

If your query is based on a project/main section or only a sub-section, always use “#” before the name, such as “#Inbox.”

If you want your query to include a main section along with all of its sub-sections, use “##” before the name.

If you want to exclude a specific sub-section, add a “!” before the sub-section name, such as “##Inbox & !#Followups.” (This includes all sections in the Inbox parent section, excluding anything from the Followups sub-section).

If you want to search sections with the same name across multiple projects, use “/” before the name, such as “/Emails,” which could be a sub-section in multiple parent sections.

Creating Advanced Todoist Filters

Creating a basic filter is fairly easy: simply use the name of a label, section, or date, or a specific keyword (such as overdue, recurring, no date, or no label). However, you're not limited to a single filter criterion. For example, in the section above, you saw how to exclude a sub-section in a filter.

To use multiple criteria, use the following operators:

“&” (and) – Combine criteria so tasks must match all of them, such as “today & p1”.

“|” (or) – Match tasks that meet any of the criteria, such as “today | overdue”.

“!” (not) – Exclude tasks that match a criterion, such as “!subtask”.

“(” and “)” – Group criteria to control how they’re evaluated, such as “(today | overdue) & #Work”.

“*” (wildcard) – Make your filter more encompassing with a wildcard symbol. For instance, search for all tasks assigned to anyone with the last name Crowder with “assigned to: * crowder.”

If you love creating search filters in all the productivity apps you use, learn how to master VLOOKUP in Excel and Google Sheets.

Most Useful Filters

To help you get started, Todoist has an AI filter query generator. It may not get things quite right but can give you a starting point.

If you’re not sure where to start to create your own filters, consider using some of the most useful filter queries, including:
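For example (the labels and project names here are placeholders for your own):

“today & p1” – today’s top-priority tasks.

“overdue | today” – everything due now or past due.

“7 days & @waiting” – tasks due in the next week that carry the @waiting label.

“no date & #Inbox” – unscheduled tasks still sitting in your Inbox.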

Finding Todoist Filter Inspiration

Want to become a master of Todoist filters? All you need is the right inspiration. The Doist blog has 24 incredible and highly useful filters to get you organized quickly. These are also great examples of using more complex filters.

Frequently Asked Questions

1. How can I access my most used filters faster?

If you only have a few filters, going to the “Filters & Labels” section isn’t a problem. However, if you have dozens of filters, finding the right one can be time consuming. The quickest fix is to add the filters you use most to your Favorites (see the organization tips below) so they stay visible in the left pane.

2. Can I filter completed tasks?

3. Do I have to create a filter for all my searches?

4. How can I organize my filters?

It’s easy for filters to get out of hand. There are several ways to keep them organized:

Add your most used to Favorites.

Group similar filters with color-coded labels.

Drag and drop to organize filters the way you want in your Filters & Labels list.

If there are filters you no longer use, delete them. The fewer filters you have, the easier it is to find what you need.

Crystal Crowder

Crystal Crowder has spent over 15 years working in the tech industry, first as an IT technician and then as a writer. She works to help teach others how to get the most from their devices, systems, and apps. She stays on top of the latest trends and is always finding solutions to common tech problems.

