Python With MySQL Connectivity: Database & Table


To use MySQL with Python, you must have some knowledge of SQL.

Before diving deep, let’s understand

What is MySQL?

MySQL is an open-source database and one of the most widely used relational database management systems (RDBMS). MySQL was co-founded by Michael Widenius, and its name derives from the name of his daughter, My.

In this tutorial, you will learn

How to Install MySQL Connector Python on Windows, Linux/Unix

Install MySQL in Linux/Unix:

In the terminal, use the following command:

Example: rpm -i MySQL-5.0.9.0.i386.rpm

To check in Linux

mysql --version

Install MySQL in Windows

Download the MySQL installer (.exe) from the official site and install it like any other Windows software. Refer to this tutorial for a step-by-step guide.

How to Install MySQL Connector Library for Python

Here is how to connect MySQL with Python:

For Python 2.7 or lower, install using pip:

pip install mysql-connector

For Python 3 or higher, install using pip3:

pip3 install mysql-connector

Test the MySQL Database connection with Python

To test MySQL database connectivity in Python, we will use the installed MySQL connector and pass credentials such as host, username, and password into the connect() function, as shown in the Python MySQL connector example below.

Syntax to access MySQL with Python:

import mysql.connector

db_connection = mysql.connector.connect(
    host="hostname",
    user="username",
    passwd="password"
)

Example:

import mysql.connector

db_connection = mysql.connector.connect(
    host="localhost",
    user="root",
    passwd="root"
)
print(db_connection)

Output:

The output shows that the connection was created successfully.
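If you want an explicit check rather than just printing the connection object, the connector also provides an is_connected() method; a minimal sketch:

# returns True while the connection to the server is open
if db_connection.is_connected():
    print("Connected to the MySQL server")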

Creating Database in MySQL using Python

The syntax to create a new database in SQL is:

CREATE DATABASE database_name

Now we create the database using Python:

import mysql.connector

db_connection = mysql.connector.connect(
    host="localhost",
    user="root",
    passwd="root"
)

# creating database_cursor to perform SQL operation
db_cursor = db_connection.cursor()

# executing cursor with execute method and pass SQL query
db_cursor.execute("CREATE DATABASE my_first_db")

# get list of all databases
db_cursor.execute("SHOW DATABASES")

# print all databases
for db in db_cursor:
    print(db)

Output:

The output shows that the my_first_db database has been created.

Create a Table in MySQL with Python

Let’s create a simple table “student” which has two columns as shown in the below MySQL connector Python example.

SQL Syntax:

CREATE TABLE student (id INT, name VARCHAR(255))

Example:

import mysql.connector

db_connection = mysql.connector.connect(
    host="localhost",
    user="root",
    passwd="root",
    database="my_first_db"
)
db_cursor = db_connection.cursor()

# Here creating database table as student
db_cursor.execute("CREATE TABLE student (id INT, name VARCHAR(255))")

# Get database tables
db_cursor.execute("SHOW TABLES")
for table in db_cursor:
    print(table)

Output:

('student',)

Create a Table with Primary Key

Let’s create an Employee table with three different columns. We will make the id column a primary key with the AUTO_INCREMENT constraint, as shown in the example below.

SQL Syntax:

CREATE TABLE employee(id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(255), salary INT(6))

Example:

import mysql.connector

db_connection = mysql.connector.connect(
    host="localhost",
    user="root",
    passwd="root",
    database="my_first_db"
)
db_cursor = db_connection.cursor()

# Here creating database table as employee with primary key
db_cursor.execute("CREATE TABLE employee(id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(255), salary INT(6))")

# Get database tables
db_cursor.execute("SHOW TABLES")
for table in db_cursor:
    print(table)

Output:

('employee',)
('student',)

ALTER table in MySQL with Python

The ALTER command is used to modify the structure of a table in SQL. Here we will alter the student table and make the id field a primary key, as shown in the Python MySQL connector example below.

SQL Syntax:

ALTER TABLE student MODIFY id INT PRIMARY KEY

Example:

import mysql.connector

db_connection = mysql.connector.connect(
    host="localhost",
    user="root",
    passwd="root",
    database="my_first_db"
)
db_cursor = db_connection.cursor()

# Here we modify existing column id
db_cursor.execute("ALTER TABLE student MODIFY id INT PRIMARY KEY")

Output:

The id column is now modified to be the primary key.
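To confirm the change from Python, you can describe the table; a minimal sketch reusing the same cursor (DESCRIBE is standard MySQL):

db_cursor.execute("DESCRIBE student")
for column in db_cursor:
    print(column)  # the row for id should now show 'PRI' in the Key field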

Insert Operation with MySQL in Python:

Let’s perform an insert operation on the MySQL database tables we already created. We will insert data into the STUDENT table and the EMPLOYEE table.

SQL Syntax:

INSERT INTO student (id, name) VALUES (01, "John")
INSERT INTO employee (id, name, salary) VALUES (01, "John", 10000)

Example:

import mysql.connector

db_connection = mysql.connector.connect(
    host="localhost",
    user="root",
    passwd="root",
    database="my_first_db"
)
db_cursor = db_connection.cursor()

student_sql_query = "INSERT INTO student(id, name) VALUES(01, 'John')"
employee_sql_query = "INSERT INTO employee(id, name, salary) VALUES(01, 'John', 10000)"

# Execute cursor and pass query as well as student data
db_cursor.execute(student_sql_query)

# Execute cursor and pass query of employee and data of employee
db_cursor.execute(employee_sql_query)

print(db_cursor.rowcount, "Record Inserted")

Output:

2 Record Inserted
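One caveat worth noting: MySQL Connector/Python disables autocommit by default, so the inserted rows are only persisted once the transaction is committed. A minimal sketch:

# commit the transaction so the inserted rows are stored permanently
db_connection.commit()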



How To Dump Mysql Server With Examples?

Introduction to MySQL Dump

In case the system’s database is corrupted, crashed, or lost, we should be able to restore the data in the database. For this reason, MySQL provides us with a facility to dump the database using mysqldump utility. You can use this utility only if you have access to your database, you have been assigned the select privilege on the tables of that database, and the database is currently running. The utility creates a logical backup and generates a flat file containing SQL statements. Later on, you can execute these SQL statements to restore the database to the same state it was in when the backup file was created. This utility supports both single and multiple database backups. Additionally, the mysqldump utility has the capability to export the data in XML, CSV, or any other delimited text format.


We should dump our database frequently so that an up-to-date backup is always available. Whenever the backup is restored, the database is returned to the state it was in when the dump file was created with mysqldump.

Pre-requisites

There are certain privileges on tables, views, triggers, and transactions that we need in order to use the mysqldump utility; exactly which ones depend on the content being backed up. If we are backing up a database that contains tables, we need the SELECT privilege; for views, the SHOW VIEW privilege; for triggers, the TRIGGER privilege; and if we do not use the --single-transaction option while dumping the database, the LOCK TABLES privilege as well.

Similarly, while reloading or restoring the dumped data, we must possess the privileges required to execute the statements in the dump file, such as CREATE and INSERT. ALTER statements may appear in the dump file when stored programs are dumped with their character-set information preserved; to execute such an ALTER command and modify the database collation, the user must have the ALTER privilege assigned to them.
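As a hedged illustration of the dump-side privileges described above (the user name backup_user and the host are placeholders), the corresponding GRANT statement might look like:

GRANT SELECT, SHOW VIEW, TRIGGER, LOCK TABLES ON educba.* TO 'backup_user'@'localhost';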

Syntax of MySQL Dump

Dumping one or more of the selected tables:
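A general form of this command (standard mysqldump usage; the table and file names are placeholders):

mysqldump -u root -p database_name table1 table2 > dump.sql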

Dumping one or more of the selected databases:
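A general form (the --databases flag lets you name one or more databases; names are placeholders):

mysqldump -u root -p --databases db1 db2 > dump.sql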

Dumping the Complete MySQL Server

The syntax for dumping the complete MySQL server is:
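A general form (standard mysqldump usage; the output file name is a placeholder):

mysqldump -u root -p --all-databases > full_server_dump.sql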

There are many options we can use to control the behavior of the dump, such as -u for the username we log in with and -p to prompt for the password. The available options can be categorized into the following types −

Connection Options

Option-File Options

DDL Options

Debug Options

Help Options

Internationalization Options

Replication Options

Format Options

Filtering Options

Performance Options

Transactional Options

To see a complete list of the options that are available and can be used, we can execute the following command –

mysqldump -u root -p --help

This displays all the available options and their usage.

As the list is quite long, you can export it to a file and then open that file to view and search for the options relevant to your context and use case. You can export the output to a file by executing the following command:
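For example, redirecting the listing to a file (temp.txt is just an illustrative name):

mysqldump -u root -p --help > temp.txt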

Output:

The exported file, when opened in an editor, lists all the available options and their descriptions.

Examples of MySQL Dump

Let us consider an example. First, we will query the database server to display all databases −

show databases;

Output:

Now, we will use the educba database and check the tables present in it.

use educba; show tables;

Let us now see all the records present in the developers table.

select * from developers;

Output:

Now, let us export the educba database using the mysqldump command –
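For example (assuming the root user and the backup file name used later in this section):

mysqldump -u root -p educba > backupOfEducba.sql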

Output:

Note that we will have to exit from the MySQL command shell and then execute the above command. Afterwards, a file named backupOfEducba.sql will be created at the same path. On opening the file, we will see that it contains all the SQL commands needed to recreate the educba database if we restore this file. Here is how that file will look:

This is the dumped flat-file that was created after performing a dump of the ‘educba’ database. The file consists of commands to create the database, create a table, and insert queries to populate the table with records.

Restoring the Database

Let us now drop the database educba using the following command –

DROP DATABASE educba;

Output:

And now confirm the available databases by using the command –

show databases;

Output:

We can see that the educba database no longer exists on our MySQL server. Now, we will restore the educba database from the backup file backupOfEducba.sql that we created previously by dumping the educba database.

You can restore the database using the following command:

sudo mysql -u root -p < backupOfEducba.sql

Output:

Let us check the contents of the restored educba database:

show databases;
use educba;
select * from developers;

Output:

Upon restoration, we can see that the educba database has been re-established and that the developers table contains exactly the same records as before.


Getting Started With Mongodb Database For Data Science

When we talk about structured data, a database is the first thing that comes to mind. There are various types of databases; here we will be looking at NoSQL databases.

For the past few years, one of the most common ways to store data has been NoSQL databases. NoSQL databases, also known as "non SQL" or "not only SQL" databases, store data in a non-tabular format, unlike relational databases.

Today, we will be working with MongoDB, a widely used product for NoSQL databases, and learning how to use data inside MongoDB databases, for data science. You can learn more about the NoSQL database on the official site of MongoDB: NoSQL Explained.

Table of Contents:

Pre-requisites

Looking at MongoDB Compass

pymongo module in Python

Getting ready for Data science

1. Pre-requisites

MongoDB

Before working with MongoDB Database, we need to install it. Here is the official installation guide for your personal working environment.

MongoDB Compass

For a simpler and easier explanation in this tutorial, we will be using the official interactive GUI for MongoDB databases, i.e. MongoDB Compass. Here is the installation guide for it.

Python 3.7 or above

Here is the link to install the latest stable Python 3 version.

pymongo module for working with MongoDB client in Python. Install using pip install pymongo

Data science libraries according to your particular use case, here I will be using only Pandas to create a DataFrame.

2. Looking at MongoDB Compass

After the successful installation of MongoDB Compass (refer to the link given in the above step), we will briefly explore its interface.

Startup interface

NOTE: If you are using it for the first time, you might not see any recent entries.

Connecting to your local Database

NOTE: Admin, local, and config are the 3 databases that will be present in your MongoDB client by Default. We will be working with the admin database for demonstration purposes.

Still if you have any doubts, you can quickly glance over the Glossary for MongoDB

For creating the collection, we are choosing a very famous dataset: the Iris Data Set; you can download the .csv file from the provided link. Here are the steps to import the dataset into our MongoDB admin database (as the 'Tutorial' collection).

Select the .csv file that you downloaded from the dataset link above.

Note: Remember to check and, if necessary, change the data types of the columns before importing.

Here is how your ‘Tutorial’ collection would look after the successful import of the Iris data from the .csv file.

3. Pymongo Module in Python

To install the module, you need to simply write pip install pymongo in your terminal.

import pymongo

# Getting the access to local MongoDB databases
databases = pymongo.MongoClient()

# Getting the access to `admin` database from the group of other databases present
admin_db = databases.admin

# Getting the access to 'Tutorial' collection that we just created inside `admin` database
tutorial_collection = admin_db.Tutorial

# Now this is where our imported `iris` data is stored.
tutorial_collection.find_one({})

tutorial_collection.find({})

Note: pymongo cursor object is iterable, so here we converted it into a list to glance at all the values.

list(tutorial_collection.find({}))

The list goes on through all 150 records of the Iris dataset.
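You can also filter on the server side instead of pulling every document into Python. A minimal sketch, assuming the imported CSV produced a field named "species" with values such as "Iris-setosa":

# fetch at most five setosa documents from the server
setosa_docs = list(tutorial_collection.find({"species": "Iris-setosa"}).limit(5))
print(setosa_docs)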

4. Getting ready for Data Science

We are at the final stage, which connects this tutorial to downstream data science/analytics tasks.

We need to create a DataFrame using pandas for our MongoDB Tutorial Collection. Let’s see how we can do that in Jupyter notebooks for better interactivity.

import pandas as pd

iris_df = pd.DataFrame(list(tutorial_collection.find({})))
iris_df

If you don’t want some of the columns, you can clean them in two ways:

First, before retrieving the data from the database into Python code, using MongoDB aggregation pipelines (out of the scope of this tutorial).

The second is cleaning the data after creating the DataFrame.

# we will clean the `_id` column using the second approach
iris_df = iris_df.drop("_id", axis=1)
iris_df.head()

You have reached the end of this tutorial. Further down the line, you can write code just as you would for any other data science/analytics task. From this point onwards, you can be as flexible as you want with your data science skills.

Additional Information:

MongoDB offers the functionality of aggregation pipelines (mentioned once above) to filter, pre-process, and, in general, build use-case-specific data pipelines. With proper logic and design, they can be really powerful for retrieving refined and enriched data. Running such a pipeline is often several times faster than achieving the same result in Python, or any other interpreted language, after creating a DataFrame.
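As a rough sketch of such a pipeline (again assuming a "species" field from the imported CSV), counting documents per species directly on the server:

pipeline = [
    {"$group": {"_id": "$species", "count": {"$sum": 1}}},  # group documents by species
    {"$sort": {"count": -1}}                                # largest groups first
]
for row in tutorial_collection.aggregate(pipeline):
    print(row)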



Python Zip File With Example

Python allows you to quickly create zip/tar archives.

The following command will zip an entire directory

shutil.make_archive(output_filename, 'zip', dir_name)

The following command gives you control over the files you want to archive

ZipFile.write(filename)

Here are the steps to create Zip File in Python

Step 1) To create an archive file from Python, make sure you have your import statement correct and in order. Here the import statement for the archive is from shutil import make_archive

Code Explanation

Import make_archive class from module shutil

Use the split function to split out the directory and the file name from the path to the location of the text file (guru99)

Then we call shutil.make_archive("guru99 archive", "zip", root_dir) to create the archive file, which will be in zip format

Then we pass in the root directory of the things we want zipped up, so everything in that directory will be zipped

When you run the code, you can see the archive zip file is created on the right side of the panel.

Now your guru99 archive.zip file will appear in your OS (Windows Explorer)

Step 4) In Python, we can have more control over the archive, since we can define which specific files to include. In our case, we will include two files in the archive: “guru99.txt” and “guru99.txt.bak”.

Code Explanation

Import the ZipFile class from Python's zipfile module. This module gives full control over creating zip files

We create a new ZipFile with the name and mode ("testguru99.zip", "w")

Creating a new ZipFile object requires us to pass in a mode along with the file name; because we want to write information into the file, we open it in "w" mode and refer to it as newzip

We used variable “newzip” to refer to the zip file we created

Using the write function on the “newzip” variable, we add the files “guru99.txt” and “guru99.txt.bak” to the archive

When you execute the code, you can see the file created on the right side of the panel with the name “testguru99.zip”

Note: Here we don’t give any command to “close” the file, such as newzip.close(), because we use the “with” statement; when the program falls outside of its scope, the file is cleaned up and closed automatically.

Here is the complete code

Python 2 Example

import os
import shutil
from zipfile import ZipFile
from os import path
from shutil import make_archive

def main():
    # Check if file exists
    if path.exists("guru99.txt"):
        # get the path to the file in the current directory
        src = path.realpath("guru99.txt")
        # rename the original file
        os.rename("career.guru99.txt", "guru99.txt")
        # now put things into a ZIP archive
        root_dir, tail = path.split(src)
        shutil.make_archive("guru99 archive", "zip", root_dir)
        # more fine-grained control over ZIP files
        with ZipFile("testguru99.zip", "w") as newzip:
            newzip.write("guru99.txt")
            newzip.write("guru99.txt.bak")

if __name__ == "__main__":
    main()

Python 3 Example

import os
import shutil
from zipfile import ZipFile
from os import path
from shutil import make_archive

# Check if file exists
if path.exists("guru99.txt"):
    # get the path to the file in the current directory
    src = path.realpath("guru99.txt")
    # rename the original file
    os.rename("career.guru99.txt", "guru99.txt")
    # now put things into a ZIP archive
    root_dir, tail = path.split(src)
    shutil.make_archive("guru99 archive", "zip", root_dir)
    # more fine-grained control over ZIP files
    with ZipFile("testguru99.zip", "w") as newzip:
        newzip.write("guru99.txt")
        newzip.write("guru99.txt.bak")

Summary

To zip an entire directory, use the command shutil.make_archive("name", "zip", root_dir)

To select specific files to zip, use the command ZipFile.write(filename)

AI with Python – Genetic Algorithms

This chapter discusses Genetic Algorithms of AI in detail.

What are Genetic Algorithms?

Genetic Algorithms (GAs) are search-based algorithms based on the concepts of natural selection and genetics. GAs are a subset of a much larger branch of computation known as Evolutionary Computation.

GAs were developed by John Holland and his students and colleagues at the University of Michigan, most notably David E. Goldberg. They have since been tried on various optimization problems with a high degree of success.

In GAs, we have a pool of possible solutions to the given problem. These solutions then undergo recombination and mutation (as in natural genetics), producing new children, and the process is repeated over many generations. Each individual (or candidate solution) is assigned a fitness value (based on its objective function value), and fitter individuals are given a higher chance to mate and yield fitter individuals. This is in line with the Darwinian theory of survival of the fittest.

Thus, it keeps evolving better individuals or solutions over generations, till it reaches a stopping criterion.

Genetic Algorithms are sufficiently randomized in nature, but they perform much better than random local search (where we just try random solutions, keeping track of the best so far), as they exploit historical information as well.

How to Use GA for Optimization Problems?

Optimization is the act of making a design, situation, resource, or system as effective as possible.

Stages of GA mechanism for optimization process

The following is the sequence of steps of the GA mechanism when used for optimization problems.

Step 1 − Generate the initial population randomly.

Step 2 − Select the initial solution with best fitness values.

Step 3 − Recombine the selected solutions using mutation and crossover operators.

Step 4 − Insert an offspring into the population.

Step 5 − Now, if the stop condition is met, return the solution with their best fitness value. Else go to step 2.

Installing Necessary Packages

For solving the problem using Genetic Algorithms in Python, we are going to use a powerful package for GA called DEAP. It is a novel evolutionary computation framework for rapid prototyping and testing of ideas. We can install this package with the following command at the command prompt −

pip install deap

If you are using anaconda environment, then following command can be used to install deap −

conda install -c conda-forge deap

Implementing Solutions using Genetic Algorithms

This section explains the implementation of solutions using Genetic Algorithms.

Generating bit patterns

The following example shows you how to generate a bit string that would contain 15 ones, based on the One Max problem.

Import the necessary packages as shown −

import random
from deap import base, creator, tools

Define the evaluation function. It is the first step to create a genetic algorithm.

def eval_func(individual):
    target_sum = 15
    return len(individual) - abs(sum(individual) - target_sum),

Now, create the toolbox with the right parameters −

def create_toolbox(num_bits):
    creator.create("FitnessMax", base.Fitness, weights=(1.0,))
    creator.create("Individual", list, fitness=creator.FitnessMax)

Initialize the toolbox

toolbox = base.Toolbox()
toolbox.register("attr_bool", random.randint, 0, 1)
toolbox.register("individual", tools.initRepeat, creator.Individual, toolbox.attr_bool, num_bits)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)

Register the evaluation operator −

toolbox.register("evaluate", eval_func)

Now, register the crossover operator −

toolbox.register("mate", tools.cxTwoPoint)

Register a mutation operator −

toolbox.register("mutate", tools.mutFlipBit, indpb = 0.05)

Define the operator for breeding −

toolbox.register("select", tools.selTournament, tournsize = 3) return toolbox if __name__ == "__main__": num_bits = 45 toolbox = create_toolbox(num_bits) random.seed(7) population = toolbox.population(n = 500) probab_crossing, probab_mutating = 0.5, 0.2 num_generations = 10 print('nEvolution process starts')

Evaluate the entire population −

fitnesses = list(map(toolbox.evaluate, population))
for ind, fit in zip(population, fitnesses):
    ind.fitness.values = fit
print('\nEvaluated', len(population), 'individuals')

Create and iterate through generations −

for g in range(num_generations):
    print("\n- Generation", g)

Selecting the next generation individuals −

offspring = toolbox.select(population, len(population))

Now, clone the selected individuals −

offspring = list(map(toolbox.clone, offspring))

Apply crossover and mutation on the offspring −

for child1, child2 in zip(offspring[::2], offspring[1::2]):
    if random.random() < probab_crossing:
        toolbox.mate(child1, child2)

Delete the fitness value of child

del child1.fitness.values
del child2.fitness.values

Now, apply mutation −

for mutant in offspring:
    if random.random() < probab_mutating:
        toolbox.mutate(mutant)
        del mutant.fitness.values

Evaluate the individuals with an invalid fitness −

invalid_ind = [ind for ind in offspring if not ind.fitness.valid]
fitnesses = map(toolbox.evaluate, invalid_ind)
for ind, fit in zip(invalid_ind, fitnesses):
    ind.fitness.values = fit
print('Evaluated', len(invalid_ind), 'individuals')

Now, replace population with next generation individual −

population[:] = offspring

Print the statistics for the current generations −

fits = [ind.fitness.values[0] for ind in population]
length = len(population)
mean = sum(fits) / length
sum2 = sum(x*x for x in fits)
std = abs(sum2 / length - mean**2)**0.5
print('Min =', min(fits), ', Max =', max(fits))
print('Average =', round(mean, 2), ', Standard deviation =', round(std, 2))

print("\n- Evolution ends")

Print the final output −

best_ind = tools.selBest(population, 1)[0]
print('\nBest individual:\n', best_ind)
print('\nNumber of ones:', sum(best_ind))

Following would be the output:

Evolution process starts
Evaluated 500 individuals

- Generation 0
Evaluated 295 individuals
Min = 32.0 , Max = 45.0
Average = 40.29 , Standard deviation = 2.61

- Generation 1
Evaluated 292 individuals
Min = 34.0 , Max = 45.0
Average = 42.35 , Standard deviation = 1.91

- Generation 2
Evaluated 277 individuals
Min = 37.0 , Max = 45.0
Average = 43.39 , Standard deviation = 1.46

… … … …

- Generation 9
Evaluated 299 individuals
Min = 40.0 , Max = 45.0
Average = 44.12 , Standard deviation = 1.11

- Evolution ends

Best individual:
[0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1]

Number of ones: 15

Symbolic Regression Problem

It is one of the best-known problems in genetic programming. All symbolic regression problems use an arbitrary data distribution and try to fit the most accurate data with a symbolic formula. Usually, a measure like the RMSE (Root Mean Square Error) is used to measure an individual’s fitness. It is a classic regression problem, and here we are using the equation 5x³ − 6x² + 8x = 1. We need to follow all the steps used in the above example, but the main part would be to create the primitive sets, because they are the building blocks for the individuals so the evaluation can start. Here we will be using the classic set of primitives.

The following Python code explains this in detail −

import operator
import math
import random

import numpy as np
from deap import algorithms, base, creator, tools, gp

def division_operator(numerator, denominator):
    if denominator == 0:
        return 1
    return numerator / denominator

def eval_func(individual, points):
    # compile the tree expression into a callable Python function
    func = toolbox.compile(expr=individual)
    # mean squared error against the assumed target polynomial 5x^3 - 6x^2 + 8x - 1
    # (derived from the equation 5x^3 - 6x^2 + 8x = 1 mentioned above)
    mse = ((func(x) - (5 * x**3 - 6 * x**2 + 8 * x - 1))**2 for x in points)
    return math.fsum(mse) / len(points),

def create_toolbox():
    pset = gp.PrimitiveSet("MAIN", 1)
    pset.addPrimitive(operator.add, 2)
    pset.addPrimitive(operator.sub, 2)
    pset.addPrimitive(operator.mul, 2)
    pset.addPrimitive(division_operator, 2)
    pset.addPrimitive(operator.neg, 1)
    pset.addPrimitive(math.cos, 1)
    pset.addPrimitive(math.sin, 1)
    pset.addEphemeralConstant("rand101", lambda: random.randint(-1, 1))
    pset.renameArguments(ARG0='x')

    creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
    creator.create("Individual", gp.PrimitiveTree, fitness=creator.FitnessMin)

    toolbox = base.Toolbox()
    toolbox.register("expr", gp.genHalfAndHalf, pset=pset, min_=1, max_=2)
    toolbox.register("individual", tools.initIterate, creator.Individual, toolbox.expr)
    toolbox.register("population", tools.initRepeat, list, toolbox.individual)
    # needed by eval_func to compile individuals into callable functions
    toolbox.register("compile", gp.compile, pset=pset)
    toolbox.register("evaluate", eval_func, points=[x/10. for x in range(-10, 10)])
    toolbox.register("select", tools.selTournament, tournsize=3)
    toolbox.register("mate", gp.cxOnePoint)
    toolbox.register("expr_mut", gp.genFull, min_=0, max_=2)
    toolbox.register("mutate", gp.mutUniform, expr=toolbox.expr_mut, pset=pset)
    toolbox.decorate("mate", gp.staticLimit(key=operator.attrgetter("height"), max_value=17))
    toolbox.decorate("mutate", gp.staticLimit(key=operator.attrgetter("height"), max_value=17))
    return toolbox

if __name__ == "__main__":
    random.seed(7)
    toolbox = create_toolbox()
    population = toolbox.population(n=450)
    hall_of_fame = tools.HallOfFame(1)
    stats_fit = tools.Statistics(lambda x: x.fitness.values)
    stats_size = tools.Statistics(len)
    mstats = tools.MultiStatistics(fitness=stats_fit, size=stats_size)
    mstats.register("avg", np.mean)
    mstats.register("std", np.std)
    mstats.register("min", np.min)
    mstats.register("max", np.max)
    probab_crossover = 0.4
    probab_mutate = 0.2
    number_gen = 10
    population, log = algorithms.eaSimple(population, toolbox, probab_crossover,
                                          probab_mutate, number_gen, stats=mstats,
                                          halloffame=hall_of_fame, verbose=True)

Note that all the basic steps are the same as those used while generating bit patterns. The program prints the min, max, avg, and std (standard deviation) statistics for each of the 10 generations.


Some Popular Things Made With Python

In this article, we will learn about some popular things made with Python. The following are some of the popular applications made using Python.

Instagram

Instagram is the most popular social networking site, allowing users to record videos and photographs, modify them with various digital filters, and share them with their Instagram followers. It is one of the best Python application examples.

The Instagram app changed the landscape of digital photography by making it more accessible and popular, instantly defining new marketing norms, and expanding lines of creativity.

Instagram, with roughly 400 million active users per day, disproves the notion that Python applications are not scalable. Instagram engineer Hui Ding stated that their engineering slogan is ‘Do the simple things first,’ which is exactly what Python requires of its developers.

Netflix

Netflix is the world’s leading internet television network, with over 33 million subscribers in 40 countries watching over one billion hours of TV shows and movies per month, including Netflix’s original series. Netflix’s technical blog claims that −

“Netflix developers are free to use the technologies that are best suited for the job. Python is becoming increasingly popular among developers due to its extensive batteries-included standard library, short and clear yet expressive syntax, vast developer community, and richness of third-party libraries available to solve a given problem.”

Pinterest

Pinterest is a social media network that allows its users to search for and save any data on the Internet. This image-based platform saves data as GIFs, short videos, and photos. It boasts a dynamic user base of over 335 million and a strong presence and user engagement for a broad range of topics such as technology, fashion, science, food, and DIY.

This platform is used by people to subscribe to other users and share boards. During the early stages of developing the mobile and web applications, the Pinterest team chose Python and a heavily modified Django framework.

Django and Python help Pinterest influence user experiences, assure speedy push notifications, and real-time photo updates, deal with massive amounts of content and keep up with the growing number of users.

Spotify

Spotify’s backend relies heavily on Python, which enables it to manage functions like Discover and Radio, features that depend entirely on each customer’s unique musical preferences.

Spotify describes why it employs Python for project development in one of its blogs, stating −

“Spotify is thought to place a high value on speed. Python fits well into this approach since it provides huge progress in project development speed. We also make extensive use of Python frameworks to facilitate IO-based services.”

Uber

Uber is a ride-hailing service that also provides food delivery, peer-to-peer ridesharing, and bicycle sharing (among other things). Consider this: the corporation operates in 785 metropolitan regions worldwide and has an estimated 122 million subscribers. That’s a lot of numbers.

Reddit

Reddit is a web content ranking, discussion, and social news aggregation platform. It enables registered users to submit content in the form of text entries, links, and images, which can then be upvoted or downvoted by other members. It is one of the most inspiring Python app examples; as of February 2023, it had around 542 million visitors per month.

Dropbox

Dropbox is a well-known web-based hosting service that offers file synchronization, cloud storage, client software, and personal cloud storage. Dropbox’s Python-based storage service is used by people who want to access any file on their devices from any location.

Dropbox is available for iOS, Android, Windows, Linux, and Mac computers. It coordinates and synchronizes files across multiple devices through the cloud.

According to its software engineers, employing Python results in readability, excellent support, and simplicity of memorization. Python provides a consistent and quick development cycle. As a result, any new features may be rapidly implemented, distributed, and tested.

Facebook

Python is a significant part of the Facebook technology stack, accounting for 21% of the codebase. Facebook has upgraded its Python code to 3.4 and has released open-source projects that are solely for Python 3. With Python, the Facebook team can reduce the amount of code they write. They also improved the app’s efficiency and the consistency of the infrastructure.

Python is also utilized in a few critical Facebook services. Tornado is used to manage multiple connections at the same time. Tornado is extremely similar to the Django web framework in terms of security and user authentication. Tornado updates the users’ news feed on a regular basis. Python has thus far proved useful in −

Making full-scale placement possible

Accepting communication between network devices

Assisting with burn-in testing

Helping with server imaging

Auto-remediation made possible

Detecting several errors

Examining server performance

Automating maintenance tasks

Quora

The Quora engineering team chose Python because they were confident that it would continue to evolve in a direction appropriate for their codebase. Python won them over on the following criteria −

Readability

A stable ecosystem

Simple syntax

Capability to create more functions with fewer code lines

Backend and frontend development efficiency

Creating applications that are more interoperable

Fewer efforts, faster progress, and cost-cutting measures

Conclusion

Python was once utilized for rough draughts and startup development since it was easy and inexpensive. However, the simplest solutions are frequently the most dependable. The more pieces a mechanism has, the more likely something will break or someone will make a mistake, as many huge corporations discovered the hard way. That’s why they selected Python, and why so many of the world’s most popular apps are written in Python.
