How To Get Started In Final Cut Pro

Final Cut Pro X is Apple’s professional video-editing suite. It’s changed a lot since the old days of Final Cut Pro 7, and while not everyone has been happy with the change, the program is undeniably easier for beginners to handle. With iMovie-like snapping and an interface run via drag-and-drop, you’ll have no problem getting started with Final Cut Pro.

Installing Final Cut Pro X

You can purchase Final Cut Pro X from the Mac App Store for $299. Unfortunately, there’s no demo or trial available. Once you have the program downloaded and installed, open it for the first time.

By default, Final Cut Pro (FCP to its friends) will set itself up with some useful defaults.

Importing Media

Before you can edit anything, you need to get your raw video files into Final Cut Pro. The example below covers the case of digital video files that have already been transferred to a local hard drive. If you want to follow along with the same media files, download them from Ripple Training.

Working with the Timeline

The timeline at the bottom of the screen is where you’ll make your edited video. This is where you’ll put clips that you want in the final cut. Here, we can trim clips, change clip order, apply effects – nearly everything you do in FCP happens in the Timeline.


The easiest way to bring video files into the timeline is to drag and drop. Start by dragging all the clips in your media library down into the timeline. We’re creating what’s called an “assembly edit:” all the clips we want to include in the final cut, in roughly the order we want them to appear.

Once they are all there, press the Spacebar to play back the sequence. As you’re watching, consider how you can change the edit to improve it. Are there clips that need to be shortened because they’re boring? Does the clip start before the mic was recording, or with undesirable handling noise? Which part of the clip is the most important to portray your narrative? Everything has a narrative, even tutorial projects.


If you think you need to trim some clips down, you’re absolutely right. It’s really easy, too. Press T to bring up the trimming tool. Your cursor will change to reflect the tool selection. Drag your cursor down to a space between two clips.

As you trim, you’ll see that the other clips automatically “stick” together, even when you change the length of one clip. That’s a huge help for us since it avoids accidental black frames.


Audio clips are added to the timeline and edited just like video clips. Drag the audio file into the timeline to place it.

You’ll see a waveform appear, which describes the track’s volume level. You may notice the audio track is shorter than your clips. If so, you can try cutting with the audio. For extra credit, you could try to create a seamless extension of the existing track. That’s not easy, but the Blade tool (B) will be helpful.


Drag the transition you want on to the boundary between two clips to apply it. Note that you may not be able to place a transition if you have the clip extended to its maximum length.

Exporting Videos

You can adjust the export settings in the “Settings” tab of the sharing window.

For this tutorial, the defaults are fine, though they will make a rather large file. Experiment with changing settings at some point to see what kind of effect they have.


This has obviously been a barebones introduction to the most important parts of Final Cut Pro. It will help you take your first steps as an editor, but you’ll want to seek out other training resources to get a more complete picture.

Alexander Fox

Alexander Fox is a tech and science writer based in Philadelphia, PA with one cat, three Macs and more USB cables than he could ever use.



Beyond Sheets: Get Started With Google BigQuery

This tutorial is written for Google Sheets users who have datasets that are too big or too slow to use in Google Sheets. It’s written to help you get started with Google BigQuery.

If you’re experiencing slow Google Sheets that no amount of clever tricks will fix, or you work with datasets that are outgrowing the 10 million cell limit of Google Sheets, then you need to think about moving your data into a database.

As a Google user, probably the best and most logical next step is to get started with Google BigQuery and move your data out of Google Sheets and into BigQuery.

We’ll explore five topics:

By the end of this tutorial, you will have created a BigQuery account, uploaded a dataset from Google Sheets, written some queries to analyze the data and exported the results back to Google Sheets to create a chart.

You’ll also do the same analysis side-by-side in a Google Sheet, so you can understand exactly what’s happening in BigQuery.

I’ve highlighted the action steps throughout the tutorial, to make it super easy for you to follow along:

Google BigQuery exercise steps are shown in blue.

Actions for you to do in Google BigQuery.

Google Sheet exercise steps are shown in green.

Section 1: What is BigQuery?

Google BigQuery is a data warehouse for storing and analyzing huge amounts of data.

Officially, BigQuery is a serverless, highly-scalable, and cost-effective cloud data warehouse with an in-memory BI Engine and machine learning built in.

This is a formal way of saying that BigQuery:

Works with any size of data (thousands, millions, billions of rows…)

Is easy to set up, because Google handles the infrastructure

Grows as your data grows

Is good value for money, with a generous free tier and pay-as-you-go beyond that

Is lightning fast

Integrates seamlessly with other Google tools, like Sheets and Data Studio

Can import and export data from and to many sources

Has built-in machine learning, so predictive modeling can be set up quickly

What’s the difference between BigQuery and a “regular” database?

BigQuery is a database optimized for storing and analyzing data, not for updating or deleting data.

It’s ideal for data that’s generated by e-commerce, operations, digital marketing, engineering sensors etc. Basically, transactional data that you want to analyze to gain insights.

A regular database is suitable for data that is stored, but also updated or deleted. Think of your social media profile or customer database. Names, emails, addresses, etc. are stored in a relational database. They frequently need to be updated as details change.

Section 2: Google BigQuery Setup

It’s super easy to get started with Google BigQuery!

There are two ways to get started: 1) use the free sandbox account (no billing details required), or 2) use the free tier (requires you to enter billing details, but you’ll also get $300 free Cloud credits).

In either case, this tutorial won’t cost you anything in BigQuery, since the volume of data is so tiny.

We’ll proceed using the sandbox account, so that you don’t have to enter any billing details.

Step 1: Set up BigQuery

Follow these steps:

Go to the Google Cloud BigQuery homepage

A new project called “My First Project” is automatically created

Here’s that process shown as a GIF:

You’re ready for Step 2 below.

BigQuery Console

Here’s what you can see in the console:

The SANDBOX tag to tell you you’re in the sandbox environment

Message to upgrade to the free trial and $300 credit (may or may not show)

UPGRADE button to upgrade out of the Sandbox account

ACTIVATE button to claim the free $300 credit

The current project and where to create new projects

The Query editor window where you type your SQL code

Current project resource

Button to create a new dataset for this project (see below)

Query outputs and table information window

What is the free Sandbox Account?

The sandbox account is an option that lets you use BigQuery without having to enter any credit card information. There are limits to what you can do, but it gives you peace of mind that you won’t run up any charges whilst you’re learning.

In the sandbox account:

Tables and views expire after 60 days

You get 10 GB of storage for free

And 1 TB of query processing each month

It’s more than enough to do everything in this tutorial today.

How to set up the BigQuery sandbox (YouTube video from Google Cloud)

BigQuery Pricing for Regular Accounts

Unlike Google Sheets, you have to pay to use BigQuery based on your storage and processing needs.

However, there is a sandbox account for free experimentation (see below) and then a generous free tier to continue using BigQuery.

In fact, if you’re working with datasets that are only just too big for Sheets, it’ll probably be free to use BigQuery or very cheap.

BigQuery charges for data storage, streaming inserts, and for querying data, but loading and exporting data are free of charge.

Your first 1 TB (1,000 GB) per month is free.

Full BigQuery pricing information can be found here.

Section 3: How to get your data into BigQuery

Extracting, loading and transforming (ELT) is often the most challenging and time-consuming part of a data analysis project. It’s the engineering-heavy stage where the heavy lifting happens.

You can load data into BigQuery in a number of ways:

From a readable data source (such as your local machine)

From Google Sheets

From other Google services, such as Google Ad Manager and Google Ads

Use a third-party data integration tool, e.g. Supermetrics, Stitch

Use the CIFL BigQuery connector

Write Apps Script to upload data

From Google Cloud Storage, such as Google Cloud SQL

In this tutorial, we’ll look at loading data from a Google Sheet into BigQuery.

Get started with Google BigQuery: Dataset For This Tutorial

Step 2: Make a copy of the datasets for this tutorial

Make a copy of these Google Sheets in your Drive folder:

Brooklyn Bridge pedestrian traffic

Bicycle Crossings Of New York City Bridges

You might want to make a SECOND copy in your Drive folder too, so you can keep one copy untouched for the upload to BigQuery and use the second copy for doing the follow-along analysis in Google Sheets.

The first dataset is a record of pedestrian traffic crossing Brooklyn Bridge in New York City (source).

It’s only 7,000 rows, so it could be easily analyzed in Sheets of course, but we’ll use it here so that you can do the same steps in BigQuery and in Sheets.

The second dataset is a daily total of bike counts for New York’s East River bridges (source).

There’s nothing inherently wrong with putting “small” data into BigQuery. Yes, it’s designed for truly gigantic datasets (billions of rows and beyond), but it works equally well on data of any size.

Back in the BigQuery Console, you need to set up a project before you can add data to it.

Get started with Google BigQuery: Loading data From A Google Sheet

Think of the Project as a folder in Google Drive, the Dataset as a Google Sheet and the Table as individual Sheet within that Google Sheet.

The first step to get started with Google BigQuery is to create a project.

In step 1, BigQuery will have automatically generated a new project for you, called “My First Project”.

If it didn’t, or you want to create another new project, here’s how.

Step 3: Create a new Project

Give it a name, organization (your domain) and location (parent organization or folder).

Step 4: Create a new Dataset

Name it “start_bigquery”. You’re not allowed to use any spaces or special characters apart from the underscore.

Step 5: Create a new Table

You want to select “Drive”, add the URL and set the file format to Google Sheets.

Name your table “brooklyn_bridge_pedestrians”.

Choose Auto detect schema.

Under Advanced settings, tell BigQuery you have a single header row to skip by entering the value 1.

Your settings should look like this:

Section 4: Analyzing Data in BigQuery

Google BigQuery uses Structured Query Language (SQL) to analyze data.

The Google Sheets Query function uses a similar SQL style syntax to parse data. So if you know how to use the Query function then you basically know enough SQL to get started with Google BigQuery!

Basic SQL Syntax for BigQuery

The basic SQL syntax to write queries looks like this:

LIMIT restricts the answer to X rows

You’ll see all of these keywords and more in the exercises below.
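Because BigQuery SQL shares its core clauses with other SQL dialects, you can experiment with the clause order locally using Python’s built-in sqlite3 module. This is just an illustrative sketch; the table and data below are made up, not the tutorial dataset:

```python
import sqlite3

# In-memory toy table standing in for the pedestrian data
# (table and column names here are illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pedestrians (day TEXT, weather TEXT, count INTEGER)")
conn.executemany(
    "INSERT INTO pedestrians VALUES (?, ?, ?)",
    [("2023-10-01", "clear", 120), ("2023-10-01", "snow", 40),
     ("2023-10-02", "clear", 200), ("2023-10-02", "sleet", 30)],
)

# The standard clause order: SELECT, FROM, WHERE, GROUP BY, HAVING, ORDER BY, LIMIT
rows = conn.execute("""
    SELECT day, SUM(count) AS total
    FROM pedestrians
    WHERE weather = 'clear'
    GROUP BY day
    HAVING SUM(count) > 100
    ORDER BY total DESC
    LIMIT 10
""").fetchall()
print(rows)  # [('2023-10-02', 200), ('2023-10-01', 120)]
conn.close()
```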

Get started with Google BigQuery: First Query

The BigQuery console provides a button that gives you a starter query.

Step 6: Write your first query

SELECT FROM `start-bigquery-294922.start_bigquery.brooklyn_bridge_pedestrians` LIMIT 1000

Modify it by adding a * between the SELECT and FROM, and reducing the number after LIMIT to 10:

SELECT * FROM `start-bigquery-294922.start_bigquery.brooklyn_bridge_pedestrians` LIMIT 10

The output of this query will be 10 rows of data showing under the query editor:


You just wrote your first query in Google BigQuery.

Let’s continue and analyze the dataset:

Exercise 2: Analyzing Data In BigQuery

Run through the following steps:

Step 7: tell the story of one row

Write a query that selects all the columns (SELECT *) and a limited number of rows (e.g. LIMIT 10), as you did in step 6 above.

Run that query and look at the output. Scan across one whole row. Look at every column and think about what data is stored there.

Think about doing the equivalent step in Google Sheets. Look at your dataset and scroll to the right, telling the story of a single row.

We do this step to understand our data, before getting too immersed in the weeds.

Select Specific Columns

Step 8: Select specific columns

Select specific columns by writing the column names into your query.

SELECT hour_beginning, location, Pedestrians, weather_summary FROM `start-bigquery-294922.start_bigquery.brooklyn_bridge_pedestrians` LIMIT 10

Math Operations

Let’s find out the total number of pedestrians that crossed the Brooklyn Bridge across the whole time period.

Step 9: Calculate total in Google Sheets

Open the Google Sheet you copied in Step 2, called “Copy of Brooklyn Bridge pedestrian count dataset”

Add this simple SUM function to cell C7298 to calculate the total:


This gives an answer of 5,021,692

Let’s see how to do that in BigQuery:

Step 10: Math operations in BigQuery

Write a query with the pedestrians column and wrap it with a SUM function:

SELECT SUM(Pedestrians) AS total_pedestrians FROM `start-bigquery-294922.start_bigquery.brooklyn_bridge_pedestrians`

This gives the same answer of 5,021,692

You’ll notice that I gave the output a new column name using the code “AS total_pedestrians”. This is similar to using the LABEL clause in the QUERY function in Google Sheets.
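SUM and AS behave the same way in other SQL dialects, so here’s a tiny local sketch using sqlite3 (the table and numbers are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE brooklyn (Pedestrians INTEGER)")
conn.executemany("INSERT INTO brooklyn VALUES (?)", [(100,), (250,), (75,)])

# SUM aggregates the whole column into one row; AS renames the
# output column, much like LABEL in the Sheets QUERY function.
cur = conn.execute("SELECT SUM(Pedestrians) AS total_pedestrians FROM brooklyn")
print(cur.description[0][0])  # total_pedestrians
total = cur.fetchone()[0]
print(total)  # 425
conn.close()
```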

Filtering Data

In SQL, the WHERE clause is used to filter rows of data.

It acts in the same way as the filter operation on a dataset in Google Sheets.

Step 11: Filtering data in Google Sheets

Then choose “sleet” and “snow” as your filter values.

Hit OK to implement the filter.

You end up with 61 rows of data showing only the “sleet” or “snow” rows.

Now let’s see that same filter in BigQuery.

Step 12: WHERE filter keyword

Add the WHERE clause after the FROM line, and use the OR statement to filter on two conditions.

SELECT * FROM `start-bigquery-294922.start_bigquery.brooklyn_bridge_pedestrians` WHERE weather_summary = 'snow' OR weather_summary = 'sleet'

Check the count of the rows output by this query. It’s 61, which matches the row count from your Google Sheet.
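The WHERE … OR pattern works identically in any SQL engine. A quick sqlite3 sketch with made-up weather rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE brooklyn (weather_summary TEXT, Pedestrians INTEGER)")
conn.executemany("INSERT INTO brooklyn VALUES (?, ?)",
                 [("clear", 300), ("snow", 40), ("sleet", 25),
                  ("rain", 90), ("snow", 55)])

# WHERE keeps only rows matching the condition; OR combines two conditions.
rows = conn.execute("""
    SELECT weather_summary, Pedestrians
    FROM brooklyn
    WHERE weather_summary = 'snow' OR weather_summary = 'sleet'
""").fetchall()
print(len(rows))  # 3 rows survive the filter
conn.close()
```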

Ordering Data

Another common operation we want to do to understand our data is sort it. In Sheets we can either sort through the filter menu options or through the Data menu.

Step 13: Sorting data in Google Sheets

Remove the sleet and snow filter you applied above.

(Quick aside: it’s amazing to still see so many people walking across the bridge in sub-zero temps!)

Let’s recreate this sort in BigQuery.

Step 14: ORDER BY sort keyword

Add the ORDER BY clause to your query, after the FROM clause:

SELECT * FROM `start-bigquery-294922.start_bigquery.brooklyn_bridge_pedestrians` ORDER BY temperature ASC;

Use the keyword ASC to sort ascending (A – Z) or the keyword DESC to sort descending (Z – A).

You might notice that the first two records that show up have “null” in the temperature column, which means that no temperature value was recorded for those rows or it’s missing.
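You can see the same null-first behavior in an ascending sort with sqlite3 (toy data, for illustration only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE brooklyn (temperature REAL, Pedestrians INTEGER)")
conn.executemany("INSERT INTO brooklyn VALUES (?, ?)",
                 [(55.0, 300), (None, 10), (20.0, 40), (None, 5)])

# Ascending sort: like BigQuery, SQLite sorts NULL temperatures first.
temps = [r[0] for r in conn.execute(
    "SELECT temperature FROM brooklyn ORDER BY temperature ASC").fetchall()]
print(temps)  # [None, None, 20.0, 55.0]
conn.close()
```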

Let’s filter them out with the WHERE clause, so you can see how the WHERE and ORDER BY fit together.

Step 15: Filter out null values

The WHERE clause comes after the FROM clause but before the ORDER BY.

Remove the nulls by using the keyword phrase “IS NOT NULL”.

SELECT * FROM `start-bigquery-294922.start_bigquery.brooklyn_bridge_pedestrians` WHERE temperature IS NOT NULL ORDER BY temperature ASC;

Aggregating Data

In Google Sheets, we group data with a pivot table.

Typically you choose a category for the rows and aggregate (summarize) the data into each category.

In this dataset, we have a row of data for each hour of each day. We want to group all 24 rows into a single summary row for each day.

Step 16: Pivot tables in Google Sheets

In the pivot table, add hour_beginning to the Rows.

Uncheck the “Show totals” checkbox.

Select “Day of the month” from the list of options.

Add hour_beginning to Rows again, and move it so it’s the top category in Rows.

Check the “Repeat row labels” checkbox.

Add Pedestrians field to the Values section, and leave it set to the default SUM.

Your pivot table should look like this, with the total pedestrian counts for each day:

Now let’s recreate this in BigQuery.

If you’ve ever used the QUERY function in Google Sheets then you’re probably familiar with the GROUP BY keyword. It does exactly what the pivot table in Sheets does and “rolls up” the data into the summary categories.

Step 17: GROUP BY in BigQuery to aggregate data

First off, you need to use the EXTRACT function to extract the date from the timestamp in BigQuery.

This query selects the extracted date and the original timestamp, so you can see them side-by-side:

SELECT EXTRACT(DATE FROM hour_beginning) AS bb_date, hour_beginning FROM `start-bigquery-294922.start_bigquery.brooklyn_bridge_pedestrians`

The EXTRACT DATE function turns “2023-10-01 00:00:00 UTC” into “2023-10-01”, which lets us aggregate by the date.

Modify the query above to add the SUM(Pedestrians) column, remove the “hour_beginning” column you no longer need and add the GROUP BY clause, referencing the grouping column by the alias name you gave it “bb_date”

SELECT EXTRACT(DATE FROM hour_beginning) AS bb_date, SUM(Pedestrians) AS bb_pedestrians FROM `start-bigquery-294922.start_bigquery.brooklyn_bridge_pedestrians` GROUP BY bb_date

The output of this query will be a table that matches the data in your pivot table in Google Sheet. Great work!
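To see the same extract-then-group pattern locally, SQLite’s date() function plays the role of BigQuery’s EXTRACT(DATE FROM …). This is a sketch with invented hourly rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE brooklyn (hour_beginning TEXT, Pedestrians INTEGER)")
conn.executemany("INSERT INTO brooklyn VALUES (?, ?)", [
    ("2023-10-01 00:00:00", 10), ("2023-10-01 01:00:00", 20),
    ("2023-10-02 00:00:00", 5),  ("2023-10-02 01:00:00", 15),
])

# date() strips the time, so GROUP BY collapses the hourly rows
# into one summary row per day.
rows = conn.execute("""
    SELECT date(hour_beginning) AS bb_date, SUM(Pedestrians) AS bb_pedestrians
    FROM brooklyn
    GROUP BY bb_date
    ORDER BY bb_date
""").fetchall()
print(rows)  # [('2023-10-01', 30), ('2023-10-02', 20)]
conn.close()
```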

Functions in BigQuery

You’ll notice we used a special function (EXTRACT) in that previous query.

Like Google Sheets, BigQuery has a huge library of built-in functions. As you make progress on your BigQuery journey, you’ll find more and more of these functions to use.

For more information on functions in BigQuery, have a look at the function reference.

There’s also this handy tool from Analytics Canvas that converts Google Sheets functions into their BigQuery SQL code equivalent.

Filtering Aggregated Data

We saw the WHERE clause earlier, which lets you filter rows in your dataset.

However, if you aggregate your data with a GROUP BY clause and you want to filter this grouped data, you need to use the HAVING keyword.


WHERE = filter original rows of data in dataset

HAVING = filter aggregated data after a GROUP BY operation

To conceptualize this, let’s apply the filter to our aggregate data in the Google Sheet pivot table.

Step 18: Pivot table filter in Google Sheets

Add hour_beginning to the filter section of your pivot table in Google Sheets.

This filter removes rows of data in your Pivot Table where the data is on or after 1 November 2023. It leaves just the October 2023 data.

By now, I think you know what’s coming next.

Let’s apply that same filter condition in BigQuery using the HAVING keyword.

Step 19: HAVING filter keyword

Add the HAVING clause to your existing query, to filter out data on or after 1 November 2023.

Only data that satisfies the HAVING condition (less than 2023-11-01) is included.

SELECT EXTRACT(DATE FROM hour_beginning) AS bb_date, SUM(Pedestrians) AS bb_pedestrians FROM `start-bigquery-294922.start_bigquery.brooklyn_bridge_pedestrians` GROUP BY bb_date HAVING bb_date < '2023-11-01'

The output of this query is 31 rows of data, for each day of the month of October.
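Here’s the WHERE-vs-HAVING distinction in a runnable sqlite3 sketch (four invented days of data, so the October filter keeps two of them):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE brooklyn (hour_beginning TEXT, Pedestrians INTEGER)")
conn.executemany("INSERT INTO brooklyn VALUES (?, ?)", [
    ("2023-10-30 00:00:00", 100), ("2023-10-31 00:00:00", 200),
    ("2023-11-01 00:00:00", 300), ("2023-11-02 00:00:00", 400),
])

# HAVING filters the *grouped* rows: only days before 1 November survive.
rows = conn.execute("""
    SELECT date(hour_beginning) AS bb_date, SUM(Pedestrians) AS bb_pedestrians
    FROM brooklyn
    GROUP BY bb_date
    HAVING bb_date < '2023-11-01'
    ORDER BY bb_date
""").fetchall()
print(rows)  # [('2023-10-30', 100), ('2023-10-31', 200)]
conn.close()
```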

Get started with Google BigQuery: Joining Data

Mind if I join you?

JOIN pulls multiple tables together, like the VLOOKUP function in Google Sheets. Let’s start in your Google Sheet.

Step 20: Vlookup to join data tables in Google Sheets

Create a new blank Sheet inside your Google Sheet.

Add this IMPORTRANGE formula to import the bicycle bridge data:

Back in the pivot table sheet, use a VLOOKUP to bring the Brooklyn Bridge bicycle data next to the pedestrian data.

Put the VLOOKUP in column D, next to the pedestrian count values:

=VLOOKUP( DATE(2023,10,B2) , Sheet2!A1:F , 6 , false )

Drag the formula down the rows to complete the dataset.

The data in your Sheet now looks like this:

That’s great!

We summarized the pedestrian data by day and joined the bicycle data to it, so you can compare the two numbers.

As you can see, there are around 10k – 20k pedestrian crossings per day and about 2k – 3k bike crossings per day.

Joining tables in BigQuery

Let’s recreate this table in BigQuery, using a JOIN.

Step 21: Upload bicycle data to BigQuery

Following step 5 above, create a new table in your start_bigquery dataset and upload the second dataset, of bike data for NYC bridges from October 2023.

Name your table “nyc_bridges_bikes”

Your project should now look like this in the Resources pane in the left sidebar:

What we want to do now is take the table that you created above, with pedestrian data per day, and add the bike counts for each day to it.

To do that we use an INNER JOIN.

There are several different types of JOIN available in SQL, but we’ll only look at the INNER JOIN in this article. It creates a new table with only the rows from each of the constituent tables that meet the join condition.

In our case the join condition is matching dates from the pedestrian table and the bike table.

We’ll end up with a table consisting of the date, the pedestrian data and the bike data.

Ready? Let’s go.

Step 22: JOIN the datasets in BigQuery

First, wrap the query you wrote above with the WITH clause, so you can refer to the temporary table that’s created by the name “pedestrian_table”.

WITH pedestrian_table AS ( SELECT EXTRACT(DATE FROM hour_beginning) AS bb_date, SUM(Pedestrians) AS bb_pedestrians FROM `start-bigquery-294922.start_bigquery.brooklyn_bridge_pedestrians` GROUP BY bb_date HAVING bb_date < '2023-11-01' )

Next, select both columns from the pedestrian table and one column from the bike table:

SELECT pedestrian_table.bb_date, pedestrian_table.bb_pedestrians, bike_table.Brooklyn_Bridge AS bb_bikes FROM pedestrian_table

Of course, you need to add in the bike table to the query so the bike data can be retrieved:

INNER JOIN `start-bigquery-294922.start_bigquery.nyc_bridges_bikes` AS bike_table

Finally, specify the join condition, which tells the query what columns to match:

ON pedestrian_table.bb_date = bike_table.Date

Phew, that's a lot!

Here's the full query:

WITH pedestrian_table AS ( SELECT EXTRACT(DATE FROM hour_beginning) AS bb_date, SUM(Pedestrians) AS bb_pedestrians FROM `start-bigquery-294922.start_bigquery.brooklyn_bridge_pedestrians` GROUP BY bb_date HAVING bb_date < '2023-11-01' ) SELECT pedestrian_table.bb_date, pedestrian_table.bb_pedestrians, bike_table.Brooklyn_Bridge AS bb_bikes FROM pedestrian_table INNER JOIN `start-bigquery-294922.start_bigquery.nyc_bridges_bikes` AS bike_table ON pedestrian_table.bb_date = bike_table.Date

You’ll notice that the names of the columns in our SELECT clause are preceded by the table name, e.g. “pedestrian_table.bb_date”.

This ensures there is no confusion over which columns from which tables are being requested. It’s also necessary when you join tables that have common column headings.

The output of this query is the same as the table you created in your Google Sheet step 20 (using the pivot table and VLOOKUP).
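The WITH + INNER JOIN pattern also runs unchanged in SQLite, so here’s a self-contained sketch with two tiny made-up tables standing in for the pedestrian and bike data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE brooklyn (hour_beginning TEXT, Pedestrians INTEGER);
    CREATE TABLE bikes (Date TEXT, Brooklyn_Bridge INTEGER);
    INSERT INTO brooklyn VALUES
        ('2023-10-01 00:00:00', 10), ('2023-10-01 01:00:00', 20),
        ('2023-10-02 00:00:00', 30);
    INSERT INTO bikes VALUES ('2023-10-01', 2000), ('2023-10-02', 2500);
""")

# WITH names the aggregated pedestrian query as a temporary table;
# INNER JOIN matches its dates against the bike table, keeping only
# dates that appear in both.
rows = conn.execute("""
    WITH pedestrian_table AS (
        SELECT date(hour_beginning) AS bb_date,
               SUM(Pedestrians) AS bb_pedestrians
        FROM brooklyn
        GROUP BY bb_date
    )
    SELECT pedestrian_table.bb_date,
           pedestrian_table.bb_pedestrians,
           bikes.Brooklyn_Bridge AS bb_bikes
    FROM pedestrian_table
    INNER JOIN bikes ON pedestrian_table.bb_date = bikes.Date
    ORDER BY bb_date
""").fetchall()
print(rows)  # [('2023-10-01', 30, 2000), ('2023-10-02', 30, 2500)]
conn.close()
```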

Formatting Your Queries

Step 23: Formatting Your Queries


Comments are written between /* and */:

/* everything between the slash-stars is ignored by the program when it's run */

It’s also a good habit to put SQL keywords on separate lines, to make it more readable.

Section 5: Export Data Out Of BigQuery

You have a few options to export data out of BigQuery.

Save as a CSV file

Save as a JSON file

Export query results to Google Sheets (up to 16,000 rows)

Copy to Clipboard

In this tutorial, we’re going to export the data out of BigQuery and back into a Google Sheet, to create a chart. We’re able to do this because the summary dataset we’ve created is small (it’s aggregated data we want to use to create a chart, not the row-by-row data).

Explore BigQuery Data in Sheets or Data Studio

If you want to create a chart based on hundreds of thousands or millions of rows of data, then you can explore the data in Google Sheets or Data Studio directly, without taking it out of BigQuery.

Explore in Google Sheets using Connected Sheets (Enterprise customers only)

Explore directly in Data Studio

Get started with Google BigQuery: Export to Google Sheets

In this tutorial, the output table is easily small enough to fit in Google Sheets, so let’s export the data out of BigQuery and into Sheets.

There, we’ll create a chart showing the pedestrian and bike traffic across the Brooklyn Bridge.

Step 24: Export Data Out Of BigQuery

Run your query from step 22 above, which outputs a table with date, pedestrian count and bike count.

Hit Save.

Select Open in the toast popup that tells you a new Sheet has been created, or find it in your Drive root folder (the top folder).

The data now looks like this in the new Sheet:

Yay! Back on familiar territory!

From here, you can do whatever you want with your data.

I chose to create a simple line chart to compare the daily foot and bike traffic across Brooklyn Bridge:

Step 25: Display the data in a chart in Google Sheets

Select the line chart (if it isn’t selected as the default).

Fix the title and change the column names so they display better in the chart.

Under the Horizontal Axis option, check the “Treat labels as text” checkbox.

See how much information this chart gives you, compared to thousands of rows of raw data.

It tells you the story of the pedestrian and bike traffic crossing the Brooklyn Bridge.


You’ve completed your first end-to-end BigQuery + Google Sheets data analysis project.

Seriously, well done!

Get started with Google BigQuery: Resources

BigQuery Documentation

Explore the public datasets in BigQuery for query practice.

The full code for this tutorial “Get started with Google BigQuery” is also available here on GitHub.

SQL For Beginners And Analysts – Get Started With SQL Using Python


SQL is a mandatory language every analyst and data science professional should know

Learn about the basics of SQL here, including how to work with SQLite databases using Python

SQLite – The Lightweight and Quick Response Database!

SQL is a language every analyst and data scientist should know. There’s no escaping from this. You will be peppered with SQL questions in your analytics or data science interview rounds, especially if you’re a fresher in this field.

If you’ve been putting off learning SQL recently, it’s time to get into action and start getting your hands dirty. You would have to learn about databases to work with data so why not start your SQL journey today?

I’ve personally been working with SQL for a while and can attest to how useful it is, especially in these golden data-driven times. SQL is a simple yet powerful language that helps us manage and query data directly from a database, without having to copy it first.

It is also very easy to understand because of the various clauses that are similar to those used in the English language. So writing SQL commands will be a piece of cake for you!

And given the proliferation of data all over the world, every business is looking for professionals who are proficient in SQL. So once you add SQL skill to your resume, you will be a hotshot commodity out in the market. Great, but where to begin?

There are many different database systems out there, but the simplest and easiest to work with is SQLite. It is fast, compact, and stores data in an easy to share file format. It is used inside countless mobile phones, computers, and various other applications used by people every day. And the most amazing part, it comes bundled with Python! Heck, there is a reason why giants like Facebook, Google, Dropbox, and others use SQLite!

In this article, we will explore how to work with databases in Python using SQLite and look into the most commonly used SQL commands. So let’s start by asking the very basic question – what on earth is a database?

Table of Contents

What is a Database?

What is SQL?

Why Should you use SQLite?

Connecting to an SQLite Database

Creating tables using SQL

Inserting values in a table using SQL

Fetching records from a table using SQL

Loading a Pandas DataFrame into SQLite Database

Reading an SQLite Database into a Pandas DataFrame

Querying SQLite Database

Where clause

Group By clause

Order By clause

Having clause

Join clause

Update statement

Delete statement

Drop Table statement

What is a Database?

A database is an organized collection of interrelated data stored in an electronic format.

While there are various types of databases and their choice of usage varies from organization to organization, the most basic and widely used is the Relational Database model. It organizes the data into tables where each row holds a record and is called a tuple. And each column represents an attribute for which each record usually holds a value.

A Relational database breaks down different aspects of a problem into different tables so that storing them and manipulating them becomes an easy task. For example, an e-commerce website maintaining a separate table for products and customers will find it more useful for doing analytics than saving all of the information in the same table.

Database Management System (DBMS) is a software that facilitates users and different applications to store, retrieve, and manipulate data in a database. Relational Database Management System or RDBMS is a DBMS for relational databases. There are many RDBMS like MYSQL, Postgres, SQL Server, etc. which use SQL for accessing the database.

What is SQL?

But wait – we’ve been hearing the word ‘SQL’ since the beginning. What in the world is SQL?

SQL stands for Structured Query Language. It is a querying language designed for accessing and manipulating information from RDBMS.

SQL lets us write queries or sets of instructions to either create a new table, manipulate data or query on the stored data. Being a data scientist, it becomes imperative for you to know the basics of SQL to work your way around databases because you can only perform analysis if you can retrieve data from your organization’s database!

Why Should you use SQLite?

SQLite stores data in variable-length records, which requires less memory and makes it run faster. It is designed for improved performance and reduced cost, and is optimized for concurrency.

The sqlite3 module facilitates the use of SQLite databases with Python. In this article, I will show you how to work with an SQLite database in Python. You don’t need to download SQLite as it is shipped by default along with Python version 2.5 onwards!

Connecting to an SQLite Database

The first step to working with your database is to create a connection with it. We can do this by using the connect() method that returns a Connection object. It accepts a path to the existing database. If no database exists, it will create a new database on the given path.

The next step is to generate a Cursor object using the cursor() method which allows you to execute queries against a database:

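A minimal sketch of these two steps looks like the following. An in-memory database is used here so the snippet is self-contained; passing a file path such as "customer.db" works the same way:

```python
import sqlite3

# connect() returns a Connection object; SQLite creates the database
# if one does not already exist at the given path.
conn = sqlite3.connect(":memory:")  # or e.g. sqlite3.connect("customer.db")

# The Cursor object is what we use to execute queries.
cur = conn.cursor()
```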

You are now ready to execute queries against the database and manipulate the data. But after we have done that, it is very important to do two things:

Commit/save the operations that we performed on the database using the commit() method. If we don’t commit our queries, then any changes we made to the database will not be saved automatically

Close the connection to the database to prevent the SQLite database from getting locked. When an SQLite database is locked, it will not be accessible by other users and will give an error

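In code, those two housekeeping steps look like this (a sketch using an in-memory database so it stands alone):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# ... execute your queries here ...

# Save the changes; uncommitted changes are lost when the
# connection is closed.
conn.commit()

# Release the database so other connections are not locked out.
conn.close()
```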

Creating tables using SQL

Now that we have created a database, it is time to create a table to store values.

Let’s create a table that stores values for the customers of an e-commerce website. It stores values such as a customer id, the id of the product bought, and the customer’s name, gender, and age.

A table in SQL is created using the CREATE TABLE command. Here I am going to create a table called Customer with the following attributes:

User_ID – Id to identify individual customers. This is an Integer data type, Primary key and is defined as Not Null

The Primary key is an attribute or set of attributes that can determine individual records in a table. 

Defining an attribute Not Null will make sure there is a value given to the attribute (otherwise it will give an error).

Product_ID – Id to identify the product that the customer bought. Also defined as Not Null

Name – Name of a customer of Text type

Gender – Gender of a customer of Integer type

Age – Age of the customer of Integer type

Any SQL command can be executed using the execute() method of the Cursor object. You just need to write your query inside quotes, and you may choose to end it with a semicolon (;), which is a requirement in some databases but optional in SQLite. It is good practice, though, so I will include it in my commands.

So, using the execute() method, we can create our table as shown here:

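A sketch of the CREATE TABLE call, run here against an in-memory database so the snippet is self-contained:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Create the Customer table with the attributes described above.
cur.execute("""
    CREATE TABLE Customer (
        User_ID    INTEGER PRIMARY KEY NOT NULL,
        Product_ID INTEGER NOT NULL,
        Name       TEXT,
        Gender     INTEGER,
        Age        INTEGER
    );
""")
conn.commit()
```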

Perfect! Now that we have our table, let’s add some values to it.

Inserting values in a SQL table




A database table is of no use without values. So, we can use the INSERT INTO SQL command to add values to the table. The syntax for the command is as follows:

INSERT INTO table_name (column1, column2, column3, …)

VALUES (value1, value2, value3, …);

But if we are adding values for all the columns in the table, we can just simplify things and get rid of the column names in the SQL statement:

INSERT INTO table_name

VALUES (value1, value2, value3, …);

Like I said before, we can execute SQL statements using the execute() method. So let’s do that!

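A sketch of a single insert; the customer values below are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE Customer (User_ID INTEGER PRIMARY KEY NOT NULL,
               Product_ID INTEGER NOT NULL, Name TEXT, Gender INTEGER, Age INTEGER);""")

# Insert one record into the Customer table.
cur.execute("INSERT INTO Customer VALUES (1, 30, 'John', 1, 19);")
conn.commit()
```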

What if we want to write multiple Insert commands in a single go? We could use the executescript() method instead:

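For example (self-contained sketch with made-up values), executescript() runs a whole script of semicolon-separated statements in one call:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE Customer (User_ID INTEGER PRIMARY KEY NOT NULL,
               Product_ID INTEGER NOT NULL, Name TEXT, Gender INTEGER, Age INTEGER);""")

# Run several INSERT statements as one script.
cur.executescript("""
    INSERT INTO Customer VALUES (2, 44, 'Emma', 0, 26);
    INSERT INTO Customer VALUES (3, 12, 'Aditya', 1, 31);
""")
conn.commit()
```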

Or just simply use the executemany() method without having to repeatedly write the Insert Into command every time! executemany() actually executes an SQL command using an iterator to yield the values:

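A sketch of executemany(), which binds each tuple from an iterable to the ? placeholders in turn (values again made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE Customer (User_ID INTEGER PRIMARY KEY NOT NULL,
               Product_ID INTEGER NOT NULL, Name TEXT, Gender INTEGER, Age INTEGER);""")

# One INSERT template, many rows.
customers = [
    (4, 21, 'Sarah', 0, 29),
    (5, 17, 'Raj',   1, 42),
]
cur.executemany("INSERT INTO Customer VALUES (?, ?, ?, ?, ?);", customers)
conn.commit()
```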

These methods are not limited to the Insert Into command and can be used to execute any SQL statement.

Now that we have a few values in our table, let’s try to fetch those values from the database.

Fetching Records from a SQL table



For fetching values from the database, we use the SELECT command and the attribute values we want to retrieve:

SELECT column1, column2, … FROM table_name;

If you instead wanted to fetch values for all the attributes in the table, use the * character instead of the column names:

SELECT * FROM table_name;

To fetch only a single record from the database, we can use the fetchone() method:
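A self-contained sketch (the sample rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE Customer (User_ID INTEGER PRIMARY KEY NOT NULL,
               Product_ID INTEGER NOT NULL, Name TEXT, Gender INTEGER, Age INTEGER);""")
cur.executemany("INSERT INTO Customer VALUES (?, ?, ?, ?, ?);",
                [(1, 30, 'John', 1, 19), (2, 44, 'Emma', 0, 26)])

# fetchone() returns the next row as a tuple, or None when exhausted.
cur.execute("SELECT * FROM Customer;")
first_row = cur.fetchone()
```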

To fetch multiple rows, you can execute a SELECT statement and iterate over it directly using only a single call on the Cursor object:
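For example, continuing the same sketch (sample data invented), the Cursor returned by execute() is itself iterable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE Customer (User_ID INTEGER PRIMARY KEY NOT NULL,
               Product_ID INTEGER NOT NULL, Name TEXT, Gender INTEGER, Age INTEGER);""")
cur.executemany("INSERT INTO Customer VALUES (?, ?, ?, ?, ?);",
                [(1, 30, 'John', 1, 19), (2, 44, 'Emma', 0, 26)])

# Loop over the cursor directly -- no fetch call needed.
rows = []
for row in cur.execute("SELECT Name, Age FROM Customer;"):
    rows.append(row)
```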

But a better way of retrieving multiple records would be to use the fetchall() method which returns all the records in a list format:
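A sketch of fetchall() on the same invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE Customer (User_ID INTEGER PRIMARY KEY NOT NULL,
               Product_ID INTEGER NOT NULL, Name TEXT, Gender INTEGER, Age INTEGER);""")
cur.executemany("INSERT INTO Customer VALUES (?, ?, ?, ?, ?);",
                [(1, 30, 'John', 1, 19), (2, 44, 'Emma', 0, 26)])

# fetchall() returns every remaining row as a list of tuples.
cur.execute("SELECT * FROM Customer;")
all_rows = cur.fetchall()
```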

Awesome! We now know how to insert values into a table and fetch those values. But since data scientists love working with Pandas dataframes, wouldn’t it be great to somehow load the values from the database directly into a dataframe?

Yes, there is a way, and I am going to show you how. But first, I will show you how to store your Pandas dataframe in a database, which is a much better way to store your data!

Loading Pandas DataFrame into SQLite database

Pandas lets us quickly write the data from a dataframe into a database using the to_sql() method. The method takes the table name and a Connection object as its arguments.

I will use the dataframes from the Food Demand Forecasting hackathon on the DataHack platform which has three dataframes: order information, meal information, and center fulfillment information.

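The hackathon files are not reproduced here, so the sketch below uses a tiny stand-in for the order-information dataframe; the column names are assumptions based on the description above, and the real file has far more rows and columns:

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")

# Tiny stand-in for the hackathon's order-information dataframe.
train = pd.DataFrame({
    "id":         [1, 2, 3],
    "center_id":  [55, 55, 77],
    "meal_id":    [1885, 1993, 2539],
    "num_orders": [177, 270, 189],
})

# to_sql() writes the dataframe into the database as a table named "train".
train.to_sql("train", conn, index=False)
```

The meal and center-fulfillment dataframes can be written the same way, each to its own table.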

We now have three tables in the database for each dataframe. It is easy to check them using the read_sql_query() method which we will explore in the next section where we will see how to load a database into a Pandas dataframe.

Reading an SQLite Database into a Pandas DataFrame

The read_sql_query() method of the Pandas library returns a DataFrame corresponding to the result of an SQL query. It takes as an argument the query and the Connection object to the database.

We can check the values in the tables using the read_sql_query() method:

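A self-contained sketch (a tiny stand-in table is written first so the query has something to read):

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
pd.DataFrame({"id": [1, 2], "num_orders": [177, 270]}).to_sql(
    "train", conn, index=False)

# read_sql_query() returns the query result as a dataframe.
df = pd.read_sql_query("SELECT * FROM train;", conn)
```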

Perfect! Now let’s try to run some queries on these tables and understand a few important SQL commands that will come in handy when we try to analyze data from the database.

Querying our SQLite Database

Where clause

The first important clause is the WHERE clause. It is used to filter the records based on a condition:

SELECT column1, column2, … FROM table_name

WHERE condition;

We can always use the * character if we want to retrieve values for all the columns in the table.

We can use it to query and retrieve only the Indian cuisine meals from the meal table:
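A sketch of that query against a small stand-in meal table (the real table is what yields the 12 records mentioned below; the cuisine column name is assumed from the description):

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
meals = pd.DataFrame({
    "meal_id": [1885, 1993, 2539, 2139],
    "cuisine": ["Indian", "Thai", "Indian", "Italian"],
})
meals.to_sql("meal", conn, index=False)

# WHERE keeps only the rows that satisfy the condition.
df = pd.read_sql_query("SELECT * FROM meal WHERE cuisine = 'Indian';", conn)
```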

Here, we have retrieved all 12 records that matched the given condition. But what if we only wanted the top 5 records that satisfy it? Well, we could use the LIMIT clause in that case.

LIMIT clause returns only the specified number of records and is useful when there are a large number of records in the table.
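A sketch with stand-in data (six Indian meals, so LIMIT actually trims the result):

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
meals = pd.DataFrame({
    "meal_id": range(1, 8),
    "cuisine": ["Indian"] * 6 + ["Thai"],
})
meals.to_sql("meal", conn, index=False)

# LIMIT caps the number of rows returned by the query.
df = pd.read_sql_query(
    "SELECT * FROM meal WHERE cuisine = 'Indian' LIMIT 5;", conn)
```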

Here, we returned only the top 5 records from those that matched our given condition.

Group By statement

GROUP BY statement separates rows into different groups based on an attribute and can be used to apply an aggregate function (COUNT, MIN, MAX, SUM) on the resultant groups:

SELECT column1, column2, … FROM table_name

GROUP BY column_name;

We can use the GROUP BY statement to compare the number of orders for meals that received email promotions to those that did not. We will group the records on the emailer_for_promotion column and apply the COUNT aggregate function on the id column since it contains unique values. This will return the total number of rows belonging to each group:
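A sketch of that query. For brevity the stand-in table carries the emailer_for_promotion flag directly alongside the order id (in the real data these live in separate tables):

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
orders = pd.DataFrame({
    "id": [1, 2, 3, 4, 5],
    "emailer_for_promotion": [0, 0, 0, 1, 1],
})
orders.to_sql("train", conn, index=False)

# GROUP BY forms one group per flag value; COUNT sizes each group.
df = pd.read_sql_query("""
    SELECT emailer_for_promotion, COUNT(id) AS n_orders
    FROM train
    GROUP BY emailer_for_promotion;
""", conn)
```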

Here we can see that there were more orders for meals that did not have an email promotion. But if we want to order our result, we can use the ORDER BY statement.

Order By clause

ORDER BY clause is used to sort the result into ascending or descending order using the keywords ASC or DESC respectively. By default, it sorts the records in ascending order:

SELECT column1, column2, … FROM table_name

ORDER BY column_name ASC|DESC;

Here I have combined two clauses: Group By and Order By. The Group By clause groups the values based on the emailer_for_promotion attribute, and the Order By clause orders the output based on the count of rows in each group. We can combine several clauses to extract more precise information from the database.

To sort the result in descending order, just type in the keyword DESC:
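A sketch combining all three clauses on the same stand-in table as before:

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
orders = pd.DataFrame({
    "id": [1, 2, 3, 4, 5],
    "emailer_for_promotion": [0, 0, 0, 1, 1],
})
orders.to_sql("train", conn, index=False)

# Group, count, then sort the groups from largest to smallest.
df = pd.read_sql_query("""
    SELECT emailer_for_promotion, COUNT(id) AS n_orders
    FROM train
    GROUP BY emailer_for_promotion
    ORDER BY COUNT(id) DESC;
""", conn)
```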

Having clause

The HAVING clause applies a filter on the groups returned by a GROUP BY query. It should not be confused with the WHERE clause, which applies its filter condition before grouping.

HAVING is used to filter records after grouping. Hence, the HAVING clause is always used after the GROUP BY statement:

SELECT column1, column2, … FROM table_name

GROUP BY column_name

HAVING condition;
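A sketch on stand-in data with one group of 20 orders and one of 5, so the HAVING condition keeps exactly one group:

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
orders = pd.DataFrame({
    "id": range(1, 26),
    "emailer_for_promotion": [0] * 20 + [1] * 5,
})
orders.to_sql("train", conn, index=False)

# HAVING filters the groups *after* GROUP BY has formed them.
df = pd.read_sql_query("""
    SELECT emailer_for_promotion, COUNT(id) AS n_orders
    FROM train
    GROUP BY emailer_for_promotion
    HAVING COUNT(id) > 15;
""", conn)
```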

Here, we returned only those groups that had a count of more than 15.

Join clause

The Join clause is a very interesting and important SQL clause. It retrieves and combines data from multiple tables in the same query based on a common attribute:

SELECT column1, column2, … FROM table1

INNER JOIN table2

ON table1.column_name = table2.column_name;

In our database, we can retrieve data from the centers and train tables since they share the common attribute center_id:
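A sketch of that join on tiny stand-in versions of both tables (column names assumed from the description):

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
pd.DataFrame({
    "id": [1, 2, 3],
    "center_id": [11, 13, 11],
    "num_orders": [40, 28, 17],
}).to_sql("train", conn, index=False)
pd.DataFrame({
    "center_id": [11, 13],
    "center_type": ["TYPE_A", "TYPE_B"],
}).to_sql("centers", conn, index=False)

# Join the two tables on center_id, then total the orders per center type.
df = pd.read_sql_query("""
    SELECT centers.center_type, SUM(train.num_orders) AS total_orders
    FROM train
    INNER JOIN centers ON train.center_id = centers.center_id
    GROUP BY centers.center_type;
""", conn)
```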

The INNER JOIN clause combines the two tables, train and centers, on the common attribute center_id, as specified by the condition train.center_id = centers.center_id. This means records having the same center_id in both tables are combined horizontally.

This way we were able to retrieve the center_type from the centers table and the corresponding total number of orders from the train table. The . operator is very important here, as it tells the database which table a column belongs to.

If you want to know more about joins, I suggest going through this excellent article.

Update statement

Now, let’s say there was a glitch in the system and the base price for all the orders was saved as 10 more than the actual amount. We want to make that update in the database as soon as we find the mistake.

In such a situation, we will use the UPDATE SQL command.

The UPDATE command is used to modify existing records in a table. However, always make sure you specify which records need to be updated in the WHERE clause, otherwise all the records will be updated!

UPDATE table_name

SET column1 = value1, column2 = value2, …

WHERE condition;

Let’s have a look at the table before the update:

Decrease all the base prices by 10 for orders containing meals that had an email promotion:

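A sketch of that update. As before, the stand-in table keeps the promotion flag and base_price together for brevity (the values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE train (id INTEGER, emailer_for_promotion INTEGER,
               base_price REAL);""")
cur.executemany("INSERT INTO train VALUES (?, ?, ?);",
                [(1, 1, 160.0), (2, 0, 135.0), (3, 1, 250.0)])

# Decrease base_price by 10, but only where the WHERE clause matches.
cur.execute("""
    UPDATE train
    SET base_price = base_price - 10
    WHERE emailer_for_promotion = 1;
""")
conn.commit()
```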

Finally, here’s a look at the updated table:

All the records have been correctly updated!

Delete statement

Now, suppose center number 11 no longer wants to do business with the company, and we have to delete its records from the database. For this, we use the DELETE statement. However, make sure to include a WHERE clause, otherwise all the records will be deleted from the table!

DELETE FROM table_name

WHERE condition;
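A self-contained sketch of that deletion on stand-in data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE train (id INTEGER, center_id INTEGER);")
cur.executemany("INSERT INTO train VALUES (?, ?);",
                [(1, 11), (2, 13), (3, 11)])

# Remove every record belonging to center 11.
cur.execute("DELETE FROM train WHERE center_id = 11;")
conn.commit()
```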

Perfect! We no longer have any records corresponding to center 11.

Drop Table statement

Finally, if we had to drop an entire table from the database and not just its records, we use the DROP TABLE statement. But be extra careful before you run this command because all the records in the table along with the table structure will be lost after this!

DROP TABLE table_name;
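A sketch, again against a throwaway in-memory table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE train (id INTEGER);")

# DROP TABLE removes the records AND the table definition itself.
cur.execute("DROP TABLE train;")
conn.commit()
```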

End Notes

SQL is a super important language to learn as an analyst or a data science professional. And it’s not a difficult language to pick up, as we’ve seen in this article.

We saw how to create a table in SQL and how to add values to it. We covered some of the most basic and heavily used querying commands like where, having, group by, order by, and joins to retrieve records in the database. Finally, we covered some manipulation commands like update and delete to make changes to tables in the database.

This is by no means an exhaustive guide on SQL and I suggest going through the below great resources to build upon the knowledge that you gathered in this article:


Final Specifications For The Umidigi Z2 & Z2 Pro Officially Released

Display & Design

The UMIDIGI Z2 & Z2 Pro both sport 6.2-inch displays with a 19:9 aspect ratio and Full HD+ (1080 x 2246) resolution with a reported screen-to-body ratio of up to 90%. The smartphone also features 7000-series aluminum frame and four-sided glass body, which enables easy wireless charging.

As most will notice, the Z2 series is inspired by Huawei’s P20 lineup, which isn’t necessarily a bad thing as Huawei phones do look stunning. The UMIDIGI Z2 series flagship comes in three colors: Black, Twilight, and Phantom, with the Twilight and Phantom versions sporting an innovative gradient color finish. Dimensions-wise, the Z2 & Z2 Pro both measure 153.4 x 74.4 x 8.3 mm.

Performance, Variants and Software



All the variants of the UMIDIGI Z2 series feature a microSD card slot and run stock Android 8.1 Oreo out of box.

AI Dual Camera and Selfie Camera


The UMIDIGI Z2 series is the first phone in the company’s portfolio to feature dual camera setups on both sides. The main rear camera features a 16MP f/2.0 aperture Samsung sensor, coupled with a secondary 8MP, which is specially designed for shooting in the dark in combination with the 16MP main camera to take clearer photos in low-light conditions. The dual rear cameras can capture impressive bokeh effect shots by effectively blurring the background and focusing more on the subject. With the help of AI, the UMIDIGI Z2 Pro can potentially shoot even better photos in low-light conditions while retaining lots of details.

The dual 16MP + 8MP Sony selfie cameras are also capable of snapping pictures with a bokeh effect thanks to AI. The UMIDIGI Z2 series includes support for face unlock through the front camera.

Additional Features

The UMIDIGI Z2 series comes with a rear fingerprint scanner and supports global bands (7 network modes and 36 global bands), offers dual-SIM with 4G LTE active in both slots simultaneously, and has a USB Type-C port. Additionally, the UMIDIGI Z2 Pro also packs NFC.

Presales for the UMIDIGI Z2 will start soon at $249.99 with a $50 off discount at Gearbest, meanwhile presales for the UMIDIGI Z2 Pro will start around a month later.

UMIDIGI Z2 Pro & Z2 Main Specifications



UMIDIGI Z2 Pro

Display: 6.2-inch Full HD+ (1080 x 2246) LCD IPS screen, 19:9 aspect ratio, 403 ppi

OS: Stock Android 8.1

Chipset: Helio P60, octa-core (4 x 2.0GHz Cortex-A73 + 4 x 2.0GHz Cortex-A53), AI Processing Unit

GPU: ARM Mali-G72 MP3 800MHz

Main camera: 16MP (f/2.0) + 8MP, Samsung sensor, autofocus (PDAF), dual-LED flash, 4K video recording

Front camera: 16MP + 8MP with f/2.0 aperture, Sony sensor, 1080p video recording

Charging: 18W fast charging, 15W fast wireless charging

Network: Dual 4G LTE

Others: Global bands, fingerprint sensor, single/dual SIM (nano), Bluetooth v4.0, Wi-Fi, USB Type-C port, GPS/GLONASS

Dimensions: 153.4 x 74.4 x 8.3 mm

Colors: Black / Twilight / Phantom

UMIDIGI Z2

Display: 6.2-inch Full HD+ (1080 x 2246) LCD IPS screen, 19:9 aspect ratio, 403 ppi

OS: Stock Android 8.1

Chipset: Helio P23, octa-core (4 x 2.0GHz Cortex-A53 + 4 x 1.5GHz Cortex-A53)

GPU: ARM Mali-G71 MP2 700MHz

Main camera: 16MP (f/2.0) + 8MP, Samsung sensor, autofocus (PDAF), dual-LED flash, 4K video recording

Front camera: 16MP + 8MP with f/2.0 aperture, Sony sensor, 1080p video recording

Charging: 18W fast charging

Network: Dual 4G LTE

Others: Global bands, fingerprint sensor, single/dual SIM (nano), Bluetooth v4.0, Wi-Fi, USB Type-C port, GPS/GLONASS

Dimensions: 153.4 x 74.4 x 8.3 mm

Colors: Black / Twilight / Phantom

Price: Presale price $249.99

Learn more about the UMIDIGI Z2 & Z2 Pro on the company’s official website.

The Ring Video Doorbell Pro 2 Adds Radar To Cut Down On False Alarms

We may earn revenue from the products available on this page and participate in affiliate programs. Learn more ›

The first time you set up a video doorbell with motion detection, it will likely be a little overzealous with its communication. You’ll end up with a feed full of motion-detection notifications until you tweak the settings and more clearly define the areas it should monitor. Even then, you may still get an occasional ping when a raccoon comes to pilfer the leftover pizza crusts from your trash can. But, Amazon hopes adding some new sensor tech inside its Ring Video Doorbell Pro 2 will improve tracking accuracy while decreasing false positives.

Ring calls the new feature “3D Motion Detection” and it uses radar sensors instead of relying on typical optical cameras. The setup process involves defining borders around the property that the sensors should monitor. Once an object crosses into the defined area, the camera will start recording and the user will get a motion detection alert.

You can now see the whole person as well as a map of where they came from before hitting the bell. Ring

Radar, at least in theory, should do a better job of recognizing when an object gets within a specific distance from the camera because that’s literally what the technology was designed to do. Radar, as the name suggests, involves sending out radio waves, which reflect off of a subject and back to the sensor, giving the device an accurate depiction of how far away the object is. Because it doesn’t rely on visible light as a typical optical camera would, it should provide a more accurate reading of an object’s distance and do an overall better job of locating the subjects in space. Self-driving cars often use radar in conjunction with optical cameras and Lidar (which is roughly the laser equivalent of radar) to get an accurate picture of the world around them.

In addition to the radar-driven motion-detection feature, Ring has also added a Bird’s Eye View feature, which provides an overhead map of where the subject moved across the designated area. So, if a user tunes into a live stream once they receive a motion alert, they can also see the path the subject took to get to the door.

What about the other specs?

In addition to the new radar sensors, Ring has also upgraded its typical camera hardware. Now, instead of a horizontal wide-angle shot, it offers a square field of view that can see a person head-to-toe when they’re standing in front of the doorbell. That also gives the camera the ability to see packages sitting on the ground in front of them. This has become an increasingly common layout for video doorbells, so it’s good to see Ring jumping on board with it.

The Pro 2 still doesn’t run on batteries, which means it has to be hard-wired into the house. It will cost $250 when it launches at the end of March, but you should also factor in the price of the subscription service, which is $3 monthly (or $30 per year) for the basic service, or $10 monthly ($100 per year) for the Plus tier if you want to make the most of all the new features.

If you’re a subscriber, you’ll get access to the new Alexa Greetings feature, which enables the company’s virtual assistant to handle automated tasks such as taking messages and giving instructions to delivery people about where to put a package. Hopefully, in the future, Amazon adds a Ferris Bueller mode to kindly tell everyone who comes to the door to go away.

How To Get Free Apple Music In 2023

Without a doubt, Apple has become quite serious about its music streaming service Apple Music. Today, this isn’t just a viable alternative to popular services like Spotify, Amazon Music, Tidal, and similar – but a leader of the music streaming industry. If you’ve been wondering how to get free Apple Music in 2023, know that you’ve come to the right place.

Before we dive in, a quick note. At the moment, there are six ways to get Apple Music for free (for a limited time). However, the majority of these methods apply only to new subscribers. If you’re a newbie to Apple Music, here’s everything you want to know.

1. Best Buy Promo (4 Months of Free Apple Music)

You’ll see an overview of your cart. Select “Checkout” and log in to your Best Buy account. If you don’t have one, you can create one for free.

A digital code will be sent to the email address associated with your Best Buy account. Check your inbox and wait for the code to arrive.

To redeem your code, visit this page on Apple’s website. Select “Redeem your gift card,” and the website will open the corresponding application on your computer. You may be asked to log in to your Apple ID. Once you do so, you’ll get to redeem your code.

2. Apple Promo (1 Month of Free Apple Music)

Best Buy’s promotion gets you four months of Apple Music for free, while Apple’s promo is limited to only one month. However, we wanted to include this second method since Best Buy’s promo could end at any moment.

Confirm your purchase. For the trial to activate, you need to have a payment method associated with your Apple ID. Alternatively, you can redeem an Apple gift card. (Keep in mind that you’ll need to have at least $9.99 on your Apple ID for the trial to activate.)

3. Apple Product Promo (6 Months of Free Apple Music)

Another worthy promotion from Apple allows new subscribers to gain six months of free Apple Music within 90 days after the purchase of eligible Beats, AirPods, and HomePod mini devices.

Image source: Apple

Eligible Devices

AirPods (2nd Generation)

AirPods (3rd Generation)

AirPods Max

HomePod mini

Beats Studio Buds


Powerbeats Pro

Beats Solo Pro

Beats Fit Pro

If you have one of these devices, here’s how you can redeem your Apple Music free trial.

Ensure that your iPhone or iPad is running the latest version of iOS or iPadOS. Learn how to update your iOS device before attempting to activate this trial.

Pair an eligible audio device to your iPhone or iPad.

On your iPhone or iPad open the Music app and navigate to the “Listen Now” tab.

Then tap the button labeled “Get 6 months free” to redeem the offer.

Note: AirPods (1st generation), Beats Solo3 Wireless, Beats Studio3 Wireless, Beats EP, and Beats Flex are not eligible for this offer.

4. Shazam QR Code (3 Months of Free Apple Music)

Perhaps the simplest way to access three full months of Apple Music for free as a new subscriber is by scanning a QR code on the Shazam website using your iPhone or iPad.

On your PC, visit the Shazam Apple Music Offer page.

Launch the camera app on your iPhone or iPad and point it at the onscreen QR code. Then tap on the yellow URL redirect icon highlighted below.

After you scan the QR code, you will be redirected to a webpage to redeem the 3-month Apple Music offer. To link the offer to your Apple ID, tap the “Verify Your Identity” button and then sign in with your Apple ID credentials.

5. Extend Your Trial With Family Sharing (Another 3 months of Free Apple Music)

A little-known trick enables users who already have an active Apple Music trial to extend the trial period by another three months using Apple’s Family Sharing feature. Family Sharing allows Apple ID holders to share their subscription services with up to five other family members. For this method to work, you only need to add one family member to your network.

Image source: Apple

Utilize any of the above methods (Options 1-4) to ensure that you have an active trial of Apple Music.

The day before your Apple Music trial is set to expire, navigate to the Settings app to activate Family Sharing. If you’re not familiar with this workflow, you can learn how to set up Family Sharing on Apple devices.

Add at least one (or up to five) family member(s) to automatically renew your Apple Music subscription for you and your family member(s) for another three months. You can learn more about how to share Apple Music with your family by viewing our guide.

6. Verizon Promo (6 Months of Free Apple Music)

A promotional deal from the mobile carrier Verizon allows subscribers with select unlimited plan subscriptions to score up to six months of free Apple Music.

Image source: Verizon

If Verizon is your mobile phone carrier, start by visiting Verizon’s Apple Music Offer page to check your eligibility, which varies based on the plan you are subscribed to. Verizon’s Apple Music FAQ page provides some details regarding eligibility.

Select the button labeled “Get Apple Music” and then sign in with your Verizon account credentials to see if you are eligible for this promotion. If you are eligible, follow the onscreen instructions to enroll Apple Music on your Verizon unlimited data plan.

Note: There is no mention of needing to be a new Apple Music subscriber to be eligible for this promo.

Frequently Asked Questions

Can I redeem these offers on a PC or Android device?

Yes. These Apple Music trial offers can be redeemed on any PC (regardless of its OS) at music.apple.com. If you are using an Android smartphone or tablet, you can get the Apple Music app and Shazam from the Play Store.

I would like to redeem the Apple product promo offer (option 3), but the 90-day expiration period has passed. Can I reactivate this offer?

Yes. Simply update your iPhone or iPad to the latest version of iOS or iPadOS (if available) to reset the 90-day activation window. Owners of eligible devices have an additional 90 days after upgrading to the latest version of iOS or iPadOS to redeem the six-month Apple Music trial.

Is there a difference between iTunes and Apple Music?

With the introduction of macOS Catalina back in 2019, Apple replaced the iTunes app with four standalone apps, each offering a dedicated service experience: the Apple Music app, the Apple Podcasts app, the Apple TV app, and the Apple Books app. You can still access your preexisting iTunes content in any one of these standalone apps. An Apple Music subscription only enables access to over 90 million songs via the Music app and does not include access to other Apple services such as Apple TV+. Learn more about what happened to iTunes by visiting Apple’s support page.

Image credit: Malte Helmhold via Unsplash All screenshots taken by Brahm Shank

Brahm Shank

Self-proclaimed coffee connoisseur and tech enthusiast Brahm Shank is captivated by the impact of consumer tech: “It’s profoundly moving when people discover that the phone in their pocket or the tiny computer on their wrist has the power to enrich their lives in ways they never imagined.” Apple, Inc., with its unique position at the intersection of technology and the creative arts, resonates deeply with Brahm and his passion for helping people unleash their potential using technology. Over the years, Brahm has hosted various podcasts, including one with famed technologist David Pogue of The New York Times, on topics such as Big Tech and digital wellness.

