What Is Data Lake? Its Architecture: Data Lake Tutorial

This article, What Is Data Lake? Its Architecture: Data Lake Tutorial, was last updated in December 2023.

What is Data Lake?

A Data Lake is a storage repository that can store large amounts of structured, semi-structured, and unstructured data. It is a place to store every type of data in its native format, with no fixed limits on account or file size. It holds large quantities of data to increase analytic performance and native integration.

A Data Lake is like a large container, very similar to a real lake fed by rivers. Just as a lake has multiple tributaries coming in, a data lake has structured data, unstructured data, machine-to-machine data, and logs flowing in, often in real time.

Data Lake

The Data Lake democratizes data and is a cost-effective way to store all of an organization's data for later processing. Research analysts can focus on finding meaningful patterns in the data rather than on the data itself.

Unlike a hierarchical Data Warehouse, where data is stored in files and folders, a Data Lake has a flat architecture. Every data element in a Data Lake is given a unique identifier and tagged with a set of metadata.
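The flat, tag-based layout can be sketched in a few lines of Python. This is a hypothetical illustration, not a real data lake API: `ingest`, `find`, and the in-memory `catalog` are names made up for this example.

```python
import uuid

# Hypothetical sketch of a flat data-lake catalog: every object gets a
# unique identifier plus a set of metadata tags (no folder hierarchy).
catalog = {}

def ingest(raw_bytes, tags):
    """Store an object under a generated unique ID and record its metadata."""
    object_id = str(uuid.uuid4())
    catalog[object_id] = {"data": raw_bytes, "tags": tags}
    return object_id

def find(**wanted):
    """Locate objects by metadata tags rather than by path."""
    return [oid for oid, obj in catalog.items()
            if all(obj["tags"].get(k) == v for k, v in wanted.items())]

log_id = ingest(b'{"event": "click"}', {"source": "webserver", "format": "json"})
print(find(source="webserver") == [log_id])  # True
```

The point of the sketch is that discovery happens through metadata, not through a directory tree.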


Why Data Lake?

The main objective of building a data lake is to offer an unrefined view of data to data scientists.

Reasons for using Data Lake are:

With the advent of storage engines like Hadoop, storing disparate information has become easy. With a Data Lake, there is no need to model the data into an enterprise-wide schema first.

With the increase in data volume, data quality, and metadata, the quality of analyses also increases.

Data Lake offers business agility.

Machine Learning and Artificial Intelligence can be used to make profitable predictions.

There is no data silo structure. A Data Lake gives a 360-degree view of customers and makes analysis more robust.

Data Lake Architecture

Data Lake Architecture

The figure shows the architecture of a Business Data Lake. The lower levels represent data that is mostly at rest, while the upper levels show real-time transactional data. This data flows through the system with little or no latency. The important tiers in Data Lake architecture are:

Ingestion Tier: The tiers on the left side depict the data sources. The data can be loaded into the data lake in batches or in real time.

Insights Tier: The tiers on the right represent the research side, where insights from the system are used. SQL or NoSQL queries, or even Excel, can be used for data analysis.

HDFS: A cost-effective storage solution for both structured and unstructured data. It is the landing zone for all data that is at rest in the system.

Distillation Tier: Takes data from the storage tier and converts it to structured data for easier analysis.

Processing Tier: Runs analytical algorithms and user queries in real-time, interactive, or batch mode to generate structured data for easier analysis.

Unified Operations Tier: Governs system management and monitoring. It includes auditing and proficiency management, data management, and workflow management.

Key Data Lake Concepts

The following key concepts need to be understood to fully grasp the Data Lake architecture:

Key Concepts of Data Lake

Data Ingestion

Data ingestion allows connectors to get data from different data sources and load it into the Data Lake.

Data Ingestion supports:

All types of Structured, Semi-Structured, and Unstructured data.

Multiple ingestion modes such as batch, real-time, and one-time load.

Many types of data sources, such as databases, web servers, emails, IoT devices, and FTP.
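A minimal sketch of the ingestion modes listed above, in Python. All names here (such as `landing_zone`, `ingest_batch`, and `ingest_event`) are hypothetical, chosen only to illustrate batch versus real-time loading:

```python
# Hypothetical stand-in for the lake's raw storage area.
landing_zone = []

def ingest_batch(records, source):
    """One-time or scheduled batch load, e.g. from a database dump or an FTP drop."""
    count = 0
    for record in records:
        landing_zone.append({"source": source, "payload": record})
        count += 1
    return count

def ingest_event(record, source):
    """Real-time load of a single event, e.g. from an IoT device or a web server log."""
    landing_zone.append({"source": source, "payload": record})

ingest_batch([{"id": 1}, {"id": 2}], source="crm_database")
ingest_event({"temp": 21.5}, source="iot_sensor")
print(len(landing_zone))  # 3
```

Both paths write to the same landing zone; only the cadence (bulk versus per-event) differs.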

Data Storage

Data storage should be scalable, offer cost-effective storage, and allow fast access for data exploration. It should support various data formats.

Data Governance

Data governance is a process of managing availability, usability, security, and integrity of data used in an organization.


Security needs to be implemented in every layer of the Data Lake, starting with storage and continuing through unearthing and consumption. The basic need is to stop access by unauthorized users. The lake should also support different tools for accessing data, with easy-to-navigate GUIs and dashboards.

Authentication, Accounting, Authorization and Data Protection are some important features of data lake security.

Data Quality:

Data quality is an essential component of Data Lake architecture. Data is used to extract business value, and extracting insights from poor-quality data leads to poor-quality insights.

Data Discovery

Data discovery is another important stage before you can begin preparing data for analysis. In this stage, tagging techniques are used to express an understanding of the data by organizing and interpreting the data ingested into the Data Lake.

Data Auditing

The two major data auditing tasks are:

Tracking changes to important dataset elements

Capturing how, when, and by whom these elements were changed.

Data auditing helps to evaluate risk and compliance.
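The audit tasks above can be sketched as a minimal append-only log in Python. This is a hypothetical illustration; `audit_log` and `record_change` are names invented for the example, not a real auditing API:

```python
import datetime

# Hypothetical append-only audit trail for key dataset elements.
audit_log = []

def record_change(element, old, new, user):
    """Capture how (old -> new), when, and by whom a key element was changed."""
    audit_log.append({
        "element": element,
        "old": old,
        "new": new,
        "user": user,
        "at": datetime.datetime.utcnow().isoformat(),
    })

record_change("customer.email", "a@x.com", "b@x.com", user="etl_job_7")
print(audit_log[0]["user"])  # etl_job_7
```

Because entries are only appended, never edited, the log can later be replayed to evaluate risk and demonstrate compliance.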

Data Lineage

This component deals with the data's origins: where it moves over time and what happens to it. Lineage eases error correction in a data analytics process, from origin to destination.

Data Exploration

This is the beginning stage of data analysis. Identifying the right dataset is vital before starting data exploration.

All of the above components need to work together so that the Data Lake can evolve easily and the environment can be explored.

Maturity stages of Data Lake

The definition of Data Lake maturity stages differs from one textbook to another, though the crux remains the same. The following maturity-stage definitions take a layman's point of view.

Maturity stages of Data Lake

Stage 1: Handle and ingest data at scale

This first stage of data maturity involves improving the ability to handle and ingest data at scale. Here, business owners need to find the tools, according to their skill set, for obtaining more data and building analytical applications.

Stage 2: Building the analytical muscle

This second stage involves improving the ability to transform and analyze data. In this stage, companies use the tools most appropriate to their skill set. They start acquiring more data and building applications. Here, the capabilities of the enterprise data warehouse and the data lake are used together.

Stage 3: EDW and Data Lake work in unison

This step involves getting data and analytics into the hands of as many people as possible. In this stage, the data lake and the enterprise data warehouse start to work in unison, each playing its part in analytics.

Stage 4: Enterprise capability in the lake

In this maturity stage, enterprise capabilities are added to the Data Lake: adoption of information governance, information lifecycle management capabilities, and metadata management. Very few organizations reach this level of maturity today, but the tally will increase in the future.

Best practices for Data Lake Implementation:

Architectural components, their interaction, and the identified products should support native data types.

The design of a Data Lake should be driven by what is available instead of what is required. The schema and data requirements are not defined until the data is queried.

Design should be guided by disposable components integrated with service API.

Data discovery, ingestion, storage, administration, quality, transformation, and visualization should be managed independently.

The Data Lake architecture should be tailored to a specific industry. It should ensure that capabilities necessary for that domain are an inherent part of the design

Faster on-boarding of newly discovered data sources is important

Data Lake helps customized management to extract maximum value

The Data Lake should support existing enterprise data management techniques and methods

Challenges of building a data lake:

In a Data Lake, data volume is higher, so the process must rely more on programmatic administration.

It is difficult to deal with sparse, incomplete, and volatile data.

A wider scope of datasets and sources requires larger data governance and support.

Difference between Data Lakes and Data Warehouses

| Parameters | Data Lakes | Data Warehouse |
| --- | --- | --- |
| Data | Data lakes store everything. | Focuses only on business processes. |
| Processing | Data is mainly unprocessed. | Highly processed data. |
| Type of data | Can be unstructured, semi-structured, or structured. | Mostly in tabular form and structured. |
| Task | Shares data stewardship. | Optimized for data retrieval. |
| Agility | Highly agile; configure and reconfigure as needed. | Less agile than a data lake; fixed configuration. |
| Users | Mostly used by data scientists. | Widely used by business professionals. |
| Storage | Designed for low-cost storage. | Expensive storage that gives fast response times. |
| Security | Offers lesser control. | Allows better control of the data. |
| Replacement of EDW | Can be a source for the EDW. | Complementary to the EDW (not a replacement). |
| Schema | Schema on read (no predefined schemas). | Schema on write (predefined schemas). |
| Data processing | Helps fast ingestion of new data. | Time-consuming to introduce new content. |
| Data granularity | Data at a low level of detail or granularity. | Data at a summary or aggregated level of detail. |
| Tools | Can use open-source tools like Hadoop/MapReduce. | Mostly commercial tools. |
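The schema-on-read idea can be made concrete with a small Python sketch: raw records are stored untouched, and a schema is imposed only when the data is read. This is a hypothetical example, not any specific product's API:

```python
import json

# Schema on read: store raw text now, impose structure only at query time.
raw_lines = [
    '{"user": "ann", "clicks": 3}',
    '{"user": "bob", "clicks": 7, "country": "DE"}',  # extra field is fine
]

def query_clicks(lines):
    """Apply a schema while reading: keep only the fields we care about."""
    rows = [json.loads(line) for line in lines]
    return [(row["user"], row["clicks"]) for row in rows]

print(query_clicks(raw_lines))  # [('ann', 3), ('bob', 7)]
```

In a schema-on-write warehouse, the second record's extra `country` field would have to fit a predefined table before loading; here it is simply ignored until some query needs it.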

Benefits and Risks of using Data Lake:

Here are some major benefits in using a Data Lake:

Offers cost-effective scalability and flexibility

Offers value from unlimited data types

Reduces long-term cost of ownership

Allows economic storage of files

Quickly adaptable to changes

Users from various departments, possibly scattered around the globe, can have flexible access to the data.

Risk of Using Data Lake:

After some time, Data Lake may lose relevance and momentum

There is a larger amount of risk involved in designing a Data Lake.

Unstructured data may lead to ungoverned chaos: unusable data and disparate, complex tools, undermining enterprise-wide collaboration and a unified, consistent view of the data.

It also increases storage and compute costs.

There is no way to get insights from others who have worked with the data because there is no account of the lineage of findings by previous analysts

The biggest risk of data lakes is security and access control. Sometimes data can be placed into a lake without any oversight, even though some of the data may have privacy and regulatory needs.


Summary

A Data Lake is a storage repository that can store large amounts of structured, semi-structured, and unstructured data.

The main objective of building a data lake is to offer an unrefined view of data to data scientists.

Unified operations tier, Processing tier, Distillation tier and HDFS are important layers of Data Lake Architecture

Data ingestion, data storage, data quality, data auditing, data exploration, and data discovery are important components of Data Lake architecture.

Design of Data Lake should be driven by what is available instead of what is required.

Data Lake reduces long-term cost of ownership and allows economic storage of files

The biggest risk of data lakes is security and access control. Sometimes data can be placed into a lake without any oversight, even though some of the data may have privacy and regulatory needs.

ASP.NET MVC Tutorial for Beginners: What Is, Architecture

What is ASP.NET MVC?

ASP.NET MVC is an open source web development framework from Microsoft that provides a Model View Controller architecture. ASP.NET MVC offers an alternative to ASP.NET Web Forms for building web applications. It is part of the .NET platform for building, deploying, and running web apps. You can develop web apps and websites with the help of HTML, CSS, jQuery, JavaScript, etc.

Learn MVC with this ASP.NET MVC tutorial, which covers all the basic concepts of MVC for beginners:

Why ASP.NET MVC?

Although Web Forms were very successful, Microsoft went on to develop ASP.NET MVC. The main issue with ASP.NET Web Forms is performance.

In a web application, there are four main aspects which define performance:

Response time issues

Problem of Unit Testing

HTML customization

Reusability of the code-behind class

MVC excels on the above parameters.

Version History of MVC

ASP.NET MVC1

Released on Mar 13, 2009

It runs on .NET 3.5

Visual Studio 2008

MVC Pattern architecture with WebForm Engine

Main features include HTML & unit testing, Ajax helpers, routing, etc.


ASP.NET MVC2

This version was released on March 10, 2010

Runs on .NET 3.5,4.0 and with Microsoft Visual Studio 2008

Includes features like templated helpers and UI helpers with automatic scaffolding & customizable templates

It supports DataAnnotations attributes to apply model validation on both the client and server sides


ASP.NET MVC3

It was released on January 13, 2011

Runs on .NET 4.0 and with Microsoft Visual Studio 2010

Use of NuGet to deliver software and allows you to manage dependencies across the platform

Better JavaScript support with, jQuery Validation, and JSON binding

It offers features like the Razor view engine; enhanced Data Annotations attributes for model validation on both client and server sides


ASP.NET MVC4

This version was released in August 2012

It runs on .NET 4.0, 4.5 and with Visual Studio 2010 & Visual Studio 2012

Enhancements to default project templates

Offers features like Mobile project template using jQuery Mobile, Task support for Asynchronous Controllers, bundling, minification, etc.


ASP.NET MVC5

Released on October 17, 2013

Runs on .NET 4.5, 4.5.1 and with Visual Studio 2012 & Visual Studio 2013; introduces One ASP.NET

Supports attribute routing in MVC

Features of MVC

Easy and frictionless testability

Full control over your HTML, JavaScript , and URLs

Leverage existing ASP.NET features

A new presentation option for ASP.Net

A simpler way to program Asp.Net

Clear separation of logic: Model, View, Controller

Test-Driven Development

Support for parallel development

Things to remember while creating MVC Application

Here are a few useful things in this ASP.NET MVC tutorial which you need to remember when creating an MVC application:

You need to remember that ASP.NET MVC is NOT a replacement for ASP.NET Web Forms-based applications.

The approach to MVC app development must be decided based on the application requirements and the features provided by ASP.NET MVC to suit the specific development needs.

The application development process with ASP.NET MVC is more complex compared with Web Forms-based applications.

Application maintainability is always higher with separation of application tasks.

MVC architectural Pattern

MVC architectural Pattern

MVC is a software architecture pattern which follows the separation-of-concerns method. In this model, .NET applications are divided into three interconnected parts called Model, View, and Controller.

The goal of the MVC pattern is that each of these parts can be developed, tested in relative isolation and also combined to create a very robust application.

Let see all of them in detail:


Model

Model objects are the parts of the application that implement the logic for the application's data domain. They retrieve and store model state in a database. For example, a Product object might retrieve information from a database, operate on it, and then write the updated information back to a Products table in SQL Server.


View

Views are the components that display the application's user interface (UI). A view renders the UI from the model data.

A common example would be an edit view of an Item table that displays text boxes, pop-ups, and check boxes based on the current state of the product object.


Controller

Controllers handle user interaction, work with the model, and select a view to render the UI. In a .NET MVC app, the view only displays information; the controller manages and responds to user input and interaction, using action filters in MVC.

For example, the controller manages query-string values and passes those values to the model.
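The three roles just described are language-agnostic. Here is a minimal, hypothetical sketch of the pattern in Python (not ASP.NET code) just to make the separation concrete:

```python
class ProductModel:
    """Model: owns the data and the logic for retrieving/storing it."""
    def __init__(self):
        self._products = {1: "Keyboard", 2: "Mouse"}

    def get(self, product_id):
        return self._products.get(product_id)

def product_view(name):
    """View: only renders the data handed to it; no business logic."""
    return f"<h1>{name}</h1>" if name else "<h1>Not found</h1>"

class ProductController:
    """Controller: handles input, asks the model, selects a view."""
    def __init__(self, model):
        self.model = model

    def show(self, product_id):
        return product_view(self.model.get(product_id))

controller = ProductController(ProductModel())
print(controller.show(1))  # <h1>Keyboard</h1>
```

Note how each part can be tested in isolation: the model with no UI, the view with plain values, and the controller with a stubbed model.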

Web Forms vs. MVC

| Parameters | Web Forms | MVC |
| --- | --- | --- |
| Model | ASP.NET Web Forms follows an event-driven development model. | ASP.NET MVC uses MVC pattern-based development. |
| Used since | Around since 2002. | First released in 2009. |
| Support for view state | Supports view state for state management on the client side. | Does not support view state. |
| URL type | File-based URLs: the file name in the URL must exist physically. | Route-based URLs: URLs are mapped to controllers and actions. |
| Syntax | Follows the Web Forms syntax. | Follows a customizable syntax (Razor by default). |
| View type | Views are tightly coupled to code-behind (ASPX-CS) logic. | Views and logic are always kept separate. |
| Consistent look and feel | Master pages for a consistent look. | Layouts for a consistent look. |
| Code reusability | User Controls for code reusability. | Partial views for code reusability. |
| Control over HTML | Less control over rendered HTML. | Full control over HTML. |
| State management | Automatic state management of controls. | Manual state management. |
| TDD support | Weak; custom TDD required. | Encourages and includes TDD. |

Advantages of ASP.NET MVC

Highly maintainable applications by default

It allows you to replace any component of the application.

Better support for Test Driven Development

Complex applications are easy to manage because of divisions of Model, View, and Controllers.

Offering robust routing mechanism with front controller pattern

Offers better control over application behavior with the elimination of view state and server-based forms

.Net MVC applications are supported by large teams of developers and Web designers

It offers more control over the behaviors of the application. It also uses an optimized bandwidth for requests made to the server

Disadvantages of ASP.NET MVC

You can't see a design page preview like the .aspx page.

You need to run the program every time to see its actual design.

Understanding the flow of the application can be challenging

It is quite complicated to implement, so it is not an ideal option for small level applications

It is hard to learn ASP.NET MVC, as it requires a great understanding of the MVC pattern.

Best practices while using ASP.NET MVC

Create a separate assembly for the MODEL in the case of large and complex code, to avoid any unwanted situations.

The model should include business logic, session maintenance, validation part, and data logic part.

The VIEW should not contain any business logic or session maintenance; use ViewData to access data in the View.

Business logic and data access should never occur in the Controller.

The controller should only be responsible for preparing and returning a view, calling the model, redirecting to actions, etc.

Delete the demo code (e.g., the AccountController) from the application when you create it.

Use a specific view engine to create the HTML markup from your view, as the view is a combination of HTML and programming code.


Summary

ASP.NET MVC is an open source web development framework from Microsoft that provides a Model View Controller architecture. MVC offers an alternative to ASP.NET Web Forms for building web applications.

The main issue with ASP.NET Web Forms is performance. MVC offers easy and frictionless testability with full control over your HTML and URLs.

You need to remember that ASP.NET MVC is NOT a replacement for ASP.NET Web Forms-based applications.

The approach to MVC app development, and the ASP.NET MVC life cycle, must be decided based on the application requirements and the features provided by ASP.NET MVC to suit the specific development needs.

ASP.NET MVC offers Highly maintainable applications by default.

With ASP.NET MVC, you can't see a design page preview like the .aspx page.

As a best practice, the model should include the business logic, session maintenance, validation, and data logic parts.


Official Intel Core Raptor Lake specs leaked: 24-core, 5.8 GHz i9-13900K

Igor’s lab has done it again, bringing official specifications for the entirety of Intel’s Raptor Lake CPU line-up.

Intel plans to unveil its Intel Core Raptor Lake CPUs on the 27th of September 2022, the same day AMD intends to make its Zen 4 CPUs available for purchase. As of yet, there's no confirmed release date for Intel's 13th-generation CPUs.

Now read: 13th gen Raptor Lake, what we know.

However, thanks to Igor’s Lab, we don’t have to wait until then to get the official specifications of each and every 13th gen Raptor Lake CPU Intel brings to the market later this year. 

Intel Core Raptor lake specifications 

Below are the specifications of every Raptor Lake CPU that Intel intends to release to the public later this year.

As we can see, Intel has spared nothing when developing a successor to the very well-received Alder Lake family of processors. 


Intel Core i9-13900K

The i9-13900K flagship sports 24 cores in an 8+16 configuration with a massive single-core boost frequency of 5.8 GHz. This is big, especially if we see a 13900KS at some point; that may very well be the first CPU to push 6 GHz natively without aftermarket OC.

That massive 253 W TDP is definitely still hard to stomach, though; we'd hate to imagine what TDP numbers we would be looking at if Intel hadn't adopted the very efficient hybrid P-core/E-core design last generation. We hope that this isn't just another case of throwing more power at the problem.


Intel Core i7-13700K

The 13700K comes in with 16 cores (8+8) and a core boost speed of up to 5.4 GHz, meaning the mid-range spot in the Raptor Lake line-up is well defended; Zen 4 might have a difficult job retaining the single-core performance crown.

AMD recently disclosed that all Zen 4 CPUs score higher in Geekbench benchmarks than the previous-generation Intel flagship (i9-12900K). Let's hope Raptor Lake can re-take that crown.


Intel Core i5-13600K

It looks like even the low end of Raptor Lake is coming out swinging. The 13600K has some pretty big shoes to fill, featuring 14 cores (6+8) and a boost clock of up to 5.1 GHz; will that be enough to fend off the lower-end Zen 4 entries?

To KF or not to KF? 

Traditionally you could squeeze a little extra performance out of a KF processor, and they were a little lighter on the wallet. This time around, however, there’s no discernible difference between the K and KF processors. At least on paper anyway, we’ll soon see what’s what when we get a hold of some of these CPUs. 

Currently, there are no rumors, leaks, or official information from Intel that suggest a solid release date. All we know is that Intel plans to unveil its Intel Core Raptor Lake 13th-generation CPUs on the 27th of September 2022.

So as of right now, all signs point to an October/November Raptor Lake release. But that’s just speculation on our end. 

Final word

Intel's 13th-generation CPUs look juicy, and full of juice they are, with that TDP rating! It obviously goes without saying that Intel makes some phenomenal CPUs. We can't wait to see how Intel prices these CPUs, or how they stack up against AMD's Zen 4 CPU line-up.

Watch this space for the latest info on Raptor Lake. We hope you enjoyed this Official Intel Core Raptor Lake specs leaked article.

What Is Google Data Studio? (Beginners Guide)

Do you want to customize and visualize your data from a variety of data sources?

Google Data Studio (now called Looker Studio) is free dashboarding software that helps you easily connect your sources, update your data, and generate meaningful reports. 

What Is Google Data Studio?

For an experienced marketer, it’s quite easy to analyze data in the tool where it is collected.

But clients are more interested in knowing results. Rather than look at actual data, they want to see the big picture. In that case, it’s better to display customized reports based on their requirements instead of overwhelming them with data.

And this is where Google Data Studio comes into the picture. With this tool, you can simply connect data sources like Google Analytics, Google Ads, or Facebook Ads and update your data reports automatically.

This brings all your tracking data in one place. All you have to do is drag and drop any of the preferred charts on your report and configure it to your needs. 

Moreover, you can also style your dashboard by using built-in dropdowns and filters to make it interactive. 

If you’re new to Google Data Studio, we have an in-depth beginner guide to help you get started!

And since it runs on a web browser, you can easily share your work and collaborate with your colleagues. 

Let’s see how! 

Sharing Data from Google Data Studio

Once you have created your dashboard, you can either send out a static version of your dashboard via the Email with a PDF option, or you can share access to the online version. 

The online version will automatically keep itself up to date, and it will maintain any interactivity that you’ve built in with dropdowns or filters!

The best part about sharing your dashboard is that your clients or colleagues won’t need access to the data source. Instead, they’ll be able to view meaningful representations of the data through your Google Data Studio dashboard.

Google Data Studio Functionalities

There is more to this tool’s functionalities than just creating custom reports.

It provides a one-stop solution to present data from various sources, including Google BigQuery. By connecting to Google BigQuery, you’ll be able to extract data from your own data warehouse. This is a very helpful feature for marketers who collect huge amounts of data from different sources using different tools.

You can also combine data with the data blending feature, then analyze aggregated data by using case statements and calculated fields. This can help you (and clients) see the bigger picture behind your data.

Lastly, you can create more sophisticated, interactive reports by adding tooltips, date filters, and pivot tables to your data visualizations.

Thus, Google Data Studio’s utility doesn’t end at creating stylized, custom reports; it can also help you combine, process, and convey data that is collected from varied sources.

FAQ

How does Google Data Studio work?

Google Data Studio allows you to connect multiple data sources such as Google Analytics, Google Ads, Facebook Ads, and more. You can drag and drop charts onto your report and configure them according to your needs. The tool also provides built-in dropdowns and filters to style your dashboard and make it interactive.

How can I share data from Google Data Studio?

You can share your Google Data Studio dashboard in two ways. First, you can send out a static version of your dashboard via email as a PDF. Second, you can share access to the online version of the dashboard, which will automatically update itself and maintain any interactivity you’ve added with dropdowns or filters. This allows your clients or colleagues to view meaningful representations of the data without needing access to the data source.

What are the additional functionalities of Google Data Studio?

Google Data Studio offers more than just creating custom reports. It provides a one-stop solution for presenting data from various sources, including Google BigQuery. You can extract data from your own data warehouse by connecting to Google BigQuery. The tool also allows you to combine data using the data blending feature and analyze aggregated data using case statements and calculated fields. Additionally, you can enhance your reports with tooltips, date filters, and pivot tables to create more interactive visualizations.


In short, Google Data Studio is a tool that can aggregate all of your tracking data in one place, then synthesize them into a customized data report that you can share with clients and colleagues. 

Dplyr Tutorial: Merge And Join Data In R With Examples

Introduction to Data Analysis

Data analysis can be divided into three parts:

Extraction: First, we need to collect the data from many sources and combine them.

Transform: This step involves the data manipulation. Once we have consolidated all the sources of data, we can begin to clean the data.

Visualize: The last move is to visualize our data to check irregularity.

Data Analysis Process

One of the most significant challenges faced by data scientists is data manipulation. Data is never available in the desired format, and data scientists need to spend at least half of their time cleaning and manipulating it. That is one of the most critical assignments in the job: if the data manipulation process is not complete, precise, and rigorous, the model will not perform correctly.


R Dplyr

R has a library called dplyr to help with data transformation. The dplyr library is fundamentally built around four functions to merge data and five verbs to clean and manipulate it. After that, we can use the ggplot library to analyze and visualize the data.

We will learn how to use the dplyr library to manipulate a Data Frame.

Merge Data with R Dplyr

dplyr provides a nice and convenient way to combine datasets. We may have many sources of input data, and at some point, we need to combine them. A join with dplyr adds variables to the right of the original dataset.

Dplyr Joins

Following are four important types of joins used in dplyr to merge two datasets:

| Function | Objective | Arguments | Multiple keys |
| --- | --- | --- | --- |
| left_join() | Merge two datasets; keep all observations from the origin table | data, origin, destination, by = "ID" | origin, destination, by = c("ID", "ID2") |
| right_join() | Merge two datasets; keep all observations from the destination table | data, origin, destination, by = "ID" | origin, destination, by = c("ID", "ID2") |
| inner_join() | Merge two datasets; exclude all unmatched rows | data, origin, destination, by = "ID" | origin, destination, by = c("ID", "ID2") |
| full_join() | Merge two datasets; keep all observations | data, origin, destination, by = "ID" | origin, destination, by = c("ID", "ID2") |

We will study all the join types via an easy example.

First of all, we build two datasets. Table 1 contains two variables, ID and y, whereas Table 2 gathers ID and z. In each situation, we need a key-pair variable; in our case, ID is our key variable. The function will look for identical values in both tables and bind the returning values to the right of Table 1.

```r
library(dplyr)

df_primary <- tribble(
  ~ID, ~y,
  "A", 5,
  "B", 5,
  "C", 8,
  "D", 0,
  "F", 9)

df_secondary <- tribble(
  ~ID, ~z,
  "A", 30,
  "B", 21,
  "C", 22,
  "D", 25,
  "E", 29)
```

Dplyr left_join()

The most common way to merge two datasets is to use the left_join() function. We can see from the picture below that the key-pair matches the rows A, B, C, and D from both datasets perfectly. However, E and F are left over. How do we treat these two observations? With left_join(), we keep all the rows of the original table and drop rows from the destination table that do not have a key match. In our example, ID E does not exist in Table 1, so that row is dropped. ID F comes from the origin table, so it is kept after the left_join() and returns NA in column z. The figure below reproduces what happens with a left_join().

Example of dplyr left_join()

```r
left_join(df_primary, df_secondary, by = 'ID')
```


```r
## # A tibble: 5 x 3
##   ID        y     z
## 1 A         5    30
## 2 B         5    21
## 3 C         8    22
## 4 D         0    25
## 5 F         9    NA
```

Dplyr right_join()

The right_join() function works exactly like left_join(); the only difference is which rows are dropped. The ID E, available in the destination data frame, exists in the new table and takes the value NA for the column y, whereas F is now dropped.

Example of dplyr right_join()

right_join(df_primary, df_secondary, by = 'ID')


## # A tibble: 5 x 3
##   ID      y     z
## 1 A       5    30
## 2 B       5    21
## 3 C       8    22
## 4 D       0    25
## 5 E      NA    29

Dplyr inner_join()

When we cannot be sure that the two datasets match completely, we may want to return only the rows that exist in both datasets. This is useful when we need a clean dataset or when we do not want to impute missing values with the mean or median.

The inner_join() function comes to help. This function excludes the unmatched rows.

Example of dplyr inner_join()

inner_join(df_primary, df_secondary, by ='ID')


## # A tibble: 4 x 3
##   ID      y     z
## 1 A       5    30
## 2 B       5    21
## 3 C       8    22
## 4 D       0    25

Dplyr full_join()

Finally, the full_join() function keeps all observations from both tables and replaces missing values with NA.

Example of dplyr full_join()

full_join(df_primary, df_secondary, by = 'ID')


## # A tibble: 6 x 3
##   ID      y     z
## 1 A       5    30
## 2 B       5    21
## 3 C       8    22
## 4 D       0    25
## 5 F       9    NA
## 6 E      NA    29

Multiple Key pairs

Last but not least, we can have multiple keys in our datasets. Consider the following datasets where, for each customer and year, we record the number of items bought and their prices.

If we try to merge the two tables by ID alone, each ID matches several rows and the join duplicates observations (recent versions of dplyr warn about the many-to-many relationship). To remedy the situation, we can pass two key variables: ID and year, which appear in both datasets. We can use the following code to merge table 1 and table 2:

df_primary <- tribble(
  ~ID, ~year, ~items,
  "A", 2021, 3,
  "A", 2022, 7,
  "A", 2023, 6,
  "B", 2021, 4,
  "B", 2022, 8,
  "B", 2023, 7,
  "C", 2021, 4,
  "C", 2022, 6,
  "C", 2023, 6)
df_secondary <- tribble(
  ~ID, ~year, ~prices,
  "A", 2021, 9,
  "A", 2022, 8,
  "A", 2023, 12,
  "B", 2021, 13,
  "B", 2022, 14,
  "B", 2023, 6,
  "C", 2021, 15,
  "C", 2022, 15,
  "C", 2023, 13)
left_join(df_primary, df_secondary, by = c('ID', 'year'))


## # A tibble: 9 x 4
##   ID   year items prices
## 1 A    2021     3      9
## 2 A    2022     7      8
## 3 A    2023     6     12
## 4 B    2021     4     13
## 5 B    2022     8     14
## 6 B    2023     7      6
## 7 C    2021     4     15
## 8 C    2022     6     15
## 9 C    2023     6     13

Data Cleaning Functions in R

Following are the four important functions to tidy (clean) the data:

Function Objective Arguments

gather() Transform the data from wide to long (data, key, value, na.rm = FALSE)

spread() Transform the data from long to wide (data, key, value)

separate() Split one variable into two (data, col, into, sep = "", remove = TRUE)

unite() Unite two variables into one (data, col, ..., sep = "", remove = TRUE)

If not installed already, enter the following command to install tidyr:

install.packages("tidyr")

The objective of the gather() function is to transform the data from wide to long.


gather(data, key, value, na.rm = FALSE)

Arguments:
-data: The data frame used to reshape the dataset
-key: Name of the new column created
-value: Select the columns used to fill the key column
-na.rm: Remove missing values. FALSE by default


Below, we can visualize the concept of reshaping wide to long. We want to create a single column named growth, filled by the rows of the quarter variables.

library(tidyr)
# Create a messy dataset
messy <- data.frame(
  country = c("A", "B", "C"),
  q1_2023 = c(0.03, 0.05, 0.01),
  q2_2023 = c(0.05, 0.07, 0.02),
  q3_2023 = c(0.04, 0.05, 0.01),
  q4_2023 = c(0.03, 0.02, 0.04))
messy


## country q1_2023 q2_2023 q3_2023 q4_2023
## 1 A 0.03 0.05 0.04 0.03
## 2 B 0.05 0.07 0.05 0.02
## 3 C 0.01 0.02 0.01 0.04

# Reshape the data
tidier <- gather(messy, quarter, growth, q1_2023:q4_2023)
tidier


## country quarter growth
## 1 A q1_2023 0.03
## 2 B q1_2023 0.05
## 3 C q1_2023 0.01
## 4 A q2_2023 0.05
## 5 B q2_2023 0.07
## 6 C q2_2023 0.02
## 7 A q3_2023 0.04
## 8 B q3_2023 0.05
## 9 C q3_2023 0.01
## 10 A q4_2023 0.03
## 11 B q4_2023 0.02
## 12 C q4_2023 0.04

In the gather() function, we create two new variables, quarter and growth, because our original dataset has one grouping variable (country) plus the key-value pairs.

The spread() function does the opposite of gather.


spread(data, key, value)

arguments:
-data: The data frame used to reshape the dataset
-key: Column to reshape from long to wide
-value: Rows used to fill the new column


We can reshape the tidier dataset back to messy with spread()

# Reshape the data
messy_1 <- spread(tidier, quarter, growth)
messy_1


## country q1_2023 q2_2023 q3_2023 q4_2023
## 1 A 0.03 0.05 0.04 0.03
## 2 B 0.05 0.07 0.05 0.02
## 3 C 0.01 0.02 0.01 0.04
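Note that gather() and spread() still work, but in current versions of tidyr (1.0 and later) they are superseded by pivot_longer() and pivot_wider(). A sketch of the same reshaping with the newer functions:

```r
library(tidyr)

# The same messy dataset as above
messy <- data.frame(
  country = c("A", "B", "C"),
  q1_2023 = c(0.03, 0.05, 0.01),
  q2_2023 = c(0.05, 0.07, 0.02),
  q3_2023 = c(0.04, 0.05, 0.01),
  q4_2023 = c(0.03, 0.02, 0.04))

# Wide to long: replacement for gather(messy, quarter, growth, q1_2023:q4_2023)
tidier <- pivot_longer(messy, cols = q1_2023:q4_2023,
                       names_to = "quarter", values_to = "growth")

# Long back to wide: replacement for spread(tidier, quarter, growth)
messy_1 <- pivot_wider(tidier, names_from = quarter, values_from = growth)
```

The pivot functions produce the same shapes as the examples above, with more explicit argument names.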

The separate() function splits a column into two according to a separator. This function is helpful in situations where the variable is a date: our analysis may require focusing on month and year, so we want to separate the column into two new variables.


separate(data, col, into, sep = "", remove = TRUE)

arguments:
-data: The data frame used to reshape the dataset
-col: The column to split
-into: The names of the new variables
-sep: Indicates the symbol used to separate the variable, i.e.: "-", "_", "&"
-remove: Remove the old column. By default set to TRUE.


We can split the quarter from the year in the tidier dataset by applying the separate() function.

separate_tidier <- separate(tidier, quarter, c("Qrt", "year"), sep = "_")
head(separate_tidier)


## country Qrt year growth
## 1 A q1 2023 0.03
## 2 B q1 2023 0.05
## 3 C q1 2023 0.01
## 4 A q2 2023 0.05
## 5 B q2 2023 0.07
## 6 C q2 2023 0.02
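One detail worth knowing: after separate(), the new year column is of type character. separate() also accepts a convert argument that coerces the new columns to an appropriate type. A minimal sketch (the small tidier table below is rebuilt just for illustration):

```r
library(tidyr)

# A small long-format table like the one above
tidier <- data.frame(
  country = c("A", "B", "C"),
  quarter = c("q1_2023", "q2_2023", "q3_2023"),
  growth  = c(0.03, 0.07, 0.01))

# convert = TRUE parses "2023" into an integer instead of leaving it as text
separate_tidier <- separate(tidier, quarter, c("Qrt", "year"),
                            sep = "_", convert = TRUE)
class(separate_tidier$year)  # "integer"
```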

The unite() function concatenates two columns into one.


unite(data, col, ..., sep = "", remove = TRUE)

arguments:
-data: The data frame used to reshape the dataset
-col: Name of the new column
-...: Names of the columns to concatenate
-sep: Indicates the symbol used to unite the variables, i.e.: "-", "_", "&"
-remove: Remove the old columns. By default, set to TRUE


In the above example, we separated quarter from year. What if we want to merge them back? We use the following code:

unit_tidier <- unite(separate_tidier, Quarter, Qrt, year, sep = "_")
head(unit_tidier)


## country Quarter growth
## 1 A q1_2023 0.03
## 2 B q1_2023 0.05
## 3 C q1_2023 0.01
## 4 A q2_2023 0.05
## 5 B q2_2023 0.07
## 6 C q2_2023 0.02

Summary

Data analysis can be divided into three parts: Extraction, Transform, and Visualize.

R has a library called dplyr to help in data transformation. The dplyr library is built around a small set of verbs to manipulate the data, and the tidyr library complements it with functions to clean the data.

dplyr provides a nice and convenient way to combine datasets. A join with dplyr adds variables to the right of the original dataset.

The beauty of dplyr is that it handles four types of joins similar to SQL:

left_join() – To merge two datasets and keep all observations from the origin table.

right_join() – To merge two datasets and keep all observations from the destination table.

inner_join() – To merge two datasets and exclude all unmatched rows.

full_join() – To merge two datasets and keep all observations.
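Beyond these four mutating joins, dplyr also provides two filtering joins, semi_join() and anti_join(). They keep or drop rows of the origin table depending on whether a match exists in the destination table, without adding any columns. A sketch on the same df_primary and df_secondary tables used above:

```r
library(dplyr)

df_primary <- tribble(
  ~ID, ~y,
  "A", 5, "B", 5, "C", 8, "D", 0, "F", 9)
df_secondary <- tribble(
  ~ID, ~z,
  "A", 30, "B", 21, "C", 22, "D", 25, "E", 29)

# semi_join(): rows of df_primary that have a match in df_secondary (A, B, C, D)
semi_join(df_primary, df_secondary, by = "ID")

# anti_join(): rows of df_primary without a match in df_secondary (F only)
anti_join(df_primary, df_secondary, by = "ID")
```

Filtering joins are handy for checking which observations would be lost before running an inner_join().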

Using the tidyr Library, you can transform a dataset using following functions:

gather(): Transform the data from wide to long.

spread(): Transform the data from long to wide.

separate(): Split one variable into two.

unite(): Unite two variables into one.
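To close the loop, the join and tidy functions compose naturally with the pipe operator. A short sketch with hypothetical sales data (the tables and column names below are invented for illustration):

```r
library(dplyr)
library(tidyr)

# Hypothetical sales per customer and a lookup table of regions
sales <- tribble(
  ~ID, ~q1, ~q2,
  "A", 10, 12,
  "B",  7,  9)
region <- tribble(
  ~ID, ~region,
  "A", "North",
  "B", "South")

long_sales <- sales %>%
  left_join(region, by = "ID") %>%  # attach region to each customer
  gather(quarter, units, q1:q2)     # reshape wide to long
long_sales
```

The result has one row per customer and quarter, with the region carried along by the join.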

What Is Data Integration & How Does It Work?

Data integration, which combines data from different sources, is essential in today’s data-driven economy because business competitiveness, customer satisfaction and operations depend on merging diverse data sets. As more organizations pursue digital transformation paths – using data integration tools – their ability to access and combine data becomes even more critical.

As data integration combines data from different inputs, it enables the user to drive more value from their data. This is central to Big Data work. Specifically, it provides a unified view across data sources and enables the analysis of combined data sets to unlock insights that were previously unavailable or not as economically feasible to obtain. Data integration is usually implemented in a data warehouse, cloud or hybrid environment where massive amounts of internal and perhaps external data reside.

In the case of mergers and acquisitions, data integration can result in the creation of a data warehouse that combines the information assets of the various entities so that those information assets can be leveraged more effectively.

Data integration platforms integrate enterprise data on-premises, in the cloud, or both. They provide users with a unified view of their data, which enables them to better understand their data assets. In addition, they may include various capabilities such as real-time, event-based and batch processing as well as support for legacy systems and Hadoop.

Although data integration platforms can vary in complexity and difficulty depending on the target audience, the general trend has been toward low-code and no-code tools that do not require specialized knowledge of query languages, programming languages, data management, data structure or data integration.

Importantly, these data integration platforms provide the ability to combine structured and unstructured data from internal data sources, as well as combine internal and external data sources. Structured data is data that’s stored in rows and columns in a relational database. Unstructured data is everything else, such as word processing documents, video, audio, graphics, etc.

In addition to enabling the combination of disparate data, some data integration platforms also enable users to cleanse data, monitor it, and transform it so the data is trustworthy and complies with data governance rules.

Data integration platforms are often deployed alongside, or bundled with, several related categories of tools:

ETL platforms that extract data from a data source, transform it into a common format, and load it into a target destination. An ETL tool may be part of a data integration solution or vice versa, and the two terms are sometimes used synonymously.

Data catalogs that enable a common business language and facilitate the discovery, understanding and analysis of information.

Data governance tools that ensure the availability, usability, integrity and security of data.

Data cleansing tools that identify, correct, or remove incomplete, incorrect, inaccurate or irrelevant parts of the data.

Data replication tools capable of replicating data across SQL and NoSQL (relational and non-relational) databases for the purposes of improving transactional integrity and performance.

Data warehouses – centralized data repositories used for reporting and data analysis.

Data migration tools that transport data between computers, storage devices or formats.

Master data management tools that enable common data definitions and unified data management.

Metadata management tools that enable the establishment of policies and processes that ensure information can be accessed, analyzed, integrated, linked, maintained and shared across the organization.

Data connectors that import or export data or convert them to another format.

Data profiling tools for understanding data and its potential uses.

Data integration started in the 1980s with discussions about “data exchange” between different applications. If a system could leverage the data in another system, then it would not be necessary to replicate the data in the other system. At the time, the cost of data storage was higher than it is today because everything had to be physically stored on-premises since cloud environments were not yet available.

Exchanging or integrating data between or among systems has been a difficult and expensive proposition traditionally since data formats, data types, and even the way data is organized varies from one system to another. “Point-to-point” integrations were the norm until middleware, data integration platforms, and APIs became fashionable. The latter solutions gained popularity over the former because point-to-point integrations are time-intensive, expensive, and don’t scale.

Meanwhile, data usage patterns have evolved from periodic reporting using historical data to predictive analytics. To facilitate more efficient use of data, new technologies and techniques have continued to emerge over time including:

Data warehouses. The general practice was to extract data from different data sources using ETL, transform the data into a common format and load it into a data warehouse. However, as the volume and variety of data continued to expand and the velocity of data generation and use accelerated, data warehouse limitations caused organizations to look for more cost-effective and scalable cloud solutions. While data warehouses are still in use, more organizations increasingly rely on cloud solutions.

Data mapping. The differences in data types and formats necessitated “data mapping,” which makes it easier to understand the relationships between data. For example, D. Smith and David Smith could be the same customer, with the differing references attributable to the application fields in which the data was entered.

Semantic mapping. Another challenge has been “semantic mapping” in which a common reference such as “product” or “customer” holds different meaning in different systems. These differences necessitated ontologies that define schema terms and resolve the differences.

Data lakes. Meanwhile, the explosion of Big Data has resulted in the creation of data lakes that store vast amounts of raw data.

The explosion of enterprise data coupled with the availability of third-party data sets enables insights and predictions that were previously too difficult, time-consuming, or costly to obtain. For example, consider the following use cases:

Companies combine data from sales, marketing, finance, fulfillment, customer support and technical support – or some combination of those elements – to understand customer journeys.

Public attractions such as zoos combine weather data with historical attendance data to better predict staffing requirements on specific dates.

Hotels use weather data and data about major events (e.g., professional sports playoff games, championships, or rock concerts) to more precisely allocate resources and maximize profits through dynamic pricing.

Data integration theories are a subset of database theories. They are based on first-order logic, which is a collection of formal systems used in mathematics, philosophy, linguistics and computer science. Data integration theories indicate the difficulty and feasibility of data integration problems.

Data integration is necessary for business competitiveness. Still, particularly in established businesses, data remains locked in systems and difficult to access. To help liberate that data, more types of data integration products have become available. Liberating the data enables companies to better understand:

Their operations and how to improve operational efficiencies.

Their competitors.

Their customers and how to improve customer satisfaction/reduce churn.


Merger and acquisition targets.

Their target markets and the relative attractiveness of new markets.

How well their products and services are performing and whether the mix of products and services should change.

Business opportunities.

Business risks.

Other benefits of data integration include:

More effective collaboration.

Faster access to combined data sets than traditional methods such as manual integrations.

More comprehensive visibility into and across data assets.

Data syncing to ensure the delivery of timely, accurate data.

Error reduction as opposed to manual integrations.

Higher data quality over time.

Data integration combines data but does not necessarily result in a data warehouse. It provides a unified view of the data; however, the data may reside in different places.

Data integration results in a data warehouse when the data from two or more entities is combined into a central repository.

While data integration tools and techniques have improved over time, organizations can nevertheless face several challenges which can include:

Data created and housed in different systems tends to be in different formats and organized differently.

Data may be missing. For example, internal data may have more detail than external data, or data residing in a mainframe may lack time and date information about activities.

Historically, data and applications have been tightly-coupled. That model is changing. Specifically, the application and data layers are being decoupled to enable more flexible data use.

Data integration isn’t just an IT problem; it’s a business problem.

Data itself can be problematic if it’s biased, corrupted, unavailable, or unusable (including uses precluded by data governance).

The data is not available at all or for the specific purpose for which it will be used.

Data use restrictions – whether the data may be used at all, or for the specific purpose.

Extraction rules may limit data availability.

Lack of a business purpose. Data integrations should support business objectives.

Service-level integrity falls short of the SLA.

Cost – will one entity bear the cost or will the cost be shared?

Short-term versus long-term value.

Software-related issues (function, performance, quality).

Testing is inadequate.

APIs aren’t perfect. Some are well-documented and functionally sound, while others are not.

Data integration implementations can be accomplished in several different ways including:

Manual integrations between source systems.

Application integrations that require the application publishers to overcome the integration challenges of their respective systems.

Common storage integration, in which data from different systems is replicated and stored in a common, independent system.

Middleware which transfers the data integration logic from the application to a separate middleware layer.

Virtual data integration (also called uniform access integration), which provides views of the data while the data remains in its original repository.

APIs, software intermediaries that enable applications to communicate and share data.
