Build Your Own Optical Character Recognition (OCR) System Using Google’s Tesseract and OpenCV



Optical Character Recognition (OCR) is a widely used system in the computer vision space

Learn how to build your own OCR for a variety of tasks

We will leverage the OpenCV library and Tesseract for building the OCR system


OCR has applications across a broad range of industries and functions: everything from scanning documents – bank statements, receipts, handwritten documents, coupons, etc. – to reading street signs in autonomous vehicles falls under the OCR umbrella.

But building an OCR system isn’t a straightforward task. For starters, it is filled with problems like different fonts in images, poor contrast, multiple objects in an image, etc.

So, in this article, we will explore some very famous and effective approaches for the OCR task and how you can implement one yourself.

If you are new to object detection and computer vision, I suggest going through the following resources:

Table of Contents

What is Optical Character Recognition (OCR)?

Popular OCR Applications in the Real World

Text Recognition with Tesseract OCR

The Different Ways for Text Detection

What is Optical Character Recognition (OCR)?

Let’s first understand what OCR is, in case you haven’t come across this concept before.

OCR, or Optical Character Recognition, is a process of recognizing text inside images and converting it into an electronic form. These images could be of handwritten text, printed text like documents, receipts, name cards, etc., or even a natural scene photograph.

OCR has two parts to it. The first part is text detection where the textual part within the image is determined. This localization of text within the image is important for the second part of OCR, text recognition, where the text is extracted from the image. Using these techniques together is how you can extract text from any image.

Before we dive into how to build your own OCR, let’s take a look at some of the popular applications of OCR.

Popular OCR Applications in the Real World

OCR has widespread applications across industries, primarily with the aim of reducing manual human effort. It has been incorporated into our everyday lives to the extent that we hardly ever notice it, yet it quietly works to deliver a better user experience.

OCR is increasingly being used for digitization by various industries to cut down manual workload. This makes it very easy and efficient to extract and store information from business documents, receipts, invoices, passports, etc. Also, when you upload your documents for KYC (Know Your Customer), OCR is used to extract information from these documents and store it for future reference.

OCR is also used for book scanning, where it turns raw images into digital text. Many large-scale projects like the Gutenberg project, Million Book Project, and Google Books use OCR to scan and digitize books and store the works as an archive.

The banking industry is also increasingly using OCR to archive client-related paperwork, like onboarding material, to easily create a client repository. This significantly reduces the onboarding time and thereby improves the user experience. Also, banks use OCR to extract information like account number, amount, cheque number from cheques for faster processing.

Now, let’s look at one of the most famous and widely used text recognition techniques – Tesseract.

Text Recognition with Tesseract OCR

Tesseract is an open-source OCR engine that was originally developed as proprietary software by HP (Hewlett-Packard) and made open source in 2005. Google has since adopted the project and sponsored its development.

As of today, Tesseract can recognize more than 100 languages and can even process right-to-left scripts such as Arabic or Hebrew! No wonder it is used by Google for text detection on mobile devices, in videos, and in Gmail’s image spam detection algorithm.

From version 4 onwards, Google has given a significant boost to this OCR engine. Tesseract 4.0 added a new OCR engine that uses a neural network based on LSTM (Long Short-Term Memory), one of the most effective architectures for sequence prediction problems. The previous pattern-matching OCR engine is still available as legacy code.

Once you have downloaded Tesseract onto your system, you can easily run it from the command line using the following command:

tesseract imagename outputbase

You can change the Tesseract configuration to get results best suited to your image:

Language (-l) – You can detect a single language or multiple languages with Tesseract

OCR engine mode (–oem) – As you already know, Tesseract 4 has both LSTM and legacy OCR engines. There are four valid operation modes based on their combination

Page segmentation mode (–psm) – Can be adjusted according to the text in the image for better results


However, instead of the command-line method, you could also use Pytesseract – a Python wrapper for Tesseract. Using this you can easily implement your own text recognizer using Tesseract OCR by writing a simple Python script.

You can download Pytesseract using the pip install pytesseract command.

The main function in Pytesseract is image_to_string(), which takes the image and the command-line options as its arguments:

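As a minimal sketch of that script: the file name below is hypothetical, and both the Tesseract binary and the pytesseract package must be installed for ocr_image() to run.

```python
def tesseract_config(lang="eng", oem=3, psm=3):
    """Build the option string Tesseract expects: language, engine mode, page segmentation mode."""
    return f"-l {lang} --oem {oem} --psm {psm}"

def ocr_image(path, **options):
    # Imported here so the sketch can be read without the dependencies installed
    import cv2
    import pytesseract

    img = cv2.imread(path)
    return pytesseract.image_to_string(img, config=tesseract_config(**options))

# Example with a hypothetical file name:
# print(ocr_image("receipt.png", lang="eng", psm=6))
```

Keeping the configuration in one helper makes it easy to experiment with different --psm values, which is usually the first knob to turn when results look wrong.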

What are the Challenges with Tesseract?

It’s no secret that Tesseract is not perfect. It performs poorly when the image has a lot of noise or when the font of the language is one that Tesseract OCR is not trained on. Other conditions like the brightness or skewness of the text will also affect its performance. Nevertheless, it is a good starting point for text recognition with low effort and high output.

The Different Ways for Text Detection

Tesseract assumes that the input image is fairly clean. Unfortunately, many input images contain a plethora of objects, not just clean, preprocessed text. Therefore, it becomes imperative to have a good text detection system that can locate text, which can then be easily extracted.

There are a few ways to perform text detection:

Traditional way of using OpenCV

Contemporary way of using Deep Learning models, and

Building your very own custom model

Text Detection using OpenCV

Text detection using OpenCV is the classic way of doing things. You can apply various manipulations like image resizing, blurring, thresholding, morphological operations, etc. to clean the image.

Here we have the grayscale, blurred, and thresholded images, in that order.

Once you have done that, you can use OpenCV’s contour detection to find contours and extract chunks of data:


Finally, you can apply text recognition on the contours that you got to predict the text:

The results in the image above were achieved with minimum preprocessing and contour detection followed by text recognition using Pytesseract. Obviously, the contours did not detect the text every time.

But still, doing text detection with OpenCV is a tedious task requiring a lot of playing around with the parameters, and it does not generalize well. A better way of doing this is by using the EAST text detection model.

Contemporary Deep Learning Model – EAST

EAST, or Efficient and Accurate Scene Text Detector, is a deep learning model for detecting text in natural scene images. It is fast and accurate, able to process 720p images at 13.2 fps with an F-score of 0.7820.

The model consists of a Fully Convolutional Network and a Non-maximum suppression stage to predict a word or text lines. The model, however, does not include some intermediary steps like candidate proposal, text region formation, and word partition that were involved in other previous models, which allows for an optimized model.

You can have a look at the image below provided by the authors in their paper comparing the EAST model with other previous models:

EAST has a U-shape network. The first part of the network consists of convolutional layers trained on the ImageNet dataset. The next part is the feature merging branch which concatenates the current feature map with the unpooled feature map from the previous stage.

This is followed by convolutional layers to reduce computation and produce output feature maps. Finally, using a convolutional layer, the output is a score map showing the presence of text and a geometry map which is either a rotated box or a quadrangle that covers the text. This can be visually understood from the image of the architecture that was included in the research paper:

I highly suggest you go through the paper yourself to get a good understanding of the EAST model.

OpenCV has included the EAST text detector model in version 3.4 onwards. This makes it super convenient to implement your own text detector. The resulting localized text boxes can be passed through Tesseract OCR to extract the text and you will have a complete end-to-end model for OCR.
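As a sketch of how that wiring looks with OpenCV’s dnn module: the model file name below is a hypothetical local path to the pretrained EAST graph, and decoding the geometry map plus applying non-maximum suppression are left out for brevity.

```python
# Output layers of the EAST network: a score map (text probability)
# and a geometry map (box coordinates)
EAST_OUTPUT_LAYERS = ["feature_fusion/Conv_7/Sigmoid",
                      "feature_fusion/concat_3"]

def east_input_size(width, height):
    """EAST requires input dimensions that are multiples of 32."""
    return (width // 32) * 32, (height // 32) * 32

def detect_text_east(image, model_path="frozen_east_text_detection.pb"):
    import cv2  # deferred so the sketch can be inspected without OpenCV installed

    h, w = image.shape[:2]
    new_w, new_h = east_input_size(w, h)
    net = cv2.dnn.readNet(model_path)
    # The mean values are the standard ImageNet channel means used when EAST was trained
    blob = cv2.dnn.blobFromImage(image, 1.0, (new_w, new_h),
                                 (123.68, 116.78, 103.94), swapRB=True, crop=False)
    net.setInput(blob)
    scores, geometry = net.forward(EAST_OUTPUT_LAYERS)
    # Still to do: decode geometry into boxes and run non-maximum suppression
    return scores, geometry
```

The decoded, suppressed boxes are what you would crop and pass into Tesseract for the recognition step.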

Custom Model using TensorFlow Object API for Text Detection

The final method to build your text detector is using a custom-built text detector model using the TensorFlow Object API. It is an open-source framework used to build deep learning models for object detection tasks. To understand it in detail, I suggest going through this detailed article first.

To build your custom text detector, you will need a dataset of at least 100 images. Then you need to annotate these images so that the model can learn where the target object is. Finally, you can choose one of the pre-trained models, depending on the trade-off between performance and speed, from TensorFlow’s detection model zoo. You can refer to this comprehensive blog to build your custom model.

Now, training can require some computation, but if you don’t really have enough of it, don’t worry! You can use Google Colaboratory for all your requirements! This article will teach you how to use it effectively.

Finally, if you want to go a step ahead and build a YOLO state-of-the-art text detector model, this article will be a stepping stone to understanding all the nitty-gritty of it and you will be off to a great start!

End Notes

In this article, we covered the problems in OCR and the various approaches that can be used to solve the task. We also discussed the various shortcomings in the approaches and why OCR is not as easy as it seems!

Have you worked with any OCR application before? What kind of OCR use cases do you plan on building after this? Let me know your ideas and feedback below.



The Ultimate Guide To Build A Custom Chatgpt Chatbot With Your Own Data

Chatbots can offer many benefits for users and businesses, but they vary in their quality and performance. If you want to build a chatbot that is both intelligent and conversational, you might want to consider using ChatGPT, a state-of-the-art natural language generation (NLG) model that can generate texts based on user inputs.

In this article, we will show you how to build a custom ChatGPT chatbot with your own data, so that you can create a unique and tailored experience for your users. We will also share some best practices and tips for building a high-quality and effective ChatGPT chatbot.

What is a ChatGPT chatbot?

A ChatGPT chatbot is a chatbot built on top of the ChatGPT model. It can leverage the power of the model to produce diverse and creative responses that match the user’s intent, context, and personality, and it can handle complex, open-ended conversations that span multiple turns and topics.

Why build a custom ChatGPT chatbot with your own data?

While the ChatGPT model is impressive in its generality and versatility, it might not be enough for your specific needs. Depending on your domain, audience, and goals, you might want to customize your chatbot’s behavior, tone, vocabulary, and knowledge.

For example, if you want to build a chatbot for a medical domain, you might want it to use medical terms, facts, and guidelines that are relevant and accurate. If you want to build a chatbot for an entertainment domain, you might want it to use humor, sarcasm, and references that are appropriate and engaging.

By building a custom ChatGPT chatbot with your own data, you can fine-tune the ChatGPT model on your own corpus of conversations, which can reflect your domain, audience, and goals. This way, you can create a chatbot that is more specific, consistent, and reliable than a generic ChatGPT chatbot.

What are the benefits of a custom ChatGPT chatbot?

Building a custom ChatGPT chatbot with your own data can bring you many benefits, such as:

Personalization: You can tailor your chatbot’s responses to your user’s preferences, needs, and emotions, which can increase user satisfaction and loyalty.

Differentiation: You can give your chatbot a unique identity, voice, and style, which can make it stand out from the crowd and attract more users.

Optimization: You can improve your chatbot’s performance, accuracy, and relevance, which can reduce errors and misunderstandings and enhance user trust and retention.

Innovation: You can explore new possibilities and opportunities for your chatbot, which can generate more value and impact for your business.

How to Build a Custom ChatGPT Chatbot with Your Own Data

Building a custom ChatGPT chatbot with your own data is not as hard as it sounds. You just need to follow these four steps:

Step 1: Prepare your data

The first step is to prepare your data for training your ChatGPT model. Your data should consist of pairs of user messages and chatbot responses, formatted as JSON objects. For example:

{ "user": "Hi, I'm looking for a good movie to watch.", "chatbot": "Hello, welcome to MovieBot. What genre do you like?" }

You can collect your data from various sources, such as:

Existing chat logs from your platform or service

Online forums or communities related to your domain.

Human-generated dialogues from crowdsourcing platforms or tools

Synthetic dialogues from NLG models or tools

You should aim to have at least 10,000 pairs of user messages and chatbot responses for your data. The more data you have, the better your ChatGPT model will be. However, you should also make sure that your data is high-quality and relevant to your domain, audience, and goals. You should avoid using data that is noisy, outdated, inaccurate, biased, or offensive.
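Whatever the source, it helps to validate and serialize the pairs consistently before training. Here is a small helper, assuming the same "user"/"chatbot" field names as the JSON example above (adjust to whatever schema your fine-tuning tool expects):

```python
import json

def pairs_to_jsonl(pairs):
    """Serialize user/chatbot message pairs to JSONL, dropping incomplete pairs."""
    lines = []
    for pair in pairs:
        # Skip noisy or incomplete pairs rather than feeding them to the model
        if not pair.get("user") or not pair.get("chatbot"):
            continue
        lines.append(json.dumps({"user": pair["user"].strip(),
                                 "chatbot": pair["chatbot"].strip()}))
    return "\n".join(lines)
```

Filtering at this stage is cheap insurance: a few thousand malformed pairs can noticeably degrade a fine-tuned model.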

Step 2: Train your ChatGPT model

The second step is to train your ChatGPT model on your data. You can use any platform or tool that supports fine-tuning GPT-3 models, such as OpenAI Playground, Hugging Face Transformers, or Google Colab.

The basic idea is to load the pre-trained ChatGPT model and feed it with your data in batches. For each batch, the model will try to predict the next word in the chatbot response given the user message. The model will then compare its prediction with the actual word in the data and update its parameters accordingly. This process will repeat until the model converges or reaches a desired level of accuracy.

The training process can take from a few hours to a few days depending on the size of your data and the computing resources you have. You can monitor the progress of the training by using metrics such as perplexity (a measure of how well the model fits the data) and loss (a measure of how much the model deviates from the data).
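The two metrics are directly related: perplexity is the exponential of the average per-token cross-entropy loss, so for a response of $N$ tokens:

```latex
\text{loss} = -\frac{1}{N}\sum_{i=1}^{N}\log p(w_i \mid w_{<i}),
\qquad
\text{perplexity} = e^{\text{loss}}
```

A loss of 2.0 therefore corresponds to a perplexity of about $e^{2} \approx 7.4$, and lower values of either metric mean the model fits the data better.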

Step 3: Test and evaluate your ChatGPT chatbot

The third step is to test and evaluate your ChatGPT chatbot on new user messages that are not in your data. You can use any platform or tool that supports generating texts from GPT-3 models, such as OpenAI Playground, Hugging Face Transformers, or Google Colab.

The basic idea is to load your trained ChatGPT model and feed it new user messages. For each user message, the model generates a chatbot response by sampling words from its probability distribution, and it stops generating when it reaches a predefined end-of-text token or a period. The model also keeps track of the previous user messages and chatbot responses in the conversation to maintain coherence and context. You can test and evaluate your ChatGPT chatbot on various aspects, such as fluency, relevance, consistency, and diversity.

You can use qualitative methods to evaluate your ChatGPT chatbot: ask real users or experts to rate or review your chatbot’s responses based on criteria such as satisfaction (how happy the user is with the response), usefulness (how helpful the response is for the user), or engagement (how interested the user is in continuing the conversation). You should also compare your ChatGPT chatbot with other chatbots or baselines in your domain to see how it performs relative to them.

Step 4: Deploy and integrate your ChatGPT chatbot

The fourth and final step is to deploy and integrate your ChatGPT chatbot to your platform or service. You can use any platform or tool that supports hosting and serving GPT-3 models.

The basic idea is to upload your trained ChatGPT model to a cloud server and expose it as an API endpoint. The API endpoint will accept user messages as input and return chatbot responses as output. You can then connect your API endpoint to your platform or service using webhooks, SDKs, or libraries.
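A minimal sketch of such an endpoint using only Python’s standard library; generate_reply() is a placeholder standing in for the call to your hosted fine-tuned model.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate_reply(user_message):
    # Placeholder: in a real deployment this would call your fine-tuned model's API
    return f"You said: {user_message}"

class ChatHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON payload ({"user": "..."}) and answer with {"chatbot": "..."}
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps({"chatbot": generate_reply(payload.get("user", ""))}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep request logging quiet

# To run standalone:
# HTTPServer(("0.0.0.0", 8080), ChatHandler).serve_forever()
```

In production you would put this behind TLS and authentication, but the request/response shape is the same webhooks and SDKs would consume.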

You can deploy and integrate your ChatGPT chatbot into various platforms or services, such as websites, apps, social media, and messaging platforms. You should make sure that your ChatGPT chatbot is accessible, responsive, and secure for your users, and monitor and update it regularly to ensure its quality and reliability.

Best Practices and Tips for Building a Custom ChatGPT

Building a custom ChatGPT chatbot with your own data is not only a technical process but also a creative one. You need to consider many factors that can affect your chatbot’s performance and user experience. Here are some best practices and tips that can help you build a better custom ChatGPT chatbot:

Define your chatbot’s purpose and personality

Before you start building your custom ChatGPT chatbot, you should have a clear idea of what you want it to do and how you want it to sound. You should define your chatbot’s purpose and personality based on your domain, audience, and goals.

Your chatbot’s purpose and personality should be consistent and coherent throughout your chatbot’s responses. They should also match your user’s expectations and preferences. You can use your data to reflect your chatbot’s purpose and personality by using appropriate words, phrases, and expressions.

Use high-quality and relevant data

Your data is the foundation of your custom ChatGPT chatbot. The quality and relevance of your data will directly affect the quality and relevance of your chatbot’s responses. You should use high-quality and relevant data that can help your chatbot achieve its purpose and exhibit its personality.

You should use high-quality and relevant data to train your ChatGPT model, as well as to test and evaluate your ChatGPT chatbot. You should also update your data regularly to keep up with the changes in your domain, audience, and goals.

Fine-tune your ChatGPT model parameters

Your ChatGPT model parameters are the settings that control how your ChatGPT model generates texts. You can fine-tune your ChatGPT model parameters to optimize your chatbot’s response generation according to your needs and preferences.

You can fine-tune your ChatGPT model parameters by using different values or ranges for each parameter and observing how they affect your chatbot’s response generation. You can also use different combinations of parameters for different scenarios, topics, or intents in a conversation.

Monitor and improve your ChatGPT chatbot performance

Your custom ChatGPT chatbot is not a static product but a dynamic process. You should monitor and improve your ChatGPT chatbot performance continuously to ensure its quality and reliability.

Frequently Asked Questions

What is ChatGPT?

How much data do I need to build a custom ChatGPT chatbot?

There is no definitive answer to this question, as it depends on many factors, such as your domain, audience, and goals. However, as a general rule of thumb, you should aim to have at least 10,000 pairs of user messages and chatbot responses for your data. The more data you have, the better your ChatGPT model will be. However, you should also make sure that your data is high-quality and relevant to your domain, audience, and goals.

How long does it take to train a custom ChatGPT chatbot?

The training time of a custom ChatGPT chatbot depends on the size of your data and the computing resources you have. The training process can take from a few hours to a few days depending on these factors. You can monitor the progress of the training by using metrics such as perplexity and loss.

How can I deploy my custom ChatGPT chatbot to different platforms?

You can deploy your custom ChatGPT chatbot to different platforms by using platforms or tools that support hosting and serving GPT-3 models, such as OpenAI Playground, Hugging Face Transformers, or Google Cloud. You can upload your trained ChatGPT model to a cloud server and expose it as an API endpoint. You can then connect your API endpoint to different platforms using webhooks, SDKs, or libraries.


In this article, we have shown you how to build a custom ChatGPT chatbot with your own data. We have also shared some best practices and tips for building a high-quality and effective ChatGPT chatbot. Building a custom ChatGPT chatbot with your own data can help you create a unique and tailored experience for your users. It can also help you achieve your domain, audience, and goals. We hope you have enjoyed this article and learned something useful from it. We wish you all the best in your chatbot endeavors. Happy chatting!

Ddr Diy: How To Build Your Own Dance Game With A Raspberry Pi

We may earn revenue from the products available on this page and participate in affiliate programs. Learn more ›

Those who lived through the 1990s may remember the craze of dance games in the amusement arcades. People would throw themselves around a miniature stage trying to time the movement of their feet to the arrows scrolling up the screen. From a distance, it might even look like they were dancing.

To relive this experience or enjoy it for the first time (without buying a full game or console), you can use a Raspberry Pi and a microcontroller to duplicate the features of the game.


Time: 2-3 hours

Material cost: $100-$140

Difficulty: moderate

Materials Tools

USB (Type A) keyboard

USB (Type A) mouse

Crimp tool (or pliers)

Internet connection

Soldering iron and solder

Sewing needle


1. Set up your Raspberry Pi. To do so, you’ll need to plug the keyboard and mouse into it and connect it to the monitor with the Micro-HDMI cable. Copy the Raspbian Buster operating system onto the SD card, ensuring you use the one that includes a desktop—you’ll need it to use StepMania. Insert the SD card into the Raspberry Pi and plug in the USB-C power supply. Follow the instructions to install the operating system and to connect to your Wi-Fi network.

Note: Once the project is complete, you’ll be able to manage without the mouse and keyboard, as their functions will be fulfilled by the Circuit Playground and dance mat.

cd ~
cd raspbian-stepmania-arcade
make

Note: StepMania has been ported to the Raspberry Pi by Matthias Rozensztok.

3. Reboot the Raspberry Pi. Once you do, StepMania will start automatically.

4. Boost the sound. The Pi’s sound capability is a little lacking, but it can be improved by an add-on. The Adafruit speaker bonnet can run two small speakers. Solder in the two screw connectors that came with the bonnet kit and screw the speaker wires into the connectors.

Note: You can buy pre-soldered speakers for the bonnet.

5. Separate the bonnet and the Pi. Without space between them, the bonnet may contact the pins on the Pi, potentially damaging one or both pieces of hardware. To avoid this, connect a GPIO riser or a ribbon cable between the two.

Note: Having a cable to mount the bonnet gives you the option of adding a heatsink. The Pi will slow down if it gets too hot. If you find this happening, you can buy small, stick-on heat sinks to keep the temperature below 176 degrees Fahrenheit (80 Celsius).

6. Install the driver software for the bonnet. Use these commands:

curl -sS

Once you’ve connected the speakers, your project should look like this. Andy Clark

7. Ensure the sound comes out of the bonnet. Installing StepMania will configure the Pi to have USB sound by default. To remove this and get the sound to come out of your speakers, run the following commands:

sudo rm /etc/modprobe.d/usb-audio-by-default.conf
rm ~/.asoundrc

8. Reboot the Pi to get the sound working properly.

sudo apt-get update
sudo apt-get install arduino

Note: There are two variations of the Circuit Playground board. This project uses the cheaper “classic” board, but will also work with the newer “express” board. The classic board uses the C programming language and can be coded using the Arduino Integrated Development Environment (IDE).

11. Load the Dance Controller software. The controller software is written in C. The disco lights are provided by its onboard multi-colored LEDs and the dance mat contacts use the board’s touch sensors. Use a terminal session to download the code as follows:

cd ~

When your dance mat is done, it should resemble the “stage” you’ve seen at arcades or at your friends’ houses. Andy Clark

12. Make the dance mat. Because you’ll be stepping all over it, you’ll want to use a square of tough material such as upholstery or denim. Choose a contrasting color for the triangles. Stitch the triangles in place using a needle and regular thread.

13. Using the conductive thread, make large stitches all over the triangles. The aim is to ensure that wherever the dancer steps, they’ll make contact with the thread. We used a catch stitch—a herringbone-shaped stitch that covers a large area with a small number of stitches. Feed the ends of these areas of conductive stitching out to the edge of the mat.

15. Play the game. Connect your Pi to a suitable monitor and connect the bonnet and the Circuit Playground via USB. Turn on the Pi, and if it’s all working, it should boot straight into StepMania.

Note: You might need to adjust the settings to configure the game to use your entire screen. Navigate the menu with the up and down arrows; use the left button on the Circuit Playground to select and the right one to exit.

Rotate Image Without Cutting Off Sides Using Opencv Python

Rotating an image is one of the most basic operations in image editing. The Python OpenCV library provides the methods cv2.getRotationMatrix2D() and cv2.rotate() to do this task very easily.

The cv2.rotate() method rotates the image by 90, 180, or 270 degrees only, whereas cv2.getRotationMatrix2D() can rotate the image to any specified angle. In the article below, we will rotate the image without cropping or cutting off its sides using OpenCV Python.

To rotate an image using the cv2.getRotationMatrix2D() method, we need to follow the three steps below −

First, we need to get the centre of rotation.

Next by using the getRotationMatrix2D() method, we need to create the 2D-rotation matrix.

Finally, by using the warpAffine() function in OpenCV, we apply the affine transformation to the image.

Using Cv2.getRotationMatrix2D() function

The function creates a transformation matrix for the input image array, which is then used to rotate the image. If the value of the angle parameter is positive, the image is rotated in the counter-clockwise direction. If you want to rotate the image clockwise, the angle needs to be negative.

Syntax

cv2.getRotationMatrix2D(center, angle, scale)

Parameters

center: Center of the rotation for the input image.

angle: The angle of rotation in degrees.

scale: An isotropic scale factor. Which scales the image up or down according to the value provided.


Let’s take an example, and rotate the image using the trigonometric functions of the math module.

import cv2
import math

def rotate_image(array, angle):
    height, width = array.shape[:2]
    image_center = (width / 2, height / 2)
    rotation_mat = cv2.getRotationMatrix2D(image_center, angle, 1)
    radians = math.radians(angle)
    sin = math.sin(radians)
    cos = math.cos(radians)
    bound_w = int((height * abs(sin)) + (width * abs(cos)))
    bound_h = int((height * abs(cos)) + (width * abs(sin)))
    rotation_mat[0, 2] += ((bound_w / 2) - image_center[0])
    rotation_mat[1, 2] += ((bound_h / 2) - image_center[1])
    rotated_mat = cv2.warpAffine(array, rotation_mat, (bound_w, bound_h))
    return rotated_mat

img = cv2.imread('Images/car.jpg', 1)
rotated_image = rotate_image(img, 256)
cv2.imshow('Rotated image', rotated_image)
cv2.waitKey(0)
cv2.destroyAllWindows()

Input image


The output Rotated image is displayed below.

The input image is successfully rotated by an angle of 256 degrees.


In this example, we will rotate an image using cv2.getRotationMatrix2D() and Python’s built-in abs() function.

import cv2

def rotate_image(arr, angle):
    height, width = arr.shape[:2]
    # get the image center
    image_center = (width / 2, height / 2)
    rotation_arr = cv2.getRotationMatrix2D(image_center, angle, 1)
    abs_cos = abs(rotation_arr[0, 0])
    abs_sin = abs(rotation_arr[0, 1])
    bound_w = int(height * abs_sin + width * abs_cos)
    bound_h = int(height * abs_cos + width * abs_sin)
    rotation_arr[0, 2] += bound_w / 2 - image_center[0]
    rotation_arr[1, 2] += bound_h / 2 - image_center[1]
    rotated_arr = cv2.warpAffine(arr, rotation_arr, (bound_w, bound_h))
    return rotated_arr

img = cv2.imread('Images/cat.jpg', 1)
rotated_image = rotate_image(img, 197)
cv2.imshow('Original image', img)
cv2.imshow('Rotated image', rotated_image)
cv2.waitKey(0)
cv2.destroyAllWindows()

Original Image

Rotated Image

The input image is successfully rotated by an angle of 197 degrees.


The cv2.rotate() function rotates an image frame in multiples of 90 degrees (90, 180, or 270). The function rotates the image in three different ways, selected by the rotateCode parameter (0, 1, or 2).

Syntax

cv2.rotate(src, rotateCode[, dst])

Parameters

src: Input image

rotateCode: It specifies how to rotate the image.

dst: It is the output image of the same size and depth as the input image.


It returns a rotated image.


In this example, the input image will be rotated 90 degrees in the anticlockwise direction.

import cv2
import numpy as np

img = cv2.imread('Images/logo.jpg', 1)
rotated_image = cv2.rotate(img, rotateCode = 2)
cv2.imshow('Original image', img)
cv2.imshow('Rotated image', rotated_image)
cv2.waitKey(0)
cv2.destroyAllWindows()

Original Image

Rotated Image

Using the np.rot90() function

The numpy.rot90() method is used to rotate an array by 90 degrees. If a rotation of 90 degrees (or a multiple of it) is sufficient for our input, then this is a simple and easy way.


In this example, we will take an input rectangular image “car.jpg” with 850X315 dimensions.

import cv2
import numpy as np

img = cv2.imread('Images/car.jpg', 1)
rotated_image = np.rot90(img)
cv2.imwrite('Rotated image.jpg', rotated_image)
cv2.imshow('InputImage', img)
cv2.waitKey(0)

Original Image

Rotated Image

The method rotates the array from the first axis towards the second, so the given image is rotated in the anticlockwise direction.
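As a quick illustration of what that axis rotation does, here is a tiny pure-Python equivalent for a 2-D list (a sketch for intuition only; use np.rot90 on real arrays):

```python
# Rotate a 2-D grid 90 degrees anticlockwise: the first row of the result
# is the last column of the input (first axis rotated towards the second)
def rot90(matrix):
    return [list(row) for row in zip(*matrix)][::-1]

grid = [[1, 2],
        [3, 4]]
print(rot90(grid))   # [[2, 4], [1, 3]] -- matches np.rot90
```

Applying the function four times returns the original grid, just as four calls to np.rot90 would.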

Javafx Example To Decrease The Brightness Of An Image Using Opencv.

One way to alter the brightness of an image using Java is to use the convertTo() method. This method performs the required calculations on the given matrix to alter the contrast and brightness of an image. This method accepts 4 parameters −

mat − Empty matrix to hold the result with the same size and type as the source matrix.

rtype − integer value specifying the type of the output matrix. If this value is negative, the type will be the same as the source.

alpha − Gain value, which must be greater than 0 (default value 1).

beta − Bias value (default value 0).

If the chosen value for the parameter beta is negative (0 to -255), the brightness of the image is reduced.
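The arithmetic convertTo() performs is just a per-pixel linear map with saturation. A small pure-Python sketch of the same formula (for intuition only; this is not the OpenCV API):

```python
# dst = src * alpha + beta, saturated to the 8-bit range [0, 255] --
# the same per-pixel formula Mat.convertTo() applies
def convert_to(pixels, alpha, beta):
    return [max(0, min(255, round(p * alpha + beta))) for p in pixels]

row = [0, 64, 128, 200, 255]
print(convert_to(row, 1.0, -60))   # beta < 0 lowers brightness: [0, 4, 68, 140, 195]
print(convert_to(row, 0.5, 0))     # alpha < 1 compresses the range: [0, 32, 64, 100, 128]
```

Note how the darkest pixels clip at 0 rather than going negative; that saturation is what keeps the output a valid 8-bit image.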

Example

import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import javafx.application.Application;
import javafx.beans.value.ChangeListener;
import javafx.beans.value.ObservableValue;
import javafx.embed.swing.SwingFXUtils;
import javafx.geometry.Insets;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.scene.control.Slider;
import javafx.scene.image.ImageView;
import javafx.scene.image.WritableImage;
import javafx.scene.layout.VBox;
import javafx.stage.Stage;
import javax.imageio.ImageIO;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfByte;
import org.opencv.imgcodecs.Imgcodecs;

public class DecreasingBrightnessJavaFX extends Application {
   private final int rtype = -1;
   double alpha = 1;
   double beta = 0;
   Slider slider1;
   int sliderMinVal = 0;
   int sliderMaxVal = 255;
   int sliderInitVal = 255;
   Mat src = null;
   public void start(Stage stage) throws IOException {
      // Load the OpenCV native library
      System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
      // Path to the input image
      String file = "D:\\Image\\cuba.jpg";
      src = Imgcodecs.imread(file);
      WritableImage writableImage = loadImage(src);
      ImageView imageView = new ImageView(writableImage);
      imageView.setX(25);
      imageView.setY(25);
      imageView.setFitHeight(400);
      imageView.setFitWidth(550);
      imageView.setPreserveRatio(true);
      // Slider to control the brightness
      slider1 = new Slider(sliderMinVal, sliderMaxVal, sliderInitVal);
      slider1.setShowTickLabels(true);
      slider1.setShowTickMarks(true);
      slider1.setMajorTickUnit(25);
      slider1.setBlockIncrement(10);
      Label label1 = new Label();
      Label label2 = new Label();
      // Recompute alpha and beta whenever the slider moves
      slider1.valueProperty().addListener(new ChangeListener<Number>() {
         public void changed(ObservableValue<? extends Number> observable, Number oldValue, Number newValue) {
            try {
               alpha = newValue.doubleValue();
               Mat dest = new Mat(src.rows(), src.cols(), src.type());
               alpha = alpha / sliderMaxVal;
               beta = 1.0 - alpha;
               label1.setText("α-value: " + alpha);
               label2.setText("β-value: " + beta);
               src.convertTo(dest, rtype, alpha, beta);
               imageView.setImage(loadImage(dest));
            } catch (Exception e) {
               e.printStackTrace();
            }
         }
      });
      VBox vbox = new VBox();
      vbox.setPadding(new Insets(20));
      vbox.setSpacing(10);
      vbox.getChildren().addAll(label1, label2, slider1, imageView);
      Scene scene = new Scene(vbox, 600, 450);
      stage.setTitle("Decreasing an image");
      stage.setScene(scene);
      stage.show();
   }
   // Convert an OpenCV Mat into a JavaFX WritableImage
   public WritableImage loadImage(Mat image) throws IOException {
      MatOfByte matOfByte = new MatOfByte();
      Imgcodecs.imencode(".jpg", image, matOfByte);
      byte[] byteArray = matOfByte.toArray();
      InputStream in = new ByteArrayInputStream(byteArray);
      BufferedImage bufImage = ImageIO.read(in);
      return SwingFXUtils.toFXImage(bufImage, null);
   }
   public static void main(String args[]) {
      launch(args);
   }
}

Build High Performance Time Series Models Using Auto Arima In Python And R


Picture this – You’ve been tasked with forecasting the price of the next iPhone and have been provided with historical data. This includes features like quarterly sales, month-on-month expenditure, and a whole host of things that come with Apple’s balance sheet. As a data scientist, which kind of problem would you classify this as? Time series modeling, of course.

From predicting the sales of a product to estimating the electricity usage of households, time series forecasting is one of the core skills any data scientist is expected to know, if not master. There are a plethora of different techniques out there which you can use, and we will be covering one of the most effective ones, called Auto ARIMA, in this article.

We will first understand the concept of ARIMA which will lead us to our main topic – Auto ARIMA. To solidify our concepts, we will take up a dataset and implement it in both Python and R.

If you are familiar with time series and its techniques (like moving average, exponential smoothing, and ARIMA), you can skip directly to section 4. For beginners, start from the below section, which is a brief introduction to time series and various forecasting techniques.

What is a time series?

Before we learn about the techniques to work on time series data, we must first understand what a time series actually is and how it is different from any other kind of data. Here is the formal definition of a time series – it is a series of data points measured at consistent time intervals. This simply means that particular values are recorded at a constant interval, which may be hourly, daily, weekly, every 10 days, and so on. What makes time series different is that each data point in the series is dependent on the previous data points. Let us understand the difference more clearly by taking a couple of examples.

Example 1:

Suppose you have a dataset of people who have taken a loan from a particular company (as shown in the table below). Do you think each row will be related to the previous rows? Certainly not! The loan taken by a person will be based on their financial condition and needs (there could be other factors such as family size, but for simplicity we are considering only income and loan type). Also, the data was not collected at any specific time interval. It depends on when the company received a request for the loan.

Example 2:

Let’s take another example. Suppose you have a dataset that contains the level of CO2 in the air per day (screenshot below). Will you be able to predict the approximate amount of CO2 for the next day by looking at the values from the past few days? Well, of course. If you observe, the data has been recorded on a daily basis, that is, the time interval is constant (24 hours).

You must have got an intuition about this by now – the first case is a simple regression problem and the second is a time series problem. Although the time series puzzle here can also be solved using linear regression, that isn’t really the best approach, as it neglects the relation of each value to all the past values. Let’s now look at some of the common techniques used for solving time series problems.

Methods for time series forecasting

There are a number of methods for time series forecasting, and we will briefly cover them in this section. The detailed explanation and Python code for all the techniques mentioned below can be found in this article: 7 techniques for time series forecasting (with python codes).

Naive Approach: In this forecasting technique, the value of the new data point is predicted to be equal to the previous data point. The result would be a flat line, since all new values take the previous values.

Simple Average: The next value is taken as the average of all the previous values. The predictions here are better than the ‘Naive Approach’ as it doesn’t result in a flat line but here, all the past values are taken into consideration which might not always be useful. For instance, when asked to predict today’s temperature, you would consider the last 7 days’ temperature rather than the temperature a month ago.

Moving Average: This is an improvement over the previous technique. Instead of taking the average of all the previous points, the average of ‘n’ previous points is taken to be the predicted value.

Weighted Moving Average: A weighted moving average is a moving average where the past ‘n’ values are given different weights.

Simple Exponential Smoothing: In this technique, larger weights are assigned to more recent observations than to observations from the distant past.

Holt’s Linear Trend Model: This method takes into account the trend of the dataset. By trend, we mean the increasing or decreasing nature of the series. Suppose the number of bookings in a hotel increases every year, then we can say that the number of bookings show an increasing trend. The forecast function in this method is a function of level and trend.

Holt Winters Method: This algorithm takes into account both the trend and the seasonality of the series. For instance – the number of bookings in a hotel is high on weekends & low on weekdays, and increases every year; there exists a weekly seasonality and an increasing trend.

ARIMA: ARIMA is a very popular technique for time series modeling. It describes the correlation between data points and takes into account the difference of the values. An improvement over ARIMA is SARIMA (or seasonal ARIMA). We will look at ARIMA in a bit more detail in the following section.
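The first four baselines above reduce to a few lines of arithmetic. Here is a toy sketch on a made-up series (the values are illustrative only):

```python
series = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]

naive = series[-1]                      # Naive: next value = last observed value
simple_avg = sum(series) / len(series)  # Simple Average: mean of all past values
moving_avg = sum(series[-3:]) / 3       # Moving Average: mean of the last n = 3 values

# Weighted Moving Average: more recent points get larger weights (summing to 1)
weights = [0.2, 0.3, 0.5]
weighted_ma = sum(w * v for w, v in zip(weights, series[-3:]))

print(naive, round(simple_avg, 1), round(moving_avg, 1), round(weighted_ma, 1))
```

Notice how the naive forecast ignores everything but the last point, while the simple average weights a year-old observation the same as yesterday’s – the moving and weighted variants sit between those extremes.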

Introduction to ARIMA

In this section we will do a quick introduction to ARIMA which will be helpful in understanding Auto Arima. A detailed explanation of Arima, parameters (p,q,d), plots (ACF PACF) and implementation is included in this article : Complete tutorial to Time Series.

ARIMA is a very popular statistical method for time series forecasting. ARIMA stands for Auto-Regressive Integrated Moving Averages. ARIMA models work on the following assumptions –

The data series is stationary, which means that the mean and variance should not vary with time. A series can be made stationary by using log transformation or differencing the series.

The data provided as input must be a univariate series, since arima uses the past values to predict the future values.
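Differencing, mentioned in the first assumption, simply subtracts each value from the next one. A minimal sketch:

```python
# First-order differencing (d = 1): removes a linear trend from the series
series = [10, 12, 15, 19, 24]
diff1 = [b - a for a, b in zip(series, series[1:])]
print(diff1)   # [2, 3, 4, 5]

# Differencing again (d = 2) removes the remaining trend in the differences
diff2 = [b - a for a, b in zip(diff1, diff1[1:])]
print(diff2)   # [1, 1, 1]
```

Each pass shortens the series by one point and flattens one order of trend; the number of passes needed to reach a stationary-looking series is the ‘d’ parameter discussed below.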

ARIMA has three components – AR (autoregressive term), I (differencing term) and MA (moving average term). Let us understand each of these components –

AR term refers to the past values used for forecasting the next value. The AR term is defined by the parameter ‘p’ in arima. The value of ‘p’ is determined using the PACF plot.

The MA term defines the number of past forecast errors used to predict the future values. The parameter ‘q’ in ARIMA represents the MA term. The ACF plot is used to identify the correct ‘q’ value.

Order of differencing specifies the number of times the differencing operation is performed on the series to make it stationary. Tests like ADF and KPSS can be used to determine whether the series is stationary and help in identifying the ‘d’ value.
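The ACF values referred to above can be computed directly. Below is a sketch of the sample autocorrelation at lag k – the quantity an ACF plot displays, without the confidence bands:

```python
# Sample autocorrelation of series x at lag k
def acf(x, k):
    n = len(x)
    mean = sum(x) / n
    num = sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

x = [1, 2, 3, 4, 5]
print(acf(x, 0))   # 1.0 -- a series is perfectly correlated with itself
print(acf(x, 1))   # 0.4
```

In practice you would plot acf(x, k) for a range of lags (statsmodels’ plot_acf does exactly this) and read off where the values cut off to choose ‘q’.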

Steps for ARIMA implementation

The general steps to implement an ARIMA model are –

Load the data

The first step for model building is of course to load the dataset

Preprocessing data
Depending on the dataset, the steps of preprocessing will be defined. This will include creating timestamps, converting the dtype of date/time column, making the series univariate, etc.

Make series stationary

In order to satisfy the assumption, it is necessary to make the series stationary. This would include checking the stationarity of the series and performing required transformations

Determine d value

For making the series stationary, the number of times the difference operation was performed will be taken as the d value

Create ACF and PACF plots

This is the most important step in ARIMA implementation. ACF and PACF plots are used to determine the input parameters for our ARIMA model

Determine the p and q values

Read the values of p and q from the plots in the previous step

Fit ARIMA model

Using the processed data and parameter values we calculated from the previous steps, fit the ARIMA model

Predict values on validation set

Predict the future values

Calculate RMSE

To check the performance of the model, check the RMSE value using the predictions and actual values on the validation set
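The final step, RMSE, is a one-liner. A sketch (scikit-learn’s mean_squared_error plus a square root does the same job):

```python
from math import sqrt

# Root mean squared error between actual and predicted values
def rmse(actual, predicted):
    return sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

print(rmse([3, 5, 7], [2, 5, 9]))   # sqrt((1 + 0 + 4) / 3)
```

Because the errors are squared before averaging, RMSE penalises a few large misses more heavily than many small ones, which is usually what you want in forecasting.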

What is Auto ARIMA?

Auto ARIMA (Auto-Regressive Integrated Moving Average) is a statistical algorithm used for time series forecasting. It automatically determines the optimal parameters for an ARIMA model, such as the order of differencing, autoregressive (AR) terms, and moving average (MA) terms. Auto ARIMA searches through different combinations of these parameters to find the best fit for the given time series data. This automated process saves time and effort, making it easier for users to generate accurate forecasts without requiring extensive knowledge of time series analysis.

Why do we need Auto ARIMA?

Although ARIMA is a very powerful model for forecasting time series data, the data preparation and parameter tuning processes end up being really time consuming. Before implementing ARIMA, you need to make the series stationary, and determine the values of p and q using the plots we discussed above. Auto ARIMA makes this task really simple for us as it eliminates steps 3 to 6 we saw in the previous section. Below are the steps you should follow for implementing auto ARIMA:

Load the data: This step will be the same. Load the data into your notebook

Preprocessing data: The input should be univariate, hence drop the other columns

Fit Auto ARIMA: Fit the model on the univariate series

Predict values on validation set: Make predictions on the validation set

Calculate RMSE: Check the performance of the model using the predicted values against the actual values

As you can see, we completely bypassed the selection of the p, q, and d parameters. What a relief! In the next section, we will implement auto ARIMA using a toy dataset.

Implementation in Python and R

#building the model
from pyramid.arima import auto_arima   # the package has since been renamed to pmdarima
model = auto_arima(train, trace=True, error_action='ignore', suppress_warnings=True)
model.fit(train)

forecast = model.predict(n_periods=len(valid))
forecast = pd.DataFrame(forecast, index=valid.index, columns=['Prediction'])

#plot the predictions for validation set
plt.plot(train, label='Train')
plt.plot(valid, label='Valid')
plt.plot(forecast, label='Prediction')
plt.show()

#calculate rmse
from math import sqrt
from sklearn.metrics import mean_squared_error
rms = sqrt(mean_squared_error(valid, forecast))
print(rms)

output - 76.51355764316357

Below is the R Code for the same problem:

# loading packages
library(forecast)
library(Metrics)

# reading data
data = read.csv("international-airline-passengers.csv")

# splitting data into train and valid sets
train = data[1:100,]
valid = data[101:nrow(data),]

# removing "Month" column
train$Month = NULL

# training model
model = auto.arima(train)

# model summary
summary(model)

# forecasting
forecast = predict(model, 44)

# evaluation
rmse(valid$International.airline.passengers, forecast$pred)

How does Auto ARIMA select the best parameters?

In the above code, we simply fit the model without having to select the combination of p, q, and d ourselves. But how did the model figure out the best combination of these parameters? Auto ARIMA compares the AIC and BIC values generated while trying different parameter combinations (printed when trace=True) to determine the best one. AIC (Akaike Information Criterion) and BIC (Bayesian Information Criterion) are estimators for comparing models; the lower these values, the better the model.
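For reference, both criteria have simple closed forms in terms of the model’s maximised log-likelihood ll, the number of estimated parameters k, and the sample size n. A sketch (the log-likelihood value below is hypothetical):

```python
from math import log

def aic(ll, k):
    # Akaike Information Criterion: each extra parameter costs a flat penalty of 2
    return 2 * k - 2 * ll

def bic(ll, k, n):
    # Bayesian Information Criterion: the per-parameter penalty grows with n
    return k * log(n) - 2 * ll

# The model with the lower value is preferred
print(aic(-350.0, 3))                  # 706.0
print(round(bic(-350.0, 3, 100), 3))   # 713.816
```

Because log(n) exceeds 2 for n > 7, BIC penalises extra parameters more harshly than AIC on any realistically sized series, which is why the two criteria can disagree on the best model.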

Check out these links if you are interested in the maths behind AIC and BIC.

End Notes

I have found auto ARIMA to be the simplest technique for performing time series forecasting. Knowing a shortcut is good but being familiar with the math behind it is also important. In this article I have skimmed through the details of how ARIMA works but do make sure that you go through the links provided in the article. For your easy reference, here are the links again:

I would suggest practicing what we have learned here on this practice problem: Time Series Practice Problem. You can also take our training course created on the same practice problem, Time series forecasting, to provide you a head start.

