Snorkeling in Data for Supervision and Generation

Let's Start with 'Why?'

Most of the preprocessing work in ML pipelines involves taking unstructured or "dark" data such as text, tables, and images and turning it into structured data, which usually takes months or years before ML models can be built on top of it. Before building an ML model over the extracted data, we need to hand-label it; those labels are called Gold Labels. If we look at today's machine learning pipelines at a high level, people spend most of their time creating the training dataset and comparatively little on feature engineering, since deep learning makes that part easy. The crucial part of creating a training dataset is labeling the data points correctly, because the performance of the end model depends entirely on how well it is trained with correct labels. Now, the question is: can we speed up the labeling process with a framework? This is where Snorkel comes in to accelerate building and managing training data; beyond labeling, Snorkel has several other features that unclog these bottlenecks. Snorkel: a framework for rapidly generating training data with weak supervision.

[Image: Snorkel over Data Creation and Feature Engineering]

Weak Supervision

By eliminating the hand-labeling process, we can programmatically generate labels using external domain knowledge or patterns. This produces lower-quality labels [weak labels] far more efficiently; weak labels are intended to decrease cost and increase efficiency. Using noisy, imprecise sources to build a large amount of training data for supervised learning is called Weak Supervision. One of the best-known weak labeling methodologies is Distant Supervision. To reiterate, Snorkel is an open-source system for quickly assembling training data through weak supervision.

What can Snorkel do?

Snorkel currently has three features for creating and handling training data sets.

  • Data Labeling: assigning a value to each data point based on heuristics, distant supervision techniques, etc.
  • Data Transformation: converting existing data from one format to another, or modifying the data in ways that don't affect the actual labels, e.g. rotating an image at different angles.
  • Data Slicing: segmenting a dataset into required subsets for different purposes, such as improving model performance.
[Image source: snorkel.org]

How does Snorkel do it?

The high-level architecture of Snorkel's workflow consists of five steps:

  1. Writing Labeling Functions. (LFs)
  2. Modeling and Combining Labeling Functions.
  3. Writing Transformation Functions for performing Data Augmentation.
  4. Writing Slicing Functions for Subset selection.
  5. Training a final ML Model.

Following these steps, the key takeaway is Snorkel's ability to use labels from different weak supervision sources: the set of all labeling functions is modeled and combined by a generative model [LabelModel] that generates a single set of weak labels. Along with that, the system can output probabilistic labels that can be used to train various classifiers, which helps generalize over the noisy labels.


Data Labeling

There are a few different ways to turn these sources into weak supervision, using either the Python decorator labeling_function() or the Python class LabelingFunction.

  • Heuristics: applying a set of conditions to a pattern, e.g. using regular expressions.
  • Third-party models: using an existing model to perform labeling.
  • Distant Supervision: using existing ground-truth data that imprecisely fits the task [external knowledge bases].

Let's suppose we have three different labeling functions written using conditional patterns and regular expressions. The example is taken from Snorkel-tutorials.

import re

from snorkel.labeling import labeling_function

# Label values, following the Snorkel spam tutorial
ABSTAIN = -1
SPAM = 1

@labeling_function()
def check(x):
    # Label as SPAM if the word "check" appears anywhere in the comment text
    return SPAM if "check" in x.text.lower() else ABSTAIN

@labeling_function()
def check_out(x):
    # Label as SPAM only for the more specific phrase "check out"
    return SPAM if "check out" in x.text.lower() else ABSTAIN

# with Regex
@labeling_function()
def regex_check_out(x):
    # Match "check ... out" with anything in between, case-insensitive
    return SPAM if re.search(r"check.*out", x.text, flags=re.I) else ABSTAIN

Now, each of these labeling functions labels in its own way, completely independently of the others, so the labels they produce will vary.

Applying LFs

Snorkel provides a labeling-function applier for Pandas DataFrames: PandasLFApplier(lfs) takes a list of labeling functions and returns a label matrix in which each column represents the outputs of one labeling function in the input list.

from snorkel.labeling import PandasLFApplier

lfs = [check_out, check, regex_check_out]

# Each column of L_train holds the outputs of one labeling function
applier = PandasLFApplier(lfs=lfs)
L_train = applier.apply(df=df_train)

To understand the performance of, and analyse, multiple labeling functions, let's go over some terminology.

  • Polarity: The set of unique labels that LF outputs
  • Coverage: The fraction of the dataset the LF labels
  • Overlaps: The fraction of the dataset where this LF and at least one other LF emit a (non-abstain) label
  • Conflicts: The fraction of the dataset where this LF and at least one other LF emit labels and disagree
  • Correct: The number of data points that the LF labels correctly (if gold labels are provided)
  • Incorrect: The number of data points that this LF labels incorrectly (if gold labels are provided)
  • Empirical Accuracy: The empirical accuracy of the LF (if gold labels are available)

When we apply the LFAnalysis [Labeling Functions Analysis] utility, it reports the above metrics for each labeling function.

from snorkel.labeling import LFAnalysis

lfs = [check, check_out, regex_check_out]
LFAnalysis(L=L_train, lfs=lfs).lf_summary(Y=Y_dev)

Once we are done writing labeling functions and L_train holds a column of outputs for each of them, our goal is to condense them and generate one standard column with a noise-aware, probabilistic label per data point, which can be attached to the unlabeled dataset for further training.

This final label column can be generated in a few ways. We can take a majority vote across L_train for each data point and output a single value.

from snorkel.labeling import MajorityLabelVoter

majority_model = MajorityLabelVoter()
preds_train = majority_model.predict(L=L_train)

Another approach: Snorkel can train a Label Model that takes advantage of conflicts between all labeling functions to estimate their accuracies. This model produces a single set of noise-aware labels, which are probabilistic [confidence-weighted]. We can then use the resulting probabilistic labels to train various classifiers.

We can use techniques like Logistic Regression, SVMs, or LSTMs at this stage. The discriminative model learns a feature representation from the data itself, rather than just the labeling functions' outputs, which makes it better able to generalize to unseen data; this increases recall and produces the final output.

from snorkel.labeling import LabelModel

# Learn the accuracies of the labeling functions from their agreements and conflicts
label_model = LabelModel(cardinality=2, verbose=True)
label_model.fit(L_train=L_train, n_epochs=500, lr=0.001, log_freq=100, seed=123)
preds_train = label_model.predict(L=L_train)
[Image source: Automating Weak Supervision]
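
To make that last step concrete, here is a minimal sketch of training a discriminative classifier on the probabilistic labels, following the approach in the Snorkel spam tutorial: data points on which every labeling function abstained are filtered out, and a simple bag-of-words Logistic Regression model is fit on the rest. The df_train DataFrame and its text column are the ones assumed in the earlier snippets.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from snorkel.labeling import filter_unlabeled_dataframe

# Probabilistic labels from the trained LabelModel
probs_train = label_model.predict_proba(L=L_train)

# Drop data points on which every labeling function abstained
df_train_filtered, probs_train_filtered = filter_unlabeled_dataframe(
    X=df_train, y=probs_train, L=L_train
)

# Simple bag-of-words features for the discriminative classifier
vectorizer = CountVectorizer(ngram_range=(1, 2))
X_train = vectorizer.fit_transform(df_train_filtered.text.tolist())

# Fit on hard labels derived from the probabilistic ones
clf = LogisticRegression(solver="lbfgs")
clf.fit(X=X_train, y=probs_train_filtered.argmax(axis=1))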

Data Transformation

Transformation is a data augmentation technique, and proper data augmentation can give a real boost to model performance. Computer vision is the area where data augmentation is used most extensively: an image can be augmented by rotating, flipping, adding filters, etc. When it comes to text, the complexity of applying augmentation goes up. A simple example of transforming text is replacing existing words in a document with their synonyms, but not every word is replaceable, such as a, an, the, etc.

What really makes data transformation a big deal is that the more data we have, the better the model performs. As we transform a data point in different ways without affecting its label, we effectively generate extra data that can benefit the training phase.

[Image source: Survey on Image Data Augmentation]

Writing Transformation Functions

Snorkel provides a Python decorator `transformation_function()` that wraps a function taking a single data point and returning a transformed version of it. If the transformation can't be applied, it returns `None`; if all the TFs applied to a data point return `None`, that data point won't be included in the augmented dataset when we apply our TFs below.

import numpy as np

from snorkel.augmentation import transformation_function
from snorkel.preprocess.nlp import SpacyPreprocessor

# Preprocessor that attaches a spaCy Doc (x.doc) to each data point
spacy = SpacyPreprocessor(text_field="text", doc_field="doc", memoize=True)

@transformation_function(pre=[spacy])
def change_person(x):
    person_names = [ent.text for ent in x.doc.ents if ent.label_ == "PERSON"]
    # If there is at least one person name, replace a random one. Else return None.
    if person_names:
        name_to_replace = np.random.choice(person_names)
        replacement_name = np.random.choice(replacement_names)  # replacement_names: a predefined list of names
        x.text = x.text.replace(name_to_replace, replacement_name)
        return x

Applying Transformation Functions.

Applying TFs is similar to applying labeling functions: Snorkel provides a class for Pandas DataFrames, `PandasTFApplier`, which takes a list of transformation functions and a policy. A policy determines what sequence of transformation functions to apply; here we use `mean_field_policy`, which allows specifying a sampling distribution over the transformation functions.

from snorkel.augmentation import MeanFieldPolicy, PandasTFApplier

tfs = [change_person, swap_adjectives, replace_verb_with_synonym, replace_noun_with_synonym, replace_adjective_with_synonym]

# Apply a random sequence of 2 TFs per data point, sampled with these probabilities
# (values here follow the Snorkel spam tutorial)
mean_field_policy = MeanFieldPolicy(
    len(tfs), sequence_length=2, p=[0.05, 0.05, 0.3, 0.3, 0.3], keep_original=True
)

tf_applier = PandasTFApplier(tfs, mean_field_policy)
df_train_augmented = tf_applier.apply(df_train)
Y_train_augmented = df_train_augmented["label"].values
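
The remaining step from the workflow listed above, data slicing, follows the same decorator pattern. A minimal sketch (the slice condition here is made up purely for illustration, using snorkel.slicing):

from snorkel.slicing import slicing_function

@slicing_function()
def short_comment(x):
    # Slice membership: very short comments, which a model may handle poorly
    return len(x.text.split()) < 5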

Reference:

Snorkel Tutorials; snorkel.org; Snorkel: a weak supervision system; Introducing Snorkel; Snorkel-June-2019-workshop.

My Experience at Imaginea Labs

A short period of time may not give you much experience, but it leaves you with constructive insights that you could transform the way you admire.       - tejakummarikuntla.

I'm the youngest friend to everyone in the team not only in age but also in thoughts and actions ;p. I always feel that the connections and habits you build will build you back and I tried every day doing it my best. I spent most of my time to Unlearn and relearn in the other dimension that could easily create vivid impressions over the journey and this is the endgame of my Internship journey[24/01/2020].

TL;DR

Here are a few takeaways that I can put into writing, as there are many (intuitions/familiarities) that I couldn't express in words.

Sri would probably be chuckling when he notices that I tried using Intuition and Familiarity interchangeably [Familiarity Breeds Intuition]
  • Constructive learning approach.
  • Learning how to learn.
  • Art of not negotiating peculiar advice.
  • Aligning mathematical thinking with programmatic implementation.
  • Unlearning.
  • Lucid approach to understanding a research paper.
  • It's not about how much you know, it's all about what you can devise with it.
  • Tracking the new learning by logging in Latex or Markdown.[Zotero]
  • You can only know the crux when you keep questioning 'Why?'
  • Mathematical bonding not only with life but also with Music and Art [Godel, Escher, Bach]
  • You can only Enjoy working when you see a purpose in it.
  • Healthy relations will amplify performance.
  • When you start expressing, you will start noticing different results.
  • Laughing for lame jokes isn't a sin, coz I do a lot ;p [Not lame jokes ] I know it's very lame ;p.

I've updated my technical progress at GitHub [Never missed a day to commit :D]; feel free to check it out and keep in touch @ LinkedIn, Instagram.

Thank you for all my amazing mentors❤️:

Arun Edwin, Vikash, Sachin, Ebby, Vivek, Rehan, Vishwas, Nimmy, Vijay, Swamy, Kripa, Arthy, Sri, Hari.

[Image: Sachin's Send off [10/01/2020] | Shot on Vikash's OnePlus 7t 🤪]

Originally Published at: Imaginea Labs

Blue or Green Screen Effect with OpenCV [Chroma keying]

Jump to Code with .ipynb

Before we get into chroma keying [the green screen effect], it's better to understand the underlying concept that makes it possible with OpenCV.

Colour Thresholds

Since we treat images as grids of pixels, as a function of X and Y, we are gonna use that colour information to isolate a particular area, selecting an area of interest. We'll be selecting that area of interest using colour thresholds.

With colour thresholds we can remove parts of an image that fall within a specific colour range. The most common use is with a blue or green screen.

A blue screen, similar to a green screen, is used to layer two images or video streams based on identifying and replacing a large blue area.

We’re gonna use Blue Screen to film now ;p. So, how does it work?

The first step is to isolate the blue background and replace that blue area with an image of your choosing.


[Image source: Crude Animation (ctree_bluescreen.jpg)]

We’ll be starting with an image of a Christmas tree on a Blue screen background.

We first have to identify the blue region then later we’ll replace it with a background image of our choosing.


import cv2
import matplotlib.pyplot as plt
import numpy as np
image = cv2.imread('images/ctree_bluescreen.jpg')

cv2.imread() reads an image and takes the image location as an argument; here the image ctree_bluescreen.jpg is in the folder called images.

print('Image type: ', type(image),
      'Image Dimensions : ', image.shape)

This gives the result:

Image type: <class ‘numpy.ndarray’> Image Dimensions: (720, 1280, 3)

The OpenCV library reads the image as an array, also known as a grid or matrix of pixel values. The shape of the image contains three values that represent the dimensions of the image array:

720: height in pixels

1280: width in pixels

3: colour components for the Red, Green and Blue (RGB) values

image_copy = np.copy(image)
image_copy = cv2.cvtColor(image_copy, cv2.COLOR_BGR2RGB)
plt.imshow(image_copy)
[Image: Output for the above code snippet]

OpenCV reads in colour images as BGR (blue, green, red), not as RGB (red, green, blue). So the red and blue colours are in reverse order, and pyplot will reflect this swap and display a differently coloured image than the original.

So, before we display the image let’s make a copy of the original image and use Open CV to change colour from BGR to RGB. It’s good practice to always make a copy of the image you’re working with. This way any transformation you’ll apply to the copy will not affect the original image, so it’s easier to undo a step or try something new.

Now, on this copied image image_copy we can perform a colour transformation using the OpenCV function cvtColor(), which takes a source image and a colour conversion code, in this case just BGR2RGB, and outputs the desired image.


Defining the Colour Threshold

Now, we need to create a colour threshold to remove the desired blue region.

To create a colour threshold, we need to define lower and upper bounds for the colour that we want to isolate and remove: blue.

We'll be using these colour threshold values to eventually select the blue screen area that contains this range of colour values and get rid of it.

lower_blue = np.array([0, 0, 100])     ##[R value, G value, B value]
upper_blue = np.array([120, 100, 255])

So, we defined the low threshold that contains the lowest values of red, green and blue that are still considered part of the blue screen background.

In lower_blue, we set red and green to 0, meaning it's okay to have no red or green. But the lowest value for blue should still be quite high, say around 100.

Now, for upper_blue we defined the upper threshold to allow a little more red and green, and set the highest value for blue to 255. Any colour within this low-to-high range will be an intense blue. This is just an estimate, though; if we find that this range isn't capturing the blue screen area we want, we can go back and change the values.


Creating a Mask

We are gonna use the colour bounds we just created to create an image mask.

Masks are a very common way to isolate a selected area of interest and do something with that area. We can create a mask over the blue area using OpenCV's inRange() function.

mask = cv2.inRange(image_copy, lower_blue, upper_blue)
plt.imshow(mask, cmap='gray')

The inRange() function takes in an image and our lower and upper colour bounds, and defines the mask by asking whether the colour value of each image pixel falls within the range of the lower and upper colour thresholds. If it does fall in this range, the mask allows it to be displayed; if not, it blocks it out and turns the pixel black.

In fact, we can visualize the mask by plotting it as we would an image.

[Image: output of plt.imshow(mask, cmap='gray')]

The whole white area is where the image will be allowed to show through and the black will be blocked out. In numerical values, we can look at this mask as a 2D grid with the same dimensions as our image: 720 pixels in height and 1280 pixels in width.

Each coordinate in the mask has a value of either 255 for white and 0 for black, sort of like a grayscale image. And when we look at this mask we can see that it has a white area where the blue screen background is and the black area where the Christmas tree is.
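
As a quick sanity check (not from the original post, just to confirm the point above), we can print the mask's shape and the values it contains:

print(mask.shape)       # (720, 1280) -- one value per pixel, no colour channels
print(np.unique(mask))  # [  0 255] -- black where the tree is, white where the screen is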

Now, the first thing we need to do is let the Christmas tree show through and block the blue screen background.


masked_image = np.copy(image_copy)
masked_image[mask != 0] = [0, 0, 0]
plt.imshow(masked_image)

First, to mask the image we are gonna make another copy, called masked_image, of our colour-corrected image copy, just in case we want to change the mask later on.

Then one way to select the blue screen is by asking for the part of that image that overlaps with the part of the mask that is white or not black. That is we’ll select the part of the image where the area of the mask is not equal to zero, using mask != 0 . And to block this background area out we then set the pixels to black. Now when we display our result, that should show the Christmas tree area is the only area that should show through.

[Image: output of plt.imshow(masked_image)]

The blue screen background is gone. We might even adjust our colour threshold to get rid of the few remaining blue spots; we can try increasing the highest green value and decreasing the lowest blue value, which should capture a larger range of blue.
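
For example, a slightly wider range might look like this (the exact numbers are just an illustration; tune them against your own image):

# Allow a bit more green and slightly darker blues (illustrative values)
lower_blue = np.array([0, 0, 70])
upper_blue = np.array([120, 130, 255])
mask = cv2.inRange(image_copy, lower_blue, upper_blue)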


Mask and Add Background Image

Now, we just have one last step which is to apply a background to this image. The process is fairly similar.

background_image = cv2.imread('images/treeBackground.jpg')
background_image = cv2.cvtColor(background_image, cv2.COLOR_BGR2RGB)

crop_background = background_image[0:720, 0:1280]

crop_background[mask == 0] = [0, 0, 0]

plt.imshow(crop_background)

First, we'll read in an image of outer space and convert it to RGB colour. We'll also crop it so that it's the same size as our tree image, 720 x 1280 pixels; we call this image crop_background. Then we apply the mask, this time the opposite way: we want the background to show through and not the Christmas tree area. If we look back at the mask, in this case we're blocking out the part of the background image where the mask is equal to zero.

Just to make sure we got this masking correct, we’re gonna plot the resulting image.

[Image: output of plt.imshow(crop_background)]

The result is the background with the Tree cut out.

Then finally, we just need to add these two images together. Since the black area is equivalent to zeros in pixel colour value, a simple addition will work.

final_image = crop_background + masked_image
plt.imshow(final_image)

Now, when we plot the complete image, we get the Christmas tree with the new background. 🙌

[Image: output of plt.imshow(final_image)]

Originally Published at: Medium

Camera Calibration with OpenCV

When we talk about camera calibration and Image distortion, we’re talking about what happens when a camera looks at 3D objects in the real world and transforms them into a 2D image. That transformation isn’t perfect.

For example, here's an image of a road and some versions of it taken through different camera lenses that are slightly distorted.

[Image: An original picture of the road]
[Image: Distorted versions of the above picture by a camera]

In these distorted images, you can see that the edges of the lanes are bent, and sort of rounded or stretched outward. Our first step in analyzing camera images is to undo this distortion so we can get correct and useful information out of them.

Why Distortion?

Before we get into the code and start correcting for distortion, let’s get some intuition as to how this distortion occurs.

Here’s a simple model of a camera called the pinhole camera model.


When a camera looks at an object, it is looking at the world similarly to how our eyes do, by focusing the light that's reflected off of objects in the world. In this case, through a small pinhole, the camera focuses the light that's reflected off of a 3D traffic sign and forms a 2D image at the back of the camera.


In math, the transformation from 3D object points, P(X, Y, Z), to 2D image points (x, y) is done by a transformation matrix called the camera matrix (C); we'll be using this to calibrate the camera.
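
For reference, the standard pinhole projection that this describes can be written as (textbook form, added here for clarity rather than taken from the original post):

$$
s \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = C \begin{bmatrix} X \\ Y \\ Z \end{bmatrix},
\qquad
C = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
$$

where s is a scale factor, (f_x, f_y) are the focal lengths in pixels and (c_x, c_y) is the optical centre.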

However, real cameras don’t use tiny pinholes; they use lenses to focus on multiple light rays at a time which allows them to quickly form images. But, lenses can introduce distortion too.

Light rays often bend a little too much at the edges of a curved camera lens, and this creates an effect that distorts the edges of images.

Types of Distortion

Radial Distortion: Radial distortion is the most common type that affects images; with it, pictures of straight lines captured by a camera appear slightly curved or bent.

[Image: Radially distorted by a camera]

Tangential distortion: Tangential distortion occurs mainly because the lens is not aligned parallel to the imaging plane. It makes the image look extended or tilted, so objects appear farther away or closer than they actually are.


Luckily, this distortion can be captured by five numbers called distortion coefficients, whose values reflect the amount of radial and tangential distortion in an image, and we can use them to reduce the distortion.
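
In OpenCV's model these five numbers are (k_1, k_2, p_1, p_2, k_3), and the correction they describe has the standard form below (the usual OpenCV formulation, added here for reference):

$$
x_{\mathrm{dist}} = x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x y + p_2 (r^2 + 2x^2)
$$
$$
y_{\mathrm{dist}} = y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2y^2) + 2 p_2 x y
$$

where r^2 = x^2 + y^2 and (x, y) are normalized image coordinates.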


If we know the values of all the coefficients, we can use them to calibrate our camera and undistort the distorted images.

[Image: Undistorting the distorted image with the distortion coefficients]

Measuring Distortion

So, we know that the distortion changes the size and shape of the object in an image. But, how do we calibrate for that?

Well, we can take pictures of known shapes, then we’ll be able to detect and correct any distortion errors. We could choose any shape to calibrate our camera, and we’ll use a chessboard.


A chessboard is great for calibration because its regular, high-contrast pattern makes it easy to detect automatically, and we know what an undistorted flat chessboard looks like. So we'll use our camera to take pictures of a chessboard at different angles.


Finding Corners

OpenCV helps to automatically detect the corners and draw them with findChessboardCorners() and drawChessboardCorners().

Applying both functions to a sample image results in:

[Image: After applying findChessboardCorners() and drawChessboardCorners()]

import numpy as np
import cv2
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# prepare object points
nx = 8  # number of inside corners in x
ny = 6  # number of inside corners in y
# Make a list of calibration images
fname = 'calibration_test.png'
img = cv2.imread(fname)
# Convert to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Find the chessboard corners
ret, corners = cv2.findChessboardCorners(gray, (nx, ny), None)
# If found, draw corners
if ret == True:
    # Draw and display the corners
    cv2.drawChessboardCorners(img, (nx, ny), corners, ret)
    plt.imshow(img)

Calibrating The Camera

In order to calibrate the camera, the first step is to read in calibration images of a chessboard. It's recommended to use at least 20 images to get a reliable calibration; here we have plenty of images, and each chessboard has eight by six corners to detect.

To calibrate a camera, OpenCV gives us the calibrateCamera() function


This takes in object points, image points [we'll understand these points in a moment], and the shape of the image, and using these inputs it calculates and returns:


mtx: the camera matrix, which helps transform 3D object points to 2D image points.

dist: the distortion coefficients.

It also returns the position of the camera in the world, as the rotation and translation vectors rvecs and tvecs.

The next function that we require is undistort().


The undistort function takes in a distorted image, our camera matrix, and the distortion coefficients, and it returns an undistorted image, often called the destination image.

For the calibrateCamera() function we need object points and image points.

import numpy as np 
import cv2
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
#read in a calibration image
img = mpimg.imread('../calibration_images/calibration1.jpg')
plt.imshow(img)

First, with the numpy, OpenCV, and plotting imports done, we read in the first image, calibration1.jpg, and display it.

Now, we are gonna map the coordinates of the corners in the 2D displayed image, which are called imagepoints, to the 3D coordinates of the real, undistorted chessboard corners, which are called objectpoints.

So, we are gonna set up two empty arrays to hold these points, objectpoints and imagepoints

# Arrays to store object points and image points from all the images
objpoints = [] # 3D points in real world space
imgpoints = [] # 2D points in image plane

The object points will all be the same: just the known corners of the chessboard for an eight-by-six board.

So, we are going to prepare these object points, first by creating six-by-eight points in an array, each with three columns for the x, y and z coordinates of each corner. We then initialize all of these to 0s using Numpy's zeros function. The z coordinates will stay zero, so we leave them as they are, but for our first two columns, x and y, we use Numpy's mgrid function to generate the coordinates we want. mgrid returns the coordinate values for a given grid size, and we shape those coordinates back into two columns, one for x and one for y:

# Prepare obj points, like (0, 0, 0), (1, 0, 0), (2, 0, 0)....., (7, 5, 0)
objp = np.zeros((6*8,3), np.float32)
objp[:,:2] = np.mgrid[0:8,0:6].T.reshape(-1,2) # x,y coordinates

Next, to create the imagepoints, we need to take a distorted calibration image and detect the corners of the board. OpenCV gives us an easy way to detect chessboard corners with a function called findChessboardCorners(), which returns the corners found in a grayscale image.

So, we will convert the image to grayscale and then pass it to the findChessboardCorners() function. This function takes in a grayscale image along with the dimensions of the chessboard corners, in this case 8 by 6; the last parameter is for any flags, and there are none in this example:

# Convert image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Find the chessboard corners
ret, corners = cv2.findChessboardCorners(gray, (8,6), None)

If this function detects corners, we are gonna append those points to the image points array and also add our prepared object points objp to the objectpoints array. These object points will be the same for all of the calibration images since they represent a real chessboard.

# If corners are found, add object points, image points
if ret == True:
    imgpoints.append(corners)
    objpoints.append(objp)

Next, we also draw the detected corners with a call to drawChessboardCorners(), which takes in our image, the corner dimensions and the corner points.

# If corners are found, add object points, image points
if ret == True:
    imgpoints.append(corners)
    objpoints.append(objp)
    
    # Draw and display the corners
    img = cv2.drawChessboardCorners(img, (8,6), corners, ret)
    plt.imshow(img)
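
To run this over a full set of calibration images rather than a single one, the same steps simply go in a loop. A quick sketch, assuming the calibration images live next to the one read earlier and follow a calibration*.jpg naming pattern (hypothetical filenames):

import glob

# Accumulate object points and image points from every calibration image
images = glob.glob('../calibration_images/calibration*.jpg')
for fname in images:
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ret, corners = cv2.findChessboardCorners(gray, (8, 6), None)
    if ret == True:
        objpoints.append(objp)
        imgpoints.append(corners)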

Correction for Distortion

import pickle
import cv2
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# Read in the saved objpoints and imgpoints
dist_pickle = pickle.load( open( "wide_dist_pickle.p", "rb" ) )
objpoints = dist_pickle["objpoints"]
imgpoints = dist_pickle["imgpoints"]
# Read in an image
img = cv2.imread('test_image.png')
def cal_undistort(img, objpoints, imgpoints):
    # image size is passed as (width, height)
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img.shape[1::-1], None, None)
    undist = cv2.undistort(img, mtx, dist, None, mtx)
    return undist
undistorted = cal_undistort(img, objpoints, imgpoints)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(24, 9))
f.tight_layout()
ax1.imshow(img)
ax1.set_title('Original Image', fontsize=50)
ax2.imshow(undistorted)
ax2.set_title('Undistorted Image', fontsize=50)
plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)

Get distortion pickle file and test image

Output result:


Reference: Udacity Self Driving Car Engineer Nanodegree

Originally Published at: Analytics Vidhya

Automate GitHub Issues status of your organization with Webhooks

Introduction

Let's suppose you are running an organization that builds and develops products; you are gonna use GitHub repositories (public/private) to keep all your code and resources safe and secure without getting messed up. In time, you will be using an amazing feature of GitHub repos: raising an Issue and looking to someone in or outside the organization to solve it!

For now, your issue status is open


As the issue is open, you want your database to get updated whenever it is commented on or solved by someone. The old, traditional approach is to keep polling the GitHub server for status updates and then update your database alongside, which is not effective; here comes the helping hand of the GitHub feature 'WebHooks', which makes the process easier.

WebHooks


WebHooks is an amazing external service in which the server reaches out to you with a POST request automatically when certain events happen, rather than you continuously requesting the server for any changes. Learn more at the Webhooks Guide.

How do WebHooks work?

[Image: GitHub WebHook sending the data to your server as Issue Comment Created on a repo]

How to create a GitHub Webhook

First and foremost, these webhooks can only be created for an organization that you own.

Go to the settings page of your organization and look for the option Webhooks


The settings page may change over time, but the configuration should be similar.

Payload URL

The URL you provide will get triggered when particular events happen (you can choose the required events in the third option, 'Which events would you like to trigger this webhook?').

The Payload URL should point to a service that receives the data (payload) from GitHub. Follow up my next writings to learn how to write such a service and deploy it to Heroku.
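
To give a rough idea of what such a service looks like, here is a minimal sketch of a receiver (this is not the service from the follow-up post; the route name and port are arbitrary), written with Flask:

from flask import Flask, request

app = Flask(__name__)

@app.route("/github-webhook", methods=["POST"])
def github_webhook():
    payload = request.get_json()  # the JSON body GitHub sends
    event = request.headers.get("X-GitHub-Event")  # e.g. "issues" or "issue_comment"
    if event == "issues":
        # e.g. update the issue's status in your database here
        print(payload["action"], payload["issue"]["number"])
    return "", 204

if __name__ == "__main__":
    app.run(port=5000)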

Content-type

The Content-type is a header used to indicate the media type of the payload. As the GitHub webhook sends its payload in JSON format, we are gonna select 'application/json'. Learn more at MDN-ContentType.

Secret

Once you configure the payload link and content type, here's the security part: securing the service that receives the data. This 'secret' is used to limit requests to those coming from GitHub, and there are a couple of ways to do this. But, for simplicity, I'll be leaving it blank for now. Learn more about this at Securing your WebHook.
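
If you do configure a secret later, GitHub signs every delivery with it, and your service can verify the signature before trusting the payload. A minimal sketch of that check (GitHub sends the HMAC-SHA256 of the raw body in the X-Hub-Signature-256 header), reusing the Flask request object from the sketch above:

import hashlib
import hmac

def is_from_github(secret, request):
    # GitHub sends "sha256=" + hex HMAC-SHA256 of the raw request body
    signature = request.headers.get("X-Hub-Signature-256", "")
    expected = "sha256=" + hmac.new(secret.encode(), request.data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)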

Which events would you like to trigger this webhook?

As it sounds, this option lets you subscribe to the particular events for which the Payload URL should get triggered.

We are gonna select "Let me select individual events".


You can choose your required events; for getting issue status and issue comments we choose "Issue comments" and "Issues".


Click on Add webhook. You are ready to go!

To see the recent deliveries of the webhook, go back to your webhooks page, select the existing webhook, and scroll all the way down to find Recent Deliveries.


Originally Published at: FnPlus Tech Blog
