Handling Missing Data – Abstract

The article discusses the main types of missing data and how to handle them. Through some experiments, we demonstrate how prediction results are affected by both the amount of missing data and the method used to handle it.

1. Introduction – Handling Missing Data

For any real data set, missing data is almost unavoidable. There are many possible reasons for this, including changes in the design of data collection, the precision of the data that users entered, the unwillingness of survey participants to answer some questions, etc. Detecting and handling these missing values is part of the data wrangling process.

There are 3 major types of Missing data:

• Missing Completely at Random (MCAR): this is the purely random case. Records are missing at random, and the missingness has no correlation with either the missing values themselves or the values of other variables.

• Missing at Random (MAR): the propensity for a value to be missing is not related to the missing data itself, but to some of the observed data. For example, in a market research survey, interviewers in some cities may forget to ask about the income of interviewees, which leads to a higher ratio of missing income values in those cities than in others. We can consider this Missing at Random.

• Missing Not at Random (MNAR): this is a highly biased case. The missingness is related to the value of the missing observation itself. In some cases, the dataset should be re-collected to avoid this type of missingness. For example, interviewees with high income refusing to disclose it would cause this type of missing data.
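To make the three mechanisms concrete, here is a small simulation sketch; the `income` and `city` variables and the threshold values are illustrative, not from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
income = rng.normal(50_000, 15_000, n)   # hypothetical variable that will have missing values
city = rng.integers(0, 2, n)             # hypothetical fully observed variable

# MCAR: missingness is independent of everything
mcar_mask = rng.random(n) < 0.2

# MAR: missingness depends on an observed variable (city), not on income itself
mar_mask = rng.random(n) < np.where(city == 0, 0.35, 0.05)

# MNAR: missingness depends on the (unobserved) value itself
mnar_mask = (income > 70_000) & (rng.random(n) < 0.8)
```

Under MAR, the missing rate differs across cities, but within a city it is unrelated to income; under MNAR, high incomes are disproportionately missing, which no observed variable can fully explain.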

2. Handling Missing Data

Ignoring

Yeah, you just ignore it, provided you know the missing data is MCAR. Although you do not do anything yourself, a library (such as XGBoost) may handle it for you by choosing an appropriate method. So technically, this can count as a case of the other methods, depending on the circumstances.

Removing (Deletion)

• Column deletion: another simple way to handle missing data is to remove the affected attribute (column deletion). It can be applied when the missing ratio is high (roughly 60% or more, though this is not a fixed rule) and the variable is insignificant.
• Row deletion: if the missing values are MCAR and the missing ratio is not very high, we can drop the entire record (row). This method is known as listwise deletion. But if the missingness is not MCAR, this method can introduce bias into the dataset.
• Pairwise deletion: instead of removing a record completely, we maximize data usage by omitting it only where necessary. Pairwise deletion can be considered a way to reduce the data loss caused by listwise deletion.
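The difference between listwise and pairwise deletion is easy to see with pandas: `dropna` implements listwise deletion, while `DataFrame.corr` computes each pairwise correlation from the rows available for that particular pair. A sketch with made-up data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'a': [1.0, 2.0, np.nan, 4.0, 5.0],
    'b': [2.0, np.nan, 6.0, 8.0, 10.0],
    'c': [1.0, 1.0, 2.0, np.nan, 3.0],
})

listwise = df.dropna()   # keeps only rows with no missing value
print(len(listwise))     # 2 complete rows survive out of 5

# Pairwise: each statistic uses all rows where *that pair* is observed,
# so corr(a, b) is computed from 3 rows even though only 2 rows are complete
print(df.corr())
```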

Imputation (Fill-in)

• Imputation with Median/Mean/Mode values: these statistics are commonly used to fill missing positions, the mean most of all. Using the mean keeps the mean of the attribute unchanged after processing. For a categorical variable, the most frequent value (mode) can be used instead. Note that imputation decreases the variance of the attribute. We can extend the method by adding a boolean column indicating whether each value comes from imputation or from the original dataset (sometimes called marking imputed values). However, one must be careful with this method: if the data is not missing at random, imputing the mean can introduce bias and outliers into the data.
• Algorithm-based Imputation: instead of using a constant to impute missing values, we can model a variable with missing values as a function of the other features, and let a regression algorithm predict them under some assumptions.
• If linear regression is used, we must assume that the variables have a linear relationship.
• If missing values are predicted based on the order of highly correlated columns, the process is called hot-deck imputation.
• KNN Imputation: this can be considered a variant of median/mean/mode imputation, but instead of computing these statistics across all observations, it computes them only among the K nearest observations. One question to think about is how to measure the distance between observations.

• Multivariate Imputation by Chained Equations: instead of imputing each column separately, we can iteratively estimate missing values based on the distributions of the other variables. The process repeats until the data become stable. This approach has two settings: a single data set and multiple data sets (the latter is known as Multiple Imputation by Chained Equations, MICE).
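The imputation variants above map directly onto scikit-learn's imputers; a compact sketch on toy data with default settings:

```python
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

X = np.array([[1.0, 2.0],
              [3.0, np.nan],
              [5.0, 6.0],
              [7.0, 8.0]])

# Mean imputation, plus a boolean indicator column marking imputed cells
X_mean = SimpleImputer(strategy='mean', add_indicator=True).fit_transform(X)
print(X_mean)  # the NaN becomes the column mean (16/3); the last column is the indicator

# KNN imputation: the fill value is averaged over the K nearest rows only
X_knn = KNNImputer(n_neighbors=2).fit_transform(X)

# Chained-equations (MICE-style) imputation: each feature with missing values
# is iteratively regressed on the others until the estimates stabilize
X_mice = IterativeImputer(random_state=0).fit_transform(X)
```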

3. Experiment

We use the Titanic dataset for the experiment, which is familiar to most data scientists. The original data consist of 12 variables, including both categorical and numerical ones. The original task is to predict whether each passenger survived.

We will do a classification task with Logistic Regression (fixed across trials). In each experiment, we simulate missing data by removing some existing values from some features of the input data. There are two ways of removing data: completely at random (MCAR Generator) and at random (MAR Generator). With the MAR Generator, in each trial, values are removed with different ratios depending on the values of other features (in particular, on Pclass, a variable highly correlated with survival status). We track the change in accuracy across different settings. For cross-validation, we apply K-Fold with K=5.

In experiment 1, we observe the change in accuracy as we remove different amounts of data from some features.

In experiment 2, we generate missing data using the MCAR and MAR Generators and use two MCAR-compatible methods to handle them. We will find out whether these methods decrease the accuracy of the classification model.

4. Results and Discussion

The Effect of the Amount of Missing Data

In this experiment, we look for the relationship (not the correlation coefficient, but the relationship in general) between the amount of missing data, the method used to handle it, and the output of the learning models. We do this by masking different ratios of a few columns in an MCAR setting.

Ratio (%) | Mask Sex, drop Title | Mask Age, drop Title | Mask Age & Sex, drop Title | Mask Age & Sex, keep Title
    0     |   81.04  (0)         |   81.04  (0)         |   81.07  (0)               |   82.21  (0)
   20     |   77.23 (-3.81)      |   81.19 (+0.15)      |   77.53 (-3.54)            |   81.83 (-0.38)
   40     |   75.17 (-5.87)      |   80.84 (-0.20)      |   75.41 (-5.66)            |   81.87 (-0.34)
   60     |   73.96 (-7.08)      |   80.29 (-0.75)      |   73.93 (-7.14)            |   82.32 (+0.11)
   80     |   71.95 (-9.09)      |   79.58 (-1.46)      |   71.92 (-9.15)            |   82.69 (+0.48)
   99     |   71.48 (-9.56)      |   79.50 (-1.54)      |   71.00 (-10.07)           |   82.98 (+0.77)

Figure 3: Effect of the missing ratio. The value in parentheses next to each accuracy is the difference from the original (0%) setting.

As can be seen, the more values are removed, the more the accuracy decreases. But this happens only under some settings.

The amount of missing data matters significantly only if the feature carries "unique" information. With the Title feature present (extracted from Name), missing values in the Sex column do not decrease the performance of the model, even with 99% missing data. This is because most values of the Title column (Mr, Mrs, Ms, Dr, ...) already encode the information in the Sex column.

When important features highly correlated with the missing feature are present, the effect of missing data becomes negligible. One lesson is that, despite its simplicity, removing an entire variable should be considered in many cases, especially when other features correlate highly with the missing one. This is valuable when we do not want to spend effort recovering a small portion of accuracy (around 1%).

The Effect of the Missing Generator and Handling Method

In this experiment, we use the MCAR and MAR simulators to create modified datasets. Each removal method is applied to numerical columns (Age and Fare). Then we use Mean Imputation (which is why we chose numerical features for removing values) and Listwise Deletion, both of which are MCAR-compatible, to handle these missing values and observe the difference in accuracy.

Handling by Mean Imputation

Missing ratio (%) | MCAR Generator (Age) | MAR Generator (Age) | Difference
        0         |        81.00         |        81.00        |    0.00
       20         |        80.97         |        80.99        |   -0.02
       40         |        80.72         |        80.70        |    0.02
       60         |        80.04         |        80.38        |   -0.34

Handling by Listwise Deletion

Missing ratio (%) | MCAR Generator (Age) | MAR Generator (Age) | Difference
        0         |        79.24         |        79.24        |    0.00
       20         |        78.69         |        77.85        |    0.84
       40         |        78.81         |        76.59        |    2.22
       60         |        80.65         |        77.34        |    3.31

Figure 4: Different Missing Generators with different MCAR-compatible handling methods

With Mean Imputation, we notice no significant difference when we use the MCAR Missing Generator instead of the MAR one. Although Mean Imputation (considered an MCAR-compatible handling method) can distort the correlation between features under the MAR Missing Generator, the classification task still achieves comparable accuracy.

On the other hand, with Listwise Deletion, the classifier accuracy is higher when the handling method matches the generator (MCAR Missing Generator). This can be explained as follows: by doing listwise deletion, we also throw away data from other variables. So in the MAR Generator case, rows are removed by a non-random mechanism (whereas removal is still random in the MCAR Generator case), which worsens the classifier's accuracy. Note that in one column there is an increase at the 60% setting. This happens because removing more rows makes both the training and testing folds smaller; we should not interpret it as the model improving as the missing ratio increases.

5. Recap

All methods of handling missing data may be helpful, but the choice depends on the circumstances. To choose well, data scientists should understand the process that generated the dataset, as well as the domain.

Considering the correlation between features is important for deciding whether missing data should be handled, ignored, or deleted from the dataset.

There are also some aspects of handling missing data that we wanted to show, but due to time and resource limitations, we have not done those experiments yet. We would like to experiment with more complex methods, such as algorithm-based handling, and to compare the effects across different datasets. We hope to come back to these problems someday.


Data Science Blog

Please check our other Data Science Blog

Hiring Data Scientist / Engineer

We are looking for Data Scientist and Engineer.

Introduction to Healthcare Data Science (Overview)

Healthcare analytics is the collection and analysis of data in the healthcare field to study the determinants of disease in human populations and to identify and mitigate risk by predicting outcomes. This post introduces some common epidemiological study designs and gives an overview of the modern healthcare data analytics process.

Types of Epidemiologic Studies

In general, epidemiologic studies can be classified into 3 types: interventional studies, observational studies, and meta-analyses.

Interventional or Randomized controlled study

Clinical medicine relies on an evidence base built from strong research to inform best practices and improve clinical care. The gold standard of study design for providing evidence is the randomized controlled trial (RCT). The main idea of this kind of research is to establish the root cause of a certain disease or the causal effect of a treatment. RCTs are performed in fairly homogeneous patient populations, with participants allocated by chance to two similar groups. The researchers then try different interventions or treatments on these two groups and compare the outcomes.

As an example, a study was conducted to assess whether improved lifestyle habits could reduce the hemoglobin A1c (HbA1c) levels of employees. The intervention consisted of a 3-month competition among employees to adopt healthier lifestyle habits (Eat better, Move more, and Quit smoking) or keep their current lifestyle. After the intervention, employees with elevated HbA1c significantly reduced their HbA1c levels, while the levels of employees who did not receive the intervention did not change.

In ideal conditions, there are no confounding variables in a randomized experiment, so RCTs are often designed to investigate the causal relationship between exposure and outcome. However, RCTs have several limitations: they are often costly, time- and labor-intensive, and slow, and they can involve homogeneous patient groups whose results are seldom generalizable to every patient population.

Observational studies

Unlike RCTs, observational studies have no active intervention, which means the researchers do not interfere with their participants. In contrast with interventional studies, observational studies are usually performed in heterogeneous patient populations. In these studies, researchers often define an outcome of interest (e.g., a disease) and use data collected on patients, such as demographics, labs, vital signs, and disease states, to explore the relationship between exposures and the outcome, determine which factors contribute to the outcome, and attempt to draw inferences about the effects of different exposures on the outcome. Findings from observational studies can subsequently be developed and tested with RCTs in targeted patient populations.

Observational studies tend to be less time- and cost-intensive. There are three main study designs in observational studies: prospective study design, retrospective study design, and cross-sectional study design.

Follow-up study/ Prospective study/ Longitudinal (incidence) study

A prospective study is one in which a group of disease-free individuals is identified at baseline and followed over time until some of them develop the disease. The development of disease over time is then related to other variables measured at baseline, generally called exposure variables. The study population in a prospective study is often called a cohort.

Retrospective study/ Case-Control study

A retrospective study is a study in which two groups of individuals are initially identified: (1) a group that has the disease under study (the cases) and (2) a group that does not have the disease under study (the controls). Cases are individuals who have a specific disease investigated in the research. Controls are those who did not have the disease of interest in the research. Usually, a retrospective history of health habits before getting the disease is obtained. An attempt is then made to relate their prior health habits to their current disease status. This type of study is also sometimes called a case-control study.

Cross-sectional study/ Prevalence study

A cross-sectional study is one in which a study population is ascertained at a single point in time. This type of study is sometimes called a prevalence study because the prevalence of disease at one point in time is compared between exposed and unexposed individuals. The prevalence of a disease is obtained by dividing the number of people who currently have the disease by the number of people in the study population.
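The prevalence computation is just a ratio; a trivial sketch with made-up numbers:

```python
def prevalence(current_cases: int, population: int) -> float:
    """Proportion of the study population that currently has the disease."""
    return current_cases / population

# e.g. 150 people with the disease in a study population of 5,000
print(prevalence(150, 5_000))  # 0.03, i.e. 3%
```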

Meta-analysis

Often more than one investigation is performed to study a particular research question, with some research groups reporting significant differences for a particular finding and others reporting no significant differences. In a meta-analysis, therefore, researchers collect and synthesize findings from many existing studies to provide a clearer picture of the factors associated with the development of a certain disease. These results may be used for ranking and prioritizing risk factors in other research.

Modern Healthcare Data analytics approach

Secondary Analysis and modern healthcare data analytics approach

Under the primary research infrastructure, designing a large-scale randomized controlled trial (RCT) is expensive and sometimes unfeasible. An alternative approach for obtaining extensive data is to utilize electronic health records (EHRs). In contrast with primary analysis, secondary analysis performs retrospective research using data collected for purposes other than research, such as the EHR. Modern healthcare data analytics projects apply advanced data analysis methods, such as machine learning, and perform integrative analysis to leverage a wealth of deep clinical and administrative data with longitudinal history from EHRs, gaining a more comprehensive understanding of the patient's condition.

Electronic Health Record (EHR)

EHRs are data generated during routine patient care. They contain large amounts of longitudinal data and a wealth of detailed clinical information. If properly analyzed and meaningfully interpreted, this data could vastly improve our understanding and development of best practices. Common data in an EHR are listed below:

• Demographics

Age, gender, occupation, marital status, ethnicity

• Physical measurement

SBP, DBP, Height, Weight, BMI, waist circumference

• Anthropometry

Stature, sitting height, elbow width, weight, subscapular, triceps skinfold measurement

• Laboratory

Creatinine, hemoglobin, white blood cell count (WBC), total cholesterol, cholesterol, triglyceride, gamma-glutamyl transferase (GGT)

• Symptoms

frequency in urination, skin rash, stomachache, cough

• Medical history and Family diseases

diabetes, traumas, dyslipidemia, hypertension, cancer, heart diseases, stroke, arthritis, etc.

• Lifestyle habit

Behavioral risk factors from questionnaires, such as physical activity, dietary and nutritional habits, smoking, alcohol consumption, sleep, cognitive function, work history, and digestive health, etc.

• Treatment

Medications (prescriptions, dose, timing), procedures, etc.

Using EHR to Conduct Outcome and Health Services Research

In secondary analysis, the process of analyzing data often includes the following steps:

1. Problem Understanding and Formulating the Research Question: In this step, a clinical question is transformed into a research question. There are 3 key components of the research question: the study sample (or patient cohort), the exposure of interest (e.g., information about patient demographics, lifestyle habits, medical history, regular health checkup test results), and the outcome of interest (e.g., whether a patient has diabetes after 5 years).
2. Data Preparation and Integration: Extracted raw data can come from different data sources, or sit in separate datasets with different representations and formats. Data preparation and integration is the process of combining and reorganizing data derived from various sources (such as databases, flat files, etc.) into a consistent dataset that contains all the information required for the desired statistical analysis.
3. Exploratory Data Analysis/ Data Understanding: Before statistical and machine learning models are employed, there is an important step of exploring the data, which is key to understanding the type of information that has been collected and what it means. Data exploration consists of investigating the distributions of variables and the patterns inside the data, and checking the quality of the underlying data. This preliminary examination will influence which methods are most suitable for the data preprocessing step and for choosing the appropriate predictive model.
4. Data Preprocessing: Data preprocessing is one of the most important steps, critical to the success of machine learning techniques. Electronic health records were often collected for clinical rather than research purposes, so these databases can have many data quality issues. Preprocessing aims at assessing and improving the quality of data to allow for reliable statistical analysis.
5. Feature Selection: The final dataset may have several hundred data fields, and not all of them are relevant for explaining the target variable. In many machine learning algorithms, high dimensionality can cause overfitting or reduce the accuracy of the model instead of improving it. Feature selection algorithms are used to identify features that have an important predictive role. These techniques do not change the content of the initial feature set; they only select a subset of it. The purpose of feature selection is to help create optimized and cost-effective models with enhanced prediction performance.
6. Predictive Model: To develop prediction models, statistical models and machine learning algorithms can be employed. The purpose of machine learning is to design and develop prediction models by allowing the computer to learn from data or experience to solve a certain problem. These models are useful for understanding the system under study, and they can be divided according to the type of outcome they produce: classification models, regression models, or clustering models.
7. Prediction and Model Evaluation: This process is to evaluate the performance of predictive models. The evaluation should include internal and external validation. Internal validation refers to the model performance evaluation in the same dataset in which the model was developed. External validation is the evaluation of a prediction model in other populations with different characteristics to assess the generalizability of the model.
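The seven steps above can be sketched end to end. This is a generic illustration with synthetic data (not a real EHR); the feature names (`bmi`, `hba1c`) and the outcome `diabetes_5y` are hypothetical:

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Steps 1-2: a toy "cohort" with exposures and a binary outcome
df = pd.DataFrame({
    'age': rng.integers(30, 80, n),
    'bmi': rng.normal(25, 4, n),
    'hba1c': rng.normal(5.5, 0.8, n),
    'noise': rng.normal(size=n),
})
df['diabetes_5y'] = ((df['hba1c'] > 6.0) & (df['bmi'] > 26)).astype(int)

X, y = df.drop(columns='diabetes_5y'), df['diabetes_5y']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Steps 4-6: preprocessing, feature selection, and the predictive model
pipe = Pipeline([
    ('impute', SimpleImputer(strategy='median')),
    ('scale', StandardScaler()),
    ('select', SelectKBest(f_classif, k=2)),
    ('model', LogisticRegression()),
])
pipe.fit(X_train, y_train)

# Step 7: internal validation on held-out data from the same dataset;
# external validation would repeat this on a different population
print(round(pipe.score(X_test, y_test), 3))
```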
Please also check our Healthcare Data Science example



Vietnam AI / Data Science Lab

Please also visit Vietnam AI Lab

Introduction

Autoencoders provide a powerful framework for learning compressed representations by encoding all of the information needed to reconstruct a data point in a latent code. In some cases, autoencoders can “interpolate”: By decoding the convex combination of the latent codes for two data points, the autoencoder can produce an output that semantically mixes characteristics from the data points. In this paper, we propose a regularization procedure that encourages interpolated outputs to appear more realistic by fooling a critical network that has been trained to recover the mixing coefficient from interpolated data. We then develop a simple benchmark task where we can quantitatively measure the extent to which various autoencoders can interpolate and show that our regularizer dramatically improves interpolation in this setting. We also demonstrate empirically that our regularizer produces latent codes which are more effective on downstream tasks, suggesting a possible link between interpolation abilities and learning useful representations. – [1]

The idea comes from the paper "Understanding and Improving Interpolation in Autoencoders via an Adversarial Regularizer" (https://arxiv.org/abs/1807.07543), also known as the ACAI framework.

Today I will walk through the implementation of this fantastic idea. The implementation is based on TensorFlow 2.0 and Python 3.6. Let's start!

Implementation

First, we need to import some dependency packages.

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Input, Dense, Reshape, Flatten, Dropout, multiply, GaussianNoise
from tensorflow.keras.layers import BatchNormalization, Activation, Embedding, ZeroPadding2D
from tensorflow.keras.layers import MaxPooling2D, LeakyReLU
from tensorflow.keras.layers import UpSampling2D, Conv2D, Lambda
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras import losses
from tensorflow.keras import backend as K  # use the tf.keras backend, not standalone keras
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.datasets import mnist

import matplotlib.pyplot as plt

import numpy as np
import tqdm
import os

import io
from PIL import Image  # fixed: this import was split across two lines
from sklearn.decomposition import PCA
# from sklearn.manifold import TSNE
import seaborn as sns
import pandas as pd


Next, we define the overall framework of ACAI, which is composed of 3 parts: an encoder, a decoder, and a discriminator (also called the critic in the paper).

class ACAI():
    def __init__(self, img_shape=(28, 28), latent_dim=32, disc_reg_coef=0.2, ae_reg_coef=0.5, dropout=0.0):
        self.latent_dim = latent_dim
        self.img_shape = img_shape
        self.dropout = dropout
        self.disc_reg_coef = disc_reg_coef
        self.ae_reg_coef = ae_reg_coef
        self.initializer = tf.keras.initializers.VarianceScaling(
            scale=0.2, mode='fan_in', distribution='truncated_normal')
        self.initialize_models(self.img_shape, self.latent_dim)

    def initialize_models(self, img_shape, latent_dim):
        self.encoder = self.build_encoder(img_shape, latent_dim)
        self.decoder = self.build_decoder(latent_dim, img_shape)
        self.discriminator = self.build_discriminator(latent_dim, img_shape)

        img = Input(shape=img_shape)
        latent = self.encoder(img)
        res_img = self.decoder(latent)

        self.autoencoder = Model(img, res_img)
        discri_out = self.discriminator(img)

    def build_encoder(self, img_shape, latent_dim):
        encoder = Sequential(name='encoder')
        # Layer definitions are omitted here; see the github repo linked below
        encoder.summary()
        return encoder

    def build_decoder(self, latent_dim, img_shape):
        decoder = Sequential(name='decoder')
        # Layer definitions are omitted here; see the github repo linked below
        decoder.summary()
        return decoder

    def build_discriminator(self, latent_dim, img_shape):
        discriminator = Sequential(name='discriminator')
        # Layer definitions are omitted here; see the github repo linked below
        discriminator.summary()
        return discriminator

Some utility functions for monitoring the results:

def make_image_grid(imgs, shape, prefix, save_path, is_show=False):
    ...  # find the implementation in the github repo linked below

def flip_tensor(t):
    ...  # find the implementation in the github repo linked below

def plot_to_image(figure):
    ...  # find the implementation in the github repo linked below

def visualize_latent_space(x, labels, n_clusters, range_lim=(-80, 80), perplexity=40, is_save=False, save_path=None):
    ...  # find the implementation in the github repo linked below

Next, we define the training step, which is called for each batch:

@tf.function
def train_on_batch(x, y, model: ACAI):
    # Randomize the interpolation coefficient alpha
    alpha = tf.random.uniform((x.shape[0], 1), 0, 1)
    alpha = 0.5 - tf.abs(alpha - 0.5)  # Map into the interval [0, 0.5]

    # Construct the non-interpolated latent codes and the reconstructed input
    latent = model.encoder(x, training=True)
    res_x = model.decoder(latent, training=True)

    ae_loss = tf.reduce_mean(tf.losses.mean_squared_error(
        tf.reshape(x, (x.shape[0], -1)), tf.reshape(res_x, (res_x.shape[0], -1))))

    # Interpolate each latent code with the reversed batch and decode it
    inp_latent = alpha * latent + (1 - alpha) * latent[::-1]
    res_x_hat = model.decoder(inp_latent, training=False)

    pred_alpha = model.discriminator(res_x_hat, training=True)
    temp = model.discriminator(res_x + model.disc_reg_coef * (x - res_x), training=True)
    disc_loss_term_1 = tf.reduce_mean(tf.square(pred_alpha - alpha))
    disc_loss_term_2 = tf.reduce_mean(tf.square(temp))

    reg_ae_loss = model.ae_reg_coef * tf.reduce_mean(tf.square(pred_alpha))

    total_ae_loss = ae_loss + reg_ae_loss
    total_d_loss = disc_loss_term_1 + disc_loss_term_2
    # The gradient computation and optimizer updates on total_ae_loss and
    # total_d_loss are omitted here; see the github repo linked below

    return {
        'res_ae_loss': ae_loss,
        'reg_ae_loss': reg_ae_loss,
        'disc_loss': disc_loss_term_1,
        'reg_disc_loss': disc_loss_term_2
    }

Next, we need to define a main training function:

def train(model: ACAI, x_train, y_train, x_test,
          batch_size, epochs=1000, save_interval=200,
          save_path='./images'):
    n_epochs = tqdm.tqdm_notebook(range(epochs))
    total_batches = x_train.shape[0] // batch_size
    if not os.path.exists(save_path):
        os.makedirs(save_path)
    for epoch in n_epochs:
        offset = 0
        losses = []
        for batch_iter in range(total_batches):
            # Take consecutive batches; the last batch absorbs the remainder
            imgs = x_train[offset:offset + batch_size, ::] if (batch_iter < (total_batches - 1)) else x_train[offset:, ::]
            offset += batch_size

            loss = train_on_batch(imgs, None, model)
            losses.append(loss)

        avg_loss = avg_losses(losses)  # averaging helper; see the github repo linked below
        # wandb.log({'losses': avg_loss})

        if epoch % save_interval == 0 or (epoch == epochs - 1):
            sampled_imgs = model.autoencoder(x_test[:100])
            res_img = make_image_grid(sampled_imgs.numpy(), (28, 28), str(epoch), save_path)

            latent = model.encoder(x_train, training=False).numpy()
            latent_space_img = visualize_latent_space(latent, y_train, 10, is_save=True,
                                                      save_path=f'./latent_space/{epoch}.png')
            # wandb.log({'res_test_img': [wandb.Image(res_img, caption="Reconstructed images")],
            #            'latent_space': [wandb.Image(latent_space_img, caption="Latent space")]})

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype(np.float32) / 255.
x_test = x_test.astype(np.float32) / 255.
ann = ACAI(dropout=0.5)
train(model=ann,
      x_train=x_train,
      y_train=y_train,
      x_test=x_test,
      batch_size=x_train.shape[0] // 4,
      epochs=2000,
      save_interval=50,
      save_path='./images')

Results

Some results from ACAI after training finishes:

First is the visualization of the MNIST dataset after encoding. We can see that the clusters are well separated, so applying downstream tasks on the latent space (such as clustering; try KMeans and check it out yourself :D) leads to a significant improvement compared with raw data.
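To try the clustering suggestion, one can run KMeans on the latent codes. A sketch using synthetic "latent" vectors as a stand-in for the encoder output (in the real setting they come from `model.encoder(x_train, training=False).numpy()`):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

# Stand-in for well-separated 32-d latent codes of 10 digit classes
latent, labels = make_blobs(n_samples=1000, n_features=32, centers=10, random_state=0)

pred = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(latent)
print(adjusted_rand_score(labels, pred))  # close to 1.0 when clusters are well separated
```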

Second is the visualization of interpolation power on latent space:

• Interpolation with alpha values in range [0,1.0] with step 0.1.
• The 1st and final rows are the source and destination images, respectively.
• Formula:
mix_latent = alpha * src_latent + (1 - alpha) * dst_latent


We can see that there is a very smooth morphing from the digits at the top row to the digits at the bottom row.

The whole running code is available on GitHub (acai_notebook). Now it's your time to play with the paper :D.

Reference

[1] David Berthelot, Colin Raffel, Aurko Roy, and Ian Goodfellow. Understanding and improving interpolation in autoencoders via an adversarial regularizer, 2018.


Pytorch part 1: Introducing Pytorch

PyTorch is a deep learning framework and a scientific computing package. The scientific computing aspect is primarily a result of PyTorch's tensor library and its associated tensor operations. That means you can take advantage of PyTorch for many computing tasks, thanks to its tensor operation support, without touching its deep learning modules.

It is important to note that PyTorch tensors and their associated operations are very similar to NumPy n-dimensional arrays; a tensor is in fact an n-dimensional array.

PyTorch builds its library around object-oriented programming (OOP) concepts: we orient our program design and structure around objects. A tensor in PyTorch is represented by a torch.Tensor object, which can be created from a NumPy ndarray; the two objects then share memory. This makes the transition between PyTorch and NumPy very cheap from a performance perspective.

With PyTorch tensors, GPU support is built-in. It’s very easy with PyTorch to move tensors to and from a GPU if we have one installed on our system. Tensors are super important for deep learning and neural networks because they are the data structure that we ultimately use for building and training our neural networks.
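A short sketch of the points above: creating a tensor from a NumPy array (the two share memory) and moving it to a GPU when one is available:

```python
import numpy as np
import torch

arr = np.zeros(3)
t = torch.from_numpy(arr)   # no copy: the tensor and the ndarray share one buffer
arr[0] = 7.0
print(t[0].item())          # 7.0 -- the change is visible through the tensor

device = 'cuda' if torch.cuda.is_available() else 'cpu'
t_dev = t.to(device)        # moving to/from a GPU is a one-liner
print(t_dev.device)
```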

The initial release of PyTorch was in October 2016. Before PyTorch was created there was, and still is, another framework called Torch, a machine learning framework based on the Lua programming language. The connection between PyTorch and the Lua version exists because many of the developers who maintain the Lua version are the individuals who created PyTorch, and they have been working at Facebook ever since.

Along the way, we’ll be learning about and using the primary PyTorch modules as we build neural networks.

Why use Pytorch for deep learning?

• PyTorch’s design is modern, Pythonic. When we build neural networks with PyTorch, we are super close to programming neural networks from scratch. When we write PyTorch code, we are just writing and extending standard Python classes, and when we debug PyTorch code, we are using the standard Python debugger. It’s written mostly in Python and only drops into C++ and CUDA code for operations that are performance bottlenecks.
• It is a thin framework, which makes it more likely that PyTorch will be capable of adapting to the rapidly evolving deep learning environment as things change quickly over time.
• It stays out of the way, which lets us focus on neural networks rather than on the framework itself.

Why PyTorch is great for deep learning research

The reason for this research suitability is that PyTorch uses a dynamic computational graph, in contrast with TensorFlow, which uses a static computational graph, to calculate derivatives.

Computational graphs are used to graph the function operations that occur on tensors inside neural networks. These graphs are then used to compute the derivatives needed to optimize the neural network. A dynamic computational graph means that the graph is generated on the fly as the operations are created, whereas a static graph is fully determined before the actual operations occur.
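A tiny sketch of the dynamic graph in action: the graph is recorded while the forward expression runs, and backward() traverses it to compute the derivative.

```python
import torch

x = torch.tensor(2.0, requires_grad=True)
y = x ** 2 + 3 * x   # graph nodes are created on the fly, as the ops run
y.backward()         # walk the recorded graph to compute dy/dx

print(x.grad.item())  # dy/dx = 2x + 3 = 7.0 at x = 2
```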

k-Nearest Neighbors algorithms

In this blog post, I am going to introduce one of the most intuitive algorithms in the field of Supervised Learning[1], the k-Nearest Neighbors algorithm (kNN).

The original k-Nearest Neighbors algorithm

The kNN algorithm is very intuitive. Indeed, with the assumption that items close together in the dataset are typically similar, kNN infers the output of a new sample by first computing a distance score with every sample in the training dataset. From there, it creates a ‘neighbor zone’ by selecting the samples that are ‘near’ the candidate one, and performs the supervised task based on the samples lying inside that zone. The task could be either classification or regression.

Let’s start with the basic kNN algorithm. Let D = {(x_i, y_i), i = 1, …, N} be our training dataset with N samples belonging to C classes, where y_i is the class of one sample, and x_i denotes the corresponding feature vector that describes the characteristics of that sample. Furthermore, it is necessary to define a suitable distance metric, since it drives the algorithm to select neighbors and make predictions later on. A distance metric d is a mapping d: X × X → ℝ over a vector space X, where the following conditions are satisfied for all x, y, z ∈ X:

• d(x, y) ≥ 0, and d(x, y) = 0 if and only if x = y
• d(x, y) = d(y, x) (symmetry)
• d(x, z) ≤ d(x, y) + d(y, z) (triangle inequality)

In the following steps describing the k-Nearest Neighbors algorithm, the Euclidean distance d(x, y) = ‖x − y‖₂ = √(Σ_j (x_j − y_j)²) will be used as the distance metric.

For any new instance x:

• Find N_k(x), the set of k samples from the training dataset that are closest to x.
• The nearest neighbors are defined based on the distance metric d (note that we are using the Euclidean distance).

• The classifier is defined as:

ŷ = argmax_c Σ_{(x_i, y_i) ∈ N_k(x)} I(y_i = c)

where I is the unit (indicator) function. Note that for the regression problem, the prediction will just be the average of the response values of the neighbor samples.
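The steps above can be sketched in a few lines of plain Python (a minimal illustration, not an optimized implementation; the function name is ours):

```python
from collections import Counter
import math

def knn_predict(X_train, y_train, x_new, k=3):
    """Majority vote among the k nearest neighbors of x_new,
    using the Euclidean distance as the metric d."""
    scored = sorted(
        (math.dist(x_i, x_new), y_i) for x_i, y_i in zip(X_train, y_train)
    )
    neighbors = [y_i for _, y_i in scored[:k]]  # the neighbor zone N_k(x)
    return Counter(neighbors).most_common(1)[0][0]

X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = ["a", "a", "a", "b", "b", "b"]
print(knn_predict(X, y, (0.5, 0.5)))  # a
print(knn_predict(X, y, (5.5, 5.5)))  # b
```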


Weighted k-Nearest Neighbors

In the kNN algorithm, we weigh all neighbors equally. This may affect the inference step, especially when the neighbor zone becomes bigger and bigger. To strengthen the effect of ‘close’ neighbors relative to the others, the weighted scheme of k-Nearest Neighbors is applied.

Weighted k-Nearest Neighbors is based on the idea that, within N_k(x), observations that are closer to x should get a higher weight than the farther neighbors. Now it is necessary to note some properties that any weighting kernel K on the distance metric d should satisfy:

• K(d) ≥ 0 for all d
• K(d) is maximal at d = 0
• K(d) decreases monotonically as d grows

For any new instance x:

• We find N_k(x), the set of k samples that are closest to x.
• The (k+1)-th neighbor is used for standardization of the distances: d̄_i = d(x, x_i) / d(x, x_{(k+1)}).
• We transform the standardized distances d̄_i with a kernel function K into weights w_i = K(d̄_i).
• The classifier is defined as:

ŷ = argmax_c Σ_{(x_i, y_i) ∈ N_k(x)} w_i · I(y_i = c)
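A sketch of the weighted variant, using a triangular kernel K(d̄) = max(0, 1 − d̄); the kernel choice and function names here are ours:

```python
import math
from collections import defaultdict

def weighted_knn_predict(X_train, y_train, x_new, k=3):
    """Weighted kNN: distances are standardized by the (k+1)-th
    neighbor, then turned into weights by a triangular kernel."""
    scored = sorted(
        (math.dist(x_i, x_new), y_i) for x_i, y_i in zip(X_train, y_train)
    )
    d_std = scored[k][0] or 1.0  # distance to the (k+1)-th neighbor
    votes = defaultdict(float)
    for d, y_i in scored[:k]:
        votes[y_i] += max(0.0, 1.0 - d / d_std)  # triangular kernel weight
    return max(votes, key=votes.get)

X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = ["a", "a", "a", "b", "b", "b"]
print(weighted_knn_predict(X, y, (0.5, 0.5)))  # a
```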

The pros and cons of kNN, and further topics

The kNN and weighted kNN algorithms do not rely on any specific assumption about the distribution of the data, so they are quite easy to apply to many problems as a baseline model. Furthermore, kNN (and its family) is very intuitive to understand and implement, which again makes it a worthy try-it-first approach for many supervised problems.

Despite those facts, kNN still has challenges in some aspects: it is computationally expensive, especially when the dataset becomes huge. Another challenge is choosing the ‘correct’ distance metric that best follows the assumption behind the algorithm: items close together in the dataset should be typically similar. Lastly, the curse of dimensionality heavily affects the distance metric. Beyer et al. [2] prove that, under some preconditions, in high-dimensional space the distance from the query point to its nearest neighbor approaches its distance to the farthest one. In this case, the concept of ‘nearest neighbors’ is no longer meaningful.

I. A Brief Overview

Consider an example of a courtroom trial:

A car company C is accused of not manufacturing environment-friendly vehicles. The average CO2 emission per car from different manufacturers based on a survey from the previous year is 120.4 grams per kilometer. But for a random batch of 100 cars produced at C’s factory, the average CO2 emission is 121.2 grams per kilometer with a standard deviation of 1.8.

At the trial, Company C is not considered to be guilty as long as their wrongdoing is not proven. A public prosecutor tries to prove that C is guilty and can only succeed when enough evidence is presented.

The example above illustrates the concepts of hypothesis testing; specifically, there are two conflicting hypotheses:

i) C is not guilty; or

ii) C is guilty

The first is called the null hypothesis (denoted by H0), and the second the alternative hypothesis (denoted by HA). At the start of the trial, the null hypothesis is temporarily accepted, until proven otherwise. The goal of hypothesis testing is to perform some sort of transformed comparison between the two numbers 121.2 and 120.4 to either reject H0 and accept HA, or vice versa. This is called one-sample mean testing because we are comparing the average value obtained from one sample (121.2) with the average value assumed to represent the whole population (120.4).

II. Required Steps for Hypothesis Testing

The six steps below must be followed to conduct a hypothesis test. The details will be elaborated on with our example afterward.

1) Set up null and alternative hypotheses and check conditions.

2) Determine the significance level, alpha.

3) Calculate the test statistic.

4) Calculate the probability value (a.k.a the p-value), or find the rejection region. For the following example, we will use the p-value.

5) Decide on the null hypothesis.

6) State the overall conclusion.

III. A step-by-step example

1) Set up hypotheses:

We already mentioned in the beginning the two hypotheses. But now we will formalize them:

Null hypothesis:

Company C’s CO2 mean (denoted by μ ) is equal to the population mean (denoted by μ0):           μ = μ0

Alternative hypothesis:

Company C’s CO2 mean is greater than the population mean: μ > μ0

The hypothesis test for the one-sample mean we are conducting requires the data to come from an approximately normal distribution, or a large enough sample size, which can be quite subjective. To keep things simple, we decide that the data gathered from company C is large enough, with the sample size being 100 cars.

2) Determine the significance level, alpha, or confidence level

The significance level and its complementary, the confidence level, provide a level of probability cutoff for our test to make decisions about the hypotheses. A common value for alpha is 5%, which is the same as a confidence level of 95%.

3) Calculate the test statistic

For the one-sample mean test, we calculate the t* test statistic using the formula:

t* = (x̄ − μ0) / (s / √n)

where x̄ is the sample mean (121.2), μ0 is the population mean (120.4), s is the standard deviation of the sample we are testing (1.8), and n is the size of the sample (100).
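Plugging the survey numbers into the formula gives the test statistic (a quick check in plain Python):

```python
import math

x_bar, mu0 = 121.2, 120.4   # sample mean and population mean
s, n = 1.8, 100             # sample standard deviation and sample size

t_star = (x_bar - mu0) / (s / math.sqrt(n))
print(round(t_star, 3))  # 4.444
```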

[AI Lab] Hiring Data Scientist

If you want to join in exciting and challenging projects, MTI Tech could be the next destination for your career.

Contact

recruitment@mti-tech.vn

MTI Technology specializes in creating smart mobile contents and services that transform and transcend customers’ lives. We design and develop our products using agile methods, bringing the best deliverable results to the table in the shortest amount of time. MTI stands for an attitude: seeking a balance of excellence, pragmatism and convenience for customers. Starting with 20 original members, we have grown to more than 100 bright talents and continue to grow. Looking for a place to grow your talents and be awesome? This is the place!

The Job

We are looking for Data Scientists who would like to participate in projects that apply various existing data to AI and, moreover, combine it with other data to create new value.

Currently, we are looking for candidates with experience in Algorithms / Natural Language Processing (NLP), but any other field of AI will be considered too.

Example of data

• Data of medical examination results, medical questionnaire results or image of them in health check, or general medical examination.
• Data of athletes’ training results and vitals.
• Pregnancy activities data of pregnant women.
• Life data such as weather and navigation.
• Text data such as newspapers.
We have many project styles: outsourcing/offshoring, research, etc.

Example of application

• By combining and analyzing Healthcare data such as medical examination/ medical questionnaire results and labor data such as mental health check and overtime records etc., we can find out future health risks at an earlier stage.

Programming Language

• Python, R, MATLAB, SPSS
• Java, JavaScript, Golang, Haskell, Erlang/Elixir are a plus.

Currently, development is mainly in Python. It is good to understand object-oriented programming in Java, etc. It is also good if you have parallel-processing experience in a server-side language (Golang, etc.).

In addition, engineers who can use functional languages (Haskell, Erlang/Elixir) are treasured talents. Such people are interested in various programming languages, have mathematical curiosity, and many of them study by themselves. Although we do not have many opportunities to use these languages in actual development, we welcome such engineers as well.

Operational Environment

• AWS, Google Cloud Platform, Microsoft AZURE, Redmine, GitHub etc.

Who we are looking for

Recently, in the development of Web services, engineers who have experience using APIs and libraries related to prominent AI offerings (open source, Google, IBM, etc.) from the standpoint of a “user” are often classified as “AI Engineers”. What MTI Group seeks is not such a technician; we want an experienced person who has deeply learned data science themselves. On the other hand, we will still consider a person who has learned data science but does not yet have much experience in actual work.

• Have a great ambition and ability to study the most leading-edge research by yourself and apply them to your own development.
• Have technical skills and creativity to build new technologies from scratch by yourself if it is necessary but does not exist yet.
• Adapt yourself to our team culture of discussing and sharing together. Individuality is welcome; excellent people have a variety of personalities. However, being able to work only on your own is a problem.
• Have experience in research and study related to Statistical Mathematics, such as regression analysis, SVM, or information theory.
• Have taken part in research/business about AI, Machine Learning, Natural Language Processing (NLP), neural networks, and so on.
• Have experience in research and study related to Engineering and Science, Econometrics, Behavioral Psychology, Medical Statistics, and so on.
• Have working experience in statistical analysis or as a Data Scientist.
• In AI development, trial and error is repeated many times to solve problems with unclear specifications or no fixed answer. For this reason, we are looking for individuals with the following qualities:
• Be agile in the cycle of trial and error (quickly turn your thoughts into code).
• Be concerned with even small issues/problems and solve them efficiently through logical thinking.
• Be curious about knowledge. The person who is greatly interested in and curious about knowledge surely grows the most.
• Have deep experience in research.
• English skill: Be able to use your English reading skill to gain information related to AI.

More on MTI – what is it like to work in MTI?

At MTI Technology, our goal is to empower every individual to learn, discover, be able to communicate openly and honestly to create the best services based on effective teamwork.

Bias in Data Science – the Good, the Bad and the Avoidable!?

In recent years, there have been a few prominent examples of accidental bias in machine-learning applications, such as smartphones’ beauty filters (that essentially ended up whitening skin) [1] or Microsoft’s from-innocent-teen-to-racist-in-24-hours chatbot [2,3]. Examples such as these fell victim to inherently biased data being fed into algorithms too complex to allow for much transparency. Hidden bias continues to be an issue on ubiquitous social media platforms, such as Instagram, whose curators appear to profess themselves both regretful AND baffled [4]. Unfortunately, any model will somewhat regurgitate what it has been fed and interventions at this level of model complexity may prove tricky.

Interestingly, bias itself does not need to be harmful and is often built into a model’s design on purpose, either to address only a subset of the overall population or to model a real-world state. For instance, when predicting house prices from size and number of bedrooms, the model’s bias parameter often represents the average house price in the dataset. Thus, we need to distinguish between conscious and unconscious bias in data analysis. Additionally, there is the factor of intent, i.e., is the person conducting the analysis well-intentioned and following good scientific method, or trying to manipulate it to achieve a particular outcome?

In the following, I will only discuss aspects of unintentional and unconscious bias, meaning bias hidden from the researcher or data scientist introducing it. This is by no means an exhaustive discussion, but merely a highlight of some pervasive aspects:

A. Data availability bias

B. Coherent theory bias

C. Method availability/popularity bias

A. Data availability bias

The problem of scientists selecting their data out of convenience, rather than suitability or representativeness for the current task, has been around for a while [4]: the ideal dataset may not be available in a machine-readable format, or would require higher costs and more time for processing; in short, several obstacles stand in the way of doing an analysis quickly. For instance, in the area of Natural Language Processing, the major European languages, like English, French, and German, tend to receive more attention because both data and tools to analyze them are widely available. Similarly, psychology research has mostly focused on so-called WEIRD societies (White, Educated, Industrialized, Rich, Democratic) [5] and, out of convenience, often targets the even smaller population of “North American college students”, who unsurprisingly have been found not to represent human populations at large.

B. Coherent theory bias

Various studies suggest that we as people strongly favor truths that fit into our pre-existing worldview, and why would scientists be exempt from this? Thus, it appears that when people analyze data, they are often biased by their underlying beliefs about the outcome and are then less likely to yield unexpected non-significant results [6]. This does not even include scientists disregarding new evidence because of conflicting interests [7]. This phenomenon is commonly referred to as confirmation bias or, more fittingly, “my side” bias.

C. Method availability/popularity bias

There is a tendency to hail new, trendy algorithms as one-size-fits-all solutions for whatever task or application: the solution is presented before examining the problem and its actual requirements. While more complex models are often more powerful, this comes at the cost of interpretability, which in some cases is not advisable. Additionally, some methods, both simple and complex ones, enjoy popularity primarily because they come ready-to-use in one’s favorite programming language.

Going forward…

We as data scientists should:

a. Select our data carefully with our objective in mind. Get to know our data and its limitations.

b. Be honest with ourselves about possible emotional investment in our analyses’ outcomes and resulting conflicts.

c. Examine the problem and its (theoretical) solutions BEFORE making any model design choices.

References:

[2] https://www.bbc.com/news/technology-35902104 (last accessed 21.10.2020)

[5] Joseph Rudman (2003) Cherry Picking in Nontraditional Authorship Attribution Studies, CHANCE, 16:2,26-32, DOI: 10.1080/09332480.2003.10554845

[6] Henrich, Joseph; Heine, Steven J., and Norenzayan, Ara. The Weirdest People in the World? Behavioral and Brain Sciences, 33(2-3):61–83, 2010. doi: 10.1017/S0140525X0999152X.

[7] Hewitt CE, Mitchell N, Torgerson DJ. Heed the data when results are not significant. BMJ. 2008;336(7634):23-25. doi:10.1136/bmj.39379.359560.AD

[8] Ioannidis JP. Why most published research findings are false. PLoS Med. 2005;2(8):e124. doi:10.1371/journal.pmed.0020124


Motivation

What makes computers useful to us is primarily their ability to solve problems. The procedure by which a computer solves a problem is an algorithm. With an increasing number of algorithms available for solving data-related problems, there is increasing demand for data scientists to understand algorithm performance well enough to choose the right algorithm for their problem.

Having a general perception of the efficiency of an algorithm helps shape the thought process for creating or choosing better algorithms. With this intention in mind, I would like to create a series of posts discussing what makes a good algorithm in practice or, for short, an efficient algorithm. This article is the first step of the journey.

Define Efficiency

An algorithm is considered efficient if its resource consumption, also known as computational cost, is at or below some acceptable level. Roughly speaking, ‘acceptable’ means: it will run in a reasonable amount of time or space on an available computer, typically as a function of the size of the input.

There are many ways in which the resources used by an algorithm can be measured: the two most common measures are speed and memory usage. In the next two sections, we will look at two different perspectives on measuring the efficiency of an algorithm: the theoretician’s and the practitioner’s.

The theoretician’s perspective

Theoreticians are interested in measuring the efficiency of an algorithm without actually having to run it on several machines and input sizes. The key idea is that they do not consider the runtime of the algorithm on any particular input. Rather, they look at what are known as asymptotic runtimes. In other words, they look at how the runtime scales with the input size n as n gets larger. Does the runtime scale proportionally to n, or to n², or maybe exponentially in n? These rates of growth are so different that, as long as n is sufficiently large, constant multiples coming from other measures, like temporary or long-term disk usage, would be relatively small and can be neglected.
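One way to see this scaling without timing anything is to count basic operations. The toy sketch below (our own example) counts comparisons in selection sort, a Θ(n²) algorithm: doubling n roughly quadruples the count, regardless of constant factors.

```python
def selection_sort_comparisons(xs):
    """Sort a copy of xs with selection sort and count comparisons."""
    xs = list(xs)
    count = 0
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            count += 1                      # one comparison per pair
            if xs[j] < xs[i]:
                xs[i], xs[j] = xs[j], xs[i]
    return count

c100 = selection_sort_comparisons(range(100))   # n(n-1)/2 = 4950
c200 = selection_sort_comparisons(range(200))   # n(n-1)/2 = 19900
print(c200 / c100)  # about 4: doubling n quadruples the work
```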

The practitioner’s perspective

While certainly useful, the asymptotic runtime of an algorithm doesn’t tell the whole story. Some algorithms have good asymptotic runtimes but constants so huge that they effectively can’t be used. Ever. Lipton calls them galactic algorithms: a galactic algorithm is wonderful in its asymptotic behavior but is never used to actually compute anything.

In practice, other factors can affect the efficiency of an algorithm, such as requirements for accuracy and/or reliability. As detailed below, how an algorithm is implemented can also have a significant effect on its actual efficiency, though many aspects of this relate to optimization issues.

Implementation issues can also affect efficiency, such as the choice of programming language, how the algorithm is coded, the choice of compiler for a particular language, the compilation options used, or even the operating system. In many cases, a language implemented by an interpreter may be much slower than one implemented by a compiler.

Binomial Theorem

Can you expand (x + y)^2? I guess you would find that quite easy to do. You can easily find that (x + y)^2 = x^2 + 2xy + y^2.

How about the expansion of a much larger power, say (x + y)^100? It is no longer easy, is it? However, if we use the Binomial Theorem, this expansion becomes an easy problem.

Binomial Theorem is a very intriguing topic in mathematics and it has a wide range of applications.

Theorem

Let x and y be real numbers (or complex numbers, or polynomials). For any positive integer n, we have:

(x + y)^n = Σ_{k=0}^{n} C(n, k) x^{n−k} y^k

where

C(n, k) = n! / (k! (n − k)!)

Proof:

We will use proof by induction. The base case n = 1 is obvious. Now suppose that the theorem is true for the case n, that is, assume that:

(x + y)^n = Σ_{k=0}^{n} C(n, k) x^{n−k} y^k

We will need to show that this is true for n + 1:

(x + y)^{n+1} = Σ_{k=0}^{n+1} C(n + 1, k) x^{n+1−k} y^k

Let us consider the left-hand side of the equation above:

(x + y)^{n+1} = (x + y)(x + y)^n
             = Σ_{k=0}^{n} C(n, k) x^{n+1−k} y^k + Σ_{k=0}^{n} C(n, k) x^{n−k} y^{k+1}
             = x^{n+1} + Σ_{k=1}^{n} [C(n, k) + C(n, k − 1)] x^{n+1−k} y^k + y^{n+1}

We can now apply Pascal’s identity:

C(n, k − 1) + C(n, k) = C(n + 1, k)

The equation above can be simplified to:

(x + y)^{n+1} = Σ_{k=0}^{n+1} C(n + 1, k) x^{n+1−k} y^k

as we desired.
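We can also sanity-check the identity numerically with Python’s math.comb (the function name binomial_sum is ours):

```python
import math

def binomial_sum(x, y, n):
    """Right-hand side of the theorem: sum of C(n, k) x^(n-k) y^k."""
    return sum(math.comb(n, k) * x ** (n - k) * y ** k for k in range(n + 1))

# Both sides agree exactly for integer inputs, e.g. x = 2, y = 3, n = 10:
print(binomial_sum(2, 3, 10), (2 + 3) ** 10)  # 9765625 9765625
```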

Example 1:  Power rule in Calculus

In calculus, we always use the power rule:

d/dx x^n = n x^{n−1}

We can prove this rule using the Binomial Theorem.

Proof:

Recall that the derivative of any continuous function f(x) is defined as:

f′(x) = lim_{h→0} [f(x + h) − f(x)] / h

Let n be a positive integer and let f(x) = x^n.

The derivative of f(x) is:

f′(x) = lim_{h→0} [(x + h)^n − x^n] / h
      = lim_{h→0} [Σ_{k=0}^{n} C(n, k) x^{n−k} h^k − x^n] / h
      = lim_{h→0} [n x^{n−1} h + Σ_{k=2}^{n} C(n, k) x^{n−k} h^k] / h
      = lim_{h→0} (n x^{n−1} + Σ_{k=2}^{n} C(n, k) x^{n−k} h^{k−1})
      = n x^{n−1}
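A quick numerical check of the power rule, using a finite-difference approximation of the derivative (our own sketch, with arbitrary choices n = 5, x = 2):

```python
n, x, h = 5, 2.0, 1e-6

numeric = ((x + h) ** n - x ** n) / h  # difference quotient with small h
exact = n * x ** (n - 1)               # power rule: 5 * 2^4 = 80

print(exact, abs(numeric - exact) < 1e-3)  # 80.0 True
```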

Example 2:  Binomial Distribution

Let X be the number of heads in a sequence of n independent coin tosses. X is usually modeled by the binomial distribution in a probability model. Let p be the probability that a head shows up in a toss, and let q = 1 − p. The probability that there are exactly k heads in the sequence of n tosses is:

P(X = k) = C(n, k) p^k q^{n−k}

We know that the sum of all the probabilities must equal 1. To show this, we can use the Binomial Theorem. We have:

Σ_{k=0}^{n} P(X = k) = Σ_{k=0}^{n} C(n, k) p^k q^{n−k} = (p + q)^n = 1^n = 1
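A quick numeric check that the probabilities sum to 1 (for an arbitrary choice of n and p):

```python
import math

n, p = 10, 0.3
q = 1 - p

total = sum(math.comb(n, k) * p ** k * q ** (n - k) for k in range(n + 1))
print(round(total, 12))  # 1.0
```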

Please also check the articles Gaussian Samples and N-gram language models, Bayesian model, and Monte Carlo for more statistics knowledge.
