In this blog post, I am going to introduce one of the most intuitive algorithms in the field of Supervised Learning[1], the *k*-Nearest Neighbors algorithm (kNN).

**The original *k*-Nearest Neighbors algorithm**

The kNN algorithm is very intuitive. Under the assumption that items close together in the dataset are typically similar, kNN infers the output of a new sample by first computing a distance score between it and every sample in the training dataset. From there, it creates a ‘neighbor zone’ by selecting the samples that are ‘near’ the candidate one, and performs the supervised task based on the samples that lie inside that zone. The task can be either classification or regression.

Let’s start with the basic kNN algorithm. Let $\mathcal{D} = \{(\mathbf{x}_i, y_i)\}_{i=1}^{N}$ be our training dataset with $N$ samples belonging to $C$ classes, where $y_i \in \{1, \dots, C\}$ is the class of one sample, and $\mathbf{x}_i$ denotes the corresponding feature vector that describes the characteristics of that sample. Furthermore, it is necessary to define a suitable distance metric, since it drives the algorithm to select neighbors and make predictions later on. The distance metric $d$ is a mapping $d: \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$ over a vector space $\mathcal{X}$, where the following conditions are satisfied for all $\mathbf{x}, \mathbf{y}, \mathbf{z} \in \mathcal{X}$:

- $d(\mathbf{x}, \mathbf{y}) \geq 0$ (non-negativity)
- $d(\mathbf{x}, \mathbf{y}) = 0$ if and only if $\mathbf{x} = \mathbf{y}$ (identity of indiscernibles)
- $d(\mathbf{x}, \mathbf{y}) = d(\mathbf{y}, \mathbf{x})$ (symmetry)
- $d(\mathbf{x}, \mathbf{y}) \leq d(\mathbf{x}, \mathbf{z}) + d(\mathbf{z}, \mathbf{y})$ (triangle inequality)

In the following steps to describe the *k*-Nearest Neighbors algorithm, the Euclidean distance $d(\mathbf{x}, \mathbf{z}) = \|\mathbf{x} - \mathbf{z}\|_2 = \sqrt{\sum_{j} (x_j - z_j)^2}$ will be used as the distance metric $d$.
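As a minimal sketch, the Euclidean distance above can be computed with NumPy (the function name `euclidean_distance` is my own, not from the original post):

```python
import numpy as np

def euclidean_distance(x, z):
    """Euclidean (L2) distance between two feature vectors x and z."""
    x = np.asarray(x, dtype=float)
    z = np.asarray(z, dtype=float)
    return np.sqrt(np.sum((x - z) ** 2))

# Example: the classic 3-4-5 right triangle
print(euclidean_distance([0, 0], [3, 4]))  # → 5.0
```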

For any new instance $\mathbf{x}$:

- Find $N_k(\mathbf{x})$, the set of $k$ samples in the training dataset that are closest to $\mathbf{x}$.
- The way to define the nearest neighbors is based on the distance metric $d$ (note that we are using the Euclidean distance).

- The classifier is defined as:

$$\hat{y} = \underset{c \in \{1, \dots, C\}}{\arg\max} \sum_{(\mathbf{x}_i, y_i) \in N_k(\mathbf{x})} \mathbb{1}(y_i = c)$$

where $\mathbb{1}(\cdot)$ is the unit (indicator) function. Note that for the regression problem, the prediction is simply the average of the response values of the neighbor samples: $\hat{y} = \frac{1}{k} \sum_{(\mathbf{x}_i, y_i) \in N_k(\mathbf{x})} y_i$.
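The steps above can be sketched in a few lines of NumPy. This is a brute-force illustration, not a production implementation; the function and variable names are mine:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    """Classify x_new by majority vote among its k nearest training samples."""
    X_train = np.asarray(X_train, dtype=float)
    x_new = np.asarray(x_new, dtype=float)
    # Euclidean distance from x_new to every training sample
    dists = np.sqrt(((X_train - x_new) ** 2).sum(axis=1))
    # Indices of the k closest samples -- the neighbor zone N_k(x)
    neighbor_idx = np.argsort(dists)[:k]
    # Majority vote over the neighbors' labels
    votes = Counter(y_train[i] for i in neighbor_idx)
    return votes.most_common(1)[0][0]

# Toy example: two well-separated clusters
X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
y = ['a', 'a', 'a', 'b', 'b', 'b']
print(knn_predict(X, y, [0.5, 0.5], k=3))  # → 'a'
print(knn_predict(X, y, [5.5, 5.5], k=3))  # → 'b'
```

For regression, the `Counter` vote would be replaced by `np.mean` over the neighbors' response values.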

**Weighted *k*-Nearest Neighbors**

In the kNN algorithm, all neighbors are weighted equally. This may hurt the inference step, especially when the neighbor zone becomes larger and larger. To strengthen the effect of ‘close’ neighbors over more distant ones, a weighted scheme of *k*-Nearest Neighbors is applied.

Weighted *k*-Nearest Neighbors is based on the idea that, within $N_k(\mathbf{x})$, observations that are closer to $\mathbf{x}$ should get a higher weight than the further neighbors. Now it is necessary to note some properties of any weighting scheme, expressed as a kernel function $K$ on the distance $d$:

- $K(d) \geq 0$ for all $d$
- $K$ attains its maximum at $d = 0$
- $K(d)$ decreases monotonously for $d \to \infty$

For any new instance $\mathbf{x}$:

- We find $N_{k+1}(\mathbf{x})$, the set of $k+1$ samples in the training dataset that are closest to $\mathbf{x}$.
- The $(k+1)$-th neighbor is used for standardization of the $k$ smallest distances: $d^{(i)} = \frac{d(\mathbf{x}, \mathbf{x}_{(i)})}{d(\mathbf{x}, \mathbf{x}_{(k+1)})}, \; i = 1, \dots, k$.
- We transform the standardized distances with any kernel function $K$ into weights $w_i = K(d^{(i)})$.
- The classifier is defined as:

$$\hat{y} = \underset{c \in \{1, \dots, C\}}{\arg\max} \sum_{(\mathbf{x}_{(i)}, y_{(i)}) \in N_k(\mathbf{x})} w_i \, \mathbb{1}(y_{(i)} = c)$$
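A sketch of the weighted variant, using a Gaussian kernel as one possible choice of $K$ (the kernel choice and all names here are my own illustration, not prescribed by the algorithm):

```python
import numpy as np
from collections import defaultdict

def gaussian_kernel(d):
    """One common kernel choice: weight decays smoothly with distance."""
    return np.exp(-0.5 * d ** 2)

def weighted_knn_predict(X_train, y_train, x_new, k=3, kernel=gaussian_kernel):
    """Weighted kNN: neighbors vote with kernel weights on standardized distances."""
    X_train = np.asarray(X_train, dtype=float)
    x_new = np.asarray(x_new, dtype=float)
    dists = np.sqrt(((X_train - x_new) ** 2).sum(axis=1))
    order = np.argsort(dists)  # needs at least k+1 training samples
    # Standardize the k smallest distances by the (k+1)-th neighbor's distance
    scale = dists[order[k]] if dists[order[k]] > 0 else 1.0
    # Accumulate kernel-weighted votes per class
    scores = defaultdict(float)
    for i in order[:k]:
        scores[y_train[i]] += kernel(dists[i] / scale)
    return max(scores, key=scores.get)

X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
y = ['a', 'a', 'a', 'b', 'b', 'b']
print(weighted_knn_predict(X, y, [0.5, 0.5], k=3))  # → 'a'
```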

**The pros and cons of kNN, and further topics**

The kNN and weighted kNN algorithms do not rely on any specific assumption about the distribution of the data, so they are quite easy to apply to many problems as baseline models. Furthermore, kNN (and its family) is very intuitive to understand and implement, which again makes it a worthy try-it-first approach for many supervised problems.

Despite those strengths, kNN still has challenges in some aspects. It is computationally expensive, especially when the dataset becomes huge, since each prediction requires computing the distance to every training sample. Another challenge is choosing the ‘correct’ distance metric that best matches the assumption behind the algorithm: items close together in the dataset should be typically similar. Lastly, the curse of dimensionality heavily affects the distance metric. Beyer et al.[2] prove that, under some preconditions, in high-dimensional space all points converge to the same distance from the query point. In this case, the concept of ‘nearest neighbors’ is no longer meaningful.
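The distance-concentration effect is easy to see in a small simulation (my own sketch, using uniform random data): the ratio of the farthest to the nearest neighbor's distance shrinks toward 1 as the dimension grows.

```python
import numpy as np

def distance_contrast(dim, n_points=1000, seed=0):
    """Ratio of farthest to nearest distance from a random query point."""
    rng = np.random.default_rng(seed)
    points = rng.random((n_points, dim))  # uniform points in the unit hypercube
    query = rng.random(dim)
    dists = np.sqrt(((points - query) ** 2).sum(axis=1))
    return dists.max() / dists.min()

# The contrast shrinks as the dimension grows: nearest and farthest
# neighbors become nearly equidistant, so 'nearest' loses meaning.
for dim in (2, 10, 100, 1000):
    print(dim, round(distance_contrast(dim), 2))
```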