N-gram language models – Part 2

Evaluating the model

Once the conditional probability of each n-gram has been calculated from the training text, we can assign a probability to every word in an evaluation text. The probability of the entire evaluation text is then simply the product of all the n-gram probabilities:

P(w_1, w_2, \dots, w_m) = \prod_{i=1}^{m} P(w_i \mid w_{i-n+1}, \dots, w_{i-1})
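As a small illustration, here is a minimal sketch in Python, assuming bigram (n = 2) probabilities that have already been estimated from the training text and stored in a plain dictionary; the table entries and the helper text_probability are made up for this example:

```python
# Minimal sketch: bigram (n = 2) probabilities estimated from the training text,
# stored as {(previous_word, word): P(word | previous_word)}. Values are made up.
bigram_prob = {
    ("<s>", "i"): 0.4,
    ("i", "like"): 0.3,
    ("like", "tea"): 0.2,
}

def text_probability(tokens, probs):
    """Product of the conditional probability assigned to every word in the text."""
    p = 1.0
    for prev, word in zip(tokens, tokens[1:]):
        p *= probs.get((prev, word), 0.0)  # unseen bigrams get probability 0 in this sketch
    return p

print(text_probability(["<s>", "i", "like", "tea"], bigram_prob))  # 0.4 * 0.3 * 0.2 = 0.024
```

Multiplying many small probabilities quickly shrinks toward zero, which is one practical reason to work with the average log-likelihood described next.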

 

As a result, we can again use the average log-likelihood as the evaluation metric for the n-gram model: the better the n-gram model, the higher the probability it assigns, on average, to each word in the evaluation text.

\text{average log-likelihood} = \frac{1}{m} \sum_{i=1}^{m} \log P(w_i \mid w_{i-n+1}, \dots, w_{i-1})
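Continuing the same made-up bigram example, a minimal sketch of the average log-likelihood computation (the small probability floor for unseen bigrams is an assumption of this sketch, not part of the model described above):

```python
import math

# Same made-up bigram table as in the previous sketch.
bigram_prob = {
    ("<s>", "i"): 0.4,
    ("i", "like"): 0.3,
    ("like", "tea"): 0.2,
}

def average_log_likelihood(tokens, probs):
    """Mean of the log probabilities assigned to the words of the evaluation text."""
    logs = [
        math.log(probs.get((prev, word), 1e-12))  # small floor so an unseen bigram does not crash log()
        for prev, word in zip(tokens, tokens[1:])
    ]
    return sum(logs) / len(logs)

print(average_log_likelihood(["<s>", "i", "like", "tea"], bigram_prob))
# (log 0.4 + log 0.3 + log 0.2) / 3 ≈ -1.24
```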

 

 
