In AI Lab, one of our projects is “Weather Forecasting”, in which we process data from weather radars and make predictions using nowcasting techniques, i.e. forecasting over a short time period such as 60 minutes.

To develop a forecasting model and evaluate it, we need to run many test cases. Each test case corresponds to one rainfall event, whose duration can be several hours. In the rainy season, one rainfall event can last for several days.

Given one event and one area of interest (e.g. 80 km² of Tokyo city), our actual work requires storing a list of arrays of rainfall intensity in the machine. Each array corresponds to one minute of radar observation. If the event lasts for 5 days, then the number of arrays is

60 minutes × 24 hours × 5 days = 7,200 arrays

Therefore, storing such an amount of data costs a lot of memory.

One solution is to modify the way data is fed to the evaluation module. However, for the scope of this blog, let us assume that we would like to keep that amount of data in the computer. **How** do we **compress** the data, and **how much** can it **be compressed**?

The aim of this post is to introduce the Huffman algorithm, one of the basic methods for data compression. By learning about the Huffman algorithm, we can easily understand the mechanism behind some more advanced compression methods.

Let us start with a question: how does a computer store data?

**Data in computer language**

A computer uses the symbols {0, 1} to represent numbers. One character, 0 or 1, is called a bit. Any string of bits is called a code.

Each number is encoded by a unique string of bits, or code. The encoder should satisfy the “prefix” condition, which states that no code is a prefix of any other code. This makes it always possible to decode a bit-string back to a number.

Let us consider an example. Assume that we have data *d* as a list of numbers, *d* = [1, 1, 1, 2, 2, 2, 3, 3, 3, 3].

To store this list *d*, the computer needs to encode the three numbers 1, 2, and 3. Two possible encoders are:

| Code \ Number | 1 | 2 | 3 |
| --- | --- | --- | --- |
| Code 1 | ‘0’ | ‘01’ | ‘11’ |
| Code 2 | ‘0’ | ‘10’ | ‘11’ |

Code 1 violates the ‘prefix’ condition, as ‘0’ is a prefix of ‘01’, while Code 2 does not. Therefore, Code 2 can be used for encoding.
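The prefix condition above can be checked mechanically. The following small sketch (the helper name `violates_prefix` is ours, not a standard function) tests both candidate codes:

```python
def violates_prefix(code):
    """Return True if any codeword is a prefix of another codeword."""
    words = list(code.values())
    return any(a != b and b.startswith(a) for a in words for b in words)

code1 = {1: '0', 2: '01', 3: '11'}
code2 = {1: '0', 2: '10', 3: '11'}

print(violates_prefix(code1))  # True: '0' is a prefix of '01'
print(violates_prefix(code2))  # False: Code 2 is a valid prefix code
```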

So, how many bits are needed to represent N different numbers? Theoretically, codes of length k bits can encode 2^k different numbers.

For example, to encode 2 numbers, we need codes of 1-bit length: ‘0’ and ‘1’.

To encode 4 = 2^2 numbers, we need codes of 2-bit length: ‘00’, ‘01’, ‘10’, ‘11’.

To encode 256 = 2^8 numbers, we need codes of 8-bit length, which are: … [we skip it :)]

Consider one radar image of dimension 401×401 containing float numbers that take values from 0 to 200 mm/h. If we are interested in one decimal place, the data can be multiplied by 10 and saved as integers. The array then contains 401×401 integers whose values range from 0 to 2000.

Following the above logic, to encode the 2001 different numbers from 0 to 2000 we need 11-bit codes, because 2^11 = 2048. The total number of bits to encode this data is then: 401 × 401 × 11 bits = 1,768,811 bits.
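This arithmetic can be reproduced in a couple of lines (a sketch, using only the standard library):

```python
import math

n_values = 2001                                    # integers 0..2000
bits_per_value = math.ceil(math.log2(n_values))    # smallest k with 2^k >= 2001
total_bits = 401 * 401 * bits_per_value
print(bits_per_value, total_bits)  # 11 1768811
```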

Compressing the data aims to reduce the total number of bits used to encode it; in other words, to find an encoder with the minimum average code length.

**Expected code length**

The expected or average code length (ECL) of a data *d* encoded with a code *C* is defined as:

$$\mathrm{ECL}(d, C) = \sum_{x} f(x)\, l(x)$$

where *x* ranges over the values of the data, f(x) is the frequency of the value x, and l(x) is the length of the code used to encode x. This quantity tells us how many bits, on average, are used to encode one number of the data d with the code C.
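The definition translates directly into code. Here is a sketch that computes the ECL of the example list *d* from above under Code 2 (the helper name `expected_code_length` is ours):

```python
from collections import Counter

def expected_code_length(data, code):
    """ECL = sum over values x of f(x) * l(x), with f(x) a relative frequency."""
    counts = Counter(data)
    n = len(data)
    return sum(counts[x] / n * len(code[x]) for x in counts)

d = [1, 1, 1, 2, 2, 2, 3, 3, 3, 3]
code2 = {1: '0', 2: '10', 3: '11'}
print(expected_code_length(d, code2))  # 0.3*1 + 0.3*2 + 0.4*2 = 1.7 bits/value
```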

**Minimum expected code length**

It can be proved mathematically that the ECL of a code is bounded below by the entropy of the given data. What is the entropy of data? It is a quantity that measures information, initially introduced by Shannon (a mathematician and the father of Information Theory). It tells us how much uncertainty the data contains.

From a mathematical point of view, the entropy of the data is defined as

$$H(d) = -\sum_{x} f(x) \log_2 f(x)$$

And by the source coding theorem, the following inequality always holds:

$$\mathrm{ECL}(d, C) \ge H(d)$$

which is equivalent to

$$\sum_{x} f(x)\, l(x) \ge -\sum_{x} f(x) \log_2 f(x)$$
The ECL achieves its minimum value when l(x) = -log2[f(x)], meaning that the length of the code for a value x depends on the frequency of that value in the data. For example, in our radar images the most frequent value is 0 (in units of 0.1 mm/h), so we should use a short bit-string to encode the value 0. On the contrary, the value 2000 (i.e. 200 mm/h) has a very low frequency, so a longer bit-string can be used for it. In this way, the average number of bits used to encode one value is reduced toward the entropy, which is the lower bound of compression. If a code yielded an ECL below the entropy, the compression would lose information.

In summary, the compression is optimal when the length of the codes follows the formula l(x) = -log2[f(x)].
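As a small check, the entropy of the example list *d* can be computed directly from the definition (a sketch; the helper name `entropy` is ours):

```python
import math
from collections import Counter

def entropy(data):
    """Shannon entropy H = -sum over values of f(x) * log2 f(x)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

d = [1, 1, 1, 2, 2, 2, 3, 3, 3, 3]
print(entropy(d))  # ~1.571 bits/value, below the 1.7 bits/value of Code 2
```

As the source coding theorem predicts, the 1.7 bits/value achieved by Code 2 above stays above this entropy value.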

**The Huffman Encoding**

The Huffman algorithm is one method of encoding data into bit-strings whose ECL approximates the lower bound of compression. It is a very simple algorithm that can be implemented in a few lines of Python. For details of the Huffman algorithm, you can refer to this link:

https://www.geeksforgeeks.org/huffman-coding-greedy-algo-3/
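To illustrate how few lines it takes, here is one possible sketch of the greedy merge (not the exact code used in our project): repeatedly combine the two least frequent subtrees, prepending ‘0’ to codes in one and ‘1’ to codes in the other.

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a Huffman code {value: bit-string} from value frequencies."""
    counts = Counter(data)
    if len(counts) == 1:                      # degenerate case: one distinct value
        return {x: '0' for x in counts}
    # Heap entries: (frequency, tiebreaker, {value: code-so-far}).
    heap = [(f, i, {x: ''}) for i, (x, f) in enumerate(counts.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)       # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {x: '0' + c for x, c in c1.items()}
        merged.update({x: '1' + c for x, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

d = [1, 1, 1, 2, 2, 2, 3, 3, 3, 3]
print(huffman_code(d))  # e.g. {3: '0', 1: '10', 2: '11'}; ties may swap codes
```

For *d*, the most frequent value 3 receives the shortest code, giving an ECL of 0.4·1 + 0.3·2 + 0.3·2 = 1.6 bits/value, closer to the entropy than Code 2.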

**Encoding and decoding a radar image**

As described above, a radar image contains 401×401 data points; each point is an integer taking a value from 0 to 2000.

By running the Huffman encoder, we obtained the codes shown in Figure 1.

Using these codes, the total number of bits for one specific image is 416,924 bits, which is much smaller than with the 11-bit codes (1,768,811 bits).

In actual work, we can convert an image to bits by concatenating the codes value by value, then writing everything to a binary file. As the computer groups every 8 bits into a byte, the binary file consumes 416,924 / 8 ≈ 52,116 bytes ≈ 52 KB.

To decode the binary file back into the image array, we load the file and transform it back to bits. Since the Huffman code satisfies the ‘prefix’ condition, by searching and mapping, the original data can always be reconstructed without losing any information.
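The round trip can be sketched as follows, using the small example code from earlier (the padding scheme shown here is one simple choice among several, not necessarily the one used in our project):

```python
def encode(values, code):
    """Concatenate codewords, then pack the bit-string into whole bytes."""
    bits = ''.join(code[v] for v in values)
    pad = (-len(bits)) % 8                  # zero-pad to a multiple of 8 bits
    bits += '0' * pad
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data, pad

def decode(data, pad, code):
    """Walk the bit-string; the prefix property makes each cut unambiguous."""
    bits = ''.join(f'{byte:08b}' for byte in data)
    if pad:
        bits = bits[:-pad]                  # drop the padding bits
    inverse = {c: v for v, c in code.items()}
    values, buf = [], ''
    for b in bits:
        buf += b
        if buf in inverse:                  # a complete codeword was read
            values.append(inverse[buf])
            buf = ''
    return values

code = {1: '0', 2: '10', 3: '11'}
d = [1, 1, 1, 2, 2, 2, 3, 3, 3, 3]
packed, pad = encode(d, code)
assert decode(packed, pad, code) == d       # lossless round trip
```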

If we write the image array in the NumPy format, the storage can be up to 1.3 MB per array. With Huffman compression, the storage is reduced to 52 KB.

If we use a ‘ZIP’ application to compress the NumPy file, the resulting zip file is only 48 KB, meaning that ‘ZIP’ outperforms our Huffman encoding. That is because ZIP uses another encoding algorithm (DEFLATE, which combines LZ77 dictionary compression with Huffman coding), allowing it to get closer to the lower bound of compression.
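As a quick illustration of this effect, Python's standard `zlib` module implements DEFLATE; on made-up data with long repeated runs (the kind a per-value Huffman code cannot exploit), it shrinks the input dramatically:

```python
import zlib

# Made-up byte sequence: a long run of zeros plus a repeated pattern,
# mimicking the skewed, repetitive structure of a rainfall array.
raw = bytes([0] * 900 + [1, 2, 3] * 30 + [7] * 10)
compressed = zlib.compress(raw, level=9)
print(len(raw), len(compressed))  # compressed size is far below 1000 bytes
```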

**Hiring Data Scientist / Engineer**

We are looking for Data Scientists and Engineers.

Please check our *Career Page*.

**Weather Data Science Project**

Please also check our *Weather Data Science Project*.

**Vietnam AI / Data Science Lab**

Please also visit *Vietnam AI Lab*