Shannon, Tsallis and Kaniadakis entropies in bi-level image thresholding

The maximum entropy principle is often used for the bi-level or multi-level thresholding of images. For this purpose, several methods based on Shannon and Tsallis entropies are available. In this paper, we discuss them and propose a method based on Kaniadakis entropy.


Introduction
The concept of entropy was born in thermodynamics and statistical mechanics. Shannon, in 1948, formulated it for the theory of information, obtaining the "information entropy". In an intuitive understanding of it [1], this entropy measures the amount of uncertainty about an event associated with a given probability distribution. In image processing, Shannon entropy was the first to be used. Today, it is the Tsallis formulation of entropy that seems to be preferred [2][3][4]. For the processing of images, these entropies are computed from the image histograms. For instance, in the bi-level segmentation of a gray-level image, a threshold is determined which separates the gray tones into two systems A and B, maximizing the entropy. Considering A and B as independent, the total entropy is obtained by composing the corresponding entropies of the two systems. In this paper, we discuss the use of Shannon and Tsallis entropies for image thresholding. Among the other formulations of entropy [5], we propose here a thresholding based on Kaniadakis entropy, a quite attractive entropy arising from the relativistic formulation of statistical mechanics [6,7]. Both Tsallis and Kaniadakis entropies have an entropic index. If these entropies are used for the bi-level thresholding of an image, the resulting black and white image depends on the value of the optimized threshold, which in turn depends on the entropic index. Here we compare the results obtained with Kaniadakis and Tsallis entropies, proposing a "measure" on the output image: in this "measure" we evaluate the number of edge pixels which separate black and white regions. After experiments on some images, we can conclude that the two entropies compare positively, yielding the same results; however, Kaniadakis entropy has the intuitive advantage of recovering the Shannon result when its entropic index goes to zero.

Images and information entropy
Let us consider an image of $N$ pixels with $G$ gray tones, having a given distribution of the pixels among the tones, which we call "the scene". Let $N_i$ be the number of pixels having tone $i$. Any given scene has a certain multiplicity $W$:

$$ W = \frac{N!}{N_1! \, N_2! \cdots N_G!} \qquad (1) $$

Equation (1) gives the number of ways in which we can generate the same scene [8]. Let us consider that each of the generated copies of the scene has the same probability, equal to $1/W$. In the case $N$ is large, we can apply the Stirling approximation, obtaining:

$$ S = \frac{1}{N} \ln W \cong -\sum_{i=1}^{G} f_i \ln f_i \,, \qquad f_i = \frac{N_i}{N} \qquad (2) $$

This is the Shannon entropy; it depends on the "objective" frequencies $f_i$ instead of the "subjective" probabilities $p_i$ [8].

Thresholding
For the bi-level thresholding of an image, let us follow the approach of [4]. Let us consider two independent systems A and B, for which the joint probability is the product of the probabilities of the two systems. The systems can be given as in the following [4]. A contains the pixels with gray tones $\{1, 2, \dots, t\}$, that is, with gray tone below or equal to a given threshold $t$; B contains the pixels with the remaining gray tones $\{t+1, \dots, G\}$. Let us suppose that each tone $i$ appears $N_i$ times, according to the frequency $f_i = N_i/N$, which for a gray-level image is given by the normalized histogram. Using the Stirling approximation again, the entropy of A is:

$$ S_A(t) = -\sum_{i=1}^{t} \frac{f_i}{f_A} \ln \frac{f_i}{f_A} \qquad (3) $$

where the cumulative frequencies of the two systems are:

$$ f_A = \sum_{i=1}^{t} f_i \,, \qquad f_B = \sum_{i=t+1}^{G} f_i = 1 - f_A \qquad (4) $$

For B:

$$ S_B(t) = -\sum_{i=t+1}^{G} \frac{f_i}{f_B} \ln \frac{f_i}{f_B} \qquad (5) $$

To find the best value of the threshold $t$, we have to maximize the total entropy:

$$ S(t) = S_A(t) + S_B(t) \qquad (6) $$
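The maximization of $S_A + S_B$ over all candidate thresholds can be sketched as follows (a minimal, unoptimized Python implementation; the names and conventions are ours: system A takes the tones below the threshold, and ties are resolved by the first maximizing value):

```python
import numpy as np

def max_entropy_threshold(hist):
    """Bi-level threshold maximizing S_A(t) + S_B(t) from the histogram."""
    f = np.asarray(hist, dtype=float)
    f = f / f.sum()                       # normalized frequencies f_i
    best_t, best_s = 0, -np.inf
    for t in range(1, len(f)):            # A = tones [0, t), B = [t, G)
        fA, fB = f[:t].sum(), f[t:].sum()
        if fA == 0 or fB == 0:            # skip degenerate splits
            continue
        pA = f[:t][f[:t] > 0] / fA        # conditional distribution of A
        pB = f[t:][f[t:] > 0] / fB        # conditional distribution of B
        s = -np.sum(pA * np.log(pA)) - np.sum(pB * np.log(pB))
        if s > best_s:
            best_t, best_s = t, s
    return best_t
```

For a clearly bimodal histogram, the maximizing threshold falls between the two modes, as expected.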

Bi-level thresholding with Tsallis and Kaniadakis entropies
Let us remember that the Tsallis entropy is given by [9,10]:

$$ S_q = \frac{1}{q-1} \left( 1 - \sum_{i=1}^{G} p_i^{\,q} \right) \qquad (7) $$

In fact, the Tsallis entropy is defined, using the q-logarithm $\ln_q(x) = (x^{1-q} - 1)/(1-q)$, as:

$$ S_q = \sum_{i=1}^{G} p_i \ln_q \frac{1}{p_i} \qquad (8) $$

Let us assume a bi-level threshold $t$ for the gray levels. In [3], two systems A and B had been introduced, with their probability distributions. Let us assume the properties of A and B as in the previous section. The Tsallis entropies, one for each distribution, are given by:

$$ S_q^A(t) = \frac{1}{q-1} \left( 1 - \sum_{i=1}^{t} \left( \frac{f_i}{f_A} \right)^{q} \right) , \qquad S_q^B(t) = \frac{1}{q-1} \left( 1 - \sum_{i=t+1}^{G} \left( \frac{f_i}{f_B} \right)^{q} \right) \qquad (9) $$

Taking the limit $q \to 1$, the Tsallis entropy gives Shannon's entropy. The total Tsallis entropy is given by the generalized sum:

$$ S_q(A+B) = S_q^A + S_q^B + (1-q)\, S_q^A \, S_q^B \qquad (10) $$

In a further generalization of statistical mechanics, a deformed entropy had been proposed, the Kaniadakis entropy, also known as κ-entropy [6,7]:

$$ S_\kappa = -\sum_{i=1}^{G} \frac{p_i^{\,1+\kappa} - p_i^{\,1-\kappa}}{2\kappa} \qquad (11) $$

This entropy has the remarkable property of recovering the behavior of the Shannon entropy in the limit $\kappa \to 0$. It can be written as $S_\kappa = -\sum_i p_i \ln_\kappa p_i$, using the generalized version of the logarithm [7]:

$$ \ln_\kappa(x) = \frac{x^{\kappa} - x^{-\kappa}}{2\kappa} \qquad (12) $$

We can apply this entropy to the bi-level thresholding. Let us call the threshold τ. Generalizing (3) and (5), the κ-entropies of the two systems are:

$$ S_\kappa^A(\tau) = -\sum_{i=1}^{\tau} \frac{(f_i/f_A)^{1+\kappa} - (f_i/f_A)^{1-\kappa}}{2\kappa} \,, \qquad S_\kappa^B(\tau) = -\sum_{i=\tau+1}^{G} \frac{(f_i/f_B)^{1+\kappa} - (f_i/f_B)^{1-\kappa}}{2\kappa} \qquad (13) $$

Let us consider the composition of the systems A and B, but in the framework of this deformed statistics. According to [11], the generalized sum is:

$$ S_\kappa(A+B) = S_\kappa(A)\, I_\kappa(B) + I_\kappa(A)\, S_\kappa(B) \qquad (14) $$

In (14) we have:

$$ I_\kappa(A) = \sum_{i} \frac{p_{A,i}^{\,1+\kappa} + p_{A,i}^{\,1-\kappa}}{2} \,, \qquad I_\kappa(B) = \sum_{i} \frac{p_{B,i}^{\,1+\kappa} + p_{B,i}^{\,1-\kappa}}{2} \qquad (15) $$

For the thresholding, using the frequencies of the two systems, the total κ-entropy to be maximized is therefore:

$$ S_\kappa(\tau) = S_\kappa^A(\tau)\, I_\kappa^B(\tau) + I_\kappa^A(\tau)\, S_\kappa^B(\tau) \qquad (16) $$

When entropy (10) or (16) is maximized, the corresponding gray level $t$ or $\tau$ is considered the optimum threshold value. In the gray bi-level thresholding, we have as a result a processed black and white image. The output image is created as in the following: pixels having a gray tone larger than the threshold become white; pixels having a lower value become black.
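Under the same conventions as before, the Tsallis and Kaniadakis criteria can be sketched as follows (a minimal Python implementation; the function names are ours, and the composition laws used are the generalized sums discussed in this section):

```python
import numpy as np

def tsallis_threshold(hist, q):
    """Threshold maximizing the Tsallis generalized sum
    S_q(A) + S_q(B) + (1 - q) * S_q(A) * S_q(B)."""
    f = np.asarray(hist, dtype=float)
    f = f / f.sum()
    def Sq(p):
        p = p[p > 0]
        return (1.0 - np.sum(p ** q)) / (q - 1.0)
    best_t, best = 0, -np.inf
    for t in range(1, len(f)):
        fA, fB = f[:t].sum(), f[t:].sum()
        if fA == 0 or fB == 0:
            continue
        sA, sB = Sq(f[:t] / fA), Sq(f[t:] / fB)
        s = sA + sB + (1.0 - q) * sA * sB
        if s > best:
            best_t, best = t, s
    return best_t

def kaniadakis_threshold(hist, kappa):
    """Threshold maximizing the kappa-composition
    S_k(A) * I_k(B) + I_k(A) * S_k(B)."""
    f = np.asarray(hist, dtype=float)
    f = f / f.sum()
    def Sk(p):
        p = p[p > 0]
        return -np.sum((p**(1 + kappa) - p**(1 - kappa)) / (2 * kappa))
    def Ik(p):
        p = p[p > 0]
        return np.sum((p**(1 + kappa) + p**(1 - kappa)) / 2)
    best_t, best = 0, -np.inf
    for t in range(1, len(f)):
        fA, fB = f[:t].sum(), f[t:].sum()
        if fA == 0 or fB == 0:
            continue
        pA, pB = f[:t] / fA, f[t:] / fB
        s = Sk(pA) * Ik(pB) + Ik(pA) * Sk(pB)
        if s > best:
            best_t, best = t, s
    return best_t
```

The entropic indices $q$ and $\kappa$ are free parameters here; as discussed below, different values can yield different thresholds.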

More on the limit of the Kaniadakis entropy
In fact, we have that:

$$ \ln_\kappa(x) = \frac{x^{\kappa} - x^{-\kappa}}{2\kappa} = \frac{e^{\kappa \ln x} - e^{-\kappa \ln x}}{2\kappa} = \frac{\sinh(\kappa \ln x)}{\kappa} \;\longrightarrow\; \ln x \quad \text{for } \kappa \to 0 $$

In the limit $\kappa \to 0$, the Kaniadakis entropy becomes the Shannon entropy, and therefore we must recover the ordinary additivity in the composition law (14). And in fact:

$$ \lim_{\kappa \to 0} I_\kappa(A) = \lim_{\kappa \to 0} \sum_{i} \frac{p_{A,i}^{\,1+\kappa} + p_{A,i}^{\,1-\kappa}}{2} = \sum_{i} p_{A,i} = 1 $$

We have the same for B, so that (14) reduces to $S(A+B) = S(A) + S(B)$.
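This limit can also be checked numerically. The following sketch (with an arbitrary test distribution, our assumption for the check) shows the κ-entropy approaching the Shannon value and the term $I_\kappa$ approaching 1 as κ decreases:

```python
import numpy as np

# an arbitrary normalized test distribution (assumption for this check)
p = np.array([0.1, 0.2, 0.3, 0.4])
shannon = -np.sum(p * np.log(p))

for kappa in (0.1, 0.01, 0.001):
    # kappa-entropy S_k and the composition term I_k of the distribution
    s_k = -np.sum((p**(1 + kappa) - p**(1 - kappa)) / (2 * kappa))
    i_k = np.sum((p**(1 + kappa) + p**(1 - kappa)) / 2)
    print(kappa, s_k, i_k)   # s_k -> shannon, i_k -> 1
```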

Discussion
Let us note that both Tsallis and Kaniadakis entropies have entropic indices that can give different results when applied to the same sample. To choose among these several results and define an output image, we propose a "measure" of the bi-level image, given by the number of edge pixels between black and white regions. Of course, other measures can be defined. Figures 1-3 give the original images and the bi-level images. The corresponding Tables report the results of maximizing the Tsallis and Kaniadakis entropies, according to (10) and (16). In the experiments, the entropic indices span the interval (0,1); let us avoid, in the calculations, the values 0 and 1. We can see that the two entropies are able to provide the same results. We have also, in the case of Figure 3 and Table VI, the evidence of an "image transition" (see Ref. [12] for more details). Besides the fact that the Kaniadakis entropy possesses a formalism closer to that of the Shannon entropy, a good reason for preferring the κ-entropy is its more intuitive behavior: it recovers the Shannon entropy when its entropic index goes to zero. Another relevant advantage of the Kaniadakis entropy is in the evaluation of the multi-level thresholding of images. This will be discussed in a future paper.

Table I: Optimized thresholds on Lena, using Tsallis and Kaniadakis entropies, for several values of the entropic indices. In the limit q → 1, the Tsallis entropy provides the Shannon result, and for κ → 0, the Kaniadakis entropy becomes the Shannon entropy. When the image is segmented in a bi-level black and white image according to the given threshold, the number of edge pixels between black and white regions is calculated. If we assume that the "best" bi-level image is that having the largest number of edge pixels, the threshold to choose is 120, from both Tsallis and κ-entropy.

Table II: Optimized thresholds for Cameraman.
The "best" bi-level image is that having the largest number of edge pixels; the threshold to choose is 175 from the Tsallis entropy and 176 from the κ-entropy.

Figure 2: The "best" bi-level image is that having the largest number of edge pixels; the threshold to choose is 150 from both Tsallis and κ-entropy.
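The edge-pixel "measure" used in the Tables can be implemented in several ways; one plausible sketch (our convention: 4-connectivity, and a pixel counts as an edge pixel when at least one neighbor has the other tone):

```python
import numpy as np

def edge_pixel_count(binary):
    """Count the pixels that border at least one pixel of the other
    tone (4-connectivity) in a bi-level black and white image."""
    b = np.asarray(binary, dtype=bool)
    edge = np.zeros_like(b)
    edge[:, 1:]  |= b[:, 1:]  != b[:, :-1]   # left neighbor differs
    edge[:, :-1] |= b[:, :-1] != b[:, 1:]    # right neighbor differs
    edge[1:, :]  |= b[1:, :]  != b[:-1, :]   # upper neighbor differs
    edge[:-1, :] |= b[:-1, :] != b[1:, :]    # lower neighbor differs
    return int(edge.sum())
```

For example, a 4x4 image split into a black half and a white half has 8 edge pixels under this convention: the two columns on either side of the boundary.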