
In this paper, we show how a new version of the parameter U in the Jacod decomposition changes the expression of the entropy-Hellinger process of order one, order q and order zero, and consequently the equation of the minimal entropy-Hellinger sigma martingale density for each order. This is because even the measurable function W, which is an important parameter of the equation of the minimal martingale density, changes. In order to obtain the required parameter, we introduce the function m = f t − 1 during the calculation for all orders. The result for order zero is different, because there we fail to obtain an equation of the minimal entropy-Hellinger sigma martingale density.

The concept of the Hellinger process arises in the part of probability theory that addresses the notion of distance between two probability measures [

In the research papers [

This study gives another expression of the entropy-Hellinger process for a positive sigma martingale of order one, order q and order zero when N = β ⋅ S c + W ⋆ ( μ − ν ) + g ⋆ μ + N ′ and W = f − 1 + ( f̂ − a ) / ( 1 − a ) 1 { a < 1 }. Further, we find an equation of the minimal entropy-Hellinger sigma martingale density of order one, order q and order zero.

Methods of Equivalent Martingale Measure

There has been increasing interest in quantitative approaches to portfolio optimization and to the pricing and hedging of contingent claims since the emergence of modern finance. Since martingale methods were introduced by [

However, in the situation of incomplete markets the case is more involved, because one faces both a mathematical and a conceptual problem. Mathematically, the use of martingale methods is complicated by the fact that there are infinitely many equivalent martingale measures. The question an investor faces in this case is which equivalent martingale measure to choose. Thus many research works have proposed methods for choosing an appropriate equivalent martingale measure.

The first class of proposed methods deals with some sort of distance minimization between an equivalent martingale measure and the physical probability measure. The appropriate martingale measure is the one with the smallest distance among all martingale measures, where the notion of smallest distance depends on the optimization criterion imposed on the set of all equivalent martingale measures. For instance, the minimal martingale measure proposed by [

The second class of methods consists of utility-based pricing, which we may call an equilibrium-based pricing approach. This method is investigated by [

For that reason, a dynamic version of these works was proposed by [

Within this dynamic framework, another proposed method is known as the entropy-Hellinger process. This method is expressed in terms of the jump of the local martingale in the Jacod decomposition [

The concept of the minimal entropy-Hellinger sigma martingale density was introduced by [

This paper contributes to the existing literature by showing how the entropy-Hellinger process for a positive sigma martingale of order one, order q and order zero is modified when we have another version of its important parameter U. We also prove that the description of the minimal entropy-Hellinger sigma martingale density of each order changes when we solve the minimization problems based on those entropy-Hellinger processes.

The rest of the paper is organized as follows. Section 2 derives an expression of the entropy-Hellinger process for a positive sigma martingale of order one, followed in Section 3 by a solution of the minimization problem based on that process. Section 4 provides the corresponding expression of order q, followed in Section 5 by a solution of the associated minimization problem. Section 6 discusses the expression of the entropy-Hellinger process for a positive sigma martingale of order zero, and Section 7 provides a solution of the minimization problem based on it.

According to [

According to [

∫ | x ( 1 + U ( x ) ) − h ( x ) | F ( d x ) < ∞ (1)

b ⋅ A + c β ⋅ A + ( x − h ( x ) + x U ( x ) ) ⋆ ν = 0 (2)

Furthermore, if Z is a σ-martingale density for ( S , P ) then the following holds

∫ x ( 1 + U ( x ) ) F ( d x ) Δ A = 0 (3)

According to [

N = β ⋅ S c + W ⋆ ( μ − ν ) + g ⋆ μ + N ′ where W = U + Û / ( 1 − a ) (4)

The above Equation (4) is called the Jacod decomposition with parameters ( β , U , g , N ′ ) .

According to [

Δ [ W ⋆ ( μ − ν ) ] t = W̄ t ( w ) where W̄ t ( w ) = W t ( w , Δ S t ) 1 { Δ S t ≠ 0 } − Ŵ t ( w ) (5)

Also according to [

Δ ( g ⋆ μ ) t = g t ( Δ S t , w ) 1 { Δ S t ≠ 0 } (6)

Therefore, from Equations (4), (5) and (6), we can build the equation for the jump of the Jacod decomposition N as follows

Δ N t = ( U t ( Δ S t ) + g t ( Δ S t ) ) 1 { Δ S t ≠ 0 } + ( Δ N ′ t − Û t / ( 1 − a t ) ) 1 { Δ S t = 0 } (7)

According to [

V t E ( N ) = 1 2 〈 N c 〉 t + ∑ 0 < s ≤ t [ ( 1 + Δ N s ) log ( 1 + Δ N s ) − Δ N s ] (8)

is locally integrable (i.e. V E ( N ) ∈ A l o c + ( P ) ), then its compensator (with respect to the probability P) is called the entropy-Hellinger process of N, denoted by h t E ( N , P ) .
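As a quick sanity check of definition (8) (a numerical illustration, not part of the paper), each summand ( 1 + Δ N ) log ( 1 + Δ N ) − Δ N is nonnegative, so the jump part of V E ( N ) is nondecreasing, which is what makes the compensator in the definition well defined:

```python
import math

def entropy_summand(dn):
    """One term of the sum in Eq. (8): (1 + dN) log(1 + dN) - dN, for dN > -1."""
    assert dn > -1, "definition (8) requires 1 + Delta N > 0"
    return (1 + dn) * math.log(1 + dn) - dn

# arbitrary illustrative jumps of a local martingale N with Delta N > -1
jumps = [0.4, -0.3, 0.05, -0.6, 1.2]

partial_sums = []
total = 0.0
for dn in jumps:
    total += entropy_summand(dn)
    partial_sums.append(total)

# each summand is >= 0 (the function x -> (1+x)log(1+x) - x is convex with
# minimum 0 at x = 0), so the jump part of V^E(N) is nondecreasing in t
assert all(s >= 0 for s in partial_sums)
assert partial_sums == sorted(partial_sums)
```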

The entropy-Hellinger process of N with respect to the probability measure P, denoted h t E ( N , P ), coincides with that of the sigma density Z ∈ Z l o c e, denoted h t E ( Z , P ), and with that of the equivalent probability measure Q ∈ ℙ a, denoted h t E ( Q , P ).

According to [

If Δ N > − 1 then also U > − 1. If we have a new version U = f − 1, then this gives W = f − 1 + ( f̂ − a ) / ( 1 − a ) 1 { a < 1 }.

Therefore we are going to have the following equation of jump of Jacod decomposition:

Δ N = ( f + g − 1 ) 1 { Δ S ≠ 0 } + ( Δ N ′ − ( f̂ − a ) / ( 1 − a ) ) 1 { Δ S = 0 } (9)

Proposition 1 (The entropy-Hellinger process of order 1)

The Hellinger process of order 1 for N when W = f − 1 + ( f̂ − a ) / ( 1 − a ) 1 { a < 1 } is equal to

h E ( Z , P ) = ( 1 / 2 ) β T c β ⋅ A + [ f ln f − ( f − 1 ) ] ⋆ ν + ∑ 0 < s ≤ t [ ( 1 − f̂ ) ln ( ( 1 − f̂ ) / ( 1 − a ) ) + ( f̂ − a ) ] + f M μ P ( ( 1 + g / f ) ln ( 1 + g / f ) − g / f | P ˜ ) ⋆ ν + K (10)

Proof

To prove the Equation (10) we are going to apply the same method used in [

From (4) and (9), let us define ḡ = ( g / f ) 1 { f > 0 }

Then we are going to have the following assertions

g 1 { f = 0 } = 0 , Δ N ′ 1 { f̂ − a = 1 − a } = 0 (11)

From Equation (8)

1 + Δ N = ( f + g ) 1 { Δ S ≠ 0 } + ( 1 + Δ N ′ − f ^ − a 1 − a ) 1 { Δ S = 0 } (12)

Then

f + g > 0 , 1 + Δ N ′ − ( f̂ − a ) / ( 1 − a ) > 0

By taking the conditional expectation under M μ P with respect to P ˜ in the first inequality and the predictable projection in the second, we obtain

f > 0 , 1 − ( f̂ − a ) / ( 1 − a ) > 0

From Equation (12)

∑ 0 < s ≤ t [ ( 1 + Δ N s ) ln ( 1 + Δ N s ) − Δ N s ] = [ ( f + g ) ln ( f + g ) − ( f + g − 1 ) ] ⋆ μ + ∑ [ ( 1 − f ^ − a 1 − a + Δ N ) ln ( 1 − f ^ − a 1 − a + Δ N ) + f ^ − a 1 − a − Δ N ′ ] 1 { Δ S = 0 } (13)

Thanks to the assertions above on the Equation (11).

V ( N , P ) = 1 2 β T c β ⋅ A + [ f ln f − ( f − 1 ) ] ⋆ μ + ∑ 0 < s ≤ t [ ( 1 − f ^ − a 1 − a ) ln ( 1 − f ^ − a 1 − a ) + f ^ − a 1 − a ] 1 Δ S = 0 + f [ ( 1 + g f ) ln ( 1 + g f ) − g f ] ⋆ μ + 1 2 〈 N ′ t c 〉 + ∑ 0 < s ≤ t [ ( 1 − f ^ − a 1 − a ) ( ( 1 + Δ N ′ 1 − f ^ − a 1 − a ) ln ( 1 + Δ N ′ 1 − f ^ − a 1 − a ) − Δ N ′ 1 − f ^ − a 1 − a ) ] + ( g ln f ) ⋆ μ + ∑ 0 < s ≤ t ( 1 − f ^ − a 1 − a ) Δ N ′ 1 { Δ S = 0 } (14)

The dual predictable projections of the last two terms on the RHS of Equation (14) vanish. We are left with

V ( N , P ) = 1 2 β T c β ⋅ A + [ f ln f − ( f − 1 ) ] ⋆ μ + ∑ 0 < s ≤ t [ ( 1 − f ^ − a 1 − a ) ln ( 1 − f ^ − a 1 − a ) + f ^ − a 1 − a ] 1 Δ S = 0 + f [ ( 1 + g f ) ln ( 1 + g f ) − g f ] ⋆ μ + 1 2 〈 N ′ t c 〉 + ∑ 0 < s ≤ t [ ( 1 − f ^ − a 1 − a ) ( ( 1 + Δ N ′ 1 − f ^ − a 1 − a ) ln ( 1 + Δ N ′ 1 − f ^ − a 1 − a ) − Δ N ′ 1 − f ^ − a 1 − a ) ] (15)

Taking the dual predictable projection, we obtain the following equation

h E ( Z , P ) = 1 2 β T c β ⋅ A + [ f ln f − ( f − 1 ) ] ⋆ ν + ∑ 0 < s ≤ t [ ( 1 − f ^ − a 1 − a ) ln ( 1 − f ^ − a 1 − a ) + f ^ − a 1 − a ] ( 1 − a ) + f M μ P ( ( 1 + g f ) ln ( 1 + g f ) − g f | P ˜ ) ⋆ ν + K (16)

where K is the dual predictable projection of

1 2 〈 N ′ t c 〉 + ∑ 0 < s ≤ t [ ( 1 − f ^ − a 1 − a ) ( ( 1 + Δ N ′ 1 − f ^ − a 1 − a ) ln ( 1 + Δ N ′ 1 − f ^ − a 1 − a ) − Δ N ′ 1 − f ^ − a 1 − a ) ] .

Then the required entropy-Hellinger process is

h E ( Z , P ) = ( 1 / 2 ) β T c β ⋅ A + [ f ln f − ( f − 1 ) ] ⋆ ν + ∑ 0 < s ≤ t [ ( 1 − f̂ ) ln ( ( 1 − f̂ ) / ( 1 − a ) ) + ( f̂ − a ) ] + f M μ P ( ( 1 + g / f ) ln ( 1 + g / f ) − g / f | P ˜ ) ⋆ ν + K (17)

which is the same as (10).

From Equation (4), if we set N 1 = β ⋅ S c + W ⋆ ( μ − ν ) and Z 1 = E ( N 1 ), then if Z S is a σ-martingale, Z 1 S is also a σ-martingale.

Since we are going to focus on σ-martingales with finite entropy, the set of these measures is given by the following sets ( [

M f e ( S ) = { Q ∈ ℙ e | S ∈ M σ ( Q ) and E [ d Q d P log ( d Q d P ) ] < + ∞ } (18)

Z l o c e ( S ) = { Z ∈ M l o c ( P ) | Z > 0, Z log ( Z ) is locally integrable , Z S ∈ M σ ( P ) } (19)

The minimization problem min Z ∈ Z e , l o c h E ( Z , P ) is equivalent to minimizing the Hellinger process over the set of densities that have the predictable representation with g = 0 and N ′ = 0.

Therefore we are going to minimize the following entropy-Hellinger process.

h E ( Z , P ) = ( 1 / 2 ) β T c β ⋅ A + [ f ln f − ( f − 1 ) ] ⋆ ν + ∑ 0 < s ≤ t [ ( 1 − f̂ ) ln ( ( 1 − f̂ ) / ( 1 − a ) ) + ( f̂ − a ) ] (20)

Theorem 2 According to [

∫ | x | > 1 | x | exp ( λ T x ) F ( d x ) < + ∞

and the solution to

min Z ∈ Z a , l o c h E ( Z , P )

exists and is given by

Z ˜ = E ( N ) > 0 N ˜ t = β ˜ ⋅ S t c + W ˜ ⋆ ( μ − ν )

where

W ˜ t ( x ) = ( exp ( λ ˜ t T x ) − 1 ) / ( 1 − a t + ∫ exp ( λ ˜ t T x ) ν ( { t } × d x ) ) , β ˜ t = λ ˜ t 1 { Δ A t = 0 } (21)

where λ ˜ t 1 { Δ A t ≠ 0 } is a root of

∫ x exp ( λ x ) ν ( w ; { t } × d x ) = 0

while λ ˜ t 1 { Δ A t = 0 } is a root for

0 = G E ( w , t , λ ) = b t ( w ) + c t ( w ) λ + ∫ ( e^{ λ x } x − h ( x ) ) F t ( w , d x )

Since we know that Z = E ( N ) = E ( β ⋅ S c + W ⋆ ( μ − ν ) ), we are going to minimize the following expression

min ( f , β ) ( ( 1 / 2 ) β T c β ⋅ A + [ f ln f − ( f − 1 ) ] ⋆ ν + ∑ 0 < s ≤ t [ ( 1 − f̂ ) ln ( ( 1 − f̂ ) / ( 1 − a ) ) + ( f̂ − a ) ] ) (22)

By distinguishing the cases where Δ A = 0 and the case where Δ A ≠ 0 , this problem can be split into the following two minimization problems. The first problem ( Δ A = 0 ) is defined by

∫ ( 1 / 2 ) β T c β d A + ∫ [ f ln f − ( f − 1 ) ] F t ( d x ) d A (23)

where the minimization is over all couples ( β , f ) satisfying

b + c β + ∫ ( x − h ( x ) + x ( f − 1 ) ) F t ( d x ) = 0 (24)

The second problem ( Δ A ≠ 0 ) is defined as follows

∫ [ f ln f − ( f − 1 ) ] ν t ( d x ) + ∑ 0 < s ≤ t [ ( 1 − f̂ ) ln ( ( 1 − f̂ ) / ( 1 − a ) ) + ( f̂ − a ) ] (25)

where the minimization is over the functional f such that

∫ x f ν t ( d x ) = 0 (26)

Conditions (24) and (26) correspond to the conditions given in the above theorem and to the known conditions for a local martingale to be a sigma martingale density, Equations (2) and (3).

The Euler-Lagrange equation of the first problem ( Δ A = 0 ) combines Equations (23) and (24):

L ( β , f ) = ∫ ( 1 / 2 ) β T c β d A + ∫ [ f ln f − ( f − 1 ) ] F t ( d x ) d A − λ ( ∫ b d A + ∫ c β d A + ∫ ( x − h ( x ) + x ( f − 1 ) ) F t ( d x ) d A )

L ( β , f ) = ( 1 / 2 ) β T c β + ∫ [ f ln f − ( f − 1 ) ] F t ( d x ) − λ ( b + c β + ∫ ( x − h ( x ) + x ( f − 1 ) ) F t ( d x ) )

The stationarity conditions are

d β : β = λ , d f : f = e^{ λ x } (27)

Since our W = f − 1 + ( f̂ − a ) / ( 1 − a ), we must introduce a new function m = f t − 1 ∈ P ˜.

Therefore, for the first problem we have m = f t − 1 = e^{ λ x } − 1.

Therefore the description of β ˜ and f ˜ is completely established

β ˜ = λ ˜ 1 { Δ A t = 0 } , m 1 { Δ A t = 0 } = f t 1 { Δ A t = 0 } − 1 = ( exp ( λ T x ) − 1 ) 1 { Δ A t = 0 } (28)

where λ is the root of b + c λ + ∫ ( e^{ λ T x } x − h ( x ) ) F t ( d x ) = 0
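The root equation for λ on { Δ A = 0 } can be sketched numerically. The two-atom jump measure, the drift b and the coefficient c below are illustrative assumptions, not data from the paper; bisection works because G is increasing in λ:

```python
import math

# Toy specification (an assumption, for illustration only): jump measure F with
# two atoms x = -0.5, +0.5 of mass 1, truncation h(x) = x on |x| <= 1,
# drift b = 0.1, diffusion coefficient c = 1.
atoms = [(-0.5, 1.0), (0.5, 1.0)]
b, c = 0.1, 1.0

def G(lam):
    # G^E(lambda) = b + c*lambda + integral (e^{lambda x} x - h(x)) F(dx)
    return b + c * lam + sum(w * (math.exp(lam * x) * x - x) for x, w in atoms)

def bisect(g, lo, hi, tol=1e-12):
    # G is strictly increasing in lambda, so a sign change brackets the root
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

lam = bisect(G, -10.0, 10.0)
beta = lam                                     # Eq. (28): beta~ = lambda
f = {x: math.exp(lam * x) for x, _ in atoms}   # Eq. (27): f = e^{lambda x}
assert abs(G(lam)) < 1e-9
```

Since G(0) = b > 0 here, the root is negative; the minimal density tilts the jump measure downward to offset the positive drift.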

The Euler-Lagrange equation of the second problem ( Δ A ≠ 0 ) combines Equations (25) and (26):

L ( f , ϕ , λ , α ) = ∫ [ f ln f − ( f − 1 ) ] ν t ( d x ) + ( 1 − a − ϕ ) ln ( 1 − ϕ / ( 1 − a ) ) + ϕ − λ ∫ x f ν t ( d x ) − α ( ∫ f ν t ( d x ) − a − ϕ )

d f : ln f = λ x + α , f = e^{ λ x + α } (29)

d ϕ : ln ( 1 − ϕ / ( 1 − a ) ) = α (30)

d λ : ∫ x f ν t ( d x ) = 0 (31)

d α : ∫ f ν t ( d x ) − a − ϕ = 0 (32)

Substitute Equations (29) into (31)

∫ e^{ λ x + α } x ν t ( d x ) = 0 ⟹ ∫ e^{ λ x } x ν t ( d x ) = 0 (33)

Equation (33) is one of the conditions for Z to be a σ-martingale density, Equation (3).

From Equation (30) we have

ϕ = ( 1 − a ) ( 1 − e α ) (34)

Substitute Equations (29) and (34) into Equation (32).

e^{ α } = 1 / ( 1 − a + ∫ e^{ λ x } ν t ( d x ) ) (35)

Substitute Equation (35) into Equation (29)

f t ( x ) = e^{ λ x } / ( 1 − a + ∫ e^{ λ x } ν t ( d x ) ) (36)

If we multiply both sides of Equation (36) by ν t ( d x ) and integrate, we are going to have

∫ f t ( x ) ν t ( d x ) = ∫ e^{ λ x } ν t ( d x ) / ( 1 − a + ∫ e^{ λ x } ν t ( d x ) ) (37)

If we add Equations (36) and (37) we are going to have

f t ( x ) + f̂ t = ( e^{ λ x } + ∫ e^{ λ x } ν t ( d x ) ) / ( 1 − a + ∫ e^{ λ x } ν t ( d x ) ) (38)
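Equations (36) and (37) can be checked numerically. The two-atom measure ν t below is an illustrative assumption; for it, the root λ of ∫ e^{ λ x } x ν t ( d x ) = 0 has a closed form:

```python
import math

# Toy discrete nu_t (an assumption): atoms x = -0.5, +0.5 with masses 0.3, 0.2,
# so a = nu_t(R) = 0.5 < 1.
nu = [(-0.5, 0.3), (0.5, 0.2)]
a = sum(w for _, w in nu)

# int x e^{lam x} nu(dx) = 0  <=>  0.2*0.5*e^{0.5 lam} = 0.3*0.5*e^{-0.5 lam},
# which gives e^{lam} = 0.3/0.2 in closed form for this two-atom measure
lam = math.log(0.3 / 0.2)

denom = 1 - a + sum(w * math.exp(lam * x) for x, w in nu)
f = {x: math.exp(lam * x) / denom for x, _ in nu}   # Eq. (36)

# sigma-martingale condition (33): int x f nu(dx) = 0
assert abs(sum(w * x * f[x] for x, w in nu)) < 1e-12
# identity (37): int f nu(dx) = int e^{lam x} nu(dx) / denom
lhs = sum(w * f[x] for x, w in nu)
rhs = sum(w * math.exp(lam * x) for x, w in nu) / denom
assert abs(lhs - rhs) < 1e-12
```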

The LHS of Equation (38), f t ( x ) + f̂ t , corresponds to a function W = f + f̂ which is not equal to W = f − 1 + ( f̂ − a ) / ( 1 − a ). This function W = f + f̂ gives W ˜ = ( e^{ λ x } + ∫ e^{ λ x } ν t ( d x ) ) / ( 1 − a + ∫ e^{ λ x } ν t ( d x ) ).

Since our W = f − 1 + ( f̂ − a ) / ( 1 − a ), in order to get the required W and W ˜ for the minimal entropy-Hellinger sigma martingale density, we must introduce a new function m = f t − 1 ∈ P ˜.

m = f t − 1 = e^{ λ x } / ( 1 − a + ∫ e^{ λ x } ν t ( d x ) ) − 1 (39)

This completes the second problem.

We can summarize Equations (28) and (39) as follows

m = f t − 1 = e^{ λ x } / ( 1 − a + ∫ e^{ λ x } ν t ( d x ) ) − 1 (40)

Multiplying both sides of Equation (40) by ν t ( d x ) and integrating, we get

( ∫ f t ( x ) ν t ( d x ) − a ) / ( 1 − a ) = ( ∫ e^{ λ x } ν t ( d x ) − a ) / ( 1 − a + ∫ e^{ λ x } ν t ( d x ) ) (41)

By adding Equation (40) and Equation (41), LHS to LHS and RHS to RHS, we get the required minimal entropy-Hellinger sigma martingale density

Z ˜ = E ( N ˜ ) , N ˜ = β ˜ ⋅ S c + W ˜ ⋆ ( μ − ν ) , W ˜ t ( x ) = ( γ ˜ t )^{ − 1 } ( e^{ γ ˜ t T x } − 1 ) , γ ˜ t = 1 − a t + ∫ e^{ γ ˜ t T x } ν ( { t } × d x ) (42)

which is the same as Equation (21).

The measurable function W in the Jacod decomposition depends on U ∈ P ˜. We proved that when we have another version of U, it changes the measurable function W and the expression of the entropy-Hellinger process, as well as W ˜. In order to get the measurable function W ˜ for the equation of the minimal entropy-Hellinger sigma martingale density, we introduce the function m = f t − 1 ∈ P ˜ and get

W ˜ t ( x ) = ( e^{ γ ˜ t T x } − 1 ) / ( 1 − a t + ∫ e^{ γ ˜ t T x } ν ( { t } × d x ) ) .

According to [

V t q ( N ) = 1 2 〈 N c 〉 t + ∑ 0 < s ≤ t ϕ q ( Δ N s ) , 0 ≤ t ≤ T (43)

is locally integrable, then the Hellinger process H ( q ) ( N , P ) of order q ≠ 0 is the dual predictable projection of V t q ( N ) with respect to P.

The function ϕ q ( x ) in Equation (43) is defined as follows

ϕ q ( x ) = ( ( 1 + x )^q − 1 − q x ) / ( q ( q − 1 ) ) if x > − 1 and q ∉ { 0 , 1 } ; + ∞ otherwise (44)
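The family ϕ q interpolates the order-one function ( 1 + x ) log ( 1 + x ) − x (as q → 1) and the order-zero function x − log ( 1 + x ) (as q → 0), which can be checked numerically; the sketch below is an illustration, not part of the paper:

```python
import math

def phi_q(x, q):
    """Integrand phi_q of Eq. (44); +infinity outside its domain."""
    if x <= -1 or q in (0, 1):
        return math.inf
    return ((1 + x) ** q - 1 - q * x) / (q * (q - 1))

def phi_1(x):
    # order-one entropy function (1 + x) log(1 + x) - x, as in Eq. (8)
    return (1 + x) * math.log(1 + x) - x

def phi_0(x):
    # order-zero function x - log(1 + x), used later for q = 0
    return x - math.log(1 + x)

# phi_q converges to phi_1 as q -> 1 and to phi_0 as q -> 0 (L'Hopital)
x = 0.5
assert abs(phi_q(x, 1 + 1e-6) - phi_1(x)) < 1e-4
assert abs(phi_q(x, 1e-6) - phi_0(x)) < 1e-4
assert phi_q(-1.5, 2) == math.inf   # outside the domain x > -1
```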

The set of sigma martingale densities which we are interested in is:

Z l o c e ( S ) = { Z = E ( N ) ≥ 0 | N ∈ M l o c ( P ) : Z > 0 Z log Z locally integrable , Z S ∈ M σ ( P ) , ϕ q ( Δ N ) ∈ L l o c 1 }

As we have seen for the entropy-Hellinger process of order one, the entropy-Hellinger process of N with respect to the probability measure P, denoted h ( q ) ( N , P ), coincides with that of the sigma density Z ∈ Z l o c e ( q ), denoted h t ( q ) ( Z , P ), and with that of the equivalent probability measure Q ∈ ℙ a, denoted h t ( q ) ( Q , P ).

Proposition 3 (The entropy Hellinger process of order q)

The Hellinger process of order q for N when W = f − 1 + ( f̂ − a ) / ( 1 − a ) 1 { a < 1 } is equal to

h ( q ) ( N , P ) = ( 1 / 2 ) β T c β ⋅ A + ∫ 0 T ∫ ℝ ϕ q ( f − 1 ) F t ( d x ) d A + ∑ 0 < s ≤ t ( 1 − a ) ϕ q ( − ( f̂ − a ) / ( 1 − a ) ) + f^q M μ P ( ϕ q ( g / f ) | P ˜ ) ⋆ ν + M (45)

Proof

We are going to apply the same method used in [

ϕ q ( Δ N ) = ϕ q ( f + g − 1 ) 1 { Δ S ≠ 0 } + ϕ q ( Δ N ′ − ( f̂ − a ) / ( 1 − a ) ) 1 { Δ S = 0 } (46)

Since 1 + Δ N t > 0 , we derive

f + g > 0 , 1 + Δ N ′ − ( f̂ − a ) / ( 1 − a ) > 0

Taking the conditional expectation under M μ P with respect to P ˜ in the first inequality and a predictable projection in the second one we are going to have

f > 0 , 1 − ( f̂ − a ) / ( 1 − a ) > 0 , g 1 { f = 0 } = 0 , Δ N ′ 1 { 1 − a = f̂ − a } = 0

As from [

ϕ q ( f + g − 1 ) = ϕ q ( f − 1 ) + f^q ϕ q ( g / f ) + ( ( f^{ q − 1 } − 1 ) / ( q − 1 ) ) g

That means we have the following equation

∑ ϕ q ( f + g − 1 ) 1 { Δ S ≠ 0 } = ∑ ( ϕ q ( f − 1 ) + f^q ϕ q ( g / f ) + ( ( f^{ q − 1 } − 1 ) / ( q − 1 ) ) g ) 1 { Δ S ≠ 0 }

From the definition of random measure we have

ϕ q ( f + g − 1 ) ⋆ μ = ϕ q ( f − 1 ) ⋆ μ + f^q ϕ q ( g / f ) ⋆ μ + ( ( f^{ q − 1 } − 1 ) / ( q − 1 ) ) g ⋆ μ (47)

The compensator of the LHS of Equation (47) is equal to the sum of the compensators of the RHS of Equation (47).

The compensator of ϕ q ( f − 1 ) ⋆ μ is equal to ϕ q ( f − 1 ) ⋆ ν = ∫ 0 T ∫ ℝ ϕ q ( f − 1 ) F t ( d x ) d A

The compensator of f^q ϕ q ( g / f ) ⋆ μ is given by f^q M μ P ( ϕ q ( g / f ) | P ˜ ) ⋆ ν

The compensator of the last term: we introduce the sets C n = { ( w , t , x ) : | f^{ q − 1 } | ≤ n }. Since M μ P ( g | P ˜ ) = 0, the compensator of ( 1 C n ( f^{ q − 1 } − 1 ) / ( q − 1 ) ) g ⋆ μ is equal to 0.

Therefore the compensator of Ψ = ∑ ϕ q ( f + g − 1 ) 1 { Δ S ≠ 0 } is

∫ 0 T ∫ ℝ ϕ q ( f − 1 ) F t ( d x ) d A + f^q M μ P ( ϕ q ( g / f ) | P ˜ ) ⋆ ν (48)

Now we compute the compensator of

Ϝ = ∑ ϕ q ( Δ N ′ − ( f̂ − a ) / ( 1 − a ) ) 1 { Δ S = 0 }

Then

ϕ q ( Δ N ′ − ( f̂ − a ) / ( 1 − a ) ) = ϕ q ( − ( f̂ − a ) / ( 1 − a ) ) + ( 1 − ( f̂ − a ) / ( 1 − a ) )^q ϕ q ( Δ N ′ / ( 1 − ( f̂ − a ) / ( 1 − a ) ) ) + ( ( ( 1 − ( f̂ − a ) / ( 1 − a ) )^{ q − 1 } − 1 ) / ( q − 1 ) ) Δ N ′

Therefore

ϕ q ( Δ N ′ − ( f̂ − a ) / ( 1 − a ) ) 1 { Δ S = 0 } = ϕ q ( − ( f̂ − a ) / ( 1 − a ) ) 1 { Δ S = 0 } + ( 1 − ( f̂ − a ) / ( 1 − a ) )^q ϕ q ( Δ N ′ / ( 1 − ( f̂ − a ) / ( 1 − a ) ) ) 1 { Δ S = 0 } + ( ( ( 1 − ( f̂ − a ) / ( 1 − a ) )^{ q − 1 } − 1 ) / ( q − 1 ) ) Δ N ′ 1 { Δ S = 0 } (49)

The compensator of the LHS of Equation (49) is equal to the sum of the compensators of the RHS of Equation (49).

The compensator of ∑ 0 < s ≤ t ϕ q ( − ( f̂ − a ) / ( 1 − a ) ) 1 { Δ S = 0 } is equal to

∑ 0 < s ≤ t ( 1 − a ) ϕ q ( − ( f̂ − a ) / ( 1 − a ) )

Let M be the compensator of ( 1 − ( f̂ − a ) / ( 1 − a ) )^q ϕ q ( Δ N ′ / ( 1 − ( f̂ − a ) / ( 1 − a ) ) ) 1 { Δ S = 0 }.

The compensator of the process ( ( ( 1 − ( f̂ − a ) / ( 1 − a ) )^{ q − 1 } − 1 ) / ( q − 1 ) ) Δ N ′ is equal to 0.

Therefore the compensator of Ϝ is

∑ 0 < s ≤ t ( 1 − a ) ϕ q ( − ( f̂ − a ) / ( 1 − a ) ) + M . (50)

Therefore by combining Equation (48) and Equation (50) we get Equation (45).

From the Jacod decomposition Equation (4), if we set N 1 = β ⋅ S c + W ⋆ ( μ − ν ) and Z 1 = E ( N 1 ), then if Z S is a σ-martingale, Z 1 S is also a σ-martingale, because N ′ preserves the martingale property for S.

Then Z e , σ r ,1 ( q ) ⊆ Z e , σ ( q )

where Z e , σ r ,1 ( q ) = { Z 1 = E ( N 1 ) | Z = E ( N ) ∈ Z e , σ ( q ) } .

Therefore for the minimization problem

min Z ∈ Z e , l o c h ( q ) ( Z , P ) = min Z ∈ Z e , l o c r ,1 h ( q ) ( Z , P )

we are going to have the following equation for the Hellinger process.

h ( q ) ( N , P ) = ( 1 / 2 ) β T c β ⋅ A + ∫ 0 T ∫ ℝ ϕ q ( f − 1 ) F t ( d x ) d A + ∑ 0 < s ≤ t ( 1 − a ) ϕ q ( − ( f̂ − a ) / ( 1 − a ) ) (51)

Theorem 4 According to [

∫ | ( 1 + λ T x )^{ 1 / ( q − 1 ) } x − h ( x ) | F ( d x ) < + ∞ (52)

and a solution of Equation (51) exists and is given by

Z ( q ) = E ( N ( q ) ) N ( q ) = β ( q ) ⋅ S c + W ( q ) ⋆ ( μ − ν ) (53)

where

β t ( q ) = ( 1 / ( q − 1 ) ) λ ˜ t , W ( q ) ( t , x ) = ( ( 1 + λ ˜ t T x )^{ 1 / ( q − 1 ) } − 1 ) / ( 1 − a t + ∫ ( 1 + λ ˜ t T x )^{ 1 / ( q − 1 ) } ν t ( d x ) ) (54)

Since we know that Z ( q ) = E ( N ( q ) ) = E ( β ( q ) ⋅ S c + W ( q ) ⋆ ( μ − ν ) ) , we are going to minimize the following equation

min { f t ( x ) , β } ( ( 1 / 2 ) β T c β ⋅ A + ∫ 0 T ∫ ℝ ϕ q ( f − 1 ) F t ( d x ) d A + ∑ 0 < s ≤ t ( 1 − a ) ϕ q ( − ( f̂ − a ) / ( 1 − a ) ) ) (55)

By distinguishing the cases where Δ A = 0 and the case where Δ A ≠ 0 , this problem can be split into the following two minimization problems.

The first problem Δ A = 0 is defined by

1 2 β T c β + ∫ 0 T ∫ ℝ ϕ q ( f − 1 ) F t ( d x ) (56)

where the minimization is over all couples ( β , f t ( x ) ) satisfying

b + c β + ∫ ( x − h ( x ) + x ( f − 1 ) ) F t ( d x ) = 0 (57)

The second problem Δ A ≠ 0 is defined as follows

∫ 0 T ∫ ℝ ϕ q ( f − 1 ) F t ( d x ) d A + ∑ 0 < s ≤ t ( 1 − a ) ϕ q ( − ( f̂ − a ) / ( 1 − a ) ) (58)

where the minimization is over the function f t ( x ) such that

∫ f t ( x ) x ν t ( d x ) = 0 (59)

The Euler-Lagrange equation of the first problem ( Δ A = 0 ) combines Equation (56) and Equation (57):

L ( β , f ( x ) , λ ) = ( 1 / 2 ) β T c β + ∫ ( ( f^q − 1 − q ( f − 1 ) ) / ( q ( q − 1 ) ) ) F t ( d x ) − λ ( b + c β + ∫ ( x − h ( x ) + x ( f − 1 ) ) F t ( d x ) )

d β : β = λ , d f : f = ( 1 + λ x )^{ 1 / ( q − 1 ) } (60)

d λ : ( b + c β + ∫ ( x − h ( x ) + x ( f − 1 ) ) F t ( d x ) ) = 0 (61)

Therefore we are going to have the following

β ˜ = λ , f ˜ t ( x ) = ( 1 + λ x )^{ 1 / ( q − 1 ) } (62)

Substituting Equation (60) into Equation (61) gives

b + c λ + ∫ ( ( 1 + λ x )^{ 1 / ( q − 1 ) } x − h ( x ) ) F t ( d x ) = 0 (63)

The Euler-Lagrange equation of the second problem ( Δ A ≠ 0 ) is as follows

L ( f , κ , λ , ρ ) = ∫ ( ( f^q − 1 − q ( f − 1 ) ) / ( q ( q − 1 ) ) ) ν t ( d x ) + ( 1 − a ) ϕ q ( − κ / ( 1 − a ) ) − λ ∫ x f t ( x ) ν t ( d x ) − ρ ( ∫ f t ( x ) ν t ( d x ) − a − κ )

We are going to have the following equations

d f : f^{ q − 1 } − 1 = ( q − 1 ) ( λ x + ρ ) , d κ : ( ( 1 − a − κ ) / ( 1 − a ) )^{ q − 1 } = 1 + ( q − 1 ) ρ , d λ : ∫ x f t ( x ) ν t ( d x ) = 0 , d ρ : ∫ f t ( x ) ν t ( d x ) − a − κ = 0 (64)

From the first Equation in (64) we are going to have

f^{ q − 1 } = ( 1 + ( q − 1 ) ρ ) ( 1 + ( ( q − 1 ) λ x ) / ( 1 + ( q − 1 ) ρ ) ) . Let Γ = ( ( q − 1 ) λ ) / ( 1 + ( q − 1 ) ρ ) . Then f = [ ( 1 + ( q − 1 ) ρ ) ( 1 + Γ x ) ]^{ 1 / ( q − 1 ) } (65)

From second Equation in (64) make κ the subject

κ = ( 1 − a ) ( 1 − ( 1 + ( q − 1 ) ρ )^{ 1 / ( q − 1 ) } ) (66)

Substitute (65) and (66) into the fourth Equation in (64) and make ρ the subject.

ρ = ( ( 1 − a + ∫ ( 1 + Γ x )^{ 1 / ( q − 1 ) } ν t ( d x ) )^{ 1 − q } − 1 ) / ( q − 1 ) (67)

Substituting Equation (67) into Equation (65) gives

f t ( x ) = ( 1 + Γ x )^{ 1 / ( q − 1 ) } / ( 1 − a + ∫ ( 1 + Γ x )^{ 1 / ( q − 1 ) } ν t ( d x ) ) (68)

From Equation (65), Γ = ( q − 1 ) λ / ( 1 + ( q − 1 ) ρ ), so Γ is proportional to λ: Γ = k λ with k = ( q − 1 ) / ( 1 + ( q − 1 ) ρ ). Absorbing the constant k into the multiplier, i.e. taking k = 1, gives Γ = λ.

Then we are going to have

f t ( x ) = ( 1 + λ x )^{ 1 / ( q − 1 ) } / ( 1 − a + ∫ ( 1 + λ x )^{ 1 / ( q − 1 ) } ν t ( d x ) ) (69)
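For q = 2 the exponent 1 / ( q − 1 ) equals 1 and the root equation becomes linear in λ, so Equation (69) can be verified in closed form on a toy measure (an illustrative assumption, not data from the paper):

```python
# Toy check of Eq. (69) for order q = 2, with an assumed discrete nu_t:
# atoms x = -0.5, +0.5 with masses 0.3, 0.2, so a = nu_t(R) = 0.5 < 1.
nu = [(-0.5, 0.3), (0.5, 0.2)]
a = sum(w for _, w in nu)
q = 2   # then (1 + lam*x)^{1/(q-1)} = 1 + lam*x

# Root of int x (1 + lam*x) nu(dx) = 0, which here is linear:
# -0.05 + 0.125*lam = 0  =>  lam = 0.4
lam = 0.4
assert abs(sum(w * x * (1 + lam * x) for x, w in nu)) < 1e-12

denom = 1 - a + sum(w * (1 + lam * x) for x, w in nu)
f = {x: (1 + lam * x) / denom for x, _ in nu}         # Eq. (69)
assert abs(sum(w * x * f[x] for x, w in nu)) < 1e-12  # condition (59)
assert all(v > 0 for v in f.values())                 # f > 0
```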

If we multiply Equation (69) by ν t ( d x ) and integrate both sides with respect to it, we get:

∫ f t ( x ) ν t ( d x ) = ∫ ( 1 + λ x )^{ 1 / ( q − 1 ) } ν t ( d x ) / ( 1 − a + ∫ ( 1 + λ x )^{ 1 / ( q − 1 ) } ν t ( d x ) ) (70)

If we add Equation (69) and Equation (70) we get W = f t ( x ) + ∫ f t ( x ) ν t ( d x ) = f t ( x ) + f̂ t , and this is not equal to the given W = f − 1 + ( f̂ − a ) / ( 1 − a ).

This function W = f t ( x ) + f̂ t gives us W ˜ = ( ( 1 + λ x )^{ 1 / ( q − 1 ) } + ∫ ( 1 + λ x )^{ 1 / ( q − 1 ) } ν t ( d x ) ) / ( 1 − a + ∫ ( 1 + λ x )^{ 1 / ( q − 1 ) } ν t ( d x ) ) .

Since W = f − 1 + ( f̂ − a ) / ( 1 − a ), in order to get the required minimal entropy-Hellinger sigma martingale density we need to introduce the new function m = f t − 1 ∈ P ˜, where f is given by Equation (69).

Therefore

m = f t − 1 = ( 1 + λ x )^{ 1 / ( q − 1 ) } / ( 1 − a + ∫ ( 1 + λ x )^{ 1 / ( q − 1 ) } ν t ( d x ) ) − 1 (71)

Multiply both sides by ν t ( d x ) and integrate both sides. Then we are going to get:

( ∫ f t ( x ) ν t ( d x ) − a ) / ( 1 − a ) = ( ∫ ( 1 + λ x )^{ 1 / ( q − 1 ) } ν t ( d x ) − a ) / ( 1 − a + ∫ ( 1 + λ x )^{ 1 / ( q − 1 ) } ν t ( d x ) ) (72)

By adding Equation (71) and Equation (72) we get the required minimal entropy-Hellinger sigma martingale density Equation (54).

As we saw for the entropy-Hellinger process of order one, the measurable function W in the Jacod decomposition depends on U ∈ P ˜. We proved that when we have another version of U, it changes the measurable function W and the expression of the entropy-Hellinger process, as well as W ˜. In order to get the measurable function W ˜ for the equation of the minimal entropy-Hellinger sigma martingale density, also for this entropy process of order q, we introduce the new function m = f t − 1 ∈ P ˜ and get the required

W ( q ) ( t , x ) = ( ( 1 + λ ˜ t T x )^{ 1 / ( q − 1 ) } − 1 ) / ( 1 − a t + ∫ ( 1 + λ ˜ t T x )^{ 1 / ( q − 1 ) } ν t ( d x ) ) .

Also, in our case we find β = λ, which is different compared to [

Let N ∈ M 0, l o c ( P ) be such that ( 1 + Δ N ) > 0 and such that the nondecreasing adapted process

V t 0 ( N ) = 1 2 〈 N c 〉 t + ∑ 0 < s ≤ t ϕ 0 ( Δ N s ) , 0 ≤ t ≤ T (73)

is locally integrable, then the Hellinger process H ( 0 ) ( N , P ) of order q = 0 is the dual predictable projection of V t 0 ( N ) with respect to P.

The function ϕ 0 ( x ) on the above Equation (73) is given as

ϕ 0 ( x ) = x − log ( 1 + x ) if x > − 1 ; + ∞ otherwise

The set of sigma martingale densities which we are interested in:

Z l o c e ( S ) = { Z = E ( N ) ≥ 0 | N ∈ M l o c ( P ) : Z > 0 , Z log Z locally integrable , Z S ∈ M σ ( P ) , ϕ 0 ( Δ N ) ∈ L l o c 1 }

As we have seen for the entropy-Hellinger process of order one, the entropy-Hellinger process of N with respect to the probability measure P, denoted h ( 0 ) ( N , P ), coincides with that of the sigma density Z ∈ Z l o c e ( q = 0 ), denoted h t ( 0 ) ( Z , P ), and with that of the equivalent probability measure Q ∈ ℙ a, denoted h t ( 0 ) ( Q , P ).

Proposition 5 (The entropy-Hellinger process of order zero)

The Hellinger process of order 0 for N when W = f − 1 + ( f̂ − a ) / ( 1 − a ) 1 { a < 1 } is equal to

h ( 0 ) ( N , P ) = ( 1 / 2 ) β T c β ⋅ A + ∫ 0 T ∫ ℝ [ ( f − 1 ) − ln f ] F t ( d x ) d A + ∑ 0 < s ≤ t ( − ( f̂ − a ) − ( 1 − a ) ln ( 1 − ( f̂ − a ) / ( 1 − a ) ) ) + f M μ P ( g / f − ln ( 1 + g / f ) | P ˜ ) ⋆ ν + L . (74)

Proof

From Jacod decomposition Δ N ′ t = Δ N ′ t 1 { Δ S = 0 } ,

We have the following assertions

g 1 { f = 0 } = 0 and Δ N ′ 1 { f̂ − a = 1 − a } = 0

From Equation (9)

Δ N = ( f + g − 1 ) 1 { Δ S ≠ 0 } + ( Δ N ′ − ( f̂ − a ) / ( 1 − a ) ) 1 { Δ S = 0 }

Also from (12) we have

1 + Δ N = ( f + g ) 1 { Δ S ≠ 0 } + ( 1 + Δ N ′ − ( f̂ − a ) / ( 1 − a ) ) 1 { Δ S = 0 }

( f + g ) ≥ 0 , 1 + Δ N ′ − ( f̂ − a ) / ( 1 − a ) ≥ 0 (75)

By taking the conditional expectation under M μ P with respect to P ˜ in the first inequality in (75) and the predictable projection in the second, we are going to have

f ≥ 0 , 1 − ( f̂ − a ) / ( 1 − a ) ≥ 0 (76)

From Equation (73), the sum on the RHS gives

∑ 0 < s ≤ t ϕ 0 ( Δ N s ) = ∑ 0 < s ≤ t ( Δ N − ln ( 1 + Δ N ) ) = ( ( f + g − 1 ) − ln ( f + g ) ) ⋆ μ + ∑ 0 < s ≤ t ( ( Δ N ′ − ( f̂ − a ) / ( 1 − a ) ) − ln ( 1 + Δ N ′ − ( f̂ − a ) / ( 1 − a ) ) ) 1 { Δ S = 0 }

By using the assertion above

V ( N , P ) = ( 1 / 2 ) β T c β ⋅ A + ( ( f − 1 ) − ln f ) ⋆ μ + ∑ 0 < s ≤ t ( ( − ( f̂ − a ) / ( 1 − a ) ) − ln ( 1 − ( f̂ − a ) / ( 1 − a ) ) ) 1 { Δ S = 0 } + f ( g / f − ln ( 1 + g / f ) ) ⋆ μ + ( 1 / 2 ) 〈 N ′ c 〉 + ∑ 0 < s ≤ t ( 1 − ( f̂ − a ) / ( 1 − a ) ) [ ( Δ N ′ / ( 1 − ( f̂ − a ) / ( 1 − a ) ) ) − ln ( 1 + Δ N ′ / ( 1 − ( f̂ − a ) / ( 1 − a ) ) ) ] 1 { Δ S = 0 } + ( g ln f ) ⋆ μ + ∑ 0 < s ≤ t ln ( 1 − ( f̂ − a ) / ( 1 − a ) ) Δ N ′ (77)

The dual predictable projections of the last two terms on the RHS of Equation (77) vanish, due to the dual predictable projection properties of the local martingale N ′ and of g.

Let L be the dual predictable projection of

K = ( 1 / 2 ) 〈 N ′ c 〉 + ∑ 0 < s ≤ t ( 1 − ( f̂ − a ) / ( 1 − a ) ) [ ( Δ N ′ / ( 1 − ( f̂ − a ) / ( 1 − a ) ) ) − ln ( 1 + Δ N ′ / ( 1 − ( f̂ − a ) / ( 1 − a ) ) ) ] 1 { Δ S = 0 }

Taking the dual predictable projection of the remaining terms, we are going to have

h 0 ( N , P ) = ( 1 / 2 ) β T c β ⋅ A + ( ( f − 1 ) − ln f ) ⋆ ν + ∑ 0 < s ≤ t ( 1 − a ) ( ( − ( f̂ − a ) / ( 1 − a ) ) − ln ( 1 − ( f̂ − a ) / ( 1 − a ) ) ) + f M μ P ( g / f − ln ( 1 + g / f ) | P ˜ ) ⋆ ν + L (78)

which is the same as (74).

The minimization problem min Z ∈ Z e , l o c h ( 0 ) ( Z , P ) is equivalent to minimizing the Hellinger process over the set of densities that have the predictable representation with g = 0 and N ′ = 0.

Therefore we are going to minimize the following entropy-Hellinger process.

h 0 ( N , P ) = ( 1 / 2 ) β T c β ⋅ A + ( ( f − 1 ) − ln f ) ⋆ ν + ∑ 0 < s ≤ t ( − ( f̂ − a ) − ( 1 − a ) ln ( 1 − ( f̂ − a ) / ( 1 − a ) ) ) (79)

By distinguishing the cases where Δ A = 0 and the case where Δ A ≠ 0 , this problem can be split into the following two minimization problems. The first problem ( Δ A = 0 ) is defined by

∫ ( 1 / 2 ) β T c β d A + ∫ ( ( f − 1 ) − ln f ) F t ( d x ) d A (80)

where the minimization is over all couples ( β , f ) satisfying

b + c β + ∫ ( x − h ( x ) + x ( f − 1 ) ) F t ( d x ) = 0 (81)

The second problem ( Δ A ≠ 0 ) is defined as follows

( ( f − 1 ) − ln f ) ⋆ ν + ∑ 0 < s ≤ t ( − ( f̂ − a ) − ( 1 − a ) ln ( 1 − ( f̂ − a ) / ( 1 − a ) ) ) (82)

where the minimization is over the functional f such that

∫ x f ν t ( d x ) = 0 (83)

Conditions (81) and (83) correspond to the conditions given in the above theorem and to the conditions given for a local martingale to be a sigma martingale density [

The Euler-Lagrange equation of the first problem combines Equation (80) and Equation (81):

L ( β , f ) = ∫ ( 1 / 2 ) β T c β d A + ∫ ( ( f − 1 ) − ln f ) F t ( d x ) d A − λ ( ∫ b d A + ∫ c β d A + ∫ ( x − h ( x ) + x ( f − 1 ) ) F t ( d x ) d A )

L ( β , f ) = ( 1 / 2 ) β T c β + ∫ ( ( f − 1 ) − ln f ) F t ( d x ) − λ ( b + c β + ∫ ( x − h ( x ) + x ( f − 1 ) ) F t ( d x ) )

The stationarity conditions are

d β : β = λ , d f : f = 1 / ( 1 − λ x ) (84)

Therefore the description of β ˜ and f ˜ due to Equation (84) is equal to

β ˜ = λ ˜ 1 { Δ A = 0 } , f ˜ 1 { Δ A = 0 } = ( 1 / ( 1 − λ x ) ) 1 { Δ A = 0 } (85)
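The first-problem solution f = 1 / ( 1 − λ x ) of (84)-(85) can be sketched numerically, with λ solving constraint (81) as in the order-one case. The jump measure and coefficients below are illustrative assumptions, not data from the paper:

```python
# Toy check of Eqs. (84)-(85): an assumed finite jump measure F with atoms
# x = -0.5, +0.5 (masses 1), truncation h(x) = x, drift b = 0.1, c = 1.
atoms = [(-0.5, 1.0), (0.5, 1.0)]
b, c = 0.1, 1.0

def G(lam):
    # constraint (81) with f = 1/(1 - lam*x):
    # b + c*lam + int (x - h(x) + x*(f - 1)) F(dx)
    return b + c * lam + sum(w * x * (1.0 / (1 - lam * x) - 1) for x, w in atoms)

def bisect(g, lo, hi, tol=1e-12):
    # G is increasing on the domain where 1 - lam*x > 0 on the atoms
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# need 1 - lam*x > 0 on the atoms, i.e. |lam| < 2 here
lam = bisect(G, -1.9, 1.9)
f = {x: 1.0 / (1 - lam * x) for x, _ in atoms}
assert abs(G(lam)) < 1e-9
assert all(v > 0 for v in f.values())   # f > 0, consistent with (76)
```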

The Lagrangian for this minimization problem due to Equations (82) and (83) is given by

L ( f , ϕ , λ , α ) = ∫ ( ( f − 1 ) − ln f ) ν t ( d x ) + [ − ϕ − ( 1 − a ) ln ( 1 − ϕ / ( 1 − a ) ) ] − λ ∫ x f t ( x ) ν t ( d x ) − α ( ∫ f t ( x ) ν t ( d x ) − a − ϕ )

d f : f t ( x ) = 1 / ( 1 − λ x − α ) , d ϕ : ( 1 − a ) / ( ( 1 − a ) − ϕ ) = 1 − α , d λ : ∫ x f t ( x ) ν t ( d x ) = 0 , d α : ∫ f t ( x ) ν t ( d x ) − a − ϕ = 0 (86)

Substitute the first Equation of (86) into the third Equation of (86)

∫ ( x / ( 1 − λ x − α ) ) ν t ( d x ) = 0 (87)

From the second Equation (86) make ϕ the subject

ϕ = ( 1 − a ) ( − α ) / ( 1 − α ) (88)

Substitute Equation (88) into the fourth Equation (86) and make α the subject

∫ f t ( x ) ν t ( d x ) − a − ( 1 − a ) ( − α ) / ( 1 − α ) = 0 , α = ( ∫ f t ( x ) ν t ( d x ) − a ) / ( ∫ f t ( x ) ν t ( d x ) − 1 ) (89)

Substitute Equation (89) into the first Equation (86)

f t ( x ) = ( ∫ f t ( x ) ν t ( d x ) − 1 ) / ( ( 1 − λ x ) ( ∫ f t ( x ) ν t ( d x ) − 1 ) − ( ∫ f t ( x ) ν t ( d x ) − a ) ) (90)

Multiply both sides of Equation (90) by ν t ( d x ) and integrate the equation

∫ f t ( x ) ν t ( d x ) = a / ( 1 − λ x − ( ∫ f t ( x ) ν t ( d x ) − a ) / ( ∫ f t ( x ) ν t ( d x ) − 1 ) ) , ∫ f t ( x ) ν t ( d x ) / a = ( ∫ f t ( x ) ν t ( d x ) − 1 ) / ( ( 1 − λ x ) ( ∫ f t ( x ) ν t ( d x ) − 1 ) − ( ∫ f t ( x ) ν t ( d x ) − a ) ) (91)

If we add the LHS of Equations (90) and (91) we are going to have

W = f t ( x ) + ∫ f t ( x ) ν t ( d x ) / a , which is equal to

W = f t ( x ) + f̂ t / a , and on the RHS this gives

W ˜ = ( ∫ f t ( x ) ν t ( d x ) − 1 ) / ( ( 1 − λ x ) ( ∫ f t ( x ) ν t ( d x ) − 1 ) − ( ∫ f t ( x ) ν t ( d x ) − a ) ) + ( ∫ f t ( x ) ν t ( d x ) − 1 ) / ( ( 1 − λ x ) ( ∫ f t ( x ) ν t ( d x ) − 1 ) − ( ∫ f t ( x ) ν t ( d x ) − a ) )

This is equal to

W ˜ = 2 ( ∫ f t ( x ) ν t ( d x ) − 1 ) / ( ( 1 − λ x ) ( ∫ f t ( x ) ν t ( d x ) − 1 ) − ( ∫ f t ( x ) ν t ( d x ) − a ) )

But we know W = f − 1 + f ^ − a 1 − a

Therefore let

m = f t ( x ) − 1 ∈ P ˜ where f t ( x ) = 1 1 − λ x − [ ∫ f t ( x ) ν t ( d x ) − a ∫ f t ( x ) ν t ( d x ) − 1 ]

$$m=f_t(x)-1=\frac{1}{1-\lambda x-\left[\frac{\int f_t(x)\,\nu_t(dx)-a}{\int f_t(x)\,\nu_t(dx)-1}\right]}-1 \qquad (92)$$

From Equation (92)

$$f_t(x)-1=\frac{\left(\int f_t(x)\,\nu_t(dx)-1\right)-\left((1-\lambda x)\left(\int f_t(x)\,\nu_t(dx)-1\right)-\left(\int f_t(x)\,\nu_t(dx)-a\right)\right)}{(1-\lambda x)\left(\int f_t(x)\,\nu_t(dx)-1\right)-\left(\int f_t(x)\,\nu_t(dx)-a\right)} \qquad (93)$$
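Equation (93) is (90) minus one, combined over the common denominator; this too can be spot-checked numerically (with the same arbitrary illustrative values as before, $F$ standing for $\int f_t(x)\,\nu_t(dx)$):

```python
# Check Equation (93): subtracting 1 from the closed form (90) and combining
# over the common denominator D gives the displayed fraction.
lam, x, a, F = 0.3, 0.8, 0.4, 0.52         # arbitrary illustrative values
D = (1.0 - lam * x) * (F - 1.0) - (F - a)  # common denominator
f = (F - 1.0) / D                          # Equation (90)
rhs_93 = ((F - 1.0) - D) / D               # right-hand side of (93)
assert abs((f - 1.0) - rhs_93) < 1e-12
```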

Multiplying both sides of Equation (92) by $\nu_t(dx)$ and integrating gives

$$\int f_t(x)\,\nu_t(dx)-a=\frac{a\left(\int f_t(x)\,\nu_t(dx)-1\right)}{(1-\lambda x)\left(\int f_t(x)\,\nu_t(dx)-1\right)-\left(\int f_t(x)\,\nu_t(dx)-a\right)}-a \qquad (94)$$

Combining the above equation over a common denominator gives

$$\int f_t(x)\,\nu_t(dx)-a=\frac{a\left(\lambda x+1+\frac{1-a}{\int f_t(x)\,\nu_t(dx)-1}\right)}{-\lambda x-\frac{1-a}{\int f_t(x)\,\nu_t(dx)-1}},\qquad \frac{\int f_t(x)\,\nu_t(dx)-a}{1-a}=\frac{a}{1-a}\cdot\frac{\lambda x+1+\frac{1-a}{\int f_t(x)\,\nu_t(dx)-1}}{-\lambda x-\frac{1-a}{\int f_t(x)\,\nu_t(dx)-1}} \qquad (95)$$
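Under our reading of (95) (the original typesetting of its numerator is ambiguous), the common-denominator form should agree exactly with (94), using $(F-a)/(F-1)=1+(1-a)/(F-1)$. A numeric spot-check with arbitrary illustrative values, $F$ standing for $\int f_t(x)\,\nu_t(dx)$:

```python
# Check that the common-denominator form (95), as reconstructed here, agrees
# with (94): a(F-1)/D - a, where D = (1 - lam*x)(F-1) - (F-a).
lam, x, a, F = 0.3, 0.8, 0.4, 0.52         # arbitrary illustrative values
D = (1.0 - lam * x) * (F - 1.0) - (F - a)
rhs_94 = a * (F - 1.0) / D - a                       # Equation (94)
r = (1.0 - a) / (F - 1.0)                            # (F-a)/(F-1) = 1 + r
rhs_95 = a * (lam * x + 1.0 + r) / (-lam * x - r)    # Equation (95)
assert abs(rhs_94 - rhs_95) < 1e-12
```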

Adding the left-hand sides of Equations (93) and (95) gives

$$W=f_t(x)-1+\frac{\int f_t(x)\,\nu_t(dx)-a}{1-a} \qquad (96)$$

Adding the right-hand sides of Equations (93) and (95) gives

$$\hat W=\frac{\left(\int f_t(x)\,\nu_t(dx)-1\right)-\left((1-\lambda x)\left(\int f_t(x)\,\nu_t(dx)-1\right)-\left(\int f_t(x)\,\nu_t(dx)-a\right)\right)}{(1-\lambda x)\left(\int f_t(x)\,\nu_t(dx)-1\right)-\left(\int f_t(x)\,\nu_t(dx)-a\right)}+\frac{a}{1-a}\cdot\frac{\lambda x+1+\frac{1-a}{\int f_t(x)\,\nu_t(dx)-1}}{-\lambda x-\frac{1-a}{\int f_t(x)\,\nu_t(dx)-1}} \qquad (97)$$

This result differs from those for order one and order $q\neq 1$: even after introducing the function $m=f_t-1\in\tilde{\mathcal{P}}$ in order to obtain $\hat W$ for the minimal entropy-Hellinger sigma martingale density, we fail to obtain the required result.

Therefore, for order zero it is possible to obtain the equation for $W$, but it is not possible to obtain the required $\hat W$ for the minimal entropy-Hellinger sigma martingale density.

In forming the expressions of the entropy-Hellinger processes of order one, order $q$ and order zero, we used the local-martingale jump equation of the Jacod decomposition. The Jacod decomposition has the known parameters $(\beta, W, g, N')$, with the measurable function $W=U+\frac{\hat U}{1-a}$, where $U\in\tilde{\mathcal{P}}$. If we take a new version of $U$ subject to the condition $\Delta N>-1$, then also $U>-1$. Setting $U=f-1$ then gives $W=f-1+\frac{\hat f-a}{1-a}\mathbf{1}_{\{a<1\}}$.

In this research work, we provide and prove other expressions of the entropy-Hellinger processes for a positive sigma martingale of order one, order $q$ and order zero. Furthermore, we show how the measurable functions $W$ and $\tilde W$ change during our minimization solutions, and we introduce the function $m=f_t-1\in\tilde{\mathcal{P}}$ in order to obtain the required equation of the minimal entropy-Hellinger sigma martingale density. However, the result differs for the minimization of the entropy-Hellinger process of order zero: after introducing $m=f_t-1\in\tilde{\mathcal{P}}$, it is possible to obtain an equation for the measurable function $W$, but not for the measurable function $\tilde W$.

This study is based on a proposed dynamic method of finding an equivalent sigma-martingale measure and/or density by using the entropy-Hellinger process. Since we show that other expressions of the entropy-Hellinger processes of all orders are possible, we recommend that future studies consider further forms of these expressions. This will be possible if new versions of the parameters of the Jacod decomposition are set under the needed conditions.

We acknowledge the African Union, through the Pan African University, Institute of Basic Sciences, Technology and Innovation, for its support.

The authors declare no conflicts of interest regarding the publication of this paper.

Mwigilwa, W.F., Aduda, J. and Kube, A.O. (2021) Description of Minimal Entropy Hellinger Sigma Martingale Density of Order One, Order q and Order Zero. Journal of Mathematical Finance, 11, 528-553. https://doi.org/10.4236/jmf.2021.113030