Complex Probability and Markov Stochastic Process
B.A., M.Sc., Ph.D., Post Doc.
Professor of Economics and Chief Islamic Banking Advisor
Bank Melli Iran, Tehran, Iran
Email: [email protected]
Faculty of Mathematics, Polytechnics University
Hafez Ave., Tehran, 15914, Iran
Email: [email protected]
This note discusses the existence of "complex probability" in tangible real-world problems. By defining a measure more general than the conventional definition of probability, the transition probability matrix of a discrete Markov chain is broken into periods shorter than a complete transition step. In this regard, complex probability is implied.
Keywords: Probability, Markov Process, Stochastic Process, Population Census
1. Introduction
Sometimes analytic numbers coincide with the mathematical modeling of real-world problems and make the analysis of those problems complex. All the measures in our everyday problems belong to R, and mostly to R+. The probability of occurrence of an event always belongs to the range [0,1]. In this paper, it is shown that to solve a special class of Markov chains which should have a solution in the real world, we are confronted with "analytic probabilities". Though the name probability applies to values between zero and one, we define a special analogous measure of probability, the complex probability, of which conventional probability is a subclass.
Define the well-known discrete-time Markov chain {X_n, n = 0, 1, 2, ...} on the finite state space {1, ..., N} with stationary transition probabilities

p_ij = Pr{X_{n+1} = j | X_n = i}.    (1)

Therefore, the Markov or transition probability matrix of the process is defined by

P = [p_ij],  p_ij >= 0,  Σ_{j=1}^{N} p_ij = 1.    (2)

The n-step transition probability matrix is

P(n) = [p_ij^(n)],    (3)

where

p_ij^(n) = Pr{X_{m+n} = j | X_m = i}.    (4)

According to the Chapman-Kolmogorov relation for discrete Markov matrices (Karlin and Taylor (1975)), it can be proved that P(n) = P^n; that is, P to the power n is a Markov matrix if P is Markov.
Now, suppose that we intend to derive the t-step transition probability matrix P(t), where t >= 0, from the above definitions (3) and (4) of the n-step transition probability matrix. That is, we want to find the transition probability matrix for incomplete steps. In other words, we are interested in finding the transition matrix P(t) when t lies between two sequential integers. This case is not just a tatonnement example. To clarify the application of this phenomenon, consider the following example.
Example 1. Usually, in the population census of societies with N distinct regions, migration information is collected in an N×N migration matrix for a period of ten years. Denote this matrix by M. Any element of M, m_ij, is the population which left region i and went to region j during the last ten years. By dividing each m_ij by the sum of the ith row of M, a value p_ij is computed as an estimate of the probability of transition from region i to region j. Thus, the stochastic matrix P gives the probabilities of going from region i to region j in ten years (the one-step transition probability matrix). The question is: how can we compute the transition probability matrix for one year, that is, for one-tenth of a step?
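As an illustration, the one-year matrix can be computed as a one-tenth matrix power via eigendecomposition. The migration-based figures below are purely hypothetical and numpy is assumed; this is a sketch of the computation, not census methodology.

```python
import numpy as np

# Hypothetical 3-region ten-year transition matrix estimated from a migration
# table M (row-normalized); the numbers are illustrative, not census data.
P = np.array([
    [0.90, 0.07, 0.03],
    [0.05, 0.85, 0.10],
    [0.04, 0.06, 0.90],
])

# One-tenth of a step: P^(1/10) = X diag(lambda^(1/10)) X^(-1).
lam, X = np.linalg.eig(P)
P_year = X @ np.diag(lam.astype(complex) ** 0.1) @ np.linalg.inv(X)

# Ten one-year steps recompose the ten-year matrix.
print(np.allclose(np.linalg.matrix_power(P_year, 10), P))  # True
print(np.allclose(P_year.sum(axis=1), 1))                  # True: rows sum to one
```

Here all eigenvalues of this particular P happen to be positive reals, so P_year comes out (numerically) real; the discussion in section 3 covers when such fractional powers become genuinely complex.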
If we knew the generic function of probabilities for a very small period of time, we would be able to solve problems similar to Example 1; but the generic function (5) is not obtainable. If it were, we would apply the continuous-time Markov procedure, using the generic N×N matrix A defined as

A = lim_{h→0+} [P(h) − I] / h,    (5)

where P(h) denotes the transition probability matrix at time h. Then the transition probability matrix at any time t would be

P(t) = e^{At}.    (6)
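For comparison, when a generator A is available, (6) can be evaluated directly. The generator below is hypothetical, numpy is assumed, and a truncated Taylor series stands in for a library matrix exponential; a minimal sketch only.

```python
import numpy as np

def expm_series(M, terms=60):
    """Matrix exponential e^M via a truncated Taylor series (adequate for small M)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k   # next series term M^k / k!
        out = out + term
    return out

# Hypothetical generator: non-negative off-diagonal rates, rows summing to zero.
A = np.array([
    [-0.20,  0.15,  0.05],
    [ 0.10, -0.30,  0.20],
    [ 0.02,  0.08, -0.10],
])

P_t = expm_series(A * 2.5)               # P(2.5) = e^{2.5 A} as in (6)
print(np.allclose(P_t.sum(axis=1), 1))   # True: rows sum to one
print((P_t >= 0).all())                  # True: all entries are valid probabilities
```

With a known generator, P(t) is a proper Markov matrix for every t >= 0; the difficulty addressed in this paper is precisely that A is not observable from decennial census data.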
Therefore, a special procedure should be adopted to find the transition probability matrix P(t) at any time t from discrete Markov chain information. As will be shown later, the adopted procedure leads to a transition probability matrix with complex elements.
2. Breaking the time in discrete Markov chains
Consider again the matrix P defined in (2), and make the following assumption.
Assumption 1: P is of full rank.
This assumption assures that all eigenvalues of P are nonzero and that P is diagonalizable (Searle (1982), Dhrymes (1978)). The assumption is not very restrictive, since most Markov matrices actually have dominant diagonals; that is, the probability of transition from state i to itself is greater than the sum of the probabilities of transition from state i to all other states. Matrices with dominant diagonals are non-singular (Takayama (1974)). Therefore, P can be decomposed as follows (Searle (1982), Klein (1973)):

P = XΛX^{-1},    (7)

where X is the N×N matrix of eigenvectors of P,

X = [x_1, ..., x_N],    (8)

and Λ is the N×N diagonal matrix of the corresponding eigenvalues,

Λ = diag(λ_1, ..., λ_N).    (9)
Using (7), (8), and (9), the n-step transition probability matrix P^n can be broken into any smaller period of time t >= 0 by defining

P^t = XΛ^tX^{-1}.    (10)

On the other hand, the n-step transition probability matrix can be broken into fractions of n whose sum is equal to n. Therefore, for any t_1 + t_2 + ... + t_k = n,

P^n = P^{t_1}P^{t_2}...P^{t_k}.    (11)
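A quick numerical check of this decomposition on an illustrative 2×2 matrix (numpy assumed): breaking n = 2 into the fractions 0.5 + 1.25 + 0.25 reproduces P².

```python
import numpy as np

def mpow(P, t):
    """P^t = X diag(lambda^t) X^(-1), with P assumed diagonalizable (Assumption 1)."""
    lam, X = np.linalg.eig(P)
    return X @ np.diag(lam.astype(complex) ** t) @ np.linalg.inv(X)

P = np.array([[0.8, 0.2],
              [0.3, 0.7]])

# Break n = 2 steps into fractions 0.5 + 1.25 + 0.25 = 2.
prod = mpow(P, 0.5) @ mpow(P, 1.25) @ mpow(P, 0.25)
print(np.allclose(prod, np.linalg.matrix_power(P, 2)))  # True
```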
Before discussing the nature of eigenvalues of P, let us define the generalized Markov matrix.
Definition 1. Matrix Q = [q_jk] is a generalized Markov matrix if the following conditions are fulfilled:
1. q_jk ∈ C for all j, k;
2. 0 <= Re(q_jk) <= 1 for all j, k;
3. −1 <= Im(q_jk) <= 1 for all j, k;
4. Σ_{k=1}^{N} Re(q_jk) = 1 for all j;
5. Σ_{k=1}^{N} Im(q_jk) = 0 for all j.
Remark 1. According to definition 1, matrix Q can be written as

Q = U + iV,  i = √(−1),    (12)

where U and V are N×N matrices of the real and imaginary parts of Q, with u_jk = Re(q_jk) and v_jk = Im(q_jk).
Remark 2. Matrix U has all the properties of P defined by (2); thus, P ⊂ Q.
Theorem 1. If P is a Markov matrix, then P^t also satisfies Markovian properties.
Proof: According to the Chapman-Kolmogorov relation for continuous Markov chains (Karlin and Taylor (1975)), we have

P(t + s) = P(t)P(s),  t, s >= 0.    (14)
That is, if P(t) and P(s), the transition probability matrices at times t and s, are Markov, then their product P(t + s) is also Markov. Let t = 1; then P(1) is a one-step transition probability matrix, which is equivalent to (2). Hence, our discrete Markov matrix P is equivalent to its continuous analog P(1):

P = P(1).    (15)
If we show that

P(t) = P^t,  t >= 0,    (16)

then according to (14) we can conclude that if P is Markov, then P^t, P^s, and P^{t+s} are also Markov for all t, s >= 0.
Rewrite P(t) in (6) as

P(t) = e^{At} = X_A e^{Λ_A t} X_A^{-1},    (18)

where Λ_A is the diagonal matrix of eigenvalues of A and X_A is the corresponding eigenmatrix of A. Taking the natural logarithm of (18) gives

ln P(t) = At.    (22)
Write (22) for t = 1 and multiply both sides by t:

t ln P(1) = At.    (24)

By comparison of (22) and (24), conclude that

ln P(t) = t ln P(1), or equivalently P(t) = P(1)^t.    (26)

Given (15), equation (26) is the same as (16). Q.E.D.
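The conclusion of Theorem 1 can be checked numerically for non-integer t: the rows of P^t still sum to one. The matrix below is illustrative and numpy is assumed.

```python
import numpy as np

def mpow(P, t):
    """Fractional matrix power via eigendecomposition (Assumption 1)."""
    lam, X = np.linalg.eig(P)
    return X @ np.diag(lam.astype(complex) ** t) @ np.linalg.inv(X)

P = np.array([[0.6, 0.4],
              [0.2, 0.8]])

# P(t) = P(1)^t: rows of P^t sum to one for non-integer t as well.
for t in (0.1, 0.5, 2.7):
    print(np.allclose(mpow(P, t).sum(axis=1), 1))  # True, True, True
```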
Matrix P^t fulfills definition 1; thus, P^t is a generalized Markov matrix and we may write Q = P^t.
Remark 3. The sum of each row of P^t is equal to one, since P^t satisfies Markovian properties (Theorem 1).
Remark 4. The sum of the imaginary parts of each row of P^t is equal to zero. This immediately follows from remark 3.
If q_jk denotes the jkth element of P^t for t >= 0, the following remarks apply.
Remark 7. Given Q as in remark 6, u_jk ∈ [0,1]. This also follows immediately from Theorem 1.
3. Discussion on broken times
The broken-time discrete Markov chain does not always yield a complex probability matrix as given by definition 1. Matrix P^t has different properties depending on t and on the eigenvalues of P.
Since P is a non-negative matrix, the Frobenius theorem (Takayama (1974), Nikaido (1970)) assures that P has a positive dominant eigenvalue λ_1, with

|λ_i| <= λ_1,  i = 2, ..., N.

Furthermore, if P is also a Markov matrix, then its Frobenius root is equal to one (Bellman (1970), Takayama (1974)). Therefore,

λ_1 = 1,  |λ_i| <= 1,  i = 2, ..., N.
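A small numerical check of the Frobenius-root property on an illustrative Markov matrix (numpy assumed):

```python
import numpy as np

P = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

lam = np.linalg.eigvals(P)
# The dominant (Frobenius) eigenvalue of a Markov matrix equals one,
# and all other eigenvalues lie on or inside the unit circle.
print(np.isclose(lam[np.argmax(np.abs(lam))], 1.0))  # True
print((np.abs(lam) <= 1 + 1e-12).all())              # True
```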
With the above information, consider the following cases.
a. If t = 0, then P^t = I, the identity matrix; in this case all transition probabilities are zero or one.
b. If t = 1, then P^t is the matrix P itself.
c. If t is an integer greater than one, then P^t is obtained by multiplying P by itself t times.
In none of the cases a, b, and c do we encounter complex probabilities, since P^t can be derived by simply multiplying P, t times.
d. If t is not an integer and all eigenvalues of P are non-negative real numbers, then P^t is a real matrix but does not always satisfy condition 2 of definition 1.
e. If t is not an integer and some eigenvalues of P are negative or complex, then P^t is in general a complex matrix but always satisfies conditions 2 and 3 of definition 1.
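A minimal illustration of the complex case, assuming numpy: the 2×2 Markov matrix below has eigenvalue −0.8, so its half-step P^{1/2} has genuinely complex entries, yet each row's real parts sum to one and its imaginary parts cancel, in line with definition 1.

```python
import numpy as np

def mpow(P, t):
    """Fractional matrix power via eigendecomposition."""
    lam, X = np.linalg.eig(P)
    return X @ np.diag(lam.astype(complex) ** t) @ np.linalg.inv(X)

# A Markov matrix with eigenvalues 1 and -0.8: its half-step is complex.
P = np.array([[0.1, 0.9],
              [0.9, 0.1]])

H = mpow(P, 0.5)
print(np.abs(H.imag).max() > 1e-6)         # True: genuinely complex entries
print(np.allclose(H.real.sum(axis=1), 1))  # True: row real parts sum to one
print(np.allclose(H.imag.sum(axis=1), 0))  # True: row imaginary parts cancel
print(np.allclose(H @ H, P))               # True: two half-steps give one step
```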
4. Complex probability justification
Interpretation of the "complex probability" defined by definition 1 is not very simple and needs more elaboration. The interesting point is that it arises in operational statistical work, as Example 1 showed. Many examples similar to the one cited may be gathered.
With this definition of probability, the moments of a real random variable are complex. Let

π(0) = (π_1(0), ..., π_N(0))    (32)

denote the initial distribution vector of the process, and let the t-step distribution be

π(t) = π(0)P^t = π(0)Q.    (33)

We have the following remark accordingly.
Remark 8. The sum of the t-step distribution is equal to the sum of the initial distribution. That is,

Σ_{k=1}^{N} π_k(t) = Σ_{j=1}^{N} π_j(0).    (34)
This can be derived from (32) and (33) as

π_k(t) = Σ_{j=1}^{N} π_j(0)q_jk = Σ_{j=1}^{N} π_j(0)(u_jk + iv_jk).    (35)

And the sum of the t-step distribution is

Σ_{k=1}^{N} π_k(t) = Σ_{j=1}^{N} π_j(0)(Σ_{k=1}^{N} u_jk) + i Σ_{j=1}^{N} π_j(0)(Σ_{k=1}^{N} v_jk).    (36)

The two parentheses in (36) are one and zero, respectively, based on conditions 4 and 5 of definition 1. Thus, (36) and (34) are the same.
The above remark 8 states that although there exist imaginary transition probabilities for moving from state j to state k, the total sum of these "imaginary transitions" is equal to zero. In other words, after the t-th step transition, the total distribution has no imaginary part.
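Remark 8 can be verified numerically (hypothetical initial distribution, numpy assumed): individual components of the t-step distribution acquire imaginary parts, but the total remains real and unchanged.

```python
import numpy as np

def mpow(P, t):
    """Fractional matrix power via eigendecomposition."""
    lam, X = np.linalg.eig(P)
    return X @ np.diag(lam.astype(complex) ** t) @ np.linalg.inv(X)

P = np.array([[0.1, 0.9],
              [0.9, 0.1]])
pi0 = np.array([0.25, 0.75])   # hypothetical initial distribution

pi_t = pi0 @ mpow(P, 0.5)      # half-step distribution, complex components
print(np.abs(pi_t.imag).max() > 1e-6)          # True: components are complex
print(np.isclose(pi_t.sum().real, pi0.sum()))  # True: totals match
print(np.isclose(pi_t.sum().imag, 0))          # True: no imaginary mass in total
```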
5. Summary
By reviewing the discrete- and continuous-time Markov stochastic processes, a class of real-world problems was introduced which cannot be solved by either procedure. The solutions of these problems coincide with "complex probabilities" of transitions that are inherent in the mathematical formulation of the model. Complex probability was defined, and some of its properties with respect to the cited class were examined. Justification of the idea of complex probability needs more work, which is left for further research.
Acknowledgment
The authors are indebted to Dr. A. Monajemi, who read the manuscript and gave valuable remarks.
References
R. Bellman (1970), Introduction to matrix analysis, McGraw-Hill.
P.J. Dhrymes (1978), Mathematics for econometrics, Springer-Verlag.
W. Feller (1970, 1971), An introduction to probability theory and its applications, Vols. 1, 2, Wiley, New York.
P.G. Hoel, S.C. Port, C.J. Stone (1972), Introduction to stochastic processes, Houghton Mifflin, New York.
S. Karlin, H.M. Taylor (1975), A first course in stochastic processes, Academic Press.
E. Klein (1973), Mathematical methods in theoretical economics: topological and vector space foundations of equilibrium analysis, Academic Press.
K.V. Mardia, J.T. Kent, J.M. Bibby (1982), Multivariate analysis, Academic Press.
H. Nikaido (1970), Introduction to sets and mappings in modern economics, North-Holland Publishing Co.
S.S. Searle (1982), Matrix algebra useful for statistics, Wiley.
A. Takayama (1974), Mathematical economics, Dryden Press, Illinois.
E. Wentzel, L. Ovcharov (1986), Applied problems in probability theory, Mir, Moscow.
Copyright for this article is retained by the author(s), with first publication rights granted to the journal. This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).