Moreover, our model also fulfils the Markov property.

As the future risk state of a driving vehicle has strong randomness and no aftereffect (i.e., states prior to the current moment have no direct influence on the state of the next moment [11]), the driving risk evolution process follows the Markov property. Markov chains have been widely used in engineering and have already been applied to transportation problems such as traffic flow and travel speed forecasting [11-13], but have not been extensively researched in driving risk prediction.

Secondly, based on the Markov property of AMC, the one-step transition probability matrix of the different attack types produced by the alerts in each cluster is extracted.
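The extraction step described above can be sketched as follows. This is a minimal illustration, not the cited paper's implementation: the attack-type labels and the helper name are hypothetical, and the matrix is the standard maximum-likelihood estimate obtained by counting observed one-step transitions and normalizing each row.

```python
from collections import defaultdict

def one_step_transition_matrix(sequence, states):
    """Estimate the empirical one-step transition probability matrix
    from an observed sequence of attack types (row-normalized counts)."""
    counts = {s: defaultdict(int) for s in states}
    for cur, nxt in zip(sequence, sequence[1:]):
        counts[cur][nxt] += 1
    matrix = {}
    for s in states:
        total = sum(counts[s].values())
        # Rows with no observed outgoing transition are left as all zeros.
        matrix[s] = {t: (counts[s][t] / total if total else 0.0)
                     for t in states}
    return matrix

# Hypothetical alert sequence from one cluster.
seq = ["scan", "scan", "exploit", "scan", "exploit", "exploit"]
P = one_step_transition_matrix(seq, ["scan", "exploit"])
```

Each row of the resulting matrix sums to one (when the state was observed at least once before the end of the sequence), so it can be used directly as the transition matrix of the fitted chain.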

Ledoux, Markov property for a function of a Markov chain: A linear algebra approach, Linear Algebra and its Applications, Vol.

This relation is a Markov property. The transition (probability) function is,

A Dirichlet form [8] in an L^2-space is a closed, positive, symmetric, densely defined bilinear form that has the contraction property (also known as the Markov property).

where we have used the strong Markov property for the second equality, (97) and the dominated convergence theorem for the penultimate equality, and then (98).

Due to the Markov property of the queueing system, we know that the optimal policy depends only on the current state, regardless of t.

But when LTE-U is considered, the Markov property no longer holds: when LTE-U is off, the Wi-Fi transmission failure probability is only the Wi-Fi-to-Wi-Fi collision probability, whereas when LTE-U is on, it depends on both the Wi-Fi-to-Wi-Fi and the LTE-U-to-Wi-Fi collision probabilities.

One is the so-called Markov finite approximation, that is, the discretized Frobenius-Perron operator shares the same Markov property as the Frobenius-Perron operator; the other is the Galerkin projection principle.

(1) The hidden states S = {S_1, S_2, ..., S_N}, which meet the Markov property, where N indicates the number of hidden states.

A Markov chain is a sequence X = (X_n)_{n in N} of random variables that take their values in some countable set F, the state space, such that the Markov property holds,
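This definition can be illustrated with a minimal sketch. The two-state space and the transition probabilities below are invented for illustration; the point is that each step is sampled using only the current state's row of the transition matrix, which is exactly the Markov property.

```python
import random

# Hypothetical chain on the state space F = {"A", "B"} with an
# invented transition matrix P[current][next].
P = {
    "A": {"A": 0.9, "B": 0.1},
    "B": {"A": 0.5, "B": 0.5},
}

def simulate(P, start, steps, seed=0):
    """Simulate the chain: each next state depends only on the current
    state (the Markov property), never on the earlier history."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(steps):
        row = P[path[-1]]
        states = list(row)
        weights = [row[s] for s in states]
        path.append(rng.choices(states, weights=weights)[0])
    return path

path = simulate(P, "A", 20)
```

Because the sampling step reads only `path[-1]`, extending the history with more past states would not change the distribution of the next state.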