Chebyshev's inequality




(ˈtʃɛbɪˌʃɒfs)
n
(Statistics) the theorem that the probability that a random variable differs from its mean by more than k standard deviations is less than or equal to 1/k²
[named after P. L. Chebyshev (1821–94), Russian mathematician]
Collins English Dictionary – Complete and Unabridged, 12th Edition 2014 © HarperCollins Publishers 1991, 1994, 1998, 2000, 2003, 2006, 2007, 2009, 2011, 2014
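The bound in the definition above can be checked empirically. The sketch below simulates draws from an exponential distribution (an arbitrary choice made for illustration; the inequality holds for any distribution with finite variance) and compares the observed tail fraction against 1/k²:

```python
import random
import statistics

# Empirical check of Chebyshev's inequality:
#   P(|X - mu| > k*sigma) <= 1/k^2.
# The exponential distribution and sample size are illustrative assumptions.
random.seed(0)
sample = [random.expovariate(1.0) for _ in range(100_000)]
mu = statistics.fmean(sample)
sigma = statistics.pstdev(sample)

for k in (1.5, 2, 3):
    frac = sum(abs(x - mu) > k * sigma for x in sample) / len(sample)
    print(f"k={k}: observed tail {frac:.4f} <= bound {1 / k**2:.4f}")
```

For most distributions the observed tail is far below the Chebyshev bound, which is tight only for specially constructed two-point distributions.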
References in periodicals archive
Next there exists a finite constant K > 0 such that condition (i) holds for all f ∈ L. Applying Chebyshev's inequality to the Lebesgue measure of the set E_f = {x ∈ ℝ | (log(1 + |f*(x)|))^p > K/δ} for f ∈ L, the following estimate is valid:
What is interesting about (2) in Theorem 8 is the possibility of combining it with Chebyshev's inequality to obtain rates of convergence.
It is stressed that Chebyshev's inequality is rarely used to set confidence intervals for a mean value (in our case, the average fuel efficiency for both the experimental and the calculated data).
Applying Chebyshev's inequality to sin A/2, sin B/2, sin C/2 and cos A/2, cos C/2, we get
The idea of the proof of Theorem 2.1 is, as for the binary search tree [6, Theorem 2.1], to use the well-known Chebyshev inequality to prove (ii), (iii) and (iv); (i) is very easy to prove. (Recall that Chebyshev's inequality is a useful tool for proving that a random variable is sharply concentrated about its mean value.)
In discussing the law of large numbers it is helpful to refer to Chebyshev's inequality:
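The connection to the law of large numbers comes from applying Chebyshev's inequality to the sample mean: for i.i.d. draws with mean μ and variance σ², P(|X̄_n − μ| ≥ ε) ≤ σ²/(nε²), which tends to 0 as n grows. A minimal sketch (uniform(0, 1) draws and the trial counts are assumptions chosen for illustration):

```python
import random
import statistics

# Chebyshev's bound on the sample mean of n i.i.d. draws:
#   P(|mean_n - mu| >= eps) <= sigma^2 / (n * eps^2).
# For uniform(0, 1): mu = 0.5, sigma^2 = 1/12.
random.seed(1)
mu, var, eps, trials = 0.5, 1 / 12, 0.05, 500

for n in (100, 1000, 10000):
    hits = sum(
        abs(statistics.fmean(random.random() for _ in range(n)) - mu) >= eps
        for _ in range(trials)
    )
    print(f"n={n}: observed {hits / trials:.4f} <= bound {var / (n * eps**2):.4f}")
```

The observed frequency of large deviations shrinks much faster than the bound, but the bound alone already suffices to prove the weak law of large numbers.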