In probability theory, Cantelli's inequality (also called the Chebyshev–Cantelli inequality and the one-sided Chebyshev inequality) is an improved version of Chebyshev's inequality for one-sided tail bounds.[1][2][3] The inequality states that, for \(\lambda > 0\),

\[\Pr(X - \mathbb{E}[X] \ge \lambda) \le \frac{\sigma^2}{\sigma^2 + \lambda^2},\]
where

\(X\) is a real-valued random variable,
\(\Pr\) is the probability measure,
\(\mathbb{E}[X]\) is the expected value of \(X\),
\(\sigma^2\) is the variance of \(X\).

Applying the Cantelli inequality to \(-X\) gives a bound on the lower tail,

\[\Pr(X - \mathbb{E}[X] \le -\lambda) \le \frac{\sigma^2}{\sigma^2 + \lambda^2}, \qquad \lambda > 0.\]
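As a numerical illustration, the sketch below (illustrative only; the exponential distribution, for which \(\mu = \sigma^2 = 1\), is an arbitrary choice) estimates both tails by Monte Carlo and checks them against the Cantelli bound.

```python
import numpy as np

rng = np.random.default_rng(0)

# Exponential(1): mean = 1, variance = 1 (an arbitrary test distribution).
n = 1_000_000
x = rng.exponential(scale=1.0, size=n)
mu, sigma2 = 1.0, 1.0

for lam in [0.5, 1.0, 2.0]:
    upper_emp = np.mean(x - mu >= lam)     # Pr(X - E[X] >= lambda)
    lower_emp = np.mean(x - mu <= -lam)    # Pr(X - E[X] <= -lambda)
    cantelli = sigma2 / (sigma2 + lam**2)  # the same bound covers each tail
    print(f"lambda={lam}: upper={upper_emp:.4f}, lower={lower_emp:.4f}, "
          f"Cantelli bound={cantelli:.4f}")
```

Because the exponential distribution is skewed, the two empirical tails differ noticeably, yet the single Cantelli bound covers both.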
While the inequality is often attributed to Francesco Paolo Cantelli, who published it in 1928,[4] it originates in Chebyshev's work of 1874.[5] When bounding the event that a random variable deviates from its mean in only one direction (positive or negative), Cantelli's inequality gives an improvement over Chebyshev's inequality. The Chebyshev inequality has "higher moments versions" and "vector versions", and so does the Cantelli inequality.

Comparison to Chebyshev's inequality

For one-sided tail bounds, Cantelli's inequality is better, since Chebyshev's inequality can only get

\[\Pr(X - \mathbb{E}[X] \ge \lambda) \le \Pr(|X - \mathbb{E}[X]| \ge \lambda) \le \frac{\sigma^2}{\lambda^2}.\]

On the other hand, for two-sided tail bounds, Cantelli's inequality gives

\[\Pr(|X - \mathbb{E}[X]| \ge \lambda) = \Pr(X - \mathbb{E}[X] \ge \lambda) + \Pr(X - \mathbb{E}[X] \le -\lambda) \le \frac{2\sigma^2}{\sigma^2 + \lambda^2},\]

which is always worse than Chebyshev's inequality (when \(\lambda \ge \sigma\); otherwise, both inequalities bound a probability by a value greater than one, and so are trivial).
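To make the comparison concrete, the following sketch (with \(\sigma^2 = 1\) assumed purely for illustration) tabulates the three bounds at a few thresholds; for \(\lambda > \sigma\) the one-sided Cantelli bound is the smallest and the two-sided Cantelli bound the largest.

```python
# Compare Cantelli and Chebyshev bounds for sigma^2 = 1 (assumed for illustration).
sigma2 = 1.0
for lam in [1.0, 1.5, 2.0, 3.0]:
    cantelli_one_sided = sigma2 / (sigma2 + lam**2)
    chebyshev = sigma2 / lam**2  # two-sided Chebyshev (also its best one-sided bound)
    cantelli_two_sided = 2 * sigma2 / (sigma2 + lam**2)
    print(f"lambda={lam}: Cantelli one-sided={cantelli_one_sided:.3f}, "
          f"Chebyshev={chebyshev:.3f}, Cantelli two-sided={cantelli_two_sided:.3f}")
```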

Proof

Let \(X\) be a real-valued random variable with finite variance \(\sigma^2\) and expectation \(\mu\), and define \(Y = X - \mu\) (so that \(\mathbb{E}[Y] = 0\) and \(\operatorname{Var}(Y) = \sigma^2\)).

Then, for any \(u \ge 0\), we have

\[\Pr(Y \ge \lambda) = \Pr(Y + u \ge \lambda + u) \le \Pr\left((Y + u)^2 \ge (\lambda + u)^2\right) \le \frac{\mathbb{E}\left[(Y + u)^2\right]}{(\lambda + u)^2} = \frac{\sigma^2 + u^2}{(\lambda + u)^2},\]

the second step using that \(\lambda + u > 0\), and the last inequality being a consequence of Markov's inequality. As the above holds for any choice of \(u \ge 0\), we can choose to apply it with the value that minimizes the function \(u \mapsto \frac{\sigma^2 + u^2}{(\lambda + u)^2}\) over \(u \ge 0\). By differentiating, this can be seen to be \(u^* = \sigma^2/\lambda\), leading to

\[\Pr(Y \ge \lambda) \le \frac{\sigma^2 + (\sigma^2/\lambda)^2}{(\lambda + \sigma^2/\lambda)^2} = \frac{\sigma^2}{\sigma^2 + \lambda^2} \qquad \text{if } \lambda > 0.\]
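The differentiation step can be verified symbolically. The sketch below (an independent check, not part of the original proof) uses SymPy to find the minimizing shift and recover the bound.

```python
import sympy as sp

u, lam, sigma2 = sp.symbols('u lambda sigma2', positive=True)

# The bound obtained from Markov's inequality, as a function of the shift u.
f = (sigma2 + u**2) / (lam + u)**2

# Solve f'(u) = 0 for the minimizing shift.
crit = sp.solve(sp.diff(f, u), u)
print(crit)  # [sigma2/lambda]

# Substituting u = sigma2/lambda recovers Cantelli's bound.
bound = sp.simplify(f.subs(u, sigma2 / lam))
print(bound)  # sigma2/(lambda**2 + sigma2)
```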

Generalizations

Various stronger inequalities can be shown. He, Zhang, and Zhang showed[6] (Corollary 2.3) that when \(\mathbb{E}[X] = 0\) and \(\mathbb{E}[X^2] = 1\):

\[\Pr(X \ge \lambda) \ge \frac{(2\sqrt{3} - 3)\left(1 - \lambda^2\right)^2}{\mathbb{E}[X^4]} \qquad \text{for } 0 \le \lambda \le 1.\]

In the case \(\lambda = 0\) this matches a bound in Berger's "The Fourth Moment Method",[7]

\[\Pr(X \ge 0) \ge \frac{2\sqrt{3} - 3}{\mathbb{E}[X^4]}.\]

This improves over Cantelli's inequality in that we can get a non-zero lower bound even for deviations at or above the mean (\(\lambda \ge 0\)), where Cantelli's inequality gives no non-trivial lower bound.
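As a sanity check of the fourth-moment bound as stated above, the sketch below (illustrative only; the standard normal, with \(\mathbb{E}[X^4] = 3\), is an arbitrary choice) compares it with the exact Gaussian tail.

```python
import math

# Standard normal: E[X] = 0, E[X^2] = 1, E[X^4] = 3.
fourth_moment = 3.0
c = 2 * math.sqrt(3) - 3  # ~0.4641, the constant in Berger's bound

def normal_upper_tail(lam: float) -> float:
    """Exact Pr(X >= lam) for a standard normal random variable."""
    return 0.5 * math.erfc(lam / math.sqrt(2))

for lam in [0.0, 0.25, 0.5, 0.75]:
    lower_bound = c * (1 - lam**2)**2 / fourth_moment
    print(f"lambda={lam}: bound={lower_bound:.4f} "
          f"<= Pr(X>=lambda)={normal_upper_tail(lam):.4f}")
```

At \(\lambda = 0\) this prints the Berger bound \((2\sqrt{3}-3)/3 \approx 0.155\), comfortably below the true value \(0.5\).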

References

  1. ^ Boucheron, Stéphane; Lugosi, Gábor; Massart, Pascal (2013). Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford: Oxford University Press. ISBN 978-0-19-953525-5. OCLC 829910957.
  2. ^ "Tail and Concentration Inequalities" by Hung Q. Ngo
  3. ^ "Concentration-of-measure inequalities" by Gábor Lugosi
  4. ^ Cantelli, F. P. (1928). "Sui confini della probabilità". Atti del Congresso Internazionale dei Matematici. Bologna. 6: 47–59.
  5. ^ Ghosh, B. K. (2002). "Probability Inequalities Related to Markov's Theorem". The American Statistician. 56 (3): 186–190.
  6. ^ He, S.; Zhang, J.; Zhang, S. (2010). "Bounding probability of small deviation: A fourth moment approach". Mathematics of Operations Research. 35 (1): 208–232. doi:10.1287/moor.1090.0438. S2CID 11298475.
  7. ^ Berger, Bonnie (August 1997). "The Fourth Moment Method". SIAM Journal on Computing. 26 (4): 1188–1207. doi:10.1137/S0097539792240005. ISSN 0097-5397. S2CID 14313557.