
Limit distributions of the upper order statistics for the Lévy-frailty Marshall-Olkin distribution


Abstract

The Marshall-Olkin (MO) distribution is a key model in reliability theory and in risk analysis, where it is used to model the lifetimes of dependent components or entities of a system; the dependence is induced by “shocks” that hit one or more components at a time. Of particular interest is the Lévy-frailty Marshall-Olkin (LFMO) subfamily, since it has few parameters and its nontrivial dependence structure is driven by an underlying Lévy subordinator process. The main contribution of this work is that we derive the precise asymptotic behavior of the upper order statistics of the LFMO distribution. More specifically, we consider a sequence of n univariate random variables jointly distributed as a multivariate LFMO distribution and analyze the order statistics of the sequence as n grows. Our main result states that if the underlying Lévy subordinator is in the normal domain of attraction of a stable distribution with index of stability α, then, after a certain logarithmic centering and scaling, the upper order statistics converge in distribution to a stable distribution if α > 1, or to a simple transformation of it if α ≤ 1. Our result can also yield easily computable confidence intervals for the last failure times, provided that a proper convergence analysis is carried out first.
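The following is a minimal simulation sketch of this centering and scaling; it is not taken from the paper. It assumes the trigger representation T_k = inf{t : S_t ≥ ε_k} with iid unit-exponential triggers used in the Appendix, and it takes S to be a compound-Poisson subordinator with unit jump rate and Pareto(α) jumps, α ∈ (1, 2), so that S_1 lies in the normal domain of attraction of an α-stable law; the values of n, m and the number of replications are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed (not prescribed by the paper): compound-Poisson subordinator with unit jump
# rate and Pareto(alpha) jumps, alpha in (1, 2); n components; m-th upper order statistic.
alpha, rate, n = 1.5, 1.0, 10_000
ES1 = rate * alpha / (alpha - 1)                 # E[S_1] = rate * E[Pareto(alpha) jump]

def upper_order_statistic(m):
    """One sample of T_{m:n}, where T_k = inf{t : S_t >= eps_k} and eps_k are unit exponentials."""
    eps = np.sort(rng.exponential(size=n))       # sorted triggers eps_{1:n} <= ... <= eps_{n:n}
    hits = np.empty(n)
    t, level, idx = 0.0, 0.0, 0
    while idx < n:
        t += rng.exponential(1.0 / rate)         # waiting time until the next jump of S
        level += rng.pareto(alpha) + 1.0         # Pareto(alpha) jump with scale 1
        while idx < n and eps[idx] <= level:     # triggers reached by this jump
            hits[idx] = t
            idx += 1
    return hits[m - 1]

m = n - 1                                        # e.g. the second-largest failure time
T = np.array([upper_order_statistic(m) for _ in range(200)])
Z = (T - np.log(n) / ES1) / (np.log(n) ** (1 / alpha) / ES1)   # logarithmic centering and scaling
print(Z.mean(), Z.std())                         # to be compared against the limiting stable law
```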



Acknowledgements

Javiera Barrera acknowledges the financial support of FONDECYT grant 1161064 and of Programa Iniciativa Científica Milenio NC120062. Guido Lagos acknowledges the financial support of FONDECYT grant 3180767, Programa Iniciativa Científica Milenio NC120062, and the Center for Mathematical Modeling of Universidad de Chile through their grants Proyecto Basal PFB-03 and PIA Fellowship AFB170001.

Author information


Corresponding author

Correspondence to Guido Lagos.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Proof of Lemma 1

Lemma 3

Let \((q_{n})_{n} \subset \mathbb {R}\) be a sequence such that \(q_{n} = o(\sqrt {n})\) as \(n \to +\infty \). Then, \(\left (1+q_{n}/n\right )^{n} \sim e^{q_{n}}\) as \(n \to +\infty \), i.e., \(\lim _{n \to +\infty } \left (1 + q_{n}/n\right )^{n} / e^{q_{n}} = 1\).

Proof

For all n sufficiently large, we can write

$$ \begin{array}{@{}rcl@{}} \lefteqn{ \frac{\left( 1+q_{n}/n\right)^{n}}{ e^{q_{n}}} = \exp \left( n \log(1+q_{n}/n) -q_{n} \right)} \\ & = \exp \left( {\sum}_{j \geq 2} \frac{(-1)^{j+1}}{j} \left( q_{n} / \sqrt{n} \right)^{j} \frac{1}{n^{\frac{j}{2}-1}} \right) \\ & = \exp \left( -\frac{1}{2} \left( q_{n} / \sqrt{n} \right)^{2} + o \left( \left( q_{n} / \sqrt{n}\right)^{2} \right) \right), \end{array} $$

where we used the series expansion of the \(\log \) function around 1 to write the term \(\log (1+q_{n}/n)\) as a series and where in the last equality, we used that qn/n → 0 as \(n \to +\infty \) because \(q_{n} = o(\sqrt {n})\). We conclude by noting that the last term of the equation goes to one as \(n \to +\infty \) since \(q_{n} = o(\sqrt {n})\). □
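As a quick numerical sanity check of the lemma, the following sketch uses the arbitrary choice q_n = n^{0.4} = o(√n); the ratio should approach one (slowly) as n grows.

```python
import numpy as np

# Sanity check of Lemma 3 with q_n = n**0.4 = o(sqrt(n)): the ratio tends to 1.
for n in [10**2, 10**4, 10**6, 10**8]:
    q = n ** 0.4
    ratio = np.exp(n * np.log1p(q / n) - q)   # (1 + q_n/n)^n / e^{q_n}, computed on the log scale
    print(n, ratio)
```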

Lemma 4

Let (pn)n be a sequence in (0,1) such that pn → 0 and \(n p_{n} \to +\infty \) as \(n \to +\infty \). Denoting by b(n,pn) a binomial random variable with parameters (n,pn), and by N a standard normal random variable, it holds that

$$ \begin{array}{@{}rcl@{}} \frac{\mathrm{b}({n},{p_{n}}) - n p_{n}}{\sqrt{n p_{n}}} \Longrightarrow N \end{array} $$
(33)

as \(n \to +\infty \).

Proof

We will prove that the moment generating function of the random variable on the left-hand side of Eq. 33 converges to the moment generating function of a standard normal random variable. Indeed,

$$ \begin{array}{@{}rcl@{}} \lefteqn{ \mathbb{E} \left[ \exp \left( t \frac{\mathrm{b}({n},{p_{n}}) - n p_{n}}{\sqrt{n p_{n}}} \right) \right] = \left( 1-p_{n} + p_{n} e^{t / \sqrt{n p_{n}}} \right)^{n} e^{-t \sqrt{n p_{n}}} } \\ & = \left( 1 + \frac{n p_{n} \left( e^{t / \sqrt{n p_{n}}}-1\right)}{n} \right)^{n} e^{-t \sqrt{n p_{n}}} \\ & = \left[ \left. \left( 1 + \frac{n p_{n} \left( e^{t / \sqrt{n p_{n}}}-1\right)}{n} \right)^{n} \middle/ \exp \left( n p_{n} \left( e^{t / \sqrt{n p_{n}}}-1\right) \right) \right. \right] \\ & \qquad\qquad\qquad\qquad\qquad\qquad \cdot \exp \left( n p_{n} \left( e^{t / \sqrt{n p_{n}}}-1\right) - t \sqrt{n p_{n}} \right). \end{array} $$

Now, by Lemma 3 we have that the term in the square brackets converges to one, since

$$ \begin{array}{@{}rcl@{}} \frac{n p_{n} \left( e^{t / \sqrt{n p_{n}}}-1\right)}{\sqrt{n}} = \frac{n p_{n}}{\sqrt{n}} {\sum}_{j \geq 1} \frac{t^{j}}{j!} \frac{1}{(n p_{n})^{j/2}} = t \sqrt{p_{n}} + \frac{1}{\sqrt{n}} {\sum}_{j \geq 0} \frac{t^{j+2}}{(j+2)!} \frac{1}{(n p_{n})^{j/2}}, \end{array} $$

which converges to zero as \(n \to +\infty \). Additionally, using the series expansion of the exponential function we obtain that

$$ \begin{array}{@{}rcl@{}} \lefteqn{\exp \left( n p_{n} \left( e^{t / \sqrt{n p_{n}}}-1\right) - t \sqrt{n p_{n}} \right)} \\ && = \exp \left( \frac{t^{2}}{2} + \frac{1}{\sqrt{n p_{n}}} {\sum}_{j \geq 0} \frac{t^{j+3}}{(j+3)!} \frac{1}{(n p_{n})^{j/2}} \right), \end{array} $$

which converges to \(\exp (t^{2}/2)\), the moment generating function of a standard normal random variable, since \(n p_{n} \to +\infty \). This concludes the proof. □
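A quick simulation sketch of this convergence, with the arbitrary choice p_n = n^{-1/2} (so that p_n → 0 while np_n = √n → +∞):

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch: the standardized Binomial(n, p_n) with p_n = n**(-1/2) should look standard normal.
for n in [10**3, 10**5, 10**7]:
    p = n ** -0.5
    b = rng.binomial(n, p, size=100_000)
    z = (b - n * p) / np.sqrt(n * p)
    print(n, z.mean(), z.std(), np.mean(z <= 1.0))   # last column should approach Phi(1) ~ 0.8413
```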

We are now able to prove Lemma 1.

Proof of Lemma 1

For part 1. it is enough to show that b(n,pn) ⇒ 0. This holds since \(\mathbb {P} (\mathrm {b}({n},{p_{n}}) = 0) = \left (1-p_{n} \right )^{n} = \left (1 - n p_{n} / n \right )^{n} \sim e^{-n p_{n}} \sim 1\) as \(n \to +\infty \), where the asymptotic equalities hold due to Lemma 3 and npn → 0.

We now prove part 2. For that purpose define first \(F_{n} (t) := \mathbb {P}(\mathrm {b}({n},{p_{n}}) \leq n p_{n} + \sqrt {n p_{n}} t)\) and \(a_{n} := (k_{n} - n p_{n}) / \sqrt {n p_{n}}\). Note that part 2. of Lemma 1 is equivalent to

$$ \begin{array}{@{}rcl@{}} \limsup F_{n}(a_{n}) = 0 \qquad \text{if and only if} \qquad \limsup a_{n} = -\infty. \end{array} $$
(34)

To prove the reverse implication of Eq. 34 note that for all b and all sufficiently large n we have Fn(an) ≤ Fn(b), since an ≤ b for all sufficiently large n. Taking then \(\limsup \) we obtain by Lemma 4 that \(\limsup F_{n}(a_{n}) \leq {\Phi }(b)\), where Φ is the cumulative distribution function of the standard normal distribution. We conclude that \(\limsup F_{n}(a_{n}) = 0\) by making \(b \searrow -\infty \). Now, to prove the direct implication of Eq. 34, let \(\limsup F_{n}(a_{n}) = 0\) and assume, by contradiction, that \(\limsup a_{n} = c > -\infty \). Consider then a subsequence \((a_{n_{j}})_{j}\) such that \(a_{n_{j}} \nearrow c \) as \(j \to +\infty \). Then for all 𝜖 > 0 and all sufficiently large j it holds that

$$ \begin{array}{@{}rcl@{}} F_{n_{j}} (c-\epsilon) \leq F_{n_{j}} (a_{n_{j}}) \leq F_{n_{j}} (c), \end{array} $$
(35)

since \(a_{n_{j}} \in [c-\epsilon , c]\) for all sufficiently large j. Letting \(j \to +\infty \) in Eq. 35 we obtain that

$$ \begin{array}{@{}rcl@{}} {\Phi} (c-\epsilon) \leq \liminf F_{n_{j}} (a_{n_{j}}) \leq \limsup F_{n_{j}} (a_{n_{j}}) \leq {\Phi} (c), \end{array} $$
(36)

where we used that, by Lemma 4, the sequence \((F_{n_{j}})_{j}\) converges pointwise to Φ. Taking 𝜖 ↘ 0 in Eq. 36 and using the continuity of Φ, we obtain that \(\lim _{j} F_{n_{j}} (a_{n_{j}}) = {\Phi }(c)\). Thus, in particular, Φ(c) is an accumulation point of (Fn(an))n, which in turn implies \({\Phi }(c) \leq \limsup F_{n}(a_{n})\). But Φ(c) > 0 because \(c>-\infty \), and we had assumed \(\limsup F_{n}(a_{n}) = 0\), a contradiction. This shows that necessarily \(\limsup a_{n} = -\infty \). □

Sketch of proof for joint convergence of part 1. of Theorem 1

In this part we sketch the proof of the joint convergence stated in part 1. of Theorem 1, in the case of two sequences of upper order statistics of the LFMO distribution.

For that, we consider two sequences \(({m_{n}^{1}})_{n}\) and \(({m_{n}^{2}})_{n}\) and without loss of generality assume \({m_{n}^{1}} < {m_{n}^{2}}\) for all n. We want to analyze the behavior of the following probability as \(n \to +\infty \):

$$ \begin{array}{@{}rcl@{}} \mathbb{P} \left( \frac{T_{{m_{n}^{1}} : n} - \log n / \mathbb{E} S_{1}}{ (\log n)^{1/\alpha} / \mathbb{E} S_{1} } > t^{1}, \quad \frac{T_{{m_{n}^{2}} : n} - \log n / \mathbb{E} S_{1}}{ (\log n)^{1/\alpha} / \mathbb{E} S_{1} } > t^{2} \right). \end{array} $$
(37)

The interesting case is when t1 < t2, since in the case t1 ≥ t2 the event of interest reduces to \(\left (T_{{m_{n}^{1}} : n} - \log n / \mathbb {E} S_{1} \right ) / \left ((\log n)^{1/\alpha } / \mathbb {E} S_{1}\right ) > t^{1}\).

Assume from now on that 0 < t1 < t2 and note that for any 0 < u1 < u2 we have that

$$ \begin{array}{@{}rcl@{}} &&\lefteqn{ \mathbb{P} \left( T_{{m_{n}^{1}} : n} > u^{1}, \quad T_{{m_{n}^{2}} : n} > u^{2} \right) } \\ & =& \mathbb{P} \left( \varepsilon_{{m_{n}^{1}} : n} > S_{u^{1}}, \quad \varepsilon_{{m_{n}^{2}} : n} > S_{u^{2}} \right) \\ & =& \mathbb{E} \left[ \mathbb{P} \left( \left. \varepsilon_{{m_{n}^{1}} : n} > S_{u^{1}}, \quad \varepsilon_{{m_{n}^{2}} : n} > S_{u^{2}} \right| (S_{s})_{s \in \mathbb{R}_{+}} \right) \right] \\ & =& \mathbb{E} \left[ \mathbb{P} \left( \left. {m_{n}^{1}} > B^{1}, \quad {m_{n}^{2}} > B^{1}+B^{2} \right| (S_{s})_{s \in \mathbb{R}_{+}} \right) \right] \\ & =& \mathbb{E} \left[ \mathbb{P} \left( \left. {m_{n}^{1}} > {B^{1}_{n}}, \quad {B^{3}_{n}} > n-{m_{n}^{2}} \right| (S_{s})_{s \in \mathbb{R}_{+}} \right) \right], \end{array} $$

where ε1:n < ε2:n < … < εn:n are the order statistics of the triggers ε1,…,εn, and (B1,B2,B3) is a random vector following a multinomial distribution with n trials and cell probabilities \((p^{1}, p^{2}, p^{3}) := (1-e^{-S_{u^{1}}}, e^{-S_{u^{1}}} - e^{-S_{u^{2}}}, e^{-S_{u^{2}}})\).
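The following sketch checks this conditional multinomial step numerically for one frozen subordinator path, i.e., with S_{u^1} and S_{u^2} fixed at arbitrary illustrative values s1 and s2; the empirical cell frequencies of the trigger counts should match (p1, p2, p3).

```python
import numpy as np

rng = np.random.default_rng(3)

# With S_{u^1} = s1 and S_{u^2} = s2 frozen (arbitrary values), the counts of unit-exponential
# triggers in (0, s1], (s1, s2] and (s2, inf) follow a Multinomial(n; p1, p2, p3) distribution.
n, s1, s2 = 1_000, 2.0, 3.0
p = np.array([1 - np.exp(-s1), np.exp(-s1) - np.exp(-s2), np.exp(-s2)])

eps = rng.exponential(size=(5_000, n))                      # 5,000 independent trigger samples
counts = np.stack([(eps <= s1).sum(axis=1),
                   ((eps > s1) & (eps <= s2)).sum(axis=1),
                   (eps > s2).sum(axis=1)], axis=1)

print(counts.mean(axis=0) / n)   # empirical cell frequencies
print(p)                         # (p^1, p^2, p^3) from the text; the two rows should agree
```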

Now note that taking in particular

$$ \begin{array}{@{}rcl@{}} u^{1} = {u^{1}_{n}} := \frac{ \log n + t^{1} (\log n)^{1/\alpha} }{ \mathbb{E} S_{1} } \end{array} $$

and

$$ \begin{array}{@{}rcl@{}} u^{2} = {u^{2}_{n}} := \frac{ \log n + t^{2} (\log n)^{1/\alpha} }{ \mathbb{E} S_{1} } \end{array} $$

for t1 < t2 (cf. Eq. 17) we see that \({u^{1}_{n}} < {u^{2}_{n}}\) and obtain that (p1,p2,p3) take the form

$$ \begin{array}{@{}rcl@{}} &&p^{1} = {p^{1}_{n}} = 1-\frac{e^{-(\sigma {{\Sigma}^{1}_{n}} + t^{1})(\log n)^{1/\alpha}}}{n} \\ &&p^{2} = {p^{2}_{n}} = \frac{e^{-(\sigma {{\Sigma}^{1}_{n}} + t^{1})(\log n)^{1/\alpha}}}{n} \\ &&\qquad\quad \cdot \left( 1 - e^{- \left( \sigma {{\Sigma}^{2}_{n}} (t^{2}-t^{1})^{1/\alpha} (\log n)^{1/{\alpha^{2}}} + (t^{2}-t^{1})(\log n)^{1/\alpha} \right)} \right) \\ &&p^{3} = {p^{3}_{n}} = \frac{e^{-(\sigma {{\Sigma}^{1}_{n}} + t^{1})(\log n)^{1/\alpha}}}{n} e^{-\left( \sigma {{\Sigma}^{2}_{n}} (t^{2}-t^{1})^{1/\alpha} (\log n)^{1/{\alpha^{2}}} + (t^{2}-t^{1})(\log n)^{1/\alpha} \right)} \end{array} $$

where

$$ \begin{array}{@{}rcl@{}} {{\Sigma}^{1}_{n}} := \left( \frac{S_{{u^{1}_{n}}} -{{u^{1}_{n}}} \mathbb{E} S_{1}}{\sigma \left( {u^{1}_{n}} \mathbb{E} S_{1} \right)^{1 / \alpha}} \right) \left( 1 + t^{1} (\log n)^{-(\alpha-1)/\alpha} \right)^{1 / \alpha} \end{array} $$

and

$$ \begin{array}{@{}rcl@{}} {{\Sigma}^{2}_{n}} := \frac{(S_{{u^{2}_{n}}}-S_{{u^{1}_{n}}}) - ({{u^{2}_{n}}}-{{u^{1}_{n}}}) \mathbb{E} S_{1}}{\sigma \left( ({u^{2}_{n}}-{u^{1}_{n}}) \mathbb{E} S_{1} \right)^{1 / \alpha}} \end{array} $$

(cf. Eq. 16). Importantly, note that \({{\Sigma }_{n}^{1}}\) and \({{\Sigma }_{n}^{2}}\) are independent for all n, by the independence of the increments of the Lévy subordinator S. Moreover, since t1 < t2, both \({u^{1}_{n}}\) and \({u^{2}_{n}}-{u^{1}_{n}}\) tend to \(+\infty \) as \(n \to +\infty \); hence, using that S1 is in the normal domain of attraction of a stable distribution, we obtain that \({{\Sigma }_{n}^{1}}\) and \({{\Sigma }_{n}^{2}}\) converge jointly in distribution to independent Stableα(1, 1, 0) random variables.
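To see why \({p^{1}_{n}}\) takes the form displayed above, substitute the definitions of \({u^{1}_{n}}\) and \({{\Sigma }^{1}_{n}}\):

$$ \begin{array}{@{}rcl@{}} S_{{u^{1}_{n}}} & = & {u^{1}_{n}} \mathbb{E} S_{1} + \sigma \left( {u^{1}_{n}} \mathbb{E} S_{1} \right)^{1/\alpha} \frac{S_{{u^{1}_{n}}} - {u^{1}_{n}} \mathbb{E} S_{1}}{\sigma \left( {u^{1}_{n}} \mathbb{E} S_{1} \right)^{1/\alpha}} \\ & = & \log n + t^{1} (\log n)^{1/\alpha} + \sigma (\log n)^{1/\alpha} \left( 1 + t^{1} (\log n)^{-(\alpha-1)/\alpha} \right)^{1/\alpha} \frac{S_{{u^{1}_{n}}} - {u^{1}_{n}} \mathbb{E} S_{1}}{\sigma \left( {u^{1}_{n}} \mathbb{E} S_{1} \right)^{1/\alpha}} \\ & = & \log n + \left( \sigma {{\Sigma}^{1}_{n}} + t^{1} \right) (\log n)^{1/\alpha}, \end{array} $$

so that \({p^{1}_{n}} = 1 - e^{-S_{{u^{1}_{n}}}} = 1 - e^{-(\sigma {{\Sigma}^{1}_{n}} + t^{1})(\log n)^{1/\alpha}}/n\); the expressions for \({p^{2}_{n}}\) and \({p^{3}_{n}}\) follow in the same way from the definition of \({{\Sigma }^{2}_{n}}\).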

It follows that

$$ \begin{array}{@{}rcl@{}} && \mathbb{P} \left( \frac{T_{{m_{n}^{1}} : n} - \log n / \mathbb{E} S_{1}}{ (\log n)^{1/\alpha} / \mathbb{E} S_{1} } > t^{1}, \quad \frac{T_{{m_{n}^{2}} : n} - \log n / \mathbb{E} S_{1}}{ (\log n)^{1/\alpha} / \mathbb{E} S_{1} } > t^{2} \right) \\ && = \mathbb{P} \left( T_{{m_{n}^{1}} : n} > {u^{1}_{n}}, \quad T_{{m_{n}^{2}} : n} > {u^{2}_{n}} \right) \\ && = \mathbb{E} \left[ \mathbb{P} \left( \left. {m_{n}^{1}} > {B^{1}_{n}}, \quad {B^{3}_{n}} > n-{m_{n}^{2}} \right| (S_{s})_{s \in \mathbb{R}_{+}} \right) \right], \end{array} $$

with \(({B^{1}_{n}},{B^{2}_{n}},{B^{3}_{n}})\) following a multinomial distribution with n trials and probabilities \(({p^{1}_{n}}, {p^{2}_{n}}, {p^{3}_{n}})\) defined as above. Writing then \(g_{n} (q^{1}, q^{2}, q^{3}; \ k^{1}, k^{2}) := \mathbb {P}(k^{1} > K^{1} , \quad K^{3}>n-k^{2})\), where (K1,K2,K3) follows a multinomial distribution with n trials and probabilities (q1,q2,q3), we have that

$$ \begin{array}{@{}rcl@{}} \mathbb{P} \left( \left. {m_{n}^{1}} > {B^{1}_{n}}, \quad {B^{3}_{n}} > n-{m^{2}_{n}} \right| (S_{s})_{s \in \mathbb{R}_{+}} \right) = g_{n}({p^{1}_{n}}, {p^{2}_{n}}, {p^{3}_{n}}; \ {m^{1}_{n}}, {m^{2}_{n}}) \end{array} $$

almost surely. Paralleling then the proof of Theorem 1 given in Section 5.2, and since we already know that \(({{\Sigma }^{1}_{n}}, {{\Sigma }^{2}_{n}}) \Rightarrow ({\Sigma }^{1}_{\infty }, {\Sigma }^{2}_{\infty })\) with \({\Sigma }^{1}_{\infty }, {\Sigma }^{2}_{\infty }\) iid Stableα(1, 1, 0) distributed, it only remains to determine under which growth conditions on \(({m^{1}_{n}}, {m^{2}_{n}})\) it holds that for all \(x,y \in \mathbb {R}\)

$$ \begin{array}{@{}rcl@{}} g_{n} \left( {q^{1}_{n}}, {q^{2}_{n}}, {q^{3}_{n}}; \ {m^{1}_{n}}, {m^{2}_{n}} \right) \to \textbf{1}_{\left\{\sigma x + t^{1} < 0, \ \sigma x + t^{2} < 0\right\}} \ \text{as} \ n \to +\infty \end{array} $$
(38)

for \(({q^{1}_{n}}, {q^{2}_{n}}, {q^{3}_{n}})\) defined as

$$ \begin{array}{@{}rcl@{}} {q^{1}_{n}} &:= & 1-\frac{e^{-(\sigma x + t^{1})(\log n)^{1/\alpha}}}{n} \\ {q^{2}_{n}} &:= & \frac{e^{-(\sigma x + t^{1})(\log n)^{1/\alpha}}}{n} \left[ 1 - e^{- \left( \sigma y (t^{2}-t^{1})^{1/\alpha} (\log n)^{1/{\alpha^{2}}} + (t^{2}-t^{1})(\log n)^{1/\alpha} \right)} \right] \\ {q^{3}_{n}} &:= & \frac{e^{-(\sigma x + t^{1})(\log n)^{1/\alpha}}}{n} e^{-\left( \sigma y (t^{2}-t^{1})^{1/\alpha} (\log n)^{1/{\alpha^{2}}} + (t^{2}-t^{1})(\log n)^{1/\alpha} \right)} \\ &= & \frac{e^{- \left[ \sigma x + t^{2} + \sigma y (t^{2}-t^{1})^{1/\alpha} (\log n)^{(1-\alpha)/{\alpha^{2}}} \right] (\log n)^{1/\alpha} }}{n} \end{array} $$

We claim then that Eq. 38 will hold for all \(x, y \in \mathbb {R}\) whenever the double sequence \((({m^{1}_{n}}, {m^{2}_{n}}))_{n}\) satisfies \(\log (n-{m^{1}_{n}}) / (\log n)^{1/\alpha } \to 0\) and \({m_{n}^{1}} < {m_{n}^{2}}\) for all n. Indeed, first note that the term \(\sigma y (t^{2}-t^{1})^{1/\alpha } (\log n)^{(1-\alpha )/\alpha ^{2}}\) in \({q^{3}_{n}}\) vanishes for all \(y \in \mathbb {R}\) as \(n \to +\infty \), since α ∈ (1, 2). Then, by paralleling again the proof of Theorem 1 in Section 5.2 and writing an analogue of Lemma 1 for multinomial distributions, we conclude that under the latter conditions the probability in Eq. 37 converges to \(\mathbb {E} \left [ \mathbf {1}_{\left \{\sigma {\Sigma }^{1}_{\infty } + t^{1} < 0, \ \sigma {\Sigma }^{1}_{\infty } + t^{2} < 0\right \}} \right ] = \mathbb {P}(-\sigma {\Sigma }^{1}_{\infty } > t^{1}, \ -\sigma {\Sigma }^{1}_{\infty } > t^{2})\), with \(-\sigma {\Sigma }^{1}_{\infty }\) distributed as a Stableα(σ,− 1, 0) random variable.
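As a rough Monte Carlo illustration of the claimed limit in Eq. 38 (a sketch with arbitrary parameter values, not part of the argument): for a point x with σx + t1 < 0 and σx + t2 < 0, the estimate of gn should approach one as n grows.

```python
import numpy as np

rng = np.random.default_rng(4)

# Arbitrary illustrative parameters: alpha in (1, 2), sigma > 0, t1 < t2, and a point (x, y)
# with sigma*x + t1 < 0 and sigma*x + t2 < 0, so that the indicator in Eq. 38 equals one.
alpha, sigma, t1, t2, x, y = 1.5, 1.0, 0.5, 1.0, -1.5, 0.3

def g_hat(n, reps=20_000):
    """Monte Carlo estimate of g_n(q1, q2, q3; m1, m2) = P(K1 < m1, K3 > n - m2)."""
    L = np.log(n) ** (1 / alpha)
    e1 = np.exp(-(sigma * x + t1) * L) / n
    e2 = e1 * np.exp(-(sigma * y * (t2 - t1) ** (1 / alpha) * np.log(n) ** (1 / alpha**2)
                       + (t2 - t1) * L))
    q = [1 - e1, e1 - e2, e2]                        # (q^1_n, q^2_n, q^3_n) as defined above
    m1, m2 = n - 2, n                                # log(n - m1) / (log n)^{1/alpha} -> 0
    K = rng.multinomial(n, q, size=reps)
    return np.mean((K[:, 0] < m1) & (K[:, 2] > n - m2))

for n in [10**3, 10**4, 10**5, 10**6]:
    print(n, g_hat(n))                               # should increase towards one
```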


Cite this article

Barrera, J., Lagos, G. Limit distributions of the upper order statistics for the Lévy-frailty Marshall-Olkin distribution. Extremes 23, 603–628 (2020). https://doi.org/10.1007/s10687-020-00386-z
