Article

Reliability Estimation for Stress-Strength Model Based on Unit-Half-Normal Distribution

1
Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, Diagonal Las Torres 2640, Peñalolén, Santiago 7941169, Chile
2
Departamento de Matemática, Facultad de Ingeniería, Universidad de Atacama, Av. Copayapu 485, Copiapó 1532297, Chile
3
CIMFAV-INGEMAT, Facultad de Ingeniería, Universidad de Valparaíso, General Cruz 222, Valparaíso 2362905, Chile
*
Author to whom correspondence should be addressed.
Symmetry 2022, 14(4), 837; https://doi.org/10.3390/sym14040837
Submission received: 16 March 2022 / Revised: 5 April 2022 / Accepted: 14 April 2022 / Published: 18 April 2022

Abstract

Many lifetime distribution models have successfully served as population models for risk analysis and reliability mechanisms. We propose a novel estimation procedure for the stress–strength reliability in the case of two independent unit-half-normal distributions with different shape parameters, which can fit asymmetric data with either positive or negative skew. We obtain the maximum likelihood estimator of the reliability, its asymptotic distribution, and exact and asymptotic confidence intervals. In addition, confidence intervals for the model parameters are constructed using bootstrap techniques. We study the performance of the estimators via Monte Carlo simulations in terms of mean squared error, average bias, average length, and coverage probabilities. Finally, we apply the proposed reliability model to the analysis of burr measurements on iron sheets.

1. Introduction

Recently, ref. [1] introduced a new one-parameter distribution defined on the unit interval, with a simple structure based on the half-normal distribution, called the unit-half-normal distribution. It is a good alternative to the Topp–Leone distribution [2], the Kumaraswamy distribution [3], the unit-logistic distribution [4], the two-parameter beta distribution (or the Pearson type IV distribution) [5], the unit-Birnbaum–Saunders distribution [6], and the unit-Lindley distribution [7], among others. The probability density function (PDF) of the unit-half-normal distribution is as follows:
$$ f_X(x;\eta)=\frac{2}{\eta(1-x)^2}\,\phi\!\left(\frac{x}{\eta(1-x)}\right),\qquad 0\le x<1, \tag{1} $$
where η > 0 is a scale parameter and ϕ(·) is the PDF of the standard normal distribution. From now on, a random variable X with the PDF defined in (1) will be denoted by X ∼ UHN(η). The corresponding cumulative distribution function (CDF) is
$$ F_X(x;\eta)=2\,\Phi\!\left(\frac{x}{\eta(1-x)}\right)-1, \tag{2} $$
where Φ(·) is the CDF of the standard normal distribution. Figure 1 and Figure 2 illustrate some of the possible shapes of the unit-half-normal distribution for selected values of the parameter η. From these figures, we observe that the PDF shapes are unimodal and asymmetric (left- and right-skewed). As shown by [1], the unit-half-normal distribution belongs to the exponential family of probability distributions.
The literature demonstrates that estimation of the stress–strength model, R = P(Y < X), has already been performed under the assumption that X and Y are independent random variables with positive support and different degrees of skewness and kurtosis, described by the same kind of probability distribution. We refer the reader to [8] for a review and to the references therein for more information on this claim. Much less attention has been given to the case where X and Y take values in a limited range, such as proportions, percentages and fractions. The main goal of this work is to develop the inferential procedure for the stress–strength parameter R when X and Y are independent UHN(η) and UHN(λ), respectively. The stress–strength parameter plays an important role in reliability analysis: letting Y and X denote the stress and the strength, respectively, a single active system fails if the applied stress exceeds its strength.
The rest of the paper is structured as follows. The next section presents the entropy and mean residual life of a random variable with the UHN distribution. Then, we present an expression for the stress–strength reliability R, the MLE of R, its exact distribution and some properties, and three algorithms to simulate random variables from R. In the subsequent section, confidence intervals for R are developed by means of exact, asymptotic and bootstrap approaches. Next, computational simulations are presented to evaluate the performance of the MLE and of the exact, asymptotic and bootstrap confidence intervals, followed by a section containing an application in the context of burr measurements on iron sheets. Finally, some concluding remarks are presented in the last section.

2. Entropy and Mean Residual Life

In this section, we present the entropy and the mean residual life of a random variable with the UHN distribution.

2.1. Entropy

The entropy of a random variable X with PDF (1) is a measure of the variation of its uncertainty: a large entropy value indicates greater uncertainty in the data. Using the substitution U = X/(η(1 − X)) and numerical integration, the entropy can be calculated as a function of the scale parameter η. The Shannon entropy [9], defined by E(−log f_X(X)), is equal to
$$ \xi:=E\big(-\log f_X(X)\big)=\frac{1}{2}+\frac{1}{2}\log\frac{\pi\eta^2}{2}-4\int_0^{\infty}\log(1+\eta u)\,\phi(u)\,du. $$
Using the Taylor expansion of log(1 + x) around zero with x replaced by ηu, which requires 0 < ηu < 1, we have
$$ \xi=\frac{1}{2}+\frac{1}{2}\log\frac{\pi\eta^2}{2}-4\int_0^{\infty}\phi(u)\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k}\,\eta^k u^k\,du. $$
Thus, by interchanging the order of summation and integration, and truncating the series at order K, the final form of the entropy is given by
$$ \xi\approx\frac{1}{2}+\frac{1}{2}\log\frac{\pi\eta^2}{2}-\frac{1}{\sqrt{\pi}}\sum_{k=1}^{K}\frac{(-1)^{k+1}\,2^{1+k/2}}{k}\,\eta^k\,\Gamma\!\left(\frac{1+k}{2}\right). \tag{3} $$
Figure 3 shows the value of (3) for values of η from zero to one; the exact value was computed by numerical integration, for which the software R [10] provides the function integrate. We can note that the second- and third-order approximations are not as good as the fourth-order one. However, all approximations are accurate, especially for values 0 < η < 1/2.
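As a quick check, the exact entropy and its series approximation can be computed in a few lines of R. The following is a minimal sketch; the function names entropy_exact and entropy_series are ours, not from [1] or [10].

```r
# Exact entropy of UHN(eta) by numerical integration, and the series
# approximation of Equation (3) truncated at order K.
entropy_exact <- function(eta) {
  0.5 + 0.5 * log(pi * eta^2 / 2) -
    4 * integrate(function(u) log(1 + eta * u) * dnorm(u), 0, Inf)$value
}
entropy_series <- function(eta, K = 4) {
  k <- 1:K
  0.5 + 0.5 * log(pi * eta^2 / 2) -
    sum((-1)^(k + 1) * 2^(1 + k / 2) * eta^k * gamma((1 + k) / 2) / k) / sqrt(pi)
}
entropy_exact(0.3)          # exact value at eta = 0.3
entropy_series(0.3, K = 4)  # fourth-order approximation
```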

2.2. Mean Residual Life

The mean residual life, or life expectancy, is an important characteristic of the model. It gives the expected additional lifetime given that a component has survived until time t. For a non-negative continuous random variable X ∼ UHN(η), the mean residual life function is defined as
$$ E(X-t\mid X>t)=E(X\mid X>t)-t, \tag{4} $$
where t ∈ (0, 1). The above conditional expectation is given by
$$ E(X\mid X>t)=\int_t^1\frac{x\,f_X(x)}{P(X>t)}\,dx=\int_t^1\frac{x\,f_X(x)}{1-F_X(t)}\,dx. $$
Calculation of the numerator is done in the same way as the calculation of the mean. Thus,
$$ \int_t^1 x\,f_X(x)\,dx=2\int_t^1\frac{x}{\eta(1-x)^2}\,\phi\!\left(\frac{x}{\eta(1-x)}\right)dx=2\int_{\frac{t}{\eta(1-t)}}^{\infty}\frac{\eta u}{1+\eta u}\,\phi(u)\,du, $$
where ηu = x/(1 − x).
Finally, Equation (4) can be written as
$$ E(X-t\mid X>t)=\frac{\displaystyle\int_{\frac{t}{\eta(1-t)}}^{\infty}\frac{\eta u}{1+\eta u}\,\phi(u)\,du}{\Phi\!\left(\dfrac{t}{\eta(t-1)}\right)}-t. $$
The integral in the numerator can be calculated with numerical methods. For more on the mean residual life, we refer our readers to [11], among others.
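A short R sketch of this computation (the function name mrl_uhn is ours):

```r
# Mean residual life of UHN(eta) at time t in (0, 1): the numerator of
# Equation (4) by numerical integration, the denominator via the normal CDF.
mrl_uhn <- function(t, eta) {
  num <- integrate(function(u) eta * u / (1 + eta * u) * dnorm(u),
                   lower = t / (eta * (1 - t)), upper = Inf)$value
  num / pnorm(t / (eta * (t - 1))) - t
}
mrl_uhn(t = 0.2, eta = 0.3)
```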

3. Stress–Strength Reliability Model

An expression for the stress–strength reliability R is given by the following theorem.
Theorem 1.
Suppose X and Y are random variables independently distributed as X ∼ UHN(η) and Y ∼ UHN(λ). Then the reliability of a system with stress variable Y and strength variable X is given by
$$ R=P(Y<X)=\frac{2}{\pi}\arctan\!\left(\frac{\eta}{\lambda}\right),\qquad \eta>0,\ \lambda>0. \tag{5} $$
Proof of Theorem 1.
Using Equations (1) and (2) with the substitution x/(1 − x) = ηu, we have
$$ \begin{aligned} P(Y<X)&=\int_0^1 F_Y(x)\,f_X(x)\,dx\\ &=\int_0^1\left[2\,\Phi\!\left(\frac{x}{\lambda(1-x)}\right)-1\right]\frac{2}{\eta(1-x)^2}\,\phi\!\left(\frac{x}{\eta(1-x)}\right)dx\\ &=2\int_0^1\frac{2}{\eta(1-x)^2}\,\phi\!\left(\frac{x}{\eta(1-x)}\right)\Phi\!\left(\frac{x}{\lambda(1-x)}\right)dx-1\\ &=2\int_0^{\infty}2\,\phi(u)\,\Phi\!\left(\frac{\eta}{\lambda}\,u\right)du-1=\frac{2}{\pi}\arctan\!\left(\frac{\eta}{\lambda}\right). \end{aligned} $$
   □
Since η ≤ λ (η ≥ λ), we have P(Y < X) ≤ 0.5 (P(Y < X) ≥ 0.5). We can note that P(Y < X) can be computed by Equation (5) when η and λ are known. We therefore focus on estimating η and λ.

3.1. Maximum Likelihood Estimation of R

Before we move on to calculate the maximum likelihood estimate of R, some preliminary results are necessary.
Lemma 1.
If X ∼ UHN(η), then X/(1 − X) ∼ HN(η).
Proof of Lemma 1.
See [1].    □
Corollary 1.
If X/(1 − X) ∼ HN(η), then $\frac{1}{\eta^2}\left[\frac{X}{1-X}\right]^2\sim\chi^2(1)$, where χ²(1) denotes the chi-squared distribution with 1 degree of freedom.
Proof of Corollary 1.
Let $Z=\frac{1}{\eta^2}\left[\frac{X}{1-X}\right]^2$. Then
$$ P(Z\le z)=P\!\left(\left[\frac{X}{1-X}\right]^2\le\eta^2 z\right)=P\!\left(\frac{X}{1-X}\le\eta\sqrt{z}\right)=2\,\Phi(\sqrt{z})-1. $$
The derivative of P(Z ≤ z) gives the PDF of the χ²(1) distribution.    □
Corollary 2.
If $U_i=\frac{X_i}{1-X_i}\sim HN(\eta)$, i = 1,…,n, and $V_j=\frac{Y_j}{1-Y_j}\sim HN(\lambda)$, j = 1,…,m, with the U_i independent of the V_j, then
1. $\sum_{i=1}^{n} U_i^2/\eta^2 \sim \chi^2(n)$ and $\sum_{j=1}^{m} V_j^2/\lambda^2 \sim \chi^2(m)$;
2. $E(\hat{\eta}^2)=\eta^2$ and $E(\hat{\lambda}^2)=\lambda^2$;
3. $\dfrac{\hat{\eta}^2}{\hat{\lambda}^2}\,\dfrac{\lambda^2}{\eta^2}\sim F(n,m)$;
4. $E\!\left(\dfrac{\hat{\eta}^2}{\hat{\lambda}^2}\right)=\dfrac{m}{m-2}\,\dfrac{\eta^2}{\lambda^2}$, for m > 2.
Now, suppose (X₁, X₂, …, X_n) is a random sample of size n from UHN(η) and (Y₁, Y₂, …, Y_m) is an independent random sample of size m from UHN(λ), with η > 0 and λ > 0. Up to an additive constant, the log-likelihood is given by
$$ \begin{aligned} l(\eta,\lambda)&=\sum_{i=1}^{n}\log f_X(x_i;\eta)+\sum_{j=1}^{m}\log f_Y(y_j;\lambda)\\ &=-n\log\eta-m\log\lambda-2\sum_{i=1}^{n}\log(1-x_i)-2\sum_{j=1}^{m}\log(1-y_j)\\ &\quad-\frac{1}{2\eta^2}\sum_{i=1}^{n}\left(\frac{x_i}{1-x_i}\right)^2-\frac{1}{2\lambda^2}\sum_{j=1}^{m}\left(\frac{y_j}{1-y_j}\right)^2. \end{aligned} $$
The maximum likelihood estimators (MLEs) η̂ and λ̂ of η and λ, respectively, are the solutions of the following system of equations:
$$ \frac{\partial l(\eta,\lambda)}{\partial\eta}=-\frac{n}{\eta}+\frac{1}{\eta^3}\sum_{i=1}^{n}\left(\frac{x_i}{1-x_i}\right)^2=0,\qquad \frac{\partial l(\eta,\lambda)}{\partial\lambda}=-\frac{m}{\lambda}+\frac{1}{\lambda^3}\sum_{j=1}^{m}\left(\frac{y_j}{1-y_j}\right)^2=0. $$
The solution to the system of equations is
$$ \hat{\eta}=\left[\frac{1}{n}\sum_{i=1}^{n}\left(\frac{x_i}{1-x_i}\right)^2\right]^{1/2}, \tag{6} $$
$$ \hat{\lambda}=\left[\frac{1}{m}\sum_{j=1}^{m}\left(\frac{y_j}{1-y_j}\right)^2\right]^{1/2}. \tag{7} $$
Therefore, by the invariance property of the MLE [12] and Equation (5), the MLE of R becomes
$$ \hat{R}=\frac{2}{\pi}\arctan\!\left(\frac{\hat{\eta}}{\hat{\lambda}}\right). \tag{8} $$
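Since Equations (6)–(8) are in closed form, the MLEs are one-liners in R. The following is a minimal sketch (the helper name mle_uhn_R is ours), which the later code examples reuse:

```r
# Closed-form MLEs of eta, lambda and R from UHN samples x and y,
# following Equations (6), (7) and (8).
mle_uhn_R <- function(x, y) {
  eta_hat    <- sqrt(mean((x / (1 - x))^2))    # Equation (6)
  lambda_hat <- sqrt(mean((y / (1 - y))^2))    # Equation (7)
  c(eta = eta_hat, lambda = lambda_hat,
    R = 2 / pi * atan(eta_hat / lambda_hat))   # Equation (8)
}
```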
Corollary 3.
If $U_i=\frac{X_i}{1-X_i}\sim HN(\eta)$, i = 1,…,n, and $V_j=\frac{Y_j}{1-Y_j}\sim HN(\lambda)$, j = 1,…,m, with the U_i independent of the V_j, then
1. $\chi_\eta=\dfrac{\sqrt{n}}{\eta}\,\hat{\eta}\sim\chi(n)$, so $E(\chi_\eta)=\sqrt{2}\,\dfrac{\Gamma((n+1)/2)}{\Gamma(n/2)}$ and $\mathrm{Var}(\chi_\eta)=n-E^2(\chi_\eta)$;
2. $\chi_\lambda=\dfrac{\sqrt{m}}{\lambda}\,\hat{\lambda}\sim\chi(m)$, so $E(\chi_\lambda)=\sqrt{2}\,\dfrac{\Gamma((m+1)/2)}{\Gamma(m/2)}$ and $\mathrm{Var}(\chi_\lambda)=m-E^2(\chi_\lambda)$,
where χ(s) denotes the chi distribution with s degrees of freedom.
Remark 1.
From Corollary 3 we have $E(\hat{\eta})=\eta\sqrt{\frac{2}{n}}\,\frac{\Gamma((n+1)/2)}{\Gamma(n/2)}$ and $E(\hat{\lambda})=\lambda\sqrt{\frac{2}{m}}\,\frac{\Gamma((m+1)/2)}{\Gamma(m/2)}$. Therefore, both η̂ and λ̂ are biased estimators of η and λ, respectively.

3.2. Confidence Intervals for η and λ

Let X₁,…,X_n and Y₁,…,Y_m be random samples from UHN(η) and UHN(λ), respectively, and let the two samples be independent. From Corollary 3, we have $\chi^2_\eta=n\hat{\eta}^2/\eta^2\sim\chi^2(n)$ and $\chi^2_\lambda=m\hat{\lambda}^2/\lambda^2\sim\chi^2(m)$. Taking χ²_η and χ²_λ as two pivotal quantities, the 100(1 − α)% confidence intervals for η and λ are given by
$$ \left(\hat{\eta}\sqrt{\frac{n}{\chi^2_{(1-\alpha/2,\,n)}}},\ \hat{\eta}\sqrt{\frac{n}{\chi^2_{(\alpha/2,\,n)}}}\right)\quad\text{and}\quad\left(\hat{\lambda}\sqrt{\frac{m}{\chi^2_{(1-\alpha/2,\,m)}}},\ \hat{\lambda}\sqrt{\frac{m}{\chi^2_{(\alpha/2,\,m)}}}\right), $$
respectively, where $\chi^2_{(\alpha/2,\,a)}$ and $\chi^2_{(1-\alpha/2,\,a)}$ are the lower and upper (α/2)th percentiles of a chi-squared distribution with a degrees of freedom.
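In R, these pivot-based intervals reduce to a few quantile calls. A sketch under the above notation (ci_exact_scale is our name):

```r
# Exact 100(1 - alpha)% CI for the scale parameter of a UHN sample s,
# using the chi-squared pivot of Section 3.2.
ci_exact_scale <- function(s, alpha = 0.05) {
  n   <- length(s)
  hat <- sqrt(mean((s / (1 - s))^2))           # MLE, Equation (6)
  c(lower = hat * sqrt(n / qchisq(1 - alpha / 2, df = n)),
    upper = hat * sqrt(n / qchisq(alpha / 2, df = n)))
}
```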

3.3. Exact PDF for R

Lemma 2.
Let W ∼ F(n, m). Then the PDF of Z = η̂/λ̂ is given by
$$ f_Z(z;n,m,\eta,\lambda)=\frac{2\lambda^2}{\eta^2}\,z\,f_W\!\left(\frac{\lambda^2}{\eta^2}z^2\right)=\frac{2\left(\frac{n}{m}\frac{\lambda^2}{\eta^2}\right)^{n/2}}{B\!\left(\frac{n}{2},\frac{m}{2}\right)}\,\frac{z^{n-1}}{\left(1+\frac{n}{m}\frac{\lambda^2}{\eta^2}z^2\right)^{\frac{n}{2}+\frac{m}{2}}}, \tag{9} $$
for 0 < z < ∞, where n, m > 0, η, λ > 0 and B(n/2, m/2) = Γ(n/2)Γ(m/2)/Γ((n + m)/2).
Proof. 
Using Corollary 2, let $W=Z^2\lambda^2/\eta^2\sim F(n,m)$. Then
$$ P(Z\le z)=P\!\left(\frac{\eta}{\lambda}\,W^{1/2}\le z\right)=P\!\left(W\le\frac{\lambda^2}{\eta^2}z^2\right)=F_W\!\left(\frac{\lambda^2}{\eta^2}z^2\right). $$
The derivative of P(Z ≤ z) gives the PDF in Equation (9).    □
Remark 2.
Note that the random variable η̂/λ̂ is a ratio of independent generalized gamma random variables, since we can write η̂/λ̂ = Z₁/Z₂ with Z₁ ∼ GΓ(2, n, η²/n) and Z₂ ∼ GΓ(2, m, λ²/m), where GΓ(p, d, a) denotes a generalized gamma distribution. Following [13], the ratio of independent generalized gamma random variables has a generalized F distribution.
Proposition 1.
A random variable R follows an f_R distribution, denoted as R ∼ f_R(η, λ, n, m), if its PDF is given by
$$ f_R(r;\eta,\lambda,n,m)=\frac{\pi\left(\frac{n}{m}\frac{\lambda^2}{\eta^2}\right)^{n/2}}{B\!\left(\frac{n}{2},\frac{m}{2}\right)}\times\frac{\sec^2\!\left(\frac{\pi}{2}r\right)\left[\tan\!\left(\frac{\pi}{2}r\right)\right]^{n-1}}{\left[1+\frac{n}{m}\frac{\lambda^2}{\eta^2}\tan^2\!\left(\frac{\pi}{2}r\right)\right]^{\frac{n+m}{2}}}, \tag{10} $$
for 0 < r < 1, where η, λ > 0 and n, m > 0.
Proof of Proposition 1.
Using Lemma 2 and Equation (8), we have the result.    □
With a little algebraic manipulation and the help of Maple or Mathematica, it can be shown that the expectation of R is given by
$$ \begin{aligned} E(R)&=\frac{\pi\csc\left(\frac{\pi n}{2}\right)}{\Gamma\left(\frac{n}{2}\right)\Gamma\left(1-\frac{n}{2}\right)}-\frac{\left(\frac{n}{m}\frac{\lambda^2}{\eta^2}\right)^{n/2}\sec\left(\frac{\pi n}{2}\right)}{2n\,B\left(\frac{n}{2},\frac{m}{2}\right)}\;{}_2F_1\!\left(\left[\frac{n}{2},\frac{n+m}{2}\right],\left[1+\frac{n}{2}\right],-\frac{n}{m}\frac{\lambda^2}{\eta^2}\right)\\ &\quad-\frac{2}{\pi}\left(\frac{n}{m}\frac{\lambda^2}{\eta^2}\right)^{1/2}\frac{\Gamma\left(\frac{n-1}{2}\right)\Gamma\left(\frac{m+1}{2}\right)}{\Gamma\left(\frac{n}{2}\right)\Gamma\left(\frac{m}{2}\right)}\;{}_3F_2\!\left(\left[\frac{1}{2},1,\frac{m+1}{2}\right],\left[\frac{3}{2},\frac{3-n}{2}\right],-\frac{n}{m}\frac{\lambda^2}{\eta^2}\right), \end{aligned} $$
where ₚF_q is the generalized hypergeometric function
$$ {}_pF_q\big([a_1,\dots,a_p],[b_1,\dots,b_q],z\big)=\sum_{k=0}^{\infty}\frac{(a_1)_k\cdots(a_p)_k}{(b_1)_k\cdots(b_q)_k}\,\frac{z^k}{k!}, $$
and (a)_k is the Pochhammer symbol or ascending factorial, defined by (a)₀ = 1 and (a)_k = a(a + 1)⋯(a + k − 1) for k ≥ 1. For any non-negative integer k, we may write it simply as
$$ (a)_k=\frac{\Gamma(a+k)}{\Gamma(a)}. $$
We may generate values of R using different procedures. Next, we describe Algorithms 1–3; a sketch in R of all three is given after Algorithm 3.
Algorithm 1: Generate observations from R ∼ f_R(η, λ, n, m).
Require: Initialize the algorithm by fixing η, λ, n and m.
1. Generate Z₁ from Γ(n/2, 2η²/n).
2. Generate Z₂ from Γ(m/2, 2λ²/m).
3. Compute
$$ R=\frac{2}{\pi}\arctan\sqrt{\frac{Z_1}{Z_2}}. $$
4. Repeat steps 1 to 3 N times to get a sample of size N.
5. Return (R₁, …, R_N).
Algorithm 2: Generate observations from R ∼ f_R(η, λ, n, m).
Require: Initialize the algorithm by fixing η, λ, n and m.
1. Generate Z₁ from χ²(n).
2. Generate Z₂ from χ²(m).
3. Compute
$$ R=\frac{2}{\pi}\arctan\!\left(\frac{\eta}{\lambda}\sqrt{\frac{m}{n}\,\frac{Z_1}{Z_2}}\right). $$
4. Repeat steps 1 to 3 N times to get a sample of size N.
5. Return (R₁, …, R_N).
Algorithm 3: Generate observations from R ∼ f_R(η, λ, n, m).
Require: Initialize the algorithm by fixing η, λ, n and m.
1. Generate F from F(n, m).
2. Compute
$$ R=\frac{2}{\pi}\arctan\!\left(\frac{\eta}{\lambda}\sqrt{F}\right). $$
3. Repeat steps 1 and 2 N times to get a sample of size N.
4. Return (R₁, …, R_N).
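The three algorithms translate directly into R. A minimal sketch (the function names are ours), any one of which draws N values of R:

```r
# Algorithm 1: gamma representation of eta_hat^2 and lambda_hat^2.
r_alg1 <- function(N, eta, lambda, n, m) {
  z1 <- rgamma(N, shape = n / 2, scale = 2 * eta^2 / n)
  z2 <- rgamma(N, shape = m / 2, scale = 2 * lambda^2 / m)
  2 / pi * atan(sqrt(z1 / z2))
}
# Algorithm 2: two independent chi-squared draws.
r_alg2 <- function(N, eta, lambda, n, m) {
  2 / pi * atan(eta / lambda * sqrt(m / n * rchisq(N, n) / rchisq(N, m)))
}
# Algorithm 3: a single F(n, m) draw per observation.
r_alg3 <- function(N, eta, lambda, n, m) {
  2 / pi * atan(eta / lambda * sqrt(rf(N, n, m)))
}
```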

4. Interval Estimation of R

In this section, we consider the interval estimation of R based on exact, asymptotic and bootstrap methods.

4.1. Exact Confidence Interval

Let us assume that X ∼ UHN(η) and Y ∼ UHN(λ), and that we have a sample X₁,…,X_n from the distribution of X and a sample Y₁,…,Y_m from the distribution of Y. In addition, let the two samples be independent. From Corollary 2, we have $\frac{\lambda^2}{\eta^2}\frac{\hat{\eta}^2}{\hat{\lambda}^2}\sim F(n,m)$, and hence $F=\frac{\eta^2}{\lambda^2}\frac{\hat{\lambda}^2}{\hat{\eta}^2}\sim F(m,n)$. Taking F as a pivotal quantity, a 100(1 − α)% confidence interval for R is given by
$$ \left(\frac{2}{\pi}\arctan\!\left[\frac{\hat{\eta}}{\hat{\lambda}}\,F^{1/2}_{(\alpha/2,\,m,\,n)}\right],\ \frac{2}{\pi}\arctan\!\left[\frac{\hat{\eta}}{\hat{\lambda}}\,F^{1/2}_{(1-\alpha/2,\,m,\,n)}\right]\right), \tag{11} $$
where $F_{(\alpha/2,\,m,\,n)}$ and $F_{(1-\alpha/2,\,m,\,n)}$ denote, respectively, the lower and upper (α/2)th percentiles of the F distribution with m and n degrees of freedom.
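A sketch of Equation (11) in R (ci_exact_R is our name):

```r
# Exact 100(1 - alpha)% CI for R based on the F pivot, Equation (11).
ci_exact_R <- function(x, y, alpha = 0.05) {
  n <- length(x); m <- length(y)
  ratio <- sqrt(mean((x / (1 - x))^2) / mean((y / (1 - y))^2))  # eta_hat / lambda_hat
  2 / pi * atan(ratio * sqrt(qf(c(alpha / 2, 1 - alpha / 2), m, n)))
}
```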

4.2. Asymptotic Distribution and Confidence Interval

In this subsection, we first compute the asymptotic distribution of θ̂ = (η̂, λ̂) and then study the asymptotic distribution of R̂, from which we obtain the asymptotic confidence interval for R.
Let J(θ) = (J_{ij}(θ)), i, j = 1, 2, be the expected Fisher information matrix of θ = (η, λ). Its elements are
$$ J_{11}=-E\!\left[\frac{\partial^2 l(\eta,\lambda)}{\partial\eta^2}\right]=\frac{2n}{\eta^2},\qquad J_{22}=-E\!\left[\frac{\partial^2 l(\eta,\lambda)}{\partial\lambda^2}\right]=\frac{2m}{\lambda^2},\qquad J_{12}=J_{21}=0. $$
Under some regularity conditions, we have
$$ \begin{pmatrix}\sqrt{n}\,(\hat{\eta}-\eta)\\[2pt]\sqrt{m}\,(\hat{\lambda}-\lambda)\end{pmatrix}\xrightarrow{d}N_2\!\left(\begin{pmatrix}0\\0\end{pmatrix},\begin{pmatrix}\frac{\eta^2}{2}&0\\[2pt]0&\frac{\lambda^2}{2}\end{pmatrix}\right). $$
The point estimator of R is R̂ = R(η̂, λ̂). We obtain the asymptotic confidence interval for R following the procedure in [14], using
$$ d_1(\eta,\lambda)=\frac{\partial R}{\partial\eta}=\frac{2\lambda}{\pi(\eta^2+\lambda^2)}\qquad\text{and}\qquad d_2(\eta,\lambda)=\frac{\partial R}{\partial\lambda}=-\frac{2\eta}{\pi(\eta^2+\lambda^2)}. $$
This gives
$$ \mathrm{Var}(\hat{R})=\mathrm{Var}(\hat{\eta})\,d_1^2(\eta,\lambda)+\mathrm{Var}(\hat{\lambda})\,d_2^2(\eta,\lambda)=\frac{2\eta^2\lambda^2}{\pi^2(\eta^2+\lambda^2)^2}\left(\frac{1}{n}+\frac{1}{m}\right). $$
Thus, we obtain the following result:
$$ Z_R=\frac{\hat{R}-R}{\dfrac{\sqrt{2}\,\hat{\eta}\hat{\lambda}}{\pi(\hat{\eta}^2+\hat{\lambda}^2)}\left(\dfrac{1}{n}+\dfrac{1}{m}\right)^{1/2}}\xrightarrow{d}N(0,1). $$
Hence, the asymptotic 100(1 − α)% confidence interval for R is given by
$$ \hat{R}\pm Z_{(1-\alpha/2)}\,\frac{\sqrt{2}\,\hat{\eta}\hat{\lambda}}{\pi(\hat{\eta}^2+\hat{\lambda}^2)}\left(\frac{1}{n}+\frac{1}{m}\right)^{1/2}, \tag{12} $$
where Z₍₁₋α/₂₎ is the (1 − α/2)th percentile of the standard normal distribution and R̂ is given by (8).
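A sketch of Equation (12) in R (ci_asymp_R is our name):

```r
# Asymptotic 100(1 - alpha)% CI for R, Equation (12).
ci_asymp_R <- function(x, y, alpha = 0.05) {
  n <- length(x); m <- length(y)
  eta_hat    <- sqrt(mean((x / (1 - x))^2))
  lambda_hat <- sqrt(mean((y / (1 - y))^2))
  R_hat <- 2 / pi * atan(eta_hat / lambda_hat)
  se <- sqrt(2) * eta_hat * lambda_hat / (pi * (eta_hat^2 + lambda_hat^2)) *
    sqrt(1 / n + 1 / m)
  R_hat + c(-1, 1) * qnorm(1 - alpha / 2) * se
}
```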

4.3. Bootstrap Confidence Intervals

Maximum likelihood is a standard estimation method. However, in many practical situations the sample size is not large, so large-sample inference such as MLE-based asymptotic confidence intervals may be unsuitable and can even be misleading. In this subsection, parametric and nonparametric bootstrap confidence intervals are constructed for the unknown parameters; see [15,16] for details.

4.3.1. Parametric Bootstrap Sampling Algorithm

To generate parametric bootstrap samples of η, λ and R, as suggested by [16], from the given independent random samples X₁,…,X_n and Y₁,…,Y_m obtained from UHN(η) and UHN(λ), respectively, we use the following method.
  • Stage 1: Compute the MLEs of η and λ, say η̂ and λ̂, based on the data X = (X₁,…,X_n) and Y = (Y₁,…,Y_m).
  • Stage 2: Based on η̂ and λ̂, generate samples X* = (X₁*,…,X_n*) from UHN(η̂) and Y* = (Y₁*,…,Y_m*) from UHN(λ̂) with
$$ X_i^*=\frac{\hat{\eta}\,\Phi^{-1}\!\left(\frac{U_{i1}+1}{2}\right)}{1+\hat{\eta}\,\Phi^{-1}\!\left(\frac{U_{i1}+1}{2}\right)},\qquad Y_j^*=\frac{\hat{\lambda}\,\Phi^{-1}\!\left(\frac{U_{j2}+1}{2}\right)}{1+\hat{\lambda}\,\Phi^{-1}\!\left(\frac{U_{j2}+1}{2}\right)}, $$
    where U_{i1}, i = 1,…,n, and U_{j2}, j = 1,…,m, are independent observations from the uniform distribution U(0, 1).
  • Stage 3: Compute the MLEs of η and λ, say η̂* and λ̂*, based on the data X* and Y*, respectively.
  • Stage 4: Compute the MLE of R, say R̂*, based on η̂* and λ̂*.
  • Stage 5: Repeat Stages 2 to 4 B times to generate B bootstrap estimates of η, λ and R.
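A compact R sketch of Stages 1–5 (ruhn and par_boot are our names; mle_uhn_R is the helper defined in Section 3.1):

```r
# Stage 2 inversion: x = eta * q / (1 + eta * q) with q = qnorm((u + 1) / 2).
ruhn <- function(n, eta) {
  q <- qnorm((runif(n) + 1) / 2)
  eta * q / (1 + eta * q)
}
# Parametric bootstrap: returns a B x 3 matrix of (eta*, lambda*, R*).
par_boot <- function(x, y, B = 500) {
  fit <- mle_uhn_R(x, y)                                   # Stage 1
  t(replicate(B, mle_uhn_R(ruhn(length(x), fit["eta"]),    # Stages 2-4
                           ruhn(length(y), fit["lambda"]))))
}
```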

4.3.2. Nonparametric Bootstrap Sampling Algorithm

Next, we describe the steps to obtain nonparametric bootstrap samples of η , λ and R.
  • Stage 1: Draw random samples with replacement X* = (X₁*,…,X_n*) and Y* = (Y₁*,…,Y_m*) from the original data X = (X₁,…,X_n) and Y = (Y₁,…,Y_m), respectively.
  • Stage 2: Compute the bootstrap estimates of η and λ, say η̂* and λ̂*, based on the data X* and Y*, respectively.
  • Stage 3: Using η̂*, λ̂* and Equation (8), compute the bootstrap estimate of R, say R̂*.
  • Stage 4: Repeat Stages 1 to 3 B times to generate B bootstrap estimates of η, λ and R.
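The nonparametric version only resamples the data. A sketch (npar_boot is our name):

```r
# Nonparametric bootstrap: resample x and y with replacement and
# recompute the MLEs; returns a B x 3 matrix of (eta*, lambda*, R*).
npar_boot <- function(x, y, B = 500) {
  t(replicate(B, mle_uhn_R(sample(x, replace = TRUE),
                           sample(y, replace = TRUE))))
}
```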
Now, we propose different types of bootstrap confidence intervals for the parameter R using the parametric and nonparametric bootstrap samples; the confidence intervals for η and λ are computed identically. For b = 1,…,B, we denote by {R̂*_b} the set of bootstrap estimates of R. We also denote by R̂ the MLE obtained from the original dataset, and we assume that the confidence level is 100(1 − α)%.
Bootstrap-t confidence interval. The bootstrap-t confidence interval reproduces the way the standard t confidence interval is constructed. The t-like critical value and the standard error of R̂ are computed from the bootstrap estimates {R̂*_b}. We obtain the bootstrap standard error as
$$ se(\hat{R}^*)=\sqrt{\frac{\sum_{b=1}^{B}\big(\hat{R}^*_b-\bar{R}^*\big)^2}{B}},\qquad\text{where}\quad \bar{R}^*=\frac{\sum_{b=1}^{B}\hat{R}^*_b}{B}. $$
To find the t-like critical value, denoted by t̂_α, we standardize the {R̂*_b}, b = 1,…,B, by
$$ z_b(R)=\frac{\hat{R}^*_b-\hat{R}}{se(\hat{R}^*)}. $$
Then, we obtain t̂_α from the bootstrap estimates:
$$ \frac{\#\{z_b(R)\le\hat{t}_\alpha\}}{B}=\alpha. $$
Then, we obtain the 100(1 − α)% bootstrap-t confidence interval
$$ \left(\hat{R}-\hat{t}_{(1-\alpha/2)}\cdot se(\hat{R}^*),\ \hat{R}+\hat{t}_{(\alpha/2)}\cdot se(\hat{R}^*)\right). $$
Bootstrap percentile confidence interval. To obtain the bootstrap percentile confidence interval [17] of R, we simply find the α/2 and 1 − α/2 percentiles, denoted by R̂*₍α/2₎ and R̂*₍₁₋α/₂₎, of the set of bootstrap estimates of R. The simple 100(1 − α)% bootstrap percentile confidence interval is defined to be (R̂*₍α/2₎, R̂*₍₁₋α/₂₎).
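A sketch in R of the bootstrap-t and percentile intervals from a vector R_boot of bootstrap estimates (boot_ci is our name; the bootstrap-t interval is written here in the standard form (R̂ − t̂₍₁₋α/₂₎·se, R̂ − t̂₍α/₂₎·se)):

```r
# Bootstrap-t and percentile 100(1 - alpha)% intervals for R.
boot_ci <- function(R_boot, R_hat, alpha = 0.05) {
  B  <- length(R_boot)
  se <- sqrt(sum((R_boot - mean(R_boot))^2) / B)   # bootstrap standard error
  z  <- (R_boot - R_hat) / se                      # standardized replicates
  t_crit <- quantile(z, c(alpha / 2, 1 - alpha / 2))
  list(boot_t     = c(R_hat - t_crit[2] * se, R_hat - t_crit[1] * se),
       percentile = quantile(R_boot, c(alpha / 2, 1 - alpha / 2)))
}
```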
Bias-Corrected and Accelerated Bootstrap (BCa) Method. To overcome the coverage issues of percentile bootstrap CIs, the BCa method corrects for both bias and skewness of the bootstrap parameter estimates by incorporating a bias-correction factor and an acceleration factor (see [17,18]). The bias-correction factor z₀ is estimated from the proportion of bootstrap estimates less than the original estimate R̂,
$$ z_0=\Phi^{-1}\!\left(\frac{\#\{\hat{R}^*_b<\hat{R}\}}{B}\right), $$
where Φ⁻¹ is the inverse CDF of the standard normal distribution. We can estimate the acceleration factor a through jackknife (leave-one-out) resampling, which involves generating n replicates of the original sample, where n is the number of observations in the sample. The first jackknife replicate is obtained by leaving out the first case (i = 1) of the original sample, the second by leaving out the second case (i = 2), and so on, until n samples of size n − 1 are obtained. For each jackknife resample, R̂₍ᵢ₎ is obtained. The average of these estimates is
$$ \hat{R}_{(\cdot)}=\frac{1}{n}\sum_{i=1}^{n}\hat{R}_{(i)}. $$
Then, the acceleration factor â is calculated as follows:
$$ \hat{a}=\frac{\sum_{i=1}^{n}\big(\hat{R}_{(\cdot)}-\hat{R}_{(i)}\big)^3}{6\left[\sum_{i=1}^{n}\big(\hat{R}_{(\cdot)}-\hat{R}_{(i)}\big)^2\right]^{3/2}}. $$
With the values of z₀ and â, the values a₁ and a₂ are calculated as
$$ a_1=\Phi\!\left(z_0+\frac{z_0+z_{(\alpha/2)}}{1-\hat{a}\,(z_0+z_{(\alpha/2)})}\right),\qquad a_2=\Phi\!\left(z_0+\frac{z_0+z_{(1-\alpha/2)}}{1-\hat{a}\,(z_0+z_{(1-\alpha/2)})}\right). $$
Here, z₍α/₂₎ is the 100(α/2)th percentile of the standard normal distribution. Then, a 100(1 − α)% BCa confidence interval of R is given by (R̂*₍a₁₎, R̂*₍a₂₎). For more on different types of confidence intervals, see [19], among others.
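A sketch of the BCa computation in R (bca_ci is our name; here we take the jackknife over the pooled observations of both samples, one convention among several for two-sample problems):

```r
# BCa 100(1 - alpha)% interval for R from bootstrap estimates R_boot.
bca_ci <- function(R_boot, R_hat, x, y, alpha = 0.05) {
  z0 <- qnorm(mean(R_boot < R_hat))                # bias-correction factor
  R_jack <- c(sapply(seq_along(x), function(i) mle_uhn_R(x[-i], y)["R"]),
              sapply(seq_along(y), function(j) mle_uhn_R(x, y[-j])["R"]))
  d <- mean(R_jack) - R_jack
  a <- sum(d^3) / (6 * sum(d^2)^1.5)               # acceleration factor
  z <- qnorm(c(alpha / 2, 1 - alpha / 2))
  quantile(R_boot, pnorm(z0 + (z0 + z) / (1 - a * (z0 + z))))
}
```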

5. Simulation Study

In this section, we present a small Monte Carlo simulation study to illustrate the behavior of the different estimates for different sample sizes. The simulation studies were conducted using 10,000 samples from UHN(η) and UHN(λ). The sample sizes were combinations of n = 15, 20, 30, 50, 100 and m = 15, 20, 30, 50, 100. In all cases, we take η = 0.3 and λ = 0.2; with these values we get R = 0.63. In [1], the reader can find a simulation study for different values of the shape parameter, including values greater than 1. From each sample, we estimate η and λ using the MLE Equations (6) and (7). Once η and λ are estimated, we compute the MLE of R using (8). Over the 10,000 replications, we report in Table 1 the average biases and mean squared errors (MSEs) of the MLE and of the parametric and nonparametric bootstrap estimates (Par.Boot/Npar.Boot) of η, λ and R. We obtain the 95% confidence intervals based on the exact and asymptotic distributions of η, λ and R, as well as 95% confidence intervals based on the parametric and nonparametric bootstrap methods; the corresponding average confidence lengths and coverage probabilities are reported in Table 2. For the bootstrap methods, estimates and confidence intervals were computed based on B = 500 replications. Some points are quite clear from this simulation. Even for small sample sizes, the performance of the MLEs and bootstrap methods is quite satisfactory in terms of biases and MSEs. In addition, for all methods, the average biases and MSEs decrease as the sample sizes n and m increase, which verifies the consistency property of the MLEs of η, λ and R. The bootstrap methods behave almost identically with respect to both biases and MSEs.
We also compute the confidence intervals and the corresponding coverage probabilities by different methods. For R, the exact confidence interval was computed using (11) and the asymptotic confidence interval using (12). For η and λ, we compute the exact confidence intervals using the formulae of Section 3.2 and the asymptotic confidence intervals using the formulae in [1]. For the bootstrap methods, the confidence intervals are computed using the formulae of Section 4.3. In this case, all eight confidence intervals behave very similarly in terms of average confidence lengths and coverage probabilities.
The asymptotic confidence interval provides shorter lengths than the exact confidence interval, whereas in terms of coverage probability the exact confidence interval performs better than the asymptotic one. Among the bootstrap methods considered here, the bootstrap percentile method performed well compared with the bootstrap-t and bootstrap-BCa methods.

6. An Illustrative Example

Cutting processes are those in which a great enough force is applied to a piece of raw metal, usually sheet metal, to cause the material to fail. One of the most common cutting processes is shearing, performed by applying a shearing force to the metal sheet [20]. In this section, we apply our procedure to two real-life data sets to illustrate the implementation of our methods. The two data sets were first introduced and studied by [21] for burr measurements on iron sheets. For the first data set of 50 observations on burr (in millimeters), the hole diameter is 12 mm and the sheet thickness is 3.15 mm. We shall refer to it as data set 1; it is given in Table 3. For the second data set of 50 observations, the hole diameter and sheet thickness are 9 mm and 2 mm, respectively. We shall refer to it as data set 2; it is given in Table 4. Hole diameter readings are taken on jobs with respect to one hole, selected and fixed as per a predetermined orientation. The two data sets relate to two different machines under comparison [21]. See [21] for the technical details of the measurements.
First of all, we conduct one-sample Kolmogorov–Smirnov (K-S) goodness-of-fit tests of the UHN distribution on the two data sets. We report the MLEs, the parametric and nonparametric bootstrap estimates, and their corresponding standard errors (S.E.) for the model parameters, as well as the p-values (pval) and test statistics (D) of the K-S goodness-of-fit tests for both data sets (K-S_i for data set i = 1, 2), in Table 5. The K-S statistic (based on the MLE η̂ = 0.238) is 0.080 with a corresponding p-value of 0.726 for data set 1. The K-S statistic (based on the MLE λ̂ = 0.219) is 0.100 with a corresponding p-value of 0.607 for data set 2. Therefore, the unit-half-normal distribution fits both data sets reasonably well. Point estimates of η, λ and R are similar across all methods considered, but the standard errors of the nonparametric bootstrap estimates are smaller than those of the MLEs and the parametric bootstrap estimates. The confidence intervals for η, λ and R at the 95% confidence level are reported in Table 6. Notice that the exact confidence intervals are longer than the asymptotic ones, as expected. In addition, the confidence intervals from the parametric bootstrap methods are wider than those from the nonparametric bootstrap methods.
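The K-S check of Table 5 can be reproduced in a few lines of R. A sketch assuming x holds data set 1 (puhn is our name for the CDF of Equation (2); ties in the data trigger a harmless warning from ks.test):

```r
# One-sample Kolmogorov-Smirnov test of the UHN fit, using the CDF (2).
puhn <- function(q, eta) 2 * pnorm(q / (eta * (1 - q))) - 1
eta_hat <- sqrt(mean((x / (1 - x))^2))   # MLE, Equation (6)
ks.test(x, puhn, eta = eta_hat)
```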

7. Concluding Remarks

In this work, we studied different estimators of R = P(Y < X), considering that both random variables X and Y follow unit-half-normal distributions with different shape parameters. A procedure to obtain the MLEs of the unknown shape parameters was presented. Moreover, the MLE of R and its exact and asymptotic distributions were deduced, which allows us to compute exact and asymptotic confidence intervals (CIs). Additionally, based on parametric and nonparametric bootstrap methods, we computed estimates of R and their respective CIs. The simulation study shows that the performance of the MLEs, in terms of biases and MSEs, is quite satisfactory; we also observe a decrease in average bias and MSE as the sample size increases. From the point of view of biases and MSEs, the MLEs and the bootstrap methods perform similarly. We studied the CIs and the corresponding coverage probabilities using different methods, and observed similar performances in terms of average confidence lengths and coverage probabilities for all eight CIs considered in this work. Based on the CIs of R developed in this work, our preference is the nonparametric bootstrap method.
It was observed that the MLE of the shape parameter of the UHN distribution is biased. Although the MLE possesses a number of attractive limiting properties (asymptotic unbiasedness, consistency, and asymptotic normality), many of these properties depend on extremely large sample sizes, and properties such as unbiasedness may not hold for the small or even moderate sample sizes that are more common in real data applications; see [22]. Bias-corrected techniques for the MLEs are therefore desirable in practice, especially when the sample size is small; see, for example, refs. [23,24,25,26,27] and references therein. Bias correction for the UHN distribution is an important topic, but it is outside the scope of this article.

Author Contributions

Created and conceptualized the idea, R.d.l.C. and H.S.S.; data curation, R.d.l.C. and H.S.S.; formal analysis, R.d.l.C. and H.S.S.; methodology, R.d.l.C., H.S.S. and C.M.; software, R.d.l.C. and H.S.S.; supervision, R.d.l.C. and H.S.S.; validation, R.d.l.C., H.S.S. and C.M.; visualization, R.d.l.C. and H.S.S.; writing—original draft, R.d.l.C., H.S.S. and C.M.; writing—review and editing, R.d.l.C., H.S.S. and C.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by ANID/FONDECYT/1181662 and ANID/FONDECYT/1190801 (Chile).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data used to support the findings of the study are available within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bakouch, H.S.; Nik, A.S.; Asgharzadeh, A.; Salinas, H.S. A flexible probability model for proportion data: Unit-half-normal distribution. Commun. Stat. Case Stud. Data Anal. Appl. 2021, 7, 271–288.
  2. Topp, C.W.; Leone, F.C. A family of J-shaped frequency functions. J. Am. Stat. Assoc. 1955, 50, 209–219.
  3. Kumaraswamy, P. A generalized probability density function for double-bounded random processes. J. Hydrol. 1980, 46, 79–88.
  4. Tadikamalla, P.R.; Johnson, N.L. Systems of frequency curves generated by transformations of logistic variables. Biometrika 1982, 69, 461–465.
  5. Johnson, N.L.; Kotz, S.; Balakrishnan, N. Continuous Univariate Distributions, 2nd ed.; Wiley: New York, NY, USA, 1994; Volume 1.
  6. Mazucheli, J.; Menezes, A.F.B.; Dey, S. The unit-Birnbaum-Saunders distribution with applications. Chil. J. Stat. 2018, 9, 47–57.
  7. Mazucheli, J.; Menezes, A.F.B.; Chakraborty, S. On the one parameter unit-Lindley distribution and its associated regression model for proportion data. J. Appl. Stat. 2019, 46, 700–714.
  8. Kotz, S.; Lumelskii, Y.; Pensky, M. The Stress-Strength Model and Its Generalizations: Theory and Applications; World Scientific Publishing: Singapore, 2003.
  9. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
  10. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2021. Available online: https://www.R-project.org/ (accessed on 10 January 2022).
  11. Ahsanullah, M.; Aazad, A.A.; Kibria, B.M.G. A note on mean residual life of the k out of n system. Bull. Malays. Math. Sci. Soc. 2013, 37, 83–91.
  12. Casella, G.; Berger, R. Statistical Inference; Duxbury Press: Belmont, CA, USA, 1990.
  13. Malik, H.J. Exact distributions of the quotient of independent generalized gamma variables. Can. Math. Bull. 1967, 10, 463–465.
  14. Rao, C.R. Linear Statistical Inference and Its Applications; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2002.
  15. Davison, A.; Hinkley, D. Bootstrap Methods and Their Application; Cambridge University Press: Cambridge, UK, 1997.
  16. Efron, B.; Tibshirani, R.J. An Introduction to the Bootstrap; Chapman and Hall: New York, NY, USA, 1993.
  17. Efron, B. The Jackknife, the Bootstrap, and Other Resampling Plans; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1982.
  18. Efron, B. Better bootstrap confidence intervals. J. Am. Stat. Assoc. 1987, 82, 171–185.
  19. Almonte, C.; Kibria, B.M.G. On some classical, bootstrap and transformation confidence intervals for estimating the mean of an asymmetrical population. Model Assist. Stat. Appl. 2009, 4, 91–104.
  20. CustomPart.Net. Sheet Metal Cutting (Shearing). Available online: https://www.custompartnet.com/wu/sheet-metal-shearing (accessed on 3 January 2022).
  21. Dasgupta, R. On the distribution of burr with applications. Sankhya B 2011, 73, 1–19.
  22. Kay, S. Asymptotic maximum likelihood estimator performance for chaotic signals in noise. IEEE Trans. Signal Process. 1995, 43, 1009–1012.
  23. Mazucheli, J.; Menezes, A.F.B.; Dey, S. Improved maximum likelihood estimators for the parameters of the unit-gamma distribution. Commun. Stat. Theory Methods 2018, 47, 3767–3778.
  24. Giles, D.E.; Feng, H.; Godwin, R.T. On the bias of the maximum likelihood estimator for the two-parameter Lomax distribution. Commun. Stat. Theory Methods 2013, 42, 1934–1950.
  25. Giles, D.E. Bias reduction for the maximum likelihood estimators of the parameters in the half-logistic distribution. Commun. Stat. Theory Methods 2012, 41, 212–222.
  26. Lemonte, A.J. Improved point estimation for the Kumaraswamy distribution. J. Stat. Comput. Simul. 2011, 81, 1971–1982.
  27. Firth, D. Bias reduction of maximum likelihood estimates. Biometrika 1993, 80, 27–38.
Figure 1. Plot of the density function of the UHN distribution for 0 < η < 1.
Figure 2. Plot of the density function of the UHN distribution for η > 1.
Figure 3. Entropy values for a range of values of η.
Table 1. Average biases and MSE values (within brackets) for the parameters at η = 0.3, λ = 0.2 and R = 0.63.

(n, m)      Method      η                   λ                   R
(15, 20)    MLE         −0.0044 (0.0029)    −0.0025 (0.0010)    −0.0052 (0.0051)
            Npar.Boot   −0.0086 (0.0029)    −0.0046 (0.0010)    −0.0098 (0.0051)
            Par.Boot    −0.0093 (0.0029)    −0.0049 (0.0010)    −0.0105 (0.0050)
(20, 15)    MLE         −0.0039 (0.0022)    −0.0029 (0.0013)    −0.0014 (0.0049)
            Npar.Boot   −0.0072 (0.0022)    −0.0057 (0.0013)    −0.0022 (0.0048)
            Par.Boot    −0.0076 (0.0022)    −0.0062 (0.0013)    −0.0021 (0.0047)
(20, 20)    MLE         −0.0032 (0.0022)    −0.0029 (0.0010)    −0.0017 (0.0044)
            Npar.Boot   −0.0065 (0.0022)    −0.0051 (0.0010)    −0.0040 (0.0043)
            Par.Boot    −0.0069 (0.0022)    −0.0053 (0.0010)    −0.0043 (0.0043)
(20, 30)    MLE         −0.0033 (0.0022)    −0.0017 (0.0007)    −0.0042 (0.0038)
            Npar.Boot   −0.0065 (0.0022)    −0.0032 (0.0007)    −0.0081 (0.0037)
            Par.Boot    −0.0070 (0.0022)    −0.0034 (0.0007)    −0.0088 (0.0037)
(30, 20)    MLE         −0.0020 (0.0015)    −0.0021 (0.0010)    3.6 × 10^−5 (0.0036)
            Npar.Boot   −0.0043 (0.0015)    −0.0043 (0.0010)    3.2 × 10^−6 (0.0035)
            Par.Boot    −0.0045 (0.0015)    −0.0045 (0.0010)    0.0002 (0.0035)
(30, 30)    MLE         −0.0024 (0.0015)    −0.0016 (0.0007)    −0.0019 (0.0030)
            Npar.Boot   −0.0047 (0.0015)    −0.0031 (0.0007)    −0.0036 (0.0030)
            Par.Boot    −0.0049 (0.0015)    −0.0033 (0.0007)    −0.0036 (0.0030)
(30, 50)    MLE         −0.0027 (0.0015)    −0.0011 (0.0004)    −0.0036 (0.0024)
            Npar.Boot   −0.0050 (0.0015)    −0.0021 (0.0004)    −0.0066 (0.0024)
            Par.Boot    −0.0052 (0.0015)    −0.0021 (0.0004)    −0.0069 (0.0024)
(50, 30)    MLE         −0.0016 (0.0009)    −0.0013 (0.0007)    −0.0002 (0.0022)
            Npar.Boot   −0.0030 (0.0009)    −0.0028 (0.0007)    0.0002 (0.0022)
            Par.Boot    −0.0031 (0.0009)    −0.0029 (0.0007)    0.0003 (0.0022)
(50, 50)    MLE         −0.0011 (0.0009)    −0.0010 (0.0004)    −0.0007 (0.0018)
            Npar.Boot   −0.0025 (0.0009)    −0.0019 (0.0004)    −0.0018 (0.0017)
            Par.Boot    −0.0026 (0.0009)    −0.0020 (0.0004)    −0.0018 (0.0017)
(100, 100)  MLE         −0.0005 (0.0004)    −0.0002 (0.0002)    −0.0007 (0.0009)
            Npar.Boot   −0.0013 (0.0004)    −0.0007 (0.0002)    −0.0013 (0.0009)
            Par.Boot    −0.0013 (0.0004)    −0.0007 (0.0002)    −0.0013 (0.0009)
Table 2. Average confidence lengths and coverage probabilities (within brackets) of confidence intervals using exact, asymptotic and various parametric and nonparametric bootstrap methods.

(n, m)      Method            η                 λ                 R
(15, 20)    Exact             0.2383 (0.950)    0.1163 (0.943)    0.2766 (0.944)
            Asympt.           0.2108 (0.944)    0.1225 (0.934)    0.2732 (0.941)
            Non-par. boot-t   0.2402 (0.949)    0.1187 (0.941)    0.2768 (0.942)
            Non-par. boot-q   0.1671 (0.945)    0.1101 (0.942)    0.2769 (0.943)
            Non-par. BCa      0.1720 (0.943)    0.1105 (0.943)    0.2744 (0.942)
            Par. boot-t       0.2394 (0.942)    0.1174 (0.943)    0.2801 (0.945)
            Par. boot-q       0.2384 (0.951)    0.1161 (0.942)    0.2765 (0.943)
            Par. BCa          0.2382 (0.949)    0.1164 (0.942)    0.2750 (0.947)
(20, 15)    Exact             0.2018 (0.947)    0.1838 (0.943)    0.2719 (0.951)
            Asympt.           0.1842 (0.946)    0.1408 (0.941)    0.2713 (0.947)
            Non-par. boot-t   0.2201 (0.950)    0.1123 (0.945)    0.2718 (0.948)
            Non-par. boot-q   0.1834 (0.949)    0.1089 (0.946)    0.2617 (0.947)
            Non-par. BCa      0.1923 (0.948)    0.1166 (0.947)    0.2742 (0.953)
            Par. boot-t       0.2386 (0.947)    0.1812 (0.947)    0.2760 (0.949)
            Par. boot-q       0.2381 (0.948)    0.1705 (0.946)    0.2748 (0.951)
            Par. BCa          0.2385 (0.949)    0.1877 (0.945)    0.2742 (0.958)
(20, 20)    Exact             0.2013 (0.945)    0.1349 (0.952)    0.2542 (0.947)
            Asympt.           0.1838 (0.942)    0.1223 (0.949)    0.2527 (0.942)
            Non-par. boot-t   0.2206 (0.949)    0.1125 (0.947)    0.2721 (0.943)
            Non-par. boot-q   0.1832 (0.949)    0.1087 (0.947)    0.2611 (0.948)
            Non-par. BCa      0.1920 (0.947)    0.1169 (0.948)    0.2557 (0.946)
            Par. boot-t       0.2249 (0.948)    0.1311 (0.948)    0.2758 (0.947)
            Par. boot-q       0.2245 (0.947)    0.1201 (0.949)    0.2751 (0.946)
            Par. BCa          0.2249 (0.948)    0.1308 (0.947)    0.2556 (0.945)
(20, 30)    Exact             0.2017 (0.947)    0.1271 (0.943)    0.2346 (0.945)
            Asympt.           0.1841 (0.946)    0.1004 (0.944)    0.2320 (0.944)
            Non-par. boot-t   0.1918 (0.949)    0.1169 (0.945)    0.2232 (0.945)
            Non-par. boot-q   0.1799 (0.948)    0.1001 (0.946)    0.2311 (0.946)
            Non-par. BCa      0.1801 (0.949)    0.1007 (0.947)    0.2301 (0.951)
            Par. boot-t       0.2116 (0.946)    0.1315 (0.947)    0.2351 (0.946)
            Par. boot-q       0.2011 (0.947)    0.1316 (0.948)    0.2313 (0.947)
            Par. BCa          0.2007 (0.948)    0.1318 (0.949)    0.2332 (0.955)
(30, 20)    Exact             0.1596 (0.951)    0.1643 (0.941)    0.2311 (0.953)
            Asympt.           0.1503 (0.945)    0.1224 (0.942)    0.2311 (0.949)
            Non-par. boot-t   0.1501 (0.949)    0.1568 (0.946)    0.2278 (0.945)
            Non-par. boot-q   0.1424 (0.947)    0.1502 (0.947)    0.2199 (0.946)
            Non-par. BCa      0.1425 (0.947)    0.1507 (0.948)    0.2121 (0.946)
            Par. boot-t       0.1602 (0.947)    0.1677 (0.943)    0.2401 (0.948)
            Par. boot-q       0.1599 (0.948)    0.1601 (0.947)    0.2397 (0.946)
            Par. BCa          0.1566 (0.947)    0.1609 (0.947)    0.2320 (0.945)
(30, 30)    Exact             0.1597 (0.949)    0.1065 (0.952)    0.2086 (0.948)
            Asympt.           0.1504 (0.942)    0.1003 (0.943)    0.2078 (0.946)
            Non-par. boot-t   0.1495 (0.951)    0.1044 (0.953)    0.2084 (0.951)
            Non-par. boot-q   0.1485 (0.948)    0.0979 (0.951)    0.1999 (0.947)
            Non-par. BCa      0.1486 (0.945)    0.0977 (0.949)    0.2082 (0.943)
            Par. boot-t       0.1604 (0.949)    0.1071 (0.950)    0.2117 (0.948)
            Par. boot-q       0.1601 (0.948)    0.1063 (0.951)    0.2085 (0.945)
            Par. BCa          0.1598 (0.948)    0.1065 (0.948)    0.2080 (0.944)
(30, 50)    Exact             0.1604 (0.949)    0.1026 (0.935)    0.1881 (0.949)
            Asympt.           0.1510 (0.934)    0.1080 (0.938)    0.1865 (0.945)
            Non-par. boot-t   0.1485 (0.950)    0.0998 (0.938)    0.1856 (0.946)
            Non-par. boot-q   0.1449 (0.948)    0.0904 (0.937)    0.1855 (0.947)
            Non-par. BCa      0.1451 (0.951)    0.0911 (0.938)    0.1870 (0.945)
            Par. boot-t       0.1649 (0.946)    0.1087 (0.939)    0.1882 (0.942)
            Par. boot-q       0.1601 (0.949)    0.1023 (0.940)    0.1876 (0.944)
            Par. BCa          0.1604 (0.947)    0.1024 (0.941)    0.1875 (0.935)
(50, 30)    Exact             0.1211 (0.951)    0.1372 (0.937)    0.1855 (0.951)
            Asympt.           0.1169 (0.941)    0.1000 (0.932)    0.1856 (0.940)
            Non-par. boot-t   0.1171 (0.943)    0.0989 (0.942)    0.1823 (0.942)
            Non-par. boot-q   0.1078 (0.948)    0.0942 (0.940)    0.1799 (0.946)
            Non-par. BCa      0.1081 (0.947)    0.0943 (0.941)    0.1854 (0.946)
            Par. boot-t       0.1285 (0.946)    0.1389 (0.945)    0.1899 (0.948)
            Par. boot-q       0.1203 (0.945)    0.1367 (0.946)    0.1862 (0.945)
            Par. BCa          0.1213 (0.949)    0.1369 (0.945)    0.1860 (0.944)
(50, 50)    Exact             0.1215 (0.951)    0.0810 (0.954)    0.1621 (0.950)
            Asympt.           0.1172 (0.940)    0.0781 (0.942)    0.1617 (0.944)
            Non-par. boot-t   0.1149 (0.944)    0.0751 (0.945)    0.1602 (0.948)
            Non-par. boot-q   0.1102 (0.943)    0.0733 (0.943)    0.1599 (0.950)
            Non-par. BCa      0.1101 (0.942)    0.0731 (0.944)    0.1625 (0.949)
            Par. boot-t       0.1201 (0.942)    0.0791 (0.942)    0.1672 (0.947)
            Par. boot-q       0.1172 (0.941)    0.0773 (0.941)    0.1624 (0.943)
            Par. BCa          0.1199 (0.941)    0.0785 (0.940)    0.1623 (0.950)
(100, 100)  Exact             0.0845 (0.948)    0.0563 (0.946)    0.1149 (0.943)
            Asympt.           0.0830 (0.945)    0.0553 (0.944)    0.1147 (0.942)
            Non-par. boot-t   0.0865 (0.942)    0.0571 (0.943)    0.1153 (0.944)
            Non-par. boot-q   0.0818 (0.944)    0.0498 (0.947)    0.1001 (0.944)
            Non-par. BCa      0.0830 (0.943)    0.5341 (0.941)    0.1154 (0.953)
            Par. boot-t       0.0838 (0.942)    0.0862 (0.951)    0.1162 (0.945)
            Par. boot-q       0.0834 (0.941)    0.0856 (0.952)    0.1049 (0.943)
            Par. BCa          0.0836 (0.943)    0.0861 (0.951)    0.1150 (0.954)
Table 3. Data set 1.
0.04, 0.02, 0.06, 0.12, 0.14, 0.08, 0.22, 0.12, 0.08, 0.26,
0.24, 0.04, 0.14, 0.16, 0.08, 0.26, 0.32, 0.28, 0.14, 0.16,
0.24, 0.22, 0.12, 0.18, 0.24, 0.32, 0.16, 0.14, 0.08, 0.16,
0.24, 0.16, 0.32, 0.18, 0.24, 0.22, 0.16, 0.12, 0.24, 0.06,
0.02, 0.18, 0.22, 0.14, 0.06, 0.04, 0.14, 0.26, 0.18, 0.16
Table 4. Data set 2.
0.06, 0.12, 0.14, 0.04, 0.14, 0.16, 0.08, 0.26, 0.32, 0.22,
0.16, 0.12, 0.24, 0.06, 0.02, 0.18, 0.22, 0.14, 0.22, 0.16,
0.12, 0.24, 0.06, 0.02, 0.18, 0.22, 0.14, 0.02, 0.18, 0.22,
0.14, 0.06, 0.04, 0.14, 0.22, 0.14, 0.06, 0.04, 0.16, 0.24,
0.16, 0.32, 0.18, 0.24, 0.22, 0.04, 0.14, 0.26, 0.18, 0.16
Table 5. Maximum likelihood (MLE), parametric (Par.Boot) and nonparametric bootstrap (Npar.Boot) estimates (S.E.), with the statistics (D) and p-values (pval) of the K-S goodness-of-fit test for both data sets.

Method      η̂               λ̂               R̂
MLE         0.238 (0.024)    0.219 (0.022)    0.526 (0.045)
Npar.Boot   0.237 (0.017)    0.219 (0.016)    0.526 (0.031)
Par.Boot    0.235 (0.024)    0.218 (0.023)    0.524 (0.045)
K-S₁: D = 0.080, pval = 0.726
K-S₂: D = 0.100, pval = 0.607
Table 6. The exact, asymptotic and various parametric and nonparametric bootstrap confidence intervals of η, λ and R at the 95% confidence level.

Method            η                 λ                 R
Exact             (0.199, 0.296)    (0.184, 0.273)    (0.437, 0.613)
Asympt.           (0.191, 0.285)    (0.176, 0.262)    (0.438, 0.614)
Non-par. boot-t   (0.205, 0.277)    (0.190, 0.257)    (0.455, 0.586)
Non-par. boot-q   (0.204, 0.271)    (0.188, 0.250)    (0.463, 0.586)
Non-par. BCa      (0.207, 0.274)    (0.191, 0.254)    (0.460, 0.584)
Par. boot-t       (0.196, 0.302)    (0.183, 0.276)    (0.425, 0.628)
Par. boot-q       (0.188, 0.287)    (0.175, 0.262)    (0.431, 0.615)
Par. BCa          (0.199, 0.296)    (0.184, 0.273)    (0.437, 0.612)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
