Structural Safety

Volume 99, November 2022, 102259

Structural reliability analysis: A Bayesian perspective

https://doi.org/10.1016/j.strusafe.2022.102259

Highlights

  • Estimation of the failure probability integral is treated as a Bayesian inference problem

  • A principled Bayesian failure probability inference (BFPI) framework is proposed for the first time

  • Posterior variance and distribution of the failure probability are derived and numerically investigated

  • A parallel adaptive-Bayesian failure probability learning (PA-BFPL) method is proposed

  • PA-BFPL selects multiple points at each iteration and supports parallel distributed processing

Abstract

Numerical methods play a dominant role in structural reliability analysis, and the goal has long been to produce a failure probability estimate with a desired level of accuracy using a minimum number of performance function evaluations. In the present study, we attempt to offer a Bayesian perspective on failure probability integral estimation, as opposed to the classical frequentist perspective. For this purpose, a principled Bayesian Failure Probability Inference (BFPI) framework is first developed, which allows the numerical uncertainty in the failure probability due to discretization error to be quantified, propagated and reduced. In particular, the posterior variance of the failure probability is derived in a semi-analytical form, and the Gaussianity of the posterior failure probability distribution is investigated numerically. Then, a Parallel Adaptive-Bayesian Failure Probability Learning (PA-BFPL) method is proposed within the Bayesian framework. In the PA-BFPL method, a variance-amplified importance sampling technique is presented to evaluate the posterior mean and variance of the failure probability, and an adaptive parallel active learning strategy is proposed to identify multiple updating points at each iteration. A novel advantage of PA-BFPL is thus that both prior knowledge and parallel computing can be used to make inference about the failure probability. Four numerical examples are investigated, indicating the potential benefits of adopting a Bayesian approach to failure probability estimation.

Introduction

A fundamental problem in structural reliability analysis is to assess the likelihood that a structure attains an unsatisfactory performance in the presence of uncertainties. Within a probabilistic framework, the primary objective is to compute the so-called failure probability $P_f$, defined by the following multifold integral:

$$P_f = \mathrm{Prob}\left[g(\boldsymbol{X}) \leq 0\right] = \int_{\mathcal{X}} I(\boldsymbol{x}) f_{\boldsymbol{X}}(\boldsymbol{x}) \, \mathrm{d}\boldsymbol{x}, \tag{1}$$

where $\mathrm{Prob}$ denotes the probability operator; $\boldsymbol{X} = [X_1, X_2, \ldots, X_d] \in \mathcal{X} \subseteq \mathbb{R}^d$ is a vector of $d$ random variables with known joint probability density function (PDF) $f_{\boldsymbol{X}}(\boldsymbol{x})$; $Y = g(\boldsymbol{X}): \mathbb{R}^d \to \mathbb{R}$ is the performance function (or limit state function), with $y = g(\boldsymbol{x}) \leq 0$ indicating a failure state and $y = g(\boldsymbol{x}) > 0$ a safe state; and $I(\boldsymbol{x})$ is the failure indicator function such that

$$I(\boldsymbol{x}) = \begin{cases} 1, & g(\boldsymbol{x}) \leq 0, \\ 0, & \text{otherwise}. \end{cases}$$

Except for some special cases, it is impossible to derive an analytical solution to the failure probability defined by Eq. (1). Besides, the g-function in practical applications typically depends on a simulation model (e.g., a finite element model), so that each evaluation can be computationally demanding. Therefore, numerical methods that minimize the number of g-function evaluations are highly desirable for approximating the failure probability. Even though various methods following different paradigms have been developed over the past several decades (e.g., as summarized in [1]), the quest for methods that are at once efficient, accurate and generally applicable appears far from over. The present paper is also concerned with developing a new reliability analysis method, but places more emphasis on how to interpret the problem of failure probability estimation.
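To make the definition concrete, consider a purely illustrative special case (a toy example of our own, not one of the examples studied later): a linear performance function of $d$ independent standard normal variables, $g(\boldsymbol{X}) = \beta - \frac{1}{\sqrt{d}}\sum_{i=1}^{d} X_i$ with $X_i \sim \mathcal{N}(0,1)$. Since $\frac{1}{\sqrt{d}}\sum_{i=1}^{d} X_i$ is itself standard normal, Eq. (1) reduces to

$$P_f = \mathrm{Prob}\left[\frac{1}{\sqrt{d}}\sum_{i=1}^{d} X_i \geq \beta\right] = \Phi(-\beta),$$

where $\Phi$ is the standard normal cumulative distribution function; for instance, $\beta = 3$ gives $P_f \approx 1.35 \times 10^{-3}$. Such closed-form cases are the exception, which is precisely why numerical estimators of Eq. (1) are needed.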

In fact, the problem of evaluating the failure probability integral (Eq. (1)) can be treated as a statistical problem, though this does not mean that all methods must follow this perspective. Specifically, the failure probability $P_f$ is an unknown quantity of interest, about which we wish to make inference using a set of g-function observations (equivalently, I-function observations), say $g(\boldsymbol{x}^{(1)}), g(\boldsymbol{x}^{(2)}), \ldots, g(\boldsymbol{x}^{(n)})$. A statistical inference rule then approximates $P_f$ as a function of those observations.

In the classical frequentist viewpoint, the sample $\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}, \ldots, \boldsymbol{x}^{(n)}$ might be supposed to be drawn at random from a population distributed according to $f_{\boldsymbol{X}}(\boldsymbol{x})$. Taking the Monte Carlo simulation (MCS) method as an example, the MCS estimator of the failure probability is given by the sample mean:

$$\hat{P}_f^{\mathrm{MCS}} = \frac{1}{n} \sum_{i=1}^{n} I\left(\boldsymbol{x}^{(i)}\right).$$

The law of large numbers implies that $\hat{P}_f^{\mathrm{MCS}}$ converges to $P_f$ with probability 1 as $n \to \infty$. The estimator is viewed as a random variable since $\boldsymbol{x}^{(i)}$ is random. Besides, by the central limit theorem, $\hat{P}_f^{\mathrm{MCS}}$ asymptotically follows a normal distribution for large $n$. In practical applications, one can only afford a finite sample size to approximate the failure probability. Hence, the uncertainty associated with $\hat{P}_f^{\mathrm{MCS}}$ due to sampling variability may not be negligible. Such uncertainty can be measured by the variance of the estimator [2]:

$$\mathbb{V}\left[\hat{P}_f^{\mathrm{MCS}}\right] = \frac{\hat{P}_f^{\mathrm{MCS}}\left(1 - \hat{P}_f^{\mathrm{MCS}}\right)}{n},$$

where $\mathbb{V}$ denotes the variance operator. Despite its conceptual and algorithmic simplicity, the MCS method has been criticized by several authors, who question its efficiency and its theoretical soundness [3], [4]. In addition, some variants of the MCS method, e.g., subset simulation [5], [6] and importance sampling [7], [8], [9], [10], have been developed and offer improved efficiency. These methods, however, can still be regarded as more advanced frequentist approaches, and hence may be subject to the same criticism as MCS.
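As a minimal sketch of the frequentist estimator above, the following Python snippet applies crude MCS to the illustrative linear performance function introduced earlier (the g-function, the threshold β and the sample size are assumptions chosen purely for exposition, so that the exact answer Φ(−3) is known):

```python
import numpy as np

rng = np.random.default_rng(0)

d, beta = 2, 3.0                                   # assumed toy problem: linear g, standard normal inputs
g = lambda x: beta - x.sum(axis=1) / np.sqrt(d)    # performance function; g <= 0 means failure

n = 10**6                                          # MCS sample size (affordable only because g is cheap here)
x = rng.standard_normal((n, d))                    # samples from f_X
I = (g(x) <= 0).astype(float)                      # failure indicator values

pf_hat = I.mean()                                  # MCS estimate of P_f
var_hat = pf_hat * (1.0 - pf_hat) / n              # estimated variance of the estimator
cov_hat = np.sqrt(var_hat) / pf_hat                # coefficient of variation

print(f"P_f estimate = {pf_hat:.3e}, CoV = {cov_hat:.2%}")   # reference: Phi(-3) ~ 1.35e-3
```

The coefficient of variation scales roughly as $1/\sqrt{n P_f}$, which is why crude MCS becomes prohibitively expensive for small failure probabilities and why variance reduction or active learning strategies are sought.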

In contrast to the classical frequentist perspective, we seek to interpret the problem of failure probability integral estimation as a Bayesian inference problem. In this context, a central role is played by numerical integration (also known as quadrature), which is widely encountered in scientific computing. The study of numerical integration from a Bayesian point of view dates back to at least the work of Diaconis [11] and has led to what is commonly known as Bayesian quadrature, Bayesian cubature or probabilistic integration [12], [13], [14], [15]. In such methods, our uncertainty about the true integral value resulting from a limited number of integrand observations (i.e., discretization error) is regarded as a kind of epistemic uncertainty, which can be modeled following a Bayesian approach. The Bayesian approach to numerical integration has demonstrated many promising advantages over the classical approach (e.g., see [11], [16]). However, only a few studies have investigated the Bayesian approach to failure probability estimation, which requires a slightly different treatment than a common quadrature problem. Loosely speaking, the popular active learning reliability methods [17], [18], e.g., efficient global reliability analysis [19] and AK-MCS [20], come close to the Bayesian idea: the surrogate models (e.g., Kriging) used in those methods admit a Bayesian interpretation. In spite of that, the existing methods do not count as fully Bayesian in the strict sense, because they provide no probabilistic uncertainty measure over the failure probability. A truly Bayesian interpretation was, to the best of our knowledge, first clearly reported in [21], where the Bayesian Monte Carlo method developed in [13] was applied. However, it is challenging to directly place a Gaussian process (GP) prior over the failure indicator function, which exhibits a large discontinuity. The first author and his co-workers pursued the idea of re-interpreting failure probability integral estimation as Bayesian inference in a recent work [22], which was further improved in [1]. In [22], the posterior mean and an upper bound of the posterior variance of the failure probability were derived, given that a GP prior was assigned to the performance function. Nevertheless, the posterior variance and posterior distribution of the failure probability are still not available, and these are undoubtedly of interest and importance in a Bayesian framework.
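To illustrate the Bayesian reading in the simplest possible terms (this is a generic sketch of the idea, not the BFPI derivations of Section 2), one can place a GP prior on g, condition it on a few evaluations, and push posterior realizations of g through the failure indicator; each realization yields one plausible value of the failure probability, so the scatter of those values expresses the epistemic uncertainty caused by the limited number of g-evaluations. The toy g-function, kernel and sample sizes below are all assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)
d, beta = 2, 1.5                                   # assumed toy problem; exact P_f = Phi(-1.5) ~ 6.7e-2
g = lambda x: beta - x.sum(axis=1) / np.sqrt(d)    # performance function (assumed)

# A small design of experiments: the only g-evaluations that are "paid for"
X_train = rng.standard_normal((12, d)) * 2.0
y_train = g(X_train)

gp = GaussianProcessRegressor(ConstantKernel(1.0) * RBF(length_scale=1.0),
                              normalize_y=True).fit(X_train, y_train)

# Population from f_X, evaluated only on the (cheap) GP, never on g itself; a plain
# population like this is far too small for rare events, which is one motivation
# for the importance sampling technique used in the paper's method.
X_pop = rng.standard_normal((2_000, d))

# Posterior realizations of g over the population, mapped through the failure indicator
paths = gp.sample_y(X_pop, n_samples=200, random_state=2)    # shape (2000, 200)
pf_samples = (paths <= 0).mean(axis=0)                       # one P_f value per realization

print(f"posterior mean of P_f ~ {pf_samples.mean():.3e}")
print(f"posterior std  of P_f ~ {pf_samples.std():.3e}")
```

The posterior standard deviation shrinks as more g-evaluations are added, which is exactly the uncertainty that an adaptive scheme can monitor and reduce.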

This paper aims to present a fully Bayesian perspective on failure probability estimation, complementing the work in [1], [22]. The main contributions of this work are summarized as follows. First, to the best of the authors’ knowledge, a complete and principled Bayesian framework for failure probability estimation is developed for the first time. The framework is termed ‘Bayesian Failure Probability Inference’ (BFPI), in which the posterior variance of the failure probability is derived in a semi-analytical form. Besides, the posterior distribution of the failure probability is also empirically investigated through several numerical examples. Second, we illustrate how the BFPI framework can be used to make inference about the failure probability in an adaptive scheme. The resulting method is called ‘Parallel Adaptive-Bayesian Failure Probability Learning’ (PA-BFPL). In the PA-BFPL method, a variance-amplified importance sampling (VAIS) method is proposed to approximate the posterior mean and variance of the failure probability, and an adaptive parallel active learning strategy based on the concepts of expected misclassification probability contribution (EMPC) and k-means clustering is presented to enable multi-point selection (and hence parallel distributed processing). In addition, we also suggest a new stopping criterion to achieve a desired level of accuracy for the failure probability estimate.
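The batch-selection idea can be sketched as follows. The EMPC acquisition and the exact weighting used in PA-BFPL are defined in Section 3; the snippet below instead uses a simplified, generic acquisition (the plain probability that the GP misclassifies the sign of g) together with weighted k-means, purely to illustrate how several diverse, informative points can be extracted from a candidate pool in one iteration. All names and settings here are assumptions, not the paper's algorithm:

```python
import numpy as np
from scipy.stats import norm
from sklearn.cluster import KMeans

def select_batch(gp, X_cand, batch_size=4):
    """Pick `batch_size` diverse candidates with a high chance of being misclassified.

    gp      : fitted GaussianProcessRegressor surrogate of the performance function
    X_cand  : candidate pool drawn from (a suitable re-weighting of) f_X
    """
    mu, sigma = gp.predict(X_cand, return_std=True)
    # Probability that the GP misclassifies the sign of g at each candidate
    p_mis = norm.cdf(-np.abs(mu) / np.maximum(sigma, 1e-12))

    # Weighted k-means groups the candidates into `batch_size` promising regions
    km = KMeans(n_clusters=batch_size, n_init=10, random_state=0)
    labels = km.fit_predict(X_cand, sample_weight=p_mis)

    # Within each cluster, keep the single candidate with the largest weight
    batch = [X_cand[labels == k][np.argmax(p_mis[labels == k])]
             for k in range(batch_size)]
    return np.vstack(batch)
```

The selected batch can then be evaluated on the expensive g-function concurrently, which is the sense in which a multi-point strategy supports parallel distributed processing.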

The rest of this paper is organized as follows. The proposed BFPI framework is introduced in Section 2. Section 3 presents the proposed PA-BFPL method. Four numerical examples are investigated in Section 4 to demonstrate the proposed method. The Gaussianity of the posterior failure probability is numerically studied in Section 5. The paper is closed with some concluding remarks in Section 6.

Section snippets

Bayesian failure probability inference

In this section, the problem of failure probability estimation is interpreted as a Bayesian inference problem, leading to the Bayesian failure probability inference (BFPI) framework. As shown in Fig. 1, the proposed BFPI framework begins with a prior distribution over the g-function. Conditional on the observations obtained by evaluating the g-function at some points, we arrive at a posterior distribution over g. This in turn implies a posterior distribution over the failure indicator function, and hence over the failure probability itself.
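For context, once a posterior GP for $g$ with mean $\mu_n(\boldsymbol{x})$ and standard deviation $\sigma_n(\boldsymbol{x})$ (notation introduced here for illustration) is available, the posterior mean of the failure probability takes the well-known form (derived, e.g., in [22])

$$\mathbb{E}\left[P_f \mid \mathcal{D}_n\right] = \int_{\mathcal{X}} \mathrm{Prob}\left[g_n(\boldsymbol{x}) \leq 0\right] f_{\boldsymbol{X}}(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x} = \int_{\mathcal{X}} \Phi\!\left(-\frac{\mu_n(\boldsymbol{x})}{\sigma_n(\boldsymbol{x})}\right) f_{\boldsymbol{X}}(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x},$$

where $g_n$ denotes the posterior GP conditional on the $n$ observations $\mathcal{D}_n$ and $\Phi$ is the standard normal CDF. The semi-analytical posterior variance, which distinguishes BFPI from earlier work, is derived in Section 2 and is not reproduced here.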

Parallel adaptive-Bayesian failure probability learning

This section presents a novel method, termed ‘parallel adaptive-Bayesian failure probability learning’ (PA-BFPL), to make inference about the failure probability. The proposed method builds upon the BFPI framework and aims at producing a reasonably accurate failure probability estimate using a limited number of observations of the g-function. This objective is achieved mainly by developing a variance-amplified importance sampling (VAIS) method and an adaptive parallel active learning (APAL) strategy.
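The role of importance sampling in this setting can be sketched as follows. The exact construction of the VAIS density is given in Section 3; the snippet below merely assumes, for illustration, a Gaussian proposal whose covariance is an amplified version of that of $f_{\boldsymbol{X}}$, and uses it to estimate the posterior mean of the failure probability from the GP surrogate. Every name and setting here is an assumption:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def posterior_mean_pf(gp, d, n_is=50_000, amplify=2.0, seed=0):
    """Importance-sampling estimate of E[P_f | data] under a GP surrogate.

    Assumes standard normal inputs f_X = N(0, I); the proposal h is N(0, amplify^2 * I),
    i.e. f_X with an amplified variance, so the tails (where failures live) are covered better.
    """
    rng = np.random.default_rng(seed)
    f_X = multivariate_normal(mean=np.zeros(d), cov=np.eye(d))
    h = multivariate_normal(mean=np.zeros(d), cov=amplify**2 * np.eye(d))

    x = h.rvs(size=n_is, random_state=rng)               # samples from the proposal
    w = f_X.pdf(x) / h.pdf(x)                            # importance weights

    mu, sigma = gp.predict(x, return_std=True)
    p_fail = norm.cdf(-mu / np.maximum(sigma, 1e-12))    # posterior prob. that g(x) <= 0

    return np.mean(w * p_fail)                           # estimate of the posterior mean of P_f
```

The same weighted samples can, in principle, be reused to track the posterior variance of the failure probability as the surrogate is updated, which is the role VAIS plays inside PA-BFPL.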

Numerical examples

The performance of the proposed PA-BFPL method will be illustrated in this section by means of four numerical examples. These examples cover a variety of problems with varying dimensions, non-linearity and failure probabilities, etc. The reference results for the target failure probabilities are provided by MCS when there is no (semi-) analytical solution. For comparison, AK-MCS [20], Active Learning Probabilistic Integration (ALPI) [22], Active learning Kriging Markov Chain Monte Carlo

Numerical investigation on the posterior distribution of failure probability

In addition to the posterior mean and variance, the posterior distribution of the failure probability can be of interest for a complete Bayesian framework. For example, one can offer a credible interval for the failure probability when the posterior distribution is available. However, it cannot be obtained analytically from its definition (Eq. (22)). In this section, we attempt to numerically investigate the posterior distribution of the failure probability through the four numerical examples.
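Although the posterior distribution of $P_f$ has no closed form, an empirical credible interval is straightforward to read off once posterior samples of $P_f$ are available, for instance from GP realizations as sketched after the introduction. The helper below is an assumed, generic utility, not part of the paper's algorithm:

```python
import numpy as np

def credible_interval(pf_samples, level=0.95):
    """Equal-tailed credible interval from sampled P_f values (e.g., one per GP posterior realization)."""
    alpha = 100.0 * (1.0 - level) / 2.0
    lo, hi = np.percentile(pf_samples, [alpha, 100.0 - alpha])
    return lo, hi
```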

Concluding remarks

In the present paper, the task of failure probability estimation is interpreted from the perspective of Bayesian inference, in contrast to classical frequentist inference. The proposed Bayesian failure probability inference (BFPI) framework regards the discretization error as a kind of epistemic uncertainty and allows it to be properly modeled. To be specific, a prior Gaussian process is assumed for the performance function, and the posterior statistics are then derived for the performance function.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

Chao Dang is mainly supported by the China Scholarship Council (CSC). Pengfei Wei is grateful for the support from the National Natural Science Foundation of China (grant nos. 51905430 and 72171194). Marcos Valdebenito acknowledges the support of ANID (National Agency for Research and Development, Chile) under its FONDECYT program, grant number 1180271. Chao Dang, Pengfei Wei and Michael Beer would also like to acknowledge the support of the Sino-German Mobility Program, PR China, under grant number M-0175.

References (39)
