Abstract

The real-time analysis of a structure’s integrity, combined with a process to estimate damage levels, improves the safety of people and assets and reduces the economic losses associated with interrupted production or operation of the structure. The appearance of damage in a building changes its dynamic response (frequency, damping, and/or modal shape), and one of the most effective methods for continuous integrity assessment is based on the use of ambient vibrations. However, although the resonance frequency can be used as an indicator of change, misinterpretation is possible since frequency is affected not only by the occurrence of damage but also by certain operating conditions and, in particular, by certain atmospheric conditions. In this study, after analyzing the correlation of resonance frequency values with temperature for one building, we use the data mining method called “association rule learning” (ARL) to predict future frequencies according to temperature measurements. We then propose an anomaly interpretation strategy using the “traffic light” method.

1. Introduction

The management of risks associated with potentially hazardous activities remains a matter of profound public and technical interest for today’s societies. Among high-risk situations, the deterioration of structures and infrastructures is critical. All mechanical systems inevitably suffer deterioration [1]. Owners of civil engineering structures and infrastructures need to detect and track this deterioration as early as possible [2], both for reasons related to user safety and service disruption [3] and to define rational asset management policies, strategies, and practices. The causes of deterioration fall into two categories: ageing effects (e.g., environmental erosion, operating loads, and fatigue) and extreme events (e.g., fires, earthquakes, and tornadoes). Since structural failure can be catastrophic, not only from an economic and life-safety point of view but also in terms of its social and psychological impact, structural damage detection has been a subject of worldwide research since the early 2000s.

It has been proven that a properly established and effectively implemented maintenance program based on monitoring can significantly reduce operational and maintenance costs throughout the life cycle of the system [4]. Moreover, overall lifetime can be increased, leading to a direct increase in profitability. The process of determining and tracking structural integrity and assessing the nature of damage in a structure or mechanical system is often referred to as structural health monitoring [2, 5–8]. This process involves observation over time using periodically spaced measurements, extraction of damage-sensitive features from these measurements, and statistical analysis of these features to determine the current state of the system. The key issue to address first is whether or not damage has occurred in the structure as a whole. Damage can be defined as changes to the material and/or geometric properties of a system, including changes in boundary conditions and system connectivity, which have an adverse effect on its current or future performance. Worden and Manson [9] classify the methods into two groups: (1) model-driven methods, where an initial model of the undamaged structure is assumed to describe the normal condition and any deviation of actual parameters from normal behavior indicates damage, and (2) data-driven methods, which establish statistical models using measured data only.

One of the most efficient methods for structural health monitoring (SHM) based on the concept of pattern recognition and falling into the second group is vibration-based damage detection [1, 8, 10, 11]. This technique is particularly relevant because it is cost-effective, requires no visual inspection, and can be implemented continuously and in real time. Its premise is that structural damage causes changes in the measured modal parameters (frequency, damping, and modal shape) and that any modification of the stiffness, mass, or energy dissipation characteristics of a system alters its dynamic response [1]. Most SHM techniques are based on detecting changes relative to initial conditions, and their efficiency depends on the level of knowledge of prior conditions. Among all the available parameters, frequency is probably the parameter most sensitive to structural change and is directly related to the stiffness of the building [1].

However, for civil engineering structures (buildings and bridges), the most challenging problem is that the modal frequency, whose measurement is considered to be the most cost-effective solution for SHM, is subject to changes caused not only by damage but also by environmental and operating conditions (traffic, wind, humidity, solar radiation, and, most importantly, temperature). Salawu [12] reported that significant frequency changes alone do not automatically imply the existence of damage, since variations exceeding 5% due to ambient conditions have been measured in both concrete and steel bridges within a single day. Other authors have also observed this natural wandering of dynamic parameters in real buildings, and temperature seems to be the main parameter inducing changes in structures [13–17], although wind and heavy rain can also lead to sizeable variations [13, 14].

Data-driven methods consist of analyzing time series using mathematical or statistical tools to explain and/or predict the expected values of the modal parameters under a normal condition [18]. The natural wandering of modal parameters means that these methods must account for uncertainties related to the accuracy of the measurements [19]. They present a number of potential advantages: they do not require complete numerical modeling of the structure; uncertainties due to measurements and environmental and operating conditions are integrated using statistical tools; and they enable decision-making based on a statistical approach.

Since the condition of the structure may change with time, an effective SHM algorithm requires knowledge of initial conditions. The effects of environmental changes must therefore be fully understood and quantified to enable reliable damage assessment. For the practical implementation of an efficient damage detection algorithm, and to prevent costly false alarms related to structure occupancy or operability, we must make sure that natural variations are not misinterpreted as a loss of integrity or, conversely, that actual structural damage is not masked by changes due to environmental conditions. In this paper, a method is developed using an accurate assessment of the modal frequency of the structure and a data-based statistical learning tool that accounts for frequency variations caused by temperature, based on long-term measurement data. This paper is a companion to two previous ones: one describing the building and data [15] and a conference paper reporting the ageing effects observed in the building [20]. Herein, an easily implemented, real-time warning system based on deviation of the frequency from normal conditions, coupled with a traffic light decision method, is proposed and tested on an actual building.

2. Data Description

This work considers one building: the Ophite tower (OT) in France. OT is an 18-story residential structure built in 1972 and located in Lourdes (France) (Figure 1). The interstory height is a regular 3.3 m over the full height, and reinforced concrete shear walls provide lateral resistance in the two horizontal directions. Its dimensions are 19 m wide (T) and 24 m long (L). The building is founded on a rocky site composed of ophitic rock, with no site resonance or amplification. A temperature sensor at the top of the building provides one measurement per hour, at the same sampling times as the building monitoring. The quality and accuracy of these measurements are sufficient for the analysis performed in this study. The building has a permanent monitoring system, designed with a 24-bit acquisition system as part of the National Building Array Program of the French Accelerometric Network [21]. Accelerometric sensors (EST-FBA), located at three top corners of the building for bending- and torsion-mode assessment, record continuously at 125 Hz and transmit the data in real time to the National Data Center at the Institute of Earth Science (ISTerre, Grenoble). In this study, the same results were observed regardless of the sensor considered, so only one sensor was used for monitoring.

The structure was not designed to resist earthquakes, and the original design report was not available to the authors. A full description of the acquisition system is given in [15]. Recordings are continuous, and frequencies were measured hourly for 47 months, from 3 January 2011 to 3 December 2014, using the data recorded at the top of the building. Mikael et al. [15] used the random decrement technique (RDT) [22], a method that extracts an accurate resonance frequency value each hour. The details of this method applied to real data are given in [15, 23, 24]. The fundamental frequencies of the building are 1.74 Hz and 1.73 Hz in the longitudinal (L) and transverse (T) directions, respectively. Secondary bending modes were also observed, at 5.28 and 6.13 Hz in the L and T directions, respectively, along with a torsion mode close to 2.2 Hz. In this paper, only the continuous assessment of the first bending mode in the L direction is used to propose an operative real-time condition-based health assessment solution. We also assume that the structure recordings are archived and shared continuously and that the data mining algorithms are applied to the real-time data in a manner suitable for health monitoring and condition-based decision-making. Under these conditions, the decision-making tool must be efficient and cost-effective in terms of computing resources so as to provide information hourly.
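
The RDT implementation of [15, 22–24] is not reproduced here; as an illustration only, the following minimal sketch (Python/NumPy/SciPy) shows how an hourly RDT frequency estimate of this kind could be computed. The level-crossing trigger, the 1σ trigger level, the window length, and the zero-crossing frequency estimate are illustrative assumptions, not the authors’ implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def rdt_frequency(acc, fs=125.0, band=(1.4, 2.1), window_s=10.0):
    """Resonance-frequency estimate via the random decrement technique (RDT):
    stack segments starting at a level crossing to obtain a free-decay-like
    signature, then read the frequency from its zero crossings."""
    # Band-pass around the mode of interest (here the ~1.74 Hz bending mode)
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    x = filtfilt(b, a, np.asarray(acc, dtype=float))

    n = int(window_s * fs)                 # samples per stacked segment
    level = x.std()                        # trigger: upward crossing of 1 sigma (assumed)
    starts = np.where((x[:-1] < level) & (x[1:] >= level))[0]
    starts = starts[starts + n < len(x)]
    if len(starts) < 50:                   # too few triggers for a stable signature
        return np.nan

    signature = np.mean([x[i:i + n] for i in starts], axis=0)

    # Frequency from the mean half-period between zero crossings of the signature
    zc = np.where(np.diff(np.sign(signature)) != 0)[0]
    if len(zc) < 3:
        return np.nan
    half_period = np.mean(np.diff(zc)) / fs
    return 1.0 / (2.0 * half_period)
```

Applied hour by hour to the continuous 125 Hz record, such a routine would yield one frequency value per hour, as used throughout this paper.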

3. Methods

In order to overcome the difficulties raised by changes in modal structural frequency due to variations in environmental conditions, it is necessary to distinguish the damage condition from the natural variation of the structure’s behavior. Most previous investigations (e.g., [13–16]) indicate that temperature is the main cause of modal parameter variability in buildings and that changes in modal frequencies caused by temperature can be as high as 4%. Although many measurements and observations have been made in the field, very few studies have addressed the modeling of environmental effects on the modal properties of real-scale buildings. One solution is to use a machine-learning algorithm applied to long-term monitoring data to predict future values and to observe deviations from expectations based solely on prior normal conditions.

In this paper, we used association rule learning (ARL) [25], a simple method for discovering relationships between variables in large databases. Such solutions were first used in marketing to define “rules” between products and to make decisions about promotional pricing or product placement based on these rules [25]. All rules are of the form X implies Y. For two sets of parameters, in our case temperature and frequency recorded at the same time, the rules are simplified and a statistical relationship can be found between them. We know that for the same temperature, the building can experience a wide range of frequencies following a normal distribution [15, 16]. The different frequency values for the same temperature are then quantified by assessing the mean frequency value (±standard deviation) for each temperature. Following the original definition proposed by [25], each temperature implies a subset of frequencies.

The proposed method relies on the probability that, for a given temperature, a certain frequency value is typically observed, expressed in the probabilistic form P(E_f, E_C), where E_f and E_C are the events corresponding to a given value of frequency (f) and a given value of temperature (C). This confidence term can be rewritten as a conditional probability P(f | C), that is, the probability of a frequency value given a temperature. If the frequency and temperature values are known for a long period, we can build a statistical model able to predict the expected range of frequency for a given temperature, defined by the mean value and the standard deviation. The continuous monitoring of the building provides sufficient frequency and temperature data for the rules developed to be considered statistically significant. For extreme (or rare) temperatures, the data are insufficient, leading to uncertainty in the corresponding rules, but this will improve over the years.
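
As a minimal sketch of this rule-building step, assuming the 1°C temperature bins used in Section 4 (the function names are ours, not from [25]):

```python
import numpy as np

def learn_rules(temps, freqs, bin_width=1.0):
    """Learning phase: one rule per temperature bin, i.e.
    temperature -> (mean frequency, standard deviation, support)."""
    temps, freqs = np.asarray(temps), np.asarray(freqs)
    bins = np.round(temps / bin_width) * bin_width
    rules = {}
    for t in np.unique(bins):
        f = freqs[bins == t]
        sigma = f.std(ddof=1) if len(f) > 1 else 0.0
        rules[t] = (f.mean(), sigma, len(f))   # support = number of observations
    return rules

def predict(rules, temp, bin_width=1.0):
    """Expected (mean, sigma) for the current temperature, or None if this
    temperature was never experienced during the learning phase."""
    rule = rules.get(np.round(temp / bin_width) * bin_width)
    return None if rule is None else rule[:2]
```

The support stored with each rule reflects the point made above: rules for rare temperatures rest on few observations and are therefore less reliable until more data accumulate.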

Figure 2 shows a flowchart of the method developed for modeling the temperature-frequency relationship. Data come from long-term monitoring of the building, that is, a learning phase during which the natural wandering of the modal frequency is observed (the period t < t0, where t0 is the start of the condition-based monitoring process). First, temperature and modal frequency are extracted for the first year of monitoring. The frequency and temperature data from the learning phase are then used to build the ARL model. For each temperature value, the decision histogram is computed and used to predict the frequency values over the testing phase. The frequencies can of course only be predicted for temperatures actually experienced by the structure. When the observed frequency values fall outside the predicted range (i.e., beyond tolerance thresholds), the classification indicates that the structure is in an abnormal situation. This situation may result from damage to the building, or the observed frequency may simply fall in a tail of the frequency histogram for that particular temperature. An abnormal situation triggers the system to be vigilant for the next values of frequency and temperature (t0 = t0 + 1). The structure is kept in a state of vigilance if the abnormal values persist over time (Nt, the number of consecutive times the structure is in an abnormal situation). Otherwise, the building returns to a normal situation and the ARL process resumes, integrating the new {frequency-temperature} rule (N = N + Nt) and updating the learning phase with the frequency-temperature conditional probability. If the situation persists (Nt > Ntth, with Ntth being the maximal threshold for the number of consecutive abnormal situations), the structure is classified as abnormal, and this information is used to decide on a possible on-site inspection.
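
A sketch of one hourly iteration of this loop is given below, reusing the predict helper sketched above. The vigilance threshold (here 24 consecutive hours) and the incremental update_rule helper are illustrative assumptions, not values from Figure 2.

```python
def monitor_step(rules, temp, freq, state, n_sigma=2.0, nt_th=24):
    """One hourly step of the Figure 2 loop. `state` carries Nt (the count of
    consecutive abnormal hours) and a buffer of pending pairs; nt_th stands in
    for Ntth, and update_rule is an assumed helper that folds one
    (temperature, frequency) pair back into the rule set."""
    pred = predict(rules, temp)
    normal = pred is not None and abs(freq - pred[0]) <= n_sigma * pred[1]

    if normal:
        # Temporary anomaly over: integrate buffered pairs into the learning
        # set and update the rules (N = N + Nt in the text), then reset.
        for t_f in state["buffer"]:
            update_rule(rules, *t_f)
        state["buffer"].clear()
        state["Nt"] = 0
        return "normal"

    state["Nt"] += 1
    state["buffer"].append((temp, freq))
    # Persistent anomaly: classify as abnormal and consider on-site inspection
    return "abnormal" if state["Nt"] > nt_th else "vigilance"
```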

4. Application to the Ophite Tower

Figure 3(a) shows the frequency and temperature variations overlaid during the learning phase, between January 2009 and February 2010. The seasonal variation obtained by applying a low-pass filter (48 hours) and the daily variation over 7 days are also shown. As previously reported by [15], there is a clear positive correlation between the variations for both the seasonal (Figure 3(b)) and daily (Figure 3(c)) terms, with values ranging from 0.6 to 0.8 depending on the direction and mode considered. Although a detailed analysis is beyond the scope of this study, the temperature-frequency correlation is maximal at zero time lag.
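
As an illustration of this decomposition, the seasonal term and the lag of maximal correlation could be computed as follows (a sketch assuming hourly samples; the filter order and the handling of edge effects are our choices):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def seasonal_and_daily(series, cutoff_hours=48.0):
    """Split an hourly series into a seasonal term (48 h low-pass, as in
    Figure 3) and the residual daily term."""
    x = np.asarray(series, dtype=float)
    # Normalized cutoff: (1/48 cycles per hour) over the Nyquist of 0.5 cycles/hour
    b, a = butter(2, (1.0 / cutoff_hours) / 0.5, btype="low")
    seasonal = filtfilt(b, a, x)
    return seasonal, x - seasonal

def lag_of_max_correlation(freq_daily, temp_daily, max_lag=48):
    """Lag (in hours) maximizing the temperature-frequency correlation;
    np.roll wraps around, so edge effects are ignored in this sketch."""
    lags = np.arange(-max_lag, max_lag + 1)
    cc = [np.corrcoef(np.roll(temp_daily, k), freq_daily)[0, 1] for k in lags]
    return int(lags[int(np.argmax(cc))])
```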

In this study, we discuss the method considering the fundamental frequency in the longitudinal direction. The Ophite tower experienced a temperature variation range of 46°C, with a minimum of −6°C, corresponding to frequency values between 1.70 and 1.75 Hz. During this period, the scattering of frequencies for the same temperature can be explained by the fact that, as shown in Figure 3(c), different mechanisms affect the relationship between temperature and frequency; these are not discussed in this paper. For example, Figure 3(c) shows that during the daily heating cycle, the temperature and frequency variation rates are the same, while during the daily cooling cycle, the rates differ depending on the thermal diffusivity of the material on the outer surfaces of the building exposed to the air temperature. For a given temperature value, the structural modal frequency therefore varies according to the rules developed for modeling the frequency-temperature relationship. A complete description of the temperature-frequency relationship for the Ophite tower is given by Mikael et al. [15].

For example, Figure 4 shows the frequency distribution for three temperature ranges, each with a bandwidth of one degree centered on the temperature concerned. At 10°C, the mean frequency (μ) is 1.744 Hz with a standard deviation (σ) of 0.010 Hz, that is, a coefficient of variation (COV = σ/μ) of 0.6%. The same order of variation is observed at 15°C and 20°C, with μ = 1.754 Hz and σ = 0.009 Hz (COV = 0.5%) and μ = 1.755 Hz and σ = 0.006 Hz (COV = 0.3%), respectively. These examples show that the accuracy of the method used for frequency analysis (in our case, RDT) is critical to ensure an accurate assessment of the frequency values and their associated wandering, in order to improve knowledge of the normal-condition frequency values to be used in an efficient SHM method.

Having defined the rules and knowing the temperature, we can predict the n + 1 value of frequency and compare it with the value observed. Figure 5 shows an example of the predicted frequency values for each temperature, with data being processed every hour. The predictions are produced for the period between 1 March 2012 and 8 July 2012, representing the mean predicted value ±2σ, assuming that the learning process is over. The error (ε) is computed as follows:

ε = (f_obs − μ_C) / σ_C,

where f_obs is the current frequency observed, μ_C is the mean predicted value corresponding to the current temperature C, and σ_C is the standard deviation for that rule.
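
In code, this error can be evaluated at every hourly step from the rule matching the current temperature (a sketch reusing the predict helper of Section 3):

```python
def prediction_error(rules, temp, freq):
    """epsilon = (f_obs - mu_C) / sigma_C for the rule matching the current
    temperature; NaN when no rule (or no scatter) is available yet."""
    pred = predict(rules, temp)
    if pred is None or pred[1] == 0.0:
        return float("nan")
    mu, sigma = pred
    return (freq - mu) / sigma
```

With this normalization, |ε| ≤ 2 corresponds to the ±2σ band shown in Figure 5.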

The predictions are reliable until mid-May, with the calculated error remaining within approximately 2σ. A few points fall outside these thresholds; following the flowchart in Figure 2, these correspond to new {frequency-temperature} rules, that is, combinations not covered during learning, which lead to false predictions. In the method we propose, if these situations are temporary, the new data are added to the dataset used for learning, and the rules are recalculated regularly to improve the prediction, as long as no permanent anomaly is observed.

After 24 May 2012, the observed value drops compared with the normal predicted situation, with a maximum error of approximately 6σ. This deviation from normal conditions attracted our attention and raises questions about its significance for real-time decision-making: (1) Is the building returning to its original normal situation? (2) Does the frequency continue to decrease after this period, placing the building in an emergency situation? (3) Is the building settling into a new normal situation, with permanent deviations caused by changes to its elastic properties? To answer these questions, the learning dataset was adjusted to provide predicted values in 2013 using two different learning phases. The first covers all the data since the beginning of monitoring, including the frequency shift (January 2009–May 2012); the second covers the period after the frequency drop, starting on 3 June 2012. Figure 6 shows that, in the first case, because the frequency values are lower after the drop, the observed values lie at the lower boundary of the predicted range and the prediction scatter is much larger. In the second case, a good match is found between observations and predictions, indicating that, after the drop, the frequency of the building stabilized and follows the normal distribution. We therefore conclude that the frequency drop occurred over a limited period of 7 days (between 24 May and 2 June) and was permanent, confirming degradation within the building, but that the building returned to a stabilized frequency variation, allowing the vigilance status to be lifted.

In this example, during a temporary abnormal situation, the rules defined in the learning phase must be updated, as suggested in Figure 2. Anomaly detection must therefore be instantaneous, to ensure rapid detection of a changing situation that could degenerate into an emergency, but also sustained over time, to enable temporary anomalies to be distinguished from permanent ones. If the anomaly is considered temporary, the new conditions are integrated to complement the rules describing the situations normally experienced by the building during its lifetime. Over time, the prediction of resonance frequencies integrating environmental conditions (or other operational or natural conditions that may modify the monitored parameter) is refined, and the monitoring models improve. If the situation persists, the decision to intervene must be made and inspection or maintenance operations scheduled. Once the situation has stabilized, a new process must be put in place to continue the monitoring. A final key element for an operational and efficient condition-based decision process is the ability to visualize the anomaly quickly, along with its evolution over time. The solution presented here is based on the traffic light principle.

5. The Traffic Light Concept

When making decisions, decision-makers face a very large number of heterogeneous situations. Some factors are always relevant (time period, unpredicted events, etc.), while others depend on the system analyzed. Decision-makers must work from an incomplete set of uncertain information on the matter to be resolved [26]. Different decision strategies exist and can be classified according to the interaction between decision-makers and the information required for the decision, and according to the level of confidence and accuracy of that information [27]. One important point is the passage from contextual knowledge to the procedural context required for operative decision-making, for which decision-makers must accommodate data uncertainty and incompleteness, as well as the time sampling of the available information compared with the timescale of the decision to be made.

Many such uncertainties are due to the intrinsic randomness of natural phenomena (e.g., environmental loading, natural hazards, and material deterioration). To address this issue, several models have been proposed to assess the most significant uncertainties, and this research field is attracting increasing interest, as indicated by the following seminal papers published in recent years: Padgett et al. [28] and Ghosh and Padgett [29] studied the evolution in time of probabilistic seismic fragility curves for loss estimation; Mitropoulou et al. [30] presented a probabilistic life cycle cost analysis of reinforced concrete structures under seismic actions that takes into account randomness in both seismic demand and structural capacity; and Bocchini et al. [31] presented a discrete-time Markov chain (MC) model to approximate the probabilistic life cycle analysis of bridges.

In terms of the safety of structures and infrastructures, physical parameters related to the dynamic response of the structure, considered as an accurate, physical proxy of structural health, can be introduced into the decision chain. Recently, Trevlopoulos and Guéguen [32] proposed an MC-based model coupled with the degradation of frequency for assessing the time evolution of probabilistic fragility curves of existing buildings during aftershock sequences.

However, to address decision-makers’ demand for certain and unambiguous information, the so-called “traffic light” model is often used (e.g., [33–36]). This model evaluates the safety of structures according to criteria related to their operability and occupancy, assigning the structures to one of three categories [37]: tolerable (green), intermediate (amber), and intolerable (red). This strategy is usually applied in emergency management systems and has already been applied to postearthquake crisis management, which consists of conducting a visual inspection of buildings and classifying them according to occupancy safety, based on empirical expert judgment [38, 39]. In this case, the traffic light system defines a “green” level, indicating that the buildings are safe, apparently undamaged, and can be reoccupied; an “amber” level, indicating that the building presents all the characteristics of a damaged building but with a high level of uncertainty; and a “red” level, indicating that occupancy must be suspended immediately and that the building must ultimately be demolished. The same classification can be applied to infrastructures, with consequences for the economic costs of maintenance and operation.

Low statistical uncertainty and few catastrophic consequences characterize the “green” level [40]. By contrast, the “amber” and “red” levels are more problematic because the consequences of system failure may go beyond the ordinary dimensions of emergency management. The reliability of health assessment is low and the statistical uncertainty high, especially for the “amber” level; the consequences of wrong decisions on structure occupancy and infrastructure operation can significantly affect the well-being of the people concerned and the economic viability of the infrastructure; and there is little or no systematic knowledge of the impact of these consequences. Wrong long-term decisions on infrastructure operation may also lead to broader and/or irreversible ones, resulting in condition-based maintenance strategies that are not suited to the actual condition of the structure. Ultimately, this failure may lead to costly recovery operations (curative action), with such costs being much higher than investing in preventive maintenance solutions.

Until the traffic light concept has been implemented and proven in practice to provide a high degree of confidence, dependence on such measures alone is a risky option. Nevertheless, confidence in the system’s capability and operability requires accurate and fast measurement of the general modal parameters, and defining thresholds corresponding to the suitability of the building for occupancy will help ensure safer, more resilient structures. Two terms must be considered: (1) the short term, corresponding to a fast-evolving situation that might be caused by an extreme event, such as an earthquake, complementary to visual inspection; (2) the long term, corresponding to the continuous ageing of civil engineering structures and related to observations falling outside the expected values for several hours or days. Several authors have suggested correlations between European Macroseismic Scale damage grades [41] or the immediate red-amber-green classification of structures and the modal structural frequency drop observed after an earthquake (e.g., [32, 38, 39]). The results are summarized in Figure 7. Dunand et al. [38] and Vidal et al. [39] performed frequency measurements in a large number of buildings after the Boumerdès (2003, Algeria) and Lorca (2011, Spain) earthquakes, respectively, while Trevlopoulos and Guéguen [32] used empirical thresholds integrated into a decision-making process during an aftershock sequence.

In this study, we used a simple and fast warning system to monitor possible deviations of dynamic parameters from normal conditions in short- and long-term situations. We customized the traffic light concept following the principle of the energy consumption labels used worldwide to indicate product efficiency (http://europa.eu/youreurope/business/environment/energy-labels/index_en.htm). Figure 8 shows the threshold values finally adopted. In practice, these labels can offer real-time information on the current state of the structure; they are easy to interpret because they are widely assimilated by the community, and they can be adapted to short- and long-term monitoring. In our model, the green label is used for observed frequency values within the 95.4% confidence interval [−2σ, +2σ], with different thresholds of deviation distinguished by different shades of green (labels A to C). Beyond this level, the building is considered yellow (D) or amber (E), corresponding to deviations between 5% and 15% of the lower limit for short-term assessments and between 1% and 3% for long-term assessments; reaching the upper bound triggers a Check Level I procedure. If Check Level I is reached, the structure is probably damaged and an immediate on-site inspection is recommended. If the frequency continues to fall, Check Level II is flagged, corresponding to a frequency drop of more than 30% for short-term assessments, as observed after earthquakes [38, 39], and of 5% for continuous degradation. In such situations, immediate on-site inspection is mandatory, and it is recommended to evacuate the building pending a complete inspection involving additional parameters that can be used to assess the level of damage more accurately.
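
A sketch of this classification logic is given below; where the text leaves the label boundaries ambiguous (the A–C shades of green and the exact D/E split), the choices made here are assumptions.

```python
def traffic_light(drop_pct, horizon="long", within_2sigma=False):
    """Map a relative frequency drop (%) to the Figure 8 labels. Bounds per
    the text: yellow/amber span 5-15% (short term) or 1-3% (long term),
    Check Level I at the upper bound, Check Level II above 30% (short term)
    or 5% (long term). The D/E split and the A-C shades of green are not
    fully specified in the text; this reading is an assumption."""
    lo, hi, check2 = (5.0, 15.0, 30.0) if horizon == "short" else (1.0, 3.0, 5.0)
    if within_2sigma:
        return "green (A-C)"          # normal condition
    if drop_pct >= check2:
        return "Check Level II"       # evacuate, mandatory on-site inspection
    if drop_pct >= hi:
        return "Check Level I"        # immediate on-site inspection recommended
    if drop_pct >= lo:
        return "amber (E)"
    return "yellow (D)"               # outside 2 sigma but below the amber limit
```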

Figure 9 shows the real-time operating system applied to OT, starting in March 2012 and covering the period of the frequency drop previously observed. Over the first period, the values follow the predictions computed by ARL during the learning phase, except for a short period at the end of March, classified yellow because the deviation is less than 1%. After this period, the observed values match the predicted values once again and the building recovers its green classification. After 24 May, the deviation increases to more than 1% and is considered permanent. The deviation reaches 3% in early June and the building classification changes from yellow to amber: a Check Level I is required before a decision can be made. After inspection and decision, monitoring is restarted using the new modal structural frequency, and the learning phase is rebuilt month after month.

6. Conclusions and Discussions

In this study, we propose a framework for a condition-based decision strategy applied to civil engineering buildings, in order to detect the occurrence of damage as early as possible. We show that for the building studied, there is a strong correlation (see Mikael et al. [15] for a full description of the correlation coefficients for two modes and directions) between frequency, that is, the parameter used to assess the health of the structure, and temperature, which can induce changes of up to 5% (depending on the structure) and can mask actual damage or lead to false alarms. For a more reliable assessment of damage, we used a simple data mining method (association rule learning) comprising two steps, learning and testing, to predict future frequency variations based on data collected during the learning phase. With two years of frequency and temperature data, we analyzed the behavior of the OT building.

After a certain period of learning to understand the natural wandering of frequencies related to temperature fluctuations, we propose a real-time, condition-based decision strategy using a customized traffic light model. This strategy is similar to an algorithm trained on data represented by numerical values [41] and extended to the histogram decision algorithm, in which the data are considered in the form of histogram variables [42, 43]. Each histogram variable (i.e., the modal structural frequency value for a given temperature) thus has an assigned variability with a relative frequency. The decision tree learning algorithm can then be applied to histogram data. This strategy provides valuable information about the overall health of the structure in the long term (ageing effects, operational load, and change of building use), and we propose an extension for the short term, representing decision-making after an extreme event, such as an earthquake.

We tested the algorithm on real data from the Ophite tower, for which we had observed a decrease in frequency that then stabilized at a new value. The most attractive feature of the proposed cost-effective methodology is that it can be implemented on real buildings and updated every hour without requiring time-consuming computation. Moreover, an inherent advantage of data-driven methods is that they account for uncertainty and support statistical decision-making without requiring costly numerical or physical models.

To improve this strategy, we could increase the number of sensors in the building and add more parameters to the health-monitoring process, such as damping, modal shapes, or interstory drift, thus increasing the level of confidence in the warning system. With new parameters added to the decision process, more sophisticated time series methods based on data mining and learning concepts, such as support vector machines, could be used for fault detection; these would improve decision accuracy but increase the overall cost of the process. Such considerations should be taken into account when selecting a method, using a cost-benefit analysis of the solution.

Data Availability

The OT data used in this manuscript are provided by the French Accelerometric Network (RAP-RESIF-RESIF (1995), http://data.datacite.org/10.15778/RESIF.RA (last access: 20/02/2017)).

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The RESIF network is a national research infrastructure recognized as such by the French Ministry of Higher Education and Research. RESIF is additionally supported by a public grant overseen by the French National Research Agency (ANR) as part of the “Investissements d’Avenir” program (reference: ANR-11-EQPX-0040) and by the French Ministry of Ecology and Sustainable Development. This study was sponsored by the Urban Seismology project at the Institute of Earth Science (ISTerre) of the University of Grenoble-Alpes and by a grant from Labex OSUG@2020 (Investissements d’Avenir, ANR10-LABX56).