
This example shows how to build an exponential degradation model to predict the Remaining Useful Life (RUL) of a wind turbine bearing in real time. The exponential degradation model predicts the RUL from its parameter priors and the latest measurements (historical run-to-failure data can help estimate the parameter priors, but it is not required). The model detects a significant degradation trend in real time and updates its parameter priors when a new observation becomes available. The example follows a typical prognosis workflow: data import and exploration, feature extraction and postprocessing, feature importance ranking and fusion, model fitting and prediction, and performance analysis.

The dataset is collected from a 2 MW wind turbine high-speed shaft driven by a 20-tooth pinion gear [1]. A vibration signal of 6 seconds was acquired each day for 50 consecutive days (there are two measurements on March 17, which are treated as two days in this example). An inner race fault developed and caused the failure of the bearing across the 50-day period.

A compact version of the dataset is available in the toolbox. To use the compact dataset, copy the dataset to the current folder and enable its write permission.

```
copyfile(...
    fullfile(matlabroot, 'toolbox', 'predmaint', ...
    'predmaintdemos', 'windTurbineHighSpeedBearingPrognosis'), ...
    'WindTurbineHighSpeedBearingPrognosis-Data-master')
fileattrib(fullfile('WindTurbineHighSpeedBearingPrognosis-Data-master', '*.mat'), '+w')
```

The measurement time step for the compact dataset is 5 days.

`timeUnit = '\times 5 day';`

For the full dataset, download the entire repository as a zip file from https://github.com/mathworks/WindTurbineHighSpeedBearingPrognosis-Data and save it in the same directory as this live script. Then unzip the file using the following command. The measurement time step for the full dataset is 1 day.

```
if exist('WindTurbineHighSpeedBearingPrognosis-Data-master.zip', 'file')
    unzip('WindTurbineHighSpeedBearingPrognosis-Data-master.zip')
    timeUnit = 'day';
end
```

The results in this example are generated from the full dataset. It is highly recommended to download the full dataset to run this example. Results generated from the compact dataset might not be meaningful.

Create a `fileEnsembleDatastore` of the wind turbine data. The data contains a vibration signal and a tachometer signal. The `fileEnsembleDatastore` will parse the file name and extract the date information as `IndependentVariables`. See the helper functions in the supporting files associated with this example for more details.

```
hsbearing = fileEnsembleDatastore(...
    fullfile('.', 'WindTurbineHighSpeedBearingPrognosis-Data-master'), ...
    '.mat');
hsbearing.DataVariables = ["vibration", "tach"];
hsbearing.IndependentVariables = "Date";
hsbearing.SelectedVariables = ["Date", "vibration", "tach"];
hsbearing.ReadFcn = @helperReadData;
hsbearing.WriteToMemberFcn = @helperWriteToHSBearing;
tall(hsbearing)
```

```
ans =

  M×3 tall table

            Date             vibration            tach
    ____________________    _________________    _______________

    07-Mar-2013 01:57:46    [585936×1 double]    [2446×1 double]
    08-Mar-2013 02:34:21    [585936×1 double]    [2411×1 double]
    09-Mar-2013 02:33:43    [585936×1 double]    [2465×1 double]
    10-Mar-2013 03:01:02    [585936×1 double]    [2461×1 double]
    11-Mar-2013 03:00:24    [585936×1 double]    [2506×1 double]
    12-Mar-2013 06:17:10    [585936×1 double]    [2447×1 double]
    13-Mar-2013 06:34:04    [585936×1 double]    [2438×1 double]
    14-Mar-2013 06:50:41    [585936×1 double]    [2390×1 double]
            :                       :                  :
```

The sample rate of the vibration signal is 97656 Hz.

`fs = 97656; % Hz`

This section explores the data in both the time domain and the frequency domain to motivate which features to extract for prognosis.

First, visualize the vibration signals in the time domain. In this dataset, there are 50 vibration signals of 6 seconds each, measured on 50 consecutive days. Now plot the 50 vibration signals one after another.

```
reset(hsbearing)
tstart = 0;
figure
hold on
while hasdata(hsbearing)
    data = read(hsbearing);
    v = data.vibration{1};
    t = tstart + (1:length(v))/fs;
    % Downsample the signal to reduce memory usage
    plot(t(1:10:end), v(1:10:end))
    tstart = t(end);
end
hold off
xlabel('Time (s), 6 second per day, 50 days in total')
ylabel('Acceleration (g)')
```

The vibration signals in the time domain reveal an increasing trend in the signal impulsiveness. Indicators quantifying the impulsiveness of the signal, such as kurtosis, peak-to-peak value, and crest factor, are potential prognostic features for this wind turbine bearing dataset [2].
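For illustration only, the impulsiveness indicators named above can be sketched outside MATLAB as well; here is a minimal Python version (`impulsiveness` is a hypothetical helper, not part of this example's code). A superimposed impulse raises the kurtosis sharply, which is why these statistics track the developing fault:

```python
import numpy as np

def impulsiveness(v):
    """Kurtosis, peak-to-peak value, and crest factor of one vibration record."""
    v = np.asarray(v, dtype=float)
    rms = np.sqrt(np.mean(v ** 2))
    kurt = np.mean((v - v.mean()) ** 4) / np.var(v) ** 2  # non-excess kurtosis
    return {"Kurtosis": kurt,
            "Peak2Peak": v.max() - v.min(),
            "CrestFactor": v.max() / rms}

# A pure sine has kurtosis 1.5; an added impulse raises it sharply.
t = np.linspace(0, 1, 1000, endpoint=False)
sine = np.sin(2 * np.pi * 5 * t)
impulsive = sine.copy()
impulsive[500] += 10.0
print(impulsiveness(sine)["Kurtosis"])       # 1.5 for a pure sine
print(impulsiveness(impulsive)["Kurtosis"])  # much larger
```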

On the other hand, spectral kurtosis is considered a powerful tool for wind turbine prognosis in the frequency domain [3]. To visualize how the spectral kurtosis changes over time, plot the spectral kurtosis values as a function of frequency and measurement day.

```
hsbearing.DataVariables = ["vibration", "tach", "SpectralKurtosis"];
colors = parula(50);
figure
hold on
reset(hsbearing)
day = 1;
while hasdata(hsbearing)
    data = read(hsbearing);
    data2add = table;
    
    % Get vibration signal and measurement date
    v = data.vibration{1};
    
    % Compute spectral kurtosis with window size = 128
    wc = 128;
    [SK, F] = pkurtosis(v, fs, wc);
    data2add.SpectralKurtosis = {table(F, SK)};
    
    % Plot the spectral kurtosis
    plot3(F, day*ones(size(F)), SK, 'Color', colors(day, :))
    
    % Write spectral kurtosis values
    writeToLastMemberRead(hsbearing, data2add);
    
    % Increment the number of days
    day = day + 1;
end
hold off
xlabel('Frequency (Hz)')
ylabel('Time (day)')
zlabel('Spectral Kurtosis')
grid on
view(-45, 30)
cbar = colorbar;
ylabel(cbar, 'Fault Severity (0 - healthy, 1 - faulty)')
```

The fault severity indicated in the colorbar is the measurement date normalized to a 0 to 1 scale. Observe that the spectral kurtosis value around 10 kHz gradually increases as the machine condition degrades. Statistical features of the spectral kurtosis, such as its mean and standard deviation, are potential indicators of the bearing degradation [3].

Based on the analysis in the previous section, extract a collection of statistical features from the time-domain signal and the spectral kurtosis. More mathematical details about the features are provided in [2] and [3].

First, pre-assign the feature names in `DataVariables` before writing them into the `fileEnsembleDatastore`.

```
hsbearing.DataVariables = [hsbearing.DataVariables; ...
    "Mean"; "Std"; "Skewness"; "Kurtosis"; "Peak2Peak"; ...
    "RMS"; "CrestFactor"; "ShapeFactor"; "ImpulseFactor"; "MarginFactor"; "Energy"; ...
    "SKMean"; "SKStd"; "SKSkewness"; "SKKurtosis"];
```

Compute feature values for each ensemble member.

```
hsbearing.SelectedVariables = ["vibration", "SpectralKurtosis"];
reset(hsbearing)
while hasdata(hsbearing)
    data = read(hsbearing);
    v = data.vibration{1};
    SK = data.SpectralKurtosis{1}.SK;
    features = table;
    
    % Time Domain Features
    features.Mean = mean(v);
    features.Std = std(v);
    features.Skewness = skewness(v);
    features.Kurtosis = kurtosis(v);
    features.Peak2Peak = peak2peak(v);
    features.RMS = rms(v);
    features.CrestFactor = max(v)/features.RMS;
    features.ShapeFactor = features.RMS/mean(abs(v));
    features.ImpulseFactor = max(v)/mean(abs(v));
    features.MarginFactor = max(v)/mean(abs(v))^2;
    features.Energy = sum(v.^2);
    
    % Spectral Kurtosis related features
    features.SKMean = mean(SK);
    features.SKStd = std(SK);
    features.SKSkewness = skewness(SK);
    features.SKKurtosis = kurtosis(SK);
    
    % Write the derived features to the corresponding file
    writeToLastMemberRead(hsbearing, features);
end
```

Select the independent variable `Date` and all the extracted features to construct the feature table.

```
hsbearing.SelectedVariables = ["Date", "Mean", "Std", "Skewness", "Kurtosis", "Peak2Peak", ...
    "RMS", "CrestFactor", "ShapeFactor", "ImpulseFactor", "MarginFactor", "Energy", ...
    "SKMean", "SKStd", "SKSkewness", "SKKurtosis"];
```

Since the feature table is small enough to fit in memory (50 by 15), gather the table before processing. For big data, it is recommended to perform operations in tall format until you are confident that the output is small enough to fit in memory.

featureTable = gather(tall(hsbearing));

```
Evaluating tall expression using the Parallel Pool 'local':
- Pass 1 of 1: Completed in 1 sec
Evaluation completed in 1 sec
```

Convert the table to a timetable so that the time information is always associated with the feature values.

featureTable = table2timetable(featureTable)

`featureTable=`*50×15 timetable*

```
    Date                    Mean     Std     Skewness     Kurtosis   Peak2Peak   RMS      CrestFactor   ShapeFactor   ImpulseFactor   MarginFactor   Energy       SKMean      SKStd      SKSkewness   SKKurtosis
    ____________________    _______  ______  ___________  ________   _________   ______   ___________   ___________   _____________   ____________   __________   __________  ________   __________   __________
    07-Mar-2013 01:57:46    0.34605  2.2705  0.0038699    2.9956     21.621      2.2967   4.9147        1.2535        6.1607          3.3625         3.0907e+06   0.001253    0.025674   -0.22882     3.362
    08-Mar-2013 02:34:21    0.24409  2.0621  0.0030103    3.0195     19.31       2.0765   4.9129        1.2545        6.163           3.7231         2.5266e+06   0.0046823   0.020888   0.057651     3.3508
    09-Mar-2013 02:33:43    0.21873  2.1036  -0.0010289   3.0224     21.474      2.1149   5.2143        1.2539        6.5384          3.8766         2.6208e+06   -0.0011084  0.022705   -0.50004     4.9953
    10-Mar-2013 03:01:02    0.21372  2.0081  0.001477     3.0415     19.52       2.0194   5.286         1.2556        6.637           4.1266         2.3894e+06   0.0087035   0.034456   1.4705       8.1235
    11-Mar-2013 03:00:24    0.21518  2.0606  0.0010116    3.0445     21.217      2.0718   5.0058        1.2554        6.2841          3.8078         2.515e+06    0.013559    0.032193   0.11355      3.848
    12-Mar-2013 06:17:10    0.29335  2.0791  -0.008428    3.018      20.05       2.0997   4.7966        1.2537        6.0136          3.5907         2.5833e+06   0.0017539   0.028326   -0.1288      3.8072
    13-Mar-2013 06:34:04    0.21293  1.972   -0.0014294   3.0174     18.837      1.9834   4.8496        1.2539        6.0808          3.8441         2.3051e+06   0.0039353   0.029292   -1.4734      8.1242
    14-Mar-2013 06:50:41    0.24401  1.8114  0.0022161    3.0057     17.862      1.8278   4.8638        1.2536        6.0975          4.1821         1.9575e+06   0.0013107   0.022468   0.27438      2.8597
    15-Mar-2013 06:50:03    0.20984  1.9973  0.001559     3.0711     21.12       2.0083   5.4323        1.2568        6.8272          4.2724         2.3633e+06   0.0023769   0.047767   -2.5832      20.171
    16-Mar-2013 06:56:43    0.23318  1.9842  -0.0019594   3.0072     18.832      1.9979   5.0483        1.254         6.3304          3.9734         2.3387e+06   0.0076327   0.030418   0.52322      4.0082
    17-Mar-2013 06:56:04    0.21657  2.113   -0.0013711   3.1247     21.858      2.1241   5.4857        1.2587        6.9048          4.0916         2.6437e+06   0.0084907   0.037876   2.3753       11.491
    17-Mar-2013 18:47:56    0.19381  2.1335  -0.012744    3.0934     21.589      2.1423   4.7574        1.2575        5.9825          3.5117         2.6891e+06   0.019624    0.055537   3.1986       17.796
    18-Mar-2013 18:47:15    0.21919  2.1284  -0.0002039   3.1647     24.051      2.1396   5.7883        1.2595        7.2902          4.2914         2.6824e+06   0.016315    0.064516   2.8735       11.632
    20-Mar-2013 00:33:54    0.35865  2.2536  -0.002308    3.0817     22.633      2.2819   5.2771        1.2571        6.6339          3.6546         3.0511e+06   0.0011097   0.051539   -0.056774    7.0712
    21-Mar-2013 00:33:14    0.1908   2.1782  -0.00019286  3.1548     25.515      2.1865   6.0521        1.26          7.6258          4.3945         2.8013e+06   0.0040392   0.066254   -0.39587     12.111
    22-Mar-2013 00:39:50    0.20569  2.1861  0.0020375    3.2691     26.439      2.1958   6.1753        1.2633        7.8011          4.4882         2.825e+06    0.020676    0.077728   2.6038       11.088
    ⋮
```

Extracted features usually contain noise, and noise with a trend opposite to the degradation can be harmful to the RUL prediction. In addition, monotonicity, one of the feature performance metrics introduced next, is not robust to noise. Therefore, apply a causal moving-mean filter with a lag window of 5 steps to the extracted features, where "causal" means that no future values are used in the moving mean.
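As a language-agnostic illustration of the causal filter (equivalent to MATLAB's `movmean(x, [5 0])`), here is a minimal Python sketch; `causal_movmean` is a hypothetical name:

```python
def causal_movmean(x, lag=5):
    """Mean over the current sample and up to `lag` preceding samples,
    so no future values are used (MATLAB: movmean(x, [lag 0]))."""
    out = []
    for i in range(len(x)):
        window = x[max(0, i - lag):i + 1]
        out.append(sum(window) / len(window))
    return out

print(causal_movmean([2, 4, 6, 8]))  # [2.0, 3.0, 4.0, 5.0]
```

Note that early samples average over shorter windows, which is why the first filtered value equals the first raw value.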

```
variableNames = featureTable.Properties.VariableNames;
featureTableSmooth = varfun(@(x) movmean(x, [5 0]), featureTable);
featureTableSmooth.Properties.VariableNames = variableNames;
```

Here is an example showing the feature before and after smoothing.

```
figure
hold on
plot(featureTable.Date, featureTable.SKMean)
plot(featureTableSmooth.Date, featureTableSmooth.SKMean)
hold off
xlabel('Time')
ylabel('Feature Value')
legend('Before smoothing', 'After smoothing')
title('SKMean')
```

Moving-mean smoothing introduces a time delay in the signal, but the delay effect can be mitigated by selecting a proper threshold in the RUL prediction.

In practice, the data of the whole life cycle is not available when developing the prognostic algorithm, but it is reasonable to assume that some data in the early stage of the life cycle has been collected. Hence data collected in the first 20 days (40% of the life cycle) is treated as training data. The following feature importance ranking and fusion is only based on the training data.

```
breaktime = datetime(2013, 3, 27);
breakpoint = find(featureTableSmooth.Date < breaktime, 1, 'last');
trainData = featureTableSmooth(1:breakpoint, :);
```

In this example, the monotonicity metric proposed in [4] is used to quantify the merit of the features for prognosis.

Monotonicity of $\mathit{i}$th feature ${\mathit{x}}_{\mathit{i}}$ is computed as

$$\mathrm{Monotonicity}\left({\mathit{x}}_{\mathit{i}}\right)=\frac{1}{\mathit{m}}\sum _{\mathit{j}=1}^{\mathit{m}}\frac{|\mathrm{number}\text{\hspace{0.17em}}\mathrm{of}\text{\hspace{0.17em}}\mathrm{positive}\text{\hspace{0.17em}}\mathrm{diff}\left({\mathit{x}}_{\mathit{i}}^{\mathit{j}}\right)\text{\hspace{0.17em}}-\text{\hspace{0.17em}}\mathrm{number}\text{\hspace{0.17em}}\mathrm{of}\text{\hspace{0.17em}}\mathrm{negative}\text{\hspace{0.17em}}\mathrm{diff}\left({\mathit{x}}_{\mathit{i}}^{\mathit{j}}\right)|}{\mathit{n}-1}$$

where $\mathit{n}$ is the number of measurement points, in this case $\mathit{n}=50$. $\mathit{m}$ is the number of machines monitored, in this case $\mathit{m}=1$. ${\mathit{x}}_{\mathit{i}}^{\mathit{j}}$ is the $\mathit{i}$th feature measured on $\mathit{j}$th machine. $\mathrm{diff}\left({\mathit{x}}_{\mathit{i}}^{\mathit{j}}\right)={\mathit{x}}_{\mathit{i}}^{\mathit{j}}\left(\mathit{t}\right)-{\mathit{x}}_{\mathit{i}}^{\mathit{j}}\left(\mathit{t}-1\right)$, i.e. the difference of the signal ${\mathit{x}}_{\mathit{i}}^{\mathit{j}}$.
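For a single monitored machine ($m = 1$), the formula above reduces to the short sketch below (Python, for illustration only; the toolbox computes this with the `monotonicity` function, and `monotonicity_single` is a hypothetical name):

```python
def monotonicity_single(x):
    """Monotonicity of one feature history for a single machine (m = 1):
    |#positive diffs - #negative diffs| / (n - 1)."""
    diffs = [b - a for a, b in zip(x, x[1:])]
    pos = sum(d > 0 for d in diffs)
    neg = sum(d < 0 for d in diffs)
    return abs(pos - neg) / (len(x) - 1)

print(monotonicity_single([1, 2, 3, 4, 5]))  # 1.0 (strictly increasing)
print(monotonicity_single([1, 0, 2, 0, 3]))  # 0.0 (ups and downs cancel)
```

A strictly monotonic trend scores 1, while a feature that oscillates with no net direction scores near 0.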

```
% Since moving window smoothing is already done, set 'WindowSize' to 0 to
% turn off the smoothing within the function
featureImportance = monotonicity(trainData, 'WindowSize', 0);
helperSortedBarPlot(featureImportance, 'Monotonicity');
```

Kurtosis of the signal is the top feature based on the monotonicity.

Features with feature importance score larger than 0.3 are selected for feature fusion in the next section.

```
trainDataSelected = trainData(:, featureImportance{:,:} > 0.3);
featureSelected = featureTableSmooth(:, featureImportance{:,:} > 0.3)
```

`featureSelected=`*50×5 timetable*

```
    Date                    Mean     Kurtosis   ShapeFactor   MarginFactor   SKStd
    ____________________    _______  ________   ___________   ____________   ________
    07-Mar-2013 01:57:46    0.34605  2.9956     1.2535        3.3625         0.025674
    08-Mar-2013 02:34:21    0.29507  3.0075     1.254         3.5428         0.023281
    09-Mar-2013 02:33:43    0.26962  3.0125     1.254         3.6541         0.023089
    10-Mar-2013 03:01:02    0.25565  3.0197     1.2544        3.7722         0.025931
    11-Mar-2013 03:00:24    0.24756  3.0247     1.2546        3.7793         0.027183
    12-Mar-2013 06:17:10    0.25519  3.0236     1.2544        3.7479         0.027374
    13-Mar-2013 06:34:04    0.233    3.0272     1.2545        3.8282         0.027977
    14-Mar-2013 06:50:41    0.23299  3.0249     1.2544        3.9047         0.02824
    15-Mar-2013 06:50:03    0.2315   3.033      1.2548        3.9706         0.032417
    16-Mar-2013 06:56:43    0.23475  3.0273     1.2546        3.9451         0.031744
    17-Mar-2013 06:56:04    0.23498  3.0407     1.2551        3.9924         0.032691
    17-Mar-2013 18:47:56    0.21839  3.0533     1.2557        3.9792         0.037226
    18-Mar-2013 18:47:15    0.21943  3.0778     1.2567        4.0538         0.043097
    20-Mar-2013 00:33:54    0.23854  3.0905     1.2573        3.9658         0.047942
    21-Mar-2013 00:33:14    0.23537  3.1044     1.2578        3.9862         0.051023
    22-Mar-2013 00:39:50    0.23079  3.1481     1.2593        4.072          0.058908
    ⋮
```

Principal component analysis (PCA) is used for dimension reduction and feature fusion in this example. Before performing PCA, it is a good practice to normalize the features onto the same scale. Note that the PCA coefficients and the mean and standard deviation used in normalization are obtained from the training data and then applied to the entire dataset.

```
meanTrain = mean(trainDataSelected{:,:});
sdTrain = std(trainDataSelected{:,:});
trainDataNormalized = (trainDataSelected{:,:} - meanTrain)./sdTrain;
coef = pca(trainDataNormalized);
```

The mean, standard deviation and PCA coefficients are used to process the entire data set.

```
PCA1 = (featureSelected{:,:} - meanTrain) ./ sdTrain * coef(:, 1);
PCA2 = (featureSelected{:,:} - meanTrain) ./ sdTrain * coef(:, 2);
```
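The train-then-apply pattern can be sketched in Python with synthetic stand-in data (illustrative only; the workflow above uses MATLAB's `pca` on the actual feature table). Here the principal directions are taken as the right singular vectors of the normalized training block:

```python
import numpy as np

rng = np.random.default_rng(1)
train = rng.normal(size=(20, 5))                     # stand-in: first 20 days, 5 features
full = np.vstack([train, rng.normal(size=(30, 5))])  # stand-in: all 50 days

# Normalization statistics and principal directions come from training rows only
mu, sd = train.mean(axis=0), train.std(axis=0, ddof=1)
z = (train - mu) / sd
_, _, vt = np.linalg.svd(z, full_matrices=False)
coef = vt.T                                          # columns are principal components

# ...and are then applied to the entire (normalized) dataset
pca1 = (full - mu) / sd @ coef[:, 0]
print(pca1.shape)  # (50,)
```

Keeping the statistics and directions fixed from training avoids leaking test-period information into the health indicator.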

Visualize the data in the space of the first two principal components.

```
figure
numData = size(featureTable, 1);
scatter(PCA1, PCA2, [], 1:numData, 'filled')
xlabel('PCA 1')
ylabel('PCA 2')
cbar = colorbar;
ylabel(cbar, ['Time (' timeUnit ')'])
```

The plot indicates that the first principal component increases as the machine approaches failure. Therefore, the first principal component is a promising fused health indicator.

healthIndicator = PCA1;

Visualize the health indicator.

```
figure
plot(featureSelected.Date, healthIndicator, '-o')
xlabel('Time')
title('Health Indicator')
```

The exponential degradation model is defined as

$$\mathit{h}\left(\mathit{t}\right)=\varphi +\theta \text{\hspace{0.17em}}\mathrm{exp}\left(\beta \text{\hspace{0.17em}}\mathit{t}+\epsilon -\frac{{\sigma}^{2}}{2}\right)$$

where $\mathit{h}\left(\mathit{t}\right)$ is the health indicator as a function of time. $\varphi$ is the intercept term, considered a constant. $\theta$ and $\beta$ are random parameters determining the slope of the model, where $\theta$ is lognormally distributed and $\beta$ is Gaussian-distributed. At each time step $\mathit{t}$, the distributions of $\theta$ and $\beta$ are updated to their posteriors based on the latest observation of $\mathit{h}\left(\mathit{t}\right)$. $\epsilon$ is Gaussian white noise following $\mathit{N}\left(0,{\sigma}^{2}\right)$. The $-\frac{{\sigma}^{2}}{2}$ term in the exponential makes the expectation of $\mathit{h}\left(\mathit{t}\right)$ satisfy

$\mathit{E}\left[\mathit{h}\left(\mathit{t}\right)|\theta \text{\hspace{0.17em}},\beta \text{\hspace{0.17em}}\right]\text{\hspace{0.17em}}=\text{\hspace{0.17em}}\varphi +\theta \text{\hspace{0.17em}}\mathrm{exp}\left(\beta \text{\hspace{0.17em}}\mathit{t}\right)$.
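The role of the $-\frac{{\sigma}^{2}}{2}$ term can be checked numerically: for zero-mean Gaussian $\epsilon$, $E[\exp(\epsilon)]=\exp({\sigma}^{2}/2)$, so the shift makes the noise factor average to 1. A quick Monte Carlo sketch (Python, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.3
eps = rng.normal(0.0, sigma, size=1_000_000)

# E[exp(eps)] = exp(sigma^2 / 2) for zero-mean Gaussian noise...
print(np.exp(eps).mean())                 # close to exp(0.045), about 1.046

# ...so the -sigma^2/2 shift makes the noise factor average to 1,
# giving E[h(t) | theta, beta] = phi + theta * exp(beta * t)
print(np.exp(eps - sigma**2 / 2).mean())  # close to 1.0
```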

Here an exponential degradation model is fit to the health indicator extracted in the last section, and its performance is evaluated in the next section.

First, shift the health indicator so that it starts from 0.

healthIndicator = healthIndicator - healthIndicator(1);

The selection of the threshold is usually based on the historical records of the machine or some domain-specific knowledge. Since no historical data is available for this dataset, the last value of the health indicator is chosen as the threshold. It is recommended to choose the threshold based on the smoothed (historical) data, so that the delay effect of smoothing is partially mitigated.

threshold = healthIndicator(end);

If historical data is available, use the `fit` method provided by `exponentialDegradationModel` to estimate the priors and intercept. However, no historical data is available for this wind turbine bearing dataset. The priors of the slope parameters are chosen arbitrarily with large variances ($\mathit{E}\left(\theta \right)=1,\text{\hspace{0.17em}}\mathrm{Var}\left(\theta \right)\text{\hspace{0.17em}}={10}^{6},\mathit{E}\left(\beta \right)=1,\mathrm{Var}\left(\beta \right)={10}^{6}$) so that the model relies mostly on the observed data. Based on $\mathit{E}\left[\mathit{h}\left(0\right)\right]=\varphi +\mathit{E}\left(\theta \right)$, the intercept $\varphi$ is set to $-1$ so that the model also starts from 0.

The relationship between the variation of health indicator and the variation of noise can be derived as

$$\Delta \mathit{h}\left(\mathit{t}\right)\approx \left(\mathit{h}\left(\mathit{t}\right)-\varphi \right)\Delta \epsilon \left(\mathit{t}\right)$$

Here the standard deviation of the noise is assumed to cause 10% of variation of the health indicator when it is near the threshold. Therefore, the standard deviation of the noise can be represented as $\frac{10\%\cdot \mathrm{threshold}}{\mathrm{threshold}-\varphi \text{\hspace{0.17em}}}$.
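This is the quantity that the `NoiseVariance` argument below encodes (squared). A one-line Python check, with hypothetical names and using this example's $\varphi =-1$:

```python
def noise_std(threshold, phi, fraction=0.1):
    """Noise standard deviation that causes `fraction` variation of the
    health indicator near the threshold: fraction * threshold / (threshold - phi)."""
    return fraction * threshold / (threshold - phi)

# With phi = -1 (as in this example), threshold - phi = threshold + 1,
# matching NoiseVariance = (0.1*threshold/(threshold + 1))^2
print(noise_std(threshold=4.0, phi=-1.0))  # 0.08
```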

The exponential degradation model also provides functionality to evaluate the significance of the slope. Once a significant slope of the health indicator is detected, the model forgets the previous observations and restarts the estimation based on the original priors. The sensitivity of the detection algorithm can be tuned by specifying `SlopeDetectionLevel`: if the p-value is less than `SlopeDetectionLevel`, the slope is declared detected. Here `SlopeDetectionLevel` is set to 0.05.

Now create an exponential degradation model with the parameters discussed above.

```
mdl = exponentialDegradationModel(...
    'Theta', 1, ...
    'ThetaVariance', 1e6, ...
    'Beta', 1, ...
    'BetaVariance', 1e6, ...
    'Phi', -1, ...
    'NoiseVariance', (0.1*threshold/(threshold + 1))^2, ...
    'SlopeDetectionLevel', 0.05);
```

Use the `predictRUL` and `update` methods to predict the RUL and update the parameter distribution in real time.

```
% Keep records at each iteration
totalDay = length(healthIndicator) - 1;
estRULs = zeros(totalDay, 1);
trueRULs = zeros(totalDay, 1);
CIRULs = zeros(totalDay, 2);
pdfRULs = cell(totalDay, 1);

% Create figures and axes for plot updating
figure
ax1 = subplot(2, 1, 1);
ax2 = subplot(2, 1, 2);

for currentDay = 1:totalDay
    % Update model parameter posterior distribution
    update(mdl, [currentDay healthIndicator(currentDay)])
    
    % Predict Remaining Useful Life
    [estRUL, CIRUL, pdfRUL] = predictRUL(mdl, ...
        [currentDay healthIndicator(currentDay)], ...
        threshold);
    trueRUL = totalDay - currentDay + 1;
    
    % Updating RUL distribution plot
    helperPlotTrend(ax1, currentDay, healthIndicator, mdl, threshold, timeUnit);
    helperPlotRUL(ax2, trueRUL, estRUL, CIRUL, pdfRUL, timeUnit)
    
    % Keep prediction results
    estRULs(currentDay) = estRUL;
    trueRULs(currentDay) = trueRUL;
    CIRULs(currentDay, :) = CIRUL;
    pdfRULs{currentDay} = pdfRUL;
    
    % Pause 0.1 seconds to make the animation visible
    pause(0.1)
end
```

Here is the animation of the real-time RUL estimation.

An $\alpha$-$\lambda$ plot is used for prognostic performance analysis [5], where the $\alpha$ bound is set to 20%. The probability that the estimated RUL lies within the $\alpha$ bounds of the true RUL is calculated as a performance metric of the model:

$$\mathrm{Pr}\left({\mathit{r}}^{*}\left(\mathit{t}\right)-\alpha {\mathit{r}}^{*}\left(\mathit{t}\right)<\mathit{r}\left(\mathit{t}\right)<{\mathit{r}}^{*}\left(\mathit{t}\right)+\alpha {\mathit{r}}^{*}\left(\mathit{t}\right)|\Theta \left(\mathit{t}\right)\right)$$

where $\mathit{r}\left(\mathit{t}\right)$ is the estimated RUL at time $\mathit{t}$, ${\mathit{r}}^{*}\left(\mathit{t}\right)$ is the true RUL at time $\mathit{t}$, and $\Theta \left(\mathit{t}\right)$ is the set of estimated model parameters at time $\mathit{t}$.
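For a discretized RUL pdf, the metric reduces to the probability mass between the bounds. A short Python sketch (illustrative only; the example computes this via the `helperAlphaLambdaPlot` supporting function):

```python
def prob_within_alpha(rul_values, pdf, true_rul, alpha=0.2):
    """Probability mass of the predicted RUL pdf inside the alpha bounds
    ((1 - alpha) * true_rul, (1 + alpha) * true_rul)."""
    lo, hi = (1 - alpha) * true_rul, (1 + alpha) * true_rul
    inside = sum(p for r, p in zip(rul_values, pdf) if lo < r < hi)
    return inside / sum(pdf)  # normalize in case the pdf is not unit mass

# Discretized pdf of an RUL estimate; true RUL is 10, so bounds are (8, 12)
rul_values = [8, 9, 10, 11, 12, 20]
pdf = [0.1, 0.2, 0.3, 0.2, 0.1, 0.1]
print(prob_within_alpha(rul_values, pdf, true_rul=10))  # ~0.7
```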

```
alpha = 0.2;
detectTime = mdl.SlopeDetectionInstant;
prob = helperAlphaLambdaPlot(alpha, trueRULs, estRULs, CIRULs, ...
    pdfRULs, detectTime, breakpoint, timeUnit);
title('\alpha-\lambda Plot')
```

Since the preset prior does not reflect the true prior, the model usually needs a few time steps to adjust to a proper parameter distribution. The prediction becomes more accurate as more data points become available.

Visualize the probability of the predicted RUL within the $\alpha \text{\hspace{0.17em}}$bound.

```
figure
t = 1:totalDay;
hold on
plot(t, prob)
plot([breakpoint breakpoint], [0 1], 'k-.')
hold off
xlabel(['Time (' timeUnit ')'])
ylabel('Probability')
legend('Probability of predicted RUL within \alpha bound', 'Train-Test Breakpoint')
title(['Probability within \alpha bound, \alpha = ' num2str(alpha*100) '%'])
```

[1] Bechhoefer, Eric, Brandon Van Hecke, and David He. "Processing for improved spectral analysis." *Annual Conference of the Prognostics and Health Management Society*, New Orleans, LA, Oct. 2013.

[2] Ali, Jaouher Ben, et al. "Online automatic diagnosis of wind turbine bearings progressive degradations under real experimental conditions based on unsupervised machine learning." *Applied Acoustics* 132 (2018): 167-181.

[3] Saidi, Lotfi, et al. "Wind turbine high-speed shaft bearings health prognosis through a spectral Kurtosis-derived indices and SVR." *Applied Acoustics* 120 (2017): 1-8.

[4] Coble, Jamie Baalis. "Merging data sources to predict remaining useful life–an automated method to identify prognostic parameters." (2010).

[5] Saxena, Abhinav, et al. "Metrics for offline evaluation of prognostic performance." *International Journal of Prognostics and Health Management* 1.1 (2010): 4-23.