# ENBIS Spring Meeting 2022


#### Grenoble

Bâtiment IMAG Université Grenoble Alpes 700 avenue Centrale Domaine Universitaire St Martin d'Hères

Welcome to the ENBIS Spring Meeting 2022 - May 19-20, 2022

The 2022 ENBIS Spring Meeting will be dedicated to

Degradation and Maintenance, Modelling and Analysis

Aim of the meeting

In the field of reliability studies, the proliferation of equipment control and monitoring systems means that degradation models are becoming increasingly prominent relative to lifetime models. The modelling of degradation processes, the statistical analysis of the corresponding data, and their use for the predictive maintenance of industrial systems are important and challenging issues. The aim of the 2022 ENBIS Spring Meeting is to bring together academic and industrial statisticians interested in theoretical developments and practical applications in this field.

The topics of the meeting include:

• Statistical analysis of degradation data

• Predictive maintenance

• Maintenance modelling and optimization

• Statistical reliability

• Prognostic and health management

• Software development for degradation and maintenance analysis

• Case studies in reliability analysis

Contact information

For any questions about the meeting venue and scientific programme, feel free to contact the 2022 ENBIS Spring Meeting organizer, Olivier Gaudoin: olivier.gaudoin@univ-grenoble-alpes.fr.

Programme committee

• Marcel Chevalier, Schneider Electric, Grenoble, France

• Christian Paroissin, Université de Pau et des Pays de l'Adour, France

• Gianpaolo Pulcini, Istituto Motori, Consiglio Nazionale delle Ricerche, Napoli, Italy

• Emmanuel Remy, EDF R&D, Chatou, France

Organizing committee

• Olivier Gaudoin, Grenoble INP, Université Grenoble Alpes (chair)

• Christophe Bérenguer, Grenoble INP, Université Grenoble Alpes

• Franck Corset, Université Grenoble Alpes

• Laurent Doyen, Université Grenoble Alpes

• Rémy Drouilhet, Université Grenoble Alpes

ENBIS Spring Meeting 2022 Highlights

Plenary speakers

• Massimiliano Giorgio (Università degli Studi di Napoli Federico II, Italy): About some extensions of the gamma process and their applications in reliability and maintenance

• David Coit (Rutgers University, USA): System reliability modeling with dependent degradation paths: a review of models including clustering and machine learning

The meeting will also include a number of contributed paper sessions, with a particular focus on industrial applications of reliability.

A special issue of the Wiley journal Applied Stochastic Models in Business and Industry will be published on the topics of the meeting. Call for papers.

Registration fees

|  | Early bird registration fee (until April 29th, 2022) | Normal registration fee (from April 30th, 2022) |
|---|---|---|
| Regular | 180 € | 220 € |
| Student | 90 € | 90 € |

Conference registration fees include

• Conference materials

• Admission to the banquet on May 19th

• Thursday, 19 May
• 08:30 09:00
Registration 30m
• 09:00 09:15
Welcome and Introduction
• 09:15 10:00
Invited speaker: Massimiliano Giorgio
• 10:00 10:30
Coffee break 30m
• 10:30 11:50
• 10:30
On the modelling of dependence between univariate Lévy wear processes and impact on the reliability function 20m

Univariate Lévy processes have become quite common in the reliability literature for modelling accumulative deterioration. In case of correlated deterioration indicators, several possibilities have been suggested for modelling their dependence. The point of this study is the analysis and comparison of three different dependence models considered in the most recent literature:

1. Use of a regular copula, where the dependence in a multivariate increment is modelled through a time-independent regular copula;
2. Superposition of independent univariate Lévy processes, where each marginal process is constructed as the sum of independent univariate Lévy processes $\{X_j(t), t\geq 0\}$, some of which may be common between margins;
3. Use of a Lévy copula.

The three methods are first presented and analysed. As for the model based on a regular copula, it is shown that the corresponding multivariate process cannot have independent increments in general, so that it is not a Lévy process. This means that the distribution of the multivariate process is not fully characterized in this way. The second and third models both lead to a multivariate Lévy process, with a limited dependence range for the second, superposition-based model, which is not the case for the third, Lévy copula-based model. However, this last model is more technically demanding, and numerical methods (such as Monte Carlo simulation) have to be used for its numerical assessment. Practical details are given in the paper and two Monte Carlo simulation procedures are compared.

A two-component series system is next considered, with joint deterioration level modelled by one of the three previous models. Each component is considered as failed as soon as its deterioration level exceeds a given failure threshold. The impact of a wrong model choice is explored, based on data simulated from one of the three models and then fitted to all three. It is shown that a wrong choice of model can lead to either overestimating or underestimating the reliability function of the two-component series system, which could be problematic in an application context.
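The superposition-based construction (model 2) and the Monte Carlo estimation of the series-system reliability can be sketched numerically. This is a minimal illustration, not the authors' study: the shape rates, thresholds, and shared-component split below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_increments(a, dt, n_steps, n_paths):
    # Increments of a stationary gamma process with shape rate a, unit scale:
    # X(t + dt) - X(t) ~ Gamma(a * dt, 1)
    return rng.gamma(a * dt, 1.0, size=(n_paths, n_steps))

# Superposition-based dependence (model 2): each margin is the sum of its own
# gamma process and a gamma process shared by both margins, so each margin is
# still a gamma process and the pair is a bivariate Levy process.
dt, n_steps, n_paths = 0.1, 100, 20000
shared = gamma_increments(0.5, dt, n_steps, n_paths)
x1 = np.cumsum(gamma_increments(0.5, dt, n_steps, n_paths) + shared, axis=1)
x2 = np.cumsum(gamma_increments(1.0, dt, n_steps, n_paths) + shared, axis=1)

# Monte Carlo reliability of a two-component series system: both deterioration
# levels must stay below their failure thresholds.
c1, c2 = 8.0, 12.0
reliability = np.mean((x1 < c1) & (x2 < c2), axis=0)  # R(t) on the time grid
```

Because the shared component contributes to both margins, the dependence range of this construction is limited, as the abstract notes.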

Speaker: Sophie MERCIER (University of Pau and Pays de l'Adour)
• 10:50
Stochastic Drift Model for Discrete Parameters 20m

In the context of semiconductor reliability, predictive maintenance and calculation of residual useful life are important topics under the greater umbrella of prognostics and health management.
Especially in automotive applications, with higher expected usage times of self-driving autonomous vehicles, it becomes more and more important to recognize degradation processes early, so that preventive maintenance actions can be taken automatically. For semiconductor producers, it is important to account for lifetime degradation of electronic devices when guaranteeing quality standards for their customers.
For this, accurate and fast statistical models are needed to identify degradation by parameter drift. Typically, electrical parameters have specified limits in which they need to stay over their whole life cycle.
Efficient lifetime simulations are performed by so-called accelerated stress tests. In those tests, electrical parameters are measured before, during, and after higher-than-usual stress conditions. These stress test data represent the expected lifetime behavior of these parameters.
Using models based on these data, tighter limits, so called test limits, are then introduced at production testing to guarantee life-time quality of the devices for the customer.
Based on this data, quality control measures like guard bands are introduced. Guard bands are the differences between specification and test limits and account, amongst others, for lifetime drift effects of electrical parameters.
Models to calculate lifetime drift have to be flexible enough to accurately represent a large number of stress test behaviors while being computationally lightweight enough to run on edge devices in the vehicles.
We present a statistical model for discrete parameters based on nonparametric interval estimation of conditional transition probabilities in Markov chains that allows for flexible modelling and fast computation. We then show how to use the model to formulate an integer optimization problem to calculate optimal test limits. Calculation for both arbitrary parameter distributions at production testing as well as defined initial distributions are shown. Finally, we give an approach to calculate remaining useful lifetime for electronic components.
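The authors' model itself is not spelled out in the abstract; as a toy illustration of nonparametric estimation of conditional transition probabilities in a Markov chain over discretized parameter levels, one might sketch the following (the chain, the number of levels, and the normal-approximation intervals are assumptions, not the paper's method):

```python
import numpy as np

def estimate_transitions(states, n_levels, z=1.96):
    """Nonparametric MLE of Markov transition probabilities from one observed
    chain, with a simple normal-approximation interval per entry."""
    counts = np.zeros((n_levels, n_levels))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1.0
    row_tot = counts.sum(axis=1, keepdims=True)
    safe = row_tot > 0
    p_hat = np.divide(counts, row_tot, out=np.zeros_like(counts), where=safe)
    se = np.sqrt(np.divide(p_hat * (1 - p_hat), row_tot,
                           out=np.zeros_like(counts), where=safe))
    return p_hat, np.clip(p_hat - z * se, 0, 1), np.clip(p_hat + z * se, 0, 1)

# Hypothetical discretized parameter drifting upward over stress-test readouts
chain = [0, 0, 1, 1, 1, 2, 2, 1, 2, 2, 2]
p_hat, lower, upper = estimate_transitions(chain, n_levels=3)
```

Interval estimates of the transition matrix, rather than point estimates alone, are what make the worst-case drift over a lifetime horizon controllable.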
The work has been performed in the project ArchitectECA2030 under grant agreement No 877539. The project is co-funded by grants from Germany, the Netherlands, the Czech Republic, Austria, Norway and the Electronic Component Systems for European Leadership Joint Undertaking (ECSEL JU).
All ArchitectECA2030 related communication reflects only the author’s view and ECSEL JU and the Commission are not responsible for any use that may be made of the information it contains.

Speakers: Lukas Sommeregger (Infineon Technologies Austria AG) , Dr Horst Lewitschnig (Infineon Technologies Austria AG)
• 11:10
Fuel cell stochastic deterioration modeling for energy management in a multi-stack system 20m

Fuel cells use hydrogen and oxygen as reactants to produce electricity through electrochemical reactions, with water as the only byproduct. They are widely used in various applications, e.g. transport, due to their high efficiency, energy density, and limited impact on environmental resources. However, fuel cell deployment is held back by multiple barriers, such as their high cost or their shorter-than-required lifetime. To overcome these barriers, using multi-stack fuel cells (MFC) instead of a single one is a promising solution. Firstly, MFC offer improved reliability thanks to the multi-stack structure. Another advantage is that the durability of a multi-stack FC can also be increased by optimally distributing the power demand among different stacks through an efficient Energy Management Strategy (EMS), thus avoiding degraded-mode operation [1]. In short, MFC systems are relevant to meet this challenge if properly dimensioned and managed by an appropriate EMS taking into account the deterioration of the cells. In order to implement such a degradation-aware EMS, it is mandatory to build a degradation model that integrates the dynamic behavior of MFC according to the operating conditions. Fuel cell performance degradation is linked to complex electrochemical, mechanical, and thermal mechanisms, which are difficult to model using a “white-box” approach relying on the exact laws of physics. Within this context, the aim of the present work is to propose a fuel cell degradation model adapted for the energy management of MFC.
The deterioration behavior of an MFC is characterized by two main features: (i) it is load-dependent, i.e. the degradation is affected by the load delivered by the stack; (ii) it is stochastic and exhibits a stack-to-stack variability. A degradation-aware energy management system allocates a load to the different stacks of the MFC system as a function of their degradation state and of their predicted degradation behavior. The deterioration dynamics must thus be modeled as a function of the load power. Another specificity of fuel cells is their individual deterioration variability, which can be due to stochasticity in the intrinsic fuel cell deterioration phenomena. This stochasticity leads to different deterioration levels even for identical stacks operating under identical load profiles.
In order to meet these modelling requirements, this work develops a load-dependent stochastic deterioration model for an MFC. First, the overall stack resistance is chosen as the degradation indicator, as it carries the key aging information of a fuel cell stack. Then, a stochastic non-homogeneous Gamma process is used to model the deterioration of the fuel cell, i.e. the increase in the fuel cell resistance. The shape parameter of the considered Gamma process is further modeled by an empirical function of the fuel cell operation load in order to make the resistance deterioration load-dependent. Finally, to model the individual deterioration heterogeneity, a random effect is added to the Gamma process on its scale parameter, taken as a random variable following a probability distribution (a Gamma law is chosen in this work).
Resistance degradation paths can then be simulated based on the proposed deterioration model, based on which the first hitting time distribution of a failure threshold (or equivalently a remaining useful life distribution) can be estimated and the reliability of the system can be analyzed. The proposed model can also be used to optimize the load allocation strategy for an MFC [2].
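The construction described above can be sketched numerically. This is a rough illustration only: the shape-vs-load law, the random-effect distribution, and all parameter values below are invented, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_resistance_paths(load, t_grid, n_paths):
    """Gamma deterioration of stack resistance with a load-dependent shape
    rate and a gamma-distributed random scale (stack-to-stack heterogeneity)."""
    a = 0.5 + 0.1 * load                         # hypothetical shape-vs-load law
    scales = rng.gamma(4.0, 0.25, size=n_paths)  # random effect, E[scale] = 1
    dt = np.diff(t_grid)
    incs = rng.gamma(a * dt, 1.0, size=(n_paths, dt.size)) * scales[:, None]
    x0 = np.zeros((n_paths, 1))
    return np.concatenate([x0, np.cumsum(incs, axis=1)], axis=1)

t = np.linspace(0.0, 50.0, 501)
paths = simulate_resistance_paths(load=5.0, t_grid=t, n_paths=5000)

# Empirical first-hitting-time distribution of a failure threshold, from which
# a remaining-useful-life distribution can be read off.
threshold = 40.0
hit = paths >= threshold
first_idx = np.where(hit.any(axis=1), hit.argmax(axis=1), t.size - 1)
hitting_times = t[first_idx]
```

Simulating many such paths per candidate load allocation is one way an EMS could compare allocation strategies against predicted stack reliability.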

Keywords: Multi-stack fuel cells, load-dependent deterioration model, stochastic modelling, Gamma process, random effect.

References:
[1] Marx, Neigel, et al. "On the sizing and energy management of an hybrid multistack fuel cell–battery system for automotive applications." International Journal of Hydrogen Energy 42.2 (2017): 1518-1526.
[2] Zuo, J., C. Cadet, Z. Li, C. Bérenguer, and R. Outbib (2022). Post-prognostics decision-making strategy for load allocation on a stochastically deteriorating multi-stack fuel cell system. To appear in Proc. Inst. Mech. Eng - Part O: Journal of Risk and Reliability.

Speaker: Jian Zuo (PhD student)
• 11:30
Change-level detection for Lévy subordinators 20m

Let $\boldsymbol{X}=(X_t)_{t\ge 0}$ be a process behaving as a general increasing Lévy process (subordinator) prior to hitting a given unknown level $m_0$, then behaving as another different subordinator once this threshold is crossed. We address the detection of this unknown threshold $m_0\in [0,+\infty]$ from an observed trajectory of the process. This kind of model and issue is encountered in many areas, such as reliability and quality control, in degradation problems. More precisely, we construct, from a sample path and for each $\epsilon >0$, a so-called detection level $L_\epsilon$ by considering a CUSUM-inspired procedure. Under mild assumptions, this level is such that, when $m_0$ is infinite (i.e. when no change occurs), its expectation $\mathbb{E}_{\infty}(L_{\epsilon})$ tends to $+\infty$ as $\epsilon$ tends to $0$, and the expected overshoot $\mathbb{E}_{m_0}([L_{\epsilon} - m_0]^+)$, when the threshold $m_0$ is finite, is negligible compared to $\mathbb{E}_{\infty}(L_{\epsilon})$ as $\epsilon$ tends to $0$. Numerical illustrations are provided when the Lévy processes are gamma processes with different shape parameters. This is joint work with Z. Al Masry and G. Verdier.
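A toy version of such a procedure can be sketched as follows; the regular observation grid, the shape rates, and the alarm threshold are illustrative assumptions, and the talk's $L_\epsilon$ is constructed more carefully than this.

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

rng = np.random.default_rng(2)

# Gamma subordinator observed on a regular grid, whose shape rate switches
# from a0 to a1 once the path crosses the unknown level m0.
dt, a0, a1, m0 = 0.1, 1.0, 3.0, 10.0
x, incs = 0.0, []
while x < 30.0:
    step = rng.gamma((a0 if x < m0 else a1) * dt, 1.0)
    incs.append(step)
    x += step
incs = np.array(incs)
path = np.cumsum(incs)

# CUSUM on the increments' log-likelihood ratio; the detection level is the
# process value at the alarm time, in the spirit of the talk's L_epsilon.
llr = gamma_dist.logpdf(incs, a1 * dt) - gamma_dist.logpdf(incs, a0 * dt)
h = 5.0                      # alarm threshold, playing the role of log(1/eps)
s, s_min, alarm = 0.0, 0.0, None
for i, v in enumerate(llr):
    s += v
    s_min = min(s_min, s)
    if s - s_min > h:        # CUSUM statistic exceeds the threshold
        alarm = i
        break
detection_level = path[alarm] if alarm is not None else None
```

Raising `h` (i.e. lowering $\epsilon$) trades later detection for fewer false alarms when no change ever occurs.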

Speaker: Landy Rabehasaina (Laboratoire de Mathématiques, Université Franche Comté)
• 12:00 13:50
Lunch 1h 50m
• 14:00 15:20
Maintenance optimization
• 14:00
Maintenance policies for items with their repair processes modelled by the extended Poisson processes 20m

Optimisation of maintenance policies for items with their repair processes modelled by the geometric process (GP) has received a good amount of attention. The extended Poisson process (EPP), one of the extensions of the GP, can be used to model a repair process whose times between failures possess a non-monotonic trend. A central issue in the application of EPPs to maintenance policy optimisation is to determine when the EPP has increasing times between failures. This paper aims to answer this question. Numerical examples are provided to illustrate the proposed maintenance policies.

Speaker: Jiaqi Yin (University of Kent)
• 14:20
A condition-based maintenance policy in a system with heterogeneities 20m

Models that describe the deterioration processes of components are key to determining the lifetime of a system and play a fundamental role in predicting system reliability and planning system maintenance. In most systems there is heterogeneity among the degradation paths of the units. This variability is usually introduced into the model through random effects, that is, by considering random coefficients in the model.
A degrading system subject to multiple degradation processes whose initiation times follow a shot-noise Cox process is studied. The growth of these processes is modeled by a homogeneous gamma process. A condition-based maintenance policy with periodic inspections is applied to reduce the impact of failures and optimise the total expected maintenance cost. The heterogeneities between components are included in the model by considering that the scale parameter of the gamma process follows a uniform distribution. Numerical examples of this maintenance policy are given, comparing both models, with and without heterogeneities.

• 14:40
Data-driven Maintenance Optimization Using Random Forest Algorithms 20m

In this paper, a multi-component series system is considered which is periodically inspected; at inspection times, failed components are replaced by new ones. This maintenance action is therefore perfect corrective maintenance for the failed component, and can be considered as imperfect corrective maintenance for the system. The inspection interval is considered as a decision parameter and the maintenance policy is optimized using the long-run cost rate function. It is assumed that there is no information on the components' lifetime distributions and their parameters. Therefore, an optimal decision parameter is derived from historical data (a data store for the system that includes information on past repairs) using density estimation and random forest algorithms. Eventually, the efficiency of the proposed optimal decision parameter based on the available data is compared to the one derived when all information on the system is available.
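The data-driven idea can be illustrated with a hypothetical sketch: regress observed cost rates on tried inspection intervals with a random forest and minimize the fitted curve. The cost function and data below are simulated stand-ins, not the paper's setting.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Hypothetical maintenance history: for each tried inspection interval tau,
# an observed long-run cost rate (failure risk grows with tau, inspection
# cost per unit time shrinks); the true optimum here is tau* = 2.5.
tau = rng.uniform(0.5, 10.0, size=400)
cost_rate = 5.0 / tau + 0.8 * tau + rng.normal(0.0, 0.3, size=tau.size)

forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(tau.reshape(-1, 1), cost_rate)

# Data-driven optimal inspection interval: minimizer of the fitted cost rate
grid = np.linspace(0.5, 10.0, 200).reshape(-1, 1)
tau_opt = float(grid[np.argmin(forest.predict(grid)), 0])
```

The appeal of the nonparametric regressor is that no lifetime distribution for the components needs to be assumed, matching the paper's premise.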

Keywords: Maintenance optimization, data-driven estimation, random forest algorithm.

Speaker: Hasan Misaii (University of Tehran and University of Technology of Troyes)
• 15:00
Reliability degradation and optimal maintenance for information equipment installed on railway cars 20m

A reliability degradation model was developed in a project involving LCD TV screens installed on railway cars. One critical item in the project was an LED strip.
Accelerated life test data are the data source for the determination of LED reliability. The expected life of LEDs being about 10-15 years, testing them to failure is not practical.
Luminosity-vs-time measurements provided by the manufacturer were fitted to a degradation model. As opposed to [1] and [2], where an exponential degradation model for the average luminosity was applied, our degradation model assumed degradation equations based on the second law of thermodynamics, developed by A. Einstein (1905), Fokker (1919), Planck (1930) and Kolmogorov (1931). The main difference in the approaches is that the degradation function's Taylor expansion contains terms of the first and second derivative, while the models of [1] and [2] contain only the first.
As opposed to [2], we did not fit the results to an assumed reliability function (Weibull, normal, lognormal) and left them in tabular form. The table allows the required reliability and maintenance information to be determined.

The following results are derived:
1. PDF of the failure rate as a function of time
2. Reliability as a function of time and temperature (for simple and complex components)
3. MTBF for a device used for limited and unlimited life.
A model developed for maintenance costs and spare parts provisioning allows the development of an optimal preventive maintenance policy:
1. Optimal preventive maintenance without individual monitoring.
2. Optimal preventive maintenance based on remaining useful life (with monitoring).

References

1. Ott, Melanie. "Capabilities and Reliability of LEDs and Laser Diodes." Internal NASA Parts and Packaging Publication (1996).
2. Fan, J., Yung, K.C., Pecht, M. "Lifetime Estimation of High-Power White LED Using Degradation-Data-Driven Method." IEEE Transactions on Device and Materials Reliability, Vol. 12, No. 2, June 2012.
3. Si, Xiao-Sheng, et al. "Remaining useful life estimation based on a nonlinear diffusion degradation process." IEEE Transactions on Reliability 61.1 (2012): 50-67.
4. Livni, Haim. "Life cycle maintenance costs for a non-exponential component." Applied Mathematical Modelling 103 (2022): 261-286.
Speaker: Mr Haim Livni (Oslo Reliability)
• 15:20 15:50
Coffee break 30m
• 15:50 17:10
Case studies
• 15:50
Development of an Operational Digital Twin of Locomotive Systems 20m

A Digital Twin (DT) is a new and powerful concept that maps a physical structure operating in a specific context to the digital space. The development and deployment of a DT improves forecasting, prognostic performance, and decision support for operators and managers. DTs have been introduced in various industries across a range of application areas, including design, manufacturing and maintenance. Due to the large impact of maintenance on the proper functioning of a system, maintenance is one of the most studied DT applications. In the case of trains, poor maintenance can put the rolling stock out of service or, worse, pose a safety risk to passengers and operators. Implementing intelligent maintenance strategies can therefore offer tremendous benefits. This study addresses the development of an architecture for DTs designed to formulate and evaluate new hypotheses in predictive maintenance by iterating between physical experiments and computational experiments. The designed DT supports a broad perspective on statistical aspects of simulations and experiments. In addition, the DT enables real-time prediction and optimization of the actual behavior of a system at any stage of its life cycle. Examples of safety valves and suspension systems will be given.

Speaker: Mr Gabriel Davidyan (Israel railway)
• 16:10
The long road from data collection to maintenance optimization of industrial equipment 20m

In the coming years, with the development of intermittent renewable energy sources and the gradual phasing out of coal-fired power plants, combined cycle gas turbines (CCGT) will play an essential role in regulating electricity production. This need for flexibility will increase the demands on the equipment of these CCGT and the issue of optimizing their maintenance will become increasingly important. To this end, the use of statistical tools to enhance the value of data from the operation and maintenance of these plants' equipment is a possible approach to provide decision support elements.
It is in this industrial context that an important collection of data was carried out for several conventional repairable pieces of equipment (turbines, pumps...) of three EDF CCGT plants. The second step consisted of pre-processing / cleaning these raw data with the support of field experts, an essential requirement for the statistical modeling stage. A wide range of imperfect maintenance models implemented in the free R package VAM (Virtual Age Models, https://rpackages.imag.fr/VAM#) was tested to evaluate the ability of these models, on the one hand, to reproduce the field reality and, on the other hand, to bring useful insights to support the development of equipment maintenance plans.
The communication will present this work, illustrating it on a piece of equipment and emphasizing its industrial application dimension.

Speaker: Emmanuel REMY (EDF R&D)
• 16:30
Statistical process control versus deep learning for predictive maintenance of power plant process data 20m

# Abstract

This work is motivated by the non-documented, practical learnings gained by a predictive maintenance (PdM) development team in the Danish energy company Ørsted. The team implements PdM solutions for power plant machinery to monitor for faults in the making. Their learnings support the hypothesis that there are not significant enough benefits to be gained from using overly complicated condition monitoring models on selected machinery.

To explore this hypothesis, we set out to compare two different methodologies for detecting faults in process data. We compare a classical latent structure-based method from the field of statistical process control (SPC) with a standard autoencoder deep neural network. Furthermore, we compare the fault detection performance of these methods with two more experimental deep learning models recently proposed in the literature [1]. The reason for choosing these specific models is that they a priori seem very well suited to the modelling task the PdM team had undertaken, due to the models’ alleged ability to automate domain knowledge in a data-driven way.

We benchmark all methods against each other using first the well-known Tennessee Eastman Process (TEP) data, and subsequently data collected from two feedwater pumps (FWP) at a large Danish combined heat and power plant.

The TEP data stems from a simulation tool for generating data from the process, and thus a large number of datasets are generated for each of 20 process disturbances. For the FWP data, six historical faults in the form of leaks are used to test the methods against each other in their ability to detect faults as they develop over time. Each method's ability to detect faults is measured using a weighted combination of performance metrics such as mean absolute error, ROC AUC and average precision (AP).

Preliminary results of the experiments suggest that detection performance is comparable between the different models on both datasets, but that each model seems to come with its own set of advantages in terms of fault detection performance, such as reaction time to certain types of faults.

Based on the mentioned datasets and models, we discuss the quantitative results of these experiments, as well as other pros and cons of each paradigm, such as the number of modelling decisions and hyperparameters, that may influence the choice of detection model in an industrial setting.
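The classical latent-structure side of this comparison can be illustrated with a PCA residual (SPE/Q statistic) monitor. The data, the fault, and the empirical control limit below are synthetic stand-ins, not the TEP or FWP data used in the talk.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "normal operation" data living near a 2-D plane in 3-D space
loadings = np.array([[1.0, 0.8, 0.3],
                     [0.0, 0.5, 0.9]])
normal = rng.standard_normal((500, 2)) @ loadings
normal += 0.05 * rng.standard_normal(normal.shape)

# Latent-structure model of normal operation: keep the 2 dominant directions
center = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - center, full_matrices=False)
P = vt[:2].T

def spe(x):
    # Squared prediction error (Q statistic): residual outside the PCA plane
    r = (x - center) - (x - center) @ P @ P.T
    return np.sum(r ** 2, axis=1)

limit = np.quantile(spe(normal), 0.99)            # empirical 99% control limit
fault = normal[:100] + np.array([0.0, 0.0, 1.0])  # offset leaving the plane
alarm_rate = float(np.mean(spe(fault) > limit))
```

In contrast to an autoencoder, this monitor has essentially two modelling decisions: the number of retained components and the control limit.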

# References

1. Schulze, J.-P., Sperl, P., Böttinger, K. 2022. “Anomaly Detection by Recombining Gated Unsupervised Experts.” arXiv preprint: https://doi.org/10.48550/arXiv.2008.13763
Speaker: Mr Henrik Hansen (DTU & Ørsted (Industrial PhD))
• 16:50
Battery degradation model for mission assignment in a fleet of electric vehicles 20m

[1] Barré, Anthony, Deguilhem, Benjamin, Grolleau, Sébastien, Gérard, Mathias, Suard, Frédéric, Riu, Delphine. "A review on lithium-ion battery ageing mechanisms and estimations for automotive applications." Journal of Power Sources, Volume 241 (2013), Pages 680-689.
[2] Saxena, Saurabh, Darius Roman, Valentin Robu, David Flynn, and Michael Pecht. 2021. "Battery Stress Factor Ranking for Accelerated Degradation Test Planning Using Machine Learning." Energies 14, no. 3: 723.
[3] Jiaming Fan et al. "A novel machine learning method based approach for Li-ion battery prognostic and health management." IEEE Access 7 (2019), pp. 160043-160061.
[4] Xu, B., Oudalov, A., Ulbig, A., Andersson, G., and Kirschen, D.S. (2018). "Modeling of lithium-ion battery degradation for cell life assessment." IEEE Transactions on Smart Grid, 9(2), 1131-1140. doi:10.1109/TSG.2016.2578950

Speaker: Pedro Dias Longhitano (Volvo Group)
• 19:30 22:00
Gala dinner: L'Epicurien 2h 30m
• Friday, 20 May
• 09:00 09:45
Invited speaker: David Coit
• 09:45 10:10
Coffee break 25m
• 10:10 11:50
• 10:10
Quantile Regression via Accelerated Destructive Degradation Modeling for Reliability Estimation 20m

With the shortening of production periods, the manufacturing industry uses accelerated degradation tests (ADT) to estimate the reliability of newly developed products as quickly as possible. In an ADT, a stress factor related to the failure mechanisms is imposed to cause products to fail faster than under normal use conditions. By increasing the degree of stress, such as voltage, temperature, humidity or other external factors, the performance of new products continuously degrades and leads to failure. In some applications, an accelerated destructive degradation test (ADDT) is conducted when testing units must be destroyed to measure the performance of the degrading product.
For general ADT and ADDT models, the mean estimator has been considered as the location measure. However, estimation based on the mean can be inappropriate for highly skewed data, because the lifetime estimation from degradation data with outliers or irregularities can be distorted.
In this paper, ADDT modeling based on quantile regression (QR) is suggested as a comprehensive approach for asymmetric observations. QR-based ADDT requires fewer assumptions than general parametric methods to construct the model, and enables more flexible interpretation. Through a data application, our approach provides a clear advantage by inferring a nonlinear degradation path without bias.
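The core mechanism, fitting a conditional quantile of the degradation path by minimizing the pinball (check) loss, can be sketched on simulated skewed ADDT-style data. The linear path and lognormal scatter below are assumptions for illustration, not the paper's model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Destructive measurements: each unit measured once; scatter is right-skewed
t = np.repeat([10.0, 20.0, 30.0, 40.0, 50.0], 40)
y = 0.2 * t + rng.lognormal(mean=0.0, sigma=0.6, size=t.size)

def pinball_loss(beta, tau):
    # Check (pinball) loss whose minimizer is the tau-th conditional quantile
    r = y - (beta[0] + beta[1] * t)
    return np.mean(np.where(r >= 0, tau * r, (tau - 1.0) * r))

# Median (tau = 0.5) regression line: robust to the skewed scatter, unlike a
# least-squares mean fit
fit = minimize(pinball_loss, x0=[0.0, 0.1], args=(0.5,), method="Nelder-Mead")
intercept, slope = fit.x
```

Refitting at other values of `tau` (e.g. 0.1 or 0.9) traces out the spread of the degradation population rather than just its center.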

Speakers: Ms Munwon Lim (Hanyang University) , Ms Gyu Ri Kim (Department of Industrial Engineering, Hanyang University)
• 10:30
Imperfect condition-based maintenance for a gamma degradation process in presence of unknown parameters 20m

A system subject to degradation is considered. The degradation is modelled by a gamma process. A condition-based maintenance policy with perfect corrective and imperfect preventive actions is proposed. The maintenance cost is derived considering a Markov renewal process. Statistical inference of the degradation and maintenance parameters by the maximum likelihood method is proposed. A sensitivity analysis with respect to the different parameters is carried out and perspectives are detailed.

Speaker: Franck Corset (LJK - Université Grenoble Alpes)
• 10:50
Statistical inference for a Wiener-based degradation model with imperfect maintenance actions under different observation schemes 20m

In this article, technological or industrial equipment subject to degradation is considered. These units undergo maintenance actions, which reduce their degradation level.
The paper considers a degradation model with imperfect maintenance effect. The underlying degradation process is a Wiener process with drift. The maintenance effects are described with an Arithmetic Reduction of Degradation ($ARD_1$) model. The system is regularly inspected and the degradation levels are measured.

Four different observation schemes are considered, so that degradation levels can be observed between maintenance actions as well as just before or just after maintenance times. In each scheme, observations of the degradation level between successive maintenance actions are made. In the first observation scheme, degradation levels just before and just after each maintenance action are observed. In the second scheme, degradation levels are not observed just after each maintenance action, but are observed just before. Conversely, in the third scheme, degradation levels are not observed just before the maintenance actions, but are observed just after. Finally, in the fourth observation scheme, the degradation levels are observed neither just before nor just after the maintenance actions.

The paper studies the estimation of the model parameters under these different observation schemes. The maximum likelihood estimators are derived for each scheme.
Several situations are studied in order to assess the impact of different features on the estimation quality.
Among them, the number of observations between successive maintenance actions, the number of maintenance actions, the maintenance efficiency parameter and the location of the observations are considered. These situations are used to assess the estimation quality and compare the observation schemes through an extensive simulation and performance study.
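One maintenance-effect mechanism consistent with the $ARD_1$ idea can be sketched as a simulation; the parameters, the periodic maintenance schedule, and the exact reduction rule below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate_ard1_wiener(mu, sigma, rho, steps_per_period, n_periods, dt):
    """Wiener degradation with drift; each maintenance removes a fraction rho
    of the degradation accumulated since the previous maintenance (an
    ARD_1-style imperfect effect)."""
    x, last_post, levels = 0.0, 0.0, []
    for _ in range(n_periods):
        for _ in range(steps_per_period):
            x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            levels.append(x)          # levels observed between maintenances
        x -= rho * (x - last_post)    # imperfect maintenance action
        last_post = x
        levels.append(x)              # level observed just after maintenance
    return np.array(levels)

levels = simulate_ard1_wiener(mu=1.0, sigma=0.3, rho=0.6,
                              steps_per_period=20, n_periods=10, dt=0.1)
```

Dropping the observations just before or just after the maintenance instants from `levels` reproduces the four observation schemes whose likelihoods the paper compares.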

Speaker: Margaux Leroy (Univ. Grenoble Alpes)
• 11:10
A goodness-of-fit test for homogeneous gamma process under a general sampling scheme 20m

Degradation models are more and more studied and used in practice. Most of these models are based on Lévy processes. For such models, estimation methods have been proposed. These models are also considered for developing complex and efficient maintenance policies. However, a main issue remains: goodness-of-fit (GoF) testing for these models. In this talk, we propose a GoF test for the homogeneous gamma process under a general sampling scheme.

Speaker: Christian Paroissin (Université de Pau et des Pays de l'Adour)
• 11:30
Degradation Model Selection Using Depth Functions 20m

Degradation modeling is an effective way to perform reliability analysis of complex systems. For highly reliable systems, whose failures are hard to observe, degradation measurements often provide more information than failure times to improve system reliability (Meeker and Escobar 2014). The degradation can be viewed as damage to a system that accumulates over time and eventually leads to failure when the accumulation reaches a failure threshold, either random or stipulated by industrial standards (Ye and Xie 2015). Two large classes of degradation models are stochastic processes and general path models. The stochastic-process-based models show great flexibility in describing the failure mechanisms caused by degradation (Lehmann 2009).

The aim of degradation modeling in the presence of degradation data is to select, from a set of competing models, the one that captures the features of the underlying degradation phenomenon. An efficient statistical tool should be able to discard irrelevant models. The concept of statistical depth can be employed as such a tool for model selection. A depth function reflects the centrality of an observation with respect to a statistical population (Staerman et al. 2020).

Tukey (1975) introduced data depth to extend the notion of a median to multivariate random variables. Depth functions were extended by Fraiman and Muniz (2001) and Cuevas et al. (2006, 2007) to functional data, i.e., data recorded densely over time with one observed function per subject (Hall et al. 2006). An alternative point of view, based on the graphical representation of curves, was proposed by López-Pintado and Romo (2009).
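To make the notion concrete, here is a minimal sketch of Tukey's univariate depth together with a Fraiman–Muniz-style integrated depth for curves evaluated on a common grid. The function names and the discretized average are our own illustrative choices, not the methodology of the paper.

```python
def tukey_depth_1d(x, sample):
    """Tukey (halfspace) depth in one dimension: the smaller of the fractions
    of sample points on either side of x; it is maximized at the median."""
    below = sum(1 for s in sample if s <= x)
    above = sum(1 for s in sample if s >= x)
    return min(below, above) / len(sample)

def integrated_depth(curve, curves):
    """Fraiman-Muniz-style integrated depth for curves on a common grid:
    the pointwise univariate depths averaged over the grid."""
    grid = range(len(curve))
    return sum(tukey_depth_1d(curve[t], [c[t] for c in curves])
               for t in grid) / len(curve)
```

A curve running through the middle of a sample of curves receives a higher integrated depth than one on the boundary, which is the property exploited when depth is used to rank competing fitted models.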

In this paper, stochastic processes such as Lévy processes or stochastic differential equations are considered to model the degradation. After model calibration in the presence of data, models showing high values of the depth function are compared, and a methodology to exploit and analyze the depth-function results is proposed.

Speaker: Ms Arefe Asadi (University of Technology of Troyes)
• 12:00 14:00
Lunch 2h
• 14:00 15:20
Reliability models
• 14:00
Modeling Spatially Clustered Failure Time Data via Multivariate Gaussian Random Fields 20m

Consider a fixed number of clustered areas, identified by their geographical coordinates, that are monitored for the occurrence of events such as pandemics, epidemics, or migration, to name a few. Data collected on units in all areas include time-varying covariates and environmental factors. The collected data are considered pairwise to account for spatial correlation between all pairs of areas. The pairwise right-censored data are probit-transformed, yielding a multivariate Gaussian random field that preserves the spatial correlation function. The data are analyzed using counting-process machinery and a geostatistical formulation, which leads to a class of weighted pairwise semiparametric estimating functions. Estimators of the model unknowns are shown to be consistent and asymptotically normally distributed under infill-type spatial asymptotics. Detailed small-sample numerical studies, in agreement with the theoretical results, are provided. The foregoing procedures are applied to leukemia survival data from Northeast England.

Speaker: Akim Adekpedjou (University of Missouri)
• 14:20
Phase-type models for competing risks 20m

A phase-type distribution can be defined to be the distribution of time to absorption for an absorbing finite state Markov chain in continuous time. Phase-type distributions have received much attention in applied probability, in particular in queuing theory, generalizing the Erlang distribution. Among other applications, they have for a long time been used in reliability and survival analysis. Particular interest has been in the use of so-called Coxian phase-type models. Their usefulness stems from the fact that they are able to model phenomena where an object goes through stages (phases) in a specified order, and may transit to the absorbing state (corresponding to the event of interest) from any phase. It is noteworthy that Coxian phase-type models have recently, in a number of papers, been successfully applied to model hospital length of stay in health care studies. These authors typically claim the superiority of Coxian phase-type models over common parametric models like gamma and lognormal for this kind of data. Similar models are apparently appropriate for reliability modeling of complex degrading systems.

The main purpose of the present talk is to study how the phase-type methodology can be modified to include competing risks, thereby enabling the modeling of failure distributions with several failure modes, or, more generally, event histories with several types of events. One then considers a finite state Markov chain with more than one absorbing state, each of which corresponds to a particular risk. Standard functions from the theory of competing risks can now be given in terms of the transition matrix of the underlying Markov chain. We will be particularly concerned with the uniqueness of parameterizations of phase-type models for competing risks, which is of particular interest in statistical inference. We will briefly consider maximum likelihood estimation in Coxian competing risks models, using the EM algorithm. A real data example will be analyzed for illustration.
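As a hedged illustration of this setup, the sketch below simulates a finite-state Markov chain with several absorbing states and estimates the cause-specific absorption probabilities by Monte Carlo. The dictionary representation of the generator and the example rates are illustrative assumptions, not taken from the talk.

```python
import random

def simulate_phase_type_competing(Q, absorbing, start=0, rng=random):
    """One path of a continuous-time Markov chain until absorption.
    Q maps each transient state to a dict {next_state: rate}; `absorbing`
    is the set of absorbing states (the competing risks).
    Returns (absorption_time, absorbing_state_reached)."""
    state, t = start, 0.0
    while state not in absorbing:
        rates = Q[state]
        total = sum(rates.values())
        t += rng.expovariate(total)      # exponential holding time in the phase
        u = rng.random() * total         # pick the next state with prob. rate/total
        for nxt, r in rates.items():
            u -= r
            if u <= 0.0:
                state = nxt
                break
    return t, state
```

In a two-phase Coxian example where risk "A" can be reached from both phases and risk "B" only from the second, the cause-specific probabilities follow from the jump probabilities of the embedded chain, and the Monte Carlo frequencies should reproduce them.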

Speaker: Bo Henry Lindqvist (Norwegian University of Science and Technology)
• 14:40
RELSYS: A new method based on physical-chemical damage processes with uncertainties and hazard 20m

Jerome de Reffye
Engineering degrees from Gustave Eiffel University (ESIEE) and Pierre et Marie Curie University (Paris VI); PhD in applied mathematics and theoretical physics from Pierre et Marie Curie University

In contrast to empirical approaches, we develop an analytic method in the Reliability - Maintainability - Availability - Safety (RAMS) area, which led to the RELSYS (RELiability of SYStems) model. RELSYS takes into account all the physical-chemical parameters of the degradation models of the system components, as well as their uncertainties, in a model of randomized physical-chemical evolution that allows the complete calculation of the components' failure probabilities. This calculation is complete enough to be compatible with actuarial insurance calculations, which makes it possible to link the RAMS analysis with the calculation of the guarantee of system costs.
We show that it is possible, with precise calculations, to evaluate the risks of systems that cannot be insured using time series because of the lack of data due to the rarity of the feared events. Since the probabilities of occurrence of these events are very low, the calculations must be based on justified models.
The model uses Langevin's equation for phenomena that evolve slowly over time. The notions of Serviceability Limit State and Ultimate Limit State are introduced.
This model gives access to dynamic reliability, which studies time-dependent phenomena, and provides failure probabilities as functions of time through numerical simulation.
We obtain random functions of time whose parameters are themselves random. The uncertainties are thus divided into two parts according to their origin: uncertainties on the physical-chemical parameters (randomness concentrated at the origin) and uncertainties on the realization of the degradation processes (randomness distributed over time).
We thus obtain the failure probabilities with their confidence intervals. Numerous examples show that RELSYS can be applied to any man-made system. We show the importance of taking the probabilistic aspect of the problem into account from the beginning of the modeling, and of developing determinism within the random model. Finally, for the solution of particular problems, original signal-processing methods are presented.
The application of RELSYS to system maintenance uses the classical theory of random processes. The preventive maintenance parameters can be calculated by RELSYS from the failure probabilities and the technical specifications on the residual failure probabilities. The corrective maintenance cost can be deduced from the preceding analysis.
The application of RELSYS to the calculation of the cost of the guarantee commitments of a programme uses the concept of Value at Risk. The techniques used are derived from reinsurance.
We will show numerous examples illustrating the theory through engineering practice. RELSYS provides a complete toolset for dynamic RAMS analysis.
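A minimal sketch of the kind of computation described above: an Euler-Maruyama simulation of a Langevin-type degradation whose drift parameter is itself drawn at random for each unit (randomness concentrated at the origin), while the Brownian term spreads randomness over time, yielding a Monte Carlo failure probability over a horizon. All numerical values and the linear-drift model are illustrative assumptions, not RELSYS itself.

```python
import math
import random

def failure_probability(threshold, horizon, n_paths=2000, dt=0.01,
                        mu_mean=1.0, mu_sd=0.2, sigma=0.3, rng=random):
    """Monte Carlo first-passage probability for a Langevin-type degradation
    dX = mu dt + sigma dW. The drift mu is redrawn for each simulated unit
    (parameter uncertainty); the Brownian increments model randomness
    distributed in time. Returns P(X crosses threshold before horizon)."""
    failures = 0
    for _ in range(n_paths):
        mu = rng.gauss(mu_mean, mu_sd)   # uncertainty on physical parameters
        x, t = 0.0, 0.0
        while t < horizon:
            x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            t += dt
            if x >= threshold:           # degradation reaches the limit state
                failures += 1
                break
    return failures / n_paths
```

Repeating the estimate over independent batches would give the Monte Carlo confidence interval on the failure probability that the abstract refers to.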

Speaker: Jerome de Reffye (independent researcher, retired)
• 15:00
Combining AI with Model based Design: battery State-of-charge estimator using Deep Learning 20m

Across industries, the growing dependence on battery-pack energy storage has underscored the importance of battery management systems (BMS), whose role is to monitor battery state, ensure safe operation, and maximize performance. For example, the BMS helps avoid overcharging and over-discharging and manages the temperature of the battery; it does so by collecting information from sensors on the battery for current, voltage, temperature, etc. So this is a closed-loop system by design.
One quantity that cannot be directly measured but is required for many of these operations is the battery state of charge (SOC), so it needs to be estimated somehow. One way to solve this problem is recursive estimation based on a Kalman filter. However, the Kalman filter requires a dynamical model of the battery - which may or may not be accurate - and is very time-consuming. Besides, handling just the algorithm is not enough: models need to be incorporated into an entire system-design workflow to deliver a product or a service to the market. The bridge between engineering and science workflows is one of the most important pieces of such an application. Combining Model-Based Design with artificial intelligence enriches the model and makes collaboration between teams more robust and automated.

We will explore, in detail, the workflow involved in developing, testing, and deploying an AI-based state-of-charge estimator for batteries using Model-Based Design:
- Designing and training deep learning models
- Demonstrating a workflow for researching, developing, and deploying your own deep learning application with Model-Based Design
- Integrating deep learning and machine learning models into Simulink for system-level simulation
- Generating optimized C code and performing processor-in-the-loop (PIL) tests
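For context on the Kalman-filter baseline mentioned above, here is a minimal scalar sketch that combines coulomb counting (prediction) with a voltage measurement through a linear open-circuit-voltage model. The linear OCV map, the noise levels, and the capacity value are illustrative assumptions, not a real BMS design or the talk's method.

```python
import random  # used only to simulate noisy voltage readings in examples

def soc_kalman_step(soc, P, current, voltage, dt,
                    q_ah=2.0 * 3600.0,       # capacity in ampere-seconds (assumed 2 Ah)
                    q_proc=1e-7, r_meas=1e-3,
                    ocv0=3.0, ocv_slope=1.2):
    """One step of a scalar Kalman filter for state of charge (SOC).
    Prediction is coulomb counting; the update uses a deliberately
    simplistic linear open-circuit-voltage model V ~ ocv0 + ocv_slope*SOC."""
    # predict: coulomb counting, inflate variance by the process noise
    soc_pred = soc - current * dt / q_ah
    p_pred = P + q_proc
    # update: innovation against the linear measurement model, scalar gain
    innov = voltage - (ocv0 + ocv_slope * soc_pred)
    s = ocv_slope * p_pred * ocv_slope + r_meas
    gain = p_pred * ocv_slope / s
    return soc_pred + gain * innov, (1.0 - gain * ocv_slope) * p_pred
```

Starting the filter from a wrong SOC guess and feeding it simulated noisy voltages shows the recursive correction at work; the deep-learning approach in the talk replaces this hand-built battery model with a learned one.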