Welcome to the ENBIS Spring Meeting 2022, May 19-20, 2022
The 2022 ENBIS Spring Meeting will be dedicated to
Degradation and Maintenance, Modelling and Analysis
Aim of the meeting
In the field of reliability studies, the proliferation of equipment control and monitoring systems means that degradation models are becoming increasingly prominent relative to lifetime models. The modelling of degradation processes, the statistical analysis of the corresponding data, and their use for the predictive maintenance of industrial systems are important and challenging issues. The aim of the 2022 ENBIS Spring Meeting is to bring together academic and industrial statisticians interested in theoretical developments and practical applications in this field.
The topics of the meeting include:
Stochastic degradation processes
Statistical analysis of degradation data
Predictive maintenance
Maintenance modelling and optimization
Statistical reliability
Accelerated degradation tests
Prognostics and health management
Software development for degradation and maintenance analysis
Case studies in reliability analysis
Contact information
For any question about the meeting venue and scientific programme, feel free to contact the 2022 ENBIS Spring Meeting organizer Olivier Gaudoin: olivier.gaudoin@univ-grenoble-alpes.fr.
For any question regarding registration and paper submission, please contact the ENBIS Permanent Office: office@enbis.org.
Programme committee
Inmaculada Torres Castro, Universidad de Extremadura, Badajoz, Spain
Marcel Chevalier, Schneider Electric, Grenoble, France
Mitra Fouladirad, Ecole Centrale de Marseille, France
Christian Paroissin, Université de Pau et des Pays de l'Adour, France
Gianpaolo Pulcini, Istituto Motori, Consiglio Nazionale delle Ricerche, Napoli, Italy
Emmanuel Remy, EDF R&D, Chatou, France
Organizing committee
Olivier Gaudoin, Grenoble INP, Université Grenoble Alpes (chair)
Christophe Bérenguer, Grenoble INP, Université Grenoble Alpes
Franck Corset, Université Grenoble Alpes
Laurent Doyen, Université Grenoble Alpes
Rémy Drouilhet, Université Grenoble Alpes
ENBIS Spring Meeting 2022 Highlights
Plenary speakers
Massimiliano Giorgio (Università degli Studi di Napoli Federico II, Italy): About some extensions of the gamma process and their applications in reliability and maintenance.
David Coit (Rutgers University, USA): System reliability modeling with dependent degradation paths: a review of models including clustering and machine learning
The meeting will also include a number of contributed-paper sessions. A particular focus will be placed on industrial applications of reliability.
A special issue of the Wiley Journal Applied Stochastic Models in Business and Industry will be published on the topics of the meeting. Call for papers.
Registration fees

             Early bird (until April 29th, 2022)   Normal (from April 30th, 2022)
Regular      180 €                                 220 €
Student      90 €                                  90 €
Conference registration fees include:
Access to all sessions
Access to coffee breaks and lunches
Conference materials
Admission to the banquet on May 19th
Univariate Lévy processes have become quite common in the reliability literature for modelling accumulated deterioration. When deterioration indicators are correlated, several possibilities have been suggested for modelling their dependence. The point of this study is the analysis and comparison of three dependence models considered in the recent literature:
1. Use of a regular copula, where the dependence in a multivariate increment is modelled through a time-independent regular copula;
2. Superposition of independent univariate Lévy processes, where each marginal process is constructed as the sum of independent univariate Lévy processes $\{X_j(t), t\geq 0\}$, with possibly common processes $X_j$ shared between margins;
3. Use of a Lévy copula.
The three methods are first presented and analysed. For the model based on a regular copula, it is shown that the corresponding multivariate process cannot have independent increments in general, so that it is not a Lévy process. This means that the distribution of the multivariate process is not fully characterized in this way. The second and third models both lead to a multivariate Lévy process, with a limited dependence range for the superposition-based model, which is not the case for the Lévy-copula-based model. However, this last model requires greater technical sophistication, and numerical methods (such as Monte Carlo simulations) have to be used for its numerical assessment. Practical details are given in the paper and two Monte Carlo simulation procedures are compared.
A two-component series system is next considered, with joint deterioration level modelled by one of the three previous models. Each component is considered failed as soon as its deterioration level exceeds a given failure threshold. The impact of a wrong model choice is explored, based on data simulated from one of the three models and then fitted to all three. It is shown that a wrong model choice can lead to either overestimating or underestimating the reliability function of the two-component series system, which could be problematic in an application context.
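As an illustration of the first dependence model above, the following sketch simulates two gamma deterioration paths coupled through a Gaussian (regular) copula on the increments and estimates the reliability of a two-component series system by Monte Carlo. All parameter values (shapes, scales, correlation, thresholds) are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.stats import gamma, norm

rng = np.random.default_rng(0)

def bivariate_gamma_paths(n_paths, n_steps, dt, shapes, scales, rho):
    """Dependence model 1 above: gamma marginal increments coupled by a
    time-independent regular (here Gaussian) copula."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=(n_paths, n_steps))
    u = norm.cdf(z)                                    # copula uniforms
    incr = np.stack(
        [gamma.ppf(u[..., j], a=shapes[j] * dt, scale=scales[j])
         for j in range(2)], axis=-1)                  # marginal gamma increments
    return incr.cumsum(axis=1)                         # degradation paths

def series_reliability(paths, thresholds):
    """P(neither component has crossed its failure threshold by the horizon)."""
    return (paths[:, -1, :] < thresholds).all(axis=1).mean()

paths = bivariate_gamma_paths(5000, 50, dt=0.2, shapes=(1.0, 1.5),
                              scales=(0.5, 0.4), rho=0.7)
print(series_reliability(paths, thresholds=(8.0, 10.0)))
```

Fitting data simulated from one dependence model to the other two, as in the abstract, amounts to swapping this increment-coupling step while keeping the same marginals.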
In the context of semiconductor reliability, predictive maintenance and the calculation of remaining useful life are important topics under the greater umbrella of prognostics and health management.
Especially in automotive applications, with the higher expected usage times of self-driving vehicles, it becomes increasingly important to recognize degradation processes early, so that preventive maintenance actions can be taken automatically. For semiconductor producers, it is important to account for lifetime degradation of electronic devices when guaranteeing quality standards for their customers.
For this, accurate and fast statistical models are needed to identify degradation by parameter drift. Typically, electrical parameters have specified limits in which they need to stay over their whole life cycle.
Efficient lifetime simulations are performed by so-called accelerated stress tests. In those tests, electrical parameters are measured before, during, and after higher-than-usual stress conditions. These stress-test data represent the expected lifetime behavior of these parameters.
Using models based on these data, tighter limits, so-called test limits, are then introduced at production testing to guarantee the lifetime quality of the devices for the customer.
Based on these data, quality control measures such as guard bands are introduced. Guard bands are the differences between specification limits and test limits, and account, among other things, for lifetime drift effects of electrical parameters.
Models to calculate lifetime drift have to be flexible enough to accurately represent a large number of stress test behaviors while being computationally lightweight enough to run on edge devices in the vehicles.
We present a statistical model for discrete parameters based on nonparametric interval estimation of conditional transition probabilities in Markov chains that allows for flexible modelling and fast computation. We then show how to use the model to formulate an integer optimization problem to calculate optimal test limits. Calculation for both arbitrary parameter distributions at production testing as well as defined initial distributions are shown. Finally, we give an approach to calculate remaining useful lifetime for electronic components.
The work has been performed in the project ArchitectECA2030 under grant agreement No 877539. The project is co-funded by grants from Germany, the Netherlands, the Czech Republic, Austria, Norway and the Electronic Component Systems for European Leadership Joint Undertaking (ECSEL JU).
All ArchitectECA2030-related communication reflects only the author's view; ECSEL JU and the Commission are not responsible for any use that may be made of the information it contains.
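The transition-probability machinery described above can be sketched as follows: a nonparametric, counting-based estimate of bin-to-bin transition probabilities for a discretized parameter, and the resulting probability of drifting outside the specification limits given the value observed at production test. The data and all limits below are synthetic stand-ins for real stress-test data, and the drift law is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for stress-test data: a parameter measured on n devices
# before (x0) and after (x1) stress.
n = 2000
x0 = rng.normal(0.0, 1.0, n)
x1 = x0 + rng.normal(0.3, 0.2, n)               # hypothetical drift

bins = np.linspace(-4.0, 4.0, 17)               # discretized parameter range

def transition_matrix(x0, x1, bins):
    """Nonparametric (counting-based) estimate of the conditional
    bin-to-bin transition probabilities."""
    i0 = np.clip(np.digitize(x0, bins) - 1, 0, len(bins) - 2)
    i1 = np.clip(np.digitize(x1, bins) - 1, 0, len(bins) - 2)
    k = len(bins) - 1
    counts = np.zeros((k, k))
    np.add.at(counts, (i0, i1), 1)
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

P = transition_matrix(x0, x1, bins)

def drift_violation_prob(P, bins, start, spec_lo, spec_hi):
    """P(parameter drifts outside spec) given its production-test value."""
    i = int(np.digitize(start, bins)) - 1
    centers = 0.5 * (bins[:-1] + bins[1:])
    outside = (centers < spec_lo) | (centers > spec_hi)
    return P[i, outside].sum()

# A guard-banded test limit rejects devices whose start value makes this
# violation probability too large.
print(drift_violation_prob(P, bins, start=2.5, spec_lo=-3.0, spec_hi=3.0))
```

Formulating optimal test limits as an integer optimization over these probabilities, as in the abstract, would sit on top of this estimate.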
Fuel cells use hydrogen and oxygen as reactants to produce electricity through electrochemical reactions, with water as the only by-product. They are widely used in various applications, e.g. transport, due to their high efficiency, energy density, and limited impact on environmental resources. However, fuel cell deployment is held back by multiple barriers, such as high cost or a shorter-than-required lifetime. To overcome these barriers, using multi-stack fuel cells (MFC) instead of a single stack is a promising solution. First, an MFC offers improved reliability thanks to the multi-stack structure. Another advantage is that the durability of a multi-stack FC can also be increased by optimally distributing the power demand among the different stacks through an efficient Energy Management Strategy (EMS), thus avoiding degraded-mode operation [1]. In short, MFC systems are relevant to meet this challenge if properly dimensioned and managed by an appropriate EMS taking into account the deterioration of the cells. In order to implement such a degradation-aware EMS, it is mandatory to build a degradation model that integrates the dynamic behavior of the MFC according to the operating conditions. Fuel cell performance degradation is linked to complex electrochemical, mechanical, and thermal mechanisms, which are difficult to model using a "white-box" approach relying on the exact laws of physics. Within this context, the aim of the present work is to propose a fuel cell degradation model adapted to the energy management of an MFC.
The deterioration behavior of an MFC is characterized by two main features: (i) it is load-dependent, i.e. the degradation is affected by the load delivered by the stack; (ii) it is stochastic and exhibits stack-to-stack variability. A degradation-aware energy management system allocates the load to deliver among the different stacks of the MFC system as a function of their degradation state and of their predicted degradation behavior. The deterioration dynamics must thus be modeled as a function of the load power. Another specificity of fuel cells is their individual deterioration variability, which can be due to stochasticity in the intrinsic fuel cell deterioration phenomena. This stochasticity causes deterioration levels to differ even for identical stacks operating under identical load profiles.
In order to meet these modelling requirements, this work develops a load-dependent stochastic deterioration model for an MFC. First, the overall stack resistance is chosen as the degradation indicator, as it carries the key aging information of a fuel cell stack. Then, a non-homogeneous Gamma process is used to model the deterioration of the fuel cell, i.e. the increase in the fuel cell resistance. The shape parameter of the considered Gamma process is further modeled as an empirical function of the fuel cell operating load in order to make the resistance deterioration load-dependent. Finally, to model the individual deterioration heterogeneity, a random effect is added to the Gamma process through its scale parameter, taken as a random variable following a probability distribution (a Gamma law is chosen in this work).
Resistance degradation paths can then be simulated from the proposed deterioration model; from these, the first-hitting-time distribution of a failure threshold (or, equivalently, a remaining-useful-life distribution) can be estimated and the reliability of the system can be analyzed. The proposed model can also be used to optimize the load allocation strategy for an MFC [2].
Keywords: multi-stack fuel cells, load-dependent deterioration model, stochastic modelling, Gamma process, random effect.
References:
[1] Marx, N., et al. "On the sizing and energy management of a hybrid multistack fuel cell–battery system for automotive applications." International Journal of Hydrogen Energy 42.2 (2017): 1518-1526.
[2] Zuo, J., Cadet, C., Li, Z., Bérenguer, C., and Outbib, R. (2022). Post-prognostics decision-making strategy for load allocation on a stochastically deteriorating multi-stack fuel cell system. To appear in Proc. Inst. Mech. Eng., Part O: Journal of Risk and Reliability.
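A minimal simulation sketch of the model described above (a non-homogeneous gamma process with a load-dependent shape function and a gamma-distributed random scale effect) might look as follows. The linear shape-rate law and all numerical values are hypothetical, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_resistance_paths(n_stacks, t, load, a0, a1, b_shape, b_rate):
    """Stack-resistance increase as a non-homogeneous gamma process:
    - shape rate alpha(load) = a0 + a1 * load (hypothetical empirical law),
    - random effect: per-stack scale beta ~ Gamma(b_shape, rate=b_rate),
      giving the stack-to-stack variability."""
    dt = np.diff(t, prepend=0.0)
    alpha = (a0 + a1 * load) * dt                       # shape increments
    beta = rng.gamma(b_shape, 1.0 / b_rate, size=n_stacks)  # random effect
    incr = rng.gamma(alpha[None, :], beta[:, None])     # gamma increments
    return incr.cumsum(axis=1)

t = np.linspace(0.01, 100.0, 200)
load = np.full_like(t, 0.6)                             # constant 60 % load
paths = simulate_resistance_paths(500, t, load, a0=0.05, a1=0.1,
                                  b_shape=4.0, b_rate=8.0)

def rul_samples(paths, t, threshold):
    """First-hitting times of the failure threshold (failed paths only)."""
    hit = paths >= threshold
    first = np.where(hit.any(axis=1), hit.argmax(axis=1), -1)
    return t[first[first >= 0]]

print(np.median(rul_samples(paths, t, threshold=4.0)))
```

The empirical first-hitting-time sample plays the role of the remaining-useful-life distribution mentioned in the abstract.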
Let $\boldsymbol{X}=(X_t)_{t\ge 0}$ be a process behaving as a general increasing Lévy process (subordinator) prior to hitting a given unknown level $m_0$, and behaving as another, different subordinator once this threshold is crossed. We address the detection of this unknown threshold $m_0\in [0,+\infty]$ from an observed trajectory of the process. This kind of model and issue is encountered in many areas, such as reliability and quality control in degradation problems. More precisely, we construct, from a sample path and for each $\epsilon >0$, a so-called detection level $L_\epsilon$ by considering a CUSUM-inspired procedure. Under mild assumptions, this level is such that, when $m_0$ is infinite (i.e. when no change occurs), its expectation $\mathbb{E}_{\infty}(L_{\epsilon})$ tends to $+\infty$ as $\epsilon$ tends to $0$, whereas, when the threshold $m_0$ is finite, the expected overshoot $\mathbb{E}_{m_0}([L_{\epsilon} - m_0]^+)$ is negligible compared to $\mathbb{E}_{\infty}(L_{\epsilon})$ as $\epsilon$ tends to $0$. Numerical illustrations are provided when the Lévy processes are gamma processes with different shape parameters. This is joint work with Z. Al Masry and G. Verdier.
Optimisation of maintenance policies for items whose repair processes are modelled by the geometric process (GP) has received a good amount of attention. The extended Poisson process (EPP), one of the extensions of the GP, can be used to model a repair process whose times between failures exhibit a non-monotonic trend. A central issue in the application of EPPs to maintenance policy optimisation is to determine when the EPP has increasing times between failures. This paper aims to answer this question. Numerical examples are provided to illustrate the proposed maintenance policies.
Models that describe the deterioration processes of components are key to determining the lifetime of a system and play a fundamental role in predicting system reliability and planning system maintenance. In most systems there is heterogeneity among the degradation paths of the units. This variability is usually introduced through random effects, that is, by considering random coefficients in the model.
A degrading system subject to multiple degradation processes whose initiation times follow a shot-noise Cox process is studied. The growth of these processes is modeled by a homogeneous gamma process. A condition-based maintenance policy with periodic inspections is applied to reduce the impact of failures and optimise the total expected maintenance cost. Heterogeneity between components is included in the model by assuming that the scale parameter of the gamma process follows a uniform distribution. Numerical examples of this maintenance policy are given, comparing the models with and without heterogeneity.
In this paper, a multi-component series system is considered which is periodically inspected; at inspection times the failed components are replaced by new ones. This maintenance action is therefore perfect corrective maintenance for the failed components, and can be considered imperfect corrective maintenance for the system. The inspection interval is the decision parameter, and the maintenance policy is optimized using the long-run cost rate function. It is assumed that no information on the components' lifetime distributions and their parameters is available. Therefore, an optimal decision parameter is derived from historical data (a data store for the system that includes information on past repairs) using density estimation and random forest algorithms. Finally, the efficiency of the proposed optimal decision parameter based on available data is compared to the one derived when all information on the system is available.
Keywords: maintenance optimization, data-driven estimation, random forest algorithm.
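The density-estimation side of this approach can be sketched as follows: lifetimes from a (here synthetic) repair history are fitted with a kernel density estimate, and the long-run cost rate of each candidate inspection interval is evaluated by a renewal-reward Monte Carlo argument. The cost structure and all numbers are hypothetical, and the random-forest component of the paper is omitted.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)

# Stand-in for the historical repair log: observed component lifetimes.
# (Illustrative Weibull data; the paper assumes the true law is unknown.)
history = rng.weibull(2.0, 300) * 100.0

kde = gaussian_kde(history)                    # density-estimation step

def long_run_cost_rate(tau, lifetimes, c_insp=1.0, c_rep=50.0, c_down=5.0):
    """Renewal-reward estimate of the long-run cost rate for inspection
    interval tau: failures are only found (and fixed) at inspections,
    with a downtime penalty per unit of undetected failed time."""
    k = np.ceil(lifetimes / tau)               # inspection that finds the failure
    cycle_len = k * tau
    cycle_cost = k * c_insp + c_rep + c_down * (cycle_len - lifetimes)
    return cycle_cost.mean() / cycle_len.mean()

samples = kde.resample(20000, seed=5).ravel()
samples = samples[samples > 0]                 # KDE can leak below zero
taus = np.linspace(5.0, 60.0, 56)
rates = [long_run_cost_rate(t, samples) for t in taus]
print(taus[int(np.argmin(rates))])             # estimated optimal interval
```

Comparing this data-driven optimum with the one computed from the true lifetime law mirrors the efficiency comparison described in the abstract.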
A reliability degradation model was developed in a project involving LCD TV screens installed on railways. One critical item in the project was an LED strip.
Accelerated life test data are the data source for determining LED reliability: the expected life of LEDs being about 10-15 years, testing them until failure is not practical.
Luminosity-versus-time measurements provided by the manufacturer were fitted to a degradation model. As opposed to [1] and [2], where an exponential degradation model for the average luminosity was applied, our degradation model assumed degradation equations based on the second law of thermodynamics as developed by A. Einstein (1905), Fokker (1919), Planck (1930) and Kolmogorov (1931). The main difference between the approaches is that the degradation function's Taylor expansion contains terms with the first and second derivatives, while the models of [1] and [2] contain only the first.
As opposed to [2], we did not fit the results to an assumed reliability function (Weibull, normal, lognormal) but left them in tabular form. The table allows one to determine the required reliability and maintenance information.
The following results are derived:
1. PDF of the failure rate as a function of time
2. Reliability as a function of time and temperature (for simple and complex components)
3. MTBF for a device used for limited and unlimited life.
A model developed for maintenance costs and spare parts provisioning allows the development of an optimal preventive maintenance policy:
1. Optimal preventive maintenance without individual monitoring.
2. Optimal preventive maintenance based on remaining useful life (with monitoring)
References
A Digital Twin (DT) is a new and powerful concept that maps a physical structure operating in a specific context to the digital space. The development and deployment of a DT improves forecasting, prognostic performance and decision support for operators and managers. DTs have been introduced in various industries across a range of application areas, including design, manufacturing and maintenance. Due to the large impact of maintenance on the proper functioning of a system, maintenance is one of the most studied DT applications. In the case of trains, poor maintenance can put the rolling stock out of service or, worse, pose a safety risk to passengers and operators. Implementing intelligent maintenance strategies can therefore offer tremendous benefits. This study addresses the development of an architecture for DTs designed to formulate and evaluate new hypotheses in predictive maintenance by iterating between physical experiments and computational experiments. The designed DT supports a broad perspective on statistical aspects of simulations and experiments. In addition, the DT enables real-time prediction and optimization of the actual behavior of a system at any stage of its life cycle. Examples of safety valves and suspension systems will be given.
In the coming years, with the development of intermittent renewable energy sources and the gradual phasing out of coal-fired power plants, combined cycle gas turbines (CCGT) will play an essential role in regulating electricity production. This need for flexibility will increase the demands on the equipment of these CCGTs, and the issue of optimizing their maintenance will become increasingly important. To this end, the use of statistical tools to extract value from the operation and maintenance data of these plants' equipment is a possible approach to providing decision-support elements.
It is in this industrial context that a substantial data collection effort was carried out for several conventional repairable pieces of equipment (turbines, pumps...) of three EDF CCGTs. The second step consisted of preprocessing/cleaning these raw data with the support of field experts, an essential requirement for the statistical modeling stage. A wide range of imperfect maintenance models implemented in the free R package VAM (Virtual Age Models, https://rpackages.imag.fr/VAM#) was tested to evaluate the ability of these models, on the one hand, to reproduce the field reality and, on the other hand, to bring useful insights to help the development of equipment maintenance plans.
The talk will present this work, illustrating it on a piece of equipment and emphasizing its industrial application dimension.
This work is motivated by the undocumented, practical learnings gained by a predictive maintenance (PdM) development team in the Danish energy company Ørsted. The team implements PdM solutions for power plant machinery to monitor for faults in the making. Their learnings support the hypothesis that there are no significant benefits to be gained from using overly complicated condition monitoring models on selected machinery.
To explore this hypothesis, we set out to compare two different methodologies for detecting faults in process data. We compare a classical latent-structure-based method from the field of statistical process control (SPC) with a standard autoencoder deep neural network. Furthermore, we compare the fault detection performance of these methods with two more experimental deep learning models recently proposed in the literature [1]. These specific models were chosen because they a priori seem very well suited to the modelling task the PdM team had undertaken, due to the models' alleged ability to automate domain knowledge in a data-driven way.
We benchmark all methods against each other using, first, the well-known Tennessee Eastman Process (TEP) data and, subsequently, data collected from two feedwater pumps (FWP) at a large Danish combined heat and power plant.
The TEP data stem from a simulation tool for generating data from the process, so a large number of datasets are generated for each of 20 process disturbances. For the FWP data, six historical faults in the form of leaks are used to test the methods against each other in their ability to detect faults as they develop over time. Each method's ability to detect faults is measured using a weighted combination of performance metrics such as mean absolute error, ROC AUC and average precision (AP).
Preliminary results of the experiments suggest that detection performance is comparable between the different models on both datasets, but that each model seems to come with its own set of advantages in terms of fault detection performance, such as reaction time to certain types of faults.
Based on the mentioned datasets and models, we discuss the quantitative results of these experiments, as well as other pros and cons of each paradigm, such as the number of modelling decisions, hyperparameters, etc., that may influence the choice of detection model in an industrial setting.
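A minimal version of the latent-structure SPC baseline can be sketched with a PCA monitor computing Hotelling T^2 and Q (squared prediction error) statistics. These are standard SPC constructions, not the authors' exact implementation, and the data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

def fit_pca_monitor(X_train, n_comp):
    """Latent-structure SPC monitor: PCA on normal operating data, then
    Hotelling T^2 in the score space and Q (squared residual) statistics."""
    mu, sd = X_train.mean(0), X_train.std(0)
    Z = (X_train - mu) / sd
    _, s, vt = np.linalg.svd(Z, full_matrices=False)
    P = vt[:n_comp].T                          # loadings
    lam = (s[:n_comp] ** 2) / (len(Z) - 1)     # score variances
    def stats(X):
        z = (X - mu) / sd
        t = z @ P
        t2 = ((t ** 2) / lam).sum(1)           # Hotelling T^2
        q = ((z - t @ P.T) ** 2).sum(1)        # residual Q statistic
        return t2, q
    return stats

# Illustrative data: 5 correlated variables driven by 2 latent factors,
# then a sensor bias fault on variable 0.
A = rng.normal(size=(5, 2))
normal = rng.normal(size=(500, 2)) @ A.T + 0.1 * rng.normal(size=(500, 5))
faulty = rng.normal(size=(200, 2)) @ A.T + 0.1 * rng.normal(size=(200, 5))
faulty[:, 0] += 2.0                            # fault breaks the correlation

monitor = fit_pca_monitor(normal, n_comp=2)
t2_n, q_n = monitor(normal)
t2_f, q_f = monitor(faulty)
limit = np.quantile(q_n, 0.99)                 # empirical control limit
print((q_f > limit).mean())                    # fault detection rate
```

An autoencoder plays the same role with a nonlinear encoder/decoder in place of the PCA projection; the monitoring statistics are constructed analogously from reconstruction residuals.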
Battery prognostics and health management has recently become a very important and strategic topic, especially with the rise of electric vehicles and electric mobility in general, which is seen as a key tool to reduce the impact of global warming. For battery health management to be viable, it is necessary to quantify and understand battery state of health (SOH) and its degradation mechanisms. Much research has been carried out to understand the different degradation processes in a battery [1], to identify and understand the impact of stress factors [2], and to quantify degradation trends and predict remaining useful life [3]. In this presentation, an overview of battery degradation modelling and prognostics is given, exploring the definitions of the main stress factors and covering the most common degradation models in the literature. Particular attention is given to an empirical model based on stress cycle decomposition, linking the degradation to the history of charge and discharge cycles, which is flexible and accurate [4]. This model can be used to estimate the SOH variation of the battery based on the way the vehicle is driven. A three-step method is thus proposed to develop a comprehensive model for degradation induced by driving conditions, combining the aforementioned degradation model with battery and vehicle dynamics models, as well as information on road topography and vehicle parameters. The first step of this overall degradation model consists in using the available information (topography, speed limits and crossroads) to estimate the electrical power required to carry out a given trip. In the second step, the required electrical power is used to infer a state of charge (SoC) trajectory. Finally, in the last step, the SoC profile is decomposed into stress cycles that serve as input for the degradation model.
Such a comprehensive SOH evolution model linking route profiles and driving conditions to battery degradation is suitable for decision-making problems related to the optimal management of electric vehicles. The use of this degradation model is illustrated on a use case of a fleet of electric vehicles that must perform a set of missions: it is shown how the order of those missions can be decided by optimizing not only energy consumption but also battery degradation.
[1] Barré, A., Deguilhem, B., Grolleau, S., Gérard, M., Suard, F., Riu, D. A review on lithium-ion battery ageing mechanisms and estimations for automotive applications. Journal of Power Sources, Volume 241, 2013, Pages 680-689.
[2] Saxena, S., Roman, D., Robu, V., Flynn, D., Pecht, M. 2021. "Battery Stress Factor Ranking for Accelerated Degradation Test Planning Using Machine Learning." Energies 14, no. 3: 723.
[3] Fan, J., et al. "A novel machine learning method based approach for Li-ion battery prognostic and health management." IEEE Access 7 (2019), pp. 160043-160061.
[4] Xu, B., Oudalov, A., Ulbig, A., Andersson, G., Kirschen, D.S. (2018). Modeling of lithium-ion battery degradation for cell life assessment. IEEE Transactions on Smart Grid, 9(2), 1131-1140. doi:10.1109/TSG.2016.2578950.
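The three-step pipeline above (power, then SoC, then stress cycles, then degradation) can be caricatured in a few lines. The swing decomposition below is a crude stand-in for a proper rainflow cycle-counting algorithm, and the power profile and degradation coefficients are hypothetical.

```python
import numpy as np

def soc_trajectory(power_kw, dt_h, capacity_kwh, soc0=0.9):
    """Step 2 of the pipeline: integrate the required electrical power into
    a state-of-charge profile (losses and battery dynamics omitted)."""
    soc = soc0 - np.cumsum(power_kw) * dt_h / capacity_kwh
    return np.clip(soc, 0.0, 1.0)

def swing_decomposition(soc):
    """Step 3 (simplified): split the SoC profile at local extrema into
    half-cycles and return the depth of each swing.  A rainflow counting
    algorithm would be used in practice; this is a crude stand-in."""
    ext = np.concatenate(
        ([0], np.where(np.diff(np.sign(np.diff(soc))) != 0)[0] + 1,
         [len(soc) - 1]))
    return np.abs(np.diff(soc[ext]))

def capacity_fade(depths, k=5e-4, p=1.3):
    """Empirical stress-cycle degradation: each swing of depth d contributes
    k * d**p to capacity fade (hypothetical coefficients)."""
    return (k * depths ** p).sum()

# Illustrative mission: alternating traction and regenerative-braking power.
t = np.arange(0, 3600, 10) / 3600.0             # 1 h at 10 s resolution
power = 20.0 * np.sin(2 * np.pi * t * 6) + 5.0  # kW, hypothetical profile
soc = soc_trajectory(power, dt_h=10 / 3600.0, capacity_kwh=50.0)
fade = capacity_fade(swing_decomposition(soc))
print(fade)
```

Summing such fade contributions over candidate mission orderings is one way to cast the fleet-scheduling use case as an optimization over degradation as well as energy.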
With the shortening of production periods, the manufacturing industry uses accelerated degradation tests (ADT) to estimate the reliability of newly developed products as quickly as possible. In an ADT, a stress factor related to the failure mechanisms is imposed to cause products to fail faster than under normal use conditions. By increasing the degree of stress, such as voltage, temperature, humidity or other external factors, the performance of new products continuously degrades and leads to failure. In some applications, an accelerated destructive degradation test (ADDT) is conducted when testing units must be destroyed to measure the performance of the degrading product.
For general ADT and ADDT models, mean estimators have been used as the location measure. However, estimation results based on mean estimators can be inappropriate for highly skewed data, because lifetime estimation from degradation data with outliers or irregularities can be distorted.
In this paper, ADDT modeling based on quantile regression (QR) is proposed as a comprehensive approach for asymmetric observations. QR-based ADDT requires fewer assumptions than general parametric methods to construct the model, and enables more flexible interpretation. Through a data application, our approach provides a great advantage by inferring a nonlinear degradation path without bias.
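A small sketch of the QR idea: fitting a degradation path at a chosen quantile by minimizing the check (pinball) loss, on synthetic right-skewed data. A linear path and a generic optimizer are used here purely for illustration; the paper's model is more general.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Illustrative ADDT data: degradation grows with time, with right-skewed
# scatter (so the mean path is pulled away from the typical unit).
t = rng.uniform(0, 100, 400)
y = 0.05 * t + rng.lognormal(mean=-1.0, sigma=0.8, size=400)

def pinball(params, t, y, q):
    """Check (pinball) loss for a linear degradation path b0 + b1 * t."""
    r = y - (params[0] + params[1] * t)
    return np.where(r >= 0, q * r, (q - 1) * r).mean()

def fit_quantile_path(t, y, q):
    """Quantile-regression fit of the degradation path at quantile q."""
    res = minimize(pinball, x0=[0.0, 0.0], args=(t, y, q),
                   method="Nelder-Mead", options={"maxiter": 2000})
    return res.x

b_med = fit_quantile_path(t, y, 0.5)           # median degradation path
b_hi = fit_quantile_path(t, y, 0.9)            # upper path for the skewed tail
print(b_med, b_hi)
```

Lifetime quantiles then follow by intersecting the fitted quantile paths with the failure threshold, without assuming a parametric error distribution.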
A system subject to degradation is considered. The degradation is modelled by a gamma process. A condition-based maintenance policy with perfect corrective and imperfect preventive actions is proposed. The maintenance cost is derived considering a Markov renewal process. The statistical inference of the degradation and maintenance parameters by the maximum likelihood method is proposed. A sensitivity analysis with respect to the different parameters is carried out and perspectives are detailed.
In this article, technological or industrial equipment units that are subject to degradation are considered. These units undergo maintenance actions, which reduce their degradation level.
The paper considers a degradation model with imperfect maintenance effect. The underlying degradation process is a Wiener process with drift. The maintenance effects are described with an Arithmetic Reduction of Degradation ($ARD_1$) model. The system is regularly inspected and the degradation levels are measured.
Four different observation schemes are considered, in which degradation levels can be observed between maintenance actions as well as just before or just after maintenance times. In each scheme, observations of the degradation level between successive maintenance actions are made. In the first observation scheme, degradation levels just before and just after each maintenance action are observed. In the second scheme, degradation levels are observed just before, but not just after, each maintenance action. Conversely, in the third scheme, degradation levels are observed just after, but not just before, the maintenance actions. Finally, in the fourth observation scheme, the degradation levels are observed neither just before nor just after the maintenance actions.
The paper studies the estimation of the model parameters under these different observation schemes. The maximum likelihood estimators are derived for each scheme.
Several situations are studied in order to assess the impact of different features on the estimation quality.
Among them, the number of observations between successive maintenance actions, the number of maintenance actions, the maintenance efficiency parameter and the location of the observations are considered. These situations are used to assess the estimation quality and compare the observation schemes through an extensive simulation and performance study.
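The model and the simplest observation scheme can be sketched as follows, under one common ARD_1 specification (maintenance removes a fraction rho of the degradation accumulated since the previous maintenance). Parameters are illustrative, and only a naive within-period estimator is shown rather than the paper's full maximum likelihood treatment.

```python
import numpy as np

rng = np.random.default_rng(8)

def simulate_ard1_wiener(mu, sigma, rho, n_maint, n_obs, dt):
    """Wiener degradation with drift mu and ARD_1 imperfect maintenance:
    each maintenance removes a fraction rho of the degradation accumulated
    since the previous maintenance (one common ARD_1 specification)."""
    x, last_post, obs = 0.0, 0.0, []
    for _ in range(n_maint):
        for _ in range(n_obs):                 # inspections between maintenances
            x += mu * dt + sigma * np.sqrt(dt) * rng.normal()
            obs.append(x)
        x -= rho * (x - last_post)             # imperfect maintenance jump
        last_post = x
    return np.array(obs)

obs = simulate_ard1_wiener(mu=1.0, sigma=0.3, rho=0.4,
                           n_maint=50, n_obs=5, dt=0.1)

# In the scheme with observations strictly between maintenances, increments
# within a maintenance period are i.i.d. N(mu*dt, sigma^2*dt), so mu and
# sigma can be estimated from within-period increments alone.
w = obs.reshape(50, 5)
incr = np.diff(w, axis=1).ravel()              # increments not spanning maintenance
mu_hat = incr.mean() / 0.1
sigma_hat = incr.std(ddof=1) / np.sqrt(0.1)
print(mu_hat, sigma_hat)
```

Estimating the maintenance efficiency rho, by contrast, requires increments that span a maintenance action, which is what distinguishes the four observation schemes.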
Degradation models are more and more studied and used in practice. Most of these models are based on Lévy processes, and estimation methods have been proposed for them. These models are also used for developing complex and efficient maintenance policies. However, a main issue remains: goodness-of-fit (GoF) testing for these models. In this talk, we propose a GoF test for the homogeneous gamma process under a general sampling scheme.
Degradation modeling is an effective way to perform the reliability analysis of complex systems. For highly reliable systems, whose failures are hard to observe, degradation measurements often provide more information than failure times to improve system reliability (Meeker and Escobar 2014). The degradation can be viewed as damage to a system that accumulates over time and eventually leads to failure when the accumulation reaches a failure threshold, either random or stipulated by industrial standards (Ye and Xie 2015). Two large classes of degradation models are stochastic processes and general path models. Stochastic-process-based models show great flexibility in describing the failure mechanisms caused by degradation (Lehmann 2009).
The aim of degradation modeling in the presence of degradation data is to select, from a set of competing models, a model capturing the features of the underlying degradation phenomenon. An efficient statistical tool should be able to discard irrelevant models. The concept of statistical depth can be employed as such a tool for model selection. A depth function reflects the centrality of an observation within a statistical population (Staerman et al. 2020).
Tukey (1975) introduced a data depth to extend the notion of a median to multivariate random variables. Depth functions have been extended by Fraiman and Muniz (2001) and Cuevas et al. (2006, 2007) to functional data, i.e. data recorded densely over time with one observed function per subject (Hall et al. 2006). An alternative point of view, based on the graphical representation of curves, was proposed by López-Pintado (2009).
In this paper, stochastic processes such as Lévy processes or stochastic differential equations are considered to model the degradation. After model calibration in the presence of data, the models that show high values of the depth function are compared, and a methodology to exploit and analyze the depth function results is proposed.
Consider a fixed number of clustered areas, identified by their geographical coordinates, that are monitored for the occurrence of events such as pandemics, epidemics, or migration, to name a few. The data collected on units in all areas include time-varying covariates and environmental factors. The collected data are considered pairwise to account for spatial correlation between all pairs of areas. The pairwise right-censored data are probit-transformed, yielding a multivariate Gaussian random field that preserves the spatial correlation function. The data are analyzed using counting process machinery and a geostatistical formulation, leading to a class of weighted pairwise semiparametric estimating functions. Estimators of the model unknowns are shown to be consistent and asymptotically normally distributed under infill-type spatial statistics asymptotics. Detailed small-sample numerical studies in agreement with the theoretical results are provided. The foregoing procedures are applied to leukemia survival data from Northeast England.
A phase-type distribution can be defined as the distribution of the time to absorption of an absorbing finite-state Markov chain in continuous time. Phase-type distributions have received much attention in applied probability, in particular in queuing theory, generalizing the Erlang distribution. Among other applications, they have long been used in reliability and survival analysis. Particular interest has been in the use of so-called Coxian phase-type models. Their usefulness stems from the fact that they are able to model phenomena where an object goes through stages (phases) in a specified order, and may transit to the absorbing state (corresponding to the event of interest) from any phase. It is noteworthy that Coxian phase-type models have recently, in a number of papers, been successfully applied to model hospital length of stay in health care studies. These authors typically claim the superiority of Coxian phase-type models over common parametric models like the gamma and lognormal for this kind of data. Similar models appear appropriate for the reliability modeling of complex degrading systems.
The main purpose of the present talk is to study how the phase-type methodology can be modified to include competing risks, thereby enabling the modeling of failure distributions with several failure modes or, more generally, event histories with several types of events. One then considers a finite-state Markov chain with more than one absorbing state, each of which corresponds to a particular risk. Standard functions from the theory of competing risks can now be given in terms of the transition matrix of the underlying Markov chain. We will be particularly concerned with the uniqueness of parameterizations of phase-type models for competing risks, which is of particular interest in statistical inference. We will briefly consider maximum likelihood estimation in Coxian competing risks models, using the EM algorithm. A real data example will be analyzed for illustration.
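The basic objects can be sketched numerically. For a sub-intensity matrix T, initial distribution alpha, and exit-rate vectors t_k to the absorbing states, the overall survival function is S(t) = alpha' exp(Tt) 1 and the probability of eventual absorption by risk k is alpha' (-T)^{-1} t_k. The 2-phase Coxian example below, with illustrative rates (not from the talk), checks that the two risk probabilities sum to one:

```python
import numpy as np
from scipy.linalg import expm

# A 2-phase Coxian model with two absorbing states ("risks"):
# from phase 1 the chain moves to phase 2 (rate lam12) or is absorbed
# by risk 1 (rate mu11) or risk 2 (rate mu21); from phase 2 it is
# absorbed by risk 1 (rate mu12) or risk 2 (rate mu22).
lam12, mu11, mu21, mu12, mu22 = 1.0, 0.3, 0.2, 0.5, 0.7
T = np.array([[-(lam12 + mu11 + mu21), lam12],
              [0.0, -(mu12 + mu22)]])        # sub-intensity matrix
t1 = np.array([mu11, mu12])                  # exit rates to risk 1
t2 = np.array([mu21, mu22])                  # exit rates to risk 2
alpha = np.array([1.0, 0.0])                 # start in phase 1

# Overall survival function S(t) = alpha' exp(T t) 1
def survival(t):
    return alpha @ expm(T * t) @ np.ones(2)

# Probability of eventually failing from risk k: alpha' (-T)^{-1} t_k
p1 = alpha @ np.linalg.solve(-T, t1)
p2 = alpha @ np.linalg.solve(-T, t2)
print(survival(1.0), p1, p2)  # p1 + p2 = 1
```

Since the rows of -T sum to the total exit rates t1 + t2, the identity p1 + p2 = 1 holds by construction, which is a useful sanity check when fitting such models.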
Jerome de Reffye
Engineering Degrees from Gustave Eiffel University (ESIEE)
and Pierre et Marie Curie University (Paris VI)
PhD in applied mathematics and theoretical physics
from Pierre et Marie Curie University
In contrast to empirical approaches, we develop an analytic method in the Reliability, Maintainability, Availability, Safety (RAMS) area, which has led to the RELSYS (RELiability of SYStems) model. RELSYS takes into account all the physical-chemical parameters of the degradation models of the system components, as well as their uncertainties, in a randomized physical-chemical evolution model that allows the complete calculation of their failure probabilities. This calculation is complete enough to be compatible with actuarial insurance calculations, and together they allow the association between RAMS and the calculation of the guarantee of system costs.
We show that it is possible, with precise calculations, to evaluate the risks of systems that cannot be insured using time series methods because of the lack of data due to the rarity of the feared events. Since the probabilities of occurrence of these events are very low, the calculations must be based on justified models.
The model uses the Langevin equation for phenomena that evolve slowly over time. The notions of the Serviceability Limit State and the Ultimate Limit State are introduced.
This model gives access to dynamic reliability, which studies time-dependent phenomena and provides failure probabilities as functions of time through numerical simulation.
We obtain random functions of time whose parameters are themselves random. The uncertainties are thus divided into two parts according to their origin: uncertainties on the physical-chemical parameters (randomness concentrated at the origin) and uncertainties on the realization of the degradation processes (randomness distributed in time).
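This two-level uncertainty structure can be sketched with a generic Monte Carlo scheme (illustrative parameter values, not the RELSYS implementation): a random degradation rate drawn once per path, Brownian noise along a Langevin-type equation dX = k dt + sigma dW, and a failure probability curve with a confidence band.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-level Monte Carlo sketch:
# 1) uncertainty "at the origin": the degradation rate k is random,
# 2) uncertainty "distributed in time": Brownian noise in a
#    Langevin-type equation dX = k*dt + sigma*dW.
n_paths, dt, n_steps, sigma, limit = 5000, 0.1, 400, 0.2, 5.0

k = rng.normal(0.15, 0.03, size=n_paths)        # random parameter per path
noise = rng.standard_normal((n_paths, n_steps)) # Brownian increments
x = np.cumsum(k[:, None] * dt + sigma * np.sqrt(dt) * noise, axis=1)

# Failure probability at each time = fraction of paths that have ever
# passed the limit state, with a normal-approximation 95% confidence band.
failed = np.maximum.accumulate(x >= limit, axis=1)
p_fail = failed.mean(axis=0)
half = 1.96 * np.sqrt(p_fail * (1 - p_fail) / n_paths)
print(p_fail[-1], p_fail[-1] - half[-1], p_fail[-1] + half[-1])
```

The resulting p_fail is a non-decreasing failure probability curve over time, delivered with its confidence interval as described above.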
We thus obtain the failure probabilities with their confidence intervals. Numerous examples show that RELSYS can be applied to any man-made system. We show the importance of taking the probabilistic aspect of the problem into account from the beginning of the modeling, and of developing determinism within the random model. Finally, for the solution of particular problems, original methods in signal processing are presented.
The application of RELSYS to system maintenance uses the classical theory of random processes. The preventive maintenance parameters can be calculated by RELSYS from the failure probabilities and the technical specifications on the residual failure probabilities. The corrective maintenance cost can then be deduced from the previous analysis.
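One way to read this step (a hedged sketch, not the RELSYS calculation): given a fitted failure-time distribution and a specified residual failure probability p_res tolerated between interventions, the preventive maintenance interval is the largest tau with F(tau) <= p_res. With an illustrative Weibull fit:

```python
from scipy import stats

# Illustrative failure-time law (assumed fitted elsewhere) and the
# residual failure probability tolerated between preventive actions.
shape, scale, p_res = 2.5, 1000.0, 0.05   # time unit: hours, values assumed
dist = stats.weibull_min(c=shape, scale=scale)

# Preventive maintenance interval: the largest tau with F(tau) <= p_res,
# i.e. the p_res-quantile of the failure-time distribution.
tau = dist.ppf(p_res)
print(tau)  # intervention interval in hours
```

The corrective maintenance cost would then weight the residual failure probability by the cost of an unplanned intervention, as stated in the abstract.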
The application of RELSYS to the calculation of the cost of the guarantee commitments of a program uses the concept of Value at Risk. The techniques used are derived from reinsurance.
We will show numerous examples illustrating the theory through engineering practice. RELSYS provides a complete tool for dynamic RAMS analysis.
Across industries, the growing dependence on battery pack energy storage has underscored the importance of battery management systems (BMS), whose role is to monitor battery state, ensure safe operation, and maximize performance. For example, the BMS helps avoid overcharging and over-discharging and manages the temperature of the battery, and it does so by collecting information from sensors on the battery for current, voltage, temperature, etc. This makes it a closed-loop system by design.
One quantity that cannot be directly measured but is required for many of these operations is the battery state of charge (SOC), so it must be estimated. One way to solve this problem is recursive estimation based on a Kalman filter. However, the Kalman filter requires a dynamical model of the battery, which may or may not be accurate, and is time-consuming. Besides, handling the algorithm alone is not enough: models need to be incorporated into an entire system design workflow to deliver a product or a service to the market. The bridge between engineering and science workflows is one of the most important pieces of such an application. Combining Model-Based Design with Artificial Intelligence enriches the model and makes collaboration between teams more robust and more automated.
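The recursive estimation idea can be sketched as a minimal scalar Kalman filter (illustrative, not the talk's Simulink workflow): a Coulomb-counting process model for SOC, corrected by a voltage measurement through an assumed linearized open-circuit-voltage curve OCV ~ v0 + k_ocv * SOC.

```python
import numpy as np

rng = np.random.default_rng(7)

# Minimal 1-D Kalman filter for state of charge (SOC).  The cell model
# (capacity, linearized OCV curve, noise levels) is assumed for the sketch.
capacity_As = 3600.0          # 1 Ah cell, in ampere-seconds
v0, k_ocv = 3.0, 1.0          # linearized OCV curve (assumed)
q_proc, r_meas = 1e-6, 1e-3   # process / measurement noise variances
dt = 1.0                      # seconds

soc_true, soc_est, P = 0.9, 0.5, 1.0   # start from a poor initial guess
for _ in range(600):
    current = 1.0                       # 1 A constant discharge
    soc_true -= current * dt / capacity_As
    v = v0 + k_ocv * soc_true + rng.normal(0, np.sqrt(r_meas))

    # Predict (Coulomb counting), then correct with the voltage reading.
    soc_est -= current * dt / capacity_As
    P += q_proc
    K = P * k_ocv / (k_ocv * P * k_ocv + r_meas)   # Kalman gain
    soc_est += K * (v - (v0 + k_ocv * soc_est))
    P *= (1 - K * k_ocv)

print(soc_true, soc_est)  # the estimate converges despite the wrong start
```

A real BMS replaces the linear OCV assumption with an equivalent-circuit or data-driven cell model, which is where the AI-based estimator discussed next comes in.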
We will explore, in detail, the workflow involved in developing, testing, and deploying an AI-based state-of-charge estimator for batteries using Model-Based Design:
- Designing and training deep learning models
- Demonstrating a workflow to research, develop, and deploy your own deep learning application with Model-Based Design
- Integrating deep learning and machine learning models into Simulink for system-level simulation
- Generating optimized C code and performing Processor-in-the-loop (PIL) tests