BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CERN//INDICO//EN
BEGIN:VEVENT
SUMMARY:Combining AI with Model-Based Design: battery state-of-charge esti
mator using Deep Learning
DTSTART;VALUE=DATE-TIME:20220520T130000Z
DTEND;VALUE=DATE-TIME:20220520T132000Z
DTSTAMP;VALUE=DATE-TIME:20230605T223033Z
UID:indico-contribution-16-344@conferences.enbis.org
DESCRIPTION:Speakers: Moubarak GADO (MathWorks)\nAcross industries\, the g
rowing dependence on battery pack energy storage has underscored the impor
tance of battery management systems (BMS) whose role is to monitor battery
 state\, ensure safe operation\, and maximize performance. For example\, t
he BMS helps avoid overcharging and over-discharging and manages the batt
ery temperature\, using information collected from sensors that measure c
urrent\, voltage\, temperature and other quantities. The BMS is thus a cl
osed-loop system by design.\nOne quantity that cannot be measured directl
y but is required for many of these operations is the battery state of ch
arge (SOC)\, so it must be estimated. One way to solve this problem is re
cursive estimation based on a Kalman filter. However\, the Kalman filter r
equires a dynamical model of the battery – which may or may not be accur
ate – and is very time-consuming. Moreover\, handling the algorithm alon
e is not enough. Models need to be incorporated into an entire system des
ign workflow to deliver a produc
t or a service to the market. The bridge between engineering and science w
orkflows is one of the most important pieces of such an application. Comb
ining Model-Based Design with Artificial Intelligence enriches the model a
nd makes collaboration between teams more robust and automated.\n\nWe wil
l explore\, in detail\, the workflow involved in developing\, testing\, an
d deploying an AI-based state-of-charge estimator for batteries using Mode
l-Based Design:\n - Designing and training deep learning models\n - Demon
strating a workflow to research\, develop\, and deploy your own deep lear
ning application with Model-Based Design\n - Integrating deep learning an
d machine learning models into Simulink for system-level simulation\n - G
enerating optimized C code and performing Processor-in-the-Loop (PIL) tes
ting\n\nhttps://conferences.enbis.org/event/16/contributions/344/
LOCATION:Grenoble
URL:https://conferences.enbis.org/event/16/contributions/344/
END:VEVENT
BEGIN:VEVENT
SUMMARY:RELSYS: A new method based on physical-chemical damage processes w
ith uncertainties and hazard
DTSTART;VALUE=DATE-TIME:20220520T124000Z
DTEND;VALUE=DATE-TIME:20220520T130000Z
DTSTAMP;VALUE=DATE-TIME:20230605T223033Z
UID:indico-contribution-16-389@conferences.enbis.org
DESCRIPTION:Speakers: Jerome de Reffye (individual researcher\, retired)\n
Jerome de Reffye\nEngineering degrees from Gustave Eiffel University (ESI
EE)\nand Pierre et Marie Curie University (Paris VI)\nPhD in applied math
ematics and theoretical physics\nfrom Pierre et Marie Curie University\n\nI
n contrast to empirical approaches\, we develop an analytic method in th
e Reliability\, Maintainability\, Availability and Safety (RAMS) area\, w
hich led to the RELSYS (RELiability of SYStems) model. It takes into acco
unt all the physical-chemical parameters of the degradation models of th
e system components\, as well as their uncertainties\, in a model of rand
omized physical-chemical evolution that allows the complete calculation o
f their failure probabilities. This calculation is complete enough to b
e compatible with actuarial insurance calculations\, which together allo
w RAMS analysis to be linked to the calculation of system cost guarantee
s.\nWe show that it is possible\, with precise calculations\, to evaluat
e the risks of systems that cannot be insured on the basis of time serie
s because of the lack of data due to the rarity of the feared events. Si
nce the probabilities of occurrence of these events are very low\, the c
alculations must be based on justified models.\nThe method uses Langevin
’s equation for phenomena that evolve slowly over time. The notions of L
imit State in Service and Ultimate Limit State are introduced.\nThis mod
el gives access t
o dynamic reliability\, which studies time-dependent phenomena and provid
es failure probabilities as functions of time through numerical simulatio
n.\nWe obtain random functions of time whose parameters are themselves ra
ndom. The uncertainties are thus divided into two parts according to thei
r origin: uncertainties in the physical-chemical parameters (randomness c
oncentrated at the origin) and uncertainties in the realization of the de
gradation processes (randomness distributed in time).\nWe thus obtain th
e failur
e probabilities with their confidence intervals. Numerous examples show t
hat RELSYS can be applied to any man-made system. We show the importanc
e of taking the probabilistic aspect of the problem into account from th
e beginning of the modeling and of developing determinism within the ran
dom model. Finally\, for the solution of particular problems\, original m
ethods in signal processing are presented.\nThe application of RELSYS t
o system maintenance uses the classical theory of random processes. The p
reventive maintenance parameters can be calculated by RELSYS from the fai
lure probabilities and the technical specifications on the residual failu
re probabilities. The corrective maintenance cost can be deduced from thi
s analysis.\nThe application of RELSYS to the calculation of the cost o
f a program's guarantee commitments uses the concept of Value at Risk. T
he techniques used are derived from reinsurance.\nWe will show numerous e
xamples illustrating the theory through engineering practice. RELSYS prov
ides a complete tool for dynamic RAMS analysis.\n\nhttps://conferences.en
bis.org
/event/16/contributions/389/
LOCATION:Grenoble
URL:https://conferences.enbis.org/event/16/contributions/389/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Phase-type models for competing risks
DTSTART;VALUE=DATE-TIME:20220520T122000Z
DTEND;VALUE=DATE-TIME:20220520T124000Z
DTSTAMP;VALUE=DATE-TIME:20230605T223033Z
UID:indico-contribution-16-342@conferences.enbis.org
DESCRIPTION:Speakers: Bo Henry Lindqvist (Norwegian University of Science
and Technology)\nA phase-type distribution can be defined to be the distri
bution of time to absorption for an absorbing finite state Markov chain in
continuous time. Phase-type distributions have received much attention in
applied probability\, in particular in queuing theory\, generalizing the
Erlang distribution. Among other applications\, they have for a long time
been used in reliability and survival analysis. Particular interest has be
en in the use of so-called Coxian phase-type models. Their usefulness stem
s from the fact that they are able to model phenomena where an object goes
through stages (phases) in a specified order\, and may transit to the abs
orbing state (corresponding to the event of interest) from any phase. It
is noteworthy that Coxian phase-type models have recently\, in a number of
papers\, been successfully applied to model hospital length of stay in he
alth care studies. These authors typically claim the superiority of Coxian
phase-type models over common parametric models like gamma and lognormal
for this kind of data. Similar models are apparently appropriate for relia
bility modeling of complex degrading systems. \n\nThe main purpose of the
present talk is to study how the phase-type methodology can be modified to
include competing risks\, thereby enabling the modeling of failure distri
butions with several failure modes\, or\, more generally\, event histories
with several types of events. One then considers a finite state Markov ch
ain with more than one absorbing state\, each of which corresponds to a pa
rticular risk. Standard functions from the theory of competing risks can n
ow be given in terms of the transition matrix of the underlying Markov cha
in. We will be particularly concerned with the uniqueness of parameterizat
ions of phase-type models for competing risks\, which is of particular int
erest in statistical inference. We will briefly consider maximum likelihoo
d estimation in Coxian competing risks models\, using the EM algorithm. A
real data example will be analyzed for illustration.\n\nhttps://conference
s.enbis.org/event/16/contributions/342/
LOCATION:Grenoble
URL:https://conferences.enbis.org/event/16/contributions/342/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Modeling Spatially Clustered Failure Time Data via Multivariate Ga
ussian Random Fields
DTSTART;VALUE=DATE-TIME:20220520T120000Z
DTEND;VALUE=DATE-TIME:20220520T122000Z
DTSTAMP;VALUE=DATE-TIME:20230605T223033Z
UID:indico-contribution-16-385@conferences.enbis.org
DESCRIPTION:Speakers: Akim Adekpedjou (University of Missouri)\nConside
r a fixed number of clustered areas\, identified by their geographical co
ordinates\, that are monitored for the occurrence of an event such as a p
andemic\, an epidemic\, or migration\, to name a few. Data collected on u
nits in all areas include time-varying covariates and environmental facto
rs. The collected data are considered pairwise to account for spatial cor
relation between all pairs of areas. The pairwise right-censored data ar
e probit-transformed\, yielding a multivariate Gaussian random field tha
t preserves the spatial correlation function. The data are analyzed usin
g counting process machinery and a geostatistical formulation\, leading t
o a class of weighted pairwise semiparametric estimating functions. Esti
mators of the model unknowns are shown to be consistent and asymptotical
ly normally distributed under infill-type spatial asymptotics. Detailed s
mall-sample numerical studies that are in agreement with the theoretica
l results are provided. The foregoing procedures are applied to leukemi
a survival data in Northeast England.\n\nhttps://c
onferences.enbis.org/event/16/contributions/385/
LOCATION:Grenoble
URL:https://conferences.enbis.org/event/16/contributions/385/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A condition-based maintenance policy in a system with heterogeneit
ies
DTSTART;VALUE=DATE-TIME:20220519T122000Z
DTEND;VALUE=DATE-TIME:20220519T124000Z
DTSTAMP;VALUE=DATE-TIME:20230605T223033Z
UID:indico-contribution-16-363@conferences.enbis.org
DESCRIPTION:Speakers: Inma T. Castro (Universidad de Extremadura)\nModels
that describe the deterioration processes of the components are key to de
termining the lifetime of a system and play a fundamental role in predicti
ng the system reliability and planning the system maintenance. In most sys
tems there is heterogeneity among the degradation paths of the units. This
variability is usually introduced in the model through random effects\, t
hat is\, by considering random coefficients in the model.\nA degrading sy
stem subject to multiple degradation processes whose initiation times fol
low a shot-noise Cox process is studied. The growth of these processes i
s modeled by a homogeneous gamma process. A condition-based maintenance p
olicy with periodic inspections is applied to reduce the impact of failur
es and o
ptimise the total expected maintenance cost. The heterogeneities between c
omponents are included in the model considering that the scale parameter o
f the gamma process follows a uniform distribution. Numerical examples of
this maintenance policy are given comparing both models\, with and without
heterogeneities.\n\nhttps://conferences.enbis.org/event/16/contributions/
363/
LOCATION:Grenoble
URL:https://conferences.enbis.org/event/16/contributions/363/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Degradation Model Selection Using Depth Functions
DTSTART;VALUE=DATE-TIME:20220520T093000Z
DTEND;VALUE=DATE-TIME:20220520T095000Z
DTSTAMP;VALUE=DATE-TIME:20230605T223033Z
UID:indico-contribution-16-365@conferences.enbis.org
DESCRIPTION:Speakers: Arefe Asadi (University of technology of Troyes)\nDe
gradation modeling is an effective way for the reliability analysis of com
plex systems. For highly reliable systems\, whose failure is hard to obse
rve\, degradation measurements often provide more information than failur
e times for improving system reliability (Meeker and Escobar 2014). Degra
dation can be viewed as damage to a system that accumulates over time an
d eventually leads to failure when the accumulation reaches a failure thr
eshold\, either random or stipulated by industrial standards (Ye and Xi
e 2015). Two large classes of degradation models are stochastic processe
s and general path models. The stochastic-process-based models show great
flexibility in describing the failure mechanisms caused by degradation (Le
hmann 2009).\n\nThe aim of degradation modeling in the presence of degrad
ation data is to select\, from a set of competing models\, a model that c
aptures the features of the underlying degradation phenomenon. An efficie
nt statistical tool should be able to discard irrelevant models. The conc
ept of statistical depth can be employed as such a tool for model selecti
on. A depth function reflects the centrality of an observation within a s
tatistical population (Staerman et al. 2020).\n\nTukey (1975) introduce
d a data depth to extend the notion of a median to multivariate random va
riables. Depth functions have been extended by Fraiman and Muniz (2001) a
nd Cuevas et al. (2006\, 2007) to functional data\, i.e.\, data recorde
d densely over time with one observed function per subject (Hall et al. 2
006). An alternative point of view\, based on the graphical representatio
n of curves\, was proposed by Lopez-Pintado (2009).\nIn this paper\, stoc
hastic processes such a
s Lévy processes or stochastic differential equations are considered to m
odel the degradation. After model calibration in the presence of data\, t
he models that show high values of the depth function are compared\, and a m
ethodology to exploit and analyze the depth function results is propose
d.\n\nhttps:
//conferences.enbis.org/event/16/contributions/365/
LOCATION:Grenoble
URL:https://conferences.enbis.org/event/16/contributions/365/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A goodness-of-fit test for homogeneous gamma process under a gener
al sampling scheme
DTSTART;VALUE=DATE-TIME:20220520T091000Z
DTEND;VALUE=DATE-TIME:20220520T093000Z
DTSTAMP;VALUE=DATE-TIME:20230605T223033Z
UID:indico-contribution-16-340@conferences.enbis.org
DESCRIPTION:Speakers: Christian Paroissin (Université de Pau et des Pays
 de l'Adour)\nDegradation models are increasingly studied and used in prac
tice. Most of these models are based on Lévy processes. For such models\,
estimation methods have been proposed. These models are also considered f
or developing complex and efficient maintenance policies. However\, a mai
n issue remains: goodness-of-fit (GoF) testing for these models. In thi
s talk\, we propose a GoF test for the homogeneous gamma process under a g
eneral sampling scheme.\n\nhttps://conferences.enbis.org/event/16/contrib
utions/3
40/
LOCATION:Grenoble
URL:https://conferences.enbis.org/event/16/contributions/340/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Statistical inference for a Wiener-based degradation model with im
perfect maintenance actions under different observation schemes
DTSTART;VALUE=DATE-TIME:20220520T085000Z
DTEND;VALUE=DATE-TIME:20220520T091000Z
DTSTAMP;VALUE=DATE-TIME:20230605T223033Z
UID:indico-contribution-16-331@conferences.enbis.org
DESCRIPTION:Speakers: Margaux Leroy (Univ. Grenoble Alpes)\nIn this articl
e\, technological or industrial equipment subject to degradation is consi
dered. These units undergo maintenance actions\, which reduce the
ir degradation level.\nThe paper considers a degradation model with imperf
ect maintenance effect. The underlying degradation process is a Wiener pro
cess with drift. The maintenance effects are described with an Arithmetic
Reduction of Degradation ($ARD_1$) model. The system is regularly inspecte
d and the degradation levels are measured.\n \nFour different observation
schemes are considered so that degradation levels can be observed between
maintenance actions as well as just before or just after maintenance time
s. In each scheme\, observations of the degradation level between successi
ve maintenance actions are made. In the first observation scheme\, degrada
tion levels just before and just after each maintenance action are observe
d. In the second scheme\, degradation levels just after each maintenance a
ction are not observed but are observed just before. On the contrary\, in
the third scheme\, degradation levels just before the maintenance actions
are not observed but are observed just after. Finally\, in the fourth obs
ervation scheme\, the degradation levels are observed neither just befor
e nor just after the maintenance actions.\n \nThe paper studies the est
imation of the model parameters under these different observation schemes.
The maximum likelihood estimators are derived for each scheme.\nSeveral s
ituations are studied in order to assess the impact of different features
on the estimation quality.\nAmong them\, the number of observations betwee
n successive maintenance actions\, the number of maintenance actions\, the
maintenance efficiency parameter and the location of the observations are
considered. These situations are used to assess the estimation quality an
d compare the observation schemes through an extensive simulation and perf
ormance study.\n\nhttps://conferences.enbis.org/event/16/contributions/331
/
LOCATION:Grenoble
URL:https://conferences.enbis.org/event/16/contributions/331/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Imperfect condition-based maintenance for a gamma degradation proc
ess in presence of unknown parameters
DTSTART;VALUE=DATE-TIME:20220520T083000Z
DTEND;VALUE=DATE-TIME:20220520T085000Z
DTSTAMP;VALUE=DATE-TIME:20230605T223033Z
UID:indico-contribution-16-338@conferences.enbis.org
DESCRIPTION:Speakers: Franck Corset (LJK - Université Grenoble Alpes)\nA
system subject to degradation is considered. The degradation is modelled b
y a gamma process. A condition-based maintenance policy with perfect corre
ctive and imperfect preventive actions is proposed. The maintenance cos
t is derived considering a Markov renewal process. The statistical infere
nce of the degradation and maintenance parameters by the maximum likeliho
od method is proposed. A sensitivity analysis with respect to the differe
nt parameters is car
ried out and the perspectives are detailed.\n\nhttps://conferences.enbis.o
rg/event/16/contributions/338/
LOCATION:Grenoble
URL:https://conferences.enbis.org/event/16/contributions/338/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Quantile Regression via Accelerated Destructive Degradation Modeli
ng for Reliability Estimation
DTSTART;VALUE=DATE-TIME:20220520T081000Z
DTEND;VALUE=DATE-TIME:20220520T083000Z
DTSTAMP;VALUE=DATE-TIME:20230605T223033Z
UID:indico-contribution-16-366@conferences.enbis.org
DESCRIPTION:Speakers: Munwon Lim (Hanyang University)\, Gyu Ri Kim (Depart
ment of Industrial Engineering\, Hanyang University)\nAlong with the shor
tening of production periods\, the manufacturing industry utilizes accele
rated degradation tests (ADT) to estimate the reliability of newly develo
ped products as quickly as possible. In an ADT\, a stress factor relate
d to the failure mechanisms is imposed to cause products to fail faster t
han under normal use conditions. By increasing the degree of stress\, suc
h as voltage\, temperature\, humidity or other external factors\, the per
formance of new products degrades continuously and leads to failure. In s
ome applications\, an accelerated destructive degradation test (ADDT) i
s conducted when test units must be destroyed to measure the performanc
e of the degrading product.\nFor general ADT and ADDT models\, mean estim
ators have been considered as the location measure. However\, estimatio
n based on the mean can be inappropriate for highly skewed data\, becaus
e the lifetime estimation from degradation data with outliers or irregul
arities can be distorted.\nIn this paper\, ADDT modeling based on quanti
le regression (QR) is suggested as a comprehensive approach for asymmetr
ic observations. QR-based ADDT requires fewer assumptions than general p
arametric methods to construct the model and enables more flexible inter
pretation. Through a data application\, our approach provides a clear ad
vantage by inferring a nonlinear degradation path without bias or partia
lity.\n\nhttps://conferences.enbis.org/event/16
/contributions/366/
LOCATION:Grenoble
URL:https://conferences.enbis.org/event/16/contributions/366/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Battery degradation model for mission assignment in a fleet of ele
ctric vehicles
DTSTART;VALUE=DATE-TIME:20220519T145000Z
DTEND;VALUE=DATE-TIME:20220519T151000Z
DTSTAMP;VALUE=DATE-TIME:20230605T223033Z
UID:indico-contribution-16-380@conferences.enbis.org
DESCRIPTION:Speakers: Pedro Dias Longhitano (Volvo Group)\nBattery prognos
tics and health management has recently become a very important and strate
gic topic\, especially with the rise of electric vehicles and electric mobili
ty in general\, which is seen as a key tool to reduce the impact of global
warming. In order for battery health management to be viable\, it is nec
essary to quantify and understand battery state of health (SOH) and its de
gradation mechanisms. A great deal of research has been conducted to understand t
he different degradation processes in a battery [1]\, to identify and unde
rstand the impacts of stress factors [2]\, and to quantify tendencies of d
egradation and predict remaining useful life [3]. In this presentation\, a
n overview of battery degradation modelling and prognostics is given\, exp
loring the definitions of the main stress factors\, and covering the most
common degradation models in the literature. Particular attention is given
to an empirical model based on stress cycle decomposition linking the deg
radation to the history of charge and discharge cycles\, which is flexible
and accurate [4]. This model can be used to estimate the SoH variation of
the battery based on the way the vehicle is driven. A three-step method i
s thus proposed to develop a comprehensive model for degradation induced b
y driving conditions\, combining the aforementioned degradation model\, wi
th battery and vehicle dynamics models\, as well as information on road to
pography and vehicle parameters. The first step of this overall degradatio
n model consists in using the available information on topography\, spee
d limits and crossroads to estimate the electrical power required to car
ry out a given displacement. In the second step\, the required electrica
l power is used to infer a state of charge (SoC) trajectory. Finally\, i
n the last s
tep\, the SoC profile is decomposed into stress cycles that serve as input
for the degradation model. Such a comprehensive SOH evolution model linki
ng route profiles and driving conditions to battery degradation is suitabl
e for decision-making problems related to the optimal management of electr
ic vehicles. The use of this degradation model is illustrated on a use-cas
e of a fleet of electric vehicles that must perform a set of missions: it
is shown how the order of those missions can be decided by optimizing not
 only energy consumption but also battery degradation.\n\n[1] Anthony Bar
ré\, Benjamin Deguilhem\, Sébastien Grolleau\, Mathias Gérard\, Frédéri
c Suard\, Delphine Riu\, A review on lithium-ion battery ageing mechanism
s and estimations for automotive applications\, Journal of Power Source
s\, Volume 241\, 2013\, Pages 680-689.\n[2] Saxena\, Saurabh\, Darius Ro
man\, Valentin Robu\, David Flynn\, and Michael Pecht. 2021. "Battery St
ress Factor Ranking for Accelerated Degradation Test Planning Using Mach
ine Learning". Energies 14\, no. 3: 723.\n[3] Jiaming Fan et al. “A nove
l machine learning method based approach for Li-ion battery prognostic a
nd health management”. In: IEEE Access 7 (2019)\, pp. 160043–160061.\n[4
] Xu\, B.\, Oudalov\, A.\, Ulbig\, A.\, Andersson\, G.\, and Kirschen\, D
.S. (2018). Modeling of lithium-ion battery degradation for cell life as
sessment. IEEE Transactions on Smart Grid\, 9(2)\, 1131–1140. doi:10.110
9/TSG.2016.2578950\n\nhttps://conferences.enbis.org/event/16/contributio
ns/380/
LOCATION:Grenoble
URL:https://conferences.enbis.org/event/16/contributions/380/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Statistical process control versus deep learning for predictive ma
intenance of power plant process data
DTSTART;VALUE=DATE-TIME:20220519T143000Z
DTEND;VALUE=DATE-TIME:20220519T145000Z
DTSTAMP;VALUE=DATE-TIME:20230605T223033Z
UID:indico-contribution-16-362@conferences.enbis.org
DESCRIPTION:Speakers: Henrik Hansen (DTU & Ørsted (Industrial PhD))\n# Ab
stract \n\nThis work is motivated by the undocumented\, practical learni
ngs gained by a predictive maintenance (PdM) development team in the Danis
h energy company Ørsted. The team implements PdM solutions for power plan
t machinery to monitor for faults in the making. Their learnings support t
he hypothesis that the benefits gained from using overly complicated con
dition monitoring models on selected machinery are not significant enoug
h.\n\nTo explore this hypothesis\, we set out to compare two different m
ethodologies for detecting faults in process data. We compare a classica
l latent structure-based method from the field of statistical process cont
rol (SPC) with a standard autoencoder deep neural network. Furthermore\, w
e compare the fault detection performance of these methods with two more e
xperimental deep learning models recently proposed in literature [1]. The
 reason for choosing these specific models is that they a priori seem ve
ry well suit
ed for the modelling task the PdM team had undertaken due to the models’
alleged ability to automate domain knowledge in a data-driven way.\n\nWe
benchmark all methods against each other using first the well-known Tennes
see Eastman Process (TEP) data\, and subsequently data collected from two
feedwater pumps (FWP) at a large Danish combined heat and power plant. \n\
nThe TEP data stems from a simulation tool for generating data from the pr
ocess\, and thus a large number of datasets are generated for each of 20 p
rocess disturbances. For the FWP data\, six historical faults in the form
of leaks are used to test the methods against each other in their ability
 to detect faults as they develop over time. Each method’s ability to det
ect faults is measured using a weighted combination of performance metrics
 such as mean absolute error\, ROC AUC and average precision (AP).\n\nPrelim
inary results of the experiments suggest that detection performance is com
parable between the different models on both datasets\, but that each mode
l seems to come with its own set of advantages in terms of fault detectio
n performance\, such as reaction time to certain types of faults.
\n\nBased on the mentioned datasets and models\, we discuss the quantitati
ve results of these experiments\, as well as other pros and cons\, such a
s the number of modelling decisions\, hyperparameters\, etc.\, of each pa
radigm tha
t may influence the choice of detection model in an industrial setting. \n
\n# References\n\n 1. Schulze\, J.-P\, Sperl\, P\, Böttinger\, K. 2022\,
“Anomaly Detection by Recombining Gated Unsupervised Experts”\, arXiv
preprint: https://doi.org/10.48550/arXiv.2008.13763\n\nhttps://conferences
.enbis.org/event/16/contributions/362/
LOCATION:Grenoble
URL:https://conferences.enbis.org/event/16/contributions/362/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The long road from data collection to maintenance optimization of
industrial equipment
DTSTART;VALUE=DATE-TIME:20220519T141000Z
DTEND;VALUE=DATE-TIME:20220519T143000Z
DTSTAMP;VALUE=DATE-TIME:20230605T223033Z
UID:indico-contribution-16-383@conferences.enbis.org
DESCRIPTION:Speakers: Emmanuel REMY (EDF R&D)\nIn the coming years\, with
the development of intermittent renewable energy sources and the gradual p
hasing out of coal-fired power plants\, combined cycle gas turbines (CCGT)
will play an essential role in regulating electricity production. This ne
ed for flexibility will increase the demands on the equipment of these CCG
T and the issue of optimizing their maintenance will become increasingly i
mportant. To this end\, the use of statistical tools to enhance the value
of data from the operation and maintenance of these plants' equipment is a
possible approach to provide decision support elements.\nIt is in this in
dustrial context that an extensive data collection was carried out for s
everal pieces of conventional repairable equipment (turbines\, pumps...) o
f three EDF CCGTs. The second step consisted of pre-processing and clean
ing these raw data with the support of field experts\, an essential requ
irement for the statistical modeling stage. A wide range of imperfect ma
intenance mod
els implemented in the free R VAM (for Virtual Age Models) package (https:
//rpackages.imag.fr/VAM#) was tested to evaluate the ability of these mode
ls\, on the one hand to reproduce the field reality\, on the other hand to
bring useful insights to help the development of equipment maintenance pl
ans.\nThe talk will present this work\, illustrating it on a piece of equ
ipment and emphasizing its industrial application dimension.\n\nh
ttps://conferences.enbis.org/event/16/contributions/383/
LOCATION:Grenoble
URL:https://conferences.enbis.org/event/16/contributions/383/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Development of an Operational Digital Twin of Locomotive Systems
DTSTART;VALUE=DATE-TIME:20220519T135000Z
DTEND;VALUE=DATE-TIME:20220519T141000Z
DTSTAMP;VALUE=DATE-TIME:20230605T223033Z
UID:indico-contribution-16-364@conferences.enbis.org
DESCRIPTION:Speakers: Gabriel Davidyan (Israel railway)\nA Digital Twin (D
T) is a new and powerful concept that maps a physical structure operating
in a specific context to the digital space. The development and deployme
nt of a DT improves forecasting\, prognostic performance and decision sup
port for operators and managers. DTs have been introduced in various indu
stries across a range of application areas\, including design\, manufactu
ri
ng and maintenance. Due to the large impact of maintenance on the proper f
unctioning of a system\, maintenance is one of the most studied DT applica
tions. In the case of trains\, poor maintenance can put the rolling stoc
k out of service or\, worse\, pose a safety risk to passengers and operat
ors
. Implementing intelligent maintenance strategies can therefore offer trem
endous benefits. This study addresses the development of an architecture f
or DT designed to formulate and evaluate new hypotheses in predictive main
tenance by iterating between physical experiments and computational experi
ments. The designed DT supports a broad perspective on statistical aspects
of simulations and experiments. In addition\, the DT enables real-time pr
ediction and optimization of the actual behavior of a system at any stage
of its life cycle. Examples of safety valves and suspension systems will b
e given.\n\nhttps://conferences.enbis.org/event/16/contributions/364/
LOCATION:Grenoble
URL:https://conferences.enbis.org/event/16/contributions/364/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Reliability degradation and optimal maintenance for information eq
uipment installed on railway cars
DTSTART;VALUE=DATE-TIME:20220519T130000Z
DTEND;VALUE=DATE-TIME:20220519T132000Z
DTSTAMP;VALUE=DATE-TIME:20230605T223033Z
UID:indico-contribution-16-367@conferences.enbis.org
DESCRIPTION:Speakers: Haim Livni (Oslo Reliability)\nA Reliability degrada
tion model was developed in a project involving LCD TV screens installe
d on railway cars. One critical item in the project was an LED strip.\nA
ccelerated life test data are the data source for determining LED reliab
ility. Since the expected life of LEDs is about 10-15 years\, testing th
em until failure is not practical.\nLuminosity-versus-time measurement
s provided by the manufacturer were fitted to a degradation model. As op
posed to [1] and [2]\, where an exponential degradation model for the av
erage luminosity was applied\, our degradation model assumed degradatio
n equations based on the second law of thermodynamics developed by A. Ei
nstein (1905)\, Fokker (1919)\, Planck (1930) and Kolmogorov (1931). Th
e main diff
erence in the approaches is that the degradation function's Taylor expansi
on contains terms of the first and second derivative\, while the models of
[1] and [2] contain only the first.\nAs opposed to [2] we did not fit the
results to an assumed Reliability function (Weibull\, Normal\, Lognormal)
and left it in tabular form. The table allows one to determine required re
liability and maintenance information.\n\nThe following results are de
rived:\n1. PDF of the failure rate as a function of time\n2. Reliability
as a function of time and temperature (for simple and complex components)\
n3. MTBF for a device used for limited and unlimited life. \nA model deve
loped for maintenance costs and spare parts provisioning allows the dev
elopment of an Optimal Preventive Maintenance Policy:\n1. Optimal preventive mainten
ance without individual monitoring.\n2. Optimal preventive maintenance bas
ed on Remaining Useful Life (with monitoring)\n\nReferences\n\n1. Ott\, Mel
anie. "Capabilities and Reliability of LEDs and Laser Diodes." Internal N
ASA Parts and Packaging Publication (1996).\n2. Fan\, J.\, K.C. Yung\, an
d M. Pecht. "Lifetime Estimation of High-Power White LED Using Degradatio
n-Data-Driven Method." IEEE Transactions on Device and Materials Reliabil
ity 12.2 (2012).\n3. Si\, Xiao-Sheng\, et al. "Remaining useful life esti
mation based on a nonlinear diffusion degradation process." IEEE Transact
ions on Reliability 61.1 (2012): 50-67.\n4. Livni\, Haim. "Life cycle mai
ntenance costs for a non-exponential component." Applied Mathematical Mod
elling 103 (2022): 261-286.\n\nhttps://conferences.enbis.org/event/16/con
tributions/367/
LOCATION:Grenoble
URL:https://conferences.enbis.org/event/16/contributions/367/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Data-driven Maintenance Optimization Using Random Forest Algorithm
s
DTSTART;VALUE=DATE-TIME:20220519T124000Z
DTEND;VALUE=DATE-TIME:20220519T130000Z
DTSTAMP;VALUE=DATE-TIME:20230605T223033Z
UID:indico-contribution-16-346@conferences.enbis.org
DESCRIPTION:Speakers: Hasan Misaii (University of Tehran and University o
f Technology of Troyes)\nIn this paper\, a multi-component series system is
considered which is periodically inspected\, and at inspection times the fa
iled components are replaced by new ones. Therefore\, this maintenance ac
tion is perfect corrective maintenance for the failed component\, and it c
an be considered as imperfect corrective maintenance for the system. The i
nspection interval is considered as a decision parameter and the maintenan
ce policy is optimized using the long-run cost rate function. It is assumed th
at there is no information related to components' lifetime distributions a
nd their parameters. Therefore\, an optimal decision parameter is derived
considering historical data (a data storage for the system that includes i
nformation related to past repairs) using density estimation and random fo
rest algorithms. Eventually\, the efficiency of the proposed optimal decis
ion parameter according to available data is compared to the one derived w
hen all information on the system is available.\n\nKeywords: Maintenance
Optimization\, Data-driven Estimation\, Random Forest Algorithm.\n\nhttps:
//conferences.enbis.org/event/16/contributions/346/
LOCATION:Grenoble
URL:https://conferences.enbis.org/event/16/contributions/346/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Maintenance policies for items with their repair processes modelle
d by the extended Poisson processes
DTSTART;VALUE=DATE-TIME:20220519T120000Z
DTEND;VALUE=DATE-TIME:20220519T122000Z
DTSTAMP;VALUE=DATE-TIME:20230605T223033Z
UID:indico-contribution-16-379@conferences.enbis.org
DESCRIPTION:Speakers: Jiaqi Yin (University of Kent)\nOptimisation of main
tenance policies for items with their repair processes modelled by the geo
metric process (GP) has received a good amount of attention. The extended
Poisson process (EPP)\, one of the extensions of the GP\, can be used to m
odel the repair process with its times-between-failures possessing a non-m
onotonic trend. A central issue in the applications of the EPPs in mainten
ance policy optimisation is to determine when the EPP has increasing times-b
etween-failures. This paper aims to answer this question. Numerical exampl
es are provided to illustrate the proposed maintenance policies.\n\nhttps:
//conferences.enbis.org/event/16/contributions/379/
LOCATION:Grenoble
URL:https://conferences.enbis.org/event/16/contributions/379/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Change-level detection for Lévy subordinators
DTSTART;VALUE=DATE-TIME:20220519T093000Z
DTEND;VALUE=DATE-TIME:20220519T095000Z
DTSTAMP;VALUE=DATE-TIME:20230605T223033Z
UID:indico-contribution-16-361@conferences.enbis.org
DESCRIPTION:Speakers: Landy Rabehasaina (Laboratoire de Mathématiques\, U
niversité Franche Comté)\nLet $\\boldsymbol{X}=(X_t)_{t\\ge 0}$ be a pro
cess behaving as a general increasing Lévy process (subordinator) prior t
o hitting a given unknown level $m_0$\, then behaving as another different
subordinator once this threshold is crossed. We address the detection of
this unknown threshold $m_0\\in [0\,+\\infty]$ from an observed trajectory
of the process. Models and issues of this kind are encountered in many are
as such as reliability and quality control in degradation problems. More p
recisely\, we construct\, from a sample path and for each $\\epsilon >0$\,
a so-called detection level $L_\\epsilon$ by considering a CUSUM inspire
d procedure. Under mild assumptions\, this level is such that\, when $m_0
$ is infinite (i.e. when no changes occur)\, its expectation $ \\mathbb{E}
_{\\infty}(L_{\\epsilon})$ tends to $+\\infty$ as $\\epsilon$ tends to $0$
\, and the expected overshoot $ \\mathbb{E}_{m_0}([L_{\\epsilon} - m_0]^+)
$\, when the threshold $m_0$ is finite\, is negligible compared to $ \ma
thbb{E}_{\\infty}(L_{\\epsilon})$ as $\\epsilon$ tends to $0$. Numerical i
llustrations are provided when the Lévy processes are gamma processes wit
h different shape parameters. This is joint work with Z. Al Masry and G. Ver
dier.\n\nhttps://conferences.enbis.org/event/16/contributions/361/
LOCATION:Grenoble
URL:https://conferences.enbis.org/event/16/contributions/361/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Fuel cell stochastic deterioration modeling for energy management
in a multi-stack system
DTSTART;VALUE=DATE-TIME:20220519T091000Z
DTEND;VALUE=DATE-TIME:20220519T093000Z
DTSTAMP;VALUE=DATE-TIME:20230605T223033Z
UID:indico-contribution-16-360@conferences.enbis.org
DESCRIPTION:Speakers: Jian Zuo (PhD student)\nFuel cells use hydrogen and
oxygen as reactants to produce electricity through electrochemical reactio
ns\, with water as the only byproduct. They are widely used in various appli
cations\, e.g. transport\, due to their high efficiency\, energy density\,
and limited impact on environmental resources. However\, fuel cell deplo
yment is held back by multiple barriers such as their high cost and thei
r shorter-than-required lifetime. To overcome these barriers\, using multi-s
tack fuel cells (MFC) instead of a single one is a promising solution. Fir
stly\, MFC offers improved reliability thanks to the multi-stack structure
. Another advantage is that the durability of multi-stack FC can also be i
ncreased by optimally distributing the power demand among different stacks
by an efficient Energy Management Strategy (EMS)\, and thus avoiding degr
aded mode operation [1]. In short\, MFC systems are relevant to meet this
challenge if properly dimensioned and managed by an appropriate EMS taking
into account the deterioration of the cells. In order to implement such a
degradation-aware EMS\, it is mandatory to build a degradation model that
integrates the dynamic behavior of MFC according to the operating conditi
ons. Fuel cell performance degradation is linked to complex electrochemica
l\, mechanical\, and thermal mechanisms\, which are difficult to model usi
ng a “white-box” approach\, relying on the exact laws of physics. With
in this context\, the aim of the present work is to propose a fuel cell de
gradation model adapted for the energy management of MFC.\nThe deteriorati
on behavior of an MFC is characterized by two main features: (i) it is lo
ad-dependent\, i.e. the degradation is affected by the load delivered b
y the stack\; (ii) it is stochastic and exhibits a stack-to-stack variabil
ity. A degradation-aware energy management system allocates a load to deli
ver to the different stacks of the MFC system as a function of their degra
dation state and of their predicted degradation behavior. The deterioratio
n dynamics must thus be modeled as a function of the load power. Another s
pecificity of fuel cells is their individual deterioration variability\, w
hich can be due to stochasticity in the intrinsic fuel cell deterioration
phenomena. This stochasticity leads to different deterioration levels eve
n for identical stacks operating under identical load profiles.\nIn order to me
et these modelling requirements\, this work develops a load-dependent stoc
hastic deterioration model for an MFC. First\, the overall stack resistanc
e is chosen as the degradation indicator\, as it carries the key aging inf
ormation of a fuel cell stack. Then\, a stochastic non-homogeneous Gamma p
rocess is used to model the deterioration of the fuel cell\, i.e. the incr
ease in the fuel cell resistance. The shape parameter of the considered Ga
mma process is further modeled by an empirical function of the fuel cell o
peration load in order to make the resistance deterioration load-dependent
. Finally\, to model the individual deterioration heterogeneity\, a random
effect is added to the Gamma process on its scale parameter\, taken as a
random variable following a probability distribution (a Gamma law is chose
n in this work).\nResistance degradation paths can then be simulated fro
m the proposed deterioration model\; from these paths\, the first hitting t
ime distribution of a failure threshold (or\, equivalently\, the remainin
g useful life distribution) can be estimated and the reliability of the s
ystem can be analyzed. The proposed model can also be used to optimize th
e load allocation strategy for an MFC [2].\n\nKeywords: Multi-stack f
uel cells\, load-dependent deterioration model\, stochastic modelling\, Ga
mma process\, random effect.\n\nReferences:\n[1] Marx\, N.\, et al. "
On the sizing and energy management of an hybrid multistack fuel cell–Ba
ttery system for automotive applications." International Journal of Hydrog
en Energy 42.2 (2017): 1518-1526.\n[2] Zuo\, J.\, C. Cadet\, Z. Li\, C. B
érenguer\, and R. Outbib (2022). Post-prognostics decision-making strateg
y for load allocation on a stochastically deteriorating multi-stack fuel c
ell system. To appear in Proc. Inst. Mech. Eng - Part O: Journal of Risk a
nd Reliability.\n\nhttps://conferences.enbis.org/event/16/contributions/36
0/
LOCATION:Grenoble
URL:https://conferences.enbis.org/event/16/contributions/360/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Stochastic Drift Model for Discrete Parameters
DTSTART;VALUE=DATE-TIME:20220519T085000Z
DTEND;VALUE=DATE-TIME:20220519T091000Z
DTSTAMP;VALUE=DATE-TIME:20230605T223033Z
UID:indico-contribution-16-341@conferences.enbis.org
DESCRIPTION:Speakers: Lukas Sommeregger (Infineon Technologies Austria AG)
\, Horst Lewitschnig (Infineon Technologies Austria AG)\nIn the context of
semiconductor reliability\, predictive maintenance and calculation of res
idual useful life are important topics under the greater umbrella of progn
ostics and health management.\nEspecially in automotive applications\, wit
h higher expected usage times of self-driving autonomous vehicles\, it bec
omes more and more important to recognize degradation processes early\, so
that preventive maintenance actions can be taken automatically. For semic
onductor producers\, it is important to account for life-time degradation
of electronic devices when guaranteeing quality standards for their custo
mers.\nFor this\, accurate and fast statistical models are needed to identif
y degradation by parameter drift. Typically\, electrical parameters have s
pecified limits in which they need to stay over their whole life cycle. \n
Efficient life-time simulations are performed by so-called accelerated str
ess tests. In those tests\, electrical parameters are measured before\, du
ring\, and after higher-than-usual stress conditions. These stress test da
ta represent the expected life-time behavior of these parameters.\nUsing m
odels based on these data\, tighter limits\, so-called test limits\, are t
hen introduced at production testing to guarantee life-time quality of the
devices for the customer.\nBased on these data\, quality control measures
like guard bands are introduced. Guard bands are the differences between s
pecification and test limits and account\, amongst others\, for lifetime d
rift effects of electrical parameters.\nModels to calculate lifetime drift
have to be flexible enough to accurately represent a large number of stre
ss test behaviors while being computationally light-weight enough to run o
n edge devices in the vehicles.\nWe present a statistical model for discre
te parameters based on nonparametric interval estimation of conditional tr
ansition probabilities in Markov chains that allows for flexible modelling
and fast computation. We then show how to use the model to formulate an i
nteger optimization problem to calculate optimal test limits. Calculation
s for both arbitrary parameter distributions at production testing and fo
r defined initial distributions are shown. Finally\, we give an approach t
o calculate remaining useful lifetime for electronic components. \nThe wo
rk has been performed in the project ArchitectECA2030 under grant agreemen
t No 877539. The project is co-funded by grants from Germany\, Netherlands
\, Czech Republic\, Austria\, Norway and the Electronic Component Systems fo
r European Leadership Joint Undertaking (ECSEL JU).\nAll ArchitectECA2030
related communication reflects only the author’s view and ECSEL JU and t
he Commission are not responsible for any use that may be made of the info
rmation it contains.\n\nhttps://conferences.enbis.org/event/16/contributio
ns/341/
LOCATION:Grenoble
URL:https://conferences.enbis.org/event/16/contributions/341/
END:VEVENT
BEGIN:VEVENT
SUMMARY:On the modelling of dependence between univariate Lévy wear proce
sses and impact on the reliability function
DTSTART;VALUE=DATE-TIME:20220519T083000Z
DTEND;VALUE=DATE-TIME:20220519T085000Z
DTSTAMP;VALUE=DATE-TIME:20230605T223033Z
UID:indico-contribution-16-345@conferences.enbis.org
DESCRIPTION:Speakers: Sophie MERCIER (University of Pau and Pays de l'Adou
r)\nUnivariate Lévy processes have become quite common in the reliability
literature for modelling accumulative deterioration. In case of correlate
d deterioration indicators\, several possibilities have been suggested for
modelling their dependence. The point of this study is the analysis and c
omparison of three different dependence models considered in the most rece
nt literature: 1. Use of a regular copula\, where the dependence in a mult
ivariate increment is modelled through a time-independent regular copula\;
2. Superposition of independent univariate Lévy processes\, where each m
arginal process is constructed as the sum of independent univariate Lévy
processes $\\{X_j(t)\, t\\geq 0\\}$ with possibly common $\\{X_j(t)\, t\\g
eq 0\\}$ between margins\; 3. Use of a Lévy copula. The three methods are
first presented and analysed. As for the model based on a regular copula\
, it is shown that the corresponding multivariate process cannot have inde
pendent increments in general\, so that it is not a Lévy process. This me
ans that the distribution of the multivariate process is not fully charact
erized in this way. The second and third models both lead to a multivariat
e Lévy process\, with a limited dependence range for the second superposi
tion-based model\, which is not the case for the third Lévy copula-based
model. However\, this last model is technically more demanding to use\, an
d numerical methods (such as Monte-Carlo simulations) have to be used fo
r its numerical assessment. Practical details are given in the paper and t
wo Monte-Carlo simulation procedures are compared.\n\nA two-component ser
ies system is next considered\, with joint deterioration level modelled by
one of the three previous models. Each component is considered as failed
as soon as its deterioration level is beyond a given failure threshold. Th
e impact of a wrong choice for the model is explored\, based on data simul
ated from one of the three models and then fitted to all three models. I
t is shown that a wrong model choice can lead to either overestimation o
r underestimation of the reliability function of the two-component serie
s system\, which could be problematic in an applied context.\n\nhttps://co
nferences.enbis.org/event/16/contributions/345/
LOCATION:Grenoble
URL:https://conferences.enbis.org/event/16/contributions/345/
END:VEVENT
END:VCALENDAR