BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CERN//INDICO//EN
BEGIN:VEVENT
SUMMARY:Lessons Learned from a Career of Design of Experiments Collaborati
ons
DTSTART;VALUE=DATE-TIME:20210913T151500Z
DTEND;VALUE=DATE-TIME:20210913T161500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-178@conferences.enbis.org
DESCRIPTION:Speakers: Christine Anderson-Cook ()\nGeorge Box made many pro
found and enduring theoretical and practical contributions to statistica
l design of experiments and response surface methodology\, and these hav
e had a lasting influence on industrial engineering and quality control a
pplications. His focus on using statistical tools in the right way to sol
ve the right real-world problem has been an inspiration throughout my car
eer. Our statistical training often leads us to focus narrowly on optimal
ity\, randomization and quantifying performance. However\, the practical a
spects of implementation\, matching the design to what the experimenter re
ally needs\, using available knowledge about the process under study to im
prove the design\, and proper respect for the challenges of collecting dat
a are often under-emphasized and can undermine the success of design of ex
periments collaborations. In this talk\, I share key lessons learned and p
ractical advice from 100+ data collection collaborations with scientists a
nd engineers across a broad spectrum of applications.\n\nhttps://conferenc
es.enbis.org/event/11/contributions/178/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/178/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Infusing Statistical Engineering at NASA
DTSTART;VALUE=DATE-TIME:20210913T123000Z
DTEND;VALUE=DATE-TIME:20210913T133000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-216@conferences.enbis.org
DESCRIPTION:Speakers: Peter A. Parker (NASA Langley Research Center)\nThe
discipline of statistical engineering has gained recognition within NASA fo
r spurring innovation and efficiency\, and it has demonstrated significant
impact. Aerospace research and development benefits from an application-
focused statistical engineering perspective to accelerate learning\, maxim
ize knowledge\, ensure strategic resource investment\, and inform data-dri
ven decisions. In practice\, a statistical engineering approach features
immersive collaboration and teaming with non-statistical disciplines to de
velop solution strategies that integrate statistical methods with subject-
matter expertise to meet challenging research objectives. This presentati
on provides an overview of infusing statistical engineering at NASA and il
lustrates its practice through pioneering case studies in aeronautics\, sp
ace exploration\, and atmospheric science.\n\nhttps://conferences.enbis.or
g/event/11/contributions/216/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/216/
END:VEVENT
BEGIN:VEVENT
SUMMARY:An Ode to Tolerance: beyond the significance test and p-values
DTSTART;VALUE=DATE-TIME:20210915T124000Z
DTEND;VALUE=DATE-TIME:20210915T130000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-147@conferences.enbis.org
DESCRIPTION:Speakers: Bernard Francq ()\nIn comparative statistical tests
of parallel treatment groups\, a new drug is commonly considered superior
to the current version if the results are statistically significant. Signi
ficance is then based on confidence intervals and p-values\, the reporting
of which is requested by most top-level medical journals. However\, in re
cent years there have been ongoing debates on the usefulness of these para
meters\, leading to a ‘significance crisis’ in science.\n\nWe will sho
w that this conventional quest for statistical significance can lead to co
nfusing and misleading conclusions for the patient\, as it focuses on the
average difference between treatment groups. By contrast\, prediction or t
olerance intervals deliver information on the individual patient level\, a
nd allow a clear interpretation following both frequentist and Bayesian pa
radigms. \n\nAdditionally\, treatment successes on the patient level can b
e compared using the concept of individual superiority probability (ISP).
While a p-value for mean treatment effects converges to 0 or 1 when the sa
mple size gets large\, the ISP is shown to be independent of the sample si
ze\, which constitutes a major advantage over the conventional concept of
statistical significance. The relationship between p-values\, ISP\, confid
ence intervals and tolerance intervals will be discussed and illustrated w
ith analyses of some real-world data sets.\n\nhttps://conferences.enbis.or
g/event/11/contributions/147/
LOCATION:Room 2
URL:https://conferences.enbis.org/event/11/contributions/147/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Accreditation of statisticians
DTSTART;VALUE=DATE-TIME:20210915T122000Z
DTEND;VALUE=DATE-TIME:20210915T124000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-222@conferences.enbis.org
DESCRIPTION:Speakers: Magnus Pettersson ()\nAccreditation of statistician
s has previously been offered by the ASA and the RSS\, and since 2020 by F
ENStatS. \n\nThe purpose of accreditation is to focus on the professionali
sm\, development and quality of applied statistical work. We believe that t
he need for good statistics and good statisticians is increasing\, and an a
ccreditation programme can provide one tool in this process. \n\nThe accre
ditation summarizes the progress and professionalism of the applicant. It i
s a career path\, especially for applied statisticians\, that adds value t
o the university degree. \n\nAn applicant shall provide proof of:\n\nA - E
ducation\, minimum an MSc according to the Bologna process\nB - Experienc
e\, minimum 5 years of work experience\nC - Development\, ongoing professi
onal development\nD - Communication\, samples of work done\nE - Ethics\, k
nowledge of and adherence to relevant ethical standards\nF - Membership i
n a FENStatS member association\n\nFENStatS provides\, in cooperation wit
h its member organisations\, a standardised system for accreditation tha
t is valid in all its member areas. Currently\, accreditation is availabl
e to members in Austria\, France\, Italy\, Portugal\, Spain\, Sweden and S
witzerland. \n\nFENStatS accreditation is also mutually recognised with t
he ASA's PStat(R). \n\nFurther information about FENStatS accreditation ca
n be found at www.fenstats.eu. Applications are submitted through the app
lication portal on the same page.\n\nhttps://conferences.enbis.org/event/1
1/contributions/222/
LOCATION:Room 2
URL:https://conferences.enbis.org/event/11/contributions/222/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Addressing statistics and data science educational challenges wit
h simulation platforms
DTSTART;VALUE=DATE-TIME:20210915T120000Z
DTEND;VALUE=DATE-TIME:20210915T122000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-119@conferences.enbis.org
DESCRIPTION:Speakers: Ron Kenett (KPA Group and Samuel Neaman Institute\, T
echnion\, Israel)\, Chris Gotwalt (JMP Division\, SAS\, Research Triangle)
\nComputer-age statistics\, machine learning and\, in general\, data analy
tics are having a ubiquitous impact on industry\, business and services. T
his data transformation requires a growing workforce that is up to the jo
b in terms of knowledge\, skills and capabilities. The deployment of analy
tics needs to address organizational needs\, invoke proper methods\, buil
d on adequate infrastructures and provide the right skills to the right p
eople. The talk will show how embedding simulations in analytic platforms c
an provide an efficient educational experience both to students\, in coll
eges and universities\, and to company employees engaged in lifelong lear
ning initiatives. Specifically\, we will show how a simulator\, such as t
he ones provided at https://intelitek.com/\, can be used to learn tools i
nvoked in monitoring\, diagnostic\, prognostic and prescriptive analytics
. We will also emphasize that such upskilling requires a focus on concept
ual understanding\, affecting both the pedagogical approach and the learn
ing assessment tools. The topics covered\, from an educational perspectiv
e\, include information quality\, data science\, industrial statistics\, h
ybrid teaching\, simulations and conceptual understanding. Throughout the p
resentation\, the JMP platform (www.jmp.com) will be used to demonstrate t
he points made in the talk.\n\nReference\n• Marco Reis & Ron S. Kenett (2
017) A structured overview on the use of computational simulators for tea
ching statistical methods\, Quality Engineering\, 29:4\, 730-744.\n\nhttp
s://conferences.enbis.org/event/11/contributions/119/
LOCATION:Room 2
URL:https://conferences.enbis.org/event/11/contributions/119/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Customer prioritization for marketing actions
DTSTART;VALUE=DATE-TIME:20210915T104000Z
DTEND;VALUE=DATE-TIME:20210915T110000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-153@conferences.enbis.org
DESCRIPTION:Speakers: Ignasi Puig ()\nSelecting customers for marketing ac
tions is an important decision for companies. The profitability of a custo
mer and their inactivity risk are two important aspects of this selection p
rocess. These indicators can be obtained using the well-known Pareto/NBD m
odel. This work proposes clustering customers based on their purchase freq
uency and purchase value per period before fitting the Pareto/NBD model t
o each cluster. This initial cluster model allows estimating the customer
s' purchase value and improves the parameter estimation accuracy of the Pa
reto/NBD by using similar individuals in the fitting. Models are implement
ed using Bayesian inference so as to determine the uncertainty behind the d
ifferent estimates. Finally\, using the outputs of both models\, the initi
al cluster and the Pareto/NBD\, the project developed a guideline to class
ify clients into interpretable groups to facilitate their prioritization f
or marketing actions. The methodology was developed and implemented on a s
et of 25\,600 sales from a database of 1\,500 customers of a beauty produc
ts wholesaler.\n\nhttps://conferences.enbis.org/event/11/contributions/153/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/153/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Greenfield Challenge 2021
DTSTART;VALUE=DATE-TIME:20210913T161500Z
DTEND;VALUE=DATE-TIME:20210913T171500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-236@conferences.enbis.org
DESCRIPTION:Speakers: Antonella Bodini (CNR-IMATI)\nI will present a brief
overview of my most recent experiences in disseminating statistical cultu
re: participation in the virtual event STEMintheCity 2020 and the creation
of statistics pills for a general public\, available on the Outreach webs
ite of the National Research Council of Italy.\n\nI will conclude with a sho
rt presentation of the ongoing multidisciplinary research activity on card
iology and of the related aspects of dissemination.\n\nhttps://conferences
.enbis.org/event/11/contributions/236/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/236/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Harnessing the recondite role of randomization in today's scientif
ic\, engineering\, and industrial world
DTSTART;VALUE=DATE-TIME:20210915T144500Z
DTEND;VALUE=DATE-TIME:20210915T154500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-231@conferences.enbis.org
DESCRIPTION:Speakers: Tirthankar Dasgupta (Rutgers University)\nThe random
ized experiment is a quintessential methodology in science\, engineering\, bus
iness and industry for assessing causal effects of interventions on outcom
es. Randomization tests\, conceived by Fisher\, are useful tools to analyz
e data obtained from such experiments because they assess the statistical
significance of estimated treatment effects without making any assumptions
about the underlying distribution of the data. Other attractive features
of randomization tests include flexibility in the choice of test statistic
and adaptability to experiments with complex randomization schemes and no
n-standard (e.g.\, ordinal) data. In the past\, these tests' major drawbac
k was their possibly prohibitive computational requirements. Modern comput
ing resources make randomization tests pragmatic\, useful tools driven pri
marily by intuition. In this talk we will discuss a principled approach to
conducting randomization-based inference in a wide array of industrial an
d engineering settings and demonstrate their advantage using examples. We
will also briefly argue that randomization tests are natural and effective
tools for data fusion\, that is\, combining results from an ensemble of s
imilar or dissimilar experiments. Finally\, if time permits\, we will also
discuss how this knowledge can be easily communicated to students and pra
ctitioners and mention some available computing resources.\n\nhttps://conf
erences.enbis.org/event/11/contributions/231/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/231/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A comparison of a new\, open-source graphical user interface to R
DTSTART;VALUE=DATE-TIME:20210914T102000Z
DTEND;VALUE=DATE-TIME:20210914T104000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-213@conferences.enbis.org
DESCRIPTION:Speakers: Bryan Dodson (SKF USA Inc.)\nOrganizations\, both la
rge and small\, have a difficult time trying to standardize. In the field
of statistical methods standardizing on a software package is especially d
ifficult. There are over 50 commercial options\, over 40 open source optio
ns\, and add-ins for spreadsheets and engineering tools. Educational licen
ses provide low costs to universities\, but graduates often find their org
anization does not use the same software they were taught at the universit
y. One of the most popular software solutions is **R**. **R** is popular b
ecause it is free\, powerful\, and covers virtually every statistical r
outine. Many frown upon **R** because it requires the user to learn script
ing. There are some graphical user interfaces for **R**\, such as RStudio\
, but these have not met the ease-of-use level desired by most users. To a
ddress this issue\, several leading universities have collaborated and hav
e created a new\, user-friendly interface for **R**. The project is called
**JASP**\, and it is open source. This paper will demonstrate some key in
terfaces and capabilities using standard data sets for verification.\n\nht
tps://conferences.enbis.org/event/11/contributions/213/
LOCATION:Room 2
URL:https://conferences.enbis.org/event/11/contributions/213/
END:VEVENT
BEGIN:VEVENT
SUMMARY:When\, Why and How Should Shewhart Control Chart Constants Be Chan
ged?
DTSTART;VALUE=DATE-TIME:20210914T100000Z
DTEND;VALUE=DATE-TIME:20210914T102000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-186@conferences.enbis.org
DESCRIPTION:Speakers: Vladimir Shper ()\nShewhart Control Charts (ShCCs) a
re part and parcel of stability and capability analysis of any process. Th
ey have long been known and widely used all over the world. The performanc
e of ShCCs depends critically on the values of the control limits\, which i
n turn depend on the values of the so-called control chart constants that a
re considered invariable (for any given sample size) in all SPC literature f
or practitioners (standards\, guides\, handbooks\, etc.). \nOn the other h
and\, many researchers have shown that for non-normal distribution functio
ns (DFs) the control limits may differ notably from the standard values. H
owever\, there has so far been no discussion about changing the values of t
he ShCC constants. Meanwhile\, this is obviously the simplest way (for pra
ctitioners) to take the effect of non-normality into consideration. \nFir
stly\, we discuss what specific changes of the chart constants should be t
aken into account. Secondly\, we simulated different DFs lying in differen
t places of the well-known (β1-β2) plane and calculated (by direct simula
tion) the values of the bias correction factors (d2\, d3\, d4)\, which are t
he basis for all chart constants. Our results agree very well with previou
s data\, but further analysis showed that the impact of non-normality on t
he construction and interpretation of ShCCs can in no way be neglected. Th
irdly\, we suggest rejecting the prevalent belief in the constancy of the c
ontrol chart constants and explain when and how they should be changed.\n
\nhttps://conferences.enbis.org/event/11/contributions/186/
LOCATION:Room 2
URL:https://conferences.enbis.org/event/11/contributions/186/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Analysis of resistance of spot welding process data in the automot
ive industry via functional clustering techniques
DTSTART;VALUE=DATE-TIME:20210914T104000Z
DTEND;VALUE=DATE-TIME:20210914T110000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-207@conferences.enbis.org
DESCRIPTION:Speakers: Christian Capezza (University of Naples Federico II)
\nQuality evaluation of resistance spot welding (RSW) joints of metal shee
ts in the automotive sector generally depends on expensive and time-consum
ing offline testing\, which is impracticable in full-scale manufacturing. A gre
at opportunity to face this problem is the increasing digitization in the I
ndustry 4.0 framework\, which makes online measurements of process paramet
ers available for every joint manufactured. Among the parameters that can b
e monitored\, the so-called dynamic resistance curve (DRC) is considered t
he technological signature of the spot weld. This work aims to demonstrat
e in this context the potential and practical relevance of clustering algo
rithms for functional data\, i.e.\, data represented by curves varying ove
r a continuum. The objective is to partition DRCs into homogeneous groups r
elated to spot welds with common mechanical and metallurgical characterist
ics. The functional data approach has the advantage that it does not need f
eature extraction\, which is arbitrary and problem specific.\nWe discuss t
he most promising functional clustering techniques and apply them to a rea
l-case study on DRCs acquired during lab tests at Centro Ricerche Fiat. Th
rough the functional clustering approach\, we found that the partitions ob
tained appear to be related to the electrode wear status\, which is surmis
ed to affect the final quality of the RSW joint. R code and the ICOSAF pro
ject data are made available at https://github.com/unina-sfere/funclustRS
W/\, where we also provide an essential tutorial on how to implement the p
roposed clustering algorithms.\n\nhttps://conferences.enbis.org/event/11/c
ontributions/207/
LOCATION:Room 2
URL:https://conferences.enbis.org/event/11/contributions/207/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Application of domain-specific language models for quality and tec
hnical support in the Food and Beverage Industry
DTSTART;VALUE=DATE-TIME:20210914T154500Z
DTEND;VALUE=DATE-TIME:20210914T160500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-205@conferences.enbis.org
DESCRIPTION:Speakers: Peng Liu ()\, Chiara Mondino ()\nIssue resolution i
s a critical process in the manufacturing sector to sustain productivity a
nd quality\, especially in the Food and Beverage Industry\, where aseptic p
erformance is critical. As a leader in this industry\, Tetra Pak has buil
t a database of quality events reported by Tetra Pak technicians\, each co
ntaining domain knowledge from experts. In this paper\, we present an inte
rnally developed model framework that uses a domain-specific language mode
l to address two primary natural language challenges impacting the resolu
tion time: \n\n1. Automatically classify a new reported event into the pro
per existing class \n2. Suggest existing solutions when a new event is bei
ng reported\, ranked by the relevance of the issue descriptions (free tex
t documented by the technician) \n\nOur study shows that the language mode
l benefits from training on domain-specific data compared with models trai
ned on open-domain data. For task 1\, the language model trained on the do
main-specific data achieves an accuracy of over 85% and an average F1 scor
e of over 80%. For task 2\, the domain-specific deep learning model is com
bined with a bag-of-words retrieval function-based algorithm to build an a
dvanced search engine with an average precision of 53%.\n\nhttps://confere
nces.enbis.org/event/11/contributions/205/
LOCATION:Room 5
URL:https://conferences.enbis.org/event/11/contributions/205/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Real-time monitoring of functional data
DTSTART;VALUE=DATE-TIME:20210914T152500Z
DTEND;VALUE=DATE-TIME:20210914T154500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-188@conferences.enbis.org
DESCRIPTION:Speakers: Fabio Centofanti (University of Naples)\nRecent impr
ovements in data acquisition technologies have produced data-rich environm
ents in every field. Particularly relevant is the case where data are apt
to be modelled as functions defined on multidimensional domain\, which ar
e referred to as functional data. A typical problem in industrial applica
tions deals with evaluating the stability over time of some functional qua
lity characteristics of interest. To this end\, profile monitoring is the
suite of statistical process control (SPC) methods that deal with quality
characteristics that are functional data. While the main aim of the profi
le monitoring methods is to assess the stability of the functional quality
characteristic\, in some applications\, the interest lies in understand
ing if the process is working properly before its completion\, i.e.\, in t
he real-time monitoring of a functional quality characteristic. This wor
k presents a new solution to this task\, based on the idea of real-time al
ignment and simultaneous monitoring of phase and amplitude variations. The
proposal is to iteratively apply at each time point a procedure consistin
g of three main steps: i) alignment of the partially observed functional d
ata to the reference observation through a registration procedure\; ii) di
mensionality reduction through a modification of the functional principal
component analysis (FPCA) specifically designed to consider the phase vari
ability\; iii) monitoring of the resulting coefficients. The effectiveness
of the proposed method is demonstrated through both an extensive Monte Ca
rlo simulation and a real-data example.\n\nhttps://conferences.enbis.org/e
vent/11/contributions/188/
LOCATION:Room 5
URL:https://conferences.enbis.org/event/11/contributions/188/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Constructing nonparametric control charts for correlated and indep
endent data using resampling techniques
DTSTART;VALUE=DATE-TIME:20210914T150500Z
DTEND;VALUE=DATE-TIME:20210914T152500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-180@conferences.enbis.org
DESCRIPTION:Speakers: Priscila Guayasamín (Dep. de Matemática\, Escuela P
olitécnica Nacional)\nNon-parametric control charts based on data depth a
nd resampling techniques are designed to monitor multivariate independent a
nd dependent data.\n\nPhase I\n-------\n\nDependent and independent cases
\n\n 1. The depths $ D_F (X_i) $ are obtained and sorted in ascending orde
r.\n 2. The lower control limit $ (LCL) $ is calculated as the quantile a
t the $ \\alpha $ level of the observations under the null hypothesis\, su
ch that the percentage of false alarms is approximately equal to $ \\alph
a $.\n 3. If $ D (X_i) \\leq LCL $ then the process is out of control.\n\n
For the estimation of the quantile\, the smoothing bootstrap and the stat
ionary bootstrap have been applied in the independent and dependent cas
es\, respectively.\n\nPhase II\n--------\n\n 1. From the refere
nce sample $ \\{X_1\, ...\, X_n \\} $ the depths $ D(X_i) $ are calculate
d for $ i = 1\, ...\, n $\, and based on this calibration sample the dept
hs of the monitoring sample $ D(Y_j) $ are obtained for $ j = n + 1\, ...
\, m $.\n 2. Monitor the process: if an observation satis
fies $ D (Y_j) \\leq LCL $ then the process is out of control.\n 3. Calcul
ate the percentage of rejection as the average of observations under the l
ower control limit.\n\nThe simplicial depth in general has a better perfo
rmance for all sample sizes. It is noted that as the sample size increase
s\, the Tukey and simplicial measures yield better results.\n\nhttps://co
nferences.enbis.org/event/11/contributions/180/
LOCATION:Room 5
URL:https://conferences.enbis.org/event/11/contributions/180/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Shiryaev-Roberts Control Chart for Markovian Count Time Series
DTSTART;VALUE=DATE-TIME:20210914T144500Z
DTEND;VALUE=DATE-TIME:20210914T150500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-129@conferences.enbis.org
DESCRIPTION:Speakers: Sebastian Ottenstreuer ()\nThe research examines the
zero-state and the steady-state behavior of the Shiryaev-Roberts (SR) pro
cedure for Markov-dependent count time series\, using the Poisson INARCH(1
) model as the representative data-generating count process. For the purpo
se of easier evaluation\, the performance is compared to existing CUSUM re
sults from the literature. The comparison shows that SR performs at least
as well as its more popular competitor in detecting changes in the process
distribution. In terms of usability\, however\, the SR procedure has a pr
actical advantage\, which is illustrated by an application to a real data
set. In sum\, the research reveals the SR chart to be the better tool for
monitoring Markov-dependent counts.\n\nhttps://conferences.enbis.org/event
/11/contributions/129/
LOCATION:Room 5
URL:https://conferences.enbis.org/event/11/contributions/129/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Design-Expert and Stat-Ease360: Easy and Efficient as Illustrated
by Examples
DTSTART;VALUE=DATE-TIME:20210915T133000Z
DTEND;VALUE=DATE-TIME:20210915T140000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-229@conferences.enbis.org
DESCRIPTION:Speakers: Martin Bezener (Stat-Ease®)\nThe book "Application
s of DoE in Engineering and Science" by Leonard Lye contains a wealth of des
ign of experiments (DOE) case studies\, including factorial designs\, frac
tional factorial designs\, various RSM designs\, and combination designs.
A selection of these case studies will be presented using the latest versi
on of Design Expert®\, a software package developed for use in DOE applic
ations\, and Stat-Ease®360\, a cutting-edge advanced statistical engineer
ing package. The presentation includes the design creation as well as the
analysis of the data. The talk will allow interaction with the attendees b
y discussing every step of building the design as well as the analysis of
the data. This demonstration will prove the ease and the thoroughness of S
tat-Ease software.\n\nReference:\nLye\, L.M. (2019) Applications of DOE in
Engineering and Science: A Collection of 26 Case Studies\, 1st ed.\n\nhtt
ps://conferences.enbis.org/event/11/contributions/229/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/229/
END:VEVENT
BEGIN:VEVENT
SUMMARY:What’s New In JMP 16
DTSTART;VALUE=DATE-TIME:20210915T130000Z
DTEND;VALUE=DATE-TIME:20210915T133000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-183@conferences.enbis.org
DESCRIPTION:Speakers: Chris Gotwalt ()\nJMP 16 marks a substantial expansi
on of JMP’s capabilities. In the area of DoE\, JMP 16 introduces the can
didate set designer\, which gives the user complete control over the possi
ble combinations of factor settings that will be run in the experiment. Th
e candidate set design capability is also very useful as an approach to De
sign for Machine Learning\, where we use principles of optimal design to c
hoose a candidate set. JMP Pro 16 also introduces Model Screening\, which a
utomates the process of finding the best machine learning model across man
y different families of models\, such as neural networks\, tree-based mode
ls\, and Lasso regressions\, reducing the time spent manually fitting and c
omparing them. JMP Pro's Text Explorer platform can now perform Sentiment A
nalysis\, which extracts a measure of how positive or negative a document i
s. It also introduces Term Selection\, a patented approach to identifying w
ords and phrases that are predictive of a response. The SEM platform has s
een major upgrades in the interactivity of the path diagram. Along the wa
y\, we will also give pointers to other useful capabilities that make JMP a
nd JMP Pro 16 a powerful tool for data science and statistics.\n\nhttps://
conferences.enbis.org/event/11/contributions/183/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/183/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Explainable AI and Predictive Maintenance
DTSTART;VALUE=DATE-TIME:20210914T140000Z
DTEND;VALUE=DATE-TIME:20210914T143000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-226@conferences.enbis.org
DESCRIPTION:Speakers: Florian Sobieczky (Software Competence Center Hagenb
erg GmbH)\nNon-linear predictive machine learning models (such as deep lea
rning) have emerged as a successful approach in many industrial applicatio
ns\, as their predictive accuracy often significantly surpasses that of cl
assical statistical approaches. Predictive maintenance tasks (such as pred
icting change points or detecting anomalies) benefit particularly from thi
s improvement. However\, the ability to interpret the increase in accurac
y is not generally delivered alongside the application of these models\, a
nd in several manufacturing scenarios a prescriptive solution is in high d
emand. The talk surveys several methods to render non-linear predictive m
odels for time series data explainable and also introduces a new change po
int detection technique involving a Long Short-Term Memory (LSTM) neural n
etwork. The focus on time series is due to the specific need for methods f
or this data type in manufacturing and therefore in predictive maintenanc
e scenarios.\n\nhttps://conferences.enbis.org/event/11/contributions/226/
LOCATION:Room 4
URL:https://conferences.enbis.org/event/11/contributions/226/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Variable importance analysis of railway vehicle responses
DTSTART;VALUE=DATE-TIME:20210914T133000Z
DTEND;VALUE=DATE-TIME:20210914T140000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-158@conferences.enbis.org
DESCRIPTION:Speakers: Anna Pichler (Virtual Vehicle Research GmbH)\nIn the
development process of railway vehicles\, several requirements concerning
reliability and safety have to be met. These requirements are commonly ass
essed by using Multi-Body-Dynamics (MBD) simulations and on-track measurem
ents.\nIn general\, the vehicle/track interaction is significantly influen
ced by varying\, unknown or non-quantifiable operating conditions (e.g. co
efficient of friction) resulting in a high variance of the vehicle respons
es (forces and accelerations). The question is: which statistical methods ma
ke it possible to identify the significant operating conditions to be considered i
n the simulation?\n\nThis paper proposes a methodology to quantify the eff
ects of operating conditions (independent variables) on vehicle responses
(dependent variables) based on measurements and simulations. A variable im
portance analysis is performed considering the nonlinear behaviour of the
vehicle/track interaction as well as the correlation between the independe
nt variables. Hence\, two statistical modelling approaches are considered.
The focus is on linear regression models\, which make it possible to incl
ude the correlation behaviour of the independent variables in the analyses
. Further\, random forest models are used to reflect the non-linearity of
the vehicle/track interaction.\n\nThe variable importance measures\, deriv
ed from both approaches\, result in an overview of the effects of operatin
g conditions on vehicle responses\, considering the complexity of the data
. Finally\, the proposed methodology provides a determined set of operatin
g conditions to be considered in the simulation.\n\nhttps://conferences.en
bis.org/event/11/contributions/158/
LOCATION:Room 4
URL:https://conferences.enbis.org/event/11/contributions/158/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Classification of On-Road Routes for the Reliability Assessment of
Drive-Assist Systems in Heavy-Duty Trucks based on Electronic Map Data
DTSTART;VALUE=DATE-TIME:20210914T130000Z
DTEND;VALUE=DATE-TIME:20210914T133000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-163@conferences.enbis.org
DESCRIPTION:Speakers: Nikolaus Haselgruber (CIS consulting in industrial s
tatistics GmbH)\nThe development of drive assist systems\, such as traffic
sign recognition and distance regulation\, is one of the most important t
asks on the way to autonomous driving. With focus on the definition of rel
iability as the ability to perform a required function under specific cond
itions over a given period of time\, the most challenging aspect appears t
o be the description of the usage conditions. In particular\, the variety
of these conditions\, caused by country-specific road conditions and infra
structure as well as volatile weather and traffic\, needs to be described
sufficiently to recognize which requirements have to be met by the assist
systems during their operational life.\nEspecially for the development of
heavy-duty trucks\, where the execution of physical vehicle measurements i
s expensive\, electronic map data provide a powerful alternative to analys
e routes regarding their road characteristics\, infrastructure\, traffic a
nd environmental conditions. Data generation is fast and cheap via online
route planning and analysis can take place directly without using any vehi
cle resources. This presentation shows a systematic approach to classify h
eavy-duty truck routes regarding their usage conditions based on electroni
c map data and how this can be used to provide a reference stress profile
for the reliability assessment of drive assist systems.\n\nhttps://confere
nces.enbis.org/event/11/contributions/163/
LOCATION:Room 4
URL:https://conferences.enbis.org/event/11/contributions/163/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Modern Methods of Quantifying Parameter Uncertainties via Bayesian
Inference
DTSTART;VALUE=DATE-TIME:20210915T140000Z
DTEND;VALUE=DATE-TIME:20210915T143000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-201@conferences.enbis.org
DESCRIPTION:Speakers: Nando Farchmin (Physikalisch-Technische Bundesanstal
t)\nIn modern metrology an exact specification of unknown characteristic v
alues\, such as shape parameters or material constants\, is often not poss
ible due to\, e.g.\, the ever-decreasing size of the objects under investigati
on. Using non-destructive measurements and inverse problems is both an ele
gant and economical way to obtain the desired information while also provi
ding the possibility to determine uncertainties of the reconstructed param
eter values. In this talk we present state-of-the-art approaches to quanti
fy these parameter uncertainties by Bayesian inference. Among others\, we
discuss surrogate approximations for high-dimensional problems to circumve
nt computationally demanding physical models\, error correction via the in
troduction of an additional model error to automatically correct systemati
c model discrepancies and transport of measure approaches using invertible
neural networks which accelerate sampling from the problem posterior dras
tically in comparison to standard MCMC strategies. The presented methods a
re illustrated by applications in optical shape reconstruction of nano-str
uctures\, in particular photo-lithography masks\, with scattering and graz
ing incidence X-ray fluorescence measurements.\n\nhttps://conferences.enbi
s.org/event/11/contributions/201/
LOCATION:Room 2
URL:https://conferences.enbis.org/event/11/contributions/201/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Statistical models for measurement uncertainty evaluation in coord
inate metrology
DTSTART;VALUE=DATE-TIME:20210915T133000Z
DTEND;VALUE=DATE-TIME:20210915T140000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-200@conferences.enbis.org
DESCRIPTION:Speakers: Alistair Forbes (National Physical Laboratory)\nCoor
dinate metrology is a key technology supporting the quality infrastructure
associated with manufacturing. Coordinate metrology can be thought of as
a two-stage process\, the first stage using a coordinate measuring machine
(CMM) to gather coordinate data $\\mathbf{x}_{1:m} = \\{\\mathbf{x}_i\,
i=1\,\\ldots\,m\\}$\, related to a workpiece surface\, the second extract
ing a set of parameters (features\, characteristics) $\\mathbf{a} = (a_1\
,\\ldots\,a_n)^\\top$ from the data e.g.\, determining the parameters ass
ociated with the best-fit cylinder to data. The extracted parameters can t
hen be compared with the workpiece design to assess whether or not the man
ufactured workpiece conforms to design within prespecified tolerance.\n\nT
he evaluation of the uncertainties associated with geometric features $\\m
athbf{a}$ derived from coordinate data $\\mathbf{x}_{1:m}$ is also a
two-stage process\, the first in which a $3m \\times 3m$ variance matrix $V_X$
associated with the coordinate data is evaluated\, the second stage in wh
ich these variances are propagated through to those for the features $\\ma
thbf{a}$ derived from $\\mathbf{x}_{1:m}$. While the true variance matrix
associated with a point cloud may be difficult to evaluate\, a reasonable
estimate can be determined using approximate models of CMM behaviour. \nI
n this paper we describe approximate models of CMM behaviour in terms of s
patial correlation models operating at different length scales and show ho
w the point cloud variance matrix generated using these approximate models
can be propagated through to derived features. We also use the models to
derive explicit formulae that characterise the uncertainties associated wi
th commonly derived parameters such as the radius of a fitted cylinder.\n\
nhttps://conferences.enbis.org/event/11/contributions/200/
LOCATION:Room 2
URL:https://conferences.enbis.org/event/11/contributions/200/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Hypothesis-based acceptance sampling for modules F and F1 of the E
uropean Measuring Instruments Directive
DTSTART;VALUE=DATE-TIME:20210915T130000Z
DTEND;VALUE=DATE-TIME:20210915T133000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-130@conferences.enbis.org
DESCRIPTION:Speakers: Katy Klauenberg (Physikalisch-Technische Bundesansta
lt (PTB))\nMillions of measuring instruments are verified each year before
being placed on the markets worldwide. In the EU\, such initial conformit
y assessments are regulated by the Measuring Instruments Directive (MID) a
nd its modules F and F1 allow for statistical acceptance sampling. \n\nThi
s paper re-interprets the acceptance sampling conditions formulated by the
MID in the formal framework of hypothesis testing. The new interpretation
is contrasted with the one advanced in WELMEC guide 8.10 [1]\, and its ad
vantages are elaborated. Besides the conceptual advantage of agreeing with
a well-known statistical framework\, the new interpretation also entails
economic advantages. Namely\, it bounds the producers' risk from above\,
such that measuring instruments with sufficient quality are accepted with
a guaranteed probability of no less than 95%. Furthermore\, the new inter
pretation applies unambiguously to finite-sized lots\, even very small one
s. A new acceptance sampling scheme is derived\, because re-interpreting t
he MID conditions implies that currently available sampling plans are eith
er not admissible or not optimal. \n\nWe conclude that the new interpretat
ion is to be preferred and suggest re-formulating the statistical sampling
conditions in the MID. Exchange with WELMEC WG 8 is ongoing to revise its
guide 8.10 and to recommend application of the new sampling scheme. \n\n[
1] WELMEC European Cooperation in Legal Metrology: Working Group 8 (2018)\
, “Measuring Instruments Directive (2014/32/EU): Guide for Generating Sa
mpling Plans for Statistical Verification According to Annex F and F1 of M
ID 2014/32/EU”\n\nhttps://conferences.enbis.org/event/11/contributions/1
30/
LOCATION:Room 2
URL:https://conferences.enbis.org/event/11/contributions/130/
END:VEVENT
BEGIN:VEVENT
SUMMARY:AdaPipe: A Recommender System for Adaptive Computation Pipelines i
n Cyber-Manufacturing Computation Services
DTSTART;VALUE=DATE-TIME:20210915T140000Z
DTEND;VALUE=DATE-TIME:20210915T143000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-209@conferences.enbis.org
DESCRIPTION:Speakers: Ran Jin ()\nIndustrial cyber-physical systems (ICPS)
will accelerate the transformation of offline data-driven modeling into
fast computation services\, such as computation pipelines for prediction\
, monitoring\, prognosis\, diagnosis\, and control in factories. However\,
it is computationally intensive to adapt computation pipelines to heterog
eneous contexts in ICPS in manufacturing.\nIn this paper\, we propose to r
ank and select the best computation pipelines to match contexts and formul
ate the problem as a recommendation problem. The proposed method Adaptive
computation Pipelines (AdaPipe) considers similarities of computation pipe
lines from word embedding\, and features of contexts. Thus\, without explo
ring all computation pipelines extensively in a trial-and-error manner\, A
daPipe efficiently identifies top-ranked computation pipelines. We validat
ed the proposed method with 60 bootstrapped data sets from three real manu
facturing processes: thermal spray coating\, printed electronics\, and add
itive manufacturing. The results indicate that the proposed recommendation
method outperforms traditional matrix completion\, tensor regression meth
ods\, and a state-of-the-art personalized recommendation model.\n\nhttps:/
/conferences.enbis.org/event/11/contributions/209/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/209/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Image-Based Feedback Control Using Tensor Analysis
DTSTART;VALUE=DATE-TIME:20210915T133000Z
DTEND;VALUE=DATE-TIME:20210915T140000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-203@conferences.enbis.org
DESCRIPTION:Speakers: Kamran Paynabar (Georgia Tech)\nIn manufacturing sys
tems\, many quality measurements are in the form of images\, including ove
rlay measurements in semiconductor manufacturing\, and dimensional deforma
tion profiles of fuselages in an aircraft assembly process. To reduce the
process variability and ensure on-target quality\, process control strateg
ies should be deployed\, where the high-dimensional image output is contro
lled by one or more input variables. To design an effective control strate
gy\, one first needs to estimate the process model off-line by finding the
relationship between the image output and inputs\, and then to obtain the
control law by minimizing the control objective function online. The main
challenges in achieving such a control strategy include (i) the high-dime
nsionality of the output in building a regression model\, (ii) the spatial
structure of image outputs and the temporal structure of the image sequence\,
and (iii) non-i.i.d. noise. To address these challenges\, we propose a
novel tensor-based process control approach by incorporating the tensor t
ime series and regression techniques. Based on the process model\, we can
then obtain the control law by minimizing a control objective function. Al
though our proposed approach is motivated by the 2D image case\, it can be
extended to higher-order tensors such as point clouds. Simulation and
case studies show that our proposed method is more effective than benchma
rks in terms of relative mean square error.\n\nhttps://conferences.enbis.o
rg/event/11/contributions/203/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/203/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Parameter Calibration in wake effect simulation model with Stochas
tic Gradient Descent and stratified sampling
DTSTART;VALUE=DATE-TIME:20210915T130000Z
DTEND;VALUE=DATE-TIME:20210915T133000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-197@conferences.enbis.org
DESCRIPTION:Speakers: Eunshin Byon ()\nAs the market share of wind energy
has been rapidly growing\, wake effect analysis is gaining substantial att
ention in the wind industry. Wake effects represent a wind shade cast by u
pstream turbines to the downwind direction\, resulting in power deficits i
n downstream turbines. To quantify the aggregated influence of wake effect
s on a wind farm's power generation\, various simulation models have been
developed\, including Jensen's wake model. These models include parameters
that need to be calibrated from field data. Existing calibration methods
are based on surrogate models that impute the data under the assumption th
at physical and/or computer trials are computationally expensive\, typical
ly at the design stage. This\, however\, is not the case when large volumes
of data can be collected during the operational stage. Motivated by win
d energy applications\, we develop a new calibration approach for big data
settings without the need for statistical emulators. Specifically\, we ca
st the problem into a stochastic optimization framework and employ stochas
tic gradient descent to iteratively refine calibration parameters using ra
ndomly selected subsets of data. We then propose a stratified sampling sch
eme that enables choosing more samples from noisy and influential sampling
regions\, thus reducing the variance of the estimated gradient for improved
convergence.\n\nhttps://conferences.enbis.org/event/11/contributions
/197/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/197/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Robust bootstrapped h and k Mandel’s statistics for outlier detec
tion in Interlaboratory Studies
DTSTART;VALUE=DATE-TIME:20210915T124000Z
DTEND;VALUE=DATE-TIME:20210915T130000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-228@conferences.enbis.org
DESCRIPTION:Speakers: Génesis Moreno (Escuela Politécnica Nacional)\, Cr
istian Solorzano (Escuela Politécnica Nacional)\nA new methodology based
on bootstrap resampling techniques is proposed to estimate the distributio
n of the h and k Mandel's statistics\, commonly applied in the framework of
Interlaboratory Studies (ILS) to identify outlier laboratories\, i.e. those
that supply inconsistent results\, by testing the hypotheses of
reproducibility and repeatability (R & R).\n\nTraditionally\, the
statistical tests involved in ILS have been developed under the theoretical
assumption of normality of the study variables. When the variable measured
by the laboratories is far from normally distributed\, nonparametric
techniques can be very useful to estimate more accurately the distribution
of these statistics and\, consequently\, their critical values.\n\nFor the
validation of the pro
posed algorithm\, several scenarios were created in a simulation study whe
re the statistics h and k were generated from different distributions such
as Normal\, Laplace\, and Skew Normal\, with varying sample sizes and
numbers of laboratories. Emphasis is also placed on the power of the test\,
to verify the capacity of the methodology to detect inconsistencies.\n\nAs
a general result\, the new bootstrap methodology performs better than the
traditional parametric methodology\, especially when the data are generated
from a skew distribution and the sample size is small. Finally\, this
methodology was applied to a real case study of data obtained through a
computational technique of hematic biometry between clinical laboratories\,
and to a dataset corresponding to serum glucose testing included in the ILS
R package.\n\nhttps://conferences.enbis.org/event/11/co
ntributions/228/
LOCATION:Room 5
URL:https://conferences.enbis.org/event/11/contributions/228/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Outliers and the instrumental variables estimator in the linear re
gression model with endogeneity
DTSTART;VALUE=DATE-TIME:20210915T122000Z
DTEND;VALUE=DATE-TIME:20210915T124000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-148@conferences.enbis.org
DESCRIPTION:Speakers: Aleš Toman (School of Economics and Business\, Univ
ersity of Ljubljana)\nIn a linear regression model\, endogeneity (i.e.\, a
correlation between some explanatory variables and the error term) makes
the classical OLS estimator biased and inconsistent. When instrumental var
iables (i.e.\, variables that are correlated with the endogenous explanato
ry variables but not with the error term) are available to partial out end
ogeneity\, the IV estimator is consistent and widely used in practice. The
effect of outliers on the OLS estimator is carefully studied in robust st
atistics\, but surprisingly\, the effect of outliers on the IV estimator h
as received little attention in previous research\, with existing work mos
tly focusing on robust covariance estimation.\n\nIn this presentation\, we
use the forward search algorithm to investigate the effect of outliers (a
nd other contamination schemes) on various aspects of the IV-based estimat
ion process. The algorithm begins the analysis with a subset of observatio
ns that does not contain outliers and then increases the subset by adding
one observation at a time until all observations are included and the enti
re sample is analyzed. Contaminated observations are included in the subse
t in the final iterations. During the process\, various statistics and res
iduals are monitored to detect the effects of outliers. \n\nWe use simulat
ion studies to investigate the effect of known outliers occurring in the (
i) dependent\, (ii) exogenous or (iii) endogenous explanatory\, or (iv) in
strumental variable. Summarizing the results\, we propose and implement a
method to identify outliers in a real data set where contamination is not
known in advance.\n\nhttps://conferences.enbis.org/event/11/contributions/
148/
LOCATION:Room 5
URL:https://conferences.enbis.org/event/11/contributions/148/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Outlier detection in sensor networks
DTSTART;VALUE=DATE-TIME:20210915T120000Z
DTEND;VALUE=DATE-TIME:20210915T122000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-121@conferences.enbis.org
DESCRIPTION:Speakers: Martial AMOVIN-ASSAGBA (Arpege Master K / Universite
́ de Lyon\, Lyon 2\, ERIC UR 3083 )\nEmerging technologies ease the recor
ding and collection of high frequency data produced by sensor networks. Fr
om a statistical point of view\, these data can be viewed as discrete
observations of random functions. Our industrial goal is to detect abnormal
measurements. Statistically\, this amounts to detecting outliers in a
multivariate functional data set.\nWe propose a robust procedure based on a
contaminated mixture model for both clustering and detecting outliers in
multivariate functional data. Our algorithm either classifies each
measurement into one of the normal clusters (identifying typical normal
behaviours of the sensors) or flags it as an outlier.\nAn
Expectation-Conditional Maximization algorithm is proposed for model
inference\, and its efficiency is demonstrated through numerical
experiments on simulated datasets.\nThe model is then applied to the
industrial data set which motivated this study\, allowing us to correctly
detect abnormal behaviours.\n\nhttps://conferen
ces.enbis.org/event/11/contributions/121/
LOCATION:Room 5
URL:https://conferences.enbis.org/event/11/contributions/121/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A fixed-sequence approach for selecting best performing classifier
s
DTSTART;VALUE=DATE-TIME:20210915T124000Z
DTEND;VALUE=DATE-TIME:20210915T130000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-227@conferences.enbis.org
DESCRIPTION:Speakers: Amalia Vanacore (Department of Industrial Engineerin
g University of Naples Federico II)\, Maria Sole Pellegrino (Dept. of Indu
strial Engineering)\nAn important issue in classification problems is the
comparison of classifiers' predictive performance\, commonly measured as
the proportion of correct classifications and often referred to as
accuracy or similarity measure. \nThis paper suggests a two-step
fixed-sequence approac
h in order to identify the best performing classifiers among those selecte
d as suitable for the problem at hand. At the first step of the fixed-sequ
ence approach\, the hypothesis that each classifier's accuracy exceeds a des
ired performance threshold is tested via a simultaneous inference procedur
e accounting for the joint distribution of individual test statistics and
the correlation between them. At the second step\, focusing only on classi
fiers selected at first step\, significant performance differences are inv
estigated via a homogeneity test. \nThe applicability and usefulness of th
e two-step approach is illustrated through two real case studies concernin
g nominal and ordinal multi-class classification problems. The accuracy of
three machine learning algorithms (i.e. Deep Neural Network\, Random Fore
st\, Extreme Gradient Boosting) is assessed via Gwet’s Agreement Coeffic
ient (AC) and compared against similarity measure and Cohen Kappa. Case st
udies results reveal the absence of paradoxical behavior in AC coefficient
and the positive effect of a weighting scheme accounting for misclassific
ation severity with ordinal classifications\, shedding light on the advant
ages of AC as measure of classifier accuracy.\n\nhttps://conferences.enbis
.org/event/11/contributions/227/
LOCATION:Room 4
URL:https://conferences.enbis.org/event/11/contributions/227/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Commented Summary of a Year of Work in Covid-19 Statistical Modeli
ng
DTSTART;VALUE=DATE-TIME:20210915T122000Z
DTEND;VALUE=DATE-TIME:20210915T124000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-123@conferences.enbis.org
DESCRIPTION:Speakers: Jorge Romeu (Emeritus State Univ. of NY (SUNY))\nWe
summarize eleven months of pro-bono work on statistical modeling and analy
sis of Covid-19 topics. For each of the papers and tutorials included here
we provide a one-paragraph summary and commentary\, including methods use
d\, results\, and possible public health applications\, as well as the Res
earchGate URL to access them. Section 1 is an Introduction. In Section 2 w
e describe the web page created\, and its main sections. In Section 3 we s
ummarize three papers on Design of Experiments and Quality Control Applica
tions. In Section 4\, we summarize four papers on Reliability\, Survival A
nalysis and Logistics Applications to Vaccine development. In Section 5 we
summarize three papers on Multivariate Analysis (Principal Components\, D
iscriminant Analyses) and Logistic Regression. In Section 6 we summarize
three Stochastic Process papers that implement Markov Chain models to anal
yze herd immunization. In Section 7\, we summarize three papers on Socio-e
conomic analyses of vaccine rollout\, and race\, ethnicity and class probl
ems\, derived from Covid-19. In Section 8\, we conclude\, discussing the p
rocedures used to produce these papers\, and the audiences we hope to reac
h.\n\nhttps://conferences.enbis.org/event/11/contributions/123/
LOCATION:Room 4
URL:https://conferences.enbis.org/event/11/contributions/123/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Analyzing categorical time series in the presence of missing obser
vations
DTSTART;VALUE=DATE-TIME:20210915T120000Z
DTEND;VALUE=DATE-TIME:20210915T122000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-120@conferences.enbis.org
DESCRIPTION:Speakers: Christian Weiß (Helmut Schmidt University)\nIn real
applications\, time series often exhibit missing observations such that s
tandard analytical tools cannot be applied. While there are approaches for
handling missing data in quantitative time series\, the case of categorical
time series seems not to have been treated so far. For both ordinal and
nominal time series\, solutions are developed that allow their marginal and
serial properties to be analyzed in the presence of missing observations.
This is achieved by adapting the concept of amplitude modulation\, which
yields closed-form asymptotic expressions for the derived statistics'
distribution (assuming that missingness happens indepe
ndently of the actual process). The proposed methods are investigated with
simulations\, and they are applied in a project on migraine patients\, wh
ere the monitored qualitative time series on features such as pain peak se
verity or perceived stress are often incomplete.\n\nThe talk relies on the
open-access publication\n\nWeiß (2021) Analyzing categorical time series
in the presence of missing observations.\nStatistics in Medicine\, in pre
ss.\nhttps://doi.org/10.1002/sim.9089\n\nhttps://conferences.enbis.org/eve
nt/11/contributions/120/
LOCATION:Room 4
URL:https://conferences.enbis.org/event/11/contributions/120/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Importance of spatial dependence in the clustering of NDVI functional
data across the Ecuadorian Andes
DTSTART;VALUE=DATE-TIME:20210915T124000Z
DTEND;VALUE=DATE-TIME:20210915T130000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-212@conferences.enbis.org
DESCRIPTION:Speakers: Jeysson Chuquin (Escuela Politécnica Nacional)\, Al
exandra Maigua (Escuela Politécnica Nacional)\nThe spatial dependence on
environmental data is an influential criterion in clustering processes\, s
ince the results obtained provide relevant information. Classical methods
do not consider spatial dependence\; ignoring this structure can produce
unexpected results and groupings of curves that are not similar in
shape/behavior.\nIn this work\, the clustering is performed using the
modified k-means method for spatially correlated functional data\, applied
to NDVI data from the Ecuadorian Andes. NDVI studies are important because
the index is used mainly to measure biomass\, assess crop health\, help
forecast fire danger zones\, etc.\nQuality indices are implemented to
determine the appropriate number of groups. Based on the methodology used
in t
he hierarchical approach for functional data with spatial correlation\, an
d given that the functional data belong to the Hilbert space of square-int
egrable functions\; the analysis is developed considering the distance bet
ween curves through the $\\mathcal{L}^2$ norm\, obtaining a reduced repres
entation of the data through a finite Fourier-type basis. Then\, the empir
ical variogram is calculated and a parametric theoretical model is fitted
in order to weight the distance matrix between the curves by the trace-var
iogram and multivariogram calculated with the coefficients of the base fun
ctions\, this matrix carries out the grouping of spatially correlated func
tional data. For the validation of the method\, some simulation scenarios
were carried out\, achieving more than $80\\%$ correct classification\,
complemented with an application to NDVI data that yields five
latitudinally distributed regions\; these regions are influenced by the
hydrographic basins of Ecuador.\n\nhttps://conferences.enbis.org/event/11/contr
ibutions/212/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/212/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Machine Learning Approach to Predict Land Prices using Spatial Dep
endency Factors
DTSTART;VALUE=DATE-TIME:20210915T122000Z
DTEND;VALUE=DATE-TIME:20210915T124000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-199@conferences.enbis.org
DESCRIPTION:Speakers: Supun Delpagoda (Department of Physical Sciences\, F
aculty of Applied Sciences\, Rajarata University of Sri Lanka.)\, Kaushaly
a Premachandra (Department of Physical Sciences\, Faculty of Applied Scien
ces\, Rajarata University of Sri Lanka.)\, Ranjan Dissanayake (Department
of Physical Sciences\, Faculty of Applied Sciences\, Rajarata University o
f Sri Lanka.)\nIn real estate models\, spatial variation is an important f
actor in predicting land prices. Spatial dependency factors (SDFs) under s
patial variation play a key role in predicting land prices. The objective
of this study was to develop a novel real estate model that is suitable fo
r Sri Lanka by exploring the factors affecting the prediction of land pric
es using ordinary least squares regression (OLS) and artificial neural net
works (ANNs). For this purpose\, a total of 1000 samples on land prices (d
ependent variable) were collected from the Kesbewa Division in Colombo met
ropolitan city using various web advertisements\, and spatial dependency
factors (independent variables) were explored\, such as the distance from
the particular land to the nearest main road\, city\, public or private
hospital\, and school. The real estate model was developed and validated
using the SDFs th
at were calculated using Google Maps and R-Studio. The OLS model showed th
at SDFs have a significant effect on land pricing $(p<0.05)$\, giving a me
an squared error of $0.9599$ (MSE) and a mean absolute percentage error of
$0.107$ (MAPE). A single-layer ANN was trained to predict land prices\;
this trained model showed an MSE of $0.9054$ and a MAPE of $0.0976$.
\nIt could be concluded that the SDFs are suitable to develop the real es
tate model for the Sri Lankan context since these factors showed a signifi
cant effect on land prices. Furthermore\, the MSE and MAPE values of the O
LS and ANN models proved that the ANN model performed better than the OLS
model in this context.\n\nhttps://conferences.enbis.org/event/11/contribut
ions/199/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/199/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Study of the effectiveness of Bayesian kriging for the decommissio
ning and dismantling of nuclear sites.
DTSTART;VALUE=DATE-TIME:20210915T120000Z
DTEND;VALUE=DATE-TIME:20210915T122000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-133@conferences.enbis.org
DESCRIPTION:Speakers: Martin Wieskotten (CEA)\nThe decommissioning of nucl
ear infrastructures such as power plants arises as these facilities age an
d come to the end of their lifecycle. The decommissioning projects expect
a complete radiological characterization of the site\, of both the soil an
d the civil engineering structure to optimize efficiency and minimize the
costs of said project. To achieve such goal\, statistical tools such as ge
ostatistics are used for the spatial characterization of radioactive conta
mination. One of the recurring problem using kriging is its sensitivity to
parameters estimation. Even though tools such as the variogram are availa
ble for parameter estimation\, they do not allow for uncertainty quantific
ation in parameter estimation\, leading to over-optimistic prediction vari
ances. A solution to this problem is Bayesian kriging\, which takes into a
ccount uncertainty in parameter estimation by considering parameters as ra
ndom variables and assigning them prior specifications. We chose to study
the efficiency of Bayesian kriging in comparison with standard kriging met
hods\, by varying the size of the data set available\, and tested its effe
ctiveness against misspecification\, such as wrong priors hyperparameters
or covariance models. These comparisons were made on simulated data sets\,
as well as on a real data set from the decommissioning project of the G3
reactor in CEA Marcoule.\n\nhttps://conferences.enbis.org/event/11/contrib
utions/133/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/133/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Predictive Maintenance in plasma etching processes: a statistical
approach
DTSTART;VALUE=DATE-TIME:20210915T124000Z
DTEND;VALUE=DATE-TIME:20210915T130000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-218@conferences.enbis.org
DESCRIPTION:Speakers: Riccardo Borgoni (Università di Milano-Bicocca)\nTh
is contribution is joint work between academic researchers and a research g
roup at STMicroelectronics (Italy)\, a leading company in semiconductor man
ufacturing.\nThe problem under investigation concerns a predictive maintena
nce system for manufacturing in Industry 4.0. Modern predictive maintenanc
e is a condition-driven preventive maintenance program that uses possibly h
uge amounts of data to monitor the system and evaluate its condition and ef
ficiency. Machine learning and statistical learning techniques are nowaday
s the main tools by which predictive maintenance operates in practice. We h
ave tested the efficacy of such tools in the context of plasma etching proc
esses. More specifically\, the data considered in this paper refer to an en
tire production cycle and were collected over roughly six months between De
cember 2018 and July 2019\, for a total of 2874 timepoints. Quartz degradat
ion was monitored in terms of the reflected power (RF). In addition to the r
eflected power\, the values of more than one hundred other variables were c
ollected. Results suggest that the considered variables are related to quar
tz degradation differently in different periods of the production cycle. B
y blending different penalized methods to shed light on the subset of covar
iates expected to carry signals of the degradation process\, it was possibl
e to reduce complexity\, allowing the industrial research group to focus o
n these covariates and fine-tune the best time for maintenance.\n
\nhttps://conferences.enbis.org/event/11/contributions/218/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/218/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A Digital Twin Approach for Statistical Process Monitoring of a Hi
gh-Dimensional Microelectronic Assembly Process
DTSTART;VALUE=DATE-TIME:20210915T122000Z
DTEND;VALUE=DATE-TIME:20210915T124000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-191@conferences.enbis.org
DESCRIPTION:Speakers: Marco P. Seabra dos Reis (University of Coimbra\, De
partment of Chemical Engineering)\nWe address a real case study of Statist
ical Process Monitoring (SPM) of a Surface Mount Technology (SMT) producti
on line at Bosch Car Multimedia\, where more than 17 thousand product vari
ables are collected for each product. The basic assumption of SPM is that
all relevant “common causes” of variation are represented in the refer
ence dataset (Phase 1 analysis). However\, we argue and demonstrate that t
his assumption is often not met\, namely in the industrial process under a
nalysis. Therefore\, we derived a digital twin from first principles model
ing of the dominant modes of common cause variation. With such digital twi
n\, it is possible to enrich the historical dataset with simulated data re
presenting a comprehensive coverage of the actual operational space. This
methodology avoids the excessive false alarm problem that affected the uni
t and that prevented the use of SPM. We also show how to compute the monit
oring statistics and set their control limits\, as well as to conduct faul
t diagnosis when an abnormal event is detected.\n\nhttps://conferences.enb
is.org/event/11/contributions/191/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/191/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Imbalanced multi-class classification in process industries. Case
study: Emission levels of SO2 from an industrial boiler
DTSTART;VALUE=DATE-TIME:20210915T120000Z
DTEND;VALUE=DATE-TIME:20210915T122000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-149@conferences.enbis.org
DESCRIPTION:Speakers: Tomás Carmo (Federal University of Minas Gerais)\nI
mbalanced classes often occur in classification tasks including process in
dustry applications. This scenario usually results in models overfitting t
he majority classes. Imbalanced-data techniques are commonly used to overc
ome this issue. They can be grouped into sampling procedures\, cost-s
ensitive strategies and ensemble learning. This work investigates some of
them for the classification of SO2 emissions from a kraft boiler belonging
to a pulp mill in Brazil. There are six classes of emission levels\, wher
e the available number of samples of the highest one is considerably small
er since it reflects negative operating conditions. Four oversampling proc
edures\, namely SMOTE\, ADASYN\, Borderline-SMOTE and Safe-level-SMOTE\, a
nd the bagging (Bootstrap Aggregating) ensemble method\, were investigated
. All tests used an MLP neural network with a single hidden layer. The num
ber of hidden units ([1:1:16])\, the activation function (logistic\, hyper
bolic tangent)\, and the learning algorithm (Rprop\, LM\, BFGS)\, as well
as the imbalance ratio\, were also varied. The best results increased the
AUC for the minority class from 83.9% to 93.6%\, and from 80.4% to 89.1%\,
which represents a gain of about 10%\, while keeping the AUCs of the rema
ining classes practically unchanged. This significantly increased the indi
vidual g-mean metric for the minority class from 60.9% to 79.8%\, and from
52.9% to 76.3%\, respectively\, without significant changes in the overal
l g-mean metric\, as desired. All results are given as averages. Imbalance
d multi-class data commonly appear in process industries\, which calls fo
r the use of imbalanced-data strategies to achieve high accuracy for all c
lasses.\n\nhttps://conferences.enbis.org/event/11/contributions/149/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/149/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The numerical statistical fan and model selection
DTSTART;VALUE=DATE-TIME:20210915T104000Z
DTEND;VALUE=DATE-TIME:20210915T110000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-131@conferences.enbis.org
DESCRIPTION:Speakers: Arkadius Kalka (Dortmund University of Applied Scien
ces and Arts)\nIdentifiability of polynomial models is a key requirement f
or multiple regression. We consider an analogue of the so-called statistic
al fan\, the set of all maximal identifiable hierarchical models\, for cas
es of noisy designs of experiments or measured covariate vectors with a gi
ven tolerance vector. This gives rise to the definition of the numerical s
tatistical fan. It includes all maximal hierarchical models that avoid app
roximate linear dependence of the design vectors. We develop an algorith
m to compute the numerical statistical fan using recent results on the com
putation of all border bases of a design ideal from the field of algebra. I
n the low-dimensional case and for sufficiently small data sets\, the nume
rical statistical fan is effectively computable and much smaller than the r
espective statistical fan. This enhanced knowledge of the space of all sta
ble identifiable hierarchical models enables improved model selection proc
edures. We combine the recursive computation of the numerical statistical f
an with model selection procedures for linear models and GLMs\, and we pro
vide implementations in R.\n\nhttps://conferences.enbis.org
/event/11/contributions/131/
LOCATION:Room 4
URL:https://conferences.enbis.org/event/11/contributions/131/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Calibrating Prediction Intervals for Gaussian Processes using Cros
s-Validation method
DTSTART;VALUE=DATE-TIME:20210915T102000Z
DTEND;VALUE=DATE-TIME:20210915T104000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-127@conferences.enbis.org
DESCRIPTION:Speakers: ACHARKI Naoufal ()\nGaussian Processes are considere
d one of the most important Bayesian Machine Learning methods (Rasmussen a
nd Williams [1]\, 2006). They typically use Maximum Likelihood Estimation o
r Cross-Validation to fit their parameters. Unfortunately\, these methods m
ay favour solutions that fit the observations on average (F. Bachoc [2]\, 2
013)\, but they pay no attention to the coverage and width of Prediction In
tervals. This may be inadmissible\, especially for systems that require ris
k management. Indeed\, an interval is crucial and offers valuable informati
on that supports better management than a single predicted value.\n\nIn thi
s work\, we address the question of adjusting and calibrating Prediction In
tervals for Gaussian Process Regression. First\, we determine the model's p
arameters by a standard Cross-Validation or Maximum Likelihood Estimation m
ethod and then adjust the parameters so that the type II Coverage Probabili
ty reaches a nominal level. We apply a relaxation method to choose the para
meters that minimize the Wasserstein distance between the Gaussian distribu
tion given by the initial parameters (Cross-Validation or Maximum Likelihoo
d Estimation) and the proposed Gaussian distribution\, among the set of par
ameters that achieve the desired Coverage Probability.\n\nReferences:\n1. R
asmussen\, C.E.\, Williams\, C.K.I.: Gaussian Processes for Machine Learnin
g (Adaptive Computation and Machine Learning). The MIT Press (2005).\n2. Ba
choc\, F.: Cross validation and maximum likelihood estimations of hyper-par
ameters of Gaussian processes with model misspecification. Computational St
atistics & Data Analysis 66\, 55–69 (2013).\n\nhttps://conferences.enbis.org/eve
nt/11/contributions/127/
LOCATION:Room 4
URL:https://conferences.enbis.org/event/11/contributions/127/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Explainable AI in preprocessing
DTSTART;VALUE=DATE-TIME:20210915T100000Z
DTEND;VALUE=DATE-TIME:20210915T102000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-122@conferences.enbis.org
DESCRIPTION:Speakers: Golnoosh Babaei ()\nThe use of eXplainable Artificia
l Intelligence (XAI) in many fields\, especially in finance\, has been an i
mportant issue not only for researchers but also for regulators and benefic
iaries. In this paper\, in contrast to recent research in which XAI method
s are used to improve the explainability and interpretability of opaque ma
chine learning models\, we consider two widely used model-agnostic explana
tion approaches\, namely Local Interpretable Model-Agnostic Explanations (L
IME) and SHapley Additive exPlanations (SHAP)\, as preprocessors\, and inve
stigate whether the application of XAI methods for preprocessing can impro
ve machine learning models. Moreover\, we compare the two XAI methods to un
derstand which performs better for this purpose in a decision-making frame
work. To validate the proposed decomposition\, we use the Lending Club dat
aset\, from a peer-to-peer lending platform in the US\, which contains rel
iable information on individual borrowers.\n\nhttps://conferences.enbis.org/event/11/contributions/122/
LOCATION:Room 4
URL:https://conferences.enbis.org/event/11/contributions/122/
END:VEVENT
BEGIN:VEVENT
SUMMARY:MODELLING WIND TURBINE POWER PRODUCTION WITH FUZZY LINEAR REGRESSI
ON METHODS
DTSTART;VALUE=DATE-TIME:20210915T102000Z
DTEND;VALUE=DATE-TIME:20210915T104000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-150@conferences.enbis.org
DESCRIPTION:Speakers: SADIK OZKAN GUNDUZ (HACETTEPE UNIVERSITY)\nWind ener
gy is an immensely popular renewable energy source due to increasing envir
onmental awareness\, the depletion of fossil fuels\, and rising costs. Th
e amount of energy produced in wind turbine farms should therefore be esti
mated accurately. Although wind turbine manufacturers estimate energy produ
ction from wind speed and wind direction\, actual production often differ
s from these estimates. Such differences may arise not only from model err
ors or randomness\, but also from uncertainty in the environment or a lac
k of data in the sample. In this study\, energy production is estimated us
ing wind speed and wind direction\, which are often subject to measuremen
t error or vagueness. To deal with this\, fuzzy logic is incorporated int
o the proposed regression models. Four different fuzzy regression models a
re constructed according to the fuzziness situation. Crisp (non-fuzzy) inp
ut and crisp output\, crisp input and fuzzy output\, and fuzzy input and f
uzzy output situations are considered\, and the results are compared. Nume
rous fuzzy regression models are used in this study\, and it is conclude
d that fuzzy models can both suggest effective solutions where fuzziness e
xists and provide more flexible estimations and decisions.\n\nhttps://conferences.enb
is.org/event/11/contributions/150/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/150/
END:VEVENT
BEGIN:VEVENT
SUMMARY:PREDICTION OF PRECIPITATION THROUGH WEATHER VARIABLES BY FUNCTIONA
L REGRESSION MODELS
DTSTART;VALUE=DATE-TIME:20210915T100000Z
DTEND;VALUE=DATE-TIME:20210915T102000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-134@conferences.enbis.org
DESCRIPTION:Speakers: Angel Omar Llambo Delgado (Departamento de Matemát
ica\, Escuela Politécnica Nacional)\, Danilo Leandro Loza Quispillo (De
partamento de Matemática\, Escuela Politécnica Nacional)\nIn this work\, w
e predict precipitation using different functional regression models (FRM)
\, selecting the best fit among: a Functional Linear Model with Basis Repr
esentation (FLR)\, a Functional Linear Model with a Functional Basis by Pr
incipal Components (PC)\, a Functional Linear Model with a Functional Basi
s of Principal Components by Partial Least Squares (PLS)\, and an adaptati
on of a Functional Linear Model with two independent variables.\n\nThe res
ults obtained by these models are very useful for understanding the behavi
or of precipitation. When comparing the results\, it is deduced that the f
unctional regression model that includes two explanatory functional variab
les provides a better fit\, since 91% of the variation in precipitation i
s explained through temperature and wind speed. Finally\, with this model
\, tests are carried out that allow the stability of its parameters to b
e analyzed.\n\nThis study allows us to establish meteorological parameter
s that help illustrate scenarios (favorable and adverse) in order to bette
r cope with the storms that arise during the year\, so that projects or st
udies can be put into practice to improve the socioeconomic conditions o
f the agricultural sector.\n\nhttps://conferences.e
nbis.org/event/11/contributions/134/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/134/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A novel online PCA algorithm for large variable space dimensions
DTSTART;VALUE=DATE-TIME:20210915T104000Z
DTEND;VALUE=DATE-TIME:20210915T110000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-198@conferences.enbis.org
DESCRIPTION:Speakers: Philipp Froehlich (University of Wuerzburg)\, Rainer
Göb ()\nPrincipal component analysis (PCA) is a basic tool for reducing
the dimension of a space of variables. In modern industrial environments l
arge variable space dimensions up to several thousands are common\, where
data are recorded live in high time resolution and have to be analysed wit
hout time delay. Classical batch PCA procedures start from the full covari
ance matrix and construct the exact eigenspace of the space defined by th
e covariance matrix. The latter approach is infeasible for large dimensio
ns\, and even when feasible\, live updating of the PCA is impossible. Seve
ral so-called online PCA algorithms are available in the literature whic
h try to handle large dimensions and live updating with different approach
es. The present study compares the performance of available online PCA alg
orithms and suggests a novel online PCA algorithm. The algorithm is derive
d by solving a simplified maximum trace problem where the optimisation i
s restricted to the curve on the unit sphere\, which directly connects th
e respective old principal component estimation with a projection of the n
ewly observe
d data point. The algorithm scales linearly in runtime and in memory with
the data dimension. The advantage of the novel algorithm lies in providing
exactly orthogonal vectors whereas other algorithms lead to approximately
orthogonal vectors. Nevertheless\, the runtime of the novel algorithm is
not worse and sometimes even better than the one of existing online PCA al
gorithms.\n\nhttps://conferences.enbis.org/event/11/contributions/198/
LOCATION:Room 2
URL:https://conferences.enbis.org/event/11/contributions/198/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Fault detection in continuous chemical processes using a PCA-based
local approach
DTSTART;VALUE=DATE-TIME:20210915T102000Z
DTEND;VALUE=DATE-TIME:20210915T104000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-160@conferences.enbis.org
DESCRIPTION:Speakers: Leticia Almada (Federal University of Minas Gerais)\
nEarly fault detection in the process industry is crucial to mitigate pote
ntial impacts. Despite being widely studied\, fault detection remains a pr
actical challenge. Principal components analysis (PCA) has been commonly u
sed for this purpose. This work employs a PCA-based local approach to impr
ove fault detection efficiency. This is done by adopting individual contro
l limits for the principal components. Several numbers of retained compone
nts (d = [5:45]\, in steps of 5) were investigated. The false alarm rate (
FAR) was set at 1%. The level of significance (α) for the control limits w
as a function of d. The well-known Tennessee benchmark was used as the c
ase study\, whose faults can be grouped into easy\, intermediate\, hard an
d very hard detection faults. Significant improvements were reached for th
e intermediate and hard groups in comparison to the classic use of PCA. Re
lative gains around 50% in MDR (missed detection rate) were obtained for t
wo out of the three intermediate faults\, given the T2 statistic. In the h
ard to detect group\, all six faults except one presented relative gain in
MDR above 50% for both statistics T2 and Q. In general\, the local approa
ch was superior for 16\, equivalent for 2\, and inferior for 3 (easy detec
tion faults) faults given T2. These values were\, respectively\, equal to
11\, 5 and 5 (four easy and one intermediate detection faults)\, for the Q
statistic. The overall results suggest that the local approach was more p
rone to detect more difficult faults\, which is of most interest in practi
ce.\n\nhttps://conferences.enbis.org/event/11/contributions/160/
LOCATION:Room 2
URL:https://conferences.enbis.org/event/11/contributions/160/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A novel fault detection and diagnosis approach based on orthogonal
autoencoders
DTSTART;VALUE=DATE-TIME:20210915T100000Z
DTEND;VALUE=DATE-TIME:20210915T102000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-132@conferences.enbis.org
DESCRIPTION:Speakers: Davide Cacciarelli (Technical University of Denmark
(DTU))\nThe need to analyze complex nonlinear data coming from industrial
production settings is fostering the use of deep learning algorithms in St
atistical Process Control (SPC) schemes. In this work\, a new SPC framewor
k based on orthogonal autoencoders (OAEs) is proposed. A regularized loss
function ensures the invertibility of the covariance matrix when computing
the Hotelling $T^2$ statistic and non-parametric upper control limits are
obtained from a kernel density estimation. When an out-of-control situati
on is detected\, we propose an adaptation of the integrated gradients meth
od to perform a fault contribution analysis by interpreting the bottleneck
of the network. The performance of the proposed method is compared with t
raditional approaches like principal component analysis (PCA) and Kernel P
CA (KPCA). In the analysis\, we examine how the detection performances are
affected by changes in the dimensionality of the latent space. Determinin
g the right dimensionality is a challenging problem in SPC since the model
s are usually trained on phase I data solely\, with little to no prior kno
wledge on the true latent structure of the underlying process. Moreover\,
data containing faults is quite scarce in industrial settings\, reducing t
he possibility to perform a thorough investigation on the detection perfor
mances for different numbers of extracted features. The results show that O
AEs offer robust results despite radical changes in the latent dimension\, w
hile the detection performance of traditional methods fluctuates significa
ntly.\n\nhttps://conferences.enbis.org/event/11/contributions/132
/
LOCATION:Room 2
URL:https://conferences.enbis.org/event/11/contributions/132/
END:VEVENT
BEGIN:VEVENT
SUMMARY:An algorithm for robust designs against data loss
DTSTART;VALUE=DATE-TIME:20210915T104000Z
DTEND;VALUE=DATE-TIME:20210915T110000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-145@conferences.enbis.org
DESCRIPTION:Speakers: Roberto Fontana ()\nOptimal experimental designs are
extensively studied in the statistical literature. In this work we focus
on the notion of robustness of a design\, i.e. the sensitivity of a design
to the removal of design points. This notion is particularly important whe
n\, at the end of the experimental activity\, the design may be incomplete
\, i.e. response values are not available for all the points of the desig
n itse
lf. We will see that the definition of robustness is also related\, but no
t equivalent\, to D-optimality.\nThe methodology for studying robust desig
ns is based on the circuit basis of the design model matrix. Circuits are
minimal dependent sets of the rows of the design model matrix and provide
a representation of its kernel with special properties. The circuit basis
can be computed through several packages for symbolic computation.\nWe pre
sent a simple algorithm for finding robust fractions of a specified size.
The basic idea of the algorithm is to improve a given fraction by exchangi
ng\, for a certain number of times\, the worst point of the fraction with
the best point among those which are in the candidate set but not in the f
raction. Some practical examples are presented\, from classical combinator
ial designs to two-level factorial designs including interactions.\n\nhttp
s://conferences.enbis.org/event/11/contributions/145/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/145/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Adhesive bonding process optimization via Gaussian Process models
DTSTART;VALUE=DATE-TIME:20210915T102000Z
DTEND;VALUE=DATE-TIME:20210915T104000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-141@conferences.enbis.org
DESCRIPTION:Speakers: Jeroen Jordens (ProductionS\, Flanders Make)\nAdhesi
ves are increasingly used in the manufacturing industry because of their d
esirable characteristics\, e.g. high strength-to-weight ratio\, design flexi
bility\, damage tolerance and fatigue resistance. The manufacturing of adh
esive joints involves a complex\, multi-stage process in which product qua
lity parameters\, such as joint strength and failure mode\, are highly imp
acted by the applied process parameters. Optimization of the bonding proce
ss parameters is therefore important to guarantee the final product qualit
y and minimize production costs.\n\nAdhesive bonding processes are traditi
onally determined through expert knowledge and trial and error\, varying o
nly one factor at a time. This approach generally yields suboptimal result
s and depends highly on the experience and knowledge of the process design
er. Additionally\, the bonding process parameters jointly determine perf
ormance and cost metrics in a complex\, nonlinear way. Therefore\, a more
efficient optimization method is desired.\n\nThis research discusses the u
se of Design of Experiments with Bayesian Optimization and Gaussian proces
s models to optimize six bonding process parameters for maximal joint stre
ngth. The approach was first applied in a simulation environment and later
validated via physical experiments. As an intermediate result\, this nove
l method showed a 2% reduction in production cost and a 15% reduction in t
he optimal solution search\, compared to the traditional approach with sim
ilar jo
int strengths. Final results will be presented at the conference.\n\nThis
research received funding from the Flemish Government under the “Onderzo
eksprogramma Artificiële Intelligentie AI Vlaanderen” program. This res
earch was supported or partially supported by Flanders Make vzw.\n\nhttps:
//conferences.enbis.org/event/11/contributions/141/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/141/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Enumeration of large mixed four-and-two-level regular designs
DTSTART;VALUE=DATE-TIME:20210915T100000Z
DTEND;VALUE=DATE-TIME:20210915T102000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-126@conferences.enbis.org
DESCRIPTION:Speakers: Alexandre Bohyn (KU Leuven)\nA protocol for a bio-as
say involves a substantial number of steps that may affect the end result.
To identify the influential steps\, screening experiments can be employed
with each step corresponding to a factor and different versions of the st
ep corresponding to factor levels. The designs for such experiments usuall
y include factors with two levels only. Adding a few four-level factors wo
uld allow inclusion of multi-level categorical factors or quantitative fac
tors that may show quadratic or even higher-order effects. However\, while
a reliable investigation of the vast number of different factors requires
designs with larger run sizes\, catalogs of designs with both two-level f
actors and four-level factors are only available for up to 32 runs. In thi
s presentation\, we discuss the generation of such designs. We use the pri
nciples of **extension** (adding columns to an existing design to form can
didate designs) and **reduction** (removing equivalent designs from the se
t of candidates). More specifically\, we select three algorithms from the
current literature for the generation of complete sets of two-level design
s\, adapt them to enumerate designs with both two-level and four-level fac
tors\, and compare the efficiency of the adapted algorithms for generating
complete sets of non-equivalent designs. Finally\, we use the most effici
ent method to generate a complete catalog of designs with both two-level a
nd four-level factors for run sizes 32\, 64\, 128 and 256.\n\nhttps://conf
erences.enbis.org/event/11/contributions/126/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/126/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Bayesian Designs for Progressively Type-I Censored Simple Step-Str
ess Accelerated Life Tests Under Cost Constraint and Order-Restriction
DTSTART;VALUE=DATE-TIME:20210914T154500Z
DTEND;VALUE=DATE-TIME:20210914T160500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-179@conferences.enbis.org
DESCRIPTION:Speakers: Crystal Wiedner (The University of Texas at San Anto
nio)\nIn this work\, we investigate order-restricted Bayesian cost constra
ined design optimization for progressively Type-I censored simple step-str
ess accelerated life tests with exponential lifetimes under continuous ins
pections. Previously we showed that using a three-parameter gamma distribu
tion as a conditional prior ensures order restriction for parameter estima
tion and that the conjugate-like structure provides computational simplici
ty. Adding on to our Bayesian design work\, we explore incorporating a cos
t constraint to various criteria based on Shannon information gain and the
posterior variance-covariance matrix. We derive the formula for expected
termination time and expected total cost and propose estimation procedures
for each. We conclude with results and a comparison of the efficiencies f
or the constrained vs. unconstrained tests from an application of these me
thods to a solar lighting device dataset.\n\nhttps://conferences.enbis.org
/event/11/contributions/179/
LOCATION:Room 4
URL:https://conferences.enbis.org/event/11/contributions/179/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Adaptive Design and Inference for a Step-Stress Accelerated Life T
est
DTSTART;VALUE=DATE-TIME:20210914T152500Z
DTEND;VALUE=DATE-TIME:20210914T154500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-156@conferences.enbis.org
DESCRIPTION:Speakers: Haifa Ismail-Aldayeh (The University of Texas at San
Antonio)\nAdvancement in manufacturing has significantly extended the lif
etime of a product\, while at the same time making it harder to perform li
fe testing at the normal operating condition due to extremely long life sp
ans. Accelerated life tests (ALT) can mitigate this issue by testing uni
ts at higher stress levels so that the lifetime information can be acquire
d more quickly. The lifetime of a product at normal operation can then be
estimated through extrapolation using a regression model. However\, there
are potential technical difficulties since the units are subjected to high
er stress levels than normal. In this work\, we develop an adaptive design
of a step-stress ALT in which stress levels are determined sequentially b
ased on the information obtained from the preceding steps. After each stre
ss level\, the estimates of the model parameters are updated and the decis
ion is made on the direction of the next stress level by using a design cr
iterion such as D- or C-optimality. Assuming the popular log-linear assump
tion between the mean lifetime and stress levels\, this adaptive design an
d inference are illustrated based on exponential lifetimes with progressiv
e Type-I censoring.\n\nhttps://conferences.enbis.org/event/11/contribution
s/156/
LOCATION:Room 4
URL:https://conferences.enbis.org/event/11/contributions/156/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Inference for the Progressively Type-I Censored $K$-Level Step-Str
ess Accelerated Life Tests Under Interval Monitoring with the Lifetimes fr
om a Log-Location-Scale Family
DTSTART;VALUE=DATE-TIME:20210914T150500Z
DTEND;VALUE=DATE-TIME:20210914T152500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-157@conferences.enbis.org
DESCRIPTION:Speakers: Aruni Jayathilaka (The University of Texas at San An
tonio)\nAs the field of reliability engineering continues to grow and adap
t with time\, accelerated life tests (ALT) have progressed from luxury to
necessity. ALT subjects test units to higher stress levels than normal con
ditions\, thereby generating more failure data in a shorter time period. I
n this work\, we study a progressively Type-I censored k-level step-stress
ALT under interval monitoring. In practice\, the financial and technical
barriers to ascertaining precise failure times of test units could be insu
rmountable. Therefore\, it is often practical to collect failure counts a
t specific points in time during ALT. Here\, the latent failure times are
assumed to have a log-location-scale distribution as the observed lifetime
s may follow Weibull or log-normal distributions\, which are members of th
e log-location-scale family. We develop the inferential methods for the st
ep-stress ALT under the general log-location-scale family\, assumin
g that the location parameter is linearly linked to the stress level. The
methods are illustrated using three popular lifetime distributions: Weibul
l\, lognormal and log-logistic.\n\nhttps://conferences.enbis.org/event/11/
contributions/157/
LOCATION:Room 4
URL:https://conferences.enbis.org/event/11/contributions/157/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Design Optimization for the Step-Stress Accelerated Degradation Te
st under Tweedie Exponential Dispersion Process
DTSTART;VALUE=DATE-TIME:20210914T144500Z
DTEND;VALUE=DATE-TIME:20210914T150500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-155@conferences.enbis.org
DESCRIPTION:Speakers: David Han ()\nThe accelerated degradation test (ADT)
is a popular tool for assessing the reliability characteristics of highly
reliable products. Hence\, designing an efficient ADT has been of gr
eat interest\, and it has been studied under various well-known stochastic
degradation processes\, including Wiener process\, gamma process\, and in
verse Gaussian process. In this work\, Tweedie exponential dispersion proc
ess is considered as a unified model for general degradation paths\, inclu
ding the aforementioned processes as special cases. Its flexibility can pr
ovide better fits to the degradation data and thereby improve the reliabil
ity analyses. For computational tractability\, the saddle-point approximat
ion method is applied to approximate its density. Based on this framework\
, the design optimization for the step-stress ADT is formulated under
C-optimality. Under the constraint that the total experimental cost does n
ot exceed a pre-specified budget\, the optimal design parameters such as m
easurement frequency and test termination time are determined via minimizi
ng the approximate variance of the estimated mean time to failure of a pro
duct/device under the normal operating condition.\n\nhttps://conferences.e
nbis.org/event/11/contributions/155/
LOCATION:Room 4
URL:https://conferences.enbis.org/event/11/contributions/155/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Vibration signal analysis to classify spur gearbox failure
DTSTART;VALUE=DATE-TIME:20210914T154500Z
DTEND;VALUE=DATE-TIME:20210914T160500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-152@conferences.enbis.org
DESCRIPTION:Speakers: Antonio Pérez-Torres (Universidad Politécnica de V
alencia)\nA gearbox is a fundamental component in a rotating machine\;
therefore\, detecting a fault or malfunction early is indispensable to
avoid accidents\, plan maintenance activities and reduce downtime costs.
The vibration signal is widely used to monitor the condition of a gearbox
because it reflects the dynamic behavior in a non-invasive way. The
objective of this research was to rank condition indicators so as to
classify the severity level of a series of mechanical faults
efficiently. \nThe vibr
ation signal was acquired with six accelerometers located in different pos
itions by modifying the load and frequency of rotation using a spur gearbo
x with different types and severity levels of failures simulated in labora
tory conditions. Firstly\, to summarize the vibration signal\, condition
indicators (statistical parameters) in both the time and frequency domains
were calculated. Then\, Random Forest (RF) selected the leading condition
indicators\, and finally\, the k-nearest neighbors and RF methods were
used and compared for classifying the severity level. \nIn conclusion\,
the leading condition indicators in the time and frequency domains were
determined for classifying the severity level\, with Random Forest being
the most efficient classification method.\n\nhttps://conferences.enbis.org/event/11/contributions/
152/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/152/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Interactive tool for clustering and forecasting patterns of Taiwan
COVID-19 spread
DTSTART;VALUE=DATE-TIME:20210914T152500Z
DTEND;VALUE=DATE-TIME:20210914T154500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-142@conferences.enbis.org
DESCRIPTION:Speakers: Frederick Kin Hing Phoa (Academia Sinica)\nThe COVID
-19 data analysis is essential for policymakers in analyzing the outbreak
and managing the containment. Many approaches based on traditional time se
ries clustering and forecasting methods such as hierarchical clustering an
d exponential smoothing have been proposed to cluster and forecast the COV
ID-19 data. However\, most of these methods do not scale up with the high
volume of cases. Moreover\, the interactive nature of the application
critically demands complex yet effective clustering and forecasting te
chniques. In this paper\, we propose a web-based interactive tool to clust
er and forecast the available data on Taiwan COVID-19 confirmed infection
cases. We apply the Model-based (MOB) tree and domain-relevant attributes
to cluster the dataset and display forecasting results using the Ordinary
Least Squares (OLS) method. In this OLS model\, we apply a model produced b
y the MOB tree to forecast all series in each cluster. Our user-friendly p
arametric forecasting method is computationally cheap. A web app based on
R's Shiny App makes it easier for the practitioners to find clustering and
forecasting results while choosing different parameters such as domain-re
levant attributes. These results could help determine the spread pattern a
nd be utilized by researchers in medical fields.\n\nhttps://conferences.en
bis.org/event/11/contributions/142/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/142/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Spatial correction of low-cost sensors observations for fusion of
air quality measurements
DTSTART;VALUE=DATE-TIME:20210914T150500Z
DTEND;VALUE=DATE-TIME:20210914T152500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-162@conferences.enbis.org
DESCRIPTION:Speakers: Jean-Michel Poggi (University Paris-Saclay)\nThe con
text is the statistical fusion of air quality measurements coming from dif
ferent monitoring networks. The first\, the reference network\, consists
of fixed sensors of high quality\; the second consists of micro-sensors
of lower quality. Pollution maps are obtained from the correction of
numerical model o
utputs using the measurements from the monitoring stations of air quality
networks. Increasing the density of sensors would then improve the quality
of the reconstructed map. The recent availability of low-cost sensors in
addition to reference station measurements makes it possible without prohi
bitive cost. \nUsually\, a geostatistical approach is used for the fusion
of measurements\, but the first step is to correct the micro-sensor
measurements against those given by the reference sensors\, by fitting
offline a model derived from a costly and sometimes impossible
colocation period. We pr
opose to complement these approaches by considering online spatial correct
ion of micro-sensors performed simultaneously with data fusion. The basic
idea is to use the reference network to correct the measures from network
2: the reference measurements are first estimated by kriging only the meas
urements of network 2\; then the residuals of the estimation on network 1
are calculated\; and finally\, the correction to be applied to the micro-s
ensors is obtained by kriging these residuals. This sequence of steps can
then be iterated\, and the roles of the two networks can optionally be
alternated during the iterations. \nThis algorithm is first introduced\,
then explored b
y simulation\, and then applied to a real-world dataset.\n\nhttps://confer
ences.enbis.org/event/11/contributions/162/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/162/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Data Mining for Discovering Defect Associations and Patterns to
Improve Product Quality: A Case for Printed Circuit Board Assembly
DTSTART;VALUE=DATE-TIME:20210914T144500Z
DTEND;VALUE=DATE-TIME:20210914T150500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-125@conferences.enbis.org
DESCRIPTION:Speakers: Ayse Merve Parlaktuna ()\nMeeting customer quality e
xpectations and delivering high quality products is the key for operationa
l excellence. In this study\, a printed circuit board (PCB) assembly proce
ss is considered for improvement. Associations between the defects as well
as patterns of the defects over time are investigated. The Apriori algorithm
for association rule mining and Sequential Pattern Discovery using Equiva
lence classes (SPADE) algorithm for pattern mining were implemented in R a
nd SPMF\, respectively. A dataset consisting of seven years of defect data
standardized according to the IPC Standard was prepared for this purpose.
Association analysis was done on the basis of card types and the years. I
t is concluded that associations between defect types change according to
the card type due to design parameters. Pattern analysis indicated that so
me defect types are recurring over time. For example\, insufficient solder
and tombstone defect types recurred repeatedly. On the other hand\, th
ere were also some defect types\, such as excess solder defects causing so
lder balls\, that occurred sequentially. As the root causes of excess sold
er defects were eliminated\, most of the potential solder ball defects wer
e also eliminated. In the following\, preparation of the dataset for analy
ses\, implementation\, and results of the study are discussed with example
s.\n\nhttps://conferences.enbis.org/event/11/contributions/125/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/125/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Hands-on Projects for Teaching DoE
DTSTART;VALUE=DATE-TIME:20210914T144500Z
DTEND;VALUE=DATE-TIME:20210914T161500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-225@conferences.enbis.org
DESCRIPTION:Speakers: Sonja Kuhnt (Dortmund University of Applied Sciences
and Arts)\, Shirley Coleman ()\n__About the Session:__\n\nAre you interes
ted in case studies and real-world problems for active learning of statist
ics? Then come and join us in this one-hour interactive session organised
by the SIG Statistics in Practice. The session follows on from a similar e
vent in ENBIS 2020. \nA famous project for students to apply the acquired
knowledge of design of experiments is Box's paper helicopter. Although
quite simple and cheap to build\, it covers various aspects of DoE.
Beyond this\, what other DoE projects are realistic in a teaching
environment? What are your experiences in using them? Can we think of new
ones? There are lots of ideas we could explore\, involving more complex
scenarios like time series dependence with cross-overs\, functional data
analysis\, as well as mixture experiments.\nWe want to share projects\, discus
s pitfalls and successes\, and search our minds for new ideas. Come and join
us for this session. You may just listen\, enjoy and hopefully contribute
to the discussion or even share a project idea. \n\n\n__Planned Contributi
ons:__\n\nNadja Bauer (SMF and Dortmund University of Applied Sciences and
Arts\, Germany) presents a __color mixing DoE problem__\, where the adjus
table parameters such as\, among others\, the proportion and temperature o
f the incoming colors (cyan\, magenta and yellow) influence the color and
temperature of the mixture.\n\nMark Anderson\, lead author of the DOE/RSM/
Formulation Simplified book trilogy\, will demonstrate a fun __experiment
on bouncing balls__ that illustrates the magic of multifactor DoE.\n\nJacq
ueline Asscher (Kinneret College on the Sea of Galilee and Technion\, Isra
el) shares her __water beads DoE project__. Water beads are small\, cheap
balls made from a water-absorbing polymer. They are added to the soil in g
ardens and planters\, as they absorb a large amount of water and release i
t slowly. This is a simple but not entirely trivial process. It can be inv
estigated using experiments run either at home or in the classroom.\n\nJon
athan Smyth-Renshaw (Jonathan Smyth-Renshaw & Associates Limited\, UK) pre
sents a __DoE problem with a food manufacturer__\, where a Plackett-Burman
design experiment is used to understand the impact of 7 factors - 5 i
ngredients and 2 process settings. \n\nThejasvi TV (India) presents applic
ations of __DoE in dentistry__.\n\nhttps://conferences.enbis.org/event/11/
contributions/225/
LOCATION:Room 2
URL:https://conferences.enbis.org/event/11/contributions/225/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Generalized additive models for ensemble electricity demand foreca
sting
DTSTART;VALUE=DATE-TIME:20210914T154500Z
DTEND;VALUE=DATE-TIME:20210914T160500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-215@conferences.enbis.org
DESCRIPTION:Speakers: Matteo Fasiolo ()\nFuture grid management systems wi
ll coordinate distributed production and storage resources to manage\, in
a cost-effective fashion\,\nthe increased load and variability brought by
the electrification of transportation and by a higher share of weather-dep
endent production.\nElectricity demand forecasts at a low level of aggrega
tion will be key inputs for such systems. In this talk\, I'll focus on for
ecasting demand at the individual household level\,\nwhich is more challen
ging than forecasting aggregate demand\, due to the lower signal-to-noise
ratio and to the heterogeneity of consumption patterns across households.\
nI'll describe a new ensemble method for probabilistic forecasting\, which
borrows strength across the households while accommodating their individu
al idiosyncrasies.\nThe first step consists of designing a set of models o
r 'experts' which capture different demand dynamics and fitting each of th
em to the data from each household.\nThen the idea is to construct an aggr
egation of experts where the ensemble weights are estimated on the whole d
ata set\, the main innovation being that we let the weights vary with the
covariates by adopting an additive model structure. In particular\, the pr
oposed aggregation method is an extension of regression stacking (Breiman\
, 1996) where the mixture weights are modelled using linear combinations o
f parametric\, smooth or random effects.\nThe methods for building and fit
ting additive stacking models are implemented by the gamFactory R package\
, available at https://github.com/mfasiolo/gamFactory\n\nReferences:\n- Br
eiman\, L.\, 1996. Stacked regressions. Machine learning\, 24(1)\, pp.49-6
4.\n\nhttps://conferences.enbis.org/event/11/contributions/215/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/215/
END:VEVENT
BEGIN:VEVENT
SUMMARY:PHEBUS\, a Python package for the probabilistic seismic Hazard Est
imation through Bayesian Update of Source models
DTSTART;VALUE=DATE-TIME:20210914T152500Z
DTEND;VALUE=DATE-TIME:20210914T154500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-211@conferences.enbis.org
DESCRIPTION:Speakers: Merlin Keller (EDF R&D\, France)\nWe propose a metho
dology for the selection and/or aggregation of probabilistic seismic hazar
d analysis (PSHA) models\, which uses Bayesian theory by optimally exploiti
ng all available observations\, in this case\, the seismic and acceleromet
ric databases. Compared to the current method of calculation\, the
proposed approach is simpler to implement\, allows a significant
reduction in computation time\, and makes more exhaustive use of the
data.\nWe implement the
proposed methodology to select the seismotectonic zoning model\, consistin
g of a subdivision of the national territory into regions that are assumed
homogeneous in terms of seismicity\, amongst a list of models proposed in
the literature. Computation of Bayes factors allows comparing the fitting
performance of each model with respect to a given seismic catalog. W
e provide a short description of the resulting PHEBUS Python package struc
ture and illustrate its application to the French context.\n\nhttps://conf
erences.enbis.org/event/11/contributions/211/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/211/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Online Hierarchical Forecasting for Power Consumption Data
DTSTART;VALUE=DATE-TIME:20210914T150500Z
DTEND;VALUE=DATE-TIME:20210914T152500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-139@conferences.enbis.org
DESCRIPTION:Speakers: Margaux Brégère ()\nWe propose a three-step approa
ch to forecasting time series of electricity consumption at different leve
ls of household aggregation. These series are linked by hierarchical const
raints (global consumption is the sum of regional consumptions\, for
example). First\, benchmark forecasts are generated for all series using generali
zed additive models\; second\, for each series\, the aggregation algorithm
`ML-Poly'\, introduced by Gaillard\, Stoltz and van Erven in 2014\, finds
an optimal linear combination of the benchmarks\; finally\, the forecasts
are projected onto a coherent subspace to ensure that the final forecasts
satisfy the hierarchical constraints. By minimizing a regret criterion\,
we show that the aggregation and projection steps improve the root mean sq
uare error of the forecasts. Our approach is tested on household electrici
ty consumption data\; experimental results suggest that successive aggrega
tion and projection steps improve the benchmark forecasts at different lev
els of household aggregation.\n\nhttps://conferences.enbis.org/event/11/contributions
/139/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/139/
END:VEVENT
BEGIN:VEVENT
SUMMARY:ShapKit: a Python module dedicated to local explanation of machine
learning models
DTSTART;VALUE=DATE-TIME:20210914T144500Z
DTEND;VALUE=DATE-TIME:20210914T150500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-138@conferences.enbis.org
DESCRIPTION:Speakers: Vincent Thouvenot ()\nMachine Learning is enjoying
increasing success in many applications: defense\, cyber security\, etc.
However\, models are often very complex. This is problematic\, especially
for critical systems\, because end-users need to fully understand the dec
isions of an algorithm (e.g. why an alert has been triggered or why a pers
on has a high probability of cancer recurrence). One solution is to offer
an interpretation for each individual prediction based on attribute releva
nce. Shapley Values\, coming from cooperative game theory\, make it
possible to fairly distribute contributions to each attribute in order
to understand the d
ifference between a predicted value for an observation and a base value (e
.g. the average prediction of a reference population). While these values
have many advantages\, including their theoretical guarantees\, they have
a strong drawback: the complexity increases exponentially with the number
of features. In this talk\, we will present and demonstrate ShapKit\, an
open-source Python module developed by Thales\, dedicated to efficient
Shapley Values computation for the local explanation of machine learning
models. We will apply ShapKit to a cybersecurity use case.\n\nhttps://conferences.enbis.org/event/11/contributions/138/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/138/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Active coffee break: A short poll on some fun and some constructiv
e topics
DTSTART;VALUE=DATE-TIME:20210914T143000Z
DTEND;VALUE=DATE-TIME:20210914T144500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-232@conferences.enbis.org
DESCRIPTION:Speakers: Kristina Krebs (prognostica GmbH)\, Anja Zernig (KAI
)\nThis contribution offers an active coffee break. This 15-minute
activity is essentially an informal survey and may even be a bit of fun.
Together\, we create a quick picture of two selected topics within the
ENBIS community: a) the fashionable topic of Artificial Intelligence and
b) ENBIS in general
\, which might give ideas for future events and developments within ENBIS.
Everybody is invited to take part in this short survey. Voting takes plac
e via Mentimeter - either on desktop or mobile phone - and the results ca
n be seen immediately. The ENBIS community’s view on these topics will b
e visualised and might be published as ENBIS’ social media posts and mad
e accessible in the ENBIS Media Centre.\n\nhttps://conferences.enbis.org/e
vent/11/contributions/232/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/232/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Spectral-CUSUM for Online Community Change Detection
DTSTART;VALUE=DATE-TIME:20210914T140000Z
DTEND;VALUE=DATE-TIME:20210914T143000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-175@conferences.enbis.org
DESCRIPTION:Speakers: Yao Xie ()\nDetecting abrupt structural changes in a
dynamic graph is a classic problem in statistics and machine learning. In
this talk\, we present an online network structure change detection algor
ithm called spectral-CUSUM to detect such changes through a subspace proje
ction procedure based on the Gaussian model setting. Theoretical analysis
is provided to characterize the average run length (ARL) and expected dete
ction delay (EDD). Finally\, we demonstrate the good performance of the sp
ectral-CUSUM procedure using simulation and real data examples on earthqua
ke detection in seismic sensor networks. This is a joint work with Minghe
Zhang and Liyan Xie.\n\nhttps://conferences.enbis.org/event/11/contributio
ns/175/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/175/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A Multivariate Non Parametric Monitoring Procedure Based on Convex
Hulls
DTSTART;VALUE=DATE-TIME:20210914T133000Z
DTEND;VALUE=DATE-TIME:20210914T140000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-230@conferences.enbis.org
DESCRIPTION:Speakers: Sotiris Bersimis (University of Piraeus\, Greece)\nB
ersimis et al. (2007)\, motivated by a statement of Woodall and
Montgomery (1999)\, published an extensive review of the field of
multivariate statistical process monitoring (MSPM). According to Bersimis
et al. (2007)\, open problems in the field of MSPM include\, among
others\, the robust design of monitoring procedures and non-parametric
control charts. In this work\, we introduce a non-parametric control
scheme based on convex hulls. The proposed non-parametric control chart
uses the bootstrap to estimate the kernel of the multivariate
distribution\, and then appropriate statistics based on the convex hull
are monitored. The performance of the proposed control chart is very
promising.\n\nReferences:\nBersimis\, S.\, P
sarakis\, S. and Panaretos\, J. (2007). "Multivariate statistical process
control charts: an overview". Quality and Reliability Engineering Internat
ional\, 23\, 517-543.\nWoodall\, W. H. and Montgomery\, D. C. (1999). "Res
earch Issues and Ideas in Statistical Process Control". Journal of Quality
Technology\, 31\, 376-386.\n\nhttps://conferences.enbis.org/event/11/cont
ributions/230/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/230/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Detecting changes in Multistream Sequences
DTSTART;VALUE=DATE-TIME:20210914T130000Z
DTEND;VALUE=DATE-TIME:20210914T133000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-140@conferences.enbis.org
DESCRIPTION:Speakers: George Moustakides (University of Patras)\nMultiple
statistically independent data streams are being observed sequentially and
we are interested in detecting\, as soon as possible\, a change in their
statistical behavior. We study two different formulations of the change de
tection problem. 1) In the first\, a change appears at a single unknown stre
am but then the change starts switching from one stream to the other follo
wing a switching mechanism for which we have absolutely no prior knowledge
. Under the assumption that we can sample simultaneously all streams\, we
identify the exactly optimum sequential detector when the streams are homo
geneous while we develop an asymptotically optimum solution in the inhomog
eneous case. 2) The second formulation involves a permanent change occurri
ng at a single but unknown stream and\, unlike the previous case\, we are
allowed to sample only a single stream at a time. We propose a simple dete
ction structure based on the classical CUSUM test which we successfully ju
stify by demonstrating that it enjoys a strong asymptotic optimality prope
rty.\n\nhttps://conferences.enbis.org/event/11/contributions/140/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/140/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Copula-based robust optimal block designs
DTSTART;VALUE=DATE-TIME:20210914T140000Z
DTEND;VALUE=DATE-TIME:20210914T143000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-170@conferences.enbis.org
DESCRIPTION:Speakers: Werner G. Mueller ()\nBlocking is often used to redu
ce known variability in designed experiments by collecting together homoge
neous experimental units. A common modeling assumption for such experiment
s is that responses from units within a block are dependent. Accounting fo
r such dependencies in both the design of the experiment and the modeling
of the resulting data when the response is not normally distributed can be
challenging\, particularly in terms of the computation required to find a
n optimal design. The application of copulas and marginal modeling provide
s a computationally efficient approach for estimating population-average t
reatment effects. Motivated by an experiment from materials testing\, we d
evelop and demonstrate designs with blocks of size two using copula models
. Such designs are also important in applications ranging from microarray
experiments to experiments on human eyes or limbs with naturally occurring
blocks of size two. We present a methodology for design selection\, make
comparisons to existing approaches in the literature\, and assess the robu
stness of the designs to modeling assumptions.\n\nhttps://conferences.enbi
s.org/event/11/contributions/170/
LOCATION:Room 2
URL:https://conferences.enbis.org/event/11/contributions/170/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Non-parametric multivariate control charts based on data depth not
ion
DTSTART;VALUE=DATE-TIME:20210914T133000Z
DTEND;VALUE=DATE-TIME:20210914T140000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-169@conferences.enbis.org
DESCRIPTION:Speakers: Carmela Iorio (University of Naples Federico II)\nA
control chart is used to monitor a process variable over time by providing
information about the process behavior. Monitoring a process involving
several related variables is usually called a multivariate quality
control problem. Multivariate control charts\, needed when dealing with
more than one quality variable\, rely on very specific models for the
data generating process. When large historical data sets are available\,
prior knowledge of the process may be lacking or a single model for all
the features may not be adoptable\, so that no specific parametric model
turns out to be appropriate and alternative solutions should be
considered. Hence\, exploiting non-parametric methods to build a control
chart appears a reasonable choice. Non-parametric control charts require
no distributional assumptions on the process data and are generally more
robust\, i.e. less sensitive to outliers\, than parametric control
schemes. Among the possible non-parametric statistical techniques\, data
depth functions are gaining growing interest in multivariate quality
control. These are nonparametric functions which are able to provide a
dimension reduction for high-dimensional problems. Several depth measures
are effective for this purpose\, even in the case
ata depth for constructing nonparametric multivariate control charts has b
een neglected so far. Hence\, the contribution of this work is to discuss
how a non-parametric approach based on the notion of the L^p data depth fu
nction can be exploited in the Statistical Process Control framework.\n\nh
ttps://conferences.enbis.org/event/11/contributions/169/
LOCATION:Room 2
URL:https://conferences.enbis.org/event/11/contributions/169/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Application of machine learning models to discriminate tourist lan
dscapes using eye-tracking data
DTSTART;VALUE=DATE-TIME:20210914T130000Z
DTEND;VALUE=DATE-TIME:20210914T133000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-168@conferences.enbis.org
DESCRIPTION:Speakers: Gianpaolo Zammarchi (University of Cagliari)\nNowada
ys tourist websites make extensive use of images to promote their
facilities and their location. Many images\, such as landscapes\, are
used extensively on destination tourism websites to draw tourists’
interest and influence their choices. The use of eye-tracking technology
has improved our knowledge of how different types of pictures are
observed. An eye-tracker makes it possible to accurately locate the gaze
and therefore to carry out precise measurement of eye movements during the visualization
of different stimuli (e.g. pictures\, documents). \nEye-tracking data can
be analyzed to convert the viewing behavior into quantitative measu
rements and they might be collected for a variety of purposes in a variety
of fields\, such as grouping clients\, improving the usability of a websi
te\, and in neuroscience studies. Our work aims to use eye-tracking data f
rom a publicly available repository to get insight into user behavior regard
ing two main categories of images: natural landscapes and city landscapes.
We choose to analyze these data using supervised and unsupervised methods
. Finally\, we evaluate the results in terms of which choice should be mad
e between possible options to shed light on how decision-makers should tak
e this information into account.\n\nhttps://conferences.enbis.org/event/11
/contributions/168/
LOCATION:Room 2
URL:https://conferences.enbis.org/event/11/contributions/168/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Strategies for Supersaturated Screening: Group Orthogonal and Cons
trained Var(s) Designs
DTSTART;VALUE=DATE-TIME:20210914T140000Z
DTEND;VALUE=DATE-TIME:20210914T143000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-195@conferences.enbis.org
DESCRIPTION:Speakers: Maria Weese (Miami University)\nDespite the vast amo
unt of literature on supersaturated designs (SSDs)\, there is a scant reco
rd of their use in practice. We contend this imbalance is due to conflict
ing recommendations regarding SSD use in the literature as well as the des
igns' inabilities to meet practitioners' analysis expectations. To address
these issues\, we first summarize practitioner concerns and expectations
of SSDs as determined via an informal questionnaire. Next\, we discuss and
compare two recent SSDs that pair a design construction method with a par
ticular analysis method. The choice of a design/analysis pairing is shown
to depend on the screening objective. Group orthogonal supersaturated des
igns\, when paired with our new\, modified analysis\, are demonstrated to
have high power even with many active factors. Constrained positive Var(s)
-optimal designs\, when paired with the Dantzig selector\, are recommended
when effect directions can be reasonably specified in advance\; this stra
tegy reasonably controls Type I error rates while still identifying a high
proportion of active factors.\n\nhttps://conferences.enbis.org/event/11/c
ontributions/195/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/195/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Deep Multistage Multi-Task Learning for Quality Prediction and Dia
gnostics of Multistage Manufacturing Systems
DTSTART;VALUE=DATE-TIME:20210914T133000Z
DTEND;VALUE=DATE-TIME:20210914T140000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-194@conferences.enbis.org
DESCRIPTION:Speakers: Hao Yan ()\nIn multistage manufacturing systems\, mo
deling multiple quality indices based on the process sensing variables is
important. However\, the classic modeling technique predicts each quality
variable one at a time\, which fails to consider the correlation within or
between stages. We propose a deep multistage multi-task learning framewor
k to jointly predict all output sensing variables in a unified end-to-end
learning framework\, following the sequential system architecture of the m
ultistage manufacturing system (MMS). Our numerical studies and real case
study show that the new model has superior performance compared to many be
nchmark methods\, as well as great interpretability through the developed
variable selection techniques.\n\nhttps://conferences.enbis.org/event/11/c
ontributions/194/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/194/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Understanding and Addressing Complexity in Problem Solving
DTSTART;VALUE=DATE-TIME:20210914T130000Z
DTEND;VALUE=DATE-TIME:20210914T133000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-193@conferences.enbis.org
DESCRIPTION:Speakers: Roger Hoerl (Union College)\, Jeroen De Mast (Univer
sity of Waterloo)\nComplexity manifests itself in many ways when attemptin
g to solve different problems\, and different tools are needed to deal wit
h the different dimensions underlying that complexity. Not all complexity
is created equal. We find that most treatments of complexity in problem-so
lving within both the statistical and quality literature focus narrowly on
technical complexity\, which includes the complexity of subject matter kn
owledge as well as complexity in accessing and analyzing the data. The lit
erature lacks an understanding of how political complexity or org
anizational complexity interferes with good technical solutions when tryin
g to deploy a solution. Therefore\, people trained in statistical problem
solving are ill-prepared for the situations they are likely to face on rea
l projects. We propose a framework that illustrates examples of complexity
from our own experiences and the literature. This framework highlights th
e need for more holistic problem-solving approaches and a broader view o
f complexity. We also propose approaches to successfully navigate complexi
ty.\n\nhttps://conferences.enbis.org/event/11/contributions/193/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/193/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Statistical analysis of simulation experiments: Challenges for ind
ustrial applications
DTSTART;VALUE=DATE-TIME:20210914T123000Z
DTEND;VALUE=DATE-TIME:20210914T130000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-177@conferences.enbis.org
DESCRIPTION:Speakers: Bertrand Iooss ()\nThis talk concerns developing and
disseminating statistical tools for addressing industrial issues. It is b
ased entirely on my 20 years' experience as a research engineer and expert
statistician at the French nuclear energy research institute (CEA) and th
e French electricity company (EDF). I will focus in particular on uncertai
nty quantification in numerical simulation and computer experiments modeli
ng. For my company\, in a small-data context (which occurs frequently with
expensive experiments and/or limited available information)\, numerical m
odel exploration techniques make it possible to better understand a risky
situation and\, sometimes\, to resolve a safety issue. I will highlight so
me successful projects (always collective ones)\, emphasizing both the inn
ovative scientific aspects (kriging metamodeling and global sensitivity an
alysis in high dimension) and the organizational reasons for their success
.\n\nhttps://conferences.enbis.org/event/11/contributions/177/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/177/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A robust method for detecting sparse changes in high-dimensional (
heteroskedastic) data
DTSTART;VALUE=DATE-TIME:20210914T120000Z
DTEND;VALUE=DATE-TIME:20210914T123000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-176@conferences.enbis.org
DESCRIPTION:Speakers: Inez Zwetsloot ()\nBecause of the curse of dimension
ality\, high-dimensional processes present challenges to traditional multi
variate statistical process monitoring (SPM) techniques. In addition\, the
unknown underlying distribution and complicated dependency among variable
s such as heteroscedasticity increase uncertainty of estimated parameters\
, and decrease the effectiveness of control charts. Furthermore\, the requ
irement of sufficient reference samples limits the application of traditio
nal charts in high dimension low sample size scenarios (small n\, large p)
. More difficulties appear when detecting and diagnosing abnormal behavior
s that are caused by a small set of variables\, i.e.\, sparse changes. In
this talk\, I will propose a change-point monitoring method to detect spar
se shifts in the mean vector of high-dimensional processes. Examples from
manufacturing and finance are used to illustrate the effectiveness of the
proposed method in high-dimensional surveillance applications.\n\nhttps://
conferences.enbis.org/event/11/contributions/176/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/176/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A Predictive Maintenance Model Proposal for a Manufacturing Compan
y
DTSTART;VALUE=DATE-TIME:20210914T104000Z
DTEND;VALUE=DATE-TIME:20210914T110000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-224@conferences.enbis.org
DESCRIPTION:Speakers: Cemal Aydın (TÜBİTAK)\, Volkan Sonmez (Hacettepe
University)\nMaintenance planning is one of the most important problems fo
r manufacturing enterprises. The maintenance strategies applied in industr
y are corrective and preventive maintenance. The development of sensor tec
hnologies has led to widespread use of preventive maintenance methods. How
ever\, installing such sensor systems can be costly for small and medium-s
ized enterprises. This study proposes a predictive maintenance model based
on production-line loss data for enterprises that lack such recorded sens
or data for their production equipment.\nIn the study\, data from a compan
y that produces PVC profiles were used\, including the amount of loss by s
hift and line\, production speed differences\, and the number of shifts si
nce the last maintenance. First\, a threshold value was determined based o
n planned maintenance. Then\, models that estimate the amount of loss on t
he production line for the following shift were trained. Statistical learn
ing algorithms such as linear regression\, neural networks\, random forest
\, and gradient boosting were used to train the models. When the performan
ce of the trained models was compared\, the neural network proved the most
successful.\nFinally\, the study explains how to decide whether or not to
perform maintenance on a production line. In the proposed method\, the am
ount of loss on the relevant production line is estimated and compared wit
h the threshold value. If the estimated loss is greater than the threshold
value\, maintenance should be performed\; otherwise\, no maintenance is p
erformed.\n\nhttps://conferences.enbis.org/event/11/contributions/224/
LOCATION:Room 4
URL:https://conferences.enbis.org/event/11/contributions/224/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tensor based Modelling of Human Motion
DTSTART;VALUE=DATE-TIME:20210914T102000Z
DTEND;VALUE=DATE-TIME:20210914T104000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-219@conferences.enbis.org
DESCRIPTION:Speakers: Philipp Wedenig ()\nFor future industrial applicatio
ns\, collaborative robotic systems will be a key technology. A main task i
s to guarantee the safety of humans. To detect hazardous situations\, comm
ercially available robotic systems rely on direct physical contact with th
e co-working person\, as opposed to systems equipped with predictive capab
ilities. To predict potential episodes where the human and the robot might
collide\, data from a motion-tracking sensor system are used. Based on th
e provided information\, the robotic system can avoid unwanted physical co
ntact by adjusting its speed or position. A common approach in such system
s is to perform human motion prediction with machine learning methods such
as artificial neural networks. Our aim is to predict human motion in a re
petitive assembly task using tensor-on-tensor regression. To record human
motion with the OptiTrack motion capture system\, infrared reflective mark
ers are placed on the corresponding joints of the human torso. The system
provides uniquely traceable Cartesian coordinates (x\, y\, z) over time fo
r each marker. Furthermore\, the recorded joint position data were transfo
rmed into joint angle space to obtain the angles at the joint points. To p
redict human motion\, the contracted tensor product for the linear predict
ion of an outcome array Y from the predictor array X is defined as Y = ⟨
X\, B⟩ + E\, where B is the coefficient tensor and E the error term. The
first results are promising for obtaining multivariate predictions of hig
hly correlated data in real time.\n\nhttps://conferences.enbis.org/event/1
1/contributions/219/
LOCATION:Room 4
URL:https://conferences.enbis.org/event/11/contributions/219/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Predicting migration patterns in Sweden using a gravity model and
neural networks
DTSTART;VALUE=DATE-TIME:20210914T100000Z
DTEND;VALUE=DATE-TIME:20210914T102000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-174@conferences.enbis.org
DESCRIPTION:Speakers: Magnus Pettersson ()\nAccurate estimates of internal
migration are crucial for successful policy making and community planning
. This report aims to estimate internal migration between municipalities i
n Sweden.\n\nTraditionally\, spatial flows of people have been modelled us
ing gravity models\, which assume that each region attracts or repels peop
le based on the populations of the regions and the distances between them.
More recently\, artificial neural networks\, which are statistical models
inspired by biological neural networks\, have been suggested as an altern
ative approach. Traditional models\, using a generalized linear framework\
, have been implemented and are used as a benchmark to evaluate the precis
ion and efficiency of the neural network procedures.\n\nData on migration
between municipalities in Sweden during the years 2001 to 2020 have been e
xtracted from official records. There are 290 municipalities (LAU 2 accord
ing to EuroStat categories) in Sweden\, with population sizes between 2 39
1 (Bjurholm) and 975 277 (Stockholm). Additional data\, including demograp
hic and socio-economic factors\, have been analyzed in an attempt to under
stand what drives internal migration.\n\nhttps://conferences.enbis.org/con
tributions/174/
LOCATION:Room 4
URL:https://conferences.enbis.org/event/11/contributions/174/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Univariate Self-Starting Shiryaev (U3S): A Bayesian Online Change
Point Model for Short Runs
DTSTART;VALUE=DATE-TIME:20210914T104000Z
DTEND;VALUE=DATE-TIME:20210914T110000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-204@conferences.enbis.org
DESCRIPTION:Speakers: Konstantinos Bourazas (Athens University of Economic
s and Business)\nIn Statistical Process Control/Monitoring (SPC/M) our int
erest is in detecting when a process deteriorates from its “in control
” state\, typically established after a long phase I exercise. Detecting
shifts in short-horizon data of a process with unknown parameters (i.e.\,
without a phase I calibration) is quite challenging.\nIn this work\, we pr
opose a self-starting Bayesian change point scheme\, which is based on t
he cumulative posterior probability that a change point has occurred.
We will focus our attention on univariate Normal data\, aiming to detect
persistent shifts in the mean or the variance. The proposed methodology i
s a generalization of Shiryaev’s process\, as it allows both the paramet
ers and shift magnitude to be unknown. Furthermore\, Shiryaev’s assu
mption that the prior probability on the location of the change point is c
onstant will be relaxed. Posterior inference for the unknown parameters an
d the location of a (potential) change point will be provided. \nTwo real
data sets will illustrate the Bayesian self-starting Shiryaev’s scheme\,
while a simulation study will evaluate its performance against standard c
ompetitors in the cases of mean changes and variance inflations.\n\nhttps:
//conferences.enbis.org/event/11/contributions/204/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/204/
END:VEVENT
BEGIN:VEVENT
SUMMARY:CUSUM control charts for monitoring BINARCH(1) processes
DTSTART;VALUE=DATE-TIME:20210914T102000Z
DTEND;VALUE=DATE-TIME:20210914T104000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-182@conferences.enbis.org
DESCRIPTION:Speakers: Maria Anastasopoulou ()\nIn this work\, we develop a
nd study upper and lower one-sided CUSUM control charts for monitoring cor
related counts with finite range. Often in practice\, data of that kind ca
n be adequately described by a first-order binomial integer-valued ARCH mo
del (or BINARCH(1)). The proposed charts are based on the likelihood ratio
and can be used for detecting upward or downward shifts in the process me
an level. The general framework for the development and practical implemen
tation of the proposed charts is given. Using Monte Carlo simulation\, we
compare the performance of the proposed CUSUM charts with the correspondin
g one-sided Shewhart and EWMA charts for BINARCH(1) processes. A real-data
application of the proposed charts in epidemiology is also discussed.\n\n
https://conferences.enbis.org/event/11/contributions/182/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/182/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Dubious new control chart designs — a disturbing trend
DTSTART;VALUE=DATE-TIME:20210914T100000Z
DTEND;VALUE=DATE-TIME:20210914T102000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-144@conferences.enbis.org
DESCRIPTION:Speakers: Sven Knoth (Helmut Schmidt University Hamburg\, Germ
any)\nFor the last twenty years\, a plethora of new ``memory-type'' contro
l charts have been proposed. They share some common features: (i) deceptiv
ely good zero-state average run-length (ARL) performance\, but poor steady
-state performance\, (ii) design\, deployment and analysis significantly m
ore complicated than for established charts\, (iii) comparisons made to un
necessarily weak competitors\, and (iv) resulting weighting of the observe
d data overemphasizing the distant past. For the most prominent representa
tive\, the synthetic chart\, these problems have been already discussed (D
avis/Woodall 2002\; Knoth 2016)\, but these and other approaches continue
to gain more and more popularity despite their substantial weaknesses. Rec
ently\, Knoth et al. (2021a\,b) elaborated on issues related to the PM\, H
WMA\, and GWMA charts. Here\, we want to give an overview of this control
chart jumble. We augment the typical zero-state ARL analysis by calculatin
g the more meaningful conditional expected delay (CED) values and their li
mit\, the conditional steady-state ARL. Moreover\, we select the competito
r (EWMA) in a more reasonable way. It is demonstrated that in all cases th
e classical chart should be preferred. The various abbreviations (DEWMA ..
. TEWMA) will be explained during the talk. \n\nDAVIS\, WOODALL (2002).\n
”Evaluating and Improving the Synthetic Control Chart”.\nJQT 34(2)\, 2
00–208.\n\nKNOTH (2016).\n”The Case Against the Use of Synthetic Contr
ol Charts”.\nJQT\, 48(2)\, 178–195.\n\nKNOTH\, TERCERO-GÓMEZ\, KHAKIF
IROOZ\, WOODALL (2021a).\n”The Impracticality of Homogeneously Weighted
Moving Average and Progressive Mean\nControl Chart Approaches”.\nTo appe
ar in QREI.\n\nKNOTH\, WOODALL\, TERCERO-GÓMEZ (2021b).\n”The Case agai
nst Generally Weighted Moving Average (GWMA) Control Charts”. Submitted.
\n\nhttps://conferences.enbis.org/event/11/contributions/144/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/144/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Six-Sigma and Obesity – Part 2
DTSTART;VALUE=DATE-TIME:20210914T104000Z
DTEND;VALUE=DATE-TIME:20210914T110000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-166@conferences.enbis.org
DESCRIPTION:Speakers: Roland Caulcutt (Caulcutt Associates)\nWhen the Covi
d19 pandemic is no longer the prime burden on British health services\, it
might be possible to refocus on the three concerns that threatened to ove
rwhelm the National Health Service in 2019: heart disease\, cancer and obe
sity.\nWhilst the NHS can reasonably claim to have made progress with the
first two\, it is faced with an ever-increasing level of obesity
. To non-clinical members of society this may seem rather surprising\, co
nsidering the relative simplicity of the fat producing process\, compared
with the extreme complexity of cancer and heart disease. It may seem even
more surprising to the many statisticians and process improvement profess
ionals who witnessed the great success of blackbelts improving organisatio
nal processes whilst working within a culture of “Six-Sigma”.\nPart 2
of this presentation will suggest how the blackbelt way of working can be
adapted to improve processes within the human body. It will offer an appr
oach that might help to reduce the ever-increasing level of obesity that h
as blighted so many lives.\n\nhttps://conferences.enbis.org/event/11/contr
ibutions/166/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/166/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Six-Sigma and Obesity – Part 1
DTSTART;VALUE=DATE-TIME:20210914T102000Z
DTEND;VALUE=DATE-TIME:20210914T104000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-165@conferences.enbis.org
DESCRIPTION:Speakers: Roland Caulcutt (Caulcutt Associates)\nWhen the Covi
d19 pandemic is no longer the prime burden on British health services\, it
might be possible to refocus on the three concerns that threatened to ove
rwhelm the National Health Service in 2019: heart disease\, cancer and obe
sity.\nWhilst the NHS can reasonably claim to have made progress with the
first two\, it is faced with an ever-increasing level of obesity.
To non-clinical members of society this may seem rather surprising\, con
sidering the relative simplicity of the fat producing process\, compared w
ith the extreme complexity of cancer and heart disease. It may seem even
more surprising to the many statisticians and process improvement professi
onals who witnessed the great success of blackbelts improving organisation
al processes whilst working within a culture of “Six-Sigma”.\nPart 1 o
f this presentation will explain why many blackbelts have had such amazing
success by improving organisational processes\, many of which had a histor
y of chronic under-performance.\n\nhttps://conferences.enbis.org/event/11/co
ntributions/165/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/165/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Cleanliness an underestimated area when solving problems on Safety
Critical Aerospace parts
DTSTART;VALUE=DATE-TIME:20210914T100000Z
DTEND;VALUE=DATE-TIME:20210914T102000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-187@conferences.enbis.org
DESCRIPTION:Speakers: Sören Knuts ()\nWithin the aerospace industry\, cle
aning is governed by standards and specifications on how to fulfil a clean
ing requirement for a given material. Nevertheless\, it is an area where u
nderlying technical problems tend to be intermittent and long-term in natu
re. Cause-and-effect relationships are hard to derive\, which makes proble
m solving more of a guessing game. The lack of understanding of the underl
ying mechanisms by which the cleaning method interacts with the material l
imits the C&E analysis and makes it almost impossible to reach a common un
derstanding of how to prioritize improvement initiatives in the cross-func
tional product team. This is hampered further by the lack of a precise mea
surement system and of standardized procedures for evaluating the capabili
ty of the measurements relative to cleaning variations on a regular basis.
What is needed is a measurement system\, including visualization methods\
, that not only detects poor performance of the cleaning method but also m
onitors its nominal performance within limits over time\, that is\, contro
l limits.\nIn this presentation a technical cleanliness problem related to
background fluorescence on a safety-critical aero engine part is shown. T
he background fluorescence limits the inspectability of the part\, and fur
ther cleaning must be done to make inspection possible. The fuzzy origin a
nd the different hypotheses are discussed\, as well as the way to attack t
he difficult measurement problem.\n\nhttps://conferences.enbis.org/contributi
ons/187/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/187/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A hybrid method for degradation assessment and fault detection in
rolling element bearings
DTSTART;VALUE=DATE-TIME:20210914T094000Z
DTEND;VALUE=DATE-TIME:20210914T100000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-190@conferences.enbis.org
DESCRIPTION:Speakers: Yonatan Nissim ()\nRolling Element Bearings (REBs) a
re key components in rotary machines\, e.g.\, turbines and engines. REBs t
end to suffer from various faults causing serious damage to the whole syst
em. Therefore\, many techniques and algorithms have been developed over th
e past years to detect and diagnose\, as early as possible\, an incipient
fault and its propagation using vibration monitoring. Moreover\, some of t
he methods attempt to estimate the severity of the degrading system\, to a
chieve better prognostics and Remaining Useful Life (RUL) estimation.\nWhi
le data-driven methods such as machine and deep learning continue to grow\
, they still lack physical awareness and remain sensitive to phenomena unr
elated to the fault. In this paper\, we present a hybrid method for REB fa
ult diagnosis that combines physics-based pre-processing techniques with d
eep learning models for semi-supervised fault detection. To evaluate our r
esults\, we compare the performance of different detection methods\, drawn
from both the physics-based and data-driven fields\, on data from an endu
rance test with a propagating fault in the outer race. The results show th
at the presented hybrid method\, which includes physics-aware signal proce
ssing techniques and feature extraction related to the bearing fault\, can
increase the reliability and interpretability of the data-driven model. T
he health indicator obtained from the proposed method showed a clearer tre
nd\, indicating the severity of the fault\, and improved tracking of the h
ealth of the degrading system.\n\nht
tps://conferences.enbis.org/event/11/contributions/190/
LOCATION:Room 4
URL:https://conferences.enbis.org/event/11/contributions/190/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Modelling electric vehicle charging load with point processes and
multivariate mixtures
DTSTART;VALUE=DATE-TIME:20210914T092000Z
DTEND;VALUE=DATE-TIME:20210914T094000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-181@conferences.enbis.org
DESCRIPTION:Speakers: Yvenn Amara-Ouali ()\nNumerous countries are making
electric vehicles their key priority to reduce emissions in their transpor
t sector. This emerging market is subject to multiple unknowns and in part
icular the charging behaviours of electric vehicles. The lack of data desc
ribing the interactions between electric vehicles and charging points hind
ers the development of statistical models describing this interaction [1].
In this work\, we want to address this gap by proposing a data-driven mod
el of the electric vehicle charging load benchmarked on open charging sess
ion datasets. These open datasets cover all common charging behaviours: (a
) public charging\, (b) workplace charging\, (c) residential charging. The
model introduced in this work focuses on three variables that are paramou
nt for reconstructing the electric vehicle charging load in an uncontrolle
d charging environment: the arrival time\, the charging duration\, and the
energy demanded for each charging session. The arrivals of EVs at chargin
g points are characterised as a non-homogeneous Poisson process\, and the
charging duration and energy demanded are modelled conditionally on these
arrival times as a bivariate mixture of Gaussian distributions. We compare
the performance of the proposed model on all these datasets across dif
ferent metrics.\n[1] Amara-Ouali\, Y. et al. 2021. A Review of Electric Ve
hicle Load Open Data and Models. Energies. 14\, 8 (Apr. 2021)\, 2233. DOI:
https://doi.org/10.3390/en14082233.\n\nhttps://conferences.enbis.org/event
/11/contributions/181/
LOCATION:Room 4
URL:https://conferences.enbis.org/event/11/contributions/181/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Estimating the Time to Reach the Curing Temperature in Autoclave C
uring Processes
DTSTART;VALUE=DATE-TIME:20210914T090000Z
DTEND;VALUE=DATE-TIME:20210914T092000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-154@conferences.enbis.org
DESCRIPTION:Speakers: Gözdenur Kırdar (Hacettepe University)\nThe autocl
ave curing process is one of the important stages in manufacturing. In this pr
ocess\, multiple parts are loaded in the autoclave as a batch\, they are h
eated up to their curing temperature (heating phase) and cured at that tem
perature for their dwell period. There are two main considerations that af
fect how parts are placed in the autoclave. Firstly\, if some parts reach
the curing temperature earlier than the others\, they are overcured until
the remaining parts reach that temperature. This overcuring worsens the qu
ality of the final products. Secondly\, shorter curing cycles are preferre
d to increase productivity of the whole system. Both considerations can be
addressed if the time required for each part to reach the curing temperat
ure (heating time) is known in advance. However\, there are no established
relationships between part properties and their heating times. In this st
udy\, we model the relationship between part and batch properties and the
heating times. We consider the effects of location\, part weight\, part si
ze\, and batch properties on the heating times. The autoclave charge floor
is notionally divided into 18 areas\, and for each area multiple linear re
gression models that estimate the heating times are developed. Additionally\,
a biobjective optimization model is developed that finds efficient placem
ents of parts\, minimizing the maximum overcuring duration and the duratio
n of the heating phase. The approach is applied to a real case\, and an ef
ficient solution is implemented. The regression models yield estimates tha
t closely match the realized values.\n\nhttps://conferences.enbis.o
rg/event/11/contributions/154/
LOCATION:Room 4
URL:https://conferences.enbis.org/event/11/contributions/154/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Bayesian Transfer Learning for modelling the hydrocracking process
using kriging
DTSTART;VALUE=DATE-TIME:20210914T084000Z
DTEND;VALUE=DATE-TIME:20210914T090000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-146@conferences.enbis.org
DESCRIPTION:Speakers: Loïc IAPTEFF ()\nThe hydrocracking reaction takes p
lace in the presence of a catalyst\, and when supplying a catalyst\, a ven
dor must guarantee its performance. In this work\, linear and kriging mode
ls are considered to model the process. The construction of predictive mod
els is based on experimental data\, and experiments are very expensive. Ne
w catalysts are constantly being developed\, so each new generation of cat
alyst requires a new model that has until now been built from scratch from
new experiments. The aim of this work is to build the best predictive mod
el for a new catalyst from fewer observations\, using the observations of
previous-generation catalysts. This task is known as transfer learning.\n\
nThe method used is the parameter knowledge transfer approach\, which cons
ists in transferring regression models from an old dataset to a new one.\n
In order to adapt the past knowledge to the new catalyst\, a Bayesian appr
oach is considered. The idea is to take as prior a distribution centered o
n the previous model parameters. A pragmatic approach to choosing the prio
r variance is proposed\, ensuring that it is large enough to allow the par
ameters to change and small enough to retain the information.\n\nWith the
Bayesian transfer approach\, the RMSE scores for the transferred models ar
e always lower than those obtained without transfer\, especially when the
number of observations is low. Satisfactory models can be fitted with only
five new observations. Without transfer\, reaching the same model quality
requires about fifty observations.\n\nhttps://conferences.e
nbis.org/event/11/contributions/146/
LOCATION:Room 4
URL:https://conferences.enbis.org/event/11/contributions/146/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Deciphering Random Forest models through conditional variable impo
rtance
DTSTART;VALUE=DATE-TIME:20210914T094000Z
DTEND;VALUE=DATE-TIME:20210914T100000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-164@conferences.enbis.org
DESCRIPTION:Speakers: Marta Rotari ()\, Murat Kulahci ()\nIn many data ana
lytics applications based on machine learning algorithms\, the main focus
is usually on predictive modeling. In certain cases\, as in many applicati
ons in manufacturing\, understanding the data-driven model plays a crucial
role in complementing the engineering knowledge about the production proc
ess. There is therefore a growing interest in describing the contributions
of the input variables to the model in the form of “variable importance
”\, which is readily available in certain machine learning methods such
as random forest (RF). In this study\, we focus on the Boruta algorithm\,
which is an effective tool in determining the importance of variables in R
F models. In many industrial applications with multiple input variables\,
it becomes likely to observe high correlation among these variables. It is
shown that the correlation among the input variables distorts and overest
imates the importance of variables. The Boruta algorithm is also affected
by this resulting in a larger set of input variables deemed important. To
overcome this\, in this study we present an extension of the Boruta algori
thm for the correlated data by exploiting the conditional importance\, whi
ch takes into consideration the correlation structure of the variables for
computing the importance scores. This leads to a significant improvement
of the variable importance scores in the case of a high correlation among
variables and to a more precise ranking of the variables that contribute t
o the model significantly. We believe this approach can be used in many in
dustrial applications by providing more transparency and understanding of
the process.\n\nhttps://conferences.enbis.org/event/11/contributions/164/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/164/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Forecasting count time series in retail
DTSTART;VALUE=DATE-TIME:20210914T092000Z
DTEND;VALUE=DATE-TIME:20210914T094000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-208@conferences.enbis.org
DESCRIPTION:Speakers: Bruno Flores (ICMAT-CSIC)\nLarge-scale dynamic forec
asting of non-negative count series is a major challenge in many areas lik
e epidemic monitoring or retail management. We propose Bayesian state-spac
e models that are flexible enough to adequately forecast high and low coun
t series and exploit cross-series relationships with a multivariate approa
ch. This is illustrated with a large scale sales forecasting problem faced
by a major retail company\, integrated within its inventory management pl
anning methodology. The company has hundreds of shops in several countries
\, each one with thousands of product references.\n\nhttps://conferences.enbis.org
/event/11/contributions/208/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/208/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Prediction intervals for real estate price prediction
DTSTART;VALUE=DATE-TIME:20210914T084000Z
DTEND;VALUE=DATE-TIME:20210914T090000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-151@conferences.enbis.org
DESCRIPTION:Speakers: Moritz Beck (University of Wuerzburg)\, Rainer Göb
()\nAutomated procedures of real estate price estimation and prediction ha
ve been used in the real estate sector for the past 15 years. Various pro
viders of real estate price predictions are available\, e.g.\, the platfo
rm Zillow or ImmoScout24 from Germany. Simultaneously\, the problem of real est
ate price prediction has become a subject of statistical and machine learn
ing literature. The current providers and theory strongly focus on point p
redictions. For users\, however\, interval predictions are more useful and
reliable. A promising approach for obtaining prediction intervals is qu
antile regression. We analyse several methods of quantile regression\, in
particular linear quantile regression\, support vector quantile regression
\, quantile gradient boosting\, quantile random forest\, $k$-nearest neigh
bour quantile regression\, and $L_1$-norm quantile regression. The perfor
mance of the methods is evaluated on a large data set of real estate prices wi
th relevant covariates. It turns out that the best predictive power is obt
ained by linear quantile regression and $k$-nearest neighbour quantile reg
ression.\n\nhttps://conferences.enbis.org/event/11/contributions/151/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/151/
END:VEVENT
BEGIN:VEVENT
SUMMARY:A tailored analysis of data from OMARS designs
DTSTART;VALUE=DATE-TIME:20210914T094000Z
DTEND;VALUE=DATE-TIME:20210914T100000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-128@conferences.enbis.org
DESCRIPTION:Speakers: Mohammed Saif Ismail Hameed (KU Leuven)\nExperimenta
l data are often highly structured due to the use of experimental designs.
This not only simplifies the analysis\, but also allows for tailored met
hods of analysis that extract more information from the data than generi
c methods. One group of experimental designs that is suitable for such me
thods is the orthogonal minimally aliased response surface (OMARS) design
s (Núñez Ares and Goos\, 2020)\, where all main effects are orthogonal t
o each other and to all second-order effects. The design-based analysis m
ethod of Jones and Nachtsheim (2017) has shown significant improvement ov
er existing methods in power to detect active effects. However\, the applica
tion of their method is limited to only a small subgroup of OMARS designs
that are commonly known as definitive screening designs (DSDs). In our wor
k\, we not only improve upon the Jones and Nachtsheim method for DSDs\, bu
t we also generalize their analysis framework to the entire family of OMAR
S designs. Using extensive simulations\, we show that our customized metho
d for analyzing data from OMARS designs is highly effective in selecting t
he true effects when compared to other modern (non-design based) analysis
methods\, especially in cases where the true model is complex and involves
many second-order effects.\n\nReferences:\n\nJones\, Bradley\, and Chr
istopher J. Nachtsheim. 2017. “Effective Design-Based Model Selection fo
r Definitive Screening Designs.” Technometrics 59(3):319–29.\n\nNúñe
z Ares\, José\, and Peter Goos. 2020. “Enumeration and Multicriteria Se
lection of Orthogonal Minimally Aliased Response Surface Designs.” Techn
ometrics 62(1):21–36.\n\nhttps://conferences.enbis.org/event/11/contribu
tions/128/
LOCATION:Room 2
URL:https://conferences.enbis.org/event/11/contributions/128/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Parameter Diagram as a DoE Planning Tool
DTSTART;VALUE=DATE-TIME:20210914T092000Z
DTEND;VALUE=DATE-TIME:20210914T094000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-136@conferences.enbis.org
DESCRIPTION:Speakers: Matthew Barsalou ()\nStatisticians are often called
upon to work together with Subject Matter Experts (SMEs) to perform Design
of Experiments (DoEs). The statistician may have mastered DoE\; however\,
the SME’s input may be critical in determining the correct factors\, le
vels\, and response variable of interest. The SME may be an engineer or ev
en the machine operator responsible for the daily activities at the proces
s that is being considered for a DoE. They may not understand what a DoE i
s or what is needed for a DoE. To facilitate DoE planning\, a Parameter di
agram (p-diagram) may be helpful. A p-diagram is not a new tool\; it is o
ften used in the automotive industry for the creation of a Design Failur
e Mode and Effects Analysis. The use of a p-diagram as a DoE preparation t
ool\, however\, is a new application of the concept.\n\nThis talk will des
cribe the p-diagram and its application in DoE. Examples will be presented
using actual DoEs from the literature. These case studies are the identif
ication of the AA battery configuration with the longest life\, improving
the quality of a molded part\, increasing the life of a molded tank deterr
ent device\, and the optimization of a silver powder production process. A
fter attending this talk\, participants will be able to use a p-diagram fo
r DoE planning.\n\nhttps://conferences.enbis.org/event/11/contributions/13
6/
LOCATION:Room 2
URL:https://conferences.enbis.org/event/11/contributions/136/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Influence of process parameters on part dimensional tolerances: An
Industrial Case Study
DTSTART;VALUE=DATE-TIME:20210914T090000Z
DTEND;VALUE=DATE-TIME:20210914T092000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-135@conferences.enbis.org
DESCRIPTION:Speakers: Narendra Akhadkar (Schneider Electric Industries)\nI
njection molded parts are widely used in power system protection products.
One of the biggest challenges in an injection molding process is shrinkag
e and warpage of the molded parts. These geometrical variations may have a
n adverse effect on product quality\, functionality\, cost and time-to-ma
rket. Our aim is to predict the spread of the functional dimensions and g
eometrical variations on the part due to variations in input parameters s
uch as material viscosity\, packing pressure\, mold temperature\, melt te
mperature and injection speed.\n\nThe input parameters may vary during ba
tch production or due to variations in the machine process settings. To p
erform an accurate product assembly variation simulation\, the first ste
p is to perform an individual part variation simulation to render realist
ic tolerance ranges.\nWe present a method to simulate part variations com
ing from input parameter variation during batch production. The method i
s based on computer simulations and experimental validation using a ful
l factorial Design of Experiments (DoE). The robustness of the simulatio
n model is verified through a parameter-wise sensitivity analysis perfor
med using simulations and experiments\; all the results show a very goo
d correlation in the material flow direction. There exists a non-linear i
nteraction between the material and the input process variables. It is o
bserved that parameters such as packing pressure\, material and mold tem
perature play an important role in the spread of the functional dimensio
ns and geometrical variations. This method will allow us in the future t
o develop accurate\, realistic virtual prototypes based on trusted simul
ated process variation.\n\nhttps://conferences.enbis.org/event/11/contributions/13
5/
LOCATION:Room 2
URL:https://conferences.enbis.org/event/11/contributions/135/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Bayesian I-optimal designs for choice experiments with mixtures
DTSTART;VALUE=DATE-TIME:20210914T084000Z
DTEND;VALUE=DATE-TIME:20210914T090000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-124@conferences.enbis.org
DESCRIPTION:Speakers: Mario Becerra (KU Leuven)\nDiscrete choice experimen
ts are frequently used to quantify consumer preferences by having responde
nts choose between different alternatives. Choice experiments involving mi
xtures of ingredients have been largely overlooked in the literature\, eve
n though many products and services can be described as mixtures of ingred
ients. As a consequence\, little research has been done on the optimal des
ign of choice experiments involving mixtures. The only existing research h
as focused on D-optimal designs\, which means that an estimation-based app
roach was adopted. However\, in experiments with mixtures\, it is crucial
to obtain models that yield precise predictions for any combination of ing
redient proportions. This is because the goal of mixture experiments gener
ally is to find the mixture that optimizes the respondents' utility. As a
result\, the I-optimality criterion is more suitable for designing choice
experiments with mixtures than the D-optimality criterion because the I-op
timality criterion focuses on getting precise predictions with the estimat
ed statistical model. In this paper\, we study Bayesian I-optimal designs\
, compare them with their Bayesian D-optimal counterparts\, and show that
the former designs perform substantially better than the latter in terms o
f the variance of the predicted utility.\n\nhttps://conferences.enbis.org/
event/11/contributions/124/
LOCATION:Room 2
URL:https://conferences.enbis.org/event/11/contributions/124/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Long short-term memory neural network for statistical process cont
rol of autocorrelated multiple stream process with an application to HVAC
systems in passenger rail vehicles
DTSTART;VALUE=DATE-TIME:20210914T094000Z
DTEND;VALUE=DATE-TIME:20210914T100000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-206@conferences.enbis.org
DESCRIPTION:Speakers: Gianluca Sposito (Department of Industrial Engineeri
ng\, University of Naples Federico II)\nRail transport demand in Europe ha
s increased over the last few years\, and passenger thermal comfort has be
en playing a key role in the fierce competition among different transporta
tion companies. Furthermore\, European standards set operational requir
ements of passenger rail coaches in terms of air quality and comfort level
. To meet these standards and the increasing passenger thermal comfort dem
and\, data from on-board heating\, ventilation and air conditioning (HVAC)
systems have been collected by railway companies to improve maintenance p
rograms in the industry 4.0 scenario. Usually\, a train consists of severa
l coaches equipped with a dedicated HVAC system\, and the sensor signals c
oming from each HVAC system produce multiple data streams. This setting ca
n thus be regarded as a multiple stream process (MSP). Unfortunately\, the
massive amounts of data collected at high rates make each stream more li
kely to be autocorrelated. This scenario calls for a new methodology capab
le of overcoming the simplifying assumptions on which traditional MSP mode
ls are based. This work proposes a new control charting proc
edure based on a long short-term memory neural network trained to solve th
e binary classification problem of detecting whether the MSP is in control
or out of control\, i.e.\, to recognize mean shifts in autocorrelated MSP
s. A simulation study is performed to assess the performance of the propos
ed approach and its practical applicability is illustrated by an applicati
on to the monitoring of HVAC system data\, made available by the rail tran
sport company Hitachi Rail based in Italy.\n\nhttps://conferences.enbis.or
g/event/11/contributions/206/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/206/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Railway track degradation prediction using Wiener process modellin
g
DTSTART;VALUE=DATE-TIME:20210914T092000Z
DTEND;VALUE=DATE-TIME:20210914T094000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-159@conferences.enbis.org
DESCRIPTION:Speakers: Mahdieh Sedghi ()\nTrack geometry is critical for ra
ilway infrastructures\, and the geometry condition and the expected degrad
ation rate are vital for planning maintenance actions to assure the tracks
’ reliability and safety. The degradation prediction accuracy is\, there
fore\, essential. The Wiener process has been widely used for degradation
analysis in various applications based on degradation measurements. In rai
lway infrastructure\, however\, Wiener process-based degradation models ar
e uncommon. This presentation explores the Wiener process for predicting r
ailway track degradation. First\, we review different data-driven approach
es found in the literature to estimate the Wiener process parameters and u
pdate them when new measurements are collected. We study different proce
dures to estimate and update the Wiener process parameters and evaluate th
eir computational performance and prediction errors based on measurement d
ata for a track line in northern Sweden. The results can help balance th
e computational complexity and the prediction accuracy when selecting a Wi
ener process-based degradation model for predictive maintenance of the rai
lway track.\n\nhttps://conferences.enbis.org/event/11/contributions/159/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/159/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Attribute-Variable Alternating Inspection (AVAI): The use of $np_x
-S^2$ mixed control chart in monitoring the process variance
DTSTART;VALUE=DATE-TIME:20210914T090000Z
DTEND;VALUE=DATE-TIME:20210914T092000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-137@conferences.enbis.org
DESCRIPTION:Speakers: Leandro Alves da Silva (Universidade de São Paulo\,
São Paulo SP\, Brazil.)\nThe presence of variation is an undesirable (bu
t natural) factor in processes. Quality improvement practitioners search c
onstantly for efficient ways to monitor it\, a primary requirement in SPC.
Generally\, inspections by attributes are cheaper and simpler than inspec
tions by variables\, although they present poor performance in comparison.
The $S^2$ chart is widely applied in monitoring process variance\; the n
eed for more economical strategies that provide good performance is the m
otivation of this work. Many practitioners use four to six units to buil
d the $S^2$ chart\, and reducing the sample size decreases its power to d
etect changes in process variance. This work proposes the application o
f alternating inspections (by attributes and variables) using\, sequenti
ally\, samples of sizes $n_a$ and $n_b$ ($n_a > n_b$). The items of the s
ample of size $n_a$ are classified according to the $np_x$ chart procedur
e\, using a GO/NO-GO gauge and counting the number of non-approved item
s ($Y_{n_a}$). The items of the sample of size $n_b$ are measured and th
eir sample variance $S^2_{n_b}$ is calculated. If $Y_{n_a} > UCL_{n_a}$ o
r $S^2_{n_b} > UCL_{n_b}$\, the process is judged out of control and the i
nspection restarts with sample size $n_a$ (using the $np_x$ chart)\; othe
rwise\, the process continues. The parameters of the proposed chart are o
ptimized by an intensive search in order to outperform the $S^2$ chart (i
n terms of $ARL_1$\, for a fixed $ARL_0$)\, restricted to an average samp
le size close to that used for the $S^2$ chart\; the results show a reduc
tion of about 10% in $ARL_1$.\n\nhttps://conferences.enbis.org/event/11/contributions/137/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/137/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The study of variability in engineering design—An appreciation a
nd a retrospective
DTSTART;VALUE=DATE-TIME:20210914T084000Z
DTEND;VALUE=DATE-TIME:20210914T090000Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-161@conferences.enbis.org
DESCRIPTION:Speakers: Tim Davis (We Predict Ltd. & timdavis consulting ltd
.)\nWe explore the concept of parameter design applied to the production o
f glass beads in the manufacture of metal-encapsulated transistors. The ma
in motivation is to complete the analysis hinted at in the original public
ation by Jim Morrison in 1957\, which was possibly the first example of ex
ploring the idea of transmitted variation in engineering design\, and an
influential paper in the development of analytic parameter design as a sta
tistical engineering activity. Parameter design (the secondary phase of en
gineering activity) is focussed on selecting the nominals of the design va
riables\, to simultaneously achieve the required functional target\, with
minimum variance.\n\nMorrison\, SJ (1957) The study of variability in engi
neering design. Applied Statistics 6(2)\, 133–138.\n\nhttps://conference
s.enbis.org/event/11/contributions/161/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/161/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Application of the Bayesian conformity assessment framework from J
CGM 106 to lot inspection on the basis of single items
DTSTART;VALUE=DATE-TIME:20210913T144500Z
DTEND;VALUE=DATE-TIME:20210913T151500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-185@conferences.enbis.org
DESCRIPTION:Speakers: Steffen Uhlig (QuoData)\, Bertrand Colson (QuoData)\
nThe ISO 2859 and ISO 3951 series provide acceptance sampling procedures f
or lot inspection\, allowing both sample size and acceptance rule to be de
termined\, starting from a specific value either for the consumer or produ
cer risk. However\, insufficient resources often prohibit the implementati
on of “ISO sampling plans.” In cases where the sample size is already
known\, determined as it is by external constraints\, the focus shifts fro
m determining sample size to determining consumer and producer risks. More
over\, if the sample size is very low (e.g. one single item)\, prior infor
mation should be included in the statistical analysis. For this reason\, i
t makes sense to work within a Bayesian theoretical framework\, such as th
at described in JCGM 106. Accordingly\, the approach from JCGM 106 is adop
ted and broadened so as to allow application to lot inspection. The discus
sion is based on a “real-life” example of lot inspection on the basis
of a single item. Starting from simple assumptions\, expressions for both
the prior and posterior distributions are worked out\, and it is shown how
the concepts from JCGM 106 can be reinterpreted in the context of lot ins
pection. Finally\, specific and global consumer and producer risks are cal
culated\, and differences regarding the interpretation of these concepts i
n JCGM 106 and in the ISO acceptance sampling standards are elucidated.\n\
nhttps://conferences.enbis.org/event/11/contributions/185/
LOCATION:Room 4
URL:https://conferences.enbis.org/event/11/contributions/185/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Autocorrelated processes in metrology with examples from ISO and J
CGM documents
DTSTART;VALUE=DATE-TIME:20210913T141500Z
DTEND;VALUE=DATE-TIME:20210913T144500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-184@conferences.enbis.org
DESCRIPTION:Speakers: Maurice Cox (NPL)\nIt is common practice in metrolog
y that the standard uncertainty associated with the average of repeated ob
servations is taken as the sample standard deviation of the observations d
ivided by the square root of the sample size. This uncertainty is an estim
ator of the standard deviation of the sample mean when the observations ha
ve the same mean and variance and are uncorrelated.\n\nIt often happens th
at the observations are correlated\, especially when data is acquired at h
igh-frequency sampling rates. In such a process\, there are dependencies a
mong the observations\, especially between closely neighbouring observatio
ns. For instance\, in continuous production such as in the chemical indust
ry\, many process data on quality characteristics are self-correlated over
time. In general\, autocorrelation can be caused by the measuring system\
, the dynamics of the process or both. \n\nFor observations made of an aut
ocorrelated process\, the uncertainty associated with the sample mean as a
bove is often invalid\, being inappropriately low. We consider the evalua
tion of the standard uncertainty associated with a sample of observations
from a stationary autocorrelated process. The resulting standard uncertain
ty is consistent with relevant assumptions made about the data generation
process.\n\nThe emphasis is on a procedure that is relatively straightforw
ard to apply in an industrial context.\n\nExamples from a recent guide of
the Joint Committee for Guides in Metrology and a developing standard from
the International Organization for Standardization are used to illustrate
the points made.\n\nhttps://conferences.enbis.org/event/11/contributions/
184/
LOCATION:Room 4
URL:https://conferences.enbis.org/event/11/contributions/184/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Two questions of "class": Kind of quantity and Classification
DTSTART;VALUE=DATE-TIME:20210913T134500Z
DTEND;VALUE=DATE-TIME:20210913T141500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-143@conferences.enbis.org
DESCRIPTION:Speakers: Leslie Pendrill (RI.SE Metrology)\nThe need to handl
e ordinal and nominal data is currently being addressed in work going o
n amongst ontology organisations and standards bodies dealing with conce
pt systems\, in response to big data and machine reading in application
s such as medicine. At the same time\, some prominent statist
icians have been reticent about accepting someone else telling them what s
cales they should use when analysing data. This presentation reviews how t
wo key concepts - Kind of quantity and Classification - can be defined and
form the basis for comparability\, additivity\, dimensionality\, etc and
are essential to include in any concept system for Quantity. Examples incl
ude on-going research on neurodenegeration as studied in the European EMPI
R project NeuroMET2.\n\nhttps://conferences.enbis.org/event/11/contributio
ns/143/
LOCATION:Room 4
URL:https://conferences.enbis.org/event/11/contributions/143/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Priors Comparison in Bayesian mediation framework with binary outc
ome
DTSTART;VALUE=DATE-TIME:20210913T141500Z
DTEND;VALUE=DATE-TIME:20210913T144500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-217@conferences.enbis.org
DESCRIPTION:Speakers: Jean-Michel GALHARRET ()\nIn human sciences\, mediat
ion refers to a causal phenomenon in which the effect of an exposure varia
ble $X$ on an outcome $Y$ can be decomposed into a direct effect and an i
ndirect effect via a third variable $M$ (called the mediator variable).\n
In mediation models\, the natural direct effects and the natural indirect
effects are among the parameters of interest. For this model\, we constru
ct different classes of prior distributions depending on the available i
nformation. We extend the $G$-priors from regression to the mediation mo
del. We also adapt an informative transfer learning model to include his
torical in
formation in the prior distribution. This model will be relevant for insta
nce in longitudinal studies with only two or three measurement times. \nOn
e of the usual issues in mediation analysis is to test the existence of th
e direct and the indirect effect. Given the estimation of the posterior di
stribution of the parameters\, we construct critical regions for a freque
ntist testing procedure. Using simulations\, we compare this procedure wi
th the tests usually used in mediation analysis. Finally\, we apply our a
pproach to real data from a longitudinal study on the well-being of child
ren in school.\n\nhttps://conferences.enbis.org/event/11/contributions/217/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/217/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Entropy-based Discovery of Summary Causal Graphs in Time Series
DTSTART;VALUE=DATE-TIME:20210913T144500Z
DTEND;VALUE=DATE-TIME:20210913T151500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-214@conferences.enbis.org
DESCRIPTION:Speakers: Karim ASSAAD ()\nWe address in this study the proble
m of learning a summary causal graph between time series. To do so\, we fi
rst propose a new temporal mutual information measure defined on a window-
based representation of time series that can detect the independence and t
he conditional independence between two time series. We then show how this
measure can be used to derive orientation rules under the assumption that
a cause cannot precede its effect. We finally combine these two ingredien
ts in a PC-like algorithm to construct the summary causal graph. This algo
rithm is evaluated on several synthetic and real datasets that show both i
ts efficacy and efficiency.\n\nhttps://conferences.enbis.org/event/11/cont
ributions/214/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/214/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Causal Rules Extraction in Time Series Data
DTSTART;VALUE=DATE-TIME:20210913T134500Z
DTEND;VALUE=DATE-TIME:20210913T141500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-196@conferences.enbis.org
DESCRIPTION:Speakers: Amin Dhaou ()\nThe number of complex infrastructur
es in an industrial setting is growing\, and these infrastructures are n
ot immune to unexplained recurring events such as breakdowns or failure
s that can have an economic and environmental impact. To understand thes
e phenomena\, sensors have been placed on the different infrastructure
s to track\, monitor\, and control the dynamics of the systems. The caus
al study of these data allows predictive and prescriptive maintenance t
o be carried out. It helps to understand the appearance of a problem an
d find counterfactual outcomes to better operate and defuse the event.\n
In this paper\, we introduce a novel approach combining the case-crossov
er design\, which is used to investigate acute triggers of diseases in e
pidemiology\, and the Apriori algorithm\, a data mining technique for fi
nding relevant rules in a dataset.\nThe resulting time series causal alg
orithm extracts interesting rules in our application case\, a non-linea
r time series dataset.\nIn addition\, a predictive rule-based algorith
m demonstrates the potential of the proposed method.\n\nhttps://confere
nces.enbis.org/event/11/contributions
/196/
LOCATION:Room 3
URL:https://conferences.enbis.org/event/11/contributions/196/
END:VEVENT
BEGIN:VEVENT
SUMMARY:In-Profile Monitoring for Multivariate Process Data in Advanced Ma
nufacturing
DTSTART;VALUE=DATE-TIME:20210913T144500Z
DTEND;VALUE=DATE-TIME:20210913T151500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-173@conferences.enbis.org
DESCRIPTION:Speakers: Chen Zhang ()\nNowadays\, advanced sensing technology
enables real-time data collection of key variables during manufacturing\,
which are referred to as multi-channel profiles. These data facilitate in-
process monitoring and anomaly detection\, which have been extensively stu
died in the past few years. However\, all current studies treat each profi
le as a whole\, such as a high-dimensional vector or a function\, and cons
truct monitoring schemes accordingly. This leads to two limitations. First
\, a long detection delay exists\, especially if the anomaly occurs in early
sensing points of the profile. Second\, analyzing a profile as a whole re
quires that profiles of different samples should be synchronized with the
same length\, yet they usually have certain variability due to inherent fl
uctuations. To address this problem\, this paper is the first to monitor m
ulti-channel profiles on the fly. It can not only detect anomalies without
the whole profile\, but also handle the non-synchronization effect of dif
ferent samples. In particular\, our work is built upon the state space mod
el (SSM) framework. To better describe the between-state and between-profi
le correlations\, we further develop the regularized SSM (RSSM). The regul
arizations are imposed as prior information\, and maximum a posteriori (MAP
) inference in the Bayesian framework is adopted for parameter learning. B
uilt upon RSSM\, a monitoring statistic based on one-step-ahead forecastin
g error is constructed for in-profile monitoring. The effectiveness and ap
plicability of the proposed monitoring scheme are demonstrated in numeri
cal studies and two real case studies.\n\nhttps://conferences.enbis
.org/event/11/contributions/173/
LOCATION:Room 2
URL:https://conferences.enbis.org/event/11/contributions/173/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sparse abnormality detection based on variable selection for spati
ally correlated multivariate process
DTSTART;VALUE=DATE-TIME:20210913T141500Z
DTEND;VALUE=DATE-TIME:20210913T144500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-172@conferences.enbis.org
DESCRIPTION:Speakers: Shuai Zhang (Henan University of Engineering)\nMonit
oring the manufacturing process becomes a challenging task with a huge num
ber of variables in traditional multivariate statistical process control (
MSPC) methods. However\, the rich information is often loaded with some ra
re suspicious variables\, which should be screened out and monitored. Even
though some control charts based on variable selection algorithms were pr
oven effective for dealing with such issues\, charting algorithms for the
sparse mean shift with some spatially correlated features are scarce. This
article proposes an advanced MSPC chart based on a fused penalty-based vari
able selection algorithm. First\, a fused penalised likelihood is develope
d for selecting the suspicious variables. Then\, a charting statistic is e
mployed to detect potential shifts among the variables monitored. Simulati
on experiments demonstrate that the proposed scheme can detect abnormal o
bservations efficiently and provide root causes reasonably. It is shown t
hat the fused penalty can capture the spatial information and improve th
e robustness of a variable selection algorithm for spatially correlated p
rocesses.\n\nhttps://conferences.enbis.org/event/11/contributions/172/
LOCATION:Room 2
URL:https://conferences.enbis.org/event/11/contributions/172/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Evaluating and Monitoring the Quality of Online Products and Servi
ces via User-Generated Reviews
DTSTART;VALUE=DATE-TIME:20210913T134500Z
DTEND;VALUE=DATE-TIME:20210913T141500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-171@conferences.enbis.org
DESCRIPTION:Speakers: Qiao Liang ()\nUser-generated content including both
review texts and user ratings provides important information regarding th
e customer-perceived quality of online products and services. The quality
improvement of online products as well as services will benefit from a gen
eral framework for analyzing and monitoring this user-generated content. T
his study proposes a modeling and monitoring method for online user-genera
ted content. A unified generative model is constructed to combine words an
d ratings in customer reviews based on their latent sentiment and topic as
signments\, and a two-chart scheme is proposed for detecting shifts of cus
tomer responses in dimensions of sentiments and topics\, respectively. The
proposed method shows superior performance in shift detection\, especiall
y for the sentiment shifts in customer responses\, based on the results of
simulation and a case study.\n\nhttps://conferences.enbis.org/event/11/co
ntributions/171/
LOCATION:Room 2
URL:https://conferences.enbis.org/event/11/contributions/171/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sparse and smooth cluster analysis of functional data
DTSTART;VALUE=DATE-TIME:20210913T141500Z
DTEND;VALUE=DATE-TIME:20210913T144500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-189@conferences.enbis.org
DESCRIPTION:Speakers: Fabio Centofanti (University of Naples)\nThe sparse
and smooth clustering (SaS-Funclust) method proposed in [1] is presented.
The aim is to cluster functional data while jointly detecting the most
informative portion(s) of the functional data domain. The SaS-Funclust
method relies on a general functional Gaussian mixture model whose
parameters are estimated by maximizing a log-likelihood function
penalized by a functional adaptive pairwise penalty and a roughness
penalty. The functional adaptive penalty is introduced to automatically
identify the informative portion of the domain by shrinking the means of
separated clusters towards common values. At the same time\, the
roughness penalty imposes smoothness on the estimated cluster means. The
proposed method is shown to effectively enhance solution interpretability
while maintaining flexibility in terms of clustering performance. The
method is implemented in the R package *sasfunclust*\, available on CRAN
[2].\n\n[1] Centofanti\, F.\, Lepore\, A.\, and Palumbo\, B. (2021).
Sparse and Smooth Functional Data Clustering. Preprint
arXiv:2103.15224\n[2] Centofanti\, F.\, Lepore\, A.\, and Palumbo\, B.
(2021). sasfunclust: Sparse and Smooth Functional Clustering. R package
version 1.0.0.
[https://CRAN.R-project.org/package=sasfunclust]\n\nhttps://conferences.enbis.org/event/11/contributions/189/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/189/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Heteroscedastic Gaussian Process regression for assessing interpol
ation uncertainty of essential climate variables
DTSTART;VALUE=DATE-TIME:20210913T144500Z
DTEND;VALUE=DATE-TIME:20210913T151500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-202@conferences.enbis.org
DESCRIPTION:Speakers: Pietro Colombo ()\nRecent advances [2\, 3] in
interpolation uncertainty estimation for vertical profiles of essential
climate variables (ECVs) have shown Gaussian process regression to be a
valid interpolator. Standard Gaussian process regression assumes the
variance to be constant along the atmospheric profile\; this behaviour is
known as homoscedasticity of the residuals.\nHowever\, climate variables
often present heteroscedastic residuals. Implementing a Gaussian process
regression that accounts for this aspect is a plausible way to improve
the interpolation uncertainty estimation. In [2]\, the authors showed
that Gaussian process regression provides an effective interpolator for
relative humidity measurements\, especially when the variability of the
underlying natural process is high.\nIn this talk\, we consider Gaussian
process methods that allow for heteroscedasticity\, e.g. [1]\, and hence
handle situations with input-dependent variance. In this way\, we provide
a more precise estimate of the interpolation
uncertainty.\n\nReferences\n\n[1] Wang\, C. (2014). Gaussian Process
Regression with Heteroscedastic Residuals and Fast MCMC Methods. PhD
thesis\, Graduate Department of Statistics\, University of Toronto.\n[2]
Colombo\, P.\, and Fassò\, A. (2021). Joint Virtual Workshop of ENBIS
and MATHMET\, Mathematical and Statistical Methods for Metrology\, MSMM
2021.\n[3] Fassò\, A.\, Michael\, S.\, and von Rohden\, C. (2020).
Interpolation uncertainty of atmospheric temperature profiles.
Atmospheric Measurement Techniques\, 13(12):
6445-6458.\n\nhttps://conferences.enbis.org/event/11/contributions/202/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/202/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Randomizing versus not randomizing split-plot experiments
DTSTART;VALUE=DATE-TIME:20210913T134500Z
DTEND;VALUE=DATE-TIME:20210913T141500Z
DTSTAMP;VALUE=DATE-TIME:20230924T162908Z
UID:indico-contribution-11-167@conferences.enbis.org
DESCRIPTION:Speakers: Rossella Berni ()\nRandomization is a fundamental
principle underlying the statistical planning of experiments. In this
talk\, we illustrate the impact when the experimenter either cannot or
chooses not to randomize the application of the experimental factors to
their appropriate experimental units in split-plot experiments (Berni et
al.\, 2020). The specific context is an experiment to improve the
production process of an ultrasound transducer for medical imaging. Due
to constraints imposed by company requirements\, some of the design
factors could not be randomized. Through a simulation study based on the
transducer experiment\, we illustrate visually the impact of a linear
trend over time for both the randomized and nonrandomized situations\, at
the whole-plot and sub-plot levels. We assess the effect of randomizing
versus not randomizing by considering the estimated model coefficients
and the whole-plot and sub-plot residuals. We also illustrate how to
detect and estimate the linear trend when the design is properly
randomized\, and analyze the impact of different slopes for the trend. We
show that the nonrandomized design cannot detect the presence of the
linear trend through residual plots\, because the impact of the trend is
to bias the estimated coefficients. The simulation study provides an
excellent way to explain to engineers and practitioners the fundamental
role of randomization in the design and analysis of
experiments.\nREFERENCES:\nBerni\, R.\, Bertocci\, F.\, Nikiforova\, N.
D.\, and Vining\, G. G. (2020). A tutorial on randomizing versus not
randomizing split-plot experiments. Quality Engineering\, 32:1\, 25-45.
DOI:
10.1080/08982112.2019.1617422.\n\nhttps://conferences.enbis.org/event/11/contributions/167/
LOCATION:Room 1
URL:https://conferences.enbis.org/event/11/contributions/167/
END:VEVENT
END:VCALENDAR