Description
Purpose: Industrial applications increasingly rely on complex predictive models for process optimization and quality improvement. However, the relationship between statistical model performance and actual operational benefit remains insufficiently characterized. This research investigates when model complexity provides genuine business value and when it amounts to statistical over-engineering.
Methods: We systematically compare machine learning and deep learning models against simpler statistical approaches across varying levels of process noise and measurement uncertainty. The analysis integrates predictive modeling with simulation-based optimization to evaluate both technical accuracy and operational impact in manufacturing and service environments.
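A minimal sketch of this kind of comparison, assuming a scikit-learn-style workflow; the response surface, the noise model, and the choice of LinearRegression versus GradientBoostingRegressor are illustrative placeholders, not the study's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def simulate_process(n, noise_sd):
    """Synthetic process data: a mildly nonlinear response plus Gaussian noise."""
    X = rng.uniform(0, 1, size=(n, 3))
    y_true = 2.0 * X[:, 0] + np.sin(4 * X[:, 1]) + 0.5 * X[:, 2] ** 2
    return X, y_true + rng.normal(0, noise_sd, size=n)

def rmse_gap(noise_sd, n=2000):
    """Held-out RMSE of a simple vs. a complex model at one noise level."""
    X, y = simulate_process(n, noise_sd)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    simple = LinearRegression().fit(X_tr, y_tr)
    complex_ = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
    rmse = lambda m: mean_squared_error(y_te, m.predict(X_te)) ** 0.5
    return rmse(simple), rmse(complex_)
```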
Key Findings: Complex models demonstrate superior prediction accuracy under controlled conditions but show diminishing operational advantages as system variability increases. Across multiple case studies, the study quantifies critical noise thresholds beyond which statistical sophistication becomes operationally irrelevant, challenging conventional, purely statistical model-selection criteria.
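One way such a threshold could be located, continuing the sketch above (it reuses rmse_gap() and numpy from that block); the 5% equivalence margin is an assumed operational tolerance, not a figure from the study:

```python
# Sweep noise levels and flag the first one at which the complex
# model's accuracy edge falls below the assumed 5% equivalence margin.
for noise_sd in np.linspace(0.05, 2.0, 20):
    r_simple, r_complex = rmse_gap(noise_sd)
    if (r_simple - r_complex) / r_simple < 0.05:
        print(f"approximate threshold: noise_sd = {noise_sd:.2f}")
        break
```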
Innovation: This research uniquely bridges predictive analytics and industrial statistics by measuring the gap between model evaluation metrics and business outcomes across different uncertainty levels. We provide empirical evidence for when simple statistical models deliver operational performance equivalent to that of complex alternatives.
Practical Impact: The findings offer practitioners a statistically grounded framework for model selection based on process characteristics and operational context, preventing over-investment in unnecessarily complex analytical solutions while maintaining performance standards.
Classification: Both methodology and application
Keywords: Operational performance, Predictive models, System complexity