15–16 May 2024
Dortmund
Europe/Berlin timezone

A Gradient-Enhanced Neural Network as a Surrogate for Elastoplastic Finite Element Analysis

15 May 2024, 14:00
20m
Dortmund

Emil-Figge-Straße 42, 44227 Dortmund
Spring Meeting Contributed session

Speaker

Ali Osman Mert Kilicsoy (TU Dortmund, CRE)

Description

When monitoring complex manufacturing processes, methods such as optimization of the observed systems or quantification of their uncertainty are applied to support and improve the processes. These methods require repeated evaluations of the associated system responses. Complex numerical models such as finite element models are capable of this, but their solution becomes computationally expensive as model complexity increases. In certain cases, artificial neural networks are suitable as surrogate models: they are cheaper to evaluate while maintaining acceptable accuracy.

In supervised learning, a neural network is trained on data, with a data loss measuring the difference between the available data and the computed model response. For such a surrogate model, the training data are generated by a computationally expensive but accurate numerical model evaluated at given inputs. Often, these numerical models can provide further data, such as sensitivities, through computationally cheap adjoint methods. Including these sensitivities with respect to the inputs improves training convergence. This Sobolev training augments the data loss of the neural network to account for the sensitivities. Rather than adding outputs to the network, the sensitivities are computed as derivatives of the model output through its layers. Expanding the data loss in this way raises the question of how to weight each individual response and sensitivity loss appropriately.

In this work, the goal is to define a second, parallel optimization process during training that determines the optimal weighting of all individual losses for the best convergence performance. A finite element model that evaluates various output variables for a given mechanical system with linear or nonlinear behavior is prepared to generate a small dataset. A neural network is then Sobolev-trained on this dataset, while a parallel optimization process adjusts the weighting of each individual loss. We explore this by applying a set of residual weights to the Sobolev loss function and optimizing a predefined target function, defined in terms of the loss, with respect to the residual weights. The results demonstrate that certain residual weight optimization methods improve convergence performance: they not only reduce the total range of accuracy among trained models, but also shift that range toward better accuracy.
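
As a rough illustration of the mechanics described above, the following is a minimal PyTorch sketch of Sobolev training with residual weights optimized in parallel. The abstract does not specify the network, the loss, or the predefined target function for the weights, so everything here is an assumption: SurrogateNet and sobolev_loss are hypothetical names, and the uncertainty-style weighting (exponential weights with additive log-variance terms) is one possible stand-in for the target function, not necessarily the method of the talk.

```python
# Minimal sketch of Sobolev training with residual weights optimized in
# parallel. Assumes PyTorch; the architecture, dataset, and weighting scheme
# are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SurrogateNet(nn.Module):
    """Small fully connected surrogate mapping FE inputs to a scalar response."""
    def __init__(self, n_in: int, width: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_in, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

def sobolev_loss(model, x, y_data, dy_data, log_var):
    """Weighted sum of a response residual and a sensitivity residual.

    dy_data holds the adjoint sensitivities dy/dx supplied by the FE model.
    The weights w_i = exp(-log_var_i) are learnable; the additive log_var
    terms keep them from collapsing to zero (uncertainty-style weighting,
    an assumed choice of target function).
    """
    x = x.detach().requires_grad_(True)
    y_pred = model(x)
    # Sensitivities of the surrogate, computed by differentiating the model
    # output through its layers rather than by adding extra outputs.
    dy_pred = torch.autograd.grad(y_pred.sum(), x, create_graph=True)[0]
    loss_y = F.mse_loss(y_pred, y_data)
    loss_dy = F.mse_loss(dy_pred, dy_data)
    return (torch.exp(-log_var[0]) * loss_y + log_var[0]
            + torch.exp(-log_var[1]) * loss_dy + log_var[1])

# Stand-in data; in the talk's setting these would come from an elastoplastic
# FE model and its adjoint sensitivities.
x = torch.rand(128, 3)
y_data = torch.rand(128, 1)
dy_data = torch.rand(128, 3)

# Two optimizers running side by side: one over the network parameters, one
# over the residual weights, mirroring the second, parallel optimization
# process described in the abstract.
model = SurrogateNet(n_in=3)
log_var = torch.zeros(2, requires_grad=True)
opt_net = torch.optim.Adam(model.parameters(), lr=1e-3)
opt_w = torch.optim.Adam([log_var], lr=1e-2)

for epoch in range(2000):
    opt_net.zero_grad()
    opt_w.zero_grad()
    loss = sobolev_loss(model, x, y_data, dy_data, log_var)
    loss.backward()
    opt_net.step()
    opt_w.step()
```

In this parametrization, minimizing over log_var balances the two residuals automatically; any other target function for the residual weights could be substituted at the same point in the loop.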

Type of presentation: Contributed Talk
