
An EIM-based compression-extrapolation tool for efficient treatment of homogenized cross-section data

Bibliographic Details
Published in: Annals of Nuclear Energy, 2023-06, Vol. 185, Article 109705
Main Authors: Truffinet, Olivier; Ammar, Karim; Gérard Castaing, Nicolas; Argaud, Jean-Philippe; Bouriquet, Bertrand
Format: Article
Language: English
Description
Summary: Nuclear reactor simulators implementing the widespread two-step deterministic calculation scheme tend to produce a large volume of intermediate data at the interface of their two subcodes (up to dozens or even hundreds of gigabytes), which can be so cumbersome that it hinders the overall performance of the code. The vast majority of this data consists of “few-group homogenized cross-sections”, nuclear quantities stored in the form of tabulated multivariate functions which can be precomputed to a large extent. It was noticed in Tomatis (2021) that few-group homogenized cross-sections are highly redundant, that is, they exhibit strong correlations, which paves the way for the use of compression techniques. We pursue this line of work here by introducing a new coupled compression/surrogate-modeling tool based on the Empirical Interpolation Method (EIM), an algorithm originally developed in the framework of partial differential equations (Barrault et al., 2004). This EIM compression method is based on the infinity norm ∥⋅∥∞ and proceeds in a greedy manner, iteratively trying to approximate the data and incorporating the chunks of information that cause the largest error. In the process, it generates a vector basis and a set of interpolation points, which together provide an elementary surrogate model able to approximate future data from little information. The algorithm is also well suited to parallelization and out-of-core computation (processing of data too large for the computer's RAM), and it is easy to understand and implement. This method enables us both to compress cross-sections efficiently and to spare a large fraction of the required lattice calculations. We investigate its performance on large, realistic nuclear data replicating the well-known VERA benchmark (Godfrey, 2014) (20 energy groups, pin-by-pin homogenization, 10 particularized isotopes). Compression loss, memory savings and speed are analyzed both from a data-centric point of view, with a view to applications in neutronics, and by comparison with an existing and widely used method, stochastic truncated SVD, to assess mathematical efficiency. We discuss the usage of our surrogate model and its sensitivity to the choice of the training set. The method is shown to be competitive in terms of accuracy and speed, to provide substantial memory savings, and to spare a large amount of physics-code computation; all of this could facilitate the adoption of fine-grained modeling schemes (pin-by-pin an…
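
The greedy loop described in the summary (interpolate each snapshot on the current basis, find the worst-served snapshot and the entry where its residual peaks in the infinity norm, enrich the basis, repeat) can be sketched in a few lines. The Python/NumPy code below is an illustrative reconstruction of the classical EIM greedy algorithm, not the authors' implementation: the function names eim_compress and eim_reconstruct, the matrix layout (snapshots as columns), and the tol/max_rank parameters are assumptions of this example.

```python
import numpy as np

def eim_compress(X, tol=1e-6, max_rank=None):
    """Greedy EIM compression of a data matrix X whose columns are
    snapshots (e.g. homogenized cross-sections tabulated over state
    parameters).  Returns a basis Q (n x m) and interpolation indices P
    (length m) such that any column u can be rebuilt from its m entries
    u[P] alone:  u ~= Q @ solve(Q[P, :], u[P]).
    Minimal sketch of classical EIM, not the paper's production code."""
    n, s = X.shape
    max_rank = max_rank or min(n, s)
    Q = np.zeros((n, 0))
    P = []

    for m in range(max_rank):
        if m == 0:
            residuals = X.copy()
        else:
            # Interpolate every snapshot on the current (basis, points) pair
            coeffs = np.linalg.solve(Q[P, :], X[P, :])
            residuals = X - Q @ coeffs

        # Snapshot with the largest infinity-norm error ...
        errors = np.max(np.abs(residuals), axis=0)
        j = int(np.argmax(errors))
        if errors[j] < tol:
            break

        # ... and the entry where its residual peaks ("magic point")
        r = residuals[:, j]
        i = int(np.argmax(np.abs(r)))

        # Normalize so the new basis vector equals 1 at its own point
        Q = np.column_stack([Q, r / r[i]])
        P.append(i)

    return Q, np.array(P)

def eim_reconstruct(Q, P, samples):
    """Surrogate evaluation: rebuild full vectors from their values at the
    interpolation points only (samples has shape (len(P), n_new))."""
    return Q @ np.linalg.solve(Q[P, :], samples)
```

In this sketch, only the rows indexed by P of a new cross-section set would need to be supplied (e.g. computed by the lattice code); eim_reconstruct then fills in the remaining entries, which is the surrogate/extrapolation aspect highlighted in the abstract.
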
ISSN: 0306-4549, 1873-2100
DOI: 10.1016/j.anucene.2023.109705