https://doi.org/10.1051/epjn/2021008
Regular Article
Nuclear data assimilation, scientific basis and current status
^{1} Institute for Radioprotection and Nuclear Safety, Fontenay-aux-Roses, France
^{2} Alternative Energies and Atomic Energy Commission, CEA DAM DIF, F-91297 Arpajon, France
^{3} The University of Tennessee, Knoxville, USA
^{*} email: evgeny.ivanov@irsn.fr
Received: 9 November 2020 / Received in final form: 24 March 2021 / Accepted: 9 April 2021
Published online: 6 May 2021
The use of Data Assimilation methodologies, also known as data adjustment, ties together the results of theoretical and experimental studies, improving the accuracy of simulation models and giving confidence to designers and regulatory bodies. From the mathematical point of view, it provides an optimized fit to experimental data, revealing unknown causes from known consequences, which is crucial for data calibration and validation. Data assimilation adds value to the ND evaluation process: adjusting nuclear data to a particular application, thereby providing a so-called optimized design-oriented library; calibrating nuclear data with integral experiments (IEs), since all theories and differential experiments provide only relative values; and providing an evidence-based background for the validation of nuclear data libraries, substantiating the UQ process. Similarly, it valorizes experimental data and the experiments as such, involving them in the scientific turnover, extracting the essential information inherently contained in legacy and newly set-up experiments, and prioritizing dedicated basic experimental programs. A number of popular algorithms − deterministic ones like the Generalized Linear Least Square methodology, and stochastic ones like Backward and Hierarchic or Total Monte-Carlo, Hierarchic Monte-Carlo, etc. − though different in their particular numerical formalisms, are commonly grounded on the same Bayesian theoretical basis. They have demonstrated sufficient maturity, providing optimized design-oriented data libraries or evidence-based backgrounds for a science-driven validation of general-purpose libraries in a wide range of practical applications.
© E. Ivanov et al., Published by EDP Sciences, 2021
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1 Introduction
The first practical use of Data Assimilation (DA) in nuclear engineering started in the sixties, to take maximum benefit from the − at that time − rare experimental data while developing nuclear reactor design concepts and improving problem-oriented nuclear data libraries [1–5].
From the very beginning, it involved deterministic algorithms, such as the Generalized Linear Least Square methodology (GLLSM), associated at that time with Ordinary and Generalized Perturbation Theory (OPT and GPT) [1,2,6,7]. The further trend was a progressive growth of DA fidelity, supported by increasing computational capacities and by advances in mathematical and computational physics [8–17].
Nowadays, available and affordable high-performance computational facilities and high-fidelity or even precise code systems allow stepping toward fully stochastic, non-intrusive^{1} adjustment [12–16].
Despite the continuous evolution of DA algorithms, the methodology always remains a kind of Bayesian-based technique [1,5,7,8].
It should be noted that DA is always present in the Nuclear Data (ND) evaluation process, because all data libraries, without any exemption, have to be somehow calibrated^{2} against objective, experimentally measured invariants. DA helps, in this case, if such invariants were not measured directly but inferred from the measurements [2,13].
In any application, the mathematical models and data libraries, to become suitable for adjustment, should be somehow parametrized, using either Reduced Order Models (ROM) or variables inherent to nuclear reaction simulations [1,13,15,16].
Summarizing the statements above and the bibliography analysis, it is easy to see that DA always comprises the following ingredients: (1) objective observations obtained by computing a representative suite of integral experiment data, i.e. calculation-to-experiment ratios for given experiment-based benchmarks; (2) libraries of prior experimental and nuclear data uncertainties, needed as the first guess for the Data Assimilation process; and (3) a relevant Bayesian inference framework that includes, among others, dedicated statistical solvers and parametrized best-estimate simulations.
Of course, the DA algorithms in different fields of application also have different levels of maturity, which might be characterized by considering the major drawbacks of, and lessons learned in, DA practical implementations [1,13,18].
Below we discuss some examples of good practice and tendencies in DA deployment, to characterize to a certain extent the technological readiness − maturity − of DA methodologies.
2 Methodological background
Like any Kalman filtering, DA starts from the premise that, given some disagreement between calculated and experimental values, one adjusts parametrized data, performing the best fit of error-weighted expected and observed parameters [1,5,11,17]. It links together such probabilistic categories as conditional probabilities or probability densities p(⋅), the set of measurable parameters y, the parameters x inherent to modeling, and the prior information U from which the prior knowledge on x is assumed:

p(x|y, U) = p(y|x, U) ⋅ p(x|U) / p(y|U), (1)
where the denominator is just a normalization constant [1,17,18]. Then, assuming high-entropy (Gaussian) distributions, one builds up the following solving rule to estimate an improved distribution from the prior one, based on the principle of maximum likelihood [1,8,11,18]:

p̃(x|y, U) ∝ p(y|x, U) ⋅ p(x|U), (2)

the improved estimate of x being taken at the maximum of this posterior density.
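As a minimal illustration of this Bayesian update rule, the following sketch combines a Gaussian prior with a single Gaussian measurement, the one-parameter maximum-entropy case; all numbers are illustrative, not taken from the paper:

```python
# One-dimensional sketch of the Bayesian update of eq. (2), assuming Gaussian
# (maximum-entropy) prior and likelihood; the numbers are purely illustrative.
def gaussian_update(x_prior, var_prior, y_meas, var_meas):
    """MAP estimate minimizing (x - x_prior)^2/var_prior + (y_meas - x)^2/var_meas."""
    gain = var_prior / (var_prior + var_meas)        # Kalman-type weight
    x_post = x_prior + gain * (y_meas - x_prior)     # corrected central value
    var_post = var_prior * var_meas / (var_prior + var_meas)  # reduced variance
    return x_post, var_post

x_post, var_post = gaussian_update(1.00, 0.04, 1.10, 0.01)
# the posterior lies between prior and measurement, closer to the more precise one,
# and its variance is smaller than either input variance
```

The posterior variance is always smaller than both the prior and the measurement variance, which is the one-dimensional analogue of the contraction of the covariance matrix in equation (7) below.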
According to such probabilistic definitions, we consider the results of measurements and calculations as high-entropy distributions represented by relevant^{3} Probability Density Functions (PDF) [12,14]. The coincidence of two values can then be graded in terms of their overlapping area (see Fig. 1).
It is easy to see that DA amounts to the conversion of prior errors and uncertainties into correction factors and quantified residual uncertainties, following the maximum likelihood principle [1,2,8,21,22].
This is why the kind of uncertainty we are dealing with − whether a simple error, an epistemic or an ontological one [23,24] − dictates which technique should be applied to the adjustment, validation or other ill-posed inverse problem.
The first type of uncertainty — simple error — appears when prior knowledge is inconsistent or even wrong. At this level, DA clarifies given nuclear reaction models, adjusting their parameters by the best fit to preselected representative sets of IEs [1,7,8,23].
It requires a robust theoretical model of nuclear reactions^{4} and fully representative IEs that − whether a few or a large set − should be selected in such a way as to discriminate (in a statistical sense) all contributors except the one of interest.
Such a demand can be met, inter alia, by the design of specific experiments, for example replacement or oscillation experiments like the ones performed at numerous facilities [25], including the MINERVE facility [26], IPEN/MB-01 [27], and so on.
Talking about robust theory, one could recall, for example, such well-known tools as the R-matrix fitting codes SAMMY [9] and REFIT [10], or CONRAD [11], the last of which can uniformly treat the resolved resonance range (R-matrix approximations: Reich-Moore and Multi-Level Breit-Wigner), the unresolved resonance range (average R-matrix and Hauser-Feshbach theory) and the fast energy region, as required by a modern evaluation process.
The second — epistemic — uncertainties appear due to the imprecise interpolations and extrapolations inherent^{5} to the modeling. There, the role of DA is not to adjust but to provide an evidence-based background^{6} for a science-driven validation.
The last — ontological — uncertainties appear due to a different belief system, so that only a discovery could resolve such a lack of knowledge [19,23]. The role of DA in that case is only to contribute to the planning of further dedicated research.
Fig. 1 Qualitative illustration of ND and IEs PDFs [20] (a) and the degree of coincidence between prior (b) and posterior (c) calculational (light grey) and experimental (light-blue) values [7,19].
2.1 Parametrization strategies
Applying DA, one should take into account that the domain of experiments is discrete and countable, while the domain of simulations is continuous and non-countable. This fundamental mismatch requires the domain of simulation to be somehow parametrized.
The simplest parametrization strategy is the use of ROM to replace the physics behind the phenomena of interest by a set of linear response functions, i.e. the following sensitivity coefficients [1,7,28–34]:

S_{R,θ} = (θ/R) ⋅ (dR/dθ) = S_{R,θ}^{expl} + S_{R,θ}^{impl}, (3)

where S_{R,θ}, R, α and θ are the sensitivity coefficient of a given system's response to a given parameter of the nuclear reaction modeling, the system response, the nuclear cross section and the parameters inherent to nuclear reaction calculations, correspondingly. The two components − explicit and implicit − reflect the direct dependence and the indirect dependence through parameters common to different nuclides and reactions [1,2,7,8].
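The relative sensitivity coefficients of equation (3) are usually obtained by perturbation theory, but for a toy response they can be sketched with a simple central finite difference. The response function below is an assumption for illustration only, not a physics model:

```python
def response(theta):
    # hypothetical integral response R(theta); a real case would come
    # from a transport or nuclear-reaction code, not a closed formula
    return 2.0 * theta ** 0.5 + 1.0

def rel_sensitivity(resp, theta, h=1e-6):
    """Central-difference estimate of S = (theta / R) * dR/dtheta."""
    dR_dtheta = (resp(theta * (1.0 + h)) - resp(theta * (1.0 - h))) / (2.0 * h * theta)
    return theta / resp(theta) * dR_dtheta

S = rel_sensitivity(response, 4.0)
# analytic check: R(4) = 5, dR/dtheta = 0.5, so S = (4/5)*0.5 = 0.4
```

In a real adjustment the direct (explicit) derivative would be complemented by the implicit contribution through self-shielded data, which a brute-force finite difference on the full calculation chain captures automatically.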
The next kind of strategy presumes the following dependence of the nuclear reaction model on a countable set of variables:

Lib_{ADJ} = F({α_k : k ∈ 1 ⋯ K}; {θ_l : l ∈ 1 ⋯ L}), (4)

where Lib_{ADJ}, {α_k : k ∈ 1 ⋯ K} and {θ_l : l ∈ 1 ⋯ L} are the synthesized/adjusted library, the adjusted parameters and the non-adjusted parameters, correspondingly.
Apparently, such a formalism might give a fully consistent evaluation, going deep into the theoretical models and the best-estimate metamodels inherent to nuclear data calculations. Unfortunately, it may require so many experimental cases as to make the adjustment unaffordable. Nevertheless, the strategy has been recognized as sufficient in many practical applications [12,13,15].
Another group represents the true (but unknown) library as a weighted superposition of different profiles, given as follows:

Lib_{syn} = Σ_{n=1}^{N} a_n ⋅ Lib_n, (5)

where Lib_{syn}, Lib_n and a_n are the desired synthetic library, the nth generated nuclear data profile (ACE files, typically) and the weight factors to be matched, correspondingly, with n ∈ 1 ⋯ N.
For example, Hierarchic Monte-Carlo [14] operates with this strategy, generating the profiles Lib_n in an iterative randomized process fitted to a given set of IEs data.
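A minimal sketch of the weighted-superposition strategy of equation (5): the weights a_n are fitted by least squares so that the blended library reproduces a set of benchmark observations. The response matrix C and the "true" weights below are synthetic assumptions, not data from any library:

```python
import numpy as np

rng = np.random.default_rng(0)
# C[n, m]: response of benchmark m computed with the n-th pre-generated
# profile Lib_n (made-up numbers standing in for transport calculations)
C = rng.normal(1.0, 0.02, size=(5, 12))          # 5 profiles, 12 benchmarks
true_w = np.array([0.5, 0.2, 0.1, 0.1, 0.1])
y = C.T @ true_w                                 # noise-free synthetic "measurements"

# least-squares fit of the weight factors a_n in Lib_syn = sum_n a_n * Lib_n
a, *_ = np.linalg.lstsq(C.T, y, rcond=None)
```

With noise-free data the fit recovers the generating weights exactly; a realistic application would weight the residuals by the experimental covariance matrix and possibly constrain the a_n to be a proper probability distribution, as in the hierarchic Monte-Carlo iteration.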
Based on the reasoning above and the bibliography overview, we can conventionally classify the variety of DA algorithms as shown in Figure 2.
Of course, DA always suffers from the dimensionality of the physical model. Indeed, if we use discretized modeling in the field of particle transport and reactor physics, we are dealing with at minimum N_σ parameters, determined as N_σ = N_IZ ⋅ N_REA ⋅ N_EG, where N_IZ, N_REA and N_EG are the number of nuclides, the number of reaction channels and the total number of energy-angular intervals. If the Reich-Moore approximation is used, the dimension is further extended by the numbers of particle and γ-ray channels for each resonance region.
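As a back-of-the-envelope worked example of this dimensionality (the counts below are illustrative assumptions, not values from the paper):

```python
# N_sigma = N_IZ * N_REA * N_EG for a discretized transport model
n_iz = 400    # number of nuclides (assumed)
n_rea = 10    # reaction channels per nuclide (assumed)
n_eg = 300    # energy-angular intervals (assumed)

n_sigma = n_iz * n_rea * n_eg
# already ~1.2 million adjustable parameters, before any Reich-Moore
# resonance-channel extension is taken into account
```

Even a diagonal prior covariance at this size is large, and a full CND scales quadratically, which is the practical motivation for the ROM-based strategies above.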
Since DA algorithms, though different in their details, are all based on the same theoretical basis, we can illustrate the major DA ideas via the master equations used in a deterministic methodology, such as the following one:

Δx = M_x ⋅ S^T ⋅ [S ⋅ M_x ⋅ S^T + M_E + M_C]^{−1} ⋅ d, (6)

where Δx, d, M_x, S, M_E and M_C are the vector of correction factors, the vector of relative discrepancies, the prior covariance matrix of nuclear data (CND), the calculated matrix of sensitivity coefficients for the IEs, and the experimental and calculational covariance matrices, respectively [1,2,10,19].
It also gives a quantified posterior covariance matrix (M′_x) as follows:

M′_x = M_x − M_x ⋅ S^T ⋅ [S ⋅ M_x ⋅ S^T + M_E + M_C]^{−1} ⋅ S ⋅ M_x, (7)

where all notations are as given above [1,2,10,19].
One can see that the posterior covariance matrix (M′_x) and, therefore, the posterior uncertainties for a Quantity of Interest (QoI) do not depend on the calculation-to-measurement discrepancies, while the correction factors do; and only the sensitivity matrix (S) represents the physics behind the IEs and the applications.
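A minimal numpy sketch of the GLLS master equations (6) and (7), using the notation of the text; the numerical values in the usage example are purely illustrative:

```python
import numpy as np

def glls_update(d, S, M, ME, MC):
    """GLLS update: d = relative discrepancies, S = sensitivity matrix,
    M = prior CND, ME/MC = experimental and calculational covariances."""
    G = S @ M @ S.T + ME + MC                         # covariance of the discrepancies
    dx = M @ S.T @ np.linalg.solve(G, d)              # correction factors, eq. (6)
    M_post = M - M @ S.T @ np.linalg.solve(G, S @ M)  # posterior CND, eq. (7)
    return dx, M_post

# one parameter, one experiment: prior var 4%, experimental var 1%, C/E gap 10%
dx, M_post = glls_update(np.array([0.10]), np.array([[1.0]]),
                         np.array([[0.04]]), np.array([[0.01]]), np.array([[0.0]]))
```

The single-parameter case reproduces the scalar Bayesian update: the correction is 80% of the discrepancy (since the prior variance dominates the total), and the posterior variance shrinks from 0.04 to 0.008 regardless of the value of the discrepancy, illustrating the remark above.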
Nowadays available continuous-energy, arbitrary-geometry Monte-Carlo sensitivity analysis allows performing fine-resolution adjustment, as demonstrated in such tools as SAMINT (nuclear data adjustment with SAMMY based on Integral Experiments) [9,16], which complements the Bayesian fit performed with the SAMMY tool (multilevel R-matrix fits to neutron and charged-particle cross-section data using Bayes' equations).
Fig. 2 Parametrization strategy and algorithmic options to be implemented in DA [1–3,5–7,9–12,14,16,18]. 
2.2 Integral experiments data and an evidencebased background
As said, the adjustment critically depends on the quality of the IEs data, including the consistency of their uncertainties and covariances (see the experimental covariance term in eqs. (6) and (7)).
These uncertainties and experimental covariance matrices result from the physics-based evaluation of the measurements as such and of the experimental conditions, in a manner similar to what has been implemented in the International Criticality Safety Benchmark Experiments Project (ICSBEP) and the International Reactor Physics Experiments (IRPhE) Project, for example [7,25].
Historically, IEs were considered mainly as mock-ups allowing one to study the major characteristics of nuclear systems − optimizing and examining reactor control systems, radiation shielding and others − using zero- or low-power facilities to minimize the risks associated with nuclear safety and radiation protection. Nowadays, due to the progress in numerical simulations and the increased requirements on modeling accuracy, such a vision, except for very rare cases, seems obsolete.
Of course, experiments can be of different kinds, including criticality and reactivity studies, reaction rate measurements, depletion analysis and so on. The only requirement is that the experimental data be stringently evaluated. Unfortunately, we have to note that experimental covariances are scarcely available even in the popular handbooks.
2.3 Information content of the posterior bias and uncertainties
The next essential ingredient of DA − prior uncertainties − is crucial for any Bayesian inference technique [1,19,35–37].
Historically, nuclear data uncertainties have been available in several groupwise matrix formats associated with the most popular libraries such as JEFF, JENDL, ENDF, TENDL, SCALE, etc. In this context we could mention such covariance data libraries as BOLNA, created in a collaboration among BNL, ORNL, LANL, NRG and ANL; COMMARA-2.0, derived for ENDF-based nuclear data in one of the OECD/NEA projects; and COMAC-V1 and others attached to the JENDL, TENDL and SCALE projects [35,38].
In the past, the covariance matrices were based largely on expert judgement. Today, however, extensive worldwide efforts have been mounted to determine the scientific basis for establishing relevant CND (Fig. 3).
However, the posterior CND − generated after adjustment − never fully inherits the prior CND. In fact, DA somehow integrates the information brought by the used IEs into the corrected nuclear parameters and their uncertainties [35]. Over the years of DA practical implementation, an intensive discussion arose on how to interpret the appearance in the posterior CND of cross-covariance members that were not present in the prior one [36]. It was found that these cross-covariance members always contain the traces of the IEs data [1,19,36,37], characterizing to a certain extent the efficiency of the adjustment [19].
3 Best practice in data assimilation worldwide
From the very beginning, nuclear technological science has intended all concepts and statements to have a solid basis in reality. In all domains of nuclear engineering, from design to safety regulation, it seems crucial to have access to objective observations, including the operational background and basic and dedicated Integral Experiment (IE) programs [39–44].
However, we can use both legacy and newly established IEs to improve or to validate nuclear data libraries. The only issue is that we have to somehow unfold the IEs data when using them in a nuclear data evaluation process. This is possible if the IEs are numerous, their set is statistically significant, and there is a robust DA approach consistent with the given field of interest [1,2,8,19].
From a more general point of view, one might distinguish the following three major groups of DA practical applications: (1) simple data adjustment contributing to problem-oriented and general-purpose libraries [1,2,4,5,7], (2) science-driven validation of nuclear data libraries and simulations [2,6,19], and (3) knowledge-based prioritization of dedicated basic research programs [19].
Since DA techniques have different backgrounds for different applications, it seems reasonable to characterize them below in terms of their level of maturity.
3.1 1st application: simple data adjustment
As said, the very basic idea of nuclear data adjustment is an optimized fit of the modeling of nuclear reactions or of nuclear systems to well-evaluated, consistent and credible IEs data [1]. Such a simple adjustment requires the use of fully representative [2,39] sets of high-fidelity IEs data [25].
In the early 1970s, one such criterion, the representativity factor (r_{IE,QoI}), was derived as follows:

r_{IE,QoI} = (S_{IE}^T ⋅ M_x ⋅ S_{QoI}) / [(S_{IE}^T ⋅ M_x ⋅ S_{IE}) ⋅ (S_{QoI}^T ⋅ M_x ⋅ S_{QoI})]^{1/2}, (8)
where the experiments are assumed to be independent and all notations are similar to the ones given above [1,39]. In the case of correlated experiments, one should implement some iterative process to quantify a representativity factor for each single experiment [19].
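A minimal sketch of the representativity factor of equation (8), with toy sensitivity vectors and a diagonal prior CND; all values are assumptions for illustration:

```python
import numpy as np

def representativity(s_ie, s_qoi, M):
    """r = (S_IE^T M S_QoI) / sqrt((S_IE^T M S_IE) * (S_QoI^T M S_QoI))."""
    num = s_ie @ M @ s_qoi
    den = np.sqrt((s_ie @ M @ s_ie) * (s_qoi @ M @ s_qoi))
    return num / den

M = np.diag([0.04, 0.01])                             # toy prior CND
r_ident = representativity(np.array([1.0, 0.5]),
                           np.array([1.0, 0.5]), M)   # identical sensitivities
r_cross = representativity(np.array([1.0, 0.0]),
                           np.array([0.0, 1.0]), M)   # disjoint sensitivities
```

The factor behaves like a covariance-weighted cosine: it equals 1 when the experiment and the QoI share the same sensitivity profile and 0 when their sensitivities probe disjoint (uncorrelated, under this diagonal M) parts of the data.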
Furthermore, historically, DA in nuclear engineering has been applied along the two following axes: (1) to generate data libraries adjusted to a given set of applications, like, for example, ERALIB1 [4] and early versions of the ABBN library [1], and (2) to refine knowledge on certain parameters of nuclear reaction models [5,9,10,12].
Practically, there are only two major ideas of adjustment: (1) to fit some aggregated parameters, like groupwise cross sections, and then to refine the adjusted integral values, correcting the very basic parameters of the nuclear process model; or (2) to correct these parameters directly, fitting the models of nuclear processes to the IEs data.
One can see that along both axes Data Assimilation has demonstrated maturity sufficient for the current requirements of nuclear data evaluation [7,38,40,45].
3.2 2nd application: sciencedriven V & UQ
Together with corrected ND, DA quantifies their uncertainties, generating posterior CNDs. These can be used to evaluate the quality of the adjustment process as well as to validate the nuclear data libraries. Such an application − supporting a validation process − may become even more important than the data correction itself. Indeed, we have a few well-elaborated and recognized brands of nuclear data projects (ENDF/B, JENDL, JEFF, BROND, ROSFOND, CENDL and TENDL, among others [7,38]). It seems unlikely that any single design or scientific organization could repeat or improve any of them, but they can be characterized in terms of anticipated uncertainties in the fields of users' interest.
Validation through Uncertainty Quantification requires DA algorithms to be, mainly, robust, and only secondarily to be of high resolution.
It should be noted that a science-driven validation − which is exactly our case − separates the domains of validation and application. It means that we can use whatever kinds of experiments − criticality, reactivity, reaction rates and so on − to estimate biases and uncertainties for any Quantity of Interest (QoI). What is needed is to have relevant sensitivity coefficients or functional models to be combined with the corrections and posterior CNDs. Thus, in terms of the GLLS methodology, the bias of the QoI can be computed as follows:

ΔQoI = S_{QoI}^T ⋅ Δx, (9)

where ΔQoI, S_{QoI} and Δx are the scalar bias predicted by DA, the vector of sensitivity coefficients for the QoI, and the vector of correction factors of equation (6), correspondingly; and the uncertainty as

δ_{QoI} = (S_{QoI}^T ⋅ M′_x ⋅ S_{QoI})^{1/2}, (10)

where δ_{QoI} is the relative standard deviation, M′_x is the posterior CND of equation (7), and the other notations are as given above.
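The propagation of equations (9) and (10) onto a QoI through its sensitivity vector can be sketched as follows; the correction factors and posterior CND below are made-up numbers, not adjustment results:

```python
import numpy as np

s_qoi = np.array([0.8, -0.3, 0.1])       # sensitivity of the QoI (assumed)
dx = np.array([0.02, 0.01, -0.005])      # correction factors from the adjustment
M_post = np.diag([1e-4, 4e-4, 1e-4])     # posterior CND (toy diagonal case)

bias_qoi = s_qoi @ dx                          # eq. (9): predicted relative bias
delta_qoi = np.sqrt(s_qoi @ M_post @ s_qoi)    # eq. (10): relative std. deviation
```

Note that the two quantities are decoupled exactly as the text describes: the bias depends on the corrections (and hence on the C/E discrepancies), while the uncertainty depends only on the posterior covariance and the QoI sensitivities.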
3.3 3rd application: step toward ontological uncertainty treatment
While the first two groups of DA applications are well illustrated by practical cases and by conference and journal papers [1,7], the third group − what to do if we are dealing with an ontological issue − has not been much discussed so far. As said, neither adjustment nor comparison with observations, but only a kind of discovery, would help treat an ontological uncertainty. However, even in this case DA can become useful, bounding the impact of such uncertainties and contributing to the establishment of further problem-oriented basic research programs [19].
For example, years ago, the nuclear criticality safety community considered as a high priority a hypothetical case of criticality in a fuel powder-mixing apparatus. Physically, the configuration to be assessed was moisture in a mixture of reactor-grade plutonium and uranium oxides, and the critical conditions were reached with an epithermal spectrum. For many reasons, the number of representative integral experiments was very limited, while the few available gave a discrepancy of several percent in k_{eff}, corresponding to one-third or even one-half of the critical mass. Later, a dedicated parametric experimental program was established with under-moderated ^{240}Pu-containing critical assemblies [19,25,41,42]. As a result, the experiments confirmed the existence of the issue, while DA, applied afterwards, helped to characterize specific safety margins by posterior bias and uncertainties [19]; it remained, though, unclear which nuclide-reaction pair led to these discrepancies.
By chance, in this particular case, we had two sets of IEs data. The first − the "basic" set of experiment-based benchmarks − was taken from the ones available in the Handbooks [25]. The second − complemented − set consisted of the same "basic" benchmarks complemented by the newly obtained ones. Comparing the two correction factors derived from these two sets, we can estimate the following vector of indicators, XS_{ADDED}, to point out the nuclide, reaction and energy interval "responsible" for the discrepancy:

XS_{ADDED} = x̂_{compl} − x̂_{basic}, (11)

where x̂_{basic} and x̂_{compl} are the factors adjusted using the basic and the complemented sets of benchmarks, correspondingly. In our case, the energy-spanned XS_{ADDED} profile depicted in Figure 4 shows that the field to be elaborated most probably relates to the right wing of the 0.296 eV fission resonance of ^{239}Pu.
It should be noted that this conclusion has been surprisingly confirmed by the interpretation of some recent tests of modern ND libraries against a fuel depletion experimental benchmark, associating the issue with the right wing of the first ^{239}Pu fission resonance^{7}.
Using DA [19], we revealed that the questionable area lies below the eV range, in total contradiction with the intuitive statement that this area should be validated using experiments with thermal spectra [42,43].
Fig. 4 Energy spanned cross sections and relative data gain indicators (XS_{ADDED}). 
4 Discussion: technology readiness level
Assessing the maturity of DA algorithms, we divided them conventionally into three groups: (1) ROM/ROM, (2) linear-precise and (3) precise-precise representations [1–3,5,7–9,12,14]. The analysis is presented in Figure 5 by application − ND adjustment, ND Validation through Uncertainty Quantification, and contribution to basic research planning − and by group of algorithms. The bigger the relevant circle in the figure, the higher the level of maturity.
The first axis − ROM/ROM − means that the models of nuclear reactions and particle transport simulations are replaced by their Reduced Order Modeling analogues, such as relevant sensitivity coefficients. The nuclear reaction model (first abbreviation) is represented either as a set of groupwise cross sections [1–3], including, normally, micro-data with Westcott g-factors and Bondarenko f-factors and, if possible, vectors of subgroups, or by the parameters inherent to high-fidelity nuclear reaction modeling [5,9,11,18]. The particle transport model (second abbreviation) is also given as a set of groupwise sensitivity coefficients, comprising, of course, the explicit and implicit components of sensitivity [1,2,7]. In ROM/ROM, biases and uncertainties can be used immediately [6,19], while ND correction factors should be somehow unfolded and assessed [1,16]. Thus, we believe that the DA maturity here seems sufficient both for data adjustment and for validation.
The second axis − precise/precise (P/P) − means high-fidelity or even precise modeling of both reactor physics and nuclear reactions. Apparently, it could generate fully balanced and adjusted libraries. However, it still seems unclear how to adjust some pre-calibrated semi-empirical elements contained in the high-fidelity theoretical models intended for nuclear reaction calculations.
The third axis − linear/precise (L/P) − represents nuclear data as a superposition of pre-generated high-fidelity profiles. It is usually associated with Hierarchical Monte-Carlo, being oriented mainly toward validation [14].
In addition, we identified some bottlenecks for DA. First of all, in terms of DA methodologies, one still needs to elaborate an adjustment for composed − non-linear − operators, like, for example, the fission production operator, where ν̄ and the PFNS are correlated.
Concerning the IEs, one still needs high-fidelity experimental covariance matrices, which exist but are not numerous enough in the Handbooks, and which do not exist for different functionals − for example, the covariance between measurements of reaction rates and kinetic parameters.
Finally, one should note that some IEs have already been used in ND tuning. These experiments have to be withdrawn from the adjustment and validation or, at least, users should be informed about them.
Fig. 5 Maturity (Technology Readiness Level) practically demonstrated using DA algorithms with respect to ND adjustment, validation and relevant studies planning. 
5 Conclusions
Data Assimilation, belonging mainly to the field of information technology, has been present in nuclear technological science since the sixties of the last century. Known as Nuclear Data adjustment, it provided users with so-called design-oriented multigroup libraries and so on.
Among others, the use of the adjustment was warranted when the nuclides to be studied were rare, short-lived, or dangerous, complicating or even precluding any differential measurements.
Nowadays, despite − or maybe due to − the notable success of Data Assimilation, there is no more room for rough adjustment, because of the enhanced requirements on Nuclear Data accuracy. We are talking either about fine Nuclear Data calibration via parameters inherent to nuclear reaction modeling, or about science-driven Validation through Uncertainty Quantification, where we could use any, even rough, Data Assimilation algorithm.
As said, Data Assimilation contributes to nuclear data evaluation by combining differential and integral experiment data. In this case we are dealing with simple errors − discrepancies between calculated and experimental values − and their covariances, in order to generate optimally balanced problem-oriented libraries.
Eventually, Data Assimilation can substantiate a science-driven Validation, providing the assessor with an evidence-based background.
In addition, Data Assimilation can be used in gap analysis, contributing to the establishment of dedicated basic research programs.
Further development of Data Assimilation for Nuclear Data evaluation could proceed, among others, along the following axes: (1) an extension of the applications, enhancing comprehensive optimization and validation of nuclear data libraries; (2) an improvement of numerical algorithms, involving recently developed data science techniques; and (3) an elaboration of experimental databases.
Summarizing the reasoning above, we can conclude that Data Assimilation, as an approach, has sufficient maturity for nuclear engineering applications while having, at the same time, significant potential for further refinement.
Acknowledgments
This paper is written in memory of Dr. Massimo Salvatores. He, among others, made a great contribution to the rise of perturbation theory and to the involvement of Data Assimilation in a broad range of scientific domains of nuclear engineering, including reactor physics and control, innovative technologies and, on top of this, nuclear data evaluation and validation.
We would also like to extend our appreciation to the OECD/NEA staff and expert group members for their deep involvement in the scientific discussion on the role and practical implementation of Bayesian-based methodologies in nuclear technological science.
Author contribution statement
1) Evgeny Ivanov: general coordination and contribution to all chapters.
2) Cyrille De Saint-Jean: contribution to the chapters on methodological background, discussion and conclusions, and to the bibliography.
3) Vladimir Sobes: principal contribution to the chapters on methodological background, best practice in Data Assimilation, discussion and conclusions.
References
 M. Salvatores et al., Methods and issues for the combined use of integral experiments and covariance data: results of a NEA international collaborative study, Nucl. Data Sheets 118, 38–71 (2014) [CrossRef] [Google Scholar]
 L.N. Usachev, Y. Bobkov, Planning on optimum set of microscopic experiments and evaluations to obtain a given accuracy in reactor parameter calculations, INDC CCP19U, IAEA International Nuclear Data Committee (1972) [Google Scholar]
 J.L. Rowlands, L.D. Macdougall, The use of integral measurements to adjust crosssections and predicted reactor properties, Proceedings of the International Conference on Fast Critical Experiments and their Analysis, ANL7320 (1966) [Google Scholar]
 E. Fort, G. Rimpault, J.-C. Bosq et al., Improved performances of the fast reactor calculational system ERANOS-ERALIB1 due to improved a priori nuclear data and consideration of additional specific integral data, Ann. Nucl. Energy 30, 1879–1898 (2003) [CrossRef] [Google Scholar]
 C. de Saint-Jean (Coordinator), Assessment of Existing Nuclear Data Adjustment Methodologies, Report by the Working Party on International Evaluation Cooperation of the NEA Nuclear Science Committee, Vol. 33, NEA/WPEC-33, OECD/NEA, 2011 [Google Scholar]
 G. Palmiotti, M. Salvatores, The role of experiments and of sensitivity analysis in simulation validation strategies with emphasis on reactor physics, Ann. Nucl. Energy 52, 10–21 (2013) [CrossRef] [Google Scholar]
 Dan Gabriel Cacuci (ed.) Handbook of nuclear engineering: Vol. 1: nuclear engineering fundamentals, Springer, Boston, MA (Springer 2010) [Google Scholar]
 G. Evensen, Data Assimilation: The Ensemble Kalman Filter (Springer, Berlin, 2006) [Google Scholar]
 N.M. Larson, ORNL Report ORNL/TM9179/R8, 2008 [Google Scholar]
 M. Moxon et al., UKNSF Report, 2010 [Google Scholar]
 P. Archier, C. De Saint Jean, O. Litaize, G. Noguère, L. Berge, E. Privas, P. Tamagno, CONRAD evaluation code: development status and perspectives, Nucl. Data Sheets 118, 488–490 (2014) [CrossRef] [Google Scholar]
 A.J. Koning, Bayesian Monte Carlo method for nuclear data evaluation, Nucl. Data Sheets 123, 207–213 (2015) [CrossRef] [Google Scholar]
 D. Siefman, M. Hursin, D. Rochman et al., Stochastic vs. sensitivitybased integral parameter and nuclear data adjustments, Eur. Phys. J. Plus 133, 429 (2018) [CrossRef] [Google Scholar]
 A. Hoefer et al., MOCABA: A general Monte Carlo–Bayes procedure for improved predictions of integral functions of nuclear data, Ann. Nucl. Energy 77, 514–521 (2015) [CrossRef] [Google Scholar]
 C. De Saint Jean, P. Archier, E. Privas, G. Noguere, On the use of Bayesian MonteCarlo in evaluation of nuclear data, EPJ Web Conf. 146, 02007 (2017) [CrossRef] [Google Scholar]
 V. Sobes, L.C. Leal, G. Arbanas, Nuclear data adjustment with SAMMY based on integral experiments, Trans. Am. Nucl. Soc. (Anaheim, California) 111, 843–845 (2014)
 C. de Saint Jean, P. Archier, E. Privas, G. Noguère, O. Litaize, P. Leconte, Evaluation of cross section uncertainties using physical constraints: focus on integral experiments, Nucl. Data Sheets 123, 178 (2015)
 W. Martienssen (ed.), Low Energy Neutron Physics, Landolt-Börnstein (Springer-Verlag, Berlin, 2000)
 T. Ivanova, E. Ivanov, I. Hill, Methodology and issues of integral experiments selection for nuclear data validation, EPJ Web Conf. 146 (2017)
 S. Pelloni, D. Rochman, Performance assessment of adjusted nuclear data along with their covariances on the basis of fast reactor experiments, Ann. Nucl. Energy 121, 361–373 (2018)
 M.G. Kendall, A. Stuart, The Advanced Theory of Statistics, Vol. 2: Inference and Relationship (Hafner, New York, 1961), pp. 474–483
 V.F. Turchin et al., The use of mathematical-statistics methods in the solution of incorrectly posed problems, Sov. Phys. Usp. 13, 681 (1971)
 K. Beven, Facets of uncertainty: epistemic uncertainty, nonstationarity, likelihood, hypothesis testing, and communication, Hydrol. Sci. J. 61, 1652–1665 (2016)
 W. Oberkampf, C. Roy, Verification and Validation in Scientific Computing (Cambridge University Press, UK, 2010)
 J.B. Briggs, J.D. Bess, J. Gulliford, Integral benchmark data for nuclear data testing through the ICSBEP & IRPhEP, Nucl. Data Sheets 118 (2014)
 A. Santamarina et al., Reactivity worth measurement of major fission products in MINERVE LWR lattice experiment, Nucl. Sci. Eng. 178, 562–581 (2014)
 L. Leal, A.D. Santos, E. Ivanov, T. Ivanova, Impact of ^{235}U resonance parameter evaluation in the reactivity prediction, Nucl. Sci. Eng. 187, 127–141 (2017)
 K.F. Raskach, An improvement of the Monte Carlo generalized differential operator method by taking into account first- and second-order perturbations of fission source, Nucl. Sci. Eng. 162, 158–166 (2009)
 B.C. Kiedrowski, F.B. Brown, P.P.H. Wilson, Adjoint-weighted tallies for k-eigenvalue calculations with continuous-energy Monte Carlo, Nucl. Sci. Eng. 168, 226–241 (2011)
 M. Aufiero et al., A collision history-based approach to sensitivity/perturbation calculations in the continuous energy Monte Carlo code SERPENT, Ann. Nucl. Energy 85, 245–258 (2015)
 E. Brun et al., TRIPOLI-4®, CEA, EDF and AREVA reference Monte Carlo code, Ann. Nucl. Energy 82, 151–160 (2015)
 A. Jinaphanh, N. Leclaire, Continuous-energy perturbation methods in the MORET 5 code, Ann. Nucl. Energy 114, 395–406 (2018)
 C.M. Perfetti, B.T. Rearden, W.R. Martin, SCALE continuous-energy eigenvalue sensitivity coefficient calculations, Nucl. Sci. Eng. 182, 332–353 (2016)
 M.T. Pigni, M. Herman, P. Oblozinsky, F.S. Dietrich, Sensitivity analysis of neutron total and absorption cross sections within the optical model, Phys. Rev. C 83, 024601 (2011)
 I. Kodeli, Comments on the status of modern covariance data based on different fission and fusion reactor studies, EPJ Nuclear Sci. Technol. 4, 46 (2018)
 G. Palmiotti, M. Salvatores, Cross section covariances: a user perspective, EPJ Nuclear Sci. Technol. 4, 40 (2018)
 E. Bauge, S. Hilaire, P. Dossantos-Uzarralde, Evaluation of the covariance matrix of neutronic cross sections with the Backward-Forward Monte Carlo method, Int. Conf. Nuclear Data for Science and Technology, 259–264 (2007)
 M.B. Chadwick et al., CIELO collaboration summary results: international evaluations of neutron reactions on uranium, plutonium, iron, oxygen and hydrogen, Nucl. Data Sheets 148, 189–213 (2018)
 D. Kumar, S.B. Alam, H. Sjöstrand, J.M. Palau, C. De Saint Jean, Influence of nuclear data parameters on integral experiment assimilation using Cook's distance, EPJ Web Conf. 211, 07001 (2019)
 C. De Saint Jean et al., Evaluation of neutron-induced cross sections and their related covariances with physical constraints, Nucl. Data Sheets 148, 383–419 (2018)
 T.T. Ivanova, M.N. Nikolaev, K.F. Raskach, E.V. Rozhikhin, A.M. Tsiboulia, Use of International Criticality Safety Benchmark Evaluation Project data for validation of the ABBN cross-section library with the MMK-KENO code, Nucl. Sci. Eng. 145, 247–255 (2003)
 The Need for Integral Critical Experiments with Low-moderated MOX Fuels, in Proceedings of the Workshop, Paris, France, 14–15 April 2004, OECD NEA No. 5668, ISBN 92-64-02078-0
 L.C. Leal, G. Noguère, C. de Saint Jean, A.C. Kahler, ^{239}Pu resonance evaluation for thermal benchmark system calculations, Nucl. Data Sheets 118, 122–125 (2014)
 J.B. Clarity, W.J. Marshall, B.T. Rearden, I. Duhamel, Selected uses of TSUNAMI in critical experiment design and analysis, in 2020 ANS Virtual Winter Meeting, Transactions 123, 804–807 (2020)
 T. Frosio, T. Bonaccorsi, P. Blaise, Extension of Bayesian inference for multi-experimental and coupled problem in neutronics – a revisit of the theoretical approach, EPJ Nuclear Sci. Technol. 4, 19 (2018)
Cite this article as: Evgeny Ivanov, Cyrille De Saint Jean, Vladimir Sobes, Nuclear data assimilation, scientific basis and current status, EPJ Nuclear Sci. Technol. 7, 9 (2021)
All Figures
Fig. 1 Qualitative illustration of ND and IEs PDFs [20] (a) and the degree of coincidence between prior (b) and posterior (c) calculational (light grey) and experimental (light blue) values [7,19].
Fig. 2 Parametrization strategy and algorithmic options to be implemented in DA [1–3,5–7,9–12,14,16,18].
Fig. 3 Prior (a) and posterior (b) covariance matrices for a selected set of nuclide-reactions [19].
Fig. 4 Energy-spanned cross sections and relative data gain indicators (XS_{ADDED}).
Fig. 5 Maturity (Technology Readiness Level) practically demonstrated using DA algorithms with respect to ND adjustment, validation and relevant studies planning.