The accurate prediction of spent nuclear fuel (SNF) characteristics such as decay heat and radiation emission rates has a large impact on the design, economics, performance, safety assessment, licensing and implementation of all the steps between the use of the fuel in a nuclear reactor and its final disposal.
At present, producing accurate and reliable experimental measurements of these observables is expensive, impractical, or both, and such measurements scale poorly to the large number of existing SNF assemblies with different designs, irradiation conditions and fuel types. As spent fuel pools worldwide approach saturation, the back-end of the nuclear fuel cycle urgently needs validated computational tools to predict these quantities for both commercial and research applications. The uncertainties associated with the model predictions help define the reliability of the calculations.
The general lack of detailed, available assay data – e.g., irradiation history, fuel composition, core parameters – for measured SNF assemblies often forces physical assumptions or simplifications into the predictive model, such as a scaled-down core geometry model. The sensitivity of the results to such approximations is still under investigation, especially given new industrial needs for higher SNF burnup levels, different fuel types and different reactor types.
Modern computational methods rely on solving the balance equations that describe the time evolution of the nuclides involved. The full problem is highly non-linear, but in practice the equations are often linearized by assuming cross sections constant in time over a so-called macro time step. The choice of linearization and other simplifications in the physical model affect the final outcome.
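Schematically, writing N(t) for the vector of nuclide densities and A for the burnup matrix assembled from decay constants and flux-weighted one-group cross sections (notation introduced here only for illustration), the linearization amounts to

\[
\frac{d\mathbf{N}}{dt} = \mathbf{A}(t)\,\mathbf{N}(t),
\qquad
\mathbf{N}(t_k + \Delta t_k) \approx e^{\mathbf{A}_k \Delta t_k}\,\mathbf{N}(t_k),
\]

where \(\mathbf{A}_k\) is evaluated with cross sections held constant over the macro time step \(\Delta t_k\), and the fully non-linear problem is approximated by chaining such steps.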
The accurate prediction of the nuclide composition of SNF remains an area where nuclear data generally lack consistent and systematic benchmarking and validation. Accurate knowledge of these nuclear data, i.e., best-estimate values and (co-)variance data, has a tremendous impact on the reliability of the computational results.
The major constraints on the validation exercise remain, however, the approximations introduced in modelling parameters other than nuclear data – a significant amount of design and reactor operation data is required at a great level of detail – and the biases in the experimental measurements used for validation. Still, it is important to assess to what extent the current knowledge of nuclear data affects such calculations by systematically propagating the existing nuclear data uncertainties through the computational models.
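As a minimal sketch of such uncertainty propagation, the following Monte Carlo example samples perturbed cross-section sets from an assumed covariance and pushes each sample through one linearized depletion step. The three-nuclide chain, decay constants, cross sections and covariances are illustrative assumptions, not evaluated nuclear data.

```python
# Minimal sketch: Monte Carlo propagation of nuclear data uncertainties
# through one linearized depletion (macro) step. All parameters below are
# illustrative assumptions, not evaluated nuclear data.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(42)

lam = np.array([1e-9, 5e-8, 0.0])           # decay constants [1/s] (assumed)
sigma = np.array([2.0, 10.0, 0.5])          # one-group capture XS [barn] (assumed)
rel_cov = np.diag([0.05, 0.10, 0.20]) ** 2  # assumed relative covariance matrix

phi = 1e14 * 1e-24                          # flux [n/cm^2/s] x barn-to-cm^2
dt = 30 * 24 * 3600.0                       # 30-day macro time step [s]
N0 = np.array([1.0, 0.0, 0.0])              # initial nuclide densities

def burnup_matrix(sig):
    """Linearized balance matrix A for the toy chain 0 -> 1 -> 2."""
    A = np.zeros((3, 3))
    for i in range(3):
        A[i, i] = -(lam[i] + sig[i] * phi)  # losses: decay + capture
    A[1, 0] = lam[0] + sig[0] * phi         # toy assumption: parent losses feed 1
    A[2, 1] = lam[1] + sig[1] * phi         # and nuclide 1's losses feed 2
    return A

# Sample correlated relative perturbations of the cross sections and
# propagate each sample through the macro step via the matrix exponential.
L = np.linalg.cholesky(rel_cov)
samples = np.array([
    expm(burnup_matrix(sigma * (1.0 + L @ rng.standard_normal(3))) * dt) @ N0
    for _ in range(1000)
])

print("end-of-step composition:", samples.mean(axis=0), "+/-", samples.std(axis=0))
```

The spread of the sampled end-of-step compositions is a direct estimate of how the assumed nuclear data uncertainties translate into composition uncertainty; production codes apply the same idea with full decay chains and evaluated covariance libraries.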
Computational methods – i.e., the models, simplifications and data – need to be validated against real experimental data. In this context, validation can proceed by comparing predicted and measured isotopic compositions in spent nuclear fuel (microscopic comparison) or by comparing integral quantities such as decay heat or radiation emission (macroscopic comparison).
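A common microscopic metric is the calculated-to-experimental (C/E) ratio per nuclide. The short sketch below, with placeholder numbers rather than real assay data, shows how such ratios and their combined uncertainties are typically formed.

```python
# Minimal sketch of a microscopic validation metric: the calculated-to-
# experimental (C/E) ratio per nuclide with combined uncertainty. All
# values below are placeholders, not real assay data.
import numpy as np

nuclides = ["U-235", "Pu-239", "Cs-137", "Nd-148"]
calc = np.array([8.1e-3, 5.3e-3, 1.2e-3, 7.4e-4])  # calculated [g/g U initial]
meas = np.array([8.4e-3, 5.0e-3, 1.2e-3, 7.5e-4])  # measured   [g/g U initial]
u_calc = 0.03 * calc  # assumed 3% propagated nuclear-data uncertainty
u_meas = 0.02 * meas  # assumed 2% experimental uncertainty

ce = calc / meas
# For a ratio of independent quantities, relative uncertainties add in quadrature.
u_ce = ce * np.sqrt((u_calc / calc) ** 2 + (u_meas / meas) ** 2)

for name, r, u in zip(nuclides, ce, u_ce):
    print(f"{name}: C/E = {r:.3f} +/- {u:.3f}")
```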
For this Research Topic, we would like to collect papers on, but not limited to, the following:
• The state of the art of depletion code development;
• Nuclear data improvements important to spent fuel characterization;
• Development and validation of computational models for spent fuel characterization;
• Sensitivity analysis with respect to nuclear data and model simplifications;
• Comparison of model and code predictions against evaluated computational or experimental benchmarks, which is crucial in this matter.