https://doi.org/10.1051/epjn/2019047
Regular Article
3D convolutional and recurrent neural networks for reactor perturbation unfolding and anomaly detection
University of Lincoln, School of Computer Science, Machine Learning Group, Brayford Pool, Lincoln LN6 7TS, UK
^{*} email: adurrant@lincoln.ac.uk
Received: 1 July 2019
Accepted: 12 July 2019
Published online: 29 November 2019
With Europe's ageing fleet of nuclear reactors operating closer to their safety limits, the monitoring of such reactors through complex models has become of great interest for maintaining a high level of availability and safety. We therefore propose an extended Deep Learning framework, as part of the CORTEX Horizon 2020 EU project, for the unfolding of reactor transfer functions from induced neutron noise sources. The unfolding allows for the identification and localisation of reactor core perturbation sources from neutron detector readings in Pressurised Water Reactors. A 3D Convolutional Neural Network (3D CNN) and a Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) are presented, studying the signals in the frequency and time domain respectively. The proposed approach achieves state-of-the-art results, with the classification of perturbation type in the frequency domain reaching 99.89% accuracy and the localisation of the classified perturbation source being regressed to 0.2902 Mean Absolute Error (MAE).
© A. Durrant et al., published by EDP Sciences, 2019
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1 Introduction
The early detection, classification, and localisation of anomalies within the reactor core is vital to ensure the safe and efficient operation of Europe's increasingly ageing fleet of reactors. Monitoring these reactors at nominal conditions provides vital and valuable insights into the functional dynamics of the core, consequently allowing for the early identification of anomalies. Analysis of the core operation is achieved through non-intrusive measurement of the fluctuations of the neutron flux around its mean value from in-core and ex-core detectors. These fluctuations, more commonly referred to as neutron noise, are induced primarily by turbulent characteristics of the coolant flow in the core, coolant boiling, or mechanical vibrations of the reactor's internal components.
Given detailed descriptions of the reactor core geometry, the properties of physical perturbations, and the probabilities of neutron interactions, simulations can be constructed to show the effect of the neutron noise by using a Green's function as the reactor transfer function. The Green's function holds the relationship between a locally induced perturbation and the response of the neutron flux within the core; therefore, the inversion of this function from noise readings can localise and classify such induced perturbations. This inversion, known as the backwards problem or unfolding, is trivial given measurements at every position within the core; however, the limited number of in-core and ex-core detectors makes it a complex challenge [1].
Machine learning (ML) is a data-analytical process for the approximation of functions mapping a set of inputs to outputs. The use of ML to approximate such reactor functions given limited detector readings is therefore advantageous, learning high- and low-level patterns given substantial training examples. This work presents an extended 3D Convolutional and Recurrent neural network approach to unfold the reactor transfer function and to classify induced perturbation types and their source locations in both the time and frequency domains.
2 Related work
The application of ML approaches in the field of nuclear safety has attracted recent scientific interest, with nuclear energy essential to meeting fast-changing climate goals. The ML community has been keen on tackling climate change [2], utilising a variety of approaches across all energy sectors. Nuclear energy relies on safety and availability to achieve such goals, and many recent works have been proposed to ensure this.
In [3] the authors utilised deep convolutional neural networks and Naïve-Bayes approaches for vision-based crack detection on reactor component surfaces from video sequences. A diagnosis system monitoring the condition of sensors using auto-associative kernel regression and sequential probability testing was proposed in [4]. Deep rectifier neural networks were implemented in [5] for accident or transient scenario identification in pressurised water reactors (PWRs), whereas others solved a similar problem employing artificial neural networks, improving condition-based maintenance [6]. Further ML approaches were implemented in [7] in the form of an Adaptive Neuro-Fuzzy Inference System (ANFIS) for the prediction of critical heat flux. For unfolding, ANFIS approaches have also been utilised for the localisation of simulated induced neutron noise sources in VVER-1000 reactors, given neutron pulse height distributions as training input [8,9].
Work proposed in [10] unfolds reactor transfer functions by means of CNNs from simulated neutron noise readings in the frequency domain at differing perturbation types and frequencies. Classification and localisation of the perturbations were achieved with low error by means of a 2D CNN. The localisation of the perturbation source was achieved through the spatial splitting of the core volume into 12 and 48 subsections, classifying the source perturbation as belonging to a particular subsection. Furthermore, an increased unfolding resolution for localisation was implemented, utilising the latent variables extracted from the CNN together with clustering. Reference [11] proposed a 3D CNN approach to combat the limitations of the 2D implementation in [10], arising from the loss of spatial information when converting the 3D volume into a 2D input. Moreover, [11] included the classification of time domain signals, processed to extract temporal information via RNNs. This work extends the approaches previously developed in [10,11] to larger, more complex simulation scenarios, including the localisation of perturbations in the time domain.
3 Simulated scenarios and data preprocessing
Training ML models requires large amounts of training data, representing instances for which known perturbations are assumed and the corresponding induced neutron noise readings are estimated. The known data allow the system to learn the function mapping detector readings to their classification and origin, i.e. transfer function inversion, or unfolding. To obtain this amount of training data it is necessary to simulate scenarios, practically providing enough examples of differing anomaly types and source locations for effective unfolding. To achieve this, simulations determining the reactor transfer function, or Green's function, providing detector readings of the induced neutron noise for given perturbation scenarios in pressurised water reactors (PWRs), have been employed in both the time and frequency domains.
3.1 Frequency domain
Modelling of the fluctuations in neutron flux given known perturbations in the frequency domain was achieved through the CORE SIM [12] reactor physics code, generating neutron detector readings of the induced neutron noise in a PWR for five perturbation scenarios. CORE SIM models the effects of a noise source for a three-dimensional reactor core, of cylindrical shape in Cartesian geometry, via a reactor transfer function, considered to be the Green's function of the system, capturing the response of the induced neutron flux fluctuations to known perturbation distributions. The Green's function provides a one-to-one relationship between any location of a perturbation and the response of the neutron flux at any position within the core. CORE SIM models a PWR with a radial core of 15 × 15 fuel assemblies, utilising a fine volumetric mesh of 32 × 32 × 34 voxels modelling sub-assembly response, including boundary sources. For further details, consult the CORE SIM user manual [12,13].
CORE SIM provides five perturbation scenarios at 34 frequencies (0.1–1.0 Hz with a step of 0.1 Hz and 1.0–25.0 Hz with a step of 1.0 Hz), each with two energy groups, i.e. the high and low energy spectra, referred to as the Fast and Thermal groups respectively. The five scenarios include: Absorber of Variable Strength, the perturbation of the thermal macroscopic absorption cross-section; Axially-Travelling Perturbations, perturbation of the coolant at the velocity of the coolant flow; Fuel Assembly Vibrations, vibration of a fuel assembly in the x- and/or y-direction for differing modes: cantilevered beam or simply supported for the first mode (0.8–4.0 Hz), simply supported in the second mode (5.0–10.0 Hz), and cantilevered beam and simply supported for both modes; Control Rod Vibrations, vibration of a one-dimensional structure along the z-direction vibrating perpendicularly to the two-dimensional (x, y) plane; Core Barrel Vibrations, perpendicular or beam mode of vibration in both the in-phase and out-of-phase modes. Examples of these perturbations can be seen in Figure 1 for an axial cross-section of the core volume.
Fig. 1 Examples of the amplitude of the induced neutron flux in the frequency domain for a single azimuthal slice on the 10th axial plane. Left: Absorber of Variable Strength. Middle: Core Barrel Vibration. Right: Vibrating Fuel Assembly, cantilevered.
3.1.1 Data preprocessing
The signals produced are complex 3D volumes of the size of the fine volumetric mesh (32 × 32 × 34 voxels), representing the induced neutron noise at every point within the core volume for a given perturbation originating from a specific positional coordinate (i, j, k) within the core. The signal volumes are provided as the response in both the fast and thermal groups; however, for our experimentation only the thermal group is utilised, as neutron detectors are more sensitive to thermal neutrons. The dataset comprises 34 frequencies, each containing a minimum of 106,176 data examples across all scenarios, and has been split into training, validation, and testing sets by frequency and source location per scenario.
To mimic the signals from real plant detectors, a predetermined number of voxel locations have been selected from the whole 32 × 32 × 34 volume to emulate the number of detectors within the simulated core. In our case 48 in-core and 8 ex-core detectors have been used, at their volumetric positions in the modelled core layout. Furthermore, to emulate reality, the Auto-Power Spectral Densities (APSD) and Cross-Power Spectral Densities (CPSD) of the simulated signals have been calculated to coincide with real plant readings. Additionally, to demonstrate the robustness of the proposed network, white Gaussian noise has been added to the signals at two signal-to-noise ratios (SNR), SNR = 3 and SNR = 1. Finally, as Deep Neural Networks (DNNs) cannot currently handle complex-valued signals easily, each complex 3D volume is decomposed into its amplitude and phase. The two resulting volumes are concatenated channel-wise to form a 2 × 32 × 32 × 34 input volume.
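As a minimal illustration of this final preprocessing step, the sketch below decomposes a hypothetical complex-valued volume (a random stand-in for a CORE SIM thermal-group output, not real simulation data) into amplitude and phase and stacks the two channel-wise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical complex 3D noise volume of the fine-mesh size (32 x 32 x 34),
# standing in for one CORE SIM thermal-group response.
volume = rng.standard_normal((32, 32, 34)) + 1j * rng.standard_normal((32, 32, 34))

# Decompose the complex signal into its amplitude and phase components.
amplitude = np.abs(volume)
phase = np.angle(volume)

# Channel-wise concatenation yields the 2 x 32 x 32 x 34 network input.
net_input = np.stack([amplitude, phase], axis=0)
```

The same decomposition would be applied per frequency and per scenario before feeding the volumes to the 3D CNN.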
3.2 Time domain
The determination of the reactor transfer function in the time domain employed the Simulate-3K (S3K) code [14], modelling 48 in-core and 8 ex-core neutron detectors for the four-loop, Westinghouse, PWR mixed core of the OECD/NEA transient benchmark. S3K has been utilised to perform 27 different scenarios comprising 6 perturbation settings and their combinations: Fuel Assembly Vibration of the central 5 × 5 cluster, vibrating synchronously in the x- or y-direction at a frequency of 1.5 Hz (sine wave) or randomly (white noise); Fluctuations of the Coolant Flow, at ±1% of the relative mean amplitude; Fluctuations of the Coolant Temperature, at ±1 °C from the mean value of 286.7 °C. These perturbation distributions have been modelled with core operating conditions similar to those of the aforementioned frequency domain model.
S3K simulates each of the scenarios for a duration of 100 seconds sampled at 0.01 s time steps for each of the 48 in-core and 8 ex-core detectors. The in-core detectors are positioned at 8 azimuthal locations on 6 axial levels, while the ex-core detectors are distributed at 4 azimuthal locations on 2 different axial levels. In addition to the above classification scenarios, individual fuel assembly vibrations for all 193 azimuthal locations within the core have been modelled for 5 different scenarios of 4 perturbation settings, including combinations of the 4: Fuel Assembly Vibration in the x-direction at a frequency of 1.5 Hz (sine wave) or randomly (white noise); Fluctuations of the Coolant Flow, at ±1% from the mean value; Fluctuations of the Coolant Temperature, at ±1 °C from the mean value of 286.7 °C. These scenarios have been used for the classification and localisation of the perturbed fuel assembly. For further technical details on S3K refer to the user manual [14].
3.2.1 Data preprocessing
The signals produced by S3K are presented as 10,001-dimensional vectors for each of the 56 detectors per scenario, representing the readings of the induced neutron flux. Due to the limited number of data samples available, data augmentation was performed to increase the number of samples per detector per scenario, and to reduce the large input size into the DNN. To achieve this, a sliding window of width 100 time steps and stride 25 was used, each window representing a 1-second input to the network; this produced a matrix x ∈ ℝ^{396 × 100} per detector. Furthermore, the data have been split into training, validation, and testing sets by detector position, meaning specific detector locations have been assigned to different sets according to the description in Figure 2, per axial position of the detectors. Finally, to further test the robustness of the proposed networks, white Gaussian noise has been added to the signals at two SNRs, SNR = 5 and SNR = 10.
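The sliding-window augmentation can be sketched as follows; the exact boundary convention is an assumption on our part, chosen here so that a 10,001-step signal yields the 396 windows quoted above:

```python
import numpy as np

def sliding_windows(signal, width=100, stride=25):
    """Split a 1D detector signal into overlapping windows of the given
    width and stride. The number of windows follows the convention
    n = (len - width) // stride, which reproduces the 396 windows
    reported for a 10,001-sample signal."""
    n = (len(signal) - width) // stride
    return np.stack([signal[i * stride : i * stride + width] for i in range(n)])

# 100 s of readings sampled at 0.01 s steps -> 10,001 samples.
sig = np.arange(10001, dtype=float)
windows = sliding_windows(sig)  # shape (396, 100), i.e. x in R^{396 x 100}
```

Each row is then a 1-second sample fed to the network as an independent training example.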
Additionally, for the localisation of fuel assembly vibrations, the same sub-sampling process has been undertaken; however, all 56 detectors for a 1-second sample are considered as one input into the network. Therefore, the split of the data has been achieved through the source location of the vibrating fuel assembly, ensuring the same assembly is not present in more than one set. The same process of applying white Gaussian noise has also been applied, at the higher noise levels of SNR = 3 and SNR = 1, to study the effect on the network, given the added robustness of utilising all 56 possible detectors as input.
Fig. 2 Modelled core layout with 8 in-core and 4 ex-core detector locations shown for one axial plane. Corresponding train, test and validation detector splits are shown, with the central 5 × 5 FA cluster shown in red.
4 Approach
ML, and more specifically Deep Learning (DL), is a set of powerful algorithmic approaches for data analytics and pattern recognition, applying iteratively learnt knowledge to unseen data for decision-making tasks without being explicitly programmed. DL is a subset of ML utilising multiple stacked layers of Artificial Neural Networks (ANNs), inspired by biological neurons, to extract varying levels of information, hence the term deep. The proposed approaches utilise modern deep learning techniques and architectures, extracting valuable pattern information from the input signals to iteratively learn the inverse of the reactor transfer functions.
4.1 3D Convolutional Neural Network
Convolutional Neural Networks (CNNs) [15] are specialised ANNs designed for spatial feature extraction from data with a known grid-like topology, e.g. images. CNNs replace the traditional matrix multiplication of ANNs with the convolution operation, extracting spatial features and improving efficiency, with the capability of learning coarse-to-fine features through the addition of further CNN layers extracting complex hierarchical concepts. Convolutional layers utilise a set of kernels, learning a corresponding number of filters that capture the spatial patterns pertaining to the given input. Formally, the activation of feature-map f at position i, j, k of a convolutional layer ℓ is given by

A^{[ℓ]}_{f,i,j,k} = ϕ(Z^{[ℓ]}_{f,i,j,k} + b^{[ℓ]}_{f}), (1)

where ϕ is a non-linear activation function such as the Rectified Linear Unit (ReLU: ϕ(x) = max(0, x)) and b is a learnt bias, with the pre-activation given by

Z^{[ℓ]}_{f,i,j,k} = (W^{[ℓ]}_{f} * A^{[ℓ−1]})_{i,j,k} = Σ_{x=1}^{X} Σ_{y=1}^{Y} Σ_{z=1}^{Z} W^{[ℓ]}_{f,x,y,z} A^{[ℓ−1]}_{i+x,j+y,k+z}, (2)

where W^{[ℓ]} is a kernel of learnt weights in layer ℓ with dimensions X × Y × Z, convolved with the activations from the previous layer, W^{[ℓ]} * A^{[ℓ−1]}. This produces, per location, a weighted sum of all points within a kernel's receptive field over the previous layer's activations. Visual examples of the features learnt via the convolution operation can be seen in Figure 4.
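A naive single-feature-map version of equations (1)–(2) can be written directly as nested loops. This is a didactic sketch of the operation only, not the optimised implementation used in practice:

```python
import numpy as np

def relu(x):
    # ReLU activation, phi(x) = max(0, x).
    return np.maximum(0.0, x)

def conv3d_single(prev_act, kernel, bias):
    """Naive 'valid' 3D convolution for one feature map, eqs. (1)-(2):
    Z[i,j,k] = sum_{x,y,z} W[x,y,z] * A_prev[i+x, j+y, k+z], A = phi(Z + b)."""
    X, Y, Z = kernel.shape
    I, J, K = prev_act.shape
    out = np.empty((I - X + 1, J - Y + 1, K - Z + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Weighted sum over the kernel's receptive field.
                patch = prev_act[i:i+X, j:j+Y, k:k+Z]
                out[i, j, k] = np.sum(kernel * patch)
    return relu(out + bias)
```

On a 4 × 4 × 4 volume of ones with a 2 × 2 × 2 kernel of ones and zero bias, every output value is the sum of eight ones.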
Given the volumetric nature of the signals in the frequency domain and the task of localisation, it is necessary to capture spatial relationships and patterns within the data volume. Therefore, this work proposes a modified, densely-connected 3D CNN for the volumetric feature extraction of simulated neutron detector readings, depicted in Figure 3.
The network depicted in Figure 3 shows the architectural construction of the 3D CNN, comprised of three dense blocks modified from the 2D variant to allow for the 3D volumetric input. Dense blocks [16] are a DNN architectural design utilising several CNN developments, with their main advantage being the use of dense connections. These connections allow for a greater flow of information between layers during the forward and backward passes of the backpropagation procedure, reducing vanishing gradients and achieving better performance. The connections are simply concatenations, where the ℓth hidden layer H_{ℓ} receives as input the feature-maps of all preceding layers within that block:

A^{[ℓ]} = H_{ℓ}([A^{[0]}, A^{[1]}, …, A^{[ℓ−1]}]). (3)
In addition to the dense connections, the network employs 1 × 1 × 1 kernel convolutions with stride 1 for the reduction of feature dimensionality following the dense connections; furthermore, 1 × 1 × 1 kernels reduce network parameters whilst increasing network complexity, further assisting the parameter-heavy 3D convolution operation [17]. The dense blocks each contain l = 20 layers with a growth rate of k = 6; for further details please refer to [16]. All convolutional layers follow the commonplace procedure: convolutional layer → Batch Normalization (BN) → ReLU activation. BN normalises the activations output by the convolutional layer, improving network stability; ReLU is a non-linear activation function with sparse activation, further assisting in the reduction of vanishing gradients. Furthermore, the proposed network replaces the pooling operation with strided convolutions for dimensionality reduction, retaining the spatial structural information from the input that is vital for the localisation of perturbation sources.
The last convolutional layer of the network outputs a 256-dimensional representational feature vector of the input via a Global Average Pooling (GAP) layer [17], fully connected to two output layers for perturbation classification and localisation. GAP directly outputs the spatial average over the feature maps, resulting in a vector V ∈ ℝ^{m} where m is the number of feature maps. The output layer for classification is comprised of 9 non-linear sigmoid units, one for the occurrence of each individual perturbation type (nine types, as modes of fuel assembly vibration are considered as separate perturbation classes). For localisation, three linear units have been employed, each representing one of the (i, j, k) coordinates of the perturbation source to be regressed.
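A miniature PyTorch sketch of this design is given below: one small dense block, a strided-convolution downsample in place of pooling, GAP, and the two output heads. The layer counts and widths here are illustrative placeholders, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class DenseLayer3D(nn.Module):
    """One layer of a 3D dense block: BN -> ReLU -> 1x1x1 conv (bottleneck)
    -> BN -> ReLU -> 3x3x3 conv; the output is concatenated channel-wise
    with the input, realising the dense connection of eq. (3)."""
    def __init__(self, in_ch, growth):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm3d(in_ch), nn.ReLU(inplace=True),
            nn.Conv3d(in_ch, 4 * growth, kernel_size=1, bias=False),
            nn.BatchNorm3d(4 * growth), nn.ReLU(inplace=True),
            nn.Conv3d(4 * growth, growth, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):
        return torch.cat([x, self.body(x)], dim=1)

class Tiny3DDenseNet(nn.Module):
    """Illustrative miniature of the proposed network: one dense block,
    a strided-convolution downsample, GAP, and two output heads
    (9-way perturbation classification and 3 regressed coordinates)."""
    def __init__(self, in_ch=2, growth=6, layers=4):
        super().__init__()
        blocks, ch = [], in_ch
        for _ in range(layers):
            blocks.append(DenseLayer3D(ch, growth))
            ch += growth
        self.features = nn.Sequential(
            *blocks,
            # Strided convolution replaces pooling for downsampling.
            nn.Conv3d(ch, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.gap = nn.AdaptiveAvgPool3d(1)   # Global Average Pooling
        self.cls_head = nn.Linear(32, 9)     # sigmoid applied in the loss
        self.loc_head = nn.Linear(32, 3)     # (i, j, k) coordinates

    def forward(self, x):
        v = self.gap(self.features(x)).flatten(1)
        return self.cls_head(v), self.loc_head(v)
```

A 2 × 32 × 32 × 34 amplitude/phase volume (with a leading batch dimension) passes straight through this model, yielding a 9-unit classification output and a 3-unit coordinate output.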
Training the network has been achieved by implementing the multi-task loss approach from [11], minimising the weighted sum of losses per task (classification and localisation), with a weight coefficient identifying the impact of each task's loss in the training procedure. For classification, the network aims to minimise the negative log-likelihood (NLL)

L_{NLL} = −(1/N) Σ_{i=1}^{N} [y_{i} log ŷ_{i} + (1 − y_{i}) log(1 − ŷ_{i})], (4)

and for localisation regression it minimises the L2 loss, or mean squared error (MSE),

L_{MSE} = (1/N) Σ_{i=1}^{N} (y_{i} − ŷ_{i})^{2}, (5)

where y_{i} and ŷ_{i} are the true and predicted values of the network for N examples. As previously alluded to, the 3D CNN is trained by minimising a weighted sum of losses

L(X; W) = λ_{1} Σ_{p=1}^{P} L_{NLL}^{(p)} + λ_{2} Σ_{c=1}^{C} L_{MSE}^{(c)}, (6)

where P and C are the number of perturbation classes and source location coordinates respectively, and λ_{1}, λ_{2} are the manually tuned hyperparameter weight coefficients for the classification and localisation task losses respectively. This objective is minimised given input data X with respect to the parameters W (weights and biases).
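Under these definitions, the multi-task objective of equation (6) might be sketched in PyTorch as follows. The function name and default weights are our own, the sums over classes and coordinates are folded into means, and the sigmoid plus NLL of equation (4) is expressed through the numerically stable binary cross-entropy with logits:

```python
import torch
import torch.nn.functional as F

def multitask_loss(cls_logits, cls_targets, loc_pred, loc_targets,
                   lam1=1.0, lam2=1.0):
    """Weighted sum of the two task losses, eq. (6): binary NLL for the
    sigmoid classification outputs, eq. (4), and MSE for the regressed
    source coordinates, eq. (5). lam1 and lam2 play the role of the
    manually tuned coefficients lambda_1 and lambda_2."""
    nll = F.binary_cross_entropy_with_logits(cls_logits, cls_targets)
    mse = F.mse_loss(loc_pred, loc_targets)
    return lam1 * nll + lam2 * mse
```

With near-perfect predictions both terms vanish, so the combined loss approaches zero, as expected of a well-trained network.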
Fig. 3 The proposed densely-connected 3D CNN architecture, depicting an example dense block of 2 layers and a growth rate of 32. The fully-connected and output layers can be seen to the right of the GAP; each unit represents a perturbation type for classification or a source (i, j, k) coordinate to be regressed.
Fig. 4 Sample of 12 learnt feature-maps from the output of the first dense block for the input of a vibrating fuel assembly at (8,16), given all possible detectors. The maps visually depict how differing layers highlight different features of the input: (a) shows a peak at the source of vibration, (d) the response on the core barrel, and (j) the noise dissipating throughout the core.
4.2 Long short-term memory recurrent neural network
Time domain signals hold temporal information within their sequential structure; therefore, an approach differing from that previously described is necessary to capture these time-dependent features. To more appropriately capture the relationships within the detector signals, Recurrent Neural Networks (RNNs) have been employed. RNNs utilise recurrence to allow information about previous time steps to persist within the network, informing current and future time-step cells across the sequence. RNNs in principle formulate a non-linear output A_{t} from both the input data x_{t} at the given time step and the activation of the previous time step's cell A_{t−1}, where ϕ is a non-linear activation function such as the hyperbolic tangent (tanh):

A_{t} = ϕ(W · [A_{t−1}, x_{t}] + b). (7)
Long Short-Term Memory (LSTM) [18], a variation of the RNN, has been incorporated in this work for its ability to learn long-term dependencies across long sequences, ideal for the 100-time-step sequences in question. It achieves this with the use of memory gates, regulating and learning how much to 'remember' from previous cell states and how much to contribute from the current data input. Initially, the forget gate f_{t} determines what to remember from the previous cell state C_{t−1} given the activation A_{t−1}. To decide what new information will be added to the current cell state, an input gate i_{t} and candidate values C̃_{t} are generated:

f_{t} = σ(W_{f} · [A_{t−1}, x_{t}] + b_{f}), i_{t} = σ(W_{i} · [A_{t−1}, x_{t}] + b_{i}), C̃_{t} = tanh(W_{C} · [A_{t−1}, x_{t}] + b_{C}). (8)
The outputs of these gates are combined to update the previous cell state to the new cell state C_{t}, via the forgetting and updating previously computed through the learnt weights. The output gate o_{t} is employed to control what should be output from the newly computed cell state, producing a non-linear activation A_{t} passed on to the subsequent cells:

C_{t} = f_{t} ⊙ C_{t−1} + i_{t} ⊙ C̃_{t}, o_{t} = σ(W_{o} · [A_{t−1}, x_{t}] + b_{o}), A_{t} = o_{t} ⊙ tanh(C_{t}). (9)
Further details and intuition for LSTMs can be found in [18]; the above process is visually depicted within each of the LSTM cells in Figure 5.
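The gate equations (8)–(9) can be traced in a few lines of NumPy. The packing of the four gate weight matrices into a single array W, and the function name, are implementation choices of this sketch only:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, a_prev, c_prev, W, b):
    """One LSTM time step, eqs. (8)-(9). W maps the concatenated
    [a_prev, x_t] to all four gates (forget, input, candidate, output),
    stacked in that order; b holds the corresponding biases."""
    concat = np.concatenate([a_prev, x_t])
    h = a_prev.size
    z = W @ concat + b                  # shape (4h,)
    f_t = sigmoid(z[0:h])               # forget gate
    i_t = sigmoid(z[h:2*h])             # input gate
    c_tilde = np.tanh(z[2*h:3*h])       # candidate cell values
    o_t = sigmoid(z[3*h:4*h])           # output gate
    c_t = f_t * c_prev + i_t * c_tilde  # updated cell state C_t
    a_t = o_t * np.tanh(c_t)            # activation A_t passed onward
    return a_t, c_t
```

With all-zero weights every gate evaluates to 0.5 and the candidate to 0, so the cell state simply halves at each step, which makes the forgetting behaviour easy to verify by hand.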
The network proposed solely for the classification task incorporates an LSTM comprised of 2 stacked layers. Each cell within those layers contains 512 units, outputting a 512-dimensional feature representation vector of the single-sensor, 1-second input, as depicted in Figure 5. This network outputs to 6 non-linear sigmoid units for the classification of the presence of individual perturbations from one detector reading. Dropout [19], with 25% drop probability, has been employed in the LSTM network to regularise against overfitting, setting a percentage of the unit activations to zero and limiting the network's learning capacity. The LSTM network has been trained to minimise the negative log-likelihood with respect to the parameters W and input x, as noted in (4).
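Using PyTorch's built-in LSTM, a sketch of such a classifier might look as follows. The input dimension of 1 assumes a single scalar detector reading per time step, and the sigmoid is left to the loss function:

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Sketch of the classification network: a 2-layer stacked LSTM with
    512 units per layer and 25% dropout between layers, whose final hidden
    state feeds a 6-unit output layer for perturbation presence."""
    def __init__(self, in_dim=1, hidden=512, n_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=2,
                            batch_first=True, dropout=0.25)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, 100, in_dim), a 1 s window
        _, (h_n, _) = self.lstm(x)
        return self.out(h_n[-1])     # top layer's final hidden state
```

Each 100-step window produced by the sliding-window augmentation maps to one 6-unit prediction.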
Localising vibrating fuel assemblies has been achieved employing the same core LSTM architecture as described above, with the addition of a linear output layer fully connected to the 512-dimensional representation vector for the regression of the azimuthal coordinates. The training of this network has been achieved by minimising the weighted sum of each task's loss, per the definition in (6).
Fig. 5 LSTM RNN architecture proposed for the classification task, outputting a 512-dimensional representational vector of the input to a 6-unit classification layer. The LSTM units take their input, x_{t}, from the bottom, with all gates depicted in each LSTM cell.
5 Experimental results
5.1 Frequency domain
The subsequent experiments show the results of reactor transfer function unfolding for the classification and localisation of induced perturbations, given the neutron flux from simulated neutron detectors in the frequency domain, using the proposed densely-connected 3D CNN. The experiments have been implemented utilising the PyTorch numerical computation library, trained via backpropagation minimising the multi-task loss criterion of Section 4.1 with the Adam optimizer and its proposed parameters as in [20]. A batch size of 32 has been used, trained on an 8-core, 16-thread Intel CPU system with 4 Nvidia 1080ti GPUs and 94 GB of RAM, each model being trained 3 times with the mean and standard deviation taken as the result.
Two experiments were conducted on the volumetric signals, the first using differently sized splits of training, validation, and testing data to more appropriately represent the limited amount of data available from real plant readings; the results can be seen in Table 1. Furthermore, the results from utilising detector readings from all possible voxel positions within the reactor core, and from only 48 in-core detectors, are also shown, where the 48 in-core detectors are located according to the layout of the core modelled in Section 3.1. For the latter experiment, the volumetric signals were corrupted with white Gaussian noise, as described in Section 3.1.1, to test the robustness of the proposed system in adverse conditions.
The results in Table 1 show that the proposed 3D CNN models perform highly in the classification task across all testing splits, with 99.89 ± 0.010% accuracy in the best case and 99.56 ± 0.061% in the worst, respectively achieving an F1-score of 0.9311 ± 0.001 and 0.9141 ± 0.003. The F1-score is an alternative measure of the agreement between prediction and target, as a function of precision and recall:

F1 = 2 · (precision · recall) / (precision + recall), (10)

where

precision = TP / (TP + FP), recall = TP / (TP + FN), (11)

computed from the confusion matrix of the network's predicted values against the true values of the data. The F1-score lies within the range [0.0, 1.0], where 1 represents perfect precision and recall. The regression results for the perturbation source coordinates observed in Table 1 show that low error was achieved, with best cases of 0.2902 ± 0.011 and 0.3072 ± 0.014 for the mean absolute error (MAE) and mean squared error (MSE) respectively. In relation to the core volume, this is approximately 4 cm localisation error in a 4 m × 4 m × 4 m reactor core, utilising only 48 detectors. Table 2 shows the results with the addition of signal corruption of the volumetric signals, with a worst case of 99.81 ± 0.036% accuracy, 0.9225 ± 0.002 F1-score and 0.3709 ± 0.020 MAE for classification and localisation respectively, demonstrating the robustness of the proposed approach with minimal deviation from the best uncorrupted performance.
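Equations (10)–(11) amount to a few lines of code; the helper below computes the F1-score from raw confusion-matrix counts (the function and argument names are our own):

```python
def f1_score(tp, fp, fn):
    """F1-score from confusion-matrix counts, eqs. (10)-(11):
    the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)   # eq. (11)
    recall = tp / (tp + fn)      # eq. (11)
    return 2 * precision * recall / (precision + recall)  # eq. (10)
```

For example, with no false positives or false negatives the score is exactly 1, while equal counts of true positives, false positives, and false negatives give 0.5.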
Results of the proposed 3DCNN for the classification and localisation of perturbation type and source location (i, j, k). Mean and standard deviation of 3 runs.
Results of the proposed 3DCNN for the classification and localisation of perturbation type and source location (i, j, k) with the corruption of input signals at SNR = 3 and SNR = 1.
5.2 Time domain
Experimentation in the time domain for the unfolding of the reactor transfer function, for the classification of perturbation type, has been conducted on individual neutron detector measurements as described in Section 3.2.1. Table 3 displays the results on the one-second samples for the 27 scenarios of 6 perturbation settings under different SNRs of signal noise corruption. The finalised results are the means and standard deviations of 3 training runs, trained via backpropagation with the RMSprop optimizer [20] with default settings, a learning rate of 0.0001, and a batch size of 64. The results show that, given just 1-second readings from one neutron detector, our approach can accurately classify the perturbation type, with a best case of 96.41 ± 0.021% accuracy; the addition of noise has shown that, although performance degrades, the system is robust given such minimal data input.
Localisation of the vibrating fuel assembly source takes a similar approach, utilising the same training procedure except for the minimisation criterion, which is replaced with the multi-task loss in (6). Additionally, all 56 detectors have been utilised, compared to the individual detectors of the previous experiment, to obtain spatial information between the detectors to infer the perturbed fuel assembly location. Corrupting the signals with white Gaussian noise has also been applied to test the robustness of the proposed approach; the resulting localisation errors can be seen in Table 4. Localisation in the time domain has been achieved with low error, with a worst case of 1.2304 ± 0.102 MAE and 3.2340 ± 0.612 MSE under SNR = 1, and best cases of 1.0737 ± 0.006 and 2.3682 ± 0.065 respectively.
Classification of perturbation type in the time domain under differing levels of input signal noise corruption from individual detector inputs.
Localisation of the coordinates of a vibrating fuel assembly (i, j), in the timedomain utilising the proposed LSTM architecture, under input signal corruption. Mean and standard deviation of 3 runs.
6 Conclusions and future work
This work proposed an extended approach to the unfolding of reactor transfer functions for the classification and localisation of reactor core perturbations from neutron detector readings produced by simulated core models. The proposed models accurately classify perturbation types and source locations in the time and frequency domains, with extended and more complex simulated perturbation scenarios than previous work [10,11]. Our approach outperforms previous approaches on the same task, localising such perturbations to a finer voxel mesh and with fewer detectors available, i.e. 48 in-core detectors for a 32 × 32 × 34 core volume.
Our experiments further solidify the applicability and capability of deep learning approaches in the domain of nuclear reactor anomaly detection, specifically for the non-trivial task of reactor transfer function unfolding given very sparse neutron flux detector readings. We will continue to extend our approaches to localising and classifying large combinations of perturbations simultaneously. Furthermore, investigations will be made into applying our model to real plant data, providing further validation of the capability of our approach for online anomaly detection.
Acknowledgments
The research conducted was made possible through funding from the Euratom research and training programme 2014–2018 under grant agreement No 754316 for the 'CORe Monitoring Techniques And EXperimental Validation And Demonstration (CORTEX)' Horizon 2020 project, 2017–2021. We would like to thank the Chalmers University of Technology, particularly Dr C. Demazière, Dr P. Vinai and Dr A. Mylonakis, and the Paul Scherrer Institute, particularly Dr A. Dokhane and Dr V. Verma, for providing the frequency and time domain data respectively, for assisting us with their understanding, and for collaborating with us in the analysis process.
Author contribution statement
All authors have contributed equally to the conceptualisation and the technical developments pertaining to the machine learning components and the formulation of the problem. The data were provided by some of the consortium members of the EU H2020 project CORTEX, mentioned in the Acknowledgements section. Aiden Durrant led the programming aspects of the proposed deep neural network technique and the first draft of the manuscript. Georgios Leontidis and Stefanos Kollias supervised the implementations presented. All authors contributed equally to the evaluation, validation, presentation, review and approval of the final manuscript.
References
 C. Demazière et al., Overview of the CORTEX project, in Proc. Int. Conf. Physics of Reactors − Reactor Physics paving the way towards more efficient systems (PHYSOR 2018), Cancun, Mexico, April 22-26, 2018 (2018) [Google Scholar]
 D. Rolnick et al., Tackling Climate Change with Machine Learning, arXiv:1906.05433 (2019) [Google Scholar]
 F.C. Chen, M.R. Jahanshahi, NB-CNN: deep learning-based crack detection using convolutional neural network and Naïve Bayes data fusion, IEEE Trans. Ind. Electron. 65, 4392 (2017) [CrossRef] [Google Scholar]
 W. Li et al., Design of comprehensive diagnosis system in nuclear power plant, Ann. Nucl. Energy 109, 92 (2017) [CrossRef] [Google Scholar]
 M.C. dos Santos et al., Deep rectifier neural network applied to the accident identification problem in a PWR nuclear power plant, Ann. Nucl. Energy 133, 400 (2019) [CrossRef] [Google Scholar]
 R.M. Ayo-Imoru, A.C. Cilliers, Continuous machine learning for abnormality identification to aid condition-based maintenance in nuclear power plant, Ann. Nucl. Energy 118, 61 (2018) [CrossRef] [Google Scholar]
 S. Zaferanlouei et al., Prediction of critical heat flux using ANFIS, Ann. Nucl. Energy 37, 813 (2010) [CrossRef] [Google Scholar]
 S.A. Hosseini, I.E.P. Afrakoti, Neutron noise source reconstruction using the adaptive neuro-fuzzy inference system (ANFIS) in the VVER-1000 reactor core, Ann. Nucl. Energy 105, 36 (2017) [CrossRef] [Google Scholar]
 S.A. Hosseini, I.E.P. Afrakoti, Evaluation of a new neutron energy spectrum unfolding code based on an Adaptive Neuro-Fuzzy Inference System (ANFIS), J. Radiat. Res. 59, 436 (2018) [CrossRef] [Google Scholar]
 F. Calivà et al., A deep learning approach to anomaly detection in nuclear reactors, in Proc. 2018 Int. Joint Conf. Neural Networks (IJCNN 2018), Rio de Janeiro, Brazil, July 8-13, 2018 (2018) [Google Scholar]
 F. De Sousa Ribeiro et al., Towards a deep unified framework for nuclear reactor perturbation analysis, in Proc. IEEE Symposium Series on Computational Intelligence (SSCI 2018), Bengaluru, India, November 18–21 (2018) [Google Scholar]
 C. Demazière, CORE SIM: a multi-purpose neutronic tool for research and education, Ann. Nucl. Energy 38, 2698 (2011) [CrossRef] [Google Scholar]
 C. Demazière, User's manual of the CORE SIM neutronic tool, Technical report, Chalmers University of Technology, 2011 [Google Scholar]
 G. Grandi et al., SIMULATE-3K models and methodology, SSP-98013, 6 (2006) [Google Scholar]
 Y. LeCun et al., Generalization and network design strategies, Connectionism in perspective (1989), p. 143 [Google Scholar]
 G. Huang et al., Densely connected convolutional networks, in Proc. IEEE Conf. on Computer Vision & Pattern Recognition (CVPR 2017), Honolulu, Hawaii, USA, July 22-26, 2017 (2017) [Google Scholar]
 M. Lin et al., Network in Network, arXiv:1312.4400 (2013) [Google Scholar]
 S. Hochreiter, J. Schmidhuber, Long short-term memory, Neural Comput. 9, 1735 (1997) [CrossRef] [PubMed] [Google Scholar]
 N. Srivastava et al., Dropout: a simple way to prevent neural networks from overfitting, J. Mach. Learn. Res. 15, 1929 (2014) [Google Scholar]
 D.P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv:1412.6980 (2014) [Google Scholar]
Cite this article as: Aiden Durrant, Georgios Leontidis, Stefanos Kollias, 3D convolutional and recurrent neural networks for reactor perturbation unfolding and anomaly detection, EPJ Nuclear Sci. Technol. 5, 20 (2019)
All Tables
Results of the proposed 3DCNN for the classification and localisation of perturbation type and source location (i, j, k). Mean and standard deviation of 3 runs.
Results of the proposed 3DCNN for the classification and localisation of perturbation type and source location (i, j, k) with the corruption of input signals at SNR = 3 and SNR = 1.
Classification of perturbation type in the time domain under differing levels of input signal noise corruption from individual detector inputs.
Localisation of the coordinates of a vibrating fuel assembly (i, j) in the time domain utilising the proposed LSTM architecture, under input signal corruption. Mean and standard deviation of 3 runs.
All Figures
Fig. 1 Examples of the amplitude of the induced neutron flux in the frequency domain for a single azimuthal slice on the 10th axial plane. Left: Absorber of Variable Strength. Middle: Core Barrel Vibration. Right: Vibrating Fuel Assembly, cantilevered. 
Fig. 2 Modelled core layout with 8 in-core and 4 ex-core detector locations shown for one axial plane. The corresponding train, test and validation detector splits are shown, with the central 5 × 5 FA cluster shown in red. 
Fig. 3 The proposed densely-connected 3D CNN architecture, depicting an example dense block of 2 layers with a growth rate of 32. The fully-connected and output layers can be seen to the right of the GAP layer; each unit represents either a classified perturbation type or a source (i, j, k) coordinate to be regressed. 
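The dense block described in the caption — 2 layers with a growth rate of 32, each layer concatenating its new feature maps onto all previous ones — can be sketched as follows. This is an illustrative DenseNet-style block adapted to 3D convolutions, not the authors' exact implementation; the input channel count and spatial size below are assumed for the example.

```python
import torch
import torch.nn as nn

class DenseBlock3D(nn.Module):
    """Sketch of a 3D dense block: each layer receives the
    concatenation of all preceding feature maps."""

    def __init__(self, in_channels: int, n_layers: int = 2, growth: int = 32):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm3d(in_channels + i * growth),
                nn.ReLU(),
                nn.Conv3d(in_channels + i * growth, growth,
                          kernel_size=3, padding=1),
            ))

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)  # dense connectivity
        return x

block = DenseBlock3D(in_channels=16)
out = block(torch.zeros(1, 16, 8, 8, 8))
# Channels grow by 32 per layer: 16 + 2 * 32 = 80 output channels.
```

Because every layer sees all earlier feature maps, the output channel count is the input count plus `n_layers * growth`, which is why the growth rate governs model width.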
Fig. 4 Sample of 12 learnt feature maps from the output of the first dense block for the input of a vibrating fuel assembly at (8, 16), given all possible detectors. The maps visually depict how different layers highlight different features of the image: (a) shows a peak at the source of vibration, (d) the response on the core barrel, and (j) the noise dissipating throughout the core. 
Fig. 5 The LSTM RNN architecture proposed for the classification task, outputting a 512-dimensional representational vector of the input to a 6-unit classification layer. The LSTM units take input from the bottom, x_t, with all gates depicted in each LSTM cell. 
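The caption's pipeline — time-domain detector signals mapped by an LSTM to a 512-dimensional representation, then to a 6-unit classification layer — can be sketched minimally as below. The number of detectors (input features) and the sequence length are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Sketch: LSTM over time-domain detector signals, final hidden
    state (512-dim) fed to a 6-unit classification layer."""

    def __init__(self, n_detectors: int = 48, hidden: int = 512,
                 n_classes: int = 6):
        super().__init__()
        self.lstm = nn.LSTM(n_detectors, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time, detectors)
        _, (h_n, _) = self.lstm(x)     # h_n: (1, batch, hidden)
        return self.out(h_n[-1])       # logits: (batch, n_classes)

model = LSTMClassifier()
logits = model(torch.zeros(4, 100, 48))  # batch of 4 signal windows
```

Using only the final hidden state is one common way to summarise a sequence into a fixed-size vector before classification.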