Lessons learned from the U.S. nuclear power plant on-line monitoring programs


ELSEVIER

www.elsevier.com/locate/pnucene

Progress in Nuclear Energy, Vol. 46, No. 3-4, pp. 176-189, 2005
Available online at www.sciencedirect.com
© 2005 Elsevier Ltd. All rights reserved
0149-1970/$ - see front matter
Printed in Great Britain

doi:10.1016/j.pnucene.2005.03.003

LESSONS LEARNED FROM THE U.S. NUCLEAR POWER
PLANT ON-LINE MONITORING PROGRAMS

J. W. HINES a, E. DAVIS b

a Nuclear Engineering Department, The University of Tennessee, Knoxville, Tennessee 37996-2300

b Edan Engineering Corporation, 900 Washington St., Suite 830, Vancouver, Washington 98660

ABSTRACT The U.S. nuclear industry and its researchers have been investigating and applying on-line monitoring programs for over two decades. To date, only limited pilot installations have been demonstrated, and the original objectives have changed significantly. Much of the early work centered on calibration monitoring and calibration reduction for safety-critical sensors; the current focus is on both sensor and equipment monitoring. This paper presents the major lessons learned that contributed to the lengthy development process, including model development and implementation issues, and summarizes the results of a recently completed cost-benefit analysis.

KEYWORDS On-line monitoring; sensor calibration verification; empirical models; fault detection and isolation. © 2005 Elsevier Ltd. All rights reserved

    1. INTRODUCTION AND BACKGROUND

For the past two decades, Nuclear Power Plants (NPPs) have attempted to move towards condition-based maintenance philosophies using new technologies developed to ascertain the condition of plant equipment. Specifically, techniques have been developed to monitor the condition of sensors and their associated instrument chains. Historically, periodic manual calibrations have been used to assure sensors are operating correctly. This technique is not optimal in that sensor conditions are only checked periodically; therefore, faulty sensors can continue to operate for periods as long as the calibration interval. Faulty sensors can cause poor economic performance and unsafe conditions. Periodic techniques also cause the unnecessary calibration of instruments that are not faulted, which can result in damaged equipment, plant downtime, and improper calibration under non-service conditions.



Early pioneers in the use of advanced information processing techniques for instrument condition monitoring included researchers at the University of Tennessee (UT) and Argonne National Laboratory. Dr. Belle Upadhyaya was one of the original investigators in the early 1980's [Upadhyaya 1985, 1989, 1992], through a Department of Energy funded research project to investigate the application of artificial intelligence techniques to nuclear power plants. Researchers at Argonne National Laboratory continued with similar research from the late 1980's [Mort 1987], in which they developed the Multivariate State Estimation Technique (MSET), which has gained wide interest in the U.S. nuclear industry. Lisle, IL-based SmartSignal Corporation licensed the MSET technology for applications in all industries, and subsequently extended and modified the basic MSET technology in developing their commercial Equipment Condition Monitoring (SmartSignal eCM™) software [Wegerich 2001]. The Electric Power Research Institute (EPRI) has used a product from Expert Microsystems called SureSense [Bickford 2003], which also uses the MSET algorithm. Several other U.S. companies such as Pavillion Technologies, ASPEN IQ, and Performance Consulting Services [Griebenow 1995] have also developed sensor validation products. The major European participant in this area is the Halden Research Project, where Dr. Paolo Fantoni and his multi-national research team have developed a system termed Plant Evaluation and Analysis by Neural Operators (PEANO) [Fantoni 1998, 1999] and applied it to the monitoring of nuclear power plant sensors. Several other researchers have been involved with inferential sensing and on-line sensor monitoring. A survey of the methods is given by Hines [2000a].

Early EPRI research included the development of the Instrument Calibration and Monitoring Program (ICMP) for monitoring physically redundant sensors [EPRI 1993a, 1993b]. Subsequent work expanded to monitoring both redundant and non-redundant sensors.

Research and development in the 1990s resulted in Topical Report TR-104965, On-Line Monitoring of Instrument Channel Performance, developed by the EPRI/Utility On-Line Monitoring Working Group. In July 2000, the U.S. Office of Nuclear Reactor Regulation issued a safety evaluation (SE), which was released in September 2000. This report focused on the generic application of on-line monitoring techniques to be used as a tool for assessing instrument performance. It proposed to relax the frequency of instrument calibrations required by the U.S. nuclear power plant Technical Specifications (TS) from once every fuel cycle to once in a maximum of 8 years, based on the on-line monitoring results.

1.1 EPRI on-line monitoring (OLM) group

The EPRI Instrument Monitoring and Calibration (IMC) Users Group formed in 2000 with an objective to demonstrate OLM technology in operating nuclear power plants for a variety of systems and applications. A second objective is to verify that OLM is capable of identifying instrument drift or failure. The On-Line Monitoring Implementation Users Group formed in mid-2001 to demonstrate OLM in multiple applications at many nuclear power plants and has a four-year time frame.

Current United States nuclear plant participants include Limerick, Salem, Sequoyah, TMI, and VC Summer, which use a system produced by Expert Microsystems Inc. (expmicrosys.com), and Harris and Palo Verde, which use a system developed by SmartSignal Inc. (smartsignal.com). Each of these plants is currently using OLM technology to monitor the calibration of process instrumentation. In addition to monitoring instrumentation, the systems have an inherent dual purpose of monitoring the condition of equipment, which is expected to improve plant performance and reliability. The Sizewell B nuclear power plant in Great Britain is using the OLM services supplied by AMS (www.ams-corp.com).

    1.2 Lessons learned

    This paper presents a brief description of the development activities and the major lessons learned. These lessons will be divided into three main categories. First, the technology changes will be briefly discussed.


Next, implementation issues will be presented along with several examples. Lastly, a recently completed cost-benefit study will be summarized to show where economics will drive the future application of On-Line Monitoring technologies.

2. ON-LINE MONITORING TECHNIQUES

    The OLM systems use historical plant data to develop empirical models that capture the relationships between correlated plant variables. These models are then used to verify that the relationships have not changed. A change can occur due to sensor drift, equipment faults, or operational error. The systems currently in use in the US are based on the Multivariate State Estimation Technique developed at Argonne National Laboratory (ANL) [Singer 1996, Gross 1997] and further studied at the University of Tennessee (UT) [Gribok 2000].
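To make the detection idea concrete, the sketch below (not from the paper; window length and threshold are hypothetical) flags a channel when the moving average of its residual, the difference between the measured value and the empirical estimate, drifts beyond a limit.

```python
# A minimal sketch of residual-based drift detection for one channel.
# `measured` and `predicted` are arrays of the same length; the window
# length and threshold are hypothetical and would be tuned per signal.
import numpy as np

def drift_flags(measured, predicted, window=50, threshold=0.5):
    """Flag samples where the smoothed residual exceeds the drift threshold."""
    residuals = np.asarray(measured, dtype=float) - np.asarray(predicted, dtype=float)
    smoothed = np.convolve(residuals, np.ones(window) / window, mode="valid")
    return np.abs(smoothed) > threshold
```

Fielded systems typically pair the empirical estimator with a more formal detector such as the sequential probability ratio test, but the comparison of measurements against model estimates is the common core.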

Numerous data-based technologies have been used by major researchers in the field including autoassociative neural networks [Fantoni 1998, Hines 1998, Upadhyaya 1992], fuzzy logic [Hines 1997], non-linear partial least squares [Qin 1992, Rasmussen 2000a], and kernel based techniques such as MSET [Singer 1996] and the Advanced Calibration Monitor (ACM) [Hansen 1994]. Three technologies have emerged and have been used in the Electric Power Industry that use different data-based prediction methods: a kernel based method (MSET), a neural network based method (PEANO and the University of Tennessee AANN), and a transformation method (NLPLS). These methods are described and compared in Hines [2000a].
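As a rough illustration of the kernel-based family (a sketch only, not the licensed MSET or SmartSignal implementation), the estimator below blends stored prototype states by their similarity to the current observation, so a single drifting channel is pulled back toward the learned relationship among the signals.

```python
# Toy similarity-based (kernel) state estimator in the spirit of MSET-style
# methods; the Gaussian similarity and bandwidth are illustrative choices.
import numpy as np

def similarity_estimate(D, x, bandwidth=1.0):
    """Estimate the current state x as a similarity-weighted blend of the
    prototype observations in D (rows are historical states)."""
    dist2 = np.sum((D - x) ** 2, axis=1)          # distance to each prototype
    w = np.exp(-dist2 / (2.0 * bandwidth ** 2))   # Gaussian similarity weights
    w = w / (w.sum() + 1e-12)                     # normalize the weights
    return w @ D                                  # weighted average of prototypes

# Example: a query whose third channel reads high is pulled toward the
# correlated behaviour stored in the prototype matrix.
D = np.array([[1.0, 2.0, 3.0],
              [1.1, 2.1, 3.1],
              [0.9, 1.9, 2.9]])
x = np.array([1.0, 2.0, 3.6])
print(similarity_estimate(D, x))
```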

The major lessons learned in applying empirical modeling strategies are that the methods should:

• produce accurate results,
• produce repeatable and robust results,
• have an analytical method to estimate the uncertainty of the predictions, and
• be easily trained and easily retrained for new or expanded operating conditions.

    2.1 Accurate results

Early applications of autoassociative techniques, such as MSET, were publicized to perform well with virtually no engineering judgment necessary. One item of interest is the choice of inputs for a model. Early application limits were said to be around 100 inputs per model [EPRI 2000], with no need to choose and subgroup correlated variables. However, experience has shown that models should be constructed with groups of highly correlated sensors, resulting in models commonly containing fewer than 30 signals [EPRI 2002a]. It has been shown that adding irrelevant signals to a model increases the prediction variance, while not including a relevant variable biases the estimate [Rasmussen 2003b]. Additionally, automated techniques for sensor grouping have been developed for the MSET model [Hines 2004].
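A simple way to automate such groupings (an illustrative sketch, not the specific method of Hines [2004]) is to threshold the sensor-to-sensor correlation matrix computed from training data:

```python
# Sketch of correlation-based sensor grouping; `data` is an
# (observations x sensors) array and the 0.8 cut-off is hypothetical.
import numpy as np

def correlated_group(data, seed_index, threshold=0.8):
    """Indices of sensors whose |correlation| with the seed sensor exceeds the threshold."""
    corr = np.corrcoef(data, rowvar=False)        # sensor-by-sensor correlation matrix
    return np.where(np.abs(corr[seed_index]) >= threshold)[0]
```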

    2.2 Repeatable and robust results

When empirical modelling techniques are applied to collinear (highly correlated) data sets, ill-conditioning can result in highly accurate performance on the training data but highly variable, inaccurate results on unseen data. Robust models perform well on data that contain incorrect inputs, as expected in noisy environments or when a sensor input is faulted. Regularization techniques can be applied to make the predictions repeatable, robust, and of lower variability [Hines 1999, 2000b, Gribok 2000, 2001]. A summary of the methods is given in Gribok [2002], and regularization methods have been applied to many of the systems currently in use.
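Ridge (Tikhonov) regression is the simplest of these regularization ideas; the sketch below (illustrative only, with a hypothetical penalty constant) shows how the penalty term tames the ill-conditioned normal equations that collinear training data produce.

```python
# Sketch of ridge regularization for a linear sub-model built from collinear
# signals; in practice lambda_ would be chosen by cross-validation.
import numpy as np

def ridge_fit(X, y, lambda_=1.0):
    """Solve (X'X + lambda * I) w = X'y; the added diagonal bounds the solution
    even when X'X is nearly singular."""
    A = X.T @ X + lambda_ * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)
```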


    2.3 Uncertainty analysis

The most basic requirement outlined in the NRC safety evaluation [2000] is an analysis of the uncertainty in the empirical estimates. Argonne National Laboratory has performed Monte Carlo based simulations to estimate the uncertainty of MSET-based estimates [Zavaljevski 2000, 2003]. These techniques produce average results for a particular model trained with a particular data set. Researchers at The University of Tennessee have developed analytical techniques to estimate prediction intervals for all of the major techniques (MSET, AANN, PEANO, and NLPLS). The analytical results were verified using Monte Carlo based simulations and provide the desired 95% coverage [Rasmussen 2003a, 2003b, 2004, Gribok 2004]. Each of the techniques performs well, some better than others, on various data sets.
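The coverage claim itself is easy to check numerically. The toy Monte Carlo below (illustrative numbers only; a real check would put the trained model in the loop) verifies that a 95% prediction interval built from a known noise level covers roughly 95% of fresh observations.

```python
# Toy Monte Carlo check of 95% prediction-interval coverage.
import numpy as np

rng = np.random.default_rng(0)
noise_sd = 0.1                                    # assumed sensor noise level
# Both the estimate and the fresh observation carry noise, so the 95%
# half-width uses the combined standard deviation.
half_width = 1.96 * noise_sd * np.sqrt(2.0)
truth, trials, hits = 5.0, 10_000, 0
for _ in range(trials):
    estimate = truth + rng.normal(0.0, noise_sd)  # stand-in for a model prediction
    new_obs = truth + rng.normal(0.0, noise_sd)   # freshly measured value
    hits += abs(new_obs - estimate) <= half_width
print(hits / trials)                              # close to 0.95 if the coverage claim holds
```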

    2.4 Ease of training and retraining

    As will be shown in section 3, it is virtually impossible for the original training data to cover the entire range of operation. The operating conditions may change over time and the models may need to be retrained to incorporate the new data.

MSET-based methods are non-parametric modelling techniques that do not require iterative training. These techniques accommodate new conditions well in that new data vectors can simply be added to the prototype data matrix.
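A minimal sketch of this kind of "retraining" (a hypothetical novelty criterion, not any vendor's logic) simply appends observations from an unseen operating region to the stored prototype matrix:

```python
# Sketch of growing a non-parametric model's prototype (memory) matrix with
# data from a new operating region; the distance threshold is hypothetical.
import numpy as np

def add_if_novel(prototypes, new_vector, min_distance=0.5):
    """Append the observation only if it is far from every stored prototype."""
    distances = np.linalg.norm(prototypes - new_vector, axis=1)
    if distances.min() > min_distance:
        return np.vstack([prototypes, new_vector])
    return prototypes
```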

Artificial Neural Networks require fairly long training times. Other parametric techniques, such as Non-Linear Partial Least Squares, can be trained much faster. Recently, the PEANO system has incorporated an NLPLS algorithm that performs with accuracy equal to that of the original AANN algorithm and can be trained in minutes rather than days [Fantoni 2002].

    3. OLM PLANT IMPLEMENTATION

In 2000, EPRI's focus moved from OLM product development to its implementation. In 2001, the On-Line Monitoring Implementation project started with a strategic role to facilitate OLM's implementation and cost-effective use in numerous applications at power plants. Specifically, EPRI sponsored on-line monitoring implementations at multiple nuclear power plants. After three years of implementation and installation experience, several lessons have been learned. The major areas include data acquisition and quality, model development, and results interpretation.

    3.1 Data acquisition and quality

In order to build a robust model for OLM, one must first collect data covering all the operating conditions in which the system is expected to operate and for which signal validation is desired. These data are historical data that have been collected and stored, and they may not accurately represent the plant state due to several anomalies that commonly occur. These include interpolation errors, random data errors, missing data, loss of significant figures, stuck data, and others. Data should always be visually inspected and corrected or deleted before use.

3.1.1 Interpolation errors

The first problem usually encountered in using historical data for model training is that it is usually not actual data, but instead, data resulting from compression routines normally implemented in data archival programs. For example, the PI Data Historian from OSI Software creates a data archive that is a time-series database. However, all of the data is not stored at each collection time. Only data values that have changed by more than a tolerance are stored along with their time stamp.


This method requires much less storage but results in a loss of data fidelity. When data is extracted from the historian, data values between logged data points are calculated through a simple linear interpolation. The resulting data appears to be a saw-tooth time series, and the correlations between sensors may be severely changed. Figure 1 below is a plot of data output by a data historian. The plot shows a perfectly linear increase in power between April 6 and April 7, although this was not the actual operation. Data collected for model training should be actual data, and tolerances should be set as small as possible or not used.

Fig. 1. Data interpolation (archived power data, approximately 99.6-100.2%, plotted from 5-Apr through 11-Apr).
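The distortion is easy to reproduce. The sketch below (an illustrative deadband compressor, not OSI Software's algorithm; the tolerance and test signal are hypothetical) archives a sample only when it moves more than a tolerance from the last stored value, then linearly interpolates on retrieval, producing the saw-tooth behaviour described above.

```python
# Sketch of deadband compression followed by linear interpolation on retrieval.
import numpy as np

def compress(times, values, tolerance=0.5):
    """Archive a sample only when it moves more than the tolerance from the last stored value."""
    kept_t, kept_v = [times[0]], [values[0]]
    for t, v in zip(times[1:], values[1:]):
        if abs(v - kept_v[-1]) > tolerance:
            kept_t.append(t)
            kept_v.append(v)
    return np.array(kept_t), np.array(kept_v)

t = np.arange(0.0, 100.0)
signal = 100.0 + np.sin(t / 5.0)                 # stand-in for the true plant behaviour
kt, kv = compress(t, signal)
reconstructed = np.interp(t, kt, kv)             # what the historian returns on retrieval
print(np.corrcoef(signal, reconstructed)[0, 1])  # correlation degraded by the saw-tooth
```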

    3.1.2 Data quality issues

Several data quality issues are common. These cases include the following (a minimal screening sketch follows the list):

• Lost or missing data.
• Single or multiple outliers in one sensor or several.
• Stuck data in which the data value does not update.
• Random data values.
• Unreasonable data values.
• Loss of significant digits.
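The sketch below (hypothetical run length and robust z-score limit, applied one signal at a time) shows how several of these screens can be automated before visual review:

```python
# Sketch of automated screens for the data-quality issues listed above.
import numpy as np

def quality_flags(x, stuck_len=10, z_limit=5.0):
    """Return masks marking missing samples, stuck data, and gross outliers."""
    x = np.asarray(x, dtype=float)
    missing = np.isnan(x)                                     # lost or missing data
    unchanged = np.concatenate([[False], np.diff(x) == 0.0])  # value did not update
    stuck = np.convolve(unchanged, np.ones(stuck_len), mode="same") >= stuck_len
    med = np.nanmedian(x)
    mad = np.nanmedian(np.abs(x - med)) + 1e-12               # robust spread estimate
    outliers = np.abs(x - med) / (1.4826 * mad) > z_limit     # single/multiple outliers
    return missing, stuck, outliers
```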

    The figures below show several of these issues:



    Fig. 3. Loss of Significant Digits

    Fig. 4. Unreasonable Data

Most of these data problems can be visually identified or can be detected by a data clean-up utility. These utilities remove bad data or replace it with the most probable data value using some algorithm. It is most common to delete all bad data observations from the training data set. Most OLM...
