
News & Views, Volume 51 | Materials Lab Featured Damage Mechanism

PITTING CORROSION IN CONVENTIONAL FOSSIL BOILERS AND COMBINED CYCLE/HRSGS

By:  Wendy Weiss

Pitting is a localized corrosion phenomenon in which a relatively small loss of metal can result in the catastrophic failure of a tube. Pitting can also be the precursor to other damage mechanisms, including corrosion fatigue and stress corrosion cracking. Pits are often small and may be filled with corrosion products or oxide, making it difficult to gauge the severity of pitting attack by visual examination. 

Figure 1. Severe pitting in a tube from a package boiler

Mechanism 

Pitting is a localized corrosion attack involving dissolution of the tube metal surface in a small and well-defined area. Pitting corrosion can occur in any component in contact with water under stagnant oxygenated conditions. Pitting in economizer tubing is typically the result of poor shutdown practices that allow contact with highly-oxygenated, stagnant water. Pitting also may occur in waterwall tubing as a result of acidic attack stemming from an unsatisfactory chemical cleaning or acidic contamination. 

Pits that are associated with low pH conditions tend to be numerous and spaced fairly close together. The pits tend to be deep relative to the width of the defect. A breakdown of the passive metal surface initiates the pitting process under stagnant oxygenated conditions. A large potential difference develops between the small area of the initiated active pit (anode) and the passive area around the pit (cathode). The pit will grow in the presence of a concentrated salt or acidic species. The metal ion salt (M+A-) combines with water and forms a metal hydroxide and a corresponding free acid (e.g., hydrochloric acid when chloride is present). Oxygen reduction at the cathode suppresses the corrosion around the edges of the pit, but inside the pit the rate of attack increases as the local environment within the pit becomes more acidic. If the surfaces along the walls of the pit are not repassivated, the rate of pit growth will continue to increase because the reaction is no longer governed by the bulk fluid environment. Pitting is frequently encountered under stagnant conditions that allow sites to initiate and corrosive species to concentrate, allowing the attack to continue. 

The most common cause of pitting in steam-touched tubing is oxygen-rich stagnant condensate formed during shutdown. Forced cooling and/or improper draining and venting of assemblies may result in the presence of excess moisture. The interface between the liquid and air is the area of highest susceptibility. Pitting can also be accelerated if conditions allow deposition of salts such as sodium sulfate that combine with moisture during shutdown. Volatile carryover is a function of drum pressure, while mechanical carryover can increase when operating with a high drum level or with holes in the drum separators. Pitting due to the effects of sodium sulfate may occur in the reheater sections of conventional and HRSG units because the sulfate is less soluble there and deposits on the internal surfaces. The moisture that forms during shutdown is then more acidic. 

Figure 2. Pitting on the ID surface of a waterwall tube

Typical Locations

In conventional units, pitting occurs in areas where condensate can form and remain as liquid during shutdown if the assemblies are not properly vented, drained, or flushed out with air or inert gas. These areas include horizontal economizer tubes, the bottoms of pendant bends, and low points in sagging horizontal runs of steam-touched tubing. 

In HRSGs, damage occurs on surfaces of any component that is intentionally maintained wet during idle periods or is subject to either water retention due to incomplete draining or condensation during idle periods. 

Attack from improper chemical cleaning activities is typically intensified at weld heat affected zones or where deposits may have survived the cleaning. 

Features

Pits are often small and may be filled with corrosion products or oxide, making it difficult to gauge the severity of pitting attack by visual examination. 

Damage to affected surfaces tends to be deep relative to pit width, such that the aspect ratio is a distinguishing feature. 

Root Causes

Figure 3. Pitting on the ID surface of an economizer tube

The primary factor that promotes pitting in boiler tubing is poor shutdown practice that allows the formation and persistence of stagnant, oxygenated water with no protective environment. Steps to confirm the presence of stagnant water include: 

  1. analysis of the corrosion products in and around the pit; 
  2. tube sampling in affected areas to determine the presence of localized corrosion; and 
  3. evaluation of shutdown procedures to verify that conditions promoting stagnant water exist. 

Carryover of sodium sulfate and deposition in the reheater may result in the formation of acidic solutions during unprotected shutdown and can result in pitting attack. Similarly, flyash may be pulled into reheater tubing under vacuum and form an acidic environment.



News & Views, Volume 51 | Acoustic Emission Testing

STREAMLINING REQUALIFICATION OF HEAVY LIFT EQUIPMENT

By:  Mike Battaglia and Jason Van Velsor


Figure 1. Heavy lift rig attached to reactor head in preparation for removal.

BACKGROUND
Proper control of heavy loads is critical in any industrial application, as faulty equipment or practices can have severe consequences.  The lifting technique, equipment, and operator qualifications must all meet or exceed applicable standards to ensure industrial safety.  The significance of heavy lifts at commercial nuclear facilities is, perhaps, even greater.  In addition to the consequences of an adverse event that are common to any industry (bodily injury or human fatality, equipment damage, etc.), the nuclear industry poses additional challenges.  Such an adverse event in the nuclear industry can also affect (depending on the specific lift) fuel geometry and criticality, system shutdown capability, safety systems, etc.  One example of a critical lift in nuclear power facilities is the reactor vessel head / reactor internals lift.  

The requirement to inspect the heavy lifting equipment for structural integrity is prescribed in NUREG-0612, Control of Heavy Loads At Nuclear Power Plants, as enforced by NRC Generic Letter 81-07. The aforementioned NUREG document describes specific requirements for special lifting devices.  The requirements prescribed include: 

  • Special lifting devices are subject to a load test at 1.5X rated load followed by visual inspection, or
  • Dimensional testing and non-destructive examination (NDE) of the load-bearing welds

In the case of the former requirement, it can be difficult or even dangerous to test these lift rigs, which are designed to carry over 150 tons, at a factor of 1.5x.  In the case of the latter requirement, employing the more traditional NDE techniques of MT, PT, and UT to inspect the lift rigs can be costly (both in terms of labor and radiological dose) and time consuming in terms of impact to outage critical path, depending on when the inspection is performed.  In PWRs and BWRs, inspections are performed in the reactor containment or radiation-controlled area, and are typically only performed during an outage.   

Ultimately, the NRC requires licensees to determine how they will comply with the NUREG requirements.  One method that has been adopted (primarily by PWR plants) is Acoustic Emission (AE) testing.  AE testing is a non-destructive testing process that uses high-frequency sensors to detect structure-borne sound emissions from the material or structure when under load.  The process detects these acoustic emission events and, based on sensor locations and the known sound velocity and attenuation, can identify the approximate location of the sources or areas of concern.  If such areas are identified, based on analysis of the data captured under load, those areas must be further investigated to characterize the indication.  Such additional techniques may include surface examination (MT or PT), or volumetric UT to precisely locate, characterize, and size any indications.  

Employing an advanced technique such as AE can significantly reduce the time required to perform this evolution, also reducing both the cost and dose associated with meeting the NUREG requirements.  

The original deployment of this method was championed by a utility in the mid-1980s and has since been adopted by many PWR plants as the preferred inspection method.  

APPLICATION OF AE TESTING
In 2021, SI began offering AE testing services for reactor head lift rigs, including the qualified personnel, equipment, and tooling necessary to perform this work.  Our first implementation was at a nuclear plant in the Southeast US in the fall of 2021, and additional implementations are contracted in the spring and fall of 2022, and beyond.  

There are several advantages to AE testing that make it uniquely suited for the vessel head (or internals) lift application.  First, AE is a very sensitive technique, capable of picking up emissions from anomalies that cannot be detected by traditional techniques.  This allows areas of potential future concern to be identified before they become an imminent safety danger.  Second, AE sensors can detect relevant emissions at a reasonable distance (10 ft or more) between the emission source and the sensor.  As such, AE testing can monitor the entire lifting structure with a reasonable number of sensors (typically less than 20) placed on the structure.  Thus, sensors are strategically placed on the structure where failure is most likely – i.e., the mechanical or welded connections (joints) between structural members.  

This strategic sensor placement has another inherent advantage unique to the AE process.  If an indication is noted, the system has the capability to isolate the approximate source location (generally within a few inches) of the emission.  This is accomplished using a calculation that considers the arrival time and intensity of the acoustic emission at multiple sensor locations.  This is very beneficial when an indication requiring subsequent traditional NDE is noted as specific areas can be targeted, minimizing the scope of subsequent examinations.  
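
To illustrate the arrival-time calculation in concrete terms, the sketch below estimates a source position along a single structural member from the difference in arrival times at two sensors. This is a minimal one-dimensional illustration, not SI's qualified localization algorithm; the sensor spacing, wave speed, and arrival times are hypothetical values.

```python
# 1-D illustration of locating an AE source between two sensors from the
# difference in arrival times. All numbers below are hypothetical.

def locate_source_1d(spacing_ft, wave_speed_ftps, t_arrival_a_s, t_arrival_b_s):
    """Estimate the source position (ft from sensor A) on a straight member.

    Assumes sensor A at 0, sensor B at spacing_ft, and a single constant
    wave speed along the member.
    """
    dt = t_arrival_a_s - t_arrival_b_s             # negative if source is nearer sensor A
    x = (spacing_ft + wave_speed_ftps * dt) / 2.0  # from t_A = x/v and t_B = (L - x)/v
    return min(max(x, 0.0), spacing_ft)            # clamp to the member length

# Example: sensors 10 ft apart, ~10,800 ft/s (rough shear-wave speed in steel),
# and a 0.2 ms arrival-time difference puts the source ~6.1 ft from sensor A.
print(locate_source_1d(10.0, 10800.0, 0.0002, 0.0))
```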

The ability of AE testing to rapidly screen the entire lift structure for active damage growth saves time and money over the traditional load testing and comprehensive NDE approaches.   

Figure 2. Lift rig turnbuckle outfitted with AE sensor.

Finally, and perhaps most importantly, the test duration is minimal and is, effectively, part of the standard process for reactor vessel head removal.  Sensor placement is performed during the normal window of plant cooldown and vessel head de-tensioning, so outage critical path is not compromised.  The actual test itself is performed as part of the head (or internals) lift; that is, when the head breaks from the vessel flange (and maximum load is achieved), the load is held in place for 10 minutes while acoustic emission activity is monitored and recorded.  Each sensor (channel) is analyzed during the hold period and a determination is made immediately at the end of the 10-minute period as to whether the lifting rig structure is suitable for use.  Unless evidence of an imminent failure is observed, the lift immediately proceeds to the head (or internals) stand.  The gathered data are also analyzed on a graded basis.  Depending on the energy intensity of the events detected at each sensor, subsequent recommendations may range from 'good as-is' to 'recommend follow-up NDE post-outage'. 

The basic process of implementation is:

  • Calibrate and test equipment offsite (factory acceptance testing)
  • Mount sensors and parametric instrumentation (strain gauges, impactors) during plant cooldown and de-tensioning
  • System check (Pencil Lead Breaks (PLBs), and impactor test)
  • Lift head to the point of maximum load
  • Hold for 10 minutes
  • Continue lift to stand (unless evidence of imminent failure is observed)
  • Final analysis / recommendations (off line, for post-outage consideration)
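
As a rough illustration of the graded disposition described above, the sketch below bins the most energetic event recorded on each channel during the hold period. The threshold values, category wording, and event data are illustrative assumptions, not SI's qualified acceptance criteria.

```python
# Hypothetical screening of hold-period AE events by energy intensity.

from collections import defaultdict

def grade_channels(events, minor_limit=100.0, review_limit=1000.0):
    """events: iterable of (channel_id, energy) pairs captured during the hold."""
    worst = defaultdict(float)
    for channel, energy in events:
        worst[channel] = max(worst[channel], energy)

    report = {}
    for channel, energy in sorted(worst.items()):
        if energy < minor_limit:
            report[channel] = "good as-is"
        elif energy < review_limit:
            report[channel] = "recommend follow-up NDE post-outage"
        else:
            report[channel] = "investigate before further use"
    return report

# Example with made-up (channel, relative energy) event data
print(grade_channels([(1, 12.0), (2, 430.0), (7, 2500.0)]))
```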

SI VALUE ADD
During our fall 2021 implementation, SI introduced several specific process improvements over what has historically been performed.  These advances have enhanced the process from both a quality and schedule perspective.  A few of these enhancements are:

COMMERCIAL GRADE DEDICATION OF THE SYSTEM
SI developed and deployed a commercial grade dedication process for the system and sensors.  Often, licensees procure this work as safety-related, meaning the requirements of 10CFR50 Appendix B apply.  The sensors and processing unit are commercially manufactured by a select few manufacturers that typically do not have QA programs that satisfy the requirements of 10CFR50, Appendix B. For this reason, SI developed a set of critical characteristics (sensor response, channel response to a simulated transient, etc.) and corresponding tests to validate that the system components are responding as-expected and can be adequately deployed in a safety-related application. 

Figure 3. Close-up of AE sensor.

EMPLOYING STRAIN GAUGES FOR MAXIMUM LOAD
The arrival time of an acoustic emission at one of the installed sensors is measured in milliseconds. For this reason, it is critical to initiate the 10-minute hold period precisely when peak load is reached. The historical method for synchronizing peak load with the start of the hold period relied on a stopwatch and a video feed of the readout from the containment polar crane load cell.  When the load cell appeared to max out, the time was noted and marked as the commencement of the test.  This approach can be non-conservative from a post-test analysis perspective, as the data before the noted start time is typically not considered in the analysis. SI instead records strain gauge output directly in the data acquisition instrument to mark peak load. As the strain gauge correlation provides a much more precise point of maximum load that is directly synchronized with the data acquisition instrument, it is more likely that early acoustic emissions, which are often the most intense and most relevant, are correctly considered in the analysis.
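
A minimal sketch of this synchronization idea, assuming the strain-gauge trace is sampled on the same clock as the AE data, is shown below; the peak-load instant is taken as the maximum strain and the 10-minute hold window is defined from that point. The function name, signal layout, and sample values are illustrative.

```python
# Pick the peak-load instant from a strain-gauge trace and define the
# 10-minute hold window from it. Sample data are synthetic.

import numpy as np

def hold_window_from_strain(time_s, strain_ue, hold_s=600.0):
    """Return (t_peak, t_end) bounding the hold period, in seconds."""
    i_peak = int(np.argmax(strain_ue))   # first sample at maximum strain
    t_peak = float(time_s[i_peak])
    return t_peak, t_peak + hold_s

# Example: strain ramps for 300 s as the head is lifted, then holds
t = np.linspace(0.0, 900.0, 901)
strain = 2.0 * np.clip(t, 0.0, 300.0)      # microstrain, synthetic ramp-and-hold
print(hold_window_from_strain(t, strain))  # (300.0, 900.0)
```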

REMOTELY ACTUATED IMPACTORS
One of the methods used in AE testing to ensure that the sensors are properly coupled and connected is a spring-loaded center punch test.  This test employs a center punch to strike the component surface, resulting in an intense sound wave that is picked up by all the sensors.  However, this test has historically been performed manually and required someone to physically approach and touch the lifting equipment.  In certain applications, this can be a safety or radiological dose issue and, additionally, can add time to an already time-critical plant operation.  For this reason, SI has introduced the use of remotely actuated impactors to perform this function. The result is equivalent but entirely eliminates the need to have personnel on the lift equipment for the test as this task is performed remotely and safely from a parametric control center.

Figure 4. Strain gauge output showing precise timing of peak load on lift rig.

CONCLUSION
Employing cutting-edge AE testing for your vessel head / internals heavy lift can save outage critical path time, reduce radiological dose, and identify structural concerns early in the process.  All of this leads to inherently safer, more efficient verification of heavy lift equipment.   

SI has the tools, expertise, and technology to apply cutting-edge AE testing to your heavy lifts.  SI is committed to continually improving the process at every implementation.  Improvements in software processing time and setup/preparation time are currently in process.  Finally, other potential applications for the method are possible, and we stand ready to apply it to the benefit of our clients.



News & Views, Volume 51 | Turbine Unit Trip and Event

Recovery Best Practices

By:  Dan Tragresser


With the reduction in staffing at power plants over the past two decades, many traditionally routine engineering and maintenance tasks have fallen by the wayside.  With limited resources, operations and engineering personnel must focus their time and efforts based on priority.  Quite often, keeping a unit online or quickly returning a unit to service will take priority over continuous improvement actions such as investigations and root cause analysis.

When a unit trips or experiences an event, the site will incur costs associated with the loss in production and regulatory penalties. Based on the severity, the outage scope may include hardware replacement and, if applicable, the purchase of make-up power. These costs can quickly drive the decision to make the return to service the only priority. Unfortunately, the review of event operational data and event precursors, and the collection of evidence during unit disassembly, very often falls below the priority of returning to service.  Collecting or re-creating evidence after the fact is nearly impossible.  This lack of priority often results in a lack of understanding of the root cause of the trip or event.  

Within large, complex plants and turbomachinery, trips or minor events are common but are rarely isolated, one-off events.  Many trips and events are repetitive in nature and, worse, are early indications of a more serious event to come.  While the cost of delays in returning to service may be high, the cost of not solving the root cause may be orders of magnitude higher, particularly if a failure event happens a second time.

Focusing on unit trips, best practices include:

  • Hold regular, cross-functional trip reviews.
  • If available, consider holding reviews across similar sites within a parent company.
    • Utilize knowledge and solutions that may already have been developed.
  • Trend trip events and frequency over a 1-to-3-year period.
    • Measure the success of prior projects based on the reduction of occurrences or elimination over a multi-year period.
    • Trips may be seasonal in nature, and re-occurrence may span timeframes greater than one year.
  • Review each trip as a near miss and assess potential consequences that may not have occurred this time.
  • Consider including trip investigation in site or corporate level procedures and celebrate successes.

Turbine Blade Failure

Focusing on unit events, the cost of an event requiring an outage and hardware replacement, not including make-up power purchase, can very quickly escalate to millions of dollars.  Compare that cost to the cost of a dedicated, independent resource for the duration of time required to perform a comprehensive investigation.  Also, consider the cost of the investigation versus the cost of reoccurrence or a similar event with more serious consequences.  The cost of the resource and investigation will almost always be in the noise of the overall cost.  Best practices include:

  • In nearly all cases, site and outage resources will be dedicated to the speedy rehabilitation of the unit.
    • Critical evidence is often lost or destroyed, unintentionally, based on the need to return to service quickly.
    • A dedicated, independent resource provides the best option to ensure that useful evidence is collected.
  • Assign a dedicated, independent resource to collect and review data and findings.
    • If a site resource is not available, borrow from a sister site or corporate team, ideally someone with an outside perspective and not necessarily an expert in the field.
    • Consider an external independent resource such as an industry consultant.
    • It will likely require a team to complete the overall root cause analysis; however, the likelihood of success will be much greater with facts and details being collected by a dedicated resource.
  • Initial steps as a dedicated, independent resource:
    • Ensure backups of controller and DCS data and alarm logs are completed before they time out.
    • Interview individuals who were on site at the time of the event and/or in the days prior.
    • There is no such thing as too many pictures. It is common to find a critical link or detail in the background of a picture taken for another reason.
    • Clearly articulate hold points at which the independent resource will require inspections or data collection through the disassembly process.
    • Collect and preserve samples and evidence.
  • Where available, utilize other fleet assets to enable a detailed causal analysis with corrective and preventative actions.
    • Demonstrating a commitment to fleet risk reduction can minimize impacts with regulators and insurers.
  • Once an event occurs, those limited resources will be fully occupied. Creating a plan at this point is too late.
    • Discuss including the cost of an investigation into an event insurance claim with site insurers and what their expectations would be to cover the cost.
    • Maintain a list of resources, internal and external, to call upon as dedicated, independent resources.

Identifying the root cause of an event might be cumbersome, but far less cumbersome than dealing with the same type of event on a recurring basis.

Structural Integrity has team members and laboratory facilities available to support event investigations and to act as independent consultants on an emergent basis.



Attemperator Monitoring with Wireless Sensors

Risk and Cost Reduction in Real Time

By:  Jason Van Velsor, Matt Freeman, Ben Ruchte

Installed sensors and continuous online monitoring are revolutionizing how power plants manage assets and risk by facilitating the transformation to condition-based maintenance routines. With access to near real-time data, condition assessments, and operating trends, operators have the opportunity to safely and intelligently reduce operations and maintenance costs and outage durations, maximize component lifecycles and uptime, and improve overall operating efficiency.

But not all data is created equal, and determining what to monitor, where to monitor, which sensors to use, and how frequently to collect data are all critical decisions that impact data value. Furthermore, sensor procurement, installation services, data historian/storage, and data analysis are often provided by separate entities, which can lead to implementation challenges and disruptions to efficient data flow.

To provide our clients with a simplified implementation option that expedites the transition of data into intelligence, SI has developed SIIQ, a turnkey monitoring solution consisting of:

  • Assessing the most appropriate (e.g. highest risk) locations and method for monitoring
  • Multipurpose wireless sensor network
  • An independent data transmission infrastructure
  • PlantTrack™ visual database integration
  • Customizable automated alerts
  • Automated engineering insight

While there are many applications in which effective monitoring can be used to more efficiently manage the operation and maintenance of passive assets, such as high energy piping, attemperator management is one specific application that clearly demonstrates the value that can come from an effective monitoring program.

Industry Issue
Attemperators (or desuperheaters), which reduce steam temperature using a water spray, are one of the most problematic components in combined cycle plants. There are several attemperator designs and configurations, but all are potentially vulnerable to damage. If the causes of damage are not addressed early, cracking and steam leaks can occur, leading to costly repairs and replacements. As is typically the case, currently installed data transmitters (pressure taps and thermowells) are located far downstream/upstream and cannot detect local transients that would suggest events like spraywater impingement, pooling, etc. The main challenge is that these events can lead to damage that often goes undetected until it is too late because the damaging temperature transients are not detected by standard plant control instrumentation. Without this local temperature data, it can be hard to predict when re-inspections/other mitigation steps should be pursued.

Monitoring Equipment/Capabilities
To better characterize local temperature events and provide early indication of non-optimal attemperator operating conditions, SI offers a combination of software and hardware components that can be implemented with a range of services spanning monitoring, detection, and diagnosis. At the root of these services is the need to collect data from locally installed thermocouples. While some plants choose to run the signal through the data historian and then transmit it to SI for processing, an alternative is to use our wireless sensor network to collect and transmit data. SI’s wireless sensor network consists of two primary components: (1) a sensor node that collects the sensor data locally and transmits it wirelessly to (2) a gateway that transfers the data to the cloud.

Figure 1 shows an image of SI’s data collection node, highlighting several of its features. Each node has multiple sensor channels and is capable of collecting data from a variety of sensor types. For temperature monitoring, up to nine standard thermocouples can be connected to a single node. Additionally, each node is battery powered and is available with an optional solar charging kit for outdoor installations. Furthermore, the data acquisition nodes are weatherproof and designed to be installed in exposed locations.

As shown in Figure 2, the data acquisition node is installed locally and all thermocouples are hardwired to the node. The node then transmits the data wirelessly to the installed gateway using a proprietary 900MHz wireless protocol. The data collection and transmission frequency is adjustable based on the requirements of the application.

The data from all installed nodes are transmitted to a locally installed wifi/cellular-enabled gateway, which stores the data on a local database until the data is successfully transmitted to a cloud database. Serving as the edge connection to the web, the gateway can be configured to use a cellular network, eliminating the need to connect to any plant networks, or it can be configured with a plant-wide wifi network, if available and accessible. The location of the gateway enclosure is flexible as long as it is within ~1000 ft of all installed data collection nodes.

Figure 1 – Wireless data collection node.

SIIQ/PlantTrack App
Once transmitted off-site, data can be accessed through SI’s PlantTrack software (part of the SIIQ platform). PlantTrack provides a suite of real-time event and damage tracking applications for common plant components: piping, headers, tubing, attemperators, etc. These applications interface with common DCS/historian systems, allowing for easy implementation, including review and analysis of historical data where it exists.

For attemperator damage, tracking of temperature differentials with strategically placed TCs provides a means to quantify the number and accumulation of thermal transient events. The signals from the TCs are analyzed to log temperature differential events exceeding some threshold, providing valuable data that can be used to target inspections and plan outage scopes more efficiently. Our software can be configured to provide email alerts when events of a certain magnitude occur or based on trends in temperature events. Optionally, if PlantTrack Online is connected to the site data historian, SI can fully implement the PlantTrack Attemperator Damage Tracking module, which uses additional sensor data to aid in diagnosing and trending attemperator damage. Actual diagnoses and recommended remediation involve one of SI’s experts reviewing the data. This is made much easier with all the necessary data being compiled automatically within the PlantTrack system. Typical service includes reviewing the data on a periodic basis (e.g., quarterly) and providing a summary of damage events, likely causes, and recommended actions.
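
A simplified sketch of this event-logging logic is shown below: the circumferential spread between the TC channels is computed at each sample, and excursions above a threshold are logged as events. The channel layout, the 50°F threshold, and the sample data are assumptions for illustration, not the configured PlantTrack thresholds.

```python
# Log temperature-differential events exceeding a threshold from several
# thermocouple channels. Threshold and data layout are assumed values.

import numpy as np

def log_delta_t_events(times, tc_readings_f, threshold_f=50.0):
    """tc_readings_f: dict of channel name -> equal-length temperature arrays (F)."""
    readings = np.vstack([np.asarray(v, dtype=float) for v in tc_readings_f.values()])
    delta_t = readings.max(axis=0) - readings.min(axis=0)   # spread around the pipe
    events, in_event = [], False
    for t, dt in zip(times, delta_t):
        if dt >= threshold_f and not in_event:
            in_event, start, peak = True, t, dt
        elif in_event:
            peak = max(peak, dt)
            if dt < threshold_f:
                events.append({"start": start, "end": t, "peak_delta_t_f": peak})
                in_event = False
    if in_event:
        events.append({"start": start, "end": times[-1], "peak_delta_t_f": peak})
    return events

# Example: one-minute samples from three channels (deg F)
times = list(range(6))
tcs = {"top": [700, 700, 700, 700, 700, 700],
       "bottom": [700, 640, 600, 650, 700, 700],
       "east": [700, 690, 680, 690, 700, 700]}
print(log_delta_t_events(times, tcs))   # one event peaking at a 100F differential
```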

Figure 2 – Wireless sensor network configuration.

Figure 3 – Thermocouple data display in PlantTrack.

Figure 4 – Example automated email alert.

For context, the following two case studies illustrate the value realized from continuous monitoring.

CASE 1:  Bypass spray water stations (Maryland)
Finding: Identified variances in warm-up line functionality and changes to the circumferential temperature differentials and upshock/downshock rates of the piping.

A select combined cycle plant (2×1) recently experienced a through-wall leak at a girth weld on one of the HRSG hot reheat to condenser bypass lines. A ring section containing the failed girth weld was removed and submitted to SI’s Materials Lab in Austin, Texas for review. The examination indicated that the crack was consistent with typical thermal fatigue damage, which is the expected damage mechanism for the area considering the proximity of a spray water station. SI recommended that the plant install local thermocouples (TCs) to assess the magnitude of transients experienced during load change events and normal operation – the recommendation was made to instrument all 4 areas (2 hot reheat bypass, 2 high pressure bypass). SI also implemented our proprietary wireless sensor network, in which a node collects the TC data and transmits it wirelessly to a gateway that transfers the data to the cloud (Figure 5). Understanding the transients is the necessary first step, followed by evaluating/changing the control logic and following up with pertinent NDE inspections to understand the potential geometric factors that could exacerbate any issue.  If follow-on inspections find damage, the plant may also consider FEA/fracture mechanics to assess the timing of run/repair/replace options. It is also important to mention that the failed hot reheat bypass girth weld prompted the installation of a new spray water probe assembly, to be completed at a later outage.

Figure 5 – SI’s wireless node at the select combined cycle facility that has TC sensors connected.

SI performed a high-level review of the TC data pre- and post-installation of this new spray water probe assembly during a particular outage and also examined all of the bypass location temperature data:

  • Pre-outage (data in Figure 6 shows 10/28/2020 @ ~5:20AM EST)
    • Warm-up line doesn’t appear to be operational – data is similar to post-outage data for the other hot reheat bypass line (has not failed)
    • Several ‘transient’ periods show steady rates of temperature change
      • Sides of the pipe – ambient to ~750-850F and back down to ambient over 20-30 min period
      • Top/bottom – ambient to ~275-400F and back down to ambient over 20-30 min period
      • >400-500F differentials around the circumference

Figure 6 – Pre-outage data for the hot reheat bypass system that experienced a failure.

  • Post-outage (data in Figure 7 shows 12/17/2020 @ ~10:50AM EST)
    • Warm-up line appears to be operational – now differs from the other HRSG hot reheat bypass (warm-up line appears to be malfunctioning/not in operation)
    • Several ‘transient’ periods show much more prominent upshock and downshock (~275F/min in the plot below)
    • Sides of the pipe – steady from ~700-750F
    • Top/bottom – steady from ~700-750F, but then experience differentials after prominent upshocks and downshocks before settling out
      • >250-300F differentials around the circumference
  • Consensus on the pre- and post-outage data
    • Temperature differentials for the hot reheat bypass that failed appear to have improved from pre- to post-outage with a new probe assembly, but now with a functional warm-up line there are periods of more prominent temperature transients
      • Differentials around the circumference still exist
    • Spray nozzles can still be optimized

Figure 7 – Post-outage data for the hot reheat bypass system that experienced a failure.

CASE 2:  Reheat interstage spray water stations (Texas)
Finding: Identified that unevaporated spray water is present during cold starts and load changes. Resulting inspections identified prominent cracking of the piping in the vicinity of the spray water probe assembly.

A select combined cycle plant (2×1) has a reheat interstage line that was identified by plant personnel in 2017 as having a prominent sag, with the low point located near the desuperheater. A liner is indicated on the drawing, which should protect the pipe ID surface from spraywater. Nevertheless, SI performed a high-level operating data review and some localized NDE of this region (January 2018).

This initial data review considered existing transmitters (pressure, temperature, valve positions, combustion turbine loads, etc.) and found that there is some indication that the reheater desuperheater spray control valve is not fully closed, or may be leaking under some conditions. A leaking spray water valve could contribute to pipe bowing as that would make the bottom of the pipe colder than the top. Normally, if the desuperheater piping is able to flex, then when it is cold on the bottom and hot on the top it will hog (bow up). If, however, the piping flex is constrained so it cannot hog, then the pipe remains horizontal and a significant tensile stress is developed in the bottom of the pipe. This causes the pipe to effectively “stretch” on the bottom so the bottom is longer, and over time this can lead to a bow down. During the warm start there are a few minutes where the desuperheater pipe is at or below saturation temperature, which could result in condensation in that line. There could also be spray water that has collected in the line prior to startup that takes some time to evaporate. In either case the result would be a top to bottom temperature difference in the pipe.
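
For a rough sense of scale, the classical thermal-bow relation links a sustained top-to-bottom temperature difference to pipe curvature and deflection (the symbols are generic, not values for this line): with coefficient of thermal expansion $\alpha$, top-to-bottom difference $\Delta T$, pipe outside diameter $D$, and span $L$,

$$\kappa = \frac{\alpha\,\Delta T}{D}, \qquad \delta_{\text{mid}} \approx \frac{\kappa L^{2}}{8} = \frac{\alpha\,\Delta T\,L^{2}}{8D}$$

where $\kappa$ is the induced curvature and $\delta_{\text{mid}}$ is the approximate midspan bow if the ends are free to rotate; restraining that bow instead develops the bottom-fiber tensile stress described above.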

From the inspection side there were no major issues noted, but a recommendation was made to install surface-mounted thermocouples (TCs) at pertinent locations to assess the magnitude of thermal transients experienced during load change events and normal operation. Plant personnel installed 5 TCs (2 upstream of the liner/2 downstream of the liner at the top and bottom of the piping; 1 at the extrados of the downstream bend). Plant personnel routed the TC sensors to their data historian (PI) for continuous monitoring.

SI was then requested to perform a review of this second dataset to determine if there are problematic temperature differentials within this line (October 2020). The data indicated that during the cold start and at low load operation (Figure 8 and Figure 9), the spray flow is not fully mixed and saturated steam is impinging on the top of the pipe downstream of the spray. This prompted another inspection (January 2021) now that saturated steam was identified and also prompted a review of the liner/probe assembly port.

This particular inspection identified circumferential indications consistent with ID-initiated thermal fatigue within the liner boundary. The damage started at the downstream side of the nozzle port and continued axially for ~5’ before dissipating, and was located from the 10:00 to 2:00 positions (top of the circumference). The through-wall depths were prominent – one through-wall failure and several other locations exceeding 40% (with some rough measurements of ~60-80% through-wall noted as well). It appeared that condensate may be inadvertently leaking around the nozzle assembly and entering the reheat interstage line through the nozzle port/flanged connection – once it encounters steam flow in the line, this moisture may be pushed into the void between the liner and the ID of the pipe.

In areas that were originally slated for inspection (exit of the liner, downstream extrados of the bend), no findings were noted. The unevaporated spraywater that was identified by the secondary data review is obviously not ideal, but damage development is driven by the magnitude of the temperature transient and the cycle count and does not appear to have manifested in service damage at this stage. Continuous monitoring is advisable.

Figure 8 – TC and existing transmitter data for a cold start that revealed unevaporated spray water in the reheat interstage line.

Figure 9 – TC and existing transmitter data for a load change that revealed unevaporated spray water in the reheat interstage line.

Summary

The crucial aspect in assessing the performance of these systems with spray water stations is being able to determine the magnitude and frequency of thermal transients. With the nearest temperature transmitters (thermoelements) typically located far downstream, local thermal transients at the conditioning valve and desuperheaters are often not detected. Surface-mounted thermocouples routed to the data historian/digital control system (DCS) or collected wirelessly help to evaluate temperature differentials around the pipe circumference and at geometrical impingement points. This, in conjunction with existing transmitters, allows for early detection of potentially damaging events so that appropriate mitigations (maintenance, logic updates, etc.) can be performed before costly repairs are required.
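
As a simple illustration of quantifying transient severity, the sketch below flags upshock/downshock excursions in a single thermocouple trace by computing its rate of change; the sample interval, the rate threshold, and the data are assumed values.

```python
# Flag upshock/downshock events from one thermocouple trace by rate of change.

import numpy as np

def ramp_rate_events(temps_f, dt_min=1.0, limit_f_per_min=100.0):
    """Return (index, rate) pairs where |dT/dt| meets or exceeds the limit."""
    rates = np.diff(np.asarray(temps_f, dtype=float)) / dt_min
    return [(i, float(r)) for i, r in enumerate(rates) if abs(r) >= limit_f_per_min]

# Example: a trace that steps up ~275F in one minute and later drops back
print(ramp_rate_events([700, 700, 975, 980, 700]))   # [(1, 275.0), (3, -280.0)]
```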

 



High Energy Piping Monitoring

SI moves beyond the pilot application of a High Energy Piping monitoring program designed to reduce operational risk and optimize maintenance activities.

SI has successfully implemented the initial application of an integrated monitoring solution that provides insight into damage evolution and operational risk using real-time data and automated engineering intelligence. This solution will assist in the optimization of maintenance activities and downtime, helping utilities get the most out of their O&M budgets.  “This is a decisive step toward a more modern asset management approach that will lower O&M cost for our clients,” said Steve Gressler, Vice President, SI Energy Services Group, a division of Structural Integrity Associates, Inc. (SI) focused on power plant asset integrity.

Informed by decades of material performance knowledge, the SI team has refined a proprietary risk-ranking method to optimize sensor placement and deliver a high-value monitoring platform supported by the PlantTrack™ asset data management platform.  The integration of monitoring information into the platform further enhances equipment asset integrity data to simplify stakeholder decision making.   The SI solution incorporates various sensors working on a distributed wireless network to feed real-time data to SI’s state-of-the-art algorithms and is also capable of integrating with existing plant data historians to pull in other valuable operational data. The outcome is a cost-effective damage monitoring approach to focus resources and the timing of comprehensive field inspections.

“The architecture enables asset managers to obtain real-time feedback, alerts, and trends that clearly link actual operating conditions to the lifecycle of critical components,” said Jason Van Velsor, Director of Integrated Monitoring Technology at SI.

“We have supported clients with asset integrity insights for decades and now offer enhanced monitoring technology that will help automate risk management for high energy piping and help obtain the most value out of field inspection and other maintenance activities during outages.”

Unique Features of the SI Solution include:

  • Design and application of a monitoring program that focuses on safety and reliability and is consistent with guidance contained in the ASME B31.1 regulatory code.
  • Expert assessment (or Gap Analysis) to optimize monitoring including health checkups to validate optimum monitoring for plant operation.
  • Decades of material analysis insights encoded as algorithms to expertly inform decision making.
  • Customized automated alerts to notify operators of abnormal or undesirable operating conditions affecting the life of high-energy components.

Contact Steve or Jason to learn more (info@structint.com)

Read our related story: News and Views, Volume 49, Attemperator Monitoring with Wireless Sensors.

SI Selected in The Corporate Magazine’s “Top 20 Most Dynamic Leaders”

The Corporate Magazine (www.thecorporatemagazine.com) approached us recently to be featured in their “Top 20 Most Dynamic Leaders” issue. We saw this as a unique opportunity to elevate our brand by briefly discussing our two-year journey under Mark, expanding on our history, highlighting our offerings, and sharing our unique value to the industries we serve.

To read the full article, click here.


News & Views, Volume 51 | Forecasting the Life of a Mass Concrete Structure, Part One

A CASE STUDY FROM THE FERMILAB LONG BASELINE FACILITY

By:  Keith Kubischta and Andy Coughlin, PE, SE


All around us is aging concrete infrastructure. From the dams holding back water, to the nuclear power plants creating carbon-free electricity, to the foundations of our homes and offices. Though many advances have been made in the design of concrete structures, how do we know these structures will stand the test of time? Can we see the future of a concrete structure? Can we know the damage built into a structure during construction, normal life, and extreme events?
Answer:  Yes we can.

Background

In Batavia, Illinois, a facility is being built that is the first of its kind in the world. Fermilab’s Long Baseline Neutrino Facility will accelerate protons using electromagnets up to incredible speeds in a particle accelerator. After traveling through the campus, the particles are redirected to a graphite target, where the collision breaks them into secondary particles: pions and muons. These components decay and are segregated off. What is left is believed to be the building blocks of the universe: neutrinos, which can pass undisturbed through matter. A beam of neutrinos passes through near detectors and travels over 800 miles underground to a detection facility in an old mineshaft at Sanford Underground Research Facility in South Dakota, a facility that can also detect neutrinos hitting the earth from exploding stars.

Figure 1. Fermilab Long Baseline Neutrino Facility (source https://mod.fnal.gov/mod/stillphotos/2019/0000/19-0078-02.jpg)

After the graphite collision, what is left behind has the potential to create some harmful byproducts such as tritium, or hydrogen-3, which needs to be kept out of the surrounding atmosphere, soil, and ground water. This occurs in the decay region slightly downstream from the target complex, a 630-ft-long concrete tunnel with 18 feet of concrete surrounding the beam line. Exiting the decay tunnel, any leftover particles are absorbed downstream in the absorber hall. 

Figure 2. Overview of Decay Region

The tunnel of the decay region houses an octagonal shielding concrete structure to provide shielding for the byproducts. This octagonal structure is over fifty feet tall and wide with 42,000 cubic yards of concrete, enough concrete to construct a baseball stadium. At the center of the tunnel is a double walled stainless steel pressure vessel charged with helium on the inside and a chilled flow of nitrogen gas within the annulus. The octagonal shielding concrete structure is surrounded by an access area to inspect the structure, the outer decay tunnel walls, and the surrounding soil.

Figure 3. Typical Decay Region Tunnel Cross Section

The octagonal shape of the shielding concrete was not always so octagonal. Starting off with small steps, Structural Integrity demonstrated advanced capabilities to model thermal structural behavior of mass concrete, while developing and expanding on existing capabilities. SI’s positive impact on the early stages of the project earned us a larger role where we displayed additional capabilities to positively influence the design of the structure.

SI followed the design progression and answered some critical questions, such as: 

  1. Will the decay region be within acceptable temperatures when subject to the extreme energy deposition from the decaying particles? 
  2. What thermal expansion joints will be required to prevent cracking and harmful movement of the underground structures? 
  3. How can we best optimize the reinforcement of such a massive structure? 

SI answered these questions and more through expert analysis, expanding our capabilities through proprietary simulation ranging from early design concepts through construction stages and up to and including the 50-year design life of the structure. 

Part One of this article will look at the influence our work had on the design of the massive structure and the benefits of “seeing the cracks” before they happen.

Figure 4. EDEP Axial and Radial Distribution along the Beamline

Energy Deposition and Cooling Thermodynamics

Concrete that gets too hot can vaporize the pore water and even break apart. The transfer of heat in concrete is a critical component of the analysis; heat is both added to the structure and removed from it. Thermal loading was provided by Fermilab in the form of volumetric energy deposition (EDEP) on the concrete and steel, based on the particle physics simulation program MARS. The distribution of EDEP varies both radially outward from the beamline and with position along the length of the tunnel. SI needed to convert the distribution into a subroutine of distributed flux for use by the analysis program. The distribution was first translated for use in 2D analysis, expanded into 3D space, and then rotated in coordinate space to account for the slope of the beamline. With the EDEP adding heat to the system, chilled nitrogen is needed to remove heat.
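
The coordinate handling can be pictured with the conceptual sketch below (this is not the actual analysis subroutine): a point in the global model is rotated into beamline coordinates to account for the slope, and a radial-axial EDEP table is interpolated to return the local volumetric heating. The grid, table values, and slope angle are placeholders.

```python
# Conceptual EDEP lookup: rotate a global point into beamline coordinates,
# then interpolate a radial/axial energy-deposition table. All values are
# placeholders, not the MARS results used in the analysis.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

r_grid = np.linspace(0.0, 8.0, 5)          # m, radial distance from beamline
z_grid = np.linspace(0.0, 192.0, 7)        # m, distance along the decay region
edep_table = 1.0e3 * np.outer(np.exp(-r_grid), np.exp(-z_grid / 100.0))  # W/m^3
interp = RegularGridInterpolator((r_grid, z_grid), edep_table,
                                 bounds_error=False, fill_value=0.0)

def edep_at_point(x, y, z, slope_rad=0.10):
    """Volumetric heating (W/m^3) at a global-coordinate point (x, y, z)."""
    z_b = z * np.cos(slope_rad) - y * np.sin(slope_rad)   # rotate into beamline frame
    y_b = z * np.sin(slope_rad) + y * np.cos(slope_rad)
    r = np.hypot(x, y_b)                                  # radial distance from beamline
    return float(interp([[r, z_b]])[0])

print(round(edep_at_point(1.0, 0.0, 10.0), 1))
```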

Figure 5. Accurate Thermal Distribution along the Decay Tunnel Shielding Concrete Structure.

A bit of “back-to-school” was needed to solve the thermodynamic problem. The heat transfer coefficient and temperatures of the nitrogen gas cooling system were calculated using classical methods for convection in annular spaces. With the known EDEP into the concrete and steel, which dominates in regions closer to the center, it was decided as a design condition that all heat be taken up by the nitrogen when calculating the outflow temperature of the nitrogen gas. The nitrogen temperature was calculated in 10m increments along the annular pressure vessel and at outflow based on an energy balance equation. The heat transfer coefficient was calculated using three different empirical relationships for the Nusselt number, with the lower-bound (conservative) estimate used in the analysis. Our efforts created an accurate model in 3D space of the heat transfer into the shielding concrete. As a result of the nitrogen cooling system, we were able to keep concrete temperatures below the limit of 110 degrees Celsius. With the thermodynamic problem solved, SI progressed, coupling the solution to the mechanical stress model.
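
A minimal sketch of the segment-by-segment energy balance, assuming a steady nitrogen mass flow and marching the gas temperature along the annulus in 10 m increments, is shown below; the mass flow, specific heat, inlet temperature, and per-segment heat loads are placeholder values rather than the design numbers.

```python
# March the nitrogen temperature along the annulus, segment by segment,
# using an energy balance. All numerical inputs are placeholders.

def nitrogen_temperature_profile(q_per_segment_w, t_inlet_c, m_dot_kg_s, cp_j_kg_k=1040.0):
    """Return the nitrogen temperature (C) at the end of each 10 m segment."""
    temps, t = [], t_inlet_c
    for q in q_per_segment_w:
        t += q / (m_dot_kg_s * cp_j_kg_k)   # steady-state energy balance per segment
        temps.append(t)
    return temps

# Example: 19 segments (~190 m), 5 kW absorbed per segment, nitrogen entering
# at -10 C with a 10 kg/s flow rate; the last entry is the outflow temperature.
profile = nitrogen_temperature_profile([5.0e3] * 19, t_inlet_c=-10.0, m_dot_kg_s=10.0)
print(round(profile[-1], 1))
```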

Concrete Capabilities

If there is one thing concrete is guaranteed to do, it is to crack. SI’s proprietary concrete constitutive model, ANACAP, is designed to predict concrete cracking and to perform under various states of those cracks opening and closing. The behavior of concrete is highly nonlinear, with low tensile strength, shear stiffness and strength that depend on crack widths, and plasticity in compression. The main components of the concrete model utilized in the design phase analyses are tensile cracking, post-cracking shear performance, and compressive yielding when the compressive strength is reached. The ANACAP concrete model has been validated and verified through 30 years of use and is a key component of the nonlinear assessments.

Figure 6. Stepped vs Octagonal Cross Section, Thermal Distributions and Concrete Strain (Cracking)

Influence on Design

Accurate modeling of the thermodynamics / thermal analysis, coupling with the mechanical model / stress analysis, and the capabilities of the nonlinear constitutive concrete model allow for the simulation of a full 3D model of the shielding concrete under full power operations. The design team sought to minimize the cracking of the structure, monitor elongations and other movements affecting the beam line, and design connections at the structure boundaries. SI coordinated with research and design teams to facilitate several cross-section iterations with different shapes and layers of shielding. Each design iteration was analyzed to demonstrate its benefits or consequences. An early iteration of the shield concrete cross-section was a stepped block shape. The corners of the stepped cross-section displayed the potential for cracking. SI addressed this concern by influencing the development of the octagonal section shape. This optimization allows the design to minimize the amount of reinforcement needed to control cracking.

Figure 7. Lateral Displacements of Single Return Pipe and Dual Return Pipe

In addition to the cooling annulus at the center of the structure, there are return ducts for the system to bring the nitrogen back to the target complex facility. The design initially used four return pipes spread out at four different corners. In one iteration, the design team attempted to replace the four return ducts with one larger return pipe to reduce the concrete volume required for shielding and thereby reduce costs. Our calculations quickly identified unintended consequences. The asymmetrical shape was creating displacements along the transverse horizontal direction, pushing the beam alignment off-center by over an inch (~30 mm). The shape was quickly updated to be symmetric with two return pipes. 

From room temperature to 60 degrees Celsius, concrete is going to expand. Traditional thermal breaks cannot be utilized in this structure, which must maintain continuity and provide shielding. The design needed to allow the structure to expand at the downstream end. Most of the structure is supported by rails on which the bottom of the octagonal section was designed to slide freely during the expansion phase. A section of these rails needs to be fixed at the upstream end, where it was designed to resist the gravity load of the structure along the slope. SI provided valuable design influence on where the fixed rails were to be positioned, as the thermal loading created immense stress at the transition between the fixed boundary and the sliding boundary. In the original position, SI’s calculations identified a concentrated area of cracking. To minimize the amount of cracking and additional reinforcement needed, SI proposed moving the position of restraint toward the cool / upstream end of the tunnel.

Figure 8. Effect of Fixed Rail Boundary Condition Position on Strain.

How do you stop 42,000 cubic yards of concrete from expanding?

Answer:  You Don’t. 

Conclusions

Structural Integrity successfully developed expanded capabilities to model thermodynamics for the energy deposition and nitrogen cooling system. SI used the capabilities of our concrete model to influence the structural design by “seeing the cracks” before they happen, making design adjustments, and reducing reliance on additional reinforcement. SI was able to give key insights for the concrete structure and potential cost savings through optimization.

Part Two of this article will look at the life of the structure from the day concrete is first poured through 30 years of power cycles, delving into the future to see how the structure stands the test of time and examining monitoring methods to see if our predictions come true.



News & Views, Volume 51 | High Temperature Ultrasonic Thickness Monitoring

TECHNOLOGY INNOVATION – THICK FILM SENSORS

By:  Jason Van Velsor and Robert Chambers

Figure 1 – Photograph of an ultrasonic thick-film array for monitoring wall-thickness over a critical area of a component.

The ability to continuously monitor component thickness at high temperatures has many benefits in the power generation industry, as well as many other industries. Most significantly, it enables condition-based inspection and maintenance, as opposed to schedule-based, which assists plant management in optimizing operations and maintenance budgets and streamlining outage schedules. Furthermore, it can assist with the early identification of potential issues, which may be used to further optimize plant operations and provides ample time for contingency and repair planning.

Over the last several years, Structural Integrity has been working on the development of a real-time thickness monitoring technology that utilizes robust, unobtrusive, ultrasonic thick-film sensors and enables continuous operation at temperatures up to 800°F. Figure 1 shows a photograph of an installed ultrasonic thick-film array, illustrating the low-profile, surface-conforming nature of the sensor technology. The current version of this sensor technology has been demonstrated to operate continuously for over two years at temperatures up to 800°F, as seen in the plot in Figure 2. These sensors are now offered as part of SI’s SIIQ™ intelligent monitoring system.

 


Figure 2 – A plot of ultrasonic signal amplitude over time for a sensor operating continuously at an atmospheric and component temperature of 800°F.

In addition to significant laboratory testing, the installation, performance, and longevity of Structural Integrity’s thick-film ultrasonic sensor technology has been demonstrated in actual operating power plant conditions, as seen in the photograph in Figure 3, where the sensors have been installed on multiple high-temperature piping components that are susceptible to wall thinning from erosion. In this application, the sensors are fabricated directly on the external surface of the pipe, covered with a protective coating, and then covered with the original piping insulation. Following installation, data can either be collected and transferred automatically using an installed data acquisition instrument, or a connection panel can be installed that permits users to periodically acquire data using a traditional off-the-shelf ultrasonic instrument.

Figure 4 shows two sets of ultrasonic data that were acquired approximately eight months apart at an operating power plant. The first data set was acquired at the time of sensor installation and the second data set was acquired after approximately eight months of typical cycling, with temperatures reaching up to ~500°F. Based on the observed change in the time-of-flight between the multiple backwall echoes observed in the signals, it is possible to determine that there has been approximately 0.005 inches of wall loss over the 8-month period. Accurately quantifying such a small loss in wall thickness can often provide meaningful insight into plant operations and processes, can provide an early indication of possible issues, and is only possible when using installed sensors.
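
The underlying arithmetic is standard pulse-echo thickness gauging: thickness equals the sound velocity times the time between successive backwall echoes, divided by two. The sketch below uses a nominal longitudinal velocity for steel and illustrative time-of-flight values chosen to give a change of roughly 0.005 inches; these are not the plant measurements.

```python
# Pulse-echo wall thickness from the time between successive backwall echoes.
# Velocity and time-of-flight values are illustrative, not plant data.

def wall_thickness_in(tof_between_echoes_us, velocity_in_per_us=0.2332):
    """Thickness (in) = velocity * round-trip time between backwall echoes / 2."""
    return velocity_in_per_us * tof_between_echoes_us / 2.0

t_install = wall_thickness_in(4.290)   # at sensor installation
t_later = wall_thickness_in(4.247)     # ~8 months later
print(round(t_install - t_later, 4))   # approximate wall loss, inches (~0.005)
```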

Other potential applications of Structural Integrity’s ultrasonic thick-film sensor technology include the following:

  • Real-time thickness monitoring
    • Flow Accelerated Corrosion (FAC)
    • Erosion / Corrosion
  • Crack Monitoring
    • Real-time PAUT
    • Full Matrix Capture
    • Critical Area Monitoring
  • Other Applications
    • Bolt Monitoring
    • Guided Wave Monitoring

In addition to novel sensor technologies to generate data, Structural Integrity offers customizable asset integrity management solutions, as part of the SIIQ platform, such as PlantTrack™, for storing and managing critical data. Many of these solutions are able to connect with plant historians to gather additional data that feed our engineering-based analytical algorithms, which assist in converting data into actionable information regarding plant assets. These algorithms are based on decades of engineering consulting and assessment experience in the power generation industry.

Reach out to one of our NDE experts to learn more about SI’s cutting-edge thick-film UT technology.

Figure 3 – Photograph showing Structural Integrity’s thick-film ultrasonic sensor technology installed on two high-temperature piping elbows that are susceptible to thinning from erosion.

 


Figure 4 – Ultrasonic waveforms acquired approximately 8 months apart showing 0.005 inches of wall loss at the sensor location over this period.

 



News & Views, Volume 51 | Selective Seam Weld Corrosion

ENGINEERING CRITICAL ASSESSMENT

By:  Pete Riccardella, Scott Riccardella, and Chris Tipple

The Structural Integrity Associates, Inc. Oil and Gas Pipeline group recently supported an Engineering Critical Assessment (ECA) to assist a pipeline operator in managing the Selective Seam Weld Corrosion (SSWC) threat to an operating pipeline.  SSWC occurs when the fusion zone of a certain type of seam weld used in vintage (pre-1970) transmission pipelines experiences accelerated galvanic corrosion relative to the pipe body material.  It has led to numerous pipeline failures because the weld fusion zone often exhibits low fracture toughness.  The ECA included several technical advancements in applying fracture mechanics to this threat.

READ MORE


News & Views, Volume 51 | PEGASUS A Versatile Tool for Used Fuel Modeling

By:  Wenfeng Liu


Introduction
PEGASUS, a finite element fuel code developed at SIA, represents a new modeling paradigm.  This new paradigm treats all fuel behavior regimes in one continuous analysis.  This approach differs significantly from the current conservative practice of bounding analysis to ensure uncertainties are accounted for, which results in sub-optimal used fuel management strategies.  Using PEGASUS in used fuel evaluation results in significant savings in engineering cost and workforce utilization, reduces conservatism, and provides flexibility in the management of used fuel.

READ MORE