EASA Part 145 MoC Best Practice - Change Monitoring and Verification Processes


Sofema Online (SOL) considers best practices for monitoring and verifying change as part of the MoC process.

Introduction

MoC Change Monitoring & Verification is the critical final phase of the Management of Change (MoC) workflow.

• It shifts the organization's focus from "planning safety" to "verifying reality."

Here we consider the necessary steps, tools, and strategies for developing a robust monitoring and verification process within an Aviation Safety Management System (SMS).

The Strategic Necessity of Verification

The MoC process does not end when a change is implemented. The "Risk Profile" created during the planning phase is theoretical; verification is the process of checking if that theory holds up in the real world.

• The Safety Gap: A common pitfall is a "Tick the Box" culture, in which the process ends once the MoC form is signed. The assessment must not end when the change goes live; organizations must verify that the change is working as planned.

• The Goal: The monitoring process aims to bridge the gap between the "Future State" design and the operational reality, identifying "teething problems" before they become incidents.

Monitoring the "Transition Period"

The most critical phase for monitoring is the Transition Period, the interval between the old state and the fully implemented new state.

• Why it requires monitoring: This period is often the riskiest phase because procedures are hybrid, and confusion is high.

• Actionable Monitoring Strategies:

>>Increased Supervision: You must explicitly assign increased supervision during this specific window to manage the heightened risk.

>>High-Volume Feedback Loops: Organizations must actively hunt for misunderstandings rather than waiting for reports.

>>The "Pulse Check": Use quick, anonymous digital surveys during the transition, asking simple questions such as "Do you understand what is required of you tomorrow?" or "Do you have the tools you need?"

>>Management by Walking Around (MBWA): Leaders need to be visible on the floor, asking informal questions to gauge actual sentiment and understanding.
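To illustrate how pulse-check feedback could be turned into an actionable signal, here is a minimal sketch that tallies anonymous yes/no responses and flags questions falling below a comprehension threshold. The questions, response counts, and the 80% threshold are illustrative assumptions, not values prescribed by the MoC process.

```python
def pulse_flags(responses, threshold=0.8):
    """Flag any pulse-check question where the share of "yes" answers
    falls below the threshold. responses maps question -> list of answers."""
    flags = []
    for question, answers in responses.items():
        share_yes = answers.count("yes") / len(answers)
        if share_yes < threshold:
            flags.append(question)
    return flags

# Illustrative anonymous responses gathered during the transition period
responses = {
    "Do you understand what is required of you tomorrow?": ["yes"] * 7 + ["no"] * 3,
    "Do you have the tools you need?": ["yes"] * 9 + ["no"],
}

# Only the first question (70% "yes") falls below the 80% threshold
print(pulse_flags(responses))
```

A flagged question tells supervisors exactly where to target a briefing before the next shift, rather than waiting for a formal report.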

Using Data and Leading Indicators

An effective verification process moves beyond anecdotal evidence by utilizing data-driven indicators to detect "unplanned change".

• Leading Indicators: Organizations should use their Internal Safety Reporting Scheme (145.A.202) as a barometer for the change.

• Warning Signs: A spike in reports regarding "lack of tooling," "time pressure," or "fatigue" serves as a leading indicator that the operating environment has changed, even if no formal hazard was initially identified.

• Safety Performance Indicators (SPIs): As part of the verification process, specific SPIs, SPTs (Safety Performance Targets), and KPIs (Key Performance Indicators) should be developed to measure the progress and safety of the change.
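As a sketch of how such leading indicators might be monitored in practice, the snippet below compares monthly safety-report counts per category against a pre-change baseline and flags spikes. The category names, counts, and the 2x-baseline threshold are illustrative assumptions; an organization would calibrate these against its own reporting data.

```python
def spike_alerts(baseline, current, factor=2.0):
    """Return report categories whose current count exceeds
    factor x the pre-change baseline count."""
    return [category for category, count in current.items()
            if count > factor * baseline.get(category, 0)]

# Illustrative monthly report counts before and after the change
baseline = {"tooling": 3, "time_pressure": 2, "fatigue": 1}
current = {"tooling": 8, "time_pressure": 2, "fatigue": 4}

# Tooling (8 > 6) and fatigue (4 > 2) spike; time pressure does not
print(spike_alerts(baseline, current))
```

A spike in "tooling" or "fatigue" reports after go-live is exactly the kind of signal that indicates the operating environment has changed, even when no formal hazard was identified during planning.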

The Post-Implementation Review (PIR)

The formal mechanism for verification is the Post-Implementation Review (PIR), which serves as the final "Safety Gate."

• Scheduling: The MoC plan is not static; a review must be scheduled (e.g., 3 months after implementation).

• The "Three Questions" of Verification: The review meeting must formally address three specific questions:

  1. Did the change work? (Operational effectiveness).
  2. Did the mitigations work? (e.g., "We hired contractors to fill the gap—are they actually competent?").
  3. Did unexpected hazards pop up? (Hazards that were not identified during the initial risk profile).

Continuous Improvement and "Looping Back"

The verification process must be cyclical, not linear.

• Teething Problems: Any errors or "teething problems" captured during the monitoring phase must be fed back into the internal investigation process.

• Demonstrating "Just Culture": If staff report that "we didn't follow the new procedure because it doesn't work," the system must respond with improvements to the procedure rather than punishment.

• Closing the Loop: Even if the decision is to accept a risk, that decision must be communicated back to the floor to prevent disengagement.

Summary of the Verification Workflow

To effectively develop this process, an organization should follow this logical flow:

  1. Define the Transition Period: Identify the specific dates where "Hybrid" risk exists.
  2. Deploy Active Monitoring: Use "Pulse Checks" and MBWA during the transition.
  3. Analyze Leading Indicators: Watch for spikes in fatigue or tooling reports.
  4. Conduct the PIR: Hold a formal review meeting (e.g., 3 months post-change).
  5. Verify Mitigations: Confirm that controls (e.g., temporary staff) were actually effective.
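The five steps above can be sketched as a simple status tracker that prevents an MoC from being closed until every verification step is complete. The step names mirror the list above; the reference number and the data structure itself are illustrative, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class MoCVerification:
    """Tracks the five-step verification workflow for one MoC record."""
    reference: str
    steps: dict = field(default_factory=lambda: {
        "define_transition_period": False,
        "deploy_active_monitoring": False,
        "analyze_leading_indicators": False,
        "conduct_pir": False,
        "verify_mitigations": False,
    })

    def complete(self, step):
        self.steps[step] = True

    def ready_to_close(self):
        # The MoC may only be closed when every step is verified
        return all(self.steps.values())

moc = MoCVerification("MoC-2024-017")  # illustrative reference number
moc.complete("define_transition_period")
print(moc.ready_to_close())  # still open: four steps remain
```

The point of the structure is that "Closed" becomes a computed state rather than a signature on a form, which directly counters the "Tick the Box" culture described earlier.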

Management of Change (MoC) Post-Implementation Review Checklist

MoC Reference Number: ___________________ Date of Review: ___________________

Review Team Members: ___________________

Operational Effectiveness (Did the change work?)

This section verifies if the intended "Future State" was actually achieved.

• Objective Achievement: Did the change achieve its primary business or operational goal (e.g., increased capacity, cost reduction, new software implementation)?

• Procedural Adherence: Are staff currently following the new procedures as written, or have informal "workarounds" developed?

>>Check: If workarounds exist, is it because the new procedure is flawed/impractical?

• Teething Problems: Were there significant operational disruptions or errors during the initial rollout?

• Feedback Integration: Has feedback from the floor (e.g., "The new tool doesn't fit") been acknowledged and acted upon?

Mitigation Verification (Did the controls work?)

This section checks if the safety barriers promised in the planning phase were actually effective.

• Mitigation Implementation: Were all agreed-upon mitigations physically implemented?

>>Example: If we agreed to hire 3 temporary staff, did we actually hire them?

• Mitigation Effectiveness: Did the mitigations effectively reduce the risk as intended?

>>Example: Did the "extended turnaround times" successfully prevent time-pressure errors, or was the team still rushed?

• Resource Availability: Were the necessary resources (tools, training, staff) actually available when the change went live?

• Competence Check: If contractors or temporary staff were used as a mitigation, were they competent and adequately supervised?

The Transition Period (How was the "Hybrid" phase managed?)

This section reviews the specific period between the old and new states.

• Confusion Levels: Was there significant confusion regarding which procedure (Old vs. New) was in force during the transition?

• Supervision: Was supervision increased during the transition window as planned?

• Communication: Did the "Pulse Check" surveys or floor feedback indicate that staff understood what was required of them during the handover?

Hazard Re-Assessment (Did unexpected things happen?)

This section looks for "unknown unknowns" that appeared after implementation.

• Unexpected Hazards: Did any new hazards emerge that were not identified in the initial risk profile?

• Leading Indicators: Did internal safety reporting (145.A.202) show a spike in specific issues (e.g., fatigue, lack of tooling) following the change?

• Human Factors: Did the change introduce unforeseen human performance issues, such as higher-than-expected cognitive load or fatigue?

Conclusion & Action

• Status of MoC:

>>[ ] Closed: Change is stable, mitigations are effective.

>>[ ] Open (Action Required): Unresolved issues or ineffective mitigations found.

• Follow-Up Action: (If Open, define what must be fixed before closing).

Next Steps

Sofema Aviation Services (SAS) and Sofema Online (SOL) provide classroom, webinar, and online training, offering almost 1,000 courses, packages, and diplomas compliant with EASA, FAA, UAE GCAA, Saudi GACA, UK CAA, and UK OTAR requirements.
