Tailings Dam Monitoring Instrumentation System Design and Maintenance for Improved Functionality


Tailings storage facilities (TSFs) represent one of the highest-risk assets in the mining industry, where failures can result in catastrophic environmental, social, and economic consequences. Although dam safety is increasingly recognised as a critical component of responsible mining, monitoring systems often lack the design rigour and investment afforded to other safety-critical functions.

This paper applies a systems engineering approach to the design, management, and lifecycle maintenance of dam monitoring instrumentation systems. It outlines a management framework that incorporates critical element identification, redundancy planning, documentation, cyber security integration, preventative maintenance, and disaster recovery planning.

A case study from a large-scale operational site is presented, demonstrating measurable improvements achieved through structured implementation of the framework. Within 12 months, the site achieved a 67% reduction in operational downtime and a 30% reduction in alarm occurrences, significantly improving operational efficiency and risk governance.

The results highlight that even complex, organically grown monitoring systems can be transformed into resilient, reliable, and transparent safety-critical systems through deliberate, structured management practices.

Introduction

Tailings storage facilities (TSFs) around Australia, both operational and decommissioned, represent a portfolio of structures that require careful and considered management. TSFs incorporate multiple and varied risks that have the potential to affect the environmental, social, and economic outcomes of a region, often with catastrophic effect. The significant collapses at Cadia, Brumadinho, and Samarco forced the global mining industry to confront the serious inadequacies in its TSF management frameworks and performance standards. This recognition led to the release of the Global Industry Standard on Tailings Management (GISTM). The GISTM calls for a greater level of rigour and transparency in the management framework of tailings storage facilities, as defined by the six key areas shown in Figure 1.

Among these six areas, monitoring and surveillance is fundamental to ensuring the early identification of abnormal dam behaviour. However, in practice, monitoring systems often lack the design rigour, lifecycle planning, and integration afforded to other critical safety functions. Monitoring instrumentation systems frequently evolve organically, leading to reduced reliability, operational inefficiencies, and governance challenges.

This paper addresses the monitoring and surveillance of tailings facilities and suggests management practices that can assist operators in meeting best practice. The monitoring framework presented is designed to meet monitoring requirements across the lifecycle of the facility and to ensure the monitoring system is robust and reliable, thereby engendering confidence in decision-makers. A systems engineering approach has been applied to the dam monitoring instrumentation system to ensure high levels of reliability, redundancy, and documentation, and learnings from high-reliability industries such as aviation have been incorporated into the design philosophy. This paper presents the key principles addressed in the system design philosophy and an anonymised case study to highlight the benefits and improvements that can be realised by implementing this approach.

Figure 1 Global Industry Standard on Tailings Management – Management Framework 

Design Philosophy & Criteria

The Dam Monitoring Instrumentation (DMI) system is a safety-critical system that must be deliberately designed, maintained, and governed throughout the life of a facility. To ensure its effectiveness, a systems engineering approach has been applied, drawing on lessons from high-reliability industries. The design philosophy is underpinned by four core principles:

1. Reliability

A reliable DMI system provides consistent, accurate data to decision-makers and enables early detection of abnormal conditions. It must remain operational during critical phases such as embankment construction, post-seismic events, or following heavy rainfall.

To achieve this, the system should:

  • Target ≥98% availability, allowing for reasonable unplanned maintenance periods.
  • Include redundancy (minimum N+1) for all safety-critical elements (SCEs).
  • Be simplified to reduce points of failure and interdependencies.
  • Be hardened against environmental damage (e.g., fauna, moisture, UV exposure).

The design must also consider the expected operational lifespan of each instrument. Where sensors are inaccessible (e.g., buried VWPs or SAAVs), a minimum OEM-rated lifespan of 20 years should be targeted. Documentation of all existing instrumentation should be reverse-engineered to ensure whole-of-system coherence and minimise legacy performance issues.
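The relationship between the 98% availability target and N+1 redundancy can be made concrete with a short calculation. The sketch below uses hypothetical downtime figures, not data from any particular site; it assumes redundant units fail independently:

```python
# Availability from downtime, and the effect of N+1 redundancy on a
# single element. Illustrative sketch only; the figures are hypothetical.

HOURS_PER_MONTH = 730  # average hours in a calendar month

def availability(downtime_hours: float, period_hours: float = HOURS_PER_MONTH) -> float:
    """Fraction of the period the element was operational."""
    return 1.0 - downtime_hours / period_hours

def parallel_availability(a_single: float, n_redundant: int = 1) -> float:
    """Availability of an element with n_redundant independent backups:
    the element is down only if every unit fails simultaneously."""
    return 1.0 - (1.0 - a_single) ** (1 + n_redundant)

# A single element at 95% availability (36.5 h downtime/month) falls short
# of the >=98% target...
print(f"single unit: {availability(36.5):.1%}")
# ...but one redundant unit (N+1) lifts it well above the target.
print(f"with N+1:    {parallel_availability(0.95):.2%}")
```

The independence assumption is the weak point of this arithmetic in practice, which is why the text also calls for decentralised power and communications: a shared power supply turns two "independent" units into one failure mode.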

2. Safety

The DMI system plays a fundamental role in risk mitigation by verifying the performance of dam design and construction. It must provide early warnings with minimal delay.

To maximise safety:

  • Alert times must be minimised, particularly during construction or periods of elevated downstream population at risk.
  • Instrumentation should be physically protected against damage from ongoing or future site activities.
  • Sensor cabling must be designed to prevent mechanical, moisture, and fauna-related degradation.
  • Instrument placement and trigger thresholds should align with identified failure modes from FMEA studies.

In addition, critical elements must be identified and documented in the Instrument Register, with clear linkage to the facility’s Trigger Action Response Plan (TARP).

3. Cost-Efficiency

A well-designed DMI system reduces total financial cost of ownership while maintaining critical performance. Cost efficiency is achieved by:

  • Integrating instrumentation planning into future embankment designs to reduce relocation and retrofit costs,
  • Standardising equipment types (e.g., dataloggers, sensors, communications systems) to lower spare part inventory, simplify technician training, and reduce complexity,
  • Carefully setting trigger thresholds to prevent unnecessary operational interruptions due to false positives.

Proactive investment in system design avoids reactive costs and operational disruptions later in the facility’s life.

4. Documentation and Transparency

Robust documentation and transparency are essential to ensure system governance, facilitate knowledge transfer, and support regulatory compliance. The DMI system must include:

  • Comprehensive as-built documentation of all existing and new instrumentation,
  • Clear records of system design rationales, maintenance schedules, and calibration requirements,
  • Active management of instrument registers, network addresses, maintenance logs, and performance tracking.

Thorough documentation ensures that instrumentation data is trustworthy, and that system performance can be independently verified, particularly as facility management teams evolve over time.

Integration with Facility Design Requirements

The DMI system design philosophy complements the surveillance and monitoring requirements identified by the facility designer through risk classification, failure modes effects analysis (FMEA), and regulatory obligations. It extends beyond specifying data types and trigger thresholds by ensuring that instrumentation selection, installation, maintenance, and redundancy are deliberately engineered to deliver safe, reliable, and cost-effective monitoring across the facility’s entire operational and closure lifecycle.

DMI Management Framework

The DMI Management Framework includes the development of a hierarchy of documentation that defines all aspects of the design, procurement, installation, management, and decommissioning of the DMI system. This framework guides the development of the management system and identifies the specific documentation required. The intention of this framework is to integrate with and support the legislated Operation, Maintenance and Surveillance Manual.

Figure 2 Dam Monitoring Instrumentation Management System

Safety Critical Elements and Instrument Criticality

Process safety literature defines a safety critical element (SCE) as a control measure, the failure of which could lead to a material unwanted event. In the context of a tailings storage facility, a simple bowtie analysis identifies the material unwanted event as embankment failure, with the controls being the design and construction verification. The DMI is thus the control verification, as it monitors the ongoing performance and health of the facility. For the purpose of this paper, critical dam monitoring instruments and/or portions of the DMI system can be considered SCEs. SCEs can include sensors as well as power supplies and communications links that are vital to the performance of the system, and are determined by the system architecture and instrument criticality. Where possible, SCEs should be reduced to only the critical sensors by ensuring adequate redundancy in the power and communications systems. This can also be achieved by decentralising power and communications wherever possible. A simplified system with fewer layers of complexity thus achieves a higher level of reliability and availability.

Furthermore, not all sensors are necessarily considered critical. As a facility ages, additional sensors are often installed following further construction phases or deformation events. This organic growth of the system should always occur under the DMI management framework to ensure integration of new sensors into the existing system. This long-term evolution can often lead to a situation where not all sensors are critical to the monitoring of the facility. All sensors provide value and can be used to build a broad understanding of the facility's health; however, critical instruments are defined as those directly monitoring failure modes identified during the FMEA that have been classified as possible or likely. These sensors should be identified in the Instrument Register and should inform the TARP. Thus, the determination of critical instruments should play a key role in the subsequent identification of SCEs. An item of hardware that has been identified as an SCE is subject to greater requirements for lifespan, maintenance effort, and redundancy.

Instrument Lifespan

Instrument lifespan is determined by instrument criticality and any SCE designation. Where sensor replacement is restricted, such as those that are buried, a minimum OEM lifespan of 20 years should be targeted. End-of-life replacement of critical instruments should be scheduled and budgeted at the time of system design. Sensor replacement should occur on schedule or on failure, whichever occurs first.

Sensor lifespan can be impacted by installation methodology and the quality of the installation. All care should be taken during installation to ensure the maximum lifespan can be achieved. Detailed record keeping of installations assists with identifying installation practices that may be leading to reduced instrument lifespan.
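Record keeping of this kind lends itself to simple automation. Below is a minimal sketch that flags instruments approaching their OEM-rated lifespan so that end-of-life replacement can be scheduled and budgeted ahead of failure; the tags, dates, and two-year warning window are hypothetical, not drawn from any real register:

```python
from datetime import date

# Hypothetical instrument records: (tag, install date, OEM-rated life in years).
REGISTER = [
    ("VWP-001", date(2007, 3, 1), 20),
    ("SAAV-014", date(2019, 8, 15), 20),
    ("PRISM-102", date(2015, 1, 10), 10),
]

def nearing_end_of_life(register, today, warn_years=2):
    """Return tags whose remaining OEM-rated life is under warn_years,
    so replacement can be scheduled rather than reactive."""
    flagged = []
    for tag, installed, life_years in register:
        age_years = (today - installed).days / 365.25
        if life_years - age_years < warn_years:
            flagged.append(tag)
    return flagged

print(nearing_end_of_life(REGISTER, date(2025, 6, 1)))
```

A report like this, run against the live Instrument Register, turns the "replace on schedule or on failure" rule into a standing budget line rather than a surprise.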

Redundancy & Standardisation

At a minimum, all SCEs should be designed and installed with a redundancy level of N+1. Where redundant hardware is installed, the standby configuration is determined by an analysis of a hypothetical failure event.

Where an outage of the primary hardware would result in delays or expense that is not acceptable to the business, it is recommended to operate redundant hardware in a hot standby configuration. In this situation, both the primary and redundant systems are powered on and operating simultaneously, with the backup system continuously mirroring or tracking the primary system's performance. If the primary unit fails, the redundant system immediately takes over with minimal or no interruption. This approach shortens the operational lifespan of the redundant hardware but avoids data loss and recovery delays.

The preferred standby configuration is cold standby. In a cold standby configuration, the redundant system remains powered off or in a passive state until the primary system fails. Upon failure, the backup system is manually or automatically activated. This maximises the lifespan of the redundant hardware, minimises cost, reduces complexity, and carries a lower risk of impacting data integrity. However, it involves a slightly longer recovery time than hot standby, as activation processes are required.

Figure 3 Hot Standby and Cold Standby Assessment
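The hot/cold trade-off described above can be sketched as a simple decision rule: prefer cold standby unless the business cost of its longer outage is unacceptable. All names, timings, and costs below are hypothetical placeholders for a site-specific assessment:

```python
import enum

class Mode(enum.Enum):
    HOT = "hot"    # backup powered and mirroring; near-instant takeover
    COLD = "cold"  # backup powered off; activation steps required

# Hypothetical activation times in seconds, for illustration only.
ACTIVATION_TIME = {Mode.HOT: 1, Mode.COLD: 900}

def recovery_time(mode: Mode, detection_time: int = 30) -> int:
    """Total outage on primary failure: failure detection plus activation."""
    return detection_time + ACTIVATION_TIME[mode]

def choose_mode(outage_cost_per_hour: float, acceptable_cost: float) -> Mode:
    """Prefer cold standby (longer hardware life, lower cost and complexity)
    unless the cost of its longer outage is unacceptable to the business."""
    cold_outage_hours = recovery_time(Mode.COLD) / 3600
    if cold_outage_hours * outage_cost_per_hour > acceptable_cost:
        return Mode.HOT
    return Mode.COLD

print(choose_mode(outage_cost_per_hour=100, acceptable_cost=500))    # routine node
print(choose_mode(outage_cost_per_hour=50000, acceptable_cost=500))  # critical link
```

In practice the "cost" side of this assessment should also weigh non-financial consequences, such as loss of monitoring coverage during a period of elevated downstream population at risk.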

Standardisation of the DMI system is a highly effective method of reducing complexity and minimising cost. For example, by standardising dataloggers, the number of models required to be held as critical spares is reduced, along with the total stock inventory. Further benefits include reduced training requirements and improved response times from maintenance technicians, as the hardware will always be recognised and familiar. Standardisation can be difficult or impossible to realise for older facilities. Strategies to better achieve standardisation in the DMI of older facilities include:

  • New-for-old replacement strategy – as hardware malfunctions and requires replacement, it should be replaced with a standard model.
  • Targeted upgrade strategy – as budget becomes available conduct targeted upgrade programs to achieve standardisation where possible.
  • Consistency – avoid the temptation to buy the newest models. If an installed system is functioning, maintain it with consistent sensor models rather than the newest model.

Survey Control

A well-designed and maintained survey control network is fundamental to the performance of dam monitoring instrumentation systems, supporting both geotechnical surveillance and construction activities. The quality of the survey control network directly determines the accuracy, precision, and reliability of instruments such as robotic total stations and monitoring prisms.

Survey control networks should be established by experienced surveyors with a strong understanding of geotechnical monitoring requirements. The design of the network must consider the scale of the facility, the precision required for monitoring deformation, and the operational conditions across the site.

Best practice involves designing a survey network with:

  • Cross-braced triangles between control points to strengthen geometric integrity and reduce cumulative errors,
  • Redundant backsights to ensure reliable positioning even if a control point is lost or damaged,
  • Extended coverage to accommodate future facility expansions and monitoring needs.

All survey control points should be tied into the broader mine survey network and, where practical, linked to a known State Map Grid to ensure spatial consistency and to assist in emergency response scenarios.

Redundancy in the control network is critical. Loss or damage to a single control point should not impair the accuracy of the monitoring system. By implementing cross-bracing, redundant backsights, and multiple fixed control points, the survey network remains resilient to localised failures, ensuring ongoing data integrity during both normal operations and emergency events.

Preventative Maintenance Programs

Preventative maintenance schedules and procedures should be developed for each type or model of sensor in use. Where system-wide standardisation has not been achieved, this may require additional procedures to address differences in power supplies or communications setups. Preventative maintenance regimes are determined by instrument criticality, power supply, and exposure. Preventative maintenance programs should address all aspects of the sensor, including power supply, communications, general condition, data handling, and software. Detailed and consistent record keeping of preventative maintenance activities can be used to identify trends in sensor performance, providing lead indicators of hardware failure.
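The rule that criticality, power supply, and exposure drive the maintenance regime can be expressed as a simple lookup. The intervals below are hypothetical illustrations, not a recommendation for any real instrument:

```python
# Hypothetical preventative-maintenance interval (days) derived from
# instrument criticality, power source, and environmental exposure.
BASE_INTERVAL = {"critical": 90, "non-critical": 180}

def maintenance_interval(criticality: str, solar_powered: bool, exposed: bool) -> int:
    """Derive an inspection interval from the three drivers named in the text."""
    days = BASE_INTERVAL[criticality]
    if solar_powered:
        days = min(days, 90)    # battery and panel checks drive frequency
    if exposed:
        days = int(days * 0.5)  # UV, moisture, and fauna accelerate wear
    return days

# A critical, solar-powered, exposed sensor gets the tightest schedule.
print(maintenance_interval("critical", solar_powered=True, exposed=True))  # -> 45
```

Encoding the regime this way, rather than in tribal knowledge, also makes the schedule auditable: the Maintenance Register can be checked automatically against the computed interval for every instrument.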

Automation, Remote Access, Cyber Security & Trigger Levels

Remote Access and Automation

To maximise system availability and operational efficiency, all sensors, dataloggers, and communication systems should allow for secure remote access. Remote connectivity minimises downtime during unplanned maintenance, facilitates rapid troubleshooting, and reduces personnel exposure to hazardous areas.

Where feasible, it is recommended to automate sensor data collection, reporting, and alarm generation. Automation improves response consistency, reduces decision fatigue, and eliminates manual errors in interpreting data streams. A well-designed automated system supports timely decision-making while freeing operational teams to focus on higher-level analysis.

Cyber Security Considerations

The increased use of remote access and automation introduces heightened cyber security risks. Given the safety-critical nature of dam monitoring instrumentation systems, cyber security must be treated as integral to system design and maintenance. Key protective measures include:

  • Implementing secure access controls and multifactor authentication,
  • Encrypting all data transmissions,
  • Regularly updating and patching system firmware and software,
  • Isolating operational monitoring systems from corporate IT networks where practical,
  • Developing a specific cyber incident response plan for the monitoring system.

Maintaining strong cyber security protects the integrity of monitoring data, preserves system availability, and ensures that alarms and trigger actions are not compromised by malicious interference.

Trigger Level Design

Trigger thresholds must be carefully configured to reflect the facility’s risk profile, expected behaviour, and historical instrument performance. Thresholds should:

  • Be set above the known precision limits of the monitoring equipment,
  • Account for natural noise, environmental variability, and seasonal fluctuations.

For example, prism monitoring thresholds must consider instrument precision, baseline distances, and atmospheric influences on measurement. If the required detection sensitivity cannot be achieved with existing instrumentation, alternative or supplementary monitoring methods should be explored to maintain early warning capability.
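The requirement that a threshold sit above instrument precision and noise can be reduced to a simple validity check. The sketch below uses hypothetical prism figures and an assumed safety factor of two; actual margins should come from the site's own baseline data:

```python
def minimum_threshold(precision_mm: float, noise_band_mm: float,
                      safety_factor: float = 2.0) -> float:
    """Smallest defensible trigger threshold: worst-case combination of
    instrument precision and environmental noise, with a margin."""
    return safety_factor * (precision_mm + noise_band_mm)

def validate_threshold(proposed_mm: float, precision_mm: float,
                       noise_band_mm: float) -> bool:
    """Reject thresholds that would fire on noise rather than movement."""
    return proposed_mm >= minimum_threshold(precision_mm, noise_band_mm)

# Hypothetical prism at long range: ~2 mm instrument precision plus
# ~3 mm of atmospheric and seasonal noise.
print(validate_threshold(5.0, precision_mm=2.0, noise_band_mm=3.0))   # too tight
print(validate_threshold(12.0, precision_mm=2.0, noise_band_mm=3.0))  # defensible
```

A threshold that fails this check is a direct source of the grey alerts described in the case study: if the required detection sensitivity cannot be achieved above the noise floor, the answer is supplementary instrumentation, not a tighter trigger.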

Record Keeping, Registers & Performance Tracking

Consistent and thorough record keeping is highly valuable and assists in developing an integrated knowledge base. Further, thorough record keeping at all levels allows the operation to analyse the performance of the DMI system at multiple levels. A data-driven approach to identifying poor performance in different instrument types, or different segments of the system, can assist with the diagnosis and rectification of underlying problems.

The Instrument Register records key details regarding the instrument installation and trigger thresholds. This document is a live document and is used to maintain an active record of the commissioned instruments in use. This is the primary document that informs the TARP.

The Network Address Register is used to record the network addresses associated with each instrument, datalogger and communications node. This document is primarily used to maintain a record of network addresses and is useful in diagnosing issues in the system.

The Maintenance Register records all instances of planned and unplanned maintenance. It is useful for identifying nodes in the system that are displaying ongoing poor performance as well as ensuring that maintenance requirements are complied with.

The Bridge Register records events where a safety critical element has been bypassed, either intentionally during maintenance or unintentionally during an outage. This register can also be used to record TARP trigger events; these are not classified as bridge events, but recording them is useful. Analysis of unintentional bridge events can develop a lead indicator of instruments or nodes in the system that may be nearing end of life and require replacement. Analysis of past TARP trigger events is important for the facility's governance team to understand the health and ongoing performance of the facility.

The Survey Control Register records the details including location, accuracy, type, and installation details of each survey control point.

The above registers are most powerful when maintained regularly as live documents. A complete metadata record of all changes to the registers should be maintained for transparency purposes. Furthermore, these registers are ideally maintained in a relational database or geomatics system to assist with ease of data lookup and to facilitate simple and efficient reporting. This approach also allows the operator to establish automated alerts or alarms based on the frequency of events for individual instruments or groups of instruments. Thus, it can become a powerful tool for identifying trends of change in facility performance that may otherwise be missed.
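A relational implementation of these registers can be kept very simple. The sqlite3 sketch below, with illustrative table and column names, shows the kind of automated alert described above: critical instruments accumulating unplanned maintenance, a lead indicator of hardware nearing end of life:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE instrument (tag TEXT PRIMARY KEY, kind TEXT, critical INTEGER);
CREATE TABLE maintenance (tag TEXT REFERENCES instrument(tag),
                          performed DATE, planned INTEGER);
""")
con.executemany("INSERT INTO instrument VALUES (?,?,?)",
                [("VWP-001", "piezometer", 1), ("PRISM-102", "prism", 0)])
con.executemany("INSERT INTO maintenance VALUES (?,?,?)",
                [("VWP-001", "2024-01-05", 0), ("VWP-001", "2024-02-11", 0),
                 ("VWP-001", "2024-03-20", 0), ("PRISM-102", "2024-02-01", 1)])

# Automated alert: critical instruments with repeated unplanned maintenance.
rows = con.execute("""
    SELECT i.tag, COUNT(*) AS unplanned
    FROM instrument i JOIN maintenance m ON m.tag = i.tag
    WHERE i.critical = 1 AND m.planned = 0
    GROUP BY i.tag HAVING unplanned >= 3
""").fetchall()
print(rows)
```

The same join pattern extends naturally to the Bridge Register and TARP events, which is what makes the relational form more powerful than a set of disconnected spreadsheets.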

Warehousing & Inventory Management

Maintaining an inventory of critical spares is effective at minimising downtime and improving the efficiency of the technicians maintaining the system. It is crucial to consider items such as software or programming scripts as critical spares as well. Duplicates of each datalogger script should be maintained in a centralised storage location. This can avoid significant downtime and data loss during the replacement of dataloggers.

Maintenance of a stock of non-critical consumables is important, especially during times of construction and instrument relocations.

Regular stocktakes of critical spares and non-critical consumables ensure the DMI system can be adequately maintained and managed at all times during its lifecycle.

Personnel Competencies

Personnel involved in the design, installation, management, and maintenance of the DMI system should have a sound understanding of embankment dam monitoring practices and understand the principles regarding how and what each sensor is monitoring. Furthermore, these personnel should have a strong understanding of earthen embankment visual inspection and know the tell-tale signs of an embankment in distress. The DMI personnel should work closely with the governance team to ensure the best outcome for the facility is achieved.

Further competencies that are highly useful for the DMI team include:

  • A sound understanding of drilling and investigative techniques used in and around embankment dams.
  • A sound understanding of extra low voltage electrical systems.
  • An understanding of wireless communications architecture and a basic understanding of IT network architecture.
  • A moderate to strong understanding of programming languages such as Python or C++.
  • Competency in the use of UAVs to conduct inspections is desirable.
  • Engineers involved in the design, planning, and management of DMI systems should have a strong understanding of earthen embankment dam design principles and considerations as well as a strong understanding of tailings management.

Disaster Preparedness & Recovery

A Disaster Preparedness & Recovery Plan (DPRP) for the DMI system should be established as a matter of priority. A large portion of the value that the DMI system provides is the data records that are maintained. This data provides valuable insights into the performance of the facility during phases of its lifecycle and provides a record that transcends the movement of personnel in and out of the business. A well-established database and DMI system leave a clear and traceable record of facility performance thereby providing assurance to the business that the risks associated with the facility are managed now and into the future.

In the first instance, the DPRP should identify and document how the DMI system is prepared for and hardened against a system-wide disaster. This process, similar to the SCE identification process, identifies key infrastructure that constitutes a point of failure or where data integrity could be lost.

The DPRP should address a number of key scenarios, identified by a panel of subject matter experts and deemed as credible, that could lead to total or partial loss of the DMI system or data. A thorough understanding of the DMI system, and complete documentation of the system, is key to developing an effective Disaster Recovery Plan.

The DPRP should encompass all aspects of the DMI system, including all hardware and software onsite as well as all third-party suppliers that provide a service or host data. Data hosting suppliers should be requested to provide the principal with their own Disaster Recovery Plan.

Further, the DPRP should identify and define a series of steps to be undertaken immediately and through to full recovery following a DMI disaster event. These steps should address items such as:

  • How the facility will continue to be monitored during any extended period of system loss.
  • How the DMI system will be managed during a partial or complete outage.
  • How the DMI system will be recovered following a partial or complete outage.
  • How assurance of the data will be undertaken following recovery of the DMI system.

Case Study

This case study examines the application of the DMI Management Framework at a large-scale operational site in Australia. A snapshot of the monitoring system is provided to illustrate the scale, complexity, and unique challenges faced during implementation. Baseline system performance metrics prior to the framework rollout are also presented to highlight the opportunities for improvement and set the context for the results achieved.

System Description & Performance

The DMI system monitored five declared facilities spanning over 17 square kilometres and 11 kilometres of earthen embankments. It comprised more than 773 sensors, including robotic total stations, monitoring radars, doppler radars, shape array accelerometers, monitoring prisms, moisture sensors, vibrating wire piezometers, settlement systems, CCTV and thermal cameras, seismic arrays, earth pressure cells, and smart markers.

Installed organically over the preceding eight years, the system lacked a cohesive design philosophy or management strategy. This led to excessive complexity, with numerous interdependencies between nodes, poor documentation, and a reliance on a small number of personnel for system knowledge. Repairs were often delayed, and overall system reliability was poor.

By 2023, the DMI system was experiencing significant performance issues, averaging 93.3 hours of downtime per month and totalling 1,119 hours across the year. During this period, 127 TARP events were recorded — with 75% classified as grey alerts linked to power or communications failures, highlighting the critical impact of system instability on operational efficiency.

Management Framework Implementation

Challenges

The implementation of the Dam Monitoring Instrumentation (DMI) Management Framework at the case study site encountered several notable challenges, reflective of the site’s complex operational history and organically developed monitoring system.

Legacy System Complexity

The existing DMI system had evolved over an eight-year period without a unified design philosophy, resulting in a highly complex and interdependent network of sensors, dataloggers, and communication nodes. Integration efforts required extensive reverse engineering and documentation to uncover undocumented installations and dependencies. The lack of a coherent as-built record significantly delayed the early phases of the framework roll-out.

Knowledge and Documentation Gaps

Knowledge of the system’s configuration and functionality was concentrated among a small group of individuals, many of whom were involved in original installations. Due to incomplete documentation and inconsistent record-keeping, accurately mapping the system and assessing criticality of individual components proved challenging. Bridging this knowledge gap required significant stakeholder engagement and verification of historical installations.

Resource and Budget Constraints

Resource availability and budget prioritisation presented early hurdles. While the value of proactive maintenance, standardisation, and critical spares management was clear to the project team, securing funding and resourcing required extensive advocacy. There was initial hesitation to allocate resources toward long-term reliability goals while immediate operational pressures demanded attention.

Resistance to Change

Cultural resistance also emerged as a major challenge. Maintenance teams were accustomed to reactive troubleshooting rather than structured preventative maintenance routines. Shifting practices to a proactive approach with greater emphasis on documentation, standardised procedures, and governance integration required not only training, but ongoing leadership support and reinforcement.

Cyber Security Risks

The increased focus on remote access and automation introduced heightened cyber security concerns. The existing system lacked standardised access controls, encrypted communications, and formal network segmentation. Retrofitting appropriate cyber security measures onto the legacy infrastructure demanded careful planning to avoid operational disruptions while ensuring data integrity and system resilience.

Data Governance and Integration

Consolidating multiple diverse data streams into a unified relational database was a technically complex undertaking. Differences in sensor types, communications protocols, and data handling practices had to be rationalised into a consistent structure. Establishing rigorous metadata standards and integrating historical data into the new system was critical to ensuring long-term data integrity and performance analysis capability.

Although implementation challenges were considerable, the application of the DMI Management Framework ultimately delivered significant improvements in operational performance, system reliability, and governance transparency.

This case study illustrates that while transitioning from an organically developed system to a structured, lifecycle-oriented management approach requires perseverance and careful change management, the long-term benefits to dam safety and business performance are substantial. The realised improvements are outlined below.

Realised Improvement

The structured implementation of the Dam Monitoring Instrumentation (DMI) Management Framework delivered significant and measurable operational improvements at the case study site within just 12 months.

Key outcomes included:

  • 67% reduction in average monthly operational downtime associated with dam monitoring alarms and system faults.
  • 30% reduction in monthly TARP (Trigger Action Response Plan) occurrences, significantly decreasing operational disruptions and the frequency of unnecessary alarm escalations.
  • Reduction in total annual operational downtime from 1,119 hours to 369 hours, translating to a recovery of 750 productive hours across the calendar year.
  • Reduction in total annual TARP occurrences from 127 to 89, reducing the number of events by 38 over the calendar year.
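The headline percentages follow directly from the annual totals reported above. As an arithmetic check:

```python
# Reproduce the reported reductions from the annual totals in the case study.
downtime_before, downtime_after = 1119, 369  # hours of downtime per year
tarp_before, tarp_after = 127, 89            # TARP occurrences per year

downtime_reduction = 1 - downtime_after / downtime_before
tarp_reduction = 1 - tarp_after / tarp_before

print(f"downtime: -{downtime_reduction:.0%}, "
      f"{downtime_before - downtime_after} h recovered")
print(f"TARP:     -{tarp_reduction:.0%}, "
      f"{tarp_before - tarp_after} fewer events")
```

This reproduces the 67% downtime reduction, the 30% TARP reduction, the 750 recovered hours, and the 38 avoided events quoted in the results.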

Figure 4 Case Study TARP Performance Metrics

In addition to these quantifiable improvements, the site also realised several qualitative benefits:

  • Increased confidence among operational and governance teams in the reliability of instrumentation data and alarm systems.
  • Enhanced responsiveness to maintenance issues through improved remote access, preventative maintenance scheduling, and system documentation.
  • Improved resource efficiency, with reduced reliance on specialist troubleshooting and a broader base of personnel trained in system maintenance and interpretation.
  • Strengthened alignment with the Global Industry Standard on Tailings Management (GISTM) guidelines, positioning the site ahead of regulatory compliance timelines.

Critically, the reduction in false positive alarms and avoidable TARP escalations helped to refocus site attention on genuine dam performance risks rather than on system malfunctions. This strengthened the overall risk management framework, improved stakeholder assurance, and contributed to a demonstrable uplift in the site’s dam safety culture.

These results highlight that even highly complex, organically grown monitoring systems can be transformed into reliable, resilient, and transparent safety-critical systems with the application of a disciplined management framework.

Conclusion

The management of tailings storage facilities demands not only technical excellence in dam design and construction but also a disciplined, proactive approach to ongoing monitoring and surveillance.

This case study demonstrates that the implementation of a structured Dam Monitoring Instrumentation (DMI) Management Framework can deliver significant improvements in system reliability, operational efficiency, and overall governance performance — even within complex, organically developed monitoring environments.

By applying systems engineering principles, identifying and protecting Safety Critical Elements, and embedding strong maintenance, cyber security, and data management practices, the site was able to recover over 750 productive hours annually and achieve a substantial reduction in false alarms and unnecessary operational disruptions.

Beyond measurable operational gains, the framework strengthened stakeholder confidence, improved regulatory alignment with standards such as the Global Industry Standard on Tailings Management (GISTM), and contributed meaningfully to the site’s broader dam safety culture. Organisations that invest early in the design, documentation, and lifecycle management of their dam monitoring systems are better positioned to manage risk, maintain compliance, and meet the evolving expectations of regulators, communities, and investors.

The mining industry must continue to shift from reactive to structured, proactive monitoring approaches if it is to sustainably manage tailings storage facilities and protect the communities, environments, and businesses that depend on their safe operation.

Acknowledgements

The authors would like to acknowledge the contributions of the site Operations and Maintenance teams for their support and collaboration throughout the implementation of the DMI Management Framework. Their operational insights and commitment to improvement were critical to the success of the project.

Appreciation is also extended to the Dam Governance Team and site Leadership for their support in resourcing and prioritising the framework rollout.

Thanks are given to the survey and instrumentation personnel whose efforts in system mapping, reverse engineering, and data verification enabled the foundation for successful integration.

Finally, the authors acknowledge Spectrum Mining Consultants for supporting the development of the framework and providing technical resources during implementation.

References

ICMM (2020). Global Industry Standard on Tailings Management (GISTM). International Council on Mining and Metals.

ANCOLD (2019). Guidelines on Tailings Dams – Planning, Design, Construction, Operation and Closure. Australian National Committee on Large Dams.

Pells, P.J.N. (2017). Lessons from Tailings Dam Failures. Proceedings of the 7th International Conference on Mining Waste Management.

ICOLD (2001). Bulletin 121: Monitoring of Dams and their Foundations. International Commission on Large Dams.

Hatheway, A. (2012). Geotechnical Earthquake Engineering for Tailings Dams. Tailings and Mine Waste Conference Proceedings.
