VIEWPOINT

Vol. 138 No. 1616

DOI: 10.26635/6965.6939

Optimising the use of certification findings to support healthcare quality measurement and improvement


There is an ongoing debate about whether certification and accreditation lead to better healthcare, health outcomes and whānau experiences.1,2 Regardless, certification is a mandatory requirement of the Health and Disability Services (Safety) Act 2001, which ensures the safe provision of health and disability services to the public and enables the establishment of standards for safe service provision in Aotearoa New Zealand.3,4 Under section 9 of the Act, service providers, such as hospitals and rest homes, and persons providing healthcare must only do so while certified and must meet all relevant service standards.

Health and disability service providers seeking certification are required to provide evidence and findings that demonstrate their adherence to the Ngā Paerewa Health and Disability Services Standard (HDSS) NZS 8134:2021.5 The HDSS sets minimum standards that service providers must comply with and contains a total of six outcomes. While the Health and Disability Services (Safety) Act and the HDSS are administered by the Ministry of Health – Manatū Hauora, the actual auditing of service providers is carried out by independent, external designated auditing agencies (DAAs), which audit health and disability service providers against the HDSS. Service providers, regulators and auditors spend significant effort and resources to prepare for, undertake, administer and report the audits.

Given that certification is mandatory and requires significant effort from all involved, it is sensible to optimise the use of certification findings to support system learning and quality improvement. In practice, however, the narrative nature of the 60–90-page certification report presents a significant challenge for time-pressured staff to read in its entirety, which results in under-utilisation of the intended benefits of the findings. Furthermore, the qualitative data in the certification report cannot easily be incorporated within succinct 1–2-page organisational quality scorecards and electronic health intelligence dashboards for routine progress monitoring. Out of sight (and out of mind) of routine organisational quality monitoring, with data that cannot easily be viewed at a glance in time-poor senior leadership meetings, it is perhaps unsurprising that staff do not always engage with certification reports or their corrective actions. Further, the qualitative nature of existing certification reports hinders comparability and makes it difficult to understand variation in practice across organisations. Consequently, identifying and spreading learning from the best performers becomes challenging.

The current narrative and qualitative certification reports provide rich context and a comprehensive account of the audit findings and should not be rejected outright. Instead, we suggest a complementary quantitative approach to present certification data, which may be more easily incorporated within routine quality measurement scorecards and dashboards to support engagement, measure and monitor progress over time and identify variation in practice to enable learning from best performers. In this viewpoint article, we describe our proposed quantitative approach and reflect on its use in practice to enable and support clinical governance and quality improvement (QI) in and across organisations. Finally, we consider the limitations of our proposed approach and implications for practice, research and policy.

Quantitative approach to presenting certification data

To augment the narrative and qualitative nature of the certification report, one approach is to convert the audit ratings into quantitative scores and display them visually as radar charts—see Figures 1–3.

View Figures 1–3.

As can be seen in Figure 1, the HDSS contains a total of six outcomes, 34 subsections and 221 criteria. As part of the certification process, service providers are required to provide documents, evidence and findings that demonstrate their adherence to the HDSS criteria, alongside onsite visits. For example, for Section 2: Workforce and Structure, Subsection 2.2 (Quality and risk), criterion 2.2.3 (Service providers shall evaluate progress against quality outcomes), a hospital may provide copies of its organisational quality scorecards/dashboards and clinical governance group meeting minutes that illustrate their use in informing quality improvement. Based on this evidence and what is sighted during the onsite visit, the DAA rates each criterion on a four-point scale (i.e., unattained [UA], partially attained [PA], fully attained [FA] and continuous improvement [CI]). Where criteria have not been met or have only partially been met and are determined to be a risk by the DAA, their risk severity is rated (low, moderate, high) and corrective action requests (CARs) are assigned, which the service provider must address within the specified time.

Our proposed approach converts the qualitative rating into an equivalent quantitative score for each criterion—see Figure 1. For example, if the criterion 2.2.3 example above was rated FA, the converted equivalent score would be 7 out of 10 (or 70%); if the same criterion was rated UA, this would be 3 out of 10 (or 30%), and so on. In this way, each criterion can be converted to an equivalent quantitative score out of 10. Because there are 221 criteria in total, the total available score is 2,210. The overall score for an organisation is thus the sum of the scores for the assessed criteria divided by this total. For instance, as can be seen in Figure 1, if the total score for the organisation was 1,547, this would equate to 70% overall (i.e., 1,547 divided by 2,210).
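The conversion described above can be sketched in a few lines of Python. The UA = 3 and FA = 7 values are given in the text; the PA = 5 and CI = 9 values are assumptions, chosen to be consistent with the four-point scale and the 90% ceiling for all-CI ratings described in this article.

```python
# Illustrative sketch of the qualitative-to-quantitative conversion.
# UA=3 and FA=7 are stated in the article; PA=5 and CI=9 are assumed
# values consistent with the scale and the 90% ceiling for all-CI ratings.
RATING_SCORES = {"UA": 3, "PA": 5, "FA": 7, "CI": 9}
MAX_PER_CRITERION = 10

def overall_score(ratings):
    """ratings: one 'UA'/'PA'/'FA'/'CI' string per assessed criterion.
    Returns the overall percentage out of the total available score."""
    total = sum(RATING_SCORES[r] for r in ratings)
    return 100 * total / (MAX_PER_CRITERION * len(ratings))

# All 221 criteria rated FA reproduces the worked example in the text:
# a total score of 1,547 out of 2,210, i.e., 70%.
print(overall_score(["FA"] * 221))  # 70.0
```

With these assumed values, an organisation rated CI on every criterion scores 90%, matching the deliberate ceiling discussed in the text.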

The proposed scoring aligns with the CI grade intent in that there is always room for improvement (vs no improvement required), so it is not possible to get 100%. Thus, even if all criteria were rated at CI, the highest possible score for an organisation would be 90%. The inability to achieve a “perfect” 100% score is in keeping with the definition of quality care “being the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.”6 Since professional knowledge and technologies are continuously changing, what is deemed as high-quality care today may not be so tomorrow. And thus, there is always room for continuous improvement.

Some criteria will not be applicable in certain settings. Non-applicable criteria can be excluded in these cases and the denominator adjusted accordingly. For example, Subsection 1.1 (Requirement of donation and surrogacy) may not apply in a public hospital. Because there are five criteria in this subsection, a total of 50 points (i.e., 10 points per criterion) is deducted from the total of 2,210, and the denominator is adjusted to 2,160.
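The denominator adjustment can be expressed the same way; in this sketch, a criterion is either a rating or None where it does not apply, and only the PA = 5 and CI = 9 values are assumptions (UA = 3 and FA = 7 are stated in the text).

```python
# Sketch of the adjusted score: non-applicable criteria (None) are
# excluded from both numerator and denominator, as described above.
RATING_SCORES = {"UA": 3, "PA": 5, "FA": 7, "CI": 9}  # PA/CI values assumed

def adjusted_score(ratings):
    """ratings: 'UA'/'PA'/'FA'/'CI' per criterion, or None where a
    criterion does not apply (e.g., donation and surrogacy in a hospital)."""
    applicable = [r for r in ratings if r is not None]
    total = sum(RATING_SCORES[r] for r in applicable)
    return 100 * total / (10 * len(applicable))

# 221 criteria with the five donation/surrogacy criteria not applicable:
# the denominator becomes 2,160 rather than 2,210.
ratings = ["FA"] * 216 + [None] * 5
print(adjusted_score(ratings))  # 70.0
```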

Figure 2 shows the converted quantitative data for each criterion, which can be grouped and analysed in different ways, such as by outcome, by subsection and by criterion. Presented as a radar chart, the data enable at-a-glance understanding of key strengths and flag potential areas for improvement. To support engagement and understanding of what the result means, an equivalent grade descriptor,7 commonly used by schools and universities and familiar to most people, including governors and the general public, can be applied. For example, the overall organisation score in Figure 2 was 69%, equating to a B−, which indicates generally good to strong quality in key areas, with opportunities for further improvement.

Health and disability service providers are expected to undergo an audit cycle comprising a fixed 3-year certification period with an 18-month surveillance audit. Although each certification is undertaken at a single point in time, our proposed approach of converting qualitative ratings into quantitative scores can be used to monitor progress over time. For instance, as can be seen in Figure 3, certification scores between 2022 and 2024 can be compared; in this case, the improvement of Subsection 2.2 (Quality and risk) from 68% to 73% is evident. Because all health and disability service providers are required to be certified and the HDSS is used consistently for assessment, the quantitative scoring approach can also be used to compare and contrast different organisations, enabling identification of variation in practice and supporting learning from best performers. The proposed quantitative approach can also be used to group individual organisations, such as district public hospitals, to provide an overall score for a region or nation.

Reflections on application in practice

Building on a similar method created for medication safety self-assessment,8 the proposed quantitative approach to using certification data was first developed in 2020. Relatively simple, straightforward and logical in concept and method, it has been well received in practice and is in use by several Aotearoa New Zealand public hospitals. Recognising its value, Health New Zealand – Te Whatu Ora recently adopted our quantitative approach to consolidate certification data from different organisations, thereby facilitating national quality measurement, improvement and reporting. Five years on from the conception of the quantitative approach, this section provides an overview of our reflections on the benefits, limitations and potential future implications for practice, research and policy.

Benefits of the proposed approach

As outlined at the start of this viewpoint article, certification reports, while comprehensive, present challenges for practical application due to their qualitative and narrative nature. By converting qualitative findings into quantitative data, visualising these using radar charts and adding peer data for relativity, we have observed senior clinical and non-clinical leaders and managers giving more attention to certification results and overall clinical governance. In fact, as can be seen in Figure 3, the data were used to inform and support planning and improvement of clinical governance and quality systems, resulting in an improvement in the score for Subsection 2.2 (Quality and risk) from 68% in 2022 to 73% in 2024 for a large metropolitan district. The next surveillance audit is due in 2026. Examining the CARs provided by auditors, some criteria, such as staffing levels, new fit-for-purpose buildings and quality of food, cannot be significantly improved by the district unless there are significant uplifts to budgets and/or changes to national contracts. If only the CARs that can feasibly be controlled and improved by the district are addressed and improved to FA levels, it is conservatively estimated that a target score of 75–78% is achievable. The score can potentially be higher if those criteria can be improved to CI levels.

Several factors appear to have contributed to the positive outcome observed. Firstly, clinical governance and senior leadership meetings are often time-poor with numerous competing agenda items, and so any key points and requests for action must be clear and succinct. Alongside written and verbal commentary, the use of radar charts and overall scores expressed as a percentage or grade provides at-a-glance understanding. Secondly, the inclusion of peer scores alongside other quality metrics supports a more complete and balanced picture of quality in the organisation, which enables better interpretation of the data in terms of its size, magnitude and relativity, informing not just whether action is required, but also its urgency and extent. Thirdly, we have found that when a district's grade is significantly low (e.g., C−) or lower than its peers, senior leaders are typically more motivated to direct resources, act and close the gap. Fourthly, because quantitative scores enable monitoring over time, we have also observed that they support positive reinforcement of quality improvement behaviours, since progress is measurable, tangible and validated by independent auditors.

An interesting and unexpected benefit observed in practice is that our proposed quantitative approach is generating a culture of incremental innovation (cf radical innovation9), with the same concept and approach being applied in different settings. For example, the same approach of converting qualitative certification findings into quantitative data has been applied to other certification and assessment tools, such as the Te Arawhiti Māori Crown Relations Capability Framework for the Public Service,10 HealthShare's He Ritenga11 and the Fundamentals of Care Programme,12 to support the measurement and monitoring of how mature organisations are in giving effect to Te Tiriti o Waitangi and in providing core healthcare essentials.

Limitations

While there are benefits to our proposed approach of converting qualitative certification findings into quantitative data, there are also limitations. Notably, the approach was developed empirically, with no formal content or construct validation undertaken. Consequently, there is the potential that it oversimplifies the rich and nuanced narrative of the qualitative certification reports. Furthermore, each criterion is weighted equally and not adjusted for its importance or risk severity, which may lead to overestimation or underestimation of the actual quality of the organisation audited. Given the approach is largely based on the average of converted scores from individual criteria, significant criteria carrying high risk may not be immediately visible without in-depth analysis. Moreover, the reliance on averaged results means there is a risk that the data will appear “ok” with little change over time, potentially fostering a sense of complacency and hindering proactive measures to address quality concerns and cultivate a strong quality culture. In Figure 3, it was noted that the HDSS score improved from 68% in 2022 to 73% in 2024. While this improvement may be due, in part, to our proposed approach, other possible reasons for the positive change include an inherently strong and continuously improving culture, leadership changes and general continuous improvement efforts.

Implications for practice, research and policy

As previously mentioned, our proposed approach has been adopted in practice by several districts nationally and has been applied in different settings. The approach can be used to aggregate individual district results to support regional and national reporting, monitoring and improvement, which is particularly pertinent in the current Aotearoa New Zealand healthcare environment, where there is increased regionalisation and centralisation due to the health reforms. The limitations of the proposed approach need to be understood and taken into account when it is interpreted and used to make decisions. In our view, our proposed approach is a practical, albeit blunt, tool that can augment and complement pre-existing qualitative and narrative certification reports to support clinical governance and inform quality measurement and improvement. We suggest that our proposed approach be used alongside other quality metrics, such as those measuring outcomes, processes and organisational culture, including both leading and lagging indicators, so that insights are triangulated and interpreted in a more balanced manner to inform more precise improvement.

Future research examining the importance of each of the 221 criteria in the HDSS and the extent of their association with outcomes will be helpful in informing weighting and the development of more precise assessment. For example, if a particular criterion is found to have a stronger association with outcomes than another, weighting can be applied to provide a more complete and accurate picture of the quality of the organisation. Healthcare resources are limited, and in this way, scarce resources can be directed to improving the most significant areas.

Beyond metrics largely focussed on throughput, timeliness and experiences of care, relatively few quality measures used by Health New Zealand – Te Whatu Ora focus on patient safety, clinical effectiveness, health equity and the giving of effect to Te Tiriti o Waitangi.13 The HDSS is unique because it aligns with and gives effect to Te Tiriti o Waitangi, He Korowai Oranga – Māori Health Strategy, United Nations (UN) treaties and the UN Declaration on the Rights of Indigenous Peoples. The HDSS was updated in 2021 to place a stronger focus on increasing positive life outcomes and achieving pae ora, healthy futures, for Māori and for those traditionally underserved by the health system, such as Pacific peoples, disabled, rural and rainbow communities.3 The revised standard also strengthened the focus on infection prevention and antimicrobial stewardship, meeting Te Tiriti o Waitangi obligations, and clinical governance, to ensure people's care and support needs are appropriately met. Because the HDSS is comprehensive and brings together various policy obligations, our proposed approach can support optimal use of certification results to measure and monitor the maturity of organisations across these dimensions.

Conclusion

This viewpoint article highlights the significant effort and resources required to certify and audit health and disability service providers. We argued that, given this effort and the fact that certification is a mandatory requirement, certification findings should be optimally used to support system learning and quality improvement. In reality, however, the long, narrative and qualitative nature of certification reports means that they can be challenging to use as part of routine quality measurement, monitoring and reporting, and do not always engage people in quality improvement or support learning from best performers. We have described a proposed complementary approach that augments narrative certification reports by converting their findings into quantitative data, enabling them to be understood at a glance, used to measure and monitor progress over time in and across organisations, and used to enable learning from best performers. Overall, we believe our proposed approach offers a simple, logical and straightforward solution that complements existing narrative and qualitative certification reporting to support better quality measurement and improvement, with a view to achieving better and more equitable health outcomes and experiences for patients and whānau.

An erratum has been published for this article.

Certification is one of several regulatory tools intended to support and ensure the safe provision of health and disability services, such as those delivered in hospitals and rest homes, to the public. Health and disability service providers must be certified and meet all relevant service standards if they are to provide healthcare services. Not surprisingly, service providers, regulators and auditors spend a significant amount of effort and resources to prepare for, undertake, administer and report the audits. Given the substantial investment by all involved, it is essential to optimise the use of the findings to support system learning and quality improvement. However, in practice, the qualitative and narrative nature of the audit findings means that their benefits are not realised commensurate with the effort invested. In this viewpoint article, we propose and describe a complementary quantitative approach to using certification data to enable and support clinical governance and quality improvement in and across organisations. We reflect on our proposed approach in practice and consider its limitations and implications for practice, research and policy.

Authors

Jerome Ng: Clinical Director, Clinical Governance, Office of the Chief Medical Officer, Health New Zealand Counties Manukau, Auckland.

Jacky Chan: Patient Safety and Quality Assurance Lead, Office of the Chief Medical Officer, Health New Zealand Counties Manukau, Auckland.

Jerson Valencia: Nurse Consultant, Office of the Chief Medical Officer, Health New Zealand Counties Manukau, Auckland.

Kaushik Kaushik: Feedback Central Manager, Office of the Chief Medical Officer, Health New Zealand Counties Manukau, Auckland.

Fran Voykovich: Risk and Quality Management Lead, Office of the Chief Medical Officer, Health New Zealand Counties Manukau, Auckland.

Marama Tauranga: Interim Chief Nurse | GM Innovation and Transformation, Hauora Māori Service Directorate.

Andrew Connolly: Chief Medical Officer, Office of the Chief Medical Officer, Health New Zealand Counties Manukau, Auckland.

Vanessa Thornton: Group Director of Operations, Health New Zealand Counties Manukau, Auckland.

Correspondence

Jerome Ng: Office of the Chief Medical Officer, Te Whatu Ora – Health New Zealand Counties Manukau, L1 Exec Management Office, Bray Building, Middlemore Hospital, Private Bag 93311, Otahuhu, Auckland 1640.

Correspondence email

jerome.ng2@middlemore.co.nz

Competing interests

Nil.

1)       Connor L, Dufour K, Zadvinskis IM, et al. Does Hospital Accreditation or Certification Impact Patient Outcomes? Findings From a Scoping Review for Healthcare Industry Leaders. J Nurs Adm. 2025;55(1):53-60. doi: 10.1097/NNA.0000000000001528.

2)       Mumford V, Forde K, Greenfield D, et al. Health services accreditation: what is the evidence that the benefits justify the costs? Int J Qual Health Care. 2013;25(5):606-20. doi: 10.1093/intqhc/mzt059.

3)       Health and Disability Services (Safety) Act 2001 (NZ).

4)       Ministry of Health – Manatū Hauora. Health and Disability Services (Safety) Act [Internet]. Wellington (NZ): Ministry of Health – Manatū Hauora; 2024 [cited 2024 Dec 27]. Available from: https://www.health.govt.nz/regulation-legislation/certification-of-health-care-services/health-and-disability-services-safety-act

5)       Standards New Zealand. Ngā Paerewa Health and Disability Services Standard (NZS 8134:2021) [Internet]. Wellington (NZ): Ministry of Health – Manatū Hauora; 2021 [cited 2024 Dec 27]. Available from: https://www.standards.govt.nz/shop/nzs-81342021

6)       Australian Commission on Safety and Quality in Health Care. The state of patient safety and quality in Australian hospitals 2019 [Internet]. Sydney, NSW (AU): Australian Commission for Safety and Quality in Health Care; 2019 [cited 2024 Nov 18]. Available from: https://www.safetyandquality.gov.au/publications-and-resources/resource-library/state-patient-safety-and-quality-australian-hospitals-2019

7)       The University of Auckland. Grade Descriptors Policy [Internet]. Auckland (NZ): The University of Auckland; 2024 [cited 2024 Dec 31]. Available from: https://www.auckland.ac.nz/en/about-us/about-the-university/policy-hub/education-student-experience/assessment/grade-descriptors-policy.html

8)       Ng J, Andrew P, Crawley M, et al. Assessing a hospital medication system for patient safety: Findings and lessons learnt from trialling an Australian modified tool at Waitemata District Health Board. N Z Med J. 2016;129(1430):63-77.

9)       Acemoglu D, Akcigit U, Celik MA. Radical and incremental innovation: The roles of firms, managers, and innovators. AEJ: Macroeconomics. 2022;14(3):199-249. doi: 10.1257/mac.20170410.

10)    Te Arawhiti. Māori Crown Relations Capability Framework for the Public Service – Organisational Capability Component [Internet]. Wellington (NZ): Te Arawhiti; 2019 [cited 2024 Sep 27]. Available from: https://whakatau.govt.nz/assets/Tools-and-Resources/Maori-Crown-Relations-Capability-Framework-Organisational-Capability-Component.pdf

11)    HealthShare. He Ritenga. Hamilton: HealthShare; 2020.

12)    Parr JM, Bell J, Koziol-McLain J. Evaluating fundamentals of care: The development of a unit-level quality measurement and improvement programme. J Clin Nurs. 2018;27(11-12):2360-72. doi: 10.1111/jocn.14250.

13)    Health New Zealand – Te Whatu Ora. Quarterly Performance Report: Quarter ending 31 March 2024 [Internet]. Wellington (NZ): Health New Zealand – Te Whatu Ora; 2024 [cited 2024 Jul 5]. Available from: https://www.tewhatuora.govt.nz/assets/Publications/Quarterly-Reports/Quarterly-Performance-Report-quarter-ending-31-March-2024-updated-120724.pdf