Patient trust in health institutions and health professionals has long been recognised as key to effective healthcare delivery.1,2 While definitions and measures of this complex concept have varied, it is theorised that a patient’s level of trust in health professionals is largely based on the domains of competence, compassion, privacy, confidentiality, reliability and communication.2 It is also considered that trust in healthcare is tied to expectations that health professionals will prioritise the patients’ best interests while adhering to the principles of beneficence, fairness and integrity.3 Patient trust in their healthcare professionals is thought to be distinct from patient satisfaction, and is a stronger indicator of the relationship between the patient and their clinician, defining a patient’s expectations for their clinician’s motivations.4 A patient’s level of trust in healthcare institutions, influenced by the media and the broader atmosphere of social trust, is also important to consider in the framing of individual patient–clinician rapport.2 Studies have shown that patient confidence in health professionals to act in their best interest is associated with more positive health behaviours, improved quality of life and reduced symptoms.5–8 Conversely, low levels of trust or mistrust in health professionals and services are linked to non-engagement in services, low compliance with care recommendations and poorer health outcomes.9,10
Trust in health institutions is easily lost, and once damaged can be hard to repair. Evidence for predictors of patient trust suggests that demographic characteristics such as gender,11 ethnicity11–14 and age11 may play a role in one’s level of trust; however, other individual factors such as past personal experiences with healthcare appear to be a strong and consistent predictor of trust levels.11–13 Research in Aotearoa New Zealand has demonstrated that any past experiences of racism, regardless of the source, negatively impact patients’ confidence and trust in their general practitioner (GP).15 COVID-19 provided the world with a sobering example of the fundamental nature of people’s trust in health professionals, systems and agencies in determining care-seeking behaviours and vaccine uptake.1,16 Critically, trust in key institutions among New Zealanders continues to decline in the post-COVID pandemic era,17 with just over half of New Zealanders (53%) reporting in 2023 that they trust the Aotearoa New Zealand health system to give them the best treatment.18
It is worth noting that declining trust in health professionals and healthcare institutions is not occurring in isolation. Declining trust in public institutions has been documented in democratic countries internationally in recent years and may be attributed to numerous uncertainties including the COVID-19 pandemic, rising inflation and economic downturn, wars, major geopolitical upheavals and major weather events.19 These issues, while significant in their own right, may be exacerbated by the visibility of political polarisation, misinformation, disinformation and political disengagement.19 A complexity of individual and societal factors may therefore contribute to people’s sense of government inability to respond to complex policy issues—including healthcare-related issues—appropriately and in people’s best interest.
Legitimate use of personal data is considered to be one major driver of trust in public institutions.19 While already used routinely for research and quality improvement, and to inform planning and policy, secondary use of patient data and its impact on patient trust is under renewed spotlight with the rapid advancement and public awareness of artificial intelligence (AI) in the healthcare setting. For the purpose of this viewpoint, the Organisation for Economic Co-operation and Development (OECD) definition of AI is used:
“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”20
AI is thought to have the potential to transform medical practice with applications spanning nearly all aspects of healthcare, from risk prediction to treatment monitoring.21 It also has the potential to improve equitable outcomes by increasing access and affordability of medical care and addressing biases.22 However, fully realising this potential requires the availability of large amounts of contextually relevant patient data to train and test new AI tools,23 for which individual patient consent has often not been obtained. While many AI tools are developed through research with well-established approval and ethics procedures, health services are increasingly responsible for deciding whether to provide access to patient data for AI training and testing in the real-world setting.24 They also must decide whether and how the resultant AI tools should be integrated into clinical care.25
In this rapidly evolving environment, building and maintaining patient trust is of growing importance as it is not only essential for permitting AI development and implementation to continue, but it also has implications for AI’s potential to positively impact healthcare provision and patient outcomes once implemented. Moreover, public levels of knowledge about AI appear to be low,26,27 and thus trust in health professionals and healthcare institutions to be good stewards of their sensitive health information and care in the face of AI-based health technologies is critical.
In terms of secondary use of patient data for AI, emerging evidence suggests that while healthcare consumers are supportive of the use of their data in certain circumstances, this acceptance comes with conditions: that the data are used primarily for public benefit rather than commercial gain; that they are well informed and given a choice about the use of their data where possible; and that their data are kept secure and their privacy maintained through de-identification when shared with external parties.26–28 In short, patients want to be able to trust that their data will be used ethically and responsibly, and will be kept safe.
Appropriate regulation and responsible use of AI are also considered key drivers of trust in public institutions.19 Regarding the integration of AI into clinical practice, research shows that while patients anticipate benefits such as improved diagnostic accuracy, increased efficiency, reduced errors and reduced workload for health professionals, there are also a number of concerns and doubts.27,29 These include a reduced emphasis on clinicians’ own expertise and agency, potential for biases and increased health inequities, care interactions lacking human compassion, transparency and communication, the inappropriateness of AI for some situations and a lack of confidence in AI’s readiness and reliability for effective and safe healthcare delivery.27,29
The potential for AI to improve healthcare can only be achieved with patient and public trust that their health services and professionals will use their data and apply AI technologies safely, ethically, effectively and appropriately. The remainder of this paper will discuss our own explorations into patient perspectives on the secondary use of personal health information and the implementation of AI in healthcare in Aotearoa New Zealand, with a focus on the role of trust in healthcare consumers’ comfort with the development and implementation of these technologies. This is followed by a set of recommendations to researchers, policymakers and health service providers on how to prioritise and maintain patient trust in this rapidly evolving environment.
The evidence base for patient or consumer perspectives on the use of patient data for AI development and integration into healthcare in Aotearoa New Zealand is small but growing.30–32 To date, our group has conducted research to explore consumer views on this topic and on the secondary use of data more broadly, with some findings already published.32,33 Findings have been used to inform national guidelines and policies in our national public health service (Health New Zealand – Te Whatu Ora) including in the development of the AI Governance Framework used by the National AI and Algorithm Expert Advisory Group (NAIAEAG).34
Two of the studies involved individual interviews with potential AI end users (adult health service users and health professionals) of Health New Zealand – Te Whatu Ora (combined n=63) between October 2021 and February 2024. The interviews were semi-structured and scenario-based, as prior work with consumers revealed differing levels of literacy with regards to AI. Interviews were recorded, de-identified and transcribed. Both studies received ethical approval (New Zealand Health and Disability Ethics Committee reference numbers 20/NTA/2; 2023 EXP 18411). Recruitment methods varied based on COVID-related restrictions but included clinicians approaching patients, sharing the study flyer through clinical and academic networks and posting about the study on relevant patient forums. Analysis methods were specific to each study and included thematic analysis35 and the rapid assessment process.36 During the interviews, participants were read a series of hypothetical scenarios in which de-identified patient data were collected to inform the development of an AI tool and the tool was subsequently used in clinical practice. Participants were asked to provide their thoughts on the scenarios and potential issues.
Participants across both studies ranged in age from 22 to 77 years and included representation of priority ethnic groups (including 20.6% Māori, 6.3% Pacific and 7.9% Asian). Our interviews revealed several key themes in which maintaining patient trust was central to participants’ overall sentiments about AI in healthcare. In particular, participants felt that several conditions must be present in order to maintain patient trust in an AI-enabled environment.
Throughout these interviews, and in line with the broader evidence on patient trust, it was apparent that participants place high value on their relationships with health professionals as a proxy for trust in health institutions. Additionally, consistent with recent studies done elsewhere on patient perceptions of AI,27,29 the rapport and quality of interactions with health professionals were seen as vital and superior to potential interactions with AI-based technologies. For this reason, AI was not viewed as a trustworthy replacement for clinicians possessing the “human touch”; it should instead be viewed as a tool in the clinician’s toolbox to improve accuracy and efficiency, and to allow clinicians to spend more time providing patient care.
While the work to date has provided valuable information, more evidence is needed to understand how patient levels of trust vary under different circumstances. As we interviewed patients belonging to a variety of groups, it became clear that group-level emphases were nuanced (for example, patients with rare conditions had different views on international involvement than patients with mental health conditions). More research to clarify and deepen our understanding of consumer views on these issues is needed. It was apparent in this research that understanding and perceptions of AI were changing over time; therefore, as AI becomes more commonplace within society, it is likely that patient trust in these technologies will evolve. Ongoing engagement with end users will ensure health institutions can adapt to accommodate the advances in AI while maintaining patient trust.
These findings have led to the following recommendations for maintaining patient trust in health institutions and their health professionals as AI’s role in healthcare grows. Importantly, there must be collaboration between government, researchers and clinical leadership to ensure that health professionals are equipped to support patients and prevent AI-related concerns from interfering with quality healthcare provision.
Recommendation 1: Ensure there is a culture of transparency around AI in health, including secondary data use and AI involvement in healthcare. Transparency means that sufficient information is available to all interest holders, at all stages of the AI lifecycle (from development and design to evaluation and monitoring), to facilitate meaningful public discussion and debate on how an AI tool is designed and deployed. This includes accurate information about the potential limitations and risks of the technology, the nature and extent of patient data use and algorithms used for training and testing.25
a. Due to the complexity of AI integration and the breadth of datasets needed for AI development/training/testing, the principle of transparency must be adopted and led nationally rather than be the responsibility of individual health professionals and services.
b. Transparency must be coupled with education to ensure adequate general AI literacy is achieved across the population through combined governmental (as stewards of the healthcare system) and sectoral approaches, so that the public has a clear understanding of not just what AI is but also how it may be used in their healthcare.
c. Health professionals must receive the necessary training to ensure they have the language and skills to communicate and be transparent with their patients around AI.
Recommendation 2: Support and enable good governance over AI in health in Aotearoa New Zealand following best-practice guidelines and the latest evidence. This will ensure that ethical, legal and privacy considerations are met (see references 37–39), that tools are reliable and safe, and that the development and use of AI in health are appropriate.
a. Governance must include representation of end users, including health professionals and patients.
b. Good governance must be enabled to establish or update guidelines, policies and regulations (e.g., ethics guidelines) according to emerging international best practices (e.g., from the World Health Organization and other global agencies20,40).
Recommendation 3: Ensure that AI is developed and used in health for the benefit of the Aotearoa New Zealand public. This includes ensuring that its development and use are safe and culturally and clinically appropriate, and that its primary purpose is improving health outcomes rather than commercial gain. This is particularly important for AI tools developed outside of Aotearoa New Zealand.
a. Use of patient data for AI development and testing should only be endorsed where there is clear benefit to New Zealanders. Where data are shared with external or overseas organisations, benefits should be shared in return.
b. Health data from New Zealanders should be protected as taonga and commitment to Māori data sovereignty must be maintained.41
c. AI use in care should benefit those who need it most and not disadvantage population groups, nor be a blanket replacement for in-person care.
AI’s potential contributions to healthcare cannot be realised without a high degree of patient and societal trust that their health services and professionals are acting in their best interests. The rapid emergence of AI in health has not been coupled with consistent increases in AI literacy among the general public, so it is vital that patients can trust their institutions to make decisions on their behalf through good governance over patient data for AI development and implementation of AI in care settings. Alongside governance, transparency in the use of data and AI will be fundamental to maintaining and building trust in health institutions. Our research has shown that with good governance and transparency we can help to ensure that health services are equipped to support health professionals and patients through the safe and timely implementation of AI into care.
Patient trust is key to the delivery of healthcare and realisation of artificial intelligence’s (AI) benefits in health. Trust in health institutions and the health professionals working within them directly impacts patient engagement with health services and their health outcomes. Patients want to be able to trust the health system and health services to respect, protect and use their data responsibly to minimise any potential harms. Further, when integrating AI within health services, patients want to be able to trust that this is done with good governance, including the correct approvals and processes, to ensure equitable and safe care. Due to the complexity and fast-changing landscape of AI and the varied levels of AI literacy, trust is arguably even more important. Patients need to be able to trust services to use their health information responsibly and integrate AI in care appropriately regardless of whether they fully understand the technology. Through transparency and good AI governance, trust can be built and maintained, but if broken or lost, it will be difficult to repair and will have wider implications. This paper provides recommendations for actions to be taken to build and maintain trust in health institutions within the context of the evolving AI landscape.
Rosie Dobson: Associate Professor, School of Population Health, The University of Auckland; Group Manager, Health Services Research and Evaluation, Health New Zealand – Te Whatu Ora.
Melanie Stowell: PhD Candidate and Research Assistant, School of Population Health, The University of Auckland.
Robyn Whittaker: Professor, School of Population Health, The University of Auckland; Director Evidence, Research and Clinical Trials, Health New Zealand – Te Whatu Ora.
This paper was part of a joint series on trust in institutions by Te Herenga Waka—Victoria University Wellington and The University of Auckland. We would like to acknowledge the individuals who participated in the interview studies described in this viewpoint.
Rosie Dobson: School of Population Health, The University of Auckland, Private Bag 92019, Auckland 1142, New Zealand.
Funding from the New Zealand Ministry of Health supported this viewpoint paper. The interview studies described in this viewpoint were funded through grants from Precision Driven Health and the Health Research Council.
RD received payment of travel to DigiFest 2025 in Melbourne via Medtech iQ Aotearoa Tāmaki Makaurau.
1) Taylor LA, Nong P, Platt J. Fifty Years of Trust Research in Health Care: A Synthetic Review. Milbank Q. 2023;101(1):126-178. doi: 10.1111/1468-0009.12598.
2) Pearson SD, Raeke LH. Patients’ trust in physicians: many theories, few measures, and little data. J Gen Intern Med. 2000;15(7):509-513. doi: 10.1046/j.1525-1497.2000.11002.x.
3) Davies H. Falling public trust in health services: implications for accountability. J Health Serv Res Policy. 1999;4(4):193-194. doi: 10.1177/135581969900400401.
4) Thom DH, Hall MA, Pawlson LG. Measuring patients’ trust in physicians when assessing quality of care. Health Aff (Millwood). 2004;23(4):124-132. doi: 10.1377/hlthaff.23.4.124.
5) Birkhäuer J, Gaab J, Kossowsky J, et al. Trust in the health care professional and health outcome: A meta-analysis. PLoS One. 2017;12(2):e0170988. doi: 10.1371/journal.pone.0170988.
6) Greene J, Ramos C. A Mixed Methods Examination of Health Care Provider Behaviors That Build Patients’ Trust. Patient Educ Couns. 2021;104(5):1222-1228. doi: 10.1016/j.pec.2020.09.003.
7) Kerse N, Buetow S, Mainous AG 3rd, et al. Physician-patient relationship and medication compliance: a primary care investigation. Ann Fam Med. 2004;2(5):455-461. doi: 10.1370/afm.139.
8) Cassim S, Kidd J, Rolleston A, et al. Hā Ora: Barriers and enablers to early diagnosis of lung cancer in primary healthcare for Māori communities. Eur J Cancer Care (Engl). 2021;30(2):e13380. doi: 10.1111/ecc.13380.
9) Ward PR. Improving Access to, Use of, and Outcomes from Public Health Programs: The Importance of Building and Maintaining Trust with Patients/Clients. Front Public Health. 2017;5:22. doi: 10.3389/fpubh.2017.00022.
10) Graham R, Masters-Awatere B. Experiences of Māori of Aotearoa New Zealand’s public health system: a systematic review of two decades of published qualitative research. Aust N Z J Public Health. 2020;44(3):193-200. doi: 10.1111/1753-6405.12971.
11) Croker JE, Swancutt DR, Roberts MJ, et al. Factors affecting patients’ trust and confidence in GPs: evidence from the English national GP patient survey. BMJ Open. 2013;3(5):e002762. doi: 10.1136/bmjopen-2013-002762.
12) Nguyen AL, Schwei RJ, Zhao YQ, et al. What Matters When It Comes to Trust in One’s Physician: Race/Ethnicity, Sociodemographic Factors, and/or Access to and Experiences with Health Care? Health Equity. 2020;4(1):280-289. doi: 10.1089/heq.2019.0101.
13) Schwei RJ, Kadunc K, Nguyen AL, Jacobs EA. Impact of sociodemographic factors and previous interactions with the health care system on institutional trust in three racial/ethnic groups. Patient Educ Couns. 2014;96(3):333-338. doi: 10.1016/j.pec.2014.06.003.
14) Keating NL, Gandhi TK, Orav EJ, Bates DW, Ayanian JZ. Patient characteristics and experiences associated with trust in specialist physicians. Arch Intern Med. 2004;164(9):1015-1020. doi: 10.1001/archinte.164.9.1015.
15) Harris R, Cormack D, Waa A, et al. The impact of racism on subsequent healthcare use and experiences for adult New Zealanders: a prospective cohort study. BMC Public Health. 2024;24:136. doi: 10.1186/s12889-023-17603-6.
16) Thaker J. The Persistence of Vaccine Hesitancy: COVID-19 Vaccination Intention in New Zealand. J Health Commun. 2021;26(2):104-111. doi: 10.1080/10810730.2021.1899346.
17) Stats NZ | Tatauranga Aotearoa. New Zealanders’ trust in key institutions declines [Internet]. 2024 Sep 25 [cited 2025 Jan 9]. Available from: https://www.stats.govt.nz/news/new-zealanders-trust-in-key-institutions-declines/
18) Hercock C, Dudding A. NZ Edition - the Ipsos Global Health Service Monitor [Internet]. Ipsos; 2023 [cited 2025 Jan 9]. Available from: https://www.ipsos.com/en-nz/nz-edition-ipsos-global-health-service-monitor
19) Organisation for Economic Co-operation and Development. OECD Survey on Drivers of Trust in Public Institutions - 2024 Results: Building Trust in a Complex Policy Environment [Internet]. Paris: OECD Publishing; 2024 [cited 2025 Oct 23]. Available from: https://doi.org/10.1787/9a20554b-en
20) Organisation for Economic Co-operation and Development. OECD AI Principles overview [Internet]. 2019 [cited 2025 Mar 10]. Available from: https://oecd.ai/en/ai-principles
21) Alowais SA, Alghamdi SS, Alsuhebany N, et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ. 2023;23(1):689. doi: 10.1186/s12909-023-04698-z.
22) Capraro V, Lentsch A, Acemoglu D, et al. The impact of generative artificial intelligence on socioeconomic inequalities and policy making. PNAS Nexus. 2024;3(6):pgae191. doi: 10.1093/pnasnexus/pgae191.
23) Ahmed MI, Spooner B, Isherwood J, et al. A Systematic Review of the Barriers to the Implementation of Artificial Intelligence in Healthcare. Cureus. 2023;15(10):e46454. doi: 10.7759/cureus.46454.
24) Alami H, Lehoux P, Denis JL, et al. Organizational readiness for artificial intelligence in health care: insights for decision-making and practice. J Health Organ Manag. 2021;35(1):106-114. doi: 10.1108/JHOM-03-2020-0074.
25) World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance [Internet]. Geneva; 2021 [cited 2024 Jun 24]. Available from: https://iris.who.int/bitstream/handle/10665/341996/9789240029200-eng.pdf?sequence=1
26) McCradden MD, Sarker T, Paprica PA. Conditionally positive: a qualitative study of public perceptions about using health data for artificial intelligence research. BMJ Open. 2020;10(10):e039798. doi: 10.1136/bmjopen-2020-039798.
27) Vo V, Chen G, Aquino YSJ, et al. Multi-stakeholder preferences for the use of artificial intelligence in healthcare: A systematic review and thematic analysis. Soc Sci Med. 2023;338:116357. doi: 10.1016/j.socscimed.2023.116357.
28) Aggarwal R, Farag S, Martin G, et al. Patient Perceptions on Data Sharing and Applying Artificial Intelligence to Health Care Data: Cross-sectional Survey. J Med Internet Res. 2021;23(8):e26162. doi: 10.2196/26162.
29) Wu C, Xu H, Bai D, et al. Public perceptions on the application of artificial intelligence in healthcare: a qualitative meta-synthesis. BMJ Open. 2023;13(1):e066322. doi: 10.1136/bmjopen-2022-066322.
30) Yap A, Wilkinson B, Chen E, et al. Patients Perceptions of Artificial Intelligence in Diabetic Eye Screening. Asia Pac J Ophthalmol (Phila). 2022;11(3):287-293. doi: 10.1097/APO.0000000000000525.
31) Jayamini WKD, Mirza F, Bidois-Putt MC, et al. Perceptions Toward Using Artificial Intelligence and Technology for Asthma Attack Risk Prediction: Qualitative Exploration of Māori Views. JMIR Form Res. 2024;8:e59811. doi: 10.2196/59811.
32) Dobson R, Wihongi H, Whittaker R. Exploring patient perspectives on the secondary use of their personal health information: an interview study. BMC Med Inform Decis Mak. 2023;23(1):66. doi: 10.1186/s12911-023-02143-1.
33) Dobson R, Whittaker R, Wihongi H, et al. Patient perspectives on the use of health information. N Z Med J. 2021;134(1547):48-62.
34) Whittaker R, Dobson R, Jin CK, et al. An example of governance for AI in health services from Aotearoa New Zealand. NPJ Digit Med. 2023;6(1):164. doi: 10.1038/s41746-023-00882-z.
35) Braun V, Clarke V. Thematic Analysis: A Practical Guide. Thousand Oaks: SAGE Publications; 2021.
36) Beebe J. Rapid Assessment Process: An Introduction. Walnut Creek, CA: AltaMira Press; 2001.
37) Office of the Privacy Commissioner. Privacy Act 2020 [Internet]. [cited 2025 Oct 14]. Available from: https://www.privacy.org.nz/privacy-principles/
38) Stats NZ | Tatauranga Aotearoa, New Zealand Government. Algorithm charter for Aotearoa New Zealand [Internet]. 2023 [cited 2025 Oct 14]. Available from: https://data.govt.nz/toolkit/data-ethics/government-algorithm-transparency-and-accountability/algorithm-charter
39) Digital.govt.nz. Public Service Artificial Intelligence Framework [Internet]. Department of Internal Affairs | Te Tari Taiwhenua, New Zealand Government; 2025 Jan 29 [cited 2025 Mar 10]. Available from: https://www.digital.govt.nz/standards-and-guidance/technology-and-architecture/artificial-intelligence/public-service-artificial-intelligence-framework
40) World Health Organization. Regulatory considerations on artificial intelligence for health [Internet]. Geneva; 2023 [cited 2025 Oct 22]. Available from: https://www.who.int/publications/i/item/9789240078871
41) Taiuru K. Compendium of Māori Data Sovereignty VERSION 2 [Internet]. Te Kete o Karaitiana Taiuru (Blog); 2022 Feb 12 [cited 2025 Mar 26]. Available from: http://taiuru.co.nz/compendium-of-maori-data-sovereignty/#Maori_Data_Ethical_Framework_Taiuru_K_2020