AI system implementation issues and risk mitigation: living evidence

Living evidence tables provide high-level summaries of key studies and evidence on a particular topic, and links to sources. They are reviewed regularly and updated as new evidence and information is published.

This page is part of the Artificial Intelligence series.

This living evidence brief describes some of the major issues with the implementation of artificial intelligence (AI) in healthcare systems, with reference to the major medical ethical principles of beneficence, non-maleficence, autonomy and justice.1, 2

For each section, three aspects are described:

  • the issue
  • potential enablers and solutions – also known as “best bets” – to mitigate risk
  • frameworks to support governance and/or solutions trialled in real world systems.

At present, many of the best bets in this brief are theoretical, advocated by subject matter experts or policy-driven, rather than strategies that have been tested or implemented in actual healthcare systems. The inclusion of an article here does not represent an endorsement by NSW Health but rather reflects the available and dominant literature in the field at the time of writing.

This brief has been developed via a PubMed search of the relevant AI literature, targeted searches of the grey literature, and a weekly screening process of top medical journals (e.g. Nature, JAMA, The Lancet, BMJ) since 2023.

Regular checks are conducted for new content and any updates are highlighted.

Summary and leading proposed solutions

Healthcare is on the cusp of major changes in both clinical care and administration due to AI developments. The benefits and potential of AI in clinical care are detailed in two other AI living tables.

However, developing appropriate AI systems and ensuring their smooth and ethical implementation present many challenges, particularly since the science of AI is moving faster than regulatory developments.3-5

  • From a regulatory perspective, different jurisdictions are taking different approaches. AI solutions may be classified as “medical devices” or as higher-risk software, which therefore requires a greater level of oversight.6-8
  • There are well-recognised risks with AI surrounding bias, privacy and security risks, discrimination, lack of transparency, lack of oversight, job displacement and de-personalisation, as well as misapplication of context-dependent algorithms.5, 9
  • Patients’ rights to high quality clinical care, to explanations of algorithms’ output and to data protection compliance are pertinent issues in AI implementation.5, 10-13

Experts and professional organisations advocate that successful, ethical and sustainable adoption of “responsible AI” will be underpinned by:10, 15-34

  • thinking about AI utility. There may be no need to implement a new AI solution if existing/conventional solutions are superior.35
  • strong governance and minimum standards
  • a risk management approach to the entire AI development process
  • explicit consideration of bias at all stages of the AI development and implementation pipeline, from problem selection and data collection to post-deployment
  • engagement of all stakeholders across AI design and implementation phases
  • high-quality and diverse datasets, with external validation of data and models
  • transparency around the use of technology and methods used to develop AI models. This includes adoption of explainable AI models, rather than black box approaches.
  • continuous monitoring and improvement processes
  • development, implementation and evaluation of models which support clinical practice and create benefit, rather than a drive purely for productivity and efficiency36
  • creating new models of care from scratch which are suited to data and AI integration, include innovative staff roles, focus on continuous improvement and provide incentives for using AI37
  • a realisation that AI is better suited to supporting, rather than replacing, clinicians and should be viewed as a tool rather than an autonomous entity. Many medical decisions require ethical judgements, rapport-building, interdisciplinary collaboration and empathy to engage in shared decision making.17, 38, 39

Lack of legal and regulatory frameworks

Best bets – solutions and enablers of good practice

These best bets have been suggested by subject matter experts or governing bodies as ways to develop appropriate governance around AI.

  • Develop robust regulatory frameworks for AI in healthcare to ensure the technology is deployed safely, ethically and effectively.31
  • A comprehensive Health Technology Assessment framework for AI-based medical devices can provide valuable insights into their effectiveness, cost-effectiveness and societal impact, guiding their responsible implementation and maximising their benefits for patients and healthcare systems.40
  • Develop health-specific guidance for AI; this can draw on lessons learnt from other industries with similar risk profiles (e.g. aeronautics) and data sensitivity (e.g. finance).10
  • Develop thorough definitions of AI in healthcare, to allow for robust legislation and regulation.41
  • Develop best practice industry standards, auditing and public reporting requirements for AI developers and users to comply with.20, 42
  • Ensure there are associated deterrent penalties for data misuse.30
  • Undertake a review of existing legislation to identify where greater clarity is required with respect to new legislation, or legislative amendment.42
  • Co-operate across jurisdictions and countries, allowing joint action that reduces the cost of developing AI solutions, progresses regulatory effectiveness and efficiency, and improves the safety of AI solutions in cases of poor outcomes or unintended consequences.10
  • Embed the leadership of Aboriginal and Torres Strait Islander experts in government responses to AI on an ongoing basis, in staffing and advisory functions.42

Frameworks to support governance and/or solutions trialled in real world systems

These are examples of frameworks or initiatives that have been proposed to support the development of AI governance, or examples of how other jurisdictions are moving forward in regulatory and legislative changes.

Global frameworks

  • The Organisation for Economic Co-operation and Development (OECD) has proposed AI values-based principles: inclusive growth, sustainable development and well-being; human-centred values and fairness; transparency and explainability; robustness, security and safety; and accountability.43 The implementation of these principles in different jurisdictions is discussed in a number of reports.44, 45
  • The OECD has proposed, and continues to update, its definitions of AI and of AI incidents. It argues that if all governments agree on the same definitions, interoperability across jurisdictions becomes possible.41, 46
  • The World Health Organization (WHO) has proposed a framework to promote good governance of AI in health and the adoption of ethical principles.22, 47, 48

Australia

  • The Australian Government published Australia’s AI Action Plan in 2021. One of its actions specific to healthcare was to fund AI-focused projects under the Medical Research Future Fund.49
  • Australian authors have proposed SALIENT, an end-to-end clinical artificial intelligence implementation framework. At the time of publication, SALIENT was the only framework with full coverage of all reporting guidelines, and it provides a starting place for establishing that AI is tested and suitable for implementation in the Australian context.50
  • eHealth NSW is working with the NSW Ministry of Health to adapt the NSW Artificial Intelligence Assurance Framework for NSW Health. Successful and safe adoption of AI within NSW Health requires effective leadership and governance to ensure a coordinated approach to the longer-term development of clinical AI and research activities.51

Different countries are taking different approaches to AI regulation, with varying levels of precaution.52

  • The NHS in the United Kingdom has developed an AI centre of excellence.53
  • The NHS in England created an Artificial Intelligence Laboratory (NHS AI Lab) to bring together government, health care providers, academics and technology companies. It is an environment for collaboration to address barriers and deploy AI systems in health care.53, 54
  • In the NHS, a blueprint for AI implementation has been used to roll out AI in radiology.32
  • The AI and Digital Regulations Service provides guidance for NHS and social care adopters and digital health innovators. It is a multi-agency collaboration to provide comprehensive guidance at each stage of the adoption pathway.55
  • The European Union Agency for Cybersecurity has proposed a platform to share experiences, challenges and opportunities to support policy makers.56
  • The European Union is in the process of developing an “AI Act”.57 It recognises that it may not be possible for each person to give explicit consent to every action that will be performed on their data in the future. Broader consent processes may allow for greater data sharing practices.57, 58
  • The U.S. Food and Drug Administration (FDA) has created a Digital Health Advisory Committee to help the agency explore the complex scientific and technical issues related to digital health technologies, such as AI/machine learning, augmented reality, virtual reality, digital therapeutics, wearables, remote patient monitoring and software.59
  • In the United States, the National Academy of Medicine is running The Artificial Intelligence Code of Conduct project, which aims to provide a guiding framework to ensure that AI algorithms and their application in health, healthcare, and biomedical science perform accurately, safely, reliably and ethically.
  • The Canadian Government tabled an Artificial Intelligence and Data Act (AIDA) in 2022 which was described as a first step towards a new regulatory system encouraging the responsible adoption of AI.60
  • Singapore’s Ministry of Health has developed AI guidelines which cover recommendations for development (design, build, test) and implementation (use, monitor and review).18
  • The Canadian Institute for Advanced Research (CIFAR)’s Building a Learning Health System for Canadians report highlights the need to develop AI infrastructure, accelerate the development of safe, high performance AI applications, and ensure that relevant policies, investments, partnerships, and regulatory frameworks are in place.61
  • Specifically for radiology AI, governance and implementation frameworks have been proposed which cover:21, 62
    • regulation, legislation and ethics
    • leadership and staff management
    • stakeholder alignment
    • pipeline integration
    • training of staff
    • validation and evaluation
    • AI auditing and quality assurance
    • AI research and innovation.

Underdeveloped and biased models

Issues

There are a number of issues related to biased AI models.

  • Patient data is collected primarily for clinical care in ways that do not support data sharing or building AI models. It is often highly sensitive, stored in siloed databases in inconsistent formats, and governed by different access rules.34
  • AI algorithms incorporate the values, choices, beliefs and norms of their developers and of the underlying research design.38, 63
  • AI models will ultimately reflect biases in data and practices which already exist.31, 64-67
  • Most AI models to date are built on small, poor-quality and/or unrepresentative datasets. They are often made up of retrospective, single-institution data that are unpublished and considered proprietary. These cannot easily generalise to other hospitals, countries or ethnicities.24, 31, 65, 67-72 Most are not clinically validated on external or alternative datasets.73
  • Even a well-designed AI model can subsequently be used in a context for which it was not developed.63
  • Even experienced clinicians can struggle to consistently distinguish between accurate and inaccurate AI predictions and can be misled by inaccurate ones.74
  • Adoption of biased models may exacerbate global inequalities in healthcare.75

Implications of clinical adoption

  • adopting models that reflect inequities in the practice of medicine will only magnify existing inequities31, 67, 75, 76
  • these biases have significant implications for accurate diagnosis and for the under- or overestimation of risks38
  • this may lead to significant associated medicolegal and malpractice consequences77
  • many AI models are ‘black boxes’ where only inputs and outputs are known, meaning it is not clear how they arrive at their final results; this makes them appear less trustworthy and more difficult to improve.31, 78, 79

Best bets – solutions and enablers of good practice

These best bets have been suggested by subject matter experts or governing bodies as ways to ensure AI models are designed in ways that minimise bias.

Education is crucial for fostering a shared understanding and promoting fairness in healthcare.15, 66

  • educate clinicians and patients on the biases inherent in AI
  • encourage open discussions on the implications of AI in healthcare decision-making.

Design electronic health record systems and ways of recording patient data that will promote efficient data cleaning, organising, anonymisation and sharing practices.34

  • This could include:80
    • full digitalisation of patient medical information
    • establishment of data trusts
    • engaging with federated learning, where instead of needing to move all training data to a central location, models are trained locally on local datasets to ensure data security81 (a minimal sketch follows this list)
  • This allows:11, 62
    • sufficient and more widely representative data to be available for designing stronger and less biased models
    • contributions and learning across institutional boundaries.
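
To make the federated learning idea above concrete, the following is a minimal sketch of federated averaging (FedAvg) over two simulated “hospital” datasets. The data, model and two-site setup are illustrative assumptions only; real deployments use dedicated frameworks with secure aggregation and additional privacy safeguards.

```python
# Minimal FedAvg sketch: raw patient data never leaves each (simulated) site;
# only model weights are shared with the coordinating server.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a logistic model locally for a few epochs; return updated weights."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))      # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)  # gradient step using local data only
    return w

# Two simulated local datasets (in practice these stay inside each hospital)
true_w = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
hospitals = []
for _ in range(2):
    X = rng.normal(size=(200, 5))
    y = (1 / (1 + np.exp(-X @ true_w)) > rng.random(200)).astype(float)
    hospitals.append((X, y))

# Federated rounds: the server broadcasts global weights, sites train locally,
# and the server averages the returned weights.
global_w = np.zeros(5)
for _ in range(20):
    local_ws = [local_update(global_w, X, y) for X, y in hospitals]
    global_w = np.mean(local_ws, axis=0)  # unweighted federated averaging

print("learned global weights:", np.round(global_w, 2))
```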

Governance, legislation and stewardship specifically to support equity and safety.22

  • Develop strong policies around patient informed consent for data sharing and use.15, 20, 30, 38, 82
  • Develop minimum requirements for:15, 20
    • deidentification of patient data
    • disclosure of methodologies
    • disclosure of data sources
    • model accuracy and performance metrics.
  • Establish:
    • incentives for AI developers to take measures to minimise bias82
    • ethical frameworks to identify and minimise the impact of biased models, as well as to guide design choices38
    • oversight committees and processes to ensure minimum requirements are met
    • data governance panels made up of a representative group of patients, clinical experts, and experts in AI, ethics and law. They monitor and review datasets and algorithms used for training AI to ensure that the data is representative and that the algorithms used are impartial.38
    • effective national safety monitoring systems so that cases of patient risk and harm related to AI use are rapidly detected and communicated to all relevant parties.20

Liability and accountability

  • Develop clear guidelines for responsibility and accountability in healthcare AI.15, 30, 82
    • This includes determining the roles and responsibilities of various stakeholders, such as physicians, AI developers, and healthcare institutions, in cases where misdiagnoses or other patient harm occur.
    • Some authors suggest clinicians should be ultimately responsible for verifying AI-generated diagnoses and integrating them into the clinical decision-making process.15 However, others have suggested that this places an unrealistic burden on clinicians and instead advocate that AI systems should be designed to support existing ways of working.83
  • Update legislation for handling AI-related medical disputes.38
    • Some authors suggest transferring existing common law principles of negligence and malpractice to AI agents.38

Transparency in methods

  • Promote complete, accurate and transparent reporting of studies that develop prediction models or evaluate their performance.84
  • Develop the capability to provide meaningful and personalised explanations about the results generated by algorithms.38
  • Demonstrate the reliability of the AI models.38
  • Openly report on what or whose data might be missing from an AI model’s development to date.30
  • Choose and/or require interpretable and explainable AI models over and above ‘black-box’ models.11, 22-24, 79, 81, 82
    • Explainable AI involves understanding how a specific algorithm works and knowing who is responsible for its implementation (see the sketch after this list).38, 79
    • It involves algorithm source codes, data sets and training conditions being transparently reported.11, 22, 85, 86
    • However, some have cautioned that explainable AI models have their own challenges:81
      • AI models are necessarily complex and non-linear due to the large amounts of data and variables they handle, which makes them difficult to ‘explain’ or for humans to understand.87
      • Explainable AI can be (necessarily) simpler in design and make more approximations; in turn it can produce models which are less fair for minority populations88 or which explain themselves incorrectly in an attempt to oversimplify complex models.89
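
As a concrete example of probing a black-box model, the sketch below implements permutation feature importance, a simple model-agnostic explanation technique: permute one input feature at a time and measure how much performance drops. The model and data are toy stand-ins, not a representation of any system described above.

```python
# Permutation feature importance sketch: any fitted model exposing a predict
# function can be probed this way, without access to its internals.
import numpy as np

rng = np.random.default_rng(1)

# Stand-in 'black box': a hidden linear scorer behind a predict function
hidden_w = np.array([2.0, 0.0, -1.0])
def predict(X):
    return (X @ hidden_w > 0).astype(int)

X_val = rng.normal(size=(500, 3))
y_val = predict(X_val)  # for this sketch, labels come from the same scorer

baseline_acc = np.mean(predict(X_val) == y_val)
for j in range(X_val.shape[1]):
    X_perm = X_val.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # break feature j's link to the outcome
    drop = baseline_acc - np.mean(predict(X_perm) == y_val)
    print(f"feature {j}: accuracy drop {drop:.3f}")  # feature 1 should show ~0 drop
```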

Best practices in data sources and methods67, 90

  • Ensure the use of datasets which are diverse and representative of heterogeneous populations and disease presentations, during AI development and training.15, 71
  • Train models on data that are representative of the population they serve, encapsulating characteristics such as age, ethnicity, gender, sexual orientation, and socioeconomic background.65
  • Integrate patient data from various sources to reduce model bias and improve the explainability of clinical decisions.38, 76
  • Implement an AI health equity framework and bias mitigation strategies at each stage of implementation.91
  • Incorporate the use of different algorithms to account for disparities in dataset sample sizes.76
  • Consider datasets which oversample minority populations. This can ensure their data is accounted for when models are designed.76 However, this requires judicious choices around how minorities are selected, conceptualised and labelled.92
  • Complement real datasets with synthetic data to improve the accuracy of clinical diagnosis within underrepresented groups.93
  • Require AI models which have been developed via cross-validation and experimental designs. These will demonstrate higher external validity and reproducibility (e.g. high accuracy when the AI model encounters a novel dataset).11, 36, 94 A minimal cross-validation sketch follows this list.
  • Promote clinical validation studies of medical AI devices via corporate contracts and independent review studies.73
  • Ensure that external validation datasets are:16, 71
    • representative of the population and setting in which the AI system is intended to be deployed
    • independent of the dataset used for developing the AI model during training and testing.
  • Develop benchmarking datasets and methods to thoroughly evaluate AI models. This would require the creation of public datasets suited to evaluations, maintaining private test sets to mitigate leakage, and ensuring secure access through verified identities and data use agreements.95
  • Consider modification of AI algorithms by learning from local data, to ensure they are a good fit with local contexts.38, 86
  • Consider local and contextual variables when building predictive models in order to minimise the impact of algorithmic bias on clinical decisions.38
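
The sketch below illustrates the k-fold cross-validation referred to above, using a simple nearest-centroid classifier on synthetic data. The model, data and choice of k are illustrative assumptions; the point is that every record is used for testing exactly once, giving a less optimistic performance estimate than a single train/test split.

```python
# k-fold cross-validation sketch with a toy nearest-centroid classifier.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def fit(X, y):
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}  # one centroid per class

def predict(model, X):
    d0 = np.linalg.norm(X - model[0], axis=1)
    d1 = np.linalg.norm(X - model[1], axis=1)
    return (d1 < d0).astype(int)

k = 5
folds = np.array_split(rng.permutation(len(X)), k)
scores = []
for i in range(k):
    test = folds[i]
    train = np.concatenate([folds[j] for j in range(k) if j != i])
    model = fit(X[train], y[train])
    scores.append(np.mean(predict(model, X[test]) == y[test]))

print(f"cross-validated accuracy: {np.mean(scores):.2f} (sd {np.std(scores):.2f})")
```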

AI model audits and ongoing monitoring

  • Implement public reporting of new models’ real-world performance.95
  • Implement regular, multi-disciplinary audit and AI validation of dependability, performance, safety and ethical compliance. This can identify potential biases and ensure that AI systems remain fair, accurate, and effective in diverse healthcare settings.15, 38, 82, 96
  • Examine disparities in model performance metrics between less and more socially advantaged populations, then develop solutions to address these disparities before models are implemented in clinical practice.19
  • Monitor a model’s validity over time via feedback systems. These can suggest when model re-training might be required.11, 25, 35, 97 A minimal sketch follows.
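
The sketch below is a minimal version of such a feedback system: it compares the model’s rolling accuracy on clinician-confirmed outcomes against its validation baseline and raises an alert when performance drifts. The baseline, window size and tolerance are illustrative values, not recommendations.

```python
# Rolling-accuracy drift monitor sketch: alert when recent performance falls
# below the accuracy observed at validation time.
from collections import deque

BASELINE_ACC = 0.90   # accuracy at validation time (assumed)
WINDOW = 100          # number of recent cases to average over
TOLERANCE = 0.05      # acceptable drop before an alert is raised

recent = deque(maxlen=WINDOW)

def record_outcome(prediction, ground_truth):
    """Call whenever feedback confirms or corrects a model prediction."""
    recent.append(int(prediction == ground_truth))
    if len(recent) == WINDOW:
        rolling_acc = sum(recent) / WINDOW
        if rolling_acc < BASELINE_ACC - TOLERANCE:
            print(f"ALERT: rolling accuracy {rolling_acc:.2f} below baseline "
                  f"{BASELINE_ACC:.2f}; consider re-validation or retraining")

# Example: a run of mostly correct predictions followed by degradation
for correct in [1] * 90 + [0] * 30:
    record_outcome(correct, 1)
```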

Frameworks to support governance and/or solutions trialled in real world systems

These are examples of frameworks or initiatives that have been proposed to support the development of AI models that minimise bias.

Data sharing repositories and guidelines

  • The MAIDA initiative has established a framework for global medical-imaging data sharing, to address the shortage of public health data and enable rigorous evaluation of AI models across all populations. They discuss the challenges and lessons learnt in establishing the initiative.98
  • The NSW Health Data Lake can be used in the future to integrate data, AI and machine learning methods into the data engineering processes currently used for complex analysis of data. Increased automation will drastically reduce the time spent cleaning data.99
  • The STANDING Together (standards for data diversity, inclusivity and generalisability) initiative is an international, consensus-based initiative that aims to develop recommendations for the composition (who is represented) and reporting (how they are represented) of datasets that underpin medical AI systems.100
  • Researchers in Canada have reported on a model which allows multiple hospitals and jurisdictions to share data for AI model development without compromising on privacy and data security. Using the platform allowed the project to develop more robust AI models than would otherwise have been possible.101
  • The Japan Medical Image Database (J-MID), established in 2018, contains CT and magnetic resonance imaging (MRI) scans and diagnostic reports uploaded from major university hospitals in Japan. Since moving to cloud-based infrastructure in 2023, J-MID now contains approximately 500 million images. Japan’s national health insurance system provides CT and MRI scans for all citizens, which allows for the collection of unbiased image data regardless of age or socioeconomic status.102
  • Researchers in Europe have developed a 13 step guide to develop and evaluate a clinical prediction model, including freely available R code.103
  • Researchers as part of the New York Genome Center’s ALS consortium have developed a secure framework that allows clinical and genetic data to be stored securely for queries and analysis using blockchain technology.104

Collaborative AI testing labs

  • Public-private partnerships can support AI assurance labs. These can serve as a shared resource for the industry to validate AI models and accelerate development and successful market adoption.105, 106
  • The University of Melbourne’s AI Assurance Lab, for example, validates AI technologies with respect to quality, safety, privacy and reliability.107

Developing and sharing foundational AI models

  • Foundational AI models are trained on broad data and can be applied across a wide range of use cases.108
  • Foundational models such as MONET enable AI transparency across the entire system development pipeline.109

AI model audit and monitoring

  • ‘Post-deployment monitoring’ is one of the 10 guiding principles identified by the joint bodies of U.S. Food and Drug Administration (FDA), Health Canada, and the United Kingdom’s Medicines and Healthcare products Regulatory Agency, in the development of Good Machine Learning Practice (GMLP).110
  • Singapore’s Ministry of Health AI guidelines refer to this process as ‘ground-truthing’.18

Data security and intellectual property

Issues

  • The collection of personal health data risks exposing patients to:38, 111
    • privacy invasion
    • repurposing of data for uses for which consent was not given112
    • fraud
    • algorithmic bias
    • information leakage
    • identity theft.
  • The way that AI models learn exposes them to novel risks: they can be attacked and controlled by a bad actor with malicious intent (an ‘AI attack’).
  • A machine learning model can be attacked in three different ways (a minimal illustration of the first follows this list). It can be:113
    • misled into making a wrong prediction
    • altered through data e.g., to make it biased, inaccurate or even malicious
    • replicated or stolen e.g., IP theft through continuous querying of the model.
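
The first attack type can be illustrated with a toy gradient-based perturbation against a simple logistic model, sketched below. The weights, input and step size are invented for illustration; attacks on real models are more sophisticated but follow the same principle of nudging inputs along the model’s gradient until the prediction flips.

```python
# Toy adversarial perturbation: a small input change flips the prediction.
import numpy as np

w = np.array([1.5, -2.0, 0.8])  # weights of a toy logistic model
b = 0.1

def prob(x):
    return 1 / (1 + np.exp(-(w @ x + b)))

x = np.array([0.4, -0.3, 0.2])
print(f"original score:  {prob(x):.2f}")      # ~0.81, classified positive

# For a linear model the gradient of the score w.r.t. the input is just w,
# so stepping against the sign of w lowers the score most efficiently.
eps = 0.5
x_adv = x - eps * np.sign(w)
print(f"perturbed score: {prob(x_adv):.2f}")  # ~0.33, the prediction flips
```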

Best bets – solutions and enablers of good practice

These best bets have been suggested by subject matter experts or governing bodies as ways to promote safe and secure use of AI models in healthcare and ensure data protection.

  • Prioritise privacy and data protection during all design stages and deployment of AI systems.16, 114
  • Protect the privacy of individuals whose data is used to train AI systems.16, 20
  • Ensure AI systems meet an organisation’s data residency or sovereignty obligations, with respect to where (globally or in the cloud) data is stored.115
  • Ensure systems can log and monitor AI model input, output and high-frequency, repetitive prompts (a minimal sketch follows this list).115
  • Enact broader rulemaking authority for patient data protection so that regulators can act quickly as new privacy and security threats emerge.115
  • Develop benchmark approaches that effectively measure the balance between privacy and the utility of large language models.111
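
A minimal sketch of such logging and monitoring follows: every prompt and output is recorded, and a user who sends many near-identical prompts within a short window is flagged for review. The window, threshold and normalisation step are illustrative assumptions.

```python
# Input/output logging with a simple check for high-frequency repeated prompts.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REPEATS = 5

recent_prompts = defaultdict(deque)  # user_id -> deque of (timestamp, prompt)
audit_log = []                       # in practice: durable, access-controlled storage

def log_and_check(user_id, prompt, output):
    now = time.time()
    audit_log.append((now, user_id, prompt, output))

    q = recent_prompts[user_id]
    q.append((now, prompt.strip().lower()))
    while q and now - q[0][0] > WINDOW_SECONDS:
        q.popleft()                  # keep only the last minute of activity

    repeats = sum(1 for _, p in q if p == prompt.strip().lower())
    if repeats > MAX_REPEATS:
        print(f"FLAG: user {user_id} sent {repeats} near-identical prompts in "
              f"{WINDOW_SECONDS}s; review for probing or extraction behaviour")

# Example usage
for _ in range(7):
    log_and_check("user-42", "List this patient's medications", "...")
```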

Technological solutions

  • Developers need to employ modern technical privacy-protection solutions.38
  • There are four pillars of protection: training data privacy, input privacy, output privacy, and model privacy.116
  • Privacy protection mechanisms exist at different points along the pipeline and can include:116
    • cryptographic techniques (e.g. homomorphic encryption, garbled circuits)
    • non-cryptographic techniques (e.g. differential privacy; a sketch follows this list)
    • hybrid techniques (e.g. federated learning)
    • decentralised systems (e.g. blockchain).
  • There is preliminary evidence that large language models can be taught to shield or protect specific categories of personal information under simulated scenarios.111
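
Of the non-cryptographic techniques above, differential privacy is the most widely cited; the sketch below shows its canonical building block, the Laplace mechanism, applied to a count query. The dataset and the epsilon value are illustrative assumptions only.

```python
# Laplace mechanism sketch: noise calibrated to the query's sensitivity means
# any single patient's presence changes the output distribution only slightly.
import numpy as np

rng = np.random.default_rng(3)

def dp_count(records, predicate, epsilon=1.0):
    """Differentially private count query over a patient dataset."""
    true_count = sum(1 for r in records if predicate(r))
    sensitivity = 1  # adding/removing one patient changes a count by at most 1
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

patients = [{"age": int(a)} for a in rng.integers(20, 90, size=1000)]
noisy = dp_count(patients, lambda r: r["age"] > 65)
print(f"noisy count of patients over 65: {noisy:.1f}")
```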

Frameworks to support governance and/or solutions trialled in real world systems

These are examples of frameworks or initiatives that have been proposed to support the development of secure AI.

  • The Australian Cyber Security Centre has published guidance together with other peak bodies to highlight best practices in AI adoption and security.115
  • The Office of the Victorian Information Commissioner has published Artificial Intelligence – Understanding Privacy Obligations.117
  • In the US, the National Institute of Standards and Technology has published an Artificial Intelligence Risk Management Framework.118
  • The Office of the Privacy Commissioner of Canada has published Principles for responsible, trustworthy and privacy-protective generative AI technologies.119

Poor uptake – patients

Issues

Patients can mistrust AI due to:

  • data privacy concerns112, 120
    • many patients are unwilling to share their health data, even for developing algorithms that might improve quality of care38
  • concerns around accuracy or adequate performance120-122
  • lack of model and performance explanations.120

AI literacy

  • Large language models can be misused to generate and disseminate highly convincing disinformation in the form of text, audio or video content that is difficult for the public to differentiate from reliable health content.114

Patients may be less likely to adopt AI systems or apps in cases of:120, 122

  • female gender and high-risk situations123
  • poor usability or user interface
  • a desire to maintain human patient-clinician relationships
  • lack of perceived empathy in AI and apps
  • inappropriate or over-detailed information, or a lack of actionable recommendations.

Best bets – solutions and enablers of good practice

These best bets have been suggested by subject matter experts or governing bodies as ways to ensure that patients and their needs are included in the AI development process and patient uptake of new AI technologies is more likely.

Stakeholder (patient and advocacy groups) buy-in and co-design124

  • Promote and fund AI clinical trials which prioritise patient-relevant outcomes to fully understand AI's true effects and limitations in health care.125
  • Gain a thorough understanding of the unique challenges and concerns faced by various patient populations.15
  • Promote the development of more equitable and effective AI solutions tailored to patient needs.15, 126

AI system and app development and design

  • Embed ethical principles into AI design to promote a trusting relationship between patients and AI.38
  • Adopt a patient-centred approach in designing medical AI to promote informed choices aligned with patient values and respect patient autonomy.38
  • Seek to maximise explainability.120

Maximising user experience

  • Involve stakeholders in interface design.15, 127
  • Design AI applications which have options to personalise information such as the explanation of diagnoses, recommendations or patient education.121
  • Design applications that interconnect with other information sources such as the electronic health record, calendars, and smart devices.121

Humanness and AI

  • Design AI systems and chatbots to display humanness (e.g. recognition, personification and empathy).31, 38, 120 This can:
    • provide meaningful and ethical care
    • be more likely to create connection, trust and appeal.
  • Create (real or perceived) anonymity in order to promote patients being comfortable to discuss sensitive topics.121

Patient rights and psychological safety

  • Be transparent that AI is in use,18 about how data will be anonymised, stored and secured,121 and explain the role of AI in supporting rather than replacing clinical care.128
  • Design informed consent processes to be both thorough and clear.38, 129
  • Consider data ownership and ethical considerations carefully before sharing data, particularly if a profit corporation is involved.129
  • Consider data sovereignty and allow for flexibility in how patients (particularly subgroups of marginalised patients) have ownership over their own data where there are privacy and ethical concerns.130
  • Develop guidelines and resources for patients and carers to maximise AI literacy, so that stakeholders can interpret AI data and have increased trust in AI.20, 131
  • Encourage people to become more proactive in sharing and disseminating data about themselves via (secure) personal health repositories.38

Frameworks to support governance and/or solutions trialled in real world systems

These are examples of AI models or apps that have been co-designed, as well as initiatives to support others to co-design healthcare AI.

  • A number of case examples exist where a co-design process with clinicians and AI developers has been used to produce or plan:
    • a patient-driven project to use AI to analyse the effect of lithium medication on kidney function132
    • a chatbot for mental health support in young people133
    • a chatbot for patients living with COPD134
    • an AI-enhanced social robot to be used as a distraction tool for children in the emergency department135
    • co-creation of a digital learning health system called the BrainHealth Databank.136
  • Researchers from the UK have shared open access materials on Patient and public involvement to build trust in artificial intelligence: a framework, tools and case studies.132, 137

Poor uptake – clinical staff

Issues

  • Poor uptake of novel systems can occur among staff for a number of reasons.
  • Useful AI solutions can still fail at the implementation stage without proper planning for system and end-user needs.138
  • Acceptability of AI for healthcare professionals is underpinned by:39, 121, 139-145
    • user factors: trust in AI (and its accuracy and implications), system understanding, AI literacy, positive attitude, anxiety and perceived risk
    • system usage factors: added value, time savings, burden, interpretability and explainability of AI results, interface and user friendliness, workflow integration and interoperability of systems
    • socio-organisational-cultural factors: social influence, organisational readiness, ethical aspects, perceived threat to professional identity.

Best bets – solutions and enablers of good practice

These best bets have been suggested by subject matter experts or governing bodies as ways to promote trust in AI in healthcare staff and ensure their needs are incorporated into product designs.

Culture, systems and novel models of care which:37, 146, 147

  • evaluate new technologies in clinical contexts148
  • support clinicians across their entire workflow
  • promote learning through experience
  • allow for the accommodation of clinicians’ autonomy
  • consider change management principles35
  • encourage the idea that change is vital and positive, and that progress may involve some experimentation and failure along the way.

Interoperability of systems and modernisation of organisational data infrastructure and procedures, such as:22, 37, 149-151

  • Develop integrated care systems via co-design, considering staff needs and relational considerations and incorporating change management.152, 153
  • Establish remote access methods and cross-site sharing of patient data and images.
  • Invest in distributed data platforms and cloud computing infrastructure.
  • Improve data storage and increase computational power for advanced analytics.
  • Make data available seamlessly across integrated systems to enable faster, easier and more accurate patient data analysis.140
  • Promote the development of forms of AI which support clinicians’ existing workflows and decisions, rather than autonomous AI systems which then require clinician sign-off.83

Sustainable and context-relevant AI development

  • Develop strong local leadership who are responsible for adapting AI to the local contexts.38
  • Ensure inclusive local leadership that includes the perspectives of all stakeholders.20, 38
  • Ensure health systems and vendors meet the needs of the end-users of clinical decision support tools.19, 20
  • Review and address the frequency of alerts or escalations generated by systems to ensure that clinicians do not experience “alarm fatigue”.154
  • Coordinate multidisciplinary teams and align projects with key institutional values, for more seamless clinical implementation.62, 154
  • Incorporate the perspectives and feedback of clinicians in order to maximise their knowledge of workflows. A study of nurses using AI showed that nurses can provide real-world insight and solutions for implementation issues.155

Transparency:

  • around the role of AI in supporting rather than replacing clinical care31, 156
  • around a model’s accuracy and how a recommendation is derived (who developed the system, the system reasoning and reliability)121
  • so that users of a system understand how decisions are made and are therefore more likely to adopt it.17
  • Transparency might not be enough to create trust, however:
    • staff are also likely to want high accuracy and reliability in AI models in order to implement them with patients19
    • focus AI design on technical performance of the technology, the infrastructure and processes that ensure technical performance and safety (rather than attempting to explicitly build in trust features).19

Frameworks to support governance and/or solutions trialled in real world systems

These are examples of AI models or apps that have been co-designed, as well as initiatives to support others to co-design healthcare AI.

  • A number of case examples exist where a co-design process with clinicians and AI developers has been used to produce or plan:
    • an AI model to classify skin lesions157
    • co-creation of a digital learning health system called the BrainHealth Databank136
    • an AI-enhanced social robot to be used as a distraction tool for children in the emergency department.135

Workforce changes and challenges

Issues

  • AI outputs require clinicians to interpret them, which requires appropriate levels of AI literacy.114, 158
  • The impact that AI will have on health workforce is not well understood, particularly in terms of knowledge and skills gaps, and curriculum requirements.131
  • Some staff have fears and mistrust around medicolegal implications and job losses.159, 160
  • The increased use of AI in the workplace may result in increased collection and analysis of data on workers. Data may or may not be personal, and could include information such as worker movements and digital activities or even biometric data. Workers may have concerns around data security and use of such data.161

Best bets – solutions and enablers of good practice

These best bets have been suggested by subject matter experts or governing bodies as ways to ensure workforce changes are evidence based and well managed.

Accurate projections of workforce requirements

  • Develop a thorough understanding of knowledge and skills gaps and current capability building efforts.20
  • Investigate the impact of AI development and implementation on different staff employed by health services, including:
    • data scientists
    • end-user clinicians
    • end-user administrative staff
    • positions which might increase in demand due to AI’s increased capability, e.g. geneticists.

Workforce training is an important step in facilitating AI adoption.22, 131, 162, 163 It can:

  • increase digital and AI literacy in the workforce
  • make workflows more efficient
  • ensure clinicians can identify where AI models have deviated from intended use and may behave in a way that increases the risk of liability23
  • reduce likelihood of incorrect applications of AI models.

AI and digital literacy

  • Prioritise training and retention of local expertise.38
  • Develop high levels of digital and genomic literacy in clinical staff164
    • think about ways to nurture digital literacy in a way that leaves no one behind148
    • conduct in-person live training and ensure there is access to ongoing support.148
  • Prepare future clinical staff to not only use AI in care delivery and research but also critically evaluate its applicability and limitations.19, 165
  • Ensure that workforce training in AI and digital literacy covers a range of increasing competencies including:20, 148, 166-168
    • core knowledge around computer science and information technology
    • skills in the application of AI technology, pedagogy, ethics, healthcare policy, and clinical practice
    • specialist skills and capabilities where relevant for clinical implementation, to deploy and maintain technologies, including the potential for personalised educational elements.

Culture and stakeholder buy-in62, 160, 162, 164, 169, 170

  • View the evolution of healthcare roles as something that requires active planning and shaping, rather than a passive act.
  • Create a shared vision for how professions and occupations can develop with greater use of technology.
  • Develop new team structures that tightly integrate data scientists and engineers with frontline clinical staff to foster cross-disciplinary communication and ensure that AI tools are fit for implementation in healthcare environments.31
  • Adopt a change management approach which incorporates structured AI adoption programs and is grounded in implementation science. This is particularly relevant in cases where it will replace administrative roles.

Transparency

  • whenever AI is in use in the workplace, wherever feasible161

Worker data161

  • Restrict the collection, use, inference and disclosure of staff personal information.
  • Require safeguards for staff personal information and appropriate handling of data.

Frameworks to support governance and/or solutions trialled in real world systems

These are examples of frameworks or initiatives that have been proposed to support or inform upcoming changes in workforce requirements due to the increasing role of AI in healthcare.

  • NSW Health has published a report on the impacts of technology on the health workforce.171
  • In the UK, the Topol Review: Preparing the healthcare workforce to deliver the digital future outlines recommendations for the NHS to integrate digital innovations, including AI, into workforce planning. It highlights that healthcare staff will need high levels of digital and genomics literacy.164
  • An EIT Health and McKinsey & Company report estimates what proportion of staff hours could be automated by AI and provides recommendations on investing in new talent, creating new roles and change management.162
  • A report from the American Hospital Association’s Center for Health Innovation provides useful frameworks and tools for hospital and health system leaders to successfully integrate AI technologies into their workforce and workflows. It outlines new potential roles, desirable digital skills and discusses overcoming workforce challenges.172
  • The report Building Canada’s Future AI Workforce outlines the support needed for Canada’s digital workforce to acquire AI skills through various training pathways: broad upskilling initiatives to target widely needed digital skills and strategic cross-training programs to address acute needs like those in the field of AI, across health and other sectors.173

References

  1. Möllmann NRJ, Mirbabaie M, Stieglitz S. Is it alright to use artificial intelligence in digital health? A systematic literature review on ethical considerations. Health Informatics Journal. 2021;27(4):14604582211052391. DOI: 10.1177/14604582211052391
  2. Gillon R. Medical ethics: four principles plus attention to scope. BMJ. 1994;309(6948):184. DOI: 10.1136/bmj.309.6948.184
  3. Olver IN. Ethics of artificial intelligence in supportive care in cancer. Medical Journal of Australia. 2024. DOI: 10.5694/mja2.52297
  4. Gerke S, Babic B, Evgeniou T, et al. The need for a system view to regulate artificial intelligence/machine learning-based software as medical device. npj Digital Medicine. 2020;3(1):53. DOI: 10.1038/s41746-020-0262-2
  5. Murdoch B. Privacy and artificial intelligence: challenges for protecting health information in a new era. BMC Medical Ethics. 2021;22(1):122. DOI: 10.1186/s12910-021-00687-3
  6. Wubineh BZ, Deriba FG, Woldeyohannis MM. Exploring the opportunities and challenges of implementing artificial intelligence in healthcare: A systematic literature review. Urologic Oncology: Seminars and Original Investigations. 2024;42(3):48-56. DOI: 10.1016/j.urolonc.2023.11.019
  7. Medicine and Healthcare products Regulatory Agency (MHRA). Software and Artificial Intelligence (AI) as a Medical Device. London: MHRA; 2023 [cited 22 Feb 2024]. Available from: https://www.gov.uk/government/publications/software-and-artificial-intelligence-ai-as-a-medical-device/software-and-artificial-intelligence-ai-as-a-medical-device
  8. US Food and Drug Administration (FDA). Artificial Intelligence and Machine Learning (AI/ML) Software as a Medical Device Action Plan. Washington, DC: FDA; 2021 [cited 22 Feb 2024]. Available from: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device
  9. Fraser AG, Biasin E, Bijnens B, et al. Artificial intelligence in medical device software and high-risk medical devices – a review of definitions, expert recommendations and regulatory initiatives. Expert Review of Medical Devices. 2023;20(6):467-91. DOI: 10.1080/17434440.2023.2184685
  10. Organisation for Economic Co-operation and Development (OECD). Collective action for responsible AI in health. Paris: OECD; 2024 [cited 14 Feb 2024]. Available from: https://www.oecd.org/publications/collective-action-for-responsible-ai-in-health-f2050177-en.htm
  11. Cobanaj M, Corti C, Dee EC, et al. Advancing equitable and personalized cancer care: Novel applications and priorities of artificial intelligence for fairness and inclusivity in the patient care workflow. Eur J Cancer. 2024;198:113504. DOI: 10.1016/j.ejca.2023.113504
  12. Sutherland E. Artificial intelligence in health: big opportunities, big risks. Paris: OECD; 2023 [cited 12 Feb 2024]. Available from: https://oecd.ai/en/wonk/artificial-intelligence-in-health-big-opportunities-big-risks
  13. Gilbert S, Harvey H, Melvin T, et al. Large language model AI chatbots require approval as medical devices. Nature Medicine. 2023;29(10):2396-8. DOI: 10.1038/s41591-023-02412-6
  14. Komesaroff PA, Felman ER. How to make sense of the ethical issues raised by artificial intelligence in medicine. Internal Medicine Journal. 2023;53(8):1304-5. DOI: 10.1111/imj.16180
  15. Ueda D, Kakinuma T, Fujita S, et al. Fairness of artificial intelligence in healthcare: review and recommendations. Japanese Journal of Radiology. 2024;42(1):3-15. DOI: 10.1007/s11604-023-01474-3
  16. World Health Organization (WHO). Regulatory considerations on artificial intelligence for health. Geneva: WHO; 2023.
  17. Cresswell K, Rigby M, Magrabi F, et al. The need to strengthen the evaluation of the impact of Artificial Intelligence-based decision support systems on healthcare provision. Health Policy. 2023;136:104889. DOI: 10.1016/j.healthpol.2023.104889
  18. Ministry of Health Singapore. Artificial intelligence in healthcare guidelines. Singapore Government; 2023 [cited 14 Feb 2024]. Available from: https://www.moh.gov.sg/licensing-and-regulation/artificial-intelligence-in-healthcare
  19. Rojas JC, Teran M, Umscheid CA. Clinician Trust in Artificial Intelligence: What is Known and How Trust Can Be Facilitated. Crit Care Clin. 2023;39(4):769-82. DOI: 10.1016/j.ccc.2023.02.004
  20. Australian Alliance for Artificial Intelligence in Healthcare (AAAiH). A Roadmap for AI in Healthcare for Australia. Sydney: AAAiH; 2021 [cited 10 Jan 2024]. Available from: https://aihealthalliance.org/2021/12/01/a-roadmap-for-ai-in-healthcare-for-australia/
  21. Stogiannos N, Malik R, Kumar A, et al. Black box no more: a scoping review of AI governance frameworks to guide procurement and adoption of AI in medical imaging and radiotherapy in the UK. The British Journal of Radiology. 2023;96(1152):20221157. DOI: 10.1259/bjr.20221157
  22. Fisher S, Rosella LC. Priorities for successful use of artificial intelligence by public health organizations: a literature review. BMC Public Health. 2022;22(1):2146. DOI: 10.1186/s12889-022-14422-z
  23. Hedderich DM, Weisstanner C, Van Cauter S, et al. Artificial intelligence tools in clinical neuroradiology: essential medico-legal aspects. Neuroradiology. 2023;65(7):1091-9. DOI: 10.1007/s00234-023-03152-7
  24. Dorr DA, Adams L, Embí P. Harnessing the Promise of Artificial Intelligence Responsibly. JAMA. 2023;329(16):1347-8. DOI: 10.1001/jama.2023.2771
  25. Widner K, Virmani S, Krause J, et al. Lessons learned from translating AI from development to deployment in healthcare. Nature Medicine. 2023;29(6):1304-6. DOI: 10.1038/s41591-023-02293-9
  26. Wang Y, Li N, Chen L, et al. Guidelines, Consensus Statements, and Standards for the Use of Artificial Intelligence in Medicine: Systematic Review. Journal of Medical Internet Research. 2023;25(1). DOI: 10.2196/46089
  27. Lammons W, Silkens M, Hunter J, et al. Centering Public Perceptions on Translating AI Into Clinical Practice: Patient and Public Involvement and Engagement Consultation Focus Group Study. J Med Internet Res. 2023;25:e49303. DOI: 10.2196/49303
  28. Chan A. The EU AI Act: Adoption Through a Risk Management Framework. Schaumburg, IL: ISACA; 2023 [cited 6 Mar 2024]. Available from: https://www.isaca.org/resources/news-and-trends/industry-news/2023/the-eu-ai-act-adoption-through-a-risk-management-framework
  29. Baquero J, Burkhardt R, Govindarajan A, et al. Derisking AI by design: How to build risk management into AI development. New York: McKinsey & Company; 2000 [cited 6 Mar 2024]. Available from: https://www.mckinsey.com/capabilities/quantumblack/our-insights/derisking-ai-by-design-how-to-build-risk-management-into-ai-development
  30. Goldberg CB, Adams L, Blumenthal D, et al. To do no harm — and the most good — with AI in health care. Nature Medicine. 2024;30(3):623-7. DOI: 10.1038/s41591-024-02853-7
  31. Esmaeilzadeh P. Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: A perspective for healthcare organizations. Artificial Intelligence in Medicine. 2024;151:102861. DOI: 10.1016/j.artmed.2024.102861
  32. Shelmerdine SC, Togher D, Rickaby S, et al. Artificial intelligence (AI) implementation within the National Health Service (NHS): the South West London AI Working Group experience. Clinical Radiology. 2024;79(9):665-72. DOI: 10.1016/j.crad.2024.05.018
  33. Nair M, Svedberg P, Larsson I, et al. A comprehensive overview of barriers and strategies for AI implementation in healthcare: Mixed-method design. PLOS ONE. 2024;19(8):e0305949. DOI: 10.1371/journal.pone.0305949
  34. Nuffield Trust. AI and the NHS: is it the silver bullet that will improve the health service’s productivity? London: Nuffield Trust; 2024 [cited 21 Aug 2024]. Available from: https://www.nuffieldtrust.org.uk/news-item/ai-and-the-nhs-is-it-the-silver-bullet-that-will-improve-the-health-service-s-productivity
  35. Warren BE, Bilbily A, Gichoya JW, et al. An Introductory Guide to Artificial Intelligence in Interventional Radiology: Part 2: Implementation Considerations and Harms. Canadian Association of Radiologists Journal. 2024;75(3):568-74. DOI: 10.1177/08465371241236377
  36. How to support the transition to AI-powered healthcare. Nature Medicine. 2024;30(3):609-10. DOI: 10.1038/s41591-024-02897-9
  37. Kamel Rahimi A, Pienaar O, Ghadimi M, et al. Implementing AI in Hospitals to Achieve a Learning Health System: Systematic Review of Current Enablers and Barriers. J Med Internet Res. 2024;26:e49655. DOI: 10.2196/49655
  38. Siala H, Wang Y. SHIFTing artificial intelligence to be responsible in healthcare: A systematic review. Social Science & Medicine. 2022;296:114782. DOI: 10.1016/j.socscimed.2022.114782
  39. Giddings R, Joseph A, Callender T, et al. Factors influencing clinician and patient interaction with machine learning-based risk prediction models: a systematic review. The Lancet Digital Health. 2024;6(2):e131-e44. DOI: 10.1016/S2589-7500(23)00241-8
  40. Farah L, Borget I, Martelli N, et al. Suitability of the Current Health Technology Assessment of Innovative Artificial Intelligence-Based Medical Devices: Scoping Literature Review. J Med Internet Res. 2024;26:e51514. DOI: 10.2196/51514
  41. OECD Policy Observatory. Updates to the OECD’s definition of an AI system explained. Paris: OECD; 2023 [cited 14 Feb 2024]. Available from: https://oecd.ai/en/wonk/ai-system-definition-update
  42. James Martin Institute for Public Policy (JMI). Leadership for Responsible AI: A Constructive Agenda for NSW. Sydney: JMI; 2023 [cited 06 Mar 2024]. Available from: https://jmi.org.au/wp-content/uploads/2023/12/FINAL-REVIEWED-Leadership-for-Responsible-AI-v3.pdf
  43. Organisation for Economic Co-operation and Development (OECD). OECD AI Principles. Paris: OECD; 2019 [cited 19 Feb 2024]. Available from: https://oecd.ai/en/ai-principles
  44. Organisation for Economic Co-operation and Development (OECD). The state of implementation of the OECD AI Principles four years on. 2023. DOI: 10.1787/835641c9-en
  45. Castonguay A, Wagner G, Motulsky A, et al. AI maturity in health care: An overview of 10 OECD countries. Health Policy. 2024;140:104938. DOI: 10.1016/j.healthpol.2023.104938
  46. OECD. Defining AI incidents and related terms. 2024. DOI: 10.1787/d1a8d965-en
  47. World Health Organization (WHO). Ethics and governance of artificial intelligence for health. Geneva: WHO; 2021 [cited 17 Jul 2023]. Available from: https://www.who.int/publications/i/item/9789240029200
  48. World Health Organization (WHO). Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models. Geneva: WHO; 2024 [cited 13 Mar 2024]. Available from: https://www.who.int/publications/i/item/9789240084759
  49. Department of Industry Science and Resources. Australia’s Artificial Intelligence Action Plan. Canberra: Australian Government; 2021 [cited 21 Feb 2024]. Available from: https://www.industry.gov.au/publications/australias-artificial-intelligence-action-plan
  50. van der Vegt A, Campbell V, Zuccon G. Why clinical artificial intelligence is (almost) non-existent in Australian hospitals and how to fix it. Med J Aust. 2023. DOI: 10.5694/mja2.52195
  51. Digital NSW. Artificial intelligence assurance framework. Sydney: NSW Government; 2022 [cited 23 Oct 2023]. Available from: https://www.digital.nsw.gov.au/policy/artificial-intelligence/nsw-artificial-intelligence-assurance-framework
  52. Hutson M. Rules to keep AI in check: nations carve different paths for tech regulation. London: Nature; 2023 [cited 19 Feb 2024]. Available from: https://www.nature.com/articles/d41586-023-02491-y
  53. AI Centre for Value Based Healthcare. The AI centre for value based healthcare is pioneering AI technology for the NHS. London: AI centre for value based healthcare; 2023 [cited 17 Jul 2023]. Available from: https://www.aicentre.co.uk/
  54. The NHS AI Lab. The NHS AI Lab. London: NHS; 2023 [cited 17 Jul 2023]. Available from: https://transform.england.nhs.uk/ai-lab/
  55. National Institute for Health and Care Excellence (NICE). One-stop-shop for AI and digital regulations for health and social care launched. London: NICE; 2023 [cited 19 Feb 2024]. Available from: https://www.nice.org.uk/News/Article/one-stop-shop-for-ai-and-digital-regulations-for-health-and-social-care-launched
  56. European union agency for cybersecurity (ENISA). Is Secure and Trusted AI Possible? The EU Leads the Way. Athens: ENISA; 2023 [cited 17 Jul 2023]. Available from: https://www.enisa.europa.eu/news/is-secure-and-trusted-ai-possible-the-eu-leads-the-way
  57. Meszaros J, Minari J, Huys I. The future regulation of artificial intelligence systems in healthcare services and medical research in the European Union. Frontiers in Genetics. 2022;13. DOI: 10.3389/fgene.2022.927721
  58. Andreotta AJ, Kirkham N, Rizzi M. AI, big data, and the future of consent. AI & SOCIETY. 2022;37(4):1715-28. DOI: 10.1007/s00146-021-01262-5
  59. US Food and Drug Administration (FDA). FDA Establishes New Advisory Committee on Digital Health Technologies. Silver Spring, Maryland: FDA; 2023 [cited 19 Feb 2024]. Available from: https://www.fda.gov/news-events/press-announcements/fda-establishes-new-advisory-committee-digital-health-technologies
  60. Canadian Government. The Artificial Intelligence and Data Act (AIDA) – Companion document.: Canadian Government; 2023 [cited 23 July 2023]. Available from: https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document
  61. The Canadian Institute for Advanced Research (CIFAR). Building a Learning Health System for Canadians: Report of the Artificial Intelligence for Health Task Force. Toronto, Canada: CIFAR; 2020 [cited 6 Mar 2024]. Available from: https://cifar.ca/cifarnews/2020/07/01/building-a-learning-health-system-for-canadians/
  62. Chae A, Yao MS, Sagreiya H, et al. Strategies for Implementing Machine Learning Algorithms in the Clinical Practice of Radiology. Radiology. 2024;310(1):e223170. DOI: 10.1148/radiol.223170
  63. Gray M, Samala R, Liu Q, et al. Measurement and Mitigation of Bias in Artificial Intelligence: A Narrative Literature Review for Regulatory Science. Clinical Pharmacology & Therapeutics. 2024;115(4):687-97. DOI: 10.1002/cpt.3117
  64. Physician–machine partnerships boost diagnostic accuracy, but bias persists. Nature Medicine. 2024. DOI: 10.1038/s41591-023-02733-6
  65. Chan SCC, Neves AL, Majeed A, et al. Bridging the equity gap towards inclusive artificial intelligence in healthcare diagnostics. BMJ. 2024;384:q490. DOI: 10.1136/bmj.q490
  66. Whitehead M, Carrol E, Kee F, et al. Equity in medical devices: trainers and educators play a vital role. BMJ. 2024;385:q1091. DOI: 10.1136/bmj.q1091
  67. Chen F, Wang L, Hong J, et al. Unmasking bias in artificial intelligence: a systematic review of bias detection and mitigation strategies in electronic health record-based models. J Am Med Inform Assoc. 2024;31(5):1172-83. DOI: 10.1093/jamia/ocae060
  68. Rajpurkar P, Chen E, Banerjee O, et al. AI in health and medicine. Nature Medicine. 2022;28(1):31-8. DOI: 10.1038/s41591-021-01614-0
  69. Seyyed-Kalantari L, Zhang H, McDermott MBA, et al. Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations. Nature Medicine. 2021;27(12):2176-82. DOI: 10.1038/s41591-021-01595-0
  70. Grant C. Algorithms Are Making Decisions About Health Care, Which May Only Worsen Medical Racism. USA: American Civil Liberties Union; 2022 [cited 13 Jul 2023]. Available from: https://www.aclu.org/news/privacy-technology/algorithms-in-health-care-may-worsen-medical-racism
  71. Liu X, Reigle J, Prasath VBS, et al. Artificial intelligence image-based prediction models in IBD exhibit high risk of bias: A systematic review. Computers in Biology and Medicine. 2024;171:108093. DOI: 10.1016/j.compbiomed.2024.108093
  72. Wu J, Liu X, Li M, et al. Clinical Text Datasets for Medical Artificial Intelligence and Large Language Models — A Systematic Review. NEJM AI. 2024;1(6):AIra2400012. DOI: 10.1056/AIra2400012
  73. Chouffani El Fassi S, Abdullah A, Fang Y, et al. Not all AI health tools with regulatory authorization are clinically validated. Nature Medicine. 2024. DOI: 10.1038/s41591-024-03203-3
  74. Yu F, Moehring A, Banerjee O, et al. Heterogeneity and predictors of the effects of AI assistance on radiologists. Nature Medicine. 2024;30(3):837-49. DOI: 10.1038/s41591-024-02850-w
  75. Dychiao RG, Nazer L, Mlombwa D, et al. Artificial intelligence and global health equity. BMJ. 2024;387:q2194. DOI: 10.1136/bmj.q2194
  76. Suran M, Hswen Y. How to Navigate the Pitfalls of AI Hype in Health Care. JAMA. 2024;331(4):273-6. DOI: 10.1001/jama.2023.23330
  77. Duffourc M, Gerke S. Generative AI in Health Care and Liability Risks for Physicians and Safety Concerns for Patients. JAMA. 2023;330(4):313-4. DOI: 10.1001/jama.2023.9630
  78. Wadden JJ. Defining the undefinable: the black box problem in healthcare artificial intelligence. Journal of Medical Ethics. 2022;48(10):764-8. DOI: 10.1136/medethics-2021-107529
  79. Marcus E, Teuwen J. Artificial intelligence and explanation: How, why, and when to explain black boxes. European Journal of Radiology. 2024;173. DOI: 10.1016/j.ejrad.2024.111393
  80. O’Dowd A. Sell access to NHS data to boost health innovation, say Blair and Hague. BMJ. 2024;384:q225. DOI: 10.1136/bmj.q225
  81. Unger M, Kather JN. Deep learning in cancer genomics and histopathology. Genome Medicine. 2024;16(1):44. DOI: 10.1186/s13073-024-01315-6
  82. Zhou K, Gattinger G. The Evolving Regulatory Paradigm of AI in MedTech: A Review of Perspectives and Where We Are Today. Therapeutic Innovation & Regulatory Science. 2024;58(3):456-64. DOI: 10.1007/s43441-024-00628-3
  83. Adler-Milstein J, Redelmeier DA, Wachter RM. The Limits of Clinician Vigilance as an AI Safety Bulwark. JAMA. 2024. DOI: 10.1001/jama.2024.3620
  84. Collins GS, Moons KGM, Dhiman P, et al. TRIPOD+AI statement: updated guidance for reporting clinical prediction models that use regression or machine learning methods. BMJ. 2024;385:e078378. DOI: 10.1136/bmj-2023-078378
  85. Reddy S. Navigating the AI Revolution: The Case for Precise Regulation in Health Care. J Med Internet Res. 2023;25:e49989. DOI: 10.2196/49989
  86. Aristidou A, Jena R, Topol EJ. Bridging the chasm between AI and clinical implementation. The Lancet. 2022;399(10325):620. DOI: 10.1016/S0140-6736(22)00235-5
  87. Han H, Liu X. The challenges of explainable AI in biomedical data science. BMC Bioinformatics. 2022;22(12):443. DOI: 10.1186/s12859-021-04368-1
  88. Anderer S, Hswen Y. AI Developers Should Understand the Risks of Deploying Their Clinical Tools, MIT Expert Says. JAMA. 2024. DOI: 10.1001/jama.2023.22981
  89. Kahn J. What’s wrong with “explainable A.I.”. New York: Fortune; 2022 [cited 6 Mar 2024]. Available from: https://fortune.com/2022/03/22/ai-explainable-radiology-medicine-crisis-eye-on-ai/
  90. Yang Y, Lin M, Zhao H, et al. A survey of recent methods for addressing AI fairness and bias in biomedicine. Journal of Biomedical Informatics. 2024;154:104646. DOI: 10.1016/j.jbi.2024.104646
  91. Mihan A, Pandey A, Van Spall HGC. Mitigating the risk of artificial intelligence bias in cardiovascular care. The Lancet Digital Health. 2024;6(10):e749-e54. DOI: 10.1016/S2589-7500(24)00155-9
  92. Kuehn BM. Citing Harms, Momentum Grows to Remove Race From Clinical Algorithms. JAMA. 2024;331(6):463-5. DOI: 10.1001/jama.2023.25530
  93. Ktena I, Wiles O, Albuquerque I, et al. Generative models improve fairness of medical classifiers under distribution shifts. Nature Medicine. 2024. DOI: 10.1038/s41591-024-02838-6
  94. Anderer S, Hswen Y. “Scalable Privilege”—How AI Could Turn Data From the Best Medical Systems Into Better Care for All. JAMA. 2024;331(6):459-62. DOI: 10.1001/jama.2023.21719
  95. Bedi S, Jain SS, Shah NH. Evaluating the clinical benefits of LLMs. Nature Medicine. 2024. DOI: 10.1038/s41591-024-03181-6
  96. Naik N, Hameed BMZ, Shetty DK, et al. Legal and Ethical Consideration in Artificial Intelligence in Healthcare: Who Takes Responsibility? Frontiers in Surgery. 2022;9. DOI: 10.3389/fsurg.2022.862322
  97. Akhlaghi H, Freeman S, Vari C, et al. Machine learning in clinical practice: Evaluation of an artificial intelligence tool after implementation. Emergency Medicine Australasia. 2024;36(1):118-24. DOI: 10.1111/1742-6723.14325
  98. Saenz A, Chen E, Marklund H, et al. The MAIDA initiative: establishing a framework for global medical-imaging data sharing. The Lancet Digital Health. 2024;6(1):e6-e8. DOI: 10.1016/S2589-7500(23)00222-4
  99. eHealth NSW. Data Lake. Sydney: NSW Health; 2024 [cited 7 Mar 2024]. Available from: https://www.ehealth.nsw.gov.au/solutions/data-analytics/data-management/data-lake
  100. Ganapathi S, Palmer J, Alderman JE, et al. Tackling bias in AI health datasets through the STANDING Together initiative. Nature Medicine. 2022;28(11):2232-3. DOI: 10.1038/s41591-022-01987-w
  101. Fang C, Dziedzic A, Zhang L, et al. Decentralised, collaborative, and privacy-preserving machine learning for multi-hospital data. eBioMedicine. 2024;101. DOI: 10.1016/j.ebiom.2024.105006
  102. Nature Research. Why Japan is a leader in radiological research. London: Nature; 2024 [cited 8 Apr 2024]. Available from: https://www.nature.com/articles/d42473-023-00449-2
  103. Efthimiou O, Seo M, Chalkou K, et al. Developing clinical prediction models: a step-by-step guide. BMJ. 2024;386:e078276. DOI: 10.1136/bmj-2023-078276
  104. Elhussein A, Baymuradov U, Phatnani H, et al. A framework for sharing of clinical and genetic data for precision medicine applications. Nature Medicine. 2024. DOI: 10.1038/s41591-024-03239-5
  105. Shah NH, Halamka JD, Saria S, et al. A Nationwide Network of Health AI Assurance Laboratories. JAMA. 2024;331(3):245-9. DOI: 10.1001/jama.2023.26930
  106. Anderson B. How to bridge innovation and regulation for responsible AI in healthcare. Nature Medicine. 2024;30(5):1231. DOI: 10.1038/s41591-024-02983-y
  107. University of Melbourne. Artificial Intelligence Assurance Lab. Melbourne: University of Melbourne; 2024 [cited 6 Mar 2024]. Available from: https://cis.unimelb.edu.au/ai-assurance
  108. Jones E. Explainer: What is a foundational model. London: Ada Lovelace Institute; 2023 [cited 20 Apr 2024]. Available from: https://www.adalovelaceinstitute.org/resource/foundation-models-explainer/
  109. Kim C, Gadgil SU, DeGrave AJ, et al. Transparent medical image AI via an image–text foundation model grounded in medical literature. Nature Medicine. 2024;30(4):1154-65. DOI: 10.1038/s41591-024-02887-x
  110. Medicines and Healthcare products Regulatory Agency (MHRA). Good Machine Learning Practice for Medical Device Development: Guiding Principles. London: MHRA; 2024 [cited 22 Feb 2024]. Available from: https://www.gov.uk/government/publications/good-machine-learning-practice-for-medical-device-development-guiding-principles
  111. Ong JCL, Chang SY-H, William W, et al. Ethical and regulatory challenges of large language models in medicine. The Lancet Digital Health. 2024;6(6):e428-e32. DOI: 10.1016/S2589-7500(24)00061-X
  112. Morley J, Hamilton N, Floridi L. Selling NHS patient data. BMJ. 2024;384:q420. DOI: 10.1136/bmj.q420
  113. OECD Policy Observatory. Why policymakers worldwide must prioritise security for AI. Paris: Organisation for Economic Co-operation and Development; 2023 [cited 17 Oct 2023]. Available from: https://oecd.ai/en/wonk/policymakers-prioritise-security
  114. World Health Organization (WHO). WHO calls for safe and ethical AI for health. Geneva: WHO; 2023 [cited 24 Jul 2023]. Available from: https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health
  115. Australian Signals Directorate. Engaging with Artificial Intelligence. Canberra: Australian Government; 2024 [cited 6 Mar 2024]. Available from: https://www.cyber.gov.au/resources-business-and-government/governance-and-user-education/governance/engaging-with-artificial-intelligence
  116. Khalid N, Qayyum A, Bilal M, et al. Privacy-preserving artificial intelligence in healthcare: Techniques and applications. Computers in Biology and Medicine. 2023;158:106848. DOI: 10.1016/j.compbiomed.2023.106848
  117. Office of the Victorian Information Commissioner. Artificial Intelligence – Understanding Privacy Obligations. Melbourne: Victorian Government; 2021 [cited 6 Mar 2024]. Available from: https://ovic.vic.gov.au/privacy/resources-for-organisations/artificial-intelligence-understanding-privacy-obligations/
  118. National Institute of Standards and Technology. Artificial Intelligence Risk Management Framework. Washington, DC: US Government; 2023 [cited 6 Mar 2024]. Available from: https://www.nist.gov/itl/ai-risk-management-framework
  119. The Office of the Privacy Commissioner of Canada (OPC). Principles for responsible, trustworthy and privacy-protective generative AI technologies. Quebec: OPC; 2023 [cited 6 Mar 2024]. Available from: https://priv.gc.ca/en/privacy-topics/technology/artificial-intelligence/gd_principles_ai/
  120. He X, Zheng X, Ding H. Existing Barriers Faced by and Future Design Recommendations for Direct-to-Consumer Health Care Artificial Intelligence Apps: Scoping Review. J Med Internet Res. 2023;25:e50342. DOI: 10.2196/50342
  121. Chew HSJ, Achananuparp P. Perceptions and Needs of Artificial Intelligence in Health Care to Increase Adoption: Scoping Review. J Med Internet Res. 2022;24(1):e32939. DOI: 10.2196/32939
  122. Reis M, Reis F, Kunde W. Influence of believed AI involvement on the perception of digital medical advice. Nature Medicine. 2024. DOI: 10.1038/s41591-024-03180-7
  123. Zondag AGM, Rozestraten R, Grimmelikhuijsen SG, et al. The Effect of Artificial Intelligence on Patient-Physician Trust: Cross-Sectional Vignette Study. J Med Internet Res. 2024;26:e50853. DOI: 10.2196/50853
  124. Sullivan C, Pointon K. Artificial intelligence in health care: nothing about me without me. Medical Journal of Australia. 2024. DOI: 10.5694/mja2.52282
  125. Han R, Acosta JN, Shakeri Z, et al. Randomised controlled trials evaluating artificial intelligence in clinical practice: a scoping review. The Lancet Digital Health. 2024;6(5):e367-e73. DOI: 10.1016/S2589-7500(24)00047-5
  126. Banerjee S. Involving patients in AI research to build trustworthy systems. Cambridge, UK: University of Cambridge; 2022 [cited 6 Mar 2024]. Available from: https://acceleratescience.github.io/machine-learning/2022/11/08/involving-patients-in-ai-research-to-build-trustworthy-systems.html
  127. Rogan J, Bucci S, Firth J. Health Care Professionals’ Views on the Use of Passive Sensing, AI, and Machine Learning in Mental Health Care: Systematic Review With Meta-Synthesis. JMIR Ment Health. 2024;11:e49577. DOI: 10.2196/49577
  128. Nair D, Raveendran KU. Consumer satisfaction, palliative care and artificial intelligence (AI). BMJ Supportive & Palliative Care. 2024;14(2):171-7. DOI: 10.1136/spcare-2023-004634
  129. Neri E, Aghakhanyan G, Zerunian M, et al. Explainable AI in radiology: a white paper of the Italian Society of Medical and Interventional Radiology. La radiologia medica. 2023;128(6):755-64. DOI: 10.1007/s11547-023-01634-5
  130. Editorial. Data sovereignty in genomics and medical research. Nature Machine Intelligence. 2022;4(11):905-6. DOI: 10.1038/s42256-022-00578-1
  131. Australian Alliance for Artificial Intelligence in Healthcare (AAAIH). A roadmap for artificial intelligence in healthcare for Australia. Sydney: AAAIH; 2021 [cited 24 Jul 2023]. Available from: https://aihealthalliance.org/wp-content/uploads/2021/12/AAAiH_Roadmap_1Dec2021_FINAL.pdf
  132. Banerjee S, Alsop P, Jones L, et al. Patient and public involvement to build trust in artificial intelligence: A framework, tools, and case studies. Patterns. 2022;3(6). DOI: 10.1016/j.patter.2022.100506
  133. Wrightson-Hester AR, Anderson G, Dunstan J, et al. An Artificial Therapist (Manage Your Life Online) to Support the Mental Health of Youth: Co-Design and Case Series. JMIR Hum Factors. 2023;10:e46849. DOI: 10.2196/46849
  134. Easton K, Potter S, Bec R, et al. A Virtual Agent to Support Individuals Living With Physical and Mental Comorbidities: Co-Design and Acceptability Testing. J Med Internet Res. 2019;21(5):e12996. DOI: 10.2196/12996
  135. Hudson S, Nishat F, Stinson J, et al. Perspectives of Healthcare Providers to Inform the Design of an AI-Enhanced Social Robot in the Pediatric Emergency Department. Children (Basel). 2023;10(9). DOI: 10.3390/children10091511
  136. Yu J, Shen N, Conway S, et al. A holistic approach to integrating patient, family, and lived experience voices in the development of the BrainHealth Databank: a digital learning health system to enable artificial intelligence in the clinic. Front Health Serv. 2023;3:1198195. DOI: 10.3389/frhs.2023.1198195
  137. Banerjee S. Patient and public involvement to build trust in artificial intelligence: a framework, tools and case studies. UK: GitHub; 2022 [cited 6 Mar 2024]. Available from: https://github.com/neelsoumya/outreach_ppi
  138. van de Loo B, Linn AJ, Medlock S, et al. AI-based decision support to optimize complex care for preventing medication-related falls. Nature Medicine. 2024. DOI: 10.1038/s41591-023-02780-z
  139. Hua D, Petrina N, Young N, et al. Understanding the factors influencing acceptability of AI in medical imaging domains among healthcare professionals: A scoping review. Artif Intell Med. 2024;147:102698. DOI: 10.1016/j.artmed.2023.102698
  140. MIT Technology Review Insights. The AI Effect: How artificial intelligence is making health care more human. Cambridge, MA: MIT; 2019 [cited 14 Nov 2023]. Available from: https://www.gehealthcare.co.uk/-/jssmedia/61b7b6b1adc740e58d4b86eef1bb6604.pdf
  141. Pelly M, Fatehi F, Liew D, et al. Artificial intelligence for secondary prevention of myocardial infarction: A qualitative study of patient and health professional perspectives. International Journal of Medical Informatics. 2023;173:105041. DOI: 10.1016/j.ijmedinf.2023.105041
  142. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019;6(2):94-8. DOI: 10.7861/futurehosp.6-2-94
  143. Kovoor JG, Bacchi S, Sharma P, et al. Artificial intelligence for surgical services in Australia and New Zealand: opportunities, challenges and recommendations. Medical Journal of Australia. 2024. DOI: 10.5694/mja2.52225
  144. Cheng R, Aggarwal A, Chakraborty A, et al. Implementation considerations for the adoption of artificial intelligence in the emergency department. The American Journal of Emergency Medicine. 2024;82:75-81. DOI: 10.1016/j.ajem.2024.05.020
  145. Dingel J, Kleine A-K, Cecil J, et al. Predictors of Health Care Practitioners’ Intention to Use AI-Enabled Clinical Decision Support Systems: Meta-Analysis Based on the Unified Theory of Acceptance and Use of Technology. J Med Internet Res. 2024;26:e57224. DOI: 10.2196/57224
  146. Henry KE, Kornfield R, Sridharan A, et al. Human–machine teaming is key to AI adoption: clinicians’ experiences with a deployed machine learning system. npj Digital Medicine. 2022;5(1):97. DOI: 10.1038/s41746-022-00597-7
  147. The King's Fund. Preparing staff for the digital NHS of the future. London: The King's Fund; 2021 [cited 7 Mar 2024]. Available from: https://www.kingsfund.org.uk/events/preparing-staff-digital-nhs-future
  148. Visram S, Rogers Y, Sebire NJ. Developing a conceptual framework for the early adoption of healthcare technologies in hospitals. Nature Medicine. 2024.
  149. Mandl KD, Gottlieb D, Mandel JC. Integration of AI in healthcare requires an interoperable digital data ecosystem. Nature Medicine. 2024. DOI: 10.1038/s41591-023-02783-w
  150. Tejani AS, Cook TS, Hussain M, et al. Integrating and Adopting AI in the Radiology Workflow: A Primer for Standards and Integrating the Healthcare Enterprise (IHE) Profiles. Radiology. 2024;311(3):e232653. DOI: 10.1148/radiol.232653
  151. Gim N, Wu Y, Blazes M, et al. A Clinician's Guide to Sharing Data for AI in Ophthalmology. Investigative Ophthalmology & Visual Science. 2024;65(6):21. DOI: 10.1167/iovs.65.6.21
  152. Mistry P, Maguire D, Chickwira K, et al. Interoperability is more than technology. London: King's Fund; 2022 [cited 6 Mar 2024]. Available from: https://www.kingsfund.org.uk/insight-and-analysis/reports/digital-interoperability-technology
  153. Gombolay GY, Silva A, Schrum M, et al. Effects of explainable artificial intelligence in neurology decision support. Annals of Clinical and Translational Neurology. 2024;11(5):1224-35. DOI: 10.1002/acn3.52036
  154. Rabindranath M, Naghibzadeh M, Zhao X, et al. Clinical Deployment of Machine Learning Tools in Transplant Medicine: What Does the Future Hold? Transplantation. 2024;108(8).
  155. Sodeau A, Fox A. Influence of nurses in the implementation of artificial intelligence in health care: a scoping review. Australian Health Review. 2022;46(6):736-41.
  156. Australian Medical Association. Artificial Intelligence in Healthcare. Australia: AMA; 2023 [cited 19 Dec 2023]. Available from: https://www.ama.com.au/articles/artificial-intelligence-healthcare
  157. Zicari RV, Ahmed S, Amann J, et al. Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier. Frontiers in Human Dynamics. 2021;3. DOI: 10.3389/fhumd.2021.688152
  158. European Parliament. Artificial intelligence in healthcare: Applications, risks, and ethical and societal impacts. Brussels: European Parliament; 2022 [cited 13 Jul 2023]. Available from: https://www.europarl.europa.eu/RegData/etudes/STUD/2022/729512/EPRS_STU(2022)729512_EN.pdf
  159. Ganapathi S, Duggal S. Exploring the experiences and views of doctors working with Artificial Intelligence in English healthcare; a qualitative study. PLoS One. 2023;18(3):e0282415. DOI: 10.1371/journal.pone.0282415
  160. Moulds A, Horton T. What do technology and AI mean for the future of work in health care? London: The Health Foundation; 2023 [cited 7 Mar 2024]. Available from: https://www.health.org.uk/publications/long-reads/what-do-technology-and-ai-mean-for-the-future-of-work-in-health-care
  161. Organisation for Economic Co-operation and Development (OECD). Using AI in the workplace. Paris: OECD; 2024 [cited 28 Mar 2024]. Available from: https://www.oecd.org/publications/using-ai-in-the-workplace-73d417f9-en.htm
  162. Spatharou A, Hieronimus S, Jenkins J. Transforming healthcare with AI: The impact on the workforce and organizations. New York: McKinsey & Company; 2020 [cited 17 Jul 2023]. Available from: https://www.mckinsey.com/industries/healthcare/our-insights/transforming-healthcare-with-ai
  163. Misra R, Keane PA, Hogg HDJ. How should we train clinicians for artificial intelligence in healthcare? Future Healthcare Journal. 2024;11(3):100162. DOI: 10.1016/j.fhj.2024.100162
  164. Topol E. Preparing the healthcare workforce to deliver the digital future. London: Health Education England; 2019 [cited 11 Jan 2024]. Available from: https://topol.hee.nhs.uk/
  165. O'Connor S, Vercell A, Wong D, et al. The application and use of artificial intelligence in cancer nursing: A systematic review. European Journal of Oncology Nursing. 2024;68. DOI: 10.1016/j.ejon.2024.102510
  166. Silverberg M. Preparing Radiology Trainees for AI and ChatGPT. Oak Brook, IL: Radiological Society of North America; 2023 [cited 12 Feb 2024]. Available from: https://www.rsna.org/news/2023/july/radiology-trainees-ai-and-chatgpt
  167. NHS Digital Academy. Developing healthcare workers’ confidence in artificial intelligence. London: NHS England; 2023 [cited 11 Jan 2024]. Available from: https://digital-transformation.hee.nhs.uk/building-a-digital-workforce/digital-literacy
  168. Schuur F, Rezazade Mehrizi MH, Ranschaert E. Training opportunities of artificial intelligence (AI) in radiology: a systematic review. European Radiology. 2021;31(8):6021-9. DOI: 10.1007/s00330-020-07621-y
  169. Chebrolu K, Shukla M, Varla H, et al. Health care’s quest for an enterprisewide AI strategy. United States: Deloitte Insights; 2022 [cited 17 Jul 2023]. Available from: https://www2.deloitte.com/us/en/insights/industry/health-care/ai-led-transformations-in-health-care.html
  170. Reddy S. Generative AI in healthcare: an implementation science informed translational path on application, integration and governance. Implementation Science. 2024;19(1):27. DOI: 10.1186/s13012-024-01357-9
  171. KPMG. Future of Work: Understanding the impacts of technology on the health workforce. Sydney: NSW Health; 2020 [cited 22 Feb 2024]. Available from: https://www.health.nsw.gov.au/workforce/horizons/Documents/future-of-work-healthcare-workforce.PDF
  172. American Hospital Association (AHA) Center for Health Innovation. AI and the Health Workforce. Washington, DC: AHA; 2019 [cited 6 Mar 2024]. Available from: https://www.aha.org/center/emerging-issues/market-insights/ai/ai-and-health-care-workforce
  173. Hamoni R, Lin O, Matthews M, et al. Building Canada’s Future AI Workforce: In the Brave New (Post-Pandemic) World. Ottawa: The Information and Communications Technology Council, Canada; 2021 [cited 6 Mar 2024]. Available from: https://ictc-ctic.ca/reports/building-canadas-future-ai-workfore

Living evidence tables may include links to lower-quality sources, and the original sources have not been formally assessed. Sources are monitored regularly, but because information in this field emerges rapidly, the tables may not always reflect the most current evidence. The tables are not peer reviewed, and inclusion of a source does not imply official recommendation or endorsement by NSW Health.

Last updated on 6 Nov 2024
